

Synthetic Instrumentation Aids Tactical Radio Testing – Part II

Testing military tactical radio systems requires integrated test solutions that use synthetic instrumentation. Test management software plays a critical role.


Part I of this article appeared in the May issue of COTS Journal. It explored the concept of SI, the levels of maintenance as known in military applications, and how they specifically apply to radio testing. Part II here continues along those lines, getting into test management software in detail.

Every software test program has associated costs. This applies whether developing a program to test a single UUT, addressing many different end items, or maintaining existing test programs as changes occur. When developing test applications, it is reasonable to select software architectures that optimize the investment. One such architectural decision is to incorporate a Test Executive. Test Executives separate the responsibilities of performing a test from those of sequencing and administering one. The resulting modularity increases code re-use and makes it easier to develop and maintain test programs. Here we'll examine the example of an RF military radio test to demonstrate how responsibilities are split between a Test Program and a Test Executive. That provides insight into how this is realized in software.

Consider, for example, that the goal is to test tactical military radios. An application may be tasked to perform a sequence of tests that assures the radio is ready for service, or otherwise identifies faulty subsystems and assists the operator in performing diagnosis and repair. The testing is decomposed into groups, organized around the radio subsystems, consisting of: establishing the unit is safe to turn on; applying power; performing the built-in test (BIT); verifying keyboard operations; verifying the RF transmitter; verifying the RF receiver; and finally, removing power.
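The decomposition above can be sketched as ordered data. The group and step names below are illustrative examples only, not drawn from any particular test set:

```python
# Illustrative decomposition of a radio test into ordered groups.
# All group and step names here are hypothetical examples.
RADIO_TEST_GROUPS = [
    ("Safe to Turn On", ["Check chassis ground", "Check connector continuity"]),
    ("Apply Power", ["Verify supply voltage", "Verify current draw"]),
    ("Built-In Test", ["Run BIT", "Read BIT status word"]),
    ("Keyboard", ["Verify key presses register"]),
    ("RF Transmitter", ["Measure output power", "Measure frequency error"]),
    ("RF Receiver", ["Measure sensitivity", "Measure audio SINAD"]),
    ("Remove Power", ["Verify safe shutdown"]),
]

# The sequencer will later walk every step of every group in order.
total_steps = sum(len(steps) for _, steps in RADIO_TEST_GROUPS)
```

Organizing the groups as data rather than code is one way to let a sequencer iterate them generically.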

Discrete Test Steps

Since each subsystem has many features, the groups are decomposed into discrete test steps. Each test step stimulates some part of the UUT, takes a measurement, and compares the value against some prescribed limits. Any of these steps might detect a fault. Consider also that there is a test operator, as in Figure 1, which depicts a typical test application. Such an operator might go through these steps to run the application. The operator uses a Graphical User Interface (GUI) to interact with the test application. For quality assurance purposes, the application might authenticate the user, for example, to distinguish an operator from an administrator. The operator identifies the UUT and receives instructions on how to connect the UUT to the instruments. The operator specifies what action to take in the event a UUT fault is detected. Does the test abort, retry, or proceed?

Figure 1
A typical test application where the operator identifies the UUT and receives instructions on how to connect the UUT to the instruments.

The application runs through each test step, perhaps many thousands, using the test set instruments to stimulate the UUT and measure, evaluate, and record results against prescribed limits. As the test steps proceed the application may query the operator, or direct that some manual action be taken. If faults occur, the application must notify the operator and take the directed action. The operator may have to modify the test sequence, for example, to repeat a test step or to skip test steps and target a particular subsystem for troubleshooting later. When the test sequence is completed, the application prompts the operator to disconnect the UUT. Throughout the test sequence, the results are displayed and logged.
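The per-step flow described above — stimulate, measure, compare against prescribed limits, record — can be sketched as a single function. The step name, measurement value, and limits below are hypothetical:

```python
def run_step(name, measure, lo, hi, log):
    """Execute one test step: stimulate the UUT, take a measurement,
    evaluate it against prescribed limits, and record the result."""
    value = measure()                  # stimulate the UUT and measure
    passed = lo <= value <= hi         # compare against prescribed limits
    log.append((name, value, lo, hi, "PASS" if passed else "FAIL"))
    return passed

# Hypothetical step: transmitter power expected between 4.5 and 5.5 W.
log = []
ok = run_step("TX power", lambda: 4.8, lo=4.5, hi=5.5, log=log)
```

Every step shares this shape; only the stimulus, measurement, and limits differ, which is what makes the pattern worth factoring out.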

There are serious disadvantages to allocating all of these responsibilities to a single piece of software. Large portions of the software are not unique to testing a single UUT. If it becomes necessary to test a different end item, then it would be economical to re-use the common software. Designing and programming such a test application requires different types of skills. For an applications programmer, creating user interfaces, controlling sequences, establishing access levels, and writing logs is common work. On the other hand, designing and implementing an effective test is typically the domain of a subject matter expert specializing in the device under test. Accordingly, test systems strive to implement software architectures that separate the responsibilities of configuring and executing tests from those that perform the actual tests. One way to efficiently accomplish this is with Test Executives.

Adding Test Executive Layer

Figure 2 depicts a Test Executive and a Test Program in place of the test application in Figure 1. The Test Executive provides the user interface, the user access control, sequencing, and logging functions for executing the Test Program. The Test Executive and Test Program are separated from each other and consequently the same Test Executive may support any number of different Test Programs. Implementing a Test Executive in this manner frees the test programmer to focus on the specific task of writing a test for their UUT. This involves defining the groups, implementing the test steps, and setting the pass/fail criteria for each. Note that the Test Program is also different from the instrument, which has its own software that may be synthetic or fixed as described in earlier sections.

Figure 2
The Test Executive provides the user interface, the user access control, sequencing, and logging functions for executing the Test Program

Figure 3 depicts a view into a software architecture, illustrating the features of a Test Executive by highlighting the various components and the information flow between them. The figure shows a Test Executive and a Test Program (identified as products) and the connections between them. These two software products may be identical software technologies from one manufacturer, or completely different technologies, written in different languages, created by different companies. The first key in implementing the architecture is to ensure that the connections are "late bound," which means that the Test Program is not compiled and linked into the Test Executive, but may instead be discovered at run time. The benefit of this approach, given a precise specification of the connections, is that the Test Executive can support many different implementations of the Test Program.

Figure 3
Example view of a Test Executive architecture.

Many Parts of the Puzzle

The depicted Test Executive consists of an Access Controller, a Test Program Selector, a User View, a Logger, and a Persistent Storage. The Access Controller is responsible for processing operator access requests and notifying the User View of the access level to be granted to the operator. For example, one access level may provide simply the capability to run a test sequentially while another may provide the additional capability to run a test out of sequence. The Test Program Selector is responsible for locating the available Test Programs, discovering their identification information and presenting them to the User View, which in turn presents them to the operator. Subsequently, the operator can select the desired Test Program to load.

The User View is responsible for all interaction with the test operator. Such an implementation ensures that the user experience is consistent, regardless of who authors the Test Program. The User View provides a main window where the operator can select a Test Program and enter sequencing options that determine how the sequencer behaves when a test step fails. For example, in Normal mode the sequence may exit. In Halt at No Go (HANG) mode, the sequence halts and waits for an operator decision to proceed. In Force Go mode, the sequence proceeds despite the fault.

The User View also provides methods for the Test Program to display various messages and receive operator input. These may include displaying instructions, asking for manual measurements, or displaying images. The Logger is responsible for capturing and formatting test history and presenting it to the User View for immediate feedback, and to Persistent Storage for later analysis. The Logger may also capture Test Executive events such as system initialization and operator login.
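The three failure-handling modes can be sketched as a single decision function. The mode names follow the article; the return values and operator-prompt callback are illustrative assumptions:

```python
# Mode names follow the article; everything else is an assumption.
NORMAL, HANG, FORCE_GO = "Normal", "HANG", "Force Go"

def on_step_failure(mode, ask_operator=None):
    """Decide what the sequencer does after a failed test step.
    Returns 'abort', 'halt' (await operator), or 'continue'."""
    if mode == NORMAL:
        return "abort"                 # the sequence exits
    if mode == HANG:
        # Halt and wait for an operator decision to proceed.
        if ask_operator is not None and ask_operator():
            return "continue"
        return "halt"
    if mode == FORCE_GO:
        return "continue"              # proceed despite the fault
    raise ValueError(f"unknown mode: {mode}")
```

Centralizing this decision in the executive is what lets every Test Program inherit consistent failure behavior.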

Following Standard Pattern

Test programs are generally implemented with a standard pattern. This gives a clue to the implementation of the Sequencer and the Test Program. A typical Test Program pattern contains Test Groups, and Test Groups contain Test Steps. Each Test Step stimulates or configures the UUT, performs some measurement, and then evaluates the measurement against some parameters unique to that step. Accordingly, three software classes emerge: a TestProgram class, a TestGroup class, and a TestStep class. The classes are depicted in Figure 4.

Figure 4
Test Executive and Test Program separation with the three classes depicted.

The three classes are related by composition such that a TestProgram class consists of TestGroup classes and each TestGroup class consists of TestStep classes. The depicted TestProgram class has a name and a description property (TestProgramName and TestProgramDescription) so that users of the class can read and display them. The TestProgram class also exposes a method which returns an array (or list) of the instanced TestGroup objects. The TestGroup objects are constructed and the TestGroup[] list is populated when the TestProgram object is constructed.

The TestGroup class is similar. It has a name and a description property and it also exposes a method which returns an array (or list) of the instanced TestStep objects. The TestStep objects are constructed and the TestStep[] list is populated when the TestGroup object is constructed. The TestStep class exposes a method which causes the step to be executed and, in turn, returns a TestResult object. The implementation of the Run() method is a function of any particular TestStep instance, but each instance performs stimulus and measurement actions resulting in a parameter able to be evaluated. The TestResult object (not depicted) contains the test identification, test evaluation criteria, and the measurement. Given these three classes, and a Test Executive/Test Program interface that provides a form of object visibility, a sequencer can be developed, and the Test Program executed under sequencer control.

Full Featured Sequencer

This basic logic, and the properties and methods of the Test Program classes, can be extended to implement a full-featured sequencer. Capability is needed to address the operator options. These include halting when faults are detected, branching into diagnostic test steps when faults are detected, running groups and test steps out of order, and single-stepping test steps. This is, of course, a simplification. The Run() method, as implemented here, will block as the test step executes, and must be handled accordingly.

Often test sequence execution requires recursive calls to traverse a test flow that has multiple groups, child groups, and test steps, but the idea is to traverse through the complete list of test steps. Test Groups and Test Steps will often have properties such as 'stop-on-fail' flags, loop counts, descriptions, and other items that will enable a full-featured sequencer to be designed. The intent here is to identify some of the different responsibilities of Test Executives and Test Programs, show one of the many techniques for separating them, and to demonstrate that the composition of a Test Program can be leveraged in developing a Test Executive sequencer.
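The recursive traversal described above can be sketched in a few lines. Here a test flow is modeled as nested lists (groups and child groups) whose leaves are callable steps; the stop-on-fail flag and the True/False step results are illustrative assumptions:

```python
def run_node(node, results, stop_on_fail=False):
    """Recursively traverse a test flow. Groups (and child groups) are
    lists; test steps are callables returning True (pass) or False (fail).
    Returns False if traversal stopped early on a failure."""
    if callable(node):                   # a test step: run and record it
        passed = node()
        results.append(passed)
        return passed or not stop_on_fail
    for child in node:                   # a group or child group: recurse
        if not run_node(child, results, stop_on_fail):
            return False
    return True

# Hypothetical flow with one child group: [step, [step, step], step].
flow = [lambda: True, [lambda: True, lambda: False], lambda: True]
halted = []
run_node(flow, halted, stop_on_fail=True)    # stops at the failing step
complete = []
run_node(flow, complete, stop_on_fail=False) # visits every step
```

With stop-on-fail set, the final step is never reached; without it, all four steps execute.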

Test Management Software

Software packages are available which provide a Test Executive and the development tools useful for creating Test Programs that run under the executive. Similar to Software Development Environments used in general software development, these test management software suites are rich in features. They include compilers and debuggers. The compilers may address specific test languages such as the Abbreviated Test Language for All Systems (ATLAS), may support common programming languages and technologies such as C#, VBScript, and .NET, or may even use graphical methods that create the content of test steps visually, directly from the instrument front panels. All such packages include debuggers which support breakpoints, single step program execution and variable examination.

This article explored the requirements placed on test equipment by the three levels of maintenance commonly found in radio test. It identified the instrumentation necessary to test military radios. It explained the concept of SI and described how SI can support development of radio test equipment that satisfies the requirements of all three maintenance levels. Test Executive software plays an important role in enabling radio test equipment to test multiple UUTs. The article further explained the role of Test Executives by exploring their software architecture, defining Test Program Management software, and identifying some proven Test Program Management suites.

Astronics Test Systems
Irvine, CA
(949) 859-8999