
White Paper

Automated Testing for Terminal-Based Applications


Executive Overview

The process and level of effort for building automated tests for terminal-based applications are similar to those for automating tests for applications running on other platforms, including MS Windows and the web.

As compared to other platforms, test developers for applications accessed through a terminal emulator need only be concerned with a limited set of application control types. The stateful nature of terminal-based apps reduces the complexity of the navigational aspects of tests, but it also offers challenges for recovery when failures occur.

To reduce test creation, data management and test maintenance times, Qualitest has partnered with Zeenyx Software and uses its flagship product, AscentialTest, in client engagements. AscentialTest supports a wide variety of terminal emulators.

For illustrative purposes, this paper uses an IBM iSeries (AS/400) application to demonstrate how to create automated tests for terminal-based applications. However, the concepts discussed are applicable to iSeries (OS/400), zSeries (z/OS) and any other platform that uses terminals.

This paper will discuss the setup, development and execution of automated tests for terminal-based applications, highlighting the similarities and differences as compared to other platforms.


Since users typically access terminal-based applications through a terminal emulator, it makes sense to test them in the same manner.

AscentialTest supports the terminal emulator provided by IBM for accessing the iSeries (AS/400). If the emulator is not already in use for manual testing, it can easily be downloaded and installed on any Windows-based computer.

AscentialTest provides an application profile for the iSeries emulator, so no additional setup steps are required.

Test Architecture

AscentialTest is an object-oriented testing tool. Regardless of the target application platform, AscentialTest captures ‘snapshots’ of application pages or screens.

Snapshots are smart images that contain application objects along with their attributes. As compared to the web, applications accessed through a terminal emulator contain a very limited set of application controls. While web-based applications contain web lists, links, dropdowns, radio lists, check boxes, push buttons and an array of tree and grid-like controls, terminal-based applications are comprised of the following control types:

  • TerminalText
  • TerminalField
  • TerminalLabel
  • TerminalCommand

Figure 1 below shows the different control types in an AscentialTest snapshot of an iSeries page.


The Elements tree to the left of the image displays all of the elements on the sample screen. If an element is highlighted in the image, it lights up in the Elements tree. AscentialTest not only recognizes that an object exists; it also recognizes the class of each object, so it can anticipate the attributes that the object will contain and how it will behave.
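The roles of the four control types can be sketched as a small class hierarchy. The Python below is purely illustrative (AscentialTest does not expose its objects this way); the `text` attribute on fields is mentioned later in this paper, while the `key` attribute on commands is an assumed example:

```python
from dataclasses import dataclass

@dataclass
class TerminalControl:
    name: str            # how tests refer to the control

@dataclass
class TerminalText(TerminalControl):
    text: str = ""       # static text painted on the screen

@dataclass
class TerminalField(TerminalControl):
    text: str = ""       # editable input area; tests set this value

@dataclass
class TerminalLabel(TerminalControl):
    text: str = ""       # caption associated with an adjacent field

@dataclass
class TerminalCommand(TerminalControl):
    key: str = ""        # function key that invokes it, e.g. "F3"
```

The point of the hierarchy is that knowing an object's class tells the tool which attributes to expect and which actions apply.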

AscentialTest communicates with the emulator through HLLAPI. The API provides a mechanism for synchronization so that AscentialTest can detect when the terminal session is ready for the next command. The user does not need to be concerned with waiting for a ‘ready state’ since the synchronization is automatic.
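HLLAPI itself is a C-style API exposed by the emulator; the automatic synchronization described above amounts to a poll-until-ready loop that the tool performs on the tester's behalf. The sketch below simulates that pattern with a stand-in session class (`FakeSession` and its timings are invented for illustration, not part of HLLAPI or AscentialTest):

```python
import time

class FakeSession:
    """Stand-in for an HLLAPI terminal session (illustrative only)."""
    def __init__(self, busy_polls=2):
        self._busy_polls = busy_polls
    def is_ready(self):
        # Reports busy for the first few polls, then ready,
        # mimicking a host that is still processing a command.
        self._busy_polls -= 1
        return self._busy_polls < 0

def wait_for_ready(session, timeout=5.0, interval=0.01):
    """Poll the session until the host signals a ready state, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if session.is_ready():
            return True
        time.sleep(interval)
    return False
```

Because the tool runs this loop internally, a test author never writes explicit waits between terminal commands.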


The approach for building automated tests for terminal-based applications is exactly the same as for other platforms. It breaks down into the following tasks:

  • Create a test plan
  • Define application objects
  • Build reusable steps
  • Build data-driven tests
  • Specify test data
  • Execute tests
  • Report results

Creating a Test Plan

The Test Plan is a natural language description of test objectives in outline form. It documents the test requirements. An outline is an efficient form of notation because it allows tests to be grouped, where test requirements are shared by multiple test cases. Where a list of test cases would require a lot of repeated language, an outline allows a requirement to be stated once and then “inherited” by the outline levels nested beneath it. A test plan describes each transaction to be tested along with necessary test conditions.

Figure 2 displays a section of the test plan template. The symbol [+] indicates that a line can be expanded to view hidden detail. In this example, if the Credit Card line in the fly-out on the right were expanded, a list of the credit cards to be tested would be exposed. A fully qualified test is described in one or several levels, depending on the richness of the test requirements for a given feature.

The full description of a Sale transaction in the example above could be read as follows: “Customer Transaction / Sales / Regular Sale / Credit Card / VISA / Approval Expected”. If the test plan were fully expanded, the lowest level branches of the tree would be reached. The terminal nodes, those with no children, contain a TestCaseID, which corresponds to one or more rows in a data table, where the test case data that fulfills the requirements of the test is stored.

Defining Application Objects

To define an object, the user selects the element from the snapshot using the mouse and drags it over to the repository, which is located on the far right of Figure 3 below.

The App Object Editor, located below the snapshot, displays the definition of the object selected in the App Object tree. Each object definition is comprised of three components:

  • Class
  • Name
  • Path

The class of an object defines its attributes and behavior. For example, objects of class TerminalField allow users to input text, and they share a text attribute that stores the value of the text they contain. The name of the object is used to refer to it in steps and tests. The path is used by AscentialTest to locate the object on the screen.
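A minimal sketch of a repository entry, assuming only the three components listed above. The `klass`/`name`/`path` field names and the sample path syntax are hypothetical, not AscentialTest's internal format:

```python
from dataclasses import dataclass

@dataclass
class AppObject:
    klass: str   # e.g. "TerminalField"; determines attributes and actions
    name: str    # how steps and tests refer to the object
    path: str    # how the tool locates the object on the screen

# Hypothetical repository entry for an input field on a sign-on screen.
env_field = AppObject(klass="TerminalField",
                      name="Environment",
                      path="/Screen[SignOn]/Field[Environment]")
```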

There are a lot of visual cues that streamline the process of capturing objects. A green checkmark indicates that an object has been successfully defined. When the user selects the object in the App Object tree, it highlights in the snapshot image and vice versa.

Building Reusable Steps

A step is a logical unit of a test or set of tests. It is comprised of actions that are executed upon application objects. The limited set of control types in a terminal-based application means that only a small number of test action types are required. Steps designed to be shared between more than one test are referred to as reusable steps.

Each step in a test project is designed for a specific purpose, and no other step repeats the actions that it contains. For example, there is only one step defined for ‘User Login’; any test that requires a ‘User Login’ will include that step. By avoiding duplication, test maintenance is minimized: a single change to the target application requires only a single change to the test project.

In AscentialTest, actions are automatically generated when the user selects an object and an action type from a list of action types that are available for the selected object. The left panel of Figure 4 displays a step in the process of being built while the right panel contains an application snapshot.

To build a step, the user selects from the list of actions below the snapshot, and the action is automatically generated. If the action type requires a data value, a data object is generated along with the action to prompt the user for data. The generated data object has built-in edit checks to ensure that the data entered is of the correct data type.

Steps may have two types of parameters: input and output. Input parameters are usually used when setting the state of an application object. Output parameters may be used for verification or as input parameters to other steps.

Parameters are not limited to simple types. Steps can input or output data objects of arbitrary complexity. If any of the input fields in the snapshot contain test data, those values will be captured along with the actions.
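In AscentialTest, steps are generated rather than hand-coded, but the input/output parameter idea can be illustrated with a plain function. The `screen` dictionary, field names and resulting screen name below are stand-ins:

```python
def user_login(screen, user, password):
    """The single 'User Login' step shared by every test that needs it."""
    # Input parameters set the state of application objects
    # (here, two hypothetical TerminalFields on a sign-on screen).
    screen["User"] = user
    screen["Password"] = password
    # Stand-in for the navigation the step performs on the real host.
    screen["current"] = "Main Menu"
    # Output parameter: the screen reached, usable for verification
    # or as an input to a later step.
    return screen["current"]
```

Because steps take parameters instead of hard-coded values, the same step body serves every test and every data row that exercises it.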

Building Data-driven Tests

Tests are built by selecting and arranging steps in an order which carries out the requirements of the test.

The test displayed in Figure 5 below contains two steps. The first step is the one displayed in Figure 4 above. It inputs data into the Environment field and clicks the Command Button. The second step verifies that the system has navigated to the correct environment.


To build a new test, the user selects steps from the Project Explorer on the left and drags them to the Test Editor. Data objects are automatically generated for all step parameters.


Specifying Test Data

Data tables are automatically generated in AscentialTest based on the input parameters of tests using the dialog displayed in Figure 6.

Data is input into the data table for each instance of the test. The RowId is used to select the row that satisfies the requirements specified in the Test Plan once the test has been associated with the test requirement by dragging the test from the Project Explorer to the Test Plan node as shown in Figures 7 and 8.
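The RowId lookup can be sketched as a simple filter over the generated data table. The column names and values below are hypothetical, invented to match the credit-card example from the test plan:

```python
# Hypothetical data table: one row per test instance, keyed by RowId.
data_table = [
    {"RowId": "TC_1001", "CardType": "VISA",       "Amount": "25.00"},
    {"RowId": "TC_1002", "CardType": "MasterCard", "Amount": "99.99"},
]

def rows_for(test_case_id, table):
    """Return every row whose RowId satisfies the test plan requirement."""
    return [row for row in table if row["RowId"] == test_case_id]
```

Running the same test once per matching row is what makes the test data-driven: the step logic never changes, only the rows fed into it.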

Executing Tests

Tests are run directly from the Test Plan. Automated test cases are designed to run unattended on a single target or on multiple targets. Suites can run for minutes, hours or days, depending on how many tests have been selected.

Once test execution is complete, results are presented within the context of the Test Plan. A summary, including the number of successful vs. failed cases along with the number of errors, is presented at the top of the report. Details are provided where errors occur, including a comparison of the actual and expected results.

Reporting Results

It is important to manage the test execution process. AscentialTest provides an overview that keeps track of the number of tests that have passed, failed or have been blocked. Figure 9 displays an example of the overview:

It is also important to keep the project team informed of testing progress. Figures 10 and 11 (below) are samples of reports that can be generated:



Special Topics


Error Recovery

Recovering from unexpected errors is one of the challenges of building test automation for terminal-based applications. Because of their stateful nature, several actions may be required to return a terminal-based application to its ‘base state’, where each test is designed to start and stop. The solution is to build a recovery step that is executed at the conclusion of each test. The step identifies the current screen and then calls its ‘Dismiss’ action. While pressing either ‘F3’ or ‘F12’ may suffice for most screens, others may require a series of actions.

In some cases it may be necessary to complete the current transaction. In any case, the recovery mechanism ensures that the target application is at a known starting state before each test begins.
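A hedged sketch of such a recovery step, assuming a per-screen table of dismiss actions. The screen names and key sequences below are invented; `send_key` stands in for whatever mechanism delivers a function key to the emulator:

```python
# Hypothetical per-screen dismiss actions: 'F3'/'F12' handle most screens,
# while some need a sequence of keys to reach the base state.
DISMISS_KEYS = {
    "Main Menu": [],                 # already at the base state
    "Order Entry": ["F12"],
    "Confirm Delete": ["F12", "F3"],
}

def recover(current_screen, send_key):
    """Return the application to its base state after a test ends.

    Unknown screens fall back to a single 'F3', the most common dismiss.
    """
    for key in DISMISS_KEYS.get(current_screen, ["F3"]):
        send_key(key)
```

Running this at the conclusion of every test is what makes each test independent of whichever screen the previous test failed on.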

Scrolling Tables

Terminal screens often include a scrolling table area where multiple pages of data are displayed. The record required for a given test may not always be found on the first page so the table row must be scrolled into view. The solution is to encapsulate a set of actions in a reusable method that checks for the target row and then scrolls the table with a ‘Page Down’ action until the target row is found.
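That check-then-page-down loop might look like this in outline. Here `table_pages` stands in for the combination of reading the visible rows off the screen and sending a ‘Page Down’ to fetch the next page:

```python
def find_row(table_pages, target, max_pages=50):
    """Scroll through a paged table until the target row is visible.

    table_pages yields the rows visible on each successive page;
    advancing the iterator models sending 'Page Down'. Returns the
    matching row, or None if the table is exhausted or the page
    limit is reached without finding it.
    """
    for _ in range(max_pages):
        try:
            page = next(table_pages)
        except StopIteration:
            return None            # no more pages to scroll through
        for row in page:
            if row == target:
                return row
    return None
```

The `max_pages` guard matters in practice: without it, a missing record would leave the test paging forever instead of failing cleanly.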


Conclusion

By combining Qualitest’s comprehensive testing services with AscentialTest’s ability to reduce test creation, data management, and test maintenance times, test automation for terminal-based applications can be developed quickly and efficiently.

Qualitest clients get testing solutions specific to their industry that are tailored to the nuances of their business, providing an excellent return on investment (ROI) while improving software quality.

About Qualitest Group

Qualitest is the world’s largest pure play software testing and QA company. Testing and QA is all that we do! We design and deliver contextualized solutions that leverage deep industry-specific understanding with technology-specific competencies and unique testing-focused assets. Qualitest delivers results by combining customer-centric business models, critical thinking and the ability to gain a profound comprehension of customers’ goals and challenges.

For more information on Qualitest services, please visit

About Zeenyx Software

Zeenyx Software Inc. was founded in 2006 by Brian Le Suer and Dave Laroche, former EVP of Research and Development and Chief Architect / Founder respectively at Segue Software. After leaving Segue, Brian and Dave worked together at Star Quality Consulting, a boutique consultancy serving Fortune 500 clients.

The goal of Zeenyx’s flagship product, AscentialTest, is to radically reduce test creation and maintenance times over previous generation automated testing products. Drawing on Dave and Brian’s experience as both product developers and consultants, AscentialTest was built around six key design criteria:

  1. Build the most powerful object recognition engine on the planet.
  2. Eliminate the need for programming skills to build robust, easy to maintain tests.
  3. Kill test frameworks – eliminate the need to build and maintain them.
  4. Simplify test maintenance so that an application change requires only a single change to its automated tests.
  5. Get rid of messy spreadsheets with keywords and test data.
  6. Make testers more productive by automating the process of building automated tests.

AscentialTest’s revolutionary step-based approach incorporates all six design criteria and has led to broad market acceptance across many different industries. AscentialTest is used by enterprise testing organizations at Fortune 500 companies, by small one- and two-person testing teams at private firms, and by multiple consulting organizations. Zeenyx Software is privately held and located in Hopkinton, Massachusetts.

For more information on AscentialTest, please visit