
Test Techniques

This guideline briefly describes test techniques used at the different test levels. Each project defines the test levels and when to use the various test techniques.

The purpose of this document is not to describe all test techniques but to describe the techniques that have been decided to be appropriate for use in development projects at ABB.

Intended for

Software engineers and test engineers who design test cases at different levels to verify functionality and requirements.

The goal

Testing aims to detect as many faults as possible and to demonstrate that the product works according to the specification. Since testing all different inputs (exhaustive testing) is impossible, test strategies and techniques are used to make the testing efficient. This document describes some commonly used techniques.

Regardless of the test techniques used, both positive and negative tests are needed. Positive tests demonstrate the system's normal/specified behavior. Negative tests run the system with different input conditions to check whether it can handle unlikely or invalid conditions. Black-box and white-box techniques are used to design test cases efficiently, covering positive, negative, and unspecified conditions. Black-box techniques are used at all test levels, while white-box testing is mainly conducted at the lower levels.
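
The distinction between positive and negative tests can be sketched as follows. The function `parse_age` and its range limits are hypothetical, invented only for this illustration:

```python
# Hypothetical example: positive and negative tests for a simple
# input-parsing function (not taken from any real product).

def parse_age(text):
    """Parse an age value from a string; reject out-of-range values."""
    value = int(text)          # raises ValueError for non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

# Positive test: normal, specified behavior.
assert parse_age("42") == 42

# Negative tests: unlikely or invalid input conditions.
for bad_input in ["-1", "151", "abc", ""]:
    try:
        parse_age(bad_input)
    except ValueError:
        pass                   # expected: the invalid input is rejected
    else:
        raise AssertionError(f"no error for {bad_input!r}")

print("positive and negative tests passed")
```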

Black-box testing

Black-box testing is a test strategy that investigates a program's input-output conditions without considering its internal structure. The purpose is to test against a specification (requirements and design), and no insight into or knowledge about the implementation is needed. Hence, the tester does not examine the programming code and does not need any further knowledge of the program other than its specifications. Since only the specification is needed, test cases can be designed as soon as the specifications are complete.

The black-box techniques achieve good coverage of the specified functionality, since all normal execution paths are covered. However, error sequences, recovery situations, and timing/performance issues are challenging to catch with these techniques and should be tested with other techniques. The black-box techniques described in this section are used to detect functional faults.

Equivalence class partitioning (testing)

An equivalence class is a data set for which the software’s behavior is assumed to be the same. Thus, the result of testing a single value from an equivalence partition is considered representative of the complete partition. This technique reduces the number of different values for which test cases should be created.

When testing a function (method in a class), equivalence class testing uses the associated parameters (attributes for a method in a class). Each parameter is analyzed, and all equivalence classes are determined.
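
A minimal sketch of the technique, assuming a hypothetical function that validates a percentage in the range 0..100:

```python
# Sketch of equivalence class partitioning. Each equivalence class is
# assumed to behave uniformly, so one representative value per class
# is enough instead of testing every possible input.

def is_valid_percentage(value):
    """Hypothetical function under test."""
    return 0 <= value <= 100

equivalence_classes = {
    "valid":       50,      # represents any value in 0..100
    "below_range": -5,      # represents any value < 0
    "above_range": 120,     # represents any value > 100
}
expected = {"valid": True, "below_range": False, "above_range": False}

# One test case per class instead of one per individual value.
for cls, representative in equivalence_classes.items():
    assert is_valid_percentage(representative) == expected[cls], cls

print("all equivalence classes tested")
```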

Boundary value analysis

Boundary value analysis can be considered a special case of equivalence class partitioning, since it concentrates the selection of test cases on and around the partition limits, recognizing that many faults are found when testing on and around limits. Typical boundary value tests include zero as input, the maximum integer value, counter overflow, and the cases when something becomes full (e.g., the heap) or empty.
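
For the same hypothetical 0..100 percentage check used above, boundary value analysis selects the values on and immediately around each limit:

```python
# Boundary value analysis: test on and around each partition limit.

def is_valid_percentage(value):
    """Hypothetical function under test."""
    return 0 <= value <= 100

LOW, HIGH = 0, 100
boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

results = {v: is_valid_percentage(v) for v in boundary_values}
assert results == {-1: False, 0: True, 1: True,
                   99: True, 100: True, 101: False}
print("boundary values behave as specified")
```

An off-by-one fault such as `0 < value <= 100` would pass a mid-range equivalence class test but fail on the boundary value 0.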

Interface testing

Testing is conducted to evaluate whether systems or components pass data and control correctly to one another. Faults found during interface testing can be:

  • Interface misuse: A calling component calls another component and makes an error when using its interface, e.g., parameters are in the wrong order.
  • Interface misunderstanding: A calling component embeds incorrect assumptions about the called component's behavior.
  • Timing errors: The calling and called components operate at different speeds, and out-of-date information is accessed.

Interface testing is often conducted using equivalence class partitioning or boundary value analysis.
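
A sketch of how interface misuse can be caught with a recording stub; the component names (`transfer`, `pay_salary`) are hypothetical:

```python
# Interface test sketch: a stub replaces the called component and
# records how it was invoked, so the test can check that data and
# control were passed correctly.

calls = []

def transfer(account_id, amount):
    """Stub for the called component; records its arguments."""
    calls.append((account_id, amount))

def pay_salary(employee_account, salary):
    """Calling component under test."""
    transfer(employee_account, salary)   # correct argument order

pay_salary("ACC-17", 3500)

# Swapped parameters (interface misuse) would fail this assertion.
assert calls == [("ACC-17", 3500)]
print("interface used correctly")
```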

Dynamic testing of sequence diagrams

A sequence diagram describes how groups of objects collaborate to accomplish some system behavior. This collaboration is implemented as a series of messages between objects. Typically, a sequence diagram describes the detailed implementation of a single use case (or one variation of a single use case). Sequence diagrams are not useful for showing the behavior within an object. Testing sequence diagrams is often a part of functional testing.

Sequence diagrams are tested by stimulating the program with the start value of the sequence diagram and then checking whether the program's correct flow is followed. The sequence diagram defines input, correct flow, and expected output (end state). The code is either stepped through manually at the design test level, run through automatically with printouts for each passed operation, or automatically checked in the test program.
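
The automatic-check variant can be sketched as follows: each object logs the messages it receives, and the test compares the recorded flow against the flow prescribed by the diagram. The object and message names are hypothetical:

```python
# Dynamic test of a sequence diagram: stimulate the program with the
# diagram's start value, record the message flow, and check it against
# the expected flow (the diagram's defined sequence).

trace = []

class Order:
    def place(self, stock):
        trace.append("Order.place")
        stock.reserve()

class Stock:
    def reserve(self):
        trace.append("Stock.reserve")
        Billing().invoice()

class Billing:
    def invoice(self):
        trace.append("Billing.invoice")

# Stimulate the program with the start value of the sequence diagram...
Order().place(Stock())

# ...and check that the correct flow was followed.
expected_flow = ["Order.place", "Stock.reserve", "Billing.invoice"]
assert trace == expected_flow
print("sequence diagram flow verified")
```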

State-transition diagrams

State-transition diagrams describe all of the states that an object can have, the events under which an object changes states (transitions), the conditions that must be fulfilled before the transition will occur (guards), and the activities undertaken during the life of an object (actions). State-transition diagrams are useful for describing the behavior of individual objects over the complete set of use cases that affect those objects. State-transition diagrams are not useful for describing the collaboration between objects that cause the transitions.

It is important to test the transitions rather than the states described in the diagram to understand the functionality fully. The different states are the expected output of the test cases for the transitions. Either the test program automatically checks for the different transitions and states, or printouts should be added to the test code to show that they have been tested.
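
A sketch of transition-oriented testing for a hypothetical door object, where the resulting states are the expected outputs:

```python
# State-transition test sketch: exercise the transitions (including an
# event in a state where the guard blocks it) and check the states.

class Door:
    def __init__(self):
        self.state = "closed"

    def open(self):
        if self.state == "closed":   # guard: only a closed door opens
            self.state = "open"

    def close(self):
        if self.state == "open":
            self.state = "closed"

door = Door()
assert door.state == "closed"        # initial state

door.open()                          # transition: closed -> open
assert door.state == "open"

door.close()                         # transition: open -> closed
assert door.state == "closed"

door.close()                         # event in wrong state: no transition
assert door.state == "closed"
print("all transitions tested")
```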

Error guessing

The purpose of error guessing is to use a tester's experience and intuition to detect faults. The idea is to make educated guesses about which areas are most error-prone and which types of faults are likely to have been introduced there. No fixed procedure for error guessing can be given; the best way to explain the concept is with an example:

When testing a sorting subroutine, the following are situations to explore:

  • The input list is empty.
  • The input list contains one entry.
  • All entries in the input list have the same value.
  • The input list is already sorted.

Thus, using a tester's experience, special cases that might have been overlooked are enumerated. Then, the test case design is carried out to expose the possible faults.

Working with error guessing, earlier experiences from the system, and knowledge about commonly made faults are valuable inputs.
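
The special cases listed above, written as tests (Python's built-in `sorted` stands in for the subroutine under test):

```python
# Error-guessing test cases for a sorting subroutine.

def sort(items):
    """Stand-in for the sorting subroutine under test."""
    return sorted(items)

assert sort([]) == []                     # the input list is empty
assert sort([7]) == [7]                   # the input list has one entry
assert sort([3, 3, 3]) == [3, 3, 3]       # all entries have the same value
assert sort([1, 2, 3]) == [1, 2, 3]       # the input list is already sorted
print("error-guessing cases passed")
```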

White-box testing

White-box testing (also known as structure-based testing) is a test strategy that investigates a program's internal structure. The purpose is to design test cases that verify that all the code has been executed and is correct. Hence, white-box testing does not guarantee that the complete specification has been implemented. A coverage criterion needs to be defined to conduct white-box testing. The coverage criterion defines what "all code has been executed" means.

Workflow

There are two main ways to work with white-box methods and coverage analysis. If test cases have already been derived, these are executed and the coverage they achieve on the code is checked. If full coverage is not reached, additional test cases are designed until the chosen criterion is met. The test execution with coverage analysis is preferably done with a tool that instruments (pre-processes) the code.

Coverage methods

A coverage measure must be defined to determine whether full coverage has been achieved. There are several different coverage measures. In this document, only the most commonly used are described. The following program example is used for all coverage measures:

if A > 10 and B > 10 then
    D := 20;
end;

Statement coverage is defined as every statement in a program being executed at least once. This is the weakest coverage requirement, and it does not consider, for example, both outcomes of simple if statements, logical operators, or loop termination decisions.

An example of a test case for the program is:
a) A=11, B=11 => all statements have been executed

Branch (decision) coverage is defined as every decision in a program having taken all possible outcomes at least once. A decision is a boolean expression that controls the flow of the program, for example, in an if or while statement. In this case, the program should be exercised with test cases that cause the if-statement to be both true and false.

Examples of test cases for the program are:

a) A=11, B=11 => if-statement is true
b) A=0, B=11 => if-statement is false

Hence, the decision in the if-statement has taken both outcomes.

Condition coverage is defined as every condition in a program having been both true and false at least once. A condition is an atomic boolean expression (an operand of a logical operator) within a decision, such as in an if or while statement.

Examples of test cases for the program are:

a) A=11, B=0 => Condition (A>10) is true and (B>10) is false
b) A=0, B=11 => Condition (A>10) is false and (B>10) is true

Hence, the program has been exercised with test cases that cause the conditions in the if-statement to be both true and false.

Condition/Decision coverage is defined as every condition in a decision, and every decision in the program, having taken all possible outcomes at least once.

Examples of test cases for the example program are:

a) A=11, B=11 => Condition (A>10) is true, (B>10) is true, if-statement is true
b) A=0, B=0 => Condition (A>10) is false, (B>10) is false, if-statement is false

Hence, the program has been exercised with test cases that cause the conditions in the if statement to be both true and false, and the program has made both decisions in the if statement.

Modified condition/decision coverage (MCDC) is defined as every condition in a decision having taken all possible outcomes at least once, and each condition having been shown to affect the decision outcome independently. A condition is shown to affect a decision's outcome independently by varying just that condition while holding all other conditions fixed.

Examples of test cases are:

a) A=11, B=11 => if-statement is true
b) A=0, B=11 => if-statement is false, condition (A>10) affects the outcome
c) A=11, B=0 => if-statement is false, condition (B>10) affects the outcome

Hence, the program has made both decisions in the if statement by independently varying the conditions.
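
The example program and the three MCDC test cases above, transcribed to Python for an executable check:

```python
# The example program from the coverage section, transcribed to Python.

def example(A, B):
    D = 0
    if A > 10 and B > 10:
        D = 20
    return D

# a) both conditions true: the decision is true
assert example(11, 11) == 20
# b) varying only A from (a) shows (A>10) independently affects the decision
assert example(0, 11) == 0
# c) varying only B from (a) shows (B>10) independently affects the decision
assert example(11, 0) == 0
print("MCDC test set passed")
```

Note that three test cases suffice here: pairs (a, b) and (a, c) each differ in exactly one condition while the decision outcome changes.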

Owner: Software Development Team