System Test Overview
The system test is a crucial step in the development process as it integrates and tests deliverables from the various development streams.
The system test defines use cases for testing functionality across products and ensures that final verification and validation of the complete system are performed before the system is released to the market. It is performed to ensure quality and to confirm that the system meets users' needs.
Intended for
Test leads, test engineers, product owners, product managers, and release owners.
System requirements, deliverables, and releases
Product portfolio management (PPM) delivers the necessary system requirements, which are "cloned" to system epics in Azure DevOps (ADO). The system stream works alongside the development streams to break down the system epics into epics, which the development streams then implement. The resulting software and hardware deliverables are returned to the system stream, where they are integrated and tested in different system configurations.
The system test engineers use the information available in the system epics to create typical use cases. Based on these use cases, they develop the test cases, test applications, and test environments needed to execute the tests. Any issues found during testing are reported as bugs for further investigation and correction.
When is a system test required?
A system test is required to release a new or changed system. Changes can be new or updated functionality or only error corrections.
There are some exceptions where a system test is not required for a product release. These are:
- Small corrections of a product where tests in a system configuration are not needed.
- Changes that do not affect performance or capacity within or between the products.
- New limited functionality that will not affect the remaining system.
A documented decision with motivation is required to skip the system tests. The motivation is based on an analysis of the dependencies between products within the system and the impact of the products' changes. The analysis is included in the "Test Strategy and Plan" document for the product, reviewed by system test engineers, and accepted by the system test lead, product manager, release owner, product owner, and head of quality.
System test stages
The system test process supports agile development and is aligned with increments and iterations. After passing the product tests, deliverables from the development streams can be integrated into system test environments. System test cases can be iteratively developed depending on the added functionality in the deliverables.
Each stage has entry and exit criteria (see System Test Entry-Exit Criteria). In system integration and test (SIT), the entry and exit criteria are applied to each integration until all delivered products are in beta candidate status. The system type test (STT) is performed when all products are in beta status, and the release acceptance test (RAT) is performed when all products are in release candidate status.
At SIT and STT stages, the following tests must be considered and/or repeated:
- Installation tests: Test the installation of software on hardware.
- Integration tests: Verify interfaces and interactions between products in a system.
- Smoke tests: Ensure essential functions and applications are working as expected (see the sketch after this list).
- System tests: Progressive tests on new functionality in iterations and regression tests of unchanged functionality from earlier iterations, based on the system epic acceptance criteria.
- Migration tests: Ensure the system can be upgraded from previous versions.
- Disturbance tests: Test the stability of the system under disturbances.
- Performance tests: Measure response times, resource utilization, etc.
- Capacity tests: Test the capacity limits of the system.
- Endurance tests: Ensure the system remains stable over a longer time (performance degradation, memory leaks, etc.).
- Load tests: Evaluate how an application performs under heavy load.
- Stress tests: Push the application to extreme limits to determine how it handles unexpected or extreme conditions.
- Compatibility tests: Verify that an application works across multiple operating systems, browsers, and devices.
- Security tests: Detect cybersecurity issues in the system.
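To make the smoke-test idea concrete, the sketch below shows a minimal automated smoke check in Python (pytest). All host names and ports are hypothetical placeholders, not actual product endpoints.

```python
# Minimal smoke-test sketch (pytest). Host names and ports are hypothetical
# placeholders for whatever essential services the system configuration runs.
import socket

import pytest

ESSENTIAL_SERVICES = [
    ("opc-server.test.local", 4840),
    ("history-server.test.local", 443),
]

@pytest.mark.parametrize("host,port", ESSENTIAL_SERVICES)
def test_essential_service_reachable(host, port):
    """A smoke check: each essential service must accept TCP connections."""
    with socket.create_connection((host, port), timeout=5):
        pass  # a successful connection is enough for a smoke-level check
```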
System integration and test
SIT iteratively builds up the system use cases, test cases, test environments, and test applications to verify the system epics. It is also intended to give early feedback to the development streams on system and product-related issues.
SIT verifies the interactions between the various products in a system. It covers product integration, core system-level functionality checks (smoke tests), regression tests of implemented functionality, progressive tests of new system functionality, and verification of the interfaces between the products.
The intermediate deliverables (alphas) from the development streams are analyzed to ensure functions across products can be integrated and tested in a system. The system team works with the system architects, product managers, and product owners to learn about the system requirements and identify the system use cases to be tested. The test cases, test environment, and test applications are iteratively built up during the SIT stage.
All test cases developed in one SIT iteration will become part of regression tests in the next iteration. Therefore, it is recommended to automate system tests whenever possible. With manual tests, a selection of the available test cases should be run in the regression test suite.
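As an illustration of selecting a manual regression subset, the sketch below assumes each test case is tagged with the products it covers; the TestCase structure and all names are illustrative, not an ADO schema.

```python
# Sketch: choose the regression suite for an iteration. Automated cases
# always run; manual cases are selected only when they touch a changed
# product. The data model is illustrative, not an ADO schema.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: int
    title: str
    covered_products: set = field(default_factory=set)
    automated: bool = False

def select_regression_suite(all_cases, changed_products):
    changed = set(changed_products)
    automated = [c for c in all_cases if c.automated]
    manual = [c for c in all_cases
              if not c.automated and c.covered_products & changed]
    return automated + manual

# Example: only manual cases covering the changed "controller" product run.
cases = [
    TestCase(1, "alarm flood", {"controller"}, automated=True),
    TestCase(2, "batch recipe", {"batch"}),
    TestCase(3, "controller failover", {"controller"}),
]
assert [c.case_id for c in select_regression_suite(cases, ["controller"])] == [1, 3]
```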
SIT continues until the agreed-upon scope of system epics is available and all products are in beta status with the expected quality goals fulfilled. The last SIT iteration should be done with all products in beta candidate status, and any serious issue must be logged in the "frequency of crashes and serious problems" list. By then, the test cases, test environment, and test applications should be ready for a final round (i.e., the STT stage).
System capacity (i.e., non-functional requirements), such as alarms and subscriptions in large configurations, is tested regularly in SIT iterations. These tests are mainly automated and stress the system to find stability issues early. Capacity tests are started in this phase with the aim of increasing system size and capacity to the levels described in the system requirements and the test environment strategy; at least one full round of test coverage should be achieved.
80% of the regression test cases must be performed successfully before starting STT. If the system release owner has a good reason, the regression test coverage can be reduced below 80%. If the quality of earlier versions delivered to the system test is good, the SIT can be reduced, e.g., for startup and disturbance tests.
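The 80% rule can be expressed as a simple gate; a minimal sketch, where the threshold parameter models the release owner's documented exception:

```python
# Sketch of the regression gate for starting STT: at least 80% of the
# regression test cases must have been performed successfully. A lower
# threshold models the release owner's documented exception.
def regression_gate(passed: int, total: int, threshold: float = 0.80) -> bool:
    """Return True if the regression pass rate allows STT to start."""
    if total == 0:
        return False
    return passed / total >= threshold

# Example: 412 of 500 cases passed -> 82.4%, so the gate opens.
assert regression_gate(412, 500)
assert not regression_gate(390, 500)  # 78% fails the default gate
```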
The goal is to achieve the required quality in SIT so that the STT needs to run only once. Therefore, product managers, product owners, and quality control managers must ensure the products have reached the required quality levels; only minor adjustments from the development streams are expected in the STT stage.
Bugs found in SIT tests are reported to the development streams for correction. If a bug found in SIT is difficult to reproduce in a product test environment, the SW/HW engineers can be invited to trace the fault in the system test environment. Bug corrections from one SIT iteration are verified in the following SIT iteration (i.e., the subsequent integration).
At the "Start of STT" meeting (entry criteria), it is checked if any test cases executed in SIT can be reused as part of STT. In that case, the following must be fulfilled:
- For bugs found in SIT, system test engineers and the CCB must provide an impact analysis in ADO, including regression tests. The identified regression tests must be performed and documented.
- Updated test cases must be reviewed and approved before they are used in tests.
System type test
STT is the final run of the tests implemented in SIT, with all products in the beta state. The checkpoints in the "Start of STT" checklist (entry criteria) must be fulfilled before any tests are allowed to start.
The test cases, test environments, and test applications are completed and ready for the final run. The list of "frequency of crashes and serious problems" has been evaluated and included in the decision to start STT.
From now on, any scope changes must be submitted as a change request, with an impact analysis and approval by the system release owner and head of quality.
Bug corrections during STT may require regression tests to be re-executed. A technical review of the bug is performed to clearly describe which tests need to be re-executed for the new system beta.
In STT, the system's actual behavior shall be recorded in Azure DevOps, and deviations from expected results shall be logged as bugs.
A final system build is prepared for the RAT when all test cases are successfully executed and the expected quality levels are reached.
Release acceptance test
During RAT, a subset of tests is re-executed based on the influence of late changes in the builds. The purpose is to check if the core system is intact and not influenced by late changes in software.
Before starting RAT, the "Start of RAT" checklist (entry criteria) must be fulfilled.
The system environment from the STT stage is already available and correctly configured and can be reused in RAT. No software or hardware changes (patches, etc.) should happen during this stage.
In the system environment dedicated to endurance tests, the system runs continuously for a predefined duration at normal load with injected disturbances.
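The endurance run can be pictured as a driver loop like the sketch below; the duration, disturbance interval, and the load and disturbance actions are all placeholders for system-specific values.

```python
# Sketch of an endurance-test driver: hold normal load for a predefined
# duration and inject a disturbance at a fixed interval. The duration,
# interval, and action bodies are placeholders, not prescribed values.
import time

DURATION_S = 72 * 3600        # assumed predefined duration: 72 hours
DISTURBANCE_EVERY_S = 3600    # assumed interval: one disturbance per hour

def apply_normal_load():
    """Placeholder: drive the system at its normal production load."""

def inject_disturbance():
    """Placeholder: e.g., fail over a redundant node or drop a network link."""

def run_endurance(duration_s: int = DURATION_S) -> None:
    start = time.monotonic()
    next_disturbance = start + DISTURBANCE_EVERY_S
    while time.monotonic() - start < duration_s:
        apply_normal_load()
        if time.monotonic() >= next_disturbance:
            inject_disturbance()
            next_disturbance += DISTURBANCE_EVERY_S
        time.sleep(1)  # pacing; a real driver would also sample system metrics
```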
The testers shall use the reviewed user manuals for test activities in this phase.
If a severe problem is found during RAT and requires correction, a decision meeting with the head of quality is required to decide whether the RAT should restart or continue.
During RAT, the following tests are mandatory:
- Greenfield installation and configuration.
- Smoke tests.
- Import and export.
- Backup and restore.
- Endurance tests.
Test coordination
The system projects verify system functionality, capacity, usability, performance, and other system-related use cases. All the related system epics must also be verified.
The system test team can receive requests from the development streams to verify epics and features if a specific system test environment is needed to perform the tests. However, this is support for the development streams; the results are owned by the corresponding development stream.
System tests also include the use of customer applications. The system test teams often work closely with the ABB business units and sometimes with the end customer. Commissioning engineers can be hired to participate in system tests to provide a customer perspective and hands-on experience.
The system test center is responsible for all system tests and has a coordination team responsible for overall test planning and prioritization across the system releases. The team decides what is to be tested, when, and at what scope, depending on test environment availability and team capacity. An overall plan for all system test activities in the system test lab references the individual test plans of the system releases.
The system test must verify new system functions and perform regression tests on existing functionality. New system functions can interfere negatively with existing functions, and the development streams may also rewrite existing functions to improve stability and performance without adding any new functionality.
System test environment strategy
The system test environment strategy defines the available system configurations and future needs. The common test environment strategy describes all available test configurations. The test configurations can have different hardware setups and sizes, different test applications, and be intended for different product lines (800xA, S+, Freelance, etc.). For example, if a configuration has many field buses, it can be suitable for testing batch and information management (IM) applications with libraries from petrochemical business units.
The strategy ensures all types of system environments are available to run tests from a customer's point of view. Examples are application engineering, system maintenance (upgrades, backup, restore, fault-tracing), runtime characteristics, and system usability.
Some system configurations are used to measure and verify system capacity, reliability, and performance.
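As a sketch of how the strategy's configurations could be recorded in machine-readable form (all names, sizes, and attributes below are illustrative, not the actual catalog):

```python
# Sketch: a machine-readable registry of test configurations from the
# environment strategy. Names, sizes, and applications are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class TestConfiguration:
    name: str
    product_line: str              # e.g., "800xA", "S+", "Freelance"
    node_count: int
    fieldbus_count: int
    applications: tuple

CONFIGURATIONS = (
    TestConfiguration("petrochem-large", "800xA", 40, 12,
                      ("batch", "information-management")),
    TestConfiguration("compact-single", "Freelance", 4, 1, ("smoke",)),
)

def suitable_for(application: str):
    """Find configurations whose applications cover a given test application."""
    return [c for c in CONFIGURATIONS if application in c.applications]

# Example: the fieldbus-heavy configuration suits batch/IM testing.
assert suitable_for("batch")[0].name == "petrochem-large"
```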
The test team must be involved early in the planning of upcoming system releases to ensure there is time to acquire, configure, and build applications ready to perform tests in SIT and STT. The system test environment strategy can be part of the system test strategy and plan document.
System test planning and execution
System test planning and execution are aligned with the synchronized program increments (SPIs), and program increment (PI) planning is performed with the teams in the system streams.
The milestone checklist ensures the planning considers all artifacts to be ready at release time (test strategy and plan, test result, DSAC, etc.). The milestone checklist also contains the necessary checkpoints/meetings to be planned ("Start of STT", "Start of RAT", "IVA", etc.).
Planning preparations
Some preparations are needed before the SIT for a release can start. The preparations can begin as soon as a decision is made to plan the release. The result from the preparations is assessed at M2 using the milestone checklist, where the targeted scope of the release is decided.
Typical tasks:
- Planning with Product Portfolio Management: Participate in planning sessions with PPM, where the system scope, goals, sales restrictions, etc., are defined. Ensure all requirements have a clear description and well-defined acceptance criteria.
- Update the test strategy and plan document: The test strategy and plan document covers the overall planning of system releases, affected test configurations, and test teams. The system test leads continuously refine the test strategy.
- Draft of system environment strategy: Describes the changes needed for test configurations and applications based on the system epics. The draft is continuously refined during the SIT and approved before entering the STT.
- Plan test resources and budget: Align resources and budget with the overall test strategy and test plans. Prioritize the planned tests based on resource and configuration capacity.
- Identify test-related risks: Collaborate with the release owner to identify test-related risks to be included in the risk register.
SIT planning and execution
The SIT builds up test cases, applications, and test environments until the scope of the system release is reached. The tests must be planned to ensure test results can be provided early for feedback to the development streams.
Encourage the test leads and test engineers to collaborate with release owners, product managers, and the development teams in the streams to understand the incremental deliverables' contents and how they can be tested.
Reviews and approvals of the test artifacts are required before the SIT is completed and the STT is entered.
Typical tasks:
- Update the system test strategy and plan document: Based on the system epics delivered from the development streams, iteratively update the test strategy and plan. Collaborate with product managers, product owners, and release owners to anticipate testing needs in the iterations for the teams.
- Update the system test environment(s): Plan the necessary updates of the test environments, including new hardware, test applications, and test environments.
- Plan for test case and test suite updates in ADO: Iteratively create and update the test cases and test suites (test cycles) in ADO. Define the common use cases with product management and product owners, and develop the required test cases. Automate the tests when possible (a scripted sketch follows this list).
- Iteratively perform system tests: Execute the added and changed test cases and test suites in ADO based on the functionality received from the development streams. Perform automated regression tests and select manual regression tests based on changes in the deliverables.
- Report test progress and bugs: Iteratively report test progress using the test matrix in ADO. Ensure bugs found during tests are clearly defined and reported to the streams.
- Prepare the "Start of System Test" checklist: In collaboration with the release owner, decide when beta status is reached on the development stream deliverables (no more added functionality) and ensure the "Start of System Test" checklist is fulfilled.
STT planning and execution
The STT is planned after the last iteration in the SIT when the development streams have delivered all expected functionality and met the quality criteria.
Typical tasks:
- Repeat failed tests: Repeat any tests that failed in the last SIT iteration, along with validation of corrected bugs.
- Start of STT checklist meeting: Check that the development stream deliverables are in beta status and that the expected functionality is available. The quality criteria for starting the final run of tests must be met. If time is critical, a decision can be made, in agreement with product management, to reduce the requirements on system capacity or system size.
- Final updates of system test artifacts: Make the final updates of all the system test artifacts (system test environment, test cases, test suites, etc.).
- Perform system test (complete set): Run the ADO test suites and test cases on the beta deliverables from the development streams. If tests fail during STT, new betas (not patches) must be delivered.
- Review test results: Arrange a technical review meeting where the test results are evaluated. The meeting recommends whether the quality criteria are met or whether additional beta versions are required.
- Prepare the "Start of RAT" checklist: A request to development streams to fill in the "Start of RAT" checklist is sent, with a request to deliver a release candidate from the development stream.
RAT planning and execution
The release acceptance test (RAT) can start when the development streams have delivered the final release candidates.
Typical tasks:
- "Start of RAT" checklist meeting: After the release candidates are delivered and the system test configurations are loaded with the latest delivered release candidates, related development streams are invited to present their RAT checklists.
- Perform RAT tests: Run the selected ADO test suites and test cases on the release candidates from the development streams.
- Review test results: A technical review meeting is arranged to evaluate the RAT test results. The meeting recommends whether the quality criteria are met or whether new release candidates must be requested from the development streams.
- Release summary: An assessment of the quality compared with earlier system releases. Is the quality better/same/worse, and what are the risks for the release? The release summary is prepared and discussed with the SteCo.
- Internal validation acceptance (IVA): The final checkpoint meeting, held shortly before the M5 meeting. The RAT results must be reviewed before this meeting.
- Media verification: Verify the physical media from production in selected test configurations.
Final approval and closure
- Complete ADO test results: Perform a final review and approval of the test results. The tests should be repeatable within the next 10 years, so all approved documentation, test cases, test environments, test applications, etc., must be baselined for future use (see the sketch below).
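A minimal way to baseline the artifacts is a checksum manifest; the sketch below assumes a plain file tree and an illustrative JSON manifest format:

```python
# Sketch: baseline test artifacts with a SHA-256 manifest so the release can
# be reproduced later. Paths and the manifest format are illustrative.
import hashlib
import json
from pathlib import Path

def baseline(artifact_dir: str, manifest_path: str) -> None:
    """Record a checksum for every file under the artifact tree."""
    root = Path(artifact_dir)
    manifest = {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

# Example: baseline("release-2024/test-artifacts", "release-2024/manifest.json")
```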
Checkpoint meetings
The system test lead arranges checkpoint meetings for "Start of STT," "Start of RAT," and "Internal Validation Acceptance" (IVA). One or several meetings are arranged to assess the quality status according to the checklists. Capture the results and decisions in the meeting minutes and/or checklists.
Product owners and system test engineers must provide input for the checklists, and a record of each checkpoint meeting should summarize the status and conclusions. The quality control manager (QCM) has the authority to block a decision to pass the checklist at milestones.
Recommendations to system test engineers
- Cooperate with the development streams and teams to ensure product development progress.
- Understand the system epics and their acceptance criteria. The requirements should be mapped in the ADO test plan or the STT matrix table. If possible, participate in the review of the system epics' acceptance criteria to ensure testability.
- Check with the development organization to verify how much of the existing code has been rewritten and how much has been written without system or product requirements (code can be rewritten to get better test possibilities and maintenance without adding any functionality). Do not forget that old functionality must also be tested, and new functionality may interfere negatively with the old system.
- Use real user applications and user libraries. Different configurations should have different applications. Note that the system test does not have the task of verifying all types of customer applications. If the system requirements and typical configurations are described at a high level, they need to be broken down into use cases and detailed configurations.
- Iterative reviews shall be made. Remember to involve one review member who has customer experience, e.g., from a support line or commissioning work.
Bug reporting
All problems found during the system test shall be written and tracked in ADO (see How-to Manage Bugs).
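For scripted reporting, bugs can be created through the same ADO work item API as in the earlier test-case sketch. All names below are placeholders; verify the field reference names and api-version against your ADO instance:

```python
# Sketch: file a Bug work item in ADO via the REST API. ORG, PROJECT, and
# PAT are placeholders; verify field reference names and the api-version
# against your ADO instance before relying on this.
import requests

ORG = "your-organization"       # placeholder
PROJECT = "your-project"        # placeholder
PAT = "personal-access-token"   # placeholder

def report_bug(title: str, repro_steps_html: str,
               severity: str = "2 - High") -> int:
    url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/workitems/"
           "$Bug?api-version=7.0")
    patch = [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/Microsoft.VSTS.TCM.ReproSteps",
         "value": repro_steps_html},
        {"op": "add", "path": "/fields/Microsoft.VSTS.Common.Severity",
         "value": severity},
    ]
    resp = requests.post(url, json=patch, auth=("", PAT),
                         headers={"Content-Type": "application/json-patch+json"})
    resp.raise_for_status()
    return resp.json()["id"]
```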