The system test is a very exciting test level. Unit testing and integration testing focus more on the internal aspects of the application; the system test is much more about the external view. For the first time, the software is viewed as a black box and tested against the requirements. These now play a much greater role than at the lower test levels. This also means that business experts, future users or customers can actively participate in testing. However, this change in perspective brings completely new challenges with it. For this reason, I have dedicated an entire book to this topic: “The system test - from requirements to proof of quality”.
The ISTQB defines system testing as “a level of testing focused on verifying that a system as a whole meets the specified requirements”.
The big difference from the previous test levels is that here the integrated overall system is tested against the requirements. The system is viewed from the outside, like a black box.
Functional and non-functional requirements usually serve as the test basis for the system test, but user stories, business processes or user documentation are also suitable. If this basis is missing or incomplete, users or the business departments can be consulted. The old system can also serve as a test oracle, e.g. for system replacements or migrations.
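If the old system serves as a test oracle, the comparison can even be automated. The following is a minimal sketch, assuming two hypothetical REST endpoints for the legacy and the new system; all URLs, IDs and field names are illustrative assumptions, not part of any real project.

```python
# Minimal sketch: using the old system as a test oracle for a migration.
# URLs, customer IDs and field names are assumptions for illustration only.
import requests

OLD_BASE = "https://old-system.example.com"   # assumed legacy system
NEW_BASE = "https://new-system.example.com"   # assumed replacement system

def test_customer_data_matches_legacy_system():
    for customer_id in ["C-1001", "C-1002", "C-1003"]:   # assumed sample IDs
        old = requests.get(f"{OLD_BASE}/customers/{customer_id}", timeout=10).json()
        new = requests.get(f"{NEW_BASE}/customers/{customer_id}", timeout=10).json()
        # The new system must return the same business data as the old one.
        assert new["name"] == old["name"]
        assert new["balance"] == old["balance"]
```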
These requirements are often available in text form. How can test cases be created from them? In practice, a combination of two approaches has proven successful:
The interplay of the two approaches results in a good set of test cases for the system test. In practice, many questions arise when creating these test cases: if I want to write a specific test case, the statements in the documents must be specific as well. And this is often not the case: “The system must be performant”, “The application should be easy to use”, “The button should be green”. Gaps in the requirements also become apparent quickly. All these queries need to be clarified.
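Once such a vague statement has been clarified into something concrete, an executable test case can be derived from it. As a minimal sketch, assume “The system must be performant” has been clarified to “the search responds within two seconds”; the URL, endpoint and threshold below are purely illustrative assumptions.

```python
# Minimal sketch: a vague requirement only becomes testable once it is concrete,
# e.g. "the search responds within 2 seconds". URL, endpoint and threshold
# are assumptions for illustration only.
import time
import requests

BASE_URL = "https://test-system.example.com"   # assumed test environment

def test_search_responds_within_two_seconds():
    start = time.perf_counter()
    response = requests.get(f"{BASE_URL}/search", params={"q": "contract"}, timeout=10)
    duration = time.perf_counter() - start
    assert response.status_code == 200
    assert duration < 2.0, f"search took {duration:.2f}s, requirement is < 2s"
```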
The clarified points must of course also be known to the developer. It is therefore all the more important to start creating test cases at an early stage.
The test object of the system test is the integrated overall system. This means that the software or application is tested via the user interface or its external interfaces. The system must therefore be complete for a final assessment.
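What such an outside-in test can look like is sketched below for a system with a REST interface; the endpoints, payloads and expected status values are assumptions for illustration only.

```python
# Minimal sketch: a black-box system test that exercises the integrated system
# purely through its external interface. Endpoints and payloads are assumed.
import requests

BASE_URL = "https://test-system.example.com"   # assumed test environment

def test_order_can_be_placed_and_retrieved():
    # Drive the system only from the outside, as a user or client system would.
    order = {"customer_id": "C-1001", "items": [{"sku": "A-42", "quantity": 2}]}
    created = requests.post(f"{BASE_URL}/orders", json=order, timeout=10)
    assert created.status_code == 201

    order_id = created.json()["id"]
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["status"] == "ACCEPTED"
```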
Testing the integration with other systems is not part of the system test; that is covered by what is usually called system integration testing.
The aim of the system test is to check whether the functional and non-functional requirements for the application are fulfilled and sufficiently implemented. In practice, some challenges lurk here that need to be considered at an early stage (see the typical problems below).
Another difficulty lurks here: for the system test to yield a valid statement, the test environment must correspond to the production environment, or at least be very similar to it. This is especially true for non-functional test types such as performance or reliability testing. Such duplicate infrastructure can be expensive, which is where virtual machines and cloud solutions offer a massive advantage: if the later production system is a parameterized virtual machine, it can be cloned cost-effectively for the system test.
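As a rough sketch of this idea, the following example clones a production instance for the system test, using AWS EC2 via boto3 as one possible stack; the instance ID and instance type are assumptions for illustration only.

```python
# Minimal sketch: cloning a parameterized production VM as a system test
# environment, here with AWS EC2 via boto3. IDs and types are assumed.
import boto3

ec2 = boto3.client("ec2")

# Create an image (snapshot) of the parameterized production instance.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # assumed production instance
    Name="system-test-clone",
    NoReboot=True,
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Start a test instance from that image. For performance tests, the instance
# type should match production; for other test types, cheaper hardware may do.
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="m5.large",            # assumed
    MinCount=1,
    MaxCount=1,
)
```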
Test data management becomes significantly more demanding from the system test level onwards. Unit and integration tests usually handle test data within the local environment of the test case. System tests, however, may require far more extensive test data sets, e.g. created contracts, historical data, linked data sets, etc. Two methods have become established in practice here:
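Regardless of how the test data is provisioned, system tests typically need linked, history-rich data sets. The following minimal sketch generates synthetic contracts with a payment history; the entities and fields are assumptions chosen purely for illustration.

```python
# Minimal sketch: generating synthetic, linked test data for a system test,
# e.g. contracts with a payment history. Entities and fields are assumed.
import json
import random
from datetime import date, timedelta

def build_contract(contract_id: int) -> dict:
    start = date.today() - timedelta(days=random.randint(365, 5 * 365))
    return {
        "contract_id": f"CT-{contract_id:05d}",
        "customer_id": f"C-{1000 + contract_id}",
        "start_date": start.isoformat(),
        # Linked historical data: one payment per month since the start date.
        "payments": [
            {"due_date": (start + timedelta(days=30 * i)).isoformat(), "amount": 49.90}
            for i in range(12)
        ],
    }

if __name__ == "__main__":
    contracts = [build_contract(i) for i in range(1, 101)]
    with open("system_test_contracts.json", "w") as f:
        json.dump(contracts, f, indent=2)
```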
For a long time, test automation in system testing was poorly supported. Tests under the hood, such as unit and integration tests, were well supplied with tools simply because of their proximity to development. With the advent of agile projects in particular, however, there has been a massive boost at the system test level. This applies to test automation solutions for GUI tests, web tests and interface tests alike. There is a wealth of possibilities here today. I have summarized how to implement test automation in the book “Basiswissen Testautomatisierung”.
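As one example of what such automation can look like at the GUI level, here is a minimal sketch using Selenium WebDriver, one of many possible tools; the URL, element IDs and expected text are illustrative assumptions.

```python
# Minimal sketch: an automated GUI test at system test level with Selenium
# WebDriver. URL, element IDs and expected heading are assumed.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_shows_dashboard():
    driver = webdriver.Chrome()
    try:
        driver.get("https://test-system.example.com/login")   # assumed test system
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        # After logging in, the dashboard heading should be visible.
        heading = driver.find_element(By.TAG_NAME, "h1").text
        assert "Dashboard" in heading
    finally:
        driver.quit()
```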
The model of test levels comes from a time long before agile projects, and it is therefore often ignored in Scrum and the like. But if we take a closer look at the model, we can see how important the ideas and aspects behind the test levels are in agile contexts too. Just as with system testing: test the system from the outside, prepare test data, design test cases and, of course, set up test automation.
In this context, the test pyramid model also crops up time and again. It provides a similar perspective to the test levels.
It is therefore also worth looking through the system test lens in agile software development. The intention of the system test can be transferred and the relevant aspects used in your own project.
When I look at the projects of the last few years, the same challenges occur again and again in everyday system testing:
In practice, the system test is often the test level that projects start with, because it is the easiest to grasp for business departments, testers and management: you take the requirements and test the system against them. This is easier to understand than making the case for small-scale unit tests at code level.
Unfortunately, in this situation the system test often reveals the gaps in the other test levels. Instabilities and inconsistent software behavior then indicate a lack of robustness in the underlying layers. And because the system test can only be executed late, these findings arrive very late as well.
It is therefore particularly important to establish the other test levels as well. A system test alone is not enough.
Even if the system test can only be carried out quite late, the test cases, for example, can be designed before the first line of code has been written. Questions and ambiguities are thus resolved at an early stage.
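Such early test case design can even take the form of executable skeletons that remain skipped until the feature exists. A minimal sketch, with an assumed requirement ID and test name:

```python
# Minimal sketch: a system test case specified before implementation exists.
# The requirement reference and names are assumptions for illustration only.
import pytest

@pytest.mark.skip(reason="Requirement REQ-042 not implemented yet")
def test_overdue_contract_is_flagged_in_overview():
    """REQ-042 (assumed): contracts with payments overdue by more than 30 days
    must be highlighted in the contract overview."""
    ...
```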