The Complexity of Integration - Richard Seidl

Written by Richard Seidl | Sep 30, 2019 10:00:00 PM

The dictionary defines integration as the "(re)creation of a unit (from differentiation); completion", which already names the major goal this process is heading towards. The concrete manifestations, especially in software development, could hardly be more different: horizontal, vertical, microservices, APIs, loosely coupled, layers, silos, tightly interlinked, as an integration platform or file-based, encapsulation, and so on. Each of these forms brings its own goals, and integration-specific aspects such as the integration strategy, including its requirements for the test environment, add further variety.

Structure is needed to keep an overview here. In the following, we would like to point out a few dimensions that can help you classify your own integration tests, or check whether further dimensions still need to be covered.

Dimension: Test objectives

A typical test objective of the integration test is to verify correct communication between the objects to be integrated. As so often, the primary aim is to minimize the risk of undetected defects and to detect the effects of defects at an early stage.

The focus is usually on functional aspects. However, non-functional aspects also play an important role: reliability, usability, information security, compatibility, portability, maintainability, performance/efficiency (see ISO 25010). Depending on the industry, the purpose of the application and customer requirements, the non-functional aspects can become decisive and should be taken into account accordingly.
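To make the functional test objective concrete, here is a minimal sketch of an integration test that verifies the communication between two components at their interface. All names (OrderService, InventoryClient) are invented for illustration, not taken from a real system:

```python
# Sketch: an integration test whose objective is to verify correct
# communication between two hypothetical components.

class InventoryClient:
    """Represents the inventory subsystem (here: in-memory)."""
    def __init__(self, stock):
        self._stock = stock

    def reserve(self, item, quantity):
        available = self._stock.get(item, 0)
        if available < quantity:
            return {"status": "rejected", "available": available}
        self._stock[item] = available - quantity
        return {"status": "reserved", "available": self._stock[item]}


class OrderService:
    """Uses the inventory only through its interface."""
    def __init__(self, inventory):
        self._inventory = inventory

    def place_order(self, item, quantity):
        result = self._inventory.reserve(item, quantity)
        return result["status"] == "reserved"


def test_order_and_inventory_integrate():
    inventory = InventoryClient({"widget": 5})
    service = OrderService(inventory)
    # Test objective: the two objects communicate correctly.
    assert service.place_order("widget", 3) is True
    assert service.place_order("widget", 3) is False  # only 2 left


test_order_and_inventory_integrate()
```

The test exercises the real interaction between the two objects rather than either object alone; that interaction is what the integration test objective is about.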

Dimension: Test objects

The test object to be tested significantly influences the design of the tests and the test environment: interfaces, services, APIs, databases, subsystems, but also infrastructure and hardware.

It is essential for the success of the integration tests that the individual integration objects have already been tested as a product or subsystem, independently of the communication at the interfaces. Otherwise, when an error occurs, it is impossible to tell whether the problem stems from the interface or from the component itself. Testing the components beforehand therefore saves a lot of troubleshooting time.
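One common way to test an integration object in isolation beforehand is to replace its partner with a test double. The following sketch, with invented names (PaymentService, StubGateway), shows the idea: if the component passes its isolated test, a later integration failure points at the interface rather than the component logic.

```python
# Sketch: test one integration object in isolation, using a test double
# for its partner, before running the actual integration test.

class StubGateway:
    """Test double standing in for the real payment gateway."""
    def charge(self, amount_cents):
        return {"ok": amount_cents > 0}


class PaymentService:
    def __init__(self, gateway):
        self._gateway = gateway

    def pay(self, amount_cents):
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        return self._gateway.charge(amount_cents)["ok"]


# Component test in isolation: the service's own logic is verified
# before any real gateway is integrated.
service = PaymentService(StubGateway())
assert service.pay(500) is True
```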

Dimension: Test level

Depending on the test object, the integration test takes place at a different level of abstraction:

  • Units: There is usually good test support here from the development environment or framework.
  • System components: Integrated libraries or databases support the test.
  • Systems: Even if software system interfaces are well documented, integration is complex and error-prone.
  • Integration of software and hardware: This is a particular challenge for tests, as non-functional aspects must also be taken into account.
  • Integration of software and data: Both describe information that must fit together: the software usually defines the generic part, the data the project-specific part. The balancing act between generic and project-specific testing is important here.
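The software/data level in the last bullet can be sketched very simply: the generic software part defines the structure it expects, and a small check verifies that the project-specific data actually fits it. Field names and records here are invented for illustration:

```python
# Sketch: software/data integration as a structural fit check.
# The software's generic expectation:
EXPECTED_FIELDS = {"id": int, "name": str, "price": float}

# The project-specific data, e.g. loaded from a customer file:
project_data = [
    {"id": 1, "name": "bolt", "price": 0.10},
    {"id": 2, "name": "nut", "price": "0.05"},  # wrong type on purpose
]

def validate(record):
    """Return a list of mismatches between record and expectation."""
    errors = []
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

for i, record in enumerate(project_data):
    for problem in validate(record):
        print(f"record {i}: {problem}")
```

The generic check stays the same across projects; only the data (and possibly the expected structure) is project-specific, which is exactly the balancing act described above.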

It can also become complex across all levels: If integration takes place at system level (often black box), but focuses on features that are only recognizable from a white box perspective, this balancing act is a particular challenge for many development teams.

In any case, efficient integration requires that teams from different levels, each with their own perspective, work together.

Dimension: Test basis

The test basis can be, for example: interface specifications, definitions of communication protocols, sequence diagrams, models such as state diagrams, architecture descriptions, software and system designs, workflows, use cases or descriptions of data structures.

In general, the higher the degree of formalization, the more you can rely on the results. If we can be sure that the interface specifications of the communicating products are consistent with each other, then a lot has already been gained.
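A highly formalized test basis can even be checked mechanically. The following sketch represents two interface specifications as plain dictionaries (the endpoint and field names are invented) and verifies that the consumer's expectations are consistent with what the provider offers:

```python
# Sketch: a consistency check between two formalized interface
# specifications of communicating products.

provider_spec = {"endpoint": "/orders", "fields": {"id", "status", "total"}}
consumer_spec = {"endpoint": "/orders", "fields": {"id", "status", "currency"}}

def check_consistency(provider, consumer):
    """Return a list of inconsistencies between the two specifications."""
    problems = []
    if provider["endpoint"] != consumer["endpoint"]:
        problems.append("endpoint mismatch")
    # Every field the consumer relies on must be offered by the provider.
    missing = consumer["fields"] - provider["fields"]
    for field in sorted(missing):
        problems.append(f"consumer expects field the provider lacks: {field}")
    return problems

for problem in check_consistency(provider_spec, consumer_spec):
    print(problem)
```

With an informal test basis (prose documents, diagrams) this kind of check must be done by reading and reviewing, which is why the degree of formalization matters so much.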

Dimension: Typical errors

Integration tests can detect many different types of errors, for example: incorrect data structures, faulty interfaces, incorrect assumptions about the transferred data, missing data, problems with performance or security (encryption).
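One of the error types named above, an incorrect assumption about the transferred data, can be illustrated in a few lines. Both sides are syntactically compatible, yet the receiver misinterprets the value (the sender/receiver names and the unit mix-up are invented for illustration):

```python
# Sketch: a typical integration error caused by a wrong assumption
# about the transferred data (seconds vs. milliseconds).

def sender():
    # The sender emits a Unix timestamp in seconds.
    return {"event": "login", "timestamp": 1_700_000_000}

def receiver(message):
    # Wrong assumption: the receiver treats the value as milliseconds.
    return message["timestamp"] / 1000

seconds_sent = sender()["timestamp"]
seconds_received = receiver(sender())
# The values disagree by a factor of 1000; only an integration test
# that compares both ends of the communication can reveal this.
assert seconds_received != seconds_sent
```

Neither component fails on its own; the defect only becomes visible when the actual communication is tested end to end.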

These dimensions can be combined almost arbitrarily and therefore cover a wide range of possible integration tests. While component or system tests are rather homogeneous, the field of integration tests is very diverse in terms of implementation, technology and methodology. Many tests, especially non-functional ones, can only be carried out with automation. The sheer number of possible combinations described here, all of which influence the test, also argues for increasing efficiency through test automation.

Establishing test automation itself is also a major challenge, as this step often requires adjustments to the frameworks or the development of in-house test tools - an effort that should not be underestimated.

Conclusion

As a test manager or tester, the wealth of possibilities can be so overwhelming that you almost want to bury your head in the sand. But don't be discouraged; simply start small: draw up a grid with the most important dimensions and check where you have already implemented integration tests well, where gaps remain and where further tests would bring real added value. Then start improving. Good luck!