
Test Automation of Different System Types

Written by Richard Seidl | Apr 4, 2021

Different system types place different demands on successful test automation. Among other things, they determine the automation approach and also limit the possible tools.

Desktop applications

For a long time, pure desktop applications were the only type of system that could be automated. They usually consist of software alone and do not communicate with other systems. As a result, they hardly have to take any boundary conditions or interfaces into account, and the developers can concentrate entirely on the purpose of the software itself. In addition to developer tests at component level that focus on the application's own functionality, automated testing via the user interface makes sense. Tools that support the technology of the application or its user interface can be used for this purpose. In addition, it may be necessary to access or create test data via the file system (and in any formats used by the application). Further integration levels are not necessary, at least at application level; the internal components are, of course, still integrated with each other according to an integration strategy. Sometimes it is helpful to build auxiliary functionality into the application for test purposes, for example to support test preparation or the checking of results.
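
As a sketch of what such a UI-level test can look like, the following example drives a desktop application through its user interface and checks the result via the file system. It assumes the Python library pywinauto on Windows and uses Notepad as a stand-in for the application under test; the window title and file path are illustrative.

```python
# Minimal sketch: GUI-level test of a desktop application (assumes pywinauto on Windows).
# Notepad stands in for the application under test; title and paths are illustrative.
from pathlib import Path
from pywinauto.application import Application

def test_typing_in_editor() -> None:
    # Prepare/clean test data on the file system.
    out_file = Path(r"C:\temp\automation_demo.txt")
    if out_file.exists():
        out_file.unlink()

    # Start the application and drive it via its user interface.
    app = Application(backend="uia").start("notepad.exe")
    editor = app.window(title_re=".*Notepad")
    editor.type_keys("hello from the automated test", with_spaces=True)

    # Saving the file via the menu is omitted here; a real test would then
    # verify the result through the file system, e.g.:
    # assert "hello" in out_file.read_text()
    app.kill()
```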

With this type of software system, automation is considerably simplified because the dependencies on other software systems are limited; it is therefore highly likely that a small set of interfaces will be sufficient to automate a functional test.

The fact that this system type is inherently designed for only one simultaneous user also avoids a number of problems in automation: tests with several users at the same time, for example, are unnecessary. Testing several parallel instances of the application, however, should not be skipped.

Client-server systems

The first level of complexity for test automation came with the emergence of client-server systems. Here, data is stored centrally on a server, which can also be responsible for significant parts of the system's functionality. A distinction is made between “fat client” and “thin client”: as the name suggests, a “fat client” contains significantly more of the functionality, while a “thin client” serves mainly as a display and input screen and most of the functionality remains on the server.

An important decision for the automated testing of such systems is whether the user interface should also be covered or whether a manual test is more efficient there. Especially in the case of thin clients, it may be sufficient to let the automation work directly via the client interfaces and thus dispense with automated GUI tests. One advantage of such an approach is the generally much higher execution speed of the automated test cases. For fat clients, such an approach is usually not appropriate, as a large part of the functionality lies in the client and would therefore not be tested.
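
A sketch of such an interface-level test is shown below. It assumes a REST-style client interface reachable over HTTP and uses the Python requests library; the URL and field names are illustrative.

```python
# Minimal sketch: testing a client-server system directly via its client interface,
# bypassing the GUI. Assumes a REST-style API; URL and fields are illustrative.
import requests

BASE_URL = "https://testserver.example.com/api"

def test_create_and_read_customer() -> None:
    # Create a record via the server interface that the client would normally use.
    payload = {"name": "Test Customer", "city": "Vienna"}
    created = requests.post(f"{BASE_URL}/customers", json=payload, timeout=10)
    assert created.status_code == 201
    customer_id = created.json()["id"]

    # Read it back and verify the server-side functionality.
    fetched = requests.get(f"{BASE_URL}/customers/{customer_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "Test Customer"
```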

An important aspect of client-server systems is that in most cases several users work with the system in parallel. There are several possible scenarios for an automated test:

  • Multiple users via the same test computer
  • Multiple users via different physical computers
  • Multiple users via different virtualized computers

In the first case, the execution of the automated test cases must be parallelized on one physical computer. However, most test tools do not offer explicit support for this, as automation via the GUI requires a certain degree of exclusivity. Building the functionality required for this yourself can be very costly.

The second option brings with it the difficulty of controlling the test environment: configuring and maintaining several physical computers solely for the purpose of automation requires considerable effort, even if they are identical.

The most commonly used option is testing across multiple virtualized computers. This approach has the advantage that a defined machine configuration can simply be replicated across several instances.

Experience shows that many test teams using a virtualized environment for the first time greatly underestimate the necessary administration activities and the associated effort. Effective configuration management for the virtualized test computers is critical to success.
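
Independently of the chosen infrastructure, the multi-user aspect itself can be exercised below the GUI, for example by simulating several users in parallel against the server interface. The following sketch uses Python's standard library together with requests; the endpoints are illustrative.

```python
# Minimal sketch: several simulated users acting on the server in parallel.
# Works at the client-interface level, since parallel GUI sessions on one
# machine are rarely supported by GUI automation tools. URLs are illustrative.
from concurrent.futures import ThreadPoolExecutor
import requests

BASE_URL = "https://testserver.example.com/api"

def simulate_user(user_id: int) -> int:
    # Each simulated user logs in and performs one transaction.
    session = requests.Session()
    session.post(f"{BASE_URL}/login", json={"user": f"testuser{user_id}"}, timeout=10)
    response = session.post(f"{BASE_URL}/orders", json={"item": "demo", "qty": 1}, timeout=10)
    return response.status_code

def test_parallel_users(n_users: int = 5) -> None:
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        results = list(pool.map(simulate_user, range(n_users)))
    # All simulated users must succeed despite running concurrently.
    assert all(code == 201 for code in results)
```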

Web applications

A special case of client-server applications is now very widespread: web applications. Here there is generally no application-specific client but a generic one, the browser. Because the transmitted data is strongly standardized (HTTP and HTML), specific methods can be used that target these protocols and exploit their use (e.g. capture & replay at protocol level). Many tools explicitly support web applications. Parallelizing test executions is also easier to implement for web applications: some tools do not access the application via the physical GUI but via JavaScript, or can even execute their tests at the level of the underlying protocols and formats, which normally shortens test execution time considerably.

One question whose answer significantly narrows down the tool decision is whether automated tests have to be carried out on different browsers and browser versions. Whether this is necessary depends in particular on the functionality used within the browser, i.e. JavaScript, Ajax and similar techniques.
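
If several browsers have to be covered, one and the same GUI test is often parametrized over a list of browsers. The following sketch assumes Selenium WebDriver with pytest and locally available browser drivers; the page URL and the expected title are illustrative.

```python
# Minimal sketch: running the same web test against several browsers.
# Assumes Selenium WebDriver, pytest and locally available browser drivers.
import pytest
from selenium import webdriver

BROWSERS = ["chrome", "firefox"]

def make_driver(name: str):
    if name == "chrome":
        return webdriver.Chrome()
    if name == "firefox":
        return webdriver.Firefox()
    raise ValueError(f"unsupported browser: {name}")

@pytest.mark.parametrize("browser", BROWSERS)
def test_homepage_title(browser):
    driver = make_driver(browser)
    try:
        driver.get("https://testserver.example.com")  # illustrative URL
        assert "Example" in driver.title
    finally:
        driver.quit()
```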

Another scenario in which automated test sequences across multiple browsers are useful is semi-automated testing. For example, automated functional tests can be run and screenshots captured during execution, so that a tester can review these screenshots manually after the run and check that everything is displayed correctly. This method can be a good compromise between manual testing effort and automation effort, because a stable machine verification of the correct display of web pages with dynamic content cannot currently be guaranteed.
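
The semi-automated variant can be sketched with the same stack: the test drives the pages automatically and stores a screenshot at each checkpoint for later manual review. The URLs and file names are illustrative.

```python
# Minimal sketch: semi-automated test that captures screenshots for later
# manual review of the visual rendering. Assumes Selenium WebDriver.
from pathlib import Path
from selenium import webdriver

def run_with_screenshots(output_dir: str = "screenshots") -> None:
    Path(output_dir).mkdir(exist_ok=True)
    driver = webdriver.Chrome()
    try:
        pages = [
            "https://testserver.example.com/",
            "https://testserver.example.com/search?q=demo",
        ]
        for step, url in enumerate(pages):
            driver.get(url)
            # Functional checks run automatically ...
            assert driver.title
            # ... while the visual appearance is checked later by a human tester.
            driver.save_screenshot(f"{output_dir}/step_{step:02d}.png")
    finally:
        driver.quit()
```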

Test automation of mobile applications

In general, the automation of mobile applications is conceptually comparable to the testing of client-server systems, with the difference that mobile devices take over the communication with the server instead of desktop clients. From this perspective, the question naturally arises as to why the automation of mobile application testing should be considered separately at all. Despite the conceptual similarity, however, some special challenges arise in this area that justify a separate look. This section therefore mentions some of these special challenges when testing mobile applications and describes a possible procedure for such test projects.

Major challenges in the automation of mobile applications are:

The selection of test platforms

The device landscape currently comprises a large number of end devices that are potentially relevant for test automation projects, so a major problem is selecting a sensible subset of devices for test execution.

Dealing with interrupts

In mobile applications, interrupts (e.g. incoming calls, SMS, push notifications, …) present tool manufacturers and test automation engineers with major challenges. If emulators or simulators are used for the test execution, the interrupts can be simulated relatively easily. However, this is difficult to do on physical end devices.
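
On an Android emulator, for example, such interrupts can be triggered directly from the test code via the emulator console, which adb forwards with its emu subcommand. The following sketch is a minimal example under this assumption; the phone number and message are illustrative, and the approach does not carry over to physical devices.

```python
# Minimal sketch: simulating interrupts (incoming call, SMS) on an Android
# emulator during an automated test run. Assumes a running emulator reachable
# via adb; 'adb emu' forwards commands to the emulator console.
import subprocess

def adb_emu(*args: str) -> None:
    subprocess.run(["adb", "emu", *args], check=True)

def simulate_incoming_call(number: str = "+15550100") -> None:
    adb_emu("gsm", "call", number)     # the device starts ringing
    # ... the test now verifies how the app under test reacts ...
    adb_emu("gsm", "cancel", number)   # the caller hangs up

def simulate_incoming_sms(number: str = "+15550100") -> None:
    adb_emu("sms", "send", number, "test message")
```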

Different hardware of the end devices

The mobile sector in particular is characterized by a large number of devices with a wide variety of hardware. Diversity in screen size, resolution and pixel density is also found in this form only on mobile devices. These factors are far less relevant when automating desktop applications.

Network performance and different types of network connections

As mobile applications often have to function with different and constantly changing network connections, the question arises as to how this aspect can be taken into account in test automation. In practice, these tests are carried out either manually in field tests or in a test environment with simulated network connections (WAN emulators).
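
In a Linux-based test environment, degraded connections can be emulated with the traffic-control tool tc and its netem module, for example. The sketch below wraps the corresponding commands in Python; it assumes root privileges and that the interface under test is eth0, and the delay and loss values are illustrative.

```python
# Minimal sketch: emulating a slow, lossy mobile network connection in a
# Linux test environment with tc/netem. Assumes root privileges and that
# the interface under test is eth0; values are illustrative.
import subprocess

INTERFACE = "eth0"

def degrade_network(delay_ms: int = 300, loss_percent: float = 2.0) -> None:
    subprocess.run(
        ["tc", "qdisc", "add", "dev", INTERFACE, "root", "netem",
         "delay", f"{delay_ms}ms", "loss", f"{loss_percent}%"],
        check=True,
    )

def restore_network() -> None:
    subprocess.run(["tc", "qdisc", "del", "dev", INTERFACE, "root"], check=True)

# Typical use: degrade_network(), run the mobile test suite, restore_network().
```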

Test automation for embedded systems

Embedded systems are characterized by the embedding of software in hardware. The system to be tested is therefore not just the software, but hardware and software together. When different components with different hardware dependencies interact, the development and testing of embedded systems can become arbitrarily complex. Additional factors such as interacting external systems or data dependencies can increase the complexity further.

Embedded systems are also often systems with a certain degree of safety criticality. This means that important assets, such as valuable property or human life, depend on the correctness of their behavior. There are corresponding standards such as ISO/IEC 25000 or domain-specific ones such as ISO 26262 for the automotive sector or EN 50128 for the railroad sector.

Such standards contain method tables for the type of test approach as well as classifications for the tools used. EN 50128, for example, distinguishes between T1, T2 and T3: T1 means ‘has no influence on the test object’, T2 covers verification and test tools whose failures could result in defects not being detected, and T3 covers tools that have a direct influence on the test object. The tools presented for test automation therefore fall under T2, for which separate quality verifications are necessary.

Data warehouses

Data warehouses are an example of complex systems with many interfaces, large amounts of data and an often less intuitive structure. In principle, the databases and the underlying systems and standard products themselves do not differ significantly from other applications under test: they have specific requirements and use cases. Data warehouses, by contrast, are central data collections fed from several of a company's systems; the structure and preparation of the data allow comprehensive analyses that support management and business decisions.

A data warehouse (DWH) essentially fulfills two principles:

  • Integration of data
  • Separation of data

The operation of a DWH, starting with data procurement and the storage of data in the DWH database through to the management of databases for subsequent data analyses and evaluations, is known as “data warehousing”.

Apart from the organizational problems and the infrastructure (large amounts of data, legally sensitive data, etc.), there are other aspects that make manual testing almost impossible:

  • Many technical interfaces with many source systems for data
  • No graphical user interface
  • Complex core functionalities such as historization of data or consistency checks
  • Import, export and semantics of the data are complex and in many cases not known in detail to the test team

In most cases, a combination of automated approaches is necessary for a comprehensive test. For the core area of the data warehouse, i.e. the central data storage and the actual data warehouse functionality such as historization, references or other core functions, there are usually consistency rules that the data in the core system must comply with. An automated check can verify that these rules are actually adhered to, e.g. that the most recent record in a data history always carries the marker for the currently valid record.
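
The following sketch shows what such an automated rule check could look like for the example just mentioned: per business key, exactly one record in the history may carry the "currently valid" marker, and it must be the record with the most recent validity date. The table and column names are illustrative, and sqlite3 merely stands in for the DWH database.

```python
# Minimal sketch: automated consistency check on a DWH history table.
# Rule: per business key, exactly one record is flagged as current, and it is
# the record with the latest valid_from date. Names are illustrative.
import sqlite3

RULE = """
SELECT business_key
FROM customer_history
GROUP BY business_key
HAVING SUM(CASE WHEN is_current = 1 THEN 1 ELSE 0 END) <> 1
    OR MAX(CASE WHEN is_current = 1 THEN valid_from END) <> MAX(valid_from)
"""

def check_history_consistency(db_path: str = "dwh_core.db") -> list:
    # Returns the business keys that violate the historization rule.
    with sqlite3.connect(db_path) as conn:
        return [row[0] for row in conn.execute(RULE)]

def test_history_consistency():
    assert check_history_consistency() == []
```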

Other approaches for a DWH test are:

  • Plausibility checks: Valid and invalid data records are imported. The valid data records must now be accepted in the target system and the invalid data records must appear as “rejected” in the log.
  • Re-implementation: Part of the system's functionality is re-implemented in the automation framework for testing purposes.
  • Defined input-output pairs: A known set of test data records, derived from the import and transformation rules according to test design methods, is imported, and the result is compared with known result data that was derived in the same way (see the sketch after this list).
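
A sketch of the last approach, with illustrative file names and a deliberately simple CSV comparison, could look like this:

```python
# Minimal sketch: defined input-output pairs for a DWH load test.
# A known input file is loaded by the DWH job (not shown here), and the
# resulting target table export is compared with the derived expected result.
# File names and the key column are illustrative.
import csv

def read_records(path: str, key: str = "business_key") -> dict:
    with open(path, newline="", encoding="utf-8") as f:
        return {row[key]: row for row in csv.DictReader(f)}

def test_transformation_result():
    expected = read_records("expected/customers_loaded.csv")
    actual = read_records("export/customers_loaded.csv")
    assert actual.keys() == expected.keys(), "missing or unexpected records"
    for key, exp_row in expected.items():
        assert actual[key] == exp_row, f"deviation in record {key}"
```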

Cloud-based systems

The most important feature of cloud computing is that applications, tools, development environments, management tools, storage capacity, networks, servers, etc. are no longer provided or operated by the users themselves, but are “rented” from one or more providers who offer the IT infrastructure to the public as cloud services via a network. For the user, this has the advantage that there are no acquisition and maintenance costs for IT infrastructure. Only the services actually consumed are paid for, and only for the duration of use. Standardized and highly scalable services enable many companies to use services that were previously almost unaffordable.

One problem that users of cloud services have to deal with is data security. It is their responsibility to decide which data they want to share outside their company and to what extent.

The outsourcing of operating environments and the use of external services for functionality also have a significant impact on testing. Especially in such multi-layered scenarios, where responsibilities do not lie with a single party and functionality-relevant parts are developed and operated independently of each other, the functionality of the systems must be reviewed continuously, in the sense of a frequently performed regression test.
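
In practice, this often amounts to a small, frequently scheduled regression suite that checks the externally operated services against their expected behavior. The following sketch uses the requests library; the service URLs and the checked fields are illustrative.

```python
# Minimal sketch: frequently scheduled regression check against externally
# operated cloud services that the application depends on. URLs illustrative.
import requests

DEPENDENT_SERVICES = {
    "authentication": "https://auth.provider.example.com/health",
    "payment": "https://pay.provider.example.com/health",
}

def test_dependent_services_available():
    for name, url in DEPENDENT_SERVICES.items():
        response = requests.get(url, timeout=5)
        assert response.status_code == 200, f"{name} service not reachable"

def test_payment_contract_unchanged():
    # Check that a functionality-relevant response still has the expected shape.
    response = requests.get("https://pay.provider.example.com/api/currencies", timeout=5)
    assert "EUR" in response.json()
```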

It is important to clearly define the scope for testing and test automation: tests that a cloud infrastructure provider has to carry out are generally focused differently than tests carried out by a platform or software developer, or even by the customer themselves.