
Test Automation: Types of Test Design

Written by Richard Seidl | Mar 24, 2021

The aim of test automation is to increase the efficiency of all test activities. Already at the test design stage, there are different types of test automation that pursue different approaches:

  1. capture & replay
  2. script-based
  3. data-driven
  4. keyword-driven
  5. model-based

With each level, the demands on the test design also increase. Techniques such as Capture & Replay deliver faster results, but keyword- or model-based techniques provide durable and scalable solutions.

Test design deserves special consideration in the context of automated test execution because software development is subject to more or less frequent changes. These can be caused by changed requirements or discovered defects, and if changes occur frequently, the manual maintenance effort for automatically executable test scripts can easily negate the advantage of automated test execution.

Capture & Replay

“Capture & replay” refers to the approach of recording the manual execution of a test case (capture) and then repeating this recording as often as required (replay). This requires test tools that can both record the tester’s actions and reproduce them at the interface of the test object. The demands on the test tool grow with the complexity of the test object and, in extreme cases, can lead to the use of test robots that, for example, add, change or remove hardware components. In the simplest and most common case, this technique is used for testing via operation and observation in a web browser: the test tool only needs to record the mouse and keyboard actions on the interface, as well as the position and content of the expected reactions.

Two options for this are to save the absolute coordinates of all mouse clicks or the object IDs (if available) of the clicked elements. The latter has the advantage that rearranging the interface elements does not immediately require the test cases to be re-recorded. One of the best-known representatives of capture & replay testing of web applications is Selenium, which enables recording, e.g. with the corresponding plugin for the Firefox browser, and allows these recordings to be replayed on many of the most widely used browsers.
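To make the difference concrete, here is a minimal sketch of what a replayed sequence amounts to once it is expressed against object IDs rather than coordinates. It assumes Selenium WebDriver and a hypothetical login page with the element IDs username and submit:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ReplayByIdExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.org/login"); // hypothetical page under test
            // Locating elements by object ID survives moved or restyled
            // elements; a recording of absolute coordinates would not.
            driver.findElement(By.id("username")).sendKeys("alice");
            driver.findElement(By.id("submit")).click();
            // Expected reaction: position and content of the response.
            String heading = driver.findElement(By.tagName("h1")).getText();
            System.out.println(heading.contains("Welcome") ? "PASS" : "FAIL");
        } finally {
            driver.quit();
        }
    }
}
```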

Selenium is a very powerful automation tool that can also be used to implement the higher design levels; in practice, this is the rule rather than the exception.

The great advantage of capture & replay is that domain experts can create and execute test cases directly, without the support of test or software experts and without much preparation or training time. The main disadvantage is the comparatively high adaptation effort: the test case has to be re-recorded every time the system under test changes, whether through changes to content, changes to the order of operation or, in the variant based on absolute coordinates, even minimal changes to the page layout. Even with comparatively small changes, the majority of the test cases must therefore be re-recorded.

Since test case creation consists of recording manually executed test cases, each affected test case must be executed manually again after a change. A single change is (depending on the complexity of the system) associated with low to medium effort; however, because adaptations are needed so frequently relative to changes to the test object, the overall test design costs are very high. This method is therefore mainly suitable when you want to demonstrate results quickly, when changes to the system are the exception rather than the rule, or merely as a supplement to the approaches described below.

Script-based test automation

In script-based test automation, test cases are designed in the form of executable scripts (executable test code). The test designer must therefore have programming skills and a development environment available. Such test scripts can be written in any programming language. At the component test level, xUnit tests are frequently used, i.e. unit tests written directly in the same language x as the program under test: JUnit for Java-based programs, CppUnit for programs in C++, etc. At the system test level, the choice of programming language is most likely to depend on the test interface of the system under test.
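As a minimal illustration, a JUnit 5 component test could look like the following; the Calculator class and its add method are hypothetical stand-ins for the unit under test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CalculatorTest {
    // Hypothetical unit under test, inlined to keep the sketch self-contained.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    @Test
    void addSumsTwoNumbers() {
        // The test case is code: it can be edited, versioned and re-run at will.
        assertEquals(5, new Calculator().add(2, 3));
    }
}
```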

The main advantage of script-based test design over capture & replay is the independence of the test case definition from the test execution. Changes to the test design are therefore possible without having to re-execute the test manually, which is particularly advantageous for more extensive tests. However, the main disadvantages of script-based test design are similar to those of capture & replay: in comparatively many cases of change, test cases have to be adapted manually. The frequency is not quite as high, though, as changes such as moving interface elements can be incorporated comparatively easily into the test design using programming techniques, for example by abstracting the concrete positions of interface elements into separate classes.
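One common form of this abstraction is the page object pattern. The following sketch, again assuming Selenium WebDriver and hypothetical element IDs, bundles all locators of a login page in one class:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// All knowledge about the login page's elements is bundled here;
// test scripts call login() and never touch locators directly.
class LoginPage {
    private final WebDriver driver;
    private final By userField = By.id("username"); // hypothetical IDs
    private final By passField = By.id("password");
    private final By submitButton = By.id("submit");

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    void login(String user, String password) {
        driver.findElement(userField).sendKeys(user);
        driver.findElement(passField).sendKeys(password);
        driver.findElement(submitButton).click();
    }
}
```

If the submit button later receives a new ID, only the locator in LoginPage changes; every test script that calls login() remains untouched.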

The main advantage is that a change does not necessarily entail re-executing the entire test case. You can now weigh up whether manually adapting the test code without test execution is more time-consuming than manually executing a test case with automatic recording, as in the capture & replay approach. This depends on the specific challenge. However, our experience shows that with increasing test duration, or for time-critical tests, the effort for manual execution is significantly higher than the effort for adapting the test script. Furthermore, there may be additional aspects, such as the creation and use of screenshots in test cases, or data that is complex to create during test execution, which make adapting the test script significantly easier than executing the test.

The susceptibility to errors during manual test execution should also not be neglected. Since every mistake is recorded as well, and an incorrect operation leads to an unintended change of state, the test would have to be restarted from the beginning, which further increases the effort required for capture & replay. Overall, we believe that the advantages of the script-based approach outweigh those of capture & replay.

Data-driven test automation

In contrast to the two previous approaches, data-driven testing focuses on the data. The basis is a test design that abstracts from concrete data, e.g. by specifying only the expected parameters but not their values. The tester then has the task of defining suitable data, in any format supported by the framework used; in practice, Excel tables or SQL databases are common. One form of test design is a table with one column for each required input parameter and one column for each expected output parameter. The tester enters concrete values in the rows of the table, and an adapter establishes the link to the test object, transfers the input parameters and compares the actual results with the predicted results.
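The same idea can be sketched with JUnit 5's parameterized tests, where the annotated rows play the role of the data table: one column per input parameter, one for the expected output. Calculator is again a hypothetical test object; in a real project the rows would typically come from an external source, e.g. via @CsvFileSource:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class AdditionDataDrivenTest {
    // Hypothetical test object, inlined to keep the sketch self-contained.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    // Each row is one test case: input a, input b, expected sum.
    @ParameterizedTest
    @CsvSource({
        "2, 3, 5",
        "0, 0, 0",
        "-1, 1, 0"
    })
    void addMatchesExpectedResult(int a, int b, int expected) {
        assertEquals(expected, new Calculator().add(a, b));
    }
}
```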

A typical example is the FitNesse test tool. FitNesse is a wiki web server that can be installed locally on the PC, so the data can be created directly as a table on a separate test wiki page. Alternatively, the values can be imported from an Excel spreadsheet in edit mode using the “Spreadsheet to FitNesse” button. Test execution is started by pressing the “Test” button; depending on whether the actual result matches the expected result, FitNesse colors the corresponding row green or red.
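Such a decision table might look like the classic division example from the FitNesse documentation: the first two columns are inputs, and the question mark marks the expected output column that the adapter (fixture) checks against the actual result:

```
!|eg.Division|
|numerator|denominator|quotient?|
|10       |2          |5.0      |
|12.6     |3          |4.2      |
|100      |4          |25.0     |
```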

Keyword-driven test automation

The two approaches described above are based on programming languages and focus either on behavior or on data. The following approaches aim to increase maintainability through additional levels of abstraction.

In keyword-driven test design, abstract keywords are used instead of concrete commands; at runtime, an adaptation layer replaces them with concrete commands depending on the configuration. Changes in system behavior that do not require a change to the scope of the language can thus be transferred to the corresponding test cases with comparatively little effort by swapping keywords. The improved readability in particular is often perceived as a significant advantage. If new scenarios arise, new test cases must be written for them; new possible system actions or reactions require extending the set of keywords. There are numerous tools that support keyword-driven testing, and any programmer can create a corresponding framework on their own. Somewhat more powerful tools with editor support, e.g. for auto-completion, are Xtext or Cucumber, which allow the entire permitted language to be defined.
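The core idea fits into a few lines of plain Java: the test case is a sequence of abstract keywords, and an adaptation layer maps each keyword to a concrete action at runtime. The keywords and actions below are invented for illustration only:

```java
import java.util.List;
import java.util.Map;

public class KeywordRunner {
    public static void main(String[] args) {
        // Adaptation layer: maps abstract keywords to concrete commands.
        // Swapping an implementation here updates all test cases at once.
        Map<String, Runnable> actions = Map.of(
            "open login page", () -> System.out.println("driver.get(loginUrl)"),
            "enter valid credentials", () -> System.out.println("fill user/password fields"),
            "submit", () -> System.out.println("click submit button"),
            "verify welcome page", () -> System.out.println("assert heading shows 'Welcome'"));

        // The test case itself is written purely in keywords.
        List<String> testCase = List.of(
            "open login page", "enter valid credentials", "submit", "verify welcome page");

        // Each keyword is resolved to its concrete command at runtime.
        for (String keyword : testCase) {
            actions.get(keyword).run();
        }
    }
}
```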

The main advantage of this approach is the higher level of abstraction, which makes test cases comparatively easy to create and improves their maintainability. Furthermore, in many cases no extensive adaptation of the test cases is necessary, since they are converted into concrete commands in the adaptation layer anyway: quite a few changes, for example those affecting the concrete interface format, can therefore be made comparatively cheaply, once and for all test cases, in the adaptation layer. The disadvantages include a medium level of preparation effort, as both the language in which the keywords are defined and the tasks of the adaptation layer must be coordinated with all stakeholders.

In terms of the technical possibilities, the same results can also be achieved with the script-based approach. However, this requires extensive preparation in the form of abstraction layers, such as wrapper classes or libraries that translate keywords into sequences of actions. Furthermore, testers need discipline not to abandon the higher level of abstraction for quick fixes and thereby create test scripts that are difficult to maintain. The keyword frameworks, on the other hand, force testers to remain at the abstraction level and thus ensure permanently improved maintainability. Frameworks such as Xtext also offer the possibility of generating convenient editors from the language description alone.

Model-based test automation

Model-based test design uses models as the basis for the test design. Models are abstract descriptions of arbitrary artifacts, which in our case are to be used for the test design. They can define the system behavior or the test behavior. They can be a separate test model or part of a common test and development model. They can be (non-)deterministic, timeless or time-dependent, discrete, continuous or a mixture of these. They can describe data flow, control flow or a mixture of these. The behavior can be continuous or event-driven. This results in a wide variety of possible applications of all kinds of models for test design.

Models can be used, for example, not only to describe individual test cases at an abstract level, but also to describe entire sets of test cases. This is achieved by defining branches, conditional statements, reactions to external events and much more at model level. A model can be used to obtain a holistic overview of the entire system and the interdependencies between components. This helps significantly in earlier phases of system development, e.g. during the review of requirements or architectural designs. Of course, all test-relevant information is also bundled here and the test cases are derived from this information as a whole.

If the test cases are derived automatically from the model, there are further advantages: minor changes in the model that affect a large number of test cases can be transferred to the test cases automatically. Test generators often work by taking certain quality objectives, such as achieving a certain coverage at model level, as the exit criterion for the test design. This means that changes involving additional test steps, a changed number of parameters or changed expected behavior can also be adopted automatically: the corresponding test cases are simply regenerated.

Suppose you wanted to test an application based on a model of possible career paths: there would be exactly one direct test case, the good case, in which every test is passed at the first attempt. Once the failure cases are included, even this small example theoretically allows an infinite number of paths, because the model contains cycles. During test generation, a test generator is usually guided only by the structure of the model: if states are added, changed or removed, it automatically includes them in a new automatic test design.
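As a toy sketch of this generation step (not any particular tool): the model below is a small state machine including a cycle, and the generator derives one test case per transition, so transition coverage as the exit criterion keeps the cycle from producing infinitely many paths:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TransitionCoverageGenerator {
    record Transition(String from, String action, String to) {}

    // Toy model of a login flow, including a cycle (Error -> Login).
    static final List<Transition> MODEL = List.of(
        new Transition("Start", "open login page", "Login"),
        new Transition("Login", "enter valid credentials", "Welcome"),
        new Transition("Login", "enter invalid credentials", "Error"),
        new Transition("Error", "retry", "Login"));

    // BFS from "Start": shortest transition sequence reaching the target state.
    static List<Transition> pathTo(String target) {
        Deque<List<Transition>> queue = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        visited.add("Start");
        queue.add(new ArrayList<>());
        while (!queue.isEmpty()) {
            List<Transition> path = queue.poll();
            String state = path.isEmpty() ? "Start" : path.get(path.size() - 1).to();
            if (state.equals(target)) return path;
            for (Transition t : MODEL) {
                if (t.from().equals(state) && visited.add(t.to())) {
                    List<Transition> next = new ArrayList<>(path);
                    next.add(t);
                    queue.add(next);
                }
            }
        }
        throw new IllegalStateException("state unreachable: " + target);
    }

    public static void main(String[] args) {
        // One generated test case per transition: reach its source state,
        // then execute the transition itself.
        for (Transition t : MODEL) {
            List<String> steps = new ArrayList<>();
            pathTo(t.from()).forEach(p -> steps.add(p.action()));
            steps.add(t.action());
            System.out.println("Generated test case: " + steps);
        }
    }
}
```

If a state or transition is added to MODEL, re-running the generator produces the updated set of test cases without any manual adaptation.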

The main advantage over the keyword-driven approach is that the entire sequence, including possible branches, nesting, loops or parallel behavior, can now be described and used for the test design. A model-based test generator interprets the model and derives individual control-flow-based sequences from it, which are then converted into a previously defined target format. Common target formats include executable test code, a human-readable variant of it (e.g. for validation) and test documentation. Accordingly, one of the advantages is that any type of change can be handled at the model level or in the exports of the associated test generator; the resulting test cases are then generated automatically. The high risk of manually adapting a large number of test cases, with the associated effort of the previously described variants, no longer exists here, which results in a significant increase in test efficiency. One of the disadvantages is the high level of preparation required.

Comparison of test automation types

A qualitative comparison based on our experience shows the following cost structure:

                  Capture & Replay   Script-based   Data-driven   Keyword-driven   Model-based
Initial Costs     very low           low            low           medium           high
Recurring Costs   very high          medium         medium        low              very low

This already shows that higher levels of test automation save costs in the long term, while lower levels are better suited for a quick assessment. In general, the framework conditions must always be taken into account when making a choice: Which project approach will be used? Who creates the test cases? How much capacity is available?

You can find more information on choosing the right strategy in my book Basiswissen Testautomatisierung.