
Test design with model-based testing

Model-based testing sounds like the holy grail of quality assurance: efficiency, automation and maximum precision. But in practice, you stumble across unexpected hurdles. Perhaps you have also experienced how the shiny promises fade in reality. There is often a gap between theory and application that is bigger than expected. Why is that? Is model-based testing just another buzzword that doesn’t pass the practical test? Or are there ways to bridge the gap and exploit its full potential? Let’s delve into the world of model-based testing and shed light on the stumbling blocks and innovative approaches that offer hope.

Podcast episode on model-based testing

In this episode, I talk to Matthias Hamburg about model-based testing. Matthias is an experienced expert in the software testing community and provides valuable insights into the problems and solutions of model-based testing. He reports on studies that show that test automation often does not deliver the desired results and explains the difficulties he has experienced in practice, such as insufficient modeling knowledge and gaps between modeling and test execution. Matthias introduces a new tool that aims to close these gaps and emphasizes the importance of no-code generation to make testers’ work easier. A fascinating conversation about the future of software testing!

“The World Quality Report shows that test automation often fails to achieve the business goals that were hoped for.” - Matthias Hamburg

Matthias Hamburg was Managing Consultant at Sogeti Deutschland GmbH until his retirement in 2019. His professional focus is on test analysis, test management and test process improvement. He continues to volunteer for the German Testing Board (GTB) and its umbrella organization ISTQB. Among other things, he publishes the Advanced Test Analyst syllabus and the standard glossary of test terms in English and German.

Model-based testing: Why theory and practice often drift apart

The gap between aspiration and reality

Model-based testing (MBT) promises efficiency and quality assurance at the highest level. In practice, however, MBT often fails to deliver the expected results. Studies such as the World Quality Report by Capgemini and Sogeti confirm this phenomenon: test automation often fails to achieve business objectives. CI/CD processes stutter instead of running smoothly, and the expected increase in efficiency does not materialize. There seems to be a gap between the brilliant theory and the sobering practice.

Obstacles on the way to successful implementation

One of the main problems with model-based testing is the lack of professional training in modeling techniques. Imagine someone trying to play a complex piece of music without being able to read sheet music - the result will hardly be convincing. In the same way, insufficiently trained testers produce models that are of little use for test generation. In addition, there is often no seamless transition from modeling to test execution: gaps open up between test design and implementation that make the process harder than it needs to be. Frameworks such as Selenium or Playwright help with the execution side, but they do not close these gaps on their own.

A ray of hope: An innovative approach in MBT

But there is hope on the horizon. A new tool is entering the scene and addresses precisely these challenges. It first generates abstract test cases from the model that can be reviewed manually - a kind of dress rehearsal before the curtain rises. As soon as a concrete application is available, the abstract test cases are executed automatically. This seamless transition from modeling to test execution works like a well-oiled cog in the machinery of quality assurance. It is particularly noteworthy that the tool also reacts flexibly to changes in the software and adapts accordingly.
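To make the first phase a little more tangible, here is a minimal sketch of how abstract test cases can be derived from a simple state-machine model. The login model, the action names and the "cover every transition" criterion are purely illustrative assumptions; this is not the tool discussed in the episode.

```python
# Minimal sketch: derive abstract test cases from a simple state-machine model.
# The model (a login dialog) and its transitions are illustrative assumptions.

MODEL = {
    "start":       [("open_login_page", "login_shown")],
    "login_shown": [("enter_valid_credentials", "dashboard"),
                    ("enter_invalid_credentials", "error_shown")],
    "error_shown": [("retry", "login_shown")],
    "dashboard":   [],
}

def abstract_test_cases(model, start="start"):
    """Walk the model and emit one abstract test case per path,
    so that every transition is covered at least once."""
    cases, stack = [], [(start, [])]
    while stack:
        state, path = stack.pop()
        transitions = model.get(state, [])
        if not transitions:                      # end state reached
            cases.append(path)
            continue
        for action, target in transitions:
            if (action, target) in path:         # avoid cycling forever
                cases.append(path)
                continue
            stack.append((target, path + [(action, target)]))
    return cases

if __name__ == "__main__":
    for i, case in enumerate(abstract_test_cases(MODEL), start=1):
        steps = " -> ".join(action for action, _ in case)
        print(f"Abstract test case {i}: {steps}")
```

Each generated case is still abstract: a sequence of named actions that a tester can review manually before any automation code exists.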

Practical example: Optimization using the two-phase approach

This tool has already demonstrated its strengths in a pilot application. It made it possible to try out new methods and to optimize the entire test process. The two-phase approach - first generating abstract test cases and reviewing them manually, then automating them - proved particularly effective. It not only helps to identify errors early, but also improves quality assurance as a whole. A win-win situation, you could say.
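The second phase can be pictured as an adapter layer that maps each abstract action to a concrete browser interaction, for example with Playwright. The following sketch executes one abstract test case this way; the URL, selectors and credentials are hypothetical placeholders, not the pilot application from the episode.

```python
# Minimal sketch of the second phase: execute an abstract test case by
# mapping each abstract action to a concrete Playwright call.
# URL, selectors and credentials are hypothetical placeholders.
from playwright.sync_api import sync_playwright

ABSTRACT_CASE = ["open_login_page", "enter_valid_credentials"]

def run_case(page, abstract_case):
    # Adapter layer: one concrete implementation per abstract action.
    actions = {
        "open_login_page": lambda: page.goto("https://example.org/login"),
        "enter_valid_credentials": lambda: (
            page.fill("#user", "alice"),
            page.fill("#password", "secret"),
            page.click("button[type=submit]"),
        ),
        "enter_invalid_credentials": lambda: (
            page.fill("#user", "alice"),
            page.fill("#password", "wrong"),
            page.click("button[type=submit]"),
        ),
        "retry": lambda: page.click("#try-again"),
    }
    for step in abstract_case:
        actions[step]()

if __name__ == "__main__":
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        run_case(page, ABSTRACT_CASE)
        browser.close()
```

Because only the adapter layer touches concrete selectors, changes in the application mainly affect this mapping, while the abstract test cases derived from the model remain stable.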

Looking ahead: potential and opportunities

Of course, this new tool is not the end of the line. There is still room for improvement: support for data-driven testing could be expanded, and additional test design techniques such as equivalence partitioning could be integrated. But the direction is right. If such tools are used more widely in the future, systematic test design could finally be given the status it deserves.

