Model-based testing sounds like the holy grail of quality assurance: efficiency, automation and maximum precision. But in practice, you stumble across unexpected hurdles. Perhaps you have also experienced how the shiny promises fade in reality. There is often a gap between theory and application that is bigger than expected. Why is that? Is model-based testing just another buzzword that doesn’t pass the practical test? Or are there ways to bridge the gap and exploit its full potential? Let’s delve into the world of model-based testing and shed light on the stumbling blocks and innovative approaches that offer hope.
In this episode, I talk to Matthias Hamburg about model-based testing. Matthias is an experienced expert in the software testing community and provides valuable insights into the problems and solutions of model-based testing. He reports on studies that show that test automation often does not deliver the desired results and explains the difficulties he has experienced in practice, such as insufficient modeling knowledge and gaps between modeling and test execution. Matthias introduces a new tool that aims to close these gaps and emphasizes the importance of no-code generation to make testers’ work easier. A fascinating conversation about the future of software testing!
“The World Quality Report shows that test automation often fails to achieve the business goals that were hoped for.” - Matthias Hamburg
Matthias Hamburg was Managing Consultant at Sogeti Deutschland GmbH until his retirement in 2019. His professional focus is on test analysis, test management and test process improvement. He continues to volunteer for the German Testing Board (GTB) and its umbrella organization ISTQB. Among other things, he publishes the Advanced Test Analyst syllabus and the standard glossary of test terms in English and German.
Model-based testing (MBT) promises efficiency and quality assurance at the highest level. In practice, however, MBT often fails to deliver the expected results. Studies such as the World Quality Report by Capgemini and Sogeti confirm this phenomenon: test automation often fails to achieve business objectives. CI/CD processes stutter instead of running smoothly, and the expected increase in efficiency does not materialize. There seems to be a gap between the brilliant theory and the sobering practice.
One of the main problems with model-based testing is the lack of professional training in modeling techniques. Imagine someone trying to play a complex piece of music without being able to read sheet music - the result will hardly be convincing. Similarly, a lack of training leads testers to build inefficient models. In addition, there is often no seamless transition from modeling to test execution: gaps open up between test design and implementation and make the process harder. Tools such as Selenium or Playwright help with execution, but they do not fully close these gaps.
But there is hope on the horizon. A new tool is entering the scene and addressing precisely these challenges. It initially generates abstract test cases from the model that can be reviewed manually - a kind of dress rehearsal before the curtain rises. Once a concrete application is available, the abstract test cases are executed against it automatically. This seamless transition from modeling to test execution works like well-oiled machinery in quality assurance. It is particularly noteworthy that the tool also reacts flexibly to changes in the software and adapts the test cases accordingly.
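The two-phase idea can be sketched in a few lines of Python. All names here (the step keywords, the adapter functions) are illustrative assumptions, not the actual tool's API: phase one produces human-readable abstract steps for manual review, and phase two binds each abstract keyword to concrete automation code once the application exists.

```python
# Hedged sketch of the two-phase approach; step names and the adapter
# are hypothetical, not taken from any specific MBT tool.

# Phase 1 output: abstract test steps with no concrete locators or URLs yet.
abstract_test = [
    ("open_page", "login"),
    ("enter", "username", "alice"),
    ("enter", "password", "secret"),
    ("click", "submit"),
    ("expect_state", "LoggedIn"),
]

def render(steps):
    """Render the abstract case as text for manual review."""
    return "\n".join(" ".join(str(part) for part in step) for step in steps)

def execute(steps, adapter):
    """Phase 2: run each abstract step through a concrete adapter."""
    for step in steps:
        adapter[step[0]](*step[1:])

# A toy adapter that only records what it would do; in practice each
# lambda would drive a real framework such as Selenium or Playwright.
log = []
adapter = {
    "open_page":    lambda page: log.append(f"GET /{page}"),
    "enter":        lambda field, value: log.append(f"type {value!r} into #{field}"),
    "click":        lambda button: log.append(f"click #{button}"),
    "expect_state": lambda state: log.append(f"assert state == {state!r}"),
}

preview = render(abstract_test)   # what a tester would review in phase 1
execute(abstract_test, adapter)   # what the tool would run in phase 2
```

The key design point is the separation: when the software changes, only the adapter (or the model) is updated, while the abstract test cases stay stable.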
This tool has already demonstrated its strengths in a pilot application. It made it possible to try out new methods and optimize the entire test process. The two-phase approach - first generating abstract test cases and reviewing them manually, then automating them - has proven particularly effective. It not only helps to identify errors early, but also improves overall quality assurance. A win-win situation, you could say.
Of course, this new tool is not the end of the line. There is still room for improvement: support for data-driven testing could be expanded, and additional test design techniques such as equivalence partitioning could be integrated. But the direction is right. If such tools are used more widely in the future, systematic test procedures could finally be given the status they deserve.
Model-based testing is an approach in which tests are created on the basis of formal models of the system under test. These models represent the expected behavior and are used to automatically generate test cases.
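To make this concrete, here is a minimal sketch of such a formal model and the test-case derivation. The domain (a login dialog) and all state and event names are invented for illustration:

```python
# Illustrative sketch: a tiny state-machine model of a login dialog and
# the derivation of abstract test cases from it. All names are made up.

# The model: current state -> {event: next state}
login_model = {
    "LoggedOut": {"enter_valid_credentials": "LoggedIn",
                  "enter_invalid_credentials": "Error"},
    "Error":     {"retry": "LoggedOut"},
    "LoggedIn":  {"logout": "LoggedOut"},
}

def derive_test_cases(model):
    """Derive one abstract test case per transition in the model."""
    cases = []
    for state, transitions in model.items():
        for event, target in transitions.items():
            # Each case reads: given state X, when event E, then expect state Y.
            cases.append({"given": state, "when": event, "then": target})
    return cases

cases = derive_test_cases(login_model)
```

Because the test cases are computed from the model rather than written by hand, updating the model automatically updates the test set.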
Model-based testing enables efficient test case generation, improves test coverage and reduces manual effort. It helps to detect errors at an early stage and increase the quality of the system.
In contrast to traditional methods based on manual test case creation, model-based testing uses formal models to automatically generate test cases. This leads to more systematic and comprehensive test coverage.
Various models such as state machines, sequence diagrams, activity diagrams or decision tables are used. The choice of model depends on the specific requirements and the system to be tested.
By introducing modeling tools and training the test team, model-based testing can be gradually integrated into existing processes. It is important to analyze existing test cases and create models accordingly.
There are various tools such as Tosca MBT, Conformiq or IBM Rational Test Workbench that support modeling and automatic test case generation.
Challenges can include the initial effort required to create the model, the complexity of the models and the need for expert knowledge within the team.
By systematically modeling all possible system states and transitions, tests can be generated that ensure comprehensive coverage and thus reduce the risk of undetected errors.
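A common way to operationalize this is an "all transitions" coverage criterion: generate one test path per transition, each starting from the initial state. The following sketch uses an invented three-state model and a simple breadth-first search; it is one possible strategy, not the only one:

```python
# Hedged sketch: covering every transition of a state-machine model once.
# The model (a media-player-like lifecycle) is illustrative.
from collections import deque

model = {
    "Idle":    {"start": "Running"},
    "Running": {"pause": "Paused", "stop": "Idle"},
    "Paused":  {"resume": "Running", "stop": "Idle"},
}

def all_transition_tests(model, start):
    """One test path (list of events) per transition in the model."""
    # BFS: shortest event sequence from the start state to every state.
    paths = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for event, target in model.get(state, {}).items():
            if target not in paths:
                paths[target] = paths[state] + [event]
                queue.append(target)
    # Each test: reach the transition's source state, then fire the transition.
    return [paths[source] + [event] for source in model for event in model[source]]

tests = all_transition_tests(model, "Idle")
```

For this model the generator emits five paths, one per transition, so no transition can be forgotten by accident.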
Yes, model-based testing can be used in agile environments. It supports rapid iterations and adaptations, as models can be easily updated and new test cases can be generated automatically.
Model-based testing enables the automatic generation of test scripts from models, which increases the degree of automation and reduces the effort required for manual test case creation.