In this episode, we discuss the challenges and solutions when testing forms. How can a tester keep up with an immense variety of forms and input fields? We discuss how this task was successfully mastered through automation and model-based approaches. The focus is on the use of data models, validation and calculation rules, and the implementation of automated tests to significantly reduce manual effort. Finally, we discuss how crucial it is to detect and correct errors at an early stage in order to ensure high quality.
“We had an insanely high manual testing effort, which ultimately meant that we couldn’t keep up with really covering all the variations.” - Lilia Gargouri, Simon Bergler
Lilia Gargouri, an experienced software developer and Head of Quality Assurance at mgm technology partners, brings extensive technical knowledge and outstanding creative problem-solving skills. Her knack for creative testing from a user’s perspective, combined with her deep expertise in end-to-end test automation, makes her a driving force behind efficient quality assurance, even for complex and long-lived enterprise software.
Simon Bergler began his career as a natural scientist with a degree in general biology and a master’s degree in virology. After three years in project management in biotechnology, he switched to the IT sector as a career changer. There he started at mgm technology partners as a manual QA tester and shortly afterwards successfully switched to test automation. Simon now combines his scientific know-how with the IT skills he has acquired to deliver high-quality solutions.
Highlights of this Episode:
In this podcast episode, Richie, Lilia and Simon discuss the challenges and solutions when testing a variety of form variations. They talk about the importance of automation and the use of modeling tools to minimize testing efforts.
Hello and welcome to a new episode of the ‘Software Testing’ podcast. I’m Richie, your host, and today I’m pleased to bring you an exciting discussion from German Testing Day 2024. Today’s topic is testing forms - a seemingly never-ending task given the multitude of variants and input fields. Our guests, Lilia and Simon, have been working intensively on this challenge. In this episode, they share their experiences and solutions for mastering the form jungle.
Lilia and Simon were faced with the task of testing a large number of forms that had to be adapted for every federal state. The forms came from a tax context and therefore carried specific requirements. Simon explains: ‘Essentially, we had to deal with a large number of forms at the same time.’ With over 255 form variations per federal state, each with more than 100 fields, it quickly became clear that manual testing would not be enough. The complexity was enormous and called for innovative approaches.
The first realization was that manual testing alone was not enough. ‘We had an insanely high manual testing effort,’ recalls Simon. To counteract this, they turned to automation. A crucial step was the use of a data model together with a UI model based on the A12 low-code platform. Lilia describes: ‘The subject matter experts first cast the entire subject matter of the form into a data model.’ These models enabled them to generate test data automatically, ensuring high test coverage.
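To illustrate the idea, here is a minimal sketch of how a declarative field model can drive automatic test data generation. The model format, field names, and value ranges are invented for illustration and are not the A12 data model itself.

```python
# Hypothetical sketch: a tiny declarative field model from which test data
# can be generated automatically. Fields and rules are illustrative only.
import random

FIELD_MODEL = {
    "tax_year":      {"type": "int",  "min": 2020, "max": 2024},
    "federal_state": {"type": "enum", "values": ["BY", "BW", "HE"]},
    "income":        {"type": "int",  "min": 0, "max": 250_000},
}

def generate_record(model: dict) -> dict:
    """Generate one record by sampling each field within its declared constraints."""
    record = {}
    for name, spec in model.items():
        if spec["type"] == "int":
            record[name] = random.randint(spec["min"], spec["max"])
        elif spec["type"] == "enum":
            record[name] = random.choice(spec["values"])
    return record

if __name__ == "__main__":
    for _ in range(3):
        print(generate_record(FIELD_MODEL))
```

In a real setup, the model would of course also carry the validation and calculation rules mentioned above, so that generated records respect cross-field dependencies rather than being sampled independently.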
Another tool in Lilia and Simon’s arsenal was an SMT solver for generating valid test data sets. ‘The SMT solver includes the implementation of algorithms for field assignment,’ Lilia explains. This automated generation ensured that their tests were always based on up-to-date data, which was particularly important because the requirements changed frequently. They also used model-based approaches to run automated UI tests, which allowed them to minimize maintenance effort and react quickly to changes.
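As a rough illustration of the approach, the sketch below uses the Z3 SMT solver (via its Python bindings, `pip install z3-solver`) to find one field assignment that satisfies a handful of validation rules. The fields and rules are made-up examples, not the project’s actual constraints or tooling.

```python
# Illustrative sketch: ask an SMT solver for a field assignment that satisfies
# a set of form validation rules, yielding one valid test data set.
from z3 import Int, Bool, Solver, Implies, sat

income = Int("income")
deduction = Int("deduction")
is_married = Bool("is_married")

solver = Solver()
solver.add(income >= 0, income <= 250_000)         # plausible value range
solver.add(deduction >= 0, deduction <= income)    # deduction cannot exceed income
solver.add(Implies(is_married, income >= 10_000))  # a cross-field validation rule

if solver.check() == sat:
    model = solver.model()
    print({d.name(): model[d] for d in model.decls()})  # one valid assignment
```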
The results of their efforts were impressive. ‘If we add everything up and run everything once, we’re looking at a total test duration of 12 to 14 hours for one federal state,’ reports Simon. Done manually, the same coverage would have taken several hundred hours. Automation not only saved time but also significantly increased the quality of their tests. They also developed a library for UI tests with smart IDs, which further streamlined the process.
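A minimal sketch of the smart-ID idea looks like this: form fields are located via a stable test-ID attribute rather than brittle CSS paths, so the UI tests survive layout changes. Selenium and the `data-testid` attribute are assumptions chosen for illustration, not the team’s actual library.

```python
# Sketch: locate form fields by stable "smart" test IDs instead of fragile selectors.
from selenium import webdriver
from selenium.webdriver.common.by import By

def fill_field(driver, smart_id: str, value: str) -> None:
    """Find a field by its test-ID attribute and type a value into it."""
    element = driver.find_element(By.CSS_SELECTOR, f'[data-testid="{smart_id}"]')
    element.clear()
    element.send_keys(value)

driver = webdriver.Chrome()
driver.get("https://example.org/form")  # placeholder URL
fill_field(driver, "income", "42000")
fill_field(driver, "federal_state", "BY")
driver.find_element(By.CSS_SELECTOR, '[data-testid="submit"]').click()
driver.quit()
```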
Although much progress has been made, there is still room for improvement. Lilia reports that they are currently working on a solution to automatically check PDF content - an important aspect for industries such as insurance and e-government. They are also working on an accessibility library to ensure that all applications are accessible. These continuous improvements demonstrate the team’s commitment to quality and efficiency in the testing process.
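One way such an automated PDF check could look, sketched here with the pypdf library, is to extract the text of a generated PDF and assert that the expected values appear. The file name and expected strings are placeholders, not artifacts from the project.

```python
# Hedged sketch of an automated PDF content check using pypdf (pip install pypdf).
from pypdf import PdfReader

def pdf_contains(path: str, expected: list[str]) -> bool:
    """Return True if every expected string occurs somewhere in the PDF's text."""
    text = "".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return all(snippet in text for snippet in expected)

if __name__ == "__main__":
    assert pdf_contains("tax_form.pdf", ["Tax year 2024", "42,000"])
```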