
Software testing in the future - Interview with Wolfgang Platz


After successfully building up a test consulting company in the early 2000s, Wolfgang Platz founded Tricentis in 2007. With over 20 years of technology experience and extensive knowledge in the field of software testing, he has significantly shaped Tricentis, both on the product side and on the side of the associated software testing methodology.
Today, he is responsible for the strategy and vision of Tricentis with the goal of making Continuous Testing for Enterprise DevOps a reality.

What challenges do you think software testing will face in the future?

The hunger for software

From my point of view, it is as follows. The hunger for software is dominant, and it is massive. There are forecasts from IDC saying that 40 percent of all code will be written in the next year alone. That shows how extreme this hunger for software is. In comparison, the increase in additional software development resources coming out of universities worldwide is around 20 percent. Per annum, we want 35 or 40 percent more software, but there are only 25 percent more developers. This raises the question of how that is supposed to work.

Microsoft predicts that the future of software development will be less about real development and more about parameterization and configuration at an abstract level, and about the phenomenon of the citizen developer. If you look at Google today, you also see no-code and low-code development growing massively, which creates immense additional momentum. What is interesting is what is actually being done with low-/no-code. Today, the applications developed there are very simple; the moment you want a real system, you cannot avoid development.

At the same time, we see that enterprise applications are all trying to meet this enormous need for individualization by building massive configuration layers that allow no- and low-code development on top of their kernel. Many companies are very active in this area and are trying to simplify things massively; they have their own programs to support low- and no-code development on their base platforms, not least because this is the key to offering SaaS, which is a must given the strong demand for individual development. So on the one hand we have enterprise systems that keep pushing the coding threshold upwards, meaning you can do more and more with no- and low-code. On the other hand, we have the low-/no-code platforms that now exist, which are suitable for solving simple things without coding.

Low- and no-code

In my view, two things will happen that mainly relate to back-office operations. This used to be done with Excel; now it is put together with low- and no-code, and all of these are very simple processes. Today, the complexity barrier of these in-house developments lies below the test level. Of course, a tester will say that everything has to be tested, but we know that simple things are not tested.

We will see two things, though. On the one hand, the complexity will increase, and on the other hand, the individual solution approaches will combine. Suddenly there is a complexity that requires testing. At that moment, alongside the citizen developers there are citizen testers, and we are dealing with a great need for no-code testing, which will follow no-code development with a certain time delay. A no-code developer will not suddenly start writing code for his test automation. So the great momentum of citizen development that we will see in the next few years will spill over into the test discipline.

From a tooling perspective, there will therefore be two worlds. There will still be the coding world that exists today, with the various frameworks where people try their hand at Selenium and Appium, for example. In addition, there will be a larger, more abstracted business-definition world in which people carry out software testing without having to go into the code. That is my view on the development of skills.

Pendulum movements in software testing

In terms of organization, we see a pendulum movement in our customer base. Ten years ago, everyone wanted to have a TCoE (Test Center of Excellence). That has been over for five years at the latest. With a time lag, people realized that the TCoE approach was leading nowhere and that a large pool of manual testers working through orders one by one no longer fits. There were all kinds of problems, and the response from many customers was that the TCoE had to be dissolved.

Then came the idea of incorporating everything into development, making a dedicated software test unnecessary. However, the question then arose as to how this works in practice: in a large organization, it only works for the first three or five teams. Agile development adopted it and achieved great results at first. Then it was scaled up, and suddenly there was a rude awakening, as the productivity increases came nowhere near the expectations. How was that possible? When you scale, two things happen. You find out that you do need the higher levels of testing after all: integration, system integration and maybe even user acceptance. It is not just about an isolated application, but about an entire system network. The question is who does it. The agile teams all think only up to the limits of their agile spectrum, and that is in no way a reproach, because they are set up that way.

So we come to the Hegelian thesis-antithesis-synthesis movement, whose synthesis is something we know as a digital or linked TCoE. A very lean central body remains, which in some cases specifies or recommends standards for the test approach, test methodology and automation, and which exercises real governance over the presentation of results. But it is a central body that works very closely with the teams. This is a recommendation we have made that works well for our customers.
For the higher test levels there are dedicated agile test teams that bring together the artifacts from the lower levels and aggregate them, so that we get a higher-level test. That is the organizational form I see. The wrong approach would be to hand everything over to the agile teams in the hope that it will be solved there. The fact is that this does not work. Developers are progressive animals, not regressive animals, which is why they do not like writing unit tests. Push forward. Make things new.

All of this is expressed in interesting figures. We conducted a survey among Swiss banks which showed that they create code coverage for release one, i.e. for new tests, and that this coverage gradually decreases. The reason is that nobody does maintenance, so the coverage you get from unit tests evaporates over time. This means that de facto you have to do something that seems inefficient: the tests are picked up again at a higher integration level. But you have the huge advantage that developers no longer have to do this. Now you can embed people who see software testing as a task and not as an evil; they can maintain and sustain these tests.

The microservice architecture helps a lot here, because you always have an API wrapped around the service. This allows a business-logic test to be set up at the API level, which works much better in maintenance. In our experience, people do write these contract tests. They are prepared to write and maintain them at this level.
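To make the idea of an API-level business-logic ("contract") test concrete, here is a minimal, self-contained sketch. The `transfer` endpoint, its payload and the contract rules are assumptions for illustration, not a Tricentis or real banking API; in practice the test would call the microservice over HTTP.

```python
# Sketch of an API-level contract test, as described above.
# transfer() stands in for a hypothetical microservice endpoint; in a real
# setup the test would call the service over HTTP instead.

def transfer(account: str, amount: int) -> dict:
    """Hypothetical business-logic endpoint: debit an account."""
    if amount <= 0:
        return {"status": "error", "reason": "amount must be positive"}
    return {"status": "ok", "account": account, "debited": amount}

def check_contract(response: dict) -> bool:
    """The contract: every response carries a 'status' field, a successful
    response names the account and the debited amount, and an error
    response gives a reason."""
    if "status" not in response:
        return False
    if response["status"] == "ok":
        return {"account", "debited"} <= response.keys()
    return "reason" in response

# Contract tests at the API level survive UI changes, which is one reason
# (per the interview) teams are willing to write and maintain them.
assert check_contract(transfer("DE-1234", 100))
assert check_contract(transfer("DE-1234", -5))
```

Because the check targets the API wrapper rather than the user interface, the test keeps working as long as the service honors its contract, which is exactly the maintenance advantage described above.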

One-Test-Definition

I have a third topic that I wanted to talk about, and it interests me personally. The topic is called the one-test definition. What do I mean by that? We all differentiate between functional and non-functional tests. From the non-functional tests, I would like to pick out the two that involve the most effort over time, namely performance and security.

My thesis is the following. An application exists, or has the right to exist, because it offers a certain functionality that no other application has; it solves some problem in a way that no other application does. Why would I need it at all if another one already did exactly that? I postulate that someone who builds a business application does not build redundant functionality. Why would you? If you do, it is because you do not know that it already exists somewhere else. If I build new functionality, this means I will always build new test cases that have never existed before, because the functionality is also new. The functional test case portfolio is therefore always individual to a certain extent. If two completely identical test case portfolios could be used, then the applications would be the same, so why build both? Finding number one: functional test portfolios are always application-specific.

I keep harping on this because non-functional tests are not necessarily application-specific. Look at security testing today: you do not want anyone to be able to carry out security intrusions or attacks. These attacks are all highly generic, and you do not want to see them in any application. It does not matter whether it is a web check-in or a banking app; you want the application to be secure, and the required level of security can be described generically. It is very similar with performance. For performance, we know that 1.3 seconds is the limit; we do not want to wait longer than that for a website.
We know that there is an application-specific parameterization specifying the concurrency with which the use cases must be run: 10,000 people sit on one use case at the same time and 5,000 on another. However, this is not specific to the functionality of the application. It is a parameter set by the environment, for instance because I have 10,000 employees; the use case itself is not affected by it.

What does that mean? With these thoughts in mind, we ask ourselves why we cannot take the functional navigation through the application as a basis and enrich it with these generic requirements in order to derive a security or performance test. The difficulty in a security test is not defining the requirements, but navigating through the application and trying to find the bad intrusions in that context, and this is a task the functional test has already solved. With performance, the requirement does not change the use case as experienced by the user; I just want to make sure that it works 10,000 times at the same time. So I have to enter different data and position environment data accordingly. But the functional navigation through the application, which we have already built, allows the load to be generated in the background. So why can I not reuse a functional definition for the non-functional area and enrich it there with the generic parts? That is what I call a one-test definition.

We have now implemented this at Tricentis for the first step, performance. You take the functional test portfolio, add the communication track in the background and then enrich it with a wide variety of data constellations so that you can put 10,000 users on it at the same time. The flow, however, remains that of the functional application, and updates and maintenance come from the functional application at the touch of a button. Initially, you still have to enrich the functional flow with the specifics of this test type, which are generic.
Once you have done this, the update and maintenance can always be carried out at the push of a button. This gives you something new and the moment your functional test goes green, you have achieved two things. Firstly, the application is reasonably stable, otherwise it would not survive the smoke test. Secondly, the functionality script is maintained. I can now build on this maintained situation and push the update up to the performance and security test. I run this immediately and at least have a smoke performance test and a smoke security test. This can now also be done by people who don’t need to be experts as long as everything is green. If it goes red, you need someone to look inside. This is possible at both the security and performance level. But you can raise the standard of the work to another level and achieve continuity in quality assurance from a non-functional perspective as well. That is the idea of one-test definition.
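The one-test-definition idea can be sketched in a few lines: define the functional flow once, then reuse it, enriched only with generic non-functional parameters (concurrency, response-time budget). Everything here is a hypothetical stand-in — `web_checkin`, the user names and the numbers are assumptions for illustration, not the Tricentis implementation; the 1.3-second budget echoes the limit mentioned in the interview.

```python
# Sketch: one functional flow reused as both a functional smoke test and
# a smoke performance test (the "one-test definition" idea).
from concurrent.futures import ThreadPoolExecutor
import time

def web_checkin(passenger: str) -> str:
    """Functional flow (stand-in): in a real setup this would drive the
    application, e.g. through its API."""
    time.sleep(0.01)  # simulated response time
    return f"boarding pass for {passenger}"

def smoke_functional() -> bool:
    """Functional smoke test: one user, one pass through the flow."""
    return web_checkin("Alice").startswith("boarding pass")

def smoke_performance(users: int = 50, budget_s: float = 1.3) -> bool:
    """The same flow, enriched with generic non-functional parameters:
    concurrency (users) and a response-time budget."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        start = time.monotonic()
        results = list(pool.map(web_checkin,
                                (f"user{i}" for i in range(users))))
        elapsed = time.monotonic() - start
    # The flow is unchanged; only data and load are added on top of it.
    return (all(r.startswith("boarding pass") for r in results)
            and elapsed < budget_s)

assert smoke_functional()
assert smoke_performance()
```

The point of the sketch is that `smoke_performance` contains no test logic of its own: when the functional flow changes, only `web_checkin` is maintained, and the performance variant follows "at the push of a button", as described above.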

What do you think testers and test managers will need to learn and do in 2022 to be fit for future changes?

In the future, it will be much more important to understand how DevOps works than it is today. As a test manager, you also need to understand how Docker and Kubernetes basically work. How are they used in principle? That wasn’t necessary in the past. Tests tended to be singular events, which is no longer the case today.

Otherwise, I still see a frightening lack of knowledge among test managers when it comes to methodology. The mistakes of 20 years ago are still being made. The mantra is still to automate everything, and nobody asks what additional coverage such a test actually generates. People are still not aware of the significance of the transition from an intuitive test approach to a methodical one. It is important to know when we do what: when do we need a test data design, when a test case design? I recommend this basic test methodology to everyone.

Unit Testing

For me, the unit test is the most essential of all test stages. It is also the first thing I look at when I start a new consulting project. Why? This...
