
The Fight Against Technical Debt - Richard Seidl

Written by Richard Seidl | May 31, 2013 11:00:00 PM

As part of an agile development team, testers have a special responsibility. Among other things, they must prevent technical debt from getting out of hand.

Technical Debt

“Technical debt” is a term for inadequate software quality, expressed in business management terms. It is meant to show managers that shortcuts and failures in software development have negative consequences that produce costs later on. The word “debt” is a reminder that it has to be paid off at some point. The amount of a project’s technical debt can be measured, either as an absolute cost figure or relative to the development costs of the project in question. The term was coined by Ward Cunningham at the OOPSLA conference in 1992. In Cunningham’s original sense, “technical debt” is “all the not quite right code which we postpone making it right”.

Employees of CAST Software Limited in Texas have proposed a formula for calculating this debt. The formula maps problem types to effort and effort to money. Its basis is a database of experience drawn from numerous projects, which shows the average effort required to eliminate particular software defects. Refactoring a method, for example, can cost half a day; at an hourly rate of US$70, that is $280. Adding a missing exception handler might only take an hour, but implementing the necessary security checks in a component could take several days. To their credit, the people at CAST have identified, classified and assigned effort figures to a large number of defect types.

Examples of these defect types are:

  • embedded SQL queries
  • empty catch blocks
  • overly complex conditions
  • missing comments
  • redundant code blocks
  • inconsistent naming
  • loops without a termination safeguard
  • too deeply nested code

The debt for such code defects is the number of occurrences of each defect multiplied by the hours needed to rectify it and by the hourly rate.
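As an illustration of this calculation (not part of the original CAST model), here is a minimal sketch with entirely hypothetical defect counts and effort figures, using the US$70 hourly rate mentioned above:

```python
# Hypothetical illustration of the debt formula described above:
# debt = number of occurrences x average fix effort (hours) x hourly rate.
HOURLY_RATE = 70.0  # US$ per hour, figure taken from the text

# Assumed defect counts and average repair efforts - illustrative only
defects = {
    "embedded SQL query":         {"occurrences": 12, "hours_to_fix": 2.0},
    "empty catch block":          {"occurrences": 30, "hours_to_fix": 1.0},
    "overly complex condition":   {"occurrences": 8,  "hours_to_fix": 4.0},
    "method needing refactoring": {"occurrences": 5,  "hours_to_fix": 4.0},  # half a day
}

total_debt = 0.0
for name, d in defects.items():
    cost = d["occurrences"] * d["hours_to_fix"] * HOURLY_RATE
    total_debt += cost
    print(f"{name:28} {cost:10.2f} US$")

print(f"{'total technical debt':28} {total_debt:10.2f} US$")
```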

In addition to these statically recognizable defects, there are also the errors that come to light during testing. Due to time constraints, not all of these errors are fixed before a release, as long as they do not prevent the software from running. They include errors in the output, e.g. incorrectly calculated amounts and shifted texts, as well as errors in the input validation. There are also performance problems such as excessively long response times and time-out interruptions. Users can live with such shortcomings temporarily, but at some point they become annoying and must be rectified before the final release. The cost of fixing them is part of the project debt and can be calculated from the effort involved. Bill Curtis estimates the median debt level for agile projects to be $3.61 per statement. This is the absolute measure of debt. It is also possible to measure technical debt relative to the cost of development.
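A similar sketch for the absolute and relative measures; the code size, development cost and total debt below are invented figures, and only the $3.61-per-statement median comes from Curtis:

```python
# Absolute vs. relative technical debt - all input figures are invented.
statements = 50_000           # assumed size of the code base in statements
development_cost = 400_000.0  # assumed development cost in US$
total_debt = 180_500.0        # assumed total debt from a calculation like the one above

debt_per_statement = total_debt / statements           # absolute measure
relative_debt = total_debt / development_cost * 100.0  # relative to development cost

print(f"debt per statement: {debt_per_statement:.2f} US$")  # compare with Curtis's median of 3.61 US$
print(f"relative debt:      {relative_debt:.1f} % of development cost")
```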

Timely detection of problems to avoid technical debt

One advantage of agile teams is constant feedback, and that is why testers are part of the team. When it comes to preventing a decline in quality, the testers in an agile team have more to do than testing alone. They ensure the quality of the product on the spot, while it is being developed, through a series of timely control measures: reviews of the user stories, checking the code, accepting the unit test results and continuous integration testing.

When reviewing the stories, the aim is to analyze, supplement and, if necessary, improve the story texts. The product owner will often overlook something or explain it inadequately. The testers should draw the product owner’s attention to this and work with them to fill in the missing points and clarify the inadequate passages.

When checking the code, the testers can evaluate conformity with the coding rules, compliance with the architecture guidelines and the design of the code. Many omissions, such as missing security checks and inadequate error handling, can only be identified in the code. Automated code analysis is a good way to achieve this rapid feedback. When accepting the unit test results, the testers must ensure that there are enough test cases of good quality and that sufficient module test coverage is achieved. It is not necessarily their job to do the unit test themselves, although there are agile projects where this is done.
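A minimal sketch of the kind of automated code check meant here, flagging empty catch blocks in Java sources. A real project would rely on an established static-analysis tool; this only illustrates the principle of giving developers fast feedback on rule violations, and the file layout is an assumption:

```python
# Scan Java source files under a given root directory and flag empty
# catch blocks - one of the defect types listed above.
import re
import sys
from pathlib import Path

EMPTY_CATCH = re.compile(r"catch\s*\([^)]*\)\s*\{\s*\}")

def check_file(path: Path) -> list[str]:
    """Return one finding per empty catch block in the given file."""
    source = path.read_text(encoding="utf-8", errors="ignore")
    return [f"{path}: empty catch block" for _ in EMPTY_CATCH.finditer(source)]

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    findings = [f for java in root.rglob("*.java") for f in check_file(java)]
    print("\n".join(findings) or "no empty catch blocks found")
```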

The agile testers should endeavor to always be able to give the developers feedback quickly. “Continuous integration” makes this possible. The tester maintains an integration test framework into which the new components are inserted; the existing components are already in place and are supplemented daily with the new ones. Test automation naturally plays a decisive role here. With the test tools, the regression test can be repeated daily and the functional test of the latest components run alongside it. Any problems that arise can then be reported back to the developers immediately. This is the decisive advantage over conventional, bureaucratic quality assurance, where it often took weeks for error messages and defect reports to reach the developers, so that valuable developer hours were lost.
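A sketch of what such a daily integration run could look like, assuming a pytest-based test suite; the directory layout is hypothetical and the notification step is only indicated:

```python
# Daily continuous-integration run: execute the existing regression suite
# plus the functional tests for the newest components and report failures
# back to the team immediately.
import subprocess
import datetime

SUITES = ["tests/regression", "tests/new_components"]  # hypothetical layout

def run_suite(path: str) -> bool:
    """Run one pytest suite; return True if it passed."""
    result = subprocess.run(["pytest", "-q", path])
    return result.returncode == 0

if __name__ == "__main__":
    stamp = datetime.date.today().isoformat()
    failed = [s for s in SUITES if not run_suite(s)]
    if failed:
        # In a real setup this would notify the developers (mail, chat, CI dashboard).
        print(f"[{stamp}] FAILED suites: {', '.join(failed)} - feedback to developers")
    else:
        print(f"[{stamp}] all suites green")
```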

This is no longer the case with agile development; such idle times are gone. Instead, the testers must keep pace with the developers at all times. As a result, testers need to be familiar with the development environment, have powerful tools at their disposal and have a good relationship with the developers. If these three conditions are not met, the testers will not be able to deliver the benefits expected of them, no matter how well they master the testing techniques. Agile testing demands more from testers than was previously the case.

The timely detection of problems and rapid feedback to the developers are the main benefits of agile testing, and they must be guaranteed. They are also the reason why the testers should work closely with the developers. Whether they really need to be physically co-located is another question; opinions differ here.

What is “done”?

Johanna Rothman and Lisa Crispin addressed this issue at the Belgium Testing Days in 2012. The question is: what is “done”? According to Johanna Rothman, this is a question the whole team has to answer. However, the testers should initiate and drive the discussion, and they should feed it with arguments for more quality. Rothman claims: “you have to get the team thinking about what is done. Does it mean partially done, as in it is ready for testing or fully done, as in it is ready for release?” A certain level of quality is required for an interim release; a completely different level of quality is required to declare the product finally finished. There is a long way between these two states. The testers must ensure that development continues until the target state is reached, and they must convince the product owner that this is necessary. Otherwise, the problems are simply postponed to maintenance, as used to be the case with conventional development projects. Rothman therefore suggests using Kanban progress boards to show the relative quality status of individual components. This allows everyone to see how far the components are from the desired quality level. The team actually needs two progress boards, one for the functional status and one for the quality status.

The functional status of a software product is easier to assess than the quality status. Whether a function is available or not is plainly visible; the quality status is not. You only know how many bugs are still in the software once you have tested all of its functions, you can only judge how good the code is once you have analyzed it in detail, and you can only judge how good the overall system is once you have used it for a while. The best indicators of software quality are the number of errors found so far relative to the functional test coverage and the number of code defects relative to the number of code statements tested. There should be target values for both measures, which the testers propose and agree with the other team members. This allows the position of each component on the Kanban board to be fixed and the distance between the actual state and the target state to be visible to everyone in the team.
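A small sketch of how these two indicators and their target values could be computed for a component; all figures and thresholds are invented for illustration:

```python
# Two quality indicators per component: errors found relative to functional
# test coverage, and code defects relative to the statements analysed.
from dataclasses import dataclass

@dataclass
class ComponentQuality:
    name: str
    errors_found: int        # errors found in testing so far
    functions_tested: int    # functions covered by the functional test
    functions_total: int
    code_defects: int        # defects reported by code analysis
    statements_tested: int   # statements covered by the analysis

    def error_rate(self) -> float:
        """Errors found, extrapolated to full functional coverage."""
        coverage = self.functions_tested / self.functions_total
        return self.errors_found / max(coverage, 1e-9)

    def defect_density(self) -> float:
        return self.code_defects / self.statements_tested

# Hypothetical target values agreed by the team
MAX_ERROR_RATE = 10.0        # errors at full coverage
MAX_DEFECT_DENSITY = 0.02    # defects per statement

comp = ComponentQuality("billing", errors_found=6, functions_tested=40,
                        functions_total=50, code_defects=35, statements_tested=2400)
done = comp.error_rate() <= MAX_ERROR_RATE and comp.defect_density() <= MAX_DEFECT_DENSITY
print(f"{comp.name}: error rate {comp.error_rate():.1f}, "
      f"defect density {comp.defect_density():.3f}, quality target met: {done}")
```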

Lisa Crispin points out that software quality is the ultimate measure of agile development. Functional progress must not be achieved at the expense of software quality. After each release - every 2 to 4 weeks - quality should be measured again. If it is not sufficient, it can be improved in the course of the next release alongside functional development. If it is too poor, the next release must be a revision release in which the errors are removed and the software is refactored. Crispin even allows for a separate quality assurance team that works alongside the development team, tracking the quality of the software created and reporting back to the development team, although this would bring back the old separation between development and testing.

Johanna Rothman believes that the testers must have a say in what “done” means, right from the start of the project. “To be done also means that the quality criteria set by the team are met”. This means that these criteria must be accepted and practiced by everyone involved. Everyone in the team must be aware of their responsibility for quality and play their part. “Everybody in the team needs to take responsibility for quality and for keeping technical debt at a manageable level. The whole team has to make a meaningful commitment to quality”. Although the quality of the software is a matter for the team as a whole, the testers in the team have a special responsibility. They must ensure that technical debt is contained and reduced.