
Measuring and Testing Applications

Application Value Assessment

Measuring, testing and assessing the quantity, quality and complexity of application systems provides a sound basis for decisions on IT strategy. Particularly for legacy systems with a large, historically grown code base and a wide variety of languages, measurement yields data that makes different further development scenarios (maintenance, migration, refurbishment, etc.) comparable and thus assessable. To achieve this, however, the appropriate key figures and metrics must be selected from the very large number available.

Goals of a measurement project

Typical objectives pursued with a project to measure application systems are:

  • Determining the size, complexity and quality of a system’s code
  • Building a metrics database for ongoing quality assurance
  • Identifying and evaluating alternative further development strategies

To fulfill the first objective, measuring the system, the source code of the application must be structured and partitioned in such a way that a meaningful comparison of the measurement results is possible. This can be done, for example, by language (COBOL, PL/I, C++, etc.) or by technical area. The individual areas can then be compared with each other in terms of size, quality and complexity, and anomalies can be examined in more detail. Measurement is not limited to program code; databases and user interfaces can also be measured.
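
As a minimal illustration of this structuring step, the following sketch (our own; the extension-to-language mapping and the path are assumptions, not part of any particular tool) groups source files by language and reports the non-blank lines of code per area:

    # Minimal sketch: partition a code base by language via file extensions,
    # then report non-blank lines of code (LOC) per area so the areas can be
    # compared. The mapping and the path are illustrative assumptions.
    from collections import defaultdict
    from pathlib import Path

    LANGUAGES = {".cbl": "COBOL", ".pli": "PL/I", ".cpp": "C++", ".py": "Python"}

    def measure_size(root: str) -> dict[str, int]:
        loc_per_language: dict[str, int] = defaultdict(int)
        for path in Path(root).rglob("*"):
            language = LANGUAGES.get(path.suffix.lower())
            if language and path.is_file():
                lines = path.read_text(errors="ignore").splitlines()
                loc_per_language[language] += sum(1 for l in lines if l.strip())
        return dict(loc_per_language)

    print(measure_size("./src"))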

The creation of a metrics database enables ongoing measurement of the system over time. This makes it possible to observe whether, for example, the overall quality has improved or deteriorated after changes or extensions to the system. Ideally, this ongoing measurement is embedded in the quality assurance process.
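
Such a database can be as simple as one table keyed by component, metric and measurement date. The sketch below (the schema and all names are our own assumptions) stores values in SQLite so that trends can be queried later:

    # Sketch of a metrics database in SQLite: one value per (component, metric,
    # date), so quality trends can be queried over time. Schema is assumed.
    import sqlite3
    from datetime import date

    conn = sqlite3.connect("metrics.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS measurement (
        component TEXT, metric TEXT, value REAL, measured_on TEXT)""")

    def record(component: str, metric: str, value: float) -> None:
        conn.execute("INSERT INTO measurement VALUES (?, ?, ?, ?)",
                     (component, metric, value, date.today().isoformat()))
        conn.commit()

    record("billing", "cyclomatic_complexity", 42.0)  # one measurement run
    for row in conn.execute(
            "SELECT measured_on, value FROM measurement "
            "WHERE component = 'billing' ORDER BY measured_on"):
        print(row)  # -> the metric's trend over successive runs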

The measurement results provide a uniform basis for evaluating various further development scenarios and estimating their cost, and can thus inform the future IT strategy. Possible scenarios are:

  • Annual maintenance of the system
  • New development of the system
  • Migration
  • Refurbishment
  • Encapsulation

Selection of metrics

A prerequisite for every measurement project is the selection of suitable metrics; this selection is also the first step in the ISO/IEC 9126 measurement process. However, the literature provides little guidance on which metrics to select. Software systems are complex constructs with many different measurable properties; in his book “Software Complexity Metrics”, Horst Zuse alone identified more than 300 metrics. Each metric measures a different property: the McCabe metric, for example, measures internal control-flow complexity, the Halstead metric measures language complexity, and the function point metric measures the interactions between a system and its environment. Which metrics are most suitable ultimately depends on the objective of the measurement project and the type of system.
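
As a simplified illustration of one of these metrics, the sketch below approximates the McCabe metric (cyclomatic complexity) for Python code as the number of decision points plus one; real measurement tools cover far more language constructs:

    # Simplified McCabe sketch: cyclomatic complexity ~ decision points + 1.
    # The node list is deliberately incomplete; real tools handle more cases.
    import ast

    DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

    def cyclomatic_complexity(source: str) -> int:
        tree = ast.parse(source)
        decisions = sum(isinstance(n, DECISION_NODES) for n in ast.walk(tree))
        return decisions + 1

    print(cyclomatic_complexity(
        "def f(x):\n"
        "    if x < 0:\n"
        "        return -x\n"
        "    for i in range(x):\n"
        "        if i % 2:\n"
        "            x += i\n"
        "    return x\n"))  # -> 4: three decision points plus one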

Measurement projects carried out by one of the authors over the last 20 years, covering a wide variety of systems, have yielded a compilation of size, complexity and quality metrics that has proven its worth many times over in the analysis and evaluation of application systems. It includes complexity metrics such as data complexity, control flow complexity and language complexity, as well as quality metrics such as those for measuring portability, maintainability, conformity and testability.

The measuring process

The measurement process for measuring, testing and evaluating the system typically consists of the following steps (a minimal end-to-end sketch follows the list):

  1. Selection of metrics
  2. Structuring of the system to be measured
  3. Configuration of the tools, adaptation to local conditions (e.g. custom language constructs)
  4. Execution of the measurement
  5. Transfer of the measured values to a metrics database
  6. Evaluation of the measurement results
  7. Preparation of the effort estimates for the further development scenarios
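
The following self-contained sketch runs through steps 2 and 4 to 6 under assumed conventions (structuring by top-level directory, non-blank LOC as the size metric, an in-memory list as the metrics database, and an invented anomaly threshold):

    # End-to-end sketch: structure the system (step 2), measure size (step 4),
    # store the values (step 5) and flag anomalies (step 6). All names,
    # thresholds and paths are illustrative assumptions.
    from datetime import date
    from pathlib import Path

    def measure(root: str) -> dict[str, int]:
        loc: dict[str, int] = {}
        for path in Path(root).rglob("*.py"):
            rel = path.relative_to(root)
            area = rel.parts[0] if len(rel.parts) > 1 else "(root)"
            lines = path.read_text(errors="ignore").splitlines()
            loc[area] = loc.get(area, 0) + sum(1 for l in lines if l.strip())
        return loc

    def run(root: str, database: list) -> None:
        results = measure(root)
        database.extend(
            (date.today().isoformat(), a, v) for a, v in results.items())
        if results:  # simple evaluation: flag areas far above average size
            avg = sum(results.values()) / len(results)
            for area, size in results.items():
                if size > 2 * avg:
                    print(f"anomaly: {area} has {size} LOC (average {avg:.0f})")

    history: list = []
    run("./src", history)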

Results

In addition to the metric reports on the size, quality and complexity of the system and its subsystems/components, and the effort estimates for the various further development strategies, evaluations can be prepared that present the information for different target groups. This is done, for example, in the form of scorecards that relate the qualities and complexities to each other, or in management dashboards that provide an up-to-date overview of the system.
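
As a hypothetical illustration of such a scorecard (all figures and the normalization ceiling are invented for the example), quality and complexity values can be normalized to a common 0 to 1 scale and related per component:

    # Hypothetical scorecard: normalize raw values to 0..1 and relate quality
    # to complexity per component. All figures here are invented examples.
    RAW = {
        "billing":   {"maintainability": 0.72, "cyclomatic_complexity": 38.0},
        "contracts": {"maintainability": 0.55, "cyclomatic_complexity": 61.0},
    }
    MAX_COMPLEXITY = 80.0  # assumed calibration ceiling for normalization

    for component, metrics in RAW.items():
        quality = metrics["maintainability"]  # already on a 0..1 scale
        complexity = min(metrics["cyclomatic_complexity"] / MAX_COMPLEXITY, 1.0)
        print(f"{component:10s} quality={quality:.2f} "
              f"complexity={complexity:.2f} ratio={quality/complexity:.2f}")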
