Modern IT systems are becoming ever larger and more complex. They can perhaps be built once with professional help, but users are often overwhelmed by the maintenance and further development of such complex systems. Every new system that is developed adds to the maintenance burden: software must be constantly corrected, changed, restructured and further developed in order to keep pace with operational reality. The more new software is created, the more capacity is needed to maintain it. Users therefore need more and more staff, on the one hand to maintain the old systems and on the other to build new applications.
It is not a solution to produce more and more code, faster and faster. Instead, the goal must be to gain functionality without increasing the amount of code: to limit the code base and still provide the required functionality. This can only be achieved by using external, ready-made modules that users do not have to develop and maintain themselves; the responsibility for maintenance and further development lies with the provider. The emergence of cloud computing promises a solution to this problem. In the cloud, users have the opportunity to combine the advantages of prefabricated software with the benefits of in-house development. They do not have to solve every detailed problem themselves; they use the standard solutions and concentrate their energy on the actual company-specific functions.
Service-oriented requirements analysis is a prerequisite for the use of cloud services. The first step is to model the business process in order to define the context of the application. This is followed by a search for services that are suitable for solving the problem.
The user searches for modules that can be used for the planned system. Only when the list of potential modules is sufficient does the user begin to detail the requirements in such a way that they match the available services. The functional and non-functional requirements define the minimum acceptance criteria: only the functions that must be available are described here, and the quality limits that must be adhered to are defined.
The requirements specification contains the following content:
- the use cases with their processing steps, actions, states and business rules,
- the business interface data that the use cases consume and produce,
- the link to the superordinate business process,
- the functional and non-functional acceptance criteria, including the quality limits to be adhered to.
This document must be automatically analyzable, because on the one hand the processing steps of the use cases must be compared with the operations of the potential services, and on the other hand test cases must be generated for testing those services. The main purpose of the requirements documentation is to serve as a test oracle. It must therefore be comparable with the interface definition and thus on the same semantic level. Likewise, it contains information that links it upwards to the business process. In this respect, the document is a link between the superordinate business process and the subordinate services.
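How such a machine-analyzable specification might be represented is sketched below. The structure and all names (RequirementsSpec, UseCase, ProcessingStep and their fields) are illustrative assumptions, not a prescribed format; the point is only that use-case steps, interface data and the link to the business process are explicit enough to be compared with a service interface and to drive test case generation.

```python
# A minimal sketch of a machine-analyzable requirements structure.
# All class and field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProcessingStep:
    name: str                 # e.g. "check credit limit"
    inputs: List[str]         # business data items consumed
    outputs: List[str]        # business data items produced
    rule: str = ""            # business rule governing the step

@dataclass
class UseCase:
    name: str
    business_process: str     # link "upwards" to the business process
    steps: List[ProcessingStep] = field(default_factory=list)

@dataclass
class RequirementsSpec:
    use_cases: List[UseCase]
    interface_items: List[str]   # business interface data
    quality_limits: Dict[str, int]  # non-functional acceptance criteria

spec = RequirementsSpec(
    use_cases=[UseCase(
        name="Process order",
        business_process="Order-to-cash",
        steps=[ProcessingStep("check credit limit",
                              inputs=["customer_id", "order_total"],
                              outputs=["credit_ok"],
                              rule="order_total <= credit_limit")])],
    interface_items=["customer_id", "order_total", "credit_ok"],
    quality_limits={"max_response_ms": 500},
)
```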
The test procedure involves two phases:
- a static analysis, in which the requirements are compared with the service interface definition, and
- a dynamic analysis, in which the service is executed and its responses are validated.
The static analysis compares the content of the requirements with the content of the interface definition. A text analyzer scans the specification and builds tables of use cases, processing steps and interface data. A table of test cases is also created: a suitable test case is generated for each processing step, each action, each state and each rule. The test cases are supplemented with information from the interface definitions. At the same time, the interface schema is analyzed. In addition to checking the rules and measuring the size, complexity and quality of the interfaces, tables are also created here. The tables from the specification analysis are then compared with those from the interface analysis: the processing steps of the use cases are paired with the operations, and the business interface data is paired with the parameter data of the operations. Where a pairing does not match, an incompatibility is reported.
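The pairing step could be approximated as follows. This is a simplified sketch that matches names only by textual similarity; the threshold, the example names and the use of difflib are assumptions for illustration, whereas a real tool would also compare types and structure.

```python
# A sketch of the static pairing step: required items from the specification
# are matched against offered items from the interface definition.
from difflib import SequenceMatcher

def pair(required, offered, threshold=0.6):
    """Pair each required item with the most similar offered item;
    anything below the similarity threshold is flagged as incompatible."""
    report = []
    for req in required:
        best, score = None, 0.0
        for off in offered:
            s = SequenceMatcher(None, req.lower(), off.lower()).ratio()
            if s > score:
                best, score = off, s
        verdict = "matched" if score >= threshold else "incompatible"
        report.append((req, best, verdict))
    return report

# Processing steps of a use case vs. operations of the candidate service
steps = ["check credit limit", "reserve stock", "confirm order"]
operations = ["checkCreditLimit", "allocateInventory", "confirmOrder"]
for req, off, verdict in pair(steps, operations):
    print(f"{req:20s} -> {off or '-':20s} {verdict}")
```

In this example "reserve stock" finds no sufficiently similar operation and would be reported as an incompatibility.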
If a service proves to be incompatible with the user’s requirements, the user has four alternatives:
The user will decide on a case-by-case basis which of these alternatives fits best. What we want to avoid is the user starting to develop their own services. This alternative should only be allowed as a last resort.
At the end of the static analysis, we know whether the service can be considered at all on the basis of its interface definition. On the one hand, we have measured its size, complexity and static quality; on the other hand, we have compared the structure and content of the service interface definition with the structure and content we would like to have. If they are too far apart, we do not even need to start the second phase, the dynamic analysis.
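The size and complexity measurements can be as simple as counting operations and parameters. The following sketch uses a plain dictionary as a stand-in for the interface schema, and the metric definitions (parameters per operation as a crude complexity indicator) are illustrative assumptions rather than a fixed standard.

```python
# A sketch of simple size/complexity metrics over an interface definition,
# here represented as a plain dict for illustration.
def interface_metrics(interface):
    ops = interface["operations"]
    n_ops = len(ops)
    n_params = sum(len(op["inputs"]) + len(op["outputs"]) for op in ops.values())
    # crude complexity indicator: average number of parameters per operation
    complexity = n_params / n_ops if n_ops else 0.0
    return {"operations": n_ops,
            "parameters": n_params,
            "params_per_operation": round(complexity, 2)}

candidate = {
    "operations": {
        "checkCreditLimit": {"inputs": ["customerId", "orderTotal"],
                             "outputs": ["creditOk"]},
        "confirmOrder": {"inputs": ["orderId"], "outputs": ["confirmation"]},
    }
}
print(interface_metrics(candidate))
```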
During dynamic analysis, the service is executed and the results are recorded for comparison purposes. The starting points for the dynamic analysis are the interface schema and the test case table obtained from the requirements specification.
The analysis comprises eight steps:
1. complete the generated test case table (the tester assigns the test values),
2. combine the test case table with the interface definition into a structured test script,
3. refine and extend the test script,
4. generate a request for each test case from the interface schema and the preconditions,
5. send the requests via the test driver,
6. receive and store the responses,
7. validate the responses against the postconditions and record deviations in a defect report,
8. aggregate and evaluate the test metrics.
The test case table that is generated from the requirements specification is not complete; for example, the test values must still be assigned by the tester. An automated system then combines the test case table with the interface definition to form a structured test script. The tester can refine and add to the script.
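A possible shape of such a merged script is sketched below. The field names (precondition, postcondition, values) and the JSON representation are assumptions chosen for illustration; the `values` field is left empty to show where the tester's input is still needed.

```python
# A sketch of merging a generated test case table with the interface
# definition into a structured test script.
import json

test_cases = [
    {"id": "TC-01", "use_case": "Process order", "step": "check credit limit",
     "precondition": "order_total <= credit_limit",
     "postcondition": "creditOk == true",
     "values": None},            # still to be completed by the tester
]

interface = {"check credit limit": {"operation": "checkCreditLimit",
                                    "inputs": ["customerId", "orderTotal"],
                                    "outputs": ["creditOk"]}}

def build_script(cases, iface):
    script = []
    for tc in cases:
        op = iface.get(tc["step"], {})
        script.append({**tc,
                       "operation": op.get("operation"),
                       "inputs": op.get("inputs", []),
                       "outputs": op.get("outputs", [])})
    return script

print(json.dumps(build_script(test_cases, interface), indent=2))
```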
From here on, everything runs automatically. The test objects are generated from the scripts: the generator combines the interface schema with the preconditions to produce a series of requests for each test case. A test driver then sends the requests, receives the responses and stores them. The validator compares the responses with the postconditions and flags deviating values in a defect report.
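The generate / drive / validate cycle might look like the sketch below. The service call is stubbed out so the example remains self-contained; the function names and the credit-limit logic inside `invoke` are hypothetical, standing in for a real SOAP or REST invocation.

```python
# A sketch of the generate / drive / validate cycle; invoke() is a stub
# standing in for the real test driver's service call.
def generate_request(case):
    # combine the interface inputs with the tester-supplied test values
    return {"operation": case["operation"],
            "payload": {k: case["values"][k] for k in case["inputs"]}}

def invoke(request):
    # stand-in for sending the request to the candidate service
    if request["operation"] == "checkCreditLimit":
        return {"creditOk": request["payload"]["orderTotal"] <= 1000}
    return {}

def validate(case, response):
    # compare the response values with the expected postcondition values
    defects = []
    for out, expected in case["expected"].items():
        actual = response.get(out)
        if actual != expected:
            defects.append(f"{case['id']}: {out} expected {expected}, got {actual}")
    return defects

case = {"id": "TC-01", "operation": "checkCreditLimit",
        "inputs": ["customerId", "orderTotal"],
        "values": {"customerId": 42, "orderTotal": 250},
        "expected": {"creditOk": True}}

response = invoke(generate_request(case))
print(validate(case, response) or "no deviations")
```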
In the final step, the test metrics are aggregated and evaluated (test coverage, correctness, performance rating, etc.).
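The aggregation can be reduced to a few ratios, as in the sketch below. The formulas (executed versus planned test cases for coverage, passed versus executed for correctness, responses within the time limit for the performance rating) are simple illustrative definitions, not a specific standard.

```python
# A sketch of aggregating test metrics from the recorded results.
def aggregate(results, total_test_cases, response_times_ms, limit_ms):
    executed = len(results)
    passed = sum(1 for r in results if r["passed"])
    within_limit = sum(1 for t in response_times_ms if t <= limit_ms)
    return {
        "test_coverage": round(executed / total_test_cases, 2),
        "correctness": round(passed / executed, 2) if executed else 0.0,
        "performance_ok": round(within_limit / len(response_times_ms), 2)
                          if response_times_ms else 0.0,
    }

results = [{"id": "TC-01", "passed": True}, {"id": "TC-02", "passed": False}]
print(aggregate(results, total_test_cases=4,
                response_times_ms=[120, 480, 650], limit_ms=500))
```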
With the help of the test metrics report, the tester can assess the extent to which the behavior of the service is suitable for the target application.
The results of the test serve as a decision-making aid and must be presented in a form that the decision-maker can easily understand and evaluate.
In future, code modules will increasingly be offered as ready-made services that users only need to integrate into their business processes. Programming, if this term can still be used at all, will be done in a higher-level process description language such as BPMN, S-BPM or BPEL. From there, the service interfaces are addressed and the individual services are invoked. Users no longer need to worry about the detailed implementation. However, they still have to test the modules they use: individually in a service unit test, as described here, and as a whole in an integration test. In any case, a paradigm shift is imminent: we are switching from object-oriented to service-oriented software development. This should, if not finally solve the maintenance problem mentioned at the beginning of this article, then at least alleviate it to some extent.