Testing a Geoportal
Error-free functionality, good usability, security and performance are essential in order to achieve broad acceptance of e-government systems. But how does the client of such a system know whether the delivered software actually fulfills all the requirements? The following article describes the approach and benefits of systematic acceptance tests using the example of the e-government platform of the Free State of Saxony.
In 2005, the Free State of Saxony began building a central infrastructure platform for e-government applications as part of an e-government initiative. It contains components that are required for the realization of e-government processes. By integrating these basic components, new e-government applications can be implemented efficiently and made easily accessible to users. The basic components of the e-government platform include the life-situation portal Amt24, the form service, the Saxony Atlas geoportal, the integration framework, a central content management system, and components for electronic payment transactions as well as electronic signatures and encryption. For those responsible for the e-government platform, one thing is clear: none of these components may go live before the acceptance tests have been completed and the acceptance criteria have been met.
The usefulness of systematic testing has been proven time and again, both for custom-developed software and for standard products. Thousands of defects have been documented and rectified in the defect management system since 2005, and tens of thousands of test cases have been executed. The spectrum ranges from minor layout problems to serious defects that would have been fatal in productive operation. Testing also uncovered faulty or incomplete requirements that could be corrected before going live, for example by means of a change request.
The test process described below illustrates what needs to be considered when testing an e-government platform in a structured way.
To avoid surprises later on, test planning should begin at the same time as the planning of the development project, with a dedicated budget, staffing and schedule for the test project. The test objectives are essential for estimating the test effort. For particularly high-profile systems, the focus is on performance and usability in addition to correct functionality. Systems that perform critical functions or process personal data must be tested for security vulnerabilities. The integration of old and new specialist systems makes a comprehensive test of the interfaces and data transfers indispensable.
Standards such as the test process according to ISTQB, the software quality characteristics according to ISO/IEC 9126-1:2001 and test documentation according to IEEE 829-2008 help with effective test planning.
The test preparation phase covers the test specification, the provision of test tools and the test environment, and the procurement or creation of test data. Establishing a dedicated test center has proven particularly useful for this. Many e-government projects involve test specialists from IT as well as testers from different departments and authorities. The test center not only offers them shared workstations but also the opportunity to exchange views on methodological and technical questions and to work together effectively across the line organization. Questions about the intended behavior of the application under test are often already clarified during test specification.
Under the work contracts customary in the public sector, acceptance of the delivered services often has to take place within a tight time frame. In the Free State of Saxony, it has therefore proven useful to define milestones with interim versions while development is still underway. A release test of each interim version provides timely information about delivery quality and makes it possible to rectify any deficiencies found before the software is handed over for acceptance. The acceptance test itself is then limited to a regression test of meaningful test cases from the release tests.
While testers from the specialist departments concentrate primarily on whether the most important use cases can be carried out and contribute their experience with typical weak points, test specialists check the standard conformity of interfaces and ensure test coverage of all defined requirements.
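To make the notion of requirement coverage tangible, the following is a minimal Python sketch; the requirement IDs and the assignment of test cases to requirements are purely hypothetical and only illustrate how uncovered requirements can be detected from a simple traceability mapping.

# Hypothetical traceability data: each test case lists the requirement IDs it covers.
requirements = {"REQ-01", "REQ-02", "REQ-03", "REQ-04"}

test_cases = {
    "TC-001 address search": ["REQ-01"],
    "TC-002 display map layer": ["REQ-02", "REQ-03"],
    "TC-003 PDF export": ["REQ-02"],
}

# Collect every requirement referenced by at least one test case.
covered = {req for refs in test_cases.values() for req in refs}
uncovered = sorted(requirements - covered)

print(f"Requirement coverage: {len(covered & requirements) / len(requirements):.0%}")
print("Not yet covered:", ", ".join(uncovered) if uncovered else "none")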
The project manager must be kept regularly informed about test progress, defect figures and any problems that arise. In the event of problems with inter-agency IT communication, for example, measures such as network clearances or increases in server capacity can be initiated in good time before the go-live date. Because all defects are recorded in the defect management system and can be viewed at any time, disputes about individual defect reports (error or feature? change request?) can be resolved promptly by the project management.
A description of residual risks and the evaluation of the delivered software against the acceptance criteria form the core of the test report that the test team delivers to the project manager. The acceptance criteria, which are of course part of the contract for work and services with the supplier, can take various forms. For the e-government platform, the test coverage to be achieved is usually combined with error-based metrics.
Example:
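As an illustration of how such criteria can be evaluated, the following Python sketch checks a hypothetical combination of test coverage and defect thresholds against figures such as those exported from the test and defect management tools; neither the thresholds nor the figures are the values actually agreed for the platform.

# Hypothetical acceptance criteria: all planned test cases executed,
# no open critical defects, at most a few open minor defects.
CRITERIA = {
    "min_execution_rate": 1.00,   # 100 % of planned test cases executed
    "max_open_critical": 0,
    "max_open_minor": 5,
}

# Figures as they might be exported from the test and defect management tools.
figures = {
    "planned_test_cases": 420,
    "executed_test_cases": 420,
    "open_critical_defects": 0,
    "open_minor_defects": 3,
}

execution_rate = figures["executed_test_cases"] / figures["planned_test_cases"]

accepted = (
    execution_rate >= CRITERIA["min_execution_rate"]
    and figures["open_critical_defects"] <= CRITERIA["max_open_critical"]
    and figures["open_minor_defects"] <= CRITERIA["max_open_minor"]
)

print(f"Test case execution rate: {execution_rate:.0%}")
print("Acceptance recommended" if accepted else "Acceptance criteria not met")

In practice, the figures would be taken directly from the defect management system rather than entered by hand, so that the acceptance recommendation always reflects the current defect situation.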
Based on the acceptance recommendation in the test report, the project manager can now decide on the final acceptance and further measures.
When the e-government platform and its first components were put out to tender in Saxony in 2004, it was clear that a standard was to be created for subsequent projects. A test center was set up in which test specialists from ANECON specify and execute functional test cases together with testers from the departments of the Free State of Saxony. In addition, security and usability tests are carried out in accordance with current standards such as BITV, BSI, W3C or OWASP. Load and performance tests are likewise an essential part of the test projects, as they also serve to determine the response times for the service level agreements with the company.
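As a rough illustration of how such response-time figures can be derived, the following Python sketch (standard library only) sends a number of parallel requests to a service and compares the 95th percentile of the measured response times with an SLA target; the URL, request counts and threshold are purely hypothetical, and real load tests would normally use dedicated tooling.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles

URL = "https://example.org/geoportal/search?q=Dresden"  # hypothetical endpoint
REQUESTS = 50
PARALLEL = 5
SLA_SECONDS = 2.0  # e.g. 95 % of requests answered within 2 seconds

def timed_request(_):
    # Measure the full round trip of a single request, including reading the body.
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=PARALLEL) as pool:
    durations = list(pool.map(timed_request, range(REQUESTS)))

# quantiles(..., n=20) returns the 5 %, 10 %, ..., 95 % cut points; the last one is the 95th percentile.
p95 = quantiles(durations, n=20)[-1]

print(f"95th percentile response time: {p95:.2f} s")
print("SLA target met" if p95 <= SLA_SECONDS else "SLA target violated")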
Since the start of testing activities in the Free State of Saxony, all projects on the platform have been subjected to these types of tests as well as to testing against the specifications of the operating environment. The cooperation between test specialists and testers from the specialist departments in the test center has proven to be one of the key success factors, as have the change, release and defect management processes that have now become a matter of course. The effort is worth it: requirements such as the EU Services Directive and the rising expectations of e-government users demand an ever greater range of functions and ever closer networking of the IT systems involved. Their success depends heavily on user acceptance, and users expect an error-free, secure and fast system.