
Testing Web Services

The biggest obstacle to the use of web services is the problem of testing them. This article addresses that problem. The solution lies in the ability to simulate the use of the services: requests must be generated and responses validated automatically, in a fast and reliable way. To achieve this, the authors have developed a tool - WSDLTest - that uses a test data description language based on pre- and post-condition assertions, both for generating WSDL requests and for validating WSDL responses. WSDLTest is part of a larger tool suite - DataTest - for creating and processing system test data. The architecture and functionality of the tool and the experience gained from its use are presented here.

The growing importance of web services

Web services are becoming increasingly important for the IT business, especially since the advent of service-oriented architecture. IT users are looking for a way to increase the flexibility of their IT systems in order to react quickly to changes in their business environment. When a competitor comes up with a new marketing approach, they need to be able to follow suit within a short time. The adaptability of IT systems has become vital to a company’s survival. When new laws are passed, such as the Sarbanes-Oxley Act or Basel II, companies must be able to implement them within months. Changes to laws and regulations cannot be postponed. They must be implemented by a certain date, which is often only a short time away.
Under this time pressure, it is no longer possible to plan and organize long-running projects. It is necessary to design and install a working solution within a limited period of time. This demand for immediate response requires the existence of reusable components that can be glued together within a standard framework to support a customer-specific business process. This standard framework is very often a service-oriented architecture as offered by IBM, Oracle and SAP. The components are the web services; the overlying business process can be defined using the Business Process Execution Language, BPEL. The glue for binding the business process to the web services, as well as for linking the web services together, is the Web Service Description Language, WSDL. The web service components themselves come from various sources: some are purchased, some are taken from the open source community, some are newly developed, and others are taken from existing software systems, i.e. they are recycled for reuse in the new environment. Usually this means that they have to be wrapped.

Necessity for testing web services

Regardless of where they come from, no one can guarantee that the web service components will work as expected. Even those that are purchased may not exactly fit the task at hand. The fact that they are not compatible can lead to serious interaction errors. The recycled components can be even worse. Legacy programs tend to contain many hidden bugs that compensate for each other in a given context. However, when they are moved to a different environment to perform a slightly different function, the bugs suddenly come to the surface. The same thing can happen with open source components. Perry and Kaiser have shown that the correctness of a component in one environment does not necessarily carry over to another environment. Therefore, components must be retested for each environment in which they are reused.

The specific problems associated with testing web applications were addressed by Nguyen. The complexity of the architecture with the interaction between different distributed components - web browser, web server, data server, middleware, application server, etc. - brings many new sources of potential errors. The many possible interactions between the components in combination with the many different parameters also increase the need for more test cases, which in turn drives up the cost of testing. The only way to manage these increased costs is through test automation.

With self-developed services, the problem of reliability is the same as with any new software. They must be subjected to extensive testing at all levels - at the unit level, at the component level and finally at the system level. Experience with new systems shows that the error rate of newly developed software ranges from 3 to 6 errors per 1000 instructions. These errors must be found and eliminated before the software goes into production. A significant proportion of these errors are due to incorrect assumptions made by the developers about the nature of the task and the behavior of the environment. Such errors can only be uncovered by testing in the target environment - with data generated by others who have a different perspective on the requirements. This is the primary reason for having independent testers.
Regardless of where the web services come from, they should go through an independent testing process, not only individually but also in conjunction with each other. This process should be well defined and supported by automated tools so that it is fast, thorough and transparent. Transparency is particularly important when testing web services so that test cases are traceable and interim results can be examined. Due to the amount of test data required, it is also necessary to automatically generate the inputs and automatically validate the outputs. A high level of functional coverage is achieved by generating different combinations of representative test data. A high degree of correctness is ensured by comparing the test results with the expected results.

Existing tools for testing web services

There is no shortage of tools for testing web services. In fact, the market is full of them. The problem lies not so much in the quantity as in the quality of the tools. Most of them are new developments that have not yet matured. They are also difficult to adapt to local conditions and require users to submit data via the web client’s user interface. User interface testing is not the most effective means of testing web services, as R. Martin points out in a recent article in IEEE Software Magazine. He suggests using a test bus to bypass the user interface and test the services directly. This is the approach taken by the authors.
A typical tool on the market is the Mercury tool “QuickTest Professional”. It allows users to fill out and submit a web page. It then traces the request from the client workstation through the network. This is done by instrumenting the SOAP message. The message is traced through to the web service that processes it. If this web service calls another web service, the link to this service is traced as well. The content of each WSDL interface is recorded and stored in a trace file. In this way, the tester is able to follow the path of a web service request through the architecture and examine the message content at different stages of processing.

Parasoft offers a similar solution, but the requests are not initiated by a web client; they are generated from the business process procedures written in BPEL. This generates a larger volume of data on the one hand and simulates real conditions on the other. It is expected that most requests to web services will come from the scripts that control the business processes. The BPEL language was developed for this purpose, so it makes sense to test with it. What is missing in the Parasoft solution is the ability to check the responses automatically; they have to be inspected visually.

One of the pioneers in the field of web testing is the company Empirix. The Empirix eTester tool enables testers to simulate business processes via web clients. Their requests are recorded and translated into test scripts. The testers can then modify and vary the scripts in order to mutate a request into several variants and thus carry out a comprehensive functional test. The scripts are written in Visual Basic for Applications, so anyone familiar with VB can easily work with them. The scripts also make it possible to compare the response results with the expected results. Unexpected results are sorted out and sent to the error reporting system.

Other testing companies such as Software Research Associates, Logica and Compuware are working on similar approaches, so it is only a matter of time before the market is flooded with web service testing tools. After that, it will be some time before the desired quality level of tools is reached. Until this is the case, there is still some potential for customized solutions such as those described in this article. The various approaches to test automation are covered by Graham and Fewster.

The WSDLTest approach

The WSDLTest tool takes a slightly different approach from the other commercial tools for testing web services. It is based on the schema of the WSDL description, i.e. it starts with a static analysis of the schema. Two objects are generated from this schema. One is a template for a service request. The other is a test script. The test script allows the user to manipulate the arguments in the web service request template. It also allows the user to check the results in the web service response. The test driver is a separate tool that reads and sends the web service requests and receives and saves the web service responses.

The motivation for developing this tool

It is often the circumstances of a project that motivate the development of a tool. In this case, the project was to test an eGovernment website. The general user, the citizen, was to access the website via the standard web user interface. However, the local authorities had IT systems that also needed to access the website in order to obtain information from the central government database. To this end, it was decided to offer them a web service interface. A total of nine different services were defined, each with its own request and response formats.
The user interface to the eGovernment website was tested manually by human testers who simulated the behavior of potential users. For the web services, a tool was needed that simulates the behavior of the user programs by automatically generating typical requests and sending them to the web service. As the responses of the web service are not readily visible, it was also necessary to validate the responses automatically. The motivation for developing the tool can therefore be summarized as follows:

  • Web services cannot be trusted, so they must be tested intensively
  • All queries with all representative combinations of arguments should be tested
  • All responses with all representative result states should be validated
  • To test web services, it is necessary to generate WSDL requests with specific arguments to check the target functions
  • To check the correctness of web services, it is necessary to validate the WSDL responses against the expected results

Generating a template request from the WSDL schema

Every test is a test against something. There must be a source for the test data, and there must be an oracle against which the test results can be compared. In the case of WSDLTest, the oracle is the WSDL schema. This schema is either generated automatically from the interface design or written manually by the developer. As a third and more advanced alternative, it can be created from the BPEL process description. Regardless of how it is created, the schema defines the basic complex data types in accordance with the rules of the XML Schema standard. Complex data types can include other complex data types, so that the data tree is represented with single and multiple occurrences of the tree nodes. The schema then defines the base nodes, i.e. the leaves of the tree, and their sequence. These are the actual parameters. The following example is an excerpt from the type definitions of an eGovernment web service schema:

<definitions>
<types>
<schema>
<complexType name="getProfile">
<sequence>
<element name="DatWsRequest_1"/>
</sequence>
</complexType>
<complexType name="DatWsRequest">
<sequence>
<element name="memoID" type="string" nillable="true"/>
</sequence>
</complexType>
</schema>
</types>

<message name="DatWebService_getProfile">
<part name="parameters" element="getProfile"/>
</message>

<portType name="DatWebServiceInterface">
<operation name="getProfile">
<input message="DatWebService_getProfile"/>
<output message="getProfileResponse"/>
<fault name="DatWsException" message="DatWsException"/>
</operation>
</portType>

<binding name="DatWebServiceInterfaceBinding" type="tns:DatWebServiceInterface">
<soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
</binding>
</definitions>

The parameter description is followed by the message descriptions, which identify the names and components of each message, regardless of whether it is a request or a response. This is followed by the port type definitions. Each service operation to be called is listed with the names of its input and output messages. These message names are references to the previously defined messages, which in turn are references to the previously defined parameters. The port types are followed by the bindings, which describe the SOAP prototypes composed of service operations. A WSDL schema is thus a tree structure in which the SOAP prototypes refer to the service operations, which in turn refer to the logical messages, which in turn refer to the parameters, which in turn refer to the various data types. The data types can in turn reference each other.

Parsing this tree is called tree walking. The parser selects a top node and follows it through all its branches, collecting all subordinate nodes on the way down. At the end of each branch it finds the basic data types such as integers, booleans and strings. WSDLTest goes one step further by assigning a set of representative data values to each basic data type. For example, integer values are assigned a range from 0 to 10000 and string values are assigned different character combinations. These representative data sets are stored in tables and can be edited by the user before the test data is generated. The complete structure of the WSDL interface therefore looks as follows (a code sketch of the tree walk follows the overview):

Web service interface
  SOAP prototypes
    service operations
      logical messages
        parameters
          data types
            elementary data types
              representative values
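
The tree walk and the assignment of representative values can be pictured with a small sketch. The following Python fragment is only a minimal illustration, not the tool itself: it assumes a simplified schema consisting of named complexType, sequence and element nodes, as in the excerpt above, and the representative value tables are invented for the example.

import random
import xml.etree.ElementTree as ET

# Editable tables of representative values per elementary type.
# The concrete values are illustrative assumptions.
VALUES = {
    "string": ["", "Kat221", "ThisData", "XXXXXXXXX"],
    "int": [0, 1, 5000, 9999, 10000],
    "boolean": ["true", "false"],
}

def local(tag):
    """Strip a namespace prefix of the form {http://...}element."""
    return tag.split("}")[-1]

def fill(ctype, types, parent):
    """Walk one complex type down to its leaf elements and attach
    a randomly chosen representative value to each leaf."""
    for el in ctype.iter():
        if local(el.tag) != "element":
            continue
        name = el.get("name")
        etype = (el.get("type") or "string").split(":")[-1]
        child = ET.SubElement(parent, name)
        if etype in types:        # nested complex type: recurse
            fill(types[etype], types, child)
        else:                     # leaf node: assign a random value
            child.text = str(random.choice(VALUES.get(etype, VALUES["string"])))

def generate_request(schema_file, root_type):
    """Generate one sample request for the given top-level type."""
    tree = ET.parse(schema_file)
    types = {t.get("name"): t for t in tree.iter()
             if local(t.tag) == "complexType"}
    root = ET.Element(root_type)
    fill(types[root_type], types, root)
    return ET.tostring(root, encoding="unicode")

Applied to the schema excerpt above, generate_request("DAT-WS.wsdl", "DatWsRequest") would produce a sample request of the kind shown next.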

The task of the data generator is to traverse the WSDL schema tree down to the level of the basic data types and to select representative values for each type. The values are selected at random from the set of possible values. An XML data group is created from these values as follows:

<validate>
<validate>
<DatWsRequest>
<memoID>Kat221</memoID>
<subCompID>ThisData</subCompID>
<functionID>4711</functionID>
<version>1</version>
</DatWsRequest>
</validate>
</validate>

In this way, a WSDL service request file with sample data is generated and saved for future use. At the same time, a test script is created that allows the tester to overwrite the originally generated values. This script is a template with the data element names and their values.
By changing the script, the tester can now change the values of the service request. Since a script is also generated for the output data, the tester receives a template for verifying the service responses.

<complexType name="getProfile">
<complexType name="DatWsRequest">
<sequence>
<element name="memoID" type="string" nillable="true"/>
<element name="subCompID" type="string" nillable="true"/>
<element name="functionID" type="string" nillable="true"/>
<element name="version" type="string" nillable="true"/>
</sequence>
</complexType>
</complexType>
<getProfile>
<getProfile>
<DatWsRequest>
<memoID>XXXXXXXXX</memoID>
<subCompID>YYYYYYYYY</subCompID>
<functionID>ZZZZZZZZZ</functionID>
<version>ThisData</version>
</DatWsRequest>
</getProfile>
</getProfile>

We see here the result of the schema analysis, namely a WSDL template that serves as input for the final WSDL test data generation. A WSDL request is a cascading data structure that starts with type definitions, to which the message definitions refer, to which the input/output operations associated with a port refer. Since most of the code, such as that which defines the ports and the SOAP containers, is purely technical in nature and is also highly repetitive, it is practical to have a WSDL template that can be copied and adapted to the respective web service request.

Writing precondition assertions

The test scripts for WSDLTest are sequences of precondition assertions that define possible states of the web service request. It should be noted that the same assertion scripting language is also used for testing other data types such as relational databases, XML files and text files. A state is a combination of predefined values for the data types specified in the WSDL interface definition. The values are assigned to the individual data elements, but their assignment can be interdependent, so that a specific combination of values can be determined by the tester, for example:

assert new.Account_Status = "y"
if(old.Account_Balance < "0");

There are six different assertion types for the assignment of test data values:

  • the assignment of another existing data value from the same interface
  • the assignment of a constant value
  • the assignment of a set of alternative values
  • the assignment of a value range
  • the assignment of a concatenated value
  • the assignment of a calculated value

Another existing data value is assigned by referring to this value. The value referred to must come from the same WSDL interface.

assert new.Account_Owner = old.Customer_Name;

A constant value is assigned by specifying the value as a literal in this instruction. All literals are enclosed in quotation marks.

assert new.Account_Balance = "0";

A set of alternative values is assigned by means of an enumeration. The enumeration values are separated by an or sign “!”.

assert new.Account_Status = "0" ! "1" ! "2" ! "3";

According to this assertion, the values are assigned alternately starting with 0. The first account occurrence has the status 0, the second the status 1, the third the status 2 and so on.
The assignment of a value range is used for the limit value analysis. It is carried out by specifying the lower and upper limits of a numerical range.

assert new.Account_Status = ["1" : "5"];

In line with the rule that range limits are assigned plus and minus one, this causes the values 0, 1, 2, 4, 5, 6 to be assigned in alternating order: the first occurrence of an account has the status 0, the second 1, the third 2, the fourth 4, the fifth 5 and the sixth 6.

A concatenated value is assigned by joining one or more existing values with one or more constant values in a single string.

assert new.Account_Owner =
"Mr. " | old.Customer_Name | " from " | old.Customer_City;

The assignment of a calculated value is specified as an arithmetic expression in which the arguments can be existing values or constants.

assert new.Account_Balance = old.Account_Balance / "2" + "1";

The assert assignments can be conditional or unconditional. If they are conditional, they are followed by a logical expression that compares a data variable of the WSDL interface with another data variable of the same interface or with a constant value.

assert new.Account_Owner = "Smith"
if(old.Account_Number = "100922" &
old.Account_Balance > "1000");

The tester adapts the assertion statements in the script generated from the WSDL schema to create a representative test data profile with equivalence classes and limit value analysis as well as progressive and degressive value sequences. The aim is to manipulate the input data in such a way that a wide range of representative service requests can be tested. To achieve this, the testers should know what the web service is supposed to do and assign the data values accordingly.

file: DAT-WS;
if (object = "DatWsRequest");
assert new.memoID = old.memoID;
assert new.subCompID = old.subCompID;
assert new.functionID = "4711";
assert new.version = "1";
endObject;
if (object = "DatWSProfile");
assert new.attributeName = old.attributeName;
assert new.typ = "21" ! "22" ! "23";
assert new.value = "Sneed";
endObject;
if (object = "DatWsCheckAssertion");
assert new.memoID = "Kati";
assert new.version = "3";
assert new.notAfter = "ThisData";
assert new.notBefore = "ThatData";
endObject;
end;

Overwriting the template data

As soon as the assertion scripts are available, it is possible to overwrite the template of a web service request with the asserted data. This is the task of the XMLGen module. It compares the WSDL file generated by the WSDLGen module with the assertion script written by the tester. The data names in the assertion script are checked against the WSDL schema and the assertions are compiled into symbol tables. There are different tables for the constants, the variables, the assignments and the conditions.

After the assertion script has been compiled, the corresponding WSDL file is read and the data values are replaced by the values derived from the assertions. If the assertion refers to a constant, the constant replaces the existing value of the XML data element with the name that corresponds to that in the assertion. If the assertion refers to a variable, the value of the XML data element with this variable name is moved to the target data element. Alternative values are assigned one after the other in ascending order until the last value is reached, then it starts again with the first value. Range values are assigned as the limit values plus and minus one.

<DatWsCheckAssertion>
<memoID>Kati</memoID>
<version>3</version>
<DatWsConditions>
<notAfter>ThisData</notAfter>
<notBefore>ThatData</notBefore>
</DatWsConditions>
</DatWsCheckAssertion>
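
The overwrite step itself can be sketched as follows. This is a minimal sketch under stated assumptions: the assertion script is taken to be already compiled into simple Python tables, and only constants, alternatives and ranges are handled.

import itertools
import xml.etree.ElementTree as ET

# Assumed result of compiling an assertion script into tables:
# constants map a field to a single literal, alternatives cycle
# through an enumeration, ranges expand to the limits plus/minus one.
CONSTANTS = {"functionID": "4711", "version": "1"}
ALTERNATIVES = {"typ": itertools.cycle(["21", "22", "23"])}

def range_cycle(lo, hi):
    """Limit value analysis: lower-1, lower, lower+1, upper-1, upper, upper+1."""
    lo, hi = int(lo), int(hi)
    return itertools.cycle([lo - 1, lo, lo + 1, hi - 1, hi, hi + 1])

RANGES = {"Account_Status": range_cycle("1", "5")}

def overwrite(request_xml):
    """Replace generated values in a request template by the values
    derived from the assertions; unasserted values are retained."""
    root = ET.fromstring(request_xml)
    for el in root.iter():
        name = el.tag.split("}")[-1]
        if name in CONSTANTS:
            el.text = CONSTANTS[name]
        elif name in ALTERNATIVES:
            el.text = next(ALTERNATIVES[name])
        elif name in RANGES:
            el.text = str(next(RANGES[name]))
    return ET.tostring(root, encoding="unicode")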

At the end, there is a sequence of web service requests with different representative states that combines the originally generated data with the data assigned by the assertion scripts. By changing the assertions, the tester can change the states of the requests and thus ensure maximum data coverage.

Activating the web services

Once the web service requests have been generated, it is now possible to send them to the server. This is the task of the WS-Test driver. It is a simulated BPEL process with a loop construct. The loop is controlled by a list of web services arranged in the order in which they are to be called. The tester can edit the list to change the order according to the test requirements.
The test driver takes the name of the next web service to be called from the list. It then reads the generated WSDL file with the name of this web service and sends a series of requests to the specified service. When testing in synchronous mode, it waits until a response is received before sending the next request. When testing in asynchronous mode, multiple requests are sent until a wait command is encountered in the web service list. At this point, the driver waits until it has received responses for all requests sent before continuing with the next request.
The responses are accepted and stored in separate response files to be checked later by a post-processor. It is not the driver’s job to create requests or check the responses. The requests are created by the preprocessor. The driver only sends them out. The responses are checked by the postprocessor. The driver only saves them. In this way, the role of the test driver is reduced to that of a simple dispatcher. BPEL procedures are very well suited for this, as they have all the necessary properties to call web services within a predefined workflow.
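
Reduced to a rough sketch in Python, the driver loop might look as follows. The service list format, the "WAIT" marker and the send_request callback are assumptions for illustration, not the actual WS-Test driver interface.

from concurrent.futures import ThreadPoolExecutor

def run_driver(service_list, requests, send_request):
    """Dispatch requests in the order given by the service list.
    In asynchronous mode requests are submitted without waiting; a
    "WAIT" entry forces the driver to collect all outstanding
    responses first. Synchronous mode corresponds to a "WAIT" after
    every entry. send_request(service, request) -> response must be
    supplied by the caller."""
    responses, pending = {}, {}
    with ThreadPoolExecutor(max_workers=8) as pool:
        for entry in service_list:
            if entry == "WAIT":
                for test_id, future in pending.items():
                    responses[test_id] = future.result()
                pending.clear()
                continue
            for test_id, request in requests[entry]:
                pending[test_id] = pool.submit(send_request, entry, request)
        for test_id, future in pending.items():  # collect the remainder
            responses[test_id] = future.result()
    return responses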

Writing post-condition assertions

The same assertion language is used for verifying the web service responses as for constructing the web service requests. However, the assertions have the opposite meaning here. Data is not assigned from existing variables and constants, but compared with the data values of previous responses or with constant values.

The comparison with another existing data value is made by referring to this value.

assert new.Account_Owner = old.Account_Owner;

The comparison with a constant value is made by specifying the value as a literal in this instruction. As with the assignment, the literals are enclosed in quotation marks.

assert new.Account_Balance = "33.50";

The alternate assertion is used to check whether the response data matches at least one value from a list of values.

assert new.Account_Status = "0" ! "1" ! "2" ! "3" ;

An assertion can also check whether the response data is within a value range (e.g. between 100 and 500).

assert new.Account_Balance = ["100.00" : "500.00"];

The assertion for comparison can also include calculated values. The expression is evaluated first, and the response value is then checked against the result.

assert new.Account_Balance = old.Account_Balance - "50";

As with the preconditions, the postconditions can also be unconditional or conditional. If they are conditional, they are qualified by a logical if expression. The expression can contain a comparison of two variables in the web service response or a variable in the response with a constant value.

assert new.Account_Status = "3"
if(old.Account_Balance < "1000");

In all cases where the assertion is not true, an error message is recorded showing both the expected and the actual value. Provided there is an assertion check for each attribute of the WSDL response, the data coverage of the response check is 100%. However, it is not necessary to check every attribute. The tester may decide to limit the check to critical variables. If this is the case, the data coverage is lower. Data coverage is measured as the number of confirmed results in relation to the sum of all results.

Validating the responses

The XMLVal module fulfills the task of verifying the results of the web service. To do this, it must first translate the post-condition assertions into internal tables with variable references, constants, enumerations and ranges. In doing so, it checks the data names and types against the names and types declared in the WSDL schema to ensure consistency.

+-----------------------------------------------------------+
| WSDL Response Validation Report |
| File: DAT-WS.wsdl Params: Y Y Y Y |
| Object: DatWsCheckAssertion Date: 26.02.06 |
| Type : WSDL System: WebService |
| |
| Key Fields of Record(new,old) |
+-----------------------------------------------------------+
| New:DatWsCheckAssertion |
| Old:DatWsCheckAssertion |
+-----------------------------------------------------------+
| Non-Matching Fields | Non-Matching Values |
+------------------------+----------------------------------+
| RecKey:27013 | |
| New: memoId | Marta |
| Old: memoId | Kati |
+------------------------+----------------------------------+
| RecKey:27022 | |
| New: Version | 2 |
| Old: Version | 1 |
+-----------------------------------------------------------+
| Total Number of old Responses checked: 10 |
| Number of old Responses found in new File: 10 |
| Number of old Responses not in new File: 00 |
| Number of new Responses found in old File: 10 |
| Number of new Responses not in old File: 00 |
| Total Number of Attributes checked: 70 |
| Total Number of non-Matching Attributes: 07 |
| Percentage of matching Attributes: 90 % |
| Percentage of matching Responses: 100 % |
+-----------------------------------------------------------+

Once the assertion scripts have been successfully compiled, the tool uses the compiled table to check the response of the web service. First, it stores the expected values in a table with a key for each object occurrence. Second, it parses the WSDL result file and matches the objects there with the objects in the assertion tables. If a match is found, the attributes of that object are extracted and their values are compared with the expected values. If they do not match the verification condition, the data names and values are output in a list of non-matching results. It is then the tester’s task to find out why the results do not match. In addition to listing the assertion violations, the XMLVal tool also produces some statistics on the degree of data coverage and the degree of correctness.
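
Reduced to its essentials, this matching step might look as follows in Python. The expectation table stands in for the compiled assertion tables and is an illustrative assumption; conditions, ranges and calculated comparisons are omitted.

import xml.etree.ElementTree as ET

# Assumed compiled post-conditions: field name -> set of acceptable values.
EXPECTED = {"memoID": {"Kati"}, "version": {"3"}}

def validate(response_xml, expected=EXPECTED):
    """Compare the asserted attributes of a response with the expected
    values; report mismatches and simple coverage statistics."""
    mismatches, checked = [], 0
    for el in ET.fromstring(response_xml).iter():
        name = el.tag.split("}")[-1]
        if name in expected:
            checked += 1
            if el.text not in expected[name]:
                mismatches.append((name, el.text, sorted(expected[name])))
    for name, actual, exp in mismatches:
        print(f"Non-matching field {name}: new={actual!r}, expected one of {exp}")
    matching = checked - len(mismatches)
    print(f"Attributes checked: {checked}, matching: {matching} "
          f"({100 * matching // max(checked, 1)} %)")
    return mismatches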

Components of the WSDLTest tool

The WSDLTest tool is made up of several components. These are:

  • GUI Shell
  • XML File Writer
  • XML File Reader
  • Assertion Compiler
  • XSD Tree Walker
  • WSDL Analyzer
  • Table Processor
  • Error Handler
  • Random Request Data Generator
  • Selective Request Generator
  • Request Validator
  • Validation Report Generator
  • WS Test Driver

Some of these components were adopted from an existing tool, others were newly developed. The concept of XML processing has been published and can be reused. The assertion language and the assertion compiler were taken from an earlier tool and extended. This enabled a prototype version to be made ready for use in less than a month. The tool was later refined and extended. In its original form, the tool was structured as follows:

  • The shell is implemented with Borland Delphi
  • The core is implemented with Borland C++
  • The tool does not use a database, but only temporary work files
  • The tool was developed for operation in an MS Windows environment or an equivalent environment
  • The connection between shell and core is made via an XML parameter file

The user interface

The Windows interface of WSDLTest is designed to accept parameters from the user, select files from a directory and call the backend processes. The three directories from which files can be selected are:

  • the assertion directory with the assertion text files
  • the old file directory with the test inputs - csv, sql, xml and wsdl files
  • the new file directory with the test outputs - csv, xml and wsdl files

There is also an output directory in which the logs and reports are collected. It is important that the file names in all four directories are the same, as this is how they are linked together; only the extension may vary. For example, the actual response “Message.wsdl” is compared with the expected response “Message.wsdl” using the assertion script “Message.asr” to generate the report “Message.rep”. The names of all files belonging to a specific project are displayed together with their types so that the user can select them.
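
The linking of files by base name could be sketched like this; the directory names follow the description above but are assumptions of the sketch.

from pathlib import Path

def collect_test_sets(project_root):
    """Relate assertion scripts, old and new files and reports by
    their shared base name (only the extension varies)."""
    root = Path(project_root)
    sets = {}
    for script in sorted((root / "assertions").glob("*.asr")):
        base = script.stem
        sets[base] = {
            "assertions": script,
            "old": next((root / "old").glob(base + ".*"), None),
            "new": next((root / "new").glob(base + ".*"), None),
            "report": root / "output" / (base + ".rep"),
        }
    return sets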

The assertion compiler

The assertion compiler was developed to read the assertion scripts and translate them into internal tables that can be interpreted at test time. A total of nine tables are generated for each file or object. These are:

  • A header table with information about the test object
  • A key table with an entry for up to 10 keys that are used to relate the old and new files
  • An assertion table with an entry for up to 80 assertion statements
  • A condition table with an entry for each precondition to be fulfilled
  • A constant table with an entry for each constant value to be compared or generated
  • An alternative table with an entry for each alternative value that an attribute can have
  • A concatenation table with an entry for each concatenated value
  • A calculation table with the operands and operators of the arithmetic expressions
  • A replacement table with up to 20 fields whose values can be replaced by other values

Assertion compilation must take place before a file can be generated or validated. It is of course possible to compile many assertion scripts at once before starting to generate and validate files. The results of the assertion compilation are written out in a log file, which is displayed to the user at the end of each compilation run.
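
To make the idea concrete, here is a toy compiler for just two of these tables - the constant table and the alternative table - covering only the unconditional assignment forms shown earlier. Keys, conditions, ranges, concatenation and arithmetic are deliberately left out, and the regular expressions are assumptions about the script syntax.

import re

ASSERT_RE = re.compile(r'assert\s+new\.(\w+)\s*=\s*(.+?);')

def compile_assertions(script):
    """Translate unconditional assert statements into a constant table
    (single literal) and an alternative table (enumeration with '!')."""
    constants, alternatives = {}, {}
    for field, rhs in ASSERT_RE.findall(script):
        literals = re.findall(r'"([^"]*)"', rhs)
        if "!" in rhs and len(literals) > 1:
            alternatives[field] = literals
        elif len(literals) == 1:
            constants[field] = literals[0]
    return constants, alternatives

script = '''
assert new.functionID = "4711";
assert new.typ = "21" ! "22" ! "23";
'''
print(compile_assertions(script))
# ({'functionID': '4711'}, {'typ': ['21', '22', '23']})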

The WSDL request generator

The request generator consists of three components:

  • Schema preprocessor
  • Random request generator
  • Data allocator

XML schemas are normally generated by a tool. This results in tags with many attributes being written together in one line. To make the XML text easier to read and parse, the schema preprocessor splits such long lines into many indented short lines. This simplifies the processing of the schema by the subsequent components.
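
In Python, this line splitting could be sketched with the standard library's pretty printer, assuming the input is well-formed XML:

import xml.dom.minidom as minidom

def preprocess(schema_text):
    """Split machine-generated one-line XML into indented short lines
    so that the subsequent components can process it line by line."""
    pretty = minidom.parseString(schema_text).toprettyxml(indent="  ")
    # drop the blank lines that toprettyxml leaves behind
    return "\n".join(line for line in pretty.splitlines() if line.strip())

print(preprocess('<a><b x="1"><c>text</c></b></a>'))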
The random request generator then generates a series of WSDL requests with constant values as data. The assignment of constants depends on the element type. String data is assigned from representative strings, numeric data is assigned by adding or subtracting constant intervals. Date and time values are taken from the clock. Within the request, the data can be specified as a variable between the tags of an element or as an attribute for an element. This poses a problem as attributes have no types. They are strings by definition. Only string values are therefore assigned here. The end result is a request in XML format with randomly assigned data elements and attributes.
The data allocator reads in the randomly generated WSDL requests and writes out the same requests with adapted data variables. In the assertion scripts, the user assigns certain arguments to the request elements by name. When the assertion scripts are compiled, the names and arguments are saved in symbol tables. For each data element of the request, the data allocator accesses the symbol tables and checks whether an assertion value or a value set has been assigned to this data element. If so, the random value of this element is overwritten by the assigned value. If not, the random value is retained. In this way, the WSDL requests become a mixture of random values, representative values and limit values. The tester can use the assertions to control which arguments are sent to the web service. The WSDL requests are stored in a temporary file where they are available to the request dispatcher. To distinguish the requests, each request is assigned a test case identifier as a unique key.

The WSDL response validator

The Response Validator consists of only two components:

  • Response preprocessor
  • Data Checker

The response preprocessor works in a similar way to the schema preprocessor. It untangles the responses that come back from the web service so that they can be processed more easily by the data checker. The responses are taken from the test dispatcher’s queue file.
The data checker reads the responses and identifies the results by either their tag or attribute name. Each tag or attribute name is checked against the data names in the output assertion table. If a match is found, the value of that element or attribute is compared with the asserted value. If the actual value differs from the asserted value, a discrepancy is reported in the response validation report. This is repeated for each response.
To differentiate between responses, the test case identifier assigned to the request is also inherited by the response, so that each response is linked to a specific request. This allows the tester to include the test case identifier in the assertions and compare responses based on the test case.

The WSDL request dispatcher

The WSDL request dispatcher is the only Java component. It takes the generated requests from the input queue file, packs them into a SOAP envelope and sends them to the desired web service. If there is a problem reaching a web service, it handles the exception, simply records it and moves on to the next request. The responses are taken out of the SOAP envelope and stored in the output queue file, where they are uniquely identified by the test case identifier.
The request dispatcher also records the time at which each test case is sent and the time at which the response is returned. These are then the start and end times of the respective test case. By instrumenting the methods of the web services with probes that record the time of their execution, it is possible to link specific methods to specific test cases. This makes it easier to identify errors in the web services. It also makes it possible to identify the specific methods that are executed on the path of a service request through one or more web services. This is a topic for future research.
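
The dispatcher itself is written in Java; the following Python fragment merely sketches the same steps - wrapping a request in a SOAP 1.1 envelope, posting it and recording the start and end time of the test case. Endpoint handling and error treatment are simplified assumptions.

import time
import urllib.request

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>{body}</soap:Body>
</soap:Envelope>"""

def dispatch(endpoint, test_id, request_body, timings):
    """Pack a generated request into a SOAP envelope, send it and
    record the start and end time of the test case."""
    data = SOAP_ENVELOPE.format(body=request_body).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=data,
        headers={"Content-Type": "text/xml; charset=utf-8"})
    start = time.time()
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            response = resp.read().decode("utf-8")
    except OSError as exc:        # record the exception, move on
        response = None
        print(f"{test_id}: {exc}")
    timings[test_id] = (start, time.time())
    return response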

Experience with the WSDLTest tool

The experience with WSDLTest in an eGovernment project is encouraging. Nine different web services were tested there with an average of 22 requests per service. A total of 47 different responses were verified. Of these, 19 contained at least one incorrect result. So of the more than 450 errors found in the overall project, around 23 were discovered in the web services test. It is difficult to say how many errors could have been found in other ways during testing. To judge the efficiency of one approach, you would have to compare it with another. What can be said is that the test was relatively cheap and did not cost more than two weeks of effort for the testers.
It seems that the tool could become a suitable instrument for testing web services as long as the WSDL interfaces are not too complex. If they are too complex, the task of writing the assertions becomes too difficult and errors occur. At this point, the tester cannot be sure whether an observed error is caused by the web service or by an incorrectly formulated assertion. A similar experience was reported about 30 years ago from the U.S. ballistic missile defense project. There, about 40% of the reported errors were actually errors in the test procedures. If you look at the test technology used in that project by the RXVP test lab, you realize that the basic test methods - setting preconditions, checking postconditions, instrumenting the software, monitoring test paths, and measuring test coverage - have hardly changed since the 1970s. Only the environment has changed.

Requirements for future work

Testing software systems is a complex task that is not yet fully understood. It involves many topics, such as:

  • Specifying test cases
  • Generating test data
  • Monitoring test execution
  • Measuring test coverage
  • Validating test results
  • Tracking system errors, etc.

The WSDLTest tool addresses only two of these many problems - the generation of test data and the validation of test results. Another tool, TextAnalyzer, analyzes the requirements documents to extract the functional and non-functional test cases. These abstract test cases are then stored in a test case database. It would be necessary to somehow use these test cases to generate the pre- and post-condition assertions. This would mean closing the gap between the requirements specification and the test specification. The biggest obstacle here is the informality of the requirements specification. It is a matter of deriving formal, detailed expressions from an abstract, informal description. It is the same problem that the model-driven development community faces.

Another direction for future work is test monitoring. It would be useful to follow the path of a web service request through the system. The tool for this is TestDocu, also from the authors. This tool instruments the server components to record which request they are currently working on. By evaluating the resulting trace file, it is possible to monitor the execution order of the web services, as many services are often involved in processing a single request. The biggest remaining task for the future is to integrate these different testing tools so that they can be used as a whole and not individually. This means building a generic testing framework with a common ontology and standardized interfaces between the individual tools.

There is still the fundamental question of the extent to which the test should be automated. Perhaps it is not so wise to automate the entire web testing process, relying instead on the skills and creativity of the human tester. Automation often tends to hide important problems. The question of test automation versus creative testing remains an important topic in the testing literature.

Conclusion

In this article, we reported on a tool for supporting web service tests. The WSDLTest tool generates web service requests from the WSDL schemas and adapts them according to the precondition assertions written by the tester. It sends the requests and records the responses. After testing, it then verifies the response content against the post-condition assertions written by the tester.

The tool is still under development, but has already been used in an eGovernment project to speed up the testing of web services. Future work will focus on linking this tool with other testing tools that support other testing activities.
