
What has Object Oriented Technology Achieved?

This article deals with the goals of object orientation and examines the extent to which these goals have actually been achieved. The first protagonists of object technology set themselves very high goals - goals whose attainment can only be measured to a limited extent. They wanted to increase productivity many times over, reduce maintenance costs to almost zero, promote reuse and ensure the portability of software. As a number of empirical studies have shown, these goals have only been partially achieved. Like all IT technologies, object technology has been “oversold”. Nevertheless, it has also had many positive influences on software development, above all on reuse and portability - in the end, it was worth going in this direction. But we must not be satisfied with that now. The search for the Holy Grail of software development continues.

Objectives of object technology

If we want to make a judgment about the success of object technology, we have to measure the technology against its own goals. The most important arguments for the introduction of object-oriented methods and programming languages, according to the first protagonists of the movement, were:

  • Increased productivity
  • Reduced maintenance costs
  • Increased reusability
  • Improved portability

This article looks at these four goals and examines the extent to which object technology - i.e. OO analysis, OO design, OO programming and OO testing - has come closer to achieving each goal. Has object orientation delivered what it promised? At the end, after examining each of these goals, we return to this question and try to provide an answer.

Increasing productivity

According to Tom Love, founder of Productivity Products International Corporation, object orientation was to bring about a major increase in productivity. He wrote in the magazine Datamation in May 1987:

“Structured programming was only a small help. It provided only 10 to 15 percent improvements when improvements of 10 to 15 times are needed … Object oriented programming stands ready to provide such needed improvements because it offers a radically different, more intuitive way of conceptualizing and building systems …”

At the first OOPSLA conference in Portland, Oregon in 1986, more than 1,000 supporters of the new OO development method appeared. All were firmly convinced that it would bring about a revolution in software development. All the problems that developers had been struggling with for years would vanish into thin air. It would be possible to move effortlessly from analyzing the requirements to designing the architecture and creating the code. The old semantic barriers between the levels of abstraction would disappear. Not only would the passage from requirements to code be much faster, the code produced would contain far fewer errors than before, and that would mean less testing. Leading software technologists of the time were convinced of this.

Positive characteristics

One of the strongest proponents of object technology was David Thomas from Carleton University in Canada. He was involved in the development of the Smalltalk language and was convinced that object orientation would bring a significant increase in software productivity. He wrote:

“We have been convinced since 1975 that the object-oriented approach to software development is the best of all known techniques. It will revolutionize programming.”

By combining the data with the procedures that process it, all the problems arising from the separation of data and algorithms were supposed to disappear. Complex systems could be constructed much more easily from the combined building blocks (data + algorithms), and the developer would have more options for combining them. According to Tom Love, the separation of data from procedures was one of the biggest productivity-inhibiting factors in earlier software development; eliminating it was a great achievement of object technology.

The encapsulation of data and functions in independent objects makes it possible to trigger these objects from outside and to move them around. The objects react only to messages from outside: they receive predefined orders and deliver agreed results. How the results come about remains hidden from the outside world - information hiding. External procedures have no access to the encapsulated data and functions; instead, there is a well-defined interface through which the data exchange is regulated. This property makes it possible to assemble large systems from many small building blocks. If the majority of the building blocks are prefabricated, a correspondingly large part of the development effort is eliminated.
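A minimal sketch of this principle (our illustration; the Account class and its methods are hypothetical and not taken from any of the systems discussed here):

```java
// Hypothetical sketch of encapsulation and information hiding.
// The state is private; the outside world sees only the agreed interface.
public class Account {
    private long balanceInCents;   // hidden state, no external access

    // Predefined order: deposit an amount.
    public void deposit(long cents) {
        if (cents <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        balanceInCents += cents;
    }

    // Agreed result: the current balance. How it is kept is hidden.
    public long balance() {
        return balanceInCents;
    }
}
```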

Another productivity-driving factor is inheritance. Both data and functions are programmed at a higher level of abstraction and passed on to the lower levels, where they can also be modified. This saves the developers of the subordinate code units the effort of programming these data and functions from scratch.

This redundant coding was another productivity-inhibiting factor in previous structured software development: more than 50 percent of the code was redundant. Developers copied the code and changed only a few lines. With inheritance, they could now build on the original code without having to copy and change it. Rebecca Wirfs-Brock writes about this:

“You don’t destroy the original code, you just extend it and build upon it … Inheritance makes it possible to define complex new objects without the bother of writing everything from scratch.”
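Continuing the hypothetical sketch from above, a subclass extends the existing class without touching its code - exactly the kind of reuse Wirfs-Brock describes:

```java
// Hypothetical sketch: the original Account class is not modified,
// only extended. deposit() and balance() are inherited as they are.
public class SavingsAccount extends Account {
    private final double annualRate;

    public SavingsAccount(double annualRate) {
        this.annualRate = annualRate;
    }

    // New behavior built on top of the inherited code.
    public void creditAnnualInterest() {
        long interest = Math.round(balance() * annualRate);
        if (interest > 0) {
            deposit(interest);   // reuses the inherited method
        }
    }
}
```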

In addition to these productivity-enhancing properties of object-oriented programming, there is the flexibility provided by polymorphism. The coupling of modules in procedural systems was generally very rigid: it was statically defined which module called which other modules. There was even a special link run to bind the modules together into a run unit. If a developer wanted a different combination of modules, he had to put together a different run unit.

Although there were dynamic calls in COBOL and PL/I, these were rarely used because the handling was cumbersome and opaque. You had to bind all potentially callable modules together in order to determine at runtime which module would actually be called next.

In object-oriented languages, the desired method is identified by the class of the receiving object, and with dynamic binding this happens only at runtime. This makes the code much more flexible and allows many more binding combinations. The feature is controversial, but it has its advantages: code with deeply nested case statements can be simplified considerably by using polymorphic calls.
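As a small illustration (ours, not an example from the original article): instead of a case statement over a type code, each class carries its own variant of the behavior, and the matching method is selected at runtime.

```java
// Hypothetical sketch: polymorphism instead of a nested case statement.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Rectangle implements Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    public double area() { return width * height; }
}

class Billing {
    // No switch over a type code: the call s.area() is bound to the
    // right implementation at runtime.
    static double totalArea(Shape[] shapes) {
        double sum = 0.0;
        for (Shape s : shapes) {
            sum += s.area();
        }
        return sum;
    }
}
```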

Positive experiences

Has object technology actually led to an increase in productivity in development? This question cannot be answered unequivocally. There are both positive and negative reports.

On the positive side, graphical interfaces such as Rank Xerox’s Multimedia Spreadsheet for the U.S. CIA and IBM’s TPF2 timesharing system would not have been possible without object technology. Ann Hardy of Key Logic claims:

“The TPF2 software has been written in an object oriented version of PL/I. We could not have built the system without it.”

The same applies to Apple’s Lisa and Macintosh. Both products would not have been possible without the new development technology. This underlines the fact that for some types of application there is no alternative to object technology. Object orientation is also indispensable in other application areas, such as telecommunications and multimedia. No one would think of doing it any other way, except in an even more advanced form of programming such as aspect-oriented programming.

But there are also plenty of success stories in the classic data processing sector when it comes to productivity. By using C# in conjunction with the .NET framework, an automotive supplier was able to achieve a 200 percent increase in productivity, mainly through the use of prefabricated building blocks. Similar productivity increases have been achieved in Java projects, for example at Sparkasseninformatik in the development of new booking systems or at Allianz Versicherung in the replacement of old debt collection systems.

These success stories speak for themselves. When used correctly, object technology can increase productivity and accelerate development. Critical voices are heard at the end of the article.

Reducing maintenance costs

Even more than increasing productivity, object technology aimed to reduce maintenance costs. During the 1980s, the cost of maintaining existing systems continued to grow; by the end of the decade it accounted for more than two thirds of total costs. The cost of maintaining and enhancing old systems was twice as high as the cost of developing new ones.

Representatives of object technology claimed that this was primarily a consequence of procedural programming: the programs had become too large and too complex. In addition, maintenance suffered from the poor quality of the code, which was very difficult to understand. With the introduction of object orientation in design and programming, everything was supposed to get better. Maintenance costs were to be cut in half.

Maintenance cost drivers

Software has three dimensions: size, complexity and quality. The maintainability of a software system is determined by all three. The new object technology promised to reduce the size and complexity of software systems and to increase their quality - and thereby to reduce maintenance costs.

The assertion that maintenance effort depends solely on the characteristics of the software itself was wrong to begin with. Maintenance costs depend on various factors, including the maintenance environment, the skills of the maintenance staff and the tools available to them. Changing the software alone can therefore have only a limited impact on maintenance costs.

Let’s start with size. Procedural systems were indeed too big; above all, their building blocks were too big. The larger the building blocks - modules and procedures - the more difficult they are to understand. The main reason for the excessive size of procedural systems was the “copy & paste” technique. Programmers moved large sections of code from one module to another, changing only a few lines. This is how code used to be reused - but a change then had to be incorporated in many places. Instead of creating new subroutines, developers extended the code in place, making it bigger and bigger. With the help of inheritance and the creation of small, reusable classes, object technology was able to reduce the amount of code significantly, but at the expense of complexity.

Complexity is determined by the number of relationships between the code building blocks. Object technology reduces the size of the individual building blocks, but it creates more of them. Proper OO systems consist of a large number of small classes, each with a limited number of data and functions. The administration of the many source modules alone causes considerable effort. What weighs far more, however, is the large number of relationships between the building blocks: every inheritance and every association is another relationship, another dependency. The more relationships between the building blocks of a system, the greater its complexity.

Added to this is the dependency on other developers. The developer of a class that inherits from one class and uses another depends on the developers of those classes. If one of them drops out - for whatever reason - the dependent developer must take over his code. One of the authors worked on a project in which pure object orientation was abandoned for precisely this reason. We can conclude that although OO systems have become smaller, they have become more complex due to the many relationships. Object technology drove out the devil with Beelzebub: one evil was traded for another.

As far as quality is concerned, it is difficult to prove any improvement. The many GoTo branches have disappeared, but this is not clearly a merit of object technology - structured programming had already achieved that. The cohesion of the modules, i.e. the connectedness of their internal functions, has grown, but so has the coupling, i.e. the dependency of internal functions on external ones. The modules have become smaller, but the number of dependencies between them has increased.

There are also high-quality procedural systems. Quality is therefore not necessarily linked to the programming technique; it follows from the correct use of the respective technique, whether structured or object-oriented. Nothing is worse than the incorrect use of object technology because it has not been properly understood.

Studies on the maintainability of object-oriented software

The claim by proponents of object technology that it would reduce maintenance costs by 30 to 50 percent has never been confirmed, although many researchers have tried.

One of the first studies was carried out by Professor Norman Wilde of the University of West Florida back in 1993, using a C++ system from the Bellcore Corporation. His team analyzed and measured the source code with tools and conducted interviews with the responsible maintenance personnel.

It turned out that some of the much-praised features of OO programming had negative consequences for maintenance - especially inheritance, once it goes beyond a certain depth. Another problem arose from the large number of interactions between methods in different classes. A third problem was caused by dynamic binding with polymorphism: it makes the code more flexible, but also more difficult to understand. The average developer cannot cope with this and makes wrong decisions when changing code, which leads to errors that are hard to find.

Last but not least, Bellcore had to correct more errors in the new object-oriented systems than previously in the old procedural ones. On top of this, the performance of the systems decreased: the runtimes of the new C++ systems were 45 percent higher than those of the old C systems. The conclusion of the study was that object orientation has some advantages, but also serious disadvantages. Decisive for the maintainability of OO programs were the depth of the class hierarchy, the number of associations and the use of polymorphic bindings. In other words, the more object-oriented the work, the higher the maintenance costs.

Another study on maintainability took place at the University of Wisconsin in 2007, this time with Smalltalk programs. A team of researchers led by Professor Michael Eierman investigated whether object-oriented systems really are superior in terms of maintainability. For the purpose of comparison, they defined maintenance as a collective term for error corrections, functional extensions, changes and refactoring measures, and maintainability as the minimization of the effort required to perform these activities.

The claim under test was that the properties of object-oriented software do more to reduce maintenance effort than the properties of procedurally developed software. The researchers put forward five hypotheses:

  • It is easier to understand OO systems.
  • It is easier to plan OO maintenance interventions.
  • It is quicker to build up know-how about OO systems.
  • It is easier to diagnose errors in OO systems.
  • The scope of changes to OO systems is smaller.

In order to clarify whether these assumptions really apply, Professor Eierman involved 162 advanced students in his experiment. The students had learned both COBOL and Smalltalk. They were asked to correct and modify either a COBOL or a Smalltalk application: on the one hand to correct an error, on the other to add an additional calculation to the code. 81 students chose Smalltalk and 81 COBOL. This produced the results summarized in Table 1.

Professor Eierman’s team concluded that there is no significant difference between the maintainability of procedural and object-oriented solutions: what is gained in one activity is lost in another. In the end, maintaining the Smalltalk solution actually cost 3 percent more. Eierman and Dishaw conclude that OO software is not easier to maintain - at least not according to the results of their study.

One of the authors of this article carried out a similar comparison of the maintainability of COBOL and Java code at the Hagenberg University of Applied Sciences. There, students were given the same task - order processing - in COBOL and in Java and were asked to correct an error and make a change in both versions.

The error correction was completed fastest in the COBOL code by all teams - no team needed more than an hour. The change, however, went faster in the Java code: it was easier to create an additional subclass in Java than to write a new section into the COBOL code. No team needed more than two hours for the Java extension; for the COBOL change they needed significantly longer.

This may be due to the fact that the students were more familiar with Java. But it also shows how difficult such comparisons are: the subjects of such studies will always be more familiar with one technology than with the other. Maintainability is difficult to measure because maintenance involves many different activities - error correction, modification, enhancement, optimization and refurbishment. What is easy in one programming language is difficult in another. In the end, we can only conclude that statements about maintainability cannot be verified. It may be that object orientation leads to a reduction in maintenance costs in the long term, but this is very difficult to prove.

Testability of object-oriented software

A large part of maintenance costs is due to testing, and the additional effort required to test object-oriented systems is undisputed. Robert Binder, internationally recognized testing expert and author of several books on testing, claims that OO software is not only more difficult to test, it also causes more errors. Boris Beizer says it costs four times more to test OO software than earlier procedural software, and James Martin, the guru of the structured world, saw a huge wave of testing effort rolling towards us with the introduction of object technology.

Firstly, the strong modularization leads to more intermodular dependencies. Methods in the class under test use methods in other classes, and these in turn use methods in yet more distant classes. One of the authors of this article tried desperately to simulate all the external references of C++ classes in order to test them in isolation. In the end, he had to give up: there was no way to code that many stubs. The same problem arises with JUnit. To test one class, the tester has to include all other affected classes in the test or simulate them by “class flattening”. Either way, the tester has more work than if the test object could be tested on its own.
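To make the stub problem concrete, here is a deliberately small sketch (hypothetical names, JUnit 4 style): the class under test depends on one collaborator, and a hand-written stub stands in for it so the test does not drag in the rest of the system. With dozens of such dependencies per class, writing the stubs quickly becomes the dominant effort.

```java
import org.junit.Assert;
import org.junit.Test;

// Hypothetical sketch: testing one class in isolation by stubbing
// its collaborator.
interface RateProvider {
    double rateFor(String currency);
}

class PriceCalculator {
    private final RateProvider rates;

    PriceCalculator(RateProvider rates) {
        this.rates = rates;
    }

    double inEuro(double amount, String currency) {
        return amount * rates.rateFor(currency);
    }
}

public class PriceCalculatorTest {
    @Test
    public void convertsUsingStubbedRate() {
        RateProvider stub = currency -> 2.0;  // canned answer, no real system needed
        PriceCalculator calculator = new PriceCalculator(stub);
        Assert.assertEquals(20.0, calculator.inEuro(10.0, "USD"), 1e-9);
    }
}
```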

Secondly, data-related classes are given methods that are procedurally independent of each other but linked via shared data attributes. One method can leave behind an object state that influences the behavior of subsequent methods. The input domain of a method therefore includes not only the parameters that come from outside, but also the internal attributes of the encapsulated object. Their state influences the test and must be set up by the tester in advance. It is important to test not only all branches in all methods, but also all relevant object states (a small sketch of this follows below).

Thirdly, when a reusable class is developed, it is not yet known for what purposes its methods will be used. They have to be designed in such a way that they can fulfill any potential purpose. Such openness forces a comprehensive test that covers all possible uses. Reuse has its price.
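The state problem from the second point, as a minimal sketch (the class and its methods are hypothetical):

```java
// Hypothetical sketch: the outcome of withdraw() depends on the state
// that earlier calls left behind, not only on its parameter.
public class LimitedAccount {
    private long balance;
    private boolean frozen;

    public void deposit(long amount) { balance += amount; }
    public void freeze()             { frozen = true; }

    public boolean withdraw(long amount) {
        if (frozen || amount > balance) {
            return false;            // behavior depends on the object state
        }
        balance -= amount;
        return true;
    }
}
// Covering all branches of withdraw() is not enough: the tester must
// set up and test at least three states - fresh, funded and frozen.
```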

As early as 1996, Capers Jones examined 600 object-oriented projects in 150 different application companies and came to the following conclusions:

  • The number of errors resulting from the incorrect use of object technology was strikingly high.
  • Errors in OO analysis and architecture have a much greater impact than errors made with procedural analysis and design methods.
  • It took more than twice as much effort to uncover the causes of errors.
  • The error density is higher because the code is more compact.
  • Only where more than 50 percent of the code is reused does the error rate fall.

It was to be expected that object technology would increase the testing effort: object-oriented code has many more dependencies and many more possible uses, and testing them all requires more effort. The answer came in the form of test automation. To get a grip on the test problem, users had no choice but to automate the test. This can also be interpreted positively: the high cost of testing object-oriented software has made test automation unavoidable.

Increasing reusability

When it comes to reusing existing software, object technology has undeniable advantages. It has made it possible to build universally applicable class libraries that can be passed on from project to project. There are several reports of projects in which more than 50 percent of the estimated effort was saved by reusing code from previous projects. It is questionable, however, whether the same savings would not have been possible with procedural software. Long before the introduction of object technology, Capers Jones in the U.S. and Albert Endres in Germany reported successes in the reuse of code modules. Generally usable subroutines, macros and include or copy code sections were already being used in the 1970s to save code. Endres also describes reuse techniques that go beyond code - design patterns, application frameworks and standard interfaces - and there is also the reuse of processes and functions. It would therefore be incorrect to claim that reuse as such is an achievement of object orientation. Only inheritance directly promotes the reuse of higher-level classes - and it is controversial among experts.

As far as reuse is concerned, object orientation has made only a limited contribution to increasing it. The concept of abstract data types has certainly contributed more; whether we want to equate that concept with object orientation remains open. In conclusion, we can state that object orientation promotes reuse but is not an indispensable prerequisite for it. Product line management proves that non-object-oriented software can also be reused.

Improving portability

Nobody would deny that object orientation has led to better portability, at least as far as Java is concerned. The encapsulated objects with their standardized interfaces can easily be moved from one environment to another. Admittedly, the components cannot be moved without being recompiled, but the bytecode can be transferred as it is. In any case, portability is much greater than with classic procedural languages, where the modules have to be recompiled and rebound. Today, software portability is a basic requirement for transferring local applications to the cloud. The strict division of the code into classes, methods and interfaces makes it possible to reuse local classes as global web services. This is not easily possible with procedural languages such as COBOL, C and PL/I: one of the authors spent years encapsulating such procedural programs as services for the purpose of reuse, which required the creation of completely new interfaces. With object-oriented code, this is much easier.
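How little is needed in the object-oriented world can be sketched roughly as follows (a hypothetical example assuming a JAX-WS stack; the class and method names are invented):

```java
import javax.jws.WebService;

// Hypothetical sketch: an existing local class is exposed as a web
// service. The business logic itself does not have to be rewritten.
@WebService
public class OrderService {

    public double orderTotal(int orderId) {
        return lookupTotal(orderId);   // existing local logic, now callable remotely
    }

    private double lookupTotal(int orderId) {
        return 0.0;  // placeholder for the real implementation
    }
}

// Published, for example, with:
//   javax.xml.ws.Endpoint.publish("http://localhost:8080/orders", new OrderService());
```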

Critical voices

However, there are also critical voices on object orientation. Victor Basili, Professor of Computer Science at the University of Maryland, questions whether

“object-oriented is the right way to think in all domains. It is obviously good for some, but we still don’t have enough empirical evidence to support the claim that it is good for all …”

Erik Stensrud, professor at the Norwegian School of Management, conducted an experiment to compare productivity in object-oriented projects with that in procedural projects. The result was inconclusive: in the end, productivity was about the same. The disadvantages of object orientation balanced out the advantages.

Professor Thomas Niemann from Portland Community College claims that excessive use of all the features of object technology can be detrimental to productivity. He says that “excessive information hiding can be detrimental” and that inheritance and polymorphism make testing more difficult. If too much information remains hidden, a third party cannot correct the code. Ultimately, Niemann comes to the conclusion that productivity is best achieved with an OOLite.

A critical voice on object technology comes from Professor Manfred Broy. He writes in an article for Informatik-Spektrum together with Johannes Siedersleben:

“We argue that although the object orientation in use today has a whole range of interesting, advantageous features, there are also some serious shortcomings that show that object orientation does not reflect the current state of scientifically understood programming methodology and software engineering.”

The authors also claim: “Despite its popularity and prevalence, object orientation does not solve all old problems and has created some new ones.” Inheritance and the use of object references in polymorphism - which are nothing more than poorly disguised pointers - are particularly troublesome. In the UML design language, they also criticize:

  • the lack of clear semantics
  • the lack of integration of the various description techniques
  • the lack of a component concept.

What Broy and Siedersleben criticize most is not so much the OO concepts themselves, but the implementation of these concepts in concrete languages such as C++, Java and especially UML.

The lack of a component concept makes it difficult to design a complex architecture. Classes are far too small to serve as the highest element of abstraction. The authors state:

“One of the worst shortcomings of object orientation is the absence of components as a supplement to the class. Classes are simply too small, too granular. They are units of implementation, not of construction. Large systems can hardly be structured with them.”

Broy and Siedersleben do not cite any empirical studies to support their criticism of object technology, but it can be inferred from their wording that object orientation does not change productivity: the positive, productivity-enhancing influences are balanced by the negative, productivity-inhibiting ones. In the end, it comes out at plus/minus zero.

One of the most interesting studies on the subject of object technology and productivity was published in 1994 in the Communications of the ACM under the title “Requirements Specification - Learning Object, Process and Data Methodologies”. The authors, Iris Vessey and Sally Conger - a professor and a consultant - investigated how well the average developer copes with object orientation. Does it really correspond to the natural way of thinking? Compared with the other development methods, the OO method was the most difficult for the participants to learn. So this approach does not correspond to the natural way of thinking at all.

From the productivity discussion, it can be concluded that object orientation does not necessarily lead to higher productivity. It depends on who is involved in the project and how high the degree of reuse is. If experienced developers with in-depth knowledge of the languages used, for example UML and Java, are at work and if more than 50 percent of the code is reused, a significant increase in productivity can be expected compared to procedural development.

However, if the project participants have little OO experience and the reuse rate is low, then a loss of productivity is to be expected. Inexperienced developers and a high reuse rate indicate constant productivity. Experienced developers and a low reuse rate indicate a small increase in productivity - 1 to 50 percent. This means that the question of the influence of object technology cannot be answered unequivocally.

Back to the question of the benefits of object technology

In the end, when we ask ourselves what the object-oriented movement has brought to the software world, we have to subtract a few things from the original claims.

Has object orientation increased productivity?

In some cases, software development costs fell, especially where a high reuse rate was achieved and where experienced OO developers were at work. Where there was little reuse and where inexperienced personnel were employed, productivity fell. The correct application of object orientation places high demands on developers. If they are not up to it, their productivity is limited.

Compared to some 4th generation languages, Java and C++ perform poorly. Only C# in conjunction with .NET can keep up to some extent. When examining productivity in a large Austrian industrial company, one of the authors found that

  • the productivity when using a 4GL language for a system with over 32,000 function points was two function points per person-day, while
  • the productivity for a Java application with over 12,000 function points was only 1.1 function points per person-day.
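Converted into absolute effort, the gap becomes tangible (our own rough arithmetic, not a figure measured in the comparison): at 2 function points per person-day, the 32,000 function points of the 4GL system correspond to roughly 16,000 person-days, while at 1.1 function points per person-day, the 12,000 function points of the Java application correspond to roughly 10,900 person-days. Per function point, the Java development thus took almost twice the effort.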

So increasing productivity is not exactly a strength of object technology, especially not when inexperienced people are at work.

Has object orientation reduced maintenance costs?

There is no empirically substantiated evidence for this. The few studies that do exist point to a plus/minus zero in maintenance. In some cases, the maintainability of the software has increased, especially in cases where the amount of code has been significantly reduced and where many standard modules have been used.

In other cases, the complexity of the software increased to such an extent that the maintenance staff could no longer cope with it, and maintenance costs rose. Complex OO systems in particular got out of control without regular refactoring. Procedural systems, by contrast, could be kept more or less alive by their developers; with object orientation this is not the case.

Has object orientation promoted reuse?

The answer to this question is a conditional yes. Object-oriented components are generally easier to detach from their environment and easier to encapsulate. This conclusion can be drawn from the experience of one of the authors with the reuse of classes as web services. On the other hand, there is too little real scientific evidence for this assertion. Software was reused to a large extent even before object orientation, and it is difficult to say to what extent object orientation has made reuse easier. At best, one can say that there is a lot to be said for it.

Has object orientation improved the portability of the software?

This claim holds up best of all. Because the code has standardized interfaces, it can be moved to other environments and made to run with little adaptation effort. Developers do not have to rewrite the code - a decisive advantage in today’s world of distributed processing. The concept of distributed objects fits well with today’s diverse digital world. With portability, object technology has set an important course.

Conclusion

In summary, it can be said that object orientation has by no means achieved everything it originally promised - especially in terms of productivity and easier maintenance. However, it has had other positive effects on software development: it promotes reuse and frees software from dependence on a proprietary environment. Object orientation has opened up the world of software - this alone justifies it.
