Chapter 26: From Design to Implementation

The goal of the implementation phase is to implement a system correctly, efficiently, and quickly on a particular set or range of computers, using particular tools and programming languages. This phase is a set of activities with:

Input:
Design, environmental, and performance requirements.
Output:
A working system.
Techniques:
Reconciliation, transformation, conversion, monitoring, testing.

Designers see objects as software abstractions. Implementors see them as software realities. However, as with the transition from analysis to design, structural continuity of concepts and constructs means that design and even analysis notions should flow smoothly and traceably into implementation.

The chief inputs from design to implementation may be categorized in a manner similar to those of previous phases. Again, while the headings are the same, the details differ.

Functionality:
A computational design of the system.
Resource:
The machines, languages, tools, services, and systems available to build the system.
Performance:
The expected response times of the system.
Miscellaneous:
Quality, scheduling, compatibility with other systems, etc.

Implementation activities are primarily environmental. They deal with the realities of particular machines, systems, languages, compilers, tools, developers, and clients necessary to translate a design into working code.

Just as the design phase may include some ``analysis'' efforts approached from a computational standpoint, the implementation phase essentially always includes ``design'' efforts. Implementation-level design is a reconciliation activity, in which in-principle executable models, implementation languages and tools, performance requirements, and delivery schedules must finally be combined, while maintaining correctness, reliability, extensibility, maintainability, and related criteria.

While OO methods allow and even encourage design iteration, such activities must be tempered during the implementation phase. In analogy with our remarks in Chapter 25, if everything can change, then nothing can be implemented reliably. Implementation phase changes should ideally be restricted to occasional additions rather than destructive modifications.

Implementation activities may be broken down along several dimensions, including the construction of intracluster software, intercluster software, infrastructure, tools, and documentation, as well as testing, performance monitoring, configuration management, and release management. Most of these were touched on briefly in Chapter 15.

Many excellent texts, articles, manuals, etc., are available on OO programming in various languages, on using various tools and systems, and on managing the implementation process. In keeping with the goals and limitations of this book, we restrict further discussion of the implementation phase to a few comments about testing and assessment that follow from considerations raised in Parts I and II.

Testing

  A design must be testable. An implementation must be tested. Tests include the following:

Code Inspections.
Reviews and walk-throughs. 
Self tests.
The tests created during the design phase can almost always be built into implemented classes and invoked during test runs and/or during actual system execution (see the sketch following this list).
White-box tests.
Tests that force most or all computation paths to be visited, especially those that place components near the edges of their operating conditions, are classic test strategies.
Portability tests.
Tests should be applied across the range of systems on which the software may execute. Tests may deliberately exercise constructions suspected to be nonportable at the compiler, language, tool, operating system, or machine level.
Integration tests.
Tests of interobject and interprocess coordination should be built at several granularity levels. For example, tests of two or three interacting objects, dozens of objects, and thousands of them are all needed.
Use cases.
Use cases laid out in the analysis phase should actually be run as tests. 
Liveness tests.
Tests may be designed to operate for hours, days, or months to determine the presence of deadlock, lockup, or nontermination. 
Fault tolerance tests.
Hardware and software faults may be injected into systems before or during testing in order to evaluate how the system responds to them.
Human factors tests.
While we do not concentrate much in this book on user interface design, any system, even one without an interactive interface, must meet basic human factors requirements. Tests and observations with potential users form parts of any test strategy.
Beta tests.
Use by outsiders rather than developers often compensates for testers' limited imagination about possible error paths.
Regression tests.
Tests should never be thrown out (unless the tests are wrong). Any changes in classes, etc., should be accompanied by a rerun of tests. Most regression tests begin their lives as bug reports.
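
For example, a built-in self test of the kind mentioned under ``Self tests'' might take the form of an operation that exercises a class and checks its design-level invariants, so that test harnesses (or the running system itself) can invoke it. The following C++ sketch is only illustrative; the Counter class, its invariant predicate, and its selfTest operation are hypothetical, not constructs drawn from any particular design in this book:

    #include <iostream>

    // A hypothetical class armed with a design-phase self test.
    class Counter {
    public:
      Counter() : count_(0) {}

      void increment() { ++count_; }
      void reset()     { count_ = 0; }
      long value() const { return count_; }

      // Design-level invariant: the count never goes negative.
      bool invariant() const { return count_ >= 0; }

      // Built-in self test, invocable from test runs or (e.g., behind a
      // compile-time flag) during actual system execution.
      bool selfTest() {
        reset();
        if (!invariant() || value() != 0) return false;
        increment();
        if (!invariant() || value() != 1) return false;
        reset();
        return invariant() && value() == 0;
      }

    private:
      long count_;
    };

    int main() {
      Counter c;
      std::cout << (c.selfTest() ? "self test passed" : "self test FAILED")
                << std::endl;
      return 0;
    }

Placing the test inside the class keeps it next to the representation it checks, so regression runs need only iterate over classes and invoke their self tests.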

When tests fail, the reasons must be diagnosed. People are notoriously poor at identifying the problems actually causing failures. Effective system-level debugging requires instrumentation and tools that may need to be hand-crafted for the application at hand. Classes and tasks may be armed with tracers, graphical event animators, and other tools to help localize errors.
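
For example, a class may be armed with a lightweight trace operation that logs entry to and exit from its operations, producing an event stream that replayers or animators can consume. The C++ sketch below is hypothetical; the Tracer class, the TRACE macro, the ENABLE_TRACING flag, and the Account class are invented solely for illustration:

    #include <iostream>
    #include <string>

    // A minimal tracing hook. Each traced operation constructs a Tracer on
    // entry; the constructor and destructor log entry and exit, so nesting
    // reflects the call structure.
    class Tracer {
    public:
      explicit Tracer(const std::string& where) : where_(where) {
        std::clog << "enter " << where_ << std::endl;
      }
      ~Tracer() {
        std::clog << "exit  " << where_ << std::endl;
      }
    private:
      std::string where_;
    };

    // Compile tracing in or out without touching the traced classes' logic.
    #ifdef ENABLE_TRACING
    #define TRACE(name) Tracer trace_guard_(name)
    #else
    #define TRACE(name) ((void)0)
    #endif

    class Account {
    public:
      void deposit(long amount) {
        TRACE("Account::deposit");
        balance_ += amount;
      }
      long balance() const { return balance_; }
    private:
      long balance_ = 0;
    };

    int main() {
      Account a;
      a.deposit(10);   // with -DENABLE_TRACING, logs enter/exit events
      std::cout << a.balance() << std::endl;
      return 0;
    }

Because the hook is a macro, tracing can be compiled in during testing and compiled out of production builds without modifying the traced classes themselves.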

Performance Assessment

Analysis-level performance requirements may lead to design-phase activities to insert time-outs and related alertness measures in cases where performance may be a problem. However, designers often cannot be certain whether some of these measures help or hurt.

Thus, while designers provide plans for building software that ought to pass the kinds of performance requirements described in Chapter 11, their effects can usually only be evaluated using live implementations. Poorer alternatives include analytic models, simulations, and stripped-down prototypes. These can sometimes check for gross, ball-park conformance, but are rarely accurate enough to assess detailed performance requirements.

Performance tests may be constructed using analogs of any of the correctness tests listed in the previous section. In practice, many of these are the very same tests. However, rather than assessing correctness, these check whether steps were performed within acceptable timing constraints.

The most critical tests are those in which the workings of the system itself are based on timing assumptions about its own operations. In these cases performance tests and correctness tests completely overlap. For example, any processing based on the timed transition declarations described in Chapters 11 and 19 will fail unless the associated code performs within stated requirements. 
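
For example, a test for such a timing assumption might wrap an ordinary correctness check with a clock and fail if a stated bound is exceeded. The C++ sketch below is hypothetical; the 250-millisecond bound, the process operation, and its simulated workload stand in for whatever timed requirement and code an actual design would supply:

    #include <chrono>
    #include <iostream>

    // Hypothetical operation under test; stands in for any service whose
    // design assumes completion within a stated time bound.
    bool process() {
      volatile long sum = 0;
      for (long i = 0; i < 1000000; ++i) sum += i;  // simulated work
      return sum >= 0;                              // correctness check
    }

    int main() {
      using namespace std::chrono;

      const milliseconds bound(250);  // assumed requirement, e.g., a timed transition

      steady_clock::time_point start = steady_clock::now();
      bool correct = process();
      milliseconds elapsed =
          duration_cast<milliseconds>(steady_clock::now() - start);

      // The test fails if the result is wrong OR the timing bound is missed:
      // here correctness and performance testing completely overlap.
      bool passed = correct && elapsed <= bound;
      std::cout << (passed ? "PASS" : "FAIL")
                << " (" << elapsed.count() << " ms, bound "
                << bound.count() << " ms)" << std::endl;
      return passed ? 0 : 1;
    }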

As with correctness tests, the reasons for performance test failures must be diagnosed. Again, people are notoriously poor at identifying the components actually causing performance problems. Serious tuning requires the use of performance monitors, event replayers, experimentation during live execution, and other feedback-driven techniques to locate message traffic and to diagnose where the bulk of processing time is spent and what it is spent on.
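
A hand-crafted monitor of this kind can be as simple as an accumulator of call counts and elapsed times per named operation. The following C++ sketch is hypothetical and deliberately minimal; serious tuning would rely on the kinds of monitoring tools cited under Further Reading:

    #include <chrono>
    #include <iostream>
    #include <map>
    #include <string>

    // Hypothetical in-process performance monitor: accumulates the number of
    // calls and total elapsed time per named operation, to help localize
    // where the bulk of processing time is spent.
    class PerfMonitor {
    public:
      void record(const std::string& op, std::chrono::microseconds elapsed) {
        Entry& e = stats_[op];
        e.calls += 1;
        e.total += elapsed;
      }
      void report() const {
        for (const auto& kv : stats_) {
          std::cout << kv.first << ": " << kv.second.calls << " calls, "
                    << kv.second.total.count() << " us total" << std::endl;
        }
      }
    private:
      struct Entry {
        long calls = 0;
        std::chrono::microseconds total{0};
      };
      std::map<std::string, Entry> stats_;
    };

    int main() {
      using namespace std::chrono;
      PerfMonitor monitor;

      for (int i = 0; i < 3; ++i) {
        steady_clock::time_point start = steady_clock::now();
        volatile long sum = 0;
        for (long j = 0; j < 100000; ++j) sum += j;   // stand-in for a message send
        monitor.record("Account::deposit",
                       duration_cast<microseconds>(steady_clock::now() - start));
      }
      monitor.report();
      return 0;
    }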

Performance tuning strategies described in Chapter 25 may be undertaken to repair problems. Alternatively, or in addition, slower objects may be recoded more carefully, coded in lower-level languages, moved to faster processors, and/or moved to clusters with faster interprocess interconnections.

If all other routes fail, then the implementors have discovered an infeasible requirement. After much frustration, many conferences, and too much delay, the requirements must be changed.

Summary

Ideally, object-oriented implementation methods and practices seamlessly mesh with those of design. Implementation activities transform relatively environment-independent design plans into executable systems by wrestling with environment-dependent issues surrounding machines, systems, services, tools, and languages.

Further Reading

As mentioned, many good accounts of implementation processes and activities are available. For example, Berlack [2] describes configuration management. McCall et al. [4] provide a step-by-step approach to tracking reliability. OO-specific testing strategies are described more fully by Berard [1]. System performance analysis is discussed in depth by Jain [3]. Shatz [5] describes monitoring techniques for distributed systems.

Exercises

  1. The borders between analysis, design, and implementation are easy to specify in a general way but ``leak'' a bit here and there. Does this mean the distinctions are meaningless?

  2. How do OO programming language constructs ensuring secure access protection make testing (a) easier (b) harder?

  3. Describe how to arm classes with trace operations.

  4. Which of the following is the path to OO utopia?
    1. better hardware
    2. better OO development methods
    3. better OO development tools
    4. better OO programming languages
    5. better OO system software support
    6. better OO software process management
    7. better economic conditions
    8. none of the above.

References

1
E. Berard. Essays in Object-Oriented Software Engineering. Prentice Hall, 1992.

2
H. Berlack. Software Configuration Management. Wiley, 1991.

3
R. Jain. The Art of Computer Systems Performance Analysis. Wiley, 1991.

4
J. McCall, W. Randell, J. Dunham, and L. Lauterbach. Software reliability measurement and testing guidebook. Technical Report RL-TR-92-52, Rome Laboratory USAF, 1992.

5
S. Shatz. Development of Distributed Software. Macmillan, 1993.
