Chapter 24: Designing Passive Objects

    

Clustering objects into processes yields a two-tiered architecture. The system as a whole appears as a relatively small number of interacting process-level active objects. Within each cluster lie all the passive objects that have been packed inside it. Internal cluster design activities perform necessary and desirable transformations applicable to those unfortunate objects that find themselves operating in a nonautonomous computational environment.

Transformations

  

Each process-level cluster serves as an agent that conceptually includes a queue of incoming requests, an interpretation mechanism for servicing them, and the collection of passive objects packed inside it.

This is just a small variation of our basic object model. Each of these properties could be ascribed to any object at all. In fact, each cluster serves as an OO simulation kernel as described in Chapter 15. But there are now two additional considerations reflecting the fact that clusters communicate with others. The queue must be able to receive messages from other clusters. Incoming messages that conform to those listed in the cluster's external interface are placed on the queue via system-level magic. Also, the agent must be able to send external messages to other clusters, normally isolated via proxies or related mechanisms.

The addition of intercluster messaging transforms each cluster into a special kind of joint action coordinator (Chapter 22). Not only are the states of all external objects beyond its control, but all internal objects are purely passive and unprotected. However, from a suitably abstract perspective, cluster agents still perform basic interpretation in the manner described in Chapter 15.
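For illustration, here is a rough C++ sketch of such an agent loop. All of the names (Request, PassiveObject, RequestQueue, clusterAgentLoop) are invented for this sketch, and the queue merely stands in for the shell-level machinery that actually receives intercluster messages:

#include <queue>

struct Request;                       // forward declaration

// Each passive object interprets requests addressed to it by executing
// the corresponding operation.
class PassiveObject {
public:
  virtual void perform(const Request& req) = 0;
  virtual ~PassiveObject() {}
};

struct Request {
  PassiveObject* target;              // recipient within the cluster
  int opCode;                         // which operation to run; arguments would follow
};

class RequestQueue {                  // stand-in for the shell-level queue
  std::queue<Request> pending;
public:
  bool empty() const { return pending.empty(); }
  void put(const Request& r) { pending.push(r); }   // called when messages arrive
  Request take() { Request r = pending.front(); pending.pop(); return r; }
};

// The agent repeatedly accepts a request and interprets it by invoking
// an operation on the appropriate embedded passive object.
void clusterAgentLoop(RequestQueue& q) {
  while (!q.empty()) {                // a real agent would block here, not exit
    Request r = q.take();
    r.target->perform(r);
  }
}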

The nature and form of internal passive objects still meet our definition of objecthood in Chapter 1. Any object and its class may be cast in a form that enables passive simulation by a cluster agent. We do not need new ODL constructs to distinguish passive zero-threaded objects from active ones. In fact, we adopt the best pragmatic notation later in this chapter, where we transform such classes into common OO programming language constructs that only support definition of passive objects.

Single-Threaded Clusters

    

Clusters may be classified into two basic categories, single-threaded and multithreaded. Single-threaded clusters are pure servers, containing nothing but guardless blocking service operations and/or functional attributes. All others are multithreaded.

Actions within single-threaded clusters can be fully serialized. Each event can be made to lead to a single unique next-event within the cluster. For this reason, single-threaded clusters are substantially easier to design and implement than others. With only a few minor snags, most classes retain their forms when moving from active to passive status.

Single-threaded clusters are essentially identical to standard sequential programs. Only one thread of control is ever active within the cluster. When a cluster agent receives a request, it may invoke an operation on the appropriate embedded object using a sequential procedure call, ultimately receiving the result in the same fashion.

Single-threaded clusters may be constructed by transforming all internal operations and messages into local procedures and procedure calls. There may still be outbound one-way messages in single-threaded clusters, most typically to ``sinks'' including loggers, I/O devices, and notification relays.

Internal one-way ops may be proceduralized. Since only one thread may execute at any given time, it is harmless for the client to simply wait out an invoked one-way operation. External one-way calls should be handled (usually through proxy mechanisms) so that they return immediately to the internal senders. All other interaction constructs (e.g., early replies) may need to be translated into these more primitive forms before applying the transformations.
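The following C++ sketch illustrates both cases. Logger and NotifierProxy are invented names; the proxy's outbound buffer stands in for whatever shell-level transport actually relays the message:

#include <string>
#include <vector>

// An internal "sink": its one-way op becomes an ordinary procedure call
// that the single thread simply waits out.
class Logger {
  std::vector<std::string> lines;
public:
  void log(const std::string& msg) { lines.push_back(msg); }
};

// A proxy for another cluster: its one-way op returns immediately to the
// internal sender, leaving actual delivery to the shell.
class NotifierProxy {
  std::vector<std::string> outbound;   // stand-in for shell-level transport
public:
  void notify(const std::string& msg) { outbound.push_back(msg); }
};

void step(Logger& log, NotifierProxy& other) {
  log.log("step completed");           // internal one-way op, proceduralized
  other.notify("step completed");      // external one-way op, returns at once
}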

Our analysis- and design-level atomicity guarantees carry no implementation consequences at this level. Since no operation will ever be interrupted, there is no need to implement any kind of protection mechanism. Of course, the cluster itself must be protected via shell-level mechanics.

Protocols

While the cluster itself may contain a queue to hold requests received while it services others, passive objects residing within single-threaded clusters cannot possibly contain any pends associated with tests for internal conditions. (Guards that reference external objects should be converted into other forms described in Chapter 22.) In purely sequential environments, if a guard referencing internal state is not true when a message is invoked, it will not ever become true if the sender is blocked waiting for it. For example, if a sender invokes s.top for some s:Stack when it is empty, it is senseless to wait. The process will simply lock up. Thus, this situation ought to be transformed into an error condition. This is, of course, standard practice in the design of sequential stacks, where attempts to read from an empty stack are treated as exception conditions.

Thus, error protocols in objects designed to operate in pure sequential environments are sometimes defined differently than for those that may operate concurrently. However, this has little to do with passivation in general. The reason that an empty sequential stack top access should lead to an error is that we know it is pointless to wait, and (implicitly) we believe that it is better to enter an error protocol than face certain infinite postponement. If we discovered that nothing could unblock a top request in a concurrent design, we might make the same choice.  

However, different screening protocols are more widely useful in sequential designs. In a single-threaded cluster, a client of a stack may itself test whether the stack is empty before calling top. The client may be confident that if the stack was not empty when tested, the request cannot fail. In a concurrent setting, other messages may intervene between the test and the top request unless locking protocols are employed.
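For concreteness, here is a small C++ sketch of both protocols for a sequential stack; IntStack and topOrDefault are invented names:

#include <cassert>
#include <vector>

class IntStack {
  std::vector<int> items;
public:
  bool isEmpty() const { return items.empty(); }
  void push(int x)     { items.push_back(x); }
  int  top() const     { assert(!isEmpty()); return items.back(); }  // error protocol, not a pend
  void pop()           { assert(!isEmpty()); items.pop_back(); }
};

// Client-side screening: safe here because no other message can intervene
// between the test and the call in a single-threaded cluster.
int topOrDefault(const IntStack& s, int dflt) {
  return s.isEmpty() ? dflt : s.top();
}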

Multithreaded Clusters

    

Multithreaded clusters also contain only passive objects. However, at any given time, the component objects may be in states reflecting participation in several partially completed timethreads. Multithreading is possible by virtue of the first-class active status of the cluster agent itself. The continuum of possibilities ranging from one-process-per-object to one-process-per-system rests on the idea that one active object can ``swallow up'' and service any number of others.

The most general passivation strategy is extremely simple to describe, but extremely painful to perform. All possible ``microstates'' of all objects must be converted into a canonical form executable by means of the interpretation mechanism described at the beginning of this chapter. This entails converting conditionals to messages, transforming blocking calls and sequences of operations to wait-state guarded callback protocols, adding attributes for each object indicating whether it is logically ready (not engaged in a public operation), and so on. These are the same transforms needed in order to implement the simulator described in Chapter 15, as extended to allow multiple concurrent interpreters. Without tools, canonical conversion is impossibly difficult and error-prone.

Moreover, even if these conversions could be performed automatically, the resulting system would not be very useful. Without extensive manual optimization, the overhead required for interpreting objects in this fashion is too high to take seriously for production software. Instead, these mechanisms must be resorted to only when strictly necessary.

Task Scheduling

      

Objects and sequences of computation in multithreaded clusters may be converted into procedural form using the strategies listed for single-threaded clusters. The scheduling capabilities of the cluster agent need be invoked only when linear proceduralization fails.

This style of processing is sometimes termed ``thread'' or ``task'' programming. Effective design relies on a service-centered rather than object-centered approach. The basic idea is to map out all possibly concurrent timethreads, or sequences of operations that may come into play in the servicing of cluster-level messages. All objects that may be involved in multiple threads must be protected using locks, queues, and scheduling mechanisms to avoid interference and lockup.

The ``directionality'' of this approach makes it more tractable and familiar, but also somewhat more dangerous than canonical conversion. Instead of starting with an assumption of full protection and then loosening mechanisms as optimizations, these methods attempt to determine the minimum set of protection mechanisms necessary for correct functioning. In the worst case, a full conversion to canonical interpretive form may be necessary. However, this never happens in practice.

Thus, even here, most classes retain their original design structure when converted from active to passive forms. However, faithfulness of converted classes to their original active designs can be difficult to assess. Success relies on finding all situations in which objects could possibly interfere, block, or loop. Testing is imperative.

Many tools, support packages, and even programming language constructs (e.g., Ada tasks)  exist to aid implementation. Details vary widely, and many special techniques apply to only certain tools. We describe only some common capabilities.

Locks

  Standard semaphore-based locking mechanisms may be used to indicate whether objects are logically ready, and thus to enforce atomicity requirements. These may employ the same locking techniques described in Chapter 22. In fact, semaphore locks may be used to implement those arranged in earlier design steps. Otherwise, only those objects that may participate in multiple tasks need be lockable.

For lockable objects, operations may be converted to invoke LOCK(self) on entry and RELEASE(self) on exit. A less disruptive strategy is to wrap the original inside a view class (see Chapter 17):

class X ...   op a: () { ... } end

class ManagedX
  own x: X;
  op a: () { LOCK(x); x.a; RELEASE(x) }
end
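In C++, the same view-class idea might be sketched as follows, using the standard std::mutex purely as a stand-in for whatever LOCK and RELEASE primitives the underlying task package provides:

#include <mutex>

class X {
public:
  void a() { /* ... original operation body ... */ }
};

// ManagedX wraps the original, bracketing each operation with
// acquire/release so only lockable wrappers need change.
class ManagedX {
  X x;
  std::mutex m;       // plays the role of the per-object ready lock
public:
  void a() { m.lock(); x.a(); m.unlock(); }
};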

Queues

     

As described in Chapter 19, one or more delay queues may be used to hold waiting operations. Boolean conditions may be associated with each queue. If queues are associated with the above ready locks, then locking and queuing may be combined. Queue maintenance is most often performed using monitor and/or port constructions. These queues are not boundless. Queue overflow may lead to either process-level blocking or failure, requiring associated error protocols.

Queue checking in task packages is normally entirely event-driven. Operations that affect conditions must call the appropriate queue management facilities. Any operation that changes a condition must (perhaps conditionally) trigger processing. For example:

class X 
  stat: Bool;
  local op doB;
  op b: () { if stat? then doB else pend end }
  op c: () { stat.t! }
end

This might be converted to use a delay queue:

  op b: () { if stat? then doB else DELAY(XStatQ, doB) end }
  op c: () { stat.t!; SIGNAL(XStatQ) }

Again, we have used these constructions before. These are the same self-notification and queuing strategies described in Chapter 19. Alternatives may be based on several of the designs discussed in Chapter 22; for example, versions of blackboard schemes   in which each passive object uses a polling loop to obtain stored operation requests.
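For comparison, here is a C++ sketch of the same stat/doB protocol, using a standard condition variable as a stand-in for the task package's DELAY and SIGNAL facilities; the names follow the ODL fragment above:

#include <condition_variable>
#include <mutex>

class X {
  bool stat;
  std::mutex m;
  std::condition_variable statQ;   // plays the role of XStatQ
  void doB() { /* ... */ }
public:
  X() : stat(false) {}
  void b() {                                     // wait until stat holds, then doB
    std::unique_lock<std::mutex> lock(m);
    while (!stat) statQ.wait(lock);              // DELAY(XStatQ, doB)
    doB();
  }
  void c() {                                     // stat.t!; SIGNAL(XStatQ)
    { std::lock_guard<std::mutex> lock(m); stat = true; }
    statQ.notify_all();
  }
};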

Scheduling

An operation may request itself to be suspended in a special queue. This frees the CPU to check other delay queues and/or execute other suspended tasks. Associated timers and predefined policies are typically available to aid in scheduling details. All polling loops and other potentially infinitely looping operations must be broken up with occasional suspension requests to prevent process lockup. Finding all of these situations can be difficult. This is a disguised version of the classic problem of infinite loop detection, which is unsolvable in general, but usually conquered through hard work in practice.
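As a tiny illustration, here is a polling loop broken up by suspension requests; the function-pointer parameters are invented, and std::this_thread::yield stands in for the package's suspend-self primitive:

#include <thread>

// Repeatedly poll for work, but yield on each idle pass so that other
// tasks and delay-queue checks get a chance to run.
void pollForWork(bool (*workAvailable)(), void (*doWork)(), bool (*shuttingDown)()) {
  while (!shuttingDown()) {
    if (workAvailable()) doWork();
    else std::this_thread::yield();   // suspension request breaks up the loop
  }
}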

Storage Management

     

As described in Chapter 23, clusters may be viewed as containing one or more repositories holding passive within-cluster objects. In this section, we outline some basic internal storage management techniques based on repositories. However, storage management services are sufficiently complex and sufficiently dependent on quality-of-implementation factors that it is almost always best to adopt services rather than design them yourself unless you really need to.

So far, we have been tacitly assuming the standard OO lifetime policy that objects somehow live ``as long as they are useful''. We need to further dissect this statement to focus in on management strategies. We start with the easiest case. Consider a repository that never imports or exports the identities of its held objects, and whose held objects do not maintain links to one another.

In this case, logical management may be equated with physical management. In other words, under these conditions, the repository can be sure that a logical remove operation may be accompanied by a storage deallocation. In ODL, the storage deletion operation delete( link ) irrevocably recovers the resources occupied by the object referred to by the link. Here, a remove operation may in turn invoke delete on the removed object. Similarly, under such circumstances, deletion of the repository itself may trigger deletion of all held objects.
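Under those conditions, a repository might be sketched in C++ as follows; Item and Repository are invented names:

#include <algorithm>
#include <cstddef>
#include <vector>

class Item { /* ... a passive within-cluster object ... */ };

class Repository {
  std::vector<Item*> held;
public:
  Item* add() { Item* p = new Item(); held.push_back(p); return p; }
  void remove(Item* p) {                 // logical removal ...
    held.erase(std::remove(held.begin(), held.end(), p), held.end());
    delete p;                            // ... safely doubles as storage deletion
  }
  ~Repository() {                        // deleting the repository deletes all held objects
    for (std::size_t i = 0; i < held.size(); ++i) delete held[i];
  }
};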

Garbage Collection

  While this framework can be extended in various ways, when object identities are imported and/or exported from a repository, useful-lifetime tracking becomes a nonlocal issue. A single repository cannot itself determine whether any of the held objects may be safely deleted. Similar remarks hold even when identities are not exported, but internally managed objects contain cross-links (e.g., accounts and clients with links to each other).

The analysis of storage management requirements is a special form of dependency tracking. The useful lifetime of each object is dependent on those of all others that may ever try to communicate with it. A dependency graph could be constructed showing the lifetime dependencies of all objects in a system. At any point during system execution, all of those objects that are not ultimately reachable from one or more ``main'' system objects may be deallocated.

  Establishing storage management methods based on such a graph would be a good idea, except that it is impossible. The dependencies are defined on dynamically constructed objects. Normally, information about exactly which objects will be created in a system cannot be determined without executing or simulating it. In those cases where upper bounds may be determined in advance, the storage for each object may be preallocated before execution time. This is a realistic option in real-time systems and other designs in which resources are fully laid out before execution.

It is conceivable to create run-time mechanisms that implicitly maintain dependency graphs and perform the associated management. For example, if a repository were notified each time one of its held objects became (1) needed and (2) no longer needed by any other client in the system, then it could record the clients in a set, and delete the object when the set becomes empty. It must also deal with cases in which, say, each of a pair of objects needs the other, but neither is needed by any other live client. But this kind of tracking is usually completely impractical. The notifications and corresponding bookkeeping operations would swamp a system.

The only alternative is automatic storage management, or garbage collection (GC). GC is an ``infrastructure'' task that normally needs to be implemented with the help of some system magic. The basic idea is to track lifetimes without requiring explicit notifications. This is performed using methods that track reachability by secretly inspecting and traversing links during execution.

There are two basic approaches to GC. Most older methods use variations of mark and sweep algorithms. They start with one or more main objects and then mark all objects transitively reachable from them as live. After this pass, all unmarked objects are deallocated. Most newer methods are based on copying collection. Memory is divided into two or more regions. At any given time, only one is used for allocation. When space runs out, only the still-live objects are copied from one region to another. Among the advantages of this strategy is that it may be implemented in smaller steps, avoiding situations in which the system appears to be ``shut down'' for noticeable periods while performing collection.

Manual Storage Management

If garbage collection facilities are not available and cannot be constructed, then manual storage management strategies must be applied. The two most common cases occur when (1) dealing with programming languages that do not provide within-cluster garbage collection, and (2) performing process-level management (i.e., deleting no-longer-needed clusters), for which few tools are currently available.

Manual storage deallocation is highly error prone, and errors are terribly dangerous. A call to delete(ob) may kill off an object that is still potentially useful. Any further invocation of any operation on ob will result in system failures requiring recovery mechanisms that are best avoided.

Despite this, there are many cases in which one object can easily tell that another is no longer needed. For example, one object may kill off a component when it is known to be exclusively held but no longer used. Nonexported links qualified by own have this property. This may include the case where the outer object itself is deleted. All components with necessarily coexistent lifetimes may also be killed. It is convenient to wrap these cascades inside ``destructor'' operations that are then invokable in one fell swoop.

Primary reliance on this strategy amounts to standardization on particular lock-style acquire/release protocols in which a constructor acquires the resources necessary to generate an object, but the object itself releases them when it is about to die. Similar reasoning applies to many ``functional'' objects; e.g., most objects constructed via WRAP. These are deletable after being used (invoked) once. Doing so manually simulates standard programming language level run-time mechanisms that delete storage for procedure activations after they return. 

While such schemes typically cover many deletion cases, they do not hit all of them. Objects cannot delete others that remain accessible through other means. No single object knows whether there are still other shared links. In such situations, backup strategies are necessary.

Controlling Shared Resources

The notion of manual lifetime tracking may be generalized a bit to apply to other aspects of resource control. Sharable, volatile resources are those that must be created on first access, maintained while being accessed by possibly many other objects, and destroyed when they are no longer being accessed. This is just a small narrowing of basic automatic storage management rules. The main difference is that destruction is triggered as soon as the resource is no longer being used.

Reference Counting

The standard approach is based on an acquire/release protocol. Client objects must cooperate by explicitly requesting access to a resource, and explicitly releasing it when they are done. An agent merely keeps track of how many objects have requested the resource, and maintains things accordingly. For example:  

class CountedResource
  local r: opt Resource;
  own refCount: Counter <> init refCount? = 0
  op acquire: Resource {
      if refCount.isZero then r := new Resource... end;
      refCount.inc; reply r }
  op release: () {
     refCount.dec; if refCount.isZero then delete(r) end }
end

This is a simplification of the general scheme described earlier. Instead of keeping track of users in a SET, the agent only maintains a count. A zero count corresponds to an empty set.

Reference counting is a somewhat fragile protocol. Any such design should include additional provisions to enhance safety. All access to resources must, of course, go through CountedResources, normally managed through a repository. This may in turn be made more robust by defining a secondary interface in which all operation requests are mediated through pseudo-IDs.

Reference counting has a more serious limitation. If two objects each maintain a reference to each other, but both are otherwise unreachable, then neither will be killed even though they are both useless.

Copy-On-Write

  An extension of reference counting is copy-on-write sharing, in which clients share objects as long as they do not change them. However, before they attempt to send a transition request, they must obtain their own local version of an object. This protocol is fragile enough to demand intervention from a repository agent that intercepts mutative requests and performs the required actions.
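Here is a minimal C++ sketch of such mediation, in which a counted handle copies the shared object before the first mutation. CowHandle and Buffer are invented names, and assignment is simply disabled to keep the sketch short:

#include <string>

class Buffer {              // the shared, potentially mutable resource
public:
  std::string data;
};

class CowHandle {
  Buffer* buf;
  int* refs;                // number of handles sharing buf
  void detach() {           // obtain a private version before any transition
    if (*refs > 1) { --*refs; buf = new Buffer(*buf); refs = new int(1); }
  }
  CowHandle& operator=(const CowHandle&);   // assignment disabled in this sketch
public:
  CowHandle() : buf(new Buffer()), refs(new int(1)) {}
  CowHandle(const CowHandle& h) : buf(h.buf), refs(h.refs) { ++*refs; }
  ~CowHandle() { if (--*refs == 0) { delete buf; delete refs; } }
  const std::string& read() const { return buf->data; }          // readers share
  void write(const std::string& s) { detach(); buf->data = s; }  // writers copy first
};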

Fixed Resources

A repository may manage a fixed number of functionally identical objects and provide access to any one of them on request. For example:

class FixedResourceMgr
  own pool: ARRAY[Resource];
  own inuse: ARRAY[Bool];
  allInUse: bool; 
  op acquire: Resource when ~allInUse then 
              % mark and return an unused resource % end
  op release(x: Resource); % record as free %
end

Passive Objects in C++

  

Most class designs survive embedding and passivation relatively intact. To demonstrate the effects of this, we sketch the syntactic conversion of sequential, passive ODL classes into corresponding C++ constructs that enable clusters of passive ODL objects to be implemented.

We will not describe C++ or how to program in it. In fact, we will assume you know the basics or will find them out before applying the mechanics described here. There are several good ``standard'' accounts of C++. These include introductory [12], intermediate [15,13], and advanced [7] texts, as well as a reference manual [9].

We use here only a subset of C++ and stress mechanics over cosmetics. We concentrate on translating computational information. We include declarative constructs only when they are available in the language. For example, C++ does not support declarative inv constraints, `` ==>'' effects or anything mappable to them. We also ignore the possible need to employ locks, queues, and scheduling.

Basic Types

ODL value types map into C++ with a few obvious, easily addressable snags. For example, ODL uses only int for integer values. This must be translated into any of short, int, long, or even special multiprecision representations, on a case-by-case basis. Similar rules apply for real versus float, double, long double. Booleans are conventionally defined as typedef int bool. The type time maps into whatever types are required to represent times on a system. Fixed vectors are easiest to translate as little structs (for example, struct string50 {char s[50];}). This forces copy-based calling conventions when they are passed as values. The blob type may be represented as arrays of bytes (char). ODL records may be translated as const structs.
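For illustration only, one plausible set of such mappings follows; the names and widths are conventions to be chosen case by case, not mandates:

/* typedef int bool; */                /* pre-standard convention; bool is now built in */
typedef long   odl_int;                /* ODL int: short, int, long, or multiprecision  */
typedef double odl_real;               /* ODL real: float, double, or long double       */
struct string50 { char s[50]; };       /* fixed vector as a little struct, copied by value */
typedef unsigned char blobByte;        /* blob elements as raw bytes                    */
struct DateRec { const int day, month, year; };  /* an ODL record as a const struct     */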

Like most procedurally-based languages (but unlike ODL), C++ does not distinguish between built-in value types and built-in object types at the declaration level. Thus, ODL INTs also map to C++ int. However, there is no C++ analog of the representation-independent Int type. There are only the concrete, nonsubclassable versions. For the moment, we will assume that Int also translates to C++ int. Unlike most other procedural languages, C++ contains mutator operations on built-in object types. For example, ODL c.inc may be mapped to C++ c++.

Classes

    Abstract classes may be translated into C++ ``abstract base classes'' declaring pure virtual features. Generally, fns in ODL translate into ``const methods'' in C++ abstract classes, and ops into plain (non-const) methods. For example:

class Counter {
public:
  virtual int  count() const = 0;
  virtual bool isZero() const { return count() == 0; }
  virtual void inc() = 0;
  virtual void dec() = 0;
  virtual void clear() = 0;
  virtual bool invCheck() { return count() >= 0; }
};

The invCheck operation is a simple example of a self-test, as described in Chapter 16.

The computed attribute isZero, which was given a constant abstract definition in Chapter 16, is provided here with a concrete definition. Since there are no abstract constraint constructs in C++, this is the best we can do. It is partly a matter of taste whether to declare it virtual. If it is nonvirtual, then it is known to compute the right value, but it may not compute it in the best way. For example, if a concrete subclass were based on a list of some sort, it might be easy to tell if it were empty, but harder to find out the exact count.

Links

   Internal links correspond to C++ pointer types, and fixed links to const pointers. Generally, local access conventions correspond to C++ private. In fully concrete classes, stored links may be represented as member variables. The Lamp example might look like this: 

class Lamp {
public:
  virtual bool on() const = 0;
  virtual void flip() = 0;
};

void invertSet(bool* dest, bool v);   // declared before use; defined below

class LampV1 : public Lamp {
private:
  bool* sw;                           // `switch' is a C++ keyword, so the link is named sw
public:
  bool on() const { return *sw; }
  void flip() { invertSet(sw, *sw); }
};

void invertSet(bool* dest, bool v) { *dest = !v; }

As an extremely common application of the optimizations to be described in Chapter 25, any component of a built-in type that is qualified as packed may be directly embedded in its host object. This applies to own attributes manufactured by ODL `` <>'' conventions. For example: 

class LampV2 : public Lamp {
private:
  bool sw;                            // embedded directly in the host object
public:
  bool on() const { return sw; }
  void flip() { sw = !sw; }
};

Standard export policies include never taking the address of such objects.

Value-holding classes generally transform to a special set of idioms in C++. These classes may employ overloaded operators, value-based assignment, copy-constructors, etc., in order to make the resulting objects look as close to built-in values as desirable. Strategies for doing so are described in the standard accounts. These techniques may be used to obtain object status for built-in types, using a variant of the strategy described in Chapter 17:

class RealVal { public: virtual const float val() const = 0; };

class RealValV1 : public RealVal {
private:
  float _val;
public:
  const float val() const { return _val; };
  RealValV1(float s) : _val(s) {}
};

These kinds of classes may be used to simulate raw value types when the assumed characteristics of built-in types do not hold (e.g., for multiple precision integers).

Inheritance

  Abstract ODL subclass constructions translate into C++ ``public'' derivation, with all features (even links) declared as virtual methods. Rather than using property inheritance rules, C++ requires strict signature matching for redeclared versions of operations within subclasses, which may lose information about return types. However, C++ provides a way for clients of ``mistyped'' procedures to recover lost information. Pointers may be ``downcast'' to more specific types. There is no run-time check to determine that the cast type conforms to the actual type, so correctness depends on the programmer. (This situation may change in the upcoming C++ standard.) There is no Any root to the C++ type system. However, void* is normally used for analogous purposes. All usages must be downcast into specific class pointer types on dereference.

Construction

  In C++, constructors are listed within the class declarations of the objects they construct. It is not hard to recast this ODL-style if desired. A single constructor may be defined that binds all pointers to supplied objects and/or initializes embedded objects to initial values:

class LampV1 : public Lamp { ...
public:
  LampV1(bool* init_sw) : sw(init_sw) {}
};

class LampV2 : public Lamp { ...
public:
  LampV2(bool initstate) : sw(initstate) {}
};
...
Lamp* l1 = new LampV1(new bool(0));
Lamp* l2 = new LampV2(0);

Construction may be assigned to a generator by making the constructor private, but declaring its manager as a friend:

class LampV1 : public Lamp { ...
  friend class LampV1Gen;
private:
  LampV1(bool* init_sw) : sw(init_sw) {}
};

class LampV1Gen { ...
public:
  Lamp* dflt() { return new LampV1(new bool(0)); }
};
...
Lamp* l3 = aLampGen->dflt();

Other Constructs

Multiple inheritance.

  C++ multiple abstract inheritance works in the expected fashion, but only if all superclasses are declared as public virtual, rather than just public. We downplayed the idea of multiple concrete inheritance at the design level. Several authors (e.g., [5,14]) argue for avoidance of the corresponding C++ constructs.

Generics.

     Parameterized classes and operations may be implemented using macros, templates, and/or simple preprocessor tools.

Open reuse.

  C++ private subclassing is roughly similar to ODL opens. The main difference is that pure private subclassing does not fully ``open'' the reused declaration. The private parts of the class remain inaccessible. Also, the reused parts are not reinterpreted in the new context.

Partitioning.

  C++ does not support a oneOf subclassing constraint. Regular subclassing may be used instead. Because the set of oneOf classes cannot change, each class may carry a ``type tag'' enum referenced in switch statements. Recall that oneOf is only useful when the partitions are based on logical necessity, not convenience. Only in these cases are nonextensible tags and switches always OK.
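A small sketch of the resulting tag-and-switch idiom under a closed (oneOf-style) partition; Account, Checking, and Savings are invented names:

class Account {
public:
  enum Kind { CHECKING, SAVINGS };    // closed by the oneOf assumption
  virtual Kind kind() const = 0;
  virtual ~Account() {}
};

class Checking : public Account { public: Kind kind() const { return CHECKING; } };
class Savings  : public Account { public: Kind kind() const { return SAVINGS;  } };

double monthlyFee(const Account& a) {
  switch (a.kind()) {                 // nonextensible, so the switch stays complete
    case Account::CHECKING: return 5.0;
    case Account::SAVINGS:  return 0.0;
  }
  return 0.0;                         // unreachable under the partition assumption
}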

Type tests.

  In ODL we use type membership predicates x in X mainly as specification devices, rather than as run-time type tests. But some constructs are easier to express using dynamic type testing. As of this writing, it is unclear whether C++ will ``officially'' support run-time type tests. Many work-around strategies are known and described in the standard accounts.

Dispatching.

  Of the three senses of dispatching described in Chapter 21, C++ directly supports only the first, selection, through virtual methods. The second, resolution, may be simulated using type tests and/or double dispatching transformations, as described in Chapter 21. For the third, static class methods may be used in the special but most common case where object dispatching always dispatches to the exact same ODL-level object. In these cases, ODL `` $'' translates to C++ `` ::''. All other cases must be implemented using explicit name servers and relays. (Similar reasoning leads to the use of static class methods to translate ODL common attributes.)

Wrappers.

  While not quite equivalent, C++ pointers to member functions may be used to approximate wrappers. A tedious alternative is to predefine all Wrapper subclasses that will be used in a program. We defined the ODL WRAP macro just to eliminate such tedium. This is difficult to simulate using C++ macros since a new class name may need to be generated for each wrapper. A very simple preprocessor tool could handle this and related macros.
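A tiny sketch of the pointer-to-member approximation; CounterV1 and applyOp are invented names:

class CounterV1 {
  int n;
public:
  CounterV1() : n(0) {}
  void inc() { ++n; }
  void dec() { --n; }
  int  count() const { return n; }
};

typedef void (CounterV1::*CounterOp)();      // a wrapper-like "bound operation" type

void applyOp(CounterV1& c, CounterOp op) { (c.*op)(); }

// usage: applyOp(myCounter, &CounterV1::inc);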

Exceptions.

  Although implementations are not yet widely available as of this writing, the C++ standard incorporates a form of exceptions (see [9]). If they are available, the corresponding ODL constructs may be transparently mapped to them. Exceptions may also be used to implement named replies and related constructs discussed in Chapter 20.

Collections.

  Libraries of collection classes similar to those described in Chapter 18 are commonly available in C++. They may be based on templates, macros, or simple tools. Implementation strategies may be found in the standard accounts. Many C++ collections are actually structured as repositories. This is often a better fit to storage allocation schemes.

Storage Management

  C++ does not provide automatic storage management. Because of C-based pointer insecurities, it is difficult to design your own program-wide garbage collector. Good ones exist, but they are either conservative but leaky (e.g., [3]), meaning that they may fail to deallocate some storage (although widespread anecdotal experience suggests that such leaks are so rare as to not be an issue), or they are tight but restricted (e.g., [8]), meaning that they may only be used if certain programming conventions are flawlessly held to. Failing adoption of such a collector, a multi-tiered approach is taken to implement storage management.

As discussed earlier, destructor methods may be defined that kill off all own and unique components held by a host object. (No special destructor code is needed for own components that have been directly embedded in their hosts.) This helps manage deallocation by distributing responsibility for it. A call to delete ob kills off ob and all of the other objects that it knows it can safely destroy. Destructors may also be called for unique objects in the course of rebinding. For example:

class X { // ...
  Y* y;   // ODL unique
public:
  void rebindY(Y* newY) { if (y != newY) { delete y; y = newY; } }
};

Any object that must only be used in the enclosing procedure scope may be declared as a ``local'' by invoking a constructor rather than new, so that its destructor will be automatically invoked at procedure exit. Supporting this requires changes in construction conventions. ``Direct'' construction calls should go directly to the class constructor operations, not through generator objects.
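For example, assuming the LampV2 constructor shown earlier, a purely scope-local object might be declared directly so that any destructor runs automatically at exit:

void blinkOnce() {
  LampV2 local(0);      // direct construction, bypassing new and the generator
  local.flip();
}                       // local's destructor (if any) is invoked here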

Additional infrastructure is needed for handling the many cases where objects cannot tell whether or when to issue destructor calls. Any of the methods described earlier in this chapter may be adapted for use. One implementation strategy is to use counted pointers via classes that behave as pointers for others, while also maintaining reference counts whenever the pointer is used. Such classes are variants of smart links described in Chapter 22. They may be generated through templates, macros and tools. Details are described by Coplien [7], which we thoroughly recommend.
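A bare-bones sketch of such a counted pointer follows; CountedPtr is an invented name, and production versions must handle many additional cases:

template <class T>
class CountedPtr {
  T*   obj;     // the shared object
  int* count;   // number of CountedPtrs referring to obj
public:
  explicit CountedPtr(T* p) : obj(p), count(new int(1)) {}
  CountedPtr(const CountedPtr& other) : obj(other.obj), count(other.count) { ++*count; }
  ~CountedPtr() { if (--*count == 0) { delete obj; delete count; } }
  CountedPtr& operator=(const CountedPtr& other) {
    if (this != &other) {
      if (--*count == 0) { delete obj; delete count; }
      obj = other.obj; count = other.count; ++*count;
    }
    return *this;
  }
  T* operator->() const { return obj; }
  T& operator*()  const { return *obj; }
};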

Summary

Passive, embedded objects residing in clusters may be constructed using classic within-process constructs including semaphores, queues, scheduling, and interprocess messaging. Typically, the vast majority of objects in a cluster need little explicit transformation from their active forms. Unfortunately, successful conversion is not at all mechanical. Effective exploitation of special cases remains crucial for arranging efficient execution.

Translations of ODL constructions describing passive sequential objects are available in C++. Similar translation schemes may be devised to convert this subset of ODL into other OO languages. Translations lose some, but by no means all abstract declarative information. The listed translation techniques provide only a start to full implementation efforts.

Further Reading

User guides for specific thread and task packages (e.g., under Mach , SunOS) remain among the best references for both design and mechanics. Buhr and Ditchfield [4] and Bershad [1] describe C++ extensions and tools containing requisite features. An alternative to thread packages is to use self-contained systems such as Linda [6]  or ISIS [2] . A number of other C++-specific tools and techniques have been described in Usenix C++ conference proceedings. Garbage collection algorithms are described in more detail in Lee [10].

Exercises

  1. Estimate how much slower a simple fully queue-based cluster interpreter would be compared to (a) interpreted OOPL code (e.g., Smalltalk), (b) compiled OOPL code (e.g., C++), (c) assembly code, and (d) specially designed massively parallel MIMD hardware.

  2. If a Stack object is (indirectly) accessible from an external object, is it (a) always (b) ever justifiable to convert pends to errors?

  3. Sketch out the structure of a cluster containing the ATM device-control objects (e.g., CardEater).

  4. Describe the transformations required to convert an externally directed message originally phrased using an op with a callback Wrapper.

  5. If ODL so easily translates to C++, then why didn't we use C++ throughout Part II?

  6. Explain the different senses of const in C++ and their relations to ODL constructs.

  7. Measure the difference in performance between applications using classes such as RealVal versus simple floats.

  8. Show the C++ version of the Balance class described in Chapter 17.

  9. Exactly what are the C pointer insecurities that preclude simple garbage collection?

  10. Stored attributes may not be redefined in subclasses in C++. Explain why this is not a major impediment in converting most ODL designs.

  11. Compare the kinds of C++ constructs generated by our translation techniques with those most commonly employed in a large C++ application program or framework you have available (e.g., InterViews [11]).

  12. List three C++ constructs that cannot be described in ODL.

References

1
B. Bershad. The PRESTO user's manual. Technical Report 88-01-04, University of Washington Department of Computer Science, 1988.

2
K. Birman. ISIS User Guide and Reference Manual. Isis Distributed Systems, 1992.

3
H.J. Boehm and M. Weiser. Garbage collection in an uncooperative environment. Software -- Practice and Experience, 1988.

4
P. Buhr and G. Ditchfield. Adding concurrency to a programming language. In Usenix C++ Conference. USENIX, 1992.

5
T. Cargill. Elements of C++ Programming Style. Addison-Wesley, 1992.

6
N. Carriero and D. Gelernter. How to Write Parallel Programs. MIT Press, 1990.

7
J. Coplien. Advanced C++: Programming Styles and Idioms. Addison-Wesley, 1991.

8
D. Edelson and I. Pohl. Copying garbage collection in C++. In Usenix C++ Conference. USENIX, 1991.

9
M. Ellis and B. Stroustrup. The Annotated C++ Reference Manual. Addison-Wesley, 1990.

10
P. Lee, editor. Advanced Language Implementation. MIT Press, 1991.

11
M. Linton et al. InterViews. interviews.stanford.edu, 1990.

12
S. Lippman. C++ Primer. Addison-Wesley, 1991.

13
S. Meyers. Effective C++. Addison-Wesley, 1992.

14
M. Sakkinen. A critique of the inheritance principles of C++. Computing Systems, Winter 1992.

15
B. Stroustrup. The C++ Programming Language. Addison-Wesley, 1991.
