Chapter 23: Clustering Objects

       

Most OO systems are implemented using a relatively small number of coarse-grained machine processes, or clusters, each containing a usually large number of ``smaller'' objects. Unless you are operating in an ideal automated OO systems development environment, the clustering of objects into processes is one of the main tasks of OO system design.

Clustering is a ``packing'' problem. A large number of ``logical'' objects must be embedded into a smaller number of ``physical'' objects. Each of these physical objects has the form of a standard system process, as supported by contemporary operating systems. Each process also serves as an interpreter in the sense of Chapter 15, simulating the actions of its ``passivized'' components.

In this chapter, we discuss strategies for clustering objects and basic properties of the system infrastructure needed to support them. In the next chapter, we focus on techniques for transforming the resulting embedded objects into passive form.

Clustering

 

It is possible to place strict optimality criteria on clustering strategies. On pure efficiency grounds, objects should be clustered to achieve the highest possible performance. However, this is a thoroughly useless guideline. Even if you knew the exact CPU and storage requirements of each object in a system, all object lifetimes, the exact communication times for all messages, and the exact resource capacities of all clusters, assigning them optimally would still be an NP-complete ``bin-packing'' problem (see [7]). This means that all known exact algorithms become infeasibly time consuming as problem sizes grow.

Thus, clustering must be performed using heuristic approaches that provide acceptable solutions. For example, the exact number of objects generated during the lifetime of a system is usually unknowable. But the number of ``big'' or ``important'' objects is almost always at least approximately known, and suffices for clustering. Also, even if efficiency-based clustering could be made algorithmic, performance should not be the only criterion. Performance must be balanced against other design factors, such as maintainability, that argue for the use of functional and structural criteria in addition to resource concerns.
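The heuristic flavor of such approaches can be illustrated with a sketch (in Python, with invented object names and load estimates) of the classic first-fit-decreasing bin-packing heuristic, which gives acceptable, though not optimal, packings quickly:

```python
# A minimal sketch of the first-fit-decreasing heuristic applied to
# clustering. The object names and resource loads are illustrative,
# not taken from the chapter.

def first_fit_decreasing(loads, capacity):
    """Assign each (name, load) pair to the first cluster with room,
    considering heavier objects first."""
    clusters = []  # each cluster: {"free": remaining capacity, "members": [...]}
    for name, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        for c in clusters:
            if c["free"] >= load:
                c["free"] -= load
                c["members"].append(name)
                break
        else:  # no existing cluster has room; open a new one
            clusters.append({"free": capacity - load, "members": [name]})
    return clusters

clusters = first_fit_decreasing(
    {"AccountDB": 70, "ATMRelay": 40, "Logger": 20, "TimeDaemon": 10},
    capacity=100)
```

Sorting by decreasing load tends to reduce the number of clusters needed, although the result carries no optimality guarantee.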

As is true for most decomposition problems, the most natural approach to clustering is top-down. First of all, the entire system may be considered as one big cluster, operating in the manner described by the kernel in Chapter 15. From there, major objects and groups of objects may be partitioned off using any agreed-on criteria. The process may then be repeated on these clusters until all resources are accounted for. These steps must also be applied to top-level System operations. However, rather than providing an explicit central or subdivided System object, these services may be replicated in each process requiring them. Only those top-level operations actually invoked in each cluster need be supported.

Generally, system designs are most understandable, maintainable, and extensible when cluster groupings correspond to structural and/or functional groupings. From the opposite point of view, attempts to cluster objects based purely on resource criteria sometimes reveal opportunities to refactor classes in conceptually meaningful ways. It is worth attempting to rationalize clusters retrospectively in this way, just to improve human factors and reduce complexity. Several idioms described in previous chapters (e.g., master-slave) may apply.

Most machines and operating systems allow for multiple processes. In these cases, several clusters may be assigned to the same machine. This may (or may not) slightly degrade performance, but allows conceptually meaningful partitioning criteria to be applied in a much larger number of cases. Reasonable choices about the number and size of cluster processes per machine depend on the underlying efficiency of operating system scheduling and interprocess communication mechanisms.

Because clustering remains something of a black art, it is very convenient to use a prototyping tool to assist in the evaluation of clusterings. The interpreter outlined in Chapter 15 may be extended to become a classic simulator by adding simulated communication delays for messages across tentative clusters.
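For instance, a tentative clustering might be evaluated against a recorded or simulated message trace by charging different delays for local versus remote sends. The delay constants and trace below are invented for illustration:

```python
# A toy evaluator in the spirit of extending the Chapter 15 interpreter
# into a simulator: given a tentative clustering and a message trace,
# estimate total communication cost. The delay figures are assumptions.

LOCAL_DELAY, REMOTE_DELAY = 1, 100  # assumed relative costs

def trace_cost(trace, cluster_of):
    """Sum simulated delays over (sender, receiver) message pairs."""
    return sum(
        LOCAL_DELAY if cluster_of[s] == cluster_of[r] else REMOTE_DELAY
        for s, r in trace)

# One within-cluster message and two cross-cluster messages:
trace = [("a", "b"), ("b", "c"), ("a", "c")]
cost = trace_cost(trace, {"a": 1, "b": 1, "c": 2})
```

Alternative clusterings can then be compared simply by recomputing the cost under different cluster assignments.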

Criteria

 

Many possible overlapping and/or incompatible criteria may be established. We list some here just to demonstrate the range of options.

Forced:
Identify processes with objects as mandated in non-functional requirements documents.
Functional:
Identify processes with coarse-grained objects identified in analysis, especially ensembles.
Structural:
Identify processes with objects that are easy to isolate. For example, transaction loggers and other ``message sinks'' consume events generated by a large number of other objects without communicating back to them.
Compute-Based:
Place any two objects that ever require substantial amounts of computation time simultaneously in different clusters.
Service-based:
Isolate objects that perform well-known, generic services in their own processes.
Visibility-based:
When system constraints preclude full object communication, some objects (e.g., relays) should be housed in distinct processes to facilitate communication.
Task-based:
Combine all of those objects involved in particular single-threaded tasks in a cluster.
Class-based:
Allocate all objects of particular concrete classes to the same cluster.
Collection-based:
Allocate all objects that may be members of the same collection object to the same cluster.
Link-based:
Partition clusters so that as many object links as possible point to objects residing in the same cluster. This avoids fragmentation, in which objects include some components situated in one cluster and some in another.
Communication-based:
Allocate heavily interacting objects to the same cluster.
Performance-based:
Allocate slow objects to processes residing on fast computers.
Machine-based:
Allocate objects that could exploit special machine characteristics to processes on those machines.
One-Per-Machine:
Machines and systems that do not easily support multiprocessing should be assigned only one cluster process.
Device-based:
Allocate objects that directly communicate with special devices to processes on the corresponding computers.
Recovery-based:
Isolate failure-prone objects (e.g., those interacting with unreliable hardware) in their own processes to facilitate restarts, etc.
Maintenance-based:
Isolate objects of classes that are most likely to change in the future.
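Several of these criteria can be partially mechanized. As a sketch, the communication-based criterion might be approximated by grouping objects whose measured pairwise traffic exceeds a threshold; the object names and message counts here are illustrative:

```python
# One way to mechanize the communication-based criterion: greedily
# merge objects whose pairwise message traffic is at or above a
# threshold, using a simple union-find structure.

def cluster_by_traffic(objects, traffic, threshold):
    """Group objects: two objects land in the same cluster whenever
    their pairwise traffic count reaches the threshold."""
    parent = {o: o for o in objects}
    def find(o):
        while parent[o] != o:
            parent[o] = parent[parent[o]]  # path compression
            o = parent[o]
        return o
    for (a, b), count in traffic.items():
        if count >= threshold:
            parent[find(a)] = find(b)      # merge the two groups
    groups = {}
    for o in objects:
        groups.setdefault(find(o), []).append(o)
    return sorted(sorted(g) for g in groups.values())

groups = cluster_by_traffic(
    ["atm", "bank", "log", "ui"],
    {("atm", "ui"): 500, ("atm", "bank"): 30, ("bank", "log"): 2},
    threshold=100)
```

In practice such a grouping would then be checked against the other criteria (forced, device-based, and so on) before being adopted.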

Splitting Objects

An opposite problem may occur when objects are ``too big'' to fit into a cluster. In this case, the class needs to be further decomposed. This is not usually an issue in compositionally designed systems.

Cluster Clusters

After a first-level pass at clustering, the individual clusters may themselves be clustered, in a different sense. Groups of heavily intercommunicating clusters may be targeted for placement on one or more machines with fast interprocess communication facilities. For example, if a system resides on several high-speed local area networks (LANs), which are in turn connected by slow modem connections, then clusters should be situated accordingly. Wide-area (long-haul) communication is substantially slower and less reliable than LAN communication, which is in turn slower and less reliable than communication across processes residing on the same machine.

Dynamic Clustering

When resources are not fixed and additional processes may be created dynamically during system execution, clustering may be equally dynamic. Except in special cases, this is hard to achieve. Tractable situations are often limited to those in which the creation of objects of certain classes is always performed by generating a new process.

Migration

  Even though clusters are designed around the management of particular sets of objects, in many cases it is sensible and desirable to migrate an object from one cluster to another. While conceptually straightforward, this is fraught with implementation difficulties, including synchronization problems and inter-cluster coordination of storage management. It is an option only if system infrastructure services are available to manage this.

Redundancy

   A more common and less intimidating variant of migration is object redundancy. Clones of objects may reside in multiple clusters. When these objects are stateless, or just immutable for the duration of their duplicate existences, this is easy to arrange. When the objects are mutable, additional support is required to maintain full consistency across copies. This is a very special form of constraint management, in which all changes in one version must transparently trigger changes in the other (see Chapter 22). Redundancy is also a means for achieving greater system reliability. Entire clusters may be redundantly implemented.
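A minimal sketch of this kind of constraint management, with invented class names: each mutable replica relays changes to its peers, and propagation stops once all copies agree:

```python
# A sketch of keeping mutable clones consistent, in the
# constraint-management style of Chapter 22: every change to one copy
# is transparently relayed to its replicas.

class ReplicatedCell:
    """A mutable value whose replicas track each other's updates."""
    def __init__(self, value=None):
        self.value = value
        self.peers = []
    def attach(self, other):
        self.peers.append(other)
        other.peers.append(self)
    def set(self, value):
        self.value = value
        for p in self.peers:          # propagate to all clones
            if p.value != value:
                p.set(value)          # recursion stops once values agree

master, shadow = ReplicatedCell(0), ReplicatedCell(0)
master.attach(shadow)
master.set(42)
```

A production version would also need to resolve conflicting concurrent updates, which this sketch deliberately ignores.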

Other Mapping Strategies

In some systems, it may be necessary or preferable to avoid the notion of static clusters altogether. System-level processes may be able to represent and execute code for different design-level objects at different times. Other mediator objects must route requests to the currently appropriate process. This is an extreme form of dynamic clustering and migration. For example, the states of objects may be centrally held on persistent media, and reconstructed in suitable processes when needed. Systems composed under these assumptions differ in numerous small ways from those adhering to the better-supported models that we will focus on in the remainder of this chapter. Conceptually straightforward but implementation-intensive adaptations are necessary to support an extra level of indirection in the mapping from objects and clusters to processes.

Examples

Clustering is usually much easier than our remarks might suggest. Many of the criteria amount to the application of a bit of common sense.

For example, the system described in Chapter 10 declared ATM and Bank as highest-level subsystems. Assuming there is at least one processor per physical ATM, device and communications criteria lead to one ATM object per ATM station. If there were more than one processor or process available per ATM station, reliability criteria then argue for isolation of individual device-based control systems, CardControl and DispenserControl. From there, visibility criteria lead to isolation of the communications interface with the Bank.

The Bank subsystem might be housed on a single powerful computer, or spread out over many computers, depending, of course, on the configuration. Even if it were on a single computer, it would still be wise to use other criteria to place objects in multiple processes. Structural and service criteria suggest splitting off time-based services, perhaps as centralized into a time daemon. If a nonintegrated database is being used for persistent management of Accounts, Clients, etc., then reliability, maintenance, and communication criteria argue for isolating the database interface veneer objects as processes.

If the Bank is physically distributable, then communications performance and visibility criteria lead to separation of processes that communicate with the ATM objects. If, instead of a foreign database, extensive use is made of large collections (possibly themselves supporting persistence), these may be split into master-slave configurations. The most important ones (e.g., ActiveAccounts) may be supported redundantly through shadowed multiple clusters. Associated LockMgrs and the like may also be isolated in processes. Other objects responsible for control logic (e.g., billing) may be split off from there, until all resources are accounted for.

Cluster Objects

  The basic technique for packing objects into clusters is to have each cluster object subsume responsibility for construction, maintenance, dispatching, and execution of held objects and operations. Each of the component objects must ultimately be ``passivized'', as described in Chapter 24. In principle, cluster objects are just big artificial ensemble objects. Thus, clusters are objects with many, many own components that happen to be physically embedded within a common ``address space''. For much of Part II, we have downplayed premature commitment to representational embedding, in part to facilitate mappings into clusters.

   These designs may be expressed in ODL using the attribute qualifier packed, which denotes the physical embedding requirement of a passive stored component inside another object. Sometimes, this is all that is necessary to define the resulting cluster structure. As a deliberately simple example, a LampV1 (Chapter 16) that is isolated in its own cluster may embed its Bool switch component via:

class LampV1C is LampV1 ...
  packed switch: Bool <> 
end

In the resulting configuration, each cluster protects components through a well-defined interface. All intercluster communication must pass through cluster interfaces. The system as a whole may be designed and viewed in terms of a relatively small number of possible interactions between very large-grained objects. In practice, clustering usually carries additional constraints, including:

Impermeability:
Objects residing within different clusters may not be able to communicate directly, or even know of one another's existence.
Restricted Messages:
All intercluster communication is to be directed to the clusters themselves, not inner components. Further, these messages may need to be transformed into different formats and protocols.
Persistence:
Because processes are volatile, some or all component objects may be redundantly represented on persistent media.
Resource Management:
Clusters may be responsible for keeping track of component objects and their storage and computational requirements.
System Chores:
Clusters may need to play roles in keeping track of other clusters, detecting deadlock, restarting processes, and other bookkeeping.

Cluster Interfaces

   The interface of each cluster consists of the set of reachable operations on its inner objects. The nicest cases occur when a cluster holds a single reachable object. In this case, the cluster agent interface is the same as that object's. Otherwise, the interface must be crafted to support a composite of operations. Cluster interfaces, like all others, need to be meaningful and understandable. This provides an additional reason to perform clustering using structural and functional criteria when possible.

A common technique for representing the interfaces of other clusters inside their clients is through proxies (stubs). Proxy classes are among the most trivial kinds of wrappers. They generate cluster-local objects with interfaces identical to those of nonlocal objects (i.e., other clusters). All within-cluster messages consist only of local invocations of operations on these proxies, which in turn forward them to other clusters via interprocess communication facilities. 
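The technique can be sketched as follows; the channel here is simulated in-process, where a real system would substitute interprocess communication facilities, and the Account classes are illustrative:

```python
# A sketch of the proxy (stub) technique: a cluster-local object with
# the same interface as a remote one, forwarding each invocation over
# a channel. The Channel class stands in for real IPC.

class Channel:
    """Delivers a request to a remote cluster and returns the reply."""
    def __init__(self, remote):
        self.remote = remote
    def send(self, op, *args):
        return getattr(self.remote, op)(*args)

class AccountProxy:
    """Same interface as a remote Account; every call is forwarded."""
    def __init__(self, channel):
        self.channel = channel
    def deposit(self, amount):
        return self.channel.send("deposit", amount)
    def balance(self):
        return self.channel.send("balance")

class Account:
    """Lives in the other cluster; unaware it is called via a proxy."""
    def __init__(self):
        self._balance = 0
    def deposit(self, amount):
        self._balance += amount
    def balance(self):
        return self._balance

acct = AccountProxy(Channel(Account()))
acct.deposit(100)
```

Because the proxy and the real object share an interface, client code is unchanged when an object moves to another cluster.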

Interfaces to External Software

  Process-level objects may need to communicate with software entities beyond those represented by other clusters. Most systems interact with other existing software, which is most commonly non-object-oriented. However, it is still possible to provide object-oriented veneers covering the desired interactions with the system being designed.

Among the purest applications of abstract class design is interface design for foreign software objects that will not even be implemented as part of the target system, but must be used just as they are currently implemented. This may be approached through pure black-box techniques. As far as abstract declarations are concerned, there is little difference between an external component and something that must be implemented. The only problem is that the way in which its functionality is defined and implemented is normally fixed and beyond a designer's control.

Veneers

  Interactions between any external component and the system at hand can always be mediated by a layer of internal processing that remains under the designer's control, as long as this layer has the right properties when seen from the external component's point of view. Thus, internal veneer classes may be defined via standard constructs, and concrete classes may put together the necessary connections. For example, the application might interact with a native file system in ways described by classes including:

class File ...
  op write(c: char) ...;
end
class Directory is File ... end

Effects, constraints, and the like can still be described, but the actual composition and computation cannot be determined without committing to a particular programming language, operating system, etc.

These techniques are especially useful for isolating common utility services, which may then be specialized for particular machines, operating systems, and configurations. In particular, a hierarchy of abstract classes can describe particular operating system functions, and a set of concrete classes can be defined to encapsulate system-dependent functionality. For example:

class OpSystemInterface ... end

class MC68000UnixSysInterface is OpSystemInterface ... end

Atkinson [1] provides a more detailed account of strategies for designing heterogeneous interface classes using the Ada OO shell language DRAGOON. Most of the ideas may be applied to interface design in general. Similar design steps may be necessary to deal with heterogeneous foreign subsystems with which the system may communicate.

Legacies

  The same basic strategies hold for designing-in old non-OO software that may be at least partially converted to become a component of the current system. In such cases, the front end of this software may be first encapsulated as a set of class interfaces, along with a bit of internal processing to link internal and external views and conventions. Perhaps over time, internals of the legacy may be ``objectified'' layer by layer, as far as necessary or desirable.

The most common general form of foreign interface layering is a set of classes that encapsulates arguments, global variables, and other data used in foreign functions as class components, along with relay operations that gather arguments together as necessary to submit to the external components.

For example, if a foreign fileOpen procedure required file names, directories, and access modes as arguments, and returned success or failure on completion, the interface File class might have name, dirname, mode, isOpen attributes, along with an argumentless open operation which sends the right arguments (perhaps after some data format conversion) to the fileOpen procedure and records the results.
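A sketch of such an interface class, with the foreign fileOpen procedure simulated by a stand-in (in a real system it would be an existing non-OO routine invoked across a language boundary):

```python
# A sketch of the interface layering just described. The foreign
# fileOpen procedure is simulated here; its behavior is invented
# purely for illustration.

def fileOpen(name, dirname, mode):      # stand-in for the foreign procedure
    return name.endswith(".txt")        # pretend only .txt files exist

class File:
    """Veneer: encapsulates the foreign procedure's arguments as
    attributes and relays an argumentless open()."""
    def __init__(self, name, dirname, mode):
        self.name, self.dirname, self.mode = name, dirname, mode
        self.isOpen = False
    def open(self):
        # Gather the stored arguments (converting formats if needed),
        # call the foreign procedure, and record the result.
        self.isOpen = fileOpen(self.name, self.dirname, self.mode)
        return self.isOpen

f = File("notes.txt", "/tmp", "r")
f.open()
```

The second conversion layer described next would then replace the fileOpen call with native logic inside open() itself.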

A second layer of conversion might then replace fileOpen by directly incorporating its functionality in open(File) and/or other components. This strategy could be continued down to the level of operating system file handling primitives, surely using subclassing to manage the different ways of doing things across different operating systems and computers:

class UnixFile is File ...
   op write(c: char) ...;
end

Clusters and Object Identity

   Clustering may lead to overencapsulated objects. Embedded objects cannot always communicate in the manner in which they were originally designed. Previously visible objects become hidden within clusters. This is not all bad. The inability of foreign objects to exploit internal identities improves security and provides effortless enforcement of local/non-local distinctions. However, when communication is required, it must be supported.

  Communication via object identities need not be a problem. In a single-process design, object identity is implemented by ultimately (perhaps through several layers) associating identities with addresses of some kind or another. Similarly, operation and class identities are mapped into code and bookkeeping table addresses or surrogates. But any object that is visible across clusters (including dedicated persistent data managers, if employed) must have an identity that holds outside of the current process.

Process boundaries do not always correspond to near versus far identities. On systems with lightweight processes (LWPs), different processes may share address spaces. This greatly simplifies mappings and transformations. But lightweight processes are only useful for clusters decomposed for functional reasons. Except on shared-memory multiprocessors, LWPs share a CPU, and thus are hardly ever useful for supporting clusters divided for resource reasons.   Operating systems and hardware technology may someday solve such problems on a wider scale. For example, computers with 64-bit addresses might hold near or far identities using the same internal representations. Object-oriented operating systems may someday help automate address mappings and translations.

You cannot currently depend on a system to reliably support system-wide identity mappings. However, the representation and management of system-wide identities may be among the services offered by object-oriented databases and other object management systems. There are few reasons for not relying on such services when putting together a system in which object identities may be passed between clusters.

Unless using a system enabling the common representation of both clusters and embedded objects, cluster interfaces should minimize or eliminate operations that somehow refer to the identities of embedded objects. However, this is not always possible. When external objects send messages originally meant to be received by objects that have been embedded in a cluster, they must first discover which cluster to send a message to, and then send some kind of subidentifier as an argument in the cluster-level message. The recipient cluster must then somehow decode this and then emulate the desired processing.

   There are many ways to form such internal identities and resulting ``fat pointer'' tuples of (clusterID, localID) needed to identify internal objects uniquely. Ready-made formats may be available through tools and services. Other choices range from simple unprotected (MachineID, processID, virtualAddress) tuples to port constructs to securely mapped pseudo-IDs for both clusters and objects. In any case, all external messages originally of the form:
x.meth
must be converted into a form such as:
x's-cluster.meth(x's-local-id),
which in turn is translated into the format required by the host language and interprocess communication tools. As mentioned, proxy mechanisms make this substantially easier by localizing and hiding at least parts of these conversions.
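A sketch of this decoding scheme, with invented identifiers: a registry maps cluster IDs to cluster objects, each of which maps local IDs to its embedded components:

```python
# A sketch of fat-pointer dispatch: an external message x.meth becomes
# cluster.rcv(meth, localID), which the cluster decodes against a table
# of its embedded objects. All identifiers are illustrative.

class Cluster:
    def __init__(self):
        self.objects = {}              # localID -> embedded object
    def add(self, local_id, obj):
        self.objects[local_id] = obj
    def rcv(self, op, local_id, *args):
        # Decode the subidentifier and emulate the desired processing.
        return getattr(self.objects[local_id], op)(*args)

registry = {}                           # clusterID -> Cluster

def send(fat_ptr, op, *args):
    """Route x.meth via the (clusterID, localID) fat pointer."""
    cluster_id, local_id = fat_ptr
    return registry[cluster_id].rcv(op, local_id, *args)

class Lamp:
    def __init__(self):
        self.on = False
    def flip(self):
        self.on = not self.on
        return self.on

registry["c1"] = Cluster()
registry["c1"].add(7, Lamp())
result = send(("c1", 7), "flip")
```

Proxy objects would normally hide the fat-pointer plumbing from clients entirely.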

Even in 100% OO system environments, these measures cannot normally handle all of the foreign objects with which a system may interact. Different OO services may employ different ID representations and conventions. IDs received from nonconforming foreign systems must be used in a pure black-box fashion. For example, it may be impossible even to perform identity tests on two foreign IDs to see whether they refer to the same object. Two IDs that compare as equal may not actually refer to the same object if they were generated by two different foreign systems. Two that compare as different may reflect different encodings of IDs referring to the same object.

System Tools and Services

 

The overall physical architecture of a system may depend in part on requirements-level constraints, but more commonly depends on the availability of system-level tools and services. Points in this space include the following.

Traditional.

Assuming that minimal interprocess communication services are available, designs can be mapped to the most common and traditional architectures and services: use a fixed number of known processes, make cluster interfaces impermeable, admit only interprocess messages directed at clusters, rely on hard-wired point-to-point interprocess message strategies, limit interprocess message formats, and map persistence support to a non-OO database service.

Brokered.

If a centralized interprocess message handling service is available, then some of these restrictions may be lifted. Process interfaces may take a standard form, guided by tools that provide protocols for dealing with the service. Processes may be dynamically constructed and register their interfaces. All or some interprocess messages may be mediated by the broker service. At least some transmission of object identities may be supported.

OODB-based.

   If an object-oriented database is employed (perhaps in addition to, or in cooperation with, brokering), then full OO communication is likely to be supportable without significant design transformations. OODBs also typically provide support for locking, caching, versioning, long-term checkin/checkout, crash recovery, security, heterogeneous data formats, transaction control, and other necessary and useful features of persistent storage management.

Ideal.

An ideal OO support system would make most of this chapter irrelevant. Systems that provide infrastructure to automatically and transparently cluster, dispatch, maintain, and manage distributed objects appear to be within the realm of technical feasibility. But they do not exist outside of experimental projects as of this writing.

In the remainder of this section, we survey some basic considerations governing the ways in which processes may be constructed and managed. Even at coarse design levels, strategies are often bound to the use of particular tools and support systems that will be used in subsequent implementation. We will restrict ourselves to general issues.

Process Shells

   

A layer of care-taking capabilities must underlie clusters in order to deal with issues that escape the bounds of the virtual system constructed during class design. One way to address these matters is to assume that clusters have an associated shell. This shell is an interface of sorts between the virtual system and the operating system (or bare hardware) on which it resides. As far as the virtual system is concerned, there might be one shell overseeing all clusters, one per cluster, or anything in between. Shells are merely abstractions of low-level services. There may not be any one software component that is identifiable as a shell object. Instead, the required capabilities are most likely obtained through collections of language-based run-time systems, operating system services, and even hardware support.

Unless you are building an operating system, you cannot actually design a shell, but instead must learn to live with the peculiarities of existing capabilities. The nature of common hardware and operating systems demands that shells be considered intrinsically interruptible. But they must deal with interruptions in a magical way, somehow saving and restoring state as necessary. Shells are also privileged with the ability to interrupt, probe, stop, suspend, resume, and/or restart cluster objects, perhaps even if they are otherwise conceptually uninterruptible. Given this, we might define some of the capabilities of a generic shell:

class Shell ...
  own proc: Any;     % the cluster object
  op start;          % (re)initialize proc
  op shutdown;       % gracefully shutdown proc
  op halt;           % immediately halt proc
  op suspend;        % stop proc and save state
  op resume;         % resume proc from last suspended state
  op ping: Packet;   % tell sender whether proc is alive
  op rcv(p: Packet); % decode message packet and send to proc
end

This is a pure black-box description. All of these capabilities must be implemented through magic. Again, there will probably not be any single software object or service that implements this interface.

Process Control

    System-wide process control software resides one level below cluster shells. A wide range of services may be available, from essentially none to comprehensive support for creating, monitoring, scheduling, and terminating processes across machines.

Design and implementation of a complete set of such services is tantamount to the construction of a distributed operating system. However, tools and services handling most tasks exist for most systems. Small-scale process management can be performed by directly invoking relevant system services when necessary. This may be approached by building classes and special-purpose tools that isolate the interfaces to such services.

Interprocess Communication Support

   

Interprocess communications services minimally support pass-by-description style messages routed to their immediate recipients. On heterogeneous systems, state values may be passed among processes using intermediaries that interconvert basic value type representation formats across machines. This is normally accomplished with the help of tools and conventions that convert concrete value representations into canonical forms, marshal them into interprocess message packets, and decode them at the other end.
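A minimal sketch of such canonicalization using network byte order; the packet layout (a signed count followed by unsigned flags) is invented:

```python
# A sketch of pass-by-description marshalling: state values are
# converted to a canonical network byte order before transmission and
# decoded at the other end, so heterogeneous machines agree on the
# representation. The field layout is an assumption.

import struct

CANONICAL = "!iI"  # network byte order: signed count, unsigned flags

def marshal(count, flags):
    """Encode state values into a canonical message packet."""
    return struct.pack(CANONICAL, count, flags)

def unmarshal(packet):
    """Decode a canonical packet back into native values."""
    return struct.unpack(CANONICAL, packet)

packet = marshal(-3, 0xFF)
count, flags = unmarshal(packet)
```

Real marshalling tools additionally handle strings, nested structures, and versioned formats, which this sketch omits.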

OO designs require services that support one-way point-to-point messages and/or synchronous bidirectional interprocess protocols. Other protocols may be layered on top of these, but only with substantial design effort and/or tool support. Of course, the more protocols supported, the easier the implementation. Standard remote procedure call (RPC), as well as RPC-with-time-out mechanisms are widely available. These may or may not be integrated with special facilities for interacting with timers and I/O devices. 

Proxies

   Proxies are within-process objects that serve as local stand-ins for remote clusters. Proxy classes are best generated by tools that accept descriptions of cluster interfaces, and simultaneously construct entities representing both the internal and external views. The internal version is embedded only in the single process implementing the interface; all others get external versions. If proxy classes cannot be generated automatically, non-OO tools that generate proxy procedures are usable after making some minor compromises and/or building corresponding veneers.

Lightweight Processes

  

For clusters that are isolated for non-performance-based reasons, lightweight processes are a simpler alternative. Because LWPs share address spaces, there may be no need to use proxies for interprocess communication. In the best case, messages may be sent directly to the principal object(s) in other clusters. However, for uniformity, lightweight process communication may still go through proxies that are specially implemented to take advantage of LWP communication optimizations.

Topologies and Dispatching

   The basic rules by which cluster processes know of each other's existence and capabilities must be selected. Interprocess communication topologies corresponding to the routing structures described in Chapter 21 are supported to varying extents by contemporary software services, utilities, and management facilities.

CORBA

    The OMG Common Object Request Broker Architecture [8] is an example of a system-level framework assisting in some of the design and implementation tasks described in this chapter. As of this writing, CORBA implementations are not widely available, but it is predicted to be among the most commonly employed services for OO systems in the near future. Also, because it is new, we have no significant first-hand experience with details. We will restrict ourselves to describing a few features that illustrate how CORBA supports distributed OO system design and corresponding programming tasks.

  The Object Request Broker (ORB) is a semicentralized brokering (dispatching) service that is generally structured as a relay. It is ``semicentralized'' in that accommodations are made for coordinating multiple brokers, but it may be treated as a central service by applications.

Process-level objects register their class interfaces with the ORB in order to receive messages from other objects. Messages from clients are then relayed through the ORB to their recipients. Clients normally communicate with the ORB through proxies (or ``adaptors'') generated by CORBA tools geared to particular languages (including C++). These in turn ordinarily invoke native interprocess communication services. However, the CORBA specification includes provisions that make it possible to support efficient lightweight process communication and even within-process messaging.

  Class interfaces are specified using the Interface Definition Language ( IDL). IDL is similar in structure to the declarative aspects of C++. Interfaces may be described as subclasses of others. The rules generally follow the same abstract subclassing conventions as ODL. IDL supports approximately the same primitive types as ODL, but with the usual provisions for specific word sizes, different character sets, etc. It also includes C-like unions, enumerations, and structures. A process-level object reference (ID) type is defined and used to support OO messaging conventions of point-to-point messages with copy-in/copy-out or process-object-reference arguments and results. Object references have pure black-box semantics. Reference identity tests are not explicitly supported. They may be hand-crafted for the ID representations used in any given system.

Two kinds of operations may be declared. Operations marked as oneway correspond to ODL one-way ops. Others are defined as RPC-style blocking ops that may be declared to raise any number of named, structured exceptions. Simple set/get procedures are automatically generated for IDL attribute declarations. Only gets are generated for readonly attributes. These map well to stored ODL fns. For example, a process that maintains a simple buffer of integers may be defined in IDL as:

interface Buffer 
{ 
  readonly attribute boolean empty;
  readonly attribute boolean full;
  oneway void put(in long item);
  long        take();
};

IDL is, by design, ``weaker'' than notations such as ODL. It is not computationally complete. It is used to generate proxies and related structures that are then bound to code written in other languages. At the declarative level, it does not contain detailed semantic annotations such as invariants, effects, and triggering conditions. It does not allow declaration of internal ( local) structure, the existence of links to other objects, or constraints among such objects. When translating ODL, classes that share the same interface but differ across such dimensions may be treated as variant instances of the same interface class. Conversely, one process-level ODL object may be defined to support multiple interfaces in IDL.

Persistence

    Rather than individually listing packed components, clusters may be described as containing one or more repositories that in turn hold all passive within-cluster objects. Recall from Chapter 18 that repositories are objects that both generate and track objects. Repositories form natural venues for managing the storage requirements of component objects in a cluster. In the next chapter, we describe internal storage management techniques in which a repository or other agent tracks lifetimes of embedded cluster objects. Here we discuss situations where managed objects must live on even when a cluster and its repository (or repositories) are killed, suspended, and/or restarted. In these cases, objects must be maintained persistently.

Our design attitude about persistence is a little backward from some others. We consider the ``real'' objects to be active. Persistent media merely hold snapshots of state information needed to reinitialize or reconstruct functionally identical objects in case the current ones somehow fail or must be made dormant. This is in opposition to the database-centric view that the ``real data'' live in a database, and are transiently operated on by a database manager and other programs.

The interaction between any given object and its persistent representation is a form of constraint dependence. State changes must be mirrored on persistent media. This can be simplified by standardizing on a simple get/set protocol for interacting with the persistent representation. This meshes well with most database update facilities.    

There are several options. Objects may send update messages to persistence managers themselves whenever they change state, or only at selected intervals. Alternatively, repositories or other agents may intercept messages and forward them both to the internal objects as well as to caretakers that perform the associated updates on the shadow database representations. These may be further combined with locking mechanisms, replication, failure detection, and related control strategies. For example, updates need not be directly shadowed if it is OK to employ a locking check-out/check-in protocol in which objects are constructed from their persistent representations, exclusively operated on, and then later checked back in by transferring their states back to the database.
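The check-out/check-in option can be sketched as follows (Python; the store and names are hypothetical). An object is reconstructed from its persistent record, operated on exclusively under a lock, and its final state is transferred back on check-in:

```python
class CheckOutStore:
    """Sketch of a locking check-out/check-in persistence protocol."""
    def __init__(self):
        self.records = {}   # pseudo-ID -> persistent state record
        self.locked = set()

    def check_out(self, pid):
        # Exclusive access: no updates are shadowed while checked out.
        if pid in self.locked:
            raise RuntimeError("already checked out: %s" % pid)
        self.locked.add(pid)
        return dict(self.records[pid])   # reconstructed working copy

    def check_in(self, pid, state):
        # Transfer the object's state back to the database and unlock.
        self.records[pid] = dict(state)
        self.locked.discard(pid)
```

Note that between check-out and check-in the persistent representation is stale; this is acceptable only when the locking discipline guarantees no other client reads it.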

Saving and Restoring Objects

   The simplest form of persistence is a save/restore mechanism. A repository may support save(f:File) and restore(f:File) operations to read and write all held objects to a file. These may be triggered automatically by timers or other events (for saving) and re-initialization or error recovery routines (for restoring). They may be buttressed with history log files that keep track of changes to objects between saves. Because of their complexity and limitations, ad hoc save/restore strategies are best reserved for occasional, small-scale use.
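A minimal sketch of such a repository, assuming (purely for illustration) that held objects are fully describable as attribute records that serialize directly:

```python
import json

class Repository:
    """Sketch: save/restore of all held attribute records to a file."""
    def __init__(self):
        self.held = {}              # pseudo-ID -> attribute record

    def save(self, f):
        # Write descriptions of all held objects; could be triggered
        # by a timer or other event.
        json.dump(self.held, f)

    def restore(self, f):
        # Reconstruct state on re-initialization or error recovery.
        self.held = json.load(f)
```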

Dealing with links.

Persistent storage formats may be based on the description records discussed in Chapter 17. Saving and restoring objects that are fully describable through attribute description records is relatively straightforward. However, relational and composite objects are visibly dependent on links to other objects that must be maintained across saves and restores.

    The mechanics of saving and restoring such objects interact in the usual ways with object identity. The only thing that can be saved to a file is a description of an object, not an active object itself. Upon restoration, the description may be converted to a new object with the same state but possibly a different identity. The repository must shield the rest of the system from such philosophical dilemmas, perhaps using variants of pseudo-identities and smart links discussed in Chapters 18 and 22. A TABLE may be constructed to provide an integer (or whatever) pseudo-ID for each saved object. Links may then be output using pseudo-ID equivalents. During restoration, the table may be dynamically reconstructed, but with pseudo-IDs mapped (or ``swizzled'') to the new internal equivalents.  
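The pseudo-ID table and swizzling step might look like this (a Python sketch; the record format is illustrative). Links are written out as small-integer pseudo-IDs, and on restoration the table is rebuilt with pseudo-IDs swizzled back to references among the new objects:

```python
class LinkedObj:
    """An object with one rebindable link to another object."""
    def __init__(self, name):
        self.name = name
        self.link = None            # reference to another LinkedObj, or None

def save(objs):
    """Assign each object a pseudo-ID; emit links as pseudo-IDs."""
    pid = {obj: i for i, obj in enumerate(objs)}
    return [(o.name, pid[o.link] if o.link is not None else None)
            for o in objs]

def restore(records):
    """Rebuild objects, then swizzle pseudo-IDs into new references."""
    objs = [LinkedObj(name) for name, _ in records]
    for o, (_, link_pid) in zip(objs, records):
        if link_pid is not None:
            o.link = objs[link_pid]
    return objs
```

The restored objects have the same states and link structure as the originals, but different identities; the repository must hide this from the rest of the system.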

Persistent object stores.

   Tools exist to simplify and extend basic persistence support. As noted in Chapter 22, Kala provides persistence mechanisms specifically geared to OO systems. Conceptually, it uses a write once strategy in which each desired state (or version) of each object may be persistently stored and recovered. Inaccessible versions are garbage collected. Related access, locking, and transaction control services are also provided.

Reifying classes.

    Persistence support requires that class descriptions be represented persistently. A hunk of bits representing an object on disk does you no good unless you know its class type in a form that allows interpretation and reconstruction of the object. Thus, representational conventions must be established for describing attributes, operations, messages, and so on, as briefly described in Chapter 18. However, these must be extended to deal with concrete code bodies if they are not already part of the executable image. Binary executable code implementations may differ across hardware architectures. The support mechanics are not only target language dependent but also machine dependent.

 

Evolution and versioning.

   Among the most difficult issues in persistent object management of any form is schema evolution: What do you do when someone changes the definition of a class? The basic OO paradigm nicely supports restructurings, refinements and other improvements to classes throughout the software development, maintenance, and evolution process. However, when classes describing persistently managed objects are modified, it can be difficult to cope with all of the resulting system problems. These are not limited to the need to redesign and/or reimplement persistent support structures. For example, some clients may only work with previous definitions of classes. Outdated classes and objects may be kept around in order to minimize impact. However, this requires a versioning facility, in which version indicators are attached to each class and object, perhaps along with mechanisms to upgrade versions in-place.
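One way to structure in-place upgrades is a registry of per-version conversion steps that are applied one at a time until a record is current. The following Python sketch uses a hypothetical MailingLabel schema change for illustration:

```python
UPGRADES = {}   # (class_name, from_version) -> upgrade function

def upgrader(cls, from_v):
    """Decorator registering a one-step schema upgrade."""
    def register(fn):
        UPGRADES[(cls, from_v)] = fn
        return fn
    return register

def bring_current(cls, record, current_version):
    """Apply registered upgrade steps until the record is current."""
    while record["version"] < current_version:
        step = UPGRADES[(cls, record["version"])]
        record = step(record)
    return record

@upgrader("MailingLabel", 1)
def split_name(record):
    # Hypothetical change: a single "name" field became two fields.
    first, _, last = record["name"].partition(" ")
    return {"version": 2, "first": first, "last": last}
```

Clients that still expect version 1 records must either be upgraded themselves or be served by retained outdated class definitions.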

Introducing classes.

Both of the previous concerns apply, even more so, when a running system must be able to accommodate instances of classes that were not even defined when the system started running. If the system cannot be restarted, it is necessary to introduce infrastructure to interpret, represent, allocate, and execute new descriptions of classes and objects.

Security.

   Persistent representations may require qualitatively different access control and authentication mechanisms than active objects. Because they typically reside in file structures and/or other media accessible outside of the running application, general-purpose system or database policies and mechanisms must be relied on.

Relational Databases

    Relational databases (RDBs) operate solely on values, not objects. In order to use an RDB for persistence support, identities must be mapped to pseudo-ID values. When an RDB is used extensively for such purposes, it is a good idea to build these into objects themselves, as unique key values. These pseudo-IDs may then simultaneously serve as database indexing keys, as well as key arguments that may be sent to relays and name servers to determine internal identities.
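For illustration (Python, hypothetical names), such keys may be generated once at construction and then serve both as database indices and as arguments to a name server that recovers internal identities:

```python
import itertools

_next_key = itertools.count(1)

class Persistent:
    """Sketch: each object carries a unique key value, built in at
    construction, doubling as its pseudo-ID for database indexing."""
    def __init__(self):
        self.key = next(_next_key)

class NameServer:
    """Maps key values back to internal object identities."""
    def __init__(self):
        self.table = {}

    def bind(self, obj):
        self.table[obj.key] = obj

    def lookup(self, key):
        return self.table[key]
```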

Although many snags may be encountered, the design of database relations underlying a set of objects is conceptually straightforward. Several alternative approaches and increasingly many tools are available for designing RDBs to support OO applications.

When persistence is supported using a stand-alone database service, implementation is usually based on a structured interface to the database's native data definition and manipulation facilities. For example, in the case of relational databases, an interface class may be defined to mediate SQL  commands.

The simplest situations occur for concrete entities that behave as ``data records'' (e.g., our MailingLabel classes). Any class with an interface that sets and gets values from owned internal components transparently translates into an RDB table with updatable ``value fields'' corresponding to the components.

Classes with essential or visible links must represent these links through pseudo-IDs. When these links are fixed and refer to other concrete objects, these may then key into the appropriate tables. But in the much more typical cases where they are rebindable and/or refer only to abstract classes, secondary tags and tables must be employed that ``dispatch'' an ID to the appropriate concrete table. For example, a link listed with type Any might actually be bound to some MailingLabelV1 object. This may be handled via a layer of processing in the database interface that maintains a table of IDs along with concrete type-tags, and re-issues RDB requests based on tag retrievals. Without tools, the normalization problems stemming from such schemes are difficult to resolve.
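The two-step dispatch can be sketched with dictionaries standing in for RDB tables (Python; the table and tag names are illustrative): a first lookup maps the pseudo-ID to a concrete type-tag, which selects the table against which the request is re-issued:

```python
tag_table = {}     # pseudo-ID -> concrete type tag
tables = {         # one "table" per concrete class
    "MailingLabelV1": {},
    "MailingLabelV2": {},
}

def store(pid, tag, row):
    """Record both the concrete tag and the row for a pseudo-ID."""
    tag_table[pid] = tag
    tables[tag][pid] = row

def fetch(pid):
    """Dispatch an abstractly typed link to its concrete table."""
    tag = tag_table[pid]          # first lookup: which concrete table?
    return tables[tag][pid]       # second: the re-issued table request
```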

Object-Oriented Databases

   OODBs normally provide the most natural mechanisms for persistently maintaining objects. OODB design and OO design amount to nearly the same activities. OODBs use the same kinds of constructs as the other parts of OO systems. While models and their details vary widely across different OODBs, all of them support basic constructs including classes, attributes, objects, and collections. Certain OO-like ``extended relational'' database systems provide similar constructs. All are designed to maximize performance for typical OO operations.  

OODBs vary significantly in how they are accessed and used. Many OODBs are designed as supersets of particular OO programming languages. Additional persistence constructs are supported on top of the base language, enabling more transparent programming and usage. Others are stand-alone services, accessed via ``object-oriented SQL'' ( OSQL) and corresponding language-based interfaces.

OSQL

  We will illustrate using OSQL. Several variants exist. The examples here are based on one supported by at least early versions of Iris/ OpenODB   [6]. They are phrased in the user-oriented OSQL language rather than the language-based interfaces that would actually be employed within a system.

Like ODL, OSQL separates value ( LITERAL) types from object types (links). Unlike in ODL, classes ( TYPEs) and subclasses ( SUBTYPEs) are always defined independently of attributes. Attributes (and most everything else) are defined as FUNCTIONS. Stored attributes are so annotated. Computed attributes may be coded as other OSQL statements.

Queries are class-based. The most common general form of a query is:

SELECT x, y                             (vars to be returned)
FOR EACH X x, Y y, Z z                  (all participants)
WHERE R(x, y, z) AND S(x, y) AND T(x)   (predicates)

Any query may result in a bag of objects. The individual elements are sequentially traversable using a cursor mechanism. Indices may be requested for particular attributes in order to speed up access for otherwise slow attribute-based queries.  Other constructs are illustrated in the following examples:

CREATE TYPE Counter;
CREATE FUNCTION val(Counter) -> INTEGER AS STORED;

CREATE TYPE Person;
CREATE FUNCTION name(Person) -> CHAR(20) AS STORED;
CREATE FUNCTION gender(Person) -> CHAR AS STORED;
CREATE FUNCTION nicknames(Person) -> SET(CHAR(20)) AS STORED;
CREATE FUNCTION parent(Person) -> Person AS STORED;

CREATE FUNCTION parentName(Person p) -> CHAR(20) AS OSQL
  BEGIN RETURN name(parent(p)); END;

CREATE FUNCTION mother(Person child) -> Person AS OSQL
  SELECT m
  FOR EACH Person m
  WHERE parent(child) = m AND gender(m) = "f";

CREATE TYPE Employee SUBTYPE OF Person;
CREATE FUNCTION salary(Employee) -> INTEGER AS STORED;

Instances are constructed by binding initial values to all stored attributes (in a CREATE INSTANCE statement). Types, functions, and instances may all be deleted. All stored OSQL attribute changes are performed through set/put mechanics. For example:

CREATE FUNCTION ChangeName(Person p, CHAR(20) newname) AS
  UPDATE name(p) := newname;

Updates to set-valued attributes and functions are performed using the operations ADD and REMOVE. These are analogous to ODL collection operations. A few simple arithmetic functions are also provided. All other computational operations are performed through bindings to user-supplied external operations.

All updates implicitly generate locks. Sets of updates are committed only when explicitly requested (through COMMIT). ROLLBACK commands abort uncommitted updates. Intermediate SAVE-POINTS may be requested and rolled back to.
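These transaction mechanics can be modeled as buffered updates (a Python sketch, not the OSQL implementation): updates accumulate in a pending list, savepoints record positions in that list, rollback discards back to a savepoint (or everything), and commit applies the survivors to the store:

```python
class Session:
    """Sketch of commit/rollback with intermediate savepoints."""
    def __init__(self, store):
        self.store = store
        self.pending = []        # buffered (key, value) updates
        self.savepoints = {}     # name -> position in pending list

    def update(self, key, value):
        self.pending.append((key, value))

    def savepoint(self, name):
        self.savepoints[name] = len(self.pending)

    def rollback(self, to=None):
        # Discard uncommitted updates, back to a savepoint if given.
        keep = self.savepoints[to] if to is not None else 0
        del self.pending[keep:]

    def commit(self):
        # Apply all surviving updates to the underlying store.
        for key, value in self.pending:
            self.store[key] = value
        self.pending = []
```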

Summary

Clustering objects into processes is an ill-defined problem. Heuristic criteria may be employed to good effect. While clustering has no impact on the general forms of designs, it usually does have substantial effects on their concrete details.

Clusters must be connected to the underlying computational substrate through a range of system services that may be abstracted as shells. Numerous services may be required to realize clusters. They range from those requiring low-level ``magical'' powers, to those in which interface classes to foreign services provide the necessary coupling and functionality, to those that are in full control of the constructed system itself.

Reliability is enhanced through persistence. Although OODB services provide the best vehicles for managing persistence, most other schemes can be accommodated. Subclassing and the need to preserve object identity separate OO persistence, storage management, and related support services from those of most non-OO systems.

Further Reading

Distributed object management services and object-oriented operating systems are surveyed by Chin and Chanson [4]. Comparable non-OO system design strategies are described, for example, by Shatz [10]. Atkinson [1] discusses OO physical system design in Ada. Fault tolerant system support is discussed in more depth by Cristian [5]. System-specific manuals remain the best guides for most process management issues. Schmidt [9] presents examples of class veneers on system services. Cattell [2] (among others) provides a much fuller introduction to OO and extended relational database systems. As of this writing, automated object clustering is just beginning to be studied as a research topic; see, e.g., [3].

Exercises

  1. Most non-OO accounts of distributed processing concentrate on splitting, not clustering. Why?

  2. How would you cluster the ATM system if there were 100 ATM stations, each with an associated IBM PC, and 1000 other IBM PCs (but nothing else) available? Why would you want more details of the class designs before answering?

  3. List some advantages and disadvantages of clusters having any knowledge of between-cluster dispatching matters.

  4. Describe how to build proxy classes using only RPC stub generator tools.

  5. A cluster residing on a small non-multitasking machine is, in a sense, the machine itself. Explain how to deal with this.

  6. The described version of OSQL contains no access control features. Is this an asset or a liability?

  7. Write the OSQL declarations necessary to provide persistent support for the Account class.

References

1
C. Atkinson. Object-Oriented Reuse, Concurrency and Distribution. Addison-Wesley, 1991.

2
R. Cattell. Object Data Management. Addison-Wesley, 1991.

3
A. Chatterjee. The class as an abstract behavior type for resource allocation of distributed object-oriented programs. In OOPSLA Workshop on Object-Oriented Large Distributed Applications, 1992.

4
R. Chin and S. Chanson. Distributed object based programming systems. Computing Surveys, March 1991.

5
F. Cristian. Understanding fault-tolerant distributed systems. Communications of the ACM, February 1991.

6
D. Fishman, J. Annevelink, E. Chow, T. Connors, J. Davis, W. Hasan, C. Hoch, W. Kent, S. Leichner, P. Lyngbaek, B. Mahbod, M. Neimat, T. Risch, M. Shan, and W. Wilkinson. Overview of the Iris DBMS. In W. Kim and F. Lochovsky, editors, Object-Oriented Concepts, Databases and Applications. ACM Press, 1989.

7
M. Garey and D. Johnson. Computers and Intractability. Freeman, 1979.

8
OMG. Common Object Request Broker Architecture and Specification. Object Management Group, 1991.

9
D. Schmidt. An object-oriented interface to IPC services. The C++ Report, November/December 1992.

10
S. Shatz. Development of Distributed Software. Macmillan, 1993.


Doug Lea
Wed May 10 08:00:35 EDT 1995