Lines 54-56 instantiate the generic procedure Ada.Unchecked_Deallocation to produce a procedure Dispose. If P and Q are of type ListPtr, then executing

   Q := P;
   Dispose(P);

causes these standard actions:

1.  P is Null after the Dispose call.
2.  If P was already Null, the Dispose call has no effect.
3.  The storage designated by P is deallocated. Actually reclaiming the storage is recommended but not required by the standard; practical implementations will, of course, reclaim the storage.
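The instantiation pattern and the three actions above can be sketched as follows. The node type and its fields here are hypothetical stand-ins for the declarations in HB.Lists_Generic, which are not reproduced in this excerpt; only the instantiation of Ada.Unchecked_Deallocation and the Q := P; Dispose(P); sequence come from the text.

```ada
with Ada.Unchecked_Deallocation;

procedure Dispose_Demo is

   --  Hypothetical node type; HB.Lists_Generic declares its own.
   type ListNode;
   type ListPtr is access ListNode;
   type ListNode is record
      Data : Integer;
      Next : ListPtr;
   end record;

   --  The instantiation pattern of lines 54-56:
   procedure Dispose is
     new Ada.Unchecked_Deallocation (Object => ListNode, Name => ListPtr);

   P, Q : ListPtr;

begin
   P := new ListNode'(Data => 1, Next => null);
   Q := P;        --  Q now designates the same node as P
   Dispose (P);   --  storage reclaimed; P set to null
   --  Here P = null, so P.all raises Constraint_Error;
   --  Q is a dangling pointer, and Q.all is erroneous.
   Dispose (P);   --  P is already null, so this call has no effect
end Dispose_Demo;
```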

Because P is now Null, an attempt to dereference P raises Constraint_Error (as discussed earlier). On the other hand, setting P to Null does not—cannot—magically set Q to Null, so Q is now a dangling pointer, and the behavior of an attempt to dereference it is erroneous. The Ada standard defines erroneous execution as one whose effect is not predictable. In this case, the standard cannot require that dangling pointers be detected at execution time; in general, the overhead to do so would be unacceptable. It is for this reason that the generic procedure is called Unchecked_Deallocation; it cannot be expected to check for dangling pointers.

In HB.Lists_Generic, the Dispose instance is called in the overriding Finalize (lines 58-69), in which two temporary pointers, Current and Leading, are used to walk down the list, calling Dispose on each node in turn.
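A body in the spirit of that Finalize might look like the following sketch. The type name List and the component name Head are assumptions for illustration; the actual declarations appear in the book's listing at lines 58-69.

```ada
overriding procedure Finalize (L : in out List) is
   Current : ListPtr := L.Head;
   Leading : ListPtr;
begin
   while Current /= null loop
      Leading := Current.Next;  --  remember the rest of the list
      Dispose (Current);        --  free this node; Current becomes null
      Current := Leading;       --  step to the next node
   end loop;
   L.Head := null;              --  leave the list in an empty state
end Finalize;
```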

Finally, the exported operation MakeEmpty and the overriding operation Initialize are both implemented (lines 71-79) as simple calls to Finalize.
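Delegating both operations to Finalize might look like this sketch (again with the hypothetical type name List standing in for the actual declaration):

```ada
procedure MakeEmpty (L : in out List) is
begin
   Finalize (L);  --  dispose of every node, leaving an empty list
end MakeEmpty;

overriding procedure Initialize (L : in out List) is
begin
   Finalize (L);  --  a newly created list starts out empty
end Initialize;
```

Reusing Finalize this way keeps the node-disposal loop in exactly one place.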

The final set of examples illustrates some of Ada’s concurrent programming constructs.

10.4.4. Concurrent Programming

Before examining a concurrent programming example, it is useful to consider the background against which concurrent programming was introduced in Ada 83.

There is still no universally accepted terminology to describe this subdiscipline; various authors disagree on the proper use of concurrency, parallelism, multitasking, and other related terms. This is not the place to enter into this debate at length. Here I choose to understand the term concurrent program as a program in which several things can happen—or at least appear to happen—simultaneously. This usage allows me to focus on program structures without undue concern for the underlying operating system, hardware configuration, or intended application.

10.4.4.1. Why Concurrency?

Section 9 of the STEELMAN Report (HOLWG, 1978) is titled Parallel Processing. It is interesting to read several paragraphs of that section:

9A. Parallel Processing. It shall be possible to define parallel processes. Processes (that is, activation instances of such a definition) may be initiated at any point within the scope of the definition. Each process (activation) must have a name. It shall not be possible to exit the scope of a process name unless the process is terminated (or uninitiated).

9B. Parallel Process Implementation. The parallel processing facility shall be designed to minimize execution time and space. Processes shall have consistent semantics whether implemented on multicomputers, multiprocessors, or with interleaved execution on a single processor.

9H. Passing Data. It shall be possible to pass data between processes that do not share variables. It shall be possible to delay such data transfers until both the sending and receiving processes have requested the transfer.

Recall that STEELMAN (HOLWG, 1978) was a requirements document, setting out the desired capabilities of a new programming language. It is clear from the preceding paragraphs that this language was intended to support concurrent processing with constructs that did not depend on either a particular operating system or, indeed, a particular hardware configuration. STEELMAN is, in fact, using the term “parallel” in the sense in which I defined “concurrent” previously. To the STEELMAN authors, it was of critical importance to be able to develop sophisticated concurrent programs, with long life-cycles, that could be recompiled with minimal change for new generations of competitively procured hardware, running new underlying operating systems.

Language-level concurrency was (and to some, still is) a controversial requirement; many argued (and some still do) that concurrency is best handled via straightforward calls to operating-system processes or threads. Opinions aside, it is a fact that Ada’s concurrency constructs derive directly from the STEELMAN (HOLWG, 1978) requirements. The Ada designers drew on the research literature on concurrent programming and developed a practical model, extending constructs proposed in such laboratory designs as Dijkstra’s guarded commands (Dijkstra, 1975), Hoare’s monitors (Hoare, 1974), and Communicating Sequential Processes (CSP; Hoare, 1978).
