The aim of the Handbook of Programming Languages is to provide computing professionals with a single, comprehensive source of information concerning a variety of individual programming languages and methodologies. The Handbook is published in multiple volumes and covers a wide range of languages, organized by type and functionality.
The Handbook comprises four volumes.
Natural, or human, languages appear to be about 10,000 years old. Symbolic, or formal, languages began in Sumer (a civilization of southern Iraq from about 3800 to 2300 BCE), where we find the oldest writing system, cuneiform. It was followed by Egyptian hieroglyphics (about 3000 BCE), the language of the Harappa in the Indus valley, the Chinese shell and bone inscriptions, and (in the Western hemisphere) the language of the Maya.
Writing systems abstract from speech and formalize that abstraction in their symbols. This may be done semantically (for example, hieroglyphs, English numerals, and symbols such as &) or phonologically (for example, alphabetic spelling).
In more recent times, further abstractions have become necessary: warning beacons, flags on sailing vessels, railway telegraph/semaphore, Morse code, and so forth.
Mechanical methods for calculating are very old, but they all involve symbolic abstraction. The abacus is probably the oldest of such constructions. The Chinese and Egyptians had this device nearly four millennia ago. The Mayans possessed it when the Spanish arrived. It was only a few years after Napier's discovery of logarithms (1614), and the use of his "bones" (marked ivory rods) for multiplication, that the slide rule was invented.
In 1642, at the age of 18, Blaise Pascal invented a calculator that could add and carry to aid his father, a tax collector. Almost 30 years later, in 1671, Leibniz took Pascal's machine a step further and built a prototype that could multiply, using an ingenious device called the stepped wheel, which was still in use in mechanical calculators manufactured in the late 1940s. Leibniz demonstrated his calculator to the Royal Society in London in 1676.
The first commercially successful calculator was invented by Charles Xavier Thomas in 1820. By 1878, an astounding 1,500 had been sold, nearly 30 per year. They were still being manufactured by Darras in Paris after World War I. The Brunsviga adding machine, based on an 1875 patent by Frank Stephen Baldwin, which substituted a wheel with a variable number of protruding teeth for the Leibniz stepped wheel, sold an incredible 20,000 machines between 1892 and 1912, a thousand per year.
The first keyboard-driven calculator was patented in 1850 by D. D. Parmalee, and Dorr Eugene Felt's Comptometer, the first successful key-driven, multiple-order calculating machine, was patented in 1887.
In 1812, Charles Babbage came up with a notion for a different type of calculator, which he termed a difference engine. He was granted support by the British government in 1823. Work stopped in 1833, and the project was abandoned in 1842, the government having decided the cost was too great. From 1833 on, though, Babbage devoted himself to a different sort of machine, an analytical engine, that would automatically evaluate any mathematical formula. The various operations of the analytical engine were to be controlled by punched cards of the type used in the Jacquard loom. Though only a fraction of the construction appears to have been effected, Babbage's notes, drawings, and portions of the engine are in the Victoria and Albert Museum (as is the set of Napier's bones that belonged to Babbage).
The Jacquard loom, a successful attempt at increasing production through automation, was itself the result of several prior innovations: In 1725 Bouchon substituted an endless paper tape with perforations for the bunches of looped string. In 1728 Falcon substituted perforated cards, but attached them to strings, and in 1748 Jacques de Vaucanson combined the bands of perforated paper and the cards. The patterns on the cards were perforated by machines that cut following designs painted on by stencils. The programmed machine was born.
Over 100 years later, Herman Hollerith, a graduate of Columbia College in New York, recalled the existence of those perforated cards. Hollerith had just started work at the Census Bureau at a generous salary of $600 per year. There he was put to work on a survey of power and machinery used in manufacturing. But he also met John Shaw Billings, who was in charge of vital statistics. One night at dinner, Billings complained about the recently invented but inadequate tabulating device of Charles Seaton, which had been used for the census of 1870. Billings felt that given the increased population, the 1880 census might not be completed in less than seven or eight years, and the 1890 census would still be incomplete in 1900. "There ought to be a machine for doing the purely mechanical work of tabulating population and similar statistics," Billings said. "We talked it over," Hollerith recalled 30 years later, "and I remember he thought of using cards with the description of the individual shown by notches punched in the edge of the card." Hollerith thought about constructing a device to record and read such information and asked Billings to go into business with him. Billings was a cautious man and said no.
In 1882 Hollerith went to MIT as an instructor in mechanical engineering (he was then 22). Teaching at MIT gave him the time to work on his machine. He first considered putting the information on long strips of paper, but this proved impractical. In the summer of 1883, Hollerith took a train trip west. On the train he saw the "punch photograph," a way for conductors to punch passengers' descriptions onto tickets so they could check that the same individual was using the ticket throughout the trip; in this system things like gender and hair and eye color were encoded.
Hollerith patented his first machine in 1884 and an improved design in 1886, when he performed a trial by conducting the Baltimore census. On the basis of reports of the trial, New Jersey and New York placed orders for machines (to tally mortality rates). Hollerith and some business colleagues bid for the contract for the 1890 census and won it. The government of Austria ordered machines in 1890. Canada ordered five the next year. Italy and Norway followed, and then Russia. The machines were a clear success. Hollerith incorporated his Hollerith Electric Tabulating System as the Tabulating Machine Company in 1896; he reincorporated it in 1905.
Nearly 80 years passed before the computer industry moved beyond several of Hollerith's insights. First, so that operators would have no problem orienting the cards, he cut a corner from the upper right. Second, he rented the machines at a reasonable rate (the rental fees for the 1890 census were $750,000; the labor cost in 1880 had been $5 million), but sold the patented cards (more than 100 million between 1890 and 1895). Third, he adapted the census-counting machinery to tally freight and passenger data for railroads. Hollerith effectively invented reusability.
Despite the fact that Thomas Watson said (in 1945), "I think there is a world market for about five computers," the first computer completed was one he had funded. Howard Aiken of Harvard, along with a small team, began in 1939 to put together a machine that exploited Babbage's principles. It consisted, when completed in 1944, of a 51-foot by 8-foot panel on which tape readers, relays, and rotary switches were mounted. Nearly all of the operations of the Harvard Mark I Calculator were controlled by mechanical switches, driven by a 4-horsepower motor.
The first all-electronic computer was the Electronic Numerical Integrator and Calculator (ENIAC). Completed by J. W. Mauchly and J. P. Eckert of the University of Pennsylvania in late 1945 and installed in 1946, it was commissioned by the Ballistics Research Laboratory (BRL) at the Aberdeen (Maryland) Proving Ground. It was, and will remain, I expect, the largest computing machine ever built: it was made up of 18,000 tubes and 1,500 relays. ENIAC was the electronic analogue of the Mark I, but ran several hundred times faster.
ENIAC had offspring in England, too. Maurice V. Wilkes and his group began planning their Electronic Delay Storage Automatic Calculator (EDSAC) in late 1946, on Wilkes's return from Pennsylvania, and began work at the University Mathematical Laboratory in Cambridge early in the new year. It was one fifth the size of ENIAC and based on ideas that John von Neumann had presented in a paper. When it performed its first fully automatic calculation in May 1949, EDSAC became the first electronic machine to be put into operation that had a high-speed memory (store) and I/O (input/output) devices. Within a few years, EDSAC's library contained more than 150 subroutines, according to Wilkes.
At virtually the same time, in Manchester, a team under M. H. A. Newman began work on a machine that was to embody the EDVAC concepts. F. C. Williams, who invented cathode ray tube storage; I. J. Good, who had worked on the Colossus code-breaking machine with Alan M. Turing; and Turing himself joined the team. The Manchester Automatic Digital Machine prototype was built in 1948, and the definitive machine ran its first program in June 1949. MADM introduced to computing both the index register and pagination.
In the meantime, IBM had begun work on its Selective-Sequence Electronic Calculator (SSEC). It is important to remember that while ENIAC was the first electronic computer, the SSEC was the first computer: it combined computation with a stored program. It was put into operation at IBM headquarters in Manhattan early in 1948, cleverly placed behind plate glass windows at street level so that pedestrians could see it operate. It was a large machine with 13,000 tubes and 23,000 relays. Because all the arithmetic calculations were carried out by the tubes, it was more than 100 times as fast as the Mark I. It also had three different types of memory: a high-speed tube store, a larger capacity in relays, and a vastly larger store on 80-column paper tape. Instructions and input were punched on tape and there were 66 heads arranged so that control was transferred automatically from one to the other. "It was probably the first machine to have a conditional transfer of control instruction in the sense that Babbage and Lady [Ada] Lovelace recommended," wrote B. W. Bowden in 1953. It did work for, among other things, the Atomic Energy Commission, before being dismantled in August 1952.
That very June, von Neumann and his colleagues completed Maniac at the Institute for Advanced Study in Princeton, New Jersey. It employed the electrostatic memory invented by F. C. Williams and T. Kilburn, which required a single cathode ray tube, instead of special storage tubes.
The next advance in hardware came at MIT's Whirlwind project, begun by Jay Forrester in 1944. Whirlwind performed 20,000 single-address operations per second on 16-digit words, employing a new type of electrostatic store in which 16 tubes each contained 256 binary digits. The Algebraic Interpreter for the Whirlwind and A-2 (developed by Grace Murray Hopper for the UNIVAC) are likely the most important of the machine-oriented languages.
The 704, originally the 701A, was released in 1954. It was the logical successor to the IBM 701 (1952, 1953). The evolution of the 701 into the 704 was headed up by Gene Amdahl. The direct result of the 701/704 was the beginning of work on Fortran (which stands for formula translator) by John Backus at IBM in 1953. Work on the Fortran translator (we would call it a compiler) began in 1955 and was completed in 1957. Fortran was, without a doubt, the first programming language.
In December 1959, at the Eastern Joint Computer Conference at the Statler Hotel in Boston, the three-year-old DEC unveiled the prototype of its PDP-1 (Programmed Data Processor-1). It was priced at $120,000 and deliveries began in November 1960.
The PDP-1 was an 18-bit machine with a memory capacity between 4,096 and 32,768 words. The PDP-1 had a memory cycle of 5 microseconds and a computing speed of 100,000 computations per second. It was the result of a project led by Benjamin Gurley and was composed of 3,500 transistors and 4,300 diodes. It had an editor, a macroassembler, and an ALGOL compiler, DECAL. It employed a paper tape reader for input and an IBM typewriter for output. The PDP-1 had the best cost/performance of any real-time computer of its generation. It was also the first commercial computer to come with a graphical display screen.
Just over 40 years ago there were no programming languages. In 1954 programming was still a function of hardware. Fortran was invented in 1957. It was soon being taught. By 1960, not only had COBOL and Lisp joined the roster, but so had others, many now thankfully forgotten. Over the past 40 years, nearly 4,000 computer languages have been produced. Only a tithe of these are in use today, but their growth and development have been progressive and organic.
There are a number of ways such languages can be taxonomized. One frequent classification is into machine languages (the natural language of a given device), assembly languages (in which common English words and abbreviations are used as input to the appropriate machine language), and high-level languages (which permit instructions that more closely resemble English instructions). Assembly languages are turned into machine language by translators called assemblers; high-level languages are converted into machine language by translators called compilers. Among the high-level languages currently in use are C, C++, Eiffel, and Java.
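The distinction can be made concrete with a small C sketch (the variable names and the assembly mnemonics below are illustrative only, not drawn from any particular machine). A compiler translates the high-level statement into something like the assembly shown in the comment; an assembler, in turn, translates such mnemonics into machine language.

#include <stdio.h>

int main(void)
{
    int a = 2, b = 3;

    /* One high-level statement. A compiler might render it roughly as:
     *     mov  eax, a      ; load a into a register
     *     add  eax, b      ; add b
     *     mov  sum, eax    ; store the result
     * An assembler then turns such mnemonics into machine code. */
    int sum = a + b;

    printf("%d\n", sum);
    return 0;
}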
Yet there is no guide for the overwhelmed programmer, who merely wants to get her job done. This Handbook of Programming Languages is intended to serve as an instant reference, a life preserver, providing information to enable that programmer to make intelligent choices as to which languages to employ, enough information to enable her to program at a basic level, and references to further, more detailed information.
General Bibliography
Histories of Programming Languages
Bergin, T. J., and R. G. Gibson (Eds.). 1996. History of programming languages. Reading, MA: Addison-Wesley. Proceedings of ACM's Second History of Programming Languages Conference.
Sammet, J. A. 1969. Programming languages: History and fundamentals. Englewood Cliffs, NJ: Prentice Hall. An indispensable work.
Wexelblat, R. L. (Ed.). 1981. History of programming languages. New York: Academic Press. The proceedings of ACM's First History of Programming Languages Conference.
Reader on Programming Languages
Horowitz, E. 1987. Programming languages: A grand tour (3rd ed.). Rockville, MD: Computer Science Press.
Surveys and Guides to Programming Languages
Appleby, D. 1991. Programming languages: Paradigm and practice. New York: McGraw-Hill.
Bal, H. E., and D. Grune. 1994. Programming language essentials. Wokingham, England: Addison-Wesley.
Cezzar, R. 1995. A guide to programming languages. Norwood, MA: Artech House.
Sethi, R. 1996. Programming languages: Concepts & constructs (2nd ed.). Reading, MA: Addison-Wesley.
Stansifer, R. 1995. The study of programming languages. Englewood Cliffs, NJ: Prentice Hall.
Foreword to This Volume: Imperative Programming Languages
On the simplest level, imperative programming languages are those that manipulate data in a stepwise fashion. That is, they take sequential instructions (algorithms) and apply them to data of various kinds.
Imperative languages have a "do this, then do that" structure. These instructions or commands are usually called statements. Most of the data items in memory have names, which are used when manipulating those items. The properties of the data items are called types. Programmers specify the relationships among representations, types, data, and names in data declarations.
A data declaration imposes structure upon data and gives it a name. The imposed structure is a specified type; the name is an identifier. All data is stored as bits. The bit pattern and the structure determine the data item's value. The union of name, type, and value is called a variable.
The work of Konrad Zuse is of tremendous value, but it had little use or influence: for political reasons, Zuse's work wasn't published contemporaneously, and so the history of imperative languages begins with Fortran, proceeding to Snobol4, Icon, C, and Pascal. Fortran, it must be noted, was designed for a specific machine: the IBM 704. It quickly became extremely popular with scientists.
In C, a data declaration looks like:
int x, y;
This states that x and y are integers.
In Ada 83 and Ada 9X, this would be stated as:
X, Y : Integer;
These languages can also initialize the data, assigning an initial value:
int x = 3, y = 4;
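Putting the pieces together, here is a minimal C sketch (the name count is illustrative): the declaration binds a type and a name, the initializer supplies a value, and the subsequent sequential statements give the "do this, then do that" structure described above.

#include <stdio.h>

int main(void)
{
    int count = 0;        /* declaration: type int, name count, value 0 */

    count = count + 5;    /* do this ...       */
    count = count * 2;    /* ... then do that  */

    printf("%d\n", count);   /* prints 10 */
    return 0;
}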
Not all currently used imperative programming languages can be found in this volume. C++ is an imperative language; but it is also an object-oriented language, and I have made the (arbitrary) decision to place it in Volume I of this Handbook. Similarly, while CLOS is an object-oriented language, it also belongs in the Lisp family, and so I have placed it in Volume IV.
There are basically two kinds of imperative languages: those that focus on compilation and on speed of execution, and those that focus on the level and convenience of the programmer and on interpretation. Icon is most likely the best instance of an interpretation-oriented language.
This volume will concentrate on languages in use in 1996/97. This includes the first true programming language (Fortran), as well as C, Pascal, and Icon; Modula-3 and Ada95 have been placed in Volume I.
Although only 40 years separate Fortran and Limbo, more than 4,000 programming languages have been proposed and/or developed in that time. That averages out to nearly two per week! I can only compare this luxuriance to that of the Cambrian explosion, millions of years ago.
Prehistory
The first true computer (ENIAC, 1946) was electronic, but only just: alongside its many thousands of glass tubes it contained many mechanical relays, and its programs were not stored but set up by hand with switches and plugwires. Stored programs came with the early Manchester (UK) computers. ENIAC had been preceded by Aiken's (and IBM's) Automatic Sequence Controlled Calculator at Harvard (1944). The ASCC had "more than 2,200 counter wheels for storage and summation and 3,300 relay components for control circuitry ... [it] was 51 feet long and 8 feet high. It weighed about 5 tons."[1] A decade later, we were still working with punched cards and plugwires.
[1] Bashe, C. J., L. R. Johnson, J. H. Palmer, and E. W. Pugh. 1986. IBM's early computers. Cambridge, MA: MIT Press.
While ENIAC and its relatives were computers, they were not addressable by means of languages, any more than the punched cards of a Jacquard loom addressed the mechanical loom. With one exception, the programmers of the 1940s were individuals who wrote out instructions on large sheets of paper, which were translated to the machinery by means of registers and cables (plugwires).
The sole exception was a German engineer, Konrad Zuse, who was in exile in Switzerland. Zuse designed Plankalkül, a programming language. It featured structured values, variables, and procedures with parameters. Unfortunately, the Second World War and other events prevented Plankalkül from becoming known for nearly 25 years. Bal and Grune note that had Zuse's work become known, present-day programming languages might have looked different. But it must be admitted that Plankalkül wasn't efficient.
In the 1950s, however, IBM and other companies were interested in leveraging the power of their machines. To gain the greatest advantage from their computers in executing mathematical calculations (the principal use of computation), programmers spent most of their time converting numerical material into assembler instructions; the first higher-level languages, called autocodes, were developed to automate that conversion. They were extremely limited in their range and could handle only a few relatively simple formulae.
And so the stage was set for IBM to set up a group, headed by John Backus, in 1953; in 1955, the group began what was intended to be a six-month job of writing a compiler. They finished in 1957. The report was published late that year, and by spring 1958, IBM's formula translator (Fortran) was being taught.