6

Winners and Losers

BY 1966, when David Silver took his first elevator ride to the ninth floor of Tech Square, the AI lab was a showcase community, working under the hallowed precepts of the Hacker Ethic. After a big Chinese dinner, the hackers would go at it until dawn, congregating around the PDP-6 to do what was most important in the world to them. They would waddle back and forth with their printouts and their manuals, kibitzing around whoever was using the terminal at that time, appreciating the flair with which the programmer wrote his code. Obviously, the key to the lab was cooperation and a joint belief in the mission of hacking. These people were passionately involved in technology, and as soon as he saw them David Silver wanted to spend all his time there.

David Silver was fourteen years old. He was in the sixth grade, having been left back twice. He could hardly read. His classmates often taunted him. Later, people would reflect that his problem had been dyslexia; Silver would simply say that he "wasn't interested" in the teachers, the students, or anything that went on in school. He was interested in building systems.

From the time he was six or so, he had been going regularly to Eli Heffron's junkyard in Cambridge (where TMRC hackers also scavenged) and recovering all sorts of fascinating things. Once, when he was around ten, he came back with a radar dish, tore it apart, and rebuilt it so that it could pick up sounds: he rigged it as a parabolic reflector, stuck in a microphone, and was able to pick up conversations thousands of feet away. Mostly he used to listen to faraway cars, or birds, or insects. He also built a lot of audio equipment, and dabbled in time-lapse photography. Then he got interested in computers.

His father was a scientist, a friend of Minsky's and a teacher at MIT. He had a terminal in his office connected to the Compatible Time-sharing System on the IBM 7094. David began working with it; his first program was written in LISP, and translated English phrases into pig Latin. Then he began working on a program that would control a tiny robot (he called it a "bug") which he built at home, out of old telephone relays that he got at Eli's. He hooked the bug to the terminal, and working in machine language he wrote a program that made the two-wheeled bug actually crawl. David decided that robotics was the best of all pursuits: what could be more interesting than making machines that could move on their own, see on their own ... think on their own?

So his visit to the AI lab, arranged by Minsky, was a revelation. Not only were these people as excited about computers as David Silver was, but one of the major activities at the lab was robotics. Minsky was extremely interested in that field. Robotics was crucial to the progress of artificial intelligence; it let us see how far man could go in making smart machines do his work. Many of Minsky's graduate students concerned themselves with the theory of robotics, crafting theses about the relative difficulty of getting a robot to do this or that. The hackers were also heavily involved in the field, not so much in theorizing as in building and experimenting. Hackers loved robots for much the same reasons that David Silver did. Controlling a robot was a step beyond computer programming in controlling the system that was the real world. As Gosper used to say, "Why should we limit computers to the lies people tell them through keyboards?" Robots could go off and find out for themselves what the world was like.

When you program a robot to do something, Gosper would later explain, you get "a kind of gratification, an emotional impact, that is completely indescribable. And it far surpasses the kind of gratification you get from a working program. You're getting a physical confirmation of the correctness of your construction. Maybe it's sort of like having a kid."

One big project that the hackers completed was a robot that could catch a ball. Using a mechanical arm controlled by the PDP-6, as well as a television camera, Nelson, Greenblatt, and Gosper worked for months until the arm could finally catch a Ping-Pong ball lobbed toward it. The arm was able to determine the location of the ball in time to move itself in position to catch it. It was something the hackers were tremendously proud of, and Gosper especially wanted to go further and begin work on a more mobile robot which could actually play Ping-Pong.

"Ping-Pong by Christmas?" Minsky asked Gosper as they watched the robot catch balls.

Ping-Pong, like Chinese restaurants, was a system Gosper respected. He'd played the game in his basement as a kid, and his Ping-Pong style had much in common with his hacking style: both were based on his love of the physically improbable. When Gosper hit a Ping-Pong ball, the result was something as looney as a PDP-6 display hack: he put so much English on the ball that complex and counterintuitive forces were summoned, and there was no telling where the ball might go. Gosper loved the spin, the denial of gravity that allowed you to violently slam a ball so that instead of sailing past the end of a table it suddenly curved down, and when the opponent tried to hit it the ball would be spinning so furiously that it would fly off toward the ceiling. Or he would chop at a ball to increase the spin so much that it almost flattened out, nearly exploding in mid-air from the centrifugal force. "There were times when in games I was having," Gosper would later say, "a ball would do something in mid-air, something unphysical, that would cause spectators to gasp. I have seen inexplicable things happen in mid-air. Those were interesting moments."

Gosper was obsessed for a while with the idea of a robot playing the game. The hackers actually did get the robot to hold a paddle and take a good swat at a ball lobbed in its direction. Bill Bennett would later recall a time when Minsky stepped into the robot arm's area, floodlit by the bright lights required by the vidicon camera; the robot, seeing the glare reflecting from Minsky's bald dome, mistook the professor for a large Ping-Pong ball and nearly decapitated him.

Gosper wanted to go all the way, have the robot geared to move around and make clever shots, perhaps with the otherworldly spin of a good Gosper volley. But Minsky, who had actually done some of the hardware design for the ball-catching machine, did not think it an interesting problem. He considered it no different from the problem of shooting missiles out of the sky with other missiles, a task that the Defense Department seemed to have under control. Minsky dissuaded Gosper from going ahead on the Ping-Pong project, and Gosper would later insist that that robot could have changed history.

Of course, the idea that a project like that was even considered was thrilling to David Silver. Minsky had allowed Silver to hang out on the ninth floor, and soon Silver had dropped out of school totally, so he could spend his time more constructively at Tech Square. Since hackers cared less about someone's age than about his potential contribution to hacking, fourteen-year-old David Silver was accepted, at first as sort of a mascot.

He immediately proved himself of some value by volunteering to do some tedious lock-hacking tasks. It was a time when the administration had installed a tough new system of high-security locks. Sometimes the slightly built teen-ager would spend a whole night crawling over false ceilings, to take apart a hallway's worth of locks, study them to see how the mastering system worked, and painstakingly reconstruct them before the administrators returned in the morning. Silver was very good at working with machinist's tools, and he machined a certain blank which could be fashioned into a key to open a particularly tough new lock. The lock was on a door protecting a room with a high-security safe which held ... keys. Once the hackers got to that, the system "unraveled," in Silver's term.

Silver saw the hackers as his teachers: he could ask them anything about computers or machines, and they would toss him enormous chunks of knowledge. This would be transmitted in the colorful hacker jargon, loaded with odd, teddy-bearish variations on the English language. Words like winnitude, Greenblattful, gronk, and foo were staples of the hacker vocabulary, shorthand for relatively nonverbal people to communicate exactly what was on their minds.

Silver had all sorts of questions. Some of them were very basic: What are the various pieces computers are made of? What are control systems made of? But as he got more deeply into robotics he found that the questions you had to ask were double-edged. You had to consider things in almost cosmic terms before you could create reality for a robot. What is a point? What is velocity? What is acceleration? Questions about physics, questions about numbers, questions about information, questions about the representation of things ... it got to the point, Silver realized later, where he was "asking basic philosophical questions like what am I, what is the universe, what are computers, what can you use them for, and how does that relate? At that time all those questions were interesting, because it was the first time I had started to contemplate. And started to know enough about computers, and was relating biological-, human-, and animal-type functions, and starting to relate them to science and technology and computers. I began to realize that there was this idea that you could do things with computers that are similar to the things intelligent beings do."

Silver's guru was Bill Gosper. They would often go off to one of the dorms for Ping-Pong, go out for Chinese food, or talk about computers and math. All the while, Silver was soaking up knowledge in this Xanadu above Cambridge. It was a school no one else knew about, and for the first time in his life he was happy.

The computer and the community around it had freed him, and soon David Silver felt ready to do serious work on the PDP-6. He wanted to write a big, complicated program: he wanted to modify his little robot "bug" so that it would use the television camera to actually "fetch" things that people would toss on the floor. The hackers were not fazed by the fact that no one, even experienced people with access to all sorts of sophisticated equipment, had really done anything similar. Silver went about it in his usual inquisitive style, going to ten or twenty hackers and asking each about a specific section of the vision part of the program. High-tech Tom Sawyer, painting a fence with assembly code. Hardware problems, he'd ask Nelson. Systems problems, Greenblatt. For math formulas, Gosper. And then he'd ask people to help him with a subroutine on that problem. When he got all the subroutines, he worked to put the program together, and he had his vision program.

The bug itself was a foot long and seven inches wide, made of two small motors strapped together with a plastic harness. It had erector-set wheels on either end, an erector-set bar going across the top, and copper welding bars sticking out in front, like a pair of antlers. It looked, frankly, like a piece of junk. Silver used a technique called "image subtraction" to let the computer know where the bug was at any time: the camera would always be scanning the scene to see what had moved, and would notice any change in its picture. Meanwhile the bug would be moving randomly until the camera picked it up, and the computer directed it to the target, a wallet someone had tossed nearby.
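Silver's program was written in assembly language on the PDP-6 and is long gone, but the image-subtraction idea itself is simple. Here is a minimal sketch in modern Python with NumPy, purely illustrative: the function name, the threshold, and the toy frames are all invented for this example, not taken from Silver's code.

```python
import numpy as np

def locate_change(prev_frame: np.ndarray, cur_frame: np.ndarray,
                  threshold: int = 30):
    """Return the (row, col) centroid of whatever changed between two
    grayscale frames, or None if nothing moved enough to notice."""
    # Image subtraction: pixels that differ between frames mark motion.
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    moved = diff > threshold
    if not moved.any():
        return None
    rows, cols = np.nonzero(moved)
    return rows.mean(), cols.mean()

# Toy usage: an 8x8 "scene" in which one bright spot (the bug) appears.
before = np.zeros((8, 8), dtype=np.uint8)
after = before.copy()
after[2, 5] = 255
print(locate_change(before, after))   # roughly (2.0, 5.0)
```

With the bug's position known from frame to frame, the rest of the program reduces to steering: turn toward the target, advance, and repeat until the antlers close on the wallet.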

Meanwhile, something was happening which was indicative of a continuing struggle in this hacker haven. David Silver was getting a lot of criticism. The criticism came from nemeses of the Hacker Ethic: the AI theorists and grad students on the eighth floor. These were people who did not necessarily see the process of computing as a joyful end in itself: they were more concerned with getting degrees, winning professional recognition, and the, ahem, advancement of computer science. They considered hackerism unscientific. They were always demanding that hackers get off the machine so they could work on their Officially Sanctioned Programs, and they were appalled at the seemingly frivolous uses to which the hackers put the computer. The grad students were all in the midst of scholarly and scientific theses and dissertations which pontificated on the difficulty of doing the kind of thing that David Silver was attempting. They would not consider any sort of computer-vision experiment without much more planning, complete review of previous experiments, careful architecture, and a setup which included pure white cubes on black velvet in a pristine, dustless room. They were furious that the valuable time of the PDP-6 was being taken up for this ... toy! By a callow teen-ager, playing with the PDP-6 as if it were his personal go-cart.

While the grad students were complaining about how David Silver was never going to amount to anything, how David Silver wasn't doing proper AI, and how David Silver was never going to understand things like recursive function theory, David Silver was going ahead with his bug and PDP-6. Someone tossed a wallet on the grimy, crufty floor, and the bug scooted forward, six inches a second, moved right, stopped, moved forward. And the stupid little bug kept darting forward, right, or left until it reached the wallet, then rammed forward until the wallet was solidly between its "antlers" (which looked for all the world like bent shirt-hangers). And then the bug pushed the wallet to its designated "pen." Mission accomplished.

The graduate students went absolutely nuts. They tried to get Silver booted. They claimed there were insurance considerations springing from the presence of a fourteen-year-old in the lab late at night. Minsky had to stand up for the kid. "It sort of drove them crazy," Silver later reflected, "because this kid would just sort of screw around for a few weeks and the computer would start doing the thing they were working on that was really hard, and they were having difficulties and they knew they would never really fully solve [the problem] and couldn't implement it in the real world. And it was all of a sudden happening and I pissed them off. They're theorizing all these things and I'm rolling up my sleeves and doing it ... you find a lot of that in hacking in general. I wasn't approaching it from either a theoretical point of view or an engineering point of view, but from sort of a funness point of view. Let's make this robot wiggle around in a fun, interesting way. And so the things I built and the programs I wrote actually did something. And in many cases they actually did the very things that these graduate students were trying to do."

Eventually the grad students calmed down about Silver. But the schism was constant. The grad students viewed the hackers as necessary but juvenile technicians. The hackers thought that grad students were ignoramuses with their thumbs up their asses who sat around the eighth floor blindly theorizing about what the machine was like. They wouldn't know what The Right Thing was if it fell on them. It was an offensive sight, these incompetents working on Officially Sanctioned Programs which would be the subjects of theses and then tossed out (as opposed to hacker programs, which were used and improved upon). Some of them had won their sanctions by snow-jobbing professors who themselves knew next to nothing about the machines. The hackers would watch these people "spaz out" on the PDP-6, and rue the waste of perfectly good machine time.

One of these grad students, in particular, drove the hackers wild: he would make certain mistakes in his programs that would invariably cause the machine to try to execute faulty instructions, so-called "unused op-codes." He would do this for hours and days on end. The machine had a way of dealing with an unused op-code: it would store it in a certain place and, assuming you meant to define a new op-code, get ready to go back to it later. If you didn't mean to redefine this illegal instruction, and proceeded without knowing what you'd done, the program would go into a loop, at which point you'd stop it, look over your code, and realize what you'd done wrong. But this student, whom we will call Fubar in lieu of his long-forgotten name, could never understand this, and kept putting in the illegal instructions. Which caused the machine to loop wildly, constantly executing instructions that didn't exist, waiting for Fubar to stop it. Fubar would sit there and stare. When he got a printout of his program, he would stare at that. Later on, perhaps, after he got the printout home, he would realize his mistake, and come back to run the program again. Then he'd make the same error. And the hackers were infuriated because by taking his printout home and fixing it there all the time, he was wasting the PDP-6 doing thumb-sucker, IBM-style batch-processing instead of interactive programming. It was the equivalent of cardinal sin.
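The exact PDP-6 trap mechanics are only loosely described above, but the dynamic Fubar kept triggering is easy to model. The toy interpreter below is a hedged sketch only: the op-codes, the handler table, and the trap behavior are invented for illustration and are not PDP-6-accurate.

```python
# Toy interpreter: an undefined op-code traps; if the programmer never
# installed a handler (i.e., never meant to define a new op-code), the
# trap re-executes the same bad instruction forever -- Fubar's loop.
DEFINED = {"ADD", "SUB", "JUMP", "HALT"}
handlers = {}   # user-supplied meanings for new op-codes; Fubar's is empty

def run(program, max_steps=10):
    pc = 0
    for _ in range(max_steps):
        op = program[pc]
        if op == "HALT":
            return "halted"
        if op in DEFINED:
            pc += 1                    # pretend the instruction executed
        elif op in handlers:
            handlers[op]()             # the intended redefinition path
            pc += 1
        else:
            pass                       # trap with no handler: pc never advances
    return "still looping on %r" % program[pc]

print(run(["ADD", "FROB", "HALT"]))    # still looping on 'FROB'
```

Nelson's hack, in effect, replaced that silent loop with a diagnostic: an arrow pointing at the offending instruction.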

So one day Nelson got into the computer and made a hack that would respond to that particular mistake in a different way. People made sure to hang around the next time Fubar was signed up for the machine. He sat down at the console, taking his usual, interminably long time to get going, and sure enough, within a half hour, he made the same stupid mistake. Only this time, on the display screen, he saw that the program was not looping, but displaying the part of his code which had gone wrong. Right in the middle of it, pointing to the illegal instruction he'd put in, was a huge, gleaming, phosphorescent arrow. And flashing on the screen was the legend, "Fubar, you lose again!"

Fubar did not respond graciously. He wailed about his program being vandalized. He was so incensed that he completely ignored the information that Nelson's hack had given him about what he was doing wrong, and what he might do to fix it. He was not, as the hackers had somehow hoped, thankful that this wonderful feature had been installed to help him find the error of his ways. The brilliance of the hack had been wasted on him.


The hackers had a word to describe those graduate students. It was the same word they used to describe almost anyone who pretended to know something about computers and could not back it up with hacker-level expertise. The word was "loser." The hackers were "winners." It was a binary distinction: people around the AI lab were one or the other. The sole criterion was hacking ability. So intense was the quest to improve the world by understanding and building systems that almost all other human traits were disregarded. You could be fourteen years old and dyslexic, and be a winner. Or you could be bright, sensitive, and willing to learn, and still be considered a loser.

To a newcomer, the ninth floor was an intimidating, seemingly impenetrable passion palace of science. Just standing around the likes of Greenblatt or Gosper or Nelson could give you goose bumps. They would seem the smartest people in the world. And since only one person at a time could use the PDP-6, it took a lot of guts to sit down and learn things interactively. Still, anybody who had the hacker spirit in him would be so driven to compute that he would set self-doubt aside and begin writing programs.

Tom Knight, who drifted up to the ninth floor as a startlingly tall and skinny seventeen-year-old freshman in 1965, went through that process, eventually earning winner status. To do that, he later recalled, "You have to pretty much bury yourself in that culture. Long nights looking over the shoulder of people who were doing interesting things that you didn't understand." What kept him going was his fascination with the machine, how it let you build complicated systems completely under your control. In that sense, Knight later reflected, you had the same kind of control that a dictator had over a political system. But Knight also felt that computers were an infinitely flexible artistic medium, one in which you could express yourself by creating your own little universe. Knight later explained: "Here is this object you can tell what to do, and with no questions asked, it's doing what you tell it to. There are very few institutions where an eighteen-year-old person can get that to happen for him."

People like Knight and Silver hacked so intensely and so well that they became winners. Others faced a long uphill climb, because once hackers felt that you were an obstacle to the general improvement of the overall system, you were a loser in the worst sense and should be either cold-shouldered or told to leave outright.

To some, that seemed cruel. A sensitive hacker named Brian Harvey was particularly upset at the drastically enforced standard. Harvey successfully passed muster himself. While working on the computer he discovered some bugs in the TECO editor, and when he pointed them out, people said, fine, now go fix them. He did, realized that the process of debugging was more fun than using a program you'd debugged, and set about looking for more bugs to fix. One day while he was hacking TECO, Greenblatt stood behind him, stroking his chin as Harvey hammered in some code, and said, "I guess we ought to start paying you." That was the way you were hired in the lab. Only winners were hired.

But Harvey did not like it when other people were fingered as losers, treated like pariahs simply because they were not brilliant. Harvey thought that Marvin Minsky had a lot to do with promulgating that attitude. (Minsky later insisted that all he did was allow the hackers to run things themselves: "the system was open and literally encouraged people to try it out, and if they were harmful or incompetent, they'd be encouraged to go away.") Harvey recognized that, while on the one hand the AI lab, fueled by the Hacker Ethic, was "a great intellectual garden," on the other hand it was flawed by the fact that who you were didn't matter as much as what kind of hacker you were.

Some people fell right into a trap of trying so hard to be a winner on the machine that they were judged instantly as losers: for instance, Gerry Sussman, who arrived at MIT as a cocky seventeen-year-old. Having been an adolescent electronics junkie and high school computer fan, the first thing he did when he arrived at MIT was to seek a computer. Someone pointed him to Tech Square. He asked a person who seemed to belong there if he could play with the computer. Richard Greenblatt said, go ahead, play with it.

So Sussman began working on a program. Not long after, this odd-looking bald guy came over. Sussman figured the guy was going to boot him out, but instead the man sat down, asking, "Hey, what are you doing?" Sussman talked over his program with the man, Marvin Minsky. At one point in the discussion, Sussman told Minsky that he was using a certain randomizing technique in his program because he didn't want the machine to have any preconceived notions. Minsky said, "Well, it has them, it's just that you don't know what they are." It was the most profound thing Gerry Sussman had ever heard. And Minsky continued, telling him that the world is built a certain way, and the most important thing we can do with the world is avoid randomness, and figure out ways by which things can be planned. Wisdom like this has its effect on seventeen-year-old freshmen, and from then on Sussman was hooked.

But he got off on the wrong foot with the hackers. He tried to compensate for his insecurity by excessive bravado, and everyone saw right through it. He was also, by many accounts, terrifically clumsy, almost getting himself flattened in a bout with the robot arm, which he had infinite trouble controlling; and once he accidentally crushed a special brand of imported Ping-Pong ball that Gosper had brought into the lab. Another time, while on a venture of the Midnight Computer Wiring Society, Sussman got a glob of solder in his eye. He was losing left and right.

Perhaps to cultivate a suave image, Sussman smoked a pipe, the utterly wrong thing to do on the smokeaphobic ninth floor, and one day the hackers managed to replace some of his tobacco with cut-up rubber bands of the same approximate color.

He unilaterally apprenticed himself to Gosper, the most verbally profound of the hackers. Gosper might not have thought that Sussman was much of a winner at that point, but he loved an audience, and tolerated Sussman's misguided cockiness. Sometimes the wry guru's remarks would set Sussman's head spinning, like the time Gosper offhandedly remarked that "Well, data is just a dumb kind of programming." To Sussman, that answered the eternal existence question, "What are you?" We are data, pieces of a cosmic computer program that is the universe. Looking at Gosper's programs, Sussman divined that this philosophy was embedded in the code. Sussman later explained that "Gosper sort of imagined the world as being made out of all these little pieces, each of which is a little machine which is a little independent local state. And [each state] would talk to its neighbors."

Looking at Gosper's programs, Sussman realized an important assumption of hackerism: all serious computer programs are expressions of an individual. "It's only incidental that computers execute programs," Sussman would later explain. "The important thing about a program is that it's something you can show to people, and they can read it and they can learn something from it. It carries information. It's a piece of your mind that you can write down and give to someone else just like a book." Sussman learned to read programs with the same sensitivity that a literature buff would read a poem. There are fun programs with jokes in them, there are exciting programs which do The Right Thing, and there are sad programs which make valiant tries but don't quite fly.

These are important things to know, but they did not necessarily make you a winner. It was hacking that did it for Sussman. He stuck at it, hung around Gosper a lot, toned down his know-it-all attitude, and, above all, became an impressive programmer. He was the rare loser who eventually turned things around and became a winner. He later wrote a very complicated and much-heralded program in which the computer would move blocks with a robot arm; and by a process much like debugging, the program would figure out for itself which blocks it would have to move to get to the one requested. It was a significant step forward for artificial intelligence, and Sussman became known thereafter as more of a scientist, a planner. He named his famous program HACKER.

One thing that helped Sussman in his turnaround from loser to winner was a sense of what The Right Thing was. The biggest losers of all, in the eyes of the hackers, were those who so lacked that ability that they were incapable of realizing what the true best machine was, or the true best computer language, or the true best way to use a computer. And no system of using a computer earned the hackers' contempt as much as the time-sharing systems which, since they were a major part of Project MAC, were also based on the ninth floor of Tech Square. The first one, which had been operating since the mid-sixties, was the Compatible Time-sharing System (CTSS). The other, long in preparation and high in expense, was called Multics, and was so offensive that its mere existence was an outrage.

Unlike the quiltwork of constantly improving systems programs operating on the PDP-6, CTSS had been written by one man, MIT Professor F. J. Corbató. It had been a virtuoso job in many respects, all carefully coded and ready to run on the IBM 7094, which would support a series of terminals to be used simultaneously. But to the hackers, CTSS represented bureaucracy and IBM-ism. "One of the really fun things about computers is that you have control over them," CTSS foe Tom Knight would later explain. "When you have a bureaucracy around a computer you no longer have control over it. The CTSS was a 'serious' system. People had to go get accounts and had to pay attention to security. It was a benign bureaucracy, but nevertheless a bureaucracy, full of people who were here from nine to five. If there was some reason you wanted to change the behavior of the system, the way it worked, or develop a program that might have only sometimes worked, or might have some danger of crashing the system, that was not encouraged [on CTSS]. You want an environment where making those mistakes is not something for which you're castigated, but an environment where people say, 'Oops, you made a mistake.'"

In other words, CTSS discouraged hacking. Add to this the fact that it was run on a two-million-dollar IBM machine that the hackers thought was much inferior to their PDP-6, and you had one loser system. No one was asking the hackers to use CTSS, but it was there, and sometimes you just have to do some hacking on what's available. When a hacker would try to use it, and a message would come on-screen saying that you couldn't log on without the proper password, he would be compelled to retaliate. Because to hackers, passwords were even more odious than locked doors. What could be worse than someone telling you that you weren't authorized to use his computer?

As it turned out, the hackers learned the CTSS system so well that they could circumvent the password requirements. Once they were on the system, they would rub it in a bit by leaving messages to the administrators: high-tech equivalents of "Kilroy Was Here." Sometimes they would even get the computer to print out a list of all current passwords, and leave the printout under an administrator's door. Greenblatt recalls that the Project MAC-CTSS people took a dim view of that, and inserted an official MAC memo which would flash when you logged in, basically saying, a password is your sanctity, and only the lowest form of human would violate a password. Tom Knight got inside the system and changed the heading of that memo from MAC to HAC.

But as bad as CTSS was, the hackers thought Multics was worse. Multics was the name of the hugely expensive time-sharing system for the masses being built and debugged on the ninth floor. Though it was designed for general users, the hackers evaluated the structure of any system in a very personal light, especially a system created on the very floor of the building in which they hacked. So Multics was a big topic of hacker conversation.

Originally, Multics was done in conjunction with General Electric; then Honeywell stepped in. There were all sorts of problems with it. As soon as the hackers heard that the system would run on teletype Model 33 terminals instead of fast, interactive CRT displays, they knew the system was a total loser. The fact that the system was written in an IBM-created computer language called PL/I instead of sleek machine language was appalling. When the system first ran, it was incredibly sluggish. It was so slow that the hackers concluded the whole system must be brain-damaged, a term used so often to describe Multics that "brain-damaged" became a standard hackerese pejorative.

But the worst thing about Multics was the heavy security and the system of charging the user for the time. Multics took the attitude that the user paid down to the last nickel; it charged some for the memory you used, some more for the disk space, more for the time. Meanwhile the Multics planners, in the hacker view, were making proclamations about how this was the only way that utilities could work. The system totally turned the Hacker Ethic around: instead of encouraging more time on the computer (the only good thing about time-sharing as far as most hackers were concerned), it urged you to spend less time and to use less of the computer's facilities once you were on! The Multics philosophy was a disaster.

The hackers plagued the Multics system with tricks and crashes. It was almost a duty to do it. As Minsky would later say, "There were people doing projects that some other people didn't like and they would play all sorts of jokes on them so that it was impossible to work with them... I think [the hackers] helped progress by undermining professors with stupid plans."

In light of the guerrilla tendencies of hackers, the planners in charge of the AI lab had to tread very lightly with suggestions that would impact the hacker environment. And around 1967, the planners wanted a whopper of a change. They wanted to convert the hackers' beloved PDP-6 into a time-sharing machine.

By that time, Minsky had turned many of his AI lab leadership duties over to his friend Ed Fredkin, Nelson's boss at Triple-I, who himself was easing out of full-time business and into a professorship at MIT. (Fredkin would be one of the youngest full professors on the faculty, and the only full professor without a degree.) A master programmer himself, Fredkin was already close to the hackers. He appreciated the way the laissez-faire attitude allowed hackers to be dazzlingly productive. But he thought that sometimes the hackers could benefit from top-down direction. One of his early attempts to organize a "human wave" approach toward a robotics problem, assigning the hackers specific parts of the problem himself, had failed ignominiously. "Everyone thought I was crazy," Fredkin later recalled. He ultimately accepted the fact that the best way to get hackers to do things was to suggest them, and hope that the hackers would be interested enough. Then you would get production unheard of in industry or academia.

Time-sharing was something that Minsky and Fredkin considered essential. Between hackers and Officially Sanctioned Users, the PDP-6 was in constant demand; people were frustrated by long waits for access. But the hackers did not consider time-sharing acceptable. They pointed at CTSS, Multics, even at Jack Dennis' more amiable system on the PDP-1, as examples of the slower, less powerful access one would be stuck with when one shared the computer with others using it at the same time.

They noted that certain large programs could not be run at all with time-sharing. One of these was a monster program that Peter Samson had been working on. It was sort of an outgrowth of one of his first hacks on the TX-0, a program which, if you typed in the names of two subway stations on the MTA, would tell you the proper subway lines to take, and where to make the changes from one to another. Now, Samson was tackling the entire New York subway system ... he intended to put the entire system in the computer's memory, and the full timetable of its trains on a data disk accessible by the computer. One day he ran the program to figure out a route by which a person could ride the entire subway system with one token. It got some media attention, and then someone suggested that they see if they could use the computer to actually do it, break a record previously set by a Harvard student for actually traveling to every stop on the New York subway system.
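Samson's actual program was a large assembly-language hack working against real MTA timetables; none of it survives here. As a rough modern analogue (the stations, lines, and function names below are invented), a breadth-first search over a station graph captures what the first version of his hack did: find which lines to ride and where to change trains.

```python
from collections import deque

# Tiny invented network: station -> list of (neighbor, line) pairs.
SUBWAY = {
    "Park":      [("Downtown", "Red"), ("Govt Ctr", "Green")],
    "Downtown":  [("Park", "Red"), ("South Sta", "Red")],
    "Govt Ctr":  [("Park", "Green"), ("North Sta", "Green")],
    "South Sta": [("Downtown", "Red")],
    "North Sta": [("Govt Ctr", "Green")],
}

def route(start, goal):
    """Breadth-first search; returns the hops as (station, line) pairs."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        station, path = frontier.popleft()
        if station == goal:
            return path
        for nxt, line in SUBWAY[station]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [(nxt, line)]))
    return None   # no route exists

print(route("South Sta", "North Sta"))
# A line change is wherever consecutive hops name different lines.
```

The record-breaking version was a much harder problem: covering every stop in the system against a live timetable, which is why it took months of hacking and a war room to pull off.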

After months of hacking, Samson came up with a scheme, and one day two hackers made the run. A teletype was installed at the MIT Alumni Club in Manhattan, connected to the PDP-6. Two dozen or so messengers were stationed along the route, and they periodically ducked into pay phones, constantly updating schedule information, calling in late trains, reporting delays, and noting missed connections. The hackers at the teletype pounded in the information, and back in Cambridge the PDP-6 calculated changes in the route. As the travelers passed each station, Samson marked it off on a war-room map. The idea of these crew-cut madmen, a stark contrast to the long-haired protesters making news in other sorts of activities, captured the imagination of the media for a day, and The Great Subway Hack was noted as one of the memorable uses of the PDP-6.

It underlined something that Greenblatt, Gosper, and the rest considered essential: the magic that could come only from programs using all of the computer. The hackers worked on the PDP-6, one by one, as if it were their own personal computer. They would often run display programs which ran in "real time" and required the computer to constantly refresh the screen; time-sharing would make the display hacks run slower. And the hackers had gotten used to little frills that came from complete control of the PDP-6, like being able to track a program by the flashing lights (indicating which registers in the machine were firing). Those perks would be gone with time-sharing.

At heart, though, the time-sharing issue was an esthetic question. The very idea that you could not control the entire machine was disturbing. Even if the time-sharing system allowed the machine to respond to you in exactly the same way as it did in single-user mode, you would just know that it wasn't all yours. It would be like trying to make love to your wife, knowing she was simultaneously making love to six other people!

The hackers' stubbornness on this issue illustrated their commitment to the quality of computing; they were not prepared to compromise by using an inferior system that would serve more people and perhaps spread the gospel of hacking. In their view, hacking would be better served by using the best system possible. Not a time-shared system.

Fredkin was faced with an uphill political struggle. His strategy was to turn around the most vehement of the anti-time-sharing camp: Greenblatt. There was a certain affection between them. Fredkin was the only person on the ninth floor who called Greenblatt "Ricky." So he courted. He cajoled. He told Greenblatt how the power of the PDP-6 would be improved by a new piece of hardware which would expand its memory to a size bigger than any computer in the world. He promised that the time-sharing system would be better than any to date and the hackers would control it. He worked on Greenblatt for weeks, and finally Ricky Greenblatt agreed that time-sharing should be implemented on the PDP-6.

Soon after that, Fredkin was in his office when Bill Gosper marched in, leading several hackers. They lined up before Fredkin's desk and gave him a collective icy stare.

"What's up?" Fredkin asked.

They kept staring at him for a while longer. Finally they spoke.

"We'd like to know what you've done to Greenblatt," they said. "We have reason to believe you've hypnotized him."

Gosper in particular had difficulty accepting joint control of the PDP-6. His behavior reminded Fredkin of Roark, the architect in Ayn Rand's The Fountainhead who designed a beautiful building; when Roark's superiors took control of the design and compromised its beauty, Roark blew up the building. Fredkin later recalled Gosper telling him that if time-sharing were implemented on the PDP-6, Gosper would be compelled to physically demolish the machine. "Just like Roark," Fredkin later recalled. "He felt if this terrible thing was to be done, you would have to destroy it. And I understood this feeling. So I worked out a compromise." The compromise allowed the machine to be run late at night in single-user mode, so the hackers could run giant display programs and have the PDP-6 at their total command.

The entire experiment in time-sharing did not work out badly at all. The reason was that a special, new time-sharing system was created, a system that had the Hacker Ethic in its very soul.


The core of the system was written by Greenblatt and Nelson, in weeks of hard-core hacking. After some of the software was done, Tom Knight and others began the necessary adjustments to the PDP-6 and the brand-new memory addition: a large cabinet with the girth of two Laundromat-size washing machines, nicknamed Moby Memory. Although the administration approved of the hackers' working on the system, Greenblatt and the rest exercised full authority on how the system would turn out. An indication of how this system differed from the others (like the Compatible Time-sharing System) was the name that Tom Knight gave the hacker program: the Incompatible Time-sharing System (ITS).

The title was particularly ironic because, in terms of friendliness to other systems and programs, ITS was much more compatible than CTSS. True to the Hacker Ethic, ITS could easily be linked to other things; that way it could be infinitely extended, so users could probe the world more effectively. As in any time-sharing system, several users would be able to run programs on ITS at the same time. But on ITS, one user could also run several programs at once. ITS also allowed considerable use of the displays, and had what was for the time a very advanced system of editing that used the full screen ("years before the rest of the world," Greenblatt later boasted). Because the hackers wanted the machine to run as swiftly as it would have done had it not been time-shared, Greenblatt and Nelson wrote machine language code which allowed for unprecedented control in a time-sharing system.

There was an even more striking embodiment of the Hacker Ethic within ITS. Unlike almost any other time-sharing system, ITS did not use passwords. It was designed, in fact, to allow hackers maximum access to any user's file. The old practice of having paper tapes in a drawer, a collective program library where you'd have people use and improve your programs, was embedded in ITS; each user could open a set of personal files, stored on a disk. The open architecture of ITS encouraged users to look through these files, see what neat hacks other people were working on, look for bugs in the programs, and fix them. If you wanted a routine to calculate sine functions, for instance, you might look in Gosper's files and find his ten-instruction sine hack. You could go through the programs of the master hackers, looking for ideas, admiring the code. The idea was that computer programs belonged not to individuals, but to the world of users.

ITS also preserved the feeling of community that the hackers had when there was only one user on the machine, and people could crowd around him to watch him code. Through clever cross-bar switching, not only could any user on ITS type a command to find out who else was on the system, but he could actually switch himself to the terminal of any user he wanted to monitor. You could even hack in conjunction with another user: for instance, Knight could log in, find out that Gosper was on one of the other ports, and call up his program; then he could write lines of code in the program Gosper was hacking.

This feature could be used in all sorts of ways. Later on, after Knight had built some sophisticated graphics terminals, a user might be wailing away on a program and suddenly on screen there would appear this six-legged ... bug. It would crawl up your screen and maybe start munching on your code, spreading little phosphorous crumbs all over. On another terminal, hysterical with high-pitched laughter, would be the hacker who was telling you, in this inscrutable way, that your program was buggy. But though any user had the power not only to do that sort of thing, but to go in your files and delete ("reap," as they called it) your hard-hacked programs and valuable notes, that sort of thing wasn't done. There was honor among hackers on ITS.

The faith that ITS had in users was best shown in its handling of the problem of intentional system crashes. Formerly, a hacker rite of passage would be breaking into a time-sharing system and causing such digital mayhem (maybe by overwhelming the registers with looping calculations) that the system would "crash." Go completely dead. After a while a hacker would grow out of that destructive mode, but it happened often enough to be a considerable problem for people who had to work on the system. The more safeguards the system had against this, the bigger the challenge would be for some random hacker to bring the thing to its knees. Multics, for instance, required a truly non-trivial hack before it bombed. So there'd always be macho programmers proving themselves by crashing Multics.

ITS, in contrast, had a command whose specific function was crashing the system. All you had to do was type KILL SYSTEM, and the PDP-6 would grind to a halt. The idea was to take all the fun away from crashing the system by making it trivial to do that. On rare occasions, some loser would look at the available commands and say, "Wonder what KILL does?" and bring the system down, but by and large ITS proved that the best security was no security at all.

Of course, as soon as ITS was put up on the PDP-6 there was a flurry of debugging, which, in a sense, was to go on for well over a decade. Greenblatt was the most prominent of those who spent full days "hacking ITS" seeking bugs, adding new features, making sections of it run faster ... working on it so much that the ITS environment became, in effect, a home for systems hackers.

In the world that was the AI lab, the role of the systems hacker was central. The Hacker Ethic allowed anyone to work on ITS, but the public consequences of systems hacking threw a harsh spotlight on the quality of your work: if you were trying to improve the MIDAS assembler or the ITS-DDT debugger, and you made a hideous error, everyone's programs were going to crash, and people were going to find out what loser was responsible. On the other hand, there was no higher calling in hackerism than quality systems hacking.

The planners did not regard systems hacking with similar esteem. The planners were concerned with applications using computers to go beyond computing, to create useful concepts and tools to benefit humanity. To the hackers, the system was an end in itself. Most hackers, after all, had been fascinated by systems since early childhood. They had set aside almost everything else in life once they recognized that the ultimate tool in creating systems was the computer: not only could you use it to set up a fantastically complicated system, at once byzantine and elegantly efficient, but then, with a "Moby" operating system like ITS, that same computer could actually be the system. And the beauty of ITS was that it opened itself up, made it easy for you to write programs to fit within it, begged for new features and bells and whistles. ITS was the hacker living room, and everyone was welcome to do what he could to make himself comfortable, to find and decorate his own little niche. ITS was the perfect system for building ... systems!

It was an endlessly spiraling logical loop. As people used ITS, they might admire this feature or that, but most likely they would think of ways to improve it. This was only natural, because an important corollary of hackerism states that no system or program is ever completed. You can always make it better. Systems are organic, living creations: if people stop working on them and improving them, they die.

When you completed a systems program, be it a major effort like an assembler or debugger or something quick and (you hoped) elegant, like an interface output multiplexor, you were simultaneously creating a tool, unveiling a creation, and fashioning something to advance the level of your own future hacking. It was a particularly circular process, almost a spiritual one, in which the systems programmer was a habitual user of the system he was improving. Many virtuoso systems programs came out of remedies to annoying obstacles which hackers felt prevented them from optimum programming. (Real optimum programming, of course, could only be accomplished when every obstacle between you and the pure computer was eliminated: an ideal that probably won't be fulfilled until hackers are somehow biologically merged with computers.) The programs ITS hackers wrote helped them to program more easily, made programs run faster, and allowed programs to gain from the power that comes from using more of the machine. So not only would a hacker get huge satisfaction from writing a brilliant systems program, a tool which everyone would use and admire, but from then on he would be that much further along in making the next systems program.

To quote a progress report written by hacker Don Eastlake five years after ITS was first running:

The ITS system is not the result of a human wave or crash effort. The system has been incrementally developed almost continuously since its inception. It is indeed true that large systems are never "finished"... In general, the ITS system can be said to have been designer implemented and user designed. The problem of unrealistic software design is greatly diminished when the designer is the implementor. The implementor's ease in programming and pride in the result is increased when he, in an essential sense, is the designer. Features are less likely to turn out to be of low utility if users are their designers and they are less likely to be difficult to use if their designers are their users.

The prose was dense, but the point was clear: ITS was the strongest expression yet of the Hacker Ethic. Many thought that it should be a national standard for time-sharing systems everywhere. Let every computer system in the land spread the gospel, eliminating the odious concept of passwords, urging the unrestricted hands-on practice of system debugging, and demonstrating the synergistic power that comes from shared software, where programs belong not to the author but to all users of the machine.

In 1968, major computer institutions held a meeting at the University of Utah to come up with a standard time-sharing system to be used on DEC's latest machine, the PDP-10. The Ten would be very similar to the PDP-6, and one of the two operating systems under consideration was the hackers' Incompatible Time-sharing System. The other was TENEX, a system written by Bolt Beranek and Newman that had not yet been implemented. Greenblatt and Knight represented MIT at the conference, and they presented an odd picture: two hackers trying to persuade the assembled bureaucracies of a dozen large institutions to commit millions of dollars of their equipment to a system that, for starters, had no built-in security.

They failed.

Knight would later say that it was political naivete which lost it for the MIT hackers. He guessed that the fix was in even before the conference was called to order; a system based on the Hacker Ethic was too drastic a step for those institutions to take. But Greenblatt later insisted that "we could have carried the day if [we'd] really wanted to." But "charging forward," as he put it, was more important. It was simply not a priority for Greenblatt to spread the Hacker Ethic much beyond the boundaries of Cambridge. He considered it much more important to focus on the society at Tech Square, the hacker Utopia which would stun the world by applying the Hacker Ethic to create ever more perfect systems.