Looking Back

It’s a brave new world we are embarking on with all of our embedded personal computers that have been programmed by others who hopefully have our well being in mind.

—Paul Terrell

What is a personal computer?

To those who first built them, a personal computer meant a computer of your own. It meant a lot. That was the promise of the microprocessor: that you could have a computer of your own.

And that, to some—the time in question being the tail end of the idealistic ’60s—implied empowering others as well. "Computer power to the people" was not just a slogan heard in those days; it was a motivating force for many personal-computer pioneers. They wanted to build computers to be used for the betterment of mankind. They wanted to bring the power of computers to everyone.

This humanitarian motive existed alongside other, more self-centered motives. Having your own computer meant having control over the device, with the sense of power that brought. If you were an engineer, if you were a programmer, it specifically meant control over this necessary tool of your trade.

But control over the box was just the start; what you really needed was control over the software. Software is the ultimate example of building on the shoulders of others. Every program written was either pointlessly reinventing the wheel or building on existing software. And nothing was more natural than to build on other programs. A program was an intellectual product. When you saw it and understood it, you had it; it was a part of you. It was madness to suggest that you couldn’t use knowledge in your own head—and only slightly less mad to try to prevent you from acquiring knowledge that would help you do your job.

The idea of owning your own computer, paradoxically, fed the idea of not owning software, the idea that software should be open to be observed and analyzed, independently tested, borrowed, and built on. It supported the ideas of open source software and nonproprietary architectures, a collaborative perspective that John “Captain Crunch” Draper had called “the Woz principle” (for Steve Wozniak).

How have these ideas—the Woz principle and computer power for the people—fared with the deconstruction of the personal computer?

The Woz Principle

There was once an ongoing debate about the merits of proprietary versus open source software. Supporters of open source argued that it was better than the commercial kind because everyone was invited to find and fix bugs. They claimed that in an open source world, a kind of natural selection would ensure that only the best software survived. Eric S. Raymond, an author and open source proponent, contended that in open source we were seeing the emergence of a new nonmarket economy, with similarities to medieval guilds.

The debate is effectively over. Open source software didn’t drive out proprietary software, but neither is open source going away. This open source idea, what John Draper had called the Woz principle, is really just an elaboration of the sharing of ideas that many of the pioneers of the industry had enjoyed in college, and that scientifically trained people understood was the key to scientific advance. It was older than Netscape, older than Apple, older than the personal-computer revolution. As an approach to developing software, it began with the first computers in the 1940s and has been part of the advance of computer-software technology ever since. It was important in the spread of the personal-computer revolution, and now it powers the Internet.

The market for post-PC devices is different from the market for personal computers. The personal computer started as a hobbyist product designed by tech enthusiasts for people like themselves. Commoditization began in just a few years, but the unique nature of the computer—a device whose purpose is left to the user to define—allowed it to resist commoditization for decades. In the post-PC era, even though the devices are still really computers they are designed around specific functions. So the forces of commoditization that had been held at bay are now rushing in. The mainstream market for these devices doesn’t want flexibility and moddability and choices to make once they’ve purchased a device. They want it all to be simple, clear, consistent, and attractive. They don’t care about open source, but they like free. They want security but they don’t want to have to do anything about it. Their values are not the values of the people making the devices or the apps. They’re normal consumers, something the companies in the early personal-computer industry never really had to acknowledge.

If you look only at commercial apps, you will conclude that proprietary software, locked-down distribution channels, and closed architectures have won. If you look at the Internet and the Web and at the underlying code and practices of web developers, on the other hand, it is clear that open source and open architecture and standards are deeply entrenched.

And even in the device space, the differing models of Android and Apple suggest that things may shift further in the direction of the Woz principle. In this revolution, the workers have often succeeded in controlling the tools of their trade.

The implications go beyond the interests of software developers. Google’s Android model has allowed the development of lower-priced platforms that benefit the developing world, where personal computers and iPhones are out of the price range of many people.

As for the developers, a change has occurred in the makeup of that community. Hobbyists, who comprised the bulk of early personal-computer purchasers and developers, are harder to find in the post-PC era. When new phones and tablets hit the market now, they don’t come with BASIC installed, as the Apple II did. The exceptions are niche machines aimed at makers: the low-budget Arduino and Raspberry Pi used by today’s electronics hobbyists are rare examples of computers still intended to be programmed by their users. Perhaps the shift away from hobbyist programming is balanced by the growth in website development, which has a very low barrier to entry.

The industry has changed, but for one player the change came much earlier, and he decided he didn’t want to be a part of it any longer.

The Electronic Frontier

At the height of his power and influence at Lotus Development Corporation, Mitch Kapor walked away from it all.

Lotus had grown large very quickly. The first venture capital arrived in 1982, Lotus 1-2-3 shipped in January 1983, and that year the company made $53 million in sales. By early 1986 some 1,300 people were working for Lotus. Working for Mitch Kapor.

The growth was out of control, and it was overwhelming. Rather than feeling empowered by success, Kapor felt trapped by it. It occurred to him that he didn’t really like big companies, even when he was the boss.

Then came a day when a major customer complained that Lotus was making changes in its software too often: that it was, in effect, innovating too rapidly. So what was Lotus supposed to do, slow down the pace of innovation? Exactly. That’s what the company did. It was perfectly logical. Kapor didn’t really fault it as a business decision. But what satisfaction was there in dumbing down your company?


Figure 94. Mitch Kapor. Kapor went from teaching transcendental meditation to founding one of the most successful companies in personal computing’s boom days to defending individual rights in the information age.

(Courtesy of Mitch Kapor)

Lotus just wasn’t fun anymore. So Kapor resigned. He walked out the door and never looked back.

This act left him with a question: what now? After having helped launch a revolution, what should he do with the rest of his life? He didn’t get away from Lotus cleanly. He had spent a year completing work on a Lotus product called Agenda while serving as a visiting scientist at MIT. After that he jumped back in and started another firm, the significantly smaller On Technology, a company focusing on software for workgroups.

And in 1989 he began to log on to an online service called The Well. The Well, which stood for Whole Earth ’Lectronic Link, was the brainchild of Stewart Brand, who had also been behind The Whole Earth Catalog. The Well was an online community of bright, techno-savvy people.

“I fell in love with it,” Kapor said later, because “I met a bunch of people online with similar interests who were smart that I wanted to talk to.” He plunged headlong into this virtual community.

One day in the summer of 1990, he even found himself on a cattle ranch in Wyoming talking computers with John Perry Barlow, a former lyricist of the Grateful Dead.

What led to that unlikely meeting was a series of events that boded ill for civil liberties in the new wired world. A few months earlier, some anonymous individual, for whatever motive, had “liberated” a piece of the proprietary, secret operating-system code for the Apple Macintosh and had mailed it on floppy disks to influential people in the computer industry. Kapor was one of these people; John Perry Barlow—the former Grateful Dead lyricist, current cattle rancher, and computer gadfly—was not. But apparently because Barlow had attended something called The Hacker’s Conference, the FBI concluded that Barlow might know the perpetrator.

The Hacker’s Conference was a gathering of gifted programmers, industry pioneers, and legends, organized by the same Stewart Brand who had launched The Well. Hacker in this context was a term of praise, but in society at large it was coming to mean “cybercriminal”—that is, one who illegally breaks into others’ computer systems.

An exceptionally clueless G-man showed up at Barlow’s ranch. The agent demonstrated his ignorance of computers and software, and Barlow attempted to educate him. Their conversation became the subject of an entertaining online essay by Barlow that appeared on The Well.

Soon afterward, Barlow got another visitor at the ranch. But this one, being the founder of Lotus Development Corporation, knew a lot about computers and software. Kapor had received the fateful disk, had had a similar experience with a couple of clueless FBI agents, had read Barlow’s essay, and now wanted to brainstorm with him about the situation.

“The situation” transcended one ignorant FBI investigator or a piece of purloined Apple software. The Secret Service, in what they called “Operation Sun Devil,” had been waging a campaign against computer crime, chiefly by storming into the homes of teenaged computer users at night, waving guns, frightening the families, and confiscating anything that looked like it had to do with computers.

“The situation” involved various levels of law enforcement responding, often with grotesquely excessive force, to acts they scarcely understood. It involved young pranksters being hauled into court on very serious charges in an area where the law was murky and the judges were as uninformed as the police.

It did not escape the notice of Barlow and Kapor that they were planning to take on the government, to take on guys with guns.

What should they do? At the very least, they decided, these kids needed competent legal defense. They decided to put together an organization to provide it. As Kapor explained it, “There was uninformed and panicky government response, treating situations like they were threats to national security. They were in the process of trying to put some of these kids away for a long time and throw away the key, and it just felt like there were injustices being done out of sheer lack of understanding. I felt a moral outrage. Barlow and I felt something had to be done.”

In 1990 they cofounded the Electronic Frontier Foundation (EFF). They put out the word to a few high-profile computer-industry figures who they thought would understand what they were up to. Steve Wozniak kicked in a six-figure contribution immediately, as did Internet pioneer John Gilmore.

Merely fighting the defensive battles in the courts was a passive strategy. EFF, they decided, should play an active role. It should take on proposed and existing legislation, guard civil liberties in cyberspace, help open this new online realm to more people, and try to narrow the gulf between the “info haves” and the “info have-nots.”

The pace picked up when they hired Mike Godwin to head their legal efforts. “Godwin was online a lot as he was finishing law school at the University of Texas,” Kapor remembers. “I was impressed by him.”

EFF evolved quickly from a legal defense fund for some kid hackers to an influential lobbying organization. “In a way it was an ACLU of cyberspace,” Kapor now says. “We quickly found that we were doing a lot of good raising issues, raising consciousness [about] the whole idea of how the Bill of Rights ought to apply to cyberspace and online activity. I got very passionate about it.”

By 1993, EFF had an office in Washington and the ear of the Clinton administration, especially Vice President Al Gore, who dreamed of an information superhighway analogous to the favorite project of his father, Senator Albert Gore Sr.: the interstate highway system. Also addressing the issues were organizations such as Computer Professionals for Social Responsibility, which EFF was partly funding.

To a personal-computer pioneer intent on using the power of computers to benefit people, and on banishing the specter of the computer as an impersonal tool of power over the masses, this looked promising. It looked like dumb, slow, old power structures on one side and clever, agile, new techno-savvy revolutionaries on the other.

And since those heady days? It is easy to conclude that the problems EFF was created to address are growing faster than the efforts of such organizations can handle. Revelations that the US government is, with the help of huge cloud corporations, tapping into the massive stores of data they collect on citizens of the United States, on foreign countries, and on foreign heads of state, have undermined confidence not just in government but in technology. Once again, as before the personal-computer revolution, computer technology is starting to be seen as power that can be used by the powerful against regular people.

Ironically, the social networks built by the cloud-based companies have become the means of circulating both facts and wild rumors about these surveillance horrors. As the industry rushes gleefully into a networked world of embedded devices and lives lived in public, the public is increasingly convinced that we are being sucked into some nightmare science-fiction future of mind control, tracking devices embedded in newborns, and a Big Brother who sees your every act and knows your every thought. Mary Shelley could do a lot with that material.

But the fears are not groundless. Computers actually have given great power to individuals, and they will continue to do so. But this question cannot be avoided: can we keep them from taking away our privacy, our autonomy, and our freedom?

Looking Forward

The revolution that produced the personal computer is over. It grew out of a unique mixture of technological developments and cultural forces and burst forth with the announcement of the Altair computer in 1975. It reached critical mass in 1984, with the release of the Apple Macintosh, the first computer truly designed for nontechnical people and delivered to a broad consumer market. By the 2000s the revolutionaries had seized the palace.

The technology carried in on this revolutionary movement is now the driving force of the economies of most developed countries and is contributing to the development of the rest. It has changed the world.

Changing the world. Time and again, crazy dreamers had run up against resistance from accepted wisdom and had prevailed to realize their dreams. David Ahl trying to convince Digital Equipment Corporation management that people would actually use computers in the home; Lee Felsenstein working in post-1965 Berkeley to turn technology to populist ends; Ed Roberts looking for a loan to keep MITS afloat so it could build kit computers; Bill Gates dropping out of Harvard to get a piece of the dream; Steve Dompier flying to Albuquerque to check up on his Altair order; Dick Heiser and Paul Terrell opening stores to sell a product for which their friends told them there was no market; Mike Markkula backing two kids in a garage—dreamers all. And then there was Ted Nelson, the ultimate crazy dreamer, envisioning a new world and spending a lifetime trying to bring it to life. In one way or another, they were all dreaming of one thing: the personal computer, the packaging of the awesome power of computer technology into a little box that anyone could own.

Changing the world. It’s happening all the time.

Today that little box is taken for granted at best, and just as often supplanted by new devices, services, and ideas that will, in turn, be supplanted by newer, possibly better ones. Technological innovation on a scale never seen in human history is simply a fact of life today.

Perhaps the lesson of the personal-computer revolution is that technological innovations are never value-neutral. They are motivated by human hopes and desires, and they reflect the values of those who create them and of those who embrace them. Perhaps they are a reflection of who and what we are. If so, are we happy with what they show us?

In any case, we seem to be in an era of unbridled technological innovation. You can’t guess what new technological idea will shake the world, but you can be sure it is coming. Some bright young hacker is probably working on it today.

She may even be reading this book right now.
