The Breakthrough

Inventing the transistor meant the fulfillment of a dream.

–Ernest Braun and Stuart MacDonald, Revolution in Miniature

In the early 1940s, the switching units in computers were mechanical relays that constantly opened and closed, clattering away like freight trains. By the end of that decade, vacuum tubes had replaced mechanical relays. But tubes were a technological dead end. They could be made only so small, and because they generated heat, they had to be spaced a certain distance apart from one another. As a result, tubes afflicted the early computers with a sort of structural elephantiasis.

But well before 1960, physicists working on solid-state devices had introduced an entirely new component into the mix. The device that consigned the vacuum tube to the scrap heap was the transistor, a tiny, seemingly inert slice of crystal with interesting electrical properties. The transistor was immediately recognized as a revolutionary development. In fact, John Bardeen, Walter Brattain, and William Shockley shared the 1956 Nobel Prize in physics for their work on it.

The transistor was significant for more than merely making another bit of technology obsolete. Resulting from a series of experiments in the application of quantum physics, transistors changed the computer from a “giant electronic brain” that was the exclusive domain of engineers and scientists to a commodity that could be purchased like a television set. The transistor was the technological breakthrough that made both the minicomputers of the 1960s and the personal-computer revolution of the 1970s possible.

Bardeen and Brattain introduced “the major invention of the century” in 1947, two days before Christmas. But to understand the real significance of the device that came into existence that winter day in Murray Hill, New Jersey, you have to look back to research done years before.

Inventing the Transistor

In the 1940s, Bardeen and Shockley were working in apparently unrelated fields. Experiments in quantum physics had produced some odd predictions (which were later borne out) about how crystals of certain chemical elements, such as germanium and silicon, would behave in an electric field. These crystals could not be classified as either insulators or conductors, so they were simply called semiconductors. Semiconductors had one property that particularly fascinated electrical engineers: a semiconductor crystal could be made to conduct electricity in one direction but not in the other. Engineers put this discovery to practical use. Tiny slivers of such crystals were used to “rectify” electrical current; that is, to turn alternating current into direct current. Early radios, called crystal sets, were the first commercial products to use these crystal rectifiers.
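That one-way conduction is the whole principle behind rectification, and a toy model makes it concrete. The Python sketch below is my own modern illustration, not period engineering: it treats the crystal as an ideal diode that passes current in only one direction, turning a sine-wave alternating current into pulsed direct current.

```python
import math

def ideal_diode(current: float) -> float:
    """Toy model of a crystal rectifier: conducts in one direction
    only, blocking any current that tries to flow the other way."""
    return current if current > 0 else 0.0

# One cycle of 60 Hz alternating current, sampled each millisecond.
samples = [math.sin(2 * math.pi * 60 * (t / 1000)) for t in range(17)]
rectified = [ideal_diode(i) for i in samples]

print(min(samples) < 0)     # True: the input swings negative
print(min(rectified) >= 0)  # True: the output is pulsed DC
```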

The crystal rectifier was a curious item, a slice of mineral that did useful work but had no moving parts: a solid-state device. But the rectifier knew only one trick. A different device soon replaced it almost entirely: Lee de Forest’s triode, the vacuum tube that made radios glow. The triode was more versatile than the crystal rectifier: it could use a weak secondary current to alter a strong current passing from one of its poles to the other, and in doing so amplify a signal. It was a step in the marriage of electricity and logic, and this capability to change one current by means of another would be essential to EDVAC-type computer design. At the time, though, researchers saw the triode’s main application in telephone switching circuits.
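Why a controllable current matters for computing deserves a moment of explanation. If a device lets one current switch another on and off, it can act as a logic element, and logic elements compose into everything a processor does. The Python sketch below is a hypothetical illustration of that idea, modeling a relay, tube, or transistor as a simple controlled switch:

```python
def switch(control: bool, supply: bool = True) -> bool:
    """Model a relay, tube, or transistor as a controlled switch:
    the control signal decides whether the supply current flows."""
    return supply and control

def gate_and(a: bool, b: bool) -> bool:
    # Two switches in series: current flows only if both are closed.
    return switch(b, supply=switch(a))

def gate_not(a: bool) -> bool:
    # Inverting arrangement: the output is live unless the control
    # signal steers the current away from it.
    return not switch(a)

# From AND and NOT, any Boolean function can be composed.
assert gate_and(True, True) and not gate_and(True, False)
assert gate_not(False) and not gate_not(True)
```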

Naturally, people at AT&T, and especially at its research branch, Bell Labs, became interested in the triode; the company hoped that a solid-state device might someday do the triode’s job without its heat, bulk, and fragility. William Shockley was working at Bell Labs at the time, looking at the effect impurities had on semiconductor crystals: trace amounts of other substances could provide the extra electrons needed to carry electrical current in the devices. Shockley convinced Bell Labs to let him put together a team to study this intriguing development. His team consisted of experimental scientist Walter Brattain and theoretician John Bardeen. For some time the group’s efforts went nowhere. Similar research was underway at Purdue University in West Lafayette, Indiana, and the Bell group kept close tabs on the work going on there.

Then Bardeen discovered that an inhibiting effect on the surface of the crystal was interfering with the flow of current. Brattain conducted the experiment that proved Bardeen right, and on December 23, 1947, the transistor was born. The transistor did everything the vacuum tube did, and it did it better. It was smaller, it didn’t generate as much heat, and it didn’t burn out.

Integrated Circuits

Most important, the functions performed by several transistors could be incorporated into a single semiconductor device. Researchers quickly set about the task of constructing these sophisticated semiconductors. Because these devices integrated a number of transistors into a more complex circuit, they were called integrated circuits, or ICs. Because they were essentially tiny slivers of silicon, they also came to be called chips.

Building ICs was a complicated and expensive process, and an entire industry devoted to making them soon sprang up. The first companies to produce chips commercially were the existing electronics companies. One very early start-up was Shockley Semiconductor, which William Shockley founded in 1956 in Mountain View, just south of his hometown of Palo Alto. Shockley’s firm employed many of the world’s best semiconductor people. Some of them didn’t stay with the company for long: eight left in 1957 to found Fairchild Semiconductor, and Silicon Valley grew from this root.

images/images-by-chapter/chapter-1/Moore-Noyce.jpg

Figure 10. Gordon Moore and Robert Noyce. Gordon Moore (left) and Robert Noyce founded Intel, which became the computer industry’s semiconductor powerhouse. (Courtesy of Intel)

A decade after Fairchild was formed, virtually every semiconductor company in existence could boast a large number of former Fairchild employees. Even the big electronics companies, such as Motorola, that entered the semiconductor industry in the 1960s employed ex-Fairchild engineers. And with a few notable exceptions, among them RCA, Motorola, and Texas Instruments, most of the semiconductor companies were located within a few miles of Shockley’s operation in the Santa Clara Valley, soon to be rechristened Silicon Valley. The semiconductor industry grew with amazing speed, and the size and price of its products shrank at the same pace. Competition was fierce.

At first, little demand existed for highly complex ICs outside the military and aerospace industries. Certain kinds of ICs, though, were in common use in large mainframe computers and minicomputers. Of paramount importance were memory chips: ICs that could store data and retain it as long as they were fed power. Memory chips were the wedge that took semiconductors mainstream.

Memory chips at the time embodied the functions of hundreds of transistors. Other ICs weren’t designed to retain the data that flowed through them; instead, they were wired to transform the data in certain ways, performing simple arithmetic or logic operations on it (a toy version of such a circuit is sketched below). Then, in the early 1970s, the runaway demand for electronic calculators led to the creation of a new and considerably more powerful computer chip: the microprocessor.
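To make the memory/logic distinction concrete, here is a minimal sketch, again my own modern illustration rather than anything built in the period: a half adder, the simplest arithmetic circuit, which turns two input bits into a sum bit and a carry bit using nothing but logic operations, and retains no data at all.

```python
def half_adder(a: int, b: int) -> tuple[int, int]:
    """The simplest arithmetic circuit: combine two bits into a
    sum bit and a carry bit using only logic operations.
    Nothing is stored; data is transformed as it flows through."""
    return a ^ b, a & b  # XOR yields the sum, AND yields the carry

# Exercising all four input combinations reproduces the
# one-bit addition table.
for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} -> carry={carry}, sum={s}")
```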
