Progenitors of the Information Age
The Development of Chips and Computers
James W. Cortada
Ground zero and the first minute of the Computer Revolution occurred at 9:00 A.M. on April 21, 1952, at Bell Laboratories in Murray Hill, New Jersey. Representatives from more than thirty companies gathered for a six-day seminar on the transistor, learning about what would become the heart of the computer. Each company had paid a $25,000 fee for its representatives to attend and, with that attendance, to sign up for the right to manufacture the transistor. If one had to pick the one time, place, and item that heralded the arrival of the computer, this event is as good as any, because the key to understanding why the computer was such an American story depends more on appreciating how this machine was diffused than on how it was invented. The initial development of the computer for primarily military purposes occurred simultaneously in Great Britain and the United States, with smaller projects underway in Germany and in Poland, but the transistor was clearly a U.S. invention that we can precisely document as having been created at Bell Labs in the 1940s. It was the decision of American Telephone & Telegraph (AT&T) to license this technology to other firms that made it possible for the transistor to work its way out of one company into many, and therefore to be used in ways unanticipated at the beginning. One of those unanticipated uses would be as the heart of this new machine we call the computer.
AT&T recognized that by sharing the rights to manufacture the transistor it would fulfill a traditional commitment as a nationally regulated utility in disseminating knowledge useful to the public while generating revenues from licensing agreements. This act would also stave off real concerns about what the Antitrust Division of the U.S. Department of Justice might do; at that moment, government lawyers were pondering the massive market power of AT&T. Management