Computer Industry

computer

computer, device capable of performing a series of arithmetic or logical operations. A computer is distinguished from a calculating machine, such as an electronic calculator, by being able to store a computer program (so that it can repeat its operations and make logical decisions), by the number and complexity of the operations it can perform, and by its ability to process, store, and retrieve data without human intervention. Computers developed along two separate engineering paths, producing two distinct types of computer—analog and digital. An analog computer operates on continuously varying data; a digital computer performs operations on discrete data.

Computers are categorized by both size and the number of people who can use them concurrently. Supercomputers are sophisticated machines designed to perform complex calculations at maximum speed; they are used to model very large dynamic systems, such as weather patterns. Mainframes, the largest and most powerful general-purpose systems, are designed to meet the computing needs of a large organization by serving hundreds of computer terminals at the same time. Minicomputers, though somewhat smaller, also are multiuser computers, intended to meet the needs of a small company by serving up to a hundred terminals. Microcomputers, computers powered by a microprocessor, are subdivided into personal computers and workstations, the latter typically incorporating RISC processors. Although microcomputers were originally single-user computers, the distinction between them and minicomputers has blurred as microprocessors have become more powerful. Linking multiple microcomputers together through a local area network or joining multiple microprocessors together in a parallel-processing system has enabled smaller systems to perform tasks once reserved for mainframes, and the techniques of grid computing have enabled computer scientists to utilize the unused processing power of computers connected over a network or the Internet.

Advances in the technology of integrated circuits have spurred the development of smaller and more powerful general-purpose digital computers. Not only has this reduced the size of the large, multi-user mainframe computers—which in their early years were large enough to walk through—to that of pieces of furniture, but it has also made possible powerful, single-user personal computers and workstations that can sit on a desktop or be easily carried. These, because of their relatively low cost and versatility, have replaced typewriters in the workplace and rendered the analog computer all but obsolete. The reduced size of computer components has also led to the development of thin, lightweight notebook computers and even smaller computer tablets and smartphones that have much more computing and storage capacity than the desktop computers that were available in the early 1990s.

Analog Computers

An analog computer represents data as physical quantities and operates on the data by manipulating the quantities. It is designed to process data in which the variable quantities vary continuously (see analog circuit); it translates the relationships between the variables of a problem into analogous relationships between electrical quantities, such as current and voltage, and solves the original problem by solving the equivalent problem, or analog, that is set up in its electrical circuits. Because of this feature, analog computers were especially useful in the simulation and evaluation of dynamic situations, such as the flight of a space capsule or the changing weather patterns over a certain area. The key component of the analog computer is the operational amplifier, and the computer's capacity is determined by the number of amplifiers it contains. Although analog computers are commonly found in such forms as speedometers and watt-hour meters, they largely have been made obsolete for general-purpose mathematical computations and data storage by digital computers.

Digital Computers

A digital computer is designed to process data in numerical form (see digital circuit); its circuits perform directly the mathematical operations of addition, subtraction, multiplication, and division. The numbers operated on by a digital computer are expressed in the binary system; binary digits, or bits, are 0 and 1, so that 0, 1, 10, 11, 100, 101, etc., correspond to 0, 1, 2, 3, 4, 5, etc. Binary digits are easily expressed in the computer circuitry by the presence (1) or absence (0) of a current or voltage. A series of eight consecutive bits is called a "byte"; the eight-bit byte permits 256 different "on-off" combinations. Each byte can thus represent one of up to 256 alphanumeric characters, and such an arrangement is called a "single-byte character set" (SBCS); the de facto standard for this representation is the extended ASCII character set. Some languages, such as Japanese, Chinese, and Korean, require more than 256 unique symbols. The use of two bytes, or 16 bits, for each symbol, however, permits the representation of up to 65,536 characters or ideographs. Such an arrangement is called a "double-byte character set" (DBCS); Unicode is the international standard for such a character set. A group of one or more bytes, depending on the computer's architecture, is sometimes called a digital word; it may specify not only the magnitude of the number in question, but also its sign (positive or negative), and may also contain redundant bits that allow automatic detection, and in some cases correction, of certain errors (see code; information theory). A digital computer can store the results of its calculations for later use, can compare results with other data, and on the basis of such comparisons can change the series of operations it performs. Digital computers are now used for a wide range of personal, business, scientific, and government purposes, from electronic games, e-mail, social networking, and data- and word-processing applications to desktop publishing, video conferencing, weather forecasting, simulated nuclear weapons testing, cryptography, and many other purposes.
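
As a brief illustration of these relationships, the following Python fragment prints the binary forms of the numbers 0 through 5, the number of values representable in one and in two bytes, and the byte counts produced by a single-byte and a double-byte character encoding (the encodings and sample characters chosen here are merely convenient examples):

    for n in range(6):
        print(n, format(n, "b"))           # 0 through 5 written as binary digits

    print(2 ** 8)                          # one 8-bit byte: 256 distinct values
    print(2 ** 16)                         # two bytes (16 bits): 65,536 distinct values

    print(len("A".encode("latin-1")))      # a single-byte character set: 1 byte per character
    print(len("漢".encode("utf-16-be")))   # a two-byte encoding of an ideograph: 2 bytes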

Processing of Data

The operations of a digital computer are carried out by logic circuits, which are digital circuits whose single output is determined by the conditions of the inputs, usually two or more. The various circuits processing data in the computer's interior must operate in a highly synchronized manner; this is accomplished by controlling them with a very stable oscillator, which acts as the computer's "clock." Typical personal computer clock rates now range from several hundred million cycles per second to several billion. Operating at these speeds, digital computer circuits are capable of performing hundreds of billions of arithmetic or logic operations per second, but supercomputers are capable of performing more than 1 million times faster; such speeds permit the rapid solution of problems that would be impossible for a human to solve by hand. In addition to the arithmetic and logic circuitry and a number of registers (storage locations that can be accessed faster than main storage, or memory, and are used to hold the intermediate results of calculations), the heart of the computer—called the central processing unit, or CPU—contains the circuitry that decodes the set of instructions, or program, and causes it to be executed.
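
The defining property of a logic circuit—a single output fixed entirely by the conditions of its inputs—can be sketched in a few lines of Python. The NAND function used here is one common logic element, and composing AND and OR from it is offered only as an illustration, not as a description of any particular computer's circuitry:

    def nand(a, b):
        # A single output determined solely by the two inputs.
        return not (a and b)

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    for a in (False, True):                # print the truth table for AND and OR
        for b in (False, True):
            print(int(a), int(b), int(and_(a, b)), int(or_(a, b)))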

Storage and Retrieval of Data

Associated with the CPU is the main storage, or memory, where results or other data are stored for periods of time ranging from a small fraction of a second to days or weeks before being retrieved for further processing. Once made up of vacuum tubes and later of small doughnut-shaped ferromagnetic cores strung on a wire matrix, main storage now consists of integrated circuits, each of which may contain billions of semiconductor devices. Where each vacuum tube or core represented one bit and the total memory of the computer was measured in thousands of bytes (or kilobytes, KB), modern computer memory chips represent hundreds of millions of bytes (or megabytes, MB) and the total memory of both personal and mainframe computers is measured in billions of bytes (gigabytes, GB) or more. Read-only memory (ROM), which cannot be written to, maintains its content at all times and is used to store the computer's control information. Random-access memory (RAM), which can be both read from and written to, loses its contents each time the computer is turned off. Modern computers now include cache memory, which the CPU can access faster than RAM but slower than the registers; data in cache memory also is lost when the computer is turned off.
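
The units of capacity used above can be summarized in a short sketch; decimal multiples are assumed here (binary multiples of 1,024 are also widely used), and the 8 GB memory is only a hypothetical example:

    KB = 10 ** 3                           # kilobyte: one thousand bytes
    MB = 10 ** 6                           # megabyte: one million bytes
    GB = 10 ** 9                           # gigabyte: one billion bytes

    ram_bytes = 8 * GB                     # a hypothetical 8 GB main memory
    print(ram_bytes // MB, "MB")           # 8000 MB
    print(ram_bytes * 8, "bits")           # eight bits per byte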

Programs and data that are not currently being used in main storage can be saved on auxiliary or secondary storage. Although punched paper tape and punched cards once served this purpose, the major materials used today are magnetic tape and disks and flash memory devices, all of which can be read from and written to, and two types of optical disks, the compact disc (CD) and its successor the digital versatile disc (DVD). When compared to RAM, these are less expensive (though flash memory is more expensive than the other two), are not volatile (i.e., data is not lost when the power to the computer is shut off), and can provide a convenient way to transfer data from one computer to another. Thus operating instructions or data output from one computer can be stored and used later either by the same computer or another.

In a system using magnetic tape the information is stored by a specially designed tape recorder somewhat similar to one used for recording sound. Magnetic tape is now largely used for offsite storage of large volumes of data or major systems backups. In magnetic and optical disk systems the principle is the same; the magnetic or optical medium lies in a path, or track, on the surface of a disk. The disk drive also contains a motor to spin the disk and a magnetic or optical head or heads to read and write the data to the disk. Drives take several forms, the most significant difference being whether the disk can be removed from the drive assembly. Flash memory devices, such as USB flash drives, flash memory cards, and solid-state drives, use nonvolatile memory that can be erased and reprogrammed in blocks.

Removable magnetic disks made of mylar enclosed in a plastic holder (older versions had paper holders) are now largely outdated. These floppy disks have varying capacities, with very high density disks holding 250 MB—more than enough to contain a dozen books the size of Tolstoy's Anna Karenina. Internal and external magnetic hard disks, or hard drives, are made of metal and arranged in spaced layers. They can hold vastly more data than floppies or optical disks, and can read and write data much faster than floppies. As hard disks dropped in price, they became increasingly included as a component of personal computers and replaced floppy disks as the standard media for the storage of operating systems, programs, and data.

Compact discs can hold hundreds of megabytes, and have been used, for example, to store the information contained in an entire multivolume encyclopedia or set of reference works. DVD is an improved optical storage technology capable of storing as much as ten times the data that CD technology can store. CD–Read-Only Memory (CD-ROM) and DVD–Read-Only Memory (DVD-ROM) disks can only be read—the disks are impressed with data at the factory but once written cannot be erased and rewritten with new data. The latter part of the 1990s saw the introduction of new optical storage technologies: CD-Recordable (CD-R) and DVD-Recordable (DVD-R, DVD+R), optical disks that can be written to by the computer to create a CD-ROM or DVD-ROM, but can be written to only once; and CD-ReWritable (CD-RW), DVD-ReWritable (DVD-RW and DVD+RW), and DVD–Random Access Memory (DVD-RAM), disks that can be written to multiple times.

Flash memory devices, a still more recent development, are an outgrowth of electrically erasable programmable read-only memory. Although more expensive than magnetic and optical storage technologies, flash memory can be read and written to much faster, permitting shorter boot times and quicker data access and storage. Because flash memory also is resistant to mechanical shock and has become increasingly compact, a USB flash drive allows for the easy, portable external storage of large quantities of data. Solid-state drives are more easily accessed and written to than magnetic hard drives and use less power, and have become common in high-end, lightweight notebook computers and in high-performance computers. Flash memory is also used in computer tablets and smartphones. Hybrid drives, which combine a smaller amount of flash memory with a large magnetic hard drive, permit the economical storage of large amounts of data while benefiting from a more responsive access to frequently used but only occasionally changed operating system and program files.

Data are entered into the computer and the processed data made available via input/output devices, also called peripherals. All auxiliary storage devices are used as input/output devices. For many years, the most popular input/output medium was the punched card. The most popular input devices are the computer terminal and internal magnetic hard drives, and the most popular output devices are the computer display screen associated with a terminal (typically displaying output that has been processed by a graphics processing unit) and the printer. Human beings can directly communicate with the computer through computer terminals, entering instructions and data by means of keyboards much like the ones on typewriters, by using a pointing device such as a mouse, trackball, or touchpad, or by speaking into a microphone that is connected to a computer running voice-recognition software. The result of the input may be displayed on a liquid-crystal, light-emitting diode, or cathode-ray tube screen or on printer output. Another important input/output device in modern computers is the network card, which allows the computer to connect to a computer network and the Internet using a wired or radio (wireless) connection. The CPU, main storage, auxiliary storage, and input/output devices collectively make up a computer system.

Sharing the Computer's Resources

Generally, the slowest operations that a computer must perform are those of transferring data, particularly when data is received from or delivered to a human being. The computer's central processor is idle for much of this period, and so two similar techniques are used to put its power to fuller use.

Time sharing, used on large computers, allows several users at different terminals to use a single computer at the same time. The computer performs part of a task for one user, then suspends that task to do part of another for another user, and so on. Each user only has the computer's use for a fraction of the time, but the task switching is so rapid that most users are not aware of it. Most of the tens of millions of computers in the world are stand-alone, single-user devices known variously as personal computers or workstations. For them, multitasking involves the same type of switching, but for a single user. This permits a user, for example, to have one file printed and another uploaded to an Internet website while editing a third in a word-processing session and listening to a recording streamed over the Internet. Personal computers can also be linked together in a network, where each computer is connected to others, usually by network, coaxial, or fiber-optic cable or by radio signals (wireless), permitting all to share resources such as printers, hard-disk storage devices, and an Internet connection. Cloud computing is another form of resource sharing. Delivering access to both hardware and software over a network, most often the Internet, cloud computing is designed to allow many individuals and organizations using a wide range of devices both ease of access to computing resources and flexibility in changing the type and volume of the resources to which they have access.
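
The rapid switching on which time sharing and multitasking depend can be suggested by a small Python sketch; the task names and step counts are arbitrary, and a real operating system switches preemptively rather than at explicit yield points as done here:

    import asyncio

    async def user_task(name, steps):
        for i in range(steps):
            print(name, "step", i)
            await asyncio.sleep(0)         # yield the processor so another task can run

    async def main():
        # Three "users" share one processor; their steps appear interleaved.
        await asyncio.gather(
            user_task("user A", 3),
            user_task("user B", 3),
            user_task("user C", 3),
        )

    asyncio.run(main())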

Computer Programs and Programming Languages

Before a computer can be used for a given purpose, it must first be programmed, that is, prepared for use by loading a set of instructions, or program. The various programs by which a computer controls aspects of its operations, such as those for translating data from one form to another, are known as software, as contrasted with hardware, which is the physical equipment comprising the installation. In most computers the moment-to-moment control of the machine resides in a special software program called an operating system, or supervisor. Other forms of software include assemblers and compilers for programming languages and applications for business and home use (see computer program). Software is of great importance; the usefulness of a highly sophisticated array of hardware can be limited by the lack of adequate software.

Each instruction in the program may be a simple, single step, telling the computer to perform some arithmetic operation, to read the data from some given location in the memory, to compare two numbers, or to take some other action. The program is entered into the computer's memory exactly as if it were data, and on activation, the machine is directed to treat this material in the memory as instructions. Other data may then be read in and the computer can carry out the program to complete the particular task.
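
The stored-program principle—instructions held in memory exactly as if they were data—can be illustrated with a toy machine written in Python. The four-instruction repertoire is invented for this sketch and corresponds to no real architecture:

    # Program and data occupy the same memory; instructions are fetched like data.
    memory = [
        ("LOAD", 7),       # 0: accumulator <- memory[7]
        ("ADD", 8),        # 1: accumulator <- accumulator + memory[8]
        ("STORE", 9),      # 2: memory[9] <- accumulator
        ("HALT", None),    # 3: stop
        None, None, None,  # 4-6: unused
        2, 3, 0,           # 7 and 8: input data; 9: result
    ]

    pc, acc = 0, 0                         # program counter and accumulator
    while True:
        op, addr = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            break

    print(memory[9])                       # prints 5, the sum of the two data values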

Since computers are designed to operate with binary numbers, all data and instructions must be represented in this form; the machine language, in which the computer operates internally, consists of the various binary codes that define instructions together with the formats in which the instructions are written. Since it is time-consuming and tedious for a programmer to work in actual machine language, a programming language, or high-level language, designed for the programmer's convenience, is used for the writing of most programs. The computer is programmed to translate this high-level language into machine language and then solve the original problem for which the program was written. Many high-level programming languages are now universal, varying little from machine to machine.
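
A glimpse of this translation can be had by asking Python to display the lower-level instructions it generates for a small function; the interpreter's bytecode serves here only as a stand-in for true machine language:

    import dis

    def area(width, height):              # a fragment written in a high-level language
        return width * height

    dis.dis(area)                          # the lower-level instructions it is translated into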

Development of Computers

Although the development of digital computers is rooted in the abacus and early mechanical calculating devices, Charles Babbage is credited with the design of the first modern computer, the "analytical engine," during the 1830s. Vannevar Bush built a mechanically operated device, called a differential analyzer, in 1930; it was the first general-purpose analog computer. John Atanasoff constructed the first electronic digital computing device in 1939; a full-scale version of the prototype was completed in 1942 at Iowa State College (now Iowa State Univ.). In 1941 Konrad Zuse completed the Z3, a fully operational electromechanical computer.

During World War II, the Colossus was developed for British codebreakers; it was the first programmable electronic digital computer. The Mark I, or Automatic Sequence Controlled Calculator, completed in 1944 at Harvard by Howard Aiken, was the first machine to execute long calculations automatically, while the first all-purpose electronic digital computer, ENIAC (Electronic Numerical Integrator And Calculator), which used thousands of vacuum tubes, was completed in 1946 at the Univ. of Pennsylvania. UNIVAC (UNIVersal Automatic Computer) became (1951) the first computer to handle both numeric and alphabetic data with equal facility; intended for business and government use, this was the first widely sold commercial computer.

First-generation computers were supplanted by the transistorized computers (see transistor) of the late 1950s and early 60s, second-generation machines that were smaller, used less power, and could perform a million operations per second. They, in turn, were replaced by the third-generation integrated-circuit machines of the mid-1960s and 1970s that were even smaller and were far more reliable. The 1970s, 80s, and 90s were characterized by the development of the microprocessor and the evolution of increasingly smaller but powerful computers, such as the personal computer and personal digital assistant (PDA), which ushered in a period of rapid growth in the computer industry.

The World Wide Web was unveiled in 1990, and with the development of graphical web browser programs in succeeding years the Web and the Internet spurred the growth of general purpose home computing and the use of computing devices as a means of social interaction. Smartphones, which integrate a range of computer software with a cellular telephone that now typically has a touchscreen interface, date to 2000 when a PDA was combined with a cellphone. Although computer tablets date to the 1990s, they only succeeded commercially in 2010 with the introduction of Apple's iPad, which built on software developed for smartphones. The increasing screen size on some smartphones has made them the equivalent of smaller computer tablets, leading some to call them phablets.

Bibliography

See S. G. Nash, A History of Scientific Computing (1990); D. I. A. Cohen, Introduction to Computer Theory (2d ed. 1996); P. Norton, Peter Norton's Introduction to Computers (2d ed. 1996); A. W. Biermann, Great Ideas in Computer Science: A Gentle Introduction (2d ed. 1997); R. L. Oakman, The Computer Triangle: Hardware, Software, People (2d ed. 1997); R. Maran, Computers Simplified (4th ed. 1998); A. S. Tanenbaum and J. R. Goodman, Structured Computer Organization (4th ed. 1998).

The Columbia Encyclopedia, 6th ed. Copyright © 2014, The Columbia University Press.
