HDTV: Defining the Future of Broadcasting and Film?

High Definition Television, or HDTV, has been touted as the obvious next step in improving the realism of TV. It is, we are told, the "television system of the next millennium," which will bring a radical improvement in viewing enjoyment.

According to HDTV's proponents, the goal is to bring the picture quality of 35mm movies and the sound fidelity of compact discs into the living room. While the Japanese and the Europeans have been racing along the track of HDTV development for years, America is still in the starting blocks. Consequently, some fear this could be the first major new technology in the last hundred years in which the U.S. will not play a leading role. Others see HDTV as an opportunity: America's last chance to get back into the consumer electronics business.

HDTV has become a political issue, with consequences for the electronics industry, TV broadcasters, photographic interests, and the professional cinematographer. The arcane world of waveforms, recording formats, and electronic esoterica that is television can be imposing. But knowledge is power.

Lines, Frames and Fields

Let's consider some of the technological basics of TV. Television creates the illusion of motion by painting the face of your TV tube with many images, or frames (to borrow a term from film), each second. But unlike motion-picture frames, television frames arrive at the set a piece at a time; that is, they are built up serially. Each complete frame is broken into 525 horizontal lines. The lines are drawn left to right, from top to bottom, at a rate of about sixteen thousand lines, or thirty complete frames, per second.
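
As a quick sanity check on that line rate, here is a small Python sketch (not from the article; the constants are simply the nominal figures quoted above):

```python
# Nominal NTSC scan arithmetic, using the figures quoted above.
LINES_PER_FRAME = 525      # horizontal lines in one complete frame
FRAMES_PER_SECOND = 30     # complete frames painted each second

lines_per_second = LINES_PER_FRAME * FRAMES_PER_SECOND
print(lines_per_second)    # 15750 -- "about sixteen thousand" lines per second
```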

Actually, this simple description of television is a little too simple. If your TV screen merely displayed thirty frames per second, you would discern an annoying flicker as the screen became alternately light and dark. Indeed, to avoid this effect at the cinema, movie projectors are designed to flash each image twice, thereby boosting the flicker rate from 24 to 48 flashes per second. At this frequency, most people no longer sense the flicker, even if they still talk about going to "the flicks."

A similar trick was devised for television. Rather than generating thirty complete images every second, TV cameras produce sixty "half" images. Each "half" image, or field, consists of 262.5 lines spanning the full height of the screen. These are painted alternately to the odd and even lines on the tube, so that you do, in a sense, get thirty complete images per second.

This scheme, designed to boost the flicker frequency to 60 images per second, is called (for obvious reasons) interlacing. Of course, any given line on the screen is still refreshed only thirty times per second, an effect which many viewers can detect. (Horizontal edges in the picture are good places to look for this interline flicker.) Another problem with interlaced scanning is that small, moving details can "get lost" in the missing scan lines of a field, and seem to appear and disappear at random.
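
To make the odd/even bookkeeping concrete, here is a minimal Python sketch, assuming the 525-line, 60-field scheme described above (it ignores the half-line offset the real system uses to stagger the two fields):

```python
# Illustrative interlacing bookkeeping: two "half" images (fields) that
# together cover all 525 lines of a frame. Line numbers are 1-based and
# purely for illustration.
LINES_PER_FRAME = 525
FIELDS_PER_SECOND = 60

odd_field = list(range(1, LINES_PER_FRAME + 1, 2))    # lines 1, 3, 5, ... drawn in the first field
even_field = list(range(2, LINES_PER_FRAME + 1, 2))   # lines 2, 4, 6, ... drawn in the second field

print(len(odd_field), len(even_field))                 # 263 262 -- roughly 262.5 lines each
print(FIELDS_PER_SECOND // 2, "complete frames per second")   # 30
```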

Color Revolution

In 1953, the Federal Communications Commission adopted a scheme for adding color to American television. The overriding technical requirement of this scheme, known as NTSC (National Television System Committee), was that it be compatible with the existing black and white standard. The FCC was not only protecting the consumers huddled about their small-screen Sylvania and DuMont sets, but also the networks' investment in black and white production and transmission facilities. Any incompatible system, even in 1953, had a zero chance of acceptance, as the networks were not about to invest overnight in expensive new facilities to provide color programming to a non-existent audience.

NTSC was clever, and here's why: in principle, color images can be faked by combining three primary-color (red, green, blue) images. This is the way color film works: it is essentially three films in one. …