Color Television
Monochrome television involves a single signal: brightness, or luma, usually given the symbol Y. (Although more properly it's Y', a gamma-corrected value.) Color TV requires three times the information: red, green, and blue (RGB). Unfortunately, color TV was supposed to be broadcast using the same spectrum as monochrome, using the same bandwidth, so having three times the information was a bit of a problem.

Engineers at RCA developed a truly brilliant solution. They realized that the frequency spectrum of a television signal had gaps in it largely unoccupied by picture content. They encoded color information in a sine wave called a subcarrier and added it to the existing luma signal; the subcarrier's frequency fit within one of the empty gaps in monochrome's spectrum, so the luma information was largely unaffected by the new chroma information, and vice versa. The resulting signal remained compatible with existing monochrome receivers and fit in the same broadcast channels: they had managed to squeeze the quart of color information into the pint pot of existing monochrome broadcasting.

In this composite color system, the TV recovers color by comparing the amplitude and phase of the modulated subcarrier with an unmodulated reference signal. Differences in phase (the angle you can see on a vectorscope) determine the hue of the color; the amplitude (the distance of a color from the center of the vectorscope's display) indicates its saturation. The reference signal is transmitted on every scanline; it's the colorburst signal that you can see between the horizontal sync pulse and the left edge of the active picture.
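The vectorscope geometry described here is just polar coordinates: hue is the angle of the color-difference vector, saturation is its length. A minimal Python sketch (the function names are my own, not from any video API) makes the relationship concrete:

```python
import math

def uv_to_hue_sat(u, v):
    """Return (hue in degrees, saturation) for a pair of
    color-difference values, mirroring a vectorscope display:
    the angle of the (U, V) vector is hue, its length is saturation."""
    hue = math.degrees(math.atan2(v, u)) % 360
    sat = math.hypot(u, v)
    return hue, sat

def rotate(u, v, degrees):
    """Simulate a subcarrier phase error of the given angle."""
    r = math.radians(degrees)
    return (u * math.cos(r) - v * math.sin(r),
            u * math.sin(r) + v * math.cos(r))

# A phase error rotates the vector: hue changes, saturation doesn't.
hue, sat = uv_to_hue_sat(*rotate(0.5, 0.0, 30))
```

This is why phase errors on the subcarrier corrupt hue while leaving saturation alone.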

Color Recording
Broadcasting color is one thing. Recording it on tape, with all the instabilities and variability of electromechanical systems, is something else altogether. Many different approaches have been used, with differing benefits and side effects.
Direct Color
In the early days of color, the composite signal was recorded directly on tape, using 1-inch VTRs. Because the color is carried in the phase and amplitude of a high-frequency signal (3.58 MHz in most NTSC systems, 4.43 MHz in PAL), any timebase errors (minor variations in playback speed that change the apparent phase of the color signal) result in ruined color. Playing back direct color recordings requires timebase correction, using memory buffers to smooth out mechanically-induced jitter. Timebase correctors (TBCs) for direct color playback were very expensive and were essentially specific to the format and type of machine being used.
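To see why direct color is so fragile, note that a timing error of dt seconds shifts subcarrier phase by 360 × f_sc × dt degrees. A quick back-of-the-envelope Python sketch shows that a mere 10 nanoseconds of jitter already produces a clearly visible hue shift:

```python
def phase_error_deg(f_sc_hz, dt_s):
    """Phase shift (degrees) of a subcarrier at f_sc_hz caused by a
    playback timing error of dt_s seconds: 360 * f_sc * dt."""
    return 360.0 * f_sc_hz * dt_s

ntsc = phase_error_deg(3.579545e6, 10e-9)   # ~12.9 degrees of hue error
pal  = phase_error_deg(4.433619e6, 10e-9)   # ~16.0 degrees
```

Mechanical tape transports jitter by far more than tens of nanoseconds, which is why a TBC is mandatory for direct color playback.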
Color-Under
Heterodyne or color-under recording developed as a way to record and play back color without the need for the expensive TBC. Inside the VTR, the chroma signal is "heterodyned down" to a much lower frequency and recorded "under" the luma signal. The magic of heterodyning is that when the playback signal is "heterodyned up" again, most of the timebase errors cancel themselves out, which reproduces usable color. Color-under is used in ¾-inch, VHS, Video8, S-VHS, and Hi8 formats, among others.

TBC-free color comes at a price: the chroma resolution of the color-under signal is reduced, and the precise timebase of the color subcarrier becomes muddled. The muddled subcarrier makes it impossible to separate the luma and chroma information as accurately as with direct color, so color-under recordings are harder to process in downstream equipment such as proc amps, TBCs, and frame synchronizers, and almost invariably suffer from diminution of high-frequency details.
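The jitter-cancellation trick can be sketched as simple phase arithmetic. This toy Python model (my own simplification, not a real VTR circuit) tracks only phases, in degrees: because the up-converting oscillator is locked to the jittering off-tape signal, it acquires the same phase error the tape added, and the final subtraction cancels it:

```python
def play_back(chroma_phase, jitter):
    """Toy model of color-under phase arithmetic (degrees).
    Recording stores the difference between a stable oscillator
    and the chroma; on playback, tape jitter shifts the off-tape
    phase, but the up-converting oscillator tracks that same
    jitter, so the error cancels in the mix."""
    osc = 40.0                       # arbitrary oscillator phase
    recorded = osc - chroma_phase    # phase stored on tape
    off_tape = recorded + jitter     # jitter added during playback
    upconv_osc = osc + jitter        # oscillator locked to off-tape signal
    return upconv_osc - off_tape     # recovered chroma phase

recovered = play_back(123.0, 17.0)   # 123.0: jitter cancels out
```

Whatever value `jitter` takes, it appears in both terms of the final subtraction and drops out, which is exactly why color-under playback survives mechanical timebase errors that would wreck direct color.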
Component
In the mid-1980s, component recording arrived with the Hawkeye/Recam/M formats, Sony's Betacam, and Panasonic's MII. A major problem with composite video is that once the luma and chroma are mixed together, it's pretty much impossible to recover the separate luma and chroma signals completely intact. Although this is irrelevant for final broadcast, since analog broadcasting uses composite color, it makes manipulation of the image in post production difficult, and dubbing composite color signals across multiple generations, even with TBCs, results in considerable quality loss.

Instead of subcarrier-modulating the signal, Betacam and MII record the luma component and two color-difference components separately, in different parts of the video track. Because the color signal is never modulated on a phase-critical subcarrier, it isn't subject to hue-distorting timebase errors or the resolution limitations imposed by subcarrier modulation.

Color-difference components fall under the general nomenclature of "YUV", although there are several variants (Y'UV, Y'/R-Y/B-Y, Y'PRPB, Y'CRCB) depending on the exact format or signal connection being used. There's nothing mysterious about YUV color; it's a simple transformation of RGB signals by matrix multiplication. YUV signals offer several advantages over RGB in the rough-and-tumble world of analog recording and transmission.

Gain imbalances between RGB signals show up as very noticeable color casts in the reproduced picture. The same degree of imbalance between YUV signals appears as a slight change in overall brightness or a change in saturation of some colors. To see what gain imbalances do to images, follow these steps:
1. Exit FCP if it's running.
2. Install AJW's filters from the DVD included with this book: Drag the folder AJW's Filters (Media > Lesson 07) into your Mac's Plugins folder (Macintosh HD > Library > Application Support > Final Cut Pro System Support > Plugins, where "Macintosh HD" is the name of your system disk). If you're working on a shared system and don't have the permissions to drop the folder in the prescribed location, you can install it just for yourself in the Plugins folder in your home directory (YourHomeDirectory > Library > Preferences > Final Cut Pro User Data > Plugins). (FCP 1 through FCP 3 used different script locations; look in your User Manual for script installation instructions.) If the filters don't appear when you run FCP, make sure the AJW's Filters folder and its contents are both readable and writable. Change the permissions as necessary, and the filters should show up inside FCP.
3. Start FCP and load the DVStressTest3.tif clip into the Viewer.
4. Put the clip into a 720 x 480-pixel Timeline and set the playhead so that the clip shows up in the Canvas. Double-click the clip in the Timeline to load it in the Viewer, then select the Viewer's Filters tab.
5. Apply the Effects > Video Filters > AJW's Filters > Channel Balance [ajw] filter to the clip.
6. Set the Green / Cb Gain slider to 70%, simulating an amplitude imbalance.
7. Change the Color Space setting from RGB to YCrCb (YUV) and back again while looking at the results in the Canvas.
8. Play with other Gain settings for the various channels and watch what happens in RGB mode and in YUV mode.
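The matrix transformation behind all this is simple enough to write out. The following Python sketch uses the common BT.601 Y'PbPr coefficients (one of the several "YUV" variants; the exact constants differ between standards). It also shows why a gain imbalance is more forgiving in component form: a neutral gray has zero color-difference values, so a gain error on Pb or Pr leaves neutrals neutral, whereas the same error on G in RGB visibly tints them:

```python
def rgb_to_ypbpr(r, g, b):
    """BT.601 R'G'B' -> Y'PbPr: luma plus two color differences."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    pb = (b - y) / 1.772          # scaled B' - Y'
    pr = (r - y) / 1.402          # scaled R' - Y'
    return y, pb, pr

def ypbpr_to_rgb(y, pb, pr):
    """Inverse transform back to R'G'B'."""
    r = y + 1.402 * pr
    b = y + 1.772 * pb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

# Mid-gray: color-difference channels come out at zero, so scaling
# Pb or Pr by 70% changes nothing, while scaling G in RGB produces
# (0.5, 0.35, 0.5), a clearly magenta-tinted "gray".
gray = rgb_to_ypbpr(0.5, 0.5, 0.5)    # (0.5, 0.0, 0.0)
```

The forward and inverse functions are exact algebraic inverses of each other, which is the sense in which "there's nothing mysterious about YUV color."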
Timing differences between RGB signals appear as bright fringes of contrasting colors along the edges of objects, whereas in YUV the same amount of delay shows itself as laterally displaced colors with much milder edge effects. Follow these steps to see what timing differences do in RGB and YUV:
1. You already installed AJW's filters in the previous exercise, right?
2. Start FCP and load the DVStressTest3.tif clip.
3. Put the clip into a 720 x 480-pixel Timeline and set the playhead so that the clip shows up in the Canvas.
4. Double-click the clip in the Timeline to load it in the Viewer, then select the Viewer's Filters tab. If you're lazy like I am, FCP still has the clip loaded from the previous exercise. That'll do fine. Turn off or delete any filters already applied to the clip.
5. Apply the Effects > Video Filters > AJW's Filters > Channel Offset [ajw] filter to the clip.
6. Set the Green / Cb Horizontal slider to 10, simulating a timing difference.
7. Change the Color Space setting from RGB to YCrCb (YUV) and back again while looking at the results in the Canvas.
8. Play with other Horizontal settings for the various channels and watch what happens in RGB mode and in YUV mode.
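What the Channel Offset filter simulates can be modeled on a single scanline. In this Python sketch (a toy model, not FCP's implementation), delaying the green channel of a white edge leaves samples where R and B are already at full level but G has not yet arrived, producing exactly the bright magenta fringe described above:

```python
def delay(channel, samples):
    """Delay a scanline by `samples` positions, padding with black."""
    return [0.0] * samples + channel[:len(channel) - samples]

edge = [0.0] * 4 + [1.0] * 4          # a hard black-to-white edge
r, g, b = edge, delay(edge, 2), edge  # G arrives two samples late

# At positions 4 and 5, R = B = 1.0 but G = 0.0: a magenta fringe.
fringe = [(r[i], g[i], b[i]) for i in (4, 5)]
```

Apply the same two-sample delay to a color-difference channel instead, and the luma edge stays perfectly aligned; only the color is displaced sideways, which the eye tolerates far better.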
Finally, as mentioned before, the human eye is less sensitive to color resolution than to brightness resolution. By transcoding color into YUV components, it's possible to reduce the bandwidth taken by the color components considerably without markedly affecting picture quality (see Lesson 9).
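That bandwidth reduction is easy to quantify. This Python sketch (my own helper, using the usual meanings of the 4:4:4, 4:2:2, and 4:2:0 sampling structures) compares the uncompressed size of an 8-bit 720 x 480 frame:

```python
def bytes_per_frame(width, height, sampling):
    """Uncompressed 8-bit frame size for a given sampling structure.
    Relative to luma, the two chroma channels together carry 2x the
    samples in 4:4:4, 1x in 4:2:2, and 0.5x in 4:2:0."""
    luma = width * height
    chroma_fraction = {"4:4:4": 2.0, "4:2:2": 1.0, "4:2:0": 0.5}[sampling]
    return int(luma * (1 + chroma_fraction))

full = bytes_per_frame(720, 480, "4:4:4")   # 1,036,800 bytes
sub  = bytes_per_frame(720, 480, "4:2:2")   #   691,200 bytes
```

Halving the chroma's horizontal resolution (4:2:2) cuts the frame to two-thirds of full-resolution size, and the picture looks nearly identical to the eye.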