
Storage and Conveyance of Colour Signals – Encoding Colour Signals

Prior to the advent of digital technology, considerable effort was brought to bear to reduce these data quantities, and the processes developed then continue to be used today. Once the signals have been quantised, compression technology (Watkinson, 1999) is used to reduce very significantly the quantity of data required. In television and cinema terms, the frames are conveyed at a minimum rate of 25 or 30 times per second in current systems, with higher rates being considered for projected systems, producing very high data rates. Large quantities of data are expensive to store and conveyance capacity for high data rates is limited; there is thus a strong incentive to evolve the means to reduce the data required to describe an electronically generated image.

14.3 System Compatibility and Retention of Colour Balance

As noted in the Introduction to this chapter, it was essential that any colour reproduction system should be compatible with the monochrome photographic and television systems already in widespread use. Furthermore, the colour balance of the system should not change with differential change in the gain of the three paths carrying the colour signals.

14.3.1 The Luminance Signal

The essence of a monochrome system is obtaining an electrical signal derived from a camera whose optical analysis system has a spectral sensitivity characteristic which broadly follows the luminosity function of the eye, as illustrated by the Vλ curve in Figure 14.3.

[Figure 14.3 The photopic response of the eye (Vλ) and a typical ideal green camera spectral sensitivity characteristic (ITU/sRGB green primary), plotted as relative response against wavelength (nm).]

Thus, what is required from a colour reproduction system is a means of deriving a luminance signal from the three R, G and B signals which emulates the luminance response in Figure 14.3. Intuitively, consideration of the colour camera spectral sensitivities and the luminosity function of the eye indicates that, as the green characteristic is closest to the luminosity function, this would form the basis of the luminance signal, with the addition of diminishing contributions from the red and blue characteristics respectively. However, we need a method of calculating precisely the values of the contributions, or weighting factors, of the RGB signals derived from the three spectral sensitivities to match the luminance response.

It may be recalled from Chapter 4 that the 'Y' characteristic of the CIE XYZ colour-measuring system was made to follow the luminosity function of the eye in order that the value of Y always measured the luminance of a colour. Thus, by using the inverse of the matrix procedures outlined in Appendix F to derive the XYZ characteristics from the camera RGB characteristics, we can use the resulting factors of Y in the matrix to establish the RGB weighting factors. In Worksheet 14, Matrices 1 and 6 illustrate the procedure for obtaining this inverse matrix and, following the selection of the appropriate primaries, we can use the values derived there to establish the values of the RGB weighting factors required.

From Worksheet 14, after selecting the ITU/sRGB primaries, which are used in both television and photography, the Y row of Matrix 6 provides the following values for the RGB luminance weighting factors: LR = 0.2126, LG = 0.7152 and LB = 0.0722.
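The matrix derivation described above can be sketched in code. The following Python fragment is an illustration of the principle rather than the Worksheet 14 procedure itself: it derives the luminance weighting factors directly from the ITU/sRGB primary chromaticities and the D65 white point. The function name and the use of Cramer's rule are choices made here, not taken from the text.

```python
# Sketch: derive the RGB luminance weighting factors from the ITU/sRGB
# primary chromaticities and the D65 white point. Each primary (x, y)
# contributes a column (x/y, 1, z/y) to the unscaled RGB-to-XYZ matrix P;
# solving P @ s = white_XYZ gives the scale factors s, and because the
# middle (Y) row of P is all ones, s is itself the luminance row.

def luminance_weights(primaries, white):
    cols = [(x / y, 1.0, (1 - x - y) / y) for x, y in primaries]
    xw, yw = white
    w = (xw / yw, 1.0, (1 - xw - yw) / yw)

    # 3x3 determinant with a, b, c as the matrix columns
    def det(a, b, c):
        return (a[0] * (b[1] * c[2] - b[2] * c[1])
                - b[0] * (a[1] * c[2] - a[2] * c[1])
                + c[0] * (a[1] * b[2] - a[2] * b[1]))

    d = det(*cols)
    # Cramer's rule: replace column i by the white vector
    return [det(*(cols[:i] + [w] + cols[i + 1:])) / d for i in range(3)]

# ITU-R BT.709 / sRGB primaries and the D65 white point
lr, lg, lb = luminance_weights([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
                               (0.3127, 0.3290))
print(round(lr, 4), round(lg, 4), round(lb, 4))  # -> 0.2126 0.7152 0.0722
```

The three factors necessarily sum to unity, since the Y row of the RGB-to-XYZ matrix must give Y = 1 for white.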
Thus, the luminance signal, designated by the symbol Y, can be derived as follows:

Y = LR·R + LG·G + LB·B = 0.2126R + 0.7152G + 0.0722B

[Figure 14.4 Illustration shows that summing the weighted RGB curves produces a true luminance response; the weighted R, G and B curves, the resulting Y curve and the CIE y(λ) curve are plotted as relative response against wavelength (nm).]

In Worksheet 14, the luminance weighting factors are applied to the RGB camera spectral sensitivities to provide the weighted curves in Figure 14.4. These curves are then added in Worksheet 14, Table 1, to show that they sum to produce the Y curve. The CIE y(λ) curve is also shown to confirm that the Y curve is of identical shape. This signal would be used by a monochrome display to produce results identical to those derived from a monochrome camera whose optical response matched the Vλ curve.

Systems using different primaries would need to use different appropriate weighting factors for the RGB signals comprising the luminance signal. These weighting factors can be determined from Matrix 6 using the selection buttons to select the appropriate primaries or by copying the chromaticity coordinates from the Primaries Worksheet to the relevant cell range in Worksheet 14.

14.3.1.1 Gamma Correction

As explained in Section 13.4, the linear RGB signals are gamma corrected at source and the luminance signal is no exception to this requirement; so, as in a monochrome system, gamma correction would also be applied. Thus the gamma-corrected luminance signal is:

Y^(1/γ) = (0.2126R + 0.7152G + 0.0722B)^(1/γ)

Now, for the sake of consistency, we would normally designate the gamma-corrected luminance signal with a prime in the same manner as used to designate gamma-corrected RGB signals; however, Y′ has historically been used to describe the 'luma' signal, which is defined in terms of the addition of the luminance-weighted R′G′B′ signals; thus:

Y′ = 0.2126R′ + 0.7152G′ + 0.0722B′

It is perhaps not surprising that the use of both of these signals in various colour reproduction systems has led to much confusion, so a few words of clarification would not go amiss.
Traditionally, probably for simplicity and overlooking the problems which can occur in encoding and decoding by its use, the Y′ signal has been almost universally used in reproduction systems and was loosely and incorrectly referred to as the luminance signal. (We shall investigate the problems referred to above in Section 14.6.) Poynton (2012) has been at pains to clarify the situation by introducing the term 'luma' to describe the Y′ signal, a recommendation we have adopted throughout this book.

The use of the luma signal can lead to compromises in the quality of the reproduced image, particularly with regard to the lack of detail in saturated areas of the image. This compromise has been recognised from the beginnings of electronic colour reproduction, but it is only recently that newly defined systems offer the option of using the luminance Y^(1/γ) signal in preference to the luma Y′ signal. In these systems, Y^(1/γ) is sometimes designated Y′C, where the subscript 'C' is an abbreviation of 'constant luminance', to differentiate it from Y′, which is not a constant luminance signal, as we shall see subsequently.
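The distinction between luma and gamma-corrected luminance is easily demonstrated numerically. The sketch below assumes a simple power-law transfer characteristic with exponent 0.45 (γ ≈ 2.22) purely for illustration; real systems use piecewise characteristics, and the function names are inventions of this example.

```python
# Sketch: luma Y' (weighted sum of gamma-corrected R'G'B') versus the
# gamma-corrected luminance Y^(1/gamma) for the ITU/sRGB weights.

LR, LG, LB = 0.2126, 0.7152, 0.0722
EXP = 0.45  # illustrative 1/gamma

def luma(r, g, b):
    # Y': weighted sum of the gamma-corrected signals
    return LR * r**EXP + LG * g**EXP + LB * b**EXP

def gc_luminance(r, g, b):
    # Y^(1/gamma): gamma-corrected weighted sum of the linear signals
    return (LR * r + LG * g + LB * b) ** EXP

# For neutrals the two agree exactly ...
assert abs(luma(0.5, 0.5, 0.5) - gc_luminance(0.5, 0.5, 0.5)) < 1e-9
# ... but for saturated red they diverge markedly
print(round(luma(1, 0, 0), 3), round(gc_luminance(1, 0, 0), 3))  # -> 0.213 0.498
```

The divergence on saturated colours is the root of the constant luminance failure examined in Section 14.8.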

14.3.2 The Complementary Colour Difference Signals

Having derived a luminance signal for compatibility purposes, in order to avoid duplication of data it will also be used as one of the principal signals for the colour reproduction system. But what are the requirements for the two signals to complement the luminance signal?

We have seen that in order to describe a colour, three values are required; these may be values of red, green and blue; values of luminance, hue and saturation; or values of luminance and chromaticity, where, as we have seen, specifying chromaticity requires two values. Since we have already derived a luminance signal, two further signals are required, which ideally would describe the chromaticity of the scene. The principal criterion is that any changes in the relative amplitudes of the three signals as they pass through the signal chain do not change the colour balance of the reproduced image.

14.3.2.1 The Linear Case

We will first derive some basic properties of a linear-based system before addressing a system using gamma-corrected signals. As the luminance signal is comprised primarily of the green signal, the two other signals are generated by subtracting the luminance signal from the red and blue signals respectively. Thus, using the ITU/sRGB primaries to two decimal places:

R − Y = 1R − 0.21R − 0.72G − 0.07B = +0.79R − 0.72G − 0.07B

and

B − Y = 1B − 0.21R − 0.72G − 0.07B = −0.21R − 0.72G + 0.93B

Since the luminance signal is subtracted from the colour signals, these new signals are referred to as colour difference signals. Thus, the three new signals are Y, R − Y and B − Y, and we shall explore their properties in order to show that they meet the requirements listed in the Introduction to this chapter.

Recovering the RGB Signals

Firstly, we need to show how we can extract the original RGB signals from the luminance and colour difference signals.
The R and B signals are recovered by direct addition of the luminance signal to each of the colour difference signals respectively. To recover the G signal:

Y = 0.21R + 0.72G + 0.07B    (14.1)

and also

Y = 0.21Y + 0.72Y + 0.07Y    (14.2)

Subtracting (14.2) from (14.1):

0 = 0.21(R − Y) + 0.72(G − Y) + 0.07(B − Y)

Thus

G − Y = −(0.21/0.72)(R − Y) − (0.07/0.72)(B − Y)

and

G = Y − 0.30(R − Y) − 0.10(B − Y)    (14.3)

Thus

G = 1.40Y − 0.30R − 0.10B    (14.4)

Colour Difference Signal Amplitudes

When the camera is scanning a white in the scene:

R = G = B = 1 and Y = 1

Thus

R − Y = 0 and B − Y = 0

Furthermore, if R = G = B = 0.5, then Y = 0.5 and again the colour difference signals are zero. In the general case, whenever the camera is scanning a white or neutral grey in the scene, the colour difference signals will be zero.

Furthermore, it can be shown that the amplitude of the colour difference signals increases with the saturation of the colour or, conversely, diminishes as the saturation approaches zero. This property was very important in the analogue days of colour reproduction, since for much of the time an average scene has low levels of saturation, and thus the corresponding low levels of the colour difference signals were less likely to cause mutual interference in systems of encoding where they shared frequency bands with the luminance signal.

It can also be shown by a few examples that the colour difference signals increase in level with increasing luminance; thus both luminance and saturation cause a rise in the levels of the colour difference signals. It may be recalled that in Section 2.4 this property of colour is defined as chroma, and in consequence, when the colour difference signals are taken together they are usually referred to as chrominance signals. A reproduction system comprising signals in a luminance and chrominance format is often described as being a YC format system and, with the exception of the initial electronic system for cinematography, all current and proposed photographic and television systems utilise this format in one way or another.
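The recovery procedure above can be confirmed with a short round-trip sketch, using the two-decimal weights from the text; the function names are illustrative only.

```python
# Sketch: encode linear RGB to (Y, R-Y, B-Y) and recover the RGB signals.

def encode(r, g, b):
    y = 0.21 * r + 0.72 * g + 0.07 * b
    return y, r - y, b - y

def decode(y, ry, by):
    r = ry + y                                        # direct addition recovers R
    b = by + y                                        # and B
    g = y - (0.21 / 0.72) * ry - (0.07 / 0.72) * by   # equation (14.3)
    return r, g, b

rgb = (0.6, 0.4, 0.5)
recovered = decode(*encode(*rgb))
assert all(abs(a - b) < 1e-12 for a, b in zip(rgb, recovered))
```

The recovery is exact (to floating-point precision) because the three weights sum to exactly 1.00, so the G − Y identity holds without residue.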
Colour difference signals are often plotted with the B–Y signal on the x-axis and the R–Y signal on the y-axis as illustrated in Figure 14.5, for the additive and subtractive primaries, based on the figures in Table 2 of Worksheet 14. The resulting vector length of the colour is proportional to both the saturation and the luminance of the signal, as can be seen from the inner shape, which shows the same colours at 50% amplitude but the same saturation. (If these were chromaticity values the vectors for each colour would be the same length for both sets of levels.)
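The proportionality of vector length to signal level can be checked numerically. This sketch uses the two-decimal weights from the text; the function name and the choice of red as the example are assumptions of this illustration.

```python
# Sketch: chrominance coordinates (B-Y, R-Y) for the red primary at 100%
# and 50% signal levels; the vector length halves with the level even
# though the saturation is unchanged.
import math

def chrominance(r, g, b):
    y = 0.21 * r + 0.72 * g + 0.07 * b
    return b - y, r - y          # (B-Y, R-Y), plotted as (x, y) in Figure 14.5

for level in (1.0, 0.5):
    bx, ry = chrominance(level, 0.0, 0.0)   # red at this level
    print(level, round(math.hypot(bx, ry), 3))  # -> 1.0 0.817 / 0.5 0.409
```

A chromaticity-based pair of signals would, by contrast, give the same vector length at both levels.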

[Figure 14.5 Chrominance values for each set of primaries at both 50% and 100% levels, plotted with B−Y on the x-axis and R−Y on the y-axis.]

Changes in Channel Gain

Any differential changes in gain between the three channels carrying the YC signals will not change the colour balance of the image, since on a grey scale the colour difference signals are zero. This property satisfies our criterion that the colour balance is not modified by changes in gain between channels. Thus the critical neutral colours in the scene will not change; clearly, however, a differential change in gain between the two colour difference signals will change the chromaticity of the non-neutral colours in the scene.

14.4 A Simple Constant Luminance Encoding System

In Section 14.5, we will be looking in a little more detail at a characteristic of the eye, first alluded to in Section 1.4, whereby the eye is much more sensitive to changes in luminance than it is to changes in chrominance. This inspired the early workers responsible for defining colour reproduction systems to engineer a system where, when it became necessary to trade off unwanted interference or noise in the signals of a system, more of the interference or noise was directed into the chrominance channels. This strategy will operate at maximum efficiency only if none of the luminance information is carried in the chrominance channels. The term 'constant luminance' is used to describe a system whereby all the luminance information is constantly carried in the luminance channel; the converse of this statement is that there are no circumstances in which the chrominance channels carry any luminance information.
A consequence of this property is that the addition of any signal to the colour difference signals, such as noise or interference, will change only the chromaticity and not the luminance of the image, making it far less visible than it would otherwise be.

As an example of how this operates, let us assume we commence with a colour C represented by:

C = 0.6R + 0.4G + 0.5B

Thus

Y = 0.21R + 0.72G + 0.07B = 0.21 × 0.6 + 0.72 × 0.4 + 0.07 × 0.5 = 0.126 + 0.288 + 0.035 = 0.449

and

R − Y = 0.6 − 0.449 = 0.151
B − Y = 0.5 − 0.449 = 0.051

During the passage between the camera and the display, let us assume that a noise signal of level 0.1 is added to the colour difference signals. Then the new values of the signals will be:

Y = 0.449
R − Y = 0.251
B − Y = 0.151

Therefore

R = (R − Y) + Y = 0.700
B = (B − Y) + Y = 0.600

And, using equation (14.3) derived in Section 14.3:

G = Y − 0.30(R − Y) − 0.10(B − Y) = 0.449 − 0.30 × 0.251 − 0.10 × 0.151 = 0.3594

Thus the new luminance will be:

Y = 0.21R + 0.72G + 0.07B = 0.21 × 0.70 + 0.72 × 0.3594 + 0.07 × 0.600 = 0.448 (an error of 0.001 due to rounding to two decimal places above)

Ignoring the rounding error, this is the same luminance level we started with, confirming that, as long as the luminance signal carries the full luminance information, the constant luminance system is immune to error signals in the colour difference signals causing errors in the display of the luminance information.
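Repeating the worked example in code, with the exact coefficient ratios rather than the rounded 0.30 and 0.10 factors, shows the immunity is exact and the residual 0.001 in the hand calculation is purely a rounding artefact. The function names are inventions of this sketch.

```python
# Sketch: noise added to both colour difference signals leaves the
# displayed luminance unchanged in a constant luminance system.

def to_ydiff(r, g, b):
    y = 0.21 * r + 0.72 * g + 0.07 * b
    return y, r - y, b - y

def displayed_luminance(y, ry, by):
    r, b = ry + y, by + y
    g = y - (0.21 / 0.72) * ry - (0.07 / 0.72) * by   # exact form of (14.3)
    return 0.21 * r + 0.72 * g + 0.07 * b

y, ry, by = to_ydiff(0.6, 0.4, 0.5)          # the colour C from the text
noisy_luminance = displayed_luminance(y, ry + 0.1, by + 0.1)
assert abs(noisy_luminance - y) < 1e-12      # luminance is unaffected
```

Algebraically, the noise terms cancel because 0.21 + 0.07 equals the weighted sum 0.72 × (0.21/0.72 + 0.07/0.72).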

14.5 Exploiting the Spatial Characteristics of the Eye

In Section 1.4, where the characteristics of the eye relevant to colour reproduction were listed, reference was made to the spatial response of the eye, and in Section 8.4 the acuity of the eye was defined and the relationship established between acuity and the number of pixels required in order that the acuity of the eye is not compromised. This work was related to the maximum acuity of the eye, which early experiments indicated applies to changes in the luminance of objects in the scene. It was found that if the luminance is kept close to constant but the chromaticities of the objects are changed, the level of acuity diminishes. If two sets of saturated complementary colours from opposite sides of the chromaticity diagram are placed adjacent, it is possible to evaluate in qualitative terms how the colour acuity of the eye is affected.

[Figure 14.6 Resolution wedges indicate the different acuities of the eye to luminance and chrominance.]

Figure 14.6 illustrates three equal frequency wedges: for luminance, for reddish orange and bluish cyan, and for yellow-green and purple respectively. The darkest colour of each pair was adjusted for maximum colourfulness and the lightness of the complementary colour in each wedge was subjectively adjusted for equal lightness. If this image is viewed at an appropriate distance, it is found that at the distance where the luminance bars appear to merge halfway down the wedge, the colours merge considerably higher up the wedges. Detailed measurements indicate that, in broad terms, the orange to cyan colours merge at half the resolution of the luminance and the yellow-green to purple colours merge at about a third of the resolution of the luminance.
Using the same rationale as used in Section 14.2 to establish the amount of data required to be included in the signal to match the luminance acuity of the eye, it will be clear that only half the amount or less will be required for the chrominance information. Thus, the colour difference signals may be filtered or subsampled to a fraction of the information capacity

of the luminance signal without impairing the reproduced image. In pixel terms, the colour difference values of only alternate pixels, in both the horizontal and vertical directions, need to be included in the composite signal. Different reproduction systems, depending upon the level of performance required and the limitations in their channel capacity, select different fractions of the luminance signal data rate for the chrominance signals in both the horizontal and vertical directions.

Since the fractions chosen in digital systems are always factors of 1, 2 or 4, compared with luminance, in the digital domain the luminance signal sampling rate is always considered to be at a base rate of 4 and the colour difference signals at a base rate of 4, 2 or 1. A nomenclature has evolved to describe the variants of subsampling used for the colour difference signals, which is written in the form 4:4:4. The first number indicates the luminance sampling rate and the next two numbers indicate that the colour difference signals are sampled at the same rate as the luminance in both the horizontal and vertical directions; thus, 4:4:4 describes the signals as they are following the matrixing of the RGB or R′G′B′ signals to the Y, R−Y and B−Y signals. The form 4:2:2 indicates that the colour difference signals are sampled at half the rate of the luminance signals in the horizontal direction only; thus the composite data rate is reduced by a third. The form 4:2:0 indicates sampling at half the luminance rate in both the horizontal and vertical directions; thus the composite data rate is reduced by a half. Both the above formats will generally produce images with no perceptual impairment. Finally, the form 4:1:1 indicates that the colour difference signals are sampled at a quarter of the luminance rate in the horizontal direction only.
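The data-rate savings quoted above can be checked by counting samples over a reference block of 4 × 2 pixels, for which luminance contributes 8 samples. The sample counts per scheme are standard; the function and block size are choices of this sketch.

```python
# Sketch: composite sample count per 4x2 pixel block for the common
# subsampling schemes (Y plus two colour difference signals).

def composite_rate(scheme):
    # chroma samples per colour difference signal in a 4x2 block
    chroma = {"4:4:4": 8,   # full rate both directions
              "4:2:2": 4,   # half rate horizontally
              "4:2:0": 2,   # half rate both directions
              "4:1:1": 2}[scheme]   # quarter rate horizontally
    return 8 + 2 * chroma

full = composite_rate("4:4:4")               # 24 samples
for s in ("4:2:2", "4:2:0", "4:1:1"):
    saving = 1 - composite_rate(s) / full
    print(s, round(saving, 3))               # -> 0.333, 0.5, 0.5
```

Note that 4:2:0 and 4:1:1 carry the same total chroma data; they differ only in how the samples are distributed between the horizontal and vertical directions.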
Such a low chrominance sampling rate is marginal in terms of not affecting the quality of the perceived image and so is not generally used in quality reproduction. In the early analogue systems, where the multiplexing parameters were more stringent, in order to minimise interference or crosstalk between channels, different fractions were sometimes applied to different versions of the colour difference signals. These variants are briefly reviewed in Chapter 17.

14.6 A Practical Constant Luminance System

As we saw in Chapter 13, for a number of reasons the signals from the camera are gamma corrected, and as a consequence the simple system described in Section 14.4 becomes somewhat more complicated to implement. The gamma-corrected versions of the components of the YC signal defined earlier are:

Y^(1/γ), R′ − Y^(1/γ), B′ − Y^(1/γ)

In what follows, the various configurations of the variants of the YC system will be illustrated in schematic form.

14.6.1 A Constant Luminance Camera

In a camera designed for constant luminance operation, the pertinent processes required between the derivation of the linear RGB signals from the image sensors and the YC output of the camera are illustrated in Figure 14.7.

[Figure 14.7 Constant luminance camera encoder: the linear RGB signals feed the luminance matrix to form Y, which is gamma corrected to Y^(1/γ); the gamma-corrected R′ and B′ signals feed subtractive matrices to form a(R′ − Y^(1/γ)) and b(B′ − Y^(1/γ)), which are filtered before all three signals enter the multiplexer to produce the YC_CL output.]

It will be noted that the luminance signal is formed from the linear RGB signals before gamma correction, ensuring that it truly represents the luminance of the scene and emulates the signal from a well-designed monochrome camera.

The colour difference signals are formed by simple subtractive matrices which usually also contain scaling factors 'a' and 'b' for the R′ − Y^(1/γ) and B′ − Y^(1/γ) signals respectively. These scaling factors vary according to the particular colour system in use and are designed to reduce the amplitude of the colour difference signals, which in peak-to-peak terms would otherwise exceed the maximum value of the luminance signal, so ensuring they do not exceed the signal level capacity of the multiplexer.(1) Since the constant luminance colour difference signals are not symmetrical around zero level, as are the non-constant luminance colour difference versions, where it is important in encoding to retain a degree of polarity symmetry, the weighting factors for the positive and negative excursions may differ in order to make the signals symmetrical. This procedure is explained in detail in Section 20.4.2.4.

Depending upon whether subsampling is used, the colour difference matrices may be followed by filters which reduce the information content in the manner described in the previous section to produce signals appropriate to the sampling standards of the particular colour reproduction system. The multiplexer does not change the colour content of the signals in any way and the technical description of its operation is therefore beyond the scope of this book.
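The encoder processes of Figure 14.7, together with their inverse in the display, can be sketched as a round trip. The 0.45 exponent and the scaling factors a = b = 0.5 are illustrative assumptions of this example, not values from the text, and the function names are invented here.

```python
# Sketch of the constant luminance chain: the luminance matrix precedes
# gamma correction in the encoder; the decoder de-gammas Y, R and B and
# recovers G with a linear green matrix.

LR, LG, LB = 0.2126, 0.7152, 0.0722
EXP, A, B_SCALE = 0.45, 0.5, 0.5   # illustrative 1/gamma and scaling factors

def cl_encode(r, g, b):
    y_gc = (LR * r + LG * g + LB * b) ** EXP        # Y^(1/gamma) from linear RGB
    return y_gc, A * (r**EXP - y_gc), B_SCALE * (b**EXP - y_gc)

def cl_decode(y_gc, cr, cb):
    r = (cr / A + y_gc) ** (1 / EXP)                # de-gamma after red matrix
    b = (cb / B_SCALE + y_gc) ** (1 / EXP)          # de-gamma after blue matrix
    y = y_gc ** (1 / EXP)                           # linear luminance
    g = (y - LR * r - LB * b) / LG                  # linear green matrix
    return r, g, b

rgb = (0.6, 0.4, 0.5)
assert all(abs(p - q) < 1e-9 for p, q in zip(rgb, cl_decode(*cl_encode(*rgb))))
```

Because the green recovery operates on linear signals, the round trip is exact; any error added to cr or cb perturbs chromaticity only, as the text's worked example shows.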
The output of the multiplexer is a single signal in YC format, sometimes with a subscript to indicate whether it is a constant luminance or non-constant luminance signal.

(1) The multiplexer is a device that utilises the structure of the luminance and colour difference signals in a manner that enables all three signals to be combined and subsequently separated with the minimum of interference between the signals.

14.6.2 A Constant Luminance Display

In Figure 14.8, the YC_CL signal is fed to the de-multiplexer, not described here, which outputs the Y^(1/γ), a(R′ − Y^(1/γ)) and b(B′ − Y^(1/γ)) signals. The luminance signal is added to the scaling-corrected colour difference signals in the red and blue matrices respectively to recover the

[Figure 14.8 Constant luminance display decoder: the de-multiplexed Y^(1/γ), a(R′ − Y^(1/γ)) and b(B′ − Y^(1/γ)) signals feed the red and blue matrices and the de-gamma circuits, and the green linear matrix recovers G for the linear screen.]

gamma-corrected R′ and B′ signals, which, together with the Y^(1/γ) signal, are fed to the de-gamma circuits with a characteristic of V_out = V_in^γ, thus producing the linear Y, R and B signals respectively. These linear signals are then used by the green matrix to recover the G signal. The three linear signals are then fed to the linear screen.

It should be remembered that, following the demise of the CRT, most displays operate in a linear fashion and, though often equipped with inverse gamma correctors, do so only in order to complement the gamma correctors inserted at source to emulate the legacy CRT displays. This section has described the configuration of a constant luminance system designed with the linear screen in mind; in these circumstances, the gamma and de-gamma elements are more likely to take on the role of 'perceptibly uniform coding' elements, where the encoding and decoding characteristics are fully complementary, as described in Section 13.6.

Other approaches to a constant luminance system are designed to use legacy equipment, which significantly complicates the resulting configuration. In the case of a legacy camera, linear RGB signals are not available to the encoder, and legacy display devices have built-in gamma circuits to emulate displays based upon the CRT. Such an approach therefore requires a plethora of gamma and gamma corrector circuits to provide the correct signals before and after multiplexing respectively.

One of the principal advantages of the constant luminance system is that since all the high resolution luminance information is carried in the Y^(1/γ) signal, at maximum sample rate, no high resolution luminance information is lost.
This is in contrast to the non-constant luminance system described in the next section, where, as some of the high resolution luminance information in saturated colours is carried in the chrominance signals, it is removed by the chrominance filters in non-4:4:4 systems.

14.7 A Non-Constant Luminance System

The essence of a non-constant luminance system is the use of the luma signal, Y′, comprising the addition of the gamma-corrected R′G′B′ signals, rather than the true luminance signal Y^(1/γ) derived from the addition of the linear RGB signals.

[Figure 14.9 A non-constant luminance camera encoder: the gamma-corrected R′G′B′ signals feed the luminance matrix to form the luma signal Y′, and subtractive matrices and filters form a(R′ − Y′) and b(B′ − Y′) before multiplexing to produce the YC_NCL output.]

14.7.1 A Non-Constant Luminance Camera Encoder

The non-constant luminance encoder is very similar in the type and number of processes it contains to the constant luminance encoder, as a comparison of Figure 14.9 with Figure 14.7 illustrates. The only difference is that the matrix for the Y signal is positioned following rather than preceding the gamma correctors. The output from the multiplexer is designated YC_NCL to differentiate it from the YC_CL signal of the constant luminance system.

14.7.2 A Non-Constant Luminance Display Decoder

A schematic diagram of a non-constant luminance display decoder is illustrated in Figure 14.10, and a comparison with the constant luminance decoder shown in Figure 14.8 illustrates why the non-constant luminance approach has been the de facto method adopted by virtually all colour reproduction systems to date.

[Figure 14.10 A non-constant luminance display decoder: the de-multiplexed Y′, a(R′ − Y′) and b(B′ − Y′) signals feed the red, green and blue matrices to recover R′, G′ and B′ for the gamma display.]

Because the CRT was the only practical way of displaying electronically generated images up to the turn of the century, and its nonlinear characteristics

negated the requirement for gamma circuits in the display, these expensive circuits of the time were unnecessary. In consequence, the decoders for television receivers and computer displays were significantly simpler and cheaper than the alternative for constant luminance signals. Such a system has served the photographic and television industries well. As can be seen from Figures 14.9 and 14.10, the signals delivered to the gamma display are apparently not compromised and good-quality pictures generally result. However, under certain conditions artefacts are introduced, and these are detailed in the next section.

14.8 The Ramifications of the Failure of Constant Luminance

As we have seen, constant luminance fails because the Y′ signal carries all the luminance information only on neutral colours; for other colours, increasing levels of saturation lead to an increasing percentage of the luminance information being carried by the colour difference signals; conversely, the Y′ signal diminishes in level with increasing saturation. Table 14.2, generated in Worksheet 14, gives the values of Y′ and Y^(1/γ) for primary and complementary colours at levels of 50% and 100%, and the ΔL column illustrates the level of the inconstancy.

14.8.1 Loss of Compatibility with Monochrome Systems

It may be recalled that one of the important criteria for a colour reproduction system in the early days was compatibility with the large number of monochrome systems already in use. However, it can be seen from Table 14.2 that the use of the Y′ signal for luminance, though accurate for neutral colours, will result in increasing error as the level of the saturation of the colours in the scene increases. Thus, for colours of increasing saturation, monochrome displays will render the scene with increasingly diminished levels of luminance compared with the correct value.
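The ΔL values of Table 14.2 can be reproduced directly. The grey entry (0.5 raised to the transfer exponent gives 0.732) implies an overall power-law exponent of 0.45 (γ ≈ 2.22), which this sketch assumes; the function name is an invention of this example.

```python
# Sketch: reproducing the Y^(1/gamma), Y' and delta-L entries of Table 14.2.

LR, LG, LB = 0.2126, 0.7152, 0.0722
EXP = 0.45  # inferred from the grey row: 0.5**0.45 = 0.732

def table_row(r, g, b):
    y_gc = (LR * r + LG * g + LB * b) ** EXP             # constant luminance
    y_luma = LR * r**EXP + LG * g**EXP + LB * b**EXP     # luma Y'
    return round(y_gc, 3), round(y_luma, 3), round(y_gc - y_luma, 3)

print(table_row(1.0, 0.0, 0.0))   # red at 100% -> (0.498, 0.213, 0.286)
print(table_row(0.5, 0.0, 0.0))   # red at 50%  -> (0.365, 0.156, 0.209)
```

Note that for 100% primaries the Y′ column equals the linear Y column, since each gamma-corrected signal is either 0 or 1; the inconstancy ΔL is entirely the luminance information displaced into the colour difference signals.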
Table 14.2 The loss of constant luminance as a result of using Y′

Colour    R      G      B      Y      Y^(1/γ)  Y′     ΔL
White     1.000  1.000  1.000  1.000  1.000    1.000  0.000
Red       1.000  0.000  0.000  0.213  0.498    0.213  0.286
Magenta   1.000  0.000  1.000  0.285  0.568    0.285  0.283
Blue      0.000  0.000  1.000  0.072  0.306    0.072  0.234
Cyan      0.000  1.000  1.000  0.787  0.898    0.787  0.111
Green     0.000  1.000  0.000  0.715  0.860    0.715  0.145
Yellow    1.000  1.000  0.000  0.928  0.967    0.928  0.039
Grey      0.500  0.500  0.500  0.500  0.732    0.732  0.000
Red       0.500  0.000  0.000  0.106  0.365    0.156  0.209
Magenta   0.500  0.000  0.500  0.142  0.416    0.209  0.207
Blue      0.000  0.000  0.500  0.036  0.224    0.053  0.171
Cyan      0.000  0.500  0.500  0.394  0.657    0.576  0.081
Green     0.000  0.500  0.000  0.358  0.630    0.524  0.106
Yellow    0.500  0.500  0.000  0.464  0.708    0.679  0.029

Since the viewer

was usually unaware of the original colour for most of the time, the distortion was often not perceived, particularly since those things we do recognise, such as the human face, do not generally contain saturated colours and were therefore reproduced satisfactorily. However, there are occasions when the general rule does not apply, examples being the use of saturated red lipstick, which would cause the lips to appear very dark, and product packaging, which often contains highly saturated colours with which the viewer would be familiar.

14.8.2 Loss of Detail in Colours of High Saturation

One of the main advantages of YC systems is the ability to exploit the acuity characteristics of the eye by ensuring that the full detail of the scene is carried by the luminance signal. However, as the non-constant luminance encoder in Figure 14.9 illustrates, the colour difference signals, which carry increasingly larger percentages of the luminance information on saturated colours, are often filtered, which removes all the finer detail present. This includes the fine detail of the luminance information carried in these signals on saturated colours.

When the colour difference signals are filtered, the resulting loss of this detail information on saturated colours in the reproduced image is very obvious and is particularly noticeable on flowers, where most of the image is clearly in focus but the saturated flowers appear out of focus, despite being in the same image plane as the remainder of the scene. Figure 14.11 attempts to illustrate this effect.

[Figure 14.11 The photograph illustrates the loss of detail in saturated colours when the chrominance signal is filtered in a non-constant luminance system.]

The opportunities for introducing the constant luminance approach in television systems are discussed in Section 20.2.4.

15 Specifying a Colour Reproduction System

15.1 Introduction

In Part 4, we have reviewed the procedures and processes required for the implementation of practical colour reproduction systems and have indicated the parameters which need to be defined in order that users of the systems can ensure good quality images are displayed. However, these parameters, which are widely dispersed throughout the text of Part 4, benefit from being brought together in order to provide a coherent specification of the colour reproduction system.

15.2 Deriving the Specifications

The approach to deriving a system specification is dependent upon whether an open or a closed system is to be specified, or in media terminology, a scene-referred or an output-device-referred system. Many current systems are output-device-referred, that is, the signals derived at source are specified to serve a display population which has a fixed common set of display characteristics, whereas scene-referred systems are based on source signals with characteristics which are device independent (see Section 12.6) and are designed to serve a population of displays which have, or are likely to have, different display characteristics. Such systems would have been impractical prior to the digital era. Some systems fall between these two extremes, whereby sources of different characteristics serve a display population with a range of characteristics. Examples of each of these system types appear in Part 5.

15.2.1 Specifying an Output-Device-Referred System

Although formal specifications are usually arranged in an order where only the parameters associated with image capture and source processing are presented, it should be remembered

Colour Reproduction in Electronic Imaging Systems: Photography, Television, Cinematography, First Edition. Michael S Tooms. © 2016 John Wiley & Sons, Ltd. Published 2016 by John Wiley & Sons, Ltd. Companion Website: www.wiley.com/go/toomscolour

that in terms of colour reproduction, it is the environment associated with the viewing of the rendered image that determines the value of the parameters associated with image capture and source processing. These independent and dependent parameters are illustrated in the first and second columns of Table 15.1, respectively.

Table 15.1 Illustrating the independent and dependent parameters of a colour reproduction specification

Viewing environment parameters (independent):
- Display primaries chromaticity coordinates
- Display white point
- Display gamma exponent
- Display highlight luminance
- Display surround surfaces luminance
- Display contrast range in environment
- Decoding format – constant luminance? (Note 3)

Camera/source parameters (dependent):
- Camera spectral sensitivities (Note 1)
- System transfer characteristics (Note 2)
- Notional gamma correction exponent
- Gain of linear element of characteristics
- Encoding format – constant luminance?

Notes:
1. It is common in formal specifications to quote the display primaries chromaticities in the camera or system specification rather than provide the colour-matching functions which define the camera spectral sensitivities. It then becomes the responsibility of the camera manufacturer to calculate the camera spectral sensitivities.
2. In contrast to Note 1, the values of the elements of the system transfer characteristic specification are usually provided from calculations based upon the values of the parameters in the 'Viewing environment' column. In this example, it is assumed that perceptibly uniform coding is not used; if it were, then the values of the parameters would need to be calculated independently of any gamma correction required.
3. The decision as to whether or not to use constant luminance decoding is usually dependent upon the cost premium of the decoder in the monitor or receiver and therefore becomes an independent viewing environment parameter.
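Note 2 above can be illustrated numerically. The figures in this sketch are illustrative assumptions, not values from any particular standard: if the viewing environment calls for an overall scene-to-screen gamma greater than unity to compensate for a dim surround, the notional camera gamma-correction exponent follows directly from the display gamma exponent.

```python
# Illustrative derivation of a dependent camera parameter from independent
# viewing-environment parameters, per Note 2 of Table 15.1.
# Both numbers below are representative assumptions only.

display_gamma = 2.4   # display EOTF exponent (viewing environment parameter)
system_gamma = 1.2    # desired overall gamma chosen for a dim viewing surround

# The overall transfer is camera correction followed by the display power law:
#   L_screen ~ (L_scene ** camera_exponent) ** display_gamma
# so the notional camera gamma-correction exponent is the quotient:
camera_exponent = system_gamma / display_gamma
print(f"Notional camera gamma-correction exponent: {camera_exponent:.2f}")
```

With these assumed values the calculation yields an exponent of 0.5; a brighter surround calling for a lower system gamma would yield a correspondingly smaller exponent.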
Each media colour reproduction system has its own form of specification format, but nevertheless, the majority of the parameters required to specify the various systems are identical, albeit their values are different. In addition to the parameters listed in Table 15.1, other parameters define the picture spatial characteristics, the digital representation and the digital coding characteristics. Section 15.3 presents a generic collection of the parameters used in these specifications, laid out for simplicity in the tabular format used by current television system specifications. Only those parameters which pertain directly or indirectly to the reproduction of colour are included, and the values taken as a whole are not intended to relate to a specific television system; they are representative only. Specifications relating to particular media systems are addressed in the appropriate chapters of Part 5.

15.2.2 Specifying a Scene-Referred System

Ideally there are two criteria for source signals intended to serve a scene-referred system:

- The characteristics of the source signals should not limit in any way the ability of any display to provide an optimum rendition of the original scene within the capabilities of its operating characteristics.
- The means of storing and delivering the signals should be in such a manner as to avoid any perceptible artefacts appearing in the display of the optimum rendered image of the original scene.

With regard to source signal limitations, the implication is that the colour space to which the source signals are encoded encompasses all colours within a contrast range at least as great as that of the spatial dynamic contrast range of the eye (see Section 13.3) when viewing any display in its environment. However, such a system, unless constrained, would not necessarily provide the optimum rendition on displays of limited contrast ratio. Thus, the compromises necessary, which are also dependent upon the viewing environment, are described in the appropriate chapters of Part 5. In order to avoid the introduction of artefacts during storage and delivery whilst meeting the first criterion, it is implied that a perceptibly uniform coding system be adopted (see Section 13.6).
15.3 A Representative Closed Colour Reproduction System Specification

15.3.1 Camera/Source Parameters

The table below lists the parameter values:

Table 15.2 Camera/source parameters

Item 1.1  Opto-electronic transfer characteristic before non-linear pre-correction:
          Assumed linear
Item 1.2  Transfer characteristic at source:
          V = 1.099 L^0.45 − 0.099 for 1 ≥ L ≥ 0.018
          V = 4.500 L for 0.018 > L ≥ 0
          where L is the luminance of the image, 0 ≤ L ≤ 1, and V is the corresponding electrical signal
Item 1.3  Chromaticity coordinates (CIE, 1931):
          Primary      x      y
          Red (R)      0.630  0.340
          Green (G)    0.310  0.595
          Blue (B)     0.155  0.070
Item 1.4  Assumed chromaticity for equal primary signals ER = EG = EB (reference white):
          D65: x = 0.3127, y = 0.3290

15.3.2 Picture Spatial Characteristics

The table overleaf lists the parameter values:

Table 15.3 Picture spatial characteristics

Item 2.1  Aspect ratio: 16:9
Item 2.2  Samples per active line: 1920
Item 2.3  Sampling lattice: Orthogonal
Item 2.4  Active lines per picture or pixels per picture height: 1080
Item 2.5  Pixel aspect ratio: 1:1 (square pixels)

15.3.3 Signal Coding Format

The table below lists the parameter values:

Table 15.4 Signal coding format

Item 3.1  Signal format: non-constant luminance or constant luminance
Item 3.2  Derivation of Y′ and Y′C:
          Non-constant luminance: Y′ = 0.2627R′ + 0.6780G′ + 0.0593B′
          Constant luminance: Y′C = (0.2627R + 0.6780G + 0.0593B)′, that is, Y′C = Y^(1/γ)
Item 3.3  Derivation of the colour difference signals:
          Non-constant luminance:
            C′B = (B′ − Y′)/1.8814
            C′R = (R′ − Y′)/1.4746
          Constant luminance:
            C′BC = (B′ − Y′C)/1.9404 for −0.9702 ≤ B′ − Y′C ≤ 0
            C′BC = (B′ − Y′C)/1.5816 for 0 < B′ − Y′C ≤ 0.7908
            C′RC = (R′ − Y′C)/1.7184 for −0.8592 ≤ R′ − Y′C ≤ 0
            C′RC = (R′ − Y′C)/0.9936 for 0 < R′ − Y′C ≤ 0.4968

15.3.4 Digital Representation

The table below lists the parameter values:

Table 15.5 Digital representation

Item 4.1  Coded signal: R′, G′, B′ or Y′, C′B, C′R or Y′C, C′BC, C′RC
Item 4.2  Sampling lattice, R′, G′, B′, Y′, Y′C: orthogonal, line and picture repetitive, co-sited with each other
Item 4.3  Sampling lattice, C′B, C′R or C′BC, C′RC: orthogonal, line and picture repetitive; the first (top-left) sample is co-sited with the first Y′ samples
            4:4:4 system: each component has the same number of horizontal samples as the Y′ (Y′C) component
            4:2:2 system: horizontally subsampled by a factor of 2 with respect to the Y′ (Y′C) component
            4:2:0 system: horizontally and vertically subsampled by a factor of 2 with respect to the Y′ (Y′C) component
Item 4.4  Coding format: 10 or 12 bits per component
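The transfer characteristic of Table 15.2 and the coding equations of Table 15.4 can be sketched together in a short script. This is a sketch only: it assumes the primed (gamma-corrected) signals are produced by the item 1.2 transfer function and implements just the non-constant luminance equations. Note that the divisors 1.8814 and 1.4746 are 2(1 − KB) and 2(1 − KR) respectively, which is what scales C′B and C′R into the range −0.5 to +0.5.

```python
# Sketch of the source transfer characteristic (Table 15.2, item 1.2)
# and the non-constant luminance coding equations (Table 15.4).

KR, KG, KB = 0.2627, 0.6780, 0.0593   # luma weights; they sum to 1.0

def oetf(L):
    """Table 15.2 item 1.2: map scene luminance L (0..1) to signal V."""
    if L < 0.018:
        return 4.500 * L                 # linear segment near black
    return 1.099 * L ** 0.45 - 0.099     # gamma-corrected segment

def encode(r, g, b):
    """Non-constant luminance: gamma-correct first, then form Y', C'B, C'R."""
    r_p, g_p, b_p = oetf(r), oetf(g), oetf(b)    # primed (corrected) signals
    y_p = KR * r_p + KG * g_p + KB * b_p         # luma Y' (item 3.2)
    cb = (b_p - y_p) / 1.8814                    # 1.8814 = 2 * (1 - KB)
    cr = (r_p - y_p) / 1.4746                    # 1.4746 = 2 * (1 - KR)
    return y_p, cb, cr

# Peak white carries no chrominance; saturated blue and red drive their
# colour difference signals to the +0.5 limit:
#   C'B = (1 - 0.0593)/1.8814 = 0.5,  C'R = (1 - 0.2627)/1.4746 = 0.5
print(encode(1.0, 1.0, 1.0))
print(encode(0.0, 0.0, 1.0))
print(encode(1.0, 0.0, 0.0))
```

The same structure extends to the constant luminance column by gamma-correcting the weighted sum of the linear signals and applying the piecewise divisors of item 3.3.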

15.3.5 The Viewing Environment Specifications

The viewing environment parameters differ considerably with each media type; thus, the specifications of these parameters are covered separately in the appropriate chapters of Part 5.



Part 5

The Practicalities of Colour Reproduction – Television, Photography and Cinematography

Introduction

All the material appearing in Parts 1–4 of this book has, with only one or two exceptions where the differences have helped with the understanding of the concepts being described, dealt generically with the topic of colour reproduction, that is, the material presented has been of equal relevance to all forms of colour reproduction by electronic methods, whether for television, photography or cinematography. Part 5, dealing with the practicality of colour reproduction, is divided into three parts, A, B and C for television, photography and cinematography, respectively, each with its own dedicated chapters. There are two reasons why this is necessary, the most fundamental of which relates to the viewing conditions under which the reproduced images are viewed for

the three different media systems and the correspondingly different criteria which are selected at source to compensate for these differences, as indicated in Section 9.3 and Chapter 13. The second reason is that although there is now a much broader understanding between those responsible for setting the technical specifications and evolving the standards for what were three completely different media, it has to be remembered that these media developed separately over several decades, often with little interaction between the experts in each of the areas of specialisation. In consequence, specifications were inevitably proposed and standards adopted which appeared best suited to the particular media at that time, without there always being much interaction between the experts of each specialist media group. In more recent years this situation regarding the specialisation of experts within the various media groups has been largely overcome, with much sharing and understanding of what is being proposed for the evolution of future industry specifications. Nevertheless, current standards do reflect these different legacy specifications, both with regard to their limitations and also to the different approaches adopted to ameliorate them in the specifications they support.

For those who consider they already have a relatively good grasp of the fundamentals of colour reproduction but wish to learn or refresh their memory of material relevant to the media in which they operate, whether it be television, photography or cinematography, the appropriate chapters in this Part are written in a largely self-contained manner, with the fundamentals in earlier chapters only being alluded to when the complexity or detail of the material is likely to warrant such an approach.
There is little more annoying to a reader than interruptions to the continuity of explanation through continual reference to earlier material; thus, although sometimes it may seem that a topic repeats that which appears earlier in the book, it will be presented in a more summarised form with only occasional references to the more fundamental approaches of the earlier material. In contrast to those seeking information only about their particular media are those who wish for an overview of colour reproduction in all three media types; for this purpose, it is important to provide a continuity of description which follows the thread of development as it progressed through those three media types. Historically, significant electronic development was required before suitable equipment became available to support all three media types; the costs of the early electronic colour cameras were equivalent to some 40 times the average annual salary of the time, with another even higher amount required to purchase recording equipment. Only when the resulting pictures could be shared amongst a very wide population could such costs be justified, so it was inevitable that television led the way in the development of electronic colour reproduction. Colour television was introduced to public service broadcasting in the 1950s, based at that time on analogue technology; it was a further 30 years before advances in chip design led to the practicability of introducing digital technology in a limited fashion into the demanding television domain, another 10 years before digital cameras became the order of the day and yet another 10 years before the complete signal path was digitised with the introduction of digital transmissions.
The spur of the availability of miniaturised solid state integrated circuits for digital processing of video signals and solid state opto-electronic image sensors led to the production of the first practical digital photographic stills cameras for use by the general public in the early 1990s, some 40 years after the introduction of colour television. Finally, the availability of electronic projectors capable of producing displays bright enough for viewing in public cinemas led the film industry to adopt standards for electronic

cinematography in the first decade of the twenty-first century followed by the operation of digital cinemas shortly thereafter. The stability and accuracy of digital processing led to the introduction of circuitry to carry out the processing of video signals in an ever more accurate manner which in turn led to the systems and specifications of colour reproduction systems being upgraded as the technology developed. From the foregoing it is apparent that if the historical thread is to be preserved as a basis for describing the continuing development of colour reproduction then it must be initiated by television and followed by photography and cinematography in that order.



Part 5A

Colour Reproduction in Television

Introduction

In the context of this book, television production is defined in terms of the live editing of scenes using historic television multi-camera techniques, primarily for broadcast. This part commences with an outline description of the signal flow, highlighting those elements which are in a position to influence the colour of the viewer's display. The following chapter briefly describes the introduction of colour into television, which is relevant since the concepts evolved at that time for capturing the colour information from the scene and processing it into a form suitable for storage, distribution and display have since been adopted by all current media systems: television, electronic photography and cinematography.

The established colour rendering index (CRI) used for evaluating the performance of electrical discharge and LED lamps used for scene illumination has long been found to be a poor indication of their suitability for this purpose. The newly introduced television lighting consistency index (TLCI), which overcomes these limitations, is described in detail and supported by a worksheet which enables the TLCI to be calculated from the spectral power distribution of the lamp.

The current high definition television system, in terms of its colorimetric performance, is reviewed at length, which provides the basis for appraising the potential of the ultra-high definition system. UHDTV is currently under development and holds the promise of significant improvements in the rendition of the displayed image, as the limitations in legacy systems are addressed with the introduction of a wider colour gamut and contrast range and a constant luminance approach to the delivery of the signals to the display. In the final chapter the approach to colour management in television is described in the context of achieving satisfactory results when the final image is displayed in widely different viewing environments.



16 The Television System and the Image Capture Operation

16.1 The Television System Workflow

In order to identify those elements within the system which influence the manner in which images of the scene are reproduced, Figure 16.1 illustrates the workflow of the overall television system, with those elements which can influence the portrayal of the pictures displayed in the home of the viewer highlighted in blue. This is a typical operation; the detail of the arrangement will vary from operation to operation. Since those elements of the workflow with a grey background do not influence the grading of the picture, they are not described further but are included here to assist in understanding the context of the overall system operation. In Figure 16.1, for simplicity of display, the shooting operation is illustrated separately from the television centre, which of course it is for an outside broadcast (OB) operation, but otherwise, all the blocks of the shooting operation are replicated in each studio within the television centre. For simplicity and clarity, only a three-camera operation is illustrated, whereas in reality, the number of cameras used, depending upon the complexity of the production, would likely be in excess of this number.

Generally speaking, cameras designed for the type of television operation described above are configured as a number of separate packages, which together comprise a television camera channel. The head end of the camera contains only the lens, the colour analysis optics if appropriate, the image sensors, the analogue to digital converters1 and a viewfinder; the raw red, green and blue signals are sent down the cable to the camera control unit (CCU), which contains all the signal-processing elements. The signal for the viewfinder is returned after processing from the CCU to the camera head.
The CCU is usually located in the room designated as Vision Control or Picture Control, or sometimes historically as 'Racks', and is where the raw signals from the camera head

1 Digital processing of television signals within the camera channel did not occur until the 1980s; see Chapter 17 for the phasing of the introduction of digital processing.

Figure 16.1 Television operation workflow.

end are processed and output to both the associated picture monitor, on which the picture is adjusted, and the vision mixer in the production control room. The vision mixer operator then selects which camera shot is used for the programme. The controls associated with the adjustment of exposure and the processing circuit parameters of the camera channel are located on a remote control panel (RCP) connected to the CCU.

Vision Control is at the heart of the picture adjustment and matching operation and provides two operational positions, for the lighting director and the vision controller respectively, ideally located adjacent to each other in order to facilitate communication between these two individuals, who are jointly responsible for the lighting, adjustment and matching of the pictures. The lighting director manages the positioning of the key, fill, backlight, set dressing and effects luminaires, and operates the lighting console, which enables the intensity of each of the luminaires in the studio to be adjusted for the best contrast composition of the scene whilst matching the exposure and illuminant colour balance requirements of the cameras.
In an ideal environment, the vision control position comprises a row of RCPs on the desk facing a picture monitor stack which includes a monitor for each camera and a monitor for picture matching, whose input is automatically switched between the ‘on-air’ camera and the picture from the camera whose RCP is currently under adjustment, usually the next camera

to be selected to air. The RCP will normally have controls for colour balance, exposure and black level or 'lift', the operation of which will be further described in Section 21.6. The positioning and level of the environmental lighting in Vision Control are critically arranged and set to match recommended ambient light levels, and the lighting has a colour temperature to match the white point of the adopted television system. The home viewing section comprises the final elements of the workflow, and the blue background indicates those elements which influence how the final displayed picture will be perceived. The operational procedures which are undertaken to ensure well-matched high grade pictures are delivered to the viewer are described in the appropriate sections of Chapter 21.

16.2 The Television System Signal Path

The full signal path of the television system may be derived from the workflow diagram illustrated in Figure 16.1; however, we are only interested in those elements of the system which influence the perception of the displayed image, and these are detailed in Figure 16.2. The configuration of the elements in Figure 16.2 has hardly changed since the introduction of colour television and serves as a template for all the systems described in the following chapters.
Since the introduction of digital television, the appropriate Analogue to Digital (A–D) and Digital to Analogue (D–A) converters have been added at the beginning and end

Figure 16.2 Television signal path elements influencing image perception.

of the signal chain, respectively, as the image sensors and image generators are fundamentally analogue in operation. The functionality of each of these elements has been described in detail in Part 4 and will be addressed further in the context of current practice in Chapters 19 and 21. The specific arrangement of the elements in the display, particularly with regard to the gamma corrector, is dependent upon whether a constant luminance system or a non-constant luminance system is in use, as described in Sections 14.6 and 14.7.

16.3 The Television Standards Organisations

From the viewers' standpoint, television is by its nature a multi-source single-destination system; that is, a single television set in the viewer's home is capable of receiving signals from different cameras in a production, different studios within a television centre and different broadcasters. Thus, if the viewed pictures are to be perceived as realistic and matching from source to source, it is essential that the signal path elements described in Section 16.2 operate to the same specifications; inevitably therefore, from the beginning, standards were established by broadcast authorities and imposed upon the broadcasters. Initially these standards were set by national bodies, but as the technology progressed to the point where the interchange of programmes could take place at national level, new bodies were incorporated and professional organisations took up the challenge to evolve specifications for adoption as national standards.
In the United States, the National Television Systems Committee (NTSC) was the trade organisation of television camera and TV set manufacturers who evolved the early specifications for adoption by the Federal Communications Commission, and early on in the United Kingdom, the British Broadcasting Corporation (BBC) and the Independent Broadcasting Authority (IBA) cooperated in evolving mutual specifications for their broadcasts. The requirement for common standards provided the opportunity for experts from around the world to exchange views and commence to evolve shared approaches in many areas. Eventually this led to fewer but more internationally orientated bodies; in the United States, the Society of Motion Picture and Television Engineers (SMPTE) became the focus for evolving specifications and proposing them for adoption by the standards bodies, whilst in Europe, the European Broadcasting Union (EBU) took on the same task. In Japan, the national broadcaster NHK was at the forefront of evolving specifications for new television systems, and there was an interchange of information between the EBU and the Eastern Bloc countries. For many decades, the Consultative Committee on International Radio (CCIR) was instrumental in setting international standards for sectors of the world which used the same basic scanning parameters, and in 1992, it evolved into the International Telecommunications Union – Radio or ITU-R. (As television uses what is formally known as the international radio frequency bands for transmission, it falls under the auspices of the ITU-R.) The ITU is a treaty organisation within the United Nations and is responsible for international agreements on communications. The ITU Radio Communications Bureau (ITU-R/CCIR) is concerned with wireless communications, including allocation and use of the radio frequency spectrum.
The ITU also provides technical standards, which are called ‘Recommendations’ and which include the international television standards for the interchange of television programmes. The ITU undertakes studies, the results of which form reports, which in turn provide the basis for the formal recommendations. In broadcast television, these reports and recommendations

follow a naming sequence, ITU-R BT.601 for example, where BT stands for 'broadcasting service (television)' and the number is a sequence number. In order to avoid lengthy repetition, once these document titles have been introduced, subsequent reference to them in the text is abbreviated to, for example, Rec 601. Many of the world's television experts are now also members of the SMPTE, which helps to ensure that the specifications developed by them, the EBU and NHK share as many common values for the critical parameters as is practical and thus smooth the path for these specifications to be adopted as ITU recommendations. The technical standards for television broadcast have evolved over the decades as the progress in technological development has provided the opportunity to greatly enhance the quality of the pictures generated, transmitted and displayed. These enhancements embrace all the factors contributing to picture quality but the most perceptible improvement with each enhancement has been in the increased spatial resolution of the reproduced image. For this reason, generations of new technical specifications for broadcast are often referred to by the resolution they provide; in the early days by the number of scan lines, and latterly, by the number of pixels which comprise an image. In the following chapters, the specifications for the resolution parameters are not discussed in great detail despite the recognition of their prime importance since they are not parameters which affect the fidelity of colour in reproduction. Though not essential to the understanding of current and future specifications discussed in the latter chapters of this part on television, the next chapter on the history of colour in television does provide a sound basis for that understanding and deserves at least to be scanned if not read in depth.



17 A Brief History of Colour in Television

17.1 The Beginnings

Work on experimental television systems commenced in a serious manner at the beginning of the 1920s in several countries throughout the world. The EMI Company in the United Kingdom was amongst the first to develop an electronic image sensor which made the development of a television camera a practical proposition and which led the British Broadcasting Corporation (BBC) to adopt the EMI system in order to commence the world's first public television service in November 1936. By the end of the 1930s several other countries had also commenced public service television broadcasting, and in 1941, the Federal Communications Commission (FCC) in the United States authorised the adoption of the specification proposed by the National Television Systems Committee (NTSC) for the commencement of television broadcasting from July 1941. The NTSC was a committee established by the radio industry trade association to derive and recommend for adoption by the FCC a technical specification for broadcast television.

The Second World War brought to an end public television broadcasting in Europe and with it the development work on colour television by the television equipment manufacturers. However, in the United States, work on experimental colour television systems continued apace and by the end of the 1940s, pressure was mounting on the FCC to approve a technical specification as a standard for a public colour television service. After a false start, when the FCC authorised the commencement of a service using the CBS colour system, which was incompatible with the some 10 million black and white receivers then in use in the United States, the NTSC was reactivated in January 1950 to derive and agree a specification for a compatible system to be recommended for adoption by the FCC.
It is difficult to exaggerate the extent and importance of the work undertaken by the NTSC; at that time amongst the various disparate colour television systems which had been developed by the principal manufacturers of television equipment in the United States, there was no common thread and no system which met all the criteria to enable it to be recommended for adoption. In consequence, under the auspices of the Committee, extensive development work and system tests were undertaken by some 300 engineers and colour scientists drawn

from a broad spectrum of those organisations with a contribution to make; these experts were organised into 10 panels and 55 subpanels (Fink, 1955). Following an interim report in 1951, a reorganisation of the committee to take account of the conclusions of that report, and considerable further work, the NTSC agreed the final form of the compatible colour television signal specification in July 1953. The FCC approved this specification, which became the standard for public service colour television broadcasting in December 1953 and which led to the first national broadcast utilising the NTSC system on 1 January 1954. However, it was not until the mid-1960s that there was a sufficient uptake of colour television sets by the public to support all programmes being produced and transmitted in colour during prime viewing time.

Since the 1950s, the performance of colour television systems has come a long way and much has been written on the differences and improvements of these latter systems when compared with the NTSC system; nevertheless, an objective analysis of all these new systems will show that at a fundamental level, the essential elements of the NTSC system remain in use. In colour terms, as opposed to spatial and temporal resolution, it is only the manner in which the colour difference or chrominance signals as derived by the NTSC are conveyed through the multiplex1 that has continued to improve with each new system introduction, leading to systems with different names which are nevertheless fundamentally variants of the NTSC system.

In Europe, by the mid-to-late 1950s, experimental colour transmissions were taking place during the close down times of the regular monochrome services.
The BBC broadcast colour test films using a 405 line variant of the NTSC system during this period, and discussions commenced between the principal experimenters, notably those in the United Kingdom, France and West Germany, who were working on various solutions to overcome the problems of the NTSC system of that era. In the early 1960s, under the auspices of the European Broadcasting Union (EBU), work commenced on the selection of a common system for Europe which would avoid these problems.2 During this period, the Séquentiel Couleur à Mémoire (SECAM) system, a variant of NTSC in which the multiplex used frequency modulation for the colour difference signals, was developed in France; later, in West Germany, yet another variant, even closer to the NTSC system, was developed based upon alternating the phase of the chrominance subcarrier on a line-by-line basis, which became known as the 'Phase Alternation Line' (PAL) system.

After an initial lack of agreement, most of Europe adopted the PAL system, which overcame the phase sensitivity issues of the NTSC system, and in the United Kingdom, the BBC commenced the first European colour television service using PAL in July 1967, quickly followed by Independent Television (ITV). In the same period other European countries commenced broadcasting in PAL, with the exception of France, Luxembourg and the USSR, who commenced services using the SECAM system. The remaining countries of the world selected one of these three systems as their standard, depending to a large extent on which monochrome line standard had been previously adopted and to a degree on their political affiliations.
These three systems then served the world until the introduction of the high definition television system (HDTV) in the 1990s in Japan and in the remainder of the world in the

1 The multiplex in this context is the circuit component which, in a variety of different ways for different systems, combines the three luminance and colour difference components of any compatible system into a single signal for storage and distribution.
2 The basis of these problems is briefly addressed in Section 17.2.6.3.

2000s; see Chapter 19. However, within the television broadcast centres, digital component systems started to replace these traditional systems from the late 1980s onwards.

17.2 The NTSC, PAL and SECAM Colour Television Systems

One of the principal requirements of the NTSC system was that it should be compatible with the large population of black and white television sets already established in the field. In consequence, as noted above, a number of the NTSC-defined processes as applied to the camera RGB signals were, because of their fundamental nature to the solution of establishing any compatible system, also adopted by all subsequent systems. Thus, in dealing with the fundamentals of colour reproduction in Part 4, and in particular the encoding of television camera signals in Chapter 14, we have by default already described many of the essential elements which were originally derived by the NTSC, and in consequence, it may be appropriate for the reader to review that chapter before progressing further.

Nevertheless, in order to avoid repetition, the approaches to encoding described in detail in Chapter 14 are only summarised below, using the values for the universal parameters specified by the NTSC or the EBU as appropriate. These systems are fully described elsewhere (Carnt & Townsend, 1961; Carnt & Townsend, 1969; Wentworth, 1955); our interest is limited primarily to reviewing those features of the systems which influence the colour characteristics of the reproduced image. However, Section 17.2.6.3 will attempt to explain the multiplex-associated problems of the NTSC system in the early days of its use.

17.2.1 The System Primaries and White Point

Television display devices have historically been based on primaries derived from the excitation of phosphors deposited on the faceplate of cathode ray tubes.
Thus, since the introduction of colour television, the primaries have been based upon the colorimetry of the somewhat limited range of phosphors then available. The original choice of primaries by the NTSC was made on the basis of the chromaticity of silicate phosphors, which gave the widest chromaticity gamut of the limited range of phosphors then available. Though the choice appeared to rest upon sound colorimetric grounds, the green primary, although an excellent selection from the point of view of the large colour gamut obtained, was associated with an inefficient phosphor, which enabled only relatively dim pictures to be reproduced. Receiver manufacturers soon ignored the standard and produced display devices with considerably more efficient green and red phosphors but with chromaticities which produced a relatively limited colour gamut well removed from the specification. Thus, since the RGB signals which drove the display were derived to match the NTSC primaries, the colours were inevitably reproduced with less accuracy than they would otherwise have been. Nevertheless, the colour gamut volume within the colour space was dramatically improved, and it was generally accepted that this improvement was an acceptable compromise for the loss of chromaticity fidelity.

The introduction of colour television into Europe followed much later, in 1967, and gave the EBU the opportunity to opt for an improved compromise between luminous efficiency and chromaticity gamut in the selection of the display chromaticities. In the knowledge of more than a decade of phosphor development, a set of phosphors based upon sulphides for

the green and blue primaries and a rare earth for the red primary were specified. Although the chromaticity of these phosphors was a compromise, there was recognition of the need to restrain the impetus for ever brighter displays at the cost of seriously compromising the display chromaticity gamut.

The failure of the original NTSC primaries chromaticities specification was recognised and superseded by the SMPTE 'C' RP145 (Recommended Practice) primaries chromaticity specification during the 1980s, which generally reflected the chromaticities of the phosphors then being used by the receiver industry. The SMPTE and the EBU primaries have very similar chromaticities. In fact they are so close that it is a pity the SMPTE did not adopt the EBU primaries; had they done so, they would have effectively achieved a world standard for television system primaries chromaticities, which would have eased the standards conversion requirements when programmes were interchanged between these different areas.

One of the principal design criteria for the colour systems of this era was compatibility with monochrome television, the signal format standards of which did not accommodate negative excursions of the signal, thus preventing the transmission of data relating to those saturated colours beyond the gamut of the chosen phosphors. This situation will be explored further in Chapter 20. The primary chromaticities chosen by the NTSC, the EBU and the SMPTE, together with the chromaticities of their adopted system white points, are given in Table 17.1 and their chromaticity gamuts are illustrated in Figure 17.1.
At the time the NTSC specification was being agreed, the recommendation for the specification of daylight was Illuminant C, but by the time the EBU and the SMPTE had defined the new primaries, the CIE had introduced new daylight 'D' illuminants based upon the specifications described in Section 7.3; in consequence, the new D65 illuminant was selected as the system white for these new system specifications.

Table 17.1 Historic television system primaries and white points

                  x        y        u′       v′
NTSC
  Red             0.67     0.33     0.477    0.528
  Green           0.21     0.71     0.076    0.576
  Blue            0.14     0.08     0.152    0.196
  Illuminant C    0.3101   0.3162   0.2009   0.4610

EBU Tech 3213
  Red             0.640    0.330    0.451    0.523
  Green           0.290    0.600    0.121    0.561
  Blue            0.150    0.060    0.175    0.158
  D65             0.3127   0.3290   0.1978   0.4683

SMPTE RP 145
  Red             0.630    0.340    0.433    0.526
  Green           0.310    0.595    0.130    0.562
  Blue            0.155    0.070    0.176    0.178
  D65             0.3127   0.3290   0.1978   0.4683
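The u′, v′ entries in Table 17.1 follow directly from the x, y coordinates via the standard CIE 1976 transformation. A minimal sketch in Python (the function name is illustrative, not from the book's worksheets):

```python
def xy_to_uv_prime(x, y):
    """Convert CIE 1931 (x, y) chromaticity to CIE 1976 (u', v')."""
    denom = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / denom, 9.0 * y / denom

# NTSC red primary from Table 17.1
u_red, v_red = xy_to_uv_prime(0.67, 0.33)
```

Running each row of the table through this function reproduces the tabulated u′, v′ values to the quoted precision.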

As is illustrated in Figure 17.1, there is very little difference between the chromaticities of Illuminant C and D65; however, the D65 specification has a higher ultraviolet content, which enables it to be used to more accurately simulate daylight when illuminating fluorescent surfaces.

Figure 17.1 The gamut of historic television primaries.

In Figure 17.1, it can be seen that the SMPTE gamut is very slightly smaller than the EBU gamut; however, both are close enough to be considered the same for practical purposes. A comparison with the NTSC gamut indicates a much improved blue primary, a red primary slightly inferior (the SMPTE red more so) and a much inferior green primary, which significantly reduces the size of the chromaticity gamut that may be reproduced. This was the price which had to be paid for brighter pictures.

It is instructive to compare the historic chromaticity gamuts with the surface colours gamut of Pointer as shown in Figure 17.2 (see also Figure 9.4). Clearly, although the television colour system is capable of good-quality reproduction of the common range of colours in a scene, its performance on saturated colours such as costumes and flowers can be disappointing; some of these colours will appear relatively desaturated in the display. Compared with the NTSC primaries, large areas of saturated green and cyan chromaticities are not reproducible by the EBU primaries; conversely, compared with the EBU primaries, the NTSC primaries are unable to reproduce a broad band of saturated blue-to-magenta hues.

Figure 17.2 Comparison of gamuts and surface colours.

It is interesting to speculate on a gamut composed of the NTSC red and green primaries and the EBU blue primary. As can be envisaged from Figure 17.2, such a gamut would very significantly increase the proportion of the Pointer surface colours gamut which could be reproduced and thus begin to approach an ideal set of primaries. Unfortunately, after some 60 years of research and development, no phosphors have been found with an acceptable level of efficiency which also match these chromaticities.

17.2.2 Derivation of the Ideal Camera Spectral Sensitivities

The procedure for establishing the relationship between the RGB primaries of a system and the camera spectral sensitivities in terms of the XYZ colour matching functions (CMFs) was described in Section 9.3 and, together with the relationships derived in Appendix F, may be used to establish the television system spectral sensitivities for any triple set of primaries, as first described in Chapter 9. By entering the values given in Table 17.1 into the formulae derived in Appendix F (which is embedded in Worksheet 17),3 we can calculate the coefficients of the XYZ CMFs for the NTSC, the EBU and the SMPTE RP145 sets of primary chromaticities. As we have seen in Section 9.3, these coefficients lead directly to providing the camera spectral sensitivities for any set of primary chromaticities, and in Worksheet 17, these coefficients and the corresponding camera optical spectral sensitivities are derived and illustrated as follows.
3 Worksheet 17 also has a number of macro-driven ‘keys’ which when selected automatically enter the appropriate primary chromaticities into the formulae and produce the corresponding chromaticity charts and camera spectral sensitivities.

For NTSC:
r(λ) = 1.9637x(λ) − 0.5399y(λ) − 0.2922z(λ)
g(λ) = −0.9984x(λ) + 2.0272y(λ) − 0.0287z(λ)
b(λ) = 0.0591x(λ) − 0.1201y(λ) + 0.9105z(λ)

Figure 17.3 NTSC 'idealised' camera spectral sensitivities.

For EBU:
r(λ) = 3.2304x(λ) − 1.4694y(λ) − 0.5018z(λ)
g(λ) = −1.0221x(λ) + 1.9783y(λ) + 0.0438z(λ)
b(λ) = 0.0716x(λ) − 0.2413y(λ) + 1.1274z(λ)

Figure 17.4 EBU 'idealised' camera spectral sensitivities.

For SMPTE RP145:
r(λ) = 3.7149x(λ) − 1.8434y(λ) − 0.5765z(λ)
g(λ) = −1.1326x(λ) + 2.0953y(λ) + 0.0373z(λ)
b(λ) = 0.0596x(λ) − 0.2086y(λ) + 1.1121z(λ)

Figure 17.5 SMPTE RP145 'idealised' camera spectral sensitivities.
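The coefficient sets above come from inverting the RGB-to-XYZ matrix defined by the primary and white-point chromaticities. The following sketch (pure Python, with helper names of my own choosing) performs that standard derivation; note that the worksheet applies its own normalisation before plotting, so only the ratios of the coefficients, not their absolute values, should be compared with the figures above.

```python
def det3(m):
    """Determinant of a 3x3 matrix (list of rows)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, v):
    """Solve m x = v for a 3x3 system by Cramer's rule."""
    d = det3(m)
    out = []
    for j in range(3):
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = v[i]
        out.append(det3(mj) / d)
    return out

def invert3(m):
    """Invert a 3x3 matrix via its adjugate (cyclic cofactors)."""
    d = det3(m)
    cof = [[m[(i + 1) % 3][(j + 1) % 3] * m[(i + 2) % 3][(j + 2) % 3]
            - m[(i + 1) % 3][(j + 2) % 3] * m[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] / d for j in range(3)] for i in range(3)]

def rgb_to_xyz_matrix(red, green, blue, white):
    """Build the RGB->XYZ matrix from primary and white (x, y) chromaticities."""
    # Each primary contributes a column proportional to (x/y, 1, z/y).
    cols = [(x / y, 1.0, (1.0 - x - y) / y) for x, y in (red, green, blue)]
    xw, yw = white
    w_xyz = (xw / yw, 1.0, (1.0 - xw - yw) / yw)  # white point, Y normalised to 1
    # Luminance weights make the three primaries sum to the white point.
    weights = solve3([[cols[j][i] for j in range(3)] for i in range(3)], w_xyz)
    return [[cols[j][i] * weights[j] for j in range(3)] for i in range(3)]

# NTSC primaries with Illuminant C as system white (Table 17.1)
M = rgb_to_xyz_matrix((0.67, 0.33), (0.21, 0.71), (0.14, 0.08), (0.3101, 0.3162))
M_inv = invert3(M)  # rows give r, g, b as weighted sums of the XYZ CMFs
```

The middle row of M recovers the familiar NTSC luminance weights of approximately 0.299, 0.587 and 0.114, and the rows of M_inv stand in broadly the same ratios as the NTSC coefficient set quoted above.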

The idealised camera spectral sensitivities which result from plotting these relationships for the NTSC, EBU and SMPTE RP145 primaries are shown in Figures 17.3–17.5. The absolute values, as opposed to the relative values, in the above equations are irrelevant, and a multiplication factor has been used in plotting the curves to equate the peak of the green response to a value of 1.0 in all cases, to more easily enable the differences between the responses to be appreciated.

The three charts are to the same scale, and it can be seen that as the size of the chromaticity gamut reduces, the extension of the negative lobes of the spectral sensitivities and the corresponding positive lobes both increase; for the blue primary, this effect is relatively marginal since the red and green primaries are located close to the spectrum locus, but for the red primary, the effect is very significant since the blue and green primaries are located well away from the spectrum locus.

17.2.3 Matching Scene Illumination to the Spectral Sensitivities

In defining the principal parameters of a colour reproduction system, the chromaticity coordinates of the scene illumination form an integral element of the specification along with the chromaticity coordinates of the system primaries, the reasons for which are discussed in detail in Chapter 11. Thus, for example, when a camera designed to provide signals for a display with EBU-specified primaries is shooting a scene with Illuminant D65 lighting, any neutral reflecting surface in the scene will cause a colour-balanced camera to provide equal levels of the RGB signals.
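This equality can be checked numerically by weighting each sensitivity curve with the illuminant SPD and summing. The five-sample curves below are coarse, invented stand-ins for the real camera sensitivities and the D65 SPD, purely to show the mechanics:

```python
# Hypothetical 5-sample spectral data, 450-650 nm in 50 nm steps (NOT the
# real EBU camera curves or the D65 SPD -- illustrative values only).
r_sens = [0.02, 0.05, 0.40, 1.00, 0.30]
g_sens = [0.10, 0.60, 1.00, 0.40, 0.05]
b_sens = [1.00, 0.35, 0.05, 0.01, 0.00]
spd    = [1.17, 1.09, 1.00, 0.90, 0.83]  # broadly declining, like daylight

def weighted_sum(sens, spd):
    """Weight a sensitivity curve by the illuminant SPD and sum (coarse integral)."""
    return sum(s * p for s, p in zip(sens, spd))

sums = [weighted_sum(c, spd) for c in (r_sens, g_sens, b_sens)]
# Channel gains that equalise the outputs for a neutral surface (normalised to G).
gains = [sums[1] / s for s in sums]
balanced = [g * s for g, s in zip(gains, sums)]
```

For a properly colour-balanced camera the three sums are already equal and the gains are unity; here the gains show how the balance would be established.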
In the table in Worksheet 17 entitled 'Static Responses, EBU Characteristics Illuminated by D65', the derived spectral sensitivities of an EBU camera are each convolved and summed with the spectral distribution characteristic of the D65-defined illuminant to illustrate that under these conditions, the RGB signal outputs of the camera are equal. Figure 17.6 illustrates in graphical terms the process described in the previous paragraph. The broad line curves labelled EBU + EEW (Equal Energy White) are those derived for the EBU primaries, whilst the narrow line curves are those that result from the convolution by the D65 SPD. The areas enclosed by the three narrow line curves are equal, as is illustrated by the equal totals of the columns in Table 4 of Worksheet 17.

17.2.4 Matching the Camera Spectral Sensitivities to the Display Primaries

As we have seen in the previous section, in order to achieve good colorimetric fidelity within the gamut of real colours described by the colour primaries triangle on the chromaticity diagram, the camera should exhibit spectral sensitivity characteristics which match the CMFs of the primaries.

Figure 17.6 Showing the weighting of the EBU characteristics with the SPD of Illuminant D65.

Figure 17.7 EBU idealised camera spectral sensitivities.

There are practical difficulties to achieving such a match. Using the EBU primaries as an example, an inspection of the curves in Figure 17.7 shows that over certain portions of the spectrum, all three characteristics at different wavelengths require the camera sensors to provide a negative output. No sensors exist or are foreseen which will provide such characteristics.

Figure 17.8 Adjusted EBU-positive-only lobes white balanced with D65.

In practice therefore, the camera taking characteristics were made to match as closely as possible the positive lobes of the spectral sensitivities. Figure 17.8 illustrates a hypothetical set of characteristics where the skirts of the EBU-positive lobes have been adjusted slightly where they cross the baseline to achieve a shape more likely to be achieved in practice. It should be borne in mind that camera taking characteristics are the combination of the dichroic prism block, the trimming filters and the R, G and B sensor characteristics.

In the early days of colour television, the performances of the camera sensors and the subsequent amplifiers in terms of signal-to-noise ratio were relatively poor and therefore limited the amount of processing which could be introduced. In consequence, the practical characteristics of the camera lacked the negative lobes of the ideal characteristics; on narrow-band saturated colours, the reduction in the amplitude of the complementary signals which the negative lobes would have provided did not occur, and the colours produced were therefore significantly desaturated.

As the performance of sensors and amplifiers improved, it became practical to consider techniques which could begin to compensate for the lack of the negative lobes in the responses of the camera. An inspection of the camera responses in Figure 17.8 indicates that the ideal negative lobes illustrated in Figure 17.7 may be crudely matched by adding or subtracting appropriate levels of the complementary signals to the required primary signal. For example, in emulating the idealised red response, subtracting a small amount of the green signal from the red signal would make an approximate match to the negative red lobe in Figure 17.7.
Similarly, adding a small proportion of the blue signal to the red signal would begin to simulate the minor red positive lobe centred on 440 nm. Thus, by electrically adding or subtracting proportions of the complementary signals from each primary, a better match can be made to the idealised

spectral sensitivities. Effectively, this is an empirically derived approximated version of the matrixing techniques described in Section 12.2 and is a technique which is adopted in all modern colour cameras.

In Worksheet 17, the positive lobes of the EBU characteristics are transformed by such a matrix to derive an approximate match to the idealised characteristics. The matrix was adjusted empirically to provide the match whilst ensuring that the coefficients of the primaries in each line summed to a value of 1.0, in order that the matrix does not change the colour balance when the scene is illuminated by D65. The resulting matrix coefficients are shown in Table 17.2, and the comparison of the resulting characteristics with the original white-balanced idealised characteristics is illustrated in Figure 17.9.

Table 17.2 Matrix for correcting practical camera spectral sensitivity characteristics to match system CMFs

           Rin       Gin       Bin
Rout      1.4700   −0.5600    0.0900
Gout     −0.0900    1.1880   −0.0980
Bout     −0.0200   −0.1200    1.1400

Thus, the overall response of the camera, resulting from the prism, trimming filters, sensors and matrix, approximates reasonably closely to the ideal response, and the system colour reproduction fidelity for colours not too close to the periphery of the primaries triangle on the chromaticity diagram is acceptable to most viewers, especially since the viewers generally do not have access to the original colour in the scene for comparison.

Figure 17.9 Illustrating the emulation of the ideal camera spectral sensitivities by matrixing.

For this reason, where

compromises were made in shaping the characteristics of the camera spectral sensitivities or the matrix parameters, the criterion was to make the best match to flesh colours, since it is these colours with which the viewer is most familiar and for which the viewer often has a local reference.

17.2.5 Lighting for Colour Television

Over the period in which the systems described in this chapter dominated, tungsten luminaires were the standard illuminant for television studios. At the commencement of the period, these luminaires were based upon lamps using only a relatively simple tungsten filament, but from the end of the 1950s, tungsten halogen lamps, with their increased efficiency and higher operating colour temperature of 3,400 K, were increasingly used. Both of these variants of tungsten illumination have CRIs of 100 and therefore present no problems in terms of rendering the colours of the scene accurately, subject to the camera having a suitable colour temperature correction filter in place; see Section 11.3. In practical terms, care must be taken when dimming a luminaire to provide a balanced level of illumination to an element of the scene, so that the point is not reached where the lower colour temperature produced changes the colour balance of the camera output.

17.2.6 Gamma Correction

The primary television audience is the public, and during the period being considered, all domestic television displays were based upon the CRT in one of its forms. As seen in Section 13.4, the electro-opto transfer function (EOTF) of the CRT is a power law with a gamma value which, at the beginning of the period, was somewhat difficult to determine in a precise manner. Furthermore, it was recognised that the gamma varied depending upon the circuitry arrangements for driving the CRT.
Nevertheless, the use of gamma correction was universal, and it was appreciated that it was necessary to establish a figure for the display gamma in order that the correction characteristic applied at source could be specified in the system specification. The NTSC specified the correction characteristic to be based upon a display gamma of 2.2; that is, in notional terms, the correction characteristic would follow a law whose exponent was the inverse of this value, approximately 0.45. In the United Kingdom, the System I specification updated in 1971 specified the display gamma to be 2.8, with a tolerance of ±0.3, and various other countries adopted gamma values between these two values. It was later established that the value of 2.8 was not a true measure of the actual value of the CRT gamma, which is generally now acknowledged to be close to 2.35,4 when driven from a sufficiently low cathode impedance.

However, the carefully measured CRT gamma established in the laboratory conditions of a fully darkened room, with even more careful adjustment of the critical 'lift', 'brightness' or 'black level' control, as it is variously called, to establish the true black level, bore little resemblance to the reality of the effective characteristics of a CRT in a domestic environment, where the viewer has control of black level and where that level in subjective terms is impaired by the reflection of ambient light from the screen.

4 Private discussion with Alan Roberts relating to his work at BBC Research.
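The relationship between the correction exponent and the display gamma described above can be sketched as simple power laws (ignoring the near-black behaviour and ambient-light effects also discussed above):

```python
def gamma_correct(v, display_gamma=2.2):
    """Source gamma correction: exponent is the inverse of the display gamma."""
    return v ** (1.0 / display_gamma)

def crt_eotf(v, display_gamma=2.2):
    """Idealised CRT transfer characteristic: a simple power law."""
    return v ** display_gamma

# End to end, correction followed by the display power law is transparent.
linear_in = 0.18
light_out = crt_eotf(gamma_correct(linear_in))
```

With a display gamma of 2.2, the correction exponent 1/2.2 is approximately 0.45, as stated in the NTSC specification.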

Furthermore, before the introduction of digital processing in cameras in the late 1980s, gamma correction was carried out by analogue circuit techniques, which varied significantly in their ability to accurately track the specified law, particularly at low luminance levels, where a compromise had to be reached between the required gain of the corrector and the limited signal-to-noise ratio of the camera signals.

In order to establish a satisfactory compromise between these various limitations, the vision control operator is situated in a critical lighting environment aimed at representing a reasonably critical domestic viewing environment and is provided with carefully set-up monitors (see Chapter 21) and the black level control of each camera, to allow him or her to provide well-set-up pictures which are matched on a camera-by-camera basis to the studio or outside broadcast (OB) output.

17.2.7 A Brief Description of Historic Encoding Techniques

17.2.7.1 The Luminance Signals

In Section 14.7, the advantages and disadvantages of constant and non-constant luminance systems were outlined. The advantages of the non-constant luminance system, in terms of achieving simplicity and therefore relatively low cost of implementation in receivers of the time, ensured that non-constant luminance systems were adopted universally. As described in detail in Section 14.3, the luminance signal is derived from the addition of the red, green and blue signals in appropriate proportions, relating to the contribution each camera spectral sensitivity, respectively, makes to the luminance response of the eye. These contributions will in turn be dependent upon the chromaticity of the primaries which are used to derive the camera spectral sensitivities.
In Worksheet 14, Matrix 6 provides the required coefficients in the formula for the luminance or Y signal, and by selecting the NTSC 'button', the coefficients for the three NTSC primaries are found to be, to three significant figures:

Y = 0.299R + 0.587G + 0.114B

However, as the non-constant luminance systems were adopted, it is the luma signal which is used, derived from the gamma-corrected versions of the RGB signals:

Y′ = 0.299R′ + 0.587G′ + 0.114B′

This is the formula for luma used in the NTSC specification, and it continued to be adopted later, not only by the SMPTE in the adoption of the new primaries described earlier but also by the PAL system, despite the formula no longer representing the luminance coefficients of the new primaries. This is not surprising; the composition of the luma signal does not affect the displayed colour, since the matrixing process on the colour difference signals in the receiver ensures the luma signal cancels out of the equations. The luma signal will not provide an accurate luminance signal for monochrome displays, but the effect of the inaccuracy will be marginal at worst. Most importantly, there is no requirement to change the receiver design to match the new coefficients of the contributions to the luma signal. The coefficients of the RGB signals required to achieve an accurate representation of luminance may be found for any set of primaries by entering their chromaticity coordinates into the Matrix 6 formula in Worksheet 14.
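The luma weighting can be expressed directly; a sketch, with the inputs taken as gamma-corrected R′, G′, B′ values in the range 0 to 1:

```python
def luma(r, g, b):
    """NTSC luma: the weighted sum of the gamma-corrected colour signals."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Peak white gives 1.0 by construction, since the three coefficients sum to unity; saturated yellow, for example, gives 0.886.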

17.2.7.2 The Colour Difference Signals

In Section 14.2, the basis for deriving the three signals used in all non-constant luminance colour systems was set out as the luma signal, Y′, and the two colour difference signals, R′ − Y′ and B′ − Y′. In this respect, the different colour systems differ only in the manner in which these colour difference signals are prepared for multiplexing.

In both the NTSC and the PAL systems, each colour difference signal is balance modulated onto a subcarrier of the same frequency but differing in phase by 90 degrees; the two modulated carriers are then summed, a technique known as quadrature modulation. The resulting combination is referred to as the chrominance signal. By using a reference subcarrier of precisely the same frequency in the receiver, the two original signals may be demodulated with no mutual interference between them. The subcarrier frequency is arranged to be at the upper end of the video spectrum and to be an odd multiple of half the line scanning frequency, in order to both minimise interference with the luma signal and reduce visibility on monochrome receivers.

The U′ and V′ Colour Difference Signals

The composite video signal is composed of the luma and chrominance video signals and the synchronisation signals. The frequency of the subcarrier is chosen in such a manner that at the fine spectral level, the components of the chrominance and luma signals interleave in order to minimise mutual interference. In order to avoid this combined signal extending beyond the signal level capacity of following equipment, and particularly the transmitter, the colour difference signals are attenuated prior to modulation, which limits the excursions of the composite signal in such a manner that the positive excursion beyond the peak white value of 1.0 is limited to 1.33 and the negative excursion below the black level is limited to −0.33.
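The quadrature modulation and synchronous demodulation described above can be sketched numerically; with a receiver reference carrier of precisely the same frequency and phase, each colour difference signal is recovered without crosstalk from the other (the sample counts here are arbitrary choices for the sketch):

```python
import math

SAMPLES, CYCLES = 1000, 10  # arbitrary: 100 samples per subcarrier cycle

def quadrature_modulate(u, v):
    """Sum U' and V' balance-modulated onto carriers 90 degrees apart."""
    w = 2.0 * math.pi * CYCLES / SAMPLES
    return [u * math.sin(w * k) + v * math.cos(w * k) for k in range(SAMPLES)]

def demodulate(chroma):
    """Synchronous detection: multiply by each reference carrier and average."""
    w = 2.0 * math.pi * CYCLES / SAMPLES
    u = 2.0 * sum(c * math.sin(w * k) for k, c in enumerate(chroma)) / SAMPLES
    v = 2.0 * sum(c * math.cos(w * k) for k, c in enumerate(chroma)) / SAMPLES
    return u, v
```

The orthogonality of the sine and cosine carriers over whole cycles is what allows the two signals to share a single subcarrier frequency.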
The attenuation factors calculated to meet the above criteria are as follows:

U′ = (B′ − Y′)/2.03 and V′ = (R′ − Y′)/1.14

The letters U′ and V′ are used as shorthand to describe the specified attenuated versions of the colour difference signals. Thus, in terms of the R′, G′, B′ signals:

U′ = (B′ − 0.299R′ − 0.587G′ − 0.114B′)/2.03 = −0.147R′ − 0.289G′ + 0.436B′

and

V′ = (R′ − 0.299R′ − 0.587G′ − 0.114B′)/1.14 = +0.615R′ − 0.515G′ − 0.100B′

The criteria for the attenuation factors are based upon the ability to accommodate the signal level excursions associated with the peak level colour bar signal. This signal, which has become ubiquitous, is an electronically generated video signal representing the display of eight vertical stripes, comprising all combinations of the R′G′B′ colour signals at levels of either 100% or 0%, that is, in luminance order: white, yellow, cyan, green, magenta, red, blue and black.

As a consequence of the quadrature modulation of the colour difference signals, the resulting subcarrier vector will vary in amplitude and phase in accordance with the colour represented by the U′, V′ signals and will fall to zero when a neutral is scanned by the camera. This property of the chrominance signal can therefore be portrayed by a vector diagram as illustrated in Figure 17.10.
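The 1.33 and −0.33 composite excursion limits quoted above can be checked by computing, for each 100% colour bar, the luma plus and minus the peak chrominance amplitude; a sketch:

```python
import math

def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

def u_v(r, g, b):
    """The attenuated colour difference signals U' and V'."""
    y = luma(r, g, b)
    return (b - y) / 2.03, (r - y) / 1.14

# All eight 100%/0% combinations of the colour bar signal.
bars = [(r, g, b) for r in (1, 0) for g in (1, 0) for b in (1, 0)]
excursions = []
for bar in bars:
    y = luma(*bar)
    amp = math.hypot(*u_v(*bar))  # peak subcarrier amplitude for this bar
    excursions += [y + amp, y - amp]
```

The maximum, reached on the yellow bar, is just over 1.33, and the minimum, on the blue bar, just under −0.33, which is how the 2.03 and 1.14 attenuation factors were arrived at.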

A Brief History of Colour in Television 321

[Vector diagram at the 100% level: Red 103.5°, Magenta 60.7°, Yellow 167.0°, Blue 347.0°, Green 240.7° and Cyan 283.5°, plotted on the U′ (horizontal) and V′ (vertical) axes, each spanning −0.8 to +0.8]

Figure 17.10 The chrominance U′, V′ vector diagram illustrating the peak primary and complementary colours for 100% colour bar signals.

Figure 17.10 illustrates how the vector phase represents the hue of the colour and the amplitude its saturation. The positive U′ axis is defined as the zero-degrees axis, and thus the vectors for the colour bar waveform described above appear with the amplitudes and phases shown in the diagram, as based upon the calculations and chart derived in Worksheet 14.

The NTSC I′ and Q′ Signals

Because of the limitation in the bandwidth available in the 525 line systems, the chrominance signal is limited to a frequency which is critical in terms of being close to the limit of the corresponding colour acuity of the eye, as discussed in Section 14.5. In consequence, the NTSC specification applies a further layer of sophistication to the manner in which the U′ and V′ signals are processed. It is beyond the scope of this book to explain the reasons why it is possible, under certain limitations, to allow one of the colour difference signals to have a larger bandwidth, and therefore improved exploitation of the colour acuity of the eye, than that enjoyed by the other signal, but it is a feature of balanced subcarrier modulation systems which enables this to be so.

Unfortunately, neither set of colours represented by the U′ and V′ signals aligns with the axes of colours of minimum acuity of the eye as identified in Section 14.5, that is, the yellowish green to purple axis. This axis of minimum colour acuity is called the Q axis, and the axis at 90 degrees to the Q axis is referred to as the I axis, along which the reddish orange to blueish cyan colours lie.
These colour axes are overlaid on the U′,V′ vector diagram as illustrated in Figure 17.11.

[The I and Q axes overlaid on the 100% colour bar vector diagram of Figure 17.10, with the Q axis at 33° to the U′ axis]

Figure 17.11 Illustrating the position of the I′ and Q′ axes.

The Q axis of minimum acuity is the critical axis for establishing which colours should be encoded with the lower bandwidth signals; the I axis colours are less critically selected, though as Figure 14.6 illustrates, this axis will be close to the axis of maximum colour acuity.

From the diagram it can be seen that the Q axis is at an angle of 33 degrees to the U′ axis. By using this angle and simple trigonometry, it is apparent that the colour vectors can be resolved into values of U′ and V′ which are aligned to the I and Q axes. The projection of a colour vector onto the Q′ axis is a combination of the components from the U′ and V′ axes, thus:

Q′ = V′ sin 33° + U′ cos 33° and similarly I′ = V′ cos 33° − U′ sin 33°

Since sin 33° = 0.545 and cos 33° = 0.839, substituting for U′ and V′ in these formulae enables us to define the three signals which form the composite signal of the NTSC specification in terms of the original gamma-corrected R′, G′, B′ signals:

Y′ = 0.299R′ + 0.587G′ + 0.114B′
I′ = 0.596R′ − 0.275G′ − 0.322B′
Q′ = 0.211R′ − 0.523G′ + 0.312B′

The bandwidth available for the I′, Q′ signals was dependent upon the characteristics of the radio frequency channel allocations, which differed on a country-by-country basis. In the

United States, where at that time the nominal luma bandwidth was 3.0 MHz, the bandwidths allocated to the I′, Q′ signals were 1.0 MHz and 0.34 MHz, respectively. In 625 line countries, where generally the PAL system was adopted, the channel allocations were usually more generous in terms of the bandwidth allocated to each channel, and it was therefore unnecessary to adopt the I′, Q′ approach in formulating the chrominance signal; thus, the U′, V′ signals were transmitted with the same bandwidth.

It must be emphasised that this section of the chapter has been a very cursory overview of the encoding systems; for more in-depth and broader descriptions of the systems, the reader is directed to the books already cited in Section 17.2.

17.2.7.3 Early Experience Using the NTSC System

On the basis of the principle that the technological aspects of reproducing colour would only be discussed in this book if that technology impinged upon the reproduced image, the reader would be correct in questioning why we have gone into such detail of the NTSC system in the previous sections of this chapter. If the practical implementations of the technology were working correctly, then indeed there should be no influence on the colour of the displayed image as a result of encoding and decoding the R′, G′, B′ signals. Unfortunately, in the early days of the operation of the NTSC colour system, this was not the case and the system was beset with problems.

Figures 17.10 and 17.11 highlight how the hue of the signal is dependent upon the phase of the subcarrier vector, so it is implicit that the phase of the reference subcarrier used to demodulate the chrominance signals is aligned correctly with the I′, Q′ axes during the demodulation process.
If this alignment is incorrect, the signals will be demodulated along other axes, which will cause the de-matrixed R′, G′, B′ signals to bear increasingly little resemblance to the original signals as the reference error angle increases.

Naturally, the system designers were aware of the requirement for accurate phasing, and a ‘colour burst’ comprising a short period of reference phase subcarrier is specified to be transmitted adjacent in time to the line synchronising signal, to enable the receiver to lock the phase of its reference subcarrier to this colour burst signal. However, the combination of processing circuits with poor phase stability and downstream equipment such as video tape recorders, which re-inserted the colour burst after processing the signal off tape, meant that colour TV sets had to be furnished with a phase control to enable the viewer to subjectively vary the phase to produce a displayed image of acceptable hues. The problem was exacerbated when the source of the signal changed, which often required a readjustment of the phase control. These changes of hue earned the system the epithet ‘Never Twice the Same Colour’, based upon a different interpretation of the letters NTSC, the result of the sophistication of the system being several years in advance of the technology required to implement it in a fully stable manner.

It was during this early period that the European broadcasters were planning to introduce colour systems, and this experience led directly to their introducing a phase reversal of the V′ signal on alternate lines in time sequence as transmitted, which, when two adjacent lines were added together, cancelled out any phase errors, that is, the PAL system. Eventually, the technology developed to the point where it was sufficiently stable to enable NTSC signals to be displayed continuously with the correct hue, but by that time, the PAL system had been introduced into a large number of the countries of the world.
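The line-averaging cancellation at the heart of PAL can be sketched numerically. In the fragment below (plain Python; an illustrative model with an assumed 10 degree reference error, not a receiver design), the V′ (R′ − Y′) component is phase reversed on alternate lines, both lines suffer the same reference phase error, the receiver re-inverts V′ on the reversed line, and the two lines are averaged.

```python
import math

def rotate(u, v, deg):
    """Rotate a (U', V') chrominance vector by a subcarrier phase error."""
    a = math.radians(deg)
    return u * math.cos(a) - v * math.sin(a), u * math.sin(a) + v * math.cos(a)

def pal_average(u, v, error_deg):
    """Model two adjacent PAL lines: (U', V') then (U', -V'), both received
    with the same reference phase error; the receiver re-inverts V' on the
    second line and averages the two lines."""
    u1, v1 = rotate(u, v, error_deg)    # normal line, phase error applied
    u2, v2 = rotate(u, -v, error_deg)   # phase-reversed line, same error
    v2 = -v2                            # receiver re-inverts V'
    return (u1 + u2) / 2, (v1 + v2) / 2

# The 100% red bar vector with a 10 degree reference phase error
u_avg, v_avg = pal_average(-0.147, 0.615, 10.0)
hue = math.degrees(math.atan2(v_avg, u_avg)) % 360
print(f"averaged hue = {hue:.1f} degrees")  # hue unchanged; amplitude falls by cos(10 deg)
```

For an error of φ the average reduces to (U′ cos φ, V′ cos φ): the hue error that plagued NTSC becomes a slight desaturation, which is far less objectionable to the eye.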

It was unfortunate that an initial weakness in the technology supporting the multiplex operation of the NTSC system led to the system acquiring a poor reputation at the outset, which detracted from what was otherwise a very sophisticated specification, evolved from a superbly organised development project with complex and diverse criteria to satisfy.

17.3 The Introduction of Digital Television

17.3.1 The Evolution of Digital Specifications

By the mid-1970s digital technology had developed sufficiently for digital processing to be introduced into some of the television equipment within the production workflow, and by the end of the decade, in Europe, Japan and the United States, attention was being focused on the requirements for specifying the digital coding format of composite television signals for use in the production centre. The importance of relating the digital sampling frequency to the line and frame structure of the system, and also to the subcarrier frequency, was recognised. For the NTSC system this did not represent a problem, and the SMPTE had draft recommendations for a digital encoding specification for NTSC composite signals ready by the end of the decade. However, the PAL subcarrier is related to the line frequency in a more complex manner, which prevented the derivation of a digital sampling frequency geometrically aligned to the scanning format of the image; this in turn, out of necessity, opened up the opportunity for the EBU to consider a digital component system as an alternative to a digital composite system.
By this era the two principal organisations for evolving television specifications, the EBU and the SMPTE, were aware of the advantages of developing specifications which shared as far as possible common parameters, since not only would this keep the cost of broadcast centre equipment down but it would also ease the standards conversion requirements on programme interchange between the two areas. The SMPTE therefore held back their draft composite system specification in order to allow time for the two organisations to work closely together to see whether it was possible to develop a common digital component specification that would satisfy the requirements of both the 525 and the 625 line systems.

During 1980 the two organisations each undertook extensive programmes of work, interspersed with meetings between them, to derive a specification which, in digital sampling terms, was common to both systems. By early 1981 proposals had evolved for adopting a common set of parameters, and during the Annual SMPTE Television Conference in San Francisco in February 1981, the SMPTE organised a joint set of tests, using three proposed sets of parameters, attended by interested parties (including the author) from around the globe. The work undertaken during this period5 may be favourably compared in extent to the original work of the NTSC in terms of identifying and agreeing the parameters for an international specification for interfacing equipment directly in the digital domain (Tooms, 1981).

During 1981 the Japan Broadcasting Corporation (NHK) had completed their tests of the proposed specification and concurred with the proposals, which led to the EBU and the SMPTE submitting their versions of the common specification to the ITU. This led to the specification being adopted as an

5 An excellent summary of this work is contained in the EBU/SMPTE document ‘Rec. 601 – the origins of the 4:2:2 DTV standard’ (Baron & Wood, 2005): http://tech.ebu.ch/docs/techreview/trev_304-rec601_wood.pdf

