Chapter 8 Headend Signal Processing

8.1 Introduction

This chapter deals with the electronic devices most commonly used in headends to format signals for transmission on a cable plant. These devices include signal processors, RF devices used to take television signals off the air or from an incoming cable and place them on the cable plant. Also included are modulators, miniature transmitters that accept baseband audio and video signals and convert them to an RF channel. Demodulators do the opposite, converting an RF signal back to baseband video and audio. Stereophonic encoders convert left- and right-channel audio into the composite stereo format used in television transmission. Earth station receivers convert signals coming from communications satellites to baseband audio and video. Often these are used with descramblers that deal with the proprietary scrambling format currently used with most analog satellite transmissions.

Block diagrams and descriptions of the operation of the equipment are presented, along with information regarding the proper application of the hardware. The block diagrams are representative of commonly available equipment. They are intended to illustrate how the functions are accomplished; they are not intended to represent particular pieces of equipment. Variations will be encountered in the field. Some equipment may have controls that behave differently from those shown here, as a consequence of differences in block diagrams.

Chapter 7 covered inputs to the headend from various RF sources. Chapter 9 will cover methods used to couple signals from the equipment covered in this chapter to the cable. You may wish to review the fundamental composition of baseband and RF signals in Chapters 2, 3 and 4.

8.2 Signal Processors

RF signal processors are normally used to transfer incoming VSB-AM signals from off-air antennas or incoming cables to the cable plant. The processor can change the channel (frequency) of the signal and the relative sound carrier level. The signal is bandpass filtered to remove any undesired adjacent channel power. Automatic gain control is employed to stabilize the output level. Optionally, a standby carrier can be substituted when the incoming signal goes away. Switching of IF sources can be done to maximize use of a channel.

8.2.1 Input Section

Figure 8.1 illustrates a typical block diagram of a processor. A number of variations are possible. Many modern processors are frequency agile. Differences between agile and nonagile processors are pointed out as appropriate. The incoming RF signal is supplied to a bandpass filter, BPF1, which is tuned to the incoming channel. This bandpass filter is typically of fairly low order and has only a little adjacent channel rejection. Its primary purpose is to reduce off-channel power to minimize distortion in the input circuitry. It, along with BPF2, provides image rejection, as described in Chapter 9.

image

Figure 8.1 Signal processor block diagram.

The signal passes through a variable attenuator, AT1, which is part of the delayed AGC circuit. “Delayed AGC” is a somewhat unfortunate term because it has nothing to do with a time delay. Rather, the term delayed refers to the fact that AT1 is at minimum attenuation (the front end is at maximum gain) at low signal levels. As the signal level increases (above a predetermined threshold), it becomes necessary to reduce its amplitude before it reaches RF amplifier A1 and mixer M1. Failure to reduce the amplitude would result in unacceptable distortion. At lower signal levels, the gain of A1 is needed in order to overcome the noise introduced in the mixer and following circuitry.

8.2.2 IF Section

The signal is converted to the intermediate frequency (IF) in mixer M1. The commonly used North American IF is 45.75 MHz for the picture carrier and 41.25 MHz for the sound carrier. Note that the spectrum is inverted at IF compared with the spectrum at RF, where the sound carrier is at a higher frequency than is the picture carrier. This is a consequence of using a high-side local oscillator (LO); that is, the frequency of the local oscillator is higher than the RF frequency of the signal being converted. Use of high-side oscillators helps control spurious responses from the mixing process.
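To make the spectrum inversion concrete, here is a short worked calculation using the standard channel 2 frequencies (a sketch in Python; the frequencies are standard, the code itself is only illustrative):

    # High-side LO conversion: the sound carrier, 4.5 MHz ABOVE the picture
    # carrier at RF, lands 4.5 MHz BELOW it at IF.
    PIC_IF = 45.75                # MHz, North American picture IF

    rf_picture = 55.25            # MHz, channel 2 picture carrier
    rf_sound = rf_picture + 4.5   # MHz, sound carrier at RF

    lo = rf_picture + PIC_IF      # high-side LO: 101.0 MHz
    if_picture = lo - rf_picture  # 45.75 MHz
    if_sound = lo - rf_sound      # 41.25 MHz, below the picture carrier

    print(lo, if_picture, if_sound)   # 101.0 45.75 41.25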

Following mixer M1 is another bandpass filter, BPF3, tuned to the IF band, normally 41 to 47 MHz for NTSC signals. Amplifier A2 provides gain to help overcome the insertion loss of later circuits. The IF AGC attenuator is AT2, which is adjusted such that the input to the AGC detector is constant regardless of the incoming signal level. Additional amplification may be provided, and at some point in the signal chain, a sample of the signal will be routed to the AGC detector. The output voltage of the AGC circuitry changes as the signal into the AGC detector increases. This change causes the attenuation of AT2 to increase, compensating for the increasing level of the incoming signal.

If the signal level increases sufficiently, the IF AGC voltage reaches the delay threshold, above which the gain of the input attenuator, AT1, is reduced to effect stabilization of the IF signal level. Depending on the design of the AGC system, the IF attenuator, AT2, may or may not continue to increase its attenuation. Usually, the best carrier-to-noise performance of the processor is achieved at input signal levels just below the delayed AGC threshold. If sufficient level is available, this is the preferred operating point for the processor.

Returning to the main IF signal path, as illustrated, the signal next is supplied to switch S2, which selects the standby carrier upon loss of input signal. The standby carrier was originally employed to prevent TVs from showing snow when a signal goes off the air. It is also employed if the output channel is used as a pilot for the distribution system. A pilot signal is used to set the gain of distribution systems, and if it disappears when a TV station goes off the air, the operating point of the distribution system will be upset.

Some processors include the ability to modulate the standby carrier with pseudosync pulses, a low-duty cycle 15.734-kHz square wave that simulates normal TV modulation. This is necessary to support distribution AGC systems that need sync tips to operate, rather than a CW signal.

Switch S2 is normally activated by a drop in the IF AGC voltage sufficient to indicate that the input signal level is lower than the minimum needed for useful performance. Following switch S2 is switch S3. This switch is optional and is used when it is desired to switch to an alternative IF signal. This is done, for example, when the primary signal leaves the air and another signal is to be substituted on the same channel: the cable operator may carry a local television station that goes off the air at night and substitute other programming on that channel. The alternative signal, processed in another modulator or processor, is supplied at the “Composite IF in” connector. Optionally, a separate AGC circuit may be used to normalize the level of the alternative IF signal. In this example, switch S3 would be connected to the IF AGC signal such that when the input signal drops, S3 is activated, selecting the alternative IF signal.

The picture and sound carriers are separated in the example processor. Aural IF, at a frequency of 41.25 MHz, is routed to a bandpass filter, BPF7, which eliminates any power associated with the picture signal or other channels. Following BPF7 is a limiter, A4, which functions identically to the limiter in an FM radio. Its output is constant regardless of the input signal level, within a reasonable level range. Following A4 is a manually adjusted attenuator used to adjust the sound carrier level. The sound carrier is normally adjusted to be −15 dB from the picture carrier level. A bandpass filter following AT3 removes harmonics of the sound carrier, which are introduced in the limiting process. The sound IF is then combined with the picture IF.

An alternative sound IF circuit uses a notch of variable depth to adjust the sound carrier level. This circuit works but cannot remove any residual AM on the sound carrier. Further, if not adjusted properly, it can add FM/AM noise, which hurts operation of many scrambling systems.

The visual signal at IF is supplied to BPF4, which provides the bulk of the selectivity of the processor. We describe the response of this filter later. After amplification, the visual IF is combined with the sound carrier.

8.2.3 Output Section

After combining, the composite IF signal is applied to output level control AT4. The location of this control in the signal chain allows it to simultaneously adjust the picture and sound carrier levels. After level control, the composite IF signal is split. One side of the splitter is supplied as the “Composite IF out” of the processor, to be used if needed as the alternative IF input to another processor or modulator. The other output of the splitter is amplified in A5 before being supplied to mixer M3, the output mixer. It is mixed with the output of local oscillator LO3. The signal is now on the assigned output channel. Bandpass filters BPF6 and BPF7 eliminate the image of the second conversion process and any spurious signals coming from mixer M3.

Amplifier A6 increases the signal level to the desired output level. Its performance is critical: any third-order distortion it introduces will generate frequencies too close to the output channel to be rejected by BPF7. The most common distortion produced is a third-order beat between the picture and sound carriers. This beat, appearing 4.5 MHz below the picture carrier, is derived from twice the picture carrier less the sound carrier. Another beat appears 4.5 MHz above the sound carrier but is lower in amplitude.
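The beat frequencies follow directly from the carrier arithmetic. With the sound carrier at $f_s = f_p + 4.5$ MHz:

$$2f_p - f_s = f_p - 4.5\ \text{MHz} \qquad\qquad 2f_s - f_p = f_s + 4.5\ \text{MHz}$$

Both products fall only 4.5 MHz outside the picture and sound carriers, in the adjacent channels, which is why they cannot be removed by BPF7 and must instead be controlled by keeping the distortion of A6 low.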

If the output is frequency agile, meaning it can be set to any of a number of channels, the output section shown is replaced with a dual conversion output section, with a first IF above the maximum frequency the processor can output. The IF is filtered to remove the image, and then it is down-converted to the output channel. Sometimes up-conversion is done in a unit external to the modulator; this technique, along with more information regarding up-conversion, is covered in Section 8.4.1. The mechanism that creates images is covered in Chapter 9.

8.2.4 Local Oscillators

Normally, LO1 and LO3 are fixed, crystal-controlled oscillators in a fixed channel processor. If the processor is agile on the output channel, LO3 and M3 will be replaced with a dual conversion output converter, which converts the IF signal to a high frequency (typically in the high hundreds of MHz), where it is filtered and down-converted to the output channel using yet another mixer and a phase locked local oscillator.

A processor with an agile input will have the input section replaced with an agile oscillator and tracking filter. In higher-quality processors, the front end will be double conversion, with the second IF at the normal TV IF band of 41–47 MHz.

If the processor is used as an on-channel processor, with the input and output on the same channel, then it is desirable that the output signal be phase coherent with the input signal to avoid beat products. To effect this phase coherency, local oscillator LO1 is not used for on-channel conversion. Rather, a sample of the output local oscillator, LO3, is supplied to mixer M1. Thus, the same local oscillator is used for both down- and upconversion. This may be shown to ensure phase coherency. Switch S1 is thrown to the alternative position for on-channel conversion.

If it is desired that the output be phase locked to an external reference signal at the frequency of the output channel, then LO3 will be replaced with a phase locked circuit.

8.2.5 Processor Frequency Response

Figure 8.2 illustrates the visual channel IF response of the processor, overlaid with the spectrum of signals that may be present. An important function of a processor is to provide off-channel rejection. This is important in cases where the processor is used to process signals coming in from another cable. For example, signals are sometimes routed by AM microwave or cable to a hub, where they are reprocessed and supplied to subscribers. In this case, adjacent channel power coming into the processor must be rejected.

image

Figure 8.2 Processor visual channel IF response.

Also, if a station is being received off the air along with an adjacent channel from another city, the adjacent channel must be rejected before the signal is supplied to the cable. It is accepted that, in order to be invisible, an undesired signal should be 60 dB below the level of a desired signal. For specification purposes, this is the desired adjacent channel rejection of a processor. In the case of incoming cable signals, adjacent signals will be at about the same level, so 60-dB adjacent channel rejection is adequate. If an adjacent channel has a greater signal level than the tuned channel, as could be the case in off-air processing, then more rejection may be required. This is considered to be handled best with external selectivity rather than to burden all processors with more rejection capability.

Figure 8.2 illustrates the ideal processor passband as a heavy line and a typical passband as a dashed line. The location of the relevant carriers is shown at IF, where the picture carrier is at a higher frequency than the sound carrier. Section 8.2.2 explained why the spectrum is inverted from that broadcast. Ideally, the visual IF passband would pass signals from the edge of the visual spectrum, 4.2 MHz from the picture carrier, to the edge of the flat portion of the vestigial sideband region, 0.75 MHz above the picture carrier at IF.

This ideal spectral response may be approximated if a surface acoustic wave (SAW) filter is used as the IF filter (BPF4 in Figure 8.1). This filter sets the overall visual IF response of the processor for all practical purposes. Some processors employ L-C (inductor-capacitor) filters for this application. Practical L-C filters are unable to achieve the ideal passband shape. Bandpass sections are combined with traps, which provide high rejection over a very narrow band of frequencies. These traps are placed on the adjacent carrier frequencies as shown. Most of the power in an NTSC signal is concentrated at the picture carrier, the sound carrier, and the color carrier. The power density at other frequencies in a channel is relatively low. Because of this characteristic, it is possible to specify 60-dB rejection even if that much rejection is not available at all frequencies. The specification of 60-dB rejection is interpreted as meaning that power in an equal-level TV signal appearing in an adjacent channel is more than 60 dB below the on-channel picture carrier.

Even when a SAW filter is employed, it may be necessary to supplement its response with some traps. Practical SAW filters exhibit ultimate rejection of 40 to 50 dB, against a requirement of up to 60 dB depending on the frequency. Traps may be needed to provide the last bit of attenuation.

L-C filters exhibit group delay. This means that signal components passing through the filter near either edge are delayed with respect to signal components passing through near the middle of the passband. The delay is a natural consequence of the attenuation of the stopband and is predicted from the mathematics describing the response of the filter. If the transition from passband to stopband occurs in a smaller frequency band, the group delay is greater.
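For reference, group delay is defined as the negative derivative of the filter's phase response with respect to angular frequency (the standard definition, not specific to any one filter):

$$\tau_g(\omega) = -\frac{d\phi(\omega)}{d\omega}$$

Near a steep band edge the phase must change rapidly with frequency, so $\tau_g$ peaks there; this is the mathematical statement of the tradeoff just described.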

Group delay causes components of the visual signal that occupy frequencies near the edge of the passband to propagate through the filter more slowly than do components near the center of the passband. This causes undesirable distortion of the video signal. To remedy this, another filter, called a delay equalizer, is incorporated with the bandpass filter. The delay equalizer must be designed for the bandpass filter with which it is to be used. It adds more delay to signals at the center of the passband, complementing the delay response of the bandpass filter.

A delay equalizer is necessary when an L-C filter is used as the bandpass filter, but not when a SAW element is used. Because of the way a bandpass filter is realized using a SAW device, group delay is inherently low.

8.2.6 Digital Off-Air Processing

To meet some early needs to carry off-air digital signals on cable plant, a simple system was used. An analog signal processor was retuned to widen the IF passband shown in Figure 8.2 such that it would pass the 8-VSB signal picked up off the air. The AGC was bypassed by placing the processor in the manual mode. Signals were up-converted to an off-air UHF frequency (which differs from the cable UHF frequency plan by 2 MHz, as explained in Chapter 9). This is not a particularly efficient method of carrying off-air signals, but it sufficed in the early days, when adequate spectrum was available on the cable plant.

As shown in Chapter 4, off-air digital signals use 8-VSB modulation with a transport stream payload data rate of 19.393 Mb/s. The data rate of a 64-QAM video signal on cable (after removing transmission overhead) is 26.97 Mb/s; for 256-QAM it is 38.811 Mb/s. You can see that a 256-QAM channel can accept two complete 8-VSB modulated channels. Furthermore, in many cases a cable operator may want to carry some but not all of the data in the off-air channel, freeing up more spectrum. When cable operators carry off-air digital channels it is important to minimize the spectrum required, so there is a strong need to combine two off-air digital channels into one cable channel. This can be done with a digital television processor.
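The arithmetic is worth checking because the fit is surprisingly tight. A quick computation, using the payload rates quoted above (the snippet is illustrative only):

    # Can one 256-QAM cable channel carry two complete 8-VSB payloads?
    VSB8   = 19.393    # Mb/s, ATSC 8-VSB transport payload
    QAM64  = 26.970    # Mb/s, 64-QAM cable payload, overhead removed
    QAM256 = 38.811    # Mb/s, 256-QAM cable payload, overhead removed

    print(QAM64  >= 2 * VSB8)            # False: 64-QAM holds only one
    print(QAM256 >= 2 * VSB8)            # True:  38.811 > 38.786
    print(round(QAM256 - 2 * VSB8, 3))   # 0.025 Mb/s to spare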

Figure 8.3 illustrates digital television processing, the digital equivalent to analog signal processing. A digital signal is recovered off-air in tuner #1 and supplied to the PSIP processor and remultiplexer (remux). As explained in Chapter 3, digital video signals include the Program and System Information Protocol (PSIP), a series of tables that, among other functions, tell the receiver what programs are included in the multiplex (i.e., the 6-MHz channel carrying several programs) and how to find them. These tables must be modified if some programs are removed from the multiplex and others are added from another off-air channel or other source.

image

Figure 8.3 Digital television processing.

In the configuration shown, a second program stream may be added from either another off-air channel via tuner #2 or from another signal source. This program transport stream is supplied to the PSIP processor and remux. Its PSIP tables are merged with those from the first program stream and multiplexed into a higher-bandwidth stream that is supplied to a 256-QAM modulator. The modulator may include up-conversion to the frequency the combined transport stream occupies on the cable plant, or the up-conversion may be separate. Since each program stream included in each of the two input streams shown may be transmitted in variable-bit-rate format (VBR — explained later), there are some issues that must be addressed relating to the amount of data that is carried in the combined transport stream. In the case of combining two off-air channels, there is enough bandwidth to carry two complete channels. In many cases, however, only the HD elements are desired, and the remultiplexing process is capable of dropping not only the other television services but other data services as well. This would leave two off-air HD signals, with perhaps enough remaining bandwidth for another SD service of the operator’s choosing. The remultiplexing process must also derive and recombine all of the PSIP information from the desired incoming sources.1

8.3 Modulation

Figure 8.4 illustrates the video baseband, IF processing, and RF processing found in a typical modulator. (Upcoming Figure 8.5 shows the audio circuitry.) Modulators are used with many more channels than are processors since the majority of channels are programmed from satellite-distributed programming or local sources. The output circuitry of the modulator is identical to that of a processor, but the front end is significantly different.

image

Figure 8.4 Modulator block diagram, baseband and RF.

image

Figure 8.5 Modulator block diagram, aural.

8.3.1 Video Switch and AGC

Video is introduced into the modulator in the top left-hand corner of Figure 8.4. An optional loop-through facility is shown on the input. Where a video source must be supplied to several places, it is possible to loop the video through from one piece of equipment to another by using a high input impedance in each. A termination is used only at the last piece of equipment in the signal chain. Section 8.9.1 covers this topic in more detail.

As shown, a video switch is optionally available to allow selection of an alternative video source. The switch may be activated manually, from an automation system, or automatically upon loss of the primary video.

Another option available with some modulators is a video AGC circuit. Typically, the amplitude of the sync tip is measured. If it is in error, attenuator AT1 is adjusted until the sync amplitude is normalized. It is assumed that the ratio of active video signal to the sync tip is correct. The proper level of video depends on the video content, which cannot be known at the modulator. However, the sync tip amplitude is well known. An error in sync tip amplitude is indicative of a gain error in video-processing circuitry. By correcting the sync tip amplitude, the active video amplitude is corrected.
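A minimal numeric sketch of the sync-tip strategy follows (idealized: sync is taken as the most negative excursion below blanking, and the names and nominal level are illustrative, not any product's interface):

    # Sync-tip video AGC: measure sync amplitude, scale the whole waveform
    # so sync returns to nominal; active video is corrected by the same gain.
    NOMINAL_SYNC = 0.286          # sync amplitude, volts, in a 1-V p-p signal

    def video_agc(samples):
        measured_sync = -min(samples)      # sync tip below 0-V blanking
        if measured_sync <= 0:
            return samples                 # no sync found; pass unchanged
        gain = NOMINAL_SYNC / measured_sync
        return [s * gain for s in samples]

    # A signal arriving low is restored; sync and video scale together.
    print(video_agc([-0.227, 0.0, 0.555]))  # sync tip back at -0.286 V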

Although the ideal correction strategy for the video AGC is to correct sync tip amplitude, a few modern modulators feature additional correction strategies. It is a fact of life that certain video signals handled in headends do not conform to the expected characteristics of amplitude and sync-to-video ratio. In order to deal with these signals, some video AGC circuits include a mode in which the amplitude from sync tip to peak video is normalized rather than the sync amplitude alone. This mode should not be used except under certain conditions. If the video level drops because the picture intentionally goes dark, this mode would cause the sync level to increase and the video to be stretched to white unnaturally. However, the mode might be useful for video sources that are always expected to include peak white but that are not always expected to be at the correct amplitude.

8.3.2 Sound Subcarrier Trap

Following the video AGC is a trap tuned to 4.5 MHz. In some cases, video is supplied combined with sound on a 4.5-MHz subcarrier. This is done, for example, when video is supplied via a terrestrial FM microwave link. The baseband video and subcarrier sound must be separated before modulation. The aural signal is coupled out through flag C (to Figure 8.5). The aural subcarrier is removed in FL1, a 4.5-MHz trap. Frequently, the trap is switchable because it does introduce some group delay. It may not be needed, and should not be used, if the incoming video doesn’t contain significant power at 4.5 MHz.

Under some conditions other than the existence of a 4.5-MHz subcarrier, use of the trap may be desirable. If the video contains significant power at 4.5 MHz owing to distortion or other processing, that power would appear as noise to the sound subcarrier. This can adversely affect the sound-signal-to-noise ratio, and in certain circumstances, could affect descrambling (see Chapter 21).

8.3.3 DC Restoration and Peak White Clip

From the trap, video is supplied to a dc restoration circuit. Video signals may be interfaced between pieces of equipment with dc restored or ac coupled. As video content changes, the average signal level, and hence the dc value of the signal, changes. However, since it is required to transmit modulated signals with a fixed sync tip amplitude, the dc level must be reestablished before the signal is modulated onto the RF carrier.

A number of circuits have been used for dc restoration. One of the simplest is shown in Figure 8.4. If any portion of the video signal tries to drop below ground, diode D1 conducts, charging C1 positive at the right end. Since the video signal voltage is all positive with respect to the sync tip, when the charging of C1 is complete, the sync tip will be at ground level (ignoring the diode drop in D1), with all other elements of the video signal being positive with respect to it. This dc level is maintained to the actual point of modulation. Resistor R2 is used to discharge C1 so that if the sync tip rises too high the capacitor will discharge until D1 again conducts on sync tips.
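The clamp's action can be imitated numerically. This is an idealized model (the diode drop is ignored and the R2-C1 discharge is a simple exponential; component values are illustrative):

    import math

    def dc_restore(video, fs=13.5e6, r2=1e6, c1=0.1e-6):
        # D1/C1/R2 clamp: whenever the signal tries to swing below ground,
        # C1 charges so the sync tip sits at 0 V; between sync tips the
        # offset leaks away through R2 with time constant R2 * C1.
        decay = math.exp(-1.0 / (fs * r2 * c1))   # per-sample discharge
        offset, out = 0.0, []
        for v in video:
            offset *= decay            # C1 slowly discharging through R2
            if v + offset < 0.0:       # D1 conducts: re-clamp to ground
                offset = -v
            out.append(v + offset)
        return out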

After the video signal is clamped, it is applied to a peak white clipper. Visual carrier modulation is strictly limited to 87.5% in the NTSC system. If the carrier modulation is too deep, the carrier will at some point be cut off, or very nearly so. Because of the way sound is demodulated (see Sections 8.5.5 and 9.5), carrier cutoff will cause intolerable sound buzz. In order to prevent this, many modulators include peak white clip circuits. If the video voltage exceeds voltage +V in Figure 8.4, then diode D2 conducts, holding the voltage on the video line constant.

8.3.4 Modulation Process

The clamped and clipped video signal is coupled to modulator M1. It is no mistake that the modulator is shown modeled as a mixer with the multiplication sign. Amplitude modulation is a multiplicative process, as is frequency mixing. The circuits used for both amplitude modulation and mixing are essentially identical and perform a multiplication process. Chapter 2 provides background on amplitude modulation theory.

Conventional amplitude modulation of a carrier by a sine wave is expressed by the equation


$$e(t) = (1 + M\cos\omega_m t)\cos\omega_c t = \cos\omega_c t + \frac{M}{2}\cos(\omega_c + \omega_m)t + \frac{M}{2}\cos(\omega_c - \omega_m)t \qquad (8.1)$$


where

ω_c = carrier frequency

ω_m = modulating frequency

M = percent modulation of the signal (though TV signals use a different expression for modulation)

From the first form, we can see that the envelope (represented by the term inside the parentheses) never goes negative. (We can show that if the 1+ term is omitted, the carrier is suppressed, which is done when we modulate the color subcarrier.) The second form is obtained by trigonometric expansion of the first. It shows that the frequency domain includes the carrier and two other frequencies, spaced above and below the carrier by the modulation frequency. These upper and lower modulation products are called sidebands. The information contained within the modulated carrier is carried redundantly by the upper and lower sidebands.

A video signal is much more complex than a sine wave. The single sinusoidal modulation term ω_m t is replaced by a sum of sine waves of different frequencies, which, taken together, define the video waveform (see Chapter 2). The resulting modulated signal has a series of sidebands above and below the carrier. One set may be removed without losing the information contained in the signal.
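As a worked example, take the color subcarrier, 3.58 MHz, modulating the 45.75-MHz picture carrier at IF:

$$f_c \pm f_m = 45.75 \pm 3.58 = 42.17\ \text{MHz and } 49.33\ \text{MHz}$$

The lower product, at 42.17 MHz, lies in the retained sideband region; the upper product, at 49.33 MHz, is the color subcarrier image that the vestigial sideband filter of Section 8.3.7 must remove.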

8.3.5 IF Processing

From modulator M1, the signal is often looped out for possible external signal processing. (The term loop out is applied to the routing of a signal from one piece of equipment to another and then back into the first piece for further processing.) Many common scrambling systems (described in Chapter 21) need access to the modulated signal at this point. They attenuate the picture carrier during the horizontal and, usually, the vertical blanking intervals by attenuating the IF signal. Though some older systems attenuated the sync by applying a sine wave attenuation, more modern systems use so-called gated sync suppression, achieved by switching an attenuator into the signal path. The switching operation introduces high-frequency modulation components, which will interfere with adjacent channel signals if not eliminated. By arranging FL2 so that it follows the addition of the scrambling signal, the undesired modulation components are removed.

After scrambling (when used), the signal returns to the modulator and is supplied to vestigial sideband filter FL2. Vestigial sideband modulation and filtering were described in Chapter 2, and the precise shape of a VSB filter for NTSC is described later. It is used to eliminate one set of modulation sidebands. Modulation sidebands more than 0.75 MHz from the picture carrier are removed from one side of the picture carrier. If the signal has been scrambled using gated sync suppression scrambling, then FL2 will also filter those modulation sidebands.

After filtering, the IF signal is combined with the sound carrier signal. The combined signal may be looped out, at the composite IF loop, to provide for processing by an older sine wave sync suppression scrambling system. It is likely that the signal will be split, with a portion supplied to the composite IF out on the rear panel of the modulator. This signal is used with other processors or modulators in IF switching applications, as described earlier.

The composite (visual and aural) IF signal is up-converted to the RF channel in mixer M3, driven by local oscillator LO3. Attenuator AT5 controls the output level of the composite visual and aural signals. Remaining processing is identical to that described in the signal processor section of this chapter. The signal is filtered, amplified, and filtered again. A phase locking system can be used to phase lock the carrier to another source, and is described later.

8.3.6 Aural Modulation

Figure 8.5 illustrates the aural modulation process. Baseband audio is supplied to an optional switch S2. If video switching (S1 of Figure 8.4) is used, then audio switching is likely required, too. Normally, the audio switch will be driven by the video switch: when the video switch activates, it activates the audio switch. Sometimes a third audio input, intended for emergency override, is provided, though it is not shown in the figure. Frequency response of the override input may be limited to voice band only, to reduce any noise that may appear on an audio source outside the roughly 300–3,000-Hz band needed for understanding voice communication.

Preemphasis

Preemphasis is employed, as in FM radio broadcasting, to reduce the noise introduced by transmission. An FM demodulated signal can be shown to have rising noise density with frequency. In order to compensate, preemphasis is used. Above a certain frequency, the gain in the modulator increases 6 dB per octave (doubling of frequency). The demodulator employs deemphasis, which attenuates the signal by the same amount. In the process, the noise spectrum is flattened, and the total noise heard by the listener is reduced.

In North America, the standard emphasis curve followed is called the 75-microsecond emphasis curve because the break frequency, at which emphasis is 3 dB, is given by a 75-µs RC break:


$$E(j\omega) = 1 + j\omega \cdot 75 \times 10^{-6} \qquad (8.2)$$


where

j = imaginary operator

ω = angular frequency (ω = 2πf)

The corresponding 3-dB frequency, at which the imaginary component is equal to 1, is 2.122 kHz. In Europe and elsewhere, it is common practice to use an emphasis time constant of 50 µs, corresponding to a 3-dB break frequency of 3.183 kHz. This affords slightly less reduction in noise, all else being equal, while providing slightly more protection against overmodulation.

The issue of overmodulation with constant amplitude signals arises in the following manner. Consider that you are normalizing the audio level of a program using any of several accepted criteria. You should not set program level based on a preemphasized signal because that is not the way a listener hears the program. On the other hand, the program is transmitted with preemphasis. If the dominant program material occurs at a frequency significantly above the preemphasis break frequency, then overmodulation is possible with normal program level audio. Fortunately, most program material tends to be dominated by frequencies below either the 75-µs or the 50-µs break frequencies. However, sometimes this is not the case, and problems can develop. The 50-µs break frequency provides somewhat more immunity. In order to compensate for the reduction in signal-to-noise ratio caused by the higher emphasis break frequency, countries using 50-µs preemphasis normally employ wider deviation of the sound carrier (50 kHz compared to 25 kHz for 75-µs countries).
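The numbers behind this comparison are easy to reproduce. The sketch below (illustrative Python) computes the break frequency and the boost of the 1 + jωτ network for both time constants:

    import math

    def emphasis_db(f_hz, tau):
        # Boost of a 1 + j*omega*tau preemphasis network at frequency f,
        # relative to its flat low-frequency response.
        w = 2 * math.pi * f_hz
        return 10 * math.log10(1 + (w * tau) ** 2)

    for tau in (75e-6, 50e-6):
        f3 = 1 / (2 * math.pi * tau)          # 3-dB break frequency
        print(f"tau = {tau*1e6:.0f} us: break = {f3:.0f} Hz, "
              f"boost at 10 kHz = {emphasis_db(10e3, tau):.1f} dB")
    # tau = 75 us: break = 2122 Hz, boost at 10 kHz = 13.7 dB
    # tau = 50 us: break = 3183 Hz, boost at 10 kHz = 10.4 dB

The roughly 3-dB difference in boost at high audio frequencies is the extra overmodulation protection the 50-µs curve provides.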

When stereo audio is used, the preemphasis is more complex. The preemphasis circuit in the audio baseband circuits shown is disabled so that the input is flat. Stereo audio is covered in Chapter 2 and Section 8.6.

Aural Modulation Limit, or AGC

An optional modulation limiter may be provided. This circuit monitors the audio signal level, and if the level increases such that the deviation would be excessive, then the modulation limiter temporarily turns down the gain of the signal path by adjusting AT2 of Figure 8.5. After the overmodulation ceases to exist, the gain is returned to its previous value over some length of time, which reduces the audible impact of the change.

Some modern modulators include a second mode of operation, in which the audio level is measured and AT2 is adjusted to provide for a consistent sound level regardless of the incoming modulation. This mode would not be appropriate for a number of audio sources since it will make low-volume sounds the same level as high-volume sounds. (A whisper and a shout would be the same loudness.) It might be appropriate, though, if the program audio consists of normal speech only.

Some modern modulators also include audio clipping, in which audio that would result in significant overdeviation has its peaks clipped, eliminating the overdeviation. In the United States, the FCC requires that a television aural transmitter be able to deviate to at least ±40 kHz, so the appropriate clip point would be above this. The clipping process produces considerable distortion, so operation into the clipping region should never be done intentionally. However, it is a useful way to avoid gross overdeviation, which could cause interference into video or, worse, problems with scrambling recovery (see Chapter 21). It may be applied at all times, so long as the audio level is carefully set to avoid hitting the clipping level during normal programming.

Modulation Process

Audio is frequency modulated onto an aural carrier. Frequency modulation is usually accomplished by applying the audio directly to an oscillator, such that the frequency of oscillation depends in part on the instantaneous modulating voltage. As shown in Figure 8.5, the modulation is applied to LO2, a 4.5-MHz oscillator (other frequencies may be used with conversion).

Some method must be employed to stabilize the center frequency of the oscillator. Though not the only means to accomplish this, a commonly used stabilization method is a phase locked loop (PLL). It generates a frequency correction to the oscillator if the center frequency wanders. The bandwidth of the PLL is very low to avoid having the loop “fight” the modulation, which is intentionally forcing the frequency of LO2 to deviate instantaneously.
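Numerically, direct FM amounts to integrating the instantaneous frequency into phase. A toy sketch (parameter values illustrative, not a description of any particular hardware):

    import math

    def direct_fm(audio, fs, f_center=4.5e6, kf=25e3):
        # kf is the deviation sensitivity in Hz per volt of audio.
        phase, out = 0.0, []
        for v in audio:
            f_inst = f_center + kf * v            # instantaneous frequency
            phase += 2 * math.pi * f_inst / fs    # integrate to get phase
            out.append(math.cos(phase))
        return out

The PLL described above corrects only the long-term average of the center frequency; its loop bandwidth is kept far below the lowest audio frequency so the intentional deviation passes unopposed.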

Audio from Other Sources

If audio is supplied already modulated on a 4.5-MHz subcarrier, then it is introduced into the audio section via flag C (see also Figure 8.4). At this point, the aural subcarrier may be combined with the baseband video, so bandpass filter FL3 is used to separate out only the sound subcarrier. It is passed to one input of switch S3, which selects either the audio from the modulated oscillator, LO2, or the external signal. Switch S3 may be actuated permanently by manual selection of the audio source, or it may be activated by some other event.

A common situation is to activate the switch upon loss of main video. Suppose a modulator ordinarily handles a signal delivered at baseband from a television station. The baseband link may be backed up by an RF link that picks up the off-air signal and demodulates it. The primary audio is delivered in baseband format, and so is applied to the main input to switch S2. The backup demodulator supplies sound on a 4.5-MHz subcarrier, combined with the video. If the video switch (S1 of Figure 8.4) detects loss of video, it activates to couple in alternative video from the backup demodulator. At the same time, it drives the 4.5-MHz switch (S3 of Figure 8.5) to select the external aural subcarrier of flag C.

Following the switch is a limiter whose purpose is to ensure that the sound carrier level is constant regardless of the source. After the limiter is a manual sound carrier level adjustment, AT4, which sets the level of the sound carrier relative to the visual carrier level. Finally, the aural carrier is up-converted from 4.5 MHz to the sound IF at 41.25 MHz using the same local oscillator as was used to generate the picture IF (LO1 of Figure 8.4). The aural IF is coupled at flag D back to Figure 8.4, where it is combined with the visual IF signal.

8.3.7 Vestigial Sideband Filtering

As described earlier, the majority of one sideband is removed by a device known as a vestigial sideband (VSB) filter (FL2 of Figure 8.4). Figure 8.6 illustrates the operation of the vestigial sideband filter in an NTSC modulator. Recall that the spectrum is shown at IF, intermediate frequency, where the picture carrier is higher in frequency than the sound carrier. This is reversed at RF, by the use of a high-side local oscillator.

image

Figure 8.6 Modulator vestigial sideband filter response.

The VSB filter has a response that is flat from 0.75 MHz above the picture carrier to at least 4.2 MHz below. The reason for the cutoff at 4.2 (really 4.18) MHz is that this is the maximum baseband video frequency that is handled in the NTSC transmitted signal. The color carrier and all its sidebands are contained within the passband of the VSB filter.

The high (vestigial) side cutoff is rather carefully defined. The response should be flat to 0.75 MHz above the picture carrier, then drop to zero at the band edge, 0.5 MHz higher. This frequency, 1.25 MHz above the picture carrier, is at the channel edge, 47 MHz at IF.

No filter has truly zero stopband response, so it must be decided how much attenuation is “enough” at the band edge and frequencies farther out. In the section on processors, we argued that the power spilling into an adjacent channel should be at least 60 dB down from the picture carrier. This number is applied to the VSB filter also. The requirement does not mean that the filter must exhibit a stopband response of −60 dB because the power of the modulated TV waveform is not constant throughout the channel. Most of the power is concentrated at the picture carrier. Typically, other carriers are −15 dB or lower with respect to the picture carrier. The only frequency removed by the VSB filter that is likely to be that high is the color subcarrier image at about 49.3 MHz. Thus, if the VSB filter has 45-dB rejection, that is good enough. Even if the filter has only 40-dB rejection, that may be adequate given a small attenuation by other filters.

In some modulators, the sound signal may be supplied through the VSB filter, in which case a slightly wider response than shown is needed. However, with the modulator architecture of Figures 8.4 and 8.5, this would not be necessary.

The critical requirements for the VSB filter made it an early candidate for use of a surface acoustic wave (SAW) device, and in fact, this is the first known application of SAW devices in the cable television industry. SAW devices exhibit sharp cutoff regions, good temperature stability, and low group delay. However, they do exhibit higher propagation delay than do conventional L-C filters. Propagation delay is the absolute delay of a signal as it propagates through the device. The propagation delay is related to the construction of the SAW device and has decreased somewhat with improvements in the state of the art. All else being equal, the sharper the skirts (the narrower the rolloff region), the greater will be the propagation delay. Delay times of 3 to 5 microseconds tend to be typical of SAW filters built for this application. The propagation delay can be an issue when events must be synchronized between those transmitted on the picture and sound carriers. Such a situation develops in the commonly used scrambling systems where synchronization data is transmitted on the sound carrier (see Chapter 21).

A SAW filter utilizes the propagation of an RF signal on the surface of a suitable material, such as a quartz or lithium niobate crystal. A pair of transducers is built onto the surface of the crystal, using the same techniques as are used to deposit metalization on the surface of integrated circuits. One transducer launches the signal into the crystal, where it propagates as a mechanical wave on the surface. The other transducer converts the wave back to an electrical signal. The shape of the transducers determines the passband of the filter. The loss of a SAW filter tends to be rather high, but the skirts can be made very steep without incurring the group delay that accompanies a sharp L-C filter.

Digital Modulators

Digital modulation is covered in Chapter 4. In general, digital modulators are simpler than their analog cousins. Interfaces with digital modulators are shown later. One issue of concern is where scrambling is added in a digital transmission stream. We use scrambling in the cable TV sense of the word, “to render the signal unintelligible to those who are not authorized to receive the signal.” In some headend systems, there is an external scrambler that adds scrambling prior to the modulator. In other headend systems, the scrambling is an integral part of the modulator. In some cases, the modulator includes the up-converter to the final channel assignment; in other cases, the up-converter will be external to the modulator.

8.4 Phase Locking of Carriers

The output carriers of both processors and modulators may be phase locked to external sources. Two primary reasons exist for phase locking the output. If the frequency is used for local off-air broadcasting, then there is a high probability that off-air signals will leak into some television receivers or, much less likely, into the cable plant itself. This leakage, or ingress, will produce a visible moving pattern on the screen if the on-cable picture carrier is not exactly equal to the ingress frequency. The industry commonly calls this pattern a beat. The processor or modulator output can be phase locked to the off-air signal to reduce the visibility of the ingress. This works because the phase locking process forces the frequency of the signal on the cable to be exactly equal to that of the ingress.

The other application for phase locking is to lock all picture carriers on the cable to a master reference frequency. Such techniques can be shown to reduce the interference caused by distortion in the distribution plant. Two techniques have seen fairly widespread use in North America. They are the harmonically related carrier (HRC) technique, in which all picture carriers are locked to harmonics of a master reference oscillator, and the incrementally related carrier (IRC) technique, in which all frequencies supplied from the master generator are offset 1.25 MHz from those harmonics. Neither HRC nor IRC techniques tend to be used as much today as in the past, but they will still be found in some locations. Chapter 9 describes the standard HRC and IRC frequency allocation plans.

Yet a third possible requirement for phase locking outputs is described in Chapter 9, in the discussion of node-specific programming. More than one program may exist in a headend on the same channel, and if leakage develops from one to the other, phase locking the two picture carriers will lessen the visibility of the beat. This is true if the subject program is analog. If digital, phase locking is probably not necessary.

The term phase lock refers to a technique of forcing one RF carrier to maintain a constant phase relationship with a second carrier. The second carrier is called the reference carrier because it is a reference to which the first is locked. If the two carriers have a constant phase relationship to each other, it follows that they are identically at the same frequency. Indeed, a common technique for verifying a phase locked condition is to display one carrier on an oscilloscope that is triggered from the other. The displayed waveform should appear stable on the screen. Note that it is not necessary that the two carriers have a particular phase relationship to each other (though some applications require this), and in general, the phase relation might drift over time.

8.4.1 Phase Locked Circuits and Agile Outputs for Processors and Modulators

Figure 8.7 illustrates one possible block diagram of a phase locked circuit. This can be used with either Figure 8.1 (processor) or Figure 8.4 (modulator). In either case, the output local oscillator, LO3, is replaced by the circuit of Figure 8.7. Mixer M3 is the mixer M3 of Figures 8.1 and 8.4.

image

Figure 8.7 Phase lock option for processor or modulator.

The reference frequency, a sample either of the off-air signal or from the comb generator (for HRC and IRC applications) is selected in bandpass filter FL10. It is mixed to the common intermediate frequency of 45.75 MHz in mixer M5. The local oscillator used in this mixing process must be LO3, the same local oscillator used to mix the output channel.

After conversion to IF, the signal is filtered in FL12 before being supplied to limiter A11. This limiter must be used because, if the reference source is an off-air signal, it will have modulation on it, and the modulation must not interfere with the mixing process. From the limiter, the signal becomes one input to phase detector M4. The other input to M4 is a sample of the IF signal developed in the modulator or processor. This signal has been filtered (if needed) and limited in FL11 and A10.

Again, a multiplier is shown as the phase detector. A multiplier makes a good phase detector, just as it makes an ideal modulator or mixer. If the reference signal (at frequency ωr) and the variable signal (at ωv) are multiplied, the result is expressed as follows:


$$e_o = A\cos(\omega_r t) \times B\cos(\omega_v t + \theta)$$
$$= \frac{AB}{2}\cos[(\omega_r + \omega_v)t + \theta] + \frac{AB}{2}\cos[(\omega_r - \omega_v)t - \theta] \qquad (8.3)$$


where


ω_r = frequency of the reference signal

ω_v = frequency of the variable (IF sample) signal

θ = phase difference between the two signals

A, B = amplitudes of the two signals


This expansion uses the relation of Equation (2.5). The first term in the second line represents the second harmonic of the carrier frequency and is removed by the loop filter. The second term represents the phase difference between the two carriers, θ.

The output of the phase detector is supplied to the loop filter, which rejects the second harmonic and performs required filtering on the error voltage representing θ. These functions often include providing extremely high gain at dc, which forces cos θ to be near zero. The phase locked loop will do whatever is necessary to the frequency of LO3 to keep θ constant. Note that, in order for the cosine of the phase angle to go to zero, the two signals at the mixer will be held in quadrature, that is, with a 90° phase difference. This phase relationship is typical of multiplicative phase detectors.

If it is not intuitive that the output frequency is phase locked to (that is, the same frequency as) the reference, you are encouraged to write equations for the output frequency in terms of the frequency of the IF, of LO3, and of the reference frequency. Assume that the frequencies of the two inputs to M4 are equal (a condition forced by the phase locked loop), and solve for the output frequency in terms of the reference frequency.
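Carrying out that exercise: the output mixer gives f_out = f_LO3 − f_IF, while the reference path delivers the reference converted to IF at f_LO3 − f_ref. The loop forces the two phase-detector inputs to the same frequency:

$$f_{LO3} - f_{ref} = f_{IF} = f_{LO3} - f_{out} \quad\Longrightarrow\quad f_{out} = f_{ref}$$

Note that f_LO3 drops out of the result, which is why its exact value (and drift) does not matter once lock is achieved.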

Some modern agile modulators offer an alternative way to achieve phase lock between channels. If all relevant frequencies in all modulators are synthesized from the same reference frequency, then phase lock is accomplished.

Comb Generation Technique

Though not used extensively today, it is instructive to consider briefly how a comb of frequencies might be generated. The comb is used for phase locking processors and modulators in HRC and IRC frequency plans. Figure 8.8 illustrates an HRC comb generator in part a and the modification to make it into an IRC generator in part b.

image

Figure 8.8 Comb generator block diagrams. (a) HRC comb generator. (b) IRC comb generator.

HRC Comb Generation

The HRC comb is simply all the useful harmonics of a 6-MHz (for NTSC) generator. A simple way to produce the comb is illustrated in Figure 8.8(a). A precision 6.0003-MHz oscillator is used to generate the basic frequency. (The nominal frequency is 6.000 MHz. See Chapter 9 for an explanation of why an oscillator at 6.0003 MHz is used.) After amplification, the signal is supplied to a fast recovery diode and to an inductor. When the signal from A10 goes positive, D10 conducts, setting up current in L10. When the signal from A10 begins to go negative, D10 becomes back biased and cannot conduct. The input impedance to A11 should be high so that no significant current exists in its input. When D10 is back biased, a current path to complete the circuit through L10 does not exist. If a closed circuit does not exist, then the current through L10 must stop. However, if you try to instantaneously interrupt the current in an inductor, the voltage across the inductor must go to infinity (less in a practical case, due to losses in the circuit). This produces a voltage spike that approximates an impulse. An impulse may be shown to contain all harmonics of the fundamental frequency at equal amplitudes. After amplification in A11, the resultant may be applied to filters that remove unneeded harmonics. The small spectrum diagram near the output illustrates the comb.

IRC Comb

Figure 8.8(b) illustrates how the IRC comb is developed from the HRC comb of 6.000 MHz. The IRC comb is the HRC comb except offset up in frequency by 1.25 MHz. This is done by band limiting the HRC comb and using it as modulation on a carrier positioned at a picture carrier frequency of about one-half the maximum frequency. The effect is to generate a carrier at the center of the comb spectrum at one of the comb frequencies. The modulation sidebands are at the other frequencies required of the HRC comb. A double balanced mixer is used for the translation. Since it will try to suppress the frequency of LO11, a mixer balance adjustment is used, and the mixer is intentionally unbalanced to the translation oscillator, enough to cause the translation carrier signal to bleed through at the same level as the sidebands. The HRC comb elements should be completely suppressed, but if they are suppressed by only a few decibels, the phase locked loop in the processor or modulator should not have any trouble recovering the correct element.
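The relationship between the two combs reduces to simple arithmetic; a short enumeration (6.0003-MHz fundamental per the text; the code is only illustrative) shows the pattern:

    F0 = 6.0003   # MHz, comb fundamental (see Chapter 9 for the 300-Hz offset)

    hrc = [n * F0 for n in range(9, 14)]   # a few useful harmonics
    irc = [f + 1.25 for f in hrc]          # IRC comb = HRC comb moved up 1.25 MHz

    for h, i in zip(hrc, irc):
        print(f"HRC {h:8.4f} MHz    IRC {i:8.4f} MHz")
    # First row: HRC 54.0027 MHz, IRC 55.2527 MHz
    # (compare the standard channel 2 picture carrier at 55.25 MHz)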

Dual Conversion

Figure 8.9 illustrates two features of modern headend processing. One is the use of an agile up-converter to allow the output of a modulator (analog or digital) or CMTS to be placed on any frequency. This generally requires a dual-conversion up-converter to provide low spurious outputs. The genesis of the spurious-output problem is with the output filter after the final conversion. A simple example will show why this is so. Refer to Figure 8.4 for a moment; then we shall return to Figure 8.9.

image

Figure 8.9 Dual-conversion output with optional phase lock to a 6-MHz reference.

In Figure 8.4, suppose it was desired to make the modulator frequency agile so that the output could be set to any channel, maybe within a range of several hundred megahertz. We can certainly operate the output local oscillator, LO3, on any frequency (using a phase locked loop to control its frequency), and this will allow us to set the output frequency to any frequency we want. The problem is that we have filters FL5 and FL6, which are tuned to a single output frequency. We will have to either eliminate these filters or make them electrically tunable if we are to allow the output to be on any frequency over a wide range.

But there were some very good reasons as to why we have these filters and why they are only one channel wide. Consider that we want to put the output on channel 13, with the picture carrier at 211.25 MHz (see Appendix A). With a picture IF of 45.75 MHz, this means that the frequency of LO3 will be 45.75 + 211.25 = 257 MHz. Mixer M3 produces the difference frequency between LO3 and the IF, 211.25 MHz. But the mixer is quite happy also to produce the sum frequency, 257 + 45.75 = 302.75 MHz, the image of the conversion. We can’t let this frequency out of the modulator because it will interfere with channel 37 (and the sound carrier with channel 36). That is one function of filters FL5 and FL6 in Figure 8.4.

So if we want an agile output, we have to modify the block diagram of Figure 8.4. We can use a dual-conversion system such as in Figure 8.9. Like components in Figures 8.4 and 8.9 have the same reference designations, so you can see how they perform similar functions. Video is supplied to all the circuitry shown at the top of Figure 8.9, which we have referred to here simply as Conditioning. The actual modulation is done in M1, just as in Figure 8.4. M1 is driven by the IF signal from local oscillator LO1. For phase lock purposes in Figure 8.9, LO1 will need to be phase locked; but it can be simply crystal controlled if phase locking is not to be done (or if it is to be done using the method shown earlier).

The IF signal passes through vestigial sideband filter FL2, just as in Figure 8.4. (Note that this diagram might apply to a digital modulator or to a CMTS, in which case the filter function is still there but is not a vestigial sideband shape and would not be called that.) For simplicity, we’ll omit the sound carrier and the IF loop, though they are present. Now what we are going to do that is different from Figure 8.4 is to pass the signal to a dual-conversion agile up-converter. This one is phase locked to an external 6-MHz source, although that is not necessary. The two local oscillators involved in the dual-conversion process will probably be phase locked to a reference, but the reference may be internal to the up-converter rather than external as shown.

The first local oscillator will up-convert the IF to some high frequency. For illustration, assume it converts to a second IF of 927.25 MHz, a somewhat arbitrary frequency. The first local oscillator, LO3a, thus operates at 45.75 + 927.25 = 973 MHz. The second IF filter, FL5, is centered on the 927.25-MHz second IF and has a width of at least one channel. Its main job is to remove the image of the first conversion, at 973 + 45.75 = 1018.75 MHz. We don’t show amplification, but there may be some amplification at this second IF frequency before the signal is converted to the final output frequency in mixer M3b, driven by local oscillator LO3b. If we want the output frequency to be 211.25 MHz for channel 13, the frequency of LO3b will be 927.25 + 211.25 = 1138.5 MHz. Notice that there is still an image of the second IF, but the image frequency is 1138.5 + 927.25 = 2065.75 MHz, well above any frequency we are interested in on the cable plant, and so far from the output frequency that it is easy to filter.
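All of this frequency planning reduces to sums and differences; the sketch below (second IF of 927.25 MHz assumed, as above) reproduces the numbers in the text:

    PIC_IF = 45.75   # MHz, first (television) IF
    IF2 = 927.25     # MHz, second IF, chosen above the output band

    def dual_conversion_plan(f_out):
        lo_a = PIC_IF + IF2       # first LO, fixed: 973.0 MHz
        image1 = lo_a + PIC_IF    # first-conversion image: FL5's job to remove
        lo_b = IF2 + f_out        # second LO, tuned per output channel
        image2 = lo_b + IF2       # second-conversion image, far out of band
        return lo_a, image1, lo_b, image2

    print(dual_conversion_plan(211.25))   # (973.0, 1018.75, 1138.5, 2065.75)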

There is a need for a final filter, however. We show a tracking filter at FL6, which is located after any amplification we provide at the output (amplification not shown). The reason is that there is some noise generated in the mixing process and in amplifiers. If we don’t filter this noise, it will be combined with all the other signals and could harm the output C/N of the headend. Thus, a tracking filter can be provided to eliminate this noise. As a practical matter, no tracking filter can simultaneously provide excellent adjacent channel attenuation and flat response in-band while being tuned over a wide range. Thus, some compromises must be made in its ability to attenuate noise in close-in channels versus the flatness and tunability of the filter.

We have shown an alternative way to provide phase lock in Figure 8.9. This uses a reference, in this case 6 MHz, to phase lock all oscillators. The 6-MHz reference is divided, in frequency dividers, to frequencies that allow it to serve as a reference for all three oscillators. If phase locking is not needed, then there is no need to phase lock the first IF oscillator, LO1. Local oscillator LO3a can either be phase locked or crystal controlled, at the manufacturer’s discretion. Local oscillator LO3b will typically be phase locked in any case, since it must be changed in frequency as you set the up-converter to different channels.

Yet another consideration with phase locked oscillators is phase noise produced by the oscillator. For several reasons, the phase noise is typically much greater from a phase locked oscillator than it is from a crystal-controlled oscillator. Phase noise can hurt the bit error rate of a digital signal, as explained in Chapter 4. For analog signals, phase noise can be converted to amplitude noise in the receiver’s Nyquist slope filter and can damage the signal-to-noise ratio of the demodulated signal. Considerable effort and expense go into minimizing the phase noise of oscillators in agile up-converters.

8.5 Demodulation

A third type of RF product used in the headend is a demodulator. It is useful to understand the demodulation process in order to make sense of a lot of phenomena observed in cable systems. Also, demodulation is used for quality checks, for recovery of off-air signals for backup purposes, to put signals on a digital transportation route, and as interfaces for FM microwave systems. A few headends employ demodulation and remodulation instead of processing for off-air signals. This is advantageous if considerable signal switching is done. The use of a quality demodulator as a test tool is, alone, justification for studying the technique.

8.5.1 RF Input Section

Figure 8.10 shows the RF, IF, and video demodulation portions of a demodulator. The sound section is shown in upcoming Figure 8.11. The demodulator input is identical to that of the processor of Figure 8.1. Both processor and demodulator are used to recover signals from off the air or from an incoming cable. Thus, the input circuitry through attenuator AT2 is identical.

image

Figure 8.10 Demodulator block diagram, input and video.

image

Figure 8.11 Demodulator block diagram, audio.

8.5.2 IF Processing Section

IF processing in a demodulator is significantly different from that of a processor. Signal power for demodulation of the sound carrier is taken off at flag A. Sound demodulation will be covered later in conjunction with Figure 8.11. The AGC operates to hold the visual signal level after AT2 at a constant level regardless of the input level.

Zero chop switch S1 is optional but is often provided on professional demodulators as a way to measure visual signal depth of modulation. When enabled, the chopper momentarily interrupts the signal at IF, usually during the vertical blanking interval. The interruption allows a waveform monitor or oscilloscope to be calibrated to measure visual depth of modulation very accurately. Chapter 9 describes measuring depth of modulation using a zero chopper. The chopper is coupled to the AGC detector in the illustrated case so that the switch may be restored to the IF path during sync pulses. By doing this, the zero chop pulse may extend over several lines without interrupting sync.

From the zero chop circuit, the signal is supplied to the Nyquist slope filter, FL1, which provides the main filtering function for the visual signal. The filter is named after Harry Nyquist, a Bell Laboratories engineer who developed this technique for recovering vestigial sideband signals. The filter linearly reduces the amplitude response in the double sideband region (picture carrier ±750 kHz) such that the picture carrier is reduced 6 dB. The reason for doing this is to equalize the frequency response in the double sideband and single sideband regions.

8.5.3 Visual Carrier Demodulation

After the Nyquist slope filter, the signal is supplied to one of two demodulation circuits, an envelope detector or a synchronous detector. Home television receivers use one detector or the other, but it is advantageous to have both in a professional demodulator because the two will yield different information about a signal. Some of the differences are discussed in Section 8.5.7.

Envelope Detector

In the illustrated system, the envelope detector consists of a chroma rolloff filter, FL2, and following circuits. The chroma rolloff filter reduces the amplitude of the IF signal at the color subcarrier frequency. The purpose of the chroma rolloff filter is explained later. Diode D1 performs the actual detection process, and capacitor C1 filters the signal. The detector, except for component values, looks just like the detector for an AM broadcast receiver.

The direction of the diode is not arbitrary. Recall from Chapter 2 that the sync tip is the most negative portion of the video signal as the signal is normally interfaced. The sync tip also corresponds to the highest amplitude of the signal. With the diode in the direction shown, the voltage across C1 will be such that the highest amplitude of the RF signal (the sync tip) will be the most negative voltage across C1. Other voltages will be positive with respect to that. Of course, in a real demodulator, it is possible to detect the signal with the opposite polarity and then invert the video, but this is often not done for practical reasons.

Following the detector is the chroma peaking filter FL3, which restores flat chroma response, complementary to the operation of the chroma rolloff filter FL2. The signal is then applied to switch S2, which selects either the envelope detector or the synchronous detector.

Synchronous Detector

The synchronous detector is shown below the envelope detector in Figure 8.10. It consists of a phase locked loop used to generate a carrier phase coherent with the incoming picture carrier (at IF). In contrast to other applications described earlier, the phase of the locked carrier with respect to the input IF signal is of paramount importance: the two carrier phases must be the same. Because the phase of the carrier varies with modulation and, in addition, can vary owing to defects in transmission, the phase locked loop is normally keyed (closed) on either sync tips or the back porch portion of the modulated signal.

When a modulated carrier is multiplied with an unmodulated carrier at the same frequency, the result is recovery of the baseband signal. The advantage of the synchronous detector over the envelope detector is primarily associated with demodulation of signals having asymmetrical sidebands, which is the case for all analog television signals. An envelope detector introduces distortion that is avoided with a synchronous detector.

From the synchronous demodulator M2, the signal is low-pass filtered to remove the carrier and harmonics, and supplied to S2, the switch that selects either the envelope or synchronous detectors.
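
The principle is easy to demonstrate numerically. The sketch below (Python with NumPy; the carrier and tone frequencies are scaled-down illustrative values, and a crude moving-average low-pass filter stands in for the real one) multiplies an AM waveform by a coherent carrier and recovers the modulation:

    import numpy as np

    fs = 1.0e6                  # sample rate (illustrative)
    fc, fm = 100e3, 5e3         # "carrier" and modulating tone (illustrative)
    t = np.arange(0, 2e-3, 1/fs)

    m = 0.5 * np.cos(2*np.pi*fm*t)         # modulation, M = 0.5
    am = (1 + m) * np.cos(2*np.pi*fc*t)    # amplitude modulated carrier

    # Synchronous detection: multiply by a coherent carrier, then low-pass
    # filter (a simple moving average here) to remove the 2*fc product.
    product = am * np.cos(2*np.pi*fc*t)
    taps = 4 * int(fs // fc)
    recovered = 2 * np.convolve(product, np.ones(taps)/taps, mode="same")

    # Away from the ends, "recovered" closely tracks 1 + m.
    print(np.max(np.abs(recovered[taps:-taps] - (1 + m)[taps:-taps])))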

8.5.4 Video Output

Switch S2 is normally actuated manually to select either the envelope or synchronous detectors if both are provided. By comparing the results of demodulation using the synchronous and envelope detectors, we can deduce a lot about the performance of the RF components handling the signal. Under normal conditions, the synchronous detector will yield a more accurately demodulated signal than will the envelope detector.

A coupling capacitor C2 removes dc components of the demodulation process, as well as any bias voltages introduced in the demodulation process. It has been customary to ac couple the video output since this minimizes the power supplied or dissipated in the load and often simplifies circuitry. Today, both dc and ac coupling of the output are employed. If dc coupling is employed, capacitor C2 would likely be used anyway, with a dc restoration circuit in the output amplifier.

It is essential with video that the coaxial cable impedance be matched both on the source end and the load end. Failure to do so can cause unacceptable frequency response errors, due to reflections, if the interconnecting cable is long enough. A typical way to achieve multiple well-matched outputs is to provide a very low output impedance in amplifier A3 and multiple matching resistors R1 and R2 at the outputs.

8.5.5 Aural Carrier Demodulation

The aural demodulation process is shown in Figure 8.11. The sound IF is supplied from Figure 8.10 on flag A. The other input to the sound demodulation process is a sample of the signal from the 45.75-MHz phase locked oscillator, used in the synchronous demodulator (flag C, which is omitted if a synchronous detector is not supplied).

The intercarrier detection mode is the one used almost exclusively in consumer applications. The sound final IF frequency, normally 4.5 MHz, is derived from mixing the sound first IF, 41.25 MHz, with the picture carrier, 45.75 MHz. This final sound IF frequency is often called the intercarrier frequency. Use of intercarrier detection solves several problems related to television transmission. If any process in the transmission of the television signal introduces phase noise, then the phase noise will seriously damage the signal-to-noise ratio of the audio: phase modulation is a form of frequency modulation, so once phase noise is introduced, it cannot be removed. However, since phase noise will affect the picture and sound carriers identically, when the intercarrier signal is developed by taking the difference in the picture and sound carriers, phase noise is eliminated from the aural signal. The IF signal on flag A contains both the aural and picture IF carriers, so by applying this signal to both inputs of a mixer, intercarrier mixing results. (Additional filtering may be used.)
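
The cancellation can be written out directly. In generic notation (a sketch, with φ(t) the common phase noise and θ(t) the aural frequency modulation), the product of the two carriers expands as

    cos (ωpt + φ(t)) × cos (ωst + φ(t) + θ(t))
        = ½ cos ((ωp − ωs)t − θ(t)) + ½ cos ((ωp + ωs)t + 2φ(t) + θ(t))

The difference-frequency term at ωp − ωs (4.5 MHz) retains the sound modulation θ(t) but is free of φ(t); the sum-frequency term, which still carries the phase noise, is removed by filtering.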

On the other hand, if phase noise is not a problem, then there are advantages to a direct detection process. The picture carrier may have some incidental phase modulation on it, and it certainly has visual modulation sidebands. These will cause interference with the aural signal if used in the intercarrier detection process. In direct detection, the CW signal locked to the picture carrier is mixed with the sound carrier to derive the intercarrier frequency. (In this mode, the sound carrier path may include filtering to remove the picture carrier though this filtering is not shown in Figure 8.11.)

After the intercarrier detector, M4, the signal is filtered in a 4.5-MHz bandpass filter FL4. Following the bandpass filter is a limiter, A4, which removes any residual amplitude modulation from the aural carrier. At this point, a sample of the aural carrier is usually taken. It is passed through a manually controlled attenuator AT3, amplifier A5, and a current source resistor R3. The purpose is to provide a sound subcarrier for external use. Often this signal is combined with the picture carrier to provide a signal to be transmitted by FM microwave link. The resistance of R3 is high compared with 75 ohms. If one of the video outputs on Figure 8.10 is paralleled with the subcarrier out connector, then R3 will act as a current source, adding the 4.5-MHz subcarrier to the video without significantly upsetting the impedance of the video circuit.

The other path from the limiter is through a discriminator, which recovers baseband audio. Note that the signal recovered at this point is either monaural (mono) or a stereo composite signal, as defined in upcoming Section 8.6. If the signal is mono, then a deemphasis filter FL5 is included. If it is desired to recover a composite stereo signal for subsequent demodulation or processing, the deemphasis is omitted in favor of more complex processing in the stereo decoder.

Finally, the signal level is adjusted at attenuator AT4, normally a manual control. It is amplified in output amplifier A6 and provided on balanced outputs. Balanced outputs are used in professional equipment to reduce pickup of 60-Hz ac components (“hum”). “Balance” refers to a transmission technique in which the two sides of the output have equal impedance to ground, as opposed to unbalanced transmission, where one side of the circuit is at ground potential. Section 8.9.2 describes use of balanced interconnection for audio.

From a long-standing telephone specification, the impedance of the audio output is normally 600 ohms between the two balanced lines. In much equipment today, the source impedance is less than 600 ohms, but the performance of the amplifier is specified when the amplifier is working into a 600-ohm load. The lack of a defined 600-ohm output impedance is not a problem in most cases since the length of audio cable is very small compared with the wavelength of any audio frequency. If the output is to drive a line several miles long, then the impedance should be controlled more carefully.

8.5.6 Nyquist Slope Filter

The Nyquist slope filter shown as FL1 in Figure 8.10 has a shape shown by the solid line in Figure 8.12. At the low-frequency side (aural carrier at IF), the shape is fairly straightforward. The bandpass characteristic should pass the color carrier and its sidebands while rejecting the sound carrier.

image

Figure 8.12 Nyquist slope filter response, with haystack.

The high-frequency side (near the picture carrier) is rather unusual. The purpose of this shape is to complement the shape of the vestigial sideband (VSB) filter used in the modulator, and shown in Figure 8.12 as a dotted line. The Nyquist shape is flat to 0.75 MHz below the picture carrier; then it begins falling off at a prescribed rate above that frequency. The rolloff must be linear with frequency on a voltage plot, not a logarithmic plot. The response of the filter must be 0.5 at the picture carrier, continuing to drop until the response is zero at 0.75 MHz above the picture carrier.
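
The ideal shape can be stated compactly as a piecewise-linear function of the offset from the picture carrier. A sketch (Python; it models only the slope region, ignoring the bandpass skirts that reject the sound carrier):

    def nyquist_slope(offset_mhz):
        """Idealized Nyquist slope response (voltage ratio, not dB).

        offset_mhz is the offset from the picture carrier, with positive
        offsets toward the vestigial (reduced) sideband.
        """
        if offset_mhz <= -0.75:
            return 1.0                    # flat in the fully transmitted sideband
        if offset_mhz >= 0.75:
            return 0.0                    # fully attenuated past +0.75 MHz
        return 0.5 - offset_mhz / 1.5     # linear: 1.0 at -0.75, 0.5 at carrier

    for f in (-1.0, -0.75, 0.0, 0.75):
        print(f, nyquist_slope(f))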

Other features of the Nyquist slope filter are emphasized in the next section, in which we describe the differences between synchronous and envelope detection. One of those features is not really a part of the Nyquist slope filter response. It is shown in Figure 8.12 as a dashed line and was mentioned in conjunction with the chroma rolloff filter (FL2 of Figure 8.10). This is the reduction of response near the color carrier applied in the case of an envelope detector. It is called a haystack response after traditional television usage: taken with the Nyquist slope around the picture carrier, the filter shape looks like a haystack. The haystack response is used to reduce the effects of luminance-to-chrominance crosstalk.

In consumer television receivers using envelope detectors, the haystack response is built into the Nyquist slope filter (FL1 of Figure 8.10). In professional receivers, the chroma rolloff may be realized independently of the Nyquist slope filter so that it is not in the signal path in the synchronous detector mode. It is not that the haystack response would hurt the response of the synchronous detector if it were perfectly compensated at baseband. Rather, it is difficult to compensate perfectly, and for measurement purposes, it is better not to introduce a response error that must later be compensated.

In modern practice, the Nyquist slope filter is normally realized as a SAW filter in consumer receivers though the shape is compromised to allow the SAW filter to be built with good cost-effectiveness. In professional demodulators, the filter may be realized either with a SAW device or as a conventional L-C filter with delay equalization.

8.5.7 Synchronous Versus Envelope Detection

Both synchronous and envelope detectors may be used for operational demodulation. Synchronous detection yields higher-quality results in the absence of a few distortions that prevent its proper operation. The most common problem that interferes with operation is phase noise, or phase modulation, on the picture carrier. A synchronous detector will not work as well as an envelope detector under this circumstance.

Envelope detectors are susceptible to the practical limitations of diodes though there are ways to mitigate this problem. More seriously, envelope detection is susceptible to quadrature distortion errors that arise as a result of asymmetrical sidebands in the television signal. Quadrature distortion occurs when the upper and lower sidebands are not symmetrical. As shown earlier and in Chapter 2, the television signal presented to the detector does not have symmetrical sidebands at any frequency: at modulation frequencies above 1.25 MHz, only one sideband is transmitted. Below 0.75 MHz, symmetrical sidebands are transmitted but are turned into asymmetrical sidebands by the Nyquist slope filter. The lowest modulating frequencies approach sideband symmetry, but as the frequency increases, the symmetry is quickly lost.

Phasor Representation of the Modulated Signal

The mathematics of modulation were shown in Chapter 2. A useful graphical representation that shows the function of the two demodulators is to invoke the mechanism of phasors.2 A phasor represents any steady-state sinusoid as a vector whose length is a function of the magnitude of the signal and whose angle represents the phase of the signal with respect to a reference. From Equation (8.1), an amplitude modulated signal can be represented as


e(t) = A cos ωct + (AM/2) cos (ωc + ωm)t + (AM/2) cos (ωc − ωm)t        (8.4)


Figure 8.13(a) represents this equation in phasor form. The first term, representing the carrier, is defined as having a phase such that its phasor is straight up (this is arbitrary). The upper sideband has a phasor that rotates clockwise (with respect to the stationary carrier) at an angular frequency of ωm. The lower sideband term, cos (ωc − ωm)t, is represented by a counterrotating phasor. The resultant phasor is shown to the right in Figure 8.13(a). The vector sum of the three phasors is always in phase with the carrier. An envelope detector responds to this resultant vector. Analysis of the vector at all different phase angles θ would show that the resultant traces out a sinusoid, which is the waveform used to modulate the carrier (the M cos ωmt term of Equation (8.1)). All is well.

image

Figure 8.13 Phasor diagram, double and single sideband carriers. (a) Symmetrical sidebands. (b) Single sideband.

Now consider the situation shown in Figure 8.13(b), a single sideband signal. The modulating waveform is the same sine wave as in Figure 8.13(a), but one sideband has been removed by the VSB filter (FL2 of Figure 8.4). Most notably, one sideband of the color subcarrier is removed, so the color subcarrier is transmitted as a single sideband signal. The vector sum of the carrier and single sideband phasor is shown to the right of Figure 8.13(b). Here the resultant is not in phase with the carrier. The resultant can be resolved into an in-phase component and a “quadrature” component at right angles to the carrier phase.

Response of an Envelope Detector

The envelope detector responds to the magnitude of the resultant carrier. By applying trigonometric relations to the resultant vector of Figure 8.13(b), we can compute the magnitude of the vector for various conditions. Figure 8.14 illustrates the result obtained using an envelope detector for two different relative levels of carrier and sideband.* The dashed line is the sinusoid with which the carrier was modulated. One of the two cases represented (the middle trace) results when the sideband is −6 dB relative to the carrier (one-half its voltage). The third waveform represents the waveform produced by equal amplitude carrier and sideband. This is a realistic condition for a television signal: a scene with medium to high brightness has a relatively low carrier level during the lighter portions of the scene. If the object is brightly colored, the chroma carrier amplitude will be high, so when the camera is scanning the object, the chroma can be as high as or higher in amplitude than the picture carrier. The resultant tends to be “scalloped,” with the negative peak too wide and the positive peak too narrow.

image

Figure 8.14 Demodulated results, envelope detector.
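
The scalloping is easy to reproduce numerically. With the carrier as a unit phasor and a single sideband of relative amplitude k rotating about it, the envelope is the magnitude of the phasor sum, while an ideal detector would recover 1 + k cos θ. A sketch (Python with NumPy, using the two sideband levels of Figure 8.14):

    import numpy as np

    theta = np.linspace(0, 2*np.pi, 9)   # one cycle of the modulating tone
    for k in (0.5, 1.0):                 # sideband at -6 dB, then equal to carrier
        envelope = np.abs(1 + k*np.exp(1j*theta))   # what an envelope detector sees
        ideal = 1 + k*np.cos(theta)                 # in-phase (synchronous) result
        print(k, np.round(envelope - ideal, 3))     # error is zero only at 0 and pi

The error term works out to sqrt(1 + k² + 2k cos θ) − (1 + k cos θ), which is never negative for k ≤ 1: the envelope rides above the true modulation except at the peaks, producing the scalloped shape of Figure 8.14.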

From this illustration, it is evident why the chroma response of the Nyquist slope filter is reduced (haystacked) when an envelope detector is used (see Figure 8.12). The chroma carrier is the highest amplitude sideband of the luminance carrier when brightly colored objects are being scanned. The higher the chroma signal is with respect to the picture carrier, the greater the distortion. By reducing the chroma amplitude before detection, the effect is reduced. Of course, the chroma amplitude must be corrected before the signal is presented for measurement purposes; hence the chroma peaking circuit FL3 of Figure 8.10. (In a consumer TV set, it is not necessary to use this peaking circuit because the chroma is separated from the luminance, and the correction can be approximated by adding more gain to the chroma channel.)

If the Fourier transform of the chroma waveform is taken, it will be seen that the amplitude of the 3.58-MHz fundamental component is less than what it would have been without quadrature distortion. This causes the chroma saturation (proportional to chroma amplitude) to be lower than it should be — the color appears washed out compared with the way it should look. An envelope detector that doesn’t reduce the amplitude of the color carrier prior to demodulation will exhibit excessive differential gain and luminance-to-chrominance and chrominance-to-luminance crosstalk.

The average level of the chroma signal is no longer in the center of the waveform of Figure 8.14. Rather, it is lower than it should be. This means that the luminance level is lower than it should be, making the object appear darker than it should. Thus, a brightly colored object of relatively high luminance level (bright yellow is an example) will appear too dark and somewhat washed out owing to the quadrature distortion introduced by the envelope detector. Whether or not the distortion is objectionable depends on a number of factors.

One useful way to show the difference between envelope and synchronous detectors is to study the various sinusoids involved in the multiburst waveform. Switching back and forth between the two detectors will readily show the difference in waveform. Some bursts will show more distortion with an envelope detector than will others, depending on the sideband amplitude seen by the detector.

Another convenient test is to study the difference between a sine squared (2T) pulse when demodulated with synchronous and envelope detectors. The demodulated shape will be correct with the synchronous detector. With the envelope detector, the pulse will appear too narrow, and possibly too tall. This would have the effect on a picture of narrowing a vertical white line on the screen.

Response of a Synchronous Detector

The synchronous detector can be shown to respond only to the in-phase component of the phasor of Figure 8.13(b). In turn, it can be shown that the in-phase component, the projection of the sideband on the phase of the carrier, is an accurate representation of the modulation.

Application of Test Demodulators Having Both Detectors

Some demodulators have a second synchronous detector, in which the phase of the reconstructed carrier is 90° with respect to the incoming picture carrier (at IF). This quadrature detector is sensitive to the quadrature component of Figure 8.13(b). By displaying the quadrature component on the y axis of an oscilloscope and the in-phase component on the x axis, we can easily spot incidental carrier phase modulation (ICPM) in the modulator. Normally, a luminance ramp is supplied as video for this test. ICPM can be caused by improperly operated power amplifiers, and is a common problem with older broadcast transmitters, particularly UHF transmitters. It tends to be less of a problem with cable modulators because of the lack of high-power stages, but it can be introduced in an amplifier stage that is distorting. ICPM can cause excessive intercarrier buzz if it affects only the luminance carrier.

A problem with some older and lower-cost cable modulators is poor balance in the video modulator stage. This will distort the luminance level of bright portions of a picture. The presence of this may be deduced by measuring differential gain with the envelope and synchronous detectors. A good differential gain reading with the synchronous detector and a bad reading with the envelope detector indicate possible problems with the modulator’s modulation stage.

Subtle amplitude and group delay problems in a piece of equipment can be masked when measured with an envelope detector. Amplitude and group delay problems can be evaluated by looking at the sine squared (2T) and modulated 12.5T pulses. These pulses are particularly convenient for use in evaluating problems because the results are presented in terms that directly relate to the visible distortion of a video signal. However, some problems produce waveform distortions that are similar to the distortions produced by an envelope detector. Thus, a synchronous detector is more useful when studying response problems.

8.6 TV Stereo

At least three TV stereo systems are in use worldwide, with several variations to each. North America uses the BTSC system, named after the committee that adopted it. The Broadcast Television System Committee was formed under the auspices of the Electronic Industries Association (EIA). It was active from the late 1970s to the mid-1980s. The Japanese use a variation of BTSC stereo. In Great Britain, the BBC and IBA developed a digital system known as near instantaneous companded audio multiplex (NICAM). With variations, it is in use in a number of European countries and elsewhere. The German IRT earlier developed a two-carrier analog system, which is used there, in Korea, and elsewhere.

8.6.1 BTSC Stereo Generation and Application

The BTSC stereo system is described in Chapter 2. Figure 8.15 shows one possible block diagram of a BTSC stereo encoder. Left- and right-channel audio are supplied to identical low-pass filters, whose cutoff frequency should be as close to 15 kHz as possible, while providing a lot of attenuation at the horizontal sweep frequency of 15.734 kHz. Because of the sharpness demanded of these filters, their complexity is high. The filters are often implemented using digital signal processing (DSP) technology today. Failure to provide adequate reduction of power at 15.734 kHz will result in compromised stereo performance due to interference with the pilot carrier. Further, the difference channel approaches this frequency, and poor rejection could result in artifacts arising from spectral crossover between the sum and difference channels.

image

Figure 8.15 Block diagram, BTSC stereo encoder.

Note that the left- and right-channel program audio levels must be metered as shown to ensure the correct deviation. The monaural meter on the modulator is not usually suitable for this purpose.* In fact, it can be very misleading. Similarly, SAP level is monitored at its input.

From the filters, the signals are supplied to the matrix, which is simply a pair of networks that produce the sum (L + R) and difference (L − R) signals. The sum channel is supplied to a preemphasis circuit and a delay compensator, which compensates for the delay in the difference channel. See the discussion of emphasis in Section 8.3.6. It is necessary to use preemphasis in the sum channel in order to maintain backward compatibility with the mono system. From the preemphasis circuit, the sum signal is supplied to an adder that combines the different parts of the stereo signal.

The difference signal is supplied to the dBx encoder, which compresses it in order to reduce the effects of poor carrier-to-noise ratio in the transmission path. The compression is frequency selective and follows an algorithm proposed originally by the dBx Corporation. (For test purposes only, the dBx compression may be replaced with a 75-μs preemphasis network.)

From the compression circuit, the difference signal is applied to the modulator. It is double sideband suppressed carrier amplitude modulated onto a carrier at 2fh (twice the horizontal scan frequency), 31.47 kHz. The carrier oscillator is phase locked to the horizontal sync rate, fh, derived from the video signal. This need for phase lock is the reason that video must be looped through the stereo encoder. Failure to lock to video could result in beats and separation “pumping” from oscillators that are free running on almost but not quite the same frequency. A portion of the fh signal, obtained by dividing the output of the 2fh oscillator by 2, is filtered and supplied to the summing circuit as the pilot carrier. This fh signal is also supplied to a phase locked loop, whose output adjusts the 2fh oscillator as needed to maintain the correct phase relationship between its two inputs, one from the divider and the other from the sync separator. These three components — the sum, difference, and pilot signals — make up the basic stereo signal.

If the SAP is not used on a particular channel, that circuitry is not likely to be included in the stereo encoder. If it is, the circuitry at the bottom of Figure 8.15 is supplied. The SAP audio is compressed, then frequency modulated onto an oscillator whose center frequency is locked to the fifth harmonic of the horizontal frequency, 5fh. This SAP signal is then summed with the stereo components. If the professional channel is used, then the blocks at the top of Figure 8.15 are supplied. It is rarely used in cable television.

Interfacing and Setting Levels in BTSC Service

A number of methods may be used to connect the BTSC encoder to the modulator. Selection of the interface method to use is based on inputs available on the modulator and outputs available on the stereo encoder. Of particular concern is the ability to set the deviation of the sound carrier correctly. Failure to do so will result in decreased stereo separation. For example, separation between the left and right channels will be degraded to less than 20 dB if there is a 1.9-dB error in setting the deviation.3

Several methods may be used to interface the encoder with the modulator. The most basic, but not necessarily the best, is to defeat the preemphasis in the baseband input of the modulator (because preemphasis is handled in the encoder) and supply the broadband signal to the baseband audio input of the modulator (Figure 8.5). This works, but you must be careful to set the deviation of the modulator correctly. Failure to do so will result in poor stereo separation, as shown earlier.

In order to use this interface, some modulator designers have arranged metering circuitry such that a modulation meter measures only the sum signal. The object would be to set the level using the left and right meters on the encoder, then set the deviation on the modulator using its meter. Setting deviation only with reference to the sum deviation is not at all accurate and will almost surely result in poor stereo separation. However, metering the sum channel at least gives the meter on the modulator a sensible reading when handling a stereo signal.

In addition to metering the program audio, the encoder manufacturer may provide a test tone. When turned on, the test tone replaces the composite stereo signal supplied to the modulator. The modulator deviation adjustment (AT3 of Figure 8.5) is adjusted for the correct deviation of the test tone, usually 25 kHz. If this is done and the program audio levels are set correctly on the encoder, then the deviation will be correct. This method is far superior to metering only the sum channel program audio on the modulator.

A preferred interface method, if available, is to have a frequency modulated 4.5-MHz oscillator built into the stereo encoder. An optional sound carrier modulator (oscillator) is shown in Figure 8.15. This way, the encoder manufacturer can ensure that the deviation is set correctly. The encoder output at 4.5 MHz is supplied to flag C of the modulator block diagram, Figure 8.5. It is best not to combine the sound subcarrier with the video since doing so will reduce the carrier-to-noise ratio in the aural channel and will create more group delay problems in the video channel. The modulator must be capable of accepting a separate input to FL3.

Another possible but somewhat less desirable interface method is to locate a 41.25-MHz FM oscillator in the encoder and supply sound at this frequency to flag D of the sound IF loop at the modulator, Figure 8.4. This provides the same level calibration advantage as does the 4.5-MHz interface, but exacerbates the problem of maintaining accuracy in the frequency spacing between the picture and sound carriers. In any of the other methods described, sound carrier spacing depends only on a single 4.5-MHz oscillator. The FCC-required accuracy in spacing for cable television is ±5 kHz. Applied to the 4.5-MHz spacing, this represents an accuracy requirement of 1,100 parts per million, a trivial number for a crystal-controlled oscillator, and barely possible with an L-C oscillator (used in set top terminals but definitely not appropriate for professional use). If the interface is at 41.25 MHz, then two independent oscillators are involved, one at 41.25 MHz in the encoder and another at 45.75 MHz in the modulator. This means that each must be accurate to about 57 parts per million. This is not excessive for a crystal oscillator but is getting to the point that care must be exercised.
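
The parts-per-million figures follow directly from the ±5-kHz budget. A sketch of the arithmetic (Python; splitting the budget evenly between the two oscillators is an assumption for illustration):

    tolerance_hz = 5_000.0                        # FCC spacing tolerance

    # One 4.5-MHz oscillator sets the spacing by itself.
    single_ppm = tolerance_hz / 4.5e6 * 1e6       # ~1,100 ppm

    # With a 41.25-MHz interface, two oscillators (41.25 and 45.75 MHz)
    # each get roughly half the budget; ~43.5 MHz is used as a round figure.
    dual_ppm = (tolerance_hz / 2) / 43.5e6 * 1e6  # ~57 ppm

    print(round(single_ppm), round(dual_ppm))     # 1111 57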

In addition, there is some possibility with a 41.25-MHz interface that ICPM could exist on the picture carrier and not on the sound carrier. This would cause a degraded signal-to-noise ratio in the intercarrier demodulator in the TV receiver. (See the discussions of ICPM and intercarrier detection in Section 8.5.)

Regardless of the interfacing method used, it is necessary to ensure that the deviation of the main sound carrier by the composite stereo signal is correct, then to set the level of the left and right channels using only attenuators AT1 and AT2 of Figure 8.15. The stereo encoder should provide metering to allow accurate setting of the program audio levels.

8.6.2 The NICAM Stereo System4

NICAM’s formal designation is NICAM 728 because the bit rate is 728 kb/s. The system is a digital transmission system developed in the mid-1980s in the United Kingdom. It has been adopted for use in other countries as well. It is capable of carrying one stereo pair, or two independent monaural programs, or 352-kb/s data and one monaural program, or 704-kb/s data only.

Data is initially sampled at 14 bits per sample and companded to 10 bits (plus parity). The data is divided into blocks of 32 samples, and the scaling of the block is based on the highest amplitude sample in that block. A scaling factor is transmitted with the data. Transmission is divided into 1,000 frames per second. Each frame consists of 704 bits of data and parity, with the remaining 24 bits being overhead and synchronization.
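
The frame arithmetic works out exactly. A sketch (Python; the 32-kHz sampling rate, which yields 32 samples per channel in each 1-ms frame, is an assumption consistent with the block size given above):

    samples_per_channel = 32      # one block per channel per 1-ms frame
    bits_per_sample = 10 + 1      # 10 companded data bits plus parity
    channels = 2                  # one stereo pair (or two mono programs)

    payload = samples_per_channel * bits_per_sample * channels   # 704 bits
    frame = payload + 24          # plus 24 bits of overhead and sync
    print(payload, frame, frame * 1000)   # 704 728 728000 -> 728 kb/s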

Modulation is differential QPSK (see Chapter 4). As practiced with the CCIR-I television system (Great Britain), the modulator and demodulator both use a root raised cosine filter with an alpha of 100%. When NICAM is used in countries employing the CCIR-B/G television format, the filter alpha is reduced to 40% to accommodate the narrower available bandwidth.

In the CCIR-I system (used in Great Britain), the NICAM carrier is located 6.552 MHz (nine times the 728-kHz bit rate) above the picture carrier. The normal analog sound carrier, 6 MHz above the picture carrier, is transmitted at −7 dB with respect to the picture carrier, and the NICAM signal is set to −20 dB relative to the picture carrier. The power in the NICAM signal does spill out of the channel, slightly invading the upper adjacent channel. This has not been a problem.

In the -B/G system (used in much of continental Europe except France and Russia), the NICAM center frequency is slightly outside the channel, with energy spillage even farther out. At UHF (system -G), the occupied bandwidth falls in the 1-MHz guard band between the channels. At VHF (system -B), the power spills into the upper adjacent channel. By slightly narrowing the vestigial sideband filter for the upper adjacent channel, it is claimed that a cable system can successfully add the NICAM signal while employing adjacent channel transmission.5

8.6.3 The IRT Two-Carrier Stereo System

An earlier stereo system developed in Germany employed two analog carriers. It is used there with the CCIR-B/G television system. It has also been used, with modification, in Korea, where NTSC is the television system in use. The normal 5.5-MHz (352fh) sound carrier is modulated with a left plus right signal. A second carrier is added at 5.7421875 MHz (367.5fh). It is modulated with the right channel only. This second signal is demodulated and supplied directly to the right loudspeaker. The left signal is obtained by subtracting the right signal from the L + R signal on the main aural carrier. The reason for doing this rather than placing a difference (L − R) signal on the second carrier is that noise from the second, low-amplitude carrier theoretically gets canceled by virtue of its phase being opposite on the two channels.

The main sound carrier at 5.5 MHz above the picture carrier is carried 13 dB below the picture carrier. The second sound carrier is carried −20 dB from the picture carrier and is also frequency modulated with a deviation of ±50 kHz. A pilot carrier of 54.6875 kHz (3.5fh) contains modulation that identifies the broadcast as mono, stereo, or dual sound (two mono channels, as for two languages). In addition, mode identification is carried in the vertical blanking interval of the video.

8.7 Satellite Earth Station Receiving Equipment

The modern cable television industry was enabled by the availability of communications satellites that could distribute many television programs simultaneously to all cable headends in North America. The watershed event was the September 30, 1975 (U.S. time), Ali-Frazier prizefight, the “Thrilla in Manila,” seen at the two or three headends in the United States that had received 10-meter earth station antennas by then.

Figure 8.16 illustrates the major electronic components of the earth station used at cable headends. Signal is coupled from the “dish” antenna (see Chapter 7) to a feed located at the focal point of the antenna. The signal is amplified and, typically, down-converted to a lower frequency, by a low-noise block converter (LNB) located at the feedpoint. The LNB is located at the feedpoint to minimize signal loss. Communication satellite downlinks today operate on two bands, C band (3.7 to 4.2 GHz) and Ku band (11.7 to 12.2 GHz). Coaxial cable loss is significant at these frequencies though older earth stations used large, high-quality cable to route C band signals into the headend.

image

Figure 8.16 Analog earth station electronics, block diagram.

The most common band to which signals are converted is 950–1450 MHz (roughly L band). Some older equipment used 270–770 MHz. At these frequencies, the signal can be routed into the headend using fairly small coaxial cable. The cable may also carry power to the LNB as shown. Fiber optics may also be used to bring signals to the headend.
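
As a worked example of the block conversion (a sketch; the 5,150-MHz local oscillator is a common choice for C-band LNBs, though other conversion plans exist):

    # C-band block downconversion to L band (values in MHz).
    LO = 5150.0                        # typical C-band LNB local oscillator
    for rf in (3700.0, 4200.0):
        print(rf, "->", LO - rf)       # 3700 -> 1450, 4200 -> 950

    # Note the spectral inversion: the top of the C band lands at the
    # bottom of the 950-1450 MHz block.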

Once in the headend, the signal is split to serve a number of receivers: today, up to 12 receivers may be connected to a single LNB. A second LNB located at the same feedpoint can handle up to 12 additional channels coming from the same satellite in the same band, with different polarization.

At each receiver, the signal is again amplified by amplifier A2, filtered, and down-converted to another IF (dual-conversion downconversion is used if the receiver is frequency agile). The downconversion is done in mixer M2, using local oscillator LO2. After amplification and filtering, the signal is demodulated in FM demodulator DM1. As shown in the insert of Figure 8.16, the spectrum coming out of DM1 consists of baseband video (probably scrambled) and possibly one or more sound subcarriers. A number of subcarrier frequencies have been used for sound, but the two most common are 6.2 and 6.8 MHz. Today, these subcarriers are generally not used for primary program audio, that being carried digitally in the video. The subcarriers may be used for cue tones and supplementary audio. Cue tones are used to initiate local advertising inserts.

If an aural subcarrier is used, it is filtered in bandpass filter FL4, amplified, and detected in FM demodulator DM2. The audio is preemphasized using the standard 75-μs preemphasis, and so must be deemphasized in FL6. Attenuator AT4 allows the output level to be set, and differential output amplifier A4 supplies the output gain.

In the video path, a roofing filter FL5 may be used to remove power above the video spectrum. If not scrambled (rare today), the video passes from the roofing filter to deemphasis filter FL7. The video is preemphasized before transmission to reduce the effects of noise on the signal, as is the audio. A particular emphasis and deemphasis curve is used, as prescribed by the CCIR. After deemphasis, a dispersal removal clamp removes any dispersal signal that may be imposed on the video. Dispersal is used in C band satellite circuits because the same 3.7–4.2 GHz band used for satellite downlinks is also used for terrestrial microwave communications, primarily by the telephone industry. The fear is that if modulation is removed from the satellite signal, enough power could be concentrated at one frequency to interfere with terrestrial transmissions. A dispersal signal was required in the early days to prevent that from happening. The dispersal signal is a 30-Hz triangular waveform synchronized to the vertical blanking interval (VBI) and changing direction during the VBI. It is removed using a clamp or dc restorer, somewhat as illustrated by C1 and D1 of Figure 8.4. From the deemphasis filter, the video level is set using attenuator AT5. After buffering in A5, nonscrambled video is output from the receiver.

Dispersal waveforms are not always used in modern practice. If a transponder is used full-time for video, then the downlink power will not be concentrated at one frequency. For transponders having occasional video service, dispersal waveforms are still used, but they are frequently used only to replace video when it is removed but the uplink remains on the air (possibly for test purposes).

Almost all signals delivered to cable headends today are scrambled. Programmers went to scrambling during the 1980s to prevent theft of service by privately owned earth stations that had not paid for service. The analog scrambling system used almost universally in the cable industry today is a sync replacement system (similar to sync suppression — see Chapter 21), with the horizontal blanking interval (HBI) replaced by encrypted audio. The idea is to make the video scrambling hard to break. Then the audio is “hard” encrypted, such that it could not be decoded without authorization. Descrambled audio and video outputs from the receiver are amplified in A6 through A8, and supplied to the outside, normally to modulators and stereo encoders as described earlier.

Signals to be descrambled are passed to the decoder, which accepts authorization and then descrambles the signals. The system in use has the capability to deliver two audio signals so that a stereo pair can be transmitted. If a second language program is transmitted, it uses one of the FM subcarriers described earlier.

8.7.1 Digital Satellite Receivers

Digital satellite receivers look similar to analog receivers at the block diagram level, with the exception that the demodulator is QPSK, OQPSK, or sometimes 8-PSK. All of these modulation formats are defined in Chapter 4. Typical receivers include an MPEG decoder, and they supply both analog video (from one channel in the received multiplex) with stereo sound and the entire digital transport stream in digital form. Audio outputs may include analog and sometimes a professional digital audio interface.

Figure 8.17 illustrates a digital earth station electronics block diagram; compare it with Figure 8.16 for the analog equivalent. Note that the antenna and LNB are identical, as is the front end of the receiver. In fact, the same antenna and LNB can be used to receive analog on some transponders and digital on others. The only difference is in the receiver.

image

Figure 8.17 Digital earth station electronics, block diagram.

As shown, the front end of the receiver is the same as in the analog receiver, but rather than an FM demodulator, a QPSK demodulator (most common today) is used. Its output supplies a descrambler that removes the signal scrambling done at the uplink to protect the programs from being pirated. Commonly, an analog decoder, fed through a program selector, is supplied to recover an analog signal from one of the programs being transmitted. Its output is normal analog audio and video. An ASI driver formats the digital video signal for output, as described shortly.

8.8 Digital Video Interfaces

Digital video interfaces connect one or (usually) more MPEG-2 transport streams (TS) from source equipment, such as a satellite receiver, an encoder, or a video-on-demand (VOD) server, to information sink equipment, most commonly QAM modulators. The MPEG-2 transport stream is described in Chapter 3. It includes video and audio as well as ancillary data for one or more programs. Note that multiple audio streams may be associated with one video stream. These audio streams can include surround sound and one or more languages. Since the audio, video, and ancillary data are transmitted in the same transport stream, there is only one connection for a program; there are no separate audio and video connections as you have in analog transmission. Also, it is most common to have more than one program in the same transport stream.
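
A transport stream is a sequence of fixed-length 188-byte packets, each beginning with a 0x47 sync byte and carrying a 13-bit packet identifier (PID) that tells the receiver which elementary stream (video, audio, or data) the packet belongs to. A minimal inspection sketch (Python):

    TS_PACKET_SIZE = 188
    SYNC_BYTE = 0x47

    def packet_pid(packet: bytes) -> int:
        """Return the 13-bit PID from an MPEG-2 transport stream packet."""
        if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
            raise ValueError("not a valid transport stream packet")
        # PID: low 5 bits of byte 1 followed by all 8 bits of byte 2.
        return ((packet[1] & 0x1F) << 8) | packet[2]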

8.8.1 Asynchronous Serial Interface (ASI)

A common interface for digital video is the asynchronous serial interface (ASI), specified by the DVB Project Office. This is a serial interface, as the name implies, transmitted over a single 75-ohm coaxial cable. A standard also exists for ASI over fiber-optic cable.

ASI is a one-way data transmission standard, with no acknowledgment of receipt of information and no built-in error correction. Because it is derived from the Fibre Channel standard, the signal is 8B/10B encoded, as explained in Chapter 19. For the coaxial cable version of the ASI, the transmitter supplies a baseband digital signal at an amplitude of 0.8 volts p-p into 75 ohms. The receiver must successfully detect a signal amplitude of 0.2 volts p-p. The return loss of the 75-ohm system is specified as 17 dB minimum.

The wire speed (actual data speed) of the data is 270 Mb/s with 8B/10B coding. Removing the 8B/10B coding results in a data speed of 80% of this, or 216 Mb/s. Included is a layer of ASI sync. One ASI link is able to contain enough data to fill several channels with 64- or 256-QAM data.
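
To put the 216-Mb/s figure in channel terms, the following sketch divides it by typical per-channel payload rates (the 64- and 256-QAM figures of roughly 27 and 38.8 Mb/s are the usual ITU-T J.83 Annex B values, assumed here for illustration):

    line_rate = 270e6                 # ASI wire speed, b/s
    payload = line_rate * 8 / 10      # 216 Mb/s once 8B/10B is stripped

    qam64 = 26.97e6                   # ~payload of a 6-MHz 64-QAM channel
    qam256 = 38.81e6                  # ~payload of a 6-MHz 256-QAM channel
    print(int(payload // qam64), int(payload // qam256))   # 8 and 5 channels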

Figure 8.18 illustrates the logical ASI. The input at the transmitter (internal to a piece of equipment, so you normally won’t see it) is a packet-synchronous MPEG-2 transport stream. As it enters the ASI transmitter, the signal is 8B/10B encoded. Chapter 19 explains 8B/10B encoding. Since the MPEG signal does not operate at the ASI data rate, there will be times when the ASI transmitter is ready to send more data and yet no data is ready to be sent. At such times, filler code words are inserted. This must be done to keep a constant bit rate on the ASI link. The signal is converted from parallel to serial for transmission, amplified and impedance matched, and then coupled to the connector. Transformer coupling is anticipated in the specification.

image

Figure 8.18 Digital video ASI.

After transmission, the signal is coupled into an amplifier through the input-coupling and impedance-matching mechanism. The bit clock can be recovered from the transmitted signal, and controls conversion of the signal from serial back to parallel for further processing. The ASI sync is detected, and the filler code words added in the transmitter are removed. 8B/10B coding is removed. The receiving circuit has a choice of how to recover the transport stream data sync. Either the signal can be reclocked from the receiving application’s clock, or the transport stream clock may be recovered from the incoming signal and used in the receiving application. In the former case, the signal is supplied to a first-in-first-out (FIFO) register, which is used for retiming a signal. It is clocked in from the transport stream’s clock and clocked out from the application’s clock. In the latter case, the transport stream clock is recovered and supplied to the application, along with the transport stream itself.6, 7

8.8.2 Ethernet for Video Transmission

Gigabit Ethernet is also popular as a method for transferring digital video in headends. The primary advantage is speed. ASI has a practical transfer capacity of about 160 Mb/s after all inefficiencies and overhead are removed. On the other hand, the practical transfer capacity of a dedicated gigabit Ethernet link can be as high as 900 Mb/s.8 This means that when coupling signals between high-density video sources and modulators, such as in a video-on-demand system, fewer interfaces are required with gigabit Ethernet than with ASI. Recall that there does not have to be a 1:1 correspondence between headend interface cables and channels; one headend interface can serve several 6-MHz channels’ worth of video, so long as the video is coming from one place (e.g., a VOD server) and going to one place (e.g., a cluster of QAM modulators).

When gigabit Ethernet is used for video transmission, it is common to embed the MPEG-2 transport stream into RTP packets (explained in Chapter 6 — this same packetization is done for VoIP). The RTP packets allow for removal of jitter introduced in the transmission process; they also keep track of packet numbers to allow reordering of packets received out of order. (Out-of-order packets should not be a problem for local point-to-point transmission of video.) The RTP packets are embedded in UDP packets, which enable one-way or broadcast transmission of packets. In turn, the UDP packets are embedded in IP packets, as explained in Chapter 5, and the IP packets are embedded in Ethernet packets.
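
A common packing convention (an assumption here, not something mandated by the text) is seven 188-byte transport packets per RTP payload, since 7 × 188 = 1,316 bytes leaves room for the RTP, UDP, and IP headers within the usual 1,500-byte Ethernet MTU. A sketch of the arithmetic (Python):

    TS = 188
    ts_per_rtp = 7                        # common choice for video over RTP
    payload = ts_per_rtp * TS             # 1316 bytes of transport packets

    ip_datagram = payload + 12 + 8 + 20   # RTP + UDP + IPv4 headers
    print(payload, ip_datagram)           # 1316 1356 -- fits a 1500-byte MTU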

8.9 Signal Handling in Headends

This section includes information that should be useful to practitioners setting up and maintaining headends. Included are notes about video and audio signal handling. RF signal handling is covered in more detail in Chapter 9.

8.9.1 Video Loop-Through

Figure 8.4 included the illustration of video loop-through. This is a technique whereby you can supply video to several devices, such as modulators, video monitors, or switchers, without using distribution amplifiers. It is normally recommended that no more than three devices be cascaded with a loop-through because each device has some adverse effect on the signal. This section explains what loop-through is and the most common alternative to it.

Figure 8.19 illustrates video loop-through. Figure 8.19(a) illustrates video being looped through three modulators. The devices could also be a video switcher, a picture monitor, and a modulator. A common application in headends is to loop the video through a BTSC stereo encoder, a scrambler, and then terminate it at a modulator. Each modulator in the example has the ability to either terminate the video in a 75-ohm resistor or not. It is necessary to terminate video in the characteristic impedance of the coaxial cable carrying video so as to prevent reflections that would damage the frequency response of the picture.

image

Figure 8.19 Video loop-through. (a) Looping through three devices. (b) Equivalent circuit for (a). (c) Incorrect application of loop-through. (d) Video distribution amplifier alternative.

If a signal is not terminated, it will be reflected toward the source. This reflected signal will combine with the direct signal, resulting in a frequency response that peaks at frequencies where the two signals are in phase and nulls where the two cancel.

So long as the coax is terminated at the last device in a chain of devices, there is not a problem. This is illustrated in Figure 8.19(a), where three modulators or other devices are in a cascade, with the video being terminated by the resistor in the last device. The modulators shown all have the ability to terminate or not. This is usually selected by a switch on either the back or front of the modulator.

Figure 8.19(b) illustrates the equivalent circuit of the arrangement of Figure 8.19(a). High input impedance amplifiers are used to sample the signal from the coax transmission line without significantly affecting the impedance of that line. At the last modulator, a 75-ohm resistor terminates the line. As illustrated in Figure 8.19(c), if the termination is placed other than at the last device, reflections will result. Signal power will be reflected from the last device in line, modulator 3. When it arrives back at modulator 2, several bad things happen. First, depending on the frequency and the length of cable between the two modulators, signal components at some frequencies will add, increasing the amplitude, whereas others will cancel. The frequency response at modulator 2 is thus corrupted.

Furthermore, reflected energy arriving at modulator 2 will see a double termination: the 75-ohm terminating resistor in modulator 2 and the 75-ohm impedance of the transmission line leading back toward the source. This impedance mismatch creates a 6-dB return loss, so one-half the signal propagates toward the source (to the left), and one-half propagates again toward modulator 3, upsetting the response there. Thus, it is essential that a looped-through signal be terminated at the last, and only at the last, device in the signal chain.

Similarly, it is unacceptable to parallel more than one signal path because reflections will cause frequency response problems. This would happen if you branched off from device 1 and routed signals to devices 2 and 3 on different cables.

Each device the signal is looped through must use a very high impedance amplifier to bridge the video signal. As a practical matter, it is not possible to prevent some signal degradation as you loop video from one device to another. Because of this, the number of devices a signal is looped through should be limited. A rule of thumb is that three devices can be looped through without excessive deterioration in signal quality. Another limitation is that if a device the signal is looped through must be removed, then service will be interrupted to other devices in the loop-through chain. Finally, looping through may result in awkward headend wiring. An alternative, when video is to be distributed to many locations, is to use a video distribution amplifier (VDA).

Figure 8.19(d) illustrates a common form of VDA though most have more than three outputs. An amplifier provides a voltage gain of 2 to the video signal so that a 1-V p-p signal at the input becomes a 2-V p-p signal at the output. A number of outputs are provided, each with a series 75-ohm resistor to establish the source impedance of the path. This resistor and the 75-ohm terminating resistor in the following device form a voltage divider, reducing the voltage back to the video standard interface of 1-V p-p.
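
The level arithmetic of the output network is a simple voltage divider. A sketch (Python):

    v_in = 1.0                      # volts p-p at the VDA input
    v_amp = 2.0 * v_in              # after the gain-of-2 amplifier
    r_series, r_load = 75.0, 75.0   # back-match resistor and termination
    v_out = v_amp * r_load / (r_series + r_load)
    print(v_out)                    # 1.0 V p-p at the terminated device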

8.9.2 Audio Wiring

Many of the same factors that apply to video also apply to audio, except that audio is usually less critical in terms of signal wiring than is video because of the lower bandwidth. It is possible, and many times recommended, that audio distribution amplifiers (ADAs) be used for much the same reasons that VDAs are used. However, it is sometimes acceptable to wire a number of audio-using devices in parallel without regard for reflections since typical headend wiring distances are insignificant compared with the wavelength of audio. It is important, though, to provide one and only one 600-ohm termination in an audio chain.

Professional audio sources are usually specified to drive 600-ohm loads. Some sources may actually exhibit an output resistance of 600 ohms though it is not at all uncommon for the source resistance to be less. Since audio lines are usually of insignificant length compared with a wavelength, this lower driving resistance is not a problem unless it is desired to drive a signal several miles. In this case, more care should be taken to match resistance on both ends.

A major headache in audio wiring is hum elimination. Most hum is due to ground loops and other sources of potential difference. Ground loop mitigation is the most magic of all magic arts: no matter how many words are written about it, the problems encountered in the field will not be covered. However, some pieces of advice are sufficiently general to be worth mentioning.

Figure 8.20 illustrates options in audio interconnection between an audio source and load. Figure 8.20(a) represents the interconnection method strongly recommended and practiced almost exclusively in professional audio applications. Balanced transmission is used, in which the source consists of two signals having equal resistance to ground (not shown) and exactly opposite phase. One way to produce such signals is to use two amplifiers, one for the + phase and another inverting that signal to generate the − phase, as shown. Another way is to use an audio transformer.

image

Figure 8.20 Options in audio interconnection. (a) Balanced audio. (b) Unbalanced audio with hum source. (c) Unbalanced source and load with transformer. (d) Unbalanced source and balanced load.

The signal is connected to the load on the right of Figure 8.20(a), using a balanced, shielded cable. Several cables made for this purpose are available. The two wires of the cable are terminated in a differential amplifier, whose output is an amplified version of the difference between the two input signals on the + and − inputs. By using balanced transmission, any hum pickup will be the same on the two wires of the balanced pair, so the hum will be subtracted out at the receiving differential amplifier (or transformer).
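
The cancellation can be demonstrated numerically. The following sketch (illustrative frequencies and amplitudes) models equal hum on both wires of the pair and shows the differential receiver recovering twice the signal with the hum removed:

    # Sketch of why balanced transmission rejects hum. The signal
    # appears with opposite phase on the two wires; hum couples
    # equally (common mode) onto both, so the difference cancels it.

    import math

    def receive(t, hum_amp=0.5):
        signal = math.sin(2 * math.pi * 1000 * t)       # 1-kHz audio
        hum = hum_amp * math.sin(2 * math.pi * 60 * t)  # 60-Hz pickup
        plus_wire = +signal + hum    # + phase plus common-mode hum
        minus_wire = -signal + hum   # − phase plus the same hum
        return plus_wire - minus_wire  # differential amplifier output

    # Output is 2 x signal, with the hum subtracted out.
    for t in (0.0, 0.0001, 0.0002):
        print(f"t={t:.4f}s  out={receive(t):+.4f}  "
              f"2*signal={2*math.sin(2*math.pi*1000*t):+.4f}")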

Notice that the shield is connected to ground at only one end. It is not correct to ground the shield at both ends since this can create a ground loop.9 Ground loops exist any time there is more than one path for current between pieces of equipment. As a practical matter, this is hard to prevent: racks must be bonded together and to ground, providing a path. Individual pieces of equipment must have third-wire safety grounds, providing a second path. After that, consider all the interconnecting wiring, much of which involves grounds. A loop can form the secondary of a giant transformer, the primary of which is all the ac wiring in the headend. The result is current induced in the ground paths, which sets up potential differences between equipment. Besides the transformer model, it is likely that many headends will experience potential differences between racks because of the various routes supply current can take to get back to the neutral of the primary power supply.

Figure 8.20(b) illustrates the unbalanced signal connection method used in most consumer equipment but not recommended for professional use. A single-ended output amplifier sends the signal to the receiving equipment, which receives it using a single-ended input. The shield of the cable serves as the return path. Also shown is a prime reason not to use this method in headends. If an ac voltage exists between the two ends, it will result in current on the shield. Because the shield has some resistance, a voltage drop will exist between the two ends. This voltage will be seen by the receiving amplifier, resulting in hum in the audio.

If the equipment at hand can accommodate only unbalanced audio, and if hum is a problem, then the solution of Figure 8.20(c) will have to be used. A high-quality audio transformer having a 1:1 turns ratio can be used at the receive side to isolate the ground on the two sides. The input side is connected to the center conductor and shield, but not to ground. The output side is connected to the amplifier input and ground. Transformers must be operated at their specified impedance. Small, low-cost transformers will distort the audio, particularly at lower frequencies, though high-quality professional transformers are available that do a good job.

Figure 8.20(d) illustrates an option where the sending equipment is single ended, but the receiving equipment can accommodate a balanced input. Balanced audio cable is used, with the sending ground connected to one of the wires of the pair. Again, note that the shield is connected at one end only.

In the end, though, we can say only that ground loops are so different from each other, and so difficult to resolve, that whatever works is the right thing to do.

8.9.3 Controlling Hum in Video

Hum in video systems using NTSC on 60-Hz power grids shows up as horizontal bars moving slowly up through the picture. The upward movement is due to the NTSC field rate being slightly slower than 60 Hz. Just as ground loops can cause great anguish for technologists attempting to interconnect audio circuits, they can be a problem with video. Because of the frequencies involved, balanced (differential) coupling of video is not as predominant as it is in audio applications, though it is possible to interconnect video signals using balanced transmission. It is far more common to use single-ended equipment to interconnect video. This makes resolution of ground loop problems even more difficult for video than for audio.

Figure 8.21 illustrates two methods that can be used to mitigate hum in video circuits. Shown are video source and terminating equipment. Both the signal and chassis grounds are shown, emphasizing that they are usually one and the same in video equipment. They may be kept separate internally to the equipment, but for the most part, they are the same outside the equipment.

image

Figure 8.21 Methods used to mitigate hum in video circuits. (a) Hum bucking coil. (b) Floating input connector.

Figure 8.21(a) illustrates use of a hum bucking coil. These may be purchased commercially and consist of a number of turns of 75-ohm coax wound through a high-permeability ferrite core. Typically, BNC connectors are provided on the case of the hum bucking coil. The connectors must be insulated from each other. To a current on either the center conductor or the shield alone (a common-mode current), the arrangement presents a high inductive reactance, which opposes the current. However, to a current traveling in one direction on the center conductor and the opposite direction on the shield (correct differential-mode signal transmission), the inductive reactance is not present: the inductance operates only on the net current in the coax, and in this case the net current is zero.

So long as the inductive reactance at 60 Hz is high enough, it is not possible for significant ground loop current to develop through the coax. Thus, the hum bucking coil can offer significant reduction in hum, but only if it is not shorted between the input and output shields. The short would afford ground current a way to avoid the inductive reactance of the coil.
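
As a rough numerical check, the sketch below computes the common-mode reactance for an assumed coil inductance of 100 mH; the actual inductance of a commercial hum bucking coil will vary:

    # Sketch of the hum bucking coil's effect. To common-mode (ground
    # loop) current the coil presents X_L = 2*pi*f*L; to the
    # differential video signal the net current is zero, so the
    # inductance does not act. The 100-mH value is assumed.

    import math

    L = 0.1      # henries, assumed common-mode inductance
    F_HUM = 60   # Hz, power-line hum

    x_l = 2 * math.pi * F_HUM * L
    print(f"Common-mode reactance at 60 Hz: {x_l:.1f} ohms")  # ~37.7

    # Compared with the small fraction of an ohm presented by a coax
    # shield, even this modest reactance sharply limits ground loop
    # current, provided the input and output shields are not shorted.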

Some video equipment does offer a balanced input driven from an unbalanced source, as illustrated in Figure 8.21(b). This is analogous to the situation of Figure 8.20(d). Usually, a special BNC connector is used, which has its shield insulated from the chassis. A differential amplifier recovers the voltage difference between the center conductor and the shield. The resistor to ground has a high value and is used to prevent a static charge from building up on the shield if the other end is not terminated. A floating BNC connector as shown is used on some professional equipment, but the cost prevents its use in lower-cost lines of video-processing equipment.

8.9.4 Handling Digital Video Signals in the Headend

Asynchronous serial interface (ASI) signals should be routed point-to-point and not looped through from one piece of equipment to another, even if the same signals are bound for more than one place. Use a digital distribution amplifier to route the signals to more than one place.

ASI signals are handled on the same type of 75-ohm coaxial cable used for RF and analog video, so the same handling precautions apply. The digital signal will in general be more immune to interference pickup than is the analog signal. However, signal radiation from a digital interface into an analog interface is a possibility. The same measures taken to minimize interference in analog signal transmission will work here. Use the best-quality coaxial cable possible; quad-shielded cable is recommended for headend use. Pay particular attention to the shield connections, because poor shield connections can encourage signal radiation. If digital pickup in analog cabling is a problem, also try physically separating the cables. If this fails, try ferrite beads on both the digital and the analog cables. Make sure the ferrite beads are specified for the frequency range over which the interference is a problem. A ferrite optimized for use at power frequencies will perform very poorly at several hundred megahertz.

A lot of signal transmission is on Ethernet cables today. These cables consist of a number of twisted pairs handling balanced signal transmission. As shown in the telephony chapter, balanced transmission is a good way to minimize pickup or radiation of signals, but it is not perfect. If you are having problems with radiation from Ethernet cables, you can also try appropriate ferrite beads on the Ethernet cables. You can also try shielded Ethernet cables (STP). Many times, though, the best strategy is to separate the Ethernet cables from victim cables.

8.10 Ad Insertion

Insertion of advertising in the programs transmitted by various programmers is a significant source of income for many cable systems. Many basic cable networks charge cable systems for the privilege of carrying the network. In turn, the networks provide “avails,” blocks of time each hour that the local cable system may sell to advertisers. Though practices vary, it is not uncommon that two blocks of time, each 2 minutes long, are made available at predetermined times each hour for insertion of advertising sold by the local cable system. Control of insertion is handled by the network, which transmits cue tones to initiate the play of local commercials and to signal the transfer back to the network.

In the past, ad insertion was done using analog tape players controlled by a computer. Today, file servers have replaced the tape players in new systems. Figure 8.22 illustrates a file server used for this purpose. Differences will exist from one system to another. The file server itself is usually a computer workstation optimized for transfer of large digital files from its internal hard disk drives to the outside world. It interfaces with a traffic system, which provides the schedule for playing commercials. Through a second interface, commercials are loaded onto the hard drives. Typically, the commercials are compressed using MPEG-2 encoding.

image

Figure 8.22 Video file server system used for ad insertion.

One file server is typically capable of supplying data fast enough to service a number of channels simultaneously. As shown in Figure 8.22, the file server “plays” the file out through a system interface to an individual analog channel interface, shown in more detail in Figure 8.23. The channel interface receives analog video and audio from the IRD or other source, along with cue tones. Usually, the cue tones are transmitted on analog sound subcarriers, since the program audio is transmitted digitally with the video. FM demodulators, such as DM2 of Figure 8.16, recover the cue tones. The cue tones are passed to the file server, where they initiate playing of commercials.

image

Figure 8.23 Individual channel interface in ad insertion system.

Figure 8.23 illustrates an individual channel interface. NTSC video is supplied to a switch, which allows replacement of the incoming video with the commercial. Sync is extracted from the incoming video so that the inserted video can be synchronized with it, a requirement for smooth switching between sources. An MPEG decoder receives video from the file server and converts it to NTSC video. The cue tones are supplied to a detector, which passes them on, ultimately to the file server. One to three channels of audio (three if SAP is included) are switched with the video.
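
The control flow of the channel interface can be summarized in a few lines. The sketch below is purely illustrative; the class, method, and cue-tone names are hypothetical, and a real system times the switch to the vertical blanking interval and briefly mutes audio around the splice:

    # Minimal sketch of the cue-tone-driven switching described above.

    class ChannelInterface:
        def __init__(self):
            self.source = "NETWORK"   # video/audio A-B switch position

        def on_cue_tone(self, tone):
            if tone == "START_AVAIL":
                # File server begins playing the scheduled spot through
                # the MPEG decoder; switch on the next vertical interval.
                self.source = "LOCAL"
            elif tone == "END_AVAIL":
                # Return to network programming.
                self.source = "NETWORK"

    iface = ChannelInterface()
    for tone in ("START_AVAIL", "END_AVAIL"):
        iface.on_cue_tone(tone)
        print(tone, "->", iface.source)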

The video and audio from the ad insertion system are supplied to a modulator and, usually, stereo encoder, for transmission to subscribers. If the inserted video is synchronized properly with the incoming video, and the actual switchover between incoming and advertising video is done in the vertical blanking interval, the subscriber does not see a change that indicates the video is being originated locally. Cue tones cannot be heard because they are transmitted on a channel other than the one used for program audio.

Ad insertion in digital programs is covered next.

8.11 Video On Demand and Digital Ad Insertion

Video on demand (VOD) is a popular service. VOD allows a subscriber to order his or her own custom video program from the headend. It is the cable equivalent of the video rental store, without late fees and rewind fees! Programs are stored on a server and listed in the electronic program guide (EPG) on the subscriber’s STT. When a subscriber orders a VOD program, the request is relayed to the headend on the return data channel from the STT. The VOD server must find a free time slot in a free channel on that subscriber’s node. The STT is told what RF channel to tune to and what program identifier (PID) to select on that channel. The server then “plays out” the program, multiplexing it with several other signals bound for other subscribers on that node.
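
The allocation step can be sketched as follows. The data structures and PID values are hypothetical; they simply illustrate finding a free slot on the subscriber’s node and telling the STT where to tune:

    # Sketch of the VOD session setup described above.

    class Channel:
        def __init__(self, frequency_mhz, pids):
            self.frequency_mhz = frequency_mhz
            self.free_pids = list(pids)   # unused program slots

    def allocate_session(node_channels):
        """Find a free slot on the subscriber's node; return the RF
        channel to tune and the PID to select, or None if full."""
        for ch in node_channels:
            if ch.free_pids:
                pid = ch.free_pids.pop(0)
                return {"rf_channel_mhz": ch.frequency_mhz, "pid": pid}
        return None   # node is full; deny or queue the request

    node = [Channel(651, [0x20, 0x21]), Channel(657, [0x20, 0x21, 0x22])]
    print(allocate_session(node))   # {'rf_channel_mhz': 651, 'pid': 32}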

VOD is a personal service sent individually to one subscriber. However, in order to reach that one subscriber, the program must be sent to all subscribers on that fiber node. Only the subscriber for whom the program is bound has the correct descrambling key to allow recovery of the program. Usually the subscriber has the ability to perform “VCR-like” functions, such as pause, rewind, and fast-forward. The subscriber is allotted a viewing window somewhat longer than the run time of the program, allowing for meaningful pausing and rewinding.

Sometimes two or more levels of file server hierarchy are employed, with a file server holding the more popular titles located in a hub close to a group of subscribers. A file server in the master headend or some other location keeps less popular programming and supplies it when requested. A significant issue in VOD is the need to keep accurate track of what programming is available and where it is located. The program owners may have strong feelings about controlling where the programming resides and how it is distributed from there.

A VOD file server can be constructed architecturally the same as an ad insertion file server, except more storage capacity is required. To improve reliability, VOD servers may employ redundant array of independent (or inexpensive) disks (RAID) technology, a category of disk drives that employ two or more drives in combination for fault tolerance and performance. In RAID systems, multiple disk drives are used to store different portions of the program data such that the failure of one disk drive does not cause the loss of any program material. There are several levels of RAID drives, of which levels 0, 3, and 5 are the most popular.

Level 0 provides data striping (spreading out blocks of each file across multiple disks) but no redundancy. This improves performance but does not deliver fault tolerance. Level 3 is the same as Level 0 but also reserves one dedicated disk for error-correction data. It provides good performance and some level of fault tolerance. Level 5 provides data striping at the byte level and also stripes error-correction information. This results in excellent performance and good fault tolerance.10 In other words, RAID spreads out the bits of each byte in the file over several disk drives and puts the data back in the right sequence upon play out. Error correction is added to allow the file to be recovered if any one disk drive fails. Typically, the disk drives are hot-swappable, meaning you can change out a failed drive without stopping operation of the file server. The server can operate with one disk down by doing on-the-fly error correction. As soon as the faulty disk drive is replaced, the server can rebuild its contents by error-correcting the surviving data.
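
The parity idea behind this fault tolerance is simple exclusive-OR arithmetic. The sketch below (illustrative data, three data stripes plus one parity stripe) shows a lost stripe being rebuilt from the survivors:

    # Sketch of the XOR parity used by RAID levels 3 and 5. If any
    # one drive fails, its stripe is the XOR of all the survivors.

    def xor_bytes(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data_stripes = [b"VIDE", b"O FI", b"LE  "]  # three data drives
    parity = xor_bytes(data_stripes)            # stored on parity drive

    # Simulate losing drive 1: XOR of the parity and the surviving
    # stripes reconstructs the missing stripe on the fly.
    survivors = [data_stripes[0], data_stripes[2], parity]
    rebuilt = xor_bytes(survivors)
    assert rebuilt == data_stripes[1]
    print("Rebuilt stripe:", rebuilt)   # b'O FI'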

A related service is a network personal video recorder (PVR). The idea is to use a video file server to capture programs as they are broadcast and then to allow subscribers to view them later as a VOD program.

8.11.1 Digital Video Rate Management

Cable systems have long inserted advertising on analog channels. But with the advent of digital television, ad insertion had to be extended to this medium. In principle, the block diagram of Figure 8.23 applies to insertion in digital video streams, but there are some new issues to consider. The first issue in digital ad insertion is that you have to exit the programming stream in favor of the advertising stream. You do this once you have transmitted enough information to recover all frames of the program picture that are to be displayed prior to starting the advertisement. Then you must start the advertisement with an MPEG I-frame, since it is the only frame type that contains enough information to allow the first frame of the advertisement to be displayed. It is not enough to do this just for video; you must make the audio match what the video is doing, and audio is compressed independently of the video. Typically, the audio will be muted for a short time during the switchover. Finally, since the advertisement is being inserted into a multiplex of channels that have usually been statistically multiplexed, you have to ensure that introduction of the advertisement does not overrun the capacity of the RF channel.

The first job, digital program insertion, is covered in Section 3.5.4. Figure 3.26 shows the steps that must take place to leave the normal program stream and go to the ad insertion stream. The second issue, managing the data rate to stay within the available bandwidth, is covered here. This discussion applies any time you modify the content of a digital video stream in a VBR environment (see the next subsection). This happens when you insert a local commercial and when you groom an incoming video feed to select some programs and combine them with programs from a different incoming stream.

Constant and Variable Bit Rate

Video programs may be transmitted using either a constant bit rate (CBR) or a variable bit rate (VBR). The terms are self-explanatory. If a program is sent using CBR, the data rate is set to whatever is necessary to provide a good picture at all times. If fewer bits are required at a given moment, filler bits are added (“stuffed”) to round out the set bit rate. If, however, a picture would benefit from a higher bit rate for a time, it can’t have one; the encoder must make do with the bit rate allocated.

VBR transmission is more efficient because it transmits only the data rate necessary to recover a good picture. VBR transmission frequently originates where video is encoded, using a statistical multiplexer, or stat mux. A stat mux monitors all the data coming from a number of encoders whose outputs are being multiplexed into one transmission stream. The transmission stream, of course, is at a constant bit rate, required to enable the receiver to lock to the transmission. But the transmission stream can be composed of a number of VBR streams managed by the stat mux. The stat mux buffers each encoder’s output, multiplexing all outputs into the output transmission stream. If the buffer starts to empty, meaning that not enough bits are going to be available to make up the output transmission stream, then one of three things can happen: (1) the stat mux can call for more data from one or more of the encoders, even if the additional data is not needed for quality video; (2) the data stream is stuffed with filler bits; (3) opportunistic data is transmitted, that is, data held in queue until there is room for it.

On the other hand, there will be times when the stat mux buffer starts to get too full, indicating that there is not going to be room in the transmission stream for all of the bits. In that case, the stat mux must tell one or more encoders to back off and encode to a lower bit rate, even if the target video quality is compromised. Hopefully, the data peak is over in a fraction of a second, and viewers remain unaware that an encoder had to compromise its target quality momentarily.
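
The decision loop can be sketched in a few lines. The thresholds and action names below are hypothetical; they only illustrate the underflow remedies and the back-off response described above:

    # Sketch of the stat mux buffer logic. A real multiplexer works
    # on much finer time scales and weights encoders by priority.

    def stat_mux_step(buffer_fill, opportunistic_queue):
        """Return the action for one multiplexing interval, given the
        output buffer fill (0.0 = empty, 1.0 = full)."""
        if buffer_fill < 0.25:
            # Not enough bits to fill the constant-rate output:
            # request more data, send opportunistic data, or stuff
            # null packets as a last resort.
            return ("send opportunistic data" if opportunistic_queue
                    else "stuff null packets")
        if buffer_fill > 0.75:
            # Too many bits on the way: tell one or more encoders to
            # back off, compromising target quality momentarily.
            return "tell encoders to reduce rate"
        return "pass through"

    for fill in (0.1, 0.5, 0.9):
        print(f"fill={fill:.1f}: {stat_mux_step(fill, [])}")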

Most VOD systems store material in CBR format, though there could be savings of maybe 30% on storage space by using VBR.11 The difficulty with VBR is that when you multiplex several VOD streams together, you have a problem managing the data rate. This job is simplified if you work with CBR.

Many program multiplexes are encoded using VBR, however, in order to gain efficiency in transmission. The problem comes in when you pull some program streams out of a VBR transmission stream and combine them with other VBR streams, from an ad insertion system, from a broadcaster, or from somewhere else where you have no control over the data rate to which the material was encoded.

Figure 8.24 illustrates a VBR transmission stream with four programs. In order to provide better quality for selected programs, we have set a peak rate for program 2 that is three times that of programs 1 and 4, and a peak rate for program 3 that is twice that of programs 1 and 4. You can see that at two instances in the time monitored, the total rate for the four programs exceeds the capacity of the channel. If the programs were being put together by a stat mux, it would have one or more encoders back off during the peaks. You can also see several valleys in the total rate, during which opportunistic data may be inserted if available.

image

Figure 8.24 VBR transmission stream with four programs.
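
The capacity check itself is just addition. The sketch below uses illustrative rates in the 1:3:2:1 peak ratio of Figure 8.24 against an assumed channel payload of roughly 27 Mb/s (approximately that of a 64-QAM channel):

    # Sketch of the capacity problem Figure 8.24 illustrates.

    CAPACITY = 27.0   # Mb/s, assumed channel payload

    # Sampled instantaneous rates (Mb/s) for programs 1-4 over time:
    samples = [
        (3.0, 9.0, 6.0, 3.0),   # moderate activity: 21.0 Mb/s, fits
        (3.5, 12.0, 8.0, 4.0),  # 27.5 Mb/s, exceeds capacity
        (2.0, 5.0, 4.0, 2.0),   # 13.0 Mb/s, a valley: room for
                                # opportunistic data
    ]

    for t, rates in enumerate(samples):
        total = sum(rates)
        status = "OVERRUN" if total > CAPACITY else "ok"
        print(f"t={t}: total {total:.1f} of {CAPACITY} Mb/s  {status}")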

The problem comes in when you have assembled this transmission stream from several independently encoded programs and you have no control over the incoming data rate of each. Suppose, for example, that program 4 is a commercial you have inserted into a transmission stream. You have no control over the three program streams you are passing through, and the commercial substituted for program 4 is preencoded, so you have no way to influence its data rate either. Specialized equipment called a rate changer, or a groomer, has been developed for this situation.

It is possible to decode each program back to uncompressed video and to reencode it using a stat mux system. There are a couple of difficulties with doing so. The first is that it is rather expensive, and it must be done for each program in the multiplex. The other problem is that, as pointed out in Chapter 3, MPEG-2 encoding is a lossy process. If you cascade multiple encodings, you are going to damage the picture. Indeed, the second encoding process may need a higher bit rate just to handle artifacts introduced by the first.

In order to avoid these problems, a rate changer analyzes the picture and makes a judgment about what information can be dropped from the program stream without doing noticeable damage. It may decide that it doesn’t need to encode the higher-order DCT coefficients with as many bits as were used originally, or it may decide that it can drop some of the higher-order coefficients altogether. Figure 3.9 and the associated text show what is going on here.
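
One such operation can be sketched directly. The function below is hypothetical; it operates on a block’s coefficients in zigzag order and zeros the higher-order terms, the kind of trimming a rate changer might perform:

    # Sketch of one trick a rate changer can use: drop the
    # higher-order DCT coefficients of a block, which carry fine
    # detail, to shed bits without fully re-encoding the picture.

    def trim_block(zigzag, keep):
        """Keep the first `keep` coefficients in zigzag order and
        zero the rest; long runs of zeros cost very little to send."""
        return zigzag[:keep] + [0] * (len(zigzag) - keep)

    # An 8x8 block has 64 coefficients in zigzag order; the low-order
    # (early) coefficients dominate the picture.
    block = [312, -45, 22, 9, -6, 4, 3, -2] + [1] * 56
    print(trim_block(block, keep=8)[:12])
    # [312, -45, 22, 9, -6, 4, 3, -2, 0, 0, 0, 0]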

8.12 Summary

This chapter has provided information about the common types of equipment found today in headends. This equipment includes processors, modulators, demodulators, stereo encoders, earth station receivers, and advertising insertion equipment. We have demonstrated typical circuit sets and features, recognizing that a number of ways to do each job exist, and that not every piece of hardware will have every feature shown. The next chapter will cover what happens to signals after they exit the individual pieces of equipment and will also discuss other techniques related to headend use.

Endnotes

* PAL countries use 38.9 MHz for the picture carrier IF and 33.4 MHz (PAL-B/G) or 32.9 MHz (PAL-I) for the audio IF. In China, 38 MHz is used for the picture carrier IF.

* The Figure is displayed as would be a television signal, with the lowest resultant amplitude at the top. The gain has been normalized in the three cases to produce the same peak-to-peak signal.

* If the monaural meter drive signal is limited to a 15-kHz frequency response, it may be marginally useful as a deviation indicator, but this is not recommended.

1. www.wegener.com. See data sheets for DTV 700 Series.

2. Donald G. Fink, Electronics Engineers’ Handbook. New York: McGraw-Hill, 1975, pp. 2–26.

3. Christopher Bowick, The Importance of Setting and Maintaining Correct Signal and Modulation Levels in a CATV System Carrying BTSC Stereo Signals. This has appeared in several places, but a useful source is the BCT/E Certification Program Reference Bibliography Reprint Manual, Society of Cable Telecommunications Engineers, 140 Philips Rd., Exton, PA 19341-1318. Telephone 800-542-5040. Many other useful reprints are also found in this volume.

4. IBA, BREMA, BBC, NICAM 728: Specification for Two Additional Digital Sound Channels with System I Television, 1988. The current status of this document cannot be determined. However, the Instituto de Comunicações de Portugal has published substantially the same information, which can be found at http://www.icp.pt/legisuk/p316-93uk.htm.

5. Televerket Radio Laboratory (Sweden), Digital Multisound in Television — Feasibility of Filtered 728-Kbit/s NICAM with 5.85-MHz Carrier Frequency in CCIR System B. Adjacent Channel Operation in Cable Networks. Paper is dated January 16, 1987, and was privately furnished to the author. Publication information is unknown.

6. DVB Project Office, Interfaces for CATV/SMATV Headends and Similar Professional Equipment. DVB Document A010, rev.1, 28 May 1997.

7. ETSI, Digital Video Broadcasting (DVB); Professional Interfaces: Guidelines for the Implementation and Usage of the DVB Asynchronous Serial Interface (ASI). ETSI TR 101 891 V1.1.1 (2001-02).

8. Per private correspondence with Bob Gaydos, Concurrent Computer Corporation.

9. Linc Reed-Nickerson, Audio Levels Scream for Attention. CED, October 1997, p. 54ff.

10. www.webopedia.com

11. Per private correspondence with Bob Gaydos, Concurrent Computer Corporation.
