Chapter 9 Headend Operation

9.1 Introduction

Previous chapters in the headend section have covered the input devices to headends and the electronics that compose the bulk of the headend equipment. This chapter covers a number of issues related to headend operation. First, the accepted band plan for cable television is presented. This defines the accepted channel number versus frequency plan adopted by both the cable television and consumer electronics industries, and mandated by the FCC.

Management of optical and RF connections and signals is important in today’s headends, so these topics are covered in some detail. This topic includes steering signals to and from nodes and organizing wiring such that performance is optimized. Finally, topics related to the assessment of the quality of signals in headends are covered.

9.2 Cable Television Band Plan

For reasons shown later, the frequencies used for transmission of television signals are not contiguous, as they are in AM or FM radio broadcasting. Therefore, television stations are identified, not by frequency as is the case with radio, but by channel number. The channel numbers are easy for viewers to remember and avoid the confusion of noncontiguous frequency assignment.

9.2.1 Concept of Image Response and Historical Background

The VHF television assignments are chosen to allow certain simplifications in tuner design. Recall from Chapter 2 that the NTSC television signal modulated onto an RF carrier has a picture carrier, a color carrier located 3.58 MHz above the picture carrier, and a sound carrier located 4.5 MHz above. The spectrum from 0.75 MHz below the picture carrier to 4.2 MHz above the picture carrier (at RF) is occupied with visual information. Engineers often talk of the channel frequency as the frequency of the picture carrier, understanding that the other carriers and the modulation sidebands are also present.

Inside a demodulator or television receiver, the incoming signal is quickly converted to an intermediate frequency (IF). This is a standard frequency at which a tuned signal on any channel will be amplified, filtered, and detected. The spectrum of the signal at IF is reversed, or inverted, from that at RF, due to use of a high-side local oscillator (LO) in the TV receiver. Using a high-side rather than low-side LO mitigates certain problems that would create major performance limitations. Local oscillators are used to convert an incoming frequency to another frequency and sometimes are used to convert from one internal frequency to another. When you tune a TV or radio, you are changing the frequency of the first local oscillator such that the difference between its frequency and the desired incoming frequency equals the IF frequency.

Figure 9.1 illustrates the frequency conversion process, starting with the RF frequencies transmitted over the air or on cable and ending in the IF band normally used today. The top portion is taken from Figure 8.10 and shows the conversion of the received signal to IF. The lower portion illustrates what happens when a TV is tuned to channel 2. The local oscillator operates at 101 MHz for channel 2. The mixer takes the difference between the LO frequency (101 MHz) and the RF picture carrier (55.25 MHz). This difference is 45.75 MHz, the standard picture carrier IF used in NTSC television reception. Also shown are the conversions for the sound and color carriers. Notice that the spectrum of the channel at IF is inverted from that at RF, due to subtraction of the RF frequencies from the local oscillator frequency.


Figure 9.1 Operation of high-side local oscillator at channel 2.

If bandpass filter BPF2 is not present, and if there exist incoming signals at cable channels 17 and 18, then an “image” is produced in the IF. The desired IF signal is produced by subtracting the incoming frequency from the LO frequency, but the image is produced by subtracting the LO frequency from the incoming image frequency. This is shown in the IF spectrum, where the channel 18 picture carrier is converted to 44.25 MHz by the same local oscillator that converts the desired signal. The presence of these “spurious” signals in the IF passband will produce interference beats. No amount of filtering at IF (represented by BPF3) can remove the interference. The only way to remove it is to never allow the image-producing signal to reach the mixer. It can be removed by filter BPF2 before frequency conversion. This filter is tuned to the desired channel.

The image frequency is given by f_image = f_RF + 2f_IF. It is desirable that the image frequency be removed as far as possible from the desired RF frequency to simplify BPF2. The filter must allow adequate rejection of the image frequency so that the image does not adversely affect the received signal. If the IF frequency is raised, then the transition region of BPF2 can get larger, resulting in a simpler filter. The transition region is the frequency region between where the filter must pass signal (the passband) and the frequency at which it must provide a specified rejection (the stopband).
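As a quick numeric check of this relationship, here is a minimal Python sketch (the helper name is ours, not from any standard library) that reproduces the channel 2 example of Figure 9.1.

```python
# Minimal sketch: image frequency for a single-conversion tuner with a
# high-side LO, using the channel 2 example of Figure 9.1.

PICTURE_IF_MHZ = 45.75                       # standard NTSC picture-carrier IF

def image_frequency_mhz(f_rf_mhz, f_if_mhz=PICTURE_IF_MHZ):
    """f_image = f_RF + 2*f_IF for a high-side local oscillator."""
    return f_rf_mhz + 2.0 * f_if_mhz

f_rf = 55.25                                 # channel 2 picture carrier (MHz)
f_lo = f_rf + PICTURE_IF_MHZ                 # 101.00 MHz high-side LO
f_img = image_frequency_mhz(f_rf)            # 146.75 MHz

print(f"LO = {f_lo} MHz, image of the picture carrier = {f_img} MHz")
# 146.75 MHz falls in cable channel 18 (144-150 MHz), which is why channel 18
# energy appears at IF when BPF2 is omitted.
```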

VHF off-air allocations don’t include the image frequency for any channel. Thus, a TV receiver designed for off-air reception doesn’t have to worry about a television signal being present on the image frequency. Furthermore, local oscillator frequencies don’t fall within a channel. If a set radiates power at the local oscillator frequency, there is no danger of causing interference to TV reception. The UHF band does not include these desirable characteristics since channels there are contiguous over a frequency range of 336 MHz. However, the channel allocation plan the FCC uses includes “taboo” channels. When one channel is assigned in a city, several other channels become unavailable owing to image, local oscillator, and adjacent channel considerations. Assignment of channels for digital television broadcasting violates the taboo channels for the first time in the history of North American television broadcasting.

9.2.2 Introduction to the Modern Cable Television Band Plan

The need to use spectrum efficiently has driven the cable industry to include channels that violate the off-air plan and taboos. That this has worked is due in part to the fact that signal levels in cable are controlled much better than they can be controlled over the air. When all else fails, set top terminals are used to combat image problems. Set-top terminals use a dual-conversion frequency plan with a first IF above the tuning range, so image rejection is not a problem.

Early on, the cable industry began using the so-called midband frequencies between the FM band and the VHF high band (channels 7–13). Initially, these were designated using letters, beginning with channel A (120–126 MHz). When need and technology opened up higher frequencies, the spectrum above channel 13 (210–216 MHz) was called the superband, and channel designations continued to use sequential letters. About 1980, set top terminals began using all-electronic tuning in order to permit the use of economical remote control. The low-cost numerical displays were not capable of displaying letters, so set top manufacturers began using numerical channel designations, beginning with channel 14 for what had been known as channel A.

Each manufacturer used his or her own allocation plan, though to an extent a consensus developed: what had been channel A became channel 14, and sequential numbers were used from that point on. The problems developed when they got to channel 36 (picture carrier 295.25 MHz). This was the top frequency being used when mid- and superband numerical designations became popular. In order to get a little more capacity, cable systems were sometimes putting TV signals in the FM band, 88–108 MHz, and in the 108–120 MHz region previously avoided because of concern for potential aeronautical interference. In addition, HRC and IRC plans were becoming popular. As we will show, use of these plans opened up another 6-MHz block between channels 4 and 5. When set-top manufacturers added all these channels to set-top terminals, they took different directions in channel number assignment.1

Shortly after, TV manufacturers began using tuners that tuned the cable channels. With such tuners and 75-ohm inputs, the sets were called “cable ready” and were sold as not needing set-top terminals.* Lacking a standard channel assignment plan, each TV manufacturer copied the plan used on whatever set-top terminal he or she looked at. This led to considerable confusion on the part of the subscriber, who had to be trained that, in order to get such and such a program, he or she would have to tune to channel 37 (for example) with one set top, channel 99 with another, or maybe channel 1 if he or she had a TV without a set-top terminal. The cable television and consumer electronics industries began discussing a standard band plan, under the auspices of the joint EIA (Electronic Industries Association) and NCTA (National Cable Television Association) engineering committee. The first plan, produced in the mid-1980s, was assigned the designation IS-6, an EIA designation for an interim standard. It covered allocations to 300 MHz. Even as an interim standard, it received widespread support from both industries. Set-top and TV manufacturers began applying it to new product. (However, even today there is product in use that predates this standard.)

Before IS-6 could be adopted formally, the industry began using frequencies above 300 MHz. The committee extended the plan to cover channels through 1 GHz and to define how channels above this would be allocated. The new standard was designated IS-132. The prototype standard was rather quickly produced, but a number of years elapsed before it was officially approved and sent to the American National Standards Institute (ANSI). The final EIA-assigned standard number was EIA-542. Later the standard was upgraded to show digital channel allocations, in the process becoming EIA/CEA-542 Rev. A. This standard is described in the remainder of this section. Some seemingly strange violations of a straightforward channel number allocation plan exist for historical reasons: a new standard must not cause undue problems for equipment already in use when it is adopted.

9.2.3 The EIA-542 Band Plan

Outline of the Plan

Figure 9.2 shows the relation between the standard cable channels of EIA-542-B and the FCC frequency assignments.2 (Besides the standard assignments, EIA-542-B defines HRC and IRC frequency plans, shown in Figure 9.4.) The boxes in Figure 9.2 contain the FCC channel numbers for off-air assignments plus the traditional letter designations for cable-only channels. The new channel numbers as defined by EIA-542 are indicated above the boxes. Below the boxes are representative frequencies for the channel boundaries. Finally, the traditional cable band designations are shown below the frequencies. These band designations are not used very often today.


Figure 9.2 Relation between cable channels and FCC assignments.


Figure 9.4 Relation between standard, IRC, and HRC plans.

Channels 2–13 are identical to the FCC assignments for off-air broadcasting. The midband comprises channels 14 (121.25-MHz picture carrier) through 22 (169.25 MHz). Channel 23 (217.25 MHz) is the first channel above channel 13. From channel 23, the channel numbers are progressive through channel 94 (643.25 MHz). Channels 95–97 lie in the FM band, and channels 98–99 define the frequencies between the FM band and channel 14. Channel 100 picks up at 649.25 MHz. From there, the plan progresses one channel number per 6 MHz, up to channel 158, 997.25 MHz, where the present revision of EIA-542 stops. Above this frequency, the plan specifies that the channel numbers shall continue to progress one channel number for each contiguous 6-MHz band.
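The progression just described is easy to capture in code. The following Python sketch (the function name is ours) implements only the ranges spelled out above; channels 95–99 are intentionally omitted because their exact carriers are not listed here.

```python
# Sketch of the EIA-542 standard-plan picture-carrier progression described
# above. Channels 95-99 (FM band and 108-120 MHz) are deliberately omitted.

VHF_PICTURE_CARRIERS_MHZ = {                 # FCC off-air VHF assignments
    2: 55.25, 3: 61.25, 4: 67.25, 5: 77.25, 6: 83.25,
    7: 175.25, 8: 181.25, 9: 187.25, 10: 193.25, 11: 199.25,
    12: 205.25, 13: 211.25,
}

def picture_carrier_mhz(channel):
    """Picture-carrier frequency (MHz) for a standard-plan cable channel."""
    if channel in VHF_PICTURE_CARRIERS_MHZ:   # channels 2-13
        return VHF_PICTURE_CARRIERS_MHZ[channel]
    if 14 <= channel <= 22:                   # midband, 121.25-169.25 MHz
        return 121.25 + 6 * (channel - 14)
    if 23 <= channel <= 94:                   # 217.25-643.25 MHz
        return 217.25 + 6 * (channel - 23)
    if channel >= 100:                        # 649.25 MHz and up, 6 MHz per channel
        return 649.25 + 6 * (channel - 100)
    raise ValueError(f"channel {channel} is not covered by this sketch")

assert picture_carrier_mhz(94) == 643.25
assert picture_carrier_mhz(158) == 997.25
```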

Relation to the Off-Air UHF Band

Figure 9.3 shows the relation between the cable plan and FCC assignments in the UHF television band. The FCC plan is offset up 2 MHz from what we would expect if we were to keep adding 6 MHz to the VHF allocations. The cable plan fills the spectrum completely from above channel 13 to the UHF band. Consequently, the cable band edges are lower than the off-air band edges by 2 MHz. This can cause problems when these channels are used on cable systems. In the UHF region, the channel offset can result in a local broadcaster rendering two channels less desirable on the cable. For example, assume channel 15 was used off-air. The picture carrier from off-air channel 15 will fall in cable channel 66. The aural carrier from off-air channel 15 will fall in cable channel 67.


Figure 9.3 Relation between cable and FCC assignments in UHF band.

If the field strength of off-air channel 15 is high, the sound carrier will produce beats in cable channel 67. On the other hand, if the field strength of off-air channel 15 is not too high, the offset can help cable reception. Because the picture carriers are offset by 2 MHz, the visibility of any beat produced is less than if the offset were not present. The so-called W curve, described in Chapter 2, shows that an interfering carrier offset from the picture carrier by 2 MHz is not as visible as one that is closer.

IRC and HRC Assignments

In order to control the effects of distortion in cable television distribution plant, some systems employ alternative frequency plans. Of a number of plans that have been tried over the years, only two alternative plans survive. Figure 9.4 contrasts these two plans with the off-air (standard) assignments. The low VHF band is shown since it is the most affected.

The incrementally related coherent (IRC) plan (lowest line of the figure) places all carriers, except for channels 5 and 6, on the same frequency as in the standard plan. All picture carriers are phase locked to a comb generator, whose output is derived from a common 6-MHz oscillator, but with the comb shifted 1.25 MHz high in order to place the picture carriers on their normal assignments. We can show that many third-order distortion products fall coherently on other carriers in this case, yielding lower beat visibility. The 4-MHz gap between channels 4 and 5 becomes 6 MHz because channels 5 and 6 must be shifted in order to accommodate IRC assignments. They are shifted up by 2 MHz, invading the FM band. This opens a 6-MHz gap between channels 4 and 5, which is assigned the designation “channel 1.” (Some cable systems are using 48–54 MHz and calling it channel 1. This is not recognized in the plan.)

The other plan is the harmonically related coherent (HRC) plan (top row in the figure). All picture carrier frequencies are at harmonics of 6.0003 MHz. We explain the significance of the 6.0003-MHz frequency later. The picture carrier frequencies are derived by phase locking the picture carrier generators to a 6.0003-MHz comb. We can show that this causes all second- and third-order distortion products to zero beat other carriers, providing somewhat more immunity against distortion than does the IRC plan. All carriers are shifted low by 1.25 MHz, to fall on the harmonics of 6.0003 MHz. Again, the exceptions are channels 5 and 6, which are shifted high by 0.75 MHz. The combination of shifting channel 4 lower by 1.25 MHz, and channel 5 higher by 0.75 MHz, again creates a 6-MHz opening that is assigned the designation “channel 1.”
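To give a feel for where the HRC carriers land, the short Python sketch below snaps each shifted low-VHF carrier to the nearest harmonic of the 6.0003-MHz comb. The helper is our own simplification; EIA-542 contains the authoritative channel tables.

```python
# Rough sketch of low-VHF HRC picture-carrier placement (MHz). The helper is
# a simplification; see EIA-542 for the authoritative tables.

HRC_COMB_MHZ = 6.0003

def hrc_carrier_mhz(standard_carrier_mhz, shift_mhz):
    """Snap (standard carrier + shift) to the nearest harmonic of the comb."""
    n = round((standard_carrier_mhz + shift_mhz) / HRC_COMB_MHZ)   # harmonic number
    return round(n * HRC_COMB_MHZ, 4)

low_vhf_standard = {2: 55.25, 3: 61.25, 4: 67.25, 5: 77.25, 6: 83.25}
for ch, std in low_vhf_standard.items():
    shift = +0.75 if ch in (5, 6) else -1.25      # channels 5 and 6 move up
    print(ch, hrc_carrier_mhz(std, shift))
# 2 -> 54.0027, 3 -> 60.003, 4 -> 66.0033, 5 -> 78.0039, 6 -> 84.0042
print("'channel 1' slot:", round(12 * HRC_COMB_MHZ, 4))   # 72.0036 MHz
```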

Operation in the Aeronautical Bands

Cable systems wishing to put signals in the frequency bands utilized in the aeronautical service (108–136 and 225–400 MHz) are required to periodically measure leakage from their systems and to repair any leaks that exceed a certain standard. The standard is much more stringent than is the standard to which consumer electronics manufacturers are held. In addition to controlling leakage, signals carried on the cable system are required to be offset in frequency from the nominal aeronautical carriers. The nominal offset puts the cable picture carriers at the edges of the aeronautical channels. The offset requirements apply to any carrier that exists in the distribution plant at a level exceeding 10⁻⁴ watts (+38.75 dBmV), measured in a noise bandwidth of 25 kHz and averaged over 160 μs.3 With realistic distribution plant signal levels, this often includes picture carriers, but rarely includes sound or data carriers.

Channel Assignments in the Aeronautical Band

Figure 9.5 shows the relation between cable carriers and aeronautical channels. The aeronautical bands are subdivided into aeronautical radio navigation and radio communications bands. The navigation channels are assigned channels in 50-kHz increments, and the communications bands in 25-kHz increments. The aeronautical carrier frequencies fall precisely on the nominal TV picture carrier frequencies. Because of this, cable systems must offset their picture carriers so that they fall on the aeronautical channel edges. In the aeronautical radio navigation bands, this necessitates a 25-kHz offset. In the communications band, the offset is 12.5 kHz.* The FCC permits any aeronautical channel edge to be occupied by a picture carrier. In order to simplify the allocations, CEA-542-A or -B recommends picture carriers be placed at the next channel edge higher than the nominal frequency. This seemed to be the most common assignment used before the recommended standard was written.


Figure 9.5 Relation between cable picture carrier frequency and aeronautical channels.

Some cable systems, though, offset to the next lower channel edge, which is shown as the secondarily recommended picture carrier. One reason a system may want to use the negative offset is to help identify the source of leakage. Cable systems must formally measure leakage from their systems frequently, and must informally monitor leakage continuously. If two systems are adjacent to each other, it can be difficult to determine which one is responsible for a leak. When this condition exists, one of the systems will employ positive offsets, and the other negative offsets. Thus, by measuring the frequency of the leakage, it is possible to identify the system with the leak, allowing the correct repair crew to be dispatched. The recommended standard permits, but doesn’t encourage, any offset up to ±250 kHz. It is felt that most TVs will accommodate this much offset without difficulty.

The tolerance on the carrier frequencies is specified by the FCC as ±5 kHz in the aeronautical band (elsewhere the tolerance is ±25 kHz). In order to accommodate HRC assignments within this limit, it is necessary to use a comb generator with a fundamental frequency of 6.0003 MHz and a frequency tolerance of ±1 Hz. Section 8.4.1 describes an HRC comb generator.
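A quick back-of-the-envelope check shows why the comb must be held so tightly: the carrier error is the comb error multiplied by the harmonic number. The sketch below is a simplified budget that ignores the offset requirement itself and any other error sources.

```python
# Simplified scaling of HRC comb-generator error to carrier-frequency error.
# Assumes the worst case is the highest carrier in the plan, near 997 MHz.

COMB_MHZ = 6.0003
TOP_CARRIER_MHZ = 997.25
FCC_TOLERANCE_HZ = 5_000            # +/-5 kHz in the aeronautical bands

n_top = round(TOP_CARRIER_MHZ / COMB_MHZ)      # roughly the 166th harmonic
print(f"+/-1 Hz at the comb -> +/-{1.0 * n_top:.0f} Hz at harmonic {n_top}")

# Largest comb error that would still keep that carrier within +/-5 kHz,
# leaving nothing for other error sources:
print(f"comb error budget with no margin: +/-{FCC_TOLERANCE_HZ / n_top:.0f} Hz")
```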

FM Band Usage

In some cable systems, spectrum is at such a premium that the operator has been driven to use the FM band for TV program distribution. This is not common, but is sometimes done. Also, some systems use the FM spectrum for digital music channels. The recommended standard permits the use of the FM band frequencies, but does not require that the TV tune these frequencies. The cable operator is advised that many “cable-ready” sets will not tune to these channels. Many TVs include traps in the FM band to reduce distortion effects in the tuner. These traps preclude reception of the FM band.

Alternative Modulation Methods

Acknowledging the use of digital modulation in 6-MHz channelization, the recommended standard proposes that any signal delivered to the consumer should fit in channels having the edges at the frequencies of the NTSC channel edges. This does not preclude distribution of programming in the plant using some other frequency plan, as long as any signal handed to the subscriber at RF follows the plan. For example, digital signals might be carried in a 12-MHz band on the cable. This would be permitted so long as that signal was never handed to the subscriber. If a device supplied by the cable operator converts the signal to something contained within one 6-MHz channel having the band edges shown in Figure 9.2, that system would comply with EIA-542-B.

9.3 Headend RF Management

Until recently, headends were truly broadcast-oriented structures. They were built to collect various signals, format them, and send the same set of signals to all subscribers. This has been changing. The transition of the industry from tree-and-branch networks to nodal networks has enabled reuse of frequencies on different nodes. The industry has begun routing different signals to different nodes for telephony and data applications. Signals in these services are intended for individual subscribers rather than for all. In order to utilize bandwidth efficiently, it is desirable to determine the node location of an intended subscriber and to send signals bound for him or her to only that one node. This increases the efficiency of the cable network by allowing reuse of the same frequencies on each node, transmitting only information bound for someone in that node.

Besides the fairly obvious advantages of sending data and telephony signals to only the node requiring them, other services can benefit from this capability. For instance, the industry has talked about node-specific advertising, in which advertisements are selected based on the demographics of subscribers connected to that node. Yet another anticipated application is to supply customized video-on-demand (VOD) programming to individual subscribers.

RF management is the art of grouping signals bound for each node while maintaining level control in the most economical way possible. It involves physically segmenting the signals while maintaining isolation between them. It also involves handling return signals entering the headend, where it is desired to route signals to each of several return services, while minimizing the amount of hardware required.

9.3.1 Downstream RF Management

Figure 9.6 illustrates a segment of a modern headend. Signals are received from satellite earth stations and other sources. As shown, each signal received from the earth station may have a number of pieces of hardware associated with it. The satellite receiver, described in Section 8.7, converts the signal from RF to baseband NTSC, usually in association with a decoder to descramble the satellite signal. These two pieces of equipment are often combined into an integrated receiver-decoder (IRD) in modern headends. If stereo is offered on the channel, video from the IRD is routed through a stereo encoder (see Section 8.6) and then to the scrambler if used (see Chapter 21). Finally, the signal is modulated onto an RF channel in the modulator. A number of such signals are acquired and combined in headend combiner CM2. Additional signals, such as digital audio broadcasts, may also be included. The output of combiner CM2 is the total set of broadcast signals, which are supplied to all subscribers.


Figure 9.6 Downstream RF management.

Besides these broadcast signals, node-specific services are becoming popular. These commonly include telephony, data services, and in the future, node-specific programming. These services are duplicated for each node; different signals are transmitted to each node, but all use the same set of frequencies. Adequate input ports must be provided for addition of future services. Unused ports must be terminated. Services for node 2 are combined in combiner CM5, and those for node 1 are combined in CM4. These signals are combined with the broadcast signals before being supplied to the individual node optical transmitters.

The combining of broadcast and node-specific services is accomplished by first splitting the broadcast services in splitter CM3. One output of CM3 is supplied to directional coupler DC2, where signals bound for node 2 are coupled to the broadcast signals. Similarly, directional coupler DC1 is used to couple the broadcast signals with signals bound for node 1.

Level Management

A portion of the RF management issue involves physically arranging the combiners and directional couplers such that efficient use is made of the hardware, without causing problems for future expansion. Additional inputs are provided as illustrated at CM5 for services that may be added later.

Provision of additional inputs as shown does not come without penalty, however. Each time the number of ports provided at CM5 is doubled, the attenuation of signals from each existing service is increased by 3.5 to 4 dB. This means that at some point additional amplification may have to be provided.

Figure 9.7 illustrates a simple way in which the node 2 combining may be modified to reduce the level required of existing services, compared with Figure 9.6. In Figure 9.6, an eight-way splitter is provided at CM5, allowing eight services to be combined. The insertion loss of an eight-way combiner is approximately 12 dB (budgeting 4 dB per two-way combining stage). If this does not let the node 2 transmitters operate at comfortable levels, then Figure 9.7 can be used. Combiner CM5 is replaced with a four-way combiner, which has only 8-dB loss for each input. One of those inputs is from another four-way combiner, CM6, which provides for additional services. By using two four-way combiners rather than one eight-way combiner, today’s services can operate with 4 dB less level required at their respective transmitters. The penalty is that future services will have to operate at levels 4 dB higher than they would otherwise (they see a 16-dB loss before directional coupler DC2), and one input is lost. Some cable operators consider this to be a good trade-off that allows the headend to be built more simply today; in some cases, using the Figure 9.7 configuration can save an amplifier in the combining network.
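The arithmetic behind these numbers is worth making explicit. A minimal sketch, assuming the 4-dB-per-stage budget used above:

```python
import math

# Loss budget for the combining arrangements of Figures 9.6 and 9.7, using
# the working figure of 4 dB per two-way combining stage.

DB_PER_TWO_WAY = 4.0

def n_way_combiner_loss_db(n_ports):
    """Approximate insertion loss of an n-way combiner built from 2-way stages."""
    return DB_PER_TWO_WAY * math.log2(n_ports)

eight_way = n_way_combiner_loss_db(8)     # 12 dB (Figure 9.6, CM5)
four_way = n_way_combiner_loss_db(4)      # 8 dB  (Figure 9.7, CM5)
cascaded = four_way + four_way            # 16 dB (future services through CM6 and CM5)

print(f"existing services: {eight_way:.0f} dB vs {four_way:.0f} dB, "
      f"a saving of {eight_way - four_way:.0f} dB of drive level")
print(f"future services behind CM6 see {cascaded:.0f} dB before DC2")
```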


Figure 9.7 Alternative combining arrangement for node 2.

Signal Isolation

A major issue in both downstream and upstream RF management is isolation between nodes. This is illustrated in Figure 9.6, with the example of interference from node 1 into node 2. It is assumed that a similar service at the same frequency is used on each node. The problem is to avoid interference from node 1 into node 2, and so on. The heavy arrow of Figure 9.6 illustrates the example case. Signals from services specific to node 1 are coupled to the node 1 transmitter in directional coupler DC1. Theoretically, all the signal either is routed to the optical transmitter or is dissipated in the internal load resistor of DC1. If DC1 is a 17-dB coupler, then the power that appears at its output (connected to the node 1 optical transmitter) is −17 dB from the incoming signal.

In the real world, some power is coupled backward in the directional coupler and goes toward CM3. Theoretically, any power that impinges on CM3 from one of its outputs either is coupled to the input (connected to CM2) or is dissipated within CM3. In the real world though, some signal is coupled from one output of CM3 to the other. Thus, node 1 signal power from CM4 can wind up being combined with node 2 signal power, where it becomes an interfering signal.

It is assumed that the isolation of each directional coupler and combiner is 20 dB. In traversing the wrong way through DC1, the signal is attenuated 20 dB. In traversing from one output of CM3 to another, it is attenuated another 20 dB. Assuming then that it is combined with another signal of identical level in DC2, the signal from node 1 becomes a −40-dB interfering signal to a node 2 service.

The point at which interference becomes significant is a function of a number of factors. To provide a feel for the isolation necessary in headend combining, Table 9.1 lists some potential interfering situations and a recommended headend isolation number. You can see that as modulation density increases, more isolation is required. This is because higher-modulation density signals have less distance between modulation states and so can tolerate less interference (see Chapter 4).

Table 9.1 Recommended headend isolation

Interference                          Recommended isolation
Analog visual to analog visual        70 dB (phase lock recommended)
QPSK to QPSK                          33 dB
64-QAM to 64-QAM                      54 dB
256-QAM to 256-QAM                    61 dB

The analog visual carrier recommended isolation is based on the traditional 60-dB threshold for interference, with 10 dB added to account for possible level differences between nodes. It is recommended that, even if this much isolation is provided, the two video sources be phase locked since this has been shown to significantly reduce the visibility of interference. Phase locking will afford additional margin to the threshold of visibility. It is known that the threshold of visibility can be more severe than −60 dB under optimum conditions, but this number, derived from work done at Bell Labs in the 1950s, is taken as a working number.

The recommended isolation for digital signals is derived from the crossing of a decision state threshold, taking into account the peak-to-average ratio of an unfiltered signal. No allowance is made for demodulation errors or filtering, but a 10-dB allowance is made for level difference, and a 20-dB allowance is made because other sources of signal degradation exist. The headend should not be a limiting factor.

9.3.2 Upstream RF Management

Some of the issues in RF management in the upstream direction are similar to those in the downstream direction, but upstream management introduces some unique issues. Signal isolation issues are similar to those in the downstream direction, but tend to be more complex owing to the need to share resources. Level issues become more complex because changes in the headend can easily affect operation of return components in the field, with operating personnel being unaware of the consequences of changes they make.

Level Management

Level management in the upstream direction is complicated because of the need to maintain accurate levels at the node (in the field) using long loop automatic level control (ALC). This issue doesn’t have a close analog in the downstream direction and is not familiar to a lot of practitioners. Figure 9.8 illustrates a long loop ALC. The host digital terminal (HDT), a generic name for the headend equipment used with any two-way service, consists of a receiver for the upstream signals, a transmitter for the downstream signals, and logic to control those signals, as well as all the components required to operate the appropriate service. The topic of upstream level management is covered more fully in Chapter 16. The purpose of including this information here is to emphasize how the headend must be configured to allow it to function smoothly.


Figure 9.8 Long loop ALC control of return transmission.

Downstream signals are transported to the optical node, where they are transferred to the coaxial network and to the home receiver, RX3. In the upstream direction, the home transmitter, TX4, supplies upstream signals, which are transported to the optical node and then supplied to the return optical transmitter, TX2. From TX2, the signal is transported to the headend receiver RX2, then supplied to the RF receiver RX4 in the HDT.

Each HDT using the return path should establish a long loop ALC as shown, in which the return transmitter (TX4 at a subscriber’s home) is commanded to increase or decrease its level until the level returned to the headend subsystem, at Lvl 1, is some target value. After the return system is aligned, it is imperative that the gain between the headend optical receiver, RX2, and the HDT not change. If a new service is added, then power will have to be tapped from optical receiver RX2 to supply the receiver of the new service. If this has not been accounted for initially, by provision of extra ports at CM1, the ports added will change the gain between Lvl 2 and Lvl 1. The existing subsystems will respond by increasing level at the home transmitter until the level at Lvl 1 has returned to its previous value. This will increase the level at Lvl 3, potentially overdriving the return optical transmitter, TX2. To avoid this, it is imperative that an adequate number of reverse ports be provided at CM1 when the headend is built.
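To make the interaction concrete, here is a toy model of the long loop ALC. All of the gain and level numbers are illustrative assumptions, not values from the text or from any real plant.

```python
# Toy model of the long loop ALC described above. All dB values are
# illustrative assumptions, not plant specifications.

TARGET_LVL1_DBMV = 0.0      # level the HDT commands the home transmitter to produce
COAX_GAIN_DB = -10.0        # assumed net gain, home TX4 output -> Lvl 3 (TX2 input)
LINK_GAIN_DB = -8.0         # assumed net gain, Lvl 3 -> Lvl 1 (optical link, RX2, CM1)

def settle_alc(extra_combining_loss_db):
    """Return (home TX level, Lvl 3) after the ALC raises TX4 to hit the target."""
    tx_home = TARGET_LVL1_DBMV - COAX_GAIN_DB - LINK_GAIN_DB + extra_combining_loss_db
    lvl3 = tx_home + COAX_GAIN_DB
    return tx_home, lvl3

before_tx, before_lvl3 = settle_alc(0.0)
after_tx, after_lvl3 = settle_alc(3.5)      # e.g., a 2-way split added after RX2
print(f"home transmitter: {before_tx:.1f} -> {after_tx:.1f} dBmV")
print(f"Lvl 3 at return laser input: {before_lvl3:.1f} -> {after_lvl3:.1f} dBmV (overdrive risk)")
```

The point of the model is simply that any loss added between RX2 and the HDT is made up by the home transmitter, so it reappears as extra level at the return laser input.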

Signal Isolation

Signal isolation issues in the reverse direction tend to be more difficult than in the forward (downstream) direction because many operators combine signals from different nodes for some services and not for others. For example, a telephony service may employ a separate HDT for each node. This would be done to permit good availability of service, with little chance of blockage. A modem service may combine two or more nodes to a common HDT to achieve better utilization of equipment — modem systems tend to degrade gracefully as more people use the service. Finally, a status-monitoring system doesn’t need constant communications, so it is possible to combine signals from many nodes to minimize the number of status-monitoring HDTs required.

Figure 9.9 illustrates such a situation. The telephony system has a dedicated HDT for node 1. A data CMTS is shared between nodes 1 and 2. Note that the CMTS may serve two nodes downstream but have individual upstream receivers. This helps compensate for the limited upstream bandwidth. Finally, a status-monitoring HDT is shared among many different nodes, of which three are shown. Splitter CM12 divides signal from node 1 to the telephony subsystem, the data subsystem, and the status-monitoring subsystem, and it has one or more additional ports for other services. As a minimum, a test point is needed, and ideally, the operator has made provision for future service expansion. Splitter CM13 provides the same service for node 2, as does CM14 for node 3.


Figure 9.9 Return path signal coupling.

Combiner CM10 has two inputs, from nodes 1 and 2, to allow them to share the same data modem HDT. Combiner CM11 has many inputs to allow many nodes to share the same status-monitoring HDT. All this sharing promotes efficient equipment utilization, but works against performance in that isolation is a problem. It is assumed that like equipment uses the same return frequencies on all nodes. This is usually the case, due to the scarcity of useful return spectrum.

In this limited example, three sneak paths exist to couple telephony signal power from nodes 2 and 3 to the node 1 HDT. Paths 1 and 2 both couple power from node 2 to the node 1 HDT. Path 1 couples power from node 2 via CM10 and CM12. Path 2 couples power from node 2 to node 1 via CM11 and CM12. Path 3 couples power from node 3 to node 1 via CM11 and CM12. The power arriving at the node 1 telephony HDT adds from each sneak path.

In the example, if all combiners exhibit isolation of 20 dB, and all nodes have equal level return signals, then the total relative interference seen at the node 1 telephony HDT is three times the 10^(−40/10) = 10⁻⁴ provided by one sneak path. That is, each interference path traverses two combiners or splitters, each of which exhibits 20-dB isolation, so each interfering signal is −40 dB from the desired signal, and there are three such signals in the example. This works out to an interference level of 10 log(3 × 10⁻⁴) ≈ −35.3 dB from the three sneak paths, assuming equal signal levels. In order to get back to parity with one sneak path, which would exist with no status-monitoring HDT sharing, the isolation of each combiner must improve by 4.7 dB. We can see the problem if the sharing of the status-monitoring HDT is spread over many nodes.
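The power addition can be checked directly; a small Python sketch using the 20-dB isolation assumption from the example:

```python
import math

# Power addition of the three sneak paths of Figure 9.9, each traversing two
# combiners/splitters assumed to provide 20 dB of isolation apiece.

ISOLATION_PER_DEVICE_DB = 20.0
N_PATHS = 3

per_path_db = -2 * ISOLATION_PER_DEVICE_DB               # -40 dB per sneak path
total_linear = N_PATHS * 10 ** (per_path_db / 10)        # 3 x 1e-4
total_db = 10 * math.log10(total_linear)                 # the text rounds this to -35.3 dB
improvement_db = 10 * math.log10(N_PATHS)                # the text rounds this to 4.7 dB

print(f"aggregate interference: {total_db:.2f} dB relative to the desired signal")
print(f"improvement needed to match a single path: {improvement_db:.2f} dB")
```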

Isolation between nodes can be improved in several ways. Some operators characterize each combiner in the frequency band of interest and choose ports having the best isolation. This works for a limited number of ports in use, but as ports are filled up, the technique will ultimately break down. Sometimes attenuators in the combiner inputs can be used to effect improvement in isolation at the expense of more signal loss.

Filters can be used. For instance, suppose the status-monitoring HDT used a fixed return frequency of 8 MHz. If each input to CM11 had an 8-MHz bandpass filter, then many of the sneak paths used by the telephony subsystem (which probably uses frequencies above 20 MHz), would be broken. The problem, though, is that filters can limit frequency sharing between different services.

Finally, amplifiers may be used to effect isolation. A low-gain, high-isolation amplifier in each input to CM11 will significantly reduce the leakage. This, of course, is at the expense of more cost, possibly more distortion (though this can usually be controlled), and maybe a slight reduction of availability of the sub-system. An amplifier does not have unlimited isolation: many have as little as 15-dB isolation, but judicious selection of an amplifier and padding can often make the project succeed. Many operators use amplifiers as a last resort; nonetheless, the technique is gaining in popularity as headend RF management becomes more complex.

As a final note, it would be desirable to use some attenuation on each splitter or combiner output that goes to any receiver. Isolation of each splitter/combiner is enhanced by so doing. Power coming into one leg of CM10, for example, at the telephony receive frequency will probably see a poor return loss at the data modem upstream receiver. When the signal encounters this poor return loss, part of it is reflected back into the combiner, and one-half of the reflected signal will be routed to the telephony upstream receiver. This mechanism reduces the isolation of a splitter compared with the isolation of which it is otherwise capable. An attenuator on each output port will improve the situation.
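A rough illustration of this mechanism, and of why a pad helps, is sketched below. The through loss, the out-of-band return loss of the unintended receiver, and the pad values are all assumptions chosen only to make the point; they are not from the text.

```python
# Rough illustration of how a pad on a combiner's receiver port restores
# isolation lost to the reflection path. All values are assumptions.

THROUGH_LOSS_DB = 3.5           # one pass through a two-way combiner/splitter
RECEIVER_RETURN_LOSS_DB = 6.0   # assumed poor out-of-band return loss
DIRECT_ISOLATION_DB = 20.0      # the combiner's own port-to-port isolation

def reflection_path_loss_db(pad_db):
    """Loss of the leak path that bounces off the unintended receiver.

    The signal passes through the combiner and the pad, reflects off the
    receiver input, and comes back through the pad and the combiner, so the
    pad is counted twice.
    """
    return THROUGH_LOSS_DB + pad_db + RECEIVER_RETURN_LOSS_DB + pad_db + THROUGH_LOSS_DB

for pad in (0.0, 3.0, 6.0):
    print(f"pad {pad:.0f} dB: reflection path {reflection_path_loss_db(pad):.1f} dB "
          f"(direct isolation is {DIRECT_ISOLATION_DB:.0f} dB)")
```

With no pad the reflection path can dominate the combiner's own isolation; a few decibels of padding, paid for twice, quickly restores it.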

9.4 Headend Fiber-Optics Management

Almost all modern headends drive some amount of fiber-optic cable. The use of fiber-optic transmission has enabled many modern service offerings. However, fiber optics has its own set of requirements and limitations. Fiber-optic components and handling techniques have evolved rapidly. Chapter 12 describes the three methods of joining fiber-optic cables: fusion splicing, mechanical splicing, and remateable connectors. Optical connectors may be installed directly in the field, but it is much better to use factory-manufactured pigtails, which are cut and each end fusion spliced onto the incoming optical cables. Installation of optical connectors traditionally has required difficult polishing best left to a factory. However, some newer connector systems now allow connectors to be put on in the field.

9.4.1 Fiber Distribution

Though many techniques are used to bring fiber cable into headends, a popular method is shown in Figure 9.10.4 Outside plant fiber cables are brought to the headend at the right of the figure. The jumpers are fusion spliced in the outside cable entrance facility (OCEF), which typically is located just outside a headend wall. A pigtail brings signals into the fiber distribution bay inside the headend. The pigtails are often made by cutting a factory-manufactured jumper in two, yielding two pigtails. The jumper can be tested before it is cut; then two known good pigtails are produced when it is cut. Each pigtail is fusion spliced onto an outside fiber in the OCEF.


Figure 9.10 Fiber distribution in headend.

Connectors on the pigtails are permanently mounted to the rear of optical couplers (bulkhead connectors) in the fiber distribution bay. A fiber jumper is plugged into each active cable and couples signals to and from the electronic equipment racks, where the optical transmitters and receivers are located. Optical transmitters and receivers are usually supplied with a pigtail that couples power from the laser diode or to the receiver diode. The pigtail normally terminates on a coupler (bulkhead) located on the chassis of the transmitter or receiver. The jumper from the fiber distribution bay plugs into that pigtail.

Of course, one could simply bring the outside plant cable into the headend, directly to the transmitters and receivers, bypassing the fiber distribution bay. The advantage of the fiber distribution bay and optical jumper is that they allow rapid recovery from plant damage and equipment failure with minimum sparing. The remateable connection at the transmitter or receiver is useful for recovering from equipment failure: if a transmitter or receiver fails, it can be replaced and the cable plugged into the new device. If a fiber fails, the connector at the fiber distribution bay permits rapid patching to a spare fiber strand.

Typically, the same bundle of fiber strands (individual optical conductors) will service several nodes, as suggested in Figure 9.10. Spares will be carried to each node. However, often it is desired that a spare optical strand will serve either a transmitter or receiver, depending on what fails. In some headend configurations, the transmitter and receiver servicing a particular node are not located close to each other. This means that it is not feasible to bring a particular strand up to, for example, a transmitter. If the strand were later needed for a receiver, that strand would have to be extracted from typically very large cable bundles in the headend and routed to the receiver. This would eventually destroy the organization of wiring in the headend, reduce reliability because of cable damage, and be time consuming. With the fiber jumper as shown, it is a simple task to move the appropriate jumper to the failed equipment on one end and to the appropriate outside plant strand.

The fiber distribution bay provides a convenient place to perform optical splitting when used. Optical splitting is used to route the output of one downstream transmitter to more than one node, and also to route light from one transmitter in two directions in a ring configuration. Ring configurations are used to provide path redundancy: if one path is damaged, such as from an overhead fiber being snagged, the second path takes over until the fiber is repaired.

In addition to covering the failure, use of the fiber distribution bay permits more rapid deployment of additional services that employ the unused (“dark”) fiber running to a node. We do not have to worry about locating new equipment where the fiber is physically terminated; rather, a new jumper can be routed in the headend as desired.

It is also possible to omit the OCEF, bringing the outside cable directly to the fiber distribution bay. The outside cable will still need to be fusion spliced to a pigtail. Besides providing a convenient place to terminate outside plant fibers (if not done in an OCEF), the fiber distribution bay provides for labeling of the fibers. It also provides protection to the fiber, by shielding it from accidental contact, and provides for adequate strain relief and acceptable minimum bending radius.

9.5 Signal Quality Tests

In this section, we illustrate some simple yet effective basic quality tests that should be performed on video and audio signals in the headend. Also see Chapter 2, which contains information on the baseband video signal. Quality measurements for digital signals are covered in Chapter 4.

9.5.1 Visual Modulation Definition

Figure 9.11 shows a segment of the same video waveform used in Chapter 22 for illustration. The waveform is amplitude modulated onto an RF (or IF) carrier to the standard depth of modulation prescribed for the standard in use. As described in Chapter 8, the waveform is clamped on sync tips, such that the peak amplitude of the modulated waveform, which occurs during sync tips, is constant regardless of the modulation. The amplitude, A, during the sync tip is defined as the amplitude of the signal. As the video goes toward white, the instantaneous amplitude of the carrier drops. By definition, the peak white video modulates the carrier to 12.5% of the sync tip amplitude (that is, B = 0.125A, NTSC system). Video modulation is characterized by depth of modulation (DOM), which expresses the percentage of the distance the carrier transitions from peak sync amplitude toward zero amplitude:


Figure 9.11 Definition of the modulated visual signal.


DOM (%) = 100 × (A − B)/A    (9.1)
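A worked check of Equation (9.1), as a minimal sketch (the function name is ours):

```python
# Worked check of Equation (9.1): NTSC peak white, with B = 0.125*A, gives
# a depth of modulation of 87.5%.

def depth_of_modulation_pct(a_sync_tip, b_instantaneous):
    """Percentage of the way the carrier drops from sync-tip amplitude toward zero."""
    return 100.0 * (a_sync_tip - b_instantaneous) / a_sync_tip

A = 1.0                                          # normalized sync-tip amplitude
print(depth_of_modulation_pct(A, 0.125 * A))     # 87.5 (NTSC peak white)
print(depth_of_modulation_pct(A, 0.20 * A))      # 80.0 (a carrier held to 20% of sync)
```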


In most PAL systems, the depth of modulation is limited to 80%, whereas in NTSC, a depth of 87.5% is standard. The carrier is never totally cut off. This is important for several reasons. Many envelope detector circuits, as the envelope drops, tend to degrade in terms of their linear detection of the modulated signal. This is due to the diode curve, which exhibits a square law characteristic at low bias levels. This would tend to translate into differential gain (and differential phase due to capacitance modulation) if the carrier amplitude drops too far.

Another reason to maintain some carrier is the almost universal use of intercarrier sound detection in consumer receivers. The sound carrier at intermediate frequency is mixed with the picture carrier to produce the so-called intercarrier frequency, the difference between the two. Use of intercarrier detection is important to reduce the effects of incidental frequency modulation of the sound carrier. It also provides a convenient way to place the sound carrier in the middle of a passband that is narrow with respect to the accuracy of frequency conversions preceding it. However, if the picture carrier goes too low in level, the intercarrier detector will interrupt and/or phase modulate the sound intercarrier, with resultant buzz in the audio.

Thus, by standard and by FCC regulation, the depth of modulation of the visual signal is limited to 87.5%. Note that the color subcarrier is not included in the depth of modulation issue: often depth of modulation is measured using a low-pass filter that eliminates the color subcarrier from the measurement. The issue of whether to include the color subcarrier in the measurement has long been controversial. For most programming, the issue is somewhat academic since most bright (high-luminance) picture elements tend to have little color saturation, so little color subcarrier is present near peak white. (A notable exception, and one that has caused some headache, is the famous Big Bird of PBS’s Sesame Street. His bright yellow color has been known to cause clipping and out-of-channel emissions from older transmitters.)

If color information does cause overmodulation, there may be some color and luminance distortion, but it is in a portion of the color palette where the distortion tends to be somewhat less visible. Intercarrier problems in the audio tend to be masked because the color subcarrier is of such a high frequency compared with audio.

Measuring Depth of Modulation

Figure 9.12 shows a general setup for measuring depth of modulation (DOM). The measurement can be made in service using a test point on the modulator or an external directional coupler. Two DOM measurement methods are shown. A spectrum analyzer can be used as a linear receiver. Alternatively, a demodulator with a zero chopper can be used with an oscilloscope or waveform monitor. The waveform monitor is a specialized oscilloscope that is calibrated in IRE units and in depth of modulation.


Figure 9.12 Measuring DOM.

DOM Measurement with Spectrum Analyzer

A spectrum analyzer may be used by tuning it to the picture carrier. Suggested settings are shown in Figures 9.12 and 9.13. In order to measure DOM, two references must be established. Sync tips form one reference, and are set eight divisions above the bottom of the analyzer screen. The other reference is zero carrier, which by the nature of the analyzer display is on the bottom of the screen. Note that the video waveform is presented upside down from the way it is normally drawn: the analyzer displays increasing signal level toward the top of the screen. Recall that sync represents the highest amplitude of the signal. Also, note that most spectrum analyzers do not have a wide enough bandwidth setting to pass the color subcarrier, so the measurement won’t include color signals.


Figure 9.13 DOM display using a spectrum analyzer.

DOM Measurement with Precision Demodulator and Oscilloscope

Figure 9.14 illustrates the waveform obtained with the lower configuration of Figure 9.12. As we noted, two points of reference are needed in order to establish a scale for measuring DOM. The sync tips at the bottom of the waveform are one reference. The other must be related to the white level. A demodulator can insert such a reference by interrupting the signal before the detector. Usually, the interruption is during the vertical blanking interval. This “zero chop pulse” is placed eight divisions above the sync tip. Then peak depth of modulation is adjusted until peak white is seven divisions above sync. Zero chop pulse generation is described in Chapter 8.


Figure 9.14 DOM display using a precision demod and oscilloscope.

9.5.2 Aural Modulation

Equal in importance with setting visual modulation is the setting of aural modulation. One of the most frequent complaints against cable systems is inconsistent audio level from one channel to another and from one program or commercial to another. This complaint has been leveled against broadcast stations as well. An objective measurement of audio level is difficult if not impossible owing to the subjective nature of human perception of sound. Much research has been done in this area, and a number of methods exist for making such measurements. This is not intended to be a thorough treatment of the subject. Rather, it is intended to impart some concepts that should prove useful in monitoring the audio being transmitted.

Deviation of the Sound Carrier

Television sound is frequency modulated onto the sound carrier in both NTSC and PAL television systems (it is amplitude modulated in SECAM). In frequency modulated systems, the amplitude is constant, but the frequency changes with respect to the center, or rest, frequency. The output of the detector is proportional to the deviation with respect to that center frequency. Note that the amplitude of the sound carrier is not a factor in loudness of the sound. The more the carrier is deviated, the louder the resulting signal. The art of setting the perceived loudness of the sound is often called “setting the deviation.”

It is assumed that some facility for measuring deviation of the aural signal is available. This facility may be a meter in a modulator or a stereo encoder. It may also be an external demodulator with, or connected to, some sort of metering means. It may be a light that flashes at the prescribed peak deviation. This is undesirable as the modulation monitor because it gives so little information. However, many operators use it because of its low cost. The lights were intended to be overdeviation warning indicators only but have been used for setting deviation.

Confirming Accuracy of the Deviation Indicator

The modulation indicator must be properly calibrated. A particularly accurate method to ensure this, and one used in many labs, is the Bessel null technique. It utilizes the fact that, when frequency modulating a carrier with a sine wave, the carrier component goes to zero (all the power is contained in the sidebands) when the modulation index, M, is 2.4. This point is often called carrier null. The modulation index is defined as the ratio of peak deviation to modulating frequency:


M = (peak deviation)/(modulating frequency)    (9.2)


In order to null the carrier component, we deviate the carrier by 25 kHz (the prescribed peak deviation for NTSC sound) using a modulating frequency that makes M = 2.4:


f_mod = 25 kHz/M = 25 kHz/2.4 ≈ 10.42 kHz    (9.3)


The frequency of 10.42 kHz for the modulating signal is chosen to make the equation correct. When we modulate with a 10.42-kHz sine wave and find the first carrier null, we know that the deviation must be 25 kHz.

Carrier null is easily detected using a spectrum analyzer as a monitoring instrument and a sine wave generator as a source. Set the analyzer to distinguish between sidebands and the carrier. Set the oscillator frequency to 10.42 kHz. Begin increasing the amplitude of the oscillator output, starting from near zero, while watching the carrier component of the analyzer display. When the carrier component (not the sidebands) first goes to zero, the modulation index is 2.4, and the deviation should read 25 kHz on the indicating instrument. (Note that the carrier will go through zero for certain higher-modulation indices also.)
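As a sanity check of these calibration numbers, the sketch below evaluates the Bessel function directly. It assumes SciPy is available; the exact first null of J0 is at M ≈ 2.405, which the text rounds to 2.4.

```python
# Sanity check of the Bessel-null calibration. Assumes SciPy is installed.
from scipy.special import j0, jn_zeros

PEAK_DEVIATION_HZ = 25_000                   # NTSC aural peak deviation

m_null = jn_zeros(0, 1)[0]                   # first zero of J0, about 2.4048
f_mod_hz = PEAK_DEVIATION_HZ / m_null        # about 10.40 kHz

print(f"carrier null at M = {m_null:.4f}, modulating tone = {f_mod_hz / 1e3:.2f} kHz")
print(f"J0 at the exact null: {j0(m_null):.1e}")
print(f"J0 at the rounded index 2.4 used above: {j0(2.4):.4f}")   # still essentially zero
```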

Exercise caution when making this measurement so as to properly account for preemphasis. The aural signal is preemphasized, as explained in Chapter 8, to reduce noise. At a modulating frequency of 10.42 kHz, preemphasis adds +14 dB to the low-frequency deviation. If the indicating instrument reads the nonpreemphasized signal, then it will read 14 dB low in the test described.

In countries using 50-kHz peak deviation and 50-μs preemphasis, the same procedure may be used, except that, at the indicated test point, the deviation should read −6 dB from peak (because the standard peak deviation is 50 kHz rather than 25 kHz). The correction for preemphasis, if needed, is 10.7 dB.
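The preemphasis corrections quoted above can be verified from the first-order network response, a minimal sketch assuming the ideal 75-μs and 50-μs characteristics:

```python
import math

# Boost of an ideal first-order preemphasis network at the 10.42-kHz
# Bessel-null tone: 10*log10(1 + (2*pi*f*tau)^2).

def preemphasis_boost_db(f_hz, tau_s):
    w_tau = 2 * math.pi * f_hz * tau_s
    return 10 * math.log10(1 + w_tau ** 2)

F_TEST_HZ = 10_417.0                                               # about 10.42 kHz
print(f"75 us: {preemphasis_boost_db(F_TEST_HZ, 75e-6):.1f} dB")   # about 14.0 dB
print(f"50 us: {preemphasis_boost_db(F_TEST_HZ, 50e-6):.1f} dB")   # about 10.7 dB
```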

Often the actual modulation level used is backed off from peak by perhaps 6–7 dB if a traditional VU type meter ballistic is used so as to account for the dynamic response of such a meter. If a peak indicating instrument is used, this backoff is not needed.5 Unfortunately, some meters that have been provided on cable modulators don’t have dynamics that correspond to any particular standard. A meter that is accurate for a continuous sine wave may have significant error when measuring real audio.6

Routine Audio Monitoring

Figure 9.15 illustrates the setup for detecting the Bessel null using a sine wave audio oscillator and spectrum analyzer (switch down). This is the most accurate way to calibrate an audio deviation indicator. The figure also shows an audio demodulator and stereo decoder connected to an xy oscilloscope (switch up). This is a particularly convenient way to routinely monitor audio quality. Some broadcasters have such a monitoring facility permanently installed in their audio signal chain. An experienced engineer can obtain a lot of information quickly from such a display. One audio channel (the left as illustrated) is connected to the x input of the oscilloscope, and the other is connected to the y input. The two channels must be calibrated for identical gain, and the position controls must be adjusted for zero crossing at the center of the screen.


Figure 9.15 Audio monitoring options.

Figure 9.16 illustrates a number of situations that can be detected using this pattern. Figure 9.16(a) illustrates a properly calibrated setup with a monaural signal. Since each channel of a mono signal contains the same voltage, the display should be simply a straight line from lower left to upper right.


Figure 9.16 Oscilloscope patterns for stereo audio. (a) Mono, in phase. (b) Mono, out of phase. (c) Stereo, in phase, higher separation. (d) Same as (c), except left channel high. (e) Stereo, in phase, lower separation. (f) Synthesized stereo may show more occupancy of out-of-phase quadrants.

Figure 9.16(b) illustrates a mono signal, but with the left and right channels out of phase. (An out-of-phase signal cannot be properly located by the human ear. When listened to on a mono TV, there will be no sound.) The length of the line is related to the peak deviation of the signal. Because a signal normally spends only a small fraction of the time at peak deviation, the line will become dimmer the farther you look from the center of the screen. Figure 9.16(b) also illustrates conventional quadrant designations: the majority of a proper in-phase signal will reside in quadrants I and III. An out-of-phase signal will largely reside in quadrants II and IV.

Figures 9.16(c) and (d) illustrate a stereo signal. The locus of points occupied is “smeared” because the same signal is not being applied to each channel. The more the signal deviates from the mono axis of Figure 9.16(a), the greater the channel separation. (Compare Figure 9.16(c) with Figure 9.16(e), which shows less separation.) Low or high separation in program material is often a fact, not necessarily an error condition. Once an engineer becomes proficient at looking at these waveforms, he or she knows whether the separation is normal for a particular type of program material.

If the balance is not correct, then the waveform will be “tilted.” The tilt will be down if the left-channel level is higher, as illustrated in Figure 9.16(d). If the right-channel level is higher, the waveform will be tilted up.

Notice that, though the majority of a stereo trace appears in quadrants I and III, some points appear in quadrants II and IV. This is to be expected with a stereo signal. Figure 9.16(f) illustrates a synthesized signal. Depending on the method of synthesis, more power may occupy out-of-phase quadrants II and IV. Some programmers use synthesizers when they are carrying programming that originates in mono.

9.5.3 Measuring Picture and Sound Carrier Levels

No measurements are as basic to setting up a headend as are measuring the picture carrier level and the sound carrier level. This may be done with a special-purpose signal level meter, or it may be done with a spectrum analyzer. The spectrum analyzer yields more information for the operator who understands it, but it is more expensive, more difficult to use, and capable of providing misleading results when used improperly. The signal level meter makes the reading without requiring setup, but it can be more difficult to use if many channels must be compared, and it gives less information.

Measuring Picture Carrier Level with a Spectrum Analyzer

Several points must be considered. The IF (resolution) bandwidth must be set wide enough to admit enough of the sync tip sidebands to allow the trace to reach its maximum. The video filter must also be set wide enough to allow the detected signal to reach full amplitude. As a practical matter, this means that both the IF and video bandwidths must be set to at least 300 kHz. The IF bandwidth must not be set so high that adjacent channel energy affects the measurement. Also, the sweep speed must be set low enough that you will see a number of horizontal lines on the waveform. With the correct setup, a spectrum analyzer can yield excellent results, though.

Scrambled signal level cannot be measured with any convenient setup. This is explained in Chapter 21, which makes the case for not trying to measure the level of a scrambled signal. Rather, to measure the level, turn off the scrambling, and make the level measurement normally. Then turn the scrambling back on.

Measuring the Picture-Sound Ratio with a Spectrum Analyzer

The ratio of the picture and sound carrier amplitudes must also be set. This may be done with a spectrum analyzer set to measure picture carrier level. Normally, in cable work, the picture-sound ratio is set to 15 dB. The off-air ratio is different, but cable uses a lower sound carrier to minimize adjacent channel interference.

The question often is asked about how to set the picture-sound carrier ratio when the signal is scrambled and pulses are used on the sound carrier. The pulses make the sound carrier level go high for a few microseconds to communicate information to the descramblers. Again, the proper way to set level is to turn off the scrambling, set the picture-sound ratio, then turn the scrambler back on. The scrambling manufacturer must ensure that the correct levels are maintained when the scrambling is turned on. Generally, the signal levels are set such that the picture-sound ratio is maintained at about 15 dB between pulses and goes to 9 or 10 dB at the peak of the pulses.

9.5.4 Measuring Digital Signals

Unique measurements to be made on digital signals are covered in Chapter 4.

9.6 Summary

This chapter has dealt with a number of somewhat disconnected topics generally related to headend operation. Three general topics were covered: the standard band plan for cable television in the United States, management of outgoing optical and electrical signals in the headend, and monitoring of the quality of signals leaving the headend.

The television receiver and cable television industries have agreed on a band plan that defines the frequency versus channel number for television channels. The plan has been defined for three sets of frequency allocations: the standard plan, used in most instances today, and the HRC and IRC plans, chosen to reduce the visibility of distortion products. In the VHF band, the standard plan is identical to the off-air FCC channel assignments. In the old midband and superband regions, the plan follows traditional assignments with numerical channel designations. In the UHF band, the cable assignments are 2 MHz below the FCC plan because the FCC plan does not move from channel 13 (210–216 MHz) to channel 14 (470–476 MHz) in 6-MHz increments.

Signals in the headend must be managed in such a way that it is possible to carry different signals in different nodes but on the same frequency. This means that isolation must be maintained between unique signals bound to or from different nodes. On the other hand, certain signals are common to all nodes, and these have to be handled in common.

It is important to handle headend signals so that quality is maintained. Because the quality of signals exiting headends must be monitored, we have presented some fundamental ways to do so. Books have been written to cover the topic in detail.

We now move out of the headend and into the distribution plant. The next several chapters deal with the technology of distributing signals outside the headend.

Endnotes

* In fact, the TV tuners often had to have set-top terminals placed in front of them anyway. They were not well shielded against direct pickup. They didn’t have adequate image rejection and were susceptible to overload from too many signals at the input. Further, the TVs often had inadequate adjacent channel rejection. These problems still required the use of a set-top terminal, to the considerable frustration of the subscriber, the store that sold him or her a “cable-ready” set, the TV manufacturer, and the cable company. Eventually, the FCC got involved.

* For a number of years, there has been talk of splitting the aeronautical channels into two, halving the separation between center frequencies. If this ever happens, the now-required cable offsets will fall on carrier frequencies rather than on band edges.

1. http://global.ihs.com

2. J. O. Farmer, The Joint EIA/NCTA Band Plan for Cable Television, IEEE Transactions on Consumer Electronics, August 1994, p. 503ff.

3. Code of Federal Regulations, 47 CFR sec. 76.610. U. S. Government publication, available at Government Printing Offices in major cities.

4. AT&T Corp., LGX Broadband Fiber Management System Application Guide, 1995.

5. IEEE Std 152-1991, IEEE Standard for Audio Program Level Measurement is one of many suggestions for measuring audio loudness and peak deviation. It includes an extensive bibliography on measuring audio, a controversial topic. IEEE, Piscataway, NJ.

6. J. O. Farmer, Setting and Maintaining Modulation Levels. CED, March 1987, reprinted in BCT/E Certification Program Reference Bibliography Reprint Manual, available from the Society of Cable Telecommunications Engineers, Exton, PA.
