Chapter 15 End-to-End Performance

15.1 Introduction

The various parts of the broadband distribution system, the trunk, coaxial distribution, and drop, must work together to provide a transmission path whose performance allows delivery of adequate quality signals to the end user. This means that the total system performance requirements must be defined and that the allowable signal degradation must be allocated among signal acquisition, headend processing, distribution system, and terminal equipment.

This chapter will first discuss quality standards, then the question of allocation. Next, it will deal with the calculation of the cascaded broadband network sections. Finally, it will discuss the typical performance of a modern HFC distribution system, including factors that are not normally part of the cascaded network calculations, such as group delay and several classes of undesired signals that affect transmission channels.

15.2 Quality Standards and Requirements

In order to understand how performance standards are related, it is important to first determine the entire chain from signal generation to end user. The components of this chain will vary depending on the service, of course. Some signals may be generated in the headend or even closer to the subscriber, whereas others will be generated thousands of miles away. Figure 15.1 illustrates an example of the transmission elements through which an analog video signal may be sent.


Figure 15.1 Various performance measures for a typical broadcast television channel.

The video source in this example is a television broadcasting station. The source degradation will include some level of white noise, crosstalk, transmitter distortion, and possibly quantizing noise if the video or audio were digitized in any part of the process. Often the (analog NTSC) video S/N is limited by the originating camera and further degraded by a videotape recording and playback process, resulting in a 55- to 60-dB S/N at the input to the transmitter.* The broadcast transmitter diplexing filter (used to combine visual and aural signals) and antenna will affect the frequency response, whereas visual transmitter incidental carrier phase modulation (ICPM) limits attainable stereo audio performance.

The source-to-headend path, if the headend is receiving the signal via the normal transmission path, might include multipath distortion (echoes resulting from signals bouncing off objects to create more than one transmission path), frequency response variations due to receiving antenna characteristics, electrical noise interference, and a host of other factors.

Within the headend complex, the signal may be demodulated, digitized, switched, amplified, combined with other signals, modulated, and/or shifted in frequency. In some cases, signals may be received at a different location than where they are finally combined into the RF spectrum that is transmitted to customers, adding additional layers of processing and transmission. In that case, the term headend is intended to include everything from first reception by the network operator to generation of the modulated FDM signal complex for transmission to customers. Each of the processing steps within the headend complex adds a measure of degradation.

Finally, the complete frequency spectrum enters the broadband, linear distribution network, where, among other impairments, noise, intermodulation distortion, cross modulation, hum distortion, and group delay are added.

Analog (and possibly digital) video signals may be directly connected to subscribers’ receivers, or they may pass through a network termination device, such as an analog or digital set top terminal. If a signal passes through a set top terminal, it may be demodulated, descrambled, and remodulated — each an imperfect process.

Finally, the subscriber’s own television receiver will add a measure of degradation. In addition to noise and distortion, it may be inadequately shielded so that off-air signals are mixed with the cable-delivered signals, creating a special form of crosstalk known as direct pickup (DPU) interference. DPU results in a variety of visible picture defects in analog signals and may degrade the bit error rate of digital signals.

As shown in Figure 15.1, the viewer sees and hears the composite effect of everything that happens to the signal from source to his or her screen and speakers (or other end application). Chapter 2 describes the results of tests undertaken to determine how overall technical performance in each of several categories is related to subscriber perception of NTSC signal quality.

Obviously, both the chain of network elements and the required quality of signal will be different for services other than analog video. In particular, it appears that various levels of QAM (from QPSK to 256-QAM) will be used to transport digitally modulated downstream signals, whereas lower levels of digital modulation, such as BPSK, QPSK, and possibly 16-QAM, will be used in the upstream direction. The requirements for the various digital signal formats are discussed in detail in Chapters 3, 4 and 5.

Though no formal end-to-end standards have been proposed by the FCC for U. S. television distribution, the Deutsche Bundespost (DBP, or German post office — responsibility for cable television has since been transferred to Deutsche Telekom) has published standards covering all segments of its entire network from camera to viewer.1 The standard is very complete, with 19 different quality measures covered. By contrast, the FCC regulates few aspects of television broadcast stations (primarily those that assure a lack of interference among stations) but many parameters of cable television systems (including drops if installed and maintained by network operators) and set top terminals (again, only if owned by the network operator). Performance standards for subscriber receivers vary according to how they are marketed. These issues are covered more thoroughly in Part 6. The FCC does not, as of the date of this book, regulate network performance for any signals other than analog video, nor does it regulate the quality of drop materials available for purchase by end users, though voluntary SCTE standards are being developed to cover drop components.

The FCC’s standards applying to cable television systems generally specify the total allowable signal degradation from acquisition at the headend to delivery of the final signal to subscribers’ receivers (though there are exceptions). This range of coverage is shown by the second horizontal line in Figure 15.1. The quality parameters are applicable (with a couple of minor exceptions) to every analog video signal at every subscriber equipment terminal under all normal operational conditions.

It is important to recognize that the FCC’s requirements apply to performance when the network is carrying normally modulated signals. Although these parameters relate directly to subscriber picture quality perception, the nature of NTSC video is such that some parameters (for example, IM distortion products) are difficult to measure precisely under normal signal conditions because they vary depending on instantaneous levels, carrier phase relationships, and timing of video modulation. Since the total RF power in most cable systems is dominated by analog NTSC video signals, networks are designed and initially tested using unmodulated carriers in place of each expected visual carrier. This yields higher, but more consistent, levels of distortion. Various conversion factors are used to estimate performance under normal signal-loading conditions. In-service systems are tested using normal modulated signal loading, with precision sacrificed to avoid the subscriber service interruption that would be required to perform unmodulated signal testing.

The FCC’s operational standards for television broadcasters are contained in Title 47 of the Code of Federal Regulations, Part 73; the cable television regulations are contained in Part 76 and are summarized in Table 15.1. Regulations pertaining to both set top terminals and subscriber receivers are contained in Part 15.2

Table 15.1 FCC rules for analog signals in U. S. cable television systems

Parameter (FCC rule paragraph): performance limit
Visual carrier frequency (§76.612): No specific requirement, except that carriers within the specified aeronautical bands (108–137 and 225–400 MHz) must be offset from communications channels with a frequency tolerance of ±5 kHz (see rule for details)
Aural/visual carrier frequency difference (§76.605(a)(2)): 4.500 MHz ± 5 kHz
Minimum visual signal level (§76.605(a)(3)): ≥+3 dBmV at the end of a reference 100-foot drop cable connected to any subscriber tap port, and ≥0 dBmV at the input to each subscriber’s receiver
Visual signal level stability (§76.605(a)(4)): ≤8 dB pk-pk variation in visual carrier level over the sum of two 24-hour tests, one taken in July or August and one taken in January or February
Adjacent visual signal level difference (§76.605(a)(4)(i)): ≤3 dB
Total variation in levels among all visual signals (§76.605(a)(4)(ii)): ≤10 dB pk-pk for systems with an upper frequency limit of 300 MHz or less, plus 1 dB for each additional 100-MHz increment, or fraction thereof (e.g., for a 550-MHz system, the allowed variation is 13 dB)
Maximum visual signal level (§76.605(a)(4)(iii)): Below the TV overload point (level not specified)
Level of aural signal relative to visual signal (§76.605(a)(5)): −10 to −17 dB (−6.5 dB maximum at set top terminal output)
In-channel visual frequency response variation (§76.605(a)(6)): ≤4 dB pk-pk from 0.5 MHz below the visual carrier to 3.75 MHz above the visual carrier; includes set top terminal response
Visual signal carrier-to-noise ratio (§76.605(a)(7)): ≥43 dB, measured in a 4-MHz bandwidth
Composite triple beat (CTB) and composite second-order (CSO) IM product levels, relative to visual signal level (§76.605(a)(8)): ≤−51 dB, time averaged, except ≤−47 dB for products that are frequency coherent with the visual carrier in HRC and IRC systems
Hum modulation (§76.605(a)(10)): ≤3% pk-pk modulation of the visual carrier (±1.5% modulation by the conventional definition)
Relative chrominance/luminance transmission delay (chroma delay) (§76.605(a)(11)(i)): ≤±170 ns, not including the contribution of any set top terminal
Differential gain (§76.605(a)(11)(ii)): ≤±20%, not including the contribution of any set top terminal
Differential phase (§76.605(a)(11)(iii)): ≤±10 degrees, not including the contribution of any set top terminal

In addition to the FCC’s regulations, negotiations between the cable television and consumer electronics industries have resulted in supplemental voluntary standards applying to both cable systems and consumer receivers. Although these were not released as of the writing of this book (and are subject to change), key parameters in draft standard EIA-IS-23 affecting cable system signals delivered to the input terminals of consumers’ receivers are as follows:3

The visual level of any individual analog television signal delivered to a receiver should not exceed +20 dBmV, whereas the average of the power levels of all the visual carriers carried should not exceed +15 dBmV.

When distribution systems use multiple cables, the levels of television signals on the same frequency, but different cables, shall not differ by more than 5 dB.

The frequency of the visual (luminance) carrier of any analog television signal shall not vary more than ±25 kHz from the nominal values specified in ANSI/EIA-542 (any frequency plan).

The maximum levels of any non-analog-video signals whose frequencies lie between 0.5 and 30 MHz shall not exceed +42 dBmV, whereas the levels of signals between 54 and 1,000 MHz shall not exceed +20 dBmV (acceptable maximum levels for signals lying between 30 and 54 MHz have not been determined).

The maximum total average power of nonvideo signals whose frequencies lie between 54 and 1,000 MHz shall not exceed +15 dBmV in any 6-MHz channel.

Regardless of their official status, these additional performance guidelines represent good engineering practice in assuring that cable television signals are compatible with most consumer electronics receivers.

Not included in either U. S. government regulation or interindustry agreements are specifications covering several aspects of the sound accompanying NTSC signals (such as loudness level or stereo separation), though these are included in the DBP model. An ongoing issue with both over-the-air broadcast and cable-generated video programming has been the perceived loudness variation among channels and within channels (for example, between the main program and commercials). Despite several attempts by both governmental and industry groups, this issue has never been successfully resolved.

Also not specified in any document are limitations on either microreflections or external signal ingress. Microreflections were discussed in Chapter 11 and result in visible ghosts in analog NTSC pictures if the amplitude and delay are sufficient. As will be seen later in this chapter, they also contribute to both group delay and frequency response variations. External signal ingress can occur in either the distribution network or within subscribers’ terminal equipment. Signal pickup in subscribers’ receivers, discussed in Chapter 24, can affect the network if ingressing signals are transmitted out the antenna terminals. It should be noted that, though the FCC does have strict standards covering signal egress from cable systems, where strong external signals are present, ingress will usually be visible to subscribers before the FCC egress limit is exceeded; the two effects both arise from inadequate shielding and are obviously related.

Several voluntary industry standards exist for digitally modulated signals. The interindustry specification EIA-105, Decoder Interface, for example, includes several tuner performance requirements that are intended to ensure successful reception of high-order VSB or QAM signals; the SCTE’s Digital Video Subcommittee and Advanced Television Systems Committee are developing digital video transmission standards, and the data-over-cable service interface specification (DOCSIS) covers cable modems. Chapter 4 covers digital modulation in detail, along with the required transmission quality.

15.3 Performance Allocations Among Sections of the Cable System

Among the listed parameters, clearly some are determined exclusively (or primarily) in one element of the transmission chain, whereas others (such as carrier to noise) are cumulative as the signal travels through the sections. Specifically:

The visual carrier frequency is initially determined at the point of RF modulation but may be modified by subsequent frequency conversions. In most set top terminals, the video signal is demodulated and re-modulated so that the frequency of the signal delivered to the subscriber’s equipment is not related to that on the network. In that case, the FCC’s frequency standards apply to both the headend output and the output of each terminal.

The difference between visual and aural carrier frequencies is also determined at the point of modulation and is unaffected by subsequent frequency conversions that modify the entire channel’s spectrum (or the entire RF spectrum, for instance, in an AML transmitter). Similarly, the deviation of the aural carrier (and therefore the “loudness” of the detected sound) is determined at the last point of modulation, be that the over-the-air transmitter, headend, or set top terminal. Here again, the FCC’s rules apply at the last point of modulation.

The levels of signals delivered to subscribers’ homes are primarily a function of the broadband distribution system but are also affected by changes in the headend. Where set top terminals are used, however, the signal levels reaching subscribers’ receivers may have no relationship to system operating levels. The FCC rules on levels delivered to subscriber equipment apply both before and after any operator-supplied terminal equipment. In addition to the Part 76 rules covering cable television, specifications in Part 15 further limit the allowable output RF power range of terminals.

In-channel frequency response variation, relative visual/aural carrier level difference, chroma delay, differential gain, and differential phase in a properly operating cable system are primarily determined in the headend and only slightly modified as a result of variations in the response of the broadband network. As with visual levels, however, set top terminals significantly modify all these parameters.

Intermodulation distortions (CTB, CSO, XMOD, and CIN) primarily occur in the broadband distribution network but can be further degraded by set top terminals.

Hum modulation can occur at any place in the network though, in a properly maintained system, it will primarily occur because of the side effects of ac powering being carried over the same coaxial system as RF signals — a combination of power pack ripple and parametric modulation of magnetic components.

Noise is added at all stages in the process though not in equal amounts.

In summary, the primary impediments that occur in multiple sections of cable systems are distortion (occurring in both broadband distribution network and terminal equipment) and noise (with contribution from headend, broadband distribution network, and terminal equipment). With this understanding, we can discuss allocations of the allowable degree of cable system signal degradation among the various major network sections and within the subparts of the broadband distribution section.

15.4 Noise and Distortion Allocations in Cable Systems

15.4.1 Carrier-to-Noise Ratio

Applying Equation (11.2) to the elements of a cable system, it can be seen that the total C/N measured from signal acquisition to the input of subscribers’ receivers is


$C/N_{ttl} = -10 \log \left( 10^{-(C/N_h)/10} + 10^{-(C/N_s)/10} + 10^{-(C/N_d)/10} + 10^{-(C/N_t)/10} \right)$   (15.1)


where

C/Nttl = the system C/N considering only noise contributions from the headend input to the input of subscribers’ receivers

C/Nh = the C/N of the headend, considered alone

C/Ns = the C/N of the supertrunk (generally fiber-optic or microwave link), considered alone

C/Nd = the C/N of the coaxial distribution, considered alone

C/Nt = the C/N of the terminal equipment (for example, set top terminal), considered alone

Although this formula appears somewhat complex, it reflects converting each section’s relative noise level to a scalar value, adding the normalized noise levels, and reconverting to familiar logarithmic (dB) terms. The initial 10 and the 10 used to divide each of the original C/N values reflect the fact that the thermal noise generated in each section is noncorrelated and so adds on a power rather than voltage basis. A shorthand way of stating this relationship is to state that the “cascade factor” is “10 log.”

One eminent cable engineer has suggested the use of a shorthand notation to express such equations, which we have slightly modified to make it more universal.4 Using this notation, Equation (15.1) can more simply be written as


$C/N_{ttl} = C/N_h \oplus C/N_s \oplus C/N_d \oplus C/N_t : 10 \log$   (15.2)


where the ⊕ sign indicates the conversion to scalar values, addition, and reconversion to dB form, and the term after the colon indicates the cascade factor, which may vary from 10 log (totally uncorrelated power addition) to 20 log (voltage addition). We will use this notation throughout this section.
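As a rough illustration of this notation, the short Python sketch below (the function name and structure are illustrative only, not from the original text) implements the ⊕ combination for an arbitrary cascade factor n; the same idea is reused in the noise and distortion calculations that follow.

```python
import math

def combine(ratios_db, n=10.0):
    """Cascade carrier-to-impairment ratios given in dB.

    Implements the circle-plus operation of Equation (15.2): each ratio is
    converted to a scalar using the cascade factor n (10 for uncorrelated
    power addition, up to 20 for voltage addition), the scalars are summed,
    and the sum is converted back to dB.
    """
    return -n * math.log10(sum(10 ** (-r / n) for r in ratios_db))

# Two cascaded sections, each providing 52-dB C/N, combine to about 49 dB
# when the cascade factor is 10 (power addition).
print(round(combine([52.0, 52.0]), 1))  # 49.0
```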

Typical set top terminals have noise figure specifications ranging from 10 to 13 dB. Since cable operators are only required to deliver analog television signals at levels of 0 dBmV (and typically deliver many channels at close to that level), we can use Equation (10.19) to determine that, with the noisiest terminals and minimum signal levels:


$C/N_t = 59.2 + 0\ \mathrm{dBmV} - 13\ \mathrm{dB} = 46.2\ \mathrm{dB}$   (15.3)


At the other end of the chain, individual modulators and signal processors will generally exhibit 60-dB C/N or better. That is degraded, however, by the out-of-band noise from other modulators when many channels are passively combined. The various summing and isolation amplifiers add additional noise as well (see Chapter 8 for a discussion of required headend processing) so that the typical C/N of individual channels as measured at the input to the broadband distribution network is of the order of 57 dB.

Assuming that the goal is to just meet the FCC’s required C/Nttl of 43 dB, we can find the minimum composite C/N of the supertrunk and coaxial distribution together using


$C/N_s \oplus C/N_d = C/N_{ttl} \ominus C/N_h \ominus C/N_t : 10 \log = 43 \ominus 57 \ominus 46.2 \approx 46.2\ \mathrm{dB}$   (15.4)


where the ⊖ sign indicates that the scalar quantities are subtracted rather than added.

What this equation shows is that our worst-case terminal has effectively used up half the total noise budget of the system. Though most modern terminals have lower noise figures, set top terminals still contribute significantly to total noise.

Typical design specifications for the supertrunk plus coaxial distribution portion of the plant call for 48–49 dB C/N. The margin of about 2 dB provides an allowance for such operating variations as errors in setting operating levels at the headend, aging of components, and imperfect frequency response through chains of amplifiers. The variation in frequency response is known as peak-to-valley (P/V) and is a measure of the peak-to-peak variation, in decibels, from the ideal. Though fiber-optic links are typically flat within a few tenths of a decibel, the usual allowance for coaxial distribution networks is N/10 + 2, where N is the number of cascaded amplifiers.

The allocation of noise budget between supertrunk and coaxial distribution depends on both architecture and technology. Typical downstream fiber-optic links driven by directly modulated DFB transmitters will exhibit C/N values of 51 to 54 dB, as will broadband AML microwave links. In a simple architecture, a 49-dB supertrunk/coaxial C/N requirement might be met by cascading fiber-optic and coaxial sections, each independently providing 52-dB C/N. More complex architectures might require cascaded supertrunk links, requiring better performance in each section.
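A brief numeric sketch of this allocation, using the values assumed above (57-dB headend, 46.2-dB worst-case terminal, 43-dB FCC minimum) and hypothetical helper functions, reproduces the approximate budget figures discussed in this section.

```python
import math

def to_scalar(r_db, n=10.0):
    return 10 ** (-r_db / n)

def to_db(x, n=10.0):
    return -n * math.log10(x)

# Values assumed in the preceding discussion (all in dB, 10 log cascade).
cn_required = 43.0    # FCC minimum at the subscriber receiver
cn_headend = 57.0     # typical combined headend output
cn_terminal = 46.2    # worst-case set top terminal, Equation (15.3)

# Equation (15.4): the minimum composite C/N that the supertrunk plus
# coaxial distribution must provide, found by scalar subtraction.
cn_network_min = to_db(to_scalar(cn_required)
                       - to_scalar(cn_headend)
                       - to_scalar(cn_terminal))
print(round(cn_network_min, 1))            # about 46.2 dB

# A 48- to 49-dB design target leaves roughly 2 dB of operating margin.
print(round(48.0 - cn_network_min, 1))     # about 1.8 dB

# Splitting a 49-dB target equally between a fiber link and a coaxial
# cascade requires each section to be about 3 dB better, i.e., 52 dB.
print(round(49.0 + 10 * math.log10(2), 1))  # 52.0
```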

15.4.2 CTB and CSO

Since headend levels can be carefully optimized and premium amplifiers utilized, intermodulation distortion within the headend is minimized, and most distortion occurs in the supertrunk, coaxial distribution, and set top terminal. As discussed in Chapter 11, however, the calculation is not straightforward. Continuing the example of a simple HFC network, the distortion mechanisms in the directly modulated DFB laser, the balanced RF amplifiers, and the input stages of the set top terminal have some characteristics in common (all exhibit some form of limiting) and others that are different (the small-signal linearity variations and nature of the limiting when it does occur).

Using the same shorthand notation as before, we can write a general equation for the composite distortion:


$C/\mathrm{dist}_{ttl} = C/\mathrm{dist}_s \oplus C/\mathrm{dist}_d \oplus C/\mathrm{dist}_t : n \log$   (15.5)


where dist could indicate CTB or CSO and n is the cascade factor, which can vary from 10 to 20. When the dominant distortion mechanism is similar in the cascaded elements, for instance, symmetrical compression in cascaded push-pull amplifiers running at the same output levels, the cascade factor will be close to 20, which was the number traditionally used for both CSO and CTB in large all-coaxial networks. Research in the performance of dissimilar cascaded sections, however, has shown that distortions often combine at less than a voltage addition rate (see Endnote 2, Chapter 11). In occasional cases, in fact, the distortion may build up at a less than 10 log rate when distortions partially cancel.

As discussed in Chapter 11, the broadband distribution network is designed and (often) tested using unmodulated carriers. This results in CTB and XMOD product design levels that are worse than average operational values by about 12 dB, whereas CSO products will be worse than operational values by about 6 dB, though the difference may not be that great for very short cascades or where the modulation on multiple channels is synchronized.

Although there are no standards in this area, common industry practice has been to design for about 53-dB C/CTB and C/CSO in the broadband network (supertrunk plus coaxial distribution) under test (unmodulated carrier-loading) conditions. Even when combined with typical set top terminal distortion levels (57–65-dB C/CTB and 60-dB C/CSO), the total distortion under operational conditions is well within FCC requirements, even with the most conservative cascade factor assumptions, as shown in these equations:


$C/CTB_{ttl} = (53 + 12) \oplus (57 + 12) : 20 \log \approx 61\ \mathrm{dB}$   (15.6)

$C/CSO_{ttl} = (53 + 6) \oplus (60 + 6) : 20 \log \approx 56\ \mathrm{dB}$   (15.7)
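The arithmetic behind Equations (15.6) and (15.7) can be checked with a short sketch such as the following (illustrative Python, not part of the original text); the numbers are the assumed design and terminal values quoted above.

```python
import math

def combine(ratios_db, n):
    """Cascade carrier-to-distortion ratios (dB) using cascade factor n."""
    return -n * math.log10(sum(10 ** (-r / n) for r in ratios_db))

# Unmodulated-carrier design values assumed in the text, corrected to
# operational (modulated) conditions: +12 dB for CTB, +6 dB for CSO.
network_ctb, terminal_ctb = 53 + 12, 57 + 12
network_cso, terminal_cso = 53 + 6, 60 + 6

# Worst-case (20 log) cascade factor, per Equations (15.6) and (15.7).
print(round(combine([network_ctb, terminal_ctb], 20), 1))  # about 60.8 dB
print(round(combine([network_cso, terminal_cso], 20), 1))  # about 55.8 dB
# Both comfortably exceed the FCC's 51-dB requirement.
```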


Typical specifications for directly modulated DFB optical transmitters include a C/CTB of 65 dB and C/CSO of 62 dB. Externally modulated transmitters are generally better, perhaps 70-dB C/CTB and 65-dB C/CSO, whereas broadband microwave links may generate slightly worse distortion.

The remaining distortion “budget” can be allocated to the coaxial distribution system or to additional supertrunk links, if required by the architecture. Fortunately, though fiber-optic links have higher CSO than CTB, coaxial amplifiers have higher CTB so that the cascade of the two technologies is somewhat complementary with regard to distortion.

Although we could do a similar calculation for XMOD, the results would generally track CTB since both are third-order effects. In practice, CTB testing has proved to produce more consistent results and so is generally preferred to XMOD evaluation. It should be noted that the FCC no longer specifies XMOD performance.

15.5 Typical Network Transmission Quality Under Operational Conditions

Having discussed how various signal imperfections arise in the network and how they are designed and tested, we will now consider, as an example, typical HFC network performance under the following conditions:

54- to 750-MHz total downstream frequency range, loaded with 78 analog NTSC channels extending up to 550 MHz, plus 200 MHz of digitally modulated signals extending to 750 MHz

A single fiber link extending from the headend to each node and feeding a coaxial distribution network that passes 500-2,000 homes

A nominally flat frequency response from the headend output through the fiber link and to the input of the coaxial distribution system; an interstage circuit to set an optimum gain/frequency slope within the first amplifier; and a cascade of amplifiers, interconnecting cable, and passive devices adjusted so that the frequency response from the output of any amplifier to the output of the subsequent amplifier is flat (see Figure 10.19 for a graphical illustration)

Equal levels among the analog video channels leaving the headend and the levels of digitally modulated carriers set such that the total average power per 6-MHz channel is 10 dB below the levels of the analog video visual carriers during synchronizing peaks.

These parameters are typical of specifications, maximum signal-loading capabilities, and operational parameters of many cable systems built in the mid-1990s. Most coaxial and fiber-optic equipment designed for this application is specified under both the preceding carriage conditions and, alternately, under a load of 110 NTSC video signals.

As discussed in Chapter 13, it is not always the case that digitally modulated signals and analog video signals share the same optical fibers or transmitters. Nevertheless, the methodology discussed in this chapter can easily be applied to more complex systems.

This section should provide guidance on typical end-to-end transmission channel conditions to those designing products whose signals will be transmitted through cable systems, as well as general guidance on how operating performance is related to design parameters. An important caveat, however, is that although the cable television industry has now deployed HFC architectures for most of its networks, many older all-coaxial networks are still in use, and new networks are being constructed with more complex architectures both with and without microwave supertrunks. These differences must be taken into account. As an example, microwave links are subject to such factors as fading, whereas all-coaxial networks will exhibit greater level instability and in-band response variations. Either technology, however, will outperform directly modulated DFB laser transmitters with regard to clipping distortion.

Each of the channel impairments summarized here will be expressed relative to two kinds of signals:

Analog video signals that originate within or are received at the headend and that pass through the network and a set top terminal before delivery to customers’ receivers

Digital signals that are inserted at the last combining point in the headend, operate at 10-dB depressed levels (as defined earlier), and are delivered directly to terminating equipment

Given these two classes of signals, it is simple to estimate the effect on other signals of different characteristics.

15.5.1 Carrier-to-Noise Ratio

The FCC’s video C/N specification is one of the most difficult to meet for the most distant (in terms of equipment cascade) cable system subscribers. For those subscribers who also use set top terminals, it is likely that the required 43-dB C/N is barely met on some channels. For customers who are located close to a fiber node and who directly connect their receivers to cable outlets, the C/N may be more than 50 dB (limited by the noise contributions of the headend, fiber link, and one coaxial amplifier).

A digitally modulated signal that is inserted at the input to the downstream laser transmitter (and thus avoids the noise contributions of headend amplifiers), which is operated at a level 10 dB below equivalent analog video channels and is directly connected to termination equipment, is affected by different factors. The C/N at the most distant subscriber tap for such signals is essentially that of the supertrunk and coaxial distribution network, corrected for the signal level difference between video and data signals, corrected for the noise susceptibility bandwidth of the digital receiver relative to 4 MHz, and with an added allowance for P/V, errors in setting levels, and aging of components. As with analog video, the total variation due to these factors may be of the order of 2 dB. Thus, we can write a generic equation for likely worst-case C/N for a non-analog-video signal:


$C/N_{digital} = \left( C/N_S \oplus C/N_D : 10 \log \right) - S + 10 \log \left( \frac{4}{B} \right) - \mathrm{margin}$   (15.8)


where

C/NS = the analog video C/N in the supertrunk

C/ND = the analog video C/N in the coaxial distribution

S = the suppression of digital signal levels relative to analog video (in dB)

B = the noise susceptibility bandwidth of the digital receiver (in MHz)

margin = the expected variation from design performance due to aging, P/V, and operational tolerances (in dB)

As an example, if C/NS ⊕ C/ND is 48 dB, the signals are run 10 dB below video, the bandwidth is 6 MHz, and we allow a 3-dB margin, the worst-case C/Ndigital would be 33.2 dB.
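The same arithmetic can be expressed as a small, illustrative Python function (the function name and parameters are assumptions made for this sketch):

```python
import math

def worst_case_digital_cn(cn_network_db, suppression_db, rx_bw_mhz, margin_db):
    """Equation (15.8): worst-case C/N of a depressed-level digital signal.

    Starts from the analog video C/N of the supertrunk plus coaxial
    distribution, subtracts the level suppression, corrects from the 4-MHz
    analog noise bandwidth to the receiver bandwidth, and subtracts an
    operating margin.
    """
    return (cn_network_db - suppression_db
            + 10 * math.log10(4.0 / rx_bw_mhz) - margin_db)

# The example from the text: 48-dB network C/N, 10-dB suppression,
# 6-MHz receiver noise bandwidth, 3-dB margin.
print(round(worst_case_digital_cn(48, 10, 6, 3), 1))  # 33.2
```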

15.5.2 Intermodulation Products from Discrete Carriers

As discussed earlier, typical broadband networks (supertrunk plus distribution) are designed to provide C/CTB and C/CSO levels of at least 53 dB under unmodulated signal test conditions. Combined with hypothetical set top terminal distortion levels and adjusted for the difference between unmodulated carriers and normal television signals, worst-case C/CTB will be about 61 dB, whereas C/CSO will be about 56 dB.

As with C/N, a further allowance must be made for imperfect frequency response (P/V) and for operational variances and aging of components. Errors in setting signal levels will cause larger differences for CTB since the distortion product amplitudes change by 3 dB for every 1-dB change in desired levels, whereas CSO products change by only 2 dB. Furthermore, the levels of each video signal and the total peak RF power carried will change over time as average picture brightness changes and as the various video-modulating waveforms drift into and out of time synchronization. The result is that the observed product clusters affecting any one channel will vary by several decibels over time (the FCC allows time-averaged tests in evaluating operational systems). If we allow 5 dB for the total of these effects on CTB and 3 dB for the CSO effects, we can predict that the worst-case C/CTB on any channel at any time will be about 55 dB, whereas the worst-case C/CSO will be about 53 dB.

The improvement in performance experienced by subscribers who are near optical nodes is not as great as for C/N since the largest contributor to CSO is the optical transmitter, whereas the largest contributor to CTB is generally the last amplifier (or two) that drive each subscriber tap string.

In the case of digitally modulated signals sharing the cable network with analog television signals, some intermodulation products will fall into the digital spectrum. If the same channelization is used, then the products will appear in the same relative positions within each channel. That is, for ANSI/EIA-542 Standard channels, they will occur at harmonics of 6 MHz and 1.25 and 2.5 MHz above those harmonics, with the first and third of those being CSO products and the middle one being the CTB product. Scaling from the analysis just done for video signals, adjusting for the 10 dB of level difference, and eliminating the contribution from the set top terminal (approximately 69-dB C/CTB and 66-dB C/CSO with modulated signals), we would expect that the worst-case C/CTB might be 47 dB, whereas the worst-case C/CSO might be 45 dB.

Given the distribution of CSO products with frequency (Figure 10.8), it would be expected, further, that the only significant CSO products would be those occupying the highest frequency in each set (2.5 MHz above the harmonics of 6 MHz) when digital signals are placed at the top end of the spectrum. ANSI/EIA-542 Standard channels are further protected against lower-side CSO products because they occur only at the boundary between channels.

Though other ANSI/EIA-542 channelization alternatives are declining in popularity, it should be mentioned that with the HRC system, all the second- and third-order products will fall exactly on harmonics of the reference oscillator (generally 6.0003 MHz), as will the visual carrier, if present. Since the products are frequency coherent with the visual carriers, they do not produce difference-frequency beats in analog channels and so are less visible. With the IRC system, the products have approximately the same frequencies as with Standard channelization, except that each product group is frequency coherent. Because the second-order products are not coherent with the visual carrier, second-order beats may become visible in analog channels carried in IRC systems before third-order beats in some cases.

15.5.3 Composite Intermodulation Noise (CIN)

As discussed in Chapter 10, intermodulation products among digital signals and between digital signals and analog visual carriers add “bands” of noiselike products, which add to thermal noise to raise the effective noise floor of the system. The frequency distribution of these products is a function of the distribution and relative power levels of both the analog and digital signals. Fortunately, when the average power density of digital signals is suppressed by 5 dB or more relative to visual carriers, the effective C/N degradation for the visual signals is typically less than 1 dB.

15.5.4 Laser Transmitter Clipping Distortion

As discussed in Chapter 12, downstream laser clipping distortion is a statistically rare event in a properly adjusted, directly modulated DFB transmitter. When external modulation methods are used, clipping is even rarer since other non-linearities limit average modulation levels before clipping probability becomes significant. Thus, it can reasonably be expected that laser clipping will have no measurable, visible, or audible effect on analog NTSC channels under normal operating conditions. Should the optical modulation levels be adjusted too high, however, horizontal black streaks will appear in pictures.

Studies of clipping-induced errors in 64-QAM data signals that share DFB optical transmitters with a full spectrum (54–550 MHz) of analog video signals have shown that, with proper transmitter adjustment, bit error rates due to clipping are less than 10⁻⁹ even without forward error correction. If, however, the modulation is set too high, the BER can degrade to the order of 10⁻⁷ or worse before the subjective picture quality of the analog channels is affected. Once the threshold of clipping is approached, the decline in performance with increasing modulation is very steep.

15.5.5 Hum Modulation

Hum modulation occurs in both powered equipment (due to imperfect power supply filtering) and as a result of parametric modulation of magnetic components in both amplifiers and passive devices. As discussed in Chapter 10, even though the FCC allows 3% pk-pk modulation levels, systems seldom exceed half that in practice. Modulation due to power supply ripple should logically be more or less independent of RF frequency. To the extent that parametric modulation of magnetic components is a factor, however, its effects may vary with frequency. The primary mechanism is that magnetic cores may partially saturate at peaks of the ac current. This will affect their impedance at RF, and so create a change in return and transmission loss that will vary at a 60-Hz rate. Since the impedance change may vary with frequency, as may the effect of the impedance change, there is no way of predicting how the modulation percentage will vary across the spectrum. At a given frequency, however, the percentage modulation of a video signal should be the same as that of a digitally modulated signal though the latter may be able to tolerate a higher modulation percentage.

15.5.6 Microreflections

Microreflections can occur anywhere within the broadband coaxial network (or, for that matter, in the optical link) but are most severe where components are mounted within a few hundred feet of each other, namely, in the tapped coaxial feeder lines and drop structure. Even though component quality (and thus impedance match, isolation, and directivity) is higher in the “hard cable” plant (that is, the coaxial portion on the network side of subscriber taps), that is partially offset by the lower per-foot cable losses and greater reflection delays.

The mechanisms by which microreflections occur were analyzed in Chapter 10. Drops were analyzed in Chapter 11. In summary, occasional microreflections should be expected whose magnitude can vary from as large as −10 dBc at zero delay to a maximum amplitude that will decline in accordance with round-trip cable losses at the frequency in question.

Such large reflections, however, can only occur in the trunk and distribution network in the case of a major cable or component fault, poor-quality passive components, or a terminating tap, all of whose subscriber ports are unterminated. If any tap port is terminated, even by connection to a drop cable, the reflections will be significantly smaller. In the drop structure, reflections from receivers and unterminated outlets are partially compensated for by the larger attenuation of the interconnecting cables. As a practical rule, most delayed signals should be at least 20 dB below direct signals, with delays ranging from 0 to about 500 ns. In addition to reflections in tapped lines and within the drop structure, mismatches occurring in long, untapped cable spans (trunk or express feeder lines) can result in smaller reflections (≤-40 dBc) with delays as long as 5 μs.

Microreflections can cause various picture degradations in analog transmitted signals, ranging from subjective softening or overemphasis of vertical lines (depending on the relative polarity of the reflected signal) when delay times are short to visible ghosts when the delay time is sufficient that the reflected picture is distinguishable from the direct picture. Customer perception of microreflections’ effect on picture quality is discussed in Chapter 2.

With data signals, the reflected signal may be interpreted in a number of ways. In the time domain, the effect is to create an uncertainty in the detected amplitude and phase as the delayed and direct signals add at the detector. In the frequency domain, these same effects can be interpreted as group delay and in-band response variations.

15.5.7 Group Delay and In-Band Response Variations

To review, the return loss of a component is the ratio between the signal impinging on a port (with all other ports properly terminated) and the signal reflected from that port:


$R = L_i - L_r = 10 \log \left( \frac{P_i}{P_r} \right)$   (15.9)


where

R = the return loss in dB

Li and Lr = input and reflected signal levels in dBmV (or any other consistent logarithmic units)

Pi and Pr = input and reflected power levels in mW (or any other scalar quantities)

Another way of quantifying the mismatch recognizes that the voltage due to the reflected wave will, at some points along the transmission line, add to the incident wave, whereas at others it will subtract from it. The ratio of maximum to minimum combined voltage is known as voltage standing wave ratio (VSWR). Note that although at any point the voltage will vary in a sinusoidal fashion at the frequency of the signal, the ratio of voltages along the line will not change, thus a “standing” wave ratio. The relationship between VSWR and return loss is


$\mathrm{VSWR} = \frac{1 + 10^{-R/20}}{1 - 10^{-R/20}}$   (15.10)


Figure 15.2 illustrates the standing waves resulting when a transmission line is terminated in a pure resistance equal to half its characteristic impedance.


Figure 15.2 Standing wave pattern.

When a line is terminated in a pure resistance of any value, the VSWR is Z0/R or R/Z0, whichever is greater.

As can be seen, for purely resistive terminations, the standing wave pattern has a maximum value at distances that are odd multiples of one-quarter wavelength from the termination and a minimum value at distances that are even multiples. The reverse would be true if R were greater than Z0. If the termination is complex (not a pure resistance), then the distance to the first minimum or maximum will be other than a quarter wavelength. Regardless of the termination, however, the maxima and minima will be exactly one-quarter wavelength apart at the frequency of the signal.
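For convenience, the return loss and VSWR conversions of Equation (15.10), and its inverse given later as Equation (15.17), can be captured in a pair of small helper functions; the following Python sketch is illustrative only.

```python
import math

def return_loss_to_vswr(r_db):
    """Equation (15.10): VSWR corresponding to a return loss in dB."""
    rho = 10 ** (-r_db / 20)          # reflection coefficient magnitude
    return (1 + rho) / (1 - rho)

def vswr_to_return_loss(vswr):
    """The inverse relationship (Equation (15.17) later in this chapter)."""
    return 20 * math.log10((vswr + 1) / (vswr - 1))

# A purely resistive termination of half (or twice) the characteristic
# impedance gives a VSWR of 2, or about 9.5 dB return loss.
print(round(vswr_to_return_loss(2.0), 1))    # 9.5
# A typical tap port with 16-dB return loss has a VSWR of about 1.38.
print(round(return_loss_to_vswr(16.0), 3))   # 1.377
```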

The phase shift, φ, through a length of cable, in radians, is $\varphi = 2\pi L / \lambda$, where L is the length and λ is the wavelength in the cable. Substituting for λ from Equation (10.9):


$\varphi = \frac{2\pi L f}{984\,V_P}$   (15.11)


where

φ = the phase shift in radians

L = the distance, in feet, over which the phase shift is measured

f = the frequency in MHz

VP = the relative velocity of propagation in the cable

We would like to know what frequency shift, Δf, would be required to cause a relative phase difference of 2π radians through a fixed length L of cable. If we solve Equation (15.11) for f and let φ = 2π, we get


$\Delta f = \frac{984\,V_P}{L}\ \mathrm{MHz}$   (15.12)


If a direct signal and one delayed by being transmitted through a cable of length L are mixed, as through a reflection, then Δf is known as the ripple frequency.
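A one-line calculation of the ripple frequency, per Equation (15.12), might look like this (illustrative sketch; the function name is arbitrary):

```python
def ripple_frequency_mhz(extra_path_ft, velocity_factor):
    """Equation (15.12): spacing in MHz between successive response ripples
    when a direct signal mixes with one delayed by the extra cable length
    (984 feet per microsecond is the free-space propagation rate)."""
    return 984.0 * velocity_factor / extra_path_ft

# An echo traveling 200 extra feet (100 feet each way) in cable with a
# 0.90 velocity factor produces ripples spaced about 4.4 MHz apart.
print(round(ripple_frequency_mhz(200, 0.90), 2))  # 4.43
```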

Reflections Due to Mismatches in Trunk and Distribution Plant

To see how this affects amplitude response and group delay, let us examine a portion of a typical tapped distribution line (Figure 15.3). Signals exit the source, which might be an upstream tap or amplifier. They travel through cable 1 and enter the tap. Signals exiting the tap through-port travel through cable 2 to the next downstream component, labeled load, which might be another tap or the input to a following amplifier.


Figure 15.3 Sample of tapped line segment.

The following parameters will be used in the calculations:

Length of cable 1 in feet = L1

Length of cable 2 in feet = L2

Source return loss in dB = Rs

Load return loss in dB = RL

Tap return loss in dB = RT

Attenuation of cable 1 in dB = AC1

Attenuation of cable 2 in dB = AC2

Tap directivity in dB = D

Frequency in MHz = f

Reflected power will be coupled into the drop by two primary mechanisms:

A portion of the signals that are reflected from the tap input will be re-reflected from the source and combined with the direct signals delayed by the transit time through 2L1 feet of cable. The difference in level between the desired and reflected signals, R1 (in dB), will be 2AC1 + RS + RT.

A portion of the signals that are reflected from the load will be coupled into the drop due to the finite directivity of the tap. These signals will be delayed relative to the desired signals by the transit time through 2L2 feet of cable. The difference, R2, in level between the desired and delayed signals in this case will be 2AC2 + RL + D.

In either case, the ratio of the scalar magnitudes of the desired, EF, and delayed, ED, signal voltages appearing at the drop port will be 10^(R/20).

The presence of the reflected/delayed signal affects the desired signal in three ways. First, the delay, if longer than the symbol period of a digital transmission, will reduce the error threshold by interfering with the following symbol. In the case of an analog television signal, the visible effect is a horizontally offset image whose visual appearance depends on the amplitude and delay time, as covered previously.

Second, the frequency-sensitive nature of the VSWR pattern between the direct and reflected signals will affect the apparent amplitude versus frequency response of the system. Using Equation (15.10), the peak-to-peak change in amplitude response can be calculated to be


$\text{pk-pk response variation (dB)} = 20 \log \left( \frac{1 + 10^{-R/20}}{1 - 10^{-R/20}} \right)$   (15.13)


This ripple will occur at the incremental frequency rate calculated in Equation (15.12), where L is either 2L1 or 2L2 depending on the dominant coupling mechanism.

Finally, the vector combination of the desired and delayed signals incrementally affects the phase shift through the system, and thus group delay. Figure 15.4 shows the relationship between the magnitudes of the desired and reflected signals, the angle between them, and the angle and magnitude of the resultant signal. In general:


Figure 15.4 Vector addition of desired and reflected/delayed signals.


$\theta = \tan^{-1} \left( \frac{E_D \sin\varphi}{E_F + E_D \cos\varphi} \right)$   (15.14)


We can restate the definition of group delay (Equation (10.32)) in similar units:


$\text{group delay } (\mu\mathrm{s}) = \frac{1}{2\pi}\,\frac{d\theta}{df}$   (15.15)


where

θ = the incremental phase shift in radians

dθ/df = the rate of change of phase shift through the device as a function of frequency in units of radians per MHz

This allows us to solve for the peak-to-peak delay variation due to the presence of the reflected signal:


$\Delta\tau_{pk\text{-}pk} = \frac{2 \times 10^{-R/20}}{1 - 10^{-R/10}} \times \frac{L}{0.984\,V_P}\ \mathrm{ns}$   (15.16)


where

L = the incremental cable length through which the reflected signal is transmitted in feet

VP = the relative propagation velocity (dimensionless)

R = the ratio of the desired to reflected signal strength as measured at the drop port and expressed in dB

A practical example may be useful in quantizing the effects of reflected signals in typical coaxial distribution lines. Typical amplifiers and passives have return loss specifications of about 16 dB when properly terminated, with the worst performance near the ends of the passband, as can be expected. The specified directivity of taps varies as a function of both frequency and tap value. In general, the worst values are also at the extremes of the passband (especially at the low end of the upstream frequency range) and at high tap values, where the absolute coupling between ports limits the isolation. For most values, the directivity in the downstream spectrum is 10–15 dB, whereas the return band directivity is 6–9 dB. Finally, most distribution lines use cable whose loss is about 1.6 dB/100 feet at 750 MHz and varies, as we have seen, as the square root of frequency. The relative velocity of propagation for these cables is about 0.90.

Given these parameters, assume that an amplifier and two taps are spaced at 100-foot increments along a cable. At 750 MHz, the following will be the situation under proper operating conditions:

The signal reflected from the first tap and re-reflected from the amplifier will be attenuated by 16 + 16 + 2(1.6) = 35.6 dB and delayed by 200/[(0.984)(0.9)] = 226 ns. The signal transmitted through the first tap, reflected from the second tap, and coupled into the drop because of the tap’s finite directivity (which we will assume to be 12 dB) will be attenuated by 31.6 dB and also delayed by 226 ns. We will use this larger delayed signal for calculating the effects.

If a digital datastream is being transmitted, one effect of this “microreflection” is that if a pulse is transmitted, an echo of that pulse will create a voltage amplitude uncertainty of about 2.6% in the signal 226 ns later.

The amplitude response will have a “ripple” versus frequency. The frequency of the ripple will be 4.42 MHz (meaning slightly more than one full cycle in every 6-MHz channel), and the amplitude will be 0.45 dB peak-to-peak.

The group delay will also have a variation versus frequency at the “ripple” rate. The delay variation will be 11.7 ns peak-to-peak.

Many cable operators, on the other hand, do not terminate unused tap ports, but only cover them to prevent undue weathering of the connectors. Their reasoning has been that a loose type F terminator acts as both a radiator and signal pickup point, reducing the shielding integrity of the system. The consequences for reflected signals, however, can be serious. In the case of a terminating 4-dB tap (actually just a two-way splitter), for instance, the return loss when unterminated is typically <5 dB. If the preceding calculations are made assuming a 5-dB return loss, the delayed signal will be only 20.2 dB below the desired signal, causing the delayed voltage uncertainty to increase to 9.8%, the amplitude response variation to increase to 1.7 dB, and the group delay variation to increase to 45 ns. Although none of the results from a properly terminated system would significantly degrade most signal formats, the second set of values is large enough to be of concern.
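The calculations for both the properly terminated and unterminated cases can be reproduced with a short sketch such as the following (illustrative Python using the relationships of Equations (15.12), (15.13), and (15.16); small differences from the figures quoted above are due to rounding):

```python
import math

def echo_effects(r_db, extra_path_ft, velocity_factor):
    """Estimate the effects of a single echo r_db below the direct signal
    that travels extra_path_ft additional feet of cable, using the
    relationships behind Equations (15.12), (15.13), and (15.16)."""
    rho = 10 ** (-r_db / 20)                        # echo voltage ratio
    delay_ns = extra_path_ft / (0.984 * velocity_factor)
    ripple_mhz = 984.0 * velocity_factor / extra_path_ft
    ripple_db = 20 * math.log10((1 + rho) / (1 - rho))
    delay_var_ns = 2 * rho * delay_ns / (1 - rho ** 2)
    return delay_ns, ripple_mhz, ripple_db, delay_var_ns, 100 * rho

# Properly terminated case from the text: echo 31.6 dB down, 200 extra feet
# of cable, velocity factor 0.90.
print([round(x, 2) for x in echo_effects(31.6, 200, 0.90)])
# -> approximately [225.84 ns, 4.43 MHz, 0.46 dB, 11.88 ns, 2.63 %]

# Unterminated terminating tap (about 5-dB return loss, echo 20.2 dB down).
print([round(x, 2) for x in echo_effects(20.2, 200, 0.90)])
# -> approximately [225.84 ns, 4.43 MHz, 1.7 dB, 44.56 ns, 9.77 %]
```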

If we do a similar calculation at 10 MHz in the upstream band, the principal differences are that the cable loss is much lower and that tap directivity, and sometimes return loss, is degraded. Figures 15.5 and 15.6 show the amplitude and group delay variations for some typical situations based on an average of several manufacturers’ component specifications.


Figure 15.5 Passband frequency response as a function of tapped line configuration.


Figure 15.6 Group delay as a function of tapped line configuration.

Though the preceding calculations are simplified to a single reflection, in actual systems, many reflections occur. Since a tapped distribution line can easily contain a bridger, 2 line extenders, and 30 taps, many microreflections exist simultaneously and will add vectorially. Consider, for example, four taps equally spaced along a cable that has 0.5 dB of loss in each segment (a typical upstream value), as illustrated in Figure 15.7. Assume that each has a return loss of 16 dB at all ports and, for simplicity, that each of the first three has a through loss of 1 dB (including any mismatch losses). We can calculate the worst-case effective return loss and VSWR at the input to the first tap as follows (assume the last tap is a terminating type):


Figure 15.7 Cascaded reflections.

The signal reflected from tap 4 is 16 dB below the signal incident at its input port.

Observed from the input of tap 3, tap 4’s return loss will look 3 dB better because the incident and reflected signals are both attenuated by the through loss of tap 3 and the intervening cable loss. Using Equation (15.10), this corresponds to a VSWR of 1.253.

The 16-dB independent return loss of tap 3 corresponds to a VSWR of 1.376.

The two mismatches will combine, in the worst case, to the product of the individual VSWRs, or 1.253(1.376) = 1.724.

This can be converted back to an equivalent return loss of 11.5 dB using the reverse of Equation (15.10):


$R = 20 \log \left( \frac{\mathrm{VSWR} + 1}{\mathrm{VSWR} - 1} \right)$   (15.17)


This, in turn, is attenuated so that, when observed from the input of tap 2, it is 14.5 dB. Converting to VSWR and combining with tap 2’s own input match, we get a worst-case combined VSWR of 2.014 and return loss of 9.46 dB.

Repeating the process at the input to tap 1, we get a net VSWR of 2.24 and an equivalent return loss of 8.36 dB. If the string of components were longer, the equivalent match would continue to degrade though at a slower rate.
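The worst-case build-up just described can be verified with a short iterative sketch (illustrative Python; the 3-dB per-section improvement and 16-dB tap return loss are the assumptions stated above):

```python
import math

def rl_to_vswr(r_db):
    rho = 10 ** (-r_db / 20)
    return (1 + rho) / (1 - rho)

def vswr_to_rl(vswr):
    return 20 * math.log10((vswr + 1) / (vswr - 1))

# Worst-case build-up of mismatches along the four-tap example: each tap
# presents 16-dB return loss, and each intervening section (1-dB through
# loss plus 0.5-dB cable) improves the downstream reflection by 3 dB.
tap_rl_db = 16.0
section_improvement_db = 3.0

equivalent_rl_db = tap_rl_db                 # looking into the last tap
for _ in range(3):                           # work back through taps 3, 2, 1
    downstream_vswr = rl_to_vswr(equivalent_rl_db + section_improvement_db)
    combined_vswr = downstream_vswr * rl_to_vswr(tap_rl_db)   # worst case
    equivalent_rl_db = vswr_to_rl(combined_vswr)
    print(round(combined_vswr, 3), round(equivalent_rl_db, 2))
# -> approximately (1.725, 11.5), (2.015, 9.46), (2.238, 8.35),
#    essentially the values worked out above.
```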

Since the phase shift through the interconnecting cables changes quickly as a function of frequency (rotating through a full 360° at the ripple frequency), the reflections from two mismatched components should be assumed to reinforce at many frequencies. This is the mechanism that causes structural return loss in coaxial cable to have sharp peaks at many equally spaced frequencies if the manufacturing process produces equally spaced physical defects. The reflections from multiple mismatches, however, will be less likely to reinforce if the spacings are random: a powerful argument against equally spaced similar components in coaxial distribution lines.

Reflections in Drops

As with the trunk and distribution plant, microreflections in the drop system lead to both group delay and frequency response variations. Equations (15.16) and (15.13) can be used to estimate the magnitude of these effects, whereas Equation (15.12) will provide the ripple frequency when applied to the typical drop configuration shown in Figure 11.5 as follows:

For R in the equations, use the splitter isolation plus twice the loss of the cable from the splitter to an outlet.

For L in the equations, use twice the length of the cable from the splitter to an outlet.

For VP in the equations, use the relative velocity of propagation in the drop cable (typically 0.85).

For example, at 550 MHz, a splitter isolation of 15 dB combined with a nonterminated 50-foot long, size 59 drop cable (one-way loss about 3 dB) will create a sinusoidal group delay variation of 21.5 ns peak-to-peak with a period of 8.4 MHz. The corresponding amplitude response variation will be 1.6 dB peak-to-peak with the same periodicity.
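A small sketch of the drop calculation, using the substitutions listed above (illustrative Python; the function and parameter names are arbitrary), reproduces these figures:

```python
import math

def drop_echo(isolation_db, one_way_cable_loss_db, cable_len_ft, vp=0.85):
    """Ripple period (MHz), amplitude ripple (dB), and group delay variation
    (ns) caused by a reflection from an unterminated outlet behind a drop
    splitter: the echo is the splitter isolation plus twice the one-way
    cable loss below the direct signal, and it travels twice the
    splitter-to-outlet cable length."""
    r_db = isolation_db + 2 * one_way_cable_loss_db
    extra_ft = 2 * cable_len_ft
    rho = 10 ** (-r_db / 20)
    period_mhz = 984.0 * vp / extra_ft
    ripple_db = 20 * math.log10((1 + rho) / (1 - rho))
    delay_ns = extra_ft / (0.984 * vp)
    delay_var_ns = 2 * rho * delay_ns / (1 - rho ** 2)
    return period_mhz, ripple_db, delay_var_ns

# 15-dB splitter isolation and 50 feet of size 59 drop cable with about
# 3 dB one-way loss at 550 MHz:
print([round(x, 1) for x in drop_echo(15, 3, 50)])
# -> about [8.4, 1.6, 21.5], matching the figures in the text.
```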

Group Delay Due to Diplex Filters

In addition to the group delay caused by reflections, frequencies close to the crossover frequency of the diplex filters used in amplifiers to separate upstream and downstream signals will be subject to additional group delay. As discussed in Chapter 10 (see Figure 10.12), typical chroma delay (differential delay between the luminance and chrominance frequencies) per amplifier is 21 ns at channel 2, 9 ns at channel 3, and drops off rapidly at higher channels. Since this delay is cumulative through multiple stages, a typical five-amplifier HFC cascade will have a total chroma delay of 105 ns at channel 2.

Older all-coaxial cable systems and systems using longer cascades typically limit the upstream frequency range to 5–30 MHz so that the spacing between upstream and downstream frequencies is greater and the group delay per amplifier is reduced accordingly.

As a general rule, the total channel 2 group delay through most systems will not exceed 100–150 ns and will meet the FCC’s 170-ns requirement. Upstream group delays due to multiple reflections are often more severe than downstream due to the lowered cable loss and, frequently, worse component matches at the lowest frequencies. Upstream transmission is discussed in detail in Chapter 16.

Other In-Band Frequency Response Variations

In addition to response variations at the ripple frequency associated with multiple reflections, channels will experience in-band variations from three causes: the intentional gain/frequency slope of amplifiers (typically 10 dB across the full downstream spectrum), the variation in cable loss with frequency (in both distribution and drop cables), and the precision with which the various loss variations can be compensated within each amplifier (P/V).

In addition to the other effects discussed, individual drops may be subject to additional amplitude and group delay degradation when filters and traps are inserted to tailor the spectrum delivered to customers. Various combinations of high-pass, low-pass, bandstop, and bandpass filters are often used for this purpose, as discussed in Chapter 21.

It is rare for the total in-band response variation within any single 6-MHz channel to exceed 1.0 dB pk-pk in a well-maintained cable system, not including the effects of drop filters.

15.5.8 Local Oscillator and Other Interfering Signals from Receivers

The effects of the performance of consumers’ receivers that are directly connected to the network are covered in detail in Chapter 24. In summary, however, local oscillator signals transmitted out the antenna terminals of receivers may have amplitudes as high as −10 dBmV, but are more likely to be −20 dBmV or lower.5 These signals may be coupled into the downstream signal paths feeding other receivers via the imperfect isolation between tap or splitter output ports. Assuming 20 dB of isolation in such devices and short interconnecting cables, the likely maximum undesired carrier level at the input to an adjacent receiver is about −40 dBmV, or 40 dB below the minimum acceptable analog desired signal.

The probability of such interference to any given channel, however, is slight. With normal television receiver design, it occurs only when the receiver generating the interference is tuned exactly seven channels lower than the affected receiver and is connected directly to the cable drop (as opposed to through a converter). When it does occur, the interfering carrier will appear 5 MHz above the lower band edge, or 3.75 MHz above the desired luminance carrier and very close to the chrominance signal, where the threshold of visibility is high, as discussed in Chapter 2.
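The seven-channel relationship follows directly from the receiver's conversion arithmetic. The sketch below assumes a standard 45.75-MHz visual IF, contiguous 6-MHz channels, and a visual carrier 1.25 MHz above the lower channel edge; the particular lower-edge frequency used is purely illustrative.

# Illustrative check of where a directly connected receiver's LO falls,
# assuming a 45.75-MHz visual IF and contiguous 6-MHz channel spacing.
VISUAL_IF_MHZ = 45.75
CHANNEL_WIDTH_MHZ = 6.0
VISUAL_OFFSET_MHZ = 1.25          # visual carrier above the lower channel edge

def local_oscillator(lower_edge_mhz):
    """LO frequency of a receiver tuned to the channel starting at lower_edge_mhz."""
    return lower_edge_mhz + VISUAL_OFFSET_MHZ + VISUAL_IF_MHZ

tuned_edge = 216.0                # illustrative lower edge of the tuned channel
lo = local_oscillator(tuned_edge)

channels_up = (lo - tuned_edge) // CHANNEL_WIDTH_MHZ      # 7 channels higher
edge_offset = (lo - tuned_edge) % CHANNEL_WIDTH_MHZ       # 5 MHz above lower edge
carrier_offset = edge_offset - VISUAL_OFFSET_MHZ          # 3.75 MHz above visual

print(f"LO at {lo:.2f} MHz: {channels_up:.0f} channels above the tuned channel,")
print(f"{edge_offset:.2f} MHz above that channel's lower edge "
      f"({carrier_offset:.2f} MHz above its visual carrier)")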

Although the interfering signal is larger relative to depressed-level, digitally modulated signals, when those signals are placed above the analog channels in the downstream spectrum the potential for interference exists only in the first seven nonvideo channels above the highest analog channel, at least until digital standards mature to the point that digital descramblers are incorporated into consumer-owned receivers.

15.5.9 Antenna-Conducted Ingress from Receivers

Another potential source of interference is over-the-air signals that are picked up by the internal wiring of consumer receivers or by poorly shielded or faulty drop components (including cable and connectors), or that are coupled across antenna selector switches owing to finite isolation between ports. These mechanisms are discussed in Chapter 24. Such signals can affect reception at the inadequately shielded receiver, but they can also be conducted out of the antenna terminals and back up the drop cable (thus, “antenna-conducted”). Assuming drops are properly installed using quality components, the largest effect is due to the finite shielding of consumer receivers.

If the source of the off-air signal is a VHF television broadcast station, then the interfering signal will be close in frequency to the corresponding ANSI/EIA-542 Standard scheme channel. Often, in fact, cable channels are phase locked to the corresponding strong off-air VHF stations to reduce the visibility of co-channel interference by eliminating the difference-frequency “beat” lines that otherwise occur. A major disadvantage of IRC and HRC channelization plans is that this mechanism for improving subjective picture quality in the presence of ingress interference is unavailable, since the visual carriers in each case are phase locked to a master headend frequency source. The potential for interference is especially severe in HRC systems, where the difference between off-air and cable visual carriers is 1.25 MHz and results in a highly visible diagonal line pattern on viewers’ television screens when the interfering carrier is suppressed less than about 55 dB below the desired signal.

If the interfering signal is a UHF television broadcast station, then its visual carrier will fall 3.25 MHz above the lower channel boundary of the overlapping Standard cable channel. (Cable channels 73–128 overlap the spectrum utilized for UHF broadcasting.) To the extent that other external radiators cause interference, the frequencies will differ. For instance, cable channel 19 is often affected by such services as paging transmitters.
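The 3.25-MHz offset can be confirmed from the two channelization plans. The sketch below assumes the U.S. UHF broadcast plan (channel 14 begins at 470 MHz, visual carrier 1.25 MHz above the lower edge) and the ANSI/EIA-542 Standard cable plan (channel 23 begins at 216 MHz, with contiguous 6-MHz channels above it); the example broadcast channels are arbitrary.

# Hypothetical cross-reference of UHF broadcast visual carriers against
# ANSI/EIA-542 Standard cable channel boundaries (both use 6-MHz channels).
def uhf_visual_carrier_mhz(broadcast_ch):
    lower_edge = 470.0 + 6.0 * (broadcast_ch - 14)    # UHF ch 14 starts at 470 MHz
    return lower_edge + 1.25                          # visual carrier offset

def overlapping_cable_channel(freq_mhz):
    cable_ch = 23 + int((freq_mhz - 216.0) // 6.0)    # Std cable ch 23 starts at 216 MHz
    return cable_ch, 216.0 + 6.0 * (cable_ch - 23)

for broadcast_ch in (14, 30, 50):
    visual = uhf_visual_carrier_mhz(broadcast_ch)
    cable_ch, edge = overlapping_cable_channel(visual)
    print(f"UHF {broadcast_ch}: visual {visual:.2f} MHz falls in cable channel "
          f"{cable_ch}, {visual - edge:.2f} MHz above its lower boundary")

In each case the broadcast visual carrier lands 3.25 MHz above the lower boundary of the overlapping cable channel, because the two channel plans are offset from each other by 2 MHz.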

Though the probability of destructive interference from external ingress is still relatively low, the interference, when it occurs, is likely to persist since it is not related to the tuning of the inadequately shielded receiver. Tests (discussed in Chapter 24) conducted on typical receivers that were immersed in an external field of 100 mV/m showed that most antenna-conducted egress levels from this cause were −20 dBmV or less.6 Thus, like local oscillator interference, the probable maximum levels at adjacent receivers exposed to this field strength are about −40 dBmV. It must be pointed out, however, that significant areas of the country are exposed to much higher field strengths.7 In some cases, well-shielded converters are connected between drop outlets and receivers to serve the dual purpose of converting all channels away from the off-air frequencies in the downstream direction and isolating the cable system from antenna-conducted receiver egress in the reverse direction.

15.5.10 Signal Levels and Stability

Cable systems typically have little problem meeting the various signal level, spectral flatness, and stability requirements imposed by the FCC’s rules, in part because it would be difficult to meet the noise and distortion requirements if signal levels varied far from their optimum values. It is unlikely, for instance, that the total variation across the delivered downstream spectrum will exceed 10 dB, while individual carrier levels will probably not vary by more than 5 dB with time and temperature.

The implications of the expected levels, and their variation, for required video receiver performance are as follows:

The receiver front end may be exposed to as much as +35 dBmV total RF power across the downstream spectrum (100 channels at +15 dBmV each), but more probably 5–10 dB less than that. Additionally, the receiver may be exposed to individual signals as high as +40 dBmV in the return band. (The power-sum arithmetic behind the +35-dBmV figure is sketched following this list.)

The receiver must tolerate signals appearing at the image frequency (assuming a standard 41–47 MHz IF band) that exceed the desired signal level by 10 dB.

The receiver must tolerate adjacent video channels whose levels exceed the desired channel by 3 dB.

The level of the desired signal could be as low as 0 dBmV or as high as +20 dBmV but will likely not vary by more than 5 dB over time (the FCC allows 8 dB).
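The +35-dBmV figure in the first item above is simply the power sum of 100 equal-level carriers. A minimal sketch of that arithmetic, assuming all channels are carried at exactly the same level (real systems only approximate this), follows.

import math

def total_power_dbmv(per_channel_dbmv, n_channels):
    """Power sum of n carriers at equal level, expressed in dBmV."""
    return per_channel_dbmv + 10 * math.log10(n_channels)

# 100 analog channels at +15 dBmV each -> +35 dBmV total
print(f"{total_power_dbmv(15.0, 100):.0f} dBmV total across the downstream spectrum")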

The requirements for nonvideo receivers are similar with regard to the total RF power presented at the input port and the stability of the desired signal; however, the desired signal will be lower by the degree of suppression of carrier levels with respect to analog video signals at the headend. Whether image frequencies are present at the receiver input will depend on the design of the receiver and the frequency of the data carrier.

15.6 Summary

In summary, typical HFC broadband distribution networks provide a broadband thermal noise floor at a level of approximately −45 dBc relative to normal video levels, as measured at the entry point to buildings and referenced to a 4-MHz bandwidth. The noise level in other bandwidths, after set top terminals, or relative to other signal levels is easily calculated.
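That bandwidth conversion is a simple 10 log10 scaling of the noise power. The sketch below, assuming the nominal −45-dBc floor quoted above, restates it in a few other measurement bandwidths.

import math

def noise_floor_dbc(ref_dbc, ref_bw_mhz, new_bw_mhz):
    """Scale a flat noise floor from one noise bandwidth to another."""
    return ref_dbc + 10 * math.log10(new_bw_mhz / ref_bw_mhz)

floor_4mhz = -45.0                      # assumed floor, dBc measured in 4 MHz
for bw_mhz in (4.0, 5.0, 6.0):
    print(f"{bw_mhz:.0f} MHz bandwidth: {noise_floor_dbc(floor_4mhz, 4.0, bw_mhz):.1f} dBc")

A signal carried below normal video level sees a correspondingly poorer carrier-to-noise ratio; a digital carrier suppressed 10 dB below video, for example, has a C/N 10 dB lower than the analog channels measured in the same bandwidth.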

Various signals may occur adjacent to or within standard 6-MHz channels as a result of network nonlinearities, normal signal loading, and/or attached subscriber receivers. These are summarized in Figure 15.8. For reference, the approximate level of suppression required to maintain a subjectively good analog NTSC picture is shown as a dotted line (see Chapter 2 for a detailed treatment of subjective picture impairments due to interfering carriers).

image

Figure 15.8 Maximum expected levels of extraneous signals in or near ANSI/EIA-542 Standard television channels as received at the input ports of terminal equipment.

Finally, downstream signals transmitted through the network will be subjected to the following:

Hum modulation that may be as high as 3% pk-pk, but is unlikely to exceed 1.5%.

Microreflections that could be as high as −13 dBc but are unlikely to exceed −20 dBc, with delays ranging from 0 to 500 ns; smaller echoes may have delays as long as 5 μs.

Sinusoidally varying group delay (as a function of frequency) with an amplitude as high as 100 ns pk-pk, but unlikely to exceed 30–50 ns. The periodicity of the delay variation will typically be in the 2–5-MHz range. Additionally, channel 2 (in a standard low-split system) may be subjected to chroma delay as high as 170 ns, but more likely 100–150 ns owing to the diplex filters. Finally, individual subscriber drops may experience additional group delay due to the effects of any installed filters.

In-band frequency response variations arising from microreflections, differential cable losses, intentional amplifier response slope, and imperfect frequency compensation. The total variation from all these causes is unlikely to exceed 1 dB pk-pk in any 6-MHz channel. As with group delay, however, the use of drop filters may add considerable response variations to some channels.

Received signal levels (for signals launched at video-equivalent levels) ranging from 0 to +20 dBmV at subscriber outlets, with variations within that total range as large as 8 dB (but more likely less than 5 dB) over time and temperature.

As will be seen in the next chapter, signals transmitted in the reverse direction (from subscriber toward the headend) are subject to channel conditions that are less well defined.

Endnotes

* In this qualitative discussion, we are not distinguishing between baseband noise (S/N) and noise added to the RF modulated signal (C/N). Both affect the quality of the final product.

1. Hans Stekle (ed.), Breitbandverteilnetze der Deutschen Bundespost. Heidelberg: R. v. Decker, 1988.

2. Code of Federal Regulations, Title 47, available from the U. S. Government Printing Office, Washington, DC, in several volumes covering the entire FCC set of regulations, Parts 0 through 80.

3. EIA-IS-23 RF Interface Specification for Television Receiving Devices and Cable Television Systems, unreleased draft document of the Consumer Electronics Association, Arlington, VA, May 12, 1998.

4. Private correspondence from Archer Taylor of the Strategis Group, who doesn’t claim to have invented it. We have modified Archer’s suggested notation to add a variable cascade factor to broaden its applicability.

5. Customer Premises Equipment Performance and Compatibility Testing, CableLabs, Louisville, CO (undated, but published in late 1993). Part 4: Receiver Performance.

6. Ibid.

7. Ibid., Part 2, Potential Impact of Direct Pickup Interference. The report concludes that about 20% of U. S. households will be exposed to external field strengths of at least 100 mV/m from at least one VHF television broadcast station, while 41% will be exposed to that magnitude of signal from at least one UHF station operating at or below 550 MHz. Television receivers, however, are usually placed within buildings that provide some attenuation of the external signal.
