Chapter 10 Coaxial RF Technology

10.1 Introduction

Cable television’s technical roots lie in the distribution of analog television signals to the antenna terminals of customers’ receivers. The least expensive way to accomplish that was to avoid the need for in-home equipment by carrying each video stream on a different, standard television channel so that subscribers could use their existing TV sets to select and view programs. Although cable systems have evolved since then into sophisticated multiservice networks using combinations of linear fiber optics and coaxial cable for transmission, the essential characteristics have remained unchanged. In particular, the last portion of the network is still a linear, broadband coaxial network that simultaneously carries many modulated RF signals, each occupying a different band within the spectrum — frequency division multiplexing (FDM) — as discussed in several previous chapters. In modern systems, many of the modulating signals may be digital rather than analog; however, the network must still be linear to avoid generation of unwanted additional signals due to mixing among the desired carriers. This is discussed in detail in Section 10.3.3.

This chapter will treat coaxial network technology in detail, including cable, amplifiers, passive components, and powering systems. Basic linear network concepts will be introduced that will also apply to the linear fiber-optic links whose unique characteristics are discussed in Chapter 12 and to the microwave links covered in Chapter 14. Chapter 11 will discuss coaxial design practices and cascaded performance.

10.2 Coaxial Cable

Coaxial cable is not the only option for transmitting broadband RF signals. Indeed, many early systems were built using open, parallel-wire balanced transmission lines, and a few even used an ingenious single-wire cable known as G-line, which had only a center conductor and dielectric. Coaxial cable, however, offers the advantages of a high degree of shielding, coupled with relatively low cost and moderate signal loss. For that reason, most modern RF systems use coaxial cable as at least one of the physical media through which signals are transported.

10.2.1 Definition

Coaxial cable is constructed with a center conductor surrounded by a dielectric of circular cross section and by an outer conductor (shield), also of circular cross section. Signals within the normal operating bandwidth of coaxial cable have a field configuration known as transverse electric and magnetic (TEM). In the TEM mode, the electric field lines go radially between the center and outer conductor and are of uniform strength around a cross section of the cable, whereas the magnetic field lines are circular and perpendicular to the length of the cable (see Figure 10.1). In a cable with a continuous, perfectly conducting shield, no electric or magnetic fields extend beyond the outer conductor so that no signals can leak out of or into the cable.

image

Figure 10.1 Coaxial cable basics.

10.2.2 Characteristic Impedance

Coaxial cables have a property known as surge impedance or, more commonly, characteristic impedance, which is related to the capacitance and inductance, per unit length, of the cable. The characteristic impedance is most easily thought of in terms of the effect on signals being transported: if a cable is connected to an ideal pure resistor whose value is equal to its characteristic impedance, a signal transmitted toward the resistor will be entirely absorbed by the resistor and converted to heat. In other words, no energy will be reflected up the cable.

The characteristic impedance, Z0 (in ohms), is a function of the relative diameters of the center conductor and the inner surface of the outer conductor and of the dielectric constant of the dielectric:


Z0 = (138/√ε) log10(D/d)    (10.1)


where

D = the inner diameter of the shield

d = the outer diameter of the center conductor

ε = the dielectric constant
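As a numeric illustration, Equation (10.1) — Z0 = (138/√ε)·log10(D/d) — can be evaluated directly. The dimensions below are hypothetical but representative of a foam-dielectric distribution cable; this is a sketch, not a reference to any particular product:

```python
import math

def characteristic_impedance(D, d, epsilon):
    """Coaxial characteristic impedance, Equation (10.1):
    Z0 = (138 / sqrt(epsilon)) * log10(D / d), where D is the shield inner
    diameter, d is the center conductor outer diameter (same units), and
    epsilon is the relative dielectric constant."""
    return (138.0 / math.sqrt(epsilon)) * math.log10(D / d)

# Hypothetical foam-dielectric cable (epsilon ~ 1.3) with a roughly
# 4.2:1 diameter ratio -- close to the cable-television standard 75 ohms.
print(round(characteristic_impedance(0.452, 0.108, 1.3), 1))
```

Note that only the D/d ratio and the dielectric constant matter; scaling both diameters together changes loss, cost, and weight but not impedance.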

10.2.3 Attenuation as a Function of Frequency

Signal loss (attenuation) through coaxial cable can occur through any of four principal means:

Radiation out of the cable due to imperfect shielding

Resistive losses in the cable conductors

Signal absorption in the dielectric of the cable

Signal reflection due to mismatches between the cable and terminations or along the cable due to nonuniform impedance

Even when cables have perfect shields, exact impedance matches, and uniform construction, imperfect dielectrics and resistive conductors will cause loss. The general equation for this residual cable loss is


α = 4.35 (R/Z0) + 2.78 Fp √ε f    (10.2)


where

α = the attenuation of the cable in dB/100 feet*

R = the effective ohmic resistance of the sum of the center and outer conductors per 100 feet of cable length at f

Fp = the power factor of the dielectric used

f = the frequency in MHz

The currents in the conductors are proportional to the strength of the magnetic fields at the conductor surface. If the conductors had no resistance, the RF currents would travel only on the surface. In real conductors, however, the current extends into the conductor, decreasing exponentially with depth. This property is known as skin effect, and the distance from the surface to where the current has decreased to 1/e (36.8%) of the surface amount is known as the skin depth at that frequency. It is related to frequency by the formula


δ = 2.61 √(ρ/ρC) / √f    (10.3)


where

δ = the skin depth in mils (thousandths of an inch)

ρ/ρC = the resistivity of the conductor relative to copper

f = the frequency in MHz1

The formula is valid only for nonferrous materials such as aluminum or copper.

Over a typical downstream frequency range of 54–750 MHz, the skin depth in copper will vary from 0.00035 to 0.00009 inch and will increase to 0.0012 inch at 5 MHz (the lower end of the upstream frequency range).
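The figures quoted above can be reproduced from Equation (10.3); the constant 2.61 (for δ in mils and f in MHz) is consistent with the copper skin depths in the text. A brief sketch:

```python
import math

def skin_depth_mils(f_mhz, rho_rel=1.0):
    """Skin depth per Equation (10.3): delta = 2.61 * sqrt(rho_rel / f) mils,
    where rho_rel is the conductor resistivity relative to copper and
    f is the frequency in MHz."""
    return 2.61 * math.sqrt(rho_rel / f_mhz)

# Copper skin depth, converted from mils to inches, at the band edges
# discussed in the text (5, 54, and 750 MHz).
for f in (5, 54, 750):
    print(f, round(skin_depth_mils(f) / 1000, 5))
```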

The effective resistance of the conductor is the same as if it were a tube of material whose thickness is equal to the skin depth with the current distributed equally throughout its volume. The resistance of this tube will be


R = 0.1 √(f ρ/ρC) / d    (10.4)


where

R = the resistance per 100 feet of the conductor

ρ/ρC = the resistivity of the conductor relative to copper

f = the frequency in MHz

d = the conductor diameter in inches

Taking into account the resistance of both center and outer conductors, Equation (10.2) can be rewritten as


α = (0.435 √f / Z0) (√ρdc / d + √ρDC / D) + 2.78 Fp √ε f    (10.5)


where

ρdc = the resistivity of the center conductor material relative to copper

ρDC = the resistivity of the shield material relative to copper

This has one term that increases as the square root of frequency and one that increases linearly with frequency. For most cable constructions (discussed later), the conductor loss will dominate the dielectric loss so that the overall cable attenuation will increase approximately as the square root of frequency. Figure 10.2 shows the specified variation in loss for several cables in comparison to an ideal square-root relationship (with the curves matched at 100 MHz). Note that the agreement is excellent for cables ranging from drop sizes to the largest trunk cables, indicating the relatively small contribution from dielectric losses.

image

Figure 10.2 Cable loss: specified versus ideal.
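Because conductor loss dominates, a cable’s published attenuation at one frequency can be scaled to another frequency using the square-root rule just described. A minimal sketch (the 1.0 dB/100 ft reference loss is a hypothetical value):

```python
import math

def scale_attenuation(alpha_ref_db, f_ref_mhz, f_mhz):
    """Estimate coaxial cable attenuation at a new frequency using the
    square-root-of-frequency rule that applies when conductor loss
    dominates dielectric loss."""
    return alpha_ref_db * math.sqrt(f_mhz / f_ref_mhz)

# A hypothetical cable losing 1.0 dB/100 ft at 100 MHz is predicted to
# lose about 2.74 dB/100 ft at 750 MHz.
print(round(scale_attenuation(1.0, 100, 750), 2))
```

The small dielectric contribution, which grows linearly with frequency, makes real cables slightly lossier at the top of the band than this rule predicts.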

A common cable construction, known as P3, utilizes a solid aluminum outer conductor, a foamed dielectric (ε ≈ 1.3), and a copper-coated aluminum center conductor. It is available in a number of nominal sizes (referenced to the outer diameter of the aluminum shield in inches). For this family of cables, the attenuation can be approximated using


image (10.6)


where

α = the attenuation in dB/100 feet

Dnom = the nominal shield outer diameter in inches

f = the frequency in MHz

Other cable types using thinner shields and/or air dielectrics will have slightly lower losses for the same outer diameter, whereas solid dielectric cables will have higher losses.

10.2.4 Attenuation as a Function of Temperature

As Equation (10.2) shows, the attenuation of a coaxial cable, in decibels, is a linear function of its conductor resistance, provided that conductor losses are much larger than dielectric losses. As we will discuss in Section 10.2.10, the most common cable construction uses an aluminum shield and copper-coated aluminum center conductor. Copper has a resistivity at 68°F of about 0.68 × 10⁻⁶ ohm-inches, whereas aluminum has a resistivity of about 1.03 × 10⁻⁶ ohm-inches, and each has a temperature coefficient of resistivity of 0.22%/°F. Since conductor resistance varies as the square root of resistivity, the resistance of both conductors, and therefore the attenuation, would be expected to change about 0.11%/°F. In fact, the attenuation of commercial coaxial cables with hybrid aluminum-copper construction is typically specified to increase by 0.1%/°F.
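The 0.1%/°F coefficient translates directly into level changes across a plant’s temperature range. A small sketch, assuming a hypothetical span measuring 20 dB at the 68°F reference temperature:

```python
def attenuation_at_temp(alpha_68f_db, temp_f, coeff_per_f=0.001):
    """Scale a cable attenuation (in dB) from its 68 F reference value
    using the ~0.1 %/degree F coefficient typical of hybrid
    aluminum-copper hard-line cable."""
    return alpha_68f_db * (1 + coeff_per_f * (temp_f - 68))

# A span measuring 20 dB at 68 F gains about 0.84 dB on a 110 F day
# and loses about 1.96 dB at -30 F.
print(round(attenuation_at_temp(20, 110), 2), round(attenuation_at_temp(20, -30), 2))
```

Seasonal swings of this size are one reason automatic gain and slope control is built into distribution amplifiers.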

10.2.5 Attenuation as a Function of Characteristic Impedance

For a given outer diameter (the primary determinant of the amount of materials used and, therefore, cost and weight), different impedance cables will be optimized for different characteristics. If the dielectric loss is assumed to be of only secondary importance and the usual construction is assumed (copper-plated center conductor with aluminum shield), then Equations (10.1) and (10.5) can be combined to find cable attenuation as a function of characteristic impedance:


α = (0.435 √f / (Z0 D)) (10^(Z0√ε/138) + 1.23)    (10.7)


As Figure 10.3 shows, the loss-minimizing impedance is about 80 ohms for air dielectric (dielectric constant = 1.0) and decreases as the dielectric constant increases. Cables with air dielectric also have lower overall loss since the center conductors are larger, and therefore have less resistance, for the same impedance. Although the exact choice among impedances near the minimum of the curve was not critical, 75 ohms may have been chosen because it is also close to the feedpoint impedance of a half-wave dipole antenna. In order to minimize the need for repeaters, wide-area distribution systems universally use 75-ohm cables. They are also used for local interconnection of baseband video signals within headends and broadcast facilities in order to minimize the differential loss across the video baseband spectrum.

image

Figure 10.3 Coaxial cable loss versus characteristic impedance for various values of dielectric constant.

Although the losses are higher, 50-ohm cables are generally used by the broadcast and radio communications industries because the power-handling capability is higher, the cables are less fragile, and they are a close match to the feedpoint impedance of a vertical quarter-wave antenna.

10.2.6 Wavelength

Along the length of the cable, at any one instant in time, the electric fields vary sinusoidally in strength. At some point, the center conductor will be at its most positive with respect to the shield. Moving along the cable, the voltage will decrease to zero, then become negative, then back to zero, and finally be at its positive maximum again. The distance between the maximum points of one polarity is known as the wavelength at the frequency being transmitted and is equal to the distance the signal traverses during one period of its frequency.

In free space, signals travel at the rate of 3 × 10⁸ meters per second or, in more convenient terms, about 984 feet per microsecond. Signals in cable, however, travel more slowly due to the higher dielectric constant of the dielectric material. The ratio between velocity in cable and the free-space velocity is the relative propagation velocity VP and varies from about 0.85 to 0.95 for common cable types. VP is related to the dielectric constant by


VP = 1/√ε    (10.8)


The wavelength of a signal in a coaxial cable is thus


λ = 984 VP / f = 984 / (f √ε)    (λ in feet, f in MHz)    (10.9)
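Combining the two relationships above: a signal travels 984·VP feet per microsecond, so its in-cable wavelength in feet is 984·VP/f with f in MHz. A quick sketch for a foam-dielectric cable (ε about 1.3):

```python
import math

def wavelength_feet(f_mhz, epsilon):
    """In-cable wavelength: a signal covers 984 ft/us times
    Vp = 1/sqrt(epsilon), so lambda = 984 / (f * sqrt(epsilon)) feet
    with f in MHz."""
    return 984.0 / (f_mhz * math.sqrt(epsilon))

# For foam dielectric (epsilon ~ 1.3), Vp is about 0.88 and a 550-MHz
# carrier has an in-cable wavelength of roughly 1.6 feet.
print(round(1 / math.sqrt(1.3), 2), round(wavelength_feet(550, 1.3), 2))
```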


10.2.7 Theoretical Size Limitation

If the size of a coaxial cable is sufficiently large, it will support modes other than TEM. In particular, if the mean diameter exceeds one wavelength in the cable, it will support a mode in which the electric and magnetic fields are not uniform around the cable. This cutoff wavelength can be expressed as2


image (10.10)


where

D = the inner diameter of the shield

d = the outer diameter of the center conductor

Non-TEM mode generation is a problem since higher-order modes may not propagate at the same velocity and because coaxial components may not react the same to the higher-order mode. Standard practice is to limit cable sizes to those that do not support modes higher than TEM at the maximum operating frequency. The maximum size cable that will support a given frequency in TEM mode only can be found by substituting Equation (10.9) for λ in Equation (10.10), then solving Equation (10.1) for d, substituting in Equation (10.10), and solving the resultant expression for D:


image (10.11)


Alternatively, we can solve for the maximum frequency that can be transmitted in only TEM mode through a given cable:


image (10.12)


For example, 75-ohm cables with foamed dielectrics (ε about 1.3) will support 1-GHz signals at up to 0.222-foot shield inner diameter, or about 2.6 inches. Since this is more than twice the largest cable used by network operators today, higher-order mode suppression is not an issue with current operating bandwidths.

10.2.8 Precision of Match: Structural Return Loss

One measure of cable quality is how closely it adheres to its nominal impedance. This has two aspects: the precision with which its average characteristic impedance matches the ideal value, and the variation of impedance along the length of the cable. As an alternative to attempting to measure the physical structure of the cable along its length, the most frequent measure of quality used is the percentage of incident power from a source that is reflected at the input of a cable being tested, when the cable’s output is connected to a precision termination. If the source is calibrated using a precision, resistive terminator, then the result of the measurement is return loss. For cables, however, the variation of the reflection is often of more interest than the absolute match. In order to judge the variation, a variable bridge is used to match, as precisely as possible, the average surge impedance of the cable being tested. Then the reflection is measured over the full frequency range of desired operation, and the result is known as structural return loss (SRL). Its numeric value is the smallest measured ratio of incident to reflected power as the frequency is varied, expressed in decibels. Recommended worst-case SRL is 26–30 dB for trunk and feeder cable, whereas 20 dB is considered adequate for drop cable. The absolute impedance should be within ±2–3 ohms for trunk and distribution cables and ±5 ohms for drop cables.3
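The recommended return loss figures are easier to interpret when converted to reflected power fractions; the conversion is simply 10^(−RL/10):

```python
def reflected_power_fraction(return_loss_db):
    """Fraction of incident power reflected for a given return loss in dB."""
    return 10 ** (-return_loss_db / 10)

# The recommended worst-case figures translate to small fractions of the
# incident power: 30 dB and 26 dB for trunk/feeder, 20 dB for drop cable.
for rl in (30, 26, 20):
    print(rl, round(100 * reflected_power_fraction(rl), 2), "%")
```

So even the looser 20-dB drop-cable figure means only 1% of incident power is reflected; the tighter trunk figures correspond to 0.25% or less.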

SRL is a compromise measurement, as the reflected power is a measure, at each frequency, of the composite effect of discontinuities at all distances. Because of the attenuation of the cable, signals traveling toward more distant discontinuities will be reduced in amplitude, and the reflections from those discontinuities will be further attenuated so that imperfections farther from the end of the cable being measured will have less effect than those closer. At higher frequencies, this effect will be increased. On the other hand, most cable imperfections cause a greater reflection at higher frequencies.

Where multiple reflections are spaced along a transmission line, the effects can add, cancel, or anything in between, depending on the length of line between them and the frequency. (Chapter 15 includes a more complete discussion of multiple reflections in coaxial transmission systems.) Absent line losses, small, identical discontinuities will approximately cancel when spaced odd multiples of a quarter wavelength, whereas they will reinforce when spaced multiples of one-half wavelength. This is especially important when considering multiple, equally spaced discontinuities arising, for instance, from an imperfection in a roller over which a cable passes during the manufacturing process. Each revolution of the roller produces an identical fault, spaced along the cable by a distance equal to the circumference of the roller. Testing such a cable over the full frequency range will show maximums and minimums of return loss as the effects of the discontinuities interact. This is why it is important to test cable over its full expected frequency range with sufficient frequency resolution to detect sharply resonant discontinuities.
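For the roller example above, the first frequency at which the identical faults reinforce is where their spacing equals a half wavelength in the cable. A sketch, assuming a hypothetical 1.5-foot roller circumference and VP = 0.88:

```python
def resonant_frequency_mhz(spacing_ft, vp=0.88):
    """First frequency at which identical faults spaced 'spacing_ft'
    apart reinforce: reflections add when the spacing is one half
    wavelength, i.e. f = 984 * vp / (2 * spacing), f in MHz,
    spacing in feet."""
    return 984.0 * vp / (2.0 * spacing_ft)

# A hypothetical manufacturing roller with a 1.5-ft circumference and a
# flaw stamps a fault every 1.5 ft of cable; in foam cable (vp ~ 0.88)
# those faults first reinforce near 289 MHz, and again at each multiple.
print(round(resonant_frequency_mhz(1.5)))
```

This is why a coarse frequency sweep can miss such a cable entirely: the return loss may be excellent everywhere except in narrow bands around these resonances.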

Though a structural return loss measurement accurately assesses the composite reflection from a cable, it is a poor method of determining the location of major defects. If a cable includes only a single major defect, its location can be estimated from the change in test frequency between successive maxima or minima of return loss; however, in most cases, cable loss plus a multiplicity of reflections will prevent accurate determination.

10.2.9 Precision of Match: Time Domain Reflectometry (TDR)

An alternative measurement method, also a compromise, is used to more accurately determine discontinuity location but averages the magnitude of the discontinuity as a function of frequency. In this test, a pulse is transmitted down the cable, and the distance is determined from the time required for the energy reflected from a discontinuity to return to the transmitter. The magnitude of the discontinuity is related to the amplitude of the returned signal. Secondary information about the frequency “signature” of the fault can be determined by comparing the shape of the returning pulse with the original.

Although TDR testing of short, low-loss cables is an accurate process, there are many compromises in testing of the relatively long cables used in typical cable system construction. With very short cables, the classic pulse waveform is a very fast rise time step function. A Fourier transform of this voltage waveform reveals that its voltage spectrum is inversely proportional to frequency and, therefore, that its power spectrum falls inversely as the square of frequency (see Figure 10.4(a)). When such a pulse is used to test long cables, it suffers from two problems: (1) most of the energy is concentrated at low frequencies, whereas most discontinuities primarily affect higher frequencies, and (2) the relatively higher cable loss at high frequencies further masks the reflections.

image

Figure 10.4 Frequency spectra of step and pulse TDR waveforms. (a) Step function. (b) Pulse.

For that reason, long cables are most frequently tested using a narrow “impulse” (see Figure 10.4(b)). This waveform has a spectrum that extends more uniformly to a frequency equal to approximately 1/PW, where PW is the pulse width (the exact shape of the curve in the frequency domain depends on the shape of the voltage pulse). The trade-off is that the distance resolution is limited to approximately c·VP·PW (the free-space velocity times the relative propagation velocity times the pulse width), or about 1 foot per nanosecond of pulse width for typical foam dielectric cables. By increasing PW, it is possible to increase the total energy (and therefore sensitivity) at the expense of frequency and distance resolution. As with step function pulses, impulse function testing suffers in accuracy because of the variation of cable loss with frequency. Because of the compromises, TDR testing of CATV-type cables is primarily limited to field operations such as measuring the cable remaining on a reel or locating an underground cut so that the cable can be repaired.
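The resolution trade-off can be sketched numerically; the 10-ns pulse width below is a hypothetical test setting, not a recommended value:

```python
def tdr_resolution_feet(pulse_width_ns, vp=0.87):
    """Approximate TDR distance resolution: c * Vp * PW, using
    c ~ 0.984 ft/ns -- roughly 1 foot per nanosecond of pulse width
    for typical foam-dielectric cables."""
    return 0.984 * vp * pulse_width_ns

# A hypothetical 10-ns impulse resolves discontinuities to within
# roughly 8.6 feet while limiting the usable spectrum to about
# 1/PW = 100 MHz.
print(round(tdr_resolution_feet(10), 1))
```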

An instrument that is able to simultaneously resolve both frequency and time (distance) domain information is a network analyzer with gated TDR and Fourier transform capability. This instrument measures reflection coefficient, but “gates” the measurement so as to measure only reflections from a narrow range of distances at a time. A Fourier transform is then used to convert the waveform of the reflected signal into its frequency domain, resulting in a reasonably accurate measurement of the reflection coefficient of an individual discontinuity, even in the presence of other discontinuities at other distances. It is still, of course, affected by the cable loss and loss variation with frequency. As of the writing of this book, this technology has not been extensively used in field work in cable systems.

10.2.10 Practical Factors in Cable Selection

From an electrical standpoint, the ideal cable would have conductors with the highest possible conductivity, the largest diameter consistent with higher-order mode suppression, and a vacuum separating the conductors. Practical cables, however, are a compromise among several characteristics:

Manufacturing cost (which varies between a linear function of the diameter and the square of the diameter)

Weight (which affects the cost of the supporting structure for overhead lines)

Diameter (which affects required conduit size for underground lines, visibility of overhead lines, and cost of connectors)

Loss (which limits the distance between repeater amplifiers)

Mechanical expansion coefficient (which affects the amount of sag that must be used with overhead lines)

Connector cost (related to cable size)

Shielding (although solid outer conductors provide nearly perfect isolation, braided shields may provide adequate shielding for some applications)

Environmental protection (mechanical protection layers, compounds used to retard water ingress, integral support members, and so on)

Handling characteristics (bending radius, reforming ability, crush resistance, pulling force resistance, and so on)

Electrical resistance and current-carrying capacity at low frequencies

In oversimplified terms, the selection of optimum cable size for a distribution network design consists of comparing the cost of cable (and associated components) per decibel of loss with the cost per decibel of amplification to overcome that loss and choosing the least expensive combination. Of course, practical system design includes many additional factors that may affect the ultimate decision.

Trunk and Feeder Cable

Design optimization has resulted in several typical cable designs that are widely used. The most common distribution cable design in most of the world utilizes a solid aluminum shield, foamed polyethylene dielectric, and a copper-clad aluminum center conductor. The use of aluminum offers the best combination of cost, handling characteristics, weight, and strength, whereas the foamed dielectric offers a good compromise among loss, moisture ingress protection, and mechanical strength for the center conductor. Because of its rigidity, compared with cables that utilize braided wire shields, trunk and feeder cables using this basic structure are sometimes known as hard-line cable. The use of an aluminum core for the center conductor provides a match for the expansion coefficient of the shield, whereas the copper cladding provides a lower resistance to RF. Since the conducting area of the center conductor is much smaller than that of the shield, the center conductor dominates the total resistive losses, so copper cladding the center conductor is cost effective though coating the inner surface of the shield would not be.

The need to provide low resistance to the multiplexed power, as well as low RF loss, has led to the use of cables with outer shield diameters ranging from 0.412 to 1.125 inches, with larger cables being used where distance of coverage is paramount (trunk or express feeders), and smaller cables being used for shorter distances and cases where a substantial amount of the loss is due to tap losses rather than cable losses (tapped feeder cables). This family of cables is generically labeled by the outer shield diameter in thousandths of an inch; for example, “500 cable” refers to a hard-line cable with an outer shield diameter of 0.500 inch. Common diameters include 0.412, 0.500, 0.625, 0.750, 0.825, and 1.000 inch though others are also used.

Versions of these same cables are available with solid copper center conductors, for use where 60-Hz power losses must be reduced. Protection options include combinations of none (bare aluminum), polyethylene jackets of various designs and degrees of sun and fire resistance, flooding compounds (a substance inserted between the shield and jacket to retard the ingress of water), and additional steel shields for crush resistance. Since the added environmental protection layers can considerably increase the cost and weight, builders generally utilize several designs in a single system, optimized by application.

A variation of this design utilizes air dielectric with periodic center conductor supports. This design has slightly smaller diameter for the same loss per foot because of the lower dielectric loss and lower ratio of shield to center conductor diameter. Special design features are required to prevent water, which may enter the cable owing to a fault, from traveling down the center conductor/shield space. Other features are required to maintain the required concentricity and crush or bending resistance.

Though hybrid aluminum-copper cables are the most common, some network operators prefer solid copper for center conductors and, sometimes, shields.

Drop Cable

In a typical cable system, well over half the total cable footage is taken up by drop cables. For that reason, cost is an important parameter for that application, as well as weight and appearance. In North America, four common sizes of drop cable have evolved, as shown in Table 10.1. The approximate loss for each size, as a function of frequency, is shown in Figure 10.5.4

Table 10.1 Common drop cable designations and sizes

image
image

Figure 10.5 Drop cable loss versus frequency.

Drop cables are constructed using a copper-plated steel center conductor, foamed dielectric, a shield made up of at least one layer of an aluminum foil and one layer of an aluminum braid, and a plastic jacket. Additional layers of foil and braid are available as options for greater shielding effectiveness. Although size 59 cable was the historical choice of the cable television industry, size 6 cable is most frequently used in modern systems, due to the use of increasingly higher frequencies. The larger sizes are generally reserved for occasional longer drop cables.

The jacket material for general use is usually PVC or polyethylene. Cables used for overhead drops often also include a separate steel strength member (the messenger) enclosed in the same plastic sheath structure. See Figure 10.6 for a composite drop cable structure that includes a messenger.

image

Figure 10.6 Typical drop cable construction.

An array of drop cable variations are available for special purposes, including those with flooding compounds to retard water ingress in case the plastic coating is damaged, multiple cables within a common jacket, those with varying degrees of fire resistance for use inside buildings, silver-plated center conductors, copper braids, solid dielectric cables, and so on.

As with distribution cables, North American practice has been adopted by some countries, whereas others (for instance, Germany) have chosen to use semirigid drop cables with solid copper shields and center conductors for their underground drops.

10.2.11 Shielding Effectiveness in Drop Cables

Although distribution cables constructed with solid aluminum tube shields offer nearly perfect signal isolation from external fields, drop cables depend on the overlap in the aluminum wrap and the interaction between the aluminum braid and wrap for shielding integrity. It is a compromise that is justified on the basis of cost and cable-handling characteristics. The cable industry has long struggled to find an unambiguous testing methodology for evaluating shielding integrity. Although these procedures are beyond the scope of this book, you may wish to refer to industry standard test procedures developed by the Society of Cable Telecommunications Engineers (SCTE) for this purpose.5

10.3 Amplifiers

In order to overcome the loss of the distribution cables and to provide power to drive end terminals, signal boosting amplifiers are utilized throughout the coaxial distribution system. These must be designed to add as little signal degradation as possible, particularly noise and distortion, consistent with providing the required gain and total power output.

10.3.1 Broadband Random Noise

Because of the random motion of electrons in conductors, all electronic systems generate an irreducible amount of noise power. This noise is a function of the absolute temperature and the bandwidth in which the noise is measured. As discussed in Chapter 7, the minimum thermal noise power (the noise floor) can be calculated using


np = kτB    (10.13)


where

np = the noise power in watts*

k = Boltzmann’s constant (1.38 × 10⁻²³ joules/°K)

τ = the absolute temperature in °K

B = the bandwidth of the measurement in Hz

Signal power in cable television systems is commonly expressed in decibels relative to 1 mV across 75 ohms (dBmV), so the thermal noise power at 62°F is approximately


np (dBmV) = −125.2 + 10 log10(B)    (10.14)


Finally, although the allocated channel bandwidth for analog NTSC signals is 6 MHz, the effective noise bandwidth of receivers is less. For a variety of reasons, a 4-MHz bandwidth is used as the standard for noise measurements of analog NTSC signals and has been codified in the FCC’s rules.6 At that bandwidth, Equation (10.14) becomes


np (dBmV) = −125.2 + 10 log10(4 × 10⁶) ≈ −59.2 dBmV    (10.15)


This value will vary by a few tenths of a decibel across the temperature range normally of interest to cable operators.
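The −59.2-dBmV figure follows directly from kτB converted to dBmV across 75 ohms (0 dBmV = 1 mV across 75 ohms ≈ 1.33 × 10⁻⁸ watts). A sketch of the calculation:

```python
import math

def noise_floor_dbmv(bandwidth_hz, temp_k=290.0):
    """Thermal noise floor k*T*B expressed in dBmV across 75 ohms.
    0 dBmV corresponds to 1 mV across 75 ohms = 1.333e-8 W."""
    k = 1.38e-23  # Boltzmann's constant, J/K
    noise_w = k * temp_k * bandwidth_hz
    return 10 * math.log10(noise_w / 1.333e-8)

# The familiar cable-industry figure: about -59.2 dBmV in a 4-MHz
# noise bandwidth at 62 F (290 K).
print(round(noise_floor_dbmv(4e6), 1))
```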

10.3.2 Amplifier Noise

Amplifiers generate added noise at various points in their circuitry. For convenience, however, the added noise is treated as if it were coming from an independent generator and summed into the input port (though it cannot be measured there, of course). The ratio of total effective input noise power to the thermal noise floor, expressed in decibels, is known as the noise figure of the amplifier. Thus, an amplifier with a noise figure of FA dB will have a total equivalent input noise power of NA, where


NA = −125.2 + 10 log10(B) + FA (dBmV)    (10.16)


And the output noise power will be the input, increased by the gain, G, of the amplifier in decibels:


Nout = NA + G = −125.2 + 10 log10(B) + FA + G (dBmV)    (10.17)


Similarly, a desired (noise-free) input signal, Ci, will be amplified by the same amount so that the desired signal output level Cout = Ci (dBmV) + G (dB), and the carrier-to-noise ratio will be*


C/N = Cout − Nout = Ci − NA = Ci + 125.2 − 10 log10(B) − FA    (10.18)


When evaluated relative to the 4-MHz bandwidth commonly used for NTSC C/N measurements, this becomes


C/N = Ci + 59.2 − FA    (10.19)


Note that, in an FDM system, C/N is evaluated on each signal independently, with the level of individual carriers compared with the noise in the bandwidth that affects that signal. As will be seen, the C/N may be considerably different for different carriers in the spectrum. (The design of systems with cascaded amplifiers will be treated in Chapter 11.)
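The 4-MHz relationship makes single-amplifier C/N calculations trivial; the input level and noise figure below are hypothetical values chosen for illustration:

```python
def carrier_to_noise_db(input_level_dbmv, noise_figure_db):
    """C/N at an amplifier's output for an analog NTSC channel, per the
    4-MHz relationship C/N = Ci + 59.2 - FA. The amplifier gain cancels
    because it raises signal and noise equally."""
    return input_level_dbmv + 59.2 - noise_figure_db

# A hypothetical +10 dBmV input to an amplifier with a 7-dB noise
# figure yields a 62.2-dB carrier-to-noise ratio.
print(round(carrier_to_noise_db(10, 7), 1))
```

Note how directly the result tracks input level: every decibel of input level dropped (or of noise figure added) costs exactly one decibel of C/N.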

10.3.3 Distortion

Amplifiers are not perfectly linear. That is, if the instantaneous output voltage is plotted against the instantaneous input voltage, the graph will not be a perfectly straight line. There are two causes of this nonlinearity: the inevitable minor small-signal nonlinearities of the semiconductor devices used and the compression that takes place as the amplifier nears its saturation voltage. Though the small variations define a distortion “floor,” most of the distortion is due to saturation effects. They can take any of three related forms: even-order distortion, odd-order distortion, and cross modulation.

Even-Order Distortion (Composite Second Order—CSO)

A perfectly linear amplifier has a constant transfer gain; that is, the output voltage is a constant multiple of the input voltage. If the transfer function is not linear, then, in general, it can be expressed as a power series; that is,


eo = A ei + B ei² + C ei³ + D ei⁴ + …    (10.20)


where

eo = the output voltage

ei = the input voltage

A, B, C, D,… = the gains at the various powers of the input voltage7

Those terms with even-numbered powers are called even-order distortion, and those with odd-numbered powers are called odd-order distortion. They act in very different ways in cable systems utilizing the most common channelization plans.

If a single sine wave signal [ei = Ei sin (ωt)] is transported through an amplifier with second- and fourth-order distortion, the output voltage will be


eo = AEi sin(ωt) + BEi² sin²(ωt) + DEi⁴ sin⁴(ωt)  (10.21)


where

ω = the angular frequency

A = the linear component of gain

B = the amplitude of the squared nonlinear component

D = the amplitude of the fourth power component

Ei = the peak input voltage

Using standard trigonometric identities, this is equivalent to


eo = BEi²/2 + 3DEi⁴/8 + AEi sin(ωt) − (BEi²/2 + DEi⁴/2) cos(2ωt) + (DEi⁴/8) cos(4ωt)  (10.22)


Analyzing this expression, we find that

The first two terms of this expression represent a change in the average dc voltage that may correspond to a change in the power dissipation of the active device, but is not transmitted into the ac-coupled transmission system.

The third term represents the desired output signal. Note that this amplitude does not change as the result of the even-order distortion.

The fourth term represents an additional signal at twice the frequency of the input signal and in phase with it at ωt = π/2 radians. In real amplifiers, B is substantially greater than D. Thus, for small values of Ei, the amplitude of this term will increase as the square of the input voltage. At some input level, the fourth-power term will begin to dominate, and the second harmonic level will increase more quickly. Since the level of the fundamental increases linearly with input signal level, the ratio of the fundamental to the second harmonic, at low levels, is a linear function of input level. Expressed in logarithmic terms, the “second-order ratio” degrades by 1 dB for every decibel increase in operating level so long as the amplifier is “well behaved” (meaning the fourth-power terms are insignificant relative to the second-power terms).

The final term is the fourth harmonic of the fundamental. Like the fourth-power term of the second harmonic, this product is generally small at normal operating levels and at all levels is at least 12 dB (one-fourth the voltage) below the second harmonic term. Note that this term is also in phase with the fundamental at ωt = π/2.

If we plot the fundamental and first two even harmonics on a time scale (Figure 10.7), we see an interesting relationship: the distortion-caused products are antisymmetrical with respect to the fundamental; that is, whatever voltage they have at the positive peak of the fundamental, they have exactly the opposite during the negative peak. The usual case of the combined composite signal is shown in the bold line in the figure, which shows one peak of the composite waveform flattened relative to the fundamental, whereas the other peak is sharpened.

image

Figure 10.7 Even-order distortion.
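The 1-dB-per-decibel behavior described above is easy to demonstrate numerically. The sketch below passes a sine wave through a hypothetical y = Ax + Bx² nonlinearity (coefficients chosen arbitrarily for illustration, not representing any real amplifier) and measures the fundamental and second harmonic with an FFT.

```python
import numpy as np

A, B = 10.0, 0.5                      # illustrative gain and second-order coefficients
fs, f0, n = 1000.0, 10.0, 1000        # exactly 10 cycles in the record (no leakage)
t = np.arange(n) / fs

def harmonic_dbs(ei_peak):
    x = ei_peak * np.sin(2 * np.pi * f0 * t)
    y = A * x + B * x**2              # second-order nonlinearity only
    spec = np.abs(np.fft.rfft(y)) / n
    return 20 * np.log10(spec[10]), 20 * np.log10(spec[20])  # bins of f0 and 2*f0

f1, h1 = harmonic_dbs(0.10)
f2, h2 = harmonic_dbs(0.10 * 10**(1 / 20))  # drive raised exactly 1 dB
print(round(f2 - f1, 2))              # fundamental rises 1 dB
print(round(h2 - h1, 2))              # second harmonic rises 2 dB, so the ratio degrades 1 dB
```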

As discussed in Chapter 9, the North American Standard channel plan (ANSI/EIA-542) places most analog television visual carriers approximately 6 MHz apart and offset approximately 1.25 MHz upwards from harmonics of 6 MHz.* Each such carrier can be expressed as


en = Ei sin[2π(6n + 1.25)t]  (10.23)


where the angle is expressed in radians and

n = the closest harmonic of 6 MHz

t = the time in microseconds

When two such signals (on different channels) are transported through an amplifier with second-order distortion (for simplicity, and because the effects are often small, the fourth-order term has been dropped), the output is


eo = A{Ei sin[2π(6n + 1.25)t] + Ei sin[2π(6m + 1.25)t]} + B{Ei sin[2π(6n + 1.25)t] + Ei sin[2π(6m + 1.25)t]}²  (10.24)


where the first term is the linear amplification of the input signals and

n and m = the multiples of 6 MHz closest to the two visual carriers

Ei = the peak voltage of either of the input signals (they are assumed to have equal amplitudes)

The last term has a dc component plus the following intermodulation products:


−BEi² cos{2π[6(m + n) + 2.5]t} + BEi² cos{2π[6(m − n)]t} − (BEi²/2) cos{2π[6(2n) + 2.5]t} − (BEi²/2) cos{2π[6(2m) + 2.5]t}


The first product falls 1.25 MHz above the visual carrier nearest the (m + n)th harmonic of 6 MHz, whereas the second product falls 1.25 MHz below the visual carrier nearest the (m − n)th harmonic of 6 MHz.

The third and fourth products will be half the voltage (−6 dB in power) and will each lie 1.25 MHz above another visual carrier (provided, in all cases, that the products fall within the occupied bandwidth of the distribution system). Note that these last products are identical in amplitude to those analyzed with a single carried signal and are, in fact, just the second harmonics of the two original signals.

The amplitude of each of these four distortion products, as with the single channel analysis, depends on the square of the input voltage so that the ratio of fundamental to second-order products varies linearly with input level. A similar analysis for fourth-order distortion, the next even power, will show low amplitude components at four times the original frequencies and additional components at twice the original frequencies.

When many similarly spaced carriers in the Standard channel plan are subjected to second-order distortion, each of the individual carriers and each of the carrier pairs will mix as analyzed above. This will result in clusters of products 1.25 MHz above and below each of the original carriers.* As a general rule, the number of beats falling closest to a specific carrier can be calculated using the expression


image (10.25)


where INT indicates that the expression is to be rounded down to the nearest integer and

NL = the number of lower (A − B) beats

NU = the number of upper (A + B) beats

n = harmonic number of highest carrier

m = harmonic number of lowest carrier

x = harmonic number of carrier being evaluated

Since, in the ANSI/EIA-542 Standard channel plan, the carriers are not exactly spaced, the second-order products are not frequency coherent, but rather are clustered around the nominal product frequencies.
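For readers who prefer counting to closed forms, the clustering described above can also be tallied by brute force. The sketch below idealizes the plan as carriers 1.25 MHz above every harmonic of 6 MHz from m through n; because the real ANSI/EIA-542 plan offsets a few channels, published beat tables will differ slightly from these counts.

```python
from itertools import combinations

def second_order_beats(m, n):
    """Return {harmonic x: (lower A-B beats, upper A+B beats)} for carriers m..n."""
    harmonics = range(m, n + 1)
    lower = {x: 0 for x in harmonics}   # difference beats, 1.25 MHz below carrier x
    upper = {x: 0 for x in harmonics}   # sum beats, 1.25 MHz above carrier x
    for a, b in combinations(harmonics, 2):
        if a + b in upper:
            upper[a + b] += 1           # falls at 6(a+b)+2.5 MHz
        if abs(a - b) in lower:
            lower[abs(a - b)] += 1      # falls at 6|a-b| MHz exactly
    for a in harmonics:                 # second harmonics join the upper clusters
        if 2 * a in upper:
            upper[2 * a] += 1
    return {x: (lower[x], upper[x]) for x in harmonics}

beats = second_order_beats(10, 90)      # roughly a 54-550 MHz passband
print(beats[10], beats[90])             # band-edge channels collect the most beats
```

As the text notes, the lowest channels collect mostly difference (A − B) beats and the highest channels mostly sum (A + B) beats.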

The usual measurement technique when evaluating this type of distortion is to use a filter bandwidth of 30 kHz, which is wide enough to contain all the significant products in a group and to measure the total apparent power within the filter bandwidth. The power level of the upper composite beat cluster, relative to the sync peak power of the visual carrier, is known as composite second order (CSO) and is sometimes expressed in units of dBc (meaning decibels relative to the carrier power level, which is always a negative number). Some engineers prefer the notation C/CSO, the ratio of the carrier to beats in decibels (a positive number). This book will use this notation. The absolute values are the same in either case. Although the lower CSO cluster will generally have greater magnitude than the upper cluster on lower-frequency channels, it falls at the edge of the video channel and is often ignored since it has a minor effect on video quality compared with the in-band upper cluster.

Since analog video carrier modulation results in average signal levels that vary with program content, equipment is specified and evaluated when loaded with unmodulated carriers of a defined level in order to achieve repeatable measurements. When loaded with video carriers whose peak level is the same as the unmodulated levels used for the test, the average carrier power will be about 6 dB lower, resulting in an improvement of about 6 dB in C/CSO.

When the IRC channel plan is used, the visual carrier frequencies are derived from a common source so that all channels are phase coherent and offset from harmonics of 6 MHz by exactly the same amount. The result is that the second-order beats are also coherent.

When the HRC channel plan is used, the visual carrier frequencies are derived from a common source and not offset. Under those circumstances, the upper and lower second-order beats fall exactly on the visual carrier frequencies, and the difference frequency beats cannot be ignored.

The amplitudes of the individual products and the number in each cluster are a function of the nonlinearity of the amplifier, the levels of the carriers generating the beat, and the number of test signals used. Although the preceding analysis assumed a uniform carrier level, standard operating practice is to vary the carrier power in a controlled way as a function of frequency. This will result in varying individual beat levels.

When a comb of signals is subject to second-order distortion, the greatest number of products will fall on the highest and lowest channels. Figure 10.8, as an example, gives the total number of upper and lower 1.25-MHz second-order products in the Standard channel plan (in which channels 5 and 6 are offset by 2 MHz, and the 15th to 17th harmonics are not used).

image

Figure 10.8 Number of 1.25-MHz CSO beats: 550-MHz Standard channel plan.

The visual effect of the upper-side second-order products on analog NTSC television pictures is similar to that of a discrete interfering carrier — that is, closely spaced diagonal lines as discussed in Chapter 2. As discussed earlier, the lower CSO beats have relatively little effect on visual quality.

Many simple broadband RF amplifiers are single-ended class A devices, meaning that their output stages consist of a single active device that is biased to the approximate middle of its dynamic range. Such a device typically exhibits an asymmetrical compression behavior so that second-order distortion products dominate. In recognition of this problem, the North American VHF off-air frequency plan designated channel boundaries such that neither second harmonic nor second-order beats from any combination of channels affect any other channels even if the drive levels to television receivers are high enough to result in compression. This was discussed in detail in previous chapters.

Early cable television amplifiers used single-ended designs for the same reasons. As the industry began using non-off-air channels, however, CSO quickly became the primary performance-limiting parameter.

Odd-Order Distortion (Composite Triple Beat—CTB)

In an attempt to reduce this problem, all modern cable amplifiers are “push-pull,” meaning that their output stages contain two devices in a balanced circuit that assures symmetrical, or nearly symmetrical, compression. As will be seen, this will result in primarily odd-order distortion products.

If a single sine wave is transmitted through a circuit that has only third-order distortion, the output can be described by the following equation (derived from Equation 10.20):


eo = Aei + Cei³  (10.26)


where A and C are the coefficients of the linear and cubed products, respectively, and


ei = Ei sin(ωt)


Using a standard trigonometric identity, this reduces to


eo = (AEi + 3CEi³/4) sin(ωt) − (CEi³/4) sin(3ωt)  (10.27)


C may be either positive or negative. In the case where the nonlinearity is due to compression, it is negative, so the third-order distortion results in a reduction in the amplitude of the fundamental (unlike second-order distortion, which does not affect the amplitude of the fundamental) plus the addition of a product at three times the original frequency whose peaks are in opposition to those of the fundamental. The level of the third-order products rises as the cube of the input voltage (i.e., 3 dB for every 1-dB rise in input level), so the ratio of fundamental to third harmonic at the output decreases by 2 dB for every 1-dB rise in the input level so long as the ratio between them is high. When the reduction in the amplitude of the fundamental becomes material, the ratio will decrease more quickly.

Figure 10.9 shows visually the fundamental, third harmonic, and the sum of the two. As can be seen, third-order distortion is equivalent to a reduced amplitude, symmetrically flattened waveform. If higher odd multiples of the original frequency are also plotted on the graph, it will be seen that all such harmonics will have exactly the same effect on both peaks of the original waveform.

image

Figure 10.9 Effect of third-order distortion.
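The behavior predicted by Equation (10.27) can be verified numerically in the same way as the even-order case. The sketch below uses a hypothetical y = Ax + Cx³ characteristic with illustrative coefficients (C negative to model compression) and confirms the 3-dB-per-decibel growth of the third harmonic along with the slight compression of the fundamental.

```python
import numpy as np

A, C = 10.0, -0.4                      # illustrative coefficients; C < 0 models compression
fs, f0, n = 1000.0, 10.0, 1000         # exactly 10 cycles in the record (no leakage)
t = np.arange(n) / fs

def levels_db(ei_peak):
    x = ei_peak * np.sin(2 * np.pi * f0 * t)
    y = A * x + C * x**3               # the form of Equation (10.26)
    spec = np.abs(np.fft.rfft(y)) / n
    return 20 * np.log10(spec[10]), 20 * np.log10(spec[30])  # fundamental, 3rd harmonic

f1, h1 = levels_db(0.05)
f2, h2 = levels_db(0.05 * 10**(1 / 20))  # drive raised exactly 1 dB
print(round(h2 - h1, 2))               # third harmonic rises ~3 dB
print(round(f2 - f1, 5))               # fundamental rises slightly less than 1 dB
```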

If multiple signals are transmitted through a system exhibiting this type of distortion, the input can be described by the following general formula:


ei = E1 sin(ω1t) + E2 sin(ω2t) + … + En sin(ωnt)  (10.28)


where

ω1, 2,…, n = the angular frequencies of the input signals

E1, 2,…, n = the levels of each subcarrier (peak voltage)

If we insert this definition of ei into Equation (10.26), it can be shown that the output spectrum will have signals at the fundamental frequencies of the input signals and at all combinations of 3ωx, 2ωx ± ωy, and ωx ± ωy ± ωz, where ωx, ωy, and ωz are any of the input frequencies. In particular, single frequency terms, two frequency terms, and three frequency terms will be present:

Single frequency terms:


C[(3Ex³/4) sin(ωxt) − (Ex³/4) sin(3ωxt)]  (10.29)


Just as with a single sine wave, this indicates that the output contains, in addition to the linearly amplified input signals, two terms proportional to the cube of the input voltage level of each of the carriers, one at the fundamental carrier frequency and one at three times the carrier frequency. When the input consists of carriers corresponding to the EIA Standard or IRC channel schemes (offset 1.25 MHz from harmonics of 6 MHz), the third harmonic term will fall 3.75 MHz above a harmonic of the root frequency (2.5 MHz above a carrier if one is present). With carriers that fall exactly on harmonics of a common root (HRC), the distortion products will also fall on harmonics of the root. The number of single frequency terms will be the same as the number of FDM carriers.

Two frequency terms:


C[(3Ex²Ey/2) sin(ωyt) − (3Ex²Ey/4) sin((2ωx + ωy)t) + (3Ex²Ey/4) sin((2ωx − ωy)t)]  (10.30)


Each unique ordered pair of input signals mixes to produce a product at the fundamental of one of the input signals plus products at 2ωx ± ωy. The amplitude of the fundamental is twice that produced by the cubed term (assuming equal amplitude input signals). The voltage amplitudes of the other two terms are three times that of the cubed terms (+9.5 dB). When the input consists of carriers corresponding to the EIA Standard or IRC channel schemes, the 2ωx + ωy term will fall 2.5 MHz above a carrier assignment, while the 2ωx – ωy term will fall on a carrier assignment. With an HRC channel assignment scheme, all products will fall directly on channel frequencies.

When the input signals consist of an FDM spectrum of q input carriers, the number of ordered pairs is q(q − 1). Since each ordered pair results in the generation of products at two new frequencies, the number of new products will be 2q(q − 1). Thus, when the number of carriers is large, the total number of products will increase approximately as the square of the channel count.

Three frequency terms:


(3CExEyEz/2)[sin((ωx + ωy − ωz)t) + sin((ωx − ωy + ωz)t) − sin((ωx − ωy − ωz)t) − sin((ωx + ωy + ωz)t)]  (10.31)


Each unique combination of three input signals mixes to produce terms at ωx ± ωy ± ωz. When the input signal amplitudes are equal, the voltage amplitudes of these terms are six times that of the cubed terms (+15.6 dB). When the input consists of carriers corresponding to the EIA Standard or IRC channel schemes, the ωx + ωy – ωz and ωx – ωy + ωz terms will fall on a carrier frequency, the ωx + ωy + ωz term will fall 2.5 MHz above a carrier frequency, and the ωx – ωy – ωz term will fall 2.5 MHz below a carrier frequency. Note that, though ωx ± ωy ± ωz is sometimes negative, the only effect is the polarity of the magnitude since sin (−x) = – sin (x).

When the input signals consist of an FDM spectrum of q input carriers, the number of unique three-signal combinations is q(q − 1)(q − 2)/6. Since each of these combinations results in the generation of four new products, the total number of new third-order products caused by three-signal mixing will be 4q(q − 1)(q − 2)/6. Thus, when the number of carriers is large, the total number of products will increase approximately as the cube of the channel count.

In summary, when an FDM spectrum of q signals is transmitted through an amplifier exhibiting third-order distortion, the original signals will be amplified, and a number of new products will be generated:

Some that affect the output amplitude of each original carrier.

q new signals, one at the third harmonic of each of the original signals. With the ANSI/EIA-542 Standard or IRC frequency plan, these fall 2.5 MHz above visual carrier frequencies.

2q(q − 1) new signals at frequencies 2ωx ± ωy, where ωx and ωy are any two input carrier frequencies. Each of these signals is about 9.5 dB higher in level than the third harmonic frequencies. With the Standard or IRC frequency plan, half of these fall 2.5 MHz above, and half nominally on, visual carrier frequencies.

4q(q − 1)(q − 2)/6 new signals at frequencies ωx ± ωy ± ωz, where ωx, ωy, and ωz are any three different carrier frequencies. Each of these products will be about 15.6 dB higher in level than the third harmonic frequencies. With the Standard or IRC frequency plan, half of these will fall nominally on visual carrier frequencies, whereas one-quarter will be 2.5 MHz below and one-quarter will be 2.5 MHz above visual carrier frequencies.

The level of each of these products increases by 3 dB for every 1-dB increase in the levels of the input signals so that the ratio between the desired output signals and third-order products decreases by 2 dB.

Since the three-signal products (“triple beats”) are the greatest in both number and magnitude, they quickly dominate other third-order effects. Since half of them fall nominally on visual carrier frequencies, those product groups are always the largest. Thus, composite triple beat (CTB) distortion is defined as the total power in a 30-kHz bandwidth centered on the visual carrier frequency without the visual signal present, relative to the peak level of the visual carrier in dBc (decibels relative to a reference carrier level). As with CSO, it can also be expressed as the ratio of visual carrier level to apparent beat power, or C/CTB, in decibels.

For a contiguous set of equally spaced carriers, the number of third-order beats falling in any given channel is approximately8


image (10.32)


where

NCTB = the number of third-order beat products whose frequencies are nominally the same as the carrier being evaluated

a = the total number of contiguous, equally spaced carriers

b = the index number of the carrier being evaluated, i.e., the bth carrier from the bottom or top

For the center channel, which receives the greatest number of beats, this reduces to


NCTB ≈ 3a²/8  (10.33)


At the end channels, which receive the fewest beats, it becomes


NCTB ≈ a²/4  (10.34)


Actual cable systems using the EIA Standard channel plan, however, do not conform exactly to this model, with channels 5 and 6 being offset and some or all of the channels falling in the FM broadcast band seldom used for video. Thus, “beat quantity tables” published for the industry show several percent fewer beats.

Figure 10.10 is a plot of the number of beats for some common, fully loaded, system bandwidths.9 As can be seen, channels slightly above the middle of the spectrum are affected by the greatest number of beats. It does not follow, however, that the channel with the greatest number of beats is the channel most affected by third-order interference. Amplifiers are not typically operated with equal power carriers, but rather with a uniform upward “slope” of levels with frequency (for reasons explained later in this chapter), causing some beats to be stronger than others. More important, since signal quality is affected by the ratio between visual carrier level and interference level, the lower carrier levels at the low end of the spectrum will sometimes cause the worst interference to occur on a lower-frequency channel than the one with the largest number of beats.

image

Figure 10.10 Number of triple beats per channel versus system bandwidth.
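The distributions plotted in Figure 10.10 can be approximated by enumerating, for an idealized plan of q contiguous, equally spaced carriers, the sum-minus-difference products that land back on a carrier slot (the on-carrier products that CTB measures). This brute-force sketch ignores the channel offsets mentioned above, so its counts differ by a few percent from published beat tables.

```python
from itertools import combinations

def triple_beats(q):
    """Count the x+y-z type products (distinct x, y, z) landing on each of q carriers."""
    counts = [0] * (q + 1)
    for x, y, z in combinations(range(1, q + 1), 3):
        # Each unordered triple contributes three sum-minus-difference products.
        for f in (x + y - z, x + z - y, y + z - x):
            if 1 <= f <= q:
                counts[f] += 1
    return counts[1:]                  # index 0 is channel 1, etc.

q = 60
counts = triple_beats(q)
center, edge = counts[q // 2 - 1], counts[0]
print(center, edge)                            # center channels collect the most beats
print(round(3 * q * q / 8), round(q * q / 4))  # large-q approximations for comparison
```

For finite q the enumerated counts run somewhat below the 3q²/8 and q²/4 approximations, but the shape of the distribution matches the figure.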

Since the third-order products are phase incoherent in most systems, their powers will add on a power, rather than a voltage, basis, and the measured CTB level would theoretically be proportional to 10 log(n), where n is the number of beats in a channel if the levels of all carriers are equal. In actuality, owing to the unequal carrier levels and beat amplitudes, and possibly differences in distortion across the spectrum, CTB may increase as slowly as 5 log(n) to 7 log(n).10

The visual effect of CTB on an analog television picture depends on whether the signals are harmonically related or not. With unlocked carriers, the differences in frequency between the desired carrier and the various beats produces low-frequency interference that has been described as “streakiness” in the picture. With locked carriers, the various products will be frequency coherent with the carrier and, like the fundamental frequency products, will alter the amplitude and phase of the original signals. The visual effects depend on many factors, including the amount of phase noise on each of the signals.

Since analog video carrier modulation results in an average signal level that varies with program content, equipment is specified and evaluated when loaded with unmodulated carriers of a defined level on each of the nominal visual carrier frequencies. This results in more repeatable measurements. When loaded with video carriers whose peak level is the same as the unmodulated levels used for the test, the average carrier power will be about 6 dB lower, resulting in an improvement of about 12 dB in CTB (though the improvement will vary over time depending on video modulation levels and timing and carrier phase relationships). Typical amplifiers provide a CTB of −70 to −90 dBc when loaded with unmodulated carriers at recommended operating levels, depending on design.

Cross Modulation (XMOD)

Another form of distortion occurs when amplifiers exhibiting third-order distortion are loaded with multiple amplitude modulated signals.

Although second-order distortion has no products that affect the gain of the primary FDM carriers, as Equations (10.27), (10.29), and (10.30) demonstrate, two classes of third-order products affecting every channel cause effective gain variations of the fundamental signal. In particular, the first term of Equation (10.30) demonstrates that the effective amplifier gain affecting each FDM sub-carrier is modified by a series of terms that include the amplitude of every other signal in the spectrum carried. Thus, when the average levels of any other signals are varied because of modulation, the output level of the desired channel will also vary slightly. This effect is known as cross modulation.

If the modulation of the other signals were truly independent of each other and random, the effects of cross modulation from various channels could be expected to be noiselike. Unfortunately, video modulation is not random. As covered in Chapter 2, the change in amplitude from synchronizing peak to black level is identical for every channel and, furthermore, occurs at nearly identical and quite precise frequencies. Thus, if the signals modulating multiple channels happen to be synchronized (which happens when they are locked to a common timing reference in a multichannel generation facility), or if they drift into alignment occasionally, the level of cross modulation can become significant.

The standard test for cross modulation is to load the amplifier with a comb of simultaneously modulated carriers plus one unmodulated carrier on the frequency to be tested. The level of modulation is measured on the nominally unmodulated carrier. Cross modulation is defined as the difference between the cross modulation sideband level and the sideband level that would correspond to 100% modulation expressed in decibels.

Since it is caused by third-order distortion, the level of cross modulation varies in the same way as CTB; that is, the relative level varies 2 dB for every 1-dB change in operating levels. Because of the difficulty in making the test and demonstrated inconsistency in the results, cross modulation has become less important than CTB as a means of characterizing third-order distortion. The inconsistency may be due to the many terms, whose effects may reinforce or cancel, affecting the cross modulation on any channel.

When the desired signal is an analog television program and the major cross-modulating signals are also analog television programs, the first visible indication of cross modulation takes the form of a single, faint broad horizontal line and another broad vertical line that may slowly move through the desired picture. These are caused by the horizontal and vertical synchronizing pulses (the highest carrier levels) of another television signal. In an improperly operated system, cross modulation can also be caused by a single very high amplitude channel that causes cross modulation onto other channels. In extreme cases, not only synchronizing bars but video modulation may be visible. The visual appearance of cross modulation is similar to that which results from direct pickup interference where an undesired signal “leaks” into the system and interferes with a cable signal on some channel.

Composite Intermodulation Noise (CIN)

The preceding analysis was based on the carriage of a spectrum of signals, each of which has a carrier that represents most of the energy in the channel. Though that is a valid model for analog television signals and many conventional AM and narrow band FM modulated signals, digital signals differ significantly. A QAM signal, for instance, has a suppressed carrier and, when viewed on a spectrum analyzer, looks like a flat block of noise occupying the entire communications channel (see Chapters 4, 5, and 6 for detailed discussions of QAM modulation).

When such signals are subject to second- and third-order distortion, they do not produce single frequency products, but rather noiselike bands of energy whose amplitude frequency signature depends on the distortion mechanism. The products of this type of distortion are alternatively known as composite intermodulation noise (CIN), composite intermodulation distortion (CID), or intermodulation noise (IMN). This is an exception to our use of the term noise to refer to only thermal noise since CIN is actually a distortion product.

On the assumption that third order is the dominant distortion-producing mechanism and, further, that three frequency products (ωx ± ωy ± ωz) dominate, IM products involving digital carriers will be of three types:

Products formed from the mixture of two analog carriers and one digital signal will appear as blocks of flat noise with a bandwidth equal to that of the digital signal.

Products formed from the mixture of one analog signal and two digital signals will appear to have a symmetrical triangular-shaped amplitude-versus-frequency spectrum, where the triangle will have a spectral width equal to the sum of the widths of the two individual digital signals.

Products formed from the mixture of three digital signals will appear to have a band-limited Gaussian shape with a spectral width equal to the sum of the widths of the three mixing signals.

If digital signals are carried lower in level than analog signals, then the preceding types of products are listed in decreasing order of amplitude.
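The three spectral shapes listed above follow from the fact that multiplying signals in the time domain convolves their spectra in the frequency domain. A minimal numerical sketch with idealized rectangular QAM spectra (arbitrary width units):

```python
import numpy as np

rect = np.ones(100)               # idealized flat QAM spectrum
tri = np.convolve(rect, rect)     # two digital signals mixing: triangular spectrum
bell = np.convolve(tri, rect)     # three digital signals: near-Gaussian spectrum

print(len(rect), len(tri), len(bell))      # spectral widths add as the text states
print(int(tri.argmax()) == len(tri) // 2)  # the triangle is symmetric about its center
```

Repeated convolution of any flat blocks tends toward a Gaussian shape (the central limit theorem at work), which is why the three-digital-signal products look like band-limited Gaussian noise.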

Although standard operating practices have not yet been set for the cable industry, it has been reported that a 750-MHz plant, operating with a signal load of 77 modulated analog television signals (54–550 MHz) and 33 64-QAM digital signals (550–750 MHz) and typical operating conditions and levels, will suffer less than 1 dB of effective C/N degradation due to CIN with digital signal levels suppressed as little as 5 dB relative to analog signals.11

10.3.4 Group Delay Variation and Amplifier Stability

Group delay is a measure of the rate of change of phase shift through a device as a function of frequency. In equation form


D = (1/2π)(dγ/df)  (10.35)


where

γ = the phase shift through the device in radians

f = the frequency in Hz

In an ideal RF system, the transmission delay at all frequencies is a constant equal to the transit time through the length of line involved, so the group delay is constant and its variation across frequency is zero.

If the group delay is not constant, signal components at one frequency will arrive shifted in time relative to other components. The degree of tolerance to group delay variation will vary according to the signal format. A particular case of interest to the analog NTSC format is chroma delay, a measure of the arrival time of the chroma signal relative to the luminance signal. As discussed in earlier chapters, the difference should not exceed 170 ns. Though modulators and demodulators are generally the primary contributors to chroma delay, channels close to forward or reverse transmission system band edges are also affected by amplifier design.

High-speed digital signals are also affected by group delay. For example, fast rise time waveforms will broaden if the higher-frequency components are delayed relative to the lower-frequency components. Occasionally, digital signals are carried below channel 2 in the downstream band and, thus, very close to passband edge.
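Equation (10.35) can be applied numerically to a measured or modeled phase response. The sketch below uses a first-order high-pass with an assumed 52-MHz cutoff as a crude stand-in for one diplex-filter section; real diplex filters are much higher order, so their group delay variation near the band edge is substantially worse than this illustration suggests.

```python
import numpy as np

fc = 52e6                              # assumed cutoff near the downstream band edge
f = np.linspace(54e6, 550e6, 5000)     # downstream band, Hz
h = (1j * f / fc) / (1 + 1j * f / fc)  # first-order high-pass response
gamma = np.unwrap(np.angle(h))         # phase shift through the filter, radians

# Group delay per Equation (10.35); sign chosen so that delay is positive.
delay_ns = -np.gradient(gamma, f) / (2 * np.pi) * 1e9

print(round(delay_ns[0], 2), round(delay_ns[-1], 3))  # worst near the band edge
```

Even this idealized section shows the delay concentrated on the channels nearest the diplexer crossover, consistent with the chroma-delay discussion above.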

The principal contributors to group delay variation in amplifiers are the diplex filters used at each of the signal ports to separate upstream and downstream spectra. Figure 10.11 illustrates the key components of a two-way amplifier. The input and output diplex filters each consist of a high-pass section connecting the forward gain block to the external ports and a low-pass section feeding the reverse gain block. Each of the filter sections, as well as the amplifier modules, will have a gain (negative in the case of the filters) that is a function of frequency.

image

Figure 10.11 Bidirectional amplifier.

In order to assure absolute stability, the sum of the gains around the loop shown in the heavy arrow must be less than zero at all frequencies. This condition must be met not only for matched loads on the external ports but for any mismatch encountered in an actual field installation. With simple filter designs, this is most easily achieved by having the cutoff frequencies of the filters as far apart as possible and, thus, close to the edge of well-separated forward and reverse passband edges. Since group delay variation gets worse as the cutoff frequency is approached, there is an inherent conflict among (1) the separation between the top of the upstream spectrum and the bottom of the downstream spectrum, (2) the cost of the diplex filter, (3) the group delay variation at the extremes of the spectra, and (4) the stability margin of the amplifier.

Ensuring a nonoscillatory condition may not be adequate. Even if the loop gain is less than zero, frequency response variations may result unless there is adequate isolation. As a practical test, adequate performance may be deduced by observing the swept frequency response of an amplifier at all frequencies, including the diplexer crossover region. Peaking of the response indicates a possibility of a loop gain problem.

Figure 10.12 illustrates the downstream performance of a typical commercially available amplifier having a “42/52 MHz split” (upper edge of the upstream spectrum is at 42 MHz, whereas the lower edge of the downstream spectrum is at 52 MHz). The curve illustrates the group delay, with the shaded regions indicating the frequencies and chroma/luma delay difference for the four lowest NTSC television channels in the Standard ANSI/EIA-542 frequency plan. The direction of the chroma delay is to advance the chroma with respect to the luminance. Serendipitously, this is the opposite delay sense (luminance is delayed with respect to chrominance) to that which is commonly encountered in the modulation-demodulation process, but any delay is not good.

image

Figure 10.12 Downstream group delay due to diplex filters.

The required end-to-end group delay requirements and typical network performance are discussed in Chapter 15, and upstream group delay is discussed in Chapter 16.

10.3.5 Hum Modulation

Although amplifiers that plug into conventional wall outlets are available (and used in headend and apartment house applications where convenient), most distribution amplifiers are powered via ac voltage that is multiplexed with the signals on the common coaxial distribution cable. Along with their signal-processing circuits, amplifier stations include circuitry for separating the power and signal voltages and for connecting the power to local power circuits that convert it to direct current to power the various electronic modules used.

Amplitude modulation of the transmitted signals at the power line frequency (hum) can occur in two ways: excessive ac voltage at the output of an amplifier’s local power pack and parametric modulation of magnetic component properties. Power pack “ripple” can modulate amplifier gain or be coupled into other circuits such as the automatic gain control (AGC). This ripple can result from faults in the power pack or loss of voltage regulation due to excessive voltage drop between the power supply and the amplifier’s power pack.*

Parametric magnetic modulation is less obvious. Consider the circuit segment in Figure 10.13. The current is tapped off the common coaxial input and sent through L to the amplifier power pack (and possibly inserted into the output coaxial cable(s) as well). Series capacitor C serves to pass the signals into the amplifier while blocking the supply voltage. Finally, T is the input winding of a typical signal-processing element, such as a directional tap, splitter, or part of a band-splitting filter.

image

Figure 10.13 Power/RF separation in amplifiers.

One possible source of hum modulation is L. This component is required to exhibit a very high impedance to frequencies covering the entire forward and return spectra. If the magnetic core becomes partially saturated at the peaks of the supply current, then the impedance match of the amplifier will vary accordingly, resulting in a level variation of the transmitted signal at the line frequency rate.

T is a much more sensitive component. Typically, these transformers are constructed of tiny cores that must have constant magnetic characteristics over an extremely wide frequency range. Though C nominally protects T from the supply current, there is still a displacement current to contend with. In particular, an ac current will flow through C and the primary winding of T whose magnitude is


i = C (dv/dt) (10.36)


When the power supply is a sine wave with a frequency of fp and a peak voltage of Ep, the current will be


i = 2πfpCEp cos(2πfpt) (10.37)


which has a peak value of 2πCEpfp. If the voltage is a square wave with fast rise and fall times, then the displacement current can be much greater for the same peak supply voltage. To illustrate the trade-offs in the design of these circuits, assume that it is desired that C have no more than 1 ohm of reactance at the low end of the upstream spectrum (generally 5 MHz) so as to affect the input match as little as possible. This requires that C be at least 0.03 µF [XC = 1/(2πf C)]. If the supply voltage is 90 volts rms (127 volts peak) at 60 Hz, then the displacement current will be 1.4 mA. If, under square-wave powering conditions, the maximum rate of voltage change is five times that of the sine wave, then the current could be as high as 7 mA. The designer's challenge is to ensure that this level of additional current in the winding of T does not change the transformer's characteristics.
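The arithmetic in this example can be sketched numerically (values are those from the text; the 5x square-wave slew-rate factor is the text's illustrative assumption):

```python
# Sketch of the displacement-current example above.
import math

def min_blocking_cap(f_low_hz, max_reactance_ohms):
    """Smallest C whose reactance at f_low stays at or below the limit: C = 1/(2*pi*f*Xc)."""
    return 1.0 / (2 * math.pi * f_low_hz * max_reactance_ohms)

def peak_displacement_current(c_farads, e_peak_volts, f_power_hz):
    """Peak of i = 2*pi*fp*C*Ep*cos(2*pi*fp*t) for a sine-wave supply: 2*pi*C*Ep*fp."""
    return 2 * math.pi * c_farads * e_peak_volts * f_power_hz

C = min_blocking_cap(5e6, 1.0)                      # about 0.032 uF for 1 ohm at 5 MHz
i_sine = peak_displacement_current(C, 127.0, 60.0)  # 90 Vrms -> 127 V peak, 60 Hz line
print(f"C = {C * 1e6:.3f} uF")
print(f"sine-wave peak current = {i_sine * 1e3:.2f} mA")
print(f"square-wave estimate (5x slew rate) = {5 * i_sine * 1e3:.1f} mA")
```

With the exact minimum capacitance (0.032 µF rather than the rounded 0.03 µF), the sine-wave current comes out a bit above the text's rounded 1.4 mA figure.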

In the North American NTSC analog television system, the frame repetition rate is 59.94 Hz. Since the commercial power frequency is 60 Hz, hum modulation generally appears as a horizontal bar or brightness variation whose pattern slowly moves upwards with time (about 16 seconds to move the entire height of the picture). Typical coaxial components (both amplifiers and passive elements) are specified to create hum modulation at no greater than −70 dB, relative to 100% modulation, at their full rated supply voltage and pass-through current.
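The quoted scroll rate follows from the beat between the power line and the NTSC field rate; a quick check:

```python
# The hum bar scrolls at the beat between the 60-Hz power line
# and the ~59.94-Hz NTSC field rate.
field_rate = 60.0 / 1.001   # NTSC field rate in Hz
beat = 60.0 - field_rate    # beat frequency in Hz
print(f"beat = {beat:.3f} Hz; one picture height every {1.0 / beat:.1f} s")
```

This gives about 16.7 seconds per picture height, consistent with the roughly 16 seconds cited.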

10.3.6 Frequency Response Variation and Impedance Match

Practical amplifiers do not amplify signals at all frequencies equally. Part of this variation is intentional and is intended to compensate for variations in cable losses and to optimize distortion performance, as we discuss in the next section. Any deviation from the intentional variation, however, is known as frequency response variation (quantified as ±x dB) or, more commonly, peak-to-valley (quantified as y dB peak-to-peak). Response variation is defined by the SCTE as the maximum gain deviation in either direction from a line drawn from the gain at the low end of the defined bandwidth to the gain at the high end of the defined bandwidth.12 Figure 10.14 illustrates this definition; the peak-to-valley (P/V) is the sum of the absolute values of +Δmax and −Δmax. This definition, however, is somewhat controversial since it is highly affected by the endpoints. An alternative definition for frequency response is the variation above and below a line that is drawn so as to minimize P/V. For instance, if the lower end of the "ideal gain line" in the figure were raised slightly, the effect would be to lower +Δmax by more than the increase in −Δmax and thus reduce the total P/V.

image

Figure 10.14 Definition of response variation.
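The SCTE endpoint-referenced definition can be sketched as follows (the measured frequency/gain values here are hypothetical):

```python
# Response variation per the SCTE definition: deviations are measured from a
# straight line joining the gains at the two band edges, and the peak-to-valley
# (P/V) is the sum of the absolute values of +dmax and -dmax.
def response_variation(freqs_mhz, gains_db):
    f0, f1 = freqs_mhz[0], freqs_mhz[-1]
    g0, g1 = gains_db[0], gains_db[-1]
    ref = [g0 + (g1 - g0) * (f - f0) / (f1 - f0) for f in freqs_mhz]
    devs = [g - r for g, r in zip(gains_db, ref)]
    # max(devs) >= 0 and min(devs) <= 0 (endpoints have zero deviation),
    # so max - min equals |+dmax| + |-dmax|
    return max(devs), min(devs), max(devs) - min(devs)

# hypothetical swept-response samples across a 54-750 MHz passband
f = [54, 200, 400, 600, 750]
g = [20.0, 20.6, 19.7, 20.3, 21.0]
dmax_pos, dmax_neg, pv = response_variation(f, g)
print(f"+dmax = {dmax_pos:.2f} dB, -dmax = {dmax_neg:.2f} dB, P/V = {pv:.2f} dB")
```

As the text notes, tilting the reference line away from the endpoints (rather than pinning it to them) can yield a smaller P/V for the same data.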

The frequency response through a network may be affected by the imperfect impedance matching between the amplifier and attached transmission line. The effects of imperfect matches in coaxial components can best be understood in the context of their interaction. This is covered in detail in Chapter 15.

10.3.7 Practical Amplifier Design Choices

Complete amplifier stations may be simple or quite elaborate, depending on system needs. All share a common basic downstream structure consisting of several (generally two or three) cascaded gain stages, plus circuits that compensate for the loss variation of the cable before the amplifier (equalizers). Provisions are also made for adjusting the gain of the amplifier as needed to meet system needs. Amplifiers differ in how many stages are employed, internal splitting, redundancy provisions, where operating adjustments are placed in the chain, and the performance of each gain block. Many also include automatic gain control (AGC), which automatically adjusts amplifier gain to compensate for changing input signal levels.

As will be seen, optimum operation of repeatered coaxial distribution systems results when the levels of the FDM subcarriers at the output of amplifiers are not equal but increase with frequency. The variation in level across the operating range is known as tilt. The relationship between operating levels, amplifier response, and interconnecting transmission losses is explored more fully in Section 10.3.8.

In the upstream direction, all the amplifier designs are similar. Since cable losses are much lower in that portion of the spectrum, a single amplifier module is sufficient. Diplexers are used at each port to separate the upstream and downstream signals. All the upstream signals are passively combined and fed to the amplifier module. Unlike downstream, where all level setting takes place at the input or midstage with the aim of setting a consistent output level, the common upstream practice is to align the system so that the input level to the module is constant and to set the output gain and slope after the gain stage so that the input to the next amplifier station upstream is correct. Chapter 16 covers the alignment and operation of networks in the upstream direction.

To allow the most flexibility in application and system design, jumpers or switches allow power to enter the amplifier via any desired port and to be routed to any combination of output ports.

Early power packs used conventional transformer/linear regulator designs. Transformer taps were available to optimize the winding ratio of the transformer to limit required regulator dissipation. Modern designs almost universally use switching regulators (whose design is beyond the scope of this book) that have the property of operating at a constant high efficiency regardless of input voltage. To do so, they draw a constant wattage from the coaxial cable, and thus the current is inversely proportional to the supply voltage. Though this results in the greatest overall powering efficiency, it has some unexpected consequences when a number of amplifiers are powered from a common source, as will be seen in Chapter 11.
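The constant-power behavior of switching power packs can be sketched numerically (the 30-W station load is a hypothetical figure):

```python
# A switching power pack draws roughly constant wattage, so the current it
# pulls from the coaxial cable rises as the multiplexed supply voltage drops.
def supply_current_amps(pack_watts, line_volts):
    return pack_watts / line_volts

for v in (90, 75, 60, 45):
    print(f"{v} V line -> {supply_current_amps(30.0, v):.2f} A drawn")
```

This inverse current/voltage relationship is the source of the unexpected cascade-powering effects deferred to Chapter 11.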

Beyond their basic functional elements, amplifier stations may be equipped with a variety of enhancements, including redundant gain modules and power supplies, status monitors and transducers, and remotely controllable upstream path disconnect. Often fiber-optic receivers and transmitters are combined with amplifier station components, creating a fiber-optic node.

Although the labels attached to different configurations are somewhat arbitrary and subject to change, the following classifications are typical:

Trunk amplifiers are the workhorses of traditional all-coaxial cable television systems. They are optimized for long-distance, repeatered signal distribution. As such, they are operated at output levels that cause only moderate distortion to allow long cascades. The gain is set as high as possible, consistent with low per-stage noise addition. Typically, the output levels of the highest channels are +35 to +40 dBmV, and the gain is about 22 dB. Trunk amplifiers are not used to supply signals to taps.

Where a local distribution circuit is also desired, a portion of the trunk signal is fed to a separate output stage known as a bridger. This amplifier generally runs at a higher output level (and distortion) to allow feeding as many customer taps as possible. Though generally configured as an optional module within the trunk station, bridgers may be contained in separate housings along with directional couplers (midspan bridgers). Figure 10.15 is a block diagram of the RF portion of a typical trunk station, including bridger.

Distribution amplifiers are basically stripped-down trunk/bridger stations that are available in a wide variety of configurations. Typically, they are available with multiple output amplifier modules whose level can be independently adjusted. Figure 10.16 shows a typical configuration with three output gain blocks.

Line extenders are used to allow feeding more taps than possible with just a bridger or high-level distribution amplifier port. Since they are generally operated at similar output levels, the cascade of such devices must be limited to keep distortions at an acceptable level. Line extenders are the simplest of the classifications, as shown in Figure 10.17.

image

Figure 10.15 Typical trunk amplifier RF configuration.

image

Figure 10.16 Typical distribution amplifier RF configuration.

image

Figure 10.17 Typical line extender RF configuration.

10.3.8 Amplifier Operating Dynamics

The C/N degradation as a signal passes through an amplifier is related to the input signal level and the noise figure at any frequency (see Equation (10.18)). On the other hand, the loss of coaxial cable increases with frequency. Thus, the amplifier need not supply the same level for channels at the bottom of the spectrum as at the top to overcome cable loss. Reducing the levels of the lower-frequency channels, in turn, reduces the amplitudes of the intermodulation products to which those channels contribute, and thus CSO, CTB, CIN, and XMOD.

The relation between cable and amplifier characteristics and operating levels can best be illustrated by considering an amplifier and its driven cable as a system (see Figure 10.18). Only the downstream path will be discussed here; Chapter 16 will deal with the equivalent upstream issues.

image

Figure 10.18 Downstream response and gain relationships and adjustments.

The preamplifier will typically have a relatively uniform noise figure and gain across the spectrum. Between the preamplifier and output stage, a slope circuit is used to adjust the frequency response so that it uniformly increases across the downstream frequency range. Typical gain slopes are 6–10 dB for a 750-MHz amplifier. Additionally, pads are used to adjust the overall amplifier gain. The station noise figure and slope are illustrated in Figure 10.19(a).

image

Figure 10.19 Amplifier gain and signal loading. (a) Amplifier characteristics. (b) Amplifier loaded with analog video signals. (c) Amplifier with hybrid analog-digital signal loading.

Some combination of coaxial cable and passive RF devices is connected between this amplifier station and the input of the next station. Since both the cable and passive devices have a loss that varies across the spectrum (and, in the case of cable, increases as the square root of frequency), the variation in level reaching the next station will be reduced. Amplifier input circuits are designed to accommodate a variety of equalizers that are optimized to compensate for the residual response variation and pads that are used to set the correct drive level to the preamp input. Occasionally, external in-line equalizers may be used also, for example, as a tool to reduce the total variation of levels fed to customers.
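The square-root-of-frequency loss model mentioned above determines how much residual slope an equalizer must remove; a rough sketch (the 22-dB span loss at 750 MHz is a hypothetical value):

```python
# Coaxial cable attenuation scales approximately as the square root of frequency.
import math

def cable_loss_db(loss_db_at_ref, f_mhz, f_ref_mhz):
    return loss_db_at_ref * math.sqrt(f_mhz / f_ref_mhz)

# hypothetical interstation span with 22 dB of loss at 750 MHz
for f in (54, 300, 750):
    print(f"{f:>4} MHz: {cable_loss_db(22.0, f, 750):.1f} dB")
```

The roughly 16-dB difference between the band edges in this example is the variation that the amplifier's equalizer and gain slope must jointly absorb.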

In station alignment, the equalizers and pads are used to align the output of each amplifier station as closely as possible to the optimum levels and “gain slope.” Thus, the system gain and frequency response, as measured from the output of one amplifier to the output of any other similarly aligned amplifier in a series-connected string, are ideally unity. Because of the variability in interstation losses, the levels and response at station inputs will vary; however, the response at the internal preamp input will be approximately flat. Amplifiers that do not feed customer taps (for example, trunk amplifiers) are typically operated at levels several decibels lower than those that do.

By contrast to the frequency response of the system (slope), the variation in levels of the signals carried is known as tilt. The terms are often confused, particularly since modern amplifier performance is often specified with a tilt at amplifier outputs that matches the gain slope. Historically (with narrower bandwidth amplifiers), it was more common to use “block tilt” with groups of channels operated at different levels. This was easier for technicians to understand since only a few levels needed to be memorized.

Modern 750-MHz amplifiers are generally specified under two signal-loading conditions. Figure 10.19(b) shows the nominal preamplifier input (after equalization) and output levels of an amplifier station loaded with a full spectrum of 110 video signals that are tilted to match the gain slope of the amplifier. This is the optimum condition for carrying signals of similar format and results in approximately equal C/N degradation for each channel (because of the uniform preamp input levels) and the lowest distortion (because the outputs at the lower channels are reduced in level to approximately match the interconnecting cable and component losses). Figure 10.19(c) shows an alternative loading condition with a superimposed −10-dB block tilt for signals above 550 MHz. The assumption is made that these signals are digitally modulated and do not require the same C/N and C/distortion as analog video channels and so are reduced in level to reduce overall amplifier loading.

Optimum amplifier gain is a function of noise figure and output capability (for a given distortion level). By way of illustration, assume that a given amplifier has an effective noise figure of 9 dB and a CTB of −78 dB at an output level of +45/35 dBmV (the dual numbers are the individual channel levels at the top and bottom of the downstream spectrum, respectively). If the gain of the unit is set at 35 dB (at the top channel frequency) and the input level is +10 dBmV on all channels, then, from Equation (10.19), the C/N of this stage alone will be 60 dB (10 + 59 − 9). The output level will be the input level plus the gain, or +10 dBmV + 35 dB = +45 dBmV, so the CTB will be the specified −78 dB.

If the input level is varied, the C/N will improve by 1 dB for every decibel increase in input level, whereas the CTB will degrade by 2 dB. In setting up a coaxial system, input padding is used to set the optimum operating point.

Now consider the situation if the amplifier had been designed with a gain of 40 dB. At first glance, this would seem like an improvement since such amplifiers could be spaced farther apart along the transmission line. When driven by a +10 dBmV input signal, however, the output level will be +10 dBmV+40 dB =+50 dBmV. At that level, the CTB will be only −68 dB, having degraded by 10 dB owing to the 5-dB increase in output level. If the input level is decreased by 5 dB to restore the distortions to the specified level, the station’s C/N will decrease from 60 to 55 dB. In short, the excess gain forces the user to accept either higher distortion or higher noise.
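The trade-off in the two examples above can be captured in a small sketch, using the per-stage C/N relation from Equation (10.19) and the 2-dB-per-dB CTB behavior stated in the text:

```python
# Per-stage noise and distortion trade-off for an analog-loaded amplifier.
def stage_cn_db(input_dbmv, noise_figure_db):
    """C/N of a single stage per Equation (10.19): input + 59 - NF (NTSC channel)."""
    return input_dbmv + 59.0 - noise_figure_db

def ctb_db(ctb_ref_db, output_dbmv, output_ref_dbmv):
    """CTB degrades 2 dB for every 1 dB of output-level increase."""
    return ctb_ref_db + 2.0 * (output_dbmv - output_ref_dbmv)

# 35-dB-gain amplifier, NF = 9 dB, CTB spec of -78 dB at +45 dBmV output
print(stage_cn_db(10, 9))          # C/N: 60 dB
print(ctb_db(-78, 10 + 35, 45))    # CTB: -78 dB, at spec

# same hybrid with 40 dB of gain, driven at the same +10 dBmV input
print(ctb_db(-78, 10 + 40, 45))    # CTB: -68 dB, 10 dB worse
print(stage_cn_db(10 - 5, 9))      # C/N: 55 dB after padding the input down 5 dB
```

The 40-dB case reproduces the text's conclusion: excess gain forces a choice between 10 dB worse CTB or 5 dB worse C/N.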

Though line extender amplifiers often offer only an input padding option, some distribution and trunk amplifier designs also allow between-stage padding. This partially offsets potential excess gain problems by allowing the user to independently choose the operating level of each gain block though each gain block is still limited in performance by its individual noise figure, gain, and output capability.

The selection of optimum amplifier internal gains is very complex. It would seem from the preceding example that reducing gain would improve performance. Although that is true, more lower-gain amplifiers would have to be cascaded to reach the same distance, and that would lower reliability and increase power consumption. The industry’s choice of operational gains ranging from 20 to 38 dB (depending on technology and application) is based on the best compromise among cost, reliability, and performance.

10.3.9 Amplifier Technology Choices

As previously discussed, modern broadband solid-state cable television amplifiers are all push-pull in nature, meaning that their output stages contain two matched transistors connected in a symmetrical configuration. To the extent that any nonlinearities are also symmetrical, they will produce only odd-order distortion products. Amplifiers whose output stages each contain only a single push-pull circuit are known generally as push-pull amplifiers.

An improvement in distortion performance can be realized by splitting the input signal, feeding two push-pull output gain stages in parallel, and then combining the outputs into a single port. This type of amplifier, generically known as parallel hybrid, offers about twice the output level at the same distortion levels compared with push-pull units. Since two output stages are involved, the power consumption and dissipation are also higher than with push-pull units. Parallel hybrid amplifiers are generally used where high output powers are beneficial, such as bridgers and line extenders.

The third dominant amplifier technology is known as feed-forward. In a feed-forward amplifier, the input signals are amplified via a conventional inverting amplifier. Samples of the input and output signals, adjusted for gain, are combined and amplified in a separate inverting error amplifier. The output of the error amplifier is summed with the output of the original amplifier so that the errors approximately cancel.13 Figure 10.20 shows this configuration. The delay lines are required to match the phases of the signals. Modern feed-forward amplifiers offer about 12 dB of CTB improvement and 6 dB of CSO improvement over power doubling units at comparable operating levels. Furthermore, as discussed later, the remaining distortions are not necessarily phase coherent among cascaded amplifiers, which improves the performance of a repeatered system. Feed-forward amplifiers are generally used in long trunk lines where the lower cascaded distortion is beneficial.

image

Figure 10.20 Feed-forward gain block.

10.4 Passive Coaxial Components

The nature of typical coaxial distribution systems is that the downstream path is split and resplit to create many endpoints from a common signal insertion point. In the upstream direction, the opposite is true, with signals from many potential insertion points combined at a common node. In the headend, signals from individual modulators and signal processors are combined to create the full downstream spectrum, whereas upstream signals are split to feed receivers for each service.

The signal splitting can occur either within amplifier housings or by use of stand-alone components. Those that split the signal equally are known as splitters, whereas those that divert a defined portion of the input signal to a side port are known as directional couplers. Finally, those that divert a portion of the input signal, then resplit the diverted signal to create individual customer drops are known as taps. All share certain characteristics and so will be treated in this section.

In general, all these are passive devices constructed using ferrite-loaded transmission lines and transmission-line transformers. They are bidirectional; that is, the same device can be used to split signals to feed multiple paths or to combine signals from multiple input ports. Important characteristics are signal loss, impedance match, and isolation between nominally isolated ports.

10.4.1 Directional Couplers

Figure 10.21 shows schematics of two directional coupler configurations. In both examples shown, the voltage transformer and current transformers have the same ratio so that, after combining, the ratio of voltage to current stays the same, meaning that the impedance of the side port is the same as the main line. In the main line, the relative polarity of voltage and current is reversed for signals traveling in one direction versus the other direction. The transformers sample voltage and current regardless of signal flow in the main line. The difference is that the sampled signal, after combining, will propagate either toward the termination resistor or toward the side port, depending on the relative polarity of voltage and current. This is what gives the coupler its directivity.

image

Figure 10.21 Directional coupler schematic. (a) Coupler plus power components. (b) Alternate RF configuration.

By varying the turns ratio of both transformers, various values of coupling are possible. Ellis14 has published a tabulation of coupling values as a function of turns ratios, along with a good description of coupler operation. Table 10.2 lists some commonly used values and their theoretical signal loss values (practical units will exhibit 1–2 dB of excess loss due to imperfect components). The coupled loss is simply due to the voltage (or current) ratio defined by the transformers, whereas the input-to-output insertion loss reflects the power diverted to the coupled port (or isolation resistor, depending on signal flow).

Table 10.2 Theoretical directional coupler losses

Turns Ratio (T1 and T2)   Input-to-Output Insertion Loss (dB)   Input-to-Coupled-Port Loss (dB)
2.5:1                     0.78                                  7.96
3:1                       0.51                                  9.54
4:1                       0.28                                  12.04
5:1                       0.18                                  13.97
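The theoretical values in Table 10.2 follow directly from the turns ratio n: the coupled-port loss is 20 log10(n), and the through loss reflects the fraction of power (1 − 1/n²) that continues to the output. A short sketch:

```python
# Theoretical directional-coupler losses from the transformer turns ratio.
import math

def coupler_losses_db(n):
    """Return (through loss, coupled-port loss) in dB for turns ratio n:1."""
    coupled = 20 * math.log10(n)              # voltage ratio at the side port
    through = -10 * math.log10(1 - 1 / n**2)  # power remaining on the main line
    return through, coupled

for n in (2.5, 3, 4, 5):
    thr, cpl = coupler_losses_db(n)
    print(f"{n}:1  through {thr:.2f} dB  coupled {cpl:.2f} dB")
```

These reproduce the tabulated values to rounding (the 2.5:1 through loss computes to 0.76 dB against the tabulated 0.78); real units add 1–2 dB of excess loss, as noted above.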

As is the case with amplifiers, added components in units intended for installation in trunk and distribution cables separate power from signal voltages and send them through different paths, as shown in Figure 10.21(a). Generally, means are provided to control transmission of power independently from RF signals so as to allow greatest flexibility in system design. Note that, if ac power is transmitted through all RF ports, the displacement current through the three blocking capacitors will add in phase through the primary of the voltage transformer. Thus, couplers generate parametric type hum modulation at levels as high as or higher than amplifiers.

The difference in sensitivity to downstream and upstream signals, as measured at the side port, is the directivity of the coupler. Sometimes, instead of directivity, the absolute upstream coupling factor, known as isolation, is specified. Numerically, directivity is simply the difference between coupling factor and isolation. As we will discuss in Chapter 15, directivity and return loss interact to cause additional system group delay and response variation.

The coupling value may not be uniform across the entire frequency range of the device. Even though the overall variation in coupling factor may not be desirable or intentional, the slope and response variation are measured separately and are defined for passive devices the same as for amplifiers.

10.4.2 Splitters

Two-way splitters differ functionally from directional couplers only in that the signal splitting is equal. Other common configurations include four-way and eight-way equal splits. Three-way unequal splits are internally constructed by splitting two ways, then resplitting one of the legs from the first splitter to create a 50%:25%:25% output ratio. Equal ratio three-way splitters are less common.

Figure 10.22 illustrates a two-way symmetrical splitter, with expansion to four way. Figure 10.22(a) is the schematic diagram of a two-way unit, and Figure 10.22(b) is the standard symbol, with terminology. Transformer T1 is a 2:1 impedance step-down transformer that creates a 37.5-ohm source to feed the center tap of transformer T2. T2 is a transmission-line transformer derived from a Wilkinson coupler (which is used for symmetrical splitting of high-power, narrow band signals for directional antenna arrays, among other applications).15 The two outputs at ports 2 and 3 are nominally in phase and at the same level. Resistor R1 is required to provide isolation between them.

image

Figure 10.22 Splitters. (a) Schematic. (b) External power flow. (c) Multiple stages of splitting to create m-way splitter with enhanced isolation among some ports.

Because power is equally divided between ports 2 and 3, the theoretical loss from port 1 to either output is 10 log (0.5) = −3.01 dB. As with directional couplers, the practical loss will be higher, due to core losses in the ferrite core material in the transformers and resistive losses in the transformer wire.

The dimensions of transformers used in high-frequency applications are constrained by the necessity for the winding lengths to be a small portion of a wavelength at the highest frequency of operation. This, in turn, mandates very small cores and fine-wire gauges whose resistance is a factor in excess signal loss. The excess loss over and above that due to coupling is typically 0.5 to 1 dB to each of the output ports, making the total loss typically 3.5 to 4 dB.

Loss in a splitter is called flat loss because it tends not to be a strong function of frequency, as is the loss in coaxial cable. However, there is some increased loss as the frequency increases, partially due to skin effects, which increase the effective resistance of the wire as frequency increases. At the higher frequencies, the ferrite core on which the transformer is wound ceases to have significant magnetic properties, but rather acts as a coil form. Some loss may be associated with it, but the ferrite material is primarily included to improve low-frequency performance. At the lowest frequencies, loss also increases because the open-circuit inductive reactance of the transformer windings is no longer large relative to the circuit impedance.

When the splitter is used as a combiner, incoming signals on ports 2 and 3 are combined and appear at port 1. The theoretical loss from port 2 (or port 3) to port 1 is also 3 dB, the other half of the power being dissipated in resistor R1. Theoretically, no power from port 2 appears at port 3 (if all ports are properly terminated) and vice versa. The signal loss between the output ports, with all ports terminated in 75 ohms, is defined as the isolation. Typical values of isolation for commercially available splitters range from 20 to 30 dB.

Other important specifications of splitters include insertion loss, normally specified as the total loss from port 1 to port 2, with port 3 terminated (or vice versa), and the return loss at each port, with the other ports terminated.

Lack of termination drastically reduces the effective isolation. For instance, power injected into port 3 is attenuated 3 dB at port 1. Should that port be unterminated, the power would be reflected back into the splitter, dividing equally between ports 2 and 3. Thus, the isolation from port 3 to port 2 would be only 6 dB. Similarly, the effective return loss at port 3 would be 6 dB as a result of power being reflected back to that port.

Splitters having more output ports can be fabricated by combining two-way splitters in tree fashion, as shown in Figure 10.22(c). Each leg of the leftmost splitter is connected to the input of another two-way splitter. At that level, four outputs exist. Each of those four can be connected to another two-way splitter, resulting in eight outputs, and so on. The practical loss at each stage will be 3.5-4 dB so that the four-way split loss is 7–8 dB. In an eight-way splitter, it is 10.5-12 dB and so on. In general, the loss in a multilevel splitter is given by


Lm = L1 log2(m) (10.38)


where

Lm = the loss of the m-output splitter chain in decibels

L1 = the loss in one two-way splitter (3.5-4 dB usually)

It is also possible to construct a splitter chain with a number of outputs that is not a power of 2 by simply terminating the splits early on some of the chains. Equation (10.38) must be modified in this case.
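For a symmetrical tree built entirely of two-way splitters, the loss relationship can be sketched as follows (the 3.75-dB per-split figure is an illustrative value within the 3.5–4 dB range quoted above):

```python
# Loss of an m-way splitter built as a tree of two-way splitters:
# an m-way split requires log2(m) levels, each contributing one two-way loss.
import math

def tree_splitter_loss_db(m, loss_per_split_db=3.75):
    """Total loss of a symmetrical m-way splitter tree (m must be a power of 2)."""
    levels = math.log2(m)
    assert levels.is_integer(), "m must be a power of 2 for a symmetrical tree"
    return levels * loss_per_split_db

for m in (2, 4, 8, 16):
    print(f"{m:>2}-way: {tree_splitter_loss_db(m):.2f} dB")
```

With 3.5–4 dB per split, this reproduces the 7–8 dB four-way and 10.5–12 dB eight-way figures given in the text.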

Where output port isolation is important in multiport applications (such as headend combining) one alternative is to use ports in multilevel splitters that are not derived from common stages for critical applications. As shown in Figure 10.22(c), if two splitter outputs are connected to the same second-stage splitter, the isolation between them is merely the isolation of that stage. On the other hand, if two ports are not connected to the same second-stage splitter, then the isolation realized will include the effects of the first-stage splitter, as shown in the figure. Unfortunately, manufacturers may not indicate internal splitter configurations or specify nonadjacent port isolation.

Another higher-isolation alternative is to construct multiport splitters from cascaded directional couplers. Generally, side ports of adjacent, series-connected couplers will be better isolated than outputs from single-stage splitters.

10.4.3 Taps

The most common tap configuration consists of a directional coupler with the side arm signal split two, four, or eight ways to create customer drops. Although the input and through ports are typically provided with connectors appropriate for distribution cable, the side arms are equipped with connectors appropriate for drop cable. In North American systems, this means “5/8-24” ports for input and through ports and type F connectors for the drops.

A tap need not have a through port. At the end of the line, it is common to use a terminating tap, which is just a splitter whose output legs are equipped with drop connectors.

Until about 1995, taps supplied RF signals, but not power, to customer drop lines and were limited in main-line current-handling capacity to 7 amperes or less since this was adequate for entertainment-based cable television systems. With the active consideration of HFC architectures for telephony service came increased demands on tap capabilities:

Bandwidth expansion to 5-1,000 MHz

The ability to supply power to side-of-house active network interface devices (NIDs) that could provide an interface between baseband twisted pair in-home telephone circuits and the RF carriers in the distribution system

Provisions for changing tap values and configurations without interrupting service to customers farther along the distribution cable

Increased voltage and current capability, typically 90 Vrms and 10 amperes, in order to allow transmission of sufficient power for both network amplifiers and NIDs over the same coaxial cables formerly used only for video service

The new generation of taps that evolved is known as power-passing taps or, sometimes, telephony taps. In addition to normal tap capabilities, they tap into the distribution cable power and selectively provide current-limited power along with RF signals to a NID on each home. Depending on the design, this power may be multiplexed with the signals in the coaxial drop cable or carried along a separate twisted pair of copper wires in a hybrid cable configuration. In general, these taps also provide a make-before-break connection between the input and through ports when the tap "plate" (which contains all the circuitry) is removed from the housing so that both RF and power continuity are maintained to customers downstream.

A major concern for those using the drop center conductor to carry power to NIDs is that, if arcing occurs because of a faulty center conductor contact, the arc will transfer a very significant amount of RF power to the upstream plant, likely causing interference to all users of the reverse spectrum. On the other hand, an arc often heals a bad contact and, thus, the fault can be self-correcting.

Figure 10.23 is a circuit diagram for a typical power-passing tap that uses the drop center conductor to pass power to the NID. The RF portion of the tap consists of a directional coupler and two-way splitter (the diagram is for a two-port tap). The directional coupler diverts a controlled portion of the downstream signal and passes it to the splitter, which divides the power to feed the subscriber ports.


Figure 10.23 Power-passing tap using drop center conductor for power delivery to home.

Handling of multiplexed ac power in taps is similar to that in amplifiers and other network passives. Capacitors C1 and C2 isolate the multiplexed power supply voltage and current from the directional coupler RF components. Transformers T1 and T2 are built with very fine wire to permit operation at frequencies up to 1 GHz and would be unable to withstand the currents that would result if the primary of T1 were required to handle several amperes of ac current or if the primary of T2 had tens of volts of 60 Hz across it. The multiplexed ac power is shunted across the tap through the large RF choke L1.

A choke with somewhat less current capacity, L2, provides a path for the multiplexed power that will feed the customer ports. Since this choke is shunt-connected across the main RF path, it must have a higher RF impedance than L1. The unavoidable RF loss due to this choke is one reason power-passing taps have a slightly higher RF loss than normal taps.
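The frequency separation that makes these chokes workable is easy to quantify with the reactance relation X_L = 2πfL. As a rough illustration (the 10-µH value below is purely hypothetical, not taken from any product), a power choke can be nearly a short circuit at 60 Hz yet present an impedance well above the 75-ohm line impedance across the RF band:

```python
import math

def reactance_ohms(freq_hz, inductance_h):
    """Inductive reactance X_L = 2*pi*f*L, ignoring parasitics
    such as winding capacitance and self-resonance."""
    return 2 * math.pi * freq_hz * inductance_h

L = 10e-6  # 10 uH -- illustrative value only
for f in (60, 5e6, 1e9):
    print(f"{f:>13,.0f} Hz: {reactance_ohms(f, L):,.3f} ohms")
```

At 60 Hz the choke shows only a few milliohms, so it carries the multiplexed power with negligible drop, while at 5 MHz (the bottom of the upstream band) its reactance already exceeds 75 ohms severalfold. In practice, parasitic winding capacitance limits how well a real choke maintains this impedance at 1 GHz, which contributes to the slightly higher RF loss of power-passing components.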

In order to permit the tap plate (which contains most, if not all, the circuitry) to be removed without interrupting either power or RF continuity in the hard cable distribution plant, many modern taps incorporate a mechanical bypass switch that engages before the tap-to-baseplate connections are broken to provide continuity from input to through ports. During the removal or insertion of a tap plate, therefore, there will be a brief interval when both the bypass and main connections are made. This will cause a transient condition during which the primary of T1 will be shorted, but the primary of T2 will function normally. This momentary mismatch will cause some attenuation of the transmitted signals. Whether or not it causes a functional problem to the services offered will depend on the characteristics of the signals used.

Continuing with the description of the circuit, capacitor C3 is used to ensure a good RF ground on the "cold" end of L2. Positive temperature coefficient (PTC) resistors PTC1 and PTC2 limit the ac current that can be drawn through the customer ports. At normal currents, the PTC resistor (commonly called just a PTC) exhibits low resistance. As more current is drawn, its temperature increases, causing a resistance increase and effectively limiting the total current. Electronic current limiters have been proposed as an alternative to PTCs but, to date, have not been widely deployed because of cost. New National Electrical Code (NEC) regulations, effective in 1999, require a more sophisticated current-limiting system if the drop cable does not meet certain construction and installation criteria.
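The self-limiting behavior just described can be sketched as a toy thermal simulation: dissipation raises the device temperature, which raises its resistance, which in turn chokes off the current. Every numeric constant here (the exponential resistance law, thermal resistance, time constant, loads) is an illustrative assumption, not data for any real PTC:

```python
import math

def settle(v_supply, r_load, r25=1.0, k=0.1, theta=10.0,
           tau=5.0, ambient=25.0, dt=0.01, t_end=120.0):
    """Euler-integrate dT/dt = (ambient + theta*P - T)/tau for a PTC
    in series with a load, where the PTC resistance rises
    exponentially with temperature; return the settled (temp, current)."""
    temp = ambient
    for _ in range(int(t_end / dt)):
        r_ptc = r25 * math.exp(k * (temp - 25.0))  # assumed PTC law
        i = v_supply / (r_load + r_ptc)            # series circuit
        p = i * i * r_ptc                          # PTC dissipation
        temp += dt * (ambient + theta * p - temp) / tau
    return temp, i

normal = settle(90.0, r_load=180.0)  # roughly 0.5-A drop load
fault = settle(90.0, r_load=5.0)     # near-short on the drop
print(f"normal: {normal[1]:.2f} A   fault: {fault[1]:.2f} A")
```

With these assumed constants the normal load draws about half an ampere and the PTC stays near ambient, while a near-short heats the device until its resistance climbs by orders of magnitude, holding the steady fault current to well under an ampere rather than the tens of amperes the supply could otherwise deliver.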

Finally, inductors L3 and L4 couple power to the two tap ports shown, again isolating the RF signals from the ac supply. As with L2, the reactance of these components must be sufficient to minimize RF losses, and their presence leads to slightly higher overall RF losses in commercially available power-passing taps.

The relationship between passive specifications and system performance will be explored in Chapter 15.

10.5 Power Supplies

Power supplies convert commercial power to lower voltages that are multiplexed with RF signals in the coaxial cables. Although power supplies are not part of the RF circuit, certain of their characteristics affect the transmission of signals.

Direct current, very low frequency (as low as 1 Hz) ac, and 60-Hz ac have all been considered for system power, as have voltages between 30 and 90 volts. Direct current (dc) has been rejected primarily because of possible galvanic corrosion. Early cable television systems used 30 volts, but the increased power demands of wider-bandwidth, higher-output amplifiers forced the industry to convert to 60 volts for systems constructed after about 1970. Two factors are forcing consideration of even higher voltages: the desire to power larger plant sections from a single power supply to improve reliability (see Chapter 20 on HFC reliability for a detailed explanation) and the possibility of powering some terminal equipment through the distribution plant.

Along with the increase in voltage has come a requirement to increase currents as far as practical, with requirements to pass 10–15 amperes being common. Since current from supplies can be transmitted both ways along the cable and/or inserted into more than one cable, power supplies with capacities of 40 amperes or more are available.
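The push toward higher voltage follows directly from the ohmic-loss arithmetic: for a fixed load power, line current scales as 1/V, so loss in the cable's loop resistance scales as 1/V². A minimal worked example, with a hypothetical 1-ohm loop resistance and 500-W plant load:

```python
def cable_loss_w(v_supply, p_load_w, r_loop_ohms):
    """I^2*R loss in the cable for a load drawing p_load_w watts,
    approximating the line current as p_load_w / v_supply."""
    i = p_load_w / v_supply
    return i * i * r_loop_ohms

# 500 W of amplifier/NID load over 1 ohm of cable loop resistance
loss_60 = cable_loss_w(60.0, 500.0, 1.0)  # about 69.4 W lost
loss_90 = cable_loss_w(90.0, 500.0, 1.0)  # about 30.9 W lost
print(f"60 V: {loss_60:.1f} W   90 V: {loss_90:.1f} W")
```

Moving from 60 to 90 volts cuts the ohmic loss by a factor of (90/60)² = 2.25 for the same delivered power, which is why larger plant sections can be powered from a single supply at the higher voltage.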

Field power supplies are roughly divided among nonstandby, standby, and UPS/standby types. The basic component of a nonstandby, line frequency power supply is the ferroresonant transformer. Though its magnetic design is beyond the scope of this book, the ferroresonant transformer has a number of properties of interest to operators: (1) output voltage regulation (a typical specification is ±5% output voltage change when the input voltage changes ±15%), (2) inherent current limiting and short-circuit protection, (3) transient protection, and (4) 90% or better efficiency at rated output current.

Standby power supplies add a battery charger, a set of rechargeable batteries, an inverter, and a changeover switch. The charger assures that the batteries are always held at full charge, whereas the inverter starts and supplies square wave power to replace the sine wave commercial power when it is interrupted (see Figure 10.24). Depending on the application, sufficient battery capacity may be provided for 2–8 hours or more of standby operation.


Figure 10.24 Basic standby power supply functional diagram.
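The battery-capacity figures just mentioned follow from a back-of-envelope energy budget. In this sketch, the battery string, usable depth of discharge, inverter efficiency, and load are all illustrative assumptions:

```python
def standby_hours(batt_ah, batt_v, n_batt, load_w,
                  inverter_eff=0.85, usable_frac=0.8):
    """Estimated runtime: usable stored energy times inverter
    efficiency, divided by the load power (all figures assumed)."""
    stored_wh = batt_ah * batt_v * n_batt * usable_frac
    return stored_wh * inverter_eff / load_w

# three 12-V, 100-Ah batteries feeding a 300-W plant load
print(f"{standby_hours(100, 12, 3, 300):.1f} hours")
```

With these assumptions the string supports roughly eight hours of standby operation, consistent with the upper end of the 2–8 hour range; halving the load or doubling the battery count scales the runtime proportionally.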

An important parameter for operation in the standby mode is the rise and fall time of the square wave voltage. As Equation (10.35) shows, the amount of ac power supply current that "leaks" into the low-level signal path in amplifiers and passives is inversely proportional to the rise time of this waveform: the faster the transitions, the greater the leakage. The trade-off, for power supply designers, is that the efficiency of an inverter is an inverse function of the time the transistors spend in their active region (that is, not in a saturated or cutoff condition). Also, the power lost due to slow switching is dissipated in the transistors and must be conducted away by heat sinks. Unfortunately, the industry does not have a consensus on the best compromise on rise time.
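The leakage mechanism can be illustrated with the capacitor current relation i = C·dv/dt: during a transition of a trapezoidal waveform, the slope is roughly the voltage swing divided by the rise time, so current coupled through any stray capacitance grows as the edges get faster. The capacitance and swing values below are illustrative assumptions, not figures from the text:

```python
def edge_current_a(c_stray_f, v_swing, rise_time_s):
    """Peak current through a stray capacitance during one edge of a
    trapezoidal waveform: i = C * dv/dt, with dv/dt = swing / rise time."""
    return c_stray_f * v_swing / rise_time_s

# 100 pF of assumed stray coupling, 90-V supply swing
for tr in (1e-6, 10e-6, 100e-6):
    amps = edge_current_a(100e-12, 90.0, tr)
    print(f"rise time {tr * 1e6:5.0f} us: {amps * 1e3:.3f} mA")
```

Slowing the edges from 1 µs to 100 µs reduces the coupled current spike a hundredfold, at the cost of more time spent in the transistors' active region, which is precisely the efficiency trade-off described above.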

When commercial power fails, there is an unavoidable but brief interruption while the relay switches between the ferroresonant transformer output and the inverter transformer output. Typically, the time to switch is 8–16 ms. Most supplies synchronize their inverters with commercial power before switching so that phase shift transients in the power supplies of the amplifiers can be avoided.

As will be discussed in Chapter 11, even with inverter phase synchronization, the short switching transient can be a problem when analyzed with respect to its effect on cascaded amplifiers. To avoid that, a third alternative is the uninterruptible power supply (UPS). Some UPS versions of field supplies avoid switching transients by utilizing the same ferroresonant transformer for both inverter and commercial power. During ac power outages, commercial power is disconnected, and the inverter continues to supply power via an additional primary winding.

10.6 Summary

This chapter has covered the technology behind coaxial distribution systems. The characteristics of the coaxial cable, amplifiers, passive components, and powering systems have been covered in detail, including both the theory behind operation and the performance of commercially available products.

Although other constructions are sometimes used, solid aluminum-sheathed cable of 75-ohm impedance has been seen to offer a good combination of loss, cost-effectiveness, and handling characteristics and is in common use worldwide. Amplifier stations designed for cable systems have unique characteristics that complement those of the interconnecting cable and passive devices. Integrated powering of active devices through the cable avoids the necessity of separately connecting each distribution system amplifier to the local utility but requires special circuitry in every device to separate the power and signal paths.

Before the mid-1980s, cable television systems used primarily coaxial technology (sometimes supplemented by broadband microwave links) to serve their customers. Chapter 11 will cover the design of coaxial distribution systems, including both the shared portions and individual in-home "drop" wiring. As will be seen, the performance of such networks is subject to certain unavoidable limitations that become more severe as bandwidth is increased. As a result, modern systems generally use fiber-optic "supertrunks" in place of the long, repeatered coaxial trunks to interconnect small, physically separate coaxial distribution networks with shared headends. Chapters 12 and 13 will cover the technology and performance of such links. Chapter 14 will then cover alternatives to fiber optics (primarily microwave). Chapter 15 will cover the end-to-end downstream performance of the broadband network, from headend to subscriber, and Chapter 16 will deal with upstream issues.

Endnotes

* In accordance with common North American practice, all attenuation values will be given in decibels per 100 feet of length. This has the advantages that signal loss becomes a linear function of length and that conversion to metric units of length is simple.

* To avoid confusion, lowercase symbols will be used in this discussion to refer to scalar quantities (for example, watts), and uppercase symbols will refer to logarithmic quantities (for example, dBmV).

In some countries, the reference is 1 μV rather than 1 mV. Levels in dBμV are therefore numerically greater by 60 than levels in dBmV. In either case, however, the reference is power, not voltage. Finally, for the convenience of those accustomed to working in systems referenced to 1 mW (dBm), 0 dBmV = −48.75 dBm. For consistency, this book uses dBmV as the units for RF power unless otherwise specified.
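The unit relationships quoted above are easy to verify numerically: in a 75-ohm system, the reference power is that dissipated by 1 mV across 75 ohms, and the dBμV offset is 20·log₁₀(1000) = 60 dB. A short sketch of both conversions:

```python
import math

def dbmv_to_dbm(dbmv, z_ohms=75.0):
    """Convert dBmV to dBm by adding the dBm value of the power
    that 1 mV dissipates in the given impedance."""
    p_ref_mw = (1e-3) ** 2 / z_ohms * 1e3  # 1 mV across Z, in mW
    return dbmv + 10 * math.log10(p_ref_mw)

def dbmv_to_dbuv(dbmv):
    """1 mV is 60 dB above 1 uV, since 20*log10(1000) = 60."""
    return dbmv + 60.0

print(f"0 dBmV = {dbmv_to_dbm(0):.2f} dBm")  # about -48.75 dBm at 75 ohms
```

Note that the -48.75 figure is specific to 75 ohms; in a 50-ohm system the same voltage corresponds to a different power, which is why the impedance must always be stated.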

* As a reminder, the term C/N (carrier-to-noise ratio) is used in this book for RF modulated signals, whereas S/N (signal-to-noise ratio) is used for measurements of the noise level on baseband (unmodulated) signals. This is consistent with both regulatory terminology and common industry practice.

* Variations from exact spacing are due to equipment frequency tolerances, broadcast station offsets of 10 or 20 kHz, and aeronautical band offsets of 12.5 or 25 kHz.

* Actually, there are also some products spaced ±750 kHz from the visual carriers because two of the channels (5 and 6) are offset from the comb in the Standard channel plan.

* In accordance with standard industry terminology, the term power supply is used to designate the device that generates the system power for multiplexing on the coaxial cable, whereas the term power pack is used to designate the device within each amplifier that converts the multiplexed power to the appropriate regulated dc levels for the circuitry.

1. Reference Data for Radio Engineers, 6th ed. Indianapolis, IN: Howard W. Sams, 1977, p. 6–6.

2. Frederick Terman, Electronic and Radio Engineering. New York: McGraw-Hill, 1955, p. 121.

3. NCTA Recommended Practices for Measurements on Cable Television Systems, 2nd ed. Washington, DC: NCTA, October 1993, Part Six.

4. The size 59, 6, and 11 cables were formerly known as RG59, RG6, and RG11 because of their physical size similarity to standard military-grade 75-ohm cables carrying those designations. Since CATV cables do not attempt to adhere to the military specifications, the RG notation has been dropped by the industry.

5. The SCTE is an ANSI-accredited standards-making body. It publishes a wide variety of standards covering both performance and test methods related to cable television products. In particular, SCTE-IPS-TP-011, Test Method for Transfer Impedance, and SCTE-IPS-TP-403, Test Method for Shielding Effectiveness, detail test methods for evaluating shielding integrity, whereas SCTE-IPS-SP-001, Specification for Flexible RF Coaxial Drop Cable, contains both mechanical and electrical minimum performance standards.

6. Code of Federal Regulations: 47 C.F.R. §76.5(w). Technically, the noise is defined as that occupying the band between 1.25 and 5.25 MHz above the lower band edge. Available from the Government Printing Office, Washington, DC.

7. An excellent detailed mathematical treatment of amplifier distortion based on a power-series expansion is given in the Technical Handbook for CATV Systems, authored by Ken Simons and published by Jerrold Electronics Corporation, Hatboro, PA (the third edition was printed in 1968 and was still in print in 1978). His results were also published in the Proceedings of the IEEE, in July 1970.

8. Shlomo Ovadia, Broadband Cable TV Access Networks. Upper Saddle River, NJ: Prentice-Hall, 2001, pp. 49–50.

9. Data taken from Cable TV Reference Guide, April 1992 edition, Philips Broadband Networks, 100 Fairgrounds Drive, Manlius, NY 13104.

10. Private correspondence from Archer Taylor based on his experiences with installed systems.

11. This treatment of CID is based on the paper by Jeff Hamilton and Dean Stoneback, The Effect of Digital Carriers on Analog CATV Distribution Systems, 1993 NCTA Technical Papers, pp. 100–113. NCTA, Washington, DC.

12. SCTE IPS-TP-201 Test Method for Frequency Response and Gain. SCTE, Exton, PA.

13. A basic discussion of feed-forward gain block principles is contained in the paper by John Pavlic, Some Considerations for Applying Several Feedforward Gain Block Models to CATV Distribution Amplifiers, Technical Papers, Cable ‘83. NCTA, Washington, DC., June 1983.

14. Michael G. Ellis, RF Directional Couplers. RF Design. February 1997, p. 33ff.

15. Robert P. Gilmore, Applications of Power Combining in Communications Systems. Application Note. Qualcomm, Inc., San Diego, CA.
