Chapter 16. RF Testing

Soumendu Bhattacharya, Georgia Institute of Technology, Atlanta, Georgia

Abhijit Chatterjee, Georgia Institute of Technology, Atlanta, Georgia

About This Chapter

In the last few years, radiofrequency (RF) testing has been gaining importance in industry as more wireless devices are getting infused into the consumer market. However, RF testing is not well understood among engineers because of the various new considerations present in RF testing compared to digital, analog, or mixed-signal testing. Testing RF devices often requires special skills and knowledge of a different set of instruments to successfully perform measurements. Little has been done to bridge this gap so far and as a result, test needs for RF devices are mostly met by in-house training of employees or hiring external vendors to develop and perform the tests.

This chapter explores RF testing in general and points to various aspects that need to be considered to make consistent measurements—be it on the bench or in production. We first discuss RF test methods for specifications of circuit components as well as complex system-level specifications such as I-Q imbalance, modulation error ratio, and bit error rate (BER). For advanced readers, we further describe innovative test methods proposed by researchers in this domain. We describe common RF instrumentation used in bench and production testing—such as spectrum analyzer, network analyzer, and noise figure meter—to familiarize the reader with their functional details.

Noise is an important factor in RF measurements, and thus we give special emphasis to noise figure measurement. In addition, we explore issues related to making accurate measurements in order to give a clear idea about their implications in a test environment. Finally, we discuss the future directions of RF testing to encourage the reader to conduct a self-study of these advanced topics.

Introduction

Radio, derived from the word radiate, refers to a device that can transmit or receive electromagnetic signals without any electrical conducting media. With carefully designed systems, such signals can be propagated over long distances. Guglielmo Marconi and J. C. Bose independently invented radio in 1898. Early applications of radio technology were limited to maritime communications, radar, and the like, and the only consumer application was in the form of AM radio. Since then, radio technology has come a long way to become an inseparable part of our present-day lives. In the early days, communication was limited to broadcast mode only, where a base station transmitted the signal and nearby units tuned in to the correct frequency to receive the signal. Such devices were bulky and consumed a lot of power, making them unsuitable for portable applications. With advances in semiconductor manufacturing, it is now possible to integrate an array of transmitter-receiver pairs into a small monolithic device so that the user can send and receive data (full-duplex communication) over various frequency bands simultaneously. This has given rise to applications such as pagers, mobile phones, laptop computers, and other devices that are highly portable with a small form factor and that require less power to operate while providing a reliable mode of communication to the end user.

Radiofrequency devices operate at very high frequencies, often in the range of megahertz (MHz) to a few gigahertz (GHz). Classical definitions limit RF frequencies from 300 MHz to 3 GHz; however, the frequencies go well beyond 5 GHz for most modern applications. RF technology has been around for many decades, but only recently has it gained widespread popularity in the consumer market. As wireless products continue to flood into the market at a greater pace, the need to provide better functionality at a lower cost becomes important as manufacturers seek to maintain a profitable market share. With the progress in manufacturing technology following Moore’s law, the production cost of silicon devices has decreased significantly. On the other hand, test costs for RF devices are relatively large compared to their analog counterparts and, unless accounted for, can considerably offset the overall profit margin for semiconductor manufacturers for future generations of RF devices. Unlike low-speed analog circuits, RF products pose a different set of challenges for high-volume testing and characterization. In this light, RF testing has drawn considerable attention from the design and test communities. To better understand the challenges associated with RF testing, we first discuss the basics of RF technology.

RF Basics

Electrical signals are electromagnetic waves that propagate through a transmission medium and are altered in amplitude, frequency, and phase by the elements of the transmission network. Using Maxwell’s four basic equations [Pozar 2005], all such behaviors of a network can be explained. However, it is not always straightforward to use these equations, and solving them may be cumbersome in most cases. Hence, easier analysis methods, such as lumped models, are used to analyze a given circuit.

The scattering parameter, popularly known as the S-parameter, is one such technique to measure, characterize, and explain the operation of a linear RF device. S-parameters describe the transmissive and reflective properties of a device, such as its input and output impedance and its forward/reverse gain, and are measured using a network analyzer [Agilent NA-2000]. Under linear operating conditions, S-parameters represent the reflection and transmission coefficients of a network [Agilent SP-1997]. Hence, for a given input signal, the S-parameters tell us how much of the signal energy is reflected back to the source and by what amount the remaining signal is amplified/attenuated by the network.

Other parameters, such as the Z-parameter, h-parameter, and Y-parameter [Pozar 2005], can also be used to model an RF network. Any parameter set can be transformed to any other parameter set using simple matrix transformations. However, computing the S-parameter set exhibits certain advantages over the others and is usually preferred. To determine all other parameters, the input and output ports of the network need to be either opened or shorted. At very high frequencies, this may be difficult to achieve because of the inherent parasitics present in the circuit. To compute S-parameters, the network is connected between a source and a load with a 50-Ω impedance, which makes the measurement easy to perform even at very high frequencies.

Figure 16.1 illustrates how S-parameters of a two-port network are measured. The parameters a and b, defined next, are used to define S-parameters in terms of the terminal current (I), voltage (V), and the characteristic impedance (Z) (where * represents the complex conjugate) [Kurokawa 1965]:

Equation 16.1.

a = (V + Z·I) / (2√Re{Z}),   b = (V − Z*·I) / (2√Re{Z})

Using the a and b values (normalized with respect to the characteristic impedance, Z0) for each port, S-parameters for a network can be computed, where a1 and a2 are the voltages generated by the waves incident at ports 1 and 2, respectively, and b1 and b2 are the measured voltages generated by the waves reflected from the network at ports 1 and 2, respectively. The scattering matrix of a two-port network is defined as:

Equation 16.2.

b1 = S11·a1 + S12·a2
b2 = S21·a1 + S22·a2

where S11 = b1/a1 and S21 = b2/a1 are measured with a2 = 0 (port 2 terminated in the matched load), and S12 = b1/a2 and S22 = b2/a2 are measured with a1 = 0 (port 1 terminated in the matched load).

Figure 16.1. Two port network.

The preceding discussion holds for linear circuit operation. This is generally true for RF, as the signal levels are usually small and circuits operate in the linear region. However, as the signals are inherently small, there is a need to maximize power transfer between circuit stages and minimize the loss in the form of reflections between the stages. This is done by following the maximum power (transfer) theorem [Lee 2003] [Pozar 2005], which states that to obtain maximum power delivery from a source with constant source impedance, the load impedance must be equal to the complex conjugate of the source impedance. This condition is known as the matched condition, where source and load impedances are matched to provide maximum power transfer from the source to the load. To achieve impedance matching for a device, first the complex reflection coefficient is computed as follows:

Equation 16.3.

ρ = (ZL − Z0) / (ZL + Z0)

where ZL is the load impedance and Z0 is the characteristic impedance of the transmission line connecting the source and the load. Furthermore, to achieve matched conditions, ZL is adjusted such that ρ is equal to 0. The impedance value that is required to achieve this matching condition can be computed using a Smith chart, a handy tool that has been used since 1937 for designing RF circuits. A Smith chart is shown in Figure 16.2. The chart is plotted in the complex plane, representing the complex reflection coefficient computed from Equation (16.3), expressed in polar coordinates. The center of the chart corresponds to the case when the source and the load impedances are matched, and hence the reflection coefficient is zero. The perimeter of the chart corresponds to complete reflection (|ρ| = 1), and the angles printed around the perimeter indicate the phase of the reflection coefficient from 0° to 180°, or half a wavelength. The full circles represent the resistive component of the impedance, and the crossing partial circles represent the reactive component. Detailed descriptions of the Smith chart can be found in [Pozar 2005].
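
Because return loss and voltage standing wave ratio (VSWR) are simple functions of the reflection coefficient, a quick numeric check is often handy on the bench. The following is a minimal Python sketch of Equation (16.3), assuming a real 50-Ω reference impedance; the load value and function names are purely illustrative.

import math

def reflection_coefficient(z_load, z0=50.0):
    # Complex reflection coefficient per Equation (16.3)
    return (z_load - z0) / (z_load + z0)

rho = reflection_coefficient(complex(75, 25))   # hypothetical mismatched load
print(abs(rho))                                 # |rho| = 0 would mean a perfect match
print(-20 * math.log10(abs(rho)))               # return loss in dB
print((1 + abs(rho)) / (1 - abs(rho)))          # VSWR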

Figure 16.2. Smith chart for impedance match calculations.

RF Applications

Since its emergence in the early 1900s, RF technology has found numerous applications in modern society. Currently, mobile phones are the fastest growing consumer segment with more than 100 million units sold worldwide every year. In addition, RF technology is used for radio communications in various forms, such as citizen FM radio, aviation controls, and private mobile communications (taxi, bus, truck, railways). The global positioning system (GPS) also uses frequencies in the RF band (the 1227.6-MHz and 1575.42-MHz bands) to communicate between the satellites and the GPS units. Wireless local area networks (WLANs) use unlicensed bands at 2.4 GHz, 5.3 GHz, and 5.8 GHz. Apart from consumer applications, satellite communications and deep space communication are also based on RF technology.

Currently, various applications are being pushed to higher frequencies to obtain efficient propagation, larger bandwidth, immunity to certain types of noise, and reduced antenna size. For example, 2.4-GHz and 5.8-GHz cordless phones have become commonplace in home and office environments. In addition, ultrawideband technology (3.1 to 10.6 GHz) is ready to enter the market in the near future as the wireless USB standard [Nekoogar 2005].

Key Specifications for RF Systems

A typical RF system consists of various modules, such as amplifiers, frequency converters/mixers, oscillators/frequency synthesizers, and DC bias control units [Razavi 1997]. For reliable operation of the complete system, each module must function reliably as per its performance metrics. Usually, for RF devices, specifications related to gain (conversion gain), nonlinearity, and noise performance are of utmost importance [Kasten 1998]. In addition, oscillator specifications such as phase noise and stability ensure reliable system functionality. Common RF specifications for different device types are listed in Table 16.1. These specifications are discussed in more detail in subsequent sections.

Table 16.1. Key Specifications for RF devices

Device Type

Specifications

Amplifiers/low-noise amplifiers (LNA)

Input matching or return loss (S11), voltage standing wave ratio (VSWR), gain (S21), output matching (S22), reverse gain (S12), linearity (second-order intercept point [IP2] and third-order intercept point [IP3]), noise figure (NF), in-band ripple, stability (gain and phase margins)

Power amplifier (PA)

Apart from the above specifications, output power level, power added efficiency (PAE), harmonics and total harmonic distortion (THD), signal-to-noise ratio (SNR)

Mixer

Conversion gain, IP3, NF, LO leakage, DC offset, image rejection ratio

Frequency synthesizer/oscillator

Frequency resolution, phase noise in dBc/Hz, integrated phase noise in root-mean-square (RMS) degrees, in-band noise, out-band noise, spurs in dBc, reference frequency, tuning range, settling time, loop bandwidth, power consumption, reference feed-through, stability

Voltage controlled oscillator (VCO)

Frequency tuning range, gain in MHz/V, phase noise in dBc/Hz, spurs in dBc, power consumption

Transceiver system

Frequency bands, NF, IP3, dynamic range, sensitivity, maximum receivable power, maximum output power, power consumption, maximum DC offset, SNR, adjacent channel power (ACP), error vector magnitude (EVM), bit error rate (BER), I-Q imbalance (magnitude error, phase error)

Test Instrumentation

At this point, it is worth introducing the reader to the common RF instrumentation used in bench measurements (i.e., characterization) and production testing of RF devices and systems. The common instruments used in RF measurements are the spectrum analyzer [Agilent SA-2005], network analyzer, and noise figure meter [Agilent NF-2006]. Apart from DC tests, it is possible to perform most RF specification tests using these three instruments.

Spectrum Analyzer

The spectrum analyzer is the most commonly used instrument for RF measurements. A spectrum analyzer is primarily used to display the spectral content (the power at different frequencies) of a signal over a range of frequencies. Using the spectral data, one can easily visualize both the desired and unwanted frequency tones present in the input signal. For RF testing, the engineer is often interested in measuring spurious tones, especially those originating from third-order intermodulation or the harmonics in the signal. In such cases, usually a spectrum analyzer is used to perform measurements. Apart from displaying frequency domain data of the input signal, modern spectrum analyzers can perform complex measurements automatically, such as ACP, IP3, and phase noise, and with certain modifications in hardware and firmware, they can also accurately measure NF.

Figure 16.3 shows the structure of a spectrum analyzer. Spectrum analyzers use firmware-based control (the embedded software processing unit that interfaces with the user and the display) to interact with the user. The firmware in spectrum analyzers operates at lower frequencies compared to the input signal, as signal processing at such high frequencies becomes extremely difficult. Therefore, the input signal is first amplified/attenuated to a preset level and down-converted using a receive chain. Usually, the user specifies a range of frequencies within which the desired signal is expected to lie. Accordingly, the oscillator frequency for down-conversion within the spectrum analyzer is chosen such that the input signal after down-conversion lies at the baseband frequency. For example, if the spectrum analyzer firmware can work up to 20 MHz and the input signal is at 900 MHz, the down-conversion frequency within the spectrum analyzer will be at 920 MHz. Furthermore, based on the range of frequency specified by the user, the oscillator frequency is swept in steps for this range using a sawtooth waveform generator and a voltage controlled oscillator (VCO) (see Figure 16.3). Most modern spectrum analyzers employ two to four down-conversion steps to get the signal down to baseband frequencies.

Figure 16.3. Block diagram of a spectrum analyzer.

Next, this signal is passed through a resolution filter that controls the resolution of the display. If the resolution filter for the spectrum analyzer is 100 kHz, then the frequency tones 900 MHz and 900.01 MHz in the input signal cannot be distinguished on the display, as they will both pass through the resolution filter. This filter is selected by the digital control block automatically based on the frequency range specified by the user; however, users can choose the filter manually. The filtered signal is passed through an envelope detector. This generates a DC signal proportional to the tone along with some high-frequency components. The high-frequency components are removed by filtering through a video filter. The name video filter stems from the fact that it is used to provide a precise DC level to the digital block for display. This is done for all the frequency points within the range and is repeated over and over.

It is interesting to note that if the resolution filter has a wider bandwidth compared to the video filter, some unwanted signal close to the signal tone may pass through the detector and may not be fully eliminated by the video filter. This might lead to incorrect readings on the display of the spectrum analyzer. To prevent this outcome, usually the resolution filter and video filter bandwidths are kept the same. In cases where the resolution bandwidth is set too large, a large signal can easily mask any smaller tones in its vicinity. Therefore, it is extremely important to understand the test setup and its implications on the measurement.
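
The practical consequence of the resolution bandwidth (RBW) setting can be captured with a one-line rule of thumb: two equal-amplitude tones merge on the display when their spacing is smaller than roughly the RBW. The check below is a simplified sketch of that rule; the function name and frequencies simply repeat the 900-MHz example above.

def resolvable(f1_hz, f2_hz, rbw_hz):
    # Rule of thumb: tones spaced closer than the resolution bandwidth merge on screen
    return abs(f1_hz - f2_hz) > rbw_hz

print(resolvable(900.00e6, 900.01e6, rbw_hz=100e3))   # False: 10-kHz spacing, 100-kHz RBW
print(resolvable(900.00e6, 900.01e6, rbw_hz=1e3))     # True: a 1-kHz RBW separates the tones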

Network Analyzer

A network analyzer is another versatile instrument that is widely used to characterize two-port and four-port networks. It can be used to measure the S-parameters and all associated specifications (reflection coefficient, VSWR, etc.). In addition, time-domain reflectometry (TDR) measurements can also be performed using this instrument.

As previously discussed, S-parameters constitute a significantly important portion of RF test specifications. Usually, network analyzers are used to measure S-parameters. To perform such measurements, four important blocks are present within a network analyzer: (1) a source for stimulus, (2) signal-separation/blocking devices, (3) a down-conversion module, and (4) a digital signal processor (DSP) based firmware unit to manage the user interface and display. Figure 16.4 illustrates a block diagram of a network analyzer.

Figure 16.4. Block diagram of a network analyzer.

Usually, a network analyzer has two or four ports, through which it can source signal, as well as isolate the reflected signal from the device under test (DUT). The sourcing is usually done from digitally controlled frequency synthesizers. To obtain a reference for S-parameter computation, a part of the stimulus is branched out through a power splitter and fed back to the DSP for reference (not shown in figure). The reflected signal from the DUT into the port is isolated through a directional coupler and can be measured separately from the original signal transmitted to the DUT [Pozar 2005]. The directional coupler usually has a low loss with high reverse isolation. With the transmitted and reflected signals from each port, all the S-parameters and associated specifications can be computed using a network analyzer.

Noise Figure Meter

A noise figure (NF) meter is used to accurately measure NF of various types of devices. Usually, a calibrated broadband noise source, also known as the noise head, is used to generate noise to perform such measurements.

An NF meter, also known as a noise figure analyzer, is used to characterize the noise behavior of a device. To better understand the functionality of an NF meter, we first discuss the concept of noise temperature. Thermal noise generated by a device increases with its physical temperature. At a temperature T and measurement bandwidth of B, the noise power generated by the device is given by N = kTB, where k is Boltzmann’s constant (1.3806503 × 10⁻²³ J/K). This means that any device that generates more noise than kTB at room temperature (298.13 K) is equivalent to being at a higher physical temperature at which it would generate an equal amount of noise power. This is referred to as the noise temperature of the device. Note that if the device is at a higher physical temperature than room temperature, it must be compensated for before determining the actual noise temperature.
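
The kTB relation is easy to sanity-check numerically. The short Python sketch below converts thermal noise power to dBm; the 290 K value is the conventional reference temperature used for noise measurements, while the chapter’s room-temperature figure of 298.13 K is also shown for comparison.

import math

K_BOLTZMANN = 1.3806503e-23   # J/K

def thermal_noise_dbm(temp_k, bandwidth_hz):
    # N = kTB, expressed in dBm (referenced to 1 mW)
    n_watts = K_BOLTZMANN * temp_k * bandwidth_hz
    return 10 * math.log10(n_watts / 1e-3)

print(thermal_noise_dbm(290.0, 1.0))     # about -174 dBm in a 1-Hz bandwidth
print(thermal_noise_dbm(298.13, 1e6))    # about -114 dBm in a 1-MHz bandwidth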

Any device output noise has two sources: (1) the noise contribution of the input signal that propagates through the device, and (2) the noise generated by the device itself. NF is a measure of the noise contributed by the DUT only. The equation to measure NF is given by:

Equation 16.4.

F = (NDUT + G × Ni) / (G × Ni)

where

NF = 10log10 F

where the noise contribution of the device is NDUT, G is the gain of the DUT, and Ni is the input noise power. Note that here the gain is unknown to the measurement system. The input noise is usually thermal noise from the source, which equals Ni = kTB, where T is the equivalent noise temperature, and B is the bandwidth of the system. Therefore, Equation (16.4) changes to:

Equation 16.5.

F = (NDUT + G × kTB) / (G × kTB)

Assuming the noise contribution of the device does not change with varying input signal power, the noise contribution of the DUT can be measured by making two measurements with different input noise magnitudes. The output noise power from the DUT scales linearly with changing input noise power (temperature). As shown in the plot on the right-hand side of Figure 16.5, the slope of the line is equal to kBG. The noise contribution of the device is the intercept of the line with output noise power (y-axis) (i.e., the amount of output noise power when there is no input noise).

Figure 16.5. Block diagram of an NF meter and calculation of NDUT using NF meter via interpolation method.

An NF meter has a calibrated noise source that can provide an accurate noise input to the DUT. The structure of an NF meter is illustrated in Figure 16.5. By obtaining two measurements with known input noise powers, the intercept (i.e., the noise power contribution of the DUT) can be obtained. Usually, the NF meter uses a reverse biased diode to create a steady noise power at the device input (Ni1). The output power from this is used as one of the measurement points (No1). To get the other point, the diode is simply turned off, and the input noise power is at 298.13 K (Ni2). From the output power (No2), the NDUT can be computed as:

Equation 16.6. 

From the NDUT and the N0 values, the NF can be further calculated using Equation (16.4).
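
The two-point (interpolation) procedure just described amounts to fitting a straight line of output noise power versus source temperature and reading off its slope (kBG) and intercept (NDUT). The Python sketch below reproduces that fit; the gain, bandwidth, and hot/cold temperatures are hypothetical values chosen only to illustrate the arithmetic, and the final line converts NDUT to an NF referenced to the conventional 290 K.

import math

K = 1.3806503e-23   # Boltzmann's constant, J/K

def dut_noise_from_two_points(t1, n_o1, t2, n_o2):
    # Fit N_out = (k*B*G)*T + N_DUT through two measured points
    slope = (n_o1 - n_o2) / (t1 - t2)    # equals k*B*G
    n_dut = n_o1 - slope * t1            # intercept: DUT noise with zero input noise
    return slope, n_dut

# Hypothetical setup: 20-dB gain (G = 100), 1-MHz bandwidth, 15-dB-ENR-class noise head
B, G = 1e6, 100.0
t_hot, t_cold = 9460.0, 298.13           # noise source on / off temperatures
n_dut_true = 4e-13                       # assumed DUT-added output noise, watts

def measure(t):
    return K * B * G * t + n_dut_true

slope, n_dut = dut_noise_from_two_points(t_hot, measure(t_hot), t_cold, measure(t_cold))

print(slope / (K * B))                               # recovers the gain G = 100
print(n_dut)                                         # recovers the 4e-13 W added by the DUT
print(10 * math.log10(1 + n_dut / (slope * 290.0)))  # NF in dB per Equation (16.4), T0 = 290 K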

Phase Meter

Although spectrum analyzers can measure phase noise, highly accurate phase noise measurements can only be made with a phase meter. The measurements from a phase meter are typically better than those from a spectrum analyzer by 10 to 20 dBc/Hz. Usually, the noise floor of the phase meter is lower than that of an NF meter. However, the use of phase meters is limited in production testing because their measurement times are considerably longer than those of a spectrum analyzer.

Test Flow in Industry

Next, we discuss the overall test procedure followed in the semiconductor industry. Semiconductor manufacturing is a three-step process: (1) the circuit is designed and fabricated, (2) a prototype is developed and characterization/qualification testing is performed, and (3) production is ramped up to high volume. This process is shown in Figure 16.6.

Figure 16.6. Typical semiconductor manufacturing flow.

Design and Fabrication

The first step in manufacturing a device is to define a set of specifications for the device, either obtained from a customer directly or developed based on market need. Using computer-aided design (CAD) software, designers lay out a schematic of the circuit and simulate the design. Once the designers have obtained satisfactory performance from the design, the layout of the schematic is created. The layout is again simulated to incorporate parasitic effects and tweaked to meet the specification needs. Finally, the layout is sent for fabrication. In parallel to the design cycle, the test development process for the device is also initiated. In this phase, the tests that will be performed on the device during high-volume production are developed and coded in the automatic test equipment (ATE) language. The ATE is commonly referred to as a tester. The test program is usually executed first on a dummy tester and a dummy circuit to detect and debug any serious flaws in the code. To interface the device with the tester, a load-board is designed and fabricated to include relevant circuitry and a test socket. By the time the device is characterized and fabrication volume is ramped up, the test program and the load-board are ready for use on the tester.

Characterization Test

As the first set of devices arrive, they are tested for their specifications using a bench test setup, and this process is known as characterization test. A typical bench test setup usually consists of accurate (high performance) measurement instruments, such as spectrum analyzers, network analyzers, and high-speed sampling scopes. The devices are tested for nominal specifications, various corners, and a wide range of temperatures. In addition, the load-board and test programs are also tested to diagnose any serious issues. Also, repeatability of the measurement system (i.e., performing the same measurement many times while keeping the measurement setup unchanged) and device-to-device correlation (i.e., the variance of the measurements made on many devices using the same test setup) are computed. All of these steps help in determining the DUT performance and understanding the reliability of the overall test setup.

Production Test

After successfully characterizing the devices, fabrication volume is ramped up to production level, where many thousands of units are tested every day. The devices are tested on the ATE using the load-board and test programs created during the development phase. The test instruments used in the tester are, however, not as accurate as the bench setup. This is because these are usually low-cost instruments with lower resolution, making a tradeoff between cost of testing and desired accuracy. Also, the DUT is tested for a subset of specifications during production because of limited tester capability and time constraints. Finally, with satisfactory production test data, the devices are sent to the retailer/end user.

The manufacturing process is a closely monitored system, where any anomalies are taken care of by going back to the previous step. This way, if a device does not perform as expected during characterization, the design is revisited to determine the possible cause of failure and is fixed. However, this is not the case for production testing. During production, the process cannot be interrupted if a problem is detected during testing, and such problems must be dealt with using innovative methods. On the other hand, although characterization is very accurate, it may be extremely inefficient to test a large number of devices in a small amount of time. As a result, characterization and production tests inherently pose different sets of challenges to the manufacturing process, as discussed next.

Characterization Test and Production Test

The scope of characterization (or qualification) and production tests differs significantly in the overall manufacturing process, and each poses a different set of challenges [Schaub 2004]. During characterization, the silicon is tested to check the overall performance. Characterization deals with verifying the actual performance of the design, whereas production testing focuses on the overall functionality of the silicon to prevent outliers and defective parts from being shipped to customers. While characterization tests are more complex, production testing tries to achieve 100% yield with a simpler set of tests as the device is ramped up to production volume. Hence, the implications of each of the test processes on the overall manufacturing procedure are different. In this section, we will discuss the differences and advantages of each of the methods.

Often, there are extra steps in the development process, where the device is characterized on ATE as well. This process is similar to the bench test, where the performance of the device at different conditions is measured. This is usually performed on test chips and devices manufactured using advanced fabrication methods.

Accuracy

Usually, characterization testing is extremely accurate because the most precise set of instruments is used to perform such tests. This can be attributed to the fact that during characterization, larger averaging is possible, and tests can be performed with the highest resolution settings, such as the minimum resolution bandwidth (RBW) setting in a spectrum analyzer. Production testing is not as accurate because the instruments are combined in a single housing within the tester in the form of line cards interfaced through GPIB, PCI-X, or some other standard. This inherently increases the overall noise generated in the system and affects the measurement accuracy. However, the measurements obtained from characterization and production testing must follow the same trend.

Time Required for Testing

The goal of production testing is to perform a set of predefined tests in a small amount of time while maintaining an acceptable level of accuracy [Ferrario 2002]. To do this, the tests are usually applied sequentially to the DUT. As different tests have different configurations, these are all implemented together on the load-board, and using a set of control signals generated by the tester, various circuit configurations are achieved via control relays. This reduces the overall test time such that a set of tests for a typical RF amplifier (15 to 20 tests—DC current, gain, IP3, NF, harmonics, etc.) is usually performed in 300 to 500 ms. On the other hand, characterization uses a different set of device boards to measure different specifications, and the process is not automated, thereby increasing the overall test time required for characterization.

Cost of Testing

Production testing is geared toward reducing the overall cost of testing. To achieve this goal, emphasis is placed on an elaborate design of the load-board. This involves a one-time cost, but innovative designs can significantly reduce test time during production, thereby reducing the test cost per device. Also, testing many devices in parallel reduces overall test time. Test cost is much less of a concern during characterization.

In summary, one should keep in mind that the goals of production tests and characterization tests are different. Various methods are applied to make each of the test processes work successfully on a bench or a production floor. Although the device is characterized for all or most of its specifications, only a few of the tests are performed during production testing based on test cost and test time constraints. However, the principle of testing each specification remains the same in both cases. Next, we discuss the methods of testing various specifications for RF circuits and systems.

Circuit-Level Specifications

In this section, we discuss a few of the important specifications for stand-alone RF devices that are tested during high-volume production [Wang 2006]. It is important to discuss a new set of units that are commonly used in RF measurements. The unit used to specify gain is decibels (dB). Numbers are converted to decibels using the following formula:

Equation 16.7.

Gain (dB) = 20 log10 (Vout / Vin)

Therefore, a gain of 8 dB essentially means a voltage gain of approximately 2.5. Similarly, 20 dB corresponds to a gain of 10, and 40 dB to a gain of 100. A similar notation, called dBm, is used to specify power in logarithmic units, using the following formula:

Equation 16.8.

P (dBm) = 10 log10 (PWatts / 0.001)

where PWatts is power in Watts. Essentially, dBm measures power with 1 mW as the reference such that 1 mW is 0 dBm and 1 W is 30 dBm. These two units are frequently used in RF design and test, and it is important for the reader to be familiar with these units to better understand the subject. The prudent reader might question the need for introducing a new set of units for RF measurements. In RF, the range of power measured is large—from nano-Watts to a few Watts. It can often become cumbersome to manage such a large range of numbers, and calculations become prone to mistakes. Moreover, engineers like to have rounded/integer numbers. Therefore, to make the calculations simple and contain the numbers in a smaller range, the logarithmic transformation of power is used.
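
These conversions are mechanical enough that a few helper functions cover most day-to-day needs. The sketch below follows the conventions used above (20·log10 for voltage gain, dBm referenced to 1 mW); the function names are illustrative.

import math

def gain_db(voltage_ratio):
    # Gain in dB from a voltage ratio, per Equation (16.7)
    return 20 * math.log10(voltage_ratio)

def to_dbm(p_watts):
    # Power in dBm, i.e., referenced to 1 mW, per Equation (16.8)
    return 10 * math.log10(p_watts / 1e-3)

def from_dbm(p_dbm):
    # Inverse conversion: dBm back to watts
    return 1e-3 * 10 ** (p_dbm / 10)

print(round(gain_db(2.5), 1), gain_db(10), gain_db(100))   # ~8 dB, 20 dB, 40 dB
print(to_dbm(1e-3), to_dbm(1.0))                           # 0 dBm and 30 dBm
print(from_dbm(-30))                                       # 1e-06 W, i.e., 1 microwatt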

Gain

Gain represents the linear gain of a device. This is usually specified for amplifiers (power amplifiers, low noise amplifiers, etc.) and mixers. Gain is calculated by providing input signals of known power within the linear range of operation of the device and measuring the output. Usually, a spectrum analyzer or a network analyzer is used to measure gain.

To measure the gain of a device, a single-tone input is used. With the DUT powered up and the single-tone input applied to the DUT input, the output of the DUT is measured using a spectrum analyzer. Gain is measured by taking the ratio of the output and input power. The amplitude of the input is adjusted so that the output of the DUT does not go into the saturation region. For a device, if the expected gain is A and the output power at the saturation point (i.e., the 1-dB compression point, P–1dB, discussed later) is A–1dB dBm, then the input Ain must be less than A–1dB – A. The gain of a DUT is given by measured gain (G) = Aout – Ain, where the input (Ain) and output (Aout) power levels are given in dBm. Network analyzers can directly measure the gain (S21) of a device by applying an input and monitoring the output as a part of S-parameter measurements.

Gain measurements for mixers work differently because mixers translate the input frequencies to a different range of frequencies. The gain of a mixer is measured as the ratio of the amplitudes taken at the output and input frequencies. Because of the inherent frequency translation involved in the process, gain for mixers is often referred to as conversion gain, also expressed in dB.

Harmonics and Third-Order Intercept Point (IP3)

All RF devices are inherently nonlinear. Therefore, for a moderately large signal, the device may show serious nonlinear effects, such as gain compression, desensitization, harmonics, and intermodulation components [Razavi 1997] [Burns 2000] [Cho 2005]. To explain these in greater detail, we first present a typical input-output transfer curve for a device, as shown in Figure 16.7. This can be obtained by sweeping the input power to the DUT and measuring the output power. Because of the DUT behavior, the transfer curve starts to compress at high input power levels, thereby introducing nonlinearity in its output response. To understand the implications of nonlinearity, we fit a polynomial function to the transfer function. In the figure, the solid curve represents actual measurements made on a DUT, and the dotted curve is obtained by fitting a polynomial to its input-output response. A generic form of the polynomial is given by:

Equation 16.9.

y(t) = a0 + a1x(t) + a2x²(t) + a3x³(t) + ···

Figure 16.7. Nonlinear response of a DUT.

where x(t) is the input signal; y(t) is the output; and a0, a1, a2, a3, etc. represent the various coefficients of the fitted polynomial. If a single-tone input (i.e., x(t) = Asin(2πft)) is applied to the device, the output contains not only a tone at frequency f but also other tones at 2f, 3f, etc. These tones are known as harmonics. In the case of a multitone input with N tones, as shown in Equation (16.10), many other tones apart from the harmonics are created close to the input tone frequencies (fi ± fj, 2fi ± fj, ..., where i, j ∈ [1, N]). These tones are known as intermodulation tones. The term intermodulation stems from the fact that these tones are created by interactions between two or more fundamental tones. Harmonics are generated for each tone, and they can be eliminated from the output via proper filtering. On the other hand, intermodulation tones are difficult to remove because of their close vicinity to the input tones. Hence, it is important to measure the amount of distortion introduced by intermodulation during production testing:

Equation 16.10.

x(t) = A1 sin(2πf1t) + A2 sin(2πf2t) + ··· + AN sin(2πfNt)

Usually, the second-order coefficient (a2) for RF devices is very small; the most significant component of the nonlinear terms is the third-order coefficient (a3). For a two-tone input, x(t) = Acos(ω1t) + Acos(ω2t), ignoring the a2 term, the output components at the fundamental and third-order intermodulation frequencies become

y(t) ≈ k1[cos(ω1t) + cos(ω2t)] + k2[cos((2ω1 − ω2)t) + cos((2ω2 − ω1)t)]

where

Equation 16.11.

k1 = a1A,   k2 = (3/4)a3A³

For a more detailed derivation, the reader is referred to Chapter 15.

If we fit a similar model to the response of any DUT, the terms a1 and a3 are usually close in magnitude. From this, one can easily deduce that the term relating to the nonlinear behavior of the DUT (k2) increases three times as fast as the linear term (k1) (on a logarithmic scale, hence the origin of the term “third-order”). If the input power is increased continuously in this manner, at one point the power contributions from the linear and the nonlinear terms would become the same. This is known as the third-order intercept point (IP3) of the DUT. Although this is not observable in practice, as the device saturates long before that, the intercept point can be computed by extrapolating the two terms, k1 and k2, with their corresponding slopes and finding their intersection point. However, finding IP3 by this method is computationally intensive, and an easier way exists to compute it during testing. As Figure 16.8 shows, a two-tone input is applied to the DUT with input power Pin, which is well below the compression region. As the slopes of the fundamental and intermodulation tones bear a ratio of 1:3, the IP3 can be computed as follows (see Figure 16.8):

Equation 16.12.

IP3 = Pin + ΔP / 2

Figure 16.8. IP3 measurement details.

IP3 indicates the input power level for which the linear and nonlinear terms would become equal, and hence the unit for IP3 is dBm (the same as for power). A related specification, known as the 1-dB compression point, is worth mentioning at this point. It is usually expressed as P–1dB (see Figure 16.8). This indicates the input level where the output of the DUT deviates from its linear behavior by 1 dB and has the same units as IP3. The magnitude of P–1dB is much less than IP3, and the relation IP3 ≈ P–1dB + 10 dB (where P–1dB and IP3 are both expressed in dBm) is frequently used as a rule of thumb.

To quantify the harmonics of a DUT, the power levels of the fundamental frequency and its harmonics are reported. A new unit is used for this purpose, known as dBc, which indicates the difference in dB value from a reference—in this case, the fundamental frequency. Therefore, –40 dBc means that the tone is 40 dB down compared to the fundamental. Harmonics have frequencies that are multiples of the fundamental tone frequency and are identified by their multiple. For example, the harmonic at twice the frequency of the fundamental is called the second-order harmonic, and so on. Usually, we are interested in harmonics up to the fifth order.

Both IP3 and harmonics are measured using spectrum analyzers. Modern spectrum analyzers have built-in firmware to directly measure IP3 for a two-tone input. However, for high-frequency devices, the harmonics, specifically the higher-order ones, are far apart from the fundamental frequencies. Such measurements require spectrum analyzers to operate over a wide range of frequencies, thus making them expensive to use during production testing.

To measure IP3, two tones are applied to the DUT with their frequency values close to each other relative to the overall bandwidth of the DUT. For example, if an amplifier operating at 2.4 GHz has a bandwidth of 20 MHz, the tones applied for IP3 measurement could be 2.3999 GHz and 2.4001 GHz, separated by only 200 kHz. In this case, the intermodulation tones will be located at 2.3997 GHz and 2.4003 GHz. The response of the DUT looks similar to Figure 16.9, where ΔP is used to compute IP3 from Equation (16.12). Harmonics are measured by applying a single tone to the DUT within its frequency range of operation and are specified with respect to the applied frequency. For most practical purposes, the second and third harmonics are sufficient; however, some applications require the device to be characterized up to the fifth harmonic for various frequencies within the range of operation.
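
The arithmetic of Equation (16.12) is worth making concrete. The helper below assumes the standard two-tone relation, with ΔP taken as the gap in dB between the fundamental and third-order intermodulation tones on the output spectrum; the numbers are hypothetical readings.

def iip3_dbm(p_in_dbm, delta_p_db):
    # Input-referred IP3 from a two-tone test, per Equation (16.12)
    return p_in_dbm + delta_p_db / 2.0

# Hypothetical reading: two -20-dBm tones, IM3 products 50 dB below the fundamentals
print(iip3_dbm(-20.0, 50.0))   # IP3 = +5 dBm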

Figure 16.9. IP3 computation from output spectrum.

1-dB Compression Point (P–1dB)

As previously mentioned, the device output gain decreases with increasing input signal level because of the nonlinear nature of the DUT. The 1-dB compression point indicates the point where the gain drops below the normal gain by 1 dB. Referring to Equation (16.11), the linear response of the device is k1 = a1A, whereas the overall response equals k1 + k2 = a1A + (3/4)a3A³. At the 1-dB compression point, if the input is A–1dB, the following equation holds (assuming the frequencies of the fundamental and the intermodulation terms are close):

Equation 16.13.

20 log10 |a1A–1dB + (3/4)a3A–1dB³| = 20 log10 (a1A–1dB) – 1

Typically, the 1-dB compression point occurs at –20 to –25 dBm for front-end amplifiers.
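
Solving Equation (16.13) for the input amplitude gives the familiar closed-form result A–1dB ≈ √(0.145·|a1/a3|) for a cubic model. The sketch below simply evaluates that expression; the polynomial coefficients are assumed values for illustration.

import math

def a_1db(a1, a3):
    # Input amplitude at the 1-dB compression point for y = a1*x + a3*x^3
    return math.sqrt(0.145 * abs(a1 / a3))

print(a_1db(a1=10.0, a3=-1.0))   # about 1.2 (same units as the input amplitude)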

Total Harmonic Distortion (THD)

THD is a measure commonly used for RF devices. To measure THD, a single tone is applied to the DUT with an input power within the linear range of operation, and the response is captured using a spectrum analyzer (see Figure 16.10). THD indicates the ratio of total output harmonic power and the output power at the fundamental frequency:

Equation 16.14.

THD = (P2 + P3 + P4 + P5 + ···) / P1

where P1 is the output power at the fundamental frequency and Pn is the output power of the nth harmonic.

Figure 16.10. Fundamental signal and its harmonics.

Usually, only harmonics up to the fifth order are considered, because the relative contribution of harmonics above the fifth order is not significant. However, for a device operating at 2 GHz, the spectrum analyzer needs to be able to measure up to 10 GHz, which is a wideband measurement. Such a wideband measurement is prone to external noise, and as a result these measurements are usually performed as discrete measurements at different frequency bands and require significant test time. As THD is a ratio, it is dimensionless. However, it is often represented in dB to avoid dealing with a very small number.
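
Because the powers are usually read off a spectrum analyzer in dBm, computing Equation (16.14) involves converting to linear power, summing the harmonics, and converting back. The sketch below does exactly that; the harmonic levels are hypothetical readings.

import math

def thd_db(p_fund_dbm, harmonic_dbm_list):
    # THD per Equation (16.14): total harmonic power over fundamental power, in dB
    p_fund = 10 ** (p_fund_dbm / 10)                         # mW
    p_harm = sum(10 ** (p / 10) for p in harmonic_dbm_list)  # mW
    return 10 * math.log10(p_harm / p_fund)

# Hypothetical readings: fundamental at 0 dBm, 2nd through 5th harmonics
print(thd_db(0.0, [-45.0, -50.0, -60.0, -65.0]))   # about -43.7 dB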

Gain Flatness

Until the late 1990s, gain flatness was not an important specification to be measured during production testing. Recent growth in applications demanding wideband RF amplifiers has made this measurement a necessity during production testing. Gain flatness indicates how much the gain of the DUT varies within its band of operation, as illustrated in Figure 16.11. The specification is represented in dB, indicating the difference between the maximum and minimum gain values within the band.

Figure 16.11. Gain flatness of the DUT within the bandwidth.

Noise Figure

Any electronic system inherently generates noise. Similarly, an RF circuit also introduces noise to the input signal. Although this cannot be eliminated completely, the noise contribution of the circuit is minimized through careful design techniques. As explained before, NF is a measure (ratio) of the amount of noise added by the device, and it is measured by comparing the input and output signal-to-noise ratio (SNR) as follows:

Equation 16.15.

NF = 10 log10 (SNRin / SNRout)

There are mainly three different methods to measure NF, and the choice of the measurement method depends on the range of NF measured and the desired accuracy [Maxim 2003]. The methods include (1) an NF meter, (2) gain method using a spectrum analyzer, and (3) the Y-factor method [Agilent Y-2004].

The first and the most common method uses an NF meter to perform NF measurements. It is very accurate and is capable of measuring very low noise levels. Using an external local oscillator (LO) source, it is possible to measure the NF of mixers as well. However, for mixers and other devices having large NF values (NF > 10 dB), this method is not accurate. In such cases, the gain method is useful. The gain method uses the following formula:

Equation 16.16.

NF = Nout – G – Nt   (all quantities expressed in dB or dBm)

where Nout is the measured output noise, Nt is the thermal noise at the input of the DUT, and G is the system gain in dB. Once again, kTB represents the thermal noise at the input at temperature T. The principle of NF measurement remains the same as before. To measure NF, we merely subtract the noise contribution of the input times the gain of the DUT from the total measured output noise power. However, this method has two limitations. First, it assumes that the input is clean and thermal noise is the sole contributor at the input node. Second, as evident from the equation, we must know the gain of the DUT a priori to compute the NF.

The output noise power can be measured using a spectrum analyzer; however, the minimum noise density measurable in a spectrum analyzer is around –150 dBm/Hz. This means for a system with a bandwidth of 25 MHz and a gain of 15 dB (which is typical for most LNAs), the minimum NF that can be measured is:

Equation 16.17. 

Most modern LNA circuits have NF less than 2 dB, which clearly shows that this method is not suitable for measuring small NF values, unless the DUT gain is large enough.
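
Equation (16.17) works this limit out for the stated numbers. The sketch below repeats the calculation on a per-hertz basis, under the simplifying assumption that the DUT output noise density must at least reach the analyzer’s –150 dBm/Hz floor to be measurable.

def min_measurable_nf_db(gain_db, analyzer_floor_dbm_hz=-150.0, kt_dbm_hz=-174.0):
    # Smallest NF resolvable with the gain method when the DUT output noise
    # density must exceed the analyzer's own noise floor (simplifying assumption)
    return analyzer_floor_dbm_hz - kt_dbm_hz - gain_db

print(min_measurable_nf_db(15.0))   # 9 dB with 15 dB of gain: far above a ~2-dB LNA NF
print(min_measurable_nf_db(40.0))   # with 40 dB of gain the floor is no longer a limitation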

The third method is the Y-factor method. First, we introduce the concept of excess noise ratio (ENR) and Y-factor. Both ENR and Y-factor denote the characteristics of a noise source, defined as:

Equation 16.18. 

Pon and Poff are the output power of the DUT when the noise source at the input is turned on and turned off, respectively, and Non and Noff are the measured output noise power of the source when it is turned on and off. N0 is the output noise power of the noise source at room temperature. Noise powers are sometimes expressed as noise temperatures for ease of analysis. In that case, ENR is defined as follows:

Equation 16.19. 

We now describe how ENR and Y-factor can be used to efficiently compute the NF of the DUT. One can observe that:

Poff = NDUT × kTB × G

and

Equation 16.20. 

Hence, we can write:

Equation 16.21. 

Rearranging Equation (16.21), we derive the NF of the device as follows:

Equation 16.22.

F = ENR / (Y – 1)

ENR and Y-factor are usually given in dB units, so they must be converted to ratio values before using Equation (16.22). Hence, another form of NF using ENR and Y-factor is given as:

Equation 16.23.

NF = ENRdB – 10 log10 (Y – 1)

The Y-factor method can be used to measure a wide range of noise figures and is commonly used to characterize a large variety of devices.
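
A quick numeric sketch of the Y-factor calculation is given below, assuming the standard relation F = ENR/(Y – 1) with ENR and Y as linear ratios. The ENR of the noise head and the two measured output powers are hypothetical values.

import math

def nf_from_y_factor(enr_db, p_on, p_off):
    # NF via the Y-factor method: F = ENR / (Y - 1), then converted to dB
    enr = 10 ** (enr_db / 10)
    y = p_on / p_off
    return 10 * math.log10(enr / (y - 1))

# Hypothetical measurement: 15-dB ENR noise head, output powers in milliwatts
print(nf_from_y_factor(15.0, p_on=2.2e-6, p_off=1.0e-7))   # about 1.8 dB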

A related topic of interest is the method to compute the NF of cascaded devices. This is extremely important for RF receiver front-end characterization. For such systems, as illustrated in Figure 16.12, the total output noise is given by:

Equation 16.24. 

Figure 16.12. Computing noise figure of cascaded stages.

where the gain and NF values for the ith stage are given by Gi and Ni, respectively. Also, the output signal is given by:

Equation 16.25. 

Equations (16.24) and (16.25) can be used to compute NF for a cascaded system:

Equation 16.26. 

This equation can be reduced to:

Equation 16.27. 

A more general equation can be derived for larger systems as follows:

Equation 16.28.

Ntotal = N1 + (N2 – 1)/G1 + (N3 – 1)/(G1G2) + ··· + (Nn – 1)/(G1G2 ··· Gn–1)

where the gains Gi and noise factors Ni are expressed as linear ratios (not in dB).
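
The cascade formula in Equation (16.28) is easy to evaluate for a receiver front end. The helper below takes (gain, NF) pairs in dB in signal order; the LNA and mixer numbers are hypothetical.

import math

def cascade_nf_db(stages):
    # Cascade noise figure, Equation (16.28); stages are (gain_dB, NF_dB) in signal order
    f_total, g_prod = 0.0, 1.0
    for i, (g_db, nf_db) in enumerate(stages):
        f = 10 ** (nf_db / 10)
        f_total += f if i == 0 else (f - 1) / g_prod
        g_prod *= 10 ** (g_db / 10)
    return 10 * math.log10(f_total)

# Hypothetical front end: LNA (15-dB gain, 2-dB NF) followed by a mixer (10-dB gain, 8-dB NF)
print(cascade_nf_db([(15.0, 2.0), (10.0, 8.0)]))   # about 2.4 dB: the first stage dominates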

Sensitivity and Dynamic Range

Sensitivity of the DUT is defined as the minimum signal level that the DUT can detect and operate on with an acceptable level of SNR. This depends on various factors, such as bandwidth, noise figure, and noise power at the input port of the DUT, and can be derived from Equation (16.15):

Equation 16.29.

Sin|min = F × SNRout × PRs,   with PRs = kTB   (all quantities as linear ratios)

or

Equation 16.30.

Sin|min (dBm) = –174 dBm/Hz + 10 log10(B) + NF + SNRout   (with NF and SNRout in dB, thermal noise referenced to 290 K)

where Sin|min is the sensitivity of the DUT, SNRout is the desired SNR of the DUT, Rs is the input source impedance with noise power PRs, and B is the DUT bandwidth.
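
As a numeric illustration of Equation (16.30), the sketch below computes the sensitivity of a hypothetical receiver, assuming the input noise is purely thermal (kTB at 290 K).

import math

def sensitivity_dbm(nf_db, bandwidth_hz, snr_out_db, kt_dbm_hz=-174.0):
    # Minimum detectable input level per Equation (16.30)
    return kt_dbm_hz + 10 * math.log10(bandwidth_hz) + nf_db + snr_out_db

# Hypothetical receiver: 6-dB NF, 1-MHz channel bandwidth, 10-dB output SNR required
print(sensitivity_dbm(6.0, 1e6, 10.0))   # about -98 dBm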

A related term, dynamic range, represents the ratio of sensitivity and the maximum signal level that the DUT can handle. RF devices typically have high dynamic range, often in excess of 80 dB. Hence, the test system also needs to be capable of performing measurements over such large ranges.

Local Oscillator Leakage

In mixers, a local oscillator (LO) signal is provided that either up-converts or down-converts the input signal. Consider the case of up-conversion, where the input signal is at an intermediate frequency, fIF, and the applied LO signal is at a frequency of fLO. The up-conversion mixer generates the RF frequencies that are located in the output signal at fLO – fIF and fLO + fIF, also known as mixer sidebands. However, poor isolation between the mixer ports may cause part of the LO signal energy to propagate (leak) to the output port. Hence, an additional tone at fLO appears in the output signal (circled in Figure 16.13). For RF systems, the difference between the RF and LO frequencies is often small, and hence this unwanted LO signal cannot be eliminated by simple filtering. For this reason, mixers are specified for their LO leakage, which is usually 20 to 30 dB below the mixer sidebands. This is tested using a spectrum analyzer during characterization but is rarely measured during production testing.

Figure 16.13. Leaked LO signal within the DUT signal bandwidth.

Phase Noise

Mixers need a clean LO signal to reliably perform the frequency translation operation. Usually, LO signals are single-tone, except for a few advanced communication protocols where the LO is modulated. Because the LO acts as a reference for frequency translation, it is expected to be stable in terms of both frequency and amplitude. However, all such signals generated from oscillator circuits are nonideal; they exhibit variations in both amplitude and phase/frequency:

Equation 16.31.

Videal(t) = A sin(2πfct)
V(t) = A[1 + α(t)] sin(2πfct + m sin(2πfmt + θm) + θrandom)

Equation (16.31) shows both an ideal signal and a signal with amplitude and phase noise; α(t) denotes the amplitude modulation part, whereas m and θm denote the phase modulation part, and θrandom represents the random phase noise portion of the signal.

The origin of such modulations can be attributed mostly to the power supply noise, flicker noise or 1/f noise, and thermal noise inherent to the system. The phase modulation term and the random term, together known as phase noise, cause the oscillator frequency to shift with time. This term is often described statistically because of the inherent randomness involved in its origin. For RF systems involving oscillators, it is extremely important to measure phase noise to obtain the desired performance from the system. Chapter 15 elaborates on this topic in greater detail.

Phase noise is measured using a spectrum analyzer. Most modern spectrum analyzers have built-in options to measure phase noise. Phase noise is represented in dBc/Hz, which indicates the noise level below the fundamental frequency of the oscillator within a 1-Hz bandwidth at a specific frequency offset. For example, –90 dBc/Hz @ 10 kHz indicates that the average noise power at an offset of 10 kHz from the fundamental tone is 90 dB below the carrier power. If the carrier power is 10 dBm, the mean noise power measured in a 1-Hz bandwidth at a 10-kHz offset from the carrier will be –80 dBm.
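
The dBc/Hz bookkeeping is shown below as a small helper that converts a phase noise reading to absolute power, assuming the noise density is flat over the (small) bandwidth considered; the carrier power and offset values repeat the example above.

import math

def noise_power_dbm(carrier_dbm, phase_noise_dbc_hz, bandwidth_hz=1.0):
    # Absolute noise power at a given offset, assuming flat noise density over the bandwidth
    return carrier_dbm + phase_noise_dbc_hz + 10 * math.log10(bandwidth_hz)

print(noise_power_dbm(10.0, -90.0))        # -80 dBm in a 1-Hz bandwidth
print(noise_power_dbm(10.0, -90.0, 1e3))   # -50 dBm integrated over a 1-kHz bandwidth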

Adjacent Channel Power Ratio

Another important specification related to the nonlinearity and the out-of-band noise of the devices is ACPR. Typically, for cellular communication, modulated data are transmitted in specific frequency bands, also known as channels. It is thus imperative that the signal in one channel does not leak to the adjacent frequency channels and that the out-of-band phase noise of the device will not degrade the signal in the adjacent channel. However, when a modulated signal is fed to an amplifier, some intermodulation frequencies fall into the adjacent bands because of the nonlinearity of the device. The ratio of the signal energy falling into the adjacent bands to the total energy within the desired band is called ACPR of the DUT [RS 1EF40-1998]:

Equation 16.32.

ACPR = Eadj / Etotal

where Eadj is the energy in the adjacent band, and Etotal is the total energy of the modulated signal. As the computed value is a ratio of two terms, it is usually represented in dB. Modern spectrum analyzers are capable of directly measuring ACPR using the built-in firmware. Depending on the communication protocol and the modulation technique, ACPR can be automatically computed for various standards. For production testing, ACPR is usually measured with a multitone input given to the DUT while measuring the amount of power in the sidebands. The spectrum of a DUT output response that is used to measure ACPR is shown in Figure 16.14. For some communication standards, ACPR is measured using a pseudo-random bit stream from the baseband processor while measuring the total leaked power in adjacent bands.
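
Since the two energies are normally obtained by integrating the output spectrum over the adjacent and desired bands, the computation itself is a single ratio. The sketch below assumes those band powers have already been integrated (the milliwatt values are hypothetical) and reports the result in dB.

import math

def acpr_db(adjacent_band_mw, total_in_band_mw):
    # ACPR per Equation (16.32), expressed in dB
    return 10 * math.log10(adjacent_band_mw / total_in_band_mw)

# Hypothetical integrated powers read off the output spectrum
print(acpr_db(adjacent_band_mw=2e-5, total_in_band_mw=1.0))   # -47 dB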

Figure 16.14. ACPR measurement from DUT output spectrum.

System-Level Specifications

A typical RF transceiver is tested for many other specifications apart from the ones discussed earlier. Usually, these specifications are targeted at system-level performance metrics such as the clarity of the received modulated signal, quality of service (QoS), etc. A few of these specifications are discussed next.

I-Q Mismatch

Modern RF communication systems employ complex modulation schemes in which data bits are carried on two carriers at different phases, in-phase (I) and quadrature (Q), which are added to increase the effective data rate and efficiency of the modulation process. However, in all practical systems, such modulation introduces imbalances between the amplitude and phase of the two paths, and this often leads to errors during demodulation. This phenomenon is known as I-Q mismatch. In a system, I and Q are usually represented as follows:

I(t) = AI cos(ωt)

and

Equation 16.33.

Q(t) = AQ sin(ωt)

Typically, AI and AQ are kept the same for both channels. However, because of mismatch between the two channels, the modulations often have amplitude and phase imbalances that appear as:

I(t) = αIA cos(ωt) + βI

and

Equation 16.34.

Q(t) = A sin(ωt + θQ) + βQ

where the errors in amplitude and phase have been assumed to be lumped in one of the channels. The amplitude error is represented as αI and the phase error is represented as θQ. The DC offsets in I and Q channels are βI and βQ, respectively. Usually, the offsets can be easily removed by long-term averaging for each channel. However, the mismatches in amplitude and phase are hard to remove and must be characterized carefully for the DUT to operate reliably. I-Q mismatch contributes significantly to the EVM and BER (discussed later) of a DUT.

To measure the mismatch values, we need an ideal source that can phase lock to the signal that we need to measure. For such communication-based measurements, where modulated data need to be transmitted, a spectrum analyzer is not sufficient. For this reason, amplitude and phase imbalance values are usually not measured or specified during production testing because of the long test times involved and the expensive test hardware required to perform such measurements. Therefore, this is primarily a bench measurement. In most cases, a communication analyzer [Agilent Rx-2002] with the required modulation/demodulation capability is used for this measurement. Such analyzers can provide a trigger to start the modulation and can demodulate the data from the DUT to detect the frames. Usually, many frames of modulated data are looped through the device to estimate the specification values.
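
One simple way to visualize the effect of the lumped errors in Equations (16.33) and (16.34) is to distort ideal constellation points directly. The mapping used below (gain error on the I path, phase error rotating part of Q into I) is only one possible interpretation; the exact constellation-domain effect depends on the modulator architecture and sign conventions, and the error values chosen are arbitrary.

import math

def apply_iq_imbalance(i, q, alpha=1.05, theta_deg=5.0, beta_i=0.0, beta_q=0.0):
    # Approximate constellation-domain effect of a gain error (alpha) on the I path
    # and a phase error (theta) on the Q local oscillator, plus DC offsets
    theta = math.radians(theta_deg)
    i_out = alpha * i - q * math.sin(theta) + beta_i
    q_out = q * math.cos(theta) + beta_q
    return i_out, q_out

for sym in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:   # ideal QPSK constellation points
    print(sym, "->", apply_iq_imbalance(*sym))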

Error Vector Magnitude

EVM is one of the most important specifications measured to assess the quality of transmission of a complete system. EVM is measured for phase-modulated systems such as quadrature phase shift keying (QPSK) and 16-quadrature amplitude modulation (16-QAM). It represents the difference between the actual and ideal (modulated) signals [Agilent EVMb-2000].

To measure EVM, all the symbols obtained at the demodulator output for the received waveform are compared against the ideal symbol locations [Agilent EVMa-2005]. The root-mean-square (RMS) sum of the error vectors (which includes the phase error values) is then used to compute EVM over a set of N symbols. As Figure 16.15 shows, for each symbol the error vector is the vector difference between the symbol decision produced by the demodulator and the corresponding ideal symbol. EVM for N symbols can be calculated as shown in Equation (16.35):

Equation 16.35. 
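
A minimal sketch of the RMS EVM computation is given below. The normalization (error power relative to the ideal constellation power, reported as a percentage) is one common convention; standards differ in the exact reference used, so treat the numbers as illustrative.

import math

def evm_percent(ideal, measured):
    # RMS EVM over N symbols, normalized to the ideal constellation power
    err = sum(abs(m - i) ** 2 for i, m in zip(ideal, measured))
    ref = sum(abs(i) ** 2 for i in ideal)
    return 100.0 * math.sqrt(err / ref)

ideal = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]                        # QPSK reference symbols
measured = [1.05 + 0.9j, 0.95 - 1.1j, -1.1 + 1.0j, -0.9 - 0.95j]  # hypothetical decisions
print(evm_percent(ideal, measured))                               # roughly 8 percent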

Figure 16.15. Computation of EVM from constellation points.

Typical EVM plots for QPSK modulation are shown in Figure 16.16. By looking at constellation plots, one can determine the origin of the errors in the device caused by I-Q modulator amplitude/phase imbalance, or phase noise.

Figure 16.16. Smeared constellation for QPSK for various I-Q mismatch effects: (a) amplitude imbalance (AI > AQ), (b) phase imbalance (θQ ≠ 0), and (c) noise (phase noise + random noise) [refer to Equations (16.33) and (16.34)].

Measuring EVM requires test equipment that can demodulate the signal and handle time synchronization for data extraction. Moreover, a large number of data bits are transmitted to measure EVM, which incurs a large test time. Hence, EVM testing is generally not performed during production. However, it is a mandatory test that is performed during the characterization phase. Modern spectrum analyzers and vector signal analyzers (VSA) with demodulation capabilities can measure EVM. Other techniques for EVM measurements have also been proposed to reduce the overall test time [Halder 2005].

Modulation Error Ratio

The SNR of a digitally modulated signal is called the modulation error ratio (MER). SNR is generally used in the context of analog signals, whereas MER is defined for digitally modulated signals only. It is usually expressed in dB. For N symbols, MER is defined as:

Equation 16.36.

MER = 10 log10[ Σj=1..N (Itj² + Qtj²) / Σj=1..N ((Irj − Itj)² + (Qrj − Qtj)²) ] dB

where Irj and Itj are the received and transmitted in-phase components, and Qrj and Qtj are the quadrature counterparts, respectively.
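A minimal sketch of the MER computation in Equation (16.36) is shown below; the simulated symbol values and error level are illustrative assumptions:

import numpy as np

# Sketch of the MER computation in Equation (16.36).
# I_t, Q_t are transmitted (ideal) components; I_r, Q_r are received ones.
rng = np.random.default_rng(1)
N = 1000
I_t = rng.choice([-1.0, 1.0], N)
Q_t = rng.choice([-1.0, 1.0], N)
I_r = I_t + 0.05 * rng.standard_normal(N)   # received = transmitted + error
Q_r = Q_t + 0.05 * rng.standard_normal(N)

signal_power = np.sum(I_t ** 2 + Q_t ** 2)
error_power = np.sum((I_r - I_t) ** 2 + (Q_r - Q_t) ** 2)
mer_db = 10 * np.log10(signal_power / error_power)   # Equation (16.36)
print(f"MER = {mer_db:.1f} dB")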

Bit Error Rate

The BER test indicates how many bit errors occur when a certain number of bits (usually a pseudo-random sequence) is passed through a network system. BER indicates the fidelity of data transmission through an entire network or a subsystem of the network. In addition, BER measurements serve as the basis for quality of service (QoS) agreements between network end users and service providers [RS 7BM03-2002].

A bit error occurs when the receiver circuitry incorrectly decodes a bit. There are many possible causes of bit errors, such as low signal power, noise within the system, phase noise of oscillators and associated jitter, crosstalk, and EMI from radiated emissions, to name a few. A BER tester, also known as a BERT, is used to measure BER. The BERT contains a pseudo-random pattern generator and an error counter that counts the number of bits in error by comparing the received bits with the reference bits. Hence, BER is defined as:

Equation 16.37.

BER = Nerr / N

where Nerr is the number of error bits, and N is the total number of bits. The obvious questions are how long bits should be transmitted and at what bit error count the test should be stopped. The rule of thumb is that at least 100 error bits are needed for the test process to be deemed reliable. Therefore, if the BER of a system is expected to be 10^–6, which means that 1 in every 1 million bits is erroneous, then we need to transmit at least 100 million bits to get a reliable measure of BER for that system. Consequently, the lower the BER of the system, the longer it takes to measure. During production testing, it might take hours, even days, to characterize and measure the BER of the system reliably.

To mitigate this problem, another technique, known as the Q-factor method, is used to quickly estimate BER. With this technique, instead of comparing bits, histograms of the amplitudes of the two logic levels are plotted, and a Gaussian curve is fitted to each histogram. The mean and standard deviation of these two histograms can be found by moving the threshold for error detection. From these means and standard deviations, the Q factor can be determined as follows:

Equation 16.38.

Q = |μ1 − μ2| / (σ1 + σ2)

where μ1 and μ2 are the means, and σ1 and σ2 are the standard deviations, of the two logic levels. From the Q value, the BER can be directly obtained as follows:

Equation 16.39.

BER = (1/2) erfc( Q / √2 )

This method is valid under the assumption that the error distributions are Gaussian. This might not be the case if periodic patterns are present. Various other methods for efficiently measuring BER have been proposed, such as those based on introducing a limiting filter in loopback test mode or rotating constellation points for efficient test [Bhattacharya 2005].
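The following sketch illustrates the 100-error-bit rule of thumb together with Equations (16.37) through (16.39); the level means and standard deviations used here are arbitrary example values:

import math

# Sketch of the BER relations in Equations (16.37)-(16.39) and the
# "at least 100 error bits" rule of thumb from the text.

def bits_needed(expected_ber, min_errors=100):
    """Number of bits to transmit to observe min_errors errors on average."""
    return int(min_errors / expected_ber)

def q_factor(mu1, sigma1, mu2, sigma2):
    """Q factor from the fitted Gaussian logic levels (Equation 16.38)."""
    return abs(mu1 - mu2) / (sigma1 + sigma2)

def ber_from_q(q):
    """BER for a given Q under the Gaussian assumption (Equation 16.39)."""
    return 0.5 * math.erfc(q / math.sqrt(2))

print(bits_needed(1e-6))                              # 100 million bits for BER = 1e-6
q = q_factor(mu1=1.0, sigma1=0.08, mu2=0.0, sigma2=0.06)
print(q, ber_from_q(q))                               # Q ~ 7.1, BER very small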

Structure of RF Systems

RF designers usually employ the well-known super-heterodyne architecture for implementation because of its subtle advantages over other architectures. However, certain applications, such as pagers, use the less-popular homodyne architecture to balance the various requirements of the system. A typical RF system can take a signal from a baseband unit, up-convert to a higher frequency suitable for transmission, further amplify the signal and transmit via an antenna, or do the reverse (i.e., receive the signal from antenna and down-convert to baseband frequencies). In addition to transmit and receive functions, RF systems usually consist of oscillators/frequency synthesizers to synchronize data transmission between the transceiver pair.

Figure 16.17 shows block diagrams of the super-heterodyne and homodyne transceiver architectures [Razavi 1997]. Although various architectures are available to build a system, RF system designers typically use one of these two architectures for realizing a transceiver. The choice of architecture is mainly driven by the dynamic range, NF, and linearity of the complete system. Before going into the details of the architectures, it is useful to discuss the problem of image rejection in RF receivers. Assume, for example, a receiver working at 900 MHz and a down-converted frequency, also known as the intermediate frequency (IF), of 70 MHz; the oscillator would then work at 970 MHz. The oscillator could also work at 830 MHz, but this frequency is usually not used, as it would require a higher tuning range from the oscillator (we will come back to this later). If the input of this receiver has a signal at 1040 MHz (970 MHz + 70 MHz), the receiver would also down-convert it to 70 MHz. This would cause interference with the original signal; the unwanted signal is known as the image signal. Often, the image is removed by filtering the input signal before down-conversion (the first filter after the receiver antenna in Figure 16.17a); however, a very low IF would require a very high-Q filter at the RF frequency, which is often difficult to implement. Therefore, the IF is selected to be as high as practical to ease the front-end design of the receiver. It is common in communication systems to use the same oscillator for both the receiver and the transmitter.

Figure 16.17. Various RF transceiver architectures: (a) super-heterodyne and (b) homodyne or direct conversion.

In receiver design, the oscillator frequency is often chosen to be higher than the RF frequency. Let us assume that the receiver must cover an RF band 20 MHz wide centered at 400 MHz (390 to 410 MHz), with a fixed IF of 70 MHz. In this case, two possible oscillator bands can be used, centered at 330 MHz and 470 MHz. The metric we are trying to reduce is the tuning ratio of the oscillator (its highest frequency divided by its lowest). The high-side oscillator needs to vary from 460 to 480 MHz, a tuning ratio of about 1.04, whereas the low-side oscillator would need to tune from 320 to 340 MHz, a ratio of about 1.06. This ratio directly translates to chip area and, thus, it is better to choose an architecture with a high-side oscillator [Lee 2003].
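The following short sketch reproduces the image-frequency and tuning-ratio arithmetic from the two preceding paragraphs:

# Sketch of the image-frequency and tuning-ratio arithmetic discussed above.

# Image frequency for the 900-MHz example with high-side LO injection
rf, f_if = 900e6, 70e6
lo = rf + f_if                    # 970 MHz
image = lo + f_if                 # 1040 MHz also down-converts to 70 MHz
print(f"LO = {lo/1e6:.0f} MHz, image = {image/1e6:.0f} MHz")

# Tuning ratio for a 20-MHz-wide band centered at 400 MHz with a 70-MHz IF
rf_lo, rf_hi = 390e6, 410e6
high_side = (rf_lo + f_if, rf_hi + f_if)   # 460-480 MHz
low_side = (rf_lo - f_if, rf_hi - f_if)    # 320-340 MHz
print("High-side tuning ratio:", high_side[1] / high_side[0])   # ~1.04
print("Low-side tuning ratio:", low_side[1] / low_side[0])      # ~1.06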

Before comparing the architectures, it is also worthwhile to discuss the various test access ports available for these architectures. Direct-conversion architectures usually have limited access to internal nodes; the only nodes accessible are the RF port connecting to the antenna/duplexer and the baseband port. In most cases, the LO is also internal to the DUT, implemented via a frequency synthesizer. The tuning of LO can be done only via the digital interface, but access to the RF signal within the DUT is usually unavailable. Super-heterodyne, on the other hand, allows implementation of external filters, and hence the RF signals at the output ports of the mixer, LNA and PA, are generally accessible. This allows testing of individual specifications of these blocks and, at the same time, characterization of the complete system for end-to-end specifications.

The architectures illustrated in Figure 16.17 pose different sets of advantages and disadvantages. The super-heterodyne architecture can employ an image-rejection scheme that uses two LO signals in quadrature to cancel the image; this modified version is known as the Weaver architecture. Direct conversion does not face the image-rejection problem, because the signal and LO are at the same frequency, which relaxes the filtering requirements on the receiver. Also, the direct-conversion architecture does not require I-Q matching as accurate as that needed by the image-reject super-heterodyne.

However, direct conversion has certain limitations compared to super-heterodyne. Direct conversion is prone to producing large DC offsets from small nonlinearities present in the device. Even-order distortion products tend to show up as DC offsets at the output of the receiver, which cannot be removed by simple AC coupling. This forces designers to use differential signaling, thereby doubling the power requirements of the system. Also, such architectures need isolation between the LO port and the mixer input, as LO leakage appears as a DC signal at the output of the receiver. Finally, 1/f noise is of greater concern in direct conversion, as it increases the settling time of the receiver and may well cause loss of packets for standard modulation schemes.

With the limitations posed by the direct-conversion architecture, super-heterodyne has gained widespread popularity in modern RF devices and will continue to remain popular until a new innovative, low-power architecture is proposed.

Test Hardware: Tester and DIB/PIB

In this section, we discuss ATE and the device interface boards (DIB) that are used to interface the DUT to the tester. Figure 16.18 shows a generalized block diagram of a tester. Usually, testers perform a series of tests under the control of a test program developed for production test. Any tester usually has three main blocks: source unit, capture unit, and clock and synchronization unit. Next, we discuss each of these submodules of the tester.

Figure 16.18. A typical tester with RF options.

  • Power supply (PS) modules. Testers have accurate power supply modules that can source/sink large voltages/currents. Typically, these are digitally controlled with 8 to 12 bit resolution. The voltage sourcing capability of testers can go up to 60 to 80 V, while current capabilities can be 5 to 10 A per channel.

  • Digital channel (DCH). A tester usually has a large number of digital channels (as many as 256 to 512 channels). The digital channels provide digital signal input to the DUT through a relay matrix (discussed later). The tester computer and the clock unit usually govern the speed of operation. Usually, the speed varies from 1 to 500 MHz, based on the purpose of the tester.

  • Analog channel (ACH). Each tester has mixed-signal capabilities provided through analog channels. The analog channels provide sinusoids up to a frequency of 100 to 200 MHz as limited by the tester digitizer unit. The sinusoidal signals are usually generated by using a DAC and corresponding filtering stages. For this reason, the number of analog channels is usually limited compared to digital channels because each channel needs to have DAC/ADC and source/capture memory associated with it.

  • Arbitrary waveform generator (AWG) channel. Apart from digital inputs, testers have an AWG unit that can source any arbitrary waveform. The stimulus to be applied to the DUT is programmed into the source memory by the tester program and AWG sources it to the DUT via the relay matrix.

  • RF channel. RF testers have an RF signal generator unit apart from the digital and AWG channels. However, the RF channels are not routed through the relay matrix, in order to maintain impedance matching (see Figure 16.18). It is important to maintain a low-loss connection from the RF source to the DUT. Also, test sockets need to be carefully chosen to reduce parasitic impedance and maintain a low reflection coefficient. A typical RF signal generator covers 1 to 8 GHz and can usually source up to 30 dBm.

  • Capture unit. Testers have a capture unit that consists of a digitizer and an RF measurement unit. The digitizer captures all the digital and analog outputs and stores them in the capture memory. Depending on the type of measurement to be performed on the DUT, the DC meter or the time measurement unit carries out computations on the captured waveforms. The RF measurements are usually performed in a separate dedicated unit. Although not shown in the figure, there is a digitizer block within the RF measurement unit that converts the results to a digital format.

  • Clock and synchronization. The clock and synchronization unit controls the signal flow through all the modules in the tester. The clock is derived from the tester CPU or an onboard oscillator/PLL capable of generating a wide range of frequencies.

  • DIB and Probe Interface Board (PIB). The tester and the DUT are interfaced through a DIB or a probe interface board (PIB), depending on the type of test. For wafer test, a PIB is used because the tester interfaces to the device via probes touching the pads on the wafer. However, such contact is pressure limited, and probes often form a Schottky contact with the wafer and become limited in signal capabilities (i.e., frequency and current). Hence, only supply current tests and continuity/leakage related tests are performed using PIBs. To test all the devices on the wafer, a stepper machine moves the wafer chuck to align the next die with the PIB. As the probes are thin and fragile, they are usually fixed with a horizontal chuck.

    DIBs are often sturdier and more versatile than PIBs. Hence, a larger number of tests can be performed using a DIB during production testing, when the part is already packaged. Typically, the DIB is a multilayer printed circuit board (PCB) through which all tester inputs and outputs are connected to the DUT. To reduce noise coupling between the different tester channels, the analog and digital ground planes are kept separate and connected directly beneath the DUT. Furthermore, signal planes are isolated from each other by inserting ground planes between them.

    Various active and passive components, such as resistors, capacitors, buffers, and relays, are soldered onto the DIB and PIB to aid in testing and in switching between various test modes. The performance of the DUT measured by the tester is therefore the combined effect of the DUT and the DIB components. Hence, the DIB components must have small tolerances to produce reliable data and need to be tested for their functionality before production testing. As a result, various diagnostic tests are run on a DIB before using it in production tests.

    A socket is used to interface with the DUT, with various types of sockets available for different classes of devices and package types. Whereas close-lid sockets are commonly used for analog/digital/mixed-signal devices, RF parts require carefully designed and characterized sockets to maintain proper impedance matching from the RF port of the tester to the DUT pin. RF sockets are typically zero-insertion force (ZIF) sockets that press the DUT on the DIB to make reliable contact.

    Most of the tester resources reach the DIB via pogo pins, which are connected to the relay matrix on the tester side. RF connections, however, are made directly to the DIB through SMA/SMB connectors, and most of the RF traces are routed directly on the top layer. The RF traces on the PCB are of the microstrip type, with the width and trace length carefully selected to achieve minimum power loss. As mentioned before, maintaining a 50-ohm match is critical in any RF measurement. This is also true for the DIB, and manufacturers use various techniques, such as time-domain reflectometry (TDR), to check the trace impedances on the DIB.

  • Test program. A test program is a piece of software, based on a high-level language such as C or Pascal, that is developed to interact with the tester through the tester controller CPU. In addition, each tester has its own set of functions/subroutines that aid in the overall test development process. These functions include programming the different components of the tester, test application, and test response data capture and analysis. Another important function of a test program is to set the global clock and synchronize the test process. This is usually done via global clock pin assignments through the tester code.

Repeatability and Accuracy

Any test process must be verified for the accuracy, precision (repeatability), measurement error, and resolution of the test setup. For RF tests, all these factors are extremely important for obtaining reliable measurement data. We next describe each of these factors and discuss its implications for the test process. Refer to Figure 16.19 for the following discussion of the various factors that contribute toward making reliable measurements.

Figure 16.19. Measurement accuracy and various related terms.

  • Measurement error. Measurement errors can typically be classified into two types, systematic and random. Systematic errors appear consistently in consecutive measurements. The test process can be easily calibrated to reduce the effect of such errors. Random errors, on the other hand, are more difficult to remove or reduce. Although there can be innumerable sources of random error, the largest contributors are usually thermal noise and noise injected into the system from the power supply. Although it is theoretically impossible to eliminate random errors in measurements, filtering is commonly used to mitigate their effects. In addition, DSP techniques such as smoothing/averaging can be used to minimize random errors. In certain cases, measurement error is introduced as a result of the human factor involved in the measurement system. A common example is the use of an improper resolution bandwidth setting in a spectrum analyzer, which can easily result in a 0.5- to 1-dB error in the measurement.

  • Resolution. Another well-known source of error in a measurement system is the resolution of the data converter system. Data converter systems usually consist of ADCs and DACs. As in all other testers, RF testers digitize analog signals with ADCs before the tester can process the measurements. In the process of digitization, quantization error is introduced into the measurement system. For an N-bit digitizer with a full-scale voltage of V volts, the quantization step is V/(2^N – 1), which bounds the quantization error introduced into the measurement.

  • Accuracy. Accuracy of a test system is defined as the difference between the average of a set of measurements performed under the same test conditions and the ideal outcome, provided the ideal outcome of a test system is known a priori. A test system can consist of a single instrument, a set of instruments, or the tester as a whole. Accuracy is usually affected by systematic variations. Hence, calibration methods can be applied to increase the overall accuracy of the measurement.

  • Repeatability. If the same measurement is repeated many times on a single test setup, then the variation in the measurements is known as the repeatability of the test system. Variations in the measurement values are caused by random noise inherent to the system. Mathematically, repeatability is represented as the standard deviation of the measurement values obtained. In some cases, repeatability is also defined as the variation in measured values over time or different measurement conditions (i.e., temperature and humidity). This indicates the degree of stability of the measurement system.

  • Correlation. The variation in the measured data obtained from different test setups is called the degree of correlation. The test setups may consist of different pieces of the same hardware, software, or both. Different test setups will exhibit different means and variances in the measured data. Obtaining high correlation of measured values from various test setups in a production test environment is extremely important. A closely related term is reproducibility, which indicates the variation in a particular measurement across different test setups. While correlation is defined over a number of measurements, reproducibility is defined for one measurement set. Both differ from repeatability, which is the variation in measurement data obtained from a single test setup. In an RF testing environment, accuracy and repeatability issues are extremely important for performing reliable measurements, and correlating measurements across various test setups (i.e., instruments, testers, and different versions of test programs) may be necessary to ensure a reliable test and high reproducibility of the test measurement data in a production test environment.

  • Gauge R&R. During production testing, another important factor that is measured is gauge repeatability and reproducibility (GR&R). A study of GR&R provides information on the overall test system performance (including ATE and DIB) by analyzing measurement errors from various sources. Typically, the sources of error are from part-to-part variations, operator variations (various operators performing the same test on different parts), and gauge or equipment variations. In certain types of analysis, interaction between parts and operators can provide additional information about the gauging process. In most GR&R studies, multiple operators collect data from the same devices in random order by performing the same set of measurements. Most studies use two or three ATE/DIB combinations and five to ten parts (assuming the test data collection process is automated). The formula shown in Equation (16.40) is used to compute GR&R.

    Equation 16.40. 

    GR&R = c · σ / (USL – LSL)

    where

    σ = sqrt[ (1/N) Σi=1..N σi² ]

    In the above equation, σi indicates the standard deviation of the measurements for each of the parts or devices, and N indicates the total number of parts tested. USL and LSL are the specification limits associated with the individual test, and the value of c is determined based on the number of parts used in the study. A GR&R value of less than 0.1 indicates very high repeatability and reproducibility of the system, while a value of more than 0.3 is usually unacceptable. Values of GR&R between 0.1 and 0.3 usually indicate that some repair/calibration of the system is needed. (A small computational sketch of this formula is given after this list.)

  • Calibration. Calibration is a method by which test instruments are compared against standard measurements and adjusted accordingly to meet the desired values. Variations affecting accuracy and repeatability cannot be eliminated completely; however, their effects can be minimized using calibration methods. All test instrumentation (i.e., testers and bench instruments) is calibrated at regular intervals. The calibration standards are provided either by a standards body, such as the National Institute of Standards and Technology (NIST), or by the original equipment manufacturer (OEM) itself. Most testers have standards built into them that are stabilized against wide variations in environmental conditions. Although calibration can be applied to both hardware and software, software calibration is more popular because of its ease of implementation; it essentially adjusts the different weights for measurements and applies them during runtime. Similar calibrations are also performed for DIB boards during production testing.

    It is worthwhile to mention that during manufacturing, as the ATE tests thousands of parts continuously, the tester performance is likely to degrade over time. As a result, the tester might erroneously pass defective parts, leading to huge revenue losses. To avoid this outcome, testers are also calibrated frequently during production testing, using golden devices (i.e., devices that are known to be good) interfaced through customized calibration boards to quickly diagnose possible faults in the tester.
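As noted in the Gauge R&R item above, the following is a small computational sketch of Equation (16.40); the per-part standard deviations, specification limits, and the multiplier c are illustrative assumptions (in practice, c is chosen based on the number of parts and the study design):

import numpy as np

# Sketch of the GR&R computation in Equation (16.40).
sigmas = np.array([0.021, 0.018, 0.025, 0.020, 0.019])  # per-part std devs (example)
usl, lsl = 10.5, 9.5                                     # spec limits (example)
c = 5.15                                                 # example multiplier

sigma = np.sqrt(np.mean(sigmas ** 2))       # pooled standard deviation
grr = c * sigma / (usl - lsl)               # Equation (16.40)
print(f"GR&R = {grr:.3f}")                  # < 0.1 good, > 0.3 unacceptable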

Industry Practices for High-Volume Manufacturing

The test process is well defined in industry and starts at an early stage of product development. After the device specifications are defined, a test plan is usually formed if there are a large number of specifications. Production test, also known as high-volume manufacturing test, has two parts: (1) choice of tester and development of tester programs for that specific tester, and (2) design and fabrication of the DIB/PIB. The choice of tester needs to be carefully made because the tester needs to be capable of performing all the tests for that device during high-volume manufacturing.

As the device is designed and fabricated, a schematic of the DIB is first created. This schematic contains the various test configurations on the same board, and the different tests are performed by switching the circuit configuration through relays. Hence, a successful DIB design is key to performing tests reliably during production. The schematic goes through thorough design reviews to ensure proper functionality of the DIB. Once this procedure is completed, the test program is developed and the board is fabricated.

The first silicon usually goes to a group of engineers who perform characterization tests on the device. Characterization tests use accurate bench test equipment to test the part. In addition, this process is time consuming, as little automation is involved during characterization. Therefore, some complex tests, such as the Iout-versus-Vin curve, the variation of bandgap output voltage over temperature, or BER over a range of SNR values, are performed only during characterization, as conducting these tests during production would be impossible because of time constraints and limited hardware resources.

After the device passes the characterization tests, the next step is verifying the test program and the DIB on a tester. This usually involves debugging the test program and finding any possible mistakes within the DIB. Another important aspect is test scheduling, which controls the flow of the tests to obtain optimal performance in a minimum amount of time. Moreover, the idea of scheduling is extended to test instruments, even testers, where the test program is arranged to best utilize the instruments with the least idle time.

Next, the test is correlated between testers and DIBs. To do this, the same set of parts is tested using the different DIB boards manufactured for that device and on different testers. This gives an idea of the amount of variation that will be observed during high-volume manufacturing. In addition, tests are performed to measure bin-to-bin correlations. Binning is a process of separating devices with different performance. One example is binning the IDD leakage failures together, as these devices are often expected to fail nonlinearity tests as well. These correlation values give important feedback to the designers and the fabrication engineers about the quality of the manufacturing process. Once the test program runs properly on the tester (i.e., the tests give proper measurement data for devices) and the test time is acceptable, the device is released for high-volume testing.

Test Cost Analysis

Test cost is an important factor that drives the overall cost of the device and may reduce the profit margin significantly. Various factors contribute to the test cost. The major contributor to the cost of testing is the ATE or the tester cost. As devices continue to grow more complex, the test capabilities need to be constantly improved. Also, the speed of the tester needs to increase because constant device scaling since the mid-1980s has pushed the device speeds significantly higher. Manufacturers are constantly looking for low-cost testers that can reliably test a complex, high-speed device during high-volume production testing.

Apart from the cost of testers, long test time is a major factor in increased test costs. Typically, test time for wireless devices ranges from a few seconds to a few minutes. During production, when millions of devices are tested, even such apparently small test times can create a bottleneck. A simple example makes the problem more obvious. Suppose the test time required for a device during production is 60 seconds. Then the number of devices that can be tested on one tester is only 1,440 per day (= 24 × 3,600/60). Therefore, testing is normally carried out on many testers in parallel. If the manufacturer devotes 10 testers solely to this example device, then releasing a million devices to the customer will require about 70 days. This clearly shows that a small reduction in test time can increase the throughput significantly. Therefore, there is a constant need in the test community to reduce production test time.
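The following sketch reproduces this throughput arithmetic; the 60-second test time, 10 testers, and 1-million-device target are simply the example values from the text:

# Sketch of the throughput arithmetic in the example above.
test_time_s = 60
testers = 10
target_devices = 1_000_000

devices_per_tester_per_day = 24 * 3600 // test_time_s      # 1440
days_needed = target_devices / (devices_per_tester_per_day * testers)
print(devices_per_tester_per_day, round(days_needed, 1))   # 1440, ~69.4 days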

Aside from the two major factors just discussed, various other factors contribute to the overall test cost. Additional costs come from engineering errors or other human factors. For example, an improperly designed DIB or a bug in the test program can significantly increase the time required to release a product. This can cause the manufacturer to lose significant market share for that product. Such factors can be fatal for small businesses, and the success of the manufacturer relies heavily on the test process.

Key Trends

Having discussed the various aspects that drive test cost, we now point out a few methods that manufacturers adopt to reduce the cost of testing. Among these, the primary concern for semiconductor manufacturers is reducing the cost of the testers themselves. This can be achieved by reducing the amount of resources available on the tester used during production testing; however, this may lead to inefficient testing procedures. Another, more feasible, approach is to use low-cost testers that run slower. Such testers are often inexpensive (~$300 K) and can still perform most of the tests for the device.

The major tester manufacturers for RF and wireless devices include Teradyne, Advantest, Agilent, and Roos, among others. Certain manufacturers are also trying to build in-house testers that can support a majority of the devices they manufacture. For smaller companies, maintaining a tester is usually expensive and, as a result, they rely on external test vendors to test their devices.

Apart from using a low-cost tester, manufacturers try to test as many devices in parallel as possible, known as multisite testing. This can be done in a relatively easy manner for low-frequency devices. However, for RF devices, multisite testing can be difficult, as many issues need to be addressed. As discussed earlier, impedance matching is a big concern for RF measurement; a significant amount of signal may be lost if improper impedance matching exists between the tester output port and the DUT input pin. Therefore, the DIB must be carefully designed to avoid such problems. In case of multisite testing, apart from maintaining impedance matching, it becomes important to consider crosstalk between different sites. The current trend in industry is to use a quad-site DIB for RF devices. However, there is also effort in the industry to test 8 or even 16 sites in parallel. This puts stringent requirements on the tester, such as signal sourcing capabilities for all sites, and may be difficult to obtain in the current generation of testers. Innovative methods are used, such as using a single channel, amplifying it, and then sourcing it to multiple sites using power splitters. This requires careful characterization of the DIB before porting it to the tester. Another issue that becomes extremely important for RF testing is providing a clean power supply to the DUT. Most present-day RF devices rely on a clean supply voltage to obtain a steady bias point. Small deviations in power supply voltage may often lead to a nonreproducible test setup and unreliable test data.

Advances in SOC technology have enabled integration of RF and baseband in a single chip. In most SOCs, the RF section is controlled by internal DSP signals. Hence, the DSP needs to be programmed during testing to sequentially perform the tests during production. This has increased the problems related to testing because the tester may have limited test resources that cannot be allocated for programming the DSP. To address these problems, onboard controllers such as field programmable gate arrays (FPGAs) or complex programmable logic devices (CPLDs) are used to program the DSP within the DUT.

In some cases, SOCs provide limited access to internal RF nodes. Hence, characterization or production testing of embedded modules becomes difficult [Lau 2002]. To ensure proper testing and characterization of such devices, innovative methods in the form of built-in self-test (BIST) and design-for-testability (DFT) techniques—[Dabrowski 2003] [Ferrario 2003] [Sylla 2003] [Yin 2005]—are often used. This involves adding extra circuit components within the DUT to aid in testing. BIST is well known and has been extensively used for digital devices; it is now becoming more popular for mixed-signal and RF devices [Veillette 1995]. RF BIST is still in its infancy, as there is a huge gap between the test and design communities. Considerable research is currently under way to bridge this gap and make RF BIST a reality [Voorakaranam 2002] [Bhattacharya 2004] [Cherubal 2004] [Valdes-Garcia 2005].

Concluding Remarks

As RF technology advances, it is becoming more important to resort to innovative techniques to keep testing costs down. The scope of test engineering is no longer limited to knowledge of test software and tester resources; it has become imperative for engineers to be well versed in system design and modeling so that they can make educated decisions to create an efficient test plan. As test development proceeds, it is necessary to verify the software even before the devices are fabricated. This requires modeling the DUT so that the software can be verified beforehand.

In the nanometer era that we are entering, process variation is greater than ever before. Designers attempt to reduce the effect of this variation for each specification; however, the specification spreads are much larger than those of their earlier counterparts. Hence, the test setup and measurement system need to be robust enough to work over this wider range of specifications.

With wireless devices becoming ubiquitous, the amount of interference that a present-day RF SOC experiences is extremely large compared to earlier devices. Therefore, the devices must be characterized for harsh conditions where overall SNR is low and most of the dynamic range is occupied by the elevated noise floor. Such effects cannot be guaranteed by simulation only and require the manufacturer to apply a complex and elaborate field test to the DUT. Of course, this increases the test costs significantly. To mitigate these elevated test costs, test, design, and system engineers need to work in unison to develop a strategy that makes the devices testable. This may lead to the development of new protocols with advanced coding schemes at higher RF frequencies. One example is the development of 5.8-GHz cordless telephones that replaced the existing 2.4-GHz units. The technology needs to evolve around the needs of the semiconductor industry, which can be from a consumer perspective, relate to manufacturing issues, or involve test requirements. As we move toward third-generation (3G) wireless standards and beyond, new challenges will develop, and we will need innovative test methods for those devices that will test, diagnose, and correct themselves, leading to a new generation of truly adaptive, self-correcting devices.

Exercises

16.1

(Test Instrumentation) In a test setup, the minimum spacing between the tones of a multitone waveform is 100 kHz, and the largest amplitude difference is 10 dB. If the available RBW settings in the spectrum analyzer are 30 kHz, 100 kHz, and 300 kHz with Q of 20, 10, and 5, respectively, which RBW setting would you use to accurately measure the waveform, and why?

16.2

(Test Instrumentation) In a noise figure measurement, three noise measurements are made to determine the NF of a DUT. If the output noise values measured for input noise of –70 dB, –60 dB, and –50 dB are –66.78 dB, –56.32 dB, and –46.91 dB, respectively, what is the NF of the DUT? (Hint: Use least-square fit to estimate NF.) What is the maximum accuracy that can be obtained from this instrument?

16.3

(Circuit-Level Specifications) From Equation (16.9), derive an expression for IP3 in terms of a1 and a3 (assume a0 and a2 = 0).

16.4

(Circuit-Level Specifications) For an instrument, the ENR is given as 1.32, and the measured values of POn and POff are 13.51 dB and 11.3 dB, respectively. What is the NF of the DUT?

16.5

(System-Level Specifications) Measurements made on a receiver system with QPSK modulation show gain mismatch of 0.5 dB and phase mismatch of π/15 radians. What would be the average EVM value measured on a large number of devices?

16.6

(System-Level Specifications) A digital bit stream was measured using a logic analyzer, and the logic levels were captured using a high-speed sampling oscilloscope. The average output high level, VOH, was measured as 1.5 V with a spread of 22%. VOL was 0.32 V with a spread of 31%. What is the range of BER values possible for this device?

16.7

(Repeatability and Accuracy) In a measurement setup, an engineer obtained the following measurements on a DUT. What is the accuracy of the measurement system? What are the possible anomalies present in the data? What can the engineer do to clean the data?

Instance   Measurement     Instance   Measurement
1          10.65064916     21         10.74065448
2          10.31679807     22         10.50663072
3          10.12127354     23         11.91165271
4          10.81673331     24         11.50850651
5          4.08778264      25         10.55634287
6          10.36293711     26         10.17476899
7          10.5445455      27         10.38719247
8          10.63695599     28         11.54113781
9          10.96387893     29         17.72616125
10         11.02340429     30         10.39783598
11         11.29191079     31         10.46899635
12         11.29191079     32         11.62494034
13         11.29191079     33         11.62740999
14         11.29191079     34         11.70090761
15         10.15647102     35         11.00929383
16         10.13851337     36         11.10099202
17         10.19516033     37         10.74165817
18         11.40144598     38         10.27407421
19         11.52970227     39         10.04422281
20         11.34563046     40         10.93843566

16.8

(Repeatability and Accuracy) Explain the differences between accuracy and reproducibility.

16.9

(Test Cost Analysis) In a production line, the test time for a DUT is 60 seconds. The device needs to reach the customer in 60 days with a 200-K volume. What level of parallelization is needed in testing (multisite) if the production testers can be run 16 hours a day?

Acknowledgments

The authors wish to acknowledge Professor F. Foster Dai of Auburn University; Professor Sule Ozev of Duke University; Dr. Pramod Variyam and Tushar Jog of WiQuest; Dr. Qing Zhao, Dr. Ajay Kumar, and Dr. Dong Hoon Han of Texas Instruments; and Rajarajan Senguttuvan of the Georgia Institute of Technology for their careful comments and reviews in preparation of this chapter. We would like to extend our special thanks to Professor Charles E. Stroud of Auburn University for proofreading and help in finalizing the chapter.

References

Books

Introduction

Key Specifications for RF Systems

Industry Practices for High-Volume Manufacturing
