Chapter 5. SIP Test Architectures

Philippe Cauvet, NXP Semiconductors, Caen, France

Michel Renovell, LIRMM–CNRS/University of Montpellier, Montpellier, France

Serge Bernard, LIRMM–CNRS/University of Montpellier, Montpellier, France

About This Chapter

Since the 1980s, we have seen the emergence and rapid growth of system-on-chip (SOC) applications. The same trend has happened in system-in-package (SIP) applications since the mid-1990s. The development of SIP technology has benefited from SOC technology; however, this emerging SIP technology presents specific test challenges because of its complex design and test processes. Indeed, one major difference between an SOC and an SIP is that an SOC contains only one die in a package, whereas an SIP is an assembled system composed of a number of individual dies in a single package. Each die in the SIP can use a different process technology, such as silicon or gallium arsenide (GaAs), and may include radio-frequency (RF) or microelectromechanical systems (MEMS) components. This fundamental difference implies that each bare die must be tested before it is packaged in the SIP. Then, a functional system test or embedded component test can be performed at the system level. The passing bare dies are often called known-good-dies (KGDs).

In this chapter, we first discuss the basic SIP concepts, explore SIP technology’s difference from the SOC technology, and show some SIP examples. We highlight the specific challenges from the testing point of view and derive the assembled yield and defect level for the packaged SIP. Next, various bare-die test techniques to find known-good-dies are described, along with their limitations. Finally, we present two techniques to test the SIP at the system level: the functional system test and the embedded component test. The functional system test aims to check the functions of all dies per their specifications at the same time, whereas the embedded component test tries to detect each faulty die individually. We conclude the chapter with a brief discussion on future SIP design and test challenges.

Introduction

Systems are moving to higher and higher levels of complexity, integrating ever more complex functions under ever more stringent economic constraints. With advances in semiconductor manufacturing technology, SOC technology has appeared as a viable solution to reduce device cost through higher levels of integration. Thanks to the exploitation of the so-called reuse concept, SOC development time remains within reasonable limits despite the increasing complexity. However, the push toward more functionality in a single box requires the integration of heterogeneous devices, which cannot be intrinsically achieved in a single-technology SOC. In this context, the SIP clearly appears as the only viable solution to integrate more functions in an equal or smaller volume.

Cellular handset designs have been one good example of this global trend. Today, a new cellular design must support multiband and multimode features in addition to Bluetooth™ networking, global positioning systems (GPSs), and wireless local area networks (WLANs), not to mention user applications such as games, audio, and video. This has driven the evolution of SIP technology, which has become a viable solution to these highly complex design integration problems in an economical way.

SIP Definition

The SIP concept started with the development of the multichip module (MCM) in the 1990s. MCM was the pioneer technology for integrating several active dies on one common substrate. A typical integration at that time included memories and a processor, but it was not possible to build a complete system in one package because of limited integration capabilities. With its associated system integration benefits, SIP technology has since been adopted rapidly as a packaging alternative to save product cost. In mobile phone applications, the system integrators often face short product life cycles. As a result, they came to the conclusion that integrating existing and available integrated circuits (ICs) into an SIP is much easier than creating new SOC designs from scratch or even than reusing existing intellectual property (IP) cores.

The International Technology Roadmap for Semiconductors published by the Semiconductor Industry Association defines a system-in-package (SIP) as any combination of semiconductors, passives, and interconnects integrated into a single package [SIA 2005]. The definition does not limit SIP to any single technology or integration; it clearly indicates that an SIP can combine different die technologies and applications with active and passive components to form a complete system or subsystem. Consequently, an SIP usually includes logic and memory components, but it may also include analog and RF components. These various components are interconnected by wire-bond, flip-chip, stacked-die technology, or any combination of the above.

The differences between SOC and SIP can be better outlined as follows:

  • The SOC is created from a single piece of substrate (i.e., a single die); the single die is fabricated in a single process technology with a single level of interconnections from the die to the package pads or to the interposer. The SOC uses a single interconnection technology, such as wire-bond or flip-chip.

  • The SIP is created from several different dies (i.e., multiple parts); these dies can come from a broad mix of multiple process technologies, such as CMOS, GaAs, or BiCMOS, with multiple levels of interconnections from die/component to die/component, or from die/component to the package pads or to the interposer. The SIP includes multiple interconnection technologies, such as wire-bond, flip-chip, soldering, and gluing.

In short, we can say that for an SOC everything is single while for an SIP everything is multiple. Figure 5.1 shows an example SIP in a global system for mobile communications (GSM) application where multiple dies, components, and leadframe connections are embedded in a single package.


Figure 5.1. Multiple dies, components, and interconnections on an SIP for GSM application.

SIP Examples

In recent years, many types of SIP have been developed. They differ in the type of carrier or interposer used for holding the bare die (or component) and the type of interconnections used for connecting components. The carrier or interposer can be a leadframe, an organic laminate, or silicon based. Passive components such as resistors, inductors, and capacitors can be embedded as part of the substrate construction, or they can be soldered or epoxy attached on the surface. Active components such as microprocessors, memories, RF transceivers, and mixed-signal components can be distributed on the carrier surface. Another possibility is to stack components on top of each other, called stacked dies. The three-dimensional (3-D) stacked-die assembly can drastically reduce the overall size of the packaged system, resulting in lower package cost and increased component density, which is critical in many applications.

The component interconnections are made from the component to the carrier and from die to die. They can use any type of interconnect technology: wire bonding, flip-chip, or soldering. The final packaged SIP looks like any conventional ceramic-type package, ball grid array (BGA), land grid array (LGA), or leadframe-based package. Figures 5.2 and 5.3 illustrate some examples of carriers and stacked dies. In Figure 5.2a, two dies are glued and interconnected on a leadframe. Figure 5.2b shows an SIP made of active dies and discrete components soldered on a laminate substrate (a micro printed circuit board) and encapsulated in a low-cost package. An example of silicon-based SIP is shown in Figure 5.2c, where three active dies are flipped and soldered on a passive silicon substrate. These three technologies are planar, whereas Figure 5.3 illustrates an SIP using the 3-D stacked die concept.


Figure 5.2. Examples of carrier style: (a) leadframe, (b) laminate, and (c) silicon based.


(Courtesy of [DPC 2007].)

Figure 5.3. Example of stacked components.

SIP products may also be classified according to their development flow:

  • SIP development using off-the-shelf components: the individual dies were designed independently, so no specific design-for-testability (DFT) features for SIP test have been introduced, which in turn may cause difficult test problems.

  • SIP development with application specific components: in this case, test requirements can be considered in the SIP design and test strategy, which may simplify the SIP test significantly.

The SIP offers a unique advantage over the SOC in its ability to integrate any type of semiconductor technology and passive components into a single package. Currently, for wireless systems requiring the use of digital and analog components, the SIP is becoming a favorite choice. The SIP is also a good candidate for the integration of MEMS with circuitry to provide a fully functional system, as opposed to acting as a pure sensor or actuator. The MEMS-integrated SIP can handle sensors for physical entities such as light, temperature, blood/air pressure, or magnetic fields for automotive, biological/chemical, consumer, communication, medical, and RF applications, to name a few.

The SIP actually offers even more advantages including the following:

  • Combining different die technologies (Si, GaAs, SiGe, etc.)

  • Combining different die geometries (250 nm, 90 nm, etc.)

  • Including other technologies (MEMS, optical, vision, etc.)

  • Including other components (antennas, resonators, connectors, etc.)

  • Increasing circuit density and reducing printed circuit board area

  • Reducing design effort by using existing ICs

  • Minimizing risks by using proven existing blocks

  • Improving performance

In the early years, an SIP was only developed and used for high-volume production because of packaging cost concerns. The reduction of packaging cost through the utilization of proven low-cost packaging platform technologies has opened a broad range of applications. However, two points still require particular attention when developing or manufacturing an SIP:

  • Usually an SIP has a longer supply chain and may introduce additional planning risks and dependencies because of external suppliers for the carrier and the higher complexity of the assembly process.

  • The design and layout of carriers are often at the leading edge of substrate fabrication technology, which may result in low yield of the carrier production and a higher carrier cost.

Yield and Quality Challenges

The fabrication and test flow of a standard SIP is represented in Figure 5.4. The SIP under consideration is basically composed of n different dies. Figure 5.4 concentrates on die fabrication (Figure 5.4a) and SIP assembly (Figure 5.4b); passive components and carrier process are omitted. This flow can be seen as being composed of two fundamental steps referred to as the die process and the SIP process.


Figure 5.4. SIP creation and test: (a) fabrication of die #i and (b) assembly of the SIP.

The die process represents the flow for all dies. Starting from a given material (silicon or other), a number of dies, given by Fi, are fabricated using a technological process. Among these Fi fabricated dies, a number of dies, Fci, are correct while a number of dies, Fdi, are defective. Testing is then used to screen defective dies (Finogo) from correct dies (Figo) where the terms “go” and “nogo” indicate that the die has passed or failed the screening test(s), respectively. Because testing is not a perfect process, we can classify the testing results of the dies into four possible categories [Bushnell 2000]:

  1. A number of correct dies (Fcigo) that are declared “go”

  2. A number of defective dies (Fdigo) that are declared “go”

  3. A number of correct dies (Fcinogo) that are declared “nogo”

  4. A number of defective dies (Fdinogo) that are declared “nogo”

Clearly, the correct decision has been made in categories 1 and 4, whereas categories 2 and 3 correspond to wrong decisions. Category 2 is usually characterized by defect level DLi, which is defined as the number of defective parts declared “go” over the total number of parts declared “go”:

DLi = Fdigo/Figo = Fdigo/(Fcigo+Fdigo)

This defect level can be viewed as a metric of test quality. A high-quality test should minimize the defect level: Fdigo → 0 ⇒ DLi → 0. The defect level is a measure of the number of dies remaining in the “go” category that are actually defective and will not function correctly in the application. In the context of the SIP, this concept of defect level is of great importance since all dies in the “go” category (i.e., Fcigo + Fdigo) are used during the second step of the SIP process as illustrated in Figure 5.4b. In this step, an SIP is assembled from a set of incoming dies (die #1 to die #n) and possibly some passive components, using a carrier or interposer. At the end of this assembly process, a total number of SIPs, A, is assembled, among which Ac SIPs are correct and Ad SIPs are defective. Testing is used once again to screen defective SIPs from correct SIPs.
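As a minimal numerical illustration of this definition, the following Python sketch computes the defect level from the four test-outcome counts; the counts are hypothetical and chosen only to show the arithmetic.

```python
def defect_level(fc_go, fd_go):
    """Defect level DLi = defective dies declared 'go' / all dies declared 'go'."""
    return fd_go / (fc_go + fd_go)

# Hypothetical wafer-test outcome for die #i (illustrative numbers only)
fc_go, fd_go = 9850, 40       # correct and defective dies declared "go"
fc_nogo, fd_nogo = 30, 80     # correct and defective dies declared "nogo" (unused here)

dl_i = defect_level(fc_go, fd_go)
print(f"DLi = {dl_i:.4%}  ({dl_i * 1e6:.0f} ppm)")   # about 0.40%, i.e., ~4044 ppm
```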

The resulting SIP can be a very sophisticated and expensive product because a single SIP may include many dies and passive components. Furthermore, SIP assembly can be a very complex and expensive process, where many different interconnect techniques may be used for wire bonding and flip-chip connections. In addition to these costs, it is important to note that typically a defective SIP cannot be repaired. These factors combined lead to the following observation:

The SIP process is economically viable only if the associated yield YSIP of the packaged SIP is high enough.

Referring to Figure 5.4b, the production yield of the packaged SIP can be defined as the number of correct SIPs, Ac, over the total number of assembled SIPs, A:

YSIP = Ac/A = (A–Ad)/A

To optimize the assembly yield, YSIP, it is important to minimize the number of defective parts, Ad. This requires a detailed analysis of all possible root causes for defective SIPs. A defective SIP can originate from the following:

  • Defective substrate

  • Incorrect placement or mounting of dies and components

  • Defective soldering or wire-bonding of dies and components

  • Stress on dies during assembly

  • Defective dies, etc.

The first three cases are under the control and the responsibility of the SIP assembler. The substrate used as a carrier is generally made with a mature technology offering a high level of quality, whereas the assembly process and the quality of the mounted dies are particularly critical. Indeed, SIPs are only viable if the quality of the assembly process is sufficiently high. In the IC fabrication context, yields (Yic) of around 75% are quite common. In contrast, the SIP assembly context is very different because the yield associated with the assembly process, Ya, must be much higher; viable yields for SIP assembly are typically around 99%.

The fourth case corresponds to a situation where a die that is correct before assembly sustains some form of flaw. The die is stressed by the assembly process and becomes defective.

The last case is illustrated in Figure 5.4b with curved arrows and is totally independent of the assembly process. It is clear that any defective die belonging to the Fdigo category will contribute to the number, Ad, of defective SIPs. This important point demonstrates that the defect level DLi of each die process has a direct impact on the yield of the SIP assembly process. This is not a new point; in the early 1990s, after passing the first euphoria for MCMs, it was discovered that the yield of assembled modules was related to the number of devices in the module and the defect levels associated with each device. This simple relationship holds for any assembly process of components that must all function correctly (assuming no redundancy) to provide the correct system function [DPC 2007].

For a given die #i, the probability Pi of being defect free is related to its defect level:

Pi = 1 – DLi

Considering an SIP assembly process including different dies, the overall yield YSIP can be ascertained by:

YSIP = 100 × [P1 × P2 × ... × Pn] × Ps × (Pint)^Q × Pw

where Pi is the probability that die #i is defect-free, Ps is the probability that the substrate is defect-free, Pint is the probability that a single die interconnection is defect-free, Q is the number of die interconnections, and Pw is the probability that placement and mounting are defect-free.

The preceding equation demonstrates the cumulative effect of the different defect levels DLi, expressed in parts per million (ppm): the resulting yield is the product of the individual defect-free probabilities. Table 5.1 gives a simplified example where passive components, interconnect, and mounting problems are omitted. The example only considers problems related to the substrate and three dies.

Table 5.1. Assembly Yield Example

            DLi (ppm)   Pi (%)
Substrate         600    99.94
Die 1            4100    99.59
Die 2           25500    97.45
Die 3            1200    99.88
YSIP                     96.87

As this example shows, the overall yield YSIP is always lower than each individual probability of being defect-free; in other words, the assembly process accumulates the problems of all the individual dies. Another important feature is the sensitivity of this phenomenon to a single die: if die 1 has a defect level of 20,000 ppm instead of 4,100 ppm, the resulting yield falls from YSIP = 96.87% to YSIP = 95.33%.
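To make the cumulative effect concrete, the short Python sketch below implements the preceding yield equation and reproduces the figures of Table 5.1; the interconnect and mounting terms (Pint, Q, Pw) are set to ideal values here, matching the simplifications of the table.

```python
def assembly_yield(die_dl_ppm, substrate_dl_ppm=0.0, p_int=1.0, q_int=0, p_w=1.0):
    """Assembly yield (in %) of a packaged SIP.

    die_dl_ppm       -- defect levels DLi of the mounted dies, in ppm
    substrate_dl_ppm -- defect level of the substrate, in ppm
    p_int, q_int     -- per-interconnection defect-free probability and count
    p_w              -- placement/mounting defect-free probability
    """
    y = 1.0 - substrate_dl_ppm / 1e6            # Ps
    for dl in die_dl_ppm:
        y *= 1.0 - dl / 1e6                     # Pi = 1 - DLi
    y *= (p_int ** q_int) * p_w                 # (Pint)^Q and Pw
    return 100.0 * y

# Values of Table 5.1 (interconnect and mounting terms omitted)
print(assembly_yield([4100, 25500, 1200], substrate_dl_ppm=600))   # ~96.88
# Sensitivity: die 1 degraded to 20,000 ppm
print(assembly_yield([20000, 25500, 1200], substrate_dl_ppm=600))  # ~95.33
```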

An acceptable assembly yield requires that every die in the SIP exhibits a very low defect level; in other words, only high-quality components can be used in the SIP assembly process. In Figure 5.4a, the die test process is responsible for this high quality level. In practice, this quality level is not easy to guarantee because tests are applied to bare dies and not to packaged dies. A test applied to a bare die is usually called a wafer test, whereas a test applied to a packaged IC is called a final test. It is well known that many factors, including limited contact and probe speed, limit the efficiency of tests applied to bare dies. Despite these constraints, bare dies used in the SIP assembly process should exhibit the same, or better, quality level than a packaged IC. As noted earlier, this is referred to as the known-good-die (KGD) concept, which can be stated as follows:

KGD: A bare die with the same, or better, quality after wafer test than its packaged and “final tested” equivalent.

Test Strategy

We also have to consider the SIP test process in Figure 5.4b, which is responsible for the final delivery quality. As already discussed for the die process, testing is not a perfect process and, as a result, the SIP test process may exhibit some defect level or yield loss. Consequently, the test yield YSIPtest of the packaged SIP is not exactly the same as the production yield YSIP defined earlier. The test yield is defined as follows:

YSIPtest = Ago/A = (A – Anogo)/A

Under the assumption that only high-quality components (KGD) are used in the assembly process, the SIP test process must focus on the faults that originate from the assembly process itself. Under ideal conditions, an SIP test strategy includes the following:

  • A precondition stating that all the components used for the assembly have a KGD quality

  • A test of the placement and connections

  • A global functional test for critical parameters

The placement and connection test concentrates on the faults that result from the assembly itself. The global functional test is more oriented toward system aspects and concentrates on parameters that were not testable at wafer test of the separate components. Under ideal conditions (i.e., using KGD components), it is expected that failing SIPs will most frequently be encountered during the placement and connection test. In practice, the situation is far different from the ideal one:

  • The status of the KGD is not always perfect.

  • The use of mixed component technologies and different connection technologies gives rise to a wide spectrum of interconnect faults.

  • The functionality may include mixed signal domains such as digital, analog, and RF devices.

As a consequence, the ideal strategy described earlier represents only a global guideline, and more realistic approaches must be used. In particular, most of the testing performed on the dies at wafer level must be repeated because a given die may become defective as a result of assembly stress. In some cases, additional tests are required to detect defects that escaped the wafer test. For example, RF devices cannot be fully tested at wafer test and require at-frequency functional test after mounting.

Testing dies is not trivial because of the limited test access. The total number of pads on each of the different mounted dies is usually much higher than the number of pins of the final SIP package. This means that many test signals, such as scan-in and scan-out signals, multiplexed on die pads may not be connected to package pins. In this case, the recommended solution is the widely adopted IEEE 1149.1 boundary scan standard. Because of the limited access to internal dies, design for testability (DFT) and built-in self-test (BIST) connected to the boundary scan test access port (TAP) are also recommended whenever possible.

Because of these practical points, testing an SIP is a combination of functional test at the system level with structural test, commonly referred to as defect-based test, at the die and interconnect level. The majority of the challenges lie in the testing of mixed signal and RF blocks.

Bare Die Test

Two major factors have to be considered by SIP manufacturers in bare die testing: test escapes of the die (defective dies declared “go”) and system-level infant mortality in the final package. The number of test escapes depends on the yield of the lot, automated test equipment (ATE) inaccuracy, and insufficient fault coverage of the applied set of tests. The number of test escapes is unpredictable; therefore, meeting the KGD target for high-volume markets represents a big challenge for the industry. For stand-alone devices, good test coverage is often achieved thanks to the final package test, where high pin count, high speed, and high frequencies can be handled more easily than at wafer level [Mann 2004]. For bare die, on the other hand, probing technology is crucial.

Mechanical Probing Techniques

The probe card technology has not evolved at the same speed as IC technology. Until recently, most products did not require sophisticated probe cards. The traditional technology, based on tungsten needles shaped in a cantilever profile as illustrated in Figure 5.5, still represents 85% of the probe card market [Mann 2004].


Figure 5.5. Principle of cantilever probe card using epoxy.

The vertical probe, introduced in 1977, was developed to fulfill the requirements of array configurations, anticipating the increasingly high number of contact pads in circuits. However, the development of complex devices, combined with the expansion of multisite (parallel) testing, pushed the fabrication of probe cards toward the limits of the technology, one of the key issues being co-planarity. Meanwhile, the size and the pitch of the pads have steadily decreased, forcing probe cards toward novel, but expensive, technologies [SIA 2004]. On top of those requirements, the growth of chip-scale and wafer-level packages put further demands on probes, which contact solder bumps instead of planar pads.

Fortunately, some solutions have been able to push the mechanical limits forward. Among them, MEMS-based implementations of probe cards are now starting to replace the traditional macroscopic technologies. An example of a MEMS-based probe card is shown in Figure 5.6, which represents a silicon micro probe with Cr/Au metal wiring. The cantilevers are kept on the wafer and aligned, via an intermediate printed circuit board (PCB), with a holder to make the complete probe card. Pitches of 35μm can be achieved. There are many examples of either industrial products or research demonstrations that show the growing interest in the MEMS-based technologies [Cooke 2005].


(Courtesy of Advanced Micro Silicon Technology Company.)

Figure 5.6. View of probe tips in MEMS technology.

Another solution for KGD wafer testing is also emerging, which consists of replacing the traditional probe card with a noncontact interface [Sellathamby 2005]. This technology uses short-range, near-field communications to transfer data at gigabit-per-second rates between the probe card and the device under test (DUT) on a wafer. The principle is depicted in Figures 5.7 and 5.8. The test system consists of a probe chip with micro antennas and transceivers, each of which probes one input/output (I/O) on the DUT. Corresponding antenna and transceiver circuits are designed into the DUT, with each I/O site connected to a single antenna and transceiver circuit, and operate at the same carrier frequency. This novel technology offers many advantages, including the following:

  • High reliability because no damage is caused by scratching the pad

  • High density because of the reduction of the size and pitch of the die pads

  • High throughput because it enables a massively parallel test


(Courtesy of Scanimetrics.)

Figure 5.7. Generic principle of noncontact testing.


(Courtesy of Scanimetrics.)

Figure 5.8. Cross-sectional view of the noncontact test solution.

Electrical Probing Techniques

From an electrical perspective, it is fair to say that KGD cannot be achieved for most chips with RF interfaces. Thus far, analog ICs are mainly tested against their specified parameters. This strategy has proved to be effective, but it places a lot of requirements on the test environment, including the ATE, test board, and probe card. A lot of progress has been made by ATE vendors, in test board technology, and more recently in probe cards. Testing RF and high-speed mixed-signal ICs represents a big challenge because the propagation of the signal along the path may be disturbed by parasitic elements. Moreover, the emergence of new communication protocols beyond 3GHz (e.g., ultrawideband [UWB]) adds many constraints to the wafer probing of these devices. Correct propagation is achieved when the following conditions are met:

  1. The integrity of the signal properties (frequency, phase and power) is maintained from the source to the load.

  2. The load does not “see” any parasitic signal.

Many elements must be controlled to fulfill these requirements. First, impedance matching will impact the path loss. The series inductance and the shunt capacitance will create the conditions for proper oscillations and the required frequency bandwidths. The crosstalk will depend on the mutual inductance and on the quality of shielding and filtering. Finally, the noise level can be kept low by adequate decoupling.
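As a small numeric illustration of why impedance matching matters for path loss, the sketch below computes the reflection coefficient and the resulting return and mismatch losses for loads that deviate from a 50-ohm reference; the impedance values are hypothetical.

```python
import math

def mismatch(z_load, z0=50.0):
    """Reflection coefficient magnitude, return loss, and mismatch loss (dB)."""
    gamma = abs((z_load - z0) / (z_load + z0))
    return_loss_db = -20.0 * math.log10(gamma) if gamma > 0 else float("inf")
    mismatch_loss_db = -10.0 * math.log10(1.0 - gamma ** 2)
    return gamma, return_loss_db, mismatch_loss_db

# Hypothetical load impedances seen at the probe tip (ohms, complex values allowed)
for z in (50 + 0j, 75 + 0j, 50 + 30j):
    g, rl, ml = mismatch(z)
    print(f"Z = {z}: |Gamma| = {g:.3f}, return loss = {rl:.1f} dB, mismatch loss = {ml:.2f} dB")
```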

All of these effects can be reduced thanks to short, impedance-matched connections between the source and the load, which minimize the parasitic elements of the RF connections and their related effects. Figure 5.9 represents the cross-section of a typical cantilever probe card for testing RF dies. Components, such as surface-mounted devices (SMDs), are placed in the RF path to ensure proper impedance matching (Zin) and optimize the test environment. Mass production implies thousands of touchdowns for the probe tips, so the drift of the electrical characteristics must be minimal. Finally, despite the high degree of signal purity that can be reached, the test environment may differ slightly from the application. This is particularly the case for a bare die compared to either a stand-alone IC or an assembled SIP. Therefore, a good correlation is needed between the test and application environments.


Figure 5.9. RF probe card based on cantilever technology.

The “membrane” technology (see Figure 5.10) can solve many problems by reducing the distance between the pads and the tuning components. A set of microstrip transmission lines is designed on a flexible dielectric material to connect the test electronics from the ATE to the DUT. A conductor on one side of the dielectric membrane and a thin metal film on the opposite side, which acts as a common ground plane, form each microstrip line. The width of the trace depends on the line impedance needed to match the device technology. The die is contacted by an array of microtips formed at the end of the transmission lines through via holes in the membrane (see Figure 5.10a) [Leslie 1989], [Leung 1995], [Wartenberg 2006]. Figure 5.10b shows the structure of the membrane probe. The membrane is mounted on a PCB carrier that interfaces with the test board.


Figure 5.10. Membrane probe card [Leslie 1989]: (a) membrane technology and (b) structure of the probe card.

This technology offers a number of significant advantages for high-performance wafer test including the following:

  • High-frequency signals can stimulate the DUT with high fidelity because the membrane probe provides a wideband interface with low crosstalk between the ATE and the DUT.

  • Photo-lithographic techniques enable an accurate positioning of the conductor traces and contact pads.

  • Contact reliability is much higher than with cantilever probe cards, and the absence of pad scratching also benefits the final package test.

Although this technology offers better signal integrity and better prediction thanks to modeling capabilities, the correlation issues are still not fully solved because the test environment cannot be exactly identical to that of the application.

An effective probing setup is required in order to achieve full test coverage (i.e., a test at “full specs”). Testing at “full specs” means that the DUT operates under worst-case conditions. In the silicon-based SIP technology [Smolders 2004], the active dies are designed to fulfill their respective performance requirements in a well-defined environment. In some cases, the active die requires its associated passive die to be functional. The passive die has to be closely connected (~10μm) to ensure nominal performances of the active die under test. These conditions cannot be achieved with traditional probe technologies. A solution has been proposed, namely the direct-die-contacting concept, which intends to test the active die (wafer) through a reference passive die held on the probing fixture. The DUT is contacted with short connections through a passive die, which is the same as, or very close to, the application passive die (see Figure 5.11).


Figure 5.11. Direct-die concept.

Another approach to KGD for RF/analog products relies on alternative test methods. A representative example is a technique that consists of ramping the power supply and observing the corresponding quiescent current signatures [Pineda 2003]. Because the transistors are forced into various regions of operation, faults are detected at multiple supply voltages. This method of structural testing exhibits fault coverage results comparable to functional RF tests. Given that no RF signal needs to be generated or measured, probing becomes much less critical and the reproducibility of the results is potentially better; therefore, test escapes can also be lowered. This generic approach is illustrated in Figure 5.12.
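The following Python sketch illustrates the general idea of such a DC signature test under simplifying assumptions: the quiescent supply current is measured at several points of the supply ramp and compared point by point against a golden signature with a tolerance band. The signature values and limits are hypothetical.

```python
# Hypothetical golden signature: VDD (V) -> expected IDDQ (uA), and a +/-15% band.
GOLDEN_IDDQ_UA = {0.6: 3.1, 0.9: 5.8, 1.2: 9.5, 1.5: 14.2, 1.8: 20.6}
TOLERANCE = 0.15

def dc_signature_pass(measured_ua):
    """Return (True, None) if every point stays in the band, else (False, failing VDD)."""
    for vdd, golden in GOLDEN_IDDQ_UA.items():
        if abs(measured_ua[vdd] - golden) > TOLERANCE * golden:
            return False, vdd
    return True, None

# Example device: a resistive defect inflates the quiescent current at high VDD
measured = {0.6: 3.2, 0.9: 6.0, 1.2: 9.8, 1.5: 17.9, 1.8: 27.3}
ok, failing_vdd = dc_signature_pass(measured)
print("PASS" if ok else f"FAIL at VDD = {failing_vdd} V")
```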


Figure 5.12. Generic approach of DC signature method [Pineda 2003].

Other approaches propose to reuse some low-speed or digital internal resources of the DUT and to add some DFT features so that no RF signals need to be handled outside the DUT. For example, in [Akbay 2004], the authors apply an alternate test that automatically extracts features from the component response to predict RF specifications. In [Halder 2001], a test methodology is proposed that makes use of a voltage comparator and a simple waveform generator. A digitized response is then shifted out and externally analyzed. The combination of such methods with structural testing techniques will help reach high defect coverage for analog, mixed-signal, and RF devices.

Reliability Screens

In principle, burn-in testing is done by applying abnormally high voltage and elevated temperature to the device, usually above the limits specified in the data sheet for normal operation. Burn-in testing becomes more efficient when the device can be excited by some stimuli (e.g., clock signals). However, manufacturers of high-volume ICs for low-cost consumer and mobile markets are interested in developing novel reliability screens that can be applied at the wafer level and that may fulfill the targets without burn-in testing. Test equipment vendors propose various prober models with high and low temperature capabilities. By applying high or low ambient temperatures during the wafer test, some of the infant defects can be detected. However, this solution is not cost-effective, especially for very low temperatures (below –30°C), because of the price of the equipment, the ramp-up and ramp-down times, and possible disturbances during the test (e.g., air leaks) that can degrade the throughput time. Diverse alternative methods have been developed and published. One of the most widely used techniques consists of applying a high-voltage stress to the device before the normal test sequence. A high-voltage stress breaks down weak oxides, thereby creating failure modes that can be detected by IDDQ, structural, or functional tests. The voltage level is significantly higher than the operating voltage [Barrette 1996] [Kawahara 1996]. Other potential failure mechanisms can be detected by low-voltage testing; bridging faults due to resistive via contacts are detected during structural or functional testing at a lower operating voltage.

The IDDQ test for CMOS devices is one of the most meaningful techniques for test engineers. It is a measurement of the supply current at the VDD node while the device is in a quiescent state. In [Arnold 1998], the authors discuss the benefits of IDDQ test and propose implementing built-in current sensors (BICS) in each die for quiescent current testing. This paper also explains how a statistical approach can be used efficiently to achieve KGD. The technique consists of calculating test limits for parametric tests based on the quartile values (the Tukey method) of the distribution of a given population, which could be one wafer, a sublot, a full lot, or a group of lots. Another statistical method consists of detecting failure clusters on the wafers and declaring all the passing neighbors as defective [Singh 1997]. These statistical techniques are based on outlier identification. Outliers can be identified from a variety of comparisons: vector to vector, test to test, or die to die. It is worth noting that statistical methods have been used extensively for several years in the test of automotive ICs. In summary, screening methods are not unique, and the trend is to combine them in order to achieve a reliability level that fulfills the requirements for KGD.
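As an illustration of the statistical screening idea, the sketch below derives Tukey-style test limits from the IDDQ distribution of one wafer and flags outlier dies; the population values are synthetic, and a real flow would tune the fence factor and the reference population (wafer, sublot, lot) per product.

```python
import random
import statistics

def tukey_limits(values, k=1.5):
    """Tukey fences [Q1 - k*IQR, Q3 + k*IQR], computed from the population itself."""
    q1, _, q3 = statistics.quantiles(values, n=4)   # quartiles of the distribution
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# Synthetic IDDQ readings (uA) for the dies of one wafer, plus two defective outliers
random.seed(1)
iddq = [random.gauss(10.0, 0.8) for _ in range(300)] + [18.5, 25.0]

low, high = tukey_limits(iddq)
outliers = [(die, round(x, 1)) for die, x in enumerate(iddq) if x < low or x > high]
print(f"limits = [{low:.2f}, {high:.2f}] uA, outlier dies: {outliers}")
```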

Functional System Test

System test at the SIP level can be considered as (1) functional test of the whole system or subsystem and (2) access methods implemented in the SIP components to enable functional, parametric, or structural tests. These access methods enable testing of the SIP once all the dies are assembled.

The first method is the functional system test, in which the system is tested against the application specifications and the functionality is checked. The biggest advantage of this test method is the good correlation at the system level between the measurement results of the SIP supplier and those of the SIP customer (the end integrator). However, this method has some drawbacks:

  1. There is a complex test setup with expensive instruments (e.g., a combination of RF sources and analyzers with digital channels).

  2. There are long test times because of the large number of settings in the SIP (e.g., running the application software can take seconds).

  3. Testing full paths makes diagnostics difficult (with no intermediate test points).

Several efforts have been made to improve the functional test, especially with the growth of the mobile applications, where SIP technologies represent a big market share. The proposed solutions deal with either DFT, digital signal processing (DSP) techniques, BIST capabilities, or a combination of these.

Path-Based Testing

Not all SIPs are targeted at wireless communications, but the following subsections describe techniques developed in this field; most of the techniques presented can nevertheless be adapted to other types of SIPs. We consider a system made of a transmitter (Tx) and a receiver (Rx), with DFT and BIST features for test embedded inside. In the case of a transmitter-only or receiver-only system (e.g., a TV set or a set-top box), the same solutions can be applied, provided that the missing part is on the test board instead of in the DUT. A block diagram of a typical system with an RF transceiver is shown in Figure 5.13. The partitioning of the functional blocks may depend on the application and the technologies. In this didactic example, we consider an SIP made of three dies: digital plus mixed-signal circuitry, an RF transceiver including a low-noise amplifier (LNA), and finally a power amplifier (PA). Other elements, such as switches or band-pass filters (BPFs), can also be placed on the substrate.


Figure 5.13. A typical transceiver system.

In this architecture, two paths are considered: the transmitter path and the receiver path. For transmission operation, the DSP generates a periodic bit sequence, which is modulated using a digital modulation type, such as wideband code-division multiple access (WCDMA) or GSM. The modulated sequences are up-converted in frequency and amplified. The receive subsystem down-converts the RF signal into the base-band, where an analog-to-digital converter (ADC) converts the base-band signal into digital words that are processed by the DSP.

To measure the system performance, the usual strategy consists of splitting the test in two paths, the receiver and the transmitter paths, respectively. In practice, a receiver is tested using sources able to generate digitally modulated RF signals with a spectral purity better than the DUT, and a transmitter is tested using a demodulator and digitizer that have a sufficient noise floor and resolution compared to the DUT.

At the system level, the quality of a receiver is given by its bit error rate (BER) performance. A basic BER test, applied to a receiver, consists of comparing the demodulated binary string coming out of the system under test with the original data (which were modulated to produce the RF signal). The BER is determined by calculating the ratio of the total number of errors to the total number of bits checked during the comparison. In practice, such a test requires a lot of data to achieve the target accuracy, so the test time becomes unacceptable. This problem may be overcome by varying the test conditions. Let us assume that the sampled voltage value corresponding to a digital bit follows a Gaussian distribution such that the likelihood of a bit error, pe, is given by:

pe = (1/2) erfc(√(Eb/No))

where No is the noise power spectral density, and Eb is the energy of the received bit, expressed by:

Eb = C/fb

where C represents the power of the carrier, and fb is the data rate.

From these equations, it can be observed that varying the signal-to-noise ratio (SNR) of the RF signal will influence the BER, in theory. However, attention must be paid to the nonlinear behavior of the receiving system with respect to amplitude variations. In practice, the intermodulation distortion (IMD) and the crest factor (the peak amplitude of a waveform divided by the root-mean-square value) will influence the BER. In [Halder 2005], the authors have proposed a low-cost solution for production testing of BER for a wireless receiver, based on statistical regression models that map the results of the AC tests to the expected BER value. BER testing is performed under various conditions, which guarantee several parameters such as sensitivity, co-channel, and adjacent channel power ratio (ACPR).
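To connect these quantities, the following Python sketch compares the closed-form error probability given above with a small Monte Carlo estimate for a simple antipodal (binary) link in Gaussian noise; it only illustrates how the BER varies with Eb/No and does not model any particular receiver or modulation standard.

```python
import math
import random

def ber_theoretical(ebno_db):
    """pe = 0.5 * erfc(sqrt(Eb/No)) for antipodal signaling in Gaussian noise."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebno))

def ber_montecarlo(ebno_db, n_bits=200_000, seed=0):
    """Count bit errors for +1 symbols (Eb = 1) corrupted by Gaussian noise."""
    rng = random.Random(seed)
    ebno = 10.0 ** (ebno_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebno))    # noise standard deviation for No = 1/ebno
    errors = sum(1 for _ in range(n_bits) if 1.0 + rng.gauss(0.0, sigma) < 0.0)
    return errors / n_bits

for snr_db in (2, 4, 6, 8):
    print(f"Eb/No = {snr_db} dB: theory {ber_theoretical(snr_db):.2e}, "
          f"simulated {ber_montecarlo(snr_db):.2e}")
```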

The transmitter Tx channel is usually tested at the system level by measuring the error vector magnitude (EVM). The EVM test (see Figure 5.14) is based on a time-domain analysis of a modulated signal represented by an I/Q diagram. Assuming that v(t) represents the transmitted signal at a carrier frequency ωc, then the following relationship may be established:

v(t) = I(t) cos(ωct) + Q(t) sin(ωct)

where I(t) and Q(t) are the data signals that can be evaluated using the constellation diagram. For clarity in Figure 5.14, we represent one symbol only. The EVM parameter is expressed as follows:

EVM = |vmeasured – videal| / |videal|

Figure 5.14. Definition of error vector magnitude.

In a transmission system, the data are collected just at the ADC outputs, where a tradeoff must be considered among the amount of data to be collected and transferred, accuracy, and test time. In [Halder 2006], a new test methodology for wireless transmitters is described in which a multitone stimulus is applied at the base band of the transmitter under test. The method enables multiple parameters to be tested in parallel, reducing the overall test time. An original method is proposed in [Ozev 2004], where the signals propagated through the analog paths are used to test the digital circuitry. Although this methodology was developed for SOC testing, it can be easily applied to SIP testing. Although it represents an attractive solution for Tx testing, EVM alone cannot detect all defective dies and systems. In [Acar 2006], the authors proposed enhanced EVM measurements in conjunction with a set of simple path measurements (input-output impedances) to provide high fault coverage.
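The sketch below shows how an rms EVM figure can be computed from captured I/Q samples once the ideal constellation points are known; the symbol values here are hypothetical, and normalization conventions (rms versus peak reference) vary between standards.

```python
def evm_rms(measured, ideal):
    """rms EVM = sqrt( sum|measured - ideal|^2 / sum|ideal|^2 ), often quoted in %."""
    err_power = sum(abs(m - i) ** 2 for m, i in zip(measured, ideal))
    ref_power = sum(abs(i) ** 2 for i in ideal)
    return (err_power / ref_power) ** 0.5

# Hypothetical QPSK capture: ideal symbols and slightly impaired measurements
ideal = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j, 1 + 1j, -1 - 1j]
measured = [0.95 + 1.08j, -1.02 + 0.90j, -0.97 - 1.05j,
            1.10 - 0.92j, 0.90 + 1.02j, -1.04 - 0.96j]

print(f"EVM = {100 * evm_rms(measured, ideal):.1f}%")
```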

Loopback Techniques: DFT and DSP

Loopback techniques are increasingly proposed in the literature. Most of these techniques are combined with alternate test methods to reduce the test time and to improve predictability. Practical techniques may depend on the radio architecture of the system under test: time-division versus frequency-division duplex, half versus full duplex, shared versus separate Tx and Rx local oscillators, same versus different Tx and Rx modulation, etc. However, a general approach can be defined independently of the radio architecture. This section describes two generic concepts: an external and an internal loopback.

One solution consists of creating the loop between the output of the PA of the transmitter and the input of the LNA of the receiver. Such a configuration is described in [Srinivasan 2006] and [Yoon 2005], where the authors propose an external loopback circuit. As illustrated in Figure 5.15, an attenuator is connected to the PA output, the frequency of the attenuated signal is then divided, and an offset mixer and band-pass filter are subsequently connected to the input of the receiver path. Creating a loop between the Tx and the Rx requires such a loopback circuit to overcome the limitations of the method when applied to a TRx (such as in GSM or WLAN), where the shared voltage-controlled oscillator (VCO), the modulation cancellation, and the half-duplex architecture affect synchronization between the Tx and Rx signals.


Figure 5.15. External loopback principle.

In [Lupea 2003], some blocks and intermediate loopbacks are added inside the RF transceiver at the low-pass filter, ADC, detector, etc. Creating the loop in the front-end IC was also investigated in [Dabrowski 2003], as shown in Figure 5.16. The author proposes connecting the output of the up-converter of the Tx to the input of the LNA of the Rx through a so-called test attenuator (TA), a complementary BIST sharing its circuitry with on-chip resources; consequently, the total area overhead for test is negligible. This TA and the associated BIST are used so that EVM errors can be detected. These errors are caused by faults in RF blocks that degrade the gain or the noise figure of the chain.


Figure 5.16. Internal loopback using a test attenuator [Dabrowski 2003].

The faults close to the Rx analog output can be detected by reducing the TA gain or by running a complementary test for gain that is insensitive to fault location. The third-order intercept point (IP3) test was enhanced using statistical analysis. It should be noted that the path-based and the loopback techniques are still emerging because of limitations from both hardware and processing perspectives, leading to long test times. In [Dabrowski 2004] and [Battacharya 2005], de-embedding techniques for the BER test are proposed to reduce the test time of the system. In [Srinivasan 2006], a computation of the specifications from the measurements is performed in order to eliminate the need for standard specification test. This so-called alternate diagnosis approach is based on statistical correlation between the measurements and specifications enabling a drastic reduction of the test time. In Section 5.4.4, we describe another processing method applied to data converters that may also be considered as a preliminary solution to a full system test.

Test of Embedded Components

As discussed in Section 5.1, testing the dies after assembly is a critical phase in achieving an economically viable SIP and in providing some diagnostic capability. The test consists of two complementary steps:

  1. Structural testing of interconnections between dies

  2. Structural or functional testing of dies themselves

The main challenge is accessing the dies from the primary I/O of the SIP. The total number of effective pins of the embedded dies is generally much higher than the number of I/O of the package. Moreover, in contrast to an SOC, where it is possible to add some DFT between IP cores to improve controllability and observability, the only active circuitry available for testing in the SIP is that of the mounted active dies themselves. Consequently, improving testability places requirements both on the bare dies used in the SIP and on the definition of a specific SIP test access port (TAP). In the next sections, we illustrate the SIP TAP constraints, the test of interconnections, and the test of dies with the didactic SIP example shown in Figure 5.17.


Figure 5.17. Conceptual view of an example of SIP.

In this example, we consider the assembly of an SIP consisting of four active dies soldered onto a passive substrate: the first die is a low-cost digital core (e.g., a microcontroller), the second one is a complex mixed-signal die (e.g., a video channel decoder), the third die is an expensive digital block (e.g., a complex decoder with large embedded memory), and the fourth one is an RF die (e.g., a transceiver). This configuration is quite realistic and allows one to consider various testing issues.

SIP Test Access Port

The SIP imposes some specific constraints on the TAP. This SIP TAP must provide several features, mainly:

  • Access for die and interconnection tests

  • Enabling SIP test at the system level, as for an SOC

  • Additional recursive test procedures during the assembly phase

The IEEE 1500 standard was developed to manage the test of a complex SOC at system level and to give access to and control of each embedded core. Unfortunately, this approach is unsuitable for the SIP, as the standard is based on a specific global test access mechanism (TAM) embedded into the SOC but outside of the cores. In many SIPs, the only active circuitry is in the dies themselves, which are equivalent to cores in an SOC. Thus, a more viable approach to give the best accessibility to embedded active dies consists of using boundary scan compliant bare dies. In other words, digital active dies should be IEEE 1149.1 compliant, and mixed-signal or analog active dies in the SIP should be IEEE 1149.4 compliant. Figures 5.18 and 5.19 summarize the implications on the internal architecture of each die.


Figure 5.18. Compliant digital die with IEEE 1149.1 [Landrault 2004].


Figure 5.19. Compliant mixed-signal die with IEEE 1149.4 [Landrault 2004].

Around the internal die core, the required circuitry consists of at least two registers (the instruction and bypass registers), digital boundary scan cells (also referred to as digital boundary-scan modules [DBMs]) on the digital I/O, and a TAP controller. The extensions for mixed-signal circuits include a test bus interface circuit (TBIC) and analog boundary modules (ABMs) on the analog pins. Consequently, at the top level, the SIP TAP must be able to manage the local digital and analog TAPs by controlling four or five digital signals and two additional analog signals, listed below (a short sketch of how these signals are driven follows the list):

  • TCK: the test clock

  • TMS: the test mode select signal

  • TDI: the test data input pin

  • TDO: the test data output

  • TRST*: the test reset signal (optional in the IEEE 1149.1 standard)

  • AT1 and AT2: the analog signals used in the IEEE 1149.4 standard
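As a rough illustration of how these signals are exercised, the Python sketch below bit-bangs the IEEE 1149.1 TAP state machine to load an instruction; the low-level pin-driving functions are hypothetical placeholders for whatever tester or controller interface is actually available, and the instruction opcode and register length are device specific.

```python
# Hypothetical low-level pin drivers (to be mapped onto the real tester/controller)
def set_pins(tms, tdi):      # drive the TMS and TDI pins
    print(f"TMS={tms} TDI={tdi}")

def pulse_tck():             # generate one rising and one falling edge on TCK
    pass

def read_tdo():              # sample TDO (the returned data is ignored in this sketch)
    return 0

def clock_tap(tms, tdi=0):
    """One TCK cycle: the TAP samples TMS and TDI on the rising edge of TCK."""
    set_pins(tms, tdi)
    pulse_tck()
    return read_tdo()

def tap_reset():
    """Five TCK cycles with TMS = 1 force the TAP into Test-Logic-Reset."""
    for _ in range(5):
        clock_tap(1)

def load_instruction(bits):
    """Shift an instruction (least significant bit first) into the IR and update it."""
    clock_tap(0)                # Test-Logic-Reset -> Run-Test/Idle
    clock_tap(1)                # -> Select-DR-Scan
    clock_tap(1)                # -> Select-IR-Scan
    clock_tap(0)                # -> Capture-IR
    clock_tap(0)                # -> Shift-IR
    for i, bit in enumerate(bits):
        last = (i == len(bits) - 1)
        clock_tap(1 if last else 0, bit)   # last bit is shifted while leaving Shift-IR
    clock_tap(1)                # Exit1-IR -> Update-IR
    clock_tap(0)                # -> Run-Test/Idle

tap_reset()
load_instruction([0, 0, 0, 0])  # hypothetical 4-bit opcode
```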

In addition to direct accessibility and controllability of the dies, these boundary scan signals will control the DFT or BIST circuitry embedded in each die and connected to the TAP of each die. For the consumer or integrator to perform system-level test, an SIP has to be equivalent to an SOC in the final application, once the packaged system is soldered on the PCB. This requirement involves two constraints: how to distribute and manage the local TAPs at the die level and how to determine the identity of the top-level TAP (specifically, the SIP ID code). The latter constraint comes from the specificity of the SIP assembly phase, during which the active dies are successively mounted on the substrate. Because KGD is sometimes not achievable, and because the assembly process may introduce additional failures, intermediate tests after every die soldering might be required. This strategy, combined with an ordered die assembly (from the least to the most expensive dies), allows one to optimize the overall SIP cost. During these incremental tests, the SIP TAP controller must manage boundary scan resources for interconnection and die tests even while some dies are missing.

Taking all the requirements into consideration, the SIP TAP controller must have two configurations: one during the incremental test and the other for the end-user test. Following the ordered assembly strategy, the first die will integrate the SIP TAP controller and, as a result, the ID code of the SIP will be the ID code of this first die. Figure 5.20 shows a conceptual view of the two configurations if the four dies in our example are boundary scan compliant.

The “star” configuration (see Figure 5.20a) attempts to facilitate incremental testing during the assembly. Obviously, the link between the dies (the daisy chain) is broken during intermediate testing because all dies are not yet soldered onto the substrate. This configuration requires as many TMS control signals as there are dies in the SIP. The “ring” configuration (see Figure 5.20b) is designed such that the end-user cannot detect the presence of several dies, either for identification (there is only one ID code) or for the boundary scan test. Only one TMS control signal is required in this configuration. Thus far, no SIP TAP standard exists, but architectures have been proposed based on the IEEE 1149.1 standard [De Jong 2006] and the IEEE 1500 standard [Appello 2006].


Figure 5.20. Possible TAP configurations: (a) star configuration for the intermediate test and (b) ring configuration for the end-user test.

Interconnections

There are two types of interconnections: interconnection between dies and interconnection between die and SIP package pads. The test method for interconnections is equivalent in both cases, but the access issues are obviously different.

The techniques used to test the analog interconnections are based on both structural and functional approaches. The basic principle consists of forcing a specific current (or voltage) and measuring the associated voltage (or current). In addition, passive components on an SIP are often placed on the analog interconnections between analog I/O; testing of analog interconnections must therefore also include impedance measurements.
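A minimal sketch of this force/measure principle is shown below: a known AC current is forced through the interconnection, the resulting voltage phasor is measured, and the derived impedance magnitude is compared with the value expected from the passive network; the network topology, component values, and tolerance are hypothetical.

```python
import cmath

FREQ_HZ = 1e6                  # hypothetical test frequency
R_NOM, C_NOM = 33.0, 10e-9     # hypothetical series R (ohm) and shunt C (F) on the net
TOL = 0.10                     # +/-10% tolerance on the impedance magnitude

def z_expected(r, c, f):
    """Driving-point impedance of a series R feeding a shunt C (hypothetical network)."""
    return r + 1.0 / (2j * cmath.pi * f * c)

def interconnect_ok(i_forced_a, v_measured_v):
    """Force a known AC current, measure the voltage, and compare |Z| to the expectation."""
    z_meas = v_measured_v / i_forced_a
    z_exp = z_expected(R_NOM, C_NOM, FREQ_HZ)
    return abs(abs(z_meas) - abs(z_exp)) <= TOL * abs(z_exp)

# Example: 1 mA forced, hypothetical measured voltage phasor read back from the ATE
print(interconnect_ok(1e-3, 0.036 - 0.016j))   # True: impedance within tolerance
```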

For testing digital interconnections, the IEEE 1149.1 boundary scan circuitry is used, if available, with classical stuck-at structural tests where the interconnection test is performed through boundary scan in external test mode (EXTEST). The fundamental principle consists of applying a digital value onto a die output (i.e., at the interconnection start-point) and evaluating the response to this stimulus at the input(s) of the other die(s) (i.e., at the interconnection end-point(s)). To illustrate the test procedure, consider the test of interconnections between die 1 and die 3 in the SIP example as shown in Figure 5.21, where die 4 is not yet soldered on the substrate. Table 5.2 shows an example of a two-vector sequence with the instructions sent to each die.


Figure 5.21. Test of interconnections.

Table 5.2. Example of Test Procedure

Step   Instruction Die 1   Instruction Die 3   Test Vector
1      Reset               Reset
2      PRELOAD             PRELOAD
3      EXTEST              EXTEST              Vector #1
4      EXTEST              EXTEST              Vector #2
5      Reset               Reset

After initialization, the first vector is loaded into the DBMs of die 1 by the PRELOAD command on the instruction register. Next, the EXTEST instruction is used to apply the preloaded values to the interconnection wires, and the values obtained at the other end of the wires (in the DBMs of die 3) are captured and shifted out with the subsequent EXTEST instruction. The second test vector is processed in a similar way, so that only two vectors are required to test all the possible stuck-at and bridging faults for two adjacent wires. Considering all possible bridging faults affecting k wires, it has been shown that a minimum of ⌈log2(2k+2)⌉ vectors is required.
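The following sketch illustrates one classical way to build such an interconnect test set: each wire is assigned a unique binary code, and the parallel test vectors are the bit slices of those codes, so that any short between two wires or any stuck line produces a response that differs from the expected one. The sketch uses a simple counting sequence starting at 1; variants in the literature differ in how they obtain the bound quoted above.

```python
import math

def interconnect_vectors(k):
    """Parallel test vectors for k interconnection wires (counting-sequence style).

    Wire j receives the binary code of (j + 1); vector v applies the v-th bit of every
    code. Codes are all distinct, so a bridge forces at least one observable mismatch,
    and the all-zeros/all-ones codes are never used, so stuck lines are also exposed.
    """
    n_vec = math.ceil(math.log2(k + 2))           # enough bits for distinct, non-trivial codes
    codes = [j + 1 for j in range(k)]
    return [[(code >> v) & 1 for code in codes]   # one vector = one bit position
            for v in range(n_vec)]

for vector in interconnect_vectors(6):            # 6 wires -> 3 parallel vectors
    print(vector)
```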

If no boundary scan capability is available on a die, the interconnections to be tested should be made directly accessible from the package pads. The implications of this additional DFT on signal integrity should be analyzed and minimized as much as possible.

Digital and Memory Dies

Similar to an SOC with internal IP cores, we face the problem of accessing the inputs and the outputs of internal dies. In fact, an SIP with four times fewer external pads than internal pins on the embedded dies is common. Consequently, we again rely on boundary scan capabilities for testing the internal dies. By activating the bypass function in dies, it is possible to reduce the length of the scan chain. However, at-speed testing requires using techniques such as compression, DFT, and BIST. Unfortunately, in many SIPs, no additional active silicon is available and, as a result, no additional circuitry can be implemented, so any BIST or DFT must already exist in the die itself.

If additional DFT is required to test a specific die, this DFT may be integrated on another die. Obviously, there is little chance that this DFT facility will be available on the hardware of one of the other dies because the design of each die is typically independent. Therefore, the only solution is to use the software or programmable capabilities available on the other digital dies to implement a configurable DFT. The assumption of sufficient programmable facilities on the SIP is often realistic with the incorporation of programmable digital dies, such as DSP, microprocessors or microcontrollers, field programmable gate arrays (FPGAs), etc. Another alternative is to use a transparent mode of the other dies to directly control and observe from the primary I/O of the package as illustrated in Figure 5.22. This concept looks simple, but, in fact, this parallel and at-speed connection through other dies is not necessarily an easy task.

Figure 5.22. Transparent mode principle.

Obviously, we might find an SIP implementation where none of these techniques can be applied. In this case, the only solution to access a specific internal pin is to add direct physical connections to SIP I/O pins while attempting to meet all the associated requirements in terms of signal integrity. In the specific case of a memory die, the access problem is critical because these embedded memories are generally already packaged. These package-on-package (POP) or package-in-package (PIP) configurations may have no BIST capabilities, so the BIST has to be implemented in another digital die and applied to the embedded memory.
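When the memory BIST must be hosted elsewhere, the neighboring programmable die typically runs a March algorithm over the memory interface. The sketch below shows March C- as it might be executed in software from such a die; the read and write access routines are hypothetical placeholders for the real memory interface.

def march_c_minus(read, write, size):
    """March C- as it might be run in software from a neighboring digital die.

    read(addr) and write(addr, bit) are hypothetical access routines exposed by
    the embedded-memory interface. Returns True if no mismatch is observed.
    """
    up, down = range(size), range(size - 1, -1, -1)
    elements = [
        (up,   None, 0),    # up:   (w0)
        (up,   0,    1),    # up:   (r0, w1)
        (up,   1,    0),    # up:   (r1, w0)
        (down, 0,    1),    # down: (r0, w1)
        (down, 1,    0),    # down: (r1, w0)
        (down, 0,    None), # down: (r0)
    ]
    for order, expected, new_value in elements:
        for addr in order:
            if expected is not None and read(addr) != expected:
                return False
            if new_value is not None:
                write(addr, new_value)
    return True

# Usage against a trivial fault-free software model of a 16-word memory.
mem = [0] * 16
print(march_c_minus(lambda a: mem[a], lambda a, b: mem.__setitem__(a, b), 16))  # True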

Analog and RF Components

For the test of analog, mixed-signal, or RF dies, the two most significant challenges are as follows:

  1. Cost reduction of the required test equipment

  2. Testing of embedded dies because of the difficulty in accessing these dies after SIP assembly

Test Equipment Issues

The main advantage of SIP over SOC is the ability to assemble heterogeneous dies of different types and technologies in the same package. From the point of view of a test engineer, however, this flexibility can be a testing nightmare, because the test equipment has to address the whole set of testing requirements in all domains: digital, RF, analog, etc. Using ATE with expensive analog, mixed-signal, and RF options may result in unacceptable test costs. Moreover, analog, mixed-signal, and RF circuits need long functional test procedures, which inevitably increase the test time and cost.

The functional tests are required to achieve a satisfactory test quality and to give diagnostic capabilities at the die level. Even if all the tests previously performed at the wafer level for each die are not necessarily required after assembly, the price of the test equipment and the long test sequences usually make the test cost prohibitive. As a result, specific approaches must be considered to reduce the test time and test equipment cost.

A common approach is to move some or all the tester functions onto the chip itself. Based on this idea, several BIST techniques have been proposed where signals are internally generated or analyzed [Ohletz 1991] [Toner 1993] [Sunter 1997] [Azais 2000, 2001]. However, the generation of pure analog stimuli or accurate analog signal processing to evaluate the system response remains the main roadblock.

Another proposed approach is based on indirect test techniques, which aim to achieve better fault coverage at wafer test and to come closer to KGD status [Pineda 2003], as described in Section 5.2. The fundamental idea is to replace difficult direct measurements with easier indirect measurements, provided that a correlation exists between what is measured and what the direct measurements would have provided.
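The following minimal sketch illustrates the indirect-test idea with synthetic data and a simple least-squares model; it is not the specific method of [Pineda 2003], and a production flow would use a richer model and a proper training and validation procedure.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: three low-cost indirect measurements per device and the
# corresponding (expensive) specification measurement obtained at characterization.
n_train = 200
indirect = rng.normal(size=(n_train, 3))
spec = 2.0 * indirect[:, 0] - 0.5 * indirect[:, 2] + 10.0
spec += rng.normal(scale=0.05, size=n_train)     # measurement noise

# Fit a linear model spec ~ X w by least squares.
X = np.hstack([indirect, np.ones((n_train, 1))])
w, *_ = np.linalg.lstsq(X, spec, rcond=None)

# Predict the specification of a new device from its cheap measurements only.
new_measurements = np.array([0.3, -1.1, 0.8])
predicted_spec = np.hstack([new_measurements, 1.0]) @ w
print("predicted specification value: {:.2f}".format(predicted_spec))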

Other techniques consist of transforming the signal to be measured into a signal that is easier for the ATE to measure. For example, timing measurement is easier for ATE than precise analog level evaluation, so one solution is to convert an analog signal on-chip into a proportional timing delay. Another possible solution consists of using DFT techniques to internally transform the analog signals into digital signals that are controllable and observable from the chip I/Os [Ohletz 1991] [Nagi 1994]. As a result, only digital signals are externally handled, by less-expensive digital test equipment (a low-cost tester, for example). These techniques are limited by the accuracy of the conversion of the analog signal. A similar approach attempts to avoid the problem of conversion accuracy by assuming that several digital-to-analog converters (DACs) and analog-to-digital converters (ADCs) are already available, so as to obtain a fully digital test [Kerzerho 2006], as illustrated in Figure 5.23. Note that this assumption is realistic for the majority of mixed-signal circuits used as active dies in some SIPs. By adding some analog DFT on the analog side of the converters to create a so-called analog network of converters (ANC), new paths can be used to test the DACs and ADCs, as well as the analog blocks, with fully digital inputs and outputs. The main difficulty is to discriminate the influence of each element (each converter and analog block in the same path) on the digital signature. The proposed solution takes advantage of multiconfiguration interconnections between the converters.

Figure 5.23. DFT principle of the ANC technique.
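To show, in a highly simplified form, how multiconfiguration interconnections help separate the contributions of the elements in a path, the sketch below uses hypothetical path gains in dB: reusing the same converter pair with and without an analog block cancels the converter contributions out of the comparison. Discriminating every individual converter generally requires additional reference configurations, which is precisely the difficulty mentioned above; this is an illustration of the idea, not the scheme of [Kerzerho 2006].

# Hypothetical fully digital path measurements, in dB (synthetic values).
measurements_db = {
    ("DAC1", "ADC1"): -0.9,               # converters only, block bypassed via the analog DFT
    ("DAC1", "filter", "ADC1"): -2.4,     # same converters, filter inserted
    ("DAC1", "amplifier", "ADC1"): 4.6,   # same converters, amplifier inserted
}

converters_only_db = measurements_db[("DAC1", "ADC1")]
for path, gain_db in measurements_db.items():
    if len(path) == 3:
        block = path[1]
        # Subtracting the converter-only path isolates the inserted block's gain.
        print("{}: {:+.2f} dB".format(block, gain_db - converters_only_db))
# filter: -1.50 dB, amplifier: +5.50 dB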

Test of Analog, Mixed-Signal, and RF Dies

Access is also critical for testing analog, mixed-signal, and RF dies. Two types of signals must be controlled or observed: analog signals, and the digital and clock signals of mixed-signal dies. It is common in SIP implementations that some of the analog signals are not connected to the external pins of the SIP, so there is no direct access to these signals. Fortunately, for static or low-frequency signals, access to internal analog nodes is possible using the IEEE 1149.4 standard, in which two pads, AT1 and AT2, are used to transmit stimuli and receive responses. According to the standard, the maximum frequency must be lower than 10 kHz and the resolution no greater than 16 bits. If the IEEE 1149.4 circuitry is not available on the die, or if the required signal frequency or resolution is too high, a possible solution is to add internal access nodes in the SIP. This can introduce disturbances (load modifications, parasitic effects, etc.) in critical signal paths and decrease the system performance. For critical signals, the only viable solution is to have BIST or DFT integrated into the die to preserve signal integrity.
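The choice among these access options can be summarized as a simple decision rule. The sketch below encodes it, using the 10-kHz and 16-bit limits quoted above; the remaining criteria are illustrative simplifications.

def analog_access_strategy(freq_hz, resolution_bits, has_1149_4, signal_is_critical):
    """Rough decision rule summarizing the access options discussed above.

    The 10-kHz / 16-bit limits for the IEEE 1149.4 analog test bus are the values
    quoted in the text; everything else is an illustrative simplification.
    """
    if has_1149_4 and freq_hz < 10e3 and resolution_bits <= 16:
        return "use the IEEE 1149.4 analog test bus (AT1/AT2)"
    if not signal_is_critical:
        return "add an internal access node routed to an SIP pin (watch loading and parasitics)"
    return "rely on BIST/DFT inside the die to preserve signal integrity"

print(analog_access_strategy(1e3, 12, has_1149_4=True, signal_is_critical=False))
print(analog_access_strategy(5e6, 10, has_1149_4=True, signal_is_critical=True))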

The digital signals of mixed-signal dies, which are generally used to control the analog portion of the die, seldom have direct access from SIP pins. However, these digital signals are often controllable through another fully digital die. For characterization of a mixed-signal die, these digital control signals must be operated at speed, so the IEEE 1149.1 standard cannot fulfill this requirement. Including a transparent mode in the digital dies represents one possible solution. As illustrated in Figure 5.22 for digital die testing, this mode allows a direct transmission of digital signals from the primary inputs of the SIP to the digital pins of the embedded mixed-signal die. The concept of transparent mode may seem simple, but routing several digital signals simultaneously, while preserving efficient control, is more complex than in a stand-alone configuration of the mixed-signal die. For instance, the delay introduced by the bypass configuration of the digital die is neither accurately known nor controllable. Furthermore, the required synchronization between digital signals, clocks, and analog signals is a challenge. The control of clock signals is a problem by itself because of jitter effects. Another problem is how to choose, a priori, the best candidates for transparent mode if we do not know which pins are likely to be available on the SIP package.

RF dies would seem to represent the worst-case scenario because of their high frequencies and low analog levels. In practice, however, RF signals are rarely purely internal to the SIP, so direct access is available in most applications.

MEMS

Microelectromechanical systems (MEMS) correspond to the extreme case of heterogeneous systems. A typical MEMS device can be an accelerometer, pressure sensor, temperature or humidity sensor, microfluidic system, or bioMEMS device, among others [Bao 2000]. The first problem in MEMS testing begins with the required test equipment. MEMS devices are generally dedicated to generating or monitoring nonelectrical signals. Consequently, the test equipment should allow generation and measurement of sound, light, pressure, motion, or even fluidics. Because of its price, the difficulty of implementing it, and the long associated test time, using this type of equipment for production test (especially at the wafer level) is rarely an option [Mir 2004]. In a production test environment, only fully electrical signals are actually viable. In this context, two approaches are possible:

  1. Perform an indirect structural or functional test on an electrical signal that would be an image of the physical signal associated with the MEMS under test.

  2. Implement some DFT circuitry allowing the physical signals associated with the MEMS to be converted into electrical signals, or electrical stimuli to be converted into physical excitations. This approach is used on the most famous MEMS device, the accelerometer from Analog Devices [Analog 2007], where a network of capacitors converts an electrical stimulus into a force equivalent to a 10g acceleration [Allen 1989]; a minimal version of this calculation is sketched after this list.
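The electrostatic self-test calculation behind this approach reduces to the parallel-plate force formula. The sketch below uses assumed, order-of-magnitude parameters (not the actual Analog Devices design values), chosen so that the result lands near a 10g equivalent acceleration.

EPSILON_0 = 8.854e-12   # vacuum permittivity, F/m

def self_test_acceleration(voltage_v, electrode_area_m2, gap_m, proof_mass_kg):
    """Equivalent acceleration of a parallel-plate electrostatic self-test.

    Parallel-plate force F = eps0 * A * V^2 / (2 * d^2), and a = F / m.
    All numbers below are assumed, order-of-magnitude values for illustration,
    not the actual Analog Devices design parameters.
    """
    force_n = EPSILON_0 * electrode_area_m2 * voltage_v ** 2 / (2.0 * gap_m ** 2)
    return force_n / proof_mass_kg

# Assumed values: 5-V self-test, 400 um^2 electrode area, 1.5-um gap, 0.2-ug proof mass.
a_ms2 = self_test_acceleration(5.0, 400e-12, 1.5e-6, 0.2e-9)
print("equivalent acceleration: {:.1f} g".format(a_ms2 / 9.81))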

Another major challenge in MEMS testing stems from the significant influence of the package. MEMS characteristics depend on the properties and the quality of the package used. For example, the frequency of a mechanical resonator is directly linked to the humidity (moisture) inside the package. Therefore, we face a paradox with expensive MEMS packaging: because the package itself is required for correct operation, it is difficult to detect defective MEMS before packaging. As a result, the cost of MEMS testing can be prohibitive because of the price of rejected packaged devices [Charlot 2001].

For MEMS integration into an SIP, the classical problems of MEMS testing are exacerbated. Indeed, the presence of additional active dies close to the MEMS might disturb and modify the MEMS quality observed at the system level. Moreover, because several MEMS can be mounted in the same SIP, the test needs to manage both stimulus generation and response analysis over various types of nonelectrical signals. As a result, the alternative techniques relying only on electrical signals are the only viable options.

From the package standpoint, the SIP concept poses new challenges. For monolithic MEMS in CMOS technology, direct integration of the bare MEMS onto the passive substrate is conceivable. For more complex MEMS, the bare die can be flipped onto the passive substrate. The new challenges then involve achieving a perfect etching and sealing of the cavity and guaranteeing the cavity quality during the life of the system. In this context, one solution is to add additional simple MEMS into the cavity to monitor the cavity characteristics as illustrated in Figure 5.24.

Figure 5.24. Cavity monitoring via additional sensor.

Considering access to MEMS in the SIP, for both smart MEMS that include significant digital processing and simple analog sensors, the problem is equivalent to that of digital and mixed-signal dies. The solutions are thus similar to those described earlier, depending on the nature of the electrical signal to be accessed.

Concluding Remarks

A system-in-package (SIP) is a packaged device composed of two or more embedded bare dies and, in most cases, passive components. The SIP technology has found many applications, in particular in the consumer electronics industry, such as cellular handsets. An SIP provides a system or subsystem solution in a single package, using various types of carriers and interconnect technologies.

Testing these complex SIP devices first requires extensive testing of the bare dies to reach the desired known-good-die (KGD) quality. If the KGD quality cannot be guaranteed, then each embedded component (bare die) must be tested within the SIP. After packaging, the assembled SIP must be tested from a functional point of view.

In this chapter, the problems of testing bare dies were described and analyzed. We illustrated the need for implementing IEEE 1149.1 boundary scan in the assembled dies to test die-to-die and pad-to-die interconnects. In addition, we discussed the test limitations when a bare die is an analog, mixed-signal, RF, or MEMS component. The mechanical and electrical limitations of probing technologies were also described, along with a number of means to improve reliability. At the system level, we described a path-based technique using a transmitter (Tx) and receiver (Rx) pair to perform functional system testing, where tests are conducted to measure the responses of the SIP against its functional specifications using bit error rate (BER) and error vector magnitude (EVM). To reduce test cost, a loopback technique was then described that creates a loop between the Tx and Rx paths during testing. Both techniques make use of the bidirectional architecture of communication-oriented circuits.

In its main application fields today, the SIP is moving toward ever more sophisticated packaging technologies, which will require new test solutions. The trend toward more functionality combined with more communication features for emerging applications, such as health care, smart lighting, or ambient computing, drives the integration of a large variety of sensors and actuators. Consequently, heterogeneous SIP implementations will be developed, posing many new test challenges.

Exercises

5.1

(SOC versus SIP) Explain the major differences and test challenges between a system-on-chip (SOC) and a system-in-package (SIP).

5.2

(Yield and Profit Using an SIP) Assume that a system-in-package (SIP) contains four active dies as follows:

Die 1 is a digital core with a price of 50 cents and an estimated defect level of 1000 PPM.

Die 2 is a complex mixed-signal die; the price of this die is $1, and the defect level is estimated at 100 PPM.

Die 3 is an RF die; its price is only $1, but the defect level is 1000 PPM because of the inefficiency of wafer testing.

Die 4 is a $20 processor with a low defect level estimated at 10 PPM. The price of the package and passive substrate is $5.

  1. Estimate the final yield of this SIP if no additional failure appears during the assembly process.

  2. Estimate the direct profit of using incremental testing during the assembly phase if the cost of handling (ATE, time, etc.) is omitted.

5.3

(Known-Good-Die) List the three most significant limiting factors that must be addressed in order to achieve a known-good-die (KGD) quality level.

5.4

(Functional System Testing) Explain the differences between path-based test and loopback test techniques.

5.5

(Functional System Testing) Give the two main approaches to solve the test equipment issues.

5.6

(SIP-TAP) List and justify three major features of a test access port (TAP) when it is used at the SIP level (SIP-TAP).

5.7

(MEMS Testing) List the most critical factors for MEMS testing in the context of SIP.

Acknowledgments

The authors wish to thank Peter O’Neill of Avago Technologies, Herbert Eichinger of Infineon Technologies, Dr. Florence Azais of LIRMM, Professor Sule Ozev of Duke University, and Professor Charles E. Stroud of Auburn University for reviewing the text and providing valuable comments. They also would like to thank Dr. Christian Landrault of LIRMM and Frans de Jong of NXP Semiconductors for their invaluable technical and editorial advice.

References

Books

Introduction

Bare Die Test

Functional System Test

Test of Embedded Components
