2.6 Codes and Encoding

Throughout this book, we consider a rate c02-math-0567 binary encoder (see Fig. 2.18), where the information bits c02-math-0568 are mapped onto c02-math-0569 sequences of code bits c02-math-0570, c02-math-0571, where

2.78 equation

Figure 2.18 The encoder ENC in Fig. 2.7 has rate c02-math-0573 and c02-math-0574 different “classes” of bits

Here, the index c02-math-0575 in the sequences c02-math-0576 represents a “class” of code bits sharing certain properties.5 While this description is not entirely general,6 it will allow us to highlight the encoder's properties and explain how these should be exploited during the design of BICM transceivers.

The encoder is a mapper between the information sequences c02-math-0578 and the sequences c02-math-0579, where c02-math-0580. The ensemble of all codewords c02-math-0581 is called a code, which we denote by c02-math-0582. In this book, we will consider linear binary codes where the relationship between the input and output bits can be defined via the generator matrix c02-math-0583

where

gathers all the code bits, and all operations are in c02-math-0586.
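To make the generator-matrix encoding concrete, the sketch below encodes an information vector with a small, hypothetical generator matrix over GF(2); the matrix entries and code parameters (n = 6, k = 3) are our own choices for illustration, not taken from the text:

```python
import numpy as np

# Hypothetical generator matrix of a small (n = 6, k = 3) linear binary
# code; the entries are illustrative and not taken from the text.
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

def encode(u, G):
    """Codeword c = uG with all operations in GF(2), i.e., modulo 2."""
    return (np.asarray(u, dtype=int) @ G) % 2

c = encode([1, 0, 1], G)
print(c.tolist())  # [1, 0, 1, 0, 1, 1]
```

Linearity is immediate from this construction: the mod-2 sum of any two codewords is again a codeword.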

In order to analyze the performance of a receiver which takes coding into account, we need to analyze the decoding process. The latter depends, of course, on the structure of the code, and all the other elements in the BICM system. Nevertheless, we may start with the following simplified assertion: the probability of decoding a codeword c02-math-0587 that is different from the transmitted one c02-math-0588 depends on the HD c02-math-0589. This motivates the following definition of the distance spectrum (DS) of a code.
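To make the DS concrete: for a linear code, the HD between two codewords equals the Hamming weight of their mod-2 sum, which is itself a codeword, so the distance spectrum can be read off the codeword weight distribution. A brute-force enumeration for a small, hypothetical generator matrix (our choice, not from the text):

```python
import itertools
from collections import Counter
import numpy as np

# Hypothetical (n = 6, k = 3) generator matrix, used only for illustration
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

def weight_distribution(G):
    """Count the Hamming weights of all 2^k codewords; for a linear code
    this also gives the distances from any fixed codeword to all others."""
    k = G.shape[0]
    ws = Counter()
    for u in itertools.product([0, 1], repeat=k):
        c = (np.array(u) @ G) % 2
        ws[int(c.sum())] += 1
    return dict(sorted(ws.items()))

print(weight_distribution(G))  # {0: 1, 3: 4, 4: 3} -> minimum distance 3
```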

The next lemma particularizes Definition 2.21 to the case of linear codes.

As the HD provides an idea about how well the codewords are “protected” against decoding errors, we may also be interested in analyzing this protection for each class of code bits. This motivates the following definition of the generalized distance spectrum (GDS).

In this book, we consider two types of linear binary codes: CCs and TCs, where the codewords are obtained using a CENC and a turbo encoder (TENC), respectively.7 CCs are an example of codes which can be decoded optimally with low decoding complexity and for which we can find the DS using well-known analytical approaches. TCs, on the other hand, are an example of capacity-approaching codes, where the corresponding analytical tools coincide in part with those used for the analysis of CCs.

The encoder calculates the codewords in a deterministic manner via (2.79). However, as the input bit sequence c02-math-0606 is modeled as a sequence of i.u.d. binary random variables, the code bits c02-math-0607 are also modeled as random variables whose properties depend on the structure of c02-math-0608. To get insight into the probabilistic model of the code bits, we will need some simple relationships, which we define in the following.

The next theorem gives conditions on the matrix c02-math-0649 under which the code bits are i.u.d. We assume therein that the columns of c02-math-0650 are linearly independent, which means that for any c02-math-0651 with c02-math-0652 we have c02-math-0653.

If a matrix has fewer rows than columns, its columns cannot be linearly independent. Consequently, for any c02-math-0669, the generator matrix c02-math-0670 never satisfies this condition, and thus, the output bits c02-math-0671 should be modeled as mutually dependent random variables. On the other hand, a subset of code bits can be independent if the corresponding subset of the columns of c02-math-0672 is linearly independent. In other words,

2.88 equation

can be mutually independent provided that c02-math-0674 has linearly independent columns, where c02-math-0675 contains a subset of the columns of c02-math-0676. In Section 2.6.2, we will explain how to select the code bits to guarantee their independence for the particular case of CENCs.
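The column-independence condition can be checked mechanically. The sketch below (again with a hypothetical generator matrix of our choosing) verifies that a subset of columns has full rank over GF(2), and that with i.u.d. input bits the corresponding code bits are then jointly uniform:

```python
import itertools
from collections import Counter
import numpy as np

# Hypothetical (6, 3) generator matrix, for illustration only
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

def gf2_rank(M):
    """Gaussian elimination over GF(2); returns the rank of M."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # bring the pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # eliminate the column
        rank += 1
    return rank

cols = [0, 3, 4]                      # a subset of columns of G
Gs = G[:, cols]
print(gf2_rank(Gs))                   # 3: the columns are linearly independent

# With i.u.d. input bits, the selected code bits are jointly uniform:
counts = Counter(tuple(int(b) for b in (np.array(u) @ Gs) % 2)
                 for u in itertools.product([0, 1], repeat=3))
print(len(counts), set(counts.values()))  # 8 patterns, each occurring once
```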

2.6.1 Convolutional Codes

We use the name convolutional codes to denote the codes obtained using CENCs. We focus on the so-called feedforward CENCs, where the sequence of input bits c02-math-0677 is reorganized into c02-math-0678 sequences c02-math-0679 so that c02-math-0680. At each discrete-time instant c02-math-0681, the information bits c02-math-0682 are fed to the CENC shown in Fig. 2.19. The output of the CENC is fully determined by c02-math-0683 shift registers and the way the input sequences are connected (through the registers) to its outputs (see Fig. 2.20 for an example). We denote the length of the c02-math-0684th shift register by c02-math-0685, with c02-math-0686, the overall constraint length by c02-math-0687, and the number of states by c02-math-0688. The rate of such an encoder is given by c02-math-0689.
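As a concrete sketch of such a feedforward encoder, restricted to a single input sequence for brevity, the code below implements a rate-1/n CENC; the (5, 7) taps used in the example are the classic 4-state encoder, chosen by us for illustration rather than prescribed by the text:

```python
def cenc_encode(u, gens, nu):
    """Feedforward rate-1/n convolutional encoder with memory nu.
    Each generator in `gens` is a tap vector of length nu + 1 (MSB first);
    an output bit is the mod-2 inner product of the taps with the register,
    whose first position holds the current input bit."""
    state = [0] * (nu + 1)
    out = []
    for bit in u:
        state = [bit] + state[:-1]           # shift the register
        for g in gens:
            out.append(sum(gi & si for gi, si in zip(g, state)) % 2)
    return out

# The classic (5, 7) encoder: octal 5 -> taps 101, octal 7 -> taps 111
gens = [[1, 0, 1], [1, 1, 1]]
print(cenc_encode([1, 0, 1, 1], gens, nu=2))  # [1, 1, 0, 1, 0, 0, 1, 0]
```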


Figure 2.19 The encoder ENC in Fig. 2.7 can be a binary CENC with rate c02-math-0690


Figure 2.20 Rate c02-math-0711 CENC with encoder matrix given by (2.92)

A CENC is fully determined by the connection between the input and output bits, which is defined by the CENC matrix

where the binary vector c02-math-0692 contains the convolution coefficients relating the input bits c02-math-0693 and the output bits c02-math-0694, i.e.,

where c02-math-0696 and c02-math-0697, with c02-math-0698c02-math-0699.

It is customary to treat the vectors c02-math-0700 in (2.89) as the binary representation of a number (with c02-math-0701 being the MSB) given in decimal or (most often) in octal notation, as we do in the following example.
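A small sketch of this convention (the helper name and the zero-padding length are our own):

```python
def octal_to_taps(g_octal, length):
    """Binary tap vector (MSB first) of a generator given in octal,
    zero-padded on the left to `length` coefficients."""
    return [int(b) for b in bin(g_octal)[2:].zfill(length)]

print(octal_to_taps(0o5, 3))    # [1, 0, 1]
print(octal_to_taps(0o7, 3))    # [1, 1, 1]
print(octal_to_taps(0o171, 7))  # [1, 1, 1, 1, 0, 0, 1]
```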

Usually, the design of CENCs relies on a criterion related to the performance of the codes for a particular transmission model. Quite often, as we will also do in Example 3.3, binary modulation is implicitly assumed. In such a case, the parameter of the CENC that determines the performance is its free Hamming distance (FHD), which is defined as

2.93 equation

In general, having defined an optimization criterion, “good” CENCs can be found by an exhaustive numerical search. For example, given the rate c02-math-0713 and c02-math-0714, maximum free Hamming distance (MFHD) CENCs can be found and tabulated by searching over the CENC universe c02-math-0715, which is the set of all binary matrices c02-math-0716. Additional constraints are often imposed, e.g., catastrophic encoders8 are avoided, or we may require the outputs c02-math-0719 to be u.d. (see Section 2.6.2). The search criterion can be further refined to take into account not only the MFHD but also the code's weight distribution (WD) c02-math-0720 or its input-dependent weight distribution (IWD) c02-math-0721.9 This is the idea behind the so-called optimal distance spectrum (ODS) CENCs which consider distances c02-math-0722, c02-math-0723 as well as the frequency of occurrence of events associated with each of these distances.
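For rate-1/n feedforward encoders, the FHD itself can be computed by a shortest-path search over the state diagram: the minimum output Hamming weight over all paths that diverge from and remerge with the all-zero state. The sketch below is our own implementation of this idea, assuming generators given as integer bit masks with the MSB acting on the current input bit:

```python
import heapq

def free_distance(gens, nu):
    """Free Hamming distance of a rate-1/n feedforward CENC: the minimum
    output weight over paths that diverge from and remerge with the
    all-zero state, found by Dijkstra over the 2^nu-state diagram.
    Generators are integer masks of nu + 1 coefficients, MSB first."""

    def step(state, bit):
        reg = (bit << nu) | state                    # register contents
        w = sum(bin(g & reg).count("1") % 2 for g in gens)
        return reg >> 1, w                           # next state, out weight

    start, w0 = step(0, 1)                           # forced divergence
    dist = {start: w0}
    heap = [(w0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if d > dist.get(s, float("inf")):
            continue                                 # stale heap entry
        if s == 0:
            return d                                 # remerged: done
        for bit in (0, 1):
            t, w = step(s, bit)
            if d + w < dist.get(t, float("inf")):
                dist[t] = d + w
                heapq.heappush(heap, (d + w, t))
    return None

print(free_distance([0o5, 0o7], nu=2))  # 5, matching Table 2.1 at memory 2
```

An exhaustive MFHD search then amounts to running this routine over all candidate generator sets of a given size and keeping the maximizers.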

In Table 2.1, we show the ODS CENCs for c02-math-0724 and c02-math-0725 for different values of c02-math-0726. In this table, we also show the FHD c02-math-0727 of each encoder and the total input HW of the sequences that generate codewords with HW c02-math-0728.

Table 2.1 c02-math-0729 and c02-math-0730 ODS CENCs for different values of c02-math-0731. The table shows the encoder c02-math-0732, the FHD c02-math-0733, the first element of the WD c02-math-0734, and the first element of the IWD c02-math-0735

                  c02-math-0736                                        c02-math-0737
c02-math-0738   Encoder          FHD   first WD   first IWD    Encoder          FHD   first WD   first IWD
2               c02-math-0747      5          1           1    c02-math-0748      8          2           3
3               c02-math-0749      6          1           2    c02-math-0750     10          3           6
4               c02-math-0751      7          2           4    c02-math-0752     12          5          12
5               c02-math-0753      8          1           2    c02-math-0754     13          1           1
6               c02-math-0755     10         11          36    c02-math-0756     15          3           7
7               c02-math-0757     10          1           2    c02-math-0758     16          1           1
8               c02-math-0759     12         11          33    c02-math-0760     18          1           2
9               c02-math-0761     12          1           2    c02-math-0762     20          3           6

2.6.2 Modeling the Code Bits in Convolutional Encoders

Let c02-math-0763 be the binary random vector modeling the CENC's output bits c02-math-0764 at time c02-math-0765. From Theorem 2.25, we conclude that for CENCs defined by c02-math-0766 with linearly independent columns and for i.u.d. information bits, the binary vectors c02-math-0767 are i.u.d., i.e., c02-math-0768. This property will be used later to analyze the ML receiver for trellis encoders, where the code bits are directly connected to the mapper, so the fact of having i.u.d. bits c02-math-0769 translates into i.u.d. symbols c02-math-0770 as well.

Let us now consider the case of BICM. As an interleaver is present between the encoder and the mapper, the bits c02-math-0771 are the bits c02-math-0772 taken from the output of the encoder at different time instants c02-math-0773 and from different encoder outputs c02-math-0774. In what follows, we try to answer the following question: can we guarantee the independence of the bits c02-math-0775 in BICM? In particular, we will analyze the conventionally assumed independence of the bits in (2.72) for the particular case of CCs.

Theorem 2.27 guarantees that, regardless of the memory c02-math-0795, code bits taken at different time instants are i.u.d., even if the separation is only one time instant. The implication of this for BICM is that the input bits to the mapper will be i.u.d. if they correspond to code bits obtained from the encoder at different time instants. In other words, to guarantee the independence of the bits c02-math-0796 at the input of the mapper, it is enough to use c02-math-0797 code bits taken at different time instants c02-math-0798 for any c02-math-0799. This condition is typically guaranteed by an appropriately designed interleaver, or by the assumption of quasirandom interleaving, in which case the condition is violated only a relatively small number of times. For more details, see Section 2.7.

For simplicity, Theorem 2.27 was given for c02-math-0808. The following corollary generalizes it to any c02-math-0809. Its proof is not included; it follows lines similar to those used for Theorem 2.27.

To clarify the conditions (2.96) and (2.97) in Corollary 2.29, consider the matrix of the CENC in Example 2.26 (c02-math-0819) given by (2.91). In this case, we have c02-math-0820, c02-math-0821, and c02-math-0822, which corresponds to (2.96) for c02-math-0823. The three relevant pairs of bits in this case are rows one and three in (2.91). Similarly, the condition in (2.97) gives c02-math-0824, c02-math-0825, and c02-math-0826, which corresponds to rows two and six in (2.91).

2.6.3 Turbo Codes

The TENC originally proposed in 1993 is based on two parallel concatenated convolutional encoders (PCCENCs) separated by an interleaver, as we show in Fig. 2.21. The encoder produces a sequence of systematic bits c02-math-0842, and two sequences of code (parity) bits c02-math-0843 and c02-math-0844. The latter is obtained by convolutionally encoding the interleaved sequence c02-math-0845 (interleaved via c02-math-0846). At the receiver's side, the decoders of the two CENCs exchange information iteratively; the exchange stops after a predefined number of iterations or when some stopping criterion is met. The encoder's interleaver is denoted by c02-math-0847 to emphasize that it is different from the interleaver c02-math-0848 used in BICM transmitters between the encoder and the mapper.
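A minimal sketch of this parallel concatenation, assuming rate-1/2 recursive systematic component encoders with hypothetical (1, 5/7) polynomials and a toy interleaver; none of these choices are prescribed by the text, and trellis termination is ignored:

```python
def rsc_parity(u, g_fb=0o7, g_ff=0o5, nu=2):
    """Parity sequence of a recursive systematic CENC (1, g_ff/g_fb).
    The systematic output is u itself, so only the parity is computed."""
    state = 0
    parity = []
    for bit in u:
        # recursion: feedback taps (on the memory bits) XOR the input
        a = bit ^ (bin(state & g_fb & ((1 << nu) - 1)).count("1") % 2)
        reg = (a << nu) | state               # register including the new bit
        parity.append(bin(reg & g_ff).count("1") % 2)
        state = reg >> 1
    return parity

def turbo_encode(u, perm):
    """PCCENC: systematic bits, parity of the first RSC, and parity of
    the second RSC fed with the interleaved information sequence."""
    u_pi = [u[i] for i in perm]               # encoder interleaver
    return list(u), rsc_parity(u), rsc_parity(u_pi)

u = [1, 0, 1, 1, 0, 0]
perm = [3, 0, 4, 1, 5, 2]                     # toy interleaver, ours
s, p1, p2 = turbo_encode(u, perm)
print(s, p1, p2)  # k input bits -> 3k output bits: rate 1/3 before puncturing
```

Puncturing some of the parity bits of p1 and p2, as described in the figure captions, is what raises the overall rate above 1/3.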


Figure 2.21 PCCENCs of rate c02-math-0849. To increase the rate c02-math-0850, some of the parity bits are removed (punctured)


Figure 2.22 PCCENCs using two identical recursive systematic CENCs of rate c02-math-0851. The overall code rate is c02-math-0852. To increase the rate, the nonsystematic bits are most often punctured, although in general systematic bits can also be punctured
