Introduction

The fundamental objective of a communication system is to reproduce at a destination point, either exactly or approximately, a message selected at another point. This principle was formalized by Claude Shannon in 1948 [SHA 48].

The communication chain is composed of a source (also called the transmitter) and a destination (also called the receiver). They are separated by a transmission channel, which may, for instance, be a wired cable in the case of ADSL transmission, an optical fiber, a wireless mobile channel between a base station and a mobile terminal or between a satellite and its receiver, a hard drive, and so forth. The last example shows that when we refer to a point, we may consider either a location or an instant in time. The main issue faced by communication systems is that the channel is subject to additive noise and may also introduce distortions to the transmitted signal. Consequently, advanced techniques must be implemented in order to reduce the impact of noise and distortions on the performance as much as possible, so that the received signal is as similar as possible to the transmitted signal.

The performance of a transmission system is evaluated by computing or measuring the error probability per information bit at the receiver, also called the bit error rate. The other major characteristics of a communication system are its complexity, its bandwidth, its consumed and transmitted power, and the useful data rate that it can transmit. Since the bandwidth of many communication systems is limited, it is highly important to maximize the spectral efficiency, defined as the ratio between the binary data rate and the bandwidth. Nevertheless, this should be done without increasing the bit error rate.
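As a purely illustrative numerical example (the figures below are not taken from any particular system), a link carrying a binary data rate of 20 Mbit/s within a bandwidth of 5 MHz has a spectral efficiency of

\[
\eta = \frac{R_b}{B} = \frac{20 \times 10^6\ \text{bit/s}}{5 \times 10^6\ \text{Hz}} = 4\ \text{bit/s/Hz}.
\]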

EXAMPLE I.1.– Consider a simple communication system that transmits an image to a receiver through a binary symmetric channel. The binary symmetric channel is described by the model given in Figure I.1 and is characterized by the transition probability p = Pr(Y = 0|X = 1) = Pr(Y = 1|X = 0).


Figure I.1. Binary symmetric channel

Figure I.2 shows an example of an image at the input and at the output of the binary symmetric channel for p = 0.1. In a communication system, we have to protect the information bits against transmission errors, while limiting the number of bits transmitted over the channel.


Figure I.2. Image at the input and output of a binary symmetric channel
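To make the channel model of Figure I.1 concrete, the following short Python sketch (not part of the original text; the function name bsc and the use of NumPy are illustrative choices) simulates a binary symmetric channel with transition probability p and checks that the empirical error rate is close to p:

```python
import numpy as np

def bsc(bits, p, rng=None):
    """Pass an array of 0/1 values through a binary symmetric channel:
    each bit is flipped independently with probability p."""
    rng = np.random.default_rng() if rng is None else rng
    flips = (rng.random(bits.shape) < p).astype(bits.dtype)
    return np.bitwise_xor(bits, flips)

# Illustrative use: a random binary "image" transmitted with p = 0.1.
x = np.random.default_rng(0).integers(0, 2, size=(64, 64), dtype=np.uint8)
y = bsc(x, p=0.1)
print("empirical bit error rate:", np.mean(x != y))   # close to 0.1
```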

This book consists of two volumes. It aims at detailing all the steps of the communication chain, represented by Figure I.3. Even though both volumes can be read independently, we will sometimes refer to some of the notions developed in the second volume.

This first volume deals with source coding and channel coding. After a presentation of the fundamental results of information theory, the different lossless and lossy source coding techniques are studied. Then, error-correcting codes (block codes, convolutional codes and concatenated codes) are detailed theoretically and their applications are presented. The second volume, Digital Communications 2: Digital Modulations, concerns the blocks located after channel coding in the communication chain. It first presents baseband and sine waveform transmissions. Then, the different steps required at the receiver to perform detection, namely synchronization and channel estimation, are studied. Multicarrier modulations and coded modulations are finally detailed.

Chapter 1 focuses on the basics of information theory, founded by Claude Shannon. The notions of entropy and average mutual information are first detailed. Then the fundamental theorems of information theory for communication systems are introduced. The lossless source coding theorem, stating that the minimum average length of the words after source coding is equal to the entropy of the source, is developed. The lossy source coding theorem is also introduced. Then the theoretical limits of error-free communication over a noisy channel are determined. Finally, after introducing different channel models, the capacity of the main transmission channels, such as the discrete memoryless channel and the additive white Gaussian noise (AWGN) channel, is evaluated.
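As an informal illustration of these quantities (a sketch with our own naming conventions, not code from the book), the entropy of a discrete source, the capacity C = 1 - H2(p) of the binary symmetric channel and the capacity C = (1/2) log2(1 + SNR) per real channel use of the AWGN channel can be computed as follows:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete probability distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                        # zero-probability terms contribute 0
    return -np.sum(p * np.log2(p))

def bsc_capacity(p):
    """Capacity of a binary symmetric channel: C = 1 - H2(p) bit per channel use."""
    return 1.0 - entropy([p, 1.0 - p])

def awgn_capacity(snr_linear):
    """Capacity of the real AWGN channel: C = 0.5 * log2(1 + SNR) bit per real dimension."""
    return 0.5 * np.log2(1.0 + snr_linear)

print(entropy([0.5, 0.5]))          # 1.0 bit: a fair binary source
print(bsc_capacity(0.1))            # about 0.531 bit per channel use
print(awgn_capacity(10**(10/10)))   # capacity at an SNR of 10 dB
```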


Figure I.3. Block diagram of a transmission chain

The aim of source coding is to represent the message with the minimum number of bits. There exist two classes of source coding: lossless and lossy source coding. In lossless source coding, the coded sequence must be designed so as to guarantee perfect reconstruction of the initial sequence by the source decoder. In lossy source coding, the aim is to minimize a fidelity criterion, such as the mean square error or a subjective quality measure, under a constraint on the bit rate. Chapter 2 describes lossless source coding techniques (Huffman algorithm, arithmetic coding, LZ78 and LZW algorithms) and lossy source coding techniques (JPEG). The coding of analog sources is also considered: in that case, it is necessary to define the different quantization methods. Applications to speech coding and audio coding are finally given.
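As a minimal illustration of a lossless source coding technique discussed in Chapter 2, the following sketch (our own simplified implementation, not the book's) builds a binary Huffman code from a set of symbol probabilities and computes the resulting average codeword length:

```python
import heapq

def huffman_code(symbol_probs):
    """Build a binary Huffman code from {symbol: probability}.
    Returns {symbol: codeword string}."""
    # Each heap entry: (probability, tie-breaker, {symbol: partial codeword}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(symbol_probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)     # two least probable nodes
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, tie, merged))
        tie += 1
    return heap[0][2]

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman_code(probs)
avg_len = sum(p * len(code[s]) for s, p in probs.items())
print(code)       # e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
print(avg_len)    # 1.75 bits, equal to the source entropy for this distribution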

The aim of channel coding is to protect the message against the perturbations of the transmission channel by adding redundancy to the compressed message. Channel codes, also called error-correcting codes, are characterized by their redundancy and their error-correction capability. Three classes of error-correcting codes are considered: block codes, convolutional codes and concatenated codes (turbo codes and LDPC codes). These codes are developed in Chapters 3, 4 and 5, respectively. For each class of codes, efficient techniques of error detection and correction are described. The block codes studied in Chapter 3 divide the message into blocks of K symbols that are coded using N symbols. Compared to block codes, the convolutional codes considered in Chapter 4 transform a semi-infinite sequence of information words into another semi-infinite sequence of codewords. These codes are classically decoded using the Viterbi algorithm. The concatenated codes offer the best performance thanks to the use of iterative decoding and soft-input soft-output decoding. These modern coding techniques are detailed in Chapter 5.
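As a small illustration of the block coding principle (K information symbols encoded into N coded symbols), the following sketch implements a textbook (7,4) Hamming code; the matrices and function names are our own illustrative choices, not necessarily those used in Chapter 3:

```python
import numpy as np

# A (7,4) Hamming code as a toy block code: K = 4 information bits are
# encoded into N = 7 coded bits (code rate R = 4/7), and any single bit
# error can be corrected.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])           # generator matrix (systematic form)
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])           # parity-check matrix, G @ H.T = 0 mod 2

def encode(u):
    return u @ G % 2

def decode(r):
    s = r @ H.T % 2                       # syndrome
    if s.any():                           # nonzero syndrome: locate the single error
        err = np.where((H.T == s).all(axis=1))[0][0]
        r = r.copy()
        r[err] ^= 1
    return r[:4]                          # systematic code: first 4 bits are the message

u = np.array([1, 0, 1, 1])
c = encode(u)
c_noisy = c.copy()
c_noisy[5] ^= 1                           # flip one coded bit
print(decode(c_noisy))                    # recovers [1 0 1 1]
```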

Most chapters detail fundamental notions of digital communications and are thus necessary to comprehend the whole communication chain. They provide several degrees of understanding, giving an introduction to these techniques while also presenting more advanced details. In addition, they are illustrated by examples of implementations in current communication systems. Exercises are proposed at the end of each chapter, so that the reader may put the presented notions into practice.
