Digital Communications 1
by Mylène Pischella, Didier Le Ruyet

Table of Contents
Cover
Title
Copyright
Preface
List of Acronyms
Notations
Introduction
1: Introduction to Information Theory
1.1. Introduction
1.2. Review of probabilities
1.3. Entropy and mutual information
1.4. Lossless source coding theorems
1.5. Theorem for lossy source coding
1.6. Transmission channel models
1.7. Capacity of a transmission channel
1.8. Exercises
2: Source Coding
2.1. Introduction
2.2. Algorithms for lossless source coding
2.3. Sampling and quantization
2.4. Coding techniques for analog sources with memory
2.5. Application to image and sound compression
2.6. Exercises
3: Linear Block Codes
3.1. Introduction
3.2. Finite fields
3.3. Linear block codes
3.4. Decoding of binary linear block codes
3.5. Performance of linear block codes
3.6. Cyclic codes
3.7. Applications
3.8. Exercises
4: Convolutional Codes
4.1. Introduction
4.2. Mathematical representations and hardware structures
4.3. Graphical representation of the convolutional codes
4.4. Free distance and transfer function of convolutional codes
4.5. Viterbi’s algorithm for the decoding of convolutional codes
4.6. Punctured convolutional codes
4.7. Applications
4.8. Exercises
5: Concatenated Codes and Iterative Decoding
5.1. Introduction
5.2. Soft input soft output decoding
5.3. LDPC codes
5.4. Parallel concatenated convolutional codes or turbo codes
5.5. Other classes of concatenated codes
5.6. Exercises
Appendix A: Proof of the Channel Capacity of the Additive White Gaussian Noise Channel
Appendix B: Calculation of the Weight Enumerator Function IRWEF of a Systematic Recursive Convolutional Encoder
Bibliography
Index
End User License Agreement
List of Illustrations
Introduction
Figure I.1. Binary symmetric channel
Figure I.2. Image at the input and output of a binary symmetric channel
Figure I.3. Block diagram of a transmission chain
1: Introduction to Information Theory
Figure 1.1. Entropy of a binary source
Figure 1.2. Relations between entropies and average mutual information
Figure 1.3. Probability density and the set of typical sequences
Figure 1.4. Block diagram of the studied chain for source coding
Figure 1.5. Source coding
Figure 1.6. Tree associated with the source code of example 3
Figure 1.7. Kraft inequality
Figure 1.8. Entropy rate
Figure 1.9. Block diagram of the coder-decoder
Figure 1.10. Shannon distortion-rate function for a Gaussian source of unit variance
Figure 1.11. Uniform and non-uniform quantization L = 8 for a Gaussian source with variance σ_x = 1
Figure 1.12. Binary symmetric channel
Figure 1.13. Conditional entropy H(X|Y) versus q and p
Figure 1.14. Discrete channel without memory
Figure 1.15. Erasure channel
Figure 1.16. Case C = H_max(X)
Figure 1.17. Case C = 0
Figure 1.18. Communication system with channel coding
Figure 1.19. Mutual information I(X;Y) versus q and p
Figure 1.20. Capacity of the binary symmetric channel versus p
Figure 1.21. Capacity of the additive white Gaussian noise channel
Figure 1.22. Spheres of noise illustration
Figure 1.23. Maximum spectral efficiency of an additive white Gaussian noise channel
Figure 1.24. Spectral efficiency versus E_b/N_0
2: Source Coding
Figure 2.1. Example of Huffman’s encoding
Figure 2.2. Probabilities of occurrence of the characters
Figure 2.3. Example of partitioning
Figure 2.4. Example of arithmetic coding
Figure 2.5. Tree associated with the strings memorized in the dictionary
Figure 2.6. Tree of the prototype strings
Figure 2.7. Sampling and reconstruction
Figure 2.8. Uniform quantization L = 8
Figure 2.9. Uniform quantization L = 8
Figure 2.10. Non-uniform quantization L = 8
Figure 2.11. Non-uniform quantization L = 8
Figure 2.12. Block diagram of the PCM coder
Figure 2.13. A-law and μ-law. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 2.14. Example of a realization of a source with memory as a function of time
Figure 2.15. Example of realization of a source with memory projected on a plane
Figure 2.16. Example of scalar quantization. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 2.17. Example of vector quantization. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 2.18. Block diagram of the order-P predictor
Figure 2.19. Delta modulator
Figure 2.20. Delta demodulator
Figure 2.21. Example of behavior of … in a delta modulator
Figure 2.22. Ideal block diagram of a DPCM transmission chain
Figure 2.23. Block diagram of the DPCM coder
Figure 2.24. DPCM decoder
Figure 2.25. Block diagram of the transform coding
Figure 2.26. Example of scalar quantization after a Karhunen–Loève transform. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 2.27. Subband coding
Figure 2.28. Zigzag serialization
Figure 2.29. Example of speech signal (“this is”) in the time domain
Figure 2.30. Simplified block diagram of the LPC coder
Figure 2.31. Block diagram of the CELP coder
Figure 2.32. Another version of the CELP coder
Figure 2.33. Absolute threshold curve
Figure 2.34. Masking level in frequency. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 2.35. Block diagram of an MPEG audio coder
Figure 2.36. Impulse response of the prototype filter
Figure 2.37. Frequency response of the filterbank in the bandwidth [0; 22 kHz]
Figure 2.38. Probability density
Figure 2.39. Probability density
Figure 2.40. Block diagram of the predictive coding system
Figure 2.41. Block diagram of the modified coder
3: Linear Block Codes
Figure 3.1. Parity check code C_2(3, 2)
Figure 3.2. Hamming sphere
Figure 3.3. Bounds on the minimum distance for linear block codes with q = 2
Figure 3.4. Bounds on the minimum distance for linear block codes with q = 256
Figure 3.5. Error exponent function versus rate
Figure 3.6. Poltyrev bound rate versus length N
Figure 3.7. Sphere packing bound ratio E_b/N_0 versus N
Figure 3.8. Two versions of the Tanner graph for the Hamming code (7, 4)
Figure 3.9. Branch of a trellis section
Figure 3.10. Trellis diagram obtained from the parity check matrix of the code C_3
Figure 3.11. Trellis diagram of Hamming code (7,4)
Figure 3.12. Block diagram of a transmission chain with an additive white Gaussian noise channel
Figure 3.13. Branch metric calculation
Figure 3.14. Cumulated metric calculation after the reception of the 1st bit of the word r. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 3.15. Cumulated metric calculation after the reception of the 4th bit of the word r. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 3.16. Cumulated metric calculation
Figure 3.17. Determination of the estimated sequence. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 3.18. Branch metric calculation
Figure 3.19. Cumulated metric calculation. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 3.20. Cumulated metric calculation. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 3.21. Determination of the estimated sequence. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 3.22. Iterative decoding on the Tanner graph
Figure 3.23. Decision regions associated with the codewords
Figure 3.24. Example
Figure 3.25. Coding gain of different codes
Figure 3.26. Performance comparison of hard and soft input decoding
Figure 3.27. a) Stop-and-wait protocol b) go-back-N protocol with N_f = 4
Figure 3.28. Hardware structure of the division by 1 + p + p^2
Figure 3.29. Sequencing of the division by 1 + p + p^2
Figure 3.30. Hardware structure of the Hamming encoder with g(p) = 1 + p + p^3
Figure 3.31. Hardware structure of the Hamming encoder with premultiplication for g(p) = 1 + p + p^3
Figure 3.32. Complete hardware structure of the Hamming encoder for g(p) = 1 + p + p^3
Figure 3.33. Trellis diagram of the Hamming code defined by the polynomial g(p) = 1 + p + p^3
Figure 3.34. Hardware structure of the decoder for the Hamming code with g(p) = 1 + p + p^3
Figure 3.35. Hardware structure of the decoder for the Hamming code (7,4)
Figure 3.36. SER_O = f(SER_I) for the (255, 239, 17) Reed-Solomon code
Figure 3.37. Trellis diagram of the (8,4) Reed-Muller code
Figure 3.38. Trellis diagram of the (7, 4) code
Figure 3.39. Trellis diagram of the (3,2) code
Figure 3.40. Trellis diagram
4: Convolutional Codes
Figure 4.1. Convolutional encoder
Figure 4.2. Non-recursive convolutional encoder of rate 1/2, M = 2, g1 = 7, g2 = 5
Figure 4.3. Non-recursive convolutional encoder of rate 1/2, M = 6, g1 = 133, g2 = 171
Figure 4.4. Systematic recursive convolutional encoder of rate 1/2, M = 2
Figure 4.5. Generic non-recursive convolutional encoder of rate 1/n
Figure 4.6. State transition diagram for the non-recursive convolutional code g1 = 7, g2 = 5
Figure 4.7. Elementary trellis of a non-recursive convolutional encoder g1 = 7, g2 = 5
Figure 4.8. Trellis diagram for the non-recursive convolutional encoder g1 = 7, g2 = 5
Figure 4.9. TWL graph of a systematic convolutional coder of rate 1/2
Figure 4.10. Trellis diagram of the non-recursive convolutional code
Figure 4.11. Modified state diagram of a non-recursive convolutional code
Figure 4.12. Viterbi decoding for a non-recursive convolutional encoder g1 = 7, g2 = 5
Figure 4.13. Trellis diagram of a punctured convolutional encoder
5: Concatenated Codes and Iterative Decoding
Figure 5.1. Factor graph of the parity check code (3,2)
Figure 5.2. Factor graph of the repetition code (3,1)
Figure 5.3. Message flows for the calculation of μ_c→T(c) and μ_T→c(c)
Figure 5.4. Level curves of μ_T→c(c) as a function of … and …: a) sum-product algorithm and b) minimum-sum algorithm
Figure 5.5. Tanner graph of the considered code (7,4)
Figure 5.6. Sum-product algorithm applied to the factor graph of the code (7, 4)
Figure 5.7. Minimum-sum algorithm for the code (7, 4)
Figure 5.8. Quantities α_t(m), γ_t(m′, m) and β_t+1(m) on the trellis of a convolutional code
Figure 5.9. Trellis of the coder
Figure 5.10. Calculation of α
Figure 5.11. Calculation of β
Figure 5.12. Calculation of ln α
Figure 5.13. Calculation of ln β
Figure 5.14. Communication system
Figure 5.15. Example of a factor graph for a recursive convolutional encoder of rate 1/2 used for the forward-backward algorithm
Figure 5.16. Message passing when performing the forward-backward algorithm
Figure 5.17. Tanner graph of a (N, d_c, d_T) regular LDPC code
Figure 5.18. Tanner graph of a (12, 3, 4) regular LDPC code
Figure 5.19. Bipartite graph with girth of 4
Figure 5.20. Parity check matrix of a (638,324) QC-LDPC code
Figure 5.21. Parity check matrix H in lower triangular form
Figure 5.22. Graphical illustration of the evolution of the erasure probability as a function of the iterations for a regular LDPC code with d_c = 6, d_T = 3. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 5.23. Graphical illustration of the evolution of the erasure probability as a function of the iterations for an irregular LDPC code. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 5.24. Factor graph of the (N, d_c, d_T) regular LDPC code
Figure 5.25. Example of local tree for a (d_c, d_T) LDPC code
Figure 5.26. Message exchange for the calculation of … and …
Figure 5.27. Bit error rate versus E_b/N_0 for the (4320, 3557) regular LDPC code with d_c = 17 and d_T = 3
Figure 5.28. Comparison of the mean and variance calculated using the Gaussian approximation and using density evolution. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 5.29. Parallel concatenated convolutional encoder
Figure 5.30. TWL graph of a turbo encoder
Figure 5.31. Parity check matrix of a turbo encoder composed of two RSC encoders (7,5) with R_T = 1/3
Figure 5.32. Comparison of the average performance of turbo codes with rate R_t = 1/3 composed of RSC (7,5) encoders (dashed lines) and RSC (15,17) encoders (solid lines), with a uniform interleaver of size K = 100 and K = 1000
Figure 5.33. Contribution of the input sequences of low weight to the bit error rate of a turbo code of rate R_t = 1/3 composed of RSC (15,17) encoders and a uniform interleaver of size K = 100. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
Figure 5.34. Contribution of the input sequences of low weight to the bit error rate of a turbo code of rate R_t = 1/3 composed of RSC (15,17) encoders and a uniform interleaver of size K = 1000
Figure 5.35. Structure of the iterative decoder
Figure 5.36. Example of factor graph for a turbo code of rate 1/3
Figure 5.37. BER = f(E_b/N_0) for a turbo code of rate 1/2 composed of two RSC encoders and an S-random interleaver of size K = 1024
Figure 5.38. Curves BER = f(E_b/N_0) for a turbo code of rate 1/2 composed of two RSC encoders and a pseudo-random interleaver of size K = 65536
Figure 5.39. EXIT charts for an RSC encoder (13,15) of rate R = 1/2
Figure 5.40. Evolution of the average mutual information as a function of the iterations for a turbo code
Figure 5.41. Example of primary cycle
Figure 5.42. Interleavers a) L-random K = 320 and b) QPP K = 280
Figure 5.43. PCBC code
Figure 5.44. TWL graph of the PCBC codes
Figure 5.45. Serial concatenated convolutional encoder
Figure 5.46. RA coder with R_T = 1/2
Figure 5.47. Product code
List of Tables
1: Introduction to Information Theory
Table 1.1. Variable length code of example 1
Table 1.2. Variable length code of example 2
Table 1.3. Variable length code of example 3
Table 1.4. Probabilities Pr(X, Y), Pr(Y) and Pr(X|Y) for the binary symmetric channel
Table 1.5. Probabilities Pr(X, Y), Pr(Y) and Pr(X|Y) for the erasure channel
2: Source Coding
Table 2.1. Huffman’s encoding table
Table 2.2. Probabilities of occurrence of characters
Table 2.3. Dictionary of strings
Table 2.4. List of prototype strings
Table 2.5. Look-up table for the implementation of the non-uniform quantization
Table 2.6. Example of JPEG quantization table
Table 2.7. Comparison of modulations for speech coding
Table 2.8. List of the 24 critical bands using the Bark scale
3: Linear Block Codes
Table 3.1. Addition and multiplication in
Table 3.2. Addition and multiplication in
Table 3.3. Decomposition of the polynomial of the form p^(2^m − 1) − 1 into a product of irreducible polynomials
Table 3.4. List of primitive polynomials for m ≤ 16
Table 3.5. List of the elements of the finite field
Table 3.6. Addition and multiplication in
Table 3.7. List of elements of the finite field built from the irreducible polynomial 1 + p + p^4
Table 3.8. List of codewords for the Hamming code C_3(7,4)
Table 3.9. Weight enumeration of information sequence and codewords for the parity check code (3, 2)
Table 3.10. List of Reed-Muller codes (N, K, d_min) for m = 3, 4 and 5
Table 3.11. Standard array
Table 3.12. Syndrome table
Table 3.13. Standard array for the code (5,2)
Table 3.14. Syndrome decoding table for the (5,2) code
Table 3.15. Branch metric table
Table 3.16. Coding gain for different linear block codes
Table 3.17. List of codewords for the Hamming code (7,4) built with g(p) = 1 + p + p^3
Table 3.18. Generator polynomials for CRC
Table 3.19. Sequencing of the division
Table 3.20. Hamming encoding
Table 3.21. Sequencing of the Hamming encoder after premultiplication
Table 3.22. Syndrome table
Table 3.23. Content of the shift registers after each clock transition
Table 3.24. List of generator polynomials of BCH codes with N ≤ 63
Table 3.25. Partial table for μ = 0
Table 3.26. Partial table for μ = 1
Table 3.27. Partial table for μ = 2
Table 3.28. Final table
Table 3.29. Finding the positions of the errors using the Chien search method
4: Convolutional Codes
Table 4.1. Description of the internal states
Table 4.2. List of the best convolutional codes of rate 1/2
5: Concatenated Codes and Iterative Decoding
Table 5.1. Table of the transmitted and received symbols
Table 5.2. Table of the intrinsic information
Table 5.3. Table of branch metrics calculated using [5.58]
Table 5.4. Table of branch metrics calculated using [5.61]
Table 5.5. Calculation of the a posteriori information
Table 5.6. Calculation of ln γ
Table 5.7. Calculation of the a posteriori information