8.5 Physical dynamics-aware channel decoding

In this section, we use existing coding schemes and outline a decoding procedure that considers the physical dynamics in a CPS, following the study in Ref. [165].

8.5.1 Motivation

Since the reliability of communication is usually required to be very high in CPSs (e.g., in the wide area monitoring system of smart grids, the packet error rate should be less than $10^{-5}$ [166]), while the communication channel may experience various degradations (e.g., the harsh communication environments in smart grids), it is important to use channel coding to protect the information transmission.

A straightforward approach for coding in a CPS is to follow the traditional procedure of separate quantization, coding, transmission, decoding, and further processing, which is adopted in conventional communication systems (Fig. 8.7). However, separate channel decoding and system state estimation may not be optimal when there exist redundancies in the transmitted messages. For example, the transmitted codeword at time t, b(t), is generated from the observation of the physical dynamics, y(t). Due to the time correlation of system states, y(t) is correlated with y(t + 1), and thus provides information for decoding the codeword in the next time slot t + 1. Hence if the decoding procedure is independent of the system state estimation, the performance will be degraded due to the loss of this redundancy. One may argue that the redundancy can be removed in the source coding procedure (e.g., encoding only the innovation vector in Kalman filtering if the physical dynamics is linear with Gaussian noise [167]). However, this requires substantial computation, which a sensor may not be able to carry out. Moreover, the sensor may not have the overall system parameters needed for the extraction of the innovation information. Hence it is more desirable to shift the burden to the controller, which has more computational power and more system information, while the inexpensive sensor focuses on transmitting the raw data.

Fig. 8.7 Illustration of the components in a CPS.

In this section, we propose an algorithm for joint channel decoding and state estimation in a CPS. We consider traditional channel codes with memory (e.g., convolutional codes). The system state evolution of the physical dynamics and the channel code can then be considered as an equivalent concatenated code, as illustrated in Fig. 8.8, in which the outer code is the generation of the system observation (we call it a "nature encoded" codeword) while the inner code is the traditional channel coding. The outer code is generated by the physical dynamics in an analog manner (since the values are continuous), while the inner code is carried out by a digital encoder. We will use belief propagation (BP) [168], which has been widely applied to decoding turbo codes and low-density parity-check (LDPC) codes, to decode this equivalent concatenated code. The major challenge is that the Bayesian inference is over a hybrid system with both continuous- and discrete-valued nodes.

Fig. 8.8 Illustration of the equivalent concatenated code for system state evolution and channel coding.

8.5.2 System Model

We consider a discrete-time dynamical system with N-dimensional system state x and K-dimensional observation y. The dynamics are described by

$$x(t+1) = f(x(t), u(t), n(t)), \qquad y(t) = g(x(t), w(t)), \tag{8.111}$$

where u is the control action taken by the controller, and n and w are both random perturbations, whose probability distributions are assumed to be known.

A special case is linear system dynamics, which is described by

$$x(t+1) = Ax(t) + Bu(t) + n(t), \qquad y(t) = Cx(t) + w(t), \tag{8.112}$$

where n and w are both Gaussian noises with zero means and covariance matrices Γn and Γw, respectively.

We assume that y(t) is observed by a sensor. It quantizes each dimension of the observation using B bits, thus forming a KB-dimensional binary vector which is given by

$$\mathbf{b}(t) = (b_1(t), b_2(t), \ldots, b_{KB}(t)). \tag{8.113}$$

We assume that the sensor simply carries out a scalar quantization for each dimension and does not use compression to remove the redundancy between y(t) and y(t − 1). It is straightforward to extend this to the case of vector quantization. However, it is nontrivial to extend it to the case of partial compression of the redundancy, which is beyond the scope of this study.

The information bits b(t) are fed into an encoder to generate a codeword c(t). We assume binary phase shift keying (BPSK) modulation for the communication from the sensor to the controller; the received signal is then given by

$$r(t) = 2c(t) - 1 + e(t), \tag{8.114}$$

where 2c(t) − 1 converts the alphabet {0, 1} to the antipodal signal {−1, +1}, and e(t) is additive white Gaussian noise with zero mean and normalized variance $\sigma_e^2$. In this chapter, we do not consider fading, since the channel is usually stationary within one codeword and the channel gain can be incorporated (together with the transmit power) into the noise variance $\sigma_e^2$.
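To make the sensor-side model concrete, the following sketch simulates one step of the chain in Eqs. (8.112)–(8.114): a linear state update, uniform scalar quantization into B bits per dimension, and BPSK transmission over an AWGN channel. All matrices, ranges, and noise levels here are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-D linear dynamics (Eq. 8.112); A, C, and the noise levels
# are arbitrary choices, not taken from the text.  No control input is used.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.eye(2)
B_bits = 4                      # B bits per observation dimension

def quantize(y, lo=-4.0, hi=4.0, bits=B_bits):
    """Uniform scalar quantization of each dimension into `bits` bits."""
    levels = 2 ** bits
    idx = np.clip(((y - lo) / (hi - lo) * levels).astype(int), 0, levels - 1)
    # Concatenate the binary expansions -> the KB-dimensional vector b(t)
    return np.array([(i >> k) & 1 for i in idx for k in range(bits)])

def bpsk_awgn(c, sigma_e):
    """Eq. (8.114): map {0,1} -> {-1,+1} and add white Gaussian noise."""
    return (2 * c - 1) + sigma_e * rng.normal(size=c.shape)

x = np.zeros(2)
x = A @ x + 0.1 * rng.normal(size=2)      # state update
y = C @ x + 0.05 * rng.normal(size=2)     # observation y(t)
b = quantize(y)                            # b(t), K*B = 8 bits here
r = bpsk_awgn(b, sigma_e=0.5)              # received soft values r(t)
print(b.shape, r.shape)
```

The controller sees only the soft values r; the joint decoder below exploits the correlation between consecutive y(t) that this separate pipeline ignores.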

8.5.3 Joint Decoding

As we have explained, there exists information redundancy between the system state in different time slots, as well as the received bits in the communications. Hence we can apply the framework of BP for the procedure of joint decoding and system state estimation.

A brief introduction to Pearl’s BP

We first provide a brief introduction to the BP algorithm, in particular a version of Pearl’s BP. We use an example to illustrate the procedure. The details and formal description can be found in Ref. [168].

Take Fig. 8.9 for instance, where a random variable X has parents U_1, U_2, …, U_M and children Y_1, Y_2, …, Y_N. Here the parent nodes are characterized by the following equality of conditional probability:

$$P(X \mid \text{all other random variables}) = P(X \mid U_1, U_2, \ldots, U_M), \tag{8.115}$$

that is, given the random variables in the parent nodes, X is independent of all nonparent nodes. As indicated by the name, a node is a child of X if X is one of its parents. The message passing of Pearl's BP is illustrated by the dotted and dashed arrows in the figure. The dashed arrows carry $\pi$-messages from a parent to its children; for instance, the message passed from $U_m$ to $X$ is $\pi_{U_m,X}(U_m)$, the prior information on $U_m$ given all the information $U_m$ has received. The dotted arrows carry $\lambda$-messages from a child to its parent; for instance, the message passed from $Y_n$ to $X$ is $\lambda_{Y_n,X}(X)$, the likelihood of $X$ given the information $Y_n$ has received. After $X$ receives all $\pi$-messages $\pi_{U_m,X}(U_m)$ from its parents $U_1, U_2, \ldots, U_M$ and all $\lambda$-messages $\lambda_{Y_n,X}(X)$ from its children $Y_1, Y_2, \ldots, Y_N$, it updates its local belief $\mathrm{BEL}_X(x)$, and transmits $\lambda$-messages $\lambda_{X,U_m}(U_m)$ to its parents and $\pi$-messages $\pi_{X,Y_n}(x)$ to its children. The expressions for these messages are given by

$$\pi_X(x) = \sum_{U} p(x \mid U) \prod_{m=1}^{M} \pi_{U_m,X}(U_m), \tag{8.116}$$

$$\gamma_X(U) = \sum_{x} \prod_{n=1}^{N} \lambda_{Y_n,X}(x)\, p(x \mid U), \tag{8.117}$$

$$\mathrm{BEL}_X(x) = \alpha \prod_{n=1}^{N} \lambda_{Y_n,X}(x)\, \pi_X(x), \tag{8.118}$$

$$\lambda_{X,U_m}(U_m) = \sum_{U \setminus U_m} \gamma_X(U) \prod_{j \neq m} \pi_{U_j,X}(U_j), \tag{8.119}$$

$$\pi_{X,Y_n}(x) = \pi_X(x) \prod_{i \neq n} \lambda_{Y_i,X}(x), \tag{8.120}$$

where $U = (U_1, U_2, \ldots, U_M)$ and $Y = (Y_1, Y_2, \ldots, Y_N)$.

Fig. 8.9 Illustration of message passing in BP.

In the initialization stage of Pearl’s BP, the initial values are defined as

$$\lambda_{X,U}(u) = \begin{cases} p(x_0 \mid u), & X \text{ is evidence}, \ x = x_0, \\ 1, & X \text{ is not evidence}, \end{cases} \tag{8.121}$$

and

$$\pi_{X,Y}(x) = \begin{cases} \delta(x, x_0), & X \text{ is evidence}, \ x = x_0, \\ p(x), & X \text{ is not evidence}. \end{cases} \tag{8.122}$$
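The message updates in Eqs. (8.116)–(8.119) can be exercised numerically for a single node. The sketch below uses a binary node X with two binary parents and one child; the conditional probability table and incoming messages are made-up examples, not part of the original development.

```python
import numpy as np

# Minimal numeric sketch of Eqs. (8.116)-(8.119) for a binary node X with
# two binary parents U1, U2 and one child Y; all numbers are illustrative.
p_x_given_u = np.array([[[0.9, 0.1],   # p(x | u1=0, u2=0)
                         [0.6, 0.4]],  # p(x | u1=0, u2=1)
                        [[0.3, 0.7],   # p(x | u1=1, u2=0)
                         [0.2, 0.8]]]) # indexed [u1][u2][x]

pi_u1 = np.array([0.7, 0.3])           # pi-message from U1 to X
pi_u2 = np.array([0.5, 0.5])           # pi-message from U2 to X
lam_y = np.array([0.2, 0.8])           # lambda-message from child Y to X

# Eq. (8.116): pi_X(x) = sum_U p(x|U) * prod_m pi_{Um,X}(um)
pi_x = np.einsum('abx,a,b->x', p_x_given_u, pi_u1, pi_u2)

# Eq. (8.118): BEL_X(x) proportional to prod_n lambda_{Yn,X}(x) * pi_X(x)
bel = lam_y * pi_x
bel /= bel.sum()

# Eq. (8.117): gamma_X(U) = sum_x prod_n lambda_{Yn,X}(x) * p(x|U)
gamma = np.einsum('x,abx->ab', lam_y, p_x_given_u)

# Eq. (8.119): lambda-message to U1 marginalizes out the other parent
lam_to_u1 = np.einsum('ab,b->a', gamma, pi_u2)

print(bel, lam_to_u1)
```

In the joint decoder, the same updates run over the hybrid network of Fig. 8.10, with the continuous-valued nodes handled by parametric (e.g., Gaussian) messages.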

Iterative decoding

Based on the tools of Bayesian networks, we can derive the iterative decoding procedure. Fig. 8.10 shows the message passing in the decoding for a CPS: $x_{t-2}$ summarizes all the information obtained from previous time slots and transmits the $\pi$-message $\pi_{x_{t-2},x_{t-1}}(x_{t-2})$ to $x_{t-1}$. The BP procedure can be implemented in either a synchronous or an asynchronous manner. To accelerate the procedure, we implement asynchronous Pearl BP. The updating order and message passing in one iteration are given in Procedure 9.

Fig. 8.10 Bayesian network structure and message passing for a CPS.

Procedure 9

BP-Based Channel Decoding

1: Set the maximum number of iterations N

2: for Each time slot t do

3:  Receive r(t).

4:  for Iteration i = 1, …, N do

5:  Pass message $x_{t-1} \to y_{t-1}$; i.e., the sensor observation procedure.

6:  Pass message $y_{t-1} \to b_{t-1}$; i.e., the quantization procedure.

7:  Pass message $b_{t-1} \to y_{t-1}$; i.e., the observation reconstruction procedure.

8:  Pass message $y_{t-1} \to x_{t-1}$; i.e., the system estimation procedure.

9:  Pass message $x_{t-1} \to x_t$; i.e., the system state evolution.

10:  Pass message $x_t \to y_t$; i.e., the observation procedure.

11:  Pass message $y_t \to b_t$; i.e., the quantization procedure.

12:  Pass message $b_t \to y_t$; i.e., the observation reconstruction procedure.

13:  Pass message $y_t \to x_t$; i.e., the system state estimation procedure.

14:  Pass message $x_t \to x_{t-1}$; i.e., the trace back procedure.

15:  xt−1 updates information.

16:  end for

17:  Use the belief of b(t) as the prior information for the decoding with r(t).

18: end for
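Step 17 of Procedure 9 fuses the dynamics-side belief on b(t) with the channel observation r(t). A minimal illustration of this fusion for BPSK over AWGN (Eq. 8.114): the posterior of each bit combines the Gaussian channel likelihood with a prior supplied by the state-estimation messages. All numbers below are illustrative choices.

```python
import numpy as np

# Fuse channel evidence with a dynamics-derived prior on each bit of b(t).
def posterior_bit_probs(r, prior_b1, sigma_e):
    """P(b=1 | r) per bit: Gaussian BPSK likelihood times prior belief."""
    lik1 = np.exp(-(r - 1.0) ** 2 / (2 * sigma_e ** 2))   # p(r | b=1)
    lik0 = np.exp(-(r + 1.0) ** 2 / (2 * sigma_e ** 2))   # p(r | b=0)
    return lik1 * prior_b1 / (lik1 * prior_b1 + lik0 * (1 - prior_b1))

r = np.array([0.1, -0.9, 1.2])          # received soft values (illustrative)
prior = np.array([0.9, 0.5, 0.5])       # belief of b(t) from state estimation
post = posterior_bit_probs(r, prior, sigma_e=0.8)
print(post)
```

The first bit is ambiguous from the channel alone (r = 0.1), but the strong dynamics prior pushes its posterior decisively toward 1; this is exactly the redundancy that separate decoding discards.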

The algorithm has been tested in the context of voltage control in smart grids [165]. Numerical results have shown that the performance of both decoding and system state estimation is substantially improved, when compared with traditional separate decoding and estimation. One potential challenge in joint decoding and estimation is the possibility of error propagation; i.e., the decoding error or estimation error will be propagated to the next time slot, since the decoding and estimation procedures in different time slots are correlated. An effective approach to handle error propagation is to monitor the performance and restart the decoding/estimation procedure once performance degradation is detected.

8.6 Control-oriented channel coding

Most channel coding schemes are designed for general-purpose data communications. Although they can also be used in the context of controlling a CPS, it is not clear whether these coding schemes, designed for pure data communications, are optimal for the purpose of control. There have been some studies on designing specific channel codes for control systems [169, 170]. In this section we give a brief introduction to these efforts, which may provide insights for further communication system designs in CPSs, although they have not been implemented in real systems.

8.6.1 Trajectory Codes

Trajectory codes for the purpose of automatic control were proposed in Ref. [169]. The main feature is the online error correction capability for real-time estimation and control.

System model

We assume that the communication channel has binary input and output. The transmitter has a time-varying state $x_t$, which is represented by a vertex of a graph G. The edges of G represent the possible state transitions. It is assumed that G is known to both the transmitter and the receiver. This setup is similar to a random walk on a graph.

Time is divided into rounds. In each round, the transmitter state makes one move in G, and the transmitter can then use the communication channel M times. The encoded bits are based on the entire history of the transmitter state. The goal of the design is to find a coding scheme that enables the receiver to obtain an accurate estimate of the transmitter's state with large probability. Hence the transmitter can be considered as the combination of the physical dynamics and the communication transmitter. The coding rate is given by

$$\rho = \frac{\log \Delta}{M}, \tag{8.123}$$

where Δ is the maximum out-degree or in-degree of nodes in the graph G.

The receiver outputs an estimate of $x_t$, denoted by $\hat{x}_t$, based on all the received coded bits. The decoding error is measured by the length of the shortest path between $x_t$ and $\hat{x}_t$ in the graph G, denoted by $d_G(x_t, \hat{x}_t)$. The communication scheme has an error exponent κ if

$$P\big(d_G(x_t, \hat{x}_t) \geq l\big) \leq \exp(-\kappa l). \tag{8.124}$$

It was shown in Ref. [169] that the communication is online efficient if the time and space complexities of coding and decoding are of the order of $(\log t)^{O(1)}$. If the coding scheme has a positive rate and a positive error exponent and is online efficient, then we say that the code is asymptotically good.

Note that the problem considered above is more of a system estimation problem: the control procedure is not considered; i.e., the system state $x_t$ evolves independently of the communication performance. However, it is closely related to control problems, since the error of system state estimation substantially determines the performance of control; meanwhile, $x_t$ can be considered as the system state with the impact of control removed.

Concept of trajectory codes

The trajectory of the transmitter's state can be represented by a path in the graph G, whose initial time is denoted by $t_0$. If two trajectories γ and γ′ have the same length, the same starting time, and the same initial vertex, we write $\gamma \sim \gamma'$. The distance $\tau(\gamma, \gamma')$ between two such trajectories is the number of positions at which their states differ.

A trajectory code, denoted by χ, maps a trajectory to a sequence over a certain alphabet in a concatenated manner:

$$\chi(\gamma) = \big(\chi(\gamma(t_0+1), t_0+1), \ldots, \chi(\gamma(t_0+t), t_0+t)\big). \tag{8.125}$$

The Hamming distance between two equal-length codewords is denoted by h. The relative distance of a trajectory code is defined as

$$\delta = \inf_{\gamma \sim \gamma'} \frac{h(\chi(\gamma), \chi(\gamma'))}{\tau(\gamma, \gamma')}, \tag{8.126}$$

which measures the capability of distinguishing two different trajectories of the transmitter's state. A trajectory code is said to be asymptotically good if it has a positive rate and a positive relative distance.
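The relative distance of Eq. (8.126) can be checked by brute force on a toy instance: label each (vertex, time) pair of a small graph with a random symbol and take the infimum of h/τ over all pairs of trajectories with a common start. The graph (a 4-cycle), horizon, and alphabet size below are arbitrary illustrative choices.

```python
import itertools, random

random.seed(1)
V = range(4)
nbrs = {v: [(v - 1) % 4, (v + 1) % 4] for v in V}   # 4-cycle graph G
T, SIGMA = 4, 4                                      # horizon, alphabet size
# Random trajectory code: chi(vertex, time) -> symbol, for times 1..T
chi = {(v, t): random.randrange(SIGMA) for v in V for t in range(1, T + 1)}

def paths(start, length):
    """All trajectories of `length` moves starting at `start`."""
    out = [[start]]
    for _ in range(length):
        out = [p + [w] for p in out for w in nbrs[p[-1]]]
    return out

def delta():
    """Eq. (8.126): infimum of Hamming distance over state-difference count."""
    best = 1.0
    for s in V:
        for g1, g2 in itertools.combinations(paths(s, T), 2):
            tau = sum(a != b for a, b in zip(g1[1:], g2[1:]))
            if tau == 0:
                continue
            ham = sum(chi[(a, t + 1)] != chi[(b, t + 1)]
                      for t, (a, b) in enumerate(zip(g1[1:], g2[1:]))
                      if a != b)
            best = min(best, ham / tau)
    return best

print(delta())
```

Positions where the two trajectories agree contribute nothing to h (same state, same symbol), so the ratio is always at most 1; the question studied below is whether it can be kept bounded away from 0.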

Obviously, the larger the relative distance δ, the more capable the receiver is of estimating the transmitter's state. When δ is large, even if there are some transmission errors, the remaining discrepancy between two codewords can still be used to distinguish the two different state trajectories. The major challenge is then the existence and construction of an asymptotically good trajectory code. The following theorem guarantees the existence of such codes:

Theorem 55

Every graph has an asymptotically good trajectory code. A trajectory code with relative distance δ < 1 is always feasible.

The details of the proof are given in Ref. [169]. Here we provide a sketch of the proof, which is based on the probabilistic method and random coding, similarly to Shannon's random coding argument [33]. First we consider a random coding scheme; i.e., the transmitter randomly selects an output from the output alphabet for each possible input (i.e., a state, or equivalently a vertex in G). Consider two trajectories $\gamma_1$ and $\gamma_2$ which satisfy $\gamma_1 \sim \gamma_2$ but have no common vertices except for the initial one. We call these two trajectories twins. It is easy to verify that

$$\inf_{\gamma_1 \sim \gamma_2} \frac{h(\chi(\gamma_1), \chi(\gamma_2))}{\tau(\gamma_1, \gamma_2)} = \inf_{\gamma_1, \gamma_2 \ \text{are twins}} \frac{h(\chi(\gamma_1), \chi(\gamma_2))}{\tau(\gamma_1, \gamma_2)}. \tag{8.127}$$

Then we define the event

$$A_{\gamma} = \left\{ \frac{h(\chi(\gamma_1), \chi(\gamma_2))}{\tau(\gamma_1, \gamma_2)} < \delta \right\}, \qquad \gamma = (\gamma_1, \gamma_2), \quad \gamma_1 \sim \gamma_2. \tag{8.128}$$

If we can prove

$$\bigcap_{\gamma} A_{\gamma}^{c} \neq \emptyset, \tag{8.129}$$

then there exists at least one asymptotically good trajectory code. In Ref. [169], Eq. (8.129) is proved by invoking the Lovász Local Lemma.

Construction of trajectory codes

Although the existence of trajectory codes has been proved, it is still not clear how to construct these codes. In Ref. [169], two approaches are proposed to construct trajectory codes for the special case of d-dimensional grids. In this book, we provide a brief introduction to the first method.

Note that the grid graph is determined by the set of vertices

$$V_{n,d} = \{-n/2+1, \ldots, n/2\}^d. \tag{8.130}$$

Two nodes at distance 1 are connected by an edge. The goal of the construction procedure is to design a trajectory code

$$\chi : V_{n,d} \times \{1, \ldots, n/2\} \to S, \tag{8.131}$$

which is asymptotically good.

The first approach to the construction is based on two codes, where $n_1 \in \Theta(\log n)$ and k is the least even integer no less than $\frac{12}{1-\delta}+4$.

•  Block code: Consider a block code $\eta : V_{n,d} \to R_1^{n_1}$, where $R_1$ is a particular finite alphabet. Intuitively, η encodes each vertex in $V_{n,d}$ into a codeword of length $n_1$. We assume that the block code η has a positive rate and relative distance (1 + δ)/2, and that encoding and decoding can be computed in time $n_1^{O(1)}$. η can be rewritten as

$$\eta(x) = (\eta_1(x, 1), \ldots, \eta_1(x, n_1)), \tag{8.132}$$

where $\eta_1 : V_{n,d} \times \{1, \ldots, n_1\} \to R_1$.

•  Recursive code: We assume that $\chi_1 : V_{kn_1,d} \times \{1, \ldots, kn_1/2\} \to S_1$ is a trajectory code with relative distance (1 + δ)/2, where $S_1$ is a finite alphabet. The construction of this code will be provided later.

Now we cover the space $V_{n,d}$ with overlapping tiles. The tile placed at vertex $x = (x_1, \ldots, x_d, x_{d+1})$ is the mapping defined as

$$\sigma_x(y) = \Big(\chi_1(y - x),\ \eta_1\big((x_1, \ldots, x_d), (y_{d+1} - x_{d+1}) \bmod n_1\big)\Big), \tag{8.133}$$

which maps from the domain

$$\prod_{i=1}^{d} \{x_i - kn_1/2 + 1, \ldots, x_i + kn_1/2\} \times \{x_{d+1} + 1, \ldots, x_{d+1} + kn_1/2\} \tag{8.134}$$

to $S_1 \times R_1$.

We assume that n is divisible by $kn_1$. Then we pick out the tiles placed at locations x of the following form:

$$x = (n_1 k z_1 + n_1 a_1, \ldots, n_1 k z_{d+1} + n_1 a_{d+1}), \tag{8.135}$$

where {zi}i satisfy

$$(z_1, \ldots, z_{d+1}) \in \left\{-\frac{n}{2kn_1}+1, \ldots, \frac{n}{2kn_1}\right\}^{d} \times \left\{1, \ldots, \frac{n}{2kn_1}\right\}, \tag{8.136}$$

and

$$(a_1, \ldots, a_{d+1}) \in \left\{-\frac{k}{2}+1, \ldots, \frac{k}{2}\right\}^{d} \times \{0, \ldots, k-1\}, \tag{8.137}$$

The set of tiles sharing a given label $(a_1, \ldots, a_{d+1})$ actually forms a covering of $V_{n,d}$; hence there are a total of $k^{d+1}$ such sets. This procedure is illustrated in Fig. 8.11, in which we set $n_1 = 4$ and $k = 2$.

Fig. 8.11 Illustration of the tiles for designing trajectory codes.

The trajectory code is then given by

$$\chi(y) = \big\{\chi_{(a_1, \ldots, a_{d+1})}(y)\big\}_{(a_1, \ldots, a_{d+1})}, \tag{8.138}$$

that is, for each state y, the coder output is the concatenation of all the coder outputs at the tiles satisfying Eq. (8.135).

The intuition behind the complicated procedure of construction is given below:

•  The coder for long trajectories (of the order of n) is based on the coders for short trajectories (of the order of $\log n$).

•  When the transmitter state is at a particular point of the grid, the coder output is determined by the outputs of the tiles close to it.

•  The coder output of each tile in Eq. (8.133) consists of the recursive coder output, which encodes the transmitter state relative to the center point of the tile, and the output of the block coder, which encodes the location of the center point of the tile.

It was shown in Ref. [169] that the above construction of trajectory codes can achieve the relative distance δ < 1, whose proof is involved. Then the only unsolved problem is the construction of the coder χ1 for each tile. Since the problem size is much smaller, it is possible to obtain the coder by using exhaustive search, thus solving the whole problem.

8.6.2 Anytime Channel Codes

In this section, we follow the discussion in Ref. [170] to show that it is possible to design the channel codes directly for control.

System model

We consider a linear system with system state x and observations y, which are given by

$$x(t+1) = Fx(t) + Gu(t) + w(t) \tag{8.139}$$

and

$$y(t) = Hx(t) + v(t), \tag{8.140}$$

where u(t) is the control action, and w(t) and v(t) are bounded noise processes. It is assumed that (F, G) is controllable and (F, H) is observable.

At time t, the sensor observes y(t) and generates a k-bit message, which is a function of all existing observations. Suppose that n bits can be transmitted for the k information bits; hence the data transmission rate is R = k/n.

Anytime reliability

According to the discussion in Chapter 5, the communication should satisfy anytime reliability in order to stabilize the linear system. We say that an encoder-decoder pair achieves $(R, \beta, d_0)$-reliability over a particular communication channel if

$$P\big(\min\{\tau : \hat{b}(\tau \mid t) \neq b_\tau\} = t - d + 1\big) \leq \eta\, 2^{-n\beta d}, \qquad \forall d \geq d_0, \ \forall t, \tag{8.141}$$

where $\hat{b}(\tau \mid t)$ denotes the estimate of $b_\tau$ (the message transmitted at time τ) given the messages received up to time t, and η is a constant. Intuitively, Eq. (8.141) indicates that the probability of decoding error decreases exponentially with the elapsed time.

Note that this definition of anytime reliability is not literally the same as the original one in Ref. [13], where the encoder-decoder pair achieves α-anytime reliability if

$$P\big(\hat{b}(\tau \mid t) \neq b(\tau)\big) \leq K\, 2^{-\alpha(t-\tau)}, \tag{8.142}$$

for any t and τ. However, it is easy to see that these two definitions are equivalent. First, if Eq. (8.141) holds, then we have

$$P\big(\hat{b}(\tau \mid t) \neq b(\tau)\big) \leq \sum_{d=t-\tau}^{\infty} P\big(\min\{\tau' : \hat{b}(\tau' \mid t) \neq b_{\tau'}\} = t - d + 1\big) \leq \sum_{d=t-\tau}^{\infty} \eta\, 2^{-n\beta d} = \frac{\eta\, 2^{-n\beta(t-\tau)}}{1 - 2^{-n\beta}}, \tag{8.143}$$

which implies β-anytime reliability. The first inequality arises because the event that $b_\tau$ is incorrectly decoded is contained in the union of the events that the first incorrectly decoded message is $b_s$ for some $s \leq \tau$.

On the other hand, if the encoder-decoder pair is α-anytime reliable according to Eq. (8.142), we have

$$P\big(\min\{\tau : \hat{b}(\tau \mid t) \neq b_\tau\} = t - d + 1\big) \leq P\big(\hat{b}(t-d+1 \mid t) \neq b(t-d+1)\big) \leq K\, 2^{-\alpha(d-1)}, \tag{8.144}$$

where the first inequality arises because the event that the first decoding error happens at $b_{t-d+1}$ is contained in the event that $b_{t-d+1}$ is incorrectly decoded. According to Ref. [13, Theorem 5.2], anytime reliability can guarantee the stability of linear systems with a sufficiently small outage probability.

Linear tree codes

Ref. [170] proposed the use of linear tree codes to design anytime reliable codes. Note that the concept of tree codes will be explained in the next section. Due to the requirement of causal coding, the output $c_r$ of a linear tree code at time slot r is given by

$$c_r = f_r(b_{1:r}) = \sum_{k=1}^{r} G_{rk} b_k, \tag{8.145}$$

where $b_k$ is the k-bit information symbol generated at time k, the subscript r is the current time index, and $\{G_{rk}\}$ are $n \times k$ generating matrices (all operations are over the binary field). This coding procedure is illustrated in Fig. 8.12.

Fig. 8.12 Illustration of the linear tree code.

Hence the overall generating matrix can be written as

$$G_{n,R} = \begin{bmatrix} G_{11} & & & \\ G_{21} & G_{22} & & \\ \vdots & \vdots & \ddots & \\ G_{r1} & G_{r2} & \cdots & G_{rr} \end{bmatrix}. \tag{8.146}$$

The parity check matrix of $G_{n,R}$ is denoted by $H_{n,R}$, which satisfies

$$H_{n,R} G_{n,R} = 0. \tag{8.147}$$

It is easy to verify that $H_{n,R}$ also has a lower block-triangular form.
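The causal structure of Eqs. (8.145) and (8.146) can be sketched as follows. For simplicity the generator blocks are drawn from a Toeplitz family $G_{rk} = G_{r-k}$ (anticipating the structure of Eq. (8.155); the general scheme allows arbitrary blocks), with illustrative sizes; the check at the end confirms that changing a later message never affects earlier code symbols.

```python
import numpy as np

rng = np.random.default_rng(2)

# Causal linear tree encoding, Eq. (8.145), with random Toeplitz blocks:
# c_r = sum_{j<=r} G_{r-j} b_j over GF(2).  Sizes k=2, n=4 are illustrative.
k, n, T = 2, 4, 5
G = [rng.integers(0, 2, size=(n, k)) for _ in range(T)]   # G_0, G_1, ...

def encode(bits):
    """bits: list of k-bit messages; returns list of n-bit code symbols."""
    out = []
    for r in range(len(bits)):
        c = np.zeros(n, dtype=int)
        for j in range(r + 1):
            c ^= (G[r - j] @ bits[j]) % 2   # GF(2) accumulation
        out.append(c)
    return out

msgs = [rng.integers(0, 2, size=k) for _ in range(T)]
code = encode(msgs)

# Causality check: flipping the last message leaves earlier symbols intact.
msgs2 = [m.copy() for m in msgs]
msgs2[-1] ^= 1
code2 = encode(msgs2)
print(all((code[r] == code2[r]).all() for r in range(T - 1)))
```

Because $G_{n,R}$ is block lower triangular, the encoder can emit $c_r$ as soon as $b_r$ arrives, which is what the control loop requires.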

Maximum likelihood decoding

We assume that maximum likelihood (ML) decoding is used. Since we want to verify whether the communication is anytime reliable, we focus on the probability that the earliest wrongly decoded message is d time slots in the past. Denoting this probability by $P_{t,d}^{e}$, we use the union bound to obtain

$$P_{t,d}^{e} \leq \sum_{c \in C_{t,d}} P(\mathbf{0} \text{ is decoded as } c), \tag{8.148}$$

where $C_{t,d}$ is the set of codewords whose earliest nonzero symbol occurs at time $t - d + 1$, and we assume that the all-zero sequence is transmitted.

It was shown in Ref. [171] that the error probability of ML decoding is bounded by

$$P(\mathbf{0} \text{ is decoded as } c) \leq \zeta^{w(c)}, \tag{8.149}$$

where $w(c)$ is the Hamming weight of c and ζ is the Bhattacharyya parameter, which is defined as (here we assume a discrete channel output alphabet $\mathcal{Z}$)

$$\zeta = \sum_{z \in \mathcal{Z}} \sqrt{p(z \mid X = 0)\, p(z \mid X = 1)}. \tag{8.150}$$

Hence according to the union bound in Eq. (8.148), we have

$$P_{t,d}^{e} \leq \sum_{w = w_{\min,d}^{t}}^{nd} N_{w,d}^{t}\, \zeta^{w}, \tag{8.151}$$

where w is the weight of the decoder output c, $N_{w,d}^{t}$ is the number of codewords in $C_{t,d}$ with weight w, and $w_{\min,d}^{t}$ is the minimum weight of the codewords in $C_{t,d}$. The reason why $w \leq nd$ is that there is no error before the $(t-d)$th message. This motivates the following definition:

Definition 23

A linear tree code in Eq. (8.145) is said to have $(\alpha, \theta, d_0)$-anytime distance if:

•  The parity check matrix $H_{n,R}$ has full rank for all t > 0.

•  $w_{\min,d}^{t} \geq \alpha nd$ and $N_{w,d}^{t} \leq 2^{\theta w}$, for all t > 0 and $d \geq d_0$.

The first requirement, that $H_{n,R}$ have full rank, guarantees that the encoding procedure is invertible. The second condition implies

$$P_{t,d}^{e} \leq \eta\, 2^{-nd\,\alpha\left(\log_2 \frac{1}{\zeta} - \theta\right)}, \tag{8.152}$$

where

$$\eta = \frac{1}{2^{\log_2 \frac{1}{\zeta} - \theta} - 1}. \tag{8.153}$$

Based on the above discussion, we have the following proposition:

Proposition 6

If the linear tree code uses ML decoding and has $(\alpha, \theta, d_0)$-anytime distance, then it is $(R, \beta, d_0)$-anytime reliable over any channel with Bhattacharyya parameter ζ, where

$$\beta = \alpha\left(\log_2 \frac{1}{\zeta} - \theta\right). \tag{8.154}$$

Code construction

Now the challenge is how to construct a linear code satisfying the requirement of $(\alpha, \theta, d_0)$-anytime distance. In Ref. [170], the following Toeplitz structure was considered:

$$H_{n,R}^{TZ} = \begin{bmatrix} H_1 & & & \\ H_2 & H_1 & & \\ \vdots & \vdots & \ddots & \\ H_r & H_{r-1} & \cdots & H_1 \end{bmatrix}, \tag{8.155}$$

which assures the causality of the coding procedure.

Then a random construction approach for $H_{n,R}^{TZ}$ is proposed in Procedure 10.

Procedure 10

Constructing the Linear Tree Code

1: Select a fixed full rank binary matrix for H1.

2: for k = 2, 3, …do

3:  for Each element in Hk do

4:  Set the element to 1 with probability p and to 0 with probability 1 − p.

5:  end for

6: end for
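Procedure 10 can be sketched directly; the block size, depth, and Bernoulli parameter below are illustrative choices. Because the diagonal blocks all equal the full-rank $H_1$ and the matrix of Eq. (8.155) is block lower triangular, the assembled parity check matrix is automatically full rank, as required by Definition 23.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of Procedure 10: H_1 is a fixed full-rank binary block; later blocks
# H_2, H_3, ... have i.i.d. Bernoulli(p) entries.  Sizes are illustrative.
m, p, depth = 3, 0.3, 4

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def random_full_rank(m):
    """Draw binary m x m matrices until one is invertible over GF(2)."""
    while True:
        H = rng.integers(0, 2, size=(m, m))
        if gf2_rank(H) == m:
            return H

H_blocks = [random_full_rank(m)]                                  # H_1
H_blocks += [(rng.random((m, m)) < p).astype(int) for _ in range(depth - 1)]

# Assemble the block-Toeplitz, block-lower-triangular matrix of Eq. (8.155)
H = np.zeros((depth * m, depth * m), dtype=int)
for i in range(depth):
    for j in range(i + 1):
        H[i*m:(i+1)*m, j*m:(j+1)*m] = H_blocks[i - j]
print(gf2_rank(H))
```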

The main theorem in Ref. [170] shows that the above construction results in an anytime reliable code with a large probability.

Theorem 56

The probability that the construction procedure in Procedure 10 results in a code with $(\alpha, \theta, d_0)$-anytime distance is bounded by

$$P\big(\text{the resulting code has } (\alpha, \theta, d_0)\text{-anytime distance}\big) \geq 1 - 2^{-\Omega(nd_0)}, \tag{8.156}$$

where

$$\alpha < H^{-1}\left((1 - R)\log_2 \frac{1}{1 - \bar{p}}\right) \tag{8.157}$$

and

$$\theta > \log_2\left[(1 - \bar{p})^{-(1-R)}\right], \tag{8.158}$$

with $\bar{p} = \min(p, 1-p)$.

The detailed proof of this theorem in Ref. [170] is quite involved; here we provide an intuitive sketch. Consider time slot t. Suppose that the all-zero codeword is sent, and consider a codeword $c \neq 0$. If $H_{n,R}^{t} c = 0$, then c may be confused with the correct codeword, which results in a decoding error. Due to the requirement of anytime reliability, we are interested in the locations of the errors (equivalently, the nonzero elements of c). Since we require that the messages up to time $t - d$ be correctly decoded, we have

$$c(\tau) = 0 \ \ \forall \tau \leq t-d, \qquad c(t-d+1) \neq 0, \qquad H_{n,R}^{t} c = 0, \tag{8.159}$$

where c = (c0, …, ct). This is equivalent to

$$\begin{bmatrix} H_1 & & & \\ H_2 & H_1 & & \\ \vdots & \vdots & \ddots & \\ H_d & H_{d-1} & \cdots & H_1 \end{bmatrix} \begin{bmatrix} c_{t-d+1} \\ c_{t-d+2} \\ \vdots \\ c_t \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \tag{8.160}$$

which can be rewritten as

$$\begin{bmatrix} C_{t-d+1} & & & \\ C_{t-d+2} & C_{t-d+1} & & \\ \vdots & \vdots & \ddots & \\ C_t & C_{t-1} & \cdots & C_{t-d+1} \end{bmatrix} \begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_d \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \tag{8.161}$$

where $h_i = \mathrm{vec}(H_i^{\top})$ and $C_i = \mathrm{diag}(c_i^{\top}, \ldots, c_i^{\top})$. Since $H_1$ has been fixed in the construction procedure, we have

$$\begin{bmatrix} C_{t-d+1} & & & \\ C_{t-d+2} & C_{t-d+1} & & \\ \vdots & \vdots & \ddots & \\ C_{t-1} & C_{t-2} & \cdots & C_{t-d+1} \end{bmatrix} \begin{bmatrix} h_2 \\ h_3 \\ \vdots \\ h_d \end{bmatrix} = \begin{bmatrix} C_{t-d+2} \\ C_{t-d+3} \\ \vdots \\ C_t \end{bmatrix} h_1, \tag{8.162}$$

and $C_{t-d+1} h_1 = 0$. We abbreviate Eq. (8.162) as

$$\tilde{C} \tilde{h} = \bar{C} h_1. \tag{8.163}$$

Hence the vector $\tilde{h}$ cannot be chosen arbitrarily, due to the linear constraint in Eq. (8.163): it must lie in a lower dimensional subspace. Since $\{h_i\}$, $i = 2, \ldots, d$, are randomly chosen, the probability that $\tilde{h}$ falls in a lower dimensional subspace can be bounded by the following lemma:

Lemma 8

Consider a randomly selected m-dimensional vector v in $\{0,1\}^m$ with probability

$$P(v) = p^{w(v)} (1-p)^{m - w(v)}, \tag{8.164}$$

where w(v) is the number of ones in v;

that is, for each dimension of v, 1 is selected with probability p and 0 with probability 1 − p. Then the probability that v lies in an l-dimensional subspace U ($l \leq m$) is bounded by

$$P(v \in U) \leq \max(p, 1-p)^{m-l}. \tag{8.165}$$

The detailed evaluation of the upper bound will lead to the desired conclusion.
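Lemma 8 can be sanity-checked by Monte Carlo for the simplest subspace, the span of the first l standard basis vectors, where membership of v in U just means that its last m − l coordinates vanish (so the membership probability is exactly $(1-p)^{m-l}$). The parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo check of Lemma 8 for the coordinate subspace spanned by the
# first l basis vectors; parameters are illustrative.
m, l, p, trials = 8, 5, 0.3, 200_000
v = rng.random((trials, m)) < p          # Bernoulli(p) coordinates
in_U = ~v[:, l:].any(axis=1)             # v in U iff last m-l coords are 0
freq = in_U.mean()
bound = max(p, 1 - p) ** (m - l)         # right-hand side of Eq. (8.165)
print(freq, bound)
```

For this particular subspace the bound is tight when p < 1/2; for a general l-dimensional subspace the lemma gives the same exponent.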

8.6.3 Convolutional LDPC for Control

In the previous subsection, it was rigorously shown that the proposed coding scheme can stabilize the physical dynamics in a CPS. However, the following two issues hinder its application in practice:

•  The parity check matrices are generated randomly, so it is difficult to control the decoding error probability.

•  With a large probability, the matrix $H_r$ is nonzero for every r, which means that even bits from the distant past still affect the current coding procedure; hence all previous information bits need to be stored, requiring infinite memory.

To handle these two challenges, some practical coding schemes have been proposed. In this book we briefly introduce the convolutional LDPC codes [172, 173].

It was proposed in Ref. [172] that LDPC convolutional codes can be used to ensure anytime reliability for the purpose of controlling the dynamics in a CPS. Originally, LDPC convolutional codes were proposed for pure data communications rather than for CPSs. Essentially, the parity check matrix of an LDPC convolutional code has the following form:

$$H = \begin{bmatrix} H_0(1) & & & \\ H_1(1) & H_0(2) & & \\ \vdots & H_1(2) & \ddots & \\ H_{m_s}(1) & \vdots & \ddots & H_0(t) \\ & H_{m_s}(2) & & H_1(t) \\ & & \ddots & \vdots \\ & & & H_{m_s}(t) \end{bmatrix}, \tag{8.166}$$

where $m_s$ is the memory length. It is easy to verify that the generating matrix G has a similar banded structure; hence the output at time t is given by

$$c(t) = \sum_{s=t-m_s+1}^{t} b(s)\, G_{t-s}. \tag{8.167}$$
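The bounded-memory property can be sketched as a streaming encoder in the spirit of Eq. (8.167): only the last $m_s$ messages are retained, yet the output matches the direct convolution formula. Block sizes and generator blocks below are illustrative random choices.

```python
from collections import deque
import numpy as np

rng = np.random.default_rng(5)

# Finite-memory convolutional encoding in the spirit of Eq. (8.167): the
# output at time t depends only on the last m_s messages, so the encoder
# needs bounded memory.  All sizes and blocks are illustrative.
k, n, ms = 2, 3, 3
G_blocks = [rng.integers(0, 2, size=(n, k)) for _ in range(ms)]  # G_0..G_{ms-1}

def stream_encode(messages):
    window = deque(maxlen=ms)          # only the last m_s messages are kept
    for b in messages:
        window.appendleft(b)           # newest message at index 0
        c = np.zeros(n, dtype=int)
        for i, past in enumerate(window):
            c ^= (G_blocks[i] @ past) % 2   # GF(2) convolution
        yield c

msgs = [rng.integers(0, 2, size=k) for _ in range(6)]
code = list(stream_encode(msgs))
print(len(code), code[0])
```

The deque discards messages older than $m_s$ automatically, which is precisely the bounded-storage property that the random Toeplitz construction of the previous subsection lacks.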

Such a scheme handles the above two challenges in the following manner:

•  There have been plenty of studies on how to optimize LDPC codes; moreover, efficient decoding algorithms exist for LDPC codes, which makes LDPC convolutional codes fairly practical.

•  The convolutional structure makes the encoding procedure causal. Moreover, the finite memory length $m_s$ means that only bounded storage is required.

It was shown in Ref. [172] that the proposed convolutional LDPC code can achieve anytime reliability. The details can be found therein.

8.7 Channel coding for interactive communication in computing

The channel coding schemes in the previous sections are dedicated to system state estimation and control in a CPS. We can also consider state estimation and control as a problem of computing: the agents (sensors or controllers) in a CPS have local parameters or observations of the physical dynamics and want to compute the control actions as functions of the local values. Hence the agents can communicate by using an existing protocol of interactive communication (e.g., a sensor sends an observation to a controller, or two controllers exchange their control actions and observations). Such a protocol can always be designed when there is no communication error. Since a noisy communication channel can incur transmission errors, it is important to study how to carry out, over noisy channels, an interactive communication protocol designed for noiseless channels. Although the study of coding for interactive communication in computing is not confined to the purpose of control, it is of significant importance for the design of channel coding schemes in a CPS. Hence in this section, we follow the seminal work of Schulman [174] to introduce the studies on this topic.

8.7.1 System Model

For simplicity, we consider two agents A and B, which have local discrete arguments $z_A$ and $z_B$, respectively. They are required to compute a function $f(z_A, z_B)$. The two agents can communicate with each other, one bit per transmission. The simplest approach is for A to transmit $z_A$ to B and then let B calculate $f(z_A, z_B)$. However, in many situations fewer bits suffice to obtain $f(z_A, z_B)$, which is the subject of research on communication complexity [94, 95].

We assume that the protocol for computing $f(z_A, z_B)$ has been designed under the assumption that there is no communication error. The challenge is then how to carry out the communication for computing, with a given probability of error, when the communication channel is noisy (and thus the probability of transmission error is nonzero).

8.7.2 Tree Codes

The construction of a communication protocol for computing the function $f(z_A, z_B)$ is based on the concept of tree codes. A d-ary tree with depth n is a tree in which each nonleaf node has d child nodes and the number of levels is n. A d-ary tree code over an alphabet S with distance parameter α and depth n can then be represented by a d-ary tree of depth n in which each edge is labeled with an element of S. A codeword is generated by tracing from the root to a certain node in the tree and concatenating the labels of the traversed edges. We denote by W(v) the codeword generated by node v in the tree. Consider two nodes $v_1$ and $v_2$ at the same depth h in the tree, whose least common ancestor has depth $h - l$. Then $\Delta(W(v_1), W(v_2)) \geq \alpha l$, where Δ(⋅, ⋅) is the Hamming distance. An illustration of the tree code is given in Fig. 8.13, where h = 5 and l = 3. Hence if α = 0.5, we have $\Delta(W(v_1), W(v_2)) \geq 2$.

Fig. 8.13 Illustration of tree code.
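A toy experiment makes the distance condition concrete: label the edges of a small d-ary tree with random symbols and measure, over all pairs of equal-depth nodes, the Hamming distance of their codewords divided by the depth l below their least common ancestor. The parameters are illustrative; a purely random labeling can well give a ratio of 0 for some sibling pair, which hints at why constructing good tree codes is nontrivial (see the discussion below).

```python
import itertools, random

random.seed(6)
d, depth, sigma = 2, 6, 4
labels = {}          # path prefix (tuple of child indices) -> edge symbol

def word(path):
    """Codeword W(v) for the node reached by `path` from the root."""
    out = []
    for i in range(1, len(path) + 1):
        prefix = path[:i]
        if prefix not in labels:
            labels[prefix] = random.randrange(sigma)   # random labeling
        out.append(labels[prefix])
    return out

def min_ratio():
    """min over equal-depth node pairs of Delta(W(v1), W(v2)) / l."""
    best = 1.0
    nodes = list(itertools.product(range(d), repeat=depth))
    for v1, v2 in itertools.combinations(nodes, 2):
        w1, w2 = word(v1), word(v2)
        lca = next(i for i in range(depth) if v1[i] != v2[i])
        l = depth - lca                      # depth below the common ancestor
        ham = sum(a != b for a, b in zip(w1, w2))
        best = min(best, ham / l)
    return best

print(min_ratio())
```

Symbols above the divergence point coincide by construction, so the ratio never exceeds 1; a tree code with parameter α guarantees that this minimum stays at least α for every pair.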

Construction of potent tree codes

In Ref. [174], only the existence of a good tree code with a large relative Hamming distance is assumed; the detailed construction of the code is not discussed. It was not until recent years that the explicit construction of the tree code was discussed [175], in which some intuitions were provided at the beginning. Since the random coding scheme has achieved great success in traditional information theory, the first idea for the code construction is to try a random coding scheme; i.e., we randomly assign output alphabet letters to each edge in the tree. This looks reasonable, since we require any two divergent paths in the tree to have large discrepancies (as illustrated in Fig. 8.14A); otherwise it is hard to distinguish the two trajectories VP and VP′. Since the outputs are randomly assigned, the output sequences should have a large Hamming distance with a large probability.

Fig. 8.14 (A) Large deviations; (B) crossed deviations; (C) small deviations.

However, this is not the only requirement. We should let short divergent paths in the tree have sufficiently large Hamming distances. This nontrivial requirement is illustrated in Fig. 8.14B. Suppose that the transmission errors make the pebbles diverge from the correct path to wrong locations A, A′, B, B′. Assume that these short divergent paths have small relative Hamming distances. Then it is possible that the agents keep making mistakes due to the small relative Hamming distances. For example, the agent may be distracted to the wrong location A and return to the correct path after recognizing the mistake; however, it makes a mistake soon after and diverges to A′. In the subsequent procedure, the agent keeps falling into wrong locations such as B and B′. Hence in Ref. [174] it is required that all paths have large Hamming distances.

However, it is not necessary to require all paths to have large Hamming distances, which incurs too many constraints. It is argued in Ref. [175] that we can allow some branches in the tree to have small relative Hamming distances. As illustrated in Fig. 8.14C, each path from the root to a leaf is allowed to have some small diverged branches that have small relative Hamming distances. It is shown in Ref. [175] that this requirement can assure the success of protocol simulation.

To quantify the above requirement on the tree code, we need the following definition of a potent tree code:

Definition 24

Consider two nodes u and v with the same depth h in the tree, whose least common ancestor is w with depth h − l. We say that u and v are α-bad nodes if

Δ(W(u), W(v)) < αl.  (8.168)

The paths from w to u and v are called α-bad paths. The interval [h − l, h] is called an α-bad interval.

Consider a tree code with depth N. If there is a path Q whose union of all α-bad intervals has a total length no less than ϵN, then we say that this tree is an (ϵ, α)-bad tree. Otherwise, it is an (ϵ, α)-potent tree code.

For a detailed construction of potent tree code, it is proposed in Ref. [175] to use the ϵ-biased sample space, which is defined as follows:

Definition 25

Consider a sample space S on n bits. For any element x = (x1, …, xn) in S and any nonzero binary sequence a = (a1, …, an), we define y(x, a) = x1a1 ⊕ ⋯ ⊕ xnan, where ⊕ denotes modulo-2 addition. S is said to be ϵ-biased with respect to linear tests if it satisfies

|P(y(X, a) = 0) − P(y(X, a) = 1)| ≤ ϵ,  (8.169)

for any nonzero a, where X is selected uniformly at random from S.
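Definition 25 can be checked numerically on small spaces. The following sketch (our own code, with an exhaustive loop over all nonzero linear tests) computes the bias of a sample space:

```python
# Computing the bias of a small sample space (our own illustrative code):
# the bias is the worst-case imbalance of y(X, a) over nonzero tests a.

from itertools import product

def bias(space, n):
    """max over nonzero a of |P(y(X,a)=0) - P(y(X,a)=1)|,
    with X uniform over `space` (a list of n-bit tuples)."""
    worst = 0.0
    for a in product((0, 1), repeat=n):
        if not any(a):
            continue  # linear tests must be nonzero
        y = [sum(x_i & a_i for x_i, a_i in zip(x, a)) % 2 for x in space]
        p0 = y.count(0) / len(y)
        worst = max(worst, abs(p0 - (1 - p0)))
    return worst

# The full space {0,1}^n is perfectly balanced under every test: bias 0.
full = list(product((0, 1), repeat=3))
assert bias(full, 3) == 0.0

# A skewed space fails badly under some test (here a = (1, 0, 0)).
assert bias([(0, 0, 0), (0, 0, 1)], 3) == 1.0
```

The interesting case, of course, is a space much smaller than 2^n whose bias is still small; the construction below gives one.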

It is shown in Ref. [175] that an ϵ-biased sample space can be constructed as follows: consider a prime p > (n/ϵ)²; a point in S is given by a number x in {0, 1, …, p − 1} mapped to the n-bit sequence (r0(x), …, rn−1(x)), where

ri(x) = (1 − χp(x + i))/2,  (8.170)

and χp(x) denotes the quadratic character of x (mod p).
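A sketch of this construction in Python, using Euler's criterion to evaluate the quadratic character; the treatment of χp(0) is a convention we chose here, since the text does not specify it:

```python
# Sketch of the construction in Eq. (8.170). Euler's criterion gives the
# quadratic character; mapping chi_p(0) to +1 is our own convention.

def chi(x, p):
    """Quadratic character of x mod p: +1 for quadratic residues,
    -1 for non-residues (Euler's criterion), +1 for x = 0 by convention."""
    x %= p
    if x == 0:
        return 1
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

def point(x, p, n):
    """Map x in {0, ..., p-1} to the n-bit sequence (r_0, ..., r_{n-1})
    with r_i(x) = (1 - chi_p(x + i)) / 2."""
    return tuple((1 - chi(x + i, p)) // 2 for i in range(n))

# For n = 4 and eps = 1/2 we need a prime p > (n/eps)^2 = 64; take p = 67.
p, n = 67, 4
space = [point(x, p, n) for x in range(p)]
assert len(space) == p
assert all(len(s) == n and set(s) <= {0, 1} for s in space)
```

Note the space has only p points, far fewer than the 2^n points of the full cube for large n, which is what makes it useful for derandomized constructions.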

Once an ϵ-biased sample space S is constructed, we can use it to build an (ϵ, α)-potent tree code. For simplicity, we assume that the output of each edge in the tree is binary. Since there are on the order of d^N edges in a d-ary tree of depth N, the outputs of all edges in the tree can be described by a bit sequence of that length: we sort the edges in the tree in a predetermined order and then assign each element in the sequence to an edge. This forms a d-ary small biased tree code.

Small biased tree codes have a useful property defined as follows:

Definition 26

A sample space for n bits, x1, …, xn, is called (ϵ, k)-independent if

|P((xi1, …, xik) = ξ) − 2^−k| ≤ ϵ,

for any k indices i1, …, ik and any k-bit sequence ξ.

The intuitive explanation of the property of (ϵ, k)-independence is that the corresponding sequence is very close to uniformly distributed on any k-subset of coordinates. It is shown in Ref. [175] that if a sample space is ϵ-biased then it is also ((1 − 2^−k)ϵ, k)-independent, for any k. This property guarantees that, with a large probability, the small divergent branches in a small biased tree code have reasonable relative Hamming distances, since their bit assignments are “very random.”
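The (ϵ, k)-independence property can likewise be measured exhaustively for tiny spaces; the following is our own illustrative code, not part of Ref. [175]:

```python
# Measuring (eps, k)-independence (Definition 26) by brute force: for
# every k-subset of coordinates and every k-bit pattern, the probability
# under the space should be within eps of 2^{-k}.

from itertools import combinations, product

def k_wise_bias(space, n, k):
    """max over k index-sets and k-bit patterns xi of
    |P((x_{i1}, ..., x_{ik}) = xi) - 2^{-k}|."""
    worst = 0.0
    for idx in combinations(range(n), k):
        for xi in product((0, 1), repeat=k):
            hits = sum(1 for x in space
                       if all(x[i] == b for i, b in zip(idx, xi)))
            worst = max(worst, abs(hits / len(space) - 2 ** (-k)))
    return worst

# The full space {0,1}^3 is exactly k-wise independent for every k <= 3.
full = list(product((0, 1), repeat=3))
assert k_wise_bias(full, 3, 2) == 0.0
```

A small biased space will give a small but nonzero value here, matching the ((1 − 2^−k)ϵ, k)-independence bound quoted above.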

In Ref. [175], the following theorem guarantees that the construction of small biased tree codes leads to potent tree codes, with a large probability:

Theorem 57

Consider ϵ and α in (0, 1). With probability 1 − 2^−Ω(N), a d-ary small biased tree code having depth N is (ϵ, α)-potent if the alphabet size is larger than (2d)^(2+2/ϵ)/(1 − α).

The detailed proof is given in Ref. [175]. Here we provide a brief summary of the basic idea of the proof. Consider a node v. It is possible that there exists another node u having the same depth such that u forms a bad interval of length l. To check this, we examine the last l bits leading to v and u, denoted by Wl(v) and Wl(u), respectively. Due to the property of small biasedness in Definition 26, the Hamming distance between Wl(v) and Wl(u) should be large with a large probability. As will be shown later, potent tree codes can accomplish the task of interactive communications.

Explicit construction of tree codes

In the above discussion, the construction of potent tree codes is based on a random coding scheme. It is of substantial theoretical importance since it proves the existence of potent tree codes. However, it is of little use in practice, since it provides no efficient encoding and decoding schemes.

A breakthrough in the deterministic and efficient construction of tree codes was made by Braverman [176]. The corresponding cost of computation time is subexponential; i.e., the time costs of code construction, encoding, and decoding are all 2^(n^ϵ) for a tree code with size n, where ϵ is a small number.

In this book, we focus on the deterministic construction of tree code. The corresponding coding and decoding complexities can be found in Ref. [176]. The basic idea of tree code construction is to combine multiple small-sized tree codes and then obtain a large-scale tree code. To that end, we introduce an operation called tree code product, i.e., we construct a tree code with depth d1 × d2 from two tree codes having depths d1 and d2.

First we assume that a tree code with depth d has been constructed. Here we assume that d is small enough that we can use exhaustive search to find the tree code. Then we convert it to a local tree code of depth D ≫ d and locality l. A local tree code with depth D and locality l, as well as other parameters, is defined as follows:

Definition 27

A local tree code, with depth d, distance α, locality l, input alphabet size σi, and output alphabet size σo, is a σi-ary tree with depth d. In the tree, each edge is labeled with a letter from the output alphabet of size σo. This defines the coding mapping 𝒯. For any three codewords w, w1, and w2 over the input alphabet, such that

|w1| = |w2| ≤ l,  |w| + |w1| ≤ d,  w1(1) ≠ w2(1),  (8.171)

we have

Δ(𝒯(w ∘ w1), 𝒯(w ∘ w2)) ≥ α|w1|,  (8.172)

where ∘ means the operation of concatenation.

Remark 17

Eq. (8.172) means that, for two input sequences, the distance of their coding outputs is proportional to the length of their divergent paths in the tree. This is the same as in the definition of the tree code. The difference from the tree code is that we require this property only for sequences with small disparities (i.e., the depth of the subtree rooted at the least common ancestor is at most l). This is why we call it a “local” tree code.

The construction of the local tree code with a much larger depth D is carried out by repeating the mapping 𝒯 provided by the original tree code with a much smaller depth d. It is shown in Ref. [176] that this construction satisfies the requirement for a local tree code.

Based on the conversion from a tree code to a local tree code with much larger depth, we can combine two tree codes with smaller depths to obtain a new tree code with a larger depth, which is stated in the following theorem:

Theorem 58

Consider two tree codes 𝒯I and 𝒯O, whose depths, distances, and alphabet sizes are given by (d1, α1, σi, σo1) and (d2, α2, σi, σo2), respectively. Then one can construct a product tree code 𝒯P having parameters (d, α, σi, σo) which satisfy

α = min(α1, α2/10),  d = d1d2/4,  σo = (σo1σo2)^O(1).  (8.173)

The procedures of code construction and encoding are illustrated in Fig. 8.15. Essentially, the encoding is a concatenation of an inner code 𝒯i and an outer code 𝒯o. The inner code 𝒯i is a local tree code, which is obtained from 𝒯I. It takes care of the divergent paths whose divergence is short. Then the coding output is fed into the outer code 𝒯o, which takes care of longer paths. 𝒯o is obtained from an intermediate code 𝒯o′. Simply speaking, 𝒯o′ is obtained by spreading 𝒯O over different blocks. Then the output of 𝒯o′ is protected by traditional error correction codes, thus forming 𝒯o. The details of the construction can be found in Ref. [176].

Fig. 8.15 Illustration of the procedure of constructing the product tree code.

8.7.3 Protocol Simulation

We explain the protocol based on the tree codes, as proposed in Ref. [174].

The protocol for computing the function f(zA, zB), denoted by π, can be represented by a 4-ary tree, in which the trajectory of a communication for computing can be represented by a path from the root to a leaf node. Each level of edges corresponds to one round of communications. Note that agents A and B transmit simultaneously within one round. For each node in the tree, the four child nodes led by the four edges 00, 01, 10, and 11 represent the states after this round of communications. For example, the node led by edge 00 is the state if the bits transmitted in this round are both zero. This procedure can be illustrated by the first two rounds:

 In the first round, agents A and B transmit bits πA(zA, ϕ) and πB(zB, ϕ), respectively. The two bits are denoted by m1 = π(z, ϕ), which select one of the four child nodes.

 In the second round, agents A and B transmit bits πA(zA, π(z, ϕ)) and πB(zB, π(z, ϕ)), respectively. The two transmitted bits are denoted by m2.

In a generic round, say round i, agents A and B transmit bits πA(zA, {mj}j=1, …, i−1) and πB(zB, {mj}j=1, …, i−1), respectively. This procedure can be represented by a trajectory in a tree denoted by 𝒯. At node v of 𝒯, agent A (or B) transmits bit πA(zA, v) (or πB(zB, v)).

Given a tree code and the protocol π for noiseless communication channels, we can design the protocol subject to noisy channels by simulating the protocol π. The basic idea is that both agents estimate each other’s current state by using the received bits. If the two agents have different estimates of the current location in 𝒯, due to errors in the transmissions, they will begin to find that the received bits differ from what they expect. At this time, they will trace back in the tree 𝒯 and try to synchronize their estimated locations in 𝒯.

To that end, the following structures are needed at both agents A and B:

 Pebble: Each agent has a pebble to indicate its conjecture of the current location in the tree 𝒯, namely which node has been reached in the protocol π. In each round of communications, the pebble of each agent can be moved upstream (action ℬ) or downstream (indicated by one of the four possible edges, 00, 01, 10, and 11) in the tree 𝒯, or keep its current location (action ℋ). The agent also transmits either 0 or 1, according to the edge at the current location in the protocol tree 𝒯.

 Agent state: The state of each agent can be represented by another 12-ary tree 𝒴. Each node has 12 child nodes, indexed by the movement of the pebble (six possibilities) and the transmitted bit (two possibilities).

With the two structures explained above, we can describe the protocol for simulating π with noisy communication channels, which is detailed in Procedure 11.

Procedure 11

Iterative Communication for Simulating Protocol π

1: Input: π: the protocol to be simulated; T: the number of rounds in the protocol π; (zA, zB): local inputs; 𝒯: the protocol tree of π; 𝒴: the tree of system states.

2: for t = 1 : 5T do

3:  Agent A checks its state sA in 𝒴A.

4:  Agent A sends out bit w(sA).

5:  Agent A estimates the current state g of agent B such that the Hamming distance Δ(W(g), Z) is minimized, where Z is the received bits and W(g) is the bit sequence sent by B if its current state is g.

6:  Compute the corresponding location of the pebble of B, namely pebble(g).

7:  Agent A compares its own pebble location vA and pebble(g).

8:  if vA is a strict ancestor of pebble(g) then

9:  Agent A does not move its pebble in 𝒯 (i.e., the action is ℋ). Reset its own state.

10:  end if

11:  if vA and pebble(g) have diverged (i.e., their least common ancestor is a strict ancestor of vA) then

12:  Move the pebble upstream in 𝒯 (i.e., the action is ℬ). Reset its own state.

13:  end if

14:  if vA = pebble(g) then

15:  Move the pebble downstream in 𝒯.

16:  end if

17:  Agent B carries out the same steps as those of A.

18: end for

Note that we can fix the total number of rounds at 5T: if the simulation has been completed before time 5T, we let the agents send bits 0 until 5T; if the simulation has not been completed, we still stop at time 5T and claim that the procedure of simulation fails.
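The three pebble rules in steps 8–16 can be sketched as follows. This is our own simplification (nodes as root-to-node paths, and the other agent's pebble assumed correctly estimated), not the implementation in Ref. [174]:

```python
# A toy version of the pebble rules in steps 8-16 of Procedure 11.
# Nodes of the protocol tree are root-to-node paths (tuples of edges);
# the helper names and the perfect-estimation assumption are ours.

def step(v_self, v_other_est, next_edge):
    """Return the new pebble path given the estimated other pebble."""
    # depth of the least common ancestor = longest common prefix
    common = 0
    while (common < min(len(v_self), len(v_other_est))
           and v_self[common] == v_other_est[common]):
        common += 1
    if v_self == v_other_est:
        return v_self + (next_edge,)   # agree: move downstream
    if common == len(v_self):
        return v_self                  # strict ancestor: hold
    return v_self[:-1]                 # diverged: move upstream

# Agreement: both pebbles advance along the protocol tree.
assert step((0, 1), (0, 1), 1) == (0, 1, 1)
# Strict ancestor: wait for the other pebble to catch up.
assert step((0,), (0, 1), 0) == (0,)
# Divergence: back up toward the least common ancestor.
assert step((0, 1), (0, 0), 0) == (0,)
```

When estimates are correct, both pebbles repeatedly take the first branch and reach depth T after T rounds; an estimation error triggers the hold/back-up branches until the pebbles re-synchronize.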

Note that, for the potent tree code proposed in Ref. [175], the interactive protocol is still the same.

8.7.4 Performance Analysis

We first focus on the analysis in Ref. [174], where it is shown that the procedure described in Procedure 11 can simulate the original protocol π with an exponentially small failure probability. This is summarized in the following theorem:

Theorem 59

We assume a binary symmetric communication channel and the existence of a tree code with relative distance 1/2. The simulation procedure described in Procedure 11, which is run for 5T rounds (T is the number of rounds in the original protocol π), can simulate π with an error probability less than or equal to 2^−5T.

The rigorous proof of this important theorem is given in Ref. [174]. In this book, we provide an intuitive explanation. The key step is to quantify the level of success in the simulation procedure by defining a concept called mark:

Definition 28

Consider the two pebbles at vA and vB in 𝒯. Their least common ancestor is denoted by v̄. The mark of the simulation procedure is defined as

mark(vA, vB, v̄) = depth(v̄) − max(d(v̄, vA), d(v̄, vB)),  (8.174)

where d is the distance in the tree.
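Eq. (8.174) is easy to transcribe when nodes are represented as root-to-node paths, so that depth and tree distance can be read off directly; the helper names below are ours:

```python
# A direct transcription of Eq. (8.174): nodes are root-to-node paths,
# the least common ancestor is the longest common prefix.

def lca(v_a, v_b):
    """Least common ancestor of two paths: their longest common prefix."""
    i = 0
    while i < min(len(v_a), len(v_b)) and v_a[i] == v_b[i]:
        i += 1
    return v_a[:i]

def mark(v_a, v_b):
    v_bar = lca(v_a, v_b)
    d_a = len(v_a) - len(v_bar)   # tree distance from v_bar to v_a
    d_b = len(v_b) - len(v_bar)
    return len(v_bar) - max(d_a, d_b)

# Pebbles together at depth 3: the mark equals the common depth.
assert mark((0, 1, 1), (0, 1, 1)) == 3
# One pebble strayed two levels below the common ancestor at depth 1.
assert mark((0, 1, 1), (0, 0)) == -1
```

The second example shows that the mark can go negative while the pebbles are out of sync, which is why bad moves can cost more than good moves gain.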

It is important to realize that π is successfully simulated if the mark at the termination (time 5T) is at least T: if so, the least common ancestor has a depth of at least T, which means that the two pebbles have reached a node with depth T simultaneously, which implies the success of the simulation. This is rigorously proved in Lemma 4 of Ref. [174]. Hence the estimation of the error probability reduces to the problem of evaluating the probability that the mark is less than T.

The second important point is to realize the impact of a good move (when both agents estimate the system states correctly) or a bad move (when a state estimation error occurs):

 A good move increases the mark by 1.

 A bad move decreases the mark by no more than 3.

Then a sufficient condition for a successful simulation of π is that the proportion of good moves is at least 4/5, since at the termination at time 5T, we have

mark ≥ (4/5) × 5T − 3 × (1/5) × 5T = T.  (8.175)

Therefore the problem becomes one of bounding the proportion of bad moves, given the tree code with relative distance 1/2. We denote by l(t) the larger of the two pebbles’ location errors at time t. If there is a bad move at time t, we define the error interval as t − l(t) + 1, …, t, since there must be transmission errors within this error interval. Each bad move must be within an error interval; hence we can use the number of error intervals to bound the number of bad moves. A more detailed analysis leads to the conclusion in Theorem 59.
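The accounting behind Eq. (8.175) can be sanity-checked in a few lines; `worst_case_mark` is our own hypothetical helper, not part of Ref. [174]:

```python
# Sanity check of the move accounting: out of 5T moves, each good move
# adds 1 to the mark and each bad move subtracts at most 3, so a good
# fraction of 4/5 already guarantees a final mark of at least T.

def worst_case_mark(T, good_fraction):
    rounds = 5 * T
    good = int(good_fraction * rounds)
    bad = rounds - good
    return good * 1 - bad * 3   # bad moves cost at most 3 each

T = 10
assert worst_case_mark(T, 4 / 5) == T   # exactly the threshold of Eq. (8.175)
assert worst_case_mark(T, 0.9) > T      # more good moves only help
```

With a tree code of relative distance 1/2, the probability that more than a 1/5 fraction of the moves is bad is exponentially small, which yields Theorem 59.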

In Ref. [175], the requirement of the existence of tree code is relaxed to that of the potent tree code, while the interactive protocol is still the same as that in Ref. [174]. The corresponding performance is summarized in the following theorem:

Theorem 60

Suppose that the communication channel is binary symmetric. Consider a (1/20, α)-potent tree code. Then a protocol π of length T without communication error can be successfully simulated by Procedure 11 with probability 1 − 2^−Ω(T), if it is carried out for N = O(T) rounds.

8.8 Conclusions

In this chapter, we have discussed the physical dynamics-aware design of the physical layer in the communication network of a CPS. Various approaches have been discussed, ranging from modulation to channel coding. These approaches are still the subject of academic research; almost no existing physical-layer communication protocol for a CPS incorporates awareness of the physical dynamics. The main challenges lie in the following aspects:

 The computational complexity of these algorithms is still too high. For example, in adaptive modulation, the scheduler needs to solve an LMI. Although efficient algorithms from convex optimization can be used, this is still challenging for real-time control in CPSs.

 These algorithms require detailed models and parameters of the physical dynamics, which may be unavailable in many situations. Hence the lack of information on the physical dynamics could disable these algorithms. Even if the models and parameters can be obtained from system identification or machine learning, it is still unclear whether these algorithms are robust to errors in those models and parameters.

Hence there is still a long way to go before the physical dynamics-aware design of the physical layer becomes useful in practice.

References

[13] Sahai A., Mitter S. The necessity and sufficiency of anytime capacity for control over a noisy communication channel. IEEE Trans. Inform. Theory. 2006;52(8):3369–3395.

[16] Branicky M.S., Phillips S.M., Zhang W. Stability of networked control systems: explicit analysis of delay. In: Proceedings of the American Control Conference (ACC). 2000.

[33] Cover T.M., Thomas J.A. Elements of Information Theory. second ed. Wiley; 2006.

[94] Yao A.C. Some complexity questions related to distributed computing. In: Proceedings of the 11th ACM Symposium on Theory of Computing (STOC). 1979.

[95] Kushilevitz E., Nisan N. Communication Complexity. Cambridge, UK: Cambridge University Press; 2006.

[133] Boyd S., El Ghaoui L., Feron E., Balakrishnan V. Linear Matrix Inequalities in System and Control Theory. Philadelphia, PA: Society of Industrial and Applied Mathematics; 1994.

[146] Li H., Song J., Zeng Q. Adaptive modulation in networked control systems with application in smart grids. IEEE Commun. Lett. 2013;17(7):1305–1308.

[147] Costa O.L.V., Fragoso M.D., Todorov M.G. Continuous-Time Markov Jump Linear Systems. New York: Springer; 2013.

[148] Costa O.L.V., Fragoso M.D. Stability results for discrete-time linear systems with Markovian jumping parameters. J. Math. Anal. Appl. 1993;179:154–178.

[149] Matei I., Martins N.C., Baras J.S. Optimal linear quadratic regulator for Markovian jump linear systems in the presence of one time-step delayed mode observations. In: Proceedings of the 17th World Congress of the International Federation of Automatic Control. 2008.

[150] Neuhoff D.L., Gilbert R. Causal source codes. IEEE Trans. Inform. Theory. 1982;28(5):701–713.

[151] Gaarder N.T., Slepian D. On optimal finite-state digital transmission systems. IEEE Trans. Inform. Theory. 1982;28(2):167–186.

[152] Walrand J.C., Varaiya P. Optimal causal coding-decoding problems. IEEE Trans. Inform. Theory. 1983;29(6):814–820.

[153] Borkar V.S., Mitter S.K., Tatikonda S. Sequential vector quantization of Markov sources. SIAM J. Control Optim. 2001;40(1):135–148.

[154] Li H., Han Z. Distributed source coding for controlling physical systems with application in smart grid. In: IEEE Conference on Global Communications (Globecom). 2014.

[155] Slepian D., Wolf J.K. A coding theorem for multiple access channels with correlated sources. Bell Syst. Tech. J. 1973;52:1037–1076.

[156] Garcia-Frias J., Cabarcas F. Approaching the Slepian-Wolf boundary using practical channel codes. Signal Process. 2006;86:3096–3101.

[157] Matsuta T., Uyematsu T., Matsumoto R. Universal Slepian-Wolf source codes using low-density parity-check matrices. In: IEEE International Symposium on Information Theory. 2010.

[158] Pradhan S.S., Ramchandran K. Distributed source coding using syndromes (DISCUS): design and construction. In: Proceedings of the IEEE Data Compression Conference (DCC). 1999.

[159] Pradhan S.S., Ramchandran K. Generalized coset codes for distributed binning. IEEE Trans. Inform. Theory. 2005;51(10):3457–3474.

[160] Stankovic V., Liveris A.D., Xiong Z., Georghiades C.N. Code design for the Slepian–Wolf problem and lossless multiterminal networks. IEEE Trans. Inform. Theory. 2006;52(4):1495–1507.

[161] Liu Z., Cheng S., Liveris A.D., Xiong Z. Slepian-Wolf coded nested lattice quantization for Wyner-Ziv coding: high-rate performance analysis and code design. IEEE Trans. Inform. Theory. 2006;52(10):4358–4379.

[162] Zamir R., Shamai S., Erez U. Nested linear/lattice codes for structured multiterminal binning. IEEE Trans. Inform. Theory. 2002;48(6):1250–1276.

[163] Dragotti P.L., Gastpar M. Distributed Source Coding: Theory, Algorithms and Applications. Academic Press; 2008.

[164] Conway J.H., Sloane N.J.A. Sphere Packings, Lattices and Groups. New York: Springer; 1998.

[165] Gong S., Li H. Decoding the nature encoded messages for distributed energy generation in microgrid. In: Proceedings of the IEEE International Conference on Communications (ICC). 2011.

[166] Open Smart Grid (OpenSG). SG Network System Requirements Specification. 2010.

[167] Simon D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. Wiley; 2006.

[168] Pearl J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann; 1988.

[169] Ostrovsky R., Rabani Y., Schulman L. Error-correcting codes for automatic control. In: Proceedings of the 46th Annual IEEE Symposium on the Foundations of Computer Science (FOCS). 2005.

[170] Sukhavasi R.T., Hassibi B. Anytime reliable codes for stabilizing plants over erasure channels. In: Proceedings of the IEEE Conference on Decision and Control (CDC). 2011.

[171] Sason I., Shamai S. Performance analysis of linear codes under maximum-likelihood decoding: a tutorial. Future Trend Commun. Inform. Theory. 2006;3.

[172] Dossel L., Rasmussen L.K., Thobaben R. Anytime reliability of systematic LDPC convolutional codes. In: Proceedings of IEEE International Conference on Communications (ICC). 2012.

[173] Tarable A., Nordio A., Tempo R. Anytime reliable LDPC convolutional codes for networked control over wireless channel. In: IEEE International Symposium on Information Theory (ISIT). 2013.

[174] Schulman L.J. Coding for interactive communication. IEEE Trans. Inform. Theory. 1996;42(6):1745–1756.

[175] Gelles R., Moitra A., Sahai A. Efficient coding for interactive communication. IEEE Trans. Inform. Theory. 2014;60(3).

[176] Braverman M. Towards deterministic tree code constructions. Electronic Colloquium on Computational Complexity (ECCC); 2011. Report No. 64.



1 When y1 falls out of R1(y2, γ1), the controller claims that there is a decoding error and discards the received message. Such a decoding error does not have much impact on the control, since its probability is very small and the controller can estimate the system state using the previous history.

2 In this study, we consider the single sensor case. The same analysis and algorithm can be extended to the case of multiple sensors.
