Chapter 8

Physical layer design

Abstract

This chapter is focused on the design of the physical layer in communication systems for controlling cyber physical systems (CPSs). Modulation and coding schemes that take the physical dynamics into consideration are discussed. The selection of a modulation scheme is made adaptive to the physical dynamics. Both point-to-point and distributed source coding methods in CPSs are discussed. For channel coding, various topics are covered, such as joint decoding and system state estimation, and coding schemes especially designed for control. Finally, channel coding for generic interactive computing is analyzed.

Keywords

Modulation; Source coding; Channel coding; Tree codes

8.1 Introduction

We now consider the lowest layer of the communication network in cyber physical systems (CPSs), namely the physical layer. Usually the main tasks in the physical layer consist of modulation and coding, which were introduced in Chapter 2. In modern communication systems, modulation and coding are usually independent of upper layer activities. However, as we will see, if the modulation and coding take into consideration the current physical dynamic state, this can improve the performance of controlling the physical dynamics, at the expense of a more complex system design. In this chapter, we ignore the complexity of system design, which is an important concern in practical system design, and study how the physical layer can be made adaptive to the physical dynamics and how much performance gain we can achieve compared with existing communication systems.

8.1.1 Modulation

As mentioned previously, the task of modulation is to convert information bits into physical symbols. Modern communication systems, particularly wireless systems, can make the modulation scheme adaptive to the communication channel condition, thus making a tradeoff between the communication rate (a denser constellation results in a higher rate) and reliability (a denser constellation results in less reliable transmission). In the context of a CPS, the different communication rates and reliabilities can be interpreted as different packet delays and different packet loss rates. In Section 8.2 we will study the effects of delay and packet loss rate on control performance and then propose a scheme to select the appropriate modulation scheme in an adaptive manner.

8.1.2 Coding

In communications, coding consists of source and channel coding. Both will be discussed in the context of CPSs in this chapter.

Source coding

We know that the purpose of source coding is to convert the original information source into information bits. The main challenge of source coding in a CPS is how to adapt the source coding scheme to the physical dynamics. In Section 8.3 we consider the point-to-point case, in which there is a single sensor and a single controller. Then the source coding is extended to the distributed case in Section 8.4, where we consider multiple sensors.

Channel coding

Channel coding is used to protect the communication from noise and interference. The challenge of channel coding in a CPS is how to embed the channel coding procedure into the control procedure. In Section 8.5 we consider existing channel codes but integrate the decoding procedure into the system state estimation. In Section 8.6 the channel codes are specially designed for the purpose of control in a CPS. Since control can be considered as a special case of computing, the channel codes are designed in the context of distributed computing in Section 8.7.

8.2 Adaptive modulation

There have been many studies on adaptive modulation in wireless communication systems. They mostly consider time-varying wireless channels. A transmitter can change its modulation scheme according to the current wireless channel conditions. For example, when the channel gain is good, the transmitter can use a denser constellation for the modulation (e.g., changing from BPSK to 64QAM), in order to better utilize the good channel quality to transmit more data; on the other hand, when the channel becomes worse, the transmitter can adjust to a sparser constellation such that the communication is still reliable despite the bad channel condition.

Adaptive modulation in a CPS serves a different purpose. Essentially, a tradeoff is needed between the delay and reliability of communications in a CPS. In the context of a CPS, the requirements of the communication delay and reliability (e.g., the packet drop rate) may change with the system state of physical dynamics. It is desirable to make the modulation take into consideration the system state.

An example in the physical layer is given in Fig. 8.1, in which using QPSK or 8PSK for modulation leads to different physical dynamics with different feedback delays and packet drop rates. Here it is assumed that the same number of bits is used to describe an observation. Hence when QPSK is used, the sensor needs more time to transmit one packet but incurs fewer demodulation errors; when 8PSK is used, the delay is decreased while the reliability is worse. It is therefore desirable to study how the modulation should be adapted to the system state of the physical dynamics. Note that the proposed analysis and design are also valid for adaptive channel coding in a CPS.

Fig. 8.1 Illustration of adaptive modulation.

In this section, we follow Ref. [146] to introduce algorithms for adaptive modulation in CPSs, which take the system state of the physical dynamics into consideration.

8.2.1 System Model

Physical dynamics

Continuous-time physical dynamics are considered, whose evolution is described by the following differential equation:

$$\dot{x}(t) = Ax(t) + Bu(t), \qquad (8.1)$$

where x(t) is the N-dimensional system state, u(t) is the M-dimensional control action, and A and B are both system parameters which are assumed to be perfectly known. Here the physical dynamics are deterministic. The analysis of stochastic systems is much more complicated, and is beyond the scope of our discussion. A sensor is used to measure the system state x. It is assumed that there is no measurement noise. Then the sensor sends the measurements in a digital communication channel to the controller. We assume that linear feedback control is used by the controller in the ideal case; i.e.,

$$u(t) = Kx(t), \qquad (8.2)$$

where K is the feedback gain matrix, which is predetermined before system operation. For example, it can be obtained from the linear quadratic regulator (LQR) controller.

Communication channel

The number of bits used to quantize the measurement at the sensor is denoted by B. For simplicity of analysis, we ignore the quantization error. This is reasonable if B is sufficiently large, which is true in practical systems (otherwise the control will be unreliable and that analysis is also complex). The symbol rate is R symbols per second, which is fixed. Hence for different modulation schemes, the communication delays are different since different numbers of symbols are needed to convey the B bits. We assume that there are S available modulation schemes, among which the sensor can switch. This is reasonable in modern software-defined radio systems. In the sth modulation scheme, each symbol carries bs bits (e.g., an 8PSK symbol carries 3 bits). Hence the time needed to convey the B bits is given by

$$T_s = \frac{B}{b_s R}, \qquad (8.3)$$

for the sth modulation scheme.
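
As a quick illustration of Eq. (8.3), the sketch below computes the per-packet delay for a few modulation schemes; the packet size, symbol rate, and scheme list are assumed values for illustration only.

```python
# Minimal sketch of the delay side of the delay/reliability tradeoff, Eq. (8.3).
B = 256          # bits per quantized measurement (assumed)
R = 1e4          # symbol rate in symbols/second (assumed)
schemes = {"BPSK": 1, "QPSK": 2, "8PSK": 3, "16QAM": 4}  # bits per symbol b_s

for name, bs in schemes.items():
    Ts = B / (bs * R)  # transmission time of one packet, Eq. (8.3)
    print(f"{name}: T_s = {1e3 * Ts:.2f} ms")
```

A denser constellation (larger bs) shortens Ts but, over a noisy channel, raises the packet loss rate.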

It is possible that the sensor cannot switch the modulation scheme freely, since usually the switching time is nonnegligible. Therefore we assume that the modulation scheme can be changed every Tm seconds. The sampling period of the measurements is denoted by T. The following assumptions are used for simplicity of analysis:

 We assume T > Ts for all possible s; i.e., any measurement can always be sent out (but not necessarily reach the controller) before the next sample arrives.

 We assume that tm = Tm/T is an integer; i.e., each modulation scheme can be used for tm samples after being activated.

Since noise exists in the communication channel, it is possible that packet loss occurs. The packet loss rate of the sth modulation scheme is denoted by Pe,s. For simplicity, we assume that all the transmission errors can be detected (but not corrected). Since the communication is highly real-time, we assume that there is no retransmission even if a packet is dropped.

Suppose that the sth scheme starts from time 0. The control action u at time t is determined by linear feedback control:

$$u(t) = Kx(\tau_s(t)), \qquad (8.4)$$

where τs(t) is the time of the nearest sample of the system state:

$$\tau_s(t) = \max\{(n-1)T_s \mid nT_s \le t\}. \qquad (8.5)$$

Compared with Eq. (8.2), we replace the instantaneous feedback x(t) with the closest observation x(τs(t)), by assuming that the error between x(t) and x(τs(t)) is small, which is reasonable if the system dynamics do not change very rapidly.

When transmission error occurs, the following two possible approaches are considered to synthesize u:

 Zero observation upon transmission failure (ZOTF): When x(t) is not successfully received, we set x(t) = 0 and then obtain u(t) = 0; i.e., the controller takes zero control action.

 Most recent observation upon transmission failure (MROTF): When x(t) is not successfully received, the controller uses the most recent sample of x as a reasonable estimation of x(t).
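
A minimal sketch of these two fallback rules at the controller, assuming the controller stores the most recent successfully received sample:

```python
import numpy as np

def controller_observation(x_received, x_last, mode):
    """Observation fed to the feedback law u = K x when a packet may be lost.
    x_received is None upon transmission failure; x_last is the most recent
    successfully received sample (assumed to be stored by the controller)."""
    if x_received is not None:
        return x_received
    return np.zeros_like(x_last) if mode == "ZOTF" else x_last  # MROTF
```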

8.2.2 Impact of Modulation on System Dynamics

According to the system model, when the sth modulation scheme is selected, the corresponding delay is Ts and the packet loss rate is Pes. When the sensor uses denser modulation, it results in a smaller delay but a higher packet loss rate; and vice versa. Hence it is necessary to analyze the impacts of delay and packet loss on the system dynamics.

Impact of communication delay

In Ref. [16], the impact of communication delays on the system dynamics has been analyzed by assuming that the delay is shorter than the sampling period. Essentially, the analysis converts the controlled system with communication delay into an equivalent discrete-time system, whose state is defined as

$$z(k) = \big(x(kT),\, u((k-1)T)\big), \qquad (8.6)$$

and the evolution law of dynamics is given by

$$z(k+1) = \Phi_s z(k), \qquad (8.7)$$

where, when the sth modulation scheme is used, the matrix Φs is given by

$$\Phi_s = \begin{pmatrix} \Psi + \Gamma_0(T_s)K & \Gamma_1(T_s) \\ K & 0 \end{pmatrix}, \qquad (8.8)$$

where

$$\Psi = e^{AT}, \quad \Gamma_0(T_s) = \int_0^{T-T_s} e^{As}\,ds\, B, \quad \Gamma_1(T_s) = \int_{T-T_s}^{T} e^{As}\,ds\, B. \qquad (8.9)$$
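
These matrices can be evaluated numerically with the matrix exponential. Below is a sketch assuming given A, B, K, sampling period T, and delay Ts; it uses the standard augmented-matrix identity for the integral of e^{As}B.

```python
import numpy as np
from scipy.linalg import expm

def int_expm(A, B, t):
    """M(t) = int_0^t e^{As} ds B, via expm([[A, B], [0, 0]] t) = [[e^{At}, M(t)], [0, I]]."""
    n, m = B.shape
    aug = np.zeros((n + m, n + m))
    aug[:n, :n], aug[:n, n:] = A, B
    return expm(aug * t)[:n, n:]

def phi_s(A, B, K, T, Ts):
    """Closed-loop matrix Phi_s of Eq. (8.8) for sampling period T and delay Ts."""
    n, m = B.shape
    Psi = expm(A * T)                  # Psi = e^{AT}, Eq. (8.9)
    G0 = int_expm(A, B, T - Ts)        # Gamma_0(Ts)
    G1 = int_expm(A, B, T) - G0        # Gamma_1(Ts) = M(T) - M(T - Ts)
    return np.block([[Psi + G0 @ K, G1],
                     [K, np.zeros((m, m))]])
```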

Impact of delay and packet loss

We add the impact of packet losses to that of the communication delay. Since we assume two possible control actions upon a packet loss, we discuss the impact of packet loss in the following two situations:

 ZOTF criterion: In this case, the controller carries out zero control action if a packet of measurement is lost. Then there are two possible modes for the control feedback gain matrix, namely

$$K_r = rK, \quad r = 0, 1, \qquad (8.10)$$

where r = 0 (or 1) means transmission failure (or transmission success). Thus the matrix $\Phi_s$ in Eq. (8.7) should be replaced with $\Phi_s^{r(k)}$, in which K is replaced with $K_{r(k)}$. Therefore when the sth modulation scheme is used, the system dynamics are governed by

$$z(k+1) = \Phi_s^{r(k)} z(k). \qquad (8.11)$$

 MROTF criterion: In this case, if a packet is lost, the controller uses the most recently received packet for system state estimation. For the analysis a new variable xc(t) is defined, which is given by

$$x_c(k) = r(k)x(k) + (1 - r(k))x_c(k-1). \qquad (8.12)$$

We can verify that $x_c(k)$ is the most recently received observation no later than time k.
Then the equivalent system state z is defined as

$$z(k) = \big(x^T(k),\, x_c^T(k),\, u^T(k-1)\big)^T, \qquad (8.13)$$

in which the timing of the variables is illustrated in Fig. 8.2.

Fig. 8.2 Illustration of the timing of the variables.

The equivalent system state satisfies the same expression as in Eq. (8.11), in which the expression of $\Phi_s^{r(k)}$ is given by

$$\Phi_s^{r(k)} = \begin{pmatrix} \Psi & \Gamma_0(T_s)K & \Gamma_1(T_s) \\ r(k)I & (1-r(k))I & 0 \\ 0 & K & 0 \end{pmatrix}. \qquad (8.14)$$

Markovian jump process

The system described in Eq. (8.11) is essentially a Markovian jump process [147], in which the state r(k) forms a Markov chain. On the other hand, the state s (namely the selected modulation scheme) is a discrete state that can be controlled by the sensor. Given a fixed s, the system stability is judged by the following theorem, which is a straightforward conclusion from the theorem in Ref. [148]:

Theorem 49

The system described in Eq. (8.11) is mean-square stable if and only if there exists a matrix G > 0 such that

$$G - P_{e,s}(\Phi_s^0)^T G \Phi_s^0 - (1 - P_{e,s})(\Phi_s^1)^T G \Phi_s^1 > 0. \qquad (8.15)$$

Beyond the system stability, we can also consider the cost of the dynamics. We define the cost as

$$J_N = \sum_{k=0}^{N-1} x^T(k) Q x(k) = \sum_{k=0}^{N-1} z^T(k) \tilde{Q} z(k), \qquad (8.16)$$

where $\tilde{Q} = \mathrm{diag}(Q, 0, 0)$. As a straightforward conclusion of Theorem 1 in Ref. [149], the expected cost is given by

$$J_N(z(0), r(0), s) = x^T(0) P_{0,r(0),s}\, x(0), \qquad (8.17)$$

where

$$P_{t,j,s} = P_{e,s}(\Phi_s^0)^T P_{t+1,0,s} \Phi_s^0 + (1 - P_{e,s})(\Phi_s^1)^T P_{t+1,1,s} \Phi_s^1 + \tilde{Q}, \qquad (8.18)$$

with $P_{N,j,s} = \tilde{Q}$. Note that z(0) and r(0) are known initial conditions.

8.2.3 Hybrid System Modeling

As explained in Chapter 7, a hybrid system is a powerful tool to model systems with both discrete and continuous states. As an example, we recall the linear switching dynamics:

$$x(t+1) = A_{k(t)} x(t) + B_{k(t)} u(t), \qquad y(t) = C_{k(t)} x(t),$$

where x is the continuous system state, k is the discrete system state, u is the control action, y is the observation vector, and A, B and C are system parameters, which are dependent on the discrete system state. Then in our context, we define the discrete system state as k(t) = (r(t), s(t)); i.e., the discrete state represents the status of transmission and the current modulation scheme. Note that r(t) is random in nature and uncontrollable, while the sensor can control s(t).

We then consider two types of adaptive modulation schemes, namely System Parameter Aware Adaptive Modulation (SPAAM) and System State Aware Adaptive Modulation (SSAAM). In SPAAM the modulation scheme is adaptive to only the system parameters, not the state of the dynamics. Hence it is fixed throughout the operation, unless the system parameters are changed. In SSAAM the modulation is adaptive to the instantaneous system state of physical dynamics, and thus is time varying. Below we discuss both types of adaptive modulation.

SPAAM

As indicated by the name, in SPAAM the modulation scheme is adaptive only to the system parameters and is fixed regardless of the current system state of physical dynamics. We assume that the objective of SPAAM is the stability of system dynamics. Then by applying the conclusion in Theorem 49, we obtain the following optimization problem:

$$\max_{s,\gamma,G} \gamma \quad \text{s.t.} \quad G - P_{e,s}(\Phi_s^0)^T G \Phi_s^0 - (1 - P_{e,s})(\Phi_s^1)^T G \Phi_s^1 - \gamma I > 0, \quad G > 0, \qquad (8.19)$$

where γ represents a lower bound on the smallest eigenvalue of the matrix on the left-hand side of Eq. (8.15).

Below is an intuitive explanation for the above optimization: the goal is to maximize the minimal eigenvalue on the left-hand side of Eq. (8.15). If the maximum γ is positive, then the corresponding modulation can stabilize the system dynamics. A simple algorithm for the optimization is to carry out an exhaustive search for all modulation schemes. For each s, we apply the theory of linear matrix inequalities (LMI) [133] to find the optimal γ and G. Finally, we choose the optimal s that maximizes the objective function γ. Since the number of possible modulation schemes is usually quite limited, an exhaustive search is computationally efficient.
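
A sketch of this search using cvxpy is given below. Since Eq. (8.19) is homogeneous in G (scaling G scales γ), the sketch adds a normalization constraint G ≤ I, which is an assumption of this illustration rather than part of the formulation above.

```python
import numpy as np
import cvxpy as cp

def spaam_gamma(Phi0, Phi1, Pe):
    """Solve the LMI of Eq. (8.19) for one modulation scheme.
    A positive optimal gamma certifies mean-square stability (Theorem 49)."""
    n = Phi0.shape[0]
    G = cp.Variable((n, n), symmetric=True)
    gamma = cp.Variable()
    lhs = G - Pe * (Phi0.T @ G @ Phi0) - (1 - Pe) * (Phi1.T @ G @ Phi1)
    constraints = [lhs - gamma * np.eye(n) >> 0,
                   G >> 1e-6 * np.eye(n),   # G > 0
                   G << np.eye(n)]          # normalization (assumed)
    cp.Problem(cp.Maximize(gamma), constraints).solve()
    return gamma.value

# Exhaustive search over the S candidate schemes:
# s_opt = max(range(S), key=lambda s: spaam_gamma(Phi0[s], Phi1[s], Pe[s]))
```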

SSAAM

As indicated by the name SSAAM, the modulation scheme is selected adaptively to the current system state of physical dynamics. We have assumed that the modulation scheme can be changed every Tm seconds (or equivalently tm samples). The set of modulation schemes that the sensor can choose from is denoted by Ω. The strategy of adaptive modulation is denoted by π; essentially it is a mapping from the system state x to the modulation scheme within Ω. In terms of hybrid systems, the modulation scheme selection is equivalent to the control of the discrete system state in the corresponding hybrid system.

The basic elements of SSAAM, as an optimization problem, are as follows:

 Cost function: A quadratic cost is considered over Tf scheduling periods (equivalently Tftm samples), which is given by

$$J = \sum_{t=1}^{T_f t_m} x^T(t) Q x(t). \qquad (8.20)$$

Thus the optimal adaptive modulation strategy is to minimize the cost given the current system state; i.e.,

$$\pi^* = \arg\min_{\pi} J(\pi). \qquad (8.21)$$

 Dynamic programming (DP): This is a powerful approach to optimizing the above cost function. As a standard procedure of DP, we define the following cost-to-go function at the tth decision epoch as the minimal expected cost in the future:

$$J_t(x) = E\left[\sum_{r=(t-1)t_m+1}^{T_f t_m} x^T(r) Q x(r) \,\middle|\, x((t-1)t_m) = x\right]. \qquad (8.22)$$

By solving the following Bellman equation, we can obtain the cost-to-go function in a recursive manner:

$$J_t(x) = \min_s \left\{ E\left[\sum_{r=(t-1)t_m+1}^{t t_m} x^T(r) Q x(r) \,\middle|\, x((t-1)t_m) = x\right] + E\big[J_{t+1}(x') \,\big|\, x((t-1)t_m) = x, s\big] \right\}, \qquad (8.23)$$

where $x'$ is the initial system state in the next decision period, which is determined by the current system state x and the action s (the selection of modulation). The cost-to-go function can be calculated backwards from the last time slot $T_f t_m$, where it is trivial.

 Approximate dynamic programming (ADP): The main challenge of DP is that it is difficult to represent the continuous function $J_t(x)$, as is the case in many DP problems. An effective approach to handle this challenge is to apply ADP. In Ref. [146], we approximate the cost-to-go function by quadratic functions given by

$$J_t(x) = x^T \Sigma_t x, \qquad (8.24)$$

where $\Sigma_t$ is a positive-definite matrix. Then the computation of the cost-to-go function is equivalent to estimating $\Sigma_t$. Due to limitations of space, we are unable to discuss how to estimate $\Sigma_t$; the detailed algorithm can be found in Ref. [146]. Numerical results have shown that the proposed ADP approach can effectively handle the challenge and results in good performance for controlling the physical dynamics.
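
As a rough illustration of how the quadratic approximation of Eq. (8.24) is used, the sketch below performs a one-period lookahead: it simulates each candidate modulation scheme over one scheduling period and adds the approximate cost-to-go. The Monte Carlo evaluation is a simplification of the ADP algorithm in Ref. [146], and all inputs (the Φ matrices, the next-period Σ, etc.) are assumed given.

```python
import numpy as np

def ssaam_select(z, schemes, Q_tilde, Sigma_next, tm, n_mc=200, seed=0):
    """Pick the modulation scheme minimizing the simulated cost over one
    scheduling period (tm samples) plus the quadratic cost-to-go of Eq. (8.24).
    schemes[s] = (Phi0, Phi1, Pe), with Phi0/Phi1 as in Eq. (8.11)."""
    rng = np.random.default_rng(seed)
    costs = []
    for Phi0, Phi1, Pe in schemes:
        total = 0.0
        for _ in range(n_mc):              # Monte Carlo over packet losses r(k)
            zz, c = z.copy(), 0.0
            for _ in range(tm):
                Phi = Phi0 if rng.random() < Pe else Phi1
                zz = Phi @ zz
                c += zz @ Q_tilde @ zz     # stage cost, cf. Eq. (8.16)
            total += (c + zz @ Sigma_next @ zz) / n_mc
        costs.append(total)
    return int(np.argmin(costs))
```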

Both SPAAM and SSAAM have been demonstrated to achieve good performances in the context of smart grids [146].

8.3 Source coding in a CPS: Point-to-point case

Source coding encodes the original information (either continuous or discrete valued) into discrete information bits. The main purpose is to reduce redundancy and thus transmit fewer bits (thus using fewer communication resources). In many existing CPSs, the measurements (e.g., the frequency and phase measurements at phasor measurement units (PMUs) in smart grids) are quantized with certain precisions and then sent to the controller. However, such a straightforward scheme may waste communication resources for the following reasons:

 Ignoring temporal redundancy: The measurements at different times are usually correlated. For example, in power grids, the frequency and phase measurements are continuous in time and do not change very rapidly. Hence successive measurements are similar to each other, and it is not necessary to fully quantize each measurement in an independent manner. A possibly more efficient approach is to quantize increments in the frequency and phase.

 Ignoring spatial redundancy: When there are multiple sensors, the measurements at different sensors are usually correlated, unless the sensors happen to measure orthogonal subspaces. The redundancies among different sensors can be fully utilized to decrease the source coding rate, even though there are no communications among the sensors (and thus no explicit coordination among them).

In this section we assume a single sensor and consider only temporal redundancy. Spatial redundancy will be considered in the next section. In traditional communication systems, the temporal redundancy can be compressed by carrying out joint source coding for all the source information symbols. However, such a source coding scheme requires a long latency since the coding needs to be carried out until all the source information symbols have been gathered. A long communication latency is intolerable for real-time feedback control in a CPS. Hence it is of key importance to study real-time source coding, in which the source information symbols are encoded in a sequential manner.

8.3.1 System Model

We assume that the information source generates a discrete-time random process $\{X_t\}_{t=-\infty}^{\infty}$. The encoder generates the bit sequence $\{Z_t\}_{t=-\infty}^{\infty}$ via the mapping

$$Z_k = h_k\big(\{X_t\}_{t=-\infty}^{\infty}\big). \qquad (8.25)$$

The decoder tries to regenerate $\{X_t\}_{t=-\infty}^{\infty}$ from the received bit sequence $\{Z_t\}_{t=-\infty}^{\infty}$, namely

$$\hat{X}_k = g_k\big(\{Z_t\}_{t=-\infty}^{\infty}\big). \qquad (8.26)$$

For notational simplicity, we rewrite the decoder output as

$$\hat{X}_k = f_k\big(\{X_t\}_{t=-\infty}^{\infty}\big). \qquad (8.27)$$

Definition 22

The coder $\{f_k\}_{k=-\infty}^{\infty}$ is called causal if

$$f_k\big(\{X_t\}_{t=-\infty}^{\infty}\big) = f_k\big(\{Y_t\}_{t=-\infty}^{\infty}\big) \qquad (8.28)$$

whenever $\{X_t\}_{t=-\infty}^{k} = \{Y_t\}_{t=-\infty}^{k}$.

Obviously, the output of the decoder is dependent only on the received bits and is thus nonanticipating.

8.3.2 Structure of Causal Coders

Given the requirement of causality, we follow Ref. [150] to introduce the typical categories of causal source encoders, which are illustrated in Fig. 8.3.

Fig. 8.3 Illustration of coder structures.

Sliding block causal coders

In this case the output of a decoder at time t is given by

$$\hat{X}_t = f_t\big(\{X_k\}_{k=-\infty}^{t}\big). \qquad (8.29)$$

If the coder has finite memory, then we have

$$\hat{X}_t = f_t\big(\{X_k\}_{k=t-M}^{t}\big), \qquad (8.30)$$

where M is the memory length.

Block causal coders

In such coders, the information is encoded and decoded in a block manner, namely

$$\hat{X}_{kN+1}^{kN+N} = f\big(X_{kN+1}^{kN+N}\big), \qquad (8.31)$$

where k is the index of the block (or codeword). Obviously, such a coding scheme causes an average delay of N/2 samples.

Block stationary coders

Block stationary coders can be considered as a combination of sliding block causal coders and block causal coders. The output is given by

$$\hat{X}_{kN+1}^{kN+N} = f\big(X_{-\infty}^{kN+N}\big), \qquad (8.32)$$

where the decoder output is produced in blocks and the decoding procedure uses all the previously received bits.

Finite-state causal coders

A challenge of sliding block causal coders and block stationary coders is that infinite memory is needed to store all the previously received bits. To alleviate this problem while not discarding the history, a state S can be defined for the coders, such that the decoder output is a function of the state and the newest information symbol, namely

$$\hat{X}_k = f_x(S_k, X_k), \qquad (8.33)$$

and the coder state S evolves according to the following law:

$$S_k = f_s(S_{k-1}, X_k). \qquad (8.34)$$
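
A classical concrete instance of Eqs. (8.33) and (8.34) is a delta modulator, where the state is the running reconstruction level. The sketch below is a toy example with an assumed fixed step size, not a scheme taken from the text.

```python
class DeltaModulator:
    """Toy finite-state causal coder: S_k tracks the reconstruction level,
    f_s raises or lowers it by one step according to the emitted bit, and
    the reconstruction X^_k = f_x(S_k, X_k) is simply the state itself."""
    def __init__(self, step=0.1, s0=0.0):
        self.step, self.state = step, s0   # assumed step size and initial state

    def encode(self, x):
        bit = 1 if x > self.state else 0                  # emitted symbol D_k
        self.state += self.step if bit else -self.step    # S_k = f_s(S_{k-1}, X_k)
        return bit, self.state                            # (D_k, X^_k)

# mod = DeltaModulator()
# bits = [mod.encode(0.05 * k)[0] for k in range(10)]
```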

Rate distortion

In the context of causal coding, the rate-distortion function rc(D) is defined to be the minimum average transmission rate such that the average distortion is no larger than D. It is shown in Ref. [150] that rc(D) is determined as follows:

$$r_c(D) = \inf_{f \,\text{causal},\; d(f) \le D} \limsup_{K \to \infty} \frac{1}{K} H\big(\hat{X}_1^K\big), \qquad (8.35)$$

where d(f) denotes the average distortion caused by the coder f. An interesting conclusion in Theorem 3 of Ref. [150] is that the causal rate-distortion functions in the above coder schemes are all the same when the information source is memoryless. However, it is still an open problem to compare the rate-distortion functions when the information source has memory.

8.3.3 Finite-State Transceiver

As mentioned above, to avoid the requirement of infinite memory and the performance loss caused by discarding the history, an effective approach is to make both the transmitter and receiver have finitely many states, such that the history can be incorporated into the evolution of the system states. This type of transceiver has been discussed in Ref. [151]. Here we provide a brief introduction to the main framework and conclusions.

Transceiver structure

We assume that the encoder and decoder have states T and R, respectively. Upon receiving a source information symbol $X_t$, the transmitter sends out a coded symbol $D_t$ (taking one of $N_D$ possible values) and updates its state:

$$T_{t+1} = \tau(X_t, T_t), \qquad D_t = \delta(X_t, T_t). \qquad (8.36)$$

Upon receiving the coded symbol Dt, the receiver updates its own state and outputs a symbol Yt:

$$R_{t+1} = \rho(D_t, R_t), \qquad Y_t = \eta(D_t, R_t). \qquad (8.37)$$

It is assumed that the information source $\{X_t\}$ is a stationary ergodic stochastic process, and the encoding/decoding mappings are deterministic. The distortion function is denoted by $\psi(x, y)$. We assume that the following limit exists:

$$\epsilon_k^2 = \lim_{n \to \infty} E\big[\psi(X_{n-k}, Y_n)\big]. \qquad (8.38)$$

Major problem and conclusions

Ref. [151] mainly focused on how to find the encoding and decoding mechanisms τ, δ, ρ, and η in order to minimize the expected distortion $\epsilon_k^2$. Although there is no definitive conclusion for generic cases, many conclusions on the transceiver structure have been obtained.

First it has been shown that, given sufficiently long delay k, the finite-state transceiver can achieve the traditional rate-distortion function with unlimited codeword length.

Theorem 50

Denote by R(d) the traditional rate-distortion function with unlimited codeword length. Assume that $\{X_t\}$ has positive entropy. Then for any ϵ > 0, if $\log N_D > R(d)$, we can find a finite-state transceiver such that

$$E\big[\psi(X_{n-k}, Y_n)\big] \le d + \epsilon, \quad \forall n > k; \qquad (8.39)$$

if $\log N_D < R(d)$, any finite-state transceiver satisfies $\epsilon_k^2 > d$ for all possible k values.

Then it is of major interest to study how to choose the encoding and decoding mechanisms τ, δ, ρ, and η such that the average distortion $\epsilon_k^2$ is minimized, given the delay k. For notational simplicity, we define

$$p_{X_{n-k},T_n,D_n,R_n}(x,t,d,r) = \Pr(X_{n-k} = x, T_n = t, D_n = d, R_n = r). \qquad (8.40)$$

We assume that the initial states $T_1$ and $R_1$ are known, or their distributions are given. If τ, δ, and ρ are properly chosen, the distribution $p_{X_{n-k},T_n,D_n,R_n}$ will converge to a stationary one:

$$\lim_{n \to \infty} p_{X_{n-k},T_n,D_n,R_n}(x,t,d,r) = p^k_{X,T,D,R}(x,t,d,r). \qquad (8.41)$$

Alternatively, the distribution may converge to a periodic structure with period L:

$$\lim_{n \to \infty} p_{X_{m-k},T_m,D_m,R_m}(x,t,d,r) = p^k_{j,X,T,D,R}(x,t,d,r), \qquad (8.42)$$

where m = nL + j. In this case, we can define the average distribution over one period:

$$p^k_{X,T,D,R}(x,t,d,r) = \frac{1}{L} \sum_{j=1}^{L} p^k_{j,X,T,D,R}(x,t,d,r). \qquad (8.43)$$

The distribution $p^k_{X,T,D,R}(x,t,d,r)$ defined in either Eq. (8.41) or Eq. (8.43) is called the steady-state distribution. Hence the average distortion is given by

$$\epsilon_k^2 = \sum_{x,d,r} p^k_{X,D,R}(x,d,r)\, \psi(x, \eta(d,r)), \qquad (8.44)$$

where

$$p^k_{X,D,R}(x,d,r) = \sum_{t} p^k_{X,T,D,R}(x,t,d,r). \qquad (8.45)$$

Using Bayes' rule, we have

$$\epsilon_k^2 = \sum_{d,r} p^k_{D,R}(d,r) \sum_{x} p^k_{X|D,R}(x|d,r)\, \psi(x, \eta(d,r)) = \sum_{d,r} p^k_{D,R}(d,r)\, F_{d,r}(\eta(d,r)), \qquad (8.46)$$

where

$$F_{d,r}(y) = \sum_{x} p^k_{X|D,R}(x|d,r)\, \psi(x, y). \qquad (8.47)$$

Let $y_0 = \arg\min_y F_{d,r}(y)$. Then the following theorem shows that the optimal decoder should adopt $y_0$.

Theorem 51

Suppose that the decoding delay is k. Then the optimal decoding function is given by

$$\eta(d,r) = y_0(d,r), \qquad (8.48)$$

where $y_0(d,r)$ minimizes the function $F_{d,r}(y)$ in Eq. (8.47).
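
In discrete settings, the optimal decoding rule of Theorem 51 can be evaluated directly once the steady-state posterior is known. A minimal sketch, with the posterior, distortion function, and candidate reconstruction set all assumed given:

```python
def optimal_decoder_output(posterior, psi, candidates):
    """eta(d, r) = argmin_y F_{d,r}(y), Eqs. (8.47)-(8.48).
    posterior: dict x -> p(x | d, r); psi(x, y): distortion; candidates: y values."""
    def F(y):
        return sum(p * psi(x, y) for x, p in posterior.items())
    return min(candidates, key=F)

# Example with squared-error distortion over a small alphabet:
# eta = optimal_decoder_output({0: 0.2, 1: 0.8}, lambda x, y: (x - y) ** 2, [0, 0.5, 1])
```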

Given the optimal decoder η, the next task is to design the optimal functions of τ, δ, and ρ. However, the search space of these functions is enormously large. Hence the optimal design is still an open problem, even for special cases such as Markov information sources. However, for some special cases, such as the one discussed subsequently, we can obtain more information.

8.3.4 Channel Feedback

In Ref. [152], the special case of Markov information source and perfect channel feedback has been discussed. The following assumptions are adopted for the system under study:

 The information source {Xt} is Markovian; i.e.,

$$p(X_{t+1} | X_t, X_{t-1}, \ldots) = p(X_{t+1} | X_t). \qquad (8.49)$$

Many practical information sources can be modeled as Markovian.

 All the communication channel outputs are fed back to the transmitter. This is reasonable in a CPS with continuous-valued physical dynamics, since the physical dynamics can encode the communication channel output and convey it to the sensor.

Denote by $x_n$ the source symbol, by $u_n$ and $y_n$ the input and output of the communication channel, and by $z_n$ the output of the decoder. Then the outputs of the encoder and decoder are given by

$$u_n = c_n(x^n, y^{n-1}), \qquad z_n = h_n(z_{n-1}, y_n). \qquad (8.50)$$

The following theorem shows the structure of the optimal codes in the above framework:

Theorem 52

There exists an optimal code c* satisfying

$$c_n^*(x^n, y^{n-1}) = \gamma_n(x_n, z_{n-1}). \qquad (8.51)$$

When the receiver has perfect memory (i.e., it can remember all the previously received bits), the conclusion is sharpened in the following theorem:

Theorem 53

Suppose that $z_n = y^n$. Then there exists an optimal code c* satisfying

$$c_n^*(x^n, y^{n-1}) = \psi_n^*\big(x_n, \xi_{n-1}^*(y^{n-1})\big), \qquad (8.52)$$

for certain functions $\psi_n^*$, where $\xi_n^* = \xi_{c^*}$.

When the communication channel is symmetric, the conclusion can be further simplified. First we define the concept of a symmetric channel. Consider a transition matrix Q between two finite sets U and Y. We say that Q is of type S if, for every $f: U \to U$, there exists a transition matrix $A: Y \to Y$ such that

$$Q(y | f(u)) = \sum_{y'} A(y | y')\, Q(y' | u), \qquad (8.53)$$

for all $u \in U$ and $y \in Y$. We say that a memoryless channel is symmetric if its transition matrices $\{Q_n\}_n$ are all of type S. Then we have the following conclusion:

Theorem 54

If the channel is symmetric (with X = U), and $z_n = y^n$, then

$$c_n^*(x^n, y^{n-1}) = x_n, \quad \forall n \ge 1, \qquad (8.54)$$

is optimal.

8.3.5 Sequential Quantization

A detailed quantization scheme, which meets the requirement of causality, is proposed in Ref. [153]. Here we provide a brief introduction to the corresponding algorithm.

System model

We consider a discrete-time information source {Xn}, which is ergodic and Markovian. The information source {Xn} cannot be observed directly. An associated observation process {Yn} can be measured. The conditional distribution of Y given X is denoted by ψ(Y |X). We assume that Xn and Yn are n-dimensional and d-dimensional real vectors, respectively. The evolution law of Xn and Yn is given by

$$P(X_{n+1} \in A, Y_{n+1} \in B \mid X_n, Y_n) = \int_{A \times B} p(X_n, dx, dy) = \int_A \int_B \phi(y, z \mid X_n)\, dy\, dz, \qquad (8.55)$$

for any $A \subset \mathbb{R}^n$ and $B \subset \mathbb{R}^d$. Since, given $X_n$, the distribution of $X_{n+1}$ and $Y_{n+1}$ is independent of $X_{n-1}$ and $Y_n$, such an information source is called a Markov source. An explicit description of the source and observation processes is given by

$$X_{n+1} = g(X_n, \xi_n), \qquad Y_{n+1} = h(X_n, \xi'_n), \qquad (8.56)$$

where g and h are the mechanisms of the two processes, and $\xi_n$ and $\xi'_n$ are random perturbations that are mutually independent and independent of $X_n$. We further assume that the evolution of the system state $X_n$ is continuous with respect to the initial value. In more detail, if two realizations $X_n$ and $X'_n$, with initial values $X_0 = x$ and $X'_0 = y$, have the same driving random noises $\{\xi_n\}$, we have

$$E\big[\|X_n - X'_n\|\big] \le K \beta^n \|x - y\|, \quad \forall n \ge 0, \qquad (8.57)$$

for some constants K and β.

Vector quantizer

A vector quantizer is assumed. The set of alphabets is denoted by Σ = {α1, …, αN}. The quantized version of the observation process {Yn} is given by {qn}, which takes values in Σ.

We denote by $\mathcal{D}$ the set of admissible finite subsets of $\mathbb{R}^d$, which satisfy the following condition: there exist an M > 0 and a Δ > 0 such that

 $x \in A \in \mathcal{D}$ implies $\|x\| \le M$;

 if $x = (x_1, \ldots, x_d)$ and $y = (y_1, \ldots, y_d)$ are both within $A \in \mathcal{D}$ and $x \ne y$, then we have $|x_i - y_i| > \Delta$ for all i.

Intuitively, A denotes the set of representative vectors for the quantizer. The first requirement means that all the representative vectors have bounded norms. The second requirement means that any pair of representative vectors should be well separated in every dimension.

Then the quantizer is given by

$$q_{n+1} = i_{Q_n}\big(l_{Q_n}(Y_{n+1})\big), \qquad Q_n = \eta_n(q^n), \qquad (8.58)$$

where $Q_n$ is the set of representative vectors, which changes with time, $\eta_n$ maps the past quantization outputs to the set of representative vectors, $l_{Q_n}$ maps the observation to the nearest representative vector, and $i_{Q_n}$ maps the representative vector to its index in Σ.
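
A minimal sketch of one quantization step in Eq. (8.58), with the set of representative vectors assumed given as the rows of a matrix:

```python
import numpy as np

def quantize_step(y, reps):
    """i_{Q_n}(l_{Q_n}(y)): map observation y to the nearest representative
    vector (l_{Q_n}) and return its index in the alphabet (i_{Q_n})."""
    reps = np.asarray(reps)
    idx = int(np.argmin(np.linalg.norm(reps - y, axis=1)))
    return idx, reps[idx]

# idx, rep = quantize_step(np.array([0.3, -0.7]),
#                          np.array([[0.0, 0.0], [0.5, -0.5], [1.0, 1.0]]))
```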

The goal of the quantizer design is to choose $\eta_n$ such that the average entropy rate of $\{q_n\}$ and the average distortion are jointly minimized. The cost function can be formulated as

$$\limsup_{n \to \infty} \frac{1}{n} \sum_{m=0}^{n-1} E\big[H(q_{m+1} | q^m) + \lambda \|Y_m - \bar{q}_m\|^2\big], \qquad (8.59)$$

where λ is the weighting factor and $\bar{q}_m = i_{Q_{m-1}}^{-1}(q_m)$.

Equivalent control problem

We now rewrite the cost function in Eq. (8.59) in an alternative form, which reduces it to a control problem. First we define

$$\pi_n(A) = P(X_n \in A \mid q^n), \qquad (8.60)$$

which is the probability distribution of Xn given the previous quantizer outputs, and can be computed recursively. We further define

$$h_a(\pi, A) = \int \pi(dx)\, f_a(x, A), \qquad (8.61)$$

where

$$f_a(x, A) = \int \psi(y|x)\, I\big[i_A(l_A(y)) = a\big]\, dy. \qquad (8.62)$$

The intuitive meaning of $f_a(x, A)$ is the probability of generating the quantization output a when the true system state is x and the set of representative vectors is A. $h_a(\pi, A)$ is the probability of outputting a when the distribution of x is π and the representative vector set is A.

We also define

$$\hat{f}(x, A) = \int \psi(y|x)\, \|y - l_A(y)\|^2\, dy, \qquad (8.63)$$

which is the expected cost when the true system state is x and the set of representative vectors is A, and

$$k(\pi, A) = -\sum_a h_a(\pi, A) \log h_a(\pi, A), \qquad (8.64)$$

which is the entropy of the quantizer output when the distribution of x is π and the set of representative vectors is A, and

$$r(\pi, A) = \int \pi(dx)\, \hat{f}(x, A), \qquad (8.65)$$

which is the expected cost when the distribution of x is π and the set of representative vectors is A.

Then the cost in Eq. (8.59) can be written as

$$\limsup_{n \to \infty} \frac{1}{n} \sum_{m=0}^{n-1} E\big[k(\pi_m, Q_m) + \lambda r(\pi_m, Q_m)\big]. \qquad (8.66)$$

In Ref. [153], a technical mechanism is proposed to make the corresponding probabilities bounded from below by a small number. Here we skip this step and simply assume that all the probabilities are bounded from below by a small constant.

Now the problem becomes one of finding the control policy mapping the a posteriori probability $\pi_n$ to the quantization policy (i.e., the representative vectors) $Q_n$, denoted by $Q_n = v(\pi_n)$. Thus the source coding problem is transformed into a Markov decision problem.

Dynamic programming

The cost function in Eq. (8.66) is a long-term average of costs, whose direct optimization is involved. Hence we first convert the cost function to a discounted one, given by

$$J_\alpha(\pi_0, \{Q_n\}) = \sum_{m=0}^{\infty} \alpha^m E\big[k(\pi_m, Q_m) + \lambda r(\pi_m, Q_m)\big]. \qquad (8.67)$$

The standard procedure of DP can then be applied. First we define the value function as

$$V_\alpha(\pi_0) = \inf_{v} J_\alpha(\pi_0, \{Q_n\}). \qquad (8.68)$$

Using Bellman’s equation, the value function satisfies

$$V_\alpha(\pi) = \min_A \left[k(\pi, A) + \lambda r(\pi, A) + \alpha \int V_\alpha(\pi')\, P(d\pi' \mid A, \pi)\right], \qquad (8.69)$$

where $P(d\pi' \mid A, \pi)$ is the conditional distribution of the next a posteriori probability given the quantization scheme represented by A and the current a posteriori probability π. The corresponding optimal strategy can be obtained via

$$v(\pi) = \arg\min_A \left[k(\pi, A) + \lambda r(\pi, A) + \alpha \int V_\alpha(\pi')\, P(d\pi' \mid A, \pi)\right]. \qquad (8.70)$$

To remove the discounting in Eq. (8.67), it was shown in Ref. [153] that we can let $\alpha \to 1$ and obtain the optimal policy for Eq. (8.66). The details are omitted here. One challenge for the DP here is that both $\pi_n$ and $Q_n$ are continuous and thus the strategy v is infinite dimensional. It remains an open problem to efficiently solve the Bellman equation in Eq. (8.69). One possible approach is to use ADP and obtain suboptimal solutions.

Once the optimal strategy of sequential quantization is found, the estimates of X and Y are given by the maximum a posteriori (MAP) rule; i.e.,

$$\hat{X}_n = \arg\max_x \pi_n(x) \qquad (8.71)$$

and

$$\hat{Y}_n = \arg\max_y I\big\{i_{Q_{n-1}}(l_{Q_{n-1}}(y)) = q_n\big\} \int\!\!\left[\int \phi(y, z \mid x)\, dz\right] \pi_{n-1}(dx). \qquad (8.72)$$

8.4 Source coding in a CPS: Distributed case

In this section, we follow Ref. [154] to study the distributed source coding in a CPS with multiple sensors, in which we exploit both temporal and spatial redundancies in the measurements at different sensors.

8.4.1 Distributed Coding in Traditional Communications

We first introduce distributed source coding in traditional communications. Consider two information sources whose information symbols are represented by $X_1(t)$ and $X_2(t)$. It is assumed that there are no communications between the two sources. An astonishing conclusion in information theory is that the two information sources can achieve lossless source coding when the sum of the two source coding rates equals the joint entropy of the two sources [155]; i.e.,

$$R_1 + R_2 = H(X_1, X_2), \qquad (8.73)$$

as if the two information sources were a single information source (which could be achieved if there were a communication link of sufficiently large capacity between the two sources), although they are actually distributed. This is the well-known Slepian-Wolf coding scheme. Similar conclusions have been obtained for the case of side information at the decoder (i.e., Wyner-Ziv coding) and distributed lossy source coding (the Berger-Tung bounds).

The information-theoretic argument has been extended to practical distributed source coding schemes by employing parity check-based channel coding [156–160] or nested lattice coding [161, 162]. A survey can be found in Ref. [163]. However, in the context of a CPS, we face the following new challenges, which require a reexamination of distributed source coding:

 Complicated relationships: The relationships among the distributed observations can be more complicated than in many existing studies in which X = Y + N, where X is the binary information source, Y is side information, and N is binary random noise.

 Control-aware distortion: The distortion function can be directly related to the purpose of controlling the CPS, which is more complex than simple distortion criteria such as minimum mean square error (MMSE) used in many lossy source coding scenarios.

 Time correlations: Many existing studies assume independent observations in time. However, the observations are usually correlated in the time domain in a CPS due to the system state evolution.

Due to the above new challenges in CPSs, there is a pressing necessity to study the distributed source coding in CPSs.

8.4.2 System Model

In this book, we consider discrete-time physical dynamics. The N-dimensional system state of physical dynamics, denoted by x, has the evolution law:

$$x(t+1) = f(x(t), u(t), n(t)), \qquad (8.74)$$

where u is the control action and n is random noise.

For simplicity, we consider two sensors, 1 and 2. The corresponding observations are denoted by y1 and y2, whose dimensions are denoted by N1 and N2, respectively. They are functions of the system state and random observation noise, i.e.,

$$y_i(t) = g_i(x(t), w_i(t)), \quad i = 1, 2. \qquad (8.75)$$

It is assumed that the function $g_i$ is invertible and forms a one-to-one mapping. For the generic case, it is much more difficult to describe the relationship between $y_1$ and $y_2$, which is beyond the scope of this book. If we fix the noise term $w_i(t)$ and invert $g_i$; i.e.,

$$x(t) = g_i^{-1}(y_i(t), w_i(t)), \qquad (8.76)$$

then the observations at the two sensors are statistically correlated, which is characterized by the following equation:

$$y_2(t) = g_2\big(g_1^{-1}(y_1(t), w_1(t)),\, w_2(t)\big). \qquad (8.77)$$

We say that y1 and y2 are separable if there exist a function hij and noise wij (as a function of w1 and w2) such that

$$y_j = h_{ij}(y_i) + w_{ij}, \qquad (8.78)$$

which means that there is no nonlinear coupling between yi and the noise.

A special (but very typical) case is linear system dynamics, which is described by linear equations:

$$x(t+1) = Ax(t) + Bu(t) + n(t), \qquad y(t) = Cx(t) + w(t), \qquad (8.79)$$

where $y(t) = (y_1^T(t), y_2^T(t))^T$, $w(t) = (w_1^T(t), w_2^T(t))^T$, and $C = (C_1^T, C_2^T)^T$. For simplicity, we assume that both $C_1$ and $C_2$ have full column rank. Then it is easy to verify that

$$y_1 = C_1 (C_2^T C_2)^{-1} C_2^T (y_2 - w_2) + w_1, \qquad (8.80)$$

which is obviously separable.

The following assumptions are used for the communication system:

 There is no communication between the two sensors.

 There is no communication error during transmission.

The transmission rates of information bits of the two sensors are denoted by R1 and R2, respectively.

8.4.3 Distortion and Quantization

Before we discuss the details of distributed source coding for CPSs, we consider the basics for the single sensor case. Since we have special requirements for the communications in a CPS, we need to reexamine the distortion and quantization of source coding in a CPS.

Distortion

In traditional source coding, a typical distortion criterion is the mean square error (MSE), which can also be applied in a CPS. However, the eventual goal of a CPS is not to convey the measurements; instead, it is to stabilize the system and minimize the cost of physical dynamics operation. Therefore we propose a new criterion for the distortion in a CPS.

First we notice that the system state estimation error may have different effects on system stability. In the stable subspace, the error of system state estimation will be automatically damped and then vanish; in the unstable subspace, the error may be amplified and make the whole system unstable. Therefore we first decompose the system state space by decomposing the matrix A as

$$A = U^T \Lambda U, \qquad (8.81)$$

where U is unitary and $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_N)$, where $\lambda_n$ is the nth eigenvalue. For simplicity, we assume that $\lambda_i \ne \lambda_j$ if $i \ne j$. Also for simplicity, we assume that the matrix A is of full rank. We consider the following dichotomy for the eigenvalues:

 |λn| > 1: in this subspace, the physical dynamics is not stable; the error of system state estimation may be amplified;

 |λn| < 1: in this subspace, the system state of physical dynamics inherently converges to zero (if no random perturbation is considered); any system state estimation error will be naturally suppressed; hence even if there is no control in this subspace, the system state is still stable.

The decomposition is illustrated in Fig. 8.4. We denote by Ω the set of unstable subspaces. For simplicity, we assume that the first N0 subspaces are unstable.

Fig. 8.4 Illustration of unstable and stable subspaces.

Using the above decomposition, the system can be decomposed into N uncorrelated subsystems:

$$z_n(t+1) = \lambda_n z_n(t) + v_n(t) + \tilde{n}_n(t), \qquad (8.82)$$

where $z = (z_1, \ldots, z_N) = Ux$, $\tilde{n} = (\tilde{n}_1, \ldots, \tilde{n}_N) = Un$, and $v = (v_1, \ldots, v_N) = UBu$.

We assume that a “blow-up” control policy is used by the controller, which intuitively means that the controller simply removes the deviation from the desired operating point and does not consider the cost of the control action (unlike the strategy in linear quadratic Gaussian (LQG) control, where the cost of the control action is also considered in the cost function). The corresponding control action in the nth subspace is given by

$$v_n(t) = -\lambda_n \hat{z}_n(t), \qquad (8.83)$$

where $\hat{z}_n$ is the estimate of $z_n$. We assume that the matrix UB has full row rank, which assures the existence of v. When there is no random noise (namely $\tilde{n}(t) = 0$) and the state estimate $\hat{z}_n$ is perfect, the blow-up control action drives the system state back to zero, which is assumed to be desired. Then the variance of the system state in the nth subspace is given by

$$V[z_n] = E[\tilde{n}_n^2] + |\lambda_n|^2 E[\epsilon_n^2], \qquad (8.84)$$

where $\epsilon_n$ is the estimation error of $z_n$.

Based on the above discussion, we define the distortion as

$$D = \sum_{n \in \Omega} V[z_n], \qquad (8.85)$$

namely the sum of the variances of the system state in the unstable subspaces. The fluctuations in the stable subspaces are not taken into consideration. Note that Eq. (8.85) is a heuristic definition of the distortion. It emphasizes the unstable subspaces and totally omits the stable ones, since the latter can be automatically stabilized and do not affect the system stability. Below are some other possibilities as regards distortion:

 The stable subspaces can also be taken into account. This helps to better suppress the total variance. However, compared with the definition adopted in this book, it increases the dimension of the relevant state space and incurs more computational cost.

 Typical criteria in controls such as quadratic cost functions (e.g., the cost function in the framework of an LQG system) can be considered. However, we are still not clear about the connection between these cost functions and the source coding problem.

Lattice-based quantization

An effective approach for quantization is to apply the concept of a lattice. A lattice is defined as the following infinite set [162, 164]:

$$\Sigma = \{Gi,\; i \in \mathbb{Z}^n\}, \qquad (8.86)$$

where $\mathbb{Z} = \{\ldots, -2, -1, 0, 1, 2, \ldots\}$ is the set of integers, n is the dimension, and G is an n × n matrix, which is called the generating matrix.

We then study how to form lattices for the observations $y_1$ and $y_2$. We consider a higher-dimensional lattice generated by $G_1$ and $G_2$ (i.e., the generating matrices for $y_1$ and $y_2$); i.e.,

$$G_{12} = (G_1^T, G_2^T)^T \qquad (8.87)$$

and

$$\Sigma_{12} = \{G_{12} i,\; i \in \mathbb{Z}^{N_1 + N_2}\} = \big\{\big((G_1 i_1)^T, (G_2 i_2)^T\big)^T,\; i_1 \in \mathbb{Z}^{N_1}, i_2 \in \mathbb{Z}^{N_2}\big\}. \qquad (8.88)$$

The next task is to design the generating matrices G1 and G2. We apply the distortion function defined in Eq. (8.85). It is assumed that the system state estimation based on y1 and y2 is obtained from

$$\hat{x} = (C_1^T C_1 + C_2^T C_2)^{-1} (C_1^T, C_2^T)(y_1^T, y_2^T)^T. \qquad (8.89)$$

We map the space to the unstable subspaces of the system state. Then the lattice generated by matrix Gx is given by

$$G_x = \tilde{U} \tilde{C} G_{12}, \qquad (8.90)$$

where $\tilde{U}$ consists of the eigenvectors in U associated with the unstable subspaces, and $\tilde{C} = (C_1^T C_1 + C_2^T C_2)^{-1}(C_1^T, C_2^T)$.

We want to equalize the mean square quantization errors in each unstable subspace. To that end, the lattice is designed to be parallel to the axes, and the distances between two neighboring lattice points are designed to be proportional to $1/\lambda_n$, $n \in \Omega$. This means that, in a particular unstable subspace, the quantization error variance is inversely proportional to the corresponding eigenvalue, since the eigenvalue scales the impact of the quantization error on the system state. Hence the generating matrix $G_x$ is designed to be

$$G_x = \Delta \begin{pmatrix} \frac{1}{\lambda_1} & 0 & \cdots & 0 \\ 0 & \frac{1}{\lambda_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{\lambda_{N_0}} \end{pmatrix}, \qquad (8.91)$$

where Δ is a constant that controls the tradeoff between coding rate and quantization error. This procedure is illustrated in Fig. 8.5 for the case in which $y_2$ is ignored (i.e., $C_2 = 0$).

Fig. 8.5 Illustration of transformation to the desired lattice.
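
As a sketch of how the lattice of Eq. (8.91) is used, the snippet below quantizes a point to the nearest lattice point. For a diagonal generator, rounding in lattice coordinates is exact; the eigenvalues and Δ are assumed values.

```python
import numpy as np

def nearest_lattice_point(y, G):
    """Nearest point of the lattice {G i : i an integer vector} to y.
    Exact for the diagonal generator of Eq. (8.91); for a general G,
    rounding in lattice coordinates is only an approximation."""
    i = np.rint(np.linalg.solve(G, y)).astype(int)  # integer coordinates
    return i, G @ i

# Two unstable modes with eigenvalues 1.5 and 2.0 (assumed), Delta = 0.1:
Gx = 0.1 * np.diag([1 / 1.5, 1 / 2.0])             # Eq. (8.91)
i, q = nearest_lattice_point(np.array([0.23, -0.08]), Gx)
```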

For lattices Σ1 and Σ2, we require

$$E[y_1 | y_2] \in \Sigma_1, \quad \forall y_2 \in \Sigma_2. \qquad (8.92)$$

If y1 and y2 are separable, we have

$$h_{21}(y_2) + E[w_{21}] \in \Sigma_1, \quad \forall y_2 \in \Sigma_2. \qquad (8.93)$$

8.4.4 Coding Procedure

Given the above schemes of distortion and lattice quantization, we have converted the continuous-valued information sources into discrete ones. Then we propose an approach to implement the source coding in a distributed manner without explicit collaboration between the sensors.

Slepian-Wolf coding

We first revisit the well-known scheme of Slepian-Wolf coding proposed in 1973. Consider two information sources whose symbols are denoted by $X_1(t)$ and $X_2(t)$. There is no communication between the two information sources. The surprising conclusion of Slepian-Wolf coding is that asymptotically lossless coding can be achieved when the sum of the transmission rates of the two sources equals the joint entropy of the sources [155]; i.e.,

$$R_1 + R_2 = H(X_1, X_2), \qquad (8.94)$$

as if the two information sources can perfectly collaborate as a single information source, although there is no explicit communication and coordination between the two information sources.

Coloring-based coding

We now apply the principle of Slepian-Wolf coding in the context of distributed source coding for a CPS. The main challenge is that the coding delay of Slepian-Wolf coding cannot be tolerated in a CPS, since the scheme requires infinitely long blocks. Moreover, the random codebook in Slepian-Wolf coding is infeasible in practice, so the encoding/decoding complexities must be addressed in our proposed coding scheme.

In this book, we adopt the approach of coloring-based coding. It is well known that the source coding for computing functions with side information is equivalent to the problem of coloring a graph. We apply the same idea to distributed source coding in a CPS.

For simplicity of analysis, it is assumed that sensor 2 sends its full quantized version of y2. This provides side information for the transmission and decoding of y1 from sensor 1 and thus can decrease the communication requirement of sensor 1. For each y2, the set of related values of y1 is defined as

$$\Gamma(y_2, \gamma_1) = \{y_1 \mid y_1 \in \Sigma_1,\; P(y_1 | y_2) > \gamma_1\}, \qquad (8.95)$$

where $\gamma_1$ is a predetermined threshold and the set $\Gamma(y_2, \gamma_1)$ consists of the lattice points in $\Sigma_1$ that have a significantly large conditional probability given $y_2$.

An $n_1 \times n_2 \times \cdots \times n_{N_1}$ super-rectangle in $\Sigma_1$ is defined as

$$R = G_1 \prod_{p=1}^{N_1} \{y_p, y_p + 1, \ldots, y_p + n_p\}, \qquad (8.96)$$

for a particular $\{y_p\}_{p=1,\ldots,N_1}$.

The compatible super-rectangle in Σ1 given y2 and γ1 is defined as

$$R_1(y_2, \gamma_1) = \text{the minimum } R \text{ containing } \Gamma(y_2, \gamma_1). \qquad (8.97)$$

The reason why we choose a rectangle for the shape of the set is that it results in an efficient coloring scheme and a correspondingly efficient source coding, which will be detailed later. Once $\gamma_1$ is properly selected, we assume that the probability of $y_1$ falling outside $R_1(y_2, \gamma_1)$ is negligible. For the generic case, the compatible super-rectangles of different $y_2$ values may have different sizes. However, for the case of separable $y_1$ and $y_2$, the super-rectangles of different $y_2$ values have the same size (with different centers); i.e.,

$$R_1(y_2, \gamma_1) = h_{21}(y_2) + \bar{R}(\gamma_1), \qquad (8.98)$$

where $\bar{R}(\gamma_1)$ is the basic rectangle defined as

$$\bar{R}(\gamma_1) = \text{the minimum } R \text{ containing } \Gamma\big(h_{21}^{-1}(0), \gamma_1\big). \qquad (8.99)$$

The definition of super-rectangles is illustrated in Fig. 8.6, in which $y_2^0$ in $\Sigma_2$ corresponds to $y_1^0$ in $\Sigma_1$ (i.e., $y_1^0$ is the value of $y_1$ when there is no noise in the observation). A super-rectangle is given by the dotted lines. Hence with a large probability, $y_1$ lies in the area specified by the dotted lines.

Fig. 8.6 Illustration of super-rectangles.

We have assumed that the precise information of the quantized $y_2$ has been transmitted to the fusion center as the side information for sensor 1. Since with a large probability $y_1$ falls in the compatible super-rectangle $R_1(y_2, \gamma_1)$, sensor 1 only needs to specify the point within $R_1(y_2, \gamma_1)$. If the labeling of each point in $\Sigma_1$ is considered as coloring a set of elements, two different points in $R_1(y_2, \gamma_1)$ should have different colors; otherwise, the same color causes decoding confusion.

Hence the following requirements are needed for the coloring problem:

 There are no different points having the same color in the same compatible super-rectangle.

 The mappings to and from the color should be computationally efficient.

Although algorithms in graph theory can be applied to minimize the number of colors used in the coding of $y_1$ while guaranteeing distinct colors, the mapping between lattice points and colors can be irregular and needs look-up tables, which substantially increases the computational complexity. Hence we propose a simple but efficient coloring approach:

 First we determine the maximal sizes in different dimensions of a compatible super-rectangle; i.e.,

$$n_k^* = \max\{n_k \mid n_k \text{ is the length of the } k\text{th dimension of a compatible super-rectangle}\}. \qquad (8.100)$$

 Then for a lattice point in Σ1 which can be written as

$$y_1 = G_1 (n_1, \ldots, n_{N_1})^T, \quad n_k \in \mathbb{Z}, \qquad (8.101)$$

we assign the following color to it:

$$c(y_1) = \big(\operatorname{mod}(n_1, n_1^*), \ldots, \operatorname{mod}(n_{N_1}, n_{N_1}^*)\big), \qquad (8.102)$$

which can also be written as a vector.

It is easy to verify that there are no identical colors within the same compatible super-rectangle. Moreover, the total number of colors is given by $\prod_{k=1}^{N_1} n_k^*$, and the average number of bits used to describe the color of $y_1$ is given by

$$E[\text{number of bits for } y_1] = \sum_{k=1}^{N_1} \log_2 n_k^*. \qquad (8.103)$$
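
A minimal sketch of the modular coloring of Eq. (8.102) and its decoding, with assumed maximal sizes $n_k^*$. Decoding works because two points in the same compatible super-rectangle never share a color.

```python
import numpy as np

def color(i, n_star):
    """Color of the lattice point with integer coordinates i, Eq. (8.102)."""
    return np.mod(i, n_star)

def decode_color(c, n_star, corner):
    """Recover the unique point with color c inside the compatible
    super-rectangle whose lower corner has integer coordinates `corner`
    (each side of the rectangle is at most n_star[k] long)."""
    return corner + np.mod(c - corner, n_star)

n_star = np.array([4, 8])        # assumed maximal sizes n_k^*
i = np.array([13, -5])           # integer coordinates of y_1 in Sigma_1
recovered = decode_color(color(i, n_star), n_star, np.array([12, -9]))
assert np.array_equal(recovered, i)
```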

Numerical results have demonstrated the validity of the proposed distributed coding scheme in a CPS [154]. However, more problems, such as scalability, algorithm efficiency, and performance analysis, are still to be explored.

8.4.5 Adaptation to Physical Dynamics

One of the challenges of communications for a CPS is that the distribution of sensor observations is time variant. Hence we need to incorporate the change of the distribution into the distributed source coding scheme.

Prediction

Due to the change of distribution in the physical dynamics, it is necessary for the controller to predict the most probable range of the observation in the next time slot. Suppose that all the received messages from the two sensors before time slot t are given by

$$c_i^{t-1} = (\ldots, c_i(t-2), c_i(t-1)), \quad i = 1, 2. \qquad (8.104)$$

The distribution of observations in the next time slot (before receiving the observation in the next time slot) is given by the conditional probability $P(y | c_1^{t-1}, c_2^{t-1})$. We assume that the fusion center broadcasts the received messages back to the sensors such that the sensors share with the controller the same information on $P(y | c_1^{t-1}, c_2^{t-1})$. For the case of no feedback, the sensors and controller do not have the same set of information, which may make the design much more complicated.

Take the linear system in Eq. (8.79) with Gaussian noise, for instance. We can use Kalman filtering to obtain an approximation of $P(y | c_1^{t-1}, c_2^{t-1})$, namely

$$\bar{x}(t|t-1) = A\bar{x}(t-1|t-1) + Bu(t) \qquad (8.105)$$

and

$$\Sigma(t|t-1) = A\Sigma(t-1|t-1)A^T + W, \qquad (8.106)$$

where $\bar{x}(t|t-1)$ is the expected system state at time t, $\Sigma(t|t-1)$ is the expected covariance matrix (W being the covariance of the process noise n), $\bar{x}(t-1|t-1)$ is the system state estimate at time t − 1, and $\Sigma(t-1|t-1)$ is the covariance matrix at time t − 1. Then the distribution of y is given by

$$P(y | c_1^{t-1}, c_2^{t-1}) \approx \mathcal{N}\big(C\bar{x}(t|t-1),\; C\Sigma(t|t-1)C^T + E[ww^T]\big). \qquad (8.107)$$

Note that the above prediction contains quantization errors due to the digital communications, which are negligible if the lattice is sufficiently dense in space.
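
A sketch of the prediction step in Eqs. (8.105)–(8.107), with all model matrices assumed given; here W and V denote the covariances of the process noise n and observation noise w.

```python
import numpy as np

def predict_observation(A, B, C, W, V, x_est, Sigma_est, u):
    """One-step prediction of the observation distribution P(y | c_1, c_2)."""
    x_pred = A @ x_est + B @ u               # Eq. (8.105)
    Sigma_pred = A @ Sigma_est @ A.T + W     # Eq. (8.106)
    y_mean = C @ x_pred                      # Eq. (8.107): mean
    y_cov = C @ Sigma_pred @ C.T + V         # Eq. (8.107): covariance
    return y_mean, y_cov
```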

Range adaptation

We make the assumption that both the sensors and the fusion center share the same estimate of the distribution $P(y | c_1^{t-1}, c_2^{t-1})$, due to the feedback from the controller. We define the most probable region of $y_2(t)$ based on the prediction to be

$$\Gamma_2(t, \gamma_2) = \{y_2 \in \Sigma_2 \mid P(y_2 | c_1^{t-1}, c_2^{t-1}) > \gamma_2\}, \qquad (8.108)$$

and the most probable super-rectangle, denoted by $R_2(t, \gamma_2)$, to be the minimum super-rectangle in $\Sigma_2$ that includes $\Gamma_2(t, \gamma_2)$. Then, similarly to the coloring scheme of $y_1$ in distributed source coding, we define $m_k^*$ as the maximum length of the kth dimension over all most probable super-rectangles of $y_2$. Then the coding of $y_2$ is given by

$$c(y_2) = \big(\operatorname{mod}(m_1, m_1^*), \ldots, \operatorname{mod}(m_{N_2}, m_{N_2}^*)\big), \qquad (8.109)$$

where $(m_1, \ldots, m_{N_2})$ is the coordinate vector of the point $\hat{y}_2$ in $\Sigma_2$ that is closest to $y_2$; i.e.,

$$\hat{y}_2 = G_2 (m_1, \ldots, m_{N_2})^T, \quad m_k \in \mathbb{Z}. \qquad (8.110)$$

Based on the above discussion, the procedures for design and online operation of distributed source coding in a CPS are summarized in Procedures 7 and 8.

Procedure 7

Distributed Source Coding in a CPS: Design

1: Decompose the vector space of the system state into unstable and stable subspaces using eigenstructure decomposition, as in Eq. (8.81).

2: Determine the lattice of z using Eq. (8.91).

3: Determine the lattice-generating matrices G1 and G2 using Eqs. (8.90) and (8.92).

4: Determine the thresholds γ1 and γ2.

5: Determine the maximal sizes of compatible super-rectangle and most probable super-rectangle.

6: Color the lattice points of y1 and y2 using Eqs. (8.102) and (8.109).

Procedure 8

Distributed Source Coding in a CPS: Coding and Decoding

1: for Each time slot do

2:  Both the controller and sensor predict the distribution of y using Eq. (8.107).

3:  Both the controller and sensor 2 determine the most probable region of y2 using Eq. (8.108).

4:  Sensor 1 sends the report using the color of the lattice point closest to y1.

5:  Sensor 2 sends the report using the color of the lattice point closest to y2.

6:  The controller uses the most probable region to determine y2 according to the report from sensor 2.

7:  The controller uses the compatible super-rectangle to determine y1 according to the report from sensor 1 and the recovered y2.

8:  The controller feeds back the received messages to both sensors.

9: end for
