Nonequilibrium equalities for feedback control

The following theorem provides an equality for feedback control:

Theorem 29

(Generalized Detailed Fluctuation Theorem With Feedback Control)

The following equality holds when feedback control exists:

\frac{P(X_N, Y_N)}{\tilde{P}(X_N^\dagger, Y_N)} = e^{\sigma(X_N, \Lambda_N(Y_{N-1})) + I_c(X_N; Y_N)},  (5.307)

where the “backward probability distribution” is defined as

\tilde{P}(X_N^\dagger, Y_N) = \tilde{P}\left(X_N^\dagger \mid \Lambda_N^\dagger(Y_{N-1})\right) P(Y_N).  (5.308)

The proof is similar to that of Theorem 26. Note that the physical meaning of \tilde{P}(X_N^\dagger, Y_N) is the probability obtained from the following two-step experiment:

 Step 1: Carry out the forward experiment and obtain a series of observations YN.

 Step 2: Carry out the backward experiment with protocol \Lambda_N^\dagger(Y_{N-1}).

Based on Theorem 29, we also obtain the following theorems:

Theorem 30

(Generalized Integral Fluctuation Theorem With Feedback Control)

The following equality holds for the case of feedback control:

\left\langle e^{-\sigma - I_c} \right\rangle = 1.  (5.309)
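The mechanics of Theorem 30 can be checked on a toy discrete example in which σ + I_c is defined directly as the log-likelihood ratio of Eq. (5.307); the integral fluctuation theorem then holds by construction, and the same computation previews Eq. (5.311). The distributions below are randomly generated and purely illustrative, not a physical model:

```python
import numpy as np

# Hypothetical forward and backward joint distributions over 4x4 outcomes.
rng = np.random.default_rng(0)
P = rng.random((4, 4)); P /= P.sum()     # forward P(X_N, Y_N)
Pb = rng.random((4, 4)); Pb /= Pb.sum()  # backward P~(X_N^dagger, Y_N)

# Define sigma + I_c as the log-likelihood ratio, per Eq. (5.307).
log_ratio = np.log(P / Pb)

# Integral fluctuation theorem, Eq. (5.309): <e^{-sigma - I_c}> = 1, because
# the average collapses to the total mass of the backward distribution.
ift = float(np.sum(P * np.exp(-log_ratio)))
print(round(ift, 10))  # -> 1.0

# Eq. (5.311): <sigma + I_c> equals the Kullback-Leibler divergence D(P||P~) >= 0.
kl = float(np.sum(P * log_ratio))
print(kl >= 0.0)  # -> True
```

The check works for any pair of normalized distributions, which is exactly why the equality in Theorem 30 is "universal": it does not depend on the details of the dynamics.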

Theorem 31

(Generalized Second Law of Thermodynamics With Feedback)

The following inequality holds for the case of feedback control:

\langle \sigma \rangle \geq -\langle I_c \rangle.  (5.310)
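Inequality (5.310) follows from Theorem 30 by the convexity of the exponential (Jensen's inequality); the one-line chain is:

```latex
1 = \left\langle e^{-\sigma - I_c} \right\rangle
  \;\ge\; e^{-\langle \sigma \rangle - \langle I_c \rangle}
\quad\Longrightarrow\quad
\langle \sigma \rangle \ge -\langle I_c \rangle .
```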

We can also obtain

\langle \sigma \rangle + \langle I_c \rangle = \int P(X_N, Y_N) \ln \frac{P(X_N, Y_N)}{\tilde{P}(X_N^\dagger, Y_N)}\, dX_N\, dY_N = D\left(P(X_N, Y_N) \,\middle\|\, \tilde{P}(X_N^\dagger, Y_N)\right)  (5.311)

by taking the expectation of the logarithm of both sides of Eq. (5.307). Note that D(P(X_N, Y_N) \| \tilde{P}(X_N^\dagger, Y_N)) is the Kullback-Leibler divergence between the distributions P(X_N, Y_N) and \tilde{P}(X_N^\dagger, Y_N). When reversibility under feedback control holds, i.e.,

P(X_N, Y_N) = \tilde{P}(X_N^\dagger, Y_N),  (5.312)

we have

\langle \sigma \rangle + \langle I_c \rangle = 0.  (5.313)

When σ = β(W −ΔF), the following generalized Jarzynski equality holds:

\left\langle e^{-\beta(W - \Delta F) - I_c} \right\rangle = 1,  (5.314)

which results in

\langle W \rangle - \Delta F \geq -k_B T \langle I_c \rangle.  (5.315)

This inequality relates the work exerted by external sources, the reduction of the free energy of the thermodynamic system, and the information obtained from the feedback. Moreover, if ΔF = 0 (namely, the free energy of the thermodynamic system does not change) and we define W_ext = −W as the work extracted from the thermodynamic system, then we have

\langle W_{ext} \rangle \leq k_B T \langle I_c \rangle.  (5.316)

Hence the work that we can extract from the thermodynamic system, using the information provided through feedback, is upper bounded by the mutual information scaled by k_B T.

Take the Szilard engine discussed previously, for example. The information about the location of the particle is binary (0: the left half; 1: the right half). We assume that the measurement is passed through a binary symmetric channel with error probability ϵ; i.e., the controller receives the correct location of the particle with probability 1 − ϵ. Using the argument in Ref. [71], we can obtain

W_{ext} = k_B T \left( \ln 2 + \epsilon \ln \epsilon + (1 - \epsilon) \ln(1 - \epsilon) \right).  (5.317)
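The right-hand side of Eq. (5.317) is exactly k_B T times the mutual information I_c of a binary symmetric channel with a uniform input, so in this example the bound (5.316) is attained with equality. A minimal numeric check (in units of k_B T = 1; the helper function below is our own, not from Ref. [71]):

```python
import numpy as np

def mutual_info_nats(p_joint):
    """I(X;Y) in nats, computed from a joint probability table."""
    px = p_joint.sum(axis=1, keepdims=True)
    py = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log(p_joint / (px * py))[mask]))

kT = 1.0
eps = 0.1
# Szilard engine: particle location X uniform on {left, right}; the measurement Y
# is X passed through a binary symmetric channel with error probability eps.
joint = 0.5 * np.array([[1 - eps, eps],
                        [eps, 1 - eps]])
I_c = mutual_info_nats(joint)

# Extracted work per cycle, Eq. (5.317).
w_ext = kT * (np.log(2) + eps * np.log(eps) + (1 - eps) * np.log(1 - eps))

print(np.isclose(w_ext, kT * I_c))  # -> True: the bound W_ext <= kT * I_c is tight
```

At ϵ = 0 the extracted work is k_B T ln 2 (the classic Szilard result); at ϵ = 0.5 the measurement carries no information and no work can be extracted.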

Nonequilibrium equalities with efficacy

An alternative type of fluctuation theorem can also be derived. Consider a Markovian measurement; i.e.,

P(y_n \mid X_n) = P(y_n \mid x_n).  (5.318)

A forward experiment, with measurements at times t_1, …, t_M, is carried out with feedback control; a backward experiment, with measurements at times t_{N−M}, …, t_{N−1}, is also carried out. The measurements in the backward experiment are denoted by Y_N′ = (y_{N−M}′, …, y_{N−1}′). Given X_N, we define

P_c(Y_N \mid X_N) = \prod_{k=1}^{M} P(y_{N-k} \mid x_{N-k}),  (5.319)

which denotes the probability that Y_N is obtained given X_N. Given the protocol Λ_N(Y_{N−1}), the probability of observing Y_N is given by

P(Y_N \mid \Lambda_N(Y_{N-1})) = \int P_c(Y_N \mid X_N)\, P(X_N \mid \Lambda_N(Y_{N-1}))\, dX_N.  (5.320)

The time-reversed sequence of Y_N, denoted by Y_N^\dagger, is given by

Y_N^\dagger = \left( y_{N-M}^*, \ldots, y_{N-1}^* \right).  (5.321)

Then the probability that the measurement outcome of the backward experiment equals Y_N^\dagger is given by

\tilde{P}\left(Y_N^\dagger \mid \Lambda_N^\dagger(Y_{N-1})\right) = \int P_c\left(Y_N^\dagger \mid X_N^\dagger\right) \tilde{P}\left(X_N^\dagger \mid \Lambda_N^\dagger(Y_{N-1})\right) dX_N^\dagger.  (5.322)

We further assume time-reversal symmetry; i.e.,

P(y_n^* \mid x_n^*) = P(y_n \mid x_n).  (5.323)

We then obtain the following theorem, whose detailed proof is similar to that of Theorem 26 and can be found in Ref. [71].

Theorem 32

(Renormalized Detailed Fluctuation Theorem)

The following equality holds:

\frac{\tilde{P}\left(Y_N^\dagger \mid \Lambda_N^\dagger(Y_{N-1})\right)}{P(Y_N)} = e^{-\sigma'(Y_N)},  (5.324)

where the renormalized entropy production σ′ is defined as

\sigma'(Y_N) = -\ln \int e^{-\sigma(X_N, \Lambda_N(Y_{N-1}))}\, P(X_N \mid Y_N)\, dX_N.  (5.325)

Taking the expectation of both sides of Eq. (5.324) with respect to P(Y_N), and noting from Eq. (5.325) that ⟨e^{−σ′}⟩ = ⟨e^{−σ}⟩, we obtain the following corollary:

Corollary 9

The following equality holds:

\left\langle e^{-\sigma} \right\rangle = \gamma,  (5.326)

where γ is called the efficacy parameter of feedback control given by

\gamma = \int \tilde{P}\left(Y_N^\dagger \mid \Lambda_N^\dagger(Y_{N-1})\right) dY_N.  (5.327)
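Corollary 9 can also be checked mechanically on a toy discrete example: fix an arbitrary forward distribution P(Y_N) and arbitrary backward observation probabilities (one value per Y_N; they need not sum to 1, since each Y_N selects a different backward protocol), define σ′ through Eq. (5.324), and compare the average of e^{−σ′} with γ from Eq. (5.327). All numbers here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Forward distribution P(Y_N) over five observation sequences.
P_fwd = rng.random(5)
P_fwd /= P_fwd.sum()

# Backward probabilities P~(Y_N^dagger | Lambda^dagger(Y_{N-1})), one per Y_N.
# They need not sum to 1 because the backward protocol depends on Y_N.
P_bwd = rng.random(5)

sigma_p = np.log(P_fwd / P_bwd)                # sigma'(Y_N), from Eq. (5.324)
lhs = float(np.sum(P_fwd * np.exp(-sigma_p)))  # average of e^{-sigma'}
gamma = float(P_bwd.sum())                     # efficacy, Eq. (5.327)

print(np.isclose(lhs, gamma))  # -> True
```

The computation makes the role of γ transparent: it is simply the total probability mass assigned to time-reversed observation sequences by the (Y_N-dependent) backward experiments, which can exceed 1 under effective feedback.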

For the case σ = β(W −ΔF), we have

\left\langle e^{-\beta(W - \Delta F)} \right\rangle = \gamma,  (5.328)

which is a generalization of the Jarzynski equality.

The values of the efficacy parameter γ are as follows:

 When the feedback control is perfect (namely, the system is expected to return to the initial state with probability 1 in the time-reversed process), γ equals the number of possible observations of YN.

 In the Szilard engine, γ = 2.

 When there is no feedback control, γ = 1.

5.9 Conclusions

We have discussed various approaches to evaluating the communication requirements for controlling CPSs in different situations. As we have seen, several of these approaches are closely related to the entropy, either deterministic or stochastic, of the physical dynamics. Hence a common and generic approach is to analyze the increase of the entropy of the physical dynamics and then to study how control can alleviate this increase. We are still awaiting an elegant framework that unifies these approaches.

References

[4] Kundur P. Power System Stability and Control. McGraw-Hill; 1994.

[11] Matveev A.S., Savkin A.V. Estimation and Control Over Communication Networks. Birkhäuser; 2008.

[12] Downarowicz T. Entropy in Dynamical Systems. Cambridge, UK: Cambridge University Press; 2011.

[13] Sahai A., Mitter S. The necessity and sufficiency of anytime capacity for control over a noisy communication channel. IEEE Trans. Inform. Theory. 2006;52(8):3369–3395.

[14] Tatikonda S., Sahai A., Mitter S. Stochastic linear control over a communication channel. IEEE Trans. Automat. Control. 2004;49(9):1549–1561.

[15] Li H. Entropy reduction via communications in cyber physical systems: how to feed Maxwell’s Demon? In: Proceedings of the IEEE International Symposium of Information Theory. 2015.

[33] Cover T.M., Thomas J.A. Elements of Information Theory. second ed. Wiley; 2006.

[56] Thorp J.S., Seyler C.E., Phadke A.G. Electromechanical wave propagation in large electric power systems. IEEE Trans. Circuits Syst. I, Fundam. Theory Appl. 1998;45:614–622.

[70] Wong W.S. Control communication complexity of distributed control systems. SIAM J. Control Optim. 2009;48(3):1722–1742.

[71] Sagawa T., Ueda M. Nonequilibrium thermodynamics of feedback control. Phys. Rev. E. 2012;85(2):021104.

[72] Anderson B.D.O., Moore J.B. Optimal Control: Linear Quadratic Methods. Dover Books; 2007.

[73] Baldwin S.L., Slaminka E.E. Calculating topological entropy. J. Stat. Phys. 1997;89(5/6):1017–1033.

[74] Froyland G., Junge O., Ochs G. Rigorous computation of topological entropy with respect to a finite partition. Physica D. 2001;154:68–84.

[75] Gallager R.G. Information Theory and Reliable Communication. Wiley; 1968.

[76] Gastpar M., Rimoldi B., Vetterli M. To code, or not to code: lossy source-channel communication revisited. IEEE Trans. Inform. Theory. 2003;49(5):1147–1158.

[77] Ashby W.R. An Introduction to Cybernetics. Filiquarian Legacy Publishing; 2012.

[78] Wiener N. Cybernetics: The Control and Communication in the Animal and the Machine. second ed. MIT Press; 1965.

[79] Conant R.C. Information transfer required in regulatory processes. IEEE Trans. Syst. Sci. Cybernet. 1969;5(4):334–338.

[80] Weidemann H.L. Entropy analysis of feedback control systems. In: Advances in Control Systems: Theory and Applications. Academic Press; 1969:225–255.

[81] Kailath T., Sayed A.H., Hassibi B. Linear Estimation. Prentice Hall; 2000.

[82] Wang H. Minimum entropy control of non-Gaussian dynamic stochastic systems. IEEE Trans. Automat. Control. 2002;47(2):398–403.

[83] Brown B.M., Harris C.J. Neurofuzzy Adaptive Modeling and Control. Prentice Hall; 1994.

[84] Kickert W.J.M., Bertrand J.M., Praagaman J. Some comments on cybernetics and control. IEEE Trans. Syst. Man Cybernet. 1978;8(11):805–809.

[85] Li H., Song J.B. Does feedback control reduce entropy/communications in smart grids? In: Proceedings of the IEEE International Conference on Smart Grid Communications (SmartGridComm). 2015.

[86] Fermi E. Thermodynamics. Dover; 1956.

[87] Truesdell C.A., Bharatha S. The Concepts and Logic of Classical Thermodynamics as a Theory of Heat Engines: Rigorously Constructed Upon the Foundation Laid by S. Carnot and F. Reech. New York: Springer; 1977.

[88] Feder M., Merhav N. Relations between entropy and error probability. IEEE Trans. Inform. Theory. 1994;40(1):259–266.

[89] Martins N.C., Dahleh M.A. Feedback control in the presence of noisy channels: ‘bode-like’ fundamental limitations of performance. IEEE Trans. Automat. Control. 2008;53(7):1604–1615.

[90] Koch I. Analysis of Multivariate and High-Dimensional Data. Cambridge, UK: Cambridge University Press; 2013.

[91] Lee J.A., Verleysen M. Nonlinear Dimensionality Reduction. New York: Springer; 2007.

[92] Evans L.C. Partial Differential Equations. second ed. AMS; 2010.

[93] He D., Shi D., Sharma R. Consensus based distributed cooperative control for microgrid voltage regulation and reactive power sharing. In: Proceedings of the IEEE Innovative Smart Grid Technologies (ISGT Europe). 2014.

[94] Yao A.C. Some complexity questions related to distributed computing. In: Proceedings of the 11th ACM Symposium on Theory of Computing (STOC). 1979.

[95] Kushilevitz E., Nisan N. Communication Complexity. Cambridge, UK: Cambridge University Press; 2006.

[96] Leff H.S., Rex A.F. Maxwell’s Demon 2: Entropy, Classical and Quantum Information, Computing. Institute of Physics Publishing; 2003.

[97] Brillouin L. Negentropy principle of information. J. Appl. Phys. 1953;24(9):1152–1163.

[98] Penrose O. Foundations of Statistical Mechanics: A Deductive Treatment. Dover Publications; 2005.

[99] Toyabe S., Sagawa T., Ueda M., Muneyuki E., Sano M. Experimental demonstration of information-to-energy conversion and validation of the generalized Jarzynski equality. Nat. Phys. 2010;6:988–992.



1 For a vector x, \|x\| = \max_j |x_j|.

2 When the system state entropy is very large, it is possible that random perturbations can decrease the entropy, since the entropy does not necessarily increase in a generic stochastic system [33].

3 Since R_{ji} is in nats per channel use (sample), the transmission rate in bits/second tends to infinity when ΔT → 0. This infinite rate is due to the independent source coding for different times and different nodes. If the temporal and spatial redundancies in the system states are taken into account in the source coding, the rate can be bounded, which will be a topic of future study.

4 Note that this is similar to the symmetric product of tensors, which generates a symmetric tensor from two tensors.

5 Note that the entropy arises from the uncertainty from the view of an outsider.

6 It has been realized by Bennett that such a coupling mechanism does not necessarily require energy [96]; hence the system is still isolated.
