With the development of mechatronics, automatic systems consisting of sensors for perception and actuators for action are increasingly widely used [17]. Besides the proper choice of sensors and actuators and the careful fabrication of mechanical structures, control law design also plays a crucial role in the implementation of automatic systems, especially those with complicated dynamics. Most mechanical sensor–actuator systems can be modeled by Euler–Lagrange equations [17]. In this chapter, we are concerned with sensor–actuator systems modeled by Euler–Lagrange equations.
Due to the importance of Euler–Lagrange equations in modeling many real sensor–actuator systems, much attention has been paid to the control of such systems. According to the type of constraints, Euler–Lagrange systems can be categorized as those without nonholonomic constraints (e.g. the fully actuated manipulator [18] and the omni-directional mobile robot [19]) and those with nonholonomic constraints (e.g. the under-actuated multiple-body system). For an Euler–Lagrange system without nonholonomic constraints, the input dimension is often equal to the output dimension and the system can often be transformed into a double integrator by feedback linearization [20]. Other methods, such as the control Lyapunov function method, the passivity-based method, and the optimal control method, have also been successfully applied to Euler–Lagrange systems without nonholonomic constraints. In contrast, when the input dimension is lower than the output dimension, it is often impossible to directly transform an Euler–Lagrange system subject to nonholonomic constraints into a linear system, and thus feedback linearization fails to stabilize the system. To tackle this problem, various methods (the variable structure control-based method [21], backstepping-based control [22], the optimal control-based method, and the discontinuous control method) have been widely investigated and some useful design procedures have been proposed. However, due to the inherent nonlinearity and nonholonomic constraints, most existing methods [21, 22] are strongly model dependent and their performance is very sensitive to model errors. Inspired by the success of human operators in controlling Euler–Lagrange systems, various intelligent control strategies have been proposed to solve the control problem of Euler–Lagrange systems subject to nonholonomic constraints. As demonstrated by extensive simulations, these types of strategies are indeed effective for the control of Euler–Lagrange systems subject to nonholonomic constraints.
However, a rigorous stability proof is difficult for this type of method, and there may exist initializations of the state from which the system cannot be stabilized.
In this chapter, we propose a self-learning control method applicable to Euler–Lagrange systems. In contrast to existing work on intelligent control of Euler–Lagrange systems, the stability of the closed-loop system with the proposed method is proven theoretically. On the other hand, different from model-based design strategies, such as backstepping-based design [22] and variable structure-based design [21], the proposed method does not require information on the model parameters and is therefore model independent. We formulate the problem from an optimal control perspective. In this framework, the goal is to find the input sequence that minimizes a cost function defined on an infinite horizon under the constraint of the system dynamics. The solution can be found by solving a Bellman equation according to the principle of optimality [23]. An adaptive dynamic programming strategy is then utilized to compute the input sequence numerically in real time.
In this chapter, we are concerned with the following sensor–actuator system in the Euler–Lagrange form,

M(q)q̈ + C(q, q̇)q̇ + G(q) = τ,  (2.1)

where q is the vector of generalized coordinates, M(q) is the inertial matrix, C(q, q̇) collects the Coriolis and friction terms, G(q) is the conservative force, and τ is the control force. Note that the inertial matrix M(q) is symmetric and positive definite. There are three terms on the left-hand side of Equation (2.1). The first term involves the inertial force in the generalized coordinates; the second models the Coriolis force and friction, the values of which depend on q̇; and the third is the conservative force, which corresponds to the potential energy. The control force τ applied to the system drives the variation of the coordinate q. It is also noteworthy that we assume the dimension of τ is equal to that of q here. This definition also admits the case of τ with lower dimension than that of q by imposing constraints on τ, e.g. the constraint τ = Bu with B ∈ Rⁿˣᵐ and m < n restricts τ to an m-dimensional space. Defining the state variables x1 = q and x2 = q̇, the Euler–Lagrange equation (2.1) can be put into the following state-space form:

ẋ1 = x2,
ẋ2 = M⁻¹(x1)(τ − C(x1, x2)x2 − G(x1)).  (2.2)
Note that the matrix M(x1) is invertible as it is positive definite. The control objective is to asymptotically stabilize the Euler–Lagrange system (2.2), i.e. design a feedback mapping τ(x1, x2) such that x1 → 0 and x2 → 0 as time elapses.
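As a concrete instance of the state-space form (2.2), the sketch below evaluates the right-hand side for a one-link pendulum, a minimal Euler–Lagrange system. The mass, length, gravity, and damping values are illustrative assumptions for this example, not parameters from the chapter.

```python
import numpy as np

# Illustrative one-link pendulum in the form (2.2); the numerical
# parameters below are assumptions chosen for this sketch.
m, l, g, b = 1.0, 0.5, 9.8, 0.1   # mass, length, gravity, damping

def rhs(x1, x2, tau):
    """dx1/dt = x2, dx2/dt = M^{-1} (tau - C x2 - G)."""
    M = m * l * l                  # scalar inertia (positive, hence invertible)
    C = b                          # damping plays the role of C(x1, x2)
    G = m * g * l * np.sin(x1)     # gravity (conservative) torque
    return x2, (tau - C * x2 - G) / M

dx1, dx2 = rhs(0.1, 0.0, 0.0)      # unforced pendulum slightly off the origin
print(dx1, round(dx2, 4))
```

With zero input and zero velocity, the state derivative is driven purely by the gravity term, as Equation (2.2) predicts.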
As an effective design strategy, variable structure control finds applications in many different types of control systems, including the Euler–Lagrange system. The method stabilizes the dynamics of a nonlinear system by steering the state to an elaborately designed sliding surface, on which the state inherently evolves towards the zero state. Particularly for the system (2.2), we define the sliding variable s as follows:

s = x2 + λx1,  (2.3)

where λ > 0 is a constant. Note that s = 0 together with the dynamics of x1 in Equation (2.2) gives the dynamics of x1 as ẋ1 = −λx1 for s = 0. Clearly, x1 asymptotically converges to zero. Also, we know x2 → 0 when x1 → 0 according to x2 = −λx1. Therefore, we conclude that the states x1, x2 on the sliding surface s = 0, with s defined in Equation (2.3), converge to zero with time. With this property of the sliding surface, a control law driving the states to s = 0 guarantees the ultimate convergence to the zero state. Accordingly, the stabilization of the system can be realized by controlling s to zero. To reach this goal, a positive definite control Lyapunov function V, e.g. V = sᵀs/2, is often used to design the control law. For stability, the time derivative of V is required to be negative definite. In order to guarantee the negative definiteness of the time derivative of V, exact information about the system dynamics (2.2) is often necessary, which results in model-based design strategies.
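The convergence property on the sliding surface can be checked numerically. The sketch below enforces s = x2 + λx1 = 0, so that x1 obeys ẋ1 = −λx1 and decays exponentially; the value of λ and the initial state are illustrative, not taken from the chapter.

```python
# Behaviour on the sliding surface s = x2 + lambda_ * x1 = 0 of
# Equation (2.3): x1 decays exponentially to zero. lambda_ and the
# initial condition are assumed values for this sketch.

lambda_ = 2.0   # sliding-surface parameter (assumed)
dt = 1e-3
x1 = 1.5        # initial generalized coordinate

for _ in range(5000):          # integrate 5 s with forward Euler
    x2 = -lambda_ * x1         # enforced by s = 0
    x1 = x1 + dt * x2

print(abs(x1) < 1e-3)          # x1 has essentially reached the origin
```

Since x2 = −λx1 on the surface, x2 vanishes along with x1, matching the argument in the text.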
We have the following remark about the Euler–Lagrange equation (2.1) for modeling sensor–actuator systems.
Without loss of generality, we stabilize the system (2.1) by steering it to the sliding surface s = 0, with s defined in Equation (2.3). Different from existing model-based design procedures, we design a self-learning controller that does not require accurate knowledge of M(q), C(q, q̇), and G(q) in Equation (2.1). In this section, we formulate such a control problem from the optimal control perspective.
In this chapter, we set the origin as the desired operating point, i.e. we consider the problem of controlling the state of the system (2.1) to the origin. For other desired operating points, the problem can be equivalently transformed to one with the origin as the operating point by shifting the coordinates. At each sampling period, the norm of s, which measures the distance from the desired sliding surface s = 0, can be used to evaluate the one-step performance. Therefore, we define the following utility function associated with the one-step cost at the ith sampling period,
with
where s is defined in Equation (2.3), |sᵢ| denotes the absolute value of the ith component of the vector s, and the weighting parameters are positive for each component. At each step, there is a utility value, and the total cost starting from the kth step along the infinite time horizon can be expressed as follows:
where x(k) is the state vector of system (2.1) sampled at the kth step, γ is the discount factor with 0 < γ < 1, and {u(k), u(k+1), …} is the control sequence starting from the kth step. Note that for the deterministic system (2.1), the states after the kth step are determined by x(k) and the control sequence. Accordingly, the cost J(k) is a function of x(k) and u(j) for j ≥ k. Also note that both the cost function and the utility function are defined based on the discrete samplings of the continuous system (2.1). Now, we can define the problem of controlling the sensor–actuator system (2.1) in this framework as follows:
where the utility is defined by Equations (2.4) and (2.5), T is the sampling period, the set Ω defines the feasible control actions, and J(k) is the cost function in Equation (2.6). It is worth noting that J(k) is a function of x(k) and u(j) for j ≥ k according to Equation (2.6). The optimization in Equation (2.7) is relative to the control sequence with a given initial state. Also note that in the optimization problem in Equation (2.7), the decision variables are defined at every sampling period. The control action keeps its value over the duration between two consecutive sampling steps. This formulation is consistent with real implementations of digital controllers.
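The discounted cost of Equation (2.6) can be illustrated by evaluating a finite truncation of the infinite sum. In the sketch below the one-step utility is taken as the squared norm of the sliding variable s; the discount factor and the sample trajectory are illustrative assumptions.

```python
# Truncated evaluation of the discounted cost J(k) of Equation (2.6),
# with a squared-norm utility on the sliding variable s. gamma and the
# trajectory below are illustrative, not the chapter's settings.

import numpy as np

gamma = 0.95                              # discount factor, 0 < gamma < 1

def utility(s):
    """One-step cost: squared distance from the sliding surface s = 0."""
    return float(np.dot(s, s))

def discounted_cost(s_samples, gamma):
    """Truncated version of J(k) = sum_i gamma^(i-k) U(i)."""
    return sum(gamma**i * utility(s) for i, s in enumerate(s_samples))

# A sliding variable that geometrically approaches the surface.
s_traj = [np.array([0.5**i]) for i in range(50)]
print(round(discounted_cost(s_traj, gamma), 4))
```

Because the utility decays geometrically here, the truncated sum is already very close to its infinite-horizon limit, which is why a finite learning window can approximate the infinite-horizon cost.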
In this section, we present the strategy to solve the constrained optimization problem efficiently without knowing the model information of the sensor–actuator system. We first investigate the optimality condition of Equation (2.7) and present an iterative procedure to approach the analytical solution. Then, we analyze the convergence of the iterative procedure and the stability of the derived control strategy.
Denote by J*(x(k)) the optimal value of the optimization problem in Equation (2.7), i.e.
According to the principle of optimality [23], the solution of Equation (2.7) satisfies the following Bellman equation:
where x(k+1) is the solution of Equation (2.7b) at the (k+1)th step with initial value x(k) and the control action u(k) held over the intervening sampling period. Without introducing confusion, we simply write Equation (2.9) as follows:
Define the Bellman operator relative to a function J as follows:
Then, the optimality condition in Equation (2.10) can be written compactly with the Bellman operator as follows,
Note that the minimization over the control action is implicitly included in the Bellman operator. Equation (2.12) constitutes the optimality condition for the problem in Equation (2.7). It is difficult to solve for the explicit form of J* analytically from Equation (2.9). However, it is possible to obtain the solution by iteration. We use the following iterations to solve for J*,
The control action keeps constant over the duration between the kth and the (k+1)th steps, i.e. u(t) = u(k) over that interval. The control action u(k) can be obtained from Equation (2.9) based on Equation (2.13),
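The value-iteration recursion of Equation (2.13) can be sketched on a toy discretized problem. The states, actions, dynamics, and one-step cost below are illustrative stand-ins for the sampled system of Equation (2.7b); only the recursion itself mirrors the text.

```python
# Value iteration in the spirit of Equation (2.13) on a toy problem:
# states {0,...,4}, actions {-1, 0, 1}, cost equal to distance from 0.
# All problem data here are assumptions for illustration.

import numpy as np

n_states, actions = 5, [-1, 0, 1]
gamma = 0.9

def step(x, u):                 # toy deterministic, sampled dynamics
    return min(max(x + u, 0), n_states - 1)

def utility(x):                 # one-step cost
    return float(x)

J = np.zeros(n_states)          # start from J_0 = 0
for _ in range(200):            # J_{m+1}(x) = min_u [U(x) + gamma * J_m(x')]
    J = np.array([min(utility(x) + gamma * J[step(x, u)] for u in actions)
                  for x in range(n_states)])

# Greedy action extraction, analogous to Equation (2.14).
greedy = [min(actions, key=lambda u: utility(x) + gamma * J[step(x, u)])
          for x in range(n_states)]
print(J.round(3), greedy)
```

The iterates converge to the fixed point of the Bellman operator, and the greedy rule recovers the action that drives every state toward the origin, which is the role Equation (2.14) plays for the sensor–actuator system.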
In the previous sections, the iteration (2.13) was derived to calculate the optimal cost and the optimization (2.14) was obtained to calculate the control law. The iteration to approach the optimal cost and the optimization to derive the control action have to be run at every time step in order to obtain the most up-to-date values. Inspired by learning strategies widely studied in artificial intelligence, a learning-based strategy is used in this section to facilitate the processing. After a sufficiently long time, the system is able to memorize the mapping from the state to the optimal cost and the mapping from the state to the optimal control action. After this learning period, there is no need to repeat the iterations or the optimal search, which makes the strategy more practical.
Note that the optimal cost J* is a function of the initial state. Counting the cost from the current time step, J* can also be regarded as a function of both the current state and the optimal action at the current time step according to Equation (2.10). Therefore, Ĵ, the approximation of J*, can also be regarded as a function of the current state and the current optimal input. As to the optimal control action, it is a function of the current state. Our goal in this section is to obtain the mapping from the current state and the current input to J* and the mapping from the current state to the optimal control action using parameterized models, denoted as the critic model and the action model, respectively. Therefore, we can write the critic model and the action model as Ĵ(x, u, Wc) and û(x, Wa), respectively, where Wc and Wa are the parameters of the critic model and the action model, respectively.
In order to train the critic model with the desired input–output correspondence, we define the following error at time step k to evaluate the learning performance,
Note that the target term is the desired value of the critic output according to Equation (2.13). Using the back-propagation rule, we get the following rule for updating the weights Wc of the critic model,
where the step size for the critic model at time step k is a positive scalar.
As to the action model, the optimal control in Equation (2.14) is the one that minimizes the cost function. Note that the possible minimum cost is zero, which corresponds to the scenario with the state staying inside the desired bounded area. In this regard, we define the action error as follows:
Then, similarly to the update rule of Wc for the critic model, we get the following update rule of Wa for the action model,
where the step size for the action model at time step k is a positive scalar.
Equations (2.16) and (2.18) update the critic model and the action model progressively. After Ĵ and û have learnt the model information over a sufficiently long learning period, their weights can be fixed at the values obtained at the final step and no further learning is required, which is in contrast to Equation (2.14), which requires an optimization problem to be solved at every step even after a long time.
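A gradient update of the form (2.16) can be sketched for a linearly parameterized critic Ĵ(x, u) = w·φ(x, u), the parameterization the chapter later adopts in simulation. The feature map, target, and step size below are illustrative assumptions; only the error-driven update rule mirrors the text.

```python
# Gradient-descent update in the spirit of Equation (2.16) for a
# linearly parameterized critic. phi, the target weights, and the
# step size are assumptions for this sketch.

import numpy as np

rng = np.random.default_rng(0)

def phi(x, u):                      # assumed quadratic feature map
    return np.array([x * x, u * u, x * u])

w = np.zeros(3)                     # critic weights Wc
eta = 0.05                          # step size (assumed constant)

w_true = np.array([1.0, 0.5, -0.2]) # stands in for the Bellman target of (2.13)

for _ in range(4000):
    x, u = rng.uniform(-1, 1, size=2)
    f = phi(x, u)
    target = w_true @ f             # desired critic value
    e = w @ f - target              # prediction error
    w -= eta * e * f                # gradient step on 0.5 * e^2

print(np.allclose(w, w_true, atol=1e-2))
```

With persistently exciting samples, the weights converge to the target parameterization, after which they can be frozen, exactly the "no further learning" regime described above.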
In this section, we consider the simulation implementation of the proposed control strategy. The dynamics given in Equation (2.1) model a wide class of sensor–actuator systems. In particular, to demonstrate the effectiveness of the proposed self-learning variable structure method, we apply it to the stabilization of a typical benchmark system: the cart–pole system.
The cart–pole system, as sketched in Figure 2.1, is a widely used testbed for the effectiveness of control strategies. The system is composed of a pendulum and a cart. The pendulum has its mass above its pivot point, which is mounted on a cart moving horizontally. In this section, we apply the proposed control method to the cart–pole system to test the effectiveness of our method.
The cart–pole model used in this work is the same as that in [24], which can be described as follows:
where
with the following values for the parameters:
This system has four state variables: x is the position of the cart on the track, θ is the angle of the pole with respect to the vertical position, and ẋ and θ̇ are the cart velocity and the pole angular velocity, respectively.
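The cart–pole dynamics can be simulated with the widely used benchmark equations of motion. The sketch below uses the standard benchmark parameter values (cart mass 1.0 kg, pole mass 0.1 kg, pole half-length 0.5 m), which are assumptions here and not necessarily those of [24]; it reproduces the behaviour noted later in the text that the uncontrolled pendulum falls.

```python
# Classical cart-pole equations of motion (standard benchmark form).
# Parameter values are the usual benchmark choices, assumed here.

import math

mc, mp, l, g = 1.0, 0.1, 0.5, 9.8   # cart mass, pole mass, half-length, gravity
dt = 0.02                           # sampling period, as in the chapter

def step(x, x_dot, th, th_dot, F):
    """One forward-Euler step of the cart-pole dynamics."""
    total = mc + mp
    tmp = (F + mp * l * th_dot**2 * math.sin(th)) / total
    th_acc = (g * math.sin(th) - math.cos(th) * tmp) / (
        l * (4.0 / 3.0 - mp * math.cos(th)**2 / total))
    x_acc = tmp - mp * l * th_acc * math.cos(th) / total
    return (x + dt * x_dot, x_dot + dt * x_acc,
            th + dt * th_dot, th_dot + dt * th_acc)

# With zero input force, a small initial tilt grows: the pendulum falls.
state = (0.0, 0.0, 0.05, 0.0)       # 0.05 rad initial tilt
for _ in range(50):                 # 1 s of simulation
    state = step(*state, F=0.0)
print(state[2] > 0.5)               # pole angle has grown well past 0.5 rad
```

This open-loop instability is what the learned bang-bang controller must overcome to hold θ near the inverted position.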
Define the state variables x1 = (x, θ)ᵀ and x2 = (ẋ, θ̇)ᵀ, and let the input force play the role of τ. With these notations, Equation (2.19) can be rewritten as:
By choosing
the system of Equation (2.19) coincides with the model of Equation (2.1). Note that the input in this situation is constrained to the set Ω = {−10 N, 10 N}.
In the simulation experiment, we fix the discount factor, the sliding surface parameter, and the weighting parameters of the utility function. The feasible control action set Ω in Equation (2.7) is defined as Ω = {−10 N, 10 N}. This definition corresponds to the bang-bang control widely used in industry. To make the output of the action model fall within the feasible set, the output of the action network is clamped to 10 N if it is greater than or equal to zero and clamped to −10 N if it is less than zero. The sampling period is set to 0.02 s. Both the critic model and the action model are linearly parameterized. The step size of the critic model and that of the action model are both set to 0.03. Both the update of the critic model weights in Equation (2.16) and the update of the action model weights in Equation (2.18) last for 30 s. For the uncontrolled cart–pole system with zero input force in Equation (2.19), the pendulum falls down. The control objective is to stabilize the pendulum in the inverted direction (θ = 0). The time history of the state variables is plotted in Figure 2.2 for the system with the proposed self-learning variable structure control strategy. From Figure 2.2, it can be observed that θ is stabilized in a small vicinity around zero (with a small error of 0.1 rad), which corresponds to the inverted direction.
In this chapter, self-learning variable structure control is considered to solve the control problem for a class of sensor–actuator systems. The control problem is formulated from the optimal control perspective and solved via iterative methods. In contrast to existing model-based methods, this method does not need prior knowledge of an accurate mathematical model. The critic model and the action model are introduced to make the method more practical. Simulations show that the control law obtained by the proposed method indeed achieves the control objective.