In recent years, robotic manipulators have been widely employed in industry for labor‐intensive and high‐accuracy operations. The current shortage of skilled labor and the aging population create urgent demands for robotic manipulation in tasks such as payload transport, welding, painting, and assembly. Among the broad categories of manipulators, redundant manipulators, which have more control degrees of freedom (DOFs) than a task requires, are advantageous for dexterous manipulation due to their redundancy in DOFs. In comparison with traditional manipulators without extra DOFs, redundant manipulators usually admit more than one control solution for a given task, allowing designers to exploit this feature to fulfil additional requirements, such as obstacle avoidance and control optimization. For this reason, they have received intensive research attention in recent years.
The extra DOFs of redundant manipulators increase the capability and flexibility of robotic manipulation, but also pose challenges for control designs aimed at efficient real‐time redundancy resolution. Analytically, the joint velocity of a nonredundant manipulator can be expressed in terms of the inverse of its Jacobian matrix. This result extends to redundant manipulators by replacing the Jacobian inverse with its pseudo‐inverse, as the Jacobian matrix in this situation is nonsquare [25]. From an optimization perspective, this solution is equivalent to the one that minimizes joint speeds. This type of method provides the foundation for manipulator kinematic redundancy resolution but suffers from several drawbacks: intensive computation due to the repeated evaluation of the pseudo‐inverse, computed joint velocities that may exceed physical bounds because velocity range constraints are not enforced, and local instability in some situations due to the singularity of the Jacobian matrix [15]. Later work employs various strategies to address these drawbacks, including fast matrix operation algorithms for numerical approximation [28] and iterative methods that approach the solution progressively. However, time efficiency remains a challenging problem for redundancy resolution, since the computation must be repeated at every time step.
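The pseudo‐inverse solution discussed above can be illustrated numerically. The snippet below uses a hypothetical planar three‐link arm (link lengths and joint angles are arbitrary illustrative values), computes its Jacobian, and recovers the minimum‐norm joint velocity for a commanded end‐effector velocity; it is a generic sketch, not the chapter's controller.

```python
import numpy as np

def jacobian_planar(theta, lengths):
    """Jacobian of a planar n-link arm's end-effector position (2 x n)."""
    n = len(theta)
    J = np.zeros((2, n))
    cum = np.cumsum(theta)  # absolute orientation of each link
    for i in range(n):
        # column i collects the contributions of links i..n-1
        J[0, i] = -np.sum(lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(cum[i:]))
    return J

theta = np.array([0.3, -0.5, 0.2])      # illustrative joint angles
lengths = np.array([1.0, 0.8, 0.5])     # illustrative link lengths
J = jacobian_planar(theta, lengths)
xdot_desired = np.array([0.1, -0.05])   # commanded workspace velocity

# minimum-norm joint velocity: theta_dot = J^+ xdot
theta_dot = np.linalg.pinv(J) @ xdot_desired
assert np.allclose(J @ theta_dot, xdot_desired)

# any other solution (min-norm plus a null-space motion) is no shorter
ns = np.eye(3) - np.linalg.pinv(J) @ J   # null-space projector
alt = theta_dot + ns @ np.array([0.2, -0.1, 0.3])
assert np.linalg.norm(alt) >= np.linalg.norm(theta_dot)
```

Because the pseudo‐inverse must be recomputed at every configuration, the snippet also illustrates the per‐time‐step cost that motivates the neural approaches discussed next.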
The parallel processing capability of neural networks has inspired researchers to explore their use, including feed‐forward neural networks [29, 31] and recurrent neural networks [31], for controlling redundant manipulators. Among them, recurrent neural network based solutions achieve great success by exploiting historical information in the control update to save computation. In [32], the redundancy resolution problem is modeled as convex optimization under an equality constraint and solved using a Lagrange neural network. However, this model can only handle equality constraints; inequality constraints, e.g. the boundedness of joint velocities, cannot be directly considered by Lagrange neural networks. Dual neural networks have been presented to address this problem. In [38], this approach is extended to optimize the joint torque when resolving the redundancy. In [40], the authors propose a single‐layer dual neural network for the control of kinematically redundant manipulators, reducing the network's spatial complexity and increasing its computational efficiency. There also exists an equivalence between redundancy resolution at the velocity level and at the acceleration level: by formulating the problem as constrained quadratic programming at the acceleration level, various dynamic neural solutions can be obtained that reach control performances similar to velocity‐level results [16]. Not confined to the redundancy resolution of a single serial manipulator, the recurrent neural network approach has been applied to the kinematic redundancy resolution of parallel Stewart platforms [58] and to the coordination of multiple manipulators [3, 42]. In principle, any recurrent neural network for solving general constrained optimization problems can be applied to the redundancy resolution of manipulators [16].
Although great success has been achieved and various models have been presented for the redundancy resolution of manipulators using recurrent neural networks, most work focuses on the nominal design without explicitly considering noise. In practice, when neural controllers are implemented for manipulator control, noise is inevitable in the forms of truncation error, rounding error, model uncertainty, and external disturbance. Robustness against additive noise is therefore an important factor in designing reliable neural controllers, yet the design of dual neural networks inherently tolerant to noise remains unexplored. This chapter makes progress by proposing a novel recurrent neural network design that is immune to time‐varying noises in polynomial form. Based on the fact that a polynomial noise vanishes after sufficiently many differentiations, a neural dynamics system is constructed to learn the noise and counteract its impact. Accordingly, we are able to build an improved dual neural network with guaranteed convergence in noisy environments.
In this section, we present the kinematic model of manipulators and formulate the redundancy resolution problem as a control one.
For a k‐DOF redundant manipulator with the joint angle , its Cartesian coordinate in the workspace can be described as a nonlinear mapping:
For a redundant manipulator, the joint space dimension is greater than the workspace dimension. Computing the time derivative on both sides of Equation (5.1) yields:
where is called the Jacobian matrix of , is the end‐effector velocity in the workspace, and is the manipulator joint angular velocity in the joint space. Equation (5.2) describes the kinematics of the manipulator at the velocity level. Compared with the position‐level description (5.1), the velocity‐level description is easier to deal with, since the relation (5.2) is affine in the joint velocity. In applications, the joint angle is usually driven by acceleration, or equivalently by the force generated by motors, described as
where is the acceleration serving as the control input and is an additive noise in the control channel, e.g. the disturbance from electrical and mechanical vibrations. Due to physical constraints, the angular velocity of a manipulator cannot exceed certain limits. Let denote the feasible angular velocity set; the physical constraint can then be expressed as
With (5.4) and (5.2), we are ready to define the kinematic redundancy resolution problem of a manipulator as in the following.
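For concreteness, the model developed above can be collected in one conventional robotics notation; the symbols below (q for joint angles, x for the end‐effector position, u for the commanded acceleration, δ for the additive noise, Ω for the feasible velocity set) are our own illustrative choices and need not match the chapter's:

```latex
% hedged sketch of (5.1)-(5.4) in conventional notation
x = f(q)                 % forward kinematics, cf. (5.1)
\dot{x} = J(q)\,\dot{q}, \qquad J(q) = \frac{\partial f(q)}{\partial q}
                         % velocity-level kinematics, cf. (5.2)
\ddot{q} = u + \delta(t) % acceleration-driven joints with additive
                         % noise in the control channel, cf. (5.3)
\dot{q} \in \Omega       % bounded joint velocities, cf. (5.4)
```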
In this section, we present the design of a dual neural network for the control of a nominal manipulator in the absence of noise. This paves the way for our further development with noise taken into account.
Due to the redundancy of the manipulator, more than one control design can solve the redundancy resolution problem. To exploit the redundancy in an optimal fashion, we first formulate the problem as an optimization one and then convert it into a control law that solves the problem recurrently.
To exploit the extra design freedom, following conventions for dual neural network design, we seek a control action that minimizes the following quadratic cost function:
where is a symmetric positive semi‐definite matrix and is a vector. This cost function can characterize the kinematic energy expense of the angular velocity. For example, choosing the weighting matrix as the identity and the bias vector as zero, the cost function reduces to the kinematic energy; generally, it describes a biased and weighted kinematic energy. The optimal solution must also satisfy the kinematic model (5.2) of the manipulator, so that the workspace velocity ultimately equals the desired velocity, as well as the feasibility constraint (5.4). Overall, we formulate the following constrained optimization:
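The resulting constrained program can be sketched in conventional notation (W a symmetric positive semi‐definite weight, c a bias vector, q the joint angles; all symbols here are illustrative stand‐ins, not necessarily the chapter's):

```latex
\begin{aligned}
\min_{\dot{q}}\quad & \tfrac{1}{2}\,\dot{q}^{\mathsf T} W \dot{q}
                      + c^{\mathsf T}\dot{q} \\
\text{s.t.}\quad    & J(q)\,\dot{q} = \dot{x}_d, \\
                    & \dot{q} \in \Omega .
\end{aligned}
```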
Conventionally, a dual neural network is designed based on a Lagrange function as . In this chapter, we augment such a Lagrange function with the equality constraint, which benefits the convergence of the resulting neural dynamics, and define it as follows:
where is a symmetric positive semi‐definite weighting matrix. The optimal solution to (5.6) is identical to the saddle point of the following problem:
According to the Karush–Kuhn–Tucker condition [59], the solution of (5.8) satisfies
where denotes the normal cone of set at . According to the definition of normal cone, we have
with which the first expression in (5.9) can be written as
With (5.11), we further conclude
This implies
where is the projection operator defined as
or equivalently, is the one satisfying
The derivation of (5.13) from (5.12) follows directly by a change of variables in (5.12). Overall, (5.9) can be finally expressed as
This is a set of nonlinear equations that usually cannot be solved analytically. To solve it numerically, we propose the following recurrent neural network:
This amounts to the following control law
in the absence of noises, i.e. .
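The projection operator appearing in the control law can be computed in closed form when the feasible velocity set is a box, which is the common case of per‐joint speed limits. The following sketch assumes such a box set; the bounds and test points are illustrative values.

```python
import numpy as np

def project_box(v, lower, upper):
    """Euclidean projection of v onto the box {y : lower <= y <= upper}.

    For a box set the projection decouples per coordinate and reduces
    to clipping; more general convex sets need a dedicated solver.
    """
    return np.minimum(np.maximum(v, lower), upper)

lower = np.array([-1.0, -1.0, -1.0])    # illustrative joint-speed limits
upper = np.array([1.0, 1.0, 1.0])
v = np.array([0.5, 2.0, -3.0])
p = project_box(v, lower, upper)
assert np.allclose(p, [0.5, 1.0, -1.0])

# defining property: p is no farther from any feasible point than v is
y = np.array([0.0, 0.3, -0.2])          # an arbitrary point in the box
assert np.linalg.norm(p - y) <= np.linalg.norm(v - y)
```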
About the presented neural network model, we have the following theoretical conclusion.
In the last section, we designed a neural network for kinematic redundancy resolution in the nominal situation without noise, i.e. in (5.3). In real applications, noise is inevitable, and it is important to design a noise‐tolerant controller for effective control. In this section, we present a neural dynamics design that takes noise into account.
In this section, we consider the situation with polynomial noises. By Taylor expansion, any time‐varying noise can be represented by a polynomial to a desired accuracy. In this sense, polynomial noise can be regarded as a general representation of noise.
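The annihilation property underlying the design, namely that a degree‐n polynomial vanishes after n+1 differentiations, can be checked numerically; the coefficients below are arbitrary illustrative values.

```python
import numpy as np

# A degree-3 polynomial noise, coefficients listed from the constant
# term upward: 0.7 + 3.0 t - 1.2 t^2 + 0.4 t^3.
delta = np.polynomial.Polynomial([0.7, 3.0, -1.2, 0.4])

# Differentiating 3 times leaves a constant; once more gives zero.
d3 = delta.deriv(3)
d4 = delta.deriv(4)
assert np.allclose(d3.coef, [0.4 * 6])   # 3! times the leading coefficient
assert np.allclose(d4.coef, [0.0])       # the noise is annihilated
```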
We first present the neural dynamics with the capability to deal with polynomial noises. For an ‐order polynomial noise with for and unknown, we present the neural controller as follows:
About this neural controller, we have the following theoretical results.
About the choice of parameters for in the neural controller, we have the following remark.
About the generality of the presented neural controller in coping with any type of time‐varying noises, we have the following remark.
The presented method in this chapter is applicable to various real problems. On this point, we have the following remark.
In real applications, it is necessary to determine a proper order for the polynomial noise before implementing the neural controller. For a particular problem, a trial‐and‐error procedure can be used. Specifically, we first assume the polynomial order is 0, i.e. a constant noise, and design the neural controller accordingly. With the devised neural controller, we measure the control performance in terms of the tracking error. If it is not satisfactory, we increase the polynomial order to 1 and check the tracking error again; otherwise, we stop the iteration and use the current neural controller. If order 1 does not return satisfactory performance either, we proceed to try order 2. Recursively, we can find a proper polynomial order, and the corresponding neural controller, for which the control error is satisfactory. Note that we can always improve the tracking accuracy by increasing the order, as implied by Taylor's theorem on the approximation power of polynomials.
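The trial‐and‐error procedure above can be sketched as a simple loop. The evaluation function and its interface below are hypothetical stand‐ins for running the closed‐loop system and measuring its tracking error.

```python
def select_noise_order(evaluate_tracking_error, tol, max_order=6):
    """Trial-and-error selection of the polynomial noise order.

    `evaluate_tracking_error(n)` is assumed to run the closed-loop
    system with a controller designed for order-n noise and return
    the resulting tracking error; name and interface are hypothetical.
    """
    for n in range(max_order + 1):
        if evaluate_tracking_error(n) <= tol:
            return n             # first order meeting the accuracy goal
    return max_order             # fall back to the largest order tried

# toy stand-in: the error drops once the assumed order reaches 2
errors = {0: 0.9, 1: 0.2, 2: 0.001}
order = select_noise_order(lambda n: errors.get(n, 0.001), tol=0.01)
assert order == 2
```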
In addition, we present the architecture of the presented neural network (Figure 5.1) to show its structure from a connectionist perspective. As described by Equation (5.28), the neural model contains a set of hidden states together with an explicit output. The hidden states interact with one another dynamically and form the recurrent layer of the network; the recurrent layer then drives the output through a static mapping, forming the output layer. The dependencies among the variables, as described by (5.28), are shown in Figure 5.1: the hidden variables influence one another in a cascaded way, each depending on itself and on a small number of neighboring states and inputs, and all variables in the recurrent layer then impact the output in a nonlinear way.
Furthermore, the interaction between the neural controller and the manipulator is also important in real implementations. The manipulator is driven by the acceleration of its joint motors. Through the actuation of all individual joints, the end‐effector moves in the workspace, and the control objective is to reach the desired workspace velocity. For the neural controller in the presence of polynomial noises, the input includes two quantities: the angular velocity of the joints, which comes from feedback provided by sensors mounted on the manipulator; and the reference velocity, which comes from the control command. In summary, the control diagram in Figure 5.2 describes the interconnection between the presented neural controller and the manipulator.
In this section, we examine two special cases of polynomial noises, namely constant noises and linearly time‐varying noises, to show the specific forms of the resulting neural controllers.
We first consider the situation with a constant noise. In practice, this category of noise can be employed to describe uncompensated weights, manipulator parameter errors, etc.
Choosing the polynomial order as 0 in (5.28), the neural controller reduces to the special case of a constant noise, represented in the following form
In comparison with the neural controller (5.18) designed in the absence of noise, two additional terms appear in the control law: the term , where is the integral of the quantity that measures the violation of the solution's optimality, and the term , which provides extra damping to stabilize the system.
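The role of the integral term can be illustrated on a scalar toy system; this is a generic proportional‐integral analogue of the mechanism described above, with arbitrary gains and disturbance, not the chapter's controller (5.39).

```python
# Scalar toy system x' = u + delta with a constant disturbance delta.
# A proportional-integral law u = -k*x - ki*(integral of x) drives x
# to zero despite delta, mirroring the role of the integral term above.
k, ki, delta, dt = 4.0, 4.0, 2.0, 1e-3   # illustrative gains/noise
x, integ = 1.0, 0.0
for _ in range(20000):                    # 20 s of Euler integration
    u = -k * x - ki * integ
    x += (u + delta) * dt
    integ += x * dt

assert abs(x) < 1e-3                      # constant disturbance rejected
```

Without the integral term, the same loop would settle at a nonzero offset proportional to the disturbance; the integrator absorbs the constant bias instead.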
About the neural controller (5.39), we have the following corollary.
Another important special case of polynomial noise is noise varying linearly with time, which models various physical processes, e.g. aging and fatigue. The linearly varying noise can be represented as , where and are unknown constants and represents time. The linear noise comprises two parts: the constant part and the time‐proportional part . Choosing the polynomial order as 1 in (5.28), the neural controller for this special case is expressed as:
where and are constants. As stated in Corollary 5.1, the neural model (5.39) is able to handle the constant noise part . For the additional part , two extra terms are introduced to form the new neural controller: and . The first term is the double integral of the control error, introduced to penalize the control error resulting from the additive noise . The second term is the integral of , introduced as a damping term for stabilization. Notice that the newly introduced terms are both in integral form, because the time‐varying noise grows unboundedly with time and an increasing amount of control action is needed to counteract its impact.
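The need for a double integral against a ramp disturbance can likewise be illustrated on a scalar toy system; the gains and disturbance below are arbitrary illustrative values chosen for stability, not the chapter's parameters.

```python
# Ramp disturbance delta(t) = a + b*t on the scalar system x' = u + delta.
# A single integral term leaves a steady-state error against the ramp;
# adding the double integral of x rejects it entirely.
a, b, dt = 1.0, 0.5, 1e-3
k0, k1, k2 = 6.0, 11.0, 6.0     # s^3 + 6s^2 + 11s + 6 has roots -1,-2,-3
x, i1, i2, t = 1.0, 0.0, 0.0, 0.0
for _ in range(30000):          # 30 s of Euler integration
    u = -k0 * x - k1 * i1 - k2 * i2
    x += (u + a + b * t) * dt
    i1 += x * dt                # first integral of the error
    i2 += i1 * dt               # second integral of the error
    t += dt

assert abs(x) < 1e-2            # ramp disturbance rejected
```

Differentiating the closed loop twice removes the ramp forcing and leaves a stable homogeneous third‐order dynamics, which is why the error decays to zero.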
Table 5.1 Summary of the Denavit–Hartenberg parameters of the PUMA manipulator used in the simulation.
Link | a (m) | α (rad) | d (m) |
1 | 0 | π/2 | 0 |
2 | 0.43180 | 0 | 0 |
3 | 0.02030 | −π/2 | 0.15005 |
4 | 0 | π/2 | 0.43180 |
5 | 0 | −π/2 | 0 |
6 | 0 | 0 | 0.20000 |
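Forward kinematics from Denavit–Hartenberg parameters can be sketched as follows. The twist angles α used here are the commonly quoted PUMA 560 values and are assumptions on our part; the a and d entries match Table 5.1.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform for one link under the standard DH convention."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows, thetas):
    """Chain the per-link transforms; returns the 4x4 base-to-tool pose."""
    T = np.eye(4)
    for (a, alpha, d), th in zip(dh_rows, thetas):
        T = T @ dh_transform(a, alpha, d, th)
    return T

# (a, alpha, d) rows following Table 5.1; the +/- pi/2 twists are the
# commonly used PUMA 560 values, assumed here
puma_dh = [
    (0.0,      np.pi / 2, 0.0),
    (0.43180,  0.0,       0.0),
    (0.02030, -np.pi / 2, 0.15005),
    (0.0,      np.pi / 2, 0.43180),
    (0.0,     -np.pi / 2, 0.0),
    (0.0,      0.0,       0.20000),
]
T = forward_kinematics(puma_dh, np.zeros(6))
R = T[:3, :3]
assert np.allclose(R @ R.T, np.eye(3), atol=1e-9)  # valid rotation
assert np.allclose(T[3], [0, 0, 0, 1])             # homogeneous form
```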
About the presented neural controller (5.40), we have the following convergence result in the presence of linear noises.
In this section, we apply the presented neural controller to the redundancy resolution of a PUMA 560 robot arm for the tracking of a circular motion. We compare the presented method with existing dynamic neural solutions in the presence of various noises and show its robustness in dealing with additive noises.
In the simulation, we consider a PUMA 560 robot arm, with its Denavit–Hartenberg parameters summarized in Table 5.1, as the manipulator platform; the task is to track a circular motion with the end‐effector. The PUMA 560 is a serial manipulator with six joints and thus has 6 control DOFs. As the control task refers only to position tracking, the workspace is three‐dimensional, so the PUMA 560 serves as a redundant manipulator for this task. The desired motion is to track a circle in the plane with radius 0.15 m at an angular speed of 0.2 rad/s. In the simulation, we consider polynomial noises up to the fourth order with different amplitudes. For the parameters of the presented neural network, we choose , , , and . The parameter is determined by Equation (5.38) by choosing . The simulation results are compared with the existing neural solutions presented in [39] and [60] under the same parameter settings.
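The reference motion used in the simulation can be generated as follows. The radius and angular speed match the setup above, while the plane and circle center are illustrative choices of our own.

```python
import numpy as np

r, omega = 0.15, 0.2                      # radius (m), angular speed (rad/s)
center = np.array([0.4, 0.0, 0.3])        # illustrative circle center

def desired_position(t):
    """Reference end-effector position on the circle at time t."""
    return center + r * np.array([np.cos(omega * t), np.sin(omega * t), 0.0])

def desired_velocity(t):
    """Reference end-effector velocity, the time derivative of the position."""
    return r * omega * np.array([-np.sin(omega * t), np.cos(omega * t), 0.0])

# the velocity is consistent with the position by central differences
t, h = 2.0, 1e-6
fd = (desired_position(t + h) - desired_position(t - h)) / (2 * h)
assert np.allclose(fd, desired_velocity(t), atol=1e-6)
```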
In this section, we present simulation results in the nominal situation without noise to show the convergence of the presented neural controller. Without loss of generality, we consider the situation with . Recall that we have chosen and obtained from (5.38). Figures 5.3 and 5.4 show the result of a typical simulation run of 50 s. As shown in Figure 5.3, the generated trajectory of the PUMA 560 manipulator matches the desired circular motion. Quantitatively, the position tracking errors in the workspace along the x, y, and z directions are drawn in Figure 5.4a. As observed from this figure, the position tracking error remains at a low level (lower than in absolute value in all dimensions). The generated joint angles at different times are shown in Figure 5.4b.
In this simulation, we set the noise to a constant and compare the performance of different neural controllers. Note that the noise enters the control channel in the form of (5.3) and the control input is amplified by . Therefore, we consider a set of noises comparable with the control input: , , and . We compare the results generated by the presented algorithm with those of two existing neural controllers presented in [60] and [39] under the same parameter setup. As shown in Figures 5.5–5.7, in the presence of the constant noise , the position tracking error of the presented method remains lower than in absolute value across the entire simulation run. In contrast, the dual neural method of [60] has a larger tracking error, with peak values around at times s and s (Figure 5.6). As to the dual neural method of [39], the system loses stability and fluctuates severely with time. When the noise increases to , the position tracking error of the method in [60] increases substantially, reaching a peak value of 0.4 at time s, and the system controlled by the neural law of [39] fluctuates with increased frequency. Note that due to the restriction imposed by the physical size of the manipulator, the position tracking error is always bounded. For the case with noise , in contrast to the severe performance degradation of the existing methods [39, 60], the presented one still performs well, with an error level similar to the noise‐free situation. For the even larger noise , the method of [60] also loses stability and oscillates with a large error amplitude, as observed in Figure 5.7. As to the presented method shown in Figure 5.7, after a short transition period, the position tracking error quickly returns to a small value below .
Note that the short transition shown in the left inset of Figure 5.7a arises because the large constant noise acts on the system from the beginning of the simulation run, and the presented controller requires a short transient time to adapt to it. Overall, the simulation reveals that existing solutions designed for nominal situations are not robust against additive noise, whereas the presented neural controller, with its explicit consideration of noise tolerance, achieves a significant performance improvement over existing ones in dealing with unknown constant noises.
In this simulation, we consider a set of polynomial noises to verify the effectiveness of the presented neural controller. In particular, we consider a linearly time‐varying noise , a quadratically time‐varying one , a cubically time‐varying one , a fourth‐order time‐varying one , and one with all the above ingredients plus a constant, i.e. . Notice that the noise signals considered in this section are all time‐varying and their values can become very large. For example, at the end of the simulation ( s), the linearly time‐varying noise becomes , which is 50 times larger than the largest constant noise considered in the last section. As demonstrated in Section 5.5.3, the existing methods [39, 60] cannot handle such large noises. In the presence of the above noises, the position tracking error remains at a low level for the linear, quadratic, and cubic noises, as shown in Figures 5.8a, b, and c, respectively. Although a short transition for neural adaptation occurs as the noise increases, as seen in Figures 5.8d and e, the presented neural network ultimately reaches a small tracking error no greater than , which is comparable with the noise‐free situation. To further show the improved performance of the presented neural network in the presence of polynomial noises, we additionally consider the tracking of a square path in the horizontal plane with edge length 0.3 m. With the same neural controller parameters as in the circular‐motion case, we run one more set of simulations.
In this chapter, we have considered the recurrent neural network design for redundancy resolution of manipulators in the presence of polynomial noises. A novel neural control law has been presented to tackle this problem, and theorems have been established to show the convergence of the neural controller with noisy inputs. Extensive simulations verify the theoretical conclusions, and numerical comparisons with existing neural controllers show significant improvements in control accuracy.