5 Neural-Network-Based AGC Design

Recent achievements in artificial neural networks (ANNs) have attracted great interest, principally due to their capability to learn and approximate arbitrary nonlinear functions and their suitability for parallel processing and multivariable systems. These capabilities can be properly exploited in the design of adaptive control systems. In this chapter, following an introduction to ANN applications in control systems, an approach based on artificial flexible neural networks (FNNs) is proposed for the design of an AGC system for multiarea power systems in deregulated environments.

Here, the power system is considered a collection of separate control areas under the bilateral scheme. Each control area that is introduced by one or more distribution companies can buy electric power from some generation company to supply the area load. The control area is responsible for performing its own AGC by buying enough regulation power from prespecified generation companies, via an FNN-based supplementary control system.

The proposed control strategy is applied to single- and three-control-area power systems. The resulting controllers are shown to minimize the effect of disturbances and achieve acceptable frequency regulation under various load change scenarios.

 

 

5.1 An Overview

In a deregulated environment, an AGC system acquires a fundamental role to enable power exchanges and to provide better conditions for the electricity trading. The AGC is treated as an ancillary service essential for maintaining the electrical system reliability at an adequate level. Technically, this issue will be more important as independent power producers (IPPs), renewable energy source (RES) units, and microgrid networks get into the electric power markets.1

As mentioned in Chapter 4, there are several schemes and organizations for the provision of AGC services in countries with a restructured electric industry, differentiated by how free the market is, who controls generator units, and who has the obligation to execute AGC. Some possible AGC structures are introduced in Chapter 4. Under a deregulated environment, several notable solutions have already been proposed.1,2 Here, it is assumed that in each control area, the necessary hardware and communication facilities to enable reception of data and control signals are available, and Gencos can bid up and down regulations by price and MW-volume for each predetermined time period to the regulating market. Also, the control center can distribute load demand signals to available generating units on a real-time basis.

The participation factors, which are actually time-dependent variables, must be computed dynamically based on the received bid prices, availability, congestion problem, and other related costs, in the case of using each applicant (Genco). Each participating unit will receive its share of the demand, according to its participation factor, through a dynamic controller that usually includes a simple proportional-integral (PI) structure in a real-world power system. Since the PI controller parameters are usually tuned based on classical experiences and trial-and-error approaches, they are incapable of obtaining good dynamical performance for a wide range of operating conditions and various load scenarios.

An appropriate computation method for the participation factors and desired optimization algorithms for the AGC systems has already been reported in Bevrani.1 Several intelligent-based AGC schemes are also explained in Chapter 3. In continuation, this chapter focuses on the design of a dynamic controller unit using artificial FNNs. Technically, this controller, which is known as a supplementary control unit, has an important role to guarantee a desired AGC performance. An optimal design ensures smooth coordination between generator set point signals and the scheduled operating points. This chapter shows that the FNN control design provides an effective design methodology for the supplementary frequency controller synthesis in a new environment.

It is notable that this chapter is not about how to price either energy or any other economical aspects and services. These subjects are briefly addressed in Chapter 4. It is assumed that the necessary pricing mechanism and congestion management program are established by either free markets, a specific government regulation, or voluntary agreements, and this chapter only focuses on a technical solution for designing supplementary control loops in a bilateral-based electric power market.

The ANNs have already been used to design an AGC system for a power system with a classical (regulated) structure.3–12 Generally, in all applications, the learning algorithms cause the adjustment of the connection weights so that the controlled system gives a desired response. The most common ANN-based AGC structures are briefly explained in Chapter 3. In this chapter, in order to achieve a better performance, the FNN-based AGC system has been proposed with dynamic neurons that have wide ranges of variation.13

The proposed control strategy is applied to a three-control area example. The obtained results show that the designed controllers guarantee the desired performance for a wide range of operating conditions. This chapter is organized as follows. An introduction to ANN-based control systems with commonly used configurations is given in Section 5.2. The ANN with flexible neurons, which forms the FNN, is described in Section 5.3. Section 5.4 presents the bilateral AGC scheme and dynamical modeling. The FNN-based AGC framework is given in Section 5.5. In Section 5.6, the proposed strategy is applied to single- and three-control area examples, and some simulation results are presented.

 

 

5.2 ANN-Based Control Systems

For many years, it was a dream of scientists and engineers to develop intelligent machines with a large number of simple elements, such as neurons in biological organisms. McCulloch and Pitts14 published the first systematic study of the artificial neural network. In the 1950s and 1960s, a group of researchers combined these biological and psychological insights to produce the first ANN.15,16 Further investigations in ANN continued before and during the 1970s by several pioneer researchers, such as Rosenblatt, Grossberg, Kohonen, Widrow, and others. The primary factors for the recent resurgence of interest in the area of neural networks are dealing with learning in a complex, multilayer network, and a mathematical foundation for understanding the dynamics of an important class of networks.17

The interest in ANNs comes from the networks’ ability to mimic the human brain as well as its ability to learn and respond. As a result, neural networks have been used in a large number of applications and have proven to be effective in performing complex functions in a variety of fields, including control systems. Adaptation or learning is a major focus of neural net research that provides a degree of robustness to the ANN model.

5.2.1 Fundamental Element of ANNs

As mentioned in Chapter 3, an ANN consists of a number of nonlinear computational processing elements (neurons), arranged in several layers, including an input layer, an output layer, and one or more hidden layers in between. Every layer usually contains several neurons, and the output of each neuron is usually fed into all or most of the inputs of the neurons in the next layer. The input layer receives input signals, which are then transformed and propagated simultaneously through the network, layer by layer.

The ANNs are modeled based on biological structures for information processing, including specifically the nervous system and its basic unit, the neuron. Signals are propagated in the form of potential differences between the inside and outside of cells. Each neuron is composed of a body, one axon, and a multitude of dendrites. Dendrites bring signals from other neurons into the cell body (soma). The cell body of a neuron sums the incoming signals from dendrites as well as the signals from numerous synapses on its surface. Once the combined signal exceeds a certain cell threshold, a signal is transmitted through the axon. However, if the inputs do not reach the required threshold, the input will quickly decay and will not generate any action. The axon of a single neuron forms synaptic connections with many other neurons. Cell nonlinearities make the composite action potential a nonlinear function of the combination of arriving signals.

A mathematical model of the neuron is depicted in Figure 5.1, which shows the basic element of an ANN. It consists of three basic components that include the weights $W_j$, threshold (or bias) $\theta$, and a single activation function $f(\cdot)$. The values $W_1, W_2, \ldots, W_n$ are weight factors associated with each node to determine the strength of the input row vector $X^T = [x_1\; x_2\; \cdots\; x_n]$. Each input is multiplied by the associated weight of the neuron connection. Depending upon the activation function, if the weight is positive, the resulting signal commonly excites the node output, whereas for negative weights it tends to inhibit the node output.

The node’s internal threshold θ is the magnitude offset that affects the activation of the node output y as follows:

$y(k) = f\left(\sum_{j=1}^{n} W_j\, x_j(k) + W_0\,\theta\right)$  (5.1)

This network, which is a simple computing element, was called the perceptron by Rosenblatt in 1959, which is well discussed in Haykin.18 The nonlinear cell function (activation function) can be selected according to the application. Sigmoid functions are a general class of monotonically nondecreasing functions taking on bounded values. It is noted that as the threshold or bias changes, the activation functions may also shift. For many ANN training algorithms, including backpropagation, the derivative of f(·) is needed so that the activation function selected must be differentiable.


FIGURE 5.1
Basic element of an artificial neuron.
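As a minimal illustration of Equation 5.1, the short Python sketch below evaluates a single artificial neuron of the form shown in Figure 5.1; the weight values, bias, and inputs are arbitrary numbers chosen for the example and are not taken from the chapter.

```python
import numpy as np

def neuron_output(x, w, w0, theta, f=np.tanh):
    """Single neuron of Figure 5.1: y = f(sum_j W_j * x_j + W_0 * theta), Equation 5.1."""
    return f(np.dot(w, x) + w0 * theta)

# Illustrative values (assumed for this sketch only)
x = np.array([0.2, -0.5, 0.8])    # inputs x_1 ... x_n
w = np.array([0.4, 0.1, -0.3])    # connection weights W_1 ... W_n
print(neuron_output(x, w, w0=1.0, theta=-0.1))
```

Any differentiable activation can replace np.tanh here, in line with the differentiability requirement noted above for backpropagation-type training.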

5.2.2 Learning and Adaptation

New neural morphologies with learning and adaptive capabilities have infused new control power into the control of complex dynamic systems. Learning and adaptation are the two keywords associated with the notion of ANNs. Learning and adaptation in the field of intelligent control systems were basically introduced in the early 1960s, and several extensions and advances have been made since then. In 1990, Narendra and Parthasarathy suggested that feedforward ANNs could also be used as components in feedback systems. After that, there were a lot of activities in the area of ANN-based identification and control, and a profusion of methods was suggested for controlling nonlinear and complex systems. Although much of the research was heuristic, it provided empirical evidence that ANNs could outperform traditional methods.19

There exist different learning algorithms for ANNs, but they normally encounter some technical problems, such as local minimization, low learning speed, and high sensitivity to initial conditions, among others. Recently, some learning algorithms based on other powerful tools, such as Kalman filtering, have been proposed.20

Recent advances in static and dynamic ANNs have had a profound impact on neural architectures for adaptation and control, on the backpropagation algorithms, and on identification and control problems for a general class of dynamic systems. The subject of adaptive control systems, known under various terms such as neoadaptive, intelligent, and cognitive control systems, falls within the domain of control of complex industrial systems with reasoning, learning, and adaptive abilities.21

The backpropagation learning algorithm is known as one of the most efficient learning procedures for multilayer ANNs. One reason for wide use of this algorithm, which will be described later, is its simplicity. These learning algorithms provide a special attribute for the design and operation of the dynamic ANN for a given task, such as design of controllers for complex dynamic systems. There are several approaches for deriving the backpropagation algorithm. The simplest derivation is presented in Fogelman-Soulie et al.22 and LeCun.23 This approach is directly influenced from the optimal control theory, which uses Lagrange multipliers to obtain the optimal values of a set of control variables.

Direct analytic computation and recursive update techniques are two basic learning approaches to determining ANN weights. Learning algorithms may be carried out in continuous time or discrete time via differential or difference equations for the weights. There are many learning algorithms, which can be classified into three categories: (1) a supervised learning algorithm uses a supervisor that knows the desired outcomes and tunes the weights accordingly; (2) an unsupervised learning algorithm uses local data, instead of a supervisor, according to emergent collective properties; and (3) a reinforcement learning algorithm uses some reinforcement signal, instead of the output error of that neuron, to tune the weights.

Unlike the unsupervised learning, both supervised and reinforcement learning algorithms require a supervisor to provide training signals in different ways. In supervised learning, an explicit signal is provided by the supervisor throughout to guide the learning process, while in reinforcement learning, the role of the supervisor is more evaluative than instructional.24 An algorithm for reinforcement learning with its application in multiagent-based AGC design is described in Chapter 7.

The principles of reinforcement learning and related ideas in various applications have been extended over the years. For example, the idea of adaptive critic25 was introduced as an extension of the mentioned general idea of reinforcement learning in feedback control systems. The adaptive critic ANN architecture uses a critic ANN in a high-level supervisory capacity that critiques the system performance over time and tunes a second action ANN (for generating the critic signal) in the feedback control loop. The critic ANN can select either the standard Bellman equation, Hamilton-Jacobi-Bellman equation, or a simple weighted sum of tracking errors as the performance index, and it tries to minimize the index. In general, the critic conveys much less information than the desired output required in supervisory learning.26 Some recent research works use a supervisor in the actor-critic architecture, which provides an additional source of evaluation feedback.27

On the other hand, the learning process in an ANN could be offline or online. Offline learning is useful in feedforward applications such as classification and pattern recognition, while in feedback control applications, usually online learning, which is more complex, is needed. In an online learning process, the ANN must maintain the stability of a dynamical system while simultaneously learning and ensuring that its own internal states and weights remain bounded.

5.2.3 ANNs in Control Systems

Serving as a general way to approximate various nonlinear static and dynamic relations, ANN has the ability to be easily implemented for complex control systems. While, in most cases, only simulations supported the proposed control ideas, presently more and more theoretical results are proving the soundness of neural approximation in control systems.28

Applications of ANNs in feedback control systems are basically distinct from those in open-loop applications in the fields of classification, pattern recognition, and approximation of nondynamic functions. In latter applications, ANN usage has developed over the years to show how to choose network topologies and select weights to yield guaranteed performance. The issues associated with weight learning algorithms are well understood. In ANN-based feedback control of dynamical systems, the problem is more complicated, and the ANN must provide stabilizing controls for the system as well as ensure that all its weights remain bounded.

The main objective of intelligent control is to implement an autonomous system that can operate with increasing independence from human actions in an uncertain environment. This objective could be achieved by learning from the environment through a feedback mechanism. The ANN has the capability to implement this kind of learning. Indeed, an ANN consists of a finite number of interconnected neurons (as described earlier) and acts as a massively parallel distributed processor, inspired from biological neural networks, which can store experimental knowledge and make it available for use.

The research on neural networks promotes great interest principally due to the capability of static ANNs to approximate arbitrarily well any continuous function. The most used ANN structures are feedforward and recurrent networks. The latter offers a better-suited tool for intelligent feedback control design systems.

The most proposed ANN-based control systems use six general control structures that are conceptually shown in Figure 5.2: (1) using the ANN system as a controller to provide a direct control command signal in the main feedback loop; (2) using ANN for tuning the parameters of the existing fixed structure controller (I, PI, PID, etc.); (3) using the ANN system as an additional controller in parallel with the existing conventional controller, such as I, PI, and PID, to improve the closed-loop performance; (4) using the ANN controller with an additional intelligent mechanism to control a dynamical plant; (5) using ANN in a feedback control scheme with an intelligent recurrent observer/identifier; and finally, (6) using ANNs in an adaptive critic control scheme. The above six configurations are presented in Figure 5.2a–f, respectively.

In all control schemes, the ANN collects information about the system response, adjusts weights via a learning algorithm, and recommends an appropriate control signal. In Figure 5.2b, the ANN performs an automatic tuner. The main components of the ANN as an intelligent tuner for other (conventional) controllers (Figure 5.2b) include a response recognition unit to monitor the controlled response and extract knowledge about the performance of the current controller gain setting, and an embedded unit to suggest suitable changes to be made to the controller gains. One can use a linear model predictive controller (MPC) instead of a conventional controller.29

In this case, the combination of a linear MPC and an ANN unit in Figure 5.2b represents a nonlinear MPC. The ANN unit provides an estimate of the deviation between the predicted value of the output computed via the linear model and the actual nonlinear system output, at a given sampling time.

The structure shown in Figure 5.2e is mainly useful for trajectory tracking control problems. A recurrent high-order ANN observer can be used to implement the intelligent observer/identifier block.30 An adaptive critic control scheme, shown in Figure 5.2f, comprises a critic ANN and an action ANN that approximate the global control based on the nonlinear plant and its model. The critic ANN evaluates the action ANN performance by analyzing predicted states (from the plant model) and real measurements (from the actual plant). Adaptive critic control designs have the potential to replicate critical aspects of brain-like intelligence, that is, the ability of ANNs to cope with a large number of variables in a real environment. The origins of adaptive critic control lie in ideas synthesized from reinforcement learning, real-time derivation, backpropagation, and dynamic programming. An application study for this control structure, entitled the dual heuristic programming adaptive critic control method, is presented in Ferrari.31


FIGURE 5.2
Popular configurations for ANN-based control systems in (a)–(f).

In addition to the above control configurations, ANN is widely used as a plant identifier in control systems. There are two structures for plant identification: the forward and inverse structures. In case of the forward configuration, the ANN receives the same input as the plant, and the difference between the plant output and the ANN output is minimized usually using the backpropagation algorithm. But, the inverse plant identification employs the plant output as the ANN input, while the ANN generates an approximation of the input vector of the real plant.

When an ANN is used as a controller, most of the issues are similar to those of the identification case. The main difference is that the desired output of the ANN controller, that is, the appropriate control input to be fed to the plant, is not available but has to be induced from the known desired plant output. In order to achieve this, one uses either approximations based on a mathematical model of the plant or an ANN identifier.29 To obtain an adaptive control structure, ANNs are usually used in both the identification and control parts, as shown in Figure 5.2e. Internal model control can be considered an application of this idea. Such a design, which is schematically shown in Figure 5.3, is robust against model inaccuracies and plant disturbances.28 The idea of internal model control32 consists of employing a model of the plant and modifying the reference signal as $r^*(k) = r(k) - (y(k) - \tilde{y}(k))$, where $\tilde{y}$ represents the internal model output.
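As a small sketch of the internal model idea, the function below applies only the reference-modification step quoted above; the numerical values in the example call are arbitrary.

```python
def modified_reference(r, y, y_model):
    """Internal model control reference correction: r*(k) = r(k) - (y(k) - y_tilde(k))."""
    return r - (y - y_model)

# If the plant output exceeds the internal model output by 0.04,
# the effective reference is lowered by the same amount.
print(modified_reference(r=1.0, y=0.92, y_model=0.88))   # -> 0.96
```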

Over the past years, ANNs have been effectively used for regulation and tracking problems, which are two control problems of general interest. Regulation involves the generation of a control input that stabilizes the system around an equilibrium state. In the tracking problem, a reference output is specified and the output of the plant must approximate it with as small an error as possible. For theoretical analysis, this error is required to tend to zero, so that asymptotic tracking is achieved. Numerous problems are encountered when an ANN is used to control a dynamical system, including regulation and tracking issues. Some of them can be briefly summarized as follows:


FIGURE 5.3
Internal model control scheme.

  1. Since the ANN is in a feedback loop with the controlled plant, dynamic rather than static backpropagation is needed to adjust the parameters along the negative gradient. However, in practice, only static backpropagation is used.19

  2. In most control applications, an approximate model of the plant is needed, and to improve the performance further, the parameters of an additional neural network may be adjusted, as in point i.

  3. Because of the complexity of the structure of a multilayer ANN and the nonlinear dependence of its map on its parameter values, stability analysis of the resulting system is always very difficult and quite often intractable.

  4. In practice, the three-step procedure described earlier is used in industrial problems, to avoid adaptation online and the ensuing stability problems. However, if the feedback system is stable, online adaptive adjustments can improve performance significantly, provided such adjustments are small.19

  5. As an important part of ANN-based control system design, the quality of selection methods for ANN initial conditions significantly influences the quality of the control solution.

Selection of initial conditions in an ANN-based control system is also known to be an important issue. In multiextremum optimization problems, some initial values may not lead to an extremum with a satisfactory value of the optimization functional. The initial conditions are usually selected according to a priori information about distributions, given the already known structure of the open-loop ANN and the selected optimization structure (control strategy). Methods for the selection of initial conditions can be classified into three categories according to the form of the information used:33 random initial conditions without use of the learning sample, deterministic initial conditions without use of the learning sample, and initial conditions with use of the learning sample.

 

 

5.3 Flexible Neural Network

5.3.1 Flexible Neurons

As mentioned in the previous section, the activation function is the most important part of a neuron (Figure 5.1), and it is usually modeled using a sigmoid function. The flexibility of ANNs can be increased using flexible sigmoid functions (FSFs). Basic concepts and definitions of the introduced FSF were described in Teshnehlab and Watanabe.13 The following hyperbolic tangent function as a sigmoid unit function is considered in hidden and output layers:

$f(x,a) = \dfrac{1 - e^{-2xa}}{a\left(1 + e^{-2xa}\right)}$  (5.2)

The shape of this bipolar sigmoid function can be altered by changing the parameter a, as shown in Figure 5.4. It also has the property

$\lim_{a \to 0} f(x,a) = x$  (5.3)


FIGURE 5.4
Sigmoid function with changeable shape.

Thus, the function becomes linear as a → 0, while it becomes increasingly nonlinear for large values of a.13 It should be noted that in this study, learning covers the update of both the connection weights and the sigmoid function parameters (SFPs). Generally, the main idea is to present an input pattern, allow the network to compute the output, and compare it to the desired signal provided by the supervisor or reference. The error is then utilized to modify the connection weights and SFPs in the network so as to minimize it and improve performance; a network with such adjustable activation functions is referred to as a flexible neural network (FNN).
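A brief numerical sketch of Equation 5.2 follows; note that the expression is algebraically equal to tanh(ax)/a, so the limit in Equation 5.3 can be checked directly. The sample values of x and a are arbitrary.

```python
import numpy as np

def flexible_sigmoid(x, a):
    """Flexible bipolar sigmoid, Equation 5.2: (1 - exp(-2*x*a)) / (a * (1 + exp(-2*x*a)))."""
    return (1.0 - np.exp(-2.0 * x * a)) / (a * (1.0 + np.exp(-2.0 * x * a)))

x = 0.5
for a in (2.0, 1.0, 0.5, 0.01):
    print(a, flexible_sigmoid(x, a))   # the output tends to x itself as a -> 0 (Equation 5.3)
```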

5.3.2 Learning Algorithms in an FNN

The learning process of FNNs for control area i is to minimize the performance function given by

$J = \dfrac{1}{2}\left(y_{di} - y_i^M\right)^2$  (5.4)

where ydi represents the reference signal, yiM represents the output unit, and M denotes the output layer. It is desirable to find a set of parameters in the connection weights and SFPs that minimize the J, considering the same input-output relation between the layer k and the layer (k + 1). It is useful to consider how the error varies as a function of any given connection weights and SFPs in the system.

The learning procedure finds the values of all the connection weights and SFPs that minimize the error function using a gradient descent method. That is, after each pattern has been presented, the error moves toward its minimum for that pattern along the negative gradient, provided a suitable learning rate is used. Learning the SFPs by the gradient descent method, the increment of a, denoted by Δaik, is obtained as

$\Delta a_i^k = -\eta_1\,\dfrac{\partial J}{\partial a_i^k}$  (5.5)

where η1 > 0 is a learning rate given by a small positive constant. In the output layer M, the partial derivative of J with respect to a is described as follows:38

$\dfrac{\partial J}{\partial a_i^M} = \dfrac{\partial J}{\partial y_i^M}\,\dfrac{\partial y_i^M}{\partial a_i^M}$  (5.6)

Here, defining

$\sigma_i^M \equiv -\dfrac{\partial J}{\partial y_i^M}$  (5.7)

gives

$\sigma_i^M = \left(y_{di} - y_i^M\right)$  (5.8)

The next step is to calculate a in the hidden layer k:

$\dfrac{\partial J}{\partial a_i^k} = \dfrac{\partial J}{\partial y_i^k}\,\dfrac{\partial y_i^k}{\partial a_i^k} = \dfrac{\partial J}{\partial y_i^k}\,f^{*}\!\left(h_i^k, a_i^k\right)$  (5.9)

where h denotes outputs of the hidden layer, and by defining

$\sigma_i^k \equiv -\dfrac{\partial J}{\partial y_i^k}$  (5.10)

we have

$\dfrac{\partial J}{\partial y_i^k} = \sum_m \dfrac{\partial J}{\partial y_m^{k+1}}\,\dfrac{\partial y_m^{k+1}}{\partial y_i^k} = -\sum_m \sigma_m^{k+1}\,\dfrac{\partial y_m^{k+1}}{\partial h_m^{k+1}}\,\dfrac{\partial h_m^{k+1}}{\partial y_i^k} = -\sum_m \sigma_m^{k+1}\,\dfrac{\partial f\!\left(h_m^{k+1}, a_m^{k+1}\right)}{\partial h_m^{k+1}}\,w_{i,m}^{k,k+1}$  (5.11)

where

$\sigma_i^k = \sum_m \sigma_m^{k+1}\,\dfrac{\partial f\!\left(h_m^{k+1}, a_m^{k+1}\right)}{\partial h_m^{k+1}}\,w_{i,m}^{k,k+1}$  (5.12)

Therefore, the learning update equation for a in the output and hidden layer neurons is obtained as follows:

$a_i^k(t+1) = a_i^k(t) + \eta_1\,\sigma_i^k\,f^{*}\!\left(h_i^k, a_i^k\right) + \alpha_1\,\Delta a_i^k(t)$  (5.13)

where f*(·,·) denotes the partial derivative of the flexible sigmoid with respect to its parameter, that is, ∂f(hiM, aiM)/∂aiM in the output layer and ∂f(hik, aik)/∂aik in the hidden layers, and α1 is a stabilizing (momentum) coefficient defined by 0 < α1 < 1. For deeper insights into the subject, interested readers are referred to Teshnehlab and Watanabe.13

Generally, the learning algorithm of connection weights has been studied by different authors. Here, it can be simply summarized as follows:

$w_{ij}^{k-1,k}(t+1) = w_{ij}^{k-1,k}(t) + \eta_2\,\delta_j^k\,y_i^{k-1} + \alpha_2\,\Delta w_{ij}^{k-1,k}(t)$  (5.14)

where t denotes the tth update time, η2 > 0 is a learning rate given by a small positive constant, α2 is a stabilizing (or momentum) coefficient defined by 0 < α2 < 1, and

$\delta_j^M = \left(y_{dj} - y_j^M\right) f'\!\left(h_j^M\right)$  (5.15)

$\delta_j^k = f'\!\left(h_j^k\right) \sum_m \delta_m^{k+1}\,w_{j,m}^{k,k+1}$  (5.16)

$f'\!\left(h_j^M\right) = \dfrac{d f\!\left(h_j^M\right)}{d h_j^M}$  (5.17)
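The following Python sketch pulls the gradient computations of Equations 5.4 to 5.17 together for a network with one hidden layer, writing the flexible sigmoid of Equation 5.2 in its equivalent form tanh(a·z)/a. The network size, training data, and learning rates are illustrative only, and the momentum terms of Equations 5.13 and 5.14 are omitted to keep the sketch short.

```python
import numpy as np

def fsf(z, a):            # flexible sigmoid, Eq. 5.2, in the equivalent form tanh(a*z)/a
    return np.tanh(a * z) / a

def dfsf_dz(z, a):        # derivative with respect to the net input z
    return 1.0 - np.tanh(a * z) ** 2

def dfsf_da(z, a):        # derivative with respect to the sigmoid parameter a
    t = np.tanh(a * z)
    return z * (1.0 - t ** 2) / a - t / a ** 2

def train_step(x, yd, W1, a1, W2, a2, eta_w=0.05, eta_a=0.02):
    """One supervised step that updates both the connection weights (Eq. 5.14)
    and the sigmoid function parameters (Eq. 5.13), without momentum terms."""
    # Forward pass
    z1 = W1 @ x;  h = fsf(z1, a1)        # hidden layer with per-neuron SFPs a1
    z2 = W2 @ h;  y = fsf(z2, a2)        # scalar output unit with SFP a2
    e = yd - y                           # J = 0.5 * e**2, Eq. 5.4

    # Backward pass: gradients of J with respect to every adjustable parameter
    dJ_dy  = -e
    dJ_dz2 = dJ_dy * dfsf_dz(z2, a2)
    dJ_dW2 = dJ_dz2 * h
    dJ_da2 = dJ_dy * dfsf_da(z2, a2)
    dJ_dh  = dJ_dz2 * W2
    dJ_dz1 = dJ_dh * dfsf_dz(z1, a1)
    dJ_dW1 = np.outer(dJ_dz1, x)
    dJ_da1 = dJ_dh * dfsf_da(z1, a1)

    # Gradient-descent updates (Eq. 5.5 applied to both parameter sets)
    return (W1 - eta_w * dJ_dW1, a1 - eta_a * dJ_da1,
            W2 - eta_w * dJ_dW2, a2 - eta_a * dJ_da2, e)

# Illustrative training loop on a single pattern
rng = np.random.default_rng(1)
W1, a1 = 0.5 * rng.normal(size=(5, 3)), np.full(5, 0.8)
W2, a2 = 0.5 * rng.normal(size=5), 0.8
x, yd = np.array([0.2, -0.1, 0.4]), 0.3
for _ in range(2000):
    W1, a1, W2, a2, e = train_step(x, yd, W1, a1, W2, a2)
print(abs(e))   # the output error shrinks as weights and SFPs are adapted together
```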

 

 

5.4 Bilateral AGC Scheme and Modeling

5.4.1 Bilateral AGC Scheme

As mentioned in Chapter 4, depending on the electrical system structure, there are different control frameworks and AGC/LFC schemes, but a common objective is restoring the frequency and the net interchanges to their nominal values, in each control area. The present work studies AGC design in a given control area under bilateral structure, similar to the decentralized pluralistic scheme defined by UCTE34 and described in Chapter 4.

In a bilateral AGC scheme, the power system is considered a collection of distribution control areas interconnected through high-voltage transmission lines or tie-lines. Each control area has its own load frequency controller and is responsible for tracking its own load and honoring tie-line power exchange contracts with its neighbors. Similar to Christie and Bose,35 the general theme in this chapter is that the loads (Discos) are responsible for purchasing the ancillary services they require. Contractual agreements between regulation power producers (Gencos) and consumers (Discos) are signed together with transmission access licenses for energy transactions. In the assumed bilateral AGC structure, each distribution area purchases regulating power from one or more Gencos. A separate control process exists for each control area. Such a configuration for a three-control area power system is conceptually shown in Figure 5.5.


FIGURE 5.5
Three control areas.

The control areas regulate their frequency by their own supplementary control loop. According to the UCTE definitions, a control area is the smallest portion of a power system equipped with an autonomous AGC system. A control block may be formed by one or more control areas working together to meet a specified AGC requirement with respect to the neighboring control blocks.37

If several control areas form a control block, which is not the case in the present example, a block coordinator (market operator) coordinates the whole block toward its neighboring blocks/control areas by means of its own supplementary controller and regulating capacity.34,36 The control algorithm for each control area is executed at the generating units, since ultimately the Gencos must adjust the governor set points of their generators for the AGC purpose.

5.4.2 Dynamical Modeling

In most ANN applications, specifically for performing the backpropagation learning algorithm, an approximate dynamic model of the plant is needed. This model can be obtained either by identification methods or from mathematical differential equations. For this purpose, the differential equations of an aggregated control area model are used here.

Consider a general distribution control area that includes N generation companies supplying the area load, and assume the kth Genco, Gk, is able to generate enough power to satisfy the necessary participation factor for tracking the load and performing the AGC task, while the other Gencos are the main suppliers of the area load.

Although power systems are inherently nonlinear, a simplified (and linearized) model is usually used for intelligent AGC design. In intelligent control strategies (such as the one considered in this chapter), the error caused by the simplification and linearization can be compensated by their self-tuning and adaptive properties. For simplicity, assume that each Genco has one generator. The linearized dynamics of the individual generators are given by

$\dfrac{2H_i}{f^0}\,\dfrac{d\,\Delta f_i}{dt} = \Delta P_{mi} - \Delta P_{Li} - \Delta P_{di} - D_i\,\Delta f_i, \qquad \dfrac{d\,\Delta \delta_i}{dt} = 2\pi\,\Delta f_i, \qquad i = 1, 2, \ldots, N$  (5.18)

where Hi is the inertia constant, Di is the damping factor, δi is the rotor angle, f0 is the nominal frequency, Pmi is the turbine (mechanical) power, Pdi is the disturbance (power quantity), and PLi is the electrical power.

The generators are equipped with speed governors. Assuming nonreheat steam units, the linear models of the speed governors and turbines associated with the generators are given by the following differential equations:1

$\dfrac{d\,\Delta P_{gi}}{dt} = -\dfrac{1}{T_{gi}}\,\Delta P_{gi} + \dfrac{K_{gi}}{T_{gi}}\left(\Delta P_{Ci} - \dfrac{1}{R_i}\,\Delta f_i\right), \qquad \dfrac{d\,\Delta P_{mi}}{dt} = -\dfrac{1}{T_{ti}}\,\Delta P_{mi} + \dfrac{K_{mi}}{T_{ti}}\,\Delta P_{gi}, \qquad i = 1, \ldots, N$  (5.19)

where Pgi is the steam valve power, Ri is the droop characteristic, Tti and Tgi are the time constants of the turbine and governor, Kmi and Kgi are the gains of the turbine and governor, and finally, PCi is the reference set point (control input).

The individual generator models are coupled to each other via the system network. Mathematically, the local state space of each individual generator must be extended to include the system coupling variable (such as rotor angle), which allows the dynamics at one point on the system to be transmitted to all other points. The state space model of a control area can be easily obtained as follows:38

$\dot{x}_i = A_i x_i + B_i u_i + F_i w_i, \qquad y_i = C_i x_i + E_i w_i$  (5.20)
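As a sketch of how Equations 5.18 and 5.19 assemble into the state-space form of Equation 5.20, the function below builds the continuous-time matrices for a single generating unit with state vector [Δf, Δδ, ΔPg, ΔPm]. Tie-line (network) coupling and the output matrices Ci and Ei are omitted, and the parameter values in the example call, including the nominal frequency, are arbitrary assumptions rather than the data of Table 5.1.

```python
import numpy as np

def genco_state_matrices(H, D, R, Tg, Tt, Kg, Km, f0=60.0):
    """Matrices for x = [dF, dDelta, dPg, dPm], input u = dPc,
    disturbance w = dPL + dPd (Equations 5.18 and 5.19, coupling omitted)."""
    A = np.array([
        [-D * f0 / (2 * H), 0.0,  0.0,        f0 / (2 * H)],  # dF'     = f0/(2H) * (dPm - w - D*dF)
        [2 * np.pi,         0.0,  0.0,        0.0],           # dDelta' = 2*pi*dF
        [-Kg / (Tg * R),    0.0, -1.0 / Tg,   0.0],           # dPg'    = (-dPg + Kg*(u - dF/R)) / Tg
        [0.0,               0.0,  Km / Tt,   -1.0 / Tt],      # dPm'    = (-dPm + Km*dPg) / Tt
    ])
    B = np.array([0.0, 0.0, Kg / Tg, 0.0]).reshape(-1, 1)        # control input channel
    F = np.array([-f0 / (2 * H), 0.0, 0.0, 0.0]).reshape(-1, 1)  # disturbance channel
    return A, B, F

# Illustrative (assumed) parameters
A, B, F = genco_state_matrices(H=5.0, D=1.0, R=0.05, Tg=0.08, Tt=0.4, Kg=1.0, Km=1.0)
```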

 

 

5.5 FNN-Based AGC System

Here, the objective is to formulate the AGC problem in each control area and propose an effective supplementary control loop based on ANNs. Since there is a strong relationship between the training of ANNs and adaptive/self-tuning control, increasing the flexibility of the structure induces a more efficient learning ability in the system, which in turn requires fewer iterations and yields better error minimization. To obtain the improved flexibility, the teaching signals and the other parameters of the ANN (such as connection weights) should be related to each other.

Here, a sigmoid unit function that mimics the prototype unit is used to give a flexible structure to the neural network. For this purpose, a hyperbolic tangent form of the sigmoid unit function, with a parameter that must be learned,13 is introduced to fulfill the above-mentioned goal. The overall scheme of the proposed control system for a given control area is shown in Figure 5.6.

The FNN uses a backpropagation algorithm in a supervised learning mode. The accuracy and speed of the backpropagation method is improved using dynamic neurons. The main idea is to modify connection weights and SFPs in the proposed FNN-based controller to minimize the output error (e) signal and improve system performance. On the other hand, it is desirable to find a set of parameters in the connection weights and SFPs that minimize the output error signal.


FIGURE 5.6
The overall synthesis framework.

The designed controller acts to maintain the area frequency and total exchange power close to the scheduled values by sending a corrective signal to the assigned Gencos (ΔPCi = ui). This supplementary regulating power signal, weighted by the generator participation factor αi, is used to modify the set points of the generators. The Ii is an input vector that includes a proper set of the output (yi), reference (ydi), and control signal (ui) at the current and previous iteration steps.


FIGURE 5.7
The proposed AGC scheme.

The proposed control framework, shown in Figure 5.7, uses the tie-line power change (ΔPtie) and frequency deviation (Δf) as the main input signals. βi and λPi are properly chosen coefficients of the supplementary control feedback system. ΔPd represents the system disturbances. More measurement signals and data may be fed to the FNN control system as additional inputs (such as previous values of the control signal). As there are many Gencos in each area, the control signal has to be distributed among them in proportion to their participation in the AGC. Hence, the generator participation factor shows the sharing rate of each participant generator unit in the AGC task.
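As a simple sketch of this sharing rule, the snippet below splits the area regulation command among the participating Gencos according to their participation factors, assuming the factors sum to one; the factor values are arbitrary.

```python
def dispatch_agc_signal(u_i, alphas):
    """Split the area regulation signal u_i among Gencos: dPc_k = alpha_k * u_i."""
    assert abs(sum(alphas) - 1.0) < 1e-9, "participation factors should sum to 1"
    return [alpha * u_i for alpha in alphas]

print(dispatch_agc_signal(0.05, [0.6, 0.3, 0.1]))   # approximately [0.03, 0.015, 0.005]
```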

The general structure of an FNN-based controller is shown in Figure 5.8. In this figure, unit functions in the hidden and output layers are flexible functions. The number of hidden layers and the units in each layer are entirely dependent on the control area system, and there is no mathematical approach to obtain the optimum number of hidden layers,17,39 since such selection generally falls into the application-oriented category.

The number of hidden layers can be chosen based on the training of the network using various configurations. It has been shown that the FNN configuration gives fewer hidden layers and nodes than traditional ANN,8 which still yields the minimum root mean squares (RMS) error quickly and efficiently. Experiences and simulation results show that using a single hidden layer is sufficient to solve the AGC problem for many control areas with different structures.38

The equivalent discrete time-domain state space of the control area model (Equation 5.20) can be obtained as follows:

$x_i(k+1) = A_i x_i(k) + B_i u_i(k) + F_i w_i(k), \qquad y_i(k) = C_i x_i(k) + E_i w_i(k)$  (5.21)
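One common way to obtain the discrete model of Equation 5.21 from Equation 5.20 is zero-order-hold discretization. The sketch below uses the standard augmented-matrix-exponential identity; the small system matrices and sampling period are arbitrary placeholders, and the disturbance matrix Fi can be discretized in exactly the same way by appending it alongside Bi.

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, T):
    """Zero-order-hold discretization: expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * T)
    return Md[:n, :n], Md[:n, n:]

# Illustrative 2x2 continuous-time system and sampling period (assumed values)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = zoh_discretize(A, B, T=0.01)
```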

When the FNN controller is naive, i.e., the network starts with random initial weights and SFPs, an erroneous system input ui(k) may be produced, leading to an erroneous output yi(k). This output is then compared with the reference signal ydi(k). The resulting error signal ei(k) is used to train the weights and SFPs in the network using the backpropagation algorithm. In fact, the basic concept of the backpropagation method of learning is to combine a nonlinear perceptron-like system capable of making decisions with an objective error function and the gradient descent method. With repetitive training, the network learns how to respond correctly to the reference input.


FIGURE 5.8
General structure of an FNN-based controller.

As the amount of training increases, the network becomes more and more mature; hence, the area control error (ACE) will become smaller and smaller. However, as seen from Figure 5.8, the backpropagation of the error signal cannot be directly used to train the FNN controller. In order to properly adjust the weights and SFPs of the network using the backpropagation algorithm, the error (Equation 5.22) in the FNN controller output should be known.

$\varepsilon_i(k) = u_{di}(k) - u_i(k)$  (5.22)

where udi(k) is the desired driving input to the control area. Since only the system output error, as described in Equation 5.23, is measurable or available,

$e_i(k) = y_{di}(k) - y_i(k) = ACE_{di}(k) - ACE_i(k) = -ACE_i(k)$  (5.23)

εi(k) can be determined using the following expression:

$\varepsilon_i(k) = e_i(k)\,\dfrac{\partial y_i(k)}{\partial u_i(k)}$  (5.24)

where the partial derivative in Equation 5.24 is the Jacobian of the control area model. Thus, the application of this scheme requires a thorough knowledge of the Jacobian of the system dynamical model. For simplicity, instead of Equation 5.22, one can use the following equation:

$\varepsilon_i(k) = e_i(k)\,\dfrac{\Delta y_i(k) - \Delta y_i(k-1)}{\Delta u_i(k) - \Delta u_i(k-1)}$  (5.25)

This approximation avoids the introduction and training of a neural network emulator, which results in substantial savings in development time. The proposed supplementary feedback acts as a self-tuning controller that can learn from experience, in the sense that the connection weights and SFPs are adjusted online; in other words, this controller should produce ever-decreasing tracking errors over successive samples by using the FNN.
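A minimal sketch of the approximation in Equation 5.25 is given below; the guard against a near-zero denominator between two samples is an implementation detail assumed here, not something specified in the chapter.

```python
def fnn_training_signal(e_k, dy_k, dy_km1, du_k, du_km1, tol=1e-6):
    """epsilon_i(k) ~ e_i(k) * (dy_i(k) - dy_i(k-1)) / (du_i(k) - du_i(k-1)), Equation 5.25."""
    denom = du_k - du_km1
    if abs(denom) < tol:
        return 0.0          # skip the update when the control signal has barely changed
    return e_k * (dy_k - dy_km1) / denom
```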

 

 

5.6 Application Examples

A discrete time-domain state space model for control area i can be obtained as given in Equation 5.21. To achieve the AGC objectives, the proposed control strategy is applied to a control area as shown in Figure 5.9. As can be seen from the block diagram, a multilayer neural network including three layers is constructed. This network has nine units in the input layer, seven units in the hidden layer, and one unit in the output layer. The neural network acts as a controller to supply the AGC participating generating units with a correct driving input ui(k) = ΔPCi(k), which is based on the reference input signal ydi(k), previous system output signals yi(k–1), …, yi(k–4), and control output signals ui(k–1), …, ui(k–4). The ydi(k) is the desired value of the output variable yi(k), for which the error signal equals zero. The input vector of the neural network is then

$I_i^T(k) = \left[\,y_{di}(k)\;\; y_i(k-1) \,\cdots\, y_i(k-4)\;\; u_i(k-1) \,\cdots\, u_i(k-4)\,\right]$  (5.26)

h1i, …, h7i are outputs of the hidden layer, and ui(k) is the output of the output layer of area i.

As shown in Figure 5.9, in the learning process not only the connection weights, but also the SFPs are adjusted. Adjusting the SFPs in turn causes a change in the shapes of sigmoid functions. The proposed learning algorithm considerably reduces the number of training steps, resulting in much faster training than with the traditional ANNs.38

For the problem at hand, simulations show that three layers are enough for the proposed FNN control system to obtain desired AGC performance. Increasing the number of layers does not significantly improve the control performance. In the proposed controller, the input layer uses the linear neurons, while the hidden and output layers use the bipolar FSFs given in Equation 5.2.
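The sketch below assembles the nine-element input vector of Equation 5.26 and runs one forward pass through the 9-7-1 structure described above, with flexible sigmoids in the hidden and output layers. The weights and SFPs shown are random placeholders standing in for whatever values the learning algorithm currently holds; they are not the chapter's trained parameters.

```python
import numpy as np

def fsf(z, a):
    """Flexible sigmoid of Equation 5.2, written as tanh(a*z)/a."""
    return np.tanh(a * z) / a

def fnn_control(yd_k, y_hist, u_hist, W1, a1, W2, a2):
    """One forward pass of the 9-7-1 FNN controller of Figure 5.9."""
    I = np.concatenate(([yd_k], y_hist, u_hist))   # Equation 5.26: [yd(k), y(k-1..k-4), u(k-1..k-4)]
    h = fsf(W1 @ I, a1)                            # seven flexible hidden units h_1i ... h_7i
    return float(fsf(W2 @ h, a2))                  # control output u_i(k) = dPc_i(k)

# Illustrative random parameters (not the chapter's values)
rng = np.random.default_rng(0)
W1, a1 = 0.1 * rng.normal(size=(7, 9)), rng.uniform(0.1, 1.0, size=7)
W2, a2 = 0.1 * rng.normal(size=7), 0.5
u = fnn_control(0.01, 0.01 * np.ones(4), np.zeros(4), W1, a1, W2, a2)
```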


FIGURE 5.9
The proposed FNN controller for the control area i

In order to demonstrate the effectiveness of the proposed strategy, some simulations were carried out. In these simulations, the proposed intelligent control design methodology is applied to single- and three-control-area power system examples. The initial connection weights and the initial uniform random numbers (URNs) of the sigmoid function unit parameters for each control area are properly chosen.

5.6.1 Single-Control Area

Consider a distribution company (Disco) and its suppliers, as shown in Figure 5.10, which together form a control area under the bilateral AGC policy. Gencos produce electric power that is delivered to the Disco either directly or through the Transco. In this example, the Disco buys firm power from Gencos 2, 3, and 4, and enough power from Genco 1 to supply its load and support the AGC system. It is assumed that Genco 1 is able to generate enough regulation power to satisfy the AGC needs. Transco delivers power from Genco 1, and it is also contracted to deliver power associated with the AGC problem. This control area is connected to its neighbors through the L1 and L2 interconnection lines.


FIGURE 5.10
A single-control area.

TABLE 5.1
Applied Data for Simulation


In the proposed structure, the Disco is responsible for tracking the load, and hence performing the AGC task by securing as much transmission and generation capacity as needed. Ultimately, the control algorithm is executed at Genco 1. It is assumed that each Genco has one generating unit. The power system parameters are given in Table 5.1. The sampling time is chosen as 1 ms. The initial values of sigmoid function parameters are randomly chosen from a domain of [0, 1], and the initial connection weights are considered as follows:

(The initial weight matrix W10 appears only as an image in the original text and is not reproduced here.)

$W_2^0 = \left[\,2.182\;\; 3.783\;\; 7.377\;\; 7.488\;\; 3.008\;\; 7.304\;\; 9.126\,\right]$

The system response for some simulation scenarios is shown in Figure 5.11. Figure 5.11a and b compare the open-loop and closed-loop (equipped with a conventional ANN) area frequency responses following a 0.1 pu step increase in the area load at 2 s. The parameter ai of the sigmoid unit functions in the applied ANN is fixed at ai = 1. Figure 5.11c shows the system response for the same test scenario using the proposed FNN controller. Comparing Figure 5.11b and c shows that the closed-loop performance with the FNN controller is much better than with the conventional ANN having a fixed-structure activation function. Finally, Figure 5.11d depicts the change in power delivered to the control area from all Gencos following the step disturbance. Initially, power comes from all Gencos in response to the load increase, because the resulting frequency drop is sensed by the speed governors of all machines through their primary frequency control loops. After a few seconds, however, and at steady state, the additional power comes only from the unit(s) participating in AGC, and the other Gencos do not contribute to the AGC task.


FIGURE 5.11
System response following 0.1 pu step load increase: (a) open-loop system, (b) conventional ANN, (c) proposed FNN, and (d) mechanical power changes (solid, Genco 1; dotted, other Gencos).

5.6.2 Three-Control Area

As another example, consider a power system with three control areas, as shown in Figure 5.12. Each control area has some Gencos with different parameters, and it is assumed that one generator unit with enough capacity is responsible for area frequency regulation (G11, G22, and G31 in areas 1, 2, and 3, respectively). Control area 1 delivers enough power from G11 and firm power from other Gencos to supply its load and support the AGC task. In case of a load disturbance, G11 must adjust its output to track the load changes and maintain the energy balance.

A control area may have a contract with a Genco in another control area. For example, control area 3 buys power from G11 in control area 1 to supply its load. The control areas are connected to the neighbor areas through L12, L13, and L23 interconnection lines. It is assumed that each Genco has one generator unit. The power system parameters are given in Bevrani et al.40

The proposed intelligent control design is applied to the three-control area power system described in Figure 5.12. Similar to the previous example, the initial connection weights and initial uniform random number of sigmoid function unit parameters for each control area are properly chosen. For instance, a set of suitable learning rates and momentum terms for area 1 are given in Table 5.2.


FIGURE 5.12
Three-control area power system.

Figure 5.13a–c demonstrates the system response following a 0.1 pu step load increase in each control area. In steady state, the frequency in each control area is properly returned to its nominal value. Results obtained from ANN controllers with fixed-structure sigmoid functions for some test scenarios are given in Bevrani et al.38 Comparing the two sets of results illustrates the effectiveness and capability of the proposed control design against the conventional ANN-based AGC design.

TABLE 5.2
Learning Rates and Momentum Terms for Proposed FNN

Control Area i    Learning Rates [η1i, η2i]    Momentum Terms [α1i, α2i]
1                 [0.005, 0.002]               [0.07, 0.09]
2                 [0.003, 0.001]               [0.05, 0.05]
3                 [0.01, 0.002]                [0.01, 0.04]

Figure 5.13d demonstrates the disturbance rejection property of the closed-loop system. This figure shows the frequency deviation in all control areas, following a step disturbance (ΔPd) of 0.01 pu on the interconnecting lines L12, L13, and L23 at t = 2 s.

Simultaneous learning of the connection weights and the sigmoid unit function parameters in the proposed method increases the number of adjustable parameters in comparison with the traditional method. The proposed algorithm reduces the sensitivity of the ANN to parameters such as the connection weights, while increasing its sensitivity to the SFPs. In the proposed structure, the training of the SFPs changes the shape of the individual sigmoid functions according to the input space and reference signal, and achieves better convergence and performance than traditional ANNs.

 

 

5.7 Summary

Power system operation and control took decades to shape, having been modified with increasing availability of new, powerful mathematical and computational tools. One of the modern tools is artificial neural networks (ANNs). Over recent years, due to their ability to learn complex nonlinear functional relationships, ANNs have been suggested for many industrial systems using different configurations. At present, the methods provided by neural networks have matured and have been widely used in power system modeling, identification, and control.


FIGURE 5.13
Frequency deviation in (a) area 1, (b) area 2, (c) area 3, following simultaneous 0.1 pu step load increases in three areas. (d) System response due to a 0.01 pu step disturbance on interconnecting lines at 2 s.

In this chapter, a methodology for AGC design using flexible neural networks in a restructured power system has been proposed. The design strategy includes enough flexibility to set a desired level of performance. The proposed control methodology was applied to single- and three-control-area power systems under a bilateral AGC scheme. It is recognized that learning both the connection weights and the SFPs increases the power of the learning algorithm, providing a high capability in the training process. Simulation results demonstrated the effectiveness of the methodology. It has been shown that the suggested FNN-based supplementary frequency controllers give better ACE minimization and more proper convergence to the desired trajectory than traditional ANN-based ones.

 

 

References

1. H. Bevrani. 2009. Robust power system frequency control. New York: Springer.

2. H. Bevrani. 2004. Decentralized robust load-frequency control synthesis in restructured power systems. PhD dissertation, Osaka University.

3. H. L. Zeynelgil, A. Demiroren, N. S. Sengor. 2002. The application of ANN technique to automatic generation control for multi-area power system. Elect. Power Energy Syst. 24:345–54.

4. F. Beaufays, Y. Abdel-Magid, B. Widrow. 1994. Application of neural networks to load-frequency control in power systems. Neural Networks 7(1):183–94.

5. D. K. Chaturvedi, P. S. Satsangi, P. K. Kalra. 1999. Load frequency control—A generalized neural network approach. Elect. Power Energy Syst. 21:405–15.

6. Y. L. Karnavas, D. P. Papadopoulos. 2002. AGC for autonomous power system using combined intelligent techniques. Elect. Power Syst. Res. 62:225–39.

7. M. Djukanovic, M. Novicevic, D. J. Sobajic, Y. P. Pao. 1995. Conceptual development of optimal load-frequency control using artificial neural networks and fuzzy set theory. Eng. Intelligent Syst. Elect. Eng. Commun. 2:95–108.

8. H. Bevrani. 2002. A novel approach for power system load frequency controller design. In Proceedings of IEEE/PES T&D 2002, Asia Pacific, Yokohama, Japan, vol. 1, pp. 184–89.

9. A. Demiroren, N. S. Sengor, H. L. Zeynelgil. 2001. Automatic generation control by using ANN technique. Elect. Power Components Syst. 29:883–96.

10. T. P. I. Ahamed, P. S. N. Rao. 2006. A neural network based automatic generation controller design through reinforcement learning. Int. J. Emerging Elect. Power Syst. 6(1):1–31.

11. L. D. Douglas, T. A. Green, R. A. Kramer. 1994. New approaches to the AGC nonconforming load problem. IEEE Trans. Power Syst. 9(2):619–28.

12. H. Shayeghi, H. A. Shayanfar. 2006. Application of ANN technique based on Mu-synthesis to load frequency control of interconnected power system. Elect. Power Energy Syst. 28:503–11.

13. M. Teshnehlab, K. Watanabe. 1999. Intelligent control based on flexible neural networks. Dordrecht, NL: Kluwer Publishers.

14. W. S. McCulloch, W. Pitts. 1943. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5:115–33.

15. F. Rosenblatt. 1961. Principles of neurodynamics. Washington, DC: Spartan Press.

16. J. L. McClelland, D. E. Rumelhart. 1986. Parallel distributed processing explorations in the microstructure of cognition: Psychological and biological models. Vol. 2. Cambridge, MA: MIT Press.

17. A. Zilouchian, M. Jamshidi. 2001. Intelligent control systems using soft computing methodologies. Boca Raton, FL: CRC Press.

18. S. Haykin. 1994. Neural networks. New York: IEEE Press and Macmillan.

19. K. S. Narendra. 2003. Identification and control. In The handbook of brain theory and neural networks, ed. M. A. Arbib, 547–51. Cambridge, MA: MIT Press.

20. S. Haykin, ed. 2001. Kalman filtering and neural networks. New York: John Wiley & Sons.

21. M. M. Gupta, L. Jin, N. Homma. 2003. Static and dynamic neural networks: From fundamentals to advanced theory. New York: John Wiley & Sons.

22. F. Fogelman-Soulie, P. Gallinari, Y. LeCun, S. Thiria. 1987. Automata networks and artificial intelligence. In Automata networks in computer science: Theory and applications, 133–86. Princeton, NJ: Princeton University Press.

23. Y. LeCun. 1988. A theoretical framework for back-propagation. In Proceedings 1988, Connectionist Model Summer School, ed. D. Touretzky, C. Hinton, T. Sejnowski, 21–28. Pittsburgh, PA: Morgan Kaufmann.

24. F. L. Lewis, J. Campos, R. Selmic. 2002. Neuro-fuzzy control of industrial systems with actuator nonlinearities. Philadelphia: SIAM.

25. P. J. Werbos. 1991. A menu of designs for reinforcement learning over time. In Neural networks for control, ed. W. T. Miller, R. S. Sutton, P. J. Werbos, 67–95. Cambridge, MA: MIT Press.

26. J. Sarangapani. 2006. Neural network control of nonlinear discrete-time systems. Boca Raton, FL: CRC Press.

27. M. T. Rosenstein, A. G. Barto. 2004. Supervised actor-critic reinforcement learning. In Handbook of learning and approximate dynamic programming, ed. J. Si, A. G. Barto, W. B. Powell, D. Wunsch, 359–80. IEEE Press.

28. A. Pacut. 2003. Neural techniques in control. In Neural networks for instrumentation, measurement and related industrial applications, ed. S. Ablameyko et al., 79–117. Amsterdam, NL: IOS Press.

29. U. Halici, K. Leblebicioglu, C. Özegen, S. Tuncay. 2000. Recent advances in neural network applications in process control. In Recent advances in artificial neural networks design and applications, ed. L. Jain, A. M. Fanelli, 239–300. Boca Raton, FL: CRC Press.

30. A. Y. Alanis, E. N. Sanchez. 2009. Discrete-time reduced order neural observers. In Advances in Computational Intelligence, eds. W. Yu, E. N. Sanchez. AISC61: 113–22, Berlin Heidelberg: Springer-Verlag.

31. S. Ferrari. 2002. Algebraic and adaptive learning in neural control systems. PhD thesis, Princeton University.

32. I. Rivals, L. Personnaz. 2000. Nonlinear internal model control using neural networks: Applications to processes with delay and design issues. IEEE Trans. Neural Networks 11(1):80–90.

33. A. I. Galushkin. 2007. Neural networks theory. New York: Springer.

34. UCPTE. 1999. UCPTE rules for the co-ordination of the accounting and the organization of the load-frequency control. UCTE. 1999. UCTE Ground rules for the coordination of the accounting and the organization of the load-frequency control. Available: www.ucte.org.

35. R. D. Christie, A. Bose. 1996. Load frequency control issues in power system operation after deregulation. IEEE Trans. Power Syst. 11(3):1191–200.

36. B. Delfino, F. Fornari, S. Massucco. 2002. Load-frequency control and inadvertent interchange evaluation in restructured power systems. IEE Proc. Gener. Transm. Distrib. 149(5):607–14.

37. G. Dellolio, M. Sforna, C. Bruno, M. Pozzi. 2005. A pluralistic LFC scheme for online resolution of power congestions between market zones. IEEE Trans. Power Syst. 20(4):2070–77.

38. H. Bevrani, T. Hiyama, Y. Mitani, K. Tsuji, M. Teshnehlab. 2006. Load-frequency regulation under a bilateral LFC scheme using flexible neural networks. Eng. Intelligent Syst. J. 14(2):109–17.

39. S. Haykin. 1999. Neural networks: A comprehensive foundation. 2nd ed. Upper Saddle River, NJ: Prentice Hall.

40. H. Bevrani, Y. Mitani, K. Tsuji. 2004. On robust load-frequency regulation in a restructured power system. IEEJ Trans. Power Energy 124(2):190–98.
