11

 

 

Application of Genetic Algorithm in AGC Synthesis

 

The genetic algorithm (GA) is a numerical optimization technique, inspired by the principle of survival of the fittest, that can be applied to a wide range of optimization problems. The time consumed by trial-and-error methods for finding an optimum solution has motivated interest in meta-heuristic methods such as GA, which has become a very useful tool for tuning control parameters in AGC systems.

The GA begins with a set of randomly generated initial populations represented as chromosomes, each consisting of a number of genes (binary bits). These bits are suitably decoded to provide a proper string for the optimization problem. Genetic operators act on the initial population and regenerate new populations that converge toward the fittest. A fitness function is employed to aid regeneration of each new population from the previous one: it assigns a value to each chromosome (solution candidate) that specifies its fitness. The results are sorted according to the fitness values, and suitable chromosomes are employed to generate the new population through the specified operators. This process continues until it yields the most suitable population as the optimal solution of the given optimization problem.

Several investigations pertaining to the application of GA in AGC design have been reported in the past.1–15 Application of GA in AGC synthesis as a performance optimization problem in power system control is briefly reviewed in Chapter 3. In this chapter, following an introduction on the GA mechanism in Section 11.1, the GA application for optimal tuning of supplementary frequency controllers is given in Section 11.2. In Section 11.3, AGC design is formulated as a multiobjective GA optimization problem. A GA-based AGC synthesis to achieve the same robust performance indices as provided by the standard mixed H2/H∞ control theory is addressed in Section 11.4. The capability of GA to improve the learning performance in AGC systems using a learning algorithm is emphasized in Section 11.5, and finally, the chapter is summarized in Section 11.6.

 

 

11.1 Genetic Algorithm: An Overview

11.1.1 GA Mechanism

The GA mechanism is inspired by the mechanism of natural selection, where stronger individuals are likely to be the winners in a competing environment. Normally in a GA, the parameters to be optimized are represented in a binary string. A simplified flowchart for the GA is shown in Figure 11.1. The cost function, which defines the optimization problem, represents the main link between the problem at hand (system) and the GA, and provides the fundamental mechanism for evaluating the algorithm's steps. To start the optimization, the GA uses randomly produced initial solutions created by a random number generator. This method is preferred when a priori information about the problem is not available. There are basically three genetic operators used to produce a new generation: selection, crossover, and mutation. The GA employs these operators to converge at the global optimum. After randomly generating the initial population (as random solutions), the GA uses the genetic operators to achieve a new set of solutions at each iteration. In the selection operation, each solution of the current population is evaluated by its fitness, normally represented by the value of some objective function, and individuals with higher fitness values are selected.


FIGURE 11.1
A simplified GA flowchart.

Different selection methods, such as stochastic selection or ranking-based selection, can be used. In the selection procedure, individual chromosomes are selected from the population for the later recombination/crossover. The fitness values are normalized by dividing each one by the sum of all fitness values; the normalized value is called the selection probability. Chromosomes with a higher selection probability have a higher chance of being selected for later breeding.

The crossover operator works on pairs of selected solutions with a certain crossover rate, defined as the probability of applying crossover to a pair of selected solutions (chromosomes). There are many ways to define the crossover operator; the most common is the one-point crossover. In this method, a point is determined randomly in two strings (e.g., two binary-coded solutions of a certain bit length), and the corresponding bits are swapped to generate two new solutions. A mutation is a random alteration, with a small probability, of the binary value of a string position; it prevents the GA from being trapped in a local minimum. The rates assigned to crossover and mutation determine the number of children produced. Information generated by the fitness evaluation unit about the quality of different solutions is used by the selection operation in the GA. The algorithm is repeated until a predefined number of generations have been produced.
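The selection–crossover–mutation loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the chapter's implementation: the bit-count fitness function, population size, and operator rates are arbitrary stand-ins for a real cost function and tuned parameters.

```python
import random

random.seed(0)

BITS, POP, GENS = 16, 20, 40
CROSSOVER_RATE, MUTATION_RATE = 0.8, 0.02

def fitness(chrom):
    # toy objective (stand-in for a real cost function): number of 1 bits
    return sum(chrom)

def select(pop):
    # roulette-wheel selection: probability proportional to fitness
    weights = [fitness(c) + 1e-6 for c in pop]
    return random.choices(pop, weights=weights, k=2)

def crossover(a, b):
    # one-point crossover applied with probability CROSSOVER_RATE
    if random.random() < CROSSOVER_RATE:
        point = random.randrange(1, BITS)
        return a[:point] + b[point:], b[:point] + a[point:]
    return a[:], b[:]

def mutate(chrom):
    # bit-flip mutation with a small per-bit probability
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in chrom]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    new_pop = []
    while len(new_pop) < POP:
        p1, p2 = select(pop)
        c1, c2 = crossover(p1, p2)
        new_pop += [mutate(c1), mutate(c2)]
    pop = new_pop[:POP]

best = max(pop, key=fitness)
print(fitness(best))
```

In a control application, `fitness` would instead run a closed-loop simulation with the decoded parameters and return a measure of tracking performance.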

Unlike the gradient-based optimization methods, GAs operate simultaneously on an entire population of potential solutions (chromosomes or individuals) instead of producing successive iterations of a single element, and the computation of the gradient of the cost functional is not necessary. Interested readers can find basic concepts and a detailed GA mechanism in Goldberg16 and Davis.17

11.1.2 GA in Control Systems

GA is one of the rapidly emerging optimization approaches in the field of control engineering and control system design.18 Optimal/adaptive tracking control, active noise control, multiobjective control, robust tuning of control systems via seeking the optimal performance indices provided by robust control theorems, and use in fuzzy-logic- and neural-network-based control systems are some important applications of GA in control systems. Genetic programming can be used as an automated invention machine to synthesize designs for complex structures. It facilitates the design of dynamic systems that are robust with respect to environmental noise, variation in design parameters, and structural failures in the system.19,20

A simple GA-based control system is conceptually shown in Figure 11.2. The GA controller consists of three components: performance evaluator, learning algorithm, and control action producer. The performance evaluator rates a chromosome by assigning it a fitness value. The value indicates how good the chromosome is in controlling the dynamical plant to follow a reference signal. The learning algorithm may use a set of rules in the form of “condition then action” for controlling the plant. The desirable action will be performed by a control action producer when the condition is satisfied. The control structure shown in Figure 11.2 is implemented for several control applications in different forms.21


FIGURE 11.2
A GA-based control system.

 

 

11.2 Optimal Tuning of Conventional Controllers

Here, to show the capability of GA for tuning of a conventional integral controller in the supplementary frequency control loop, a simple three-area control system with a single thermal generator (reheat steam unit) and an integral controller in each control area is considered. The dynamic model of a governor-turbine, Mi (s), and nominal parameters of systems are taken from Golpira and Bevrani14 and Bevrani.22

The dynamic frequency response model for each area is considered as shown in Figure 11.3. The components of the block diagram are defined in Chapter 2. Here, the most important physical constraints are considered: for the simulations, the generation rate constraint (GRC), governor dead-band, and time delay in each supplementary frequency control loop are fixed at 3% pu.MW/min, 0.36 Hz, and 2 s, respectively.

For the present example, the initial population consists of one hundred chromosomes; each one contains forty-eight binary bits (genes). The fitness proportionate selection method (known as the roulette-wheel selection method) is used to select useful solutions for recombination. The crossover and mutation coefficients are fixed at 0.8 and 0.2. The objective function, which should be minimized, is considered as given in Equation 11.1:

J = ∫_0^T (ACE_i)² dt  (11.1)

where T is simulation time and ACEi = ΔPtie,i + βiΔfi is the area control error signal (see Equation 2.11). The applied GA steps are summarized as follows:

  1. The initial population of one hundred random binary strings of length 48 is built (each controller gain is represented by sixteen genes).

  2. The strings are decoded to the real numbers from a domain of [0, 1].


FIGURE 11.3
Frequency response model for area i.

  3. Fitness values are calculated for each chromosome.

  4. The fitter ones are selected as parents.

  5. Some pairs of parents are selected based on the selection method and recombined to generate children.

  6. Rarely, mutation is applied to the children; in other words, a few 0 bits are flipped to 1, and vice versa.

  7. A new population is regenerated by allowing parents and children to be together.

  8. Return to step 2 and repeat the above steps until the termination conditions are satisfied.
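Steps 1 and 2 — building one hundred random 48-bit chromosomes and decoding each group of sixteen genes into a real-valued gain in [0, 1] — can be sketched as follows. This is a minimal illustration; the fitness evaluation itself would require simulating the system of Figure 11.3 against Equation 11.1.

```python
import random

random.seed(1)

N_AREAS, GENE_BITS = 3, 16
CHROM_LEN = N_AREAS * GENE_BITS          # 48 bits per chromosome
POP_SIZE = 100

def decode(chrom):
    """Map each 16-bit slice of the chromosome to an integral gain in [0, 1]."""
    gains = []
    for i in range(N_AREAS):
        bits = chrom[i * GENE_BITS:(i + 1) * GENE_BITS]
        value = int("".join(map(str, bits)), 2)
        gains.append(value / (2 ** GENE_BITS - 1))
    return gains

population = [[random.randint(0, 1) for _ in range(CHROM_LEN)]
              for _ in range(POP_SIZE)]

gains = decode(population[0])
print(len(gains), all(0.0 <= g <= 1.0 for g in gains))
```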

The above procedure is schematically depicted in Figure 11.4. Following the GA application, the optimal gains for integral controllers in areas 1, 2, and 3 are obtained as 0.259, 0.278, and 0.001, respectively. The system response is examined in the presence of simultaneous 0.02 pu step load disturbances in three areas, at 2 s. The frequency deviation and the net tie-line power change in each area are shown in Figure 11.5.

It is shown that neglecting the GRC, speed governor dead-band, and time delay decreases the efficiency of the designed controller in responding to load disturbances within an acceptable time period.14 These dynamics must therefore be considered in the design of supplementary frequency control loops to eliminate their detrimental effects.


FIGURE 11.4
GA structure.


FIGURE 11.5
System response following simultaneous 0.02 pu step load increases in three areas: (a) frequency deviation and (b) tie-line power change.

 

 

11.3 Multiobjective GA

11.3.1 Multiobjective Optimization

The majority of control design problems are inherently multiobjective, in that there are several conflicting design objectives that need to be achieved simultaneously in the presence of determined constraints. If these synthesis objectives are analytically represented as a set of design objective functions subject to the existing constraints, the synthesis problem can be formulated as a multiobjective optimization problem.

As mentioned, in a multiobjective problem, unlike a single-objective optimization problem, the notion of optimality is not so straightforward and obvious. Practically, in most cases the objective functions are in conflict and show different behavior, so the reduction of one objective function leads to an increase in another. Therefore, in a multiobjective optimization problem there may not exist one solution that is best with respect to all objectives. Usually, the goal is reduced to compromising among all objectives and determining a trade-off surface representing a set of nondominated solution points, known as Pareto-optimal solutions. A Pareto-optimal solution has the property that it is not possible to reduce any of the objective functions without increasing at least one of the other objective functions.

Unlike single-objective optimization, the solution to the multiobjective problem is not a single point, but a family of points as the (Pareto-optimal) solutions set, where any member of the set can be considered an acceptable solution. However, the choice of one solution over the other requires problem knowledge and a number of problem-related factors.23,24

Mathematically, a multiobjective optimization (in the form of minimization) problem can be expressed as

Minimize  y = f(x) = {f_1(x), f_2(x), ..., f_M(x)}
Subject to:  g(x) = {g_1(x), g_2(x), ..., g_J(x)} ≤ 0  (11.2)

where x = {x_1, x_2, ..., x_N} ∈ X is a vector of decision variables in the decision space X, and y = {y_1, y_2, ..., y_M} ∈ Y is the objective vector in the objective space Y. The solution is not unique; however, one can choose one solution over the others. In the minimization case, the solution x_1 dominates x_2, or x_1 is superior to x_2, if

∀i ∈ {1, ..., M}, y_i(x_1) ≤ y_i(x_2)  ∧  ∃i ∈ {1, ..., M} | y_i(x_1) < y_i(x_2)  (11.3)

The x1 is called a noninferior or Pareto-optimal point if any other in the feasible space of design variables does not dominate x1. Practically, since there could be a number of Pareto-optimal solutions, and the suitability of one solution may depend on system dynamics, the environment, the designer’s choice, etc., finding the center point of a Pareto-optimal solutions set may be desired.

GA is well suited for solving multiobjective optimization problems. Several approaches have been proposed to solve multiobjective optimization problems using GAs.15,16,25–29 The keys to finding the Pareto front among these various procedures are the Pareto-based ranking29 and fitness-sharing16 techniques. In the most common method, the solution is achieved by developing a population of Pareto-optimal or near-Pareto-optimal solutions that are nondominated. The solution x_i is said to be nondominated if there does not exist any x_j in the population that dominates x_i. Nondominated individuals are given the greatest fitness, and individuals that are dominated by many other individuals are given a small fitness. Using this mechanism, the population evolves toward a set of nondominated, near-Pareto-optimal individuals.29
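The dominance relation of Equation 11.3 and the extraction of the nondominated individuals translate directly into code. The objective vectors below are hypothetical two-objective points (both minimized), chosen only to illustrate the filter:

```python
# Pareto dominance (minimization) per Equation 11.3: y1 dominates y2 if it is
# no worse in every objective and strictly better in at least one.
def dominates(y1, y2):
    return (all(a <= b for a, b in zip(y1, y2))
            and any(a < b for a, b in zip(y1, y2)))

def nondominated(points):
    """Return the Pareto-optimal subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical objective vectors (f1, f2), both to be minimized
pts = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
print(nondominated(pts))  # (3.0, 3.0) and (4.0, 4.0) are filtered out
```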

In addition to finding a set of near-Pareto-optimal individuals, it is desirable that the sample of the whole Pareto-optimal set given by the set of nondominated individuals be fairly uniform. A common mechanism to ensure this is fitness sharing,29 which works by reducing the fitness of individuals that are genetically close to each other. However, not all bits of a candidate solution bit string are necessarily active. Thus, two individuals may have the same genotype but different gene strings, so it is difficult to measure the difference between two genotypes in order to implement fitness sharing. One may simply remove the multiple copies of genotypes from the population.30

11.3.2 Application to AGC Design

The multiobjective GA methodology is applied to optimize the proportional-integral (PI)-based supplementary frequency control parameters in a multiarea power system. The control objective is to minimize the ACE signals in the interconnected control areas. To achieve this goal and satisfy optimal performance, the parameters of the PI controller in each control area can be selected through minimization of the following objective function:

ObjFnc_i = Σ_{t=0}^{K} |ACE_{i,t}|  (11.4)

where ObjFnc_i is the objective function of control area i, K is the simulation time (s), and |ACE_{i,t}| is the absolute value of the ACE signal of area i at time t.

Following use of the multiobjective GA optimization technique to tune the PI controllers and find the optimum values of objective functions (Equation 11.4), the fitness function (FitFunc) can be defined as follows:

FitFunc(·) = [ObjFnc_1, ObjFnc_2, ..., ObjFnc_n]  (11.5)

Each GA individual is a double vector representing the PI parameters. Since a PI controller has two gain parameters, the number of GA variables is N_var = 2n, where n is the number of control areas. The population is considered in matrix form with size m × N_var, where m is the number of individuals.

As mentioned earlier, the population of a multiobjective GA is composed of dominated and nondominated individuals. The basic line of the algorithm is derived from a GA, where only one replacement occurs per generation. The selection phase should be done first. Initial solutions are randomly generated using a uniform random number of PI controller parameters. The crossover and mutation operators are then applied. The crossover is applied on both selected individuals, generating two children. The mutation is applied uniformly on the best individual. The best resulting individual is integrated into the population, replacing the worst-ranked individual in the population. This process is conceptually shown in Figure 11.6.
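The steady-state loop just described — select a pair, cross over, mutate the best child, and replace the worst-ranked individual — can be sketched as follows. This is a simplified single-objective stand-in: evaluating Equation 11.4 requires a dynamic simulation of the power system, so a hypothetical quadratic objective takes its place, and the gain ranges are illustrative.

```python
import random

random.seed(2)

N_AREAS = 3
N_VAR = 2 * N_AREAS            # two gains (kP, kI) per control area
POP_SIZE = 30

def obj_fnc(ind):
    # stand-in for sum_t |ACE_i,t|; a real design would simulate the system
    return sum((g - 0.3) ** 2 for g in ind)

def make_individual():
    return [random.uniform(0.0, 1.0) for _ in range(N_VAR)]

pop = [make_individual() for _ in range(POP_SIZE)]
for _ in range(200):
    p1, p2 = random.sample(pop, 2)                       # selection
    cut = random.randrange(1, N_VAR)
    c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]    # crossover
    best = min((c1, c2), key=obj_fnc)                    # keep the best child
    i = random.randrange(N_VAR)                          # uniform mutation
    best[i] = random.uniform(0.0, 1.0)
    worst = max(pop, key=obj_fnc)                        # one replacement
    pop[pop.index(worst)] = best                         # per generation

best_ind = min(pop, key=obj_fnc)
print(obj_fnc(best_ind))
```

The vector objective of Equation 11.5 would replace the scalar `obj_fnc` with a Pareto ranking of the per-area objective values.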

The above-described multiobjective GA is applied to the three-control area power system example used in Section 2.4. The closed-loop system response for the following simultaneous load step increase (Equation 11.6) in three areas is examined, and some results for areas 2 and 3 are shown in Figure 11.7.


FIGURE 11.6
Multiobjective GA for tuning of PI-based supplementary frequency control parameters.


FIGURE 11.7
System responses: (a) area 2 and (b) area 3 (solid line, proposed methodology; dotted line, robust PI control).

ΔP_L1 = 100 MW,  ΔP_L2 = 80 MW,  ΔP_L3 = 50 MW  (11.6)

The simulation results show that the proposed technique performs as well as the robust H∞-PI control methodology addressed in Bevrani et al.31 Interested readers can find more time-domain simulations for various load disturbance scenarios in Daneshfar.13

 

 

11.4 GA for Tracking Robust Performance Index

A robust multiobjective control methodology for AGC design in a multiarea power system using the mixed H2/H∞ control technique is introduced in Bevrani.32 The AGC problem is transferred to a static output feedback (SOF) control design, and the mixed H2/H∞ control is used via an iterative linear matrix inequality (ILMI) algorithm to approach a suboptimal solution for the specified design objectives.

Here, the multiobjective GA is used as a PI tuning algorithm to achieve the same robust performance as provided by the ILMI-based H2/H∞ design. In both control designs, the same controlled variables and design objectives (reducing unit wear and tear caused by equipment excursions, and limiting the overshoot and number of reversals of the governor load set point signal, while area frequency and tie-line power are maintained close to specified values) are considered.

11.4.1 Mixed H2/H∞

In many real-world control problems, it is desirable to follow several objectives simultaneously, such as stability, disturbance attenuation, reference tracking, and considering the practical constraints. Pure H∞ synthesis cannot adequately capture all design specifications. For instance, H∞ synthesis mainly enforces closed-loop stability and meets some constraints and limitations, while noise attenuation or regulation against random disturbances is more naturally expressed in H2 synthesis terms. The mixed H2/H∞ control synthesis gives a powerful multiobjective control design addressed by the LMI techniques.

A general synthesis control scheme using the mixed H2/H∞ control technique is shown in Figure 11.8. G(s) is a linear time-invariant system with the following state-space realization:

ẋ = Ax + B_1w + B_2u
z_∞ = C_∞x + D_∞1w + D_∞2u
z_2 = C_2x + D_21w + D_22u
y = C_yx + D_y1w  (11.7)


FIGURE 11.8
Mixed H2/H∞ control configuration.

where x is the state variable vector, w is the vector of disturbance and other external inputs, and y is the measured output vector. The output channel z_2 is associated with the H2 performance aspects, while the output channel z_∞ is associated with the H∞ performance. Let T_∞(s) and T_2(s) be the transfer functions from w to z_∞ and z_2, respectively.

In general, the mixed H2/H∞ control design method provides a dynamic output feedback (DOF) controller, K(s), that minimizes the following trade-off criterion:

k_1 ‖T_∞(s)‖_∞² + k_2 ‖T_2(s)‖_2²,  (k_1 ≥ 0, k_2 ≥ 0)  (11.8)

Unfortunately, most robust control methods, such as mixed H2/H∞ control design, suggest complex and high-order dynamic controllers, which are impractical for industry practice. For example, real-world AGC systems use simple PI controllers. Since a PI or proportional-integral-derivative (PID) control problem can easily be transferred to an SOF control problem,15 one way to overcome this challenge is to use mixed H2/H∞ SOF control instead of the mixed H2/H∞ DOF control method. The main merit of this transformation is that powerful robust SOF control techniques, such as robust mixed H2/H∞ SOF control, can be used to calculate the fixed gains (PI/PID parameters); once the SOF gain vector is obtained, the PI/PID gains are in hand and no additional computation is needed. In what follows, the mixed H2/H∞ SOF control design is briefly explained.
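The PI-to-SOF transformation mentioned above can be made concrete: if the integral of the measured signal is treated as an additional measured output, the PI gains appear as entries of a static gain vector. A small numerical check, with arbitrary illustrative gains and signal values:

```python
# A PI law u = -kP*ACE - kI*(integral of ACE) can be written as static output
# feedback u = K*y with the augmented measurement y = [ACE, integral of ACE],
# so tuning (kP, kI) is equivalent to choosing the SOF gain vector K.
kP, kI = 0.4, 0.25                  # illustrative PI gains
K = [-kP, -kI]                      # equivalent static output feedback gains

def pi_control(ace, ace_integral):
    return -kP * ace - kI * ace_integral

def sof_control(y):
    return sum(k * yi for k, yi in zip(K, y))

ace, ace_int = 0.02, 0.15           # arbitrary sample signal values
print(abs(pi_control(ace, ace_int) - sof_control([ace, ace_int])) < 1e-12)
```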

11.4.2 Mixed H2/H∞ SOF Design

The mixed H2/H∞ SOF control design problem can be expressed as determining an admissible SOF law (pure gain vector) K_i, belonging to a family of internally stabilizing SOF gains K_sof,

u_i = K_i y_i,  K_i ∈ K_sof  (11.9)

such that

inf_{K_i ∈ K_sof} ‖T_2(s)‖_2  subject to  ‖T_∞(s)‖_∞ < 1  (11.10)

The optimization problem given in Equation 11.10 defines a robust performance synthesis problem, where the H2 norm is chosen as the performance measure. There are some proper lemmas giving the necessary and sufficient condition for the existence of a solution for the above optimization problem to meet the following performance criteria:

‖T_2(s)‖_2 < γ_2,  ‖T_∞(s)‖_∞ < γ_∞  (11.11)

where γ_2 and γ_∞ are the H2 and H∞ robust performance indices, respectively.

It is notable that the H∞ and mixed H2/H∞ SOF reformulations generally lead to bilinear matrix inequalities, which are nonconvex. This kind of problem is usually solved by an iterative algorithm that may not converge to an optimal solution. An ILMI algorithm is introduced in Bevrani32 to obtain a suboptimal solution for the above optimization problem. The proposed algorithm searches for the desired suboptimal H2/H∞ SOF controller K_i within a family of H2-stabilizing controllers K_sof such that

|γ_2* − γ_2| < ε,  γ_∞ = ‖T_{z_∞i v_1i}‖_∞ < 1  (11.12)

where ε is a small real positive number, γ_2* is the H2 performance corresponding to the H2/H∞ SOF controller K_i, and γ_2 is the optimal H2 performance index that can be obtained from the application of the standard H2/H∞ DOF control.

11.4.3 AGC Synthesis Using GA-Based Robust Performance Tracking

The design of a robust SOF controller based on mixed H2/H∞ control was discussed in the previous section. Now, the application of GA for obtaining pure gains (SOF) is presented, to achieve the same robust performance (Equation 11.11). Here, as in the H2/H∞ control scheme shown in Figure 11.8, the optimization objective is to minimize the effects of the disturbances (w) on the controlled variables (z_∞ and z_2). This objective can be summarized as

Min  γ_2 = ‖T_2(s)‖_2
Subject to:  γ_∞ = ‖T_∞(s)‖_∞ < 1  (11.13)

such that the resulting performance indices (γ_2*, γ_∞*) satisfy |γ_2* − γ_2opt| < ε and γ_∞* < 1. Here, ε is a small real positive number; γ_2* and γ_∞* are the H2 and H∞ performance indices corresponding to the controller K_i obtained from the GA optimization algorithm; and γ_2opt is the optimal H2 performance index that can be achieved from the application of standard H2/H∞ dynamic output feedback control. In order to calculate γ_2opt, one may simply use the hinfmix function of the LMI control toolbox in MATLAB.33
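A common way to fold the constrained problem of Equation 11.13 into a scalar GA fitness is a penalty on the H∞ constraint. The sketch below uses hypothetical placeholder norm evaluators; in a real design, the two norms would be computed for the closed loop under the candidate PI gains (e.g., via the LMI control toolbox).

```python
# Penalty-based fitness for Equation 11.13: minimize the H2 norm while
# penalizing violation of the H-infinity constraint ||T_inf(s)||_inf < 1.
# Both norm evaluators below are hypothetical stand-ins for closed-loop
# norm computations at the candidate gains (kP, kI).
PENALTY = 1e3

def h2_norm(gains):        # placeholder for ||T2(s)||_2
    kP, kI = gains
    return (kP - 0.5) ** 2 + (kI - 0.2) ** 2 + 1.0

def hinf_norm(gains):      # placeholder for ||T_inf(s)||_inf
    kP, kI = gains
    return 0.5 + abs(kP - 0.5)

def fitness(gains):
    g2, ginf = h2_norm(gains), hinf_norm(gains)
    return g2 + (PENALTY * (ginf - 1.0) if ginf >= 1.0 else 0.0)

# a feasible point beats a constraint-violating one by a wide margin
print(fitness([0.5, 0.2]) < fitness([2.0, 0.2]))
```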

TABLE 11.1
PI Parameters and Optimal Performance Index


The proposed control technique is applied to the three-control area power system given in Section 2.4. In the proposed approach, the GA is employed as an optimization engine to produce the PI controllers in the supplementary frequency control loops with performance indices near the optimal ones. The obtained control parameters and performance indices are shown in Table 11.1. The indices are comparable to the results given by the proposed ILMI algorithm. For the problem at hand, the guaranteed optimal H2 performance indices (γ2opt) for areas 1, 2, and 3 are calculated as 1.070, 1.03, and 1.031, respectively.

Figure 11.9 shows the closed-loop response (frequency deviation, area control error, and control action signals) for areas 1 and 3, in the presence of simultaneous 0.1 pu step load disturbances and a 20% decrease in the inertia constant and damping coefficient (as uncertainties) in all areas. The performance of the closed-loop system using GA-based H2/H∞ PI controllers is also compared with that of the ILMI-based H2/H∞ PI control design. Simulation results demonstrate that the proposed GA-based PI controllers track the load fluctuations and provide robustness for a wide range of load disturbances, as well as the ILMI-based PI controllers do.

 

 

11.5 GA in Learning Process

GAs belong to a class of adaptive general-purpose methods for machine learning, as well as optimization, based on the principles of population genetics and natural evolution. A GA learns by evaluating its knowledge structures using the fitness function, and forms new ones to replace the previous generation by breeding from the more successful individuals in the population using the crossover and mutation operators.


FIGURE 11.9
Closed-loop system response: (a) area 1 and (b) area 3 (solid, GA; dotted, ILMI).

In Section 7.3, the application of GA to find suitable state and action variables in the reinforcement learning (RL) process for a multiagent RL-based AGC design was described. In the present section, as another example, it is shown that the GA can also be effectively used to provide suitable training data in a multiagent Bayesian-network (BN)-based AGC (see Chapter 8).

11.5.1 GA for Finding Training Data in a BN-Based AGC Design

As mentioned in Chapter 8, the basic structure of a graphical model is built based on prior knowledge of the system. Here, for simplicity, assume that frequency deviation and tie-line power change are the most important AGC variables, given their minimal dependency on the model parameters and their maximum effect on the system frequency. In this case, the graphical model for the BN-based AGC scheme can be considered as shown in Figure 11.10, and the posterior probability to be found is p(ΔP_C | ΔP_tie, Δf).

After determining the most worthwhile subset of observations (ΔP_tie, Δf), in the next phase of BN construction a directed acyclic graph that encodes assertions of conditional independence is built. It includes the problem's random variables, conditional probability distribution nodes, and dependency nodes.

As mentioned in Chapter 8, in the next step of BN construction, i.e., parameter learning, the local conditional probability distributions p(x_i | pa_i) are computed from the training data. According to the graphical model (Figure 11.10), the probability and conditional probability distributions for this problem are p(Δf), p(ΔP_tie), and p(ΔP_C | Δf, ΔP_tie). To calculate these probabilities, suitable training data are needed. In the learning phase, to find the conditional probabilities related to the graphical model variables, the training data can be used in a proper software environment.34


FIGURE 11.10
The graphical model of area i.

Here, GA is applied to obtain a set of training data (ΔP_tie, Δf, ΔP_C) as follows: in an offline procedure, a simulation is run with an initial ΔP_C vector provided by the GA for a specific operating condition. Then, the appropriate ΔP_C is evaluated based on the calculated ACE signal. Each GA individual ΔP_C is a double vector (population type) with n_v variables in the range [0, 1] (the number of variables can be considered the same as the number of simulation seconds). In the simulation stage, the vector's elements should be scaled to a valid ΔP_C change for that area: [ΔP_C,min, ΔP_C,max]. Here, ΔP_C,max is the maximum possible control action change for an AGC cycle, and similarly, ΔP_C,min is the minimum possible change that can be applied to the governor set point.
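The scaling step described above — mapping a GA individual with variables in [0, 1] to a valid ΔP_C profile — can be sketched as follows. The limits and vector length are illustrative, not values taken from the chapter.

```python
# Scale a GA individual (nv variables in [0, 1], one per simulation second)
# to a valid control action profile in [dpc_min, dpc_max] for one area.
def scale_individual(ind, dpc_min, dpc_max):
    return [dpc_min + x * (dpc_max - dpc_min) for x in ind]

individual = [0.0, 0.25, 0.5, 1.0]                # raw GA variables in [0, 1]
dpc = scale_individual(individual, -0.05, 0.05)   # pu; illustrative limits
print(dpc)
```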

11.5.2 Application Example

Consider the three-control area power system (used in the previous section) again. Here, the initial population size is fixed at thirty individuals, and the algorithm is run for one hundred generations. Figure 11.11 shows the results of running the proposed GA for area 1. To examine each individual's eligibility (fitness), the ΔP_C values should be scaled according to the range specified for the control area. After scaling and finding the corresponding ΔP_C, the simulation is run for a given load disturbance (a signal with one hundred instances) and the scaled ΔP_C (over 100 s). The individual's fitness reflects how close the resulting ACE signal instances remain to zero, on average, after 100 s of simulation. Finally, the individuals with the highest fitness are the best ones, and the resulting set of (ΔP_tie, Δf, ΔP_C) provides a row of the training data matrix.


FIGURE 11.11
Result of running GA for area 1.

As explained in Chapter 8, this large training data matrix is partly complete, and it can be used for parameter learning in the power system under other disturbance scenarios and operating conditions. Here, the Bayesian networks toolbox (BNT)34 is used as suitable software for simulation purposes. After the training set is provided, the training data for each area are separately supplied to the BNT. The BNT uses the data in the parameter learning phase for each control area and determines the associated prior and conditional probability distributions p(Δf) and p(ΔP_tie).

After completing the learning phase, we are ready to run the AGC system online, and the proposed model uses the inference phase to find an appropriate control action signal (ΔP_C) for each control area as follows: at each simulation time step, the corresponding control agents get the input parameters (ΔP_tie, Δf) of the model and digitize them for the BNT (the BNT does not work with continuous values). The BNT finds the posterior probability distribution p(ΔP_C | ΔP_tie, Δf) for each area; the control agent then finds the maximum posterior probability from the returned set and provides the most probable evidence ΔP_C.

The response of the closed-loop system (for areas 2 and 3) in the presence of the disturbance scenario given in Equation 11.6 is shown in Figure 11.12. The performance of the proposed GA-based multiagent BN scheme is compared with that of the robust PI control design presented in Bevrani et al.31

 

 

11.6 Summary

The GAs are emerging as powerful alternatives to the traditional optimization methods. As GAs are inherently adaptive, they can effectively converge to near optimal solutions in many applications, and therefore they have been used to solve a number of complex problems over the years. A GA performs the task of optimization by starting with a random population of values, and producing new generations of improved values that combine the values with best fitness from previous populations. GAs can efficiently handle highly nonlinear and noisy cost functions, and therefore they can be considered powerful optimization tools for real-world complex dynamic systems.

This chapter started with an introduction on GA algorithms and their applications in control systems. Then, several methodologies were presented for the GA-based AGC design problem. GAs are successfully used for the AGC system with different strategies, in the form of tuning of controller parameters, solving of multiobjective optimization problems, tracking of robust performance indices, and improving learning algorithms. The proposed design methodologies are illustrated by suitable examples. In most cases, the results are compared with those of recent robust control designs.


FIGURE 11.12
System response: (a) area 2 and (b) area 3 (solid line, proposed method; dotted line, robust PI controller).

 

 

References

1. D. Rerkpreedapong, A. Hasanovic, A. Feliachi. 2003. Robust load frequency control using genetic algorithms and linear matrix inequalities. IEEE Trans. Power Syst. 18(2):855–61.

2. Y. L. Abdel-Magid, M. M. Dawoud. 1996. Optimal AGC tuning with genetic algorithms. Elect. Power Syst. Res. 38(3):231–38.

3. A. Abdennour. 2002. Adaptive optimal gain scheduling for the load frequency control problem. Elect. Power Components Syst. 30(1):45–56.

4. S. K. Aditya, D. Das. 2003. Design of load frequency controllers using genetic algorithm for two area interconnected hydro power system. Elect. Power Components Syst. 31(1):81–94.

5. C. S. Chang, W. Fu, F. Wen. 1998. Load frequency controller using genetic algorithm based fuzzy gain scheduling of PI controller. Elect. Machines Power Syst. 26:39–52.

6. Z. M. Al-Hamouz, H. N. Al-Duwaish. 2000. A new load frequency variable structure controller using genetic algorithm. Elect. Power Syst. Res. 55:1–6.

7. A. Huddar, P. S. Kulkarni. 2008. A robust method of tuning the feedback gains of a variable structure load frequency controller using genetic algorithm optimization. Elect. Power Components Syst. 36:1351–68.

8. P. Bhatt, R. Roy, S. P. Ghoshal. 2010. Optimized multi area AGC simulation in restructured power systems. Elect. Power Energy Syst. 32:311–22.

9. A. Demiroren, S. Kent, T. Gunel. 2002. A genetic approach to the optimization of automatic generation control parameters for power systems. Eur. Trans. Elect. Power 12(4):275–81.

10. P. Bhatt, R. Roy, S. P. Ghoshal. 2010. GA/particle swarm intelligence based optimization of two specific varieties of controller devices applied to two-area multi-units automatic generation control. Elect. Power Energy Syst. 32:299–310.

11. K. Vrdoljak, N. Peric, I. Petrovic. 2010. Sliding mode based load-frequency control in power systems. Elect. Power Syst. Res. 80:514–27.

12. F. Daneshfar, H. Bevrani. 2010. Load-frequency control: A GA-based multi-agent reinforcement learning. IET Gener. Transm. Distrib. 4(1):13–26.

13. F. Daneshfar. 2009. Automatic generation control using multi-agent systems. MSc dissertation, Department of Electrical and Computer Engineering, University of Kurdistan, Sanandaj, Iran.

14. H. Golpira, H. Bevrani. 2010. Application of GA optimization for automatic generation control in realistic interconnected power systems. In Proceedings of International Conference on Modeling, Identification and Control, Okayama, Japan, CD-ROM.

15. H. Bevrani, T. Hiyama. 2007. Multiobjective PI/PID control design using an iterative linear matrix inequalities algorithm. Int. J. Control Automation Syst. 5(2):117–127.

16. D. E. Goldberg. 1989. Genetic algorithms in search, optimization and machine learning. Reading, MA: Addison-Wesley.

17. L. Davis. 1991. Handbook of genetic algorithms. New York: Van Nostrand.

18. K. F. Man, K. S. Tang. 1997. Genetic algorithms for control and signal processing. In Proceedings of IEEE International Conference on Industrial Electronics, Control and Instrumentation-IECON, New Orleans, LA, USA, vol. 4, pp. 1541–55.

19. J. Hu, E. Goodman. 2005. Topological synthesis of robust dynamic systems by sustainable genetic programming. In Genetic programming theory and practice II, ed. U. M. O'Reilly, T. Yu, R. Riolo, B. Worzel, 143–58. New York: Springer.

20. B. Forouraghi. 2000. A genetic algorithm for multiobjective robust design. Appl. Intelligence 12:151–61.

21. M. O. Odetayo, D. Dasgupta. 1995. Controlling a dynamic physical system using genetic-based learning methods. In Practical handbook of genetic algorithms—New frontiers, ed. L. Chambers. Vol. II. Boca Raton, FL: CRC Press.

22. H. Bevrani. 2009. Real power compensation and frequency control. In Robust power system frequency control, pp. 15–38. New York: Springer.

23. J. L. Cohon. 1978. Multiobjective programming and planning. New York: Academic Press.

24. A. Osyczka. 1985. Multicriteria optimization for engineering design. In Design optimization, ed. J. S. Gero, 193–227. New York: Academic Press.

25. J. D. Schaffer. 1984. Some experiments in machine learning using vector evaluated genetic algorithms. Unpublished doctoral dissertation, Vanderbilt University, Nashville, TN.

26. N. Srinivas, K. Deb. 1995. Multiobjective optimization using non-dominated sorting in genetic algorithm. Evol. Comput. 2(3):221–48.

27. H. Tamaki, H. Kita, S. Kobayashi. 1996. Multi-objective optimization by genetic algorithms: A review. In Proceedings of 1996 IEEE International Conference on Evolutionary Computation, Nagoya, Japan, pp. 517–22.

28. C. Poloni, et al. 1995. Hybrid GA for multi objective aerodynamic shape optimization. In Genetic algorithms in engineering and computer science, ed. C. Winter, et al., 397–416. Chichester, UK: Wiley.

29. C. M. Fonseca, P. J. Fleming. 1995. Multiobjective optimization and multiple constraint handling with evolutionary algorithms. Part I. A unified formulation. IEEE Trans. Syst. Man Cyber. A 28(1):26–37.

30. J. F. Whidborne, R. S. H. Istepanian. 2001. Genetic algorithm approach to designing finite-precision controller structures. IEE Proc. Control Theory Appl. 148(5):377–82.

31. H. Bevrani, Y. Mitani, K. Tsuji. 2004. Robust decentralised load-frequency control using an iterative linear matrix inequalities algorithm. IEE Proc. Gener. Transm. Distrib. 151(3):347–54.

32. H. Bevrani. 2009. Multi-objective control-based frequency regulation. In Robust power system frequency control, 103–22. New York: Springer.

33. P. Gahinet, A. Nemirovski, A. J. Laub, M. Chilali. 1995. LMI control toolbox. Natick, MA: The MathWorks.

34. K. Murphy. 2001. The Bayes net toolbox for Matlab. Comput. Sci. Stat. 33: 1–20.
