5  Computational Methods for Electric Power Systems

Mariesa L. Crow

Missouri University of Science and Technology

5.1    Power Flow

Admittance Matrix • Newton–Raphson Method

5.2    Optimal Power Flow

Steepest Descent Algorithm • Limitations on Independent Variables • Limitations on Dependent Variables

5.3    State Estimation

Electric power systems are among the largest human-made systems ever built. As with many physical systems, engineers, scientists, economists, and many others strive to understand and predict their complex behavior through mathematical models. The sheer size of the bulk power transmission system forced early power engineers to be among the first to develop computational approaches to solving the equations that describe it. Today's power system planners and operators rely heavily on computational tools to assist them in maintaining a reliable and secure operating environment.

The various computational algorithms were developed around the requirements of power system operation. The primary algorithms in use today are the power flow (also known as load flow), optimal power flow (OPF), and state estimation. These are all "steady-state" algorithms and are built up from the same basic approach to solving the nonlinear power balance equations. These particular algorithms do not explicitly consider the effects of time-varying dynamics on the system. The fields of transient and dynamic stability require a large number of algorithms that are significantly different from the power-flow-based algorithms and are therefore not covered in this chapter.

5.1    Power Flow

The underlying principle of a power flow problem is that, given the system loads, generation, and network configuration, the system bus voltages and line flows can be found by solving the nonlinear power flow equations. This is typically accomplished by applying Kirchhoff's law at each power system bus throughout the system. In this context, Kirchhoff's law can be interpreted as the sum of the powers entering a bus must be zero, or that the power at each bus must be conserved. Since power comprises two components, active power and reactive power, each bus gives rise to two equations: one for active power and one for reactive power. These equations are known as the power flow equations:

0 = \Delta P_i = P_i^{\text{inj}} - V_i \sum_{j=1}^{N} V_j Y_{ij} \cos(\theta_i - \theta_j - \phi_{ij})

(5.1)

0 = \Delta Q_i = Q_i^{\text{inj}} - V_i \sum_{j=1}^{N} V_j Y_{ij} \sin(\theta_i - \theta_j - \phi_{ij}), \quad i = 1, \ldots, N

(5.2)

where P_i^inj and Q_i^inj are the active power and reactive power injected at bus i, respectively. Loads are modeled by negative power injection. The values V_i and V_j are the voltage magnitudes at bus i and bus j, respectively. The values θ_i and θ_j are the corresponding phase angles. The value Y_ij∠ϕ_ij is the (i, j)th element of the network admittance matrix Y_bus. The constant N is the number of buses in the system. The quantities ΔP_i and ΔQ_i of Equations 5.1 and 5.2 are called the mismatch equations because they give a measure of the power difference, or mismatch, between the calculated power values, as functions of voltage and phase angle, and the actual injected powers.

The formulation in Equations 5.1 and 5.2 is called the polar formulation of the power flow equations. If Yij∠ϕij is instead given by the complex sum gij + jbij, then the power flow equations may be written in rectangular form as

0 = \Delta P_i = P_i^{\text{inj}} - V_i \sum_{j=1}^{N} V_j \left( g_{ij} \cos(\theta_i - \theta_j) + b_{ij} \sin(\theta_i - \theta_j) \right)

(5.3)

0 = \Delta Q_i = Q_i^{\text{inj}} - V_i \sum_{j=1}^{N} V_j \left( g_{ij} \sin(\theta_i - \theta_j) - b_{ij} \cos(\theta_i - \theta_j) \right), \quad i = 1, \ldots, N

(5.4)

There are, at most, 2N equations to solve. This number is then further reduced by removing one power flow equation for each known voltage magnitude (at voltage-controlled buses and the swing bus) and one for the known swing bus angle. This reduction is necessary since the number of equations must equal the number of unknowns in a fully determined system. In either case, the power flow equations are a system of nonlinear equations. They are nonlinear in both the voltage and phase angle. Once the number and form of the nonlinear power flow equations have been determined, the Newton–Raphson method may be applied to numerically solve the power flow equations.
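
Because the mismatch equations recur throughout this chapter, it is worth seeing how compactly they can be evaluated. The following is a minimal sketch, assuming NumPy and a complex Y_bus matrix; it uses the identity that the complex injection S_i = V̂_i (Y V̂)_i* reproduces the cosine and sine sums of Equations 5.1 and 5.2:

```python
import numpy as np

def power_flow_mismatch(V, theta, Ybus, P_inj, Q_inj):
    """Evaluate the polar mismatch Equations 5.1 and 5.2 at all buses.

    V, theta : per-unit voltage magnitudes and angles (radians)
    Ybus     : N x N complex admittance matrix (Y_ij = Y_ij /_ phi_ij)
    P_inj, Q_inj : net injected powers (loads enter as negative injections)
    Both returned vectors are zero at a solved power flow.
    """
    Vph = V * np.exp(1j * theta)          # complex bus voltages
    S_calc = Vph * np.conj(Ybus @ Vph)    # calculated complex injections
    dP = P_inj - S_calc.real              # active power mismatch
    dQ = Q_inj - S_calc.imag              # reactive power mismatch
    return dP, dQ
```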

5.1.1    Admittance Matrix

The first step in solving the power flow equations is to obtain the admittance matrix Y for the system. The admittance matrix of a passive network (a network containing only resistors, capacitors, and inductors) may be found by summing the currents at every node in the system. An arbitrary bus i in the transmission network is shown in Figure 5.1. The bus i can be connected to any number of other buses in the system through transmission lines. Each transmission line between buses i and j is represented by a π-circuit with series impedance R_ij + jX_ij, where R_ij is the per unit resistance of the transmission line. The per unit line reactance is X_ij = ω_s L_ij, where ω_s is the system base frequency and L_ij is the inductance of the line. In the π-circuit, the line-charging capacitance is represented by two lumped admittances jB_ij/2 placed at each end of the transmission line. Note that although each transmission-line parameter is in per unit and therefore a unitless quantity, the impedance is normalized as Ω/Ω whereas the admittance is normalized as Ω⁻¹/Ω⁻¹. The series impedances can also be represented by their equivalent admittance value where

y_{ij} \angle \phi_{ij} = \frac{1}{R_{ij} + jX_{ij}}

Summing the currents at node i yields

I_{i,\text{inj}} = I_{i1} + I_{i2} + \cdots + I_{iN} + I_{i10} + I_{i20} + \cdots + I_{iN0}

(5.5)

FIGURE 5.1  Bus i in a power transmission network.

= \frac{\hat{V}_i - \hat{V}_1}{R_{i1} + jX_{i1}} + \frac{\hat{V}_i - \hat{V}_2}{R_{i2} + jX_{i2}} + \cdots + \frac{\hat{V}_i - \hat{V}_N}{R_{iN} + jX_{iN}} + \hat{V}_i \left( \frac{jB_{i1}}{2} + \frac{jB_{i2}}{2} + \cdots + \frac{jB_{iN}}{2} \right)

(5.6)

= (\hat{V}_i - \hat{V}_1)\, y_{i1} \angle \phi_{i1} + (\hat{V}_i - \hat{V}_2)\, y_{i2} \angle \phi_{i2} + \cdots + (\hat{V}_i - \hat{V}_N)\, y_{iN} \angle \phi_{iN} + \hat{V}_i \left( \frac{jB_{i1}}{2} + \frac{jB_{i2}}{2} + \cdots + \frac{jB_{iN}}{2} \right)

(5.7)

where \hat{V}_i = V_i \angle \theta_i.

Gathering the voltages yields

I_{i,\text{inj}} = -\hat{V}_1\, y_{i1} \angle \phi_{i1} - \hat{V}_2\, y_{i2} \angle \phi_{i2} - \cdots + \hat{V}_i \left( y_{i1} \angle \phi_{i1} + y_{i2} \angle \phi_{i2} + \cdots + y_{iN} \angle \phi_{iN} + \frac{jB_{i1}}{2} + \frac{jB_{i2}}{2} + \cdots + \frac{jB_{iN}}{2} \right) - \cdots - \hat{V}_N\, y_{iN} \angle \phi_{iN}

(5.8)

By noting that there are N buses in the system, there are N equations similar to Equation 5.8. These can be represented in matrix form

\begin{bmatrix} I_{1,\text{inj}} \\ I_{2,\text{inj}} \\ \vdots \\ I_{i,\text{inj}} \\ \vdots \\ I_{N,\text{inj}} \end{bmatrix} = \begin{bmatrix} Y_{11} & -y_{12}\angle\phi_{12} & \cdots & -y_{1i}\angle\phi_{1i} & \cdots & -y_{1N}\angle\phi_{1N} \\ -y_{21}\angle\phi_{21} & Y_{22} & \cdots & -y_{2i}\angle\phi_{2i} & \cdots & -y_{2N}\angle\phi_{2N} \\ \vdots & & \ddots & & & \vdots \\ -y_{i1}\angle\phi_{i1} & -y_{i2}\angle\phi_{i2} & \cdots & Y_{ii} & \cdots & -y_{iN}\angle\phi_{iN} \\ \vdots & & & & \ddots & \vdots \\ -y_{N1}\angle\phi_{N1} & -y_{N2}\angle\phi_{N2} & \cdots & -y_{Ni}\angle\phi_{Ni} & \cdots & Y_{NN} \end{bmatrix} \begin{bmatrix} \hat{V}_1 \\ \hat{V}_2 \\ \vdots \\ \hat{V}_i \\ \vdots \\ \hat{V}_N \end{bmatrix}

(5.9)

where

Y_{ii} = y_{i1}\angle\phi_{i1} + y_{i2}\angle\phi_{i2} + \cdots + y_{iN}\angle\phi_{iN} + \frac{jB_{i1}}{2} + \frac{jB_{i2}}{2} + \cdots + \frac{jB_{iN}}{2}

(5.10)

The matrix relating the injected currents vector to the bus voltage vector is known as the system admittance matrix and is commonly represented as Y. A simple procedure for calculating the elements of the admittance matrix is

 

•  Y(i, j) = negative of the admittance between buses i and j

•  Y(i, i) = sum of all admittances connected to bus i

noting that the line-charging values are shunt admittances and are included in the diagonal elements.

Example 5.1

Find the admittance matrix for the line data given by

 

i   j   Rij     Xij    Bij
1   2   0.027   0.32   0.15
1   5   0.014   0.18   0.10
2   3   0.012   0.13   0.12
3   4   0.025   0.25   0.00
3   5   0.017   0.20   0.08

Solution

The first step in calculating the admittance matrix is to calculate the off-diagonal elements. Note that in a passive network, the admittance matrix is symmetric and Y(i, j) = Y(j, i).

The off-diagonal elements are calculated as the negative of the series admittance of each line. Therefore

Y(1,2) = Y(2,1) = -\frac{1}{0.027 + j0.32} = 3.1139 \angle 94.82^\circ
Y(1,5) = Y(5,1) = -\frac{1}{0.014 + j0.18} = 5.5388 \angle 94.45^\circ
Y(2,3) = Y(3,2) = -\frac{1}{0.012 + j0.13} = 7.6597 \angle 95.27^\circ
Y(3,4) = Y(4,3) = -\frac{1}{0.025 + j0.25} = 3.9801 \angle 95.71^\circ
Y(3,5) = Y(5,3) = -\frac{1}{0.017 + j0.20} = 4.9820 \angle 94.86^\circ

The diagonal elements are calculated as the sum of all admittances connected to each bus.

Therefore

Y(1,1) = \frac{1}{0.027 + j0.32} + \frac{1}{0.014 + j0.18} + \frac{j0.15}{2} + \frac{j0.10}{2} = 8.5281 \angle -85.35^\circ
Y(2,2) = \frac{1}{0.027 + j0.32} + \frac{1}{0.012 + j0.13} + \frac{j0.15}{2} + \frac{j0.12}{2} = 10.6392 \angle -84.79^\circ
Y(3,3) = \frac{1}{0.012 + j0.13} + \frac{1}{0.025 + j0.25} + \frac{1}{0.017 + j0.20} + \frac{j0.12}{2} + \frac{j0.08}{2} = 16.5221 \angle -84.71^\circ
Y(4,4) = \frac{1}{0.025 + j0.25} = 3.9801 \angle -84.29^\circ
Y(5,5) = \frac{1}{0.014 + j0.18} + \frac{1}{0.017 + j0.20} + \frac{j0.10}{2} + \frac{j0.08}{2} = 10.4311 \angle -85.32^\circ
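
The two-rule procedure above translates directly into code. A minimal sketch, assuming NumPy, that builds the admittance matrix from the line data of Example 5.1 and reproduces the values just computed:

```python
import numpy as np

# Line data from Example 5.1: (i, j, Rij, Xij, Bij), buses numbered from 1.
lines = [(1, 2, 0.027, 0.32, 0.15),
         (1, 5, 0.014, 0.18, 0.10),
         (2, 3, 0.012, 0.13, 0.12),
         (3, 4, 0.025, 0.25, 0.00),
         (3, 5, 0.017, 0.20, 0.08)]

N = 5
Y = np.zeros((N, N), dtype=complex)
for i, j, R, X, B in lines:
    i, j = i - 1, j - 1                  # shift to 0-indexing
    y_series = 1.0 / (R + 1j * X)        # series admittance of the pi-circuit
    Y[i, j] -= y_series                  # off-diagonal: negative of line admittance
    Y[j, i] -= y_series
    Y[i, i] += y_series + 1j * B / 2     # diagonal: sum of connected admittances,
    Y[j, j] += y_series + 1j * B / 2     # including the line-charging shunts

# e.g., Y(1,2) -> 3.1139 at 94.82 deg and Y(1,1) -> 8.5281 at -85.35 deg:
print(round(abs(Y[0, 1]), 4), round(np.degrees(np.angle(Y[0, 1])), 2))
print(round(abs(Y[0, 0]), 4), round(np.degrees(np.angle(Y[0, 0])), 2))
```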

5.1.2    Newton–Raphson Method

The most common approach to solving the power flow equations is to use the iterative Newton–Raphson method. The Newton–Raphson method is an iterative approach to solving continuous nonlinear equations in the form

F(x) = \begin{bmatrix} f_1(x_1, x_2, \ldots, x_n) \\ f_2(x_1, x_2, \ldots, x_n) \\ \vdots \\ f_n(x_1, x_2, \ldots, x_n) \end{bmatrix} = 0

(5.11)

An iterative approach is one in which an initial guess (x0) to the solution is used to create a sequence x0, x1, x2,… that (hopefully) converges arbitrarily close to the desired solution vector x where F(x) = 0.

The Newton–Raphson method for n-dimensional systems is given as

x^{k+1} = x^k - \left[ J(x^k) \right]^{-1} F(x^k)

(5.12)

where

x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} \qquad F(x^k) = \begin{bmatrix} f_1(x^k) \\ f_2(x^k) \\ f_3(x^k) \\ \vdots \\ f_n(x^k) \end{bmatrix}

and the Jacobian matrix [J(xk)] is given by

\left[ J(x^k) \right] = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \frac{\partial f_1}{\partial x_3} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \frac{\partial f_2}{\partial x_3} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \frac{\partial f_3}{\partial x_1} & \frac{\partial f_3}{\partial x_2} & \frac{\partial f_3}{\partial x_3} & \cdots & \frac{\partial f_3}{\partial x_n} \\ \vdots & & & & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \frac{\partial f_n}{\partial x_3} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}

Typically, the inverse of the Jacobian matrix [J(x^k)] is not formed explicitly; instead, the linear system is solved at each iteration by LU factorization. Convergence is typically evaluated by considering the norm of the function

\left\| F(x^k) \right\| < \varepsilon

(5.13)

Note that the Jacobian is a function of xk and is therefore updated every iteration along with F(xk).
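
A minimal sketch of the general iteration of Equation 5.12, assuming NumPy; numpy.linalg.solve performs the LU-based solve, so the inverse Jacobian is never formed explicitly. The two-equation demonstration system at the bottom is illustrative only:

```python
import numpy as np

def newton_raphson(F, J, x0, eps=1e-6, max_iter=20):
    """Generic n-dimensional Newton-Raphson iteration (Equation 5.12).

    F : callable returning the residual vector F(x)
    J : callable returning the Jacobian matrix [J(x)]
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < eps:       # convergence test of Equation 5.13
            break
        x = x - np.linalg.solve(J(x), Fx)  # LU solve instead of matrix inverse
    return x

# Demonstration: solve x0^2 + x1^2 = 4 and x0*x1 = 1 simultaneously.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])
print(newton_raphson(F, J, [2.0, 0.5]))
```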

In this formulation, the vector F(x) is the set of power flow equations and the unknown x is the vector of voltage magnitudes and angles. It is common to arrange the Newton–Raphson equations by phase angle followed by the voltage magnitudes as

\begin{bmatrix} J_1 & J_2 \\ J_3 & J_4 \end{bmatrix} \begin{bmatrix} \Delta\delta_1 \\ \Delta\delta_2 \\ \Delta\delta_3 \\ \vdots \\ \Delta\delta_N \\ \Delta V_1 \\ \Delta V_2 \\ \Delta V_3 \\ \vdots \\ \Delta V_N \end{bmatrix} = -\begin{bmatrix} \Delta P_1 \\ \Delta P_2 \\ \Delta P_3 \\ \vdots \\ \Delta P_N \\ \Delta Q_1 \\ \Delta Q_2 \\ \Delta Q_3 \\ \vdots \\ \Delta Q_N \end{bmatrix}

(5.14)

where

\Delta\delta_i = \delta_i^{k+1} - \delta_i^k \qquad \Delta V_i = V_i^{k+1} - V_i^k

and ΔPi and ΔQi are as given in Equations 5.1 and 5.2 and are evaluated at δk and Vk. The Jacobian is typically divided into four submatrices, where

\begin{bmatrix} J_1 & J_2 \\ J_3 & J_4 \end{bmatrix} = \begin{bmatrix} \frac{\partial \Delta P}{\partial \delta} & \frac{\partial \Delta P}{\partial V} \\ \frac{\partial \Delta Q}{\partial \delta} & \frac{\partial \Delta Q}{\partial V} \end{bmatrix}

(5.15)

Each submatrix represents the partial derivatives of each of the mismatch equations with respect to each of the unknowns. These partial derivatives yield eight types—two for each mismatch equation, where one is for the diagonal element and the other is for off-diagonal elements. The derivatives are summarized as

\frac{\partial \Delta P_i}{\partial \delta_i} = V_i \sum_{j=1}^{N} V_j Y_{ij} \sin(\delta_i - \delta_j - \phi_{ij}) + V_i^2 Y_{ii} \sin \phi_{ii}

(5.16)

\frac{\partial \Delta P_i}{\partial \delta_j} = -V_i V_j Y_{ij} \sin(\delta_i - \delta_j - \phi_{ij})

(5.17)

\frac{\partial \Delta P_i}{\partial V_i} = -\sum_{j=1}^{N} V_j Y_{ij} \cos(\delta_i - \delta_j - \phi_{ij}) - V_i Y_{ii} \cos \phi_{ii}

(5.18)

\frac{\partial \Delta P_i}{\partial V_j} = -V_i Y_{ij} \cos(\delta_i - \delta_j - \phi_{ij})

(5.19)

\frac{\partial \Delta Q_i}{\partial \delta_i} = -V_i \sum_{j=1}^{N} V_j Y_{ij} \cos(\delta_i - \delta_j - \phi_{ij}) + V_i^2 Y_{ii} \cos \phi_{ii}

(5.20)

\frac{\partial \Delta Q_i}{\partial \delta_j} = V_i V_j Y_{ij} \cos(\delta_i - \delta_j - \phi_{ij})

(5.21)

\frac{\partial \Delta Q_i}{\partial V_i} = -\sum_{j=1}^{N} V_j Y_{ij} \sin(\delta_i - \delta_j - \phi_{ij}) + V_i Y_{ii} \sin \phi_{ii}

(5.22)

\frac{\partial \Delta Q_i}{\partial V_j} = -V_i Y_{ij} \sin(\delta_i - \delta_j - \phi_{ij})

(5.23)

A common modification to the power flow solution is to replace the unknown update ΔVi by the normalized value ΔVi/Vi. This formulation yields a more symmetric Jacobian matrix as the Jacobian submatrices J2 and J4 are now multiplied by Vi to compensate for the scaling of ΔVi by Vi. All partial derivatives of each submatrix then become quadratic in voltage magnitude.

The Newton–Raphson method for the solution of the power flow equations is relatively straightforward to program since both the function evaluations and the partial derivatives use the same expressions. Thus it takes little extra computational effort to compute the Jacobian once the mismatch equations have been calculated.

Example 5.2

Find the voltage magnitudes, phase angles, and line flows for the small power system shown in Figure 5.2 with the following system parameters in per unit:

FIGURE 5.2  Example power system.

i   j   Rij    Xij   Bij
1   2   0.02   0.3   0.15
1   3   0.01   0.1   0.1
2   3   0.01   0.1   0.1

Solution

Calculating the admittance matrix for this system yields

Y_{\text{bus}} = \begin{bmatrix} 13.1505 \angle -84.7148^\circ & 3.3260 \angle 93.8141^\circ & 9.9504 \angle 95.7106^\circ \\ 3.3260 \angle 93.8141^\circ & 13.1505 \angle -84.7148^\circ & 9.9504 \angle 95.7106^\circ \\ 9.9504 \angle 95.7106^\circ & 9.9504 \angle 95.7106^\circ & 19.8012 \angle -84.2606^\circ \end{bmatrix}

(5.24)

By inspection, this system has three unknowns: δ2, δ3, and V3; thus, three power flow equations are required. These power flow equations are

0 = \Delta P_2 = 0.5 - V_2 \sum_{j=1}^{3} V_j Y_{2j} \cos(\delta_2 - \delta_j - \phi_{2j})

(5.25)

0 = \Delta P_3 = -1.2 - V_3 \sum_{j=1}^{3} V_j Y_{3j} \cos(\delta_3 - \delta_j - \phi_{3j})

(5.26)

0 = \Delta Q_3 = -0.5 - V_3 \sum_{j=1}^{3} V_j Y_{3j} \sin(\delta_3 - \delta_j - \phi_{3j})

(5.27)

Substituting the known quantities for V1 = 1.02, V2 = 1.00, and δ1 = 0 and the admittance matrix quantities yields

\Delta P_2 = 0.5 - (1.00)\left[ (1.02)(3.3260)\cos(\delta_2 - 93.8141^\circ) + (1.00)(13.1505)\cos(\delta_2 - \delta_2 + 84.7148^\circ) + (V_3)(9.9504)\cos(\delta_2 - \delta_3 - 95.7106^\circ) \right]

(5.28)

\Delta P_3 = -1.2 - (V_3)\left[ (1.02)(9.9504)\cos(\delta_3 - 95.7106^\circ) + (1.00)(9.9504)\cos(\delta_3 - \delta_2 - 95.7106^\circ) + (V_3)(19.8012)\cos(\delta_3 - \delta_3 + 84.2606^\circ) \right]

(5.29)

\Delta Q_3 = -0.5 - (V_3)\left[ (1.02)(9.9504)\sin(\delta_3 - 95.7106^\circ) + (1.00)(9.9504)\sin(\delta_3 - \delta_2 - 95.7106^\circ) + (V_3)(19.8012)\sin(\delta_3 - \delta_3 + 84.2606^\circ) \right]

(5.30)

The Newton–Raphson iteration for this system is then given by

\begin{bmatrix} \frac{\partial \Delta P_2}{\partial \delta_2} & \frac{\partial \Delta P_2}{\partial \delta_3} & \frac{\partial \Delta P_2}{\partial V_3} \\ \frac{\partial \Delta P_3}{\partial \delta_2} & \frac{\partial \Delta P_3}{\partial \delta_3} & \frac{\partial \Delta P_3}{\partial V_3} \\ \frac{\partial \Delta Q_3}{\partial \delta_2} & \frac{\partial \Delta Q_3}{\partial \delta_3} & \frac{\partial \Delta Q_3}{\partial V_3} \end{bmatrix} \begin{bmatrix} \Delta\delta_2 \\ \Delta\delta_3 \\ \Delta V_3 \end{bmatrix} = -\begin{bmatrix} \Delta P_2 \\ \Delta P_3 \\ \Delta Q_3 \end{bmatrix}

(5.31)

where

\frac{\partial \Delta P_2}{\partial \delta_2} = 3.3925 \sin(\delta_2 - 93.8141^\circ) + 9.9504 V_3 \sin(\delta_2 - \delta_3 - 95.7106^\circ)
\frac{\partial \Delta P_2}{\partial \delta_3} = -9.9504 V_3 \sin(\delta_2 - \delta_3 - 95.7106^\circ)
\frac{\partial \Delta P_2}{\partial V_3} = -9.9504 \cos(\delta_2 - \delta_3 - 95.7106^\circ)
\frac{\partial \Delta P_3}{\partial \delta_2} = -9.9504 V_3 \sin(\delta_3 - \delta_2 - 95.7106^\circ)
\frac{\partial \Delta P_3}{\partial \delta_3} = 10.1494 V_3 \sin(\delta_3 - 95.7106^\circ) + 9.9504 V_3 \sin(\delta_3 - \delta_2 - 95.7106^\circ)
\frac{\partial \Delta P_3}{\partial V_3} = -10.1494 \cos(\delta_3 - 95.7106^\circ) - 9.9504 \cos(\delta_3 - \delta_2 - 95.7106^\circ) - 39.6024 V_3 \cos(84.2606^\circ)
\frac{\partial \Delta Q_3}{\partial \delta_2} = 9.9504 V_3 \cos(\delta_3 - \delta_2 - 95.7106^\circ)
\frac{\partial \Delta Q_3}{\partial \delta_3} = -10.1494 V_3 \cos(\delta_3 - 95.7106^\circ) - 9.9504 V_3 \cos(\delta_3 - \delta_2 - 95.7106^\circ)
\frac{\partial \Delta Q_3}{\partial V_3} = -10.1494 \sin(\delta_3 - 95.7106^\circ) - 9.9504 \sin(\delta_3 - \delta_2 - 95.7106^\circ) - 39.6024 V_3 \sin(84.2606^\circ)

One of the underlying assumptions of the Newton–Raphson iteration is that the higher order terms of the Taylor series expansion upon which the iteration is based are negligible only if the initial guess is sufficiently close to the actual solution to the nonlinear equations. Under most operating conditions, the voltages throughout the power system are within ±10% of the nominal voltage and therefore fall in the range 0.9 ≤ Vi ≤ 1.1 per unit. Similarly, under most operating conditions the phase angle differences between adjacent buses are typically small. Thus if the swing bus angle is taken to be zero, then all phase angles throughout the system will also be close to zero. Therefore in initializing a power flow, it is common to choose a “flat start” initial condition. That is, all voltage magnitudes are set to 1.0 per unit and all angles are set to zero.

Iteration 1

Evaluating the Jacobian and the mismatch equations at the flat start initial conditions yields

[J^0] = \begin{bmatrix} -13.2859 & 9.9010 & 0.9901 \\ 9.9010 & -20.0000 & -1.9604 \\ -0.9901 & 2.0000 & -19.4040 \end{bmatrix} \qquad \begin{bmatrix} \Delta P_2^0 \\ \Delta P_3^0 \\ \Delta Q_3^0 \end{bmatrix} = \begin{bmatrix} 0.5044 \\ -1.1802 \\ -0.2020 \end{bmatrix}

Solving

[J^0] \begin{bmatrix} \Delta\delta_2^1 \\ \Delta\delta_3^1 \\ \Delta V_3^1 \end{bmatrix} = -\begin{bmatrix} \Delta P_2^0 \\ \Delta P_3^0 \\ \Delta Q_3^0 \end{bmatrix}

by LU factorization yields

\begin{bmatrix} \Delta\delta_2^1 \\ \Delta\delta_3^1 \\ \Delta V_3^1 \end{bmatrix} = \begin{bmatrix} -0.0096 \\ -0.0621 \\ -0.0163 \end{bmatrix}

Therefore

\delta_2^1 = \delta_2^0 + \Delta\delta_2^1 = 0 - 0.0096 = -0.0096
\delta_3^1 = \delta_3^0 + \Delta\delta_3^1 = 0 - 0.0621 = -0.0621
V_3^1 = V_3^0 + \Delta V_3^1 = 1 - 0.0163 = 0.9837

Note that the angles are given in radians and not degrees. The error at the first iteration is the largest absolute value of the mismatch equations, which is

\varepsilon^1 = 1.1802

One quick check of this process is to note that the voltage update V_3^1 is slightly less than 1.0 per unit, which would be expected given the system configuration. Note also that the diagonal elements of the Jacobian are all equal to or greater in magnitude than the off-diagonal elements. This is because the diagonals are summations of terms, whereas the off-diagonal elements are single terms.

Iteration 2

Evaluating the Jacobian and the mismatch equations at the updated values δ12, δ13, and V13 yields

[J^1] = \begin{bmatrix} -13.1597 & 9.7771 & 0.4684 \\ 9.6747 & -19.5280 & -0.7515 \\ -1.4845 & 3.0929 & -18.9086 \end{bmatrix} \qquad \begin{bmatrix} \Delta P_2^1 \\ \Delta P_3^1 \\ \Delta Q_3^1 \end{bmatrix} = \begin{bmatrix} 0.0074 \\ -0.0232 \\ -0.0359 \end{bmatrix}

Solving for the update yields

\begin{bmatrix} \Delta\delta_2^2 \\ \Delta\delta_3^2 \\ \Delta V_3^2 \end{bmatrix} = \begin{bmatrix} -0.0005 \\ -0.0014 \\ -0.0021 \end{bmatrix}

and

\begin{bmatrix} \delta_2^2 \\ \delta_3^2 \\ V_3^2 \end{bmatrix} = \begin{bmatrix} -0.0101 \\ -0.0635 \\ 0.9816 \end{bmatrix}

where

\varepsilon^2 = 0.0359

Iteration 3

Evaluating the Jacobian and the mismatch equations at the updated values δ22, δ23, and V23 yields

[J^2] = \begin{bmatrix} -13.1392 & 9.7567 & 0.4600 \\ 9.6530 & -19.4831 & -0.7213 \\ -1.4894 & 3.1079 & -18.8300 \end{bmatrix} \qquad \begin{bmatrix} \Delta P_2^2 \\ \Delta P_3^2 \\ \Delta Q_3^2 \end{bmatrix} = \begin{bmatrix} 0.1717 \\ 0.5639 \\ 0.9084 \end{bmatrix} \times 10^{-4}

Solving for the update yields

\begin{bmatrix} \Delta\delta_2^3 \\ \Delta\delta_3^3 \\ \Delta V_3^3 \end{bmatrix} = \begin{bmatrix} 0.1396 \\ 0.3390 \\ 0.5273 \end{bmatrix} \times 10^{-5}

and

\begin{bmatrix} \delta_2^3 \\ \delta_3^3 \\ V_3^3 \end{bmatrix} = \begin{bmatrix} -0.0101 \\ -0.0635 \\ 0.9816 \end{bmatrix}

where

\varepsilon^3 = 0.9084 \times 10^{-4}

At this point, the iterations have converged since the mismatch is sufficiently small and the values are no longer changing significantly.
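
The converged values above can be checked numerically. A minimal sketch, assuming NumPy; for brevity it uses a finite-difference Jacobian in place of the analytic expressions of Equations 5.16 through 5.23:

```python
import numpy as np

# Ybus of Example 5.2 (Equation 5.24); bus 1 swing, bus 2 PV, bus 3 PQ.
Y = np.zeros((3, 3), dtype=complex)
for i, j, R, X, B in [(0, 1, 0.02, 0.3, 0.15), (0, 2, 0.01, 0.1, 0.1),
                      (1, 2, 0.01, 0.1, 0.1)]:
    y = 1.0 / (R + 1j * X)
    Y[i, j] -= y; Y[j, i] -= y
    Y[i, i] += y + 1j * B / 2; Y[j, j] += y + 1j * B / 2

def mismatch(x):
    d2, d3, V3 = x
    Vph = np.array([1.02, 1.00 * np.exp(1j * d2), V3 * np.exp(1j * d3)])
    S = Vph * np.conj(Y @ Vph)              # calculated injections
    return np.array([0.5 - S[1].real,       # Delta P2 (Equation 5.25)
                     -1.2 - S[2].real,      # Delta P3 (Equation 5.26)
                     -0.5 - S[2].imag])     # Delta Q3 (Equation 5.27)

x = np.array([0.0, 0.0, 1.0])               # flat start
for _ in range(5):
    F0, h = mismatch(x), 1e-6
    # Finite-difference Jacobian, one column per unknown:
    J = np.column_stack([(mismatch(x + h * e) - F0) / h for e in np.eye(3)])
    x = x - np.linalg.solve(J, F0)
print(np.round(x, 4))   # approximately [-0.0101, -0.0635, 0.9816]
```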

The last task in the power flow is to calculate the generated reactive powers, the swing bus active power output, and the line flows. The generated powers can be calculated directly from the power flow equations:

P_i^{\text{inj}} = V_i \sum_{j=1}^{N} V_j Y_{ij} \cos(\theta_i - \theta_j - \phi_{ij}) \qquad Q_i^{\text{inj}} = V_i \sum_{j=1}^{N} V_j Y_{ij} \sin(\theta_i - \theta_j - \phi_{ij})

Therefore

P_{\text{gen},1} = P_1^{\text{inj}} = 0.7087 \qquad Q_{\text{gen},1} = Q_1^{\text{inj}} = 0.2806 \qquad Q_{\text{gen},2} = Q_2^{\text{inj}} = -0.0446

The total active power losses in the system are the difference between the sum of the generation and the sum of the loads, in this case:

P_{\text{loss}} = \sum P_{\text{gen}} - \sum P_{\text{load}} = 0.7087 + 0.5 - 1.2 = 0.0087 \text{ pu}

(5.32)

The line flows for line ij are calculated at both the sending and receiving ends of the line. The apparent power leaving bus i to bus j on line ij is

S_{ij} = \left( V_i \angle \delta_i \right) \left( I_{ij} + \frac{jB_{ij}}{2} V_i \angle \delta_i \right)^{*}

(5.33)

= \left( V_i \angle \delta_i \right) \left( \frac{V_i \angle \delta_i - V_j \angle \delta_j}{R_{ij} + jX_{ij}} + \frac{jB_{ij}}{2} V_i \angle \delta_i \right)^{*}

(5.34)

and the power received at bus j from bus i on line ij is

S_{ji} = \left( V_j \angle \delta_j \right) \left( I_{ji} - \frac{jB_{ij}}{2} V_j \angle \delta_j \right)^{*}

(5.35)

Thus

P_{ij} = \text{Re}\{S_{ij}\} = V_i V_j Y_{ij} \cos(\delta_i - \delta_j - \phi_{ij}) - V_i^2 Y_{ij} \cos \phi_{ij}

(5.36)

Q_{ij} = \text{Im}\{S_{ij}\} = V_i V_j Y_{ij} \sin(\delta_i - \delta_j - \phi_{ij}) + V_i^2 \left( Y_{ij} \sin \phi_{ij} - \frac{B_{ij}}{2} \right)

(5.37)

Similarly, the powers Pji and Qji can be calculated. The active power loss on any given line is the difference between the active power sent from bus i and the active power received at bus j. Calculating the reactive power losses is more complex since the reactive power generated by the line-charging (shunt capacitances) must also be included.
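
A minimal sketch of the line-flow calculation of Equations 5.33 and 5.34, assuming NumPy; the function name and argument order are illustrative:

```python
import numpy as np

def line_flow(Vi, di, Vj, dj, R, X, B):
    """Complex power S_ij leaving bus i toward bus j on a pi-circuit line.

    Series branch current plus the sending-end line-charging shunt jB/2.
    Angles are in radians; all quantities are per unit.
    """
    Vhi = Vi * np.exp(1j * di)
    Vhj = Vj * np.exp(1j * dj)
    I_ij = (Vhi - Vhj) / (R + 1j * X) + (1j * B / 2) * Vhi
    return Vhi * np.conj(I_ij)

# Flows on line 2-3 of Example 5.2 at the converged power flow solution:
S23 = line_flow(1.00, -0.0101, 0.9816, -0.0635, 0.01, 0.1, 0.1)
S32 = line_flow(0.9816, -0.0635, 1.00, -0.0101, 0.01, 0.1, 0.1)
print(S23.real + S32.real)   # active power loss on the line
```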

5.2    Optimal Power Flow

The basic objective of the OPF is to find the values of the system state variables and/or parameters that minimize some cost function of the power system. The types of cost functions are system dependent and can vary widely from application to application and are not necessarily strictly measured in terms of dollars. Examples of engineering optimizations can range from minimizing

•  Active power losses

•  Particulate output (emissions)

•  System energy

•  Fuel costs of generation

to name a few possibilities. The basic formulation of the OPF can be represented as minimizing a defined cost function subject to any physical or operational constraints of the system:

minimize

f(x, u), \quad x \in \mathbb{R}^n, \; u \in \mathbb{R}^m

(5.38)

subject to

g(x, u) = 0 \quad \text{(equality constraints)}

(5.39)

h(x, u) \leq 0 \quad \text{(inequality constraints)}

(5.40)

where

x is the vector of system states

u is the vector of system parameters

The basic approach is to find the vector of system parameters that when substituted into the system model will result in the state vector x that minimizes the cost function f (x,u).

In an unconstrained system, the usual approach to minimizing the cost function is to set the function derivatives to zero and then solve for the system states from the set of resulting equations. In the majority of applications, however, the system states at the unconstrained minimum will not satisfy the constraint equations. Thus, an alternate approach is required to find the constrained minimum. One approach is to introduce an additional set of parameters λ, frequently known as Lagrange multipliers, to impose the constraints on the cost function. The augmented cost function then becomes

f(x, u) - \lambda\, g(x, u)

(5.41)

The augmented function in Equation 5.41 can then be minimized by solving for the set of states that result from setting the derivatives of the augmented function to zero. Note that the derivative of Equation 5.41 with respect to λ effectively enforces the equality constraint of Equation 5.39.

Example 5.3

Minimize

C: \; \frac{1}{2}\left( x^2 + y^2 \right)

(5.42)

subject to the following constraint:

2x - y = 5

Solution

Note that the function to be minimized is the equation for a circle. The unconstrained minimum of this function is the point at the origin with x = 0 and y = 0, which defines a circle with a radius of zero length. However, the circle must also intersect the line defined by the constraint equation; thus, the constrained circle must have a nonzero radius. The augmented cost function becomes

C: \; \frac{1}{2}\left( x^2 + y^2 \right) - \lambda (2x - y - 5)

(5.43)

where λ represents the Lagrange multiplier. Setting the derivatives of the augmented cost function to zero yields the following set of equations:

0 = \frac{\partial C}{\partial x} = x - 2\lambda
0 = \frac{\partial C}{\partial y} = y + \lambda
0 = \frac{\partial C}{\partial \lambda} = -(2x - y - 5)

Solving this set of equations yields [x y λ]T = [2 −1 1]T. The cost function of Equation 5.42 evaluated at the minimum of the augmented cost function is

C: \; \frac{1}{2}\left[ (2)^2 + (-1)^2 \right] = \frac{5}{2}
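
Because the stationarity conditions of this example are linear, they can be solved directly as a 3 × 3 linear system. A minimal sketch assuming NumPy:

```python
import numpy as np

# Stationarity conditions of the augmented cost of Example 5.3:
#   x - 2*lam = 0,  y + lam = 0,  2*x - y = 5
A = np.array([[1.0,  0.0, -2.0],
              [0.0,  1.0,  1.0],
              [2.0, -1.0,  0.0]])
b = np.array([0.0, 0.0, 5.0])
x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)               # 2.0 -1.0 1.0
print(0.5 * (x**2 + y**2))     # 2.5 = 5/2
```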

If there is more than one equality constraint (i.e., if g(x, u) of Equation 5.39 is a vector of functions) then λ becomes a vector of multipliers and the augmented cost function becomes

C: \; f(x, u) - [\lambda]^T g(x, u)

(5.44)

where the derivatives of C become

\left[ \frac{\partial C}{\partial \lambda} \right] = 0 = g(x, u)

(5.45)

\left[ \frac{\partial C}{\partial x} \right] = 0 = \left[ \frac{\partial f}{\partial x} \right] - \left[ \frac{\partial g}{\partial x} \right]^T [\lambda]

(5.46)

\left[ \frac{\partial C}{\partial u} \right] = 0 = \left[ \frac{\partial f}{\partial u} \right] - \left[ \frac{\partial g}{\partial u} \right]^T [\lambda]

(5.47)

Note that for any feasible solution, Equation 5.45 is satisfied, but the feasible solution may not be the optimal solution that minimizes the cost function. In this case, [λ] can be obtained from Equation 5.46 and then only

\left[ \frac{\partial C}{\partial u} \right] \neq 0

This vector can be used as a gradient vector [∇C], which is orthogonal to the contour of constant values of the cost function C. Thus,

[\lambda] = \left[ \left[ \frac{\partial g}{\partial x} \right]^T \right]^{-1} \left[ \frac{\partial f}{\partial x} \right]

(5.48)

which gives

\nabla C = \left[ \frac{\partial C}{\partial u} \right] = \left[ \frac{\partial f}{\partial u} \right] - \left[ \frac{\partial g}{\partial u} \right]^T [\lambda]

(5.49)

= \left[ \frac{\partial f}{\partial u} \right] - \left[ \frac{\partial g}{\partial u} \right]^T \left[ \left[ \frac{\partial g}{\partial x} \right]^T \right]^{-1} \left[ \frac{\partial f}{\partial x} \right]

(5.50)

This relationship provides the foundation of the optimization method known as the steepest descent algorithm.

5.2.1    Steepest Descent Algorithm

1.  Let k = 0. Guess an initial vector uk = u0.

2.  Solve the (possibly nonlinear) system of Equation 5.45 for a feasible solution x.

3.  Calculate C^{k+1} and ∇C^{k+1} from Equation 5.50. If ‖C^{k+1} − C^k‖ is less than some predefined tolerance, stop.

4.  Calculate the new vector u^{k+1} = u^k − γ∇C, where γ is a positive number, which is the user-defined "stepsize" of the algorithm.

5.  k = k + 1. Go to step 2.

In the steepest descent method, the u vector update direction is determined at each step of the algorithm by choosing the direction of the greatest change of the augmented cost function C. The direction of steepest descent is perpendicular to the tangent of the curve of constant cost. The distance between adjustments is analogous to the stepsize γ of the algorithm. Thus the critical part of the steepest descent algorithm is the choice of γ. If γ is chosen small, then convergence to the minimum is more likely, but may require many iterations, whereas a large value of γ may result in oscillations about the minimum.

Example 5.4

Minimize

C: \; x_1^2 + 2x_2^2 + u^2 = f(x_1, x_2, u)

(5.51)

subject to the following constraints:

0 = x_1^2 - 3x_2 + u - 3

(5.52)

0 = x_1 + x_2 - 4u + 2

(5.53)

Solution

To find ∇C of Equation 5.50, the following partial derivatives are required:

\left[ \frac{\partial f}{\partial u} \right] = 2u \qquad \left[ \frac{\partial f}{\partial x} \right] = \begin{bmatrix} 2x_1 \\ 4x_2 \end{bmatrix} \qquad \left[ \frac{\partial g}{\partial u} \right]^T = \begin{bmatrix} 1 & -4 \end{bmatrix} \qquad \left[ \frac{\partial g}{\partial x} \right] = \begin{bmatrix} 2x_1 & -3 \\ 1 & 1 \end{bmatrix}

yielding

\nabla C = \left[ \frac{\partial f}{\partial u} \right] - \left[ \frac{\partial g}{\partial u} \right]^T \left[ \left[ \frac{\partial g}{\partial x} \right]^T \right]^{-1} \left[ \frac{\partial f}{\partial x} \right] = 2u - \begin{bmatrix} 1 & -4 \end{bmatrix} \begin{bmatrix} 2x_1 & 1 \\ -3 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 2x_1 \\ 4x_2 \end{bmatrix}

Iteration 1

Let u = 1, γ = 0.05, and choose a stopping criterion of ε = 0.0001. Solving for x1 and x2 yields two values for each with a corresponding cost function:

x_1 = 1.7016 \quad x_2 = 0.2984 \quad f = 4.0734
x_1 = -4.7016 \quad x_2 = 6.7016 \quad f = 23.2828

The first set of values leads to the minimum cost function, so they are selected as the operating solution. Substituting x1 = 1.7016 and x2 = 0.2984 into the gradient function yields ∇C = 10.5705 and the new value of u becomes

u^{(2)} = u^{(1)} - \gamma \nabla C = 1 - (0.05)(10.5705) = 0.4715

Iteration 2

With u = 0.4715, solving for x1 and x2 again yields two values for each with a corresponding cost function:

x_1 = 0.6062 \quad x_2 = -0.7203 \quad f = 1.6276
x_1 = -3.6062 \quad x_2 = 3.4921 \quad f = 14.2650

The first set of values again leads to the minimum cost function, so they are selected as the operating solution. The difference in cost functions is

\left| C^{(1)} - C^{(2)} \right| = |4.0734 - 1.6276| = 2.4458

which is greater than the stopping criterion. Substituting these values into the gradient function yields ∇C = 0.1077 and the new value of u becomes

u^{(3)} = u^{(2)} - \gamma \nabla C = 0.4715 - (0.05)(0.1077) = 0.4661

Iteration 3

With u = 0.4661, solving for x1 and x2 again yields two values for each with a corresponding cost function:

x_1 = 0.5921 \quad x_2 = -0.7278 \quad f = 1.6271
x_1 = -3.5921 \quad x_2 = 3.4565 \quad f = 14.1799

The first set of values again leads to the minimum cost function, so they are selected as the operating solution. The difference in cost functions is

\left| C^{(2)} - C^{(3)} \right| = |1.6276 - 1.6271| = 0.0005

which is greater than the stopping criterion. Substituting these values into the gradient function yields ∇C = 0.0541 and the new value of u becomes

u^{(4)} = u^{(3)} - \gamma \nabla C = 0.4661 - (0.05)(0.0541) = 0.4634

Iteration 4

With u = 0.4634, solving for x1 and x2 again yields two values for each with a corresponding cost function:

x_1 = 0.5850 \quad x_2 = -0.7315 \quad f = 1.6270
x_1 = -3.5850 \quad x_2 = 3.4385 \quad f = 14.1370

The first set of values again leads to the minimum cost function, so they are selected as the operating solution. The difference in cost functions is

\left| C^{(3)} - C^{(4)} \right| = |1.6271 - 1.6270| = 0.0001

which satisfies the stopping criterion. Thus, the values x1 = 0.5850, x2 = −0.7315, and u = 0.4634 yield the minimum cost function f = 1.6270.
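
The entire iteration of this example can be automated. A minimal sketch assuming NumPy; the closed-form root selection in solve_states follows from eliminating x2 between the two constraints, and the helper names are illustrative:

```python
import numpy as np

def solve_states(u):
    """Feasible x for Example 5.4: eliminating x2 = 4u - 2 - x1 gives
    x1**2 + 3*x1 - 11*u + 3 = 0; the larger root yields the smaller cost."""
    x1 = (-3 + np.sqrt(9 - 4 * (3 - 11 * u))) / 2
    return x1, 4 * u - 2 - x1

f = lambda x1, x2, u: x1**2 + 2 * x2**2 + u**2     # cost of Equation 5.51

def grad_C(x1, x2, u):
    # grad C = df/du - dg/du^T [dg/dx^T]^-1 df/dx  (Equation 5.50)
    dgdx_T = np.array([[2 * x1, 1.0], [-3.0, 1.0]])
    lam = np.linalg.solve(dgdx_T, np.array([2 * x1, 4 * x2]))
    return 2 * u - np.array([1.0, -4.0]) @ lam

u, gamma, C_old = 1.0, 0.05, np.inf
for _ in range(50):
    x1, x2 = solve_states(u)
    C = f(x1, x2, u)
    if abs(C_old - C) < 1e-4:            # stopping criterion of the example
        break
    C_old = C
    u = u - gamma * grad_C(x1, x2, u)    # steepest descent step
print(round(x1, 4), round(x2, 4), round(u, 4), round(C, 4))
```

Running this sketch reproduces the hand iterations above, stopping near x1 = 0.5850, x2 = −0.7315, u = 0.4634, f = 1.6270.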

Many power system applications, such as the power flow, offer only a snapshot of the system operation. Frequently, the system planner or operator is interested in the effect that making adjustments to the system parameters will have on the power flow through lines or system losses. Rather than making the adjustments in a random fashion, the system planner will attempt to optimize the adjustments according to some objective function. This objective function can be chosen to minimize generating costs, reservoir water levels, or system losses, among others. The OPF problem is to formulate the power flow problem to find system voltages and generated powers within the framework of the objective function. In this application, the inputs to the power flow are systematically adjusted to maximize (or minimize) a scalar function of the power flow state variables. The two most common objective functions are minimization of generating costs and minimization of active power losses.

The time frame of OPF is on the order of minutes to 1 h; therefore it is assumed that the optimization occurs using only those units that are currently on-line. The problem of determining whether or not to engage a unit, at what time, and for how long is part of the unit commitment problem and is not covered here. The minimization of active transmission losses both saves generating costs and creates a higher generating reserve margin.

Example 5.5

Consider again the three machine system of Example 5.2 except that bus 3 has been converted to a generator bus with a voltage magnitude of 1.0 pu. The cost functions of the generators are

C_1: \; P_1 + 0.0625 P_1^2 \;\; \$/h
C_2: \; P_2 + 0.0125 P_2^2 \;\; \$/h
C_3: \; P_3 + 0.0250 P_3^2 \;\; \$/h

Find the optimal generation scheduling of this system.

Solution

Following the steepest descent procedure, the first step is to develop an expression for the gradient ∇C, where

\nabla C = \left[ \frac{\partial f}{\partial u} \right] - \left[ \frac{\partial g}{\partial u} \right]^T \left[ \left[ \frac{\partial g}{\partial x} \right]^T \right]^{-1} \left[ \frac{\partial f}{\partial x} \right]

(5.54)

where f is the sum of the generator costs:

f: \; C_1 + C_2 + C_3 = P_1 + 0.0625 P_1^2 + P_2 + 0.0125 P_2^2 + P_3 + 0.0250 P_3^2

g is the set of load flow equations:

g_1: \; 0 = P_2 - P_{L2} - V_2 \sum_{i=1}^{3} V_i Y_{2i} \cos(\delta_2 - \delta_i - \phi_{2i})
g_2: \; 0 = P_3 - P_{L3} - V_3 \sum_{i=1}^{3} V_i Y_{3i} \cos(\delta_3 - \delta_i - \phi_{3i})

where PLi denotes the active power load at bus i, the set of inputs u is the set of independent generation settings:

u = \begin{bmatrix} P_2 \\ P_3 \end{bmatrix}

and x is the set of unknown states

x = \begin{bmatrix} \delta_2 \\ \delta_3 \end{bmatrix}

The generator setting P1 is not an input because it is the slack bus generation and cannot be independently set. From these designations, the various partial derivatives required for ∇C can be derived:

\left[ \frac{\partial g}{\partial u} \right] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

(5.55)

\left[ \frac{\partial g}{\partial x} \right] = \begin{bmatrix} \frac{\partial g_1}{\partial \delta_2} & \frac{\partial g_1}{\partial \delta_3} \\ \frac{\partial g_2}{\partial \delta_2} & \frac{\partial g_2}{\partial \delta_3} \end{bmatrix}

(5.56)

where

\frac{\partial g_1}{\partial \delta_2} = V_2 \left( V_1 Y_{21} \sin(\delta_2 - \delta_1 - \phi_{21}) + V_3 Y_{23} \sin(\delta_2 - \delta_3 - \phi_{23}) \right)

(5.57)

\frac{\partial g_1}{\partial \delta_3} = -V_2 V_3 Y_{32} \sin(\delta_2 - \delta_3 - \phi_{23})

(5.58)

\frac{\partial g_2}{\partial \delta_2} = -V_3 V_2 Y_{23} \sin(\delta_3 - \delta_2 - \phi_{32})

(5.59)

\frac{\partial g_2}{\partial \delta_3} = V_3 \left( V_1 Y_{31} \sin(\delta_3 - \delta_1 - \phi_{31}) + V_2 Y_{32} \sin(\delta_3 - \delta_2 - \phi_{32}) \right)

(5.60)

and

\left[ \frac{\partial f}{\partial u} \right] = \begin{bmatrix} 1 + 0.025 P_2 \\ 1 + 0.050 P_3 \end{bmatrix}

(5.61)

Finding the partial derivative [∂f/∂x] is slightly more difficult since the cost function is not written as a direct function of x. Recall, however, that P1 is not an input, but is actually a quantity that depends on x, i.e.,

P_1 = V_1 \left( V_1 Y_{11} \cos(\delta_1 - \delta_1 - \phi_{11}) + V_2 Y_{12} \cos(\delta_1 - \delta_2 - \phi_{12}) + V_3 Y_{13} \cos(\delta_1 - \delta_3 - \phi_{13}) \right)

(5.62)

Thus, using the chain rule,

\left[ \frac{\partial f}{\partial x} \right] = \left[ \frac{\partial f}{\partial P_1} \right] \left[ \frac{\partial P_1}{\partial x} \right]

(5.63)

= (1 + 0.125 P_1) \begin{bmatrix} V_1 V_2 Y_{12} \sin(\delta_1 - \delta_2 - \phi_{12}) \\ V_1 V_3 Y_{13} \sin(\delta_1 - \delta_3 - \phi_{13}) \end{bmatrix}

(5.64)

If the initial values of P2 = 0.56 pu and P3 = 0.28 pu are used as inputs, then the power flow yields the following states: [δ2 δ3] = [0.0286 −0.0185] in radians and P1 = 0.1152 pu. Converting the generated powers to megawatts and substituting these values into the partial derivatives yields

\left[ \frac{\partial g}{\partial u} \right] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

(5.65)

\left[ \frac{\partial g}{\partial x} \right] = \begin{bmatrix} -13.3267 & 9.9366 \\ 9.8434 & -19.9219 \end{bmatrix}

(5.66)

\left[ \frac{\partial f}{\partial u} \right] = \begin{bmatrix} 15.0000 \\ 15.0000 \end{bmatrix}

(5.67)

\left[ \frac{\partial f}{\partial x} \right] = 15.4018 \begin{bmatrix} -3.3771 \\ -10.1160 \end{bmatrix} = \begin{bmatrix} -52.0136 \\ -155.8040 \end{bmatrix}

(5.68)

which yields

\nabla C = \begin{bmatrix} -0.3256 \\ -0.4648 \end{bmatrix}

(5.69)

Thus, the new values for the input generation are

\begin{bmatrix} P_2 \\ P_3 \end{bmatrix} = \begin{bmatrix} 560 \\ 280 \end{bmatrix} - \gamma \begin{bmatrix} -0.3256 \\ -0.4648 \end{bmatrix}

(5.70)

With γ = 1, the updated generation is P2 = 560.3 and P3 = 280.5 MW.

Proceeding with more iterations until the gradient is reduced to less than a user-defined value yields the final generation values for all of the generators:

\begin{bmatrix} P_1 \\ P_2 \\ P_3 \end{bmatrix} = \begin{bmatrix} 112.6 \\ 560.0 \\ 282.7 \end{bmatrix} \text{MW}

which yields a cost of $7664/h.

Often the steepest descent method may indicate that either states or inputs lie outside of their physical constraints. For example, the algorithm may result in a power generation value that exceeds the physical maximum output of the generating unit. Similarly, the resulting bus voltages may lie outside of the desired range (usually ±10% of unity). These are violations of the inequality constraints of the problem. In these cases, the steepest descent algorithm must be modified to reflect these physical limitations. There are several approaches to account for limitations and these approaches depend on whether or not the limitation is on the input (independent) or on the state (dependent).

5.2.2    Limitations on Independent Variables

If the application of the steepest descent algorithm results in an updated value of input that exceeds the specified limit, then the most straightforward method of handling this violation is simply to set the input state equal to its limit and continue with the algorithm except with one less degree of freedom.

Example 5.6

Repeat Example 5.5 except that the generators must satisfy the following limitations:

80 \leq P_1 \leq 1200 \text{ MW}
450 \leq P_2 \leq 750 \text{ MW}
150 \leq P_3 \leq 250 \text{ MW}

Solution

From the solution of Example 5.5, the output of generator 3 exceeds the maximum limit of 0.25 pu. Therefore after the first iteration in the previous example, P3 is set to 0.25 pu. The new partial derivatives become

\left[ \frac{\partial g}{\partial u} \right] = \begin{bmatrix} 1 \\ 0 \end{bmatrix}

(5.71)

\left[ \frac{\partial g}{\partial x} \right] = \text{same as Equation 5.66}

(5.72)

\left[ \frac{\partial f}{\partial u} \right] = \begin{bmatrix} 1 + 0.025 P_2 \end{bmatrix}

(5.73)

\left[ \frac{\partial f}{\partial x} \right] = \text{same as Equation 5.64}

(5.74)

From the constrained steepest descent, the new values of generation become

\begin{bmatrix} P_1 \\ P_2 \\ P_3 \end{bmatrix} = \begin{bmatrix} 117.1 \\ 588.3 \\ 250.0 \end{bmatrix} \text{MW}

with a cost of $7703/h, which is higher than the unconstrained cost of generation of $7664/h. As more constraints are added to the system, the system is moved away from the optimal operating point, increasing the cost of generation.

5.2.3    Limitations on Dependent Variables

In many cases, the physical limitations of the system are imposed upon states that are dependent variables in the system description. In this case, the inequality equations are functions of x and must be added to the cost function. Examples of limitations on dependent variables include maximum line flows or bus voltage levels. In these cases, the value of the states cannot be independently set, but must be enforced indirectly. One method of enforcing an inequality constraint is to introduce a penalty function into the cost function. A penalty function is a function that is small when the state is far away from its limit, but becomes increasingly larger the closer the state is to its limit. Typical penalty functions include

p(h) = e^{kh} \quad k > 0

(5.75)

p(h) = x^{2n} e^{kh} \quad n, k > 0

(5.76)

p(h) = a x^{2n} e^{kh} + b e^{kh} \quad n, k, a, b > 0

(5.77)

and the cost function becomes

C: \; C(u, x) + \lambda^T g(u, x) + p\left( h(u, x) - h^{\max} \right)

(5.78)

This cost equation is then minimized in the usual fashion by setting the appropriate derivatives to zero. This method has the advantage of simplicity of implementation, but it also has several disadvantages. The first disadvantage is that the choice of penalty function is often heuristic and can vary by application. A second disadvantage is that this method cannot enforce hard limits on states; i.e., the cost function becomes large if the maximum is exceeded, but the state is still allowed to exceed its maximum. In many applications this is not a serious disadvantage. If the power flow on a transmission line slightly exceeds its maximum, it is reasonable to assume that the power system will continue to operate, at least for a finite length of time. If, however, the physical limit is the height above ground for an airplane, then even a slightly negative altitude will have dire consequences. Thus the use of penalty functions to enforce limits must be used with caution and is not applicable to all systems.
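
A minimal sketch of the soft-limit idea in Python; the quadratic form below is one simple variant of the penalty functions above (it is zero until the limit is reached rather than growing exponentially), and the helper name and weight are illustrative:

```python
def penalized_cost(cost, h, h_max, weight=1.0):
    """Append a soft quadratic penalty to a cost function value.

    The penalty is zero while h <= h_max and grows as (h - h_max)**2 once
    the dependent variable exceeds its limit, so the limit is enforced only
    approximately: a soft constraint, not a hard one.
    """
    violation = max(0.0, h - h_max)
    return cost + weight * violation**2

# e.g., a 467 MW flow against a 400 MW limit adds a 67**2 = 4489 penalty:
print(penalized_cost(7664.0, 467.0, 400.0))
```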

Example 5.7

Repeat Example 5.5, except use penalty functions to limit the power flow across line 2–3 to 0.4 pu.

Solution

The power flow across line 2–3 in Example 5.5 is given by

P_{23} = V_2 V_3 Y_{23} \cos(\delta_2 - \delta_3 - \phi_{23}) - V_2^2 Y_{23} \cos \phi_{23} = 0.467 \text{ pu}

(5.79)

If P23 exceeds 0.4 pu, then the penalty function

p(h) = \left[ 1000\, V_2 V_3 Y_{23} \cos(\delta_2 - \delta_3 - \phi_{23}) - 1000\, V_2^2 Y_{23} \cos \phi_{23} - 400 \right]^2

(5.80)

will be appended to the cost function. The partial derivatives remain the same with the exception of [∂f/∂x], which becomes

\left[ \frac{\partial f}{\partial x} \right] = \left[ \frac{\partial f}{\partial P_1} \right] \left[ \frac{\partial P_1}{\partial x} \right] + \left[ \frac{\partial f}{\partial P_{23}} \right] \left[ \frac{\partial P_{23}}{\partial x} \right]

(5.81)

= (1 + 0.125 P_1) \begin{bmatrix} V_1 V_2 Y_{12} \sin(\delta_1 - \delta_2 - \phi_{12}) \\ V_1 V_3 Y_{13} \sin(\delta_1 - \delta_3 - \phi_{13}) \end{bmatrix} + 2(P_{23} - 400) \begin{bmatrix} -V_2 V_3 Y_{23} \sin(\delta_2 - \delta_3 - \phi_{23}) \\ V_2 V_3 Y_{23} \sin(\delta_2 - \delta_3 - \phi_{23}) \end{bmatrix}

(5.82)

Proceeding with the steepest gradient algorithm iterations yields the final constrained optimal generation scheduling:

\begin{bmatrix} P_1 \\ P_2 \\ P_3 \end{bmatrix} = \begin{bmatrix} 128.5 \\ 476.2 \\ 349.9 \end{bmatrix} \text{MW}

and P23 = 400 MW. The cost for this constrained scheduling is $7882/h, which is slightly greater than the unconstrained cost.

In the case where hard limits must be imposed, an alternate approach to enforcing the inequality constraints must be employed. In this approach, the inequality constraints are added as additional equality constraints with the inequality set equal to the limit (upper or lower) that is violated. This in essence introduces an additional set of Lagrangian multipliers. This is often referred to as the dual-variable approach, because each inequality has the potential of resulting in two equalities: one for the upper limit and one for the lower limit. However, the upper and lower limit cannot be simultaneously violated; thus, out of the possible set of additional Lagrangian multipliers only one of the two will be included at any given operating point and thus the dual limits are mutually exclusive.

Example 5.8

Repeat Example 5.7 using the dual-variable approach.

Solution

By introducing the additional equation

P_{23} = V_2 V_3 Y_{23} \cos(\delta_2 - \delta_3 - \phi_{23}) - V_2^2 Y_{23} \cos \phi_{23} = 0.400 \text{ pu}

(5.83)

to the equality constraints, an additional equation gets added to the set of g(x). Therefore an additional unknown must be added to the state vector x to yield a solvable set of equations (three equations in three unknowns). Either PG2 or PG3 can be chosen as the additional unknown. In this example, PG3 will be chosen. The new system Jacobian becomes

\left[ \frac{\partial g}{\partial x} \right] = \begin{bmatrix} \frac{\partial g_1}{\partial x_1} & \frac{\partial g_1}{\partial x_2} & \frac{\partial g_1}{\partial x_3} \\ \frac{\partial g_2}{\partial x_1} & \frac{\partial g_2}{\partial x_2} & \frac{\partial g_2}{\partial x_3} \\ \frac{\partial g_3}{\partial x_1} & \frac{\partial g_3}{\partial x_2} & \frac{\partial g_3}{\partial x_3} \end{bmatrix}

(5.84)

where

\frac{\partial g_1}{\partial x_1} = V_2 \left( V_1 Y_{21} \sin(\delta_2 - \delta_1 - \phi_{21}) + V_3 Y_{23} \sin(\delta_2 - \delta_3 - \phi_{23}) \right)
\frac{\partial g_1}{\partial x_2} = -V_2 V_3 Y_{32} \sin(\delta_2 - \delta_3 - \phi_{23})
\frac{\partial g_1}{\partial x_3} = 0
\frac{\partial g_2}{\partial x_1} = -V_3 V_2 Y_{23} \sin(\delta_3 - \delta_2 - \phi_{32})
\frac{\partial g_2}{\partial x_2} = V_3 \left( V_1 Y_{13} \sin(\delta_3 - \delta_1 - \phi_{31}) + V_2 Y_{23} \sin(\delta_3 - \delta_2 - \phi_{32}) \right)
\frac{\partial g_2}{\partial x_3} = 1
\frac{\partial g_3}{\partial x_1} = -V_2 V_3 Y_{23} \sin(\delta_2 - \delta_3 - \phi_{23})
\frac{\partial g_3}{\partial x_2} = V_2 V_3 Y_{23} \sin(\delta_2 - \delta_3 - \phi_{23})
\frac{\partial g_3}{\partial x_3} = 0

and

\left[ \frac{\partial g}{\partial u} \right] = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}; \qquad \left[ \frac{\partial f}{\partial u} \right] = \begin{bmatrix} 1 + 0.025 P_{G2} \end{bmatrix}

Similar to Example 5.5, the chain rule is used to obtain [∂f/∂x]:

\left[ \frac{\partial f}{\partial x} \right] = \left[ \frac{\partial C}{\partial P_{G1}} \right] \left[ \frac{\partial P_{G1}}{\partial x} \right] + \left[ \frac{\partial C}{\partial P_{G3}} \right] \left[ \frac{\partial P_{G3}}{\partial x} \right]

(5.85)

= (1 + 0.125 P_{G1}) \begin{bmatrix} V_1 V_2 Y_{12} \sin(\delta_1 - \delta_2 - \phi_{12}) \\ V_1 V_3 Y_{13} \sin(\delta_1 - \delta_3 - \phi_{13}) \\ 0 \end{bmatrix} + (1 + 0.050 P_{G3}) \begin{bmatrix} V_3 V_2 Y_{32} \sin(\delta_3 - \delta_2 - \phi_{32}) \\ -V_3 \left( V_1 Y_{13} \sin(\delta_3 - \delta_1 - \phi_{31}) + V_2 Y_{23} \sin(\delta_3 - \delta_2 - \phi_{32}) \right) \\ 0 \end{bmatrix}

(5.86)

Substituting these partial derivatives into the expression for ∇C of Equation 5.54 yields the same generation scheduling as Example 5.7.

5.3    State Estimation

In many physical systems, the system operating condition cannot be determined directly by an analytical solution of known equations using a given set of known, dependable quantities. More frequently, the system operating condition is determined by the measurement of system states at different points throughout the system. In many systems, more measurements are made than are necessary to uniquely determine the operating point. This redundancy is often purposely designed into the system to counteract the effect of inaccurate or missing data due to instrument failure. Conversely, not all of the states may be available for measurement. High temperatures, moving parts, or inhospitable conditions may make it difficult, dangerous, or expensive to measure certain system states. In this case, the missing states must be estimated from the rest of the measured information of the system. This process of estimating unknown states from measured quantities is known as state estimation. State estimation gives the "best estimate" of the state of the system in spite of uncertain, redundant, and/or conflicting measurements. A good state estimator will smooth out small random errors in measurements, detect and identify large measurement errors, and compensate for missing data. This process strives to minimize the error between the (unknown) true operating state of the system and the measured states.

The set of measured quantities can be denoted by the vector z, which may include measurements of system states (such as voltage and current) or quantities that are functions of system states (such as power flows). Thus,

z^{\text{true}} = Ax

(5.87)

where

x is the set of system states

A is the matrix relating the measurements to the states and is usually not square

The error vector is the difference between the measured quantities z and the true quantities:

e = z - z^{\text{true}} = z - Ax

(5.88)

Typically, the minimum of the square of the error is desired to negate any effects of sign differences between the measured and true values. Thus, a state estimator endeavors to find the minimum of the squared error, or a least squares minimization:

\text{minimize } \|e\|^2 = e^T e = \sum_{i=1}^{m} \left[ z_i - \sum_{j=1}^{n} a_{ij} x_j \right]^2

(5.89)

The squared error function can be denoted by U(x) and is given by

U(x) = e^T e = (z - Ax)^T (z - Ax)

(5.90)

= (z^T - x^T A^T)(z - Ax)

(5.91)

= z^T z - z^T A x - x^T A^T z + x^T A^T A x

(5.92)

Note that the product zT Ax is a scalar and so it can be equivalently written as

z^T A x = (z^T A x)^T = x^T A^T z

Therefore the squared error function is given by

U(x) = z^T z - 2 x^T A^T z + x^T A^T A x

(5.93)

The minimum of the squared error function can be found by an unconstrained optimization where the derivative of the function with respect to the states x is set to zero:

\frac{\partial U(x)}{\partial x} = 0 = -2 A^T z + 2 A^T A x

(5.94)

Thus,

A^T A x = A^T z

(5.95)

Thus, if b = A^T z and \hat{A} = A^T A, then

\hat{A} x = b

(5.96)

This state vector x is the best estimate (in the least squares sense) of the system operating condition from which the measurements z were taken. The measurement error is given by

e = z^{\text{meas}} - Ax

(5.97)

In power system state estimation, the estimated variables are the voltage magnitudes and the voltage phase angles at the system buses. The input to the state estimator is the active and reactive powers of the system, measured either at the injection sites or on the transmission lines. The state estimator is designed to give the best estimates of the voltages and phase angles minimizing the effects of the measurement errors. All instruments add some degree of error to the measured values, but the problem is how to quantify this error and account for it during the estimation process.

If all measurements are treated equally in the least squares solution, then the less accurate measurements will affect the estimation as significantly as the more accurate measurements. As a result, the final estimation may contain large errors due to the influence of inaccurate measurements. By introducing a weighting matrix to emphasize the more accurate measurements more heavily than the less accurate measurements, the estimation procedure can then force the results to coincide more closely with the measurements of greater accuracy. This leads to the weighted least squares estimation:

\text{minimize } \|e\|^2 = e^T W e = \sum_{i=1}^{m} w_i \left[ z_i - \sum_{j=1}^{n} a_{ij} x_j \right]^2

(5.98)

where wi is a weighting factor reflecting the level of confidence in the measurement zi.

In general, it can be assumed that the introduced errors have normal (Gaussian) distribution with zero mean and that each measurement is independent of all other measurements. This means that each measurement error is as likely to be greater than the true value as it is to be less than the true value. A zero mean Gaussian distribution has several attributes. The standard deviation of a zero mean Gaussian distribution is denoted by σ. This means that 68% of all measurements will fall within ±σ of the expected value, which is zero in a zero mean distribution. Further, 95% of all measurements will fall within ±2σ and 99% of all measurements will fall within ±3σ. The variance of the measurement distribution is given by σ2. This implies that if the variance of the measurements is relatively small, then the majority of measurements are close to the mean. One interpretation of this is that accurate measurements lead to small variance in the distribution.

This relationship between accuracy and variance leads to a straightforward approach from which a weighting matrix for the estimation can be developed. With measurements taken from a particular meter, the smaller the variance of the measurements (i.e., the more consistent they are), the greater the level of confidence in that set of measurements. A set of measurements that have a high level of confidence should have a higher weighting than a set of measurements that have a larger variance (and therefore less confidence). Therefore, a plausible weighting matrix that reflects the level of confidence in each measurement set is the inverse of the covariance matrix. Thus, measurements that come from instruments with good consistency (small variance) will carry greater weight than measurements that come from less accurate instruments (high variance). Thus, one possible weighting matrix is given by

W = R^{-1} = \begin{bmatrix} \frac{1}{\sigma_1^2} & 0 & \cdots & 0 \\ 0 & \frac{1}{\sigma_2^2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{\sigma_m^2} \end{bmatrix}

(5.99)

where R is the covariance matrix for the measurements.

As in the linear least squares estimation, the nonlinear least squares estimation attempts to minimize the square of the errors between a known set of measurements and a set of weighted nonlinear functions:

\text{minimize } f = \|e\|^2 = e^T R^{-1} e = \sum_{i=1}^{m} \frac{1}{\sigma_i^2} \left[ z_i - h_i(x) \right]^2

(5.100)

where x ∈ R^n is the vector of unknowns to be estimated, z ∈ R^m is the vector of measurements, σ_i² is the variance of the ith measurement, and h(x) is the nonlinear function vector relating x to z. The measurement vector z can be a set of geographically distributed measurements, such as voltages and power flows.

In state estimation, the unknowns in the nonlinear equations are the state variables of the system. The state values that minimize the error are found by setting the derivatives of the error function to zero:

F(x) = H_x^T R^{-1} \left[ z - h(x) \right] = 0

(5.101)

where

H_x = \begin{bmatrix} \frac{\partial h_1}{\partial x_1} & \frac{\partial h_1}{\partial x_2} & \cdots & \frac{\partial h_1}{\partial x_n} \\ \frac{\partial h_2}{\partial x_1} & \frac{\partial h_2}{\partial x_2} & \cdots & \frac{\partial h_2}{\partial x_n} \\ \vdots & & & \vdots \\ \frac{\partial h_m}{\partial x_1} & \frac{\partial h_m}{\partial x_2} & \cdots & \frac{\partial h_m}{\partial x_n} \end{bmatrix}

(5.102)

Note that Equation 5.101 is a set of nonlinear equations that must be solved using Newton–Raphson or another iterative numerical solver. In this case, the Jacobian of F(x) is

J_F(x) = H_x^T(x)\, R^{-1} \frac{\partial}{\partial x} \left[ z - h(x) \right]

(5.103)

= -H_x^T(x)\, R^{-1} H_x(x)

(5.104)

and the Newton–Raphson iteration becomes

\left[ H_x^T(x^k)\, R^{-1} H_x(x^k) \right] \left[ x^{k+1} - x^k \right] = H_x^T(x^k)\, R^{-1} \left[ z - h(x^k) \right]

(5.105)

At convergence, x^{k+1} is equal to the set of states that minimize the error function f of Equation 5.100.
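
A minimal sketch of the iteration of Equation 5.105, assuming NumPy; h, H, and the argument names are placeholders for the measurement model of a particular system:

```python
import numpy as np

def wls_state_estimate(h, H, z, sigma, x0, eps=1e-6, max_iter=20):
    """Weighted least squares state estimation via Equation 5.105.

    h     : callable, h(x) -> modeled values of the measurements
    H     : callable, H(x) -> m x n matrix of partials (Equation 5.102)
    z     : m measured values; sigma : their standard deviations
    Each Gauss-Newton step solves
        (H^T R^-1 H) dx = H^T R^-1 (z - h(x)).
    """
    R_inv = np.diag(1.0 / np.asarray(sigma)**2)   # weighting matrix W = R^-1
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = z - h(x)                              # measurement residuals
        Hx = H(x)
        G = Hx.T @ R_inv @ Hx                     # gain matrix
        dx = np.linalg.solve(G, Hx.T @ R_inv @ r)
        x = x + dx
        if np.linalg.norm(dx) < eps:
            break
    return x
```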

Example 5.9

The SCADA system for the power network shown in Figure 5.3 reports the following measurements and variances:

 

zi   State   Measurement   Variance (σ²)
1    V3        0.975        0.010
2    P13       0.668        0.050
3    Q21      −0.082        0.075
4    P3       −1.181        0.050
5    Q2       −0.086        0.075

Estimate the power system states.

Solution

The first step in the estimation process is to identify and enumerate the unknown states. In this example, the unknowns are [x1 x2 x3]T = [δ2 δ3 V3]T. After the states are identified, the next step in the estimation process is to identify the appropriate functions h(x) that correspond to each of the measurements. The nonlinear function that is being driven to zero to minimize the weighted error is

F(x) = H_x^T R^{-1} \left[ z - h(x) \right] = 0

(5.106)

where the set of z − h(x) is given by

z_1 - h_1(x) = V_3 - x_3
z_2 - h_2(x) = P_{13} - \left( V_1 x_3 Y_{13} \cos(-x_2 - \phi_{13}) - V_1^2 Y_{13} \cos \phi_{13} \right)
z_3 - h_3(x) = Q_{21} - \left( V_2 V_1 Y_{21} \sin(x_1 - \phi_{21}) + V_2^2 Y_{21} \sin \phi_{21} \right)
z_4 - h_4(x) = P_3 - \left( x_3 V_1 Y_{31} \cos(x_2 - \phi_{31}) + x_3 V_2 Y_{32} \cos(x_2 - x_1 - \phi_{32}) + x_3^2 Y_{33} \cos \phi_{33} \right)
z_5 - h_5(x) = Q_2 - \left( V_2 V_1 Y_{21} \sin(x_1 - \phi_{21}) - V_2^2 Y_{22} \sin \phi_{22} + V_2 x_3 Y_{23} \sin(x_1 - x_2 - \phi_{23}) \right)

FIGURE 5.3  Example power system.

and the matrix of partial derivatives for the set of functions in Equation 5.106 is

H_x = \begin{bmatrix} 0 & 0 & 1 \\ 0 & V_1 x_3 Y_{13} \sin(-x_2 - \phi_{13}) & V_1 Y_{13} \cos(-x_2 - \phi_{13}) \\ V_1 V_2 Y_{21} \cos(x_1 - \phi_{21}) & 0 & 0 \\ x_3 V_2 Y_{32} \sin(x_2 - x_1 - \phi_{32}) & -x_3 V_1 Y_{31} \sin(x_2 - \phi_{31}) - x_3 V_2 Y_{32} \sin(x_2 - x_1 - \phi_{32}) & V_1 Y_{31} \cos(x_2 - \phi_{31}) + V_2 Y_{32} \cos(x_2 - x_1 - \phi_{32}) + 2 x_3 Y_{33} \cos \phi_{33} \\ V_1 V_2 Y_{21} \cos(x_1 - \phi_{21}) + V_2 x_3 Y_{23} \cos(x_1 - x_2 - \phi_{23}) & -V_2 x_3 Y_{23} \cos(x_1 - x_2 - \phi_{23}) & V_2 Y_{23} \sin(x_1 - x_2 - \phi_{23}) \end{bmatrix}

(5.107)

The inverse of the covariance matrix of the measurements (the weighting matrix) is

R^{-1} = \begin{bmatrix} \frac{1}{0.010^2} & & & & \\ & \frac{1}{0.050^2} & & & \\ & & \frac{1}{0.075^2} & & \\ & & & \frac{1}{0.050^2} & \\ & & & & \frac{1}{0.075^2} \end{bmatrix}

(5.108)

The Newton–Raphson iteration to solve for the set of states x that minimize the weighted errors is

\left[ H_x^T(x^k)\, R^{-1} H_x(x^k) \right] \left[ x^{k+1} - x^k \right] = H_x^T(x^k)\, R^{-1} \left[ z - h(x^k) \right]

(5.109)

Iteration 1

The initial condition for the state estimation solution is the same flat start as for the power flow equations; namely, all angles are set to zero and all unknown voltage magnitudes are set to unity. The measurement functions h(x) evaluated at the initial conditions are

h(x^0) = \begin{bmatrix} 1.0000 \\ 0.0202 \\ -0.0664 \\ -0.0198 \\ -0.1914 \end{bmatrix}

The matrix of partials evaluated at the initial condition yields

H_x^0 = \begin{bmatrix} 0 & 0 & 1.0000 \\ 0 & -10.0990 & -1.0099 \\ -0.2257 & 0 & 0 \\ -9.9010 & 20.0000 & 1.9604 \\ -1.2158 & 0.9901 & -9.9010 \end{bmatrix}

The nonlinear functions in Equation 5.106 are

F(x^0) = \begin{bmatrix} 0.5655 \\ 1.4805 \\ 0.2250 \end{bmatrix}

The incremental updates for the states are

\Delta x^1 = \begin{bmatrix} -0.0119 \\ -0.0625 \\ -0.0154 \end{bmatrix}

leading to the updated states

\begin{bmatrix} \delta_2^1 \\ \delta_3^1 \\ V_3^1 \end{bmatrix} = \begin{bmatrix} -0.0119 \\ -0.0625 \\ 0.9846 \end{bmatrix}

where δ2 and δ3 are in radians. The error at iteration 0 is

\varepsilon^0 = 1.4805

Iteration 2

The updated values are used to recalculate the Newton–Raphson iterations:

h(x^1) = \begin{bmatrix} 0.9846 \\ 0.6585 \\ -0.0634 \\ -1.1599 \\ -0.0724 \end{bmatrix}

The matrix of partials is

H_x^1 = \begin{bmatrix} 0 & 0 & 1.0000 \\ 0 & -9.9858 & -0.3774 \\ -0.2660 & 0 & 0 \\ -9.6864 & 19.5480 & 0.7715 \\ -0.7468 & 0.4809 & -9.9384 \end{bmatrix}

The nonlinear function evaluated at the updated values yields

F(x^1) = \begin{bmatrix} 0.0113 \\ 0.0258 \\ 0.0091 \end{bmatrix}

The incremental updates for the states are

\Delta x^2 = \begin{bmatrix} 0.0007 \\ -0.0008 \\ 0.0013 \end{bmatrix}

leading to the updated states

\begin{bmatrix} \delta_2^2 \\ \delta_3^2 \\ V_3^2 \end{bmatrix} = \begin{bmatrix} -0.0113 \\ -0.0633 \\ 0.9858 \end{bmatrix}

The error at Iteration 1 is

\varepsilon^1 = 0.0258

The iterations are obviously converging. At convergence, the states that minimize the weighted measurement errors are

x = \begin{bmatrix} -0.0113 \\ -0.0633 \\ 0.9858 \end{bmatrix}

This concludes the discussion of the most commonly used computational methods for power system analysis. This chapter describes only the basic approaches to power flow, OPF, and state estimation. These methods have been utilized for several decades, yet improvements in accuracy and speed are constantly being proposed in the technical literature.
