Here x, y, and z are the "unknowns" for which we wish to solve. We can write
this in matrix form:
\[
\begin{bmatrix}
3 & 7 & 2 \\
2 & -4 & -3 \\
5 & 2 & 1
\end{bmatrix}
\begin{bmatrix}
x \\ y \\ z
\end{bmatrix}
=
\begin{bmatrix}
4 \\ -1 \\ 1
\end{bmatrix}.
\]
A common shorthand for such systems is Ax = b, where it is assumed that A is
a square matrix of known constants, x is an unknown column vector (with
elements x, y, and z in our example), and b is a column vector of known constants.
There are many ways to solve such systems, and the appropriate method de-
pends on the properties and dimensions of the matrix A. Because in graphics
we so frequently work with systems of size n ≤ 4, we’ll discuss here a method
appropriate for these systems, known as Cramer’s rule, which we saw earlier,
from a 2D geometric viewpoint, in the example on page 92. Here, we show this
algebraically. The solution to the above equation is
\[
x = \frac{\begin{vmatrix} 4 & 7 & 2 \\ -1 & -4 & -3 \\ 1 & 2 & 1 \end{vmatrix}}
         {\begin{vmatrix} 3 & 7 & 2 \\ 2 & -4 & -3 \\ 5 & 2 & 1 \end{vmatrix}};\qquad
y = \frac{\begin{vmatrix} 3 & 4 & 2 \\ 2 & -1 & -3 \\ 5 & 1 & 1 \end{vmatrix}}
         {\begin{vmatrix} 3 & 7 & 2 \\ 2 & -4 & -3 \\ 5 & 2 & 1 \end{vmatrix}};\qquad
z = \frac{\begin{vmatrix} 3 & 7 & 4 \\ 2 & -4 & -1 \\ 5 & 2 & 1 \end{vmatrix}}
         {\begin{vmatrix} 3 & 7 & 2 \\ 2 & -4 & -3 \\ 5 & 2 & 1 \end{vmatrix}}.
\]
The rule here is to take a ratio of determinants, where the denominator is |A| and
the numerator is the determinant of a matrix created by replacing a column of A
with the column vector b. The column replaced corresponds to the position of
the unknown in vector x. For example, y is the second unknown and the second
column is replaced. Note that if |A| = 0, the division is undefined and the
system has no unique solution. This is just another version of the rule that if
A is singular (zero determinant), then there is no unique solution to the equations.
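As a concrete check, the procedure above can be sketched in a few lines of plain Python (the helper names `det3` and `cramer3` are illustrative, not part of the text):

```python
def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    """Solve Ax = b for a 3x3 system using Cramer's rule.

    Returns None when |A| = 0, since a singular matrix admits no
    unique solution.
    """
    d = det3(A)
    if d == 0:
        return None
    solution = []
    for col in range(3):
        # Replace the column corresponding to this unknown with b.
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][col] = b[r]
        solution.append(det3(Ai) / d)
    return solution

# The example system from the text.
A = [[3, 7, 2], [2, -4, -3], [5, 2, 1]]
b = [4, -1, 1]
x, y, z = cramer3(A, b)
```

For systems larger than about 4×4, expanding determinants this way is inefficient, which is one reason Cramer's rule is usually reserved for the small matrices common in graphics.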
5.4 Eigenvalues and Matrix Diagonalization
Square matrices have eigenvalues and eigenvectors associated with them. The
eigenvectors are those non-zero vectors whose directions do not change when
multiplied by the matrix. For example, suppose for a matrix A and vector a, we
have
Aa = λa. (5.9)
This means we have stretched or compressed a, but its direction has not changed.
The scale factor λ is called the eigenvalue associated with eigenvector a. Knowing