The transpose of a product of two matrices obeys a rule similar to Equation (5.4):
$$(AB)^T = B^T A^T.$$
The determinant of a square matrix is simply the determinant of the columns
of the matrix, considered as a set of vectors. The determinant has several nice
relationships to the matrix operations just discussed, which we list here for reference:
$$|AB| = |A|\,|B| \qquad (5.5)$$
$$\left|A^{-1}\right| = \frac{1}{|A|} \qquad (5.6)$$
$$\left|A^T\right| = |A| \qquad (5.7)$$
5.2.3 Vector Operations in Matrix Form
In graphics, we use a square matrix to transform a vector represented as a matrix.
For example, if you have a 2D vector $a = (x_a, y_a)$ and want to rotate it by 90 degrees about the origin to form vector $a' = (-y_a, x_a)$, you can use a product of a $2 \times 2$ matrix and a $2 \times 1$ matrix, called a column vector. The operation in matrix form is
$$\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} x_a \\ y_a \end{bmatrix} = \begin{bmatrix} -y_a \\ x_a \end{bmatrix}.$$
We can get the same result by using the transpose of this matrix and multiplying
on the left (“premultiplying”) with a row vector:
$$\begin{bmatrix} x_a & y_a \end{bmatrix} \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} = \begin{bmatrix} -y_a & x_a \end{bmatrix}.$$
These days, postmultiplication using column vectors is fairly standard, but in
many older books and systems you will run across row vectors and premulti-
plication. The only difference is that the transform matrix must be replaced with
its transpose.
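To make the two conventions concrete, here is a minimal sketch in Python with NumPy (our own illustration, not from the text), applying the 90-degree rotation above to the hypothetical vector $a = (3, 1)$ both ways:

```python
import numpy as np

# 90-degree rotation about the origin, written for column vectors.
R = np.array([[0, -1],
              [1,  0]])

a = np.array([[3],   # column vector (a 2x1 matrix)
              [1]])

# Postmultiplication convention: the column vector sits on the right.
a_rotated = R @ a            # [[-1], [3]]

# Premultiplication convention: a row vector on the left of R^T.
a_rotated_row = a.T @ R.T    # [[-1, 3]], the same result transposed

print(a_rotated.ravel())      # [-1  3]
print(a_rotated_row.ravel())  # [-1  3]
```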
We can also use matrix formalism to encode operations on just vectors. If we consider the result of the dot product as a $1 \times 1$ matrix, it can be written
$$a \cdot b = a^T b.$$
For example, if we take two 3D vectors we get
$$\begin{bmatrix} x_a & y_a & z_a \end{bmatrix} \begin{bmatrix} x_b \\ y_b \\ z_b \end{bmatrix} = \begin{bmatrix} x_a x_b + y_a y_b + z_a z_b \end{bmatrix}.$$
A related vector product is the outer product between two vectors, which can
be expressed as a matrix multiplication with a column vector on the left and a row
vector on the right: $ab^T$. The result is a matrix consisting of products of all pairs of an entry of $a$ with an entry of $b$. For 3D vectors, we have
$$\begin{bmatrix} x_a \\ y_a \\ z_a \end{bmatrix} \begin{bmatrix} x_b & y_b & z_b \end{bmatrix} = \begin{bmatrix} x_a x_b & x_a y_b & x_a z_b \\ y_a x_b & y_a y_b & y_a z_b \\ z_a x_b & z_a y_b & z_a z_b \end{bmatrix}.$$
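Both identities are easy to check numerically. A minimal sketch using NumPy, with made-up example values, treating $a$ and $b$ as column vectors so that $a^Tb$ is a $1 \times 1$ matrix and $ab^T$ is a $3 \times 3$ matrix:

```python
import numpy as np

a = np.array([[1], [2], [3]])   # 3x1 column vector
b = np.array([[4], [5], [6]])   # 3x1 column vector

dot = a.T @ b     # 1x1 matrix: [[1*4 + 2*5 + 3*6]] = [[32]]
outer = a @ b.T   # 3x3 matrix of all pairwise products

# Agrees with the dedicated NumPy routines:
assert dot[0, 0] == np.dot(a.ravel(), b.ravel())
assert np.array_equal(outer, np.outer(a, b))
```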
It is often useful to think of matrix multiplication in terms of vector operations.
To illustrate using the three-dimensional case, we can think of a $3 \times 3$ matrix as a collection of three 3D vectors in two ways: either it is made up of three column vectors side-by-side, or it is made up of three row vectors stacked up. For instance, the result of a matrix-vector multiplication $y = Ax$ can be interpreted as a vector whose entries are the dot products of $x$ with the rows of $A$. Naming these row vectors $r_i$, we have
$$\begin{bmatrix} | \\ y \\ | \end{bmatrix} = \begin{bmatrix} - & r_1 & - \\ - & r_2 & - \\ - & r_3 & - \end{bmatrix} \begin{bmatrix} | \\ x \\ | \end{bmatrix}; \qquad y_i = r_i \cdot x.$$
Alternatively, we can think of the same product as a sum of the three columns $c_i$ of $A$, weighted by the entries of $x$:
$$\begin{bmatrix} | \\ y \\ | \end{bmatrix} = \begin{bmatrix} | & | & | \\ c_1 & c_2 & c_3 \\ | & | & | \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}; \qquad y = x_1 c_1 + x_2 c_2 + x_3 c_3.$$
Using the same ideas, one can understand a matrix-matrix product AB as an
array containing the pairwise dot products of all rows of A with all columns of B
(cf. (5.2)); as a collection of products of the matrix A with all the column vectors
of B, arranged left to right; as a collection of products of all the row vectors of
A with the matrix B, stacked top to bottom; or as the sum of the pairwise outer products of all columns of A with all rows of B. (See Exercise 8.)
These interpretations of matrix multiplication can often lead to valuable geo-
metric interpretations of operations that may otherwise seem very abstract.
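As an illustration of the first two interpretations of $y = Ax$, and the outer-product view of $AB$, here is a short sketch in NumPy with made-up values:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
x = np.array([1.0, 0.5, -1.0])

# View 1: each entry of y is the dot product of x with a row of A.
y_rows = np.array([np.dot(A[i, :], x) for i in range(3)])

# View 2: y is the sum of the columns of A weighted by the entries of x.
y_cols = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

assert np.allclose(y_rows, A @ x)
assert np.allclose(y_cols, A @ x)

# Matrix-matrix analogue: AB as the sum of outer products of the
# columns of A with the corresponding rows of B.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, -1.0]])
AB = sum(np.outer(A[:, k], B[k, :]) for k in range(3))
assert np.allclose(AB, A @ B)
```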
5.2.4 Special Types of Matrices
The identity matrix is an example of a diagonal matrix, where all non-zero ele-
ments occur along the diagonal. The diagonal consists of those elements whose
column index equals the row index counting from the upper left.
The identity matrix also has the property that it is the same as its transpose.
Such matrices are called symmetric.
The identity matrix is also an orthogonal matrix, because each of its columns considered as a vector has length 1 and the columns are orthogonal to one another. The same is true of the rows (see Exercise 2). The determinant of any orthogonal matrix is either $+1$ or $-1$. (The idea of an orthogonal matrix corresponds to the idea of an orthonormal basis, not just a set of orthogonal vectors; an unfortunate glitch in terminology.)
A very useful property of orthogonal matrices is that they are nearly their own
inverses. Multiplying an orthogonal matrix by its transpose results in the identity,
$$R^T R = I = R R^T \qquad \text{for orthogonal } R.$$
This is easy to see because the entries of $R^T R$ are dot products between the columns of $R$. Off-diagonal entries are dot products between orthogonal vectors, and the diagonal entries are dot products of the (unit-length) columns with themselves.
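A quick numerical check of this property, reusing the 90-degree rotation matrix from Section 5.2.3 (any orthogonal matrix would do):

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # 90-degree rotation: an orthogonal matrix

# R^T R = I = R R^T, so the transpose serves as the inverse.
assert np.allclose(R.T @ R, np.eye(2))
assert np.allclose(R @ R.T, np.eye(2))

# The determinant of an orthogonal matrix is +1 or -1.
print(np.linalg.det(R))   # 1.0
```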
Example. The matrix
$$\begin{bmatrix} 8 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 9 \end{bmatrix}$$
is diagonal, and therefore symmetric, but not orthogonal (the columns are orthogonal but they are not unit length).
The matrix
$$\begin{bmatrix} 1 & 1 & 2 \\ 1 & 9 & 7 \\ 2 & 7 & 1 \end{bmatrix}$$
is symmetric, but not diagonal or orthogonal.
The matrix
$$\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}$$
is orthogonal, but neither diagonal nor symmetric.
5.3 Computing with Matrices and Determinants
Recall from Section 5.1 that the determinant takes $n$ $n$-dimensional vectors and combines them to get a signed $n$-dimensional volume of the $n$-dimensional parallelepiped defined by the vectors. For example, the determinant in 2D is the area
of the parallelogram formed by the vectors. We can use matrices to handle the
mechanics of computing determinants.
If we have 2D vectors r and s, we denote the determinant |rs|; this value is
the signed area of the parallelogram formed by the vectors. Suppose we have
two 2D vectors with Cartesian coordinates (a, b) and (A, B) (Figure 5.7). The
determinant can be written in terms of column vectors or as a shorthand:
Figure 5.7. The 2D determinant in Equation (5.8) is the area of the parallelogram formed by the 2D vectors.
$$\left|\begin{bmatrix} a \\ b \end{bmatrix} \begin{bmatrix} A \\ B \end{bmatrix}\right| \equiv \begin{vmatrix} a & A \\ b & B \end{vmatrix} = aB - Ab. \qquad (5.8)$$
Note that the determinant of a matrix is the same as the determinant of its transpose:
$$\begin{vmatrix} a & A \\ b & B \end{vmatrix} = \begin{vmatrix} a & b \\ A & B \end{vmatrix} = aB - Ab.$$
This means that for any parallelogram in 2D there is a “sibling” parallelogram that has the same area but a different shape (Figure 5.8). For example, the parallelogram defined by vectors (3, 1) and (2, 4) has area 10, as does the parallelogram defined by vectors (3, 2) and (1, 4).
Figure 5.8. The sibling parallelogram has the same area as the parallelogram in Figure 5.7.
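A minimal numerical check of the example values:

```python
import numpy as np

# Parallelogram with edge vectors (3, 1) and (2, 4) as the columns.
M = np.array([[3.0, 2.0],
              [1.0, 4.0]])

# The sibling parallelogram uses the transpose: vectors (3, 2) and (1, 4).
print(np.linalg.det(M))    # 10.0  (aB - Ab = 3*4 - 2*1)
print(np.linalg.det(M.T))  # 10.0: same area, different shape
```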
Example. The geometric meaning of the 3D determinant is helpful in seeing why
certain formulas make sense. For example, the equation of the plane through the points $(x_i, y_i, z_i)$ for $i = 0, 1, 2$ is
$$\begin{vmatrix} x - x_0 & x - x_1 & x - x_2 \\ y - y_0 & y - y_1 & y - y_2 \\ z - z_0 & z - z_1 & z - z_2 \end{vmatrix} = 0.$$
Each column is a vector from point $(x_i, y_i, z_i)$ to point $(x, y, z)$. The volume of
the parallelepiped with those vectors as sides is zero only if (x, y, z) is coplanar
with the three other points. Almost all equations involving determinants have
similarly simple underlying geometry.
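A short sketch of this coplanarity test (the helper function and the points are ours, chosen for illustration):

```python
import numpy as np

def plane_determinant(p, p0, p1, p2):
    """Determinant whose columns are the vectors from p0, p1, p2 to p."""
    cols = np.column_stack([p - p0, p - p1, p - p2])
    return np.linalg.det(cols)

p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])

# Zero for a point in the plane of p0, p1, p2; nonzero otherwise.
print(plane_determinant(np.array([0.3, 0.4, 0.0]), p0, p1, p2))  # 0.0
print(plane_determinant(np.array([0.3, 0.4, 1.0]), p0, p1, p2))  # 1.0
```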
As we saw earlier, we can compute determinants by a brute force expansion
where most terms are zero, and there is a great deal of bookkeeping on plus and
minus signs. The standard way to manage the algebra of computing determinants
is to use a form of Laplace’s expansion. The key part of computing the determinant this way is to find cofactors of various matrix elements. Each element of a square matrix has a cofactor which is the determinant of a matrix with one fewer row and column possibly multiplied by minus one. The smaller matrix is obtained by eliminating the row and column that the element in question is in. For example, for a 10×10 matrix, the cofactor of $a_{82}$ is the determinant of the 9×9 matrix with the 8th row and 2nd column eliminated. The sign of a cofactor is positive if
the sum of the row and column indices is even and negative otherwise. This can
be remembered by a checkerboard pattern:
$$\begin{bmatrix} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$
So, for a 4 × 4 matrix,
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix}.$$
The cofactors of the first row are
$$a^c_{11} = \begin{vmatrix} a_{22} & a_{23} & a_{24} \\ a_{32} & a_{33} & a_{34} \\ a_{42} & a_{43} & a_{44} \end{vmatrix}, \qquad a^c_{12} = -\begin{vmatrix} a_{21} & a_{23} & a_{24} \\ a_{31} & a_{33} & a_{34} \\ a_{41} & a_{43} & a_{44} \end{vmatrix},$$
$$a^c_{13} = \begin{vmatrix} a_{21} & a_{22} & a_{24} \\ a_{31} & a_{32} & a_{34} \\ a_{41} & a_{42} & a_{44} \end{vmatrix}, \qquad a^c_{14} = -\begin{vmatrix} a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} \end{vmatrix}.$$
The determinant of a matrix is found by taking the sum of products of the elements of any row or column with their cofactors. For example, the determinant of the 4 × 4 matrix above taken about its second column is
$$|A| = a_{12} a^c_{12} + a_{22} a^c_{22} + a_{32} a^c_{32} + a_{42} a^c_{42}.$$
We could do a similar expansion about any row or column and they would all
yield the same result. Note the recursive nature of this expansion.
Example. A concrete example for the determinant of a particular 3×3 matrix by expanding the cofactors of the first row is
$$\begin{vmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \end{vmatrix} = 0 \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} - 1 \begin{vmatrix} 3 & 5 \\ 6 & 8 \end{vmatrix} + 2 \begin{vmatrix} 3 & 4 \\ 6 & 7 \end{vmatrix}$$
$$= 0(32 - 35) - 1(24 - 30) + 2(21 - 24) = 0.$$
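To make the recursion explicit, here is a short Python sketch that computes determinants by cofactor expansion along the first row (fine for illustration; practical code uses factorization methods instead). It reproduces the result above:

```python
def det(m):
    """Determinant by recursive cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        # Checkerboard sign: + for even column index, - for odd.
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[0, 1, 2],
           [3, 4, 5],
           [6, 7, 8]]))   # 0, matching the example above
```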