Rogers, Computational Chemistry Using the PC

APPLICATIONS OF MATRIX ALGEBRA

Matrix Multiplication

Multiplication of a matrix A by a scalar x follows the rules one would expect from the algebra of numbers: Each element of A is multiplied by the scalar. If

E = xA   (2-6)

then

e_ij = x a_ij   (2-7)

Multiplication of two matrices, however, is quite different from multiplication of two numbers. The first row of the premultiplying matrix is multiplied element by element into the first column of the postmultiplying matrix, and the resulting sum is the first element in the product matrix. This process is repeated with the first row of the premultiplying matrix and the second column of the postmultiplying matrix to obtain the second element in the product matrix and so on, until all of the elements of the product matrix have been filled in. If

F = AB   (2-8)

where A is the premultiplying matrix and B is the postmultiplying matrix, then

f_ij = Σ_{k=1}^{n} a_ik b_kj   (2-9)

To be conformable to multiplication, the horizontal dimension of A must be the same as the vertical dimension of B, that is, n_A = m_B. Square matrices of the same size are always conformable to multiplication. This unusual definition of multiplication, with its rules for dimensions, will become clear with repeated use. The matrices we shall be interested in will usually be square; you should assume that the matrices discussed below are square unless otherwise stipulated. The rules for rectangular matrices and column and row matrices will be developed as needed.
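Equation (2-9) and the conformability rule translate directly into a triple loop. The sketch below uses plain Python rather than the chapter's BASIC; the function name matmul is our own:

```python
def matmul(A, B):
    """Multiply matrices per Eq. (2-9): f_ij = sum over k of a_ik * b_kj.

    A is m x n; B must be n x p (conformable: columns of A = rows of B).
    """
    m, n = len(A), len(A[0])
    if len(B) != n:
        raise ValueError("not conformable: n_A must equal m_B")
    p = len(B[0])
    F = [[0] * p for _ in range(m)]
    for i in range(m):            # row of the premultiplying matrix
        for j in range(p):        # column of the postmultiplying matrix
            for k in range(n):    # element-by-element sum along row i, column j
                F[i][j] += A[i][k] * B[k][j]
    return F
```

For square matrices of the same size the conformability test always passes, matching the remark in the text.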

Except in special cases, matrix multiplication is not commutative,

AB ≠ BA   (general case)   (2-10)

which is why we are careful to distinguish between the premultiplying and postmultiplying matrices.

Exercise 2-2

Find the product AB and the product BA where

A = [ 1  2 ]        and        B = [ 5  6 ]
    [ 3  4 ]                       [ 7  8 ]


Solution 2-2

 

 

 

 

 

 

 

AB = [ 19  22 ]        and        BA = [ 23  34 ]
     [ 43  50 ]                        [ 31  46 ]

Division of Matrices

Division of matrices is not defined, but the equivalent operation of multiplication by an inverse matrix (if it exists) is defined. If a matrix A is multiplied by its own inverse matrix, A^-1, the unit matrix I is obtained. The unit matrix has 1s on its principal diagonal (the longest diagonal from upper left to lower right) and 0s elsewhere; for example, a 3 × 3 unit matrix is

I = [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0  1 ]

The unit matrix plays the same role in matrix algebra that 1 plays in ordinary algebra. Multiplication of a matrix by the unit matrix leaves it unchanged:

AI = A   (2-11)

Inverse matrices are among the special matrices that commute

 

AA^-1 = A^-1A = I   (2-12)

Among the ordinary numbers, only 0 has no inverse. Many matrices have no inverse. The question of whether a matrix A has or does not have a defined inverse is closely related to the question of whether a set of simultaneous equations has or does not have a unique set of solutions. We shall consider this question more fully later, but for now recall that if one equation in a pair of simultaneous equations is a multiple of the other,

x + 2y = 4
2x + 4y = 8   (2-13)

no unique solution exists. Similarly for matrices, if one row (or column) of elements is a multiple of any other in the matrix, for example,

A = [ 1  2 ]   (2-14)
    [ 2  4 ]

no inverse exists.
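This failure can be seen numerically. A hedged numpy sketch (numpy stands in for the chapter's BASIC and Mathcad):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row is twice the first, as in Eq. (2-14)

# The determinant of a singular matrix is zero, and inversion fails outright.
print(np.linalg.det(A))      # zero, up to rounding
try:
    np.linalg.inv(A)
    inverse_exists = True
except np.linalg.LinAlgError:
    inverse_exists = False
print(inverse_exists)
```

The raised LinAlgError is numpy's way of reporting exactly the situation described in the text: dependent rows, hence no inverse.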


Exercise 2-3

Obtain the product matrix AB where

A = [ 1  2  3 ]        and        B = [ 1  0  1 ]
    [ 4  5  6 ]                       [ 2  1  2 ]
    [ 7  8  9 ]                       [ 4  1  3 ]

Solve the problem by hand. The operation requires 27 individual multiplications and 9 three-term additions.

Exercise 2-4

Write a short BASIC program to solve for AB above. Solve for BA. Do AB and BA commute? Solve the same problem using Mathcad.

Solutions 2-3 and 2-4

Both problems can be solved by hand, by writing a short BASIC program, or by Mathcad as follows:

A := [ 1  2  3 ]        B := [ 1  0  1 ]
     [ 4  5  6 ]             [ 2  1  2 ]
     [ 7  8  9 ]             [ 4  1  3 ]

A·B = [ 17   5  14 ]        B·A = [  8  10  12 ]
      [ 38  11  32 ]              [ 20  25  30 ]
      [ 59  17  50 ]              [ 29  37  45 ]

Note that in Mathcad, both matrices must be defined above the problem to be worked. In Mathcad, the symbol A := means "matrix A is set equal to."
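Exercise 2-4 asks for a short program; an equivalent numpy sketch (Python here, in place of BASIC or Mathcad) is:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
B = np.array([[1, 0, 1],
              [2, 1, 2],
              [4, 1, 3]])

AB = A @ B                      # premultiply by A
BA = B @ A                      # premultiply by B
print(AB)
print(BA)
print(np.array_equal(AB, BA))   # False: A and B do not commute
```

The two products differ in every element, confirming Eq. (2-10) for this pair.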

Powers and Roots of Matrices

If two square matrices of the same size can be multiplied, then a square matrix can be multiplied into itself to obtain A^2, A^3, or A^n. A is the square root of A^2 and the nth root of A^n. A number has only two square roots, but a matrix has infinitely many square roots. This will be demonstrated in the problems at the end of this chapter.
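One family of square roots is easy to exhibit. In the 2 × 2 case, any matrix [[a, b], [c, -a]] with a² + bc = 1 squares to the identity, so even I has infinitely many square roots. A small numpy check (our own construction, not from the text):

```python
import numpy as np

I2 = np.eye(2)

def root_of_identity(a, b):
    # Any [[a, b], [c, -a]] with a*a + b*c = 1 satisfies R @ R = I,
    # because R @ R = (a*a + b*c) * I.  Requires b nonzero.
    c = (1.0 - a * a) / b
    return np.array([[a, b], [c, -a]])

# Three different square roots of the same matrix I2:
for a, b in [(0.0, 1.0), (3.0, 2.0), (0.5, 4.0)]:
    R = root_of_identity(a, b)
    assert np.allclose(R @ R, I2)
```

Since a and b range over a continuum, the loop could be extended indefinitely, which is the sense in which a matrix has infinitely many square roots.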


Matrix Polynomials

Polynomial means ‘‘many terms.’’ Now that we are able to multiply a matrix by a scalar and find powers of matrices, we can form matrix polynomial equations, for example,

A^2 + 4A + 5I = 0   (2-15)

There are infinitely many matrices that satisfy this polynomial equation; hence, the polynomial has infinitely many roots.

Exercise 2-5

Show that the matrix

A = [ 2  3 ]
    [ 3  2 ]

satisfies the polynomial

A^2 - 4A - 5I = 0

Solution 2-5

Using Mathcad we get

 

A := [ 2  3 ]        I := [ 1  0 ]
     [ 3  2 ]             [ 0  1 ]

A^2 - 4·A - 5·I = [ 0  0 ]
                  [ 0  0 ]

 

Notice that the matrix A does not have to be squared before entering it into the Mathcad equation. Mathcad does the work of squaring A as part of the solution of the matrix equation. (The keystroke : translates as := in Mathcad.)
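The same check can be scripted outside Mathcad; a minimal numpy sketch:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [3.0, 2.0]])
I = np.eye(2)

residual = A @ A - 4 * A - 5 * I   # left-hand side of A^2 - 4A - 5I = 0
print(residual)                     # the 2 x 2 null matrix
```

As in Mathcad, the squaring happens as part of evaluating the expression.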

Exercise 2-6

Find the roots of the ordinary polynomial

a^2 - 4a - 5 = 0   (2-16)

Exercise 2-7

Note that the matrix polynomial in A can be factored to give

(A - 5I) and (A + I)


Perform the subtraction and addition above and multiply the resultant matrices to show that the null matrix is obtained.

Solution 2-7

Mathcad yields

A := [ 2  3 ]        I := [ 1  0 ]
     [ 3  2 ]             [ 0  1 ]

(A - 5·I)·(A + I) = [ 0  0 ]   (2-17)
                    [ 0  0 ]

The Least Equation

The general form for a matrix polynomial equation satisfied by A is

c_m A^m + c_{m-1} A^{m-1} + ... + c_0 I = 0   (2-18)

The least equation is the polynomial equation satisfied by A that has the smallest possible degree. There is only one least equation:

A^k + c_{k-1} A^{k-1} + ... + c_0 I = 0   (2-19)

The degree of the least equation, k, is called the rank of the matrix A. The degree k is never greater than n for the least equation (although there are other equations satisfied by A for which k > n). If k = n, the size of a square matrix, the inverse A^-1 exists. If the matrix is not square or k < n, then A has no inverse.

One method of finding the least equation for the simple second-degree case is illustrated. Find a number r such that

A^2 - rI

is a matrix that has 0 as the lead element (the element in the 1,1 position). Now, find a number s such that

A - sI

has 0 as the lead element. Find a number t such that

(A^2 - rI) - t(A - sI) = 0

This leads to the least equation

A^2 - tA + (ts - r)I = 0   (2-20)


where the coefficients c_0 = ts - r and c_1 = -t in Eq. (2-18). If the coefficient c_0 = 0, the matrix A is singular and has no inverse. The method can be extended to higher degrees, but it soon becomes tedious.
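The r, s, t recipe can be sketched in code for the 2 × 2 case. The function below is our own illustration (it assumes the off-diagonal element a_12 is nonzero) and is applied to a sample matrix not taken from the text:

```python
import numpy as np

def least_equation_2x2(A):
    """Return (c1, c0) in A^2 + c1*A + c0*I = 0 by the text's r, s, t method."""
    A2 = A @ A
    r = A2[0, 0]              # zeroes the lead element of A^2 - r*I
    s = A[0, 0]               # zeroes the lead element of A - s*I
    # Choose t so (A^2 - r*I) - t*(A - s*I) vanishes; since r and s affect
    # only the diagonal, comparing the (1,2) elements gives t directly.
    t = A2[0, 1] / A[0, 1]    # assumes the off-diagonal element is nonzero
    return -t, t * s - r

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # an illustrative matrix, not from the text
c1, c0 = least_equation_2x2(A)
print(c1, c0)                 # -5.0 -2.0  ->  A^2 - 5A - 2I = 0
assert np.allclose(A @ A + c1 * A + c0 * np.eye(2), 0)
```

Here c_0 = -2 ≠ 0, so by the criterion above this A has an inverse.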

Exercise 2-8

Use the method given above to find the least equation of the matrix

A = [ 2  1 ]
    [ 1  3 ]

Does A have an inverse?

 

 

 

Solution 2-8

A^2 - 5A + 5I = 0

c_0 ≠ 0, so A^-1 exists and is

A^-1 = [  0.6  -0.2 ]
       [ -0.2   0.4 ]

Verify this solution by calculating and substituting A^2 and 5A to prove the equality. We can see that A^-1 exists because neither row nor column can be obtained from the other by simple multiplication. They are linearly independent.
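The suggested verification is a one-liner in numpy (a sketch, not the book's method):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
I = np.eye(2)

print(A @ A - 5 * A + 5 * I)   # the null matrix: A satisfies its least equation
print(np.linalg.inv(A))        # [[ 0.6 -0.2], [-0.2  0.4]]
```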

Importance of Rank

The degree of the least polynomial of a square matrix A, and hence its rank, is the number of linearly independent rows in A. A linearly independent row of A is a row that cannot be obtained from any other row in A by multiplication by a number. If matrix A has, as its elements, the coefficients of a set of simultaneous nonhomogeneous equations, the rank k is the number of independent equations. If k = n, there are the same number of independent equations as unknowns; A has an inverse and a unique solution set exists. If k < n, the number of independent equations is less than the number of unknowns; A does not have an inverse and no unique solution set exists. The matrix A is square, hence k > n is not possible.

Importance of the Least Equation

A number s for which

 

A - sI   (2-21)

has no reciprocal is called an eigenvalue of A. The equation

 

AV = sV   (2-22)


where V is a vector (or vector function), is called the eigenvalue equation. If

A^k + c_{k-1} A^{k-1} + ... + c_0 I = 0   (2-23)

is the least equation satisfied by A, then s is an eigenvalue only if

 

s^k + c_{k-1} s^{k-1} + ... + c_0 = 0   (2-24)

This is one way of finding eigenvalues. All atomic and molecular energy levels are eigenvalues of a special eigenvalue equation called the Schroedinger equation.
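For a small matrix, both routes to the eigenvalues can be compared directly. A numpy sketch using the matrix of Exercise 2-8, whose least equation is A^2 - 5A + 5I = 0:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # least equation: A^2 - 5A + 5I = 0

# Route 1: roots of the scalar polynomial s^2 - 5s + 5 = 0, per Eq. (2-24).
s_poly = np.roots([1.0, -5.0, 5.0])

# Route 2: eigenvalues from a direct numerical diagonalization.
s_eig = np.linalg.eigvals(A)

print(sorted(s_poly.real), sorted(s_eig.real))   # the same pair of numbers
```

The agreement illustrates the claim: the eigenvalues are exactly the scalar roots of the least equation.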

Exercise 2-9

Perform the matrix subtraction

A - EI

where

A = [ a  b ]
    [ b  a ]

What is the condition on the resulting matrix that must be met if E is to be an eigenvalue of A?

Solution 2-9

 

 

 

 

 

 

 

 

 

A - EI = [ a - E     b   ]
         [   b     a - E ]

This matrix must have no inverse.

Historical Note. It is interesting to note (Pauling and Wilson, 1935) that the very first systematic approach to what we now call quantum mechanics was made by Heisenberg, who began to develop his own algebra to describe the frequencies and intensities of spectral transitions. It was soon seen by Born and Jordan that Heisenberg’s ‘‘new’’ algebra is really matrix algebra. Heisenberg’s eigenfunctions were later called wave functions by Schroedinger in an independent but equivalent method. Schroedinger’s method is now called wave mechanics and is the method most familiar to chemists. Heisenberg’s method is called matrix mechanics.

Special Matrices

The transpose AT of a matrix is obtained by reflecting the matrix through its principal diagonal:

(A^T)_ij = a_ji   (2-25)


Properties of the transpose include

(A + B)^T = A^T + B^T   (2-26)

and

(AB)^T = B^T A^T   (2-27)

(note the order of A and B).

Exercise 2-10

Demonstrate that properties (2-26) and (2-27) hold for arbitrarily selected matrices A and B.
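One way to carry out Exercise 2-10 is numerically, with arbitrary integer matrices (a numpy sketch; the random matrices are our own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (3, 3))      # arbitrarily selected test matrices
B = rng.integers(-5, 5, (3, 3))

print(np.array_equal((A + B).T, A.T + B.T))   # property (2-26)
print(np.array_equal((A @ B).T, B.T @ A.T))   # property (2-27): order reverses
```

Both checks print True; note that (AB)^T = A^T B^T would generally fail.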

A symmetric matrix equals its own transpose.

 

A = A^T   (2-28)

Exercise 2-11

 

Give three examples of symmetric matrices.

 

The transpose of an orthogonal matrix is equal to its inverse

 

A^T = A^-1   (2-29)

The trace of a matrix is the sum of the elements on its principal diagonal

 

tr(A) = Σ_i a_ii   (2-30)

Exercise 2-12

What is the trace of a unit matrix of size n?

A diagonal matrix has nonzero elements only on the principal diagonal and zeros elsewhere. The unit matrix is a diagonal matrix. Large matrices with small matrices symmetrically lined up along the principal diagonal are sometimes encountered in computational chemistry.

A tridiagonal matrix has nonzero elements only on the principal diagonal and on the diagonals on either side of the principal diagonal. If the diagonals on either side of the principal diagonal are the same, the matrix is a symmetric tridiagonal matrix.

Triangular matrices have nonzero elements only on and above the principal diagonal (upper triangular) or on and below the principal diagonal (lower triangular). Some of the more important numerical methods are devoted to transforming a general matrix into its equivalent diagonal or triangular form.
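These special forms are easy to construct and inspect in code. A numpy sketch of a symmetric tridiagonal matrix and its triangular parts (the size and values are arbitrary):

```python
import numpy as np

n = 5
main = 2.0 * np.ones(n)        # principal diagonal
off = -1.0 * np.ones(n - 1)    # flanking diagonals; equal, so T is symmetric

# Symmetric tridiagonal matrix: nonzero entries only on three diagonals.
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

U = np.triu(T)                 # upper triangular part (on and above the diagonal)
L = np.tril(T)                 # lower triangular part (on and below the diagonal)

print(np.array_equal(T, T.T))  # True: the matrix equals its own transpose
```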


A column matrix is an ordered set of numbers; therefore, it satisfies the definition of a vector. The 2 × 1 array

x = [ 1 ]
    [ 2 ]

is both a matrix and a vector in 2-space. An m × 1 matrix has one element in each of m rows; therefore, it is one way of representing a vector in an m-dimensional space. An m × n matrix may be thought of as representing n vectors in m-space, where each vector is a column in the matrix. The transpose of a column matrix is a row matrix, which can also represent a vector.

The Transformation Matrix

If a vector x is transformed into a new vector x' by a matrix multiplication

x' = Ax   (2-31)

then A is a transformation matrix. If several vectors are transformed in the same operation, where X is the matrix consisting of the column vectors xi, we write

X' = AX

If the transformation matrix is orthogonal, then the transformation is orthogonal. If the elements of A are numbers (as distinct from functions), the transformation is linear. One important characteristic of an orthogonal matrix is that none of its columns is linearly dependent on any other column. If the transformation matrix is orthogonal, A^-1 exists and is equal to the transpose of A. Because A^-1 = A^T,

AA^T = AA^-1 = A^-1A = A^TA = I   (2-32)

Orthogonal transformations preserve the lengths of vectors. If the same orthogonal transformation is applied to two vectors, the angle between them is preserved as well. Because of these restrictions, we can think of orthogonal transformations as rotations in a plane (although the formal definition is a little more complicated).
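A plane rotation makes this concrete. The sketch below builds the standard 2 × 2 rotation matrix (our own example angle and vectors) and checks Eq. (2-32) plus the preservation of lengths and angles:

```python
import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation in the plane

print(np.allclose(A.T @ A, np.eye(2)))            # orthogonal: A^T = A^-1

x = np.array([3.0, 4.0])
y = np.array([1.0, -2.0])
print(np.isclose(np.linalg.norm(A @ x), np.linalg.norm(x)))   # length kept
print(np.isclose((A @ x) @ (A @ y), x @ y))       # dot product, hence angle, kept
```

All three checks print True for any angle theta, since the dot product fixes both lengths and the angle between the two vectors.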

If two matrices are related as

B = C^-1 A C   (2-33)

then B and A are similar matrices. If the squares of the coefficients of each of two or more orthogonal vectors add up to 1, the vectors are orthonormal. If A is symmetric, the vectors of A are or can be chosen to be orthonormal and X in the equation

AX = XD   (2-34)

D = X^-1 A X = X^T A X


holds, where the vectors comprising the matrix X are called eigenvectors. D has been chosen to be a diagonal matrix with the eigenvalues of A on the principal diagonal. The question is whether we can find X. If we can, we have successfully converted A into a similar matrix D that has only one element in each row or column. If A was the matrix of coefficients of (possibly many) simultaneous equations, D is the matrix of coefficients of a mathematically similar set of equations, each equation containing only one term. Thus the entire set of equations has been solved if we can find X in Eqs. 2-34. We shall go into the details of this problem later. The point here is that matrix A can be reduced to a very simple form D if we can find or approximate the matrix of eigenvectors X.
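For a symmetric matrix, the eigenvector matrix X and diagonal D can be found numerically. A numpy sketch using an assumed example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric, so X can be chosen orthonormal

eigvals, X = np.linalg.eigh(A)      # columns of X are the eigenvectors
D = X.T @ A @ X                     # similarity transformation of Eqs. (2-34)

print(np.allclose(D, np.diag(eigvals)))          # D is diagonal
print(np.allclose(A @ X, X @ np.diag(eigvals)))  # AX = XD holds
```

The numerical methods mentioned in the text are, in effect, ways of approximating this X without calling a canned routine.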

Complex Matrices

Numbers may be real, a, imaginary, ic, or complex, a + ic, where i = √(-1). The elements in a matrix may be complex numbers. If so, the matrix is complex

A = B + iC   (2-35)

(For a real matrix, C = 0.) The complex conjugate of a complex matrix A is A*. In A*, each element of A is replaced by its complex conjugate; a + ic becomes a - ic. The complex conjugate A* of A is

A* = B - iC   (2-36)

The Hermitian conjugate of A is the transpose of A*

A^H = (A*)^T   (2-37)

The Hermitian conjugate plays the same role for complex matrices that the transpose plays for real matrices; a matrix equal to its own Hermitian conjugate (a Hermitian matrix) is the complex analog of a symmetric matrix.

If the Hermitian conjugate of a square complex matrix is equal to its inverse,

U^H = U^-1   (2-38)

the matrix U is called a unitary matrix. A Hermitian matrix is reduced to diagonal form by a unitary transformation

D = U^H A U = U^-1 A U   (2-39)

where D is real with elements equal to the eigenvalues of A. U has columns that are eigenvectors of A.
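A numpy sketch with a small Hermitian matrix (our own example) shows D real and U unitary:

```python
import numpy as np

A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])       # Hermitian: A equals (A*)^T

eigvals, U = np.linalg.eigh(A)           # columns of U are eigenvectors of A
D = U.conj().T @ A @ U                   # the unitary transformation of Eq. (2-39)

print(np.allclose(A, A.conj().T))             # Hermitian check
print(np.allclose(U.conj().T @ U, np.eye(2))) # unitary: U^H = U^-1
print(np.allclose(D, np.diag(eigvals)))       # D diagonal, with real eigenvalues
```

Even though A is complex, eigh returns the eigenvalues as real numbers, exactly as the text asserts for a Hermitian matrix.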

What’s Going On Here?

The best way to avoid losing the physics of these procedures is to think of a particle describing an elliptical path about an origin. If we choose our coordinate system in an arbitrary way, the result might look like Fig. 2-1 (left).
