

Appendix I

 

 

 

 

Written out explicitly, the product $\mathbf{AB}$ of equation (I.3) is, for example,

$$\mathbf{AB} = \begin{pmatrix} a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31} & a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32} \\ a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31} & a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32} \\ a_{31}b_{11}+a_{32}b_{21}+a_{33}b_{31} & a_{31}b_{12}+a_{32}b_{22}+a_{33}b_{32} \end{pmatrix}$$

Continued products, such as $\mathbf{ABC}$, may be defined and evaluated if the matrices are conformable. In such cases, multiplication is associative, for example

$$\mathbf{ABC} = \mathbf{A}(\mathbf{BC}) = (\mathbf{AB})\mathbf{C} = \mathbf{D}; \qquad d_{il} = \sum_{j}\sum_{k} a_{ij} b_{jk} c_{kl} \tag{I.4}$$
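As a quick numerical check of equation (I.4), the following sketch uses NumPy (an assumed choice of library, not part of the original text) to verify associativity and the element-wise sum:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # 3 x 4
B = rng.standard_normal((4, 5))   # 4 x 5, conformable with A
C = rng.standard_normal((5, 2))   # 5 x 2, conformable with AB

# D = ABC evaluated in the two groupings of equation (I.4)
D1 = A @ (B @ C)
D2 = (A @ B) @ C
print(np.allclose(D1, D2))        # True: multiplication is associative

# element-wise form d_il = sum_j sum_k a_ij b_jk c_kl
D3 = np.einsum('ij,jk,kl->il', A, B, C)
print(np.allclose(D1, D3))        # True
```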

For the null matrix $\mathbf{0}$, all the matrix elements are zero

$$\mathbf{0} = \begin{pmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix} \tag{I.5}$$

and we have

$$\mathbf{0} + \mathbf{A} = \mathbf{A} + \mathbf{0} = \mathbf{A} \tag{I.6}$$

The product of an arbitrary $m \times n$ matrix $\mathbf{A}$ with a conformable $n \times p$ null matrix is the $m \times p$ null matrix

$$\mathbf{A}\mathbf{0} = \mathbf{0} \tag{I.7}$$

In matrix algebra it is possible for the product of two conformable matrices, neither of which is a null matrix, to be a null matrix. For example, if $\mathbf{A}$ and $\mathbf{B}$ are

$$\mathbf{A} = \begin{pmatrix} 1 & 1 \\ -2 & -2 \\ 3 & 3 \end{pmatrix}; \qquad \mathbf{B} = \begin{pmatrix} -1 & -1 \\ 1 & 1 \end{pmatrix}$$

then the product $\mathbf{AB}$ is the $3 \times 2$ null matrix.
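A one-line NumPy check of this example (the library choice is an assumption for illustration):

```python
import numpy as np

A = np.array([[1, 1], [-2, -2], [3, 3]])   # 3 x 2, not a null matrix
B = np.array([[-1, -1], [1, 1]])           # 2 x 2, not a null matrix
print(A @ B)                               # the 3 x 2 null matrix
```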

 

 

 

The transpose matrix $\mathbf{A}^{\mathrm{T}}$ of a matrix $\mathbf{A}$ is obtained by interchanging the rows and columns of $\mathbf{A}$. If the matrix $\mathbf{A}$ is given by equation (I.1), then its transpose is

$$\mathbf{A}^{\mathrm{T}} = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{pmatrix} \tag{I.8}$$

Thus, the elements $a^{\mathrm{T}}_{ij}$ of $\mathbf{A}^{\mathrm{T}}$ are given by $a^{\mathrm{T}}_{ij} = a_{ji}$.

Let the matrix $\mathbf{C}$ be the product of matrices $\mathbf{A}$ and $\mathbf{B}$ as in equation (I.3). The elements $c^{\mathrm{T}}_{ik}$ of the transpose of $\mathbf{C}$ are then given by

$$c^{\mathrm{T}}_{ik} = c_{ki} = \sum_{j=1}^{n} a_{kj} b_{ji} = \sum_{j=1}^{n} a^{\mathrm{T}}_{jk} b^{\mathrm{T}}_{ij} = \sum_{j=1}^{n} b^{\mathrm{T}}_{ij} a^{\mathrm{T}}_{jk} \tag{I.9}$$

where we have noted that $a^{\mathrm{T}}_{\alpha\beta} = a_{\beta\alpha}$ and $b^{\mathrm{T}}_{\alpha\beta} = b_{\beta\alpha}$. Thus, we see that

 

$$\mathbf{C}^{\mathrm{T}} = \mathbf{B}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}}$$

or

$$(\mathbf{AB})^{\mathrm{T}} = \mathbf{B}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}} \tag{I.10}$$

This result may be generalized to give

$$(\mathbf{AB}\cdots\mathbf{Q})^{\mathrm{T}} = \mathbf{Q}^{\mathrm{T}}\cdots\mathbf{B}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}} \tag{I.11}$$

as long as the matrices are conformable.
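Equations (I.10) and (I.11) are easy to verify numerically; a minimal NumPy sketch (assumed here for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
Q = rng.standard_normal((4, 5))

# (AB)^T = B^T A^T, equation (I.10)
print(np.allclose((A @ B).T, B.T @ A.T))            # True

# (AB...Q)^T = Q^T ... B^T A^T, equation (I.11)
print(np.allclose((A @ B @ Q).T, Q.T @ B.T @ A.T))  # True
```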

 

 

 

 

If each element $a_{ij}$ in a matrix $\mathbf{A}$ is replaced by its complex conjugate $a^{*}_{ij}$, then the resulting matrix $\mathbf{A}^{*}$ is called the conjugate of $\mathbf{A}$. The transposed conjugate of $\mathbf{A}$ is called the adjoint¹ of $\mathbf{A}$ and is denoted by $\mathbf{A}^{\dagger}$. The elements $a^{\dagger}_{ij}$ of $\mathbf{A}^{\dagger}$ are obviously given by $a^{\dagger}_{ij} = a^{*}_{ji}$.

 

 

Square matrices

Square matrices are of particular interest because they apply to many physical situations.

A square matrix of order $n$ is symmetric if $a_{ij} = a_{ji}$ $(i, j = 1, 2, \ldots, n)$, so that $\mathbf{A} = \mathbf{A}^{\mathrm{T}}$, and is antisymmetric if $a_{ij} = -a_{ji}$ $(i, j = 1, 2, \ldots, n)$, so that $\mathbf{A} = -\mathbf{A}^{\mathrm{T}}$. The diagonal elements of an antisymmetric matrix must all be zero. Any arbitrary square matrix $\mathbf{A}$ may be written as the sum of a symmetric matrix $\mathbf{A}^{(s)}$ and an antisymmetric matrix $\mathbf{A}^{(a)}$

$$\mathbf{A} = \mathbf{A}^{(s)} + \mathbf{A}^{(a)} \tag{I.12}$$

where

$$a^{(s)}_{ij} = \tfrac{1}{2}(a_{ij} + a_{ji}); \qquad a^{(a)}_{ij} = \tfrac{1}{2}(a_{ij} - a_{ji}) \tag{I.13}$$
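The decomposition (I.12)-(I.13) translates directly into code; a minimal NumPy sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

A_s = (A + A.T) / 2        # symmetric part, equation (I.13)
A_a = (A - A.T) / 2        # antisymmetric part, equation (I.13)

print(np.allclose(A, A_s + A_a))       # A = A(s) + A(a), equation (I.12)
print(np.allclose(A_s, A_s.T))         # A(s) is symmetric
print(np.allclose(A_a, -A_a.T))        # A(a) is antisymmetric
print(np.allclose(np.diag(A_a), 0.0))  # diagonal of A(a) is zero
```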

A square matrix $\mathbf{A}$ is diagonal if $a_{ij} = 0$ for $i \neq j$. Thus, a diagonal matrix has the form

$$\mathbf{A} = \begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix} \tag{I.14}$$

 

A diagonal matrix is scalar if all the diagonal elements are equal, $a_1 = a_2 = \cdots = a_n = a$, so that

$$\mathbf{A} = \begin{pmatrix} a & 0 & \cdots & 0 \\ 0 & a & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a \end{pmatrix} \tag{I.15}$$

A special case of a scalar matrix is the unit matrix $\mathbf{I}$, for which $a$ equals unity

$$\mathbf{I} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} \tag{I.16}$$

The elements of the unit matrix are $\delta_{ij}$, the Kronecker delta function.

For square matrices in general, the product $\mathbf{AB}$ is not equal to the product $\mathbf{BA}$. For example, if

$$\mathbf{A} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}; \qquad \mathbf{B} = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$$

then we have

$$\mathbf{AB} = \begin{pmatrix} 0 & 3 \\ 2 & 0 \end{pmatrix}; \qquad \mathbf{BA} = \begin{pmatrix} 0 & 2 \\ 3 & 0 \end{pmatrix} \neq \mathbf{AB}$$

If the product AB equals the product BA, then A and B commute. Any square matrix A commutes with the unit matrix of the same order

¹ Mathematics texts use the term transpose conjugate for this matrix and apply the term adjoint to the adjugate matrix defined in equation (I.28).

$$\mathbf{A}\mathbf{I} = \mathbf{I}\mathbf{A} = \mathbf{A} \tag{I.17}$$

Moreover, two diagonal matrices of the same order commute

$$\mathbf{AB} = \begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix} \begin{pmatrix} b_1 & 0 & \cdots & 0 \\ 0 & b_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & b_n \end{pmatrix} = \begin{pmatrix} a_1 b_1 & 0 & \cdots & 0 \\ 0 & a_2 b_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n b_n \end{pmatrix} = \mathbf{BA} \tag{I.18}$$
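These commutation statements can be checked numerically; a brief NumPy sketch using the $2 \times 2$ example above (the library choice is an assumption):

```python
import numpy as np

A = np.array([[0, 1], [1, 0]])
B = np.array([[2, 0], [0, 3]])
print(A @ B)                        # [[0 3], [2 0]]
print(B @ A)                        # [[0 2], [3 0]]  -- AB != BA
I2 = np.eye(2)
print(np.allclose(A @ I2, I2 @ A))  # True: A commutes with I, eq. (I.17)

# two diagonal matrices of the same order commute, equation (I.18)
D1, D2 = np.diag([1.0, 2.0, 3.0]), np.diag([4.0, 5.0, 6.0])
print(np.allclose(D1 @ D2, D2 @ D1))  # True
```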

Determinants

For a square matrix A, there exists a number called the determinant of the matrix. This determinant is denoted by

$$|\mathbf{A}| = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} \tag{I.19}$$

and is defined as the summation

$$|\mathbf{A}| = \sum_{P} \delta_P\, a_{1i}\, a_{2j} \cdots a_{nq} \tag{I.20}$$

where $\delta_P = \pm 1$. The summation is taken over all possible permutations $i, j, \ldots, q$ of the sequence $1, 2, \ldots, n$. The value of $\delta_P$ is $+1$ $(-1)$ if the order $i, j, \ldots, q$ is obtained by an even (odd) number of pair interchanges from the order $1, 2, \ldots, n$. There are $n!$ terms in the summation, half with $\delta_P = +1$ and half with $\delta_P = -1$. Thus, for a second-order determinant, we have

 

 

 

 

 

 

 

$$|\mathbf{A}| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21} \tag{I.21}$$

and for a third-order determinant, we have

$$|\mathbf{A}| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - (a_{11}a_{23}a_{32} + a_{12}a_{21}a_{33} + a_{13}a_{22}a_{31}) \tag{I.22}$$

If the determinant $|\mathbf{A}|$ of the matrix $\mathbf{A}$ vanishes, then the matrix $\mathbf{A}$ is said to be singular. Otherwise, the matrix $\mathbf{A}$ is non-singular.

The determinant $|\mathbf{A}|$ has the following properties, which are easily derived from the definition (I.20).

1. The interchange of any two rows or any two columns changes the sign of the determinant.
2. Multiplication of all the elements in any row or in any column by a constant $k$ gives a new determinant of value $k|\mathbf{A}|$. (Note that if $\mathbf{B} = k\mathbf{A}$, then $|\mathbf{B}| = k^{n}|\mathbf{A}|$.)
3. The value of the determinant is zero if any two rows or any two columns are identical, or if each element in any row or in any column is zero. As a special case of properties 2 and 3, a determinant vanishes if any two rows or any two columns are proportional.


4. The value of a determinant is unchanged if the rows are written as columns. Thus, the determinants of a matrix $\mathbf{A}$ and its transpose matrix $\mathbf{A}^{\mathrm{T}}$ are equal.
5. The value of a determinant is unchanged if, to each element of one row (column), a constant $k$ times the corresponding element of another row (column) is added. Thus, we have, for example

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \begin{vmatrix} a_{11}+ka_{12} & a_{12} & a_{13} \\ a_{21}+ka_{22} & a_{22} & a_{23} \\ a_{31}+ka_{32} & a_{32} & a_{33} \end{vmatrix} = \begin{vmatrix} a_{11}+ka_{31} & a_{12}+ka_{32} & a_{13}+ka_{33} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$$

Each element $a_{ij}$ of the determinant $|\mathbf{A}|$ in equation (I.19) has a cofactor $C_{ij}$, which is an $(n-1)$-order determinant. This cofactor $C_{ij}$ is constructed by deleting the $i$th row and the $j$th column of $|\mathbf{A}|$ and then multiplying by $(-1)^{i+j}$. For example, the cofactor of the element $a_{12}$ in equation (I.22) is

 

 

 

 

 

 

 

 

 

 

 

 

 

 

$$C_{12} = -\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} = a_{23}a_{31} - a_{21}a_{33}$$

The summation on the right-hand side of equation (I.20) may be expressed in terms of the cofactors of the first row of $|\mathbf{A}|$, so that (I.20) becomes

$$|\mathbf{A}| = a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n} = \sum_{k=1}^{n} a_{1k}C_{1k} \tag{I.23}$$

 

Alternatively, the expression for $|\mathbf{A}|$ in equation (I.20) may be expanded in terms of any row $i$

$$|\mathbf{A}| = \sum_{k=1}^{n} a_{ik}C_{ik}, \qquad i = 1, 2, \ldots, n \tag{I.24}$$

or in terms of any column $j$

$$|\mathbf{A}| = \sum_{k=1}^{n} a_{kj}C_{kj}, \qquad j = 1, 2, \ldots, n \tag{I.25}$$

Equations (I.20), (I.24), and (I.25) are identical; they are just expressed in different notations.
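The cofactor expansion (I.23)-(I.24) translates into a simple, if inefficient, recursive algorithm ($n!$ terms, in contrast to the $O(n^3)$ elimination methods used in practice). A minimal Python sketch; the function name `det_cofactor` is invented for illustration:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row, eq. (I.23)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)
        cofactor = (-1) ** k * det_cofactor(minor)  # C_1k = (-1)^(1+k) x minor
        total += A[0, k] * cofactor
    return total

A = np.array([[2.0, 1.0, 3.0], [0.0, -1.0, 4.0], [5.0, 2.0, 1.0]])
print(det_cofactor(A), np.linalg.det(A))   # the two values agree
```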

Now suppose that row 1 and row $i$ of the determinant $|\mathbf{A}|$ are identical. Equation (I.23) then becomes

$$|\mathbf{A}| = a_{i1}C_{11} + a_{i2}C_{12} + \cdots + a_{in}C_{1n} = \sum_{k=1}^{n} a_{ik}C_{1k} = 0$$

where the determinant $|\mathbf{A}|$ vanishes according to property 3. This argument applies to any identical pair of rows or any identical pair of columns, so that equations (I.24) and (I.25) may be generalized

$$\sum_{k=1}^{n} a_{ik}C_{jk} = \sum_{k=1}^{n} a_{ki}C_{kj} = |\mathbf{A}|\,\delta_{ij}, \qquad i, j = 1, 2, \ldots, n \tag{I.26}$$

 

 

It can be shown² that the determinant of the product of two square matrices of the same order is equal to the product of the two determinants, i.e., if $\mathbf{C} = \mathbf{AB}$, then

$$|\mathbf{C}| = |\mathbf{A}| \cdot |\mathbf{B}| \tag{I.27}$$

² See G. D. Arfken and H. J. Weber (1995) Mathematical Methods for Physicists, 4th edition (Academic Press, San Diego), p. 169.


It follows from equation (I.27) that the product of two non-singular matrices is also non-singular.

Special square matrices

 

 

The adjugate matrix $\hat{\mathbf{A}}$ of the square matrix $\mathbf{A}$ is defined as

$$\hat{\mathbf{A}} = \begin{pmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{pmatrix} \tag{I.28}$$

where $C_{ij}$ are the cofactors of the elements $a_{ij}$ of the determinant $|\mathbf{A}|$ of $\mathbf{A}$. Note that the element $\hat{a}_{kl}$ of $\hat{\mathbf{A}}$ is the cofactor $C_{lk}$. The matrix product $\mathbf{A}\hat{\mathbf{A}}$ is a matrix $\mathbf{B}$ whose elements $b_{ij}$ are given by

$$b_{ij} = \sum_{k=1}^{n} a_{ik}\hat{a}_{kj} = \sum_{k=1}^{n} a_{ik}C_{jk} = |\mathbf{A}|\,\delta_{ij} \tag{I.29}$$

where equation (I.26) was introduced. Thus, we have

 

 

 

 

 

 

 

 

 

 

 

$$\mathbf{A}\hat{\mathbf{A}} = \mathbf{B} = |\mathbf{A}|\,\mathbf{I} = \hat{\mathbf{A}}\mathbf{A} \tag{I.30}$$

where $\mathbf{I}$ is the unit matrix in equation (I.16), and we see that the matrices $\mathbf{A}$ and $\hat{\mathbf{A}}$ commute.

Any non-singular square matrix $\mathbf{A}$ possesses an inverse matrix $\mathbf{A}^{-1}$ defined as

$$\mathbf{A}^{-1} = \hat{\mathbf{A}}/|\mathbf{A}| \tag{I.31}$$

From equation (I.30) we observe that

$$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I} \tag{I.32}$$
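Equations (I.28)-(I.32) can be realized directly in code. A NumPy sketch that builds the adjugate from cofactors (the helper name `adjugate` is invented for illustration):

```python
import numpy as np

def adjugate(A):
    """Adjugate matrix of equation (I.28): element (k,l) is the cofactor C_lk."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T                    # transpose of the cofactor matrix

A = np.array([[2.0, 1.0], [5.0, 3.0]])
A_hat = adjugate(A)
detA = np.linalg.det(A)
print(np.allclose(A @ A_hat, detA * np.eye(2)))     # A Ahat = |A| I, eq. (I.30)
print(np.allclose(A_hat / detA, np.linalg.inv(A)))  # A^-1 = Ahat/|A|, eq. (I.31)
```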

Consider three square matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$ such that $\mathbf{AB} = \mathbf{C}$. Then we have

$$\mathbf{A}^{-1}\mathbf{AB} = \mathbf{A}^{-1}\mathbf{C}$$

or

$$\mathbf{B} = \mathbf{A}^{-1}\mathbf{C} \tag{I.33}$$

Thus, the inverse matrix plays the role of division in matrix algebra. Multiplication of equation (I.33) from the left by $\mathbf{B}^{-1}$ and from the right by $\mathbf{C}^{-1}$ yields

$$\mathbf{C}^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1}$$

or

$$(\mathbf{AB})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1} \tag{I.34}$$

This result may easily be generalized to show that

$$(\mathbf{AB}\cdots\mathbf{Q})^{-1} = \mathbf{Q}^{-1}\cdots\mathbf{B}^{-1}\mathbf{A}^{-1} \tag{I.35}$$

 

A square matrix $\mathbf{A}$ is hermitian or self-adjoint if it is equal to its adjoint, i.e., if $\mathbf{A} = \mathbf{A}^{\dagger}$ or $a_{ij} = a^{*}_{ji}$. Thus, the diagonal elements of a hermitian matrix are real.

 

A square matrix $\mathbf{A}$ is orthogonal if it satisfies the relation

$$\mathbf{A}\mathbf{A}^{\mathrm{T}} = \mathbf{A}^{\mathrm{T}}\mathbf{A} = \mathbf{I}$$

If we multiply $\mathbf{A}\mathbf{A}^{\mathrm{T}} = \mathbf{I}$ from the left by $\mathbf{A}^{-1}$, then we have the equivalent definition

$$\mathbf{A}^{\mathrm{T}} = \mathbf{A}^{-1}$$

Since the determinants $|\mathbf{A}|$ and $|\mathbf{A}^{\mathrm{T}}|$ are equal, we have from equation (I.27) $|\mathbf{A}|^2 = 1$ or $|\mathbf{A}| = \pm 1$.


The product of two orthogonal matrices is an orthogonal matrix, as shown by the following sequence

$$(\mathbf{AB})^{\mathrm{T}} = \mathbf{B}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}} = \mathbf{B}^{-1}\mathbf{A}^{-1} = (\mathbf{AB})^{-1}$$

where equations (I.10) and (I.34) were used. The inverse of an orthogonal matrix is also an orthogonal matrix, as shown by taking the transpose of $\mathbf{A}^{-1}$ and noting that the order of transposition and inversion may be reversed

$$(\mathbf{A}^{-1})^{\mathrm{T}} = (\mathbf{A}^{\mathrm{T}})^{-1} = (\mathbf{A}^{-1})^{-1}$$

A square matrix $\mathbf{A}$ is unitary if its inverse is equal to its adjoint, i.e., if $\mathbf{A}^{-1} = \mathbf{A}^{\dagger}$ or if $\mathbf{A}\mathbf{A}^{\dagger} = \mathbf{A}^{\dagger}\mathbf{A} = \mathbf{I}$. For a real matrix, with all elements real so that $a_{ij} = a^{*}_{ij}$, there is no distinction between an orthogonal and a unitary matrix. In that case, we have $\mathbf{A}^{\dagger} = \mathbf{A}^{\mathrm{T}} = \mathbf{A}^{-1}$.
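Both definitions are easy to test numerically. A NumPy sketch with a plane rotation (orthogonal) and a complex unitary matrix; the specific matrices are illustrative assumptions:

```python
import numpy as np

t = 0.7
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])          # rotation matrix: orthogonal
print(np.allclose(R.T @ R, np.eye(2)))           # R^T R = I
print(np.isclose(np.linalg.det(R), 1.0))         # |R| = +1

U = np.array([[0, 1j], [1j, 0]])                 # a complex unitary matrix
print(np.allclose(U.conj().T @ U, np.eye(2)))    # U† U = I
```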

Linear vector space

A vector $\mathbf{x}$ in three-dimensional cartesian space may be represented as a column matrix

$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \tag{I.36}$$

where $x_1$, $x_2$, $x_3$ are the components of $\mathbf{x}$. The adjoint of the column matrix $\mathbf{x}$ is a row matrix

$$\mathbf{x}^{\dagger} = (x_1^{*} \;\; x_2^{*} \;\; x_3^{*}) \tag{I.37}$$

The scalar product of the vectors $\mathbf{x}$ and $\mathbf{y}$ when expressed in matrix notation is

$$\mathbf{x}^{\dagger}\mathbf{y} = (x_1^{*} \;\; x_2^{*} \;\; x_3^{*}) \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = x_1^{*}y_1 + x_2^{*}y_2 + x_3^{*}y_3 \tag{I.38}$$

Consequently, the magnitude of the vector $\mathbf{x}$ is

$$(\mathbf{x}^{\dagger}\mathbf{x})^{1/2} = (|x_1|^2 + |x_2|^2 + |x_3|^2)^{1/2} \tag{I.39}$$

If the vectors $\mathbf{x}$ and $\mathbf{y}$ are orthogonal, then we have $\mathbf{x}^{\dagger}\mathbf{y} = 0$. The unit vectors $\mathbf{i}$, $\mathbf{j}$, $\mathbf{k}$ in matrix notation are

$$\mathbf{i} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}; \qquad \mathbf{j} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}; \qquad \mathbf{k} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \tag{I.40}$$

A linear operator $\hat{A}$ in three-dimensional cartesian space may be represented as a $3 \times 3$ matrix $\mathbf{A}$ with elements $a_{ij}$. The expression $\mathbf{y} = \hat{A}\mathbf{x}$ in matrix notation becomes

$$\mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \mathbf{A}\mathbf{x} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 \end{pmatrix} \tag{I.41}$$

If $\mathbf{A}$ is non-singular, then in matrix notation the vector $\mathbf{x}$ is related to the vector $\mathbf{y}$ by

$$\mathbf{x} = \mathbf{A}^{-1}\mathbf{y} \tag{I.42}$$
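A numerical illustration of equations (I.41) and (I.42) with NumPy (the matrix and vector values are illustrative assumptions):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])    # a non-singular 3 x 3 matrix
x = np.array([1.0, -1.0, 2.0])

y = A @ x                          # y = Ax, equation (I.41)
x_back = np.linalg.solve(A, y)     # x = A^-1 y, equation (I.42)
print(np.allclose(x, x_back))      # True
```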

The vector concept may be extended to $n$-dimensional cartesian space, where we have $n$ mutually orthogonal axes. Each vector $\mathbf{x}$ then has $n$ components $(x_1, x_2, \ldots, x_n)$ and may be represented as a column matrix $\mathbf{x}$ with $n$ rows. The scalar product of the $n$-dimensional vectors $\mathbf{x}$ and $\mathbf{y}$ in matrix notation is

 

 

 

 

$$\mathbf{x}^{\dagger}\mathbf{y} = x_1^{*}y_1 + x_2^{*}y_2 + \cdots + x_n^{*}y_n \tag{I.43}$$

and the magnitude of $\mathbf{x}$ is

$$(\mathbf{x}^{\dagger}\mathbf{x})^{1/2} = (|x_1|^2 + |x_2|^2 + \cdots + |x_n|^2)^{1/2} \tag{I.44}$$

If we have $\mathbf{x}^{\dagger}\mathbf{y} = 0$, then the vectors $\mathbf{x}$ and $\mathbf{y}$ are orthogonal. The unit vectors $\mathbf{i}_{\alpha}$ $(\alpha = 1, 2, \ldots, n)$ when expressed in matrix notation are

$$\mathbf{i}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}; \qquad \mathbf{i}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}; \qquad \ldots; \qquad \mathbf{i}_n = \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} \tag{I.45}$$

If a vector $\mathbf{y}$ is related to a vector $\mathbf{x}$ by the relation $\mathbf{y} = \mathbf{A}\mathbf{x}$ and if the magnitude of $\mathbf{y}$ is to remain the same as the magnitude of $\mathbf{x}$, then we have

$$\mathbf{x}^{\dagger}\mathbf{x} = \mathbf{y}^{\dagger}\mathbf{y} = (\mathbf{A}\mathbf{x})^{\dagger}\mathbf{A}\mathbf{x} = \mathbf{x}^{\dagger}\mathbf{A}^{\dagger}\mathbf{A}\mathbf{x} \tag{I.46}$$

where equation (I.10) was used. It follows from equation (I.46) that $\mathbf{A}^{\dagger}\mathbf{A} = \mathbf{I}$, so that $\mathbf{A}$ must be unitary.

Eigenvalues

The eigenvalues $\lambda$ of a square matrix $\mathbf{A}$ with elements $a_{ij}$ are defined by the equation

$$\mathbf{A}\mathbf{x} = \lambda\mathbf{x} = \lambda\mathbf{I}\mathbf{x} \tag{I.47}$$

where the eigenvector $\mathbf{x}$ is the column matrix corresponding to an $n$-dimensional vector and $\lambda$ is a scalar quantity. Equation (I.47) may also be written as

$$(\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0} \tag{I.48}$$

If the matrix $(\mathbf{A} - \lambda\mathbf{I})$ were to possess an inverse, we could multiply both sides of equation (I.48) by $(\mathbf{A} - \lambda\mathbf{I})^{-1}$ and obtain $\mathbf{x} = \mathbf{0}$. Since $\mathbf{x}$ is not a null matrix, the matrix $(\mathbf{A} - \lambda\mathbf{I})$ is singular and its determinant vanishes

 

 

 

 

 

 

 

 

$$\begin{vmatrix} a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22}-\lambda & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn}-\lambda \end{vmatrix} = 0 \tag{I.49}$$

The expansion of this determinant is a polynomial of degree $n$ in $\lambda$, giving the characteristic or secular equation

$$\lambda^{n} + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0 = 0 \tag{I.50}$$

where $c_i$ $(i = 0, 1, \ldots, n-1)$ are constants. Equation (I.50) has $n$ roots or eigenvalues $\lambda_{\alpha}$ $(\alpha = 1, 2, \ldots, n)$. It is possible that some of these eigenvalues are degenerate.
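In practice the eigenvalues are computed by standard routines rather than by expanding the secular determinant, but for small $n$ the two routes can be compared directly. A NumPy sketch (illustrative only):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])

# coefficients of the characteristic polynomial |A - lambda*I| = 0
coeffs = np.poly(A)                    # here: lambda^2 - 4 lambda + 3
roots = np.roots(coeffs)               # roots of the secular equation (I.50)
eigs = np.linalg.eigvals(A)            # direct eigenvalue computation
print(np.sort(roots), np.sort(eigs))   # both give 1 and 3
```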

The eigenvalues of a hermitian matrix are real. To prove this statement, we take the adjoint of each side of equation (I.47), apply equation (I.10), and note that $\mathbf{A} = \mathbf{A}^{\dagger}$

$$(\mathbf{A}\mathbf{x})^{\dagger} = \mathbf{x}^{\dagger}\mathbf{A}^{\dagger} = \mathbf{x}^{\dagger}\mathbf{A} = \lambda^{*}\mathbf{x}^{\dagger} \tag{I.51}$$

Multiplying equation (I.47) from the left by $\mathbf{x}^{\dagger}$ and equation (I.51) from the right by $\mathbf{x}$, we have

$$\mathbf{x}^{\dagger}\mathbf{A}\mathbf{x} = \lambda\mathbf{x}^{\dagger}\mathbf{x}; \qquad \mathbf{x}^{\dagger}\mathbf{A}\mathbf{x} = \lambda^{*}\mathbf{x}^{\dagger}\mathbf{x}$$

Since the magnitude of the vector $\mathbf{x}$ is not zero, we see that $\lambda = \lambda^{*}$ and $\lambda$ is real.


The eigenvectors of a hermitian matrix with different eigenvalues are orthogonal. To prove this statement, we consider two distinct eigenvalues $\lambda_1$ and $\lambda_2$ and their corresponding eigenvectors $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$, so that

$$\mathbf{A}\mathbf{x}^{(1)} = \lambda_1\mathbf{x}^{(1)} \tag{I.52a}$$
$$\mathbf{A}\mathbf{x}^{(2)} = \lambda_2\mathbf{x}^{(2)} \tag{I.52b}$$

If we multiply equation (I.52a) from the left by $\mathbf{x}^{(2)\dagger}$ and the adjoint of (I.52b) from the right by $\mathbf{x}^{(1)}$, we obtain

$$\mathbf{x}^{(2)\dagger}\mathbf{A}\mathbf{x}^{(1)} = \lambda_1\mathbf{x}^{(2)\dagger}\mathbf{x}^{(1)} \tag{I.53a}$$
$$(\mathbf{A}\mathbf{x}^{(2)})^{\dagger}\mathbf{x}^{(1)} = \mathbf{x}^{(2)\dagger}\mathbf{A}^{\dagger}\mathbf{x}^{(1)} = \mathbf{x}^{(2)\dagger}\mathbf{A}\mathbf{x}^{(1)} = \lambda_2\mathbf{x}^{(2)\dagger}\mathbf{x}^{(1)} \tag{I.53b}$$

where we have used equation (I.10) and noted that $\lambda_2$ is real. Subtracting equation (I.53b) from (I.53a), we find

$$(\lambda_1 - \lambda_2)\,\mathbf{x}^{(2)\dagger}\mathbf{x}^{(1)} = 0$$

Since $\lambda_1$ is not equal to $\lambda_2$, we see that $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ are orthogonal.
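Both statements (real eigenvalues, orthogonal eigenvectors for distinct eigenvalues) can be verified on a small hermitian example; a NumPy sketch with an assumed test matrix:

```python
import numpy as np

A = np.array([[2.0, 1j], [-1j, 3.0]])   # hermitian: A equals its adjoint
assert np.allclose(A, A.conj().T)

lam, X = np.linalg.eigh(A)        # eigh is specialized for hermitian matrices
print(lam)                        # the eigenvalues are real
print(np.vdot(X[:, 0], X[:, 1]))  # ~0: the eigenvectors are orthogonal
```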

 

The eigenvector $\mathbf{x}^{(\alpha)}$ corresponding to the eigenvalue $\lambda_{\alpha}$ may be determined by substituting the value for $\lambda_{\alpha}$ into equation (I.47) and then solving the resulting simultaneous equations for the components $x_2^{(\alpha)}, x_3^{(\alpha)}, \ldots, x_n^{(\alpha)}$ of $\mathbf{x}^{(\alpha)}$ in terms of the first component $x_1^{(\alpha)}$. The value of the first component is arbitrary, but it may be specified by requiring that the vector $\mathbf{x}^{(\alpha)}$ be normalized, i.e.,

$$\mathbf{x}^{(\alpha)\dagger}\mathbf{x}^{(\alpha)} = |x_1^{(\alpha)}|^2 + |x_2^{(\alpha)}|^2 + \cdots + |x_n^{(\alpha)}|^2 = 1 \tag{I.54}$$

The determination of the eigenvectors for degenerate eigenvalues is somewhat more complicated and is not discussed here.

We may construct an $n \times n$ matrix $\mathbf{X}$ using the $n$ orthogonal eigenvectors $\mathbf{x}^{(\alpha)}$ as columns

$$\mathbf{X} = \begin{pmatrix} x_1^{(1)} & x_1^{(2)} & \cdots & x_1^{(n)} \\ x_2^{(1)} & x_2^{(2)} & \cdots & x_2^{(n)} \\ \vdots & \vdots & \ddots & \vdots \\ x_n^{(1)} & x_n^{(2)} & \cdots & x_n^{(n)} \end{pmatrix} \tag{I.55}$$

and a diagonal matrix $\mathbf{\Lambda}$ using the $n$ eigenvalues

$$\mathbf{\Lambda} = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix} \tag{I.56}$$

Equation (I.47) may then be written in the form

$$\mathbf{A}\mathbf{X} = \mathbf{X}\mathbf{\Lambda} \tag{I.57}$$

The matrix $\mathbf{X}$ is easily seen to be unitary. Since the $n$ eigenvectors are linearly independent, the matrix $\mathbf{X}$ is non-singular and its inverse $\mathbf{X}^{-1}$ exists. If we multiply equation (I.57) from the left by $\mathbf{X}^{-1}$, we obtain

$$\mathbf{X}^{-1}\mathbf{A}\mathbf{X} = \mathbf{\Lambda} \tag{I.58}$$

 

 

 

This transformation of the matrix A to a diagonal matrix is an example of a similarity transform.
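A NumPy sketch of the similarity transformation (I.58), using `eigh` on a real symmetric (hence hermitian) matrix so that $\mathbf{X}$ is unitary; the matrix is an illustrative assumption:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])            # real symmetric matrix
lam, X = np.linalg.eigh(A)                 # columns of X are the eigenvectors

Lam = np.linalg.inv(X) @ A @ X             # X^-1 A X, equation (I.58)
print(np.allclose(Lam, np.diag(lam)))      # True: the result is diagonal
print(np.allclose(X.conj().T @ X, np.eye(3)))  # X is unitary
```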


Trace

The trace $\mathrm{Tr}\,\mathbf{A}$ of a square matrix $\mathbf{A}$ is defined as the sum of the diagonal elements

$$\mathrm{Tr}\,\mathbf{A} = a_{11} + a_{22} + \cdots + a_{nn} \tag{I.59}$$

The operator $\mathrm{Tr}$ is a linear operator because

$$\mathrm{Tr}(\mathbf{A} + \mathbf{B}) = (a_{11} + b_{11}) + (a_{22} + b_{22}) + \cdots + (a_{nn} + b_{nn}) = \mathrm{Tr}\,\mathbf{A} + \mathrm{Tr}\,\mathbf{B} \tag{I.60}$$

and

$$\mathrm{Tr}(c\mathbf{A}) = ca_{11} + ca_{22} + \cdots + ca_{nn} = c\,\mathrm{Tr}\,\mathbf{A} \tag{I.61}$$

The trace of a product of two matrices, which may or may not commute, is independent of the order of multiplication

$$\mathrm{Tr}(\mathbf{AB}) = \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}b_{ji} = \sum_{j=1}^{n}\sum_{i=1}^{n} b_{ji}a_{ij} = \mathrm{Tr}(\mathbf{BA}) \tag{I.62}$$

 

Thus, the trace of the commutator $[\mathbf{A}, \mathbf{B}] = \mathbf{AB} - \mathbf{BA}$ is equal to zero. Furthermore, the trace of a continued product of matrices is invariant under a cyclic permutation of the matrices

$$\mathrm{Tr}(\mathbf{ABC}\cdots\mathbf{Q}) = \mathrm{Tr}(\mathbf{BC}\cdots\mathbf{QA}) = \mathrm{Tr}(\mathbf{C}\cdots\mathbf{QAB}) \tag{I.63}$$
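A brief NumPy check of equations (I.62) and (I.63) (the random matrices are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C = [rng.standard_normal((4, 4)) for _ in range(3)]

print(np.isclose(np.trace(A @ B), np.trace(B @ A)))          # eq. (I.62)
print(np.isclose(np.trace(A @ B - B @ A), 0.0))              # Tr[A,B] = 0
print(np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A)))  # cyclic, (I.63)
```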

For a hermitian matrix, the trace is the sum of its eigenvalues

$$\mathrm{Tr}\,\mathbf{A} = \sum_{\alpha=1}^{n} \lambda_{\alpha} \tag{I.64}$$

 

To demonstrate the validity of equation (I.64), we ®rst take the trace of (I.58) to obtain

 

 

 

 

 

 

X

 

 

 

 

 

 

 

 

 

n

 

 

 

 

 

 

 

Tr(Xÿ1AX) Tr Ë

ëá

 

(I:65)

We then note that

$$\mathrm{Tr}(\mathbf{X}^{-1}\mathbf{A}\mathbf{X}) = \sum_{\alpha=1}^{n} (\mathbf{X}^{-1}\mathbf{A}\mathbf{X})_{\alpha\alpha} = \sum_{\alpha=1}^{n}\sum_{i=1}^{n}\sum_{j=1}^{n} X^{-1}_{\alpha i}\, a_{ij}\, X_{j\alpha} = \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij} \sum_{\alpha=1}^{n} X_{j\alpha} X^{-1}_{\alpha i} = \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\,\delta_{ij} = \sum_{i=1}^{n} a_{ii} = \mathrm{Tr}\,\mathbf{A} \tag{I.66}$$

where $X_{j\alpha}$ $(= x_j^{(\alpha)})$ are the elements of $\mathbf{X}$. Combining equations (I.65) and (I.66), we obtain equation (I.64).
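Numerically, equation (I.64) follows at once; a short sketch for an assumed hermitian (real symmetric) matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 5.0]])     # real symmetric, hence hermitian
lam = np.linalg.eigvalsh(A)
print(np.isclose(np.trace(A), lam.sum()))  # Tr A = sum of the eigenvalues
```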

Appendix J

Evaluation of the two-electron interaction integral

In the application of quantum mechanics to the helium atom, the following integral I arises and needs to be evaluated

$$I = \iint \frac{e^{-(r_1+r_2)}}{r_{12}}\, d\mathbf{r}_1\, d\mathbf{r}_2 = \iint \frac{e^{-(r_1+r_2)}}{r_{12}}\, r_1^2 r_2^2 \sin\theta_1 \sin\theta_2\, dr_1\, d\theta_1\, d\varphi_1\, dr_2\, d\theta_2\, d\varphi_2 \tag{J.1}$$

where the position vectors $\mathbf{r}_i$ $(i = 1, 2)$ have components $r_i$, $\theta_i$, $\varphi_i$ in spherical polar coordinates and where

$$r_{12} = |\mathbf{r}_2 - \mathbf{r}_1|$$

The distance $r_{12}$ is related to $r_1$ and $r_2$ by the law of cosines

$$r_{12}^2 = r_1^2 + r_2^2 - 2r_1 r_2 \cos\gamma \tag{J.2}$$

where $\gamma$ is the angle between $\mathbf{r}_1$ and $\mathbf{r}_2$ as shown in Figure J.1. The integration is taken over all space for each position vector.

The integral $I$ may be evaluated more easily if we orient the coordinate axes so that the vector $\mathbf{r}_1$ lies along the positive $z$-axis as shown in Figure J.2. In that case, the angle $\gamma$ between $\mathbf{r}_1$ and $\mathbf{r}_2$ is equal to the angle $\theta_2$. If we define $r_>$ as the larger and $r_<$ as the smaller of $r_1$ and $r_2$ and define $s$ by the ratio

$$s = \frac{r_<}{r_>}$$

so that $s < 1$, then equation (J.2) may be expressed in the form

$$\frac{1}{r_{12}} = \frac{1}{r_>}\,(1 + s^2 - 2s\cos\theta_2)^{-1/2} \tag{J.3}$$

At this point, we may proceed in one of two ways, which are mathematically equivalent. In the first procedure, we note that from the generating function (E.1) for the Legendre polynomials $P_l$, equation (J.3) may be written as

$$\frac{1}{r_{12}} = \frac{1}{r_>}\sum_{l=0}^{\infty} P_l(\cos\theta_2)\, s^l$$
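This expansion converges for $s < 1$ and can be checked numerically; a sketch using NumPy's Legendre-series evaluator `numpy.polynomial.legendre.legval`, with illustrative values of $s$, $\cos\theta_2$, and an assumed truncation order (the factor $1/r_>$ is omitted from the comparison):

```python
import numpy as np
from numpy.polynomial import legendre

s, cos_theta2 = 0.5, 0.3
lmax = 40                                 # truncation order (assumed)

exact = (1 + s**2 - 2*s*cos_theta2) ** -0.5
# sum_l P_l(cos theta2) s^l, as a Legendre series with coefficients s^l
series = legendre.legval(cos_theta2, s ** np.arange(lmax + 1))
print(exact, series)                      # the two values agree closely
```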

The integral I in equation (J.1) then becomes
