IV. GENERAL THEORY OF LINEAR SYSTEMS
In this case, SN = NS = N and N² = O. Let V_j = Image(P_j(A)) (j = 1, 2). Then, by virtue of (1) of Lemma IV-1-10, P_j(A)p = p for all p ∈ V_j (j = 1, 2). Furthermore, V_1 is spanned by

    [ 14 ]
    [ 19 ]
    [ 12 ]

and V_2 is spanned by

    [ 1 ]         [  0 ]
    [ 0 ]   and   [  1 ] .
    [ 0 ]         [ -2 ]

Set

          [ 14  1   0 ]
    P_0 = [ 19  0   1 ] .
          [ 12  0  -2 ]

Then,

                     [ 11  0  0 ]                       [ 0  0   0 ]
    P_0^{-1} S P_0 = [  0  1  0 ] ,   P_0^{-1} N P_0 = 2[ 0  1  -1 ] .
                     [  0  0  1 ]                       [ 0  1  -1 ]

It is noteworthy that there is only one linearly independent eigenvector

        [  1 ]
    x = [  1 ]
        [ -2 ]

for the eigenvalue λ_2 = 1.
It is not difficult to write a computer program for the calculation of S and N. For more examples of the calculation of S and N, see [HKS].
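One way such a program can be sketched, assuming NumPy is available and specializing to a hypothetical 3 × 3 matrix with one simple eigenvalue (11) and one double eigenvalue (1): the projection P_1 onto the eigenspace of the simple eigenvalue is built from a right and a left eigenvector, P_2 = I - P_1, and then S and N follow.

```python
import numpy as np

# S-N decomposition, sketched numerically for a 3x3 matrix with
# eigenvalues 11 (simple) and 1 (double).
A = np.array([[3.0, 4.0, 3.0],
              [2.0, 7.0, 4.0],
              [-4.0, 8.0, 3.0]])

w, V = np.linalg.eig(A)            # right eigenvectors
wl, U = np.linalg.eig(A.T)         # left eigenvectors (eigenvectors of A^T)
i = np.argmin(np.abs(w - 11))      # column belonging to the simple eigenvalue 11
j = np.argmin(np.abs(wl - 11))
v, u = V[:, i].real, U[:, j].real

P1 = np.outer(v, u) / (u @ v)      # projection onto V1 along V2
P2 = np.eye(3) - P1
S = 11 * P1 + 1 * P2               # semisimple part
N = A - S                          # nilpotent part

assert np.allclose(N @ N, 0)       # N^2 = O
assert np.allclose(S @ N, N @ S)   # S and N commute
```

This construction relies on the eigenvalue 11 being simple; a fully general program would build all projections P_j(A) from the partial fraction decomposition of 1/p_A(λ), as in §IV-1.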
IV-2. Homogeneous systems of linear differential equations
In this section, we explain the basic results concerning the structure of solutions of a homogeneous system of linear differential equations given by
(IV.2.1)    dy/dt = A(t)y,

where the entries of the n × n matrix A(t) are continuous on an interval I = {t : a < t < b}. Let us prove the following basic theorem.
Theorem IV-2-1. The solutions of (IV.2.1) form an n-dimensional vector space over C.
We break the entire proof into three observations.
Observation IV-2-2. Any linear combination of a finite number of solutions of (IV.2.1) is also a solution of (IV.2.1). We can prove the existence of n linearly independent solutions of (IV.2.1) on the interval I by using Theorem I-3-5 with n linearly independent initial conditions at t = t_0. Notice that each column vector of a solution Y of the differential equation
(IV.2.2)    dY/dt = A(t)Y
for an n × n unknown matrix Y is a solution of system (IV.2.1). Therefore, constructing an invertible solution Y of (IV.2.2), we can construct n linearly independent solutions of (IV.2.1) all at once. If an n × n matrix Y(t) is a solution of equation (IV.2.2) on an interval I = {t : a < t < b} and Y(t) ∈ GL(n, C) for all t ∈ I, then Y(t) is called a fundamental matrix solution of system (IV.2.1) on I. Furthermore, the n columns of a fundamental matrix solution Y(t) of (IV.2.2) are said to form a fundamental set of n linearly independent solutions of (IV.2.1) on the interval I.
Observation IV-2-3. Let Φ(t) be a solution of (IV.2.2) on I. Also, let Ψ(t) be a solution of the adjoint equation of (IV.2.2):

(IV.2.3)    dZ/dt = -ZA(t)

on the interval I, where Z is an n × n unknown matrix. Then,

    d/dt [Ψ(t)Φ(t)] = -Ψ(t)A(t)Φ(t) + Ψ(t)A(t)Φ(t) = O.

This implies that the matrix Ψ(t)Φ(t) is independent of t. Therefore, Ψ(t)Φ(t) = Ψ(τ)Φ(τ) for any fixed point τ ∈ I and for all t ∈ I. Note that the initial values Φ(τ) and Ψ(τ) at t = τ can be prescribed arbitrarily. In particular, in the case when Φ(τ) ∈ GL(n, C), by choosing Ψ(τ) = Φ(τ)^{-1}, we obtain Ψ(t)Φ(t) = I_n for all t ∈ I. Thus, we proved the following lemma.
Lemma IV-2-4. Let an n × n matrix Φ(t) be a solution of (IV.2.2) on the interval I. Then, Φ(t) is invertible for all t ∈ I (i.e., a fundamental matrix solution of (IV.2.1)) if Φ(τ) is invertible for some τ ∈ I. Furthermore, Φ(t)^{-1} is the unique solution of (IV.2.3) on I satisfying the initial condition Z(τ) = Φ(τ)^{-1}.
Observation IV-2-5. Denote by Φ(t; τ) the unique solution of the initial-value problem

(IV.2.4)    dY/dt = A(t)Y,    Y(τ) = I_n,

where τ ∈ I. Then, Φ(t; τ) ∈ GL(n, C) for all t ∈ I. The general structure of solutions of (IV.2.1) and (IV.2.2) is given by the following theorem, which can be easily verified.
Theorem IV-2-6. The C^n-valued function y(t) = Φ(t; τ)η is the unique solution of the initial-value problem

    dy/dt = A(t)y,    y(τ) = η,

where η ∈ C^n. Also, the n × n matrix Y = Φ(t; τ)Γ is the unique solution of the initial-value problem

    dY/dt = A(t)Y,    Y(τ) = Γ,

where Γ is an n × n constant matrix.

Theorem IV-2-1 is a corollary of Theorem IV-2-6. □
Remark IV-2-7.
(1) The general form of a fundamental matrix solution of (IV.2.1) is given by Y(t) = Φ(t; τ)Γ, where Γ ∈ GL(n, C).
(2) If a fundamental matrix solution is given by Y(t) = Φ(t; τ)Γ, then Y(τ) = Γ. Hence,

(IV.2.5)    Φ(t; τ) = Y(t)Y(τ)^{-1}    (t, τ ∈ I)
for any fundamental matrix solution Y(t). In particular,

(IV.2.6)    Φ(t; τ) = Φ(t; τ_1)Φ(τ_1; τ)    for t, τ, τ_1 ∈ I.
(3) In the case when A(t) is a scalar (i.e., n = 1), we obtain easily

(IV.2.7)    Φ(t; τ) = exp[∫_τ^t A(s) ds].

In the general case, we define exp[∫_τ^t A(s) ds] by

    exp[B(t)] = I_n + Σ_{m=1}^{+∞} (1/m!) B(t)^m,    where    B(t) = ∫_τ^t A(s) ds.
However, generally speaking, (IV.2.7) holds only in the case when B(t) and B'(t) = A(t) commute. In particular, Φ(t; τ) = exp[(t - τ)A] if A = A(t) is independent of t. In §IV-3, we shall explain how to calculate exp[(t - τ)A], using the S-N decomposition of A. Also, (IV.2.7) holds in the case when A(t) is diagonal on the interval I. A less trivial case is given in Exercise IV-9. It is easy to see that B(t) and A(t) commute if A(t) is a 2 × 2 upper-triangular matrix with an eigenvalue of multiplicity 2. For example, the matrix

    A(t) = [ cos t    1   ]
           [   0    cos t ]

satisfies the requirement. In this case,

    Φ(t; τ) = exp[ ∫_τ^t A(s) ds ] = exp [ sin t - sin τ      t - τ      ]
                                         [       0        sin t - sin τ ]

            = e^{sin t - sin τ} [ 1   t - τ ]
                                [ 0     1   ] .
(4) det Y(t) = det Y(τ) exp[∫_τ^t tr A(s) ds] if Y(t) satisfies (IV.2.2), where det A and tr A are the determinant and the trace of the matrix A. This formula is known as Abel's formula (cf. [CL, p. 28]).
Proof.

Regarding det Y(t) as a function of the n column vectors y_1(t), ..., y_n(t) of Y(t), set det Y(t) = F(y_1(t), ..., y_n(t)). Then,

(IV.2.8)    d det Y(t)/dt = Σ_{m=1}^{n} F( ..., A(t)y_m(t), ... ).

Denote the right-hand side of (IV.2.8) by G(y_1(t), ..., y_n(t)). Then, G is multilinear and alternating in y_1(t), ..., y_n(t). Furthermore, G = tr A(t) if Y(t) = I_n. Therefore, G = tr A(t) det Y(t). Solving the differential equation

    d det Y(t)/dt = tr A(t) det Y(t),

we obtain Abel's formula. □
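Abel's formula lends itself to a quick numerical check. The coefficient matrix A(t) below is a hypothetical example chosen so that tr A(t) = t, giving det Y(t) = exp(t²/2) for Y(0) = I (assuming NumPy and SciPy).

```python
import numpy as np
from scipy.integrate import solve_ivp

# A(t) = [[0, 1], [-1, t]] has tr A(t) = t, so Abel's formula predicts
# det Y(t) = det Y(0) * exp(int_0^t s ds) = exp(t^2 / 2).
def A(t):
    return np.array([[0.0, 1.0], [-1.0, t]])

def rhs(t, y):                      # matrix ODE Y' = A(t) Y, flattened
    Y = y.reshape(2, 2)
    return (A(t) @ Y).ravel()

sol = solve_ivp(rhs, (0.0, 1.0), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Y1 = sol.y[:, -1].reshape(2, 2)
assert np.isclose(np.linalg.det(Y1), np.exp(0.5), rtol=1e-6)
```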
IV-3. Homogeneous systems with constant coefficients
For an n x n matrix A, we define exp[A] by
(IV.3.1)    exp[A] = I_n + Σ_{h=1}^{+∞} (1/h!) A^h.
It is easy to show that the matrix exp[A] satisfies the condition exp[A + B] = exp[A] exp[B] if A and B commute. This implies that exp[A] is invertible and (exp[A])^{-1} = exp[-A]. Thus, we obtain a fundamental matrix solution Y = exp[tA] of the system dy/dt = Ay with a constant matrix A by solving the initial-value problem
(IV.3.2)    dY/dt = AY,    Y(0) = I_n.
This, in turn, implies that the unique solution of the initial-value problem

(IV.3.3)    dy/dt = Ay,    y(τ) = p

is given by y = exp[(t - τ)A]p, where p ∈ C^n is a constant vector.
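Both facts, the invertibility of exp[A] and the solution formula for (IV.3.3), can be observed numerically. The matrix A and the vector p below are arbitrary stand-ins (assuming NumPy and SciPy's `expm`); the derivative of y(t) = exp[(t - τ)A]p is checked by a centered difference.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))    # hypothetical constant matrix
p = rng.standard_normal(3)         # hypothetical initial vector
tau, t, h = 0.3, 1.2, 1e-6

# (exp[A])^{-1} = exp[-A]
assert np.allclose(expm(A) @ expm(-A), np.eye(3))

# y(t) = exp[(t - tau) A] p satisfies y' = A y and y(tau) = p
y = lambda s: expm((s - tau) * A) @ p
dy = (y(t + h) - y(t - h)) / (2 * h)        # centered difference
assert np.allclose(dy, A @ y(t), atol=1e-4)
assert np.allclose(y(tau), p)
```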
In this section, we explain how to calculate exp[tA] for a given constant matrix A, using the S-N decomposition of A. Assume that an n × n matrix A has k distinct eigenvalues λ_1, λ_2, ..., λ_k. Let A = S + N be the S-N decomposition of A. Also, let P_j(A) (j = 1, 2, ..., k) be the projections defined in §IV-1 (cf. (IV.1.4)). Then,
(IV.3.4)    I_n = Σ_{j=1}^{k} P_j(A),    S = Σ_{j=1}^{k} λ_j P_j(A),    N = A - S,

and

(IV.3.5)    P_j(A)P_h(A) = { O        if j ≠ h,
                           { P_j(A)   if j = h       (j, h = 1, 2, ..., k).
The two matrices S and N commute.
Denote by V_j the image of the mapping P_j(A) : C^n → C^n (cf. Lemma IV-1-10). It is known that Sp = λ_j p for p in V_j. Hence, S^h p = λ_j^h p and

    exp[tS]p = [ 1 + Σ_{h=1}^{+∞} (λ_j t)^h / h! ] p = e^{λ_j t} p.

On the other hand, exp[tN] = I_n + Σ_{h=1}^{n-1} (t^h / h!) N^h since N is nilpotent. Therefore,

(IV.3.6)    exp[tA]p = exp[t(S + N)]p = exp[tN] exp[tS]p = e^{λ_j t} exp[tN]p

                     = e^{λ_j t} [ I_n + Σ_{h=1}^{n-1} (t^h / h!) N^h ] p    for p ∈ V_j.
Applying (IV.3.4) and (IV.3.6) to a general p ∈ C^n, we derive

(IV.3.7)    exp[tA]p = Σ_{j=1}^{k} e^{λ_j t} [ I_n + Σ_{h=1}^{n-1} (t^h / h!) N^h ] P_j(A)p    for p ∈ C^n.
Thus, we proved the following theorem.

Theorem IV-3-1. The matrix exp[tA] is calculated by the formula

(IV.3.8)    exp[tA] = Σ_{j=1}^{k} e^{λ_j t} [ I_n + Σ_{h=1}^{n-1} (t^h / h!) N^h ] P_j(A).
Since the general solution of the differential equation

(IV.3.9)    dy/dt = Ay

is given by (IV.3.7), the following important result is obtained.
Theorem IV-3-2.
(i) If Re(λ_j) < 0 for j = 1, 2, ..., k, then every solution of (IV.3.9) tends to 0 as t → +∞;
(ii) if Re(λ_j) > 0 for some j, some solutions of (IV.3.9) tend to ∞ as t → +∞;
(iii) every solution of (IV.3.9) is bounded for t ≥ 0 if and only if Re(λ_j) ≤ 0 for j = 1, 2, ..., k and NP_j(A) = O whenever Re(λ_j) = 0.
Now, we illustrate calculation of exp[tA] in two examples. Note that in the case when A has nonreal eigenvalues, we must use complex numbers in our calculation.
Nevertheless, if A is a real matrix, then exp[tA] is also real. Hence, at the end of our calculation, we obtain real-valued solutions of (IV.3.9) if A is real.
Example IV-3-3. Consider the matrix

        [ -2  1  0 ]
    A = [  0 -2  0 ] .
        [  3  2  1 ]

The characteristic polynomial of A is p_A(λ) = (λ - 1)(λ + 2)². By using the partial fraction decomposition of 1/p_A(λ), we derive

    1 = (1/9)(λ + 2)² - (1/9)(λ + 5)(λ - 1).

Setting P_1(A) = (1/9)(A + 2I_3)² and P_2(A) = -(1/9)(A + 5I_3)(A - I_3), we obtain
             [ 0  0  0 ]             [  1   0  0 ]
    P_1(A) = [ 0  0  0 ] ,  P_2(A) = [  0   1  0 ] .
             [ 1  1  1 ]             [ -1  -1  0 ]

Set

                           [ -2  0  0 ]                [ 0   1  0 ]
    S = P_1(A) - 2P_2(A) = [  0 -2  0 ] ,  N = A - S = [ 0   0  0 ] .
                           [  3  3  1 ]                [ 0  -1  0 ]
3. HOMOGENEOUS SYSTEMS WITH CONSTANT COEFFICIENTS 83
Note that N² = O. Hence,

    exp[tA] = e^t [ I_3 + tN ] P_1(A) + e^{-2t} [ I_3 + tN ] P_2(A)

              [ e^{-2t}         t e^{-2t}               0    ]
            = [ 0               e^{-2t}                 0    ] .
              [ e^t - e^{-2t}   e^t - (1 + t)e^{-2t}    e^t  ]
The solution of the initial-value problem dy/dt = Ay, y(0) = η is y(t) = exp[tA]η. To find a solution satisfying the condition lim_{t→+∞} y(t) = 0, we must choose η so that P_1(A)η = 0. Such an η is given by η = P_2(A)c, where c is an arbitrary constant vector in C³.
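The closed form of exp[tA] obtained in Example IV-3-3 can be compared with a general-purpose matrix exponential; a minimal check, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

# Example IV-3-3: closed form of exp[tA] versus scipy's expm.
A = np.array([[-2.0, 1.0, 0.0],
              [0.0, -2.0, 0.0],
              [3.0, 2.0, 1.0]])

def exp_tA(t):
    e, f = np.exp(t), np.exp(-2.0 * t)
    return np.array([[f, t * f, 0.0],
                     [0.0, f, 0.0],
                     [e - f, e - (1.0 + t) * f, e]])

for t in (0.0, 0.5, 2.0):
    assert np.allclose(expm(t * A), exp_tA(t))
```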
Example IV-3-4. Next, consider the matrix

        [  0 -1  1 ]
    A = [  1  0 -1 ] .
        [ -1  1  0 ]

The characteristic polynomial of A is p_A(λ) = λ(λ² + 3) = λ(λ - i√3)(λ + i√3). Using the partial fraction decomposition of 1/p_A(λ), we obtain

    1 = (1/3)(λ - i√3)(λ + i√3) - (1/6)λ(λ + i√3) - (1/6)λ(λ - i√3).
Setting

    P_1(A) = (1/3)(A² + 3I_3),    P_2(A) = -(1/6)A(A + i√3 I_3),    P_3(A) = -(1/6)A(A - i√3 I_3),
we obtain

                  [ 1  1  1 ]                      [ -2        1 - i√3   1 + i√3 ]
    P_1(A) = (1/3)[ 1  1  1 ] ,   P_2(A) = -(1/6)  [ 1 + i√3   -2        1 - i√3 ] ,
                  [ 1  1  1 ]                      [ 1 - i√3   1 + i√3   -2      ]

and P_3(A) is the complex conjugate of P_2(A). If we set
    S = (i√3)P_2(A) - (i√3)P_3(A),

then S = A. This implies that N = O. Thus, we obtain

    exp[tA] = P_1(A) + e^{i√3 t}P_2(A) + e^{-i√3 t}P_3(A)
            = P_1(A) + 2 Re( e^{i√3 t}P_2(A) ).

Using

    Re( e^{i√3 t}(1 + i√3) ) = cos(√3 t) - √3 sin(√3 t),
    Re( e^{i√3 t}(1 - i√3) ) = cos(√3 t) + √3 sin(√3 t),
we find

                   [ a(t)  b(t)  c(t) ]
    exp[tA] = (1/3)[ c(t)  a(t)  b(t) ] ,
                   [ b(t)  c(t)  a(t) ]

where

    a(t) = 1 + 2 cos(√3 t),
    b(t) = 1 - { cos(√3 t) + √3 sin(√3 t) },
    c(t) = 1 - { cos(√3 t) - √3 sin(√3 t) }.
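As with the previous example, the real closed form derived in Example IV-3-4 can be checked against SciPy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Example IV-3-4: real closed form of exp[tA] versus scipy's expm.
A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
r3 = np.sqrt(3.0)

def exp_tA(t):
    a = 1.0 + 2.0 * np.cos(r3 * t)
    b = 1.0 - (np.cos(r3 * t) + r3 * np.sin(r3 * t))
    c = 1.0 - (np.cos(r3 * t) - r3 * np.sin(r3 * t))
    return np.array([[a, b, c], [c, a, b], [b, c, a]]) / 3.0

for t in (0.0, 0.7, 3.1):
    assert np.allclose(expm(t * A), exp_tA(t))
```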
Remark IV-3-5. Functions of a matrix. In this remark, we explain how to define functions of a matrix A.

I. A particular case: Let λ_0, I_n, and N be a number, the n × n identity matrix, and an n × n nilpotent matrix, respectively. Also, consider a function f(λ) in a neighborhood of λ_0. Assume that f(λ) has the Taylor series expansion (i.e., f is analytic at λ_0)
    f(λ) = f(λ_0) + Σ_{h=1}^{+∞} ( f^{(h)}(λ_0) / h! ) (λ - λ_0)^h.
In this case, define f(λ_0 I_n + N) by
    f(λ_0 I_n + N) = f(λ_0) I_n + Σ_{h=1}^{n-1} ( f^{(h)}(λ_0) / h! ) N^h.

Since N is nilpotent, the matrix Σ_{h=1}^{n-1} ( f^{(h)}(λ_0) / h! ) N^h is also nilpotent. Therefore, the characteristic polynomial p_{f(λ_0 I_n + N)}(λ) of f(λ_0 I_n + N) is

    p_{f(λ_0 I_n + N)}(λ) = (λ - f(λ_0))^n.
II. The general case: Assume that the characteristic polynomial p_A(λ) of an n × n matrix A is

    p_A(λ) = (λ - λ_1)^{m_1} (λ - λ_2)^{m_2} ··· (λ - λ_k)^{m_k},

where λ_1, ..., λ_k are the distinct eigenvalues of A. Construct P_j(A) (j = 1, ..., k), S, and N as above. Then,

    A = (λ_1 I_n + N)P_1(A) + (λ_2 I_n + N)P_2(A) + ··· + (λ_k I_n + N)P_k(A).

Therefore,

    A^ℓ = (λ_1 I_n + N)^ℓ P_1(A) + (λ_2 I_n + N)^ℓ P_2(A) + ··· + (λ_k I_n + N)^ℓ P_k(A)

for every nonnegative integer ℓ.
Assuming that a function f(λ) has the Taylor series expansion

    f(λ) = f(λ_j) + Σ_{h=1}^{+∞} ( f^{(h)}(λ_j) / h! ) (λ - λ_j)^h

at λ = λ_j for every j = 1, ..., k, we define f(A) by

(IV.3.10)    f(A) = f(λ_1 I_n + N)P_1(A) + f(λ_2 I_n + N)P_2(A) + ··· + f(λ_k I_n + N)P_k(A).
Since P_j(S) = P_j(A) (cf. Observation IV-1-15), this definition applied to S yields

    f(S) = f(λ_1 I_n)P_1(A) + f(λ_2 I_n)P_2(A) + ··· + f(λ_k I_n)P_k(A),

and f(A) - f(S) has the form N × (a polynomial in S and N). Therefore, f(A) - f(S) is nilpotent. Furthermore, f(S) and f(A) commute. This implies that

    f(A) = f(S) + ( f(A) - f(S) )

is the S-N decomposition of f(A). Thus,

    p_{f(A)}(λ) = p_{f(S)}(λ) = (λ - f(λ_1))^{m_1} ··· (λ - f(λ_k))^{m_k}.
Example IV-3-6. In the case when f(λ) = log(λ), define log(A) by

    log(A) = log(λ_1 I_n + N)P_1(A) + log(λ_2 I_n + N)P_2(A) + ··· + log(λ_k I_n + N)P_k(A),

where we must assume that A is invertible so that λ_j ≠ 0 for all eigenvalues of A.
Let us look at log(λ_0 I_n + N) more closely, assuming that λ_0 ≠ 0. Since

    log(λ_0 + μ) = log(λ_0) + log(1 + μ/λ_0) = log(λ_0) + Σ_{m=1}^{+∞} ( (-1)^{m+1} / m ) (μ/λ_0)^m,

we obtain

    log(λ_0 I_n + N) = log(λ_0) I_n + Σ_{m=1}^{n-1} ( (-1)^{m+1} / m ) (N/λ_0)^m.
It is not difficult to show that exp[log(A)] = A. In fact, since

    (log(A))^m = (log(λ_1 I_n + N))^m P_1(A) + (log(λ_2 I_n + N))^m P_2(A) + ··· + (log(λ_k I_n + N))^m P_k(A),

it is sufficient to show that exp[log(λ_0 I_n + N)] = λ_0 I_n + N. This can be proved by using exp[log(λ_0 + μ)] = λ_0 + μ.
Observation IV-3-7. In the definition of log(A) in Example IV-3-6, we used log(λ_j). The function log(λ) is not single-valued. Therefore, the definition of log(A) is not unique.
Observation IV-3-8. Let A = S + N be the S-N decomposition of A. If A is invertible, S is also invertible. Therefore, we can write A as A = S(I_n + M), where M = S^{-1}N = NS^{-1}. Since S and N commute, the two matrices S and M commute. Furthermore, M is nilpotent. Using this form, we can define log(A) by

    log(A) = log(S) + log(I_n + M),
where

    log(S) = log(λ_1)P_1(A) + log(λ_2)P_2(A) + ··· + log(λ_k)P_k(A)

and

    log(I_n + M) = Σ_{m=1}^{n-1} ( (-1)^{m+1} / m ) M^m.

This definition and the previous definition give the same function log(A) if the same definition of log(λ_j) is used.
Example IV-3-9. Let us calculate sin(A) for

        [  3  4  3 ]
    A = [  2  7  4 ]
        [ -4  8  3 ]

(cf. Example IV-1-19). The matrix A has two distinct eigenvalues, 11 and 1. The corresponding projections are
             [ 0  14/25  7/25  ]             [ 1  -14/25  -7/25  ]
    P_1(A) = [ 0  19/25  19/50 ] ,  P_2(A) = [ 0   6/25   -19/50 ] .
             [ 0  12/25  6/25  ]             [ 0  -12/25   19/25 ]

Define S = 11P_1(A) + P_2(A) and N = A - S. Then N² = O. Also,

    sin(11 + x) = -0.99999 + 0.0044257x + 0.499995x² + O(x³),
    sin(1 + x)  =  0.841471 + 0.540302x - 0.420735x² + O(x³).

Therefore,

    sin(A) = (-0.99999 I_3 + 0.0044257N)P_1(A) + (0.841471 I_3 + 0.540302N)P_2(A)

             [  1.92207   -1.8957    -0.407549 ]
           = [  1.0806    -1.42252   -0.591695 ] .
             [ -2.1612     0.845065   0.1834   ]
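This computation can be double-checked numerically. Since N² = O, definition (IV.3.10) gives sin(A) = (sin(λ_j)I_3 + cos(λ_j)N)P_j(A) summed over j; the sketch below uses the exact values sin(11), cos(11), sin(1), cos(1) (of which the decimals in the text are truncations) and compares with SciPy's `sinm`.

```python
import numpy as np
from scipy.linalg import sinm

# sin(A) via the S-N decomposition of Example IV-3-9, with
# P1 as quoted in the text and P2 = I - P1.
A = np.array([[3.0, 4.0, 3.0],
              [2.0, 7.0, 4.0],
              [-4.0, 8.0, 3.0]])
P1 = np.array([[0.0, 14/25, 7/25],
               [0.0, 19/25, 19/50],
               [0.0, 12/25, 6/25]])
P2 = np.eye(3) - P1
S = 11 * P1 + P2
N = A - S                                   # nilpotent, N @ N == O

sinA = (np.sin(11) * np.eye(3) + np.cos(11) * N) @ P1 \
     + (np.sin(1) * np.eye(3) + np.cos(1) * N) @ P2
assert np.allclose(sinA, sinm(A))
assert np.allclose(sinA[0, 0], 1.92207, atol=1e-4)
```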
It is known that sin x has the series expansion

    sin x = Σ_{h=0}^{+∞} ( (-1)^h / (2h + 1)! ) x^{2h+1}.
Therefore, we can also define sin(A) by

    sin(A) = Σ_{h=0}^{+∞} ( (-1)^h / (2h + 1)! ) A^{2h+1}.

However, this approximation is not quite satisfactory if we notice that

    sin(11) = -0.99999    and    Σ_{h=0}^{9} ( (-1)^h / (2h + 1)! ) 11^{2h+1} = -117.147.
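The slow start of the Taylor series at x = 11 is easy to observe directly; the partial sums swing through values of order 10³ before settling near sin(11), and the partial sum through h = 9 reproduces the value -117.147 quoted above (standard library only).

```python
import math

# Partial sums of the Taylor series of sin at x = 11.
def partial_sum(x, m):
    return sum((-1)**h * x**(2*h + 1) / math.factorial(2*h + 1)
               for h in range(m + 1))

assert abs(partial_sum(11, 9) + 117.147) < 0.01     # the value quoted above
assert abs(partial_sum(11, 40) - math.sin(11)) < 1e-9
```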
IV-4. Systems with periodic coefficients
In this section, we explain how to construct a fundamental matrix solution of a system
(IV.4.1)    dy/dt = A(t)y
in the case when the n × n matrix A(t) satisfies the following conditions:
(1) the entries of A(t) are continuous on the entire real line R,
(2) the entries of A(t) are periodic in t of a (positive) period ω, i.e.,

(IV.4.2)    A(t + ω) = A(t)    for t ∈ R.
Look at the unique n × n fundamental matrix solution Φ(t) defined by the initial-value problem

(IV.4.3)    dY/dt = A(t)Y,    Y(0) = I_n.
Since Φ'(t + ω) = A(t + ω)Φ(t + ω) = A(t)Φ(t + ω) and Φ(0 + ω) = Φ(ω), the matrix Φ(t + ω) is also a fundamental matrix solution of (IV.4.1). As mentioned in (1) of Remark IV-2-7, there exists a constant matrix Γ such that Φ(t + ω) = Φ(t)Γ and, consequently, Γ = Φ(ω). Thus,
(IV.4.4)    Φ(t + ω) = Φ(t)Φ(ω)    for t ∈ R.
Setting B = ω^{-1} log[Φ(ω)] (cf. Example IV-3-6), define an n × n matrix P(t) by

(IV.4.5)    P(t) = Φ(t) exp[-tB].
Then, P(t + ω) = Φ(t + ω) exp[-(t + ω)B] = Φ(t)Φ(ω) exp[-ωB] exp[-tB] = Φ(t) exp[-tB] = P(t). This shows that P(t) is periodic in t of period ω. Thus, we proved the following theorem.
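The construction of B and P(t) can be sketched numerically for a hypothetical periodic system (the coefficient matrix below is an arbitrary example of period ω = 2π, assuming NumPy and SciPy): compute the monodromy matrix Φ(ω), take B = ω⁻¹ log Φ(ω), and check that P(t) = Φ(t) exp[-tB] is ω-periodic.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm, logm

# Hypothetical periodic system: A(t) = [[0, 1], [-(2 + cos t), 0]], omega = 2*pi.
omega = 2 * np.pi

def A(t):
    return np.array([[0.0, 1.0], [-(2.0 + np.cos(t)), 0.0]])

def rhs(t, y):
    return (A(t) @ y.reshape(2, 2)).ravel()

def Phi(t):                      # fundamental matrix with Phi(0) = I
    sol = solve_ivp(rhs, (0.0, t), np.eye(2).ravel(),
                    rtol=1e-11, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

B = logm(Phi(omega)) / omega     # principal branch of log(Phi(omega))
P = lambda s: Phi(s) @ expm(-s * B)
assert np.allclose(P(1.3), P(1.3 + omega), atol=1e-6)   # P is omega-periodic
```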