Schechter, Minimax Systems and Critical Point Theory (Springer, 2009)


if μ > λ0. Moreover,

(G′(uk), yk − y) → 0,

(p(uk + y) − p(u), yk − y) → 0,

and

(12.50) ([L − μ]y, yk − y) → 0.

Hence

(12.51) ‖yk − y‖²E ≤ o(1), k → ∞.

This shows that yk → y in E, and the proof proceeds as before. If λ0 > μ, we apply Theorem 12.1 to −G(u) and come to the same conclusion. In this case the inequality in (12.49) is reversed. This completes the proof.

12.4 Notes and remarks

Many authors have studied the one-dimensional periodic-Dirichlet problem for the semilinear wave equation

utt − uxx = p(t, x, u), t ∈ R, x ∈ (0, π),

u(t, x) = 0, t ∈ R, x = 0, x = π,

u(t + 2π, x) = u(t, x), t ∈ R, x ∈ (0, π).

A basic problem in this one-dimensional case is that the null space N of the linear part □u = utt − uxx is infinite dimensional. On the other hand, □ has a compact inverse on the orthogonal complement of N. In contrast to this, the higher-dimensional periodic-Dirichlet problem for the semilinear wave equation (13.1)–(13.2) has the additional difficulty that □ does not have a compact inverse on the orthogonal complement of N. In fact, it has a sequence of eigenvalues of infinite multiplicity stretching from −∞ to ∞. This is a serious complication that causes all of the methods used to solve the one-dimensional case to fail.

Recently some authors have examined the radially symmetric counterpart of (13.1)–(13.3) that was considered in this chapter (cf. [146], [15], [14], [23], and [121]). It is assumed that the function f(t, x, u) is radially symmetric in x. This allows one to reduce the problem to

utt − urr − r⁻¹(n − 1)ur = f(t, r, u),

u(2π, r) = u(0, r), ut(2π, r) = ut(0, r), 0 ≤ r ≤ R,

u(t, R) = 0, t ∈ R.

This is much more difficult than the one-dimensional problem for the wave equation, but the techniques used in solving it cannot be used to solve the n-dimensional problem for the wave operator when the region and functions are not radially symmetric. This will be addressed in Chapter 13.

Chapter 13

Semilinear Wave Equations

13.1 Introduction

In this chapter we shall consider the higher-dimensional periodic-Dirichlet problem for the semilinear wave equation

(13.1) □u ≡ utt − Δu = p(x, t, u), (x, t) ∈ Ω,

(13.2) u(x, t) = 0, t ∈ R, x ∈ ∂(0, π)ⁿ,

(13.3) u(x, t + 2π) = u(x, t), (x, t) ∈ Ω,

where Ω = (0, π)ⁿ × (0, 2π). Here, x = (x1, . . . , xn) ∈ Rⁿ and

(0, π)ⁿ = {x ∈ Rⁿ : 0 < xk < π, 1 ≤ k ≤ n}.

In studying this problem, we shall make use of the theory of saddle points.

13.2 Convexity and lower semi-continuity

A set M is called convex if (1 − t)w0 + tw1 ∈ M whenever w0, w1 ∈ M and 0 ≤ t ≤ 1. Let M be a convex subset of a Hilbert space E, and let G be a functional (real-valued function) defined on M. We call G convex on M if

G((1 − t)w0 + tw1) ≤ (1 − t)G(w0) + tG(w1), w0, w1 ∈ M, 0 ≤ t ≤ 1.

We call it strictly convex if the inequality is strict when 0 < t < 1, w0 ≠ w1.
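A concrete example (ours, not from the text): the squared norm G(u) = ‖u‖² is strictly convex on any convex subset of E, as the following inner-product identity shows.

```latex
% Strict convexity of G(u) = \|u\|^2 on a Hilbert space E.
% For w_0 \ne w_1 and 0 < t < 1, expanding the inner product gives
\|(1-t)w_0 + t w_1\|^2
  = (1-t)\|w_0\|^2 + t\|w_1\|^2 - t(1-t)\|w_0 - w_1\|^2
  < (1-t)\|w_0\|^2 + t\|w_1\|^2 .
```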

G(v) is called upper semi-continuous (u.s.c.) at w0 ∈ M if wk → w0 in M implies

G(w0) ≥ lim sup G(wk).

It is called lower semi-continuous (l.s.c.) if the inequality is reversed and lim sup is replaced by lim inf. We have

M. Schechter, Minimax Systems and Critical Point Theory, DOI 10.1007/978-0-8176-4902-9_13, © Birkhäuser Boston, a part of Springer Science+Business Media, LLC 2009


Lemma 13.1. If M is closed, convex, and bounded in E and G is convex and l.s.c. on M, then there is a point w0 ∈ M such that

(13.4) G(w0) = min{G(w) : w ∈ M}.

If G is strictly convex, then w0 is unique.

In proving Lemma 13.1, we shall make use of

Lemma 13.2. If uk ⇀ u in E, then there is a renamed subsequence such that ūk → u, where

(13.5) ūk = (u1 + · · · + uk)/k.

Proof. We may assume that u = 0. Take n1 = 1, and inductively pick n2, n3, . . . , so that

|(unk, un1)| ≤ 1/k, . . . , |(unk, unk−1)| ≤ 1/k.

This can be done since

(un, unj) → 0 as n → ∞, 1 ≤ j ≤ k.

Since ‖uk‖ ≤ C for some C, we have, for the renamed subsequence,

‖ūk‖² = [ Σ_{j=1}^{k} ‖uj‖² + 2 Σ_{j=1}^{k} Σ_{i=1}^{j−1} (ui, uj) ] / k² ≤ [ kC² + 2 Σ_{j=1}^{k} Σ_{i=1}^{j−1} (1/j) ] / k² ≤ (C² + 2)/k → 0.
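A standard illustration of Lemma 13.2 (our example, not the book's): an orthonormal sequence converges weakly to 0 but not strongly, while its averages do converge strongly.

```latex
% Let \{e_k\} be orthonormal in E. Then e_k \rightharpoonup 0 but
% \|e_k\| = 1, so e_k \not\to 0. For the means
% \bar{u}_k = (e_1 + \cdots + e_k)/k, orthogonality gives
\|\bar{u}_k\|^2 = \frac{1}{k^2}\sum_{j=1}^{k}\|e_j\|^2 = \frac{1}{k} \to 0,
% so \bar{u}_k \to 0 strongly, as Lemma 13.2 asserts.
```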

Lemma 13.3. If G(u) is convex and l.s.c. on E, and uk ⇀ u, then

G(u) ≤ lim inf G(uk).

Proof. Let

L = lim inf G(uk).

Then there is a renamed subsequence such that G(uk) → L. Let ε > 0 be given. Then

(13.6) L − ε < G(uk) < L + ε


for all but a finite number of k. Remove a finite number and rename the subsequence so that (13.6) holds for all k. Moreover, there is a renamed subsequence such that ūk → u by Lemma 13.2, where ūk is given by (13.5). Thus,

G(u) ≤ lim inf G(ūk) = lim inf G((u1 + · · · + uk)/k) ≤ lim inf (1/k) Σ_{j=1}^{k} G(uj) ≤ lim inf (1/k) · k(L + ε) = L + ε.

Since ε was arbitrary, we see that G(u) ≤ L, and the proof is complete.

A subset M ⊂ E is called weakly closed if u ∈ M whenever there is a sequence {uk} ⊂ M converging weakly to u in E. A weakly closed set is closed in a stronger sense than an ordinary closed set. It follows from Lemma 13.2 that

Lemma 13.4. If M is a closed, convex subset of E, then it is weakly closed in E.

Proof. Suppose {uk} ⊂ M and uk ⇀ u in E. Then, by Lemma 13.2, there is a renamed subsequence such that ūk → u, where ūk is given by (13.5). Since M is convex, each ūk is in M. Since M is closed, we see that u ∈ M.

We can now give the proof of Lemma 13.1.

Proof. Let

α = inf{G(w) : w ∈ M}.

(At this point we do not know whether α = −∞.) Let {wk} ⊂ M be a sequence such that G(wk) → α. Since M is bounded, we see that there is a renamed subsequence such that wk ⇀ w0. Since M is closed and convex, it is weakly closed (Lemma 13.4). Hence, w0 ∈ M. By Lemma 13.3, G(w0) ≤ lim inf G(wk) = α. Since G(w0) ≥ α, we see that (13.4) holds (and, in particular, α > −∞). So far, we have only used the convexity of G. We use the strict convexity to show that w0 is unique. If there were another element w1 ≠ w0 in M such that G(w1) = α, then we would have

G((1/2)w0 + (1/2)w1) < (1/2)[G(w0) + G(w1)] = α,

which is impossible from the definition of α. This completes the proof.

We also have

Lemma 13.5. If M is closed and convex, G is convex, is l.s.c., and satisfies

(13.7) G(u) → ∞ as ‖u‖ → ∞, u ∈ M

(if M is unbounded), then G is bounded from below on M and has a minimum there. If G is strictly convex, this minimum is unique.


Proof. If M is bounded, then Lemma 13.5 follows from Lemma 13.1. Otherwise, let u0 be any element in M. By (13.7), there is an R ≥ ‖u0‖ such that

G(u) ≥ G(u0), u ∈ M, ‖u‖ ≥ R.

By Lemma 13.1, G is bounded from below on the set

MR = {w ∈ M : ‖w‖ ≤ R}

and has a minimum there. A minimum of G on MR is a minimum of G on M. Hence, G is bounded from below on M and has a minimum there.
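As an illustration of Lemma 13.5 (ours, with f ∈ E a fixed element), the functional G(u) = ‖u‖² − (f, u) on M = E is strictly convex, continuous (hence l.s.c.), and coercive.

```latex
% Coercivity (13.7) follows from the Cauchy-Schwarz inequality:
G(u) = \|u\|^2 - (f, u) \ge \|u\|^2 - \|f\|\,\|u\| \to \infty
  \quad \text{as } \|u\| \to \infty,
% so Lemma 13.5 gives a unique minimizer; setting the derivative
% G'(u) = 2u - f to zero identifies it as u = f/2.
```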

13.3 Existence of saddle points

We say that (v0, w0) is a saddle point of G if

(13.8) G(v, w0) ≤ G(v0, w0) ≤ G(v0, w), v ∈ N, w ∈ M.

We now present some sufficient conditions for the existence of saddle points. Let M, N be closed, convex subsets of a Hilbert space, and let G(v, w) : N × M → R be a functional such that G(v, w) is convex and l.s.c. in w for each v ∈ N, and concave and u.s.c. in v for each w ∈ M. If M is unbounded, assume also that there is a v0 ∈ N such that

(13.9) G(v0, w) → ∞ as ‖w‖ → ∞, w ∈ M.

If N is unbounded, assume that there is a w0 ∈ M such that

(13.10) G(v, w0) → −∞ as ‖v‖ → ∞, v ∈ N.

[If M is bounded, then (13.9) is automatically satisfied; the same is true for (13.10) when N is bounded.] We have

Theorem 13.6. Under the above hypotheses, G has at least one saddle point.

Proof. Assume first that M, N are bounded. Then, for each v ∈ N, there is at least one point w where G(v, w) achieves its minimum (Lemma 13.1). Let

J(v) = min{G(v, w) : w ∈ M}.

Since J(v) is the minimum of a family of functionals that are concave and u.s.c., it is also concave and u.s.c. In fact, if

vt = (1 − t)v0 + tv1, t ∈ [0, 1],

then, by the concavity of G in v,

G(vt, w) ≥ (1 − t)G(v0, w) + tG(v1, w) ≥ (1 − t) min{G(v0, ŵ) : ŵ ∈ M} + t min{G(v1, ŵ) : ŵ ∈ M}, w ∈ M.

Since this is true for each w ∈ M, we see that

(13.11) J(vt) ≥ (1 − t)J(v0) + tJ(v1).

Similarly, if vk → v ∈ N, then we have

J(vk) ≤ G(vk, w), w ∈ M.

Thus,

lim sup J(vk) ≤ lim sup G(vk, w) ≤ G(v, w), w ∈ M.

Since this is true for each w ∈ M, we have

(13.12) lim sup J(vk) ≤ inf{G(v, w) : w ∈ M} = J(v).

Therefore, J(v) is concave and u.s.c.

Consequently, J(v) has a maximum point v̄ satisfying

J(v) ≤ J(v̄), v ∈ N

(Lemma 13.1). In particular, we have

(13.13) J(v̄) = min{G(v̄, ŵ) : ŵ ∈ M} ≤ G(v̄, w), w ∈ M.

Let v be an arbitrary point in N, and let

vθ = (1 − θ)v̄ + θv, 0 ≤ θ ≤ 1.

Since G is concave in v, we have

G(vθ, w) ≥ (1 − θ)G(v̄, w) + θG(v, w).

Consequently,

J(v̄) ≥ J(vθ) = G(vθ, wθ) ≥ (1 − θ)G(v̄, wθ) + θG(v, wθ) ≥ (1 − θ)J(v̄) + θG(v, wθ),

where wθ is any point in M such that

G(vθ, wθ) = min{G(vθ, w) : w ∈ M}.

This gives

(13.14) J(v̄) ≥ G(v, wθ), v ∈ N, 0 < θ ≤ 1.

Let {θk} be a sequence converging to 0, and let vk = vθk, wk = wθk. Then vk → v̄. Since M is bounded, there is a renamed subsequence such that wk ⇀ w̄. Since

(1 − θ)G(v̄, wθ) + θG(v, wθ) ≤ G(vθ, wθ) ≤ G(vθ, w), w ∈ M,


we have

(1 − θk)G(v̄, wk) + θk J(v) ≤ G(vk, w), w ∈ M.

In the limit this gives

G(v̄, w̄) ≤ G(v̄, w), w ∈ M

(cf. Lemma 13.3). Since

 

 

J(v̄) ≥ G(v, wk), v ∈ N,

we have

G(v, w̄) ≤ J(v̄) ≤ G(v̄, w), v ∈ N, w ∈ M,

in view of (13.13) and (13.14). Take v = v̄ and w = w̄. Then

G(v̄, w̄) ≤ J(v̄) ≤ G(v̄, w̄),

showing that

G(v̄, w̄) = J(v̄)

and

G(v, w̄) ≤ G(v̄, w̄) ≤ G(v̄, w), v ∈ N, w ∈ M.

Thus, (v̄, w̄) is a saddle point.

Now, we remove the restriction that M, N are bounded. Let R be so large that ‖v0‖ < R, ‖w0‖ < R. The sets

MR = {w ∈ M : ‖w‖ ≤ R}, NR = {v ∈ N : ‖v‖ ≤ R}

are closed, convex, and bounded. By what we have already proved, there is a saddle point (v̄R, w̄R) such that

(13.15) G(v, w̄R) ≤ G(v̄R, w̄R) ≤ G(v̄R, w), v ∈ NR, w ∈ MR.

In particular, we have

G(v0, w̄R) ≤ G(v̄R, w̄R) ≤ G(v̄R, w0).

Since G(v0, w) is convex, is l.s.c., and satisfies (13.9), it is bounded from below on M (Lemma 13.5). Thus,

G(v0, w̄R) ≥ A > −∞.

Similarly, G(v, w0) is bounded from above. Hence,

G(v̄R, w0) ≤ B < ∞.

Combining these with (13.15), we have

A ≤ G(v0, w̄R) ≤ G(v̄R, w̄R) ≤ G(v̄R, w0) ≤ B.

By (13.9) and (13.10), the sequences {v̄R}, {w̄R} are bounded. Hence, there are renamed subsequences such that

v̄R ⇀ v̄, w̄R ⇀ w̄ as R → ∞


and

G(v̄R, w̄R) → λ as R → ∞.

In view of (13.15), we have in the limit

G(v, w̄) ≤ λ ≤ G(v̄, w), v ∈ N, w ∈ M.

This shows that λ = G(v̄, w̄), and (v̄, w̄) is a saddle point. The theorem is completely proved.
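A simple quadratic example satisfying all the hypotheses of Theorem 13.6 (our illustration; f, g ∈ E are fixed elements):

```latex
% Take N = M = E and
G(v, w) = \|w\|^2 - \|v\|^2 + (f, w) + (g, v).
% G is strictly convex and continuous in w, strictly concave in v,
% G(0, w) \to \infty as \|w\| \to \infty, and
% G(v, 0) \to -\infty as \|v\| \to \infty.
% Setting the partial derivatives 2w + f and -2v + g to zero gives the
% (unique) saddle point
(v_0, w_0) = (g/2,\; -f/2).
```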

13.4 Criteria for convexity

If G is a differentiable functional on a Hilbert space E, there are simple criteria that can be used to verify the convexity of G. We have

Theorem 13.7. Let G be a differentiable functional on a closed, convex subset M of E. Then G is convex on M iff it satisfies any of the following inequalities for u0, u1 ∈ M:

(13.16) (G′(u0), u1 − u0) ≤ G(u1) − G(u0),

(13.17) (G′(u1), u1 − u0) ≥ G(u1) − G(u0),

(13.18) (G′(u1) − G′(u0), u1 − u0) ≥ 0.

Moreover, it will be strictly convex iff there is strict inequality in any of them when u0 ≠ u1.

Proof.

Let ut = (1 − t)u0 + tu1, 0 ≤ t ≤ 1, and ϕ(t) = G(ut). If G is convex, then

(13.19) G(ut) ≤ (1 − t)G(u0) + tG(u1),

or

(13.20) ϕ(t) ≤ (1 − t)ϕ(0) + tϕ(1), 0 ≤ t ≤ 1.

In particular, the slope of ϕ at t = 0 is less than or equal to the slope of the straight line connecting (0, ϕ(0)) and (1, ϕ(1)). Thus, ϕ′(0) ≤ ϕ(1) − ϕ(0), and this is merely (13.16). Reversing the roles of u0, u1 produces (13.17). We obtain (13.18) by subtracting (13.16) from (13.17). Conversely, (13.18) implies

ϕ′(t) − ϕ′(s) = (G′(ut) − G′(us), u1 − u0) = (G′(ut) − G′(us), ut − us)/(t − s) ≥ 0, 0 ≤ s < t ≤ 1.

Thus,

ϕ′(t) ≥ ϕ′(s), 0 ≤ s ≤ t ≤ 1,

which implies (13.20). Since this is equivalent to (13.19), we see that G is convex. If G is strictly convex, we obtain strict inequalities in (13.16)–(13.18), and strict inequality in any of them implies strict inequalities in (13.20) and (13.19).
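To see criterion (13.18) in action (our example), take G(u) = ‖u‖², whose derivative satisfies (G′(u), h) = 2(u, h):

```latex
% With G'(u) = 2u,
(G'(u_1) - G'(u_0), u_1 - u_0) = 2\|u_1 - u_0\|^2 > 0, \quad u_0 \ne u_1,
% so the strict form of (13.18) holds, and Theorem 13.7 shows that
% G is strictly convex.
```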


Corollary 13.8. Let G be a differentiable functional on a closed, convex subset M of E. Then G is concave on M iff it satisfies any of the following inequalities for u0, u1 ∈ M:

(13.21) (G′(u0), u1 − u0) ≥ G(u1) − G(u0),

(13.22) (G′(u1), u1 − u0) ≤ G(u1) − G(u0),

(13.23) (G′(u1) − G′(u0), u1 − u0) ≤ 0.

Moreover, it will be strictly concave iff there is strict inequality in any of them when u0 ≠ u1.

Proof. Note that G(u) is concave iff −G(u) is convex.

13.5 Partial derivatives

Let M, N be closed subspaces of a Hilbert space H satisfying H = M ⊕ N. Let G(u) be a functional on H. We can consider “partial” derivatives of G in the same way we considered total derivatives. We keep w = w0 ∈ M fixed and consider G(u) as a functional on N, where u = v + w0, v ∈ N. If the derivative of this functional exists at v = v0 ∈ N, we call it the partial derivative of G at u0 = v0 + w0 with respect to v ∈ N and denote it by G′N(u0). Similarly, we can define the partial derivative G′M(u0). We have

Lemma 13.9. If G′ exists at u0 = v0 + w0, then G′M(u0) and G′N(u0) exist and satisfy

(13.24) (G′(u0), u) = (G′M(u0), w) + (G′N(u0), v), u = v + w, v ∈ N, w ∈ M.

Proof. By definition,

G(u0 + u) = G(u0) + (G′(u0), u) + o(‖u‖), u ∈ H.

Therefore,

G(u0 + v) = G(u0) + (G′(u0), v) + o(‖v‖), v ∈ N,

and

G(u0 + w) = G(u0) + (G′(u0), w) + o(‖w‖), w ∈ M.

But

G(u0 + v) = G(u0) + (G′N(u0), v) + o(‖v‖), v ∈ N,

and

G(u0 + w) = G(u0) + (G′M(u0), w) + o(‖w‖), w ∈ M.

In particular, we have

(G′(u0) − G′N(u0), v) = o(‖v‖) as ‖v‖ → 0, v ∈ N.

Thus,

(G′(u0) − G′N(u0), tv) = o(|t|) as |t| → 0

for each fixed v ∈ N. This means that

(G′(u0) − G′N(u0), v) = o(|t|)/t → 0 as t → 0.

Hence,

(G′(u0), v) = (G′N(u0), v), v ∈ N.

Similarly,

(G′(u0), w) = (G′M(u0), w), w ∈ M.

These two identities combine to give (13.24).

Lemma 13.10. Under the hypotheses of Lemma 13.9, assume that G is differentiable on H, convex on M, and concave on N. Then,

(13.25) G(u) − G(u0) ≤ (G′N(u0), v − v0) + (G′M(u), w − w0),

u = v + w, u0 = v0 + w0, v, v0 ∈ N, w, w0 ∈ M.

If G is either strictly convex on M or strictly concave on N (or both), then one has strict inequality in (13.25) when u ≠ u0.

Proof. This follows from Theorem 13.7 and its corollary. In fact, we have

G(u) − G(u0) = G(u) − G(v + w0) + G(v + w0) − G(u0) ≤ (G′(u), w − w0) + (G′(u0), v − v0).

Apply Lemma 13.9.

We also have

Lemma 13.11. Under the hypotheses of Lemma 13.9, if G′(u0) exists and u0 = v0 + w0 is a saddle point, then

G′(u0) = G′M(u0) = G′N(u0) = 0.

Proof. By definition,

G(v + w0) ≤ G(u0) ≤ G(v0 + w), v ∈ N, w ∈ M.

Since v0 is a maximum point on N, we see that G′N(u0) = 0. Since w0 is a minimum point on M, we have G′M(u0) = 0 for the same reason. We then apply Lemma 13.9.
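A minimal finite-dimensional check (ours, not from the text): in H = R² with N the v-axis and M the w-axis, the functional G(v, w) = w² − v² is convex in w, concave in v, and its only saddle point is the origin, where all the derivatives in Lemma 13.11 vanish.

```latex
% G(v, w) = w^2 - v^2 on H = \mathbb{R}^2,
% N = \{(v, 0)\}, M = \{(0, w)\}, so that
% G'_N(u) = -2v, \qquad G'_M(u) = 2w, \qquad G'(u) = (-2v, 2w).
% At u_0 = (0, 0), inequality (13.8) holds:
G(v, 0) = -v^2 \le 0 = G(0, 0) \le w^2 = G(0, w),
% and G'(u_0) = G'_N(u_0) = G'_M(u_0) = 0, as Lemma 13.11 asserts.
```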

Corollary 13.12. Under the hypotheses of Lemma 13.10, if G is either strictly convex on M or strictly concave on N (or both), then G has at most one saddle point.

Proof. This follows from inequality (13.25).
