
This version: 22/10/2004

Chapter 3

Transfer functions (the s-domain)

Although in the previous chapter we occasionally dealt with MIMO systems, from now on we will deal exclusively with SISO systems unless otherwise stated. Certain aspects of what we say can be generalised to the MIMO setting, but it is not our intention to do this here.

Much of what we do in this book revolves around looking at things in the "s-domain," i.e., the complex plane. This domain comes about via the use of the Laplace transform. It is assumed that the reader is familiar with the Laplace transform, but we review some pertinent aspects in Section E.3. We are a little more careful with how we use the Laplace transform than seems to be the norm. This necessitates the use of some Laplace transform terminology that may not be familiar to all students, and so a review of the material in Section E.3 may be more desirable than usual. In the s-domain, the things we are interested in appear as quotients of polynomials in s, and so in Appendix C we provide a review of some polynomial facts you have likely seen before, but perhaps not as systematically as we shall require. The "transfer function" that we introduce in this chapter will be an essential tool in what we do subsequently.

 

Contents

3.1  Block diagram algebra . . . 76
3.2  The transfer function for a SISO linear system . . . 78
3.3  Properties of the transfer function for SISO linear systems . . . 80
     3.3.1  Controllability and the transfer function . . . 81
     3.3.2  Observability and the transfer function . . . 85
     3.3.3  Zero dynamics and the transfer function . . . 87
3.4  Transfer functions presented in input/output form . . . 90
3.5  The connection between the transfer function and the impulse response . . . 95
     3.5.1  Properties of the causal impulse response . . . 95
     3.5.2  Things anticausal . . . 98
3.6  The matter of computing outputs . . . 98
     3.6.1  Computing outputs for SISO linear systems in input/output form using the right causal Laplace transform . . . 99
     3.6.2  Computing outputs for SISO linear systems in input/output form using the left causal Laplace transform . . . 101
     3.6.3  Computing outputs for SISO linear systems in input/output form using the causal impulse response . . . 102
     3.6.4  Computing outputs for SISO linear systems . . . 105
     3.6.5  Formulae for impulse, step, and ramp responses . . . 108
3.7  Summary . . . 111

3.1 Block diagram algebra

We have informally drawn some block diagrams, and it is pretty plain how to handle them. However, let us make sure we are clear on how to do things with block diagrams. In Section 6.1 we will be looking at a more systematic way of handling systems with interconnected blocks of rational functions, but our discussion here will serve for what we need immediately, and actually serves for a great deal of what we need to do.

The blocks in a diagram will contain rational functions with indeterminate the Laplace transform variable s. Thus when you see a block like the one in Figure 3.1, where the rational function R(s) is given by

  R(s) = (p_n s^n + p_{n-1} s^{n-1} + ... + p_1 s + p_0) / (q_k s^k + q_{k-1} s^{k-1} + ... + q_1 s + q_0),

[Figure 3.1: The basic element in a block diagram — x̂_1(s) → R(s) → x̂_2(s)]

this means that x̂_2(s) = R(s) x̂_1(s), or that x_1 and x_2 are related in the time-domain by

  p_n x_1^(n) + p_{n-1} x_1^(n-1) + ... + p_1 x_1^(1) + p_0 x_1 = q_k x_2^(k) + q_{k-1} x_2^(k-1) + ... + q_1 x_2^(1) + q_0 x_2

(ignoring initial conditions). We shall shortly see just why this should form the basic element for the block diagrams we construct.

Now let us see how to assemble blocks and obtain relations. First let's look at two blocks in series as in Figure 3.2. If one wanted, one could introduce a variable x̃ that represents the signal between the blocks, and then one has

  x̃̂(s) = R_1(s) x̂_1(s),   x̂_2(s) = R_2(s) x̃̂(s)   ⟹   x̂_2(s) = R_1(s) R_2(s) x̂_1(s).

[Figure 3.2: Blocks in series — x̂_1(s) → R_1(s) → R_2(s) → x̂_2(s)]

Since multiplication of rational functions is commutative, it does not matter whether we write R_1(s) R_2(s) or R_2(s) R_1(s).

We can also assemble blocks in parallel as in Figure 3.3. If one introduces temporary signals x̃_1 and x̃_2 for what comes out of the upper and lower block respectively, then we have

  x̃̂_1(s) = R_1(s) x̂_1(s),   x̃̂_2(s) = R_2(s) x̂_1(s).

[Figure 3.3: Blocks in parallel — x̂_1(s) is split into R_1(s) and R_2(s), whose outputs are summed to give x̂_2(s)]

Notice that when we just split a signal, like we did before piping x̂_1 into both R_1 and R_2, the signal does not change. The temporary signals x̃̂_1 and x̃̂_2 go into the little circle that is a summer. This does what its name implies and sums the signals. That is,

  x̂_2(s) = x̃̂_1(s) + x̃̂_2(s).

Unless it is otherwise depicted, the summer always adds the signals. We'll see shortly what one has to do to the diagram to subtract. We can now solve for x̂_2 in terms of x̂_1:

  x̂_2(s) = (R_1(s) + R_2(s)) x̂_1(s).

The final configuration we examine is the negative feedback configuration depicted in Figure 3.4. Observe the minus sign attributed to the signal coming out of R_2 into the summer. This means that the signal going into the R_1 block is x̂_1(s) − R_2(s) x̂_2(s). This then gives

  x̂_2(s) = R_1(s) (x̂_1(s) − R_2(s) x̂_2(s))   ⟹   x̂_2(s) = (R_1(s) / (1 + R_1(s) R_2(s))) x̂_1(s).

[Figure 3.4: Blocks in negative feedback configuration — x̂_1(s) enters a summer, then R_1(s), giving x̂_2(s), which is fed back through R_2(s) with a minus sign]

We emphasise that when doing block diagram algebra, one need not get upset when dividing by a rational function unless the rational function is identically zero. That is, don’t be thinking to yourself, “But what if this blows up when s = 3?” because this is just not something to be concerned about for rational function arithmetic (see Appendix C).
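The three interconnection rules above are pure rational function arithmetic, so they are easy to check symbolically. The following sketch (assuming Python with sympy is available; it is not part of the original text) encodes the series, parallel, and negative feedback rules:

```python
import sympy as sp

s = sp.symbols('s')

def series(R1, R2):
    # Figure 3.2: xhat2 = R1(s) R2(s) xhat1
    return sp.cancel(R1 * R2)

def parallel(R1, R2):
    # Figure 3.3: xhat2 = (R1(s) + R2(s)) xhat1
    return sp.cancel(R1 + R2)

def negative_feedback(R1, R2):
    # Figure 3.4: xhat2 = R1(s)/(1 + R1(s) R2(s)) xhat1
    return sp.cancel(R1 / (1 + R1 * R2))

R1 = 1 / (s + 1)
R2 = sp.Integer(1)                # unity feedback
print(negative_feedback(R1, R2))  # 1/(s + 2)
```

Note that `cancel` performs exactly the rational function arithmetic referred to in the text: it simplifies p/q without any concern for particular values of s.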


[Figure 3.5: A unity feedback equivalent for Figure 3.4 — x̂_1(s) passes through a block R_2^{-1}(s), then through a unity feedback loop around R_1(s) R_2(s), to give x̂_2(s)]

We shall sometimes consider the case where we have unity feedback (i.e., R2(s) = 1) and to do so, we need to show that the situation in Figure 3.4 can be captured with unity feedback, perhaps with other modifications to the block diagram. Indeed, one can check that the relation between xˆ2 and xˆ1 is the same for the block diagram of Figure 3.5 as it is for the block diagram of Figure 3.4.

In Section 6.1 we will look at a compact way to represent block diagrams, and one that enables one to prove some general structure results on how to interconnect blocks with rational functions.

3.2 The transfer function for a SISO linear system

The first thing we do is look at our linear systems formalism of Chapter 2 and see how it appears in the Laplace transform scheme.

We suppose we are given a SISO linear system Σ = (A, b, ct, D), and we fiddle with Laplace transforms a bit for such systems. Note that one takes the Laplace transform of a vector function of time by taking the Laplace transform of each component. Thus we can take the left causal Laplace transform of the linear system

x˙ (t) = Ax(t) + bu(t)

(3.1)

y(t) = ctx(t) + Du(t)

to get

  s L^+_{0-}(x)(s) = A L^+_{0-}(x)(s) + b L^+_{0-}(u)(s)
  L^+_{0-}(y)(s) = c^t L^+_{0-}(x)(s) + D L^+_{0-}(u)(s).

It is convenient to write this in the form

  L^+_{0-}(x)(s) = (sI_n − A)^{-1} b L^+_{0-}(u)(s)
  L^+_{0-}(y)(s) = c^t L^+_{0-}(x)(s) + D L^+_{0-}(u)(s).      (3.2)

 

 

We should be careful how we interpret the inverse of the matrix sIn − A. What one does is think of the entries of the matrix as being polynomials, so the matrix will be invertible provided that its determinant is not the zero polynomial. However, the determinant is simply the characteristic polynomial which is never the zero polynomial. In any case, you should not really think of the entries as being real numbers that you evaluate depending on the value of s. This is best illustrated with an example.

3.1 Example Consider the mass-spring-damper A matrix:

  A = [ 0     1    ]     ⟹   sI_2 − A = [ s     −1      ]
      [ −k/m  −d/m ]                     [ k/m   s + d/m ].


To compute (sI2 − A)−1 we use the formula (A.2):

 

 

  (sI_2 − A)^{-1} = (1 / det(sI_2 − A)) adj(sI_2 − A),

where adj is the adjugate defined in Section A.3.1. We compute

  det(sI_2 − A) = s^2 + (d/m)s + k/m.

This is, of course, the characteristic polynomial for A with indeterminate s! Now we use the cofactor formulas to ascertain

  adj(sI_2 − A) = [ s + d/m   1 ]
                  [ −k/m      s ]

and so

  (sI_2 − A)^{-1} = (1 / (s^2 + (d/m)s + k/m)) [ s + d/m   1 ]
                                               [ −k/m      s ].

Note that we do not worry whether s^2 + (d/m)s + k/m vanishes for certain values of s because we are only thinking of it as a polynomial, and so as long as it is not the zero polynomial, we are okay. And since the characteristic polynomial is never the zero polynomial, we are always in fact okay.
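The adjugate computation in this example is easy to verify symbolically. The following sketch (assuming Python with sympy, whose `adjugate` method is the adj of Section A.3.1) checks the formula (sI_2 − A)^{-1} = adj(sI_2 − A)/det(sI_2 − A):

```python
import sympy as sp

s, m, d, k = sp.symbols('s m d k', positive=True)

# Mass-spring-damper A matrix from Example 3.1.
A = sp.Matrix([[0, 1], [-k/m, -d/m]])
M = s * sp.eye(2) - A

det_M = sp.expand(M.det())   # the characteristic polynomial s^2 + (d/m)s + k/m
adj_M = M.adjugate()         # the adjugate [[s + d/m, 1], [-k/m, s]]

# Verify (sI_2 - A)^{-1} = adj(sI_2 - A)/det(sI_2 - A).
assert sp.simplify(M.inv() - adj_M / det_M) == sp.zeros(2, 2)
print(det_M)
print(adj_M)
```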

Back to the generalities for the moment. We note that we may, in the Laplace transform domain, solve explicitly for the output L^+_{0-}(y) in terms of the input L^+_{0-}(u) to get

  L^+_{0-}(y)(s) = c^t (sI_n − A)^{-1} b L^+_{0-}(u)(s) + D L^+_{0-}(u)(s).

 

Note we may write

  T_Σ(s) ≜ L^+_{0-}(y)(s) / L^+_{0-}(u)(s) = c^t (sI_n − A)^{-1} b + D,

and we call T_Σ the transfer function for the linear system Σ = (A, b, c^t, D). Clearly, if we put everything over a common denominator, we have

  T_Σ(s) = (c^t adj(sI_n − A) b + D P_A(s)) / P_A(s).

It is convenient to think of the relations (3.2) in terms of a block diagram, and we show just such a thing in Figure 3.6. One can see in the figure why the term corresponding to the D matrix is called a feedforward term, as opposed to a feedback term. We have not yet included feedback, so it does not show up in our block diagram.

Let’s see how this transfer function looks for some examples.

3.2 Examples We carry on with our mass-spring-damper example, but now considering the various outputs. Thus we take

  A = [ 0     1    ],   b = [ 0   ].
      [ −k/m  −d/m ]        [ 1/m ]

1. The first case is when we have the position of the mass as output. Thus c = (1, 0) and D = 0_1, and we compute

  T_Σ(s) = (1/m) / (s^2 + (d/m)s + k/m).

[Figure 3.6: The block diagram representation of (3.2), with input û(s), initial state x_0, blocks b, (sI_n − A)^{-1}, and c^t, a feedforward term D, and output ŷ(s)]

2. If we take the velocity of the mass as output, then c = (0, 1) and D = 0_1, and with this we compute

  T_Σ(s) = (s/m) / (s^2 + (d/m)s + k/m).

3. The final case was acceleration output, and here we had c = (−k/m, −d/m) and D = (1/m)I_1. We compute in this case

  T_Σ(s) = (s^2/m) / (s^2 + (d/m)s + k/m).
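These three transfer functions can be reproduced directly from T_Σ(s) = c^t(sI_n − A)^{-1}b + D. The following sketch (assuming Python with sympy; the matrices, and the reading D = (1/m)I_1 for the acceleration case, are taken from the example above) does just that:

```python
import sympy as sp

s, m, d, k = sp.symbols('s m d k', positive=True)

# Mass-spring-damper in state form, with force input scaled by 1/m.
A = sp.Matrix([[0, 1], [-k/m, -d/m]])
b = sp.Matrix([0, 1/m])

def T_sigma(c, D):
    # T_Sigma(s) = c^t (sI_n - A)^{-1} b + D
    return sp.cancel((c.T * (s * sp.eye(2) - A).inv() * b)[0, 0] + D)

T_pos = T_sigma(sp.Matrix([1, 0]), 0)            # position output
T_vel = T_sigma(sp.Matrix([0, 1]), 0)            # velocity output
T_acc = T_sigma(sp.Matrix([-k/m, -d/m]), 1/m)    # acceleration output

print(T_pos, T_vel, T_acc)
```

After clearing 1/m from numerator and denominator, the three results are 1, s, and s^2 over ms^2 + ds + k, matching the displayed formulas.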

To top off this section, let's give an alternate representation for c^t adj(sI_n − A) b.

3.3 Lemma c^t adj(sI_n − A) b = det [ sI_n − A   b ].
                                    [ −c^t       0 ]

Proof By Lemma A.1 we have

  det [ sI_n − A   b ]  =  det(sI_n − A) det(c^t (sI_n − A)^{-1} b)  =  det(sI_n − A) c^t (sI_n − A)^{-1} b,
      [ −c^t       0 ]

since c^t (sI_n − A)^{-1} b is a scalar. Since we also have

  c^t (sI_n − A)^{-1} b = (c^t adj(sI_n − A) b) / det(sI_n − A),

we may conclude that

  c^t adj(sI_n − A) b = det [ sI_n − A   b ],
                            [ −c^t       0 ]

as desired. ■
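Lemma 3.3 is easy to test on a concrete instance. The following sketch (assuming Python with sympy; the matrices A, b, and c are arbitrary choices for illustration, not from the text) compares the two sides:

```python
import sympy as sp

s = sp.symbols('s')

# An arbitrary small instance for checking Lemma 3.3:
# c^t adj(sI_n - A) b = det [[sI_n - A, b], [-c^t, 0]].
A = sp.Matrix([[0, 1], [-2, -3]])
b = sp.Matrix([0, 1])
c = sp.Matrix([1, 1])

M = s * sp.eye(2) - A
lhs = sp.expand((c.T * M.adjugate() * b)[0, 0])

# Build the bordered (n+1) x (n+1) matrix and take its determinant.
bordered = sp.Matrix(sp.BlockMatrix([[M, b], [-c.T, sp.zeros(1, 1)]]))
rhs = sp.expand(bordered.det())

assert lhs == rhs
print(lhs)  # s + 1
```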

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

3.3 Properties of the transfer function for SISO linear systems

Now that we have constructed the transfer function as a rational function TΣ, let us look at some properties of this transfer function. For the most part, we will relate these properties to those of linear systems as discussed in Section 2.3. It is interesting that we


can infer from the transfer function some of the input/output behaviour we have discussed in the time-domain.

It is important that the transfer function be invariant under linear changes of state variable—we’d like the transfer function to be saying something about the system rather than just the set of coordinates we are using. The following result is an obvious one.

3.4 Proposition Let Σ = (A, b, c^t, D) be a SISO linear system and let T be an invertible n × n matrix (where A is also in R^{n×n}). If Σ' = (T A T^{-1}, T b, c^t T^{-1}, D), then T_{Σ'} = T_Σ.

By Proposition 2.5 this means that if we make a change of coordinate ξ = T −1x for the SISO linear system (3.1), then the transfer function remains unchanged.
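Proposition 3.4 can likewise be checked on a concrete instance. In the following sketch (assuming Python with sympy; the system matrices and the change of basis T are arbitrary illustrative choices), the transfer function is unchanged by the transformation (A, b, c^t) → (TAT^{-1}, Tb, c^tT^{-1}):

```python
import sympy as sp

s = sp.symbols('s')

A = sp.Matrix([[0, 1], [-2, -3]])
b = sp.Matrix([0, 1])
c = sp.Matrix([1, 0])
D = sp.Integer(0)

def T_sigma(A, b, c, D):
    n = A.shape[0]
    return sp.cancel((c.T * (s * sp.eye(n) - A).inv() * b)[0, 0] + D)

# An arbitrary invertible change of state coordinates T.
T = sp.Matrix([[1, 2], [0, 1]])
A2, b2, c2 = T * A * T.inv(), T * b, (c.T * T.inv()).T

assert sp.simplify(T_sigma(A, b, c, D) - T_sigma(A2, b2, c2, D)) == 0
print(T_sigma(A, b, c, D))  # 1/(s**2 + 3*s + 2)
```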

3.3.1 Controllability and the transfer function We will first concern ourselves with cases when the GCD of the numerator and denominator polynomials is not 1.

3.5 Theorem Let Σ = (A, b, ct, 01) be a SISO linear system. If (A, b) is controllable, then

the polynomials

P1(s) = ctadj(sIn − A)b, P2(s) = PA(s) are coprime as elements of R[s] if and only if (A, c) is observable.

Proof Although A, b, and c are real, let us for the moment think of them as being complex. This means that we think of b, c ∈ C^n and of A as being a linear map from C^n to itself. We also think of P_1, P_2 ∈ C[s].

Since (A, b) is controllable, by Theorem 2.37 and Proposition 3.4 we may without loss of generality suppose that

  A = [ 0     1     0     ...   0        ],   b = [ 0 ]
      [ 0     0     1     ...   0        ]        [ 0 ]
      [ .     .     .     ...   .        ]        [ . ]      (3.3)
      [ 0     0     0     ...   1        ]        [ 0 ]
      [ −p_0  −p_1  −p_2  ...   −p_{n-1} ]        [ 1 ]

Let us first of all determine TΣ with A and b of this form. Since the first n − 1 entries of b are zero, we only need the last column of adj(sIn − A). By definition of adj, this means we only need to compute the cofactors for the last row of sIn − A. A tedious but straightforward calculation shows that

 

 

 

 

 

 

  adj(sI_n − A) = [ ∗  ...  ∗  1       ]
                  [ ∗  ...  ∗  s       ]
                  [ .       .  .       ]
                  [ ∗  ...  ∗  s^{n-1} ]

where the ∗'s denote entries we do not need.

Thus, if c = (c_0, c_1, ..., c_{n-1}), then it is readily seen that

  adj(sI_n − A) b = [ 1       ]
                    [ s       ]      (3.4)
                    [ .       ]
                    [ s^{n-1} ]

  ⟹   P_1(s) = c^t adj(sI_n − A) b = c_{n-1} s^{n-1} + c_{n-2} s^{n-2} + ... + c_1 s + c_0.


With these preliminaries out of the way, we are ready to proceed with the proof proper. First suppose that (A, c) is not observable. Then there exists a nontrivial subspace

V ⊂ C^n with the property that A(V) ⊂ V and V ⊂ ker(c^t). Furthermore, we know by Theorem 2.17 that V is contained in the kernel of O(A, c). Since V is a C-vector space and since A restricts to a linear map on V, there is a nonzero vector z ∈ V with the property Az = αz for some α ∈ C. This is a consequence of the fact that the characteristic polynomial of A restricted to V will have a root by the fundamental theorem of algebra. Since z is an eigenvector for A with eigenvalue α, we may use (3.3) to ascertain that the components of z satisfy

  z_2 = αz_1
  z_3 = αz_2 = α^2 z_1
  ...
  z_{n-1} = αz_{n-2} = α^{n-2} z_1
  −p_0 z_1 − p_1 z_2 − ... − p_{n-1} z_n = αz_n.

The last of these equations then reads

  αz_n + p_{n-1} z_n + p_{n-2} α^{n-2} z_1 + ... + p_1 α z_1 + p_0 z_1 = 0.

Using the fact that α is a root of the characteristic polynomial P_2, we arrive at

  αz_n + p_{n-1} z_n = α^{n-1}(α + p_{n-1}) z_1,

from which we see that z_n = α^{n-1} z_1 provided α ≠ −p_{n-1}. If α = −p_{n-1} then z_n is left free. Thus the eigenspace for the eigenvalue α is

  span{(1, α, ..., α^{n-1})}

if α ≠ −p_{n-1} and

  span{(1, α, ..., α^{n-1}), (0, ..., 0, 1)}

if α = −p_{n-1}. In either case, the vector z_0 ≜ (1, α, ..., α^{n-1}) is an eigenvector for the eigenvalue α. Thus z_0 ∈ V ⊂ ker(c^t). This means that

  c^t z_0 = c_0 + c_1 α + ... + c_{n-1} α^{n-1} = 0,

and so α is a root of P_1 (by (3.4)) as well as being a root of the characteristic polynomial P_2. Thus P_1 and P_2 are not coprime.

Now suppose that P_1 and P_2 are not coprime. Since these are complex polynomials, this means that there exists α ∈ C so that P_1(s) = (s − α)Q_1(s) and P_2(s) = (s − α)Q_2(s) for some Q_1, Q_2 ∈ C[s]. We claim that the vector z_0 = (1, α, ..., α^{n-1}) is an eigenvector for A. Indeed, the components of w = Az_0 are

  w_1 = α
  w_2 = α^2
  ...
  w_{n-1} = α^{n-1}
  w_n = −p_0 − p_1 α − ... − p_{n-1} α^{n-1}.


However, since α is a root of P_2, the right-hand side of the last of these equations is simply α^n. This shows that Az_0 = w = αz_0, and so z_0 is an eigenvector as claimed. Now we claim that z_0 ∈ ker(O(A, c)). Indeed, since α is a root of P_1, by (3.4) we have

  c^t z_0 = c_0 + c_1 α + ... + c_{n-1} α^{n-1} = 0.

Therefore, z_0 ∈ ker(c^t). Since A^k z_0 = α^k z_0 we also have z_0 ∈ ker(c^t A^k) for any k ≥ 1. But this ensures that z_0 ∈ ker(O(A, c)) as claimed. Thus we have found a nonzero vector in ker(O(A, c)), which means that (A, c) is not observable.

To complete the proof we must now take into account the fact that, in using the fundamental theorem of algebra in some of the arguments above, we have constructed a proof that only works when A, b, and c are thought of as complex. Suppose now that they are real, and first assume that (A, c) is not observable. The proof above shows that there is either a one-dimensional real subspace V of R^n with the property that Av = αv for some nonzero v ∈ V and some α ∈ R, or a two-dimensional real subspace V of R^n with vectors v_1, v_2 ∈ V with the property that

  Av_1 = σv_1 − ωv_2,   Av_2 = ωv_1 + σv_2

for some σ, ω ∈ R with ω ≠ 0. In the first case we follow the above proof and see that α ∈ R is a root of both P_1 and P_2, and in the second case we see that σ + iω is a root of both P_1 and P_2. In either case, P_1 and P_2 are not coprime.

Finally, in the real case we suppose that P_1 and P_2 are not coprime. If the root they share is α ∈ R, then the nonzero vector (1, α, ..., α^{n-1}) is shown as above to be in ker(O(A, c)). If the root they share is α = σ + iω, then the two nonzero vectors Re(1, α, ..., α^{n-1}) and Im(1, α, ..., α^{n-1}) are shown to be in ker(O(A, c)), and so (A, c) is not observable. ■
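The key computation in the proof — that in the canonical form (3.3) one has adj(sI_n − A)b = (1, s, ..., s^{n-1}), and hence P_1(s) = c_{n-1}s^{n-1} + ... + c_0 — can be spot-checked for a small n. A sketch for n = 3 (assuming Python with sympy):

```python
import sympy as sp

s, p0, p1, p2, c0, c1, c2 = sp.symbols('s p0 p1 p2 c0 c1 c2')

# Controllable canonical form (3.3) for n = 3.
A = sp.Matrix([[0, 1, 0], [0, 0, 1], [-p0, -p1, -p2]])
b = sp.Matrix([0, 0, 1])
c = sp.Matrix([c0, c1, c2])

M = s * sp.eye(3) - A
# The last column of adj(sI_3 - A) is (1, s, s^2), so adj(sI_3 - A) b = (1, s, s^2).
assert sp.simplify(M.adjugate() * b - sp.Matrix([1, s, s**2])) == sp.zeros(3, 1)
# Hence P1(s) = c^t adj(sI_3 - A) b = c2 s^2 + c1 s + c0, as in (3.4).
P1 = sp.expand((c.T * M.adjugate() * b)[0, 0])
assert P1 == c2 * s**2 + c1 * s + c0
print(P1)
```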

I hope you agree that this is a non-obvious result! That one should be able to infer observability merely by looking at the transfer function is interesting indeed. Let us see that this works in an example.

3.6 Example (Example 2.13 cont'd) We consider a slight modification of Example 2.13 which, you will recall, was not observable. We take

  A = [ 0  1 ],   b = [ 0 ],   c = [ 1  ],
      [ 1  ε ]        [ 1 ]        [ −1 ]

from which we compute

  c^t adj(sI_2 − A) b = 1 − s,   det(sI_2 − A) = s^2 − εs − 1.

Note that when ε = 0 we have exactly the situation of Example 2.13. The controllability matrix is

  C(A, b) = [ 0  1 ],
            [ 1  ε ]

and so the system is controllable. The roots of the characteristic polynomial are

  s = (ε ± √(ε^2 + 4)) / 2,

and c^t adj(sI_2 − A) b has the single root s = 1. The characteristic polynomial has a root of 1 when and only when ε = 0. Therefore, from Theorem 3.5 (which applies since (A, b) is controllable) we see that the system is observable if and only if ε ≠ 0. This can also be seen by computing the observability matrix:

  O(A, c) = [ 1    −1    ].
            [ −1   1 − ε ]

This matrix has full rank except when ε = 0, and this is as it should be.
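The computations of this example can be verified symbolically. The following sketch (assuming Python with sympy, and using the matrices as written above, with the parameter ε written as `epsilon`) checks the numerator, denominator, and observability matrix:

```python
import sympy as sp

s, e = sp.symbols('s epsilon')

# The modified Example 2.13 system of Example 3.6.
A = sp.Matrix([[0, 1], [1, e]])
b = sp.Matrix([0, 1])
c = sp.Matrix([1, -1])

M = s * sp.eye(2) - A
P1 = sp.expand((c.T * M.adjugate() * b)[0, 0])   # the numerator 1 - s
P2 = sp.expand(M.det())                          # the characteristic polynomial

# At epsilon = 0 the numerator root s = 1 is also a root of P2 ...
assert P2.subs({e: 0, s: 1}) == 0
# ... and the observability matrix [[c^t], [c^t A]] loses rank (det = -epsilon).
O = sp.Matrix.vstack(c.T, c.T * A)
assert sp.expand(O.det() + e) == 0
print(P1, P2)
```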

Note that Theorem 3.5 holds only when (A, b) is controllable. When it is not controllable, the situation is somewhat disastrous, as the following result describes.

 

 

3.7 Theorem If (A, b) ∈ R^{n×n} × R^n is not controllable, then for any c ∈ R^n the polynomials

  P_1(s) = c^t adj(sI_n − A) b,   P_2(s) = P_A(s)

are not coprime.

 

 

 

 

 

 

 

 

 

 

 

 

 

Proof By Theorem 2.39 we may suppose that A and b are given by

  A = [ A_11       A_12 ],   b = [ b_1     ]
      [ 0_{n-ℓ,ℓ}  A_22 ]        [ 0_{n-ℓ} ]

for some ℓ < n. Therefore,

  sI_n − A = [ sI_ℓ − A_11   −A_12            ]
             [ 0_{n-ℓ,ℓ}     sI_{n-ℓ} − A_22 ]

  ⟹   (sI_n − A)^{-1} = [ (sI_ℓ − A_11)^{-1}   ∗                      ],
                         [ 0_{n-ℓ,ℓ}            (sI_{n-ℓ} − A_22)^{-1} ]

where the ∗ denotes a term that will not matter to us. Thus we have

  (sI_n − A)^{-1} b = [ (sI_ℓ − A_11)^{-1} b_1 ].
                      [ 0_{n-ℓ}                ]

This means that if we write c = (c_1, c_2) ∈ R^ℓ × R^{n-ℓ} we must have

  c^t (sI_n − A)^{-1} b = c_1^t (sI_ℓ − A_11)^{-1} b_1.

This shows that

  (c^t adj(sI_n − A) b) / det(sI_n − A) = (c_1^t adj(sI_ℓ − A_11) b_1) / det(sI_ℓ − A_11).

The denominator on the left is monic of degree n and the denominator on the right is monic of degree ℓ. This must mean that there is a monic polynomial P of degree n − ℓ so that

  (c^t adj(sI_n − A) b) / det(sI_n − A) = (P(s) c_1^t adj(sI_ℓ − A_11) b_1) / (P(s) det(sI_ℓ − A_11)),

which means that the polynomials c^t adj(sI_n − A) b and det(sI_n − A) are not coprime. ■

This result shows that when (A, b) is not controllable, the order of the denominator in T_Σ, after performing pole/zero cancellations, will be strictly less than the state dimension. Thus the transfer function of an uncontrollable system never represents the complete state information.

Let’s see how this works out in our uncontrollable example.


3.8 Example (Example 2.19 cont'd) We consider a slight modification of the system in Example 2.19, and consider the system

  A = [ 1  ε  ],   b = [ 0 ].
      [ 1  −1 ]        [ 1 ]

The controllability matrix is given by

  C(A, b) = [ 0  ε  ],
            [ 1  −1 ]

which has full rank except when ε = 0. We compute

  adj(sI_2 − A) = [ s + 1   ε     ]   ⟹   adj(sI_2 − A) b = [ ε     ].
                  [ 1       s − 1 ]                          [ s − 1 ]

Therefore, for c = (c_1, c_2) we have

  c^t adj(sI_2 − A) b = c_2(s − 1) + c_1 ε.

We also have det(sI_2 − A) = s^2 − ε − 1, which for ε = 0 is s^2 − 1 = (s + 1)(s − 1). This means that there will always be a pole/zero cancellation in T_Σ precisely when ε = 0. This is precisely when (A, b) is not controllable, just as Theorem 3.7 predicts.
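The pole/zero cancellation in this example can also be exhibited symbolically. The following sketch (assuming Python with sympy, using the matrices as written above with ε as `epsilon` and a symbolic c) confirms that s = 1 is a common root for every c when ε = 0:

```python
import sympy as sp

s, e, c1, c2 = sp.symbols('s epsilon c1 c2')

# The modified Example 2.19 system of Example 3.8.
A = sp.Matrix([[1, e], [1, -1]])
b = sp.Matrix([0, 1])

Cmat = sp.Matrix.hstack(b, A * b)       # controllability matrix [b | Ab]
assert sp.expand(Cmat.det()) == -e      # full rank iff epsilon != 0

M = s * sp.eye(2) - A
c = sp.Matrix([c1, c2])
P1 = sp.expand((c.T * M.adjugate() * b)[0, 0])   # c2*(s - 1) + c1*epsilon
P2 = sp.expand(M.det())                          # s**2 - epsilon - 1

# When epsilon = 0, s = 1 is a common root for every c: a pole/zero cancellation.
assert P1.subs({e: 0, s: 1}) == 0 and P2.subs({e: 0, s: 1}) == 0
print(P1, P2)
```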

 

3.3.2 Observability and the transfer function The above relationship between observability and pole/zero cancellations in the numerator and denominator of T_Σ relies on (A, b) being controllable. There is a similar story when (A, c) is observable, and this is told by the following theorem.

3.9 Theorem Let Σ = (A, b, ct, 01) be a SISO linear system. If (A, c) is observable, then

the polynomials

P1(s) = ctadj(sIn − A)b, P2(s) = PA(s) are coprime as elements of R[s] if and only if (A, b) is controllable.

Proof First we claim that

  b^t adj(sI_n − A^t) c = c^t adj(sI_n − A) b,   det(sI_n − A^t) = det(sI_n − A).      (3.5)

Indeed, since the transpose of a 1 × 1 matrix, i.e., a scalar, is simply the matrix itself, and since matrix inversion and transposition commute, we have

  c^t (sI_n − A)^{-1} b = b^t (sI_n − A^t)^{-1} c.

This implies, therefore, that

  (c^t adj(sI_n − A) b) / det(sI_n − A) = (b^t adj(sI_n − A^t) c) / det(sI_n − A^t).

Since the eigenvalues of A and A^t agree,

  det(sI_n − A^t) = det(sI_n − A),

and from this it follows that

  b^t adj(sI_n − A^t) c = c^t adj(sI_n − A) b.

Now, since (A, c) is observable, (A^t, c) is controllable (cf. the proof of Theorem 2.38). Therefore, by Theorem 3.5, the polynomials

  P̃_1(s) = b^t adj(sI_n − A^t) c,   P̃_2(s) = P_{A^t}(s)

are coprime if and only if (A^t, b) is observable. Thus the polynomials P̃_1 and P̃_2 are coprime if and only if (A, b) is controllable. However, by (3.5) P_1 = P̃_1 and P_2 = P̃_2, and the result now follows. ■

 

 

 

 

 

 

 

 

 

 

Let us illustrate this result with an example.

3.10 Example (Example 2.19 cont'd) We shall revise slightly Example 2.19 by taking

  A = [ 1  ε  ],   b = [ 0 ],   c = [ 0 ].
      [ 1  −1 ]        [ 1 ]        [ 1 ]

We determine that

  c^t adj(sI_2 − A) b = s − 1,   det(sI_2 − A) = s^2 − ε − 1.

The observability matrix is computed as

  O(A, c) = [ 0  1  ],
            [ 1  −1 ]

so the system is observable for all ε. On the other hand, the controllability matrix is

  C(A, b) = [ 0  ε  ],
            [ 1  −1 ]

so (A, b) is controllable if and only if ε ≠ 0. What's more, the roots of the characteristic polynomial are s = ±√(1 + ε). Therefore, the polynomials c^t adj(sI_2 − A) b and det(sI_2 − A) are coprime if and only if ε ≠ 0, just as predicted by Theorem 3.9.

 

We also have the following analogue of Theorem 3.7.

 

3.11 Theorem If (A, c) ∈ R^{n×n} × R^n is not observable, then for any b ∈ R^n the polynomials

P1(s) = ctadj(sIn − A)b, P2(s) = PA(s)

are not coprime.

Proof This follows immediately from Theorem 3.7, (3.5), and the fact that (A, c) is observable if and only if (A^t, c) is controllable. ■

It is, of course, possible to illustrate this in an example, so let us do so.


3.12 Example (Example 2.13 cont'd) Here we work with a slight modification of Example 2.13 by taking

  A = [ 0  1  ],   c = [ 1  ].
      [ 1  −ε ]        [ −1 ]

As the observability matrix is

  O(A, c) = [ 1   −1    ],
            [ −1  1 + ε ]

the system is observable if and only if ε ≠ 0. If b = (b_1, b_2), then we compute

  c^t adj(sI_2 − A) b = (b_1 − b_2)(s − 1) + b_1 ε.

We also have det(sI_2 − A) = s^2 + εs − 1. Thus we see that indeed the polynomials c^t adj(sI_2 − A) b and det(sI_2 − A) are not coprime for every b exactly when ε = 0, i.e., exactly when the system is not observable.

The following corollary summarises the strongest statement one may make concerning the relationship between controllability and observability and pole/zero cancellations in the transfer functions.

3.13 Corollary Let Σ = (A, b, ct, 01) be a SISO linear system, and define the polynomials

P1(s) = ctadj(sIn − A)b, P2(s) = det(sIn − A).

The following statements are equivalent:

(i) (A, b) is controllable and (A, c) is observable;

(ii) the polynomials P_1 and P_2 are coprime.

Note that if you are only handed a numerator polynomial and a denominator polynomial that are not coprime, you can only conclude that the system is not both controllable and observable. From the polynomials alone, one cannot conclude that the system is, say, controllable but not observable (see Exercise E3.9).
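Corollary 3.13 is straightforward to exercise on concrete instances. The following sketch (assuming Python with sympy; both test systems are arbitrary illustrative choices, the second being an Example 2.13-style unobservable pair) checks coprimeness of P_1 and P_2 against the rank tests for controllability and observability:

```python
import sympy as sp

s = sp.symbols('s')

def coprime_and_minimal(A, b, c):
    # Returns (P1 and P2 coprime?, (A,b) controllable and (A,c) observable?).
    n = A.shape[0]
    M = s * sp.eye(n) - A
    P1 = sp.expand((c.T * M.adjugate() * b)[0, 0])
    P2 = sp.expand(M.det())
    coprime = sp.degree(sp.gcd(P1, P2), s) == 0
    Cmat = sp.Matrix.hstack(*[A**k * b for k in range(n)])
    Omat = sp.Matrix.vstack(*[c.T * A**k for k in range(n)])
    return coprime, Cmat.rank() == n and Omat.rank() == n

# Controllable and observable: P1 and P2 are coprime.
A = sp.Matrix([[0, 1], [-2, -3]]); b = sp.Matrix([0, 1]); c = sp.Matrix([1, 0])
print(coprime_and_minimal(A, b, c))    # (True, True)

# Not observable: a common factor appears.
A2 = sp.Matrix([[0, 1], [1, 0]]); c2 = sp.Matrix([1, -1])
print(coprime_and_minimal(A2, b, c2))  # (False, False)
```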

3.3.3 Zero dynamics and the transfer function It turns out that there is another interesting interpretation of the transfer function as it relates to the zero dynamics. The following result is of general interest, and is also an essential part of the proof of Theorem 3.15. We have already seen that det(sIn − A) is never the zero polynomial. This result tells us exactly when ctadj(sIn − A)b is the zero polynomial.

3.14 Lemma Let Σ = (A, b, c^t, D) be a SISO linear system, and let Z_Σ be the subspace constructed in Algorithm 2.28. Then c^t adj(sI_n − A) b is the zero polynomial if and only if b ∈ Z_Σ.

Proof Suppose that b ∈ Z_Σ. By Theorem 2.31 this means that Z_Σ is A-invariant, and so also is (sI_n − A)-invariant. Furthermore, from the expression (A.3) we may ascertain that Z_Σ is (sI_n − A)^{-1}-invariant, or equivalently, that Z_Σ is adj(sI_n − A)-invariant. Thus we must have adj(sI_n − A) b ∈ Z_Σ. Since Z_Σ ⊂ ker(c^t), we must have c^t adj(sI_n − A) b = 0.


Conversely, suppose that c^t adj(sI_n − A) b = 0. By Exercise EE.4 this means that c^t e^{At} b = 0 for t ≥ 0. If we Taylor expand e^{At} about t = 0 we get

  Σ_{k=0}^∞ (t^k / k!) c^t A^k b = 0.

Evaluating the kth derivative of this expression with respect to t at t = 0 gives c^t A^k b = 0, k = 0, 1, 2, .... Given these relations, we claim that the subspaces Z_k, k = 1, 2, ... of Algorithm 2.28 are given by Z_k = ker(c^t A^{k-1}). Since Z_1 = ker(c^t), the claim holds for k = 1. Now suppose that the claim holds for k = m > 1. Thus we have Z_m = ker(c^t A^{m-1}). By Algorithm 2.28 we have

  Z_{m+1} = {x ∈ R^n | Ax ∈ Z_m + span{b}}
          = {x ∈ R^n | Ax ∈ ker(c^t A^{m-1}) + span{b}}
          = {x ∈ R^n | Ax ∈ ker(c^t A^{m-1})}
          = {x ∈ R^n | x ∈ ker(c^t A^m)}
          = ker(c^t A^m),

where, on the third line, we have used the fact that b ∈ ker(c^t A^{m-1}). Thus our claim follows, and since b ∈ ker(c^t A^{k-1}) = Z_k for each k = 1, 2, ..., it follows that b ∈ Z_Σ. ■

The lemma, note, gives us conditions on so-called invertibility of the transfer function. In this case we have invertibility if and only if the transfer function is non-zero.

With this, we may now prove the following.

3.15 Theorem Consider a SISO control system of the form Σ = (A, b, c^t, 0_1). If c^t adj(sI_n − A) b is not the zero polynomial, then the zeros of c^t adj(sI_n − A) b are exactly the spectrum for the zero dynamics of Σ.

Proof Since c^t adj(sI_n − A) b ≠ 0, by Lemma 3.14 we have b ∉ Z_Σ. We can therefore choose a basis B = {v_1, ..., v_n} for R^n with the property that {v_1, ..., v_ℓ} is a basis for Z_Σ and v_{ℓ+1} = b. With respect to this basis we can write c = (0, c_2) ∈ R^ℓ × R^{n-ℓ} since Z_Σ ⊂ ker(c^t). We can also write b = (0, (1, 0, ..., 0)) ∈ R^ℓ × R^{n-ℓ}, and we denote b_2 = (1, 0, ..., 0) ∈ R^{n-ℓ}.

We write the matrix for the linear map A in this basis as

  [ A_11  A_12 ].
  [ A_21  A_22 ]

Since A(Z_Σ) ⊂ Z_Σ + span{b}, for k = 1, ..., ℓ we must have Av_k = u_k + α_k v_{ℓ+1} for some u_k ∈ Z_Σ and some α_k ∈ R. This means that A_21 must have the form

  A_21 = [ α_1  α_2  ...  α_ℓ ]
         [ 0    0    ...  0   ]      (3.6)
         [ .    .         .   ]
         [ 0    0    ...  0   ].

Therefore f1 = (−α1, . . . , −α`) is the unique vector for which b2ft1 = −A21. We then define f = (f1, 0) R` × Rn−` and determine the matrix for A + bft in the basis B to be

A11 A12 .

0n−`,` A22


Thus, by (A.3), A + b f^t has Z_Σ as an invariant subspace. Furthermore, by Algorithm 2.28, we know that the matrix N_Σ describing the zero dynamics is exactly A_11.

We now claim that for all s ∈ C the matrix

    [ sI_{n−ℓ} − A_22   b_2 ]
    [ −c_2^t            0   ]                                      (3.7)

is invertible. To show this, suppose that there exists a vector (x_2, u) ∈ R^{n−ℓ} × R with the property that

    [ sI_{n−ℓ} − A_22   b_2 ] [ x_2 ]   [ (sI_{n−ℓ} − A_22)x_2 + b_2 u ]   [ 0 ]
    [ −c_2^t            0   ] [  u  ] = [ −c_2^t x_2                   ] = [ 0 ].      (3.8)

Define

    Z = Z_Σ + span{(0, x_2)}.

Since Z_Σ ⊂ ker(c^t) and since c_2^t x_2 = 0 we conclude that Z ⊂ ker(c^t). Given the form of A_21 in (3.6), and using (3.8), we see that if v ∈ Z then Av ∈ Z + span{b}. This shows that Z ⊂ Z_Σ, and from this we conclude that (0, x_2) ∈ Z_Σ and so x_2 must be zero. It then follows from (3.8) that u = 0, and this shows that the kernel of the matrix (3.7) contains only the zero vector, and so the matrix must be invertible.

Next we note that

    [ sI_n − A − b f^t   b ]   [ sI_n − A   b ] [ I_n    0 ]
    [ −c^t               0 ] = [ −c^t       0 ] [ −f^t   1 ]

and so

    det [ sI_n − A − b f^t   b ]       [ sI_n − A   b ]
        [ −c^t               0 ] = det [ −c^t       0 ].           (3.9)

We now rearrange the matrix on the left-hand side corresponding to our decomposition. The matrix for the linear map corresponding to this matrix in the basis B is

    [ sI_ℓ − A_11   −A_12              0   ]
    [ 0_{n−ℓ,ℓ}     sI_{n−ℓ} − A_22   b_2 ]
    [ 0^t           −c_2^t            0   ].

The determinant of this matrix is therefore exactly the determinant on the left-hand side of (3.9). This means that

    det [ sI_n − A   b ]       [ sI_ℓ − A_11   −A_12              0   ]
        [ −c^t       0 ] = det [ 0_{n−ℓ,ℓ}     sI_{n−ℓ} − A_22   b_2 ]
                               [ 0^t           −c_2^t            0   ].

By Lemma 3.3 we see that the left-hand determinant is exactly c^t adj(sI_n − A)b. Therefore, the values of s for which the left-hand side is zero are exactly the roots of the numerator of the transfer function. On the other hand, since the matrix (3.7) is invertible for all s ∈ C, the values of s for which the right-hand side vanishes must be those values of s for which det(sI_ℓ − A_11) = 0, i.e., the eigenvalues of A_11. But we have already decided that A_11 is the matrix that represents the zero dynamics, so this completes the proof.

 


This theorem is very important as it allows us to infer—at least in those cases where the transfer function is invertible—the nature of the zero dynamics from the transfer function. If, for example, there are zeros with positive real part, we know our system has unstable zero dynamics, and we ought to be careful.

To further illustrate this connection between the transfer function and the zero dynamics, we give an example.

3.16 Example (Example 2.27 cont'd) Here we look again at Example 2.27. We have

    A = [  0   1 ]      b = [ 0 ]      c = [  1 ]
        [ −2  −3 ],         [ 1 ],         [ −1 ].

We had computed Z_Σ = span{(1, 1)}, and so b ∉ Z_Σ. Thus c^t adj(sI_2 − A)b is not the zero polynomial by Lemma 3.14. Well, for pity's sake, we can just compute it:

    c^t adj(sI_2 − A)b = 1 − s.

Since this is non-zero, we can apply Theorem 3.15 and conclude that the spectrum for the zero dynamics is {1}. This agrees with our computation in Example 2.29 where we computed

    N_Σ = [1].

Since spec(N_Σ) ∩ C+ ≠ ∅, the system is not minimum phase.
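The computation in this example is easy to reproduce symbolically. The sketch below (ours, not from the text) uses sympy to form c^t adj(sI_2 − A)b and solve for its roots, recovering the spectrum {1} of the zero dynamics:

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[0, 1], [-2, -3]])
b = sp.Matrix([0, 1])
c = sp.Matrix([1, -1])

# Numerator of the transfer function: c^t adj(sI - A) b
num = sp.expand((c.T * (s * sp.eye(2) - A).adjugate() * b)[0, 0])
print(num)               # 1 - s
print(sp.solve(num, s))  # [1]  -> spectrum of the zero dynamics
```

Since the root +1 lies in C+, this confirms numerically that the system is not minimum phase.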

3.17 Remark We close with an important remark. This section contains some technically demanding mathematics. If you can understand this, then that is really great, and I encourage you to try to do this. However, it is more important that you get the punchline here which is:

The transfer function contains a great deal of information about the behaviour of the system, and it does so in a deceptively simple manner.

We will be seeing further implications of this as things go along.

 

3.4 Transfer functions presented in input/output form

The discussion of the previous section supposes that we are given a state-space model for our system. However, this is sometimes not the case. Sometimes, all we are given is a scalar differential equation that describes how a scalar output behaves when given a scalar input. We suppose that we are handed an equation of the form

    y^(n)(t) + p_{n−1} y^(n−1)(t) + · · · + p_1 y^(1)(t) + p_0 y(t) =
        c_{n−1} u^(n−1)(t) + c_{n−2} u^(n−2)(t) + · · · + c_1 u^(1)(t) + c_0 u(t)      (3.10)

for real constants p_0, . . . , p_{n−1} and c_0, . . . , c_{n−1}. How might such a model be arrived at? Well, one might perform measurements on the system given certain inputs, and figure out that a differential equation of the above form is one that seems to accurately model what you are seeing. This is not a topic for this book, and is referred to as "model identification." For now, we will just suppose that we are given a system of the form (3.10). Note here that there are no states in this model! All there is is the input u(t) and the output y(t). Our

22/10/2004

3.4 Transfer functions presented in input/output form

91

system may possess states, but the model of the form (3.10) does not know about them. As we have already seen in the discussion following Theorem 2.37, there is a relationship between the systems we discuss in this section, and SISO linear systems. We shall further develop this relationship in this section.

For the moment, let us alleviate the nuisance of having to ever again write the expression (3.10). Given the differential equation (3.10) we define two polynomials in R[ξ] by

    D(ξ) = ξ^n + p_{n−1} ξ^{n−1} + · · · + p_1 ξ + p_0
    N(ξ) = c_{n−1} ξ^{n−1} + c_{n−2} ξ^{n−2} + · · · + c_1 ξ + c_0.

Note that if we let ξ = d/dt then we may think of D(d/dt) and N(d/dt) as differential operators, and we can write

    D(d/dt)(y) = y^(n)(t) + p_{n−1} y^(n−1)(t) + · · · + p_1 y^(1)(t) + p_0 y(t).

In like manner we can think of N(d/dt) as a differential operator, and so we write

    N(d/dt)(u) = c_{n−1} u^(n−1)(t) + c_{n−2} u^(n−2)(t) + · · · + c_1 u^(1)(t) + c_0 u(t).

In this notation the differential equation (3.10) reads D(d/dt)(y) = N(d/dt)(u). With this little bit of notation in mind, we make some definitions.
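The operator notation D(d/dt) and N(d/dt) can be sketched in sympy; the coefficients below are our own illustrative example, not from the text:

```python
import sympy as sp

t = sp.symbols('t')

# Example (made-up) coefficients: D(xi) = xi^2 + 3 xi + 2, N(xi) = xi + 1.
p = [2, 3]   # p_0, p_1 (D is monic of degree 2)
c = [1, 1]   # c_0, c_1

def D_op(f):
    """Apply the differential operator D(d/dt) to an expression in t."""
    return sp.diff(f, t, 2) + sum(p[k] * sp.diff(f, t, k) for k in range(2))

def N_op(f):
    """Apply the differential operator N(d/dt) to an expression in t."""
    return sum(c[k] * sp.diff(f, t, k) for k in range(2))

# Since D(-1) = 1 - 3 + 2 = 0, the operator D(d/dt) annihilates e^{-t}:
print(sp.simplify(D_op(sp.exp(-t))))  # 0
```

In this notation a pair (u(t), y(t)) solves (3.10) exactly when D_op applied to y equals N_op applied to u.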

3.18 Definition A SISO linear system in input/output form is a pair of polynomials (N, D) in R[s] with the properties

(i) D is monic, and

(ii) D and N are coprime.

The relative degree of (N, D) is deg(D) − deg(N). The system is proper (resp. strictly proper) if its relative degree is nonnegative (resp. positive). If (N, D) is not proper, it is improper. A SISO linear system (N, D) in input/output form is stable if D has no roots in C+ and minimum phase if N has no roots in C+. If (N, D) is not minimum phase, it is nonminimum phase. The transfer function associated with the SISO linear system (N, D) in input/output form is the rational function T_{N,D}(s) = N(s)/D(s).

3.19 Remarks 1. Note that in the definition we allow for the numerator to have degree greater than that of the denominator, even though this is not the case when the input/output system is derived from a di erential equation (3.10). Our reason for doing this is that occasionally one does encounter transfer functions that are improper, or maybe situations where a transfer function, even though proper itself, is a product of rational functions, at least one of which is not proper. This will happen, for example, in Section 6.5 with the “derivative” part of PID control. Nevertheless, we shall for the most part be thinking of proper, or even strictly proper SISO linear systems in input/output form.

2. At this point, it is not quite clear what the motivation is behind calling a system (N, D) stable or minimum phase. However, this will certainly become clear as we discuss properties of transfer functions. This being said, a realisation of just what "stable" might mean will not be made fully until Chapter 5.


If we take the causal left Laplace transform of the differential equation (3.10) we get simply D(s)L_0^+(y)(s) = N(s)L_0^+(u)(s), provided that we suppose that both the input u and the output y are causal signals. Therefore we have

    T_{N,D}(s) = L_0^+(y)(s) / L_0^+(u)(s) = N(s)/D(s)
               = (c_{n−1} s^{n−1} + c_{n−2} s^{n−2} + · · · + c_1 s + c_0) / (s^n + p_{n−1} s^{n−1} + · · · + p_1 s + p_0).

Block diagrammatically, the situation is illustrated in Figure 3.7. We should be very clear

[Figure 3.7: û(s) → [ N(s)/D(s) ] → ŷ(s)]

Figure 3.7 The block diagram representation of (3.10)

on why the diagrams Figure 3.6 and Figure 3.7 are different: there are no state variables x in the differential equations (3.10). All we have is an input/output relation. This raises the question of whether there is a connection between the equations in the form (3.1) and those in the form (3.10). Following the proof of Theorem 2.37 we illustrated how one can take differential equations of the form (3.1) and produce an input/output differential equation like (3.10), provided (A, b) is controllable. To go from differential equations of the form (3.10) and produce differential equations of the form (3.1) is in some sense artificial, as we would have to "invent" states that are not present in (3.10). Indeed, there are infinitely many ways to introduce states into a given input/output relation. We shall look at the one that is related to Theorem 2.37. It turns out that the best way to think of making the connection from (3.10) to (3.1) is to use transfer functions.

3.20 Theorem Let (N, D) be a proper SISO linear system in input/output form. There exists a complete SISO linear control system Σ = (A, b, c^t, D) with A ∈ R^{n×n} so that T_Σ = T_{N,D}.

Proof Let us write

    D(s) = s^n + p_{n−1} s^{n−1} + · · · + p_1 s + p_0
    N(s) = c̃_n s^n + c̃_{n−1} s^{n−1} + c̃_{n−2} s^{n−2} + · · · + c̃_1 s + c̃_0.

We may write N(s)/D(s) as

    N(s)/D(s) = (c̃_n s^n + c̃_{n−1} s^{n−1} + c̃_{n−2} s^{n−2} + · · · + c̃_1 s + c̃_0)
                    / (s^n + p_{n−1} s^{n−1} + · · · + p_1 s + p_0)
              = ( c̃_n (s^n + p_{n−1} s^{n−1} + · · · + p_1 s + p_0)
                    + (c̃_{n−1} − c̃_n p_{n−1}) s^{n−1} + (c̃_{n−2} − c̃_n p_{n−2}) s^{n−2}
                    + · · · + (c̃_1 − c̃_n p_1) s + (c̃_0 − c̃_n p_0) )
                    / (s^n + p_{n−1} s^{n−1} + · · · + p_1 s + p_0)
              = c̃_n + (c_{n−1} s^{n−1} + c_{n−2} s^{n−2} + · · · + c_1 s + c_0)
                    / (s^n + p_{n−1} s^{n−1} + · · · + p_1 s + p_0),


where c_i = c̃_i − c̃_n p_i, i = 0, . . . , n − 1. Now define

    A = [  0     1     0    · · ·   0       ]
        [  0     0     1    · · ·   0       ]
        [  .     .     .     .      .       ]
        [  0     0     0    · · ·   1       ]
        [ −p_0  −p_1  −p_2  · · ·  −p_{n−1} ],

    b = [ 0 ]       c = [ c_0     ]
        [ 0 ]           [ c_1     ]
        [ . ]           [ .       ]
        [ 0 ]           [ c_{n−2} ]
        [ 1 ],          [ c_{n−1} ],       D = c̃_n.       (3.11)

By Exercise E2.11 we know that (A, b) is controllable. Since D and N are by definition coprime, by Theorem 3.5 (A, c) is observable. In the proof of Theorem 3.5 we showed that

    c^t adj(sI_n − A)b / det(sI_n − A)
        = (c_{n−1} s^{n−1} + c_{n−2} s^{n−2} + · · · + c_1 s + c_0) / (s^n + p_{n−1} s^{n−1} + · · · + p_1 s + p_0)

(see equation (3.4)), and from this follows our result.

 

We shall denote the SISO linear control system Σ of the theorem, i.e., the one given by (3.11), by Σ_{N,D} to make explicit that it comes from a SISO linear system in input/output form. We call Σ_{N,D} the canonical minimal realisation of the transfer function T_{N,D}. Note that condition (ii) of Definition 3.18 and Theorem 3.5 ensure that (A, c) is observable. This establishes a way of getting a linear system from one in input/output form. However, it is not the case that the linear system Σ_{N,D} should be thought of as representing the physical states of the system; rather, it represents only the input/output relation. There are consequences of this that you need to be aware of (see, for example, Exercise E3.20).
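As a sanity check, one can build the realisation (3.11) numerically and verify that c^t(sI_n − A)^{−1}b + D reproduces N(s)/D(s) at a sample point. The sketch below is ours, not from the book; the function name and the test polynomials are invented for illustration, and numpy is assumed:

```python
import numpy as np

def canonical_minimal_realisation(num, den):
    """(A, b, c, D) in controller canonical form, as in (3.11).

    num, den: coefficients in ascending powers of s; den must be monic
    of degree n, and deg(num) <= n (proper system).
    """
    n = len(den) - 1
    assert den[-1] == 1.0, "denominator must be monic"
    num = list(num) + [0.0] * (n + 1 - len(num))   # pad num to degree n
    ctilde_n = num[-1]                             # feedthrough D = c~_n
    c = np.array([num[i] - ctilde_n * den[i] for i in range(n)])
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)                     # ones on the superdiagonal
    A[-1, :] = [-p for p in den[:-1]]              # last row: -p_0, ..., -p_{n-1}
    b = np.zeros(n)
    b[-1] = 1.0
    return A, b, c, ctilde_n

# Example: N(s) = s + 1, D(s) = s^2 + 3s + 4 (coprime, strictly proper).
A, b, c, D = canonical_minimal_realisation([1.0, 1.0], [4.0, 3.0, 1.0])
s = 1.5
T = c @ np.linalg.solve(s * np.eye(2) - A, b) + D
print(np.isclose(T, (s + 1) / (s**2 + 3 * s + 4)))  # True
```

The same construction handles proper but not strictly proper systems: the leading coefficient c̃_n of N is peeled off into the feedthrough D, and only the strictly proper remainder goes into c.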

It is possible to represent the above relation with a block diagram with each of the states x_1, . . . , x_n appearing as a signal. Indeed, you should verify that the block diagram of Figure 3.8 provides a transfer function which is exactly

    L_0^+(y)(s) / L_0^+(u)(s) = (c_{n−1} s^{n−1} + · · · + c_1 s + c_0) / (s^n + p_{n−1} s^{n−1} + · · · + p_1 s + p_0) + D.

Note that this also provides us with a way of constructing a block diagram corresponding to a transfer function, even though the transfer function may have been obtained from a different block diagram. The block diagram of Figure 3.8 is particularly useful if you live in mediaeval times, and have access to an analogue computer. . .

3.21 Remark Note that we have provided a system in controller canonical form corresponding to a system in input/output form. Of course, it is also possible to give a system in observer canonical form. This is left as Exercise E3.19 for the reader.

Theorem 3.20 allows us to borrow some concepts that we have developed for linear systems of the type (3.1), but which are not obviously applicable to systems in the form (3.10). This can be a useful thing to do. For example, motivated by Theorem 3.15, our notion that a SISO linear system (N, D) in input/output form is minimum phase if N has no roots in C+, and nonminimum phase otherwise, makes some sense.

We can also use the correspondence of Theorem 3.20 to make a sensible notion of impulse response for SISO systems in input/output form. The problem with a direct definition is that if we take u(t) to be a limit of inputs from U as described in Theorem 2.34, it is not clear what we should take for u^(k)(t) for k ≥ 1. However, from the transfer function


 

 

 

 

 

 

[Figure 3.8: a chain of integrators 1/s produces the state signals x̂_n, . . . , x̂_1 from û; the gains −p_{n−1}, −p_{n−2}, . . . , −p_1, −p_0 feed the states back to the input of the chain, the gains c_{n−1}, c_{n−2}, . . . , c_1, c_0 tap the states into the output ŷ, and the feedthrough D adds directly from input to output]

Figure 3.8 A block diagram for the SISO linear system of Theorem 3.20

point of view, this is not a problem. To wit, if (N, D) is a strictly proper SISO linear system in input/output form, its impulse response is given by

    h_{N,D}(t) = 1(t) c^t e^{At} b,

where (A, b, c^t, 0_1) = Σ_{N,D}. As with SISO linear systems, we may define the causal impulse response h_{N,D}^+ : [0, ∞) → R and the anticausal impulse response h_{N,D}^− : (−∞, 0] → R. Also, as with SISO linear systems, it is the causal impulse response we will most often use, so we will frequently just write h_{N,D}, as we have already done, for h_{N,D}^+.
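For a concrete (invented) example, the formula h_{N,D}(t) = 1(t) c^t e^{At} b can be evaluated with a matrix exponential. Here we take N(s) = 1 and D(s) = s^2 + 3s + 2, for which partial fractions give h(t) = e^{−t} − e^{−2t} for t ≥ 0; scipy is assumed to be available:

```python
import numpy as np
from scipy.linalg import expm

# Canonical minimal realisation of N(s) = 1, D(s) = s^2 + 3s + 2 (our example).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # companion matrix of D
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])                  # c = (c_0, c_1) = (1, 0)

def h(t):
    """Causal impulse response h(t) = 1(t) c^t e^{At} b."""
    return c @ expm(A * t) @ b if t >= 0 else 0.0

# Partial fractions: 1/((s+1)(s+2)) -> h(t) = e^{-t} - e^{-2t} for t >= 0.
print(np.isclose(h(1.0), np.exp(-1.0) - np.exp(-2.0)))  # True
```

The 1(t) factor is realised by the `t >= 0` branch, so h vanishes identically for negative times, as a causal impulse response must.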

We note that it is not a simple matter to define the impulse response for a SISO linear system in input/output form that is proper but not strictly proper. The reason for this is that the impulse response is not realisable by a piecewise continuous input u(t). However, if one is willing to accept the notion of a “delta-function,” then one may form a suitable notion of impulse response. How this may be done without explicit recourse to delta-functions is outlined in Exercise E3.1.
