INTRODUCTION TO ADJUSTMENT CALCULUS

The variance factor $k$ can be regarded as the variance of unit weight (see 6.4.3) and is accordingly usually denoted by either $S_0^2$ or $\sigma_0^2$ (in the case of postulated variances). This is again intuitively pleasing since it ties together formulae (6.66) and (6.65), where $k$ can also be equated to $S_0^2$. Analogically, we denote the estimate $\hat{k}$ by either $\hat{S}_0^2$ or $\hat{\sigma}_0^2$.

By adopting the notation $\hat{\sigma}_0^2$ for $\hat{k}$, and further by denoting the weight coefficient matrix of the estimated parameters $\hat{X}$, i.e. $N^{-1}$, by $Q$, the equations (6.90) and (6.91) become:

$$\hat{\sigma}_0^2 = \frac{V^T P V}{df}\ , \qquad (6.98)$$

$$\hat{\Sigma}_{\hat{X}} = \hat{\sigma}_0^2\, Q\ . \qquad (6.99)$$
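As a numerical illustration of equations (6.98) and (6.99), the following sketch runs a small parametric adjustment end to end. The design matrix A, observations L and weight matrix P below are invented for the illustration; they do not come from the text.

```python
import numpy as np

# Hypothetical linear parametric model L = A X + noise (all numbers invented).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])              # design matrix: n = 3 observations, u = 2 parameters
L = np.array([0.1, 1.1, 1.9])           # observed values
P = np.diag([1.0, 2.0, 1.0])            # weight matrix of the observations

N = A.T @ P @ A                         # normal-equation matrix
X_hat = np.linalg.solve(N, A.T @ P @ L) # least-squares estimate of the parameters
V = A @ X_hat - L                       # residual vector

df = A.shape[0] - A.shape[1]            # degrees of freedom, n - u
sigma0_sq = (V @ P @ V) / df            # equation (6.98): estimated variance factor
Q = np.linalg.inv(N)                    # weight coefficient matrix of the parameters
Sigma_X = sigma0_sq * Q                 # equation (6.99): variance-covariance of X-hat
```

Note that $Q$ depends only on the design and the weights, while the data enter $\hat{\Sigma}_{\hat{X}}$ solely through the scalar $\hat{\sigma}_0^2$.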

Example 6.18: Let us compute the estimated variance-covariance matrix $\hat{\Sigma}_{\hat{X}}$ of the adjusted parameters $\hat{X}$ in example 6.16. The $\hat{\Sigma}_{\hat{X}}$ matrix is computed from equation (6.99). First, from the above mentioned example we have:

$$V^T_{1,3} = \frac{H_J - H_G - \sum_i h_i}{\sum_i d_i}\,[d_1,\ d_2,\ d_3]\ ,$$

$$P_{3,3} = \mathrm{diag}\left[\frac{1}{d_1},\ \frac{1}{d_2},\ \frac{1}{d_3}\right]$$

and $df = n - u = 3 - 2 = 1$.

Hence,

$$V^T P V = \left(H_J - H_G - \sum_i h_i\right)^2 \Big/ \sum_i d_i$$

and

$$\hat{\sigma}_0^2 = \frac{V^T P V}{df} = \left(H_J - H_G - \sum_i h_i\right)^2 \Big/ \sum_i d_i\ .$$

As we have seen, $Q = N^{-1}$ is given by

$$Q_{2,2} = N^{-1} = \frac{1}{\sum_i d_i} \begin{bmatrix} d_1(d_2 + d_3) & d_1 d_3 \\ d_1 d_3 & d_3(d_1 + d_2) \end{bmatrix}\ .$$

We thus obtain finally

$$\hat{\Sigma}_{\hat{X}} = \hat{\sigma}_0^2\, Q = \frac{\left(H_J - H_G - \sum_i h_i\right)^2}{\left(\sum_i d_i\right)^2} \begin{bmatrix} d_1(d_2 + d_3) & d_1 d_3 \\ d_1 d_3 & d_3(d_1 + d_2) \end{bmatrix}\ .$$
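The closed-form result of example 6.18 can be checked numerically against the general formulae (6.98) and (6.99). The sketch below assumes the levelling model as reconstructed here: two unknown heights between fixed benchmarks G and J, three observed height differences $h_i$ over sections of length $d_i$, weights $1/d_i$; the numerical values of $d_i$, $h_i$, $H_G$, $H_J$ are invented.

```python
import numpy as np

# Invented data: section lengths d_i and observed height differences h_i.
d = np.array([2.0, 3.0, 5.0])
h = np.array([1.00, 2.00, 3.05])
H_G, H_J = 10.0, 16.0                   # fixed benchmark heights

# Parametric model with unknown heights X1, X2 of the intermediate points:
#   h1 + v1 = X1 - H_G,   h2 + v2 = X2 - X1,   h3 + v3 = H_J - X2.
A = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])
lobs = h + np.array([H_G, 0.0, -H_J])   # reduced observation vector
P = np.diag(1.0 / d)                    # weights inversely proportional to lengths

N = A.T @ P @ A
X_hat = np.linalg.solve(N, A.T @ P @ lobs)
V = A @ X_hat - lobs                    # residuals
w = H_J - H_G - h.sum()                 # misclosure H_J - H_G - sum(h_i)

sigma0_sq = (V @ P @ V) / (3 - 2)       # equation (6.98), df = n - u = 1
Sigma_general = sigma0_sq * np.linalg.inv(N)    # equation (6.99)

# Closed-form expression derived in the example:
Sigma_closed = (w**2 / d.sum()**2) * np.array(
    [[d[0] * (d[1] + d[2]), d[0] * d[2]],
     [d[0] * d[2],          d[2] * (d[0] + d[1])]])
```

The residuals come out proportional to the section lengths, $V = \frac{w}{\sum_i d_i}[d_1, d_2, d_3]$, exactly as the example states.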

Example 6.19: Let us compute the estimated variance-covariance matrix $\hat{\Sigma}_{\hat{X}}$ of the adjusted parameters $\hat{X}$ in example 6.17. We are going to use equations (6.98) and (6.99). First, from the above mentioned example we have

$$V^T_{1,6} = [0.00,\ 0.02,\ 0.02,\ -0.04,\ -0.04,\ 0.04]$$

in metres,

$$P_{6,6} = \mathrm{diag}\,[0.25,\ 0.5,\ 0.5,\ 0.25,\ 0.5,\ 0.25]$$

in m$^{-2}$, and $df = n - u = 6 - 3 = 3$.

Hence

$$V^T P V = 0.002 \quad \text{(unitless)},$$

and

$$\hat{\sigma}_0^2 = \frac{V^T P V}{df} = \frac{0.002}{3} = 0.00067 \quad \text{(unitless)}.$$

Also, from example 6.17, we have

$$Q_{3,3} = N^{-1} = \begin{bmatrix} 1.6 & 0.8 & 0.8 \\ 0.8 & 1.6 & 0.8 \\ 0.8 & 0.8 & 1.2 \end{bmatrix} \quad \text{in m}^2.$$

Finally,

$$\hat{\Sigma}_{\hat{X}} = \hat{\sigma}_0^2\, Q = 10^{-4} \begin{bmatrix} 10.67 & 5.33 & 5.33 \\ 5.33 & 10.67 & 5.33 \\ 5.33 & 5.33 & 8.0 \end{bmatrix} \quad \text{in m}^2,$$

or

$$\hat{\Sigma}_{\hat{X}} = \begin{bmatrix} 10.67 & 5.33 & 5.33 \\ 5.33 & 10.67 & 5.33 \\ 5.33 & 5.33 & 8.0 \end{bmatrix} \quad \text{in cm}^2.$$
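The arithmetic of example 6.19 can be verified directly from the numbers given in the text (the unit conversion uses $10^{-4}\,\text{m}^2 = 1\,\text{cm}^2$):

```python
import numpy as np

V = np.array([0.00, 0.02, 0.02, -0.04, -0.04, 0.04])  # residuals in metres
P = np.diag([0.25, 0.5, 0.5, 0.25, 0.5, 0.25])        # weights in 1/m^2
df = 6 - 3                                            # degrees of freedom

vtpv = V @ P @ V                                      # quadratic form V^T P V
sigma0_sq = vtpv / df                                 # equation (6.98)

Q = np.array([[1.6, 0.8, 0.8],
              [0.8, 1.6, 0.8],
              [0.8, 0.8, 1.2]])                       # in m^2, from example 6.17

Sigma_X = sigma0_sq * Q                               # equation (6.99), in m^2
Sigma_X_cm2 = Sigma_X * 1.0e4                         # convert m^2 to cm^2
```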

6.4.8 Some Properties of the Parametric Adjustment Solution Vector

It can be shown that the choice of the weight matrix P of the observations L (proportional to the inverse of the variance-covariance matrix $\Sigma_L$) and the choice of the least-squares method (minimization of $V^T P V$) to get the solution $X = \hat{X}$ ensures that the resulting estimate $\hat{X}$ has the smallest possible trace of its variance-covariance matrix $\Sigma_{\hat{X}}$. In other words: taking $P = \sigma_0^2\, \Sigma_L^{-1}$ and seeking $\min_{X \in R^u} V^T P V$ provides such a solution $\hat{X}$ that satisfies at the same time the condition

$$\min\ \mathrm{trace}\ \Sigma_{\hat{X}}\ . \qquad (6.100)$$

This is a result similar to the consequence of the least-squares principle applied to a random multivariate (section 5.4) and we are not going to prove it here.
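The minimum-trace property (6.100) can be illustrated numerically. For a fixed design matrix $A$ and observation covariance $\Sigma_L$, the estimate $\hat{X} = (A^T P A)^{-1} A^T P L$ with an arbitrary weight matrix $P$ has covariance $(A^T P A)^{-1} A^T P\, \Sigma_L\, P A (A^T P A)^{-1}$, and its trace is smallest when $P \propto \Sigma_L^{-1}$. All matrices below are invented for the illustration:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])                  # invented design matrix, n = 4, u = 2
Sigma_L = np.diag([1.0, 4.0, 0.25, 2.0])    # invented observation covariance matrix

def cov_of_estimate(P):
    """Covariance of X-hat = (A^T P A)^-1 A^T P L for a given weight matrix P."""
    M = np.linalg.inv(A.T @ P @ A) @ A.T @ P
    return M @ Sigma_L @ M.T

# Optimal weights (proportional to the inverse covariance) versus two arbitrary choices.
trace_opt   = np.trace(cov_of_estimate(np.linalg.inv(Sigma_L)))
trace_eq    = np.trace(cov_of_estimate(np.eye(4)))
trace_other = np.trace(cov_of_estimate(np.diag([2.0, 1.0, 1.0, 1.0])))
```

Any other positive-definite weight matrix tried in place of $\Sigma_L^{-1}$ yields a trace at least as large, in agreement with (6.100).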

Similarly, it can be shown that for an uncorrelated multisample of observations $L = (L_1, L_2, \ldots, L_n)$ which are assumed to be normally distributed with PDF given by:

$$\phi(L_1, L_2, \ldots, L_n) = \prod_{i=1}^{n} \frac{1}{S_i \sqrt{2\pi}} \exp\left[-\frac{(L_i - L_i^0)^2}{2 S_i^2}\right]\ , \qquad (6.101)$$

we get the most probable estimate of L if the condition $\min_{X \in R^u} V^T P V$