- Introduction to Adjustment Calculus (Third Corrected Edition)
- Introduction
- 2. Fundamentals of the Mathematical Theory of Probability
- 3.1.4 Variance of a Sample
- 3.2.4 Basic Postulate (Hypothesis) of Statistics, Testing
- 3.3.4 Covariance and Variance-Covariance Matrix
- 3.3.6 Mean and Variance-Covariance Matrix of a Multisample
- 4.2 Random (Accidental) Errors
- 4.10 Other Measures of Dispersion
- 5. Least-Squares Principle
- 5.2 The Sample Mean as "The Maximum Probability Estimator"
- 5.4 Least-Squares Principle for Random Multivariate
- 6.4.4 Variance-Covariance Matrix of the Mean of a Multisample
- 6.4.6 Parametric Adjustment
- 6.4.7 Variance-Covariance Matrix of the Parametric Adjustment Solution Vector, Variance Factor and Weight Coefficient Matrix
- 6.4.10 Conditional Adjustment
- Areas under the standard normal curve from 0 to t
6.4.6 Parametric Adjustment
In this section, we are going to deal with the adjustment of the linear model (6.67), i.e.

$$A X + C = L , \quad (n > u) , \qquad (6.69)$$

which, for the adjustment, will be reformulated as:

$$A \hat{X} - (L + V) = 0 ,$$

or

$$V = A \hat{X} - L \; {}^{*)} . \qquad (6.70)$$

Here $A$ is called the design matrix, $X$ is the vector of unknown parameters, $L$ is the vector of observations ($L = L^{*} - C$, where $L^{*}$ is the mean of the observed multisample), and $V$ is the vector of discrepancies, which is also unknown. The formulation (6.70) is known as a set of observation equations.

We wish to get such $X = \hat{X}$ that would minimize the quadratic form $V^{T} P V$, in which $P$ is the assumed weight matrix for the observations $L$ (see the previous section). This quadratic form, which is sometimes called the quadratic form of weighted discrepancies, can be rewritten using the observation equations (6.70) as
$$V^{T} P V = (A \hat{X} - L)^{T} P (A \hat{X} - L) = ((A \hat{X})^{T} - L^{T}) (P A \hat{X} - P L)$$
$$= \hat{X}^{T} A^{T} P A \hat{X} - L^{T} P A \hat{X} - \hat{X}^{T} A^{T} P L + L^{T} P L . \qquad (6.71)$$
From equation (6.66) we have $P = \kappa \, \Sigma_{L}^{-1}$, where $\kappa$ is a constant scalar and $\Sigma_{L}$ is the variance-covariance matrix of $L$. Since $\Sigma_{L}$ is symmetric, the weight matrix $P$ is symmetric as well and $P^{T} = P$. We can thus write

$$L^{T} P A \hat{X} = \hat{X}^{T} A^{T} P L \qquad (6.72)$$

since it is a scalar quantity.
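As a quick numerical sanity check of the identity (6.72), the following sketch (all dimensions and numbers are made up for illustration) verifies that the scalar $L^{T} P A \hat{X}$ equals its transpose $\hat{X}^{T} A^{T} P L$ whenever $P$ is symmetric:

```python
import numpy as np

# Check (6.72): for symmetric P, the scalar L^T P A x equals x^T A^T P L.
rng = np.random.default_rng(42)
n, u = 5, 2
A = rng.normal(size=(n, u))          # design matrix (n observations, u unknowns)
L = rng.normal(size=n)               # observation vector
M = rng.normal(size=(n, n))
P = M @ M.T + n * np.eye(n)          # symmetric positive-definite weight matrix
x = rng.normal(size=u)               # any parameter vector

lhs = L @ P @ A @ x                  # L^T P A x
rhs = x @ A.T @ P @ L                # x^T A^T P L
assert np.isclose(lhs, rhs)          # equal, since a scalar equals its transpose
```

The identity fails for a non-symmetric $P$, which is why the symmetry of the weight matrix is established first.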
Substituting (6.72) into (6.71) we get
________________________________________
*) If we have a non-linear model $L = F(X)$, it can be easily linearized by Taylor's series expansion, i.e.

$$L = F(X^{0}) + \left. \frac{\partial F}{\partial X} \right|_{X = X^{0}} (X - X^{0}) + \ldots ,$$

in which we neglect the higher order terms. Putting $\Delta X$ for $X - X^{0}$, $\Delta L$ for $L - F(X^{0})$, and $A$ (a matrix) for $\partial F / \partial X |_{X = X^{0}}$, we get

$$\Delta L = A \, \Delta X .$$

This is essentially the same form as equation (6.69). However, in this case we are solving for the corrections $\Delta X$ to the approximate value $X^{0}$ of the vector $X$, instead of solving for $X$ itself.
________________________________________
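The linearization described in the footnote above is usually iterated: each solution for $\Delta X$ improves $X^{0}$, and the expansion is repeated. A minimal sketch of this iteration follows; the distance-observation model $F$, the station coordinates, and all numbers are hypothetical, chosen only to illustrate the scheme:

```python
import numpy as np

# Hypothetical non-linear model: observed distances from an unknown
# 2-D point X to three known stations, F(X)_i = ||X - s_i||.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
X_true = np.array([3.0, 4.0])

def F(X):
    return np.linalg.norm(X - stations, axis=1)

def jacobian(X):
    # A = dF/dX evaluated at X; each row is the unit vector from a station to X.
    return (X - stations) / F(X)[:, None]

L_obs = F(X_true)                    # error-free "observations" for the demo

X0 = np.array([1.0, 1.0])            # approximate value X^0
for _ in range(10):
    A = jacobian(X0)                 # the A matrix of the footnote
    dL = L_obs - F(X0)               # Delta L = L - F(X^0)
    dX, *_ = np.linalg.lstsq(A, dL, rcond=None)   # solve Delta L = A Delta X
    X0 = X0 + dX                     # update the approximate value
```

After a few iterations `X0` agrees with the point that generated the observations, which is exactly the sense in which the linearized model "solves for corrections" rather than for $X$ directly.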
$$V^{T} P V = \hat{X}^{T} A^{T} P A \hat{X} - 2 \hat{X}^{T} A^{T} P L + L^{T} P L . \qquad (6.73)$$
The quadratic function (6.73), sometimes called the variations function, is to be minimized with respect to $\hat{X}$. This is accomplished by equating all the partial derivatives to zero, i.e.
$$\frac{\partial}{\partial \hat{X}_{i}} \, V^{T} P V = 0 , \quad i = 1, 2, \ldots, u , \qquad (6.74)$$

and we obtain, writing $\partial / \partial \hat{X}$ for the whole vector of partial derivatives $\partial / \partial \hat{X}_{i}$,

$$\frac{\partial}{\partial \hat{X}} \, V^{T} P V = 2 \hat{X}^{T} A^{T} P A - 2 L^{T} P A = 0 ,$$
which can be rewritten as:
$$\hat{X}^{T} A^{T} P A = L^{T} P A ,$$

or by taking the transpose of both sides we get:

$$(A^{T} P A) \hat{X} = A^{T} P L . \qquad (6.75)$$
This system of linear equations is called the system of normal equations which can be written, as often used in the literature, in the following abbreviated form:
$$N \hat{X} = U \qquad (6.76)$$
where $N = (A^{T} P A)$ is known as the matrix of coefficients of the normal equations, or simply the normal equation matrix, and $U = A^{T} P L$ is the vector of absolute terms of the normal equations.
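A small numerical sketch of forming and solving the normal equations (6.76) follows; the design matrix, observations, and weights are made-up values, not taken from the text:

```python
import numpy as np

# Hypothetical example: three observations of two unknowns (n = 3 > u = 2).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # design matrix
L = np.array([1.02, 2.01, 2.98])      # observation vector
P = np.diag([1.0, 1.0, 2.0])          # assumed weight matrix

N = A.T @ P @ A                       # normal equation matrix, N = A^T P A
U = A.T @ P @ L                       # vector of absolute terms, U = A^T P L
X_hat = np.linalg.solve(N, U)         # solution of N X^ = U

V = A @ X_hat - L                     # discrepancies, equation (6.70)
# At the minimum the gradient vanishes, i.e. A^T P V = N X^ - U = 0:
assert np.allclose(A.T @ P @ V, 0.0)
```

The final assertion is just the normal equations restated: $A^{T} P V = (A^{T} P A)\hat{X} - A^{T} P L = 0$, so the weighted discrepancies are "orthogonal" to the columns of the design matrix.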