
Chapter 2

SYSTEMS OF LINEAR ALGEBRAIC EQUATIONS

Abstract In the second chapter, models of the passive-active-regression experiment are constructed. The exposition of experimental research within the framework of the confluent-influent-regression models is completed. This allows the investigator to understand the research situation better and to carry out parameter estimation correctly. A method for the effective correction of rounding errors is also constructed for the procedure of the numerical solution of SLAE and the numerical computation of parameter estimates. Regularized estimation methods for the case of incomplete-rank matrices are developed.

2. ANALYSIS OF ACTIVE AND COMPLICATED EXPERIMENTS

When experiments are carried out in practice, especially in physical and engineering investigations, the problem most often encountered is that of estimating the unknown parameters of a linear model with an imprecisely controlled matrix (the so-called predictor matrix). The researchers prescribe the exact predictor matrix through the equipment, but random errors accumulate inside the equipment. We describe this representation of the information as follows.

Assumption 2.0. Assume the following linear functional relationship (the functional equation of the first kind)


where $\Phi=[\phi_1,\dots,\phi_p]$ is a known matrix, $\theta=(\theta_1,\dots,\theta_p)^{T}$ is a vector of unknown parameters, and $\varphi=(\varphi_1,\dots,\varphi_n)^{T}$ is an unknown response vector. We assume the structural relationship corresponding to Eq. (2.0.0)

where $F=[f_1,\dots,f_p]$ is a random matrix realizing $\Phi$, $J=[j_1,\dots,j_p]$ is its error, and $\chi=(\chi_1,\dots,\chi_n)^{T}$ is an unknown response vector.

That is, the structural relationship is a functional relationship with the matrix containing an additive random error.

In this chapter the author offers a constructive solution of the more complicated models, that is, models containing an actively controlled part of the matrix as well as passively measured and theoretically known parts. To begin with, we consider the simplest case of the structural relationship (2.0.1).

2.1 Analysis of Active Experiment

Assumption 2.1. We shall use the linear stochastic model of active experiment of full rank, which is based on the structural relation (2.0.1)

Such an experiment is conducted actively, that is, the researcher prescribes the exact predictor matrix $\Phi$ through the equipment, but random errors J accumulate inside the equipment, and the events being studied occur with certain random (unknown) values $F=\Phi+J$ and unknown parameters $\theta$. Thus, errors appear first of all through an inaccuracy in the realization of the controlled predictor values. The researchers have at their disposal a random response vector, somewhat different from the one in regression analysis,

that is further corrupted by measurement errors e, in which the complete error vector $u=J\theta+e$ also depends on the unknown parameters (and on the exact predictor matrix $\Phi$). Such a model is also still referred to as the model generated by structural relations of the form $\chi=F\theta$. In [Kendall & Stuart 1967-69] one speaks of a structural relationship, instead of a functional relationship, because F is a random matrix on which the random response measurement error e is imposed. We shall assume that there are no correlations between the controlled and measured quantities (hence $EeJ=0$). Such an assumption can be justified by the fact that their errors arise at different times (the error of the realization of the matrix $\Phi$ occurs before the error of registration of the right-hand side $\chi$). They also arise in different spaces (they are attached to objects of different types), but it is probably impossible to exclude these correlations completely. The scheme of such an experiment is shown in Figure 2.1-1.

Figure 2.1-1. Scheme and description of the model of an active experiment. Here all initial prescribed predictors $\phi_{ij}$ are set nonrandom, and all randomness in the model appears due to the errors $j_{ij}$ of the instrument realization of the controlled predictor matrix and due to the additional errors e of the response measurements, which are superimposed on the right-hand side $\chi=\Phi\theta+J\theta$.

Thus, the experiment is realized under conditions that differ slightly from those desired theoretically and, accordingly, controlled. Hence it is not the traditional response $y=\Phi\theta+e$ that is recorded (as is assumed in regression analysis), since the full error $u=J\theta+e$ depends on the unknown parameter $\theta$ and on the errors of the matrix realization; that is, the response is correlated with the matrix errors. Neither the LSM nor the LDM can be applied to estimate the unknown parameter $\theta$. We shall, more correctly, call such a model influent (influent, that is, acting, influencing), that is, a model that studies the results of direct action rather than the results of passive contemplation. In such a model, in contrast to the confluent model of passive experiment, it is impossible to interchange any column of the matrix with the right-hand side.
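To make the influent data-generating mechanism just described concrete, the following sketch simulates an active experiment under assumed dimensions, parameter values and noise levels (all names and numbers are illustrative, not taken from the text): the researcher prescribes $\Phi$, the equipment realizes $F=\Phi+J$, and the recorded response is $q=F\theta+e=\Phi\theta+u$ with $u=J\theta+e$.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 100, 3                                # assumed numbers of observations and parameters
theta = np.array([1.0, -2.0, 0.5])           # assumed "true" parameters
sigma, rho = 0.1, 0.05                       # assumed response and realization error st. deviations

Phi = rng.uniform(-1.0, 1.0, size=(n, p))    # prescribed (exact) predictor matrix
J = rho * rng.standard_normal((n, p))        # realization errors accumulated inside the equipment
F = Phi + J                                  # actually realized, unobserved matrix
e = sigma * rng.standard_normal(n)           # response measurement errors

q = F @ theta + e                            # observed response: q = Phi@theta + u with u = J@theta + e

# The total error u depends on the unknown theta, which is why the ordinary
# LSM applied to (Phi, q) is questionable for this model.
u = q - Phi @ theta
print("empirical var(u):", round(u.var(), 5),
      " sigma^2 + rho^2*|theta|^2:", round(sigma**2 + rho**2 * np.sum(theta**2), 5))
```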

For the first time the problem of various linear models of a multiple analysis of passive and active experiment in the case of homoscedastic errors e and J was considered in [Berkson 1950], where the incorrect conclusion was drawn that the parameters of such a model can be estimated by the LSM. In [Durbin 1954] the behavior of the estimation errors was studied; Fuller [Fuller 1980] studied the properties of these estimators. Fedorov studied the model of a multiple analysis of active experiment in the linear [Fedorov 1968, 1971, 1978] and nonlinear cases [Ajvazjan, etc. 1985] with homoscedastic measurement errors, and the case of heteroscedastic errors is treated in [Mechenov 1988]. Statements of questions of similar type belong to many authors. Vuchkov has called such an experiment planned (the literature is given in [Vuchkov, etc. 1987]). Frequently the problem of passive experiment is replaced by the problem of active experiment [Islamov 1988, 1989]. Algebraically (in contrast to the confluence analysis of passive experiment) such a problem cannot be solved, because of the presence of correlations between the complete right-hand side errors and the realization errors of the predictor matrix. Therefore it is necessary to apply statistical methods for estimating the unknown parameters.

2.1.1 Maximum Likelihood Method of the Parameter Estimation of Linear Model of Active Experiment

In what follows, we shall assume that the errors are normally distributed and we shall study the estimation of the unknown parameters by the MLM. We write out the likelihood function:

$$L(\theta)=(2\pi)^{-n/2}\bigl(\det Euu^{T}\bigr)^{-1/2}\exp\Bigl(-\tfrac12(q-\Phi\theta)^{T}\bigl(Euu^{T}\bigr)^{-1}(q-\Phi\theta)\Bigr).\qquad(2.1.1)$$

In calculations we use its negative double logarithm, which is more convenient for the further reasoning,

$$(q-\Phi\theta)^{T}\bigl(Euu^{T}\bigr)^{-1}(q-\Phi\theta)+\ln\det Euu^{T}+n\ln 2\pi,\qquad(2.1.1')$$

in which we sometimes also drop the constant $n\ln 2\pi$.

Problem 2.1.1. Given a single realization $\bar q=\Phi\theta+\bar J\theta+\bar e$ of the random vector q, the matrix $\Phi$ (rank $\Phi=p$) and the corresponding covariance matrices $\Sigma$, P, estimate the unknown parameter $\theta$ of the linear influent stochastic model (2.1.0) of the active experiment by the MLM.

Theorem 2.1.1. The estimates of the parameters $\theta$ of the linear influent model (2.1.0) from Problem 2.1.1 can be computed by minimizing the functional

$$(\Phi\theta-\bar q)^{T}\Psi^{-1}(\Phi\theta-\bar q)+\ln\det\Psi,$$

where $\Psi=\{\sigma_{ik}+\theta^{T}P_{ik}\theta\}$, and $P_{ik}$ are the elements of the cell having label ik and dimension p×p in the covariance matrix P(pn×pn); or can be computed from the SNAE

where $\bar u=\Phi\theta-\bar q$ and

We shall call this system of nonlinear algebraic equations the normal equation of active experiment.

Proof. As the minimized functional is written out, it is enough to show that $Euu^{T}=\Psi$, to reject an insignificant constant and to differentiate the resulting expression. The calculation of $Euu^{T}$ is made similarly to item 1.4.1, and the differentiation of matrix expressions is considered explicitly in [Ermakov & Zhiglavskij 1987].
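For the reader's convenience, a short sketch of the computation $Euu^{T}=\Psi$ mentioned in the proof (it follows the same lines as item 1.4.1 and uses only the independence assumption $EeJ=0$ stated above; the cell convention for $P_{ik}$ is the one from Theorem 2.1.1):

$$u_{i}=\sum_{j=1}^{p}j_{ij}\theta_{j}+e_{i},\qquad
E\,u_{i}u_{k}=E\,e_{i}e_{k}+\sum_{j,l=1}^{p}\theta_{j}\theta_{l}\,E\,j_{ij}j_{kl}
=\sigma_{ik}+\theta^{T}P_{ik}\theta,$$

so that $Euu^{T}=\{\sigma_{ik}+\theta^{T}P_{ik}\theta\}=\Psi$, the cross terms vanishing because $EeJ=0$.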

2.1.1.1 Nonlinear Model.

Assumption 2.0a. We assume the following nonlinear functional equation of the first kind

where $\Phi$ is a known controlled matrix, $\theta$ are the unknown parameters, $\varphi$ is an unknown response vector, and the following structural relationship

corresponding to the relationship (2.0.0a). That is, the structural relationship is a nonlinear functional relationship with a matrix containing an additive random error.

Assumption 2.1a. We shall use the (linear in the measurement errors of the response vector) influent stochastic model of full rank of active experiment, which is based on the structural relation (2.0.1a):


If a linearization procedure is possible, the results of the previous paragraph are applicable; if it is not, the main problem becomes the construction of the mathematical expectation of the full errors of the model.

2.1.2 Homoscedastic Errors in the Matrix and in the Response

We consider the most popular case, when all elements of the controlled matrix are realized with the same variance $\rho^{2}$ and the response is measured with the same variance $\sigma^{2}$.

Assumption 2.1.2. We shall use the linear influent stochastic model of active experiment of full rank, which is based on the structural relation (2.0.1):

Thus, perturbations have a scalar covariance matrix both in the right-hand side and in the matrix of the predictor realizations.

Then Problem 2.1.1 takes the following form.

Problem 2.1.2. Given a single realization $\bar q=\Phi\theta+\bar J\theta+\bar e$ of the random variable q, the matrix $\Phi$ (rank $\Phi=p$) and the corresponding values $\sigma^{2}$ and $\rho^{2}$, estimate the unknown parameter $\theta$ of the linear influent stochastic model (2.1.2) of the active experiment by the MLM.

Corollary 2.1.2. The estimator of the parameter $\theta$ of the linear influent model (2.1.2) from Problem 2.1.2 can be computed by minimizing the functional

$$R^{2}(\theta)=\frac{|\Phi\theta-\bar q|^{2}}{\sigma^{2}+\rho^{2}|\theta|^{2}}+n\ln\bigl(\sigma^{2}+\rho^{2}|\theta|^{2}\bigr)$$

or can be computed from the SLAE

$$\Bigl[\Phi^{T}\Phi+\rho^{2}\Bigl(n-\frac{|\Phi\theta-\bar q|^{2}}{\sigma^{2}+\rho^{2}|\theta|^{2}}\Bigr)I\Bigr]\theta=\Phi^{T}\bar q.\qquad(*)$$

Proof. The value of $R^{2}$ is calculated in an obvious manner. Differentiating it with respect to the parameter vector and equating the result to zero, we find


Since the first factor does not vanish, equating the second factor to zero we find the normal equation
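A sketch of this differentiation, written with the homoscedastic functional in the form reconstructed above (so the explicit formulas are a reconstruction, not a quotation of the original displays):

$$\frac{\partial}{\partial\theta}\left[\frac{|\Phi\theta-\bar q|^{2}}{\sigma^{2}+\rho^{2}|\theta|^{2}}+n\ln\bigl(\sigma^{2}+\rho^{2}|\theta|^{2}\bigr)\right]
=\frac{2}{\sigma^{2}+\rho^{2}|\theta|^{2}}\left[\Phi^{T}(\Phi\theta-\bar q)+\rho^{2}\Bigl(n-\frac{|\Phi\theta-\bar q|^{2}}{\sigma^{2}+\rho^{2}|\theta|^{2}}\Bigr)\theta\right]=0,$$

and setting the bracketed second factor to zero gives exactly the normal equation (*).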

2.1.2.1 Variance Estimation

Calculating the derivative with respect to $\sigma^{2}$ of the negative double logarithm of the likelihood and equating it to zero, we obtain the equation

When the ratio of the variances $\kappa=\sigma^{2}/\rho^{2}$ is known, the asymptotically unbiased estimator of the variance $\sigma^{2}$ can be computed from the formula

Remark 2.1.2. The problem of deriving an unbiased estimator remains open. Is unbiasedness of the estimate achieved by replacing n with n-p? In fact, even for p=n the RSS is not equal to zero. Thus the mean of the expression in brackets in equation (*) is positive. In equation (*) the diagonal elements of the matrix $\Phi^{T}\Phi$ receive a positive shift (we note the similarity with the regularization method, where there is also a positive shift of the diagonal elements of the transformed matrix). As is known, simplifications are frequently deceptive. Therefore in [Ajvazjan, etc. 1985], in the chapter devoted to the nonlinear analysis of active experiment, the term with the logarithm was dropped in the construction of the recurrent sequence (probably the author considered it of little significance). But it is this term that ensures the positivity of the component on the diagonal of the matrix (without it the estimation of the parameters actually coincides with the estimation for a passive experiment). If in the experiment $\rho^{2}\|\theta\|^{2}\ll\sigma^{2}$, applying the LSM to these input data can give a reasonably good estimate.

2.1.2.2 Log-Likelihood Function

We study the behavior of the function

where $\theta_{\alpha}$ is a solution of the Euler equation

Since the function $S^{2}(\alpha)=|\Phi\theta_{\alpha}-\bar q|^{2}/(\sigma^{2}+\rho^{2}|\theta_{\alpha}|^{2})$ is non-decreasing for $\alpha\ge 0$ (being the ratio of the non-decreasing residue function $r^{2}(\alpha)=|\Phi\theta_{\alpha}-\bar q|^{2}$ and the decreasing function $\gamma(\alpha)=\sigma^{2}+\rho^{2}|\theta_{\alpha}|^{2}$, which is convex from above (see [Morozov 1987])), and the function $T^{2}(\alpha)=n\ln\bigl(\sigma^{2}+\rho^{2}|\theta_{\alpha}|^{2}\bigr)$ is non-increasing, their sum $R^{2}(\theta_{\alpha})$ has at least one point of minimum for $\alpha\ge 0$. From the necessary minimum condition

we have


The iterative process for computing the minimum can be started from the initial approximation $\alpha_{0}=-\rho^{2}(n-p/2)+\rho^{2}n=\rho^{2}p/2$. The behavior of the log-likelihood function is represented in Figure 2.1-2.
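A minimal numerical sketch of this iterative process, written for the normal equation (*) in the form reconstructed in Section 2.1.2 (the explicit expression used for α below is therefore an assumption, and all names are illustrative):

```python
import numpy as np

def active_mlm_estimate(Phi, q, sigma2, rho2, tol=1e-10, max_iter=200):
    """Iteratively solve [Phi^T Phi + alpha*I] theta = Phi^T q with
    alpha = rho2 * (n - |Phi@theta - q|^2 / (sigma2 + rho2*|theta|^2)),
    starting from the initial approximation alpha_0 = rho2 * p / 2."""
    n, p = Phi.shape
    A, b = Phi.T @ Phi, Phi.T @ q
    alpha = rho2 * p / 2.0                      # initial approximation from the text
    theta = np.linalg.solve(A + alpha * np.eye(p), b)
    for _ in range(max_iter):
        r2 = np.sum((Phi @ theta - q) ** 2)     # residue
        gamma = sigma2 + rho2 * np.sum(theta ** 2)
        alpha_new = rho2 * (n - r2 / gamma)     # assumed update for the diagonal shift
        theta = np.linalg.solve(A + alpha_new * np.eye(p), b)
        if abs(alpha_new - alpha) <= tol * max(1.0, abs(alpha)):
            break
        alpha = alpha_new
    return theta, alpha
```

If, as the initial approximation suggests, the weighted residue $r^{2}/\gamma$ is close to $n-p/2$ near the minimum, the computed shift $\alpha$ stays positive; this is the positive displacement of the diagonal of $\Phi^{T}\Phi$ discussed in Remark 2.1.2.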

Figure 2.1-2. Log-likelihood function.

2.1.2.3 Example of the Estimate Computation

Let us consider the same example as in item 1.4.3. From the same four observations (see Figure 1.4-4) we form the functional relation $\varphi=\psi\theta$.

Figure 2.1-3. Comparison of the LSM for a regression model (a) and the MLM for an active model (b) (in models without background).

These measurements were first approximated by the LSM, leading to the line $\hat\chi\approx 0.9\psi$ (on the left, in Figure 2.1-3a). In Figure 2.1-3b, the errors of realization are designated by the arrows going into the points, to reflect the active character of the experiment, whereas the arrows going out of the points represent the measurements. Then these measurements are approximated by the MLM assuming that the experiment is active, leading to the line $\hat\chi=\psi\hat\theta\approx 0.7\psi$ (on the right, in Figure 2.1-3b).

2.1.3 Effect of Rounding Errors for Systems of Linear Algebraic Equations

The model of the influent analysis of an active experiment is used to describe the rounding errors in the computer solution of SLAE. The estimation of the parameters (the solution of the SLAE) of this model by the MLM has the effect of positively displacing the diagonal elements of the Gauss transformation matrix.

Rounding errors in the computer solution of SLAE can play a rather significant role. It has long been known from practical observations of the computer solution of SLAE that a shift of the spectrum in the positive direction makes the solution process more stable: emergency stops of the computer disappear, and it is even possible to select a shift for which the solution improves a little. However, the origin of this phenomenon was not known. The author constructs the solution estimate taking into account the influence of rounding errors, and the resulting computational method leads precisely to a shift of the spectrum in the positive direction, which justifies the practical recommendations suggested and used earlier.

This item is a continuation of the ideas in [Mechenov 1988], where a procedure for compensating rounding errors when solving the normal equations of regression analysis by computer was analyzed [Mechenov 1995].

Suppose that the full-rank system of simultaneous, accurately given linear algebraic equations (2.0.0) is to be solved by computer.

We will compute a solution by any direct method [Faddeev & Faddeeva 1963]. To describe the rounding errors, we use the concept of equivalent perturbations [Wilkinson 1965, Voevodin 1969], that is, we assume that the initial values are perturbed by the value of the rounding errors, and that the subsequent computations are accurate. As we know [Voevodin 1969], the equivalent perturbations J of the matrix $\Phi$ and the equivalent perturbations e of the right-hand side $\varphi$ are practically random variables, additive, unbiased and uncorrelated with one another. Then, to account for the influence of rounding errors in the approximate computer solution of the SLAE, the model (2.0.0) is converted to the model [Voevodin 1969]

where the matrix J is expanded in a column vector along the rows, and $\Sigma=\{\sigma_{ij}\}$ and $P=\{p_{ij}\}$ are the covariance matrices of the errors. In turn, the direct methods of [Voevodin 1969, Faddeev & Faddeeva 1963] consist in multiplying the expanded matrix $[\Phi,\varphi]$ on the left by matrices which reduce $\Phi$ to a simple form (triangular or bidiagonal) for computing the solution of the SLAE, and then doing so. This procedure finally results in all rounding errors being accumulated on the right-hand side. Thus, model (2.1.3) can be transformed to the stochastic model

where $u=J\theta+e$ is the total error of the response $q=\Phi\theta+J\theta+e$ and $\Psi=\{\psi_{ij}\}$ is its covariance matrix. Thus we have the following.

Assertion 2.1.3. When the SLAE (2.0.0) is solved by computer, the rounding errors reduce it to the stochastic model (2.1.3').

Since model (2.1.3') is the model of an active experiment [Mechenov 1988], [Fedorov 1968], in which the error u depends on the unknown solution $\theta$, in this case the LSM cannot be used to estimate the solution of the SLAE (or, in statistical terminology, to find estimators of the parameters), and so the MLM is used. Following [Kim 1972], we assume that the error of the classical rounding methods is Gaussian. For random variables with a normal distribution, the likelihood function has the form of Eq. (2.1.1).

Problem 2.1.3. Having (but not knowing exactly) one realization $\bar q$ of the random variable q, the matrix $\Phi$ (rank $\Phi=n$) and the covariance matrices $\Sigma$ and P, it is required to estimate the unknown parameters (to estimate the solution of the SLAE) $\theta$ of the linear stochastic model (2.1.3') so that the likelihood function $L(\theta)$ in Eq. (2.1.1) is a maximum.

By Theorem 2.1.1, the estimators of the parameters (of the solution of the SLAE) of model (2.1.3') minimize the negative double logarithm of the likelihood function (2.1.1').

According to [Voevodin 1969], "independence of rounding errors in aggregate cannot be assumed without proof", and Theorem 2.1.3 permits a full analysis of their influence on the solution of the SLAE (2.0.0). However [Voevodin 1969], "all errors arising when a matrix is decomposed into factors by elimination methods are asymptotically independent of one another". We will thus assume that the covariance matrices $\Sigma$ and P are diagonal. The covariance matrices of rounding errors for iterative methods have been written out in [Kim 1972], where the rounding errors have been shown to be homoscedastic (that is, to have equal variance). The same will be assumed of the direct methods. There are majorant estimates [Voevodin 1969] of the variances of the errors of the matrix, $\rho^{2}$, and of the right-hand side, $\sigma^{2}$, for the direct methods which, in floating-point computations, satisfy the relations


where t is the number of digits in the binary representation of the mantissa, and $f_{\Phi}(n)$, $f_{\varphi}(n)$ are functions which depend on the method used to compute the solution of the SLAE and, in the worst case, do not grow faster than $n^{2}$ (see [Voevodin 1969, Kim 1972, Voevodin 1969a]). However, when these functions are used for computing the inverse matrix, their order of growth will not exceed $n^{3}$, and the norm of the matrix can be replaced by its condition number [Voevodin 1969a]. Thus Eq. (2.1.1') can be written in the form

Let

Differentiating Eq. (2.1.2') with respect to $\theta$, to compute the estimator $\hat\theta$ of the SLAE we obtain the Euler equation

$$[\Phi^{T}\Phi+\alpha I]\hat\theta=\Phi^{T}\bar q,\qquad(*)$$

where

$$\alpha=\rho^{2}\Bigl(n-\frac{|\Phi\hat\theta-\bar q|^{2}}{\sigma^{2}+\rho^{2}|\hat\theta|^{2}}\Bigr)$$

has a non-linear dependence on the required estimator $\hat\theta$.

We will consider the Euler equation as an equation in $\theta_{\alpha}$ for arbitrary $\alpha\ge 0$. Since the function $S^{2}(\alpha)=|\Phi\theta_{\alpha}-\bar q|^{2}/(\sigma^{2}+\rho^{2}|\theta_{\alpha}|^{2})$ is non-decreasing for $\alpha\ge 0$ (being the ratio of the non-decreasing function $r^{2}(\alpha)=|\Phi\theta_{\alpha}-\bar q|^{2}$ and the decreasing function $\gamma(\alpha)=\sigma^{2}+\rho^{2}|\theta_{\alpha}|^{2}$, which is convex from above [Morozov 1987]), and the function $T^{2}(\alpha)=n\ln\bigl(\sigma^{2}+\rho^{2}|\theta_{\alpha}|^{2}\bigr)$ is non-increasing, their sum $R^{2}(\theta_{\alpha})$ has at least one point of minimum for $\alpha\ge 0$.

We thus have the following corollary.

Corollary 2.1.3. For model (2.1.3') with asymptotically independent homoscedastic errors, the parameter estimator (the estimate of the solution of the SLAE (2.0.0)) gives a minimum of the functional (2.1.2'), or else can be computed from the Euler equation with (*).

The value of $\alpha$ that gives a minimum of the function $R^{2}(\theta_{\alpha})$ is computed with an iterative process constructed in one of the ways described in item 3.5.5 [Mechenov 1988, Mechenov 1977]. It is natural to use the algorithm of [Voevodin 1969a] for the computer solution of the Euler equation. The estimate given above for the errors of a solution is applicable here too.

Thus, the computation of the MLM estimate for such a model entails an increase of the diagonal elements of the Gauss transformation matrix $\Phi^{T}\Phi$ in the Euler equation by a positive amount, which improves its conditioning beforehand. This effect is especially important for SLAE with an ill-conditioned matrix. Thus the SLAE (2.0.0) can be solved (even when it has an ill-conditioned matrix) without the need for additional information on the initial exact problem.
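As a small illustration of this conditioning effect on an assumed ill-conditioned example (a Hilbert matrix; the equivalent-perturbation variance is simply taken as a given number here rather than derived from the majorant estimates):

```python
import numpy as np

n = 8
Phi = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix (assumed example)

rho2 = 1e-12                     # assumed equivalent-perturbation variance of the matrix elements
alpha = rho2 * n / 2             # initial shift alpha_0 = rho^2 * p / 2 with p = n

A = Phi.T @ Phi                  # Gauss transformation matrix
print("cond(Phi^T Phi)           = %.2e" % np.linalg.cond(A))
print("cond(Phi^T Phi + alpha*I) = %.2e" % np.linalg.cond(A + alpha * np.eye(n)))
```

The positive shift bounds the smallest eigenvalue of the Gauss transformation matrix from below by $\alpha$, which is what improves the conditioning beforehand.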

Thus, a method has been constructed that allows for the equivalent perturbations of the input data caused by rounding errors in the computer solution of the SLAE (2.0.0).

Remark 2.1.3. 1) Since the estimates $\rho^{2}$ and $\sigma^{2}$ are not error-free, the following method can be used in practice to monitor the solution. Since the residue functional is not always monotone with respect to $\alpha$ [Mechenov 1977] when there are rounding errors, its local minimum, corresponding in many cases to the most accurate solution, can be computed. It is natural then first to compute the solution, then its residue and its norm, rather than the residue as a solution of the transformed equation $r_{\alpha}=\bar q-\Phi\theta_{\alpha}$; the norm of the latter is less subject to rounding errors and more often has a local minimum for a different value of $\alpha$ (or else does not have one at all) [Mechenov 1977]. The non-monotonicity with respect to rounding errors of the function $\eta(\alpha)=|\Phi\theta_{\alpha}-\bar q|^{2}+\alpha|\theta_{\alpha}|^{2}$ could be used in the same way [Mechenov 1977].

2) The improved stability that is obtained by using a regularization algorithm [Tikhonov 1965] in the computer solution of ill-conditioned SLAE, which is basically intended to compute a stable approximation to a normal solution of an incomplete-rank SLAE, also results from the computation of the solution from an equation of Euler-equation form, the value of $\alpha$ in which is sought from the residue principle [Morozov 1987].

3) It is quite obvious from the above what the effect of rounding errors will be when overdetermined SLAE are solved by the LSM, when SLAE with an inaccurately measured matrix and right-hand side are solved by the LDM or the LSDM [Mechenov 1991], or when SLAE with an inaccurately realized matrix are solved by the MLM [Fedorov 1968]. Since the general scheme of constructing the solution for these models is still similar to the Gauss transformation or to the solution of the Euler equation, the given approach is suitable for those problems also.

2.1.4 Incomplete-Rank Model of Active Experiment.

An incomplete-rank model is obviously "nonsense" for the representation of the predictor matrix $\Phi$, since the experiment is planned beforehand and it is possible to foresee all the troubles; from the theoretical point of view, however, it is interesting. Moreover, it deserves study in view of possible applications to linear integral equations of the first kind and to operator equations of the first kind.

Assumption 2.1.4. We use the linear influent stochastic model (2.1.0) of active experiment of incomplete rank.

Definition 2.1.4. The vector $\theta_{0}$ is the normal solution of the SLAE (2.0.0) with the incomplete-rank matrix $\Phi$ if

$$\theta_{0}=\operatorname*{Arg\,min}_{\theta:\;\Phi\theta=\varphi}|\theta|^{2},$$

where the matrix $\Phi$ and the vector $\varphi$ are known exactly.

Definition 2.1.5. Let

$$\Theta_{m}=\Bigl\{\theta:\;(\Phi\theta-\bar q)^{T}\Psi^{-1}(\Phi\theta-\bar q)+\ln\det\Psi+n\ln 2\pi\le m^{2}\Bigr\}$$

be the set of admissible pseudosolutions, where

$$m^{2}=E\,(\Phi\theta-q)^{T}\Psi^{-1}(\Phi\theta-q)+\ln\det\Psi+n\ln 2\pi.$$

Problem 2.1.4. Given a single realization vector $\bar q=\Phi\theta+\bar J\theta+\bar e$ of the random vector q, the exact matrices $\Phi$, $\Sigma$, P and the fact that the matrix $\Phi$ of the SLAE (2.0.0) has incomplete rank, estimate the unknown values of the approximation vector $\hat t_{m}$ to the normal vector $\theta_{0}$ so that the sum of squares of this approximation is minimal on the set of admissible pseudosolutions

Theorem 2.1.4. The solution of Problem 2.1.4 exists, is unique and satisfies the relation

$$\hat t_{m}=\operatorname*{Arg\,min}_{\theta:\;(\Phi\theta-\bar q)^{T}\Psi^{-1}(\Phi\theta-\bar q)+\ln\det\Psi+n\ln 2\pi=m^{2}}|\theta|^{2}.$$

The proof is similar to item 2.1.1 and to item 1.3.
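For comparison, the normal (minimum-norm) solution of Definition 2.1.4 for an exactly known rank-deficient system can be computed with the Moore-Penrose pseudoinverse; this is standard linear algebra rather than the estimation procedure of Theorem 2.1.4, and the data below are assumed purely for illustration:

```python
import numpy as np

# Rank-deficient example: the third column equals the sum of the first two.
Phi = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0],
                [1.0, 1.0, 2.0],
                [2.0, 1.0, 3.0]])
phi = Phi @ np.array([1.0, 2.0, 0.0])        # consistent right-hand side

theta0 = np.linalg.pinv(Phi) @ phi           # Arg min |theta|^2 over Phi @ theta = phi
print("normal solution:", theta0)
print("residual norm  :", np.linalg.norm(Phi @ theta0 - phi))
```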

2.1.5 Regularization in the Case of Error Homoscedasticity

We consider the most popular case, when the matrix errors and the response errors are each homoscedastic with known variances.

Assumption 2.1.5. We use the linear influent stochastic model (2.1.2) of active experiment of incomplete rank.

Thus, the perturbations have a scalar covariance matrix both in the right-hand side error and in the error matrix J. Then the Problem 2.1.4 has the following form.

Problem 2.1.5. Given a single realization vector $\bar q=\Phi\theta+\bar J\theta+\bar e$ of the random vector q, the exact matrix $\Phi$ of the SLAE (2.0.0) of incomplete rank and the values $\sigma$ and $\rho$, estimate the unknown values of the approximation vector $\hat t_{m}$ to the normal vector $\theta_{0}$ so that the sum of squares of this approximation is minimal on the set of admissible pseudosolutions.

Corollary 2.1.5. The stable approximation to the normal vector of parameters exists, is unique, and satisfies the relation

$$\hat t_{m}=\operatorname*{Arg\,min}_{\theta:\;\frac{|\Phi\theta-\bar q|^{2}}{\sigma^{2}+\rho^{2}|\theta|^{2}}+n\ln\left(\sigma^{2}+\rho^{2}|\theta|^{2}\right)+n\ln 2\pi=m^{2}}|\theta|^{2}$$

and the SLAE


where $\alpha=1/\hat\lambda^{2}\ge 0$ is the numerical Lagrange multiplier.

Proof. For the proof it is enough to use items 2.1.2 and 2.1.4.

Remark 2.1.5. The result will not change asymptotically if in the last equation

we proceed to the relation
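A numerical sketch in the spirit of this remark: here the full residue principle is replaced by the simpler asymptotic relation $|\Phi\theta_{\alpha}-\bar q|^{2}/(\sigma^{2}+\rho^{2}|\theta_{\alpha}|^{2})=n$, which is an assumed simplification (one possible reading of Remark 2.1.5), and the grid search over $\alpha$ is purely illustrative:

```python
import numpy as np

def regularized_active_estimate(Phi, q, sigma2, rho2, alphas=None):
    """Stable approximation to the normal vector: solve
    [Phi^T Phi + alpha*I] theta = Phi^T q over a grid of alpha >= 0 and take
    the first alpha for which the weighted residue
    |Phi@theta - q|^2 / (sigma2 + rho2*|theta|^2) reaches n (assumed rule)."""
    n, p = Phi.shape
    A, b = Phi.T @ Phi, Phi.T @ q
    if alphas is None:
        alphas = np.logspace(-12, 2, 200)
    theta = None
    for alpha in alphas:                       # the weighted residue grows with alpha
        theta = np.linalg.solve(A + alpha * np.eye(p), b)
        weighted = np.sum((Phi @ theta - q) ** 2) / (sigma2 + rho2 * np.sum(theta ** 2))
        if weighted >= n:
            return theta, alpha
    return theta, alphas[-1]                   # fall back to the largest alpha on the grid
```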

2.1.6 Mixed Models of Active Experiments

When experiments are carried out in practice, especially in physical investigations, the problem most often encountered is that of estimating the unknown parameters of a linear model with an imprecisely controlled predictor matrix together with a priori information on the unknown parameters. To begin with, we consider the simple case.

Assumption 2.1.6. Given the functional relation (2.1.0) with the a priori information

We shall consider that the supplement condition is satisfied for the mixed model (2.1.6), that is, the complete matrix of this functional relationship has full rank.

We assume further that all errors obey the normal law, and we consider estimation of the required parameters by the MLM. We write out the likelihood function


$$\times\exp\Bigl(-\tfrac12(q-\Phi\theta)^{T}\bigl(Euu^{T}\bigr)^{-1}(q-\Phi\theta)-\tfrac12(t-\theta)^{T}\bigl(Ecc^{T}\bigr)^{-1}(t-\theta)\Bigr),$$

and we calculate its negative double logarithm, which is more convenient for the further reasoning,

$$(q-\Phi\theta)^{T}\bigl(Euu^{T}\bigr)^{-1}(q-\Phi\theta)+(t-\theta)^{T}\bigl(Ecc^{T}\bigr)^{-1}(t-\theta)+\ln\bigl(\det Euu^{T}\bigr)+\ln\bigl(\det Ecc^{T}\bigr)+(n+p)\ln 2\pi.$$

Problem 2.1.6. Given the realizations $\bar q=\Phi\theta+\bar J\theta+\bar e$ and $\bar t$ of the random vectors q and t, the matrix $\Phi$ (rank $\Phi=r$) and the corresponding covariance matrices $\Sigma$, P, K, estimate the unknown parameter $\theta$ of the mixed linear influent stochastic model (2.1.6) by the MLM.

Theorem 2.1.6. The estimates of the parameters $\theta$ of the mixed linear influent model (2.1.6) from Problem 2.1.6 minimize the functional

Proof. As the minimized functional is written out, it is enough to show that $Euu^{T}=\Psi$, which was already done earlier.

2.1.6.1 Homoscedastic Errors in the Matrix and in the Response.

We consider the most popular case, when all errors of the predictor matrix are realized with the same variance $\rho^{2}$ and the response is measured with the same variance $\sigma^{2}$.

Assumption 2.1.6a. We use the following mixed linear influent stochastic model of active experiment with the a priori information of a functional relation (2.1.0) of incomplete rank


Thus, the perturbations have a scalar covariance matrix in the right-hand side errors, in the error matrix J and in the a priori information. Then Problem 2.1.6 takes the following form.

Problem 2.1.6a. Given a single realization $\bar q=\Phi\theta+\bar J\theta+\bar e$ and $\bar t$ of the random variables q and t, the matrix $\Phi$ (rank $\Phi=r$) and the values $\sigma$, $\rho$ and $\nu$, estimate the unknown parameter $\theta$ of the linear influent stochastic model (2.1.6a) by the MLM.

Corollary 2.1.6a. The estimates of the parameters $\theta$ of the mixed linear influent model (2.1.6a) from Problem 2.1.6a minimize the following quadratic form
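The displayed quadratic form is not legible in this copy; by analogy with Corollary 2.1.2, and assuming that the a priori information enters as an additional measurement $\bar t=\theta+c$ with $Ecc^{T}=\nu^{2}I$, it should, up to additive constants, presumably have the form

$$\frac{|\Phi\theta-\bar q|^{2}}{\sigma^{2}+\rho^{2}|\theta|^{2}}+n\ln\bigl(\sigma^{2}+\rho^{2}|\theta|^{2}\bigr)+\frac{|\theta-\bar t\,|^{2}}{\nu^{2}}$$

(this explicit form is an assumption, not a quotation).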

Remark 2.1.6. 1) One of the variances can be estimated by taking advantage of the mean of the RSS.

2) The first approach applied to take the a priori information into account was the Bayesian approach [Zhukovskij & Morozov 1972], [Zhukovskij 1977], [Murav'eva 1979]. But the application of the Bayesian approach to the active experiment model leads to complicated distributions for the response, which is most likely not vital. Indeed, if we replace the parameter vector $\theta$ by the random vector t in the active experiment model 2.1.1, there at once arises the problem of the distribution of the vector Jt. If J and t, say, are normal, the error Jt+e has a complicated distribution, whose correspondence to the distribution of the real errors is rather difficult to check. Besides, the application of the MLM becomes rather complicated. Therefore the approach in which the parameter vector is random leads to complicated theoretical research.

We consider now the models of active experiments that also contain regression parameters.


2.2 Analysis of Active-Regression Experiment

In engineering and especially in physical research, one frequently meets the problem of estimating the unknown parameters of a linear model of active experiment with an imprecisely controlled predictor matrix and a regressor matrix known purely theoretically. For example, this part of the matrix may have the form of a background vector, which reflects the assumption of a nonzero constant expectation of the response vector errors.

Assumption 2.2. We use the following linear functional equation (relationship)

where $\Phi=[\phi_1,\dots,\phi_p]$ is a known, precisely prescribed matrix, $H=[\eta_1,\dots,\eta_k]$ is a precisely known theoretical matrix, $\theta=(\theta_1,\dots,\theta_p)^{T}$ and $\delta=(\delta_1,\dots,\delta_k)^{T}$ are unknown parameters, and $\varphi=(\varphi_1,\dots,\varphi_n)^{T}$ is an unknown response vector. We also introduce the concept of structural relationships corresponding to Eq. (2.0.2)

where $\chi=(\chi_1,\dots,\chi_n)^{T}$ is an unknown response vector, $F=[f_1,\dots,f_p]$ is a random matrix realizing $\Phi$, and J is its error, obeying a normal law.

That is, the structural relationship is a functional relationship in which part of the initial matrix contains an additive random error.

2.2.1 Maximum-Likelihood Method of the Parameter Estimation of Linear Model of Active-Regression Experiment

Assumption 2.2.1. We shall use the following linear influent-regression stochastic model of an active experiment of full rank, which uses Eq. (2.0.2) and Eq. (2.2.0)

in which the errors are subject to a normal law.

We consider that the experiment is conducted actively, that is, the researchers prescribe the exact predictor matrix $\Phi$ through the equipment, but random errors J accumulate inside the equipment, and the events being studied occur with certain random (unknown) values $F=\Phi+J$ and unknown parameters $\theta$. Besides this, there are the theoretically known regressors H, whose influence $\delta$ it is also necessary to estimate. Thus, errors are introduced beforehand through an inaccurate realization of the prescribed values of the predictors. The researchers have at their disposal a random response vector

that is further corrupted by measurement errors e, in which the complete vector of errors $u=J\theta+e$ also depends on the unknown parameters. Such a model can also be regarded as generated by the structural relationship (2.2.0), $\chi=F\theta+H\delta$, on which the measurement error e of the response $\chi$ is imposed. The scheme of such an experiment is shown in Figure 2.2-1.

Figure 2.2-1. Scheme and description of an active-regression experiment.

Remark 2.2.1. Vuchkov and Boyadjieva undertook an attempt to treat the simplified model of the form $q=F\theta+\delta+e$, $F=\Phi+J$, in [Vuchkov & Boyadjieva 1981].

Since the error $u=J\theta+e$ depends on the unknown parameters $\theta$, neither the LSM nor the LDM can be applied to estimate the unknown parameters. In what follows we shall assume that the errors are normally distributed and study the estimation of the unknown parameters by the MLM. We write out the likelihood function

In our computation we shall use the logarithm of this function multiplied by -2.


Problem 2.2.1. Knowing one realization $\bar q=\Phi\theta+H\delta+\bar J\theta+\bar e$ of the random variable q, the matrices $\Phi$ (rank $\Phi=p$) and H (rank H=k) and the corresponding covariance matrices $\Sigma$, P, estimate the unknown parameters $\theta$, $\delta$ of the linear influent-regression stochastic model (2.2.1) by the MLM.

Theorem 2.2.1. The estimates $\hat t,\hat d$ of the parameters $\theta$, $\delta$ of the linear influent-regression model (2.2.1) from Problem 2.2.1 can be computed by minimizing the functional

$$(\Phi\theta+H\delta-\bar q)^{T}\Psi^{-1}(\Phi\theta+H\delta-\bar q)+\ln\det\Psi,$$

where $\Psi=\{\sigma_{ik}+\theta^{T}P_{ik}\theta\}$, and $P_{ik}$ are the elements of the cell having label ik and dimension p×p in the covariance matrix P(pn×pn).

The proof of this result is carried out similarly to item 2.1.1.

2.2.2 Homoscedastic Model of Active-Regression Experiment

We consider in more detail the most popular case, when the prescribed matrix is realized with errors of the same type with the same variance, and the response is measured with errors of the same variance.

Assumption 2.2.2. We use the following linear influent-regression stochastic model of active experiment of full rank, which uses the structural relationship (2.2.0)

Then Problem 2.2.1 takes the following form.

Problem 2.2.2. Knowing one realization $\bar q=\Phi\theta+H\delta+\bar J\theta+\bar e$ of the random variable q, the matrices $\Phi$ (rank $\Phi=p$) and H (rank H=k) and the corresponding values $\sigma^{2}$ and $\rho^{2}$, estimate the unknown parameters $\theta$, $\delta$ of the linear influent-regression stochastic model (2.2.2) by the MLM.

Corollary 2.2.2. The estimates $\hat t$, $\hat d$ of the parameters $\theta$, $\delta$ of the linear influent-regression model (2.2.2) from Problem 2.2.2 can be computed by minimizing the functional


or can be computed from the SLAE

We call this system of nonlinear algebraic equations the normal equation of the active-regression experiment.
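A numerical sketch of the joint estimation of $(\theta,\delta)$, assuming (by analogy with Corollary 2.1.2) that the homoscedastic influent-regression functional is $|\Phi\theta+H\delta-\bar q|^{2}/(\sigma^{2}+\rho^{2}|\theta|^{2})+n\ln(\sigma^{2}+\rho^{2}|\theta|^{2})$, where only $\theta$ enters the error variance because H is known exactly; the use of SciPy's general-purpose minimizer is an illustrative choice, not the method of the text:

```python
import numpy as np
from scipy.optimize import minimize

def active_regression_mlm(Phi, H, q, sigma2, rho2):
    """Minimize the assumed homoscedastic functional of the
    active-regression experiment over the stacked vector (theta, delta)."""
    n, p = Phi.shape
    k = H.shape[1]

    def objective(x):
        theta, delta = x[:p], x[p:]
        gamma = sigma2 + rho2 * np.sum(theta ** 2)
        resid = Phi @ theta + H @ delta - q
        return np.sum(resid ** 2) / gamma + n * np.log(gamma)

    x0, *_ = np.linalg.lstsq(np.hstack([Phi, H]), q, rcond=None)  # LSM start
    res = minimize(objective, x0, method="BFGS")
    return res.x[:p], res.x[p:]
```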

2.2.2.1 Numerical Example

Let us consider the same example. From the same four observations (see Figure 1.1-3a) we form the functional relationship $\varphi=\psi\theta+\delta$. These measurements are approximated by the MLM assuming that the experiment is active, leading to the line $\hat\chi=1.5$ (see Figure 2.2-2b). On the same four observations we construct an estimate for the parameters of a functional relationship of the form $\varphi=\psi\theta+\delta$. In Figure 2.2-2a the four measurements are approximated by the LSM, leading in the result to the constant $\hat\chi=1.5$.

Figure 2.2-2. Comparison of the LSM for simple regression model with a background (a) and the MLM for model of simple active-regression experiment (b).

2.2.3 Degenerate Model of Active-Regression Experiment

A degenerate model is obviously "nonsense" for the representation of the predictor matrix and the theoretical matrix H, that is, when the experiment is planned beforehand and it is possible to foresee all the troubles. But for applications concerning linear integral equations of the first kind such a supposition is normal, and therefore it deserves study.

Assumption 2.2.3. We use the linear influent-regression stochastic model (2.2.1) of active experiment of incomplete rank, which uses Eq. (2.2.0).

Definition 2.2.3. The vector $(\theta_{0}^{T},\delta_{0}^{T})^{T}$ is called the normal solution (normal vector) of the SLAE (2.0.2) with the incomplete-rank matrix $[\Phi,H]$ if

$$(\theta_{0}^{T},\delta_{0}^{T})^{T}=\operatorname*{Arg\,min}_{\theta,\delta:\;\Phi\theta+H\delta=\varphi}\;|\theta|^{2}+|\delta|^{2},$$

where the matrix $[\Phi,H]$ and the vector $\varphi$ are known exactly.

Definition 2.2.4. Let

$$\Bigl\{\theta,\delta:\;(\Phi\theta+H\delta-\bar q)^{T}\bigl(Euu^{T}\bigr)^{-1}(\Phi\theta+H\delta-\bar q)+\ln\det Euu^{T}\le m^{2}\Bigr\}$$

be the set of admissible estimates (pseudosolutions), where

$$m^{2}=E\,(\Phi\theta+H\delta-\bar q)^{T}\bigl(Euu^{T}\bigr)^{-1}(\Phi\theta+H\delta-\bar q)+\ln\det Euu^{T}.$$

Problem 2.2.3. Knowing the vector $\bar q=\Phi\theta+H\delta+\bar J\theta+\bar e$ as a realization of q, the exact matrices $\Phi$, $\Sigma$, P, and the fact that the matrix $[\Phi,H]$ of the SLAE (2.0.2) has incomplete rank, estimate the unknown values of the approximation vector to the normal vector so that the sum of squares of this approximation is minimal on the set of admissible estimates

$$(\hat t_{m}^{T},\hat d_{m}^{T})^{T}=\operatorname*{Arg\,min}\;|\theta|^{2}+|\delta|^{2}.$$

Theorem 2.2.3. The solution of Problem 2.2.3 exists, is unique and satisfies the relation

$$(\hat t_{m}^{T},\hat d_{m}^{T})^{T}=\operatorname*{Arg\,min}_{\theta,\delta:\;(\Phi\theta+H\delta-\bar q)^{T}\Psi^{-1}(\Phi\theta+H\delta-\bar q)+\ln\det\Psi+n\ln 2\pi=m^{2}}\;|\theta|^{2}+|\delta|^{2}.$$

The proof of this theorem is similar to item 2.1.4.


2.2.4 Regularization in the Case of Homoscedasticity of Errors

We consider the most popular case, when the errors of the matrix and of the response each have the same variance.

Assumption 2.2.4. We use the linear influent-regression stochastic model (2.2.2) of incomplete rank.

Thus, the perturbations have a scalar covariance matrix both in the right-hand side errors e and in the error matrix J. Then Problem 2.2.3 takes the following form.

Problem 2.2.4. Knowing the vector $\bar q=\Phi\theta+H\delta+\bar J\theta+\bar e$ as a realization of q, the exact incomplete-rank matrix $A=[\Phi,H]$ of the SLAE (2.2.0) and the values $\sigma$, $\rho$, estimate the approximation vector to the normal vector so that the sum of squares of this approximation is minimal on the set of admissible pseudosolutions

$$(\hat t_{m}^{T},\hat d_{m}^{T})^{T}=\operatorname*{Arg\,min}_{\theta,\delta:\;(\Phi\theta+H\delta-\bar q)^{T}(Euu^{T})^{-1}(\Phi\theta+H\delta-\bar q)+\ln\det Euu^{T}+n\ln 2\pi=m^{2}}\;|\theta|^{2}+|\delta|^{2}.$$

Corollary 2.2.4. The stable approximation to the normal vector exists, is unique, and satisfies the relation

and the SLAE

where $\alpha=1/\hat\lambda^{2}\ge 0$ is a numerical Lagrange multiplier.

Proof. For the proof it is enough to use items 2.2.2 and 2.2.3.

Remark 2.2.4. The given problem can be simplified not only as in item 2.1.5, by simplifying the residue principle, but also by using the fact that only one of the matrices $\Phi$ or H is degenerate. Then the regularization can be carried out only for that matrix, as the sketch below illustrates.
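A least-squares sketch of this remark, regularizing only the block of parameters attached to the degenerate matrix (here H is taken as the degenerate one; the penalty form and the choice of α are assumptions for illustration, not the full MLM functional):

```python
import numpy as np

def partial_ridge(Phi, H, q, alpha):
    """Solve min |Phi@theta + H@delta - q|^2 + alpha*|delta|^2,
    i.e. regularize only the delta-block associated with the
    (possibly rank-deficient) matrix H."""
    n, p = Phi.shape
    k = H.shape[1]
    A = np.hstack([Phi, H])
    # Augment with sqrt(alpha)*[0, I_k] rows so that only |delta|^2 is penalized.
    Aug = np.vstack([A, np.hstack([np.zeros((k, p)), np.sqrt(alpha) * np.eye(k)])])
    rhs = np.concatenate([q, np.zeros(k)])
    sol, *_ = np.linalg.lstsq(Aug, rhs, rcond=None)
    return sol[:p], sol[p:]
```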


2.3. Analysis of Passive-Active Experiment

Assumption 2.3. We are given a linear functional relation

where $\Xi=[\xi_1,\dots,\xi_m]$ is an unknown measured matrix, $\Phi=[\phi_1,\dots,\phi_p]$ is a known, precisely prescribed full-rank matrix, $\operatorname{rank}A=\operatorname{rank}[\Xi,\Phi]=m+p$, $\beta=(\beta_1,\dots,\beta_m)^{T}$ and $\theta=(\theta_1,\dots,\theta_p)^{T}$ are unknown parameters, and $\varphi=(\varphi_1,\dots,\varphi_n)^{T}$ is an unknown response vector. We also construct for this linear functional relation a structural relation

where $\chi=(\chi_1,\dots,\chi_n)^{T}$ is an unknown measured response vector, $F=[f_1,\dots,f_p]$ is a random matrix and J is an error matrix.

We briefly consider this, so far the most universal, description of an experiment.

2.3.1 Maximum-Likelihood Method of the Parameter Estimation of Linear Model of Passive-Active Experiment

Assumption 2.3.1. We shall assume the linear stochastic model of passive-active experiment of full rank, which uses the functional relation (2.0.3) and the structural relation (2.3.0):

and we shall assume that the errors are normally distributed.

We assume that the experiment is both passive and active, that is, the researchers prescribe the exact predictor matrix $\Phi$, measure the matrix $\Xi$, and estimate the parameters $\theta$ and $\beta$. The researchers have at their disposal a random response vector


that is further corrupted by measurement errors e, the random regressor matrix X and the exact predictor matrix $\Phi$. That is, the model contains structural relations of Eq. (2.3.0), on which the response measurement errors e are imposed. Models of this type, in view of the complexity of their parameter estimation, have not previously been considered, although an experiment in which the researcher prescribes some quantities and measures others passively seems in many respects natural. The scheme of such an experiment is shown in Figure 2.3-1.

Figure 2.3-1. Scheme of passive-active experiment.

We shall study the estimation of the unknown parameters by the MLM.

Problem 2.3.1. Knowing one realization $\bar q=\Xi\beta+\Phi\theta+\bar J\theta+\bar e$ of the random variable q, one realization $\bar X=\Xi+\bar C$ (rank $\bar X=m$) of the random matrix X, the nonrandom matrix $\Phi$ (rank $\Phi=p$), and the covariance matrices M, T, $\Sigma$, P, estimate the unknown parameters $\beta$, $\theta$ of the linear confluent-influent stochastic model (2.3.1) by the MLM.

Theorem 2.3.1. The estimates $\hat b,\hat t$ of the parameters $\beta$, $\theta$ of the linear confluent-influent model (2.3.1) of passive-active experiment from Problem 2.3.1 can be computed by minimizing the functional

where $\Psi=\{\sigma_{ik}+\beta^{T}M_{ik}\beta-(T_{ik}+T_{ki})\beta+\theta^{T}P_{ik}\theta\}$ and $\Sigma_{P}=\{\sigma_{ik}+\theta^{T}P_{ik}\theta\}$.

Proof. We change the model and the statement of the problem. Instead of the matrix X and the vector q we consider the vector $z=(X,-q)^{T}$ of dimension mn+n. To do this we arrange the matrix X by rows into a row and adjoin to it the row $-q^{T}$, and we do the same for the vectors $\zeta=(\Xi,-\varphi)^{T}$ and $w=(C,-u)^{T}$. Then the original linear confluent-influent model reduces to a linear regression model with the linear constraints $\Gamma\zeta=-\Phi\theta$

where the constraint matrix $\Gamma$ is the same as in item 1.4.1 and the covariance matrix has the form $\Sigma_{P}=\{\sigma_{ik}+\theta^{T}P_{ik}\theta\}$ (its computation is similar to item 2.1.1).

Problem 2.3.1 thus reduces to the following two-stage minimization problem: estimate the true values of the parameters $\zeta$ so that the negative double log-likelihood function

is minimized subject to the constraints $\Gamma\zeta=-\Phi\theta$, and then find the minimum of this form over all possible values of the parameters $\beta$, $\theta$:

Consider the first stage. We use the method of undetermined Lagrange multipliers, i.e., we multiply the constraints $\Gamma\zeta+\Phi\theta=0$ by the vector $2\lambda=2(\lambda_1,\lambda_2,\dots,\lambda_n)$, add the product to the negative double log-likelihood and arrive at the Lagrangian minimization:

$$R^{2}=\min_{\zeta}\;\bigl[(z-\zeta)^{T}\Omega^{-1}(z-\zeta)+2\lambda^{T}(\Gamma\zeta+\Phi\theta)\bigr]+\ln\det\Omega+(nm+n)\ln 2\pi.$$

A necessary condition for a minimum is that the derivatives with respect to $\zeta$ and $\lambda$ vanish.


To construct the estimator for the vector of undetermined Lagrange multipliers, we left-multiply the first equation of the augmented normal equation by the matrix $\Gamma\Omega$. Since

the estimator of the vector $\zeta$ is calculated from the relationship

and its covariance matrix from the relationship

For any values of the parameters $\beta$, the matrix $\Gamma$ has full rank because of the presence of the submatrix I. Therefore the minimization problem for the negative double log-likelihood under linear constraints always has a unique solution.

Then the original variational problem is rewritten in a form independent of $\Xi$ and $\varphi$:

$$R^{2}=(z-\hat\zeta)^{T}\Omega^{-1}(z-\hat\zeta)+\ln\det\Omega+(nm+n)\ln 2\pi
=\hat\lambda^{T}\Gamma\Omega\Gamma^{T}\hat\lambda+\ln\det\Omega+(nm+n)\ln 2\pi$$
$$=(\bar q-\bar X\beta-\Phi\theta)^{T}\Psi^{-1}(\bar q-\bar X\beta-\Phi\theta)+\ln\det\Omega+(nm+n)\ln 2\pi,$$

where $\Psi=\{\sigma_{ik}+\beta^{T}M_{ik}\beta-(T_{ik}+T_{ki})\beta+\theta^{T}P_{ik}\theta\}$, $M_{ik}$ are the elements of the cell with number ik and dimension m×m of the covariance matrix M(mn×mn), $T_{ik}$ is the segment k of dimension m of row i of the matrix T(n×nm), and $P_{ik}$ are the elements of the cell with number ik and dimension p×p of the covariance matrix P(pn×pn).

Differentiating this expression with respect to the parameters $\beta$, $\theta$, we obtain the SNAE


$$2(\bar X^{T}\Psi^{-1}\bar X)_{j}\beta-\bar u^{T}\Psi^{-1}\frac{\partial\Psi}{\partial\beta_{j}}\Psi^{-1}\bar u+2(\bar X^{T}\Psi^{-1}\Phi)_{j}\theta=2(\bar X^{T}\Psi^{-1}\bar q)_{j},\qquad j=1,\dots,m;$$

$$\frac{\partial\Psi}{\partial\theta_{l}}=\frac{\partial\Omega}{\partial\theta_{l}}=\{(P_{ik}+P_{ki})\theta\}.$$

Since the original matrix is nonsingular, the estimator $\hat b,\hat t$ is the unique solution of Problem 2.3.1.

Now, using (*), we easily obtain the estimators of the regressor matrix $\Xi$ and the response vector $\varphi$, respectively

Theorem 2.3.1.1. The mean of the RSS is given by

$$E\,\bar u^{T}\Psi^{-1}\bar u=n.$$

The obtained result fully satisfies the correspondence principle. Indeed, when the matrix $\Xi$ is absent in Eq. (2.3.1), this estimation method turns into the MLM [Fedorov 1968, Mechenov 1988], and when the matrix $\Phi$ is absent, it turns into the LDM [Mechenov 1988].

2.3.2 Homoscedasticity in Experiment

Assumption 2.3.2. We assume the following well-posed linear stochastic confluent-influent model of passive-active experiments (the functional relation (2.0.3), the structural relation (2.3.0))


We assume that the experiment is both passive and active, and that each random matrix has its own error variance. We consider estimation of the required parameters by the MLM.

Problem 2.3.2. Assume that we have one realization $\bar q$ of the random variable q, one realization $\bar X$ of the random matrix X and the predictor matrix $\Phi$. We also have their variances $\sigma^{2}$, $\mu^{2}$, $\rho^{2}$. It is required to estimate the parameters $\beta$, $\theta$ of the linear stochastic model (2.3.2).

Theorem 2.3.2. The estimates of the parameters $\beta$, $\theta$ of the linear model (2.3.2) from Problem 2.3.2 minimize

and are calculated from the SNAE

We call this SNAE the normal equation of the passive-active experiment.

Proof. We apply the same proof scheme as in item 2.3.1. Following it, we consider the place where the estimate of the vector $\zeta$ is calculated from equation (*). Taking advantage of the estimate (*) for $\zeta$, we rewrite the original variational problem in a form independent of $\Xi$ and of $\varphi$:

Differentiating this relation with respect to the parameters $\beta$, $\theta$, we obtain the SNAE (**), from which we can calculate the required estimate supplying the solution of Problem 2.3.2.


2.3.3 Singular Model

A degenerate model is obviously "nonsense" for the representation of the prescribed predictor matrix $\Phi$, but for the matrix $\Xi$ such a supposition can occur in practice, and therefore it deserves study.

Assumption 2.3.3. We use the following ill-posed linear stochastic model of passive-active experiments (the functional relation (2.0.3), the structural relation (2.3.0))

Definition 2.3.3. The vector $(\beta_{0}^{T};\theta_{0}^{T})^{T}$ is called the normal solution of the SLAE (2.0.3) with the degenerate matrix $[\Xi,\Phi]$ if

$$(\beta_{0}^{T};\theta_{0}^{T})^{T}=\operatorname*{Arg\,min}_{\beta,\theta:\;\Xi\beta+\Phi\theta=\varphi}\;|\beta|^{2}+|\theta|^{2},$$

where the exact matrices $\Xi$, $\Phi$ and the vector $\varphi$ are known.

Before posing a problem, we alter the model. As in item 2.3.1, we reduce the original linear confluent-influent model to a linear regression model with the linear constraints $\Gamma\zeta=-\Phi\theta$

where the matrix $\Gamma$ (of size n×(mn+n)) has the same form as in item 1.4.1 and the matrix $\Sigma_{P}=\{\sigma_{ik}+\theta^{T}P_{ik}\theta\}$ (its computation is similar to item 2.1.1).

Definition 2.3.4. Let $\Upsilon$ be the set of admissible pseudosolutions (estimates)

$$\Upsilon=\Bigl\{b,t:\;\min_{\zeta:\;\Gamma\zeta+\Phi t=0}|z-\zeta|^{2}_{\Omega^{-1}}+\ln\det\Omega+(nm+n)\ln 2\pi\le m^{2}\Bigr\},$$

where

$$m^{2}=E\min_{\zeta:\;\Gamma\zeta+\Phi\theta=0}|z-\zeta|^{2}_{\Omega^{-1}}+\ln\det\Omega+(nm+n)\ln 2\pi.$$

Problem 2.3.3. Knowing the realization vector z containing the full-rank matrix $\bar X$, the exact matrices $\Phi$, $\Sigma$, T, M, P, the value $m^{2}$ and the fact that the matrix $[\Xi,\Phi]$ of the SLAE (2.0.3) has incomplete rank, calculate the approximation vector to the normal vector so that the square of this approximation is minimal on the set of admissible pseudosolutions.

Theorem 2.3.3. The solution of Problem 2.3.3 exists, is unique and satisfies the relation

$$(\hat b^{T};\hat t^{T})^{T}=\operatorname*{Arg\,min}_{b,t:\;|\bar Xb+\Phi t-\bar q|^{2}_{\Omega^{-1}}+\ln\det\Omega+(mn+n)\ln 2\pi=m^{2}}\;|b|^{2}+|t|^{2}.$$

Proof. We take advantage of Theorem 2.3.1 and rewrite the original variational problem in the form

Since the minimum is attained on the boundary of the set [Tikhonov & Arsenin 1979], the inequality can be replaced with an equality:

$$(\hat b^{T};\hat t^{T})^{T}=\operatorname*{Arg\,min}_{b,t:\;|\bar Xb+\Phi t-\bar q|^{2}_{\Omega^{-1}}+\ln\det\Omega+(mn+n)\ln 2\pi=m^{2}}\;|b|^{2}+|t|^{2}.$$

We use the method of undetermined Lagrange multipliers. For this purpose we multiply the constraint by the multiplier $\lambda$, add the product to the minimized quadratic form and arrive at the Lagrangian minimization

The estimates, calculated from this relation, supply the solution of Problem 2.3.3.


2.3.4 Degeneracy and Homoscedasticity

Assumption 2.3.4. We use the following incomplete-rank linear stochastic model of passive-active experiments (the functional relation (2.0.3), the structural relation (2.3.0))

We reduce the original linear confluent-influent model (2.3.4) to the regression model, linear with respect to $\zeta$, with the linear constraints $\Gamma\zeta=-\Phi\theta$, of the form:

Problem 2.3.4. Given the realization vector z, the exact matrix $\Phi$, the corresponding variances $\sigma^{2}$, $\mu^{2}$, $\rho^{2}$, the value $m^{2}$ and the fact that the matrix $[\Xi,\Phi]$ of the SLAE (2.0.3) has incomplete rank, calculate the approximation vector to the normal vector $(\beta_{0}^{T};\theta_{0}^{T})^{T}$ so that the square of this approximation is minimal on the set of admissible pseudosolutions

$$(\hat b^{T};\hat t^{T})^{T}=\operatorname*{Arg\,min}_{b,t\in\Upsilon}\;|b|^{2}+|t|^{2}.$$

Theorem 2.3.4. The solution of Problem 2.3.4 exists, is unique and satisfies the relation

$$(\hat b^{T},\hat t^{T})^{T}=\operatorname*{Arg\,min}_{b,t:\;\frac{|\bar q-\bar Xb-\Phi t|^{2}}{\sigma^{2}+\mu^{2}|b|^{2}+\rho^{2}|t|^{2}}+nm\ln\mu^{2}+n\ln\left(\sigma^{2}+\rho^{2}|t|^{2}\right)+(nm+n)\ln 2\pi=m^{2}}\;|b|^{2}+|t|^{2}.$$

Proof. We take advantage of Theorem 2.3.3 and rewrite the original variational problem in the form:


$$(\hat b^T;\hat t^T)^T=\mathop{\mathrm{Arg\,min}}_{b,t:\ \frac{|\bar Xb+\Phi t-\bar q|^2}{\sigma^2+\mu^2|b|^2+\rho^2|t|^2}+nm\ln\mu^2+n\ln\bigl(\sigma^2+\mu^2|b|^2+\rho^2|t|^2\bigr)+(nm+n)\ln 2\pi=\omega^2}\bigl(|b|^2+|t|^2\bigr).$$

To compute the minimum we apply the method of undetermined Lagrange multipliers. For this purpose we multiply the constraint

by the multiplier $\lambda$, add the product to the quadratic form, and arrive at the minimization of the quadratic form

Differentiating this expression with respect to $b$, $t$ and $\lambda$, we obtain the SNAE

where $\alpha=1/\lambda>0$ is the numerical Lagrange multiplier. The obtained estimate supplies the solution of Problem 2.3.4.

In this paragraph the problem of parameter estimation for the models of well-posed and ill-posed passive-active experiments has been solved. For completeness of the research it remains only to supplement the model investigated above with free (regression) parameters, as will be done in the following paragraph.


2.4. Analysis of Passive-Active-Regression Experiment

Having connected all the models, it is easier to see the real differences between them and to understand their interaction.

Assumption 2.4. We consider the following linear functional relation

where $\Xi=[\xi_1,\ldots,\xi_m]$ is an unknown measured matrix, $\Phi=[\phi_1,\ldots,\phi_p]$ is a known prescribed matrix, $H=[\eta_1,\ldots,\eta_k]$ is a known theoretical matrix with common rank $\mathrm{rank}\,[\Xi,\Phi,H]=m+p+k$, $\beta=(\beta_1,\ldots,\beta_m)^T$, $\theta=(\theta_1,\ldots,\theta_p)^T$ and $\delta=(\delta_1,\ldots,\delta_k)^T$ are unknown parameters, and $\varphi=(\varphi_1,\ldots,\varphi_n)^T$ is an unknown response vector. We introduce also the structural relation

corresponding to Eq. (2.0.4), where $q=(q_1,\ldots,q_n)^T$ is an unknown measured response vector, $F=[f_1,\ldots,f_p]$ is a random matrix and $J$ is its error. That is, the structural relation is a functional relation in which one of the three matrices contains an additive random error.

2.4.1 Maximum Likelihood Method of the Parameter Estimation of the Linear Model of the Passive-Active-Regression Experiment

Assumption 2.4.1. We shall assume the full-rank linear confluent-influent-regression stochastic model of the passive-active-regression experiment

based on the functional relation (2.0.4) and the structural relation (2.4.0). We assume that the experiment is both passive and active, that is, the events occur when $\Xi$, $\Phi$, $H$, $\beta$, $\theta$, $\delta$ have certain values. The researchers have at their disposal the predictor matrix $\Phi$, the matrix of theoretical values $H$, the random response vector


with error vector $u=J\theta+e$, and the random confluent matrix $X$ with error matrix $C$. This model is generated by structural relations of the form $q=\Xi\beta+F\theta+H\delta$, on which the measurement errors $e$ of the response are imposed. In such a model all the nuances of carrying out the experiment are taken into account completely: it contains a confluent, an influent and a regression part. The scheme of such an experiment is shown in Figure 2.4-1.
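A minimal simulation sketch of this data-generating mechanism may help fix the notation; the dimensions, the Gaussian errors and the particular variances are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p, k = 50, 3, 2, 2                   # observations; sizes of beta, theta, delta

# Exact ingredients of the functional relation Xi*beta + Phi*theta + H*delta = phi.
Xi = rng.standard_normal((n, m))           # unknown exact regressor matrix (passive part)
Phi = rng.standard_normal((n, p))          # prescribed predictor matrix (active part)
H = rng.standard_normal((n, k))            # known theoretical matrix (regression part)
beta, theta, delta = np.ones(m), np.full(p, 0.5), np.full(k, -1.0)

# Structural relations: the equipment realizes F = Phi + J, the matrix Xi is observed
# as X = Xi + C, and the response carries the measurement error e, so u = J@theta + e.
sigma, mu, rho = 0.10, 0.05, 0.05          # illustrative homoscedastic standard deviations
C = mu * rng.standard_normal((n, m))
J = rho * rng.standard_normal((n, p))
e = sigma * rng.standard_normal(n)

X_bar = Xi + C                                            # observed confluent matrix
q_bar = Xi @ beta + (Phi + J) @ theta + H @ delta + e     # observed response realization
```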

In what follows we shall assume that the errors are normally distributed and study the estimation of the unknown parameters by the MLM.

Problem 2.4.1. Suppose we know one realization $\bar q=\Xi\beta+\Phi\theta+H\delta+J\theta+e$ and $\bar X=\Xi+C$ (rank $\bar X=m$), respectively, of the random variables $q$ and $X$, and the values of the matrices $\Phi$ (rank $\Phi=p$) and $H$ (rank $H=k$). Assume that we know also the corresponding covariance matrices $M$, $\Sigma$, $P$, $T$. It is required to estimate the unknown parameters $\beta$, $\theta$, $\delta$ of the linear stochastic model (2.4.1) of the passive-active-regression experiment by the MLM.

Theorem 2.4.1. The estimates $\hat b$, $\hat t$, $\hat d$ of the parameters $\beta$, $\theta$, $\delta$ of the linear confluent-influent-regression model (2.4.1) from Problem 2.4.1 minimize the functional

where $\Psi=\{\sigma_{ik}+\beta^TM_{ik}\beta-(T_{ik}+T_{ki})\beta+\theta^TP_{ik}\theta\}$ and $\Sigma_P=\{\sigma_{ik}+\theta^TP_{ik}\theta\}$.
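The weight matrix $\Psi$ can be assembled elementwise from the cells of the covariance matrices. A bookkeeping sketch under the reconstruction of the formula above; the cell layout of $M$, $T$, $P$ follows the description given later in the proof, and the array names and transpose conventions are assumptions of this sketch.

```python
import numpy as np

def build_Psi(Sigma, M, T, P, b, t):
    """Psi[i, k] = Sigma[i, k] + b^T M_ik b - (T_ik + T_ki) b + t^T P_ik t.

    Sigma : (n, n) covariance matrix of the response errors e
    M     : (n, n, m, m) array; M[i, k] is the (i, k) cell of the mn x mn covariance of C
    T     : (n, n, m) array; T[i, k] is the k-th m-segment of the i-th row of the n x nm matrix T
    P     : (n, n, p, p) array; P[i, k] is the (i, k) cell of the pn x pn covariance of J
    """
    n = Sigma.shape[0]
    Psi = np.empty((n, n))
    for i in range(n):
        for k in range(n):
            Psi[i, k] = (Sigma[i, k]
                         + b @ M[i, k] @ b
                         - (T[i, k] + T[k, i]) @ b
                         + t @ P[i, k] @ t)
    return Psi
```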

Proof. We alter the model and the statement of the problem. Instead of the matrix $X$ and the vector $q$ we consider the vector $z=(X,-q^T)^T$ of dimension $mn+n$. To do this we arrange the matrix $X$ by rows into a single row and adjoin to it the row $-q^T$, and we do the same for the vectors $\zeta=(\Xi,-\varphi^T)^T$ and $w=(C,-u^T)^T$. Then Eq. (2.4.1) reduces to the regression model that is linear in $\zeta$, with the linear constraints $\Gamma\zeta=-\Phi\theta-H\delta$

where the matrix $\Gamma$ is the same as in item 1.4.1 and the matrix $\Sigma=\{\sigma_{ik}+\theta^TP_{ik}\theta\}$ (its evaluation is similar to item 2.1.1).
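For reference, the row-stacking convention just described yields a block form of $\Gamma$ of the following kind (a sketch consistent with the text; the exact definition is that of item 1.4.1, which is not repeated in this chapter):

$$\zeta=\bigl(\xi_1^T,\ldots,\xi_n^T,-\varphi^T\bigr)^T,\qquad \Gamma=\bigl[\,I_n\otimes\beta^T\ \ \ I_n\,\bigr],\qquad \Gamma\zeta=\Xi\beta-\varphi,$$

where $\xi_i^T$ denotes the $i$-th row of $\Xi$; the functional equation $\Xi\beta+\Phi\theta+H\delta=\varphi$ then takes the form of the constraint $\Gamma\zeta=-\Phi\theta-H\delta$.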

Problem 2.4.1 thus reduces to the following two-stage minimization problem: estimate the unknown values of the parameters $\zeta$ so that the negative double logarithm of the likelihood function is minimized subject to the constraints $\Gamma\zeta=-\Phi\theta-H\delta$, and then minimize this negative double logarithm of the likelihood function over all possible values of the parameters $\beta$, $\theta$, $\delta$
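As a reading aid, the two-stage problem can be written out under the assumption that the negative double logarithm of the Gaussian likelihood of $z$ has its standard form with covariance matrix $\Omega$ (a sketch, not a reproduction of the author's displayed functional):

$$\min_{\beta,\theta,\delta}\ \Bigl[\ \min_{\zeta:\ \Gamma\zeta+\Phi\theta+H\delta=0}\ (\bar z-\zeta)^T\Omega^{-1}(\bar z-\zeta)+\ln\det\Omega+(mn+n)\ln 2\pi\ \Bigr].$$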

Let us consider the first stage. We use for its solution the method of undetermined Lagrange multipliers. For this purpose we multiply the constraints $\Gamma\zeta+\Phi\theta+H\delta=0$ by the vector $2\lambda=2(\lambda_1,\lambda_2,\ldots,\lambda_n)^T$, add the product to the negative double logarithm of the likelihood function being minimized, and arrive at the Lagrangian minimization

$$\omega^2=\min_{\beta,\theta,\delta}\ \min_{\zeta}\ \bigl[(\bar z-\zeta)^T\Omega^{-1}(\bar z-\zeta)+2\lambda^T(\Gamma\zeta+\Phi\theta+H\delta)+\ln\det\Omega+(mn+n)\ln 2\pi\bigr].$$


A necessary condition for a minimum is that the derivatives with respect to the vector $\zeta$ and the vector $\lambda$ be equal to zero

To construct the estimator for the vector of undetermined Lagrange multipliers, we left-multiply the first equation of the expanded normal equation by the matrix $\Gamma\Omega$. Since

the estimator $\hat\zeta$ of the vector $\zeta$ is calculated from the relationship

and its covariance matrix is equal to

Since the matrix $\Gamma$ is of full rank by virtue of the presence of the unit submatrix $I$, the estimator $\hat\zeta$ is the unique solution of the first stage of Problem 2.4.1.
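The first stage is an equality-constrained generalized least-squares problem, and its solution admits the textbook closed form obtained from the Lagrangian above. The sketch below implements that closed form; it is assumed, but not shown here, that it coincides with the relationship (*) referred to in the text.

```python
import numpy as np

def constrained_gls(z_bar, Omega, Gamma, r):
    """Minimize (z_bar - zeta)^T Omega^{-1} (z_bar - zeta) subject to Gamma @ zeta = r.

    Returns zeta_hat = z_bar - Omega Gamma^T (Gamma Omega Gamma^T)^{-1} (Gamma z_bar - r)
    together with the vector of Lagrange multipliers. For cov(z_bar) = Omega the covariance
    of zeta_hat is Omega - Omega Gamma^T (Gamma Omega Gamma^T)^{-1} Gamma Omega.
    """
    S = Gamma @ Omega @ Gamma.T                   # nonsingular when Gamma has full row rank
    lam = np.linalg.solve(S, Gamma @ z_bar - r)
    zeta_hat = z_bar - Omega @ Gamma.T @ lam
    return zeta_hat, lam

# Usage with the constraint Gamma zeta = -(Phi @ theta + H @ delta):
# zeta_hat, lam = constrained_gls(z_bar, Omega, Gamma, -(Phi @ theta + H @ delta))
```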

Having taken advantage of the estimator $\hat\zeta$ (*), we rewrite the initial variational problem in a form independent of the unknowns $\Xi$ and $\varphi$

where $\Psi=\{\sigma_{ik}+\beta^TM_{ik}\beta-(T_{ik}+T_{ki})\beta+\theta^TP_{ik}\theta\}$, $M_{ik}$ are the elements of the cell with number $ik$ and dimension $m\times m$ of the covariance matrix $M_{(mn\times mn)}$, $T_{ik}$ is the $k$-th segment of dimension $m$ of the $i$-th row of the matrix $T_{(n\times nm)}$, and $P_{ik}$ are the elements of the cell with number $ik$ and dimension $p\times p$ of the covariance matrix $P_{(pn\times pn)}$.

Differentiating this expression with respect to the parameters $\beta$, $\theta$ and $\delta$, we obtain the SNAE

where $\hat u=\bar q-\bar Xb-\Phi t-Hd$, $\dfrac{\partial\Psi_{ik}}{\partial b}=(M_{ik}+M_{ik}^T)b-(T_{ik})^T-(T_{ki})^T$ and $\dfrac{\partial\Psi_{ik}}{\partial t}=(P_{ik}+P_{ik}^T)t$. Since the original matrices $\Omega$, $\Phi$ and $H$ are of full rank, the estimators $\hat b$, $\hat t$, $\hat d$ are the unique solution of Problem 2.4.1. Having taken advantage of the relation (*), it is easy to calculate estimates of the regressor matrix $\Xi$ and of the response vector $\varphi$, accordingly

Theorem 2.4.1.1. The mean of the RSS is equal to

$$\mathrm E\,\hat u^T\Psi^{-1}\hat u=n.$$

2.4.2 Homoscedastic Experiment

Assumption 2.4.2. Given the functional relationship (2.0.4) and the functional-structural relationship (2.4.0), we consider the following homoscedastic well-posed linear confluent-influent-regression stochastic model of the passive-active-regression experiment


The experiment is homoscedastic, i.e., the events occur when $e$, $C$, $J$ have constant variances, respectively.

We consider the estimation of the required parameters by the MLM.

Problem 2.4.2. Assume given one realization of the random variable $q$ and one realization of the random matrix $X$:

the matrices $\Phi$, $H$. We also know the corresponding variances $\sigma^2$, $\mu^2$, $\rho^2$. It is required to estimate the unknowns $\varphi$, $\Xi$ and the parameters $\beta$, $\theta$, $\delta$ of the linear stochastic model (2.4.2) by the MLM.

Theorem 2.4.2. The estimators of the parameters $\beta$, $\theta$, $\delta$ of the linear model (2.4.2) for Problem 2.4.2 minimize

They are calculated from the SNAE

Proof. We apply the same proof scheme as in item 2.4.1. Following it, we consider the place where the estimator of the vector $\zeta$ is calculated from the relationship

where


Then the initial variational problem is rewritten in a form independent of $\Xi$ and $\varphi$:

Differentiating this expression with respect to the parameters $\beta$, $\theta$, $\delta$, we obtain the above-stated SNAE.

Remark 2.4.2. It is easy to construct an iterative process that generates the sought estimators. The mean weighted RSS is used as the zeroth approximation. Substituting this value into the SNAE, we solve the SLAE, recalculate the weighted RSS, and so on.
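A sketch of such an iteration for the simplest confluent case, the Pearson row of Table 2.4.1 with unit error variances, illustrates the pattern; the full SNAE of Theorem 2.4.2 would be iterated in the same way, only with a larger block system. For generic, well-separated data this simple fixed-point scheme converges to the total-least-squares value $\sigma_{\min}^2([\bar X,\bar q])$, which the final line cross-checks, but in general it may require safeguarding.

```python
import numpy as np

def tls_by_iteration(X, q, tol=1e-12, max_iter=500):
    """Fixed-point iteration for (X^T X - s2 I) b = X^T q with s2 = |X b - q|^2 / (1 + |b|^2).

    The zeroth approximation s2 = 0 gives the ordinary LS estimate; s2 is then recalculated
    from the weighted RSS and the linear system is solved again, as in Remark 2.4.2.
    """
    m = X.shape[1]
    s2 = 0.0
    b = np.zeros(m)
    for _ in range(max_iter):
        b = np.linalg.solve(X.T @ X - s2 * np.eye(m), X.T @ q)
        s2_new = float(np.sum((X @ b - q) ** 2) / (1.0 + b @ b))
        if abs(s2_new - s2) < tol:
            break
        s2 = s2_new
    return b, s2

# Illustrative data and a cross-check against the SVD characterization of TLS.
rng = np.random.default_rng(3)
Xi = rng.standard_normal((30, 2))
X = Xi + 0.05 * rng.standard_normal((30, 2))
q = Xi @ np.array([1.0, -2.0]) + 0.05 * rng.standard_normal(30)
b_hat, s2_hat = tls_by_iteration(X, q)
sigma_min = np.linalg.svd(np.column_stack([X, q]), compute_uv=False)[-1]
print(np.isclose(s2_hat, sigma_min ** 2, rtol=1e-6))
```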

2.4.3 Singular Model

The singular model is obviously nonsense for the prescribed predictor matrix $\Phi$ or the theoretical matrix $H$, but for the matrix $\Xi$ such a supposition is natural, and therefore it deserves study.

Assumption 2.4.3. We use the following incomplete-rank linear stochastic model of the passive-active-regression experiment (the linear functional equation (2.0.4), the structural relation (2.4.0))

Definition 2.4.3. The vector $(\beta^T;\theta^T;\delta^T)_0^T$ is called the normal solution of the SLAE (2.0.4) with the degenerate matrix $[\Xi,\Phi,H]$, if

$$(\beta^T;\theta^T;\delta^T)_0^T=\mathop{\mathrm{Arg\,min}}_{\beta,\theta,\delta:\ \Xi\beta+\Phi\theta+H\delta=\varphi}\bigl(|\beta|^2+|\theta|^2+|\delta|^2\bigr)$$


at the known exact matrices $\Xi$, $\Phi$, $H$ and the vector $\varphi$. Before stating the problem, we alter the model to the form

(its evaluation is similar to item 2.4.1).

Definition 2.4.4. Let $\mathrm T$ be the set of admissible estimators

$$\mathrm T=\Bigl\{\,b,t,d:\ \min_{\zeta:\ \Gamma\zeta+\Phi t+Hd=0}\ |\bar z-\zeta|^2_{\Omega^{-1}}+\ln\det\Omega+(nm+n)\ln 2\pi\le\omega^2\Bigr\},$$

where

Problem 2.4.3. Given the approximate vector $\bar z$, the exact matrices $\Phi$, $H$, $\Sigma$, $T$, $M$, $P$, the value $\omega^2$, and the fact that the matrix $[\Xi,\Phi,H]$ of the SLAE (2.0.4) has incomplete rank, calculate the estimator of the normal vector $(\beta^T;\theta^T;\delta^T)_0^T$ so that the square of this estimator is minimal on the set of admissible estimators

$$(\hat b^T;\hat t^T;\hat d^T)^T=\mathop{\mathrm{Arg\,min}}_{b,t,d\in\mathrm T}\bigl(|b|^2+|t|^2+|d|^2\bigr).$$

Theorem 2.4.3. The solution of Problem 2.4.3 exists, is unique, and satisfies the relation:

$$(\hat b^T;\hat t^T;\hat d^T)^T=\mathop{\mathrm{Arg\,min}}_{b,t,d:\ |\bar Xb+\Phi t+Hd-\bar q|^2_{\Psi^{-1}}+\ln\det\Omega+(mn+n)\ln 2\pi=\omega^2}\bigl(|b|^2+|t|^2+|d|^2\bigr).$$

Proof. We take advantage of Theorem 2.4.1 and rewrite the initial variational problem as

$$(\hat b^T;\hat t^T;\hat d^T)^T=\mathop{\mathrm{Arg\,min}}_{b,t,d:\ |\bar Xb+\Phi t+Hd-\bar q|^2_{\Psi^{-1}}+\ln\det\Omega+(mn+n)\ln 2\pi\le\omega^2}\bigl(|b|^2+|t|^2+|d|^2\bigr).$$


It is known [Tikhonov & Arsenin 1979] that the minimum is attained on the boundary of the set, that is

For its evaluation we apply the method of undetermined Lagrange multipliers. For this purpose we multiply the constraint by the multiplier $\lambda$, add the product to the quadratic form and, as a result, minimize the Lagrangian

Differentiating this expression with respect to the parameters and $\lambda$, we obtain the SNAE, and the estimate calculated from this SNAE is the unique solution of Problem 2.4.3.

Remark 2.4.3. In this case, however, the value $\omega^2$ can hardly be estimated, since it depends on the unknown parameters. Therefore the variant containing only the residual seems preferable, especially in view of the fact that asymptotically both lead to the same outcome. Since the RSS estimate is known from Theorem 2.4.1.1 and its mean equals $n$, the last equation of this SNAE can be replaced by
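A natural form of this replacement, consistent with Theorem 2.4.1.1 and offered here only as a reading of the remark rather than as the author's displayed equation, is the condition that the weighted residual attain its mean value:

$$\hat u^T\Psi^{-1}\hat u=n,\qquad \hat u=\bar q-\bar Xb-\Phi t-Hd,$$

so that the unknown level $\omega^2$ no longer enters the system.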

2.4.4 Singularity and Homoscedasticity

Assumption 2.4.4. We use the following incomplete-rank linear stochastic model of the passive-active-regression experiment (the linear functional equation (2.0.4), the structural relation (2.4.0))

We reduce Eq. (2.4.4) to the regression model that is linear in $\zeta$, with the linear constraints $\Gamma\zeta=-\Phi\theta-H\delta$


where the matrix $\Gamma$ is the same as in item 1.4.1.

Problem 2.4.4. Given the approximate vector $\bar z$, the exact matrices $\Phi$, $H$, the corresponding variances $\sigma^2$, $\mu^2$, $\rho^2$, the value $\omega^2$, and the fact that the matrix $[\Xi,\Phi,H]$ of the SLAE (2.0.4) has incomplete rank, calculate the estimator of the normal vector $(\beta^T;\theta^T;\delta^T)_0^T$ so that the square of this estimator is minimal on the set of admissible estimators:

$$(\hat b^T;\hat t^T;\hat d^T)^T=\mathop{\mathrm{Arg\,min}}_{b,t,d\in\mathrm T}\bigl(|b|^2+|t|^2+|d|^2\bigr).$$

Theorem 2.4.4. The solution of Problem 2.4.4 exists, is unique, and satisfies the relation:

$$(\hat b^T;\hat t^T;\hat d^T)^T=\mathop{\mathrm{Arg\,min}}_{b,t,d:\ \frac{|\bar Xb+\Phi t+Hd-\bar q|^2}{\sigma^2+\mu^2|b|^2+\rho^2|t|^2}+nm\ln\mu^2+n\ln\bigl(\sigma^2+\mu^2|b|^2+\rho^2|t|^2\bigr)+(nm+n)\ln 2\pi\le\omega^2}\bigl(|b|^2+|t|^2+|d|^2\bigr)$$

and the SNAE

where $\alpha=1/\lambda>0$ is a numerical Lagrange multiplier.

Proof. We take advantage of Theorem 2.4.3 and rewrite the initial

variational problem in the following form:


To evaluate the minimum we apply the method of undetermined Lagrange multipliers. For this purpose we multiply the constraint

by the multiplier $\lambda$, add the product to the quadratic form and, as a result, minimize the Lagrangian

Differentiating this expression with respect to the parameters and the numerical Lagrange multiplier (having replaced $\alpha=1/\lambda$), we obtain the required SNAE, and the estimator is the unique solution of Problem 2.4.4.

Remark 2.4.4. As the mean of the weighted RSS is equal to n:

then, having forcibly replaced the last line of the SNAE and the corresponding values, we obtain the SNAE


whose solutions are asymptotically identical to the previous ones, but which is a little easier to compute. We remark that for the precise matrices (the influent and regression parts) there is no negative displacement of the diagonal elements, whereas for the matrix of the passive part there is a negative displacement. So the formulas for the computation of the regularized solution do not coincide.
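A sketch of how the displacements discussed in this remark enter the left-hand side of the homoscedastic unit-variance SNAE, following the pattern of the last row of Table 2.4.1; the $+(n-s^2)I$ correction of the influent block is taken from that reconstructed row and is an assumption of this sketch, which may differ in detail from the exact SNAE of Theorem 2.4.4.

```python
import numpy as np

def snae_matrix(X, Phi, H, s2):
    """Left-hand-side matrix of the SNAE for A = [X, Phi, H] with unit-variance errors:
    the measured (passive) block gets the negative displacement -s2*I, the prescribed
    (influent) block gets the correction +(n - s2)*I, and the exact (regression) block
    remains an ordinary normal-equation block."""
    n = X.shape[0]
    A = np.hstack([X, Phi, H])
    N = A.T @ A
    m, p = X.shape[1], Phi.shape[1]
    N[:m, :m] -= s2 * np.eye(m)                     # passive part: negative displacement
    N[m:m + p, m:m + p] += (n - s2) * np.eye(p)     # influent part: no negative displacement
    return N

# One step of the iteration of Remark 2.4.2 then reads
# coef = np.linalg.solve(snae_matrix(X, Phi, H, s2), np.hstack([X, Phi, H]).T @ q_bar)
```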

2.4.5 Summary Table of Experiments

All the results are shown in small tables that give an overall characterization of the experimental material and of the methods of parameter estimation. By completely simplifying the description of all the errors occurring in the model (or in the system of linear algebraic equations), that is, assuming they are homoscedastic and have variance 1, one can construct a summary table of the models, quadratic forms, and equations for estimating the parameters (for computing the pseudosolutions) for all possible combinations of measurements and prescriptions of the initial data. Table 2.4.1 contains the different models in the presence of homoscedastic errors. In the Application the tables of models in the presence of homoscedastic errors (Table 1) and in the presence of heteroscedastic errors (Table 2) are shown.

Table 2.4.1. Models of errors for systems of linear algebraic equations and methods of computation of their pseudosolutions. The input data are given in column D and the unknown values in column V.

Models $\varphi=A\lambda$, $A=[\Xi,\Phi,H]$; D; V; the equations and quadratic forms, the authors:

$\varphi=\Phi\theta$; D: $\Phi$, $\bar q$; V: $\theta$; $\Phi^T\Phi\theta=\Phi^T\bar q$, $|\Phi\theta-\bar q|^2=\min$ [Gauss 1809, Legendre 1806]

$\varphi=\Xi\beta$; D: $\bar X$, $\bar q$; V: $\beta$; $(\bar X^T\bar X-s^2I)\beta=\bar X^T\bar q$, $s^2=\min|\bar X\beta-\bar q|^2/(1+|\beta|^2)$ [Pearson 1901, Mechenov 1988]

[Mechenov 1991]

$\varphi=\Phi\theta$; D: $\Phi$, $\bar q$; V: $\theta$; $(\Phi^T\Phi+nI-s^2I)\theta=\Phi^T\bar q$ [Fedorov 1968]

$\varphi=\Phi\theta+H\delta$; D: $\Phi$, $H$, $\bar q$; V: $\theta$, $\delta$; $(\Phi^T\Phi+nI-s^2I)\theta+\Phi^TH\delta=\Phi^T\bar q$, $H^T\Phi\theta+H^TH\delta=H^T\bar q$ [Mechenov 1996]

$\varphi=\Xi\beta+\Phi\theta$; D: $\bar X$, $\Phi$, $\bar q$; V: $\beta$, $\theta$; $(\bar X^T\bar X-s^2I)\beta+\bar X^T\Phi\theta=\bar X^T\bar q$, $\Phi^T\bar X\beta+(\Phi^T\Phi+nI-s^2I)\theta=\Phi^T\bar q$ [Mechenov 1996]

$\varphi=\Xi\beta+\Phi\theta+H\delta$; D: $\bar X$, $\Phi$, $H$, $\bar q$; V: $\beta$, $\theta$, $\delta$; $(\bar X^T\bar X-s^2I)\beta+\bar X^T\Phi\theta+\bar X^TH\delta=\bar X^T\bar q$, $\Phi^T\bar X\beta+(\Phi^T\Phi+nI-s^2I)\theta+\Phi^TH\delta=\Phi^T\bar q$, $H^T\bar X\beta+H^T\Phi\theta+H^TH\delta=H^T\bar q$ [Mechenov 1996]


2.4.6 Inference of Chapter 2

The basic result of the second chapter is the construction of the model of the passive-active-regression experiment. Thus, the picture of the exposition of experimental research within the framework of confluent-influent-regression models is completed. Such a gradation of experiments allows the contributor to understand better the picture of the research and to carry out the parameter estimation correctly. Indeed, when estimated by the usual Gauss transformation, the parameters turn out to be "underestimated" in the case of a passive experiment and "overstated" in the case of an active one.

The method of effective correction of rounding errors is also constructed for the numerical solution of the SLAE and for the parameter estimation on a computer. Regularized estimation methods are developed for the case of singular matrices. The given approach has ample opportunities for development in multivariate confluence analysis, in nonlinear models, in models with linear constraints, with non-Gaussian errors, and so on.

Tikhonov applied the regularization term to the computation of solutions of integral equations of the first kind and of singular SLAE, that is, to problems with strict and beforehand known singularity. In such problems, to derive a solution (in view of their infinite number), an additional (a priori) condition for the selection of a unique solution is necessary. This does not concern ill-conditioned (but nevertheless well-posed) SLAE, which take the a priori information on the solution into account, although as a result these methods can lead to similar equations for the evaluation of the solution. Problems of the solution of integral equations are considered in Chapter 3.