
arXiv:2005.10647v2 [math.CO] 29 Oct 2020

The explicit formula for Gauss-Jordan elimination and error analysis⋆

Nam Van Tran¹, Júlia Justino²,³, Imme van den Berg³,∗

Abstract

The explicit formula for the elements of the successive intermediate matrices of the Gauss-Jordan elimination procedure for the solution of systems of linear equations is applied to error analysis. Stability conditions in terms of relative uncertainty and size of determinants are given such that the Gauss-Jordan procedure leads to a solution respecting the original imprecisions in the right-hand member. The solution is the same as given by Cramer's Rule. Imprecisions are modelled by scalar neutrices, which are convex groups of (nonstandard) real numbers. The resulting calculation rules extend informal error calculus, and make it possible to keep track of the errors at every stage.

Keywords: Gauss-Jordan elimination, error propagation, stability, scalar neutrices.
AMS classification: 03H05, 15A06, 15B33, 65G99.

1. Introduction

In the present article we study imprecise systems of linear equations. The imprecisions occurring in the coefficient matrix and the right-hand member of a system of linear equations are modelled by convex subgroups of the

∗Imme van den Berg
Email addresses: [email protected] (Nam Van Tran), [email protected] (Júlia Justino), [email protected] (Imme van den Berg)
¹Faculty of Applied Sciences, HCMC University of Technology and Education, Ho Chi Minh City, Vietnam
²Setúbal School of Technology, Polytechnic Institute of Setúbal, Setúbal, Portugal
³Research Center in Mathematics and Applications (CIMA), University of Évora, Évora, Portugal

Preprint submitted to Indagationes Mathematicae October 30, 2020

nonstandard reals, called (scalar) neutrices. The vagueness is reflected by the invariance under some additions, a formalization of the Sorites property [8, 24]; we were inspired by the functional neutrices of Van der Corput's Theory of Neglecting [1]. The setting within the real number system enables individual treatment of the imprecisions and a straightforward calculus modelling error propagation.

Stability conditions for systems of linear equations are formulated. In particular, the relative imprecisions of the elements of the coefficient matrix A, when compared to det(A), should be at most of the same order as the relative imprecision of the right-hand member, and det(A) should not be too small. We derive that each Gauss-Jordan operation transforms a stable system into a stable system and that there is no significant blow-up of the imprecisions. The Main Theorem (Theorem 2.30) states that the Gauss-Jordan procedure solves a stable system within the bounds given by the imprecisions in the right-hand member, leading to the same outcome as Cramer's Rule.

Within nonstandard analysis a neutrix is usually an external set. External numbers are sums of a real number and a neutrix. They give rise to the algebraic structure of a Complete Arithmetical Solid [6]. This structure is weaker than a field, being based on additive and multiplicative semigroups, with a distributive law which is valid under some conditions [5]. Still the structure is completely ordered, with a Dedekind completeness property and an Archimedean property, and is sufficiently strong to enable rather straightforward algebraic calculations [15], [7], while common techniques and operations of linear algebra and matrix calculus remain valid to a large extent [13], [21], [22]. This may be observed also in the present article.

It follows from the above that the scalar neutrices allow for a stronger algebraic structure than Van der Corput's neutrices, which is partly due to the absence of functional dependence. The last section of the present article contains a result which may be seen to fall within Van der Corput's program of Ars Negligendi: when we recognize a system as being stable, we may as well solve a simpler system, neglecting all terms in the coefficient matrix contained in its biggest neutrix.

There is an extensive literature on error analysis for the Gaussian and Gauss-Jordan elimination procedures, see e.g. [25], [19], [12] and [18], [10], which contain many more references. Often the approach is asymptotic, as a function of the number of variables m. Some key notions are the growth factor

ρ ≡ max_{i,j,k} |a^{(k)}_{ij}| / max_{i,j} |a_{ij}|,

where k ≤ m and [a^{(k)}_{ij}] is the k-th intermediate matrix, and the condition number in the form of the product of norms cond(A) ≡ ‖A‖·‖A⁻¹‖.
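For readers who wish to experiment, a minimal Python sketch (ours, not taken from the references above) computes the growth factor of plain Gaussian elimination and the condition number for a concrete real matrix; the matrix chosen is, for illustration, a representative matrix of Example 3.1 below with ε = 0.002.

    import numpy as np

    def growth_factor(A):
        # rho = max_{i,j,k} |a^(k)_ij| / max_{i,j} |a_ij| over the
        # intermediate matrices of Gaussian elimination without pivoting
        A = A.astype(float).copy()
        n = A.shape[0]
        base = peak = np.abs(A).max()
        for k in range(n - 1):
            for i in range(k + 1, n):
                A[i, k:] -= (A[i, k] / A[k, k]) * A[k, k:]
            peak = max(peak, np.abs(A).max())
        return peak / base

    P = np.array([[1.0, 1.0, 1.0],
                  [1.0, -0.5, -0.5],
                  [0.001, 0.5, 1.0]])
    rho = growth_factor(P)
    cond = np.linalg.norm(P, 2) * np.linalg.norm(np.linalg.inv(P), 2)
    print(rho, cond)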

Here we choose a non-asymptotic approach, taking m standard. The principal tools in our setting are explicit formulas for the elements of the transition matrices [26], [23] and estimates of determinants and their principal minors; this seems somewhat natural, since by Cramer's Rule the solution of linear systems is stated in the form of quotients of determinants, and the Gauss-Jordan operations are carried out with quotients of minors. We point out that there exists a relationship between the orders of magnitude of a determinant and its principal minors, see Subsection 4.1.

This article has the following structure. Section 2 recalls some basic properties of nonstandard analysis, and of neutrices and external numbers. Also some notions and notations are given for the Gauss-Jordan operations, matrices with external numbers and systems of linear equations with external numbers (flexible systems). We define the notion of stability, and formulate two principal theorems, the first stating that stability is respected by the Gauss-Jordan operations, and the second indicating the solution sets of flexible systems. Section 3 presents examples illustrating the principal theorems and the role of their conditions. Section 4 recalls useful properties of the calculus of external numbers and the explicit expressions for the elements of the transition matrices. In Section 5 the impact of each step of the Gauss-Jordan procedure on the size of the neutrices is shown. These results and the generalization of Cramer's Rule proved in Section 6 allow us to prove the Main Theorem in Section 7. In Section 8 we define equivalent systems, having the same solutions, and show that in the case of stability a given system may be substituted by a simpler equivalent system; this is illustrated numerically.

2. Backgrounds and main theorems

We start with some background on Nonstandard Analysis in Subsection 2.1. In Subsection 2.2 we recall the notions of neutrix and of external number used to model the imprecisions. In Subsection 2.3 we introduce some notions and notations with respect to the Gauss-Jordan operations, which we will effectuate in the form of matrix multiplications. Subsection 2.4 contains notions and notations with respect to matrices and matrix operations. In


Subsection 2.5 we recall the definition of flexible systems of linear equations, with a slight modification, and introduce a notion of stability. In Subsection 2.6 we state the main theorems, the first saying that the Gauss-Jordan elimination procedure transforms a stable system into a stable system, and the second saying that stable systems may be solved both by Cramer's rule and Gauss-Jordan elimination, leading to equal solutions.

2.1. Nonstandard Analysis

We adopt the axiomatic form of nonstandard analysis, Internal Set Theory IST of [17]; an important feature is that, next to the standard numbers, infinitesimals and infinitely large numbers are already present within the ordinary set of real numbers R. We use only bounded formulas, and then neutrices and external numbers are well-defined external sets in the extension HST of a bounded form of IST given by Kanovei and Reeken in [14]. For introductions to IST we refer to e.g. [3], [2] or [16] and for introductions to external numbers and illustrative examples we refer to [15], [6] or [7]; the latter contains an introduction to a weak form of nonstandard analysis sufficient for a practical understanding of our approach. An important tool is External Induction, which permits induction for all IST-formulas over the standard natural numbers.

A real number is limited if it is bounded in absolute value by a standard natural number, and real numbers larger in absolute value than all limited numbers are called unlimited. Their reciprocals, together with 0, are called infinitesimal. Appreciable numbers are limited, but not infinitesimal. The set of limited numbers is denoted by £, the set of infinitesimals by ⊘, the set of positive unlimited numbers by ⧸∞ and the set of positive appreciable numbers by @; these sets are all external.

2.2. Neutrices and external numbers

Remark 2.1. Throughout this article we use the symbol ⊆ for inclusion and ⊂ for strict inclusion.

Definition 2.2. A (scalar) neutrix is an additive convex subgroup of R. An external number is the Minkowski-sum of a real number and a neutrix.

So each external number has the form α = a + A = {a + x | x ∈ A}, where A is called the neutrix part of α, denoted by N(α), and a ∈ R is called a representative of α.


In classical analysis the only neutrices are {0} and R, but in Nonstandard Analysis there are many more neutrices, all external sets. Examples are ⊘ and £, for the sum of two infinitesimals is infinitesimal, and the sum of two limited numbers is limited. Let ε ∈ R be a positive infinitesimal. Other examples of neutrices are ε£, ε⊘,

M_ε ≡ ⋂_{st(n)∈N} [−ε^n, ε^n] = £ε^{⧸∞}  and  µ_ε ≡ ⋃_{st(n)∈N} [−e^{−1/(nε)}, e^{−1/(nε)}] = £e^{−@/ε};

as groups they are not isomorphic.

Identifying {a} and a, the real numbers are external numbers with N(α) = {0}. We call α zeroless if 0 ∉ α, and neutricial if α = N(α).

Let N be a neutrix. Clearly £N = N. An absorber of N is a real number a such that aN ⊂ N. No appreciable number is an absorber of any neutrix, and in the examples above the infinitesimal number ε is an absorber of £ and ⊘, but not of M_ε and µ_ε. Neutrices are ordered by inclusion, and if the neutrix A is contained in the neutrix B, we may write B = max{A, B}.

Notions such as limited, infinitesimal and absorber may be extended in a natural way to external numbers.

The collection of all neutrices is not an external set in the sense of [14], but a definable class, denoted by N. Also the external numbers form a class, denoted by E.

The rules for addition, subtraction, multiplication and division of external numbers of Definition 2.3 below are in line with the rules of informal error analysis [20]. Here they are defined formally as Minkowski operations on sets of real numbers.

Definition 2.3. Let a, b ∈ R, A, B be neutrices and α = a + A, β = b + B be external numbers.

1. α ± β = a ± b + A + B = a ± b + max{A, B}.

2. αβ = ab + Ab + Ba + AB = ab + max{aB, bA, AB}.

3. If α is zeroless, 1/α = 1/a + A/a².

If α or β is zeroless, in Definition 2.3.2 we may neglect the neutrix product AB. Definition 2.3.3 does not permit division by neutrices. However, division of neutrices is allowed in terms of division of groups.

Definition 2.4. Let A,B ∈ N . Then we define

A : B = {c ∈ R | cB ⊆ A}.
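Neutrices are external sets, so the rules above cannot be implemented literally; still, a crude finite-precision analogue, in which a neutrix is represented only by a nonnegative magnitude and max plays the role of the sum of two neutrices, may help to fix ideas. The following Python sketch of such an analogue is ours and serves only as an illustration of Definition 2.3, not as part of the theory.

    from dataclasses import dataclass

    @dataclass
    class Ext:
        a: float  # representative
        N: float  # magnitude standing in for the neutrix part N(alpha)

        def __add__(self, other):
            # alpha + beta = a + b + max{A, B}  (Definition 2.3.1)
            return Ext(self.a + other.a, max(self.N, other.N))

        def __mul__(self, other):
            # alpha.beta = ab + max{aB, bA, AB}  (Definition 2.3.2)
            return Ext(self.a * other.a,
                       max(abs(self.a) * other.N,
                           abs(other.a) * self.N,
                           self.N * other.N))

        def inv(self):
            # 1/alpha = 1/a + A/a^2, only for zeroless alpha (Definition 2.3.3)
            assert abs(self.a) > self.N, "alpha must be zeroless"
            return Ext(1.0 / self.a, self.N / self.a ** 2)

    eps = 1e-8
    x = Ext(1.0, eps)       # stands for 1 + A, with A of width eps
    y = Ext(-2.0, eps ** 2)
    print(x + y, x * y, x.inv())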


An order relation is given as follows.

Definition 2.5. Let α, β ∈ E. We define

α ≤ β ⇔ ∀a ∈ α∃b ∈ β(a ≤ b).

If α ∩ β = ∅ and α ≤ β, then ∀a ∈ α∀b ∈ β(a < b) and we write α < β.

The relation ≤ is indeed an order relation, and compatible with the operations, with some small adaptations [15], [7]. The inverse order relation is given by

α ≥ β ⇔ ∀a ∈ α∃b ∈ β(a ≥ b),

and α > β if ∀a ∈ α∀b ∈ β(a > b). Clearly α < β implies β > α. However, both ⊘ ≤ £ and ⊘ ≥ £ hold. External numbers α such that 0 ≤ α are called non-negative.

The absolute value of an external number α = a + A is defined by |α| = |a| + A. Notice that this definition does not depend on the choice of the representative of α.

By the close relation to the real numbers, practical calculations with external numbers tend to be quite straightforward; this may be verified on the examples of Section 3. Some care is needed with distributivity, see Subsection 4.1. A full list of axioms for the operations on the external numbers has been given in [6] and [7]. The resulting structure has been called a Completely Arithmetical Solid (CAS). A Completely Arithmetical Solid relates a completely regular commutative additive semigroup and a completely regular commutative multiplicative semigroup by the modified distributive law of Theorem 4.1; it has a total order relation with a generalized Dedekind property and contains two built-in models for the natural numbers. The results of the present article use only calculatory properties of nonstandard analysis, and could also have been presented in the setting of a CAS, in which case the neutrices, external numbers and the CAS itself are true sets.

2.3. Gauss-Jordan operations

The Gauss-Jordan operations will be effectuated by multiplications by elementary matrices. These matrices will have real coefficients. This reflects the common practice to use relatively simple numbers for these operations, and in this way more algebraic laws are respected. Below we give notations for the Gauss-Jordan procedure and the intermediate matrices.

We consider here only square matrices and denote by M_n(R) the set of all n × n matrices over the field R, with n ∈ N, n ≥ 1.


Definition 2.6. Let A = [a_ij]_{n×n} ∈ M_n(R) be non-singular. For every q with 1 ≤ q ≤ 2n, the Gauss-Jordan operation matrix G_q and the intermediate matrix A^{(q)} are defined as follows.

Let G_0 be the n × n identity matrix I_n and A^{(0)} = A. Assuming that G_{2k} and A^{(2k)} = [a^{(2k)}_{ij}]_{n×n} are defined for k < n, we also assume that a^{(2k)}_{k+1,k+1} ≠ 0. Then G_{2k+1} = [g^{(2k+1)}_{ij}]_{n×n}, where

g^{(2k+1)}_{ij} = 1 if i = j ≠ k+1,  0 if i ≠ j,  1/a^{(2k)}_{k+1,k+1} if i = j = k+1,   (1)

leading to

A^{(2k+1)} = G_{2k+1} A^{(2k)} ≡ [a^{(2k+1)}_{ij}]_{n×n},   (2)

and G_{2k+2} = [g^{(2k+2)}_{ij}]_{n×n}, where

g^{(2k+2)}_{ij} = 0 if j ∉ {i, k+1},  1 if j = i,  −a^{(2k+1)}_{i,k+1} if i ≠ k+1, j = k+1,   (3)

resulting in

A^{(2k+2)} = G_{2k+2} A^{(2k+1)} ≡ [a^{(2k+2)}_{ij}]_{n×n}.   (4)

The matrix of odd order G_{2k+1} corresponds to the multiplication of row k+1 of A^{(2k)} by 1/a^{(2k)}_{k+1,k+1}, and the matrix of even order G_{2k+2} corresponds to transforming the entries of column k+1 of A^{(2k+1)} into zero, except for the entry a^{(2k+1)}_{k+1,k+1} (= 1).
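A direct translation of Definition 2.6 into Python, for real matrices only, may clarify the bookkeeping; the sketch below is ours and simply builds the matrices of formulas (1) and (3) and checks that the procedure ends with the identity matrix.

    import numpy as np

    def gauss_jordan(A):
        # yields the intermediate matrices A^(1), ..., A^(2n) of Definition 2.6
        n = A.shape[0]
        Aq = A.astype(float).copy()
        for k in range(n):
            G_odd = np.eye(n)
            G_odd[k, k] = 1.0 / Aq[k, k]      # formula (1): divide row k+1 by its pivot
            Aq = G_odd @ Aq                   # formula (2)
            yield Aq
            G_even = np.eye(n)
            for i in range(n):
                if i != k:
                    G_even[i, k] = -Aq[i, k]  # formula (3): annihilate column k+1
            Aq = G_even @ Aq                  # formula (4)
            yield Aq

    P = np.array([[1.0, 1.0, 1.0],
                  [1.0, -0.5, -0.5],
                  [0.001, 0.5, 1.0]])
    *_, final = gauss_jordan(P)
    assert np.allclose(final, np.eye(3))      # A^(2n) = I_n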

Up to changing rows and columns we may always assume that the pivots a^{(2k)}_{k+1,k+1} are non-zero. In fact they may always be chosen to be maximal, which is numerically desirable. The properties in question are a consequence of the next definition and propositions. We introduce first a notation for minors, taken from [9].

Notation 2.7. Let A ∈ M_n(R). For each k ∈ N such that 1 ≤ k ≤ n, let 1 ≤ i_1 < · · · < i_k ≤ n and 1 ≤ j_1 < · · · < j_k ≤ n.

1. We denote the k × k submatrix of A consisting of the rows with indices {i_1, . . . , i_k} and columns with indices {j_1, . . . , j_k} by A^{i_1...i_k}_{j_1...j_k}.

2. We denote the corresponding k × k minor by

m^{i_1...i_k}_{j_1...j_k} = det(A^{i_1...i_k}_{j_1...j_k}).   (5)

3. For 1 ≤ k ≤ n we may denote the principal minor of order k by m_k = m^{1···k}_{1···k}. We define formally m_0 = 1.

Definition 2.8. Assume A ∈ M_n(R). Then A is called properly arranged if |a_ij| ≤ |a_11| for 1 ≤ i ≤ n and 1 ≤ j ≤ n, and |m^{1···k i}_{1···k j}| ≤ |m_{k+1}| for every k such that 1 ≤ k ≤ n−1, whenever k+1 ≤ i ≤ n and k+1 ≤ j ≤ n. We say that A is diagonally eliminable if m_k ≠ 0 for 1 ≤ k ≤ n.

Proposition 2.9. Let n ≥ 1. Let A = [a_ij]_{n×n} ∈ M_n(R). By changing rows and columns if necessary, A can be properly arranged.

The proof of this proposition is straightforward. If A is non-singular, by Proposition 2.10 the matrix is automatically diagonally eliminable, so we assume without restriction of generality that this is always the case.

Proposition 2.10. [23] Let n ≥ 1. Let A = [a_ij]_{n×n} ∈ M_n(R) be non-singular and properly arranged. Then A is diagonally eliminable.

Definition 2.11. Let n ≥ 1. Let A = [a_ij]_{n×n} ∈ M_n(R) be non-singular and properly arranged. Then we call the sequence A, A^{(1)}, . . . , A^{(2n)} the Gauss-Jordan procedure and we write G = G_{2n}(G_{2n−1} · · · (G_2 G_1)).

2.4. Matrices with external numbers

We denote by M_{m,n}(E) the class of all m × n matrices

A = [ α_11 α_12 · · · α_1n
       ⋮    ⋮   ⋱    ⋮
      α_m1 α_m2 · · · α_mn ],   (6)

where α_ij = a_ij + A_ij ∈ E for 1 ≤ i ≤ m, 1 ≤ j ≤ n; we always suppose that m, n ∈ N, m, n ≥ 1 are standard. The matrix A is called an external matrix and we use the common notation A = [α_ij]_{m×n}. A matrix A ∈ M_{m,n}(E) is said to be neutricial if all of its entries are neutrices. With respect to (6) the matrix P = [a_ij]_{m×n} ∈ M_{m,n}(R) is called a representative matrix and the matrix A = [A_ij]_{m×n} the associated neutricial matrix. If m = n we may write M_n(E) instead of M_{m,n}(E). A matrix A ∈ M_n(E) with representative matrix equal to the identity matrix I_n and associated neutricial matrix contained in [⊘]_{n×n} is called a near-identity matrix, and is denoted by I_n.

For A, B ∈ M_{m,n}(E) we write A ⊆ B if α_ij ⊆ β_ij for all i, j such that 1 ≤ i ≤ m, 1 ≤ j ≤ n.

Notation 2.12. Let A = [α_ij]_{n×n} ≡ [a_ij + A_ij]_{n×n} ∈ M_n(E). We define

|α| = max_{1≤i,j≤n} |α_ij|,  A = max_{1≤i,j≤n} A_ij.

Definition 2.13. An external matrix A is said to be limited if |α| ⊂ £, and reduced if |α| = α_11 and α_11 = 1 + A_11, with A_11 ⊆ ⊘, while all other entries have representatives which in absolute value are at most 1.

By the second part of Definition 2.13 a reduced external matrix always has a reduced representative matrix.

For A ∈ M_n(E), the determinant ∆ ≡ det(A) ≡ d + D is defined in the usual way through sums of signed products [13].

Definition 2.14. Let A ∈ M_n(E). Then A is called non-singular if ∆ is zeroless.

Observe that a representative matrix of a non-singular matrix A is always non-singular. It is not true in general that det(A) is equal to the set of determinants of representative matrices. This is shown in [22], which contains an overview of the calculus of matrices with external numbers and their determinants.

Let A = [α_ij]_{n×n} be an external matrix. The Gauss-Jordan operations will always be effectuated using the elements of a representative matrix P = [a_ij]_{n×n}. In particular the notions of properly arranged and diagonally eliminable are defined by reference to representative matrices.

Definition 2.15. Let A ∈ M_n(E) be reduced and non-singular.

1. We say that A is properly arranged if it has a properly arranged representative matrix P. In this case we say that A is properly arranged with respect to P.

2. If A has a diagonally eliminable representative matrix P, we say that A is diagonally eliminable with respect to P.


Because A has a reduced representative matrix, by Proposition 2.9 we may assume without restriction of generality that a properly arranged representative matrix P is reduced. The matrix P is non-singular, for it is a representative matrix of the non-singular matrix A. Hence P is diagonally eliminable by Proposition 2.10.

Definition 2.16 extends the notions of Gauss-Jordan operations matrix and intermediate matrix to matrices of external numbers.

Definition 2.16. Let A ∈ M_n(E) be diagonally eliminable with respect to a representative matrix P. For 1 ≤ q ≤ 2n we denote the q-th Gauss-Jordan operations matrix by G^P_q and we write G^P = G^P_{2n}(G^P_{2n−1} · · · (G^P_2 G^P_1)).

We will see that under the condition of stability of Definition 2.17 below, the result of the Gauss-Jordan procedure does not depend on the choice of the representative matrix and we may simply write G^P = G. We may also write G^P_q = G_q for 1 ≤ q ≤ 2n if there is no ambiguity on the representative matrix P. Then we also adopt the notation of Definition 2.6 for the intermediate matrices, i.e. we have

G^P_q(G^P_{q−1} · · · (G^P_1 A)) = G_q(G_{q−1} · · · (G_1 A)) ≡ A^{(q)} ≡ [α^{(q)}_ij]_{n×n} ≡ [a^{(q)}_ij + A^{(q)}_ij]_{n×n} = P^{(q)} + [A^{(q)}_ij]_{n×n}.   (7)

The Gauss-Jordan procedure applied to the matrix A is the sequence A, A^{(1)}, . . . , A^{(2n)}. We will see that for stable matrices the last matrix is a near-identity matrix.

We recall the notion of relative uncertainty for matrices from [13], and use it to define stable matrices.

Definition 2.17. Let A = [α_ij]_{n×n} ∈ M_n(E) be a limited non-singular matrix. The relative uncertainty R(A) is defined by R(A) = A/∆. The matrix A is called stable if R(A) ⊆ ⊘.

The biggest neutrix A occurring in a limited matrix A is always contained in ⊘, but if ∆ is infinitesimal, for the matrix to be stable, the entries need to be sharper.

2.5. Flexible systems and stability

We recall the definition of flexible systems of linear equations of [13] in a slightly modified form, and show equivalence with the earlier definition. For the particular case of square non-singular systems we define a notion of stability, implying that the Gauss-Jordan operations give rise to at most a moderate increase of imprecisions.

Definition 2.18. Let n ∈ N be standard and ξ_1, . . . , ξ_n be external numbers. Then ξ ≡ (ξ_1, . . . , ξ_n)^T is called an external vector. For 1 ≤ i ≤ n, let ξ_i = x_i + X_i. Then x ≡ (x_1, . . . , x_n)^T is called a representative vector and X ≡ (X_1, . . . , X_n)^T is called the associated neutricial vector, i.e. ξ = x + X.

Definition 2.19. A flexible system is a system of inclusions

α_11 x_1 + α_12 x_2 + · · · + α_1n x_n ⊆ β_1
   ⋮          ⋮        ⋱        ⋮          ⋮
α_m1 x_1 + α_m2 x_2 + · · · + α_mn x_n ⊆ β_m,   (8)

where m, n are standard natural numbers, x = (x_1, . . . , x_n) ∈ R^n and α_ij ≡ a_ij + A_ij and β_i ≡ b_i + B_i are external numbers for 1 ≤ i ≤ m and 1 ≤ j ≤ n. We denote the matrix [α_ij]_{m×n} by A, the representative matrix [a_ij]_{m×n} by P, the associated neutricial matrix [A_ij]_{m×n} by A and the external vector (β_1, . . . , β_m)^T by B. A vector x is called an admissible solution of the flexible system (8) if it satisfies the system.

The system (8) is equivalent to the inclusion Ax ⊆ B, and is usually written in the matrix form A|B.

In [13] and [21] flexible systems with variables in the form of external numbers have been considered, i.e. systems of the form

α_11 ξ_1 + α_12 ξ_2 + · · · + α_1n ξ_n ⊆ β_1
   ⋮          ⋮        ⋱        ⋮          ⋮
α_m1 ξ_1 + α_m2 ξ_2 + · · · + α_mn ξ_n ⊆ β_m,   (9)

with an admissible solution being an external vector ξ satisfying the system. The systems (9) and (8) are equivalent, because of Proposition 2.20.

Proposition 2.20. An external vector ξ is an admissible solution of (9) if and only if every representative x of ξ is an admissible solution of (8).

Proof. Let ξ = (ξ_1, . . . , ξ_n). It is obvious that if the inclusions of (9) are satisfied by (ξ_1, . . . , ξ_n), they are also satisfied by any representative vector x = (x_1, . . . , x_n).


Conversely, let 1 ≤ i ≤ m, and assume that α_i1 x_1 + · · · + α_in x_n ⊆ β_i whenever x_1 ∈ ξ_1, . . . , x_n ∈ ξ_n. Let t ∈ τ ≡ α_i1 ξ_1 + · · · + α_in ξ_n. It follows from the definition of the Minkowski operations that for all j with 1 ≤ j ≤ n there exist a_ij ∈ α_ij and x_j ∈ ξ_j such that t = a_i1 x_1 + · · · + a_in x_n. Then t ∈ β_i. Hence τ ⊆ β_i.

We conclude that the external vector ξ is an admissible solution of (9) if and only if all its representative vectors are.

In the present article we only study the system (8) with real variables, in the case m = n.

We now introduce some terminology; in particular we carry over some of the notions on matrices of Section 2.4 to systems of equations.

Definition 2.21. The system A|B is called reduced if A is reduced, homogeneous if B is a neutrix vector, upper homogeneous if |β| is a neutrix and uniform if the neutrices of the right-hand side B_i ≡ B are all the same. The system is called non-singular if A is non-singular, properly arranged respectively diagonally eliminable (with respect to a matrix of representatives P) if A is properly arranged respectively diagonally eliminable with respect to P.

We recall the following notion of relative uncertainty of external vectors from [13] and use it, together with the notion of relative uncertainty of matrices of Definition 2.17, to define stable systems.

Definition 2.22. Let B = (β_1, . . . , β_n)^T be an external vector. Let |β| = max_{1≤i≤n} |β_i| and B = min_{1≤i≤n} B_i.

If |β| is zeroless, its relative uncertainty R(B) is defined by

R(B) = B/|β|.

In the special case that |β| is a neutrix, we define

R(B) = B : |β|.

Observe that, whenever 1 ≤ i ≤ n, it holds that

R(B)β_i ⊆ B,   (10)

and that R(B) ⊆ ⊘ if the system is not upper homogeneous.


Definition 2.23. Let A ∈ M_n(E) be limited and non-singular. The system A|B is said to be stable if

1. A is stable.

2. R(A) ⊆ R(B).

3. ∆ is not an absorber of B.

Condition (2) expresses that in a sense the coefficient matrix is more precise than the right-hand member, and Condition (3) expresses that the determinant ∆, which of course must be non-zero, should not be too small. Condition (1), originating from Definition 2.17, expresses that the neutrices in the coefficient matrix are small with respect to the determinant. Note that if R(A) = A/∆ ⊃ ⊘ we have R(B) ⊇ R(A) ⊇ £, which means the system must be upper homogeneous. Then uniform systems, which form the principal class of systems under consideration (see Convention 2.25 below), are homogeneous, which is very restrictive.
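In a finite-precision analogue like the one sketched after Definition 2.4, the three conditions of Definition 2.23 can be mimicked by simple numerical tests. The sketch below is ours; the threshold tol is an arbitrary stand-in for "infinitesimal" and "absorber", notions that have no literal finite counterpart.

    import numpy as np

    def stability_report(P, b, Amag, Bmag, tol=1e-3):
        # P, b: representative matrix and right-hand side; Amag: matrix of
        # neutrix magnitudes of the coefficients; Bmag: the common neutrix
        # magnitude of the right-hand side (uniform system)
        det = abs(np.linalg.det(P))
        R_A = Amag.max() / det            # R(A) = A/Delta (Definition 2.17)
        R_B = Bmag / np.abs(b).max()      # R(B) = B/|beta| (Definition 2.22)
        return {"(1) R(A) 'infinitesimal'": R_A < tol,
                "(2) R(A) <= R(B)": R_A <= R_B,
                "(3) Delta not an 'absorber' of B": det > tol}

    eps = 1e-4                            # mimics Example 3.1 below
    P = np.array([[1, 1, 1], [1, -0.5, -0.5], [eps / 2, 0.5, 1.0]])
    b = np.array([1.0, -2.0, eps])
    print(stability_report(P, b, Amag=np.full((3, 3), eps ** 2), Bmag=eps))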

It will be seen that the notion of stability is respected by the steps of the Gauss-Jordan procedure, which lead to at most a moderate increase of the imprecisions in the coefficient matrix and leave the imprecision of the right-hand member invariant, finally resulting in the same imprecision for the solution.

Notation 2.24. Suppose that A ∈ M_n(E) and that the system A|B is non-singular, properly arranged with respect to a matrix of representatives P and uniform. We write [B] = [B, B, . . . , B]^T, B^{(0)} = B and [B]^{(0)} = [B]^{(−0)} = [B]. For 1 ≤ q ≤ 2n we write

B^{(q)} = G^P_q(G^P_{q−1}(· · · (G^P_1 B))) ≡ [β^{(q)}_1, . . . , β^{(q)}_n]^T,  |β^{(q)}| = max_{1≤i≤n} |β^{(q)}_i|,

[B]^{(q)} = G^P_q(G^P_{q−1}(· · · (G^P_1 [B]))).

With these notations we see that after q Gauss-Jordan operations the system becomes A^{(q)}|B^{(q)}, and for q = 2n the Gauss-Jordan procedure ends with G^P A|G^P B.


2.6. Solution sets and main results

We define solution sets in several ways. The Main Theorem states that under the conditions of stability they are all equal. We start with a theorem which is the principal tool for the proof of the Main Theorem, saying that the Gauss-Jordan operations respect the notion of stability. We will always suppose that Convention 2.25 holds.

Convention 2.25. From now on we always suppose that the system A|B is square, i.e. A ∈ M_n(E), and that the system is non-singular, reduced, properly arranged with respect to a reduced matrix of representatives P and uniform.

As for non-singular systems, only the condition of uniformity is restrictive. In the context of the Gauss-Jordan procedure the condition is essential, since the simple addition of equations may have the effect that the solution of the resulting system is no longer feasible for the original system, see Example 3.4. By transforming a system with different neutrices B_1, . . . , B_n in the right-hand side into the system with neutrices at the right-hand side equal to B, we get a uniform system, whose solutions are always feasible with respect to the original system.

The first theorem expresses that each Gauss-Jordan operation transforms a stable system into a stable system, and at the end we find a stable system with a coefficient matrix in the form of a near-identity matrix. As we will see, its solution is simply the right-hand member.

Theorem 2.26. Suppose that the flexible system A|B is properly arranged with respect to a representative matrix P and stable. Then

1. The intermediate system A^{(q)}|B^{(q)} is stable for all q such that 0 ≤ q ≤ 2n.

2. In particular G^P A|G^P B is stable, and G^P A is a near-identity matrix.

Definition 2.27. The solution S of A|B is the (external) set of all real admissible solutions. If the Minkowski product AS satisfies AS = B we call S exact.

We now define the Gauss-Jordan solution and the Cramer solution.


Definition 2.28. Assume the system A|B is properly arranged with respect to a representative matrix P of A. The Gauss-Jordan solution G_P of A|B with respect to P is defined by

G_P = {x ∈ R^n | (G^P(A))x ⊆ G^P(B)}.   (11)

If G_P does not depend on the choice of P, we simply call it the Gauss-Jordan solution, denoted by G.

Definition 2.29. Consider the system A|B. Let M_i be the matrix obtained from A by the substitution of the i-th column by the right-hand member B. Then the external vector

ξ = (det(M_1)/∆, . . . , det(M_n)/∆)^T   (12)

is called the Cramer-solution if every representative vector x satisfies A|B.

Theorem 2.30 (Main Theorem). Assume the system A|B is stable, and properly arranged with respect to a representative matrix P of A. Let S be its solution. Then

S = G = G^P(B) = (det(M_1)/∆, . . . , det(M_n)/∆)^T.   (13)

The solution of stable systems by Cramer's rule was shown in [13] for non-homogeneous systems.
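At the level of representatives the identity (13) is classical and easy to test numerically. The sketch below (ours) computes the Cramer representatives and compares them with an ordinary linear solve, for the representative matrix of Example 3.2 below with ε = 10⁻⁴; the external solution would add the neutrix parts on top of these representatives.

    import numpy as np

    def cramer(P, b):
        # representative Cramer solution x_i = det(M_i)/det(P), where M_i is
        # P with its i-th column replaced by b (cf. Definition 2.29)
        d = np.linalg.det(P)
        x = np.empty(len(b))
        for i in range(len(b)):
            M = P.copy()
            M[:, i] = b
            x[i] = np.linalg.det(M) / d
        return x

    eps = 1e-4
    P = np.array([[1.0, 1.0], [1.0, 1.0 - eps]])
    b = np.array([1.0, 2.0])
    x = cramer(P, b)
    assert np.allclose(x, np.linalg.solve(P, b))  # elimination gives the same result
    print(x)   # approximately (1/eps + 1, -1/eps), cf. Example 3.2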

3. Examples

We start with some straightforward applications of Theorem 2.30. Then we indicate some properties of flexible systems which are not shared by ordinary systems, and illustrate the role of the conditions of Theorem 2.30.

In Example 3.1 we first verify that the system is stable, then we show the Gauss-Jordan procedure in some detail, searching for neutrices instead of zeros, to see at the end that the right-hand side is indeed the solution. Observe that the solution is given in the form of truncated expansions.

Example 3.1. Consider the system

(1 + ε²⊘) x_1 + x_2 + (1 + ε³£) x_3 ⊆ 1 + ε⊘
(1 + ε³£) x_1 + (−1/2 + ε²⊘) x_2 − (1/2) x_3 ⊆ −2 + ε⊘
(ε/2 + ε³⊘) x_1 + (1/2) x_2 + (1 + ε²⊘) x_3 ⊆ ε + ε⊘,   (14)

where ε is a positive infinitesimal. Let A be its matrix of coefficients and B be the right-hand member. The matrix is reduced and non-singular, with ∆ = det A = −3/4 + ε²⊘ zeroless. Let

P = [ 1    1    1
      1   −1/2 −1/2
      ε/2  1/2  1  ].   (15)

Then P is a representative matrix of A, and is properly arranged. Indeed, formula (5) is obvious for k = 1, and is also satisfied for k = 2 with m_2 = m^{12}_{12} = −3/2, m^{12}_{13} = −3/2, m^{13}_{12} = 1/2 − ε/2, m^{13}_{13} = 1 − ε/2. As a consequence, A is properly arranged. Because R(A) = ε²⊘ ⊆ R(B) = ε⊘ and ∆B = (−3/4 + ε²⊘)ε⊘ = ε⊘ = B, the system is stable.

By Theorem 7.2 the solution may be obtained by the Gauss-Jordan procedure. It is given by

S ≡ (ξ_1, ξ_2, ξ_3)^T = (−1 + ε⊘, 4 − 3ε + ε⊘, −2 + 3ε + ε⊘)^T,   (16)

which we verify in detail. The second and third coordinates of S have the form of a truncated expansion. Also some expansions appear in the coefficients of the intermediate matrices. We get the following succession of stable systems:

A|B =
[ 1 + ε²⊘    1           1 + ε³£    | 1 + ε⊘  ]
[ 1 + ε³£    −1/2 + ε²⊘  −1/2       | −2 + ε⊘ ]
[ ε/2 + ε³⊘  1/2         1 + ε²⊘    | ε + ε⊘  ]

−→ (L2 − L1, L3 − (ε/2)L1)

[ 1 + ε²⊘  1           1 + ε³£          | 1 + ε⊘      ]
[ ε²⊘      −3/2 + ε²⊘  −3/2 + ε³£       | −3 + ε⊘     ]
[ ε³⊘      1/2 − ε/2   1 − ε/2 + ε²⊘    | ε/2 + ε⊘    ]

−→ (−(2/3)L2)

[ 1 + ε²⊘  1           1 + ε³£          | 1 + ε⊘      ]
[ ε²⊘      1 + ε²⊘     1 + ε³£          | 2 + ε⊘      ]
[ ε³⊘      1/2 − ε/2   1 − ε/2 + ε²⊘    | ε/2 + ε⊘    ]

−→ (L1 − L2, L3 − ((1−ε)/2)L2)

[ 1 + ε²⊘  ε²⊘       ε³£          | −1 + ε⊘             ]
[ ε²⊘      1 + ε²⊘   1 + ε³£      | 2 + ε⊘              ]
[ ε²⊘      ε²⊘       1/2 + ε²⊘    | −1 + (3/2)ε + ε⊘    ]

−→ (2L3)

[ 1 + ε²⊘  ε²⊘       ε³£        | −1 + ε⊘        ]
[ ε²⊘      1 + ε²⊘   1 + ε³£    | 2 + ε⊘         ]
[ ε²⊘      ε²⊘       1 + ε²⊘    | −2 + 3ε + ε⊘   ]

−→ (L2 − L3)

[ 1 + ε²⊘  ε²⊘       ε³£        | −1 + ε⊘        ]
[ ε²⊘      1 + ε²⊘   ε²⊘        | 4 − 3ε + ε⊘    ]
[ ε²⊘      ε²⊘       1 + ε²⊘    | −2 + 3ε + ε⊘   ]  ≡ I_3|S.

The system I_3|S being stable, by Theorem 2.30 the external vector S solves the latter system. It is easy to verify this by substitution, and it is straightforward to verify that Cramer's Rule yields the same solution.
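The representative part of this computation can be confirmed numerically; in the short sketch below (ours) the representative of (16) is substituted into the representative system, and the residual vanishes up to floating-point rounding. The neutrix parts, of course, are invisible at this level.

    import numpy as np

    eps = 1e-4
    P = np.array([[1, 1, 1],
                  [1, -0.5, -0.5],
                  [eps / 2, 0.5, 1.0]])
    b = np.array([1.0, -2.0, eps])
    x = np.array([-1.0, 4 - 3 * eps, -2 + 3 * eps])  # representatives of (16)
    print(np.abs(P @ x - b).max())                   # ~1e-16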

The next example deals with a system having a coefficient matrix with an infinitesimal determinant. Yet it is not an absorber of the neutrix occurring in the right-hand member. Also the remaining conditions for stability hold, so the Gauss-Jordan procedure still works. Again the solution will be an external vector with coordinates in the form of a truncated expansion, now starting with a "singular", i.e. unlimited, term.

Example 3.2. Let ε be a positive infinitesimal. We will use the microhalos M_ε = £ε^{⧸∞} and M_{ε_1} = £ε_1^{⧸∞}, where ε_1 = ε^ω with ω ∈ N unlimited. Consider the system

(1 + £ε_1^{⧸∞}) x + y ⊆ 1 + £ε^{⧸∞}
x + (1 − ε + £ε^{⧸∞}) y ⊆ 2 + £ε^{⧸∞}.

Let

A = [ 1 + £ε_1^{⧸∞}   1
      1               1 − ε + £ε^{⧸∞} ].

Then ∆ = det A = −ε + £ε^{⧸∞} is zeroless.

One easily verifies that the system is stable. Applying the Gauss-Jordan procedure we obtain

A|B = [ 1 + £ε_1^{⧸∞}   1                  | 1 + £ε^{⧸∞} ]
      [ 1               1 − ε + £ε^{⧸∞}    | 2 + £ε^{⧸∞} ]

−→ (L2 − L1)

[ 1 + £ε_1^{⧸∞}   1               | 1 + £ε^{⧸∞} ]
[ £ε_1^{⧸∞}       −ε + £ε^{⧸∞}    | 1 + £ε^{⧸∞} ]

−→ (−(1/ε)L2)

[ 1 + £ε_1^{⧸∞}   1             | 1 + £ε^{⧸∞}    ]
[ £ε_1^{⧸∞}       1 + £ε^{⧸∞}   | −1/ε + £ε^{⧸∞} ]

−→ (L1 − L2)

[ 1 + £ε_1^{⧸∞}   £ε^{⧸∞}       | 1/ε + 1 + £ε^{⧸∞} ]
[ £ε_1^{⧸∞}       1 + £ε^{⧸∞}   | −1/ε + £ε^{⧸∞}    ].


By Theorem 2.30 the vector ξ = (1/ε + 1 + £ε^{⧸∞}, −1/ε + £ε^{⧸∞})^T is the solution of the system.

The following two examples show that flexible systems do not need to have exact solutions, and that the solution of a non-uniform system does not need to be an external vector; then it is also possible that the Gauss-Jordan operations lead to a non-feasible solution, i.e. a vector which does not satisfy the original system.

Example 3.3. The simple equation ⊘x ⊆ £ does not have an exact solution. Indeed, it is satisfied by all limited numbers, but not by any unlimited number. Hence S = £, with ⊘£ = ⊘ ⊂ £.

Example 3.4. Consider the flexible system

(1 + ⊘)x + (1 + ε⊘)y ⊆ ⊘
(1 + ε£)x − (1 + ε£)y ⊆ ε£.   (17)

As shown in [21] the solution is given by

N = ⊘ (1/2, 1/2)^T + ε£ (1/2, −1/2)^T,   (18)

which is not an external (neutricial) vector, though it is the result of applying a rotation to the neutricial vector (⊘, ε£).

To show that the Gauss-Jordan operations may not respect feasibility, we subtract the first equation from the second. Then we get

(1 + ⊘)x + (1 + ε⊘)y ⊆ ⊘
⊘x − 2(1 + ε£)y ⊆ ⊘.

The obvious solution is the neutricial vector K ≡ (⊘, ⊘)^T, but due to the fact that one neutrix at the right-hand side has been increased, it no longer satisfies the original system. Indeed N ⊂ K; for instance the representative vector (0, √ε) does not satisfy the second equation of (17).

We now turn to the stability conditions.

Example 3.5 shows that, without the condition stating that the relative uncertainty of the coefficient matrix must be smaller than the relative uncertainty of the constant term, a non-singular flexible system may have no solution at all.


Example 3.5. Let ε ≃ 0, ε ≠ 0. The equation (1 + ⊘)x ⊆ 1 + ε£ has no solution.

The next example shows that if the determinant of the coefficient matrix is an absorber of the neutrix of the right-hand side, the solution given by the Gauss-Jordan procedure may not be feasible.

Example 3.6. Consider the system

x_1 + x_2 ⊆ 1 + ⊘
εx_2 ⊆ ⊘,

with ε ≃ 0, ε ≠ 0. The determinant of the coefficient matrix ∆ = ε is an absorber of the neutrix of the right-hand side B = ⊘. Applying Gauss-Jordan elimination we blow B up to ⊘/ε, and obtain at the right-hand side the neutrix vector (⊘/ε, ⊘/ε)^T, which is obviously not admissible.

The stability conditions are stated in terms of bounds, and as may be expected, they are not minimal. This is illustrated by the final example.

Example 3.7. Consider the following system

x_1 ⊆ 1 + ⊘
εx_2 ⊆ ⊘,

with ε ≃ 0, ε ≠ 0. As in Example 3.6 the determinant ∆ = ε is an absorber of B = ⊘. Gauss-Jordan elimination only consists in multiplying the second inclusion by 1/ε, and leads to G^P(B) = (1 + ⊘, ⊘/ε)^T, which is indeed the solution of the system.

4. Preliminary results

In Subsection 4.1 we recall some useful properties of the calculus with external numbers and matrices, and Subsection 4.2 contains explicit expressions for entries of intermediate matrices of the Gauss-Jordan elimination procedure and the Gauss-Jordan operation matrices.


4.1. On the calculus of external numbers and matrices

We will consider the modified distributive law for external numbers, some additional properties, and properties of matrix multiplication, in particular modified laws for distributivity and also associativity. We end with some properties concerning the order of magnitude of determinants and minors.

The distributive law holds for the external numbers under fairly general conditions, but in particular it may not hold when multiplying two almost opposite numbers. Unfortunately, this is common practice in the context of Gauss-Jordan operations, for we search for zeros or neutrices by annihilating. However, subdistributivity always holds, and this does not affect the inclusions we work with.

Theorem 4.1 ([4], Distributivity with correction term). Let α, β, γ = c + C be external numbers. Then

αγ + βγ = (α + β)γ + Cα + Cβ.   (19)

Because a neutrix term is added in the right-hand side of (19), we always have the following form of subdistributivity.

Corollary 4.2 (Subdistributivity). Let α, β, γ be external numbers. Then (α + β)γ ⊆ αγ + βγ.

Theorem 4.4 below gives conditions such that the common distributive law holds, i.e. the correction terms figuring in (19) may be neglected. To this end we recall the notions of relative uncertainty and oppositeness.

Definition 4.3 ([15, 4]). Let α = a + A and β = b + B be external numbers and C be a neutrix.

1. The relative uncertainty R(α) of α is defined by R(α) = A/α if α is zeroless, otherwise R(α) = R.

2. α and β are opposite with respect to C if (α + β)C ⊂ max(αC, βC).

Theorem 4.4. Let α, β, γ = c + C be external numbers. Then αγ + βγ = (α + β)γ if and only if R(γ) ⊆ max(R(α), R(β)), or α and β are not opposite with respect to C.
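Ordinary interval arithmetic shows the same subdistributivity phenomenon as Corollary 4.2, and a toy computation may make the role of almost opposite numbers concrete. The model below is ours and is not the external-number calculus itself, only an analogy.

    # toy closed intervals (lo, hi) with Minkowski sum and product
    def add(x, y):
        return (x[0] + y[0], x[1] + y[1])

    def mul(x, y):
        ps = [a * b for a in x for b in y]
        return (min(ps), max(ps))

    alpha = (1.0, 1.1)    # alpha and beta are almost opposite
    beta = (-1.1, -1.0)
    gamma = (2.0, 2.1)

    lhs = mul(add(alpha, beta), gamma)               # (alpha + beta).gamma
    rhs = add(mul(alpha, gamma), mul(beta, gamma))   # alpha.gamma + beta.gamma
    print(lhs, rhs)   # about (-0.21, 0.21), strictly inside (-0.31, 0.31)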


Simple and important special cases are given by

(x + N)β = xβ + Nβ and x(α + β) = xα + xβ,

whenever x ∈ R, N ∈ N and α, β ∈ E. The next proposition lists some useful general properties of external numbers.

Proposition 4.5. [15] Let α = a + A and γ be zeroless external numbers, B be a neutrix and n ∈ N be standard. Then

1. αB = aB and B/α = B/a.

2. N(1/α) = N(α)/α².

3. R(α), R(1/α) ⊆ ⊘.

4. α ∩ ⊘α = ∅.

5. N(αγ) = αN(γ) + N(α)γ.

6. N((a + A)^n) = a^{n−1}A.

7. If α is limited and is not an absorber of B, then αB = B/α = B.

Below we give a brief account of some relevant properties of matrices over external numbers. We refer to [22] for more details, proofs and examples.

The following general property of inclusion is an immediate consequence of the fact that, given external numbers α, β, γ such that α ⊆ β, one has γα ⊆ γβ.

Proposition 4.6. Let A ∈ M_{m,n}(E) and B, C ∈ M_{n,p}(E). If B ⊆ C then AB ⊆ AC.

Because subdistributivity holds for external numbers, it also holds for the calculus of matrices of external numbers. The next proposition gives a condition for distributivity.

Proposition 4.7. Let A = [α_ij]_{m×n} ∈ M_{m,n}(E) and B = [β_ij]_{n×p}, C = [γ_ij]_{n×p} ∈ M_{n,p}(E). If

max_{1≤i≤m, 1≤j≤n} R(α_ij) ≤ min_{1≤i≤n, 1≤j≤p} max{R(β_ij), R(γ_ij)},

then A(B + C) = AB + AC.


For subassociativity to hold conditions are needed, and associativity holds under stronger conditions.

Proposition 4.8. Let A ∈ M_{m,n}(E), B ∈ M_{n,p}(E) and C ∈ M_{p,q}(E). Then

1. (AB)C ⊆ A(BC) if A is a real matrix or B, C are both non-negative.

2. A(BC) ⊆ (AB)C if C is a real matrix or A, B are both non-negative.

Proposition 4.9. Let A ∈ M_{m,n}(E), B ∈ M_{n,p}(E) and C ∈ M_{p,q}(E). Then A(BC) = (AB)C if one of the following conditions is satisfied:

1. A and C are both real matrices.

2. B is a neutricial matrix.

3. A,B, C are all non-negative matrices.

In the final part we relate some orders of magnitude of determinants of limited and reduced matrices and their minors.

To start with, it is easily proved that the determinant of a limited matrix is limited, as are its minors. The neutrix of these determinants does not exceed the biggest neutrix of the entries.

Proposition 4.10. Let n ∈ N be standard and A ∈ M_n(E) be limited. Then there exists a limited number L > 0 such that whenever k ∈ {1, . . . , n} and 1 ≤ i_1 < · · · < i_k ≤ n, 1 ≤ j_1 < · · · < j_k ≤ n,

|m^{i_1...i_k}_{j_1...j_k}| ≤ L.

In particular |∆| ≤ L. Moreover N(∆) ⊆ A.

The last property plays an important part in our approach to error analysis, and says that at least one of the minors ∆_{i,j}, obtained by eliminating row i and column j from the matrix A for some i, j with 1 ≤ i, j ≤ n, is of the same order of magnitude as the determinant. It is a consequence of the fact that the Laplace expansion holds with inclusions.

Proposition 4.11. Let A ∈ M_n(E) be a reduced square matrix of order n. Suppose that ∆ is zeroless. Then for each j ∈ {1, . . . , n} there exists i ∈ {1, . . . , n} such that |∆_{i,j}| > ⊘∆.


4.2. Explicit expressions for the Gauss-Jordan operations

We will use explicit expressions for the Gauss-Jordan operation matrices and the intermediate matrices. These are given in terms of quotients of minors, for which we recall the convenient notation of [9]. Proofs can be found in [26], in a different notation, and in [23]. In particular a pivot is always given in the form of a quotient of principal minors, of which the order of magnitude can be determined with the methods of Subsection 4.1. At the end we consider the inverse Gauss-Jordan procedure.

Theorem 4.12 (Explicit expressions for Gauss-Jordan operations). Let A = [a_ij]_{n×n} ∈ M_n(R) be diagonally eliminable. For k < n the Gaussian elimination matrix of odd order G_{2k+1} = [g^{(2k+1)}_{ij}]_{n×n} satisfies

g^{(2k+1)}_{ij} = 1 if i = j ≠ k+1,  0 if i ≠ j,  m_k/m_{k+1} if i = j = k+1,   (20)

and the Gaussian elimination matrix of even order G_{2k+2} = [g^{(2k+2)}_{ij}]_{n×n} satisfies

g^{(2k+2)}_{ij} =
  0 if j ∉ {i, k+1}
  1 if j = i
  (−1)^{k+i+1} m^{1...k}_{1...i−1 i+1...k+1} / m_k if 1 ≤ i ≤ k, j = k+1
  −m^{1...k i}_{1...k j} / m_k if k+1 < i ≤ n, j = k+1.

Theorem 4.13 (Explicit expressions for Gauss-Jordan elimination). Let A = [a_ij]_{n×n} ∈ M_n(R) be diagonally eliminable. Let k < n. Then

A^{(2k)} = [ 1 · · · 0  a^{(2k)}_{1,k+1}   · · · a^{(2k)}_{1n}
             ⋮  ⋱  ⋮        ⋮             ⋱       ⋮
             0 · · · 1  a^{(2k)}_{k,k+1}   · · · a^{(2k)}_{kn}
             0 · · · 0  a^{(2k)}_{k+1,k+1} · · · a^{(2k)}_{k+1,n}
             ⋮  ⋱  ⋮        ⋮             ⋱       ⋮
             0 · · · 0  a^{(2k)}_{n,k+1}   · · · a^{(2k)}_{nn} ],

where

a^{(2k)}_{ij} = (−1)^{k+i} m^{1...k}_{1...i−1 i+1...k j} / m_k if 1 ≤ i ≤ k, k+1 ≤ j ≤ n,
a^{(2k)}_{ij} = m^{1...k i}_{1...k j} / m_k if k+1 ≤ i ≤ n, k+1 ≤ j ≤ n.   (21)

In particular A^{(2n)} = I_n.
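The second case of formula (21) is easy to test numerically; the following Python check (ours) compares the lower block of A^{(2k)}, produced by the elementary-matrix procedure of Definition 2.6, with the quotients of minors, for a strictly diagonally dominant (hence diagonally eliminable) random matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 4, 2
    A = rng.uniform(-0.9, 0.9, (n, n)) + n * np.eye(n)

    def minor(M, rows, cols):
        return np.linalg.det(M[np.ix_(rows, cols)])

    # Gauss-Jordan steps of Definition 2.6 applied to the first k columns
    B = A.copy()
    for j in range(k):
        B[j] /= B[j, j]
        for i in range(n):
            if i != j:
                B[i] -= B[i, j] * B[j]

    m_k = minor(A, list(range(k)), list(range(k)))
    for i in range(k, n):
        for j in range(k, n):
            # a^(2k)_ij = m^{1..k i}_{1..k j} / m_k  (lower block of (21))
            expected = minor(A, list(range(k)) + [i], list(range(k)) + [j]) / m_k
            assert np.isclose(B[i, j], expected)
    print("lower block of formula (21) confirmed")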

If A is diagonally eliminable, the inverses of the matrices of the Gauss-Jordan procedure G are well-defined, as follows. For odd indices we have G^{−1}_{2k+1} = [(g^{−1}_{ij})^{(2k+1)}], with

(g^{−1}_{ij})^{(2k+1)} = 1 if i = j ≠ k+1,  0 if i ≠ j,  m_{k+1}/m_k if i = j = k+1,

and for even indices G^{−1}_{2k+2} = [(g^{−1}_{ij})^{(2k+2)}]_{n×n}, with

(g^{−1}_{ij})^{(2k+2)} =
  0 if j ∉ {i, k+1}
  1 if j = i
  (−1)^{k+i} m^{1...k}_{1...i−1 i+1...k+1} / m_k if 1 ≤ i ≤ k, j = k+1
  m^{1...k i}_{1...k j} / m_k if k+1 < i ≤ n, j = k+1.

The sequence A^{(2n)}, G^{−1}_{2n}(A^{(2n)}), . . . , G^{−1}_1(· · · (G^{−1}_{2n} A^{(2n)})) = A is called the inverse Gauss-Jordan procedure.

5. Stability and Gauss-Jordan operations

For reduced and properly arranged matrices, when applying the Gauss-Jordan operations to the intermediate matrices, only a moderate growth is possible for the elements and their neutrix parts. If the determinant is not an absorber of the neutrix part of the right-hand member, this neutrix even remains constant. If in addition the flexible system is stable, a stable matrix is transformed into a stable matrix, while the relative uncertainty of the intermediate matrices always remains less than the relative uncertainty of the right-hand members. Together this leads to a proof of Theorem 2.26 on the preservation of stability under the Gauss-Jordan operations, with the final matrix being a near-identity matrix.

The principal tools in establishing the above properties of orders of magnitude and stability are bounds on the order of magnitude of minors. Indeed, because the pivots are quotients of minors, they have a direct influence on the order of magnitude of the entries and neutrix parts of the intermediate matrices and the right-hand members.

Remark 5.1. We recall from the previous section that a reduced matrix A always has a reduced representative matrix, and from now on we always suppose that a representative matrix is reduced.

Proposition 5.2 shows that the Gauss-Jordan operations do not lead to an unlimited growth of the elements of the intermediate matrices.

Proposition 5.2. Let A = [α_ij]_{n×n} ∈ M_n(E) be a reduced non-singular matrix, such that it admits a properly arranged representative matrix P. Then a^{(q)}_{ij} is limited whenever 1 ≤ q ≤ 2n and 1 ≤ i, j ≤ n.

Proof. We apply external induction. Because P is reduced, it holds that |a_ij| ≤ 1 for 1 ≤ i, j ≤ n and, since a^{(1)}_{ij} = a_ij, the same is true for |a^{(1)}_{ij}|. It follows that |a^{(2)}_{1j}| = |a_{1j}| ≤ 1 for 1 ≤ j ≤ n and |a^{(2)}_{ij}| = |a_ij − a_{i1}a_{1j}| ≤ |a_ij| + |a_{i1}||a_{1j}| ≤ 2 for 2 ≤ i ≤ n, 1 ≤ j ≤ n. Hence a^{(2)}_{ij} is limited for 1 ≤ i, j ≤ n. As for the induction step, let k ≤ n−1 and suppose that a^{(q)}_{ij} is limited for q ≤ 2k and 1 ≤ i, j ≤ n. Because the j-th column of P^{(2k+1)} is a unit vector for 1 ≤ j ≤ k, the entries of these columns are limited. For 1 ≤ i ≤ n, k+1 ≤ j ≤ n one has

a^{(2k+1)}_{ij} = a^{(2k)}_{ij} if i ≠ k+1,  m^{1...k i}_{1...k j}/m_{k+1} if i = k+1.

So a^{(2k+1)}_{ij} = a^{(2k)}_{ij} is limited for i ≠ k+1 and k+1 ≤ j ≤ n by the induction hypothesis, and because P is properly arranged, also for i = k+1 and k+1 ≤ j ≤ n, since |a^{(2k+1)}_{k+1,j}| ≤ |m^{1...k k+1}_{1...k j}/m_{k+1}| ≤ 1. Combining, we see that a^{(2k+1)}_{ij} is limited for 1 ≤ i, j ≤ n.

As for P^{(2k+2)}, in addition to the first k columns, also the (k+1)-th column is a unit vector, i.e. has limited components. Because the elements of P^{(2k+1)} are limited we derive that a^{(2k+2)}_{k+1,j} = a^{(2k+1)}_{k+1,j} is limited for k+2 ≤ j ≤ n, and a^{(2k+2)}_{ij} = a^{(2k+1)}_{ij} − a^{(2k+1)}_{i,k+1} a^{(2k+1)}_{k+1,j} is limited for 1 ≤ i ≤ n, i ≠ k+1 and k+1 ≤ j ≤ n. Hence a^{(2k+2)}_{ij} is limited for 1 ≤ i, j ≤ n.

The quotients of the principal minors and a fortiori the pivots have definite lower bounds and upper bounds in terms of neutrices. This is shown in Theorem 5.5. This theorem includes bounds for the pivots of the inverse procedure, which we will need to verify that the Gauss-Jordan solution is a solution of the original system. To prove the theorem we present first some notation and an auxiliary result, saying that the determinants of the intermediate matrices are at least of the same order of magnitude as the determinant of the original matrix.

Notation 5.3. Let A = [α_ij]_{n×n} ∈ M_n(E) be a reduced non-singular matrix, such that it admits a properly arranged representative matrix P = [a_ij]_{n×n}. For 1 ≤ q ≤ 2n we write d = det(P), d^{(q)} = det(P^{(q)}) and ∆^{(q)} = det A^{(q)} = d^{(q)} + D^{(q)}.

Lemma 5.4. Let A = [α_ij]_{n×n} ∈ M_n(E) be a reduced non-singular matrix, such that it admits a properly arranged representative matrix P. Let 1 ≤ q ≤ 2n and k be such that q = 2k−1 or q = 2k. Then

|d^{(q)}| = |d/m_k| > ⊘∆.

Proof. Let 1 ≤ k ≤ n and q = 2k−1 or q = 2k. In both cases

|d^{(q)}| = |det(G_q) det(G_{q−1}) · · · det(G_1) d| = |(m_{k−1}/m_k)(m_{k−2}/m_{k−1}) · · · (m_1/m_2)(1/m_1) d| = |d/m_k|.   (22)

Suppose |d^{(q)}| ∈ ⊘∆. Then d ∈ m_k ⊘∆ by (22). By Proposition 4.10 it holds that d ∈ ⊘∆. Hence d ∈ ⊘∆ ∩ ∆. Because ∆ is zeroless, this contradicts Proposition 4.5.4. Hence |d^{(q)}| > ⊘∆.


Theorem 5.5. Let A = [α_ij]_{n×n} ∈ M_n(E) be a reduced, non-singular matrix, which is properly arranged with respect to a matrix of representatives P. Then for 1 ≤ k < n

⊘∆ < |m_{k+1}/m_k| ∈ £   (23)

and

⊘ < |m_k/m_{k+1}| ∈ £/∆.   (24)

Proof. For 1 ≤ k ≤ n−1 we have

A^{(2k)} = [ 1 + A^{(2k)}_{11}   A^{(2k)}_{12}       · · · A^{(2k)}_{1k}       α^{(2k)}_{1,k+1}       · · · α^{(2k)}_{1n}
             A^{(2k)}_{21}       1 + A^{(2k)}_{22}   · · · A^{(2k)}_{2k}       α^{(2k)}_{2,k+1}       · · · α^{(2k)}_{2n}
             ⋮                   ⋮                  ⋱     ⋮                   ⋮                      ⋱     ⋮
             A^{(2k)}_{k1}       A^{(2k)}_{k2}       · · · 1 + A^{(2k)}_{kk}   α^{(2k)}_{k,k+1}       · · · α^{(2k)}_{kn}
             A^{(2k)}_{(k+1)1}   A^{(2k)}_{(k+1)2}   · · · A^{(2k)}_{(k+1)k}   α^{(2k)}_{(k+1)(k+1)}  · · · α^{(2k)}_{(k+1)n}
             ⋮                   ⋮                  ⋱     ⋮                   ⋮                      ⋱     ⋮
             A^{(2k)}_{n1}       A^{(2k)}_{n2}       · · · A^{(2k)}_{nk}       α^{(2k)}_{n,k+1}       · · · α^{(2k)}_{nn} ].

Suppose on the contrary that a^{(2k)}_{k+1,k+1} = m_{k+1}/m_k ∈ ⊘∆. From |a^{(2k)}_{ij}| = |m^{1...k i}_{1...k j}/m_k| ≤ |m_{k+1}/m_k| = |a^{(2k)}_{k+1,k+1}| for k+1 ≤ i, j ≤ n one derives that a^{(2k)}_{ij} ∈ ⊘∆ for k+1 ≤ i, j ≤ n. Let S_{n−k} be the set of all permutations of {k+1, . . . , n}. Because ∆ is limited,

d^{(2k)} = Σ_{σ∈S_{n−k}} sgn(σ) a^{(2k)}_{k+1,σ(k+1)} · · · a^{(2k)}_{n,σ(n)} ∈ (⊘∆)^{n−k} ⊆ ⊘∆,

in contradiction to Lemma 5.4. Hence |m_{k+1}/m_k| > ⊘∆.

Also, by Theorem 4.13 we have m_{k+1}/m_k = a^{(2k)}_{k+1,k+1}, and the latter is limited by Proposition 5.2. Hence formula (23) holds. Taking multiplicative inverses, we derive (24).

Theorem 5.5 gives bounds on the pivots and the entries of the elementary matrices of the Gauss-Jordan procedure, and the inverse procedure.


Theorem 5.6. Let A = [α_ij]_{n×n} ∈ M_n(E) be a reduced, non-singular matrix, which is properly arranged with respect to a matrix of representatives P.

1. Let 1 ≤ k < n. Then the (k+1)-th diagonal element of G^P_{2k+1} satisfies g^{(2k+1)}_{k+1,k+1} ∈ £/∆ and the elements of G^P_{2k+2} are all limited.

2. All elements of the matrices (G^{−1})^P_q, 1 ≤ q ≤ 2n, of the inverse Gauss-Jordan procedure are limited.

Proof. 1. The property is a direct consequence of Theorem 5.5 and Proposition 5.2.

2. For the matrices of odd index the property follows from (23), and for the matrices of even index q = 2k+2, k < n, the property follows from the fact that |(g^{−1})^{(2k+2)}_{i,k+1}| = |g^{(2k+2)}_{i,k+1}| for 1 ≤ i ≤ n, 1 ≤ k ≤ n−1, and Part 1.

With the help of Theorem 5.6 we derive a bound for the possible increase of the neutrix parts of the intermediate matrices of the Gauss-Jordan procedure. If in addition the original matrix is stable, Lemma 5.4 makes it possible to prove that they are always infinitesimal, implying that the intermediate matrices remain both non-singular and stable, until a near-identity matrix is obtained at the end.

Proposition 5.7. Let A ∈ M_n(E) be a reduced, non-singular matrix, which is properly arranged with respect to a matrix of representatives P. Then for all k such that 1 ≤ k ≤ n,

A^{(2k)} = A^{(2k−1)} ⊆ A/m_k.

Proof. We will apply external induction. For k = 1, because m_1 = a_11 = 1, one has A^{(2k−1)} = A^{(1)} = A = A/m_1. By Part 1 of Theorem 5.6 it holds that g^{(2)}_{ij} is limited for 1 ≤ i, j ≤ n, hence A^{(2)} = A^{(1)} = A/m_1.

As for the induction step, let k < n and suppose that A^{(2k−1)} = A^{(2k)} ⊆ A/m_k. Then A^{(2k+1)} ⊆ (m_k/m_{k+1}) A^{(2k)} ⊆ (m_k/m_{k+1})(A/m_k) = A/m_{k+1}. Again, by Part 1 of Theorem 5.6 one has A^{(2k+2)} = A^{(2k+1)} ⊆ A/m_{k+1}.


Proposition 5.8. Let A = [α_ij]_{n×n} ∈ M_n(E) be a reduced, non-singular stable matrix, which is properly arranged with respect to a matrix of representatives P. Let 1 ≤ q ≤ 2n. Then

1. ∆^{(q)} is zeroless.

2. ⊘∆ < ∆^{(q)} ⊂ £.

3. A^{(q)} ⊆ ⊘∆^{(q)} ⊆ ⊘.

4. A^{(q)} is limited, non-singular and stable.

Proof. 1. Let q = 2k or q = 2k−1 with 1 ≤ k ≤ n. By Lemma 5.4 one has |d^{(q)}| = |d/m_k|. Because the matrix is non-singular and stable, it holds that A ⊆ ⊘∆ < |d|, and because it is also reduced, it follows from Proposition 5.7 that A^{(q)} ⊆ A/m_k. Hence |d^{(q)}| > A^{(q)}. Also D^{(q)} ⊆ A^{(q)} by Proposition 5.2 and Proposition 4.10. Hence ∆^{(q)} is zeroless.

2. We show first that D^{(q)} ⊆ ⊘. Indeed, suppose ⊘ ⊂ D^{(q)}. Then £ ⊆ D^{(q)}. By Proposition 4.10 and Proposition 5.2 it holds that d^{(q)} is limited. This implies that ∆^{(q)} is a neutrix, in contradiction to Part 1. Hence D^{(q)} ⊆ ⊘, which implies that ∆^{(q)} = d^{(q)} + D^{(q)} ⊂ £. It also follows from Part 1 that ∆^{(q)} ⊆ (1 + ⊘)d^{(q)}. Now d^{(q)} > ⊘∆ by Lemma 5.4, hence also ∆^{(q)} > ⊘∆.

3. Let 1 ≤ q ≤ 2n. Then q = 2k or q = 2k−1 with 1 ≤ k ≤ n. By Proposition 5.7, the stability of the matrix A, Lemma 5.4 and Part 2, one has

A^{(q)} ⊆ A/m_k ⊆ ⊘d/m_k = ⊘d^{(q)} = ⊘∆^{(q)} ⊆ ⊘.

4. By Proposition 5.2 the matrix A^{(q)} is limited. By Part 1 the matrix A^{(q)} is non-singular. Then A^{(q)} is stable by Part 3.

Theorem 5.9. Let A ∈ M_n(E) be a reduced, non-singular stable matrix, which is properly arranged with respect to a matrix of representatives P. Then G^P(A) is a near-identity matrix.

Proof. Let A be the associated neutricial matrix of A. By Proposition 4.7 we have G^P(A) = G^P(P) + G^P(A) = I + A′, where A′ = [A′_ij]_{n×n} is a neutricial matrix. By Part 3 of Proposition 5.8 one has A′ ⊆ [⊘]_{n×n}. Hence G^P(A) is a near-identity matrix.


We consider now the effect of the Gauss-Jordan procedure on the right-hand member of the system A|B, where we always assume that the system satisfies Convention 2.25. In fact, in contrast to the possible increase of the neutrix parts of the coefficient matrix of a stable system, the pivots of the Gauss-Jordan procedure do not change the neutrix part of the right-hand member, and the same is true for the inverse procedure. The invariance of the neutrix part will be a consequence of the next proposition.

Proposition 5.10. Suppose that the flexible system A|B is properly arranged with respect to a representative matrix P. Assume that ∆ is not an absorber of B. Then for all k such that 1 ≤ k ≤ n−1

(m_{k+1}/m_k) B = (m_k/m_{k+1}) B = B.

Proof. Let 1 ≤ k ≤ n−1. By formula (23) it holds that ⊘∆ < |m_{k+1}/m_k| ∈ £. The fact that ∆ is not an absorber of B and Proposition 4.5.7 imply that (m_{k+1}/m_k) B = B. It follows that (m_k/m_{k+1}) B = B for 1 ≤ k ≤ n−1.

When applying the inverse Gauss-Jordan procedure to the right-hand member of the flexible system G^P A|G^P B, we define for 1 ≤ q ≤ 2n

[B]^{(−q)} = (G^P_q)^{−1}((G^P_{q+1})^{−1} · · · ((G^P_{2n})^{−1}[B]))

and

(G^P)^{−1}[B] = (G^P_1)^{−1}((G^P_2)^{−1} · · · ((G^P_{2n})^{−1}[B])).

Theorem 5.11. Suppose that the flexible system A|B is properly arranged with respect to a representative matrix P. Assume that ∆ is not an absorber of B. Then for all q such that 1 ≤ q ≤ 2n one has [B]^{(q)} = [B] and [B]^{(−q)} = [B]. In particular G^P[B] = [B] and (G^P)^{−1}[B] = [B].

Proof. We will apply External Induction. Because a_11 = 1, we have [B]^{(1)} = G^P_1[B] = I[B] = [B].

As for the induction step, let q < 2n and suppose that [B]^{(q)} = [B]. We consider two cases.


Case 1: q+1 = 2k+1 for some k ∈ {1, . . . , n−1}. By the induction hypothesis and Proposition 5.10 we have

[B]^{(q+1)} = G^P_{q+1}[B]^{(q)} = G^P_{2k+1}[B]

= [ 1 0 · · · 0             · · · 0
    0 1 · · · 0             · · · 0
    ⋮ ⋮ ⋱    ⋮              ⋱    ⋮
    0 0 · · · m_k/m_{k+1}   · · · 0
    ⋮ ⋮ ⋱    ⋮              ⋱    ⋮
    0 0 · · · 0             · · · 1 ] · [ B, . . . , B ]^T = [ B, . . . , B ]^T.

Case 2: q+1 = 2k+2 for some k ∈ {1, . . . , n−1}. By Theorem 5.6(1) all entries of the matrix G^P_{2k+2} are limited, and the elements of its diagonal are equal to 1. Then it follows from Case 1 that

[B]^{(q+1)} = G^P_{q+1}[B]^{(q)} = G^P_{2k+2}[B]^{(2k+1)} = G^P_{2k+2}[B] = [B].

In particular G^P[B] = [B]^{(2n)} = [B]. This proves the theorem for the Gauss-Jordan procedure G^P. The proof for the inverse procedure is similar.

Proposition 5.12. Suppose that the flexible system A|B is properly arranged with respect to a representative matrix P and stable. Then

(G^P)^{−1}(G^P B) = B.

Proof. Let B = b + B. By Proposition 4.7 and Theorem 5.11,

(G^P)^{−1}(G^P B) = (G^P)^{−1}(G^P(b + B)) = (G^P)^{−1}(G^P b + G^P B)
 = (G^P)^{−1}(G^P b + B) = (G^P)^{−1}(G^P b) + (G^P)^{−1}B
 = ((G^P)^{−1}G^P) b + B = b + B = B.

The neutrix part of the right-hand member is also invariant under multiplying and dividing by the determinants ∆ and ∆^{(q)}. This is shown in Proposition 5.13.


Proposition 5.13. For stable systems A|B it holds that

∆B = B/∆ = B.   (25)

Moreover, for 1 ≤ q ≤ 2n the determinant ∆^{(q)} is not an absorber of B, and

∆^{(q)}B = B/∆^{(q)} = B.   (26)

Proof. It follows from the fact that ∆ is zeroless and Proposition 4.10 that ⊘∆ < ∆ ⊂ £. Also ∆ is not an absorber of B. Then (25) follows from Proposition 4.5.7. By Proposition 5.8.2 we have ⊘∆ < ∆^{(q)} ⊂ £. Then also ∆^{(q)} is not an absorber of B, hence (26) holds by Proposition 4.5.7.

We are now able to prove that the Gauss-Jordan operations respect the stability property.

Proof of Theorem 2.26. 1. Assume that the system A|B is stable. Let 1 ≤ q ≤ 2n. By Proposition 5.8(4) the matrix A^{(q)} is stable. By Theorem 5.11 the system A^{(q)}|B^{(q)} is uniform with [B]^{(q)} = [B]. Then ∆^{(q)} is not an absorber of [B]^{(q)} by Proposition 5.13. We still need to show that

    R(A^{(q)}) ⊆ R(B^{(q)}). (27)

Observe that R(A^{(q)}) is well-defined, because ∆^{(q)} is zeroless by Proposition 5.8.1.

We show first that for 1 ≤ q ≤ 2n

    A^{(q)} β^{(q)} ⊆ B. (28)

In order to derive (28), we show by external induction that for 0 ≤ q ≤ 2n and 1 ≤ i, j ≤ n

    A^{(q)}_{ij} β^{(q)} ⊆ B. (29)

For q = 0 we have by stability, (10) and (25)

    A^{(0)}_{ij} β^{(0)} ⊆ Aβ ⊆ ∆R(A)β ⊆ ∆R(B)β ⊆ ∆B = B.

Assuming that the property (29) holds for q < 2n, we will prove it for q + 1. Because β^{(q+1)} = β^{(q+1)}_p for some p ∈ {1, …, n},

    |β^{(q+1)}| = |β^{(q+1)}_p| = |∑_{j=1}^n g^{(q+1)}_{pj} β^{(q)}_j| ≤ ∑_{j=1}^n |g^{(q+1)}_{pj}| |β^{(q)}_j| ≤ ∑_{j=1}^n |g^{(q+1)}_{pj}| |β^{(q)}|.

Also for 1 ≤ i, j ≤ n

    A^{(q+1)}_{ij} = g^{(q+1)}_{i1} A^{(q)}_{1j} + ⋯ + g^{(q+1)}_{in} A^{(q)}_{nj}. (30)

If q + 1 = 2k + 2 for some k ∈ {1, …, n−1}, by Theorem 5.6 and the induction hypothesis one has

    A^{(q+1)}_{ij} β^{(q+1)} ⊆ (g^{(q+1)}_{i1} A^{(q)}_{1j} + ⋯ + g^{(q+1)}_{in} A^{(q)}_{nj}) (∑_{j=1}^n |g^{(q+1)}_{ij}| β^{(q)})
                            = (g^{(q+1)}_{i1} A^{(q)}_{1j} β^{(q)} + ⋯ + g^{(q+1)}_{in} A^{(q)}_{nj} β^{(q)}) (∑_{j=1}^n |g^{(q+1)}_{ij}|)
                            ⊆ (£B + ⋯ + £B)£ ⊆ B.

If q + 1 = 2k + 1 for some k ∈ {1, …, n−1}, we consider separately the cases i ≠ k + 1 and i = k + 1.

Case 1: For i ≠ k + 1 and 1 ≤ i ≤ n, the row g^{(q+1)}_i is a unit vector, so the neutrices of the ith row of A^{(q+1)} satisfy A^{(q+1)}_{ij} = A^{(q)}_{ij} for 1 ≤ j ≤ n. Also

    β^{(q+1)} = (β^{(q)}_1, …, β^{(q)}_k, (m_k/m_{k+1}) β^{(q)}_{k+1}, β^{(q)}_{k+2}, …, β^{(q)}_n).

If β^{(q+1)} = β^{(q)}_s for some s ∈ {1, …, n} \ {k+1}, for i ≠ k + 1, 1 ≤ i ≤ n and 1 ≤ j ≤ n one has by the induction hypothesis

    A^{(q+1)}_{ij} β^{(q+1)} = A^{(q)}_{ij} β^{(q)}_s ⊆ A^{(q)}_{ij} β^{(q)} ⊆ B.

If β^{(q+1)} = (m_k/m_{k+1}) β^{(q)}_{k+1}, then for i ≠ k + 1, 1 ≤ i ≤ n and 1 ≤ j ≤ n it follows from the induction hypothesis and Proposition 5.10 that

    A^{(q+1)}_{ij} β^{(q+1)} = A^{(q)}_{ij} (m_k/m_{k+1}) β^{(q)}_{k+1} ⊆ (m_k/m_{k+1}) B = B.

Case 2: For i = k + 1, by formula (30) one has for 1 ≤ j ≤ n

    A^{(q+1)}_{k+1,j} = A^{(q)}_{k+1,j} · m_k/m_{k+1}.

If β^{(q+1)} = β^{(q)}_s for some s ∈ {1, …, n} \ {k+1}, due to Proposition 5.10 one has for 1 ≤ j ≤ n

    A^{(q+1)}_{k+1,j} β^{(q+1)} = (m_k/m_{k+1}) A^{(q)}_{k+1,j} β^{(q)}_s ⊆ (m_k/m_{k+1}) A^{(q)}_{k+1,j} β^{(q)} ⊆ (m_k/m_{k+1}) B = B.

If β^{(q+1)} = (m_k/m_{k+1}) β^{(q)}_{k+1}, again using Proposition 5.10 we find for 1 ≤ j ≤ n

    A^{(q+1)}_{k+1,j} β^{(q+1)} = (m_k/m_{k+1}) A^{(q)}_{k+1,j} (m_k/m_{k+1}) β^{(q)}_{k+1} ⊆ (m_k/m_{k+1})² A^{(q)}_{k+1,j} β^{(q)} ⊆ (m_k/m_{k+1})² B = B.

Combining, we see that property (29) holds for all q such that 0 ≤ q ≤ 2n. Formula (28) follows directly from (29).

To finish the proof, we consider separately the cases that β^{(q)} is zeroless and that β^{(q)} = B is neutricial. If β^{(q)} is zeroless, by (28) and Proposition 5.13

    R(A^{(q)}) = A^{(q)}/∆^{(q)} ⊆ (1/∆^{(q)}) · B/β^{(q)} = B/β^{(q)} = R(B^{(q)}).

If β^{(q)} = B is neutricial, formula (28) takes the form A^{(q)} B ⊆ B. Then

    R(A^{(q)}) B = A^{(q)} B/∆^{(q)} = A^{(q)} B ⊆ B.

We conclude from Theorem 5.11 that R(A^{(q)}) ⊆ B : B = R([B^{(q)}]) = R(B^{(q)}).

2. By setting q = 2n we obtain from Part 1 that the final system G^P(A)|G^P(B) is stable, while G^P(A) is a near-identity matrix by Theorem 5.9.
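The decomposition of the Gauss-Jordan procedure into the 2n step matrices G^P_1, …, G^P_{2n} is easily reproduced in floating-point arithmetic. The Python sketch below is only an illustration under assumptions of our own: the function name is invented, no row exchanges are performed (the representative matrix is supposed properly arranged, so all pivots are nonzero), and the 3 × 3 test matrix matches the numerical illustration of Example 3.1 below (ε = 0.01).

    import numpy as np

    def gauss_jordan_steps(P):
        # return step matrices playing the role of G^P_1, ..., G^P_2n
        n = P.shape[0]
        A = P.astype(float)
        steps = []
        for k in range(n):
            D = np.eye(n)                  # odd step: normalize the pivot row
            D[k, k] = 1.0 / A[k, k]        # pivot nonzero: properly arranged
            A = D @ A
            steps.append(D)
            E = np.eye(n)                  # even step: clear column k elsewhere
            for i in range(n):
                if i != k:
                    E[i, k] = -A[i, k]
            A = E @ A
            steps.append(E)
        return steps

    P = np.array([[1.0, 1.0, 1.0],
                  [1.0, -0.5, -0.5],
                  [0.005, 0.5, 1.0]])
    G = np.eye(3)
    for S in gauss_jordan_steps(P):
        G = S @ G                          # G^P = G^P_2n ... G^P_1
    print(np.allclose(G @ P, np.eye(3)))   # True: the procedure ends in I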

6. Stability and Cramer’s rule

By Theorem 4.4 of [13] Cramer's rule in the form (12) solves non-singular reduced uniform non-homogeneous stable systems. Here we extend the proof to homogeneous systems. We continue to adopt Convention 2.25 (the fact that A is properly arranged is not essential here) and prove the following theorem.

Theorem 6.1 (Cramer’s Rule for flexible systems). If the system A|B isstable, its solution is given by the external vector (12).

Let ξ be given by (12). Proposition 6.4 shows that the neutrix parts of the components of ξ are equal to the neutrix at the right-hand side B. Then the proof of Theorem 6.1 for homogeneous systems consists in showing that the solution is neutricial, with components equal to B.

We first introduce some notations, which in part will be used in the proof that for stable systems the Gauss-Jordan solution and the Cramer solution coincide, and which provide bounds for the determinants of the matrices M_j and for their neutrices.


Definition 6.2. Consider the system A|B with A ∈ M_n(E) non-singular. Let ∆ = det(A) ≡ d + D with d = det(P), where P is a representative matrix of A. For 1 ≤ i ≤ n, let M_i(b) be the matrix obtained from A by the substitution of the ith column by a representative vector b of B. We write

    ξ(b, d)^T = (det(M_1(b))/d, …, det(M_n(b))/d)^T,
    ξ(b)^T = (det(M_1(b))/∆, …, det(M_n(b))/∆)^T.
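In floating-point arithmetic the vector ξ(b, d) admits the following direct transcription (a sketch with invented names; external numbers are replaced by representative reals, so only the representative part of ξ is computed).

    import numpy as np

    def cramer(P, b):
        # representative Cramer vector (det(M_1(b))/d, ..., det(M_n(b))/d)
        d = np.linalg.det(P)          # d = det(P), assumed zeroless
        n = len(b)
        xi = np.empty(n)
        for i in range(n):
            M = P.astype(float).copy()
            M[:, i] = b               # M_i(b): substitute the i-th column by b
            xi[i] = np.linalg.det(M) / d
        return xi

    P = np.array([[1.0, 1.0, 1.0],
                  [1.0, -0.5, -0.5],
                  [0.005, 0.5, 1.0]])
    b = np.array([1.0, -2.0, 0.01])
    print(cramer(P, b))                                       # (-1, 3.97, -1.97)
    print(np.allclose(cramer(P, b), np.linalg.solve(P, b)))   # True

As expected, the result agrees with a direct linear solve; the point of the external-number framework is to control in addition the neutrix parts, which plain floating-point arithmetic does not represent.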

Lemma 6.3. Assume the system A|B is stable. Then for 1 ≤ j ≤ n

    1. |det(M_j)| ≤ 2n!|β|.
    2. N(det(M_j)) ⊆ βA + B.

Proof. Let S_n be the set of all permutations of {1, …, n} and σ ∈ S_n. Put

    γ_σ = α_{σ(1)1} ⋯ α_{σ(j−1),j−1} α_{σ(j+1),j+1} ⋯ α_{σ(n),n}.

Because the system is reduced,

    |γ_σ| ≤ α^{n−1} ≤ (1 + ⊘)^{n−1} = 1 + ⊘, (31)

and, as a consequence of Proposition 4.5.6,

    N(γ_σ) = N(∏_{1≤k≤n, k≠j} (a_{σ(k)k} + A_{σ(k)k})) ⊆ N((1 + A)^{n−1}) = A. (32)

1. It follows from (31) that

    |det(M_j)| ≤ ∑_{σ∈S_n} |γ_σ β_{σ(j)}| ≤ ∑_{σ∈S_n} (1 + ⊘)|β| = n!(1 + ⊘)|β| = 2n!|β|.

2. It follows from (32) and (31) that

    N(det(M_j)) = N(∑_{σ∈S_n} sgn(σ) γ_σ β_{σ(j)}) = ∑_{σ∈S_n} N(γ_σ β_{σ(j)})
                = ∑_{σ∈S_n} (β_{σ(j)} N(γ_σ) + γ_σ N(β_{σ(j)})) ⊆ ∑_{σ∈S_n} (βA + (1 + ⊘)B) = βA + B.
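A quick numerical experiment, under modelling assumptions of our own (error radii a and berr in place of the neutrices A and B, a random limited matrix), illustrates the order of magnitude asserted in Part 2: perturbing the coefficient entries at scale a and the substituted column at scale berr moves det(M_j) by an amount of order |β|a + berr, up to limited combinatorial factors.

    import numpy as np

    rng = np.random.default_rng(0)
    n, a, berr = 4, 1e-10, 1e-6                        # a models A, berr models B
    P = np.eye(n) + 0.5 * rng.uniform(-1, 1, (n, n))   # a limited matrix
    b = rng.uniform(-1, 1, n)
    M = P.copy()
    M[:, 0] = b                                        # M_1(b)
    d0 = np.linalg.det(M)

    spread = 0.0
    for _ in range(1000):
        dM = rng.uniform(-a, a, (n, n))                # imprecision in the coefficients
        dM[:, 0] = rng.uniform(-berr, berr, n)         # imprecision in the column b
        spread = max(spread, abs(np.linalg.det(M + dM) - d0))

    print(spread, np.abs(b).max() * a + berr)          # same order of magnitude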


Proposition 6.4. Assume the system A|B is stable. Then for 1 ≤ j ≤ n

    N(det(M_j)/∆) = B.

As a consequence, if the system is homogeneous, for 1 ≤ j ≤ n

    det(M_j)/∆ = B.

Proof. Let D = N(∆). By Proposition 4.5.3, Lemma 6.3 and Proposition 4.10, we have for 1 ≤ j ≤ n

    N(det(M_j)/∆) = (1/∆) N(det(M_j)) + det(M_j) N(1/∆)
                  = (1/∆) N(det(M_j)) + det(M_j) D/∆²
                  ⊆ (1/∆)(βA + B) + 2n!β D/∆²
                  ⊆ βA/∆ + B/∆ + βA/∆². (33)

From the stability condition R(A) ⊆ R(B) we derive, both in the homogeneous and the non-homogeneous case, that βA/∆ ⊆ B. Then we obtain from (33) and Proposition 5.13 that

    N(det(M_j)/∆) ⊆ βA/∆ + B/∆ + (1/∆)(βA/∆) ⊆ B + B + B/∆ = B. (34)

It follows from Proposition 4.11 that |∆_{ij}| > ⊘∆ for some i ∈ {1, …, n}. Because ∆ is not an absorber of B, also ∆_{ij} is not an absorber of B. Hence B ⊆ B∆_{ij}. Using the fact that products containing a neutrix always have the same sign, and subdistributivity, we derive that

    B ⊆ B∆_{1j} + ⋯ + B∆_{nj}
      ⊆ det ( 1+A_{11}  ⋯  α_{1(j−1)}  B  α_{1(j+1)}  ⋯  α_{1n} ;
              ⋮              ⋮          ⋮  ⋮               ⋮ ;
              α_{n1}    ⋯  α_{n(j−1)}  B  α_{n(j+1)}  ⋯  α_{nn} )
      ⊆ N( det ( 1+A_{11}  ⋯  α_{1(j−1)}  b_1+B  α_{1(j+1)}  ⋯  α_{1n} ;
                 ⋮              ⋮          ⋮      ⋮               ⋮ ;
                 α_{n1}    ⋯  α_{n(j−1)}  b_n+B  α_{n(j+1)}  ⋯  α_{nn} ) )
      = N(det(M_j)).

Then by Proposition 5.13

    B = B/∆ ⊆ N(det(M_j))/∆ ⊆ N(det(M_j))/∆ + det(M_j) N(1/∆) = N(det(M_j)/∆). (35)

Combining (34) and (35), we conclude that B = N(det(M_j)/∆) for 1 ≤ j ≤ n.

As a consequence, if the system is homogeneous, it holds that det(M_j)/∆ = N(det(M_j)/∆) = B for 1 ≤ j ≤ n.

Proof of Theorem 6.1. Let x = (x_1, x_2, …, x_n)^T ∈ ξ. In order to show that x satisfies the system A|B, assume first that the system is not homogeneous. By Theorem 4.4 of [13] the external vector ξ given by (12) is the solution of the system A|B given in the form (9), hence x satisfies A|B by Proposition 2.20. Secondly, assume that the system A|B is homogeneous. Then ξ = (B, B, …, B)^T by Proposition 6.4. By direct verification we see that ξ satisfies (9). Again x is a solution of the system A|B by Proposition 2.20.

Suppose now that x is an admissible solution of the system A|B. Let P = [a_{ij}]_{n×n} be a representative matrix for A. Then for 1 ≤ i ≤ n there exists b_i ∈ β_i such that

    a_{11}x_1 + ⋯ + a_{1n}x_n = b_1
        ⋮                    ⋮
    a_{n1}x_1 + ⋯ + a_{nn}x_n = b_n.


Let b = (b_1, …, b_n)^T. By Cramer's rule, one has x_i = det(M^P_i(b))/d ∈ det(M_i)/∆ for 1 ≤ i ≤ n, where M^P_i(b) is obtained from P by the substitution of the ith column by b. Hence x ∈ ξ.

7. Proof of the Main Theorem

The proof of Theorem 2.30 is organized as follows: we prove first that the Gauss-Jordan procedure does not alter the solution of a stable system, i.e. the set of admissible solutions. With this and Theorem 6.1, we show that the solutions given by Cramer's rule and by Gauss-Jordan elimination are equal. Then we show that a stable system whose coefficient matrix is a near-identity matrix is simply solved by the right-hand member. The Main Theorem will follow from these theorems.

Theorem 7.1. Suppose that the system A|B is stable. Then the Gauss-Jordan solution G is well-defined, and a real vector x is an admissible solution if and only if x ∈ G.

Proof. Let S be the solution of A|B, and P be a properly arranged matrix of representatives of A.

Assume first that x is an admissible solution, i.e. x ∈ S. Then Ax ⊆ B. By Proposition 4.9 and Proposition 4.6

    (G^P(A)) x = G^P(Ax) ⊆ G^P(B).

Hence x ∈ G^P. Conversely, assume that x ∈ G^P. Then (G^P(A)) x ⊆ G^P(B). Using Proposition 4.8, Proposition 4.9 and Proposition 5.12 we derive that

    Ax = I(Ax) = ((G^P)^{−1} G^P)(Ax) ⊆ (G^P)^{−1}(G^P(Ax)) = (G^P)^{−1}((G^P(A)) x) ⊆ (G^P)^{−1}(G^P(B)) = B.

Hence x ∈ S. Combining, we see that S = G^P. Consequently G^P does not depend on the choice of P, hence G ≡ G^P is well-defined. We conclude that S = G.

Theorem 7.2. Assume that the system A|B is stable. Then the Gauss-Jordan solution is equal to the Cramer solution.


Proof. The theorem follows from Theorem 7.1 and Theorem 6.1.

The solution of the system whose coefficient matrix is the identity matrix is of course the right-hand member. We use Theorem 6.1 to show that this property remains valid if the coefficient matrix is a near-identity matrix, provided the system is stable.

Theorem 7.3. Let A be a near-identity matrix and B = b + B. Suppose that the system A|B is stable. Then B is the solution of the system.

Proof. Put ξ = (ξ_1, …, ξ_n)^T with ξ_i = det(M_i)/∆ for 1 ≤ i ≤ n. By Theorem 6.1 the vector ξ is the solution of the system A|B. We have A = I_n + A with A ⊆ [⊘]_{n×n}, so I_n is a representative matrix of A, and b_i is a representative of det(M_i) for 1 ≤ i ≤ n. It follows from the stability that ∆ = 1 + D with D ⊆ A ⊆ ⊘. In addition, by Proposition 6.4 it holds that N(det(M_i)/∆) = B for 1 ≤ i ≤ n. Then

    ξ_i = b_i + N(det(M_i)/∆) = b_i + B = β_i

for 1 ≤ i ≤ n, i.e. ξ = B. Hence the solution of the system A|B is equal to B.
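Numerically, Theorem 7.3 states that a stable near-identity system is solved by its right-hand member. A minimal sketch, with an invented 3 × 3 perturbation of the identity at scale ε standing in for [⊘]_{n×n}:

    import numpy as np

    eps = 1e-8                                   # scale modelling [⊘]
    A = np.eye(3) + eps * np.array([[0.3, -1.0, 0.2],
                                    [0.1,  0.4, -0.7],
                                    [-0.5, 0.2,  0.6]])
    b = np.array([1.0, -2.0, 0.5])
    x = np.linalg.solve(A, b)
    print(np.max(np.abs(x - b)))                 # of order eps: x = b + neutrix

The deviation from b stays at the scale ε of the perturbation, hence inside the neutrix of the right-hand member as soon as the latter is at least of that order.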

Theorem 7.4 gives an effective way to find the solution. As in the real case, the solution of A|B is given by the Gauss-Jordan procedure, where by Theorem 7.1 we may choose any representative matrix P of A, provided it is reduced and properly arranged. The result follows from the fact that the Gauss-Jordan procedure, which due to Part 2 of Theorem 5.2 does not affect the stability of the system, leads to a stable system whose matrix of coefficients is a near-identity matrix, the solution of which is equal to the right-hand member by Theorem 7.3.

Theorem 7.4. Suppose that the system A|B is stable, and properly arranged with respect to a representative matrix P of A. Then G^P(B) is the Gauss-Jordan solution of A|B.

Proof. By Theorem 5.9 it holds that G^P(A) is a near-identity matrix. By Part 2 of Theorem 2.26, the system (G^P(A)) x ⊆ G^P(B) is stable. Then G^P(B) is the solution of the system (G^P(A)) x ⊆ G^P(B) by Theorem 7.3.


Hence G^P(B) = G^P, so it is the Gauss-Jordan solution of the system A|B with respect to P. By Theorem 7.1 it is the Gauss-Jordan solution of the system A|B.
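In floating-point terms the recipe of Theorem 7.4 reads: run the ordinary Gauss-Jordan procedure on the augmented matrix [P | b] for a representative pair (P, b) and read off G^P(b) in the last column; by Theorem 5.11 the neutrix part of B is simply carried along. A minimal sketch, assuming as in the theorem that P is reduced and properly arranged (so no pivoting is needed); the representative pair matches the numerical illustration of Example 3.1 below.

    import numpy as np

    def gauss_jordan_solve(P, b):
        # apply the Gauss-Jordan operations to [P | b] and return G^P(b)
        n = len(b)
        Ab = np.hstack([P.astype(float), b.reshape(-1, 1)])
        for k in range(n):
            Ab[k] /= Ab[k, k]                  # normalize the pivot row
            for i in range(n):
                if i != k:
                    Ab[i] -= Ab[i, k] * Ab[k]  # eliminate column k elsewhere
        return Ab[:, -1]

    P = np.array([[1.0, 1.0, 1.0],
                  [1.0, -0.5, -0.5],
                  [0.005, 0.5, 1.0]])
    b = np.array([1.0, -2.0, 0.01])
    print(gauss_jordan_solve(P, b))            # (-1, 3.97, -1.97)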

Proof of Theorem 2.30. The solution S is equal to the Gauss-Jordan solution G by Theorem 7.1, which also says that the application of the Gauss-Jordan procedure does not depend on the choice of the matrix of representatives P. Then G = G^P(B) by Theorem 7.4. Also G is equal to the Cramer solution by Theorem 7.2, which takes the form (12) by Theorem 6.1.

8. Equivalent systems

Systems with the same right-hand member will be said to be equivalent if they have equal solutions. By showing that two systems are equivalent, we may obtain simplifications. In particular, let A|B be a system such that some of the entries of A are given in the form of expansions. Assume that A′ is obtained from A by truncating the expansions in such a way that A′|B is equivalent to A|B. Then we may as well solve the simplified system A′|B, neglecting the extra terms occurring in A.

If A|B is stable, we will see that the simplification is justified if the neglected terms t of the expansions satisfy t/∆ ⊆ R(B). Roughly speaking, decimals may be neglected in a coefficient matrix if, when compared with the determinant, they are small with respect to the relative imprecision of the right-hand member.

We will illustrate the effects of simplification with the help of Example 3.1 and Example 8.7 below, and some numerics. Again we consider systems A|B in the sense of Convention 2.25.

Definition 8.1. Let A, A′ ∈ M_n(E) and let B ∈ M_{n,1}(E) be an external vector. The system A′|B is said to be equivalent to A|B if the solution of A′|B is equal to the solution of A|B.

Proposition 8.2 gives conditions for such flexible systems to be equivalent.

Proposition 8.2. Let A|B be a stable system with solution S = G^P(B), where P = [a_{ij}]_{n×n} is a reduced properly arranged representative matrix. Let Q = [q_{ij}]_{n×n} ∈ M_n(ℝ) be a reduced properly arranged matrix such that for 1 ≤ i, j ≤ n

    q_{ij} − a_{ij} ∈ A. (36)

Let A′ ≡ [α′_{ij}] with α′_{ij} = q_{ij} + A′_{ij} and A′_{ij} ⊆ A for 1 ≤ i, j ≤ n. Then A′|B is a stable equivalent system, and G^P(B) = G^Q(B).

We prove first two lemmas.

Lemma 8.3. Let A = [α_{ij}]_{n×n} = [a_{ij} + A_{ij}]_{n×n} ∈ M_n(E) be a non-singular stable matrix, properly arranged with respect to a reduced representative matrix P = [a_{ij}]_{n×n}. Let A′ ≡ [α′_{ij}]_{n×n} be defined by

    α′_{ij} = a_{ij} + A′_{ij},

with A′_{ij} ⊆ A for 1 ≤ i, j ≤ n. Then the matrix A′ is non-singular and stable.

Proof. Because A is limited, the matrix A′ is also limited. Let d = det(P) and ∆′ = det(A′). Then d and ∆′ are also limited. Because A′_{ij} ⊆ A for 1 ≤ i, j ≤ n, and the matrix A is non-singular and stable, it holds that ∆′ ⊆ d + A ⊆ (1 + ⊘)d. So ∆′ is zeroless, hence A′ is non-singular. In addition

    A′/∆′ ⊆ A/((1 + ⊘)d) = A/∆ ⊆ ⊘.

Hence A′ is stable.

Lemma 8.4. Let A|B be a stable system, and P = [a_{ij}]_{n×n} be a reduced properly arranged representative matrix of A. Let A′ ≡ [α′_{ij}] with α′_{ij} = a_{ij} + A′_{ij} and A′_{ij} ⊆ A for 1 ≤ i, j ≤ n. Then A′|B is a stable equivalent system satisfying Convention 2.25.

Proof. Because A′ ⊆ A, by Lemma 8.3 the matrix A′ is non-singular and stable. Then A′|B satisfies Convention 2.25. By Theorem 7.4 both systems A|B and A′|B are solved by G^P(B). Hence the systems are equivalent.

Proof of Proposition 8.2. Put A′′ = Q + (A)_{n×n}; note that the system A′′|B satisfies Convention 2.25. It follows from (36) that A′′ = P + (A)_{n×n}, so by Lemma 8.3 the matrix A′′ is non-singular and stable. Then A′′|B is a stable system, and by Lemma 8.4 the systems A|B and A′′|B are equivalent. The system A′|B shares with the system A′′|B the representative matrix Q, hence by Lemma 8.4 it is stable and equivalent to A′′|B. Hence the systems A|B and A′|B are also equivalent. Then it follows from Theorem 7.4 that G^P(B) = G^Q(B).


Corollary 8.5. Let A|B be a stable system, where A = P + A, with P = [a_{ij}]_{n×n} a reduced properly arranged representative matrix and A a neutricial matrix. Let Q = [q_{ij}]_{n×n} ∈ M_n(ℝ) be a reduced properly arranged representative matrix of A′ ≡ P + (A)_{n×n}. Then A|B and Q|B are equivalent.

The corollary indicates that for stable systems we may neglect all terms and neutrices smaller than the biggest neutrix in the coefficient matrix, and solve instead for any real coefficient matrix lying within this range of imprecision.

We will apply Proposition 8.2 and Corollary 8.5 in Example 8.6 and Example 8.7 below.

Example 8.6. (Continuation of Example 3.1.) Put

    (1 + ε²⊘)x_1 + (1 + ε²⊘)x_2 + (1 + ε²⊘)x_3 ⊆ 1 + ε⊘
    (1 + ε²⊘)x_1 + (−1/2 + ε²⊘)x_2 + (−1/2 + ε²⊘)x_3 ⊆ −2 + ε⊘
    (ε/2 + ε²⊘)x_1 + (1/2 + ε²⊘)x_2 + (1 + ε²⊘)x_3 ⊆ ε + ε⊘.

Let A′ be the coefficient matrix of the system of Example 8.6. Note that the matrix P given by (15) is a representative matrix of both A and A′, that the associated neutricial matrices satisfy A ⊆ A′ with A′ = A, and that the vectors in the right-hand side of both systems are the same. Then by Proposition 8.2 the solution of the system of Example 8.6 is also given by (16).

Example 8.7. Let ε > 0 be infinitesimal. Consider the reduced flexible system A|B given by

    (1 + ε£)x_1 + (1 − ε)x_2 + (1/2 + 2ε²)x_3 + (1/2)x_4 ⊆ −1 + ε£
    (−1 + 3ε)x_1 + x_2 + (1/2 + ε² + ε²⊘)x_3 + (1/2)x_4 ⊆ ε£
    x_2 − (1/2)x_3 + (1 − 3ε² + ε²⊘)x_4 ⊆ −1/2 + ε£
    (1/2 + ε + ε⊘)x_1 + (1 + ε£)x_3 + (1 + ε⊘)x_4 ⊆ 2 + ε£.    (37)

The matrix

    P = ( 1         1 − ε    1/2 + 2ε²   1/2
          −1 + 3ε   1        1/2 + ε²    1/2
          0         1        −1/2        1 − 3ε²
          1/2 + ε   0        1           1 )    (38)

is a representative matrix of A. One verifies that P is non-singular, reduced and properly arranged, with det(P) ∈ −3 + £ε zeroless, m_1 = 1, m_2 ∈ 2 − 2ε + ⊘ε and m_3 ∈ −2 − (5/2)ε + ⊘ε. Also R(A) = A/∆ = ε£, R(B) = B/β = ε£ and ∆B = ε£ = B. Hence A = ε£ ⊂ ⊘ = ⊘∆, R(A) ⊆ R(B) and ∆ is not an absorber of B, so the system A|B is stable.

Let

    Q = ( 1    1   1/2    1/2
          −1   1   1/2    1/2
          0    1   −1/2   1
          1/2  0   1      1 ).    (39)

The matrix Q is reduced and non-singular, with determinant d ≡ det(Q) = −3. A straightforward calculation shows that Q is properly arranged, with m_2 = 2 and m_3 = −2. The entries of Q and P differ by at most a limited multiple of ε, which is contained in A = £ε.

Applying the usual Gauss-Jordan procedure we derive that

    X = G^Q B = (−1/2 + ε£, −13/8 + ε£, 3/4 + ε£, 3/2 + ε£)^T

is the Gauss-Jordan solution of the system. By Corollary 8.5 it is also the solution of (37).
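Since the entries of Q and of the representative right-hand member b = (−1, 0, −1/2, 2)^T are exactly representable, the value of G^Q b can be confirmed in exact rational arithmetic. The sketch below is our own code, using Python's fractions module and a hand-rolled Gauss-Jordan loop.

    from fractions import Fraction as F

    Q = [[F(1), F(1), F(1, 2), F(1, 2)],
         [F(-1), F(1), F(1, 2), F(1, 2)],
         [F(0), F(1), F(-1, 2), F(1)],
         [F(1, 2), F(0), F(1), F(1)]]
    b = [F(-1), F(0), F(-1, 2), F(2)]

    M = [Q[i] + [b[i]] for i in range(4)]      # augmented matrix [Q | b]
    for k in range(4):
        piv = M[k][k]
        M[k] = [x / piv for x in M[k]]         # normalize the pivot row
        for i in range(4):
            if i != k:
                f = M[i][k]
                M[i] = [x - f * y for x, y in zip(M[i], M[k])]

    print([str(row[-1]) for row in M])         # ['-1/2', '-13/8', '3/4', '3/2']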

We illustrate Example 3.1/8.6 and Example 8.7 numerically. We assume that ε = 0.01, and represent ⊘ by [−0.1, 0.1] and £ by the interval [−2, 2]. We will not do an exhaustive investigation, and instead of applying interval calculus we choose the extreme values of the numerical intervals somewhat at random.

Working with the matrix (15) and the right-hand member (1, −2, 1/100)^T, we find the exact solution

    x = (x_1, x_2, x_3)^T = (−1, 397/100, −197/100)^T = (−1, 3.97, −1.97)^T.

To represent the coefficient matrix of Example 3.1, we may consider, say,

    A′ = ( 1.00001      0.99999    1.000002
           0.999998     −0.50001   −0.5
           0.00499999   0.5        1.00001 ).
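This computation is easily reproduced; in the sketch below (our own script) numpy does the solving, and the deviation from the exact solution is printed. Digits in the last places may differ from the rounded values quoted here.

    import numpy as np

    A1 = np.array([[1.00001, 0.99999, 1.000002],
                   [0.999998, -0.50001, -0.5],
                   [0.00499999, 0.5, 1.00001]])     # the matrix A′ above
    b = np.array([1.0, -2.0, 0.01])
    x_exact = np.array([-1.0, 3.97, -1.97])
    x1 = np.linalg.solve(A1, b)
    print(x1)
    print(np.max(np.abs(x1 - x_exact)))             # about 7e-5, below 0.001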


Rounded off at 7 significant digits, we find the solution

    x′ = (−0.999970, 3.969929, −1.969945)^T.

The largest deviation with respect to the exact solution is about 0.000071 in the second coordinate, which is significantly smaller than 0.001, i.e. the absolute value of the bounds of the interval representing ⊘ε.

In Example 8.6 all entries of the coefficient matrix are imprecise. In order to compare with the numerical matrix A′, we choose a matrix A′′ using a randomization which is the same for the imprecise coefficients of A′ and put

    A′′ = ( 1.00001    0.99999    1.00001
            0.99999    −0.50001   −0.49999
            0.004999   0.49999    1.00001 ).

Rounded off at 7 significant digits, we find the solution

    x′′ = (−0.999943, 3.969928, −1.969915)^T.

As expected, the result is not as good as x′; still, the largest deviation, of about 0.000085 in the third coordinate, lies well within the interval [−0.001, 0.001] representing ⊘ε.

Finally we illustrate Corollary 8.5 by comparing the solutions of the system (37) when using the representative matrices P given by (38) and Q given by (39).

The solution x′ for the matrix

    P′ = ( 1      0.99   0.5002   0.5
           −0.97  1      0.5001   0.5
           0      1      −0.5     0.9997
           0.51   0      1        1 )

is, rounding off at 7 significant digits,

    x′ = G^{P′} b = (−0.5159373, −1.632099, 0.7537178, 1.509410)^T.


For the matrix Q and the right-hand member b = (−1, 0, −1/2, 2)^T we find the exact solution

    x = G^Q b = (−1/2, −13/8, 3/4, 3/2)^T = (−0.5, −1.625, 0.75, 1.5)^T.

We observe the largest deviation between x′ and x in the second coordinate, with a value of about 0.007. This is 0.7 times the value 0.01 chosen for ε, so it can be considered to lie within £ε.
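The comparison of the two representative matrices can likewise be scripted; the sketch below (ours) relies on numpy's solver rather than on the explicit procedure.

    import numpy as np

    P1 = np.array([[1, 0.99, 0.5002, 0.5],           # the matrix P′ above
                   [-0.97, 1, 0.5001, 0.5],
                   [0, 1, -0.5, 0.9997],
                   [0.51, 0, 1, 1]])
    Q = np.array([[1, 1, 0.5, 0.5],
                  [-1, 1, 0.5, 0.5],
                  [0, 1, -0.5, 1],
                  [0.5, 0, 1, 1]])
    b = np.array([-1.0, 0.0, -0.5, 2.0])
    x_p = np.linalg.solve(P1, b)
    x_q = np.linalg.solve(Q, b)                      # exact: (-1/2, -13/8, 3/4, 3/2)
    print(np.max(np.abs(x_p - x_q)))                 # about 0.007, within £ε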

References

[1] van der Corput JG. Introduction to the neutrix calculus. Journal d'Analyse Mathématique. 1959; 7(1):291–398.

[2] Diener F, Diener M, editors. Nonstandard Analysis in Practice. Berlin: Springer-Verlag; 1995.

[3] Diener F, Reeb G. Analyse non standard. Paris: Hermann; 1989.

[4] Dinis B, van den Berg IP. Algebraic properties of external numbers. Journal of Logic & Analysis. 2011; 3(9):1–30.

[5] Dinis B, van den Berg IP. Characterization of distributivity in a solid. Indagationes Mathematicae. 2017; 28(4):785–795.

[6] Dinis B, van den Berg IP. Axiomatics for the external numbers of nonstandard analysis. Journal of Logic and Analysis. 2017; 9(7):1–47.

[7] Dinis B, van den Berg IP. Neutrices and External Numbers. A Flexible Number System. London: Taylor and Francis; 2019.

[8] Hyde D, Raffman D. Sorites Paradox. The Stanford Encyclopedia of Philosophy, Summer 2018 Edition. Available from: http://plato.stanford.edu/archives/sum2018/entries/sorites-paradox/.

[9] Gantmacher FR. The Theory of Matrices, vols. I and II. New York: Chelsea Publishing Co.; 1960.

[10] George A, Ikramov KD, Kucherov AB. On the growth factor in Gaussian elimination for generalized Higham matrices. Linear Algebra Appl. 2002; 9:107–114.

[11] Grossman DP. On the problem of the numerical solution of systems of simultaneous linear algebraic equations. Uspekhi Mat. Nauk. 1950; 5(3):87–103.

[12] Ikramov KD. Conditionality of the intermediate matrices of the Gauss, Jordan and optimal elimination methods. USSR Comput. Maths Math. Phys. 1979; 18:1–16.

[13] Justino J, van den Berg IP. Cramer's rule applied to flexible systems of linear equations. Electronic Journal of Linear Algebra. 2012; 24:126–152.

[14] Kanovei V, Reeken M. Nonstandard Analysis, Axiomatically. Springer Monographs in Mathematics. Berlin: Springer-Verlag; 2004.

[15] Koudjeti F, van den Berg IP. Neutrices, external numbers and external calculus. In: Diener F, Diener M, editors. Nonstandard Analysis in Practice; 145–170. Berlin: Springer-Verlag; 1995.

[16] Lyantse W, Kudryk T. Introduction to Nonstandard Analysis. Lviv: VNTL Publishers; 1997.

[17] Nelson E. Internal set theory: A new approach to nonstandard analysis. Bulletin of the American Mathematical Society. 1977; 83:1165–1198.

[18] Parker DS. Explicit Formulas for the Results of Gaussian Elimination. Available from: http://web.cs.ucla.edu, 1995.

[19] Peters G, Wilkinson JH. On the stability of Gauss-Jordan elimination with pivoting. Communications of the ACM. 1975; 18(1):20–24.

[20] Taylor JR. An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements, 2nd ed. University Science Books; 1997.

[21] Tran VN, van den Berg IP. A parameter method for linear algebra and optimization with uncertainties. Optimization. 2020; 69(1):21–61. DOI: 10.1080/02331934.2019.1638387.

[22] Tran VN, van den Berg IP. An algebraic model for the propagation of errors in matrix calculus. Special Matrices. 2020; 8(1):68–97.

[23] Tran VN, Justino J, van den Berg IP. On the explicit formula for Gauss-Jordan elimination. Available from: https://arxiv.org/abs/2010.01085 (accepted for JP Journal of Algebra, Number Theory and Applications), 2020.

[24] Weiss SE. The sorites fallacy: What difference does a peanut make? Synthese. 1976; 33:253–272. Available from: http://www.jstor.org/stable/20115132.

[25] Wilkinson JH. Error analysis of direct methods of matrix inversion. J. ACM. 1961; 8:281–330.

[26] Li Y. An Explicit Construction of Gauss-Jordan Elimination Matrix. 2009. Available from: http://arxiv.org/pdf/0907.5038.pdf.
