

March 1, 2016

The Basic Matrix Algebra in Linear Models

Chapter 1: Deals with generalized inverse matrices and allied topics.

Chapter 2: Extends to sections on the distribution of quadratic and bilinear forms and the singular multinomial distribution.

Chapter 3: Full-rank models. A simple explanation of regression, extended to multiple regression, and a unified treatment for testing a general linear hypothesis.

Chapter 4: Models not of full rank. Dummy (0, 1) variables; estimable functions; non-estimable functions.

Chapter 5: The non-full-rank model. Testing any testable linear hypothesis.

Chapters 6-8: Give many details for the analysis of unbalanced data (unequal-subclass-numbers data).

Chapters 9-11: Data with variance components.


Occupation     |                        Education
               | Incomplete High School | High School | College Graduate
---------------+------------------------+-------------+------------------
Laborer        |           14           |      8      |         7
Artisan        |           10           |      -      |         -
Professional   |            -           |     17      |        22
Self-employed  |            3           |      9      |        10

Unequal numbers of observations in the subclasses including perhaps

some that contain no observations at all ⟹ unequal-numbers data,

unbalanced data, “messy” data.

“A” generalized inverse of a matrix A is defined as any matrix G that satisfies the equation

AGA = A.

Other names: conditional inverse, pseudo-inverse, g-inverse.

G for a given matrix A is not unique.

To illustrate the existence of G and its non-uniqueness:

If A has order p × q, there exist matrices P and Q such that

\[ P_{p\times p}\, A_{p\times q}\, Q_{q\times q} = \begin{bmatrix} D_r & 0_{r\times(q-r)} \\ 0_{(p-r)\times r} & 0_{(p-r)\times(q-r)} \end{bmatrix} \]

or, more simply,

\[ PAQ = \begin{bmatrix} D_r & 0 \\ 0 & 0 \end{bmatrix} \]

P and Q are products of elementary operations; r is the rank of A and D_r is a diagonal matrix of order r.
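As a concrete numerical illustration (a minimal numpy sketch, not part of the original notes; the rank-1 matrix A below is made up), the Moore-Penrose inverse is one generalized inverse, and adding any term xy′ with y′A = 0 produces a different G that still satisfies AGA = A:

```python
import numpy as np

# A made-up rank-1 matrix (not from the notes).
A = np.array([[1., 2.],
              [2., 4.]])

G1 = np.linalg.pinv(A)                 # the Moore-Penrose inverse is one generalized inverse
print(np.allclose(A @ G1 @ A, A))      # True: A G A = A

# Any G2 = G1 + x y' with y'A = 0 also satisfies A G2 A = A,
# so a generalized inverse is not unique.
y = np.array([[2.], [-1.]])            # y'A = 0 for this A
x = np.array([[1.], [0.]])
G2 = G1 + x @ y.T
print(np.allclose(A @ G2 @ A, A))      # True as well
print(np.allclose(G1, G2))             # False: two different generalized inverses
```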


Matrix Algebra

The sum of the diagonal elements of a square matrix is called the trace of the matrix, written tr(A); i.e., for A = {a_ij} with i, j = 1, …, n,

\[ \operatorname{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn} = \sum_{i=1}^{n} a_{ii} \]

Example:

\[ \operatorname{tr}\begin{bmatrix} 1 & 7 & 6 \\ 8 & 3 & 9 \\ 4 & 2 & 8 \end{bmatrix} = 1 + 3 + 8 = 12 \]

When A is not square, the trace is not defined. That is, it does not exist.

tr(𝐴′) = tr(𝐴)

tr(𝐴 + 𝐵) = tr(𝐴) + tr(𝐵)

(𝐴 + 𝐵)′ = 𝐴′ + 𝐵′

A product A_{r×c} B_{c×s} = (AB)_{r×s} is defined only when the number of columns of A equals the number of rows of B; the (i, j)th element of AB is the inner product of the ith row of A with the jth column of B.

e.g., for A of order 2×3 and B of order 3×4, AB is 2×4:

\[ AB = \begin{bmatrix} 1 & 0 & 2 \\ -1 & 4 & 3 \end{bmatrix}\begin{bmatrix} 0 & 6 & 1 & 5 \\ 1 & 1 & 0 & 7 \\ 3 & 4 & 4 & 3 \end{bmatrix} = \begin{bmatrix} 6 & 14 & 9 & 11 \\ 13 & 10 & 11 & 32 \end{bmatrix} \]

Please refer to Chapters 1-4 of the book “Matrix Algebra Useful for Statistics”.


AB is described as A post multiplied by B, or as A multiplied on the

right by B.

scalar

vector

Matrix (Matrices)

Identity matrices

(Unit matrix)

When A is of order p×q,

I_p A_{p×q} = A_{p×q} I_q = A_{p×q}

(The transpose of a product)

(AB)′ = B′A′

e.g.,

\[ AB = \begin{bmatrix} 1 & 0 & -1 \\ 2 & -1 & 3 \end{bmatrix}\begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 4 \\ 3 & 0 & 7 \end{bmatrix} = \begin{bmatrix} -2 & 1 & -6 \\ 11 & 0 & 19 \end{bmatrix} \]

\[ B'A' = \begin{bmatrix} 1 & 0 & 3 \\ 1 & 2 & 0 \\ 1 & 4 & 7 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 0 & -1 \\ -1 & 3 \end{bmatrix} = \begin{bmatrix} -2 & 11 \\ 1 & 0 \\ -6 & 19 \end{bmatrix} = (AB)' \]

(The trace of a product)

tr (AB) = tr (BA)

Note that tr(AB) exists only if AB is square, which occurs only when A is

r×c and B is c×r. Then if AB = P = {pij} and BA = T = {tij}

\[ \operatorname{tr}(AB) = \sum_{i=1}^{r} p_{ii} = \sum_{i=1}^{r}\Big(\sum_{j=1}^{c} a_{ij}b_{ji}\Big) = \sum_{j=1}^{c}\Big(\sum_{i=1}^{r} b_{ji}a_{ij}\Big) = \sum_{j=1}^{c} t_{jj} = \operatorname{tr}(BA) \]
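A quick numerical check of tr(AB) = tr(BA) with non-square factors (a numpy sketch using made-up integer matrices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3)).astype(float)   # A is r x c
B = rng.integers(-5, 5, size=(3, 2)).astype(float)   # B is c x r

# AB is 2x2 and BA is 3x3, yet the two traces agree.
print(np.trace(A @ B), np.trace(B @ A))
```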

\[ I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad\text{and}\quad I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \]


Partitioned matrices

\[ A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} B_{11} \\ B_{21} \end{bmatrix} \]

Then

\[ AB = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} B_{11} \\ B_{21} \end{bmatrix} = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} \\ A_{21}B_{11} + A_{22}B_{21} \end{bmatrix} \]
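The blockwise product can be checked numerically; a short numpy sketch with made-up block sizes, comparing the partitioned formula with the ordinary product:

```python
import numpy as np

rng = np.random.default_rng(1)
A11, A12 = rng.normal(size=(2, 2)), rng.normal(size=(2, 3))
A21, A22 = rng.normal(size=(1, 2)), rng.normal(size=(1, 3))
B11, B21 = rng.normal(size=(2, 4)), rng.normal(size=(3, 4))

A = np.block([[A11, A12], [A21, A22]])
B = np.block([[B11], [B21]])

# Blockwise product agrees with the ordinary product AB.
blockwise = np.block([[A11 @ B11 + A12 @ B21],
                      [A21 @ B11 + A22 @ B21]])
print(np.allclose(A @ B, blockwise))   # True
```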

(The laws of Algebra)

a. Associative Laws

(A+B)+C = A+ (B+C)

(AB)C = A (BC) = ABC

b. The distributive Laws

A (B+C) = AB+AC

c. Commutative Laws

A+B = B+A

But, in general, AB ≠ BA.

When AB and BA both exist and are of the same order, they are not in general equal:

\[ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} 2 & -3 \\ 4 & -7 \end{bmatrix} \quad\text{but}\quad \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} -3 & -4 \\ -2 & -2 \end{bmatrix} \]

IA=AI=A

0A=A0=0 for A being square

(Contrasts with scalar algebra)

AX+BX = (A+B) X

XA+XB = X (A+B)

Page 6: Matrix Algebra of Linear Models - 國立中興大學benz.nchu.edu.tw/~kucst/Matrix Algebra.pdf · The Basic Matrix Algebra in Linear Models ... of the book “Matrix Algebra useful

6

But XP+QX generally does not have X as a factor.

(Notice) AB = 0 does not imply that A = 0 or B = 0: we can have AB = 0 with neither A nor B being 0.

Further, if BA = 2B, so that BA - 2B = 0, i.e. B(A - 2I) = 0, we cannot conclude either that A - 2I = 0 or that B = 0.

Similarly, X² = 0 does not imply X = 0.

e.g.,

\[ X = \begin{bmatrix} 1 & 2 & 5 \\ 2 & 4 & 10 \\ -1 & -2 & -5 \end{bmatrix} \quad\text{gives}\quad X^2 = 0 \]

Likewise, Y2 = I implies neither Y = I nor Y = -I

e.g.,

\[ Y = \begin{bmatrix} 1 & 0 \\ 4 & -1 \end{bmatrix} \quad\text{but}\quad Y^2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I \]

Similarly, we can have M2 = M with both M≠I and M≠0

\[ M = \begin{bmatrix} 3 & -2 \\ 3 & -2 \end{bmatrix} = M^2 \]
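The three counterexamples above can be checked directly; a minimal numpy sketch using the same matrices as the notes:

```python
import numpy as np

X = np.array([[ 1,  2,  5],
              [ 2,  4, 10],
              [-1, -2, -5]])
print(np.array_equal(X @ X, np.zeros((3, 3), dtype=int)))   # True: X^2 = 0 although X != 0

Y = np.array([[1,  0],
              [4, -1]])
print(np.array_equal(Y @ Y, np.eye(2, dtype=int)))          # True: Y^2 = I, yet Y != I and Y != -I

M = np.array([[3, -2],
              [3, -2]])
print(np.array_equal(M @ M, M))                             # True: M^2 = M with M != I and M != 0
```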

A square matrix is defined as symmetric when it equals its transpose;

i.e.,

A is symmetric when A = A′, with a_ij = a_ji for i, j = 1, …, r, for A of order r×r.

For symmetric A and B, (AB)′ = B′A′ = BA.

A′A = 0 implies A = 0.

tr(A′A) = 0 implies A = 0.

Recall that if a sum of squares of real numbers is zero, then each of the numbers is zero; i.e., for real numbers x_1, x_2, …, x_n,

\[ \sum_{i=1}^{n} x_i^2 = 0 \quad\text{implies}\quad x_1 = x_2 = \cdots = x_n = 0 \]

For A of order r×c,

\[ \operatorname{tr}(A'A) = \sum_{j=1}^{c}\,(j\text{th diagonal element of } A'A) = \sum_{j=1}^{c}\sum_{k=1}^{r} a_{kj}^2 \]

Pxx′ = Qxx′ implies Px = Qx.

(Proof) (Pxx′ - Qxx′)(P′ - Q′) = (Px - Qx)x′(P′ - Q′) = (Px - Qx)(Px - Qx)′ = 0, hence Px - Qx = 0, i.e., Px = Qx.

(Sums of outer products)


With a_j the jth column of A and b_j′ the jth row of B,

\[ AB = \begin{bmatrix} a_1 & a_2 & \cdots & a_c \end{bmatrix}\begin{bmatrix} b_1' \\ b_2' \\ \vdots \\ b_c' \end{bmatrix} = \sum_{j=1}^{c} a_j b_j' \]

Thus AB is the sum of outer products of the columns of A with the corresponding rows of B.

\[ AB = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}\begin{bmatrix} 7 & 8 \\ 9 & 10 \end{bmatrix} = \begin{bmatrix} 43 & 48 \\ 59 & 66 \\ 75 & 84 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}\begin{bmatrix} 7 & 8 \end{bmatrix} + \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}\begin{bmatrix} 9 & 10 \end{bmatrix} = \begin{bmatrix} 7 & 8 \\ 14 & 16 \\ 21 & 24 \end{bmatrix} + \begin{bmatrix} 36 & 40 \\ 45 & 50 \\ 54 & 60 \end{bmatrix} \]
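A short numpy check, using the same A and B as above, that the product equals the sum of outer products of columns of A with rows of B:

```python
import numpy as np

A = np.array([[1, 4],
              [2, 5],
              [3, 6]])
B = np.array([[7,  8],
              [9, 10]])

# AB built as the sum of outer products a_j b_j'.
outer_sum = sum(np.outer(A[:, j], B[j, :]) for j in range(A.shape[1]))
print(np.array_equal(A @ B, outer_sum))   # True
```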

(Elementary Vectors) e_i is the ith column of I_n, namely a vector with unity for its ith element and zeros elsewhere; e_i is called an elementary vector.

\[ e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \quad\text{and}\quad e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \qquad E_{12} = e_1 e_2' = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \]

E_{ij} = e_i e_j′ is null except for element (i, j), which is unity, and

\[ I_n = \sum_{i=1}^{n} e_i e_i' \]

\[ \mathbf{1}_n' = \begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}, \qquad \mathbf{1}_n'\mathbf{1}_n = n \]

\[ \mathbf{1}_n\mathbf{1}_n' = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix} = J_n, \]

having all elements unity; more generally J_{r×s} = 1_r 1_s′.

\[ J_n = \mathbf{1}_n\mathbf{1}_n' \ \text{ with } \ J_n^2 = nJ_n, \qquad \bar{J}_n = \tfrac{1}{n}J_n \ \text{ with } \ \bar{J}_n^2 = \bar{J}_n \]


and, for statistics,

\[ C_n = I_n - \bar{J}_n = I - \tfrac{1}{n}J \]

is known as a centering matrix. First observe that

\[ C' = C, \qquad C^2 = C, \qquad C\mathbf{1} = 0, \qquad CJ = JC = 0 \]

(Here are the vectors.) Define x′ = (x_1, x_2, …, x_n) as a data vector, with mean

\[ \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{1}{n}\mathbf{1}'x \]

Then

\[ Cx = \Big(I - \tfrac{1}{n}J\Big)x = x - \bar{x}\mathbf{1} = \begin{bmatrix} x_1 - \bar{x} \\ x_2 - \bar{x} \\ \vdots \\ x_n - \bar{x} \end{bmatrix} \]

is the data vector with each observation expressed as a deviation from x̄. (This is the origin of the name "centering matrix" for C.)

For x being a data vector:
(1/n)1′x is the mean,
Cx is the vector of deviations from the mean,
x′Cx is the sum of squares about the mean.

e.g.,

\[ C_2 = \begin{bmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ -\tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix} \]

A special case of the form x′Ax is known as a quadratic form, which can be used for sums of squares:

\[ x'Cx = (Cx)'(Cx) = (x - \bar{x}\mathbf{1})'(x - \bar{x}\mathbf{1}) = \sum_{i=1}^{n}(x_i - \bar{x})^2 = x'x - n\bar{x}^2 = \sum_{i=1}^{n} x_i^2 - n\bar{x}^2 \]
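The centering matrix and the sum of squares about the mean can be verified numerically; a minimal numpy sketch with a made-up data vector x:

```python
import numpy as np

x = np.array([4., 7., 1., 8.])               # made-up data vector
n = len(x)
C = np.eye(n) - np.ones((n, n)) / n          # centering matrix C = I - (1/n)J

print(np.allclose(C @ x, x - x.mean()))                      # Cx gives deviations from the mean
print(np.allclose(x @ C @ x, np.sum((x - x.mean()) ** 2)))   # x'Cx is the sum of squares about the mean
print(np.allclose(C @ C, C))                                 # C is idempotent: C^2 = C
```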


Idempotent Matrices (from "same" + "power" [Latin]):

When k is such that k² = k, we say k is idempotent; then k^r = k for r being a positive integer.

k² = k implies (I - k)² = I - k, since (I - k)(I - k) = I - k - k + k² = I - k.

But (k - I) is not idempotent.

e.g., C = I - (1/n)J is idempotent:

\[ C_2 = \begin{bmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ -\tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix} = C_2^2 \]

GA is idempotent whenever G is such that AGA = A.
(A matrix G of this nature is called a generalized inverse of A.)

Orthogonal Matrices: another useful class of matrices, with

AA′ = I = A′A. Such matrices are called orthogonal.

The norm of a real vector x′ = (x_1, x_2, …, x_n) is defined as

\[ \lVert x\rVert = \sqrt{x'x} = \Big(\sum_{i=1}^{n} x_i^2\Big)^{1/2} \]

A vector is said to be normal, or a unit vector, when its norm is unity, i.e., when x′x = 1.

Any non-null vector x can be changed into a unit vector by multiplying it by the scalar 1/√(x′x); i.e.,

\[ u = \frac{1}{\sqrt{x'x}}\,x \]

is the normalized form of x (because u′u = 1).

Non-null vectors X and Y are described as being orthogonal when X′Y = 0.

e.g., X′ = (1, 2, 2, 4) and Y′ = (6, 3, -2, -2) have X′Y = 0.

Two vectors are defined as orthonormal vectors when they are orthogonal and normal; i.e., u and v are orthonormal when u′u = 1 = v′v and u′v = 0.

e.g., u′ = (1/6)(1, 1, 3, 3, 4) and v′ = (-0.1, -0.7, -0.1, -0.4, 0.4) are orthonormal vectors.


The vectors of an orthonormal set are all normal, and pairwise orthogonal.

A matrix P of order r×c whose rows constitute an orthonormal set of vectors is said to have orthonormal rows, whereupon PP′ = I_r. But P′P is not necessarily an identity matrix I_c.

(e.g.)

\[ P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \qquad PP' = I_2 \quad\text{but}\quad P'P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \neq I_3 \]

Conversely, when P of order r×c has orthonormal columns, P′P = I_c, but PP′ may not be an identity matrix.

When PP′ = P′P = I, P is called an orthogonal matrix. Any two of the following conditions imply the third:

(i) P is square;
(ii) P′P = I (P has orthonormal columns);
(iii) PP′ = I (P has orthonormal rows); i.e., the rows p_i′ satisfy p_i′p_i = 1 for all i and p_i′p_j = 0 for i ≠ j = 1, 2, …, n.

(e.g.)

\[ A = \frac{1}{\sqrt{6}}\begin{bmatrix} \sqrt{2} & \sqrt{2} & \sqrt{2} \\ \sqrt{3} & -\sqrt{3} & 0 \\ 1 & 1 & -2 \end{bmatrix} \]

is an orthogonal matrix (easily verified!).
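A numpy sketch of the distinction between orthonormal rows and full orthogonality, using the P of this section and the 3×3 matrix as reconstructed above:

```python
import numpy as np

# P has orthonormal rows but not orthonormal columns.
P = np.array([[1., 0., 0.],
              [0., 1., 0.]])
print(np.allclose(P @ P.T, np.eye(2)))   # True:  PP' = I_2
print(np.allclose(P.T @ P, np.eye(3)))   # False: P'P != I_3

# The 3x3 example is fully orthogonal: AA' = A'A = I.
A = np.array([[np.sqrt(2),  np.sqrt(2), np.sqrt(2)],
              [np.sqrt(3), -np.sqrt(3), 0.        ],
              [1.,          1.,        -2.        ]]) / np.sqrt(6)
print(np.allclose(A @ A.T, np.eye(3)), np.allclose(A.T @ A, np.eye(3)))   # True True
```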

Quadratic Forms

\[ \sum_{i=1}^{n}(x_i - \bar{x})^2 = x'Cx \]

General form x′Ax: any sum of squares can be represented as x′Ax.

\[ x'Ax = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}\begin{bmatrix} 1 & 2 & 3 \\ 4 & 7 & 6 \\ 2 & -2 & 5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_1^2 + 2x_1x_2 + 3x_1x_3 + 4x_1x_2 + 7x_2^2 + 6x_2x_3 + 2x_1x_3 - 2x_2x_3 + 5x_3^2 \]
\[ = x_1^2 + x_1x_2(4+2) + x_1x_3(2+3) + 7x_2^2 + x_2x_3(-2+6) + 5x_3^2 \]

So, in general,

\[ x'Ax = \sum_i x_i^2\,a_{ii} + \sum_i\sum_{j>i} x_ix_j\,(a_{ij} + a_{ji}) \]


\[ x'Ax = x_1^2 + 7x_2^2 + 5x_3^2 + 6x_1x_2 + 5x_1x_3 + 4x_2x_3 \]

\[ = x'\begin{bmatrix} 1 & 1 & 1 \\ 5 & 7 & 0 \\ 4 & 4 & 5 \end{bmatrix}x \quad\text{or}\quad x'\begin{bmatrix} 1 & 2 & 3 \\ 4 & 7 & 6 \\ 2 & -2 & 5 \end{bmatrix}x \quad\text{or, if A is symmetric,}\quad x'\begin{bmatrix} 1 & 3 & \tfrac{5}{2} \\ 3 & 7 & 2 \\ \tfrac{5}{2} & 2 & 5 \end{bmatrix}x \]

For any particular quadratic form there is a unique symmetric matrix A for which the quadratic form can be expressed as x′Ax. When A is not symmetric, ½(A + A′) is symmetric. Hereafter, whenever we deal with a quadratic form x′Ax, we assume A = A′.
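Symmetrizing a matrix with ½(A + A′) leaves the quadratic form unchanged; a small numpy check using one of the non-symmetric matrices above (the test vector is made up):

```python
import numpy as np

A = np.array([[1., 1., 1.],
              [5., 7., 0.],
              [4., 4., 5.]])        # a non-symmetric matrix giving the quadratic form above
S = (A + A.T) / 2                   # its unique symmetric counterpart

x = np.array([2., -1., 3.])         # arbitrary (made-up) test vector
print(np.isclose(x @ A @ x, x @ S @ x))   # True: both matrices give the same quadratic form
print(np.allclose(S, S.T))                # True: S is symmetric
```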

(Positive definite matrices)

1. When x′Ax > 0 for all x other than x = 0, then x′Ax is a positive definite quadratic form and A = A′ is correspondingly a positive definite (p.d.) matrix.

(e.g.)

\[ x'Ax = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}\begin{bmatrix} 2 & 2 & 1 \\ 2 & 5 & 1 \\ 1 & 1 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = 2x_1^2 + 5x_2^2 + 2x_3^2 + 4x_1x_2 + 2x_1x_3 + 2x_2x_3 \]
\[ = (x_1 + 2x_2)^2 + (x_1 + x_3)^2 + (x_2 + x_3)^2 \]

2. When x′Ax ≥ 0 for all x, and x′Ax = 0 for some x ≠ 0, then x′Ax is a positive semi-definite quadratic form and hence A = A′ is a positive semi-definite (p.s.d.) matrix.

p.d. and p.s.d. matrices together are called non-negative definite (n.n.d.).

1

2 2 2

1 2 3 2 1 2 1 3 2 3

3

37 -2 -24 x

x Ax= x x x -2 13 -3 x =(x -2x ) +(6x -4x ) +(3x -x )

-24 -3 17 x

This is zero for x = 2 1 3

(e.g.)

n2

i

i=1

(x -x) x cx is p.s.d.

C=I-J is idempotent

p.s.d.
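Definiteness can also be inspected through eigenvalues; a numpy sketch using the p.d. and p.s.d. matrices of the two examples above:

```python
import numpy as np

# Positive definite example: all eigenvalues are strictly positive.
A_pd = np.array([[2., 2., 1.],
                 [2., 5., 1.],
                 [1., 1., 2.]])
print(np.linalg.eigvalsh(A_pd))             # all > 0

# Positive semi-definite example: one zero eigenvalue, and x'Ax = 0 at x = (2, 1, 3).
A_psd = np.array([[ 37.,  -2., -24.],
                  [ -2.,  13.,  -3.],
                  [-24.,  -3.,  17.]])
x = np.array([2., 1., 3.])
print(np.round(np.linalg.eigvalsh(A_psd), 8))   # smallest eigenvalue is 0 (up to rounding)
print(x @ A_psd @ x)                            # 0.0
```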


(Determinants)

\[ A = \begin{bmatrix} 7 & 3 \\ 4 & 6 \end{bmatrix}, \qquad |A| = \begin{vmatrix} 7 & 3 \\ 4 & 6 \end{vmatrix} = 7(6) - 3(4) = 30 \]

\[ |A| = \begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{vmatrix} = 1(-1)^{1+1}\begin{vmatrix} 5 & 6 \\ 8 & 10 \end{vmatrix} + 2(-1)^{1+2}\begin{vmatrix} 4 & 6 \\ 7 & 10 \end{vmatrix} + 3(-1)^{1+3}\begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = -3 \]

nth-order determinants:

\[ |A| = \sum_{j=1}^{n} a_{ij}(-1)^{i+j}|M_{ij}| \quad\text{for any } i, \]

when expanding by elements of a row.

(1) |A′| = |A|.
(2) If two rows of A are the same, |A| = 0:

(1) A = A

(2) If two rows of A are two same, A = 0

1 4 35 2 7 2 7 5

7 5 2 = -4 +3 =05 2 7 2 7 5

7 5 2

(3)Cofactors

C𝑖𝑗 = (−1)𝑖+𝑗|M𝑖𝑗| Where Mij is A with its ith row and jth column

deleted.

(4) Adding a multiple of a row (column) to a row (column) does NOT affect the value of the determinant.

\[ |A| = \begin{vmatrix} 1 & 3 & 2 \\ 8 & 17 & 21 \\ 2 & 7 & 1 \end{vmatrix} = 1(17-147) - 3(8-42) + 2(56-34) = 16 \]

Adding 4 times row 1 to row 2:

\[ \begin{vmatrix} 1 & 3 & 2 \\ 8+4 & 17+12 & 21+8 \\ 2 & 7 & 1 \end{vmatrix} = \begin{vmatrix} 1 & 3 & 2 \\ 12 & 29 & 29 \\ 2 & 7 & 1 \end{vmatrix} = 1(29-203) - 3(12-58) + 2(84-58) = 16 \]

(5) |AB| = |A| |B| when A and B are square and of the same order n.

(6) \[ \begin{vmatrix} P & 0 \\ X & Q \end{vmatrix} = |P|\,|Q| \]

and, for R and S square and of the same order n,

\[ \begin{vmatrix} 0 & R \\ -I & S \end{vmatrix} = |R| \]

\[ \begin{bmatrix} I & A \\ 0 & I \end{bmatrix}\begin{bmatrix} A & 0 \\ -I & B \end{bmatrix} = \begin{bmatrix} 0 & AB \\ -I & B \end{bmatrix} \]

\[ |A|\,|B| = \begin{vmatrix} A & 0 \\ -I & B \end{vmatrix} = \begin{vmatrix} 0 & AB \\ -I & B \end{vmatrix} = |AB| \]

(Corollaries)

(1) |AB| = |BA| (because |A| |B| = |B| |A|).


(2) |A²| = |A|² (each equals |A| |A|).
(3) For orthogonal A, |A|² = 1, so |A| = ±1 (because AA′ = I implies |A| |A′| = 1).
(4) For idempotent A, |A| = 0 or 1 (because A² = A implies |A|² = |A|).
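A numerical check of some of these determinant results (a numpy sketch; the matrices are ones used earlier in this section):

```python
import numpy as np

A = np.array([[1.,  3.,  2.],
              [8., 17., 21.],
              [2.,  7.,  1.]])
B = np.array([[1., 2.,  3.],
              [4., 5.,  6.],
              [7., 8., 10.]])

print(np.round(np.linalg.det(A)))    # 16, as computed above
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))   # |AB| = |A||B|
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(B @ A)))                  # |AB| = |BA|

C2 = np.eye(2) - np.ones((2, 2)) / 2          # an idempotent matrix (the centering matrix)
print(np.linalg.det(C2))                      # 0, consistent with |A| = 0 or 1 for idempotent A
```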

(Elementary row operations)

The elementary operator matrix P_21(4) adds 4 times row 1 to row 2:

\[ \begin{bmatrix} 1 & 3 & 2 \\ 12 & 29 & 29 \\ 2 & 7 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 3 & 2 \\ 8 & 17 & 21 \\ 2 & 7 & 1 \end{bmatrix} \]

E_{ij} = I with its ith and jth rows interchanged.

R_{ii}(λ) = I with its ith diagonal element replaced by λ.

(e.g.)

\[ E_{12}A = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 3 & 2 \\ 8 & 17 & 21 \\ 2 & 7 & 1 \end{bmatrix} = \begin{bmatrix} 8 & 17 & 21 \\ 1 & 3 & 2 \\ 2 & 7 & 1 \end{bmatrix} \]

\[ R_{33}(5)A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 5 \end{bmatrix}\begin{bmatrix} 1 & 3 & 2 \\ 8 & 17 & 21 \\ 2 & 7 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 2 \\ 8 & 17 & 21 \\ 10 & 35 & 5 \end{bmatrix} \]

|P_{ij}(λ)| = 1, |R_{ii}(λ)| = λ and |E_{ij}| = -1.

\[ \begin{vmatrix} 4 & 6 \\ 1 & 7 \end{vmatrix} = 2\begin{vmatrix} 2 & 3 \\ 1 & 7 \end{vmatrix} = 22 \]

In general, |A| + |B| ≠ |A + B|.

(Chapters 5, 6 and 7 of the matrix algebra book cover the material introduced above.)

Chapter 8 generalized inverses

Addition, subtraction and multiplication have already been dealt with.

Division does not exist in matrix algebra.

The concept of “dividing” by A is replaced by the concept of multiplying

by a matrix called the inverse of A.

The inverse of a square matrix A is a matrix whose product with A is the

identity matrix.

A-1 is the inverse of A (read "A-inverse": A to the power of minus one).

Ax = b has solution x = A-1b, where A-1A = I.

A-1A = A A-1 = I and A-1 unique for given A.

( Derivation of the inverse ) tedious!

\[ A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{bmatrix} \]

Derive the cofactors of each column. First column:

\[ (-1)^{1+1}\begin{vmatrix} 5 & 6 \\ 8 & 10 \end{vmatrix} = 2, \qquad (-1)^{2+1}\begin{vmatrix} 2 & 3 \\ 8 & 10 \end{vmatrix} = 4, \qquad (-1)^{3+1}\begin{vmatrix} 2 & 3 \\ 5 & 6 \end{vmatrix} = -3 \]

\[ |A| = 2(1) + 4(4) - 3(7) = -3 \]

Second column: 2, -11 and 6.
Third column: -3, 6 and -3.

Now consider the matrix obtained by replacing the elements of A by their cofactors, i.e.,

\[ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{bmatrix} \ \text{giving} \ \begin{bmatrix} 2 & 2 & -3 \\ 4 & -11 & 6 \\ -3 & 6 & -3 \end{bmatrix} \]

Transpose and multiply by 1/|A| to obtain the inverse:

\[ A^{-1} = \frac{1}{-3}\begin{bmatrix} 2 & 4 & -3 \\ 2 & -11 & 6 \\ -3 & 6 & -3 \end{bmatrix} \]

How to get the inverse in general?

\[ A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \]

Form a new matrix by replacing each element of A by its cofactor:

\[ \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix} \]

This is transposed, giving the adjugate (adjoint) matrix

\[ \begin{bmatrix} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{bmatrix}, \]

which, multiplied by the scalar 1/|A|, gives

\[ A^{-1} = \frac{1}{|A|}\begin{bmatrix} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{bmatrix} \]

If |A| = 0, A^{-1} does not exist.


So

A^{-1} = (1/|A|) × (A with every element replaced by its cofactor), transposed.

(1) A^{-1} can exist only when A is square.
(2) A^{-1} exists only if |A| is nonzero; when its determinant is zero, a square matrix is said to be singular.

Properties of the inverse:

If A is a square, nonsingular matrix, A^{-1} has the following properties.

(1) A^{-1}A = AA^{-1} = I.
(2) The inverse of A is unique (for if S were another inverse, different from A^{-1}, then SA = I, so SAA^{-1} = IA^{-1} and S = A^{-1}: a contradiction).
(3) |A^{-1}| = 1/|A|, because |A^{-1}| |A| = |A^{-1}A| = |I| = 1.
(4) The inverse matrix A^{-1} is nonsingular (since |A^{-1}| = 1/|A| ≠ 0).
(5) (A^{-1})^{-1} = A, because I = A^{-1}A gives (A^{-1})^{-1} = (A^{-1})^{-1}A^{-1}A = IA = A.
(6) (A′)^{-1} = (A^{-1})′, because I = AA^{-1} gives I = I′ = (AA^{-1})′ = (A^{-1})′A′.
(7) If A′ = A then (A^{-1})′ = A^{-1}, because (A^{-1})′ = (A′)^{-1} = A^{-1}.
(8) (AB)^{-1} = B^{-1}A^{-1}, because B^{-1}A^{-1}AB = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I.

(Some simple special cases)

\[ A = \begin{bmatrix} a & x \\ y & b \end{bmatrix} \ \text{ has } \ A^{-1} = \frac{1}{ab - xy}\begin{bmatrix} b & -x \\ -y & a \end{bmatrix} \ \text{ for } ab - xy \neq 0 \]

\[ \begin{bmatrix} 2 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 3 \end{bmatrix}^{-1} = \begin{bmatrix} \tfrac{1}{2} & 0 & 0 \\ 0 & \tfrac{1}{4} & 0 \\ 0 & 0 & \tfrac{1}{3} \end{bmatrix} \]

[ A | I ] → ⋯ → [ I | A^{-1} ], providing A^{-1} exists.
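The cofactor construction of the inverse can be verified numerically for the A used above; a minimal numpy sketch (the explicit loop over minors is just one straightforward way to form the cofactors):

```python
import numpy as np

A = np.array([[1., 2.,  3.],
              [4., 5.,  6.],
              [7., 8., 10.]])

# Cofactors: C[i, j] = (-1)^(i+j) |M_ij|, where M_ij is A with row i and column j deleted.
C = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        M = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(M)

A_inv = C.T / np.linalg.det(A)                 # inverse = (adjugate) / determinant
print(np.round(A_inv * np.linalg.det(A)))      # the adjugate [[2,4,-3],[2,-11,6],[-3,6,-3]]
print(np.allclose(A @ A_inv, np.eye(3)))       # True
print(np.allclose(A_inv, np.linalg.inv(A)))    # True: agrees with numpy's inverse
```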


(Chapter 6: Rank)

Ax = b has solution x = A^{-1}b only if A^{-1} exists, and A^{-1} exists only if |A| ≠ 0.

→ The notion of rank permits us to ascertain whether or not |A| is zero, without having to tediously expand |A| in full.

(Linear combinations of vectors)

(Refer to the earlier section on sums of outer products.)

With X = [x_1  x_2  ……  x_n] (columns x_i) and a the column vector with elements a_1, a_2, ……, a_n,

\[ Xa = a_1x_1 + a_2x_2 + \cdots + a_nx_n = \sum_{i=1}^{n} a_ix_i \]

Xa is a column vector, a linear combination of the columns of X. Similarly, b′X is a row vector: it is a linear combination of the rows of X.

AB is a matrix: its rows are linear combinations of the rows of B, and its columns are linear combinations of the columns of A.

(Linear transformations)

Xa is called the linear transformation of the vector a to the vector Xa, with X being the matrix of the transformation. y = Ax represents the linear transformation of x to y.

(Linear dependence & independence)

○1 Definitions

The product Xa is a vector, and it is a linear combination of the column vectors in X:

Xa = a1x1+a2x2+……+anxn

Linearly dependent vectors:

If there exists a vector a ≠ 0 such that a1x1+a2x2+……+anxn = 0, then, provided none of x1, x2, …, xn is null, those vectors are said to be linearly dependent vectors.

(Alternative) If Xa = 0 for some non-null a,


then the columns of X are linearly dependent vectors, provided none is null.

Linearly independent vectors:

If a = 0 is the only vector for which a1x1+a2x2+……+anxn=0, then

provided none of x1, x2, …, xn is null, those vectors are said to be linearly

independent vectors.

(Alternative)

If Xa = 0 only for a =0, then the columns of x are linearly

independent vectors.

To sum up , Xa = 0 being true for some a≠0 means the columns of x

are linearly dependent, whereas it being true only for a = 0 means they

are linearly independent.

(The properties of linearly dependent vectors)

(a) At least two of the a's are nonzero.

Because x1, x2, …, xp are linearly dependent, a1x1+a2x2+……+apxp = 0 with not all the a's being zero. Suppose only one a were nonzero, say a2; then a2x2 = 0, so a2 = 0 because x2 is not null: a contradiction. Therefore, more than one a is nonzero.

(b) Vectors are linear combinations of others.

Suppose that a1 and a2 are nonzero. Then

\[ x_1 + \Big(\frac{a_2}{a_1}\Big)x_2 + \cdots + \Big(\frac{a_p}{a_1}\Big)x_p = 0 \]

\[ x_1 = -\Big(\frac{a_2}{a_1}\Big)x_2 - \Big(\frac{a_3}{a_1}\Big)x_3 - \cdots - \Big(\frac{a_p}{a_1}\Big)x_p \]

i.e., x1 can be expressed as a linear combination of the other x's.

(c) Partitioning Matrices

(d) Zero determinants


Suppose p linearly dependent vectors of order p are used as columns of

a matrix.

→linear dependence of vectors implies that one vector can always be

expressed as a linear combination of the others.

→ determinant = 0

(e.g.) For x1, x2, x3 the columns of

\[ \begin{bmatrix} 3 & 0 & 2 \\ 9 & 5 & 1 \\ 9 & 5 & 1 \end{bmatrix}, \]

the columns are linearly dependent, since x1 = (3/2)x2 + (3/2)x3. Subtracting ((3/2)x2 + (3/2)x3) from x1 gives

\[ \begin{vmatrix} 0 & 0 & 2 \\ 0 & 5 & 1 \\ 0 & 5 & 1 \end{vmatrix} = 0 \]

(e) Inverse Matrices

When the columns (rows) of a square matrix are linearly dependent, that matrix has no inverse; it is singular, because |A| = 0.

(f) Testing for dependence

A simple test for linear dependence among p vectors of order p is to evaluate the determinant of the matrix formed by using the vectors as columns: a zero determinant means the vectors are linearly dependent; otherwise they are LIN (linearly independent).

Given a set of vectors, their dependence or independence can be ascertained by attempting to solve Xa = 0. If a solution other than a = 0 can be found (a non-null solution), the vectors are dependent; otherwise they are LIN.
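A numerical version of this test (a numpy sketch; the matrix below is a made-up example whose third column is the sum of the first two):

```python
import numpy as np

A = np.array([[1., 2.,  3.],
              [4., 5.,  9.],
              [7., 8., 15.]])        # column 3 = column 1 + column 2

print(np.linalg.det(A))              # ~0: zero determinant, so the columns are dependent
print(np.linalg.matrix_rank(A))      # 2 < 3: not of full rank

# A non-null solution of Aa = 0 exhibits the dependence explicitly.
a = np.array([1., 1., -1.])
print(A @ a)                         # [0. 0. 0.]
```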


Furthermore, for square X that has no null columns, the following are equivalent:
(i) the columns of X are linearly dependent;
(ii) Xa = 0 can be satisfied for a non-null a;
(iii) X is singular, i.e., X^{-1} does not exist.

(LIN vectors)

a. Nonzero determinants and inverse matrices. Equivalently:
(i) the columns of X are LIN;
(ii) Xa = 0 only for a = 0;
(iii) X is nonsingular, i.e., X^{-1} exists.

b. A max. number of LIN vectors.

Theorem: A set of LIN vectors of order n cannot contain more than n such vectors.

Corollary: When p vectors of order n are LIN, then p ≦ n.

(Proof) Let u1, u2, …, un be n LIN vectors of order n, and let un+1 be any other non-null vector of order n. We show that it and u1, u2, …, un are linearly dependent. Since U = [u1, u2, …, un] has LIN columns, |U| ≠ 0 and U^{-1} exists. Let q = -U^{-1}un+1 ≠ 0 (because un+1 ≠ 0), i.e., not all elements of q are zero. Then Uq + un+1 = 0, which can be rewritten as

q1u1 + q2u2 + … + qnun + un+1 = 0

with not all the q's being zero. → u1, u2, …, un+1 are linearly dependent.



Theorem:The number of LIN rows in a matrix is the same as the

number of LIN columns.

(The Rank of a Matrix)

Definition: The rank of a matrix is the number of linearly independent rows (and columns) in the matrix.

Notation The rank of A → rA or r(A)

If rA ≡ r(A) = k then A has k LIN rows and columns.

(Some properties of rank)

(i) rA is a positive integer.

(ii) r(A_{p×q}) ≤ p and r(A_{p×q}) ≤ q.

(iii) r(A_{n×n}) ≤ n.

(iv) When r_A = r ≠ 0, there is at least one square submatrix of A of order r that is nonsingular:

\[ A_{p\times q} = \begin{bmatrix} X_{r\times r} & Y_{r\times(q-r)} \\ Z_{(p-r)\times r} & W_{(p-r)\times(q-r)} \end{bmatrix} \]

with X_{r×r} nonsingular; all square submatrices of order greater than r are singular.

(v) When r(A_{n×n}) = n, then A is nonsingular and A^{-1} exists.

(vi) When r(A_{n×n}) < n, then A is singular and A^{-1} does not exist.

(vii) When r(A_{p×q}) = p < q, A is said to have full row rank, or to be of full row rank; its rank equals its number of rows.

(viii) When r(A_{p×q}) = q < p, A is said to have full column rank.

(ix) When r(A_{n×n}) = n, A is said to have full rank, or to be of full rank; its rank equals its order, it is nonsingular, and its inverse exists. It is said to be invertible.

Equivalent statements of the existence of A^{-1} for A of order n ("the inverse exists"):

1. A-1 exists

2. A is nonsingular

3. |A|≠0

4. A has full rank

5. rA = n

6. A has n LIN rows

7. A has n LIN columns

8. Ax = 0 has the sole solution x = 0.

Permutation Matrices

For example,

\[ M = \begin{bmatrix} 1 & 1 & 3 \\ 1 & 1 & 3 \\ 4 & 4 & 12 \\ 2 & 2 & 5 \end{bmatrix}, \qquad E_{24} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \]

\[ E_{24}M = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1 & 1 & 3 \\ 1 & 1 & 3 \\ 4 & 4 & 12 \\ 2 & 2 & 5 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 3 \\ 2 & 2 & 5 \\ 4 & 4 & 12 \\ 1 & 1 & 3 \end{bmatrix} \]

E24 is an identity matrix with its second and fourth rows

interchanged, and E24M is M with those same two rows interchanged.

E_{rs} is symmetric and orthogonal (E_{rs}′ = E_{rs} and E_{rs}E_{rs} = E_{rs}′E_{rs} = I).

In the same way that premultiplication of M by Ers interchanges rows

r and s of M, so does post multiplication interchange columns.

\[ E_{24}ME_{23} = \begin{bmatrix} 1 & 1 & 3 \\ 2 & 2 & 5 \\ 4 & 4 & 12 \\ 1 & 1 & 3 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 1 \\ 2 & 5 & 2 \\ 4 & 12 & 4 \\ 1 & 3 & 1 \end{bmatrix} \]

Consider


\[ A = \begin{bmatrix} 1 & 1 & 3 & 2 \\ 1 & 1 & 3 & 2 \\ 3 & 3 & 9 & 6 \\ 2 & 2 & 5 & 4 \\ 1 & 1 & 7 & 8 \end{bmatrix} \]

\[ PA = E_{25}E_{34}A = E_{25}\begin{bmatrix} 1 & 1 & 3 & 2 \\ 1 & 1 & 3 & 2 \\ 2 & 2 & 5 & 4 \\ 3 & 3 & 9 & 6 \\ 1 & 1 & 7 & 8 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 3 & 2 \\ 1 & 1 & 7 & 8 \\ 2 & 2 & 5 & 4 \\ 3 & 3 & 9 & 6 \\ 1 & 1 & 3 & 2 \end{bmatrix} \]

where

\[ P = E_{25}E_{34} = E_{25}\begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{bmatrix} \]

For Q = E_{24},

\[ PAQ = \begin{bmatrix} 1 & 2 & 3 & 1 \\ 1 & 8 & 7 & 1 \\ 2 & 4 & 5 & 2 \\ 3 & 6 & 9 & 3 \\ 1 & 2 & 3 & 1 \end{bmatrix} \]

P is a product of elementary permutation matrices (the E-matrices)

P is not necessarily symmetric, but it is always orthogonal. (Because it is

a product of orthogonal E-matrices)

So P^{-1} = P′, which is also a permutation matrix.

(P is defined as an identity matrix with its rows resequenced, it is also an

identity matrix with its columns resequenced.)
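Permutation matrices and their row/column interchanges can be illustrated with the M of this section; a minimal numpy sketch (E24 and E23 are built by re-ordering the rows of identity matrices):

```python
import numpy as np

M = np.array([[1, 1,  3],
              [1, 1,  3],
              [4, 4, 12],
              [2, 2,  5]])

E24 = np.eye(4, dtype=int)[[0, 3, 2, 1]]   # I_4 with rows 2 and 4 interchanged
E23 = np.eye(3, dtype=int)[[0, 2, 1]]      # I_3 with rows (equivalently columns) 2 and 3 interchanged

print(E24 @ M)         # rows 2 and 4 of M interchanged
print(E24 @ M @ E23)   # ... and then columns 2 and 3 interchanged
print(np.array_equal(E24 @ E24, np.eye(4, dtype=int)))   # True: E is symmetric and orthogonal
```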

Canonical Forms

The three types of elementary operator matrices:

(Row operations)

\[ E_{12}A = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{bmatrix} = \begin{bmatrix} 2 & 2 & 2 \\ 1 & 1 & 1 \\ 3 & 3 & 3 \end{bmatrix} \]


R_{ii}(λ) multiplies the ith row of A by λ:

\[ R_{22}(4)A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 \\ 8 & 8 & 8 \\ 3 & 3 & 3 \end{bmatrix} \]

P_{ij}(λ)A adds λ times the jth row of A to its ith row:

\[ P_{12}(\lambda)A = \begin{bmatrix} 1 & \lambda & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{bmatrix} = \begin{bmatrix} 1+2\lambda & 1+2\lambda & 1+2\lambda \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{bmatrix} \]

(Transposes) E_{ij}′ = E_{ij}, R_{ii}(λ)′ = R_{ii}(λ), and P_{ij}(λ)′ = P_{ji}(λ).

(Column operations)

Post multiplication by elementary operators performs similar

manipulations on the columns of A.

(e.g.)

\[ AP_{12}(\lambda)' = \begin{bmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ \lambda & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1+\lambda & 1 & 1 \\ 2+2\lambda & 2 & 2 \\ 3+3\lambda & 3 & 3 \end{bmatrix} \]

Inverses: E_{ij}^{-1} = E_{ij}, R_{ii}(λ)^{-1} = R_{ii}(1/λ), and P_{ij}(λ)^{-1} = P_{ij}(-λ).

(Rank and the elementary operators)

The rank of a matrix is unaffected when it is multiplied by an elementary operator:

r(EA) = r(A), and the same holds for the R-type and P-type operators.

The linear independence of the rows is unaffected, and the same number of rows will be linearly independent after forming the product. So

\[ r(A) = r(E_1A) = r(E_2E_1A) = r(E_3E_2E_1A) = \cdots = r(PA) \]

Rank is calculated by using sequences of elementary operators to change A until its rank is obvious.

(Equivalence)

When A is multiplied by elementary operator matrices, the product is said to be equivalent to A; e.g., B = PAQ → B ≅ A, where P and Q are products of elementary operators. Also A = P^{-1}BQ^{-1}, so A ≅ B. Thus r_A = r_B.

(Calculating Rank)

(e.g.)

\[ A = \begin{bmatrix} 1 & 2 & 4 & 3 \\ 3 & -1 & 2 & -2 \\ 5 & -4 & 0 & -7 \end{bmatrix} \]

Subtracting 3 times row 1 from row 2 and 5 times row 1 from row 3,

\[ A \cong \begin{bmatrix} 1 & 2 & 4 & 3 \\ 0 & -7 & -10 & -11 \\ 0 & -14 & -20 & -22 \end{bmatrix} \]

and then subtracting 2 times the new row 2 from the new row 3,

\[ A \cong \begin{bmatrix} 1 & 2 & 4 & 3 \\ 0 & -7 & -10 & -11 \\ 0 & 0 & 0 & 0 \end{bmatrix} = B, \qquad \text{rank} = 2, \qquad r(B) = r(A) \]

(Row operations) B = PA = (PI)A


= (E3E2E1I)A

P = E3E2E1I can be derived by carrying out on I the same row operations as were made on A to derive B:

\[ I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ -5 & 0 & 1 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 1 & -2 & 1 \end{bmatrix} = P \]

\[ PA = \begin{bmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 1 & -2 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 & 4 & 3 \\ 3 & -1 & 2 & -2 \\ 5 & -4 & 0 & -7 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 4 & 3 \\ 0 & -7 & -10 & -11 \\ 0 & 0 & 0 & 0 \end{bmatrix} \]

the same as before.

(Column operations)

\[ PA = B = \begin{bmatrix} 1 & 2 & 4 & 3 \\ 0 & -7 & -10 & -11 \\ 0 & 0 & 0 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -7 & -10 & -11 \\ 0 & 0 & 0 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -7 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} = C \]

Q is obtained by carrying out on an identity matrix the same column operations:

\[ I_4 \rightarrow \begin{bmatrix} 1 & -2 & -4 & -3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & -2 & -\tfrac{8}{7} & \tfrac{1}{7} \\ 0 & 1 & -\tfrac{10}{7} & -\tfrac{11}{7} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]

and finally

\[ Q = \begin{bmatrix} 1 & \tfrac{2}{7} & -\tfrac{8}{7} & \tfrac{1}{7} \\ 0 & -\tfrac{1}{7} & -\tfrac{10}{7} & -\tfrac{11}{7} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]


\[ PAQ = \begin{bmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 1 & -2 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 & 4 & 3 \\ 3 & -1 & 2 & -2 \\ 5 & -4 & 0 & -7 \end{bmatrix}\begin{bmatrix} 1 & \tfrac{2}{7} & -\tfrac{8}{7} & \tfrac{1}{7} \\ 0 & -\tfrac{1}{7} & -\tfrac{10}{7} & -\tfrac{11}{7} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} = C \]

(The equivalent canonical form)

Theorem (its importance is that it always exists): any non-null matrix A of rank r is equivalent to

\[ PAQ = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} = C \]

where I_r is the identity matrix of order r, and the null submatrices are of appropriate order to make C the same order as A. For A of order m × n, P and Q are nonsingular matrices of order m and n, respectively, being products of elementary operators.
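For the A used in the rank calculation above, the P and Q obtained there reduce A to its equivalent canonical form; a numpy check:

```python
import numpy as np

A = np.array([[1.,  2., 4.,  3.],
              [3., -1., 2., -2.],
              [5., -4., 0., -7.]])

P = np.array([[ 1.,  0., 0.],
              [-3.,  1., 0.],
              [ 1., -2., 1.]])

Q = np.array([[1.,  2/7,  -8/7,   1/7],
              [0., -1/7, -10/7, -11/7],
              [0.,  0.,    1.,    0. ],
              [0.,  0.,    0.,    1. ]])

C = P @ A @ Q
print(np.round(C, 10))              # [[1 0 0 0], [0 1 0 0], [0 0 0 0]]
print(np.linalg.matrix_rank(A))     # 2 = the order of I_r in the canonical form
```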