BMI II SS06 – Class 3 “Linear Algebra” Slide 1
Biomedical Imaging II
Class 3 – Mathematical Preliminaries: Elementary Linear Algebra
2/13/06
BMI II SS06 – Class 3 “Linear Algebra” Slide 2
System of linear equations:

[a11 a12 ... a1N; a21 a22 ... a2N; ...; aM1 aM2 ... aMN] [x1; x2; ...; xN] = [b1; b2; ...; bM]

Written out equation by equation:

a11 x1 + a12 x2 + ... + a1N xN = b1
a21 x1 + a22 x2 + ... + a2N xN = b2
...
aM1 x1 + aM2 x2 + ... + aMN xN = bM
BMI II SS06 – Class 3 “Linear Algebra” Slide 3
(Figure: the vector v drawn in the x-y plane.)

v = [1 3]^T = î + 3ĵ, i.e., v_x = 1, v_y = 3

Magnitude (norm): ||v|| = sqrt(v_x^2 + v_y^2) = sqrt(1^2 + 3^2) = sqrt(10) ≈ 3.16

Unit vector in the direction of v: v̂ = v/||v||
Vector concepts I
BMI II SS06 – Class 3 “Linear Algebra” Slide 4
(Figure: the vector v drawn in x-y-z space.)

v = [1 2 3]^T

Its norm follows from two applications of the Pythagorean theorem. The projection of v onto the x-y plane is u = [1 2 0]^T, with ||u||^2 = 1^2 + 2^2 = 5. Then

||v||^2 = ||u||^2 + v_z^2 = (sqrt(5))^2 + 3^2 = 14, so ||v|| = sqrt(14).
Vector concepts II
BMI II SS06 – Class 3 “Linear Algebra” Slide 5
For an N-dimensional vector v = [v1 v2 ... vN]^T:

||v|| = (v1^2 + v2^2 + ... + vN^2)^(1/2) = (Σ_{n=1..N} vn^2)^(1/2) = (v^T v)^(1/2)
Vector concepts III
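The norm formula above can be checked numerically. A minimal plain-Python sketch (the function name `norm` is mine, not from the slides; the deck itself would use Matlab's built-in `norm`):

```python
import math

def norm(v):
    # ||v|| = (v1^2 + v2^2 + ... + vN^2)^(1/2) = (v^T v)^(1/2)
    return math.sqrt(sum(x * x for x in v))

print(norm([1, 3]))     # sqrt(10), about 3.1623 (the 2-D example)
print(norm([1, 2, 3]))  # sqrt(14), about 3.7417 (the 3-D example)
```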
BMI II SS06 – Class 3 “Linear Algebra” Slide 6
(Figure: vectors v1 and v2 in the x-y plane, separated by angle θ.)

v1 · v2 = ||v1|| ||v2|| cos θ

In components:

v1^T v2 = [v1x v1y] [v2x; v2y] = v1x v2x + v1y v2y = v2^T v1

cos θ = v1^T v2 / (||v1|| ||v2||)

A vector dotted with itself gives its squared norm:

v1^T v1 = ||v1|| ||v1|| cos 0 = ||v1||^2
Vector dot products I
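Both dot-product identities above can be sketched in plain Python (function names `dot` and `angle_deg` are mine, not from the slides):

```python
import math

def dot(v1, v2):
    # v1^T v2 = sum of componentwise products
    return sum(a * b for a, b in zip(v1, v2))

def angle_deg(v1, v2):
    # cos(theta) = v1^T v2 / (||v1|| ||v2||)
    n1 = math.sqrt(dot(v1, v1))
    n2 = math.sqrt(dot(v2, v2))
    return math.degrees(math.acos(dot(v1, v2) / (n1 * n2)))

print(dot([1, 3], [1, 3]))        # 10, consistent with v^T v = ||v||^2 = 10
print(angle_deg([1, 0], [0, 1]))  # 90.0
```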
BMI II SS06 – Class 3 “Linear Algebra” Slide 7
(Figure: perpendicular vectors v1 and v2 in the x-y plane.)

cos 90° = 0
v1^T v2 = 0
v1 and v2 are orthogonal, v1 ⊥ v2

u1 = v1/||v1||, u2 = v2/||v2||: each of u1 and u2 is normalized (a unit vector).

u1^T u2 = 0
u1 and u2 are orthonormal
Vector dot products II
BMI II SS06 – Class 3 “Linear Algebra” Slide 8
(Figure: v2 projected onto v1, with angle θ between them; v = ||v||.)

The component of v2 along v1 is

||v2|| cos θ = ||v2|| · v1^T v2 / (||v1|| ||v2||) = v1^T v2 / ||v1||

The projection of v2 onto v1 is this component times the unit vector along v1:

(v1^T v2 / ||v1||) · v1/||v1|| = (v1^T v2 / v1^T v1) v1
Projections and components
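The projection formula translates directly into code; a plain-Python sketch (helper names are mine, not from the slides):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project(v2, v1):
    # projection of v2 onto v1: (v1^T v2 / v1^T v1) v1
    c = dot(v1, v2) / dot(v1, v1)
    return [c * x for x in v1]

p = project([3, 4], [1, 0])
print(p)  # [3.0, 0.0]: the x-component of [3, 4]

# the residual v2 - p is orthogonal to v1
p2 = project([2, 5], [1, 3])
r = [2 - p2[0], 5 - p2[1]]
print(abs(dot(r, [1, 3])) < 1e-9)  # True
```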
BMI II SS06 – Class 3 “Linear Algebra” Slide 9
A + B = C
cij = aij + bij
A - B = D
dij = aij - bij
Matrix sums and differences
BMI II SS06 – Class 3 “Linear Algebra” Slide 10
AB = C,  DE = F,  GH = J: a product is defined only when the number of columns of the first factor equals the number of rows of the second.

Entry (i, j) of C = AB is row i of A dotted with column j of B:

cij = [ai1 ai2 ... aiK] [b1j; b2j; ...; bKj] = ai1 b1j + ai2 b2j + ... + aiK bKj = Σ_{k=1..K} aik bkj
Matrix products I
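The cij = Σk aik bkj rule can be coded directly. A plain-Python sketch (Matlab's `*` does this natively; the matrices are those of the worked example on the following slides, with the signs as reconstructed here):

```python
def matmul(A, B):
    # c_ij = sum_k a_ik * b_kj; needs cols(A) == rows(B)
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, -1, 0, 7],
     [5, 3, -2, 1]]   # 2x4
B = [[6, -2, 1],
     [0, 2, -5],
     [8, -7, 4],
     [1, 3, 9]]       # 4x3
C = matmul(A, B)      # 2x3
print(C)  # [[19, 15, 70], [15, 13, -9]]
```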
BMI II SS06 – Class 3 “Linear Algebra” Slide 11
[2 -1 0 7; 5 3 -2 1] × [6 -2 1; 0 2 -5; 8 -7 4; 1 3 9]

Entry (1,1) is row 1 of the first factor dotted with column 1 of the second:

(2)(6) + (-1)(0) + (0)(8) + (7)(1) = 12 + 0 + 0 + 7 = 19
Matrix products II
BMI II SS06 – Class 3 “Linear Algebra” Slide 12
[2 -1 0 7; 5 3 -2 1] × [6 -2 1; 0 2 -5; 8 -7 4; 1 3 9] = [19 15 70; 15 13 -9]

Reversing the order, [6 -2 1; 0 2 -5; 8 -7 4; 1 3 9] × [2 -1 0 7; 5 3 -2 1] does not exist: the inner dimensions (3 columns vs. 2 rows) do not match.
TILT
Matrix multiplication is NOT commutative.
Case 1: AB = C, BA does not exist.
Matrix products III
BMI II SS06 – Class 3 “Linear Algebra” Slide 13
AB: [2 -1 0 7; 5 3 -2 1] × [-2 1; 2 -5; -7 4; 3 9] = [15 70; 13 -9]  (2×2)

BA: [-2 1; 2 -5; -7 4; 3 9] × [2 -1 0 7; 5 3 -2 1] = [1 5 -2 -13; -21 -17 10 9; 6 19 -8 -45; 51 24 -18 30]  (4×4)
Matrix multiplication is NOT commutative.
Case 2: AB = C, BA = D; C and D have different dimensions.
Matrix products IV
BMI II SS06 – Class 3 “Linear Algebra” Slide 14
AB: [2 -1 7; 5 3 1; 0 2 -5] × [6 -2 1; 8 -7 4; 1 3 9] = [11 24 61; 55 -28 26; 11 -29 -37]

BA: [6 -2 1; 8 -7 4; 1 3 9] × [2 -1 7; 5 3 1; 0 2 -5] = [2 -10 35; -19 -21 29; 17 26 -35]
Matrix multiplication is NOT commutative.
Case 3: AB = C, BA = D; A, B, C and D all are N×N, but C ≠ D.
Matrix products V
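Case 3 is easy to verify numerically. A plain-Python check using the 3×3 pair above (signs as reconstructed here; `matmul` is my helper, not from the slides):

```python
def matmul(A, B):
    # c_ij = sum_k a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, -1, 7], [5, 3, 1], [0, 2, -5]]
B = [[6, -2, 1], [8, -7, 4], [1, 3, 9]]
AB = matmul(A, B)
BA = matmul(B, A)
print(AB[0])     # [11, 24, 61]
print(AB == BA)  # False: same shapes, different entries
```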
BMI II SS06 – Class 3 “Linear Algebra” Slide 15
However, matrix multiplication is associative:
A(BC) = (AB)C.
And matrix multiplication is distributive over addition:
A(B + C) = AB + AC,
(B + C)A = BA + CA,
(s1 + s2)A = s1A + s2A.
Matrix products VI
BMI II SS06 – Class 3 “Linear Algebra” Slide 16
A .* B = C
cij = aij bij
Scalar multiplication of matrices
When using Matlab, we perform matrix multiplication with statements such as
>> C = A*B;
Matlab also lets us perform term-by-term multiplication of the elements in two matrices:
>> C = A.*B;
The latter is a very useful and convenient tool to have…, and is NOT a linear algebraic operation. (Look up the Hadamard product of two matrices.)
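For comparison, a plain-Python equivalent of Matlab's `.*` (the function name `hadamard` is mine):

```python
def hadamard(A, B):
    # term-by-term (Hadamard) product: c_ij = a_ij * b_ij
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

print(hadamard([[1, 2], [3, 4]], [[10, 20], [30, 40]]))
# [[10, 40], [90, 160]] -- NOT the linear algebraic product
```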
BMI II SS06 – Class 3 “Linear Algebra” Slide 17
A ./ B = C
cij = aij / bij
Scalar division of matrices
Matlab lets us perform term-by-term division of the elements in two matrices:
>> C = A./B;
This is a very useful and convenient tool to have…, and is NOT a linear algebraic operation.
Division is an undefined operation in linear algebra! However…
BMI II SS06 – Class 3 “Linear Algebra” Slide 18
Identity matrix:

I2 = [1 0; 0 1],  I3 = [1 0 0; 0 1 0; 0 0 1],  …, In.

Matrix inverse: B = A^-1 if and only if AB = I and BA = I.
Only square matrices can have inverses. Many square matrices don't have them.

Examples:

[2 -1 7; 5 3 1; 0 2 -5]^-1 = [-17/11 9/11 -2; 25/11 -10/11 3; 10/11 -4/11 1]

[6 -2 1; 8 -7 4; 1 3 9]^-1 = [75/283 -21/283 1/283; 68/283 -53/283 16/283; -31/283 20/283 26/283]

Given any square matrix M, IM = MI = M. The 1s of an identity matrix lie along the matrix's main diagonal.
Inverse of a matrix
BMI II SS06 – Class 3 “Linear Algebra” Slide 19
Multiplication of inverses: if C = AB and A^-1 and B^-1 both exist, then C^-1 = B^-1A^-1. (Conversely, if A and B are square and C^-1 exists, then A^-1 and B^-1 must exist as well.)
CC-1 = (AB)(B-1A-1) = A(BB-1)A-1 = AIA-1 = AA-1 = I
C-1C = (B-1A-1)(AB) = B-1(A-1A)B = B-1IB = B-1B = I
An analogous rule holds for matrix transposes: if C = AB, then C^T = B^T A^T.
Products of matrix inverses or transposes
BMI II SS06 – Class 3 “Linear Algebra” Slide 20
A square matrix that is equal to its own transpose, A = AT, is a symmetric matrix.
Another way of saying the same thing: a square matrix for which aij = aji for all i, j.
E.g.,
[3 0 5; 0 2 4; 5 4 1]
Each half of a symmetric matrix is the reflection of the other across the main diagonal.
These elements are below the main diagonal.
These elements are above the main diagonal.
If two matrices A and B are symmetric, their product C = AB is symmetric if and only if A and B commute (AB = BA):

C^T = (AB)^T = B^T A^T = BA,

so C = C^T exactly when AB = BA. In index form, cij = Σ_{k=1..N} aik bkj, while (C^T)ij = cji = Σ_{k=1..N} ajk bki; when A and B are symmetric the latter equals (BA)ij.
The ith row is equal to the (transpose of) the ith column
Symmetric matrices
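A quick numeric check of the discussion above (the 2×2 matrices are my examples, not from the slides): two symmetric matrices whose product is not symmetric, together with the identity (AB)^T = B^T A^T = BA.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[1, 2], [2, 3]]   # symmetric
B = [[0, 1], [1, 0]]   # symmetric
AB = matmul(A, B)
print(AB)                             # [[2, 1], [3, 2]] -- not symmetric
print(transpose(AB) == matmul(B, A))  # True: (AB)^T = B^T A^T = BA
```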
BMI II SS06 – Class 3 “Linear Algebra” Slide 21
Formally, if Ax = b and A-1 exists, then
A-1Ax = A-1b
A-1b = (A-1A)x = Ix = x
In practice, we don’t actually compute solutions to linear systems this way.
Formal solution, system of linear equations
BMI II SS06 – Class 3 “Linear Algebra” Slide 22
Given: a set of N vectors v1, v2, …, vN and N scalar constants s1, s2, …, sN. The vector
u = s1v1 + s2v2 + … + sNvN
is a linear combination of v1, v2, …, vN.
Given: a set of N vectors v1, v2, …, vN. If none of the vectors in the set can be expressed as a linear combination of the others, then v1, v2, …, vN are linearly independent.
If any of v1, v2, …, vN is equal to a linear combination of the others, then the vectors are linearly dependent.
The set of all possible linear combinations of v1, v2, …, vN is called the span of these vectors. If v1, v2, …, vN are linearly independent, they are a basis for their span.
Linear algebraic definitions I
BMI II SS06 – Class 3 “Linear Algebra” Slide 23
If v1, v2, …, vN are M-dimensional vectors (i.e., M×1 matrices), then v1, v2, …, vN can not be linearly independent if N > M. If N ≤ M, then v1, v2, …, vN may be linearly independent.
Examples (standard basis vectors):

In the plane, x̂ = [1; 0] and ŷ = [0; 1] are linearly independent: neither is a scalar multiple of the other.

In 3-D space, x̂ = [1; 0; 0], ŷ = [0; 1; 0], ẑ = [0; 0; 1] are linearly independent.
Linear independence
BMI II SS06 – Class 3 “Linear Algebra” Slide 24
A set of N linearly independent M-dimensional vectors spans an N-dimensional vector space.
x
y
z
[1 -1 0]T
[1 1 -2]T
[1 1 1]T
The vectors [1 -1 0]^T and [1 1 -2]^T span a two-dimensional subspace of R^3. They are a basis for the subspace consisting of all linear combinations s1[1 -1 0]^T + s2[1 1 -2]^T.
The one-dimensional subspace s1[1 1 1]^T is the orthogonal complement of the preceding 2-D subspace.
Linear algebraic definitions II
BMI II SS06 – Class 3 “Linear Algebra” Slide 25
Let A be an N×N square matrix. The number of linearly independent rows is the rank of A. If all N rows are linearly independent, A is of full or maximum rank.
The number of linearly independent rows is equal to the number of linearly independent columns.
If rank(A) < N, A is singular. If rank(A) = N, A is non-singular.
Let B be a M×N rectangular matrix. The maximum possible rank of B is the number of rows or of columns, whichever is smaller. If M < N, the rank of B is the number of linearly independent rows. If M > N, the rank of B is the number of linearly independent columns.
Linear algebraic definitions III
BMI II SS06 – Class 3 “Linear Algebra” Slide 26
System of linear equations:

[a11 a12 ... a1N; a21 a22 ... a2N; ...; aM1 aM2 ... aMN] [x1; x2; ...; xN] = [b1; b2; ...; bM]

What is a Solution?
"Row picture": each equation corresponds to an (N - 1)-dimensional "plane" in N-dimensional space (R^N). The Solution, if it exists, is the point at which all M planes intersect.
"Column picture": the Solution, if it exists, is that linear combination of the columns of A which is equal to b:

x1 [a11; a21; ...; aM1] + x2 [a12; a22; ...; aM2] + ... + xN [a1N; a2N; ...; aMN] = [b1; b2; ...; bM]
BMI II SS06 – Class 3 “Linear Algebra” Slide 27
When is there a Solution?
A(M×N)x(N×1) = b (M×1)
If M = N, and A is of full rank (i.e., A is non-singular, rows/columns of A span N), then:
1) A-1 exists
2) Ax = b has a unique Solution.
3) Ax = b is a fully or completely determined system.
If M < N, and A is of full rank, then:
1) Ax = b has infinitely many Solutions.
2) A has infinitely many N×M right inverses, i.e., matrices AR such that AAR = I(M×M). Every Solution corresponds to one of these: x = ARb.
3) Of these right inverses, there is one, known as the generalized inverse or pseudoinverse A+, that in some particular sense gives the "best" Solution, x = A+b.
4) Ax = b is an underdetermined system.
BMI II SS06 – Class 3 “Linear Algebra” Slide 28
When is there a Solution?
A(M×N)x(N×1) = b (M×1)
If M > N, and A is of full rank, then:
1) Ax = b sometimes has a Solution, but most often has noSolutions.
2) A has infinitely many N×M left inverses, i.e., matrices AL such that ALA = I(N×N). Every vector x = ALb is a "solution."
3) Of these left inverses, there is one, known as the generalized inverse or pseudoinverse A+, that in some particular sense gives the "best" "solution," x = A+b.
4) Ax = b is an overdetermined system.
If A is of less than full rank, then:
1) Ax = b has no Solutions.
2) A has a pseudoinverse, A+, that in some particular sense gives the “best” “solution,” x = A+b.
3) Neither product AA+ nor A+A is equal to an identity matrix. That is, A does not have either a left or a right inverse.
BMI II SS06 – Class 3 “Linear Algebra” Slide 29
Gaussian elimination I

[1 1 1; 2 2 5; 4 6 8] [u; v; w] = [5; -2; 9]

Multipliers for the first column: 2÷1 = 2 (row 2), 4÷1 = 4 (row 3).

Row 2 ← row 2 - 2·(row 1): [2-2(1), 2-2(1), 5-2(1) | -2-2(5)] = [0, 0, 3 | -12]:

[1 1 1; 0 0 3; 4 6 8] [u; v; w] = [5; -12; 9]

Row 3 ← row 3 - 4·(row 1): [4-4(1), 6-4(1), 8-4(1) | 9-4(5)] = [0, 2, 4 | -11]:

[1 1 1; 0 0 3; 0 2 4] [u; v; w] = [5; -12; -11]

The (2,2) pivot is zero, so exchange rows 2 and 3:

[1 1 1; 0 2 4; 0 0 3] [u; v; w] = [5; -11; -12]

Back-substitution:

3w = -12
w = -4
2v + 4w =
2v + 4(-4) =
2v - 16 = -11
2v = 5
v = 5/2
u + v + w = u + 5/2 - 4 = u - 3/2 = 5
u = 13/2

[u; v; w] = [13/2; 5/2; -4]
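The elimination-plus-back-substitution procedure above can be sketched in plain Python (with partial pivoting standing in for the slide's explicit row exchange; the function name `solve` is mine, not from the slides):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting, then back-substitution
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for col in range(n):
        # pivot: bring the largest remaining entry in this column to the top
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back-substitution, bottom row first
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x = solve([[1, 1, 1], [2, 2, 5], [4, 6, 8]], [5, -2, 9])
print(x)  # [6.5, 2.5, -4.0], i.e. [13/2, 5/2, -4]
```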
BMI II SS06 – Class 3 “Linear Algebra” Slide 30
x = [13/2; 5/2; -4]:

[1 1 1; 2 2 5; 4 6 8] [13/2; 5/2; -4] = [13/2 + 5/2 - 4; 13 + 5 - 20; 26 + 15 - 32] = [5; -2; 9] ✓
Always check Solution!
BMI II SS06 – Class 3 “Linear Algebra” Slide 31
[1 1 1; 2 2 5; 4 6 8] [u; v; w] = [1; 0; 0], [0; 1; 0], [0; 0; 1]

Solving with each of the three right-hand sides in turn yields the three columns of the inverse.
Gaussian elimination can be used, if one is so inclined, to find the inverse of a non-singular square matrix.
Gaussian elimination II
BMI II SS06 – Class 3 “Linear Algebra” Slide 32
What happens if we try to use Gaussian elimination to solve Ax = b, but A is singular?
[1 1 1; 2 3 4; 4 6 8] [u; v; w] = [5; -2; 9]

After first round of elimination:

[1 1 1; 0 1 2; 0 2 4] [u; v; w] = [5; -12; -11]

After second round of elimination:

[1 1 1; 0 1 2; 0 0 0] [u; v; w] = [5; -12; 13]

The last row reads 0 = 13: the second and third equations are inconsistent.
There is no Solution!
Gaussian elimination III
BMI II SS06 – Class 3 “Linear Algebra” Slide 33
What if Ax = b is of maximal rank, but underdetermined?

[1 1 1; 4 6 8] [u; v; w] = [5; 9]

After elimination:

[1 1 1; 0 2 4] [u; v; w] = [5; -11]

This is as far as we can go!

2v + 4w = -11 → v = -2w - 11/2
u + v + w = u + (-2w - 11/2) + w = u - w - 11/2 = 5 → u = w + 21/2

Setting the free variable w = s:

[u; v; w] = [21/2; -11/2; 0] + s [1; -2; 1]

Both remaining variables are defined in terms of w, the free variable.
Gaussian elimination IV
BMI II SS06 – Class 3 “Linear Algebra” Slide 34
[u; v; w] = [21/2; -11/2; 0] + s [1; -2; 1]:

s = 0: [u v w]T = [10.5 -5.5 0]T
s = -1: [u v w]T = [9.5 -3.5 -1]T
s = -2: [u v w]T = [8.5 -1.5 -2]T

One particular choice of s is the minimum norm, or minimum power, or pseudoinverse, solution.
Gaussian elimination V
BMI II SS06 – Class 3 “Linear Algebra” Slide 35
Ax = b:  [1 1 1; 4 6 8] [u; v; w] = [5; 9]

Formally (treating A^T as if it were invertible):

x = A^-1 b
(A^T)^-1 x = (A^T)^-1 A^-1 b = (AA^T)^-1 b
A^T (A^T)^-1 x = x = A^T (AA^T)^-1 b

A+ ≡ A^T (AA^T)^-1

For this system:

AA^T = [1 1 1; 4 6 8] [1 4; 1 6; 1 8] = [3 18; 18 116], (AA^T)^-1 = [29/6 -3/4; -3/4 1/8]

A+ = [1 4; 1 6; 1 8] [29/6 -3/4; -3/4 1/8] = [11/6 -1/4; 1/3 0; -7/6 1/4]
Gaussian elimination VI
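The A+ = A^T (AA^T)^-1 computation can be reproduced numerically. A plain-Python sketch for this 2×3 example (helper names are mine, not from the slides):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(M):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 1, 1], [4, 6, 8]]
At = transpose(A)
A_plus = matmul(At, inv2(matmul(A, At)))  # A^T (A A^T)^{-1}
x = matmul(A_plus, [[5], [9]])            # minimum-norm "solution"
print([round(v[0], 4) for v in x])        # [6.9167, 1.6667, -3.5833] = [83/12, 5/3, -43/12]
```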
BMI II SS06 – Class 3 “Linear Algebra” Slide 36
Ax = b:  [1 1 1; 4 6 8] [u; v; w] = [5; 9]

A+ = [11/6 -1/4; 1/3 0; -7/6 1/4]

x+ = A+ b = [11/6 -1/4; 1/3 0; -7/6 1/4] [5; 9] = [83/12; 5/3; -43/12]

Corresponds to s = -43/12
Gaussian elimination VII
BMI II SS06 – Class 3 “Linear Algebra” Slide 37
What if Ax = b is of full rank, but overdetermined?

E.g., fitting a straight line y = ax + b to N data points (xn, yn):

[x1 1; x2 1; ...; xN 1] [a; b] = [y1; y2; ...; yN]
Gaussian elimination VIII
BMI II SS06 – Class 3 “Linear Algebra” Slide 38
What if Ax = b is of full rank, but overdetermined?
Gaussian elimination VIII
BMI II SS06 – Class 3 “Linear Algebra” Slide 39
What if Ax = b is of full rank, but overdetermined?
Ax = b; A(M×N), M > N
Let Y be any N×M matrix for which the product YA (N×N) is non-singular (i.e., invertible). Then:
YAx = Yb
(YA)-1YAx = x = (YA)-1Yb
Thus (YA)-1Y is a left inverse of A.
Notice that the x so obtained does not solve the original system. (How could it? Ax = b does not have a Solution!)
Is there a particular choice of Y that gives us a "solution" that is in some sense the best?
Gaussian elimination IX
BMI II SS06 – Class 3 “Linear Algebra” Slide 40
“solution” to an overdetermined system
Optimal choice for Y turns out to be Y = A^T:
A+ = (A^T A)^-1 A^T
x = A+ b = (A^T A)^-1 A^T b
In what sense is this “solution” optimal, or best?
(Figure: b lies outside the column space of A, s1a1 + s2a2 + …; Ax+ is the projection of b onto that column space.)
Gaussian elimination X
BMI II SS06 – Class 3 “Linear Algebra” Slide 41
Overdetermined system example I
[1 1 0 0; 0 0 1 1; 0 1 0 0; 1 0 0 1; 0 0 1 0; 0 1 0 1; 1 0 1 0; 0 0 0 1; 0 1 1 0; 1 0 0 0] [x1; x2; x3; x4] = [4; 6; 3; 3; 4; 5; 5; 2; 7; 1]

Ten equations in four unknowns.
BMI II SS06 – Class 3 “Linear Algebra” Slide 42
Multiply both sides by A^T:

A^T = [1 0 0 1 0 0 1 0 0 1; 1 0 1 0 0 1 0 0 1 0; 0 1 0 0 1 0 1 0 1 0; 0 1 0 1 0 1 0 1 0 0]

A^T A x = A^T b:

[4 1 1 1; 1 4 1 1; 1 1 4 1; 1 1 1 4] [x1; x2; x3; x4] = [13; 19; 22; 16]
Instead of explicitly computing the pseudoinverse, it is more efficient to use Gaussian elimination to solve the 4×4 system.
Overdetermined system example II
BMI II SS06 – Class 3 “Linear Algebra” Slide 43
Elimination reduces the augmented system:

[4 1 1 1 | 13; 1 4 1 1 | 19; 1 1 4 1 | 22; 1 1 1 4 | 16]
→ [4 1 1 1 | 13; 0 15/4 3/4 3/4 | 63/4; 0 0 18/5 3/5 | 78/5; 0 0 0 7/2 | 7]

Back-substitution (clearing fractions where convenient):

(7/2)x4 = 7 → x4 = 2
6x3 + x4 = 6x3 + 2 = 26 → x3 = 4
5x2 + x3 + x4 = 5x2 + 4 + 2 = 5x2 + 6 = 21 → x2 = 3
4x1 + x2 + x3 + x4 = 4x1 + 3 + 4 + 2 = 4x1 + 9 = 13 → x1 = 1
Overdetermined system example III
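The whole normal-equations route can be sketched in plain Python (helper names are mine; the deck itself would do this in Matlab):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def solve(A, b):
    # Gaussian elimination with partial pivoting + back-substitution
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[1, 1, 0, 0], [0, 0, 1, 1], [0, 1, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0],
     [0, 1, 0, 1], [1, 0, 1, 0], [0, 0, 0, 1], [0, 1, 1, 0], [1, 0, 0, 0]]
b = [4, 6, 3, 3, 4, 5, 5, 2, 7, 1]
At = transpose(A)
AtA = matmul(At, A)   # the 4x4 normal-equations matrix
Atb = [sum(c * bi for c, bi in zip(col, b)) for col in At]
x = solve(AtA, Atb)
print([round(v, 6) for v in x])  # [1.0, 3.0, 4.0, 2.0]
```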
BMI II SS06 – Class 3 “Linear Algebra” Slide 44
Some systems have a Solution, but…
[1 2; 10 20] [x1; x2] = [2; 30] → elimination yields [1 2 | 2; 0 0 | 10]: no solution.

[1 2; 10 21] [x1; x2] = [2; 30] → [1 2 | 2; 0 1 | 10] → [x1; x2] = [-18; 10]

[1 2; 10 19] [x1; x2] = [2; 30] → [1 2 | 2; 0 -1 | 10] → [x1; x2] = [22; -10]
The second and third systems are ill-conditioned. (The first is ill-posed, but we'll hold off discussion of that concept for another time.) A small change in A yields a large change in x.
The conditioning of a system is intimately related to the angles between the rows/columns of A. The smaller the angle between any two rows or columns, the more ill-conditioned the system is. The closer all rows/columns are to being orthogonal, the more well-conditioned.
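The sensitivity is easy to see directly: solving the second and third 2×2 systems by Cramer's rule (the helper `solve2` is mine, not from the slides):

```python
def solve2(A, b):
    # Cramer's rule for a 2x2 system
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return [(b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - b[0] * a21) / det]

print(solve2([[1, 2], [10, 21]], [2, 30]))  # [-18.0, 10.0]
print(solve2([[1, 2], [10, 19]], [2, 30]))  # [22.0, -10.0]
# changing a single coefficient (21 -> 19) flips the solution completely
```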
BMI II SS06 – Class 3 “Linear Algebra” Slide 45
Significance of angle between x and b
Given an arbitrary N×N matrix A and N×1 vector x: ordinarily, b = Ax is different from x in both magnitude and direction.
x
b
However, for any A there will always be some particular directions such that b will be parallel to x (i.e., b is a simple scalar multiple of x, or Ax = λx) if x lies in one of these directions.
An x that satisfies Ax = λx is an eigenvector, and λ is the corresponding eigenvalue.
BMI II SS06 – Class 3 “Linear Algebra” Slide 46
Significance of eigenvalues and eigenvectors
An N×N A always has N eigenvalues.
If A is symmetric, and λ1 and λ2 are two distinct eigenvalues, the corresponding eigenvectors x1 and x2 are necessarily orthogonal.
If λ1 = λ2, we can always use the method described earlier to subtract off x1's projection onto x2 from x1.
If A is not symmetric, then its eigenvectors generally are not mutually orthogonal. But recall that the matrices AAT and ATA are always symmetric.
The square roots of the eigenvalues of AA^T or A^T A are the singular values of A. The eigenvectors of AA^T or A^T A are the singular vectors of A.
Computation of the eigenvalues and eigenvectors of AAT and ATA underlies a very useful linear algebraic technique called singular value decomposition (SVD).
SVD is the method that allows us to, among other things, tackle the one case we have not yet seen an explicit example of: finding the “solution” of a linear system when A is not of full rank.
BMI II SS06 – Class 3 “Linear Algebra” Slide 47
What happens if we try to use Gaussian elimination to solve Ax = b, but A is singular?
[1 1 1; 2 3 4; 4 6 8] [u; v; w] = [5; -2; 9]

After the second round of elimination:

[1 1 1; 0 1 2; 0 0 0] [u; v; w] = [5; -12; 13]

The last row reads 0 = 13: the second and third equations are inconsistent.
There is no Solution!

Gaussian elimination III (cont.)
But there is a pseudoinverse, A+, which we can find by using SVD:

A = [1 1 1; 2 3 4; 4 6 8],  A+ = [11/6 -1/10 -1/5; 1/3 0 0; -7/6 1/10 1/5]
BMI II SS06 – Class 3 “Linear Algebra” Slide 48
As indicated, for this case AA+ ≠ I and A+A ≠ I:
Gaussian elimination III (cont.)
A A+ = [1 1 1; 2 3 4; 4 6 8] [11/6 -1/10 -1/5; 1/3 0 0; -7/6 1/10 1/5] = [1 0 0; 0 1/5 2/5; 0 2/5 4/5]

A+ A = [11/6 -1/10 -1/5; 1/3 0 0; -7/6 1/10 1/5] [1 1 1; 2 3 4; 4 6 8] = [5/6 1/3 -1/6; 1/3 1/3 1/3; -1/6 1/3 5/6]
What is the pseudoinverse “solution,” and what is its significance?
x+ = A+ b = [11/6 -1/10 -1/5; 1/3 0 0; -7/6 1/10 1/5] [5; -2; 9] = [227/30; 5/3; -127/30]

b+ = A x+ = [1 1 1; 2 3 4; 4 6 8] [227/30; 5/3; -127/30] = [5; 16/5; 32/5]
BMI II SS06 – Class 3 “Linear Algebra” Slide 49
We are not surprised that b+ ≠ b, because we already knew that the original system has no Solution.
Gaussian elimination III (cont.)
That is, that no linear combination of the columns of A is equal to b.
b+ = [5; 16/5; 32/5], whereas b = [5; -2; 9]:

||A x+ - b|| = ||b+ - b|| = sqrt((5 - 5)^2 + (16/5 + 2)^2 + (32/5 - 9)^2) = sqrt(0 + (26/5)^2 + (-13/5)^2) ≈ 5.8138
However, the “solution” x+ gives us that linear combination of columns of A which is closest to b, in the sense of minimizing the distance between Ax and b.
BMI II SS06 – Class 3 “Linear Algebra” Slide 50
Homogeneous Linear System: Ax = 0
[1 1 1; 2 2 5; 4 6 8] [u; v; w] = [0; 0; 0]
Even without performing Gaussian elimination + backsubstitution, we know right away that the unique Solution is [u v w]T = [0 0 0]T.
How do we know this?
1) For any matrix A with finite elements, it is the case that A·0 = 0.
2) In the specific system shown here, A is non-singular, which means that there is one and only one Solution.
BMI II SS06 – Class 3 “Linear Algebra” Slide 51
Homogeneous Linear System: Ax = 0
[1 1 1; 2 3 4; 4 6 8] [u; v; w] = [0; 0; 0]

After first round of elimination:

[1 1 1; 0 1 2; 0 2 4] [u; v; w] = [0; 0; 0]

After second round of elimination:

[1 1 1; 0 1 2; 0 0 0] [u; v; w] = [0; 0; 0]

v + 2w = 0 → v = -2w
u + v + w = u - 2w + w = u - w = 0 → u = w

[u; v; w] = s [1; -2; 1]
BMI II SS06 – Class 3 “Linear Algebra” Slide 52
Homogeneous Linear System: Ax = 0
[1 1 1; 2 3 4; 4 6 8] · s[1; -2; 1] = s [1 - 2 + 1; 2 - 6 + 4; 4 - 12 + 8] = [0; 0; 0]
s[1 -2 1]T is the nullspace of the matrix A.
A non-singular matrix’s nullspace consists of only the 0 vector. Only singular matrices have nullspaces with a non-zero number of dimensions.
Suppose A represents the action of a linear system, such as an instrument used to detect some type of physical signal. The nullspace is a class of input signals that the device cannot detect or measure.
BMI II SS06 – Class 3 “Linear Algebra” Slide 53
Homogeneous Linear System: Ax = 0
Recall definition of eigenvectors and eigenvalues:
Ax = λx, x ≠ 0.
Then Ax - λx = Ax - λIx = (A - λI)x = 0.
That is, the eigenvalues are those specific values of λ for which the matrix A - λI is singular, and the eigenvectors are the corresponding nullspaces.
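A small numeric illustration of the eigenvalue definition (the 2×2 matrix is my example, not from the slides): for a symmetric A, eigenvectors belonging to distinct eigenvalues come out orthogonal, as stated earlier.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2, 1], [1, 2]]   # symmetric
x1 = [1, 1]            # eigenvector with eigenvalue 3
x2 = [1, -1]           # eigenvector with eigenvalue 1
print(matvec(A, x1))   # [3, 3] = 3 * x1
print(matvec(A, x2))   # [1, -1] = 1 * x2
print(sum(a * b for a, b in zip(x1, x2)))  # 0: x1 and x2 are orthogonal
```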