Chapter 3  Vector Spaces
3.1 Vectors in Rn
3.2 Vector Spaces
3.3 Subspaces of Vector Spaces
3.4 Spanning Sets and Linear Independence
3.5 Basis and Dimension
3.6 Rank of a Matrix and Systems of Linear Equations
3.7 Coordinates and Change of Basis
The idea of vectors dates back to the early 1800s, but the generality of the concept waited until Peano's work in 1888. It took many years to understand the importance and extent of the ideas involved. The underlying idea can be used to describe the forces and accelerations in Newtonian mechanics, the potential functions of electromagnetism, the states of systems in quantum mechanics, the least-squares fitting of experimental data, and much more.
3.1 Vectors in Rn

n-space: Rn is defined to be the set of all ordered n-tuples of real numbers (x1, x2, ..., xn).

The idea of a vector is far more general than the picture of a line with an arrowhead attached to its end. A short answer is: "A vector is an element of a vector space."

An ordered n-tuple (x1, x2, ..., xn) can be viewed in two ways:
(1) as a point in Rn with the xi's as its coordinates;
(2) as a vector in Rn with the xi's as its components.

Ex: In R2, the pair (x1, x2) can be viewed either as a point with coordinates x1 and x2, or as a vector from the origin (0, 0) with components x1 and x2.
Note: A vector space is some set of things for which the operation of addition and the operation of multiplication by a scalar are defined.
You don't necessarily have to be able to multiply two vectors by each other, or even to be able to define the length of a vector, though those are very useful operations.
The common example of directed line segments (arrows) in 2D or 3D fits this idea, because you can add such arrows by the parallelogram law and you can multiply them by numbers, changing their length (and reversing direction for negative numbers).
A vector space is a set whose elements are called "vectors" and on which two operations are defined: you can add vectors to each other, and you can multiply them by scalars (numbers). These operations must obey certain simple rules, the axioms for a vector space. A complete definition of a vector space requires pinning down these properties of the operations and making the concept of vector space precise.
Let u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) be two vectors in Rn.

Equality: u = v if and only if u1 = v1, u2 = v2, ..., un = vn

Vector addition (the sum of u and v): u + v = (u1 + v1, u2 + v2, ..., un + vn)

Scalar multiplication (the scalar multiple of u by c): cu = (cu1, cu2, ..., cun)
Negative: -u = (-1)u = (-u1, -u2, -u3, ..., -un)

Difference: u - v = u + (-1)v = (u1 - v1, u2 - v2, u3 - v3, ..., un - vn)

Zero vector: 0 = (0, 0, ..., 0)

Notes:
(1) The zero vector 0 in Rn is called the additive identity in Rn.
(2) The vector -v is called the additive inverse of v.
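The componentwise operations above can be sketched in a few lines of Python (an illustrative helper, not part of the text; tuples stand in for vectors in Rn):

```python
# Componentwise operations on vectors in R^n, represented as tuples.
def add(u, v):
    """Vector addition: (u1+v1, ..., un+vn)."""
    return tuple(ui + vi for ui, vi in zip(u, v))

def scale(c, u):
    """Scalar multiplication: (c*u1, ..., c*un)."""
    return tuple(c * ui for ui in u)

def negative(u):
    """Additive inverse: (-1)u."""
    return scale(-1, u)

u = (1, 2, 3)
v = (4, 5, 6)
print(add(u, v))            # (5, 7, 9)
print(scale(2, u))          # (2, 4, 6)
print(add(u, negative(u)))  # (0, 0, 0) -- the additive identity
```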
Thm 3.1: (the axioms for a vector space)
Let u, v, and w be vectors in Rn, and let c and d be scalars. Then:
(1) u + v is a vector in Rn (closure under addition)
(2) u + v = v + u
(3) (u + v) + w = u + (v + w)
(4) u + 0 = u
(5) u + (-u) = 0
(6) cu is a vector in Rn (closure under scalar multiplication)
(7) c(u + v) = cu + cv
(8) (c + d)u = cu + du
(9) c(du) = (cd)u
(10) 1(u) = u
Ex: (Vector operations in R4)
Let u = (2, -1, 5, 0), v = (4, 3, 1, -1), and w = (-6, 2, 0, 3) be vectors in R4.
Solve for x: 3(x + w) = 2u - v + x

Sol:
3(x + w) = 2u - v + x
3x + 3w = 2u - v + x
3x - x = 2u - v - 3w
2x = 2u - v - 3w
x = u - (1/2)v - (3/2)w
  = (2, -1, 5, 0) - (2, 3/2, 1/2, -1/2) - (-9, 3, 0, 9/2)
  = (9, -11/2, 9/2, -4)
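This computation can be double-checked with exact rational arithmetic (a sketch using the standard-library Fraction type):

```python
from fractions import Fraction as F

u = (2, -1, 5, 0)
v = (4, 3, 1, -1)
w = (-6, 2, 0, 3)

# x = u - (1/2)v - (3/2)w, computed componentwise.
x = tuple(F(ui) - F(1, 2) * vi - F(3, 2) * wi for ui, vi, wi in zip(u, v, w))
print(x)  # (9, -11/2, 9/2, -4) as Fractions

# Verify the original equation 3(x + w) = 2u - v + x componentwise.
lhs = tuple(3 * (xi + wi) for xi, wi in zip(x, w))
rhs = tuple(2 * ui - vi + xi for ui, vi, xi in zip(u, v, x))
assert lhs == rhs
```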
Thm 3.2: (Properties of additive identity and additive inverse)
Let v be a vector in Rn and c be a scalar. Then the following are true.
(1) The additive identity is unique. That is, if u + v = v, then u = 0.
(2) The additive inverse of v is unique. That is, if v + u = 0, then u = -v.
Thm 3.3: (Properties of scalar multiplication)
Let v be any element of a vector space V, and let c be any scalar. Then the following properties are true.
(1) 0v = 0
(2) c0 = 0
(3) If cv = 0, then c = 0 or v = 0
(4) (-1)v = -v and -(-v) = v
Notes:
A vector u = (u1, u2, ..., un) in Rn can also be viewed as
a 1×n row matrix (row vector): u = [u1 u2 ... un]
or
an n×1 column matrix (column vector):

    [ u1 ]
u = [ u2 ]
    [  ⋮ ]
    [ un ]

(The matrix operations of addition and scalar multiplication give the same results as the corresponding vector operations.)
Matrix Algebra

Vector addition:
u + v = (u1, u2, ..., un) + (v1, v2, ..., vn) = (u1+v1, u2+v2, ..., un+vn)
u + v = [u1 u2 ... un] + [v1 v2 ... vn] = [u1+v1 u2+v2 ... un+vn]

        [ u1 ]   [ v1 ]   [ u1+v1 ]
u + v = [ u2 ] + [ v2 ] = [ u2+v2 ]
        [  ⋮ ]   [  ⋮ ]   [    ⋮  ]
        [ un ]   [ vn ]   [ un+vn ]

Scalar multiplication:
cu = c(u1, u2, ..., un) = (cu1, cu2, ..., cun)
cu = c[u1 u2 ... un] = [cu1 cu2 ... cun]

       [ u1 ]   [ cu1 ]
cu = c [ u2 ] = [ cu2 ]
       [  ⋮ ]   [   ⋮ ]
       [ un ]   [ cun ]
Notes:
(1) A vector space consists of four entities: a set of vectors, a set of scalars, and two operations:
V: a nonempty set of vectors
c: a scalar
(u, v) → u + v: vector addition
(c, u) → cu: scalar multiplication
(V, +, ·) is then called a vector space.
(2) V = {0}: the zero vector space, containing only the additive identity.
Examples of vector spaces:
(1) n-tuple space: Rn
(u1, u2, ..., un) + (v1, v2, ..., vn) = (u1+v1, u2+v2, ..., un+vn)  (vector addition)
k(u1, u2, ..., un) = (ku1, ku2, ..., kun)  (scalar multiplication)

(2) Matrix space: V = Mm×n (the set of all m×n matrices with real entries)
Ex: (m = n = 2)

[ u11 u12 ]   [ v11 v12 ]   [ u11+v11 u12+v12 ]
[ u21 u22 ] + [ v21 v22 ] = [ u21+v21 u22+v22 ]   (vector addition)

  [ u11 u12 ]   [ cu11 cu12 ]
c [ u21 u22 ] = [ cu21 cu22 ]   (scalar multiplication)
(3) n-th degree polynomial space: V = Pn(x) (the set of all real polynomials of degree n or less)
p(x) + q(x) = (a0 + b0) + (a1 + b1)x + ... + (an + bn)x^n
cp(x) = ca0 + ca1x + ... + can x^n

(4) Function space: the set of square-integrable real-valued functions of a real variable on the domain [a, b]; that is, those functions f with ∫ab |f(x)|² dx finite. To check closure under addition, simply note the combination |f(x) + g(x)|² ≤ 2|f(x)|² + 2|g(x)|², so f + g is also square-integrable. So axiom 1 is satisfied; you can verify that the remaining 9 axioms are also satisfied.
Function Spaces:
Is this a vector space? How can a function be a vector? This comes down to your understanding of the word "function." Is f(x) a function or is f(x) a number?
Answer: It's a number. This is a confusion caused by the conventional notation for functions. We routinely call f(x) a function, but it is really the result of feeding the particular value x to the function f in order to get the number f(x).
Think of the function f as the whole graph relating input to output; the pair (x, f(x)) is then just one point on the graph. Adding two functions is adding their graphs.
Notes: To show that a set is not a vector space, you need only find one axiom that is not satisfied.

Ex 1: The set of all integers is not a vector space.
Pf: 1 ∈ V and 1/2 ∈ R, but (1/2)(1) = 1/2 ∉ V
(it is not closed under scalar multiplication)

Ex 2: The set of all second-degree polynomials is not a vector space.
Pf: Let p(x) = x² and q(x) = -x² + x + 1.
Then p(x) + q(x) = x + 1 ∉ V
(it is not closed under vector addition)
3.3 Subspaces of Vector Spaces

Subspace:
(V, +, ·): a vector space
W ⊆ V, W ≠ ∅: a nonempty subset of V
(W, +, ·): a vector space (under the operations of addition and scalar multiplication defined in V)
⇒ W is a subspace of V

Trivial subspaces:
Every vector space V has at least two subspaces.
(1) The zero vector space {0} is a subspace of V.
(2) V is a subspace of V.
Thm 3.4: (Test for a subspace)
If W is a nonempty subset of a vector space V, then W is a subspace of V if and only if the following conditions hold.
(1) If u and v are in W, then u + v is in W. (closure under addition, axiom 1)
(2) If u is in W and c is any scalar, then cu is in W. (closure under scalar multiplication, axiom 2)
Ex: (A subspace of M2×2)
Let W be the set of all 2×2 symmetric matrices. Show that W is a subspace of the vector space M2×2, with the standard operations of matrix addition and scalar multiplication.

Sol: W ⊆ M2×2, and M2×2 is a vector space.
Let A1, A2 ∈ W (so A1ᵀ = A1 and A2ᵀ = A2).
(A1 + A2)ᵀ = A1ᵀ + A2ᵀ = A1 + A2  ⇒  A1 + A2 ∈ W (closed under addition)
For k ∈ R and A ∈ W: (kA)ᵀ = kAᵀ = kA  ⇒  kA ∈ W (closed under scalar multiplication)
∴ W is a subspace of M2×2.
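The two closure conditions of Thm 3.4 can be spot-checked numerically (a sketch; 2×2 matrices represented as nested lists, with illustrative test matrices):

```python
def transpose(A):
    return [[A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(k, A):
    return [[k * a for a in row] for row in A]

def is_symmetric(A):
    return A == transpose(A)

A1 = [[1, 2], [2, 3]]
A2 = [[0, 5], [5, -4]]
assert is_symmetric(A1) and is_symmetric(A2)
assert is_symmetric(mat_add(A1, A2))   # closed under addition
assert is_symmetric(mat_scale(7, A1))  # closed under scalar multiplication
```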
Ex: (Determining subspaces of R3)
Which of the following subsets is a subspace of R3?
(a) W = {(x1, x2, 1) : x1, x2 ∈ R}
(b) W = {(x1, x1 + x3, x3) : x1, x3 ∈ R}

Sol:
(a) Let v = (0, 0, 1) ∈ W.
Then (-1)v = (0, 0, -1) ∉ W.
⇒ W is not a subspace of R3.

(b) Let v = (v1, v1 + v3, v3) ∈ W and u = (u1, u1 + u3, u3) ∈ W.
v + u = (v1 + u1, (v1 + u1) + (v3 + u3), v3 + u3) ∈ W (closed under addition)
kv = (kv1, kv1 + kv3, kv3) ∈ W (closed under scalar multiplication)
⇒ W is a subspace of R3.
Thm 3.5: (The intersection of two subspaces is a subspace)
If V and W are both subspaces of a vector space U, then the intersection of V and W (denoted by V ∩ W) is also a subspace of U.

Proof: Follows directly from Thm 3.4.
3.4 Spanning Sets and Linear Independence

Linear combination:
A vector v in a vector space V is called a linear combination of the vectors u1, u2, ..., uk in V if v can be written in the form
v = c1u1 + c2u2 + ... + ckuk,  where c1, c2, ..., ck are scalars.

Ex: Given v = (-1, -2, -2), u1 = (0, 1, 4), u2 = (-1, 1, 2), and u3 = (3, 1, 2) in R3, find a, b, and c such that v = au1 + bu2 + cu3.

Sol: Comparing components gives the system
     -b + 3c = -1
 a +  b +  c = -2
4a + 2b + 2c = -2
⇒ a = 1, b = -2, c = -1
Thus v = u1 - 2u2 - u3.
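The resulting coefficients can be verified directly (a sketch with tuple vectors and a small illustrative helper):

```python
def lin_comb(coeffs, vectors):
    """Return c1*v1 + ... + ck*vk for tuple vectors."""
    n = len(vectors[0])
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(n))

u1, u2, u3 = (0, 1, 4), (-1, 1, 2), (3, 1, 2)
v = lin_comb((1, -2, -1), (u1, u2, u3))  # a = 1, b = -2, c = -1
print(v)  # (-1, -2, -2)
```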
Ex: (Finding a linear combination)
v1 = (1, 2, 3), v2 = (0, 1, 2), v3 = (-1, 0, 1)
Prove that w = (1, 1, 1) is a linear combination of v1, v2, and v3.

Sol: w = c1v1 + c2v2 + c3v3
(1, 1, 1) = c1(1, 2, 3) + c2(0, 1, 2) + c3(-1, 0, 1)
          = (c1 - c3, 2c1 + c2, 3c1 + 2c2 + c3)

 c1       -  c3 = 1
2c1 +  c2       = 1
3c1 + 2c2 +  c3 = 1

[ 1 0 -1 | 1 ]                 [ 1 0 -1 |  1 ]
[ 2 1  0 | 1 ]  →Gauss–Jordan→ [ 0 1  2 | -1 ]
[ 3 2  1 | 1 ]                 [ 0 0  0 |  0 ]

⇒ c1 = 1 + t, c2 = -1 - 2t, c3 = t  (this system has infinitely many solutions)
For example, t = 1 gives w = 2v1 - 3v2 + v3.
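Any member of this one-parameter family of coefficients produces the same vector w, which can be checked for a few values of t (a sketch):

```python
from fractions import Fraction as F

v1, v2, v3 = (1, 2, 3), (0, 1, 2), (-1, 0, 1)

def combo(t):
    """Coefficients c1 = 1 + t, c2 = -1 - 2t, c3 = t from the reduced system."""
    c1, c2, c3 = 1 + t, -1 - 2 * t, t
    return tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3))

# Every t gives the same vector w = (1, 1, 1).
for t in (0, 1, F(1, 2), -3):
    assert combo(t) == (1, 1, 1)
print(combo(1))  # (1, 1, 1), i.e. w = 2*v1 - 3*v2 + v3
```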
the span of a set: span(S)
If S = {v1, v2, ..., vk} is a set of vectors in a vector space V, then the span of S is the set of all linear combinations of the vectors in S:
span(S) = { c1v1 + c2v2 + ... + ckvk : ci ∈ R }

a spanning set of a vector space:
If every vector in a given vector space U can be written as a linear combination of vectors in a given set S, then S is called a spanning set of the vector space U.
Notes:
(1) span(∅) = {0}
(2) S ⊆ span(S)
(3) If S1 ⊆ S2 ⊆ V, then span(S1) ⊆ span(S2)

Notes: span(S) = V
⇔ S spans (generates) V
⇔ V is spanned (generated) by S
⇔ S is a spanning set of V
Ex: (A spanning set for R3)
Show that the set S = {v1, v2, v3} = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)} spans R3.

Sol: We must determine whether an arbitrary vector u = (u1, u2, u3) in R3 can be written as a linear combination of v1, v2, and v3.
u ∈ R3  ⇒  u = c1v1 + c2v2 + c3v3, i.e.,
 c1       - 2c3 = u1
2c1 +  c2       = u2
3c1 + 2c2 +  c3 = u3
The problem thus reduces to determining whether this system is consistent for all values of u1, u2, and u3.
    [ 1 0 -2 ]
A = [ 2 1  0 ]
    [ 3 2  1 ]

∵ det(A) = -1 ≠ 0
⇒ Ac = u has exactly one solution c for every u.
⇒ span(S) = R3
Thm 3.6: (span(S) is a subspace of V)
If S = {v1, v2, ..., vk} is a set of vectors in a vector space V, then
(a) span(S) is a subspace of V.
(b) span(S) is the smallest subspace of V that contains the spanning set S, i.e.,
every other subspace of V that contains S must contain span(S).
Linear Independence (L.I.) and Linear Dependence (L.D.):
S = {v1, v2, ..., vk}: a set of vectors in a vector space V
For the equation c1v1 + c2v2 + ... + ckvk = 0:
(1) If the equation has only the trivial solution (c1 = c2 = ... = ck = 0), then S is called linearly independent.
(2) If the equation has a nontrivial solution (i.e., not all the ci are zero), then S is called linearly dependent.
Notes:
(1) ∅ is linearly independent.
(2) 0 ∈ S  ⇒  S is linearly dependent.
(3) v ≠ 0  ⇒  {v} is linearly independent (a single nonzero vector set).
(4) If S1 ⊆ S2, then:
S1 is linearly dependent  ⇒  S2 is linearly dependent
S2 is linearly independent  ⇒  S1 is linearly independent
Ex: (Testing for linear independence)
Determine whether the following set of vectors in R3 is L.I. or L.D.:
S = {v1, v2, v3} = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)}

Sol: c1v1 + c2v2 + c3v3 = 0
 c1       - 2c3 = 0
2c1 +  c2       = 0
3c1 + 2c2 +  c3 = 0

[ 1 0 -2 | 0 ]                              [ 1 0 0 | 0 ]
[ 2 1  0 | 0 ]  →Gauss–Jordan elimination→  [ 0 1 0 | 0 ]
[ 3 2  1 | 0 ]                              [ 0 0 1 | 0 ]

⇒ c1 = c2 = c3 = 0 (only the trivial solution)
⇒ S is linearly independent.
Ex: (Testing for linear independence)
Determine whether the following set of vectors in P2 is L.I. or L.D.:
S = {v1, v2, v3} = {1 + x - 2x², 2 + 5x - x², x + x²}

Sol: c1v1 + c2v2 + c3v3 = 0
i.e., c1(1 + x - 2x²) + c2(2 + 5x - x²) + c3(x + x²) = 0 + 0x + 0x²

  c1 + 2c2       = 0
  c1 + 5c2 +  c3 = 0
-2c1 -  c2 +  c3 = 0

[  1  2 0 | 0 ]         [ 1 2  0  | 0 ]
[  1  5 1 | 0 ]  →G.J.→ [ 0 1 1/3 | 0 ]
[ -2 -1 1 | 0 ]         [ 0 0  0  | 0 ]

This system has infinitely many solutions, i.e., it has nontrivial solutions.
⇒ S is linearly dependent. (Ex: c1 = 2, c2 = -1, c3 = 3)
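The dependence relation found above can be checked on the coefficient lists of the polynomials (a sketch; a polynomial a0 + a1x + a2x² is stored as the tuple (a0, a1, a2)):

```python
# Polynomials as coefficient tuples (a0, a1, a2).
v1 = (1, 1, -2)   # 1 + x - 2x^2
v2 = (2, 5, -1)   # 2 + 5x - x^2
v3 = (0, 1, 1)    # x + x^2

c1, c2, c3 = 2, -1, 3
combo = tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3))
print(combo)  # (0, 0, 0) -- a nontrivial dependence, so S is linearly dependent
```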
Ex: (Testing for linear independence)
Determine whether the following set of vectors in the 2×2 matrix space is L.I. or L.D.:

S = { [ 2 1 ]  [ 3 0 ]  [ 1 0 ] }
    { [ 0 1 ], [ 2 1 ], [ 2 0 ] }

Sol: c1v1 + c2v2 + c3v3 = 0

   [ 2 1 ]      [ 3 0 ]      [ 1 0 ]   [ 0 0 ]
c1 [ 0 1 ] + c2 [ 2 1 ] + c3 [ 2 0 ] = [ 0 0 ]

Comparing entries:
2c1 + 3c2 +  c3 = 0
 c1             = 0
      2c2 + 2c3 = 0
 c1 +  c2       = 0

[ 2 3 1 | 0 ]                              [ 1 0 0 | 0 ]
[ 1 0 0 | 0 ]  →Gauss–Jordan elimination→  [ 0 1 0 | 0 ]
[ 0 2 2 | 0 ]                              [ 0 0 1 | 0 ]
[ 1 1 0 | 0 ]                              [ 0 0 0 | 0 ]

⇒ c1 = c2 = c3 = 0 (this system has only the trivial solution)
⇒ S is linearly independent.
Thm 3.7: (A property of linearly dependent sets)
A set S = {v1, v2, ..., vk}, k ≥ 2, is linearly dependent if and only if at least one of the vectors vj in S can be written as a linear combination of the other vectors in S.

Pf: (⇒)
Since S is linearly dependent,
c1v1 + c2v2 + ... + ckvk = 0 with ci ≠ 0 for some i.
⇒ vi = (-c1/ci)v1 + ... + (-ci-1/ci)vi-1 + (-ci+1/ci)vi+1 + ... + (-ck/ci)vk
(⇐) Let vi = d1v1 + ... + di-1vi-1 + di+1vi+1 + ... + dkvk.
⇒ d1v1 + ... + di-1vi-1 - vi + di+1vi+1 + ... + dkvk = 0
⇒ c1 = d1, c2 = d2, ..., ci = -1, ..., ck = dk (a nontrivial solution)
⇒ S is linearly dependent.

Corollary to Theorem 3.7:
Two vectors u and v in a vector space V are linearly dependent if and only if one is a scalar multiple of the other.
3.5 Basis and Dimension

Basis:
V: a vector space, S = {v1, v2, ..., vn} ⊆ V
(a) S spans V (i.e., span(S) = V)
(b) S is linearly independent
⇒ S is called a basis for V

(Bases are exactly the sets that are both spanning sets and linearly independent sets.)

Bases and Dimension: A basis for a vector space V is a linearly independent spanning set of V, i.e., any vector in the space can be written as a linear combination of elements of this set. The dimension of the space is the number of elements in this basis.
Note: Beginning with the most elementary problems in physics and mathematics, it is clear that the choice of an appropriate coordinate system can provide great computational advantages. For example:
1. For the usual two- and three-dimensional vectors, it is useful to express an arbitrary vector as a sum of unit vectors.
2. Similarly, the use of Fourier series for the analysis of functions is a very powerful tool in analysis.
These two ideas are essentially the same thing when you look at them as aspects of vector spaces.

Notes:
(1) ∅ is a basis for {0}.
(2) The standard basis for R3:
{i, j, k}, where i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
(3) The standard basis for Rn:
{e1, e2, ..., en}, where e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), ..., en = (0, 0, ..., 1)
Ex: R4: {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)}

(4) The standard basis for the m×n matrix space:
{ Eij | 1 ≤ i ≤ m, 1 ≤ j ≤ n }
Ex: 2×2 matrix space:
{ [ 1 0 ]  [ 0 1 ]  [ 0 0 ]  [ 0 0 ] }
{ [ 0 0 ], [ 0 0 ], [ 1 0 ], [ 0 1 ] }

(5) The standard basis for Pn(x): {1, x, x², ..., x^n}
Ex: P3(x): {1, x, x², x³}
Thm 3.8: (Uniqueness of basis representation)
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every vector in V can be written as a linear combination of vectors in S in one and only one way.

Pf: Note that S is a basis, so
1. span(S) = V
2. S is linearly independent
∵ span(S) = V, let v = c1v1 + c2v2 + ... + cnvn,
and suppose also v = b1v1 + b2v2 + ... + bnvn.
⇒ 0 = (c1 - b1)v1 + (c2 - b2)v2 + ... + (cn - bn)vn
∵ S is linearly independent
⇒ c1 = b1, c2 = b2, ..., cn = bn (i.e., the representation is unique)
Thm 3.9: (Bases and linear dependence)
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every set containing more than n vectors in V is linearly dependent.

Pf: Let S1 = {u1, u2, ..., um}, m > n.
∵ span(S) = V and each ui ∈ V, we can write
u1 = c11v1 + c21v2 + ... + cn1vn
u2 = c12v1 + c22v2 + ... + cn2vn
⋮
um = c1mv1 + c2mv2 + ... + cnmvn
Let k1u1 + k2u2 + ... + kmum = 0.
⇒ d1v1 + d2v2 + ... + dnvn = 0, with di = ci1k1 + ci2k2 + ... + cimkm
∵ S is L.I.  ⇒  di = 0 for all i, i.e.,
c11k1 + c12k2 + ... + c1mkm = 0
c21k1 + c22k2 + ... + c2mkm = 0
⋮
cn1k1 + cn2k2 + ... + cnmkm = 0
∵ this homogeneous system has fewer equations than variables (n < m), it must have infinitely many solutions.
⇒ k1u1 + k2u2 + ... + kmum = 0 has a nontrivial solution
⇒ S1 is linearly dependent.
Notes: Suppose dim(V) = n and S ⊆ V.
(1) dim({0}) = 0 = #(∅)
(2) S: a spanning set  ⇒  #(S) ≥ n
    S: a L.I. set      ⇒  #(S) ≤ n
    S: a basis         ⇒  #(S) = n
(3) If W is a subspace of V, then dim(W) ≤ n.
(Spanning sets have #(S) ≥ n, linearly independent sets have #(S) ≤ n, and bases sit in between with #(S) = n.)
Thm 3.10: (Number of vectors in a basis)
If a vector space V has one basis with n vectors, then every basis for V has n vectors. (That is, all bases for a finite-dimensional vector space have the same number of vectors.)

Pf: Let S = {v1, v2, ..., vn} and S' = {u1, u2, ..., um} be two bases for a vector space V.
S is a basis and S' is L.I.  ⇒  m ≤ n
S' is a basis and S is L.I.  ⇒  n ≤ m
⇒ m = n
Finite dimensional:
A vector space V is called finite dimensional if it has a basis consisting of a finite number of elements.

Infinite dimensional:
If a vector space V is not finite dimensional, then it is called infinite dimensional.

Dimension:
The dimension of a finite dimensional vector space V is defined to be the number of vectors in a basis for V.
V: a vector space, S: a basis for V
⇒ dim(V) = #(S) (the number of vectors in S)
Ex: (Finding the dimension of a subspace)
(a) W1 = {(d, c - d, c) : c and d are real numbers}
(b) W2 = {(2b, b, 0) : b is a real number}

Sol: Find a set of L.I. vectors that spans the subspace.
(a) (d, c - d, c) = c(0, 1, 1) + d(1, -1, 0)
S = {(0, 1, 1), (1, -1, 0)} (S is L.I. and S spans W1)
⇒ S is a basis for W1
⇒ dim(W1) = #(S) = 2

(b) ∵ (2b, b, 0) = b(2, 1, 0)
S = {(2, 1, 0)} spans W2 and S is L.I.
⇒ S is a basis for W2
⇒ dim(W2) = #(S) = 1
Ex: (Finding the dimension of a subspace)
Let W be the subspace of all symmetric matrices in M2×2. What is the dimension of W?

Sol:
W = { [ a b ]               }
    { [ b c ] : a, b, c ∈ R }

[ a b ]     [ 1 0 ]     [ 0 1 ]     [ 0 0 ]
[ b c ] = a [ 0 0 ] + b [ 1 0 ] + c [ 0 1 ]

S = { [ 1 0 ]  [ 0 1 ]  [ 0 0 ] }
    { [ 0 0 ], [ 1 0 ], [ 0 1 ] }  spans W, and S is L.I.

⇒ S is a basis for W  ⇒  dim(W) = #(S) = 3
Thm 3.11: (Basis tests in an n-dimensional space)
Let V be a vector space of dimension n.
(1) If S = {v1, v2, ..., vn} is a linearly independent set of vectors in V, then S is a basis for V.
(2) If S = {v1, v2, ..., vn} spans V, then S is a basis for V.
(When #(S) = n, spanning sets, linearly independent sets, and bases coincide; spanning sets with #(S) > n and independent sets with #(S) < n are not bases.)
3.6 Rank of a Matrix and Systems of Linear Equations

Let A be an m×n matrix:

    [ a11 a12 ... a1n ]
A = [ a21 a22 ... a2n ]
    [  ⋮   ⋮        ⋮ ]
    [ am1 am2 ... amn ]

Row vectors of A:
A(1) = (a11, a12, ..., a1n)
A(2) = (a21, a22, ..., a2n)
⋮
A(m) = (am1, am2, ..., amn)

Column vectors of A:
A = [ A^(1) | A^(2) | ... | A^(n) ], where A^(j) = (a1j, a2j, ..., amj)ᵀ is the j-th column of A.

Row space:
The row space of A is the subspace of Rn spanned by the m row vectors of A:
RS(A) = { α1A(1) + α2A(2) + ... + αmA(m) : α1, α2, ..., αm ∈ R }

Column space:
The column space of A is the subspace of Rm spanned by the n column vectors of A:
CS(A) = { c1A^(1) + c2A^(2) + ... + cnA^(n) : c1, c2, ..., cn ∈ R }

Null space:
The null space of A is the set of all solutions of Ax = 0, and it is a subspace of Rn:
NS(A) = { x ∈ Rn : Ax = 0 }
Notes:
(1) The row space of a matrix is not changed by elementary row operations:
RS(r(A)) = RS(A), where r is any elementary row operation.
(2) However, elementary row operations do change the column space.

Thm 3.12: (Row-equivalent matrices have the same row space)
If an m×n matrix A is row equivalent to an m×n matrix B, then the row space of A is equal to the row space of B.
Thm 3.13: (Basis for the row space of a matrix)
If a matrix A is row equivalent to a matrix B in row-echelon form, then the nonzero row vectors of B form a basis for the row space of A.
Ex: (Finding a basis for a row space)
Find a basis for the row space of

    [  1 3  1  3 ]
    [  0 1  1  0 ]
A = [ -3 0  6 -1 ]
    [  3 4 -2  1 ]
    [  2 0 -4 -2 ]

Sol:

         [ 1 3 1 3 ]  w1
         [ 0 1 1 0 ]  w2
A →G.E.→ [ 0 0 0 1 ]  w3  = B
         [ 0 0 0 0 ]
         [ 0 0 0 0 ]

(Write the columns of A as a1, a2, a3, a4 and the columns of B as b1, b2, b3, b4.)
Notes:
(1) b3 = -2b1 + b2  ⇒  a3 = -2a1 + a2 (row operations preserve the dependency relations among columns)
(2) {b1, b2, b4} is L.I.  ⇒  {a1, a2, a4} is L.I.

A basis for RS(A) = {the nonzero row vectors of B} (Thm 3.13)
= {w1, w2, w3} = {(1, 3, 1, 3), (0, 1, 1, 0), (0, 0, 0, 1)}
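The elimination and basis extraction can be sketched with exact Fraction arithmetic (a hypothetical helper; this computes the fully reduced row-echelon form, so its nonzero rows differ from B above but span the same row space by Thm 3.13):

```python
from fractions import Fraction as F

def row_echelon(rows):
    """Gauss-Jordan elimination to reduced row-echelon form, exact arithmetic."""
    R = [[F(x) for x in row] for row in rows]
    m, n = len(R), len(R[0])
    r = 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if R[i][c] != 0), None)
        if pivot is None:
            continue
        R[r], R[pivot] = R[pivot], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(m):
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        r += 1
    return R

A = [[ 1, 3,  1,  3],
     [ 0, 1,  1,  0],
     [-3, 0,  6, -1],
     [ 3, 4, -2,  1],
     [ 2, 0, -4, -2]]
B = row_echelon(A)
basis = [tuple(row) for row in B if any(row)]  # nonzero rows span RS(A)
print(basis)
```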
Ex: (Finding a basis for the column space of a matrix)
Find a basis for the column space of the matrix A.

    [  1 3  1  3 ]
    [  0 1  1  0 ]
A = [ -3 0  6 -1 ]
    [  3 4 -2  1 ]
    [  2 0 -4 -2 ]

Sol. 1: Use CS(A) = RS(Aᵀ).

     [ 1 0 -3  3  2 ]          [ 1 0 -3  3  2 ]  w1
Aᵀ = [ 3 1  0  4  0 ]  →G.E.→  [ 0 1  9 -5 -6 ]  w2  = B
     [ 1 1  6 -2 -4 ]          [ 0 0  1 -1 -1 ]  w3
     [ 3 0 -1  1 -2 ]          [ 0 0  0  0  0 ]

a basis for CS(A)
= a basis for RS(Aᵀ)
= {the nonzero row vectors of B}
= {w1, w2, w3}
= { (1, 0, -3, 3, 2)ᵀ, (0, 1, 9, -5, -6)ᵀ, (0, 0, 1, -1, -1)ᵀ }  (a basis for the column space of A)

Note: This basis is not a subset of the set of columns {c1, c2, c3, c4} of A.
Sol. 2:

         [ 1 3 1 3 ]
         [ 0 1 1 0 ]
A →G.E.→ [ 0 0 0 1 ] = B
         [ 0 0 0 0 ]
         [ 0 0 0 0 ]

(columns of A: c1, c2, c3, c4; columns of B: v1, v2, v3, v4)
The leading 1's of B are located in columns 1, 2, and 4.
⇒ {v1, v2, v4} is a basis for CS(B)
⇒ {c1, c2, c4} is a basis for CS(A)

Notes: (1) This basis is a subset of {c1, c2, c3, c4}.
(2) v3 = -2v1 + v2, and thus c3 = -2c1 + c2.
Thm 3.14: (Solutions of a homogeneous system)
If A is an m×n matrix, then the set of all solutions of Ax = 0 is a subspace of Rn called the nullspace of A:
NS(A) = { x ∈ Rn : Ax = 0 }

Proof: NS(A) ⊆ Rn, and NS(A) ≠ ∅ (since A0 = 0).
Let x1, x2 ∈ NS(A) (i.e., Ax1 = 0 and Ax2 = 0). Then
(1) A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0 (closed under addition)
(2) A(cx1) = c(Ax1) = c(0) = 0 (closed under scalar multiplication)
Thus NS(A) is a subspace of Rn.

Notes: The nullspace of A is also called the solution space of the homogeneous system Ax = 0.
Ex: Find the solution space of the homogeneous system Ax = 0, where

    [ 1 2 -2 1 ]
A = [ 3 6 -5 4 ]
    [ 1 2  0 3 ]

Sol: The nullspace of A is the solution space of Ax = 0.

           [ 1 2 0 3 ]
A →G.J.E.→ [ 0 0 1 1 ]
           [ 0 0 0 0 ]

⇒ x1 = -2s - 3t, x2 = s, x3 = -t, x4 = t

    [ x1 ]   [ -2s - 3t ]     [ -2 ]     [ -3 ]
x = [ x2 ] = [     s    ] = s [  1 ] + t [  0 ] = s v1 + t v2
    [ x3 ]   [    -t    ]     [  0 ]     [ -1 ]
    [ x4 ]   [     t    ]     [  0 ]     [  1 ]

⇒ NS(A) = { s v1 + t v2 : s, t ∈ R }
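Both spanning vectors of the solution space can be checked against A (a sketch with a small matrix-vector helper):

```python
def matvec(A, x):
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

A = [[1, 2, -2, 1],
     [3, 6, -5, 4],
     [1, 2,  0, 3]]
v1 = (-2, 1, 0, 0)
v2 = (-3, 0, -1, 1)

assert matvec(A, v1) == (0, 0, 0)
assert matvec(A, v2) == (0, 0, 0)
# Any s*v1 + t*v2 is then also in NS(A), e.g. s = 2, t = -1:
x = tuple(2 * a - b for a, b in zip(v1, v2))
assert matvec(A, x) == (0, 0, 0)
```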
Thm 3.15: (Row and column spaces have equal dimensions)
If A is an m×n matrix, then the row space and the column space of A have the same dimension:
dim(RS(A)) = dim(CS(A))

Rank:
The dimension of the row (or column) space of a matrix A is called the rank of A:
rank(A) = dim(RS(A)) = dim(CS(A))
Notes: rank(Aᵀ) = dim(RS(Aᵀ)) = dim(CS(A)) = rank(A)
Therefore rank(Aᵀ) = rank(A).

Nullity:
The dimension of the nullspace of A is called the nullity of A:
nullity(A) = dim(NS(A))
Thm 3.16: (Dimension of the solution space)
If A is an m×n matrix of rank r, then the dimension of the solution space of Ax = 0 is n - r. That is,
nullity(A) = n - rank(A) = n - r, or n = rank(A) + nullity(A).

Notes: (n = #variables = #leading variables + #nonleading variables)
(1) rank(A): the number of leading variables in the solution of Ax = 0
(i.e., the number of nonzero rows in the row-echelon form of A)
(2) nullity(A): the number of free variables (nonleading variables) in the solution of Ax = 0.
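Counting leading and free variables in a row-echelon form illustrates the identity n = rank(A) + nullity(A) (a sketch; the echelon matrix is taken as given):

```python
def pivot_columns(R):
    """Column indices of the leading entries of a row-echelon matrix R."""
    pivots = []
    for row in R:
        for j, entry in enumerate(row):
            if entry != 0:
                pivots.append(j)
                break
    return pivots

# A row-echelon form of some matrix with n = 5 columns.
R = [[1, 0, -2, 0,  1],
     [0, 1,  3, 0, -4],
     [0, 0,  0, 1, -1],
     [0, 0,  0, 0,  0]]
n = 5
rank = len(pivot_columns(R))  # number of leading variables
nullity = n - rank            # number of free variables
print(rank, nullity)  # 3 2
```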
Notes:
If A is an m×n matrix and rank(A) = r, then:

Fundamental space    Dimension
RS(A) = CS(Aᵀ)       r
CS(A) = RS(Aᵀ)       r
NS(A)                n - r
NS(Aᵀ)               m - r
Ex: (Rank and nullity of a matrix)
Let the column vectors of the matrix A be denoted by a1, a2, a3, a4, and a5.

    [  1  0 -2  1   0 ]
A = [  0 -1 -3  1   3 ]
    [ -2 -1  1 -1   3 ]
    [  0  3  9  0 -12 ]
      a1  a2 a3 a4  a5

(a) Find the rank and nullity of A.
(b) Find a subset of the column vectors of A that forms a basis for the column space of A.
Sol: Let B be the reduced row-echelon form of A:

    [  1  0 -2  1   0 ]           [ 1 0 -2 0  1 ]
A = [  0 -1 -3  1   3 ]  →G.J.E.→ [ 0 1  3 0 -4 ] = B
    [ -2 -1  1 -1   3 ]           [ 0 0  0 1 -1 ]
    [  0  3  9  0 -12 ]           [ 0 0  0 0  0 ]
      a1  a2 a3 a4  a5              b1 b2 b3 b4 b5

(a) rank(A) = 3 (the number of nonzero rows in B)
nullity(A) = n - rank(A) = 5 - 3 = 2
(b) The leading 1's of B occur in columns 1, 2, and 4.
⇒ {b1, b2, b4} is a basis for CS(B)
⇒ {a1, a2, a4} is a basis for CS(A), where
a1 = (1, 0, -2, 0)ᵀ, a2 = (0, -1, -1, 3)ᵀ, and a4 = (1, 1, -1, 0)ᵀ.
(c) b3 = -2b1 + 3b2  ⇒  a3 = -2a1 + 3a2
Thm 3.17: (Solutions of an inhomogeneous linear system)
If xp is a particular solution of the inhomogeneous system Ax = b, then every solution of this system can be written in the form x = xp + xh, where xh is a solution of the corresponding homogeneous system Ax = 0.

Pf: Let x be any solution of Ax = b.
A(x - xp) = Ax - Axp = b - b = 0
⇒ (x - xp) is a solution of Ax = 0
Let xh = x - xp  ⇒  x = xp + xh
Ex: (Finding the solution set of an inhomogeneous system)
Find the set of all solution vectors of the system of linear equations
 x1       - 2x3 +  x4 =  5
3x1 +  x2 - 5x3       =  8
 x1 + 2x2       - 5x4 = -9

Sol:

[ 1 0 -2  1 |  5 ]         [ 1 0 -2  1 |  5 ]
[ 3 1 -5  0 |  8 ]  →G.E.→ [ 0 1  1 -3 | -7 ]
[ 1 2  0 -5 | -9 ]         [ 0 0  0  0 |  0 ]

Let x3 = s and x4 = t.
    [ x1 ]   [ 2s -  t + 5 ]     [  2 ]     [ -1 ]   [  5 ]
x = [ x2 ] = [ -s + 3t - 7 ] = s [ -1 ] + t [  3 ] + [ -7 ]
    [ x3 ]   [      s      ]     [  1 ]     [  0 ]   [  0 ]
    [ x4 ]   [      t      ]     [  0 ]     [  1 ]   [  0 ]

  = s u1 + t u2 + xp

i.e., xh = s u1 + t u2 is a solution of Ax = 0, and
xp = (5, -7, 0, 0)ᵀ is a particular solution vector of Ax = b.
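The decomposition x = xp + xh can be verified directly (a sketch with a small matrix-vector helper):

```python
def matvec(A, x):
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

A = [[1, 0, -2,  1],
     [3, 1, -5,  0],
     [1, 2,  0, -5]]
b = (5, 8, -9)
xp = (5, -7, 0, 0)
u1 = (2, -1, 1, 0)
u2 = (-1, 3, 0, 1)

assert matvec(A, xp) == b          # particular solution
assert matvec(A, u1) == (0, 0, 0)  # homogeneous solutions
assert matvec(A, u2) == (0, 0, 0)
# General solution: x = xp + s*u1 + t*u2, e.g. s = 1, t = 2:
x = tuple(p + 1 * a + 2 * c for p, a, c in zip(xp, u1, u2))
assert matvec(A, x) == b
```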
Thm 3.18: (Solutions of a system of linear equations)
The system of linear equations Ax = b is consistent if and only if b is in the column space of A (i.e., b ∈ CS(A)).

Pf: Let

    [ a11 a12 ... a1n ]       [ x1 ]       [ b1 ]
A = [ a21 a22 ... a2n ],  x = [ x2 ],  b = [ b2 ]
    [  ⋮   ⋮        ⋮ ]       [  ⋮ ]       [  ⋮ ]
    [ am1 am2 ... amn ]       [ xn ]       [ bm ]

be the coefficient matrix, the column matrix of unknowns, and the right-hand side, respectively, of the system Ax = b.
Then

     [ a11x1 + a12x2 + ... + a1nxn ]      [ a11 ]      [ a12 ]            [ a1n ]
Ax = [ a21x1 + a22x2 + ... + a2nxn ] = x1 [ a21 ] + x2 [ a22 ] + ... + xn [ a2n ]
     [               ⋮             ]      [  ⋮  ]      [  ⋮  ]            [  ⋮  ]
     [ am1x1 + am2x2 + ... + amnxn ]      [ am1 ]      [ am2 ]            [ amn ]

Hence, Ax = b is consistent if and only if b is a linear combination of the columns of A. That is, the system is consistent if and only if b is in the subspace of Rm spanned by the columns of A.
Ex: (Consistency of a system of linear equations)
 x1 +  x2 - x3 = -1
 x1       + x3 =  3
3x1 + 2x2 - x3 =  1

Sol:

    [ 1 1 -1 ]          [ 1 0  1 ]
A = [ 1 0  1 ]  →G.E.→  [ 0 1 -2 ]
    [ 3 2 -1 ]          [ 0 0  0 ]

Notes:
If rank([A|b]) = rank(A), then the system Ax = b is consistent (Thm 3.18).
          [ 1 1 -1 | -1 ]          [ 1 0  1 |  3 ]
[A | b] = [ 1 0  1 |  3 ]  →G.E.→  [ 0 1 -2 | -4 ]
          [ 3 2 -1 |  1 ]          [ 0 0  0 |  0 ]
            c1 c2 c3  b              w1 w2 w3  v

v = 3w1 - 4w2 (Note: w3 is not a leading-1 column vector)
⇒ b = 3c1 - 4c2 + 0c3 (b is in the column space of A)
⇒ The system of linear equations is consistent.
Check: rank(A) = rank([A | b]) = 2
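The column relation b = 3c1 - 4c2 can be checked directly (a sketch):

```python
A = [[1, 1, -1],
     [1, 0,  1],
     [3, 2, -1]]
b = (-1, 3, 1)

c1 = tuple(row[0] for row in A)  # (1, 1, 3)
c2 = tuple(row[1] for row in A)  # (1, 0, 2)

lin = tuple(3 * x - 4 * y for x, y in zip(c1, c2))
assert lin == b  # b is in CS(A), so Ax = b is consistent
```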
Summary of equivalent conditions for square matrices:
If A is an n×n matrix, then the following conditions are equivalent.
(1) A is invertible
(2) Ax = b has a unique solution for any n×1 matrix b
(3) Ax = 0 has only the trivial solution
(4) A is row-equivalent to In
(5) det(A) ≠ 0
(6) rank(A) = n
(7) The n row vectors of A are linearly independent
(8) The n column vectors of A are linearly independent
3.7 Coordinates and Change of Basis

Coordinate representation relative to a basis:
Let B = {v1, v2, ..., vn} be an ordered basis for a vector space V, and let x be a vector in V such that
x = c1v1 + c2v2 + ... + cnvn.
The scalars c1, c2, ..., cn are called the coordinates of x relative to the basis B. The coordinate matrix (or coordinate vector) of x relative to B is the column matrix in Rn whose components are the coordinates of x:

       [ c1 ]
[x]B = [ c2 ]
       [  ⋮ ]
       [ cn ]
Ex: (Coordinates and components in Rn)
Find the coordinate matrix of x = (-2, 1, 3) in R3 relative to the standard basis S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

Sol: ∵ x = (-2, 1, 3) = -2(1, 0, 0) + 1(0, 1, 0) + 3(0, 0, 1)

         [ -2 ]
⇒ [x]S = [  1 ]
         [  3 ]
Ex: (Finding a coordinate matrix relative to a nonstandard basis)
Find the coordinate matrix of x = (1, 2, -1) in R3 relative to the (nonstandard) basis
B' = {u1, u2, u3} = {(1, 0, 1), (0, -1, 2), (2, 3, -5)}.

Sol: x = c1u1 + c2u2 + c3u3
(1, 2, -1) = c1(1, 0, 1) + c2(0, -1, 2) + c3(2, 3, -5)

 c1       + 2c3 =  1
     -c2  + 3c3 =  2
 c1 + 2c2 - 5c3 = -1

[ 1  0  2 |  1 ]
[ 0 -1  3 |  2 ]  ⇒  c1 = 5, c2 = -8, c3 = -2
[ 1  2 -5 | -1 ]

          [  5 ]
⇒ [x]B' = [ -8 ]
          [ -2 ]
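The coordinates can be verified by recombining the basis vectors (a sketch):

```python
u1, u2, u3 = (1, 0, 1), (0, -1, 2), (2, 3, -5)
c = (5, -8, -2)  # coordinates of x relative to B'

x = tuple(c[0] * a + c[1] * b + c[2] * d for a, b, d in zip(u1, u2, u3))
print(x)  # (1, 2, -1)
```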
Change of basis:
You were given the coordinates of a vector relative to one basis B and were asked to find the coordinates relative to another basis B'.

Ex: (Change of basis)
Consider two bases for a vector space V:
B = {u1, u2}, B' = {u1', u2'}

If [u1']B = [a; b] and [u2']B = [c; d],
i.e., u1' = a u1 + b u2 and u2' = c u1 + d u2
Let v ∈ V with [v]B' = [k1; k2]. Then
v = k1u1' + k2u2'
  = k1(a u1 + b u2) + k2(c u1 + d u2)
  = (k1a + k2c)u1 + (k1b + k2d)u2

         [ k1a + k2c ]   [ a c ] [ k1 ]
⇒ [v]B = [ k1b + k2d ] = [ b d ] [ k2 ] = [ [u1']B  [u2']B ] [v]B'
Transition matrix from B' to B:
Let B = {u1, u2, ..., un} and B' = {u1', u2', ..., un'} be two bases for a vector space V.
If [v]B is the coordinate matrix of v relative to B and [v]B' is the coordinate matrix of v relative to B', then
[v]B = P [v]B' = [ [u1']B  [u2']B  ...  [un']B ] [v]B'
where
P = [ [u1']B  [u2']B  ...  [un']B ]
is called the transition matrix from B' to B.
Thm 3.19: (The inverse of a transition matrix)
If P is the transition matrix from a basis B' to a basis B in Rn, then
(1) P is invertible
(2) the transition matrix from B to B' is P⁻¹

Notes:
B = {u1, u2, ..., un}, B' = {u1', u2', ..., un'}
[v]B = [ [u1']B  [u2']B  ...  [un']B ] [v]B' = P [v]B'
[v]B' = [ [u1]B'  [u2]B'  ...  [un]B' ] [v]B = P⁻¹ [v]B
Thm 3.20: (Transition matrix from B to B')
Let B = {v1, v2, ..., vn} and B' = {u1, u2, ..., un} be two bases for Rn. Then the transition matrix P⁻¹ from B to B' can be found by using Gauss–Jordan elimination on the n×2n matrix [B' ⋮ B] as follows:
[B' ⋮ B]  →  [In ⋮ P⁻¹]
Ex: (Finding a transition matrix)
B = {(-3, 2), (4, -2)} and B' = {(-1, 2), (2, -2)} are two bases for R2.
(a) Find the transition matrix from B' to B.
(b) Let [v]B' = [1; 2]; find [v]B.
(c) Find the transition matrix from B to B'.
Sol:
(a)
           [ -3  4 | -1  2 ]            [ 1 0 | 3 -2 ]
[B ⋮ B'] = [  2 -2 |  2 -2 ]  →G.J.E.→  [ 0 1 | 2 -1 ] = [I2 ⋮ P]

      [ 3 -2 ]
⇒ P = [ 2 -1 ]  (the transition matrix from B' to B)

(b)
                 [ 3 -2 ] [ 1 ]   [ -1 ]
[v]B = P [v]B' = [ 2 -1 ] [ 2 ] = [  0 ]
(c)
           [ -1  2 | -3  4 ]            [ 1 0 | -1 2 ]
[B' ⋮ B] = [  2 -2 |  2 -2 ]  →G.J.E.→  [ 0 1 | -2 3 ] = [I2 ⋮ P⁻¹]

        [ -1 2 ]
⇒ P⁻¹ = [ -2 3 ]  (the transition matrix from B to B')

Check:
        [ 3 -2 ] [ -1 2 ]   [ 1 0 ]
P P⁻¹ = [ 2 -1 ] [ -2 3 ] = [ 0 1 ] = I2
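Both transition matrices can be verified with small-matrix arithmetic (a sketch; matmul is an illustrative helper):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P     = [[3, -2], [2, -1]]   # transition matrix from B' to B
P_inv = [[-1, 2], [-2, 3]]   # transition matrix from B to B'

assert matmul(P, P_inv) == [[1, 0], [0, 1]]  # P * P^-1 = I2

# [v]_B = P [v]_B' with [v]_B' = (1, 2):
v_Bp = [[1], [2]]
v_B = matmul(P, v_Bp)
assert v_B == [[-1], [0]]
```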
Ex: (Coordinate representation in P3(x))
Find the coordinate matrix of p = 3x³ - 2x² + 4 relative to the basis S = {1, 1 + x, 1 + x², 1 + x³} in P3(x).

Sol: p = 3(1) + 0(1 + x) + (-2)(1 + x²) + 3(1 + x³)

         [  3 ]
⇒ [p]S = [  0 ]
         [ -2 ]
         [  3 ]
Ex: (Coordinate representation in M2×2)

Find the coordinate matrix of x = [ 5 6 ]
                                  [ 7 8 ]
relative to the standard basis of M2×2:

B = { [ 1 0 ]  [ 0 1 ]  [ 0 0 ]  [ 0 0 ] }
    { [ 0 0 ], [ 0 0 ], [ 1 0 ], [ 0 1 ] }

Sol:
    [ 5 6 ]     [ 1 0 ]     [ 0 1 ]     [ 0 0 ]     [ 0 0 ]
x = [ 7 8 ] = 5 [ 0 0 ] + 6 [ 0 0 ] + 7 [ 1 0 ] + 8 [ 0 1 ]

         [ 5 ]
⇒ [x]B = [ 6 ]
         [ 7 ]
         [ 8 ]