Chapter 1
Vector Spaces
1.1 What is a vector space?
The notion of a vector space is an abstraction of the familiar set of vectors in two or three dimen-
sional Euclidean space.
Let R denote the set of all real numbers and C the set of all complex numbers. F denotes either R or C. Elements of F are called scalars.
We will use the Greek letters α, β, γ, . . . and a, b, c for scalars.
A nonempty set V with two operations + (addition) and · (scalar multiplication) is said to be a
vector space over F iff it satisfies the following axioms:
1. x + y = y + x, for all x, y ∈ V.
2. (x + y) + z = x + (y + z), for all x, y, z ∈ V.
3. There exists 0 ∈ V such that x + 0 = x for all x ∈ V.
4. For each x ∈ V, there exists (-x) ∈ V such that x + (-x) = 0.
5. α · (x + y) = α · x + α · y, for all α ∈ F and for all x, y ∈ V.
6. (α + β) · x = α · x + β · x, for all α, β ∈ F and for all x ∈ V.
7. (αβ) · x = α · (β · x), for all α, β ∈ F and for all x ∈ V.
8. 1 · x = x, for all x ∈ V.
If the underlying field F is R, we say that V is a real vector space; if F = C, we say that V is a complex vector space.
Elements of a vector space are called vectors. We will use x, y, z, u, v, w for vectors.
The symbol 0 will stand for the zero vector as well as for the zero scalar; from the context, you should know which one it represents. We will often write (-x) as -x.
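As an illustration (not part of the standard development), the eight axioms can be checked numerically for V = R^2 with the usual operations; the following Python sketch verifies them on a few sample vectors and scalars.

```python
# Verify the vector space axioms for R^2 with the usual operations
# on sample vectors; illustrative sketch only.

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def smul(a, x):
    return (a * x[0], a * x[1])

x, y, z = (1.0, 2.0), (-3.0, 0.5), (4.0, -1.0)
a, b = 2.0, -0.5

assert add(x, y) == add(y, x)                              # axiom 1: commutativity
assert add(add(x, y), z) == add(x, add(y, z))              # axiom 2: associativity
assert add(x, (0.0, 0.0)) == x                             # axiom 3: zero vector
assert add(x, smul(-1.0, x)) == (0.0, 0.0)                 # axiom 4: additive inverse
assert smul(a, add(x, y)) == add(smul(a, x), smul(a, y))   # axiom 5
assert smul(a + b, x) == add(smul(a, x), smul(b, x))       # axiom 6
assert smul(a * b, x) == smul(a, smul(b, x))               # axiom 7
assert smul(1.0, x) == x                                   # axiom 8
```

The chosen values have exact binary representations, so the equalities hold exactly here; in general, floating-point checks would need a tolerance.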
Example 1.1.
1. V = {0} is a vector space over F.
2. R^2 = {(a, b) : a, b ∈ R} is a real vector space with the usual addition and scalar multiplication.
3. R^{2×1} := {(a, b)^t : a, b ∈ R}, the set of all 2×1 columns with real entries, with the usual addition and scalar multiplication is a real vector space.
4. R^n := {(a1, . . . , an) : ai ∈ R} is a vector space with component-wise addition and scalar multiplication.
5. R^{n×1} := {(a1, . . . , an)^t : ai ∈ R}, the set of all n×1 columns with real entries, with the usual addition and scalar multiplication is a real vector space. To minimize printing space, we sometimes write a column in R^{n×1} as (a1, . . . , an)^t.
6. Define C^n and C^{n×1} as above. These are complex vector spaces.
7. V = {(x1, x2) ∈ R^2 : x2 = 0} is a real vector space under the usual addition and scalar multiplication.
8. V = {(x1, x2) ∈ R^2 : 2x1 - x2 = 0} is a real vector space under the usual addition and scalar multiplication.
9. Is V = {(x1, x2) ∈ R^2 : 3x1 + 5x2 = 1} a vector space over R?
10. P_n := {a0 + a1 t + · · · + an t^n : ai ∈ F} with the usual polynomial addition and scalar multiplication is a vector space over F.
11. The set F^{m×n} of all m×n matrices with entries from F, with the usual matrix addition and scalar multiplication, is a vector space over F.
12. Take V = R^2. For (a1, a2), (b1, b2) ∈ V and α ∈ R, define
(a1, a2) + (b1, b2) = (a1 + b1, a2 + b2),
α · (a1, a2) = (0, 0) if α = 0, and α · (a1, a2) = (α a1, a2/α) if α ≠ 0.
Is V a real vector space?
13. Let V = {f : f is a function from [a, b] to R}.
For f, g ∈ V, define f + g to be the map (f + g)(x) = f(x) + g(x) for all x ∈ [a, b].
For α ∈ R and f ∈ V, define α · f to be the map (α f)(x) = α f(x) for all x ∈ [a, b].
The zero vector in V is the map f such that f(x) = 0 for all x ∈ [a, b].
For f ∈ V, define -f by (-f)(x) = -f(x).
Then V is a real vector space.
14. V = {f : R → R : d^2f/dx^2 + f = 0}. Define addition and scalar multiplication as in the previous example.
For f, g ∈ V, d^2(f + g)/dx^2 + (f + g) = (d^2f/dx^2 + f) + (d^2g/dx^2 + g) = 0.
Similarly, d^2(αf)/dx^2 + (αf) = α(d^2f/dx^2 + f) = 0.
Therefore, V is closed under addition and scalar multiplication. The other properties can easily be verified. Hence V is a vector space over R.
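For the operations of Example 12, a quick numerical experiment (an illustration, not part of the notes) shows which axiom breaks: with the modified scalar multiplication, (α + β) · x need not equal α · x + β · x.

```python
# Test the modified scalar multiplication of Example 12: axiom 6 fails,
# so this V is not a vector space.  Illustrative sketch only.

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def smul(a, x):
    if a == 0:
        return (0.0, 0.0)
    return (a * x[0], x[1] / a)   # second coordinate is divided, not multiplied

x = (1.0, 1.0)
a, b = 1.0, 1.0
lhs = smul(a + b, x)                # (2.0, 0.5)
rhs = add(smul(a, x), smul(b, x))   # (2.0, 2.0)
assert lhs != rhs                   # (a + b)·x differs from a·x + b·x
```

One failing axiom is enough to conclude that V with these operations is not a real vector space.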
We will write F^{1×n} as F^n.
Theorem 1.1. Let V be a vector space over F. Then
1. The zero element is unique,
i.e., if there exist 0₁ and 0₂ such that x + 0₁ = x and x + 0₂ = x for all x ∈ V, then 0₁ = 0₂.
2. The additive inverse of each vector is unique,
i.e., for x ∈ V, if there exist x1 and x2 such that x + x1 = 0 = x + x2, then x1 = x2.
3. For any x, y, z ∈ V and any α ∈ F:
(a) x + y = x + z implies y = z; (b) 0 · x = 0; (c) (-1) · x = -x; (d) α · 0 = 0.
Proof.
1. 0₁ = 0₁ + 0₂ = 0₂.
2. x1 = x1 + 0 = x1 + (x + x2) = (x1 + x) + x2 = 0 + x2 = x2.
3(a). x + y = x + z ⇒ (-x) + x + y = (-x) + x + z ⇒ 0 + y = 0 + z ⇒ y = z.
3(b). 0 · x + 0 = 0 · x = (0 + 0) · x = 0 · x + 0 · x; cancelling 0 · x by 3(a) gives 0 · x = 0.
3(c). x + (-1) · x = 1 · x + (-1) · x = (1 + (-1)) · x = 0 · x = 0; hence (-1) · x = -x.
3(d). α · 0 + 0 = α · 0 = α · (0 + 0) = α · 0 + α · 0; cancelling α · 0 gives α · 0 = 0.
1.2 Subspaces
A subspace of a vector space is a subset which has the same structure.
Let U be a subset of a vector space V . Then U is called a subspace of V iff U is a vector space
with the operations of addition and scalar multiplication inherited from V .
The operations on U inherited from V mean the following:
Suppose x, y ∈ U and α ∈ F. Since U ⊆ V, consider x, y as elements of V. We have w = x + y ∈ V and z = α · x ∈ V. Then the inherited addition of x and y is this w, and the inherited scalar product of α and x is this z. We use the same symbols + and · in U also.
Example 1.2.
1. {0} ⊆ V is a subspace for any vector space V.
2. W = {(x1, x2, x3) ∈ R^3 : 2x1 - x2 + x3 = 0} is a subspace of R^3. More generally, if A is an m×n matrix and x = (x1, . . . , xn), then the set of all x such that Ax^t = 0 is a subspace of R^n. (Prove it!)
3. P_n is a subspace of P_m for n ≤ m.
4. C[a, b] := {f : [a, b] → R : f is a continuous function} is a subspace of F[a, b] := {f : [a, b] → R : f is a function}.
5. C[a, b] is a subspace of R[a, b] := {f : [a, b] → R : f is integrable}, since every continuous function on [a, b] is integrable.
6. C^k[a, b] := {f : [a, b] → R : d^k f/dx^k is continuous} is a subspace of C[a, b].
7. P_n can also be seen as a subspace of C[a, b].
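The solution set in Example 2 can be checked computationally; the following sketch (an illustration, not part of the notes) verifies that W = {x ∈ R^3 : 2x1 - x2 + x3 = 0} is closed under combinations of the form α · x + y.

```python
# Check closure of W = {x in R^3 : 2x1 - x2 + x3 = 0} under a*x + y,
# for sample vectors and several scalars.  Illustrative sketch only.

def in_W(x):
    return 2 * x[0] - x[1] + x[2] == 0

def comb(a, x, y):
    """Return a*x + y, computed component-wise."""
    return tuple(a * xi + yi for xi, yi in zip(x, y))

x, y = (1, 2, 0), (0, 1, 1)   # both satisfy 2x1 - x2 + x3 = 0
assert in_W(x) and in_W(y)
for a in (-3, 0, 1, 7):
    assert in_W(comb(a, x, y))
```

Closure holds for every scalar because the defining condition is linear in x; this is exactly the subspace criterion proved in Theorem 1.2 below.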
Theorem 1.2. Let W be a subset of a vector space V. Then W is a subspace of V iff W ≠ ∅ and for all x, y ∈ W and for each α ∈ F, both x + y ∈ W and α · x ∈ W.
Proof. If W is a subspace, then obviously the given condition is satisfied.
Conversely, suppose W is a subset which satisfies the given condition. The commutativity and associativity of addition, the distributive properties, and scalar multiplication with 1 are satisfied in V and hence hold in W too. Therefore, we only need to verify the existence of the zero vector and of additive inverses.
Since W ≠ ∅, there exists an x ∈ W. In V, we know that -x = (-1) · x. Since (-1) · x ∈ W, we see that -x ∈ W. Now, since both x and -x are in W, we have x + (-x) = 0 ∈ W.
Therefore, W is a subspace of V.
Notice that the condition "for all x, y ∈ W and for each α ∈ F, both x + y ∈ W and α · x ∈ W" is equivalent to
for all x, y ∈ W and for each α ∈ F, α · x + y ∈ W.
Example 1.3. Let V = C[-1, 1] and V0 = {f ∈ V : f is an odd function}. Check whether V0 is a subspace of V.
Solution: The zero function is odd, so it belongs to V0; thus V0 ≠ ∅.
If f, g ∈ V0 and α ∈ R, then
(α f + g)(-x) = α f(-x) + g(-x) = α(-f(x)) + (-g(x)) = -(α f + g)(x).
That is, α f + g is an odd function; α f + g ∈ V0. Therefore, V0 is a subspace of V.
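The oddness computation above can be spot-checked numerically; this sketch (illustrative only, with the hypothetical choices f(x) = x^3 and g = sin) evaluates α f + g at a few points.

```python
# Spot-check that a*f + g is odd when f and g are odd,
# using f(x) = x^3 and g(x) = sin(x) as sample odd functions.
import math

f = lambda x: x ** 3
g = math.sin
a = 2.5
h = lambda x: a * f(x) + g(x)   # a*f + g

for x in (0.3, 1.0, 2.7):
    # oddness: h(-x) = -h(x), up to floating-point tolerance
    assert math.isclose(h(-x), -h(x))
```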
Theorem 1.3. Let V1 and V2 be subspaces of a vector space V. Then V1 ∩ V2 is a subspace of V.
Proof. V1 and V2 are subspaces, so 0 ∈ V1 ∩ V2; therefore V1 ∩ V2 ≠ ∅.
Suppose x, y ∈ V1 ∩ V2 and α ∈ F. Then α · x + y belongs to both V1 and V2 (since they are subspaces) and hence α · x + y ∈ V1 ∩ V2.
Therefore, V1 ∩ V2 is a subspace.
Theorem 1.4. Let V1 and V2 be subspaces of a vector space V. Then V1 ∪ V2 is a subspace of V iff
V1 ⊆ V2 or V2 ⊆ V1.
Proof. If V1 ⊆ V2, then V1 ∪ V2 = V2; so it is a subspace. Similarly, if V2 ⊆ V1, then V1 ∪ V2 = V1, a subspace of V.
Conversely, assume that V1 ∪ V2 is a subspace. Suppose, on the contrary, that V1 ⊄ V2 and V2 ⊄ V1. Then there exist vectors x ∈ V1, x ∉ V2 and y ∈ V2, y ∉ V1. Now, x ∈ V1 ∪ V2 and y ∈ V1 ∪ V2.
If x + y ∈ V1, then y = (x + y) - x ∈ V1, which is wrong. If x + y ∈ V2, then x = (x + y) - y ∈ V2, which is also wrong. Thus x + y ∉ V1 ∪ V2. This contradicts the assumption that V1 ∪ V2 is a subspace of V.
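A concrete instance of this theorem (an illustration, not part of the notes): in R^2, take V1 to be the x-axis and V2 the y-axis. Neither contains the other, and the sum of a vector from each escapes the union.

```python
# In R^2, the union of the x-axis and the y-axis is not closed under
# addition, illustrating Theorem 1.4.  Sketch only.

in_V1 = lambda v: v[1] == 0   # x-axis: second coordinate zero
in_V2 = lambda v: v[0] == 0   # y-axis: first coordinate zero

x, y = (1, 0), (0, 1)
s = (x[0] + y[0], x[1] + y[1])   # x + y = (1, 1)

assert in_V1(x) and in_V2(y)
assert not (in_V1(s) or in_V2(s))   # x + y lies in neither axis
```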
1.3 Span
Let V be a vector space and v1, . . . , vn ∈ V. A linear combination of v1, . . . , vn is a vector
α1 v1 + · · · + αn vn, where α1, α2, . . . , αn ∈ F.
Let S be a nonempty subset of V. Then the set of all linear combinations of elements of S is called the span of S, and is denoted by span S.
The span of the empty set is taken to be {0}.
Notice that a linear combination is always a finite sum. Further, span(∅) = {0}. Also, if S is a finite set, say S = {v1, . . . , vm}, then
span(S) = {α1 v1 + · · · + αm vm : α1, . . . , αm ∈ F}.
In general, we can write the span in set notation as follows:
span(S) = {α1 v1 + · · · + αn vn : αi ∈ F, vi ∈ S, i = 1, . . . , n, for some n ∈ N}.
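Deciding whether a given vector lies in the span of a set amounts to solving a linear system for the coefficients. A small illustrative sketch (not part of the notes): for S = {(1, 1), (1, 0)} in R^2, writing (b1, b2) = a·(1, 1) + c·(1, 0) forces a = b2 from the second coordinate and then c = b1 - b2, so every vector of R^2 is in span S.

```python
# Membership in span{(1, 1), (1, 0)}: solve (b1, b2) = a*(1,1) + c*(1,0).
# The second coordinate gives a = b2; the first then gives c = b1 - b2.
# Illustrative sketch only.

def in_span(b):
    a = b[1]        # coefficient of (1, 1)
    c = b[0] - a    # coefficient of (1, 0)
    # verify the reconstruction equals b
    return (a * 1 + c * 1, a * 1 + c * 0) == b

assert in_span((3, 5)) and in_span((-2, 0.5)) and in_span((0, 0))
```

Since the system is solvable for every right-hand side, span{(1, 1), (1, 0)} = R^2.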
Example 1.4.
1. span(∅) = span({0}) = {0}.
2. C = span{1, i} with scalars from R.
3. Let e1 = (1, 0), e2 = (0, 1). Then R^2 = span{e1, e2}.
4. If ei denotes the vector in R^n having 1 at the ith place and 0 elsewhere, then R^n = span{e1, . . . , en}.
5. P_3 = span{1, t, t^2, t^3}.
6. Let P denote the set of all polynomials of all degrees. Then P = span{1, t, t^2, . . .}.
Caution: The set S can have infinitely many elements, but a linear combination is a sum of finitely many elements, each multiplied by a scalar.
Theorem 1.5. Let S be a subset of a vector space V. Then span S is the smallest subspace of V
that contains S.
Proof. If S = ∅, then span S = {0}, which is clearly the smallest subspace containing S. Suppose that S ≠ ∅. Let x, y ∈ span S. Then x = a1 x1 + · · · + an xn and y = b1 y1 + · · · + bm ym for some ai, bj ∈ F and xi, yj ∈ S. Let α ∈ F. Then
α x + y = α a1 x1 + · · · + α an xn + b1 y1 + · · · + bm ym ∈ span S.
Therefore, span(S) is a subspace. If U is a subspace containing S, then U contains all linear combinations of elements of S. That is, U contains span(S). So, span(S) is the smallest subspace containing S.
Let V1 and V2 be subspaces of a vector space V. The sum of these subspaces is defined as
V1 + V2 = {x + y : x ∈ V1, y ∈ V2}.
Notice that V1 ∪ V2 ⊆ V1 + V2.
Theorem 1.6. Let V1, V2 be subspaces of a vector space V. Then V1 + V2 = span(V1 ∪ V2).
Proof. Let x + y ∈ V1 + V2, where x ∈ V1 and y ∈ V2. Then x + y ∈ span(V1 ∪ V2). So, V1 + V2 ⊆ span(V1 ∪ V2).
Conversely, let x ∈ span(V1 ∪ V2). There exist u1, . . . , un ∈ V1 and v1, . . . , vm ∈ V2 such that
x = α1 u1 + · · · + αn un + β1 v1 + · · · + βm vm,
where αi ∈ F and βj ∈ F. Since α1 u1 + · · · + αn un ∈ V1 and β1 v1 + · · · + βm vm ∈ V2, we get x ∈ V1 + V2. So, span(V1 ∪ V2) ⊆ V1 + V2.
We thus see that V1 + V2 is a subspace of V. Moreover, it is the smallest subspace of V containing V1 ∪ V2.
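A concrete illustration of the sum of subspaces (not part of the notes): with V1 the x-axis and V2 the y-axis in R^2, every vector splits as a sum of one element from each, so V1 + V2 = R^2 even though V1 ∪ V2 is only the two axes.

```python
# In R^2, with V1 = x-axis and V2 = y-axis, every vector v decomposes
# as v = x + y with x in V1 and y in V2.  Illustrative sketch only.

def split(v):
    """Decompose v into its x-axis and y-axis components."""
    return (v[0], 0), (0, v[1])

v = (3, -4)
x, y = split(v)
assert x[1] == 0 and y[0] == 0                 # x in V1, y in V2
assert (x[0] + y[0], x[1] + y[1]) == v         # their sum recovers v
```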
Let S be a subset and U a subspace of a vector space V. When U = span(S), we say that S spans U. In this case, we also say that S is a spanning set of U.
For example, {(1, 1), (1, 0), (2, 3)} spans R^2. Also, {(1, 1), (1, 0)} spans R^2. The set {e1, . . . , en} is a spanning set of R^n. And {1, t, . . . , t^n} is a spanning set of P_n.
If B is a spanning set, then B ∪ E is also a spanning set for every set of vectors E. However, a
subset of B may or may not be a spanning set.
1.4 Linear independence
A set of vectors {v1, . . . , vn} is said to be linearly dependent iff one of the vectors can be written as a linear combination of the others, i.e., there exist j ∈ {1, . . . , n} and αi ∈ F, i ≠ j, such that vj = Σ_{i≠j} αi vi.
Thus, {v1, . . . , vn} is linearly dependent iff there exists j such that vj ∈ span{vi : i ≠ j}.
A set of vectors {v1, . . . , vn} is said to be linearly independent iff the set is not linearly
dependent, i.e., none of the vectors in the set can be written as a linear combination of the others.
Also, ∅ is linearly independent. Notice that {0} is linearly dependent.
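For two vectors in R^2, linear dependence can be tested directly: the pair is dependent exactly when the 2×2 determinant formed from them vanishes, since that is when one vector is in the span of the other. A small sketch (illustrative only):

```python
# Two vectors in R^2 are linearly dependent iff the 2x2 determinant
# u1*v2 - u2*v1 is zero (one is then in the span of the other).

def dependent(u, v):
    return u[0] * v[1] - u[1] * v[0] == 0

assert dependent((1, 2), (2, 4))        # (2, 4) = 2 * (1, 2)
assert not dependent((1, 0), (0, 1))    # e1, e2 are independent
assert dependent((0, 0), (3, 7))        # any set containing 0 is dependent
```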