Professor William Haboush
305 Altgeld, 333-6498, haboush@math.uiuc.edu
1:30 to 2:30 Tuesday; 1 to 2 Wednesday
Often around for much of the afternoon; will find in office or lounge upstairs (likely)
Office Hours (may change)
Course is first six chapters of the book
2 midterms + 1 final
Midterm 1 100 points
Midterm 2 100 points
Final 200 points
HW 80 points
Total 480 points
Grading
Just puts them in order and decides A+, A…
Median B. Fair number of A's. Lots of homework.
First class
Sunday, August 22, 2010
20:14
Notes Page 1
The rigid geometry of n-space: leaves straight lines straight and the 0 point alone.
Google search uses eigenvalues; often related to computers.
Solves systems of linear equations
Gaussian elimination
2x1 - x2 + x3 - x4 - x5 = 1
x1 + x2 + x3 + x4 + x5 = -2
x1 - x3 - x4 + x5 = 0
Example
2 systems of equations are equivalent if all the solutions of one and all the solutions of the other are the same
[ 2 -1  1 -1 -1 |  1 ]
[ 1  1  1  1  1 | -2 ]
[ 1  0 -1 -1  1 |  0 ]
Switch rows one and 2:
[ 1  1  1  1  1 | -2 ]
[ 2 -1  1 -1 -1 |  1 ]
[ 1  0 -1 -1  1 |  0 ]
Subtract 2·(row one) from row 2; subtract row one from row 3 and multiply by -1:
[ 1  1  1  1  1 | -2 ]
[ 0 -3 -1 -3 -3 |  5 ]
[ 0  1  2  2  0 | -2 ]
Switch rows 2 and 3:
[ 1  1  1  1  1 | -2 ]
[ 0  1  2  2  0 | -2 ]
[ 0 -3 -1 -3 -3 |  5 ]
Row 3 plus 3·(row 2):
[ 1  1  1  1  1 | -2 ]
[ 0  1  2  2  0 | -2 ]
[ 0  0  5  3 -3 | -1 ]
Leave out the x's
What linear algebra is
Tuesday, August 24, 2010
09:41
Notes Page 2
R1 - R2:
[ 1  0 -1 -1  1 |  0 ]
[ 0  1  2  2  0 | -2 ]
[ 0  0  5  3 -3 | -1 ]
R3/5:
[ 1  0 -1 -1  1 |  0 ]
[ 0  1  2  2  0 | -2 ]
[ 0  0  1 3/5 -3/5 | -1/5 ]
R2 - 2R3:
[ 1  0 -1 -1  1 |  0 ]
[ 0  1  0 4/5 6/5 | -8/5 ]
[ 0  0  1 3/5 -3/5 | -1/5 ]
Just put x1, x2, x3 in terms of x4, x5.
Replace the system continuously by simplified systems of equations which are equivalent.
What can one do?
- Switch equations around
- Multiply an equation by a non-zero constant
- Add a multiple of one equation to another
Gaussian elimination always works with any system of equations.
Can choose the free variables, but the bound variables are then determined.
Rank: the number of bound variables.
Nullity: the number of free variables; this is also the dimension of the solution space.
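The elimination procedure described above can be sketched in code. This is a minimal illustration, not from the course: the function name `rref` and the returned pivot list are my own choices, and exact arithmetic is done with `Fraction` to avoid rounding.

```python
from fractions import Fraction

def rref(rows):
    """Reduce an augmented matrix (list of rows) to reduced
    Gauss-Jordan form, returning the result and the pivot columns."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots = []
    r = 0
    for c in range(len(m[0]) - 1):           # last column is the RHS
        # find a row at or below r with a nonzero entry in column c
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue                          # free variable: no pivot here
        m[r], m[piv] = m[piv], m[r]           # switch rows
        m[r] = [x / m[r][c] for x in m[r]]    # scale so the pivot is 1
        for i in range(len(m)):               # clear the rest of the column
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

# the system from the lecture: 3 equations, 5 unknowns
aug = [[2, -1, 1, -1, -1, 1],
       [1,  1, 1,  1,  1, -2],
       [1,  0, -1, -1, 1, 0]]
R, pivots = rref(aug)
rank = len(pivots)        # number of bound variables
nullity = 5 - rank        # number of free variables
```

Here the pivot count gives the rank (3) and the leftover columns the nullity (2), matching the bound/free variable count in the notes.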
Notes Page 3
Due next Thursday. Assignments are always due the following Thursday (7 or 9 days later).
Assignment: pg 10: 3cd, 4cd, 5bd, 6bdh; pg 23: 1d-h, 2abc, 5bcj
Terminology:
- If there is no solution, the system is inconsistent; otherwise it is consistent.
- n equations and n unknowns: will probably have one solution.
- If more equations than unknowns: an overdetermined system; will probably have no solution.
- If fewer equations than unknowns: an underdetermined system.
- Homogeneous system (all right-hand sides 0): always has at least one solution (all xi = 0); if more unknowns than equations, then at least one other solution.
Back substitution: stressed in the book. If the system is in Gauss-Jordan form (not reduced), put each value back into the previous equation to get the answers. If the reduced Gauss-Jordan form is used, this isn't really needed (it has essentially already been done).
Systems of Equations
Thursday, August 26, 2010
09:30
Notes Page 4
On pg 8
-x2 - x3 + x4 = -4
x1 + x2 + x3 + x4 = 6
2x1 + 4x2 + x3 - 2x4 = -1
3x1 + x2 - 2x3 + 2x4 = 3

[ 0 -1 -1  1 | -4 ]
[ 1  1  1  1 |  6 ]
[ 2  4  1 -2 | -1 ]
[ 3  1 -2  2 |  3 ]
Switch R1 and R2:
[ 1  1  1  1 |  6 ]
[ 0 -1 -1  1 | -4 ]
[ 2  4  1 -2 | -1 ]
[ 3  1 -2  2 |  3 ]
R3 - 2R1, R4 - 3R1, -1·R2:
[ 1  1  1  1 |  6 ]
[ 0  1  1 -1 |  4 ]
[ 0  2 -1 -4 | -13 ]
[ 0 -2 -5 -1 | -15 ]
R3 - 2R2, R4 + 2R2:
[ 1  1  1  1 |  6 ]
[ 0  1  1 -1 |  4 ]
[ 0  0 -3 -2 | -21 ]
[ 0  0 -3 -3 | -7 ]
R4 - R3:
[ 1  1  1  1 |  6 ]
[ 0  1  1 -1 |  4 ]
[ 0  0 -3 -2 | -21 ]
[ 0  0  0 -1 |  14 ]
Reduce it:
[ 1 0 0 0 |  30 ]
[ 0 1 0 0 | -79/3 ]
[ 0 0 1 0 |  49/3 ]
[ 0 0 0 1 | -14 ]
The whole thing is the augmented matrix; the first 4 columns are the matrix of coefficients.
In the book they forget to put the -4 in.
The entry used to remove everything underneath it is called the pivot.
Example
1 1 1 1 1 1
-1 -1 0 0 1 -1
-2 -2 0 0 3 1
0 0 1 1 3 3
1 1 2 2 4 4
1 1 1 1 1 1
0 0 1 1 2 0
0 0 2 2 5 3
0 0 1 1 3 3
0 0 1 1 3 3
1 1 1 1 1 1
0 0 1 1 2 0
0 0 0 0 1 3
0 0 0 0 1 3
0 0 0 0 1 3
1 1 1 1 1 1
0 0 1 1 2 0
0 0 0 0 1 3
0 0 0 0 0 0
0 0 0 0 0 0
1 1 1 1 0 -2
0 0 1 1 0 -6
0 0 0 0 1 3
0 0 0 0 0 0
System is very underdetermined
Examples
Thursday, August 26, 2010
09:47
Notes Page 5
[ 1 1 0 0 0 |  4 ]
[ 0 0 1 1 0 | -6 ]
[ 0 0 0 0 1 |  3 ]
[ 0 0 0 0 0 |  0 ]
[ 0 0 0 0 0 |  0 ]
x1 + x2 = 4
x3 + x4 = -6
x5 = 3
Solution:
x1 = 4 - x2
x3 = -6 - x4
x5 = 3
Reduced form: each row starts with a 1; everything below a 1 is a 0; above any INITIAL 1 there is nothing but zeros.
Bound variables: x1, x3, x5
Free variables: x2, x4
This is a singular system
Notes Page 6
'r' rows, 'n' columns
An r x n matrix
A rectangle of numbers with a bracket on each end
Use a capital letter to represent a matrix
Can add, subtract, multiply a constant
Matrices and Matrix arithmetic (algebra)
Thursday, August 26, 2010
10:20
Notes Page 7
Notes Page 8
Will have a web page eventually
AB != BA.
AB = 0 does not imply that either A or B is zero.
[ 1 -1  2 ] [ x1 ]   [ x1 - x2 + 2x3 ]
[ 1  1 -1 ] [ x2 ] = [ x1 + x2 - x3  ]
            [ x3 ]
AX = B
If AC = 0, then A(M+C) = AM + AC = AM.
(A+B)(A+B) = AA + AB + BA + BB: this cannot be simplified further, as multiplication is not commutative - the binomial theorem does not apply.
A(AA) = (AA)A, and A^n·A^m = A^m·A^n = A^(n+m).
Powers are only valid for square matrices.
However, A commutes with all powers of itself.
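Both facts are easy to see with small examples. The 2x2 matrices below are hypothetical ones of my own choosing, not from the lecture:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]
AB = matmul(A, B)   # [[2, 1], [1, 1]]
BA = matmul(B, A)   # [[1, 1], [1, 2]] -- not the same as AB

# nonzero C and D whose product is the zero matrix
C = [[1, 0],
     [0, 0]]
D = [[0, 0],
     [0, 1]]
CD = matmul(C, D)   # [[0, 0], [0, 0]]
```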
Special matrices: the identity matrix Im plays the role of 1.
Im(i,j) = 1 if i = j, 0 otherwise:
[ 1 0 … 0 ]
[ 0 1 … 0 ]
[ … … … … ]
[ 0 0 … 1 ]
The system
a11·x1 + … + a1m·xm = b1
…
an1·x1 + … + anm·xm = bn
is the same as AX = B. If A has an inverse:
(A^-1·A)X = A^-1·B
IX = A^-1·B
X = A^-1·B
Inverse of a matrix
Square matrices only
B is called a right inverse for A if AB=In
B is called a left inverse for A if BA =In
B is an inverse for A if AB=BA=In
Find the inverse:
[ -1  2 ]
[  3 -2 ]
The inverse is
[ -2/-4 -2/-4 ]   [ 1/2 1/2 ]
[ -3/-4 -1/-4 ] = [ 3/4 1/4 ]
Check:
[ -1  2 ] [ 1/2 1/2 ]   [ 1 0 ]
[  3 -2 ] [ 3/4 1/4 ] = [ 0 1 ]
and
[ 1/2 1/2 ] [ -1  2 ]   [ 1 0 ]
[ 3/4 1/4 ] [  3 -2 ] = [ 0 1 ]
Thus it is both a right and a left inverse.
For a 2 x 2 matrix
[ a b ]
[ c d ]
the inverse is
[  d/(ad-bc) -b/(ad-bc) ]
[ -c/(ad-bc)  a/(ad-bc) ]
If ad - bc != 0, then 1/(ad-bc) exists and this is the inverse.
Example:
[  1  4 3 ]
[ -1 -2 0 ]
[  2  2 3 ]
has inverse
[ -1/2 -1/2  1/2 ]
[  1/4 -1/4 -1/4 ]
[  1/6  1/2  1/6 ]
These 4 are equivalent; these conditions are the definition of when the square matrix A is non-singular:
1. A right inverse for A exists (AB = In)
2. Every system of equations AX = B has a unique solution
3. A left inverse for A exists (CA = In)
4. The reduced Gauss-Jordan form of A is the identity
(2 implies 1 and 1 implies 2, thus 1 iff 2; 3 implies 2; 4 implies 2.)
Writing A* for the inverse: A·A* = A*·A = I.
(AB)^-1 = B^-1·A^-1, since B^-1·A^-1·(AB) = B^-1·(A^-1·A)·B = B^-1·I·B = B^-1·B = I.
This extends to more than 2 factors: (A1·A2·…·Aq)^-1 = Aq^-1·…·A1^-1.
Matrix algebra
Tuesday, August 31, 2010
09:31
Notes Page 9
Elementary row operations:
1. Switch 2 rows
2. Add a multiple of row i to row j
3. Multiply row j by a non-zero constant
Theorem (main theorem of linear algebra): Let A be
[ a11 … a1n ]
[ …       … ]
[ ar1 … arn ]
2 things to do:
1. Do E to A, where E is an elementary row operation
2. Do E to Ir, getting a matrix (also called E), and consider EA
Then 1 and 2 produce the same result.
A =
[ 1 -1 2 ]
[ 2  1 1 ]
[ 2 -3 2 ]
Do E to A (E: R2 = R2 - 2R1):
[ 1 -1  2 ]
[ 0  3 -3 ]
[ 2 -3  2 ]
Do E to I3:
[ 1 0 0 ]      [  1 0 0 ]
[ 0 1 0 ]  →   [ -2 1 0 ]
[ 0 0 1 ]      [  0 0 1 ]
Then EA:
[  1 0 0 ] [ 1 -1 2 ]   [ 1 -1  2 ]
[ -2 1 0 ] [ 2  1 1 ] = [ 0  3 -3 ]
[  0 0 1 ] [ 2 -3 2 ]   [ 2 -3  2 ]
Proof of theorem is just done by hand sticking them together (like in further maths group theory)
Calculating the inverse
[  1  4 3 | 1 0 0 ]
[ -1 -2 0 | 0 1 0 ]
[  2  2 3 | 0 0 1 ]

[ 1  4  3 |  1 0 0 ]
[ 0  2  3 |  1 1 0 ]
[ 0 -6 -3 | -2 0 1 ]

[ 1 4 3 | 1 0 0 ]
[ 0 2 3 | 1 1 0 ]
[ 0 0 6 | 1 3 1 ]

[ 1 4 0 | 1/2 -3/2 -1/2 ]
[ 0 2 0 | 1/2 -1/2 -1/2 ]
[ 0 0 6 | 1    3    1   ]

[ 1 0 0 | -1/2 -1/2  1/2 ]
[ 0 1 0 |  1/4 -1/4 -1/4 ]
[ 0 0 1 |  1/6  1/2  1/6 ]
Put I in beside A; do row operations until the A side becomes the identity; the inverse then appears on the side which originally held the identity.
Works because of theorem from earlier
A | I
A1 = E1·A | E1
A2 = E2·E1·A | E2·E1
…
Aj = Ej·E(j-1)·…·E1·A | Ej·E(j-1)·…·E1
Eventually Aj = I, and then Ej·…·E1 is the inverse.
Thus if the reduced Gauss-Jordan form of A is the identity, then there is a left inverse.
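The [A | I] procedure can be sketched as code. A minimal version (my own helper, assuming A is non-singular so a pivot is always found), run on the 3x3 matrix from the lecture:

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by row-reducing [A | I], as in the notes."""
    n = len(A)
    # augment A with the identity
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)  # assumes non-singular
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]      # make the pivot 1
        for i in range(n):
            if i != c:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]               # the right-hand block is A^-1

A = [[1, 4, 3], [-1, -2, 0], [2, 2, 3]]
Ainv = inverse(A)   # [[-1/2, -1/2, 1/2], [1/4, -1/4, -1/4], [1/6, 1/2, 1/6]]
```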
Notes Page 10
Homework: Pg 42-43: 1abcde, 2abcd, 4ab, 13; Pg 56: 8, 24ab, 25
The result of doing one of the elementary row operations to the identity
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
Switch 1st and 3rd and get an elementary matrix
0 0 1 0
0 1 0 0
1 0 0 0
0 0 0 1
Or do any of the other row operations. If 2 operations are done, it is not elementary.
Adding a multiple:
1 0 0 0
0 1 0 3
0 0 1 0
0 0 0 1
Multiplying
1 0 0 0
0 4 0 0
0 0 1 0
0 0 0 1
Elementary matrix
Theorem: 2 procedures:
1. Take the r by m matrix A and do the elementary row operation E to it
2. Do E to Ir, get the matrix E, and then multiply EA
1 and 2 produce the same result.
Find the inverse by the writing-adjacent method - proved last week:
A | I
A1 | E1
… | …
A(q-1) | N
Aq | Eq·N
E1·A = A1, and Eq·N·A = Eq·A(q-1) = Aq by the theorem; show by induction.
Equivalent statements:
1. A right inverse for A exists
2. A left inverse for A exists
3. Every system of equations AX = B has a unique solution
4. The reduced GJ form of A is Ir
5. det(A) != 0
The LU factorization: A = LU where L is lower triangular and U is upper triangular. Not every matrix has one.
Upper triangular: everything below the diagonal is zero. Lower triangular: everything above the diagonal is zero.
The product of two upper triangular matrices is upper triangular; the product of two lower triangular matrices is lower triangular.
THM: If A can be reduced to an upper triangular matrix using no switches, then A = LU for some L and U such that L is lower triangular with 1's on the diagonal and U is upper triangular.
GLN is the set of non-singular n by n matrices
If Eq·…·E1·A = Aq (upper triangular), then A = (Eq·…·E1)^-1·Aq.
Any elementary matrix has an inverse:
- adding m times a row: put -m instead of m
- multiplying a row by c: put 1/c instead of c
- switching the same two rows is its own inverse
If M1 through Ml all have inverses, then (M1·…·Ml)^-1 = Ml^-1·…·M1^-1:
the inverse of a product is the product of the inverses in reverse order.
Every non-singular matrix can be written as a product of the form L·(…)·U, where what is in the brackets involves no switches.
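The no-switches factorization in the theorem can be sketched directly: record each elimination multiplier in L while reducing A to the upper triangular U. The example matrix is my own (chosen so no switches are needed); this is an illustration, not the course's notation.

```python
from fractions import Fraction

def lu(A):
    """LU factorization with no row switches: L is lower triangular with
    1's on the diagonal, U is upper triangular. Assumes no zero pivot."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for c in range(n):
        for i in range(c + 1, n):
            m = U[i][c] / U[c][c]     # multiplier used to clear U[i][c]
            L[i][c] = m               # record it in L
            U[i] = [a - m * b for a, b in zip(U[i], U[c])]
    return L, U

A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]
L, U = lu(A)
# verify A = L·U
prod = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
```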
More Matrices
Thursday, September 02, 2010
09:37
Notes Page 11
1 -1 3 2 1 0
4 -2 1 1 2 1
3 1 1 0 -1 2
4 1 2 1 3 1
Broken up into blocks:
[ A11 A12 A13 ]
[ A21 A22 A23 ]
[ A31 A32 A33 ]
where each Aij is the corresponding block of the matrix above, e.g. A11 = [[1, -1], [4, -2]].
If the sizes match up can multiply partition matrices just as normal numerical matrices
Not used too much, but is occasionally
[ a11 … a1m ]
[ …       … ]
[ an1 … anm ]
Ai is the ith column; Rj is the jth row.
The matrix can then also be written as
[ A1 … An ]   (columns)
or
[ B1 ]
[ …  ]
[ Bm ]   (rows)
If we have two matrices A, B: BA = [BA1, …, BAn].
A row matrix times a column matrix is a 1 by 1 matrix - a dot product.
b1·A1 + … + bn·An is a linear combination.
Consistency theorem: the system AX = B is consistent if and only if B can be written as a linear combination of the columns of A.
This is kind of silly, as it is obvious.
Partitioned Matrices
Thursday, September 02, 2010
10:27
Notes Page 12
Website: math.illinois.edu/~haboush/math415-FA10.html
Determinants based on geometry
Can do same thing in 3 dimensions - parallelepiped
Volume
Determinants
Tuesday, September 07, 2010
09:29
Notes Page 13
Notes Page 14
Can do the same thing for n-space.
Lemma: if two rows are the same then the determinant is zero (the figure is collapsed in on itself). OR: switch the two rows - the determinant must stay the same, but must also be negated, thus it is zero.
If reduced GJ form is not equal to I then the determinant is zero
Notes Page 15
The full expansion has n! monomials, so the total number of computations is n·n!. With Gauss-Jordan it can be done in about n^3.
So from 4 by 4 up, Gauss-Jordan is better; for 3 by 3 and smaller, use the formula.
n   n^3   n·n!
1   1     1
2   8     4
3   27    18
4   64    96
5   125   600
6   216   4320
7   343   35280
Can partition a matrix to find the determinant. Could do everything in terms of columns instead of rows.
Transpose of a matrix: turn rows into columns and columns into rows.
Find the determinant:
[  1 -2 1 4 ]
[  2  1 1 2 ]
[ -3  0 1 2 ]
[  1  1 2 1 ]
Notes Page 16
R2 - 2R1, R3 + 3R1, R4 - R1:
[ 1 -2  1  4 ]
[ 0  5 -1 -6 ]
[ 0 -6  4 14 ]
[ 0  3  1 -3 ]
Switch R2 and R4, so multiply by -1:
[ 1 -2  1  4 ]
[ 0  3  1 -3 ]
[ 0 -6  4 14 ]
[ 0  5 -1 -6 ]
R3 + 2R2, R4 - (5/3)R2:
[ 1 -2  1    4 ]
[ 0  3  1   -3 ]
[ 0  0  6    8 ]
[ 0  0 -8/3 -1 ]
R4 + (4/9)R3:
[ 1 -2 1  4    ]
[ 0  3 1 -3    ]
[ 0  0 6  8    ]
[ 0  0 0  23/9 ]
Thus the determinant is (-1)·1·3·6·(23/9) = -46.
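The hand computation can be checked by code that follows the same strategy: eliminate below each pivot, track row switches, then multiply the diagonal. This is a small sketch of my own, using exact rational arithmetic:

```python
from fractions import Fraction

def det(A):
    """Determinant by row reduction: eliminate below each pivot,
    flip the sign for each row switch, multiply the diagonal."""
    M = [[Fraction(x) for x in row] for row in A]
    n = len(M)
    sign = 1
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c] != 0), None)
        if piv is None:
            return Fraction(0)          # a zero column: determinant is 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign                # each switch flips the sign
        for i in range(c + 1, n):
            m = M[i][c] / M[c][c]
            M[i] = [a - m * b for a, b in zip(M[i], M[c])]
    result = Fraction(sign)
    for i in range(n):
        result *= M[i][i]
    return result

A = [[1, -2, 1, 4],
     [2, 1, 1, 2],
     [-3, 0, 1, 2],
     [1, 1, 2, 1]]
d = det(A)   # -46, matching the hand computation
```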
Notes Page 17
For an n x n matrix: does the determinant (defined this way) exist? Answer: yes.
Permutations: the integers 1 2 3 4 … n rearranged as r1 r2 … rn - the integers in the wrong order. Denoted by a Greek letter σ:
( 1  2  … n  )
( r1 r2 … rn )
Examples:
( 1 2 3 4 )
( 3 2 1 4 )
( 1 2 3 4 )
( 3 4 1 2 )
One can get from one arrangement to another switching only two entries at a time, in some number of switches. The number of switches is not unique, but whether it is odd or even is: (-1)^σ = 1 if even, -1 if odd.
A permutation matrix Sσ: each row has only one '1' and each column has only one '1'; the rest are zeros. For σ =
( 1 2 3 4 )
( 3 4 1 2 )
the '1' is in the 3rd place of the first row, and the '1' is in the 4th place of the second row:
[ 0 0 1 0 ]
[ 0 0 0 1 ]
[ 1 0 0 0 ]
[ 0 1 0 0 ]
Its determinant is (-1)^σ.
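The sign of a permutation and the n!-term determinant formula it feeds into can be sketched as follows (a brute-force illustration of my own, only practical for small n, exactly as the operation-count table above suggests):

```python
from itertools import permutations

def sign(perm):
    """Parity of a permutation: (-1) raised to the number of
    out-of-order pairs (inversions)."""
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_by_permutations(A):
    """Sum over all n! permutations of sign(sigma) times a product of
    entries: the n! monomials mentioned in the notes."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

# sigma = (3, 4, 1, 2) in the notes' 1-based notation: even, so +1
s = sign((3, 4, 1, 2))
d = det_by_permutations([[1, 2], [3, 4]])   # 1*4 - 2*3 = -2
```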
More determinants
Thursday, September 09, 2010
09:35
Notes Page 18
Iterative statement for the determinant of the whole thing: take an arbitrary row, say the first; then det(X) = (-1)^(1+1)·x11·M11 + … + (-1)^(1+m)·x1m·M1m, where the Mij are the minors.
Notes Page 19
Homework due NEXT Thursday. Question 3 acg (not 2) - do that.
Notes Page 20
HMWK 3 due 23rd September. The exam is not next Thursday.
( 1 2 3 4 5 )
( 4 5 2 1 3 )
( 1 2 3 4 5 )
( 4 3 5 1 2 )
Facts about determinants:
1. Switch two rows: the determinant changes sign
2. Multiply a row by a constant: the determinant is multiplied by that constant
3. If a row is the sum of two rows, the determinant is the corresponding sum of determinants
Can do column operations as well as row operations.
Iterative fact about determinants.
Even more determinants
Tuesday, September 14, 2010
09:32
Notes Page 21
To form the adjoint: replace the entries by the corresponding minors, apply the sign changes, and transpose. Then the ith row of X times the ith column of the adjoint gives the determinant; divide by the determinant for X^-1.
Closed formula for the determinant
Cramer's rule
Notes Page 22
Closed formula, but computationally intensive
Say you have 30 unknowns but only want to know x17: Cramer's rule works well if you only want one or two of the values in a large system of equations.
An 18th-century rule. Cramer was interested in algebraic curves and needed to solve large systems of linear equations.
Notes Page 23
Notes Page 24
det(AB)=det(A).det(B)
Determinant Calculations
Thursday, September 16, 2010
09:33
Notes Page 25
Notes Page 26
Notes Page 27
A vector space over F is a set V with two kinds of operations (F is a field):
1. A binary operation - a function from V x V to V, taking 2 elements of V and combining them to obtain a third
2. An operation of F on V
Field: a set F with 2 operations + and *.
For +: (a+b)+c = a+(b+c); a+b = b+a; a+0 = 0+a = a for all a; every a has an a' with a+a' = a'+a = 0.
For *: a*(b*c) = (a*b)*c; a*b = b*a; a*1 = 1*a = a; every nonzero a has an a^-1 with a·a^-1 = a^-1·a = 1.
Each of these is a (commutative) group. Examples: the rationals, the real numbers, the complex numbers.
Fields here: subsets of the complex numbers that are closed under addition and multiplication and satisfy the group axioms.
We do linear algebra over the reals and complexes.
The 8 vector space axioms:
1. u+(v+w) = (u+v)+w
2. u+v = v+u
3. There exists 0 such that u+0 = 0+u = u
4. For all u there exists u' such that u'+u = u+u' = 0
5. a(u+v) = au + av
6. (a+a')u = au + a'u
7. a(a'v) = (aa')v
8. 1v = v
F: the field of coefficients - a subset of C closed under + and *, such that if x != 0 is in F then 1/x is also in F. Usually we use the reals, but the rationals or other fields also work.
Vector space: a set V with two operations, one a binary operation on V, the other an operation of F on V.
Vector Spaces
Thursday, September 16, 2010
10:27
Notes Page 28
Finite dimensional vector space
What is an infinite dimensional vector space?
Notes Page 29
Notes Page 30
Definition: suppose V is a vector space over F. A subspace of V, U is a subset of V so that for each u,u' in U, u+u' is also in U and if u is in U and lambda is in F, then lambda.u is in U and if U is a vector space with these operations
A fortiori: if associativity… holds in V it also holds in U presuming the answer, u, is in U
Exercises - prove that something is or isn't a vector subspace
Notation: S(v1,…vq) is the span of v1 … vq
Lemma: If V is a vector space over F and v1…vq are vectors in V then S(v1…vq) is a vector subspace
Notes Page 31
Span is a plane
Notes Page 32
Exam on 7 October (two Thursdays from now)
HMWK:
Pg 116: 3 (show the complex numbers a+bi form a vector space over R), 6, 13, 15
Pg 125: 1abe, 2ac, 3adef, 5ac, 6ad, 8
Pg 137: 2ace
A vector space over F is a set V, with two operations:
V/F
Linear independence
Thursday, September 23, 2010
09:34
Notes Page 33
1 -1 2 0 1 1 2
2 1 1 2 0 3 0
1 -1 2 2 1 1 1
1 -1 2 0 1 1 2
0 3 -3 2 -2 1 -4
0 0 0 2 0 0 -1
Thus linearly independent
OR do column operations - but determinant is much easier
Basis: a bunch of linearly independent vectors that satisfy certain conditions.
Real numbers between zero and one are more numerous than integers
Def: V is a vector space over F. {v1, …, vq} is said to be a basis of V if:
i. v1, …, vq are linearly independent
ii. S(v1, …, vq) = V
The number of elements in a basis is the dimension of the vector space.
Proposition (TFAE - the following are equivalent):
1. v1, …, vq is a basis for V
2. v1, …, vq is a minimal spanning set for V
3. v1, …, vq is a maximal linearly independent set
Proof:
Dimension is the number in the basis
Notes Page 34
Suppose V is a finite dimensional vector space over F. Then any two bases have the same number of elements
Theorem:
Proof
A vector space is finite dimensional if it admits a finite spanning set.
Notes Page 35
Notes Page 36
A basis is:
1. a maximal linearly independent set
2. a minimal spanning set
THM: Any linearly independent set v1, …, vl can be extended to a basis (keep adding that which is not in the span).
THM: Every spanning set can be diminished to a basis.
THM: All bases for the same space have the same number of elements (proved on Thursday).
PROP: If v1, …, vr are linearly independent and u1, …, un span, then r <= n.
Example: L = an(x)·d^n/dx^n + a(n-1)(x)·d^(n-1)/dx^(n-1) + … + a1(x)·d/dx + a0(x), and VL = {f : L(f) = 0}. If f and g are members of VL then f+g is in VL and af is in VL - so VL is a vector subspace of the continuous functions.
Basis
Tuesday, September 28, 2010
09:33
Notes Page 37
Theorem:
Notes Page 38
Notes Page 39
Notes Page 40
Jordan canonical form: for T from Cn to Cn, find a basis in which the matrix of T takes a standard form.
Matrix Transformations
Thursday, September 30, 2010
09:36
Notes Page 41
Notes Page 42
Notes Page 43
EX: 1's below the diagonal, zeros everywhere else:
[ 0 0 0 0 ]
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]
Notes Page 44
EX: 1's down the diagonal, except the first which is a -1:
[ -1 0 0 0 ]
[  0 1 0 0 ]
[  0 0 1 0 ]
[  0 0 0 1 ]
Can find a basis so it can be written this way always for a reflection
Basis for a reflection
On exam will have a problem: here's a matrix, here's a basis write the matrix in that basis
Notes Page 45
A: a linear transformation written from the standard basis to the standard basis. To go to a new coordinate system, M = S^-1·A·S is the equivalent of A in the new coordinate system.
Notes Page 46
Notes Page 47
What is on the exam?
First question: a list of matrices of different sizes. Then it will say: find the following if they make sense - AB, CA, DAB, D^T·A, … Will be worth 6-8 points.
Next question: find the inverse of some matrix S using some method (the method will be given, e.g. elementary row operations). Or it might just say to find the inverse. Might ask to find the adjoint.
Typical exam questions
Tuesday, October 05, 2010
10:09
Notes Page 48
Will be a section saying: which of the following sets is linearly independent? A basis of a given space?
Can give a series of linear equations with, say, 5 unknowns but only 3 equations, and ask for the solution (reduce to Gaussian form, write the solution):
[ 1 1 1 -1  1 | 2 ]
[ 1 2 0  1 -2 | 3 ]
[ 1 2 1 -2  1 | 0 ]
R2 - R1, R3 - R1:
[ 1 1  1 -1  1 |  2 ]
[ 0 1 -1  2 -3 |  1 ]
[ 0 1  0 -1  0 | -2 ]
Continue on to the reduced form:
[ 1 0 0  3 -2 |  7 ]
[ 0 1 0 -1  0 | -2 ]
[ 0 0 1 -3  3 | -3 ]
which gives the solution vector.
We're covering chapter 2 (find determinants). Expect a big matrix with lots of zeros - find its determinant by elementary operations down to triangular form and multiply the diagonal entries. Cramer's rule can appear - it is just one of many methods.
Could ask for det(adj(X))
Could ask: det(A) = 3, det(B) = 2 with A, B 4x4 - find the determinant of 2B·A^-1·B·A. ANS: det(2B)·det(A^-1)·det(B)·det(A) = 2^4·(2)·(1/3)·(2)·(3) = 64.
Give a proof: prove that the solution set of a homogeneous system of equations is a vector subspace of the space.
Could have K = the set of v such that Av = 0 (with everything inside V).
Show it is a subspace: if v, v' are in K then A(v+v') = Av + Av' = 0, so v+v' is in K, and so is cv.
Know the 8 axioms: associative, commutative, 0, inverse, distributive, distributive, associative multiplication, 1·v = v.
Could ask to prove something is a vector space or a subspace of a vector space
Will go through section 3.5 on test
Can ask questions about dimensions - are 4 vectors in 3-space linearly independent? Of course not!
If there are 3, make a matrix and take the determinant - non-zero means good.
EX:
EX:
[ 1  0 1 0 1 ]
[ 1 -1 2 1 1 ]
[ 3 -1 4 1 3 ]
Notes Page 49
R2 - R1 and R3 - 3R1 both give (0, -1, 1, 1, 0), so subtracting once more produces a zero row. Thus not linearly independent.
Could ask for the definition of a basis, or whether something is a basis; ask to prove something is a basis - show spanning and linear independence.
Notes Page 50
Homework is online - due next Thursday
Linear transformations
Tuesday, October 12, 2010
09:38
Notes Page 51
Notes Page 52
Dimensions of row space are equal to dimensions of column space
What is im(A)? A times something is a linear combination of the columns of A.
So the image of A is S(A1, …, An), which is called the column space of A.
The dimension of the kernel plus the dimension of the image equals the full width of the matrix.
Can define the rank and nullity.
Example
Tuesday, October 12, 2010
09:49
Notes Page 53
Definition: row space - the span of the rows.
The span after elementary row operations is the same as the span before.
Notes Page 54
dim(rowspace(A))=dim(columnspace(A))
THEOREM: Thus dimension of column space is number of initial ones, is the dimensions of row space
Rank(A)+Nullity(A)=width of A (the number of columns)
Rank doesn't change when transposed; nullity may.
Rank: the size of the largest minor with a non-vanishing determinant.
Nullity is number of columns minus the rank
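The rank-nullity relation can be checked on the dependent 3x5 example from the exam review above. A small sketch of my own: the rank is the number of pivots found during forward elimination.

```python
from fractions import Fraction

def rank(A):
    """Rank = number of pivots after forward elimination
    (illustrating Rank(A) + Nullity(A) = number of columns)."""
    M = [[Fraction(x) for x in row] for row in A]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if piv is None:
            continue                       # no pivot: a free column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, nrows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 0, 1, 0, 1],
     [1, -1, 2, 1, 1],
     [3, -1, 4, 1, 3]]     # third row = 2*(row 1) + (row 2)
rk = rank(A)               # 2
nullity = len(A[0]) - rk   # 3
```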
[ a11 … a1n ]
[ …       … ]
[ am1 … amn ]
is the same as
[ R1 ]
[ …  ]
[ Rm ]
when L = A, a matrix.
Notes Page 55
Linear Transformations
Thursday, October 14, 2010
09:54
Notes Page 56
Notes Page 57
Notes Page 58
Notes Page 59
Notes Page 60
Inner products - chapter 5
Tuesday, October 19, 2010
09:34
Notes Page 61
Notes Page 62
Notes Page 63
Notes Page 64
Notes Page 65
Pg 212: 1&2ac, 3, 4, 5, 8, 9, 11; Pg 221: 1ac, 2, 4, 12, 13
Def: V is a real vector space. An inner product on V is a pairing ( , ): V x V → R such that for all u, v, w in V and c in R:
1. (u+v, w) = (u,w) + (v,w)
2. (u, v+w) = (u,v) + (u,w)
3. c(u,v) = (cu,v) = (u,cv)
4. (u,u) >= 0
5. If (u,u) = 0 then u = 0
Consider the set of f on R continuous on a set S, such that on any closed interval [a,b], (R\S) intersection [a,b] has a finite number of discontinuities
Definition: f is periodic if f(x+P)=f(x) for all x
Can do Fourier series
Suppose V is finite dimensional with basis v1, …, vn, and ( , ) is a positive definite inner product (redundant - all inner products are positive definite).
Definition: let aij = (vi, vj), and
A =
[ a11 … a1n ]
[ …       … ]
[ an1 … ann ]
Note aij = aji.
Inner Products
Thursday, October 21, 2010
09:34
Notes Page 66
Notes Page 67
Notes Page 68
Shortest distance between two points is a straight line
Note:
Notes Page 69
Least squares: resolve a vector into components parallel to and perpendicular to a given vector space
Orthogonality and least squares
Tuesday, October 26, 2010
09:37
Notes Page 70
Notes Page 71
Notes Page 72
Truth: If B is rank r and A is non-singular, then BA is rank r
Notes Page 73
Notes Page 74
Notes Page 75
New homework: Page 231: 1ac, 3&4ab, 5; Page 239: 1, 3; Page 257: 1
Least squares
Thursday, October 28, 2010
09:35
Notes Page 76
Table
y X1 X2 X3
1 0.5 2 1
2 .8 1 0
1.2 .6 1 1
1 .7 1 2
Notes Page 77
EX on pg 229: find the quadratic best least-squares fit.
x  0 1 2 3
y  3 2 4 4
y = Ax^2 + Bx + C

y | x^2 x 1
3 |  0  0 1
2 |  1  1 1
4 |  4  2 1
4 |  9  3 1

The equations: C = 3, A + B + C = 2, 4A + 2B + C = 4, 9A + 3B + C = 4.
Design matrix and its transpose:
[ 0 0 1 ]
[ 1 1 1 ]
[ 4 2 1 ]
[ 9 3 1 ]

[ 0 1 4 9 ]
[ 0 1 2 3 ]
[ 1 1 1 1 ]

Normal equations X^T·X·(A, B, C)^T = X^T·y with y = (3, 2, 4, 4)^T:
[ 98 36 14 ] [ A ]   [ 54 ]
[ 36 14  6 ] [ B ] = [ 22 ]
[ 14  6  4 ] [ C ]   [ 13 ]

Reduce:
R1 - 7R3: -6B - 14C = -37
7R2 - 18R3: -10B - 30C = -80, i.e. B + 3C = 8
Substituting B = 8 - 3C into -6B - 14C = -37: -48 + 18C - 14C = -37, so 4C = 11 and C = 11/4. Then B = 8 - 33/4 = -1/4, and 14A = 13 - 6B - 4C = 13 + 3/2 - 11 = 7/2, so A = 1/4.
Best fit: y = (1/4)x^2 - (1/4)x + 11/4.
Example from book
Thursday, October 28, 2010
10:09
Notes Page 78
Given 3 vectors (0,-1,2,2), (1,-2,0,2), (3,0,4,5), and a 4th vector: write the projection of the 4th onto the span of the first 3.
Subspace = S(v1…vr), have v not in spanFind proj(v) in that space
Suppose (vi, vj) = 1 if i = j and 0 otherwise: this is an orthonormal set. If one leaves out the length-one condition - (vi, vj) = ci if i = j, 0 otherwise - it is called an orthogonal set.
Orthogonal sets
Thursday, October 28, 2010
10:19
Notes Page 79
Notes Page 80
Notes Page 81
HMWK due 11 November:
Pg 268: 1, 5, 6, 7
Pg 280: 1, 4, 7, 10
Theorem: Let Q be an n x n matrix. TFAE (the following are equivalent):
i. The columns of Q form an orthonormal basis
ii. The rows of Q form an orthonormal basis
iii. For any vectors u, v: (Qu, Qv) = (u, v)
iv. For any vector v: ||Qv|| = ||v||
v. Q^T·Q = I
vi. Q^T = Q^-1
Proof: (u,v) = 1/2(||u+v||^2 - ||u||^2 - ||v||^2), since the RHS is 1/2((u+v, u+v) - (u,u) - (v,v)) = 1/2((u,u) + (v,v) + 2(u,v) - (u,u) - (v,v)) = (u,v).
iv implies iii: if ||Qw|| = ||w|| for all w, then (Qu, Qv) = 1/2(||Q(u+v)||^2 - ||Qu||^2 - ||Qv||^2) = 1/2(||u+v||^2 - ||u||^2 - ||v||^2) = (u,v).
iii implies iv: ||Qu|| = sqrt((Qu, Qu)) = sqrt((u,u)) = ||u||.
Suppose iii is true. Let ei = (0 … 1 … 0)^T; then Q·ei is the ith column of Q, and (Q·ei, Q·ej) = (ei, ej), so the columns Qe1, …, Qen are an orthonormal basis; thus iii implies i. The rows form an orthonormal basis by the same argument.
i implies v: let Ci = Q·ei be the ith column. Q^T·Q = [aij] with aij = Ci^T·Cj, and Ci^T·Cj = 1 if i = j and 0 if i != j, thus Q^T·Q = I.
v implies iii: (Qu, Qv) = (Qv)^T·(Qu) = v^T·Q^T·Q·u = v^T·I·u = v^T·u = (u,v).
v and vi imply one another by definition. And if Q^-1 = Q^T, then the columns of Q^T - which are the rows of Q - are orthonormal, giving ii.
Definition: Q is orthogonal if it satisfies any or all of the conditions of the theorem
Example: let S be a reflection. Definition: S^2 = I and S - I is of rank 1.
(S-I)(S+I) = S^2 + S - S - I = S^2 - I = 0.
Let u = (u+Su)/2 + (u-Su)/2. Then S(u+Su)/2 = (Su + S^2·u)/2 = (Su+u)/2, so every u = u1 + u2 where Su1 = u1 and Su2 = -u2.
V = V+ + V-, where V+ = {u : Su = u} and V- = {u : Su = -u}. If u is in both V+ and V- then u = 0, thus the sum is direct: V = V+ (+) V-.
V+ = ker(S-I), so dim(V+) = dim(V) - 1: V+ is a hyperplane, and V- is a line perpendicular to V+.
Show (Sv, Su) = 0 where u is in the hyperplane and v is in the line.
The matrix is
[ -1 0 0 0 ]
[  0 1 0 0 ]
[  0 0 1 0 ]
[  0 0 0 1 ]
Thus not a rotation, as det = -1.
Final Exam: 16 December from 19:00-21:00
0 0 1 0
1 0 0 0
0 0 0 1
0 1 0 0
It just switches order of normal bases
Example:
Definition: A permutation matrix has exactly one 1 in every row and column, and is orthogonal but not necessarily a rotation
We have u1, …, un and want to convert to an orthonormal basis.
Start: v1 = u1/||u1||. Then S(v1, …, vi) = S(u1, …, ui).
Now: w(i+1) = u(i+1) - sum[j=1 to i] (u(i+1), vj)·vj - this is just subtracting the projection - and v(i+1) = w(i+1)/||w(i+1)||.
This is the strict Gram-Schmidt process; modified Gram-Schmidt is not making them unit vectors.
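The strict process above can be sketched directly (a minimal illustration of my own, run on the three vectors from the projection example; it assumes the inputs are linearly independent so no norm is zero):

```python
from math import sqrt

def gram_schmidt(vectors):
    """Strict Gram-Schmidt: subtract the projection onto the
    previous v's, then normalize."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    vs = []
    for u in vectors:
        w = list(u)
        for v in vs:
            c = dot(u, v)                       # (u, v_j)
            w = [wi - c * vi for wi, vi in zip(w, v)]
        norm = sqrt(dot(w, w))                  # nonzero if inputs independent
        vs.append([wi / norm for wi in w])
    return vs

vs = gram_schmidt([(0, -1, 2, 2), (1, -2, 0, 2), (3, 0, 4, 5)])
```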
A = (u1, …, ur), the columns. Gram-Schmidt gives a new orthonormal basis v1, …, vr, and vi = a1i·u1 + a2i·u2 + … + aii·ui where aii != 0, since ui is not in the span of the first i-1 vectors, by the definition of basis.
Thus the coefficient matrix
U =
[ a11 a12 …   ]
[ 0   a22 a23 ]
[ 0   0   …   ]
is an upper triangular r x r matrix with non-zero diagonal. Let AU = Q where Q = (v1, …, vr). Then A = Q·U^-1 = Q·R: this is called the QR factorization of A.
R is simple to calculate: R = Q^T·A, since Q^T·Q is the r x r identity, so multiplying A = QR by Q^T gives R. R is an r x r upper triangular matrix with entries vi^T·uj:
[ v1^T·u1 v1^T·u2 … v1^T·ur ]
[ 0       v2^T·u2 …         ]
[ …                         ]
If you have a least squares problem AX = b: orthonormalize A with Gram-Schmidt to get Q; then Q^T·A·X = Q^T·b is an upper triangular system times X equals a vector - which allows back substitution.
( , ) is an abstract inner product; given v1, …, vr, start with u1 = v1/||v1||. Having u1, …, ul, the projection is P_S(u1,…,ul)(v(l+1)) = (v(l+1), u1)·u1 + (v(l+1), u2)·u2 + … + (v(l+1), ul)·ul.
Gram-Schmidt is OK on abstract inner product spaces (but won't ask about it).
Orthonormal Matrices
Tuesday, November 02, 2010
09:37
Notes Page 82
Good for orthogonal polynomials. Pn: polynomials of degree n or less, Pn = {a0 + a1x + … + an·x^n}. Choose an interval [a,b] and a weight w(x) > 0, and define the inner product <f,g>_w = integral from a to b of f(x)·g(x)·w(x) dx. Then do Gram-Schmidt on the standard polynomials 1, x, x^2, …, x^n.
Do this (with the right weight) and you get the Hermite polynomials H0, …, Hn - use Gram-Schmidt to relate each one to the previous ones (you get an iterative relation; don't compute directly).
Say w(x) = (1-x^2)(1+x^2) on [-1,1] (always non-negative there).
On page 272 at the bottom.
Let's say w(x) = 1-x^2.
Notes Page 83
A is an n x n matrix. Find vectors such that Av = cv where c is a scalar (and then any scalar multiple of v satisfies the same equation).
Definition: an axis is a line L such that AL = L.
1. An eigenvector for A is a vector v such that Av = cv for some c (usually c is in the field, but occasionally we allow it to be imaginary).
2. An eigenvalue for A is a value c in the field such that there exists a vector v in V with Av = cv.
3. The characteristic polynomial of A is ΧA(c) = det(A - c·In); the eigenvalues satisfy ΧA(c) = 0.
Eigenvectors and eigenvalues
Thursday, November 04, 2010
10:10
Notes Page 84
-3*3*3+4*3*3-5*3+6=0
There exists a matrix A. Suppose there exist n different vectors (a basis) so that each is an eigenvector: A·vi = ci·vi for v1, …, vn.
Let S = (v1, …, vn) and take S^-1·A·S. First AS = (Av1, …, Avn) = (c1·v1, …, cn·vn), so AS is S times a diagonal matrix, and S^-1·A·S is that diagonal matrix.
Notes Page 85
A is an n x n matrix.
I. If there exists a vector v != 0 and a value c such that Av = cv, then c is called an eigenvalue of A and v is called an eigenvector of A.
II. ΧA(x) = det(A - x·In) is called the characteristic polynomial of A.
Example:
A =
[ -4 -4  2 ]
[  3  4 -1 ]
[ -3 -2  3 ]
A - xI3 =
[ -4-x -4   2  ]
[  3   4-x -1  ]
[ -3   -2  3-x ]
Add row 2 to row 3:
[ -4-x -4   2   ]
[  3   4-x -1   ]
[  0   2-x  2-x ]
Factor (2-x) out of row 3:
(2-x) ·
[ -4-x -4   2 ]
[  3   4-x -1 ]
[  0   1    1 ]
Expand along the bottom row:
det = (2-x)·( -det[ -4-x 2 ; 3 -1 ] + det[ -4-x -4 ; 3 4-x ] )
    = (2-x)·( (-4-x+6) + (-16 + x^2 + 12) )
    = (2-x)(x^2 - x - 2) = (2-x)(x-2)(x+1)
so ΧA(x) = -(x-2)^2·(x+1). Thus the eigenvalues are 2, 2, -1.
Start with 2: A - 2I =
[ -6 -4  2 ]
[  3  2 -1 ]
[ -3 -2  1 ]
Every row is a multiple of (3, 2, -1):
[ 3 2 -1 ]
[ 0 0 0 ]
[ 0 0 0 ]
3x = -2y + z, i.e. x = -(2/3)y + (1/3)z.
Thus v = (-(2/3)y + (1/3)z, y, z)^T = y·(-2/3, 1, 0)^T + z·(1/3, 0, 1)^T.
Eigenvectors for 2: (-2/3, 1, 0)^T and (1/3, 0, 1)^T are a basis for the eigenspace that belongs to 2.
For -1: A + I =
[ -3 -4  2 ]
[  3  5 -1 ]
[ -3 -2  4 ]
R2 + R1 and R3 + R2:
[ -3 -4 2 ]
[  0  1 1 ]
[  0  3 3 ]
then R3 - 3R2 and R1 + 4R2:
[ -3 0 6 ]
[  0 1 1 ]
[  0 0 0 ]
y + z = 0 and -3x + 6z = 0, so y = -z and x = 2z.
Eigenvector: (2, -1, 1)^T.
The conjugate of x by a is a·x·a^-1.
S = (the eigenvectors as columns, scaling (-2/3, 1, 0) and (1/3, 0, 1) to (-2, 3, 0) and (1, 0, 3)):
[ -2 1  2 ]
[  3 0 -1 ]
[  0 3  1 ]
AS =
[ -4 -4  2 ] [ -2 1  2 ]   [ -4 2 -2 ]
[  3  4 -1 ] [  3 0 -1 ] = [  6 0  1 ]
[ -3 -2  3 ] [  0 3  1 ]   [  0 6 -1 ]
AS = S ·
[ 2 0  0 ]
[ 0 2  0 ]
[ 0 0 -1 ]
so S^-1·A·S =
[ 2 0  0 ]
[ 0 2  0 ]
[ 0 0 -1 ]
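The eigenvector claims are easy to verify numerically. A small check of my own, using the scaled eigenvectors (-2, 3, 0), (1, 0, 3), (2, -1, 1) from the example:

```python
def matvec(A, v):
    """Multiply matrix A (list of rows) by column vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[-4, -4, 2],
     [3, 4, -1],
     [-3, -2, 3]]

v1, v2, v3 = [-2, 3, 0], [1, 0, 3], [2, -1, 1]
Av1 = matvec(A, v1)   # 2 * v1
Av2 = matvec(A, v2)   # 2 * v2
Av3 = matvec(A, v3)   # -1 * v3
```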
Eigenvalues and eigenvectors
Tuesday, November 09, 2010
09:34
Notes Page 86
A is a matrix; v1, …, vr are eigenvectors for eigenvalues λ1, …, λr, where no two λ's are identical.
Theorem: the vi's are linearly independent.
Proof: suppose they're not linearly independent - then there is some c1v1 + … + crvr = 0. There is thus some shortest such combination, ci1·vi1 + … + cil·vil = 0; divide through by ci1 to get vi1 + ci2·vi2 + … + cil·vil = 0.
Apply A, using A·vij = λij·vij, and subtract λi1 times the relation:
ci2(λi2 - λi1)·vi2 + ci3(λi3 - λi1)·vi3 + … + cil(λil - λi1)·vil = 0.
None of the coefficients are zero - thus this is a one-shorter combination, but the one above was already shortest, so this is not possible.
QED
If ΧA(x) = ±(x-λ1)^r1·(x-λ2)^r2·…·(x-λq)^rq, and if the eigenspace of each λi is of dimension ri, then there exists a basis of eigenvectors:
λ1: v11, …, v1r1
λ2: v21, …, v2r2
…
λq: vq1, …, vqrq
With S = (all the v's as columns), S^-1·A·S is the diagonal matrix with λ1 (r1 times), λ2 (r2 times), …, λq down the diagonal.
But this is not always possible - take A =
[ 1 1 ]
[ 0 1 ]
det(A - xI) = (1-x)^2. Subtracting, A - I =
[ 0 1 ]
[ 0 0 ]
whose solution space is merely the span of (1, 0)^T - the only eigenvector - thus there cannot be a basis of eigenvectors.
Jordan canonical form: when working over the complexes, one can find a basis such that the matrix has all the λ's in order down the diagonal (possibly more than one of each) and some 1's in the row immediately above the diagonal, with zeros everywhere else. Example:
[ λ1 1  0  0  0  ]
[ 0  λ1 0  0  0  ]
[ 0  0  λ1 0  0  ]
[ 0  0  0  λ2 1  ]
[ 0  0  0  0  λ2 ]
Theorem: Let A be an n x n matrix with n eigenvalues counted with multiplicity: ΧA(x) = (x-λ1)^r1·…·(x-λq)^rq, where no two λ's are equal and r1 + … + rq = n. Suppose the eigenspace of each λi has dimension ri. Then there exists S such that S^-1·A·S is the diagonal matrix of the λi's.
Corollary: suppose ΧA(x) has n distinct roots. Then there exists a basis of eigenvectors and A can be diagonalized; that is, A is regular and semisimple.
A subspace of 1 dimension less, in general, thus takes up little of whole space
Most of the time the matrices are nice like this - only not if the determinant (in whatever dimension) happens to be zero
Notes Page 87
Say we have a vector space over C - needed for the eigenvalues of certain matrices.
Say z is complex: z = a+bi, z* = a-bi.
Norm of z: N(z) = z·z* = a^2 + b^2.
z*·w* = (zw)*; z* + w* = (z+w)*; N(zw) = N(z)·N(w).
|z| = sqrt(N(z)); |zw| = |z|·|w|.
z = |z|(cos θ + i·sin θ) = |z|·cis(θ); e^(iθ) = cis(θ).
Now take the vector space Cn, where C is the complex numbers
A =
[  0 1 ]
[ -1 0 ]
x^2 + 1 = 0, so λ = ±i. Find the eigenvectors for i and -i.
For i: A - iI =
[ -i  1 ]
[ -1 -i ]
→
[ 1 i ]
[ 0 0 ]
x = -iy, so the solutions are y·(-i, 1)^T: the eigenvector for i is (-i, 1)^T.
(z^-1 = z*/N(z).)
For -i: A + iI =
[  i 1 ]
[ -1 i ]
→
[ 1 -i ]
[ 0  0 ]
x = iy: the eigenvector for -i is (i, 1)^T.
S =
[ -i i ]
[  1 1 ]
S^-1·A·S =
[ i  0 ]
[ 0 -i ]
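Python's built-in complex numbers make the rotation-matrix example easy to check (a small verification of my own, using `1j` for i):

```python
# Check: A = [[0, 1], [-1, 0]] has eigenvalues +/-i with
# eigenvectors (-i, 1) and (i, 1).

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[0, 1], [-1, 0]]
v_plus = [-1j, 1]              # eigenvector for +i
v_minus = [1j, 1]              # eigenvector for -i

Av_plus = matvec(A, v_plus)    # equals i * v_plus
Av_minus = matvec(A, v_minus)  # equals -i * v_minus
```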
Dot product in normal way won't work with complexes
Hermitian inner products.
Let V be a C vector space. A Hermitian inner product is a pairing u, v → (u,v) such that:
1. (λu, v) = λ(u,v)
2. (u, λv) = λ*(u,v)
3. (u+u', v) = (u,v) + (u',v) and (u, v+v') = (u,v) + (u,v')
4. (u,v) = (v,u)*
5. (u,u) is real, (u,u) >= 0, and if (u,u) = 0 then u = 0
Say we have Cn.
Complex numbers
Thursday, November 11, 2010
09:41
Notes Page 88
Notes Page 89
Notes Page 90
A=
1 0 0
-2 1 3
1 1 -1
y=(y1,y2,…,yn)
Example of linear systems of differential equations with constant coefficients
Thursday, November 11, 2010
10:32
Notes Page 91
Notes Page 92
[ 1+i  2-2i 5-3i ]
[ 3+i  1-i  i    ]
[ 3+7i -i   4-i  ]
Find the Gauss-Jordan form. (1+i)^-1 = (1-i)/2, so scale row 1 by (1-i)/2:
(2-2i)(1-i)/2 = -2i, (5-3i)(1-i)/2 = 1-4i
Clearing below: (1-i) - (2-2i)(3+i) = (1-i) - (8-4i) = -7+3i, giving
[ 1 -2i   1-4i ]
[ 0 -7+3i …    ]
Very tedious with complex matrices.
Make
[ i   1  ]
[ 2+i -i ]
into an orthonormal basis.
Complex matrices
Tuesday, November 16, 2010
09:31
Notes Page 93
Definition: M is Hermitian iff MH=M
Notes Page 94
Definition: U is unitary if U^H = U^-1, that is U^H·U = In.
Corollary: U is unitary iff the columns (or the rows) form an orthonormal basis. As with the dot-product matrix above, U^H·U is the identity exactly when the rows or columns are an orthonormal basis, since then (Ci, Cj) = 1 if i = j and (Ci, Cj) = 0 if i != j.
[ a b ]^H   [ a* c* ]
[ c d ]   = [ b* d* ]
Setting this equal to
[ a b ]
[ c d ]
gives: a* = a, thus a is real; d* = d, thus d is real; b* = c and c* = b, thus b and c are complex conjugates.
[ any real number          any complex number ]
[ conjugate of that number any real number    ]
In general a Hermitian matrix looks like
[ a11  a12 … a1n ]
[ a12* …         ]
[ …              ]
[ a1n* …    ann  ]
Theorem: Let M be any matrix. Then there exists a unitary matrix U so that U^H·M·U is upper triangular. U is not necessarily unique.
This means there is a unitary change of basis. Prove it by induction.
True for 1 x 1 (n = 1), trivially. Assume it is true for n = k, that is for k x k matrices; suppose M is a (k+1) x (k+1) matrix.
M has an eigenvector w1: M·w1 = λ·w1.
Presume w1 has length 1 (divide by its length if it is not already). Make a change of basis: V = Cn, and W = (C·w1)^perp is a subspace - the set of v such that (w1, v) = 0, that is v perpendicular to all of C·w1.
Notes Page 95
Take (w1, u2, …, un) and do Gram-Schmidt to it; this gives a full basis B = (w1, …, wn) of Cn. In this basis M·w1 = λ·w1, so
[M]_B =
[ λ stuff ]
[ 0       ]
[ …  M'   ]
[ 0       ]
where M' is a k x k matrix. By the inductive hypothesis there is a unitary V such that V^H·M'·V is upper triangular. Then
[ 1 0   ]   [ λ R  ]   [ 1 0 ]   [ λ R·V      ]
[ 0 V^H ] · [ 0 M' ] · [ 0 V ] = [ 0 V^H·M'·V ]
Continue down to a 1 x 1 and put the pieces together.
The product of 2 unitary matrices is unitary: (UV)^H·(UV) = V^H·U^H·U·V = V^H·V = I.
This is on the final, but not this midterm
Notes Page 96