
Lecture 8: The matrix of a linear transformation. Applications

Danny W. Crytser

April 7, 2014

Example

Let T : R2 → R3 be the linear transformation defined by

T\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right) = \begin{bmatrix} x_1 + x_2 \\ 2x_1 \\ 3x_2 \end{bmatrix}.

Let's find the matrix A such that T(x) = Ax for all x ∈ R2. Note that A must have 2 columns (domain R2) and 3 rows (codomain R3). The two basis vectors in the domain are e1, e2. Their images are

T(e_1) = \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}, \qquad T(e_2) = \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}.

So A has those vectors as columns:

A = \begin{bmatrix} 1 & 1 \\ 2 & 0 \\ 0 & 3 \end{bmatrix}.


We can check that we got the right answer:

A \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix} = \begin{bmatrix} x_1 + x_2 \\ 2x_1 \\ 3x_2 \end{bmatrix} = T\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right).

Thus T(x) = Ax for all x ∈ R2. There's a fancy term for the matrix we've cooked up.

Definition

If T : Rn → Rm is a linear transformation and e1, e2, . . . , en are the standard basis vectors in Rn, then the matrix

A = \begin{bmatrix} T(e_1) & T(e_2) & \cdots & T(e_n) \end{bmatrix},

which satisfies T(x) = Ax for all x ∈ Rn, is called the standard matrix for T.
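The definition gives a recipe you can also carry out numerically. Here is a small NumPy sketch (my own illustration, not from the lecture) that builds the standard matrix of the example map by applying T to the standard basis vectors and then spot-checks that T(x) = Ax:

```python
import numpy as np

# The example map T : R^2 -> R^3 from the slides, written as a plain function
def T(x):
    x1, x2 = x
    return np.array([x1 + x2, 2 * x1, 3 * x2])

# Standard matrix: its columns are T(e1), T(e2)
A = np.column_stack([T(e) for e in np.eye(2)])
print(A)
# [[1. 1.]
#  [2. 0.]
#  [0. 3.]]

# Spot check that T(x) and Ax agree on a random vector
x = np.random.rand(2)
assert np.allclose(T(x), A @ x)
```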


Matrices and visualization of linear transformations

There are a few different types of linear transformations R2 → R2 that we can describe with words ("rotate the plane counterclockwise by π/2"), and then we get the matrix just by tracking the images of the basis vectors e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} and e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. The book has a big catalog of such transformations.
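For instance, for the counterclockwise rotation by π/2 mentioned above (a worked case, not spelled out on the slide): the rotation sends e_1 ↦ e_2 and e_2 ↦ −e_1, so its standard matrix should be

A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}.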


Reflection through x1 = x2

Suppose that T reflects through the line x1 = x2. What does T look like? What is the matrix of T? The image of e1 is e2 and vice versa.
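Following the column recipe (my own completion of the answer): since T(e_1) = e_2 and T(e_2) = e_1, the standard matrix should be

A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.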


Contractions. Expansions

You can stretch/squeeze the plane in one direction while keeping the other direction fixed. For example, let's say we stretched the plane along the x-axis by a factor of 2, but didn't distort it in the y-axis. What would the matrix look like? The image of e1 is 2e1 and e2 doesn't change. The graphic on the right represents this situation, where we have k = 2.
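Again tracking the basis vectors (a worked answer, not on the slide): T(e_1) = 2e_1 and T(e_2) = e_2, so the standard matrix should be

A = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix},

and for a general horizontal stretch/squeeze factor k it would be \begin{bmatrix} k & 0 \\ 0 & 1 \end{bmatrix}.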


Shear transformations

Shear transformations add some multiple of one basis vector to another basis vector (not to be confused with row operations). For example e1 ↦ e1 + 2e2 and e2 ↦ e2.
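For this shear, the columns of the standard matrix are T(e_1) = e_1 + 2e_2 and T(e_2) = e_2, so

A = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}.

The short NumPy sketch below (an illustration of the catalog so far, not from the lecture) applies the reflection, expansion (k = 2), and shear matrices to the basis vectors to confirm they act as described:

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

reflect = np.array([[0.0, 1.0],
                    [1.0, 0.0]])   # swaps e1 and e2
expand  = np.array([[2.0, 0.0],
                    [0.0, 1.0]])   # stretches the x-axis by a factor of 2
shear   = np.array([[1.0, 0.0],
                    [2.0, 1.0]])   # e1 -> e1 + 2*e2, e2 -> e2

for name, A in [("reflection", reflect), ("expansion", expand), ("shear", shear)]:
    print(name, A @ e1, A @ e2)
# reflection [0. 1.] [1. 0.]
# expansion [2. 0.] [0. 1.]
# shear [1. 2.] [0. 1.]
```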


Onto

Definition

A function T : Rn → Rm is said to be onto if the range is the codomain, that is, for each vector y ∈ Rm there is at least one x ∈ Rn with T(x) = y.

The preceding definition is supposed to address the potential difference between range and codomain.

Example

Let T : R2 → R2 be given by T(x1, x2) = (x1, 0). Then T is not onto: the range is the x-axis, an object in mathematics noteworthy for not being the entire plane. The reflection across the line x1 = x2, given by T(x1, x2) = (x2, x1), is onto: every vector in R2 is the reflection of some other vector (that other vector is its reflection).
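In matrix form (a quick aside using the standard-matrix recipe, not on the slide): the projection T(x1, x2) = (x1, 0) has standard matrix \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, whose columns span only the x-axis, while the reflection T(x1, x2) = (x2, x1) has standard matrix \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, whose columns span all of R2. This is exactly the span criterion that appears in the next theorem.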


Onto, ctd.

The quality of being onto has to do with existence of solutions: a linear transformation T given by T(x) = Ax is onto if and only if Ax = b is consistent for all b ∈ Rm. Reviewing the following theorem allows us to describe onto linear transformations with echelon forms. We already saw a version of this theorem.

Theorem

Let A be an m × n matrix. Then Ax = b is consistent for all b ∈ Rm if and only if every row in the echelon form of A (not augmented) has a nonzero entry.

We can reformulate it in terms of linear transformations.

Theorem

Let T(x) = Ax. Then T is onto if and only if every row in the echelon form of A (non-augmented) has a nonzero entry. This happens if and only if the columns of A span Rm.


One-to-one

Definition

A function T : Rn → Rm is said to be one-to-one if T(x) = T(x′) implies x = x′ for vectors x, x′ ∈ Rn. That is, T is one-to-one if two vectors in the domain have the same image under T only when they are equal.

Example

The map T : R3 → R3 given by T(x1, x2, x3) = (x1, x2, 0) is not one-to-one: T(0, 0, 1) = T(0, 0, 0) = 0 and yet the vectors are not equal. The map T : R2 → R3 given by


One-to-one, ctd.

The quality of being one-to-one has to do with uniqueness of solutions: the linear transformation T given by T(x) = Ax is one-to-one if, whenever Ax = b is consistent, it has a unique solution.

Theorem

Let A be a matrix. Then T is one-to-one if and only if every column in the echelon form of A (non-augmented) has a pivot. This happens if and only if the columns of A are linearly independent.
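For instance (my own check, not on the slide): the projection T(x1, x2, x3) = (x1, x2, 0) from the previous example has standard matrix

A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix},

whose third column has no pivot (it is the zero column), matching the fact that T(0, 0, 1) = T(0, 0, 0).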


Onto/one-to-one: echelon form

We can summarize all of this in one biggish theorem:

Theorem

Let A be an m × n matrix. The linear transformation T : Rn → Rm given by T(x) = Ax is

1. onto if and only if every row of the echelon form of A has a pivot, if and only if the columns of A span Rm;

2. one-to-one if and only if every column of the echelon form of A has a pivot, if and only if the columns of A are linearly independent.

You can see that the only way that T can be both onto and one-to-one is if m = n.
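Numerically, "a pivot in every row" is the same as rank A = m and "a pivot in every column" is the same as rank A = n, so one way to test both conditions is with a rank computation. A minimal NumPy sketch (my own, not from the lecture; exact row reduction by hand is the more honest check for small matrices):

```python
import numpy as np

def is_onto(A: np.ndarray) -> bool:
    """Pivot in every row  <=>  rank A = m  <=>  the columns span R^m."""
    m, _ = A.shape
    return np.linalg.matrix_rank(A) == m

def is_one_to_one(A: np.ndarray) -> bool:
    """Pivot in every column  <=>  rank A = n  <=>  the columns are linearly independent."""
    _, n = A.shape
    return np.linalg.matrix_rank(A) == n
```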


Example

Let A = \begin{bmatrix} 1 & 7 \\ 2 & 3 \\ 4 & 2 \end{bmatrix} and define T : R2 → R3 by T(x) = Ax. Is T onto? Is T one-to-one?
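One way to answer (a numerical check via the rank criteria above; the lecture presumably intends row reduction by hand):

```python
import numpy as np

A = np.array([[1.0, 7.0],
              [2.0, 3.0],
              [4.0, 2.0]])
m, n = A.shape
rank = np.linalg.matrix_rank(A)   # the two columns are not multiples of each other, so rank = 2

print("onto:", rank == m)         # False: 2 pivots cannot fill 3 rows, so the columns do not span R^3
print("one-to-one:", rank == n)   # True: a pivot in each of the 2 columns, so the columns are independent
```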


APPLICATIONS


Voltage loops

1. A closed loop in a network has three things affiliated to it: some voltage sources, some resistors, and an oriented (clockwise or counterclockwise) current.

2. Each of these has a weight measuring how much voltage, resistance, or current there is (one current for the whole loop).

3. A voltage source is positive for a loop if the current flows from the positive (long) terminal to the negative (short) terminal.


Ohm's law

OHM'S LAW: If a current of I amps passes across a resistor of R ohms, then the voltage drops by V = RI volts.

Passing from A to B, the sum of the voltage drops is 3I1 − 3I2. (Notice that the voltage source in the first loop is positive.)
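As a worked instance of the formula (my reading of it; the circuit diagram itself is not reproduced in the text): if, as the expression suggests, a 3-ohm resistor is shared between loops 1 and 2, then the net current through it when passing from A to B is I1 − I2, and Ohm's law gives a drop of V = 3(I1 − I2) = 3I1 − 3I2 volts.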


Kirchhoff's law

Kirchhoff's law governs how much current and resistance (so, how much voltage dropped) can be in an electrical network with given voltage sources. It's basically a balancing between the voltage lost through voltage drops and the voltage put into loops from voltage sources.

KIRCHHOFF'S LAW: If you add up the voltage drops around a loop, the total equals the sum of the voltage sources in the loop.

Remember: when adding voltage sources you have to check to see if they're positive (current runs positive terminal to negative terminal) or negative (vice versa).


Kirchhoff: example

Let's look at this. First loop: voltage source = 30. The voltage drop in the first loop is 4I1 + (3I1 − 3I2) + 4I1 = 11I1 − 3I2. This must equal the sum of the voltage sources in the first loop, which is 30. So the equation for the first loop is 11I1 − 3I2 = 30.


Kirchhoff: example

Repeating this for each loop gives the system

11I1 − 3I2 = 30 (1)

−3I1 + 6I2 − I3 = 5 (2)

−I2 + 3I3 = −25 (3)

It has a unique solution: I1 = 3 amps, I2 = 1 amp, I3 = −8 amps. The negative I3 answer says that the current flows clockwise in loop 3.
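A quick NumPy check of the claimed solution (not part of the lecture):

```python
import numpy as np

# Coefficient matrix and right-hand side of equations (1)-(3)
R = np.array([[11.0, -3.0,  0.0],
              [-3.0,  6.0, -1.0],
              [ 0.0, -1.0,  3.0]])
v = np.array([30.0, 5.0, -25.0])

I = np.linalg.solve(R, v)
print(I)   # [ 3.  1. -8.]  ->  I1 = 3, I2 = 1, I3 = -8 amps
```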


Difference equations

In many situations you will be measuring some system, and all the information about the system at time k will be contained in some vector xk. (Could be age, salary, population, microbe count, whatever.)

Definition

Suppose your data take the form of vectors xk ∈ Rn, where k = 0, 1, 2, . . .. If there is an n × n matrix A such that x1 = Ax0, x2 = Ax1, and generally xk+1 = Axk (*), we say that equation (*) is a linear difference equation (some people call it a recursion relation, because it gives the new measurement in terms of the old measurement).


Difference equations and population

You can study population dynamics using difference equations. Let's say that in the nation of Zembla there is one city and one suburb. The population distribution in Zembla in year 0 can be recorded in a vector in R2:

x_0 = \begin{bmatrix} r_0 \\ s_0 \end{bmatrix},

where r0 is the population in the city in year 0 and s0 is the population in the suburb in year 0. The vectors

x_1 = \begin{bmatrix} r_1 \\ s_1 \end{bmatrix}, \qquad x_2 = \begin{bmatrix} r_2 \\ s_2 \end{bmatrix}, \ldots

record the population distribution in year 1, year 2, etc.


Difference equations and population

Let's say that in any one year, 95 percent of city people remain in the city and 5 percent of city people go to the suburb. Let's say in the same time frame, 97 percent of suburb people remain in the suburb and 3 percent go to the city. Thus, after year 0 we can see what the population looks like in year 1:

r_1 = .95 r_0 + .03 s_0

and

s_1 = .05 r_0 + .97 s_0.

Thus

\begin{bmatrix} r_1 \\ s_1 \end{bmatrix} = r_0 \begin{bmatrix} .95 \\ .05 \end{bmatrix} + s_0 \begin{bmatrix} .03 \\ .97 \end{bmatrix} = \begin{bmatrix} .95 & .03 \\ .05 & .97 \end{bmatrix} \begin{bmatrix} r_0 \\ s_0 \end{bmatrix}.


The Transition Matrix

Now we can write down

x_1 = \begin{bmatrix} .95 & .03 \\ .05 & .97 \end{bmatrix} x_0

and

x_2 = \begin{bmatrix} .95 & .03 \\ .05 & .97 \end{bmatrix} x_1

and, in general,

x_k = \begin{bmatrix} .95 & .03 \\ .05 & .97 \end{bmatrix} x_{k-1}.

If A is the matrix above, then we can write x_k = Ax_{k-1}. You can use this to predict the future.
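For example, here is a short simulation of the model (the matrix is the one from the lecture; the starting populations are hypothetical numbers chosen only for illustration):

```python
import numpy as np

# Migration matrix from the lecture
A = np.array([[0.95, 0.03],
              [0.05, 0.97]])

# Hypothetical initial populations: x0 = (r0, s0)
x = np.array([600_000.0, 400_000.0])

for k in range(1, 6):
    x = A @ x   # x_k = A x_{k-1}
    print(f"year {k}: city = {x[0]:,.0f}, suburb = {x[1]:,.0f}")
# year 1: city = 582,000, suburb = 418,000, and so on
```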
