Lecture 1 Describing Inverse Problems

Page 1: Lecture 1 Describing Inverse Problems

Lecture 1

Describing Inverse Problems

Page 2: Lecture 1 Describing Inverse Problems

Syllabus

Lecture 01 Describing Inverse Problems
Lecture 02 Probability and Measurement Error, Part 1
Lecture 03 Probability and Measurement Error, Part 2
Lecture 04 The L2 Norm and Simple Least Squares
Lecture 05 A Priori Information and Weighted Least Squares
Lecture 06 Resolution and Generalized Inverses
Lecture 07 Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
Lecture 08 The Principle of Maximum Likelihood
Lecture 09 Inexact Theories
Lecture 10 Nonuniqueness and Localized Averages
Lecture 11 Vector Spaces and Singular Value Decomposition
Lecture 12 Equality and Inequality Constraints
Lecture 13 L1, L∞ Norm Problems and Linear Programming
Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15 Nonlinear Problems: Newton's Method
Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17 Factor Analysis
Lecture 18 Varimax Factors, Empirical Orthogonal Functions
Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20 Linear Operators and Their Adjoints
Lecture 21 Fréchet Derivatives
Lecture 22 Exemplary Inverse Problems, incl. Filter Design
Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

Page 3: Lecture 1 Describing Inverse Problems

Purpose of the Lecture

distinguish forward and inverse problems

categorize inverse problems

examine a few examples

enumerate different kinds of solutions to inverse problems

Page 4: Lecture 1 Describing Inverse Problems

Part 1

Lingo for discussing the relationship between observations and the things that we want to learn from them

Page 5: Lecture 1 Describing Inverse Problems

three important definitions

Page 6: Lecture 1 Describing Inverse Problems

things that are measured in an experiment or observed in nature…

data, d = [d1, d2, …, dN]ᵀ

things you want to know about the world…

model parameters, m = [m1, m2, …, mM]ᵀ

relationship between data and model parameters…

quantitative model (or theory)

Page 7: Lecture 1 Describing Inverse Problems

data, d = [d1, d2, …, dN]ᵀ — e.g., gravitational accelerations, travel times of seismic waves

model parameters, m = [m1, m2, …, mM]ᵀ — e.g., density, seismic velocity

quantitative model (or theory) — e.g., Newton's law of gravity, the seismic wave equation

Page 8: Lecture 1 Describing Inverse Problems

Forward Theory:
estimates m^est → Quantitative Model → predictions d^pre

Inverse Theory:
observations d^obs → Quantitative Model → estimates m^est

Page 9: Lecture 1 Describing Inverse Problems

m^true → Quantitative Model → d^pre
d^obs → Quantitative Model → m^est

d^pre ≠ d^obs due to observational error

Page 10: Lecture 1 Describing Inverse Problems

m^true → Quantitative Model → d^pre
d^obs → Quantitative Model → m^est

d^pre ≠ d^obs due to observational error
m^est ≠ m^true due to error propagation

Page 11: Lecture 1 Describing Inverse Problems

Understanding the effects of observational error is central to Inverse Theory.

Page 12: Lecture 1 Describing Inverse Problems

Part 2

types of quantitative models (or theories)

Page 13: Lecture 1 Describing Inverse Problems

A. Implicit Theory

the L relationships between the data and the model parameters are known:

f(d, m) = 0

Page 14: Lecture 1 Describing Inverse Problems

Example

mass = density ⨉ length ⨉ width ⨉ height

M = ρ ⨉ L ⨉ W ⨉ H

[figure: a block of length L, width W, height H, density ρ, and mass M]

Page 15: Lecture 1 Describing Inverse Problems

mass = density ⨉ volume

measure: mass, d1; size, d2, d3, d4
want to know: density, m1

d1 = m1 d2 d3 d4, or f1(d, m) = d1 − m1 d2 d3 d4 = 0

d = [d1, d2, d3, d4]ᵀ and N = 4
m = [m1]ᵀ and M = 1
f1(d, m) = 0 and L = 1
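With a single model parameter the implicit relationship can be solved directly. A minimal MatLab sketch, with made-up measurements:

d = [270; 1.0; 0.5; 0.2];      % hypothetical data: mass (kg), then length, width, height (m)
mest = d(1)/(d(2)*d(3)*d(4));  % density estimate (2700 kg/m^3) solving f1(d,m) = 0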

Page 16: Lecture 1 Describing Inverse Problems

note

No guarantee that f(d, m) = 0 contains enough information for a unique estimate of m.

Determining whether or not there is enough is part of the inverse problem.

Page 17: Lecture 1 Describing Inverse Problems

B. Explicit Theory

the equations can be arranged so that d is a function of m:

d = g(m) or d − g(m) = 0

L = N, one equation per datum

Page 18: Lecture 1 Describing Inverse Problems

Example

Circumference = 2 ⨉ length + 2 ⨉ height
Area = length ⨉ height

[figure: a rectangle of length L and height H]

Page 19: Lecture 1 Describing Inverse Problems

C = 2L + 2H
A = LH

measure: circumference, C = d1; area, A = d2
want to know: length, L = m1; height, H = m2

d = [d1, d2]ᵀ and N = 2
m = [m1, m2]ᵀ and M = 2

d1 = 2m1 + 2m2 and d2 = m1m2, so d = g(m)
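This small nonlinear system can be inverted in closed form: m1 + m2 = d1/2 and m1m2 = d2, so the two model parameters are the roots of a quadratic. A MatLab sketch with made-up data:

d = [10; 4];                       % hypothetical circumference and area
mest = roots([1, -d(1)/2, d(2)]);  % roots of x^2 - (d1/2)x + d2 = 0, here [4; 1]
% note: the data alone cannot tell which root is L and which is H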


Page 21: Lecture 1 Describing Inverse Problems

C. Linear Explicit Theory

the function g(m) is a matrix G times m

d = Gm

G, the "data kernel", has N rows and M columns

Page 22: Lecture 1 Describing Inverse Problems

Example

total mass = density of gold ⨉ volume of gold + density of quartz ⨉ volume of quartz:
M = ρg ⨉ Vg + ρq ⨉ Vq

total volume = volume of gold + volume of quartz:
V = Vg + Vq

Page 23: Lecture 1 Describing Inverse Problems

M = ρg ⨉ Vg + ρq ⨉ Vq
V = Vg + Vq

measure: V = d1, M = d2
want to know: Vg = m1, Vq = m2
assume ρg, ρq known

d = [d1, d2]ᵀ and N = 2
m = [m1, m2]ᵀ and M = 2

d = [1, 1; ρg, ρq] m, with ρg and ρq known
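Since N = M = 2, the estimate follows by solving the 2⨉2 linear system. A MatLab sketch (the densities are standard values; the data are made up):

rhog = 19.3; rhoq = 2.65;  % densities of gold and quartz (g/cm^3), assumed known
G = [1, 1; rhog, rhoq];    % the data kernel
d = [1.0; 5.0];            % hypothetical data: total volume (cm^3) and total mass (g)
mest = G\d;                % mest = [Vg; Vq]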

Page 24: Lecture 1 Describing Inverse Problems

D. Linear Implicit Theory

The L relationships between the data and the model parameters are linear:

F x = 0, with x = [dᵀ, mᵀ]ᵀ, where F has L rows and N+M columns

Page 25: Lecture 1 Describing Inverse Problems

in all these examples m is discrete: discrete inverse theory

one could have a continuous m(x) instead: continuous inverse theory

Page 26: Lecture 1 Describing Inverse Problems

in this course we will usually approximate a continuous m(x) as a discrete vector m:

m = [m(Δx), m(2Δx), m(3Δx), …, m(MΔx)]ᵀ

but we will spend some time later in the course dealing with the continuous problem directly
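In MatLab such a discretization might look like this (the continuous model m(x) = sin(x) is just a stand-in):

M = 100; Dx = 0.1;  % number of samples and sample spacing
x = Dx*(1:M)';      % sample points Δx, 2Δx, …, MΔx
m = sin(x);         % m = [m(Δx), m(2Δx), …, m(MΔx)]ᵀ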

Page 27: Lecture 1 Describing Inverse Problems

Part 3

Some Examples

Page 28: Lecture 1 Describing Inverse Problems

A. Fitting a straight line to data

[figure: temperature anomaly, Ti (deg C), vs. time, t (calendar years), 1965–2010]

T = a + bt

Page 29: Lecture 1 Describing Inverse Problems

each data point is predicted by a straight line

Page 30: Lecture 1 Describing Inverse Problems

matrix formulation: d = Gm with M = 2

row i of G is [1, ti] and m = [a, b]ᵀ
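In MatLab the straight-line data kernel can be assembled column by column, the same way the parabola's is on Page 35 (a sketch with hypothetical observation times):

t = (1965:5:2010)';  % hypothetical observation times, as an N⨉1 column vector
N = length(t);
G = [ones(N,1), t];  % row i is [1, t(i)], so d = G*[a; b]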

Page 31: Lecture 1 Describing Inverse Problems

B. Fitting a parabola

T = a + bt + ct²

Page 32: Lecture 1 Describing Inverse Problems

each data point is predicted by a quadratic curve

Page 33: Lecture 1 Describing Inverse Problems

matrix formulation: d = Gm with M = 3

row i of G is [1, ti, ti²] and m = [a, b, c]ᵀ

Page 34: Lecture 1 Describing Inverse Problems

note the similarity between the straight line and the parabola: the parabola's data kernel just has one extra column, ti²

Page 35: Lecture 1 Describing Inverse Problems

in MatLab

G=[ones(N,1), t, t.^2];  % parabola case: t is an N⨉1 column vector of observation times

Page 36: Lecture 1 Describing Inverse Problems

C. Acoustic Tomography

[figure: a 4⨉4 grid of pixels numbered 1–16, each of side h, with sources S and receivers R along the edges]

travel time = length ⨉ slowness

Page 37: Lecture 1 Describing Inverse Problems

collect data along rows and columns

Page 38: Lecture 1 Describing Inverse Problems

matrix formulation

d = Gm with N = 8 and M = 16

Page 39: Lecture 1 Describing Inverse Problems

In MatLab

N = 8; M = 16;
G = zeros(N,M);
for i = [1:4]
    for j = [1:4]
        % measurements over rows: beam i crosses pixels (i-1)*4 + (1..4)
        k = (i-1)*4 + j;
        G(i,k) = 1;   % crossing length taken as 1 (i.e., h = 1)
        % measurements over columns: beam 4+i crosses pixels i, 4+i, 8+i, 12+i
        k = (j-1)*4 + i;
        G(i+4,k) = 1;
    end
end
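Continuing the snippet above with a quick sanity check (not in the original): in a medium of uniform slowness every beam crosses four pixels, so all eight predicted travel times should be equal:

m = 0.5*ones(M,1);  % hypothetical uniform slowness in all 16 pixels
d = G*m;            % all 8 travel times equal 4 ⨉ 1 ⨉ 0.5 = 2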

Page 40: Lecture 1 Describing Inverse Problems

D. X-ray Imaging

[figure: (A) an X-ray source S and receivers R1–R5 arranged around a patient; (B) the enlarged lymph node to be imaged]

Page 41: Lecture 1 Describing Inverse Problems

theory

I = intensity of X-rays (data)
s = distance along the beam
c = absorption coefficient (model parameters)

the intensity decays along the beam: dI/ds = −c(s) I, so I(s) = I0 exp(−∫ c ds)

Page 42: Lecture 1 Describing Inverse Problems

Taylor series approximation: for weak absorption, exp(−∫ c ds) ≈ 1 − ∫ c ds, so the fractional intensity loss (I0 − I)/I0 ≈ ∫ c ds is linear in c

Page 43: Lecture 1 Describing Inverse Problems

Taylor series approximation + discrete pixel approximation: the integral ∫ c ds becomes a sum over the pixels that the beam crosses

Page 44: Lecture 1 Describing Inverse Problems

Taylor series approximation + discrete pixel approximation: di = Σj Gij mj, where Gij is the length of beam i in pixel j, i.e. d = Gm

Page 45: Lecture 1 Describing Inverse Problems

d = G m

matrix formulation

N ≈ 10⁶ and M ≈ 10⁶

Page 46: Lecture 1 Describing Inverse Problems

note that G is huge (10⁶ ⨉ 10⁶)

but it is sparse (mostly zero), since a beam passes through only a tiny fraction of the total number of pixels

Page 47: Lecture 1 Describing Inverse Problems

in MatLab

G = spalloc( N, M, MAXNONZEROELEMENTS);  % allocate an N⨉M all-zero sparse matrix with room for the expected number of nonzeros
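spalloc only reserves space; the nonzero entries are filled in afterwards, and only they consume storage. A toy-sized sketch (the sizes and crossing lengths here are invented, not from the slides):

N = 10; M = 10;             % toy sizes so the sketch runs quickly
G = spalloc(N, M, 3*N);     % room for about 3 nonzeros per row
G(1,1) = 0.5; G(1,2) = 0.7; % hypothetical crossing lengths of beam 1 in pixels 1 and 2
nnz(G)                      % only the 2 assigned entries are stored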

Page 48: Lecture 1 Describing Inverse Problems

E. Spectral Curve Fitting

[figure: counts vs. velocity (mm/s): a spectrum with several absorption dips]

Page 49: Lecture 1 Describing Inverse Problems

single spectral peak

[figure: a single peak p(z), characterized by its area A, position f, and width c]

Page 50: Lecture 1 Describing Inverse Problems

q spectral peaks, each a "Lorentzian":

d = g(m)
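A MatLab sketch of evaluating this forward model d = g(m). The slide does not give the exact formula, so the parameterization below — each peak an absorption dip with amplitude Aj, center fj and width cj on a unit background — is an assumption:

v = linspace(0, 12, 121)';                   % velocities (mm/s) at which counts are predicted
A = [0.3; 0.25]; f = [4; 8]; c = [0.5; 0.8]; % hypothetical q = 2 peaks
d = ones(size(v));                           % assumed background count level
for j = 1:length(A)
    d = d - A(j)*c(j)^2 ./ ((v - f(j)).^2 + c(j)^2);  % subtract each Lorentzian dip
end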

Page 51: Lecture 1 Describing Inverse Problems

F. Factor Analysis

[figure: two sources, s1 (ocean) and s2 (sediment), each contributing element abundances e1–e5 to the samples]

Page 52: Lecture 1 Describing Inverse Problems

d = g(m)
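A sketch of the forward relationship, assuming the usual factor-analysis form in which each sample's element composition is a linear mixture of a few source factors (S = CF); all numbers are invented:

% hypothetical factors (rows: factor 1, e.g. sediment; factor 2, e.g. ocean);
% columns are the abundances of elements e1–e5 in each factor
F = [0.6, 0.2, 0.1, 0.1, 0.0; 0.0, 0.1, 0.2, 0.3, 0.4];
% hypothetical mixing proportions of the two factors in samples s1 and s2
C = [0.5, 0.5; 0.8, 0.2];
S = C*F;  % predicted element abundances of each sample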

Page 53: Lecture 1 Describing Inverse Problems

Part 4

What kind of solution are we looking for?

Page 54: Lecture 1 Describing Inverse Problems

A: Estimate of model parameters

meaning numerical values

m1 = 10.5, m2 = 7.2

Page 55: Lecture 1 Describing Inverse Problems

But we really need confidence limits, too

m1 = 10.5 ± 0.2, m2 = 7.2 ± 0.1
or
m1 = 10.5 ± 22.3, m2 = 7.2 ± 9.1

completely different implications!

Page 56: Lecture 1 Describing Inverse Problems

B: probability density functions

if p(m1) is simple, the result is not so different from confidence intervals

[figure: three example probability density functions p(m), 0 ≤ m ≤ 10]

Page 57: Lecture 1 Describing Inverse Problems

[figure: the same three probability density functions p(m), interpreted]

(A) m is about 5, plus or minus 1.5
(B) m is either about 3, plus or minus 1, or about 8, plus or minus 1, but the latter is less likely
(C) we don't really know anything useful about m

Page 58: Lecture 1 Describing Inverse Problems

C: localized averages

A = 0.2m9 + 0.6m10 + 0.2m11 might be better determined than m9, m10, or m11 individually
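Computing such a localized average is just a dot product with an averaging vector whose elements sum to one. A MatLab sketch with an invented model estimate:

mest = rand(20,1);                            % hypothetical estimated model parameters
a = zeros(20,1); a(9:11) = [0.2; 0.6; 0.2];   % averaging weights, summing to 1
A = a'*mest;                                  % A = 0.2*m9 + 0.6*m10 + 0.2*m11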

Page 59: Lecture 1 Describing Inverse Problems

Is this useful?

Do we care about A = 0.2m9 + 0.6m10 + 0.2m11?

Maybe …

Page 60: Lecture 1 Describing Inverse Problems

Suppose m is a discrete approximation of m(x):

[figure: a continuous function m(x) sampled at points …, x9, x10, x11, …, giving m9, m10, m11]

Page 61: Lecture 1 Describing Inverse Problems

[figure: m(x) with samples m9, m10, m11 near x10]

A = 0.2m9 + 0.6m10 + 0.2m11 is a weighted average of m(x) in the vicinity of x10

Page 62: Lecture 1 Describing Inverse Problems

[figure: m(x), with the weights of the weighted average plotted near x10]

the average is "localized" in the vicinity of x10

Page 63: Lecture 1 Describing Inverse Problems

Localized average meancan’t determine m(x) at x=10

but can determineaverage value of m(x) near x=10 Such a localized average might very

well be useful