
Page 1: Finite Difference Approximation
(source: andrei.clubcisco.ro/cursuri/f/f-sym/5master/aac-mtphpc/cursuri/Mtp…)

119
Finite Difference Approximation

Solving the problem numerically → transform the continuous problem into a discrete one by:
– Looking at finite time/space steps

Assuming all functions are sufficiently smooth, a straightforward Taylor expansion gives:

  u(t + Δt) = u(t) + Δt u'(t) + (Δt²/2!) u''(t) + (Δt³/3!) u'''(t) + …

Thus u'(t) is computed:

  u'(t) = (u(t + Δt) − u(t)) / Δt + O(Δt)

120
Finite Difference Approximation (2)

The approximation is obtained by replacing a differential operator by a finite difference:

  u'(t) ≈ (u(t + Δt) − u(t)) / Δt

Substituting this in u'(t) = f(t, u) gives

  u(t + Δt) ≈ u(t) + Δt f(t, u(t))

If t_k = t_0 + kΔt, k = 0, 1, …, and u_k ≈ u(t_k),
we thus get the Explicit (forward) Euler difference equation

  u_{k+1} = u_k + Δt f(t_k, u_k)
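The forward Euler recurrence above translates directly into code. A minimal sketch (function and variable names are illustrative, not from the slides):

```python
def euler_explicit(f, t0, u0, dt, n_steps):
    """Forward Euler: u_{k+1} = u_k + dt * f(t_k, u_k)."""
    t, u = t0, u0
    history = [(t, u)]
    for _ in range(n_steps):
        u = u + dt * f(t, u)   # one explicit Euler step
        t = t + dt
        history.append((t, u))
    return history

# u' = -u, u(0) = 1, integrated to t = 1 with dt = 0.01
traj = euler_explicit(lambda t, u: -u, 0.0, 1.0, 0.01, 100)
```

For this linear test problem each step multiplies u by (1 − dt), so the result can be checked against (1 − dt)^n.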


121
Explicit Euler: Alternative (easier) Formulation

Considering the general first order differential equation

  x'(t) = f(t, x(t))

with some initial condition x(0) = x_0

Euler's (explicit) method is based on

  x'(t) = lim_{h→0} (x(t + h) − x(t)) / h

Instead of letting h → 0, take a finite h and

  x'(t) ≈ (x(t + h) − x(t)) / h

122
Explicit Euler: Alternative (easier) Formulation (2)

Thus Euler's Explicit Method becomes

  x(t + h) ≈ x(t) + h x'(t) = x(t) + h f(t, x(t))

Which is:
– Simple to program
– Very inefficient
– It sometimes gives totally erroneous results
– Highly dependent on the "right" choice for h
– The error is proportional to the step size h (first order) → Euler is a "first-order method"


123
Euler's Method – Example

Let us apply Euler's method to the equation

  x' = −0.05x

which has the solution x = 100 e^{−0.05t}

Considering the initial condition x(0) = 100 and the step size h = 5

[Figure: green is the exact solution, red is Euler's method.
With h = 2.5; 1.25; 0.625 (48 steps): slow & not that good.
With h = 10; 20; 40 (1 step): fast & dead wrong.]
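The example can be reproduced numerically. This sketch (names illustrative) compares forward Euler against the exact solution for several step sizes:

```python
import math

def euler(f, x0, h, t_end):
    """Forward Euler from t = 0 to t_end with step h."""
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        x += h * f(t, x)
        t += h
    return x

f = lambda t, x: -0.05 * x              # x' = -0.05 x
exact = lambda t: 100 * math.exp(-0.05 * t)

for h in (0.625, 5.0, 40.0):
    approx = euler(f, 100.0, h, 40.0)
    print(f"h={h:6.3f}  euler={approx:8.3f}  exact={exact(40.0):.3f}")
```

With h = 40 a single step gives x = 100 + 40·(−5) = −100: "fast & dead wrong", exactly as the figure shows.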

124
Implicit Euler's Method

Instead of using the value of the derivative at the present point in time t, use the value of the derivative at the future point in time t + h:

  x'(t) = lim_{h→0} (x(t) − x(t − h)) / h

Again, with a finite h we get an approximate equation

  x'(t) ≈ (x(t) − x(t − h)) / h

And thus

  x(t + h) = x(t) + h x'(t + h) ≈ x(t) + h f(t + h, x(t + h))


125
Implicit Euler's Method (2)

As can be seen in

  x(t + h) = x(t) + h x'(t + h) ≈ x(t) + h f(t + h, x(t + h))

the unknown value x(t + h) appears on both sides of the equation, and as an argument of the function f(t, x) on the right-hand side.

This method is known as the Implicit Euler method.

Finding x(t + h) requires solving (possibly numerically) the equation for x(t + h).

However, if an analytic formula for x(t + h) exists, the method is very easy to use.

To solve Implicit Euler you generally need an iterative solver.
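A minimal sketch of one implicit Euler step, using simple fixed-point iteration as the inner solver (Newton's method would be the more robust choice in practice; all names are illustrative):

```python
def implicit_euler_step(f, t, x, h, iters=50, tol=1e-12):
    """Solve x_new = x + h*f(t+h, x_new) by fixed-point iteration."""
    x_new = x + h * f(t, x)              # explicit Euler as the initial guess
    for _ in range(iters):
        x_next = x + h * f(t + h, x_new)
        if abs(x_next - x_new) < tol:
            break
        x_new = x_next
    return x_new

# x' = -0.05 x: for this linear case the analytic formula exists,
# x_new = x / (1 + 0.05 h), so the iteration can be checked against it.
x1 = implicit_euler_step(lambda t, x: -0.05 * x, 0.0, 100.0, 5.0)
```

With x = 100 and h = 5 the closed form gives 100 / 1.25 = 80.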

126
Stability of Implicit Euler

Considering

  f(t, x) = λx

And therefore

  x_{k+1} = x_k + hλ x_{k+1}  ⇒  (1 − hλ) x_{k+1} = x_k  ⇒  x_{k+1} = x_k / (1 − hλ)

If λ < 0, which is the condition for a stable equation, we find that |1 − hλ| > 1, so |x_{k+1}| < |x_k| and x_k → 0 for all values of λ and h.

This method is called unconditionally stable.

The main advantage of an implicit method over an explicit one is clearly the stability.
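The contrast can be demonstrated on the stiff test equation x' = λx with λ = −10 and a step h = 1 that is far too large for the explicit method (a sketch; the 1/(1 − hλ) update is the implicit Euler closed form for this linear case):

```python
lam, h, x0, steps = -10.0, 1.0, 1.0, 10

x_exp = x0
x_imp = x0
for _ in range(steps):
    x_exp = x_exp + h * lam * x_exp   # explicit: multiply by (1 + h*lam) = -9
    x_imp = x_imp / (1 - h * lam)     # implicit: divide by (1 - h*lam) = 11

print(x_exp)   # blows up: (-9)^10 ≈ 3.5e9
print(x_imp)   # decays toward 0, like the true solution e^{lam*t}
```

The explicit iterate oscillates and diverges while the implicit one decays monotonically, despite the identical step size.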


127

It is possible to take larger time steps

without worrying about unphysical behavior

Large time steps

– Can make convergence to the steady state

slower

– But at least there will be no divergence

Drawback – implicit methods are more complicated

– Can involve nonlinear systems of equations that must be solved at every time step

Stability of Implicit Euler (2)

128
Other Methods – Runge-Kutta

A slightly more complicated method for the solution of the generic first order differential equation

  x'(t) = f(t, x(t))

is the Runge-Kutta method

A fundamental theorem of calculus reads

  x(t_j + h) = x(t_j) + ∫_{t_j}^{t_j+h} x'(τ) dτ = x(t_j) + ∫_{t_j}^{t_j+h} f(τ, x(τ)) dτ

An approximation for x(t_j + h) is obtained by approximating the integral on the right-hand side

For numerical integration we apply the midpoint rule

  ∫_a^b g(τ) dτ ≈ (b − a) g((a + b)/2)


129
Other Methods – Runge-Kutta (2)

First, this approximation leads to the following formula

  x(t_j + h) ≈ x(t_j) + h f(t_j + h/2, x(t_j + h/2))

There is a big problem however:
– We do not know the value of x(t_j + h/2) at the midpoint t_j + h/2

We must use an approximation, and we choose Euler's explicit method for this:

  x(t_j + h/2) ≈ x(t_j) + (h/2) f(t_j, x(t_j))

The final form for the new method thus reads

  x(t_j + h) ≈ x(t_j) + h f(t_j + h/2, x(t_j) + (h/2) f(t_j, x(t_j)))

130

Two species system:

– Predators P

– Prey V

Predators are the only factor cutting down

the prey population

The remaining prey population breeds

without limitations

Predators' breeding rate is proportional to

their catch

Classical Predator-Prey Model


131
Classical Predator-Prey Model (2)

The number of predators is limited by their death rate:

  V' = kV − aVP
  P' = bVP − dP

Where:
– k is the birth rate for prey
– a is the catch rate
– b characterizes the efficiency with which predators use their catch to produce more predators
– d is the death rate for predators

Also known as the Lotka–Volterra equations
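A sketch integrating this system with the explicit Euler scheme from earlier (the parameter values are chosen only for illustration, not taken from the slides):

```python
# Lotka-Volterra: V' = k*V - a*V*P, P' = b*V*P - d*P
k, a, b, d = 1.0, 0.1, 0.02, 0.5        # illustrative parameters
h, steps = 0.001, 20000                  # integrate to t = 20

V, P = 30.0, 5.0                         # initial prey / predator populations
for _ in range(steps):
    dV = k * V - a * V * P
    dP = b * V * P - d * P
    V, P = V + h * dV, P + h * dP        # one explicit Euler step

print(V, P)   # populations stay positive and oscillate around (d/b, k/a)
```

With these parameters the non-trivial equilibrium is V = d/b = 25, P = k/a = 10, and the trajectory cycles around it.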

132
Classical Predator-Prey Model (3)

The non-trivial equilibrium of the system:

  V_eq = d/b,  P_eq = k/a

Note the dependence on the parameters at the equilibrium

The lines V = V_eq and P = P_eq divide the first quadrant into four parts with different signs for the derivatives V' and P'


133

Predator-Prey Model Examples

[Figures: one solution shows a problem at the lower limit of the prey population; the other is in a state of dynamic equilibrium – inside the ellipses]

134

Systems of Differential Equations

Systems of differential equations describe the dynamics of complicated models

There are usually two or more state variables x_j(t) whose time development should be studied
– For a specified time interval
– For an arbitrarily long period of time

Some systems contain both algebraic and differential equations (DAEs)

Two-dimensional models are easy to analyze
– It is often possible to draw pictures of what is going on


135
Systems of Differential Equations (2)

Generic form of a system of differential equations:

  x_1'(t) = f_1(t, x_1(t), x_2(t), …, x_n(t))
  x_2'(t) = f_2(t, x_1(t), x_2(t), …, x_n(t))
  ...
  x_n'(t) = f_n(t, x_1(t), x_2(t), …, x_n(t))

If none of the functions f_j depend explicitly on t, the system is autonomous

If every function f_j is linear, the system is said to be linear

136

In order to have a unique solution

– Initial and/or boundary conditions are needed

Initial conditions are given at the moment t =0

Sometimes conditions are also given at the

end point of the time interval

It is also possible that some of the conditions are specified at the starting point and the rest at the end

Systems of Differential Equations (3)


137

Systems with two state variables can exhibit equilibria and limit cycles

Equilibrium behavior:

– Stable: starting near an equilibrium, will keep you near that

equilibrium

– Asymptotically stable: starting near an equilibrium, will

drift you closer and closer to the equilibrium

– Unstable: starting exactly at the equilibrium, will keep you

there. Any perturbation, however small, will drive you away

– Saddle point: depending on the direction w.r.t. the

equilibrium you can either drift towards it or away from it

General Model Behavior

138

Limit cycles:

– Neutral: small perturbations will move you to

another cycle

– Stable: the effect of small perturbations will

gradually disappear and the system drifts back to

the original cycle

– Unstable: small perturbations drive the system

away from the cycle

Possibly only numerical results

Usually the long-term behavior is what matters

General Model Behavior (2)


139

Table of Contents

Motivation & Trends in HPC

R&D Projects @ PP

Mathematical Modeling

Numerical Methods used in HPSC

– Systems of Differential Equations: ODEs & PDEs

– Automatic Differentiation

– Solving Optimization Problems

– Solving Nonlinear Equations

– Basic Linear Algebra, Eigenvalues and Eigenvectors

– Chaotic systems

HPSC Program Development/Enhancement: from Prototype to Production

Debugging, Profiling, Performance Analysis & Optimization

140

Automatic Differentiation – Motivation

Derivative information required

• Design optimization

• Sensitivity analysis & Parameter identification

• Data assimilation problems

• Inverse problems (data assimilation)

• Solving ODE, PDE, DAE, …

• Linear approximation & Curve fitting

Numeric Differentiation

Symbolic Differentiation

Automatic Differentiation


141
Automatic Differentiation (AD)

Semantic transformation:
– Given a computer code F, AD generates a new program F' which applies the chain rule of differential calculus to elementary operations for which the derivatives are known

[Diagram: Computer Program F → AD → Augmented Computer Program F']

142

AD Advantages

Generates truncation- and cancellation

error-free derivatives

Generates a program for the computation

of derivative values, not of derivative

formulae

The associativity of the chain rule allows for

a wide range of choices in accumulating

derivatives → forward & reverse modes

and generalizations


143
The AD Process

[Diagram: Program Code and Control Files go into the AD Tool, which produces Code with Derivatives; this is compiled & linked with the user's derivative driver and support libraries, e.g.:
• SparsLinC
• ADIntrinsics
to obtain the Derivative Program.
After initial setup, usually only the input code changes.]

144
AD-Tool Implementations

Source Transformation (ST)
– Compiler-based technique → generate new code that explicitly computes derivatives
– Advantages: entire context known at compile-time → efficient code, transparency
– Drawback: difficult implementation

Operator Overloading (OO)
– Extends elementary operations & functions to also implicitly compute derivatives
– Requires: redeclaration of active variables to the new overloaded types
– Advantages: easy implementation & "same" source code
– Drawback: granularity → inefficient code


145
Some AD Tools
http://www.autodiff.org

Fortran 77, some Fortran 90:
– ADIFOR 3.0/ADJIFOR (RM, ∇f, ∇²f): Alan Carle & Mike Fagan (Rice)
– Tapenade (RM, ∇f): Laurent Hascoet (INRIA)
– TAMC/TAF (RM, ∇f): Ralf Giering (FastOpt GmbH)

ANSI-C/C++:
– ADOL-C (FM/RM, ∇^k f): Andreas Griewank & Co. (TU Dresden)
– ADIC (FM/RM, ∇^k f): Hovland (ANL)

Application Specific:
– Cosy-Infinity (Remainder Differential Algebra): Martin Berz (U Michigan)
– TOMLAB/MAD (AD of Matlab): Shaun Forth (RMCS Shrivenham)
– ADiMat (AD of Matlab): Andre Vehreschild (RWTH Aachen)
– OpenAD/F (AD Framework): J. Utke & U. Naumann (ANL & RWTH Aachen)

146
Forward Mode – FM

Example code f: w = x * sin(y + z)

[Dataflow graph: y, z → (+) → u → (sin) → v; v, x → (*) → w]

Each variable t is associated with a derivative object t'. Original code and derivative statements:

  u = y + z      u' = y' + z'
  v = sin(u)     v' = cos(u) * u'
  w = v * x      w' = v' * x + v * x'

If (x', y', z') = (1, 0, 0) → w' computes ∂w/∂x
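The statement-level rules above can be mimicked with a tiny dual-number class (a sketch, not any particular AD tool's API; class and function names are illustrative):

```python
import math

class Dual:
    """A value paired with its derivative; operators apply the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        return Dual(self.val + o.val, self.dot + o.dot)
    def __mul__(self, o):
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)

def dsin(a):
    return Dual(math.sin(a.val), math.cos(a.val) * a.dot)

# w = x * sin(y + z), seeded with (x', y', z') = (1, 0, 0)
x, y, z = Dual(2.0, 1.0), Dual(0.5), Dual(0.25)
w = x * dsin(y + z)
print(w.dot)   # ∂w/∂x = sin(y + z) = sin(0.75)
```

Seeding y or z with derivative 1 instead would propagate ∂w/∂y or ∂w/∂z through the same three statements.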


147
Forward Mode Facts

Computes (∂f/∂x)·S (S is called the "seed matrix" – e.g., [x', y', z']) by propagating sensitivities of intermediate values with respect to input values

For p input values of interest, runtime and memory scale approximately with p. May be much less, e.g., for sparse Jacobians

FM is appropriate for moderate p's

148
Reverse Mode – RM

The same example code f: w = x * sin(y + z)

Each variable t is associated with an adjoint object t̄; the differentiated expressions are accumulated in reverse order.

Original code and adjoint statements (reverse order):

  u = y + z       w̄ = ∂w/∂w = 1
  v = sin(u)      x̄ += w̄ * v
  w = v * x       v̄ += w̄ * x
                  ū += v̄ * cos(u)
                  z̄ += ū
                  ȳ += ū

If w̄ = 1 → (x̄, ȳ, z̄) contains ∇w = (∂w/∂x, ∂w/∂y, ∂w/∂z)

In general, for each statement v_k = f_k(…, v_j, …), the adjoint update is

  v̄_j += v̄_k · ∂f_k/∂v_j


149
Reverse Mode Facts

Computes Wᵀ·(∂f/∂x) by propagating sensitivities of output values with respect to intermediate values

For q output values of interest, the runtime scales with q. Memory requirements are harder to predict → they greatly depend on the structure & implementation of the program/AD-tool

RM is great for computing "long gradients" – small q's and big p's

150
Forward & Reverse Mode Example

Given the function f(x, y, z) = (xy + cos z)(x² + 2y² + 3z²), the partial derivatives are

  ∂f/∂x = y(x² + 2y² + 3z²) + (xy + cos z)·2x = 3x²y + 2y³ + 3yz² + 2x cos z,

  ∂f/∂y = x(x² + 2y² + 3z²) + (xy + cos z)·4y = x³ + 6xy² + 3xz² + 4y cos z,

  ∂f/∂z = −sin z (x² + 2y² + 3z²) + (xy + cos z)·6z
        = −x² sin z − 2y² sin z − 3z² sin z + 6xyz + 6z cos z.

  ∇f = [∂f/∂x, ∂f/∂y, ∂f/∂z]ᵀ

©LPT@RWTH Aachen


151
Forward Mode

Code list:

  u₁ = x, u₂ = y, u₃ = z,
  u₄ = u₁u₂,
  u₅ = cos u₃,
  u₆ = u₄ + u₅,
  u₇ = u₁²,
  u₈ = 2u₂²,
  u₉ = 3u₃²,
  u₁₀ = u₇ + u₈ + u₉,
  u₁₁ = u₆u₁₀.

Forward derivative propagation (gradients w.r.t. [x, y, z]):

  ∇u₁ = [1, 0, 0],
  ∇u₂ = [0, 1, 0],
  ∇u₃ = [0, 0, 1],
  ∇u₄ = u₁∇u₂ + u₂∇u₁ = [u₂, u₁, 0],
  ∇u₅ = (−sin u₃)∇u₃ = [0, 0, −sin u₃],
  ∇u₆ = ∇u₄ + ∇u₅ = [u₂, u₁, −sin u₃],
  ∇u₇ = 2u₁∇u₁ = [2u₁, 0, 0],
  ∇u₈ = 4u₂∇u₂ = [0, 4u₂, 0],
  ∇u₉ = 6u₃∇u₃ = [0, 0, 6u₃],
  ∇u₁₀ = ∇u₇ + ∇u₈ + ∇u₉ = [2u₁, 4u₂, 6u₃],
  ∇u₁₁ = u₁₀∇u₆ + u₆∇u₁₀ = [2u₁u₆ + u₂u₁₀, 4u₂u₆ + u₁u₁₀, 6u₃u₆ − u₁₀ sin u₃].

Gradient entries:

  ∇f(x, y, z) = ∇u₁₁ = [3x²y + 2x cos z + 2y³ + 3yz²,
                        6xy² + 4y cos z + x³ + 3xz²,
                        6xyz + 6z cos z − x² sin z − 2y² sin z − 3z² sin z].
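The code list and its forward propagation translate line for line into executable form (a sketch; gradients are plain 3-tuples and the helper name is illustrative):

```python
import math

def grad_f_forward(x, y, z):
    """Forward-mode evaluation of the code list for f = (xy + cos z)(x^2 + 2y^2 + 3z^2)."""
    axpy = lambda a, g, b, h: tuple(a * gi + b * hi for gi, hi in zip(g, h))
    u1, g1 = x, (1.0, 0.0, 0.0)
    u2, g2 = y, (0.0, 1.0, 0.0)
    u3, g3 = z, (0.0, 0.0, 1.0)
    u4, g4 = u1 * u2, axpy(u1, g2, u2, g1)                       # product rule
    u5, g5 = math.cos(u3), tuple(-math.sin(u3) * gi for gi in g3)
    u6, g6 = u4 + u5, tuple(p + q for p, q in zip(g4, g5))
    u7, g7 = u1 * u1, tuple(2 * u1 * gi for gi in g1)
    u8, g8 = 2 * u2 * u2, tuple(4 * u2 * gi for gi in g2)
    u9, g9 = 3 * u3 * u3, tuple(6 * u3 * gi for gi in g3)
    u10 = u7 + u8 + u9
    g10 = tuple(p + q + r for p, q, r in zip(g7, g8, g9))
    u11, g11 = u6 * u10, axpy(u6, g10, u10, g6)
    return u11, g11

val, g = grad_f_forward(1.0, 1.0, 0.0)
print(val, g)   # 6.0 (7.0, 11.0, 0.0)
```

At (1, 1, 0): f = (1 + 1)(1 + 2) = 6 and ∇f = (7, 11, 0), matching the analytic formulas above.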

152
Reverse Mode

Code list (same as before):

  u₁ = x, u₂ = y, u₃ = z,
  u₄ = u₁u₂, u₅ = cos u₃, u₆ = u₄ + u₅,
  u₇ = u₁², u₈ = 2u₂², u₉ = 3u₃²,
  u₁₀ = u₇ + u₈ + u₉, u₁₁ = u₆u₁₀.

Adjoints (accumulated in reverse order):

  ∂u₁₁/∂u₁₁ = 1,
  ∂u₁₁/∂u₁₀ = u₆,  ∂u₁₁/∂u₆ = u₁₀,
  ∂u₁₁/∂u₉ = (∂u₁₁/∂u₁₀)(∂u₁₀/∂u₉) = u₆,  likewise ∂u₁₁/∂u₈ = u₆ and ∂u₁₁/∂u₇ = u₆,
  ∂u₁₁/∂u₅ = (∂u₁₁/∂u₆)(∂u₆/∂u₅) = u₁₀,  likewise ∂u₁₁/∂u₄ = u₁₀,
  ∂u₁₁/∂u₃ = (∂u₁₁/∂u₉)·6u₃ + (∂u₁₁/∂u₅)·(−sin u₃) = 6u₃u₆ − u₁₀ sin u₃,
  ∂u₁₁/∂u₂ = (∂u₁₁/∂u₄)·u₁ + (∂u₁₁/∂u₈)·4u₂ = u₁u₁₀ + 4u₂u₆,
  ∂u₁₁/∂u₁ = (∂u₁₁/∂u₄)·u₂ + (∂u₁₁/∂u₇)·2u₁ = u₂u₁₀ + 2u₁u₆.

Gradient entries:

  ∇f(x, y, z) = [∂u₁₁/∂u₁, ∂u₁₁/∂u₂, ∂u₁₁/∂u₃]
              = [3x²y + 2x cos z + 2y³ + 3yz²,
                 6xy² + 4y cos z + x³ + 3xz²,
                 6xyz + 6z cos z − x² sin z − 2y² sin z − 3z² sin z].


153
Divided Differences

First order differentiation:

  Forward differentiation:  ∂f/∂x_m = (f(x₁,…,x_m + h,…,x_n) − f(x₁,…,x_m,…,x_n)) / h + O(h)

  Backward differentiation: ∂f/∂x_m = (f(x₁,…,x_m,…,x_n) − f(x₁,…,x_m − h,…,x_n)) / h + O(h)

  Centered differentiation: ∂f/∂x_m = (f(x₁,…,x_m + h,…,x_n) − f(x₁,…,x_m − h,…,x_n)) / (2h) + O(h²)

Second order differentiation:

  ∂²f/∂x_m² = (f(x₁,…,x_m + h,…,x_n) − 2 f(x₁,…,x_m,…,x_n) + f(x₁,…,x_m − h,…,x_n)) / h² + O(h²)
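For comparison with the AD results, the three first-order formulas can be checked on the running example f = (xy + cos z)(x² + 2y² + 3z²); a sketch with illustrative names:

```python
import math

f = lambda x, y, z: (x * y + math.cos(z)) * (x**2 + 2 * y**2 + 3 * z**2)

def d_dx(f, x, y, z, h=1e-6):
    forward  = (f(x + h, y, z) - f(x, y, z)) / h            # O(h)
    backward = (f(x, y, z) - f(x - h, y, z)) / h            # O(h)
    centered = (f(x + h, y, z) - f(x - h, y, z)) / (2 * h)  # O(h^2)
    return forward, backward, centered

# analytic value from the AD example, for reference
exact = lambda x, y, z: 3 * x**2 * y + 2 * y**3 + 3 * y * z**2 + 2 * x * math.cos(z)
print(d_dx(f, 1.0, 1.0, 0.0), exact(1.0, 1.0, 0.0))
```

Unlike AD, all three approximations carry truncation error (and cancellation error as h shrinks); the centered one is visibly more accurate at the same h.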

154

Which mode to use?

Use forward mode when

– # independents is very small

– Only a directional derivative Jv is needed

– Reverse mode is not tractable

Use reverse mode when

– # dependents is very small

– Only JTv is needed


155

Case Study – Matrix Coloring

Jacobian matrices are often sparse

The forward mode of AD computes J × S, where S

is usually an identity matrix or a vector

One can "compress" the Jacobian by choosing S such that structurally orthogonal columns are combined

A set of columns is structurally orthogonal if no two of them have nonzeros in the same row

Equivalent problem: color the graph whose adjacency matrix is JᵀJ

Equivalent problem: distance-2 color the bipartite graph of J

©Paul Hovland @ANL
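A toy illustration of the compression idea (an assumed example, not from the slides): in the 4×4 pattern below, columns {0, 2} are structurally orthogonal, as are {1, 3}, so two seed vectors recover the whole Jacobian.

```python
# A 4x4 Jacobian whose sparsity pattern makes columns {0,2} and {1,3}
# structurally orthogonal (no two share a nonzero row). Values illustrative.
J = [
    [1.0, 2.0, 0.0, 0.0],
    [0.0, 3.0, 4.0, 0.0],
    [0.0, 0.0, 5.0, 6.0],
    [7.0, 0.0, 0.0, 8.0],
]
pattern = {0: [0, 3], 1: [0, 1], 2: [1, 2], 3: [2, 3]}  # rows of each column's nonzeros

matvec = lambda A, v: [sum(a * x for a, x in zip(row, v)) for row in A]

# Two seed vectors (one per color) instead of four identity columns.
seeds = [((0, 2), [1.0, 0.0, 1.0, 0.0]), ((1, 3), [0.0, 1.0, 0.0, 1.0])]

recovered = [[0.0] * 4 for _ in range(4)]
for group, s in seeds:
    col = matvec(J, s)            # stands in for one forward-mode product J*s
    for j in group:               # scatter entries back via the known pattern
        for i in pattern[j]:
            recovered[i][j] = col[i]

print(recovered == J)   # True: full Jacobian from 2 products instead of 4
```

For an n×n banded Jacobian the number of products stays at the number of colors while n grows, which is the point of the coloring formulation.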

156

Compressed Jacobian


157

What is feasible & practical

Key point:

– Forward mode computes JS at cost proportional to number of

columns in S

– Reverse mode computes JTW at cost proportional to number of

columns in W

Jacobians of functions with
– Small number (1–1000) of independent variables (FM)
– Small number (1–100) of dependent variables (RM)

Extremely large, but (very) sparse Jacobians and Hessians

Jacobian-vector products (forward mode)

Transposed-Jacobian-vector products (adjoint mode)

Hessian-vector products (forward + adjoint modes)

158
Scenarios

[Pipeline: input of dimension n → A → p → B → k → C → m]

n small: use FM on the full computation
m small: use RM on the full computation
m & n large, p small: use RM on A, FM on B & C
m & n large, k small: use RM on A & B, FM on C
n, p, k, m large, Jacobians of A, B, C sparse: compressed FM
n, p, k, m large, Jacobians of A, B, C low rank: scarce FM
n, p, k, m large, Jacobians of A, B, C dense: what to do?


159

Issues with Black Box Differentiation

Source code may not be available or may be difficult

to work with

Simulation may not be (chain rule) differentiable

– Feedback due to adaptive algorithms

– Non-differentiable functions

– Noisy functions

– Convergence rates

– Etc.

Accurate derivatives may not be needed – FD might

be cheaper

Differentiation and discretization do not commute

160

Application highlights

Atmospheric chemistry

Breast cancer biostatistical analysis

CFD: CFL3D, NSC2KE, Fluent 4.52

Chemical kinetics

Climate and weather: MITGCM, MM5, CICE

Semiconductor device simulation

Water reservoir simulation


161
Sensitivity Analysis: Mesoscale Weather Modeling

162

AD Conclusions & Future Work

Automatic differentiation research involves a wide

range of combinatorial problems

AD is a powerful tool for scientific computing

Modern automatic differentiation tools are robust and

produce efficient code for complex simulation codes

– Requires an industrial-strength compiler infrastructure

– Efficiency requires sophisticated compiler analysis

Effective use of automatic differentiation depends on

insight into problem structure

Future Work

– Further develop and test techniques for computing Jacobians

that are effectively sparse or effectively low rank

– Develop techniques to automatically generate complex and

adaptive checkpointing strategies