Epistemic UQ


1

Using Machine Learning for Epistemic Uncertainty Quantification in Combustion and Turbulence Modeling

2

Epistemic UQ

• Use machine learning to learn the error between the low-fidelity model and the high-fidelity model
– Want to use it as a correction and as an estimate of error

• Working on two aspects
– Approximate the true source term (in the progress-variable equation) given a RANS+FPVA solution
– Approximate the true Reynolds stress anisotropy given an eddy-viscosity-based RANS solution
• Preliminary work
– We will show a way it could be done, not how it should be done

3

Basic Idea

• We can compare low-fidelity results to high-fidelity results and learn an error model
– The model answers: “What is the true value, given the low-fidelity result?”
• If the error model is stochastic (and correct), draws from that model give us estimates of uncertainty
• To make model fitting tractable, we decouple the problem (see the sketch below)
– Model of local uncertainty based on flow features
– Model of the coupling of uncertainty on a macro scale
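A minimal sketch of the decoupling on synthetic 1-D data; `local_mean` and `local_std` are hypothetical stand-ins for whatever fitted local error model is actually used, and the macroscopic quantity is just an illustrative average:

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-fidelity solution values at a set of points (synthetic stand-in).
x_lofi = np.linspace(-1.0, 1.0, 50)

# Hypothetical fitted local error model: mean and std of the true value
# conditioned on the low-fidelity feature at each point.
def local_mean(x):
    return x + 0.1 * np.sin(3.0 * x)   # corrected prediction

def local_std(x):
    return 0.05 + 0.05 * np.abs(x)     # local uncertainty

# Draws from the stochastic error model: each row is one plausible "true"
# field, which can then be pushed through any macro-scale quantity of interest.
draws = local_mean(x_lofi) + local_std(x_lofi) * rng.standard_normal((100, x_lofi.size))
qoi = draws.mean(axis=1)               # illustrative macroscopic quantity
print(qoi.mean(), qoi.std())           # estimate and error bar
```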

4

Local Model

[Figure: Fake High Fidelity Feature vs. Fake Low Fidelity Feature]

5

Model Generation Outline

• Get a training set which consists of low-fidelity solutions alongside the high-fidelity results

• Choose a set of features in the high-fidelity data to be learned (y)
• Choose a set of features in the low-fidelity data which are good representations of the error (x)
• Learn a model for the true output given the input flow features (see the sketch below)
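A minimal sketch of this outline, assuming the paired low- and high-fidelity solutions have already been interpolated to common points and saved as arrays; the file names and the nearest-neighbour learner are placeholders, not the method used in the slides:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor  # placeholder learner

# Training set: low-fidelity flow features x (n_points, n_x) paired with
# high-fidelity target features y (n_points, n_y) at the same locations.
X = np.load("lofi_features.npy")   # hypothetical file names
Y = np.load("hifi_targets.npy")

# Learn a model for the true (high-fidelity) output given the low-fidelity
# flow features; any regressor that yields a predictive distribution could
# be substituted here.
model = KNeighborsRegressor(n_neighbors=20, weights="distance")
model.fit(X, Y)

# Predict the high-fidelity features for new low-fidelity query points.
Y_pred = model.predict(X[:5])
```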

6

Example

• In the RANS/DNS case, we are interested in the RANS turbulence-model errors
• Input of the model is the RANS location in the barycentric map, plus the marker, wall distance, and p/ε (5-dimensional)
• Output of the model is the DNS location in the barycentric map (2-dimensional); a sketch of assembling these arrays follows
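A minimal sketch of assembling this input/output pair; the field names and the placeholder random data are assumptions standing in for the actual RANS and DNS fields evaluated at common points:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # number of mesh points (placeholder random data below)

# Hypothetical per-point fields; in practice these come from the RANS
# solution and the matching DNS data at the same locations.
rans = {"bary_x": rng.random(n), "bary_y": rng.random(n),
        "marker": rng.random(n), "wall_distance": rng.random(n),
        "p_over_eps": rng.random(n)}
dns = {"bary_x": rng.random(n), "bary_y": rng.random(n)}

# 5-dimensional input: RANS barycentric-map location plus extra flow features.
X = np.column_stack([rans["bary_x"], rans["bary_y"], rans["marker"],
                     rans["wall_distance"], rans["p_over_eps"]])

# 2-dimensional output: DNS location in the barycentric map at the same points.
Y = np.column_stack([dns["bary_x"], dns["bary_y"]])
```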

7

Local Model

[Figure: Fake High Fidelity Feature vs. Fake Low Fidelity Feature]

8

Sinker

• For a test location, each point in the training set is given a weight set by a kernel function

• Then, using the true result at the training points and the weights, compute a probability distribution over the true result (see the sketch below)
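A minimal sketch of this kind of kernel-weighted estimate; the Gaussian kernel, the fixed bandwidth, and the Gaussian summary (weighted mean and covariance) are assumptions, not necessarily what the Sinker scheme actually does:

```python
import numpy as np

def kernel_weighted_distribution(x_test, X_train, Y_train, bandwidth=0.1):
    """Weighted mean/covariance of the true result near a test location."""
    # Gaussian kernel weight for every training point.
    d2 = np.sum((X_train - x_test) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth**2)
    w /= w.sum()

    # Weighted mean and covariance of the high-fidelity targets.
    mean = w @ Y_train
    resid = Y_train - mean
    cov = (w[:, None] * resid).T @ resid
    return mean, cov

# Synthetic example: 1-D low-fidelity feature, 2-D high-fidelity target.
rng = np.random.default_rng(0)
X_train = rng.random((500, 1))
Y_train = np.column_stack([np.sin(4 * X_train[:, 0]), X_train[:, 0] ** 2])
Y_train += 0.05 * rng.standard_normal(Y_train.shape)
mean, cov = kernel_weighted_distribution(np.array([0.3]), X_train, Y_train)
```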

9

Example Problem


10

30 Samples


11

100 Samples


12

300 Samples


13

1000 Samples


14

10000 Samples


15

Combustion Modeling

• A DNS finite-rate-chemistry dataset is the high-fidelity model; a RANS flamelet (FPVA) model is the low-fidelity model
• Input flow features are the flamelet-table variables (mixture fraction, mixture fraction variance, progress variable)
• Output flow variable is the source term in the progress-variable equation
• Use a GP as the spatial fit (see the sketch below)
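A minimal sketch of such a GP fit using scikit-learn; the synthetic data, kernel choice, and hyperparameters are assumptions standing in for the actual DNS source-term data and the GP used in the slides:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical training data: flamelet-table inputs (Z, Zvar, C) from the
# low-fidelity solution and the corresponding DNS progress-variable source term.
X = rng.random((200, 3))
omega_dns = np.sin(6.0 * X[:, 0]) * X[:, 2] + 0.02 * rng.standard_normal(200)

# GP fit of the true source term given the table variables; the predictive
# standard deviation doubles as a local uncertainty estimate.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(X, omega_dns)
mean, std = gp.predict(X[:5], return_std=True)
```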

16

‘Truth’ Model

Dataset used: snapshots of temporal mixing-layer data from Amirreza

17

Trajectory Random Draws

[Figure: source term vs. location; random trajectory draws compared with the FPVA table]

18

Initial condition

19

Results of ML scheme

20

Application to EUQ of RANS

21

Input Data

• Add in marker, normalized wall distance, and p/ε as additional flow features, and use Sinker

22

Model Output

23

• Not perfect, but substantially better

24

Generating Errorbars

• Each point also has a variance associated with it (represented as an ellipse for now)
• We can use these uncertainties to generate error bars on macroscopic quantities
• Draw two Gaussian random variables and perturb the barycentric coordinate by that many standard deviations in x and y
• If the point goes off the triangle, project it back onto the triangle
• Gives us a family of new turbulence models (see the sketch below)
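A minimal sketch of this perturb-and-project step; the triangle corner coordinates and the clip-and-renormalize projection are simple assumptions, not necessarily the exact choices made in the slides:

```python
import numpy as np

# Barycentric-map triangle vertices (1C, 2C, 3C limiting states); the exact
# corner coordinates used here are an assumption.
VERTS = np.array([[1.0, 0.0], [0.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])

def to_weights(p):
    """Barycentric weights of point p with respect to VERTS."""
    T = np.column_stack([VERTS[0] - VERTS[2], VERTS[1] - VERTS[2]])
    w01 = np.linalg.solve(T, p - VERTS[2])
    return np.array([w01[0], w01[1], 1.0 - w01.sum()])

def project_to_triangle(p):
    """Clip negative weights and renormalize (one simple projection choice)."""
    w = np.clip(to_weights(p), 0.0, None)
    return (w / w.sum()) @ VERTS

def perturb(point, std_xy, rng):
    """Draw two Gaussians, shift the barycentric coordinate by that many
    standard deviations in x and y, and project back onto the triangle."""
    return project_to_triangle(point + std_xy * rng.standard_normal(2))

rng = np.random.default_rng(0)
new_point = perturb(np.array([0.4, 0.3]), np.array([0.1, 0.1]), rng)
```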

25

Random Draws

26

Random Draws

27

Conclusions

• Promising early results
• Basic idea: learn the 'mean and variance' of the error distribution of modeling terms in the space of FEATURES
• There is a lot of work to be done
– Feature selection
– Better uncertainty modeling (non-Gaussian)
– Kernel selection
• Need to develop a progressive, logical test suite to evaluate the quality of a model
