UQ12 Abstracts 61
IP1
Sparse Tensor Algorithms in Uncertainty Quantification

We survey recent mathematical and computational results on sparse tensor discretizations of Partial Differential Equations (PDEs) with random inputs. Sparse tensor discretizations allow one, as a rule, to overcome the curse of dimensionality in approximating infinite-dimensional problems. Results and methods surveyed include a) regularity and N-term gpc approximation results and algorithms for elliptic and parabolic PDEs with random coefficients [joint work with A. Cohen, R. DeVore and V. H. Hoang] and b) existence and regularity results for solutions of certain classes of hyperbolic PDEs from conservation and balance laws. Multi-Level Monte Carlo (MLMC) [joint with A. Barth] and Multi-Level Quasi-Monte Carlo (MLQMC) [joint with I. Graham, R. Scheichl, F. Y. Kuo and I. M. Sloan] methods can be viewed as particular classes of sparse tensor discretizations. Numerical MLMC and MLQMC results for elliptic PDEs with random coefficients and for nonlinear systems of hyperbolic conservation laws are presented [joint work with S. Mishra and J. Sukys (ETH)]. We compare the performance of these methods with that of adaptive generalized polynomial chaos discretizations.

Christoph Schwab
ETH Zurich
[email protected]
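The multilevel telescoping idea mentioned in the abstract can be sketched in a few lines; here a midpoint-rule quadrature at dyadic resolutions stands in for a hierarchy of PDE discretizations (the function `q_level` is a hypothetical toy, not from the talk):

```python
import numpy as np

def q_level(omega, level):
    """Level-l approximation of Q(omega) = integral_0^1 exp(omega*x) dx,
    computed by the midpoint rule on 2**level cells (a hypothetical
    stand-in for a PDE solve at mesh level l)."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.exp(np.outer(omega, x)).mean(axis=1)

def mlmc(n_per_level, rng):
    """Multilevel Monte Carlo: E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}].
    Each correction uses coupled samples (the same omega on both levels),
    so its variance shrinks and few samples are needed on fine levels."""
    est = 0.0
    for level, n in enumerate(n_per_level):
        omega = rng.normal(size=n)  # random input, shared across the pair
        if level == 0:
            est += q_level(omega, 0).mean()
        else:
            est += (q_level(omega, level) - q_level(omega, level - 1)).mean()
    return est

rng = np.random.default_rng(0)
estimate = mlmc([4000, 1000, 250], rng)  # fewer samples on finer levels
```

The sample allocation decreasing with level is the point of the method: the corrections have small variance, so the fine (expensive) levels need only a handful of samples.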
IP2
Integrating Data from Multiple Simulation Models - Prediction and Tuning

Simulation of complex systems has become commonplace in most areas of science. In some settings, several simulators are available to explore the system, each with varying levels of fidelity. In this talk, Bayesian methodology for integrating outputs from multi-fidelity computer models, and field observations, to build a predictive model of a physical system is presented. The problem is complicated because inputs to the computer models are not all the same and the simulators have inputs (tuning parameters) that must be estimated from data. The methodology is demonstrated on an application from the University of Michigan's Center for Radiative Shock Hydrodynamics.

Derek Bingham
Dept. of Statistics and Actuarial Science
Simon Fraser University
[email protected]
IP3
Polynomial Chaos Approaches to Multiscale and Data Intensive Computations

Prediction of multiscale systems is increasingly being based on complex physical models that depend on large uncertain data sets. In this talk, we will outline recent developments in polynomial chaos (PC) methods for uncertainty quantification in such systems. Implementations will be illustrated in light of applications to chemical kinetics and geophysical flows. Conclusions are in particular drawn concerning the potential of parallel databases in providing a platform for discovery, assessing prediction fidelity, and decision support.

Omar M. Knio
Duke University
[email protected]
IP4
Large Deviation Methods for Quantifying Uncertainty

Large deviation methods have been particularly successful in assessing small probabilities of rare events, which are difficult to compute. They are also at the center of many importance sampling methods that aim to estimate probabilities of rare events by Monte Carlo methods. Since probabilities of rare events are an important part of the emerging field of uncertainty quantification in complex scientific problems, the question arises as to how effective they are. This is because the results of large deviation theory depend sensitively on details of the probabilistic model used, and such details may not be available or may themselves be subject to uncertainty. I will address these issues with some examples using mean-field models and conservation laws and attempt to draw some broader conclusions.

George C. Papanicolaou
Stanford University
Department of Mathematics
[email protected]
IP5
Model Reduction for Uncertainty Quantification of Large-scale Systems

Uncertainty quantification is becoming recognized as an essential aspect of development and use of numerical simulation tools, yet it remains computationally intractable for large-scale complex systems characterized by high-dimensional uncertainty spaces. In such settings, it is essential to generate surrogate models: low-dimensional, efficient models that retain the predictive fidelity of high-resolution simulations. This talk will discuss formulations of projection-based model reduction approaches for applications in uncertainty quantification. For systems governed by partial differential equations, we demonstrate the use of reduced models for uncertainty propagation, solution of statistical inverse problems, and optimization under uncertainty.

Karen E. Willcox
Massachusetts Institute of Technology
[email protected]
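A generic sketch of projection-based reduction of the kind the talk discusses: POD basis extraction from snapshots plus Galerkin projection, on a hypothetical affine-parametric system (none of the specifics are from the talk):

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """POD: left singular vectors capturing a given fraction of energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    return U[:, : int(np.searchsorted(frac, energy)) + 1]

# Hypothetical full-order model: (K0 + mu*I) x = b, K0 a 1-D Laplacian.
n = 50
K0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
full_solve = lambda mu: np.linalg.solve(K0 + mu * np.eye(n), b)

# Offline: snapshots over the parameter range; online: small reduced solve.
snaps = np.column_stack([full_solve(mu) for mu in np.linspace(0.1, 2.0, 10)])
V = pod_basis(snaps)
mu = 0.73
Ar = V.T @ (K0 + mu * np.eye(n)) @ V       # Galerkin-projected operator
x_rom = V @ np.linalg.solve(Ar, V.T @ b)   # reduced-order solution
rel_err = np.linalg.norm(x_rom - full_solve(mu)) / np.linalg.norm(full_solve(mu))
```

The online solve is in the reduced dimension (a handful of unknowns instead of n), which is what makes many-query tasks such as uncertainty propagation and inverse problems affordable.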
IP6
Statistical Approaches to Combining Models and Observations

Numerical models and observational data are critical in modern science and engineering. Since both of these information sources involve uncertainty, the use of statistical, probabilistic methods plays a fundamental role. I discuss a general Bayesian framework for combining uncertain information and indicate how various approaches (ensemble forecasting, UQ, etc.) fit in this framework. A paleoclimate analysis illustrates the use of simple physics and statistical modeling to produce inferences. A second example involves glacial dynamics and illustrates how updating models and data can lead to estimates of model error. A third example involves the extraction of information from multi-model ensembles in climate projection studies.

Mark Berliner
The Ohio State University
[email protected]
CP1
Capturing Signatures of Microcracks from Macrolevel Responses

The current work considers the problem of characterizing microcracks that are typically at the 10-100 μm level (depending on the material). These microcracks are invisible to the naked eye, and can cause catastrophic failure once they grow to a certain critical length. A stochastic upscaling scheme based on previous work by Das is extended in the present work to include the effects of microcracks. The uncertainty (due to microstructural heterogeneities, and the random size, orientation, and distribution of microcracks) is accounted for in the macroscopic (continuum) constitutive elasticity matrices by modeling them as bounded positive-definite random matrices. A distinct difference in the probabilistic characteristics of macrolevel response variables is observed depending on the presence or absence of microcracks. This finding opens up a new way of capturing signatures of microcracks from macrolevel experimental response measurements.

Sonjoy Das, Sourish Chakravarty
University at Buffalo
[email protected], [email protected]
CP1
Identification of Uncertain Time-Dependent Material Behavior with Artificial Neural Networks

Reliability assessment of structures requires knowledge of the structural behavior. In the case of new materials, tests are required to investigate structural behavior. The obtained measurements are usually limited and imprecise, which leads to difficulties in selecting or developing material models (stress-strain-time dependencies) and determining their parameters. In this work, we will present an alternative approach that is based on artificial neural networks. Recurrent neural networks are used to construct relationships between material loads and material responses under uncertainty.

Steffen Freitag, Rafi L. Muhanna
Georgia Institute of Technology
[email protected], [email protected]
CP1
Uncertainty Quantification of Discrete Element Micro-Parameters Conditioned on Sample Preparation of Homogeneous Particle Materials

This lecture analyzes the effect of varying the micro-parameters of a Discrete Element Model (DEM) developed to analyze homogeneous particle materials. An experimental database is populated using steel spheres of constant diameter to build cylindrical specimens under controlled experimental conditions. Specimens vary from the loosest to the densest conditions. A 3D-DEM model is built to reproduce the same experimental sample preparation conditions, and is used to explore all varying micro-parameter combinations that lead to the reference experimental samples. X-Ray Computer Tomography is used to validate the numerical simulations.

Patrick R. Noble, Tam Doung
Texas A&M University
[email protected], [email protected]

Zenon Medina-Cetina
Zachry Department of Civil Engineering
Texas A&M University
[email protected]
CP1
Random Fields for Stochastic Mechanics of Materials

An approach based on a Hill-Mandel condition allows determination of scaling laws and mesoscale random fields with continuous realizations from the microstructural properties described by random fields with discontinuous realizations, typically given by image analyses of just about any microstructure. We discuss one- and two-point statistics of such fields, the (an)isotropy of realizations versus (an)isotropy of correlation functions, as well as their unavoidable scale-dependence, non-uniqueness, and correspondence with specific variational principles of solid mechanics.

Martin Ostoja-Starzewski
Department of Mechanical Science and Engineering
University of Illinois at Urbana-Champaign
[email protected]

Luis Costa
Institute for Multiscale Reactive Modeling
US Army
[email protected]

Emilio Porcu
Institute for Mathematical Stochastics
Georg-August-Universitaet, Goettingen
[email protected]
CP1
Uncertainty Quantification in Image Processing

Based on the polynomial chaos and stochastic finite elements, we construct stochastic extensions of image processing operators such as Perona-Malik smoothing, Mumford-Shah resp. Ambrosio-Tortorelli segmentation, and the level set method. The stochastic extensions allow propagating information about the image noise from the acquisition step to the result, thus equipping the image processing results with reliability estimates. This modeling is important in medical applications where treatment decisions are based on quantitative information extracted from the images.

Torben Patz
Jacobs University Bremen and Fraunhofer MEVIS
[email protected]

Michael Kirby
University of Utah
School of Computing
[email protected]

Tobias Preusser
Fraunhofer MEVIS, Bremen
CP1
Statistics of Mesoscale Conductivities and Resistivities in Polycrystalline Graphite

We consider mesoscale heat conduction in random polycrystalline graphite under uniform essential (Dirichlet) and uniform natural (Neumann) boundary conditions. Using numerical experiments, we obtain rigorous bounds on all six independent components of the mesoscale conductivity (and resistivity) tensor and thereby on the three invariants. The bounds on conductivity obtained using the proposed approach are the tightest when compared to the Voigt, Reuss, and upper and lower Hashin-Shtrikman bounds, and, unlike the other bounds, they incorporate the effects of scaling. The scale-dependent probability densities of the invariants of the mesoscale conductivities (resistivities) are also determined.

Shivakumar I. Ranganathan
Department of Mechanical Engineering
American University of Sharjah
[email protected]

Martin Ostoja-Starzewski
Department of Mechanical Science and Engineering
University of Illinois at Urbana-Champaign
[email protected]
CP2
Model Discrepancy and Uncertainty Quantification

When quantifying uncertainty in computer models it is important to account for all sources of uncertainty. One important source of uncertainty is model discrepancy: the difference between reality and the computer model output. However, the challenge with incorporating model discrepancy in a statistical analysis of computer models is the confounding with calibration parameters. In this talk we explore, through examples, ways to include model discrepancy in a Bayesian setting.

Jenny Brynjarsdottir
Statistical and Applied Mathematical Sciences Institute
[email protected]

Anthony O'Hagan
University of Sheffield, UK
[email protected]
CP2
Model Inadequacy Quantification for Simulations of Structures Using Bayesian Inference

We quantify the level of model inadequacy (a.k.a. model structure uncertainty) in simulations of the eigenfrequencies of a simple metal structure. Starting with a statistical model of the system, including parametric uncertainty, discretization error, as well as a generic representation of model inadequacy, we calibrate against an extensive experimental data set, which includes attempts to identify and estimate sources of measurement noise and bias. A reliable estimate of model inadequacy for this simple situation is obtained.

Richard Dwight
Delft University of Technology

Hester Bijl
Faculty of Aerospace Engineering
Delft University of Technology
[email protected]
CP2
Quantitative Model Validation Techniques: New Insights

This research develops new insights into quantitative model validation techniques. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions. Two types of validation experiments (well characterized and partially characterized) are considered. It is shown that under some specific conditions, the Bayes factor metric in Bayesian equality hypothesis testing and the reliability-based metric can both be mathematically related to the p-value metric in classical hypothesis testing.

You Ling
Department of Civil and Environmental Engineering
Vanderbilt University
[email protected]

Sankaran Mahadevan
Vanderbilt University
[email protected]
CP2
Dynamic Model Validation of a 160 MW Steam Turbine LP Last Stage (L-0) Blade

Laboratory vibration testing of turbine blades has always provided a major contribution to the understanding, detection and the possibility to predict the probable failures encountered under real operating conditions. The importance of testing for large steam turbines, with long slender blades with increased risk of fatal vibrations, is higher due to cost and safety. In this regard, the main focus of this work is to identify the Siemens Co. benchmark blade modal model based on experimental test data, using SDOF and MDOF methods. The accuracy of the circle and line fit methods was compared with a state-space subspace method called N4SID. Furthermore, in order to validate the smaller-size model with the capability to capture the main features of the full model, SVD-based model approximation methods are used to achieve a minimal state-space representation of the original model. Finally, the accuracy and efficiency of a Hankel-norm based reduction and a balanced truncation method are compared and the results are presented.

Sadegh Rahrovani
Chalmers University of Technology
[email protected]
Thomas [email protected]
Sadegh Sarvi
Senior [email protected]
CP2
Quantifying Uncertainty and Accounting for Model Discrepancy in Slider Air Bearing Simulations for Calibration and Prediction

Coupled multiphysics simulation of a slider air bearing is routinely performed during a hard disk design process to achieve various critical performance requirements. Characterizing uncertainty through these simulations is critical to predict variation in outputs of interest and to drive sigma reduction of the main contributing inputs. A non-intrusive sparse stochastic collocation scheme is used to propagate uncertainty here, and its efficiency is compared to the Latin hypercube Monte Carlo sampling approach. Experimental results are used to update unknown functions in the model inputs using a Bayesian formulation and a Markov chain Monte Carlo approach. In another specific application, the discrepancy between the simulation and the experimental results is accounted for by augmenting the model with a discrepancy term and specifying a Gaussian process model for the same. Two examples of applications, in active fly height loss during environmental conditions and operational shock survival, are presented here.

Ajaykumar Rajasekharan, Paul J. Sonda
Mechanical Research & Development
Seagate Technology
[email protected], [email protected]
CP2
Integration of Verification, Validation, and Calibration Activities for Overall Uncertainty Quantification

This talk presents a Bayesian methodology to integrate information from verification, validation, and calibration activities for overall prediction uncertainty quantification. Multi-level systems are considered, with two types of information flow: (1) lower-level output becomes higher-level input, and (2) parameters calibrated with lower-complexity models and experiments are applied to the full system simulation. The various sources of uncertainty and the multiple levels of models and experiments are connected through a Bayes network to quantify the uncertainty in the system-level prediction.

Shankar Sankararaman, Sankaran Mahadevan
Vanderbilt University
[email protected], [email protected]
CP3
Dimension Reduction and Measure Transformation in Stochastic Multiphysics Modeling

We present a computational framework based on stochastic expansion methods for the efficient propagation of uncertainties through multiphysics models. The framework leverages an adaptation of the Karhunen-Loeve decomposition to extract a low-dimensional representation of information passed from component to component in a stochastic coupled model. After a measure transformation, the reduced-dimensional interface thus created enables a more efficient solution in a reduced-dimensional space. We demonstrate the proposed approach on an illustration problem from nuclear engineering.

Maarten Arnst
Universite de Liege
[email protected]

Roger Ghanem
University of Southern California
Aerospace and Mechanical Engineering and Civil Engineering
[email protected]

Eric Phipps
Sandia National Laboratories
Optimization and Uncertainty Quantification Department
[email protected]

John Red-Horse
Validation and Uncertainty Quantification Processes
Sandia National Laboratories
[email protected]
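In discrete form, a Karhunen-Loeve reduction of the kind used at the component interfaces boils down to an eigendecomposition of a covariance matrix; a generic one-dimensional sketch (the exponential covariance and all numbers are chosen for illustration only):

```python
import numpy as np

def kl_modes(x, corr_len, var=1.0, energy=0.95):
    """Discrete KL decomposition of a zero-mean random field with
    covariance C(s,t) = var * exp(-|s-t| / corr_len) on the grid x,
    keeping the leading modes that capture the given energy fraction."""
    C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    lam, phi = np.linalg.eigh(C)
    lam, phi = lam[::-1], phi[:, ::-1]  # sort eigenvalues descending
    r = int(np.searchsorted(np.cumsum(lam) / lam.sum(), energy)) + 1
    return lam[:r], phi[:, :r]

x = np.linspace(0.0, 1.0, 200)
lam, phi = kl_modes(x, corr_len=0.3)
rng = np.random.default_rng(1)
field = phi @ (np.sqrt(lam) * rng.normal(size=lam.size))  # one realization
```

The 200-dimensional field is now parameterized by `lam.size` independent standard normals, which is the dimension reduction exploited when passing information across a stochastic coupled model.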
CP3
Contour Estimation via Two-Fidelity Computer Simulators under Limited Resources

In this work, we study the contour estimation problem for complex systems with two-fidelity simulators. Using a Gaussian process for surrogate construction, a sequential procedure is designed to choose the best-suited simulator and input location for each simulation trial so that the overall estimation of the desired contour can be as good as possible under limited resources. Finally, the methodology is illustrated on a queueing system and its efficiency evaluated via a simulation study.

Ray-Bing Chen
Department of Statistics
National Cheng Kung University
[email protected]

Ying-Chao Hung
Department of Statistics
National Chengchi University
[email protected]

Weichung Wang
National Taiwan University
Department of Mathematics
[email protected]

Sung-Wei Yen
Department of Mathematics
National Taiwan University
[email protected]
CP3
An Uncertainty Quantification Approach to Assess Geometry-Optimization Research Spaces through Karhunen-Loeve Expansion

UQ is exploited to assess geometry-optimization research spaces and support the design of the optimization task. A set of prescribed geometries defines the research space through morphing. The Karhunen-Loeve expansion gives the total geometric variance and the geometric variability of each principal direction. This is used to compare different spaces and reduce their dimension and the computational costs. Finally, UQ methods identify the statistical properties of the relevant objectives. A hydrodynamic application in ship design is presented.

Matteo Diez
CNR-INSEAN, The Italian Ship Model Basin
[email protected]

Daniele Peri
INSEAN - The Italian Ship Model Basin
[email protected]

Frederick Stern
IIHR-Hydroscience and Engineering, The University of Iowa
[email protected]

Emilio Campana
INSEAN - The Italian Ship Model Basin
[email protected]
CP3
Improvement in Multifidelity Modeling Using Gaussian Processes

Using different types of fidelity measurements can result in a large improvement of predictions by surrogate modeling in terms of computing time and precision. O'Hagan proposed an autoregressive Gaussian process to model the relationship between high- and low-accuracy responses. However, we show that in practice the linearity is not always observed. Therefore, in the present work the estimation of the relationship between the cheap and expensive responses by using smoothing splines is studied.

Celine Helbert
Laboratoire Jean Kuntzmann
University of Grenoble
[email protected]

Federico Zertuche
University of Grenoble
[email protected]
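For context, the autoregressive construction referred to above (the high-fidelity response as rho times the low-fidelity response plus a discrepancy) can be caricatured with a minimal zero-mean RBF Gaussian process; the test functions, designs, and hyperparameters below are all invented for the example:

```python
import numpy as np

def gp_predict(X, y, Xs, ell=0.2, jitter=1e-6):
    """Minimal zero-mean GP regression with an RBF kernel."""
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)
    alpha = np.linalg.solve(k(X, X) + jitter * np.eye(X.size), y)
    return k(Xs, X) @ alpha

def mf_predict(X_lo, y_lo, X_hi, y_hi, Xs):
    """Two-fidelity autoregressive sketch: y_hi(x) ~ rho * y_lo(x) + delta(x),
    with rho by least squares and delta a GP on the high-fidelity residuals."""
    lo_at_hi = gp_predict(X_lo, y_lo, X_hi)
    rho = float(lo_at_hi @ y_hi) / float(lo_at_hi @ lo_at_hi)
    delta = gp_predict(X_hi, y_hi - rho * lo_at_hi, Xs)
    return rho * gp_predict(X_lo, y_lo, Xs) + delta

f_hi = lambda x: np.sin(2 * np.pi * x) + x     # "expensive" toy model
f_lo = lambda x: 0.7 * f_hi(x)                 # cheap, linearly related
X_lo = np.linspace(0.0, 1.0, 12)               # many cheap runs
X_hi = np.array([0.0, 0.35, 0.7, 1.0])         # few expensive runs
Xs = np.linspace(0.0, 1.0, 101)
pred = mf_predict(X_lo, f_lo(X_lo), X_hi, f_hi(X_hi), Xs)
```

Here the low-fidelity relation is exactly linear, so four expensive runs recover the expensive response well; the abstract's point is precisely that real cheap/expensive pairs often violate this linearity, motivating the spline-based relationship.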
CP3
A Multiscale Framework for Bayesian Inference in Elliptic Problems

Motivated by applications in hydrology, we present a multiscale framework for efficient sampling of Bayesian posteriors. The method is broadly applicable, but we concentrate on length scales and a conditional independence structure defined by the multiscale finite element method. The problem has two key components: construction of a coarse-scale prior, and an iterative method for conditioning fine-scale realizations on coarse-scale constraints. Theoretical justification and numerical examples show that the framework can strongly decouple scales and lead to more efficient inference.

Matthew Parno, Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected], [email protected]
CP4
Super-Intrusive Polynomial Chaos for an Elliptic PDE for Heat Transfer

We study uncertainty quantification via an elliptic partial differential equation for certain heat transfer problems. We use a polynomial chaos formulation that is super-intrusive: we use the structure of the equations to derive exact closed-form expressions for the mean and standard deviation of a random field (verified via Monte Carlo simulation). We harness the problem-specific stochastic dependence structure in a way that could be useful in a variety of other applications with a similar mathematical structure.

Andrew Barnes
General Electric
Global Research Center
[email protected]

Jose Cordova
General Electric Global Research Center
[email protected]
CP4
A Library for Large Scale Parallel Modeling and Simulation of Spatial Random Fields

Spatial random fields (RFs) model heterogeneous fields in computational simulations, including material properties, boundary/initial conditions, and geometric properties. We present a parallel API/library for modeling spatial RFs using either Karhunen-Loeve (KL) or Polynomial Chaos (PC) expansions. In addition to generating KL/PC realizations, the library includes a preprocessor to solve KL eigenproblems and a posteriori error estimators for the resulting approximate solutions. Finally we describe several applications, including structural mechanics and flow in porous media.

Brian Carnes
Sandia National Laboratories
[email protected]

John Red-Horse
Validation and Uncertainty Quantification Processes
Sandia National Laboratories
[email protected]
CP4
Efficient Evaluation of Integral Quantities in RDO by Sequential Quadrature Formulas

The accuracy of the computation of the integral quantities involved in robust design optimization (RDO) represents a key element for the success of the optimization. If numerical quadrature is adopted, a convergence study on a benchmark UQ problem may not be sufficient, due to the changing characteristics of the objective function through the optimization process. To preserve accuracy, different sequential quadrature formulas are here applied to a practical application in ship design, comparing their efficiency and the associated numerical effort.

Daniele Peri
INSEAN - The Italian Ship Model Basin
[email protected]

Matteo Diez
CNR-INSEAN, The Italian Ship Model Basin
[email protected]
CP4
Application of a Novel Adaptive Finite Difference Solution of the Fokker-Planck Equation for Uncertainty Quantification

An accurate and computationally efficient intrusive approach for UQ is introduced. It utilizes the diffusionless Fokker-Planck equation for the time evolution of the PDF. This equation is solved numerically using a TVD finite difference scheme and a novel method for obtaining the grid distribution. The results for some time-dependent problems will be compared with exact solutions or other computational UQ methodologies to demonstrate the method's ability to accurately and efficiently obtain statistical quantities of interest.

Mani Razi, Peter Attar
School of Aerospace and Mechanical Engineering
University of Oklahoma
[email protected], [email protected]

Prakash Vedula
University of Oklahoma
[email protected]
CP4
Adjoint Error Estimation for Stochastic Collocation Methods

We use stochastic collocation on sparse grids for solving partial differential equations with random parameters. Our aim is to combine the method with an adjoint approach in order to estimate and control the error of some stochastic quantity, such as the mean or variance of a solution functional. Therefore, our major goal is to provide appropriate error estimates that require less computational effort than the collocation procedure itself.

Bettina Schieche
Graduate School of Computational Engineering, TU Darmstadt
Department of Mathematics, TU Darmstadt
[email protected]

Jens Lang
Department of Mathematics
Technische Universitat Darmstadt
[email protected]
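For a single Gaussian parameter, the basic (non-adjoint) collocation step amounts to evaluating the model at Gauss-Hermite nodes and forming weighted moments; a generic sketch with an invented toy functional (the sparse grids and adjoint error estimators of the talk are not reproduced here):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def collocation_moments(model, n_nodes):
    """Non-intrusive stochastic collocation for one N(0,1) parameter:
    evaluate the model at Gauss-Hermite nodes, then form weighted moments."""
    nodes, w = hermegauss(n_nodes)
    w = w / w.sum()  # normalize probabilists' weights to a probability measure
    vals = np.array([model(z) for z in nodes])
    mean = w @ vals
    return mean, w @ (vals - mean) ** 2

# Toy "solution functional" u(z) = exp(z/2), whose exact moments are
# E[u] = exp(1/8) and Var[u] = exp(1/2) - exp(1/4).
mean, var = collocation_moments(lambda z: np.exp(0.5 * z), n_nodes=12)
```

Because each node is an independent deterministic solve, the scheme is non-intrusive; the adjoint machinery of the talk then estimates how much quadrature/grid error these moments carry.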
CP4
Robust Approximation of Stochastic Discontinuities

The robust approximation of discontinuities in stochastic problems is important, since they can lead to high sensitivities and oscillatory interpolations. To that end, we introduce finite volume robustness concepts from computational fluid dynamics into uncertainty quantification. The application of these new uncertainty propagation algorithms to engineering problems shows that they require a minimal number of samples to resolve discontinuities, for example, in transonic airfoil flow simulations with uncertain free stream conditions.

Jeroen Witteveen
Stanford University
Center for Turbulence Research
[email protected]

Gianluca Iaccarino
Stanford University
[email protected]
CP5
Constrained Gaussian Process Modeling

In this paper, we introduce a new framework for incorporating constraints in Gaussian process modeling, including bound, monotonicity and convexity constraints. This new methodology mainly relies on conditional expectations of the truncated multinormal distribution. We propose several approximations based on correlation-free assumptions, numerical integration tools and sampling techniques. From a practical point of view, we illustrate how the accuracy of Gaussian process predictions can be enhanced with such constraint knowledge.

Sebastien Da Veiga
IFP Energies nouvelles
[email protected]

Amandine Marrel
[email protected]
CP5
A Learning Method for the Approximation of Discontinuous Functions in High Dimensions

Tractable uncertainty quantification in high dimensions requires approximating computationally intensive functions with fast and accurate surrogates. Functions exhibiting discontinuities or sharp variations with respect to their inputs cause significant difficulty for traditional approximation methods. We have developed an unstructured and sampling-based method to address such problems. First we locate discontinuities via a kernel SVM classifier and uncertainty sampling. Then we map each resulting smooth region to a regular domain on which to build sparse polynomial approximations utilizing existing theory.

Alex A. Gorodetsky
Massachusetts Institute of Technology
[email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]
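The two-stage idea (classify regions, then approximate per region) can be caricatured in one dimension; here a nearest-neighbour label rule stands in for the kernel SVM classifier of the abstract, and low-order polynomials stand in for the sparse polynomial approximations, with a made-up piecewise-smooth test function:

```python
import numpy as np

def two_region_surrogate(X, y, degree=3):
    """Discontinuity-aware surrogate sketch: split samples into two regions
    by thresholding the outputs, fit one polynomial per smooth region, and
    classify new points by the label of the nearest training sample."""
    labels = (y > 0.5 * (y.min() + y.max())).astype(int)
    fits = {lab: np.polyfit(X[labels == lab], y[labels == lab], degree)
            for lab in (0, 1)}

    def surrogate(xq):
        lab = labels[np.abs(X[:, None] - xq[None, :]).argmin(axis=0)]
        return np.array([np.polyval(fits[l], v) for l, v in zip(lab, xq)])

    return surrogate

# Piecewise-smooth test function with a jump at x = 0.4
f = lambda x: np.where(x < 0.4, np.sin(3.0 * x), 2.0 + 0.5 * x)
X = np.linspace(0.0, 1.0, 41)
surrogate = two_region_surrogate(X, f(X))
```

Fitting each smooth piece separately avoids the Gibbs-type oscillations a single global polynomial would suffer at the jump; localizing the interface accurately is the hard part, which is what the SVM classifier with uncertainty sampling addresses.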
CP5
Improvement of the Accuracy of the Prediction Variance Using Bayesian Kriging: Application to Three Industrial Case Studies

Kriging is often used in computer experiments, in particular because it quantifies the variance of prediction, which plays a major role in many applications (Efficient Global Optimization, Uncertainty Quantification, Sensitivity Analysis, ...). In this talk we present three industrial case studies from different application fields (reservoir engineering, nuclear security, optics) where we show how Bayesian kriging provides an accurate variance of prediction, especially when prior information is available, for example when derived from less accurate but faster simulations.

Celine Helbert
Laboratoire Jean Kuntzmann
University of Grenoble
[email protected]
CP5
Reduced Basis Surrogate Models and Sensitivity Analysis

Computation of Sobol indices requires a large number of runs of the considered model. These runs often take too much time, and one has to approximate the model by a surrogate model which is faster to run. We present a reduced basis method, a way to build efficient surrogate models, coming with a rigorous error bound certifying the model approximation, which can be used to provide an error estimation on the estimated Sobol indices.

Alexandre Janon, Maelle Nodet, Clementine Prieur
LJK / Universite Joseph Fourier Grenoble (France)
INRIA Rhone-Alpes, France
[email protected], [email protected], [email protected]
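The Sobol index computation that the surrogate is meant to accelerate is itself a pick-freeze Monte Carlo loop; a minimal sketch on an invented additive toy model (the reduced basis machinery and its error bounds are not reproduced here):

```python
import numpy as np

def first_order_sobol(f, dim, n, rng):
    """Pick-freeze estimate of first-order Sobol indices S_i for a model
    f with dim independent U(0,1) inputs (Saltelli-style estimator)."""
    A, B = rng.random((n, dim)), rng.random((n, dim))
    fA, fB = f(A), f(B)
    S = np.empty(dim)
    for i in range(dim):
        ABi = B.copy()
        ABi[:, i] = A[:, i]  # freeze coordinate i at its value from A
        S[i] = np.mean(fA * (f(ABi) - fB)) / fA.var()
    return S

# Additive toy model x1 + 2*x2: exact first-order indices are 0.2 and 0.8.
rng = np.random.default_rng(7)
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], dim=2, n=100_000, rng=rng)
```

Each index needs on the order of n extra model runs, which is why replacing the full model by a certified fast surrogate, as the abstract proposes, matters.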
CP5
Shape Invariant Model Approach to Functional Outputs Modelling

To tackle the problem of uncertainty propagation in dynamic simulators, such as multiphase flow transport, we propose a response surface method based on Gaussian processes. To reduce the dimension, a shape invariant model approach is proposed. Multiple runs from the experimental design are modelled by a parametric transformation applied to a template curve. The response surface model is then applied to the reduced set of transformation parameters, from which the functional output is reconstructed.

Ekaterina Sergienko
University of Toulouse and IFP Energies nouvelles
[email protected]

Fabrice Gamboa
Laboratory of Statistics and Probability
University of Toulouse
[email protected]

Daniel Busby
IFP Energies nouvelles
[email protected]
CP5
SVR Regression with Polynomial Kernel and Monotonicity Constraints

Very often, regression models have to respect some additional constraints, like monotonicity constraints. We present here a new method based on Support Vector Regression and a polynomial kernel. As a major feature, this method produces an equation valid in the whole domain of interest, which can be used without any additional work by engineers. Avoiding any discretization of the domain, this method can be applied with up to 10 regression variables.

Francois [email protected]
CP6
Quantification of Uncertainty of Low Fidelity Models and Application to Robust Design

One of the major difficulties concerning uncertainty propagation techniques is to identify and quantify the sources of uncertainty. Predictive uncertainty of models has been identified as the main source of uncertainty in aircraft preliminary design. Moreover, its quantification is made more complex by model interdependencies. The aim of this study is to propose a way to quantify model predictive uncertainty and propagate it through an overall design process to figure out the robustness of its result.

Jessie Birman
Airbus Operation [email protected]
CP6
Path Variability Due to Sensor Geometry

The concept of Path Variability Due to Sensor Geometry (PVDSG) is fundamental to geomatic engineering. The PVDSG is a measure of the variability "built into" a statistic based on the distances from a target to the elements of a sensor array, due to the particular geometry of that array, as the target traverses a given path. This paper documents PVDSG measures as functions of the covariance matrix of the proportional target/sensor separations.

Timothy Hall
PQI Consulting
[email protected]
CP6
Risk-Averse Control of Linear Stochastic Systems with Low Sensitivity: An Output-Feedback Paradigm

The problem of controlling stochastic linear systems with quadratic criteria, including sensitivity variables, is investigated when noisy measurements are available. It is proved that the low-sensitivity control strategy with risk aversion can be realized by the cascade of: i) the conditional-mean estimates of the current states using a Kalman filter, and ii) optimal feedback, which is comprised of a custom set of mathematical statistics associated with the finite-horizon quadratic random cost. In other words, the certainty equivalence principle still holds for this class of optimal statistical control problems.

Khanh D. Pham
The Air Force Research Laboratory
Space Vehicles Directorate
[email protected]
CP6
An Information-Based Sampling Scheme with Ap-plications to Reduced-Order Modeling
This work presents a novel methodology to sample effec-tively relevant points in the parameter space when de-signing computer experiments, or collecting snapshots forreduced-order models from an expensive to evaluate com-puter model. Principles from information geometry areemployed, where the parameter space is interpreted as aRiemannian manifold equipped with a sensitivity relatedmetric. An adaptive Sequential Monte Carlo scheme en-ables an effective sampling of such points, while allowing acontrolled computational cost related to the evaluations of
the computer model. Numerical examples, in the context of reduced-order modeling, are provided.
Raphael Sternfels
School of Civil & Environmental Eng., Cornell University
[email protected]
Christopher J. Earls
Cornell University
[email protected]
CP7
A Method for Solving Stochastic Equations by Reduced Order Models
A method is developed for calculating statistics of the solution U of a stochastic equation depending on an R^d-valued random variable Z. The method is based on two recent developments. The first approximates the mapping Z ↦ U(Z) by a finite family of hyperplanes tangent to U(Z) at points selected by geometrical considerations. The second represents Z by a simple random vector Z̃ delivered by an optimization algorithm minimizing the discrepancy between properties of Z and Z̃. The method uses samples of Z̃ to construct approximate representations for U(Z), so that these representations account for the probability law of Z.
Mircea Grigoriu
School of Civil Engineering
Cornell University
[email protected]
CP7
Effect of Limited Measurement Data on State and Parameter Estimation in Dynamic Systems
This work characterizes the effect of data uncertainty on estimates of the state and parameters of dynamic systems. The sample moments of the state and parameters of a dynamic system based on limited available measurements are treated as random variates that characterize the effects of uncertainty due to limited measurement data. The Hankel matrices associated with each sample moment estimator (which arise in the context of the corresponding Hamburger moment problem) and the covariance matrix (which characterizes statistical dependencies among the sample moment estimators) thus turn out to be random positive-definite matrices. An array of methodologies (Bayes' theorem, random matrix theory, the maximum entropy principle, and polynomial chaos expansion) is employed to estimate the probability density functions of these positive-definite random matrices. Efficient sampling schemes are also presented to sample from these pdfs.
Sonjoy Das, Reza Madankan
University at Buffalo
[email protected], [email protected]
Puneet Singla
Mechanical & Aerospace Engineering
University at Buffalo
[email protected]
CP7
Managing Uncertainty in Complex Dynamic Models
Various fields of science and engineering have developed application-specific formulations of uncertainty quantification with computational approaches. Such efforts in control system analysis employ functional analysis and semidefinite programming, yielding tractable algorithms for quantifying the effect of individual component uncertainty and unknown external influences on the behavior of large, heterogeneous interconnections of components. This talk will review the formalism, its use in practice, and explore the potential connections across other areas to better exploit each other's successes.
Ufuk Topcu
[email protected]
Peter Seiler
U of Minnesota
[email protected]
Michael Frenklach
University of California at Berkeley
[email protected]
Gary Balas
U of Minnesota
[email protected]
Andrew Packard
U of California at Berkeley
[email protected]
CP7
On-Line Parameter Estimation and Control of Systems with Uncertainties
We present a method for parameter estimation and control of dynamical systems with uncertainties in real time. The method combines the polynomial chaos (PC) theory for uncertainty propagation and the Bayesian framework for parameter estimation in a novel way. The recursive nature of the algorithm requires the estimation of the PC coefficients of the parameters from their posterior distribution. We present methods for estimating these coefficients. The problem is motivated by a biological application.
Faidra Stavropoulou
Helmholtz Zentrum Muenchen
[email protected]
Johannes Mueller
TU Muenchen
[email protected]
CP7
Estimation and Verification of Curved Railway Track Geometry Using Monte Carlo Particle Filters
The actual geometry of a railway track differs from the designed one: frequent train traffic introduces track irregularities. The track geometry (level and alignment) is measured based on the principle of three-point chord measurement, called the 10 m chord versine. Reconstructing the true track geometry from the measured data is a kind of inverse problem. In this study a method of estimating
and verifying the true track geometry in the curved section from measured data using Monte Carlo particle filters is described.
Akiyoshi Yoshimura
Tokyo University of Technology
Emeritus Professor
[email protected]
CP8
Investigations on Sensitivity Analysis of Complex Final Repository Models
The complex computational models for assessing the long-term safety of final repositories for radioactive waste typically show a number of particularities that can cause problems when a probabilistic sensitivity analysis is performed, such as skewed output distributions over many orders of magnitude, the possibility of zero output, or non-continuous behavior. Several numerical experiments are presented, demonstrating these problems and some ideas to deal with them. Different correlation-based and variance-based techniques of sensitivity analysis are applied and compared.
Dirk-Alexander Becker
Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS)
[email protected]
CP8
Global Sensitivity Analysis for Disease and Population Models
Variance-based sensitivity measures are used to examine the structure of biological models. To limit expensive model evaluations we use a Gaussian process emulator to estimate sensitivity indices. The use of a stochastic surrogate model allows us to quantify the uncertainty, due to lack of data, in the sensitivity estimates. Our methods are applied to an ODE-based model for the spread of an epidemic on a network and to an agent-based model of disease in a metropolitan region.
Kyle S. Hickmann, James (Mac) Hyman, John E. Ortmann
Tulane University
[email protected], [email protected], [email protected]
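The variance-based indices that such emulator-accelerated studies estimate can be sketched with the classical pick-freeze (Sobol/Saltelli) estimator. This is a hedged illustration only: the Ishigami test function and sample sizes are our choices, and the Gaussian-process emulator of the abstract is replaced by direct model evaluations.

```python
import numpy as np

rng = np.random.default_rng(0)

def ishigami(x, a=7.0, b=0.1):
    # standard sensitivity-analysis test function (our stand-in model)
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent input samples
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.concatenate([fA, fB]).var()     # total output variance

S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "freeze" all inputs except the i-th
    S[i] = np.mean(fB * (ishigami(ABi) - fA)) / var   # first-order Sobol index
```

In the abstract's workflow the calls to `ishigami` would be replaced by cheap emulator predictions, with the emulator's own predictive variance feeding the uncertainty in the estimated indices.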
CP8
A Global Sensitivity Analysis Method for Reliability Based Upon Density Modification
For global sensitivity analysis of numerical models, variance decomposition is widely applied. However, when the quantity of interest is a failure probability, this method requires numerous function evaluations and does not always provide relevant information. We propose a new importance sampling method based upon a modification of the distribution of inputs, controlled by the Kullback-Leibler divergence. The function sample is evaluated only once, resulting in significant time savings.
Paul Lemaitre
EDF R&D
INRIA
[email protected]
Ekaterina Sergienko
University of Toulouse and IFP Energies nouvelles
[email protected]
Fabrice Gamboa
Laboratory of Statistics and Probability
University of Toulouse
[email protected]
Bertrand Iooss
EDF R&D
[email protected]
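The core idea behind importance sampling for a small failure probability can be sketched in a few lines. Hedged toy example: a standard normal input, a hand-picked proposal shifted to the failure threshold; the abstract's method instead selects the modified input density via a Kullback-Leibler criterion.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
t = 4.0                                      # failure threshold for X ~ N(0, 1)
p_true = 0.5 * math.erfc(t / math.sqrt(2))   # exact P(X > t), about 3.2e-5

n = 50_000
y = rng.normal(t, 1.0, n)                    # proposal density centred on the threshold
# likelihood ratio phi(y) / phi(y - t) for the shifted Gaussian proposal
w = np.exp(-t * y + 0.5 * t**2)
p_is = np.mean((y > t) * w)                  # importance-sampling estimate
```

Crude Monte Carlo with the same budget would see only a handful of failures; the reweighted estimator has a relative error of about one percent here, which is the kind of gain that makes failure-probability sensitivity studies affordable.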
CP8
FAST and RBD Revisited
This is a new introduction to the existing ANOVA-based global sensitivity analysis methods: the Fourier Amplitude Sensitivity Test (FAST) and Random Balance Design (RBD). First, this work clarifies the underlying mathematical theory; it then performs a rigorous error analysis and finally suggests some improvements and generalizations of both methods. Numerical examples are also provided to illustrate this work.
Jean-Yves Tissot
University of Grenoble
[email protected]
Clementine Prieur
LJK / Universite Joseph Fourier Grenoble (France)
INRIA Rhone Alpes, France
[email protected]
CP8
Locality Aware Uncertainty Quantification and Sensitivity Algorithms
A need exists for UQ and sensitivity analysis algorithms that leverage information from one system realization to another. Although the size of the computational models used in many engineering and scientific simulations is extremely large, i.e. millions of equations, the uncertainty present is oftentimes very localized to small regions of the model. Here, recent efforts to develop computational methods that exploit this localization and the ensemble nature of the requisite iterative analyses will be discussed.
Steven F. WojtkiewiczDepartment of Civil EngineeringUniversity of [email protected]
Erik A. JohnsonAstani Dept of Civil & Env EngrgUniversity of Southern [email protected]
CP9
Emulating Dynamic Models - Numerical Challenge
Dynamic models can be emulated by linearizing their equations and compensating for the non-linearities with Gaussian white noise. Coupling replicas of the resulting linear stochastic system according to the distance between their inputs, and conditioning the coupled system on off-line simulation outputs, results in a mechanistic emulator. For piece-wise constant inputs, the covariance matrix of
the coupled system can be derived explicitly. Conditioning of the emulator boils down to an (off-line) inversion of this covariance matrix, and the emulation to mere (on-line) multiplications of matrices of the dimension of the state space. This approach is compared with Kalman filtering and smoothing.
Carlo Albert
[email protected]
CP9
Multi-Modal Oil Reservoir History Matching Using Clustered Ensemble Kalman Filter
A multi-modal history-matching algorithm using a Regularized Ensemble Kalman Filter was developed. The EnKF algorithm is augmented with the k-means clustering algorithm to form sub-ensembles that explore different parts of the search space. Clusters are updated at regular intervals to merge clusters approaching the same local minimum. The resulting mean of each sub-ensemble is used for robust production optimization. The proposed algorithm provides an efficient method for parameter estimation, uncertainty quantification and robust optimization of oil production.
Ahmed H. ElSheikh
Department of Earth Science and Engineering
Imperial College London, South Kensington Campus
[email protected]
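The cluster-then-update idea can be sketched on a scalar toy problem. This is a hedged illustration, not the authors' reservoir workflow: a bimodal prior ensemble is split by a minimal k-means, and the standard perturbed-observation EnKF analysis step is applied to each sub-ensemble separately so that both modes survive the update.

```python
import numpy as np

rng = np.random.default_rng(2)

# bimodal prior ensemble of a scalar parameter (two candidate geologies)
ens = np.concatenate([rng.normal(-2, 0.3, 50), rng.normal(2, 0.3, 50)])

def kmeans_1d(x, k=2, iters=10):
    c = np.array([x.min(), x.max()])              # deterministic, well-separated init
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        c = np.array([x[lab == j].mean() for j in range(k)])
    return np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)

def enkf_update(x, y_obs, r=0.25):
    h = x**2                                      # toy observation operator, fits both modes
    gain = np.cov(x, h)[0, 1] / (np.cov(h) + r)   # Kalman gain from sample covariances
    return x + gain * (y_obs + rng.normal(0, np.sqrt(r), x.size) - h)

labels = kmeans_1d(ens)
post = np.concatenate([enkf_update(ens[labels == j], 4.0) for j in range(2)])
```

A single global EnKF update on this ensemble would collapse the two modes toward their (meaningless) joint mean; updating each cluster separately keeps both plausible parameter branches for the subsequent robust optimization.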
CP9
A Spectral Approach to Linear Bayesian Updating
The linear Bayesian methods of the family of low-rank Kalman filters have become popular for the solution of inverse and data assimilation problems, one example being the ensemble Kalman filter. In this work we present a related approach which is based on well-known spectral expansions of the stochastic space, like the polynomial chaos expansion. The connection of this 'Bayes linear' approach with the original Bayes formula is highlighted, and it is demonstrated that spectral expansions can be used directly in the update equation without any detour to sampling. Numerical examples show the advantages and challenges of this approach. Examples will be given involving the well-known Lorenz systems and the estimation of a diffusion coefficient.
Oliver Pajonk
Institute of Scientific Computing
Technische Universitat Braunschweig
[email protected]
Bojana V. Rosic
Institute of Scientific Computing
TU Braunschweig
[email protected]
Alexander Litvinenko
TU Braunschweig, Germany
[email protected]
Hermann G. Matthies
Institute of Scientific Computing
Technische Universitat Braunschweig
[email protected]
CP9
Uncertainty Quantification in the Ensemble Kalman Filter
The Ensemble Kalman Filter (EnKF) is a data assimilation algorithm that over the last decade has been applied in numerous fields. However, it is well known that the standard implementation of the EnKF will lead to an underrepresentation of the prediction uncertainty. In this talk we will explain, using classical results from statistics, why the EnKF is not able to quantify the uncertainty correctly, and present statistical methods that can resolve this issue.
Jon Saetrom
Statoil
[email protected]
Henning Omre
Norwegian University of Science and Technology
[email protected]
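The spread underrepresentation the abstract refers to is easy to reproduce on a toy problem (hedged example, not the talk's analysis). For a scalar Gaussian state with prior N(0, 1) and observation noise variance R = 1, the exact posterior variance is 0.5; a perturbed-observation EnKF with a small ensemble estimates the Kalman gain from the sample variance, and on average its analysis-ensemble variance falls below 0.5.

```python
import numpy as np

rng = np.random.default_rng(3)
R, n_ens, n_rep = 1.0, 5, 5000
y = 1.0                                        # the observation

mean_var = 0.0
for _ in range(n_rep):
    x = rng.normal(0.0, 1.0, n_ens)            # forecast ensemble from the prior
    s2 = x.var(ddof=1)
    gain = s2 / (s2 + R)                       # gain from the *sampled* variance
    e = rng.normal(0.0, np.sqrt(R), n_ens)     # perturbed observations
    a = x + gain * (y + e - x)                 # analysis ensemble
    mean_var += a.var(ddof=1) / n_rep          # average analysis spread

exact_post_var = 0.5
```

With 5 members the averaged analysis variance comes out near 0.45 rather than 0.5: the nonlinear dependence of the gain on the sampled variance biases the spread low, which is one classical-statistics route to the talk's conclusion.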
CP9
Collocation Inference for Nonlinear Stochastic Dynamic Systems with Noisy Observations
Collocation-based methods are proposed for estimating parameters and performing inference for nonlinear stochastic continuous-time dynamic systems with noisy observations. The EM algorithm is applied to handle the systematic stochastic input and the observational error simultaneously. The approaches use a collocation method to avoid the computational intensity. Two computational algorithms, Laplace approximations and Monte Carlo integration, are examined. The convergence rate is h^p, where h is the bandwidth of the approximation scheme for the stochastic Runge-Kutta type penalty.
Yuefeng WuDepartment of Applied Mathematics and StatisticsUniversity of California Santa [email protected]
Giles HookerBiostatistics and Computational Biology DepartmentCornell [email protected]
CP10
Langevin-Based Inversion of Pressure Transient Testing Data
In the oilfield, pressure transient testing is an ideal tool for determining average reservoir parameters, but this technique does not fully quantify the uncertainty in the spatial distribution of these parameters. We wish to determine plausible parameter distributions, consistent with both the pressure transient testing data and prior geological knowledge. We used a Langevin-based MCMC technique, adapted to the large number of parameters and data, to identify geological features and characterize the uncertainty.
Richard Booth, Kirsty Morton
Schlumberger Brazil Research and Geoengineering Center
[email protected], [email protected]
Mustafa Onur
Istanbul Technical University
[email protected]
Fikri Kuchuk
Schlumberger Riboud Product Center
[email protected]
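The building block of Langevin-based MCMC is the Metropolis-adjusted Langevin (MALA) step, which drifts proposals along the gradient of the log-posterior before a Metropolis correction. Hedged sketch on a toy 2-D Gaussian target; the pressure-transient forward model, dimensionality, and adaptations of the abstract are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_post(x):        # toy stand-in for the true log-posterior
    return -0.5 * np.dot(x, x)

def grad_log_post(x):   # its gradient, used as the Langevin drift
    return -x

eps = 0.5               # step size
x = np.zeros(2)
samples = []
for _ in range(20_000):
    # Langevin proposal: gradient drift plus Gaussian noise
    prop = x + 0.5 * eps**2 * grad_log_post(x) + eps * rng.normal(size=2)
    # Metropolis-Hastings correction with the asymmetric proposal density
    fwd = -np.sum((prop - x - 0.5 * eps**2 * grad_log_post(x))**2) / (2 * eps**2)
    bwd = -np.sum((x - prop - 0.5 * eps**2 * grad_log_post(prop))**2) / (2 * eps**2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(x) + bwd - fwd:
        x = prop
    samples.append(x)

samples = np.array(samples)
```

The gradient drift is what lets such samplers remain efficient as the number of reservoir parameters grows, compared with an undirected random walk.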
CP10
Some Inverse Problems for Groundwater
Perhaps the most appealing aspect of Bayesian inference in the context of groundwater engineering is that it makes proper use of data, and provides the engineer with a tool that may be used to inform the judicious design of pump tests so that quantities of interest are determined within acceptable bounds. Low-dimensional inverse problems for groundwater may capture the essential physics and provide guidance on what is important in the solution of their higher-dimensional cousins; we present some recent work on low- and high-dimensional inverse problems for groundwater.
Nicholas Dudley Ward
Otago Computational Modelling Group
[email protected]
Tiangang Cui
University of Auckland
[email protected]
Jari P. Kaipio
Department of Applied Physics, University of Eastern Finland
Department of Mathematics, University of Auckland
[email protected]
CP10
Bayesian Acoustic Wave-Field Inversion of a 1D Heterogeneous Medium
We discuss a Bayesian parameter estimation scheme to reconstruct the distribution of the subsurface elastic characteristics of a 1D heterogeneous earth model, given the probed medium's response to interrogating acoustic waves measured at the surface. We consider a generic multilayer medium where the number of layers, along with their velocities and thicknesses, are treated as random variables, and where the solution's non-uniqueness is fully accounted for by the posterior probability density of the unknowns. A synthetic study is provided to indicate the applicability of the technique.
Saba S. Esmailzadeh
Zachry Department of Civil Engineering,
Texas A&M University, College Station, TX, USA
[email protected]
Zenon Medina-Cetina
Zachry Department of Civil Engineering
Texas A&M University
[email protected]
Jun Won Kang
Department of Civil Engineering
New Mexico State University, NM 88003, USA
[email protected]
Loukas F. Kallivokas
The University of Texas at Austin
[email protected]
CP10
Inverse Problem: Information Extraction from Electromagnetic Scattering Measurement
The estimation of local radioelectric properties of materials from global electromagnetic scattering measurements is a challenging ill-posed inverse problem. It is intensively explored on High Performance Computing machines by a Maxwell solver and statistically reduced to a simpler probabilistic metamodel. Considering the properties as a dynamic stochastic process, it is shown how advanced Markov chain Monte Carlo methods, called interacting Kalman filters, can perform Bayesian inference to estimate the properties and their associated uncertainties.
Pierre Minvielle-Larrousse, Marc Sancandi, Francois Giraud, Pierre Bonnemason
CEA, DAM
[email protected], [email protected], [email protected], [email protected]
CP10
Set-Based Parameter and State Estimation in Nonlinear Differential Equations Using Sparse Discrete Measurements
Bounded experimental uncertainty and sparse measurements limit our ability to generate estimates of component concentrations and kinetic parameters in biological models, because of algorithm instability and computational intractability. We present a set-based approach for producing estimates that addresses these issues. We use first principles and a priori knowledge to generate stabilizing bounds that maintain algorithm stability while removing spurious solutions that occur due to limited measurements. This is implemented in a multithreaded architecture to address computational limitations.
Cranos M. Williams, Skylar Marvel
North Carolina State University
[email protected], [email protected]
CP10
Probabilistic Solution of Inverse Problems under Varying Evidence Conditions
Results on the impact of varying evidence conditions on the probabilistic calibration of model parameters are presented. A synthetic case study is populated in which the data are fully controlled, including the effect of location, number, variance and correlation, when used to calibrate a simple differential equation used to simulate diffusion problems. Since the calibration follows a Bayesian approach, it is possible to describe the uncertainty changes in the model parameters as the evidence conditions vary, as reflected in independent analyses of the posterior distribution.
Zenon Medina Cetina, Negin Yousefpour
Texas A&M University
[email protected], [email protected]
Hans Petter Langtangen, Are Magnus Bruaset, Stuart Clark
Simula Research Laboratory
[email protected], [email protected], [email protected]
CP11
Adaptive Error Modelling in MCMC Sampling for Large-Scale Inverse Problems
We present a new adaptive delayed-acceptance MCMC algorithm (ADAMH) that adapts to the error of a reduced-order model to enable efficient sampling from the posterior distribution arising in large-scale inverse problems. This use of adaptivity differs from that of existing algorithms that tune the proposals, though ADAMH implements that as well. Conditions given by Roberts and Rosenthal (2007) are used to give constructions that are provably convergent. We applied ADAMH to several large-scale groundwater studies.
Tiangang Cui
University of Auckland
[email protected]
Nicholas Dudley Ward
Otago Computational Modelling Group
[email protected]
Colin Fox
University of Otago, New Zealand
[email protected]
Mike O'Sullivan
The University of Auckland
[email protected]
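The delayed-acceptance mechanism itself fits in a dozen lines. Hedged sketch: a cheap approximation screens each proposal before the expensive posterior is evaluated, and the second-stage ratio corrects for the approximation error so the exact posterior remains invariant. The Gaussian targets are toys, not a groundwater model, and no adaptivity is included.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_exact(x):    # stand-in for the expensive posterior
    return -0.5 * x**2

def log_approx(x):   # stand-in for the reduced-order model (deliberately biased)
    return -0.5 * (x - 0.3)**2 / 1.5

x, n_expensive, samples = 0.0, 0, []
for _ in range(40_000):
    prop = x + rng.normal(0.0, 1.0)   # symmetric random-walk proposal
    # stage 1: screen with the cheap approximation
    if np.log(rng.uniform()) < log_approx(prop) - log_approx(x):
        n_expensive += 1              # only now pay for the exact model
        # stage 2: ratio of exact to approximate acceptance, restoring detailed balance
        a2 = (log_exact(prop) - log_exact(x)) - (log_approx(prop) - log_approx(x))
        if np.log(rng.uniform()) < a2:
            x = prop
    samples.append(x)

samples = np.array(samples)
```

Proposals rejected at stage 1 never touch the expensive model, which is where the computational savings come from; ADAMH additionally learns the approximation error on the fly.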
CP11
Scalable Methods for Large-Scale Statistical Inverse Problems, with Applications to Subsurface Flow
We address the challenge of large-scale nonlinear statistical inverse problems by developing an adaptive Hessian-based non-stationary Gaussian process response surface method to approximate the posterior pdf solution. We employ an adaptive sampling strategy for exploring the parameter space efficiently to find interpolation points and build a global analytical response surface far less expensive to evaluate than the original. The accuracy and efficiency of the response surface is demonstrated with examples, including a subsurface flow problem.
H. Pearl Flath
Institute for Computational Engineering and Sciences
The University of Texas at Austin
[email protected]
Tan Bui-Thanh
The University of Texas at Austin
[email protected]
Omar Ghattas
University of Texas at Austin
[email protected]
CP11
Chemical Reaction Mechanism Generation Using Bayesian Variable Selection
The generation of detailed chemical reaction mechanisms, for applications ranging from electrochemistry to combustion, must often rely on noisy and indirect system-level data. We present a novel approach to mechanism generation based on Bayesian variable selection and a PDE-based physical model. Adaptive Markov chain Monte Carlo methods are used to efficiently explore a posterior that encompasses models comprised of subsets of elementary reactions. In contrast to conventional techniques, our approach provides a statistically rigorous assessment of uncertainty in reaction mechanism structure and in reaction rate parameters.
Nikhil Galagali, Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected], [email protected]
CP11
Inferring Bottom Bathymetry from Free-Surface Flow Features
We solve an inverse problem of determining the bathymetry of a channel from the mean flow features at the free surface. Our approach is optimization-based, operating on a database of simulations with parameterized bedforms. Sensitivity of the inversion process to noise (e.g., instrument error, turbulence) is investigated. In the present framework we rely on the time-averaged surface flow features, although we also consider a more complex problem of using multiple free-surface snapshots of the unsteady flow.
Sergey Koltakov, Gianluca Iaccarino
Stanford University
[email protected], [email protected]
Oliver Fringer
Environmental Fluid Mechanics Laboratory
Stanford University
[email protected]
CP11
Bayesian Updating of Uncertainty in the Description of Transport Processes in Heterogeneous Materials
The prediction of the thermo-mechanical behaviour of heterogeneous materials, such as heat and moisture transport, is strongly influenced by uncertainty in the parameters. Such materials occur e.g. in historic buildings, whose durability assessment therefore needs a reliable probabilistic simulation of transport processes, which in turn relies on the suitable identification of material parameters. In order to include expert knowledge as well as experimental results, we employ a Bayesian updating procedure.
Jan Sykora
Faculty of Civil Engineering
Czech Technical University in Prague
[email protected]
Anna Kucerova
Faculty of Civil Engineering, Prague, Czech Republic
[email protected]
Bojana V. Rosic
Institute of Scientific Computing
TU Braunschweig
[email protected]
Hermann G. Matthies
Institute of Scientific Computing
Technische Universitat Braunschweig
[email protected]
CP12
Bayesian Models for the Quantification of Model Errors in PDE-Based Systems
Bayesian formulations offer a unified statistical treatment for the solution of model-based inverse problems. Apart from the noise in the data, another source of uncertainty, which is largely unaccounted for, is model uncertainty. In general, there will be deviations between the physical reality where measurements are made and the idealized mathematical/computational description (in our case a system of PDEs consisting of conservation laws and constitutive equations). We propose an intrusive Bayesian strategy and efficient inference tools that are capable of quantifying model errors in the governing equations.
Phaedon S. Koutsourelakis
Heriot Watt University
[email protected]
CP12
Predictive Simulation for CO2 Sequestration in Saline Aquifers
One of the most difficult tasks in CO2 sequestration projects is the reliable characterization of properties of the subsurface. A typical situation employs dynamic data integration, such as sparse CO2 measurement data in time and sparse measurements of permeability and porosity in space, to be matched with simulated responses associated with a set of permeability and porosity fields. Among the challenges found in practice are proper mathematical modeling of the flow, persisting heterogeneity in porosity and permeability, and the uncertainties inherent in them. In this talk, we propose a Bayesian framework with Markov chain Monte Carlo (MCMC) simulation to sample a set of characteristics of the subsurface from the posterior distribution conditioned on the measured CO2 data. This process requires obtaining the simulated responses over many iterations. We use the model proposed in Obi and Blunt [Water Resources Research, 2006] to perform numerical simulations of the injection of CO2 in saline aquifers, and illustrate the MCMC procedure and its use in predictive simulation for CO2 sequestration.
Victor E. Ginting, Marcos Mendes
Department of Mathematics
University of Wyoming
[email protected], [email protected]
Felipe Pereira
Center for Fundamentals of Subsurface Flow
University of Wyoming
[email protected]
Arunasalam Rahunanthan
Center for Fundamentals of Subsurface Flow
University of Wyoming, Laramie, WY
[email protected]
CP12
The Effect of Prediction Error Correlation on Bayesian Model Updating
In this contribution, a Bayesian inference framework is adopted to quantify uncertainties due to measurement and modeling errors in vibration-based model updating. In most cases, independent normal distributions are assumed for the observed prediction errors on natural frequencies and mode shape components. However, such an assumption is not expected to hold for mode shape components identified from dense sensor networks. A correlated prediction error is therefore introduced and its effect on the identification is investigated.
Ellen Simoen
[email protected]
Costas Papadimitriou
University of Thessaly
Dept. of Mechanical Engineering
[email protected]
Guido De Roeck, Geert Lombaert
K.U.Leuven
Dept. of Civil Engineering
[email protected], [email protected]
CP12
Investigating Uncertainty in Aerothermal Model Predictions for Hypersonic Aircraft Structures
Validating computational models of fluid-thermal-structural interactions in hypersonic aircraft structures remains a challenge due to limited experimental data. This research investigates uncertain model parameters and model errors in aerodynamic pressure and heating predictions. The modeled conditions correspond to aerothermal test data from hypersonic wind tunnel experiments of a spherical dome protuberance on a flat plate. Bayesian calibration and sensitivity analysis are used with the test data to improve understanding of model and experimental uncertainty.
Benjamin P. Smarslok
Air Force Research Laboratory
[email protected]
Sankaran Mahadevan
Vanderbilt University
[email protected]
CP13
Reservoir Uncertainty Quantification Using Clustering Methods
Quantifying uncertainty in reservoir predictions is important in the petroleum industry, since more accurate quantification of the uncertainty contributes to prudent multi-million-dollar decision making. Nowadays predictions are made from conditioned multi-model ensembles obtained in history matching using optimisation with evolutionary algorithms. These ensembles are usually multi-modal, and different models will result in different forecasts. We use k-means, probabilistic-distance, and agglomerative clustering methods to locate and track multiple distinct optima representing uncertainties of the reservoir.
Asaad Abdollahzadeh
Heriot-Watt University
[email protected]
Mike Christie
Heriot-Watt University, Edinburgh
[email protected]
David W. Corne
Heriot-Watt University
[email protected]
CP13
Adaptive Smolyak Pseudospectral Expansions
Smolyak pseudospectral expansions provide an efficient and accurate approach for computing sparse polynomial chaos approximations of functions with weak coupling between the input dimensions. We rigorously develop the Smolyak algorithm and show that it is well behaved. This behavior stands in contrast to the traditional sparse quadrature approach for building polynomial expansions, which is either inaccurate or inefficient in almost all cases. Furthermore, we tailor the Smolyak algorithm to individual functions at run-time through an adaptive strategy.
Patrick R. Conrad
Massachusetts Institute of Technology
[email protected]
Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]
CP13
Uncertainty and Differences in Global Climate Models
Climate models have greatly aided investigation of global climate change. The uncertainty, similarities, and differences between climate models can be examined using exceedance regions. We propose methodology for drawing conclusions about entire exceedance regions with known confidence. The methodology uses kriging and conditional simulation to create confidence regions having the necessary statistical properties. Discussion will include assessment of future climate change for several regions of North America and highlight differences between various climate models.
Joshua P. French
University of Colorado Denver
[email protected]
Stephan Sain
Institute for Mathematics Applied to Geosciences
National Center for Atmospheric Research
[email protected]
CP13
Comparison of Stochastic Methods for Bayesian Updating of Uncertainty in Parameters of Nonlinear Models
An extensive development of efficient methods for stochastic modelling has enabled uncertainty propagation through complex models. In this contribution, we present a review and comparison of several approaches, such as the stochastic Galerkin method and the stochastic collocation method, in the context of Bayesian uncertainty updating.
Anna Kucerova
Faculty of Civil Engineering, Prague, Czech Republic
[email protected]
Jan Sykora
Faculty of Civil Engineering
Czech Technical University in Prague
[email protected]
Bojana V. Rosic
Institute of Scientific Computing
TU Braunschweig
[email protected]
Hermann G. Matthies
Institute of Scientific Computing
Technische Universitat Braunschweig
[email protected]
CP13
Stochastic ANOVA-Galerkin Projection Schemes
We present numerical schemes based on ANOVA-Galerkin projection for stochastic PDEs with a large number of random variables. The central idea is to represent the solution using a functional ANOVA decomposition supplemented by appropriate orthogonality constraints, so that the weak form can be decoupled into a set of independent low-dimensional subproblems. Numerical studies will be presented for the steady-state and time-dependent stochastic diffusion equations with more than 500 variables.
Prasanth B. Nair
University of Southampton
[email protected]
Christophe Audouze
The laboratory of signals and systems (L2S)
University Paris-Sud Orsay - CNRS - Supelec
[email protected]
CP13
Multilevel Monte Carlo for Uncertainty Quantification in Subsurface Flow
Multilevel Monte Carlo is an efficient variance reduction technique to quantify uncertainties in simulations of subsurface flows with highly varying, rough coefficients, e.g. in radwaste disposal. We study model elliptic problems of single-phase flow in random media described by correlated lognormal distributions, discretized in space via (standard and mixed) finite elements. We will demonstrate (theoretically and numerically) significant gains with respect to standard Monte Carlo, leading (in the best case) to an asymptotic cost that is proportional to solving one deterministic PDE.
Robert Scheichl
Department of Mathematical Sciences
University of Bath
[email protected]
Andrew Cliffe
School of Mathematical Sciences
University of Nottingham
[email protected]
Michael B. Giles
Mathematical Institute
Oxford University
[email protected]
Minho Park
University of Nottingham
[email protected]
Aretha Teckentrup, Elisabeth Ullmann
University of Bath
[email protected], [email protected]
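The multilevel telescoping estimator can be sketched on a toy problem. Hedged example: in place of the lognormal-coefficient elliptic PDE of the abstract, we estimate E[S_T] for a geometric Brownian motion (where the answer is known exactly), with levels defined by Euler time-step refinement and the coarse path coupled to the fine path by summing its Brownian increments.

```python
import numpy as np

rng = np.random.default_rng(6)
S0, mu, sig, T = 1.0, 0.05, 0.2, 1.0

def level_estimator(level, n_samples):
    nf = 2**level
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), (n_samples, nf))
    Sf = np.full(n_samples, S0)
    for i in range(nf):                      # fine Euler-Maruyama path
        Sf = Sf * (1 + mu * hf + sig * dW[:, i])
    if level == 0:
        return Sf.mean()                     # coarsest level: plain MC estimate
    Sc = np.full(n_samples, S0)
    hc = 2 * hf
    for i in range(nf // 2):                 # coupled coarse path, same randomness
        Sc = Sc * (1 + mu * hc + sig * (dW[:, 2 * i] + dW[:, 2 * i + 1]))
    return (Sf - Sc).mean()                  # level-l correction, small variance

# telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
# with sample counts decreasing on the (cheap-to-correct) finer levels
est = sum(level_estimator(l, n)
          for l, n in zip(range(5), [200_000, 50_000, 20_000, 8_000, 4_000]))
exact = S0 * np.exp(mu * T)
```

Because the coupled corrections have rapidly decaying variance, most samples are spent on the cheap coarse level, which is the mechanism behind the cost gains claimed for the PDE setting.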
CP14
Toward Reduction of Uncertainty in Complex Multi-Reservoir River Systems
Uncertainties are inherent in the operation of any system; however, assuming that they are all purely stochastic and independent, both spatially and temporally, may result in unnecessarily large uncertainties that may not be useful for the operation of a system. In this paper we present a novel framework for better modeling uncertainty in complex systems and demonstrate the approach applied to regulated river systems. This framework aims to combine “purely stochastic processes” (not well understood or predictable at this stage) with the physical description of the system (e.g., flow dynamics in the case of complex regulated river systems), with the goal of better modeling uncertainties and hence reducing the ranges of the confidence intervals. In this framework, only the stream inflows (an external source) are assumed to be purely stochastic. Other uncertain quantities are correlated to the uncertainty of the stream inflows using the dynamics of the river as the physical description of the system. In this framework the dynamics of the river are simulated using the performance-graphs approach. For the quantification of uncertainty, rather than sampling the input distributions, we explicitly model the random space (via random variables and processes). We represent uncertainty in stream inflows via an error term modeled as a stochastic process. The resulting system is discretized (in random space) using stochastic collocation, which is distinct from random sampling methods in that the Gaussian nodes are deterministic. By recycling existing deterministic codes, this non-intrusive approach is both more efficient than Monte Carlo methods and as flexible in its application.
Nathan L. Gibson
Oregon State University
[email protected]
Arturo Leon, Christopher Gifford-MiearsSchool of Civil and Construction EngineeringOregon State [email protected], [email protected]
CP14
Stochastic Analysis of the Hydrologic Cascade Model, a Polynomial Chaos Approach
In this talk we analyse a Nash cascade of linear reservoirs representing a watershed. It is specified by a system of coupled linear differential equations with random coefficients. A stochastic spectral representation of these parameters is used, together with a polynomial chaos expansion. A system of differential equations is obtained which describes the evolution of the mean and higher-order moments with respect to time.
Franz Konecny
BOKU-University of Natural Resources and Applied Life Sciences, [email protected]

Hans-Peter Nachtnebel
BOKU-University of Natural Resources and Life Sciences, Vienna
hans [email protected]
CP14
Likelihood-Based Observability Analysis and Confidence Intervals for Model Predictions
Nonlinear dynamic models of biochemical networks contain a large number of parameters. This renders classical approaches for the calculation of confidence regions for parameter estimates and model predictions hardly feasible. We present the so-called prediction profile likelihood, which is utilized to generate reliable confidence intervals for model predictions and for a data-based observability analysis. The presented approach constitutes a general concept and is therefore applicable to any kind of prediction model.
Clemens Kreutz
University of Freiburg
Institute of [email protected]

Andreas Raue
Physics Institute, University of [email protected]

Jens Timmer
University of Freiburg
Department of [email protected]
CP14
Hybrid Stochastic Collocation - Stochastic Perturbation Method for Linear Elastic Problems with Uncertain Material Parameters
We propose a hybrid formulation combining the stochastic collocation (SC) and stochastic perturbation (SP) methods to obtain the response of a linear elastic system with material uncertainties. The material properties are modelled as a memory-less transformation of a Gaussian field, which is discretized by a series expansion method. The response of the system is estimated using SC to account for the most important (lower) modes in the series expansion and SP for the other (higher) modes.
Boyan S. Lazarov
Department of Mechanical Engineering
Technical University of [email protected]

Mattias Schevenels
Department of Civil Engineering, K.U.Leuven
Kasteelpark Arenberg 40, B-3001 Leuven, [email protected]
CP14
A Comparison of Uncertainty Quantification Methods Under Practical Industry Requirements
The objective of this paper is to establish comprehensive guidelines for engineers to effectively perform probabilistic design studies through a systematic comparison of uncertainty quantification or probabilistic methods. These methods include 1) simulation-based approaches such as Monte Carlo simulation, importance sampling and adaptive sampling, 2) local expansion-based methods such as the Taylor series method or perturbation method, 3) First Order Second Moment (FOSM) based methods, 4) stochastic expansion-based methods, such as Neumann expansion and polynomial chaos expansion, 5) numerical integration-based or moments-based methods, such as the Point Estimate Method and Eigenvector Dimension Reduction, 6) regression- or metamodel-based methods, 7) Bayesian methods, and 8) data classification methods. Both aleatory and epistemic uncertainty will be addressed in the paper. A set of benchmark problems representing different levels of dimensionality, non-linearity and noise level will be used for the comparison.
Liping Wang, Gulshan Singh, Arun Subramaniyan
GE Global [email protected], [email protected], [email protected]
CP15
Rare Events Detection and Analysis of Equity and Commodity High-Frequency Data
We present a methodology to detect unusual trading activity, defined as high price movement with relatively little volume traded. The analysis is applied to high-frequency transactions of thousands of equities, and the probability of price recovery in the proximity of these rare events is calculated. Similar results are obtained when analyzing commodities with different expiration dates. The propagation of rare events in the commodity structure and the liquidity problems are addressed.
Dragos Bozdog, Ionut Florescu
Stevens Institute of Technology
Department of Mathematical [email protected], [email protected]

Khaldoun Khashanah, Jim Wang
Stevens Institute of Technology
Financial Engineering [email protected], [email protected]
CP15
Glimpses of UQ, QMU, and Reliability
We will provide an introduction to quantification of margins and uncertainties (QMU) and reliability. QMU is a summary number that is relatively easy to interpret for decision-making. Reliability is defined as the probability that a given item will perform its intended function for a given period of time under a given set of conditions. The underlying concepts in this definition are probability, intended function, time, and conditions. Reliability is a quantity that can be very difficult to compute for complex systems and, viewed rigorously, is inherently a statistical distribution on the probability of success (or failure) of a system. Both concepts are concerned with uncertainty quantification. We will illustrate with examples from complex systems from our work at Los Alamos National Laboratory.
Aparna V. Huzurbazar, David Collins
Los Alamos National [email protected], [email protected]
CP15
Bayesian Subset Simulation
One of the most important and computationally challenging problems in reliability engineering is to estimate the failure probability of a dynamic system. In cases of practical interest, the failure probability is given by an integral over a high-dimensional uncertain parameter space. The Subset Simulation method provides an efficient algorithm for computing failure probabilities for general high-dimensional reliability problems. In this work, a Bayesian counterpart of the original Subset Simulation method is developed.
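For context, the original (non-Bayesian) Subset Simulation idea expresses a rare failure probability as a product of larger conditional probabilities, each sampled by a modified Metropolis chain. A rough sketch follows; the limit-state function and all tuning constants are illustrative, not from the talk.

```python
import numpy as np

def subset_simulation(g, dim, threshold, n=3000, p0=0.1, spread=1.0, seed=0):
    """Estimate P[g(X) > threshold] for X ~ N(0, I) by subset simulation."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    y = np.array([g(xi) for xi in x])
    prob = 1.0
    for _ in range(30):                        # safety cap on the number of levels
        p_fail = np.mean(y > threshold)
        if p_fail >= p0:                       # final level reached
            return prob * p_fail
        order = np.argsort(y)[::-1]
        ns = int(p0 * n)
        b = y[order[ns - 1]]                   # intermediate threshold (p0-quantile)
        prob *= p0
        cx, cy = x[order[:ns]].copy(), y[order[:ns]].copy()
        xs, ys = [cx.copy()], [cy.copy()]
        for _ in range(n // ns - 1):           # modified Metropolis within {g > b}
            cand = cx + spread * rng.standard_normal(cx.shape)
            ratio = np.exp(0.5 * (cx**2 - cand**2))      # N(0,1) density ratio
            cand = np.where(rng.random(cand.shape) < ratio, cand, cx)
            gy = np.array([g(c) for c in cand])
            ok = gy > b
            cx = np.where(ok[:, None], cand, cx)
            cy = np.where(ok, gy, cy)
            xs.append(cx.copy())
            ys.append(cy.copy())
        x, y = np.concatenate(xs), np.concatenate(ys)
    return prob * np.mean(y > threshold)

# Toy limit state: failure when the sum of two standard normals exceeds 5,
# so p = P[N(0, 2) > 5] ~ 2e-4 -- far too rare for plain Monte Carlo at n = 3000.
p_hat = subset_simulation(lambda v: v.sum(), dim=2, threshold=5.0)
print(p_hat)
```

Each level conditions on roughly the top p0 fraction of samples, so a probability of order 1e-4 costs only a few levels of n evaluations instead of millions of direct samples.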
Konstantin M. Zuev
Department of Mathematics
University of Southern [email protected]

James Beck
Division of Engineering and Applied Science
California Institute of [email protected]
CP16
Delay in Neuronal Spiking
We study how some delay mechanisms influence interaction between a neuron and its network. Specifically, we look at conduction delay, spike-timing dependent plasticity, and time delay induced by the partial differential equation describing a particular neuron. We further compare the contribution of different delay mechanisms to numerical solutions of the governing cable equation. We study two cases: the case of passive electrical activity and the case of propagating action potential.
Mikhail M. Shvartsman
University of St. [email protected]
CP16
A Gaussian Hierarchical Model for Random Intervals
Many statistical data are imprecise due to factors such as measurement errors, computation errors, and lack of information. In such cases, data are better represented by intervals than single-valued numbers. To model both randomness and imprecision, we propose a Gaussian hierarchical model for random intervals, and develop a minimum contrast estimator that we show is both consistent and asymptotically normal. We extend this model to the time series context and build an interval-valued GARCH model.
Yan Sun
Department of Mathematics & Statistics
Utah State [email protected]

Dan Ralescu
Department of Mathematical Sciences
University of [email protected]
CP17
Interval-Based Inverse Problems with Uncertainties
Inverse problems in science and engineering aim at estimating model parameters of a physical system using observations of the model's response. However, response measurements are usually associated with uncertainties, and deterministic inverse algorithms hardly provide the associated error estimates for the model parameters. In this work, we will present an interval-based iterative solution that predicts such error exploiting an adjoint-based optimization and the containment-stopping criterion.
Rafi L. Muhanna, Francesco Fedele
Georgia Institute of [email protected], [email protected]
CP17
Probability Bounds for Nonlinear Finite Element Analysis
A set of possible cumulative probability distributions defined by its bounding functions is one way of representing epistemic and aleatory uncertainty separately. Such sets are known as probability boxes (P-boxes). In this work we will present an interval Monte Carlo method for calculating the non-linear response of structures in terms of P-boxes when system parameters and loading are described as P-boxes. Results obtained show very tight enclosures even for large uncertainties.
Rafi L. Muhanna
Georgia Institute of [email protected]

Robert Mullen
University of South [email protected]
CP17
A Complete Closed-Form Solution to Stochastic Linear Systems with Fixed Coefficients
Given a fixed but arbitrary coefficient matrix A and a meaningful input probability density d(b) that stochastically describes the other input data b for the system Ax = b, this paper: (1) derives explicit formulas for numerically computing the induced probability that this system has at least one solution x, (2) formulates and then describes the important properties of the optimization problem whose solutions b* (if any) maximize d(b) over all b for which Ax = b has at least one solution x, and (3) formulates and then describes the important properties of a general optimization problem whose solutions x* (if any) optimize a given function g(x) over all x for which Ax = b*. This methodology can numerically compete with both the simulation approaches and the simplified deterministic approaches based on expected values and other moment information. In modified form, it might also provide new approaches to systems Ax = b with "fuzzy" data b.
Elmor L. Peterson
Systems Science [email protected]
CP17
Cramer-Rao Bound for Models of Fat-Water Separation Using Magnetic Resonance Imaging
Magnetic resonance images typically contain both water and fat, but most of the information is in the water signal. Different assumptions about the imaging conditions result in linear and non-linear models of the image generation. The multiple parameters and non-linearity lead to a version of the Cramer-Rao bound for a parameter if all other parameters were known. We study how the choice of model affects the optimal data acquisition based on minimizing this Cramer-Rao bound.
Angel R. Pineda, Emily Bice
California State University, [email protected], [email protected]
CP17
Automated Exact Calculation of Test Statistics in Hypothesis Testing
Many classical statistical tests rely on large-sample approximations for determining appropriate critical values. We use a computer algebra system to compute the exact distribution of test statistics under the null hypothesis for the Wilcoxon signed-rank test and the chi-square goodness of fit test, which gives the exact critical value for small sample sizes.
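As a small illustration of the idea (here with plain integer convolution rather than a computer algebra system), the exact null distribution of the Wilcoxon signed-rank statistic W+ can be built up rank by rank, since under H0 each rank enters the sum independently with probability 1/2:

```python
import numpy as np

def wilcoxon_signed_rank_null(n):
    """Exact null pmf of W+ on {0, ..., n(n+1)/2} for sample size n (no ties)."""
    max_w = n * (n + 1) // 2
    counts = np.zeros(max_w + 1, dtype=np.int64)
    counts[0] = 1
    for rank in range(1, n + 1):           # include/exclude each rank in turn
        shifted = np.zeros_like(counts)
        shifted[rank:] = counts[:max_w + 1 - rank]
        counts = counts + shifted
    return counts / 2.0 ** n

def exact_critical_value(n, alpha=0.05):
    """Largest c with P(W+ <= c) <= alpha (lower one-sided rejection region)."""
    cdf = np.cumsum(wilcoxon_signed_rank_null(n))
    below = np.nonzero(cdf <= alpha)[0]
    return int(below[-1]) if below.size else None

print(wilcoxon_signed_rank_null(5))   # pmf over W+ = 0..15, each count / 32
print(exact_critical_value(5))        # 0: only W+ = 0 satisfies P(W+ <= c) <= 0.05
```

For small n the exact critical value can differ from the normal approximation, which is precisely the regime the abstract targets.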
Lawrence Leemis
College of William and [email protected]

Vincent Yannello
The College of William & [email protected]
CP17
Evidence-Based Approach for the Probabilistic Assessment of Unknown Foundations of Bridges
As of 2005, there were approximately sixty thousand bridges throughout the US identified as having Unknown Foundations. Missing substructure information has made safety monitoring of bridges very difficult for departments of transportation, especially for over-river bridges, which are susceptible to scour. An evidence-based methodology is proposed in this paper to determine the unknown characteristics of bridge foundations conditional on information collected from existing bridges. The proposed approach combines the use of a neural-network model to compute the foundation's bearing capacity with a Bayesian formulation that allows for defining probability distributions of the foundation type, size and embedment depth, as well as of the soil properties supporting the foundation.
Negin Yousefpour, Zenon Medina Cetina, Jean-Louis Briaud
Texas A&M [email protected], [email protected], [email protected]
CP18
An Approach to Weak Solution Approximation of Stochastic Differential Equations
This paper presents an approach to approximate the weak solution of nonlinear stochastic differential equations. Utilizing the results of the Radon-Nikodym theorem and the Girsanov theorem, the proposed approach introduces a measure transformation so that the Ito process under consideration becomes a Wiener process under the transformed measure. Here we propose the Euler-Maruyama scheme to approximate the stochastic integral involved in calculating the Radon-Nikodym derivative associated with the measure transformation. A detailed analysis of the proposed approximation scheme reveals that the proposed approach is exactly the same as calculating the weak solution using the traditional Euler-Maruyama scheme, and thus the proposed approach is weakly convergent with a convergence rate of γ = 1. A numerical example is presented to validate the proposed approach and illustrate its capability.
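For reference, the traditional Euler-Maruyama scheme against which the approach is compared can be sketched as follows; the geometric-Brownian-motion test equation is an illustrative choice, not the paper's example.

```python
import numpy as np

def euler_maruyama(a, b, x0, T, n_steps, n_paths, rng):
    """Euler-Maruyama paths for dX = a*X dt + b*X dW (geometric Brownian motion)."""
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + a * x * dt + b * x * dw
    return x

rng = np.random.default_rng(1)
a, b, x0, T = 0.5, 0.3, 1.0, 1.0
xT = euler_maruyama(a, b, x0, T, n_steps=50, n_paths=200_000, rng=rng)

# Weak convergence concerns statistics of the solution: E[X_T] = x0 * exp(a*T)
# for this SDE, and the scheme's bias in such moments shrinks like O(dt),
# i.e., weak order gamma = 1.
print(abs(xT.mean() - x0 * np.exp(a * T)))
```

Halving dt roughly halves the bias in the mean, which is the γ = 1 weak rate the abstract refers to (the remaining scatter is Monte Carlo error in the sample mean).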
Jemin George
U.S. Army Research [email protected]
CP18
Asymptotic Normality and Efficiency for Sobol Index Estimators
In this talk we will study the convergence and the efficiency of Sobol index estimators. More precisely, we will consider the following model

Y = f(X, Z),

where X and Z are independent random vectors. The importance of each input variable can be measured by

S = ( Var(E(Y|X)) / Var(Y), Var(E(Y|Z)) / Var(Y) ).

We will construct an estimator Sn of S and show that it is asymptotically normal and efficient. At the end of the talk, we will give similar results when f is replaced by some approximation of f.
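A standard Monte Carlo route to such indices is the pick-freeze estimator: fixing X while redrawing Z gives a replicated output whose covariance with Y equals Var(E(Y|X)). The toy model below is hypothetical, chosen so the exact answer is known; it is not necessarily the estimator Sn analyzed in the talk.

```python
import numpy as np

def f(x, z):
    # Hypothetical model: Var(E[Y|X]) = 1, Var(E[Y|Z]) = 4, Var(Y) = 5,
    # so the first Sobol index is S_X = 1/5.
    return x + 2.0 * z

rng = np.random.default_rng(0)
n = 200_000
x, z = rng.standard_normal(n), rng.standard_normal(n)
z_new = rng.standard_normal(n)        # "freeze" X, redraw Z independently
y, y_frozen = f(x, z), f(x, z_new)

# Pick-freeze: Cov(Y, Y_frozen) = Var(E[Y|X]) because both outputs share X.
m = 0.5 * (y + y_frozen).mean()
s_x = ((y * y_frozen).mean() - m ** 2) / y.var()
print(s_x)                            # close to 0.2
```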
Alexandre Janon
LJK / Universite Joseph Fourier Grenoble (France)
INRIA Rhone Alpes, [email protected]

Thierry Klein
Institut de Mathematiques de Toulouse
UMR CNRS 5219 Universite de [email protected]

Agnes Lagnoux
Institut de Mathematiques de Toulouse UMR CNRS [email protected]

Maelle Nodet
Universite Joseph Fourier and INRIA, [email protected]

Clementine Prieur
LJK / Universite Joseph Fourier Grenoble (France)
INRIA Rhone Alpes, [email protected]
CP18
Frechet Sensitivity Analysis for the Convection-Diffusion Equation
In this work, we consider Frechet derivatives of solutions to the convection-diffusion equation with respect to distributed parameters and demonstrate their applicability in parameter estimation. Under certain conditions, the Frechet derivative operator is Hilbert-Schmidt and thus there are parametric variations that have more impact on the solution than others. These variations can be used to identify locations for new measurements or data samples. We also analyze an algorithm for approximating these parametric variations.
Vitor Leite Nunes
Interdisciplinary Center for Applied Mathematics, Virginia Polytechnic Institute & State [email protected]
CP18
Analysis of Quasi-Monte Carlo FE Methods for Elliptic PDEs with Lognormal Random Coefficients
We devise, implement and analyse quasi-Monte Carlo methods for computing the expectations of (nonlinear) functionals of solutions of a class of elliptic partial differential equations with lognormal random coefficients. Our motivation comes from fluid flow in random porous media, where a very large number of random variables is usually needed to give a realistic model of the coefficient field. By using deterministically chosen sample points in an appropriate (usually high-dimensional) parameter space, quasi-Monte Carlo methods can obtain substantially higher convergence rates than classical Monte Carlo. An analysis in appropriate function spaces confirms this also theoretically. The analysis is particularly challenging since in the lognormal case the PDE is neither uniformly elliptic nor uniformly bounded and the parametrisation of the coefficient is nonlinear.
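The flavor of "deterministically chosen sample points" can be illustrated with a randomly shifted rank-1 lattice rule; the generating vector below is arbitrary for illustration (practical vectors come from component-by-component constructions), and the smooth integrand is a stand-in for a PDE functional.

```python
import numpy as np

def shifted_lattice(n, z, shift):
    """Rank-1 lattice points {frac(i*z/n + shift) : i = 0..n-1} in [0,1)^d."""
    i = np.arange(n)[:, None]
    return np.mod(i * z[None, :] / n + shift[None, :], 1.0)

def integrand(u):
    # Smooth test function on [0,1]^d with exact integral 1.
    return np.prod(1.0 + 0.5 * (u - 0.5), axis=1)

d, n = 4, 1021                        # n prime
z = np.array([1, 387, 181, 755])      # illustrative generating vector
rng = np.random.default_rng(0)

# A handful of independent random shifts gives an unbiased estimate plus a
# practical error estimate, while keeping the deterministic QMC structure.
estimates = [integrand(shifted_lattice(n, z, rng.random(d))).mean()
             for _ in range(20)]
print(np.mean(estimates), np.std(estimates) / np.sqrt(len(estimates)))
```

For smooth integrands the shift-averaged error decays close to O(1/n) rather than the O(1/sqrt(n)) of plain Monte Carlo, which is the rate gain the abstract exploits.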
Robert Scheichl
Department of Mathematical Sciences
University of [email protected]

Ivan G. Graham
University of [email protected]

Frances Y. Kuo
School of Mathematics and Statistics
University of New South [email protected]

James Nichols
University of New South Wales, [email protected]

Christoph Schwab
ETH [email protected]

Ian H. Sloan
University of New South Wales
School of [email protected]
CP19
Maximum Entropy Construction for Data-Driven Analysis of Diffusion on Random Manifolds
A data-driven approach is developed for diffusion on manifolds with unknown geometry. Given a limited number of samples drawn from the manifold, our objective is to characterize the geometry, while accounting for inevitable uncertainties due to measurement noise and scarcity of samples. We will present a MaxEnt construction for uncertainty representation of the graph Laplacian used to approximate the Laplace-Beltrami operator. We also discuss the adaptation of this formulation for information diffusion in social networks.
Hadi Meidani
University of Southern [email protected]

Roger Ghanem
University of Southern California
Aerospace and Mechanical Engineering and [email protected]
CP19
Multiple Signal Classification in Covariance Space of EEG Data
The objective is to find discriminative features in EEG signals to mark various stages of drowsiness in vehicle operators. Existing methods are only able to distinguish between extreme stages. In this research, we are trying to use metrics on the covariance space of EEG data to identify the evolution of the state of fatigue from low to high. We have conducted experiments to induce continuous fatigue in human subjects while keeping them awake for about 36 hours. The EEG data has been recorded over 3-hour intervals. Eventually we intend to build a spatio-temporal model of the evolution of fatigue in the covariance space of EEG data.
Deepashree Sengupta
Indian Institute of Technology, [email protected]

Aurobinda Routray
Professor, Department of Electrical Engineering
Indian Institute of Technology, [email protected]
CP19
Dynamic Large Spatial Covariance Matrix Estimation in Application to Semiparametric Model Construction via Variable Clustering: the SCE Approach
We consider estimating a large spatial covariance matrix of a high-dimensional time series by thresholding, quantify the interplay between the spatial estimators' consistency and the temporal dependence level, and prove a cross-validation result for threshold selection. Given a consistently estimated matrix and its natural links with semiparametrics, we screen and cluster the explanatory variables and construct the corresponding semiparametric model, to be further estimated with sign constraints. We call this the SCE approach.
Song Song
University of California, [email protected]
CP19
Random Matrix Based Approach for Adaptive Estimation of Covariance Matrix
This work considers the problem of estimating the unknown covariance matrix, Q, of the process noise in a linear dynamic system based on corrupted state measurement data. Here, Q is directly treated as a symmetric, positive-definite random matrix, unlike other approaches where each element of Q is treated as a random variable or Q is itself parameterized by a set of scalar-valued random variables. The probability density function (pdf) of Q is estimated by employing the maximum entropy principle. The theory of multiple models and the Kalman filter are then used to update this pdf over time, resulting in a pdf with a sharp peak around the true covariance matrix once a sufficient number of measurements has been made.
Sonjoy Das, Kumar Vishwajeet
University at [email protected], [email protected], [email protected]

Puneet Singla
Mechanical & Aerospace Engineering
University at [email protected]
CP20
Determining Critical Parameters of Sine-Gordon and Nonlinear Schrodinger Equations with a Point-Like Potential Using Generalized Polynomial Chaos Methods
We consider the sine-Gordon and nonlinear Schrodinger equations with a point-like singular source term. The soliton interaction with such a singular potential yields a critical solution behavior. That is, for a given value of the potential strength or the soliton amplitude, there exists a critical velocity of the initial soliton solution, around which the solution is either trapped by or transmitted through the potential. In this talk, we propose an efficient method for finding such a critical velocity by using the generalized polynomial chaos (gPC) method. For the proposed method, we assume that the soliton velocity is a random variable and expand the solution in the random space using orthogonal polynomials. We consider the Legendre and Hermite chaos with both the Galerkin and collocation formulations. The proposed method finds the critical velocity accurately with spectral convergence; thus the computational complexity is much reduced. The very core of the proposed method lies in using the mean solution instead of reconstructing the solution. The mean solution converges exponentially, while the gPC reconstruction may fail to converge to the right solution due to the Gibbs phenomenon in the random space. Numerical results confirm the accuracy and spectral convergence of the method.
Debananda Chakraborty
Ph.D. [email protected]

Jae-Hun Jung
Department of Mathematics
SUNY at [email protected]
CP20
A Comparison of Dimensionality Reduction Techniques in Scientific Applications
High-dimensional data sets, where each data item is represented by many attributes or features, are a challenge in data mining. Irrelevant attributes can reduce the accuracy of classification algorithms, while distance-based clustering algorithms suffer from a loss of meaning of the term "nearest neighbors". Using data sets from scientific applications, we present a comparison of different dimensionality reduction methods, including various feature selection techniques, as well as linear and non-linear feature transform methods.
Ya Ju Fan, Chandrika Kamath
Lawrence Livermore National [email protected], [email protected]
CP20
Computational Methods for Statistical Learning Using Support Vector Machine Classifiers
An augmented Lagrangian algorithm for solving equality and bound constrained quadratic problems will be presented for use in support vector machine (SVM) classification. Examples will include images and a diabetes data set. The algorithm will be compared with existing optimization techniques, and the SVM method will be compared with quadratic discriminant analysis, as well as a general linearized model for the diabetes data set.
Marylesa Howard
The University of [email protected]
CP20
A Hidden Deterministic Solution to the Quantum Measurement Problem
In this paper we present a nonconventional method of establishing a hidden but computationally realizable deterministic solution to quantum measurement problems in the non-infinite, non-zero quantized energy limits. Our approach is not in contradiction with Heisenberg's Uncertainty Principle; rather, it determines the energy (E) and momentum (p) of a quantum particle as functions of the uncertainties, viz. E = f(ΔE) and p = f(Δp). Thus an observer gets E ± ΔE and p ± Δp as the observed values of the energy and momentum of the said particle.
Pabitra Pal Choudhury
Professor in Applied Statistics Unit, ISI Kolkata [email protected]; [email protected]

Swapan Kumar Dutta
Retired [email protected]
MS1
Super-parameterisation of Oceanic Deep Convection using Emulators
Misrepresentation of sub-grid scale processes can increase the uncertainty in the results of large scale models. Despite the existence of detailed numerical models that resolve sub-grid scale processes, embedding these in a large scale model is often computationally prohibitive. A viable alternative is to encapsulate our knowledge about the sub-grid scale model in an emulator and use this as a surrogate. We apply this paradigm of 'super-parameterisation' to an oceanic deep convection problem.
Yiannis Andrianakis
National Oceanography Centre
University of Southampton, [email protected]
MS1
Effective and Efficient Handling of Ill-Conditioned Correlation Matrices in Kriging and Gradient-Enhanced Kriging Emulators through Pivoted Cholesky Factorization
Kriging or Gaussian Process emulators are popular because they interpolate the build data if the correlation matrix is R.S.P.D. and produce a spatially varying error estimate. However, handling ill-conditioned correlation matrices can be challenging. The rows/columns of an ill-conditioned matrix substantially duplicate information. Pivoted Cholesky ranks points by their new information content; then low-information points are discarded. Since discarded points contain the least new information, they best remediate ill-conditioning and are the easiest to predict.
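A minimal sketch of this ranking idea, with a greedy pivoted Cholesky written out by hand (the point set and Gaussian-kernel length scale are made up for illustration):

```python
import numpy as np

def pivot_order(R):
    """Greedy pivoted Cholesky: order points by new information content.

    Returns the pivot order and each point's residual (conditional)
    variance at the moment it was selected -- a near-zero residual means
    the point duplicates information already in the factorization.
    """
    n = R.shape[0]
    d = np.diag(R).astype(float).copy()    # residual variances
    L = np.zeros((n, n))
    selected = np.zeros(n, dtype=bool)
    order, residuals = [], []
    for k in range(n):
        j = int(np.argmax(np.where(selected, -np.inf, d)))
        order.append(j)
        residuals.append(d[j])
        selected[j] = True
        piv = np.sqrt(max(d[j], 0.0))
        if piv <= 1e-12:                   # remaining points add no new info
            continue
        col = (R[:, j] - L[:, :k] @ L[j, :k]) / piv
        L[:, k] = col
        d = d - col ** 2
        d[j] = 0.0
    return order, residuals

pts = np.array([0.0, 0.5, 1.0, 0.5000001])   # last point nearly duplicates pts[1]
R = np.exp(-((pts[:, None] - pts[None, :]) / 0.3) ** 2)   # Gaussian correlation
order, resid = pivot_order(R)
print(order)       # one of the two near-duplicate points is ranked last
print(resid[-1])   # ~0: the last point carries almost no new information
```

Truncating the factorization where the residual variance falls below a tolerance both discards the redundant rows/columns that cause the ill-conditioning and identifies the points the emulator can already predict.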
Keith Dalbey
Sandia National [email protected]
MS1
History Matching: An alternative to Calibration for a Galaxy Formation Simulation
In the context of computer models, calibration is a vital technique whereby an emulator is used to obtain a posterior distribution over the input space of the computer simulator. However, this process can be extremely challenging. We discuss a powerful alternative known as history matching that can either serve as a prelude to a full calibration to greatly improve efficiency, or that can in several cases replace calibration entirely. This is work from a Mitchell Prize-winning paper.
Ian Vernon
University of Durham, [email protected]
MS1
Emulation of Count Data in Healthcare Models
Natural History Models are simulators used in healthcare applications to simulate patients' outcomes from certain illnesses. Their output is typically count data, and for rare illnesses zeros are common. The simulators rely on many unknown parameters, but calibration using Monte Carlo methods is impractical due to computational expense. Instead we calibrate using an emulator as a surrogate for the simulator, showing how appropriate regions of parameter space are identified for the emulator to perform well.
Ben Youngman
University of Sheffield, [email protected]
MS2
Bayesian Inference with Optimal Maps
We present a new approach to Bayesian inference that entirely avoids Markov chain simulation, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map are established by formulating the problem in the context of optimal transport theory. We discuss various means of explicitly parameterizing the map and computing it efficiently through solution of a stochastic optimization problem, exploiting gradient information from the forward model. The resulting algorithm overcomes many of the computational bottlenecks associated with Markov chain Monte Carlo. Advantages of a map-based representation of the posterior include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent posterior samples without additional likelihood evaluations or forward solves. The optimization approach also provides clear convergence criteria for posterior approximation and facilitates model selection through automatic evaluation of the marginal likelihood. We demonstrate the accuracy and efficiency of the approach on nonlinear inverse problems of varying dimension, involving the inference of parameters appearing in ordinary and partial differential equations.
Tarek El Moselhy, Youssef M. Marzouk
Massachusetts Institute of [email protected], [email protected]
MS2
Sample-based UQ via Computational Linear Algebra (Not MCMC)
Sample-based inverse and predictive UQ has proven effective, though MCMC sampling is expensive. Several groups are now developing links between computational linear algebra and sampling algorithms with the aim of reducing the cost of sampling. Currently these links are exact for Gaussian distributions only, but do indicate how the more general case should look.
Colin Fox
University of Otago, New [email protected]
MS2
Toward Extreme-scale Stochastic Inversion
We are interested in inferring earth models from global seismic wave propagation waveforms. This inverse problem is most naturally cast as a large-scale statistical inverse wave propagation problem in the framework of Bayesian inference. The complicating factors are the high-dimensional parameter spaces (due to discretization of infinite-dimensional parameter fields) and very expensive forward problems (in the form of 3D elastic-acoustic coupled wave propagation PDEs). Here we present a so-called stochastic Newton method in which MCMC is accelerated by constructing and sampling from a proposal density that builds a local Gaussian approximation based on local gradient and Hessian (of the log posterior) information. Hessian manipulations (inverse, square root) are made tractable by a low-rank approximation that exploits the compact nature of the data misfit operator. This amounts to a reduced model of the parameter-to-observable map. We apply the method to 3D seismic inverse problems with several million parameters, illustrating the efficacy and scalability of the low-rank approximation.
Tan Bui
University of Texas at [email protected]

Carsten Burstedde
Institut fuer Numerische Simulation
Universitaet [email protected]

Omar Ghattas
University of Texas at [email protected]

James R. Martin
University of Texas at Austin
Institute for Computational Engineering and [email protected]

Georg Stadler
University of Texas at [email protected]

Lucas [email protected]
MS2
Nonparametric Bayesian Inference for Inverse Problems
I will consider the Bayesian approach to inverse problems in the setting where Gaussian random field priors are adopted. I will study random walk type Metropolis algorithms and demonstrate how small modifications in the standard algorithm can lead to an order of magnitude speed up, in terms of the size of the discretization used.
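One well-known modification of this kind (shown here only as an illustrative guess at what is meant, not necessarily the speaker's exact algorithm) is the preconditioned Crank-Nicolson (pCN) proposal, which preserves a Gaussian prior so that the accept ratio involves only the likelihood and does not degenerate as the discretization is refined:

```python
import numpy as np

def pcn_mcmc(log_likelihood, dim, beta=0.3, n_iter=5000, seed=0):
    """Metropolis sampling with the pCN proposal under a N(0, I) prior.

    Proposal: v = sqrt(1 - beta^2) * u + beta * xi, xi ~ N(0, I).
    The proposal preserves the prior, so the acceptance probability
    depends only on the likelihood and stays well behaved as dim
    (the discretization level) grows.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(dim)
    ll = log_likelihood(u)
    chain = np.empty((n_iter, dim))
    accepted = 0
    for i in range(n_iter):
        v = np.sqrt(1.0 - beta ** 2) * u + beta * rng.standard_normal(dim)
        ll_v = log_likelihood(v)
        if np.log(rng.random()) < ll_v - ll:
            u, ll = v, ll_v
            accepted += 1
        chain[i] = u
    return chain, accepted / n_iter

# Toy inverse problem: observe y = u[0] + noise, with prior u ~ N(0, I_50).
y, noise_std = 1.0, 0.5
log_lik = lambda u: -0.5 * ((y - u[0]) / noise_std) ** 2
chain, acc_rate = pcn_mcmc(log_lik, dim=50)
# Conjugate check: the posterior mean of u[0] is y / (1 + noise_std**2) = 0.8.
print(chain[1000:, 0].mean(), acc_rate)
```

A standard random walk proposal would need its step size shrunk as dim grows to keep a reasonable acceptance rate; the pCN step size does not, which is one route to the order-of-magnitude speed-up mentioned above.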
Andrew Stuart
Mathematics Institute, University of [email protected]
MS3
Global Sensitivity Analysis with Correlated Inputs: Numerical Aspects in Distribution-Based Sensitivity Measures
In this work, we address the sensitivity of model output utilizing density-based importance measures and a recently introduced sensitivity measure based on a repeated Kolmogorov-Smirnov test. We offer a comparison of the two statistics which, as we will see, are closely related. We then show that they can be estimated from given samples. We use this fact to study their estimation in the presence of correlations. Numerical results for examples in which the analytical solution is known are discussed.
Emanuele Borgonovo
Bocconi University (Milan)
ELEUSI Research [email protected]
MS3
Sensitivity Indices Based on a Generalized Functional ANOVA
Functional ANOVA allows defining variance-based sensitivity indices of any order in the case of independent inputs. Under a dependence-type condition on the joint input distribution, we derive a generalization of the Hoeffding decomposition, from which we introduce sensitivity indices of any order even for correlated inputs. We also propose an algorithm to compute these indices in the particular case where inputs are correlated two by two.
Clementine Prieur
LJK / Universite Joseph Fourier Grenoble (France)
INRIA Rhone Alpes, [email protected]
Gaelle Chastiang
Project-Team INRIA [email protected]

Fabrice Gamboa
Laboratory of Statistics and Probability
University of [email protected]
MS3
Global Sensitivity Analysis for Systems with Independent and/or Correlated Inputs
Structural and Correlative Sensitivity Analysis (SCSA) provides a new unified framework of global sensitivity analysis for systems with independent and correlated variables, based on a covariance decomposition of the output variance. Three sensitivity indices are defined to give a full description, respectively, of the total, structural and correlative contributions of the inputs upon the output. For independent inputs, these indices reduce to a single index consistent with the variance-based method as a special case of the new treatment.
Herschel Rabitz
Department of Chemistry
Princeton [email protected]

Genyuan Li
Princeton University
genyuan [email protected]
MS3
Variance-based Sensitivity Indices for Models with Correlated Inputs Using Polynomial Chaos Expansions
The variance decomposition techniques used for global sen-sitivity analysis are well established as a means for quan-tifying the uncertainty of the output of a computer modelthat may be attributed to each input parameter. The clas-sical framework only holds for independent input parame-ters. In this paper the covariance decomposition proposedby Rabitz is adapted to polynomial chaos expansions ofthe model response in order to estimate the structural andcorrelative importance of each input parameter.
Bruno Sudret
Universite Paris-Est
Laboratoire [email protected]
Yann Caniou, Alexandre [email protected], [email protected]
MS4
Quantifying and Reducing Uncertainties on a Setof Failure Using Random Set Theory and Kriging
In computer-assisted robust design and safety studies, calculating probabilities of failure based on approximation models of the objective function is a problem of growing interest, and has recently inspired several strategies relying on Kriging (e.g., Stepwise Uncertainty Reduction strategies). An even more challenging problem is to describe the set of all dangerous configurations, i.e. the set of input parameters such that the output of interest exceeds a given threshold T. Mathematically, if f is the response, this set of failure is denoted by G = {x : f(x) ≥ T}. Estimating G under a severely limited budget of evaluations of f is a difficult problem, and a suitable use of Kriging may help solve it in practice. Starting from an initial set of evaluations of f at an initial Design of Experiments, one can indeed calculate coverage probabilities and/or simulate Gaussian Field realizations conditionally on the available information. We propose to use such notions to sum up the current knowledge on the excursion set, and also to define new SUR strategies based on decreasing some measure of variability of the excursion set distribution. We recently revisited classical notions of random set theory to obtain candidate "expectations" for random sets of excursion, as well as candidate quantifications of uncertainty related to the investigated expectation notions. In this talk we focus on the so-called Vorob'ev expectation (for which there is a naturally associated candidate quantification of uncertainty) and derive a new SUR sampling criterion associated to this uncertainty. Implementation issues are discussed and the criterion is illustrated on one- and multi-dimensional examples.
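To make the coverage-probability notion concrete, here is a hedged one-dimensional sketch (toy posterior values assumed; this is not the authors' implementation): with Kriging mean m(x) and standard deviation s(x), the pointwise excursion probability is p(x) = Phi((m(x) - T)/s(x)), and a Vorob'ev-type quantile set {x : p(x) >= alpha} is obtained by tuning alpha so that its measure matches the expected excursion measure:

```python
# Pointwise excursion probability under a Gaussian (Kriging) posterior,
# and a discrete Vorob'ev-style quantile set on a toy 1-D grid.
import math

def excursion_probability(mean, sd, threshold):
    """P(f(x) >= T) at one grid point, Gaussian posterior assumed."""
    return 0.5 * (1.0 + math.erf((mean - threshold) / (sd * math.sqrt(2.0))))

# Hypothetical posterior mean/sd values on a 5-point grid.
grid_means = [0.2, 0.8, 1.3, 1.1, 0.4]
grid_sds = [0.3, 0.3, 0.2, 0.2, 0.3]
T = 1.0

p = [excursion_probability(m, s, T) for m, s in zip(grid_means, grid_sds)]
expected_measure = sum(p) / len(p)   # expected (normalized) excursion measure
# Smallest alpha whose quantile set is no larger than the expected measure.
alpha = min(a / 100 for a in range(1, 100)
            if sum(pi >= a / 100 for pi in p) / len(p) <= expected_measure)
vorobev_set = [i for i, pi in enumerate(p) if pi >= alpha]
```

In practice the grid would be a fine discretization of the input space and the posterior would come from an actual Kriging model; the SUR criterion then targets the variability of this random set, which the sketch does not attempt to reproduce.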
Clement Chevalier
IRSN and [email protected]

David Ginsbourger
Department of Mathematics and Statistics, University [email protected]

Julien Bect
SUPELEC, Gif-sur-Yvette, [email protected]

Ilya Molchanov
Department of Mathematics and Statistics
University of [email protected]
MS4
Sampling Strategy for Stochastic Simulators withHeterogeneous Noise
Kriging-based approximation is a useful tool to approximate the output of a Monte-Carlo simulator. Such a simulator has the particularity that the relation between its accuracy and its computational cost is known. Our objective is to find a sampling strategy optimizing the tradeoff between fidelity and the number of simulations, given a limited computational budget, when the fidelity of the M-C simulator depends on the value of the input space parameter. The presented strategies are illustrated on an industrial example of a neutron transport code.
Loic Le Gratiet
CEA, DAM, [email protected]

Josselin Garnier
Universite Paris [email protected]
MS4
Optimization of Expensive Computer ExperimentsUsing Partially Converged Simulations and Space-time Gaussian Processes
To alleviate the cost of numerical experiments, it is sometimes possible to stop the convergence of a simulation at an early stage and exploit the obtained response, the gain in time coming at the price of a convergence error. We propose to model such data using a GP model in the joint space of design parameters and computational time, and to perform optimization using an EGO-like strategy with the help of a new statistic: the Expected Quantile Improvement.
Victor Picheny
European Center for Research and Advanced Training [email protected]
MS4
Models with Complex Uncertain Inputs
We consider the problem of investigating the consequencesof an event that releases contamination into the atmo-sphere. Simulators for such events depend on atmosphericand meteorological conditions over at least a six-hour timespan. For example, wind strength and direction will be in-fluential in assessing the severity of contamination across aspatial window. The common strategy for simulating theseconditions is to sample observed stretches of data. We con-sider how to generate an informative sample and how tocarry out the subsequent analysis.
David M. Steinberg
Tel Aviv [email protected]
MS5
Approximation and Error Estimation in High Di-mensional Stochastic Collocation Methods on Ar-bitrary Sparse Samples
Stochastic collocation methods are an attractive choice for characterizing uncertainty because of their non-intrusive nature. High-dimensional stochastic spaces can be approximated well for smooth functions with sparse grids, and there has been a focus on extending this approach to non-smooth functions using adaptive sparse grids. This presentation will cover both adaptive approximation methods and error estimation techniques for high-dimensional functions.
Rick Archibald
Computational Mathematics Group
Oak Ridge National [email protected]
MS5
Spatially Varying Stochastic Expansions for Em-bedded Uncertainty Quantification
Existing discretizations for stochastic PDEs, based on a tensor product between the deterministic basis and the stochastic basis, treat the required resolution of uncertainty as uniform across the physical domain. However, solutions to many PDEs of interest exhibit spatially localized features that may result in uncertainty being severely over- or under-resolved by existing discretizations. In this talk, we explore the mechanics and accuracy of using a spatially varying stochastic expansion. This is achieved
through an adaptive refinement algorithm where simple er-ror estimates are used to independently drive refinement ofthe stochastic basis at each point in the physical domain.Results will be presented comparing the performance ofthis method with other embedded UQ techniques.
Eric C. Cyr
Scalable Algorithms Department
Sandia National [email protected]
MS5
Simultaneous Local and Dimension AdaptiveSparse Grids for Uncertainty Quantification
Polynomial methods, such as Polynomial Chaos, are often used to construct surrogates of expensive numerical models. When the response surface is smooth, these techniques accurately reproduce the model variance. However, when discontinuities are present or local metrics, such as probability of failure, are of interest, the utility of these methods decreases. Here we propose a greedy, locally and dimension adaptive sparse grid interpolation method which restricts function evaluations to dimensions and regions of interest. The adaptivity is guided by a weighted local interpolation error estimate. The proposed method is highly efficient when only a sub-region of a high-dimensional input space is important.
John D. Jakeman
Sandia National [email protected]

Stephen G. Roberts
Australian National [email protected]
MS5
A Posteriori Error Analysis and Adaptive Samplingfor Probabilities using Surrogate Models
Surrogate methods have grown in popularity in recent years for stochastic differential equations due to their high-order convergence rates for moments of the solution. Oftentimes, however, the objective is to compute probabilities of certain events, which is performed by sampling the surrogate model. Unfortunately, each of these samples contains discretization error, which affects reliability. We apply adjoint techniques to quantify the accuracy of statistical quantities by estimating the error in each sample, and we seek to improve the surrogate model by driving adaptive refinement.
Tim Wildey
Sandia National [email protected]

Troy Butler
Institute for Computational Engineering and Sciences
University of Texas at [email protected]
MS6
Heat Kernel Smoothing on an Arbitrary ManifoldUsing the Laplace-Beltrami Eigenfunctions
We present a novel kernel smoothing framework on curvedsurfaces using the Laplace-Beltrami eigenfunctions. The
Green’s function of an isotropic diffusion equation is ana-lytically represented using the Laplace-Beltrami eigenfunc-tions. The Green’s function is then used in explicitly con-structing heat kernel smoothing as a series expansion. Ouranalytic approach offers many advantages over previous fil-tering methods that directly solve diffusion equations usingFEM. The method is illustrated with various surfaces ob-tained from medical images.
Moo Chung
Department of Biostatistics and Medical Informatics
University of [email protected]
MS6
Sparse Grids for Regression
We consider the sparse grid combination technique for re-gression, which we regard as a problem of function re-construction in some given function space. We use aregularised least squares approach, discretised by sparsegrids and solved using the so-called combination technique,where a certain sequence of conventional grids is employed.The sparse grid solution is then obtained by the addition ofthese partial solutions with certain predefined combinationcoefficients. We show why the original combination tech-nique is not guaranteed to converge with higher discreti-sation levels in this setting, but that a so-called optimisedcombination technique can be used instead.
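For orientation, the classical combination technique referred to above assigns to each component grid u_l, with level multi-index l, a signed binomial coefficient; a minimal sketch of the standard formula (not the optimised variant the talk advocates) is:

```python
# Standard sparse grid combination technique coefficients:
#   u_n = sum_{q=0}^{d-1} (-1)^q * C(d-1, q) * sum_{|l| = n - q} u_l
# This computes the coefficient attached to each level multi-index l.
from itertools import product
from math import comb

def combination_coefficients(d, n):
    """Map each multi-index l with |l| in {n-d+1, ..., n} to its coefficient."""
    coeffs = {}
    for q in range(d):
        c = (-1) ** q * comb(d - 1, q)
        for l in product(range(n + 1), repeat=d):
            if sum(l) == n - q:
                coeffs[l] = coeffs.get(l, 0) + c
    return coeffs

# In 2-D at level 3: grids on the diagonal |l| = 3 get +1, |l| = 2 get -1.
coeffs = combination_coefficients(2, 3)
```

The coefficients sum to one, so constant functions are reproduced exactly; the optimised combination technique replaces these fixed binomial weights with coefficients chosen by solving a small least-squares problem.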
Jochen Garcke
University of [email protected]
MS6
Second-Order Comparison of Random Functions
Given two samples of random square-integrable functions, we consider the problem of testing whether they share the same second-order structure. Our study is motivated by the problem of determining whether the mechanical properties of short DNA loops are significantly affected by their base-pair sequence. The testing problem is seen to involve aspects of ill-posed inverse problems, and a test based on a Karhunen-Loève approximation of the Hilbert-Schmidt distance of the empirical covariance operators is proposed and investigated. More generally, we develop the notion of a dispersion operator and construct a family of tests. Our procedures are applied to a sample of DNA minicircles obtained via the electron microscope.
Victor M. Panaretos
Section de [email protected]
MS6
Spatial Spline Regression Models
We deal with the problem of surface estimation over irregularly shaped domains, featuring complex boundaries, strong concavities and interior holes. Adopting an approach typical of Functional Data Analysis, we propose a Spatial Spline Regression model that efficiently handles this problem; the model also accounts for covariate information, can comply with general conditions at the domain boundary, and has the capacity to incorporate a priori information about the surface shape.
Laura M. Sangalli
MOX Department of Mathematics
Politecnico di Milano, [email protected]

Laura Azzimonti
MOX Department of Mathematics
Politecnico di Milano, [email protected]

James O. Ramsay
McGill University, Montreal, [email protected]

Piercesare Secchi
MOX - Department of Mathematics
Politecnico di Milano, [email protected]
MS7
Global Sensitivity and Reduction of Very High-Dimensional Models with Correlated Inputs
Models originating from observed data or from computer code can be very high-dimensional. The total contribution from model input X_i to the variance of model output Y is defined as V_Ti = Var(Y) − Var[E(Y | X_−i)], where X_−i corresponds to all inputs except X_i. We develop an efficient algorithm to compute V_Ti for models with thousands of possibly correlated inputs with arbitrary distributions. The number of required model samples is independent of the number of inputs. The resulting global sensitivities V_Ti/Var(Y) can be used to create a reduced model for fast output prediction and uncertainty quantification. The developed algorithms are implemented in our software GoSUM (Global Optimizations, Sensitivity and Uncertainty in Models).
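For independent inputs, V_Ti can be estimated by the standard Jansen pick-freeze formula; the sketch below is illustrative only (toy model assumed) and is not the GoSUM algorithm, which also handles correlated inputs:

```python
# Jansen estimator of the total sensitivity contribution of X1:
#   V_T1 ~ (1 / 2N) * sum (f(A_j) - f(A_j with X1 re-drawn))^2
# for the hypothetical model Y = X1^2 + X2 with independent U(0,1) inputs.
import random
import statistics

def model(x1, x2):
    return x1 ** 2 + x2

random.seed(1)
n = 100_000
a = [(random.random(), random.random()) for _ in range(n)]
x1_fresh = [random.random() for _ in range(n)]

y = [model(x1, x2) for x1, x2 in a]
y_re1 = [model(x1f, x2) for (x1, x2), x1f in zip(a, x1_fresh)]  # only X1 re-drawn

var_y = statistics.pvariance(y)
vt1 = statistics.fmean((u - v) ** 2 for u, v in zip(y, y_re1)) / 2
st1 = vt1 / var_y   # analytically (4/45) / (31/180) = 16/31 for this model
```

Note the number of model runs here grows with the number of inputs (one re-drawn column per input), which is exactly the scaling the abstract's algorithm avoids.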
Vladimir Fonoberov
Chief Scientist
AIMdyn, [email protected]

Igor Mezic
University of California, Santa [email protected]
MS7
Uncertainty Quantification in Energy EfficientBuilding Retrofit Design
One area of engineering where uncertainty quantification and mitigation is not only desired but actually necessary is energy efficient building retrofit design. Buildings are complex systems with a large number of parameters that are often not properly measured and recorded (e.g. equipment efficiencies) or that have inherently large uncertainties associated with them (e.g. occupancy patterns). In this talk we discuss mathematical methods and technical challenges involved with UQ for retrofit design.
Slaven Peles
United Technologies Research [email protected]
MS7
Iterative Methods for Propagating Uncertaintythrough Networks of Switching Dynamical Systems
Networks of switching dynamical systems arise naturally ina vast variety of applications. These systems can be chal-lenging for analysis due to non-smooth dynamics. Here, wedevelop methods to perform uncertainty quantification onswitching systems using polynomial chaos based on waveletexpansions. Since the number of equations required scalesexponentially, we develop novel iterative methods to simu-late networks of switching dynamical systems. We demon-strate these methods on a large network of switching oscil-lators.
Tuhin Sahai
United [email protected]
MS7
NIPC: A MATLAB GUI for Wider Usability ofNonintrusive Polynomial Chaos
Polynomial Chaos theory is a sophisticated method for fast stochastic analysis of dynamic systems with few random parameters. Classically, all system laws are reformulated into deterministic equations, but recently a nonintrusive variation has been developed for application with black boxes. After a brief introduction of this concept, we present a tool which makes the numerous adjustments, and the understanding of the advanced mathematical roots of Polynomial Chaos, unnecessary for the user.
Kanali Togawa
E.ON Energy Research Center - RWTH Aachen [email protected]

Antonello Monti
E.ON Energy Research Center - RWTH [email protected]
MS8
Speeding up the Sensitivity Analysis of an Epi-demiological Model
This presentation describes a probabilistic sensitivity anal-ysis of an epidemiological simulator of rotavirus. We usedboth advanced input screening techniques and emulator-based methods to make the process more efficient and fo-cussed on the relevant regions of the input space. The keyto successfully applying these methods was the quantifi-cation of expert uncertainty about the input parameters.The methods for expert elicitation will be described andtheir use in the sensitivity analysis will be illustrated.
John Paul Gosling
University of [email protected]
MS8
Sensitivity Analysis of a Global Aerosol Model toQuantify the Effect of Uncertain Model Parameters
We present a sensitivity analysis of a global aerosol modelusing Gaussian process emulation. We present results for
the sensitivity of global cloud condensation nuclei to 37 un-certain process parameters and emissions. We show globalmaps of the total uncertainty in several aerosol quantitiesas well as a breakdown of the main factors controlling theuncertainty. These results provide a very clear guide forthe reduction in model uncertainty through targeted modeldevelopment.
Lindsay Lee, Kenneth Carslaw, Kirsty Pringle
University of [email protected], [email protected], [email protected]
MS8
Sensitivity Analysis for Complex Computer Models
We discuss the problem of quantifying the importance of uncertain input parameters in a computer model: which input parameters should we learn about to best reduce output uncertainty? Two standard tools are reviewed: variance-based sensitivity analysis, and decision-theoretic measures based on the value of information. For computationally expensive models, computational methods are presented for calculating these measures using Gaussian-process surrogate models. These tools will be applied in the remaining three talks.
Jeremy Oakley
University of [email protected]
MS8
Managing Model Uncertainty in Health EconomicDecision Models
Health economic models predict the costs and health ef-fects associated with competing decision options (e.g. rec-ommend drug X versus Y). Current practice is to quantifyinput uncertainty, but to ignore uncertainty due to defi-ciencies in model structure. Given a relatively simple butimperfect model, is there value in incorporating additionalcomplexity to better describe the decision problem? Wederive the ‘expected value of model improvement’ via a de-cision theoretic sensitivity analysis of model discrepancy.
Mark Strong, Jeremy Oakley
University of [email protected], [email protected]
MS9
Hybrid UQ Method for Multi-Species ReactiveTransport
Uncertainty quantification (UQ) for multi-physics applications is challenging for both intrusive and non-intrusive methods. A question naturally arises: can we take the best of both worlds to introduce a 'hybrid' UQ methodology for multi-physics applications? This minisymposium brings together researchers who are exploring hybrid UQ methods. In particular, we emphasize techniques that propagate global uncertainties yet allow individual physics components to use their local intrusive or non-intrusive UQ methods. Relevant issues for discussion are: hybrid uncertainty propagation algorithms, dimension reduction, Bayesian data fusion techniques, design of hybrid computational frameworks, and scalable algorithms for hybrid UQ.
Xiao Chen
Lawrence Livermore National Laboratory
Center for Applied Scientific [email protected]
Brenda Ng, Yunwei Sun, Charles Tong
Lawrence Livermore National [email protected], [email protected], [email protected]
MS9
Preconditioners/Multigrid for Stochastic Polyno-mial Chaos Formulations of the Diffusion Equation
This talk presents (parallel) preconditioners and multigrid solvers for linear systems of equations arising from stochastic polynomial chaos (PC) formulations of the diffusion equation with random coefficients. These preconditioners/solvers are an extension of the preconditioner developed in [?] for strongly coupled systems of elliptic partial differential equations that are norm-equivalent to systems that can be factored into an algebraic coupling component and a diagonal differential component, as is the case for the systems produced by PC formulations of the diffusion equation.
Barry Lee
Pacific Northwest National [email protected]
MS9
A High Performance Hybrid PCE Framework for High Dimensional UQ Problems
In the domain of stochastic multi-physics problems, traditional non-intrusive and intrusive UQ methods are expensive and preclude a much-needed modular plug-and-play solver infrastructure. We present a hybrid UQ methodology to tackle these challenges. A conditional polynomial representation of the stochastic solution vector is used to track local and global uncertainty propagation between modules. The framework's parallel efficiency and spectral accuracy are shown using a multi-species reaction-flow problem modeled as a nonlinear PDE system.
Akshay Mittal
Department of Mathematics
Stanford [email protected]
MS9
Stochastic Dimension Reduction Techniques forUncertainty Quantification of Multiphysics Sys-tems
Uncertainty quantification of multiphysics systems presents numerous mathematical and computational challenges. Indeed, uncertainties that arise in each physics of a fully coupled system must be captured throughout the whole system, leading to the so-called curse of dimensionality. We present techniques for mitigating the curse of dimensionality in network-coupled multiphysics systems by using the structure of the network to transform uncertainty representations as they pass between components. Examples from the simulation of nuclear power plants will be discussed.
Eric Phipps
Sandia National Laboratories
Optimization and Uncertainty Quantification [email protected]

Maarten Arnst
Universite de [email protected]

Paul Constantine
Sandia National [email protected]

Roger Ghanem
University of Southern California
Aerospace and Mechanical Engineering and [email protected]

John Red-Horse
Validation and Uncertainty Quantification Processes
Sandia National [email protected]

Tim Wildey
Sandia National [email protected]
MS10
First Order k-th Moment Finite Element Analysisof Nonlinear Operator Equations with StochasticData
We develop and analyze a class of efficient algorithms foruncertainty quantification of nonlinear operator equations.The algorithms are based on sparse Galerkin discretiza-tions of tensorized linearizations at nominal parameters.Specifically, we analyze a class of abstract nonlinear, para-metric operator equations
J(α, u) = 0
for random parameters α with realizations in a neighbor-hood of a nominal parameter α0. Under some structuralassumptions on the parameter dependence, random param-eters α(ω) = α0+r(ω) are shown to imply a unique randomsolution u(ω) = S(α(ω)). We derive an operator equationfor the deterministic computation of k-th order statisticalmoments of the solution fluctuations u(ω) − S(α0), pro-vided that statistical moments of the random parameterperturbation r(ω) are known. This formulation is basedon the linearization in a neighborhood of a nominal pa-rameter α0. We provide precise bounds for the lineariza-tion error by means of a Newton-Kantorovich-type theo-rem. We present a sparse tensor Galerkin discretization forthe tensorized first order perturbation equation and deducethat sparse tensor Galerkin discretizations of this equationconverge in accuracy vs. complexity which equals, up tologarithmic terms, that of the Galerkin discretization of asingle instance of the mean field problem.
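Schematically, the first-order perturbation step described above reads as follows (notation as in the abstract; this display is a standard consequence of the linearization, stated here for orientation rather than quoted from the talk):

```latex
% Linearization at the nominal parameter \alpha_0, with u_0 = S(\alpha_0):
D_u J(\alpha_0, u_0)\,\delta u(\omega) = -\,D_\alpha J(\alpha_0, u_0)\, r(\omega),
\qquad u(\omega) - S(\alpha_0) \approx \delta u(\omega).
% Tensorizing and taking expectations gives a deterministic equation for
% the k-th statistical moment of the fluctuation:
\bigl(D_u J(\alpha_0, u_0)\bigr)^{\otimes k}\,
\mathbb{E}\bigl[(\delta u)^{\otimes k}\bigr]
= \bigl(-\,D_\alpha J(\alpha_0, u_0)\bigr)^{\otimes k}\,
\mathbb{E}\bigl[r^{\otimes k}\bigr].
```

The right-hand side involves only the k-th moment of the input perturbation r(ω), which is why knowledge of the input statistics suffices for the deterministic moment computation.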
Alexey Chernov
University of [email protected]

Christoph Schwab
ETH [email protected]
MS10
An Adaptive Iterative Method for High-dimensional Stochastic PDEs
A new approach for constructing low-rank solutions tostochastic partial differential equations is presented. Ouralgorithm exploits the notion that the stochastic solutionand functionals thereof may live on low-dimensional man-ifolds, even in problems with high-dimensional stochasticinputs. We use residuals of the stochastic PDE to sequen-tially and adaptively build bases for both the random anddeterministic components of the solution. By tracking thenorm of the residual, meaningful error measures are pro-vided to determine the quality of the solution. The ap-proach is demonstrated on Maxwell’s equations with un-certain boundaries and on linear/nonlinear diffusion equa-tions with log-normal coefficients, characterized by severalhundred input dimensions. In these problems, we observeorder-of-magnitude reductions in computational time andmemory compared to many state-of-the-art intrusive andnon-intrusive methods.
Tarek El Moselhy, Youssef M. Marzouk
Massachusetts Institute of [email protected], [email protected]
MS10
Adjoint Enhancement Within Global StochasticMethods
This presentation will explore the use of local adjoint sen-sitivities within global stochastic approximation methodsfor the purposes of improving the scalability of uncer-tainty quantification methods with respect to random di-mension. Methodologies of interest include generalizedpolynomial chaos, based on regression over unstructuredgrids; stochastic collocation, based on gradient-enhancedinterpolation polynomials on structured grids; and globalreliability methods, using gradient-enhanced kriging onunstructured grids. Efficacy of these gradient-enhancedapproaches will be assessed based on relative efficiency(speed-up with adjoint gradient inclusion) and numericalrobustness (ill-conditioning of solution or basis generation).
Michael S. Eldred
Sandia National Laboratories
Optimization and Uncertainty Quantification [email protected]

Eric Phipps
Sandia National Laboratories
Optimization and Uncertainty Quantification [email protected]

Keith Dalbey
Sandia National [email protected]
MS10
Stochastic Collocation Methods in UnstructuredGrids
We present a framework for stochastic collocation methods on arbitrary grids. The method utilizes a newly developed least orthogonal interpolation theory to construct high-order polynomial approximations of the stochastic solution, based on the collocation results of any type of grid in arbitrary dimensions. We present both the mathematical theory and the numerical implementation. Examples are provided to demonstrate the flexibility of the methods.
Akil Narayan, Dongbin Xiu
Purdue [email protected], [email protected]
MS11
A Dynamic Programming Approach to Sequentialand Nonlinear Bayesian Experimental Design
We present a framework for optimal sequential Bayesianexperimental design, using information theoretic objec-tives. In particular, we adopt a “closed-loop” approachthat uses the results of intermediate experiments to guidesubsequent experiments in a finite horizon setting. Directsolution of the resulting dynamic programming problem iscomputationally impractical. Using polynomial chaos sur-rogates, stochastic optimization methods, and value func-tion approximations, we develop numerical tools to identifysequences of experiments that are optimal for parameterinference, in a computationally feasible manner.
Xun Huan, Youssef M. Marzouk
Massachusetts Institute of [email protected], [email protected]
MS11
Identification of Polynomial Chaos Representationsin High Dimension
The usual identification methods for polynomial chaos expansions in high dimension are based on a series of truncations that induce numerical bias. We first quantify the detrimental influence of this numerical bias; we then propose a new decomposition of the polynomial chaos coefficients that allows performing relevant convergence analysis and identification with respect to an arbitrary measure in the high-dimensional case.
Guillaume Perrin
Universite Paris-Est
Navier (UMR 8205 ENPC-IFSTTAR-CNRS)
[email protected]

Christian Soize
University of Paris-Est
MSME UMR 8208 [email protected]

Denis Duhamel
University of Paris-Est
Navier (UMR 8205 ENPC-IFSTTAR-CNRS)
[email protected]

Christine Funfschilling
SNCF
Innovation and Research [email protected]
MS11
Inverse Problems for Nonlinear Systems viaBayesian Parameter Identification
Inverse problems are important in engineering science and appear very frequently, for example in the estimation of material porosity and of properties describing irreversible behaviour, control of high-speed machining processes, etc. At present, most identification procedures have to cope with ill-posedness, as the problems are often considered in a deterministic framework. However, if the parameters are modelled as random variables, the process of obtaining more information through experiments in a Bayesian setting becomes well-posed. In this manner the Bayesian information update can be seen as a minimisation of variance. In this work we use the functional approximation of uncertainty and develop a purely deterministic procedure for the updating process. This is then contrasted with a fully Bayesian update based on Markov chain Monte Carlo sampling on a few numerical nonlinear examples. The examples are based on plasticity and nonlinear diffusion models.
Bojana V. Rosic
Institute of Scientific Computing
TU [email protected]

Oliver Pajonk
Institute of Scientific Computing
Technische Universitat [email protected]

Alexander Litvinenko
TU Braunschweig, [email protected]

Hermann G. Matthies
Institute of Scientific Computing
Technische Universitat [email protected]

Anna Kucerova
Faculty of Civil Engineering, Prague, Czech [email protected]

Jan Sykora
Faculty of Civil Engineering
Czech Technical University in [email protected]
MS11
Data Analysis Methods for Inverse Problems
We present exploratory data analysis methods to assess inversion estimates, using examples based on l2- and l1-regularization. These methods can be used to reveal the presence of systematic errors and to validate modeling assumptions. The methods include: confidence intervals and bounds for the bias, resampling methods for model validation, and construction of training sets of functions with controlled local regularity.
Luis Tenorio
Colorado School of [email protected]
MS12
Analysis and Finite Element Approximations ofOptimal Control Problems for Stochastic PDEs
In this talk, we consider, mathematically and computationally, optimal control problems for stochastic partial differential equations. The control objective is to minimize the expectation of a cost functional, and the control is of the deterministic distributed and/or boundary value type. The main analytical tool is the Karhunen-Loeve (K-L) expansion. Mathematically, we prove the existence of an optimal solution and derive the optimality system. Computationally, we approximate the optimality system through discretizations of the probability space and the spatial space by the finite element method; we also derive error estimates in terms of both types of discretizations. Some numerical experiments are given.
Hyung-Chun Lee
Ajou University
Department of [email protected]
MS12
Generalized Methodology for Inverse ModelingConstrained by SPDEs
We formulate several stochastic inverse problems to esti-mate statistical moments (mean value, variance, covari-ance) or even the whole probability distribution of inputdata, given the PDF of some response (quantities of phys-ical interest) of a system of SPDEs. The identification ob-jectives minimize the expectation of a tracking cost func-tional or minimize the difference of desired statistical quan-tities in the appropriate norms. Numerical examples illus-trate the theoretical results and demonstrate the distinc-tions between the various objectives.
Catalin S. Trenchea
Department of Mathematics
University of [email protected]

Max Gunzburger
Florida State University
School for Computational [email protected]

Clayton G. Webster
Oak Ridge National [email protected]
MS12
Stochastic Inverse Problems via ProbabilisticGraphical Model Techniques
We build a graphical model for learning the probabilistic relationship between the input and output of a multiscale problem (e.g. crystal plasticity based deformation). The graphical model (the parameters in its potential functions) is first learned using a set of training data and suitable optimization strategies (e.g. MLE, EM, etc.). After that, by applying a belief propagation algorithm, we can solve the coarse-to-fine-scale inverse problem (e.g. find the initial microstructure given desired properties) or the forward problem (e.g. find the properties resulting from a given initial microstructure).
Nicholas Zabaras, Peng Chen
Cornell [email protected], [email protected]
MS12
Least-Squares Estimation of Distributed RandomDiffusion Coefficients
We formulate a stochastic inverse problem, estimating the random diffusion coefficient in an elliptic PDE, as a least squares problem in the space of functions with bounded mixed derivatives. We prove existence and define the necessary optimality conditions using techniques from optimization theory. In addition, the regularization of the problem connects the infinite-dimensional problem with its finite noise discretization. Numerical results that validate our theory will be presented.
Hans-Werner van Wyk
Virginia [email protected]

Jeff Borggaard
Virginia Tech
Department of [email protected]
MS13
Challenges on Incorporating Uncertainty in Com-putational Model Predictions
Bayes’ theorem allows for the treatment of uncertaintiesin the whole model prediction process. It can be usedfor model calibration (either batch or real-time), rankingof scientific hypotheses formulated as mathematical mod-els, ranking of data sets according to information content,and averaging of predictions made by competing models.We discuss the uncertainty quantification research we havebeen pursuing in the context of realistic models, includ-ing sampling with MCMC algorithms and emulation withGaussian processes.
Ernesto E. Prudencio
Institute for Computational Engineering and [email protected]

Gabriel A. Terejanu
Institute for Computational Engineering and Sciences
University of Texas at [email protected]
MS13
A Stochastic Collocation Approach to Constrained Optimization for Random Data Estimation Problems
We present a scalable, parallel mechanism for identification of statistical moments of input random data, given the probability distribution of some response of a system of stochastic partial differential equations. To characterize data with moderately large amounts of uncertainty, we introduce a stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse grid stochastic collocation FEM. Rigorously derived error estimates are used to compare the efficiency of the method with other techniques.
Max Gunzburger
Florida State University
School for Computational Science
[email protected]

Catalin S. Trenchea
Department of Mathematics
University of Pittsburgh
[email protected]

Clayton G. Webster
Oak Ridge National Laboratory
[email protected]
MS13
A Randomized Iterative Proper Orthogonal Decomposition (RI-POD) Technique for Model Identification
In this talk, we shall present a data-based technique for the identification of the eigenmodes of very high dimensional linear systems. Our technique is based on the snapshot proper orthogonal decomposition (POD) method. We show that using the snapshot POD with randomized forcing or initial conditions, in an iterative fashion, we can extract as many of the eigenmodes of the linear system as required. The technique works for symmetric as well as non-symmetric systems.
Dan Yu
Texas A&M University
[email protected]

Suman Chakravorty
Texas A&M University
College Station, TX
[email protected]
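As a toy illustration of the snapshot-POD idea in this abstract, the sketch below excites a small symmetric linear system with a randomized initial condition, collects snapshots, and recovers the dominant eigenmode from their SVD. The system, sizes, and iteration count are illustrative assumptions, not the authors' RI-POD implementation.

```python
import numpy as np

# Sketch: snapshot POD with a randomized initial condition recovers the
# dominant eigenmode of a linear system x_{k+1} = A x_k (toy setup).
rng = np.random.default_rng(0)

# Symmetric system with known eigenvectors (columns of Q).
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(np.linspace(0.1, 0.95, 50)) @ Q.T

# Collect normalized snapshots starting from a random initial condition.
x = rng.standard_normal(50)
snapshots = []
for _ in range(500):
    x = A @ x
    snapshots.append(x / np.linalg.norm(x))
U, s, _ = np.linalg.svd(np.column_stack(snapshots), full_matrices=False)

# Compare the leading POD mode with the eigenvector of eigenvalue 0.95.
alignment = abs(U[:, 0] @ Q[:, -1])
print("alignment with dominant eigenmode:", alignment)
```

Repeating the procedure with fresh random initial conditions, after deflating modes already found, yields further eigenmodes in turn.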
MS13
Sparse Bayesian Techniques for Surrogate Creation and Uncertainty Quantification
We develop a multi-output, Bayesian, generalized linear regression model with arbitrary basis functions. The model automatically excludes redundant basis functions while providing access to the full Bayesian predictive distribution of the statistics. By employing a sequentially built tree structure, we are able to demonstrate that it is capable of identifying discontinuities in the response surface. We demonstrate our claims numerically using various basis functions that vary from localized kernels to numerically built generalized Polynomial Chaos bases.
Nicholas Zabaras
Cornell University
[email protected]

Ilias Bilionis
Center for Applied Mathematics
Cornell University, Ithaca, NY
[email protected]
MS14
Application of Goal-Oriented Error Estimation Methods to Statistical Quantities of Interest
The objective here is to present an extension of goal-oriented methods for a posteriori error estimation to the case of stochastic problems with random parameters. We shall consider the stochastic collocation method, for which estimates of statistical moments of the error in a quantity of interest have been proposed in the past, but will focus on estimating the error in these moments themselves and on accurately representing the posterior distribution of the quantity of interest.
Corey Bryant
The University of Texas at Austin
Institute for Computational Engineering and Sciences
[email protected]

Serge Prudhomme
ICES
The University of Texas at Austin
[email protected]
MS14
Estimating and Bounding Errors in Distributions Propagated via Surrogate Models
We consider the problem of propagating distributions through quantities of interest to differential equations with parameters modeled as random processes. Polynomial chaos expansions approximate random processes, and the differential equations are solved numerically. We estimate discretization error, bound truncation error, and consider the effect of finite sampling in the computed distribution functions for both forward and inverse problems. The results are computable error bounds for distribution functions that we demonstrate with numerical examples.
Troy Butler, Clint Dawson
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected], [email protected]

Tim Wildey
Sandia National Laboratories
[email protected]
MS14
An Adjoint Error Estimation Technique Using Finite Volume Methods for Hyperbolic Equations
Many codes exist for hyperbolic equations that employ finite volume methods, which are often nonlinear. A method is presented for adjoint-based a posteriori error calculations with finite volume methods. Conditions are derived under which the error in a quantity of interest may be recovered with guaranteed asymptotic accuracy, including for nonlinear forward solvers and weak solutions. We demonstrate the flexibility in implementation of this post-processing technique and its accuracy with smooth and discontinuous solutions.
Jeffrey M. Connors
Lawrence Livermore National Laboratory
Center for Applied Scientific Computing
[email protected]

Jeffrey W. Banks
Lawrence Livermore National Laboratory
[email protected]

Jeffrey A. Hittinger
Center for Applied Scientific Computing
Lawrence Livermore National Laboratory
[email protected]

Carol S. Woodward
Lawrence Livermore National Laboratory
[email protected]
MS15
Computation of Posterior Distributions in Ordinary Differential Equations with Transport Partial Differential Equations
We consider the estimation of the initial conditions of an Ordinary Differential Equation from a noisy trajectory in a Bayesian setting. The posterior distribution is computed by exploiting the fact that it is transported by the vector field defining the ODE. As a consequence, one can design an iterative computation based on the transport of probability. This transported probability is the solution of a PDE, which is solved by an Eulerian-Lagrangian scheme with bi-orthogonal wavelets.
Nicolas J-B Brunel, Vincent Torri
Universite d'Evry
[email protected], [email protected]
MS15
Large-scale Seismic Inversion: Elastic-acoustic Coupling, DG Discretization, and Uncertainty Quantification
We present our recent efforts on large-scale global seismic inversion. We begin with a unified framework for elastic-acoustic coupling together with an hp-nonconforming discontinuous Galerkin (DG) spectral element method. Next, consistency, stability, and hp-convergence of the DG method, as well as its scalability up to 224k cores, are presented. Finally, a method for uncertainty quantification based on the compactness of the data misfit is presented, followed by a brief discussion on variance-driven adaptivity.
Tan Bui
University of Texas at Austin
[email protected]

Carsten Burstedde
Institut fuer Numerische Simulation
Universitaet Bonn
[email protected]

Omar Ghattas, Georg Stadler
University of Texas at Austin
[email protected], [email protected]

James R. Martin
University of Texas at Austin
Institute for Computational Engineering and Sciences
[email protected]

Lucas Wilcox
University of Texas at Austin
[email protected]
MS15
Nonparametric Estimation of Diffusions: A Differential Equations Approach
We consider estimation of scalar functions that determine the dynamics of diffusion processes. It has been recently shown that nonparametric maximum likelihood estimation is ill-posed in this context. We adopt a probabilistic approach to regularize the problem by the adoption of a prior distribution for the unknown functional. A Gaussian prior measure is chosen in the function space by specifying its precision operator as an appropriate differential operator. We establish that a Bayesian Gaussian conjugate analysis for the drift of one-dimensional non-linear diffusions is feasible using high-frequency data, by expressing the log-likelihood as a quadratic function of the drift, with sufficient statistics given by the local time process and the end points of the observed path. Computationally efficient posterior inference is carried out using a finite element method. We embed this technology in partially observed situations and adopt a data augmentation approach whereby we iteratively generate missing data paths and draws from the unknown functional. Our methodology is applied to estimate the drift of models used in molecular dynamics and financial econometrics using high- and low-frequency observations. We discuss extensions to other partially observed schemes and connections to other types of nonparametric inference.
Omiros Papaspiliopoulos
Department of Economics
Universitat Pompeu Fabra
[email protected]

Yvo Pokern
Department of Statistical Science, UCL
[email protected]

Gareth O. Roberts
Warwick University
[email protected]

Andrew M. Stuart
University of Warwick
[email protected]
MS15
A Statistical Approach to Data Assimilation for Hemodynamics
We discuss a Data Assimilation (DA) technique for hemodynamics based on a Bayesian approach. DA is formulated as a control problem minimizing a weighted misfit between data and velocity under the constraint of the Navier-Stokes equations. The control variable is the probability density function of the normal stress at the inflow; we derive statistical estimators (maximum a posteriori and maximum likelihood) and confidence intervals for the velocity. We compare statistical and deterministic estimators.
Alessandro Veneziani
MathCS, Emory University, Atlanta, GA
[email protected]

Marta D'Elia
Department of Mathematics and Computer Science
Emory University, Atlanta, GA
[email protected]
MS16
Toward Improving Statistical Components of Multi-scale Simulation Schemes
I will describe an ongoing collaboration with Sorin Mitran and a group of statisticians to develop more sophisticated approaches to handling some issues concerning the representation of, and communication between, kinetic mesoscale PDFs and stochastically simulated microscale data in Mitran's novel time-parallel Continuum-Kinetic-Molecular (tpCKM) method. The mesoscale component of this method can be viewed as an emulator for the microscale dynamics, and our research regards offline and online control of its quality.
Peter R. Kramer
Rensselaer Polytechnic Institute
Department of Mathematical Sciences
[email protected]

Sorin Mitran
University of North Carolina Chapel Hill
[email protected]

Sunli Tang
Rensselaer Polytechnic Institute
Department of Mathematical Sciences
[email protected]

M.J. Bayarri
University of Valencia
[email protected]

James Berger
Duke University
[email protected]

Murali Haran
Pennsylvania State University
Department of Statistics
[email protected]

Hans Rudolf Kuensch
ETH Zurich
Department of Mathematics
[email protected]
MS16
Improving Gaussian Stochastic Process Emulators
Gaussian Stochastic Process (GASP) emulators are fast surrogates to the output of computationally expensive simulators of complex physical processes used in many scientific applications. They have many advantages, but some improvements are still needed. The talk will present several methods in a Bayesian context for improving the implementation of GASP emulators under computational constraints (including the use of large data sets) and obtaining statistical approximations to the computer model that are closer to reality.
Danilo [email protected]
MS16
Emulating Dynamic Models: An Overview of Suggested Approaches
Dynamic models are characterized by a large number of outputs in the time domain and by a strong dependence of outputs conditional on recent past outputs. This makes it difficult to apply standard emulation techniques to such models. Several approaches have been suggested to overcome these problems. The talk will give an overview of these concepts, try to analyze which approach may be most promising under which circumstances, and identify needs for further research.
Peter [email protected]
MS16
Statistical Approximation of Computer Model Output aka Emulation of Simulation
Computer model data can be used to construct a predictor of the computer model output at untried inputs. Such construction, combined with assessment of prediction uncertainty, is an emulator or surrogate of the computer model. A key strategy uses the Bayes paradigm to produce predictors and quantify uncertainties. An overview of this strategy, which uses Gaussian stochastic processes as priors, will include comments on connections with approaches such as polynomial chaos.
Jerry Sacks
National Institute of Statistical Sciences
[email protected]
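A minimal sketch of the Gaussian-process emulation strategy described in this abstract: condition a GP prior on a handful of simulator runs and predict the output at an untried input, with a prediction variance. The kernel, design, and stand-in "simulator" below are hypothetical choices for illustration.

```python
import numpy as np

# GP emulator sketch: prior with squared-exponential covariance,
# conditioned on a few training runs of an "expensive" code.

def sq_exp_kernel(x, y, ell=1.0):
    """Squared-exponential covariance between 1-D input arrays."""
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

simulator = np.sin                        # stand-in for an expensive code
X = np.linspace(0.0, np.pi, 7)            # design points (training runs)
y = simulator(X)

x_star = np.array([0.4])                  # untried input
K = sq_exp_kernel(X, X) + 1e-8 * np.eye(len(X))   # jitter for stability
k_star = sq_exp_kernel(x_star, X)

mean = k_star @ np.linalg.solve(K, y)                        # predictor
var = sq_exp_kernel(x_star, x_star) - k_star @ np.linalg.solve(K, k_star.T)
print("prediction:", mean[0], "truth:", np.sin(0.4))
```

The predictive variance is what distinguishes an emulator from a plain interpolant: it quantifies the uncertainty of predicting at inputs where the code was never run.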
MS17
Adaptive Sparse Grids for Stochastic Collocation on Hybrid Architectures
We discuss the algorithmic developments required to build a scalable adaptive sparse grid library tailored for emerging architectures that will allow the solution of very large stochastic problems. We demonstrate the first results of adaptive sparse grids for hybrid systems and detail the algorithmic challenges that were overcome to get these techniques to work on these novel systems.
Rick Archibald
Computational Mathematics Group
Oak Ridge National Laboratory
[email protected]
MS17
A Compressive Sampling Approach for the Solution of High-dimensional Stochastic PDEs
We discuss a non-intrusive sparse approximation approach for uncertainty propagation in the presence of high-dimensional random inputs. The idea is to exploit the sparsity of the quantities of interest with respect to a suitable basis in order to estimate the dominant expansion coefficients using a number of random samples that is much smaller than the cardinality of the basis.
Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, Boulder
[email protected]
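A small sketch of the compressive-sampling idea in this abstract: a quantity of interest is sparse in a large basis, and its dominant coefficients are estimated from far fewer random samples than basis functions. The greedy solver (orthogonal matching pursuit), the sizes, and the Gaussian measurement matrix are illustrative assumptions, not the speaker's method.

```python
import numpy as np

# Recover a 3-sparse coefficient vector from N = 20 samples of a
# QoI expanded in P = 60 basis functions, via greedy OMP.
rng = np.random.default_rng(1)
P, N = 60, 20                      # basis cardinality >> number of samples

Phi = rng.standard_normal((N, P)) / np.sqrt(N)   # sampled basis functions
c_true = np.zeros(P)
c_true[[0, 3, 7]] = [1.0, -0.5, 0.25]            # sparse "true" coefficients
u = Phi @ c_true                                 # noiseless samples of the QoI

support, r, coef = [], u.copy(), np.zeros(0)
norms = np.linalg.norm(Phi, axis=0)
for _ in range(N):
    if np.linalg.norm(r) < 1e-10:
        break
    corr = np.abs(Phi.T @ r) / norms             # correlation with residual
    if support:
        corr[support] = -1.0                     # do not re-pick columns
    support.append(int(np.argmax(corr)))
    coef, *_ = np.linalg.lstsq(Phi[:, support], u, rcond=None)
    r = u - Phi[:, support] @ coef

c_hat = np.zeros(P)
c_hat[support] = coef
print("support found:", sorted(support), "residual:", np.linalg.norm(r))
```

In the PDE setting each "sample" is one solve of the stochastic PDE at a random input, so keeping N far below P is exactly the point of the approach.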
MS17
Tensor-based Methods for Uncertainty Propagation: Alternative Definitions and Algorithms
Tensor approximation methods have recently received growing attention in scientific computing for the solution of high-dimensional problems. These methods are of particular interest in the context of uncertainty quantification with functional approaches, where they are used for the representation of multiparametric functionals. Here, we present different alternatives, based on projection or statistical methods, for the construction of tensor approximations, in various tensor formats, of the solution of parameterized stochastic models.
Anthony Nouy
GeM, L'UNAM Universite, Ecole Centrale Nantes
University of Nantes, CNRS

Marie Billaud
GeM, L'UNAM Universite, Ecole Centrale Nantes
Universite de Nantes, CNRS
[email protected]

Mathilde Chevreuil
GeM, L'UNAM Universite, Universite de Nantes
Ecole Centrale Nantes, CNRS
[email protected]

Prashant Rai, Olivier Zahm
GeM, L'UNAM Universite, Ecole Centrale Nantes
Universite de Nantes, CNRS
[email protected], [email protected]
MS17
PDE with Random Coefficient as a Problem in Infinite-dimensional Numerical Integration
This talk is concerned with the use of quasi-Monte Carlo methods combined with finite element methods to handle an elliptic PDE with a random field as a coefficient. A number of groups have considered such problems (under headings such as polynomial chaos, stochastic Galerkin, and stochastic collocation) by reformulating them as deterministic problems in a high-dimensional parameter space, where the dimensionality comes from the number of random variables needed to characterise the random field. In recent joint work with Christoph Schwab (ETH) and Frances Kuo (UNSW) we have treated a problem of this kind as one of infinite-dimensional integration (integration arises because a multi-variable expected value is a multidimensional integral), together with finite element approximation in the physical space. We use recent developments in the theory and practice of quasi-Monte Carlo integration rules in weighted Hilbert spaces, through which rules with optimal properties can be constructed once the weights are known. The novel feature of this work is that for the first time we are able to design weights for the weighted Hilbert space that achieve what is believed to be the best possible rate of convergence, under conditions on the random field that are exactly the same as in a recent paper by Cohen, DeVore and Schwab on best N-term approximation for the same problem.
Ian H. Sloan
University of New South Wales
School of Mathematics
[email protected]
MS18
A Multiscale Domain Decomposition Method for the Solution of Stochastic Partial Differential Equations with Localized Uncertainties
The presence of numerous localized sources of uncertainties in physical models leads to high dimensional and complex multiscale stochastic models. A numerical strategy is here proposed to propagate the uncertainties through such models. It is based on a multiscale domain decomposition method that exploits the localized nature of the uncertainties. The separation of scales has the double benefit of improving the conditioning of the problem as well as the convergence of tensor-based methods (namely Proper Generalized Decomposition methods) used within the strategy for the separated representation of high dimensional stochastic parametric solutions.
Mathilde Chevreuil
GeM, L'UNAM Universite, Universite de Nantes
Ecole Centrale Nantes, CNRS
[email protected]

Anthony Nouy
GeM, L'UNAM Universite, Ecole Centrale Nantes
University of Nantes, CNRS
[email protected]

Elias Safatly
L'UNAM Universite, Ecole Centrale Nantes, GeM
[email protected]
MS18
Greedy Algorithms for Non-symmetric Linear Problems with Uncertainty
Greedy algorithms, also called Progressive Generalized Decomposition algorithms, are known to give very satisfactory results for the approximation of the solution of symmetric uncertain problems with a large number of random parameters. Indeed, their formulation leads to discretized problems whose complexity evolves linearly with respect to the number of parameters, while avoiding the curse of dimensionality. However, naive adaptations of these methods to non-symmetric problems may not converge towards the desired solution. In this talk, different versions of these algorithms will be presented, along with convergence results.
Virginie Ehrlacher
CERMICS - Ecole des Ponts Paristech / INRIA
[email protected]
MS18
Optimized Polynomial Approximations of PDEs with Random Coefficients
We consider the problem of numerically approximating statistical moments of the solution of a partial differential equation (PDE) whose input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are uncertain and described by a finite or countably infinite number of random variables. This situation includes the case of infinite dimensional random fields suitably expanded in e.g. Karhunen-Loeve or Fourier expansions. We focus on polynomial chaos approximations of the solution with respect to the underlying random variables, by either Galerkin projection, collocation on sparse grids of Gauss points, or discrete projection using random evaluations. We discuss in particular the optimal choice of the polynomial space for linear elliptic PDEs with random diffusion coefficient, based on a priori estimates. Numerical results showing the effectiveness and limitations of the approaches will be presented.
Joakim Beck
Applied Mathematics and Computational Science
KAUST, Saudi Arabia
[email protected]

Fabio Nobile
[email protected]

Lorenzo Tamellini
MOX, Department of Mathematics
Politecnico di Milano
[email protected]

Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and Technology
[email protected]
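As a toy one-dimensional illustration of the "collocation on Gauss points" option mentioned in this abstract, the sketch below computes a statistical moment of a model output u(xi), with xi standard normal, by Gauss quadrature in the random variable. The model u and the quadrature level are illustrative choices, not the authors' setup.

```python
import numpy as np

# Stochastic collocation in 1-D: E[u(xi)] for xi ~ N(0,1) via the
# probabilists' Gauss-Hermite rule (weight exp(-x^2/2)).

def collocation_mean(u, n_points):
    """Approximate E[u(xi)], xi ~ N(0,1), with n_points Gauss nodes."""
    x, w = np.polynomial.hermite_e.hermegauss(n_points)
    return np.sum(w * u(x)) / np.sqrt(2.0 * np.pi)  # weights sum to sqrt(2 pi)

u = np.exp                        # stand-in for a PDE solve per sample
exact = np.exp(0.5)               # E[exp(xi)] = e^{1/2}
print("collocation:", collocation_mean(u, 10), "exact:", exact)
```

In the multi-dimensional case the same idea is applied on sparse grids of Gauss points rather than a full tensor grid, which is where the choice of polynomial space discussed in the abstract matters.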
MS18
Solving Saddle Point Problems with Random Data
Stochastic Galerkin and stochastic collocation methods are becoming increasingly popular for solving (stochastic) PDEs with random data (coefficients, sources, boundary conditions, domains). The former usually require the solution of a large, coupled linear system of equations while the latter require the solution of a large number of small systems which have the dimension of the corresponding deterministic problem. Although many studies on positive definite problems with random data can be found in the literature, there is less work on stochastic saddle point problems, both in terms of approximation theory and solvers. In this talk, we discuss some of the approximation theory and linear algebra issues involved in applying stochastic Galerkin and collocation schemes to saddle point problems with random data. Model problems that will be used to illustrate the main points include: 1) a mixed formulation of a second-order elliptic problem (with random diffusion coefficient), 2) the steady-state Navier-Stokes equations (with random viscosity), and 3) a second-order elliptic PDE on a random domain, solved via the fictitious domain method.
Catherine Powell
School of Mathematics
University of Manchester
[email protected]
MS19
Optimization and Model Reduction for Parabolic PDEs with Uncertainties
Optimization of time-dependent PDEs with uncertain coefficients is expensive, because the problem size depends on the spatial and temporal discretization, and on sampling of the uncertainties. I analyze an approach based on stochastic collocation and model reduction, which are used to decouple the subproblems and reduce their sizes. For a class of problems and methods, I prove an estimate for the error between the full and the reduced order optimization problem. Numerical illustrations are provided.
Matthias Heinkenschloss
Department of Computational and Applied Mathematics
Rice University
[email protected]
MS19
A Stochastic Galerkin Method for Stochastic Control Problems
We examine a physical problem, fluid flow in porous media, which is represented by a stochastic PDE. We first give a priori error estimates for an optimization problem constrained by the physical model. We then use the concept of Galerkin finite element methods to establish a numerical algorithm to give approximations for our stochastic problem. Finally, we develop original computer programs based on the algorithm and present several numerical examples of various situations.
Jangwoon Lee
University of Mary Washington
[email protected]

Hyung-Chun Lee
Ajou University
Department of Mathematics
[email protected]
MS19
Optimal Control of Stochastic Flow Using Reduced-order Modeling
We explore a stochastic optimal control problem for incompressible flow by using reduced-order modeling (ROM) based on proper orthogonal decomposition (POD). The POD basis is constructed via a set of "snapshots" obtained by sampling the solution of the Navier-Stokes system at several time instants and over several independent realizations of the random data. Then, the stochastic ROM is constructed via a Galerkin method with respect to the POD basis.
Ju Ming
Florida State University
[email protected]

Max Gunzburger
Florida State University
School for Computational Science
[email protected]
MS19
An Efficient Sparse-Grid-Based Method for Bayesian Uncertainty Analysis
We develop an efficient sparse-grid (SG) Bayesian approach for quantifying parametric and predictive uncertainty of physical systems constrained by stochastic PDEs. An accurate surrogate posterior density is constructed as a function of polynomials using SG interpolation and integration. It improves the simulation efficiency by accelerating the evaluation of the posterior density without losing much accuracy, and by determining an appropriate alternative density which is easily sampled and captures the main features of the exact posterior density.
Guannan Zhang
Florida State University
[email protected]

Max Gunzburger
Florida State University
School for Computational Science
[email protected]

Ming Ye
Department of Scientific Computing
Florida State University
[email protected]
MS20
Fundamental Limitations of Polynomial Chaos Expansions for Uncertainty Quantification in Systems with Intermittent Instabilities
Methods based on truncated Polynomial Chaos Expansions (PCE) have been a popular tool for probabilistic uncertainty quantification in complex nonlinear systems. We show rigorously that truncated PCE have fundamental limitations for UQ in systems with intermittently positive Lyapunov exponents, leading to non-realizability of probability densities and even a blow-up at short times of the second-order statistics. Simple unambiguous examples are used to illustrate these limitations.
Michal Branicki
New York University
[email protected]

Andrew Majda
Courant Institute NYU
[email protected]
MS20
Physics-based Auto- and Cross-Covariance Models for Gaussian Process Regression
Gaussian process analysis of systems with multiple outputs is limited by the fact that far fewer good classes of covariance functions exist compared with the single-output case. We propose analytical and numerical auto- and cross-covariance models, which take a form consistent with physical laws governing the underlying simulated process, and are suitable when performing uncertainty quantification or inference on multidimensional processes (co-kriging). High-order closures are also derived for nonlinear dependencies among the observables.
Emil M. Constantinescu, Mihai Anitescu
Argonne National Laboratory
Mathematics and Computer Science Division
[email protected], [email protected]
MS20
Covariance Approximation Techniques in Bayesian Non-stationary Gaussian Process Models
Bayesian Gaussian processes have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The full rank approximation is among the best approximation techniques used recently in statistics to deal with large spatial datasets. This technique has been successfully applied to stationary spatial processes, where the data are reasonably considered stationary, but has not been tried with real non-stationary large spatial datasets. The goal of this presentation is to generalize this method.
Bledar Konomi, Guang Lin
Pacific Northwest National Laboratory
[email protected], [email protected]
MS20
Inertial Manifold Dimensionality and Finite-time Instabilities in Fluid Flows
We examine the geometry of the inertial manifold associated with fluid flows and relate its nonlinear dimensionality to energy exchanges among different dynamical components. Using dynamical orthogonality we perform efficient order-reduction, and we prove and illustrate in 2D Navier-Stokes that the underlying mechanism responsible for the finite dimensionality of the inertial manifold is, apart from the viscous dissipation, the reverse flow of energy from large to small scales. We discuss implications for data assimilation schemes.
Themistoklis Sapsis
New York University
[email protected]
MS21
A Matrix-Free Approach for Gaussian Process Maximum Likelihood Calculations
In Gaussian process analysis, the maximum likelihood approach for parameter estimation is limited by the use of the Cholesky factorization of the covariance matrix in computing the likelihood. In this work, we present a stochastic programming reformulation with convergence estimates for solving the maximum likelihood problem for large-scale data. We demonstrate that the approach scales linearly with an increase in the data size on a desktop machine, and show some preliminary results on supercomputers with parallel processing.
Jie Chen
Argonne National Laboratory
[email protected]
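For context, the sketch below shows the Cholesky-based Gaussian log-likelihood that this abstract identifies as the bottleneck for large data: an O(n^3) factorization followed by O(n^2) triangular solves. The covariance model and data are illustrative choices, not the speaker's matrix-free method.

```python
import numpy as np

# Baseline GP log-likelihood, log N(y | 0, K), via Cholesky K = L L^T.

def gp_log_likelihood(y, K):
    n = len(y)
    L = np.linalg.cholesky(K)                            # the O(n^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    log_det = 2.0 * np.sum(np.log(np.diag(L)))           # log |K|
    return -0.5 * (y @ alpha + log_det + n * np.log(2.0 * np.pi))

# Small sanity check against a dense evaluation with slogdet.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2) + 1e-6 * np.eye(20)
y = np.linalg.cholesky(K) @ rng.standard_normal(20)      # a draw from N(0, K)

_, logdet = np.linalg.slogdet(K)
dense = -0.5 * (y @ np.linalg.solve(K, y) + logdet + 20 * np.log(2.0 * np.pi))
print(np.isclose(gp_log_likelihood(y, K), dense))
```

The matrix-free reformulation of the talk is aimed at replacing exactly this factorization, whose cubic cost makes maximization of the likelihood impractical for very large n.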
MS21
Sequential Optimization of a Computer Model: What's So Special about Gaussian Processes Anyway?
In computer experiments, statistical models are commonly used as surrogates for slow-running codes. In this talk, the usually ubiquitous Gaussian process models are nowhere to be seen, however. Instead, an adaptive nonparametric regression model (BART) is used to deal with nasty nonstationarities in the response surface. By providing both point estimates and uncertainty bounds for prediction, BART provides a basis for sequential design criteria to find optima with few function evaluations.
Hugh Chipman
Acadia University, Canada
[email protected]

Pritam Ranjan
Acadia University
[email protected]
MS21
Extracting Key Features of Physical Systems for Emulation and Calibration
Many physical models, such as material strength models, rely on a large number of parameters to describe the physical system over a large range of possible input settings. However, for a particular model-based prediction, the physical complexity of the system is much reduced, and as a result, the effective physical model needed to make predictions may be described by a few functions of the parameters (which we call features). We present an example where physical understanding and its implementation in the predictive computer code suggest a particular function of material strength to focus on. This allows us to greatly reduce the dimensionality and non-linearity of the problem, making emulation and calibration a much simpler task.
Nicolas Hengartner
Information Sciences Group
Los Alamos National Laboratory
[email protected]
MS21
Sliced Cross-validation for Surrogate Models
Multi-fold cross-validation is widely used to assess the accuracy of a surrogate model. Despite its popularity, this method is known to have high variability. We propose a method, called sliced cross-validation, to mitigate this drawback. It uses sliced space-filling designs to construct structured cross-validation samples such that the data for each fold are space-filling. Numerical examples and theoretical results will be given to illustrate the proposed method.
Peter Qian
University of Wisconsin - Madison
[email protected]

Qiong Zhang
Department of Statistics
University of Wisconsin-Madison
[email protected]
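For reference, the sketch below is the plain multi-fold cross-validation loop whose fold-to-fold variability sliced cross-validation is designed to reduce. The surrogate (a polynomial fit) and the data are illustrative assumptions, not the authors' construction.

```python
import numpy as np

# Ordinary multi-fold CV for a surrogate model: random folds, refit on
# the training part, score squared prediction error on the held-out part.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 60)
y = np.sin(2.0 * np.pi * x) + 0.05 * rng.standard_normal(60)

def cv_error(x, y, n_folds=5, degree=4):
    """Mean squared held-out prediction error over random folds."""
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, n_folds)
    errors = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        coeffs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coeffs, x[fold])
        errors.append(np.mean((pred - y[fold]) ** 2))
    return np.mean(errors)

print("quartic surrogate CV error:", cv_error(x, y))
print("constant surrogate CV error:", cv_error(x, y, degree=0))
```

Because the folds here are random, repeated runs give noticeably different error estimates; replacing the random folds with slices of a sliced space-filling design is the structured alternative the abstract proposes.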
MS22
Simulation Informatics
Modern physical simulations produce large amounts of data. Proper uncertainty quantification on such data sets requires scalable, fault-tolerant methods for feature extraction and machine learning. We frame the statistical questions arising from uncertainty quantification as a machine learning problem, which leverages existing work in informatics. We focus particularly on scalable model reduction and surrogate modeling for parameterized simulations, a common scenario in uncertainty quantification.
David F. Gleich
Sandia National Laboratories
[email protected]
MS22
Scientific Data Mining Techniques for Extracting Information from Simulations
Techniques from scientific data mining have long been used for the analysis of data from various domains including astronomy, remote sensing, materials science, and simulations of turbulence in fluids and plasma. In this talk, I will present some of my work in classification of orbits in Poincare plots in fusion simulations, analysis of coherent structures in Rayleigh-Taylor instability, and comparison of experiments and simulations in Richtmyer-Meshkov instability.
Chandrika Kamath
Lawrence Livermore National Laboratory
[email protected]
MS22
Machine Learning to Recognize Phenomena in Large Scale Simulations
Clustering, classification, and anomaly detection are among the most important problems in machine learning. The existing methods usually treat the individual data points as the object of consideration. Here we consider a different setting. We assume that each instance corresponds to a group of data points. Our goal is to estimate the distances between these groups and use them to create new clustering and classification algorithms, and recognize phenomena in large scale simulations.
Barnabas Poczos
Carnegie Mellon University
[email protected]
MS22
Extracting Non-conventional Information in Simulation of Chaotic Dynamical Systems
Many state-of-the-art large scale simulations are chaotic. Examples include turbulent fluid flows using techniques such as direct numerical simulation or large eddy simulation, protein folding using molecular dynamics, and nuclear fusion engineering using plasma dynamics simulation. Performing these simulations often produces very large amounts of data; however, extracting physical insight and understanding from the result of the simulation is nontrivial. This talk focuses on a method of extracting non-conventional information, such as robustness, sensitivity and quantified uncertainty, from the data produced by chaotic simulations. We will discuss methods for identifying the attractor of the chaotic simulation, and introduce the attractor-Fokker-Planck equation that governs the ergodic density on the attractor. We then will introduce our Monte Carlo solver for the adjoint of the Fokker-Planck equation, which can be used to compute the robustness and sensitivity of the chaotic simulation with respect to possible parameter changes. Finally, we will show how the resulting sensitivity can be used for uncertainty quantification.
Qiqi Wang
Massachusetts Institute of Technology
[email protected]
MS23
Distinguishing and Integrating Aleatoric and Epistemic Variation in UQ
Much of UQ to date has focused on determining the effect of variables modeled probabilistically, with known distributions. We develop methods to obtain information on a system when the distributions of some variables are known exactly, while others are known only approximately. We use the duality between risk-sensitive integrals and relative entropy to obtain bounds on performance measures over families of distributions. These bounds are computed using polynomial chaos techniques.
Kenny Chowdhary
Brown University
[email protected]

Jan S. Hesthaven
Brown University
Division of Applied Mathematics
[email protected]
MS23
Stochastic Basis Reduction by QoI Adaptation
We describe a new stochastic model reduction approach that characterizes the uncertainty in predictive models by relying on the dominant subspace occupied by a Quantity of Interest. Orders of magnitude reduction in requisite representation and associated computational costs are achieved in this manner. We demonstrate the methodology on an elliptic problem describing an elastically deforming body.
Roger Ghanem
University of Southern California
Aerospace and Mechanical Engineering and Civil Engineering
[email protected]
Ramakrishna Tippireddy
University of Southern California
[email protected]
MS23
Adaptive Basis Selection and Dimensionality Reduction with Bayesian Compressive Sensing
Bayesian inference is implemented to obtain a polynomial chaos expansion for the response of a complex model, given a sparse set of training runs. For polynomial basis reduction, Bayesian compressive sensing is employed to detect basis terms with strong impact on model output via a relevance vector machine. Furthermore, a recursive algorithm is proposed to determine the optimal set of basis terms. The methodology is applied to global sensitivity studies of the Community Land Model.
Khachik Sargsyan, Cosmin Safta, Robert D. Berry
Sandia National Laboratories
[email protected], [email protected], [email protected]

Bert J. Debusschere
Energy Transportation Center
Sandia National Laboratories, Livermore, CA
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA
[email protected]

Daniel Ricciuto, Peter Thornton
Oak Ridge National Laboratory
[email protected], [email protected]
MS23
Stochastic Collocation Methods for Stochastic Differential Equations Driven by White Noise
We propose stochastic collocation methods for time-dependent stochastic differential equations (SDEs) driven by white noise. To control the increasing dimensionality in random space generated by the increments of Brownian motion, we approximate Brownian motion with its spectral expansion. We also use sparse grid techniques to further reduce the computational cost in random space. Numerical examples show the efficiency of the proposed methods. In particular, our proposed methods can run up to moderately large times with proper schemes in physical space and time.
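The spectral expansion that replaces Brownian increments by a finite number of Gaussian variables can be sketched directly; the following uses the Karhunen-Loeve basis on [0, 1] (an illustration of the parametrization only; the talk's methods combine such expansions with sparse grid collocation):

```python
import numpy as np

rng = np.random.default_rng(4)

def brownian_kl(t, xi):
    """Truncated KL (spectral) expansion of Brownian motion on [0, 1]:
    W(t) = sum_k xi_k * sqrt(2) * sin((k - 1/2) pi t) / ((k - 1/2) pi),
    with xi of shape (n_samples, K) holding i.i.d. N(0, 1) coefficients."""
    K = xi.shape[1]
    k = np.arange(1, K + 1) - 0.5
    basis = np.sqrt(2.0) * np.sin(np.outer(t, k) * np.pi) / (k * np.pi)
    return xi @ basis.T                    # shape (n_samples, len(t))

t = np.linspace(0.0, 1.0, 101)
xi = rng.standard_normal((20000, 200))     # 200 Gaussian modes per path
W = brownian_kl(t, xi)                     # approximate Brownian paths
# Var W(t) should be close to t; the truncation error decays like 1/K.
```

Driving an SDE with W parametrized this way turns the random-space problem into a (high- but finite-dimensional) deterministic parameter dependence, which is what sparse grid collocation then exploits.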
Zhongqiang Zhang
Brown University
UQ12 Abstracts 97
[email protected]
George E. Karniadakis
Brown University
Division of Applied Mathematics
[email protected]

Boris Rozovsky
Brown University
[email protected]
MS24
Probabilistic UQ for PDEs with Random Data: A Case Study
We present a case study for probabilistic uncertainty quantification (UQ) applied to groundwater flow in the context of site assessment for radioactive waste disposal, based on data from the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. In this context, the primary quantity of interest is the time it takes for a particle of radioactivity to be transported with the groundwater from the repository to man's environment. The mathematical model consists of a stationary diffusion equation for the hydraulic head in which the hydraulic conductivity coefficient is a random field. Once the (stochastic) hydraulic head is computed, contaminant transport can be modelled by particle tracing in the associated velocity field. We compare two approaches: Gaussian process emulators and stochastic collocation, combined with geostatistical techniques for determining the parameters of the input random field's probability law. The second approach involves the numerical solution of the PDE with random data as a parametrized deterministic system. The calculation of the statistics of the travel time from the solution of the stochastic model is formulated for each of the methods being studied and the results compared.
Oliver G. Ernst
TU Bergakademie Freiberg
Fakultaet Mathematik und Informatik
[email protected]
MS24
Proper Generalized Decomposition for Stochastic Navier-Stokes Equations
We propose an application of the Proper Generalized Decomposition to the stochastic Navier-Stokes equations in the incompressible regime. The method aims at determining the dominant components of the stochastic flow by means of suitable reduced basis approximations. The deterministic reduced basis is here constructed using Arnoldi-type iterations, which involve the resolution of a series of deterministic problems whose structure is similar to the original deterministic Navier-Stokes equations, such that classical solvers can be re-used. The stochastic solution is then approximated in the reduced space of deterministic modes by means of a Galerkin projection, yielding a low dimensional set of coupled quadratic equations for the stochastic coefficients. The efficiency of the method will be illustrated and its computational complexity will be contrasted with the classical Galerkin Polynomial Chaos method. Computation of the reduced solution residual and of the stochastic pressure field will also be discussed.
Olivier [email protected]
MS24
Computation of the Second Order Wave Equation with Random Discontinuous Coefficients
In this talk we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed and subjected to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical space and depends on a finite number of random variables. The numerical scheme consists of a finite difference or finite element method in the physical space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space. This approach leads to the solution of uncoupled deterministic problems, as in the Monte Carlo method. We consider both full and sparse tensor product spaces of orthogonal polynomials. We provide a rigorous convergence analysis and demonstrate different types of convergence of the probability error with respect to the number of collocation points for full and sparse tensor product spaces and under some regularity assumptions on the data. In particular, we show that, unlike in elliptic and parabolic problems, the solution to hyperbolic problems is not in general analytic with respect to the random variables. Therefore, the rate of convergence may only be algebraic. An exponential/fast rate of convergence is still possible for some quantities of interest and for the wave solution with particular types of data. We present numerical examples, which confirm the analysis and show that the collocation method is a valid alternative to the more traditional Monte Carlo method for this class of problems. We also present extensions to the elastic wave equation.
Mohammad [email protected]
Fabio [email protected]
Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and Technology
[email protected]
MS24
A Predictor-corrector Method for Fluid Flow Exhibiting Uncertain Periodic Dynamics
Generalized Polynomial Chaos (gPC) is known to exhibit computational difficulties in the context of dynamical systems involving random coefficients. This can in particular be attributed to the broadening of the solution spectrum as time evolves. In this work, an iterative numerical method is developed for the computation of almost surely stable limit cycles for the incompressible Navier-Stokes equations subject to random inflow boundary conditions. It relies on the reformulation of the model equations in terms of an uncertain period, which involves a rescaling of the physical time variable. Based on the scaled version, a predictor step estimates the period with respect to a deterministic reference solution. Afterwards, a convex optimization problem is solved to obtain a stochastic correction term to ensure a minimal distance between an initial and terminal state of the trajectory almost surely. An inexact Newton method provides a correction for the initial state such that the distance can be decreased even further. The developed numerical method is applied to a benchmark problem given by a flow
around a circular domain subject to uniformly distributed inflow boundary conditions. The numerical results demonstrate the ability to capture almost surely stable limit cycles with high accuracy, characterized by a vortex shedding scheme with an uncertain period.
Michael Schick
Karlsruhe Institute of Technology
[email protected]
Vincent Heuveline
Institute for Applied and Numerical Mathematics
Karlsruhe Institute of Technology, Universitat Karlsruhe
[email protected]
Olivier [email protected]
MS25
Uncertainty Analysis in the Florida Public Hurricane Loss Model
The Florida Public Hurricane Loss Model was developed for the purpose of predicting annual insured hurricane loss in the state of Florida. The model is an open public model comprised of atmospheric science, engineering, and actuarial science components, and it serves as a benchmark against existing proprietary commercial models. This talk will present statistical techniques used to validate the model, with a focus on the uncertainty and sensitivity analysis for the loss costs.
Sneh Gulati
Department of Mathematics and Statistics
Florida International University
[email protected]

Shahid S. Hamid, Golam Kibria
Florida International University
[email protected], [email protected]

Steven Cocke
Center for Ocean-Atmospheric Prediction Studies
The Florida State University
[email protected]

Mark D. Powell
Atlantic Oceanographic and Meteorological Laboratory
National Oceanic and Atmospheric Administration
[email protected]

Jean-Paul Pinelli
Civil Engineering Department
Florida Institute of Technology
[email protected]

Kurtis R. Gurley
Department of Civil and Coastal Engineering
University of Florida
[email protected]

Shu-Ching Chen
School of Computing and Information Sciences
Florida International University
[email protected]
MS25
Toward Transparency and Refutability in Environmental Modeling
Environmental models would be made more effective by establishing a base set of model fit, sensitivity, and uncertainty measures. Reporting the results of this base set of methods consistently, along with any other methods considered and as innovations in model analysis continue, would improve model transparency and refutability. This work proposes two base sets of methods: base set I for all models; base set II for models with shorter execution times.
Mary [email protected]
Dmitri Kavetski
Faculty of Engineering and Built Environment
The University of Newcastle, Australia
[email protected]

Martyn Clark
National Center for Atmospheric Research
University of Colorado Boulder
[email protected]

Ming Ye
Department of Scientific Computing
Florida State University
[email protected]

Mazdak Arabi
Department of Civil and Environmental Engineering
Colorado State University
[email protected]

Laura Foglia
University of California, Davis
[email protected]

Steffen Mehl
Department of Civil Engineering
California State University, Chico
smehl@csuchico.edu
MS25
Quantifying and Reducing Uncertainty through Sampling Design
Uncertainty reduction for geo-sphere based projects relies on data collection. A variety of performance measures and associated sample designs have been proposed. These designs are evaluated on a test problem with a focus on adaptive sampling, where information from previous samples is used to identify future sample locations. Example applications of sampling design from CO2 sequestration and groundwater contaminant detection are presented.
Sean A. McKenna
Sandia National Laboratories
[email protected]

Jaideep Ray
Sandia National Laboratories, Livermore, CA
[email protected]
Bart G. Van Bloemen Waanders
Sandia National Laboratories
[email protected]
MS25
Uncertainty Quantification of Shallow Groundwater Impacts due to CO2 Sequestration
A suite of numerical tools including multi-phase and multi-component reactive transport modeling, uncertainty quantification and risk assessment are used to develop risk profiles for the National Risk Assessment Program for CO2 Sequestration. Two quantities, pH and total dissolved solids, were chosen as proxies for assessing shallow groundwater impacts due to CO2 and brine leakage into the shallow aquifer. A 3D flow and transport model of the Edwards aquifer is used as a site example.
Daniel M. Tartakovsky
University of California, San Diego
[email protected]

Hari Viswanathan
Earth and Environmental Sciences Division
Los Alamos National Laboratory
[email protected]

Elizabeth Keating, Zhenxue Dai
Los Alamos National Laboratory
[email protected], [email protected]

Rajesh Pawar
Earth and Environmental Sciences Division
Los Alamos National Laboratory
[email protected]
MS26
Random Fluctuations in Homogenization Theory; How Stochasticity Propagates Through Scales
The influence of small scale fluctuations on the solutions of partial differential equations (PDEs) is relatively well understood at the most macroscopic level, i.e., the homogenization level describing the ensemble average of the PDE solution. Many uncertainty quantification contexts require that we understand, at least qualitatively, the random fluctuations of the PDE solution. Much less is known from the theoretical standpoint. This talk will review some existing theories.
Guillaume Bal
Columbia University
Department of Applied Physics and Applied Mathematics
[email protected]
MS26
Uncertainty Quantification in Multi-Scale, Multi-Physics MEMS Performance Prediction
This presentation describes a Bayes network approach to perform comprehensive uncertainty quantification in multi-scale, multi-physics model prediction for a MEMS device. The device performance is related to membrane deflection, affected by multiple physics: creep, damping, contact, and dielectric charging. The Bayes network is able to integrate epistemic and aleatory uncertainty information from multiple sources and activities (modeling, experiments, experts, calibration, verification, validation) and multiple formats (full distributions, sparse data, imprecise data, noise, error estimates).
Sankaran Mahadevan
Vanderbilt University
[email protected]

You Ling
Department of Civil and Environmental Engineering
Vanderbilt University
[email protected]

Joshua Mullins, Shankar Sankararaman
Vanderbilt University
[email protected], [email protected]
MS26
UQ in MD Simulations: Forward Propagation and Parameter Inference
This work focuses on quantifying uncertainty in molecular dynamics (MD) simulations of TIP4P water, accounting for intrinsic noise and parametric uncertainty. A functional relationship between selected macroscale observables and uncertain atomistic model parameters is constructed via polynomial chaos expansions using a non-intrusive spectral projection and a Bayesian inference approach. These PC representations are then exploited in an inverse problem involving the estimation of force-field parameters of the TIP4P model using bulk-phase properties of water.
Francesco Rizzi
Department of Mechanical Engineering
Johns Hopkins University
[email protected]

Omar M. Knio
Duke University
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA
[email protected]

Bert J. Debusschere
Energy Transportation Center
Sandia National Laboratories, Livermore, CA
[email protected]

Khachik Sargsyan
Sandia National Laboratories
[email protected]

Maher Salloum
Sandia National Laboratories
Scalable & Secure Systems Research Department
[email protected]

Helgi Adalsteinsson
Sandia National Laboratories, Livermore, CA
[email protected]
MS26
Quantifying Sampling Noise and Parametric Uncertainty in Coupled Atomistic-Continuum Simulations
We perform a coupled atomistic-to-continuum simulation in a model operating under uncertainty. The latter takes the form of parametric uncertainty and sampling noise intrinsic in atomistic simulations. We present mathematical formulations that enable the propagation of uncertainty between the discrete and continuum components and the solution of the exchanged variables in terms of polynomial chaos expansions. We consider a simple Couette flow model where the variable of interest is the wall velocity. Results show convergence of the exchanged variables at a reasonable computational cost.
Maher Salloum
Sandia National Laboratories
Scalable & Secure Systems Research Department
[email protected]

Khachik Sargsyan
Sandia National Laboratories
[email protected]
Reese [email protected]
Bert J. Debusschere
Energy Transportation Center
Sandia National Laboratories, Livermore, CA
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA
[email protected]

Helgi Adalsteinsson
Sandia National Laboratories, Livermore, CA
[email protected]
MS27
Evaluation of Mixed Continuous-Discrete Surrogate Approaches
Evaluating the performance of surrogate modeling approaches is essential for determining their viability in uncertainty analysis. To this end, we evaluated categorical regression, ACOSSO splines, Gaussian processes (GPs) with specialized covariance models, and treed GPs on a set of test functions. We describe our evaluation principles and metrics, the characteristics of the test functions considered, and our software testbed. Additionally, we present our numerical results and discuss our observations regarding the merits of each approach.
Patricia D. Hough
Sandia National Laboratories
[email protected]
MS27
Large-scale Surrogate Models
The Bayesian Additive Regression Tree (BART) is a statistical model that represents observed data as a sum of weak learners. Each tree represents a weak learner, and the overall model is a sum of such trees. In this work, we extend the BART model to handle massive datasets using a parallelized MCMC algorithm. Our approach handles datasets too massive to fit on any single data repository, and scales linearly in the number of processor cores.
Matthew Pratola
Los Alamos National Laboratory
[email protected]
MS27
High Performance Bayesian Assessment of Earthquake Source Models for Ground Motion Simulations
Reliable ground motion predictions, with quantified uncertainty, are very important for designing large civil structures. Using Bayesian analysis, we rank candidate kinematic earthquake model classes by comparing their computed quantities of interest (e.g. peak ground velocity, spectral acceleration) against reference attenuation relations. The candidate model classes differ on hypotheses about the physical parameters involved. We report results and give an overview of state-of-the-art parallel statistical algorithms we have been researching.
Ernesto E. Prudencio
Institute for Computational Engineering and Sciences
[email protected]
MS27
Use of Gaussian Process Modeling to Improve the Perez Model in the Study of Surface Irradiance of Buildings
In building energy simulations, a model due to Perez is used to calculate the irradiance received by a tilted surface of a building based on measurements of the irradiance on a horizontal surface. However, it is known that the prediction from this model can have a substantial discrepancy from the real measurement on a tilted surface. We treat this discrepancy as an integral of an unknown function over the space of the sky dome, and use a Gaussian process to model the integrated values in each small rectangular grid of the sky dome. A nugget term is used to stabilize the parameter estimation. Results from different scales of the grid are presented to illustrate this new approach.
Heng Su
Georgia Institute of Technology
[email protected]
MS28
Uncertainty Quantification of Building Energy Models
We discuss the NSF-funded EFRI-SEED project for risk-conscious methods in the design and retrofit of buildings, focusing on the quantification of parameter uncertainty in building energy models at different system scales and different model fidelities. The UQ of the most sensitive parameters is discussed and their influence on building performance predictions is demonstrated. A discussion of how the translation of uncertainties into risk measures will inform design and retrofit decisions concludes the presentation.
Godfried Augenbroe
College of Architecture
Georgia Institute of Technology
[email protected]
MS28
An Integrated Approach to Design and Operation of High Performance Buildings in the Presence of Uncertainty
In this talk we introduce how to use uncertainty quantification to leverage the design and operation of high performance buildings. As buildings contain thousands of parameters (from the way they are designed to the way they are occupied and operated), efficient parameter management and identification of critical parameters is the key. We discuss uncertainty and sensitivity analysis in the context of an integrated design process including model calibration, optimization, and failure mode analysis.
Bryan [email protected]
MS28
UQ for Better Commercial Buildings: From Design to Operation
Commercial buildings are expensive assets characterized by long lives and significant environmental and economic impact, demanding great care in both their design and their ongoing operation. This talk discusses UQ for the robust design of high-performance buildings; their improved operation using model-based predictive control that relies on reduced-order models and extracted near-optimal rule sets in the face of uncertainty in human behavior; and fault detection and diagnosis for existing buildings.
Gregor Henze
University of Colorado at Boulder
[email protected]
MS28
Sensitivity of Building Parameters with Building Type, Climate, and Efficiency Level
As one embarks on building an energy model of an existing building, there are parameters that are known, somewhat unknown, and very unknown. Based on the type of building, the climate it is operating in, and its energy use intensity, the uncertainty of the simulation's energy results depends on the results' sensitivity to variations in the known and unknown parameters. Results will be presented from several hundred thousand simulations showing these uncertainties and their corresponding parameter sensitivities.
John [email protected]
MS28
Uncertainty Quantification Methods for Multi-physics Applications
Uncertainty quantification is increasingly recognized as a key component in modeling and simulation-based science and engineering. There are, however, many UQ challenges for large-scale applications, including expensive model evaluation, large parameter spaces with complex correlations, highly nonlinear models, diverse data sources for calibration, model-form uncertainties, etc. This talk surveys current techniques for these challenges and also describes an LLNL software package called PSUADE that captures many such techniques and may be applicable to energy efficiency applications.
Charles Tong
Lawrence Livermore National Laboratory
[email protected]
MS29
Application of the Implicit Particle Filter to a Model of Nearshore Circulation
Data assimilation is properly viewed as an exercise in conditional probabilities. Models of the ocean and atmosphere have typical state dimensions of O(10^5-10^7), so direct calculation of the relevant pdfs is not practical and we must apply Monte-Carlo methods. We present results from an efficient Monte-Carlo method in which the conditional pdf is sampled directly. We show applications of the method to high dimensional systems.
Robert Miller
College of Oceanic and Atmospheric Sciences
Oregon State University
[email protected]
MS29
Implicit Particle Filtering for Geomagnetic Data Assimilation
A large amount of data on the Earth's magnetic field has recently become available due to satellite missions. This data can be used to improve the predictions of geomagnetic models, and the interest in data assimilation techniques has thus grown in this field. Thus far, a 4D-Var approach and a Kalman filter have been considered. Here, the implicit particle filter is applied and its performance is compared to two competing Monte Carlo algorithms.
Matthias Morzfeld
Department of Mathematics
Lawrence Berkeley National Laboratory
[email protected]
MS29
A Tutorial on Implicit Particle Filtering for Data Assimilation
I will give an introduction to implicit sampling, a new methodology that finds high-probability samples of multi-dimensional probability densities by solving equations with a random input. The implicit sampling strategy focuses attention on regions of high probability and, thus, is applicable even if the state dimension is large. I will present details of implicit sampling in connection with data assimilation and will provide several examples to illustrate the efficiency and accuracy of the algorithm.
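For a scalar toy problem the recipe can be written in a few lines: express the target as p(x) ∝ exp(-F(x)), minimize F, map each reference Gaussian sample ξ to x by solving F(x) - min F = ξ²/2, and attach the Jacobian of the map as an importance weight. The quartic potential below is hypothetical; in data assimilation F would be the negative log posterior of a path:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D target p(x) ~ exp(-F(x)) with a non-Gaussian (hypothetical) potential.
def F(x):
    return 0.5 * x**2 + 0.1 * x**4

def dF(x):
    return x + 0.4 * x**3

phi = F(0.0)                              # minimum of F, attained at x* = 0

def solve_branch(rhs, lo, hi):
    """Bisection for F(x) = rhs on the monotone branch [lo, hi]."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if F(mid) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N = 5000
xi = rng.standard_normal(N)               # reference Gaussian samples
x = np.empty(N)
for i, z in enumerate(xi):
    rhs = phi + 0.5 * z * z               # implicit equation F(x) - phi = z^2/2
    root = solve_branch(rhs, 0.0, 20.0)
    x[i] = root if z >= 0.0 else -root    # F is even, so mirror for z < 0

# Importance weight = Jacobian |dx/dz| = |z| / |F'(x)|, self-normalized below.
w = np.abs(xi) / np.maximum(np.abs(dF(x)), 1e-300)
w /= w.sum()
est_x2 = np.sum(w * x**2)                 # weighted estimate of E_p[x^2]
```

Because every ξ lands in a high-probability region of p by construction, the weights stay well behaved, which is the property that keeps the method viable in high dimensions.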
Xuemin Tu
University of Kansas
Lawrence, KS
[email protected]
MS29
An Application of Rare Event Tools to a Bimodal Ocean Current Model
I will review some recent results on importance sampling methods for rare event simulation as well as discuss their application to the filtering problem for a simple model of
the Kuroshio current. Assuming a setting in which both observation noise and stochastic forcing are present but small, I will demonstrate numerically and theoretically that sophisticated importance sampling techniques can achieve very small statistical error. Standard filtering methods deteriorate in this small noise setting.
Jonathan Weare
University of Chicago
Department of Mathematics
[email protected]

Eric Vanden-Eijnden
Courant Institute
New York University
[email protected]
MS31
Multiple Computer Experiments: Design and Modeling
A growing trend in uncertainty quantification is to use multiple computer experiments to study the same complex system. We construct a new type of design, called a sliced orthogonal array-based Latin hypercube design, intended for efficiently running such experiments. Such a design itself achieves attractive uniformity and can be sliced into Latin hypercube designs. This sliced structure also leads to a two-stage modeling strategy for integrating data from multiple computer experiments.
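The "sliceable" property can be conveyed with a toy construction: each slice is itself a Latin hypercube, and the stacked slices form a larger Latin hypercube. This simple randomized version omits the orthogonal-array structure of the designs in the talk, and the sizes are invented:

```python
import numpy as np

def sliced_lhd(t, m, d, rng):
    """Integer levels for a sliced Latin hypercube design: t slices of m runs
    in d dimensions.  The stacked n = t*m runs form a Latin hypercube on
    levels 0..n-1, and each slice collapses to a Latin hypercube on 0..m-1."""
    X = np.empty((t, m, d), dtype=int)
    for dim in range(d):
        coarse = np.stack([rng.permutation(m) for _ in range(t)])  # (t, m)
        fine = np.empty((t, m), dtype=int)
        for k in range(m):
            sub = rng.permutation(t)        # distinct fine levels inside cell k
            for j in range(t):
                pos = int(np.argmax(coarse[j] == k))
                fine[j, pos] = k * t + sub[j]
        X[:, :, dim] = fine
    return X

rng = np.random.default_rng(7)
X = sliced_lhd(t=3, m=4, d=2, rng=rng)
pts = (X + 0.5) / 12.0          # map levels to cell midpoints in [0, 1)^d
```

Each slice can then be assigned to one simulator (or one fidelity level), while the union of slices still fills the input space evenly, which is what the two-stage modeling strategy exploits.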
Youngdeok Hwang, Xu He
Department of Statistics
University of Wisconsin-Madison
[email protected], [email protected]

Peter Qian
University of Wisconsin-Madison
[email protected]
MS31
Efficient Analysis of High Dimensional Data in Tensor Formats
In this article we introduce new methods for the analysis of high dimensional data in tensor formats, where the underlying data come from a stochastic elliptic boundary value problem. We approximate the high dimensional operator via sums of elementary tensors. This tensor representation can be effectively used for computing different values of interest, such as the maximum norm, level sets, and the cumulative distribution function. As an intermediate step we describe efficient iterative algorithms for computing the characteristic and sign functions as well as the pointwise inverse in the canonical tensor format.
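The "sums of elementary tensors" (canonical) format can be sketched in a few lines: a d-way tensor is stored as d factor matrices, and any entry is evaluated without ever forming the full array. The sizes below are invented for illustration, and the iterative algorithms for characteristic and sign functions are beyond this sketch:

```python
import numpy as np

rng = np.random.default_rng(11)

d, n, R = 4, 10, 3                       # 4 dimensions, mode size 10, rank 3
factors = [rng.standard_normal((n, R)) for _ in range(d)]   # U_1, ..., U_d

def cp_entry(factors, idx):
    """A[idx] = sum_r prod_k U_k[idx[k], r], evaluated without forming A."""
    prod = np.ones(factors[0].shape[1])
    for U, i in zip(factors, idx):
        prod = prod * U[i, :]
    return float(prod.sum())

# Storage: d*n*R = 120 numbers versus n**d = 10,000 for the full tensor.
# For this small example we can also form the full tensor as a check.
full = np.zeros((n,) * d)
for r in range(R):
    outer = factors[0][:, r]
    for U in factors[1:]:
        outer = np.multiply.outer(outer, U[:, r])
    full += outer
```

The storage gap (linear in d versus exponential in d) is the reason such formats can beat the curse of dimensionality, provided the quantities of interest can be computed while staying in the format.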
Alexander Litvinenko
TU Braunschweig, Germany
[email protected]
MS31
An Adaptive Sparse Grid Generalized Stochastic Collocation Method for High-dimensional Stochastic Simulations
Our modern treatment of predicting the behavior of physical and engineering problems relies on approximating solutions in terms of high dimensional spaces, particularly in the case when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by large amounts of uncertainty. To approximate these higher dimensional problems we present an adaptive sparse grid generalized stochastic collocation technique constructed from both global polynomial approximations and local multiresolution decompositions. Rigorously derived error estimates for the fully discrete problem will be used to show the efficiency of the methods at predicting the behavior of both smooth and discontinuous solutions.
Clayton G. Webster
Oak Ridge National Laboratory
[email protected]

Max Gunzburger
Florida State University
School for Computational Science
[email protected]
Fabio [email protected]
Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and Technology
[email protected]
MS31
Uncertainty Quantification for Overcomplete Basis Representation in Computer Experiments
Overcomplete basis representation is a surrogate model formed by a linear combination of overcomplete basis functions. The method can be effectively used to mimic complicated and non-stationary response surfaces in computer experiments or numerical approximation. The coefficients in the model are inferred via the matching pursuit technique based on a sparse model assumption. Prediction is made based on the fitted model. Because it is a numerical method without a stochastic structure, it cannot readily give an assessment of the prediction uncertainty. We use a Bayesian approach to impose a multivariate normal prior on the collection of the coefficients in the overcomplete basis representation. Since this is a large dictionary, the prior is over a very large space. But the number of significant coefficients in the representation is usually much smaller (called effect sparsity in design of experiments). For a given data set, we can thus collapse the large space to a manageable size. This collapsing is achieved by using a Bayesian variable selection technique called stochastic search variable selection. Once the Bayesian framework is formulated, UQ can be readily implemented, but the computational details need to be worked out. The proposed UQ approach can be used to simulate posterior samples for the predicted model, from which Bayesian credible intervals are constructed. Numerical study in two cases demonstrates the success of the approach. (Based on joint work with R. B. Chen, W. C. Wang, and Rui Tuo.)
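The matching pursuit step that produces the sparse point estimate can be sketched with a bare-bones orthogonal matching pursuit over a random overcomplete dictionary; the Bayesian prior and stochastic search variable selection that supply the UQ are not shown, and the problem sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(21)

def omp(D, y, n_atoms):
    """Orthogonal matching pursuit: greedily select columns (atoms) of the
    overcomplete dictionary D, re-fitting by least squares at each step."""
    resid = y.copy()
    support = []
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(D.T @ resid)))   # atom most correlated
        if j not in support:                      # with current residual
            support.append(j)
        sub, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ sub
    coef = np.zeros(D.shape[1])
    coef[support] = sub
    return coef

# Hypothetical toy problem: 200 runs, 500-atom dictionary, 5 active atoms.
n, p, k = 200, 500, 5
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
beta = np.zeros(p)
active = rng.choice(p, size=k, replace=False)
beta[active] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1.0, 1.0], size=k)
y = D @ beta
coef = omp(D, y, k)
```

The fitted support is what the Bayesian layer would then treat as the collapsed, manageable coefficient space over which posterior samples and credible intervals are computed.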
Jeff Wu
Georgia Institute of Technology
[email protected]
MS32
Computational Nonlinear Stochastic Homogenization Using a Non-concurrent Multiscale Approach for Hyperelastic Heterogeneous Microstructures Analysis
We present an analysis of the effects of the stochastic dimension on a solver based on a tensorial product decomposition for studying propagation of uncertainties in nonlinear boundary value problems. An application to computational nonlinear stochastic homogenization for hyperelastic heterogeneous materials, where the geometry of the microstructure is random, is proposed.
Alexandre Clement
Universite Paris-Est, Laboratoire Modelisation et Simulation Multi Echelle
MSME UMR 8208 CNRS
[email protected]

Christian Soize
University of Paris-Est
MSME UMR 8208 CNRS
[email protected]
MS32
Pink and 1/f^α Noise Inputs in Stochastic PDEs
We consider colored noise having a power spectrum that decays as 1/f^α, where f denotes the frequency and α ∈ (0, 2]. We study, in the context of both 1-D and 2-D differential equations, methods for modeling random inputs as 1/f^α random fields. We show that the solutions to differential equations exhibit a strong dependence on α, indicating that further examination of how randomness in differential equations is modeled and simulated is warranted.
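A common way to sample such inputs in 1-D is spectral synthesis: shape the Fourier amplitudes of Gaussian white noise by f^(-α/2), so that the power spectrum decays like 1/f^α. This is a standard construction, not necessarily the one used in the talk:

```python
import numpy as np

def f_alpha_noise(n, alpha, rng):
    """Sample a length-n 1/f^alpha sequence by shaping the spectrum
    of Gaussian white noise (spectral synthesis)."""
    freqs = np.fft.rfftfreq(n)                       # 0, 1/n, ..., 1/2
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-alpha / 2.0)            # power ~ 1/f^alpha
    spec = amp * (rng.standard_normal(freqs.size)
                  + 1j * rng.standard_normal(freqs.size))
    x = np.fft.irfft(spec, n)
    return x / x.std()                               # normalize variance

rng = np.random.default_rng(5)
pink = f_alpha_noise(4096, 1.0, rng)                 # alpha = 1: pink noise
brown = f_alpha_noise(4096, 2.0, rng)                # alpha = 2: brown noise
```

A log-log fit of the periodogram of the generated sequence recovers a slope close to -α, which is a quick sanity check when using such samples as random inputs to a differential equation.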
John Burkardt
Department of Computational Science
Florida State University
[email protected]

Max Gunzburger
Florida State University
School for Computational Science
[email protected]

Miroslav Stoyanov
Florida State University
Department of Scientific Computing
[email protected]
MS32
Efficient Algorithm for Computing Rare Failure Probability
Evaluation of failure probability is a fundamental task in many important applications such as aircraft design, risk management, etc. Mathematically it can be formulated easily as integrals. The difficulty lies in the fact that the integrals are usually of high dimension and, more importantly, the geometry is irregular and known only through simulations of some complex system under consideration. In practical applications when full-scale simulations are time consuming, there exist no effective methods to accurately compute failure probability. The situation is more severe when dealing with rare failure probability, i.e., failure probability less than 10^-5. In this work we discuss the strengths and limitations of the existing popular methods. We further present an efficient algorithm to simulate failure probability accurately at very low simulation cost. The algorithm is able to capture rare failure probability with only a few hundred simulations and is suitable for practical complex systems. Both mathematical analysis and numerical examples are provided.
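The scale of the difficulty is easy to see on a textbook example: for a standard normal variable the failure probability P(X &gt; 4.5) is about 3.4e-6, so plain Monte Carlo with 2x10^4 samples typically observes no failures at all, while a mean-shifted importance sampler (a classical baseline, not the algorithm of the talk) estimates it to a few percent:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(9)
a = 4.5                               # failure event: X > a for X ~ N(0, 1)
p_exact = 0.5 * erfc(a / sqrt(2.0))   # about 3.4e-6

N = 20000
# Plain Monte Carlo: the expected number of observed failures is N*p ~ 0.07,
# so the estimate is typically exactly zero.
p_mc = np.mean(rng.standard_normal(N) > a)

# Importance sampling: sample from N(a, 1) and reweight each sample by the
# likelihood ratio phi(y) / phi(y - a) = exp(-a*y + a^2/2).
y = rng.standard_normal(N) + a
w = np.exp(-a * y + 0.5 * a * a)
p_is = np.mean(w * (y > a))
```

For failure regions known only through expensive black-box simulations, the proposal cannot be hand-crafted like this, which is the gap surrogate-accelerated methods aim to fill.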
Jing Li, Dongbin Xiu
Purdue University
[email protected], [email protected]
MS32
Hierarchical Multi-output Gaussian Process Regression for Uncertainty Quantification with Arbitrary Input Probability Distributions
We develop an efficient, fully Bayesian, non-intrusive framework for uncertainty quantification of computer codes. A hierarchical treed Gaussian process surrogate is built by adaptively decomposing the stochastic space in a sequential manner that calls upon Bayesian Experimental Design techniques for data gathering. The resulting surrogate provides point estimates as well as error bars for the statistics of interest. We demonstrate numerically that the suggested framework can resolve local, non-isotropic features, e.g., discontinuities, slowly varying dimensions, etc.
Ilias Bilionis
Center for Applied Mathematics
Cornell University
[email protected]
Nicholas Zabaras
Cornell University
[email protected]
MS33
Uncertainty in Geotechnical Engineering
The lecture will focus on the applications of probabilistic techniques to different problems in geotechnical engineering. The theoretical background, including representation of uncertainty in soil properties by means of random fields, will be briefly presented. A number of examples will illustrate the important contribution of probabilistic techniques to the assessment of the safety and reliability of geotechnical structures and to decision making in geotechnical engineering, including earth and rockfill dams, foundations, tunnels and shafts.
Gabriel Auvinet-Guichard
Instituto de Ingenieria
Universidad Nacional Autonoma de Mexico
[email protected]
MS33
Causality and Stochastic Processes with Varying Evidence Conditions
A methodology for introducing causality and stochastic processes into the probabilistic solution of inverse problems is presented. It is based on the representation of causal events by Bayesian networks coupled to different schemes of evidence availability, particularly when the evidence (data, model predictions and even experts' beliefs) varies and can be represented as a stochastic process. Illustrative cases including multi-physics geomechanics, natural hazards and energy-based developments are discussed.
Zenon Medina-Cetina
Zachry Department of Civil Engineering
Texas A&M University
[email protected]
MS33
Modelling Uncertainty in Risk Assessment for Natural Hazards
To devise optimal strategies for natural hazard risk mitigation, risk analysis professionals must assess and quantify hazard, vulnerability and risk in the context of safety. These are subject to major uncertainties. Uncertainty in different steps of the analysis is often categorized into aleatory (natural variability) and epistemic (due to lack of knowledge). Examples that illustrate how uncertainty is modeled in quantitative risk analysis, and the shortcomings of existing methods, will be presented.
Farrokh Nadim
International Centre for Geohazards (ICG)
Norwegian Geotechnical Institute
[email protected]
MS33
Estimation of Multiscale Fields Representing Anthropogenic CO2 Emissions from Sparse Observations
We investigate spatial models of anthropogenic CO2 emissions for use in inverse problems. Socioeconomic methods of CO2 emissions estimation show that their spatial distribution is strongly multiscale. We investigate the use of easily observed proxies, e.g., images of nightlights and population density, for emission modeling. We consider bases drawn from them, e.g., bivariate Gaussian kernels, or combine them to develop reduced wavelet representations. We examine trade-offs between model complexity, reconstruction accuracy and uncertainty.
Jaideep Ray
Sandia National Laboratories, Livermore, CA
[email protected]
Bart G. Van Bloemen Waanders, Sean A. McKenna
Sandia National Laboratories
[email protected], [email protected]
MS34
Challenges in Devising Manufactured Solutions for PDE Applications
We describe some challenges we've encountered in devising manufactured solutions for specific PDE applications and the ways in which the challenges were met. The applications are Darcy flow with a discontinuous permeability coefficient, free surface flow with a kinematic boundary condition, the solute transport equations, and the drift-diffusion equations. The examples demonstrate the flexibility and wide applicability of manufactured solutions, along with some limitations.
Patrick M. Knupp
Sandia National Laboratories
[email protected]
MS34
The Requirements and Challenges for Code Verification Needed to Support the Use of Computational Fluid Dynamics for Nuclear Reactor Design and Safety Applications
Today, computational fluid dynamics (CFD) is being used extensively in support of nuclear reactor design and safety analysis. Verification and Validation (V&V) are the technical tools (processes) by which CFD simulation credibility is quantified. In particular, code verification, solution verification, and validation are coupled into an overall process for assessing the accuracy of the computed CFD solution. This presentation will discuss some of the specific requirements, as well as challenges, for code verification needed to support the use of CFD for nuclear reactor design and safety applications.
Hyung B. Lee
Bettis Laboratory
[email protected]
MS34
Tools and Techniques for Code Verification Using Manufactured Solutions
Verifying computational science software using manufactured solutions requires the generation of the solutions, with associated forcing terms, and their reliable implementation in software. Several issues arise in generating solutions, including ensuring that they are meaningful and managing the algebraic complexity of the forcing terms. Addressing these issues within the open source Manufactured Analytical Solution Abstraction (MASA) library, which is designed to provide a reliable platform for verification with manufactured solutions, will be discussed.
Nicholas Malaya, Roy Stogner, Robert D. Moser
University of Texas at Austin
[email protected], [email protected], [email protected]
MS34
Code Verification: Practices and Perils
As scientific codes become more complex through the use of sophisticated algorithms, parallelization procedures and layers of interfaces, the probability of implementation mistakes that affect the computed solution increases. In this environment, rigorous code verification through the use of order-of-accuracy studies becomes essential to provide confidence that the intended code solves its governing equations correctly. We will discuss a set of best practices for code verification and some common issues that arise in their use.
Carol S. Woodward
Lawrence Livermore National Laboratory
[email protected]
MS35
Practical UQ for Engineering Applications with DAKOTA
Through DAKOTA, we strive to deploy general-purpose UQ software to a broad range of engineering applications, supporting risk-informed design. Practical engineering UQ methods must manage simulation cost, nonlinearity, non-smoothness, and potentially large parameter spaces. Emerging reliability, stochastic expansion, and interval estimation algorithms in DAKOTA address these challenges. These can be employed in mixed deterministic/probabilistic analyses such as optimization under uncertainty. This talk will highlight application and environment complexities that can limit efficient uncertainty quantification.
Brian M. Adams
Sandia National Laboratories
Optimization/Uncertainty Quantification
[email protected]
Laura Swiler
Sandia National Laboratories
Albuquerque, New Mexico
[email protected]
Michael S. Eldred
Sandia National Laboratories
Optimization and Uncertainty Quantification
[email protected]
MS35
Exact Calculations for Random Variables in Uncertainty Quantification Using a Computer Algebra System
The Maple-based APPL language performs exact probability calculations involving random variables. This talk introduces the language and presents several applications, including bootstrapping, goodness-of-fit, reliability, probability distribution selection, stochastic activity networks, time series analysis, and transient queueing analysis.
Lawrence Leemis
College of William and Mary
[email protected]
MS35
Open TURNS: Open Source Treatment of Uncertainty, Risk 'N Statistics
The need to assess robust performance for complex systems has led to the emergence of a new industrial simulation challenge: to take uncertainties into account when dealing with complex numerical simulation frameworks. Open TURNS is an open source software platform dedicated to uncertainty propagation by probabilistic methods, jointly developed since 2005 by EDF R&D, EADS Innovation Works, and PhiMECA. This talk gives an overview of the main features of the software from a user's viewpoint.
Anne-Laure Popelin
EDF R&D
[email protected]
Anne Dutfoy
EDF, Clamart, France
[email protected]
Paul Lematre
EDF R&D
INRIA
[email protected]
MS35
HPD Liberates Applied Probability AND Enables Comprehensively Tackling Design Engineering
HPD, which stands for Holistic Probabilistic Design, consists of a methodology and two breakthrough software suites for probabilistically tackling problems of any complexity in design engineering. Initially developed at Xerox Corporation, where variability of thousands of characteristics affecting system performance dominated product development efforts, the HPD software liberates applied probability and has exposed shortfalls of existing techniques. As importantly, HPD revolutionizes stochastic analysis and optimization for manufacturing industries.
Jean Parks, Chun Li
Variability
[email protected], [email protected]
MS36
Local Low Dimensionality of Complex Atmospheric Flows
We are investigating complex atmospheric flows in Owens Valley, California, using the Weather Research and Forecasting (WRF) Model. Owens Valley is particularly interesting since it displays complex flows, including clear-air turbulence and atmospheric rotors. We are exploring whether a Local Ensemble Transform Kalman Filter (LETKF) data assimilation scheme works well in such a geographical setting. This work determines how small an effective dimension the atmospheric dynamics reduce to over such terrain.
Thomas Bellsky
Michigan State University
Department of Mathematics
[email protected]
MS36
Assimilation of Thermodynamic Profiler Retrievals in the Boundary Layer: Comparison of Methods and Observational Error Specifications
A 2009 National Research Council report recommended investing in a nationwide network of thermodynamic profiling instruments for the atmospheric boundary layer to better forecast mesoscale storm and rainfall events. In this work, we complement the efforts of Otkin et al. (2011) and Hartung et al. (2011) by considering variational and hybrid data assimilation frameworks, and different approximations of the retrieved observation error covariance matrices.
Sean Crowell
National Severe Storms Laboratory
Norman, Oklahoma
[email protected]
Dave Turner
National Severe Storms Laboratory
[email protected]
MS36
Lagrangian Data Assimilation and Control: Hide and Seek
Data assimilation is the act of merging observed quantities of a physical system into a mathematical model to provide an estimate of the state. The Lagrangian framework is utilised when the observations are positions of underwater autonomous vehicles that move with the flow field. In this talk I will combine the data assimilation framework with techniques developed in control theory to provide a novel approach to inferring important structures of the flow field.
Damon Mcdougall
University of Warwick
United Kingdom
[email protected]
MS36
Uncertainty Quantification and Data Assimilation in Multiscale Systems Using Stochastic Homogenization
We study ensemble Kalman filtering on a low-dimensional multiscale system with fast and slow metastable regimes. We find that, surprisingly, a reduced forecast model with stochastic parametrization of the fast dynamics can better capture transitions between slow metastable states than when using the higher-dimensional, perfect deterministic model. This is explained in terms of greater reliability of the ensemble, or improved ensemble spread, using the stochastic model. Implications for stochastic parametrizations in data assimilation are discussed.
Lewis Mitchell
The University of Sydney
[email protected]
Georg A. Gottwald
School of Mathematics and Statistics
University of Sydney
[email protected]
MS37
Overview of Volcanic Hazard Assessment
Straight Monte-Carlo determination of the probability of rare catastrophic volcanic events is not possible. Our strategy uses a GaSP emulator to determine the region of the input space defining the catastrophic event. A suitable distribution on the inputs is then used to determine the probability of this region. Adequate handling of the many uncertainties present is possible by quantifying all uncertainties in terms of probability distributions and combining them in a Bayesian analysis.
M.J. Bayarri
University of Valencia
[email protected]
MS37
Modeling Large Scale Geophysical Flows: Physics and Uncertainty
Mathematical simulations of volcanic mass flows supplement field deposit studies, providing data from which to make predictions of volcanic hazards. Several parameters and functions must be specified as input in order to start the simulation. These inputs include the total volume of the mass flow, the initial direction of the flow, friction and dissipation factors, and the terrain over which the mass moves. These inputs are often poorly characterized, or are known to be inaccurate, and these inaccuracies affect the simulation outputs. Here we study the interplay between the physics of geophysical flows and the uncertain inputs, and examine the effect of uncertainty on outputs, in order to provide a quantitative measure of hazard at locations downstream from the volcano. This approach draws on insight from the geoscience community, methods of high performance computing, and novel use of Bayesian statistics and mathematical modeling.
E. Bruce Pitman
Dept. of Mathematics
Univ at Buffalo
[email protected]
M.J. Bayarri
University of Valencia
[email protected]
James Berger
Duke University
[email protected]
Eliza Calder
University at Buffalo
[email protected]
Abani K. Patra
SUNY at Buffalo
Dept of Mechanical Engineering
[email protected]
Elaine Spiller
Marquette University
[email protected]
Robert Wolpert
Duke University
[email protected]
MS37
Design and Use of Surrogates in Hazard Assessment
It is the rare but large volume events that will cause some location of interest to be inundated by a volcanic flow. Standard statistical sampling techniques would focus too many samples (e.g., volcanic flow simulations) on small volume events. Using a statistical emulator of flow simulations, we can focus attention on these infrequent but high-consequence events and construct a boundary in parameter space at the smallest volume events that would inundate a location of concern.
Elaine Spiller
Marquette University
[email protected]
MS37
Combining Deterministic and Stochastic Models for Hazard Assessment
The deterministic flow model TITAN-2D predicts the course, depth, and velocity of pyroclastic flows following dome collapse events of specified volumes and initial directions, for a region whose topography is available as digital elevation maps (DEMs). I describe how to combine evidence from TITAN-2D, a rapid stochastic emulator, and heavy-tailed stochastic models of flow volumes and directions, to quantify the hazard of a catastrophic inundation at specified locations, reflecting uncertainty about the models and data.
Robert Wolpert
Duke University
[email protected]
Elaine Spiller
Marquette University
[email protected]
MS38
Numerical Strategies for Epistemic Uncertainty Analysis
Most of the existing UQ analysis tools rely on knowledge of the probability distribution of the uncertain inputs, also known as aleatory uncertainty. In practice, one often does not possess a sufficient amount of knowledge to completely specify the distributions of the input uncertainty (epistemic uncertainty), and most of the existing UQ tools do not readily apply. In this talk we discuss some numerical strategies to efficiently quantify the impacts of epistemic uncertainty. The methods are based on numerically creating an accurate surrogate of the problem, without the need for input probability distributions. Probabilistic analysis can then be conducted in a post-processing step, without incurring additional simulation effort. We present the numerical algorithms, their convergence analysis, and examples to demonstrate the effectiveness of the strategies.
Xiaoxiao Chen
Purdue University
[email protected]
Eun-Jae Park
Dept of Computational Science and Engineering-WCU
Yonsei University
[email protected]
Dongbin Xiu
Purdue University
[email protected]
MS38
A Localized Strategy for Generalized Polynomial Chaos Methods
We present a numerical strategy to treat stochastic differential equations in high dimensions. The approach utilizes a local generalized polynomial chaos (gPC) expansion to approximate the solution in "elements," and such a local expansion can be low dimensional. Connecting conditions are then imposed to enforce that the solution satisfies the desired mathematical properties of the original differential equations. When properly implemented, the method can reduce a high-dimensional SPDE in the global domain to a group of much lower dimensional SPDEs in elements, and the simulation speed-up can be highly significant. We will discuss the technical details of the approach and use examples to demonstrate its efficiency.
Yi Chen, Dongbin Xiu
Purdue University
[email protected], [email protected]
MS38
Stochastic Integration Methods and their Application to Reliability Analysis
Among the most popular methods for computing failure probabilities is FORM. If the curvature of the failure function is large, FORM is known to become inaccurate. Under certain conditions the failure integral may be carried over to an expectation value. The computation of this integral leads to high dimensional integration over the random space. Several sparse grid integration methods are considered and compared on test examples from the literature. The complete method is applied to the life cycle of a wheel set of a train.
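As background for the comparison, here is a minimal sketch of FORM via the Hasofer-Lind iteration on a toy linear limit state in standard normal space; the limit state and its coefficients are illustrative assumptions, not from the talk. For a linear limit state FORM is exact, which makes the toy a convenient check.

```python
import math

# Toy linear limit state in standard normal space:
# failure when g(u) = 3 - u1 - u2 < 0.
def g(u):
    return 3.0 - u[0] - u[1]

def grad_g(u):
    return (-1.0, -1.0)

# Hasofer-Lind / Rackwitz-Fiessler iteration for the most probable
# failure point (MPP): the point on g(u) = 0 closest to the origin.
u = (0.0, 0.0)
for _ in range(20):
    gu = g(u)
    g1, g2 = grad_g(u)
    norm2 = g1 * g1 + g2 * g2
    scale = (g1 * u[0] + g2 * u[1] - gu) / norm2
    u = (g1 * scale, g2 * scale)

beta = math.hypot(u[0], u[1])                 # reliability index
p_form = 0.5 * math.erfc(beta / math.sqrt(2))  # Phi(-beta) ~ 0.0169
```

When the failure surface has large curvature, as the abstract notes, Phi(-beta) ignores the higher-order geometry of the surface and degrades; integrating the failure integral directly, e.g. with sparse grid quadrature, avoids that approximation.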
Utz Wever
Siemens AG
[email protected]
Albert B. Gilg
Siemens AG
Corporate Technology
[email protected]
Meinhard Paffra
Siemens AG
[email protected]
MS38
Uncertainty Quantification with High Dimensional, Experimentally Measured Inputs
In this work, we tackle in a holistic manner the problem of uncertainty quantification in the presence of high-dimensional, experimentally measured inputs. Non-linear model reduction techniques are used to reduce the dimensionality of the experimentally observed input and estimate its probability density. A surrogate of the underlying system is created via the interplay of High Dimensional Model Representation (HDMR/ANOVA) and the treed Bayesian regression frameworks developed recently by our group. Numerical emphasis is given to uncertainty quantification of flow through porous media.
Ilias Bilionis
Center for Applied Mathematics
Cornell University
[email protected]
Nicholas Zabaras
Cornell University
[email protected]
MS39
Several Interesting Parabolic Test Problems
An important consideration for a posteriori error estimation methods is their performance on nonlinear PDEs for problems that involve weak solution features or other degeneracies. Typically, the errors in such regions are large, and nonlinear aspects of the error evolution cannot be neglected. To test error estimation techniques in this regime, we have formulated several problems based on nonlinear, parabolic PDEs. We will discuss these problems and demonstrate results from the nonlinear error transport method.
Jeffrey A. Hittinger
Center for Applied Scientific Computing
Lawrence Livermore National Laboratory
[email protected]
Jeffrey W. Banks
Lawrence Livermore National Laboratory
[email protected]
Jeffrey M. Connors
Lawrence Livermore National Laboratory
Center for Applied Scientific Computing
[email protected]
Carol S. Woodward
Lawrence Livermore National Laboratory
[email protected]
MS39
Exact Error Behavior of Simple Numerical Problems
Although there is significant mathematical literature regarding convergence analysis, actual testing of codes is done by numerical experiment. The gap between numerical results and mathematical theory leaves many questions unanswered, the most important being: how do we know the observed convergence rate is the asymptotic one? In this paper we outline a class of exact solutions for the modified equation that allow us to explore this and other questions in a more rigorous manner.
Daniel Israel
Los Alamos National Laboratory
[email protected]
MS39
Verification and Validation in Solid Mechanics
Verification and validation in solid mechanics lags behind other disciplines, possibly because of the larger number of variables involved. Verification in solid mechanics must extend from small deformations using simple constitutive models to large deformations using complicated constitutive models. Innovative large-deformation manufactured solutions, applicable to history-dependent materials, are presented. Also, novel visualizations of stress states actually reached in applications are used to expose the need for experiments involving transient loading under massive deformations.
Krishna Kamojjala, Rebecca Brannon
University of Utah
[email protected], [email protected]
MS39
Cross-Application of Uncertainty Quantification and Code Verification Techniques
In most applications, sensitivity analysis (SA) and uncertainty quantification (UQ) ignore numerical error when applied to data from computer simulations, but in this work numerical error is the focus. The differences in SA results for models with and without numerical error are examined, with implications for SA and UQ consumers. Then, UQ techniques are used to directly investigate the truncation error of numerical methods.
V. Gregory Weirs
Sandia National Laboratories
[email protected]
MS40
Sampling and Inference for Large-scale Inverse Problems Using an Optimization-based Approach
In the Bayesian approach to inverse problems, a posterior probability density function is written down that depends upon the likelihood and prior probability density functions. Estimators for unknown parameters can then be computed, and uncertainty quantification performed, using output from an MCMC method which samples from the posterior. In this talk, the focus is on efficient MCMC sampling techniques for large-scale inverse problems that make use of optimization algorithms.
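The talk's optimization-accelerated samplers are not reproduced here; the sketch below shows only the baseline they improve upon, a random-walk Metropolis sampler for a toy scalar inverse problem whose Gaussian posterior can be checked analytically. All names and values are illustrative assumptions.

```python
import math
import random

random.seed(1)

# Toy linear-Gaussian inverse problem: data d = m + noise with
# noise ~ N(0, sigma^2) and prior m ~ N(0, 1).  The posterior is
# Gaussian with mean d / (1 + sigma^2) = 1.6, so the chain can be
# checked against the analytic answer.
d, sigma = 2.0, 0.5

def log_post(m):
    return -0.5 * ((d - m) / sigma) ** 2 - 0.5 * m * m

# Random-walk Metropolis: propose a local move, accept with
# probability min(1, posterior ratio).
m, samples = 0.0, []
for _ in range(50_000):
    prop = m + random.gauss(0.0, 0.8)
    if math.log(random.random()) < log_post(prop) - log_post(m):
        m = prop
    samples.append(m)

burn = samples[5_000:]            # discard burn-in
post_mean = sum(burn) / len(burn)
```

For large-scale problems the expensive step is each `log_post` evaluation (a forward PDE solve), which is why proposals informed by optimization, rather than blind random walks, matter.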
Johnathan M. Bardsley
University of Montana
[email protected]
MS40
Approximate Marginalization of Uninteresting Distributed Parameters in Inverse Problems
In the Bayesian inversion framework, all unknowns are treated as random variables and all uncertainties can be modeled systematically. Recently, the approximation error approach was proposed for handling model errors due to unknown nuisance parameters and model reduction. In this approach, approximate marginalization of these errors is carried out before the estimation of the interesting variables. We discuss the adaptation of the approximation error approach for approximate marginalization of uninteresting distributed parameters in inverse problems.
Ville P. Kolehmainen
Department of Applied Physics
University of Eastern Finland
[email protected]
Jari P. Kaipio
Department of Applied Physics, University of Eastern Finland
Department of Mathematics, University of Auckland
[email protected]
Tanja Tarvainen
Department of Applied Physics
University of Eastern Finland
[email protected]
Simon Arridge
University College London
[email protected]
MS40
Recovery from Modeling Errors in Non-stationary Inverse Problems: Application to Process Tomography
In non-stationary inversion, time-varying targets are estimated based on measurements collected during the evolution of the targets. In the reconstructions, evolution models of the targets are utilized. The solutions of inverse problems are sensitive to errors in the models. We demonstrate that in some cases, however, it is possible to recover from modeling errors by statistical modeling of the error sources. As an example case, we consider imaging of moving fluids in industrial process tomography.
Aku Seppanen, Antti Lipponen
Department of Applied Physics
University of Eastern Finland
[email protected], [email protected]
Jari P. Kaipio
University of Eastern Finland, Finland
University of Auckland, New Zealand
[email protected]
MS40
Variational Approximations in Large-scale Data Assimilation: Experiments with a Quasi-Geostrophic Model
The extended Kalman filter (EKF) is one of the most widely used data assimilation methods. However, EKF is computationally infeasible in large-scale problems, due to the need to store full covariance matrices. Recently, approximations to EKF have been proposed, where the covariance matrices are computed via Limited Memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) and Conjugate Gradient (CG) optimization methods. In this talk, we review the proposed methods and present data assimilation results using a nonlinear quasi-geostrophic benchmark model.
Antti Solonen
Lappeenranta University of Technology
Department of Mathematics and Physics
[email protected]
MS41
A Spectral Uncertainty Quantification Approach in Ocean General Circulation Modeling
We address the impact of uncertain model parameters on simulations of the oceanic circulation. We focus on high-resolution simulations in the Gulf of Mexico during the passage of Hurricane Ivan. Uncertainties in subgrid mixing and wind drag parametrizations for HYCOM are propagated using Polynomial Chaos (PC) expansions. A non-intrusive, sparse spectral projection is used to quantify the stochastic model response. A global sensitivity analysis is finally presented that reveals the dominant contributors to prediction uncertainty.
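As a concrete, purely illustrative instance of non-intrusive spectral projection: each PC coefficient is a projection integral of the model output against an orthogonal polynomial, evaluated by quadrature, after which mean and variance follow directly from the coefficients. The model function, single input, and truncation order below are toy assumptions, not the HYCOM setting.

```python
import math

# 5-point Gauss-Hermite rule for a standard normal input
# (probabilists' convention; nodes/weights from the classical tables).
nodes = [-2.8569700138728056, -1.3556261799742659, 0.0,
         1.3556261799742659, 2.8569700138728056]
weights = [0.011257411327720688, 0.2220759220056126, 8.0 / 15.0,
           0.2220759220056126, 0.011257411327720688]

# Illustrative model response with one standard normal input xi.
def f(xi):
    return math.exp(0.3 * xi)

# Probabilists' Hermite polynomials He_k via the recurrence
# He_{n+1}(x) = x He_n(x) - n He_{n-1}(x).
def He(k, x):
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

# Non-intrusive projection: c_k = E[f He_k] / E[He_k^2], with the
# expectation replaced by quadrature; note E[He_k^2] = k!.
order = 3
coeffs = []
for k in range(order + 1):
    num = sum(w * f(x) * He(k, x) for x, w in zip(nodes, weights))
    coeffs.append(num / math.factorial(k))

mean = coeffs[0]                     # the 0th coefficient is the mean
var = sum(c * c * math.factorial(k)  # variance from the higher modes
          for k, c in enumerate(coeffs) if k > 0)
```

The same structure carries over to many inputs, where sparse (Smolyak-type) quadrature keeps the number of model evaluations manageable.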
Alen Alexanderian, Justin Winokur, Ihab Sraj
Johns Hopkins University
[email protected], [email protected], [email protected]
Ashwanth Srinivasan
University of Miami
[email protected]
Mohamed Iskandarani
Rosenstiel School of Marine and Atmospheric Sciences
University of Miami
[email protected]
William Carlisle Thacker
National Oceanic and Atmospheric Administration
[email protected]
Omar M. Knio
Duke University
[email protected]
MS41
Uncertainty Quantification of Equation-of-state Closures and Shock Hydrodynamics
We seek to propagate uncertainties in the experimental and numerical atomistic data used to define multiphase equation-of-state (EOS) models to output quantities of interest computed by the equations of shock hydrodynamics. A Bayesian inference approach for characterizing and representing the EOS parametric uncertainty and model discrepancy information is presented. Given the model uncertainty representation, we explicitly demonstrate a viable approach for tabular delivery of uncertain EOS information to the shock hydrodynamics modeling community. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Allen C. Robinson, Robert D. Berry, John H. Carpenter
Sandia National Laboratories
[email protected], [email protected], [email protected]
Bert J. Debusschere
Energy Transportation Center
Sandia National Laboratories, Livermore CA
[email protected]
Richard R. Drake, Ann E. Mattsson
Sandia National Laboratories
[email protected], [email protected]
William J. Rider
Sandia National Laboratories
[email protected]
MS41
Optimal Uncertainty Quantification and (Non-)Propagation of Uncertainties Across Scales
Posing UQ problems as optimization problems over infinite-dimensional spaces of probability measures and response functions gives a method for obtaining sharp bounds on output uncertainties given generic information about input uncertainties. These problems can be solved computationally and sometimes in closed form. Surprisingly, our results show that input uncertainties may fail to propagate across scales if the probabilities and response functions are imperfectly known.
Tim Sullivan, Michael McKerns, Michael Ortiz
California Institute of Technology
[email protected], [email protected], [email protected]
Houman Owhadi
Applied Mathematics
[email protected]
Clint Scovel
Los Alamos National Laboratory
[email protected]
Florian Theil
University of Warwick
Mathematics Institute
MS41
Coarse Graining with Uncertainty: A Relative Entropy Framework
In this work, we develop a generic framework for uncertainty propagation from an all-atom (AA) system to a coarse-grained (CG) system. The AA system is assumed to follow a probability law that depends on some uncertain parameters. The question we explore is how this uncertainty affects the induced probability law of the CG system. Our solution is posed as a stochastic optimization problem in terms of relative entropies, and an efficient solution scheme is developed. The efficacy of the framework is demonstrated numerically.
Nicholas Zabaras
Cornell University
[email protected]
Ilias Bilionis
Center for Applied Mathematics
Cornell University
[email protected]
MS42
UQ Practices and Standards for Design and Manufacturing Process Engineering
Design and manufacturing process engineering have always necessitated some degree of uncertainty management. Increasingly complicated systems, with greater performance requirements/constraints, require corresponding advances in uncertainty quantification (UQ) practices. To make effective engineering decisions, these practices are typically implemented with software tools that must address multiple types of uncertainty across a spectrum of models, measurements, and simulations. This talk discusses the present evolution of UQ practices and the potential for improvement through standards development.
Mark Campanelli
National Institute of Standards and Technology
[email protected]
MS42
Systematic Integration of Multiple Uncertainty Sources in the Design Process
A design optimization methodology is presented that systematically accounts for various sources of uncertainty: physical variability (aleatory uncertainty), data uncertainty (epistemic) due to sparse or imprecise data, and model uncertainty (epistemic) due to modeling errors/approximations. A Bayes network approach effectively integrates both aleatory and epistemic uncertainties, and fuses multiple formats of information from models, experiments, expert opinion, and model error estimates. The methodology is illustrated using a three-dimensional wing design problem involving coupled multi-disciplinary simulation.
Sankaran Mahadevan
Vanderbilt University
[email protected]
MS42
Generalized Chapman-Kolmogorov Equation for Multiscale System Analysis
In multiscale system analysis, the effect of incertitude due to lack of knowledge may become significant such that it needs to be quantified separately from inherent randomness. A generalized Chapman-Kolmogorov equation based on generalized interval probability is proposed to describe drift-diffusion and reaction processes under aleatory and epistemic uncertainties. Numerical algorithms are developed to solve the generalized Fokker-Planck equation and interval master equation, where the respective evolutions of the two uncertainty components are distinguished.
Yan Wang
Georgia Institute of Technology
[email protected]
MS43
Macroscopic Properties of Isotropic and Anisotropic Fracture Networks from the Percolation Threshold to Very Large Densities
Some recent studies of fracture networks are summarized. They are made possible by a very versatile numerical technique based on a three-dimensional discrete description of the fractures. Systematic calculations have been made up to very large densities for isotropic and anisotropic networks consisting of fractures distributed according to a power law. The percolation threshold and the macroscopic permeability are shown to be almost independent of the fracture shape when displayed as functions of a dimensionless density calculated by means of the excluded volume.
Pierre Adler
Universite Pierre et Marie Curie
[email protected]
Jean Francois Thovert, Valeri Mourzenko
[email protected], [email protected]
MS43
Uncertainty Quantification Methods for Unsaturated Flow in Porous Media
We present a stochastic collocation (SC) method to quantify epistemic uncertainty in predictions of unsaturated flow in porous media. SC provides a non-intrusive framework for uncertainty propagation in models based on the nonlinear Richards equation with arbitrary constitutive laws describing soil properties (relative conductivity and retention curve). To illustrate the approach, we use the Richards equation with the van Genuchten-Mualem model for water retention and relative conductivity to describe infiltration into an initially dry soil whose uncertain parameters are treated as random fields. These parameters are represented using a truncated Karhunen-Loève expansion; the Smolyak algorithm is used to construct a structured set of collocation points from univariate Gauss quadrature rules. A resulting deterministic problem is solved for each collocation point, and, together with the collocation weights, the statistics of hydraulic head and infiltration rate are computed. The results are in agreement with Monte Carlo simulations. We demonstrate that highly heterogeneous soils (large variances of hydraulic parameters) require cubature formulas of high degree of exactness, while their short correlation lengths increase the dimensionality of the problem. Both effects increase the number of collocation points and thus of deterministic problems to solve, affecting the overall computational cost of uncertainty quantification.
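The workflow described here (represent the uncertain inputs by a truncated expansion, solve one deterministic problem per collocation point, then form weighted statistics) can be sketched generically. This is an illustrative sketch only, not the authors' code: for clarity it uses a full tensor Gauss-Hermite grid in place of the Smolyak sparse grid, and a toy closed-form "solver" stands in for a Richards-equation solve.

```python
import numpy as np

def collocation_moments(model, dim, level):
    """Mean and variance of model(xi) for i.i.d. standard normal inputs.

    Uses a full tensor Gauss-Hermite grid for clarity; in higher
    dimensions a Smolyak sparse grid would replace the tensor product.
    """
    # 1D Gauss-Hermite rule for the standard normal weight
    x, w = np.polynomial.hermite_e.hermegauss(level)
    w = w / w.sum()                       # normalize to probability weights
    # tensorize nodes and weights over `dim` directions
    grids = np.meshgrid(*([x] * dim), indexing="ij")
    nodes = np.stack([g.ravel() for g in grids], axis=-1)
    weights = np.prod(np.meshgrid(*([w] * dim), indexing="ij"), axis=0).ravel()
    # one deterministic solve per collocation point
    vals = np.array([model(xi) for xi in nodes])
    mean = weights @ vals
    var = weights @ (vals - mean) ** 2
    return mean, var

# toy stand-in for a deterministic forward solve
mean, var = collocation_moments(lambda xi: np.exp(0.1 * xi.sum()), dim=2, level=5)
```

With a real PDE solver, `model` would wrap a full deterministic simulation, which is what makes the method non-intrusive.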
David A. Barajas-Solano
Dept. of Mechanical and Aerospace Engineering
University of California, San Diego
[email protected]
MS43
A Comparative Study on Reservoir Characterization Using a Two-Phase Flow Model
We consider the problem of characterizing the porosity and permeability of a reservoir using a two-phase flow model. We use a Bayesian MCMC framework to sample these parameters conditioned on available dynamic measurement data. A prefetching procedure to parallelize the MCMC is studied and compared with the multistage MCMC. A set of numerical examples is presented to illustrate the comparison.
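The prefetching idea (speculatively evaluating the posterior at the proposals that would follow both acceptance and rejection, so several expensive forward solves can run concurrently) can be illustrated with a minimal depth-2 sketch. This is our own illustration under stated assumptions, not the authors' implementation; the function names and the thread-pool parallelism are ours.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def prefetching_metropolis(logpost, x0, n_outer, step=0.5, seed=0):
    """Two Metropolis steps per round with depth-2 proposal prefetching.

    All three posterior evaluations of a round (the first proposal and
    the two possible second proposals) are submitted to a thread pool
    at once, mimicking parallel prefetching of expensive forward solves.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp_x = logpost(x)
    chain = [x.copy()]
    with ThreadPoolExecutor(max_workers=3) as pool:
        for _ in range(n_outer):
            y1 = x + step * rng.standard_normal(x.shape)
            y2a = y1 + step * rng.standard_normal(x.shape)  # used if y1 accepted
            y2b = x + step * rng.standard_normal(x.shape)   # used if y1 rejected
            f1, f2a, f2b = pool.map(logpost, (y1, y2a, y2b))
            # first step
            if np.log(rng.random()) < f1 - lp_x:
                x, lp_x = y1, f1
                y2, lp_y2 = y2a, f2a
            else:
                y2, lp_y2 = y2b, f2b
            chain.append(x.copy())
            # second step, reusing the prefetched evaluation
            if np.log(rng.random()) < lp_y2 - lp_x:
                x, lp_x = y2, lp_y2
            chain.append(x.copy())
    return np.asarray(chain)
```

For a reservoir problem, `logpost` would wrap a two-phase flow simulation, and the speculative evaluations would be farmed out to separate compute nodes rather than threads.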
Victor E. Ginting
Department of Mathematics
University of Wyoming
[email protected]

Felipe Pereira
Center for Fundamentals of Subsurface Flow
University of Wyoming
[email protected]

Arunasalam Rahunanthan
Center for Fundamentals of Subsurface Flow
University of Wyoming, Laramie, WY
[email protected]
MS43
Adaptive AMG for Diffusion Equations with Stochastic Coefficients
We are interested in solving a 2nd-order diffusion equation where there is uncertainty in the conductivity field. Conditioning conductivity fields to dynamic data using Markov chain Monte Carlo requires repeated solution of the forward problem for many thousands of realizations of the field. We present an adaptive algebraic multigrid method which automatically adapts to the changing conductivity field in an inexpensive manner. Numerical experiments are discussed for an interesting model problem.
Christian Ketelsen
University of Colorado at Boulder
[email protected]

Panayot Vassilevski
Lawrence Livermore National Laboratory
[email protected]
MS43
Stochastic Models, Simulation and Analysis in Subsurface Flow and Transport
We discuss stochastic models for heterogeneous random media in the context of numerically computing subsurface flow and transport of fluids. Simulated models are based on series expansions in a variety of bases as well as using geostatistical libraries. The parameterizations employed allow integration of prior observations of the random substrate. Pressures and fluxes are computed numerically and transport curves are compared under different stochastic regimes.
Mina E. Ossiander
Math Dept
Oregon State University
[email protected]

Malgorzata Peszynska, Veronika S. Vasylkivska
Department of Mathematics
Oregon State University
[email protected], [email protected]
MS44
Spatial Analysis of Energy Consumption by Non-Domestic Buildings in Greater London
This research investigates the influence of local temperatures, building shapes, and mix of buildings across districts on heating energy consumed by non-domestic buildings in London. The goal is to reduce the uncertainty in energy simulation model outputs of buildings by accounting for spatial variations of energy consumption across the city. We explore (and compare) three statistical formulations to quantify the spatial autocorrelations: a spatial autoregressive error model, Bayesian geographically weighted regression, and a Bayesian CAR model.
Ruchi Choudhary
Department of Engineering
University of Cambridge
[email protected]

Wei Tian
University of Cambridge
[email protected]
MS44
Spatial Statistical Calibration of a Turbulence Model
The k-ε turbulence model is widely used in CFD modeling. However, it has problems predicting flow separation as well as unconfined and transient flows. We calibrate its parameters against wind tunnel observations of turbulent kinetic energy (TKE) distributed over a two-dimensional cross section of a street canyon. We use a sequential design to pick optimal spatial locations. We then build an emulator to quantify the uncertainty of the modeled TKEs.
Nina Glover, Serge Guillas
University College London
[email protected], [email protected]
Liora [email protected]
MS44
Adding Spatial Dependence Constraints to a Geophysical Inverse Problem
Given a geophysical model of seismic pressure wave attenuation, measurements of this attenuation at points in space, and partial information on subsurface soil properties (model parameters), I am interested in estimating the unknown soil properties. This inversion has been done via a stochastic optimization routine because of the complexity of the geophysical model. The goal here is to reduce uncertainty in these estimates by incorporating spatial dependency constraints in the model inversion.
Eugene Morgan
Department of Civil and Environmental Engineering
Duke University
[email protected]
MS44
Assessing the Spatial Distribution of Fish Species Using Bayesian Latent Gaussian Models
We describe the use of the integrated nested Laplace approximation (INLA) approach to infer the spatial distribution of fish species, taking into account the spatial autocorrelation and quantifying the uncertainty. The type of data available (and the methodology used to gather it) will determine the appropriate modelling scheme. In the first example, binary data has been collected that can be regarded as randomly sampled. In the second example, we analyze quantitative data with preferential sampling.
Facundo Munoz
University of Valencia
[email protected]

María Pennino
Spanish Institute of Oceanography
[email protected]

David Conesa, Antonio López-Quílez
University of Valencia
[email protected], [email protected]
MS45
Uncertainty Quantification for Large Multivariate Spatial Computer Model Output with Climate Applications
Characterizing uncertainty in climate predictions and models is crucial for climate decision-making. One uncertainty source is due to the inability to resolve complex physical processes, which is approximated using a parameterization. We characterize parameter uncertainty by calibrating the model to physical observations. Both model output and observations are large multivariate space-time fields. We use Bayesian methods to infer these parameters while incorporating space-time dependence and other uncertainty sources using a Gaussian process emulator and dimension reduction.
K. Sham Bhat
Los Alamos National Laboratory
[email protected]
MS45
Quantifying Uncertainty for Climate Change Predictions with Model Error in Non-Gaussian Systems with Intermittency
Synergy between empirical information theory and the fluctuation-dissipation theorem provides a systematic framework for improving sensitivity and predictive skill for imperfect models of complex natural systems. We utilize a suite of increasingly complex nonlinear models with intermittent hidden instabilities and time-periodic features to illustrate the advantages of such an approach, as well as the role of model errors due to coarse-graining, moment closure approximations, and the memory of initial conditions in imperfect prediction.
Michal Branicki
New York University
[email protected]

Andrew Majda
Courant Institute
[email protected]
MS45
Upper Ocean Singular Vectors of the North Atlantic: Implications for Linear Predictability and Observational Requirement
Recent interest in decadal prediction in the Atlantic sector presumes that the ocean carries sufficient low-frequency predictive skill. We explore the limits of predictability in sea surface temperature and meridional overturning circulation from singular vector calculations with combined tangent linear and adjoint versions of an ocean general circulation model.
Patrick Heimbach
MIT
[email protected]
Laure Zanna
Oxford University
[email protected]

Andrew M. Moore
Univ. of California at Santa Cruz
[email protected]

Eli Tziperman
Harvard University
[email protected]
MS45
Quantifying Uncertainties in Fully Coupled Climate Models
Fully coupled climate models are the most realistic tools used to simulate climate change. They incorporate highly detailed, interacting components for the atmosphere, ocean, and sea ice. There is a dire need to quantify their uncertainties, but coupled climate models are computationally very demanding and contain many sources of uncertainty. We describe efforts to quantify uncertainties of the coupled climate system using perturbed-parameter ensembles of the Community Earth System Model.
Scott Brandon, Donald D. Lucas, Curt Covey, David Domyancic, Gardar Johannesson, Stephen Klein, John Tannahill, Yuying Zhang
Lawrence Livermore National Laboratory
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
MS46
Efficient Spectral Galerkin Method with Orthogonal Polynomials for SPDEs
In this talk we first develop a fast algorithm to evaluate orthogonal polynomial expansions for d-variable functions on sparse grids. Then we apply this fast algorithm to develop an efficient spectral method to approximate the solutions of stochastic partial differential equations with random input data. Three aspects distinguish this work from existing ones: 1. our algorithm is applicable to L2 functions with arbitrary probability density weight; 2. the use of orthogonal polynomials results in matrix sparsity; 3. an exponential rate of convergence of the numerical algorithm is rigorously proved.
Yanzhao Cao
Department of Mathematics & Statistics
Auburn University
[email protected]
MS46
Adaptive Methods for Elliptic Partial Differential Equations with Random Operators
Solutions of random elliptic boundary value problems admit efficient approximations by polynomials on the parameter domain, but it is not clear a priori which polynomials are most significant. I will present numerical methods that adaptively construct suitable spaces of polynomials. These methods are based on adaptive finite elements and adaptive wavelet algorithms, with orthonormal polynomial systems in place of wavelet bases. They construct efficient sparse representations of the approximate solution.
Claude Gittelson
Purdue University
[email protected]
MS46
Efficient Nonlinear Filtering of a Singularly Perturbed Stochastic Hybrid System
Abstract not available at time of publication.
Boris Rozovski
Brown University
boris [email protected]
MS46
Two Stochastic Modeling Strategies for Elliptic Problems
In this talk, we compare two stochastic elliptic models: −∇·(a(x,ω)∇u(x,ω)) = f(x) and −∇·((a^{-1})^{∗(-1)} ∗ ∇u(x,ω)) = f(x), where a(x,ω) is a lognormal random field, with respect to the two characteristic parameters of the underlying Gaussian random field, i.e., the standard deviation and the correlation length. Some related numerical issues and numerical results will be discussed.
Xiaoliang Wan
Louisiana State University
Department of Mathematics
[email protected]
MS47
Putting Computational Inference into an FPGA
We are building a compiler to map large numerical computations into FPGAs (Field Programmable Gate Arrays). Our target application is MCMC (Markov chain Monte Carlo) based inference for ECT (Electrical Capacitance Tomography). We aim to realize the good compute throughput of FPGAs, both in terms of performance-per-dollar and performance-per-Watt, to perform real-time UQ embedded into industrial sensors.
Colin Fox
University of Otago, New Zealand
[email protected]
MS47
Model Reductions by MCMC
Identification of models is often complicated by the fact that the available experimental data from field measurements is noisy or incomplete. Moreover, models may be complex and contain a large number of correlated parameters. As a result, the parameters are poorly identified by the data, and the reliability of the model predictions is questionable. We consider a general scheme for reduction and identification of dynamic models using two modern approaches: Markov chain Monte Carlo sampling methods together with asymptotic model reduction techniques. The ideas are illustrated with examples related to bio-medical applications and epidemiology.
Heikki Haario
Lappeenranta University of Technology
Department of Mathematics and Physics
[email protected]
MS47
Advanced Uncertainty Evaluation of Climate Models by Monte Carlo Methods
Uncertainties in future climate projections are often evaluated based on the perceived spread of ensembles of multi-model climate projections. Here we focus on the uncertainties related to a single climate model and its closure parameters that are used in physical parameterization schemes, and cover topics such as specification of the cost function and parameter identification by Monte Carlo methods.
Marko Laine
Finnish Meteorological Institute
[email protected]
MS47
Covariance Estimation Using Chi-squared Tests
Regularized solutions of ill-posed inverse problems can be viewed as solutions of the original problem when the problem is regularized by adding statistical information to it. The discrepancy principle finds a regularization parameter by applying a chi-squared test, and we extend and improve the approach to estimate a regularization matrix. This regularization matrix can be viewed as an estimate of an error covariance matrix. Results will be shown from an event reconstruction problem.
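As a concrete illustration of the scalar version of this idea (a sketch under our own assumptions, not the talk's implementation): the discrepancy principle selects the Tikhonov parameter so that the weighted residual is close to the expected value of its chi-squared statistic.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def discrepancy_lambda(A, b, sigma, lam_grid):
    """Pick lam so that ||A x - b||^2 / sigma^2 is closest to its
    expected chi-squared value m (here taken as the data size)."""
    m = len(b)
    stats = [np.sum((A @ tikhonov(A, b, lam) - b) ** 2) / sigma**2
             for lam in lam_grid]
    return lam_grid[int(np.argmin(np.abs(np.array(stats) - m)))]

# synthetic ill-posed problem with known noise level
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 40), 12, increasing=True)  # ill-conditioned
x_true = rng.standard_normal(12)
sigma = 0.01
b = A @ x_true + sigma * rng.standard_normal(40)
lam = discrepancy_lambda(A, b, sigma, np.logspace(-8, 2, 80))
x_reg = tikhonov(A, b, lam)
```

The talk's extension replaces this one-parameter search by the estimation of a whole regularization matrix, interpreted as an error covariance.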
Jodi Mead
Boise State University
Department of Mathematics
[email protected]
MS48
Heterogeneous Deformation of Polycrystalline Metals and Extreme Value Events
Abstract not available at time of publication.
Curt Bronkhorst
Los Alamos National Laboratory
[email protected]
MS48
Coupled Chemical Master Equation - Fokker-Planck Solver for Stochastic Reaction Networks
Stochastic reaction networks are generally governed by the Chemical Master Equation (CME). While the CME, as a high-dimensional ODE system, is hard to solve in general, its continuum approximations, e.g. Fokker-Planck equations (FPE), are reliable alternatives when the species count is large enough. We present a proof-of-concept methodology for a hybrid construction that effectively combines CME and FPE, employing a discrete-scale CME solution in regions where the continuum FPE is inaccurate.
Khachik Sargsyan, Cosmin Safta
Sandia National Laboratories
[email protected], [email protected]

Bert J. Debusschere
Energy Transportation Center
Sandia National Laboratories, Livermore, CA
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA, USA
[email protected]
MS48
A New Approach for Solving Multiscale SPDEs using Probabilistic Graphical Models
We propose a hierarchical representation of random fields by capturing the stochastic input at fine scale with locally reduced models and combining them with graphical models on the coarse scale. Regression models are constructed such that local features are taken as input and multiscale basis functions as output. In this way, we can draw samples of coarse-scale properties from the graphical probabilistic model, obtain multiscale basis functions with the regression models, and solve SPDEs on the coarse scale.
Nicholas Zabaras
Cornell University
[email protected]

Jiang Wan
Cornell University
[email protected]
MS49
A Reduced Order Model for a Nonlinear, Parameterized Heat Transfer Test Case
We present a reduced order model (ROM) for heat transfer through a composite fiber with two material properties. The ROM is constructed by interpolating the components of a modal decomposition (SVD) of a sampling of high-fidelity models, each computed with different material properties. We are able to scale the SVD using a novel implementation in MapReduce. We then use the ROM as a cheap surrogate to study the probability of failure of the system.
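The interpolate-the-SVD-modes construction can be sketched for a single scalar parameter. This is an illustrative stand-in only: the toy field, parameter range, and cubic interpolation are our choices, and the MapReduce-scaled SVD of the abstract is replaced here by `numpy.linalg.svd` on a small snapshot matrix.

```python
import numpy as np
from scipy.interpolate import interp1d

def build_rom(params, snapshots, rank):
    """Interpolatory ROM: SVD of high-fidelity snapshots, then 1D
    interpolation of the modal coefficients over the parameter."""
    # snapshots: (n_dof, n_samples), one high-fidelity solution per column
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    U_r = U[:, :rank]                       # retained spatial modes
    coeffs = s[:rank, None] * Vt[:rank]     # modal coefficients per sample
    interps = [interp1d(params, c, kind="cubic") for c in coeffs]
    return lambda p: U_r @ np.array([f(p) for f in interps])

# toy parameterized "high-fidelity" field u(x; p)
x = np.linspace(0, 1, 200)
params = np.linspace(0.5, 2.0, 9)
snaps = np.stack([np.sin(np.pi * p * x) / p for p in params], axis=1)
rom = build_rom(params, snaps, rank=5)
u_pred = rom(1.3)   # cheap surrogate evaluation at an unsampled parameter
```

Once built, the surrogate can be evaluated thousands of times, e.g. inside a Monte Carlo loop estimating a failure probability.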
Paul Constantine, Jeremy Templeton
Sandia
[email protected], n/a

Joe Ruthruff
Livermore
n/a
MS49
Uncertainty Quantification for Turbulent Mixing and Turbulent Combustion
Large eddy simulations of mixing and combustion with finite rate chemistry eliminate a major source of chemistry model related (epistemic) uncertainty. The method uses a notion of stochastic convergence to generate space-time dependent PDFs (Young's measures) out of a single simulation, via coarse graining with the multiple mesh values in a supercell to generate the PDF. Comparison to experimental data related to a scramjet is included. Comparison to RANS and other simulations allows model related error assessment and uncertainty quantification.
James G. Glimm
Stony Brook University
Brookhaven National Laboratory
[email protected] or [email protected]
MS49
Chemical Kinetic Uncertainty Quantification for High-Fidelity Turbulent Combustion Simulations
Detailed chemical mechanisms consist of hundreds or thousands of chemical reactions with an uncertain rate for each. Direct propagation of this very high-dimensional uncertainty through high-fidelity Large Eddy Simulation (LES) of turbulent reacting flows is computationally intractable. Instead, a method is presented in which the pre-processing portion of the turbulent combustion model is used to condition this very high-dimensional uncertainty, resulting in a lower-dimensional uncertainty space that is efficiently propagated through the LES. The approach is demonstrated with a laboratory-scale turbulent nonpremixed jet flame.
Michael
[email protected]

Gianluca Iaccarino
Stanford University
[email protected]

Heinz Pitsch
Stanford
n/a
MS49
Criteria for Optimization Under Uncertainty
In this work different criteria for robust optimization under uncertainty are compared. In particular, a framework proposed by the authors, characterized by the use of all the available information in the probabilistic domain, namely the Cumulative Distribution Function (CDF), is compared to more classical approaches that rely on classical statistical moments as deterministic attributes that define the objectives of the optimization process. Furthermore, the use of an area metric leads to a multi-objective methodology which allows an a posteriori selection of the candidate design based on risk/opportunity criteria defined by the designer. A comparison with another new technique based on Evidence Theory, proposed by different authors, is also introduced.
Domenico Quagliarella
Centro Italiano Ricerche Aerospaziali
[email protected]

G. Petrone
University Federico II of Naples
[email protected]

Gianluca Iaccarino
Stanford University
[email protected]
MS50
Bayesian Uncertainty Quantification for Flows in Highly Heterogeneous Media
Abstract not available at time of publication.
Yalchin Efendiev
Dept of Mathematics
Texas A&M University
[email protected]
MS50
Emulators in Large-scale Spatial Inverse Problems
We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity (input) is a random field (spatial or temporal). The forward model is complex and nonlinear, and therefore computationally expensive. We develop an emulator-based approach in which Bayesian multivariate adaptive regression splines (BMARS) are used to model unknown functions of the model input. We consider the discrete cosine transform (DCT) for dimension reduction of the input field. The estimation is carried out using transdimensional Markov chain Monte Carlo. Numerical results are presented by analyzing simulated as well as real data for reservoir characterization.
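The DCT dimension-reduction step can be illustrated generically: a smooth 2D input field is represented by a small block of low-frequency DCT coefficients, so MCMC need only explore those coefficients rather than every grid value. The grid size, toy field, and retained-coefficient count below are our own illustrative choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(field, k):
    """Keep only the k x k low-frequency DCT coefficients of a 2D field."""
    coeffs = dctn(field, norm="ortho")
    kept = np.zeros_like(coeffs)
    kept[:k, :k] = coeffs[:k, :k]
    return kept[:k, :k], idctn(kept, norm="ortho")

# smooth random-field stand-in (toy surrogate for a permeability field)
x = np.linspace(0, 1, 64)
X, Y = np.meshgrid(x, x)
field = np.sin(2 * np.pi * X) * np.cos(np.pi * Y) + 0.5 * X * Y
coeffs, recon = dct_compress(field, k=8)
# 64 parameters now describe the 4096-point field
rel_err = np.linalg.norm(recon - field) / np.linalg.norm(field)
```

In the inverse problem, the MCMC sampler would propose moves on the retained `coeffs` and map back to the full field with the inverse DCT before each forward-model run.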
Anirban Mondal
Quantum Reservoir Impact
[email protected]
MS50
Uncertainty Quantification for Reactive Transport of Contaminants
The fate and transport of contaminants in the subsurface is strongly influenced by bio-geochemical reactions. Predictions are routinely compromised by both model (structural) and parametric uncertainties. I will first present a set of computational tools for quantifying these two types of uncertainties for multi-component nonlinear chemical reactions. The second part of the talk will deal with the combined effects of uncertainty in reactions and heterogeneity of the medium on the transport of reactive contaminants.
Gowri Srinivasan
T-5, Theoretical Division, Los Alamos National Laboratory
[email protected]
MS50
A Framework for Experimental Design for Ill-posed Inverse Problems
We consider the problem of experimental design for ill-posed inverse problems. The main difficulties we have to address are the bias introduced by the regularization of the inverse problem and the parameter selection required by the regularization. We propose a Bayesian approach based on a hierarchical sequence of prior moment conditions. The methodology also applies to a non-Bayesian method based on training models. We provide generalizations of the usual equivalence theorems and propose numerical methods to tackle large-scale problems.
Luis Tenorio
Colorado School of Mines
[email protected]
MS51
An Approach for the Adaptive Solution of Optimization Problems Governed by PDEs with Uncertain Coefficients
Using derivative-based numerical optimization routines to solve optimization problems governed by partial differential equations (PDEs) with uncertain coefficients is computationally expensive due to the large number of PDE solves required at each iteration. I propose a framework for the adaptive solution of such optimization problems based on the retrospective trust region algorithm. I prove global convergence of the retrospective trust region algorithm under weak assumptions on gradient inexactness. If one can bound the error between actual and modeled gradients using reliable and efficient a posteriori error estimators, then the global convergence of the proposed algorithm follows. I present a stochastic collocation finite element method for the solution of the PDE-constrained optimization. In the stochastic collocation framework, the state and adjoint equations can be solved in parallel. Initial numerical results for the adaptive solution of these optimization problems are presented.
Drew Kouri
Department of Computational and Applied Mathematics
Rice University
[email protected]
MS51
Uncertainty Quantification using Nonparametric Estimation and Epi-Splines
We address uncertainty quantification in complex systems by a flexible, nonparametric framework for estimation of density and performance functions. The framework systematically incorporates hard information derived from physics-based sensors, field test data, and computer simulations as well as soft information from human sources and experiences. The framework is based on epi-splines for consistent approximation of infinite-dimensional optimization problems arising in the estimation process. Preliminary numerical results indicate that the framework is highly promising.
Johannes O. Royset
Asst. Prof.
Naval Postgraduate School
[email protected]

Roger Wets
University of California, Davis
[email protected]
MS51
Numerical Methods for Shape Optimization under Uncertainty
The proper treatment of uncertainties in the context of aerodynamic shape optimization is a very important challenge to ensure a robust performance of the optimized airfoil under real life conditions. This talk will propose a general framework to identify, quantify, and include stochastically distributed, aleatory uncertainties in the overall optimization procedure. Appropriate robust formulations of the underlying deterministic problem and uncertainty quantification techniques in combination with adaptive discretization approaches are investigated in order to measure the effects of the uncertainties in the input data on quantities of interest in the output. Finally, algorithmic approaches based on multiple-setpoint ideas in combination with one-shot methods as well as numerical results are presented.
Claudia Schillings
University of Trier, Germany
[email protected]

Volker Schulz
University of Trier
Department of Mathematics
[email protected]
MS51
Risk Neutrality and Risk Aversion in Shape Optimization with Uncertain Loading
Conceptually inspired by finite-dimensional stochastic programming with recourse, we propose infinite-dimensional two-stage stochastic models for shape optimization of elastic bodies with linearized elasticity and stochastic loading. Risk neutral and risk averse approaches, the latter involving suitable risk measures or dominance relations, will be discussed. Reporting illustrative computational experiments concludes the talk.
Ruediger Schultz
University of Duisburg-Essen
[email protected]
MS52
Spatial ANOVA Modeling of High-resolution Regional Climate Model Outputs from NARCCAP
The differences between future (2041-2070) and current (1971-2000) average seasonal temperature fields, from two regional climate models (RCMs) driven by the same atmosphere-ocean general circulation model in the North American Regional Climate Change Assessment Program (NARCCAP), are analyzed using a hierarchical two-way ANOVA model with the factors of season, RCM, and their interaction. The spatial variability across the domain is modeled through the Spatial Random Effects (SRE) model, which allows for efficient computation.
Emily L. Kang
Department of Mathematical Sciences
University of Cincinnati
[email protected]

Noel Cressie
Ohio State University
[email protected]
MS52
Statistical Issues in Catchment Scale Hydrological Modeling
Many uncertainty quantification methods currently in use by the hydrological community make the mistake of implicitly conflating the hydrological model with the true physical system. Rather than using a framework with an additive discrepancy term, I will argue for making the hydrological model itself stochastic, expressed using a system of stochastic differential equations. I will discuss Bayesian inference under this model, which falls under the difficult category of dynamic models with non-time-varying parameters.
Cari Kaufman
University of California, Berkeley
[email protected]
MS52
Quantifying Uncertainties in Hydrologic Parameters in the Community Land Model
Uncertainties in hydrologic parameters could have significant impacts on the simulated water and energy fluxes and land surface state, which can affect carbon cycle and atmospheric processes in coupled earth system simulations. In this study, we introduce an uncertainty quantification (UQ) framework for sensitivity analyses and uncertainty quantification of selected parameters related to hydrologic processes in the Community Land Model. Ten flux tower sites and over fifteen watersheds spanning a wide range of climate and site conditions were selected to perform sensitivity analyses by perturbing the parameters identified. An entropy-based approach was used to derive a prior PDF for each input parameter, and the quasi-Monte Carlo (QMC) approach was used to generate samples of parameters from the prior PDFs. A simulation for each set of sampled parameters was performed to obtain response curves. Based on statistical tests, the parameters were ranked for their significance depending on the simulated latent heat flux, sensible heat flux, and total runoff. The study provides the basis for parameter dimension reduction and guidance on parameter calibration framework design under different hydrologic and climatic regimes.
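The sampling step described here (draw low-discrepancy parameter sets from prior ranges, one forward run per set) can be sketched with SciPy's QMC tools. The parameter names and ranges below are purely illustrative, not the CLM's actual hydrologic parameters.

```python
import numpy as np
from scipy.stats import qmc

# hypothetical hydrologic parameter ranges (illustrative only)
bounds = {
    "porosity": (0.3, 0.6),
    "log10_Ksat": (-7.0, -4.0),
    "b_exponent": (2.0, 12.0),
}
lo, hi = map(np.array, zip(*bounds.values()))

sampler = qmc.Sobol(d=len(bounds), scramble=True, seed=7)
unit = sampler.random_base2(m=7)      # 2^7 = 128 low-discrepancy points
samples = qmc.scale(unit, lo, hi)     # map [0,1]^d onto the prior box
# each row is one parameter set for a forward model run
```

Each of the 128 rows would drive one land-model simulation; the resulting response curves then feed the sensitivity ranking.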
Ruby Leung
Pacific Northwest National Laboratory
[email protected]

Zhangshuan Hou, Maoyi Huang, Yinghai Ke
Pacific Northwest National Laboratory
[email protected], [email protected], [email protected]

Guang Lin
Pacific Northwest National Laboratory
[email protected]
MS52
Uncertainty Propagation in Land-Crop Models
Crop models were recently included in the land model of the Community Earth System Model (CESM). Good estimates of physiology parameters for crops are essential to get accurate outputs, particularly carbon cycle ones. To that end, we investigate the potential of intrusive uncertainty quantification approaches when used in combination with maximum likelihood/regression models for calibrating the crop model parameters from Ameriflux data. Numerical tests for this approach using soybean data from the Bondville, IL site produced very promising initial results.
Xiaoyan Zeng
Argonne National Laboratory
[email protected]

Mihai Anitescu, Emil M. Constantinescu
Argonne National Laboratory
Mathematics and Computer Science Division
[email protected], [email protected]
MS53
Basis and Measure Adaptation for Stochastic Galerkin Projections
Use of the stochastic Galerkin finite element methods leads to large systems of linear equations. These systems are typically solved iteratively. We propose a preconditioner which takes advantage of the recursive hierarchy in the structure of the global system matrices. Neither the global matrix nor the preconditioner need to be formed explicitly. The ingredients include only the number of stiffness matrices from the truncated Karhunen-Loève expansion and a preconditioner for the mean-value problem. The performance is illustrated by numerical experiments.
Roger Ghanem
University of Southern California
Aerospace and Mechanical Engineering
[email protected]

Bedrich Sousedik
University of Southern California
[email protected]

Eric Phipps
Sandia National Laboratories
Optimization and Uncertainty Quantification
[email protected]
MS53
A Stochastic Collocation Method Based on Sparse Grid for Stokes-Darcy Model with Random Hydraulic Conductivity
The Stokes-Darcy model has attracted significant attention since its higher fidelity than either the Darcy or Stokes system is important for many applications, but uncertainties and coupling issues lead to a rather complex system. We present a stochastic collocation method based on sparse grids for the Stokes-Darcy model with random hydraulic conductivity. This method is natural for parallel computation, and some numerical results are presented to illustrate the features of this method.
Max Gunzburger
Florida State University
School for Computational Science
[email protected]

Xiaoming He, Xuerong Wen
Department of Mathematics and Statistics
Missouri University of Science and Technology
[email protected], [email protected]
MS53
Adaptive Sparse Grid Generalized Stochastic Collocation Using Wavelets
Accurate predictive simulations of complex real world applications require numerical approximations that first, oppose the curse of dimensionality and second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this talk we present a novel multidimensional multiresolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus, optimal adaptation is achieved. Error estimates and numerical examples will be used to compare the efficiency of the method with several other techniques.
Clayton G. Webster
Oak Ridge National Laboratory
[email protected]

Max Gunzburger
Florida State University
School for Computational Science
[email protected]

John Burkardt
Department of Computational Science
Florida State University
[email protected]
MS53
Error Analysis of a Stochastic Collocation Method for Parabolic Partial Differential Equations with Random Input Data
A stochastic collocation method for solving linear parabolic PDEs with random input data is analyzed. The input data are assumed to depend on a finite number of random variables. Unlike previous analyses, a wider range of situations is considered, including input data that depend nonlinearly on the random variables and random variables that are correlated or even unbounded. We demonstrate exponential convergence of the interpolation error for both a semi-discrete and a fully-discrete scheme.
Guannan Zhang
Florida State University
[email protected]

Max Gunzburger
Florida State University
School for Computational Science
[email protected]
MS54
Retrospective-Cost-Based Model Refinement of Systems with Inaccessible Subsystems
Spatially discretized models are typically constructed based on physical laws and parameter estimates. These models may be erroneous, however, due to unmodeled physics and numerical errors. These discrepancies become apparent when model predictions disagree with measurements. The goal of this research is to use data to improve the fidelity of an initial model. The approach we take is based on retrospective-cost-based optimization of subsystems that are inaccessible, that is, whose inputs and outputs are not measured. Applications to space weather modeling will be presented.
Dennis Bernstein
University of Michigan
[email protected]

Anthony D'Amato, Asad Ali
Aerospace Engineering Department
University of Michigan
[email protected], asadali
MS54
Multiscale Methods for Flows in Heterogeneous Stochastic Porous Media
In this talk, I will discuss multiscale methods for stochastic problems. The main idea of our approach is to construct local basis functions that encode the local features present in the coefficient to approximate the solution of the parameter-dependent flow equation. Constructing local basis functions involves (1) finding initial multiscale basis functions and (2) constructing local spectral problems for complementing the initial coarse space. We use the Reduced Basis (RB) approach to construct a reduced dimensional local approximation that allows quickly computing the local spectral problem. I will present the details of the algorithm, numerical results, and computational cost. Applications to Bayesian uncertainty quantification will be discussed. This is joint work with Juan Galvis (TAMU) and Florian Thomines (ENPC).
Yalchin Efendiev
Dept of Mathematics
Texas A&M University
[email protected]
MS54
Feedback Particle Filter: A New Formulation for Nonlinear Filtering Based on Mean-field Games
A new formulation of the particle filter for nonlinear filtering is presented, based on concepts from optimal control and from mean-field game theory. The feedback particle filter admits an innovations error-based feedback control structure: the control is chosen so that the posterior distribution of any particle matches the posterior distribution of the true state given the observations. The feedback particle filter is shown to have significant advantages over the conventional particle filter in the numerical examples considered. Application to Bayesian inference in neural circuits is briefly discussed with the aid of coupled oscillator models.
Prashant G. Mehta
Department of Mechanical and Industrial Engineering
University of Illinois at Urbana-Champaign
[email protected]

Sean Meyn
Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
[email protected]
MS54
Uncertainty Analysis of Dynamical Systems – Application to Volcanic Ash Transport
Uncertainty analysis of a number of societally important physical systems may be posed in the dynamic systems framework. In this leadoff talk we will overview the topic of this minisymposium and frame the issues using our group's work on the transport of volcanic ash. Basic approaches and results from applying these methods to recent eruption events will be presented.
Abani K. Patra
SUNY at Buffalo
Dept of Mechanical Engineering
[email protected]

Puneet Singla
Mechanical & Aerospace Engineering
University at Buffalo
[email protected]

Marcus Bursik
Dept. of Geology
SUNY at Buffalo
[email protected]

E. Bruce Pitman
Dept. of Mathematics
Univ at Buffalo
[email protected]

Matthew Jones
Center for Comp. Research
Univ. at Buffalo, Buffalo, NY
[email protected]

Tarunraj Singh
Dept of Mechanical Engineering
SUNY at Buffalo
[email protected]
MS55
Optimal Use of Gradient Information in Uncertainty Quantification
We discuss methods that use derivative information for probabilistic uncertainty propagation in models of complex systems. We demonstrate that such methods need less sampling information than their derivative-free versions. Here we discuss the importance of several algorithmic choices, such as the choice of basis in regression with derivative information, the choice of error model by means of Gaussian processes, and the choice of sampling points based on optimal design considerations. We demonstrate in particular that standard choices originating from derivative-free methods are not suitable in this circumstance, and we define criteria adapted to the use of derivative information that result in novel designs and bases. We demonstrate our findings on models from nuclear engineering.
Mihai Anitescu
Argonne National Laboratory
Mathematics and Computer Science Division
[email protected]

Fred J. Hickernell
Illinois Institute of Technology
Department of Applied Mathematics
[email protected]

Yiou Li
Illinois Institute of Technology
[email protected]

Oleg Roderick
Argonne National Laboratory
[email protected]
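[Editorial aside, not part of the abstract above: a minimal sketch of the generic idea of regression with derivative information. Derivative samples enter the least-squares system as extra rows built from the differentiated basis; the monomial basis and the function names here are hypothetical choices for illustration only.]

```python
import numpy as np

def fit_with_derivatives(x, f, df, degree=2):
    """Least-squares fit of a polynomial to values f and derivatives df.

    Minimizes ||V c - f||^2 + ||dV c - df||^2, where V is the monomial
    design matrix and dV its term-by-term derivative.
    """
    powers = np.arange(degree + 1)
    V = x[:, None] ** powers                                   # value rows
    dV = powers * x[:, None] ** np.clip(powers - 1, 0, None)   # derivative rows
    A = np.vstack([V, dV])
    b = np.concatenate([f, df])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

x = np.array([0.0, 0.5, 1.0])
c = fit_with_derivatives(x, f=x**2, df=2 * x)   # data from f(x) = x^2
```

With consistent data the fit recovers the exact coefficients; with noisy data the derivative rows simply add information to the same least-squares problem.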
MS55
Gradient-based Optimization for Mixed Epistemic and Aleatory Uncertainty
Within this talk, a novel sampling/optimization approach to mixed aleatory/epistemic uncertainty quantification is presented for hypersonic computational fluid dynamics simulations. This approach uses optimization to propagate epistemic uncertainty to simulation results, while sampling of these optimization results is used to characterize the distribution due to aleatory variables. To reduce the required number of optimizations, a Kriging surrogate is employed. For this talk, this combined sampling/optimization approach will be demonstrated for analytic functions and simulations drawn from hypersonic fluid flows. Additionally, the performance of a similar uncertain optimization approach is tested for this application.
Brian Lockwood
University of Wyoming
[email protected]

Dimitri Mavriplis
Department of Mechanical Engineering
University of Wyoming
[email protected]
MS55
Gradient-based Data Assimilation in Atmospheric Applications
Adaptivity in space and time is ubiquitous in modern numerical simulations. To date, there is still a considerable gap between the state-of-the-art techniques used in direct (forward) simulations and those employed in the solution of inverse problems, which have traditionally relied on fixed meshes and time steps. This talk describes a framework for building a space-time consistent adjoint discretization for a general discrete forward problem, in the context of adaptive mesh, adaptive time step models.
Adrian Sandu
Virginia Polytechnic Institute and State University
[email protected]

Mihai Alexe
Virginia Tech
Department of Computer Science
[email protected]
MS55
Sensitivity Analysis and Error Estimation for Differential Algebraic Equations in Nuclear Reactor Applications
We develop a general framework for computing the adjoint variable for nuclear engineering problems governed by a set of differential-algebraic equations. We use an abstract variational approach to develop the framework, which provides flexibility in modification and expansion of the governing equations. Using both a simple example with known solution and a dynamic reactor model, we show both theoretically and numerically that the framework can be used for both parametric uncertainty quantification and the estimation of global time discretization error.
Hayes Stripling
Texas A&M University
Nuclear Engineering Department
[email protected]

Mihai Anitescu
Argonne National Laboratory
Mathematics and Computer Science Division
[email protected]

Marvin Adams
Texas A&M University
[email protected]
MS56
A Semi-Intrusive Deterministic Approach to Uncertainty Quantification
This paper deals with the formulation of a semi-intrusive (SI) method allowing the computation of statistics of linear and nonlinear PDE solutions. This method proves very efficient in dealing with probability density functions of whatsoever form, long-term integration, and discontinuities in stochastic space. The effectiveness of this method is illustrated for a modified version of the Kraichnan-Orszag three-mode problem, where a discontinuous pdf is associated with the stochastic variable, and for a nozzle flow with shocks. The results have been analyzed in terms of accuracy and probability measure flexibility. Finally, the importance of the probabilistic reconstruction in the stochastic space is shown on an example where the exact solution is computable: the resolution of the viscous Burgers equations. We expect some more applications by the time of the conference.
Remi Abgrall
INRIA Bordeaux Sud-Ouest
Universite de Bordeaux
[email protected]

James G. Glimm
Stony Brook University
Brookhaven National Laboratory
[email protected] or [email protected]
MS56
Uncertainty Quantification with Incomplete Physical Models
The emergence of uncertainty quantification (UQ) has been a critical development towards the goal of predictive science. The many efficient, non-intrusive, intuitive approaches to probabilistic uncertainty have accelerated the widespread adoption of the methodology. The additional uncertainty arising from incomplete physical models makes the analysis much less obvious. This work examines how optimization over a restricted class of functionals can bound the analysis of the true model if it were available. Furthermore, these bounds provide insight into the sensitivity of this knowledge gap relative to the overall prediction.
Gianluca Iaccarino
Stanford University
[email protected]
Gary [email protected]
MS56
Numerical Methods for Polynomial Chaos in Uncertain Flow Problems
The Euler equations subject to input uncertainty are investigated via the stochastic Galerkin approach. We present a fully intrusive method based on a variable transformation of the continuous equations. Roe variables are employed to get quadratic dependence in the flux function. The Roe formulation saves computational cost compared to the formulation based on expansion of conservative variables, and can handle cases of supersonic flow, for which the conservative variable formulation leads to instability.
Mass Per Pettersson
Stanford University
[email protected]
MS56
Evidence-Based Multidisciplinary Robust Design
Abstract not available at time of publication.
Massimiliano Vasile
University of Strathclyde
[email protected]
MS57
Toward Estimation of Sub-ice Shelf Melt Rates: Sensitivity to Ocean Circulation Under Pine Island Ice Shelf, West Antarctica
Increased melt rates under ice shelves around Antarctica have been suggested as a dominant cause for observed acceleration of the ice sheet near its marine margin. The melt rates are difficult to observe directly. We present first steps towards estimating the melt rates from accessible hydrography and optimal control methods. We address to what extent hydrographic observations away from the ice-ocean boundary can be used to constrain melt rates. We derive sensitivity patterns of melt rates to changes in ocean circulation underneath the Pine Island Ice Shelf. The sensitivity patterns are computed with an adjoint model of an ocean general circulation model that resolves the sub-ice shelf circulation. Inverting for best-estimate melt rates presents a first step towards backward propagation of uncertainties.
Patrick [email protected]
Martin Losch
Alfred Wegener Institute
[email protected]
MS57
Ice Bed Geometry: Estimates of Known Unknowns
For ice sheets, kilometer-scale features in bed geometry may dominate scenarios of ice mass loss. We developed a radar sensor model for Thwaites basin in West Antarctica and use conditional simulations of off-track features to characterize known unknowns of its bed geometry. We shall discuss the observational, scientific, and statistical strategies for quantifying effects of bed geometry uncertainties on estimates of sea level rise.
Charles Jackson
UTIG
University of Texas at Austin
[email protected]

Chad Greene
Department of Geological Sciences
The University of Texas at Austin
chad allen greene <[email protected]>

John Goff, Scott Kempf, Duncan Young
Institute for Geophysics
The University of Texas at Austin
[email protected], [email protected], [email protected]

Evelyn Powell
Institute for Geophysics
evelyn powell <[email protected]>

Don Blankenship
UTIG
University of Texas at Austin
[email protected]
MS57
Quantification of the Uncertainty in the Basal Sliding Coefficient in Ice Sheet Models
The estimation of uncertainty in the solution of ice sheet inverse problems within the framework of Bayesian inference is considered. We exploit the fact that for Gaussian noise and prior probability densities and a linearized parameter-to-observable map, the posterior density (i.e., the solution of the statistical inverse problem) is Gaussian and hence is characterized by its mean and covariance. The method is applied to quantify uncertainties in the inference of basal boundary conditions for ice sheet models.
Noemi Petra
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected]

Hongyu Zhu
Institute for Computational Engineering and Sciences
The University of Texas at Austin
[email protected]

Georg Stadler, Omar Ghattas
University of Texas at Austin
[email protected], [email protected]
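[Editorial aside, not the authors' ice sheet code: the linear-Gaussian fact the abstract above exploits can be sketched in a few lines. For observations y = G m + noise with noise N(0, Gn) and prior N(m0, Gp), the posterior is Gaussian with covariance (G^T Gn^{-1} G + Gp^{-1})^{-1}; all variable names are illustrative.]

```python
import numpy as np

def gaussian_posterior(G, y, Gn, m0, Gp):
    """Posterior mean and covariance for a linear-Gaussian inverse problem."""
    Gn_inv = np.linalg.inv(Gn)   # noise precision
    Gp_inv = np.linalg.inv(Gp)   # prior precision
    cov = np.linalg.inv(G.T @ Gn_inv @ G + Gp_inv)
    mean = cov @ (G.T @ Gn_inv @ y + Gp_inv @ m0)
    return mean, cov

# toy 2-parameter problem: small noise pulls the mean toward the data fit
G = np.array([[1.0, 0.0], [0.0, 2.0]])
y = np.array([1.0, 4.0])
mean, cov = gaussian_posterior(G, y, Gn=0.01 * np.eye(2),
                               m0=np.zeros(2), Gp=np.eye(2))
```

For large-scale ice sheet problems the covariance is never formed explicitly; this sketch only records the formulas being characterized.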
MS57
Bayesian Calibration via Additive Regression Trees with Application to the Community Ice Sheet Model
Quantifying uncertainties in calibration experiments involving complex computer models with high dimensional input spaces and large outputs is often infeasible. We develop a non-parametric Bayesian trees model that scales to challenging calibration problems. Here, each tree represents a weak learner and the overall model is a sum-of-trees. The idea is to learn locally-adaptive bases while retaining efficient MCMC sampling of the posterior distribution. We demonstrate our method by calibrating the Community Ice Sheet Model.
Matthew Pratola, Stephen Price, David Higdon
Los Alamos National Laboratory
[email protected], [email protected], [email protected]
MS58
Reduced Basis, the Discrete Interpolation Method and Fast Quadratures for Gravitational Waves
Finding gravitational waveforms (GWs) is potentially going to open a new window to the universe. Einstein's equations are nonlinear parameter dependent (mass, spin, etc.) PDEs that are approximated by the post-Newtonian (PN) equations. PN without spin gives stationary phase approximations (SPA) to waveforms. Using matched filtering techniques, a LIGO search to find GWs requires about 10,000 SPA templates and several months on supercomputers. Reduced basis and using DEIM points as quadrature nodes leads to significant savings.
Harbir Antil
The University of Maryland
[email protected]
MS58
Decreasing the Temporal Complexity in Nonlinear Model Reduction
UQ for nonlinear dynamical systems presents a challenge: uncertainty-space sampling requires thousands of large-scale simulations. To mitigate this burden, researchers have developed model-reduction techniques that decrease the model's 'spatial complexity' by decreasing 1) the number of degrees of freedom and 2) the complexity of nonlinear function evaluations. However, when an implicit time integrator is used, the number of time steps remains very large. This talk presents a strategy for decreasing the model's 'temporal complexity'.
Kevin T. Carlberg
Stanford University
[email protected]
MS58
Error Analysis for Nonlinear Model Reduction Using Discrete Empirical Interpolation in the Proper Orthogonal Decomposition Approach
The Discrete Empirical Interpolation Method (DEIM) is an efficient technique for resolving the complexity issue that nonlinearities of dynamical systems pose for standard projection-based model reduction methods, such as the popular Proper Orthogonal Decomposition (POD) approach. Recently, the hybrid approach combining POD with DEIM has been successfully used to obtain accurate low-complexity reduced systems in many applications. However, error analysis for this approach has not been investigated yet. This work provides state space error bounds for the solutions of reduced systems constructed from this POD-DEIM approach. The analysis is particularly relevant to ODE systems arising from spatial discretizations of parabolic PDEs. The resulting error bounds reflect the POD approximation property through the decay of singular values, as well as explain how the DEIM approximation error involving the nonlinear term comes into play. The conditions under which the stability of the original system is preserved and the reduction error is uniformly bounded will be discussed.
Saifon Chaturantabut
Virginia Tech
[email protected]

Danny C. Sorensen
Rice University
[email protected]
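[Editorial aside, a compact illustration rather than the authors' implementation: the DEIM index-selection loop picks, for each new basis vector, the grid point where the interpolation residual of the previous basis is largest. The helper name is hypothetical.]

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation-point selection for a basis U (n x m)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # coefficients interpolating column j at the points chosen so far
        c = np.linalg.solve(U[idx][:, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c       # residual vanishes at chosen points
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(50, 4)))   # orthonormal test basis
p = deim_indices(U)
```

Because the residual is zero at already-selected points, the indices are distinct and the sampled matrix U[p] stays invertible, which is what the error bounds in the abstract build on.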
MS58
Linearized Reduced-order Models for Optimization and Data Assimilation in Subsurface Flow
Trajectory piecewise linearization (TPWL) is a reduced-order modeling approach where new solutions for nonlinear problems are represented using linear expansions around previously simulated (and saved) solutions. The high-dimensional representation is projected into a low-dimensional subspace using proper orthogonal decomposition, which results in very large runtime speedups. We describe the application of TPWL for the optimization of well settings in oil reservoir simulation and for inverse problems involving the determination of geological parameters.
Jincong He
Stanford University
[email protected]
Lou J. Durlofsky
Stanford University
Dept of Petroleum Engineering
[email protected]
MS59
Testbeds for Ocean Model Calibration
Tuning and uncertainty quantification in ocean models is challenging, particularly because of the lack of suitable observation datasets and the computational expense of running full-Earth simulations. Ocean models can be run in limited instances at an eddy-resolving resolution, so high-resolution runs can serve as a target response for lower-resolution parameterization. Testbed configurations are then sufficient to expose effects of concern. We show results from a channel model representative of the Antarctic Circumpolar Current (ACC), emphasizing eddy heat transport.
James Gattiker, Matthew Hecht
Los Alamos National Laboratory
[email protected], [email protected]
MS59
An Overview of Uncertainty Quantification in the Community Land Model
The Community Land Model (CLM) is the terrestrial component of the Community Earth System Model (CESM), which is used extensively for projections of the future climate system. CLM has over 100 uncertain parameters, contains multiple nonlinear feedbacks, and is computationally demanding, presenting a number of challenges for UQ efforts. CLM modeling has benefited from our initial UQ efforts through a new ability to rank parameters by relative importance and to identify unrealistic parameter combinations.
Daniel Ricciuto, Dali Wang
Oak Ridge National Laboratory
[email protected], [email protected]

Cosmin Safta, Khachik Sargsyan
Sandia National Laboratories
[email protected], [email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA
[email protected]

Peter Thornton
Oak Ridge National Laboratory
[email protected]
MS59
Efficient Surrogate Construction for High-Dimensional Climate Models
In this study we explore the use of non-intrusive methods to construct surrogate approximations for the Community Land Model (CLM). Relevance vector machine techniques are used to reduce the dimensionality of the input space. This approach is used in conjunction with adaptive constrained sparse quadrature to construct the surrogate model coefficients. Domain clustering methodologies are also employed to properly model plateaus and sharp discontinuities in the climate model observables.
Cosmin Safta, Khachik Sargsyan
Sandia National Laboratories
[email protected], [email protected]

Daniel Ricciuto
Oak Ridge National Laboratory
[email protected]

Robert D. Berry
Sandia National Laboratories
[email protected]

Bert J. Debusschere
Energy Transportation Center
Sandia National Laboratories, Livermore CA
[email protected]

Peter Thornton
Oak Ridge National Laboratory
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA
[email protected]
MS59
Uncertainty in Regional Climate Experiments
Climate models are subject to a number of different sources of uncertainty. Regional climate modeling introduces additional uncertainties associated with the boundary conditions and resolution of the models. In this talk, based on the ensemble being generated as part of the North American Regional Climate Assessment Program (NARCCAP), we will present statistical methodology for the analysis of the spatial-temporal output in the ensemble and for quantifying the uncertainties associated with the NARCCAP experiment.
Stephan Sain
Institute for Mathematics Applied to Geosciences
National Center for Atmospheric Research
[email protected]
MS60
Calibration, Model Discrepancy and Extrapolative Predictions
In the presence of relevant physical observations, one can usually calibrate a computer model, and even estimate systematic discrepancies of the model from reality. Estimating and quantifying the uncertainty in this model discrepancy can lead to reliable predictions, so long as the prediction is "similar" to the available physical observations. Exactly how to define "similar" has proven difficult. Clearly it depends on how well the computational model captures the relevant physics in the system, as well as how portable the model discrepancy is, going from the available physical data to the prediction. This talk will discuss these concepts using computational models ranging from simple to very complex.
Dave Higdon
Los Alamos National Laboratory
Statistical Sciences
MS60
Calibration, Validation and the Importance of Model Discrepancy
Extrapolation is inevitably accompanied by substantial uncertainty. To manage and quantify that uncertainty, calibration and validation are important tasks. But a fundamental feature underlying all these activities is model discrepancy, which is the difference between reality and the simulator output with "best" or "true" input values. This talk will show, both conceptually and through examples, that unless model discrepancy is acknowledged and carefully modeled, calibration and extrapolations will be biased and over-confident.
Anthony O'Hagan
Univ. Sheffield, UK
[email protected]

Jenny Brynjarsdottir
Statistical and Applied Mathematical Sciences Institute
[email protected]
MS60
"Experiment" and "Traveling" Models, Extrapolation Risk from Undetected Model Bias, and Data Conditioning for Systematic Uncertainty in Experiment Conditions
This talk will discuss some fundamental concepts and issues related to extrapolative prediction risk associated with model validation and conditioning. Topics will include: the "traveling model," which travels to predictions beyond the validation or conditioning setting as a subset of the larger "experiment model" of the experiments; handling uncertainties accordingly; consistent and non-consistent bias and uncertainty in extrapolation; Type X validation or conditioning error from neglected systematic experimental uncertainty; and data conditioning to mitigate associated prediction risk.
Vicente J. Romero
Validation and Uncertainty Quantification Department
Sandia National Laboratories
[email protected]
MS60
Towards Finding the Necessary Conditions for Justifying Extrapolations
A comprehensive approach is proposed to justify extrapolative predictions for models with known sources of error. The connection between model calibration, validation, and prediction is made through the introduction of alternative uncertainty models used to model the localized errors. In addition, a validation metric is introduced to provide a quantitative characterization of the consistency of model predictions when compared with validation data. The talk will discuss the challenges in carrying out the proposed methodology.
Gabriel A. Terejanu
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected]

Ernesto E. Prudencio
Institute for Computational Engineering and Sciences
[email protected]
MS61
A Random Iterative Proper Orthogonal Decomposition Approach with Application to the Filtering of Distributed Parameter Systems
We consider the problem of filtering systems governed by partial differential equations. We propose a randomly perturbed iterative version of the snapshot proper orthogonal decomposition (POD) technique, termed RI-POD, to construct an ROM to capture the system's global behaviour. The technique is data based, and is applicable to forced as well as unforced systems. The ROM generated is used to construct reduced order Kalman filters to solve the filtering problem.
Suman Chakravorty
Texas A&M University
College Station, TX
[email protected]
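[Editorial aside, an illustration of the snapshot POD step the abstract above builds on, not the RI-POD method itself: collect solution snapshots as columns, take an SVD, and keep the leading left singular vectors as a reduced basis. All names are hypothetical.]

```python
import numpy as np

def pod_basis(snapshots, r):
    """Return the r leading POD modes and the singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# synthetic snapshots lying exactly in a 3-dimensional subspace
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 20))
Phi, s = pod_basis(X, 3)

# projecting onto the 3 leading modes reconstructs the snapshots
err = np.linalg.norm(X - Phi @ (Phi.T @ X)) / np.linalg.norm(X)
```

The singular value decay tells you how many modes the reduced-order Kalman filter actually needs.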
MS61
A New Approach to UQ Based on the Joint Excitation-Response PDF: Theory and Simulation
We study the evolution equation of the joint excitation-response PDF obtained by the Hopf functional approach for various stochastic systems. Some issues arising in this formulation are examined, including multiple available equations, representation of correlated random processes, etc. This approach is tested with the nonlinear advection equation, a tumor cell model, the Duffing oscillator, and a limit cycle oscillator, and the results are compared.
Heyrim Cho
Brown University
Providence, RI
heyrim [email protected]

Daniele Venturi
Brown University
daniele [email protected]

George E. Karniadakis
Brown University
Division of Applied Mathematics
george [email protected]
MS61
Bayesian UQ on Infinite Dimensional Spaces with Incompletely Specified Priors
The Bayesian framework is a popular way of handling data in UQ. This is done by defining priors on (possibly infinite dimensional) spaces of measures and functions and conditioning those priors after the observation of the sample data. It has been known since the early work of Diaconis and Freedman that, when the probability space allows for an infinite number of outcomes, the Bayesian posteriors, in general, do not converge towards the "true distribution." In this work we compute optimal bounds on posteriors when priors are incompletely specified (through finite-dimensional marginals, for instance). These bounds show that: (1) priors sharing the same finite dimensional marginals may lead to diverging posteriors under sample data; (2) any desired posterior can be achieved by a prior satisfying pre-specified finite-dimensional constraints; (3) the measurement of sample data (in general) increases uncertainty instead of decreasing it if the classical Bayesian framework is applied (as is) to an infinite (possibly high-dimensional) space. Finally, we develop an OUQ variant of the Bayesian framework that (1) allows working with incompletely specified priors, (2) remains consistent in infinite-dimensional spaces (with posteriors converging towards the "true distribution"), (3) provides optimal bounds on posteriors, and (4) leads to sharper estimates than those associated with the vanilla frequentist framework if the (incomplete) information on priors is accurate. This is joint work with Clint Scovel (LANL) and Tim Sullivan (Caltech), and a sequel to earlier work on Optimal Uncertainty Quantification (OUQ, http://arxiv.org/abs/1009.0679, joint with C. Scovel, T. Sullivan, M. McKerns (Caltech) and M. Ortiz (Caltech)).
Houman Owhadi
Applied Mathematics
Caltech
[email protected]
MS61
Conjugate Unscented Transformation – "Optimal Quadrature"
This talk will introduce the recently developed Conjugate Unscented Transformation (CUT) to compute multi-dimensional expectation integrals. An extension of the popular unscented transformation will be presented to find a minimal number of quadrature points that is equivalent to the Gaussian quadrature rule of the same order. Equivalence of order implies that both the new reduced quadrature point set and the Gaussian quadrature product rule exactly reproduce the expectation integral involving a polynomial function of order d.
Nagavenkat Adurthi
MAE Department
University at Buffalo
[email protected]

Puneet Singla
Mechanical & Aerospace Engineering
University at Buffalo
[email protected]

Tarunraj Singh
Dept of Mechanical Engineering
SUNY at Buffalo
[email protected]
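[Editorial aside, not the CUT scheme itself: a sketch of the standard unscented transformation that CUT extends. For x ~ N(mu, P) in n dimensions, 2n+1 sigma points with matched weights reproduce expectations of polynomials up to third order; the function name and kappa choice are illustrative.]

```python
import numpy as np

def unscented_expectation(g, mu, P, kappa=1.0):
    """Approximate E[g(x)] for x ~ N(mu, P) with 2n+1 sigma points."""
    n = len(mu)
    S = np.linalg.cholesky((n + kappa) * P)   # columns: scaled matrix square root
    pts = [mu] + [mu + S[:, i] for i in range(n)] + [mu - S[:, i] for i in range(n)]
    w = [kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n)
    return sum(wi * g(p) for wi, p in zip(w, pts))

# exact for quadratics: E[x^T x] = trace(P) = 2 for the 2-D standard normal
val = unscented_expectation(lambda x: x @ x, np.zeros(2), np.eye(2))
```

CUT's contribution, per the abstract, is finding point sets that remain minimal while matching higher-order Gaussian product quadrature rules.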
MS62
Perspectives on Quantifying Uncertain Mechanisms in Dynamical Systems
Due to lack of scientific understanding, some mechanisms are not always well-represented in mathematical models for complex systems. The impact of these uncertain mechanisms on the overall system evolution may be delicate or even profound. These uncertain mechanisms are sometimes microscopic, and it is desirable to examine how they affect the system at the macroscopic level, since we are often mainly interested in macroscopic dynamics. The speaker presents an overview of several available analytical and computational techniques for extracting macroscopic dynamics while taking uncertain microscopic mechanisms into account. The issues include quantifying uncertain mechanisms via probability distributions, dynamical averaging of random slow-fast microscopic mechanisms, ensemble averaging of fluctuating driving forces, and especially, data-driven quantification of uncertain mechanisms in water vapor dynamics.
Jinqiao Duan
Institute for Pure and Applied Mathematics
[email protected]
MS62
Stochastic Integrable Dynamics of Optical Pulses Propagating through an Active Medium
Resonant interaction of light with a randomly-prepared, lambda-configuration active optical medium is described by exact solutions of a completely-integrable, stochastic partial differential equation, thus combining the opposing concepts of integrability and disorder. An optical pulse passing through such a material will switch randomly between left- and right-hand circular polarizations. Exact probability distributions of the electric-field envelope variables describing the light polarization and their switching times will be presented.
Gregor Kovacic
Rensselaer Polytechnic Inst
Dept of Mathematical Sciences
[email protected]

Ethan Atkins
Courant Institute
[email protected]

Peter R. Kramer
Rensselaer Polytechnic Institute
Department of Mathematical Sciences
[email protected]

Ildar R. Gabitov
Department of Mathematics, University of Arizona
[email protected]
MS62
Reduced Models for Mode-locked Lasers with Noise
We explore the impact of spontaneous emission noise on mode-locked laser linewidth by using a variational technique to reduce the original model, a stochastic partial differential equation, to a low-dimensional stochastic ordinary differential equation. The joint distribution of the parameters is mapped to a full-width half-maximum measure of the output laser linewidth, i.e., frequency uncertainty. Events causing the linewidth to exceed a prescribed uncertainty level are rare, requiring the application of large deviations theory or importance sampling approaches.
Richard O. Moore, Daniel S. Cargill
New Jersey Institute of Technology
[email protected], [email protected]

Tobias Schaefer
Department of Mathematics
The College of Staten Island, City University of New York
[email protected]
MS62
Analysis and Simulations of a Perturbed Stochastic Nonlinear Schrodinger Equation for Pulse Propagation in Broadband Optical Fiber Lines
We study the interplay between bit pattern randomness, Kerr nonlinearity, and delayed Raman response in broadband multichannel optical fiber transmission lines. We show that pulse propagation can be described by a perturbed stochastic nonlinear Schrodinger equation, which takes into account changes in pulse amplitude, group velocity, and position. Numerical simulations with the stochastic PDE along with analysis of the corresponding stochastic reduced ODE model are used to calculate the bit-error-rate and to identify the major error-generating processes in the fiber line.
Avner Peleg
State University of New York at Buffalo
[email protected]

Yeojin Chung
Southern Methodist University
[email protected]
MS63
Quantification of Uncertainty in Wind Energy
Abstract not available at time of publication.
Karthik [email protected]
MS63
Uncertainty Quantification for Rayleigh-Taylor Turbulent Mixing
We present two main results. First: a reformulation of convergence for simulations of turbulent flow, through binning solution values from neighboring points (in a "supercell") to define a finite approximation to a space-time dependent probability density function. We thereby obtain convergence of fluctuations, and of nonlinear functions of the solution. Second: reconstruction of initial conditions (often not recorded) from early time data, with uncertainty bounds on the reconstruction and on the entire solution.
Tulin Kaman
SUNY Stony Brook
[email protected]

James Glimm
State University of New York, Stony Brook
[email protected]
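[Editorial aside, an assumed minimal implementation rather than the authors' code: the "supercell" construction above amounts to binning solution values from a patch of neighboring grid points into a normalized histogram, giving a local approximation to the probability density. All names and parameters are illustrative.]

```python
import numpy as np

def supercell_pdf(u, i, j, half_width, bins, vrange):
    """Histogram the values of field u in a square patch centered at (i, j)."""
    cell = u[i - half_width:i + half_width + 1,
             j - half_width:j + half_width + 1]
    hist, edges = np.histogram(cell.ravel(), bins=bins,
                               range=vrange, density=True)
    return hist, edges

rng = np.random.default_rng(0)
u = rng.normal(size=(32, 32))            # stand-in for a simulated field
pdf, edges = supercell_pdf(u, 16, 16, half_width=3, bins=8, vrange=(-4, 4))
```

Convergence is then assessed on these binned densities (and nonlinear functionals of them) rather than on pointwise field values.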
MS63
Case Studies in Uncertainty Quantification for Aerodynamic Problems in Turbomachinery
In this study, we examine the use of common UQ methods for problems of importance in practical engineering design. In particular, we assess the efficiency of common polynomial chaos basis functions and sampling techniques like Smolyak grids. The computational framework will be first validated against Monte-Carlo simulations to assess convergence of pdfs. It will then be used to assess the variability in compressor blade efficiency and turbine vane loss due to uncertainty in inflow conditions. The results will be used to answer the following questions: Do we need new probabilistic algorithms to quantify the impact of uncertainty? What is the optimal basis for standard performance metrics in turbomachinery? What are the computational and accuracy requirements of this probabilistic approach? Are there alternate (more efficient) techniques? We believe that the answers to the above questions will provide a quantitative basis to assess the usefulness of non-intrusive (and possibly intrusive) probabilistic methods for analyzing variability in engineering designs.
Sriram Shankaran
General Electric
[email protected]
MS64
Prior Modelling for Inverse Problems Using Gaussian Markov Random Fields
In inverse problems, regularization functions are often chosen to incorporate prior information about a spatially distributed parameter, such as differentiability. In contrast, a standard Bayesian approach assumes that the value of each parameter is Gaussian with mean and variance dependent upon the values of its neighbors. This approach leads to a so-called Gaussian Markov random field prior. We will explore the interesting connections between these two approaches, which in many cases yield equivalent variational problems.
Johnathan M. Bardsley
University of Montana
[email protected]
MS64
Evaluation of Covariance Matrices and Their Use in Uncertainty Quantification
Abstract not available at time of publication.
Eldad Haber
Department of Mathematics
The University of British Columbia
[email protected]
MS64
Challenges in Uncertainty Quantification for Dynamic History Matching

The inversion of large-scale ill-posed problems introduces multiple challenges. Examples would be: how to prescribe a suitable regularizer, design an informative experiment, quantify uncertainty, exploit supplementary information from multiple sources, or even how to define an appropriate optimization scheme. In the context of flow in porous media, subsurface parameters are inferred through inversion of oil production data in a process called history matching. In this talk the inherent uncertainty of the problem is tackled in terms of prior sampling. Despite meticulous efforts to minimize the variability of the solution space, the distribution of the posterior may remain rather intrusive. In particular, geo-statisticians often propose large sets of prior samples that, regardless of their apparent geological distinction, are almost entirely flow equivalent. As an antidote, a reduced-space hierarchical clustering of flow-relevant indicators is proposed for aggregation of these samples. The effectiveness of the method is demonstrated both with synthetic and real field data. This research was performed as part of the IBM-Shell joint research project.
Lior Horesh
Business Analytics and Mathematical Sciences
IBM TJ Watson Research Center
[email protected]

Andrew R. Conn
IBM T J Watson Research Center
[email protected]

Gijs van Essen, Eduardo Jimenez
Shell Innovation Research & Development
[email protected], [email protected]

Sippe [email protected]

Ulisses Mello
IBM Watson Research Center
[email protected]
MS64
Title Not Available at Time of Publication
Abstract not available at time of publication.
Luis TenorioColorado School of [email protected]
MS65
The Influence of Topology on Sound Propagation in Granular Force Networks

Granular materials have been modelled using both continuum and particulate perspectives, but neither approach can capture complex behaviours during sound propagation. A network representation provides an intermediate approach that allows one to directly consider microscopic features and longer-range interactions, e.g. force chains. In experiments on granular materials composed of photoelastic particles, we characterise the internal force structure and show that weighted meso-scale network diagnostics are crucial for quantifying sound propagation in this heterogeneous medium.
Danielle S. Bassett
University of California, Santa Barbara
[email protected]

Eli Owens, Karen Daniels
North Carolina State University
[email protected], [email protected]

Mason A. Porter
University of Oxford
Oxford Centre for Industrial and Applied Mathematics
[email protected]
MS65
Size of Synchronous Firing Events in Model Neuron Systems

We investigate the interaction of synchronous dynamics and the network topology in simple pulse-coupled Markov chain models for neuron dynamics. Theories for predicting the size of synchronous firing events are presented, in which the probabilistic dynamics between these events are taken into account. We use the theories to find different model networks that preserve the global dynamics on these networks.
Katherine Newhall
Rensselaer Polytechnic Institute
[email protected]
MS65
Network Analysis of Anisotropic Electrical Properties in Percolating Sheared Nanorod Dispersions

Percolation in nanorod dispersions induces extreme properties, with large variability near the percolation threshold. We investigate electrical properties across the dimensional percolation phase diagram of sheared nanorod dispersions [Zheng et al., Adv. Mater. 19, 4038 (2007)]. We quantify bulk average properties and corresponding fluctuations over Monte Carlo realizations, including finite size effects. We also identify fluctuations within realizations, with special attention on the tails of current distributions and other rare nanorod subsets which dominate the electrical response.
Feng B. Shi
UNC, Chapel Hill
[email protected]

Mark Forest
University of North Carolina at Chapel Hill
[email protected]

Peter J. Mucha
University of North Carolina
[email protected]

Simi Wang
University of North Carolina at Chapel Hill
[email protected]

Xiaoyu Zheng
Dept of Mathematics
Kent State University
[email protected]

Ruhai Zhou
Old Dominion University
[email protected]
MS65
Asymptotically Inspired Moment-Closure Approximation for Adaptive Networks

Adaptive social networks, in which nodes and network structure co-evolve, are often described using a mean-field system of equations for the density of node and link types. These equations constitute an open system due to dependence on higher-order topological structures. We propose a moment-closure approximation based on the analytical description of the system in an asymptotic regime. We show a good agreement between the improved mean-field prediction and simulations of the full network system.
Maxim S. Shkarayev
Mathematical Science Department, Rensselaer Polytechnic Institute

Leah Shaw
College of William and Mary
Dept. of Applied Science
[email protected]
MS66
Propagating Uncertainty from General Circulation Models through Projected Local Impacts

We develop statistical methodologies for impact analysis of climate change for economically important sectors, and transfer this development to the design of decision-support tools. Our approach is to implement, in the context of specific regions and sensitive sectors of high economic and social importance, tools that allow a coherent characterization of the propagation of uncertainty all the way to projected impacts, appropriately equipped with confidence or credibility statements relevant to the quantification of risks.
Peter Guttorp
University of Washington
[email protected]
MS66
Measures of Model Skill and Parametric Uncertainty Estimation for Climate Model Development

Measures of model skill and parametric uncertainty estimation for climate model development.
Gabriel Huerta
Indiana University
[email protected]

Charles Jackson
UTIG
University of Texas at Austin
[email protected]
MS66
Optimal Parameter Estimation in Community Atmosphere Models

The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. In this study, a stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters of the KF scheme based on a skill score, so that the algorithm progressively moves toward regions of the parameter space that minimize model errors.
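MVFSA runs multiple chains of Very Fast Simulated Annealing. The toy below is a single-chain sketch of the core VFSA move, not the PNNL implementation: a temperature-scaled, Cauchy-like proposal with Metropolis acceptance, applied to a made-up one-parameter quadratic "skill score" (the cost function, bounds, cooling schedule, and iteration budget are all illustrative assumptions):

```python
import math
import random

def anneal(cost, x0, lo, hi, n_iter=2000, t0=1.0, seed=0):
    """Single-chain simulated-annealing sketch: Cauchy-like steps whose
    scale shrinks with temperature, Metropolis acceptance, returning the
    best parameter value found."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    for k in range(1, n_iter + 1):
        t = t0 / math.log(k + 1.0)  # slow cooling schedule
        # heavy-tailed proposal, clipped back into the bounds
        step = (hi - lo) * t * math.tan(math.pi * (rng.random() - 0.5))
        y = min(hi, max(lo, x + step))
        cy = cost(y)
        if cy < c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy  # accept (always downhill; uphill with prob e^{-dC/t})
            if c < best_c:
                best_x, best_c = x, c
    return best_x, best_c

# toy "skill score": squared misfit of a single tunable parameter
obs = 0.7
skill = lambda p: (p - obs) ** 2
p_hat, _ = anneal(skill, x0=0.0, lo=-2.0, hi=2.0)
```

The importance-sampling flavor of the abstract comes from the acceptance rule: the chain spends most of its time in low-error regions of parameter space rather than sampling uniformly.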
Guang Lin, Ben Yang, Yun Qian, Ruby Leung
Pacific Northwest National Laboratory
[email protected], [email protected], [email protected], [email protected]
MS66
Covariance Approximation for Large Multivariate Spatial Datasets with an Application to Multiple Climate Models

In this work we investigate the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a non-separable and non-stationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced-rank approximation. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors.
Huiyan Sang
Texas A&M University
[email protected]
MS67
Validation Experience for Stochastic Models of Groundwater Flow and Radionuclide Transport at Underground Nuclear Test Sites and the Shift Away from Quantitative Measures and Toward Iterative Improvement

Several groundwater models of underground nuclear tests, incorporating uncertainty, were subjected to validation processes. There were failures in balancing readily quantifiable and measurable model attributes against harder-to-measure conceptual issues, in gaining decision-maker acceptance of statistically based conclusions, and ultimately in the quality and quantity of data forming the foundation of the original models. From this experience, the regulatory framework has shifted away from validation and toward a model evaluation approach with an emphasis on iterative improvement.
Jenny Chapman
Desert Research Institute
[email protected]

Ahmed Hassan
Irrigation and Hydraulics Department
Cairo University
[email protected]

Karl Pohlmann
Desert Research Institute
[email protected]
MS67
Validation and Data Assimilation of Shallow Water Models

We will discuss the validation of large-scale coupled wave-current models for simulating hurricane storm surge. We will demonstrate that by including the appropriate physics, discretizations, and parameters, one can consistently match measured data from a wide range of hurricanes. However, we will also point out where the models have significant uncertainty and describe a new data-assimilation framework for real-time hurricane surge forecasts.
Clint Dawson
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected]

Jennifer Proft, J. Casey Dietrich
University of Texas at Austin
[email protected], [email protected]

Troy Butler
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected]

Talea Mayo
University of Texas at Austin
[email protected]

Joannes Westerink
Department of Civil Engineering and Geological Sciences
University of Notre Dame
[email protected]

Ibrahim Hoteit
King Abdullah University of Science and Technology (KAUST)
[email protected]

M.U. Altaf
King Abdullah University of Science and Technology
[email protected]
MS67
Quantifying Long-Range Predictability and Model Error through Data Clustering and Information Theory

We present a framework blending data clustering and information theory to quantify long-range initial-value predictability and forecast error with imperfect models in complex dynamical systems. With reference to wind-driven ocean circulation, we demonstrate that the pertinent information for long-range forecasting can be represented via a coarse-grained partition of the set of initial data available to a model. A related formalism is applied to assess the forecast skill of Markov models of ocean circulation regimes.
Dimitris Giannakis
New York University
[email protected]
MS67
Model Selection and Inference for a Nuclear Reactor Application

We compare statistical techniques for model selection, with a focus on information-theoretic approaches, applied in a statistical framework for calibration of computer codes. Additionally, we propose a sequential strategy for iterative refinement of importance distributions used to sample uncertain model inputs to estimate the probability that a calibrated model exceeds a fixed or random threshold. We demonstrate these methods on a nuclear reactor application involving calculation of peak coolant temperature with the R7 code.
Laura Swiler
Sandia National Laboratories
Albuquerque, New Mexico
[email protected]

Brian Williams, Rick Picard
Los Alamos National Laboratory
[email protected], [email protected]
MS68
Uncertainty Quantification and Model Validation in Flight Control Applications

Models derived from physics or data are often too complicated for analysis or controller design. This leads to reduced-order modeling. However, it is important to assess the quality of such models. Often, for nonlinear systems in particular, model validation is performed using Monte Carlo simulations. Two systems are qualified to be similar if the variance of the trajectories is small. In this paper, we present a new paradigm where two systems are qualified to be similar if the density function of the output is similar, when excited by the same random input. This is more general than comparing a few moments or the support of the trajectories. We use the Wasserstein distance to compare density functions and also argue why the commonly used Kullback-Leibler metric is unsuitable for model validation. Examples based on flight control problems are presented to illustrate model validation in this framework.
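For scalar outputs the Wasserstein comparison has a particularly simple empirical form: in one dimension the optimal transport plan matches order statistics, so the distance is the mean absolute difference of sorted samples. The sketch below (Gaussian test ensembles and the 0.5 shift are illustrative assumptions, not the paper's flight-control example) also hints at why this metric is attractive here: it stays finite and meaningful even when the two output densities barely overlap, where the Kullback-Leibler divergence blows up.

```python
import numpy as np

def wasserstein1(a, b):
    """Empirical 1-D Wasserstein-1 distance between two equal-size
    samples: mean |difference| of the sorted values."""
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    assert a.shape == b.shape
    return float(np.mean(np.abs(a - b)))

rng = np.random.default_rng(1)
u = rng.standard_normal(10000)        # output ensemble of system 1
v = 0.5 + rng.standard_normal(10000)  # output ensemble of system 2 (shifted)
d = wasserstein1(u, v)                # close to the 0.5 mean shift
```

Two models would then be declared similar when this distance between their output ensembles (driven by the same random input) falls below a tolerance.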
Raktim Bhattacharya
Texas A&M University
College Station, Texas
[email protected]
MS68
New Positive-definite Random Matrix Ensembles for Stochastic Dynamical Systems with High Level of Heterogeneous Uncertainties

The scientific community has seen an increasing surge of applications of positive-definite random matrix ensembles in the fields of mechanics and multiscale mechanics in recent years. These ensembles are employed to directly model several positive-definite matrix-valued objects, e.g., mass matrices, stiffness matrices, the constitutive elasto-plasticity tensor, etc. Matrix variate distributions, e.g., the Wishart/Gamma distribution, the Beta Type 1 distribution, the Kummer-Beta distribution, or distributions derived from them, are currently considered for this modeling purpose. These distributions are, however, parameterized such that they induce specific mean and covariance structures that do not allow sufficient flexibility in modeling high levels of uncertainty. The current work will discuss two new positive-definite random matrix ensembles that are likely to provide more flexibility in modeling high levels of uncertainty.
Sonjoy Das
University at Buffalo
[email protected]
MS68
Existence and Stability of Equilibria in Stochastic Galerkin Methods

We consider dynamical systems in the form of nonlinear autonomous ordinary differential equations including random parameters. Thus the corresponding solutions represent random processes. The existence of a unique stable equilibrium is assumed for each sample of the parameters. Thereby, the equilibria depend on the random parameters. We apply the polynomial chaos expansion of the unknown random process to solve the stochastic dynamical system. The stochastic Galerkin method yields a larger coupled system of nonlinear ordinary differential equations satisfied by an approximation of the random process. The existence of equilibria is investigated for the coupled system. Moreover, we analyze the stability of these fixed points, where the spectrum of a Jacobian matrix has to be examined. Numerical simulations of test examples are presented.
Roland Pulch
University of Wuppertal
[email protected]
MS68
A Nonlinear Dimension Reduction Technique for Random Fields Based on Topology-Preserving Transformations

The intrinsic dimensionality of a random field may be defined as the minimum number of degrees of freedom which is necessary to account for its relevant statistical information. In this talk we will present a new nonlinear dimension reduction technique for random fields based on latent variable models and topology-preserving transformations. The low-dimensional embedding will be explicitly constructed together with the probability measure associated with the reduced-order probability space.
Daniele Venturi
Brown University
[email protected]

Guang Lin
Pacific Northwest National Laboratory
[email protected]

George E. Karniadakis
Brown University
Division of Applied Mathematics
[email protected]
MS69
New Results for the Stochastic PDEs of Fluid Dynamics

The addition of white noise driven terms to the fundamental equations of physics and engineering is used to model numerical and empirical uncertainties. In this talk we will discuss some recent results for the stochastic Navier-Stokes and Euler equations as well as for the stochastic Primitive Equations, a basic model in geophysical-scale fluid flows. For all of the above systems our results cover the case of a general nonlinear multiplicative stochastic forcing.
Nathan Glatt-Holtz
Indiana University
[email protected]
MS69
Asymptotic Behavior of Stochastic Lattice Differential Equations

The random attractor is an important concept to describe the long-term behavior of solutions for a given stochastic system. In this talk we will first provide sufficient conditions for the existence of a global compact random attractor for general dynamical systems in weighted spaces of infinite sequences. We then apply the result to show the existence of a unique global compact random attractor for first-order, second-order and partly dissipative stochastic lattice differential equations with random coupled coefficients and multiplicative/additive white noise in weighted spaces.
Xiaoying Han
Auburn University
[email protected]
MS69
Unbiased Perturbations of the Navier-Stokes Equation

We consider a new class of stochastic perturbations of the Navier-Stokes equation. Its unique feature is the preservation of the mean dynamics of the deterministic equation; hence it is an unbiased random perturbation of the equation. We will discuss results for the steady state solutions and the convergence to steady solutions, and show sufficient conditions which are very similar to analogous results for the unperturbed equation. Numerical simulations comparing this unbiased perturbation with the usual stochastic Navier-Stokes will be shown.
Chia Ying Lee
Statistical and Applied Mathematical Sciences Institute
U. of North Carolina
[email protected]

Boris Rozovsky
Brown University
[email protected]
MS69
Generalized Malliavin Calculus and SPDEs in Higher-Order Chaos Spaces

The Malliavin derivative, divergence operator (Skorokhod integral), and the Ornstein-Uhlenbeck operator are extended from the traditional Gaussian setting, or first-order chaos space, to nonlinear generalized functionals of white noise, which are elements of the full chaos space. These extensions are then applied to study stochastic elliptic and parabolic equations driven by noise from higher-order chaos spaces.
Sergey Lototsky
Department of Mathematics
University of Southern California
[email protected]

Boris Rozovskii
Brown University
[email protected]

Dora Selesi
University of Novi Sad
[email protected]
MS69
Stochastic-Integral Models for Propagation-of-Uncertainty Problems in Nondestructive Evaluation

Generalized polynomial chaos (gPC) and the probabilistic collocation method (PCM) are finding considerable application to problems of interest to engineers in which random parameters are an essential feature of the mathematical model. So far the applications have been mainly to stochastic partial differential equations, but we extend the method to volume-integral equations, which have met great success in electromagnetic nondestructive evaluation (NDE), especially with eddy-currents. The problems of main interest to the NDE community in this connection are concerned with the issue of propagation of uncertainty when the relevant parameters are not well characterized, or are known simply as random variables. We demonstrate the ideas by considering a metallic surface that has undergone a shot-peening treatment to reduce residual stresses, and has, therefore, become a random conductivity field.
Elias Sabbagh
Victor Technologies
[email protected]

Kim Murphy, Harold Sabbagh
Victor Technologies, LLC
[email protected], [email protected]

John [email protected]

Jeremy Knopp
Air Force Research Laboratory
[email protected]

Mark Blodgett
Air Force Research Laboratory
[email protected]
MS70
Hybrid Methods for Data Assimilation
I present a theoretical comparison of two widely used data assimilation methods: the ensemble Kalman filter (EnKF) approach and the four-dimensional variational (4D-Var) approach. Then I propose a suboptimal hybridization scheme to combine the advantages of the EnKF and the 4D-Var. This new hybrid technique is computationally less expensive than the full 4D-Var and, in simulations, performs better than EnKF for both linear and nonlinear test problems.
Haiyan Cheng
Willamette University
[email protected]

Adrian Sandu
Virginia Polytechnic Institute and State University
[email protected]
MS70
Model Covariance Sensitivity as a Guidance for Localization in Data Assimilation

In this talk we present how to use model covariance sensitivity information to guide, in a more precise manner, the selection of appropriate cross-correlation length-scale parameters for model covariance localization techniques. Covariance sensitivities are obtained using adjoint-based methods. Numerical experiments are presented for the ensemble Kalman filter data assimilation applied to the Lorenz 40-variable model. The relevance of the technique, as well as its utility, for operational data assimilation systems will be presented and discussed.
Humberto C. Godinez
Los Alamos National Laboratory
Applied Mathematics and Plasma Physics
[email protected]

Dacian N. Daescu
Portland State University
Department of Mathematics and Statistics
[email protected]
MS70
The Estimation of Functional Uncertainty Using Polynomial Chaos and Adjoint Equations

The combined use of nonintrusive polynomial chaos (PC) and adjoint equations (yielding the gradient) is addressed, aimed at the estimation of uncertainty of a valuable functional subject to large errors in the input data. Random variables providing maximum impact on the result (leading values) may be found using the gradient information, which allows reduction of the problem dimension. The gradient may also be used for the calculation of PC coefficients, thus enabling further acceleration of the computations. Using gradient information for the estimation of polynomial coefficients (APC) also provides a significant reduction of the computation time. The adjoint Monte Carlo (AMC) is highly efficient from a computational viewpoint and accurate for moderate amplitude of errors. For a large number of random variables, the accuracy of AMC is close (or even superior) to PC results even under large error.
Michael Navon
Florida State University
[email protected]
MS70
FATODE: A Library for Forward, Tangent Linear, and Adjoint ODE Integration

FATODE is a FORTRAN library for the integration of ordinary differential equations with direct and adjoint sensitivity analysis capabilities. The paper describes the capabilities, implementation, code organization, and usage of this package. FATODE implements the forward, adjoint, and tangent linear models for four families of methods: ERK, FIRK, SDIRK, and Rosenbrock. A wide range of applications are expected to benefit from its use; examples include parameter estimation, data assimilation, optimal control, and uncertainty quantification.
Hong Zhang
Virginia Tech
[email protected]

Adrian Sandu
Virginia Polytechnic Institute and State University
[email protected]
MS71
Upscaling Micro-scale Material Variability Through the Use of Random Voronoi Tessellations

A randomly seeded Voronoi tessellation is used to model the microscale grain structure of a thin ductile ring. Each grain is assigned an elastic stiffness tensor with cubic symmetry but with randomly oriented principal directions. The ring is subjected to internal pressure loading and deformed elastically. The displacement and stress response of the ring is studied from the perspective of homogenization theory as the ratio of the grain size to ring diameter is varied.
Joseph Bishop
Sandia National Laboratories
[email protected]
MS71
Discrete Models of Fracture and Mass Transport in Cement-based Composites

The life-cycle performance of structural concrete is affected by material features and internal processes that span several length scales. This presentation regards the discrete lattice modeling of such features, fracture, and crack-assisted mass transport in concrete. Domain discretization is based on Voronoi/Delaunay tessellations of semi-random point sets. Lack of scale separation complicates the modeling efforts. Variability is quantified by simulating multiple, random realizations of nominally identical structures.
John Bolander
UC Davis
[email protected]

Peter Grassl
School of Engineering
University of Glasgow
[email protected]
MS71
Maximal Poisson-disk Sampling for UQ Purposes
We describe a new approach to solve the maximal Poisson-disk sampling problem in d-dimensional spaces for UQ purposes. In contrast to existing methods, our technique tracks the remaining voids after a classical dart-throwing phase by constructing the associated convex hulls. This provides a mechanism for termination with bounded time complexity. We also show how to utilize this sampling technique to achieve better estimations of probability of failure and detection of discontinuities. We demonstrate application examples.
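The classical dart-throwing phase that the abstract builds on can be sketched in a few lines. The example below (unit square, disk radius `r=0.1`, and dart budget are illustrative assumptions) accepts a candidate only if it is at least `r` from every accepted sample; the abstract's contribution, tracking the leftover voids via their convex hulls to make the sample maximal with bounded time, is the second phase and is omitted here:

```python
import itertools
import math
import random

def dart_throwing(r, n_darts=5000, seed=0):
    """Classical dart-throwing phase of Poisson-disk sampling on the
    unit square: keep a uniform candidate only if it lies at least r
    from every previously accepted sample (naive O(n) conflict check)."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n_darts):
        p = (rng.random(), rng.random())
        if all(math.dist(p, q) >= r for q in pts):
            pts.append(p)
    return pts

pts = dart_throwing(r=0.1)
# the defining Poisson-disk property: no two samples closer than r
min_gap = min(math.dist(p, q) for p, q in itertools.combinations(pts, 2))
```

Dart throwing alone slows down as voids shrink and never certifies maximality, which is exactly the gap the void-tracking phase is designed to close.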
Mohamed S. Ebeida
Sandia National Laboratories
[email protected]
MS71
Simplex Stochastic Collocation Approach
The Simplex Stochastic Collocation method is introduced as a non-intrusive, adaptive uncertainty quantification method for computational problems with random inputs. The effectiveness of the adaptive formulation is determined by both h- and p-refinement measures and a stopping criterion derived from an error estimate based on hierarchical surpluses. A p-criterion for the polynomial degree p is formulated to achieve a constant order of convergence with dimensionality. Numerical results show that the stopping criterion is reliable owing to the accurate and conservative error estimate, and that the refinement measures outperform uniform refinement.
Gianluca Iaccarino
Stanford University
[email protected]

Jeroen Witteveen
Stanford University
Center for Turbulence Research
[email protected]
MS71
Random Poisson-Disk Samples and Meshes
Maximal Poisson-disk sampling produces random point clouds of uniform density. Every part of the domain is within distance r of a sample, yet no two samples are closer than r to each other. I will describe an optimal algorithm for generating samples, and how we sample the domain boundary differently to get a Delaunay triangle or Voronoi polyhedral mesh. These meshes have provable quality similar to deterministic meshes, but with uniform random orientations.
Scott A. Mitchell, Mohamed S. Ebeida
Sandia National Laboratories
[email protected], [email protected]
MS72
Quantile Emulation for Non-Gaussian Stochastic Models

Stochastic models often require a large number of expensive runs for practical applications. In the emulator framework, the model is replaced with a faster surrogate (a Gaussian process) which captures prediction uncertainty under the assumption of Gaussianity. If the simulator response is non-Gaussian, however, alternative approaches have to be sought. We investigate emulation of quantiles and discuss how to perform Bayesian inference for this class of models. A real-world example application is provided.
Remi Barillec
University of Aston, UK
Non-linearity and Complexity Research Group
[email protected]

Alexis Boukouvalas, Dan Cornford
University of Aston, UK
[email protected], [email protected]
MS72
Methods for Assessing Chances of Rare, High-Consequence Events in Groundwater Monitoring
This talk looks at multiple approaches for assessing uncertainty in high-consequence, low-probability events related to groundwater monitoring. We consider Monte Carlo, Markov chain Monte Carlo, and null-space Monte Carlo, all of which have difficulty estimating the chance of low-probability events. We also look at modifications to these approaches that help address this issue. We focus on a synthetic application in 3-D solute transport.
Dave Higdon
Los Alamos National Laboratory
Statistical Sciences
[email protected]

Elizabeth Keating, Zhenxue Dai, James Gattiker
Los Alamos National Laboratory
[email protected], [email protected], [email protected]
MS72
Sensitivity Analysis and Estimation of Extreme Tail Behaviour in Two-dimensional Monte Carlo Simulation (2DMC)

Two-dimensional Monte Carlo simulation is frequently used to implement risk models, comprising mechanistic and probabilistic components, to separately quantify uncertainty and variability within a population. Risk managers must assess the proportion exceeding a critical threshold, and the levels of exceedance. We describe an efficient method combining Bayesian model emulation and extreme value theory, which can also be used to quickly identify those model parameters having the greatest impact on the probabilities of rare events.
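The separation of uncertainty and variability in 2DMC can be sketched with two nested sampling loops: the outer loop draws uncertain population-level parameters, the inner loop draws individuals given those parameters, and each outer draw yields one population exceedance fraction. The distributions, threshold, and sample sizes below are invented for illustration (the actual risk model would be the mechanistic component):

```python
import numpy as np

def two_d_mc(n_outer=200, n_inner=2000, thresh=3.0, seed=0):
    """Two-dimensional Monte Carlo sketch: the outer loop samples an
    uncertain population mean (epistemic uncertainty); the inner loop
    samples individuals around it (inter-individual variability).
    Each outer draw yields one exceedance fraction, so the returned
    array is a sample from the *distribution* of the risk estimate."""
    rng = np.random.default_rng(seed)
    fractions = np.empty(n_outer)
    for i in range(n_outer):
        mu = rng.normal(1.0, 0.3)           # uncertain population parameter
        pop = rng.normal(mu, 1.0, n_inner)  # variability across individuals
        fractions[i] = np.mean(pop > thresh)
    return fractions

frac = two_d_mc()
# risk summary: median and upper-95% exceedance fraction
med, hi = np.quantile(frac, [0.5, 0.95])
```

The upper quantile of `frac` is the kind of tail summary a risk manager reads off; the abstract's emulator-plus-extreme-value method targets exactly this quantity when the inner model is too expensive for brute-force nesting.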
Marc C. Kennedy, Victoria Roelofs
The Food and Environment Research Agency
[email protected], [email protected]
MS72
An Investigation of the Pineapple Express Phenomenon via Bivariate Extreme Value Theory

We apply bivariate extreme value methods in an examination of extreme winter precipitation events produced by a regional climate model. We first assess the model's ability to reproduce these events in the US Pacific coast region. We then link these events to large-scale process dynamics, with a particular focus on the pineapple express (PE), a special type of winter event. Finally, we examine extreme events produced by the model in a future climate scenario.
Grant Weller
Department of Statistics, Colorado State University
[email protected]

Dan Cooley
Colorado State University
[email protected]

Stephan Sain
Institute for Mathematics Applied to Geosciences
National Center for Atmospheric Research
[email protected]
MS73
An Introduction to Emulators and Climate Models
State-of-the-art climate simulators pose particular problems for UQ methods. They have very large state spaces (of order 10^12) and are computationally expensive. In this talk I will introduce climate models: what they do, how they work, and the problems that arise when we quantify their uncertainty, identify its source and use data to reduce it. I will outline some solutions to these problems, indicate where the current challenges lie, and outline future research.
Peter Challenor
National Oceanography Centre, Southampton, UK
[email protected]
MS73
Calibration of Gravity Waves in WACCM
The Whole Atmosphere Community Climate Model (WACCM) is a comprehensive numerical model, spanning the range of altitude from the Earth's surface to the thermosphere. Gravity waves are a key component of the dynamics, and are parametrized. However, the current parametrization does not result in fully realistic features such as the period of the quasi-biennial oscillation. We carry out a Bayesian calibration of the gravity wave scheme using efficient sequential design of experiments.
Serge Guillas
University College London
[email protected]
Hanli [email protected]
MS73
Uncertainty in Modeled Upper Ocean Heat Content Change Using a Large Ensemble

We evaluate the uncertainty in the heat content change (ΔQ) in the top 700 meters in the ocean component of the CCSM3.0 model using an emulator created from a large-member ensemble. A large ensemble of the CCSM3.0 ocean/ice system at a 3-degree resolution was created by varying parameters related to ocean mixing and advection. The outcomes are compared to four estimates using observational data. We explore how other modeled estimates of ΔQ from the CMIP3 ensemble can be understood in the context of this large ensemble.
Robin Tokmakian
Naval Postgraduate School
[email protected]
MS73
Emulating Time Series Output of Climate Models
The RAPIT project is concerned with the risk, probability and impacts of future changes to the Atlantic Meridional Overturning Circulation (AMOC) under anthropogenic forcing. Our principal resource is an unprecedented perturbed-parameter ensemble (of order 10,000 runs) of the fully coupled, un-flux-adjusted climate model HadCM3. We present dynamic methods for emulating the AMOC time series output by the model and explore the use of these emulators in helping to constrain future AMOC projections using different data sources and ocean monitoring systems.
Danny Williamson
Durham University
[email protected]
MS74
A Non-intrusive Alternative to a Computational Measure-Theoretic Inverse

We develop a non-intrusive technique for extending the computational measure-theoretic methodology to problems where it is either infeasible or impractical to obtain derivative information from the map. We discuss various sources of errors in the methodology and how to address these.
Troy Butler
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected]
MS74
Inverse Function-based Methodology for UQ
There is a growing need for statistical methodology for problems addressing inverse sensitivity analysis in which there are one or multiple quantities of interest connected to the input space through a system of partial differential equations. Though Bayesian methodology has become quite popular in this setting, we explore potential non-Bayesian solutions using some newer ideas related to inverse function-based inference. The goal is to understand the distribution on the parameter space without the use of priors.
Jessi Cisewski
Department of Statistics and Operations Research
University of North Carolina at Chapel Hill
[email protected]
MS74
A Computational Approach for Inverse Sensitivity Problems using Set-valued Inverses and Measure Theory

We discuss a numerical method for inverse sensitivity analysis of a deterministic map from a set of parameters to a randomly perturbed quantity of interest. The solution method has two stages: (1) approximate the unique set-valued solution to the inverse problem, and (2) apply measure theory to compute the approximate probability measure on the parameter space that solves the inverse problem. We discuss convergence and analysis of the method.
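The two-stage solution method lends itself to a simple non-intrusive sketch: sample the parameter space, identify which samples fall in the set-valued inverse of an output event, and read off the probability that the volume measure assigns to it. The forward map, parameter domain, and output event below are hypothetical, chosen only so the answer can be checked analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_map(lam):
    # hypothetical deterministic map from 2 parameters to 1 output
    return lam[:, 0] + lam[:, 1] ** 2

# sample the parameter space (unit square) non-intrusively
lam = rng.uniform(0.0, 1.0, size=(100_000, 2))
q = forward_map(lam)

# output event A = [0.5, 1.0]; its set-valued inverse Q^{-1}(A) is
# approximated by the samples whose outputs land in A
in_A = (q >= 0.5) & (q <= 1.0)

# probability the inverse problem assigns to Q^{-1}(A) under a
# uniform (volume) measure on the parameter space
p_inverse = in_A.mean()
print(p_inverse)
```

For this particular map the exact probability works out to about 0.431, so the Monte Carlo estimate can be verified directly.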
Donald Estep
Colorado State University
[email protected] or [email protected]
MS74
Inverse Problem and Fisher's View of Statistical Inference

R. A. Fisher's fiducial inference has been the subject of many discussions and controversies ever since he introduced the idea during the 1930's. The idea experienced a bumpy ride, to say the least, during its early years, and one can safely say that it eventually fell into disfavor among mainstream statisticians. However, it appears to have made a resurgence recently under various guises. For example, under the new name of generalized inference, fiducial inference has proved to be a useful tool for deriving statistical procedures for problems where frequentist methods with good properties were previously unavailable. Therefore we believe that the fiducial argument of R. A. Fisher deserves a fresh look from a new angle. In this talk we first generalize Fisher's fiducial argument and obtain a fiducial recipe applicable in virtually any situation. We demonstrate this fiducial recipe on examples of varying complexity. We also investigate, by simulation and by theoretical considerations, some properties of the statistical procedures derived by the fiducial recipe, showing they often possess good repeated-sampling, frequentist properties. As an example we apply the recipe to a mixed linear model with interval observations. Portions of this talk are based on joint work with Hari Iyer, Thomas C.M. Lee and Jessi Cisewski.
Jan Hannig
Department of Statistics, University of North Carolina
[email protected]
MS75
Variable Selection and Sensitivity Analysis via Dynamic Trees with an Application to Computer Code Performance Tuning

We introduce a new response surface methodology, dynamic trees, with an application to variable selection and input sensitivity analysis. These methods are used to analyze automatic computer code tuning that combines classification and regression elements on large data sets. The talk will also highlight an R package implementing the methods, called dynaTree, which is available on CRAN.
Robert Gramacy
University of Chicago
[email protected]
MS75
Bayesian Filtering in Large-Scale Geophysical Systems and Uncertainty Quantification

Bayesian filters compute the probability distribution of the system state, and thus readily provide a framework for uncertainty quantification and reduction. In geophysical applications, filtering techniques are designed to produce a small sample of state estimates as a way to reduce the computational burden. This, coupled with poorly known model and observation deficiencies, can lead to distribution estimates that are far from optimal, yet still provide meaningful estimates. We present this problem and discuss possible approaches to produce improved uncertainty estimates.
Ibrahim Hoteit
King Abdullah University of Science and Technology (KAUST)
[email protected]

Bruce Cornuelle
University of California, San Diego
[email protected]

Aneesh Subramanian
University of California, San Diego
[email protected]

Hajoon Song
University of California, Santa Cruz
[email protected]

Xiaodong Luo
King Abdullah University of Science and Technology (KAUST)
[email protected]
MS75
On Data Partitioning for Model Validation
Validation processes usually require that experimental data be split into calibration and validation datasets. As this choice is often subjective, we introduce an approach based on the concept of cross-validation. The proposed method allows one to systematically determine the optimal partition and to address fundamental issues in validation, namely that the model (1) be evaluated with respect to the data and to predictions of quantities of interest, and (2) be highly challenged by the validation set.
Rebecca Morrison
UT Austin
[email protected]

Corey Bryant
The University of Texas at Austin
Institute for Computational Engineering and Sciences
[email protected]

Gabriel A. Terejanu
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected]

Serge Prudhomme
ICES
The University of Texas at Austin
[email protected]

Kenji Miki
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected]
MS75
Validation of a Random Matrix Model for Mesoscale Elastic Description of Materials with Microstructures

We present the validation of a bounded random matrix model defining the constitutive elastic behavior of a heterogeneous material. Using an experimental database of microstructures, we first construct a surrogate model in order to generate the further data required for calibration and validation. We then recall the construction of the random matrix model characterizing the mesoscale apparent elasticity tensor of the material. Finally, a procedure is presented to validate the model using observations resulting from fine-scale simulations.
Arash Noshadravan
University of Southern California
[email protected]

Roger Ghanem
University of Southern California
Aerospace and Mechanical Engineering and Civil Engineering
[email protected]

Johann Guilleminot
Laboratoire de Modelisation et Simulation Multi Echelle
Universite Paris-Est
[email protected]

Ikshwaku Atodaria, Pedro Peralta
School for Engineering of Matter
Arizona State University
[email protected], [email protected]
MS76
Toward Analysis of Uncertainty in Chaotic Systems
Due to the exponential growth of small disturbances, uncertainty analysis in chaotic systems is challenging. For example, even for mean quantities, sensitivity analysis using a standard adjoint-based approach fails because the adjoint variables diverge. In this talk, recent work on uncertainty quantification for chaotic systems is described. In particular, several methods, including polynomial chaos expansions, are evaluated for capturing uncertainty in mean quantities using the Lorenz equations as a model problem.
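A minimal sketch of the kind of quantity at issue: the time-averaged z-coordinate of the Lorenz-63 system is a mean quantity that remains a stable target for UQ even though individual trajectories diverge exponentially. The integrator, step size, and initial condition here are illustrative (forward Euler, not a production scheme):

```python
def lorenz_mean_z(rho, sigma=10.0, beta=8.0 / 3.0, dt=0.002, n_steps=300_000):
    """Time-averaged z for Lorenz-63 via forward Euler: a long-time mean
    that is well-behaved even though nearby trajectories separate fast."""
    x, y, z = 1.0, 1.0, 1.0
    z_sum = 0.0
    for _ in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        z_sum += z
    return z_sum / n_steps

mean_z = lorenz_mean_z(28.0)   # classical chaotic parameter regime
print(mean_z)
```

It is this scalar, rather than any individual trajectory, whose dependence on uncertain inputs such as rho the surveyed methods aim to capture.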
Eleanor Deram
University of Texas at Austin
[email protected]

Todd Oliver
PECOS/ICES, The University of Texas at Austin
[email protected]

Robert D. Moser
University of Texas at Austin
[email protected]
MS76
A Dynamically Weighted Particle Filter for Sea Surface Temperature Modeling

The sea surface temperature (SST) is an important factor in the earth's climate system. We analyze SST data from the Caribbean Islands area after a hurricane using a radial basis function network-based dynamic model. Compared to the traditional grid-based approach, our approach requires much less CPU time and makes real-time forecasting of SST feasible on a personal computer.
Faming Liang
Texas A&M University
College Station, Texas
[email protected]

Duchwan Ryu
Department of Biostatistics
Medical College of Georgia
[email protected]

Bani Mallick
Department of Statistics
Texas A&M University
[email protected]
MS76
Characterization of the Effect of Uncertainty in Terrain Representations for Modeling Geophysical Mass Flows

In modeling flow on natural terrains, a persistent and difficult problem is the effect of errors and uncertainties in the representation of the terrain. In this talk we will discuss the effect of such uncertainty on outputs from a geophysical mass flow model. We will then introduce methodology to systematically account for these uncertainties by using a model of the error and sampling it to create ensembles of possible elevations.
Ramona Stefanescu
University at Buffalo
[email protected]

Abani K. Patra
SUNY at Buffalo
Dept of Mechanical Engineering
[email protected]

Marcus Bursik
Dept. of Geology
SUNY at Buffalo
[email protected]
MS77
Statistical Inference Problems for Nonlinear SPDEs

We consider a parameter estimation problem to determine the drift coefficient for a large class of parabolic stochastic PDEs driven by additive or multiplicative noise. We derive several different classes of estimators based on the first N Fourier modes of a sample path observed continuously on a finite time interval. We study the consistency and asymptotic normality of these estimators as the number of Fourier coefficients increases, and we present some numerical simulation results.
Igor Cialenco
Illinois Institute of Technology
[email protected]
MS77
Filtering the Navier-Stokes Equations
Data assimilation refers to methodologies for the incorporation of noisy observations of a physical system into an underlying model in order to infer the properties of the state of the system (possibly including parameters). The model itself is typically subject to uncertainties, in the input data and in the physical laws themselves. This leads naturally to a Bayesian formulation in which the posterior probability distribution of the system state, given the observations, plays a central conceptual role. The aim of this paper is to use this Bayesian posterior probability distribution as a gold standard against which to evaluate various commonly used data assimilation algorithms. We study the 2D Navier-Stokes equations in a periodic geometry and evaluate a variety of sequential filtering approximations based on 3DVAR and on extended and ensemble Kalman filters. The performance of these data assimilation algorithms is quantified by comparing the relative error in reproducing moments of the posterior probability distribution. The primary conclusions of the study are that: (i) with appropriate parameter choices, approximate filters can perform well in reproducing the mean of the desired probability distribution; (ii) however, these filters typically perform poorly when attempting to reproduce information about covariance; (iii) this poor performance is compounded by the need to modify the filters, and their covariance in particular, in order to induce filter stability and avoid divergence. Thus, whilst filters can be a useful tool in predicting mean behaviour, they should be viewed with caution as predictors of uncertainty. We complement this study with (iv) a theoretical analysis of the ability of the filters to estimate the mean state accurately, and we derive a non-autonomous (S)PDE for this estimate in the continuous-time limit.
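A toy version of one of the compared building blocks, a stochastic (perturbed-observation) ensemble Kalman analysis step, can be sketched on a two-dimensional Gaussian problem. This is a generic EnKF update under stated toy assumptions, not the paper's Navier-Stokes experiments; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(ensemble, y_obs, H, obs_var):
    """One stochastic (perturbed-observation) EnKF analysis step.
    ensemble: (n_members, n_state); H: (n_obs, n_state)."""
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (n - 1)                            # sample forecast covariance
    R = obs_var * np.eye(H.shape[0])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, H.shape[0]))
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

# toy 2-state problem: prior ~ N(0, I); observe only the first component
ens = rng.normal(size=(500, 2))
H = np.array([[1.0, 0.0]])
post = enkf_analysis(ens, np.array([2.0]), H, obs_var=0.25)
post_mean = post.mean(axis=0)
print(post_mean)   # first component pulled toward the observation
```

With an exactly Gaussian prior the analysis mean should approach the Kalman value 1.6 for the observed component; the sampling error visible here is exactly the covariance-reproduction issue the study quantifies at scale.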
Kody Law
Mathematics Institute
University of Warwick
[email protected]
MS77
Adaptive Construction of Surrogates for Bayesian Inference

Surrogate models are often used to accelerate Bayesian inference in statistical inverse problems. Yet the construction of globally accurate surrogates for complex models can be prohibitive and in a sense unnecessary, as the posterior distribution typically concentrates on a small fraction of the prior support. We present a new adaptive approach that uses stochastic optimization (the cross-entropy method) to construct surrogates that are accurate over the support of the posterior distribution. Efficiency and accuracy are demonstrated via parameter inference in nonlinear PDEs.
Jinglai Li, Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected], [email protected]
MS77
Reimagining Diffusion Monte Carlo for Quantum Monte Carlo, Sequential Importance Sampling, Rare Event Simulation and More

Diffusion Monte Carlo (DMC) was developed forty years ago within the Quantum Monte Carlo community to compute ground state energies of the Schrodinger operator. Since then, the basic birth/death strategy of DMC has found its way into a wide variety of application areas. For example, efficient resampling strategies used in sequential importance sampling algorithms (e.g. particle filters) are based on DMC. As I will demonstrate, some tempting generalizations of the basic DMC framework lead to an instability in the time discretization parameter. This instability has important consequences in, for example, applications of DMC in sequential importance sampling and rare event simulation. We suggest a modification of the basic DMC algorithm that eliminates this instability. In fact, the new algorithm is more efficient than DMC under any condition (parameter regime). We prove this, as well as show numerically and analytically that it is stable in regimes that are unstable for DMC.
Jonathan Weare
University of Chicago
Department of Mathematics
[email protected]

Martin Hairer
The University of Warwick
[email protected]
MS78
Computing and Using Observation Impact in 4D-Var Data Assimilation
Data assimilation combines information from an imperfect model, sparse and noisy observations, and error statistics to produce a best estimate of the state of a physical system. Different observational data points have different contributions to reducing the uncertainty with which the state is estimated. In this paper we present a computational approach to quantify the observation impact for analyzing the effectiveness of the assimilation system, for data pruning, and for designing future sensor networks.
Alexandru Cioaca
Virginia Tech
[email protected]

Adrian Sandu
Virginia Polytechnic Institute and State University
[email protected]
MS78
Error Covariance Sensitivity and Impact Estimation with Adjoint 4D-Var

The adjoint data assimilation system (DAS) is presented as a feasible approach to evaluate the forecast sensitivity with respect to the specification of the observation and background error covariances in a four-dimensional variational DAS. A synergistic link is established between various techniques to analyze the DAS performance: observation sensitivity, error covariance sensitivity and impact estimation, and a posteriori diagnosis. Applications to atmospheric modeling and the current status of implementation at numerical weather prediction centers are discussed.
Dacian N. Daescu
Portland State University
Department of Mathematics and Statistics
[email protected]
MS78
Lagrangian Data Assimilation
Abstract not available at time of publication.
Christopher JonesUniversity of North Carolina and University of [email protected]
MS78
Uncertainty Quantification and Data Assimilation Issues for Macro-Fiber Composite Actuators

Macro-fiber composites (MFC) are being considered for a range of present and emerging applications, such as flow control and remote configuration of large space structures, due to their large bandwidth, moderate power requirements, and moderate strain capabilities. However, they also exhibit hysteresis and constitutive nonlinearities due to their PZT components. In this presentation, we will discuss uncertainty quantification and data assimilation techniques for these actuators within the context of the homogenized energy modeling framework. This will utilize data collected under a variety of operating regimes.
Ralph C. Smith
North Carolina State Univ
Dept of Mathematics
[email protected]

Zhengzheng Hu
Department of Mathematics
North Carolina State University
[email protected]
Nathanial [email protected]
MS79
Analyzing Predictive Uncertainty using Paired Complex and Simple Models

In analyzing the uncertainty of environmental model predictions, a tension exists between the prior information and likelihood terms of Bayes' equation. The first requires a complex model that reflects the complexity of earth processes. The second requires a simple, fast-running model that can capture information from historical data. Use of a single model to achieve both erodes its ability to achieve either. A paired-model approach can achieve the same ends with fewer compromises.
John Doherty
Flinders University and Watermark Numerical Computing
[email protected]
MS79
Modeling Spatial Variability as Measurement Uncertainty

Direct evaluation of uncertainty in models, parameters and measurements will be viewed through an ill-posed inverse problem, and we will demonstrate how incorporating uncertainty can regularize the problem. In particular, we will estimate soil hydraulic properties using two different types of measurements, and incorporate uncertainty due to spatial variation. These estimates can be used over larger scales than those that were measured, and their corresponding uncertainty indicates areas where measurement uncertainty needs to be reduced.
Jodi Mead
Boise State University
Department of Mathematics
[email protected]
MS79
Using Electrochemical Impedance Spectroscopy for Studying Respiration of Anode-Respiring Bacteria for Solid Oxide Fuel Cells

Electrochemical impedance spectroscopy (EIS) is a powerful diagnostic technique that involves applying a small AC voltage to an electrode over a range of frequencies. EIS for estimating anode resistances in the study of anode bacteria respiration is of interest. Respiration of the bacteria leads to an electrical current that can be used for various bioenergy applications. Here the focus is on the inverse problem for practical data obtained in solid-oxide fuel cell research.
Rosemary A. Renaut
Arizona State University
School of Mathematical and Statistical Sciences
[email protected]
MS79
Quantifying Simplification-Induced Error using Subspace Techniques

All geophysical models represent a simplification of reality. As a result, the history-matching process can lead to surrogate parameters that compensate for model simplification. This parameter surrogacy may or may not create predictive bias, depending on prediction sensitivity to null- and solution-space parameter components. The potential for bias indicates that the role of history matching is prediction dependent. History-matching-induced predictive bias is analyzed and quantified from a subspace perspective for a physics-based water resource model.
Jeremy White
U.S. Geological Survey
Florida Water Science Center
[email protected]

Joseph Hughes
US Geological Survey
Florida Water Science Center
[email protected]
MS80
Hybrid Reduced-Order Modeling for Uncertainty Management in Nuclear Modeling

The speaker will overview reduced-order modeling techniques currently used in nuclear reactor design calculations. Reactor calculations are very complex given the nature of the physics governing reactor behavior and the heterogeneous design. Reduced-order modeling is therefore required to allow engineers to carry out design calculations on a routine basis. In doing so, the fidelity of the calculations must be preserved. When used to perform uncertainty quantification, sensitivity analysis, or data assimilation, additional reduction is necessary, since these analyses require many executions of the models. The speaker has developed hybrid statistical-deterministic reduction techniques to enable the execution of these analyses on a routine basis. He will give a quick overview of reduction techniques used in best-estimate reactor calculations, then focus in depth on reduction algorithms he developed to perform sensitivity and uncertainty analyses.
Hany S. Abdel-Khalik
North Carolina State University
[email protected]
MS80
Sparse Interpolatory Reduced-Order Models for Simulation of Light-Induced Molecular Transformations

We describe an efficient algorithm for using a quantum-chemistry simulator to drive a gradient descent algorithm. Our objective is to model light-induced molecular transformations. The simulator is far too expensive and the design space too high-dimensional for the simulator to be used directly to drive the dynamics. We begin by applying domain-specific knowledge to reduce the dimension of the design space. Then we use sparse interpolation to model the energy from the quantum code on hyper-cubes in configuration space, controlling the volume of the hyper-cubes both to maintain accuracy and to avoid failure of the quantum code's internal optimization. We will conclude the presentation with some examples that illustrate the algorithm's effectiveness.
Carl T. Kelley
North Carolina State Univ
Department of Mathematics

David Mokrauer
Dept of Mathematics
NC State University
[email protected]
MS80
Intrusive Analysis for Uncertainty Quantification of Simulation Models

We develop intrusive, hybrid uncertainty quantification methods applicable to modern, high-resolution simulation models. We seek to construct surrogate models of uncertainty by augmenting knowledge of the model with results of intrusive analysis, such as automatic differentiation. Our toolset now includes gradient-enhanced regression, stochastic-processes-based analysis that provides error bounds and statistical metrics for the prior regression model, and a framework for the use of reduced, lower-fidelity approximations to effectively describe uncertainties in the model.
Oleg Roderick
Argonne National Laboratory
[email protected]
PP1
A Statistical Decision-Theoretic Approach to Dynamic Resource Allocation for a Self-Aware Unmanned Aerial Vehicle

For inference and prediction tasks, it is typical for decision-makers to have available to them several different numerical models as well as experimental data. Our statistical decision-theoretic approach to dynamic resource allocation seeks to exploit optimally all available information by selecting models and data with the appropriate fidelity for the task at hand. We demonstrate our approach on the online prediction of flight capabilities for a self-aware unmanned aerial vehicle.
Doug Allaire, Karen E. Willcox
Massachusetts Institute of Technology
[email protected], [email protected]
PP1
Computational Reductions in Stochastically Driven Neuronal Models with Adaptation Currents

The complexity of stochastic, conductance-based neuronal network dynamics has motivated several computational reductions to the Hodgkin-Huxley neuron model. These reductions, however, rarely generalize to neurons containing realistic adaptive ionic currents. Using several measures of robustness and network simulations, we show the modeling algorithm described in this talk achieves the goals of attaining accuracy and efficiency while producing a modeling scheme general enough for modeling a wide array of specialized neuronal dynamics.
Victor Barranca
Rensselaer Polytechnic Institute
[email protected]

Gregor Kovacic
Rensselaer Polytechnic Inst
Dept of Mathematical Sciences
[email protected]

David Cai
New York University
Courant Institute
[email protected]
PP1
Propagating Arbitrary Uncertainties Through Models via the Probabilistic Collocation Method

Nonintrusive methods for propagating uncertainty in simulations have been introduced in recent years. Some of these methods use polynomial chaos expansions, which can be used with many standard statistical distributions based on the Askey scheme. This work describes an efficient method for propagating arbitrary distributions through forward models based on application of the standard recurrence relation for generating orthogonal polynomials. The method is demonstrated in the context of eddy-current nondestructive evaluation.
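A sketch of the recurrence-based construction, assuming the arbitrary input distribution has been discretized on a grid: the Stieltjes procedure yields the three-term recurrence coefficients, and the eigenvalues of the associated Jacobi matrix (Golub-Welsch) give the collocation points. A uniform input is used here only so the result can be checked against the known Gauss-Legendre points:

```python
import numpy as np

def stieltjes(x, w, n):
    """Three-term recurrence coefficients (alpha, beta) for polynomials
    orthogonal w.r.t. the discretized measure sum_i w[i]*delta(x - x[i])."""
    alpha, beta = np.zeros(n), np.zeros(n)
    pi_prev, pi = np.zeros_like(x), np.ones_like(x)
    nrm_prev = 1.0
    for k in range(n):
        nrm = np.sum(w * pi * pi)
        alpha[k] = np.sum(w * x * pi * pi) / nrm
        beta[k] = nrm if k == 0 else nrm / nrm_prev
        pi_prev, pi = pi, (x - alpha[k]) * pi - beta[k] * pi_prev
        nrm_prev = nrm
    return alpha, beta

def gauss_nodes(alpha, beta):
    """Collocation points = eigenvalues of the Jacobi matrix (Golub-Welsch)."""
    off = np.sqrt(beta[1:])
    J = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(J))

# "arbitrary" input distribution, here uniform on [-1, 1] for checking,
# discretized with a midpoint rule
m = 2000
xg = -1.0 + (np.arange(m) + 0.5) * (2.0 / m)
wg = np.full(m, 1.0 / m)        # density 1/2 times cell width 2/m
a, b = stieltjes(xg, wg, 3)
nodes = gauss_nodes(a, b)
print(nodes)    # near -sqrt(3/5), 0, sqrt(3/5): the Gauss-Legendre points
```

For a genuinely arbitrary distribution one would replace the grid and weights with a discretization (or samples) of that distribution; the procedure itself is unchanged.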
Matthew Cherry
University of Dayton Research Institute
[email protected]

Jeremy Knopp
Air Force Research Laboratory
[email protected]
PP1
A Predictive Model for Geographic Statistical Data
Any planar map may be transformed into a graph: if we consider each country to be represented by a vertex (or node), adjacent countries are joined by an edge. To consider how trends migrate across boundaries, we obtain relevant measures of the statistic we want to consider, namely the index of prevalence and the index of incidence. We define a cycle by a given unit of time, usually a year. We then propose various alternate equations whereby, by parametrizing variables such as population size, birth rate, death rate, and rate of immigration/emigration, we calculate a new index of prevalence or index of incidence for the next cycle. For a given data set, each statistic we consider may propagate by a different equation and/or a different set of parameters; this will be determined empirically. What we are proposing is, technically, to model how a discrete stochastic process propagates geographically, according to geographical proximity. Very often, statistics that depend on geographical proximity are tabulated by variables that do not, i.e., alphabetically. Such a predictive model would be relevant in areas such as public health, and/or crime mapping for law enforcement purposes. We present an application using a GIS (geographic information system).
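The proposed cycle update can be sketched as diffusion of a prevalence index over the adjacency graph. The four-region map, growth rate, and spillover rate below are hypothetical placeholders for the empirically fitted quantities:

```python
import numpy as np

# hypothetical 4-region planar map as a graph: adjacency from shared borders
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

prevalence = np.array([0.10, 0.0, 0.0, 0.0])   # index of prevalence per region

def step(prev, local_growth=0.05, spillover=0.02):
    """One cycle (e.g. one year): local change plus spillover along edges.
    Both rates are illustrative stand-ins for empirically fitted parameters."""
    new = prev + local_growth * prev + spillover * (A @ prev)
    return np.clip(new, 0.0, 1.0)

for _ in range(3):
    prevalence = step(prevalence)
print(prevalence)   # the trend reaches region 3 only via its neighbor, region 2
```

Different statistics would use different update equations or parameter sets, as the abstract proposes; only the graph structure is shared.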
Jorge Diaz-Castro
University of P.R. School of Law
University of P.R. at Rio Piedras
[email protected]
PP1
A Stochastic Collocation Method Combined with a Reduced Basis Method to Compute Uncertainties in the SAR Induced by a Mobile Phone

A reduced basis method is introduced to deal with a stochastic problem in a numerical dosimetry application in which the field solutions are computed using an iterative solver. More precisely, the computations already performed are used to build an initial guess for the iterative solver. It is shown that this approach significantly reduces the computational cost.
Mohammed Amine Drissaoui
Laboratoire Ampere
[email protected]
PP1
Uncertainty Quantification for Hydromechanical Coupling in Three-Dimensional Discrete Fracture Networks

Fractures and fracture networks are the principal pathways for migration of water, heat and mass in geothermal systems, oil/gas reservoirs, and leakage of sequestered CO2. We investigate the impact of parameter uncertainties of the PDFs that characterize discrete fracture networks on the hydromechanical response of the system. Numerical results for the first, second and third moments, normalized to a base-case scenario, are presented and integrated into a probabilistic risk assessment framework. (Prepared by LLNL under Contract DE-AC52-07NA27344)
Souheil M. Ezzedine, Frederick Ryerson
LLNL
[email protected], [email protected]
PP1
Random and Regular Dynamics of Stochastically Driven Neuronal Networks

Dynamical properties of, and uncertainty quantification in, integrate-and-fire neuronal networks with multiple time scales of excitatory and inhibitory neuronal conductances driven by random Poisson trains of external spikes will be discussed. Both the asynchronous regime, in which the network spikes arrive at completely random times, and the synchronous regime, in which network spikes arrive within periodically repeating, well-separated time periods even though individual neurons spike randomly, will be presented.
Pamela B. Fuller
Rensselaer Polytechnic Institute
[email protected]
PP1
Calibration of Computer Models for Radiative Shock Experiments

POLAR experiments aim to mimic astrophysical accretion shock formation in the laboratory using high-power laser facilities. The dynamics and the main physical properties of the radiative shock produced by the collision of the heated plasma with a solid obstacle have been characterised in recent experiments and compared to radiation hydrodynamic simulations. This poster will present the statistical method, based on Bayesian inference, used to calibrate the main unknown parameters of the simulation and to quantify the model uncertainty.
Jean Giorla
Commissariat a l'Energie Atomique
[email protected]
E. Falize, C. Busschaert, B. Loupias
CEA/DAM/DIF, F-91297, Arpajon, France
[email protected], [email protected], [email protected]

M. Koenig, A. Ravasio, A. Diziere
Laboratoire LULI, Ecole Polytechnique, Palaiseau, France
[email protected], [email protected], [email protected]

Josselin Garnier
Universite Paris Diderot
[email protected]

C. Michaut
LUTH, Observatoire de Paris, Meudon, France
[email protected]
PP1
Regional Climate Model Ensembles and Uncertainty Quantification

One goal of the North American Regional Climate Change Assessment Program (NARCCAP) is to better understand regional climate model output and the impact of the boundary conditions provided by the global climate model, the regional climate model itself, and the interaction between the two. This poster will outline a statistical framework for quantifying temperature output and the associated uncertainty both spatially and temporally. The aim is to understand the contributions of each model type and how those contributions vary over space.
Tamara A. Greasby
National Center for Atmospheric Research
[email protected]
PP1
Multiple Precision, Spatio-Temporal Computer Model Validation using Predictive Processes

The Lyon-Fedder-Mobarry (LFM) global magnetosphere model is a computer model used to study physical processes in the magnetosphere and ionosphere. Given a set of input values and solar wind data, the LFM model numerically solves the magnetohydrodynamic equations and outputs a spatio-temporal field of ionospheric energy. The LFM model has the benefit of being able to be run at multiple precisions (with higher precisions more closely representing true ionospheric conditions, at the cost of computation). This work focuses on relating the multiple-precision output from the LFM model to field observations of ionospheric energy. Of particular interest here are the input settings (and their uncertainties) of the LFM model which most closely match the field observations. A low-rank spatio-temporal statistical LFM model emulator is constructed using predictive processes. The emulator borrows strength across the multiple precisions of the LFM model to calibrate the input variables to field observations. In this way, the less computationally demanding yet lower-precision models are leveraged to provide estimates of the higher-precision LFM model output.
Matthew J. Heaton
National Center for Atmospheric Research
[email protected]

Stephan Sain
Institute for Mathematics Applied to Geosciences
National Center for Atmospheric Research
[email protected]

William Kleiber, Michael Wiltberger
National Center for Atmospheric Research
[email protected], [email protected]

Derek Bingham
Dept. of Statistics and Actuarial Science
Simon Fraser University
[email protected]

Shane Reese
Brigham Young University
[email protected]
PP1
Variance-Based Sensitivity Analysis of Epidemic Size to Network Structure

I demonstrate the use of global sensitivity analysis to study the dependence of epidemic growth on the structure of the underlying network. These networks model the social interactions and migration patterns of individuals susceptible to a disease. Variance-based sensitivity indices are shown to indicate quantities in the structure of these networks that determine the behaviour of epidemics.
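First-order variance-based indices of this kind can be estimated with a pick-freeze (Saltelli-type) scheme. The toy linear response below stands in for the epidemic-size map, so the exact indices are known for checking:

```python
import numpy as np

rng = np.random.default_rng(0)

def sobol_first_order(f, d, n=500_000):
    """First-order variance-based indices S_i on [0,1]^d via the
    pick-freeze (Saltelli-type) estimator."""
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # resample only input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# toy response: input 0 dominates, input 2 is inert (exact S = 16/17, 1/17, 0)
f = lambda x: 4.0 * x[:, 0] + x[:, 1]
S = sobol_first_order(f, 3)
print(S)
```

An index near zero (input 2 here) flags a structural quantity the epidemic outcome does not depend on, which is exactly the screening the abstract describes.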
Kyle S. Hickmann, James (Mac) Hyman, John E. Ortmann
Tulane University
[email protected], [email protected], [email protected]
PP1
Autoregressive Model for Real-Time Weather Prediction and Detecting Climate Sensitivity

We propose reduced stochastic models in which nonlinear terms are replaced by a linear autoregressive stochastic model in Fourier space, for filtering and for climate modeling. Using the Lorenz-96 model as a test bed, we show that this reduced filtering strategy is computationally cheap and provides more accurate filtered solutions than current methods. The potential application of this autoregressive model to study climate sensitivity using the Fluctuation-Dissipation Theorem (FDT) is also addressed.
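The core fitting step, estimating an autoregressive coefficient and noise level for a single mode by least squares, can be sketched on synthetic data with known parameters. This illustrates the AR surrogate idea on a scalar real-valued series, not the authors' Lorenz-96 pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic scalar "mode" with known AR(1) structure, standing in for one
# Fourier mode of the reduced model
phi_true, sigma_true = 0.9, 0.5
u = np.zeros(50_000)
for t in range(1, u.size):
    u[t] = phi_true * u[t - 1] + sigma_true * rng.normal()

# least-squares fit of the autoregressive coefficient and the noise level
phi_hat = np.dot(u[:-1], u[1:]) / np.dot(u[:-1], u[:-1])
sigma_hat = (u[1:] - phi_hat * u[:-1]).std()
print(phi_hat, sigma_hat)   # close to 0.9 and 0.5
```

The fitted pair (phi_hat, sigma_hat) then defines the cheap linear stochastic surrogate used in place of the nonlinear terms.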
Emily L. Kang
Department of Mathematical Sciences
University of Cincinnati
[email protected]

John Harlim
Department of Mathematics
North Carolina State University
[email protected]
PP1
A Split-Step Adams-Moulton-Milstein Method for Stiff Stochastic Differential Equations
A split-step method for solving Ito stochastic differential equations (SDEs) is presented. The method is based on the Adams-Moulton formula for stiff ordinary differential equations and is modified for use on SDEs which are stiff in both the deterministic and stochastic components. Its order of convergence is proved and stability regions are displayed. Numerical results show the effectiveness of the method in the path-wise approximation of SDEs and its capability to deal with uncertainty.
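The Adams-Moulton-Milstein scheme itself is not reproduced here; as a simpler illustration of why split-step methods help with stiffness, a drift-implicit (split-step backward Euler) scheme stays stable on a stiff linear test SDE at a step size where the explicit Euler-Maruyama stability condition is violated. All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_step_backward_euler(lam, sigma, x0, T, n):
    """Path of the stiff linear test SDE dX = -lam*X dt + sigma*X dW.
    Stage 1 solves the stiff drift implicitly; stage 2 adds the noise."""
    dt = T / n
    x = x0
    for _ in range(n):
        x_star = x / (1.0 + lam * dt)                     # implicit drift stage
        x = x_star + sigma * x_star * rng.normal(0.0, np.sqrt(dt))
    return x

# lam*dt = 2.5 here, beyond the explicit Euler stability limit lam*dt < 2,
# yet the split-step iteration decays without blow-up
paths = np.array([split_step_backward_euler(50.0, 0.5, 1.0, 1.0, 20)
                  for _ in range(2000)])
print(np.mean(np.abs(paths)))
```

The same splitting idea, with a higher-order implicit formula for the drift and a Milstein correction for the noise, underlies schemes of the kind described in the abstract.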
Abdul M. Khaliq
Middle Tennessee State University
Department of Mathematical Sciences
[email protected]

David A. Voss
Western Illinois University
[email protected]
PP1
Computer Model Calibration with High- and Low-Resolution Model Output for Spatio-Temporal Data

We discuss a framework for computer model calibration based on low- and high-resolution model output for large spatio-temporal datasets. The physical model in use is a coupled magnetohydrodynamical model for solar storms occurring in Earth's upper atmosphere. The statistical model is informed by known physical equations and is tailored to large datasets. Given initial runs of the computer model, we propose a sequential design based on expected improvement.
William Kleiber
National Center for Atmospheric Research
[email protected]

Stephan Sain
Institute for Mathematics Applied to Geosciences
National Center for Atmospheric Research
[email protected]

Michael Wiltberger
National Center for Atmospheric Research
[email protected]

Derek Bingham
Dept. of Statistics and Actuarial Science
Simon Fraser University
[email protected]

Shane Reese
Brigham Young University
[email protected]
PP1
Goal-Oriented Statistical Inference
In many applications statistical inference is utilized for model identification before simulations are conducted to predict outputs of interest. Our goal-oriented approach to inference automatically targets the parameter modes required to make predictions. In the linear Gaussian case, the mean and covariance of the posterior predictive are obtained by inferring many fewer parameter modes than an unmodified approach. This method could pave the way for efficient Markov chain Monte Carlo in the high-dimensional parameter setting.
Chad E. [email protected]
Karen E. Willcox
Massachusetts Institute of Technology
[email protected]
PP1
Uncertainty in Model Selection in Remote Sensing
The model choice problem arises in the context of estimating the total amount of atmospheric aerosols using satellite measurements. Modeling uncertainty related to the unknown aerosol type is taken into account when assessing uncertainty in aerosol products using Bayesian model selection tools. In this work, uncertainty modeling comprises measurement uncertainty estimates, uncertainty in aerosol models, and uncertainty in model selection. Illustrating and quantifying modeling uncertainty gives valuable information to algorithm developers and users.
Anu Maatta, Marko Laine, Johanna Tamminen
Finnish Meteorological Institute
[email protected], [email protected], [email protected]
PP1
Multifidelity Approach to Variance Reduction in Monte Carlo Simulation
Engineers often have a suite of numerical models at their disposal to simulate a physical phenomenon. For uncertainty propagation, we present an approach that utilizes information from inexpensive, low-fidelity models to reduce the variance of the Monte Carlo estimator of an expensive, high-fidelity model. For a given computational budget, we demonstrate that our approach can produce more accurate estimation of the statistics of interest than regular Monte Carlo simulation.
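A common way to exploit a correlated low-fidelity model is as a control variate for the high-fidelity Monte Carlo estimator. The sketch below illustrates this general idea under assumed names; it is not necessarily the authors' exact estimator:

```python
import numpy as np

def mfmc_estimate(f_hi, f_lo, n_hi, n_lo, rng):
    """Control-variate estimate of E[f_hi(X)] for standard normal X,
    using the cheap model f_lo as the control variate."""
    x = rng.standard_normal(n_hi)            # shared inputs for both models
    y_hi, y_lo = f_hi(x), f_lo(x)
    # many cheap extra samples give an accurate estimate of E[f_lo]
    mu_lo = f_lo(rng.standard_normal(n_lo)).mean()
    # regression coefficient minimizing the variance of the combination
    alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)
    return y_hi.mean() + alpha * (mu_lo - y_lo.mean())
```

The variance reduction grows with the correlation between the two models, so even a crude low-fidelity model can pay for itself when it is cheap enough.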
Leo Ng, Karen E. Willcox
Massachusetts Institute of Technology
leo [email protected], [email protected]
PP1
Local Sensitivity Analysis of Stochastic Systems with Few Samples
We estimate local sensitivity of predicted quantities in stochastic models with respect to input parameters by minimizing the Lp residual of linear hyperplanes fit through sparse multidimensional output. We analyze how the number of function evaluations necessary for reliable estimates of the sensitivity indices depends upon the Lp norm (0 < p < 2) used and how the optimal Lp norm can reduce the number of function evaluations necessary for the sensitivity analysis. We apply this method to an epidemic model.
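An Lp hyperplane fit of this kind can be computed by iteratively reweighted least squares, with the fitted slopes serving as the local sensitivity estimates. This is a sketch with assumed names, not the authors' code:

```python
import numpy as np

def lp_hyperplane_fit(X, y, p=1.0, n_iter=50, eps=1e-6):
    """Fit y ~ X @ beta + c minimizing sum(|residual|**p) via
    iteratively reweighted least squares (IRLS). Returns the slopes
    beta (local sensitivities) and the intercept c."""
    A = np.column_stack([X, np.ones(len(y))])      # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # p = 2 starting point
    for _ in range(n_iter):
        w = (np.abs(y - A @ coef) + eps) ** (p - 2)  # IRLS weights for Lp loss
        # weighted normal equations: (A^T W A) coef = A^T W y
        coef = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return coef[:-1], coef[-1]
```

For p near 1 the fit is robust to heavy-tailed scatter in the stochastic output, which is one reason a sub-quadratic norm can need fewer function evaluations.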
John E. Ortmann, James (Mac) Hyman, Kyle S. Hickmann
Tulane University
[email protected], [email protected], [email protected]
PP1
Worst Case Scenario Analysis Applied to Uncertainty Quantification in Electromagnetic Simulations
The input parameters of models used for the simulation of technical devices exhibit uncertainties, e.g., due to the manufacturing process. Whenever their probability distribution is unknown, a worst case scenario analysis [Babuska et al. (2005)] is appealing, potentially requiring little numerical effort. Based on sensitivity analysis techniques, the quantification of uncertainty due to variations in geometry, material distribution and sources will be discussed for electromagnetic problems within the finite element method.
Ulrich Roemer
Technische Universitaet Darmstadt
Institut fuer Theorie Elektromagnetischer Felder
[email protected]

Stephan Koch
Institut fuer Theorie Elektromagnetischer Felder
Technische Universitaet Darmstadt
[email protected]

Thomas Weiland
Institut fuer Theorie Elektromagnetischer Felder, TEMF
Technische Universitaet Darmstadt
[email protected]
PP1
Constructive and Destructive Correlation Dynamics in Simple Stochastic Swimmer Models
Micro-swimming organisms usually can be classified into two groups based on their method of propulsion: pushers and pullers. Experimental data as well as numerical simulations point towards much larger orientational correlations for suspensions of pushers compared to pullers. I will discuss an analytical mean field theory for a simplified swimmer model that captures two-body interactions and can help lead to an understanding of the above observations.
Kajetan Sikorski
Rensselaer Polytechnic Institute
Department of Mathematical Sciences
[email protected]
PP1
Preserving Positivity in PC Approximations via Weighted PC Expansions
The preservation of the positivity of the solution in dynamical systems is important in a wide range of applications. When working with polynomial chaos (PC) approximations, the positivity can be lost after the truncation of the PC expansion. We show that the introduction of weights in the PC approximations solves this problem for a wide range of polynomial systems. We present sufficient conditions for the weights and an algorithm that allows determining these weights.
Faidra Stavropoulou, Josef Obermaier
Helmholtz Zentrum Muenchen
[email protected], [email protected]
PP1
Visualization of Uncertainty: Standard Deviation and PDFs
Uncertainty can be expressed as an error bar (i.e., standard deviation) or a distribution. Except for the simplest cases of concurrent visualization of uncertainty and underlying value, uncertainty remains un-visualized. We present a method to display scalar uncertainty based on tessellating the field of uncertainty density into cells containing nearly equal uncertainty. We present a method for segmenting a field of node-based probability distributions into meaningful features. We use climate data to illustrate the concepts.
Richard A. Strelitz
Los Alamos National Laboratory
[email protected]
PP1
How Typical Is Solar Energy? A 6-Year Evaluation of Typical Meteorological Year Data (TMY3)
The US solar industry makes bets on system performance using NREL's typical meteorological year data (TMY3). TMY3 is a 1-year dataset of typical solar irradiance and other meteorological elements used for simulation of solar power production. Irradiance naturally varies from TMY3 temporally and geographically, thus creating risk. Using data from NOAA's Surface Radiation Network (SurfRad), 6 years of irradiance observations were analyzed. Statistically comparing SurfRad against TMY3 quantified temporal and geographic uncertainty in typical solar irradiance.
Matthew K. Williams, Shawn Kerrigan
Locus Energy
[email protected], [email protected]
PP1
Second-Order Probability for Error Analysis in Numerical Computation
It is difficult to quantify the uncertainty of numerical computation because computational errors are deeply affected by many epistemic uncertainty variables. The best-known methods for estimating computational error are Richardson extrapolation (RE) and generalized RE. However, these only estimate discretization error. In this paper we propose a second-order probability method for error analysis in numerical computation. This method can provide integrated error analysis for all error sources and quantify the measurement uncertainty of numerical computation error.
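For context, classical Richardson extrapolation estimates discretization error by comparing a quantity computed at two resolutions, assuming a known leading error order. A minimal sketch with illustrative names:

```python
def richardson(f, h, order=2, ratio=2.0):
    """Estimate the discretization error of an order-`order` approximation
    f(h) by comparing two resolutions (classical Richardson extrapolation;
    generalized RE additionally estimates the unknown order)."""
    coarse, fine = f(h), f(h / ratio)
    # assuming f(h) = exact + C*h**order, eliminate the leading error term
    err_fine = (fine - coarse) / (ratio**order - 1.0)
    return fine + err_fine, err_fine   # (extrapolated value, error estimate)
```

As the abstract notes, this only accounts for discretization error; other sources (round-off, model form, input uncertainty) are outside its scope.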
Hua Chen, Yingyan Wu, Shudao Zhang, Haibing Zhou, Jun Xiong
Institute of Applied Physics and Computational Mathematics
chen [email protected], wu [email protected], zhang [email protected], zhou [email protected], xiong [email protected]