
A seamless approach towards stochastic

modeling: Sparse grid collocation and data

driven input models

Baskar Ganapathysubramanian1 and Nicholas Zabaras

Materials Process Design and Control Laboratory, Sibley School of Mechanical and

Aerospace Engineering, 188 Frank H.T. Rhodes Hall, Cornell University, Ithaca,

NY 14853-3801, USA

Abstract

Many physical systems of fundamental and industrial importance are significantly affected by the underlying fluctuations/variations in boundary and initial conditions as well as variabilities in operating and surrounding conditions. There has been increasing interest in analyzing and quantifying the effects of uncertain inputs in the solution of partial differential equations that describe these physical phenomena. Such analysis naturally leads to a rigorous methodology to design/control physical processes in the presence of multiple sources of uncertainty. A general application of these ideas to many significant problems in engineering is mainly limited by two issues. The first is the significant effort required to convert complex deterministic software/legacy codes into their stochastic counterparts. The second bottleneck to the utility of stochastic modelling is the construction of realistic, viable models of the input variability. This work attempts to demystify stochastic modelling by providing easy-to-implement strategies to address these two issues. In the first part of the paper, strategies to construct realistic input models that encode the variabilities in initial and boundary conditions as well as other parameters are provided. In the second part of the paper, we review recent advances in stochastic modeling and provide a road map to trivially convert any deterministic code into its stochastic counterpart. Several illustrative examples showcasing the ease of converting deterministic codes to stochastic codes are provided.

Key words: Stochastic partial differential equations; Deterministic simulators; Random processes; Stochastic collocation; Data driven models; Sparse grids; Random heterogeneous media; Microstructures; Model reduction

1 Corresponding author. Email: (bg74,zabaras)@cornell.edu, URL: http://mpdc.mae.cornell.edu/

Preprint submitted to Elsevier, 10 December 2007

1 Introduction

With the rapid advances in computational power and easier access to high-performance computing platforms, it has now become possible to computationally investigate and design realistic multiscale, multiphysics phenomena. As a direct consequence of this computational ability, there has been growing awareness that the tightly coupled and highly nonlinear systems (that such problems are composed of) are affected to a significant extent by the inherent (usually unresolvable) uncertainties in material properties, fluctuations in operating conditions, and system characteristics. The scale at which these phenomena occur can range from the micro-scale (MEMS devices) and macro-scale (devices/components made from polycrystalline or functionally graded materials) to the geological scale (geothermal energy systems).

A good example of how the underlying variability affects performance is in Micro-Electro-Mechanical Systems (MEMS). The material properties and the geometric parameters specifying MEMS devices play a significant part in determining their performance. Commercial feasibility requires using low-cost manufacturing processes to fabricate these MEMS devices. This often results in significant uncertainties in these parameters, which leads to large variations in the device performance. To accurately predict and subsequently design/tailor the performance of such systems, it becomes essential to include the effects of these input uncertainties in the system and understand how they propagate and alter the final solution.

Another class of problems that illustrates the necessity of incorporating the effects of uncertainty (in the lower scales) is the design and analysis of components made of polycrystalline, functionally graded and other heterogeneous materials. Most engineering devices involve the use of such multi-component materials, which constitute a major element in structural, transport, and chemical engineering applications. They are ubiquitous, entering wherever there is a need for cheap, resilient and strong materials. The thermal and elastic properties of such materials (particularly polycrystals) are highly anisotropic and heterogeneous, depending on the local microstructure. Experimental evidence has shown that microstructural variability in polycrystalline materials can have a significant impact on the variability of macro-properties such as strength or stiffness, as well as on the performance of devices made of random heterogeneous media. Nevertheless, there has been no dedicated effort in the engineering community to incorporate the effect of such microstructural uncertainty in process or device design, even though it is well known that most performance-related failures of critical components are directly related to rare events (tails of the PDF) of microstructural phenomena [1-6]. In addition, no predictive modeling or robust design of devices made of heterogeneous media (MEMS, energetic materials, polycrystals, etc.) is practically feasible without accounting for microstructural uncertainty.

Sources of uncertainties: The above two examples clearly underscore the necessity of incorporating the effects of uncertainty in property variation and operating conditions during the analysis and design of components and devices. This necessity of understanding uncertainty effects extends to almost all complex systems involving multiple coupled physical phenomena. Familiar systems include the analysis of thermal transport through devices (nozzle flaps, gears, etc.) that are composed of polycrystals and/or functionally graded materials, hydrodynamic transport through porous media, and chemical unit operations (e.g. filtration, packed columns, fluidization, sedimentation and distillation operations). Other sources of uncertainty include errors/fluctuations/variations in input constitutive relations and material properties, as well as initial/boundary conditions that are used in the model of the system being analyzed. These inputs are usually available or derived from experimental data. The presence of uncertainties/perturbations in experimental studies implies that these input parameters have some inherent uncertainties. The uncertainties in the physical model may also be due to inaccuracies or indeterminacies of the initial conditions, arising from variation in experimental data or from variability in the operating environment. Uncertainties may also creep in due to issues in describing the physical system, caused by errors in geometry, roughness, and variable boundary conditions. Uncertainties could also arise from the mathematical representation of the physical system; examples include errors due to approximate mathematical models (e.g. linearization of non-linear equations) and discretization errors.

Ingredients for stochastic analysis: There are two key ingredients for constructing a framework to analyze the effects of various sources of uncertainty in a system.

• The first is a mathematically rigorous framework that can be used to define the stochastic differential equations representing the systems, along with a viable solution strategy for the same. The stochasticity (i.e. inclusion of the effects of uncertainty) in this set of differential equations enters as uncertain boundary and initial conditions, variations in properties, fluctuations and thermal perturbations, or more generally as noise. We denote these as the input uncertainties.

• The second ingredient is a set of techniques to construct usable, realistic models of these input uncertainties. These models should utilize available experimental information and/or expert knowledge. This is a crucial (yet understated) issue in stochastic modeling. It is necessary to provide meaningful/realistic models of the input uncertainty to draw any meaningful conclusions from the resulting solutions. For instance, trying to account for the variability in room temperature (in a simple heat-transfer-in-a-room problem) using an input model that allows the room temperature to vary from -100 °C to +100 °C is a valid albeit probably useless model.

There are various frameworks that have been developed in recent years for incorporating the effects of uncertainties in various systems. We do not intend to review them here, though we will briefly gloss over the key aspects of most of them in Section 4. The interested reader is referred to [7-14] and the references therein for a taste of stochastic analysis. In the opinion of the authors, the sparse grid collocation approach [14-16] to solving differential equations is probably the easiest to implement and utilize. The key advantages that this technique offers are:

• Relies entirely on multiple calls to deterministic simulators, thus removing the necessity of programming and validating new stochastic codes. This also provides the ability to utilize complex codes and legacy codes that cannot be easily reprogrammed to include stochastics.

• Allows the incorporation of multiple sources of uncertainty in a very straightforward manner.

• Provides a high degree of scalability. Since the multiple calls to the deterministic simulators are all independent of each other, this part of the framework can be made embarrassingly parallel. The only sequential part of the framework is the post-processing step that constructs the stochastic solution and computes the relevant statistics of the solution.

In the next section we go over some necessary basic mathematical preliminaries before discussing the input model generation strategies and the stochastic sparse grid collocation approach to solving stochastic partial differential equations.

2 Some definitions and mathematical preliminaries

For the sake of clarity, let us denote the variable of interest as θ. Denote by D a region in R^nsd, the domain of interest, where nsd = 1, 2, 3 is the number of space dimensions over which D is defined. Denote by T ≡ [0, tmax] the time domain of interest. Denote the boundary of D as ∂D. Denote the set of governing (deterministic) differential equations defining the evolution of θ as

B(θ : x, t, q, α) = 0. (1)

The solution of the above set of equations results in a function θ : D × T → R. In the above equation, q and α represent the input conditions, boundary conditions, material parameters as well as any constitutive relations that the physical system may require. For instance, in the context of thermal problems, q, α could represent the heat flux and the thermal conductivity of the system, while in the context of elastic deformation problems, q, α could represent the displacement boundary conditions and the elastic tensor. In conventional deterministic analysis one is given values or a deterministic function representation for q, α and subsequently solves for the evolution of θ as a function of these input conditions. As emphasized earlier, in many cases these input parameters have some uncertainty associated with them. This brings up the necessity of stochastic analysis.

Suppose, without loss of generality, that the input quantity α is in fact uncertain. Assume also, for the sake of clarity, that α is a scalar. Denote by Ωα the space of possible values that α can take. Now, Ωα can be a set of discrete points {α1, . . . , αN} or it can be a continuous range of values α ∈ [a, b]. To rigorously define this notion of variability, the notion of a σ-algebra is introduced. A σ-algebra over a set Ωα is a nonempty collection F of subsets of Ωα that is closed under complementation and countable unions of its members. Intuitively, a σ-algebra of a set/region is a smaller collection of subsets of the given set allowing countably infinite operations/subsets. The use of σ-algebras allows us to restrict our attention to a smaller, and usually more useful, collection of subsets of a given set. The main use of σ-algebras is in the definition of measures on Ωα; this is one of the cornerstones of probability theory (the interested reader is referred to [17] for a gentle introduction to σ-algebras in probability theory). The measure that we are usually interested in is the probability measure P, i.e. the probability of a particular subset in the σ-algebra occurring; that is, the function/measure P : F → [0, 1]. With these definitions, we are now ready to define the probability space in which the variations of the random input α are defined. Let (Ωα, F, P) be a probability space, where Ωα is the space of basic outcomes, F is the minimal σ-algebra of the subsets of Ωα, and P is the probability measure on F. The σ-algebra can be viewed as a collection of all possible events that can be derived from the basic outcomes in Ωα and have a well defined probability with respect to P. The notation ωα will be used to refer to the basic outcomes in the sample space, i.e., ωα ∈ Ωα.

From a deterministic version of α we now have a probabilistic definition α(ωα). Note that α(ωα) is still an abstract quantity that lies in an abstract probability space (Ωα, F, P). In the next section, we will look at different ways of representing this random process for straightforward use in a computational framework. This probabilistic definition of α(ωα) necessitates a redefinition of the governing equation. This can be done naturally as follows: define the evolution of θ as

B(θ : x, t, q, α(ωα)) = 0. (2)

Now, the dependent variable θ depends not only on (x, t) but also on the space Ωα. The solution of the above set of equations results in a function θ : D × T × Ωα → R.


3 Representing input randomness

The first step towards solving Eq. (2) is some form of numerical representation of the input random processes. A finite-dimensional representation of the abstract probability space is a necessary prerequisite for developing any viable framework to solve Eq. (2). In this section, we discuss various techniques that can be utilized to represent the input uncertainties, with an emphasis on utilizing experimental data to construct these representations. Note that the input parameters to Eq. (2) can be either scalars or functions.

3.1 Input parameters as random variables

Consider the case when the input parameter α is a scalar. The problem of interest is now to represent the uncertainty of α in a numerically feasible way that utilizes experimental data. Experimental data, in this case, refers to a set of realizations of α, {α1, . . . , αM}. The most straightforward, intuitive way to represent this variability is to construct a histogram of the variability and associate with it a random variable, ξα, with a PDF proportional to the histogram. Thus, each input parameter is represented using a random variable. The governing stochastic differential equation now becomes

B(θ : x, t, ξα) = 0, (3)

where ξα is the random variable which represents the variability in the input parameter α. In case there are multiple (uncertain) scalar parameters q, α, each of these input parameters should be represented as an appropriate random variable ξq, ξα. Assume that there are n = 2 scalar input parameters; the governing stochastic differential equation now becomes

B(θ : x, t, ξq, ξα) = 0. (4)

Thus, from a deterministic dependent variable, θ, defined in nsd + 1 dimensions (nsd space dimensions and 1 dimension representing time), we now have to solve for the stochastic dependent variable, θ, defined in nsd + 1 + n dimensions (nsd space dimensions, 1 dimension representing time and n stochastic dimensions representing the variability in the input parameters).
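The histogram construction above can be sketched in a few lines of Python. The snippet below is a minimal illustration (the function name, bin count and the synthetic "measured" data are hypothetical): it builds an empirical CDF from observed realizations of a scalar input parameter and draws new samples of ξα by inverse-CDF sampling.

```python
import numpy as np

def empirical_sampler(samples, bins=20):
    """Build an inverse-CDF sampler from observed realizations of a
    scalar input parameter (the histogram idea of Section 3.1)."""
    counts, edges = np.histogram(samples, bins=bins, density=True)
    widths = np.diff(edges)
    cdf = np.concatenate([[0.0], np.cumsum(counts * widths)])
    cdf /= cdf[-1]                       # guard against round-off
    rng = np.random.default_rng(0)

    def draw(n):
        u = rng.uniform(size=n)          # uniform variates on [0, 1]
        return np.interp(u, cdf, edges)  # invert the piecewise-linear CDF

    return draw

# Hypothetical measurements of an uncertain parameter alpha
measured = np.random.default_rng(1).normal(loc=10.0, scale=0.5, size=500)
draw = empirical_sampler(measured)
xi_alpha = draw(10000)                   # realizations of the random variable
```

Each call to `draw` produces realizations of ξα whose distribution approximates the histogram of the data; a smoother alternative would be a kernel density estimate.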


3.2 Random field as a collection of random variables

In some cases, the input parameters are functions showing variability in space. These could be, for instance, imposed boundary conditions or property variability in heterogeneous media. Assume (for clarity) that q(x) is one such input parameter that is uncertain. One simple way of representing the uncertainty of this spatially varying random process (also called a random field) is to consider q(x) as a collection of scalar values at different spatial locations. That is, the function q(x) is approximated as a collection of points q′ = {q(x1), q(x2), . . . , q(xk)}. Each of these individual parameters q(xi) shows some uncertainty and hence can be represented as an appropriate random variable. Note that the representation of the random field q(x), x ∈ D, as a vector of points q′ = {q(x1), q(x2), . . . , q(xk)} is an approximation that is equivalent to q(x) in the limit as k → ∞. For some finite value of k, the above procedure of approximating a random field as a set of independent random variables results in an nsd + 1 + k dimensional stochastic differential equation for the evolution of the dependent variable θ. The governing stochastic differential equation is written as:

B(θ : x, t, ξq1 , . . . , ξqk) = 0. (5)

One problem with the above strategy is that the number of random variables (the stochastic dimensionality) quickly becomes very large and can pose significant computational problems.
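As a toy illustration of this pointwise representation (the grid size, bounds, and the choice of independent uniform variability are assumptions made for the sketch), consider a 1D field q(x) sampled at k locations:

```python
import numpy as np

# A random field q(x) on [0, 1] approximated by its values at k grid
# points, each treated as an independent uniform random variable
# q(x_i) ~ U(q_lo, q_hi): one realization is a point in the
# k-dimensional stochastic space (xi_q1, ..., xi_qk).
k = 8                                  # number of spatial sample points
x = np.linspace(0.0, 1.0, k)
q_lo, q_hi = 0.9, 1.1                  # assumed per-point variability
rng = np.random.default_rng(42)

def sample_field():
    """Draw one realization of the discretized field."""
    return rng.uniform(q_lo, q_hi, size=k)

realization = sample_field()
```

Each additional grid point adds one stochastic dimension, which is exactly the dimensionality problem noted above.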

3.3 The Karhunen-Loeve expansion

An alternate approach to representing random fields is to recognize that such fields usually have some amount of spatial correlation. There is a rich literature on techniques to extract/fit correlations for these random fields from input experimental/numerical data; this is an area of intense ongoing research [19]. The assumption of the existence of some correlation structure of the random field provides an elegant way of representing the random field as a finite set of uncorrelated random variables. This technique is called the Karhunen-Loeve expansion.

The Karhunen-Loeve expansion is essentially founded on the idea that the space of all second-order random processes (i.e. processes with a finite second moment) can be defined as a Hilbert space equipped with the mean-square norm. Denote this Hilbert space as L2(Ω, F, P). Since this space is complete,


any random process q(x, ω) ∈ L2(Ω, F, P) can be expressed as a summation

q(x, ω) = ∑_{i=1}^{∞} qi(x)ζi(ω), (6)

where {ζi(ω)}_{i=1}^{∞} forms a basis of the space and {qi}_{i=1}^{∞} are the projections of the field onto the basis. Eq. (6) involves an infinite summation and hence is computationally intractable. The Karhunen-Loeve expansion (KLE) considers finite-dimensional subspaces of L2(Ω, F, P) that best represent the uncertainty in q(x, ω). The KLE for a stochastic process q(x, t, ω) is based on the spectral decomposition of its covariance function Rhh(y1, y2). Here, y1 and y2 denote the spatio-temporal coordinates (x1, t1) and (x2, t2), respectively. By definition, the covariance function is symmetric and positive definite and has real positive eigenvalues. Further, all its eigenfunctions are orthogonal and span the space to which q(x, t, ω) belongs. The KL expansion can be written as

q(x, t, ω) = E(q(x, t, ω)) + ∑_{i=0}^{∞} √λi fi(x, t)ξi(ω), (7)

where E(q(x, t, ω)) denotes the mean of the process and {ξi(ω)}_{i=0}^{∞} forms a set of uncorrelated random variables whose distribution has to be determined [18]. {(λi, fi(x, t))}_{i=0}^{∞} form the eigenpairs of the covariance function:

∫_{D×T} Rhh(y1, y2)fi(y2) dy2 = λi fi(y1). (8)

The chief characteristic of the KLE is that the spatio-temporal randomness has been decomposed into a set of deterministic functions multiplying random variables. These deterministic functions can also be thought of as representing the scales of fluctuation of the process. The KLE is mean-square convergent to the original process q(x, t, ω). More interestingly, the first few terms of this expansion represent most of the process to arbitrary accuracy. The expansion in Eq. (7) is typically truncated to a finite number of summation terms, k. Using this technique, the representation becomes

B(θ : x, t, ξq1 , . . . , ξqk) = 0. (9)
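A numerical sketch of the truncated KLE (the exponential covariance kernel, the correlation length, and the choice of Gaussian ξi are illustrative assumptions, not prescriptions from the text): discretize the eigenproblem of Eq. (8) on a grid, solve the resulting matrix eigenproblem, and sample the k-term expansion of Eq. (7).

```python
import numpy as np

n, L, k = 200, 0.3, 5                  # grid size, correlation length, KL terms
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
# Assumed covariance kernel: R(x1, x2) = exp(-|x1 - x2| / L)
R = np.exp(-np.abs(x[:, None] - x[None, :]) / L)

# Nystrom discretization of Eq. (8): (R dx) f = lambda f
lam, f = np.linalg.eigh(R * dx)
order = np.argsort(lam)[::-1]          # sort eigenpairs, largest first
lam, f = lam[order], f[:, order]
f /= np.sqrt(dx)                       # normalize eigenfunctions in L2([0, 1])

def sample_kle(rng):
    """Draw one realization of the zero-mean field from k KL modes."""
    xi = rng.standard_normal(k)        # uncorrelated (here Gaussian) xi_i
    return f[:, :k] @ (np.sqrt(lam[:k]) * xi)

field = sample_kle(np.random.default_rng(0))
```

The eigenvalue decay dictates how many terms k are needed; for strongly correlated fields a handful of modes captures most of the variance.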

3.4 Model reduction as a tool for constructing input models

In many cases, one can only obtain limited experimental information about the variation in a particular property/condition that is essential to analyzing a particular system. This is particularly apparent when one is dealing with systems that involve heterogeneous or anisotropic media or forcing conditions. Examples of these systems include (but are not limited to) polycrystalline materials, functionally-graded composites, heterogeneous porous media, multi-phase composites, and structural materials like cement and mortar. The thermal, mechanical (and chemical) properties of these media/materials depend on the local distribution of materials. In such analyses, the only information that is usually available experimentally to quantify these variations is statistical correlations (like the mean distribution of material, some snapshots of the structure at a few locations or some higher-order correlation functions, Fig. 1). This leads to an analysis of the problem assuming that the properties are random fields satisfying some experimentally determined statistics. In the context of a computational framework to perform such analysis, one needs to construct a stochastic input model that encodes this limited information.

Physically meaningful/useful solutions can be realized only if property statistics are experimentally obtained and used in generating the stochastic model. Usually, experimental data regarding statistics and correlations of the property (or microstructure) are known (in the following, for the sake of consistency, we will use microstructure and property interchangeably). There may be infinitely many microstructures (or property maps) that satisfy these relations. Consequently, any of these microstructures is equally likely to be the microstructure under consideration. Hence we have to consider the variation of the properties (resulting from the possible variation in topology/microstructures) as a random field. The first step to including the effect of these random fields is to estimate a reliable descriptor of the same. For this descriptor to be useful in a computational framework, it must be a finite-dimensional function. The objective is then to construct low-dimensional functions that approximate this property variation. By sampling over this approximate space, we are essentially sampling over all possible, plausible microstructures (or any other property variations).

In our recent work in this area [20], a general framework to construct a low-dimensional stochastic model describing the variation in spatially varying quantities (based on limited statistical information) was developed. The methodology relies on a limited number of snapshots of possible property maps (or microstructures). These snapshots are either experimentally obtained or reconstructed using the available experimental statistical information. A model reduction scheme (based on Principal Component Analysis (PCA)) was formulated to represent this set of snapshots/property maps (which is a large-dimensional space) as a low-dimensional approximation of the space. A sub-space reduction strategy (see [20]) based on constraints that enforce the experimental statistics was used to construct the smallest low-dimensional representation of the variability. This basic idea is illustrated using the example of polycrystalline materials (this general concept can be applied to any property variation) in Fig. 2 using a trivial two-dimensional model. The image on the left shows a reduced-order model (the plane) that captures the variation in polycrystalline microstructures that satisfy an experimentally determined grain boundary distribution [20]. Each point on the linear manifold shown corresponds to a microstructure with a unique grain distribution. By sampling over this space, one essentially samples over the space of all allowable microstructures. The figure on the right shows a similar orientation distribution model. This methodology proved to be extremely effective in analyzing the effect of thermal property variations in two-phase microstructures.
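The PCA step can be illustrated with a small synthetic sketch (the snapshot data below are random and hypothetical, not the microstructures of [20]): the SVD of the centered snapshot matrix yields the dominant modes, and each snapshot is encoded by a few coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n_snap, n_pix, d = 40, 256, 3          # snapshots, pixels per map, reduced dim
# Synthetic snapshot set of exact rank d: each row is one "property map"
latent = rng.standard_normal((n_snap, d))
basis_true = rng.standard_normal((d, n_pix))
snapshots = latent @ basis_true

mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = Vt[:d]                         # dominant PCA modes
coeffs = (snapshots - mean) @ modes.T  # d-dimensional coordinates

# Any point in coefficient space maps back to a plausible property map
reconstructed = mean + coeffs @ modes
err = np.abs(reconstructed - snapshots).max()
```

Sampling the low-dimensional coefficient space (subject to constraints enforcing the experimental statistics, as in [20]) then generates plausible property maps.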

Following this, in [21], a more flexible non-linear method to encode the microstructural variability was developed. This framework is based on concepts borrowed from psychology [22,23], cognitive science [24,25], differential geometry [26-28] and graph theory [29,30]. The key mathematical concept on which the framework is constructed is the principle of 'manifold learning': given a finite set of unordered data points in a high-dimensional space, construct a low-dimensional parametrization of this dataset. The finite set of unordered data points corresponds to various (experimental or reconstructed) plausible realizations of the property map. In [21], we show that this set of unordered points (satisfying some experimental statistics) lies on a manifold embedded in a high-dimensional space. That is, all the possible variations of the quantity of interest (given a limited set of information about its variability, in terms of, say, some experimentally determined correlation statistics) lie on a curve in some high-dimensional space. An isometric transformation of the manifold into a low-dimensional, convex, connected set is performed. It is shown that this isometric transformation preserves the underlying geometry of the manifold. The isometric transformation of the manifold is constructed using ideas from graph theory, specifically by generating graph approximations to the geodesic distance on embedded manifolds [21]. Given only a finite amount of information about the property variability, this methodology recovers the geometry of the curve and its corresponding low-dimensional representation. Rigorous error bounds and convergence estimates of this parametric representation of the property variability are constructed [21]. This parametric space serves as an accurate, low-dimensional, data-driven representation of the property variation. The framework was shown to work exceedingly well in constructing models of various property variations. This methodology was used to construct low-dimensional input stochastic models of property variations for the analysis of two-phase microstructures [21].
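A toy version of this idea (not the algorithm of [21]; the half-circle data, neighbour count and one-dimensional embedding are assumptions made for illustration) can be sketched with an Isomap-style construction: approximate geodesic distances by shortest paths on a nearest-neighbour graph, then embed with classical multidimensional scaling (MDS).

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, np.pi, 120))
X = np.stack([np.cos(t), np.sin(t)], axis=1)  # points on an embedded curve

# 1. k-nearest-neighbour graph approximating geodesic distances
D = cdist(X, X)
k = 8
W = np.full_like(D, np.inf)                   # inf marks "no edge"
for i in range(len(X)):
    nbr = np.argsort(D[i])[: k + 1]           # self plus k neighbours
    W[i, nbr] = D[i, nbr]
W = np.minimum(W, W.T)                        # symmetrize the graph
G = shortest_path(W, method="D")              # graph geodesic distances

# 2. Classical MDS on the geodesic distance matrix
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
B = -0.5 * J @ (G ** 2) @ J
evals, evecs = np.linalg.eigh(B)
coord = evecs[:, -1] * np.sqrt(evals[-1])     # leading 1D embedding
```

The recovered coordinate tracks the arc-length parameter of the curve, i.e. the low-dimensional parametrization of the "manifold" of data points.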

Given limited information about the variability of a spatially varying parameter, the techniques discussed above construct a low-dimensional stochastic model, G, representing the variability as:

α(x, ω) = G(ξ1, . . . , ξk). (10)


The corresponding governing stochastic differential equation is

B(θ : x, t, ξq1 , . . . , ξqk) = 0. (11)

4 Solving the stochastic differential equation

In Section 2, a discussion of how the presence of uncertainties can be modelled in the system through reformulation of the governing equations as stochastic partial differential equations (SPDEs) was given. Section 3 provided various techniques to construct realistic input models that represent variations of the input parameters. In the current section, a brief review of the various options available to solve the resulting stochastic differential equations is given, followed by a discussion of the easy-to-implement sparse grid collocation strategy.

Solution techniques for SPDEs can be broadly classified into two approaches: statistical and non-statistical. Monte-Carlo simulation and its variants constitute the statistical approach to solving SPDEs. This strategy basically involves randomly sampling from the input models, running the deterministic code using these realizations of the input parameters, and averaging the results of multiple simulations. The major attraction of these methods is that their convergence rates do not depend on the number of independent random variables. Further, they are very easy to implement. The only requirement for computing uncertainties using the statistical approach is the availability of a working deterministic code. However, the statistical approach quickly becomes intractable for complex problems in multiple random dimensions. This is because the number of realizations required to acquire good statistics is usually quite large. Furthermore, the number of realizations changes with the variance of the input parameters, and the truncation errors in this inherently statistical method are hard to estimate. This has in part been alleviated by improved sampling techniques like Latin hypercube sampling and importance sampling, among others [31].
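The statistical approach can be written down in a few lines; the deterministic "solver" below (steady conduction through a slab, ΔT = qL/k) and the uniform input model are hypothetical stand-ins chosen to show that only a working deterministic code is required.

```python
import numpy as np

def deterministic_solver(k_th, q=100.0, L=0.01):
    """Stand-in deterministic code: temperature drop across a slab of
    thickness L carrying flux q, for one realization of conductivity k_th."""
    return q * L / k_th

rng = np.random.default_rng(0)
n_mc = 20000
k_samples = rng.uniform(0.8, 1.2, size=n_mc)   # input model for k_th
outputs = np.array([deterministic_solver(k) for k in k_samples])

mean = outputs.mean()                          # Monte-Carlo statistics
std = outputs.std()
```

The error of the estimated statistics decays as O(1/sqrt(n_mc)) regardless of the number of random dimensions, which is both the strength and, for tight tolerances, the weakness of the method.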

A non-statistical approach consists of approximating and then modelling the uncertainty in the system. One example of this approach is the perturbation method. Here, the random field is expanded about its mean in a truncated Taylor series. This method is usually restricted to small values of uncertainties. A similar method is to expand the inverse of the stochastic operator in a Neumann series. This method is again restricted to small uncertainties, and furthermore it is almost impossible to apply these methods to complex problems [7]. In the context of expanding the solution in terms of its statistical moments, there has been some recent work in reformulating the problem as a set of moment differential equations (MDEs). The closure equations for these MDEs are then derived from Taylor expansions of the input uncertainty as well as perturbation and asymptotic expansions. The random domain decomposition method [32-35] coupled with the MDE method has been shown to be successful in solving complex fluid flow problems in porous media having large input variances.

A more recent approach to modelling uncertainty is based on the spectral stochastic finite element method (SSFEM) [7]. In this method, the random field is discretized directly, i.e. uncertainty is treated as an additional dimension along with space and time, and a field variable is expanded along the uncertain dimension using suitable expansions. The basic idea is to project the dependent variables of the model onto a stochastic space spanned by a set of complete orthogonal polynomials. These polynomials are functions of a set of random variables ξ(θ), where θ is a realization of the random event space. This technique has been used with great success in modelling uncertainties in various systems [8-13,36-41]. Error bounds and convergence studies [42-46] have shown that these methods exhibit fast convergence rates with increasing orders of expansion. These convergence studies assume that the solutions are sufficiently smooth in the random space. Like the statistical methods, the SSFEM approach also reduces the problem to a set of deterministic equations. However, unlike the statistical methods, the resulting deterministic equations are often coupled. This coupled nature of the deterministic problems makes the solution of the stochastic problem extremely complex as the number of stochastic dimensions increases or the number of expansion terms is increased. Furthermore, extensive software development is required to convert a validated deterministic code into a viable stochastic code. In several cases, when the deterministic codes are themselves very complex, such alteration is impossible or ill-advised.

There have been recent efforts to couple the fast convergence of the Galerkin methods with the decoupled nature of Monte-Carlo sampling [47]. In [15], a methodology is proposed wherein the Galerkin approximation is used to model the physical space while a collocation scheme is used to sample the random space. A tensor product rule was used to interpolate the variables in stochastic space using products of one-dimensional interpolation functions (double orthogonal polynomials) based on Gauss points. Though this scheme effectively decoupled the deterministic system of equations, the number of realizations required to build the interpolation scheme increased as a power of the number of random dimensions (n_pt^N). In [16,14], the Smolyak algorithm is used to build sparse grid interpolants in high-dimensional space. The sparse grid collocation and cubature schemes have been well studied and utilized in different fields [48–52]. Using this method, interpolation schemes can be constructed with orders of magnitude reduction in the number of sampled points to give the same level of approximation (up to a logarithmic factor) as interpolation on a uniform grid. The major attraction of these methods is twofold: they result in completely decoupled deterministic equations, thus relying entirely on calls to the deterministic solver, and they have a reasonably fast convergence rate. In the remaining part of this section, we review the basic aspects of the stochastic sparse grid collocation strategy.

We are interested in solving the (nsd + 1 + N)-dimensional stochastic partial differential equation

B(θ : x, t, ξ_1, . . . , ξ_N) = 0.  (12)

Denote by ξ = (ξ_1, . . . , ξ_N) the space of variations in the input parameters and the range of variability of ξ by Γ ⊂ R^N.

4.1 Fundamentals of the collocation method

The basic idea of the collocation approach is to build an interpolation function for the dependent variables using their values at particular points in the stochastic space. The problem of interpolation can be stated as follows: Given a set of nodes Θ_N = {ξ_i}_{i=1}^{M} in the N-dimensional random space Γ and the smooth function f : R^N → R, find the polynomial If such that If(ξ_i) = f(ξ_i), ∀ i = 1, . . . , M.

The polynomial approximation If can be expressed using the Lagrange interpolation polynomials as follows:

If(ξ) = ∑_{k=1}^{M} f(ξ_k) L_k(ξ),  (13)

where L_i(ξ_j) = δ_ij. Now, once the interpolating polynomials have been generated using the nodes Θ_N, the value of the function at any point ξ ∈ Γ is approximately If(ξ).
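Equation (13) can be sketched in a few lines. The following Python fragment is illustrative only — the function f and the node count m are arbitrary choices, not from the paper — but it demonstrates the interpolation property If(ξ_i) = f(ξ_i) for a Lagrange interpolant built on Chebyshev extrema:

```python
import numpy as np

def lagrange_interpolant(nodes, values):
    """Return If(x) = sum_k f(xi_k) L_k(x), with L_i(xi_j) = delta_ij (Eq. 13)."""
    def If(x):
        total = 0.0
        for k, (xk, fk) in enumerate(zip(nodes, values)):
            # L_k(x) = prod_{j != k} (x - xi_j) / (xi_k - xi_j)
            Lk = np.prod([(x - xj) / (xk - xj)
                          for j, xj in enumerate(nodes) if j != k])
            total += fk * Lk
        return total
    return If

# Chebyshev extrema on [-1, 1] (a nested, near-optimal 1D node family)
m = 9
nodes = -np.cos(np.pi * np.arange(m) / (m - 1))
f = lambda xi: np.exp(-xi**2)          # a smooth stand-in "stochastic" response
If = lagrange_interpolant(nodes, f(nodes))

# interpolation property: If(xi_i) = f(xi_i) at every node
assert all(abs(If(x) - f(x)) < 1e-12 for x in nodes)
```

Between the nodes, If(ξ) approximates f(ξ) with an error that decays rapidly with m for smooth f, which is the behavior the error estimates of Section 4.4 quantify in the multi-dimensional setting.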

The Lagrange interpolated value of ξ, denoted by ξ̂, is:

ξ̂ = ∑_{k=1}^{M} ξ_k L_k(ξ).  (14)

Substituting this into the governing Eq. (12) gives

B(θ : x, t, ∑_{k=1}^{M} ξ_k L_k(ξ)) = 0.  (15)


The interpolation form of the representation immediately leads to M decoupled deterministic systems

B(θ : x, t, ξ_i) = 0,  i = 1, . . . , M.  (16)

The collocation method collapses the (nsd + 1 + N)-dimensional problem to solving M deterministic problems in nsd + 1 dimensions. The stochastic solution can be constructed as

θ(x, t, ξ) = ∑_{k=1}^{M} θ(x, t, ξ_k) L_k(ξ).  (17)

The statistics of the random solution can then be obtained by

⟨z^α(x)⟩ = ∑_{k=1}^{M} z^α(x, ξ_k) ∫_Γ L_k(ξ) ρ(ξ) dξ.  (18)

The first step in constructing a multi-dimensional interpolating function is to construct a one-dimensional one. Construction of the one-dimensional or univariate interpolant is a well studied problem [53]. The problem reduces to an optimal choice of the sampling points (or nodes), X, from which a Lagrange polynomial based interpolant is generated. One such node distribution is interpolation based at the Chebyshev extrema [52,53].

From univariate interpolation to multivariate interpolation

When one is dealing with multiple stochastic dimensions, it is straightforward to extend the interpolation functions developed in one dimension to multiple dimensions as simple tensor products. If u(ξ) is a function that has to be approximated in N-dimensional space, and (m_1, m_2, . . . , m_N) are the numbers of nodes used in the interpolation in the N dimensions, the full-tensor product interpolation formula is given as

I_N u(ξ) = (I^{i_1} ⊗ . . . ⊗ I^{i_N})(u)(ξ) = ∑_{j_1=1}^{m_1} . . . ∑_{j_N=1}^{m_N} u(ξ^{i_1}_{j_1}, . . . , ξ^{i_N}_{j_N}) · (L^{i_1}_{j_1} ⊗ . . . ⊗ L^{i_N}_{j_N}),  (19)

where I^{i_k} are the interpolation functions in the i_k direction and ξ^{i_k}_{j_m} is the j_m-th point in the k-th coordinate. Clearly, the above formula needs m_1 × . . . × m_N function evaluations at points sampled on a regular grid. In the simplest case of using only two points in each dimension, the total number of points required for a full-tensor product interpolation is M = 2^N. This number grows very quickly as the number of dimensions is increased. Thus one has to look at intelligent ways of sampling points in the regular grid described by the full-tensor product formula so as to reduce the number of function evaluations required. This immediately leads to sparse collocation methods that are briefly reviewed next.

4.2 Sparse grid collocation methods

For the univariate case, Gauss points and Chebyshev points have the least interpolation error (for polynomial approximations). In the case of multivariate interpolation, one feasible methodology that has been used is to construct interpolants and nodal points by tensor products of one-dimensional interpolants and nodal points. An obvious disadvantage of this strategy is that the number of points required increases combinatorially as the number of stochastic dimensions is increased.

The Smolyak algorithm provides a way to construct interpolation functions based on a minimal number of points in multi-dimensional space. Using Smolyak's method, univariate interpolation formulae are extended to the multivariate case by using tensor products in a special way. This provides an interpolation strategy with potentially orders of magnitude reduction in the number of support nodes required. The algorithm provides a linear combination of tensor products chosen in such a way that the interpolation property is conserved for higher dimensions.

4.3 Smolyak’s construction of sparse sets

Smolyak's algorithm provides a means of reducing the number of support nodes from the full-tensor product formula while maintaining the approximation quality of the interpolation formula up to a logarithmic factor. Consider the one-dimensional interpolation formula

U^m(f) = ∑_{j=1}^{m} f(x_j) L_j.  (20)

Let us denote the set of points used to interpolate the one-dimensional function by Θ(k). For instance, Θ(3) could represent the Chebyshev points that interpolate a third-order polynomial. The Smolyak algorithm constructs the sparse interpolant A_{q,N} (where N is the number of stochastic dimensions and q − N is the order of interpolation) using products of one-dimensional functions. A_{q,N} is given as [51,52]

A_{q,N}(f) = ∑_{q−N+1 ≤ |i| ≤ q} (−1)^{q−|i|} · (N−1 choose q−|i|) · (U^{i_1} ⊗ . . . ⊗ U^{i_N}),  (21)

with A_{N−1,N} = 0 and where i = (i_1, . . . , i_N) and |i| = i_1 + . . . + i_N. Here i_k can be thought of as the level of interpolation along the k-th direction. The Smolyak algorithm builds the interpolation function by adding a combination of one-dimensional functions of order i_k with the constraint that the sum total (|i| = i_1 + . . . + i_N) across all dimensions is between q − N + 1 and q. The structure of the algorithm becomes clearer when one considers the incremental interpolant, ∆^i, given by [48,50–52]

U^0 = 0,  ∆^i = U^i − U^{i−1}.  (22)

The Smolyak interpolation A_{q,N} is then given by

A_{q,N}(f) = ∑_{|i| ≤ q} (∆^{i_1} ⊗ . . . ⊗ ∆^{i_N})(f) = A_{q−1,N}(f) + ∑_{|i| = q} (∆^{i_1} ⊗ . . . ⊗ ∆^{i_N})(f).  (23)

To compute the interpolant A_{q,N}(f) from scratch, one needs to compute the function at the nodes covered by the sparse grid H_{q,N}:

H_{q,N} = ⋃_{q−N+1 ≤ |i| ≤ q} (Θ(i_1) × . . . × Θ(i_N)).  (24)

The construction of the algorithm allows one to utilize all the previous results generated to improve the interpolation (this is immediately obvious from Eq. (23)). By choosing appropriate points (like the Chebyshev and Gauss-Lobatto points) for interpolating the one-dimensional function, one can ensure that the sets of points Θ(i) are nested (Θ(i) ⊂ Θ(i+1)). To extend the interpolation from level i to i + 1, one only has to evaluate the function at the grid points that are unique to Θ(i+1), that is, at Θ^i_∆ = Θ^i \ Θ^{i−1}. Thus, to go from an order q − 1 interpolation to an order q interpolation in N dimensions, one only needs to evaluate the function at the differential nodes ∆H_{q,N} given by

∆H_{q,N} = ⋃_{|i| = q} (Θ^{i_1}_∆ ⊗ . . . ⊗ Θ^{i_N}_∆).  (25)
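A minimal sketch of Eq. (24) follows, assuming the standard nested Chebyshev (Clenshaw-Curtis type) one-dimensional sets with m_1 = 1 and m_i = 2^{i−1} + 1 points; the routine simply enumerates the admissible multi-indices and collects the union of the corresponding small tensor grids:

```python
import itertools
import numpy as np

def theta(i):
    """Nested 1D Chebyshev point sets Theta(i), with Theta(i) ⊂ Theta(i+1)."""
    if i == 1:
        return np.array([0.0])
    m = 2 ** (i - 1) + 1
    return -np.cos(np.pi * np.arange(m) / (m - 1))

def sparse_grid(q, N):
    """Nodes of H_{q,N} (Eq. 24): a union of small tensor grids over the
    multi-indices q - N + 1 <= |i| <= q, instead of one full tensor grid."""
    pts = set()
    for i in itertools.product(range(1, q + 1), repeat=N):
        if q - N + 1 <= sum(i) <= q:
            for xi in itertools.product(*(theta(ik) for ik in i)):
                pts.add(tuple(round(x, 12) for x in xi))  # dedupe nested nodes
    return sorted(pts)

N = 2
for level in range(0, 4):          # level k corresponds to q = N + k
    print(level, len(sparse_grid(N + level, N)))
# → levels 0..3 give 1, 5, 13 and 29 nodes in 2D, versus (2**level + 1)**2
#   points for the full tensor grid at the same 1D resolution
```

Because the 1D sets are nested, raising the level reuses every previously sampled node, which is exactly the incremental property expressed by Eq. (25).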


4.4 Interpolation error

As a matter of notation, the interpolation function used will be denoted A_{N+k,N}, where k is called the level of the Smolyak construction. The interpolation error using the Smolyak algorithm to construct multidimensional functions (using the piecewise multilinear basis) is [51,52]

‖f − A_{q,N}(f)‖ = O(M^{−2} |log_2 M|^{3(N−1)}),  (26)

where M = dim(H_{q,N}) is the number of interpolation points. On the other hand, the construction of multidimensional functions using polynomial basis functions gives an interpolation error of [51,52]

‖f − A_{q,N}(f)‖ = O(M^{−k} |log_2 M|^{(k+2)(N+1)+1}),  (27)

assuming that the function f ∈ F^k_d, that is, it has continuous derivatives up to order k.

4.5 Solution strategy

The final solution strategy is as follows: a stochastic collocation method in Γ ⊂ R^N along with a finite element discretization in the physical space D ⊂ R^d is used. Given a particular level of interpolation of the Smolyak algorithm in N-dimensional random space, we define the set of collocation nodes {ξ_k}_{k=1}^{M} on which the interpolation function is constructed. Given a piecewise FEM mesh X^h_d ∈ W^{1,2}_0(D), find, for k = 1, . . . , M, solutions

θ^h_k(x) = θ^h(x, ξ_k),  (28)

such that

B(θ^h(ξ_i) : x, t, ξ_i) = 0,  i = 1, . . . , M.  (29)

The final numerical solution takes the form

θ(x, t, ξ) = ∑_{k=1}^{M} θ(x, t, ξ_k) L_k(ξ),  (30)

where L_k are the multidimensional interpolation functions constructed using the Smolyak algorithm.
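The workflow of Eqs. (28)–(30) can be sketched non-intrusively. In the fragment below the "deterministic solver" is a hypothetical closed-form stand-in (any legacy code that returns a field for a given ξ would do), and a single random dimension is used so that the Lagrange basis stays transparent:

```python
import numpy as np

# Stand-in for a deterministic legacy solver: returns the field theta_h(x; xi)
# on a fixed spatial grid for one realization xi of the random input.
x = np.linspace(0.0, 1.0, 101)
def deterministic_solve(xi):
    # hypothetical closed-form "solution", used only to illustrate the workflow
    return np.sin(np.pi * x) * (1.0 + 0.3 * xi)

# Step 1: collocation nodes in the random space (1D Chebyshev extrema here;
# a Smolyak grid would replace this set in N dimensions)
M = 7
nodes = -np.cos(np.pi * np.arange(M) / (M - 1))

# Step 2: M completely decoupled deterministic solves (Eq. 29) -- each call
# could be a separate cluster job writing its own result file
snapshots = np.array([deterministic_solve(xk) for xk in nodes])  # shape (M, n_x)

# Step 3: reconstruct theta(x, xi) = sum_k theta(x, xi_k) L_k(xi) (Eq. 30)
def theta(xi):
    L = np.array([np.prod([(xi - xj) / (xk - xj)
                           for j, xj in enumerate(nodes) if j != k])
                  for k, xk in enumerate(nodes)])
    return L @ snapshots

# the surrogate reproduces the solver at the collocation nodes ...
assert np.allclose(theta(nodes[2]), snapshots[2])
# ... and approximates it at unsampled realizations
assert np.max(np.abs(theta(0.3) - deterministic_solve(0.3))) < 1e-10
```

Note that the solver is never modified: it is only called at prescribed realizations, which is the sense in which the method is non-intrusive.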


Recently, rigorous error estimates for the collocation based stochastic solution have been developed in [16] and [54]. The error is basically split into a superposition of errors: the error due to the discretization of the deterministic solution, ε_D, and the error due to the interpolation of the solution in stochastic space, ε_I. Very generally, this error can be written as ε ≤ (C_1 ε_D^2 + C_2 ε_I^2)^{1/2}.

Remark: Further reduction in computational effort can be obtained by incorporating adaptivity into the collocation framework. The interested reader is referred to our recent work [14] for a discussion of a dimension adaptive sparse grid collocation strategy.

4.6 Ingredients for a sparse grid collocation implementation

We outline an approach to perform stochastic analysis of partial differential equations by non-intrusively using an existing deterministic code. The solution procedure can be broadly divided into three distinct operations:

• A subroutine for computing deterministic solutions.
• A subroutine for building the interpolation functions.
• A subroutine for computing moments and other post-processing operations.

Deterministic code: The concept of the collocation method lies in effectively decoupling the physical domain computation from the computations in the stochastic dimensions. The deterministic code used must take the sparse grid coordinates as input. For example, consider the case of two stochastic dimensions (ξ_1, ξ_2) that determine the input random process ω. The range of (ξ_1, ξ_2) is assumed to be [0, 1] × [0, 1] without loss of generality. The deterministic executable must be able to take in the different sampled two-tuples (ξ_1^i, ξ_2^i) and output logically named result files R_i.out.

Interpolation functions: A simple wrapper program first initializes the stochastic dimensions and constructs the sparse grid coordinates. The deterministic code is run (in parallel) and the appropriately named result files become available. The wrapper program then reads the result data and constructs the interpolation function. An easy-to-use MATLAB based sparse grid interpolation toolbox is available at [55,56]. A C++ implementation that easily interfaces with deterministic simulators is available at [57].

Post-processing: Post-processing usually involves computing the moments of the stochastic solution across the whole domain. This is essentially a cubature of the interpolating function across the stochastic space. The integration is very straightforward because the weights corresponding to the known nodal positions are computed a priori. A data set containing the weights for each nodal position, for different dimensions and different interpolation depths, can be easily created. Then the cubature over the stochastic space for simultaneously computing moments of various fields becomes simple scalar-matrix products.
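This scalar-matrix product structure can be sketched as follows. The uniform density on [−1, 1] and the Chebyshev node set are illustrative assumptions; the weights w_k = ∫ L_k(ξ)ρ(ξ)dξ of Eq. (18) are precomputed once per grid by Gauss-Legendre quadrature:

```python
import numpy as np

# Precompute cubature weights w_k = integral of L_k(xi) * rho(xi) over Gamma
# (Eq. 18); here Gamma = [-1, 1] with uniform density rho = 1/2.
M = 7
nodes = -np.cos(np.pi * np.arange(M) / (M - 1))   # Chebyshev extrema

gl_x, gl_w = np.polynomial.legendre.leggauss(M)   # exact for deg < 2M polynomials
def lagrange(k, xi):
    return np.prod([(xi - xj) / (nodes[k] - xj)
                    for j, xj in enumerate(nodes) if j != k], axis=0)
w = np.array([np.sum(gl_w * lagrange(k, gl_x)) * 0.5 for k in range(M)])

# Moments of any field then reduce to matrix-vector products with the stored
# solution snapshots Z (one spatial row per grid point, one column per node).
x = np.linspace(0.0, 1.0, 101)
Z = np.array([np.sin(np.pi * x) * (1.0 + 0.3 * xk) for xk in nodes]).T
mean = Z @ w                       # <z(x)>   = sum_k z(x, xi_k) w_k
second = (Z ** 2) @ w              # <z^2(x)>; variance = second - mean**2

assert np.allclose(mean, np.sin(np.pi * x))   # E[1 + 0.3*xi] = 1 for this toy field
```

Once w is tabulated for a given dimension and interpolation depth, every additional field or moment costs only one further product against the snapshot matrix.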

5 Numerical examples

Various applications of the techniques developed in the preceding sections are showcased in this section.

5.1 Natural convection with random boundary conditions

In the following example, a natural convection problem in a square domain [−0.5, 0.5] × [−0.5, 0.5] is considered. The left wall is maintained at a higher temperature of 0.5. The right wall is maintained at a lower mean temperature of −0.5. The temperatures at different points on the right hand side boundary are correlated. This is physically feasible through, say, a resistance heater, where the mean temperature remains constant, but material variations cause local fluctuations from this mean temperature. We assume this correlation to be a simple exponential correlation function, C(y_1, y_2) = exp(−c · |y_1 − y_2|). Here, c is the inverse of the correlation length, which is taken as 1. For ease of analysis, we consider a non-dimensionalized form of the governing equations. The Prandtl number is set to 1.0 and the thermal Rayleigh number is set to 5000. The upper and lower boundaries are thermally insulated. No slip boundary conditions are enforced on all four walls.

The Karhunen-Loeve expansion of the exponential correlation for the input uncertainty is performed. Truncating the infinite dimensional representation of the boundary temperature to a finite dimensional approximation gives

T(y, ξ) = −0.5 + ∑_{i=1}^{M} ξ_i √λ_i f_i(y),  (31)

where the ξ_i are normally distributed (N(0, 1)). The eigenvalues of the correlation function decay very rapidly. The two largest eigenvalues contribute 95% of the energy of the spectrum. To showcase the sparse grid collocation method, three cases, where the Karhunen-Loeve expansion is truncated at 2, 4 and 8 terms, are considered. The physical domain is discretized into 50 × 50 uniform quadrilateral elements. Each deterministic simulation is performed until steady-state is reached. This was around 600 time steps with a time interval of ∆t = 1 × 10^{−3}.
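A discrete sketch of this Karhunen-Loeve construction is given below, assuming a simple Nystrom discretization of C(y_1, y_2) = exp(−|y_1 − y_2|) on the wall (the grid size n is an arbitrary choice); the eigenvalue decay and a sampled realization of Eq. (31) then follow directly:

```python
import numpy as np

# Discrete Karhunen-Loeve of C(y1, y2) = exp(-|y1 - y2|) on the right wall
# y in [-0.5, 0.5] (unit correlation length), via a simple Nystrom scheme.
n = 200
y = np.linspace(-0.5, 0.5, n)
h = y[1] - y[0]
C = np.exp(-np.abs(y[:, None] - y[None, :]))

# eigenpairs of the covariance operator: C f_i = lambda_i f_i
lam, f = np.linalg.eigh(C * h)                  # quadrature-weighted kernel
lam, f = lam[::-1], f[:, ::-1] / np.sqrt(h)     # sort descending, normalize

print(lam[:4] / lam.sum())   # the leading modes dominate the spectrum

# one realization of the correlated wall temperature, Eq. (31), M-term truncation
M = 2
xi = np.random.randn(M)
T = -0.5 + sum(xi[i] * np.sqrt(lam[i]) * f[:, i] for i in range(M))
```

The eigenvalue sum approximates the trace ∫C(y, y)dy = 1, so the printed ratios directly measure the fraction of input "energy" captured by each truncation level.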


5.1.1 Comparison with Monte-Carlo simulations and GPCE

A comparison of the accuracy and efficiency of the sparse grid collocation approach to Monte-Carlo and GPCE based methods is performed. In the case of the sparse grid collocation method, a level 6 interpolation scheme was used. This resulted in solving the deterministic problem at 321 different realizations. The solution procedure is embarrassingly parallel. The complete simulation was performed independently on 16 nodes (32 processors) of the V3 cluster at the Cornell Theory Center. The total simulation time was about 8 minutes. To compare the results with other available methods, we performed Monte-Carlo simulations of the same process. 65000 samples from the two-dimensional phase space were generated. A second-order polynomial chaos expansion of the above problem was also performed. The mean and standard deviation values of the three methodologies match very closely, as seen in Fig. 3 and Fig. 4, respectively. The total computational time for the GPCE problem was 235 minutes. Table 1 shows representative times for the natural convection problem solved using GPCE and collocation based methods. The dimension 2 and 4 problems were solved using a second-order polynomial chaos representation, while the problem in 8 dimensions was solved using a first-order polynomial chaos representation. All problems were solved on 16 nodes (32 processors) of the V3 cluster at the Cornell Theory Center. Notice that as the number of dimensions increases, the performance of the collocation method improves.

5.1.2 Higher-order moments and PDFs

Once the interpolant has been constructed, it is relatively straightforward to extract higher-order statistics from the stochastic solution. The effect of the uncertain boundary conditions is to change the convection patterns in the domain. At the extreme ranges, the direction of the convective roll actually reverses. It is informative to look at the probability distribution of the velocity at the points of high standard deviation. This provides a clue towards the existence of equilibrium jumps and discontinuities in the field. The point (0.34, 0.0) is one such point of high deviation (see Fig. 4). The velocity at that point was sampled randomly 500000 times from the 2-dimensional stochastic space and the corresponding PDF from its histogram distribution was computed. The probability distribution functions for the temperature, pressure and velocities at this spatial point are plotted in Fig. 5. The applied random temperature boundary conditions result in some realizations having temperatures (along the right wall) both higher and lower than the left wall. This leads to instances where there is a complete reversal in the direction of circulation. Furthermore, as seen from Fig. 3, most of the flow in the horizontal direction is concentrated in two regions. Consequently, the u velocity in these regions experiences a large variation due to the imposed boundary conditions. This results in the tailing of the u velocity distribution seen in the PDF for the u velocity in Fig. 5. Since pressure can be considered to enter the physics as a means of imposing continuity, the PDF for the pressure exhibits a similar tailing.

5.2 Natural convection with random boundary topology

In the previous problem, we modelled the uncertainty due to the randomness in the boundary conditions. In the present problem, we investigate the effects of surface roughness on the thermal evolution of a fluid heated from below. The surface of any material is usually characterized broadly by two aspects: waviness and roughness. Waviness refers to the large scale fluctuations in the surface. Roughness is the small scale perturbation of the surface. Roughness is described by two components: the probability of a point being a height z above/below the datum (PDF), and the correlation between two points on the surface (ACF). The type of surface treatment that produced the surface can be directly inferred from the auto-correlation function. For example, if the surface has been shot-peened, its ACF would have a larger correlation distance than if the surface had been milled. Usually the PDF of the position of a particular point is well represented by a Gaussian-like distribution. To a degree, the ACF can be approximated by an exponential correlation kernel with a specified correlation length. Surface roughness has been shown to play an important role in many systems, particularly those involving turbulent flow [58]. A recent work of Tartakovsky and Xiu [59] deals with this problem in a spectral stochastic framework.

A natural convection problem in a rectangular domain [−1.0, 1.0] × [−0.5, 0.5] is considered. The schematic of the problem is shown in Fig. 6. The upper wall is smooth while the lower wall is rough. The ACF of the surface roughness of the lower wall is approximated to be an exponential function. The correlation length is set to 0.10. The mean roughness (absolute average deviation from a smooth surface) is taken to be 1/100th of the characteristic length (L = 1.0). The first eight eigenvalues are considered to completely represent the surface randomness. The problem is solved in an eight-dimensional stochastic space. The bottom wall is maintained at a higher temperature of 0.5. The top wall is maintained at a lower temperature of −0.5. The side walls are thermally insulated. The Prandtl number is set to 6.4 (corresponding to water) and the thermal Rayleigh number is set to 5000. No slip boundary conditions are enforced on all four walls. The physical domain is discretized into 100 × 50 uniform quadrilateral elements. Each deterministic simulation is performed until steady-state is reached. This was around 400 time steps with a time interval of ∆t = 1 × 10^{−2}.

Each deterministic simulation represents one realization of the random (rough) lower wall sampled at different ξ = (ξ_1, . . . , ξ_8). The lower wall is characterized by the truncated KL expansion, y(x) = ∑_{i=1}^{8} √λ_i ξ_i f_i(x), where λ_i and f_i(x) are the eigenvalues and eigenvectors of the ACF. A reference domain ([−1.0, 1.0] × [−0.5, 0.5]) and grid (100 × 50 uniform quadrilateral elements) are considered and the nodes on the bottom boundary are transformed according to the above formula to get each realization of the random boundary. The Jacobian of the transformation from the reference grid to the new grid can be easily computed (it is just the ratio of the elemental areas before and after the transformation). The deterministic solution at each collocation point in stochastic space is computed on the reference grid. This naturally allows the subsequent computation of the ensemble averages. A level 4 sparse interpolation grid was utilized for discretizing the stochastic space; 3937 collocation points were used to discretize the eight-dimensional stochastic space using the sparse grid collocation method. Eight nodes on the Velocity-3 cluster at the Cornell Theory Center took 500 minutes to simulate the problem at all the collocation points. Experiments have shown that surface roughness results in the emission of thermal plumes (at high Rayleigh number) and generally causes enhanced heat transport and improved flow circulation [58].

5.2.1 Moments and probability distribution functions

Fig. 7 plots the mean values of the temperature, velocity and pressure. The rough lower surface causes the mean temperature to be much more homogenized. The mean velocity is larger than in the case with a smooth lower surface. In the case of smooth lower surfaces, the u velocity shows symmetry about the horizontal centerline. Notice that this is lost when the lower surface is made rough. The rougher surface enhances the circulation patterns in the lower half of the system. Fig. 8 plots the mean velocity vectors and some streamline contours. The top corners of the system consist of recirculating rolls which result in reduced heat transfer at the edges, as can be seen in the mean temperature contour plot.

Fig. 9 plots the second moment of the temperature, velocity and pressure variables. The standard deviation of the velocity is of the order of the mean velocity. This suggests that there are multiple modes at which the system exists and the variation of the random variables across the stochastic space causes the system to move from one stable state to another. If the state changes abruptly as the random variable moves across a threshold value, a mode shift is said to have occurred. The existence of multiple patterns in the region of parameters considered here is well known [60].


5.2.2 Mode shifts and equilibrium jumps

One way to identify multiplicity of stable states is through visualizing the probability distribution functions of the dependent variables at a physically relevant point. Small changes in the roughness cause the central thermal plume of the smooth-surface Rayleigh-Benard instability to change position. This can cause reversal of the direction of the fluid flow in parts of the domain. We choose one such point, (0, 0.25), which has a large deviation in the temperature, to plot the PDFs. The PDFs were computed from 500000 Monte-Carlo realizations of the dependent variables. Fig. 10 plots the corresponding distribution functions. Notice that there exist two peaks in the distribution functions for the temperature, pressure and the v velocity. This confirms the existence of two stable states. We assert that these shifts also occur due to the non-linearities introduced by the surface roughness. To verify if this is indeed the case, Fig. 11 shows the temperature and v velocity variations at a specific point ([0.0, 0.25], chosen because of its large temperature and v velocity second moments) plotted over the first two random variables. Notice the abrupt jump in the temperature and v velocity as the random variables cross 0. Fig. 12 shows the projection of the variation onto a single dimension. The mode shift is apparent in the abrupt change of the dependent variables as the random variable crosses over zero.

5.3 Convection in heterogeneous porous media

Fluid flow through porous media is a ubiquitous process occurring at various scales, ranging from large scales (geothermal energy systems, oil recovery) to much smaller scales (flow of liquid metal through dendritic growth in alloy solidification and flow through fluidized beds). The analysis of flow through a medium with deterministic variable porosity has been well studied [61]. But in most cases of heterogeneous porous media, the porosity either varies across multiple length scales or cannot be represented deterministically. In the following example, we focus on the case when the porosity is described as a random field. The usual descriptors of the porosity are its correlation function and the mean porosity.

A schematic of the problem is shown in Fig. 13. This problem has been investigated in [61]. A square domain of dimensions [−0.5, 0.5] × [−0.5, 0.5] is considered. The inner half of the square is considered to be free fluid. The rest of the domain is filled with a porous material. The porous material is assumed to be Fontainebleau sandstone, whose experimental correlation function is taken from [62]. The correlation function and a cross-sectional image of the sandstone are shown in Fig. 14.


The governing equation for the fluid flow in the variable porosity medium is given by [61,63]:

∂v/∂t + v · ∇(v/ǫ) = −(Pr/Da) ((1 − ǫ)²/ǫ²) v − (1.75 ‖v‖ (1 − ǫ)/((150 Da)^{1/2} ǫ²)) v + Pr ∇²v − ǫ∇p − ǫ Pr Ra T e_g,  (32)

where ǫ is the volume fraction of the fluid and Da is the Darcy number. Natural convection in the system is considered. The left wall is maintained at a dimensionless temperature 1.0, while the right wall is maintained at 0.0. The other walls are insulated. No slip conditions are enforced on all four walls. The Rayleigh number is 10000 and the wall Darcy number is 1.185 × 10^{−7}. The wall porosity is ǫ = 0.8 and the porosity of the free fluid in the interior of the domain is 1.0.

The numerically computed eigenvalues of the correlation are shown in Fig. 14. The first eight eigenvalues correspond to 94% of the spectrum. We consider the porosity to be described in an 8-dimensional stochastic space. The mean porosity is 0.8. A level 4 interpolatory sparse grid with 3937 collocation points was utilized for the stochastic simulation. A fractional time step method based on the work in [61] is used in the simulation of the deterministic problem. Fig. 16 plots the mean temperature, velocity and pressure contours. Fig. 17 shows the mean velocity vectors and some streamlines. The standard deviation of the dependent variables is shown in Fig. 18.

5.4 Diffusion in heterogeneous random media

In the following, an illustrative example is provided, showcasing (a) the construction of an input stochastic model based on ideas from model reduction [20] and (b) a seamless interfacing of the stochastic sparse grid collocation with a multiscale direct simulator. The problem is to construct the stochastic steady state temperature profile in a domain which consists of a two-phase composite. The exact material distribution (microstructure) in the domain is unknown. The thermal conductivity depends on the material distribution. Furthermore, the length scale of variations in the material distribution (and hence the conductivity) necessitates the use of multi-scaling arguments.

We start from a given experimental image. The image (204µm × 236µm), shown in Fig. 19, is of a tungsten-silver composite [64]. This composite was produced by infiltrating a porous tungsten solid with molten silver. This is a well characterized system, which has been used to test various reconstruction procedures [66,65]. The first step is to extract the necessary statistical information from the experimental image. The image is cropped, deblurred and discretized. The volume fraction of silver is p = 0.2. The experimental two-point correlation is extracted from the image. The normalized two-point correlation, g(r) = (L_2(r) − p²)/(p − p²), is shown in Fig. 20.
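The extraction of L_2(r) and its normalization g(r) can be sketched via an FFT autocorrelation. The binary image below is synthetic (uncorrelated pixels at volume fraction 0.2), standing in for the experimental micrograph, so its g(r) simply drops from 1 at r = 0 toward 0:

```python
import numpy as np

# Sketch: estimate the two-point correlation L2(r) of a binary image by FFT
# autocorrelation (periodic boundaries), then normalize it as
# g(r) = (L2(r) - p^2) / (p - p^2). A synthetic random image stands in for
# the experimental micrograph.
rng = np.random.default_rng(0)
img = (rng.random((256, 256)) < 0.2).astype(float)   # phase fraction ~ p = 0.2
p = img.mean()

F = np.fft.fft2(img)
S2 = np.fft.ifft2(F * np.conj(F)).real / img.size    # S2[dx,dy] = P(both in phase)

# radially average S2[dx, dy] to get L2(r)
dx, dy = np.meshgrid(np.fft.fftfreq(256) * 256, np.fft.fftfreq(256) * 256,
                     indexing="ij")
r = np.hypot(dx, dy).astype(int)
L2 = np.bincount(r.ravel(), weights=S2.ravel()) / np.bincount(r.ravel())

g = (L2 - p**2) / (p - p**2)
# g(0) = 1 by construction; g decays toward 0 at large separations
```

For a real micrograph the same routine would reveal the correlation length and any oscillatory structure that the reconstruction step must reproduce.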

The next step is to utilize these extracted statistical relations (volume fraction and two-point correlation) to reconstruct a class of 3D microstructures. We utilize a statistics based reconstruction procedure based on Gaussian Random Fields (GRF). In this method, the 3D microstructure is obtained as the level cuts to a random field. This random field satisfies a given field-field correlation. The statistics of the reconstructed 3D image can be matched to the experimental image by suitably modifying the field-field correlation function and the level cut values (see [66] for a detailed discussion). Following the work in [66], the GRF is assumed to satisfy a specified field-field correlation given by

γ(r) = [e^{−r/β} − (r_c/β) e^{−r/r_c}] / [1 − (r_c/β)] · sin(2πr/d)/(2πr/d),  (33)

where the field is characterized by the correlation length β, a domain scale d and a cutoff scale r_c. Optimal values of (β, d, r_c) are obtained by minimizing the error between the theoretical two-point correlation and the experimental two-point correlation. The theoretical two-point correlation corresponding to (β, d, r_c) = (2.229, 12.457, 2.302) µm is plotted in Fig. 21.
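Eq. (33) is straightforward to evaluate once the removable singularity of sin(2πr/d)/(2πr/d) at r = 0 is handled; the sketch below uses np.sinc for this and plugs in the fitted parameters quoted above, checking the normalization γ(0) = 1:

```python
import numpy as np

# Field-field correlation of Eq. (33) with the fitted parameters
# (beta, d, r_c) = (2.229, 12.457, 2.302) micron.
beta, d, rc = 2.229, 12.457, 2.302

def gamma(r):
    r = np.asarray(r, dtype=float)
    envelope = (np.exp(-r / beta) - (rc / beta) * np.exp(-r / rc)) / (1.0 - rc / beta)
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(2r/d) = sin(2*pi*r/d)/(2*pi*r/d)
    return envelope * np.sinc(2.0 * r / d)

r = np.linspace(0.0, 20.0, 201)
vals = gamma(r)
assert abs(vals[0] - 1.0) < 1e-12        # gamma(0) = 1: unit-variance field
```

The sin term makes γ(r) oscillate with period d, which is what produces the characteristic oscillatory two-point correlation of the reconstructed composite.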

Using the optimal parameters of the GRF (to match the experimental data), realizations of the 3D microstructure were computed. Each microstructure consisted of 129 × 129 × 129 pixels. This corresponds to a size of 39.7µm × 39.7µm × 39.7µm. One realization of the 3D microstructure reconstructed using the GRF is shown in Fig. 22.

Model reduction based on principal component analysis was implemented using the eigs subroutine in Matlab [68]. The implementation of the PCA ensured that these eigen-images are normalized. Principal component analysis of the microstructure data set revealed that the first 9 eigen-images could represent about 95% of the eigen-spectrum (see Fig. 23). The stochastic dimension is set at N = 9.

Microstructures reconstructed using the eigen-images contain fractional values due to the removal of smaller basis components. These fractional pixel values are rounded off to get the digitized microstructure [67,65]. The subspace contraction procedure (see [20]) results in the function G. G is a 9-dimensional function that serves as the stochastic input for the diffusion equation. A simple diffusion problem is considered. A computational domain of 128 × 128 × 128 is considered (this corresponds to a physical domain of 39.7µm × 39.7µm × 39.7µm). The random heterogeneous microstructure is constructed as a 129 × 129 × 129 pixel image. The steady-state temperature profile, when a constant temperature of 0.5 is maintained on the left wall and a constant temperature of −0.5 is maintained on the right wall, is evaluated. All the other walls are thermally insulated. The axis along which the temperature boundary conditions are imposed is denoted as the x-axis (left-right) while the vertical axis is the z-axis.

The construction of the stochastic solution is through sparse grid collocation strategies. A level 5 interpolation scheme is used to compute the stochastic solution in 9 dimensions. The stochastic problem was reduced to the solution of 15713 decoupled deterministic equations. Forty nodes (each with two 3.8 GHz CPUs) of our 64-node Linux cluster were utilized to solve these deterministic equations. The total computational time was about 56 hours. Each deterministic problem involved the solution of a diffusion problem on a given microstructure using a multiscale solver (see [20]). Each deterministic problem is solved using an 8 × 8 × 8 coarse element grid (uniform hexahedral elements) with each coarse element having 16 × 16 × 16 fine-scale elements. The solution of one deterministic multiscale problem took about 34 minutes. In comparison, one fully-resolved deterministic fine scale FEM solution took nearly 40 hours on a single processor.

The reduction in the interpolation error with increasing depth of interpolation is shown in Fig. 25. The interpolation error is defined as the maximum deviation of the interpolated value of the function from the computed value, e = max(|f − I(f)|). As the level of interpolation increases, so does the number of sampling points used to construct the stochastic solution (see [52,14] for explicit formulae relating the depth of interpolation to the number of sampling points, and for further related discussion).
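The error measure e = max(|f − I(f)|) and its decay with the depth of interpolation can be illustrated with a 1-D piecewise-linear stand-in for the sparse grid interpolant. This is a sketch only; interp_error is a hypothetical name and the equispaced 1-D grid is an assumption.

```python
import numpy as np

def interp_error(f, level, n_test=1001):
    """e = max|f - I(f)| for piecewise-linear interpolation of f on [0, 1]
    with 2**level + 1 equispaced nodes -- a 1-D stand-in for the
    sparse grid interpolant."""
    nodes = np.linspace(0.0, 1.0, 2**level + 1)
    x = np.linspace(0.0, 1.0, n_test)
    interpolated = np.interp(x, nodes, f(nodes))  # I(f) on a dense test grid
    return np.max(np.abs(f(x) - interpolated))

f = lambda x: np.sin(2.0 * np.pi * x)
errors = [interp_error(f, level) for level in range(1, 7)]
# the error decreases monotonically as the depth of interpolation grows
```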

The mean temperature is shown in Fig. 24. The figure plots iso-surfaces of temperatures −0.25 (Fig. 24(b)), 0.0 (Fig. 24(c)) and 0.25 (Fig. 24(d)). The figure also shows temperature slices at three different locations on the xz plane: y = 0 (Fig. 24(e)), y = 20 µm (Fig. 24(f)) and y = 40 µm (Fig. 24(g)).

The standard deviation and other higher-order statistics of the temperature variation are shown in Fig. 26. Fig. 26(a) plots standard deviation iso-surfaces. Figs. 26(d)–26(f) plot slices of the temperature deviation at three different planes, y = 0, y = 20 µm and y = 40 µm, respectively. The standard deviation reaches 8% of the maximum temperature difference maintained. Two points, one from a region of high standard deviation (A = (22.15, 20, 0) µm) and another from a region of moderate deviation (B = (11.69, 0, 9.23) µm), are chosen, and the probability distribution functions of temperature at these points are determined. Fig. 26(b) plots the PDF for the point with large standard deviation. Notice that the range of the variability of temperature at this point is rather high.
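Pointwise PDFs such as those in Fig. 26(b)-(c) can be estimated by sampling the stochastic solution at a fixed spatial point and histogramming the outputs. The sketch below assumes a scalar surrogate (e.g. the sparse grid interpolant evaluated at that point) and uniform inputs; pdf_at_point is a hypothetical name.

```python
import numpy as np

def pdf_at_point(surrogate, dim, n_samples=100_000, bins=50, seed=0):
    """Histogram estimate of the PDF of a scalar output at one spatial point.
    `surrogate` maps one point of the N-dimensional stochastic space to the
    solution value at the chosen spatial location; uniform inputs on
    [-1, 1]^N are assumed."""
    rng = np.random.default_rng(seed)
    xi = rng.uniform(-1.0, 1.0, size=(n_samples, dim))
    values = np.array([surrogate(x) for x in xi])   # cheap surrogate evaluations
    density, edges = np.histogram(values, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density
```

Because the surrogate is inexpensive to evaluate, such Monte-Carlo post-processing of the collocation solution costs far less than sampling the deterministic solver directly.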


6 Conclusions

The current work provides a gentle introduction to the application of stochastic modeling. Though the extension of deterministic modeling and design of components, devices and phenomena to include stochastic aspects is very promising, both in terms of fundamental understanding of physical phenomena and for enhanced performance in the presence of uncertainty, there remains a bottleneck to its more general application. Two issues contribute to this bottleneck: (1) the availability of an easy-to-use and easy-to-implement strategy to solve stochastic partial differential equations that preferably relies on available deterministic solvers, and (2) the problem of constructing realistic input models to perform such stochastic analysis. This paper addresses these two issues. The sparse grid collocation approach provides a non-intrusive approach to stochastic modeling. An overview of this technique, with an eye towards implementation, is provided. Various techniques for constructing realistic input models are also discussed. Several numerical examples illustrating the construction of input stochastic models and the subsequent solution of the stochastic partial differential equations are provided.

Acknowledgements

This research was supported by the Computational Mathematics program of AFOSR (grant F49620-00-1-0373). The computing was conducted using the resources of the Cornell Center for Advanced Computing.

References

[1] Major aircraft disasters, information available at http://dnausers.d-n-a.net/dnetGOJG/Disasters.htm.

[2] AIM, Accelerated Insertion of Materials program, DSO Office, DARPA, information available at http://www.darpa.mil/dso/thrust/matdev/aim.htm.

[3] Prognosis program, DSO Office, DARPA, information available at http://www.arpa.mil/dso/thrust/matdev/prognosis.htm.

[4] National security assessment of the U.S. forging industry, Bureau of Industry and Security, U.S. Dept. of Commerce, https://www.bis.doc.gov/defenseindustrialbaseprograms/osies/.

[5] MAI, Metals Affordability Initiative, AFRL/ML, information available at http://www.wrs.afrl.af.mil/contract/.

[6] The USAF/Navy forging supplier initiative, information available at http://www.onr.navy.mil/sci tech/industrial/mantech/docs/successes/metals/.

[7] R. G. Ghanem, P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Dover Publications, 1991.

[8] R. Ghanem, Probabilistic characterization of transport in heterogeneous porous media, Comput. Methods Appl. Mech. Engrg. 158 (1998) 199–220.

[9] R. Ghanem, A. Sarkar, Mid-frequency structural dynamics with parameter uncertainty, Comput. Methods Appl. Mech. Engrg. 191 (2002) 5499–5513.

[10] R. Ghanem, Higher order sensitivity of heat conduction problems to random data using the spectral stochastic finite element method, ASME J. Heat Transfer 121 (1999) 290–299.

[11] D. Xiu, G. E. Karniadakis, Modeling uncertainty in steady state diffusion problems via generalized polynomial chaos, Comput. Methods Appl. Mech. Engrg. 191 (2002) 4927–4948.

[12] D. Xiu, G. E. Karniadakis, Modeling uncertainty in flow simulations via generalized polynomial chaos, J. Comp. Physics 187 (2003) 137–167.

[13] D. Xiu, G. E. Karniadakis, A new stochastic approach to transient heat conduction modeling with uncertainty, Int. J. Heat Mass Transfer 46 (2003) 4681–4693.

[14] B. Ganapathysubramanian, N. Zabaras, Sparse grid collocation schemes for stochastic natural convection problems, Journal of Computational Physics 225 (1) (2007) 652–685.

[15] I. Babuska, F. Nobile, R. Tempone, A stochastic collocation method for elliptic partial differential equations with random input data, ICES Report 05-47, 2005.

[16] D. Xiu, J. S. Hesthaven, High-order collocation methods for differential equations with random inputs, SIAM J. Sci. Comput. 27 (2005) 1118–1139.

[17] C. W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences, Springer-Verlag, 1990.

[18] M. Loeve, Probability Theory, 4th Edition, Springer-Verlag, Berlin, 1977.

[19] C. Desceliers, R. Ghanem, C. Soize, Maximum likelihood estimation of stochastic chaos representations from experimental data, Int. J. Num. Meth. Engrg. 66 (2006) 978–1001.

[20] B. Ganapathysubramanian, N. Zabaras, Modelling diffusion in random heterogeneous media: Data-driven models, stochastic collocation and the variational multi-scale method, Journal of Computational Physics, in press.

[21] B. Ganapathysubramanian, N. Zabaras, A non-linear dimension reduction methodology for generating data-driven stochastic input models, Journal of Computational Physics, submitted.

[22] S. T. Roweis, L. K. Saul, Nonlinear dimensionality reduction by locally linear embedding, Science 290 (2000) 2323–2326.

[23] J. B. Tenenbaum, V. de Silva, J. C. Langford, A global geometric framework for nonlinear dimensionality reduction, Science 290 (2000) 2319–2323.

[24] V. de Silva, J. B. Tenenbaum, Global versus local methods in nonlinear dimensionality reduction, Advances in Neural Information Processing Systems 15 (2003) 721–728.

[25] M. Bernstein, V. de Silva, J. C. Langford, J. B. Tenenbaum, Graph approximations to geodesics on embedded manifolds, December 2000, preprint available at http://isomap.stanford.edu/BdSLT.pdf.

[26] J. R. Munkres, Topology, Second Edition, Prentice-Hall, 2000.

[27] J. A. Costa, A. O. Hero, Entropic graphs for manifold learning, Proc. of IEEE Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, November 2003.

[28] J. A. Costa, A. O. Hero, Geodesic entropic graphs for dimension and entropy estimation in manifold learning, IEEE Trans. on Signal Processing 52 (2004) 2210–2221.

[29] T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, Introduction to Algorithms, The MIT Press, 2001.

[30] J. A. Costa, A. O. Hero, Manifold learning with geodesic minimal spanning trees, arXiv:cs/0307038v1, 2003.

[31] D. Frenkel, B. Smit, Understanding Molecular Simulations: From Algorithms to Applications, Academic Press, 2002.

[32] C. L. Winter, D. M. Tartakovsky, Mean flow in composite porous media, Geophys. Res. Lett. 27 (2000) 1759–1762.

[33] C. L. Winter, D. M. Tartakovsky, Groundwater flow in heterogeneous composite aquifers, Water Resour. Res. 38 (2002) 23.1.

[34] C. L. Winter, D. M. Tartakovsky, A. Guadagnini, Numerical solutions of moment equations for flow in heterogeneous composite aquifers, Water Resour. Res. 38 (2002) 13.1.

[35] C. L. Winter, D. M. Tartakovsky, A. Guadagnini, Moment equations for flow in highly heterogeneous porous media, Surv. Geophys. 24 (2003) 81–106.

[36] M. Jardak, C.-H. Su, G. E. Karniadakis, Spectral polynomial chaos solutions of the stochastic advection equation, J. Sci. Comput. 17 (2002) 319–338.

[37] D. Lucor, C.-H. Su, G. E. Karniadakis, Karhunen-Loeve representation of periodic second-order autoregressive processes, International Conference on Computational Science (2004) 827–834.

[38] X. Wan, G. E. Karniadakis, An adaptive multi-element generalized polynomial chaos method for stochastic differential equations, J. Comp. Physics 209 (2005) 617–642.

[39] X. Wan, D. Xiu, G. E. Karniadakis, Stochastic solutions for the two-dimensional advection-diffusion equation, SIAM J. Sci. Computing 26 (2005) 578–590.

[40] D. Xiu, G. E. Karniadakis, The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Computing 24 (2002) 619–644.

[41] B. Velamur Asokan, N. Zabaras, Variational multiscale stabilized FEM formulations for transport equations: stochastic advection-diffusion and incompressible stochastic Navier-Stokes equations, J. Comp. Physics 202 (2005) 94–133.

[42] M. K. Deb, I. K. Babuska, J. T. Oden, Solution of stochastic partial differential equations using Galerkin finite element techniques, Comput. Methods Appl. Mech. Engrg. 190 (2001) 6359–6372.

[43] I. Babuska, R. Tempone, G. E. Zouraris, Solving elliptic boundary value problems with uncertain coefficients by the finite element method: the stochastic formulation, Comput. Methods Appl. Mech. Engrg. 194 (2005) 1251–1294.

[44] I. Babuska, R. Tempone, G. E. Zouraris, Galerkin finite element approximations of stochastic elliptic partial differential equations, SIAM J. Numer. Anal. 42 (2004) 800–825.

[45] D. Xiu, D. Lucor, C.-H. Su, G. E. Karniadakis, Performance evaluation of generalized polynomial chaos, International Conference on Computational Science, Lecture Notes in Computer Science, Vol. 2660, Springer, 2003, pp. 346–354.

[46] R. Tempone, Numerical Complexity Analysis of Weak Approximation of Stochastic Differential Equations, PhD Thesis, 2002.

[47] B. Velamur Asokan, N. Zabaras, Using stochastic analysis to capture unstable equilibrium in natural convection, J. Comp. Physics 208 (2005) 134–153.

[48] E. Novak, K. Ritter, The curse of dimension and a universal method for numerical integration, in: Multivariate Approximation and Splines, G. Nürnberger, J. W. Schmidt, G. Walz (eds.), 1997, pp. 177–188.

[49] T. Gerstner, M. Griebel, Numerical integration using sparse grids, Numerical Algorithms 18 (1998) 209–232.

[50] E. Novak, K. Ritter, R. Schmitt, A. Steinbauer, On an interpolatory method for high dimensional integration, J. Comp. Appl. Mathematics 112 (1999) 215–228.

[51] V. Barthelmann, E. Novak, K. Ritter, High dimensional polynomial interpolation on sparse grids, Adv. Comput. Math. 12 (2000) 273–288.

[52] A. Klimke, Uncertainty Modeling using Fuzzy Arithmetic and Sparse Grids, PhD Thesis, Universität Stuttgart, Shaker Verlag, Aachen, 2006.

[53] C. Canuto, M. Y. Hussaini, A. Quarteroni, T. A. Zang, Spectral Methods: Fundamentals in Single Domains, Springer, 2006.

[54] D. Xiu, Efficient collocational approach for parametric uncertainty analysis, Communications in Computational Physics 2 (2007) 293–309.

[55] A. Klimke, B. Wohlmuth, Algorithm 847: spinterp: piecewise multilinear hierarchical sparse grid interpolation in MATLAB, ACM Transactions on Mathematical Software 31 (2005).

[56] A. Klimke, Sparse Grid Interpolation Toolbox – User's Guide, IANS Report 2006/001, University of Stuttgart, 2006.

[57] http://mpdc.mae.cornell.edu/Software.

[58] Y. Shen, P. Tong, K.-Q. Xia, Turbulent convection over rough surfaces, Phys. Rev. Lett. 76 (1996) 908–911.

[59] D. M. Tartakovsky, D. Xiu, Stochastic analysis of transport in tubes with rough walls, J. Comput. Physics 217 (2006) 248–259.

[60] E. Bodenschatz, W. Pesch, G. Ahlers, Recent developments in Rayleigh-Bénard convection, Annu. Rev. Fluid Mech. 32 (2000) 709–778.

[61] P. Nithiarasu, K. N. Seetharamu, T. Sundararajan, Natural convective heat transfer in a fluid saturated variable porosity medium, Int. J. Heat Mass Transfer 40 (1997) 3955–3967.

[62] C. Manwart, R. Hilfer, Reconstruction of random media using Monte Carlo methods, Physical Review E 59 (1999) 5597–5600.

[63] D. Samanta, N. Zabaras, Modeling melt convection in solidification processes with stabilized finite element techniques, International Journal for Numerical Methods in Engineering 64 (2005) 1769–1799.

[64] S. Umekawa, R. Kotfila, O. D. Sherby, Elastic properties of a tungsten-silver composite above and below the melting point of silver, J. Mech. Phys. Solids 13 (1965) 229–230.

[65] J. Aldazabal, A. Martin-Meizoso, J. M. Martinez-Esnaola, Simulation of liquid phase sintering using the Monte-Carlo method, Mater. Sci. Eng. A 365 (2004) 151–155.

[66] A. P. Roberts, E. J. Garboczi, Elastic properties of a tungsten-silver composite by reconstruction and computation, J. Mech. Phys. Solids 47 (1999) 2029–2055.

[67] L. Arleth, S. Marcelja, T. Zemb, Gaussian random fields with two level-cuts: model for asymmetric microemulsions with nonzero spontaneous curvature, The Journal of Chemical Physics 115 (8) (2001) 3923–3936.

[68] http://www.mathworks.com/products/matlab/.


Fig. 1. Limited information about important input quantities necessitates construction of stochastic input models. (Panels: polycrystals and multicomponent microstructures, whose exact distribution of components is unknown and whose topology and properties vary; only statistical/averaged quantities, e.g. volume fraction and two-point correlation, are available to characterize these structures experimentally; application: analysis of systems consisting of such microstructures.)

Fig. 2. Illustrations of the underlying concept of stochastic input models (panels: grain size/boundary map and texture map).

Fig. 3. Mean temperature and u velocity contours from different solution strategies. Top row: a level 6 collocation method; Middle row: Monte-Carlo sampling over 65000 points; Bottom row: second-order GPCE expansion.

Fig. 4. Standard deviation of temperature and u velocity contours from different solution strategies. Top row: a level 6 collocation method; Middle row: Monte-Carlo sampling over 65000 points; Bottom row: second-order GPCE expansion.

Fig. 5. Probability distribution functions for the dependent variables at point (0.34, 0). Top row: temperature and pressure; Bottom row: u and v velocity components.

Fig. 6. Schematic of natural convection with random boundary topology (surface roughness): T(y) = 0.5 and T(y) = −0.5 on opposite walls, with rough boundary y = f(x, ω).

Fig. 7. Mean values of the variables. Top left: temperature; Top right: u velocity component; Bottom left: v velocity component; Bottom right: pressure.

Fig. 8. Mean velocity vectors and streamlines.

Fig. 9. Standard deviation of the variables. Top left: temperature; Top right: u velocity component; Bottom left: v velocity component; Bottom right: pressure.

Fig. 10. Probability distribution functions for the dependent variables at point (0.25, 0). Top row: temperature and pressure; Bottom row: u and v velocity components.

Fig. 11. Mode shift: Left: Temperature, Right: v velocity component.


Fig. 12. Mode shift. Left: temperature vs. the random variable; Right: v velocity component vs. the random variable.

Fig. 13. Schematic of convection in a medium with random porosity: free fluid above a porous medium, with T = 1 and T = 0 imposed on opposite walls and u = v = 0 on all walls.

Fig. 14. Fontainebleau sandstone: correlation function and cross-sectional image.

Fig. 15. Eigen-spectrum of the correlation kernel (eigenvalue vs. index).

Fig. 16. Mean values of the variables. Top left: temperature; Top right: u velocity component; Bottom left: v velocity component; Bottom right: pressure.

Fig. 17. Mean velocity vectors and streamlines.

Fig. 18. Standard deviation of the variables. Top left: temperature; Top right: u velocity component; Bottom left: v velocity component; Bottom right: pressure.

Fig. 19. Experimental image of a two-phase composite (from [64]).


Fig. 20. The two-point correlation function g(r) vs. r (µm).

Fig. 21. Comparison of the two-point correlation functions from experiments and from the GRF reconstruction.

Fig. 22. One instance (realization) of the two-phase microstructure.


Fig. 23. Eigen-spectrum of the reconstructed microstructural images (normalized eigenvalue vs. eigen number).

Fig. 24. Steady-state mean temperature (using first-order constraints): (a) temperature contour; (b-d) temperature iso-surfaces; (e-g) temperature slices.

Fig. 25. Reduction in the interpolation error with increasing number of collocation points (first-order constraint imposed).

Fig. 26. Standard deviation of temperature (using first-order constraints): (a) standard deviation iso-surfaces; (b-c) temperature PDF at two points; (d-f) standard deviation slices.

Table 1
Solution times (minutes)

N (stochastic dims)    GPCE    Sparse grid collocation
        2               235                 8
        4              5340               187
        8              1585               391