Modeling Uncertainty in Flow Simulations via Polynomial Chaos
Dongbin Xiu and George Em Karniadakis*
Division of Applied Mathematics
Brown University
Providence, RI 02912
Submitted to Journal of Computational Physics
February 8, 2001
Abstract
We present a new algorithm based on Polynomial Chaos to model the input uncertainty and its propagation in incompressible flow simulations. Specifically, the stochastic input is represented spectrally by employing Wiener-Hermite polynomials. Randomness is thus expressed as a new continuous variable, in addition to the space-time domain. A standard Galerkin projection is performed and the resulting set of coupled equations is then solved to obtain the solution for each random mode. We implement the method in the context of the spectral/hp element method, which maintains control of the numerical error while its spectral trial basis is from the same Askey family of polynomials as the Polynomial Chaos. The algorithm is applied to micro-channel flows with random wall boundary conditions, and to external flows with random freestream. Both Gaussian and non-Gaussian inputs are considered, and the results are in good agreement with exact solutions and Monte Carlo simulations. The Polynomial Chaos based stochastic simulation, although more expensive than the corresponding deterministic simulation, generates all the solution statistics in a single run. Compared with the Monte Carlo simulation, which requires at least thousands of runs to obtain accurate statistics, the Polynomial Chaos expansion promises a substantial speed-up (at least one thousand in the current simulations), and thus it is a very effective approach to modeling uncertainty propagation in flow simulations.
*Corresponding author, [email protected]
1 Introduction
There has recently been intense interest in verification and validation of large-scale simulations and in modeling
uncertainty [1, 2, 3]. In simulations, just as in experiments, we often question the accuracy of the results and
construct a posteriori error bounds, but the new objective is to model uncertainty from the beginning of the simulation
and not simply as an afterthought. Numerical accuracy and error control have been employed in simulations for
some time now, at least for the modern discretizations, e.g. [4, 5]. However, there is still an uncertainty component
associated with the physical problem, and specifically with such diverse factors as constitutive laws, boundary and
initial conditions, transport coefficients, source and interaction terms, geometric irregularities (e.g. roughness), etc.
Most of the research effort in CFD so far has gone into developing efficient algorithms for different applications,
assuming an ideal input with precisely defined computational domains. With the field now reaching
some degree of maturity, we naturally pose the more general question of how to model uncertainty and stochastic
input, and how to formulate algorithms so that the simulation output reflects accurately the propagation of
uncertainty. To this end, the Monte Carlo approach can be employed, but it is computationally expensive and is
only used as a last resort. The sensitivity method is an alternative, more economical approach based on moments
of samples, but it is less robust and depends strongly on the modeling assumptions [6]. There are other methods more
suitable for physical applications, and there has already been good progress in other fields, most notably in
seismology and structural mechanics. A number of papers and books have been devoted to this subject, e.g. [7], [8],
[9], [10], [11], [12], [13], [14].
The most popular technique for modeling stochastic engineering systems is the perturbation method, where all
stochastic quantities are expanded around their mean via a Taylor series. This approach, however, is limited to small
perturbations and does not readily provide information on high-order statistics of the response. Another approach
is based on expanding the inverse of the stochastic operator in a Neumann series, but this too is limited to small
fluctuations, and even combinations with the Monte Carlo method seem to result in computationally prohibitive
algorithms for complex systems [15]. A more effective approach, pioneered by Ghanem and Spanos [10] in the context
of finite elements for solid mechanics, is based on a spectral representation of the uncertainty. This allows high-order
representation, not just first-order as in most perturbation-based methods, at high computational efficiency. It is
based on the original ideas of Wiener (1938) on homogeneous chaos [16, 17] and the Hermite polynomials. Grad's
classical method of moments for solutions of the Boltzmann equation is based on Hermite polynomial expansions
as well [18]. The effectiveness of Hermite expansions was also recognized by Chorin in his early work [19], who
employed Wiener-Hermite series to substantially improve both the accuracy and the computational efficiency of Monte Carlo
algorithms. However, the limitation of prematurely truncated Wiener-Hermite expansions, especially in applications
to turbulence [20], has also been recognized.
The spectral representation of the uncertainty is based on a trial basis \Psi_i(\theta), where \theta denotes the random event.
For example, the vorticity (\omega in two dimensions) has the following finite-dimensional representation:

    \omega(x, t; \theta) = \sum_{i=0}^{P} \omega_i(x, t) \, \Psi_i(\xi(\theta)).

Here \omega_i(x, t) represents the deterministic part and can be interpreted as a scale (i) of the vorticity fluctuation. The
random trial basis is expressed in terms of multi-dimensional Hermite polynomials in powers of the random variable
\xi(\theta), which defines a specific probability distribution; for example, \xi(\theta) may be a Gaussian function. The polynomial
trial basis constructed in this way has been termed Polynomial Chaos (PC) by Wiener. It is a functional, as it is a
function of \xi, which is in turn a function of the random parameter \theta \in [0, 1]. The theory of orthogonal functionals plays a
key role in the algorithms that we develop in this work. Note that the Monte Carlo algorithm is a subcase of the
above representation corresponding to the collocation procedure where the test basis is

    \Psi_i(\xi) = \delta(\xi - \xi_i),

with \delta the Kronecker delta function, and \xi_i referring to an isolated random event. For convergence, the Monte
Carlo method requires a great number of such events, as it ignores interactions between the various scales, unlike the
Galerkin representation.
In this work we propose a Polynomial Chaos (PC) expansion procedure to represent the uncertainty of the input in
flow simulations. The algorithms we develop are general, but in this paper we concentrate on modeling the uncertainty
associated with boundary conditions. This situation is encountered, for example, in micro-channel flows [21] but also
in classical flows, e.g. the freestream of flow past bluff bodies. We assume that the velocity at the boundaries
is described by a mean value and a random perturbation with a certain probability distribution function (PDF). This
distribution can be assumed to have a known analytical form, e.g. a Gaussian or lognormal distribution, or it can
be a measured distribution provided in tabulated form. The PC procedure can handle both Gaussian and non-Gaussian
representations, although for certain distributions there may exist a better representation than the original
Wiener-Hermite polynomial basis. To this end, we can replace the Wiener-Hermite expansions by the best spectral
polynomials of the Askey family [22] that match specific distribution functions. For example, the appropriate set for
Poisson distributions is the Charlier polynomials, for Gamma distributions the Laguerre polynomials, for binomial
distributions the Krawtchouk polynomials, for the beta kernel the Jacobi polynomials, etc. This can be appreciated
by the fact that the orthogonality weights of these polynomials match the probability measure of the corresponding
distributions, and the same relationship exists of course for the Hermite polynomials and the Gaussian distribution.
The work we present in this paper is based on the pioneering work in solid mechanics of Ghanem & Spanos [10],
but applied here to the incompressible Navier-Stokes equations. In addition, we employ the spectral/hp element method
in order to obtain better control of the numerical error [23]. Our approach leads to homogeneous representations, as
the mixed inner products required in the random and spatial discretizations all involve spectral polynomials. In particular,
for certain distributions, e.g. beta kernels, the corresponding Askey polynomial for the Wiener expansion is the Jacobi
polynomial, which is exactly the same as the trial basis employed in the spectral/hp element method [23].

In the next section we review the PC expansion, and in section 3 we address its implementation details as applied
to the Navier-Stokes equations. In section 4 we present computational results for Gaussian and lognormal distributions,
and we compare with analytical stochastic solutions of the Navier-Stokes equations that we have obtained, as well as with corresponding
Monte Carlo simulations. We conclude the paper with a discussion of possible extensions and applications of the
stochastic spectral/hp element method. We also include two appendices that explain how to generate partially
correlated Gaussian fields and how to approximate the lognormal distribution, which may be useful to CFD readers.
2 Representation of Random Processes
In this section we briefly review the PC expansion along with the Karhunen-Loeve (KL) expansion, another classical
technique for representing random processes. The KL expansion will be used to represent the known stochastic
fields, e.g. the input. In the following analysis, we will use the symbol \xi to denote the standard Gaussian random
variable, i.e., a Gaussian random variable with zero mean and unit variance.
2.1 Polynomial Chaos Expansion
The PC expansion was first proposed by Wiener [16]. According to the theorem of Cameron & Martin [24], it can
be used to approximate any functional in L^2(C) and converges to the functional in the L^2(C) sense. Therefore, PC
provides a means for expanding second-order random processes in terms of orthogonal polynomials. Second-order
random processes are processes with finite variance, and this applies to all viscous flow processes. Thus, a second-order
random process X(\theta), viewed as a function of the independent random variable \theta, can be represented in the
form

    X(\theta) = a_0 H_0
              + \sum_{i_1=1}^{\infty} a_{i_1} H_1(\xi_{i_1}(\theta))
              + \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} a_{i_1 i_2} H_2(\xi_{i_1}(\theta), \xi_{i_2}(\theta))
              + \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} \sum_{i_3=1}^{i_2} a_{i_1 i_2 i_3} H_3(\xi_{i_1}(\theta), \xi_{i_2}(\theta), \xi_{i_3}(\theta))
              + \cdots,   (1)
where H_n(\xi_{i_1}, \ldots, \xi_{i_n}) denotes the polynomial chaos of order n in the variables (\xi_{i_1}, \ldots, \xi_{i_n}). The above equation
is the discrete version of the original Wiener-Hermite expansion, where the continuous integrals are replaced by
summations. The general expression of the polynomials is given by

    H_n(\xi_{i_1}, \ldots, \xi_{i_n}) = e^{\frac{1}{2}\xi^T \xi} (-1)^n \frac{\partial^n}{\partial \xi_{i_1} \cdots \partial \xi_{i_n}} e^{-\frac{1}{2}\xi^T \xi},   (2)

where \xi denotes the vector consisting of the n random variables (\xi_{i_1}, \ldots, \xi_{i_n}). For notational convenience, equation (1)
can be rewritten as

    X(\theta) = \sum_{j=0}^{\infty} a_j \Psi_j(\xi),   (3)

where there is a one-to-one correspondence between the functions H_n(\xi_{i_1}, \ldots, \xi_{i_n}) and \Psi_j(\xi). The Polynomial Chaos
forms a complete orthogonal basis in the L^2 space of random variables, i.e.,

    \langle \Psi_i \Psi_j \rangle = \langle \Psi_i^2 \rangle \delta_{ij},   (4)

where \delta_{ij} is the Kronecker delta and \langle \cdot, \cdot \rangle denotes the ensemble average. This is the inner product in the Hilbert
space of random variables

    \langle f(\xi) g(\xi) \rangle = \int f(\xi) g(\xi) W(\xi) \, d\xi,   (5)

and it is justified by the Cameron & Martin theorem [24]. The weighting function is

    W(\xi) = \frac{1}{\sqrt{(2\pi)^n}} e^{-\frac{1}{2}\xi^T \xi}.   (6)

What distinguishes the Wiener-Hermite expansion from many other possible complete sets of expansions is that
the polynomials here are orthogonal with respect to the weighting function W(\xi), which has the form of an n-dimensional
independent Gaussian probability distribution with unit variance. For example, the one-dimensional
(n = 1) polynomials are:

    \Psi_0 = 1, \quad \Psi_1 = \xi, \quad \Psi_2 = \xi^2 - 1, \quad \Psi_3 = \xi^3 - 3\xi, \ldots   (7)

The orthogonality condition then takes the specific form

    \langle \Psi_i \Psi_j \rangle = \langle \Psi_i^2 \rangle \delta_{ij} = (i!) \, \delta_{ij}.   (8)
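As a quick numerical sanity check of (7) and (8), the one-dimensional basis and its Gaussian-weighted inner product can be evaluated with NumPy's probabilists' Hermite routines and Gauss-Hermite quadrature. The sketch below is our illustration, not part of the original paper:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# hermegauss returns nodes/weights for the weight exp(-x^2/2); dividing the
# weights by sqrt(2*pi) turns the quadrature sum into an expectation over N(0,1).
x, w = hermegauss(20)
w = w / np.sqrt(2.0 * np.pi)

def psi(n, x):
    """Psi_n = He_n, the probabilists' Hermite polynomial of degree n."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)

# Orthogonality (8): <Psi_i Psi_j> = (i!) delta_ij
gram = np.array([[np.sum(w * psi(i, x) * psi(j, x)) for j in range(5)]
                 for i in range(5)])
expected = np.diag([math.factorial(i) for i in range(5)])
assert np.allclose(gram, expected, atol=1e-9)
```

A 20-point rule is exact here, since all integrands are polynomials of degree at most 8.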
2.2 Karhunen-Loeve Expansion
The Karhunen-Loeve (KL) expansion [25] is another way of representing a random process. It is based on the spectral
expansion of the covariance function of the process. Let us denote the process by h(x; \theta) and its covariance function
by R_{hh}(x, y), where x and y are the spatial or temporal coordinates. By definition, the covariance function is real,
symmetric and positive definite. All its eigenfunctions are mutually orthogonal and form a complete set spanning the
function space to which h(x; \theta) belongs. The KL expansion then takes the following form:

    h(x; \theta) = \bar{h}(x) + \sum_{i=1}^{\infty} \sqrt{\lambda_i} \, \phi_i(x) \, \xi_i(\theta),   (9)

where \bar{h}(x) denotes the mean of the random process, and \xi_i(\theta) forms a set of orthogonal random variables. Also,
\phi_i(x) and \lambda_i are the eigenfunctions and eigenvalues of the covariance function, respectively, i.e.,

    \int R_{hh}(x, y) \, \phi_i(y) \, dy = \lambda_i \phi_i(x).   (10)

If the random process h(x; \theta) is itself a Gaussian process, then the random variables \xi_i form an orthonormal Gaussian
vector.

Among many possible decompositions of a random process, the KL expansion is optimal in the sense that the
mean-square error resulting from a finite representation of the process is minimized. Its use, however, is limited, as the
covariance function of the solution process is often not known a priori. Nevertheless, the KL expansion still provides
an effective means of representing the input random processes when the covariance structure is known. In this
paper we will employ the KL procedure to represent the stochastic boundary conditions in the case of Gaussian
distributions. For non-Gaussian distributions we will employ Polynomial Chaos representations directly.
3 Polynomial Chaos Expansion for Navier-Stokes Equations
In this section we discuss the solution procedure for solving the stochastic Navier-Stokes equations by PC expansion.
The randomness in the solution can be introduced through boundary conditions, initial conditions, forcing, etc.
3.1 Governing Equations
We employ the incompressible Navier-Stokes equations

    \nabla \cdot \mathbf{u} = 0,   (11)

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\nabla \Pi + Re^{-1} \nabla^2 \mathbf{u},   (12)

where \Pi is the pressure and Re the Reynolds number. All flow quantities, i.e., velocity and pressure, are considered
stochastic processes. A random dimension, denoted by the parameter \theta, is introduced in addition to the spatial-temporal
dimensions (x, t), thus

    \mathbf{u} = \mathbf{u}(x, t; \theta), \quad \Pi = \Pi(x, t; \theta).   (13)

We then apply the Polynomial Chaos expansion (3) to these quantities to obtain

    \mathbf{u}(x, t; \theta) = \sum_{i=0}^{P} \mathbf{u}_i(x, t) \Psi_i(\xi), \quad \Pi(x, t; \theta) = \sum_{i=0}^{P} \Pi_i(x, t) \Psi_i(\xi),   (14)
where we have replaced the infinite summation in (3) by a finite-term summation. The total number of expansion
terms, (P + 1), depends on the number of random dimensions (n) and the highest order of the polynomial chaos
(p) [10]:

    P + 1 = 1 + \sum_{s=1}^{p} \frac{1}{s!} \prod_{r=0}^{s-1} (n + r),   (15)

which is equivalent to P + 1 = (n + p)!/(n! \, p!). The most important aspect of the above expansion is that all random processes have been decomposed into a set
of deterministic functions in the spatial-temporal variables multiplying random coefficients that are independent of
these variables.
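Equation (15) is straightforward to tabulate; the short sketch below (our illustration) counts the chaos terms and checks the result against the closed form (n + p)!/(n! p!):

```python
from math import comb, factorial, prod

def num_pc_terms(n: int, p: int) -> int:
    """Total number of chaos terms P + 1 for n random dimensions and order p, eq. (15)."""
    total = 1  # the constant (order-0) term
    for s in range(1, p + 1):
        # prod(n + r) over r = 0..s-1 divided by s! is the binomial C(n+s-1, s),
        # so the integer division below is exact.
        total += prod(n + r for r in range(s)) // factorial(s)
    return total

# The case used throughout this paper: n = 2, p = 4 gives P + 1 = 15 (P = 14).
assert num_pc_terms(2, 4) == comb(2 + 4, 4) == 15
```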
Substituting (14) into the Navier-Stokes equations ((11) and (12)) and noting that the partial derivatives are taken
in physical space and thus commute with the operations in random space, we obtain the following equations:

    \sum_{i=0}^{P} \nabla \cdot \mathbf{u}_i(x, t) \, \Psi_i = 0,   (16)

    \sum_{i=0}^{P} \frac{\partial \mathbf{u}_i(x, t)}{\partial t} \Psi_i + \sum_{i=0}^{P} \sum_{j=0}^{P} [(\mathbf{u}_i \cdot \nabla) \mathbf{u}_j] \, \Psi_i \Psi_j = -\sum_{i=0}^{P} \nabla \Pi_i(x, t) \, \Psi_i + Re^{-1} \sum_{i=0}^{P} \nabla^2 \mathbf{u}_i \, \Psi_i.   (17)
We then project the above equations onto the random space spanned by the basis polynomials \{\Psi_i\} by taking the
inner product with each basis function. Taking \langle \cdot, \Psi_k \rangle and utilizing the orthogonality condition (4),
we obtain the following set of equations: for each k = 0, \ldots, P,

    \nabla \cdot \mathbf{u}_k = 0,   (18)

    \frac{\partial \mathbf{u}_k}{\partial t} + \frac{1}{\langle \Psi_k^2 \rangle} \sum_{i=0}^{P} \sum_{j=0}^{P} e_{ijk} [(\mathbf{u}_i \cdot \nabla) \mathbf{u}_j] = -\nabla \Pi_k + Re^{-1} \nabla^2 \mathbf{u}_k,   (19)
where e_{ijk} = \langle \Psi_i \Psi_j \Psi_k \rangle. Together with \langle \Psi_i^2 \rangle, these coefficients can be determined analytically from the
definition of \Psi_i and equation (2). This set of equations consists of (P + 1) systems of Navier-Stokes equations, one for
each random mode, coupled through the convection term. The solution associated with the k = 0 term represents the
mean of the random solution. The solution corresponding to the first-order polynomial chaos, i.e., the k = 1, \ldots, n
terms where n is the dimension of the random space, represents the Gaussian part of the solution. The remaining
terms represent the nonlinear interactions between the mean and the Gaussian parts of the solution and are
non-Gaussian.
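The coefficients e_ijk and \langle \Psi_k^2 \rangle can be tabulated once before time stepping. The sketch below, for the one-dimensional Hermite chaos, is our illustration: the paper evaluates these analytically, while here we use Gauss-Hermite quadrature instead.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Galerkin coupling coefficients e_ijk = <Psi_i Psi_j Psi_k> for the 1D Hermite
# chaos, via Gauss-Hermite quadrature (30 points: exact for degree <= 59).
x, w = hermegauss(30)
w = w / np.sqrt(2.0 * np.pi)   # weight = standard Gaussian density

def psi(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)      # He_n(x)

P = 4
e = np.zeros((P + 1, P + 1, P + 1))
for i in range(P + 1):
    for j in range(P + 1):
        for k in range(P + 1):
            e[i, j, k] = np.sum(w * psi(i, x) * psi(j, x) * psi(k, x))

# Spot check: <Psi_1 Psi_1 Psi_2> = <xi^2 (xi^2 - 1)> = 3 - 1 = 2.
assert abs(e[1, 1, 2] - 2.0) < 1e-10
```

Note that e_{0jk} reduces to the orthogonality relation (8), which provides a further consistency check on the table.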
3.2 Numerical Formulation
3.2.1 Temporal Discretization
We employ the semi-implicit high-order fractional step method, which for the standard deterministic Navier-Stokes
equations ((11) and (12)) has the form [26]:

    \frac{\hat{\mathbf{u}} - \sum_{q=0}^{J} \alpha_q \mathbf{u}^{n-q}}{\Delta t} = -\sum_{q=0}^{J} \beta_q [(\mathbf{u} \cdot \nabla) \mathbf{u}]^{n-q},   (20)

    \frac{\hat{\hat{\mathbf{u}}} - \hat{\mathbf{u}}}{\Delta t} = -\nabla \Pi^{n+1},   (21)

    \frac{\gamma_0 \mathbf{u}^{n+1} - \hat{\hat{\mathbf{u}}}}{\Delta t} = Re^{-1} \nabla^2 \mathbf{u}^{n+1},   (22)

where J is the order of accuracy in time and \alpha_q, \beta_q and \gamma_0 are the coefficients of the integration weights. A pressure Poisson
equation is obtained by enforcing the discrete divergence-free condition \nabla \cdot \mathbf{u}^{n+1} = 0:

    \nabla^2 \Pi^{n+1} = \frac{1}{\Delta t} \nabla \cdot \hat{\mathbf{u}},   (23)

with the appropriate pressure boundary condition given as

    \frac{\partial \Pi}{\partial n} = -\mathbf{n} \cdot \left[ \frac{\partial \mathbf{u}}{\partial t} + Re^{-1} \nabla \times \boldsymbol{\omega} \right]^{n+1},   (24)

where \mathbf{n} is the outward unit normal vector and \boldsymbol{\omega} = \nabla \times \mathbf{u} is the vorticity. The method is stiffly-stable and achieves
third-order accuracy in time; the coefficients for the integration weights can be found in [23].
In order to discretize the stochastic Navier-Stokes equations, we apply the same approach to the coupled set of
equations (18) and (19): for each k = 0, \ldots, P,

    \frac{\hat{\mathbf{u}}_k - \sum_{q=0}^{J} \alpha_q \mathbf{u}_k^{n-q}}{\Delta t} = -\frac{1}{\langle \Psi_k^2 \rangle} \sum_{q=0}^{J} \beta_q \left[ \sum_{i=0}^{P} \sum_{j=0}^{P} e_{ijk} (\mathbf{u}_i \cdot \nabla) \mathbf{u}_j \right]^{n-q},   (25)

    \frac{\hat{\hat{\mathbf{u}}}_k - \hat{\mathbf{u}}_k}{\Delta t} = -\nabla \Pi_k^{n+1},   (26)

    \frac{\gamma_0 \mathbf{u}_k^{n+1} - \hat{\hat{\mathbf{u}}}_k}{\Delta t} = Re^{-1} \nabla^2 \mathbf{u}_k^{n+1}.   (27)

The discrete divergence-free condition for each mode, \nabla \cdot \mathbf{u}_k^{n+1} = 0, results in a set of consistent Poisson equations for
each pressure mode

    \nabla^2 \Pi_k^{n+1} = \frac{1}{\Delta t} \nabla \cdot \hat{\mathbf{u}}_k, \quad k = 0, \ldots, P,   (28)

with appropriate pressure boundary conditions derived similarly as in [26]:

    \frac{\partial \Pi_k}{\partial n} = -\mathbf{n} \cdot \left[ \frac{\partial \mathbf{u}_k}{\partial t} + Re^{-1} \nabla \times \boldsymbol{\omega}_k \right]^{n+1}, \quad k = 0, \ldots, P,   (29)

where \mathbf{n} is the outward unit normal vector along the boundary, and \boldsymbol{\omega}_k = \nabla \times \mathbf{u}_k is the vorticity of each random
mode.
3.2.2 Spatial Discretization
Spatial discretization can be carried out by any method, but here we employ the spectral/hp element method in
order to have better control of the numerical error [23]. In addition, the all-spectral discretization in space and
along the random direction leads to homogeneous inner products, which in turn result in more efficient ways of
inverting the algebraic systems. In particular, the spatial discretization is based on Jacobi polynomials on triangles
or quadrilaterals in two dimensions, and on tetrahedra, hexahedra or prisms in three dimensions.
3.3 Post-Processing
The coefficients in the expansion of the solution process (14) are obtained after solving equations (25) to (29). We
then obtain the analytical form (in random space) of the solution process. It is possible to perform a number of
analytical operations on the stochastic solution in order to carry out a sensitivity analysis. Specifically, the mean
solution is contained in the expansion term with index zero. The second moment, i.e., the covariance function, is
given by

    R_{uu}(x_1, t_1; x_2, t_2) = \langle (\mathbf{u}(x_1, t_1) - \bar{\mathbf{u}}(x_1, t_1)), (\mathbf{u}(x_2, t_2) - \bar{\mathbf{u}}(x_2, t_2)) \rangle
                               = \sum_{i=1}^{P} \left[ \mathbf{u}_i(x_1, t_1) \mathbf{u}_i(x_2, t_2) \langle \Psi_i^2 \rangle \right].   (30)

Note that the summation starts from index (i = 1) instead of 0 to exclude the mean, and that the orthogonality
of the PC basis \{\Psi_i\} has been used in deriving the above equation. The variance of the solution, also called the
`mean-square' value, is obtained as

    Var(\mathbf{u}(x, t)) = \left\langle \left( \mathbf{u}(x, t) - \bar{\mathbf{u}}(x, t) \right)^2 \right\rangle = \sum_{i=1}^{P} \left[ \mathbf{u}_i^2(x, t) \langle \Psi_i^2 \rangle \right],   (31)

and the root-mean-square (rms) is simply the square root of the variance. Similar expressions can be obtained for
the pressure field.
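For a single random dimension, this post-processing reduces to a few lines; the sketch below (our illustration, not part of the paper) applies (31) with \langle \Psi_i^2 \rangle = i!:

```python
import math
import numpy as np

# Mean and variance of a 1D Hermite-chaos expansion u(xi) = sum_k u_k Psi_k(xi),
# using (31) with <Psi_k^2> = k!.
def pc_mean_var(coeffs):
    c = np.asarray(coeffs, dtype=float)
    norms = np.array([math.factorial(k) for k in range(len(c))])
    mean = c[0]                             # the k = 0 mode is the mean
    var = np.sum(c[1:] ** 2 * norms[1:])    # sum starts at k = 1, excluding the mean
    return mean, var

# Example: u = 2 + 3*Psi_1 + 1*Psi_2  ->  mean 2, variance 9*1! + 1*2! = 11.
mean, var = pc_mean_var([2.0, 3.0, 1.0])
assert mean == 2.0 and abs(var - 11.0) < 1e-12
```

The rms follows as the square root of `var`, exactly as stated in the text.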
4 Stochastic Flow Simulations
In this section we present numerical results of the Polynomial Chaos solution to the Navier-Stokes equations. We first
consider a micro-channel flow where there is uncertainty associated with the wall boundary conditions. Subsequently,
we simulate a laminar flow past a circular cylinder with uncertain freestream. We model the uncertainty in the
boundary conditions both as Gaussian and as lognormal distributions in order to evaluate the convergence of the PC
method. In both cases we employ a two-dimensional Polynomial Chaos expansion (n = 2) with polynomial order up
to p = 4; these polynomials are listed in table 1.
    Index k   Order p   Polynomial Chaos \Psi_k
    0         p = 0     1
    1         p = 1     \xi_1
    2                   \xi_2
    3         p = 2     \xi_1^2 - 1
    4                   \xi_1 \xi_2
    5                   \xi_2^2 - 1
    6         p = 3     \xi_1^3 - 3\xi_1
    7                   \xi_1^2 \xi_2 - \xi_2
    8                   \xi_1 \xi_2^2 - \xi_1
    9                   \xi_2^3 - 3\xi_2
    10        p = 4     \xi_1^4 - 6\xi_1^2 + 3
    11                  \xi_1^3 \xi_2 - 3\xi_1 \xi_2
    12                  \xi_1^2 \xi_2^2 - \xi_1^2 - \xi_2^2 + 1
    13                  \xi_1 \xi_2^3 - 3\xi_1 \xi_2
    14                  \xi_2^4 - 6\xi_2^2 + 3

Table 1: Polynomial terms for two-dimensional Polynomial Chaos.
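The entries of table 1 are products of one-dimensional Hermite polynomials ordered by total degree; the following sketch (our illustration) reproduces the table programmatically:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Two-dimensional chaos basis Phi_k = He_a(xi1) * He_b(xi2), with the
# multi-indices (a, b) ordered by total degree a + b as in Table 1.
def multi_indices(p_max):
    idx = []
    for p in range(p_max + 1):
        for b in range(p + 1):        # within each order, the xi_1 power descends
            idx.append((p - b, b))
    return idx

def he(n, x):
    """Probabilists' Hermite polynomial He_n."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)

def phi(k, xi1, xi2, idx=multi_indices(4)):
    a, b = idx[k]
    return he(a, xi1) * he(b, xi2)

assert len(multi_indices(4)) == 15    # P + 1 terms for n = 2, p = 4
# Psi_12 should equal xi1^2 xi2^2 - xi1^2 - xi2^2 + 1 = (xi1^2 - 1)(xi2^2 - 1).
x1, x2 = 0.7, -1.3
assert abs(phi(12, x1, x2) - (x1**2 - 1) * (x2**2 - 1)) < 1e-12
```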
4.1 Micro-channel Flow: Gaussian Input
We consider a micro-channel flow as shown in figure 1. In micro-flows the no-slip boundary condition is not always
valid and appropriate slip boundary conditions should be used [21]. For gas flows, Maxwell's boundary condition on
velocity slip or its higher-order variants are appropriate, at least in the slip flow regime. However, for liquids there
are no existing models for the molecular layering observed in molecular dynamics simulations [27]. Also, the effect of
roughness further complicates the question of the validity of the no-slip boundary condition. Here we model such uncertainty
at the boundary by assuming that

    u = 0 + r,

where r is a random variable.
Figure 1: Schematic of the domain for micro-channel pressure-driven flow.
The domain (see figure 1) has dimensions such that y \in [-1, 1] and x \in [-5, 5]. The pressure gradient, acting
as a driving force, equals twice the kinematic viscosity, and thus for a no-slip condition a parabolic profile is a
solution with centerline velocity equal to unity. Assuming that the two walls move with constant velocities
u_1 and u_2, the exact solution is

    u(x, y) = (1 - y^2) + \frac{1 - y}{2} u_1 + \frac{1 + y}{2} u_2,   (32)

so it is a superposition of the Poiseuille and two Couette-type flows.
4.1.1 Fully-Correlated Random Boundary Conditions
Let us now assume that the boundary conditions are random and spatially uniform, i.e.,

    u_1 = \sigma_1 \xi_1 \quad and \quad u_2 = \sigma_2 \xi_2,

where \xi_1 and \xi_2 are two independent standard Gaussian random variables with zero mean and unit variance, and
\sigma_1, \sigma_2 are the standard deviations of u_1 and u_2, respectively. In this case, the exact solution of equation (32) still
applies and thus

    u(x, y) = (1 - y^2) + \frac{1 - y}{2} \sigma_1 \xi_1 + \frac{1 + y}{2} \sigma_2 \xi_2.   (33)

It can be seen from table 1 that the solution consists of the parabolic mean u_0 and only the first two random modes,
u_1 and u_2, linearly distributed across the channel. We will use this solution to validate the stochastic Navier-Stokes
solver.
Figure 2: Solution profile across the channel with uniform Gaussian boundary conditions.
Figure 2 shows the solution profile across the channel. The solution is obtained by setting \sigma_1 = 0.02 and
\sigma_2 = 0.01. A two-dimensional PC expansion is used (n = 2) with Polynomial Chaos of order p = 1. The numerical
results correctly reproduce the distribution profiles of the mean and of the first and second random modes. If p > 1 is
employed, all higher-order random modes are identically zero.
4.1.2 Partially-Correlated Random Boundary Conditions
Next we consider the case of non-uniform random boundary conditions, i.e. the boundary points are only partially
correlated and thus the random contribution to the boundary condition varies along the walls. The boundary
conditions are assumed to be Gaussian random processes with a correlation function of the form

    C(x_1, x_2) = e^{-|x_1 - x_2|/b},   (34)

where b is the correlation length. This correlation function has been employed extensively to model processes in a
variety of fields and its use can be justified on theoretical grounds [28].
Given the exponential form of the correlation function, the Karhunen-Loeve expansion of equation (9) can be
used to decompose the input random process. The corresponding eigenvalue problem of equation (10) can be solved
analytically. The eigenvalues are

    \lambda_n = \frac{2b}{1 + b^2 \omega_n^2}, \quad n = 1, 2, \ldots   (35)

and the eigenfunctions are

    f_n(x) = \frac{\cos(\omega_n x)}{\sqrt{a + \frac{\sin(2\omega_n a)}{2\omega_n}}} \quad if n is odd, \qquad f_n(x) = \frac{\sin(\omega_n x)}{\sqrt{a - \frac{\sin(2\omega_n a)}{2\omega_n}}} \quad if n is even,   (36)

where [-a, a] is the size of the domain, and \omega_n is determined by

    \frac{1}{b} - \omega_n \tan(\omega_n a) = 0 \quad if n is odd, \qquad \omega_n + \frac{1}{b} \tan(\omega_n a) = 0 \quad if n is even.

The details of the derivation can be found in [10]. If the correlation function is not of the exponential form (34)
and the eigenvalue problem cannot be solved analytically, the correlation matrix can be constructed
numerically and a standard eigenvalue solver can be employed.
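The transcendental equations above are easily solved by bracketing each root; the sketch below (our illustration) recovers the eigenvalues quoted later in the text for a = 5, b = 100:

```python
import math

# Eigenvalues (35) of the exponential covariance: omega_n solves the
# transcendental equations for odd/even n; bracketing plus bisection suffices.
a, b = 5.0, 100.0

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection; assumes f changes sign exactly once on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# n odd:  1/b - w tan(w a) = 0, first root in (0, pi/(2a))
w1 = bisect(lambda w: 1.0 / b - w * math.tan(w * a), 1e-9, math.pi / (2 * a) - 1e-9)
# n even: w + (1/b) tan(w a) = 0, first root in (pi/(2a), pi/a)
w2 = bisect(lambda w: w + math.tan(w * a) / b, math.pi / (2 * a) + 1e-9, math.pi / a - 1e-9)

lam = [2.0 * b / (1.0 + (b * w) ** 2) for w in (w1, w2)]
```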
Figure 3: Deviation of the mean solution from a parabolic profile in micro-channel flow with partially-correlated random
boundary conditions at the lower wall; Upper: u-velocity, Lower: v-velocity.
Here a = 5 and Re = 100; the correlation length in (34) is set to b = 100. The eigenvalues from equation
(35) are

    \lambda_1 = 9.675354, \quad \lambda_2 = 0.1946362, \quad \lambda_3 = 0.05014117, \ldots

Due to the fast decay of the eigenvalues, we use the first two terms in the Karhunen-Loeve expansion (9). A fourth-order
Polynomial Chaos is used (p = 4) and fifteen expansion terms (P = 14) are needed. We also assume that
only the lower wall has the non-uniform random boundary condition; the upper wall is set to be
stationary and deterministic. The variance of the Gaussian process at the lower wall is \sigma^2 = 0.1^2. A mesh with 10 x 2
elements is employed and ninth-order Jacobi polynomials are used as the basis in each element. A parabolic
(no-slip) profile is used at the inlet and fully-developed conditions are assumed at the outlet.
Figure 4: Contours of rms of u-velocity (upper) and v-velocity (lower).
Figure 3 shows contours of the deviation of the mean solution at steady state from a parabolic
profile. The mean of the u-velocity remains close to the parabolic shape, and the mean of the v-velocity, although small in
magnitude, is non-zero, in contrast to the case where the boundary conditions are uniformly random or deterministic.
Figure 4 shows steady-state solutions of the rms (root-mean-square) of the u- and v-velocity.
It is clear that there is a developing region in the form of a boundary layer emanating from the inlet. Unlike
the fully-correlated case, here all the higher-order expansion terms are non-zero, which implies that although the
random input is a Gaussian process, the solution output is not Gaussian. Since there is no analytic solution for
this flow with non-uniform random boundary conditions, we employ Monte Carlo (MC) simulation to validate the
Polynomial Chaos solution. The deterministic Navier-Stokes solver is used to compute the flow with
each realization of the Gaussian process from the exponential correlation function (34) as the boundary condition at
the lower wall. The MC solution is then obtained by gathering the statistics from the large ensemble of solutions
from such realizations. A brief discussion of the generation of spatially correlated Gaussian fields can be found in
Appendix A. For the comparison, we plot the non-dimensionalized variance of the velocities, u_ms and v_ms, along the
centerline of the channel. It is defined as Var(u)/\sigma^2, where Var(u) is defined in equation (31) and \sigma^2 is the variance
of the input Gaussian field.
Figure 5: Variance of velocities along the centerline of the channel; Left: u-velocity, Right: v-velocity.

Figure 5 shows the solution of u_ms and v_ms along the centerline from MC simulation with 100, 500, 1000 and
2000 realizations, together with the Polynomial Chaos (PC) solution. It is seen that the MC result converges as
the number of realizations increases and the solution converges to the PC solution. The advantage of the Polynomial
Chaos expansion is evident here, since the PC solution, with enough terms included for accurate approximation of
the randomness, is equivalent to the MC solution with an infinite number of realizations. While each single run of
the PC solver is more expensive than a single deterministic run, in this case about 15 times more expensive as there
are 15 terms in the expansion, it is substantially faster than the Monte Carlo solution, which requires thousands of
realizations in order to obtain solution statistics of comparable accuracy to the PC solution. Note that here we
employ the standard Monte Carlo simulation without any acceleration algorithms, such as variance reduction [29].
4.2 Micro-channel Flow: Non-Gaussian Input
In this section we revisit the pressure-driven micro-channel flow discussed above, but with non-Gaussian random
boundary conditions. Non-Gaussian distributions are not uncommon in micro-fluidics, where reactive ion etching
techniques cause anisotropies and preferential directions. The Polynomial Chaos expansion procedure can also be
applied to non-Gaussian processes. The particular non-Gaussian input we consider here corresponds to a lognormal
distribution. This process has been considered by Ghanem in [7] and [8]. It is chosen because its projection onto the
Polynomial Chaos basis can be obtained analytically. If this is not possible, as for most non-Gaussian processes, a
numerical projection procedure can be applied instead.
The lognormal process l is defined as

    l(x) = e^{g(x)},   (37)

where g is a Gaussian process. We first apply the Karhunen-Loeve expansion of equation (9) to decompose the
Gaussian process g(x) as follows:

    g(x) = \bar{g}(x) + \sum_{i=1}^{n} g_i(x) \xi_i,   (38)

where g_i(x) = \sqrt{\lambda_i} \phi_i(x) according to equation (9). We then expand the lognormal process l(x) in the Polynomial
Chaos:

    l(x) = \sum_{i=0}^{P} l_i(x) \Psi_i(\xi).   (39)

The expansion coefficients can be obtained by projecting the lognormal process onto the PC basis:

    l_i(x) = \frac{\langle l(x) \Psi_i \rangle}{\langle \Psi_i^2 \rangle} = \frac{\langle e^{g(x)} \Psi_i \rangle}{\langle \Psi_i^2 \rangle}.   (40)

By using the Karhunen-Loeve expansion (38) of g(x), the above projection can be obtained analytically:

    l_i(x) = \frac{\langle \eta_i \rangle}{\langle \Psi_i^2 \rangle} \exp\left( \bar{g}(x) + \frac{1}{2} \sum_{j=1}^{n} g_j^2 \right),   (41)

where the terms \langle \eta_i \rangle are listed below:

    \langle \eta_i \rangle = g_i                when \Psi_i = \xi_i;
                  g_i g_j            when \Psi_i = \xi_i \xi_j - \delta_{ij};
                  g_i g_j g_k        when \Psi_i = \xi_i \xi_j \xi_k - \xi_i \delta_{jk} - \xi_j \delta_{ki} - \xi_k \delta_{ij};
                  \ldots
In case the process g reduces to a single random variable, the expansion of the lognormal variable simplifies to

    l = \exp\left( \bar{g} + \frac{\sigma_g^2}{2} \right) \sum_{j=0}^{P} \frac{\sigma_g^j}{j!} \Psi_j,   (42)

where \sigma_g^2 is the variance of the corresponding Gaussian variable g. A more detailed discussion of the lognormal
distribution and its Polynomial Chaos approximations can be found in Appendix B.
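As a check of (42) in the single-variable case, the truncated series can be compared pointwise with exp(g); the sketch below is our illustration:

```python
import numpy as np
from math import exp, factorial
from numpy.polynomial.hermite_e import hermeval

# Verify (42): exp(gbar + sigma*xi) = exp(gbar + sigma^2/2) * sum_j (sigma^j / j!) Psi_j(xi),
# truncated at order P, with Psi_j the probabilists' Hermite polynomials.
gbar, sigma, P = 0.0, 0.5, 10

def psi(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)

xi = np.linspace(-3.0, 3.0, 61)
series = exp(gbar + sigma ** 2 / 2.0) * sum(
    sigma ** j / factorial(j) * psi(j, xi) for j in range(P + 1))
assert np.allclose(series, np.exp(gbar + sigma * xi), atol=1e-6)
```

For this moderate sigma the series converges rapidly, consistent with the observation in the text that a fourth-order expansion already suffices in the simulations.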
4.2.1 Fully-Correlated Random Boundary Conditions
We first consider the case where the boundary conditions at the walls are random but spatially uniform, i.e. all
grid points at the boundary are fully correlated. The exact solution of equation (32) then applies. By substituting
the expansion of the lognormal random variable (42) into that formula, we find that the exact solution consists of the
mean u_0 and the random modes corresponding to the inputs from the two walls, \xi_1 and \xi_2, linearly distributed across
the channel. There is no interaction between the two random walls, so the cross terms, i.e., terms associated with
\xi_1^i \xi_2^j (i, j \neq 0), remain zero in the expansion.
Figure 6 shows the velocity modes across the channel. The standard deviations of the underlying Gaussian variables at the lower and upper walls are $\sigma_1 = 0.2$ and $\sigma_2 = 0.1$, respectively. Fourth-order Polynomial Chaos is employed ($p = 4$) and the total number of expansion terms is 15 ($P = 14$). Only the nonzero terms, i.e., terms
Figure 6: Distribution of velocity random modes across the channel; Left: first- and second-order (non-zero) random modes, Right: third- and fourth-order (non-zero) random modes.
associated with $\xi_1$ or $\xi_2$ only, are plotted (see table 1 for the indices of these terms). We see the linear distribution of the different random modes across the channel. All the cross terms, i.e., terms of the form $\xi_1^i\xi_2^j$ ($i, j \neq 0$), are zero as expected. The highest-order terms, $u_{10}$ and $u_{14}$, which are associated with $\xi_1^4$ and $\xi_2^4$, respectively, are of the order of $10^{-6}$. This implies that the fourth-order PC expansion is adequate in this case.
4.2.2 Partially-Correlated Random Boundary Conditions
Next we consider spatially non-uniform input. The wall boundary condition is treated as a lognormal random process as in equation (37). The underlying Gaussian process $g(x)$ is exactly the same as the one in section 4.1.2, and the same two-term Karhunen-Loeve expansion is applied to decompose $g(x)$. Monte Carlo simulation is conducted since no exact formula is known. Again only the lower wall is considered to be random, and the standard deviation of the underlying Gaussian process is $\sigma = 0.5$. The non-dimensionalized variance of the velocities, defined similarly as in section 4.1.2, is plotted along the centerline of the channel in figure 7.
Monte Carlo solutions corresponding to 100, 500 and 1,000 realizations are plotted, together with the solution from the Polynomial Chaos expansion. It can be seen that as the number of realizations increases, the MC solution converges to the PC solution. The convergence rate is relatively slow due to the large variance of the random input.
4.3 Flow Past a Circular Cylinder
In this section we simulate two-dimensional incompressible flow past a circular cylinder with random correlated fluctuations superimposed on the freestream. More specifically, the inflow takes the form

u_{in} = \bar{u} + g,
Figure 7: Variance of velocities along the centerline of the channel; Left: u-velocity, Right: v-velocity.
where $g$ is a Gaussian random variable or process. The size of the computational domain is $[-15, 25] \times [-9, 9]$, and the cylinder is at the origin with diameter $D = 1$. The Reynolds number is $Re = 100$, based on the mean value of the inflow velocity $\bar{u}$. The domain consists of 412 triangular elements, and sixth-order Jacobi polynomials are used in each element for the spatial discretization.
4.3.1 Fully-Correlated Gaussian Freestream
In this case $u_{in} = \bar{u} + g$ and $g = \sigma\xi$ is a Gaussian random variable, where $\sigma = 0.01$ is its standard deviation. This results in a one-dimensional PC expansion, and we employ fourth-order PC in the simulation. Figure 8 shows the history of the pressure signal at the rear stagnation point on the cylinder surface. The signal of the corresponding deterministic simulation is also plotted, denoted as PD in dotted line, for reference purposes. It is seen that the amplitude of the mean pressure signal is smaller compared to the deterministic signal due to diffusion induced by the randomness. The higher modes, which describe the stochastic component of the solution, all start from zero and then develop gradually from P1 to P4 over time.
A stationary periodic state is reached after $t = 350$ convective time units. Because only the mean and the first random mode (Gaussian part) have non-zero boundary inputs, the development of non-zero values in the higher random modes is due to the interactions of the random modes through the nonlinear convective terms in equation (19). Figure 9 shows the instantaneous vorticity distribution from the deterministic simulation and the mean solution from the stochastic simulation at the same time $t = 370$; both simulations started with identical initial conditions corresponding to a converged periodic state at $Re = 100$. Both plots use exactly the same contour levels so that the randomness-induced diffusion in the stochastic solution can be identified in the wake of the cylinder. In figure 10 a snapshot of the rms of vorticity for the stochastic simulation is shown at the same time $t = 370$.
Figure 8: Pressure signal of cylinder flow with uniform random inflow: $\sigma = 0.01$. Upper: High modes. Lower: Zero mode (mean).
It should be noted that this problem, although still relatively simple, is computationally much more complex than the channel problem, and the Monte Carlo approach is prohibitively expensive.
4.3.2 Partially-Correlated Gaussian Freestream
We now consider the more realistic case of a partially-correlated freestream velocity. We describe this partial correlation by the exponential covariance kernel of equation (34) with variance $\sigma^2 = 0.02^2$. A relatively large correlation length is chosen ($b = 100$) such that the first two eigenmodes are adequate to represent the process by the Karhunen-Loeve expansion (9). Thus, we employ a two-dimensional PC expansion with third-order polynomials ($p = 3$) (see table 1 for the relevant terms).
Figure 11 shows the pressure signal, together with the deterministic signal for reference (denoted as PD in dotted line). We see that the stochastic mean pressure signal has a smaller amplitude and is out of phase with respect to the deterministic signal. Although initially the stochastic response follows the deterministic response, eventually there is a change in the Strouhal frequency, as shown in figure 12. Specifically, the Strouhal frequency of the mean stochastic solution is lower than the deterministic one due to the effective lowering of the Reynolds number induced by randomness. We now examine the response of the higher random modes. The first random mode $\xi_1$ is dominant and
Figure 9: Instantaneous vorticity contours: Upper - Deterministic solution with uniform inflow; Lower - Mean solution with uniform Gaussian random inflow.
Figure 10: Contours of rms of vorticity field with uniform Gaussian random inflow.
$\xi_2$ is subdominant. Correspondingly, in the higher modes only the terms associated with $\xi_1$, i.e., $P_3 \sim \xi_1^2$ and $P_6 \sim \xi_1^3$, are active (see table 1). These high modes are also plotted in figure 11 (upper plot).
In figure 13 we present velocity profiles along the centerline for the deterministic and the mean stochastic solution at the same time instant. We see that significant quantitative differences emerge even with a relatively small 2% uncertainty in the freestream. In figure 14 we plot instantaneous vorticity contours for the mean of the vorticity and compare them with the corresponding plot from the deterministic simulation. There is clearly a strong diffusive effect induced by the randomness. In figure 15 we plot contours of the corresponding rms of vorticity. It shows that the uncertainty influences the most interesting region of the flow, i.e., the shear layers and the vortex street, and not the far field.
Figure 11: Pressure signal of cylinder flow with non-uniform Gaussian random inflow. Upper: High modes. Lower: Zero mode (mean).
5 Summary and Discussion
We have developed a stochastic spectral method to model the uncertainty associated with boundary conditions in simulations of incompressible flows. This method can also be applied to model uncertainty in the boundary domain, e.g. a rough surface, and also to model the uncertainty associated with transport coefficients, e.g. the eddy viscosity in large eddy simulations or other transport models. It sets the foundation for a composite error bar in CFD [30] that includes, in addition to the numerical error, contributions due to imprecise input to the simulation. Clearly, this uncertainty is propagated nonlinearly, and the new method quantifies statistically the uncertainty in the solution.
More specifically, we have employed Wiener-Hermite polynomials to represent the stochastic solution and in
Figure 12: Frequency spectrum for the deterministic (high peak) and stochastic simulation (low peak).
Figure 13: Instantaneous profiles of the two velocity components along the centerline for the deterministic and the mean stochastic solution.
Figure 14: Instantaneous vorticity field: Upper - Deterministic solution with uniform inflow; Lower - Mean solution with non-uniform Gaussian random inflow.
Figure 15: Instantaneous contours of rms of vorticity field with non-uniform Gaussian random inflow.
particular to discretize the random dimension. We have verified in [31] that exponential convergence is obtained for stochastic ordinary differential equations, for which exact solutions are available. On the other hand, it has been reported in the literature that Wiener-Hermite expansions may converge more slowly for certain distribution functions, e.g. for Poisson processes [32]. This, however, can be rectified by employing a more suitable spectral basis instead of the Hermite polynomials. For example, in [32] multivariate Charlier polynomials were found to best represent Poisson processes. This can be extended to many different known distributions by employing a basis from the general family of Askey polynomials [22], which includes polynomials both for continuous as well as discrete processes. For example,
• Laguerre polynomials are associated with the Gamma distribution,
• Meixner polynomials with the Pascal distribution,
• Krawtchouk polynomials with the Binomial distribution,
• Jacobi polynomials with the Beta kernel, and
• Hahn polynomials with the Hypergeometric distribution.
For an arbitrary distribution any Wiener-Askey expansion would converge, in accord with the Cameron & Martin theorem [24]; however, certain representations converge faster than others, similarly to deterministic spatial expansions.
As regards efficiency, a single Polynomial Chaos based simulation, albeit computationally more expensive than the deterministic Navier-Stokes solver, is able to generate solution statistics equivalent to those of the Monte Carlo simulation. In the latter, thousands of realizations are required for converged statistics, which is prohibitively expensive for most CFD problems in practice. For example, obtaining converged statistics with the standard Monte Carlo approach for the two-dimensional flow past a cylinder presented in section 4.3 would currently require more than a year of computation on a standard workstation, compared to about six days for the Polynomial Chaos simulation. Of course a reduced-variance version could be employed, and the Monte Carlo simulation is embarrassingly parallel, so it can be accelerated greatly. However, the Polynomial Chaos based stochastic simulation can also be done in parallel. One possible strategy is to distribute one random mode per processor, as the bulk of the work is parallel due to linearity, and transpose the data to perform the nonlinear products in a separate step. Even with a modest parallel efficiency, the aforementioned computation would be completed in less than a day on a fifteen-processor computer.
A Generation of Correlated Gaussian Random Fields
While numerous routines are available for generating independent Gaussian random variables, it is often necessary to generate a correlated Gaussian field (vector) with a given correlation structure. In general, this can be performed by means of: (1) spectral representation; (2) ARMA (Auto-Regressive Moving Average) modeling; and (3) covariance matrix decomposition procedures. There are two approaches for covariance matrix decomposition: the Cholesky decomposition method and the modal decomposition method. In this section we briefly review the Cholesky decomposition method [33] and the first-order ARMA method.
A.1 The Cholesky Decomposition Method
Without loss of generality, we consider the normal random vector $x = (x_1, x_2, \dots, x_n)$ with zero mean, i.e., each $x_i$ ($i = 1, \dots, n$) is a Gaussian random variable with zero mean. Let us denote its covariance matrix as

C = \begin{bmatrix} \sigma_{11} & \cdots & \sigma_{1n} \\ \vdots & \ddots & \vdots \\ \sigma_{n1} & \cdots & \sigma_{nn} \end{bmatrix}.    (43)

We then call $x$ the normal random vector $N(0, C)$.
Figure 16: Correlated Gaussian random field with exponential correlation structure (correlation length is non-dimensionalized by the size of the domain).
Let $y$ be distributed $N(0, I_n)$, where $I_n$ is the unit matrix of size $n$, i.e., $y$ is an independent (uncorrelated) Gaussian random field. Let also $x = Ay$; then $x$ is distributed $N(0, AA^T)$, where $A^T$ is the transpose of $A$. This result is a special case of a more general theorem by Anderson [34].
The problem of generating a Gaussian random field with given correlation structure $C$ is then transformed to the problem of finding a matrix $A$ such that $AA^T = C$. Since the matrix $C$ is real and symmetric, this can be readily done by Cholesky decomposition, where $A$ takes the form of a lower-triangular matrix:
\begin{aligned}
a_{i1} &= \sigma_{i1}/\sqrt{\sigma_{11}}, & 1 \le i \le n; \\
a_{ii} &= \sqrt{\sigma_{ii} - \textstyle\sum_{k=1}^{i-1} a_{ik}^2}, & 1 < i \le n; \\
a_{ij} &= \Big[ \sigma_{ij} - \textstyle\sum_{k=1}^{j-1} a_{ik}a_{jk} \Big] \Big/ a_{jj}, & 1 < j < i \le n; \\
a_{ij} &= 0, & i < j \le n.
\end{aligned}    (44)
Once the $a_{ij}$'s have been determined according to the given $\sigma_{ij}$'s, we need to generate an $n$-dimensional independent Gaussian vector $y$ and subsequently perform the transformation $x = Ay$. The resulting Gaussian random vector $x$ will have the desired correlation structure $C = \{\sigma_{ij}\}$.
Figure 16 shows the correlated Gaussian field with the exponential correlation function defined in (34) and unit variance. The correlation length is non-dimensionalized by the size of the domain. The two results, with correlation lengths of 1 and 10, are both obtained by applying the above Cholesky decomposition method to the same independent Gaussian field, also shown in the figure. We see that the original field is uncorrelated, and as the correlation length increases the correlation structure of the field becomes stronger.
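The procedure above can be sketched in a few lines of Python/NumPy for the exponential covariance of equation (34); this is an illustrative sketch (function name ours) that calls the library Cholesky routine instead of spelling out the recursion (44):

```python
import numpy as np

def correlated_gaussian_field(n, b, rng):
    """Zero-mean Gaussian vector with exponential covariance C_ij = exp(-|i-j|/b),
    generated as x = A y with A A^T = C (covariance matrix decomposition)."""
    idx = np.arange(n)
    C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / b)
    A = np.linalg.cholesky(C)        # lower-triangular Cholesky factor, cf. eq. (44)
    y = rng.standard_normal(n)       # independent N(0, 1) vector
    return A @ y

rng = np.random.default_rng(0)
x = correlated_gaussian_field(100, 10.0, rng)   # one realization, correlation length 10
```

The cost is dominated by the $O(n^3)$ factorization, which needs to be done only once; each additional realization requires only a fresh vector $y$ and the matrix-vector product $Ay$.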
A.2 First-Order ARMA Process Method
Another way of generating a correlated Gaussian field is based on the first-order auto-regressive (AR(1)) process, also called the Markov process. This method works specifically for the exponential covariance function (34)

C(x_1, x_2) = e^{-|x_1 - x_2|/b},    (45)
where $b$ is the correlation length. The stationary AR(1) process is defined as

x_i = \rho\, x_{i-1} + \sigma_y\, y_i, \qquad |\rho| < 1,    (46)

where $\{y_i\}$ is a set of independent Gaussian variables with zero mean and unit variance. It can be shown that the stationary AR(1) process has the following properties [35]:

E(x_i) = 0,    (47)

Var(x_i) = \frac{\sigma_y^2}{1 - \rho^2},    (48)

C(x_i, x_{i-k}) = \rho^{|k|}, \qquad k = 0, \pm 1, \pm 2, \dots    (49)
It is then straightforward to show that by choosing the parameters

\rho = e^{-1/b}, \qquad \sigma_y = \sqrt{1 - \rho^2},    (50)

the resulting random vector $x = \{x_i\}$ is a Gaussian random field with the exponential covariance kernel (equation (45)).
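A minimal sketch of the AR(1) recipe (again Python/NumPy, function name ours):

```python
import numpy as np

def ar1_field(n, b, rng):
    """Stationary AR(1) (Markov) field with covariance exp(-|i-j|/b), eqs. (45)-(50)."""
    rho = np.exp(-1.0 / b)              # eq. (50)
    sigma_y = np.sqrt(1.0 - rho**2)     # makes the stationary variance equal to 1
    x = np.empty(n)
    x[0] = rng.standard_normal()        # start in the stationary distribution N(0, 1)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + sigma_y * rng.standard_normal()   # eq. (46)
    return x
```

Unlike the Cholesky approach, no $n \times n$ matrix is formed and the cost per realization is $O(n)$, but the method is tied specifically to the exponential kernel (45).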
B Lognormal Distribution and its Approximations
While the normal distribution arises from the sum of many small effects according to the Central Limit Theorem, the lognormal distribution arises as the result of a multiplicative mechanism. Let $X$ denote a Gaussian random variable; the lognormal random variable is obtained by taking the exponential of $X$,

Y = e^X, \qquad X = \ln Y.    (51)

A random variable $Y$ whose logarithm is normally distributed is said to have the lognormal distribution.
The PDF (Probability Density Function) of the Gaussian variable $X$ is:

f_X(x) = \frac{1}{\sigma_X\sqrt{2\pi}} \exp\!\left[ -\frac{1}{2}\left( \frac{x - m_X}{\sigma_X} \right)^{\!2} \right], \qquad -\infty < x < \infty,    (52)

where $m_X$ and $\sigma_X^2$ are the mean and variance of the Gaussian random variable $X$, respectively.
Equation (51) defines a one-to-one monotonic transformation. It is well known that when such a transformation of the form $Y = g(X)$ exists, the PDF of the new random variable $Y$ is:

f_Y(y) = \left| \frac{dg^{-1}(y)}{dy} \right| f_X(g^{-1}(y)).    (53)
Therefore, upon substitution, the PDF of the lognormal distribution is:

f_Y(y) = \frac{1}{y\sigma_X\sqrt{2\pi}} \exp\!\left[ -\frac{1}{2}\left( \frac{\ln y - m_X}{\sigma_X} \right)^{\!2} \right], \qquad y \ge 0.    (54)
The moments of the lognormal distribution can be calculated as follows [36]:

\langle Y^r \rangle = e^{r m_X}\, e^{\frac{1}{2} r^2 \sigma_X^2}, \qquad r = 1, 2, \dots    (55)
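As a sanity check of (54) and (55), the moment integrals of the lognormal PDF can be evaluated by simple trapezoidal quadrature; the parameter values below are ours, chosen only for illustration:

```python
import numpy as np

def lognormal_pdf(y, m_X, s_X):
    """Lognormal PDF, eq. (54)."""
    return np.exp(-0.5 * ((np.log(y) - m_X) / s_X) ** 2) / (y * s_X * np.sqrt(2.0 * np.pi))

# Verify <Y^r> = exp(r m_X + r^2 s_X^2 / 2), eq. (55), by trapezoidal quadrature.
m_X, s_X = 0.0, 0.3
y = np.linspace(1e-8, 20.0, 400001)       # the tail beyond y = 20 is negligible here
f = lognormal_pdf(y, m_X, s_X)
dy = np.diff(y)
moments = [float(np.sum(0.5 * dy * ((y**r * f)[1:] + (y**r * f)[:-1]))) for r in (1, 2, 3)]
```

For $m_X = 0$ and $\sigma_X = 0.3$ the first three computed moments match $e^{r^2\sigma_X^2/2}$ to several digits.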
The Polynomial Chaos expansion of the lognormal random variable was given in equation (42); in the present notation,

Y = \exp\!\left( m_X + \frac{\sigma_X^2}{2} \right) \sum_{j=0}^{P} \frac{\sigma_X^j}{j!}\,\Psi_j.    (56)
Upon substituting the one-dimensional Polynomial Chaos (equation (7)) we obtain the first few terms of the approximation:

Y = c\left[ 1 + \sigma_X\xi + \frac{\sigma_X^2}{2!}(\xi^2 - 1) + \frac{\sigma_X^3}{3!}(\xi^3 - 3\xi) + \cdots \right], \qquad c = e^{m_X + \frac{\sigma_X^2}{2}}.    (57)
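The accuracy of the truncation can be quantified from the coefficients alone: since $\langle\Psi_j\rangle = 0$ for $j \ge 1$ and $\langle\Psi_j^2\rangle = j!$, the mean of the truncated expansion (56) is exactly $c$, while its variance is $c^2\sum_{j=1}^{P}\sigma_X^{2j}/j!$, which converges factorially fast to the exact lognormal variance $c^2(e^{\sigma_X^2}-1)$. A short sketch (our own code, not from the paper):

```python
from math import factorial, exp

def truncated_variance(m_X, s_X, P):
    """Variance of the order-P Hermite-chaos expansion (56):
    Var = c^2 * sum_{j=1}^{P} s_X^(2j) / j!, using <Psi_j^2> = j!."""
    c = exp(m_X + s_X**2 / 2.0)
    return c**2 * sum(s_X**(2 * j) / factorial(j) for j in range(1, P + 1))

m_X, s_X = 0.0, 0.3                                       # illustrative values
exact_var = exp(2 * m_X + s_X**2) * (exp(s_X**2) - 1.0)   # exact lognormal variance
errors = [abs(truncated_variance(m_X, s_X, P) - exact_var) for P in range(1, 6)]
```

For $\sigma_X = 0.3$ the variance error drops below $10^{-8}$ by $P = 5$, consistent with the observation in section 4.2.1 that a fourth-order expansion is adequate.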
This Taylor-expansion-like formula approximates the lognormal distribution by a Gaussian distribution plus higher-order non-Gaussian correction terms. The PDF of the Polynomial Chaos approximation can be obtained via several common techniques, such as the method of the Cumulative Distribution Function (CDF) [37]:

1. First-order approximation:

Y_1 = c(1 + \sigma_X\xi),    (58)

f_1(y) = \frac{1}{c\sigma_X}\, f(\xi), \qquad \xi = \frac{1}{\sigma_X}\left( \frac{y}{c} - 1 \right).    (59)
2. Second-order approximation:

Y_2 = c\left[ 1 + \sigma_X\xi + \frac{\sigma_X^2}{2!}(\xi^2 - 1) \right],    (60)

f_2(y) = \frac{1}{c\sigma_X}\,\frac{1}{r}\left[ f(\xi_1) + f(\xi_2) \right],    (61)

where

\xi_{1,2} = \frac{1}{\sigma_X}(-1 \pm r), \qquad r = \sqrt{-1 + \sigma_X^2 + 2y/c}.
3. Third-order approximation:

Y_3 = c\left[ 1 + \sigma_X\xi + \frac{\sigma_X^2}{2!}(\xi^2 - 1) + \frac{\sigma_X^3}{3!}(\xi^3 - 3\xi) \right],    (62)

f_3(y) = \frac{1}{c\sigma_X}\,\frac{\sigma_X^4}{b^2}\left[ 1 + \frac{\sigma_X^4(1 - \sigma_X^2)}{b^2} \right]\left[ 1 + \frac{1 - 3y/c}{r} \right] f(\xi),    (63)

where

r = \sqrt{2 - 6y/c + 9(y/c)^2 - 3\sigma_X^2 + 3\sigma_X^4 - \sigma_X^6}, \qquad b = \sigma_X^2\,(1 - 3y/c + r)^{1/3},

and

\xi = -\frac{1}{\sigma_X} + \frac{\sigma_X - \sigma_X^3}{b} - \frac{b}{\sigma_X^3}.
In the above equations, the function $f(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ is the PDF of the standard Gaussian variable $\xi$, which has zero mean and unit variance. Figure 17 shows the PDFs of the first-, second- and third-order PC approximations to the lognormal distribution, together with the exact lognormal PDF. The results are obtained with $m_X = 0$ and $\sigma_X = 0.3$. It can be seen that the first-order approximation (58) is symmetric because it contains only the Gaussian term. As more higher-order terms are added, the PDFs become asymmetric, and the third-order expansion gives a very good approximation.
Figure 17: Probability density functions (PDF) of the lognormal distribution and its 1st-, 2nd- and 3rd-order
Polynomial Chaos approximations.
Acknowledgements
We would like to thank Dr. M. Jardak, D. Lucor, Prof. C.-H. Su and Prof. R. Ghanem for useful discussions. This work was supported by ONR, and computations were performed at Brown's TCASCV and NCSA's (University of Illinois) facilities.
References
[1] Workshop on Validation and Verification of Computational Mechanics Codes. Technical report, Caltech, December 1998.
[2] Workshop on Predictability of Complex Phenomena, Los Alamos, 6-8 December 1999. Technical report.
[3] Workshop on Decision Making Under Uncertainty, IMA, 16-17 September 1999. Technical report.
[4] T.J. Oden, W. Wu, and M. Ainsworth. An a posteriori error estimate for finite element approximations of the Navier-Stokes equations. Comp. Meth. Appl. Mech. Eng., 111:185, 1994.
[5] L. Machiels, J. Peraire, and A.T. Patera. A posteriori finite element output bounds for the incompressible Navier-Stokes equations; application to a natural convection problem. J. Comp. Phys., to appear, 2001.
[6] R.G. Hills and T.G. Trucano. Statistical validation of engineering and scientific models: Background. Technical Report SAND99-1256, Sandia National Laboratories, 1999.
[7] R.G. Ghanem. Stochastic finite elements for heterogeneous media with multiple random non-Gaussian properties. ASCE J. Eng. Mech., 125(1):26-40, 1999.
[8] R.G. Ghanem. Ingredients for a general purpose stochastic finite element formulation. Comp. Meth. Appl. Mech. Eng., 168:19-34, 1999.
[9] T.D. Fadale and A.F. Emery. Transient effects of uncertainties on the sensitivities of temperatures and heat fluxes using stochastic finite elements. J. Heat Trans., 116:808-814, 1994.
[10] R.G. Ghanem and P. Spanos. Stochastic finite elements: a spectral approach. Springer-Verlag, 1991.
[11] W.K. Liu, G. Besterfield, and A. Mani. Probabilistic finite elements in nonlinear structural dynamics. Comp. Meth. Appl. Mech. Eng., 56:61-81, 1986.
[12] W.K. Liu, A. Mani, and T. Belytschko. Finite element methods in probabilistic mechanics. Probabilistic Eng. Mech., 2(4):201-213, 1987.
[13] M. Shinozuka and E. Leone. A probabilistic model for spatial distribution of material properties. Eng. Fracture Mech., 8:217-227, 1976.
[14] M. Shinozuka. Structural response variability. J. Eng. Mech., 113(6):825-842, 1987.
[15] M. Shinozuka and G. Deodatis. Response variability of stochastic finite element systems. Technical report, Dept. of Civil Engineering, Columbia University, New York, 1986.
[16] N. Wiener. The homogeneous chaos. Amer. J. Math., 60:897{936, 1938.
[17] N. Wiener. Nonlinear problems in random theory. MIT Technology Press and John Wiley and Sons, New York,
1958.
[18] H. Grad. On the kinetic theory of rarefied gases. Comm. Pure Appl. Math., 2:331-407, 1949.
[19] A.J. Chorin. Hermite expansion in Monte-Carlo simulations. J. Comput. Phys., 8:472{482, 1971.
[20] S.A. Orszag and L.R. Bissonnette. Dynamical properties of truncated Wiener-Hermite expansions. Phys. Fluids,
10:2603, 1967.
[21] A. Beskok and G.E. Karniadakis. A model for flows in channels, pipes and ducts at micro- and nano-scales. J. Microscale Thermophysical Eng., 3:43-77, 1999.
[22] R. Askey and J. Wilson. Some basic hypergeometric polynomials that generalize Jacobi polynomials. Memoir
Amer. Math. Soc., AMS, Providence RI, 319, 1985.
[23] G.E. Karniadakis and S.J. Sherwin. Spectral/hp element methods for CFD. Oxford University Press, 1999.
[24] R.H. Cameron and W.T. Martin. The orthogonal development of nonlinear functionals in series of Fourier-
Hermite functionals. Ann. of Math., 48:385, 1947.
[25] M. Loève. Probability theory, Fourth edition. Springer-Verlag, 1977.
[26] G.E. Karniadakis, M. Israeli, and S.A. Orszag. High-order splitting methods for incompressible Navier-Stokes
equations. J. Comp. Phys., 97:414, 1991.
[27] J. Koplik and J.R. Banavar. Continuum deductions from molecular hydrodynamics. Ann. Rev. Fluid Mech.,
27:257{292, 1995.
[28] A.M. Yaglom. An introduction to the theory of stationary random functions. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1962.
[29] B.D. Ripley. Stochastic Simulation. John Wiley & Sons, New York, 1987.
[30] G.E. Karniadakis. Towards an error bar in CFD. J. Fluids Eng., 117, March 1995.
[31] D. Lucor, D. Xiu, and G.E. Karniadakis. Spectral representations of uncertainty in simulations: Algorithms and applications. In ICOSAHOM-01, Uppsala, Sweden, June 11-15, 2001.
[32] H. Ogura. Orthogonal functionals of the Poisson process. IEEE Trans. Info. Theory, 18:473{481, 1972.
[33] E.M. Scheuer and D.S. Stoller. On the generation of normal random vectors. Technometrics, 4:278, 1962.
[34] T.W. Anderson. An introduction to multivariate statistical analysis. John Wiley and Sons, Inc., 1958.
[35] C. Chatfield. The analysis of time series: an introduction. Chapman and Hall/CRC, 1996.
[36] J.R. Benjamin and C.A. Cornell. Probability, statistics, and decision for civil engineers. McGraw-Hill, Inc., 1970.
[37] D.D. Wackerly, W. Mendenhall, and R.L. Scheaffer. Mathematical statistics with applications. Duxbury Press, Wadsworth Pub. Co., 1996.