RESEARCH PAPER
Uncertainty quantification in reliability estimation with limit-state surrogates
Saideep Nannapaneni1 & Zhen Hu1 & Sankaran Mahadevan1
Received: 7 December 2015 / Revised: 27 April 2016 / Accepted: 17 May 2016
© Springer-Verlag Berlin Heidelberg 2016
Abstract Model-based reliability analysis is affected by different types of epistemic uncertainty, due to inadequate data and modeling errors. When the physics-based simulation model is computationally expensive, a surrogate has often been used in reliability analysis, introducing additional uncertainty due to the surrogate. This paper proposes a framework to include statistical uncertainty and model uncertainty in surrogate-based reliability analysis. Two types of surrogates have been considered: (1) general-purpose surrogate models that compute the system model output over the desired ranges of the random variables; and (2) limit-state surrogates. A unified approach is developed to connect the model calibration analysis using the Kennedy and O'Hagan (KOH) framework to the construction of the limit-state surrogate and to the estimation of uncertainty in reliability analysis. The Gaussian process (GP) general-purpose surrogate of the physics-based simulation model obtained from the KOH calibration analysis is further refined at the limit state (local refinement) to construct the limit-state surrogate, which is used for reliability analysis. An efficient single-loop sampling approach using the probability integral transform is used for sampling the input variables with statistical uncertainty. The variability in the GP prediction (surrogate uncertainty) is included in reliability analysis through correlated sampling of the model predictions at different inputs. The Monte Carlo sampling (MCS) error, which represents the error due to limited Monte Carlo samples, is quantified by constructing a probability density function. All the different sources of epistemic uncertainty are quantified and aggregated to estimate the uncertainty in the reliability analysis. Two examples are used to demonstrate the proposed techniques.
Keywords Reliability . Epistemic . Surrogate . Gaussian process . Uncertainty quantification
Nomenclature
X        A vector of input random variables in a system
g(X)     Physics model
pfc      Failure probability of a component (single limit state)
I(.)     Indicator function for identifying samples in a failure region
pfs      Failure probability of a system (with multiple limit states)
nMCS     Number of Monte Carlo samples for reliability analysis
ĝ(X)     Surrogate model built for the simulation model
x        A realization of the vector of input random variables
gext(X)  Limit state function (either a component or a system limit state)
ĝext(X)  A surrogate for the limit state (component or system)
EFF      Expected Feasibility Function
X        A single input random variable
x        A realization of an input random variable
Θ        A vector of distribution parameters
θ        A realization of the vector of distribution parameters
BMA      Bayesian model averaging
BHT      Bayesian hypothesis testing
* Sankaran Mahadevan
  [email protected]
1 Department of Civil and Environmental Engineering, Vanderbilt University, Station B, Box 1831, Nashville, TN 37235, USA
Struct Multidisc Optim
DOI 10.1007/s00158-016-1487-1
wk       Weight corresponding to the kth distribution type in BMA
D        Data available on an input random variable
Ψ        Parameters of a simulation model
δ(X)     True model discrepancy for the simulation model
gobs(X)  Experimental observations
Y        Output of the physics model (Y = g(X))
FY(y)    True CDF of the model output
F̂Y(y)    Estimated CDF of the model output
gmodel(X)  Simulation model of the physics model
δ̂(X)     An estimate of the model discrepancy
UMD      Auxiliary variable representing the uncertainty in the model discrepancy estimate
uMD      A realization of the auxiliary variable for model discrepancy
UX       Auxiliary variable for aleatory uncertainty in an input random variable
uX       A realization of the auxiliary variable for aleatory uncertainty in an input
φ        A variable representing the set of input random variables, uncertain model parameters, and the uncertainty in the model discrepancy prediction
UAK      Coefficient of variation of the surrogate prediction
pfMCS    Failure probability after inclusion of the Monte Carlo simulation error
UMCS     Auxiliary variable for the epistemic uncertainty in the reliability estimate
1 Introduction
Optimization under uncertainty has been studied in two directions – (1) Reliability-Based Design Optimization (RBDO), and (2) Robust Design Optimization (RDO). One of the crucial elements in an RBDO problem is reliability analysis. Techniques for reliability analysis can be categorized into analytical methods and simulation-based methods (Du 2010). Analytical methods such as the First-Order Reliability Method (FORM) and Second-Order Reliability Method (SORM) employ first-order and second-order approximations of the limit state (Haldar and Mahadevan 2000a). First-order and second-order bounds for system reliability estimates have been proposed based on first-order and second-order approximations of the limit states (Mahadevan et al. 2001; Song and Der Kiureghian 2003). The FORM- and SORM-based methods become inaccurate when the limit states are highly nonlinear. Monte Carlo sampling (MCS) approaches can be accurate but computationally expensive. To reduce the computational effort, surrogate-based reliability analysis methods have been developed (Hu and Du 2015; Wang and Wang 2015; Youn and Choi 2004). For illustration, consider a performance function represented as g(X) and let g(X) = 0 represent the limit state. If the computational model for g(X) is expensive, two categories of surrogates have been used in the reliability analysis literature – surrogates that estimate the output for any given input, i.e., g(X), and surrogates that specifically model the failure limit state, i.e., g(X) = 0. For convenience, the surrogates that replace g(X) and g(X) − g0 = 0 are termed in this paper general-purpose surrogates and limit-state surrogates, respectively.
In the context of general-purpose surrogates (commonly known as response surfaces), Faravelli (1989) and Choi et al. (2004) used a polynomial expansion response surface model, Papadrakakis et al. (1996) used a neural network-based surrogate, and Dubourg et al. (2013) and Kaymaz (2005) used a Kriging (or Gaussian process) surrogate. In the case of limit-state surrogates (which are basically classifiers), Bichon et al. (2008, 2011) and Echard et al. (2011) used a Gaussian process (GP) surrogate, while Song et al. (2013) used a support vector machine (SVM)-based surrogate. In this paper, limit-state surrogates (classifiers) are considered. Surrogate-based reliability methods have so far mainly considered only aleatory uncertainty (natural variability). However, practical reliability analysis is affected by many sources of epistemic uncertainty (lack of knowledge); therefore, this paper investigates approaches to include such uncertainty in surrogate-based reliability analysis.
Epistemic uncertainty can be divided into two categories – statistical uncertainty and model uncertainty. Statistical uncertainty stems from inadequacies in the available data (e.g., sparse, imprecise, qualitative, missing, or erroneous data) (Sankararaman and Mahadevan 2011; Wang et al. 2009; Youn and Wang 2006), which result in uncertainty regarding the probability distributions of the input random variables. Model uncertainty is due to uncertainty in model parameters, numerical solution errors, and model form error. The model form error and numerical solution errors together are referred to in this paper as model discrepancy. In recent years, efforts have been made to account for both aleatory and epistemic uncertainty within reliability analysis, such as the auxiliary variable approach (Der Kiureghian and Ditlevsen 2009; Nannapaneni and Mahadevan 2015; Sankararaman and Mahadevan 2013a), the conditional reliability index method (Hu et al. 2015), and the Bayesian network approach (Mahadevan et al. 2001; Straub and Der Kiureghian 2010). These methods have concentrated only on component reliability analysis (i.e., a single limit state), considering a few sources of epistemic uncertainty.
Uncertainty due to the use of a surrogate, also called surrogate uncertainty, is an important source of uncertainty when using a surrogate for reliability analysis. In this paper, the term surrogate uncertainty represents the variance in the surrogate prediction and not the bias. For instance, when a GP surrogate is used, the prediction at any input is a Gaussian distribution. Another source of uncertainty that is often ignored is the Monte Carlo sampling (MCS) error, which is the uncertainty due to the use of limited Monte Carlo samples in reliability analysis. As the number of Monte Carlo samples increases, the MCS error decreases, and as the number of samples reaches infinity, the MCS error tends to zero. Since an infinite number of samples is not possible, there exists some residual MCS error that needs to be quantified and included in the reliability analysis. Techniques for the systematic incorporation of various epistemic uncertainty sources, along with surrogate uncertainty and MCS error, in limit-state surrogate-based reliability analysis are not yet available. Therefore, this paper seeks to incorporate all the above sources of uncertainty in limit-state surrogate-based reliability analysis: aleatory and statistical uncertainty in the input random variables (distribution parameters, distribution type), uncertain model parameters, model discrepancy, surrogate uncertainty, and Monte Carlo sampling error. Both component and system reliability analyses are considered. For convenience, reliability analysis based on a limit-state surrogate is termed surrogate-based reliability analysis in the rest of the paper.
For the construction of the limit-state surrogate, the physics-based simulation model is assumed to be available. Along with the simulation model, the model discrepancy associated with it is assumed to have been quantified using calibration experiments. If the Kennedy and O'Hagan calibration framework (Kennedy and O'Hagan 2001) is used, then the model discrepancy is represented using a GP model. Kennedy and O'Hagan (2001) proposed two approaches for model calibration – (1) fully Bayesian, and (2) modular Bayesian. A good overview of both the fully and modular Bayesian approaches is available in (Arendt et al. 2012). In the fully Bayesian approach, the parameters of the simulation model and the hyperparameters are calibrated together by obtaining joint posterior distributions, from which marginal distributions can be obtained.
Model calibration using the modular Bayesian approach can be summarized in the following steps (Arendt et al. 2012): (1) estimation of the MLE (maximum likelihood estimates) of the hyperparameters of the simulation model GP surrogate using the simulation data; (2) estimation of the MLE of the hyperparameters of the model discrepancy GP surrogate using the experimental data, the simulation data, and the hyperparameters of the simulation model GP surrogate; (3) calibration of the model parameters based on the estimated hyperparameters of the GP surrogates; and (4) prediction, where the overall prediction is marginalized over the model parameters. The model prediction at any input, when conditioned on the model parameters and the MLE of the hyperparameters of the GP surrogates, follows a Gaussian distribution. However, the unconditional prediction at any input is obtained by marginalizing over the posterior distributions of the model parameters; in this case, the model prediction is not Gaussian, and only an expected value of the reliability estimate is obtained. Since the goal in this paper is to quantify the uncertainty in the reliability estimate (as opposed to an expected value), marginalization over the model parameters is not performed; instead, the model parameters (with their posterior distributions) are treated just like any other stochastic inputs. In this case, the prediction (for a given realization of the system inputs and model parameters) is Gaussian. As stated in (Arendt et al. 2012), the separation of the two GP models for calibration is intuitive; therefore, the modular Bayesian approach for calibration is adopted here.
The outputs of the model calibration analysis using the KOH framework are – (1) the data on the inputs, simulation outputs, and observations used for model calibration; (2) a GP model for the simulation model; (3) a GP model for the model discrepancy; and (4) the posterior distributions of the model parameters. These four elements are later used in the construction of a limit-state surrogate for reliability analysis.
Earlier studies such as (Dubourg et al. 2013; Kaymaz 2005; Bichon et al. 2008, 2011; Echard et al. 2011) considered construction of the limit-state surrogate for the physics-based simulation model without any model discrepancy. In this paper, the model discrepancy is also considered in constructing the limit-state surrogate. However, construction of any surrogate requires point values of paired input–output data, whereas in the presence of model discrepancy the output at any input is a PDF. This problem is overcome in this paper by using an auxiliary variable approach, which allows for a one-to-one relationship between input and output.
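The auxiliary variable idea can be sketched as follows: because the discrepancy at a given input is Gaussian (a GP prediction), an auxiliary variable uMD ∈ (0, 1) fixes a percentile of that distribution through the inverse CDF, turning the output into a deterministic function of the augmented input (x, uMD). A minimal Python sketch, in which the simulation model and the discrepancy GP's mean and standard deviation functions are hypothetical placeholders, not from the paper:

```python
from statistics import NormalDist

# Hypothetical simulation model and discrepancy GP (its posterior mean and
# standard deviation), assumed for illustration only.
def g_model(x):
    return x ** 2 - 2.0

def delta_mean(x):
    return 0.1 * x

def delta_std(x):
    return 0.05 + 0.01 * abs(x)

def g_with_discrepancy(x, u_md):
    """Deterministic output for the augmented input (x, u_md), u_md in (0, 1).

    u_md selects a fixed percentile of the Gaussian discrepancy through the
    inverse CDF, giving the one-to-one input-output mapping needed to train
    a surrogate."""
    z = NormalDist().inv_cdf(u_md)
    return g_model(x) + delta_mean(x) + z * delta_std(x)
```

At uMD = 0.5 the mapping returns the mean (median) prediction; sampling uMD uniformly recovers the full predictive distribution of the output at x.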
Several types of surrogates are available in the literature, such as polynomial chaos, Gaussian process (GP), support vector machines, and neural networks. This paper uses a GP surrogate to model the limit state, following the Efficient Global Reliability Analysis (EGRA) approach proposed by Bichon et al. (2008, 2011). However, the techniques proposed in this paper are not limited to GP models and can be extended to any surrogate. The GP surrogate of the simulation model obtained from the KOH framework-based calibration analysis is further refined to construct the limit-state surrogate by adding more training points close to the limit state. The training points are adaptively selected by maximizing a learning function called the Expected Feasibility Function (EFF). More details about the EFF and the selection of training points are provided in Section 2.1.2.
In the presence of statistical uncertainty, an input variable can be represented through a family of PDFs, where each PDF corresponds to a realization of the distribution parameters and a distribution type. The input variable could be sampled through a nested sampling approach, in which the distribution type and parameters are sampled in the outer loop and samples of the input variable are generated in the inner loop. This nested-loop procedure is computationally expensive; therefore, a faster single-loop sampling approach using the probability integral transform (Sankararaman and Mahadevan 2011) is used here.
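The single-loop idea can be sketched as follows, assuming for illustration only that X is normal with an uncertain mean (the statistical uncertainty) and a known standard deviation; the actual distribution family and parameter posterior would come from the quantification in Section 2.2:

```python
import random
from statistics import NormalDist

random.seed(0)

def sample_single_loop(n):
    """Single-loop sampling via the probability integral transform (a sketch).

    Illustrative assumption: X is normal with uncertain mean mu ~ N(10, 0.5)
    (statistical uncertainty) and known standard deviation 2.0. Each iteration
    draws a parameter realization and an auxiliary uniform u_X together and
    sets x = F^{-1}(u_X; theta), so no nested inner loop over x is needed."""
    samples = []
    for _ in range(n):
        mu = random.gauss(10.0, 0.5)           # distribution-parameter sample
        u_x = random.random()                  # auxiliary aleatory variable
        x = NormalDist(mu, 2.0).inv_cdf(u_x)   # probability integral transform
        samples.append(x)
    return samples

xs = sample_single_loop(20_000)
```

Each model evaluation thus consumes one parameter draw and one auxiliary uniform draw, giving the same marginal distribution of X as the nested loop at a fraction of the cost.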
The major contribution of this paper is a unified framework connecting the model calibration analysis to the construction of a limit-state surrogate and the estimation of uncertainty in reliability analysis by incorporating different sources of epistemic uncertainty. The basic contributions include: (1) the use of the auxiliary variable approach to represent the model discrepancy for its inclusion in limit-state surrogate refinement, and (2) the quantification of different types of epistemic uncertainty and their incorporation in reliability analysis to quantify the uncertainty in the reliability estimate.

The remainder of this paper is organized as follows. Section 2 provides brief background on component and system reliability analysis in the presence of aleatory uncertainty, and on the quantification of different sources of epistemic uncertainty. Section 3 presents the proposed methodology for component and system reliability analysis in the presence of various sources of epistemic uncertainty. Two numerical examples are used to demonstrate the proposed methodology in Section 4. Concluding remarks are provided in Section 5.
2 Background
2.1 Reliability analysis with a limit state surrogate
Here, the definitions of component and system reliability and an introduction to Efficient Global Reliability Analysis (EGRA) are provided.
2.1.1 Component and system reliability estimation using a surrogate
Component reliability Consider a single limit state g(X) = 0, where X = [X1, X2, ⋯, Xn] is a vector of input random variables. The component failure probability p_f^c is given as

$$ p_f^c = \Pr\{ g(X) \le 0 \} \quad (1) $$

In surrogate-based reliability analysis using MCS, the component failure probability can be calculated as

$$ p_f^c = \frac{1}{n_{MCS}} \sum_{j=1}^{n_{MCS}} I\left( \hat{g}\left(x^{(j)}\right) \le 0 \right) \quad (2) $$

where x is a realization of the random variables X, n_MCS is the number of MCS samples, ĝ(x) is the surrogate, ĝ(x^(j)) is the surrogate prediction at the jth sampling point x^(j), and I(ĝ(x^(j)) ≤ 0) is a failure indicator function defined as

$$ I(\mathrm{event}) = \begin{cases} 1, & \text{if the event is true} \\ 0, & \text{otherwise} \end{cases} \quad (3) $$
System reliability System failure events may be defined through a series, parallel, or mixed series/parallel combination of component failures (Wang et al. 2011). Let gi(X), i = 1, 2, ⋯, m be the limit-state functions. The failure probability of a series combination of individual failures is given by

$$ p_f^s = \Pr\left\{ \bigcup_{i=1}^{m} g_i(X) \le 0 \right\} \quad (4) $$

where p_f^s is the system failure probability, Pr{⋅} denotes probability, and "∪" denotes union. The system failure probability, using a surrogate and MCS, can be calculated as

$$ p_f^s = \frac{1}{n_{MCS}} \sum_{j=1}^{n_{MCS}} I\left( \bigcup_{i=1}^{m} \hat{g}_i\left(x^{(j)}\right) \le 0 \right) \quad (5) $$

where ĝi(x^(j)) is the surrogate prediction of the ith limit-state function at the jth sampling point x^(j). Equation (5) can be rewritten as

$$ p_f^s = \frac{1}{n_{MCS}} \sum_{j=1}^{n_{MCS}} I\left( \min_{i \in m}\left\{ \hat{g}_i\left(x^{(j)}\right) \right\} \le 0 \right) = \frac{1}{n_{MCS}} \sum_{j=1}^{n_{MCS}} I\left( \hat{g}_{min}\left(x^{(j)}\right) \le 0 \right) \quad (6) $$
The failure probability of a parallel combination of individual failures is given by

$$ p_f^s = \Pr\left\{ \bigcap_{i=1}^{m} g_i(X) \le 0 \right\} \quad (7) $$

where "∩" denotes intersection. Similar to the series system, the failure probability of a parallel system can be calculated as

$$ p_f^s = \frac{1}{n_{MCS}} \sum_{j=1}^{n_{MCS}} I\left( \bigcap_{i=1}^{m} \hat{g}_i\left(x^{(j)}\right) \le 0 \right) \quad (8) $$

The above equation can be written as

$$ p_f^s = \frac{1}{n_{MCS}} \sum_{j=1}^{n_{MCS}} I\left( \max_{i \in m}\left\{ \hat{g}_i\left(x^{(j)}\right) \right\} \le 0 \right) = \frac{1}{n_{MCS}} \sum_{j=1}^{n_{MCS}} I\left( \hat{g}_{max}\left(x^{(j)}\right) \le 0 \right) \quad (9) $$
Figure 1 provides graphical illustrations of the composite limit states for series and parallel combinations of three individual limit states gi(X) = 0, i = 1, 2, 3. When a system failure is defined through a mixture of series and parallel combinations of component failures, the system reliability can be estimated by the following steps (Haldar and Mahadevan 2000b): (1) decompose the combined system into a set of mutually exclusive series combinations, (2) compute the reliability of each series combination, and (3) compute the system reliability using the reliabilities of the series combinations.
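The estimators of Eqs. (2), (6), and (9) can be sketched as follows; the two limit-state functions and the standard-normal inputs are illustrative assumptions, not from the paper:

```python
import random

random.seed(1)

# Illustrative 2-D limit states (assumed for demonstration only)
def g1(x, y):
    return x + y - 1.0

def g2(x, y):
    return x - y + 2.0

def estimate_pf(limit_states, n_mcs, system="series"):
    """Monte Carlo failure-probability estimate following Eqs. (2), (6), (9).

    Series system: composite response g_min = min_i g_i (Eq. 6).
    Parallel system: composite response g_max = max_i g_i (Eq. 9).
    A single limit state reduces to the component case of Eq. (2)."""
    n_fail = 0
    for _ in range(n_mcs):
        x = random.gauss(0.0, 1.0)   # assumed standard-normal inputs
        y = random.gauss(0.0, 1.0)
        vals = [g(x, y) for g in limit_states]
        g_comp = min(vals) if system == "series" else max(vals)
        if g_comp <= 0.0:            # indicator function of Eq. (3)
            n_fail += 1
    return n_fail / n_mcs

pf_series = estimate_pf([g1, g2], 50_000, "series")
pf_parallel = estimate_pf([g1, g2], 50_000, "parallel")
```

A parallel combination of the same limit states can never fail more often than the series combination, which provides a quick sanity check on the two estimates.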
2.1.2 Efficient global reliability analysis (EGRA)
In EGRA, a GP surrogate is constructed to approximate the limit-state function gext(X) = 0; i.e., we are building a surrogate for the classifier or boundary between failure and safety. (Note that gext(X) = g(X) for a component, gext(X) = gmin(X) for a series system, and gext(X) = gmax(X) for a parallel system.) Using the idea of Expected Improvement (EI) (Jones et al. 1998), EGRA adaptively selects training points close to the limit state to accurately model gext(X). The selection of training points is based on a learning function called the Expected Feasibility Function (EFF), defined as (Bichon et al. 2008)

$$
\begin{aligned}
EFF(x) ={}& \left(\mu_g(x) - e\right)\left[ 2\Phi\!\left(\frac{e - \mu_g(x)}{\sigma_g(x)}\right) - \Phi\!\left(\frac{e_L - \mu_g(x)}{\sigma_g(x)}\right) - \Phi\!\left(\frac{e_U - \mu_g(x)}{\sigma_g(x)}\right) \right] \\
& - \sigma_g(x)\left[ 2\phi\!\left(\frac{e - \mu_g(x)}{\sigma_g(x)}\right) - \phi\!\left(\frac{e_L - \mu_g(x)}{\sigma_g(x)}\right) - \phi\!\left(\frac{e_U - \mu_g(x)}{\sigma_g(x)}\right) \right] \\
& + \varepsilon\left[ \Phi\!\left(\frac{e_U - \mu_g(x)}{\sigma_g(x)}\right) - \Phi\!\left(\frac{e_L - \mu_g(x)}{\sigma_g(x)}\right) \right]
\end{aligned} \quad (10)
$$

where e is the failure threshold (e = 0 in this paper), eU = e + ε, eL = e − ε, μg(x) and σg(x) are the mean and standard deviation of the GP prediction at the point X = x, ε is usually chosen as ε = 2σg(x) (Bichon et al. 2008), and Φ(⋅) and ϕ(⋅) are the CDF and PDF of a standard Gaussian variable, respectively. In EGRA, a new training point is identified by maximizing the EFF, x* = argmax_x {EFF(x)}. More details about EGRA are available in (Bichon et al. 2011). EGRA and other similar methods that focus on limit-state surrogates, such as META-IS (Dubourg et al. 2013) and AK-MCS (Echard et al. 2011), have so far concentrated on reliability analysis with only aleatory uncertainty. In practical applications, several sources of epistemic uncertainty may be involved in the reliability analysis. The next section discusses several sources of epistemic uncertainty and their quantification.
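Equation (10) is straightforward to evaluate for a candidate point once the GP mean and standard deviation there are known. A sketch, in which the candidate list of (mean, standard deviation) pairs is hypothetical:

```python
from statistics import NormalDist

_nd = NormalDist()

def eff(mu_g, sigma_g, e=0.0):
    """Expected Feasibility Function of Eq. (10) for a GP prediction with
    mean mu_g and standard deviation sigma_g, using eps = 2*sigma_g as in
    Bichon et al. (2008)."""
    eps = 2.0 * sigma_g
    z = (e - mu_g) / sigma_g
    z_lo = (e - eps - mu_g) / sigma_g
    z_hi = (e + eps - mu_g) / sigma_g
    term1 = (mu_g - e) * (2.0 * _nd.cdf(z) - _nd.cdf(z_lo) - _nd.cdf(z_hi))
    term2 = -sigma_g * (2.0 * _nd.pdf(z) - _nd.pdf(z_lo) - _nd.pdf(z_hi))
    term3 = eps * (_nd.cdf(z_hi) - _nd.cdf(z_lo))
    return term1 + term2 + term3

# Candidates whose GP mean is close to the threshold (or whose variance is
# large) score highest, so maximizing EFF refines the surrogate near g = 0.
candidates = [(-3.0, 0.5), (0.1, 0.5), (3.0, 0.5)]   # hypothetical (mu, sigma)
best = max(candidates, key=lambda c: eff(*c))
```

In a full EGRA loop, the maximization runs over the continuous input space (e.g., with a global optimizer) rather than a finite candidate list.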
2.2 Quantification of epistemic uncertainty
2.2.1 Statistical uncertainty regarding the inputs
In practical applications, it is common to have only limited data on an input variable X, which could be sparse point data and/or interval data, causing uncertainty in its PDF. Both parametric and non-parametric approaches have been developed to address this statistical uncertainty.
(a) Parametric approach
In a parametric approach, an input variable is represented using a distribution type and distribution parameters. The presence of limited data causes uncertainty regarding the distribution type and its parameters, which are expressed as probability distributions in a Bayesian approach.

Distribution parameter uncertainty Let a dataset D for a variable X consist of n point data pi (i = 1 to n) and m interval data [aj, bj] (j = 1 to m). The likelihood function for the distribution parameters θ can be constructed as

$$ L(\theta) = \prod_{i=1}^{n} f_X\left(X = p_i \mid \theta\right) \prod_{j=1}^{m} \left[ F_X\left(X = b_j \mid \theta\right) - F_X\left(X = a_j \mid \theta\right) \right] \quad (11) $$

where fX(x) and FX(x) represent the PDF and CDF of X, respectively. From the likelihood function, the PDFs of the distribution parameters can be obtained using Bayes' theorem.
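Equation (11) combined with Bayes' theorem can be sketched with a grid-based posterior; the normal family with known standard deviation and the data values below are illustrative assumptions:

```python
from statistics import NormalDist

# Assumed sparse data for illustration: point observations and interval data
points = [9.2, 10.5, 10.1]
intervals = [(8.5, 11.0), (9.0, 10.0)]

def likelihood(mu, sigma=1.0):
    """Likelihood of Eq. (11): PDF terms for the point data and CDF
    differences for the interval data (normal family with known sigma
    assumed for illustration)."""
    dist = NormalDist(mu, sigma)
    L = 1.0
    for p in points:
        L *= dist.pdf(p)
    for a, b in intervals:
        L *= dist.cdf(b) - dist.cdf(a)
    return L

# Grid-based Bayesian update with a uniform prior on mu (the prior cancels)
grid = [5.0 + 0.01 * k for k in range(1001)]      # mu in [5, 15]
post = [likelihood(mu) for mu in grid]
total = sum(post)
post = [p / total for p in post]
mu_mean = sum(m * p for m, p in zip(grid, post))  # posterior mean of mu
```

The normalized values `post` are a discrete approximation of the posterior PDF of the distribution parameter, from which percentiles or samples can be drawn for the single-loop sampling described earlier.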
[Fig. 1 Composite limit state of three individual limit states gi(X) = 0, i = 1, 2, 3: (a) series system; (b) parallel system]
Distribution type uncertainty Two approaches are available to handle distribution type uncertainty – (1) a composite distribution over the possible distribution types using Bayesian Model Averaging (BMA) (Hoeting et al. 1999), or (2) a single distribution type that best describes the data, selected using Bayesian Hypothesis Testing (BHT) (Bernardo and Rueda 2002; Ling and Mahadevan 2013). BMA is used here, and the composite distribution is formulated as

$$ f_X(x \mid \theta) = \sum_{k=1}^{N} w_k f_X\left(x \mid \theta_k\right) $$

where fX(x|θk) is the kth distribution type, with wk and θk representing its weight and distribution parameters. The weights can be computed by comparing the posterior probabilities of the distribution types as

$$ \frac{\Pr\left(f_X(x \mid \theta_c) \mid D\right)}{\Pr\left(f_X(x \mid \theta_d) \mid D\right)} = \frac{\Pr\left(D \mid f_X(x \mid \theta_c)\right)}{\Pr\left(D \mid f_X(x \mid \theta_d)\right)} \, \frac{\Pr\left(f_X(x \mid \theta_c)\right)}{\Pr\left(f_X(x \mid \theta_d)\right)} \quad (12) $$

where Pr(fX(x|θc)) and Pr(fX(x|θd)) are the prior probabilities of the two distribution types, and Pr(D|fX(x|θc))/Pr(D|fX(x|θd)) is the ratio of their likelihoods. In the presence of multiple plausible distributions, weights for all the distributions can be computed with respect to a particular distribution: if M1, M2, …, Mn are plausible distributions, ratios of posterior weights can be computed between (M1, Mn), (M2, Mn), …, (Mn−1, Mn). After obtaining the ratios of weights using Eq. (12), the weights can be normalized. Using the normalized weights and the plausible distribution types, the composite distribution can be constructed. Please refer to (Sankararaman and Mahadevan 2013b) for more details.
(b) Non-parametric approach
As opposed to the parametric approach, the non-parametric approach does not assume any particular distribution type or distribution parameters; instead, the PDF is constructed using interpolation techniques.

Let a dataset D for a variable X consist of n point data pi (i = 1 to n) and m interval data [aj, bj] (j = 1 to m); the domain of X is discretized into Q points to model the non-parametric distribution. Let the PDF values at these discretized points be qi (i = 1, 2, ⋯, Q). Since q = qi (i = 1, 2, ⋯, Q) is unknown, it can be estimated by solving the following optimization problem:

$$
\begin{aligned}
\max_{q} \quad & L(q) = \prod_{i=1}^{n} f_X\left(X = p_i \mid q\right) \prod_{j=1}^{m} \left[ F_X\left(X = b_j \mid q\right) - F_X\left(X = a_j \mid q\right) \right] \\
\text{s.t.} \quad & q \ge 0, \quad f_X(x) \ge 0, \quad \int f_X(x)\, dx = 1
\end{aligned} \quad (13)
$$

After obtaining the PDF values at the discretized points, interpolation techniques are used to estimate the PDF values at any other input values. More details on the likelihood-based non-parametric method are provided in (Sankararaman and Mahadevan 2011).
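The likelihood maximization of Eq. (13) can be sketched with a softmax-style parameterization that enforces the non-negativity and normalization constraints, and a crude coordinate-wise hill climb standing in for a proper constrained optimizer; the data, grid, and optimizer are illustrative assumptions, a sketch rather than a production solver:

```python
import math
import random

random.seed(2)

# Illustrative data: point and interval observations of X (assumed values)
points = [3.1, 4.2, 3.8]
intervals = [(2.5, 5.0)]
grid = [0.5 * k for k in range(21)]   # Q = 21 discretization points on [0, 10]
width = grid[1] - grid[0]

def pdf_from_theta(theta):
    """Softmax-style parameterization: q >= 0 and trapezoid integral = 1."""
    ex = [math.exp(t) for t in theta]
    wts = [width] * len(ex)
    wts[0] = wts[-1] = width / 2.0        # trapezoid quadrature weights
    z = sum(e * w for e, w in zip(ex, wts))
    return [e / z for e in ex]

def interp_pdf(q, x):
    """Linear interpolation of the discretized PDF values; zero outside."""
    if x < grid[0] or x > grid[-1]:
        return 0.0
    i = min(int((x - grid[0]) / width), len(grid) - 2)
    w = (x - grid[i]) / width
    return (1.0 - w) * q[i] + w * q[i + 1]

def cdf(q, x):
    """Trapezoid CDF of the piecewise-linear PDF."""
    c = 0.0
    for i in range(len(grid) - 1):
        if x <= grid[i]:
            break
        hi = min(x, grid[i + 1])
        c += 0.5 * (interp_pdf(q, grid[i]) + interp_pdf(q, hi)) * (hi - grid[i])
    return c

def log_like(theta):
    """Log of the likelihood in Eq. (13): PDF terms for point data,
    CDF differences for interval data."""
    q = pdf_from_theta(theta)
    ll = sum(math.log(max(interp_pdf(q, p), 1e-300)) for p in points)
    ll += sum(math.log(max(cdf(q, b) - cdf(q, a), 1e-300)) for a, b in intervals)
    return ll

# Crude coordinate-wise hill climb (stand-in for a constrained optimizer)
theta = [0.0] * len(grid)
best_ll = log_like(theta)
for _ in range(3000):
    i = random.randrange(len(theta))
    cand = theta.copy()
    cand[i] += random.gauss(0.0, 0.5)
    ll = log_like(cand)
    if ll > best_ll:
        theta, best_ll = cand, ll
q = pdf_from_theta(theta)   # optimized PDF values; interpolate for other x
```

The parameterization eliminates the constraints of Eq. (13) by construction, so any unconstrained maximizer (gradient-based or otherwise) could replace the hill climb.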
2.2.2 Model uncertainty
This section discusses the quantification of different types of model uncertainty, such as model parameter uncertainty and model discrepancy. In addition, surrogate uncertainty and MCS error are discussed in the context of surrogate-based reliability analysis.
(a) Model parameter uncertainty
Model parameter uncertainty represents the uncertainty in the model parameters due to natural variability, limited data, or both. The three possible scenarios of model parameter uncertainty are – (1) the model parameter is deterministic but unknown (epistemic uncertainty), (2) the model parameter is stochastic with known distribution parameters (aleatory uncertainty), and (3) the model parameter is stochastic with unknown distribution parameters (aleatory and epistemic uncertainty). If a model parameter is deterministic but unknown, it can be estimated from available data using least squares, maximum likelihood, or Bayesian calibration.

Model parameters (Ψ) that are associated with aleatory uncertainty (probability distributions) and have fixed distribution parameters can be treated like input variables for reliability analysis, and the techniques used for quantification of uncertainty in the inputs (the parametric and non-parametric approaches described in Section 2.2.1) can also be used for the model parameters. If the distribution parameters of Ψ are unknown (both aleatory and epistemic uncertainty), then one of the aforementioned calibration techniques can be used to estimate the distribution parameters from the available data. Here, Bayesian calibration is used to explicitly quantify the uncertainty in the model parameters or their distribution parameters.
(b) Model discrepancy
Model discrepancy, in this discussion, represents the combined error introduced due to the assumptions and simplifications made in building a model (model form error) as well as the errors that arise in the methodology adopted in solving the model equations (numerical solution errors). Different types of numerical solution errors exist, such as discretization error, round-off error, and truncation error. Suppose gobs(X), gmodel(X), and δ(X) represent the observations, the simulation model prediction, and the model discrepancy, respectively. For a given X = x, the three quantities are related as

$$ g_{obs}(x) = g_{model}(x) + \delta(x) + \varepsilon_{obs}(x) $$

where εobs(x) refers to the observation (or experimental) error. When following the KOH framework, the quantification of the model discrepancy can be performed together with the calibration of the model parameters. Note that in this paper, as stated in Section 1, we adopt the modular Bayesian approach of the KOH framework.
(c) Reliability analysis errors
Different types of errors that arise in carrying out reliability analysis, such as surrogate uncertainty and uncertainty quantification (UQ) error, are discussed below.

Surrogate uncertainty In this paper, the variance (and not the bias) associated with the prediction of a surrogate is called surrogate uncertainty. For example, the prediction of a GP model is a Gaussian distribution with parameters dependent on the input. Since a surrogate (GP model) is employed in this paper for reliability analysis, the details about surrogate uncertainty and its impact on the reliability estimate are discussed later, in Section 3.2.2.
Monte Carlo Simulation (MCS) error MCS error represents the error due to the use of a limited number of Monte Carlo samples for uncertainty propagation. The MCS error is quantified as the difference between the empirical CDF (constructed using the Monte Carlo samples after uncertainty propagation) and the true CDF of the output quantity of interest (Sankararaman and Mahadevan 2013b).

Consider the model Y = g(X) and let Ns samples be generated from the input and propagated through the model to obtain samples of the output Y. Let F̂Y(y) represent the empirical CDF constructed from the Ns samples and FY(y) be the true CDF. If ny samples are less than a value Y = y*, then ny follows a binomial distribution, ny ∼ B(Ns, FY(y)), considering the value of each sample as the result of a Bernoulli trial (Haldar and Mahadevan 2000a). As Ns becomes larger, the binomial distribution can be approximated with a Gaussian distribution. A review of several empirical rules proposed in the literature for approximating a binomial distribution with a Gaussian distribution is provided in (Emura and Lin 2015). The most commonly used rule, as stated in (Emura and Lin 2015), is Ns FY(y) > 5 and Ns (1 − FY(y)) > 5; this case is considered here. A large number of Monte Carlo samples is generally used in surrogate-based methods since sampling the surrogate is computationally inexpensive. Even with about 100,000 samples (a common number in surrogate-based methods), the threshold FY(y) value is 5/100,000 = 0.00005, so the rule covers more than 99.99 % of the domain.
Since the CDF value from the Monte Carlo output samples is given by F̂Y(y) = ny / Ns, we have

$$ \hat{F}_Y(y) \sim N\!\left( F_Y(y),\; \sqrt{ \frac{F_Y(y)\left(1 - F_Y(y)\right)}{N_s} } \right) \quad (14) $$
Therefore, the MCS error associated with F̂Y(y) can be expressed as a Gaussian random variable with mean and standard deviation as given in Eq. (14). Since the true CDF FY(y) is unknown, Eq. (14) cannot be used directly. However, confidence intervals for FY(y) can be estimated given the empirical CDF F̂Y(y), the number of samples Ns, and the degree of accuracy 1 − γ as

$$ \left( \hat{F}_Y(y) + \frac{z_{\gamma/2}^2}{2 N_s} \pm z_{\gamma/2} \sqrt{ \frac{\hat{F}_Y(y)\left(1 - \hat{F}_Y(y)\right)}{N_s} + \frac{z_{\gamma/2}^2}{4 N_s^2} } \right) \frac{1}{1 + \frac{z_{\gamma/2}^2}{N_s}} $$

where zγ/2 refers to the 1 − γ/2 quantile of the standard normal distribution (Agresti and Coull 1998). It should be noted that the true CDF FY(y) is a fixed but unknown quantity; therefore, it is an epistemic source of uncertainty and is quantified using confidence intervals. For given values of F̂Y(y) and Ns, we can estimate the percentile values of FY(y) by varying the accuracy parameter γ. From the percentile values, the entire CDF can be numerically constructed, which can then be used to obtain a PDF.

For illustration, let l and u represent the lower and upper bounds of the confidence interval corresponding to the accuracy parameter γ. Then Pr(FY(y) < u) = 1 − γ/2 and Pr(FY(y) < l) = γ/2. Hence, the CDF values of FY(y) at l and u are γ/2 and 1 − γ/2, respectively. Following the same procedure at multiple values of γ, the CDF values at the corresponding lower and upper bounds of the confidence intervals can be obtained, which can be used to construct the CDF of FY(y).
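The confidence interval can be computed directly; a sketch using only the Python standard library, with an assumed empirical failure probability as the example input:

```python
from statistics import NormalDist

def cdf_confidence_interval(f_hat, n_s, gamma=0.05):
    """Confidence interval for the true CDF value F_Y(y), given the empirical
    value f_hat = n_y / N_s, per the score interval discussed by Agresti and
    Coull (1998)."""
    z = NormalDist().inv_cdf(1.0 - gamma / 2.0)
    shrink = 1.0 / (1.0 + z * z / n_s)
    center = (f_hat + z * z / (2.0 * n_s)) * shrink
    half = z * shrink * (f_hat * (1.0 - f_hat) / n_s
                         + z * z / (4.0 * n_s ** 2)) ** 0.5
    return center - half, center + half

# Example: an assumed empirical failure probability of 1e-3 from 100,000 samples
lo, hi = cdf_confidence_interval(1e-3, 100_000)
```

Sweeping gamma over (0, 1) yields the percentiles of FY(y), from which its CDF, and hence the MCS-error PDF, can be constructed numerically.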
The next section presents techniques for the inclusion of the different types of epistemic uncertainty discussed in this section in surrogate-based reliability analysis.
3 Proposed methodology
An overview of the proposed methodology for surrogate-based reliability estimation, including different types of epistemic uncertainty, is presented in Fig. 2.
The overall approach can be divided into two stages, as shown in Fig. 2 – (1) construction of a surrogate, and (2) use of the surrogate for reliability analysis. Uncertainty sources such as statistical uncertainty in the inputs, uncertain model parameters, and model discrepancy influence the surrogate construction, thereby affecting the reliability estimate, whereas the surrogate uncertainty and MCS error do not influence the surrogate construction but only affect the reliability estimate. Details regarding the handling of the different sources of epistemic uncertainty in these two stages are discussed below.
Uncertainty quantification in reliability estimation
3.1 Stage 1: surrogate construction including epistemic uncertainty
First, the different inputs to be included in the surrogate are discussed, followed by the surrogate construction.
3.1.1 Inputs to the surrogate
In this paper, the inputs for the surrogate model include the inputs for the simulation model (X), the uncertain model parameters (Ψ), and the model discrepancy.

Consider a system with $m$ limit state functions given by

$$g_i(\mathbf{X}, \boldsymbol{\Psi}) = g_{\text{model},i}(\mathbf{X}, \boldsymbol{\Psi}) + \delta_i(\mathbf{X}), \quad i = 1, 2, \cdots, m$$

where $g_{\text{model},i}(\mathbf{X}, \boldsymbol{\Psi})$ and $\delta_i(\mathbf{X})$ are the simulation model and the model discrepancy of the $i$th limit state function, respectively. Since the KOH calibration framework is used, the model discrepancy is not modeled as a function of the model parameters; including the model parameters in the discrepancy term might be more rigorous, however. The model discrepancy is quantified using the simulation model. The performance of the overall model (simulation model plus model discrepancy) might be satisfactory over the entire range of the inputs, but the main idea of the limit state surrogate is to model the limit state as accurately as possible; its performance in the interior domain of the inputs (away from the limit state) is not of concern. Since the limit state governs the reliability estimate (and its uncertainty), we wish to model it as precisely as possible. In system reliability, it is possible in some cases that we may have calibration data on a few limit states but little data on the remaining limit states. Therefore, the construction of the limit state surrogate refines the system limit state by obtaining training points considering all the individual limit states. The focus of this work is to quantify the uncertainty in the reliability prediction by investigating various sources of epistemic uncertainty; it is based on the assumption that model calibration has already been performed. We obtain a GP surrogate for the model discrepancy from the calibration analysis.
The key idea is to include the model discrepancy terms inthe construction of the limit state surrogate and then use thesurrogate for reliability analysis. The limit-state functiongext(X,Ψ) for surrogate-based reliability analysis is formulat-ed as
$$g_{\text{ext}}(\mathbf{X}, \boldsymbol{\Psi}) = \begin{cases} g_{\text{model}}(\mathbf{X}, \boldsymbol{\Psi}) + \delta(\mathbf{X}), & \text{for a component} \\ \min\limits_{i=1,2,\cdots,m} \left\{ g_{\text{model},i}(\mathbf{X}, \boldsymbol{\Psi}) + \delta_i(\mathbf{X}) \right\}, & \text{for a series system} \\ \max\limits_{i=1,2,\cdots,m} \left\{ g_{\text{model},i}(\mathbf{X}, \boldsymbol{\Psi}) + \delta_i(\mathbf{X}) \right\}, & \text{for a parallel system} \end{cases} \quad (15)$$
Note that the model discrepancy $\delta_i(\mathbf{X})$, $i = 1, 2, \cdots, m$, is random at any given point $\mathbf{X} = \mathbf{x}$, which results in uncertainty in the response $g_{\text{ext}}(\mathbf{X}, \boldsymbol{\Psi})$. Directly constructing a surrogate for the implicit response given in Eq. (15) is not practical due to the randomness in $\delta_i(\mathbf{X})$. An explicit representation of the variability in $\delta_i(\mathbf{X})$ is required to formulate Eq. (15) as a deterministic function; this challenge is addressed using the auxiliary variable method.
The "auxiliary variable" was introduced by Sankararaman and Mahadevan (2013a, b) in the context of random variables whose distribution types and parameters are uncertain. Such variables are associated with both aleatory and epistemic uncertainty, and the auxiliary variable is used to represent the aleatory part of the overall uncertainty. Mathematically, an auxiliary variable $u$, based on the probability integral transform, is defined as

$$u = F_X(x \mid \boldsymbol{\Theta} = \boldsymbol{\theta}) = \int_{-\infty}^{x} f_X(w \mid \boldsymbol{\Theta} = \boldsymbol{\theta}) \, dw,$$

where $\boldsymbol{\theta}$ is a realization of the distribution parameters $\boldsymbol{\Theta}$, $w$ is a dummy variable used for integration, and $F_X(x \mid \boldsymbol{\Theta} = \boldsymbol{\theta})$ denotes the CDF value of the variable $X$ for a realization of $\boldsymbol{\Theta}$. For a given realization of $\boldsymbol{\Theta}$, a realization of the auxiliary variable $u$ gives a unique value of $X$; this gives a deterministic relationship between $(\boldsymbol{\Theta}, u)$ and $X$. In this work, we extend the concept of the auxiliary variable further, to represent the epistemic uncertainty in the prediction of the model discrepancy and in the reliability estimate. In general, whenever there is a stochastic mapping,
[Fig. 2 Flow chart of the proposed method for reliability analysis under uncertainty. Stage 1 (surrogate construction): statistical uncertainty in the inputs, model parameter uncertainty, and model discrepancy from model calibration feed the simulation model, its output, and the initial surrogate model, which is refined into the limit state surrogate. Stage 2 (reliability analysis with the surrogate): surrogate uncertainty and MCS error are included to produce the reliability estimate.]
S. Nannapaneni et al.
i.e., a mapping of a single value to a probability distribution, the auxiliary variable can be used to convert the stochastic mapping to a deterministic mapping.
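The probability integral transform underlying the auxiliary variable can be sketched in a few lines of Python. The function name and the Gaussian distribution family are illustrative assumptions, not from the paper:

```python
from statistics import NormalDist

# Probability integral transform: for a realization theta = (mu, sigma) of the
# distribution parameters and a realization u of the auxiliary variable
# U ~ Uniform(0, 1), x = F_X^{-1}(u | theta) is a unique, deterministic value.
def x_from_theta_u(mu, sigma, u):
    return NormalDist(mu, sigma).inv_cdf(u)

# The same u paired with different parameter realizations traces out the
# family of distributions; the mapping (theta, u) -> x is deterministic.
x1 = x_from_theta_u(10.0, 2.0, 0.5)   # median of N(10, 2)
x2 = x_from_theta_u(12.0, 2.0, 0.5)   # median of N(12, 2)
```

Because the mapping is deterministic, repeated evaluation with the same $(\boldsymbol{\theta}, u)$ pair returns the same $x$, which is what makes surrogate construction over $(\boldsymbol{\Theta}, U)$ possible.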
An auxiliary variable is used to represent the variability in the prediction of the model discrepancy in order to build the limit state surrogate. Since a GP is used to model $\delta_i(\mathbf{X})$, for any given $\mathbf{X} = \mathbf{x}$ the model discrepancy follows a Gaussian distribution, $\delta_i(\mathbf{x}) \sim N\left(\mu_{\delta_i}(\mathbf{x}), \sigma^2_{\delta_i}(\mathbf{x})\right)$. The PDF of $\delta_i(\mathbf{x})$ is $f_\delta\left(\delta_i(\mathbf{x}) \mid \mu = \mu_{\delta_i}(\mathbf{x}), \sigma = \sigma_{\delta_i}(\mathbf{x})\right)$ and the CDF value is given by

$$u_{MD,i} = F_\delta\left(\delta_i(\mathbf{x}) \mid \mu = \mu_{\delta_i}(\mathbf{x}), \sigma = \sigma_{\delta_i}(\mathbf{x})\right) = \int_{-\infty}^{\delta_i(\mathbf{x})} f_\delta\left(w \mid \mu = \mu_{\delta_i}(\mathbf{x}), \sigma = \sigma_{\delta_i}(\mathbf{x})\right) dw \quad (16)$$

where $w$ is a dummy variable for integration. The auxiliary variable follows a uniform distribution between 0 and 1 (since CDF values range from 0 to 1). Using the auxiliary variable, a realization of $\delta_i(\mathbf{x})$ for given $U_{MD,i} = u_{MD,i}$, $\mu = \mu_{\delta_i}(\mathbf{x})$, and $\sigma = \sigma_{\delta_i}(\mathbf{x})$ can be computed by

$$\delta_i(\mathbf{x}) = F^{-1}_{\delta_i(\mathbf{x})}\left(U_{MD,i} = u_{MD,i} \mid \mu = \mu_{\delta_i}(\mathbf{x}), \sigma = \sigma_{\delta_i}(\mathbf{x})\right) \quad (17)$$

where $F^{-1}_{\delta_i(\mathbf{x})}(\cdot)$ is the inverse Gaussian CDF of $\delta_i(\mathbf{x})$.
More details about the auxiliary variable approach are available in (Song et al. 2013). Combining Eqs. (15) and (17), we have
$$g_{\text{ext}}(\mathbf{X}, \boldsymbol{\Psi}, \mathbf{U}) = \begin{cases} g_{\text{model}}(\mathbf{X}, \boldsymbol{\Psi}) + F^{-1}_{\delta(\mathbf{x})}\left(U_{MD} \mid \mu = \mu_{\delta}(\mathbf{x}), \sigma = \sigma_{\delta}(\mathbf{x})\right), & \text{for a component} \\ \min\limits_{i=1,2,\cdots,m} \left\{ g_{\text{model},i}(\mathbf{X}, \boldsymbol{\Psi}) + F^{-1}_{\delta_i(\mathbf{x})}\left(U_{MD,i} \mid \mu = \mu_{\delta_i}(\mathbf{x}), \sigma = \sigma_{\delta_i}(\mathbf{x})\right) \right\}, & \text{for a series system} \\ \max\limits_{i=1,2,\cdots,m} \left\{ g_{\text{model},i}(\mathbf{X}, \boldsymbol{\Psi}) + F^{-1}_{\delta_i(\mathbf{x})}\left(U_{MD,i} \mid \mu = \mu_{\delta_i}(\mathbf{x}), \sigma = \sigma_{\delta_i}(\mathbf{x})\right) \right\}, & \text{for a parallel system} \end{cases} \quad (18)$$
Thus, the original model with stochastic output is mapped to a deterministic model, which can then be used to build a surrogate for reliability analysis. Initial GP surrogates (for the simulation model and the model discrepancy) are obtained from the KOH calibration framework, as shown in Stage 1 of Fig. 2. These surrogates can directly be used for reliability analysis. However, if better accuracy is desired, the GP surrogate of the simulation model can be further refined around the limit state (Stage 2 in Fig. 2). This local refinement requires additional training points, which are obtained by evaluating Eq. (18). After constructing the limit state surrogate, it is used directly for reliability analysis without any further runs of the original model.
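The deterministic mapping of Eq. (18) can be sketched for a hypothetical two-limit-state series system. The specific limit state functions and GP mean/standard-deviation functions below are invented for illustration only:

```python
from statistics import NormalDist

# Eq. (18), series-system branch: with the auxiliary variables u_md fixed,
# the discrepancy realizations are deterministic and so is g_ext.
def g_ext_series(x, psi, u_md, g_model, mu_delta, sigma_delta):
    vals = []
    for i, g_i in enumerate(g_model):
        # inverse Gaussian CDF of delta_i(x) at the auxiliary-variable value
        delta_i = NormalDist(mu_delta[i](x), sigma_delta[i](x)).inv_cdf(u_md[i])
        vals.append(g_i(x, psi) + delta_i)
    return min(vals)   # series system; use max(vals) for a parallel system

# Illustrative limit states and discrepancy GPs (not from the paper):
g_model = [lambda x, psi: 5.0 - psi * x, lambda x, psi: 4.0 - x ** 2]
mu_delta = [lambda x: 0.1 * x, lambda x: 0.0]
sigma_delta = [lambda x: 0.05, lambda x: 0.02]

# u_md = [0.5, 0.5] picks the median (mean) of each discrepancy distribution.
y = g_ext_series(1.0, 1.5, [0.5, 0.5], g_model, mu_delta, sigma_delta)
```

Because the auxiliary variables enter as ordinary inputs, repeated calls with the same arguments return the same value, so training points for a surrogate of $g_{\text{ext}}$ are well defined.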
3.1.2 Including epistemic uncertainty in surrogate construction
As stated in Section 2.1.2, the training points for the construction of the limit state surrogate are adaptively selected by maximizing a learning function called the Expected Feasibility Function (EFF). Several optimization techniques (gradient-based, sampling-based) are available for maximizing the EFF. In this paper, a Monte Carlo sampling-based technique is implemented, as demonstrated in the Adaptive Kriging Monte Carlo Simulation (AK-MCS) method (Echard et al. 2011). The key idea in MCS-based optimization is to generate a pool of samples of the inputs from their corresponding PDFs and choose the sample from the pool that maximizes the EFF. The epistemic uncertainty in the inputs of the limit state surrogate can be included in the surrogate construction through the sampling of the input variables, as discussed below.
Inputs to the simulation model (X) Quantification of uncertainty in the distribution parameters and distribution type of the random input variables was discussed in Section 2.2. For a realization of the distribution parameters and distribution type, the input variable is represented by a PDF; therefore, for multiple realizations of the distribution parameters and type, the input variable is represented through a family of PDFs. The traditional approach for sampling an input variable with uncertain distribution parameters and distribution type is a nested double-loop procedure, where the distribution parameters and distribution type are sampled in the outer loop and samples of the input random variable are generated in the inner loop. The double-loop sampling procedure is computationally expensive; therefore, a single-loop sampling procedure using an auxiliary variable based on the probability integral transform is used. An auxiliary variable $U_X$ is defined to represent the aleatory uncertainty in a random variable and is expressed using the probability integral transform as

$$u_X = F_X\left(x \mid \boldsymbol{\Theta}_X = \boldsymbol{\theta}_X, d_X = d_X^*\right) = \int_{-\infty}^{x} f_X\left(w \mid \boldsymbol{\Theta}_X = \boldsymbol{\theta}_X, d_X = d_X^*\right) dw \quad (19)$$

where $\boldsymbol{\Theta}_X, d_X$ represent the uncertain distribution parameters and distribution type; their realizations are represented by $\boldsymbol{\theta}_X, d_X^*$; $w$ is a dummy variable for integration; and $F_X(x)$ and $f_X(x)$ represent the CDF and PDF of $X$, respectively. Thus, for a realization of the auxiliary variable $U_X = u_X$, and distribution parameters $\boldsymbol{\theta}_X$ and distribution type $d_X^*$ generated from their corresponding PDFs, one realization of the input variable can be obtained as $x = F_X^{-1}\left(u_X \mid \boldsymbol{\Theta}_X = \boldsymbol{\theta}_X, d_X = d_X^*\right)$ using the inverse CDF method. Following this procedure, several realizations of the input can be obtained.
If a non-parametric approach is used for the representation of statistical uncertainty (i.e., no distribution type or parameters), the variable's CDF is constructed using numerical techniques and is then used for generating samples via the inverse CDF technique. If an input variable is not associated with any statistical uncertainty (i.e., its distribution type and distribution parameters are known), then the conventional inverse CDF technique (Haldar and Mahadevan 2000b) can be used for generating samples.
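The single-loop procedure can be sketched as follows. The distribution types, parameter distributions, and mixture probabilities below are illustrative stand-ins, not the values used in the paper:

```python
import math, random
from statistics import NormalDist

random.seed(0)

def sample_input_single_loop(n):
    """Single-loop sampling of an input X whose distribution type and
    parameters are uncertain (illustrative numbers): type is Normal w.p. 0.3
    or Lognormal w.p. 0.7, and the location parameter is itself ~ N(10, 0.5)."""
    xs = []
    for _ in range(n):
        u = random.random()                      # auxiliary variable U_X
        mu = random.gauss(10.0, 0.5)             # distribution-parameter sample
        dtype = 'normal' if random.random() < 0.3 else 'lognormal'
        if dtype == 'normal':
            xs.append(NormalDist(mu, 1.0).inv_cdf(u))
        else:                                    # lognormal via exp of a normal
            xs.append(math.exp(NormalDist(math.log(mu), 0.1).inv_cdf(u)))
        # one pass per sample: type, parameters, and U_X are drawn together
    return xs

xs = sample_input_single_loop(1000)
```

Each iteration draws the type, the parameters, and the auxiliary variable together, so no inner loop over the input variable is required.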
To perform sampling of correlated variables, the correlatedvariables are first transformed into an uncorrelated space usingorthogonal transformation or Cholesky decomposition.Samples are then generated individually in the uncorrelatedspace and transformed into the correlated space. Please refer toChapter 9 in (Haldar and Mahadevan 2000b) for more details.
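A minimal sketch of the Cholesky-based decorrelation for a pair of correlated standard normals (the 2x2 factorization is written out by hand for clarity; this is our own illustration):

```python
import math, random

random.seed(1)

def cholesky_2x2(rho):
    """Cholesky factor L of the correlation matrix [[1, rho], [rho, 1]],
    so that z = L @ w maps independent standard normals w to correlated z."""
    return [[1.0, 0.0], [rho, math.sqrt(1.0 - rho * rho)]]

def correlated_normal_pair(rho):
    # sample in the uncorrelated space, then transform to the correlated space
    w1, w2 = random.gauss(0, 1), random.gauss(0, 1)
    L = cholesky_2x2(rho)
    return L[0][0] * w1, L[1][0] * w1 + L[1][1] * w2

pairs = [correlated_normal_pair(0.8) for _ in range(20000)]
# the empirical correlation of the pairs should be close to 0.8
```

For more than two variables, the same idea applies with a general Cholesky factorization of the correlation matrix (see Chapter 9 of Haldar and Mahadevan 2000b).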
Uncertain model parameters (Ψ) Since we adopt the modular Bayesian approach, the obtained posterior distributions of the model parameters are conditioned on the MLE of the hyperparameters, and correlations are not calculated between the model parameters and the model discrepancy. Correlations would be calculated if the fully Bayesian approach of the KOH framework were implemented. As stated in Section 2.2.2, the uncertain model parameters (Ψ) can be treated similarly to the inputs; therefore, the sampling techniques presented above for the inputs can also be used for sampling the uncertain model parameters.
Model discrepancy (UMD) Section 3.1.1 proposed an auxiliary variable approach to explicitly represent the uncertainty in the model discrepancy. Since the auxiliary variables follow a uniform distribution between 0 and 1, their sampling can be carried out through random uniform sampling between 0 and 1.

From the model calibration analysis, as stated in Section 1, we have data on the inputs, model parameters, and simulation model output that were used for calibration. If we have k such samples, then k random samples of the model discrepancy auxiliary variables (UMD) are generated, and these data can be used as initial training points to construct the limit state surrogate. More training points are then adaptively added by maximizing the EFF until a convergence criterion is satisfied. In this paper, convergence is assumed to be achieved when the maximum EFF is less than a threshold value (EFFmax < EFF*).
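For reference, the EFF learning function of Bichon et al. (2008) can be sketched as below, using the common choices of threshold $\bar{z} = 0$ and $\epsilon = 2\sigma$. This is our own illustration of the learning function, not code from the paper, and the paper may use different settings:

```python
from statistics import NormalDist

_phi = NormalDist().pdf   # standard normal PDF
_Phi = NormalDist().cdf   # standard normal CDF

def eff(mu, sigma, zbar=0.0):
    """Expected Feasibility Function (Bichon et al. 2008) for a GP prediction
    N(mu, sigma^2) about the threshold zbar, with epsilon = 2 * sigma."""
    eps = 2.0 * sigma
    t0 = (zbar - mu) / sigma
    tm = (zbar - eps - mu) / sigma
    tp = (zbar + eps - mu) / sigma
    return ((mu - zbar) * (2.0 * _Phi(t0) - _Phi(tm) - _Phi(tp))
            - sigma * (2.0 * _phi(t0) - _phi(tm) - _phi(tp))
            + eps * (_Phi(tp) - _Phi(tm)))
```

The EFF is largest where the prediction is both close to the limit state and uncertain, which is why maximizing it over the Monte Carlo pool concentrates new training points near the limit state.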
3.2 Stage 2: uncertainty quantification in reliability analysis
3.2.1 Reliability analysis
In Stage 1, several Monte Carlo samples of the inputs (X, Ψ, UMD) were generated to carry out the maximization of the EFF for adaptive selection of training points for limit state surrogate construction. Since these samples are generated from their corresponding PDFs, they can also be used to carry out reliability analysis. Using these samples, the failure probability can be calculated as

$$p_f^s = \sum_{j=1}^{n_{MCS}} I\left( \hat{g}_{\text{ext}}\left(\mathbf{x}^{(j)}, \mathbf{u}_{MD}^{(j)}, \boldsymbol{\Psi}^{(j)}\right) \leq 0 \right) \Big/ n_{MCS} \quad (20)$$

where $\mathbf{x}^{(j)}, \mathbf{u}_{MD}^{(j)}, \boldsymbol{\Psi}^{(j)}$ are the $j$th samples of $\mathbf{X}, \mathbf{U}_{MD}, \boldsymbol{\Psi}$, which are the input variables to the simulation model, the auxiliary variables for model discrepancy, and the uncertain model parameters, respectively. $\hat{g}_{\text{ext}}(\cdot)$ represents the surrogate built to approximate the true limit state $g_{\text{ext}}(\cdot)$, and $I(\cdot)$ is the failure indicator function as defined in Section 2.1.1.
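The estimator in Eq. (20) is a plain indicator-function average; a toy stand-in for the surrogate makes the mechanics concrete (the limit state below is invented for illustration):

```python
import random

random.seed(2)

def mcs_failure_probability(g_hat, samples):
    """Eq. (20): fraction of Monte Carlo samples with g_ext_hat <= 0."""
    return sum(1 for s in samples if g_hat(*s) <= 0) / len(samples)

# Toy stand-in for the limit state surrogate: g = 3 - x, failure when x >= 3.
g_hat = lambda x: 3.0 - x
samples = [(random.gauss(0.0, 1.0),) for _ in range(100000)]
pf = mcs_failure_probability(g_hat, samples)
# pf should be close to 1 - Phi(3), about 0.00135
```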
3.2.2 Inclusion of surrogate uncertainty in the reliability estimate
Since a GP surrogate is used, the prediction at any input is aGaussian distribution with parameters dependent on the input.
In most cases, only the mean predictions $\mu_{\hat{g}_{\text{ext}}}\left(\mathbf{x}^{(j)}, \mathbf{u}_{MD}^{(j)}, \boldsymbol{\Psi}^{(j)}\right)$ are used to estimate $\hat{g}_{\text{ext}}\left(\mathbf{x}^{(j)}, \mathbf{u}_{MD}^{(j)}, \boldsymbol{\Psi}^{(j)}\right)$. The failure probability is therefore estimated as

$$p_f^s = \sum_{j=1}^{n_{MCS}} I\left( \mu_{\hat{g}_{\text{ext}}}\left(\mathbf{x}^{(j)}, \mathbf{u}_{MD}^{(j)}, \boldsymbol{\Psi}^{(j)}\right) \leq 0 \right) \Big/ n_{MCS} \quad (21)$$

where $p_f^s$ is the estimate of the failure probability obtained by using the mean predictions. If our purpose is only to estimate the expected failure probability, we may ignore the correlation between the uncertain responses (Jiang et al. 2013). However, the focus of this paper is to quantify the uncertainty in the reliability estimate; therefore, the uncertainty due to the surrogate prediction is also considered in quantifying the overall uncertainty in the reliability estimate. When the accuracy of the surrogate is high (i.e., the prediction uncertainty is low), the above treatment of using the mean predictions works well and results in a single value of the reliability estimate. If the accuracy of the model prediction is low, it becomes necessary to also include the prediction uncertainty in the reliability estimation. In order to quantify the effects of surrogate uncertainty on reliability analysis, an uncertainty quantification problem is therefore formulated as shown in Fig. 3.
(a) Correlation analysis of surrogate predictions

For any input $\boldsymbol{\varphi}^{(j)} = \left[\mathbf{x}^{(j)}, \mathbf{u}_{MD}^{(j)}, \boldsymbol{\Psi}^{(j)}\right]$, the prediction from the surrogate follows a Gaussian distribution, $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(j)}\right) \sim N\left(\mu_{\hat{g}_{\text{ext}}}\left(\boldsymbol{\varphi}^{(j)}\right), \sigma^2_{\hat{g}_{\text{ext}}}\left(\boldsymbol{\varphi}^{(j)}\right)\right)$. Also, the $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(j)}\right)$, $j = 1, 2, \cdots, n_{MCS}$, are correlated due to the covariance function assumed in a GP.

As indicated in Fig. 3, the uncertainty in the system failure probability estimate (the unconditional failure probability estimate given in Eq. (20)) due to surrogate uncertainty can be quantified by propagating the uncertainty in $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(j)}\right)$, $j = 1, 2, \cdots, n_{MCS}$, through Eq. (20). Since these are $n_{MCS}$ correlated random variables, their correlations are analyzed, based on which a sampling-based method is developed for uncertainty quantification.
In MCS-based reliability analysis, $n_{MCS}$ is usually large, and performing the correlation analysis for all $n_{MCS}$ random variables is computationally expensive. In order to reduce the number of random variables, $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(j)}\right)$, $j = 1, 2, \cdots, n_{MCS}$, are partitioned into two groups based on the probability that they contribute error to the failure probability estimate. The first group includes the responses for which the probability of making an error in the sign of $I\left(\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(j)}\right)\right)$ is very low (on the order of 0.001); for these, the mean predictions $\mu_{\hat{g}_{\text{ext}}}\left(\boldsymbol{\varphi}^{(j)}\right)$ are substituted for $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(j)}\right)$ in Eq. (20). The remaining responses form the second group and are treated as random variables. The system failure probability given in Eq. (21) then becomes (Hu and Mahadevan 2015)

$$p_f^s = \left( \sum_{i=1}^{n_{g1}} I\left( \mu_{\hat{g}_{\text{ext}}}\left(\boldsymbol{\varphi}^{(i)}\right) \leq 0 \right) + \sum_{i=1}^{n_{g2}} I\left( \hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(i)}\right) \leq 0 \right) \right) \Big/ n_{MCS} \quad (22)$$
where $n_{g1}$ and $n_{g2}$ are the numbers of samples in the first and second groups, respectively. The partition of $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(j)}\right)$, $j = 1, 2, \cdots, n_{MCS}$, is achieved based on the following function (Echard et al. 2011)

$$U_{AK}\left(\boldsymbol{\varphi}^{(j)}\right) = \mu_{\hat{g}_{\text{ext}}}\left(\boldsymbol{\varphi}^{(j)}\right) \Big/ \sigma_{\hat{g}_{\text{ext}}}\left(\boldsymbol{\varphi}^{(j)}\right) \quad (23)$$

$U_{AK}\left(\boldsymbol{\varphi}^{(j)}\right)$, the ratio of the mean to the standard deviation of the model prediction (the reciprocal of its coefficient of variation), can be used to estimate the probability of making an error in the sign of $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(j)}\right)$. The first group of responses corresponds to $U_{AK}\left(\boldsymbol{\varphi}^{(j)}\right) \geq 3.1$, and the rest of the responses fall into the second group. Defining the training points in the current GP model as $\boldsymbol{\varphi}^s$ and $g_{\text{ext}}\left(\boldsymbol{\varphi}^s\right)$, the covariance matrix of $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(k)}\right)$, $k = 1, 2, \cdots, n_{g2}$, conditioned on the training points is given by

$$\boldsymbol{\Sigma}_{p|t} = \boldsymbol{\Sigma}_{pp} - \boldsymbol{\Sigma}_{pt} \boldsymbol{\Sigma}_{tt}^{-1} \boldsymbol{\Sigma}_{pt}^T \quad (24)$$

where $\boldsymbol{\Sigma}_{pp}$, $\boldsymbol{\Sigma}_{pt}$, and $\boldsymbol{\Sigma}_{tt}$ are the covariance matrices between $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(k)}\right)$ and $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(k)}\right)$, between $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(k)}\right)$ and $g_{\text{ext}}\left(\boldsymbol{\varphi}^s\right)$, and between $g_{\text{ext}}\left(\boldsymbol{\varphi}^s\right)$ and $g_{\text{ext}}\left(\boldsymbol{\varphi}^s\right)$, respectively. Based on the covariance matrix $\boldsymbol{\Sigma}_{p|t}$, the conditional correlation matrix $\boldsymbol{\rho}_{p|t}$ of $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(k)}\right)$, $k = 1, 2, \cdots, n_{g2}$, has entries $\rho_{ij}$, the correlation between $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(i)}\right)$ and $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(j)}\right)$, $i, j = 1, 2, \cdots, n_{g2}$, conditioned on the current training points. Thus, the correlation matrix is an $n_{g2} \times n_{g2}$ square matrix with diagonal elements equal to 1.

(b) Propagation of surrogate prediction uncertainty
After obtaining the correlation matrix, a sampling-based method can be used to propagate the surrogate prediction uncertainty of $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(j)}\right)$ to the uncertainty in $p_f^s$ based on Eq. (22). To do this, samples of $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(k)}\right)$, $k = 1, 2, \cdots, n_{g2}$, are generated using the following expression (Hu and Mahadevan 2015):

$$\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(k)}\right) = \mu_{\hat{g}_{\text{ext}}}\left(\boldsymbol{\varphi}^{(k)}\right) + \sigma_{\hat{g}_{\text{ext}}}\left(\boldsymbol{\varphi}^{(k)}\right) \sum_{j=1}^{n_{g2}} \xi_j \boldsymbol{\phi}_j^T \boldsymbol{\rho}_{\cdot i} \Big/ \sqrt{\eta_j} \quad (25)$$

where $\xi_j$, $j = 1, 2, \cdots, n_{g2}$, are independent standard Gaussian variables; $\eta_j$ and $\boldsymbol{\phi}_j^T$ are the eigenvalues and eigenvectors of $\boldsymbol{\rho}_{p|t}$; and $\boldsymbol{\rho}_{\cdot i} = \left[\rho_{i1}, \rho_{i2}, \cdots, \rho_{i n_{g2}}\right]^T$.

Since the surrogate prediction at each input is a random variable, $N_{sim}$ samples are generated for each $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(k)}\right)$, $k = 1, 2, \cdots, n_{g2}$, resulting in the following sampling matrix:

$$\mathbf{g}_{N_{sim} \times n_{g2}} = \left\{ \hat{g}_{\text{ext}}(i, j) \right\}, \quad \forall i = 1, \cdots, N_{sim}, \; j = 1, \cdots, n_{g2} \quad (26)$$

Using the samples of $\hat{g}_{\text{ext}}\left(\boldsymbol{\varphi}^{(k)}\right)$, $k = 1, 2, \cdots, n_{g2}$, and Eq. (22), samples of $p_f^s$ are obtained as

$$p_f^s(j) = \left( \sum_{i=1}^{n_{g1}} I\left( \mu_{\hat{g}_{\text{ext}}}\left(\boldsymbol{\varphi}^{(i)}\right) \leq 0 \right) + \sum_{i=1}^{n_{g2}} I\left( \hat{g}_{\text{ext}}(i, j) \leq 0 \right) \right) \Big/ n_{MCS}, \quad j = 1, 2, \cdots, N_{sim} \quad (27)$$
where $p_f^s(j)$ is the $j$th sample of $p_f^s$ due to the surrogate prediction uncertainty. Using the samples $p_f^s(j)$, $j = 1, 2, \cdots, N_{sim}$, a PDF can be constructed for the system failure probability $p_f^s$. This distribution represents the uncertainty in $p_f^s$ due to surrogate uncertainty.

[Fig. 3 Effects of surrogate uncertainty on reliability analysis: the correlated surrogate predictions $\hat{g}_{\text{ext}}\left(\mathbf{x}^{(j)}, \mathbf{u}_{MD}^{(j)}, \boldsymbol{\Psi}^{(j)}\right)$, $j = 1, \cdots, n_{MCS}$, are propagated through Eq. (20) to obtain $p_f^s$.]
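The partition, conditional correlation, and eigendecomposition-based sampling steps can be sketched end-to-end. All the numbers below (means, standard deviations, correlation matrix, sample sizes) are invented for illustration; the eigendecomposition construction used here is the standard one, which Eq. (25) is algebraically equivalent to:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical GP predictions at n_MCS = 5 Monte Carlo points
mu = np.array([2.0, 0.5, -0.3, 4.0, 0.1])
sigma = np.array([0.1, 0.4, 0.3, 0.2, 0.5])

# Partition (Eq. 23): trust the mean prediction where U_AK = |mu|/sigma >= 3.1
u_ak = np.abs(mu) / sigma
group1 = u_ak >= 3.1          # sign essentially certain
group2 = ~group1              # treated as correlated random variables

# Illustrative conditional correlation (Eq. 24 output) among group-2 points
rho = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.5],
                [0.3, 0.5, 1.0]])

# Correlated sampling (Eq. 25) via the eigendecomposition of rho:
# z = xi * sqrt(eigval) @ eigvec.T has covariance V diag(eigval) V.T = rho
eigval, eigvec = np.linalg.eigh(rho)
n_sim = 5000
xi = rng.standard_normal((n_sim, rho.shape[0]))
z = xi * np.sqrt(np.maximum(eigval, 0.0)) @ eigvec.T
g2 = mu[group2] + sigma[group2] * z          # samples of group-2 predictions

# Eq. (27): one failure-probability sample per correlated realization
n_mcs = mu.size
pf_samples = ((mu[group1] <= 0).sum() + (g2 <= 0).sum(axis=1)) / n_mcs
```

A kernel density estimate (or histogram) of `pf_samples` then gives the PDF of $p_f^s$ due to surrogate uncertainty.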
3.2.3 Inclusion of MCS error in the reliability estimate

Using Eq. (27), several samples of $p_f^s$ are obtained through correlated sampling of the model predictions at several inputs. As discussed in Section 2.2.2 (c), there exists uncertainty in the estimation of each failure probability sample $p_f^s(j)$, $j = 1, 2, \cdots, N_{sim}$, due to the limited number of Monte Carlo samples (referred to here as MCS error), which results in each $p_f^s(j)$ being a random variable. To avoid confusion in the notation, the failure probability when MCS error is also considered is denoted as $p_f^{MCS}(j)$. Following the discussion of MCS error in Section 2.2.2 (c), we can construct the PDF of $p_f^{MCS}(j)$ by estimating its quantiles using $p_f^s(j)$, the number of samples $n_{MCS}$, and the degree of accuracy $1-\gamma$ as

$$\left( p_f^s(j) + \frac{z_{\gamma/2}^2}{2 n_{MCS}} \pm z_{\gamma/2} \sqrt{ \frac{p_f^s(j)\left(1-p_f^s(j)\right)}{n_{MCS}} + \frac{z_{\gamma/2}^2}{4 n_{MCS}^2} } \right) \frac{1}{1 + z_{\gamma/2}^2/n_{MCS}}, \quad (28)$$

where $z_{\gamma/2}$ represents the $1-\frac{\gamma}{2}$ quantile of a standard normal distribution. After including surrogate uncertainty and MCS error, $p_f^s$ is represented by a family of PDFs (Fig. 4b).
After the inclusion of surrogate uncertainty, the failure probability is represented using a PDF, as shown in Fig. 4a; $p_f^s(j)$ is a sample from this PDF (shown as a red dot in Fig. 4a). When MCS error is also considered, $p_f^s(j)$ is not a single value but a PDF, denoted as $p_f^{MCS}(j)$ (shown as a continuous red curve in Fig. 4b). Similarly, each sample from the PDF in Fig. 4a corresponds to a different PDF in Fig. 4b. Thus, the failure probability is represented as a family of PDFs in the presence of surrogate uncertainty and MCS error. This family of PDFs can then be integrated into an unconditional PDF (bold broken red curve in Fig. 4b) using the auxiliary variable approach described in Section 3.1.
The family of PDFs for the failure probability estimate (Fig. 4b) can be treated similarly to the family of PDFs for an input variable with uncertain distribution parameters. Note that in the case of an input variable, the auxiliary variable represents its aleatory uncertainty, whereas in the case of the failure probability estimate, it represents the epistemic uncertainty due to limited Monte Carlo samples. The single-loop sampling approach used for sampling the input variables can also be used for sampling the failure probability estimates.

Since there are $N_{sim}$ PDFs of the failure probability (i.e., $N_{sim}$ samples of $p_f^s$), $N_{sim}$ samples of the auxiliary variable $U_{MCS}$ are generated in the interval [0, 1]. Here, the auxiliary variable is used to represent the contribution of the epistemic uncertainty due to MCS error to the overall uncertainty in the reliability estimate. The reliability estimate without the inclusion of surrogate uncertainty and MCS error is a point value. However, when surrogate uncertainty and MCS error are included, the reliability estimate is represented through a family of PDFs, as shown in Fig. 4b. As discussed in Section 2.2.2 (c), the reliability estimate is represented through a PDF with some distribution parameters; here, these parameters are calculated after incorporating aleatory uncertainty, statistical uncertainty in the input variables, uncertain model parameters, model discrepancy, and surrogate uncertainty. Therefore, for each realization of the distribution parameters, the reliability estimate follows a PDF.
For one realization of the auxiliary variable and of $p_f^s$, one realization of the failure probability estimate from its unconditional PDF is obtained. For a given value of $p_f^s$, there exists an associated PDF due to MCS error, which is numerically estimated after obtaining the quantiles as stated above. Denoting the generated samples as $u_{MCS}^{(1)}, \cdots, u_{MCS}^{(N_{sim})}$, the samples of the unconditional failure probability estimate can be obtained as

$$p_f^{new}(j) = F_{MCS}^{-1}\left( u_{MCS}^{(j)} \mid p_f^s(j) \right), \quad j = 1, 2, \cdots, N_{sim} \quad (29)$$

where $F_{MCS}^{-1}(\cdot)$ is the inverse CDF of the MCS-error distribution. Based on the generated samples $p_f^{new}(j)$, the unconditional PDF of the failure probability can be estimated. This PDF represents the uncertainty in the failure probability estimate due to both surrogate prediction uncertainty and MCS error.
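One simple way to realize Eq. (29) is to read the $u$-quantile directly off the Agresti-Coull interval family of Eq. (28). This is our own approximate construction of $F_{MCS}^{-1}$, not necessarily the authors' exact numerical scheme, and the $p_f^s(j)$ values below are invented:

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(4)
nd = NormalDist()

def mcs_error_inverse_cdf(pf, n_mcs, u):
    """Approximate u-quantile of the MCS-error distribution around a failure
    probability sample pf: the Eq. (28) bound with 1 - gamma/2 = u."""
    z = nd.inv_cdf(u)                 # signed quantile; u > 0.5 -> upper bound
    center = pf + z * z / (2.0 * n_mcs)
    half = abs(z) * sqrt(pf * (1.0 - pf) / n_mcs + z * z / (4.0 * n_mcs ** 2))
    scale = 1.0 / (1.0 + z * z / n_mcs)
    return (center + half if u >= 0.5 else center - half) * scale

# One unconditional realization per (p_f^s(j), u_MCS^(j)) pair, as in Eq. (29)
pf_s_samples = [0.010, 0.012, 0.011, 0.009]          # illustrative p_f^s(j)
pf_new = [mcs_error_inverse_cdf(p, 10 ** 5, random.random())
          for p in pf_s_samples]
```

A density estimate over many such `pf_new` samples gives the unconditional PDF of the failure probability.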
[Fig. 4 Uncertainty in failure probability due to surrogate uncertainty and MCS error: (a) with only surrogate uncertainty, $p_f^s$ follows a single PDF and $p_f^s(j)$ is one sample from it; (b) with surrogate uncertainty and MCS error, each sample corresponds to a PDF $p_f^{MCS}(j)$, giving a family of PDFs together with the unconditional probability distribution.]
3.3 Summary of the proposed methodology
To summarize, the first stage consists of the construction of a limit state surrogate to approximate the true but unknown limit state (component or system). From the model calibration analysis using the KOH framework, a general-purpose GP surrogate of the simulation model is available; this is further refined by adding more training points close to the limit state, through maximization of the Expected Feasibility Function (EFF), to obtain the limit state surrogate. Along with the original input variables to the simulation model and the uncertain model parameters, the model discrepancy is also treated as an input by explicitly representing its variability through an auxiliary variable. Monte Carlo sampling-based maximization of the EFF is carried out, which involves generating a large pool of input samples from their corresponding PDFs and selecting the sample that maximizes the EFF as the new training point.
In the second stage, reliability analysis is carried out using the GP surrogate through Monte Carlo sampling. The Monte Carlo samples generated during the first stage for EFF maximization are propagated through the GP surrogate for reliability analysis. The prediction from a GP follows a Gaussian distribution, and the outputs at different inputs are correlated; this surrogate uncertainty is included in the failure probability estimate through correlated sampling of the model predictions at several inputs. Since correlated sampling at all inputs is computationally expensive, the inputs are divided into two groups depending on the probability that they affect the failure probability estimate; this probability is assessed through the ratio of the mean to the standard deviation of the model prediction (Eq. (23)). Only the mean values of the model predictions are used where this ratio is at least 3.1, whereas the randomness in the model prediction is considered for all other inputs. Several realizations of the model predictions are obtained through correlated sampling, resulting in one sample of the failure probability estimate for each realization. After consideration of surrogate uncertainty, the failure probability estimate is represented using a PDF. The consideration of MCS error (due to the limited number of Monte Carlo samples for uncertainty propagation) results in each of the failure probability samples being represented by a PDF; the failure probability estimate is thus represented by a family of PDFs. If necessary, the family of PDFs can be integrated to form an unconditional PDF, which can be reported as the failure probability estimate after considering all sources of epistemic uncertainty.
4 Numerical examples
4.1 A single component with multiple limit states
Consider a short cantilever beam subjected to a point load at its free end, as shown in Fig. 5. Two limit states, maximum deflection and maximum stress, are considered for the failure of the beam. The goal in this problem is to compute the reliability with respect to each of the individual limit states and the system reliability considering both limit states.
Assume that the free-end deflection of the beam due to the point load is modeled (according to Euler-Bernoulli beam theory) as $d(P) = PL^3/3EI$, where $E$ and $I$ represent the Young's modulus and the moment of inertia, respectively. Since the beam is short, there exists a discrepancy between the experimental observations and the predicted deflections (the Euler-Bernoulli model is not accurate for short beams). This model discrepancy is calibrated using the KOH framework. A total of 30 points were generated through Latin Hypercube Sampling (LHS) across the domain of the inputs, at which both simulation and experimental data are available. Among them, 25 points were used for calibration (of the model parameters and the model discrepancy using the KOH framework) and 5 points were used for validation. The maximum COV (coefficient of variation) among the testing points is 0.1134. Therefore, the deflection of the beam, after accounting for the model discrepancy $\delta(P)$, is given as

$$d(P) = PL^3/3EI + \delta(P) \quad (30)$$
The maximum stress is computed as

$$s(P) = PLh/2I \quad (31)$$
The load P is assumed to be aleatory with uncertain distribution type and distribution parameters. E and L are aleatory, following Gaussian distributions with known distribution parameters. The cross-section parameters (b, h) are also assumed to be aleatory with known parameters, due to geometric variations in the manufacturing process. Table 1 shows the random variables
Fig. 5 A cantilever beam with point-load at the free end
Table 1 Example 1. Variables and their statistics

Parameters            Distribution   Mean    Standard deviation
P: μP (×10^5 N)       Normal         35      0.3
P: σP (×10^5 N)       Lognormal      4       0.1
E (×10^9 N/m^2)       Normal         210     10
L (m)                 Normal         2       0.01
b (m)                 Normal         0.18    0.005
h (m)                 Normal         0.75    0.005
and their statistics used in this example, and Table 2 gives the two candidate distribution types of P and their corresponding probabilities.
The two limit state functions corresponding to deflection and stress are given as

$$g_d(P) = d_0 - d(P) \quad (32)$$
$$g_s(P) = s_0 - s(P) \quad (33)$$

where $g_d(P) < 0$ and $g_s(P) < 0$ indicate failure, and $s_0, d_0$ are the limiting values of stress and deflection. In this example, the threshold values for deflection and stress are assumed deterministic ($d_0 = 0.01$ m, $s_0 = 500$ MPa). The models in Eqs. (30) and (31) are referred to as the simulation models for deflection and stress, respectively. The system fails when either $g_d(P) < 0$ or $g_s(P) < 0$, and the system failure probability is given by

$$p_f^s = \Pr\left\{ g_d(P) \leq 0 \cup g_s(P) \leq 0 \right\} \quad (34)$$
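Using the simulation models directly, the series-system probability of Eq. (34) can be sketched with the Table 1 statistics. For brevity this sketch uses only the Normal branch for P (the Type 1 EVD branch of Table 2 is omitted) and ignores the model discrepancy, so the resulting value will not match the paper's reported results:

```python
import math, random

random.seed(5)

d0, s0 = 0.01, 500e6                  # deflection (m) and stress (Pa) limits
n, fail = 100000, 0
for _ in range(n):
    # statistical uncertainty in P: its distribution parameters are random
    muP = random.gauss(35e5, 0.3e5)
    sigP = random.lognormvariate(math.log(4e5), 0.025)   # crude stand-in
    P = random.gauss(muP, sigP)       # Normal branch only (sketch)
    E = random.gauss(210e9, 10e9)
    L = random.gauss(2.0, 0.01)
    b = random.gauss(0.18, 0.005)
    h = random.gauss(0.75, 0.005)
    I = b * h ** 3 / 12.0             # rectangular cross-section
    gd = d0 - P * L ** 3 / (3.0 * E * I)   # deflection limit state, Eq. (32)
    gs = s0 - P * L * h / (2.0 * I)        # stress limit state, Eq. (33)
    if min(gd, gs) <= 0:              # series system, Eq. (34)
        fail += 1
pf = fail / n
```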
(a) Component reliability analysis

The reliability analysis with respect to each of the individual limit states is first performed using the proposed method. Since the deflection limit state is associated with model discrepancy, the reliability analysis is performed with and without the discrepancy, in order to investigate the effectiveness of the proposed method in handling model discrepancy during surrogate construction. Tables 3 and 4 show the reliability analysis results, along with the number of function evaluations (NOF), without considering the surrogate uncertainty and MCS error, in order to compare with the simulation model estimate. In this example, the threshold EFF value for the selection of training points in surrogate construction is assumed to be 0.002. The results show that the proposed method can comprehensively estimate the component reliability in the presence of various sources of epistemic uncertainty, and that the reliability estimate can be improved by considering the model discrepancy (Table 3). Figure 6 shows the comparison between the simulation model limit state and the limit state from the surrogate, with and without consideration of model discrepancy. Since it is not possible to show the limit state contours with all the random variables, Fig. 6 shows the contours with length and load on the X and Y axes, respectively; the other random variables E, b, h are conditioned at their mean values. The surrogate constructed with model discrepancy taken into consideration is closer to the true limit state than the surrogate constructed without it. Figure 7 provides the failure probability distributions with respect to each limit state after considering both the surrogate uncertainty and MCS error, illustrating that the proposed method can effectively quantify the uncertainty in the reliability analysis results.
(b) System reliability analysis

A surrogate is constructed considering both limit states for system reliability analysis using the method discussed in Section 3.2. Figure 8 gives the PDF of the failure probability after considering both the deflection and stress limit states, showing that the proposed method can effectively quantify the uncertainty in the system failure probability estimate (Table 5).
Table 2 Distribution types and their probabilities for load P

Distribution type   Normal   Type 1 EVD
Probability         0.2      0.8
Table 3 Example 1. Component failure probability results (Deflection)

                                                  pf       NOF        ε (%)
Simulation model                                  0.0249   1 × 10^6   –
Limit state surrogate without model discrepancy   0.0057   25 + 0     77.11
Limit state surrogate with model discrepancy      0.0248   25 + 2     0.4

The NOF includes the initial number of samples from calibration (25) plus the number of newly added samples.
Table 4 Example 1. Component failure probability results (Stress)

                              pf       NOF        ε (%)
Simulation model              0.0605   1 × 10^6   –
Limit state surrogate model   0.0633   32         4.63
Fig. 6 Comparison of limit states between the simulation model and surrogate, with and without considering model discrepancy
4.2 A system of multiple components, each with a limit state
Example 1 demonstrated the proposed method for component reliability analysis and for reliability analysis of a series system. In this example, reliability analysis of a parallel system is demonstrated. Consider the two-bar system shown in Fig. 9. The system consists of two bars supporting a panel, on which a load P is applied at its center.
The two bars are assumed to be made of different materials with different failure stress characteristics. The Young's moduli of the materials (E_A, E_B) are assumed to be aleatory variables with known distribution types and distribution parameters. For illustration, the bars are assumed to be of equal length (L = 1 m), and the cross-sectional areas of the two bars are assumed to be deterministic (A_A = 0.04 m^2, A_B = 0.0625 m^2). The forces in the bars are given by
F_i = P E_i A_i / (E_A A_A + E_B A_B),   i = A, B        (35)
Let σ_A^0 and σ_B^0 represent the failure stresses of the bars; the limit-state functions are given by

g_i(X) = A_i σ_i^0 − P E_i A_i / (E_A A_A + E_B A_B),   i = A, B        (36)
The applied load P is assumed to be aleatory, but with uncertain distribution type and uncertain distribution parameters. The failure stress is assumed to be deterministic but not known precisely. Some point and interval data are assumed to be available from material testing. Table 6 provides the list of variables used in this example and their statistics.
Using the available data, non-parametric distributions are constructed for both threshold stresses using the likelihood approach discussed in Section 2, and spline-based interpolation is employed for the PDF modeling. The failure stress of a material is not calibrated; we assume that failure stress data are directly available from material testing, and we construct a non-parametric distribution to represent the uncertainty. After constructing the PDF, the failure stress is treated as any other input random variable for reliability analysis.
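As a rough illustration of this likelihood-based non-parametric construction, the sketch below discretizes the unknown PDF at grid nodes, forms the likelihood from the point data (PDF values) and interval data (CDF differences), and maximizes it; the grid, optimizer settings, and spline choice are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import CubicSpline

point_data = np.array([24.7, 24.95, 25.1, 25.3])                # bar A data (Table 6), x10^7 Pa
interval_data = [(24.6, 24.62), (24.8, 24.84), (25.1, 25.15)]

nodes = np.linspace(24.4, 25.5, 10)                             # assumed discretization grid

def trapezoid(y, x):
    # trapezoidal integration (avoids version-specific numpy helpers)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def neg_log_like(q):
    p = np.abs(q)
    p = p / trapezoid(p, nodes)                                 # normalize to a valid PDF
    pdf = CubicSpline(nodes, p)                                 # spline-based PDF model
    cdf = pdf.antiderivative()
    ll = np.sum(np.log(np.maximum(pdf(point_data), 1e-12)))     # point data: product of f(x_i)
    for a, b in interval_data:                                  # interval data: P(a < X < b)
        ll += np.log(max(cdf(b) - cdf(a), 1e-12))
    return -ll

res = minimize(neg_log_like, np.ones(len(nodes)), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-9})
p_opt = np.abs(res.x) / trapezoid(np.abs(res.x), nodes)
failure_stress_pdf = CubicSpline(nodes, p_opt)                  # usable as an input PDF
```

The resulting `failure_stress_pdf` can then be sampled like any other input random variable, as the text describes.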
For illustration purposes, the same candidate distribution types for load P and their probabilities are used as in Table 2. The system described here is a parallel system, and therefore failure occurs when both bars fail, i.e., g_A(X) < 0 and g_B(X) < 0. The system failure probability is given by
p_f^s = Pr{ g_A(X) < 0 ∩ g_B(X) < 0 }        (37)
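A minimal Monte Carlo sketch of Eq. (37), using Eqs. (35)-(36) with nominal values from Table 6, is shown below. Fixing the load distribution as Normal with the mean hyperparameter values, and fixing the failure stresses at nominal values, are simplifying assumptions; the paper additionally propagates distribution-type uncertainty, parameter uncertainty, and the non-parametric failure stress PDFs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
AA, AB = 0.04, 0.0625                     # deterministic cross-sections, m^2
sigA0, sigB0 = 25.0e7, 22.0e7             # assumed nominal failure stresses, Pa

P  = rng.normal(19e6, 1.6e6, n)           # load: Normal branch with mean hyperparameters
EA = rng.normal(210e9, 20e9, n)           # Young's modulus, bar A (Pa)
EB = rng.normal(180e9, 15e9, n)           # Young's modulus, bar B (Pa)

denom = EA * AA + EB * AB                 # shared stiffness term in Eq. (35)
gA = AA * sigA0 - P * EA * AA / denom     # Eq. (36), i = A
gB = AB * sigB0 - P * EB * AB / denom     # Eq. (36), i = B

pf_sys = np.mean((gA < 0) & (gB < 0))     # Eq. (37): parallel system, both bars fail
```

Note that the two limit states share the load P and the stiffness denominator, so the failure events are strongly correlated; the joint probability is well below the product rule would suggest for independent events.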
Fig. 7 Failure probability with respect to (a) deflection, and (b) stress
Fig. 8 System failure probability with respect to both deflection and stress limit states
Table 5 Example 1. Failure probability estimates using MCS and proposed method

  Method                        pfs      NOF        ε (%)
  Simulation model              0.0625   1 × 10^6   –
  Limit-state surrogate model   0.0648   276        3.68
Uncertainty quantification in reliability estimation
Table 7 gives the system failure probability analysis results after consideration of statistical uncertainty, model discrepancy, and uncertain model parameters, but without considering surrogate uncertainty and MCS error. Similar to the previous example, the threshold EFF value for selection of training points in surrogate construction is assumed to be 0.002. Figure 10 shows the system failure probability estimates (i) without considering surrogate uncertainty and MCS error, (ii) considering only surrogate uncertainty, and (iii) considering both surrogate uncertainty and MCS error. It can be seen that surrogate uncertainty and MCS error together result in larger uncertainty in the reliability estimate than surrogate uncertainty alone. To reduce the uncertainty in the estimate, the surrogate needs to be further refined by adding more training points, and the number of Monte Carlo samples needs to be increased to reduce the MCS error.
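The MCS error contribution above can be sketched as a density on the failure probability. Under the standard binomial/CLT approximation (an assumption here; the paper constructs the PDF more generally), an estimate pf_hat from N samples has sampling standard deviation sqrt(pf_hat(1 − pf_hat)/N); the values below are illustrative.

```python
import numpy as np
from scipy import stats

pf_hat, N = 0.0049, 10**6                     # illustrative estimate and sample size
sigma = np.sqrt(pf_hat * (1.0 - pf_hat) / N)  # standard error of the MCS estimator

mcs_error_pdf = stats.norm(loc=pf_hat, scale=sigma)  # PDF representing MCS error
lo, hi = mcs_error_pdf.interval(0.95)                # 95% interval on the true pf
```

When this density is attached to every sample of the surrogate-uncertainty distribution, the family of PDFs for the failure probability described in the conclusion is obtained.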
5 Conclusion
This paper proposed a unified framework connecting model calibration analysis to the construction of a limit-state surrogate and to estimating the uncertainty in the reliability estimate due to the incorporation of different types of epistemic uncertainty. Epistemic uncertainty due to data (statistical uncertainty) and models (model parameter uncertainty, model discrepancy, surrogate uncertainty, Monte Carlo sampling error) is considered. A parametric approach is proposed for quantification of input variables with statistical uncertainty, and non-parametric distributions are used to quantify uncertain model parameters with statistical uncertainty. First, model calibration is carried out using the KOH framework, whose outputs include GP surrogates for the simulation model and the model discrepancy. The general-purpose (global) GP surrogate of the simulation model is further refined by adding more training points close to the limit state (local refinement) to obtain the limit-state surrogate for reliability analysis; this is referred to as a hybrid approach (local refinement of the general-purpose GP surrogate). An auxiliary variable is used to represent the model discrepancy, which enables the construction of a limit-state surrogate that includes the model discrepancy. The inputs for the limit-state surrogate are the original input variables along with the uncertain model parameters and the model discrepancy. The selection of training points is carried out using a learning function
Table 6 Example 2. Variables and their statistics

  Variable                           Distribution   Mean   Standard deviation
  Load P: μ_P (×10^6 N)              Normal         19     0.3
  Load P: σ_P (×10^6 N)              Lognormal      1.6    0.1
  Young's modulus E_A (×10^9 Pa)     Normal         210    20
  Young's modulus E_B (×10^9 Pa)     Normal         180    15

  Failure stress of bar A, σ_A^0 (×10^7 Pa):
    Point data: [24.7, 24.95, 25.1, 25.3]
    Interval data: [(24.6, 24.62), (24.8, 24.84), (25.1, 25.15)]
  Failure stress of bar B, σ_B^0 (×10^7 Pa):
    Point data: [21.8, 21.92, 22.08, 22.15]
    Interval data: [(21.85, 21.88), (22.06, 22.08), (22.13, 22.16)]
Fig. 9 A two-bar system
Table 7 System failure probability estimates using simulation model and limit-state surrogate

  Method                                               pfs      NOF        Error (%)
  Simulation model (Euler model + model discrepancy)   0.0048   6 × 10^5   –
  Limit-state surrogate model                          0.0049   302        2.08
called the Expected Feasibility Function (EFF). The GP model is then used to carry out reliability analysis using Monte Carlo sampling. A single-loop sampling approach using an auxiliary variable is used for the sampling of input variables with statistical uncertainty; samples of the inputs are generated and passed through the surrogate to obtain the reliability estimate. Note that this results in a single value of the failure probability.
The model prediction using a GP model is a Gaussian distribution with parameters dependent on the input. In addition, the model predictions at different inputs are correlated due to the covariance function in the GP model. To account for the surrogate uncertainty (i.e., the variability in the prediction), correlated sampling of model predictions at several inputs is carried out and used for reliability analysis. Inclusion of surrogate uncertainty results in a probability distribution (PDF) for the failure probability. When MCS error is incorporated, each sample in the failure probability distribution is represented by a PDF, which results in a family of PDFs for the failure probability.
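The correlated-sampling idea can be sketched on a toy one-dimensional problem: each joint draw from the GP posterior at all the Monte Carlo inputs yields one realization of the limit-state surrogate, and hence one failure probability sample. The limit state g(x) = x − 2, the RBF kernel, and all sizes below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ls=1.0):
    # squared-exponential covariance between two 1-D point sets
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

g = lambda x: x - 2.0                         # toy limit state: failure when g(x) < 0

x_tr = np.linspace(0.0, 4.0, 8)               # surrogate training points
y_tr = g(x_tr)
x_mc = rng.normal(2.2, 0.5, 500)              # Monte Carlo samples of the input

K = rbf(x_tr, x_tr) + 1e-8 * np.eye(8)        # jitter for numerical stability
Ks = rbf(x_mc, x_tr)
mu = Ks @ np.linalg.solve(K, y_tr)            # GP posterior mean at the MCS inputs
cov = rbf(x_mc, x_mc) - Ks @ np.linalg.solve(K, Ks.T)

# Correlated joint draws of the prediction at all MCS inputs
# (eigen-factorization, clipping tiny negative eigenvalues):
w, V = np.linalg.eigh((cov + cov.T) / 2.0)
A = V * np.sqrt(np.clip(w, 0.0, None))
draws = mu[:, None] + A @ rng.standard_normal((500, 200))

pf_samples = np.mean(draws < 0.0, axis=0)     # one failure probability per GP realization
```

The spread of `pf_samples` is the surrogate-uncertainty contribution to the failure probability PDF; sampling each prediction independently instead would ignore the covariance structure and misstate that spread.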
Two examples demonstrated the effectiveness of the proposed method. The proposed method is able not only to address heterogeneous sources of epistemic uncertainty during system reliability analysis, but also to provide the uncertainty associated with the reliability estimate.
This work considered only time-independent reliability analysis. Future work should consider time-dependent reliability analysis. Quantities with spatial variability (and associated epistemic uncertainty), which further complicate the problem, also need to be considered.
Ultimately, the purpose of uncertainty quantification is to inform decision making, such as resource allocation or design optimization. The inclusion of epistemic uncertainty in reliability-based design optimization (RBDO) is still a topic of active research. The framework developed in this paper provides a systematic approach to quantifying and including several sources of epistemic uncertainty within reliability analysis, and therefore facilitates their subsequent inclusion in design optimization. Future work may also consider quantifying the relative contributions of different sources of uncertainty to the reliability estimate, which will facilitate resource allocation for epistemic uncertainty reduction (e.g., additional data collection vs. model refinement).
Acknowledgments The research reported in this paper was supported by funds from the Air Force Office of Scientific Research (Grant No. FA9550-15-1-0018, Technical Monitor: Dr. David Stargel). The support is gratefully acknowledged. The authors also thank the reviewers for their helpful comments.
References
Agresti A, Coull BA (1998) Approximate is better than "exact" for interval estimation of binomial proportions. Am Stat 52(2):119–126

Arendt PD, Apley DW, Chen W (2012) Quantification of model uncertainty: calibration, model discrepancy, and identifiability. J Mech Des 134(10):100908

Bernardo JM, Rueda R (2002) Bayesian hypothesis testing: a reference approach. Int Stat Rev 70(3):351–372

Bichon BJ, Eldred MS, Swiler LP, Mahadevan S, McFarland JM (2008) Efficient global reliability analysis for nonlinear implicit performance functions. AIAA J 46(10):2459–2468

Bichon BJ, McFarland JM, Mahadevan S (2011) Efficient surrogate models for reliability analysis of systems with multiple failure modes. Reliab Eng Syst Saf 96(10):1386–1395

Choi SK, Grandhi RV, Canfield RA (2004) Structural reliability under non-Gaussian stochastic behavior. Comput Struct 82(13):1113–1121

Der Kiureghian A, Ditlevsen O (2009) Aleatory or epistemic? Does it matter? Struct Saf 31(2):105–112

Du X (2010) System reliability analysis with saddlepoint approximation. Struct Multidiscip Optim 42(2):193–208

Dubourg V, Sudret B, Deheeger F (2013) Metamodel-based importance sampling for structural reliability analysis. Probab Eng Mech 33:47–57

Echard B, Gayton N, Lemaire M (2011) AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Struct Saf 33(2):145–154

Emura T, Lin YS (2015) A comparison of normal approximation rules for attribute control charts. Qual Reliab Eng Int 31(3):411–418

Faravelli L (1989) Response-surface approach for reliability analysis. J Eng Mech 115(12):2763–2781

Haldar A, Mahadevan S (2000a) Reliability assessment using stochastic finite element analysis. John Wiley & Sons, New York

Haldar A, Mahadevan S (2000b) Probability, reliability, and statistical methods in engineering design. John Wiley & Sons, New York

Hoeting JA, Madigan D, Raftery AE, Volinsky CT (1999) Bayesian model averaging: a tutorial. Stat Sci 14(4):382–401

Hu Z, Du X (2015) Mixed efficient global optimization for time-dependent reliability analysis. J Mech Des 137(5):051401

Hu Z, Mahadevan S (2015) Global sensitivity analysis-enhanced surrogate (GSAS) modeling for reliability analysis. Struct Multidiscip Optim 1–21

Hu Z, Mahadevan S, Du X (2015) Uncertainty quantification in time-dependent reliability analysis method. The ASME 2015 International Design Engineering Technical Conferences (IDETC) and Computers and Information in Engineering Conference (CIE), August 2–7, 2015, Boston, MA
Fig. 10 System failure probability estimate using simulation model andlimit state surrogate
Jiang Z, Chen W, Fu Y, Yang R (2013) Reliability-based design optimization with model bias and data uncertainty. SAE Int J Mater Manuf 6(3):502–516

Jones DR, Schonlau M, Welch WJ (1998) Efficient global optimization of expensive black-box functions. J Glob Optim 13(4):455–492

Kaymaz I (2005) Application of kriging method to structural reliability problems. Struct Saf 27(2):133–151

Kennedy MC, O'Hagan A (2001) Bayesian calibration of computer models. J R Stat Soc Ser B Stat Methodol 63(3):425–464

Ling Y, Mahadevan S (2013) Quantitative model validation techniques: new insights. Reliab Eng Syst Saf 111:217–231

Mahadevan S, Zhang R, Smith N (2001) Bayesian networks for system reliability reassessment. Struct Saf 23(3):231–251

Nannapaneni S, Mahadevan S (2015) Model and data uncertainty effects on reliability estimation. 17th AIAA Non-Deterministic Approaches Conference. AIAA SciTech, Florida

Papadrakakis M, Papadopoulos V, Lagaros ND (1996) Structural reliability analysis of elastic–plastic structures using neural networks and Monte Carlo simulation. Comput Methods Appl Mech Eng 136(1):145–163

Sankararaman S, Mahadevan S (2011) Likelihood-based representation of epistemic uncertainty due to sparse point data and/or interval data. Reliab Eng Syst Saf 96(7):814–824

Sankararaman S, Mahadevan S (2013a) Separating the contributions of variability and parameter uncertainty in probability distributions. Reliab Eng Syst Saf 112:187–199

Sankararaman S, Mahadevan S (2013b) Distribution type uncertainty due to sparse and imprecise data. Mech Syst Signal Process 37(1):182–198

Song J, Der Kiureghian A (2003) Bounds on system reliability by linear programming. J Eng Mech 129(6):627–636

Song H, Choi KK, Lee I, Zhao L, Lamb D (2013) Adaptive virtual support vector machine for reliability analysis of high-dimensional problems. Struct Multidiscip Optim 47(4):479–491

Straub D, Der Kiureghian A (2010) Bayesian network enhanced with structural reliability methods: methodology. J Eng Mech 136(10):1248–1258

Wang Z, Wang P (2015) A double-loop adaptive sampling approach for sensitivity-free dynamic reliability analysis. Reliab Eng Syst Saf 142:346–356

Wang P, Youn BD, Xi Z, Kloess A (2009) Bayesian reliability analysis with evolving, insufficient, and subjective data sets. J Mech Des 131(11):111008

Wang P, Hu C, Youn BD (2011) A generalized complementary intersection method (GCIM) for system reliability analysis. J Mech Des 133(7):071003

Youn BD, Choi KK (2004) A new response surface methodology for reliability-based design optimization. Comput Struct 82(2):241–256

Youn BD, Wang P (2006) Bayesian reliability based design optimization under both aleatory and epistemic uncertainties. 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 6–8 September, Portsmouth, Virginia