Efficient Design of Reservoir Simulation Studies for Development and Optimization

Subhash Kalla, SPE, and Christopher D. White, SPE, Louisiana State University

Summary

Development studies examine geologic, engineering, and economic factors to formulate and optimize production plans. If there are many factors, these studies are prohibitively expensive unless simulation runs are chosen efficiently.

Experimental design and response models improve study efficiency and have been widely applied in reservoir engineering. To approximate nonlinear oil and gas reservoir responses, designs must consider factors at more than two levels, not just high and low values. However, multilevel designs require many simulations, especially if many factors are being considered. Partial factorial and mixed designs are more efficient than full factorials, but multilevel partial-factorial designs are difficult to formulate. Alternatively, orthogonal arrays (OAs) and nearly orthogonal arrays (NOAs) provide the required design properties and can handle many factors. These designs span the factor space with fewer runs, can be manipulated easily, and are appropriate for computer experiments.

The proposed methods were used to model a gas well with water coning. Eleven geologic factors were varied while optimizing three engineering factors. An NOA was specified with three levels for eight factors and four levels for the remaining six factors. The proposed design required 36 simulations, compared with 26,873,856 runs for a full factorial design. Kriged response surfaces are compared with polynomial regression surfaces. Polynomial response models are used to optimize completion length, tubinghead pressure, and tubing diameter for a partially penetrating well in a gas reservoir with uncertain properties.

OAs, Hammersley sequences (HSs), and response models offer a flexible, efficient framework for reservoir simulation studies.
Complexity of Reservoir Studies

Reservoir studies require integration of geologic properties, drilling and production strategies, and economic parameters. Integration is complex because parameters such as permeability, gas price, and fluid saturations are uncertain.

In exploration and production decisions, alternatives such as well placement, artificial lift, and capital investment must be evaluated. Development studies examine these alternatives, as well as geologic, engineering, and economic factors, to formulate and optimize production plans (Narayanan et al. 2003). Reservoir studies may require many simulations to evaluate the many factor effects on reservoir performance measures, such as net present value (NPV) and breakthrough time.

Despite the exponential growth of computer memory and speed, computing accurate sensitivities and optimizing production performance is still expensive, to the point that it may not be feasible to consider all alternative models. Thus, simulation runs should be chosen as efficiently as possible. Experimental design addresses this problem statistically, and along with response models, it has been applied in engineering science (White et al. 2001; Peng and Gupta 2004; Peake et al. 2005; Sacks et al. 1989a) to
• Minimize computational costs by choosing a small but statistically representative set of simulation runs for predicting responses (e.g., recovery)
• Decrease expected error compared with nonoptimal simulation designs (i.e., sets of sample points)
• Evaluate sensitivity of responses to varying factors
• Translate uncertainty in input factors to uncertainty in predicted performance (i.e., uncertainty analysis)
• Estimate value of information to focus resources on reducing uncertainty in factors that have the most significant effect on response uncertainty, to help optimize engineering factors.
Designing and Analyzing Reservoir Studies

Experimental design and response models use interpolation to reduce the number of simulation runs required for a given level of prediction accuracy. Using simulated results, interpolation predicts reservoir responses for factor combinations that were not simulated. Experimental design chooses factor combinations for performing simulations.

Experimental Design. A set of experiments and their associated factor combinations is called a design. Ideally, designs should span the factor space completely with the fewest possible runs: good designs will generate accurate response models.

Most classic designs from experimental statistics are orthogonal and rotatable (Box et al. 1978). Such designs are intended to derive response models and estimate sensitivities. On the other hand, Monte Carlo and related methods are sampling methods (also called designs, as they are also sets of experiments); sampling methods are intended to estimate response statistics rather than to model responses (Sandor and Andras 2004). If one has computed accurate response surfaces (typically, higher-order surfaces) and stipulated factor probability distributions, response models can be used in an efficient Monte Carlo process (Halton 1960; Sobol 1967; Kalagnanam and Diwekar 1997) to compute response statistics (White et al. 2001).

Two-level factorial designs and space-filling Latin hypercube sampling (LHS) designs (Aslett et al. 1998) are frequently used for statistical analysis. To approximate nonlinear oil and gas reservoir responses accurately, designs must consider factors at more than two levels, not just high and low values as in two-level factorial designs (Box and Hunter 1961). Multilevel designs can relate responses and factors more accurately by including higher-order effects in polynomial regression.
However, multilevel designs increase the computational burden because they require more simulations to fit the greater number of model parameters. This is especially important if many factors are being considered.

Alternatives to multilevel full-factorial designs include partial factorials and mixed-level designs, which vary the number of levels used per factor; these designs are more difficult to formulate. To our knowledge, mixed multilevel partial-factorial designs have not been used in reservoir engineering.

OAs and NOAs provide the desired design properties and can handle many parameters (Hedayat et al. 1999; Xu 2002). These arrays span the design space with fewer runs, can be manipulated easily, and are appropriate for analysis of computer experiments. Mixed-level OAs can also be generated, and they are good designs for reservoir simulation studies. See the section titled "OAs."

Copyright © 2007 Society of Petroleum Engineers

This paper (SPE 95456) was accepted for presentation at the 2005 SPE Annual Technical Conference and Exhibition, Dallas, 9–12 October, and revised for publication. Original manuscript received for review 14 July 2005. Revised manuscript received for review 23 April 2007. Paper peer approved 4 May 2007.

629 December 2007 SPE Reservoir Evaluation & Engineering


Experimental Design Selection. In computer experiments, spanning the design space while constructing an accurate response surface is the decisive factor for choosing a design. Replicated design points [as in central composite (CCD) and many Box-Behnken designs (Myers and Montgomery 1995)] do not give extra accuracy and are not needed for experimental-error estimation in deterministic computer experiments; such replication may be useful for stochastic models or physical experiments (White et al. 2001). Three- or five-level designs such as Box-Behnken and CCD may require as many runs as two-level full-factorial designs if there are more than approximately five factors. Plackett-Burman screening designs (Myers and Montgomery 1995) should be used only if the confounding situation is known or can be presumed to be unimportant; they are suited to sensitivity analysis, but not to response modeling.

Another design problem is that of selecting a saturated design, which computes n regression coefficients from n runs. For k factors and q interactions (q ≤ k(k − 1)/2), there are n = 1 + k + q coefficients to be estimated. In general, a saturated design is hard to obtain from CCD and Box-Behnken designs.

OAs. OAs offer an alternative design approach. With orthogonal designs, the coefficient estimates are invariant in the sense that the estimated effect for a given factor will not change as other factors and interactions are included in or excluded from the model; estimated coefficients are independent. OAs are a generalized form of LHS (McKay et al. 1979). OAs share many properties with LHS and have additional desirable properties such as orthogonality. Under certain conditions, they are optimal (Hedayat et al. 1999).

OAs are combinatorial structures with nonnegative integers as elements. These design matrices have been extensively used in statistics, especially for experimental design (Hedayat et al. 1999). An OA is an n × k collection of elements (n rows, k columns) in which the ith column has si levels. The n × k array has strength t when all sets of t columns have an equal number of rows with the same permutation of elements. The design properties are similar to fractional factorials. In fact, OAs are fractional factorial designs; for example, the Plackett-Burman design is an OA. An OA is mixed if the levels of different factors (or columns) are different. For mixed designs, the notation (Hedayat et al. 1999) is OA(n, k, s1^k1 s2^k2 . . . sq^kq, t). There are k1 columns with s1 levels, and there are a total of k columns corresponding to the k factors, k = k1 + k2 + . . . + kq. Arrays of strength t > 2 require modest n, and these higher-strength designs yield better response models. Strength is related to the resolution and the projection properties of a design (Hedayat et al. 1999); strength is equal to partial-factorial resolution minus one. Higher-strength designs have less confounding of effects and hence simpler interpretation, albeit at a higher cost.

Sometimes the OA requires more runs than desired. For example, an OA(18, 17, 2^4 3^13, 2) may be desired (17 factors and 18 runs), but the most similar known OA is OA(36, 17, 2^4 3^13, 2), which has 36 runs. In other cases, a desired set of levels and factors may not have any associated OA; no OA(36, 14, 3^8 4^6, 2) is known. In such cases, NOAs can be used. For example, designs NOA(18, 17, 2^4 3^13, 2) and NOA(36, 14, 3^8 4^6, 2) are available, providing the designs we were seeking above. In addition, the number of runs can be decreased by using NOA designs rather than OA designs. Of course, there are tradeoffs in adopting an NOA rather than an OA. In NOA designs, columns are not orthogonal, which will induce correlation between the regressed coefficients. An algorithm to compute OA and NOA designs based on A-optimality is used in this research (Xu 2002).
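The strength property defined above can be verified mechanically. The following sketch (in Python, with a hypothetical helper `is_orthogonal_array` not from the paper) checks that every set of t columns of a candidate array contains each level combination equally often, using the classic two-level array OA(4, 3, 2^3, 2) as an example:

```python
from itertools import combinations, product

def is_orthogonal_array(rows, strength=2):
    """Check the defining OA property: every set of `strength` columns
    contains each possible level combination equally often."""
    n, k = len(rows), len(rows[0])
    for cols in combinations(range(k), strength):
        counts = {}
        for row in rows:
            key = tuple(row[c] for c in cols)
            counts[key] = counts.get(key, 0) + 1
        # every possible combination of the chosen columns' levels must
        # appear the same number of times
        levels = [sorted({row[c] for row in rows}) for c in cols]
        expected = n // len(list(product(*levels)))
        if any(counts.get(combo, 0) != expected for combo in product(*levels)):
            return False
    return True

# The classic two-level OA(4, 3, 2^3, 2): three factors in four runs.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_orthogonal_array(L4))              # True
print(is_orthogonal_array(L4, strength=3))  # False: 4 runs cannot cover 2^3 combos
```

The same check, applied column-pair by column-pair, is what an NOA relaxes: a nearly orthogonal array allows small imbalances in these counts in exchange for fewer runs.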

Benefits of OA/NOA.
• Many factors can be studied over the entire region spanned by the control factors.
• Large savings in the experimental effort result from decreasing the required number of simulations.
• Analysis is accurate because columns are orthogonal (OA only).
• OA is an adaptive design. Changing the values of the factor levels can refine an OA design; columns (factors) can be eliminated, and new columns can be added. Previous runs are efficiently reused (Hedayat et al. 1999).
• Wide ranges of factor counts and levels per factor can be used, especially with NOA.
• Excellent screening designs can be generated by two-level saturated designs (designs with the number of regressors equal to n − 1, where n is the number of runs).

Response Surface Models. The relation between input variables and output response can be represented using many different approaches.

Polynomial Models. The assumed response function is

y = f(x1, x2, . . . , xk) + ε, . . . . . . . . . . (1)

where y is the expected value of the response and x1, x2, . . . , xk are the input parameters. This method requires a suitable approximation for f, commonly a second-order model of the form (Myers and Montgomery 1995)

y = β0 + Σ_{i=1}^{k} βi xi + Σ_{i=1}^{k} βii xi² + Σ_{i=1}^{k} Σ_{j<i} βij xi xj. . . . . . . . . . (2)

In the above response model, the vector of coefficients β comprises the unknown parameter set; β can be estimated by first running experiments and tabulating the factors (inputs) and responses (outputs). The parameter set can then be estimated by regression. Linear, stepwise least squares is often used, although this introduces bias for nonorthogonal designs (Montgomery 2001).
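As a minimal sketch of this estimation step, the snippet below (illustrative only; the two-factor surface and its coefficients are invented, not from the paper) builds the quadratic design matrix of Eq. 2 and recovers β by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

def quadratic_design_matrix(X):
    """Columns [1, x1, x2, x1^2, x2^2, x1*x2] for a two-factor Eq. 2 model."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

# Synthetic "simulation" responses from a known surface (for illustration only)
X = rng.uniform(-1, 1, size=(20, 2))
beta_true = np.array([1.0, 2.0, -1.0, 0.5, 0.0, 1.5])
y = quadratic_design_matrix(X) @ beta_true

# Least-squares estimate of the coefficient vector beta
beta_hat, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(np.allclose(beta_hat, beta_true))  # True (responses are noise-free)
```

With deterministic (noise-free) simulator output and more runs than coefficients, the fit recovers the generating coefficients; with a saturated design the system is square and the fit is exact at the design points.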

Kriging. Kriging can be used to build response models. Kriging uses a linear combination of weights at known points to interpolate (Goovaerts 1997), with weights based on autocovariances or semivariograms (Deutsch and Journel 1998). The autocovariance defines the continuity of a factor with separation distance in a particular direction. Kriging has several important properties:
• The kriging equations need be solved only once for a given design, and kriging weights are then computed by matrix multiplies rather than inversions. Thus, repeated application of kriged estimates is computationally efficient.
• Unlike simpler methods such as inverse distance, kriging considers redundancy.
• Kriging is an exact interpolator; the interpolated estimate equals the observation, which is not true for unsaturated polynomial regression. Exactness is desirable for computer experiments, which have no random error (Sacks et al. 1989b).
• Far from data, the kriged estimate reverts to the neighborhood mean, whereas polynomials perform more erratically.
• Kriging is robust for a wide range of responses, because it assumes only second-order stationarity of the response over the factor space (which was already assumed in computing the polynomial response model by least squares), makes no assumptions regarding the linearity of the response with respect to the factors, and does not prescribe a particular functional form.

Let x ≡ (x1, . . . , xn)^T denote the vector of inputs, and let y denote a response. Then

y(x) = f(x) + z(x), . . . . . . . . . . (3)

where f(x) is a known (usually polynomial) function of x and is assumed to be a constant in this study, and z(x) is assumed to be a Gaussian random function with mean zero, variance σz², and correlation R(x^i, x^j) between two z values at input vectors x^i and x^j; these are two k-dimensional vectors at distinct points i and j (Sacks et al. 1989b). Here, the assumed form of the correlation function R(x^i, x^j) is

R(x^i, x^j) = exp[−Σ_{m=1}^{k} θm (xm^i − xm^j)²], . . . . . . . . . . (4)

where θ is a k-vector of correlogram ranges, and xm^i and xm^j are the mth components of sample points x^i and x^j. θi is estimated by minimizing the difference between the predicted and estimated R(x^i, x^j) for all factor differences (that is, pairs of points in the design) using maximum likelihood (Simpson et al. 1997; Jin et al. 2000). From the fitted model, a best linear unbiased predictor can be constructed (Informatics 2002):

ŷ = β̂0 + r^T(x) R⁻¹ (y − β̂0 1), . . . . . . . . . . (5)

where r(x) is an n × 1 vector of correlations between the unknown point x and the design points, given by R(x^i, x); R is the n × n correlation matrix with correlations between all the design points; and β̂0 is the least-squares estimate of β0, corresponding to the mean in simple kriging, f(x) ≡ β0.
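The predictor of Eq. 5 is compact enough to sketch directly. The toy example below (hypothetical 1D data, not from the paper's study) builds R from the Gaussian correlation of Eq. 4 with an isotropic range θ, estimates β̂0 by generalized least squares, and confirms the exact-interpolation property at a design point:

```python
import numpy as np

def corr(a, b, theta):
    """Gaussian correlation of Eq. 4 with a single isotropic range theta."""
    return np.exp(-theta * np.sum((a - b) ** 2, axis=-1))

def krige(x_new, X, y, theta=1.0):
    """Best linear unbiased predictor of Eq. 5 with a constant trend."""
    n = len(X)
    R = np.array([[corr(X[i], X[j], theta) for j in range(n)] for i in range(n)])
    Rinv = np.linalg.inv(R)
    one = np.ones(n)
    beta0 = (one @ Rinv @ y) / (one @ Rinv @ one)  # GLS estimate of the mean
    r = corr(X, x_new, theta)                      # correlations to x_new
    return beta0 + r @ Rinv @ (y - beta0 * one)

# Toy 1D check: kriging honors the data exactly at a design point.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([1.0, 3.0, 2.0])
print(round(float(krige(np.array([0.5]), X, y)), 6))  # 3.0: exact interpolation
```

In practice the inverse would be factored once per design (the first kriging property listed above), and θ would come from maximum likelihood rather than being fixed.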

Sampling for Uncertainty Analysis. Uncertainty assessment of responses is frequently done using Monte Carlo simulation. This analysis translates the specified uncertainty in factors to uncertainty in responses by randomly varying factors and computing responses.

Quasi-Monte Carlo sampling schemes such as HSs and LHS reduce the required sample size while preserving many of Monte Carlo simulation's desirable properties (Kalagnanam and Diwekar 1997). HSs have better dispersion properties, but the number of points to be generated must be specified a priori. Low-discrepancy quasirandom sequences are used in numerical integration, simulation, and optimization. They are not statistically independent, and they are designed to give more uniformity in multidimensional spaces. Therefore, they are often more efficient than random numbers in multidimensional Monte Carlo or modified Monte Carlo methods. Another appealing property of HSs is that correlations built between columns (factors) are close to zero, whereas in LHS, happenstance correlations commonly exist between the columns, degrading sampling efficiency and making correct modeling of correlated factors more difficult (Iman and Conover 1982). As a result, the HS method can span the k-dimensional space with a relatively small but still representative sample and ease correlation modeling.
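A Hammersley point set is straightforward to generate. A common construction (one of several variants; shown here as an illustrative sketch, not the paper's implementation) uses i/n for the first coordinate and radical inverses in successive prime bases for the rest:

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: reflect the base-b digits of i
    about the radix point (e.g., 6 = 110 in base 2 -> 0.011 = 0.375)."""
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += f * (i % base)
        i //= base
        f /= base
    return result

def hammersley(n, k, primes=(2, 3, 5, 7, 11, 13)):
    """First n points of a k-dimensional Hammersley set."""
    return [
        [i / n] + [radical_inverse(i, primes[d]) for d in range(k - 1)]
        for i in range(n)
    ]

pts = hammersley(4, 2)
print(pts)  # [[0.0, 0.0], [0.25, 0.5], [0.5, 0.25], [0.75, 0.75]]
```

The i/n coordinate is why the sample size must be fixed a priori, as noted above: adding points changes every first coordinate, unlike a purely sequential Halton set.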

Quasi-Monte Carlo Convergence. Because risk estimation is an important use of reservoir modeling, some checks of convergence with sample size must be performed on response models, just as conventional Monte Carlo requires convergence tests. Numerical experiments compare convergence of sample means, standard deviations, and skewness from independent samples of Monte Carlo, LHS, and HSs with 5, 10, 50, 100, 500, and 1,000 runs. HSs and LHS are compared with Monte Carlo for different distributions, including uniform, triangular, normal, log-normal, and beta distributions. In an example comparison, the input distribution is a beta distribution, and the property examined for convergence is response variance (Fig. 1). As the sample size increases, the variance of the simulated beta distribution should converge to the stipulated variance. LHS and HSs attain the required characteristics in fewer runs than Monte Carlo. Moreover, as the number of runs increases, HSs and LHS converge to the actual value monotonically, but the pure Monte Carlo estimate oscillates around the expected value, making it harder to detect convergence. For the cases with only 30 or 50 sample points, the HSs and LHS more consistently generate means and standard deviations closer to the true mean and standard deviation compared with Monte Carlo sampling.

LHS efficiently fills single-dimensional space, but HSs have lower discrepancy (Kalagnanam and Diwekar 1997), placing n points in a k-dimensional hypercube more efficiently than Monte Carlo and LHS samples. Kalagnanam and Diwekar (1997) compare the performance of HS sampling with that of LHS and Monte Carlo, finding the HS technique 3 to 100 times faster than the LHS and Monte Carlo techniques.

Designed Simulation for a Gas Coning Problem

With the motivation and description of the methods complete, we now turn to an example application.

Prediction of production rate and ultimate recovery is an important part of reservoir development and management. The production of gas reservoirs that have no associated aquifers is relatively simple to predict by analytical models. However, gas recovery from waterdrive reservoirs is harder to predict because water influx may cause water coning and trap gas. Studies have examined relationships between gas recovery, aquifer strength, reservoir properties, and completion length (McMullan and Bassiouni 2000; Valajak et al. 2001). Gas coning is an appropriate test of experimental design, simulation, and response models.

Fig. 1—Convergence of variance for different sampling techniques.

Factors Considered. Fourteen factors are examined in this study, viz., 11 geologic and three engineering variables (Table 1). Three engineering factors (tubinghead pressure, tubing diameter, and completion length) are adjusted to optimize reservoir performance, whereas the other factors are uncertain. Uncertain factors are considered because their sensitivities and interactions must be considered when optimizing engineering factors. See the section titled "Optimization Under Uncertainty." The examined factors (14 in total) occur in different components of the reservoir simulation model: reservoir geometry, reservoir properties, well data, and relative permeability curves. Most of the factors nonlinearly influence the response. See the section titled "Polynomial Response Surfaces."

Factor ranges are selected using factor probability distributions, and the numbers of factor levels are chosen based on prior expectations of sensitivity and nonlinearity. Factors expected to affect the response most, especially if these influential factors are expected to have nonlinear effects, are sampled at more levels (Table 1). The possibility of significant nonlinear effects or interactions can sometimes be inferred from an understanding of basic physics and analytical solutions.

Simulation Model. The radial model with a waterdrive mechanism comprises 26 radial grid rings and 110 layers, with no angular variation.

Radius and thickness are varied to model volumetric uncertainty. Gas zone height, h, is 100 ft for all simulations (Fig. 2). Porosity, fluid properties, and reservoir temperature are also constant. Tubing performance is integrated with the reservoir simulation model so that tubinghead pressure and tubing diameter can be varied and optimized.

The total thickness is expanded by increasing the water zone thickness. Model reservoir depth is 5,000 ft and is not varied. The ground surface temperature is 60°F, and the reservoir temperature is 120°F. The porosity of the gas zone is 25%. Irreducible water saturation, residual gas saturation, and the vertical-to-horizontal permeability ratio were varied and considered uncertain (Table 1).

The gas specific gravity is 0.65, with no nonhydrocarbon components. The gas/water relative permeability curves are for a water-wet system, and capillary pressure is assumed negligible because of the high contrast in gas and water densities and the relatively high permeability considered in this study (Armenta et al. 2003). Production is constrained by a constant tubinghead pressure and is terminated if the rate is lower than 1 MMscf/day.

Automation. Reservoir models are built for each design point. Each row in the design array has k columns (for the k parameters) and can be used to prepare data decks using scripts (White and Royer 2003). Any utility that handles strings efficiently can be used to create the data decks and include files corresponding to design points. The coded variables in the design array are rescaled to parameters with units that the simulator reads. See the section titled "Factors Considered." After data files are prepared, models are run using batch files. A commercial simulator is used (CMG Technologies 2002). Responses such as recovery factor or breakthrough time are extracted from the summary files.
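The rescale-and-substitute step can be sketched with simple string templating. Everything here is hypothetical: the deck keywords, factor names, and ranges are placeholders, not the paper's actual input decks or factor table.

```python
from string import Template

# Hypothetical deck fragment; a real template would hold a full simulator deck.
DECK_TEMPLATE = Template(
    "PERMI $permeability\nPOR $porosity\nWHP $tubinghead_pressure\n"
)

def rescale(coded, low, high):
    """Map a coded design level in [-1, 1] to engineering units."""
    return low + (coded + 1) * (high - low) / 2

# One design-array row (coded levels) becomes one data deck.
row = {"permeability": 0.0, "porosity": -1.0, "tubinghead_pressure": 1.0}
ranges = {
    "permeability": (10, 1000),          # md (illustrative range)
    "porosity": (0.15, 0.30),
    "tubinghead_pressure": (500, 1500),  # psia (illustrative range)
}
params = {name: rescale(level, *ranges[name]) for name, level in row.items()}
deck = DECK_TEMPLATE.substitute({k: str(v) for k, v in params.items()})
print(deck)
```

Looping the same substitution over all 36 design rows, then launching the decks from a batch file, reproduces the workflow described above.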

Response Model Building and Sensitivity Studies

The complexity of the gas recovery responses is expected to require a second-order regression model:

y = β0 + Σ_{i=1}^{k=14} βi xi + Σ_{i=1}^{k=14} βii xi² + Σ_{i=1}^{k=14} Σ_{j<i} βij xi xj. . . . . . . . . . (6)

This model includes one mean-effect term, 14 main effects for the 14 variables, 14 quadratic terms, and 91 (= k(k − 1)/2) interaction terms.

A full two-level factorial design for 14 factors would have 2^14 (16,384) runs. The design used here is a mixed design with three levels (eight factors) and four levels (six factors). A full factorial for such a design would be even more expensive than the two-level factorial (3^8 × 4^6 = 26,873,856 runs). A fractional factorial must be heavily fractionated to decrease the runs, and ad hoc fractionations can create complex confounding and undesirable design properties. OAs and NOAs can fractionate the designs with control of estimation error and design size. An NOA(36, 14, 3^8 4^6, 2) is chosen for the current problem.
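The run counts quoted above follow directly from the level structure, as a quick check shows:

```python
# Run counts implied by the factor/level structure in the text.
two_level_full = 2 ** 14      # all 14 factors at two levels
mixed_full = 3 ** 8 * 4 ** 6  # eight 3-level and six 4-level factors
noa_runs = 36                 # the NOA(36, 14, 3^8 4^6, 2) used here

print(two_level_full)          # 16384
print(mixed_full)              # 26873856
print(mixed_full // noa_runs)  # 746496-fold reduction vs. the mixed full factorial
```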

Because the NOA design considered here has 36 runs, the degrees of freedom will allow us to estimate only 36 terms in the response model. Only main effects and quadratic terms are considered to generate the regression surfaces; two-term and higher interactions are confounded with the estimated coefficients. Alternatively, one could attempt to include interactions by using stepwise regression.

Polynomial Regression Surfaces. Multiple linear regression fits response surface models for the 14 factors and various responses. Main and quadratic effects (29 regression coefficients; Appendix 1) fit responses that are not linear with respect to factors. The coefficients of the factors indicate the significance of each factor, because the factors are scaled. The response surfaces illustrate the sensitivities (Fig. 3). Steeper surfaces indicate greater sensitivity to the corresponding factors. Here, quadratic coefficients are smaller than the main-effect coefficients. For gas recovery, the most important factors are permeability, initial water saturation, and the non-Darcy factor. The importance of the non-Darcy factor is especially noteworthy because this effect occurs even at modest rates, and it is often neglected [or represented using a lumped non-Darcy skin, which does not have an equivalent effect on gas coning (Armenta et al. 2003; Armenta 2003)].

The coefficient of determination R^2 for this model is high (0.95), but there are few degrees of freedom, so the adjusted coefficient is modest (R^2_adj = 0.73; Table 2). Nonetheless, the regression coefficients and F-test indicate a good fit.

Fig. 2: Sketch of radial reservoir model.

Fig. 3: Gas recovery sensitivity to pressure and permeability (polynomial regression).

    632 December 2007 SPE Reservoir Evaluation & Engineering

Kriged Surfaces. Interpolation is also done using simple kriging. In the gas coning problem, x = (x1, . . . , x36)^T is the vector of inputs and y denotes gas recovery. Because of the small number of runs (n = 36) and the high dimensionality of the problem (k = 14), a simple isotropic correlation model (theta_i = theta for all i) is used; Sacks et al. (1989a) made a similar assumption. The isotropic range theta is estimated using maximum likelihood. Examining the results, recovery decreases as connate water saturation increases, as expected (Fig. 4). However, the effect is not linear, as simple volumetric analysis would imply. Moreover, the change in concavity along the Sw axis shows that the kriging method can produce surface shapes that quadratic models cannot. In addition, the fitting error at all sample data is identically zero, which is sensible for a set of deterministic computer experiments.
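
A minimal sketch of simple kriging with an isotropic exponential correlogram follows. The kernel form and the theta value are illustrative assumptions (the study estimates theta by maximum likelihood), and the data are synthetic; the exact-interpolation property noted above is checked at the end:

```python
import numpy as np

def simple_kriging(X, y, Xnew, theta, mu=None):
    """Simple kriging with an isotropic exponential correlogram
    R(h) = exp(-||h|| / theta); mu is the known (here, sample) mean."""
    if mu is None:
        mu = y.mean()
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    R = np.exp(-d / theta)                     # correlations among design points
    dnew = np.linalg.norm(Xnew[:, None, :] - X[None, :, :], axis=-1)
    r = np.exp(-dnew / theta)                  # correlations to the new points
    w = np.linalg.solve(R, y - mu)
    return mu + r @ w

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(36, 14))          # 36 runs, 14 scaled factors
y = np.sin(X[:, 0]) + 0.2 * X[:, 1]            # synthetic response
yhat = simple_kriging(X, y, X, theta=2.0)      # exact at the design points
```

At the design points, r equals R, so the predictor reproduces the data exactly, consistent with the zero fitting error described above.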

Comparison of Regression Models. Polynomial and kriging models for the gas recovery problem are evaluated using bootstrapping and jackknifing, respectively (Efron and Tibshirani 1993). These methods estimate the standard error of a simple mean. Here, a bootstrap sample of 10,000 polynomial-model residuals yields a standard error estimate of 0.038. Jackknifing the kriging model of gas recovery estimates the standard error of this model at 0.142. Jackknifing was used for the kriged results because it is much less computationally intensive.
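
The two resampling estimators can be sketched as follows; synthetic residuals stand in for the model residuals, so the numbers produced are not the study's 0.038 and 0.142:

```python
import numpy as np

rng = np.random.default_rng(2)
resid = rng.normal(0.0, 0.1, size=36)   # stand-in model residuals

# Bootstrap: resample residuals with replacement, examine the spread of the mean.
boot_means = np.array([rng.choice(resid, size=resid.size, replace=True).mean()
                       for _ in range(10_000)])
se_boot = boot_means.std(ddof=1)

# Jackknife: leave one point out at a time (n refits instead of thousands).
n = resid.size
jack_means = np.array([np.delete(resid, i).mean() for i in range(n)])
se_jack = np.sqrt((n - 1) / n * ((jack_means - jack_means.mean()) ** 2).sum())
```

The jackknife needs only n = 36 leave-one-out evaluations, which is why it is preferred when each refit (here, a kriging fit) is expensive.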

The standard errors for polynomials are smaller than those for kriged models, which implies that the polynomial model is better for sensitivity analysis and optimization. These results arise partly because the number of coefficients used for kriging is smaller than that used for the polynomial model, especially with the assumption of isotropy in the 14 dimensions. A more detailed, anisotropic correlation model might improve the kriging results. More sample points, better estimation of kriging parameters, and more levels in the design might also increase the effectiveness of kriging.

Uncertainty of Reservoir Responses

Factor uncertainty influences the optimization process in reservoir simulation. Uncertainty analysis estimates the response distribution. Uncertainty could be caused by measurement errors (especially for factors such as permeability and relative permeability), upscaling errors, and measurements at only a few, clustered locations (Narayanan 1999).

For the current gas reservoir problem, stochastic response simulation uses HSs of 1,000 realizations to sample the 14-dimensional factor space. The quasi-Monte Carlo simulations use response surfaces from NOA designs. The uncertain factors are drawn from specified probability distributions; factor distributions may be estimated using historical and analog data and knowledge of the physics controlling the factor (e.g., depositional and diagenetic processes controlling permeability). Inferred distributions may be of any form.
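
A Hammersley sequence is generated from radical inverses in prime bases. The sketch below is one common construction (the per-dimension prime assignment is a conventional choice, not taken from this study); the resulting uniform coordinates would then be mapped through inverse CDFs of the factor distributions:

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def hammersley(n, dim, primes=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41)):
    """n points in [0,1)^dim: first coordinate is i/n, the rest are radical inverses."""
    pts = np.empty((n, dim))
    pts[:, 0] = np.arange(n) / n
    for j in range(1, dim):
        pts[:, j] = [radical_inverse(i, primes[j - 1]) for i in range(n)]
    return pts

u = hammersley(1000, 14)    # low-discrepancy sample of the 14-dimensional factor space
# Map each uniform coordinate through the inverse CDF of its factor distribution,
# e.g. for a triangular distribution:
# from scipy.stats import triang; x = triang.ppf(u[:, 3], c=0.5)
```

Because the points fill the unit hypercube more evenly than pseudorandom draws, fewer realizations are needed for a given integration accuracy.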

These simulations reveal the range of possible outcomes for the response (here, gas recovery) and their probabilities of occurrence.

Uncertainty Estimates Using Different Experimental Designs. To test the reliability of the first and second moments of the response estimated with NOA(36, 14, 3^8*4^6, 2), the uncertainty analysis is repeated using response surfaces from NOA(72, 14, 3^8*4^6, 2) and NOA(108, 14, 3^8*4^6, 2). The 108-run NOA design is generated by adding the 36-run and 72-run NOA designs, because OAs are adaptive designs (adding two carefully selected designs generates a third design with good statistical properties). Response surfaces from the 72-run NOA contain main and quadratic effects as in the 36-run model and also include interactions of the engineering factors (tubinghead pressure, tubing diameter, and completion length). Response surfaces from the NOA with 108 runs contain the effects of the 72-run design plus the interactions of dominant factors as determined from the earlier, smaller designs (aquifer size, connate water saturation, and water relative permeability).

Response models from all designs predict similar mean gas recoveries (Table 3, Fig. 5). The 36-run design variance estimate is less than the 108-run design variance estimate, but the mean estimates of gas recovery from all designs are statistically indistinguishable at the 0.05 significance level, and both models satisfactorily fit the data (in terms of correlation coefficients and F statistics). However, the difference in standard deviation indicates that estimates of extrema differ, particularly between the 36-point and more densely sampled designs (Table 3).

Optimization Under Uncertainty

Response surfaces can be used with conventional optimization techniques. Although the factors nonlinearly influence the response, quadratic polynomial (response surface) optimization is fast and often accurate.

Fig. 4: Gas recovery sensitivity to water saturation and permeability (hyperkriging).

Fig. 5: Monte Carlo (MC) simulation results using different NOA designs.

The objective function is the response surface for gas recovery from the 36-run NOA. The apparent optimal solution depends on the accuracy of the response surface, and the response-surface accuracy depends on design parameters such as the numbers of levels and runs. In this section, the choice of the 36-run design to generate response surfaces for optimization is examined by comparing the optima obtained from the 36-, 72-, and 108-run NOA designs.

Formulation of the Objective Function. The optimization of gas recovery under uncertainty maximizes the expected value of this response by changing the engineering factors c, c ⊂ x (White and Royer 2003):

\bar{f}(\mathbf{c}) = \int_{\Xi} f(\mathbf{c}, \boldsymbol{\xi}) \, d\Phi(\boldsymbol{\xi}), . . . . . . . . . . . . . . . . . . . (7)

where f̄(c) is the expected value of the gas recovery, which is a function of only the engineering factors c, obtained by integrating over the uncertain factors ξ; c and ξ are nonintersecting subsets of x. The differential dΦ(ξ) involves the joint probability density of the uncertain factors. This formulation (Eq. 7) recognizes that reservoir properties such as kv/kh are rarely known precisely; averaging over all uncertain factors using the multivariate joint uncertainty distribution Φ(ξ) incorporates this risk in an average sense. Moreover, the quadratic (and possibly kriged) response models approximate the nonlinear factor-response relationships. Integrating across the ranges of uncertain parameters lowers the problem dimensionality and tends to smooth the surface and eliminate local extrema in a statistically reasonable way.
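
Eq. 7 is typically evaluated numerically by averaging the response model over a sample of the uncertain factors. The sketch below uses a hypothetical quadratic response f and independent uniform draws as a stand-in for the Hammersley sample of the 11 geologic factors:

```python
import numpy as np

rng = np.random.default_rng(3)

def f(c, xi):
    """Hypothetical quadratic response model: c are engineering factors,
    xi are uncertain factors (one c-xi interaction is included)."""
    return (0.6 + 0.3 * c[0] - 0.1 * c[0]**2
            + 0.2 * xi[..., 0] - 0.15 * c[1] * xi[..., 1])

# Sample the uncertain factors from their joint distribution (independent
# uniforms on [-1, 1] stand in for the Hammersley sample of 11 factors).
xi = rng.uniform(-1, 1, size=(1000, 11))

def fbar(c):
    """Expected response: Eq. 7 approximated by the sample average over xi."""
    return f(c, xi).mean()
```

The resulting f̄ depends only on c, so the optimization that follows is over the three engineering factors alone.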

Optimization Method and Results. If Eq. 7 is solved analytically, f̄ is a second-order equation without any interactions, because no interaction terms are assessed in the 36-run NOA design. But the uncertain factors would influence the objective function f̄ if the uncertain factors interacted with the engineering factors. Although in this example the control (c) and uncertain (ξ) factors are uncorrelated a priori (i.e., they are independently distributed), they may still interact in the response model; factor correlation and interaction are distinct issues. The uncertain (or nuisance) factors have been integrated out of f̄, but their interactions with c are folded in during the integration. Because the quadratic effects are estimated in the 36-run design, f̄ for this response surface is a paraboloid in c-space and may have an optimum within the factor space (not possible for a linear model). In this study, f̄ is obtained by numerically integrating f over the uncertain factor space using an HS, and optimization over c uses nonlinear programming.
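
Because f̄ is quadratic in c, a standard nonlinear optimizer converges quickly. The sketch below uses a hypothetical concave objective over scaled factor ranges [-1, 1]; the coefficients are chosen only so that the optimum mimics the qualitative result reported next (full completion, low tubinghead pressure, interior tubing diameter):

```python
import numpy as np
from scipy.optimize import minimize

def fbar(c):
    """Hypothetical expected gas recovery after integrating out the uncertain
    factors: a quadratic in the scaled engineering factors
    (c[0] ~ completion length, c[1] ~ tubinghead pressure, c[2] ~ diameter)."""
    c = np.asarray(c)
    return 0.6 + 0.2 * c[0] - 0.1 * c[1] - 0.15 * c[2] ** 2

# Maximize by minimizing -fbar over the scaled factor ranges [-1, 1].
res = minimize(lambda c: -fbar(c), x0=np.zeros(3), bounds=[(-1, 1)] * 3)
c_opt = res.x   # c[0] -> 1 (full completion), c[1] -> -1 (low pressure), c[2] -> 0
```

Linear terms drive their factors to the range bounds, while the quadratic term yields an interior optimum, which is exactly the behavior a paraboloidal f̄ permits.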

The optima of the 36-run and 108-run designs are close for all three factors, but the 72-run optimum deviates from both. All designs have optima near the full completion length, with an average tubing diameter around 2 in. and low tubinghead pressure. The results could be further analyzed using another design with the engineering factors and the most influential uncertain factors (aquifer size, connate water saturation, and water relative permeability) refined in the optimal region. The high initial ranges of the factors chosen for this study and the number of factors make it nearly impossible for designs to precisely locate optima unless the range is adaptively refined in the neighborhood of the optimum.

Discussion

Design Size. Performance of designs or sampling techniques depends on the sample size. The larger the sample, the more accurate the response model will be. However, beyond some threshold number of runs, the accuracy does not increase significantly. The general question is: How many runs are ideal for a given case? The answer depends on how complex the system is and how expensive the runs are. As the complexity of the system increases, the sample data set should be denser. For time-consuming problems, it is better to use fewer samples. For designed experiments, a second-order polynomial has b = (k+1)(k+2)/2 coefficients to estimate, and the number of experiments must be at least b. Giunta et al. (Giunta and Watson 1998; Giunta et al. 2003) found that for a reasonably accurate regression, a useful empirical guideline is about 1.5b runs for 5 to 10 variables, 3b runs for 10 to 20 variables, and 4.5b runs for 20 to 30 variables. For LHS, Chen and Simpson (2000) suggested 3k runs for expensive experiments, 10k for moderately expensive experiments, and 3(k+1)(k+2)/2 for low-cost experiments. These relations are reasonable and could be cross-checked as in Sacks et al. (1989a). They would be useful as part of a software interface.
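
These guidelines are simple enough to encode directly; a small helper implementing Giunta et al.'s rule as quoted above:

```python
def quadratic_terms(k):
    """Number of coefficients b in a full quadratic model of k factors."""
    return (k + 1) * (k + 2) // 2

def suggested_runs(k):
    """Giunta et al.'s empirical guideline: about 1.5b runs for up to 10
    factors, 3b for 10 to 20, and 4.5b for 20 to 30."""
    b = quadratic_terms(k)
    if k <= 10:
        return int(1.5 * b)
    if k <= 20:
        return 3 * b
    return int(4.5 * b)

print(quadratic_terms(14))   # 120
print(suggested_runs(14))    # 360
```

For the 14-factor problem here, the guideline suggests about 360 runs for a fully accurate quadratic regression, which puts the economy of the 36-run NOA (with its reduced main-plus-quadratic model) in perspective.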

Optimum of Cumulative Gas Recovery. The gas recovery optimization results show that the 36-run NOA is an efficient design for optimization, because the optima estimated from its response surfaces are very close to the results for the NOA with 108 runs. These results show that a gas well with coning and uncertain reservoir properties should be completed through the entire pay zone, and the tubinghead pressure should be kept as low as possible, to maximize gas recovery (not NPV; see next section).

Tubinghead pressure is scaled to the initial reservoir pressure, so the optimum value is a function of pi. Limits on tubinghead pressure could include facilities restrictions and sand production. Tubing diameter is optimized to a value where recovery is high: larger diameters cause water-loading problems, especially under water encroachment, whereas smaller diameters yield lower peak rates. The results are different if water handling, compression, and tubing costs are included in the objective function (see the section titled Optimization of NPV).

Factor distributions also significantly influence the optimization and uncertainty analysis. For example, an alternative uncertainty evaluation uses all factor distributions shifted to the left (using triangular distributions). Using response surfaces generated by the 108-run NOA and resampling from these different factor distributions, the uncertainty and optimization results change: average reserves increase from 41 to 52%, and the standard deviation increases from 17.7 to 26.9. The optimal tubing diameter changes from 2 to 2.3 in. The other two controllable factor optima are unaffected by the change in factor distributions. Such analyses of factor-distribution effects can be used to justify resources for additional data. For example, if aquifer properties significantly influence the uncertainty and optimization, more resources could be shifted toward acquiring aquifer data, perhaps by more in-depth analysis of seismic data.

Optimization of NPV. Economic factors such as the discount rate and gas price have a significant influence on the viability of oil and gas development projects and should be considered when optimizing controllable factors. This can be done using NPV, which may be a more important objective than cumulative gas recovery. The cost of tubing, water disposal costs (Cw), gas price (Cg), and gas pumping costs could change the optimum operating conditions.

Considering water disposal costs makes full-interval completions less attractive because they increase water production and may decrease NPV even if gas recovery is higher for full-interval completions (Fig. 6). Optimization of NPV gives a completion length where gas recovery is high (depending on Cg and the discount rate) and the cost of water treatment is acceptably low (depending on Cw and the discount rate). Similarly, tubing cost, which depends on the length and diameter of the tubing, could change the optimum (Table 4). The objective function for this economic optimization is an NPV response surface from a 36-run NOA. Assumed water treatment costs and annual discount rate are USD 0.20/bbl and 10%, respectively. Gas price is USD 4/MCF. Annual inflation in gas price and water treatment cost is assumed to be 3%. Tubing costs are incorporated in the NPV, but they do not significantly influence the decisions because the costs are small compared with revenue (less than 0.1%) at the depth considered (Fig. 7). However, considering drilling and completion costs for a larger-diameter wellbore would likely affect the optimization of NPV. Similarly, compression requires an investment, and compression fuel costs may affect optimum factor estimates, especially ptf. The lowest tubinghead pressure is preferable (high ptfD means low ptf) for higher NPV, but practical production constraints such as sand production and compression costs limit the minimum tubinghead pressure. Here, however, increased coning does not make the greatest possible drawdown suboptimal (Fig. 8).
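
An NPV objective of the kind described can be sketched as a discounted cash-flow sum. The production profiles below are illustrative placeholders, not output from the reservoir model; only the unit prices, discount rate, and inflation rate match the stated assumptions:

```python
import numpy as np

def npv(gas_mcf, water_bbl, years, cg=4.0, cw=0.20,
        discount=0.10, inflation=0.03, capex=0.0):
    """Hypothetical yearly cash-flow NPV in USD: gas revenue minus water
    disposal cost, with 3% annual price escalation and 10% discounting."""
    gas_mcf, water_bbl, years = map(np.asarray, (gas_mcf, water_bbl, years))
    esc = (1 + inflation) ** years                 # price/cost escalation
    cash = cg * esc * gas_mcf - cw * esc * water_bbl
    return float((cash / (1 + discount) ** years).sum()) - capex

years = np.arange(1, 11)
gas = np.full(10, 2000.0)      # MCF/yr (illustrative profile)
water = np.full(10, 500.0)     # bbl/yr (illustrative profile)
value = npv(gas, water, years)
```

Because water disposal cost scales with water rate while revenue scales with gas rate, a completion that trades some gas recovery for much less water production can raise this objective, which is the trade-off Fig. 6 illustrates.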

Different responses are sensitive to different factors, which complicates sensitivity analysis. The factors controlling NPV are permeability, relative permeability curve shape, and relative permeability endpoints. Aquifer support improves NPV, but very high water mobility may decrease the value of the project (saddle shape, Fig. 9). Uncertainty analysis of the NPV of reserves gives a mean value of USD 94 million and a standard deviation of USD 22 million (Fig. 10).

Gas recovery and NPV are correlated (0.77; design points are shown in Fig. 11). However, a particular gas recovery is associated with a range of NPV values (depending on operating costs, capital, and discount rates). The scatter implies that a model's rank in NPV is generally not the same as its rank in gas recovery; that is, the common practice of naming a model "the P10 model" is meaningless if more than one response is being considered.
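
The rank-disagreement point can be illustrated with synthetic Monte Carlo output; the distributions and the resulting correlation here are invented for illustration and do not reproduce the study's 0.77:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in Monte Carlo results: NPV partly driven by recovery, partly by costs.
recovery = rng.uniform(20, 60, size=1000)                # % gas recovery
npv = 2.0 * recovery + rng.normal(0, 25, size=1000)      # USD millions (illustrative)

rho = np.corrcoef(recovery, npv)[0, 1]

# Ranks in the two responses disagree for most realizations, so "the P10
# model" in recovery is generally not "the P10 model" in NPV.
rank_rec = np.argsort(np.argsort(recovery))
rank_npv = np.argsort(np.argsort(npv))
n_disagree = int((rank_rec != rank_npv).sum())
```

Even with a strong positive correlation, the cost noise reshuffles the ordering, so percentile labels attached to a single realization do not transfer between responses.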

The optimal completion length for NPV differs from the optimum for recovery (Table 4). The optimal tubinghead pressure is unchanged, and the tubing diameter changes slightly. The shorter completion decreases gas rates and recovery but also decreases water production and thus disposal costs.

Further optimization analysis could include a refined design with the three major uncertain factors, tubing diameter ranging from 1.5 to 2.5 in., tubinghead pressure with high drawdown ((pi - ptf)/pi), and completion length (Hp) from 0.5 to 1.

Conclusions

Experimental designs help examine reservoir behavior when it is affected by many factors over wide ranges. The ranges of the factors, the levels of the factors, and the number of runs are chosen depending on model complexity and the nature of the dependencies between factors and responses.

Uncertainty can be assessed efficiently using experimental designs. OAs and NOAs are more efficient than full factorials, LHS, and other partial factorial designs, and they significantly decrease the number of runs in comparison to traditional designs such as central composite (CCD) and Box-Behnken designs. HS sampling yields more efficient and/or accurate Monte Carlo sampling than LHS and pure Monte Carlo, especially in high-dimensional problems.

Polynomial models or high-dimensional kriging can be used to create response models. In the current study, polynomial regression proved better than kriging for response surfaces by a mean-squared-error test. Kriging estimates may be improved by not assuming isotropy in the factor space or by kriging the residuals of a low-order polynomial.

Response surfaces from a 36-run NOA can accurately relate responses such as NPV or gas recovery to the production parameters. Response surfaces allow optimization, sensitivity, and uncertainty studies to be done more quickly and easily than exhaustive numerical reservoir simulation. Thus, response surfaces may be a reasonable alternative to more-expensive numerical models.

    Including water treatment costs in NPV optimization decreasesthe optimum completion length from the full interval that is opti-mal for gas recovery.

    Acknowledgments

This work was supported by Landmark Graphics and Unocal. We also thank Miguel Armenta for help in formulating the reservoir model, Feng Wang for assistance with response modeling, Hong Tang for Matlab guidance, and Ozan Arslan for providing automation software and advice.

    Nomenclature

b = number of coefficients to estimate
c = set of controlled parameters
Cg = gas price ($/Mscf)
Cw = water treatment costs ($/bbl)
Dt = tubing diameter (in.)
G = initial gas in place (MMscf)
Gp = ultimate gas recovery (MMscf)
GpD = dimensionless ultimate gas recovery
h = perforated gas zone (ft)
ht = gas zone thickness (ft)
Hp = fraction of completion (h/ht)
k = number of factors
n = number of experiments
pi = initial pressure (psia)
ptf = tubinghead pressure (psia)
ptfD = dimensionless tubinghead pressure
q = number of interactions
r = vector of correlations between an unknown point and the design points
R = correlation function
R = matrix of correlations between all the design points
s = levels
t = strength
x = design used in response surfaces
y = response in response surfaces
β = coefficients for factors in response surfaces
Φ = joint probability density of the uncertain factors
ρ = correlation coefficient
θ = k-vector of exponential correlogram ranges
Ξ = set of uncertain parameters

Fig. 6: Objective function sensitivity to completion length.


References

Armenta, M. 2003. Mechanisms and Control of Water Inflow to Wells in Gas Reservoirs with Bottom Water Drive. PhD dissertation, Louisiana State U., Baton Rouge, Louisiana.

Armenta, M., White, C.D., and Wojtanowicz, A.K. 2003. Completion Length Optimization in Gas Wells. Paper presented at the Canadian International Petroleum Conference, Calgary, 10-12 June.

Aslett, R., Buck, R.J., and Duvall, S.G. 1998. Circuit Optimization Via Sequential Computer Experiments: Design of an Output Buffer. Applied Statistics 47 (1): 31-48.

Box, G.E.P. and Hunter, J.S. 1961. The 2^(k-p) Fractional Factorial Designs, Part I. Technometrics 3 (3): 311-351.

Box, G.E.P., Hunter, W.G., and Hunter, J.S. 1978. Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building. New York City: John Wiley & Sons.

CMG Technologies Launcher, Version 2002.1, Revision 4. 1978-2002. Calgary: Computer Modeling Group.

Deutsch, C.V. and Journel, A.G. 1998. GSLIB: Geostatistical Software Library and User's Guide. New York City: Oxford.

Efron, B. and Tibshirani, R.J. 1993. An Introduction to the Bootstrap. New York City: Chapman & Hall.

Giunta, A.A. and Watson, L.T. 1998. A Comparison of Approximation Modeling Techniques: Polynomial vs. Interpolating Models. Paper AIAA-1998-4758 presented at the 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, St. Louis, Missouri, 2-4 September.

Giunta, A.A., Wojtkiewicz, S.F., and Eldred, M.S. 2003. Overview of Modern Design of Experiments Methods for Computer Simulations. Paper AIAA-2003-649 presented at the Aerospace Sciences Meeting and Exhibit, Reno, Nevada, 6-9 January.

Goovaerts, P. 1997. Geostatistics for Natural Resources Evaluation. New York City: Oxford.

Halton, J.H. 1960. On the Efficiency of Certain Quasi-Random Sequences of Points in Evaluating Multi-Dimensional Integrals. Numerische Mathematik 2 (1): 84-90.

Hedayat, A.S., Sloane, N.J.A., and Stufken, J. 1999. Orthogonal Arrays: Theory and Applications. New York City: Springer.

Iman, R.L. and Conover, W.J. 1982. A Distribution-Free Approach to Inducing Rank Correlation Among Input Variables. Communications in Statistics 11 (3): 311-334.

Informatics and Mathematical Modeling. 2002. Design and Analysis of Computer Experiments, Version 2.0. Lyngby, Denmark.

Jin, R., Chen, W., and Simpson, W.T. 2001. Comparative Studies of Metamodeling Techniques Under Multiple Modeling Criteria. Structural and Multidisciplinary Optimization 23 (1): 1-13.

Kalagnanam, J.R. and Diwekar, U.M. 1997. An Efficient Sampling Technique for Off-Line Quality Control. Technometrics 39 (3): 308-319.

McKay, M.D., Conover, W.J., and Beckman, R.J. 1979. A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code. Technometrics 21 (2): 239-245.

McMullan, J.H. and Bassiouni, Z.A. 2000. Optimization of Gas-Well Completion and Production Practices. Paper SPE 58983 presented at the SPE International Petroleum Conference and Exhibition in Mexico, Villahermosa, Mexico, 1-3 February. DOI: 10.2118/58983-MS.

Montgomery, D.C. 2001. Introduction to Linear Regression Analysis. New York City: John Wiley & Sons.

Myers, R.H. and Montgomery, D.C. 1995. Response Surface Methodology: Process and Product Optimization Using Designed Experiments. New York City: John Wiley & Sons.

    Narayanan, K. 1999. Applications for Response Surfaces in ReservoirEngineering. MS Thesis, U. of Texas, Austin, Texas.

Fig. 7: Objective function sensitivity to tubing diameter.

Fig. 8: Objective function sensitivity to tubinghead pressure.

Fig. 9: NPV sensitivity to water relative permeability parameters (polynomial regression).

Fig. 10: Monte Carlo simulation result for NPV using NOA(36, 14, 3^8*4^6, 2).


Narayanan, K., Cullick, A.S., and Bennett, M. 2003. Better Field Development Decisions from Multiscenario, Interdependent Reservoir, Well, and Facility Simulations. Paper SPE 79703 presented at the SPE Reservoir Simulation Symposium, Houston, 3-5 February. DOI: 10.2118/79703-MS.

Peake, W.T., Abadah, M., and Skander, L. 2005. Uncertainty Assessment Using Experimental Design: Minagish Oolite Reservoir. Paper SPE 91820 presented at the SPE Reservoir Simulation Symposium, The Woodlands, Texas, 31 January-2 February. DOI: 10.2118/91820-MS.

Peng, C.Y. and Gupta, R. 2004. Experimental Design and Analysis Methods in Multiple Deterministic Modeling for Quantifying Hydrocarbon In-Place Probability Distribution Curve. Paper SPE 87002 presented at the SPE Asia Pacific Conference on Integrated Modeling for Asset Management, Kuala Lumpur, Malaysia, 29-30 March. DOI: 10.2118/87002-MS.

Sacks, J., Welch, W.J., Mitchell, T.J., and Wynn, H.P. 1989a. Design and Analysis of Computer Experiments. Statistical Science 4 (4): 409-435.

Sacks, J., Schiller, S.B., and Welch, W.J. 1989b. Designs for Computer Experiments. Technometrics 31 (1): 41-47.

Sandor, Z. and Andras, P. 2004. Alternative Sampling Methods for Estimating Multivariate Normal Probabilities. J. of Econometrics 120 (2): 207-234.

Simpson, W.T., Mauery, M.T., Korte, J.J., and Mistree, F. 1998. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization. Paper AIAA-98-4755 presented at the Symposium on Multidisciplinary Analysis and Optimization, St. Louis, Missouri, 2-4 September.

Sobol, I.M. 1967. On the Distribution of Points in a Cube and the Approximate Evaluation of Integrals. Computational Mathematics and Mathematical Physics 7 (4): 86-112.

Valajak, M., Novakovic, D., and Bassiouni, Z.A. 2001. Physical and Economic Feasibility of Waterflooding of Low-Pressure Gas Reservoirs. Paper SPE 69651 presented at the SPE Latin American and Caribbean Petroleum Engineering Conference, Buenos Aires, 25-28 March. DOI: 10.2118/69651-MS.

White, C.D. and Royer, S.A. 2003. Experimental Design as a Framework for Reservoir Studies. Paper SPE 79676 presented at the SPE Reservoir Simulation Symposium, Houston, 3-5 February. DOI: 10.2118/79676-MS.

White, C.D., Willis, B.J., Narayanan, K., and Dutton, S.P. 2001. Identifying and Estimating Significant Geologic Parameters With Experimental Design. SPEJ 6 (3): 311-324. SPE-74140-PA. DOI: 10.2118/74140-PA.

Xu, H. 2002. An Algorithm for Constructing Orthogonal and Nearly-Orthogonal Arrays with Mixed Levels and Small Runs. Technometrics 44 (4): 356-368.

SI Metric Conversion Factors

acre × 4.046 873 E+03 = m²
°API 141.5/(131.5 + °API) = g/cm³
Btu × 1.055 056 E+00 = kJ
ft × 3.048* E-01 = m
°F (°F - 32)/1.8 = °C

*Conversion factor is exact.

Appendix 1: Sensitivity Models

GpD = 0.53 + 0.21 piD + 0.52 khD - 0.34 SwcD - 0.02 SgcD + 0.08 krgD + 0.06 krwD + 0.06 NwD - 0.18 NgD + 0.04 ptfD + 0.06 DtD + 0.08 VaD - 0.28 kzD - 0.22 HpD - 0.40 BD + 0.11 piD² + 0.05 khD² + 0.20 SwcD² - 0.02 SgcD² + 0.06 krgD² + 0.01 krwD² + 0.27 NwD² - 0.04 NgD² + 0.14 ptfD² - 0.18 DtD² + 0.04 VaD² + 0.23 kzD² + 0.04 HpD² - 0.36 BD²,

where GpD = Gp/G.

Subhash Kalla is a PhD student in petroleum engineering at Louisiana State U.; e-mail: [email protected]. His research interests include uncertainty analysis, reservoir characterization, and data integration. Kalla holds an MS degree from Louisiana State U. in petroleum engineering and a BTech degree from the Regional Engineering College (now National Institute of Technology), Warangal, India, in chemical engineering. Christopher D. White is an associate professor of petroleum engineering at Louisiana State U.; e-mail: [email protected]. He holds a BS degree from Oklahoma U. and MS and PhD degrees from Stanford U., all in petroleum engineering. Formerly, White was a research engineer for Shell Development Company and a research scientist for the Bureau of Economic Geology, The University of Texas at Austin. His research interests include general reservoir engineering, reservoir simulation, and statistics.

Fig. 11: Correlation between NPV and gas recovery.
