Saggaf, M. - PhD Abstract


An Integrated Seismic and Well Log Analysis for the Estimation of Reservoir Properties

Muhammed M. Saggaf

Submitted to the Department of Earth, Atmospheric, and Planetary Sciences on April 11, 2000 in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Abstract

We present an integrated approach for characterizing the reservoir and estimating its properties both at the well locations and in the inter-well regions. Such an approach can be an invaluable tool for attaining a detailed, consistent, and complete characterization of the reservoir: not only does it incorporate all major sources of information that shape our understanding of the reservoir, including core descriptions, well logs, seismic data, and prior knowledge of the geological setting of the region, but it also develops means for utilizing these sources of information in a unified manner, giving rise to a coherent framework for relating them to yield an integrated reservoir model. We analyze the different components of this approach, develop methodologies for improving the prediction accuracy of each, and link the mechanisms across these components to achieve an accurate and consistent characterization of the reservoir. The issues we tackle in this thesis can be broadly divided into four categories: enhancement of the seismic resolution, estimation of the reservoir properties at the well locations, characterization of the inter-well regions, and pre-processing of the data to remedy any incompleteness or inconsistency.

The first component of the approach we present in this thesis is concerned with enhancing the resolution of the seismic data by generalizing the conventional deconvolution method to utilize proper stochastic modeling of the underlying reflection coefficients of the earth. One of the fundamental assumptions of conventional deconvolution methods is that reflection coefficients follow the white noise model. However, analysis of well logs in various regions of the world has shown that in the majority of cases reflectivity tends to depart from white noise behavior. The assumption of white noise leads to a conventional deconvolution operator that can recover only the white component of reflectivity, thus yielding a distorted representation of the desired output. Various alternative processes have been suggested to model reflection coefficients. We examine some of these processes, apply them, contrast their stochastic properties, and critique their use for modeling reflectivity. These processes include ARMA, scaling Gaussian noise, fractional Brownian motion, fractional Gaussian noise, and fractionally integrated noise. We then present a consistent framework to generalize the conventional deconvolution procedure to handle reflection coefficients that do not follow the white noise model. This framework represents a unified approach to the problem of deconvolving signals of non-white reflectivity, and describes how higher-order solutions to the deconvolution problem can be realized. We test generalized filters based on the various stochastic models and analyze their output. Since these models approximate the stochastic properties of reflection coefficients to a much better degree than white noise, they yield generalized deconvolution filters that deliver a significant improvement in the accuracy of seismic deconvolution over the conventional operator.
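To make the idea concrete, the following minimal sketch (not the derivation developed in the thesis) shows a frequency-domain Wiener deconvolution in which the assumed reflectivity power spectrum is an explicit input: a constant spectrum reproduces the conventional white-noise operator, while a power-law spectrum stands in crudely for the fractional models listed above. The sampling interval, the exponent d, and all array names are hypothetical.

    import numpy as np

    def wiener_deconv(trace, wavelet, refl_psd, noise_psd=1e-3):
        # Wiener filter W = conj(S) * P_r / (|S|^2 * P_r + P_n), where S is
        # the wavelet spectrum and P_r the assumed reflectivity power
        # spectrum; the conventional operator is the special case of a
        # flat (white) P_r.
        n = len(trace)
        S = np.fft.rfft(wavelet, n)
        W = np.conj(S) * refl_psd / (np.abs(S) ** 2 * refl_psd + noise_psd)
        return np.fft.irfft(W * np.fft.rfft(trace), n)

    n, dt = 512, 0.004                              # hypothetical 4 ms sampling
    f = np.fft.rfftfreq(n, dt)
    white_psd = np.ones(f.size)                     # conventional assumption
    d = -0.2                                        # hypothetical fractional exponent
    nonwhite_psd = np.maximum(f, f[1]) ** (2 * d)   # power-law reflectivity model

    rng = np.random.default_rng(0)
    refl = rng.standard_normal(n)
    wavelet = np.exp(-0.5 * ((np.arange(64) - 32.0) / 4.0) ** 2)
    trace = np.fft.irfft(np.fft.rfft(refl) * np.fft.rfft(wavelet, n), n)
    est = wiener_deconv(trace, wavelet, nonwhite_psd)

In this toy setting, swapping white_psd for nonwhite_psd is the only difference between the conventional filter and a generalized one.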

In the second component, we aim to provide an accurate and consistent characterization of the reservoir properties at the well locations, since the description of the reservoir invariably relies on its sampling at these locations. We tackle the task of identifying lithological and depositional facies from well logs using two distinct approaches: competitive networks and fuzzy logic. Competitive networks are a special class of neural networks that perform vector quantization of the input data by competitive learning. They are uncomplicated one-layer or two-layer networks that are small, computationally efficient, inherently well suited to classification and pattern identification, and avoid the difficulties associated with back-propagation networks and statistical methods. This approach can be applied in two different modes, depending on the availability of core information. In the unsupervised mode, the well is segregated into distinct facies classes based solely on the internal behavior of the logs, without the use of core information. In the supervised mode, the lithological and depositional facies present in uncored wells are identified by making use of the interdependence of observed core and log data in proximate wells that have been cored, and correlating this with the behavior of the logs in the uncored wells.
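As a rough illustration of the unsupervised mode, the sketch below implements winner-take-all competitive learning on standardized log samples; the class count, learning rate, and log names are hypothetical choices, not those used in the thesis.

    import numpy as np

    def competitive_train(samples, n_classes=4, epochs=20, lr=0.05, seed=0):
        # Winner-take-all vector quantization: each training sample pulls
        # its nearest prototype toward itself, so the prototypes converge
        # to cluster centers that define the facies classes.
        rng = np.random.default_rng(seed)
        protos = samples[rng.choice(len(samples), n_classes, replace=False)].copy()
        for _ in range(epochs):
            for x in rng.permutation(samples):
                winner = np.argmin(((protos - x) ** 2).sum(axis=1))
                protos[winner] += lr * (x - protos[winner])
        return protos

    def classify(samples, protos):
        d = ((samples[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)          # facies class per depth sample

    # Illustrative use: rows = depth samples, columns = standardized logs
    # (e.g. gamma ray, density, neutron porosity).
    logs = np.random.default_rng(1).standard_normal((300, 3))
    protos = competitive_train(logs)
    facies = classify(logs, protos)

In the supervised mode, the resulting classes would then be labeled by matching them against the core descriptions of proximate cored wells.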

Fuzzy logic represents the degree of fit of a particular observation to the definition of a set via membership functions that describe the fuzzy boundaries of that set. There are two principal advantages of this approach. First, it represents a natural way to capture and describe vagueness, uncertainty, and imperfection in the data, as fuzzy logic is intrinsically well suited to characterizing vague and imperfectly defined knowledge (a situation encountered in most geological data), and it can yield models that are simpler and more robust than those based on crisp logic. Second, it provides a means of conveniently updating existing geological data while fully honoring those data. In both the competitive networks and fuzzy logic approaches, quantitative confidence measures are ascribed to the results of the analysis. These measures describe how well the procedure can identify the facies given uncertainties in the data, and both approaches can be enhanced by incorporating existing human experience and geological principles into the inference process, in the form of formulated static and dynamic constraints that guide that process. Additionally, both approaches are automatic, easy to apply, robust in the presence of noise, can handle data of large size and multiple log types, and do not suffer from input space distortion or non-monotonous generalization (data overfitting). The results of the two methods are in general comparable, and cross-validation tests show that their predicted facies are in considerable agreement with the actual facies observed in core analysis.
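The following toy example illustrates the membership-function idea under invented Gaussian memberships: each facies is described by a fuzzy center and boundary width per log, an observation receives a degree of fit in [0, 1] for each facies, and the winning membership doubles as a simple confidence measure. The facies definitions and log values are fabricated for illustration only.

    import numpy as np

    def gauss_membership(x, center, width):
        # Degree of fit of observation x to a fuzzy set with the given
        # center and fuzzy boundary width (1.0 at the center, ~0 far away).
        return np.exp(-0.5 * ((x - center) / width) ** 2)

    # Hypothetical fuzzy definitions: per-facies (center, width) pairs for
    # two logs, gamma ray (API) and bulk density (g/cc).
    facies_defs = {
        "shale": ((110.0, 15.0), (2.55, 0.05)),
        "sand":  ((45.0, 10.0), (2.35, 0.05)),
    }

    def facies_memberships(gr, rhob):
        out = {}
        for name, ((gc, gw), (dc, dw)) in facies_defs.items():
            # Combine per-log memberships with a fuzzy AND (minimum).
            out[name] = min(gauss_membership(gr, gc, gw),
                            gauss_membership(rhob, dc, dw))
        return out

    m = facies_memberships(gr=52.0, rhob=2.38)
    best = max(m, key=m.get)       # predicted facies
    confidence = m[best]           # membership doubles as confidence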

The third component combines the two sources of information discussed above (seismic and well data) to extend the knowledge obtained at the well locations through the use of the seismic data, so as to attain an accurate and consistent characterization of the reservoir in the inter-well regions. There are two principal aims of this component: to estimate the point-values of the quantitative reservoir properties (such as porosity), and to provide automatic stratigraphic interpretation of the seismic data by identifying and mapping the facies present in the reservoir. To estimate the point-values of porosity from seismic data, we present an approach that utilizes regularized back-propagation and radial basis neural networks. Both types of networks have inherent smoothness characteristics that alleviate the non-monotonous generalization problem associated with traditional networks and help to avert overfitting the data. The approach we present has four advantages over the traditional methods: 1) it is inherently non-linear and there is no need to linearize it, so it is quite adept at capturing the intrinsic non-linearity of the problem; 2) it is virtually model-free, as no a priori theoretical operator is required to link the reservoir properties to the observed seismic response; 3) a starting model is not needed, and therefore the final outcome is not dependent on the proper choice of that initial guess; and 4) it is naturally smooth, hence it has much more monotonous generalization behavior than traditional neural network methods and is not prone to overfitting. The results obtained from cross-validation tests indicate that this approach can be quite adept at estimating the porosity distribution of the reservoir, and the accuracy of the results remained consistent as the network parameters (size and training length) were varied. In contrast, the results produced by the traditional back-propagation network were inconsistent: the network gave acceptable results only when the optimal parameters were used, and its accuracy deteriorated significantly as soon as deviations from these optimal parameters occurred.
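A minimal sketch of the radial-basis flavor of this idea follows, assuming seismic attributes have been extracted at the well locations and paired with measured porosity; the ridge term plays the role of the regularization that keeps the mapping smooth. The attribute count, kernel width, and synthetic data are all hypothetical.

    import numpy as np

    def rbf_fit(X, y, centers, width, ridge=1e-2):
        # Gaussian radial basis regression with a ridge penalty: the
        # penalty keeps the weights small and the mapping smooth, which
        # is what averts overfitting the well control.
        G = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(2)
                   / (2 * width ** 2))
        A = G.T @ G + ridge * np.eye(len(centers))
        return np.linalg.solve(A, G.T @ y)

    def rbf_predict(X, centers, width, w):
        G = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(2)
                   / (2 * width ** 2))
        return G @ w

    # Illustrative use: rows = seismic attribute vectors at the wells,
    # y = porosity measured at the corresponding well locations.
    rng = np.random.default_rng(2)
    X = rng.standard_normal((80, 4))           # 4 hypothetical attributes
    y = 0.2 + 0.05 * np.tanh(X[:, 0])          # synthetic porosity
    centers = X[rng.choice(80, 10, replace=False)]
    w = rbf_fit(X, y, centers, width=1.0)
    porosity = rbf_predict(X, centers, width=1.0, w=w)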

For the classification and identification of the reservoir facies from seismic data, we employ an approach based on competitive networks. As mentioned earlier, these networks are naturally non-linear and inherently well suited to classification and pattern identification. This approach avoids many of the difficulties associated with the methods traditionally utilized for this task, such as multivariate statistics, linear Bayesian inference, expert systems, and back-propagation networks (which are most suitable for point-value estimation rather than quantitative classification). Moreover, this approach can be adapted to perform either classification of the seismic facies based entirely on the characteristics of the seismic response, without requiring the use of any well information, or automatic identification and labeling of the facies where well information is available. The former is of prime use for oil prospecting in new regions, where few or no wells have been drilled, whereas the latter is most useful in development fields, where the information gained at the wells can be conveniently extended to the inter-well regions. It is especially valuable where 3D seismic surveys are available, as an areal map of the reservoir limits may be extracted from the seismic survey using this method. Cross-validation tests on synthetic and real seismic data demonstrated that the method can be an effective means of mapping the reservoir heterogeneity. For synthetic data, the output of the method showed considerable agreement with the actual geologic model used to generate the seismic data, while for the real data application, the predicted facies accurately matched those observed at the wells. Moreover, the resulting map corroborates our existing understanding of the reservoir and shows substantial similarity to the low-frequency geologic model constructed by interpolating the well information, while adding significant detail and enhanced resolution to that model.
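A sketch of the labeling step in the supervised variant, assuming the seismic traces have already been clustered into an areal map of class indices (for instance by a competitive network as above): each cluster is named by a majority vote of the facies observed at the wells that fall inside it. The grid size, well positions, and facies names are invented.

    import numpy as np

    def label_clusters(cluster_map, well_ij, well_facies, n_clusters):
        # Assign each seismic-facies cluster the majority facies of the
        # wells located within it, turning an unsupervised map into a
        # labeled one; clusters with no well control stay unlabeled.
        labels = {}
        for c in range(n_clusters):
            votes = [f for (i, j), f in zip(well_ij, well_facies)
                     if cluster_map[i, j] == c]
            labels[c] = max(set(votes), key=votes.count) if votes else None
        return labels

    # Illustrative 2D map of cluster indices over the survey area, plus
    # hypothetical well positions and core-derived facies.
    rng = np.random.default_rng(3)
    cluster_map = rng.integers(0, 3, size=(50, 50))
    well_ij = [(10, 12), (30, 40), (25, 5)]
    well_facies = ["sand", "shale", "sand"]
    labels = label_clusters(cluster_map, well_ij, well_facies, n_clusters=3)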

The fourth component of the approach aims to remedy the incompleteness and inconsistency of the core and well data at the early gathering and inspection stages. The accuracy of any quantitative method that subsequently attempts to extract geologic information from the data can only be as good as the accuracy of the data itself. We present two approaches in this thesis for accomplishing this task. To remedy the incompleteness of the data, we utilize regularized back-propagation networks to enhance wells of limited log suites by estimating the missing logs in these wells. This is achieved by analyzing the interdependence of the various log types in a well that has a complete suite of logs, and then applying the network to proximate wells whose log suites are incomplete to estimate the missing logs in those wells.
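A minimal sketch of this log-prediction scheme: a one-hidden-layer network trained by gradient descent with weight decay, standing in for the regularized back-propagation network of the thesis. The layer size, learning rate, decay factor, and the choice of sonic as the missing log are all hypothetical.

    import numpy as np

    def train_log_predictor(X, y, hidden=8, epochs=2000, lr=0.01,
                            decay=1e-3, seed=0):
        # X holds the available (standardized) logs at each depth of the
        # fully logged well; y is the log to be predicted. Weight decay
        # is the regularization term that keeps the mapping smooth.
        rng = np.random.default_rng(seed)
        W1 = rng.standard_normal((X.shape[1], hidden)) * 0.1
        b1 = np.zeros(hidden)
        W2 = rng.standard_normal(hidden) * 0.1
        b2 = 0.0
        for _ in range(epochs):
            h = np.tanh(X @ W1 + b1)                      # forward pass
            err = (h @ W2 + b2) - y                       # residual
            gW2 = h.T @ err / len(y) + decay * W2         # backward pass
            gb2 = err.mean()
            dh = np.outer(err, W2) * (1 - h ** 2)
            gW1 = X.T @ dh / len(y) + decay * W1
            gb1 = dh.mean(axis=0)
            W1 -= lr * gW1
            b1 -= lr * gb1
            W2 -= lr * gW2
            b2 -= lr * gb2
        return W1, b1, W2, b2

    def predict_log(X, W1, b1, W2, b2):
        return np.tanh(X @ W1 + b1) @ W2 + b2

    # Illustrative use: train on a well with a complete suite (inputs:
    # gamma ray, density, neutron; target: sonic), then apply to a
    # proximate well missing the sonic log.
    rng = np.random.default_rng(4)
    X_full = rng.standard_normal((400, 3))
    sonic = 0.7 * X_full[:, 1] - 0.3 * X_full[:, 2]   # synthetic target
    params = train_log_predictor(X_full, sonic)
    X_nearby = rng.standard_normal((350, 3))          # well lacking sonic
    sonic_est = predict_log(X_nearby, *params)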

To remedy the inconsistency of the data, we present an approach that assigns depth corrections to core plugs by computing a coherence measure between the core and log data and maximizing that measure. This automatic correction resolves the inconsistencies between core and log information and gives rise to much better agreement between the two data sets. Moreover, the resulting correction is not only automatic, and thus averts the considerable time and effort required by manual procedures, but is also more accurate and less affected by subjective human performance than those procedures.
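As a rough sketch of the idea, the code below scans candidate depth shifts and keeps the one that maximizes the correlation coefficient between core and log values, a simple stand-in for the coherence measure developed in the thesis; the sampling grid, shift range, and synthetic data are hypothetical.

    import numpy as np

    def best_depth_shift(core, log, max_shift=20):
        # Scan candidate shifts (in samples) and keep the one that
        # maximizes the correlation between the core values and the
        # co-located log values.
        best, best_coh = 0, -np.inf
        for s in range(-max_shift, max_shift + 1):
            if s >= 0:
                a, b = core[s:], log[:len(log) - s]
            else:
                a, b = core[:s], log[-s:]
            m = min(len(a), len(b))
            coh = np.corrcoef(a[:m], b[:m])[0, 1]
            if coh > best_coh:
                best, best_coh = s, coh
        return best, best_coh

    # Illustrative use: porosity from core plugs vs. porosity derived
    # from the density log on a common depth grid; the shift that
    # maximizes coherence becomes the depth correction for the plugs.
    rng = np.random.default_rng(5)
    log_phi = np.convolve(rng.standard_normal(260), np.ones(5) / 5, "same")
    core_phi = np.roll(log_phi, 7)[:250] + 0.05 * rng.standard_normal(250)
    shift, coh = best_depth_shift(core_phi, log_phi)   # recovers shift = 7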