tNavigator — advanced technologies for hydrodynamic simulations of oil and gas reservoirs
Tom Robinson, Vasilii Shelkov
April 2015
Rock Flow Dynamics
About the company
History highlights
- 2005: RFD was founded and acquired first clients
- 2010: first large corporate license deals:
- co-marketing agreement and VC investment from Intel Capital in 2010
- sales and marketing office in Houston in 2011, Kuala Lumpur in 2012, London in 2014
- by 2014 RFD had sold about 400 active licenses worldwide; the number of existing and evaluating clients exceeded 160!
ongoing evaluations: PETRONAS, KOC, ADNOC/ADCO, PDO, Shell, ExxonMobil, YPF,…
Development center: 50 mathematicians, physicists and software engineers based in Moscow, Russia
Worldwide sales & marketing:
- Houston, London, Kuala Lumpur and agents in UAE, Canada, China (CGG), Vietnam, Australia (Petrosys), Ghana
Users worldwide: 70 oil & gas companies in 15 countries
Map legend: cluster users; compositional (E300 input data format); thermal compositional (STARS input data format)
Testimonials
"The t-Navigator technology represents a game-changer for us compared to other reservoir simulation software in our organization. We not only can tackle far more complex reservoir models with the software, but we are also able to fully exploit the exceptional speed of t-Navigator in combination with our assisted history matching software to significantly reduce project cycle times. This in turn has made reservoir simulation a much more valuable tool to our organization."
Larry Murray, Manager, Waterflood Modeling, Occidental Oil and Gas, California Operations
"I would like to express my appreciation for RFD's outstanding support and thank your development team for adding surfactant modeling capabilities to tNavigator. Use of this feature in our simulation work has advanced our understanding of surfactant flooding potential in the Spraberry oil field in West Texas."
Texas oil and gas exploration and production company
http://rfdyn.com/clients/testimonials/
"Having been a user of reservoir simulation for over 30 years, I was looking for the next step change in technology that allowed us to do the things we wanted to do, at the resolution we wanted, in an acceptable timeframe and at reasonable cost. RFD achieved this, and I'm sure will achieve a significant part of the reservoir simulation marketplace as others realize that this is a step change in the performance/price value driver."
Steve Flew, Technical Director, Petrofac Malaysia
Publications
SPE 138071, 2010 New realistic hydraulic and technogenic fracture modeling approach in full-scale dynamic models
SPE 147021, 2011 A new approach to numerical simulation of fluid flow in fractured shale gas reservoirs (>1300 downloads)
SPE 162091, 2012 Numerical modeling of salty reservoir waterflooding with fresh water
SPE 162090, 2012 Sector modeling of giant fields
SPE 162020, 2013, together with Novatek: Optimum wells completions using simulation…
SPE 163580, 2013, together with Heriot-Watt University: Using multi-target functions for automatic history matching…
SPE 164872, 2013, together with Saudi Aramco, Imperial College: Impact of Completion on Wellbore Skin Effect
SPE 168723-MS, 2013, together with Pioneer Natural Resources: Investigation of flow and imbibition processes in Spraberry fracture…
SPE 163090, 2012 Advances in modeling of giant reservoirs (>400 downloads); invited to the plenary talk "Advances in modeling of giant fields" organized by Saudi Aramco in 2012
#7 most downloaded SPE paper in April 2013
SPE 171225, 2014 Samotlor field simulation using high resolution grids (approaching gigacell!!)
What is tNavigator?
tNavigator
- Full featured, finite difference dynamic reservoir simulator
- Designed from scratch for maximum parallel performance
- Integrated SW tool: simulation engine + pre + postprocessor
- User interactivity, detailed results monitoring at runtime
- Works best on multicore workstations and clusters
Reasons why engineers choose tNavigator
Performance Workstation: Interactive desktop supercomputer
3D runtime interactivity and 3 - 5 times parallel performance boost at no charge!
Cluster: Hybrid technology + Remote GUI
Record breaking parallel scalability: 10 - 100 times faster than other old/new commercial simulators
Remote GUI: runtime control, results monitoring
Plug&Play support for Petrel/Eclipse workflows - reads native E100/300, Tempest/MORE, IMEX/STARS (in progress), formats are converted “on the fly”
- simulation results can be exported as E100 binaries
Easy to learn, universal software - no training is necessary to get started!
Advanced stuff: fractures, drainage matrix, models auto split/merge, handling “super sparse” models, etc.
Pricing: we don’t charge per core, MR option is available – 100s of cores could be used!
Documentation, training
Worldwide support, 24/7: the support team receives about 20 - 30 e-mails and phone calls per day
More than 40 training courses for users of various levels
About 10 training classes were delivered yearly to more than 100 engineers in 2011 - 2014 in Russia, Europe, the USA, China and the Middle East
Documentation
User's Manual – physics, numerical methods, approximations used,
the list of supported keywords (1500+ pages)
User's Guide – “how to” for GUI, practical examples, recommendations (400 pages)
Computing infrastructure
In 2013 - 14, RFD invested ~$700,000 in cluster hardware (Moscow Engineering Center and Houston office):
- 320 cores, 16 nodes (dual Xeon E5-2680v2), FDR InfiniBand 56 Gb/s, 2.048 TB RAM, 120 TB disk
- software development
- regression tests
- advanced simulation research
- 1280 cores, 64 nodes (dual Xeon E5-2680v2), FDR InfiniBand 56 Gb/s, 8.2 TB RAM, 200 TB disk
Universal software architecture
Input data formats: E100, E300, STARS, TEMPEST/MORE, IMEX
Internal data model: variables (molar densities, pressure), storage (block MSR)
Preconditioner: block parallel ILU(0)
Parallel solver: time: fully implicit (equations, wells, aquifers, surface network); space: finite volume; linear solver: BCGS; technology: system threads, MPI, "hybrid"
Results export: E100 binaries
Runtime interactive GUI (Windows, Linux, remote client for cluster): pre-processor, monitoring, post-processor
Runs on console/workstation or MPI/cluster
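The solver boxes above (fully implicit in time, finite volume in space, BCGS with a block parallel ILU(0) preconditioner) can be illustrated with a minimal sketch. This is not tNavigator code: SciPy's generic spilu is used as a stand-in for block ILU(0), and a toy 1D pressure system stands in for the real block MSR Jacobian.

```python
# Minimal sketch: ILU-preconditioned BiCGStab solve of a sparse "Jacobian"
# (a simple 1D finite-volume pressure system, one Newton step).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, LinearOperator, bicgstab

n = 1000                                   # number of grid blocks (toy example)
trans = np.full(n - 1, 5.0)                # face transmissibilities
accum = np.full(n, 1.0)                    # accumulation (compressibility) terms

# Assemble a tridiagonal system A x = b
main = accum.copy()
main[:-1] += trans
main[1:] += trans
A = sp.diags([-trans, main, -trans], offsets=[-1, 0, 1], format="csc")
b = np.zeros(n)
b[0] = 100.0                               # source term, e.g. an injector block

# ILU preconditioner (scipy's spilu as a stand-in for block parallel ILU(0))
ilu = spilu(A, fill_factor=1.0)
M = LinearOperator(A.shape, matvec=ilu.solve)

x, info = bicgstab(A, b, M=M, atol=1e-10)
print("converged" if info == 0 else f"info={info}", "max |dp| =", float(np.abs(x).max()))
```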
Integration with other packages
Golder Associates: FracMan is using tNavigator in the US and Germany to simulate fine-grid reservoir models with DFNs generated by FracMan
Petroleum Experts (PETEX): the announcement is expected this month; the pre-release is available
Petrel RE: there is a simple way to do it through the Workflow Editor
MEPO: "out of the box" through the supported data input/export in E100/E300 formats
EnAble: tNavigator generates the necessary trigger files for Enable EndOfTheRun
Support of standard functionality
for “black oil”
Black oil, live oil, wet gas (E100, MORE and IMEX formats):
- multi-component, FI and AIM calculations, dual porosity, dual permeability
- corner point, general mesh formulation (NNC, LGR, faults, pinch, etc.), unstructured grids
- multi-reservoir option
- BHP, THP well controls, rate controls, group controls
- multiple flow (saturation function), PVT (including multi-branching) and equilibrium regions
- 3-phase relative permeabilities (end-point scaling), gravity, capillary effects
- vertical, deviated, horizontal, multi-segmented wells, fractures
- segmented well density model
- aquifers (constant flux, Fetkovich, Carter-Tracy, numerical); see the influx sketch after this list
- temperature option, tracers, (nano) polymer flooding, desalination for reservoirs containing salt
- 3D streamline modeling, drainage matrix analysis, injection efficiency evaluation
- surface network option, D-factor, GPP controls, VFP lifting tables, ACTION, auto well drilling option
- loading LAS data, trajectories, comparison with dynamic well profiles
- API tracking, gravity drainage, hysteresis, ASP injection, diffusion, adsorption, CBM (coal bed methane)
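Below is a minimal, hedged sketch of the Fetkovich analytical aquifer named in the list above. It only illustrates the textbook influx formula; the function name and the example numbers are illustrative and do not reflect tNavigator keywords or internals.

```python
# Fetkovich aquifer sketch: cumulative water influx from a declining boundary pressure.
import numpy as np

def fetkovich_influx(p_i, w_ei, j_aq, p_res, dt):
    """p_i: initial aquifer pressure; w_ei: max encroachable water (ct*Wi*p_i);
    j_aq: aquifer productivity index; p_res: boundary pressure per step; dt: step sizes."""
    we_cum = 0.0
    p_aq = p_i
    influx = []
    for p_r, step in zip(p_res, dt):
        d_we = (w_ei / p_i) * (p_aq - p_r) * (1.0 - np.exp(-j_aq * p_i * step / w_ei))
        we_cum += d_we
        p_aq = p_i * (1.0 - we_cum / w_ei)      # material-balance update of aquifer pressure
        influx.append(we_cum)
    return np.array(influx)

# Example: boundary pressure declining from 245 to 200 bar over ten 30-day steps
print(fetkovich_influx(p_i=250.0, w_ei=5.0e6, j_aq=100.0,
                       p_res=np.linspace(245, 200, 10), dt=np.full(10, 30.0)))
```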
Support of standard functionality
for “compositional” models
Equation of state calculation (E300 and MORE input data format):
- FI and AIM calculations, dual porosity, dual permeability, general mesh formulation
- multiple EOS (Peng-Robinson, Redlich-Kwong, Soave-Redlich-Kwong) and EOS regions (see the sketch after this list)
- compositional properties and EOS type for production system (surface conditions)
- relative permeability and capillary pressure may depend on surface tension
- relative permeability scaling with respect to composition
- velocity dependent relative permeabilities
- special treatment for oil and gas relative permeabilities near the critical point
- gas plants, gas fuel, sales and re-injection, multistage separators
- gas field model (gas Daily Contracted Quantity, DCQ)
- mixture injection (multicomponent and multiphase (WAG))
- production and injection surface networks
- molecular diffusion, adsorption and desorption, coal bed methane (CBM)
- segments of multi-segment wells that represent sub-critical valves
- pressure maintenance regions
- support of full MPI and GUI functionality: runtime results monitoring, interactivity on clusters/workstations
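As a small illustration of the Peng-Robinson EOS mentioned above, here is a single-component Z-factor sketch. It is not tNavigator code: mixtures, mixing rules and flash calculations are omitted, and the function name and the methane constants are purely illustrative.

```python
# Single-component Peng-Robinson compressibility factor (textbook form).
import numpy as np

R = 8.314  # J/(mol K)

def pr_z_factor(T, P, Tc, Pc, omega):
    """Return the vapor-like and liquid-like Z (largest/smallest real cubic root)."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    return real[-1], real[0]

# Example: methane at 350 K and 10 MPa (Tc = 190.6 K, Pc = 4.599 MPa, omega = 0.011)
print(pr_z_factor(T=350.0, P=10.0e6, Tc=190.6, Pc=4.599e6, omega=0.011))
```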
Support of standard functionality
for “thermal compositional” models
K-values based thermal (E300 and STARS input data format):
- FI and AIM calculations, dual porosity, dual permeability, general mesh formulation
- K-values for hydrocarbon components via tables or via correlation formulas (also for surface); see the sketch after this list
- support for solid phase and chemical reactions for the in-situ combustion process
- steam injection, mixture injection (multicomponent and multiphase (WAG))
- individual component densities and viscosities of the liquid phases as functions of temperature and pressure
- enthalpies of hydrocarbon components and rock as functions of temperature
- porosity dependence on temperature and pressure
- thermal conductivity dependence on thermal conductivities of the mobile phases, solid phase and rock
- the analytical model of heat loss between the reservoir and surroundings
- analytical, semi-analytical and numerical aquifers
- two heater models (with constant energy injection rate and depending on temperature gradient)
- selectable thermal primary variables (energy density or temperature)
- relative permeability scaling with respect to composition and temperature
- support of full MPI and GUI functionality: runtime results monitoring, interactivity on clusters/workstations
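To illustrate the K-value correlation route mentioned above, here is a sketch using the classic Wilson correlation. The slides do not say which correlations tNavigator actually supports, so this is only a generic example with illustrative component data.

```python
# Wilson correlation K-values: K_i = (Pc_i / P) * exp(5.373 * (1 + omega_i) * (1 - Tc_i / T))
import numpy as np

def wilson_k_values(T, P, Tc, Pc, omega):
    """T in K; P and Pc in consistent pressure units; omega: acentric factors."""
    Tc, Pc, omega = map(np.asarray, (Tc, Pc, omega))
    return (Pc / P) * np.exp(5.373 * (1.0 + omega) * (1.0 - Tc / T))

# Example: a light/heavy component pair (roughly methane and n-decane) at 450 K and 5 MPa
print(wilson_k_values(T=450.0, P=5.0e6,
                      Tc=[190.6, 617.7], Pc=[4.599e6, 2.110e6], omega=[0.011, 0.492]))
```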
Interactive supercomputer
New milestones in performance
Workstation (~$10K): HP Z820 (and analogs from other vendors), 24 cores, dual CPU, Ivy Bridge E5-2697v2, 2.7 GHz, 12 cores and 4 memory channels per CPU with DDR3 1866 MHz. Fantastic parallel scalability: a bandwidth machine!
8-node cluster (~$75K): 160 cores, dual-CPU nodes, Ivy Bridge E5-2680v2, 2.8 GHz, 10 cores and 4 memory channels per CPU with DDR3 1866 MHz, FDR InfiniBand 56 Gb/s
New milestone in workstation performance
• The tests were done with many real full-field models of various oil & gas companies
Parallel acceleration for a wide variety of models: 14 - 16 times on a 16-core system!
If NUMA and Hyperthreading are dropped, a 20 - 50% loss of parallel performance is observed!
Total elapsed time is used for the acceleration measurements
Chart: parallel acceleration vs. number of threads for 8-, 16-, 24- and 28-core systems (with Hyperthreading)
Hybrid performance scales with hardware
Compact cluster
Old cluster: 20 dual-CPU (12-core) nodes, 40 Xeon X5650 CPUs, 240 cores, 24 GB DDR3 1333 MHz, InfiniBand 40 Gb/s
New cluster: 8 dual-CPU (20-core) nodes, 16 Xeon E5-2680v2 CPUs, 160 cores, 128 GB DDR3 1866 MHz, InfiniBand 56 Gb/s
Model: 21.8 million active blocks, 39 wells, 3-phase black oil; 100 times acceleration with 160 cores
Chart: acceleration vs. number of cores for the Xeon X5650 cluster (6.4 kW) and the Xeon E5-2680v2 cluster (3.2 kW)
SPE-163090-MS
GUI and Examples of functionality
Every detail at runtime
The simulation is running!
Monitor any dynamic grid
properties at runtime in 2D, 3D
Ex. Average Pressure
User defined vertical profiles
Ex. Average Pressure, SOIL, SGAS along the profile
Live contour lines
Live Bubble maps
Ex. Residual (historic minus calculated) well oil rate
User defined vertical profiles
Ex. SOIL, streamlines, contours, bubble maps
Live perforation rates profile along the well
Ex. Oil rate, Water rate, Pressure
Live well rate
Ex. Oil rate, Water rate
(historic, calculated)
CO2 injection simulations
Long horizontal wells, dual porosity, compositional model. Parallel acceleration on a 160-core cluster is about 90 times!
Naturally fractured reservoir
Approach #2: Discrete Fracture Network (DFN)
Permeability
Relative permeability zones
Fine grid: 1 ft.
A tough numerical problem
Needs good parallel performance!
Maps: pressure, gas saturation, oil saturation
"The use of a reservoir simulator optimized for processing on multi-thread, multi-core CPUs makes possible reservoir studies using high resolution models that were once impractical due to lengthy run-times."
Conclusions, SPE-168723-MS
Simplified hydraulic fracture simulation
Approach #1: virtual perforations
The hydraulic fractures are simulated as a network of "virtual perforations" connected to the well (a rough sketch follows below):
Pros
- complex fracture dynamics can be simulated: changes in proppant permeabilities as functions of pressure, flux or time
- auto opening of fractures at the injectors
- any fracture shape can be modeled (plane, "tree")
- easy to re-configure: orientation, PIs, shapes
Cons
- a relatively fine grid is needed
SPE: 138071
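A very rough sketch of the idea, under stated assumptions (this is neither the SPE 138071 formulation nor tNavigator's implementation): the fracture is a list of grid blocks attached to the well as extra perforations, and each connection factor is scaled by a proppant-permeability multiplier looked up from a pressure table. All names and inputs are hypothetical.

```python
# Pressure-dependent "virtual perforation" connection factors along a fracture.
import numpy as np

def fracture_virtual_perforations(frac_cells, base_cf, p_cell, p_table, mult_table):
    """frac_cells: (i,j,k) blocks crossed by the fracture; base_cf: geometric connection
    factors; p_cell: current block pressures; p_table/mult_table: proppant multiplier vs pressure."""
    mult = np.interp(p_cell, p_table, mult_table)   # pressure-dependent proppant multiplier
    return {cell: cf * m for cell, cf, m in zip(frac_cells, base_cf, mult)}

# Example: three fracture blocks whose proppant permeability degrades as pressure falls
cfs = fracture_virtual_perforations(
    frac_cells=[(10, 5, 3), (11, 5, 3), (12, 5, 3)],
    base_cf=np.array([2.0, 1.5, 1.0]),
    p_cell=np.array([230.0, 210.0, 190.0]),
    p_table=[150.0, 200.0, 300.0],
    mult_table=[0.2, 0.6, 1.0])
print(cfs)
```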
SPE: 169091
Complex LGR fractures for non-conventionals
Approach 2: dealing with multistage complexity
Shale oil full-field model with >1000 LGRs describing multiple fractures at horizontal wells.
The model runs efficiently in parallel on a cluster with an acceleration of more than 100 times
Support of fully unstructured grids
Injection of Alkaline, Surfactants, Polymers
ASP
Adsorbed surfactant concentration
More oil due to Bright Water nanopolymer injection
Change in absolute permeability due to combined nano-polymer and surfactant injection
Injection of polymers and nano-polymers
A more practical, engineering approach to the treatment of salt compared to other commercial simulators
Simulating reservoirs/injectors with salt
SAGD Simulation – on Cluster
Runs on cluster and workstation
Maps: temperature, 1/viscosity, saturation
Simulation Study – Lateral Refinement
Fine grid (1 m x 50 m x 0.7 m) vs. coarse grid (2 m x 50 m x 0.7 m)
Temperature of a cross-section at the end of simulation
New useful features
Runtime analysis of waterflood efficiency, drainage
matrix, well interference, injection efficiency
Built in analysis of 3D streamlines
Well-to-well interference (drainage) matrix at runtime
Graphs of oil “induced” production
rates for injectors
“Drainage table”, interference
of injectors and producers
Streamlines. Drainage matrix/network
Waterflood optimization
Drainage network visualization
History matching and FDP for complex models: a real example from a client company
Initial model, definition of boundaries
Cluster/workstation run to calculate boundary conditions as a function of time
Automatic split into independent sector models
Synchronization of boundary conditions
Optional:
archiving, public key encryption
History matching of individual sector models or FDP optimization
"Smart" automatic merge of sector models, fetching all the changes made to the sector models
SPE: 162090
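A hedged sketch of the split step described on this slide, under the assumption that the full-field run provides boundary pressures versus time which each independent sector model then uses as time-dependent boundary conditions. The array shapes, function name and synthetic data are illustrative only.

```python
# Extract a sector and its time-dependent boundary-pressure table from a full-field run.
import numpy as np

def extract_sector(p_full, i0, i1, j0, j1):
    """p_full: full-field pressure history, shape (n_times, nx, ny).
    Returns the sector history and its boundary-ring pressures vs. time."""
    sector = p_full[:, i0:i1, j0:j1]
    ring = np.concatenate([sector[:, 0, :], sector[:, -1, :],
                           sector[:, :, 0], sector[:, :, -1]], axis=1)
    return sector, ring        # ring: one row of boundary conditions per report step

# Example: synthetic 20-step, 100x100 full-field pressure history split into one sector
p_full = 250.0 - np.random.default_rng(0).random((20, 100, 100)) * 30.0
sector, bc_table = extract_sector(p_full, 40, 60, 40, 60)
print(sector.shape, bc_table.shape)
```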
E100/300 models with well trajectories
No explicit well schedule is needed! Grids could be changed, refined, rotated
Connection factors (CFs) are calculated "on the fly"; for example, permeability updates from well tests could be directly incorporated (see the sketch below)
Schedule features (group hierarchy, etc.) can be used
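A minimal sketch of an on-the-fly connection factor, assuming the classic Peaceman formula for a vertical well completed in a single anisotropic block. The slides do not state which formulation tNavigator uses for deviated or multi-segment wells, so this is only the textbook case with illustrative numbers.

```python
# Peaceman well index for one grid block; kx, ky in m^2, lengths in m.
import numpy as np

def peaceman_cf(kx, ky, dx, dy, h, rw, skin=0.0):
    r_eq = (0.28 * np.sqrt(np.sqrt(ky / kx) * dx**2 + np.sqrt(kx / ky) * dy**2)
            / ((ky / kx)**0.25 + (kx / ky)**0.25))      # Peaceman equivalent radius
    return 2.0 * np.pi * np.sqrt(kx * ky) * h / (np.log(r_eq / rw) + skin)

# Example: connection factor before and after a permeability update from a well test
print(peaceman_cf(kx=100e-15, ky=50e-15, dx=50.0, dy=50.0, h=10.0, rw=0.1))
print(peaceman_cf(kx=300e-15, ky=150e-15, dx=50.0, dy=50.0, h=10.0, rw=0.1))
```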
New performance frontier with “Hybrid” cluster technology
Compact cluster for the industry's first "Hybrid" parallel simulator
- Cluster level: the model is loaded onto the cluster and its grid is divided between the cluster nodes so that each node gets an approximately equal number of active grid blocks (MPI over the cluster network)
- Node level: the parallelization between CPU cores is done at the matrix level (OS threads, NUMA-aware)
- The simulator solver software integrates both MPI and system thread calls
- The number of MPI processes is limited to the number of cluster nodes, not the number of cores!!
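A toy sketch of this hybrid layout (emphatically not tNavigator's code): one MPI process per node owns a slice of active grid blocks, a small thread pool works on that slice at the matrix level, and MPI is used only between nodes. It assumes mpi4py is installed (run with e.g. mpirun -np 2 python hybrid_sketch.py); the thread count and the per-row work are placeholders.

```python
# Hybrid MPI + OS-threads sketch: equal split of active blocks between ranks ("nodes"),
# threads within a rank, a single MPI reduction between ranks.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, n_ranks = comm.Get_rank(), comm.Get_size()

n_active = 1_000_000                                          # total active grid blocks
local = np.array_split(np.arange(n_active), n_ranks)[rank]   # equal split between "nodes"
pressure = np.full(local.size, 250.0)

def row_work(chunk):
    # Stand-in for per-row matrix work (assembly / SpMV) done by threads on one node
    return float(np.sum(np.sin(pressure[chunk])))

with ThreadPoolExecutor(max_workers=4) as pool:               # threads within the node
    chunks = np.array_split(np.arange(local.size), 4)
    local_sum = sum(pool.map(row_work, chunks))

total = comm.allreduce(local_sum, op=MPI.SUM)                 # MPI only between nodes
if rank == 0:
    print("ranks:", n_ranks, "global reduction:", total)
```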
Advanced remote GUI for clusters
Job queue management (start, stop,
results view), supports LSF, TORQUE, PBS Pro
Full graphics simulation results monitoring at runtime (2D/3D, wells, perforations, 3D streamlines, ...)
Dramatically boosts cluster usage efficiency!!
Approaching Gigascale
Trying different grid resolutions of the giant Samotlor field, 3-phase black oil, with 14,000 wells and 45 years of history
Model 1: 0.043 billion active blocks: 6 hours
Model 2: 0.160 billion active blocks: 54 hours
Model 3: 0.400 billion active blocks, 1.3 million perforations: ~2 weeks
The generic commercial version of tNavigator is used!
Model 4 (tested with fewer wells): ~1 billion active blocks
64 nodes, 640 cores, 8 TB of RAM. Published in October 2014: SPE-171225-MS
Approaching Gigascale
Comparison of the 40 million and 160 million active block models (SPE-171225-MS)
Charts: comparison of the two grid resolutions; pressure (bars)
At the later stage of production, the low resolution model overestimates the oil production forecast by ~1.5% per 5 years
History matching of giant fields using the Principal Component Monte Carlo (PCMC) approach
Hundreds of model
realizations in a second
Reduction of the number of parameters
Proxy-based pre-screening (geometry, streamlines based, etc.)
Full-field simulation of best candidates
Joint project with Halliburton. During the MCMC history matching process, 80 tNavigator licenses were used to run jobs on an 80-node IBM cluster.
A typical single parallel run involved 400 - 800 cores.
Workflow: hundreds of permeability realizations, DCT transformation, Monte Carlo (see the sketch below)
Elapsed time acceleration = 250 times; CPU time acceleration = 750 times
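A minimal sketch of the DCT-based parameter reduction step in the workflow above, assuming the common recipe of keeping only the leading (low-frequency) DCT coefficients of a log-permeability realization as the history-matching parameters. The grid size, number of retained coefficients and synthetic field are illustrative.

```python
# DCT parameter reduction: keep only the leading coefficients of a log-permeability field.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
log_perm = rng.normal(loc=2.0, scale=0.5, size=(64, 64))   # one log-permeability realization

coeffs = dctn(log_perm, norm="ortho")
n_keep = 10                                                 # retained parameters per axis
reduced = np.zeros_like(coeffs)
reduced[:n_keep, :n_keep] = coeffs[:n_keep, :n_keep]        # Monte Carlo perturbs only these

log_perm_smooth = idctn(reduced, norm="ortho")
print("parameters:", 64 * 64, "->", n_keep * n_keep,
      " reconstruction RMS:", float(np.sqrt(np.mean((log_perm - log_perm_smooth)**2))))
```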
New features in 2015 releases: preprocessor / geological simulations
Strategic goals:
- building a new type of user-friendly geological package, very closely integrated with tNavigator, that takes full advantage of cluster computing for building giant geological models and/or multiple model realizations
- fast memory-based (not file-based) integration between geology and dynamics, "gridless" simulations and history matching, "big loop" for history matching, lots of new magic :)
Functionality:
Grids and hydrodynamic models could be built from:
- logs, markers, seismic surfaces
- from *.rescu files
Automatic history matching tool for workstation & cluster
Thank you!!