
DSD-NL 2017 Parallel Krylov Solver Package for iMODFLOW-MetaSWAP - Verkaik


The new Parallel Krylov Solver package

for iMODFLOW-MetaSWAP

Jarno Verkaik (Deltares)

Paul van Walsum (WUR/Alterra)

and

Joseph Hughes (USGS)

Edwin Sutanudjaja (UU)

Raju Ram (TUD)

29 June, 2017

Contents

• Problem statement and solution

• Context

• Mathematical model

• Concept of domain decomposition

• (Preliminary) results

• Practical usage with iMOD

SURFSARA-Cartesius: the Dutch supercomputer

Problem statement and solution

Problem statement:

• In order to support decision makers in solving hydrological problems, detailed high-resolution models are often needed.

• These models typically consist of a large number of computational cells and have large memory requirements and long run times.

Solution:

• An efficient technique for obtaining realistic run times and memory requirements is parallel computing, where the problem is divided over multiple processor cores.


Context

• 2010: start with parallelization of MT3DMS

• 2012: first applied to the National Nutrient Model (H2O, vol18)

• 2016: start parallelization for the Netherlands Hydrological Model (NHM); entirely funded by Deltares and Alterra from own investments

• 2017: to be released with iMOD

• 2017 other: PKS added to iMOD-SEAWAT fresh-salt; measured speedup for the Sand Engine Model: ~40

• 2017 other: PKS planned to be released by USGS


Netherlands Hydrological Model (NHM)

Components:

- MODFLOW: 3D groundwater flow

- MetaSWAP: 1D Soil Vegetation Atmosphere Transfer ("SVAT")

- TRANSOL: 1D salinity model

- MOZART: surface water

Coupling:

- ID-based N:1 coupling tables

- Seamless MODFLOW-MetaSWAP coupling with the groundwater level as shared state variable
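The N:1 coupling can be sketched as a simple lookup table: several MetaSWAP SVAT units map onto one MODFLOW cell, which holds the shared groundwater level. A minimal illustration (the names and table format below are hypothetical, not the actual iMOD coupling files):

```python
# Hypothetical sketch of an ID-based N:1 coupling table:
# several SVAT units share one MODFLOW cell and its head.
svat_to_cell = {101: 7, 102: 7, 103: 8}   # SVAT id -> MODFLOW cell id

# MODFLOW heads per cell (the shared state variable)
head = {7: -1.25, 8: -0.80}

def head_for_svat(svat_id):
    """Each SVAT reads the groundwater level of its host cell."""
    return head[svat_to_cell[svat_id]]
```

Because the mapping is N:1, SVATs 101 and 102 see the same head, so partitioning the MODFLOW grid automatically partitions the SVATs with it.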


Mathematical model


• Solving the groundwater flow equation requires solving the linear system for heads ℎ:

𝐴ℎ = 𝑞

• The Parallel Krylov Solver solves the preconditioned system

𝑀⁻¹𝐴ℎ = 𝑀⁻¹𝑞,

where 𝑀 is the block-Jacobi preconditioner

• E.g. for 2 subdomains:

⎡𝐴₁₁ 𝐴₁₂⎤⎡ℎ₁⎤   ⎡𝑞₁⎤
⎣𝐴₂₁ 𝐴₂₂⎦⎣ℎ₂⎦ = ⎣𝑞₂⎦

• This linear system is solved by CG

• Parallel solution = serial solution

[Figure: a 12-cell MODFLOW grid partitioned over processor 1 and processor 2; each processor stores its own diagonal block (𝐴₁₁, 𝐴₂₂), and the off-diagonal coupling blocks 𝐴₁₂, 𝐴₂₁ are exchanged via the Message Passing Interface (MPI)]
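The block-Jacobi preconditioned CG iteration sketched above can be written out in a few lines. This is a minimal serial illustration under stated assumptions, not the actual PKS implementation: each "subdomain" is an index block of the matrix, and the preconditioner solve is done block by block (in PKS each block lives on its own processor core, and the dense inverse below would be a local sparse solve):

```python
import numpy as np
import numpy.linalg as la

def block_jacobi_pcg(A, q, blocks, tol=1e-10, maxiter=500):
    """Solve A h = q with CG, preconditioned by the block-Jacobi
    matrix M = diag(A11, A22, ...), one block per subdomain.
    `blocks` is a list of index arrays, one per subdomain."""
    # Pre-factor each diagonal block Aii (stand-in for the local
    # per-subdomain solve done on each processor core).
    factors = [(idx, la.inv(A[np.ix_(idx, idx)])) for idx in blocks]

    def apply_Minv(r):
        z = np.zeros_like(r)
        for idx, Aii_inv in factors:
            z[idx] = Aii_inv @ r[idx]   # local solve: z_i = Aii^{-1} r_i
        return z

    h = np.zeros_like(q)
    r = q - A @ h                        # initial residual
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        h += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)                # preconditioner application
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return h
```

In the parallel code the matrix-vector product `A @ p` and the dot products are the only steps that need MPI communication; the preconditioner solve is purely local, which is what makes block-Jacobi attractive for distributed memory.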

Concept of domain decomposition

• Distribute the memory over multiple (connected) processor cores.

• For this, partition the MODFLOW domain (and hence the MetaSWAP SVATs), using

• Uniform blocks, or

• Load-weighted blocks (Recursive Coordinate Bisection algorithm)


[Figure: two example partitionings into 128 subdomains: "uniform" blocks and load-weighted "RCB" blocks]
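The Recursive Coordinate Bisection idea can be sketched compactly: repeatedly cut along the longest coordinate axis at the point that halves the total cell weight, until the requested number of parts is reached. A minimal sketch assuming a power-of-two part count (not the production RCB code):

```python
import numpy as np

def rcb(coords, weights, nparts):
    """Recursive Coordinate Bisection: assign each cell a part id
    in 0..nparts-1 so that parts have roughly equal total weight.
    `coords` is (ncell, ndim); `nparts` is assumed a power of two."""
    part = np.zeros(len(coords), dtype=int)

    def split(idx, lo, hi):
        if hi - lo == 1:                 # one part left: assign it
            part[idx] = lo
            return
        sub = coords[idx]
        # Cut perpendicular to the axis with the largest extent.
        axis = np.argmax(sub.max(axis=0) - sub.min(axis=0))
        order = idx[np.argsort(sub[:, axis])]
        cum = np.cumsum(weights[order])
        cut = np.searchsorted(cum, cum[-1] / 2)   # balance the load
        mid = (lo + hi) // 2
        split(order[:cut], lo, mid)
        split(order[cut:], mid, hi)

    split(np.arange(len(coords)), 0, nparts)
    return part
```

With uniform weights this degenerates to near-uniform blocks; with weights proportional to per-cell work (e.g. active layers, SVAT density), the cuts shift so each core gets a comparable load.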

Results: NHM (1)


• iMODFLOW-MetaSWAP + surface water

• Maximum measured speedup ~5 on the NHM Deltares server (Windows)

• Maximum theoretical speedup is limited by the serial surface-water part (Amdahl's law: < 1/0.06 ≈ 16.7)

• Exactly the same heads are computed with PKS as in the serial case
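The 16.7 ceiling follows directly from Amdahl's law: with a fraction of the run time that stays serial (here ~6%, the surface-water components), the speedup is bounded by the reciprocal of that fraction, no matter how many cores are added:

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Amdahl's law: achievable speedup on n_cores when
    `serial_fraction` of the run time cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# With ~6% serial work the speedup approaches, but never reaches,
# 1 / 0.06 ≈ 16.7:
for n in (4, 16, 64, 1024):
    print(n, round(amdahl_speedup(0.06, n), 1))
```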

Results: California model


MODFLOW only;

128 cores (245 GB RAM)*;

50 × 50 meter resolution;

23,897 × 27,974 grid, ≈ 335 million active nodes;

2.6 GB file size;

16:34 minutes#

* Simulated on the SURFSARA-Cartesius Dutch national supercomputer
# On a single core, estimated to take 12 hours

Further reading:

Vermeulen et al., Large scale high resolution modeling,

MODFLOW and More conference 2017, Golden, USA.

Practical usage with iMOD (1)

Easy-to-use in three steps:

1. Install MPICH:

www.mpich.org/static/downloads/1.4.1p1/mpich2-1.4.1p1-win-x86-64.msi

2. Modify your run-file*, Dataset 5 (Solver Configuration)

3. Start your parallel job. E.g. starting from the DOS prompt using 4 cores:

mpiexec -localonly 4 iMODFLOW.exe imodflow.run


[Screenshot of run-file Dataset 5: enable PKS with a "minus"; same options as PCG, plus the partitioning method and a flag for merging IDF output]

* The new iMOD project file (.PRJ) will be supported for PKS.


Practical usage with iMOD (2)

Practical usage with iMOD (3)

To-do regarding iMODFLOW:

• Support packages: MNW, SFR, LAK, and iPEST

• Improve line-segment input (HFB, ISG), which may slow down the speedup

• Develop post-processing tools (e.g. IDF merging)

To-do regarding MetaSWAP:

• Develop post-processing tool for merging bda-files into a single file

• Rename output csv-files (leave out p* extensions)

• Combine end-state files of partitions into single files for initialization and restart (MetaSWAP, TRANSOL, soil temperature)

And:

• Overall speedup strongly depends on hardware

• To get maximum speedup for a high-resolution model, tuning is necessary (e.g. load balancing)

• When you want new iMOD functionality, check its impact on PKS
