Computational Support for Parallel/Distributed AMR
Manish Parashar, The Applied Software Systems Laboratory (TASSL)
ECE/CAIP, Rutgers University
www.caip.rutgers.edu/~parashar/TASSL


Page 1: Computational Support for Parallel/Distributed AMR

Computational Support for Parallel/Distributed AMR

Manish ParasharThe Applied Software Systems Laboratory

ECE/CAIP, Rutgers Universitywww.caip.rutgers.edu/~parashar/TASSL

Page 2: Computational Support for Parallel/Distributed AMR

30 September, 1999 Manish Parashar 2

Roadmap

Introduction to Berger-Oliger AMR Hierarchical Linked Lists (L. Wild) Overview of the GrACE Infrastructure GrACE Programming Model and API GrACE Design & Implementation Current Research & Future Direction

Page 3: Computational Support for Parallel/Distributed AMR


Cactus and GrACE

Cactus + GrACE
– Transparent access to AMR via Cactus
  » GrACE Infrastructure Thorn
  » AMR Driver Thorn
– Status
  » Unigrid driver in place
  » AMR driver under development

Page 4: Computational Support for Parallel/Distributed AMR

Berger-Oliger Adaptive Mesh Refinement

Page 5: Computational Support for Parallel/Distributed AMR


The AMR Concept

Problem: How to maximize solution accuracy for a given problem size with limited computational resources?

Solution: Use dynamically adaptive grids (instead of uniform grids) where the grid resolution is defined locally based on application features and solution quality.

Method: Adaptive Mesh Refinement (AMR)
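The idea of locally defined resolution can be illustrated with a minimal 1-D sketch (hypothetical names, not GrACE code): estimate the local error with an undivided second difference and flag only the cells where it exceeds a tolerance; only flagged cells would be refined.

```python
import math

def flag_cells(u, tol):
    """Flag cells whose undivided second-difference error estimate exceeds tol."""
    flags = [False] * len(u)
    for i in range(1, len(u) - 1):
        est = abs(u[i - 1] - 2.0 * u[i] + u[i + 1])  # rough smoothness indicator
        if est > tol:
            flags[i] = True
    return flags

# A profile that is smooth except for a sharp front near x = 0.5:
n, dx = 64, 1.0 / 64
u = [math.tanh((i * dx - 0.5) / 0.02) for i in range(n)]
flags = [i for i, f in enumerate(flag_cells(u, 0.05)) if f]
```

Only the handful of cells around the front are flagged; the rest of the domain keeps the coarse resolution.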

Page 6: Computational Support for Parallel/Distributed AMR


Adaptively Gridding the Application Domain

Marsha Berger et al. (http://cs.nyu.edu/faculty/berger/)

Page 7: Computational Support for Parallel/Distributed AMR


Adaptive Grid Structure

Page 8: Computational Support for Parallel/Distributed AMR


Berger-Oliger AMR: Algorithm

Define adaptive grid structure
Define grid functions
Initialize grid functions
Repeat NumTimeSteps
– if (RegridTime) Regrid at Level
– Integrate at Level
– if (Level+1 exists)
    Integrate at Level+1
    Update Level from Level+1
End Repeat
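The recursive structure of the loop above can be sketched as follows (a minimal sketch with hypothetical names; a real driver also handles regridding criteria, time interpolation, and flux correction). Each coarse step triggers refine_factor finer steps on the next level, after which the coarse level is updated from the fine one:

```python
def integrate(hierarchy, level, trace):
    """One Berger-Oliger timestep at `level`: take a step on this level,
    recurse into level+1 `refine_factor` times, then update from level+1."""
    trace.append(("step", level))          # advance grids on this level
    if level + 1 < len(hierarchy):
        for _ in range(hierarchy[level + 1]["refine_factor"]):
            integrate(hierarchy, level + 1, trace)
        trace.append(("update", level))    # inject level+1 solution into level

# Three levels, temporal refinement factor 2 between levels:
hierarchy = [{"refine_factor": 1}, {"refine_factor": 2}, {"refine_factor": 2}]
trace = []
integrate(hierarchy, 0, trace)
```

One coarse step yields two level-1 steps and four level-2 steps, with each level updated from the finer one before its parent is.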

Page 9: Computational Support for Parallel/Distributed AMR


Berger-Oliger AMR: Grid Hierarchy

Page 10: Computational Support for Parallel/Distributed AMR

Hierarchical Linked Lists (HLL)

Page 11: Computational Support for Parallel/Distributed AMR


HLL

AMR system devised by Lee Wild in 1996

Grid points are split into nodes of size refinement-factor in each direction

Refinement is performed on nodes
– Avoids the clustering step required by box-based AMR schemes
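The node-based idea can be sketched roughly in Python (hypothetical names, not HLL code): group the flagged points of a level into fixed-size nodes of refinement-factor points each, and refine any node containing a flagged point, so no separate box-clustering pass is needed.

```python
def nodes_to_refine(flags, factor):
    """Group points into nodes of `factor` points each; a node is refined
    if any of its points is flagged (no clustering pass required)."""
    n_nodes = (len(flags) + factor - 1) // factor
    return [k for k in range(n_nodes)
            if any(flags[k * factor:(k + 1) * factor])]

flags = [False] * 16
flags[5] = flags[6] = flags[11] = True
refined = nodes_to_refine(flags, 2)   # nodes covering points 4-5, 6-7, 10-11
```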

Page 12: Computational Support for Parallel/Distributed AMR


Status of HLL

Lee wrote a shared memory version which was tested on various problems and showed excellent scaling properties.

It is currently being re-implemented as a standalone library with shared memory and MPI parallelism. This library will be used by a Cactus thorn to provide an AMR driver layer.

Page 13: Computational Support for Parallel/Distributed AMR

GrACE: A Framework for Distributed AMR

Page 14: Computational Support for Parallel/Distributed AMR


GrACE: An Overview

Page 15: Computational Support for Parallel/Distributed AMR


Programming Interface

Coarse-grained SPMD data parallelism

C++ driver
– declares and defines the computational domain and application variables in terms of GrACE programming abstractions
– defines the overall structure of the AMR algorithm

FORTRAN/FORTRAN 90/C computational kernels
– defined on regular arrays

Page 16: Computational Support for Parallel/Distributed AMR


Programming Abstractions

Grid Hierarchy Abstraction
– Template for the distributed adaptive grid hierarchy

Grid Function Abstraction
– Application fields defined on the adaptive grid hierarchy

Grid Geometry Abstraction
– High-level tools for addressing regions in the computational domain

Page 17: Computational Support for Parallel/Distributed AMR


Grid Geometry Abstractions

Coords
– rank, x, y, z, ...

BBox
– lb, ub, stride

BBoxList

Operations
– union, intersection, cluster, refine/coarsen, difference, ...

[Figure: a 2-D bounding box with lower corner (lbx, lby), upper corner (ubx, uby), and cell sizes dx, dy]
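These geometry operations can be sketched in Python (a hypothetical illustration, not the GrACE C++ API): a 1-D BBox with lb, ub, and stride, supporting intersection and refinement on the index space.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BBox:
    lb: int      # lower bound in the index space
    ub: int      # upper bound, inclusive
    stride: int  # index-space stride at this level

    def intersect(self, other: "BBox") -> Optional["BBox"]:
        lb, ub = max(self.lb, other.lb), min(self.ub, other.ub)
        return BBox(lb, ub, self.stride) if lb <= ub else None

    def refine(self, factor: int) -> "BBox":
        # The refined box covers the same region on a finer index space.
        return BBox(self.lb * factor, (self.ub + 1) * factor - 1,
                    self.stride // factor)

a = BBox(0, 15, 2)
b = BBox(8, 31, 2)
c = a.intersect(b)   # overlap of a and b
f = a.refine(2)      # same region, twice the resolution
```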

Page 18: Computational Support for Parallel/Distributed AMR


GridHierarchy Abstraction

Attributes:
– number of dimensions

– maximum number of levels

– specification of the computational domain

– distribution type

– refinement factor

– boundary type/width

GridHierarchy GH(Dim,GridType,MaxLevs)

Page 19: Computational Support for Parallel/Distributed AMR


GridFunction Abstraction

GridFunction(DIM)<T> GF(“gf”, Stencils, GH, …)

Attributes:
– dimension and type

– vector?

– spatial/temporal stencils

– associated GridHierarchy

– prolongation/restriction functions

– “shadow” specification

– alignments

– ghost cells

– boundary types/updates

– interaction types

– flux registers?

– parent storage?

Page 20: Computational Support for Parallel/Distributed AMR


GridFunction Operations

GridFunction storage for a particular time, level, and component (and hierarchy) is managed as a Fortran 90 array object.

GF(t, l, c, Main/Shadow) <op> Scalar

GF(t, l, c, Main/Shadow) <op> GF2(….)

RedOp(GF, t, l, Main/Shadow)
– <op> : =, +=, -=, /=, *=, …
– RedOp: Max, Min, Sum, Product, Norm, ….
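Conceptually, a reduction like RedOp combines values across all components of a level into one scalar. A minimal Python sketch of the idea (hypothetical names; the GrACE versions operate on distributed Fortran 90 array storage):

```python
def reduce_max(components):
    """Max reduction over all grid components at one (time, level)."""
    return max(max(comp) for comp in components)

def reduce_norm2(components):
    """Discrete 2-norm over all components of a level."""
    return sum(x * x for comp in components for x in comp) ** 0.5

# Two components of one level, stored as plain arrays:
level_data = [[1.0, -3.0, 2.0], [0.5, 4.0]]
m = reduce_max(level_data)
```

In the parallel setting each process would reduce over its local components and then combine the partial results across processes.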

Page 21: Computational Support for Parallel/Distributed AMR


Ghost Communications

Ghost-region communications are based on the GridFunction stencil attribute at the specified grid level.

Sync (GF, Time, Level, Main/Shadow)
Sync (GF, Time, Level, Axis, Dir, Main/Shadow)
Sync (GH, Time, Level, Main/Shadow)
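What Sync does can be pictured in 1-D (a hypothetical sketch, not the GrACE implementation): each component copies its neighbors' boundary interior values into its own ghost cells, with the ghost width determined by the stencil.

```python
def sync(blocks, ghost):
    """Fill the ghost cells of each 1-D block from its neighbors' interiors.
    Each block is laid out as [ghost | interior | ghost]."""
    for i, blk in enumerate(blocks):
        if i > 0:                      # copy left neighbor's rightmost interior
            blk[:ghost] = blocks[i - 1][-2 * ghost:-ghost]
        if i < len(blocks) - 1:        # copy right neighbor's leftmost interior
            blk[-ghost:] = blocks[i + 1][ghost:2 * ghost]

# Two blocks with ghost width 1; interiors are [1, 2] and [3, 4]:
blocks = [[0, 1, 2, 0], [0, 3, 4, 0]]
sync(blocks, 1)
```

In a distributed run the same copies become messages between the processes owning neighboring components.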

Page 22: Computational Support for Parallel/Distributed AMR


Region-based Communications

Arbitrary copy (add, subtract) from Region 1 to Region 2 at the specified grid level.

Copy (GF, Time, Level, Reg1, Reg2, Main/Shadow)

Page 23: Computational Support for Parallel/Distributed AMR


Data-parallel forall operator

forall (gf, time, level, component)
  Call FORTRAN Subroutine
  …
end_forall

Parallel operation for all grid components at a particular time step and level.
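The forall operator can be pictured as iterating over the locally owned components at one (time, level) and handing each regular array to a kernel; a Python sketch with hypothetical names (in GrACE the kernel would be a FORTRAN subroutine):

```python
def forall(components, kernel):
    """Apply `kernel` in place to every locally owned grid component."""
    for comp in components:
        kernel(comp)

def smooth(u):
    # Trivial kernel: damped averaging of each interior point with its neighbors.
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * u[i] + 0.25 * (u[i - 1] + u[i + 1])

# Two components of one level at one time step:
level = [[0.0, 4.0, 0.0], [1.0, 1.0, 1.0]]
forall(level, smooth)
```

Because each kernel invocation sees only a regular array, existing serial FORTRAN/C routines can be reused unchanged.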

Page 24: Computational Support for Parallel/Distributed AMR


Refinement & Regridding

Refine(GH, Level, BBoxList)
RecomposeHierarchy(GH)

Encapsulates:– Generation of refined grids

– Redistribution

– Load-balancing

– Data-transfers

– Interaction schedules

Page 25: Computational Support for Parallel/Distributed AMR


Prolongation/Restriction Functions

Set prolong/restrict functions for each GridFunction:

foreachGF(GH, GF, DIM, GFType)
  SetProlongFunction(GF, Pfunc);
  SetRestrictFunction(GF, Rfunc);
end_forallGF

Prolong/Restrict:

Prolong(GF, TimeFrom, LevelFrom, TimeTo, LevelTo, Region, …., Main/Shadow);

Restrict(GF, TimeFrom, LevelFrom, TimeTo, LevelTo, Region, …., Main/Shadow);
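A prolong/restrict pair can be illustrated in 1-D with refinement factor 2 (a hypothetical sketch of one common choice, not GrACE's built-in operators): linear interpolation to the finer level, injection back to the coarser one.

```python
def prolong(coarse):
    """Linear interpolation from a coarse 1-D array onto a 2x finer grid."""
    fine = []
    for i in range(len(coarse) - 1):
        fine.append(coarse[i])
        fine.append(0.5 * (coarse[i] + coarse[i + 1]))  # midpoint value
    fine.append(coarse[-1])
    return fine

def restrict(fine):
    """Injection: sample every other fine point back onto the coarse grid."""
    return fine[::2]

c = [0.0, 2.0, 4.0]
f = prolong(c)
```

With these two operators, restrict(prolong(c)) reproduces the coarse data exactly, which is one sanity check a user-supplied pair should satisfy.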

Page 26: Computational Support for Parallel/Distributed AMR


Checkpoint/Restart/Rollback

Checkpoint

Checkpoint(GH, ChkPtFile);
» Each GridFunction can be individually selected or deselected for checkpointing
» Checkpoint files are independent of the number of processors

Restart

ComposeHierarchy(GH, ChkPtFile);

Rollback

RecomposeHierarchy(GH, ChkPtFile);

Page 27: Computational Support for Parallel/Distributed AMR


IO Interface

Initialize IO
ACEIOInit();

Select IO Type
ACEIOType(GH, IOType);
» IOType := ACEIO_HDF, ACEIO_IEEEIO, ..

BEGIN_COMPUTE/END_COMPUTE mark a region not executed by a dedicated IO node

Do IO
Write(GF, Time, Level, Main, Double);

End IO
ACEIOEnd(GH);

Page 28: Computational Support for Parallel/Distributed AMR


Multigrid Interface

Determine the number of multigrid levels available
MultiGridLevels(GH, Level, Main/Shadow);

Set up the multigrid hierarchy for a GridFunction
SetUpMultiGrid(GF, Time, Level, MGlf, MGlc, Main/Shadow);
SetUpMultiGrid(GF, Time, Level, Axis, MGlf, MGlc, Main/Shadow);

Do multigrid
GF(Time, Level, Comp, MGl, Main/Shadow) ….;

Release the multigrid hierarchy
ReleaseMultiGrid(GF, Time, Level, Main/Shadow);

Page 29: Computational Support for Parallel/Distributed AMR

GrACE: Design & Implementation

Page 30: Computational Support for Parallel/Distributed AMR


Software Engineering in the Small: Design Principles

Separation of Concerns
» policy from mechanisms
» data management from solution methods
» storage semantics from addressing and access
» computer science from computational science from engineering

Hierarchical Abstractions
» application-specific programming abstractions
» semantically specialized DSM
» distributed shared objects
» hierarchical, extendible index space + distributed dynamic storage

Page 31: Computational Support for Parallel/Distributed AMR


Separation of Concerns => Hierarchical Abstractions

[Figure: layered architecture. Application → Application Components (modules and kernels: solver, clusterer, interpolator, error estimator; application objects: grid, tree, mesh) → Programming Abstractions (grid geometry: region, point; grid function: cell-, vertex-, and face-centered; grid structure: main, shadow, and multigrid hierarchies) → Dynamic Data-Management (HDDA: index space, access, storage). The columns separate application-specific, method-specific, and adaptive data-management concerns.]

Page 32: Computational Support for Parallel/Distributed AMR


Hierarchical Distributed Dynamic Array (HDDA)

Distributed Array
– Preserves array semantics over distribution
  » Reuse of FORTRAN/C computational components
– Communications are transparent
– Automatic partitioning & load-balancing

Hierarchical Array
– Each element can be an HDDA

Dynamic Array
– An HDDA can grow and shrink dynamically

Efficient data management for adaptivity
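The hierarchical and dynamic aspects can be sketched as an array whose elements may themselves be arrays and which grows by refinement (a hypothetical Python sketch; the real HDDA adds distribution over processors, index-space mapping, and load balancing):

```python
class HDDA:
    """Minimal sketch: an array whose element may itself be an HDDA,
    growing dynamically as elements are refined."""

    def __init__(self, values):
        self.values = list(values)   # each entry: a number or a child HDDA

    def refine(self, i, factor):
        # Replace element i by a child array of `factor` copies of it.
        self.values[i] = HDDA([self.values[i]] * factor)

    def leaf_count(self):
        return sum(v.leaf_count() if isinstance(v, HDDA) else 1
                   for v in self.values)

h = HDDA([1, 2, 3])
h.refine(1, 2)             # element 1 becomes a 2-element child array
h.values[1].refine(0, 2)   # refining within the child deepens the hierarchy
n = h.leaf_count()
```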

Page 33: Computational Support for Parallel/Distributed AMR


Separation of Concerns => Hierarchical Abstractions

[Figure: HDDA decomposed into Index Space (name resolution, partitioning, expansion & contraction), Storage (data objects, display objects, interaction objects), and Access (consistency, communication).]

Page 34: Computational Support for Parallel/Distributed AMR


Distributed Dynamic Storage

– Application locality
– Index locality
– Storage locality
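One standard way to turn application locality into index and storage locality is a space-filling curve over the index space; a Morton (Z-order) key sketch in Python (the specific curve choice here is my assumption, for illustration only):

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of (x, y) into one Z-order key, so points that
    are close in 2-D tend to map to nearby positions in the 1-D index space."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b)       # x bit -> even position
        key |= ((y >> b) & 1) << (2 * b + 1)   # y bit -> odd position
    return key

cells = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]
order = sorted(cells, key=lambda c: morton_key(*c))
```

Partitioning the 1-D key range into contiguous blocks then gives each processor a spatially compact set of cells, which is what makes contiguous storage of nearby data possible.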

Page 35: Computational Support for Parallel/Distributed AMR


Partitioning Issues

– Locality
– Parallelism
– Load balance
– Cost

Page 36: Computational Support for Parallel/Distributed AMR


Composite Distribution

– Inter-grid communications are local
– Data and task parallelism exploited
– Efficient load redistribution and clustering
– Overhead of generating & maintaining the composite structure

Page 37: Computational Support for Parallel/Distributed AMR


IO & Visualization

Page 38: Computational Support for Parallel/Distributed AMR


Integrated Visualization & IO

Grid Hierarchy
» Views: multi-level, multi-resolution grid structure and connectivity; hierarchical and composite grid/mesh views; ….
» Commands: refine, coarsen, re-distribute, read, write, checkpoint, rollback, ….

Grid Function
» Views: multi/single-resolution plots, feature extraction and reduced models, isosurfaces, streamlines, etc. ….
» Commands: read, write, interpolate, checkpoint, rollback, ….

Grid Geometry
» Views: wire-frames with resolution and ownership information
» Commands: read, write, refine, coarsen, merge, ….