Supercomputing Korea 2006
COMPUTATIONAL ELEMENTS FOR VERY LARGE-SCALE, HIGH-FIDELITY AERODYNAMIC ANALYSIS AND DESIGN
Chongam Kim
Aerodynamic Simulation & Design Lab., School of Mechanical and Aerospace Engineering, Seoul National University
November 20, 2006
Contents

Introduction

Aerodynamic Solvers for High Performance Computing
- Characteristics of International Standard Codes

Essential Elements for Teraflops CFD
- High-Fidelity Numerical Methods for Flow Analysis and Design
- Parallel Efficiency Enhancement
- Geometric Representation for Complex Geometry

Some Examples

Conclusion
Introduction - Bio & Astrophysics
[ Molecules in motion ] - 10.4 teraflops
SDSC (San Diego Supercomputer Center)
Understanding how molecules naturally behave inside cells. Predicting how the molecules might react to the presence of prospective drugs.
[ 2-D Rayleigh-Taylor Instability ]
FLASH center / Pittsburgh Supercomputing Center.
[ Simulation of supernovae ]
ORNL (Oak Ridge National Laboratory)
Researchers using an ORNL supercomputer have found that the organized flow beneath the shock wave in a previous two-dimensional model of a stellar explosion persists in three dimensions, as shown here.
[ Computationally predicting protein structures ]
ORNL (Oak Ridge National Laboratory)
A protein structure predicted at ORNL (left) and the actual structure determined experimentally (right).
[ Blood-flow patterns at an instant during the systolic cycle ]
CITI (Computer and Information Technology Institute)
Introduction - Weather Forecasting

[ Global atmospheric circulation ]
DKRZ (Deutsches Klimarechenzentrum GmbH)
The German High Performance Computing Centre for Climate and Earth System Research
Animation of one month of "simulated weather" from a global atmosphere model
[ Typhoon ETAU in 2003 ]
Earth Simulator Center
Result of a non-hydrostatic, ultrahigh-resolution coupled atmosphere-ocean model; 26.58 Tflops was achieved by the global atmospheric circulation code.
[ Global ocean circulation ]
DKRZ
3-D Particles/Streamlines coloured by temperature are used to visualize important features of the annual mean ocean circulation
[Twin typhoons over the Philippine Sea]
Earth Simulator Center
Introduction - Aerospace & Other Related Fields
[ Full SSLV configuration ]
NASA Columbia Supercomputer
[ Aerodynamics simulation around a SAUBER PETRONAS C23 ]
SAUBER PETRONAS, Switzerland
[ Numerical simulation of the hydro-aerodynamic effects around the Shosholoza boat, aimed at an optimal design ]
Scientific Supercomputing Center, Karlsruhe University
[ Bio-Agent Blast Dispersion Simulations ]
DTRA (Defense Threat Reduction Agency)
Introduction - System Architecture

Primary Factors in Computing Speed
- CPU clock speed; number of instructions per clock cycle
- CPU clock speed is expressed in Hz: cycles per second
- 1 Tflops = one trillion floating-point operations per second

Examples
- Pentium Xeon 2.4 GHz: 2.4 GHz x 2 (Hyper-Threading) = 4.8 Gflops
- IA-64 (Itanium) 1.4 GHz: 1.4 GHz x 2 (Hyper-Threading) x 2 (instructions) = 5.6 Gflops
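As a quick sanity check on the slide's arithmetic, a minimal sketch of this peak-rate estimate (the per-cycle factors are the slide's simplified accounting, not vendor-verified figures):

```python
def peak_gflops(clock_ghz: float, flops_per_cycle: float) -> float:
    """Theoretical peak rate: clock frequency times the number of
    floating-point operations issued per cycle."""
    return clock_ghz * flops_per_cycle

# The slide's two examples:
print(peak_gflops(2.4, 2))      # Xeon 2.4 GHz, factor 2 -> 4.8 Gflops
print(peak_gflops(1.4, 2 * 2))  # Itanium 1.4 GHz, factor 2 x 2 -> 5.6 Gflops
```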
Computing Power Nowadays: Top500 List (June 2006)
- Fastest machine: BlueGene/L by IBM (at DOE/NNSA/LLNL), 100,000+ processors, 280.6 teraflops
- Machine ranked #500: 2.026 Tflops
- The era of teraflops computing has already arrived!
Introduction - Application Characteristics

Aerospace Engineering
- Memory usage is higher than disk usage
- Requires a high-speed CPU and high-speed I/O
- Sensitive to network speed

Mechanical Engineering
- Explicit problems: CPU performance and network speed are important
- Implicit problems: require high-speed I/O and large memory storage

Physical Science
- Monte Carlo: high dependence on network performance

Chemical Science
- Molecular dynamics: CPU performance and network speed are important; low dependence on memory size and I/O capacity/speed
- Quantum dynamics: CPU performance, network speed and mass memory storage are important

Life Science
- Protein folding: high-speed CPU and memory size are somewhat important

Astronomy
- Computing performance is sensitive to CPU speed and network speed (enormous influence in pre- and post-processing)
Specialized High-Performance Baseline Codes

Standard Flow Solvers at NASA (USA)
- Full potential: CAPTSD
- Block structured: CFL3D, TLNS3D-MB, PAB3D, GASP, LAURA, VULCAN
- Overset structured: OVERFLOW
- Unstructured: FUN3D, USM3D, 3D3U
Other Flow Solvers
- MIRANDA: high-order hydrodynamics code for computing instabilities and turbulent mixing, developed by LLNL (Lawrence Livermore National Laboratory)
- AVBP: a compressible flow solver running on unstructured and hybrid grids, developed by CERFACS, France
Aerodynamic Solvers for High Performance Computing (USA)
General Features of OVERFLOW
- Right-hand side options: central differencing with Jameson 4/2 dissipation; Roe upwinding
- Left-hand side options: Pulliam-Chaussee diagonalized scheme; LU-SGS scheme; low-Mach-number preconditioning; first-order implicit time advance
- Convergence acceleration options: time-accurate mode or local timestep scaling; grid sequencing; multigrid

Performance Test
- Block-structured overset grid with 126 million grid points in total, 2000 time steps
- Weak scaling: about 123,000 mesh points per processor
- Efficiency: about 70% with 1024 processors (compared to 64 processors)
Aerodynamic Solvers for High Performance Computing (USA)
General Features of CFL3D
- 2-D or 3-D grid topologies
- Inviscid, laminar and/or turbulent flows
- Steady or unsteady (including moving-grid) flows
- Spatial discretization: van Leer's FVS, Roe's FDS
- Time integration: implicit approximate factorization, dual-time stepping
- High-order interpolation & limiting: TVD MUSCL
- Multiple block options: 1-1 blocking, patching, overlapping, embedding
- Convergence acceleration options: multigrid, mesh sequencing
- Turbulence model options: Baldwin-Lomax; Baldwin-Lomax with Degani-Schiff modification; Baldwin-Barth; Spalart-Allmaras (including DES option); Wilcox k-omega; Menter's k-omega SST; Abid k-epsilon; Explicit Algebraic Stress Model (EASM); k-enstrophy
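To make the TVD MUSCL entry above concrete, here is a minimal 1-D sketch of minmod-limited MUSCL reconstruction (an illustration of the technique, not CFL3D's actual implementation):

```python
import numpy as np

def minmod(a, b):
    """TVD minmod limiter: zero at extrema, smaller slope otherwise."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_states(u):
    """Second-order MUSCL reconstruction with minmod limiting.
    For each interior cell i, returns the left state at interface i+1/2
    and the right state at interface i-1/2."""
    du = np.diff(u)                    # du[i] = u[i+1] - u[i]
    slope = minmod(du[:-1], du[1:])    # limited slope in cells 1..n-2
    uL = u[1:-1] + 0.5 * slope         # left state at interface i+1/2
    uR = u[1:-1] - 0.5 * slope         # right state at interface i-1/2
    return uL, uR

u = np.array([0.0, 0.0, 1.0, 1.0, 1.0])  # a step profile
uL, uR = muscl_states(u)                 # slopes vanish at the jump: no overshoot
```

The limited slope vanishes at local extrema, which is what keeps the reconstruction total-variation diminishing.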
Aerodynamic Solvers for High Performance Computing (USA)
PETSc-FUN3D (NASA)

Code Features
- FUN3D code attached to the PETSc framework
- A tetrahedral, vertex-centered unstructured code
- Spatial discretization with the Roe scheme
- A Galerkin discretization for the viscous terms
- Pseudo-transient Newton-Krylov-Schwarz time integration, with block-incomplete factorization on each subdomain of the Schwarz preconditioner
- Used for design optimization of airplanes, automobiles and submarines with irregular meshes
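The pseudo-transient part of Newton-Krylov-Schwarz is easy to show in miniature: a diagonal 1/dt term damps the early Newton steps, and dt grows as the residual falls (switched evolution relaxation). The scalar test system below is purely illustrative, not FUN3D's equations:

```python
import numpy as np

def residual(u):
    # Hypothetical steady-state residual R(u) = 0 to be driven to zero.
    return np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])

def jacobian(u):
    return np.array([[2.0 * u[0], 1.0], [1.0, 2.0 * u[1]]])

def ptc_newton(u, dt=0.1, tol=1e-10, max_iter=50):
    """Pseudo-transient continuation: solve (I/dt + J(u)) du = -R(u),
    growing dt by switched evolution relaxation (SER)."""
    r = residual(u)
    for _ in range(max_iter):
        rn = np.linalg.norm(r)
        if rn < tol:
            break
        A = np.eye(len(u)) / dt + jacobian(u)   # pseudo-time damping on the diagonal
        u = u + np.linalg.solve(A, -r)
        r_new = residual(u)
        dt *= rn / max(np.linalg.norm(r_new), 1e-300)  # SER timestep growth
        r = r_new
    return u

print(ptc_newton(np.array([2.0, 2.0])))  # converges to the root near (1, 2)
```

As dt grows, the damped system approaches a pure Newton step, recovering fast terminal convergence.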
Performance Test
- Unstructured mesh with 2.7 million vertices, 18 million edges
- Weak scaling
- Performance: nearly scalable with O(1000) processors
Aerodynamic Solvers for High Performance Computing (USA)
MIRANDA (LLNL)

Code Features
- High-order hydrodynamics code for computing instabilities and turbulent mixing
- Conducts direct numerical simulation and large-eddy simulation
- FFTs and band-diagonal matrix solvers for spectrally accurate derivatives
- Used to study Rayleigh-Taylor (R-T) and Richtmyer-Meshkov (R-M) instabilities

Performance Test
- Weak-scaling parallel efficiency nearly 100% with 128K processors
- Strong scaling shows good efficiency with 64K processors (compared to performance with 8K processors)
- All-to-all communication gives good performance

[Figures: turbulent flow mixing of two fluids (LES of R-T instability); efficiency with strong scaling]
Aerodynamic Solvers for High Performance Computing (Europe)
AVBP (CERFACS)

Code Features
- A parallel CFD code for the laminar and turbulent compressible Navier-Stokes equations on unstructured and hybrid grids
- Unsteady reacting flow analysis based on the LES approach
- Built upon a modular software library including integrated parallel domain partitioning and data reordering tools, a message-passing handler, supporting routines for dynamic memory allocation, and routines for parallel I/O and iterative methods

Performance
- Nearly 100% parallel efficiency with 4K processors (on BlueGene/L), in the strong-scaling case
- The code may run in the range of O(1000)s of processors
Aerodynamic Solvers for High Performance Computing
Efficiency of Various Applications Including CFD
- From BlueGene/L reports
- Both weak-scaling and strong-scaling parallelism

※ Weak scaling: same domain size in each processor
※ Strong scaling: same domain size in total
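The two efficiencies follow directly from these definitions; a small sketch with the conventional formulas (the runtimes below are illustrative, chosen to echo the ~70%-at-1024-processors OVERFLOW figure quoted earlier):

```python
def strong_scaling_efficiency(t_ref, p_ref, t_p, p):
    """Fixed total problem size: ideal runtime falls like 1/p,
    so efficiency = (t_ref * p_ref) / (t_p * p)."""
    return (t_ref * p_ref) / (t_p * p)

def weak_scaling_efficiency(t_ref, t_p):
    """Fixed problem size per processor: ideal runtime is constant,
    so efficiency = t_ref / t_p."""
    return t_ref / t_p

# e.g. a weak-scaled run taking 100 s on the reference count and 143 s scaled up:
print(weak_scaling_efficiency(100.0, 143.0))  # ~0.70, i.e. ~70% efficiency
```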
Essential Elements for Teraflops CFD - High-Fidelity Numerical Method
N-S simulation around a helicopter fuselage with actuator disks (U.C. Davis Center for CFD)

- Numerical flux scheme: accurate shock capturing
- Higher-order interpolation: complex flow structure & vortex resolving
- Enhanced accuracy of aerodynamic coefficients
- Flow analysis over a full helicopter body configuration: a very-large-scale problem
- Convergence acceleration & adaptive grid techniques: reduced computational cost
Essential Elements for Teraflops CFD - High-Fidelity Numerical Method
[Figure: non-dimensionalized pressure, total enthalpy and temperature along the stagnation line (x/L from -2.0 to -1.0)]
RoeM Scheme

Roe's FDS (with entropy fix)
- Sharp capturing of shock discontinuities
- Unstable in expansion regions (defect)
- Carbuncle phenomena (defect)

RoeM
- Damping & feeding rate control using a Mach-number-based function
- Shock stability (no carbuncle)
- Total enthalpy conservation
- Stability in expansion regions
- Exact capturing of contact discontinuities
- Accuracy comparable to Roe's FDS
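For orientation, a minimal sketch of a Roe-type upwind flux on the 1-D Burgers equation with a Harten-style entropy fix; RoeM's Mach-number-based damping and feeding control for the full Euler system goes well beyond this toy version:

```python
import numpy as np

def burgers_flux(u):
    return 0.5 * u * u

def roe_flux(uL, uR, delta=0.1):
    """Roe-type numerical flux for Burgers' equation. For f(u) = u^2/2 the
    Roe-averaged wave speed is (uL + uR)/2; Harten's entropy fix keeps the
    dissipation |a| away from zero so expansion shocks cannot form."""
    a = 0.5 * (uL + uR)
    a_abs = np.abs(a)
    a_abs = np.where(a_abs < delta, (a * a + delta * delta) / (2.0 * delta), a_abs)
    return 0.5 * (burgers_flux(uL) + burgers_flux(uR)) - 0.5 * a_abs * (uR - uL)

print(roe_flux(-1.0, 1.0))  # sonic expansion: the fix adds dissipation here
```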
Essential Elements for Teraflops CFD - High-Fidelity Numerical Method
AUSMPW+ Scheme

AUSM+
- Splits the flux into a convective term and a pressure term
- A hybrid form of FDS and FVS
- Oscillations near a wall or across a strong shock (defect)

AUSMPW+
- Pressure wiggles cured by introducing pressure-based weighting functions
- Eliminates expansion shocks
- Eliminates oscillations and overshoots
- Reduced grid dependency
- Improved convergence behavior
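The convective/pressure split is concrete enough to sketch. Below is the baseline AUSM+ splitting (split Mach-number and pressure polynomials) for the 1-D Euler equations; AUSMPW+ layers its pressure-based weighting on top of this split and is not shown:

```python
import numpy as np

GAMMA = 1.4

def ausm_plus_flux(rhoL, uL, pL, rhoR, uR, pR):
    """Baseline AUSM+ flux: the convective part is carried by a split
    interface Mach number, the pressure part by split pressure polynomials."""
    aL = np.sqrt(GAMMA * pL / rhoL)
    aR = np.sqrt(GAMMA * pR / rhoR)
    a = 0.5 * (aL + aR)              # simple common interface sound speed
    ML, MR = uL / a, uR / a

    def M_split(M, s):               # 4th-order split Mach polynomials, beta = 1/8
        if abs(M) >= 1.0:
            return 0.5 * (M + s * abs(M))
        return s * 0.25 * (M + s) ** 2 + s * 0.125 * (M * M - 1.0) ** 2

    def P_split(M, s):               # 5th-order split pressure polynomials, alpha = 3/16
        if abs(M) >= 1.0:
            return 0.5 * (1.0 + s * np.sign(M))
        return 0.25 * (M + s) ** 2 * (2.0 - s * M) + s * 0.1875 * M * (M * M - 1.0) ** 2

    m = M_split(ML, +1) + M_split(MR, -1)            # interface Mach number
    p = P_split(ML, +1) * pL + P_split(MR, -1) * pR  # interface pressure

    HL = aL * aL / (GAMMA - 1.0) + 0.5 * uL * uL     # total enthalpy
    HR = aR * aR / (GAMMA - 1.0) + 0.5 * uR * uR
    psi = (np.array([rhoL, rhoL * uL, rhoL * HL]) if m >= 0.0
           else np.array([rhoR, rhoR * uR, rhoR * HR]))
    return a * m * psi + np.array([0.0, p, 0.0])     # convective + pressure parts
```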
Essential Elements for Teraflops CFD - High-Fidelity Numerical Method
M-AUSMPW+ Scheme

M-AUSMPW+
- Proposes a criterion for the accurate calculation of cell-interface fluxes
- Modified pressure splitting function
- Much more effective in computations of multi-dimensional flows
- Achieves completely monotonic characteristics
- Improved convergence characteristics
Essential Elements for Teraflops CFD - High-Fidelity Numerical Method
Higher-Order Interpolation & Oscillation Control Scheme: MLP
- TVD and ENO approaches are based on 1-D flow physics
- Higher-order interpolation with effective oscillation control in multiple dimensions: the Multi-dimensional Limiting Process (MLP)
[Figures: density contours computed with MLP5 + M-AUSMPW+ on a 350 x 175 x 175 grid. Feature 1: profile of the separated vortex; Feature 2: profile of swirls near the corner; Feature 3: interacting profiles of the separated vortex & swirls. Cutting planes at x = 0.8725, x = 0.842 and y = 0.078 (the center of the primary separated vortex)]
Essential Elements for Teraflops CFD - High-Fidelity Numerical Method
[Figure: residual vs. computational cost for Runge-Kutta and LU-SGS time integration with 1-level and 4-level multigrid]

Time integration                  Iteration number   Speed-up
Runge-Kutta, 1-level              12515              1.0
Runge-Kutta, 4-level (Method I)   2912               2.8
Runge-Kutta, 4-level (Method II)  1821               4.5
LU-SGS, 1-level                   26030              1.0
LU-SGS, 4-level                   3443               3.7
Multigrid: issues in hypersonic flows
- Non-linearity in shock regions causes robustness problems in prolongation
- Chemical reactions: the time step is restricted due to stiffness

Solutions to the problems
- Modified implicit residual smoothing
- Damped prolongation & implicit treatment of the source term

Test problem: nonequilibrium viscous flow, M∞ = 10, 60 km altitude
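For readers who have not met multigrid, a bare-bones two-level V-cycle for the 1-D Poisson problem -u'' = f; the hypersonic remedies above (damped prolongation, implicit source treatment) are modifications layered onto machinery like this:

```python
import numpy as np

def relax(u, f, h, sweeps):
    """Gauss-Seidel sweeps for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def two_level_vcycle(u, f, h):
    """One V-cycle; assumes an odd number of grid points so every other
    fine point lies on the coarse grid."""
    u = relax(u, f, h, sweeps=3)                       # pre-smoothing
    r = np.zeros_like(u)                               # residual r = f - A u
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    rc = r[::2].copy()                                 # restriction (injection)
    ec = relax(np.zeros_like(rc), rc, 2 * h, sweeps=50)  # approx. coarse solve
    e = np.interp(np.arange(len(u)),                   # linear prolongation
                  np.arange(0, len(u), 2), ec)
    u += e                                             # coarse-grid correction
    return relax(u, f, h, sweeps=3)                    # post-smoothing
```

On a fine grid of 2^k + 1 points, a few such cycles cut the residual far faster than smoothing alone, which is the effect the iteration counts in the table above reflect.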
Essential Elements for Teraflops CFD - Parallel Efficiency Enhancement
Requirements for Systems
- CPU: fewer, more powerful processors. Better for efficiency, resource management and fault prevention, but more power consumption and heat emission
- Memory: faster access & efficient management. The most important factor for CFD applications
- Network: multiple interconnection networks. Separate communication channels for inter-processor and global communication (e.g. IBM BlueGene/L has 5 different communication types)
- I/O: unpredictable broken data. Storage servers are overloaded during data writing; broken ASCII data are sometimes observed
Essential Elements for Teraflops CFD - Parallel Efficiency Enhancement
Requirements for Software/Programming
- Memory size: array ranges differ among processors. Computing domains can differ in extent even with the same number of mesh points; conventionally, the maximum memory size was allocated on every processor. Remedy: variables stored in global memory (shared-memory systems) or dynamic memory allocation in Fortran 90 (distributed-memory systems); see the sketch after this list
- I/O: writing is conducted in each processor. Conventional programs gathered the whole data set onto one processor, which requires allocating a large array
- Etc.: optimized compiler options, a highly functional debugger, minimization of serial processing

[Diagram: one 40 x 80 domain and two 80 x 40 domains, all declared with the worst-case Dimension X(80,80), Y(80,80), ...]
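A small sketch of the per-rank allocation remedy, transposed from Fortran 90 allocatables into Python with mpi4py (the subdomain shapes echo the diagram and are purely illustrative):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Instead of every rank declaring the worst-case Dimension X(80,80),
# each rank allocates exactly what its own subdomain needs.
shapes = {0: (40, 80), 1: (80, 40), 2: (80, 40)}   # illustrative decomposition
ni, nj = shapes.get(rank, (80, 40))

x = np.empty((ni, nj))   # dynamic allocation, sized per rank
y = np.empty((ni, nj))
print(f"rank {rank}: {ni}x{nj} domain, {x.nbytes + y.nbytes} bytes allocated")
```

Run with e.g. `mpirun -n 3 python alloc_sketch.py`; no rank pays for another rank's worst case.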
Essential Elements for Teraflops CFD - Parallel Efficiency Enhancement
Requirements for Algorithms
- Scalability enhancement: reduced global communication. Global communication on top of inter-processor communication leads to synchronization problems; residual-gathering and aerodynamic-coefficient computation routines should be improved (see the sketch after this list)
- Dynamic load balancing: processor allocation for faster inter-processor communication; dynamic load balancing as processor performance changes during the computation
- Fault tolerance
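One common way to cut the residual-gathering cost, sketched with mpi4py: perform the global reduction only every few iterations rather than every step (the interval and the stand-in residual are illustrative):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
CHECK_INTERVAL = 10   # gather the global residual only every 10 iterations

for it in range(1000):
    # ... local flux evaluation and solution update would go here ...
    local_sq = np.array([1.0 / (it + 1) ** 2])   # stand-in: sum of squared local residuals
    if it % CHECK_INTERVAL == 0:
        global_sq = np.empty(1)
        comm.Allreduce(local_sq, global_sq, op=MPI.SUM)  # the only global sync point
        if np.sqrt(global_sq[0]) < 1e-8:
            break
```

The trade-off is a slightly late convergence check in exchange for ten times fewer global synchronizations.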
Essential Elements for Teraflops CFD - Geometric Representation
Multiple Body Problems, Complicated Geometry
- Multiblock: preprocessor for partitioning & automatic detection of block topology
- Overset: preprocessor for automatic block connectivity; overset mesh generator; postprocessor
- Unstructured: automatic grid generator & grid adaptation method

Notes
- Block topology is complicated for structured systems
- Grid generation is time-consuming work
- Manual preprocessing is impossible
Essential Elements for Teraflops CFD - Geometric Representation
Multi-Block System
- Modularization of the preprocessing code
- Evaluation of metrics and minimum wall distance, and their exchange
- Automatic detection of block topology

[Figure: flow analysis of a combustion chamber (N-S, 600,000 pts., ASDL)]
Essential Elements for Teraflops CFD - Geometric Representation
Overset Mesh System
- Pre-processing to automatically find hole, fringe and donor cells arising from complicated block connectivity (overlap optimization for PEGASUS)
- Post-processing for the evaluation of aerodynamic coefficients (zipper grid)

[Diagram: overlapping meshes A, B and C]
Essential Elements for Teraflops CFD - Geometric Representation
Unstructured System
- Automatic grid generation code (Mavriplis et al., NASA Langley)
- Grid adaptation methods: subdivision method; adjoint-based adaptation method
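A toy sketch of the subdivision idea in 1-D: flag cells whose solution jump exceeds a tolerance and insert a midpoint node (the indicator and tolerance are illustrative; an adjoint-based method would weight the indicator by adjoint sensitivities instead):

```python
import numpy as np

def adapt_by_subdivision(x, u, tol=0.05):
    """Insert a midpoint node in every cell whose solution jump exceeds
    tol; u is re-sampled linearly at the new nodes."""
    new_x = [x[0]]
    for i in range(len(x) - 1):
        if abs(u[i + 1] - u[i]) > tol:              # refinement indicator
            new_x.append(0.5 * (x[i] + x[i + 1]))   # subdivide the cell
        new_x.append(x[i + 1])
    new_x = np.array(new_x)
    return new_x, np.interp(new_x, x, u)

x = np.linspace(0.0, 1.0, 11)
u = np.tanh(20.0 * (x - 0.5))            # sharp layer at x = 0.5
x2, u2 = adapt_by_subdivision(x, u)
print(len(x), "->", len(x2), "nodes")    # new nodes cluster near the layer
```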
Some Examples - Multi-block System
Parametric study over various flight conditions for aerospace engineering

[Figures: streamlines and iso-velocity surfaces (side nozzle, N-S, M = 1.0); parametric study of a missile with a side nozzle (N-S, M = 1.75), with jet off/on at AOA 0, 10 and 20 degrees]
Some Examples - Multi-block System
Flow analysis & design of turbulent intake flow using a multiblock system

[Figures: total pressure contours in the duct section & streamlines; static pressure contours; Mach number contours]
Some Examples - Design Optimization Based on Large-Scale Computation
Turbulent duct design with a multi-block mesh system

[Figures: baseline model vs. designed model]
Some Examples - Overset Mesh System

[Figures: manually assigned block connectivity vs. overlap-optimized block connectivity, each with its residual convergence history]
Some Examples - Overset Mesh System

[Figures: surface pressure distributions (-Cp vs. X/D) at span stations Y/SPAN = 18.5%, 23.8%, 33.1%, 40.9%, 51.2%, 63.6% and 84.4%]
Some Examples - Design Optimization Based on Large-Scale Computation
Redesign of the DLR-F4 wing/body configuration with the overset mesh system

[Figures: baseline vs. designed surface pressure distributions (-Cp vs. x/c) at span stations 18.5%, 23.8%, 33.1%, 40.9%, 63.6% and 84.4%]
Some Examples - Launch Vehicle Analysis with Load Balancing
- Parallel computation on the Grid
- 32 processors at Seoul National University & KISTI
- 3.5 million mesh points
                                  Load Balance   Without Balance   Reduced Time
Calculation (per iteration)       1.3756         1.9276            28.64%
Communication (per iteration)     0.7199         0.7571            -4.91%
Computation time (total)          13012.2        16053.4           18.94%
Conclusion

Current Status
- Many disciplines are already conducting teraflops computing
- Teraflops computing in the CFD field has not yet taken off

Issues and Requirements
- High-fidelity numerical schemes for the description of complex flowfields
- Domain decomposition methods and parallel algorithms for efficiency enhancement and fault tolerance
- Automatic pre- & post-processing techniques in geometric representation to resolve complicated multiple-body problems

Target CFD Application Areas
- Unsteady aerodynamics with massive flow separation
- MDO and fluid-structure interaction
- Multi-body aerodynamics with relative motion
- Multi-scale flow computation