  • Rocket science, shock waves and video games

    Britton J. Olson, Sanjiva K. Lele

    Department of Aeronautical Engineering, Stanford University

    Supercomputing 2011 (SC11), Seattle, WA

    This work was supported by the Department of Energy Computational Science Graduate Fellowship and DOE SciDAC, with computing resources provided by the HPCC at LLNL. Acknowledgment is given to Dr. A. Cook and Dr. W. Cabot for use of their code. Special thanks to Dr. Andrew Johnson and Prof. Dimitri Papamoschou for providing their experimental data and for many valuable discussions.

  • Rocket science in a nutshell

  • Pressure in nozzle plume affects rocket performance

    Under-expanded flow (high altitude)

    Ideally-expanded flow

    Over-expanded flow (low altitude); the regime follows from the exit-to-ambient pressure ratio, as in the sketch below
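
    Which regime a nozzle runs in follows from comparing its exit static pressure against the ambient pressure: under-expanded if the exit pressure is higher, over-expanded if lower. A minimal sketch of that classification using the standard isentropic relation is given below; the chamber pressure, exit Mach number, and altitude pressures are illustrative assumptions, not parameters from this work.

```python
# Classify the nozzle flow regime from exit vs. ambient pressure.
# All numbers are illustrative assumptions, not parameters of this study.

def exit_pressure(p_chamber, M_exit, gamma=1.4):
    """Isentropic exit static pressure: p0/p = (1 + (g-1)/2 * M^2)^(g/(g-1))."""
    return p_chamber * (1.0 + 0.5 * (gamma - 1.0) * M_exit**2) ** (-gamma / (gamma - 1.0))

def regime(p_exit, p_ambient, tol=0.02):
    if abs(p_exit - p_ambient) <= tol * p_ambient:
        return "ideally expanded"
    return "under-expanded" if p_exit > p_ambient else "over-expanded"

p0, Me = 5.0e6, 4.0                    # hypothetical chamber pressure [Pa] and exit Mach number
pe = exit_pressure(p0, Me)             # ~33 kPa for these numbers
for label, pa in [("low altitude", 101325.0), ("high altitude", 5000.0)]:
    print(f"{label}: exit p = {pe:.0f} Pa, ambient p = {pa:.0f} Pa -> {regime(pe, pa)}")
```

    With these made-up numbers the same nozzle comes out over-expanded at low altitude and under-expanded at high altitude, matching the labels above.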

  • Over-expansion leads to shock waves

    • External flow: “Mach disks”

    • Internal flow: shock/turbulent boundary layer (STBL) interaction

  • Shock motion in nozzle is unsteady

    • plume mixing

    • spurious side loads

    (Figure: ideally-expanded vs. over-expanded nozzle flow)

  • Experiment to find source of unsteadiness

  • Questions remain

    • Turbulent boundary layer

    • Side loads

    • Mechanism for the coupling between shock and shear layer

    Turn to computational science to elucidate the physics.

  • High-fidelity CFD approach

    • Large-eddy simulation (LES) methodology (see the filtered-equation note below)

    • Computationally expensive

    Jet plume showing an example of the difference between experiment (left), RANS (center), and LES (right).
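
    For readers unfamiliar with the distinction drawn in the caption: RANS models all turbulent scales, while LES resolves the large, energy-containing eddies and models only what falls below a spatial filter width. In standard textbook notation (not specific to this work or to the Miranda code):

```latex
% Low-pass spatial filter of width \Delta applied to the velocity field
\bar{u}_i(\mathbf{x},t) = \int G_\Delta(\mathbf{x}-\mathbf{x}')\, u_i(\mathbf{x}',t)\, \mathrm{d}\mathbf{x}'

% Filtering the Navier--Stokes equations leaves an unclosed subgrid-scale stress
% that the LES closure must model:
\tau_{ij}^{\mathrm{sgs}} = \overline{u_i u_j} - \bar{u}_i\,\bar{u}_j
```

    Resolving the eddies down to the filter width is what drives the grid counts and cost quoted on the following slides.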

  • The Miranda Code

    • Developed at Lawrence Livermore National Lab

    • Used for astrophysics research (Rayleigh-Taylor)

    • Scales well on large clusters (32k processors)

    • High-order numerics (illustrated below)
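
    As a concrete illustration of what "high order" buys, here is a sketch of an explicit 10th-order central first-derivative stencil on a periodic grid. It is a generic example chosen to match the "10th order finite difference" label that appears later in the talk, not the actual discretization used in Miranda.

```python
# Explicit 10th-order central difference for d/dx on a periodic grid.
# A generic illustration of high-order numerics, not Miranda's actual scheme.
import numpy as np

def ddx_10th(f, dx):
    """10th-order central first derivative with periodic boundaries."""
    c = [5/6, -5/21, 5/84, -5/504, 1/1260]   # standard central-difference weights
    df = np.zeros_like(f)
    for k, ck in enumerate(c, start=1):
        df += ck * (np.roll(f, -k) - np.roll(f, k))
    return df / dx

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
err = np.max(np.abs(ddx_10th(np.sin(x), x[1] - x[0]) - np.cos(x)))
print(f"max derivative error on {n} points: {err:.2e}")   # tiny even on a coarse grid
```

    Higher formal order means a target accuracy is reached with fewer points per wavelength, which matters when grids run to the tens or hundreds of millions of points quoted on the next slide.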

  • Results of LES calculations

    • Ran 3 meshes

    – 8, 30, and 150 million grid points

    • LLNL “hera” and “atlas” clusters (max 4096 CPUs)

    • ~5 million CPU hours (rough wall-time arithmetic below)

    • ~ 20 TB of raw data
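
    A rough wall-time estimate puts those totals in perspective, under the optimistic assumption that the whole allocation ran concurrently at the 4096-CPU maximum:

```python
# Back-of-the-envelope wall-time arithmetic from the quoted totals.
# Assumes the full allocation ran at the 4096-CPU maximum, which is optimistic.
cpu_hours = 5.0e6
max_cpus = 4096
wall_hours = cpu_hours / max_cpus
print(f"~{wall_hours:.0f} hours of wall time, i.e. about {wall_hours / 24:.0f} days")
# Roughly 1200 hours (~50 days) even at full machine utilization.
```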

  • Experiment vs. simulation

    Instantaneous density field (the experimental image is an actual Schlieren photograph).

  • Unsteady shock movie

  • Broad range of spatiotemporal scales

    • Power spectral density shows a large range of time scales in the shear layer (post-processing sketch below)

    (Plot legend: experiment vs. LES mesh A)
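
    The PSD comparison is standard post-processing of a pressure (or velocity) probe record. A minimal sketch of that kind of analysis is below; the signal, sampling rate, and segment length are synthetic stand-ins, not the actual probe data.

```python
# Minimal sketch of the post-processing behind a pressure PSD plot.
# The signal here is synthetic; in practice it would be a probe record
# extracted from the LES or from the experiment.
import numpy as np
from scipy.signal import welch

fs = 100e3                                # assumed probe sampling rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)           # one second of data
# Synthetic stand-in: broadband noise plus a low-frequency "shock motion" tone
p = np.random.randn(t.size) + 2.0 * np.sin(2.0 * np.pi * 300.0 * t)

# Welch-averaged power spectral density of the pressure fluctuations
f, Pxx = welch(p - p.mean(), fs=fs, nperseg=4096)

# Energy spread over several decades of frequency in Pxx is what the slide
# means by a "large range of time scales" in the shear layer.
```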

  • Turbulent boundary layer movie

  • Small time step = Long wait

    • High Reynolds number flow (Re > 1e5)

    • CFL condition limits the time step (see the estimate below)

    • O(1e6) time steps needed
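
    The step count follows from an acoustic CFL constraint, dt <= CFL * dx / (|u| + c): the finest grid spacing and the fastest signal speed set the time step, and the simulated physical time divided by that step gives the count. The numbers below are purely illustrative assumptions, not the actual simulation parameters.

```python
# Back-of-the-envelope CFL estimate (all numbers are illustrative assumptions).
import math

gamma, R, T = 1.4, 287.0, 300.0      # air, nominal static temperature [K]
c = math.sqrt(gamma * R * T)         # speed of sound, ~347 m/s
u = 2.0 * c                          # assume supersonic nozzle flow around Mach 2

dx = 1.0e-4                          # assumed near-wall grid spacing [m]
cfl = 1.0                            # typical explicit stability limit
dt = cfl * dx / (abs(u) + c)         # acoustic CFL time step, ~1e-7 s

T_sim = 0.1                          # assumed simulated physical time [s]
print(f"dt ~ {dt:.1e} s, steps needed ~ {T_sim / dt:.1e}")
# Spacings and times in this ballpark give O(1e6) steps, as on the slide.
```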

  • Reduce wall time or bust

    • An unfinished solution is no solution

    • Wall time per time step must be minimized

    • GPUs offer an affordable alternative to a large-scale CPU cluster

  • LES speedup example

    *Castonguay, Jameson, et al., AIAA-2011-3229, Stanford University

    • Re = 60,000

    • 16 Tesla C2070 cards

    • 13.6x speed-up over CPU (32 Xeon processors)

    • CPU: 102 hrs

    • GPU: 15 hrs

  • Throughput is scheme dependent

    • GPU, 4th-order spectral difference: 0.135 sec per time step (90% parallel efficiency)

    • CPU, 4th-order spectral difference: 0.918 sec per time step

    • CPU, 10th-order finite difference: 3-5 sec per time step (90% parallel efficiency)

    Time to solution = [sec per time step] x [number of time steps]

    The second factor is set by the physics (Reynolds number); the first by the numerical scheme and hardware. A small worked example follows below.
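
    Pairing the per-step costs quoted above with the O(1e6) steps from the earlier slide gives a feel for the spread in time to solution. The combination is illustrative only; the quoted costs appear to refer to the preceding Re = 60,000 test case rather than the nozzle LES itself.

```python
# Illustrative time-to-solution arithmetic:
# time to solution = [sec per time step] x [number of time steps].
n_steps = 1.0e6                         # order-of-magnitude step count from the earlier slide

cost_per_step = {                       # seconds per time step, as quoted above
    "GPU, 4th-order spectral difference": 0.135,
    "CPU, 4th-order spectral difference": 0.918,
    "CPU, 10th-order finite difference": 4.0,   # midpoint of the quoted 3-5 s
}

for scheme, sec_per_dt in cost_per_step.items():
    print(f"{scheme}: ~{sec_per_dt * n_steps / 3600.0:.0f} hours to solution")
# Spread from tens of hours (GPU) to well over a thousand hours (high-order CPU).
```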

  • “Codesign” for performance

    (Diagram: codesign at the intersection of physics, numerics, and architecture.)

  • Summary and Looking forward

    • High Reynolds number LES

    • expensive

    • worthwhile

    • GPUs can make such simulations viable on smaller clusters

    • “Codesigned” software needed for optimal performance
