
Deep Learning Meets Flow Visualization

Chaoli Wang

Big Data Challenges for Predictive Modeling of Complex Systems Symposium

University of Hong Kong, November 26, 2018

Outline of talk

• Deep learning + Visualization
• Flow visualization
• FlowNet
• Vector field reconstruction
• Future directions

Flow visualization

Flow fields

• A vector field: F(U) = V
  – U: field domain; (x, y) in 2D, (x, y, z) in 3D
  – V: vector; (u, v) in 2D, (u, v, w) in 3D
• Like scalar fields, vectors are defined at discrete points (see the sketch below)
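As a concrete illustration of this discrete representation, the sketch below stores a 2D vector field F(U) = V sampled on a regular grid with NumPy. The circular analytic field used to fill the grid is only an example, not data from the talk.

```python
import numpy as np

# Sample a 2D vector field V = (u, v) on a regular grid over the domain U.
# The circular field used here is only an illustrative example.
nx, ny = 64, 64
x = np.linspace(-1.0, 1.0, nx)
y = np.linspace(-1.0, 1.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

u = -Y          # x-component of the velocity at each grid point
v = X           # y-component of the velocity at each grid point
V = np.stack([u, v], axis=-1)   # shape (nx, ny, 2): one vector per sample

print(V.shape)  # (64, 64, 2)
```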

Flow (vector) field

Glyph-based methods
• Rendering each individual vector in the field
• Hedgehogs: oriented lines spread over the volume, indicating the orientation and magnitude of the flow; do not show directional information
• Glyphs, arrows: icons that show directions, but can clutter the view

Texture-based methods
• Shading every pixel/voxel in the visualization
• Line integral convolution (LIC)
• 3D LIC volumes are problematic due to the dense texture

Geometry-based methods
• Rendering primitives built from particle trajectories
• 1D: streamlines; line variations: tubes, ribbons
• 2D: stream surfaces
• 3D: flow volumes

Streamlines
• A family of curves that are instantaneously tangent to the velocity vector of the flow
• Show the direction a fluid element will travel in at any point in time (see the tracing sketch below)
• Streaklines, pathlines, timelines: extensions of streamlines to unsteady data
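To make the tangency definition concrete, here is a minimal streamline-tracing sketch (not the speaker's code): fourth-order Runge-Kutta integration of dx/dt = v(x) through the discrete 2D field from the earlier sketch. Nearest-neighbor velocity lookup is a simplifying assumption; real tracers interpolate.

```python
import numpy as np

def sample(V, x, y, h=2.0 / 63):
    """Nearest-neighbor lookup of the velocity at (x, y); assumes the
    (64, 64, 2) grid from the previous sketch over [-1, 1] x [-1, 1]."""
    i = int(np.clip(round((x + 1.0) / h), 0, V.shape[0] - 1))
    j = int(np.clip(round((y + 1.0) / h), 0, V.shape[1] - 1))
    return V[i, j]

def trace_streamline(V, seed, dt=0.01, steps=500):
    """Integrate dx/dt = v(x) with RK4 starting from a seed point."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        p = pts[-1]
        k1 = sample(V, *p)
        k2 = sample(V, *(p + 0.5 * dt * k1))
        k3 = sample(V, *(p + 0.5 * dt * k2))
        k4 = sample(V, *(p + dt * k3))
        pts.append(p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)   # polyline approximating one streamline
```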

FlowVisual https://sites.nd.edu/chaoli-wang/flowvisual/

Examples of flow lines and surfaces

FlowNet

Outline of approach
• Goal
  – A single deep learning approach for identifying representative flow lines or flow surfaces
• Key ideas
  – Leverage an autoencoder to automatically learn line or surface feature descriptors
  – Apply dimensionality reduction and interactive clustering for exploration and selection

FlowNet user interface

Video demo

FlowNet architecture
• Encoder-decoder framework
• 3D voxel-based binary representation as input
• Feature descriptor learning in the latent space

Why voxel-based approach
• Manifold-based
  – Suitable for 3D mesh manifolds (genus-zero or higher-genus surfaces)
  – Does not work for flow lines or surfaces (non-closed)
• Multiview-based
  – Represents a 3D shape with images rendered from different views
  – Flow surfaces could be severely self-occluded
• Voxel-based
  – No precise line or surface is required for loss function computation and reconstruction quality evaluation
  – Currently limited to a low resolution (e.g., 64³); see the voxelization sketch below
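A minimal sketch of the voxel-based input representation described above: rasterizing a traced streamline into a 64³ binary volume. The resolution matches the slide; the domain bounds and point-to-voxel mapping are assumptions.

```python
import numpy as np

def voxelize_line(points, res=64, lo=-1.0, hi=1.0):
    """Mark every voxel of a res^3 binary volume that a 3D polyline touches.
    `points` is an (N, 3) array of streamline sample positions in [lo, hi]^3."""
    vol = np.zeros((res, res, res), dtype=np.float32)
    idx = np.clip(((points - lo) / (hi - lo) * (res - 1)).round().astype(int),
                  0, res - 1)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol  # binary volume fed to the autoencoder
```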

FlowNet details

• The encoder consists of four convolutional (CONV) layers with batch normalization (BN) added in between, one CONV layer w/o BN, followed by two fully-connected layers

• The decoder consists of five CONV layers and four BN layers

• Apply the rectified linear unit (ReLU) at the hidden layers and the sigmoid function at the output layer

• Consider three loss functions: binary cross entropy, mean squared error (MSE), and Dice loss
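The bullets above translate into a compact PyTorch sketch. Layer counts and activations follow the slide text; channel widths, kernel sizes, strides, the latent dimension, and the use of transposed convolutions in the decoder are assumptions, not the authors' exact configuration. A Dice loss, one of the three candidates mentioned, is included; BCE and MSE are available as nn.BCELoss and nn.MSELoss.

```python
import torch
import torch.nn as nn

class FlowNetAE(nn.Module):
    """Hedged sketch of the encoder-decoder described on the slide.
    Layer counts follow the text; channel widths, kernel sizes, strides,
    the latent size, and the transposed convolutions are assumptions."""

    def __init__(self, latent_dim=128):
        super().__init__()
        conv = lambda ci, co: nn.Conv3d(ci, co, 4, stride=2, padding=1)
        # Encoder: four CONV layers with BN in between, one CONV without BN,
        # then two fully connected layers; ReLU on all hidden layers.
        self.encoder = nn.Sequential(
            conv(1, 16),    nn.BatchNorm3d(16),  nn.ReLU(inplace=True),
            conv(16, 32),   nn.BatchNorm3d(32),  nn.ReLU(inplace=True),
            conv(32, 64),   nn.BatchNorm3d(64),  nn.ReLU(inplace=True),
            conv(64, 128),  nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            conv(128, 256), nn.ReLU(inplace=True),          # CONV w/o BN
            nn.Flatten(),
            nn.Linear(256 * 2 * 2 * 2, 512), nn.ReLU(inplace=True),
            nn.Linear(512, latent_dim),                      # feature descriptor
        )
        deconv = lambda ci, co: nn.ConvTranspose3d(ci, co, 4, stride=2, padding=1)
        # Decoder: five CONV layers and four BN layers; sigmoid at the output.
        self.fc = nn.Linear(latent_dim, 256 * 2 * 2 * 2)
        self.decoder = nn.Sequential(
            deconv(256, 128), nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            deconv(128, 64),  nn.BatchNorm3d(64),  nn.ReLU(inplace=True),
            deconv(64, 32),   nn.BatchNorm3d(32),  nn.ReLU(inplace=True),
            deconv(32, 16),   nn.BatchNorm3d(16),  nn.ReLU(inplace=True),
            deconv(16, 1),    nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (B, 1, 64, 64, 64) binary volume
        z = self.encoder(x)
        out = self.decoder(self.fc(z).view(-1, 256, 2, 2, 2))
        return out, z

def dice_loss(pred, target, eps=1e-6):
    """One of the three candidate losses named on the slide."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```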

Dimensionality reduction (DR) and object clustering

• Consider three DR methods: t-SNE (neighborhood-preserving), MDS and Isomap (distance-preserving)
• Consider three clustering methods: DBSCAN (density-based), k-means (partition-based), and agglomerative clustering (hierarchy-based)
• Finally choose t-SNE + DBSCAN (see the sketch below)
• Compare three distance measures: FlowNet feature Euclidean distance, streamline MCP distance, and streamline Hausdorff distance
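The chosen pipeline, t-SNE projection followed by DBSCAN clustering of the learned descriptors, can be sketched with scikit-learn as below. The file name, perplexity, eps, and min_samples are placeholders, not the talk's settings.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

# Z: (num_lines, latent_dim) feature descriptors produced by the autoencoder.
Z = np.load("flow_line_features.npy")        # hypothetical file name

# Project to 2D for the exploration view (neighborhood-preserving DR).
emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(Z)

# Density-based clustering in the projected space; eps/min_samples are
# placeholder parameters to be tuned interactively.
labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(emb)

# A representative line (e.g., the medoid) could then be picked per cluster.
for c in set(labels) - {-1}:
    members = np.where(labels == c)[0]
    print(f"cluster {c}: {len(members)} lines")
```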

Parameter setting and performance

Qualitative evaluation

Panels: training set only | test set only | training set + test set

Quantitative evaluation

• Use representative streamlines to reconstruct the vector field using gradient vector flow (GVF)

FlowNet results (1)

FlowNet results (2)

FlowNet results (3)

Vector field reconstruction

Outline of approach
• Goal
  – Target streamline-based flow field representation and reduction in an in situ visualization setting
  – Outperform the de facto method of gradient vector flow (GVF) for vector field reconstruction
  – Achieve higher reconstruction quality compared with using a compressed vector field
• Key ideas
  – Compress streamlines as sequence data using SZ compression
  – Reconstruct the vector field using deep learning

Two-stage vector field reconstruction

Stage 1 (S1) Stage 2 (S2)

S1: Low-resolution initialization
• Start from a randomly initialized vector field V_low
• Iteratively update the low-resolution vector field V_low until it produces the same velocities as the input streamlines (see the sketch below)
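The sketch below is one loose reading of this initialization, not the paper's exact procedure: grid cells covered by the input streamlines are pinned to finite-difference velocities along the lines, and the randomly initialized low-resolution field is relaxed toward those constraints.

```python
import numpy as np

def init_low_res_field(lines, res=16, iters=50, lo=-1.0, hi=1.0):
    """Loose sketch of S1 (not the paper's exact scheme): voxels covered by
    input streamlines are pinned to velocities estimated along the lines, and
    a randomly initialized low-resolution field V_low is relaxed toward them."""
    V = np.random.uniform(-1, 1, (res, res, res, 3)).astype(np.float32)
    known = np.zeros((res, res, res), dtype=bool)

    # Deposit line velocities into the grid cells the lines pass through.
    for pts in lines:                       # pts: (N, 3) streamline samples
        vel = np.gradient(pts, axis=0)      # approximate velocity along the line
        idx = np.clip(((pts - lo) / (hi - lo) * (res - 1)).round().astype(int),
                      0, res - 1)
        V[idx[:, 0], idx[:, 1], idx[:, 2]] = vel
        known[idx[:, 0], idx[:, 1], idx[:, 2]] = True

    # Iteratively smooth the unknown voxels while keeping the constraints fixed.
    for _ in range(iters):
        avg = sum(np.roll(V, s, axis=a) for a in range(3) for s in (-1, 1)) / 6.0
        V[~known] = avg[~known]
    return V
```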

Panels: original | S1 random tracing | S1 fixed tracing

S2: High-resolution refinement
• Borrow the ideas of image super-resolution and image completion
• Build a CNN to upscale V_low to V_high
• Fill the empty voxels through a nonlinear combination of their neighborhoods

Panels: original | S1 random tracing | S2 random tracing

Verification of two-stage reconstruction

Panels: original | S1 random tracing | S1 fixed tracing | S2 random tracing | direct random tracing | direct fixed tracing

CNN architecture

The CNN includes seven deconvolutional layers and four residual blocks

An example of a residual block
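Since the figure is not reproduced here, the sketch below shows a generic 3D residual block of the kind the caption refers to; the channel width, kernel size, and normalization choices are assumptions rather than the paper's exact block.

```python
import torch
import torch.nn as nn

class ResidualBlock3d(nn.Module):
    """Generic residual block: two 3x3x3 convolutions plus a skip connection.
    Channel width and normalization are illustrative assumptions."""

    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.BatchNorm3d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))   # identity shortcut around the body
```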

Reduction rate and reconstruction PSNR (different numbers of lines)

Compression rate and reconstruction PSNR (different error bounds)
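PSNR between a reconstructed and a ground-truth vector field can be computed as in the snippet below; taking the ground-truth value range as the peak is an assumption about how these plots were produced.

```python
import numpy as np

def psnr(recon, truth):
    """Peak signal-to-noise ratio (dB) between two fields of the same shape;
    the peak is taken as the ground-truth value range (an assumption)."""
    mse = np.mean((recon - truth) ** 2)
    peak = truth.max() - truth.min()
    return 10.0 * np.log10(peak ** 2 / max(mse, 1e-12))
```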

Visualization of reconstruction errors (GVF vs. our method)

Comparison against direct compression of vector field

Panels: ground truth | SZ compression | our method

Future directions
• Scientific simulation data
  – From steady to unsteady data
  – From single-variable to multivariate data
• Deep learning techniques
  – From CNN & RNN to GNN (graph neural network)
  – Adversarial learning and transfer learning
• Goals
  – Generalized 3D feature learning framework
  – Target scalar and vector fields, time-varying multivariate data
  – Data augmentation for scientific visualization

Acknowledgments
• Team members
  – Jun Han, Hao Zheng, Dr. Jun Tao (now at Sun Yat-sen University), Martin Imre
• Collaborators
  – Drs. Hanqi Guo, Danny Chen
• Funding
  – NSF IIS-1455886, CNS-1629914, and DUE-1833129
  – NVIDIA GPU Grant Program

chaoli.wang@nd.edu

Thank you!
