DNN-Assisted Parameter Space Exploration and Visualization for Large-Scale Simulations
Han-Wei Shen
Department of Computer Science and Engineering
The Ohio State University
1
Traditional Visualization Pipeline
[Diagram: a supercomputer runs the simulator and writes raw data to disk (I/O); post-analysis reads the data back into memory (I/O).]
• Data are often very large
• Mainly batch-mode processing; interactive exploration is not possible
• Limited parameters to explore
Parameter Space Exploration
• Running large-scale simulations is very time and storage consuming
• A physical simulation typically has a huge parameter space
• Ensemble simulations are needed to analyze the model quality and also identify the uncertainty of the simulation
• It is not possible to exhaust all possible input parameters: time and space prohibitive
3
DNN-Assisted Parameter Space Exploration
• Can we create visualizations of the simulation outputs without saving the data? Or
• Can we predict the simulation results without running the simulation?
• Why?
- Identify important simulation parameters
- Identify simulation parameter sensitivity
- Quantify the uncertainty of the simulation models
• Methods:
- InSituNet (IEEE SciVis '19 Best Paper)
- NNVA (IEEE VAST '19 Best Paper Honorable Mention)
4
InSituNet: Deep Image Synthesis for Parameter Space Exploration of Ensemble Simulations
5
Introduction
• Ensemble data analysis workflow
• Issues: I/O bottleneck and storage overhead
6
[Diagram: simulation parameters → ensemble simulations → (I/O) raw data on disk → (I/O) post-hoc analysis.]
Introduction
• In situ visualization
- Generating visualizations at simulation time
- Storing images for post-hoc analysis
7
[Diagram: simulation parameters → ensemble simulations → in situ visualization (driven by view parameters and visual mapping parameters) → images → (I/O) image data on disk → (I/O) post-hoc analysis.]
Introduction
• Challenges
- Limiting the flexibility of post-hoc exploration and analysis
- Raw data are no longer available
- Incapable of exploring the simulation parameters
- Expensive simulations need to be conducted for new parameter settings
• Our Approach
- Studying how the parameters influence the visualization results
- Predicting visualization results for new parameter settings
8
Approach Overview
9
[Diagram: simulation parameters p_sim drive the ensemble simulations; in situ vis, driven by view parameters p_view and visual mapping parameters p_vis, produces the visualization images I.]
ℱ(p_sim, p_vis, p_view) = I
InSituNet: a deep neural network that models the function ℱ
Design of InSituNet
10
• Three subnetworks and two losses
- Regressor (mapping parameters to prediction images)
- Feature comparator (computing the feature reconstruction loss)
- Discriminator (computing the adversarial loss)
[Diagram: input parameters → Regressor → prediction; the Feature Comparator computes the feature reconstruction loss and the Discriminator computes the adversarial loss between the prediction and the ground truth.]
Regressor R_θ
12
• A convolutional neural network (CNN)
- Input: parameters p_sim, p_vis, and p_view
- Output: prediction image Î
- Weights: θ, updated during training
[Architecture of the regressor: the simulation (l-D), view (n-D), and visual mapping (m-D) parameter vectors each pass through fully connected layers (to 512, then 512 → 512), are concatenated into a 1536-D vector, projected and reshaped to a 4×4×(16×k) tensor, then grown through a stack of upsampling residual blocks (16k → 16k → 8k → 8k → 4k → 2k → k channels, 8×8 up to 256×256, with batch normalization and ReLU); a final 2D convolution with tanh produces the 256×256×3 image. Each residual block stacks 3×3 convolutions with an upsampled 1×1 skip connection summed onto the output.]
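The parameter-encoding front end of the regressor can be sketched in NumPy. This is a toy with random weights, for illustration only; the function names `encode` and `front_end`, the ReLU placement, and the weight initialization are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(p, out=512):
    # One parameter branch: two fully connected layers (p -> 512 -> 512),
    # here with ReLU after each (an assumption for this sketch)
    h = np.maximum(0.0, p @ rng.normal(0.0, 0.01, (p.size, out)))
    return np.maximum(0.0, h @ rng.normal(0.0, 0.01, (out, out)))

def front_end(p_sim, p_view, p_vis, k=16):
    # Encode each parameter group, concatenate to 1536-D, project and
    # reshape to the 4x4x(16*k) tensor that the upsampling blocks grow
    # into the 256x256x3 image
    z = np.concatenate([encode(p_sim), encode(p_view), encode(p_vis)])
    feat = z @ rng.normal(0.0, 0.01, (1536, 4 * 4 * 16 * k))
    return feat.reshape(4, 4, 16 * k)
```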
Regressor R_θ
13
Regressor R_θ
14
• A convolutional neural network (CNN)
- Input: simulation, visual mapping, and view parameters
- Output: prediction image
- Weights θ: weights collected from all layers
• Fully connected layer
- Input: 1D vector x ∈ ℝ^a
- Output: 1D vector y ∈ ℝ^b
- Weights: matrix W ∈ ℝ^{a×b}
- y = Wx
• Activation function
- Rectified Linear Unit (ReLU)
- y′ = max(0, y)
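The two operations on this slide can be written directly in NumPy. A minimal sketch (function names are illustrative; W is stored as a (b, a) matrix here so that y = Wx type-checks for x ∈ ℝ^a, y ∈ ℝ^b):

```python
import numpy as np

def fully_connected(x, W):
    # y = Wx: maps x in R^a to y in R^b, with W stored as shape (b, a)
    return W @ x

def relu(y):
    # ReLU activation: y' = max(0, y), applied elementwise
    return np.maximum(0.0, y)
```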
Regressor R_θ
15
• 2D convolutional layer
- Input: tensor X ∈ ℝ^{h×w×c}
- Output: tensor Y ∈ ℝ^{h×w×c′}
- Weights: kernel K ∈ ℝ^{k_h×k_w×c×c′}
- Y = K ∗ X
• Residual block¹
- Adding the input to the output of the convolutional layers
¹K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
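The residual idea above can be sketched in a few lines of NumPy. For brevity this uses 1×1 convolutions (a per-pixel channel mix), whereas the regressor's blocks use 3×3 kernels; the function names are illustrative.

```python
import numpy as np

def conv1x1(x, K):
    # 1x1 convolution: x is (h, w, c), K is (c, c_out); Y = K * X reduces
    # to mixing the input channels linearly at every pixel
    return x @ K

def residual_block(x, K1, K2):
    # He et al. (2016): add the block's input onto the output of its
    # convolutional layers, out = x + F(x)
    h = np.maximum(0.0, conv1x1(x, K1))  # conv + ReLU
    return x + conv1x1(h, K2)            # skip connection
```

With zero kernels the block reduces to the identity mapping, which is exactly what makes residual blocks easy to optimize.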
Loss Function
16
• Difference between the prediction and the ground truth
• Used to update the weights θ
[Diagram: input parameters → Regressor → prediction; the loss between the prediction and the ground truth drives backpropagation.]
Loss Function: Straightforward Approach
17
• Pixel-wise loss functions
- Example: mean squared error loss (MSE loss ℒ_mse)
- Issue: blurry prediction images
[Figure: blurry image generated by a regressor trained with the MSE loss, next to the ground truth.]
Loss Function: Our Approach
18
• Combining two loss functions defined by two subnetworks
- Feature comparator → feature reconstruction loss ℒ_feat
- Discriminator → adversarial loss ℒ_adv
[Diagram: input parameters → Regressor → prediction; the Feature Comparator computes the feature reconstruction loss and the Discriminator computes the adversarial loss against the ground truth.]
Feature Comparator φ
20
• A pretrained CNN (e.g., VGG-19¹)
- Input: image I
- Output: feature map φ_l(I) ∈ ℝ^{h×w×c} of an intermediate layer l
[Diagram: the early VGG-19 layers (conv1_1, relu1_1, conv1_2, relu1_2, pool1, conv2_1, relu2_1, conv2_2, relu2_2, pool2) map the input to an h×w feature map with c channels.]
¹K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations, 2015.
Feature Reconstruction Loss ℒ_feat
21
• Definition
- MSE loss between the feature maps of the prediction and the ground truth
- Given a batch of ground truth images I_{1:b} and predictions Î_{1:b}:
ℒ_feat_l = (1 / (hwcb)) Σ_{i=1}^{b} ‖φ_l(I_i) − φ_l(Î_i)‖₂²
• Benefits
- Making the regressor generate images sharing similar features with the ground truth, which leads to images with sharper features
[Figure: ground truth compared with images from regressors trained with ℒ_mse and ℒ_feat.]
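The loss above can be sketched in NumPy. Here `phi` is a stand-in for the pretrained VGG-19 feature extractor φ_l (the function names are illustrative):

```python
import numpy as np

def feature_reconstruction_loss(phi, truth_batch, pred_batch):
    # L_feat_l = 1/(h*w*c*b) * sum_i || phi_l(I_i) - phi_l(Î_i) ||_2^2,
    # i.e. the MSE between the feature maps of ground truth and prediction
    feats = [(phi(t), phi(p)) for t, p in zip(truth_batch, pred_batch)]
    b = len(feats)
    h, w, c = feats[0][0].shape
    return sum(np.sum((ft - fp) ** 2) for ft, fp in feats) / (h * w * c * b)
```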
Discriminator D_ω
[Architecture of the discriminator: the 256×256×3 input image is downsampled through residual blocks with average pooling (3 → k → 2k → 4k → 8k → 8k → 16k channels, 256×256 down to 4×4) followed by global sum pooling; the simulation, view, and visual mapping parameters are encoded by fully connected layers (as in the regressor), concatenated (1536-D), projected to 16×k, and combined with the pooled image features via a dot product; a final fully connected layer and sigmoid output the real/fake likelihood.]
23
• A binary classifier (CNN) with weights ω
• Trained to differentiate prediction (fake) and ground truth (real) images
• Input: real/fake image I and the input parameters p
• Output: likelihood value D_ω(I, p) ∈ [0, 1]
- 1 means real
- 0 means fake
Adversarial Loss ℒ_adv
24
• Regressor and discriminator are trained in an adversarial manner¹
• Given a batch of real images I_{1:b}, fake images Î_{1:b}, and parameter settings p_{1:b}:
- The regressor tries to fool the discriminator by minimizing
ℒ_adv_R = −(1/b) Σ_{i=1}^{b} log D_ω(Î_i, p_i)
- The discriminator tries to differentiate real and fake images by minimizing
ℒ_adv_D = −(1/b) Σ_{i=1}^{b} [log D_ω(I_i, p_i) + log(1 − D_ω(Î_i, p_i))]
¹I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Proceedings of Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
[Figure: ground truth compared with images from regressors trained with ℒ_feat, ℒ_mse, and ℒ_adv.]
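Given the discriminator's likelihood outputs on a batch, the two adversarial losses above are one line each in NumPy (a sketch; the function names are illustrative):

```python
import numpy as np

def loss_adv_R(d_fake):
    # Regressor side: fool D by minimizing -1/b * sum_i log D(Î_i, p_i)
    return -np.mean(np.log(d_fake))

def loss_adv_D(d_real, d_fake):
    # Discriminator side:
    # -1/b * sum_i [log D(I_i, p_i) + log(1 - D(Î_i, p_i))]
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
```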
Total Loss
25
• A weighted combination of ℒ_feat and ℒ_adv:
ℒ = ℒ_feat + λℒ_adv
• ℒ_feat and ℒ_adv are complementary with each other
- ℒ_feat: overall feature-level difference between image pairs
- ℒ_adv: local details where the real and fake images differ the most
[Figure: ground truth compared with images from regressors trained with ℒ_mse, ℒ_feat, ℒ_adv, and ℒ_feat + λℒ_adv.]
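The weighted combination is trivial to write down; the results comparison later in the deck uses λ = 10⁻² (ℒ_feat + 10⁻²ℒ_adv_R), so that is used as the default here:

```python
def total_loss(l_feat, l_adv, lam=1e-2):
    # L = L_feat + lambda * L_adv; lambda balances feature-level fidelity
    # against the adversarial term's local detail
    return l_feat + lam * l_adv
```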
Training
• Updating R_θ and D_ω w.r.t. the loss functions iteratively
26
Parameter Space Exploration with InSituNet
• Forward prediction
- Predicting images for new parameter settings
• Backward sensitivity analysis
- Computing the sensitivity of the parameters
27
[Interface: simulation parameters (e.g., BwsA in [1, 3.8]), visual mapping parameters (e.g., isovalue of temperature: 15–25), and view parameters (theta in [0, 360], phi in [−90, 90]) drive forward prediction; a subregion view of the sensitivity of BwsA with its overall sensitivity curve supports backward sensitivity analysis.]
Results
28
• Three simulations: SmallPoolFire, Nyx, and MPAS-Ocean
• Comparing the prediction with the ground truth
[Figure: for each simulation (SmallPoolFire, Nyx, MPAS-Ocean), ground truth images compared with predictions from models trained with ℒ_mse, ℒ_feat, ℒ_adv_R, and ℒ_feat + 10⁻²ℒ_adv_R.]
Results
• Nyx (cosmological simulation)
• MPAS-Ocean (ocean simulation)
29
Results
30
Datasets and timings: t_sim, t_vis, and t_tr are timings for running ensemble simulations, visualizing data in situ, and training InSituNet, respectively; t_fp and t_bp are timings for a forward and backward propagation of the trained InSituNet, respectively.
• We propose InSituNet, a deep learning-based image synthesis model for parameter space exploration of large-scale ensemble simulations
• We evaluate the effectiveness of InSituNet in analyzing ensemble simulations that model different physical phenomena
Conclusion
31
Neural Network Assisted Visual Analysis of Yeast Cell Polarization Simulation
32
Experimental Biologist Computational Biologist
Mathematical Simulation Model
Simulation Domain
Computational Biologist
Mathematical Simulation Model
• Protein species of interest: Cdc42
• Identify parameter configurations that can simulate high Cdc42 polarization
• Polarization: Asymmetric localization of protein concentration in a small region of the cell membrane
Simulation Background
Microscopic Image Simulation
Domain
Computational Biologist
• High-dimensional input/output spaces
- 35 uncalibrated simulation input parameters
- 400-dimensional output
• Computationally expensive
- ~2.3 hrs/execution
• Challenging to perform interactive exploratory analysis of the parameter space
Challenges
Mathematical Simulation Model
Simulation Domain
Computational Biologist
• Neural network-based surrogate model
• Mimics the expensive simulation during analysis
• Facilitates interactive visual analytics
• Quick preview of predicted results for new parameter configurations
Proposed Approach
Mathematical Simulation Model
Data Scientist Surrogate Model
≈Simulation
Domain
• Quickly preview results for new parameters
• Perform parameter sensitivity analysis
• Discover interesting parameter configurations
• Validate the surrogate model
• Extract insights from trained surrogate
VA System Requirements
≈Surrogate Simulation
Computational Biologist
Visual Analysis Workflow
38
[Workflow: the Yeast Cell Polarization Simulation provides training data for the Neural Network Surrogate Model; NNVA, the visual analysis system, sends visual queries and new input parameter configurations to the surrogate and supports uncertainty visualization (dropout), parameter sensitivity, parameter optimization (activation max/min), and model diagnosis (weight matrix analysis).]
Network Structure and Training
[Diagram: input layer (p1, p2, …, p35) → hidden layer H0 (1024) → hidden layer H1 (800) → hidden layer H2 (500) → output layer (400 values over the simulation domain), with ReLU activations and dropout (0.3) applied at two of the hidden layers.]
Network Structure and Training
• Visualization: predicted protein concentration is color-mapped and laid out radially
[Color map: Cdc42 concentration, 0–400.]
Network Structure and Training
• Loss function: MSE
• Training data size: 3000
- Uniformly sampled parameter space
• Validation data size: 500
• Final accuracy: 87.6%
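The surrogate's forward pass can be sketched in NumPy. This is a toy with random weights; the exact dropout placement and initialization are assumptions for illustration (the slide shows dropout 0.3 at two of the hidden layers):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_mlp(sizes=(35, 1024, 800, 500, 400)):
    # 35 input parameters -> H0(1024) -> H1(800) -> H2(500) -> 400 outputs
    return [(rng.normal(0.0, 0.01, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(layers, x, train=False, p_drop=0.3):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(0.0, x)  # ReLU on hidden layers
            if train:               # inverted dropout, training only
                x *= (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x
```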
Uncertainty in Neural Networks using Dropout
• Important to visualize the prediction uncertainty in the exploration process
• Dropout: Randomly ignoring the output of neurons in a layer
• Training phase:
- Acts as a regularizer to avoid overfitting
• Prediction phase [1]:
- Uncertainty quantification of the predicted result
[1] Gal et al. : Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. ICML 2016
• Visualized using standard deviation bands
[Figure: predicted concentration curve with uncertainty bands.]
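The Monte Carlo dropout idea [1] is simple to sketch: leave dropout switched on at prediction time, run the surrogate many times, and summarize the spread. Here `predict_with_dropout` is a stand-in for the surrogate's stochastic forward pass (an illustrative name, not from the paper):

```python
import numpy as np

def mc_dropout_predict(predict_with_dropout, x, n_samples=100):
    # Run the stochastic forward pass n_samples times; the mean is the
    # prediction and the per-output std gives the uncertainty band
    preds = np.stack([predict_with_dropout(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)
```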
Parameter Sensitivity Analysis
• Influence of individual parameters on different areas of the output domain
- Measure of how much the output changes for a small change in the input
- Useful for parameter tuning
• Partial derivative of the output w.r.t. the input: ∂y_j / ∂x_i
• Computed using backpropagation
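NNVA obtains ∂y_j/∂x_i exactly via backpropagation through the surrogate; as a library-free stand-in, the same quantities can be approximated by central finite differences (the function name is illustrative):

```python
import numpy as np

def sensitivity(f, x, eps=1e-5):
    # Central finite differences approximating dy/dx_i for a scalar-valued
    # f; backpropagation would give these derivatives exactly
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g
```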
• Visualize detailed parameter sensitivity across the cell membrane [heatmap: 400 membrane positions × 35 parameters]
• Interactive sensitivity brush to select area of interest
Parameter Optimization
• Recommend parameter configurations that maximize/minimize the predicted protein concentration values at different regions
• Activation maximization:
- Keeping the weights fixed, update the input (via gradient ascent) such that it maximizes the model output, i.e.,
max_x f_w(x) − λ‖x − x₀‖²  (with an L2 regularizer)
• Activation minimization:
min_x f_w(x) + λ‖x − x₀‖²
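Activation maximization as described above can be sketched as plain gradient ascent on the input (the network weights stay fixed). `grad_f` stands in for the gradient of the model output w.r.t. the input, which the surrogate would supply via backpropagation; the hyperparameter values are illustrative:

```python
import numpy as np

def activation_maximization(grad_f, x0, lam=0.1, lr=0.1, steps=500):
    # Gradient ascent on  max_x f(x) - lam * ||x - x0||^2 :
    # step along df/dx minus the gradient of the L2 regularizer
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    for _ in range(steps):
        x += lr * (grad_f(x) - 2.0 * lam * (x - x0))
    return x
```

Activation minimization is the same loop with the signs flipped.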
• Interactive brushes to select specific areas of the cell membrane
[Figure: recommended configurations from the Act. Max, Act. Min, and Act. Max-Min brushes.]
Visual Analysis System
50
• Instance View:
- Detailed analysis of the predicted result for a specific parameter configuration of interest
• Parameter Control Board:
- Interactively calibrate the 35 simulation parameters
• Quick View:
- Visualize predicted protein concentration
• Parameter List View:
- Progressively store newly discovered sets of parameters
• Model Analysis View:
- Analyze the weight matrices of the trained network
Use Case 1: Discover New Parameter Configurations
• Identify parameter configurations that can simulate high Cdc42 polarization
• Polarization Factor (PF): Measure of the extent of polarization [0.0, 1.0]
• Found several parameter configurations that simulated high PF (>0.8)! Highest PF = 0.82
• Compared with previous analysis efforts (using polynomial surrogate models)
[Figure: Cdc42 concentration vs. angle in degrees for the raw image data (PF = 0.87), the NNVA-discovered configuration (PF = 0.82), simulated annealing (PF = 0.64), and MCMC (PF = 0.57).]
Use Case 2: Analyzing the trained surrogate
• To extract and validate the knowledge learned by the trained network
• Visualize and analyze the weight matrices
[Diagram: network layers (input 35 → H0 1024 → H1 800 → H2 500 → output 400) with their weight matrices.]
Use Case 2: Analyzing the trained surrogate
• First weight matrix (35 × 1024, rows w1 … w1024, row-wise sorted; input → H0)
- Correlated parameters: k_24cm0, k_24cm1; k_42a, k_42d; q, h
- Insufficient parameter ranges: k_24cm0, k_24cm1, k_42a, k_42d, q, h, C42_t
Use Case 2: Analyzing the trained surrogate
• Final weight matrix (500 × 400, columns w1 … w400; H2 → output, over neuron indices in the H2 layer)
- Patterns with high weights to the center neurons
- Average parameter sensitivity to such patterns (sorted avg. sensitivity)
- The top 15 parameters align with the domain knowledge
Conclusion and Future Work
• Neural network-assisted analysis backend for interactive visual analytics
• Utilized post-hoc analysis operations on the trained model to facilitate scientific inquiries
• [Future] Investigate more effective model interpretation techniques; currently, weight matrix analyses require ML knowledge
• [Future] Explore additional post-hoc analysis techniques for neural networks to interpret abstract domain-level scientific concepts from the model