Machine Learning applications in HEP
Dr. Carlos Javier Solano Salinas, Head of the High Energy and Computational Simulation Laboratory
RENACyT Researcher at CONCYTEC; UNI lead in the MINERvA and DUNE experiments at FERMILAB
Professional School of Computer Science
toward accreditation
OUTLINE
1. Machine Learning in HEP
2. DCNN and DANN in MINERvA
3. Pandora
4. Feynman Computing Center - FERMILAB
Machine Learning in High Energy Physics (HEP)
(Nature Vol. 560, 2 Aug 2018)
Large Hadron Collider – LHC, on the border between Switzerland and France
Machine learning for calorimetry at CMS: the mass distribution of Z boson decays (Z → e+e−)
Separating signal from background in the ATLAS experiment: BDT-score distribution for the search for the Higgs decay (H → τ+τ−)
FERMILAB - USA
Neutrino selection and isolation in MicroBooNE
Exploring NOvA’s event-selection neural network using t-distributed stochastic neighbour embedding (t-SNE)
MINERvA (Main INjector ExpeRiment ν-A)
(using DCNN and DANN)
Beam-line
MINERvA – MINOS (DIS 2010)
NuMI Beamline Graphic courtesy B. Zwaska
The MINERvA Detector
Veto Wall
LHe (¼ ton)
Nuclear targets of He, C, Fe, Pb, H2O, and CH. Measuring them in the same experiment reduces systematic errors between nuclei.
Finely segmented, fully active scintillator target: 8.3 tons total, 3 tons fiducial.
120 planar "modules". Total mass: 200 tons. Total channels: ~32k.
MINOS Near Detector (Muon Spectrometer)
CHALLENGE FOR ANALYSIS IN
PARTICLE PHYSICS THIS CENTURY
So much data, both in channels and in number of events, poses
unique challenges
Inspiration from vision and images: use Deep Convolutional
Neural Networks to extract geometric features
Requires a huge number of parallel processes and so was
enabled by the advent of GPUs.
Inspiration from vision and images: domain-adversarial training
Machine Learning algorithms are complicated
MINERvA AT FERMILAB
120 modules for tracking and
calorimetry (32k readout
channels)
MINOS near detector serves as
a muon spectrometer.
Made up of planes of strips in 3
orientations: X, U, and V.
Includes He target, water target
and 5 nuclear targets made up
of C, Fe, Pb
MINERvA VERTEX FINDING
The procedural algorithm walks back the main track and uses secondary tracks. In DIS events, large and complicated hadronic showers may mask the primary vertex.
MINERvA VERTEX FINDING
Treat localization as a classification problem: the DNN predicts which of the 11 segments (or which of the 67 planes) an interaction came from.
Only 4 planes between most targets (2 planes between targets 4 and 5). That means only a single U or V view "pixel".
We want to keep our resolution in z, to be able to find the segment or plane the interaction took place in, so we use non-square kernels so that pooling is only along U, V, or X.
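A minimal numpy sketch of that idea (toy shapes, not the actual MINERvA network): a non-square (1, 2) pooling kernel reduces the transverse dimension while leaving the z axis untouched.

```python
import numpy as np

def max_pool(x, kernel=(1, 2)):
    """Non-overlapping max pooling with a possibly non-square kernel.

    x: 2D array of shape (z_planes, transverse_strips).
    A (1, 2) kernel pools only along the transverse axis,
    preserving full resolution in z.
    """
    kz, kt = kernel
    z, t = x.shape
    # Trim to a multiple of the kernel size, then block-reduce.
    x = x[: z - z % kz, : t - t % kt]
    return x.reshape(z // kz, kz, t // kt, kt).max(axis=(1, 3))

# Toy "image": 4 z-planes x 6 transverse strips.
img = np.arange(24, dtype=float).reshape(4, 6)
pooled = max_pool(img, kernel=(1, 2))
print(pooled.shape)  # (4, 3): z resolution preserved, transverse halved
```

A square (2, 2) kernel would instead halve both axes, losing exactly the z resolution the vertex-finding task depends on.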
ML: DEEP CONVOLUTIONAL NEURAL NETWORKS
Feature extraction is realized within the ML algorithm, possibly in the same layers as the non-linear combination of features, or possibly separately. In a NN, the output is compared to a loss function (when the network is used as a classifier).
It is possible to use the convolutional NN purely for feature extraction and feed the extracted features into a different sort of ML algorithm for classification or regression.
DEEP NEURAL NETWORK
In a deep NN, the early layers of the
network "learn" local features while the
later layers "learn" global features.
This is the "hierarchical model", where the
representations in early layers are
combined in the later layers.
These deep layers allow for more
complicated combinations of the
features fed in from the inputs.
For deep networks, non-linear layers
are required in order to have an
advantage over a shallow linear
network.
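A quick numpy check of that last point (illustrative random matrices, not taken from the slides): two stacked linear layers collapse into a single linear layer, so depth buys nothing without a non-linearity in between.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))      # batch of 5 inputs, 4 features each
W1 = rng.normal(size=(4, 8))     # first "layer"
W2 = rng.normal(size=(8, 3))     # second "layer"

deep_linear = (x @ W1) @ W2      # two linear layers, no activation
shallow = x @ (W1 @ W2)          # one equivalent linear layer

# Identical outputs: the composition collapses to a single matrix.
print(np.allclose(deep_linear, shallow))  # True

# With a ReLU in between, no single matrix reproduces the output.
deep_relu = np.maximum(x @ W1, 0.0) @ W2
print(np.allclose(deep_relu, shallow))    # False (generically)
```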
CONVOLUTIONAL NEURAL NETWORK
• These types of networks are well suited for feature extraction for
things like images with geometric structures.
Particle physics events have geometric structures, which procedural
algorithms (or scanners) identify.
• Convolutional networks have fewer fitted parameters because a
single kernel's weights are shared across the whole space;
the parameters describe how the kernel is applied.
• In MINERvA we have time and energy information (obvious use of
`depth’)
● Final convolutional layer is a "semantic" representation rather than
a spatial representation.
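A back-of-the-envelope illustration of that weight sharing (layer sizes are hypothetical, chosen only to show the scaling): a dense layer fits one weight per input-output pair, while a convolutional layer reuses one small kernel across the whole image.

```python
# Hypothetical layer sizes, for illustration only.
H, W = 127, 94            # input image (e.g. planes x strips)
n_out = H * W             # dense layer with one output per pixel
k = 3                     # 3x3 convolutional kernel
n_filters = 12

dense_params = (H * W) * n_out   # one weight per input-output pair
conv_params = n_filters * k * k  # one shared kernel per filter

print(f"dense: {dense_params:,} weights")  # over 100 million
print(f"conv:  {conv_params} weights")     # 108
```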
DEEP CONVOLUTIONAL NN FOR VERTEX FINDING
• We have three separate convolutional towers that look at each of the X, U, and V images.
• These towers feature image maps of different sizes at different layers of depth, to reflect the different information density in the different views.
• The output of each convolutional tower is fed to a fully connected layer, then concatenated and fed into another fully connected layer before being fed into the loss function.
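The three-tower design can be sketched as follows (a simplified numpy mock-up with made-up dimensions; the real MINERvA network uses learned convolutional towers, not a single random projection per view):

```python
import numpy as np

rng = np.random.default_rng(42)

def tower(image, w):
    """Stand-in for one convolutional tower + fully connected layer:
    flattens the view and projects it to a fixed-size feature vector."""
    return np.maximum(image.ravel() @ w, 0.0)   # ReLU-activated features

# One event seen in the three MINERvA views (made-up sizes per view).
x_view = rng.normal(size=(50, 127))
u_view = rng.normal(size=(25, 127))
v_view = rng.normal(size=(25, 127))

# Separate weights per tower, reflecting the different view sizes.
w_x = rng.normal(size=(50 * 127, 16))
w_u = rng.normal(size=(25 * 127, 16))
w_v = rng.normal(size=(25 * 127, 16))

# Concatenate the tower outputs, then a final fully connected layer
# producing one score per target segment (11 classes).
features = np.concatenate(
    [tower(x_view, w_x), tower(u_view, w_u), tower(v_view, w_v)]
)
w_out = rng.normal(size=(3 * 16, 11))
scores = features @ w_out

# Softmax turns the scores into a probability per segment.
probs = np.exp(scores - scores.max())
probs /= probs.sum()
print(probs.shape)  # (11,): one probability per segment
```

Keeping the towers separate until after the per-view fully connected layers lets each view be processed at its own resolution before the information is combined.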
Pandora
Machine Learning is more than just Neural Networks!!!
Pandora team
FERMILAB
Grid Computing - LHC-CERN
FERMILAB - USA
FEYNMAN COMPUTING CENTER
From 2006 to 2018 we sent 7 master's students to FERMILAB to work on their
thesis projects during one-year stays (with financial support from FERMILAB).
They were students from Physics and Engineering Physics.
Now it is time for Computer Science master's students!