
Page 1

Increased warning times in JET APODIS disruption predictor by using confidence qualifiers

J. Vega1, S. Dormido-Canto2, A. Murari3, G. A. Rattá1, R. Castro1 and JET Contributors*

EUROfusion Consortium, JET, Culham Science Centre, Abingdon, OX14 3DB, UK

1 Laboratorio Nacional de Fusión, CIEMAT, Madrid, Spain. 2 Departamento de Informática y Automática, UNED, Madrid, Spain. 3 Consorzio RFX, Padua, Italy.

*See the author list of “Overview of the JET results in support to ITER” by X. Litaudon et al. to be published in Nuclear Fusion Special issue: overview and summary reports from the 26th Fusion Energy Conference (Kyoto, Japan, 17-22 October 2016)

Acknowledgements

This work was partially funded by the Spanish Ministry of Economy and Competitiveness under the Projects No ENE2015-64914-C3-1-R and ENE2015-64914-C3-2-R.

This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission

Page 2

Motivation

• Machine learning methods are used to learn something from past examples (model creation) in order to make predictions for new examples

• Machine learning methods have shown the potential to distinguish between disruptive and non-disruptive behaviours through the use of binary classifiers

• A training process with disruptive and non-disruptive examples allows determining a separation frontier between both plasma states to classify the disruptive/non-disruptive character of new examples

• Sometimes, the selection of disruptive examples is challenging

• When an alarm is triggered, only mitigation actions are possible


[Figure: disruption predictor decision function separating non-disruptive from disruptive behaviour. For avoidance/mitigation purposes, three plasma states have to be differentiated: non-disruptive, abnormal (avoidance) and disruptive (mitigation).]

Page 3

Motivation

• Machine learning methods could be used to find useful relationships between variables to discriminate among three plasma states: non-disruptive, abnormal and disruptive

• Abnormal predictions will trigger avoidance techniques

• Disruptive predictions will trigger mitigation techniques

• For machine protection purposes, in a first approach, the relationships do not necessarily have to be related to physics quantities

• The objective is to recognise the presence of behaviours that are dangerous for the machine, even if the physics reasons are unknown

• From the machine learning perspective, an important goal would be to put into operation a single system able to decide between non-disruptive, abnormal and disruptive behaviours

• It should be emphasised that the predictor system has to be of general application (i.e. not focused on specific plasma events but able to identify any abnormal behaviour)


[Flowchart: disruption predictor inside the control loop. Non-disruptive behaviour? Yes → continue in the control loop. No → Completely disruptive behaviour? No → avoidance techniques; Yes → mitigation actions.]

The aim of this presentation is not to propose avoidance or mitigation methods but to analyse the potential capabilities of machine learning to recognise abnormal plasma conditions and to trigger alarms.
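As a minimal illustration of the control-loop branching above, the three-way decision could be sketched as follows (hypothetical function and state names; this is not part of the actual JET implementation):

```python
def control_loop_step(plasma_state: str) -> str:
    """Minimal sketch of the three-way branch in the control loop.

    `plasma_state` is assumed to be one of 'non-disruptive', 'abnormal'
    or 'disruptive' (e.g. the output of the predictor described later).
    """
    if plasma_state == "non-disruptive":
        return "continue discharge"            # stay in the normal control loop
    elif plasma_state == "abnormal":
        return "trigger avoidance techniques"  # try to steer away from the disruption
    else:  # 'disruptive'
        return "trigger mitigation actions"    # only mitigation is possible at this point
```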

Page 4

Outline

• Disruption avoidance and mitigation

• Critical questions about machine learning and disruption avoidance

• A novel machine learning method to distinguish abnormal and disruptive behaviours

• Application to JET through the APODIS disruption predictor outputs

• Summary


Page 5

Disruption avoidance and mitigation

• Existing avoidance/mitigation techniques include:

• Injection of a significant amount of gas through fast valves

• M. Lehnen et al. Nuclear Fusion 51 (2011) 123010 (12pp)

• M. Bakhtiari et al. Nuclear Fusion 51 (2011) 063007 (9pp)

• Killer pellets

• G. Pautasso et al. Nuclear Fusion 36, 10 (1996) 1291-1297

• N. Commaux et al. Nuclear Fusion 51 (2011) 103001 (9pp)

• ECRH injection

• B. Esposito et al. Phys. Rev. Lett. 100, 045006 (2008)

• B. Esposito et al. Nuclear Fusion 51 (2011) 083051 (9pp)

• Depending on the disruption type and the time available between the alarm and the disruption, some strategies can be much more desirable than others

• Reliable disruption classifiers could be of great help

• A. Murari et al. Nuclear Fusion 53 (2013) 033006 (9pp)

• B. Cannas et al. Nuclear Fusion 53, 9 (2013) 093023

• Disruption time predictors would be essential elements for proper selection of an avoidance/mitigation strategy

• G. Pautasso et al. Nuclear Fusion 42 (2002) 100-108

• F. C. Morabito et al. Nuclear Fusion 41, 11 (2001) 1715-1723

• B. Cannas et al. Nuclear Fusion 44 (2004) 68-76

• J. Vega et al. JET TFM May 8th, 2014


Page 6

Disruption avoidance and mitigation

• A pre-requisite to trigger avoidance/mitigation actions is the real-time identification of abnormal/disruptive conditions through a corresponding alarm

• The reaction time can include:

• any computation time to select a specific A/M methodology

• the necessary time to fire technical systems

• the plasma response time to the A/M actions

• The warning time has to be greater than the reaction time

• However, there are no reliable ‘disruption time predictors’ so far

• Therefore, in the presence of an alarm, the faster the reaction the better

• Could machine learning help in making the decision about triggering alarms to start either avoidance or mitigation actions in a general context?


[Timeline: alarm → avoidance/mitigation start → disruption. The reaction time runs from the alarm to the start of the avoidance/mitigation actions; the warning time runs from the alarm to the disruption.]

Page 7

Critical questions about machine learning and disruption avoidance

• Drawback: machine learning techniques require training (the larger the dataset the better)

• This is a problem for ITER and DEMO

• There are alternatives to classical approaches of machine learning in disruption prediction (valid for mitigation actions)

• Predictors from scratch: at least one disruption is necessary to start the learning process. New learning is added after each missed alarm

– S. Dormido-Canto et al. Nuclear Fusion 53 (2013) 113001 (8pp)

– J. Vega et al. Nuclear Fusion. 54 (2014) 123001 (17pp)

• Predictors based on anomaly detection: no data from past discharges are needed. Under test in the JET real-time network

– J. Vega et al. 1st EPS Conference on Plasma Diagnostics. April 14-17, 2015. Frascati, Italy

– J. Vega et al. Proc. of the 26th Symposium on Fusion Engineering (SOFE 2015). May 31st-June 4th, 2015. Austin (TX), USA

– S. Esquembri et al. Conference Record of the 20th IEEE Real-Time Conference. Jun 5th-10th, 2016. Padova, Italy


Page 8

Critical questions about machine learning and disruption avoidance

• Can we qualify the prediction of avoidance actions in a running discharge?

• How can false alarms be discriminated?

• How sure are we to trigger an avoidance alarm and not a mitigation alarm?

• Some kind of measure is necessary (probability, index, error bars …)

• By assuming a reasonable level of confidence to trigger avoidance methods, what specific actions have to be put into operation?

• Magnetic field, plasma current, injected power, gas fuelling, modifying the pulse schedule, changing the operation scenario, …

• In contrast to many existing fusion devices, ITER pulses will primarily operate with a pulse schedule managed by logic based on plasma conditions rather than being strictly driven by a time schedule

• Additional predictors (machine learning, theory-based or threshold-based) can be necessary

• Type of abnormal condition, potential riskiness, technical system failure, human error, not enough information, …


Page 9

Machine learning and avoidance alarms

• Is it really possible to develop a machine learning predictor to identify abnormal plasma behaviours that have to be corrected to avoid disruptions?

• Training is an issue: there are no reliable ways of classifying examples with the label ‘abnormal behaviour’

• The creation of a three-class classifier to distinguish three different behaviours (safe, abnormal and disruptive) does not seem easy

• Could machine learning methods be used to find useful relationships between variables to discriminate among three plasma states with only two types of examples (disruptive/non-disruptive) in the training process?


• Avoidance actions can only be carried out after the recognition of abnormal behaviours

[Diagram: a two-class separation (non-disruptive / disruptive) compared with a view that introduces an intermediate ‘abnormal’ region between the non-disruptive and disruptive states. Safe, abnormal or disruptive?]

Page 10

Conceptual view of classifiers and ‘abnormal’ examples


• Usually, training datasets are not linearly separable

• Low reliability predictions are ‘strange’ examples

• The ‘strangeness’ has to be quantified

• Low reliability predictions can be used to recognise ‘abnormal’ examples

• How?

• Is there any mathematical support for this assumption?

[Diagram: training examples of the classes ‘non-disruptive’ and ‘disruptive’ (examples are feature vectors $\mathbf{x} \in \mathbb{R}^n$), the separating hyper-plane, and new examples to classify. An example classified as non-disruptive whose closest examples are disruptive, or one classified as disruptive whose closest examples are non-disruptive, has low reliability in the prediction.]

Page 11

Conformal predictors

• Conformal predictors allow qualifying a prediction through a nonconformity measure

• Intuitively, this is a way of measuring how different a new example is from old examples

• Given a nonconformity measure $A$ and a bag of examples $\{z_1, \ldots, z_n\}$, the nonconformity score for each example $z_i$ in the bag is

$$\alpha_i := A\big(\{z_1, \ldots, z_{i-1}, z_{i+1}, \ldots, z_n\},\, z_i\big)$$

• Because a nonconformity measure can be scaled, the numerical value $\alpha_i$ does not, by itself, tell how unusual $A$ finds $z_i$ to be. Therefore, a comparison of $\alpha_i$ to the other $\alpha_j$ is necessary
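A minimal sketch of this score computation in Python, assuming the nonconformity measure is supplied as a generic callable `A` (all names here are illustrative, not from the original work):

```python
from typing import Any, Callable, List, Tuple

Example = Tuple[Any, Any]  # (feature vector x, label y)

def nonconformity_scores(bag: List[Example],
                         A: Callable[[List[Example], Example], float]) -> List[float]:
    """Compute alpha_i = A(bag without z_i, z_i) for every example z_i in the bag."""
    scores = []
    for i, z_i in enumerate(bag):
        rest = bag[:i] + bag[i + 1:]   # the bag with z_i removed
        scores.append(A(rest, z_i))
    return scores
```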

Page 12

Conformal predictors

• A convenient way of making the comparison is to compute the fraction

$$p\text{-value}(z_i) := \frac{\#\{\,j = 1, \ldots, n : \alpha_j \ge \alpha_i\,\}}{n}$$

• The p-value is the fraction of the examples in the bag that are at least as nonconforming as $z_i$

• It should be noted that $1/n \le p\text{-value}(z_i) \le 1$

• If it is small (close to its lower bound $1/n$ for a large $n$), then $z_i$ is very nonconforming (an outlier, i.e. very strange)

• If it is large (close to its upper bound 1), then $z_i$ is very conforming
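The p-value defined above then follows directly from the scores; a small sketch (hypothetical helper name, same fraction as the formula):

```python
def p_value(scores, i):
    """Fraction of the bag at least as nonconforming as z_i; lies in [1/n, 1].

    `scores` is the list of nonconformity scores alpha_1, ..., alpha_n.
    """
    n = len(scores)
    return sum(1 for a in scores if a >= scores[i]) / n
```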

Page 13

Recipe to classify with a conformal predictor


Given a bag of examples $\{z_1, \ldots, z_{n-1}\}$, with $z_i = (x_i, y_i)$ and $y_i \in \{L_1, \ldots, L_M\}$, an underlying classifier, a nonconformity measure $A$ and a new example $z_n = (x_n, y_n)$ to classify into 1 of $M$ classes ($y_n$ is unknown), the conformal evaluation can be performed with the following algorithm:

FOR k = 1, ..., M
    Provisionally set $z_n := (x_n, L_k)$   (it is assumed that $x_n$ belongs to class $k$)
    FOR i = 1, ..., n
        Set $\alpha_i := A\big(\{z_1, \ldots, z_n\} \setminus z_i,\, z_i\big)$   (nonconformity scores)
    END FOR i
    Set the p-value: $p_k := \#\{\,i = 1, \ldots, n : \alpha_i \ge \alpha_n\,\}\,/\,n$
    (the p-value is the fraction of nonconformity scores that are equal to or greater than $\alpha_n$)
END FOR k

Label prediction of $x_n$ := label with the largest p-value

Credibility := largest p-value. Confidence := 1 - 2nd largest p-value
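A compact Python sketch of this recipe, with a user-supplied nonconformity measure `A`; the data layout and function names are illustrative assumptions, not the original implementation:

```python
def conformal_classify(bag, x_new, labels, A):
    """Conformal classification recipe (sketch).

    bag    : list of (x, y) training examples z_1, ..., z_{n-1}
    x_new  : the new example x_n to classify (its label y_n is unknown)
    labels : the M candidate labels L_1, ..., L_M
    A      : nonconformity measure, A(examples_without_z, z) -> float
    """
    p_values = {}
    for y_k in labels:
        extended = bag + [(x_new, y_k)]            # provisionally set z_n = (x_n, L_k)
        n = len(extended)
        alphas = [A(extended[:i] + extended[i + 1:], z) for i, z in enumerate(extended)]
        # p-value: fraction of scores greater than or equal to the score of the new example
        p_values[y_k] = sum(1 for a in alphas if a >= alphas[-1]) / n

    ranked = sorted(p_values.items(), key=lambda kv: kv[1], reverse=True)
    label = ranked[0][0]                # label with the largest p-value
    credibility = ranked[0][1]          # largest p-value
    confidence = 1.0 - ranked[1][1]     # 1 minus the second largest p-value
    return label, credibility, confidence
```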

Page 14

Conformal prediction applied to recognise non-disruptive, abnormal and disruptive behaviours from disruptive/non-disruptive training examples

Given a bag of training examples $\{z_1, \ldots, z_{n-1}\}$, with $z_i = (x_i, y_i)$ and $y_i \in \{\text{non-disruptive}, \text{disruptive}\}$, an underlying classifier, a nonconformity measure $A$ and a new example $x_n$ to classify into 1 of 3 classes {non-disruptive, abnormal, disruptive}, the following evaluation can be carried out:

FOR k = 1, 2
    Provisionally set $z_n := (x_n, y_k)$
    FOR i = 1, ..., n
        Set $\alpha_i := A\big(\{z_1, \ldots, z_n\} \setminus z_i,\, z_i\big)$   (nonconformity scores)
    END FOR i
    Set the p-value: $p_k := \#\{\,i = 1, \ldots, n : \alpha_i \ge \alpha_n\,\}\,/\,n$
END FOR k

Setting $z_n = (x_n, \text{non-disruptive})$ gives $p^{\text{non-disruptive}}$; setting $z_n = (x_n, \text{disruptive})$ gives $p^{\text{disruptive}}$. The nonconformity scores can be seen as the ‘prediction reliabilities’.

Label prediction of $x_n$ :=
    non-disruptive,   if $p^{\text{non-disruptive}} - p^{\text{disruptive}} > 1/n$
    abnormal,         if $0 \le p^{\text{non-disruptive}} - p^{\text{disruptive}} \le 1/n$
    disruptive,       if $p^{\text{disruptive}} > p^{\text{non-disruptive}}$
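The three-class decision can be written as a small helper; note that the 1/n margin and the exact inequalities are reconstructed from the slide fragments and should be read as an assumption:

```python
def three_class_label(p_non_disruptive, p_disruptive, n):
    """Three-class decision from the two p-values; n is the size of the bag."""
    if p_non_disruptive - p_disruptive > 1.0 / n:
        return "non-disruptive"
    elif p_disruptive > p_non_disruptive:
        return "disruptive"
    else:
        return "abnormal"   # the two prediction reliabilities are very close
```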

Page 15

Preliminary results in JET

• Bag of examples

• Training dataset of the APODIS 2nd layer classifier

• Underlying classifier

• Support Vector Machine classifier

• Nonconformity measure

$$A_n = \begin{cases} -\,\text{distance to the separating hyperplane}, & \text{if well classified} \\ +\,\text{distance to the separating hyperplane}, & \text{if badly classified} \end{cases}$$

• Samples to classify

• Outputs of the APODIS 1st layer ($\mathbf{x}_n \in \mathbb{R}^3$) every 32 ms (from plasma start to extinction)
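A possible implementation of this signed-distance nonconformity measure, sketched with scikit-learn purely for illustration (the deck does not specify the software; a linear-kernel SVM and labels in {-1, +1} are assumed here):

```python
import numpy as np
from sklearn.svm import SVC

def svm_nonconformity(clf: SVC, x, y_true) -> float:
    """Signed-distance nonconformity score for a trained linear SVM.

    Well-classified examples far from the hyperplane get very negative
    (very conforming) scores; badly classified examples far from it get
    very positive (very nonconforming) scores. y_true is assumed in {-1, +1}.
    """
    w = clf.coef_[0]                                        # only defined for linear kernels
    d = clf.decision_function([x])[0] / np.linalg.norm(w)   # signed distance to the hyperplane
    return -abs(d) if np.sign(d) == y_true else abs(d)
```

In a strict conformal evaluation the classifier would be refit for every leave-one-out bag; training it once on the whole bag is a common practical simplification.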

Page 16

APODIS review

• Input signals: plasma current, locked mode amplitude, total input power, plasma internal inductance, plasma density, stored diamagnetic energy time derivative, and radiated power

• As a discharge is in execution, the three most recent 32 ms temporal segments are classified as disruptive or non-disruptive

• The three models may disagree about the discharge behaviour

[Diagram: APODIS two-layer architecture. First layer: three independent SVM models M1, M2 and M3 operate in parallel on the three most recent consecutive 32 ms time windows as the discharge evolves; they are trained on the temporal segments [-64, -32], [-96, -64] and [-128, -96] ms with respect to the disruption. Second layer: an SVM decision function that combines the three first-layer outputs.]

J. Vega et al. Fus. Eng. Des. 88 (2013) 1228-1231
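A schematic sketch of one 32 ms evaluation step of an APODIS-like two-layer predictor. The model and feature-extraction interfaces (`decision_function`, `predict`, `extract_features`) are hypothetical placeholders, not the real APODIS code (see the cited paper for the actual features):

```python
import numpy as np

def apodis_step(recent_windows, first_layer_models, second_layer_svm, extract_features):
    """One 32 ms evaluation step of a two-layer APODIS-like predictor (sketch).

    recent_windows     : the three most recent 32 ms segments of the 7 input signals
    first_layer_models : [M1, M2, M3], independent SVM classifiers
    second_layer_svm   : SVM decision function combining the first-layer outputs
    extract_features   : hypothetical helper mapping one 32 ms segment to a feature vector
    """
    # First layer: M1, M2 and M3 operate in parallel on consecutive time windows
    outputs = [m.decision_function([extract_features(seg)])[0]
               for m, seg in zip(first_layer_models, recent_windows)]
    # Second layer: classify the 3-D vector of first-layer outputs
    x_n = np.asarray(outputs).reshape(1, -1)
    return second_layer_svm.predict(x_n)[0]   # 'disruptive' or 'non-disruptive'
```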

Page 17

Preliminary results in JET

• Bag of examples

• Training dataset of the APODIS 2nd layer classifier

• Underlying classifier

• Support Vector Machine classifier

• Nonconformity measure

$$A_n = \begin{cases} -\,\text{distance to the separating hyperplane}, & \text{if well classified} \\ +\,\text{distance to the separating hyperplane}, & \text{if badly classified} \end{cases}$$

• Samples to classify

• Outputs of the APODIS 1st layer ($\mathbf{x}_n \in \mathbb{R}^3$) every 32 ms (from plasma start to extinction)

Page 18

Support Vector Machines

• SVM gives the distance (with sign) of the examples to the separating hyper-plane

• d1= +1.5

• d2= - 2.1

• The label of an example is determined through the sign(distance to the separating hyper-plane)

• Sample 1 belongs to class {+1}: sign(d1) > 0

• Sample 2 belongs to class {-1}: sign(d2) < 0


[Diagram: examples 1 and 2 lie on opposite sides of the separating hyper-plane, at distances d1 and d2 from it, and belong to class {+1} and class {-1} respectively.]
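A small scikit-learn sketch of the signed distance and the sign-based labelling on toy 2-D data (the numbers d1 = +1.5 and d2 = -2.1 above are purely illustrative and are not reproduced by this toy example):

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D dataset with labels {+1, -1} (illustration only)
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([+1, +1, -1, -1])

clf = SVC(kernel="linear").fit(X, y)
w_norm = np.linalg.norm(clf.coef_[0])

for sample in [np.array([1.5, 2.5]), np.array([-2.1, -1.0])]:
    d = clf.decision_function([sample])[0] / w_norm   # signed distance to the hyper-plane
    label = +1 if d > 0 else -1                       # label = sign(distance)
    print(f"distance = {d:+.2f}  ->  class {label:+d}")
```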

Page 19

Preliminary results in JET

• Bag of examples

• Training dataset of the APODIS 2nd layer classifier

• Underlying classifier

• Support Vector Machine classifier

• Nonconformity measure

$$A_n = \begin{cases} -\,\text{distance to the separating hyperplane}, & \text{if well classified} \\ +\,\text{distance to the separating hyperplane}, & \text{if badly classified} \end{cases}$$

• Samples to classify

• Outputs of the APODIS 1st layer ($\mathbf{x}_n \in \mathbb{R}^3$) every 32 ms (from plasma start to extinction)

Page 20

Preliminary results in JET

• Bag of examples

• Training dataset of the APODIS 2nd layer classifier

• Underlying classifier

• Support Vector Machine classifier

• Nonconformity measure

$$A_n = \begin{cases} -\,\text{distance to the separating hyperplane}, & \text{if well classified} \\ +\,\text{distance to the separating hyperplane}, & \text{if badly classified} \end{cases}$$

• Examples to classify

• Outputs of the APODIS 1st layer ($\mathbf{x}_n \in \mathbb{R}^3$) every 32 ms (from plasma start to extinction)

[Diagram: avoidance/mitigation predictor. The outputs of the three first-layer models M1 (SVM), M2 (SVM) and M3 (SVM) feed the conformal evaluation, which produces $p^{\text{non-disruptive}}(t)$ and $p^{\text{disruptive}}(t)$: the prediction reliabilities.]
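Putting the pieces together, the two prediction reliabilities for a single 3-D first-layer output could be computed as sketched below. As a simplification, the 2nd-layer SVM is trained once on the bag rather than refit for every leave-one-out evaluation; interfaces, label encoding and names are assumptions:

```python
import numpy as np

def reliabilities(bag2, clf2, x_n):
    """p_non_disruptive and p_disruptive for one 3-D first-layer output x_n (sketch).

    bag2 : APODIS 2nd-layer training examples [(x, y), ...] with y in {-1, +1}
           (-1 = non-disruptive, +1 = disruptive)
    clf2 : linear SVM already trained on bag2
    """
    w_norm = np.linalg.norm(clf2.coef_[0])

    def alpha(x, y):
        d = clf2.decision_function([x])[0] / w_norm    # signed distance
        return -abs(d) if np.sign(d) == y else abs(d)  # conforming if well classified

    p = {}
    for name, y_label in (("non-disruptive", -1), ("disruptive", +1)):
        extended = bag2 + [(x_n, y_label)]             # provisional label for the new sample
        scores = [alpha(x, y) for (x, y) in extended]
        p[name] = sum(1 for a in scores if a >= scores[-1]) / len(extended)
    return p["non-disruptive"], p["disruptive"]
```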

Page 21

Predictions


Label prediction :=
    non-disruptive,   if $p^{\text{non-disruptive}} - p^{\text{disruptive}} > 1/n$
    abnormal,         if $0 \le p^{\text{non-disruptive}} - p^{\text{disruptive}} \le 1/n$
    disruptive,       if $p^{\text{disruptive}} > p^{\text{non-disruptive}}$

The temporal evolution of the ‘prediction reliabilities’ $p^{\text{non-disruptive}}(t)$ and $p^{\text{disruptive}}(t)$ is analysed:

• The first time $p^{\text{disruptive}} > p^{\text{non-disruptive}}$, a mitigation alarm is triggered

• After two consecutive predictions in which $p^{\text{disruptive}} \ge p^{\text{non-disruptive}} - 1/n$, an avoidance alarm is issued
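A sketch of this alarm logic over the 32 ms reliability traces; the two-consecutive-sample rule and the 1/n margin are reconstructed from the slide and should be treated as assumptions:

```python
def raise_alarms(traces, n):
    """Scan reliability traces and return (time_index, alarm_type) events (sketch).

    traces : list of (p_non_disruptive, p_disruptive) pairs, one every 32 ms
    n      : size of the bag used in the conformal evaluation
    """
    alarms = []
    consecutive_close = 0
    mitigation_fired = False
    for t, (p_nd, p_d) in enumerate(traces):
        if not mitigation_fired and p_d > p_nd:
            alarms.append((t, "mitigation"))      # first time p_disruptive exceeds p_non-disruptive
            mitigation_fired = True
        if p_d >= p_nd - 1.0 / n:
            consecutive_close += 1
            if consecutive_close == 2:
                alarms.append((t, "avoidance"))   # two consecutive 'abnormal-like' predictions
        else:
            consecutive_close = 0
    return alarms
```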

Page 22

Very preliminary results with JET discharges

• This avoidance/mitigation predictor (AMP) has been applied to 789 JET discharges in the range 82429 – 83793

• 81 unintentional disruptions successfully predicted by APODIS

• 708 non-disruptive discharges

• All disruptions are predicted

• 94% with more than 10 ms of warning time

• 37% of the alarms correspond to avoidance alarms

• Average(warning time) ± standard deviation

• APODIS: 428 ± 1166 ms

• AMP: 606 ± 1874 ms

• False alarm rates

• APODIS: 0.99%

• AMP: 5.08%

• For APODIS warning times between 150 ms and 2 s, on average, the AMP recognises avoidance alarms 66 ms earlier than APODIS (only mitigation)
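For completeness, statistics of the kind quoted above can be computed from per-discharge alarm and disruption times as in this sketch (hypothetical data layout; not the actual analysis code):

```python
import numpy as np

def predictor_statistics(alarm_times_s, disruption_times_s, n_non_disruptive, n_false_alarms):
    """Warning-time statistics and false-alarm rate (sketch).

    alarm_times_s, disruption_times_s : times in seconds for the disruptive discharges
                                        in which an alarm was raised
    n_non_disruptive                  : number of non-disruptive discharges analysed
    n_false_alarms                    : alarms raised in non-disruptive discharges
    """
    warning_times = np.asarray(disruption_times_s) - np.asarray(alarm_times_s)
    return {
        "mean_warning_time_ms": 1e3 * warning_times.mean(),
        "std_warning_time_ms": 1e3 * warning_times.std(),
        "false_alarm_rate_percent": 100.0 * n_false_alarms / n_non_disruptive,
    }
```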


Page 23

Summary

• Machine learning tools can be used to recognise, with a single predictor, the need to trigger either avoidance or mitigation alarms

• The discrimination between avoidance and mitigation alarms is crucial

• The development of a three class predictor with only two types of training examples has been accomplished through the theory of conformal predictors

• Only disruptive and non-disruptive examples are necessary

• Preliminary application to JET data to distinguish between avoidance and mitigation is promising

• More than 1/3 of the alarms are avoidance ones

• The triggering of specific avoidance or mitigation techniques requires additional inputs to decide

• The application provided is an initial approach, to be analysed in more detail and not depending on APODIS at all

• Other types of confidence classifiers (for example, probabilistic predictors) have to be developed

• Machine learning methods can be used in a variety of approaches for combining avoidance and mitigation predictions

• The essential point is to find proper features to differentiate possible states

• Classical approaches (large datasets for training are required)

• From scratch approaches

• Anomaly detection approaches

2nd IAEA TM 2017. MIT PSFC. Cambridge, MA, USA.