
Page 1:

Explainable Artificial Intelligence

Nick Bassiliades, Professor, School of Informatics

Aristotle University of Thessaloniki, Greece

[email protected]

Talk @ Rules: Logic and Applications, 17/12/2019

Interrogating the AI systems

Page 2:


• Professor, School of Informatics, Aristotle University of Thessaloniki, Greece

• Scientific specialization: Knowledge Systems

• Knowledge Representation & Reasoning

• Rule-based systems, Logic programming, Defeasible Reasoning, Knowledge-based / expert systems

• Semantic Web

• Ontologies, Linked Open Data, Semantic Web Services

• Multi-agent systems

• Reputation/trust, knowledge-based interaction (argumentation, negotiation, brokering)

• Applications on e-Learning, e-Government, e-Commerce, Electric Vehicles

A few words about the speaker

Nick Bassiliades

(Νικόλαος Βασιλειάδης) http://intelligence.csd.auth.gr/people/bassiliades

Page 3:

Ioannis Mollas, PhD student

• Explainable AI (mostly works on explainable Machine Learning)

• Credits for most of this talk, including the fancy looks

• Funded by the project AI4EU (A European AI On-Demand Platform and Ecosystem)

• EU Horizon 2020 research and innovation programme under grant agreement No 825619

• https://www.ai4eu.eu/


Credits

Page 4:

A few words about my institution

• Aristotle University of Thessaloniki, Greece

• Largest (?) University in Greece and South-East Europe

• Since 1925, 41 Schools, ~1.8K faculty, ~45K students

• School of Informatics

• Since 1992, 30 faculty, 3 departments, 6 research labs, ~1500 undergraduate students, ~180 MSc students, ~110 PhD students, ~160 PhD graduates, >5000 pubs

• Intelligent Systems Laboratory (http://intelligence.csd.auth.gr)

• 4 faculty, 18 PhD students, 4 post-graduate affiliates, 19 PhD graduates

• Research on Artificial Intelligence, Machine Learning / Data Mining, Knowledge Representation & Reasoning / Semantic Web, Planning, Multi-Agent Systems

• >470 publications, >40 projects

Page 5:

Our Team

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825619.

Ioannis Vlahavas, Professor

Nick Bassiliades, Professor

Grigorios Tsoumakas, Assistant Professor

Ioannis Mollas, PhD Student

Page 6:

BUILDING THE EUROPEAN AI ON-DEMAND PLATFORM

• European Union's landmark Artificial Intelligence project

• AI ecosystem containing knowledge, algorithms, tools and resources

• 80 partners, 21 countries

• 3-year project

• €20m budget

About AI4EU

Page 7:

BUILDING THE EUROPEAN AI ON-DEMAND PLATFORM

• Open and sustainable AI On-Demand-Platform

• Unite stakeholders via high-profile conferences and virtual events

• Develop a Strategic Agenda for European AI

• Establish an Ethics Observatory to ensure development of human-centred AI

• Roll out of €3m in Cascade Funding

Goals: Create a leading collaborative European AI platform

Page 8:

Work Packages

WP1–WP9 (diagram): Platform Design and Implementation; Management and Enrichment of the European AI On-Demand Platform; Ecosystem Creation and Development; Promoting European Ethical, Legal, Cultural and Socio-Economic Values for AI; Pilot Experiments with the Platform; Filling AI Technological Gaps (WP7); Technology Transfer Program; European AI Research and Innovation Roadmap; Project Management

Page 9:

BUILDING THE EUROPEAN AI ON-DEMAND PLATFORM

This WP will reinforce European excellence and its worldwide leading position in major AI research and application domains, through research and innovation efforts that fill important technology gaps.

Objectives:

1. Develop new AI tools and techniques (to be included in the AI4EU Platform)

2. Consolidate and strengthen excellence in AI in the EU

WP7 Filling AI Technological Gaps

Page 10:

Human-Centred AI (diagram): Verifiable, Collaborative, Integrative, Physical, Explainable

Page 11:

Task 7.1 Explainable AI

Explainable, Interpretable, Comprehensible, Intelligible, Justifiable, Understandable

Definition: An AI system should allow humans to understand the reasons behind its recommendations or decisions. It should be possible to know the data, rationale and arguments that lead to a result, to question them, and to correct them.

Tasks:
✓ Community creation and aggregation activities
  ❖ Summer school covering explainable AI
  ❖ Workshops and tutorials
  ❖ …
✓ Research activities
  ❖ Survey on explainable AI
  ❖ Methodology and software components
  ❖ …

Partners: AUT, ORU, TRT, UCC, USO

Intelligent Systems, Department of Informatics

Aristotle University of Thessaloniki

Page 12:

Back to XAI


Page 13:

XAI: Explainable Artificial Intelligence

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts.


Page 14:

Outline

❑ Why XAI?

❑ Interpretable Machine Learning

✓ Transparent and Black Box Models

✓ Explainable ML

✓ Types of Data

✓ Opening the Black Box

✓ Implementations


Page 15:

Where we need Explainable AI:

• Health Care & Personalized Medicine
• Self-Driving Cars & Smart Cities
• Credit Scores & Financial Information
• Robotic Assistants
• GDPR Regulation: "Right to Explanation"
• User's Trust, Misclassification, Result Understanding
• Etc.

"At the individual level, designers have both ethical and legal responsibilities to provide such justification for decisions that could result in death, financial loss, or denial of parole." (Don Monroe, 2018 [3])

Page 16:

Interpretable Machine Learning

"A way to present the result of a black box model in human-readable terms" [1]

Each community addresses this meaning from a different perspective.


Page 17:

Recognized Interpretable Models

Or Transparent Boxes:

❖ Decision Trees

❖ Rule Based Systems

❖ Decision Tables

❖ Linear Models

❖ Decision Sets

❖ K-NN


Page 18:

Black Box Predictor... Means?

A black box predictor is the result of a machine learning algorithm whose internals are:

• known to its creator, or

• unknown to its creator,

and uninterpretable to other people.

Some well-known black box models are: Support Vector Machines, Neural Networks, Tree Ensembles, Deep Neural Networks and Non-Linear Models.


Page 19:

Dimensions of Interpretability

1. Is it global or local?

2. Is there a time limitation?

3. What is the nature of the user's expertise?

4. What is the shape of the explanation?


Page 20:

Desired Features of an Explainable Model

• Accuracy: measuring the accuracy and other metrics of the model

• Interpretability: measuring the comprehensibility of the model

• Fidelity: measuring how well the explanation imitates the black box

• Responsiveness: measuring the time needed to explain an instance

• Other: fairness, privacy, usability, reliability, robustness and scalability

Page 21:

Shapes of Explanations

Textual form (rules):
IF Proline>=990.0 THEN Wine=1
IF Color intensity<=3.85 AND Color intensity<=3.52 THEN Wine=2
IF Flavanoids<=1.41 AND Proline>=470.0 THEN Wine=3
IF Proline>=680.0 AND Alcohol>=12.93 THEN Wine=1
IF Hue>=0.69 THEN Wine=2
IF TRUE THEN Wine=2

Visual form: "You were classified as cat because you have pointy ears :)"

Graphical form: (chart omitted)

Dialectical form (argumentation dialogue, with attack relations between the arguments):
"Why did you predict that?"
"Because of $Arg0$."
"But this $Arg1$ is valid too!"
"That's true, but $Arg2$ defends $Arg0$!"

Page 22:

Comprehensibility Measure [2,6]

✓ Human Centered Evaluation

✓ Number of regions (parts of the model)

✓ Number of conditions (in a rule)

✓ Number of non-zero weights (in linear models)

✓ Depth of the tree (in a decision tree)

Simplification methods such as pruning are used to enhance comprehensibility and to avoid overfitting.

But do we want to avoid overfitting when explaining?
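These proxies are easy to compute programmatically. As a small illustration (a sketch in Python, not from the talk; scikit-learn's `get_depth` and `get_n_leaves` are standard estimator methods, and the wine data mirrors the deck's running example):

```python
from sklearn.datasets import load_wine
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)

# A pruned tree (small max_depth) is more comprehensible but may underfit;
# an unpruned tree fits, and possibly overfits, the data it explains.
for depth in (3, None):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    print(f"max_depth={depth}: depth={tree.get_depth()}, "
          f"leaves={tree.get_n_leaves()}")
```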

Page 23:

Comprehensibility Measure [2,6]


If f1 > 20 and f2 <= 123 and f4 > 0.5 and f4 <= 1 and f5 <= 0 and f6 > 9 and f7 <= 10 and f8 <= 1 and f8 > 0.7 and f9 <= 19 and f10 > 10 and f10 <= 100 and f11 <= 1.23 and f12 > 12.1452 and f12 <= 15 and …….. then "class A"

You were classified as "class A" because 0.5 < f4 <= 1 and f9 <= 19.

Which one would you prefer?

Page 24:

Types of Data

Tabular Data Image Data Text Data


Page 25:

Opening the Black Box (overview diagram):

• Approaches: Model Explanation, Outcome Explanation, Model Inspection
• Explanation for a fixed dataset, or capable of explaining new examples
• Reverse engineering the black box vs. replacing it with a transparent box
• Agnostic explanators?

Page 26:

Agnostic Explanator?

• Can explain any kind of black box, in most of the papers

• Most of the time it targets a specific type of data (tabular, images, texts, other), but rarely it can be agnostic to the type of data too

Some advantages of agnostic explanators:

• They can explain models even without knowledge of the dataset

• They can explain models remotely

Page 27:

Reverse Engineering on the Black Box

Model Explanation: aims at understanding the overall logic behind the black box. Provides a global explanation through a transparent box. Examples: surrogate-method explanators, knowledge distillation.

Outcome Explanation: aims at understanding the outcome for a single instance. Provides a local explanation through a transparent box. Examples: LIME, Anchors.

Model Inspection: provides a visual or textual representation of some specific properties of the black box. Examples: Permutation Importance, PDP, ICE, SHAP.

Page 28:

Reverse Engineering on the Black Box

Surrogate or Meta-Learner Method [7]:

1. Train the black box on the train data (X, y).
2. Use the black box to predict labels for the train and test data (X, X_test), producing the oracle dataset (X, y_predicted, X_test, y_predicted_test).
3. Train a transparent model on the oracle dataset.
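A minimal sketch of these three steps with scikit-learn (the random-forest black box, the wine data and all variable names are illustrative assumptions, not from the talk):

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

# 1. Train the black box on the real labels.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# 2. Build the oracle dataset: the black box labels the train and test inputs.
X_oracle = np.vstack([X_train, X_test])
y_oracle = black_box.predict(X_oracle)

# 3. Train a transparent surrogate on the oracle labels, not the true ones.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_oracle, y_oracle)
print(export_text(surrogate))

# Fidelity: how well the surrogate imitates the black box.
print("fidelity:", surrogate.score(X_oracle, y_oracle))
```

The exported tree text is the global explanation, and the surrogate's accuracy on the oracle labels is its fidelity to the black box.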

Page 29:

Use of Scikit-Learn for explainable models (https://scikit-learn.org): a transparent (TP) decision tree or linear model as a global or local surrogate method.

Use of Orange for explainable models (https://github.com/biolab/orange3): a transparent (TP) CN2 rule-based learner as a global or local surrogate method, e.g.:

IF Proline>=990.0 THEN Wine=1
IF Color intensity<=3.85 AND Color intensity<=3.52 THEN Wine=2

Page 30:

LIME [5]: Local Interpretable Model-agnostic Explanations (https://github.com/marcotcr/lime). State of the art.

Pipeline: for a new instance, build a synthetic neighbourhood (e.g. by randomly removing words from the instance), audit the black box P on each neighbour, and fit a transparent linear model to the results.

Mathematical formulation: ξ(x) = argmin_{g∈G} L(f, g, π_x) + Ω(g)
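For reference, a minimal usage sketch with the lime package (the toy classifier, texts and class names are illustrative assumptions, not from the talk):

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A toy text classifier standing in for the black box f.
texts = ["great wine, rich flavour", "flat and sour",
         "lovely colour and aroma", "bitter aftertaste"]
labels = [1, 0, 1, 0]
black_box = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

# LIME perturbs the instance (removing words), audits the black box through
# predict_proba, and fits a sparse linear model g weighted by proximity pi_x.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "rich flavour but a bitter aftertaste",
    black_box.predict_proba,
    num_features=4,
)
print(explanation.as_list())  # (word, weight) pairs of the local explanation
```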

Page 31:

LIME [5] (continued)

Disadvantage on sparse data! Why? LIME can only generate 2^n different neighbours, where n is the number of non-zero values.

We proposed LioNets [9]…

Page 32:

Creating neighbours in the abstract (latent) space, using a decoder trained with the neural network as the encoder, leads to better, bigger and more representative neighbourhoods, and thus to better explanations.


Page 33:

LioNets Architecture

Instance → Deep Neural Network Classifier → Prediction

LioNets tries to interpret a neural network's prediction; thus, it is a model-specific outcome explanator.

Page 34:

LioNets Architecture (pipeline):

1. Take the representation [a,b,c] of the instance in the penultimate layer of the network.
2. Generate a neighbourhood on the reduced dimensions, e.g. [a',b,c], [a,b',c], [a,b,c'], [0,b,c], [a,0,c], [a,b,0], …, [0,0,c], [0,b,0], [a,0,0].
3. Audit the decoder to map each neighbour back to the original dimensions.
4. Audit the network for each neighbour's label and add the pair to the oracle dataset.
5. Train a transparent model on the oracle dataset and extract explanations from its trained weights (e.g. a bar chart of the most important features).
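A rough sketch of this loop in Python (everything here is hypothetical: `encoder`, `decoder` and `head` stand for Keras-style models covering, respectively, input to penultimate layer, penultimate layer back to input, and penultimate layer to output; the real implementation accompanies the paper [9]):

```python
import numpy as np
from sklearn.linear_model import Ridge

def lionets_explain(instance, encoder, decoder, head,
                    n_neighbours=500, scale=0.1):
    """Sketch: explain one prediction via neighbours in the penultimate layer."""
    z = encoder.predict(instance[None, :])            # reduced representation

    # Perturb the low-dimensional representation instead of the raw input.
    noise = np.random.normal(0.0, scale, size=(n_neighbours, z.shape[1]))
    z_neighbours = z + noise

    # Audit the decoder to map neighbours back to the original dimensions,
    # and audit the classifier head for their labels (the oracle dataset).
    x_neighbours = decoder.predict(z_neighbours)
    y_neighbours = head.predict(z_neighbours).ravel()

    # Train a transparent model on the oracle dataset; its trained weights
    # are the explanation for this instance.
    transparent = Ridge().fit(x_neighbours, y_neighbours)
    return transparent.coef_
```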

Page 35:

Reverse Engineering on the Black Box

Permutation Importance [4]

Train data (X, y):

Inst.  F1  F2  F3    FN    Target
x1     0   1   blue  cold  1
x2     1   1   red   warm  0
x3     0   0   blue  cold  1

After training the black box, randomize one feature at a time and audit the new data, e.g. permuting FN:

Inst.  F1  F2  F3    FN    Predicted
x1     0   1   blue  warm  0
x2     1   1   red   cold  1
x3     0   0   blue  cold  1

or permuting F3:

Inst.  F1  F2  F3    FN    Predicted
x1     0   1   red   cold  1
x2     1   1   blue  warm  0
x3     0   0   blue  cold  1

The performance drop caused by permuting a feature gives its weight/importance. (Bar chart: feature importances for F1, F2, F3, FN.)
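scikit-learn ships this procedure as `sklearn.inspection.permutation_importance`; a minimal sketch, with the random-forest black box and wine data as illustrative assumptions:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column in turn and measure how much the score drops:
# a large drop means the black box relies heavily on that feature.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```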

Page 36:

Reverse Engineering on the Black Box

PDP (Partial Dependence Plot) [4]

Train data (X, y):

Inst.  F1  F2  F3    FN    Target
x1     0   1   blue  cold  1
x2     1   1   red   warm  0
x3     0   0   blue  cold  1

After training the black box, set feature N to one specific value for every instance and audit, e.g. FN = "cold":

Inst.  F1  F2  F3    FN    Predicted
x1     0   1   blue  cold  1
x2     1   1   red   cold  1
x3     0   0   blue  cold  1

and FN = "warm":

Inst.  F1  F2  F3    FN    Predicted
x1     0   1   blue  warm  0
x2     1   1   red   warm  0
x3     0   0   blue  warm  1

The mean prediction per value shows the change of the target for a specific value of feature N. (Plot: PDP for feature N over the values "cold" and "warm"; the y-axis is the mean prediction.)
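The same audit is easy to write by hand (a sketch; `partial_dependence_1d` and `black_box` are hypothetical names, and scikit-learn offers `sklearn.inspection.partial_dependence` for a production version): fix the feature to each value of interest for every row, audit the black box, and average.

```python
import numpy as np

def partial_dependence_1d(black_box, X, feature, values):
    """Mean prediction when `feature` is forced to each value in `values`."""
    means = []
    for v in values:
        X_mod = X.copy()
        X_mod[:, feature] = v                          # overwrite for every row
        means.append(black_box.predict(X_mod).mean())  # audit and average
    return np.array(means)

# Example: PDP of feature 0 over a grid of its observed range.
# grid = np.linspace(X[:, 0].min(), X[:, 0].max(), num=20)
# pdp = partial_dependence_1d(black_box, X, feature=0, values=grid)
```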

Page 37:

Reverse Engineering on the Black Box

SHAP / Shapley Values [4]

Train data (X, y):

Inst.  F1  F2  F3    FN    Target
x1     0   1   blue  cold  1
x2     1   1   red   warm  0
x3     0   0   blue  cold  1

To attribute the prediction for x1, the black box is audited on coalitions of x1's feature values imposed on the other instances, e.g. fixing FN = 'cold':

Inst.  F1  F2  F3    FN    Predicted
x1     0   1   blue  cold  1
x2     1   1   red   cold  0
x3     0   0   blue  cold  1

or fixing F3 = 'blue':

Inst.  F1  F2  F3    FN    Predicted
x1     0   1   blue  cold  1
x2     1   1   blue  warm  1
x3     0   0   blue  cold  1

as well as on partial coalitions where the remaining feature values are unknown:

Inst.  F1  F2  F3    FN    Predicted
x1     0   1   blue  cold  1
x2     ?   ?   ?     cold  ?
x3     ?   ?   ?     cold  ?

Inst.  F1  F2  F3    FN    Predicted
x1     0   1   blue  cold  1
x2     ?   ?   blue  ?     ?
x3     ?   ?   blue  ?     ?

(SHAP plot: the Shapley value of each feature value of x1, i.e. F1 = 0, F2 = 1, F3 = 'blue', FN = 'cold'.)
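In practice the shap package computes these values; a minimal sketch (the tree-ensemble model and wine data are illustrative assumptions, not from the talk):

```python
import shap
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

X, y = load_wine(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles;
# shap.KernelExplainer is the model-agnostic fallback for other models.
explainer = shap.TreeExplainer(black_box)
shap_values = explainer.shap_values(X)

# Each row decomposes one prediction into per-feature contributions
# relative to the model's expected (base) value.
shap.summary_plot(shap_values, X)  # a global view built from local values
```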

Page 38:

Reverse Engineering on the Black Box

SHAP / Shapley Value examples [4]: it can do anything!

Model inspection via feature importance (left) and partial dependence plots (right), computed using SHAP values.

Page 39:

Reverse Engineering on the Black Box

SHAP / Shapley Value examples [4]: it can do anything! Here: outcome explanation.

Page 40:

Reverse Engineering on the Black Box

Knowledge Distillation [8]: using transparent boxes to mimic black boxes' accuracy.

1. Train a high-accuracy deep neural network as the "teacher" on the train data (X, y).
2. Annotate unlabeled data with the teacher.
3. Use the new train data, built from the labeled data and the unlabeled data annotated by the deep neural network, to train the "student": an explainable model.
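A compact sketch of the recipe (the MLP teacher, the jittered "unlabeled" pool and the tree student are all illustrative assumptions; Hinton et al. [8] distill into a smaller network, whereas here the student is a transparent tree):

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)

# 1. Train a high-accuracy "teacher" (here a neural network).
teacher = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0).fit(X, y)

# 2. Annotate extra data with the teacher. A real unlabeled pool would be
#    used here; jittered copies of X merely simulate one for the sketch.
rng = np.random.default_rng(0)
X_unlabeled = X + rng.normal(0, 0.05 * X.std(axis=0), size=X.shape)
X_student = np.vstack([X, X_unlabeled])
y_student = teacher.predict(X_student)

# 3. Train a transparent "student" to mimic the teacher's behaviour.
student = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_student, y_student)
print("fidelity to teacher:", student.score(X_student, y_student))
```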

Page 41:

Existing XML (eXplainable ML) Libraries: Scikit-Learn, Skater, SHAP, Eli5, LIME, LioNets, iml + others

Page 42:

Bibliography

1. Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Dino Pedreschi, Fosca Giannotti. 2018. A Survey of Methods for Explaining Black Box Models.

2. Amina Adadi and Mohammed Berrada. 2018. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI).

3. Don Monroe. 2018. AI, Explain Yourself.

4. Christoph Molnar. 2018. Interpretable Machine Learning.

5. Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier.

6. Johan Huysmans, Karel Dejaeger, Christophe Mues, Jan Vanthienen, Bart Baesens. 2011. An Empirical Evaluation of the Comprehensibility of Decision Table, Tree and Rule Based Predictive Models.

7. Pedro Domingos. 1998. Knowledge Discovery via Multiple Models.

8. Geoffrey Hinton, Oriol Vinyals, Jeff Dean. 2015. Distilling the Knowledge in a Neural Network.

9. Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas. 2019. LioNets: Local Interpretation of Neural Networks through Penultimate Layer Decoding.

10. Kristijonas Čyras, Ken Satoh, Francesca Toni. 2016. Abstract Argumentation for Case-Based Reasoning.

11. Kristijonas Čyras, Ken Satoh, Francesca Toni. 2016. Explanation for Case-Based Reasoning via Abstract Argumentation.

12. Martin Možina, Jure Žabkar, Ivan Bratko. 2007. Argument Based Machine Learning.

Page 43:

Before we go…


Some say: the future is model-agnostic explanators.

Others say: the future is new models, interpretable by design.

We say: model-specific implementations.

Page 44:

The End

Explainable Artificial Intelligence

Nick Bassiliades

[email protected]

"Magic model on the core, explain yourself in front of all"

Interrogating the AI systems