Page 1

Explainable AI: From Theory to Motivation, Applications and Challenges

Lecturers: Fosca Giannotti (ISTI-CNR), Dino Pedreschi (University of Pisa)

Contributors: S. Rinzivillo, R. Guidotti (ISTI-CNR)

A. Monreale, F. Turini, S. Ruggieri (University of Pisa)

http://ai4eu.org/   http://www.sobigdata.eu/   http://www.humane-ai.eu/

XAI ERC-AdG-2019 "Science & technology for the eXplanation of AI decision making"

Page 2

06 September 2019, EuADS Summer School 2019 - Explainable Data Science, Lecture on Explainable AI

Page 3

What is "Explainable AI"?

Explainable AI explores and investigates methods that produce or complement AI models in order to make the internal logic and the outcomes of the algorithms accessible and interpretable, that is, understandable by humans.

Page 4

What is "Explainable AI"?

Explicability, understood as incorporating both intelligibility ("how does it work?", for non-experts such as patients or business customers, and for experts such as product designers or engineers) and accountability ("who is responsible for").

• 5 core principles for ethical AI:
– beneficence, non-maleficence, autonomy, and justice
– a new principle is needed in addition: explicability

[Floridi et al. 2019]

Page 5

Material based on (our) XAI Tutorial at AAAI 2019
https://xaitutorial2019.github.io/

Disclaimer:
• As MANY interpretations as research areas (check out work in the Machine Learning vs. Reasoning communities)
• Not an exhaustive survey! Focus is on some promising approaches
• Massive body of literature (growing in time)
• Multi-disciplinary (AI – all areas, HCI, social sciences)
• Many domain-specific works hard to uncover
• Many papers do not include the keywords explainability/interpretability!

Page 6

Motivating Example (1)
• Criminal Justice
• People wrongly denied
• Recidivism prediction
• Unfair police dispatch

[Rudin 2018]
propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
aclu.org/other/statement-concern-about-predictive-policing-aclu-and-16-civil-rights-privacy-racial-justice
nytimes.com/2017/06/13/opinion/how-computers-are-harming-criminal-justice.html

Page 7

Motivating Example (2)
• Finance:
– Credit scoring, loan approval
– Insurance quotes

community.fico.com/s/explainable-machine-learning-challenge
https://www.ft.com/content/e07cee0c-3949-11e7-821a-6027b8a20f23

Page 8

Motivating Example (3)
• Healthcare
– AI as a third-party actor in the physician-patient relationship
– Learning must be done with the available data: we cannot randomize the care given to patients!
– Must validate models before use.

[Caruana et al. 2015, Holzinger et al. 2017, Magnus et al. 2018]
Patricia Hannon, https://med.stanford.edu/news/all-news/2018/03/researchers-say-use-of-ai-in-medicine-raises-ethical-questions.html

Page 9

Motivation (4)
• Critical Systems

[Caruana et al. 2015, Holzinger et al. 2017, Magnus et al. 2018]

Page 10

The Need for Explanation
• Critical systems / Decisive moments
• Human factor: human decision-making is affected by greed, prejudice, fatigue, poor scalability.
• Bias: algorithmic decision-making is on the rise.
– More objective than humans?
– Potentially discriminative
– Opaque
– Information and power asymmetry
• High-stakes scenarios = ethical problems!

[Lepri et al. 2018]

Page 11

Right of Explanation

Since 25 May 2018, the GDPR establishes a right for all individuals to obtain "meaningful explanations of the logic involved" when "automated (algorithmic) individual decision-making", including profiling, takes place.

Page 12

Tutorial Outline (1)
• Explanation in AI
– Explanations in different AI fields
– The Role of Humans
– Evaluation Protocols & Metrics
• Explainable Machine Learning
– What is a Black Box?
– Interpretable, Explainable, and Comprehensible Models
– Open the Black Box Problems
• Applications

Page 13

References

[Caruana et al. 2015] Caruana, Rich, et al. "Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission." Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2015.
[Gunning 2017] Gunning, David. "Explainable artificial intelligence (XAI)." Defense Advanced Research Projects Agency (DARPA), nd Web (2017).
[Holzinger et al. 2017] Andreas Holzinger, Bernd Malle, Peter Kieseberg, Peter M. Roth, Heimo Müller, Robert Reihs, and Kurt Zatloukal. Towards the augmented pathologist: Challenges of explainable AI in digital pathology. arXiv:1712.06657, 2017.
[Lepri et al. 2018] Lepri, Bruno, et al. "Fair, Transparent, and Accountable Algorithmic Decision-making Processes." Philosophy & Technology (2017): 1-17.
[Floridi et al. 2019] Floridi, Luciano, and Josh Cowls. "A Unified Framework of Five Principles for AI in Society." Harvard Data Science Review, 1, 2019.

Page 14

Explanation in AI

Page 15

Overview of explanation in different AI fields (1)
• Machine Learning

Auto-encoder
Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin: Deep Learning for Case-Based Reasoning Through Prototypes: A Neural Network That Explains Its Predictions. AAAI 2018: 3530-3537

Surrogate Model
Mark Craven, Jude W. Shavlik: Extracting Tree-Structured Representations of Trained Networks. NIPS 1995: 24-30

Feature Importance, Partial Dependence Plot, Individual Conditional Expectation

Interpretable Models:
• Linear regression
• Logistic regression
• Decision Tree
• Naive Bayes
• KNNs

Page 16

Overview of explanation in different AI fields (2)
• Computer Vision

Uncertainty Map
Alex Kendall, Yarin Gal: What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? NIPS 2017: 5580-5590

Saliency Map
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian J. Goodfellow, Moritz Hardt, Been Kim: Sanity Checks for Saliency Maps. NeurIPS 2018: 9525-9536

Visual Explanation
Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell: Generating Visual Explanations. ECCV (4) 2016: 3-19

Page 17

Overview of explanation in different AI fields (3)
• Game Theory

Shapley Additive Explanation
Scott M. Lundberg, Su-In Lee: A Unified Approach to Interpreting Model Predictions. NIPS 2017: 4768-4777

Page 18

Overview of explanation in different AI fields (4)
• Search and Constraint Satisfaction

Conflict resolution
Barry O'Sullivan, Alexandre Papadopoulos, Boi Faltings, Pearl Pu: Representative Explanations for Over-Constrained Problems. AAAI 2007: 323-328

Constraint relaxation
Ulrich Junker: QUICKXPLAIN: Preferred Explanations and Relaxations for Over-Constrained Problems. AAAI 2004: 167-172

Page 19

Overview of explanation in different AI fields (5)
• Knowledge Representation and Reasoning

Explaining Reasoning (through Justification), e.g., Subsumption
Deborah L. McGuinness, Alexander Borgida: Explaining Subsumption in Description Logics. IJCAI (1) 1995: 816-821

Diagnosis Inference
Alban Grastien, Patrik Haslum, Sylvie Thiébaux: Conflict-Based Diagnosis of Discrete Event Systems: Theory and Practice. KR 2012

Abduction Reasoning (in Bayesian Networks)
David Poole: Probabilistic Horn Abduction and Bayesian Networks. Artif. Intell. 64(1): 81-129 (1993)

Page 20

Overview of explanation in different AI fields (6)
• Multi-agent Systems

Agent Strategy Summarization
Ofra Amir, Finale Doshi-Velez, David Sarne: Agent Strategy Summarization. AAMAS 2018: 1203-1207

Explanation of Agent Conflicts and Harmful Interactions
Katia P. Sycara, Massimo Paolucci, Martin Van Velsen, Joseph A. Giampapa: The RETSINA MAS Infrastructure. Autonomous Agents and Multi-Agent Systems 7(1-2): 29-48 (2003)

Explainable Agents
Joost Broekens, Maaike Harbers, Koen V. Hindriks, Karel van den Bosch, Catholijn M. Jonker, John-Jules Ch. Meyer: Do You Get It? User-Evaluated Explainable BDI Agents. MATES 2010: 28-39

Page 21

Overview of explanation in different AI fields (7)
• NLP

LIME for NLP
Marco Túlio Ribeiro, Sameer Singh, Carlos Guestrin: "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016: 1135-1144

Explainable NLP
Hui Liu, Qingyu Yin, William Yang Wang: Towards Explainable NLP: A Generative Explanation Framework for Text Classification. CoRR abs/1811.00196 (2018)

Fine-grained explanations are in the form of:
• texts in a real-world dataset;
• numerical scores

Page 22

Overview of explanation in different AI fields (8)
• Planning and Scheduling

XAI Plan
Rita Borgo, Michael Cashmore, Daniele Magazzeni: Towards Providing Explanations for AI Planner Decisions. CoRR abs/1810.06338 (2018)

Human-in-the-loop Planning
Maria Fox, Derek Long, Daniele Magazzeni: Explainable Planning. CoRR abs/1709.10256 (2017)

Page 23

Overview of explanation in different AI fields (9)
• Robotics

Narration of Autonomous Robot Experience
Stephanie Rosenthal, Sai P. Selvaraj, and Manuela Veloso. Verbalization: Narration of autonomous robot experience. In IJCAI, pages 862-868. AAAI Press, 2016.

From Decision Tree to human-friendly information
Raymond Ka-Man Sheh: "Why Did You Do That?" Explainable Intelligent Robots. AAAI Workshops 2017

Daniel J. Brooks et al. 2010. Towards State Summarization for Autonomous Robots. In AAAI Fall Symposium: Dialog with Robots, Vol. 61. 62.

Page 24

Summarizing: the Need to Explain comes from …
• User Acceptance & Trust [Lipton 2016, Ribeiro 2016, Weld and Bansal 2018]
• Legal
– Conformance to ethical standards, fairness
– Right to be informed [Goodman and Flaxman 2016, Wachter 2017]
– Contestable decisions
• Explanatory Debugging [Kulesza et al. 2014, Weld and Bansal 2018]
– Flawed performance metrics
– Inadequate features
– Distributional drift
• Increase Insightfulness [Lipton 2016]
– Informativeness
– Uncovering causality [Pearl 2009]

Page 25

More ambitiously, explanation as Machine-Human Conversation
– Humans may have follow-up questions
– Explanations cannot answer all users' concerns

[Weld and Bansal 2018]

Page 26

Page 27

Oxford Dictionary of English

Page 28

Role-based Interpretability
• End users: "Am I being treated fairly?", "Can I contest the decision?", "What could I do differently to get a positive outcome?"
• Engineers, data scientists: "Is my system working as designed?"
• Regulators: "Is it compliant?"

An ideal explainer should model the user background.

[Tomsett et al. 2018, Weld and Bansal 2018, Poursabzi-Sangdeh 2018, Mittelstadt et al. 2019]

"Is the explanation interpretable?" → "To whom is the explanation interpretable?"
No universally interpretable explanations!

Page 29

Evaluation: Interpretability as a Latent Property
• Not directly measurable!
• Rely instead on measurable outcomes:
– Is it useful to individuals?
– Can users estimate what a model will predict?
– How much do humans follow the predictions?
– How well can people detect a mistake?
• No established benchmarks
• How to rank interpretable models? Different degrees of interpretability?

Page 30

Explainable AI Systems
• Transparent-by-design systems
• Post-hoc Explanation (black-box explanation) systems

[Diagram: a black-box AI system plus an explanation sub-system maps input data to a prediction y together with an explanation; a transparent system maps input data to an interpretable prediction y on its own.]

[Mittelstadt et al. 2019]

Page 31

(Some) Desired Properties of Explainable AI Systems
• Informativeness
• Low cognitive load
• Usability
• Fidelity
• Robustness
• Non-misleading
• Interactivity / Conversational

[Lipton 2016, Doshi-Velez and Kim 2017, Rudin 2018, Weld and Bansal 2018, Mittelstadt et al. 2019]

Page 32

(thm) XAI is interdisciplinary
• For millennia, philosophers have asked what constitutes an explanation, what the function of explanations is, and what their structure is.

[Tim Miller 2018]

Page 33

References

[Tim Miller 2018] Tim Miller. "Explanation in Artificial Intelligence: Insights from the Social Sciences."
[Alvarez-Melis and Jaakkola 2018] Alvarez-Melis, David, and Tommi S. Jaakkola. "On the Robustness of Interpretability Methods." arXiv preprint arXiv:1806.08049 (2018).
[Chen and Rudin 2018] Chaofan Chen and Cynthia Rudin. "An optimization approach to learning falling rule lists." In Artificial Intelligence and Statistics (AISTATS), 2018.
[Doshi-Velez and Kim 2017] Doshi-Velez, Finale, and Been Kim. "Towards a rigorous science of interpretable machine learning." arXiv preprint arXiv:1702.08608 (2017).
[Goodman and Flaxman 2016] Goodman, Bryce, and Seth Flaxman. "European Union regulations on algorithmic decision-making and a 'right to explanation'." arXiv preprint arXiv:1606.08813 (2016).
[Freitas 2014] Freitas, Alex A. "Comprehensible classification models: a position paper." ACM SIGKDD Explorations Newsletter 15.1 (2014): 1-10.
[Gunning 2017] Gunning, David. "Explainable artificial intelligence (XAI)." Defense Advanced Research Projects Agency (DARPA), nd Web (2017).
[Hind et al. 2018] Hind, Michael, et al. "Increasing Trust in AI Services through Supplier's Declarations of Conformity." arXiv preprint arXiv:1808.07261 (2018).
[Kulesza et al. 2014] Kulesza, Todd, et al. "Principles of explanatory debugging to personalize interactive machine learning." Proceedings of the 20th International Conference on Intelligent User Interfaces. ACM, 2015.
[Lipton 2016] Lipton, Zachary C. "The mythos of model interpretability." ICML Workshop on Human Interpretability in Machine Learning. 2016.
[Mittelstadt et al. 2019] Mittelstadt, Brent, Chris Russell, and Sandra Wachter. "Explaining explanations in AI." arXiv preprint arXiv:1811.01439 (2018).
[Poursabzi-Sangdeh 2018] Poursabzi-Sangdeh, Forough, et al. "Manipulating and measuring model interpretability." arXiv preprint arXiv:1802.07810 (2018).
[Rudin 2018] Rudin, Cynthia. "Please Stop Explaining Black Box Models for High Stakes Decisions." arXiv preprint arXiv:1811.10154 (2018).
[Wachter et al. 2017] Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. "Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation." International Data Privacy Law 7.2 (2017): 76-99.
[Weld and Bansal 2018] Weld, D., and Gagan Bansal. "The challenge of crafting intelligible intelligence." Communications of the ACM (2018).
[Yin 2012] Lou, Yin, Rich Caruana, and Johannes Gehrke. "Intelligible models for classification and regression." Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2012.

Page 34

Explainable Machine Learning

Page 35

Bias in Machine Learning

Page 36

COMPAS recidivism: bias against black defendants

Page 37

No Amazon free same-day delivery for restricted minority neighborhoods

Page 38

The background bias

Page 39

Interpretable ML Models

Page 40

Recognized Interpretable Models
• Linear Model
• Rules
• Decision Tree
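As a minimal illustration of why these models are considered interpretable, the sketch below (assuming scikit-learn is available; the Iris data and the shallow tree are placeholders, not the tutorial's own example) trains a small decision tree and prints it as human-readable rules.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder data and a tree kept shallow on purpose so it stays readable.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# export_text renders the learned splits as an indented list of rules.
print(export_text(tree, feature_names=list(iris.feature_names)))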

Page 41

Black Box Model

A black box is a data mining / machine learning (DMML) model whose internals are either unknown to the observer, or known but uninterpretable by humans.

- Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), 93.

Page 42

Complexity
• Opposed to interpretability.
• Related only to the model, not to the training data (which is unknown).
• Generally estimated with a rough approximation related to the size of the interpretable model:
– Linear Model: number of non-zero weights in the model.
– Rule: number of attribute-value pairs in the condition.
– Decision Tree: estimating the complexity of a tree can be hard (e.g., number of nodes or leaves, depth).

- Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier. KDD.
- Houtao Deng. 2014. Interpreting tree ensembles with inTrees. arXiv preprint arXiv:1408.5456.
- Alex A. Freitas. 2014. Comprehensible classification models: A position paper. ACM SIGKDD Explorations Newsletter.
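A minimal sketch of these size-based complexity proxies, assuming scikit-learn models (the breast-cancer dataset is a placeholder): count the non-zero weights of a sparse linear model and the nodes/leaves of a decision tree.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Linear model: complexity as the number of non-zero weights.
linear = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
print("non-zero weights:", np.count_nonzero(linear.coef_))

# Decision tree: rough complexity as node and leaf counts.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print("nodes:", tree.tree_.node_count, "leaves:", tree.get_n_leaves())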

Page 43

Open the Black Box Problems

Page 44

Problems Taxonomy

Page 45

XbD – eXplanation by Design

Page 46

BBX – Black Box eXplanation

Page 47

Classification Problem
X = {x1, …, xn}

Page 48

Model Explanation Problem
Provide an interpretable model able to mimic the overall logic/behavior of the black box and to explain its logic.
X = {x1, …, xn}

Page 49

Outcome Explanation Problem
Provide an interpretable outcome, i.e., an explanation for the outcome of the black box for a single instance.
x

Page 50

Model Inspection Problem
Provide a representation (visual or textual) for understanding either how the black box model works or why the black box returns certain predictions more likely than others.
X = {x1, …, xn}

Page 51

Transparent Box Design Problem
Provide a model which is locally or globally interpretable on its own.
X = {x1, …, xn}
x

Page 52

Categorization
• The type of problem
• The type of black box model that the explanator is able to open
• The type of data used as input by the black box model
• The type of explanator adopted to open the black box

Page 53

Black Boxes
• Neural Network (NN)
• Tree Ensemble (TE)
• Support Vector Machine (SVM)
• Deep Neural Network (DNN)

Page 54

Types of Data
• Text (TXT)
• Tabular (TAB)
• Images (IMG)

Page 55

Explanators
• Decision Tree (DT)
• Decision Rules (DR)
• Feature Importance (FI)
• Saliency Mask (SM)
• Sensitivity Analysis (SA)
• Partial Dependence Plot (PDP)
• Prototype Selection (PS)
• Activation Maximization (AM)

Page 56

Reverse Engineering
• The name comes from the fact that we can only observe the input and output of the black box.
• Possible actions are:
– choice of a particular comprehensible predictor
– querying/auditing the black box with input records created in a controlled way using random perturbations w.r.t. a certain prior knowledge (e.g., train or test)
• It can be generalizable or not:
– Model-Agnostic
– Model-Specific

Page 57

Model-Agnostic vs Model-Specific
• Model-Agnostic: independent of the specific black box
• Model-Specific: dependent on the specific black box

Page 58

Solving The Model Explanation Problem

Page 59

Global Model Explainers
• Explanator: DT; Black Box: NN, TE; Data Type: TAB
• Explanator: DR; Black Box: NN, SVM, TE; Data Type: TAB
• Explanator: FI; Black Box: AGN; Data Type: TAB
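For the FI/AGN/TAB combination, a minimal model-agnostic sketch is permutation importance, assuming scikit-learn (the random forest and dataset below are placeholders standing in for an arbitrary black box):

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure the drop in held-out score:
# a global, model-agnostic feature importance.
result = permutation_importance(black_box, X_te, y_te, n_repeats=10, random_state=0)
print(result.importances_mean)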

Page 60

Trepan – DT, NN, TAB

T = root_of_the_tree()
Q = <T, X, {}>
while Q not empty & size(T) < limit
    N, XN, CN = pop(Q)
    ZN = random(XN, CN)
    yZ = b(ZN), y = b(XN)
    if same_class(y ∪ yZ)
        continue
    S = best_split(XN ∪ ZN, y ∪ yZ)
    S' = best_m-of-n_split(S)
    N = update_with_split(N, S')
    for each condition c in S'
        C = new_child_of(N)
        CC = CN ∪ {c}
        XC = select_with_constraints(XN, CN)
        put(Q, <C, XC, CC>)

- Mark Craven and Jude W. Shavlik. 1996. Extracting tree-structured representations of trained networks. NIPS.
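The sketch below illustrates only the surrogate idea behind Trepan, in a much-simplified form (a plain CART tree instead of m-of-n splits, and no extra sampling); scikit-learn, the neural network and the dataset are placeholder assumptions: relabel the data with the black box and fit an interpretable tree that mimics it.

from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
black_box = MLPClassifier(max_iter=1000, random_state=0).fit(X, y)   # the opaque model b

y_bb = black_box.predict(X)                                          # audit the black box
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

# Fidelity: how well the interpretable surrogate mimics the black box.
print("fidelity:", accuracy_score(y_bb, surrogate.predict(X)))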

Page 61

RxREN – DR, NN, TAB

prune insignificant neurons
for each significant neuron
    for each outcome
        compute mandatory data ranges
for each outcome
    build rules using data ranges of each neuron
prune insignificant rules
update data ranges in rule conditions analyzing error

- M. Gethsiyal Augasta and T. Kathirvalavakumar. 2012. Reverse engineering the neural networks for rule extraction in classification problems. NPL.

Page 62

Solving The Outcome Explanation Problem

Page 63

Local Model Explainers
• Explanator: SM; Black Box: DNN, NN; Data Type: IMG
• Explanator: FI; Black Box: DNN, SVM; Data Type: ANY
• Explanator: DT; Black Box: ANY; Data Type: TAB

Page 64

Local Explanation
• The overall decision boundary is complex.
• In the neighborhood of a single decision, the boundary is simple.
• A single decision can be explained by auditing the black box around the given instance and learning a local decision.

Page 65

LIME – FI, AGN, "ANY"

Z = {}
x: instance to explain
x' = real2interpretable(x)
for i in {1, 2, …, N}
    zi = sample_around(x')
    z = interpretable2real(zi)
    Z = Z ∪ {<zi, b(z), d(x, z)>}
w = solve_Lasso(Z, k)
return w

- Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier. KDD.
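In practice, the same procedure is available in the lime package that accompanies the paper; a minimal usage sketch, assuming lime and scikit-learn are installed (the data and the random forest are placeholders, not the tutorial's example):

from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
black_box = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(data.data,
                                 feature_names=list(data.feature_names),
                                 class_names=list(data.target_names),
                                 mode="classification")

# Sample around one instance, label the samples with the black box, and fit a
# sparse linear model; its weights are the local feature-importance explanation.
exp = explainer.explain_instance(data.data[0], black_box.predict_proba, num_features=5)
print(exp.as_list())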

Page 66

LORE – DR, AGN, TAB

x: instance to explain
Z= = geneticNeighborhood(x, fitness=, N/2)
Z≠ = geneticNeighborhood(x, fitness≠, N/2)
Z = Z= ∪ Z≠
c = buildTree(Z, b(Z))
r = (p -> y) = extractRule(c, x)
ϕ = extractCounterfactual(c, r, x)
return e = <r, ϕ>

- Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, and Fosca Giannotti. 2018. Local rule-based explanations of black box decision systems. arXiv preprint arXiv:1805.10820.

Page 67

LORE: Local Rule-Based Explanations

Genetic Neighborhood (crossover, mutation, fitness)
x = {(age, 22), (income, 800), (job, clerk)}
[Figure: neighborhood generated around x, labeled deny/grant by the black box]

The fitness function evaluates which elements are the "best life forms", that is, the most appropriate for the result.

- Guidotti, R., Monreale, A., Ruggieri, S., Pedreschi, D., Turini, F., & Giannotti, F. (2018). Local Rule-Based Explanations of Black Box Decision Systems. arXiv:1805.10820.

Page 68

Local Rule-Based Explanations

x = {(age, 22), (income, 800), (job, clerk)}

Explanation:
• Rule: r = {age ≤ 25, job = clerk, income ≤ 900} -> deny
• Counterfactual: Φ = {({income > 900} -> grant), ({17 ≤ age < 25, job = other} -> grant)}

Page 69

Random Neighborhood vs. Genetic Neighborhood

Page 70

Local 2 Global

Page 71

Local First …

x1 = {(age, 22), (income, 800), (job, clerk)}  →  r1 = {age ≤ 25, job = clerk, income ≤ 900} -> deny
x2 = {(age, 27), (income, 1000), (job, clerk)}  →  r2 = {age > 25, job = clerk, income ≤ 1500} -> deny
...
xn = {(age, 26), (income, 1800), (job, clerk)}  →  rn = {age > 25, job = clerk, income > 1500} -> grant

Page 72

… then Local to Global

while score(fidelity, complexity) < α
    find similar theories
    merge them

• Score: Bayesian Information Criterion
• Similarity: Jaccard(coverage(T1), coverage(T2))
• Merge: union on concordant rules, difference on discordant rules

[Figure: local rules r1, r2, …, rn merged into a smaller global set r'1, r'2, r'3]

Page 73

Meaningful Perturbations – SM, DNN, IMG

x: instance to explain
vary x into x' so as to maximize the change between b(x) and b(x')
the variation replaces a region R of x with: a constant value, noise, or a blurred image
reformulation: find the smallest R such that b(xR) ≪ b(x)

- Ruth Fong and Andrea Vedaldi. 2017. Interpretable explanations of black boxes by meaningful perturbation. arXiv:1704.03296 (2017).
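A much-simplified sketch of the perturbation idea (plain occlusion rather than the optimized masks of Fong and Vedaldi; predict is assumed to map a batch of images to class probabilities):

import numpy as np

def occlusion_map(image, predict, target_class, patch=8, baseline=0.5):
    """image: HxWxC float array; returns a coarse map of score drops per region."""
    h, w, _ = image.shape
    base_score = predict(image[None])[0, target_class]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch, :] = baseline   # grey out a region R
            heat[i // patch, j // patch] = base_score - predict(occluded[None])[0, target_class]
    return heat  # large values mark regions whose removal hurts the prediction most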

Page 74

SHAP (SHapley Additive exPlanations)
• SHAP assigns each feature an importance value for a particular prediction by means of an additive feature attribution method.
• It assigns an importance value to each feature that represents the effect on the model prediction of including that feature.

- Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in Neural Information Processing Systems. 2017.
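A minimal usage sketch with the shap package (assumption: the package is installed; the model and data are placeholders). KernelExplainer is the model-agnostic estimator of these additive Shapley attributions:

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

background = shap.sample(data.data, 50)                   # reference distribution
explainer = shap.KernelExplainer(model.predict_proba, background)

# Additive per-feature contributions for a single prediction.
shap_values = explainer.shap_values(data.data[:1])
print(shap_values)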

Page 75

Solving The Model Inspection Problem

Page 76

Saliency maps

Julius Adebayo, Justin Gilmer, Michael Christoph Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. 2018.
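For reference, a minimal sketch of the kind of map these sanity checks stress-test, a vanilla gradient saliency map (assuming PyTorch and an already trained image classifier model; this is not the specific method of the cited paper):

import torch

def gradient_saliency(model, image, target_class):
    """image: 1xCxHxW tensor; returns an HxW map of |d score / d pixel|."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()                                   # gradient of the class score w.r.t. pixels
    return image.grad.abs().max(dim=1)[0].squeeze(0)   # max over colour channels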

Page 77

Interpretable recommendations

L. Hu, S. Jian, L. Cao, and Q. Chen. Interpretable recommendation via attraction modeling: Learning multilevel attractiveness over multimodal movie contents. IJCAI-ECAI, 2018.

Page 78

Inspection Model Explainers
• Explanator: SA; Black Box: NN, DNN, AGN; Data Type: TAB
• Explanator: PDP; Black Box: AGN; Data Type: TAB
• Explanator: AM; Black Box: DNN; Data Type: IMG, TXT

Page 79

VEC – SA, AGN, TAB
• Sensitivity measures are calculated as the range, gradient, or variance of the prediction.
• The visualizations are bar plots for the feature importances, and the Variable Effect Characteristic (VEC) curve, which plots the input values versus the (average) outcome responses.

- Paulo Cortez and Mark J. Embrechts. 2011. Opening black box data mining models using sensitivity analysis. CIDM.
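A minimal one-at-a-time sketch in the spirit of this sensitivity analysis (simplified; scikit-learn and the diabetes data are placeholder assumptions): scan each input over its observed range with the other inputs held at their medians, and report the range of the black-box response.

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
black_box = RandomForestRegressor(random_state=0).fit(X, y)

baseline = np.median(X, axis=0)
for j in range(X.shape[1]):
    grid = np.linspace(X[:, j].min(), X[:, j].max(), 20)
    probes = np.tile(baseline, (len(grid), 1))
    probes[:, j] = grid                                  # vary feature j only
    responses = black_box.predict(probes)                # the VEC curve for feature j
    print(f"feature {j}: sensitivity range = {responses.max() - responses.min():.2f}")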

Page 80

Prospector – PDP, AGN, TAB
• Introduces random perturbations of the input values to understand to what extent every feature impacts the prediction, using PDPs.
• The input is changed one variable at a time.
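A minimal partial dependence sketch, the perturbation-based single-feature view that Prospector builds on (assuming scikit-learn >= 1.0 and matplotlib; the model and data are placeholders):

import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Average model response as each selected feature is varied in turn.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 2])
plt.show()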

Page 81

Solving The Transparent Design Problem

Page 82

Transparent Model Explainers
• Explanators: DR, DT, PS
• Data Type: TAB

Page 83

CPAR – DR, TAB
• Combines the advantages of associative classification and rule-based classification.
• It adopts a greedy algorithm to generate rules directly from the training data.
• It generates more rules than traditional rule-based classifiers to avoid missing important rules.
• To avoid overfitting, it uses expected accuracy to evaluate each rule and uses the best k rules in prediction.

- Xiaoxin Yin and Jiawei Han. 2003. CPAR: Classification based on predictive association rules. SIAM, 331-335.

Page 84

CORELS – DR, TAB
• A branch-and-bound algorithm that provides the optimal solution according to the training objective, with a certificate of optimality.
• It maintains a lower bound on the minimum error that each incomplete rule list can achieve. This allows pruning an incomplete rule list and all of its possible extensions.
• It terminates with the optimal rule list and a certificate of optimality.

- Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., & Rudin, C. 2017. Learning certifiably optimal rule lists. KDD.

Page 85: Explainable AI...2019/12/09  · Explainable AI: From Theory to Mo7va7on, Applica7ons and Challenges Lecturers: Fosca Gianno0 (ISTI‐CNR), Dino Pedreschi (University of Pisa) Contributors:

References

•  Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), 93.
•  Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv:1702.08608v2.
•  Alex A. Freitas. 2014. Comprehensible classification models: A position paper. ACM SIGKDD Explor. Newslett.
•  Andrea Romei and Salvatore Ruggieri. 2014. A multidisciplinary survey on discrimination analysis. Knowl. Eng.
•  Yousra Abdul Alsahib S. Aldeen, Mazleena Salleh, and Mohammad Abdur Razzaque. 2015. A comprehensive review on privacy preserving data mining. SpringerPlus.
•  Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should I trust you?: Explaining the predictions of any classifier. KDD.
•  Houtao Deng. 2014. Interpreting tree ensembles with inTrees. arXiv preprint arXiv:1408.5456.
•  Mark Craven and Jude W. Shavlik. 1996. Extracting tree-structured representations of trained networks. NIPS.


Page 86:

References

•  M. Gethsiyal Augasta and T. Kathirvalavakumar. 2012. Reverse engineering the neural networks for rule extraction in classification problems. NPL.
•  Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, and Fosca Giannotti. 2018. Local rule-based explanations of black box decision systems. arXiv preprint arXiv:1805.10820.
•  Ruth Fong and Andrea Vedaldi. 2017. Interpretable explanations of black boxes by meaningful perturbation. arXiv:1704.03296.
•  Paulo Cortez and Mark J. Embrechts. 2011. Opening black box data mining models using sensitivity analysis. CIDM.


•  Xiaoxin Yin and Jiawei Han. 2003. CPAR: Classification based on predictive association rules. SIAM, 331–335.
•  Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., & Rudin, C. 2017. Learning certifiably optimal rule lists. KDD.


Page 87:

Applications

Page 88:

Obstacle Identification Certification (Trust) - Transportation

Challenge: Public transportation is deploying more and more self-driving vehicles. Even as trains become increasingly autonomous, the human stays in the loop for critical decisions, for instance in case of obstacles. When an obstacle is detected, the train is required to provide a recommendation for action, i.e., go on or go back to the station, and the human is required to validate the recommendation through an explanation exposed by the train or machine.

AI Technology: Integration of AI-related technologies, i.e., Machine Learning (Deep Learning / CNNs) and semantic segmentation.

XAI Technology: Deep learning and epistemic uncertainty

Page 89:

Explainable On-Time Performance - Transportation

Challenge: Globally, 323,454 flights are delayed every year. Airline-caused delays totaled 20.2 million minutes last year, generating huge costs for the company. The existing in-house technique reaches 53% accuracy for predicting flight delay, does not provide any time estimation (in minutes as opposed to True/False), and is unable to capture the underlying reasons (explanation).

AI Technology: Integration of AI-related technologies, i.e., Machine Learning (Deep Learning / Recurrent Neural Networks), Reasoning (through semantics-augmented case-based reasoning) and Natural Language Processing, for building a robust model which can (1) predict flight delays in minutes, and (2) explain delays by comparing with historical cases.

XAI Technology: Knowledge-graph-embedded sequence learning using LSTMs

Jiaoyan Chen, Freddy Lécué, Jeff Z. Pan, Ian Horrocks, Huajun Chen: Knowledge-Based Transfer Learning Explanation. KR 2018: 349-358

Nicholas McCarthy, Mohammad Karzand, Freddy Lecue: Amsterdam to Dublin Eventually Delayed? LSTM and Transfer Learning for Predicting Delays of Low Cost Airlines. AAAI 2019

Page 90:

Explainable Risk Management - Finance

Challenge: Accenture manages every year more than 80,000 opportunities and 35,000 contracts, with an expected revenue of $34.1 billion. Revenue expectations are not met due to the complexity and risks of critical contracts. This is, in part, due to (1) the large volume of projects to assess and control, and (2) the existing non-systematic assessment process.

AI Technology: Integration of AI technologies, i.e., Machine Learning, Reasoning and Natural Language Processing, for building a robust model which can (1) predict revenue loss, (2) recommend corrective actions, and (3) explain why such actions might have a positive impact.

XAI Technology: Knowledge-graph-embedded Random Forest

Jiewen Wu, Freddy Lécué, Christophe Guéret, Jer Hayes, Sara van de Moosdijk, Gemma Gallagher, Peter McCanney, Eugene Eichelberger: Personalizing Actions in Context for Risk Management Using Semantic Web Technologies. International Semantic Web Conference (2) 2017: 367-383

Page 91:

Explainable Anomaly Detection – Finance (Compliance)

[Figure: three levels of explanation of abnormal expenses: (1) data analysis for spatial interpretation of abnormalities (abnormal expenses); (2) semantic explanation of abnormalities, structured in classes (fraud, events, seasonal); (3) detailed semantic explanation, structured in sub-classes (e.g., categories for events).]

Challenge: Predicting and explaining abnormal employee expenses (such as high accommodation prices in 1000+ cities).

AI Technology: Various techniques have matured over the last two decades and achieve excellent results. However, most methods address the problem from a statistical, purely data-centric angle, which in turn limits any interpretation. We elaborated a web application running live with real data from (i) travel and expenses from Accenture, and (ii) external data from third parties such as the Google Knowledge Graph, DBpedia (a relational-database version of Wikipedia) and social events from Eventful, for explaining abnormalities.

XAI Technology: Knowledge-graph-embedded Ensemble Learning

Freddy Lécué, Jiewen Wu: Explaining and predicting abnormal expenses at large scale using knowledge graph based reasoning. J. Web Sem. 44: 89-103 (2017)

Page 92:

Counterfactual Explanations for Credit Decisions

•  Local, post-hoc, contrastive explanations of black-box classifiers

•  The required minimum change in the input vector to flip the decision of the classifier (a minimal sketch follows below).

•  Interactive contrastive explanations

Challenge: We predict loan applications with off-the-shelf, interchangeable black-box estimators, and we explain their predictions with counterfactual explanations. In counterfactual explanations the model itself remains a black box; it is only through changing inputs and outputs that an explanation is obtained.

AI Technology: Supervised learning, binary classification.

XAI Technology: Post-hoc explanation, Local explanation, Counterfactuals, Interactive explanations

Rory Mc Grath, Luca Costabello, Chan Le Van, Paul Sweeney, Farbod Kamiab, Zhao Shen, Freddy Lécué: Interpretable Credit Application Predictions With Counterfactual Explanations.

FEAP-­‐AI4fin  workshop,  NeurIPS,  2018.  
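A minimal, model-agnostic sketch of counterfactual search, not the specific method of Mc Grath et al.: greedily perturb one feature at a time until the black-box decision flips. The model object, feature layout and step sizes are illustrative assumptions.

```python
import numpy as np

def counterfactual(model, x, steps, max_iter=100):
    """Return a minimally changed copy of x for which model.predict flips."""
    x = np.asarray(x, dtype=float)
    target = 1 - model.predict([x])[0]           # the opposite class (0/1 labels assumed)
    cf = x.copy()
    for _ in range(max_iter):
        if model.predict([cf])[0] == target:
            return cf                             # decision flipped: counterfactual found
        # try the single-feature move that gets closest to the target class
        best, best_score = None, -np.inf
        for i, step in enumerate(steps):
            for delta in (+step, -step):
                cand = cf.copy()
                cand[i] += delta
                score = model.predict_proba([cand])[0][target]
                if score > best_score:
                    best, best_score = cand, score
        cf = best
    return None                                   # no counterfactual found within budget

# Usage (assuming a fitted scikit-learn binary classifier `model` over
# [income, debt, age]):
#   counterfactual(model, [30_000, 12_000, 40], steps=[1_000, 1_000, 1])
# returns the nearest loan profile found for which the decision flips.
```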


Page 93:

Counterfactual Explanations for Credit Decisions



Page 95:

Breast Cancer Survival Rate Prediction

predict.nhs.uk/tool

Challenge: Predict is an online tool that helps patients and clinicians see how different treatments for early invasive breast cancer might improve survival rates after surgery.

AI Technology: Competing risk analysis

XAI Technology: Interactive explanations, multiple representations.

David Spiegelhalter, Making Algorithms Trustworthy, NeurIPS 2018 Keynote


Page 96:

Reasoning on Local Explanations of Classifications Operated by Black Box Models

•  DIVA (Fraud Detection IVA) dataset from the Agenzia delle Entrate, containing about 34 million IVA (Italian VAT) declarations and 123 features.

•  92.09% of the instances classified with label '3' by the KDD-Lab classifier are assigned the same label, together with an explanation, by LORE (a simplified local-surrogate sketch follows below).

•  Master's Degree Thesis, Leonardo Di Sarli, 2019
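A simplified local surrogate in the spirit of LORE (the actual method generates the neighborhood with a genetic algorithm and extracts factual and counterfactual rules; here Gaussian perturbations and a shallow decision tree stand in for both). All names and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def local_rule_explanation(black_box, x, n_samples=1000, scale=0.1, feature_names=None):
    """Fit a shallow decision tree around x and return its rules as text."""
    x = np.asarray(x, dtype=float)
    # Synthetic local neighborhood around the instance to explain
    Z = x + np.random.normal(0.0, scale, size=(n_samples, x.shape[0]))
    y = black_box.predict(Z)                       # labels come from the black box
    surrogate = DecisionTreeClassifier(max_depth=3).fit(Z, y)
    return export_text(surrogate, feature_names=feature_names)

# Usage (assuming a fitted classifier `black_box` and an instance `x`):
#   print(local_rule_explanation(black_box, x, feature_names=["f1", "f2", "f3"]))
```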

Page 97:

(Some) Software Resources

•  DeepExplain: perturbation and gradient-based attribution methods for Deep Neural Network interpretability. github.com/marcoancona/DeepExplain

•  iNNvestigate: A toolbox to iNNvestigate neural networks' predictions. github.com/albermax/innvestigate

•  SHAP: SHapley Additive exPlanations. github.com/slundberg/shap (usage sketch after this list)

•  ELI5: A library for debugging/inspecting machine learning classifiers and explaining their predictions. github.com/TeamHG-Memex/eli5

•  Skater: Python library for model interpretation/explanations. github.com/datascienceinc/Skater

•  Yellowbrick: Visual analysis and diagnostic tools to facilitate machine learning model selection. github.com/DistrictDataLabs/yellowbrick

•  Lucid: A collection of infrastructure and tools for research in neural network interpretability. github.com/tensorflow/lucid
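A minimal usage sketch for one of the libraries above (SHAP); the dataset and model are illustrative, and the exact return shape of shap_values varies across shap versions.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Fit a black-box model on an example dataset
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

# TreeExplainer is the fast, model-specific explainer for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:50])

# Global summary of feature attributions on the explained instances
shap.summary_plot(shap_values, data.data[:50], feature_names=data.feature_names)
```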


Page 98:

Conclusions


Page 99:

Take-Home Messages

•  Explainable AI is motivated by real-world applications of AI
•  Not a new problem – a reformulation of past research challenges in AI
•  Multi-disciplinary: multiple AI fields, HCI, cognitive psychology, social science
•  In Machine Learning:
   •  Transparent design or post-hoc explanation?
   •  Background knowledge matters!
•  In AI (in general): many interesting / complementary approaches

Page 100:

Open  The  Black  Box!

•  To empower individuals against undesired effects of automated decision making

•  To implement the "right to explanation"

•  To  improve  industrial  standards  for  developing  AI-­‐powered  products,  increasing  the  trust  of  companies  and  consumers  

•  To help people make better decisions

•  To  preserve  (and  expand)  human  autonomy  


Page 101:

Open Research Questions

•  There is no agreement on what an explanation is

•  There is no formalism for explanations

•  There is no work that seriously addresses the problem of quantifying the degree of comprehensibility of an explanation for humans

•  What happens when black boxes make decisions in the presence of latent features?

• What  if  there  is  a  cost  for  querying  a  black  box?  


Page 102:

Future  Challenges

•  Creating awareness! Success stories!

•  Foster multi-disciplinary collaborations in XAI research.

•  Help shape industry standards and legislation.

• More  work  on  transparent  design.    

•  Investigate symbolic and sub-symbolic reasoning.

•  Evaluation:
   •  We need benchmarks – shall we start a task force?
   •  We need an XAI challenge – anyone interested?
   •  Rigorous, agreed-upon, human-based evaluation protocols


Page 103:

Explainable AI: From Theory to Motivation, Applications and Limitations

http://ai4eu.org/     http://www.sobigdata.eu/     http://www.humane-ai.eu/

XAI ERC-AdG-2019 "Science & technology for the eXplanation of AI decision making"

We're hiring! Postdocs wanted.