
Toward Explainable Artificial Intelligence - RFEX: Improving Random Forest Explainability (but first… concerns about AI…)

Prof. Dragutin Petkovic

Computer Science Department, San Francisco State University (SFSU)

Director, SFSU Center for Computing for Life Sciences

[email protected]

Copyright D. Petkovic (except when noted)

01/14/19

About this talk

• AI – issues, ethics, concerns, and recent controversies (mostly US data): a quick overview and motivation for work on AI Explainability

• Our attempt to improve Random Forest Explainability - RFEX


AI is the future – right?

A new AI-driven economy with HUGE promises


AI will directly impact everyday people, not only businesses

• Autonomous cars – "Wielding Rocks and Knives, Arizonans Attack Self-Driving Cars", NY Times 12/31/18 – https://www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html

• Loan approvals – https://www.lending-express.com/blog/ai-and-machine-learning-are-the-future-of-alternative-lending/

• Getting jobs – Amazon AI system for job applicant filtering discontinued

• https://www.theverge.com/2018/10/10/17958784/ai-recruiting-tool-bias-amazon-report

– Babysitting jobs involve AI scans of social media and the web

• https://www.washingtonpost.com/technology/2018/11/16/wanted-perfect-babysitter-must-pass-ai-scan-respect-attitude/?noredirect=on&utm_term=.09aaef69e086

• Rental approval – https://www.landlordo.com/will-ai-artificial-intelligence-change-rental-industry/

• Policing and crime prevention – https://www.forbes.com/sites/andrewarnold/2018/04/21/can-ai-help-us-predict-and-prevent-crimes-in-the-future/

• News and information filtering? Even news anchors are becoming digital – https://www.theguardian.com/world/2018/nov/09/worlds-first-ai-news-anchor-unveiled-in-china


AI Promise: Huge investments in AI

• US and Chinese companies and governments are investing heavily, and now the EU (e.g. Germany) too – https://www.reuters.com/article/us-germany-intelligence/germany-plans-3-billion-in-ai-investment-government-paper-idUSKCN1NI1AP

• US universities – MIT investment in a new College for AI – $1B!!!!

• https://www.nytimes.com/2018/10/15/technology/mit-college-artificial-intelligence.html

– Stanford effort in human-centered AI (and investments are coming in) – http://hai.stanford.edu/

– Notably, all these academic efforts include an AI ethics component


Hmmmm… is AI the answer to many of our problems? How do we make sure AI is good for humanity and not a problem, or even a tool of oppression?

One of the big issues facing humanity!!!!!


BUT…..

Many failures of AI are gaining visibility now

• Some WWW sites devoted to tracking major AI failures – https://medium.com/syncedreview/2017-in-review-10-ai-failures-4a88da1bdf01

• Fatal Uber crash – https://techcrunch.com/2018/05/24/uber-in-fatal-crash-detected-pedestrian-but-had-emergency-braking-disabled/

– The pedestrian was detected, but the emergency braking feature was disabled… due to too many false alarms?


Wrong image recognition


Fooling facial recognition


Wrong interpretation of street signs

View from decision makers (not researchers)


Mary is deciding whether to adopt an AI-based diagnostic method


Mary has to make a decision based on the current state of the art in presenting AI data:
• AI algorithm used
• Info about the training DB
• Information about the specific SW used
• Accuracy and the methods used to estimate it

Mary's decision is critical for patients' well-being and for the company. Mary has legal responsibility too


Mary got this from the AI vendor – the state-of-the-art way to present AI results:

• Algorithm used: Random Forest

• SW used: R toolkit

• Training Data: 1,000,000,000 samples with ground truth, each with 155 features

• Accuracy: F1 score of 0.9 using 5-fold cross-validation


TO TRUST OR NOT TO TRUST?


What could go wrong (1): Errors and Bias in Training Database

• AI models heavily depend on training data

• Although seemingly correct, the decision might be fundamentally wrong due to errors/bias in the training data – some examples can be found in:
– S. Kaufman, S. Rosset, C. Perlich: "Leakage in Data Mining: Formulation, Detection, and Avoidance", ACM Transactions on Knowledge Discovery from Data 6(4):1-21, December 2012 – the hospital where patients were sent depended on outcomes, but this was inadvertently used as a key feature for prediction

– C. Kuang: “Can AI be Taught to Explain Itself”, NY Times Magazine Nov 2017 – study on lung care missed important cases – they were not in the training data

– Many cases reported where training data did not contain proper mix of genders, races and cases

AI can generate wrong decisions and cause discrimination due to bias or other issues in the training data


What could go wrong (2): Non-relevant features used for decisions

• The AI algorithm made (seemingly correct) decisions based on non-relevant features (e.g. background) – e.g. images that contained areas of positive interest also had certain features NOT related to those areas, which were in fact used by the AI to produce seemingly correct classifications (e.g. an AI system for classifying gender from eye images in fact used makeup information)

BUT this AI model would not work on future data (although test results seem good)


What could go wrong (3): Algorithms for AI

• Bias

• Sensitivity and poor robustness: minor data variations or errors can cause wrong decisions

• Hard to understand, tune up and control

• Management reluctant to adopt technologies they do not understand


Mary is not the only one being concerned


Research community and universities starting to take notice of AI issues

• Recent workshops, e.g. – D. Petkovic (Chair), L. Kobzik, C. Re: "Workshop on Machine learning and deep analytics for biocomputing: call for better explainability", Pacific Symposium on Biocomputing PSB 2018, Hawaii

• DARPA Program for Explainable AI – https://www.darpa.mil/program/explainable-artificial-intelligence

• Programs and Centers in Bioethics at major schools, e.g.
– Stanford Center for Biomedical Ethics http://med.stanford.edu/bioethics/about.html

• New MIT Initiative ($1B investment!!!) in AI has a component of AI Ethics – http://news.mit.edu/2018/mit-reshapes-itself-stephen-schwarzman-college-of-computing-1015

• Stanford Human-Centered AI – how to make AI good for humanity – http://hai.stanford.edu/


Initiatives at regulatory and government levels emerging

• New EU General Data Protection Regulation (GDPR), effective May 2018 – includes strong data privacy and a "right to know" how algorithms work (Recital 71)

– https://www.privacy-regulation.eu/en/r71.htm

• US Congress interviewed high-tech execs on data use and privacy and is taking notice

– AI Caucus https://artificialintelligencecaucus-olson.house.gov/

– Some talk on regulation of AI http://theconversation.com/congress-takes-first-steps-toward-regulating-artificial-intelligence-104373

• The 23 Asilomar AI Principles, endorsed by the CA legislature

– https://futureoflife.org/ai-principles/

• Canada and France will explore AI ethics with an international panel – it's all about developing responsible and "human-centric" AI.

– https://www.engadget.com/2018/12/07/canada-france-ai-ethics/?yptr=yahoo

• Technical standards, regulations and best practices will eventually emerge e.g.

– New IEEE Standard P7001 on Transparency of Autonomous Systems https://standards.ieee.org/develop/project/7001.html


BUT ALSO: Mainstream media is starting to notice, and people are getting concerned

NY Times Magazine Nov 21 2017

Economist Feb 2018


BBC "The Why Factor" – excellent podcast on machines and morals

Rethinking the safety of autonomous cars


From Forbes Magazine, April 2018


Newsweek, 11/2017


Awareness among tech workers increasing

• Google workers protest militarization of AI – https://www.zdnet.com/article/google-employee-protests-now-google-backs-off-pentagon-drone-ai-project/

• Google employees protest development of search engine for China – https://www.usatoday.com/story/tech/news/2018/08/16/google-employees-sign-petition-protesting-work-secret-chinese-search-engine-project/1011617002/

• Google AI Principles – https://www.blog.google/technology/ai/ai-principles/

• Social media helps spread these kinds of actions and concerns among high tech workers and citizens too


Remember the history – people against machines

• The Luddite (Ned Ludd) movement of 19th-century English textile workers against machines and automation

– https://en.wikipedia.org/wiki/Luddite


Remember the history….

Isaac Asimov: Three Laws of Robotics – 1942
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Remember the history – expert systems in 80s

• Limited power BUT they had some explainability feature (“click on ?”)

– “Show me the rules that produced the decision”


Remember the Future: Terminator II

• Genesis of Skynet (Terminator II movie) – https://www.youtube.com/watch?v=4DQsG3TKQ0I

– Terminator says: “The Skynet Funding Bill is passed. The system goes on-line August 4th, 2018. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug. Skynet fights back.”

• You think this is just fiction? Ask Elon Musk – Raj Dasgupta: "Why Elon Musk Has Fears Of Skynet Coming True", Forbes Nov. 21 2017 – https://www.forbes.com/sites/forbestechcouncil/2017/11/21/why-elon-musk-has-fears-of-skynet-coming-true/#74181f4824eb


AI has evolved since the 60s

• Pattern recognition → Expert Systems → AI → Machine Learning → Deep Learning…

• Huge increase in computing power (CPU, memory, networks, Internet)

• Some new AI algorithms produce great results BUT most are very hard to explain (Neural Networks, Deep Learning, SVM)


So what is good and ethical AI?


From 23 AI Asilomar Principles (endorsed by CA Legislature)

The section on AI Ethics and Values https://futureoflife.org/ai-principles/

• Safety

• Failure Transparency

• Judicial Transparency

• Responsibility

• Value Alignment

• Human Values

• Personal Privacy

• Liberty and Privacy

• Shared Benefit

• Shared Prosperity

• Human Control

• Non-subversion

• AI Arms Race

Very nice and noble, but how do we verify and develop for this?


What to do? Where to start?
• While making AI adhere to the above noble principles (e.g. the 23 Asilomar AI Principles) is hugely complex, we in the technical community can at least start from some obvious problems and try to solve them

• How can I trust a system if I do not understand how it works and how to control it?
– Most current AI systems do not offer effective explainability or transparency, especially new ones based on Deep Learning, multilayer NNs, etc.

• We chose to start from Explainability/Transparency


What is AI Explainability?

• Easy to use information explaining why and how the AI approach made its decisions

– AI Model Explainability: helps explain the ML model as a whole

– AI Test Sample explainability: helps explain decision on specific sample (often user confidence is guided by ML accuracy on specific samples they know about)

• Targeted especially for non-AI experts (who are often adopters and decision makers)


Benefits of better ML Explainability
• Increased confidence and trust of application and domain experts, as well as the public, in adopting ML;
• Better validation, audit, and prevention of cases where the ML approach produces results based on fundamentally wrong reasons or can behave in an unsafe manner

• Simplification and reduction of the cost of application of ML in practice (e.g. by knowing which smaller feature subsets produce adequate accuracy)

• Improved “maintenance” where ML method has to be changed or tuned to new data or decision needs;

• Possible discovery of new knowledge and ideas (e.g. by discovering new patterns and factors that contribute to ML decisions)

• Most often addresses correlation and not causality


Our contribution


Case Study: Improving the explainability of Random Forest classifier – user centered approach
Dragutin Petkovic† 1,3, Russ Altman 2, Mike Wong 3, Arthur Vigil 4
1 Computer Science Department, San Francisco State University (SFSU)
2 Department of Bioengineering, Stanford University
3 SFSU Center for Computing for Life Sciences, 1600 Holloway Ave., San Francisco, CA 94132
4 Twist Bioscience, 455 Mission Bay Boulevard South, San Francisco, CA 94158

(Pacific Symposium on Biocomputing, Hawaii, January 2018)


Random Forest ML

• Widely used

• Excellent performance

• Abundance of SW tools available e.g. R toolkit

• Based on an ensemble of trees, each trained on a bootstrap subset of the training data; all trees then vote for the decision

• Has built-in validation via out-of-bag (OOB) error estimation

• Amenable to explanation and offers feature importance ranking (e.g. Mean Decrease in Accuracy)

– L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp.5–32, 2001
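The sketch below is purely illustrative (assumed scikit-learn calls and made-up parameters, not the code behind the talk): it trains an RF as an ensemble of bootstrap-trained voting trees and reads its built-in OOB validation estimate and feature importances.

```python
# Minimal sketch of training an RF and reading its built-in validation/importances
# (assumed scikit-learn API and illustrative parameters, not the code from the talk)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a labeled training set (features X, ground-truth labels y)
X, y = make_classification(n_samples=2000, n_features=50, n_informative=8, random_state=0)

# Ensemble of trees, each trained on a bootstrap sample; all trees vote on the class
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)

print("Out-of-bag accuracy estimate:", rf.oob_score_)        # built-in validation via OOB samples
print("Top impurity-based importances:", rf.feature_importances_[:5])
```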

Why we chose RF

• Popular

• One of the most powerful (see comparison of 13 ML algorithms on 165 publicly available databases) – https://psb.stanford.edu/psb-online/proceedings/psb18/olson.pdf

• Good tools available (R, scikit-learn, WEKA)

• Amenable to explainability (unlike NN, Deep Learning) – tree-based, has feature ranking


RF feature ranking and accuracy
• Feature Ranking: we use MDA – Mean Decrease in Accuracy (part of the RF algorithm, provided by all RF implementations) – for each feature in the dataset: randomly permute the feature; make predictions on this permuted data; record the average decrease in accuracy vs. using unpermuted data
– Permuting more important features results in a larger decrease in accuracy (permutation-based ranking is more robust and less biased (in R tool 4 and later))
• RF consists of an ensemble of disjoint trees, so the best features used for the + class may not be the same as those for the – class; MDA can be computed for the + and – class separately (MDA+; MDA–) – important in case of highly unbalanced data (the FEATURE data is unbalanced)
• RF Accuracy: we use the F1 score for the + class
– F1 = 2 * (precision * recall) / (precision + recall)
– Precision/recall are optimized by varying the cutoff for ensemble tree voting
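A rough sketch of how MDA-style permutation ranking (including a per-class variant) and the F1-vs-voting-cutoff tradeoff could be computed; the scikit-learn calls, the 0/1 label encoding, and the per-class proxy below are illustrative assumptions, not the toolkit actually used in the study.

```python
# Sketch of MDA-style permutation ranking (per class) and F1 vs. tree-vote cutoff
# (assumed scikit-learn API; illustrative only, not the study's actual toolkit)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def mda_and_f1(X, y, cutoffs=(0.3, 0.5, 0.7)):          # assumes labels encoded as 0/1
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

    # MDA-like ranking: permute each feature and measure the drop in accuracy on held-out data
    mda = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)

    # Simple proxy for MDA+ : repeat the permutation test on + class samples only
    pos = y_te == 1
    mda_pos = permutation_importance(rf, X_te[pos], y_te[pos], n_repeats=10, random_state=0)

    # F1 for the + class as the required fraction of tree votes ("cutoff") is varied
    vote_frac = rf.predict_proba(X_te)[:, 1]             # ~fraction of trees voting for +
    f1_by_cutoff = {c: f1_score(y_te, (vote_frac >= c).astype(int)) for c in cutoffs}
    return np.argsort(mda.importances_mean)[::-1], mda_pos.importances_mean, f1_by_cutoff
```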

Current approaches in RF Explainability

• Feature ranking uses RF-provided variable importance measures, e.g. Gini or MDA (mean decrease in accuracy), presented in tables or horizontal bar charts sorted by the chosen importance measure.
– Too simplistic; not done for the + vs. – class separately; lacks tradeoffs between features used and accuracy.
• Rule extraction from the trained RF. This method consists of:
a) performing standard RF training;
b) defining rules by analyzing the trained RF trees (resulting in a very large set of rules, on the order of 100K); and
c) reducing the number and complexity of extracted rules by optimization, down to 10s–100s of rules, each with 1–10 or so conditions.
– Hard for humans to interpret; rules are often complex; lacks tradeoffs between accuracy and the number of rules used.
• No "user design and evaluation" with key adopters, who are often non-RF-expert users – the key constituency

[Diagram: RFEX pipeline. Traditional RF classification: Training Data → Random Forest Training → Trained RF tree ensemble, with accuracy reported as F1, OOB, confusion matrices, ntree, mtry… RFEX (RF explainability enhancement, driven by user needs) then creates an explainable RF model, producing RFEX explainability model data and a one-page RFEX summary report.]

RFEX explainable model – driven by User/Adopter Needs and Questions

• What is the loss/tradeoff in accuracy if I use only a certain subset of the most important features?

• What are the most important features contributing to the ML prediction, and how do they rank in importance?

• Also, tell me more about the features:
– What is the relationship of the most important features for the + vs. – class; is there any overlap?
– What is the "direction" of features? Abundance ("more of it" or "presence") or deficiency ("less of it" or "absence")? What thresholds can I use to determine this? What are the basic class-specific feature stats?
– Which features interact together?

• Can the explainable ML model be presented in a simple, easy-to-understand summary for ML/domain experts and non-experts?

• Finally: evaluate whether the RFEX explainable model is helpful and intuitive to domain experts

• (Start with correlation not necessarily causality)

RFEX pipeline summary – general steps in providing more explainable RF

1. Establish base RF accuracy using all features (use the F1 score)
2. Rank features/variables (e.g. use MDA, and do it separately for the + and – class if the data is unbalanced) → reduce dimensionality
3. Provide tradeoffs between features used and accuracy (e.g. what accuracy can we get using only the top K ranked features) – see the sketch after this list. Then work only with the top N features (N is usually 2-5% of the total number of features for 90% of the original accuracy!!!)
4. Explain how the top N features are used by RF
– Determine class-specific feature stats, e.g. feature direction, namely abundance (more of it) or deficiency (less of it), or other feature statistics (AV/SD/RANGE)
– Determine which features interact with each other (MFI, correlation)
5. Create easy-to-use RFEX data and a one-page report
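A minimal sketch of steps 1-3, under the assumption of a NumPy feature matrix and scikit-learn (illustrative only, not the original RFEX implementation):

```python
# Sketch of RFEX steps 1-3: base F1, MDA-style ranking, top-K feature vs. accuracy tradeoff
# (assumed scikit-learn implementation; illustrative, not the original RFEX code)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score

def rfex_tradeoff(X, y, k_values=(2, 3, 5, 10, 20)):
    rf = RandomForestClassifier(n_estimators=500, random_state=0)

    # Step 1: base RF accuracy (F1) using all features, 5-fold cross-validation
    base_f1 = cross_val_score(rf, X, y, cv=5, scoring="f1").mean()

    # Step 2: rank features by permutation (MDA-style) importance
    rf.fit(X, y)
    imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0).importances_mean
    ranked = np.argsort(imp)[::-1]

    # Step 3: tradeoff - F1 (mean and SD over CV folds) using only the top-K ranked features
    tradeoff = {}
    for k in k_values:
        scores = cross_val_score(rf, X[:, ranked[:k]], y, cv=5, scoring="f1")
        tradeoff[k] = (scores.mean(), scores.std())
    return base_f1, ranked, tradeoff
```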

From AWS case study of Stanford-SFSU collaboration

Stanford FEATURE


STANFORD FEATURE DATA

Note the unbalanced training data (many fewer positive samples). Our previous work achieved good RF prediction but we were not sure why!
K. Okada, L. Flores, M. Wong, D. Petkovic, "Microenvironment-Based Protein Function Analysis by Random Forest", Proc. ICPR – International Conference on Pattern Recognition, Stockholm, 2014

F1 score: main accuracy measure


New RFEX measures to explain how features are used by RF: Feature Direction and Mutual Feature Interaction

• Feature Direction – DIR(I) = + (n) or – (n): the fraction of times (n) that feature I was above (+, abundance) or below (–, deficiency) its threshold when making a correct prediction, over all trees in the forest making a correct prediction and all test samples

• Mutual Feature Interaction MFI(I,J) for features I and J – the count of times features I and J appear on the same tree path making a correct prediction, over all trees in the RF ensemble and all test samples.
– Note that MFI only measures statistical pair-wise feature co-occurrence and not necessarily causality.
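One possible way to count these two measures over a trained scikit-learn forest is sketched below; the tree-traversal details and the 0/1 label assumption are illustrative, not the original RFEX implementation.

```python
# Sketch of counting Feature Direction (DIR) and Mutual Feature Interaction (MFI)
# (assumed scikit-learn tree internals; illustrative, not the original RFEX code)
import numpy as np
from itertools import combinations
from collections import Counter

def direction_and_mfi(rf, X_test, y_test):               # assumes labels encoded as 0/1
    n_features = X_test.shape[1]
    above = np.zeros(n_features)     # feature above its split threshold on a correct path
    below = np.zeros(n_features)     # feature at or below its split threshold
    mfi = Counter()                  # co-occurrences of feature pairs on the same correct path

    for tree in rf.estimators_:
        correct = tree.predict(X_test) == y_test          # keep only correct predictions
        idx = np.where(correct)[0]
        paths = tree.decision_path(X_test[correct])       # sparse node-indicator matrix (CSR)
        feats, thresh = tree.tree_.feature, tree.tree_.threshold
        for row, i in enumerate(idx):
            nodes = paths.indices[paths.indptr[row]:paths.indptr[row + 1]]
            used = [n for n in nodes if feats[n] >= 0]    # internal (split) nodes only
            for n in used:
                f = feats[n]
                above[f] += X_test[i, f] > thresh[n]
                below[f] += X_test[i, f] <= thresh[n]
            for f1, f2 in combinations(sorted({int(feats[n]) for n in used}), 2):
                mfi[(f1, f2)] += 1                        # pairwise co-occurrence, not causality

    direction = above / np.maximum(above + below, 1)      # fraction of "abundance" (+) uses
    return direction, mfi
```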

Ranking of top 20 features by MDA+ and MDA– for ASP_PROTEASE.4.ASP.OD1 (ranking for + class; ranking for – class)
Observations: some top features overlap in + vs. – classification; all directions are very consistent (high %); if features overlap, their direction is opposite.

Model Performance using a subset of features

We varied the number of features used to train our RF model from 2 to 20 and plotted the f-score for each trained model to show how model performance varied as we increased the number of features used.

Observations:

• RF classifiers trained on just a few (between 10 and 20) features performed very close to the RF using all 480 features

• Some of our models perform well even with just 2 or 3 features; others showed a steeper drop-off

• Standard deviation over CV trials is small

[Chart: average f-score per CV fold and its SD vs. number of top-ranked features used; reference line shows the f-score using all 480 features]


Trade-offs in using subset of top ranked features vs. accuracy

These charts reveal that one needs very few (from 2 to 6, depending on the model) top-ranked features to achieve 90% or better of the accuracy obtained when all 480 features are used.

RFEX One Page Summary report

Classic way of presenting RF accuracy


But what do real users think, especially non-AI experts?

• Non-expert user studies are seldom done in the research community (but non-expert users are often the key adopters!)

• We involved 13 users with various AI and problem domain experience

• Measured formally how RFEX increases user confidence vs. using only traditional RF results


RFEX Usability review – anonymous survey of 13 expert and non-expert users

Measure how RFEX increases user confidence vs. using only traditional RF results

(1 low … 5 high)


SETAP
• San Francisco State University
• University of Applied Science, Fulda
• Florida Atlantic University

SW Engineering Teamwork Assessment and Prediction

Original work supported by NSF TUES grant #1140172

SETAP – assessment of teaching teamwork in SW Engineering (SE) using Machine Learning

• Use ML to predict student teams that will fail – Do it early so interventions can be made on time

• ML training database: only objective and quantitative measures – time spent on activities, how often late, how many instructor interventions, attendance… paired with the instructors' grade of team success (ML classes A and F)

• Training data is collected from 74 student teams from joint SFSU, Fulda, FAU SE classes from Fall 2012 through Fall 2015 - 383 Students and 18 class sections, with over 400 data points collected for each team.

• ML technology: RF and RFEX


RFEX Summary of SETAP RF classification


TAM feature name | MDA value | F1 score using top K TAM features | AV/SD for all teams rated F | AV/SD for all teams rated A
lateIssueCount | 18.6 | N/A | 0.68/0.7 | 0.06/0.2
issueCount | 6.5 | 0.375 | 2/1.7 | 0.8/0.8
helpHoursStandardDeviation | 6.1 | 0.42 | 0.95/0.62 | 1.12/0.6
standardDeviationMeetingHoursAverageByWeek | 4.6 | 0.44 | 1.11/1.3 | 0.95/0.5
standardDeviationHelpHoursTotalByWeek | 4.4 | 0.48 | 1.7/0.97 | 2.3/1.12
codingDeliverablesHoursAverage | 4.2 | 0.52 | 1.18/0.9 | 1.9/0.9
averageNonCodingDeliverablesHoursAverageByWeek | 3.8 | 0.55 | 2.2/0.95 | 1.88/0.6
averageCodingDeliverablesHoursAverageByWeek | 2.9 | 0.60 | 1.3/1.06 | 1.5/0.9
averageResponsesByWeek | 2.91 | 0.63 | 4.96/2.0 | 5.6/1.67
averageCodingDeliverablesHoursAverageByStudent | 2.5 | 0.63 | 1.23/1.0 | 1.5/0.96

TAM feature name | MDA value | F1 score using top K TAM features | AV/SD for all teams rated F | AV/SD for all teams rated A
uniqueCommitMessageCount | 9.9 | N/A | 72.3/39.2 | 114.4/95.
averageUniqueCommitMessageCountByWeek | 9.5 | 0.4 | 18.5/9.3 | 28.4/17
commitMessageLengthTotal | 4.73 | 0.42 | 5698.4/3590.3 | 7329.7/5741.4
standardDeviationInPersonMeetingHoursAverageByWeek | 4.72 | 0.53 | 1.02/0.56 | 0.73/0.6
standardDeviationCodingDeliverablesHoursTotalByWeek | 4.5 | 0.55 | 10.9/7.23 | 7.22/6.6
standardDeviationCodingDeliverablesHoursAverageByWeek | 4.2 | 0.58 | 2.3/1.33 | 1.5/1.25
standardDeviationUniqueCommitMessagePercentByWeek | 4.05 | 0.61 | 0.14/0.08 | 0.10/0.07
meetingHoursTotal | 3.6 | 0.68 | 44.2/23.8 | 39/27.42
standardDeviationNonCodingDeliverablesHoursAverageByStudent | 2.95 | 0.69 | 0.7/0.4 | 0.85/0.6
averageCodingDeliverablesHoursTotalByStudent | 2.93 | 0.698 | 13.5/9.0 | 10.9/7.9

One Page RFEX Summary for SE Process during Time interval T2

One Page RFEX Summary for SE Product during Time interval T3

What did we learn from RFEX that helps educators in teamwork failure prediction

• We use RFEX summary tables to generate recommendations for faculty to catch teams bound to fail

• Factors indicating possible failure include:
– Teams who are late on delivery of any requested items, especially early in the class
– Teams start coding late after the initial design
– Teams do not use meaningful, unique and rich enough comments on code commits and submissions
– Code submission levels differ widely among team members

• Also: provide ample coaching to global teams, since even global teams that work well seem to need more attention.


Future work on RFEX

• Improve RFEX

• Create RFEX sample explainability (explaining decisions on specific samples)

• Provide RFEX toolkit (e.g. based on Jupyter http://jupyter.org/ )


How to make AI more ethical and explainable – good for humanity

• Technical solutions are hard but have to be tried

– Improve explainability of existing AI algorithms

– Provide tools to detect and correct bias in algorithms and training databases

• Education: academia, government, industry, public and future leaders

• Legal and legislative instruments and laws

• Certification of developers and algorithms and databases?


Future researchers, decision makers and adopters – please be aware of your responsibility to make AI a force for good!!!

Acknowledgements

• Prof. Russ Altman, Stanford

• Prof. Les Kobzik, Harvard

• Dr. Reza Ganaghan, Google

• Mike Wong, SFSU CCLS

• SFSU Grad students: A. Vigil, S. Barlaskar, J. Young

• Faculty working with us on the AI Ethics certificate: Profs. J. Tiwald, C. Montemayor (SFSU Philosophy), H. Yue (SFSU CS Department), D. Kleinrichert (SFSU School of Business)

• NIH, NSF funding


Thank You
