RESPONSIBLE ARTIFICIAL INTELLIGENCE
Prof. Dr. Virginia Dignum
Chair of Social and Ethical AI - Department of Computer Science
Email: [email protected] - Twitter: @vdignum
• A technology
  o Currently mostly pattern matching (stochastic, non-deterministic)
• A next step in digitization
  o Everything is AI?! (problem solving, robots, IoT, cyber…)
• A field of science
  o The study of methods for making computers behave intelligently
  o Modelling intelligence as a means to understand intelligence
• An entity
  o Magic, all-knowing, all-powerful
WHAT IS AI?
• Not just algorithms
• Not just machine learning
• Properties
  o Adaptable behaviour
  o Interactive
  o Autonomous
• But: AI applications are not alone
  o Socio-technical AI systems
WHAT IS AI?
[Diagram: an autonomous AI system embedded within a socio-technical AI system]
• What AI systems cannot do (yet)
  o Common sense reasoning: understand context, understand meaning
  o Learning from few examples
  o Learning general concepts
  o Combining learning and reasoning
• What AI systems can do (well)
  o Identify patterns in data: images, text, video
  o Extrapolate those patterns to new data
  o Take actions based on those patterns
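The "identify patterns and extrapolate" point can be made concrete with a toy 1-nearest-neighbour classifier. This is a hypothetical illustration; the labels and data points are invented for this example:

```python
# Minimal 1-nearest-neighbour classifier: "identify patterns in data
# and extrapolate those patterns to new data". All data is invented.

def nearest_neighbour(train, query):
    """Return the label of the training point closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: dist(item[0], query))[1]

# "Training" data: (features, label) pairs forming two clusters
train = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

print(nearest_neighbour(train, (1.1, 1.0)))  # prints cat
print(nearest_neighbour(train, (5.1, 4.9)))  # prints dog
```

The classifier matches patterns but has no notion of what a "cat" is: pattern matching, not understanding.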
AI IS NOT INTELLIGENCE!
RESPONSIBLE AI
• AI can potentially do a lot. Should it?
• Who should decide?
• Which values should be considered? Whose values?
• How do we deal with dilemmas?
• How should values be prioritized?
• ...

Essential question: should we use AI for this?
Responsible AI is
• Lawful
• Ethical
• Reliable
• Beneficial
• Verifiable
Responsible AI recognises that
• AI systems are artefacts
• We set the purpose
AI AND ETHICS - SOME CASES
• Chatbots
  o Mistaken identity (is it a person or a bot?)
  o Manipulation of emotions / nudging / behaviour-change support
• Automated manufacturing
  o Efficiency/profit versus jobs
  o Training and education (for people!)
• Self-driving cars
  o Who is responsible for an accident caused by a self-driving car?
  o Which decisions can a car make? What do these decisions mean?
PRINCIPLES AND GUIDELINES
https://ethicsinaction.ieee.org
https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence
https://www.oecd.org/going-digital/ai/principles/

Responsible / Ethical / Trustworthy....
• Strategies / positions
  o IEEE Ethically Aligned Design
  o European Union
  o OECD
  o WEF
  o Council of Europe
  o National strategies: Tim Dutton, https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd
  o ...
• Declarations
  o Asilomar
  o Montreal
  o ...
MANY INITIATIVES (AND COUNTING...)
Check Alan Winfield's blog: http://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html
EU HLEG:
• Human agency and oversight
• Technical robustness and safety
• Privacy and data governance
• Transparency
• Diversity, non-discrimination and fairness
• Societal and environmental well-being
• Accountability

OECD:
• Benefit people and the planet
• Respect the rule of law, human rights, democratic values and diversity
• Include appropriate safeguards (e.g. human intervention) to ensure a fair and just society
• Transparency and responsible disclosure
• Robust, secure and safe
• Hold organisations and individuals accountable for the proper functioning of AI

IEEE EAD:
• How can we ensure that A/IS do not infringe human rights?
• What is the effect of A/IS technologies on human well-being?
• How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable?
• How can we ensure that A/IS are transparent?
• How can we extend the benefits and minimize the risks of A/IS technology being misused?
BUT ENDORSEMENT IS NOT (YET) COMPLIANCE
Regulation / Standards / Observatory
The promise of AI: better decisions
HOW DO WE MAKE DECISIONS?
WHICH DECISIONS SHOULD AI MAKE?
• Value alignment
  o Aligning the values (objectives) of machines with those of the user
  o 'Easy' for one user, hard for multiple users / societal settings
• Asimov's laws
  o No room for uncertainty / trade-offs
• Artificial moral agents
  o Ethical agents have responsibilities, while ethical patients have rights because harm to them matters
  o Robot personhood (EP) vs. AI as ethical agent but not patient
Decisions for the benefit of the whole society?
FROM INDIVIDUAL TO SOCIAL
HOW DO WE MAKE DECISIONS TOGETHER?
DESIGN IMPACTS DECISIONS IMPACTS SOCIETY
• Choices
• Formulation
• Involvement
• Legitimacy
• Voting system
email: [email protected] Twitter: @vdignum
HOW SHOULD AI MAKE DECISIONS?
TAKING RESPONSIBILITY
• in Design
  o Ensuring that development processes take into account the ethical and societal implications of AI and its role in socio-technical environments
• by Design
  o Integration of ethical reasoning abilities as part of the behaviour of artificial autonomous systems
• for Design(ers)
  o Research integrity of stakeholders (researchers, developers, manufacturers, ...) and of institutions to ensure regulation and certification mechanisms
IN DESIGN: PROCESS
• Doing the right thing
• Doing it right
• AI needs ART
  o Accountability
  o Responsibility
  o Transparency
TAKING RESPONSIBILITY: ART
[Diagram: responsibility and autonomy of the AI system within the socio-technical AI system]
• AI systems (will) take decisions that have ethical grounds and consequences
• Many options, not one ‘right’ choice
ETHICS IN DESIGN: AI – DOING IT RIGHT

• Principles for Responsible AI = ART
  o Accountability
     Explanation and justification
     Design for values
  o Responsibility
     Autonomy
     Chain of responsible actors
     Human-like AI
  o Transparency
     Data
     Algorithms
     Processes, choices and decisions

• Optimal AI is explainable AI
[Images: Watson and AlphaGo]
CONCERN: EXPLAINABLE AI
[Diagram: user and AI system, linked by sensemaking and operations]
• Why did you do that?
• Why not something else?
• When do you succeed?
• When do you fail?
• When can I trust you?
• How do I correct an error?
• Machine learning is currently the core technology
• Machine learning models are opaque, non-intuitive, and difficult for people to understand
WHAT IS AN EXPLANATION?
• Correct
• Comprehensible
• Timely
• Complete
• Parsimonious
NO AI WITHOUT EXPLANATION
• XAI is for the user:
  o Who depends on decisions, recommendations, or actions of the system
  o Just in time, clear, concise, understandable
• XAI is about:
  o Providing an explanation of individual decisions
  o Enabling understanding of overall strengths & weaknesses
  o Conveying an understanding of how the system will behave in the future
  o Conveying how to correct the system's mistakes
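One simple way to provide an explanation of an individual decision is local sensitivity analysis: perturb each input and report which single changes would flip the outcome. A minimal sketch; the loan rule, feature names, and numbers are invented for illustration, not taken from any real system:

```python
# Sketch of a contrastive, sensitivity-based explanation of a single
# decision from a (possibly black-box) decision function.
# The toy loan rule below is entirely made up.

def loan_decision(income, debt):
    return "approve" if income - 2 * debt > 10 else "reject"

def explain(decide, inputs, step=5):
    """Which single-feature changes would flip the decision?"""
    base = decide(**inputs)
    reasons = []
    for name, value in inputs.items():
        for delta in (-step, step):
            changed = dict(inputs, **{name: value + delta})
            if decide(**changed) != base:
                reasons.append(
                    f"{name}: {value} -> {value + delta} flips to {decide(**changed)}")
    return base, reasons

decision, reasons = explain(loan_decision, {"income": 18, "debt": 5})
print(decision)          # reject  (18 - 2*5 = 8, not > 10)
for r in reasons:        # income 18 -> 23 and debt 5 -> 0 both flip it
    print(r)
```

Explanations of this contrastive form answer exactly the user question "why not something else?".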
BY DESIGN: ARTIFICIAL AGENTS
Design for Values
• Can we teach ethics to AI?
• Should we teach ethics to AI?
  o What does that mean?
  o What is needed?
• Decisions matter
  o Our decisions matter
• Good results matter
  o What is good? Accurate / efficient / fair / sustainable?
• The real dilemmas
  o 95% accurate but not explainable, or 80% accurate but explainable?
  o Energy costs: more AI per joule!
VALUES AND DILEMMAS

• Security AND Privacy
• Efficiency AND Safety
• Accountability AND Confidentiality
• Prosperity AND Sustainability

Moral overload: we cannot have it all
• Still, we need to trust systems and check their compliance against our values.
ONE PROBLEM
[Diagram: a black box mapping input to output]

• Black boxes cannot always be avoided
• High-level values are dependent on the context.
• Values have different interpretations in different contexts and cultures.
• We need to make these choices explicit and contextual!
THE OTHER PROBLEM
• Doing the right thing
  o Elicit, define, agree, describe, report
• Doing it right
  o Explicit values, principles, interpretations, decisions
  o Evaluate input/output against principles
GLASS BOX APPROACH
• Domain-agnostic, to allow for adaptation to any application.
• Context-aware, to explicitly describe in which context a functionality relates to a value.
• Implementable, able to be encoded in a programming language.
• Computationally tractable, to allow for verification and monitoring in reasonable time.
ANDREAS THEODOROU | T: @RECKLESSCODING
FORMALISING THE GLASS BOX
Theorem: Answering the question "does A count as B in context c in the Glass Box?" is equivalent to checking whether the implication A → B holds propositionally under the assumptions of c.
FORMALISING THE GLASS BOX: VERIFICATION
• This means that we can reason about what holds in the Glass Box in reasonable time (well within the reach of SAT solvers and answer set programming approaches).
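As a sketch of that claim, the "does A count as B in context c?" check can be done by brute force over truth assignments (a tiny stand-in for a SAT solver). The propositional variables and the context assumption below are invented examples, not taken from the Glass Box papers:

```python
# Brute-force propositional check: does `formula` hold in every truth
# assignment that satisfies the `assumptions` of the context?
from itertools import product

def holds(var_names, formula, assumptions):
    """True iff `formula` is true in all assignments satisfying `assumptions`."""
    for values in product([False, True], repeat=len(var_names)):
        env = dict(zip(var_names, values))
        if all(a(env) for a in assumptions) and not formula(env):
            return False
    return True

# Variables: A = "data is anonymised", B = "privacy norm satisfied"
variables = ["A", "B"]
# Context c assumes: anonymised data satisfies the privacy norm (A -> B)
assumptions = [lambda e: (not e["A"]) or e["B"]]
# Query: does A count as B in c?  i.e. does A -> B hold under c?
print(holds(variables, lambda e: (not e["A"]) or e["B"], assumptions))  # True
# Without the context's assumptions, A -> B is not a tautology:
print(holds(variables, lambda e: (not e["A"]) or e["B"], []))           # False
```

Brute force is exponential in the number of variables; real verification would hand the same question to a SAT or ASP solver.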
DESIGN FOR VALUES METHODOLOGY
Values: what a person, or society, considers important (evaluative)
  ↓ interpretation
Norms: possible / accepted responses (measurable)
  ↓ concretization
Functionalities: resource- and context-dependent (implementable)
FROM VALUES TO NORMS
• Norms are based on reasons for action
  o Usually reasons for and against certain actions
• Norms are rules for action
  o One ought to do (prescription) what one has most reason to do, all things considered
• Norms are not always obligations/prohibitions but can also be recommendations
FROM NORMS TO FUNCTIONALITIES
• Context-dependent
• Requires more information
  o Scope of the norm
  o Specification of goals
  o Specification of means
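As an illustration of the scope/goals/means point, here is a sketch of turning one norm ("personal data ought to be anonymised") into checkable functionalities in two contexts. All context names, requirement kinds, and thresholds are assumptions made up for this example:

```python
# Operationalising a norm into functionalities: the same norm gets a
# different, measurable interpretation in each context (its scope).
# Every name and threshold here is illustrative.

NORMS = {
    # context: what the norm requires in that context
    "medical_research":  {"require": "k_anonymity", "k": 5},
    "public_statistics": {"require": "aggregation", "min_group": 10},
}

def check_norm(context, dataset):
    """Does `dataset` satisfy the anonymisation norm in `context`?"""
    norm = NORMS[context]
    if norm["require"] == "k_anonymity":
        return dataset["smallest_equivalence_class"] >= norm["k"]
    if norm["require"] == "aggregation":
        return dataset["smallest_group"] >= norm["min_group"]
    return False

print(check_norm("medical_research", {"smallest_equivalence_class": 7}))  # True
print(check_norm("public_statistics", {"smallest_group": 4}))             # False
```

The norm itself never changes; only its concretization into a measurable functionality does, which is why the translation is context-dependent.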
CONCERN: BIAS AND DISCRIMINATION
• Bias is inherent in human data: we need bias to make sense of the world
• But we don't want AI to be prejudiced!
• Unbiasing: are we creating new bias?
BIAS IS MORE THAN BIASED DATA
• Who is collecting the data?
• Whose data is being collected?
• Which data is collected?
  o Why don't we keep information about the colour of the socks of the data collector?
• How and by whom is the data labelled?
  o Amazon Mechanical Turk approaches
• What is the training data?
  o The cheapest and easiest to obtain?
DEALING WITH BIAS
• COMPAS – recidivism risk identification
• Is the algorithm fair to all groups?
[Confusion matrix: true label vs. predicted label]
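The fairness question can be made concrete by comparing error rates across groups, as in the COMPAS debate. A minimal sketch; the confusion-matrix counts below are invented and are NOT the COMPAS figures:

```python
# Compare false positive rates across groups: being wrongly flagged as
# high-risk (a false positive) harms the person concerned.
# All counts are made up for illustration.

def false_positive_rate(tp, fp, tn, fn):
    """Share of truly low-risk people who were wrongly flagged high-risk."""
    return fp / (fp + tn)

groups = {
    "group_a": dict(tp=50, fp=30, tn=70, fn=20),
    "group_b": dict(tp=50, fp=10, tn=90, fn=20),
}

for name, cm in groups.items():
    print(name, round(false_positive_rate(**cm), 2))
# Here group_a members are wrongly labelled high-risk three times as
# often as group_b members, even though one summary metric may look fine.
```

Which error rate to equalise is itself a value choice: several common fairness criteria cannot all be satisfied at once.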
CONFIRMATION BIAS
• Tendency to search for, interpret, favour, and recall information in a way that confirms one's pre-existing beliefs or hypotheses.
Predictive policing
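A toy simulation shows how confirmation bias can become a feedback loop in predictive policing: patrols go where past records show crime, and crime is only recorded where patrols go. The district names, rates, and counts are invented:

```python
# Feedback-loop sketch: two districts with IDENTICAL true crime rates,
# but a one-record difference in historical data. Patrols follow the
# records, and records are only generated where patrols are.
import random

random.seed(0)                       # deterministic toy run
TRUE_RATE = 0.3                      # same true crime rate in both districts
records = {"north": 5, "south": 4}   # slightly uneven historical records

for day in range(200):
    # Patrol the district with the most *recorded* crime...
    patrolled = max(records, key=records.get)
    # ...and crime is only recorded where we patrol.
    if random.random() < TRUE_RATE:
        records[patrolled] += 1

# "south" never gets patrolled again, so its recorded crime stays
# frozen at 4 while "north" keeps accumulating records.
print(records)
```

The data ends up confirming the initial belief: not because "north" has more crime, but because that is where we looked.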
GUIDELINES TO DEVELOP ALGORITHMS RESPONSIBLY
• Who will be affected?
• What are the decision criteria we are optimising for?
• How are these criteria justified?
• Are these justifications acceptable in the context we are designing for?
• How are we training our algorithms?
• Regulation
• Certification
• Standards
• Conduct
AI principles are principles for us
FOR DESIGN(ERS): PEOPLE
TRUSTWORTHY AI
CERTIFICATION FOR AI?
BENEFITS
Not innovation vs ethics but ethics as stepping stone for innovation
• Taking an ethical perspective
  o Business differentiation ("Ethics is the new green")
  o Certification to ensure public acceptance
• Principles and regulation are drivers for transformation
  o Better solutions
  o Return on investment
• 18% of researchers at conferences are women
• 80% of professors are men
• Workforce
  o Google: 2.5% black, 3.6% Latino, 10% women
  o Facebook: 3.8% black, 5% Latino, 15% women
CONCERN: WHO IS DEVELOPING AI?
• Design impacts decisions impacts society impacts design
• Regulation requires understanding what and why regulate
• AI systems are tools, artefacts made by people: We set the purpose
• AI can give answers, but we ask the questions
THANK YOU!
Twitter: @vdignum