Designed by Alan Winfield
Self-Awareness in Autonomic Systems
Safety and Ethics
Outline
• The problem of safety in autonomic systems
– and why we need a radical new approach
• The problem of ethics in autonomic systems
– using robots as an example
• Self-awareness might provide a powerful means for building safe and ethical autonomic systems
The safety problem 1
• For any engineered system to be trusted, it must be safe
– We already have many examples of complex engineered systems that are trusted; passenger airliners, for instance
– These systems are trusted because they are designed, built, verified and operated to very stringent design and safety standards
– The same will need to apply to autonomous systems
The safety problem 2
• The problem of safe autonomous systems in unstructured or unpredictable environments, i.e.
– robots designed to share human workspaces and physically interact with humans must be safe,
– yet guaranteeing safe behaviour is extremely difficult because the robot’s human-centred working environment is, by definition, unpredictable
– it becomes even more difficult if the robot is also capable of learning or adaptation
The ethical problem
• Use autonomous robots as a case study
– Four ethical problems
– Asimov’s three laws of robotics
– Asimov revised: 5 ethics for roboticists
– But could robots themselves be ethical..?
Four ethical problems
• The problem of autonomous robots that pull the trigger
• The problem of robots that induce an emotional reaction, or dependency
• The problem of humanoid robots that appear to be intelligent but are not
• The problem of who is responsible when a robot causes harm
Asimov’s three laws of robotics
1. a robot may not injure a human being or, through inaction, allow a human being to come to harm;
2. a robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law; and
3. a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
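The three laws form a strict priority order: each law yields to the ones before it. A minimal sketch of that precedence as a decision rule, where the predicate functions are hypothetical placeholders for the (very hard) perception and prediction machinery the laws quietly presuppose:

```python
# Sketch of Asimov's three laws as a priority-ordered permission check.
# The predicates (injures_human, ordered_by_human, endangers_self) are
# hypothetical placeholders, not a real perception system.

def permitted(action, injures_human, ordered_by_human, endangers_self):
    if injures_human(action):           # First Law: never harm a human
        return False
    if ordered_by_human(action):        # Second Law: obey, unless Law 1 vetoes
        return True
    return not endangers_self(action)   # Third Law: self-protection comes last
```

Note that "through inaction, allow a human being to come to harm" means `injures_human` would also have to flag standing still when a human is in danger — inaction counts as an action here.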
Asimov revised: 5 ethics for roboticists
1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
2. Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws & fundamental rights & freedoms, including privacy.
3. Robots are products. They should be designed using processes which assure their safety and security.
4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5. The person with legal responsibility for a robot should be attributed.
Draft ethical principles proposed by UK EPSRC/AHRC working group on robot ethics, September 2010:
http://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/Pages/principlesofrobotics.aspx
But could a robot be ethical?
• An ethical robot would require:
– The ability to predict the consequences of its own actions (or inaction)
– A set of ethical rules against which to test each possible action/consequence, so it can choose the most ethical action
– New legal status..?
Using internal models
• Internal models might provide a level of functional self-awareness
– sufficient to allow a robot to ask what-if questions about the consequences of its next possible actions
– the same internal modelling architecture could conceivably embody both safety and ethical rules
– See slide set 12 Systems with Internal Models
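Such a what-if loop can be sketched in a few lines, assuming a `simulate()` function that runs the internal model forward one step and a set of rules that score each predicted outcome. All the names and the toy one-dimensional world below are illustrative assumptions, not the architecture from the slide set:

```python
# What-if loop over an internal model: simulate every candidate action,
# score the predicted outcome against safety/ethical rules, and pick
# the action with the least total violation. Purely illustrative.

def what_if(world, candidate_actions, simulate, rules):
    def violation(action):
        predicted = simulate(world, action)
        return sum(rule(predicted) for rule in rules)  # 0 = no violation
    return min(candidate_actions, key=violation)

# Toy usage: a 1-D corridor; the single rule penalises closing in on a human
def step_model(world, step):
    return {**world, "robot": world["robot"] + step}

too_close = lambda w: 1.0 if abs(w["robot"] - w["human"]) < 2 else 0.0

safest = what_if({"robot": 0, "human": 2}, [-1, 0, +1], step_model, [too_close])
```

The point of the sketch is that safety rules and ethical rules plug into the same `rules` list — one modelling architecture, two kinds of constraint.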
A thought experiment
Consider a robot that has four possible next actions:
1. turn left
2. move ahead
3. turn right
4. stand still
Which action would lead to the least harm to the human?
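One way to sketch the decision: the internal model attaches a predicted-harm score to each candidate action, and the robot simply picks the minimum. The scores below are made-up illustrations of what such a model might output for one particular scene:

```python
# Thought experiment as a least-harm selection. The harm scores are
# assumed values an internal model might predict for this scene.

predicted_harm = {
    "turn left": 0.2,    # assumed: steers away from the human
    "move ahead": 0.9,   # assumed: collision course
    "turn right": 0.5,   # assumed: partial risk
    "stand still": 0.4,  # assumed: inaction can still allow harm
}

# Choose the action whose predicted consequence harms the human least
least_harm_action = min(predicted_harm, key=predicted_harm.get)
# → "turn left" under these assumed scores
```

Note that "stand still" gets a non-zero score: in a least-harm framework, inaction is just another action to be evaluated.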
In conclusion
• I strongly suspect that internal models might prove to be the only way to guarantee safety in robots, and by extension autonomous systems, in unknown and unpredictable environments
– and just maybe provide ethical behaviours too
http://alanwinfield.blogspot.com/
References
• Woodman R, Winfield AFT, Harper C and Fraser M, Building Safer Robots: Safety Driven Control, International Journal of Robotics Research. 31 (13), 1603-1626, 2012.
• Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2008
• M. Anderson and S. L. Anderson. Machine Ethics. Cambridge University Press, 2011
• Royal Academy of Engineering, Autonomous Systems: Social, Legal and Ethical Issues, August 2009
– http://www.raeng.org.uk/societygov/engineeringethics/pdf/Autonomous_Systems_Report_09.pdf
• Draft ethical principles proposed by EPSRC/AHRC working group on robot ethics, September 2010
– http://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/Pages/principlesofrobotics.aspx