
The legal implications of the actions of robots endowed with artificial intelligence are currently under discussion at the European Parliament. In this opinion piece, Orsolya Zara, legal and policy advisor to an MEP at the European Parliament in Brussels, provides some insights into the changes pertaining to robot liability that may need to be implemented in civil law.

Published in EuroScientist via SciencePOD.

Robo sapiens: a new legal person on the horizon?

Reframing the law to account for the responsibility of robots that can make their own decisions

The emergence of a new generation of robots capable of learning on the fly, based on experience gained from their environment, brings a fresh set of challenges to issues of responsibility. Examples of autonomous robots include those embedded in a self-driving car or an algorithm concluding business deals on the stock market. This article examines the legal conditions under which autonomous robots can be considered liable for the consequences of their actions when they have caused damage. We will also explore the consequences of a shift in how the law attributes a legal status, not so distant from that of legal persons, to this new breed of autonomous robots.

Responsibility from autonomy

Autonomous robots have been a focus of interest since 2013 for CERNA, the French commission for reflection on research ethics in digital science and technology, as well as for some private companies. What has attracted the attention of legal experts is their very nature: their autonomy.

Indeed, their ability to adapt to new inputs and to act independently, without any external control or intervention, means that they could potentially cause damage and perhaps even be considered liable for it. Legal experts have been debating what happens if a robot takes a decision that causes harm as a consequence of its own learning process having modified its pre-programmed commands.

Legislators need to decide whether such robots have non-contractual or even contractual liability. Typically, such liability issues are addressed under well-established legislation covering product safety, consumer rights and liability for defective products.

Legislative gap

Existing legislation currently fails to address the issue of autonomous robot liability. Let's take the case of a robot compliant with all safety regulations when put into circulation. This robot could make unforeseen decisions causing damage, as a result of an autonomous learning and adaptation process. So should the decision that led to the damage be considered wrong?

In the scenario where the robot's decision is wrong, legal experts may seek to attribute responsibility for the damage. They may therefore need to consider whether the robot, as a product itself, was consequently defective. They would also need to establish whether the state of scientific and technical knowledge at the time of the product's release was sufficient to identify such a defect. In the case of autonomous robots, they would only be regarded as defective if they were unable to learn, not if they make a damaging autonomous decision.

In addition, once the robot is put into circulation, the liability of a producer or programmer could only be proved in exceptional cases. Indeed, a normally functioning robot would be unpredictable. Alternatively, the producer could enjoy a form of immunity similar to the protection granted to firearm manufacturers in the United States.

Robot liability

All these questions point to the need for new legal regulations. Such new legislation might lead to the full or partial direct liability of the robot for its own acts or omissions. It might sound far-fetched, but it is no longer the territory of sci-fi writers.

To establish a robot's liability, we need to find the answers to a series of questions. First: does an autonomous robot have legal capacity? After all, it is able to engage in transactions, handle the business of its owner and maintain particular relationships with others. If we assume that an autonomous robot has a certain legal capacity, can it acquire rights and undertake obligations? Can it conclude contracts? Going one step further, can an independently acting robot be sued?

Furthermore, assuming that they possess a degree of self-determination, shall we treat these robots as legal persons? Or do we need to create a special legal e-person status for robots? What would be the extent of such an e-person's rights and obligations? How do we distinguish an e-person's limited liability from unlimited liability? Will an e-person be authorised to instigate court proceedings? Can it be sued if it causes damage to a third person by acting autonomously, should no natural person be found at the end of the chain of liability?

Liability cover

A possible solution could be a compulsory insurance scheme for robots, similar to our car insurance. However, obtaining compensation for punitive damages could remain too complicated or too costly within the framework of an insurance scheme. Therefore, a different solution could involve the creation of a special compensation fund for people affected by robot-induced damage. The advantage of this solution is that it removes the need to establish the fault or liability of the autonomous robot, as well as the need for robot insurance. The fact that the robot caused damage creates a sufficient basis for indemnification in itself.

But who will pay into this compensation fund? Those who have an economic interest in the functioning of such robots would be the primary contributors. The economic interest in robots is not limited to those who manufacture, programme, sell or use them. Indeed, everyone is likely to enjoy the benefits of robotics, both on a private and on a societal level. Therefore, paying into a compensation fund is in the interest of all and the money could be raised as a new tax.

Yet a scenario where robots themselves pay into the fund also seems possible. Think, for example, about driverless taxis that might transfer the fare, or part of it, into the fund. Any part of the fund not used for compensation payments could be re-invested in research and development. This would encourage manufacturers to develop safer robots and help spread their use in further areas.

The European Parliament is currently working on answering these kinds of questions. EuroScientist readers are invited to join the discussion of these highly important legal issues.

Orsolya Zara

Orsolya is the legal and policy advisor to an MEP at the European Parliament, Brussels. She is also a member of the Association on the Rights of Robots (Association du droit des robots, ADDR), based in Paris, France.

Photo credit: Jiuguang Wang (CC BY-SA 2.0)