ARTIFICIAL INTELLIGENCE, ROBOTICS AND AUTOMATION
THE BEST OR THE WORST THING EVER TO HAPPEN TO HUMANITY?



Contents

1 THE EDGE OF THE PRECIPICE

2 WHAT IS AI, RPA AND ROBOTICS?

3 TIME FOR A CONTRACT REFRESH



THE EDGE OF THE PRECIPICE

As Professor Stephen Hawking observed[1], we do not yet fully understand and cannot predict the true impact of AI. Yet the race towards business and operational transformation through digital technologies such as artificial intelligence (AI) and robotic process automation (RPA) continues its inexorable rise.

Whilst there may be some debate as to the socio-economic impact of the rise of the machines, and whether they will in time decimate the human race in the manner of a science-fiction disaster movie, for the time being their use is rather more prosaic. There is no doubt that AI and RPA are here to stay, and businesses, academic institutions and governments are being encouraged to develop their intelligence further. It is therefore essential to look to the intelligent future and work both to facilitate innovation, allowing businesses to embrace these technologies, and to mitigate the associated risks. In this paper we examine some of the business opportunities and challenges they present, and offer our insight on how to manage these issues both in strategic sourcing programmes and in transformative, technology-enabled projects.

[1] http://www.cam.ac.uk/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of



WHAT IS AI, RPA AND ROBOTICS?

There is much talk of AI, robotics and RPA, often on an almost interchangeable basis. In this paper, these terms have the following meanings:

Artificial intelligence (AI) – technically a field of computer science, and a phrase coined by John McCarthy in the 1950s, AI is the simulation of human intelligence by machines. It is often sub-divided into 'strong' and 'weak' AI: strong (or hard) AI is true human mimicry, often the focus of Hollywood, whereas weak (or soft) AI is more often focussed on a narrow task.

Machine learning – the ability of a machine to improve its future performance by analysing previous results. Machine learning is an application of AI.

Neural networks – an example of machine learning; a neural network is a connected network of many simple processors, modelled on the human brain.

Deep learning – a form of machine learning using multi-layered neural networks, loosely inspired by the structure and function of the human brain.

Heuristics – a 'rule of thumb', more akin to gut feeling (as opposed to an algorithm, which will guarantee an outcome), used in AI to solve problems quickly.

Robotic process automation (RPA) – the use of software to perform repeatable or clerical operations previously performed by a human.
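The 'connected network of many simple processors' idea can be made concrete with a minimal sketch: a single artificial neuron (a perceptron) that, in the machine-learning sense, improves its performance by analysing the error on its previous results. This is an illustrative toy only, not a production technique.

```python
# A single artificial neuron (perceptron): the simplest building block of a
# neural network. It computes a weighted sum of its inputs, applies a
# threshold, and adjusts its weights using the error on previous results.

def train_perceptron(examples, epochs=10, lr=0.1):
    """Learn weights from labelled examples of (inputs, expected_output)."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, expected in examples:
            # The 'simple processor': weighted sum, then a threshold.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            predicted = 1 if activation > 0 else 0
            # Machine learning: nudge the weights by the prediction error.
            error = expected - predicted
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Teach the neuron the logical AND function purely from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

Real neural networks connect many such units in layers, but the principle is the same: behaviour is learned from data rather than explicitly programmed.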

In spite of the hype surrounding RPA, it is important to note that it will not do much by itself 'out of the box'. It needs to be taught, and it will continue to learn both before and after deployment, as indicated in the diagram below. The use of RPA therefore carries an investment cost and a time requirement that should be borne in mind when considering when the issues set out here are likely to manifest. It also underlines that obtaining and maintaining the full potential benefits of RPA requires a relatively long-term investment.

[Diagram: training and live use of a machine learning diagnostic system. In the training phase, the system starts with no data and is trained on initial data: a patient presents with symptoms, a CT scan is carried out, and the results of testing are fed into the system, which is repeatedly tested on new data, with its accuracy level rising (e.g. from 50% to 63% to 74%). In the live use phase it answers questions such as "Does this scan indicate cancerous growth?". By comparison, a human is trained via medical school and on-the-job experience, and after 10+ years of practice has seen several thousand scans; the machine combines results from potentially hundreds of thousands of scans, including input from edge cases seen by the best diagnosticians.]


TIME FOR A CONTRACT REFRESH

The potential benefits of implementing AI or RPA within a business can be significant, and even transformative for the commercial well-being of the company – so long as it is set up to succeed. The use of AI and RPA, particularly in outsourcing deals, can give rise to a number of novel and differently nuanced issues which, if not addressed at the outset, could create significant problems for the future.

SERVICE LEVELS AND FAILURE

Broadly speaking, current service level models are devised to incentivise suppliers to avoid 'low grade' issues that might arise if staff do not follow proper processes. This is because human beings are by definition fallible, and will be more or less efficient depending upon a large number of factors.

This is not the case with AI-based services, which do not (or should not) suffer from the challenges that typically give rise to human error. Accordingly, it is not unreasonable to expect improved service levels for processes supported by RPA; indeed, this will often be perceived as one of the key reasons for implementing RPA in the first place.

The flip side is that if RPA failures do occur, there is a far greater risk that they will be catastrophic rather than minor. AI-based systems tend to work at a demonstrable accuracy level or, if that level cannot be achieved, to fail significantly below the relevant standard; it is far less likely that such systems will degrade by small margins in the way that human-provided services might. When a defect or error does occur, it is more likely than a human error to be repeated and to go unseen, because it will have been 'programmed' into the RPA solution and so becomes part of the norm. Only continued oversight and management of the solution will enable such errors to be recognised, unless the RPA can recognise its own errors.

CONFIDENTIALITY AND IP

Software has been writing software without human intervention for some time. Who owns the resulting new code? Similarly, valuable derived data from huge raw data sets may be sold in much the same way as market data. Who owns a new derived data set created by the machine?

Clearly, any applicable agreement will need to include terms that deal with these issues. The key is understanding the different outputs that are likely to be created as a consequence of deploying the AI or RPA technology. It will be important to rethink the relevant provisions insofar as they relate to matters such as configurations, outputs that reflect or are a manifestation of business rules, and templates generated by the AI or RPA.

Two issues in particular may require different treatment: background IP and know-how provisions.

It is not unusual for customers to agree that modifications or enhancements to the supplier's background IP are owned by the supplier, often on the basis that they are worthless without the underlying product. In AI or RPA deployments, however, this category of IP can have its own intrinsic value that the customer ought to consider before letting it go: for example, because if used by the supplier or a third party it would allow that entity to replicate the customer's business practices (potentially even more efficiently than the customer itself), or because it is something the customer will need to continue to own on account of its value to the business.



Similarly, most customers will agree a know-how clause permitting the supplier to use the knowledge it gains in the course of providing the services. This ought to be reconsidered, because that knowledge may now be acquired not by humans but by machines and software, opening up the possibility of the supplier re-using material and knowledge that the customer believed to be protected.

AUDIT AND TECHNOLOGY

Customers often ask for audit rights – especially in particular sectors such as financial services where a regulated entity is required to ensure appropriate audit rights and may incur substantial sanctions from its regulators if it cannot audit and monitor the work of its service providers.

Such monitoring is easier within the traditional sourcing environment, where a supplier can be audited mainly through a review of documents, reports and procedures; any work done by a human can be checked by another human relatively easily. In the new context of AI and RPA, it is more difficult to work out how the AI system is working (and evolving) throughout the service.

If a machine learning-based system has formulated its own pattern-matching approaches to determine the probability of a given action being the correct response to particular inputs, human auditors will not necessarily be able to derive or follow the underlying logic and reassure themselves in the same way that they might be able to by interviewing workers to check their level of training and competency. It may well be that instead of the traditional accountants and audit professionals, additional forensic IT experts should be added to the team that performs the audit.

HR, REDUNDANCIES AND KNOWLEDGE TRANSFER

Those implementing AI and RPA clearly need to understand the HR consequences. Transformational programmes will need to address process risks such as collective consultation requirements, where failure could delay progress or give rise to significant financial penalties. Equally, potential redundancies will undoubtedly be a sensitive issue, as well as potentially triggering severance payments. Roles newly created on the back of change may give rise to redeployment and retraining obligations for those displaced, and both remuneration design and representation structures may also be affected.

A particular challenge will be understanding the impact of AI and RPA on the workforce sufficiently to identify legal obligations and not fall foul of timing issues by failing to comply with any obligations in the required timescales, for example collective consultation processes or filing notification of redundancies with competent authorities. Another difficulty where there is a proposed outsourcing and transformation will be understanding whether or not automatic transfer rules apply such as those under TUPE/ARD or similar legislation, and who has the ability to effect redundancies pre or post transfer. This will involve asking important questions around exactly when and how transformation will impact employees, and navigating the legal constraints accordingly.



[Diagram: two models compared. In the normal transfer-in/transfer-out TUPE model for outsourced services, the customer's employees transfer to the supplier under TUPE at the service start date, and the supplier's employees transfer back to the customer (or to a replacement supplier) under TUPE at the service end date. In AI-based service provision – transfer-in, gradual redundancy – the customer's employees transfer to the supplier under TUPE at the service start date, but headcount is then gradually reduced, raising the question at the end of the term: what IP/knowledge does the customer get?]

It is accepted practice, where TUPE/ARD or similar legislation applies, that offer and acceptance arrangements are used to re-engage the employees involved in providing a given service that is to be outsourced, and that these employees may transfer to the supplier upon the commencement of service provision.

Generally, a customer that transfers its employees to the supplier may expect those of the supplier's employees (or at least a skilled and knowledgeable subset of them) who were providing the services to transfer either back to the customer or onward to a replacement supplier when the services terminate. From the customer's perspective, this is aimed at ensuring that it can continue the services directly (or with third parties) to the same standards and with the benefit of the relevant know-how, as well as not saddling the supplier with staff it no longer requires and the associated workforce restructuring issues.

Where RPA or AI is involved in the service provision, some or possibly all of the employees previously providing the services within the customer organisation may have become redundant during the period of service supply as a result of the deployment of RPA or AI. It follows that there may be few, if any, employees to transfer back to the customer or onward to a new supplier, with a resulting loss of know-how transfer.

Upon contract termination, if the AI system is licensed software, it may well remain with the supplier, along with the experience and machine learning that it has developed during the provision of the services. In that context, it is important to address in the contract how information can be exported and re-imported into a new AI system so as to accelerate its period of learning. Exit provisions are accordingly becoming more relevant, and also need to address who will own, or have rights to use, the IPR in the tool itself, at least insofar as it represents a reflection of the customer's activities and operations.



LIABILITY

At present, heads of uncapped loss are negotiated assuming failure modes that we have seen in other contracts where the work is done by humans. However, if a substantial portion of the work is to be undertaken using artificial intelligence, the most likely failure modes will be different, and the traditional liability positions take on a new significance.

Where AI is undertaking an increasing share of the work, with humans checking only a small portion of its output, errors might accumulate more rapidly and be caught less frequently. Similarly, whilst a machine might generally work more quickly than a human workforce, and work twenty-four hours a day instead of eight-hour shifts, the resilience of the machine needs to be considered. If it goes down, that is the equivalent of every person in a human workforce not turning up: no work gets done. This makes low-level failure – the type that a service level and service credit regime in a contract might be designed to avoid – less likely, and catastrophic failure a bigger issue.

In addition, depending on the nature of the system and its ability to back up its 'experience' in the form of stored patterns for processing the work, if those patterns are lost after the human workforce that previously did the work has moved on, the customer's ability to undertake the work, or even meaningfully to recreate the AI systems to undertake it, is badly compromised. The literal loss of corporate memory would be acute.

The net result is that failures are likely to be a rarer species, but potentially more severe. The potential for lower-value claims from the customer against the service provider is perhaps reduced, but the customer will remain very nervous about a major outage and even more concerned about the loss of those precious experience patterns that represent the AI itself.

As a result, the traditional position – which suppliers are nervous of and customers often accept – whereby the supplier takes little, if any, liability for business impact or even for loss of customer data may require a re-think, at least from the customer's perspective: AI and RPA are not providing a service to support the business, they have become the business. Similarly, customers may see the lower end of current market-standard financial caps as insufficient if a truly catastrophic failure occurs, whereas a supplier will want to maintain an ongoing balance between the risks it can accept and its reward, together with its inherent capability and willingness to take on material liability for what might be perceived as 'run of the mill' services.

COMMITTED BENEFITS

One of the principal benefits of deploying an AI or RPA solution is to reduce or even eradicate costs on a long-term basis. Many contracts already include an element of committed benefits on the part of the supplier, often to be achieved through the implementation of AI and RPA (as well as more traditional methods such as process improvement and rate arbitrage). We believe most major AI and RPA deals (whether standalone or part of a broader outsourcing) will contain a level of committed benefits, whereby the supplier contractually promises to save the customer money and, if this is not achieved, pays the customer an amount to make up the shortfall. The likely quid pro quo is a request from the supplier to share in any excess saving.



Contractualising the mechanism by which the savings are committed is key, and often fundamental to the rationale for selecting one supplier over another. This will require a clear understanding of the cost baseline against which the saving is to be delivered, what the saving actually is and how it is to be quantified, and how the customer can be sure that the saving has actually been delivered and is sustainable.
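A committed-benefits mechanism of the kind described can be sketched as a simple settlement calculation. The figures, the 50/50 excess share and the function name are hypothetical illustrations, not terms from any actual contract.

```python
# Illustrative committed-benefits settlement: the supplier commits to a
# saving against an agreed cost baseline, pays the shortfall if the saving
# is not achieved, and shares in any excess saving (the quid pro quo).

def settle_committed_benefit(baseline, actual_cost, committed_saving,
                             supplier_share_of_excess=0.5):
    """Return (payment_to_customer, supplier_gainshare) for one period."""
    delivered_saving = baseline - actual_cost
    if delivered_saving < committed_saving:
        # Supplier makes up the shortfall against the committed saving.
        return committed_saving - delivered_saving, 0.0
    # Supplier shares in any saving delivered beyond the commitment.
    excess = delivered_saving - committed_saving
    return 0.0, excess * supplier_share_of_excess

# Committed saving of 2.0m against a 10.0m cost baseline:
print(settle_committed_benefit(10.0, 8.5, 2.0))  # shortfall of 0.5 paid to customer
print(settle_committed_benefit(10.0, 7.0, 2.0))  # excess of 1.0, supplier keeps 0.5
```

Even this toy version shows why the baseline matters: both branches of the calculation are measured against it, so an unclear baseline makes the whole commitment unenforceable in practice.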

PROTECTION ON EXIT

A key risk with an AI or RPA deployment is the term of the relevant licence and what happens if and when it comes to an end; the same risk exists at the end of an outsourcing transaction in which AI or RPA plays a key part. If deployed RPA or AI is suddenly removed, the customer may face a material spike in costs as it replaces the solution, first temporarily and then permanently. There is also a significant loss of knowledge, which could impact the customer's ability to conduct its business. Accordingly, it is key to address three major issues before the contract is signed, rather than leaving them to be dealt with upon exit. These issues are:

1) the “leave behind” IPR, whether owned or licensed to the customer – this will need to cover configurations of software, manifestations of business rules applied by the customer, process improvements and anything embedded within a process that would “break” the process if it was removed;

2) a continuing standalone licence to the AI/RPA software – it is preferable to negotiate a standalone licence to the version of the software being used by the customer at the point of exit, on terms that can survive a termination, even if this means a separate fee is payable for it; and

3) an obligation to deliver the transformation itself, not just the commercial benefits – most AI/RPA-heavy deals are of course transformative. Where the deal involves a level of committed benefits, there is a risk that the customer concentrates on the supplier "cutting a cheque" to achieve the benefits, even if operationally the underlying change is not delivered. This is dangerous because (i) the supplier might not be able and willing to stand behind it in the long term, which might lead to a negotiation and an unravelling of one of the fundamentals of the deal, and (ii) without delivery of the transformation project the customer is not being transformed, and so on exit will be – operationally – even further behind its desired state than it was at the beginning of the transaction.

REGULATORY OVERSIGHT

In many sectors, not least financial services, customers are subject to increasing regulation in connection with their use of technology and outsourcing to support their business. Many of the regulatory requirements are aligned to "traditional" outsourcing models and can be difficult to apply directly to transactions with a heavily automated aspect. Ensuring regulatory compliance whilst achieving the full benefits of an outsourcing that harnesses AI and robotics will need to be approached carefully.

INTERPLAY WITH OTHER SOFTWARE

Whilst some AI or automated systems might operate on a standalone basis, more often than not they will connect to and interact with other systems within the wider IT environment. Where this happens, the licence terms of the software running within that wider environment (i.e. the software the AI or automated system might interact with) need to be considered. The contemplated interaction may not fall within the scope of the licence applicable to such third-party software or, if it does, it may trigger provisions which impact the licence fee for that third-party system.

Many software licences now specifically address the situation where the licensed system is to interface with AI or another form of automated system instead of human users. In extreme cases, this form of interaction might simply be prohibited. In others, the terms might provide for differential fee structures based upon how the system is to be used. For instance, where software is licensed on the basis of a fee per user or fee per 'seat', each human user might count as a single user, whereas an automated system counts as multiple users – commonly between 3 and 10 – on the basis that it has the potential to use the system at a significantly greater rate than a human user might.

There is some plausible logic for this when looked at from the perspective of the software vendors. If large numbers of human users are rapidly replaced by a much smaller number of automated systems, and those automated systems only count as one ‘user’ despite doing the same volume of work as several human users might previously have done, then the vendor’s future revenue stream will soon dry up. The ongoing cost of support and future development work does not change, but now needs to be spread across a smaller population of largely robotic ‘users’ to maintain the same revenue and margin position, so the fee for these new types of users has to be greater. Whilst that argument is logical, from the customer’s perspective a large differential in pricing for apparently the same ‘user’ access might still seem to be an unfair charging scheme.

To avoid this problem, moving to a ‘pay as you go’ charging scheme based on transactions processed, or compute power consumed, or some other common ‘cloud’ or ‘X as a Service’ type of metric might be sensible. Under those types of models, the automation problem is solved, as the charges are based on the level of work done with the system, regardless of whether the work is done by human users or an automated system.

Checking the third-party licence terms of any software with which the AI/automated system will interface should be a critical part of developing the business case for any AI or automation implementation project.

ERRORS IN DATA AND PERPETUATION OF MISTAKES

In the background section above, we set out how machine learning-based systems are ‘trained’ via a positive feedback loop. In theory, as the system is exposed to more data, it ought to continually improve and the accuracy of its ‘decisions’ should therefore increase.

As with most computer systems, however, the old adage of 'garbage in, garbage out' still applies. If an apparently high-performing system is continually exposed to poor quality data, or to data which suggests that incorrect decisions are in fact correct, its accuracy in objective terms will gradually diminish, even while its measured accuracy against that flawed data remains high.

Any biases, inaccuracies or bad assumptions which are present in the human users (whose actions form the training data used to train the system) will be reflected in the decisions made by the trained system. Similarly, if the system is continually being fed data from different sources, and one source is continually providing incorrect feedback on the decisions taken by the machine learning system, that will impact upon the accuracy of the system.

In scenarios where a particular AI system is used to provide services to many different customers, both the platform vendor and each customer have an interest in ensuring that no user 'pollutes' the system by inputting bad data that could diminish the accuracy of the system's output for all users. In such circumstances, it is in all parties' interests that each customer commits to ensuring the quality of any data fed into the system and to avoiding any action which could compromise the quality of the system.
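The 'garbage in, garbage out' point can be demonstrated with a toy learner trained twice: once on clean labels and once with one data source supplying incorrect feedback. The data, labels and accuracy figures are invented for illustration.

```python
# A crude learner that sets its decision threshold at the midpoint of the
# two class means, trained on clean data and then on 'polluted' data where
# one source has mislabelled clear negatives as positives.

def fit_threshold(samples):
    """Learn a decision threshold from labelled (score, label) samples."""
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, test_set):
    correct = sum((x >= threshold) == (label == 1) for x, label in test_set)
    return correct / len(test_set)

def truth(x):
    return 1 if x >= 50 else 0  # objective ground truth: 50+ is positive

test_set = [(x, truth(x)) for x in range(100)]

clean = [(x, truth(x)) for x in (20, 40, 60, 80)]
# One source pollutes the pool by labelling clear negatives as positives.
polluted = clean + [(5, 1), (15, 1), (25, 1)]

print(accuracy(fit_threshold(clean), test_set))     # 1.0
print(accuracy(fit_threshold(polluted), test_set))  # 0.84
```

A few bad labels from a single source drag the learned threshold well away from the true boundary, degrading objective accuracy for every user of the shared system, which is precisely why data-quality commitments matter in multi-customer deployments.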

DATA PROTECTION AND INFORMED CONSENT

AI and smart robots pose some obvious data protection concerns (and we will address such topics in more detail later in this series of articles). Such concerns take on a new relevance once we take into account the substantial sanctions that may be applied under the new European General Data Protection Regulation.

The main concerns stem from the fact that any AI system by definition is based on the processing of a large volume of data. Initially, such data may not be personal data within the meaning of the Regulation, but it can become personal data (i.e. it is attributable to a specific person) or even sensitive data, as a result of deep pattern matching techniques and other processing that AI might perform.



This may result in data being processed in a manner for which consent had not been granted, without any other relevant justification being applicable, or beyond the boundaries set out by earlier consent. Furthermore, the AI solution may end up making its own decisions about the data management, thus changing the purposes laid out by the data controller who should be ultimately responsible for the data processing.

Furthermore, depending on the complexity of the system and the ability to detect “unusual” activity, it may be harder to determine when an AI-based system is being hacked, making it more difficult to determine whether there has been a resulting data breach. All such issues will have to be carefully addressed in the design phase, when it is being decided how an AI solution will function and what technical controls can be applied, and also in any agreement between parties involved in using that AI solution to process data.

Last but not least – and this is a rather pervasive point – the parties should carefully determine who is responsible for what, particularly where there are dependencies, taking into account all parties that may incur liability when dealing with smart robots or artificial intelligence.

ARE WE HEADING TO ARMAGEDDON?

Whether we believe we are or not, the use of RPA and AI is on an inexorable journey to transform not just sourcing contracts but our day-to-day lives. Their continued use signals a need to look again at many traditional contract terms through this new lens, to ensure that they remain relevant and enable businesses to garner the full benefit of transformational outsourcing deals and of AI and RPA implementation.

With careful thought and attention to these issues, deploying AI/RPA can be transformative, competitively advantageous and deliver real business benefit. Maybe then AI and RPA will prove to be one of the best things ever to happen after all.

If you would like to discuss any of the issues raised here, please contact your usual DLA Piper contact or email [email protected]



www.dlapiper.com

DLA Piper is a global law firm operating through various separate and distinct legal entities. Further details of these entities can be found at www.dlapiper.com.

This publication is intended as a general overview and discussion of the subjects dealt with, and does not create a lawyer-client relationship. It is not intended to be, and should not be used as, a substitute for taking legal advice in any specific situation. DLA Piper will accept no responsibility for any actions taken or not taken on the basis of this publication. This may qualify as “Lawyer Advertising” requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.

Copyright © 2018 DLA Piper. All rights reserved. | MAR18 | 3289622