
Future Workforce – 2018 Thought Leadership Roundtable Report

By: Garry Mathiason, Natalie Pierce, Matthew Scherer, and Jill Weimer


The development and deployment of increasingly sophisticated artificial intelligence (AI), robots, and other automated systems are transforming workplaces globally, redefining needed workforce roles, skills, and jobs, and reinventing work itself. Big data, predictive analytics, deep learning, biometrics, algorithmic bias, blockchain tokens, and collaborative robot safety standards are just a handful of terms now becoming commonplace in human resource management. While technology has been “the” instrument of change for much of human history, its exponentially accelerating arrival, fueled by increasingly nimble robots, mining of big data, and the automation of predictive analytics through deep learning, is beyond anything experienced. At the same time, most workplace policies, regulations, and laws were established long before such changes were even foreseen.

As Littler Mendelson P.C. developed the largest employment and labor practice in the world, it closely monitored the accelerating global infusion of technology into the workplace. It became increasingly predictable that robotics, AI, and advanced automation would become the largest collective industry reaching into almost every business, organization, and employer worldwide.

In 2013, Littler established a Robotics, AI, and Automation Practice Group to spotlight technologically induced workforce changes, unmask the workplace legal issues they create, and innovate compliance solutions. Regulations clearly would not keep pace with technology, and employers would need guidance to speed workplace adoption of transformative technologies and remain competitive in global markets.

On November 12, 2018, Littler’s Robotics, AI, and Automation Practice Group hosted its third Future Workforce Roundtable, this time also inviting Littler’s Workplace Policy Institute® (WPI™) to co-host the event. Littler’s Robotics and AI practice and WPI assembled 40 world-class thought leaders and authorities in science, government, academia, law, ethics, and business to address the formation and challenges of the future workforce. The multi-disciplinary expertise, diversity, and global representation of the distinguished participants (many of whom were taking part in the Roundtable for a third time) made for intense discussion, debate, and deliberation, with all members contributing.

Building on past thought leadership forums, the 2018 Future Workforce Roundtable participants addressed eight critical inquiries.1 With respect to the first three inquiries, participants generally agreed that disruptive technology’s principal challenge was how to qualify displaced workers for new roles and jobs created by the technology. Participants were also in general agreement that the contingent workforce is a significant part of the overall workforce, will continue to expand, and will require—and in fact, welcome—upskilling and lifelong learning. These contingent workers include those participating in the gig economy, as well as temporary, part-time and full-time workers, independent contractors, staffing employees, and direct hires with short-term assignments. Finally, while most participants determined that precise future roles and jobs could not be identified at present, they described the skills that workers will need and called for lifelong training. According to the participants, business will need to be a primary provider of such training, with government and academia supporting these efforts.

Littler’s WPI has issued a separate brief report2 on these three inquiries and the guidance participants offered on how to prepare American businesses for “technology-induced displacement of employees” (TIDE), which is the mission of the EMMA Coalition, a nonprofit, nonpartisan organization developed by Littler’s WPI and Prime Policy Group, a preeminent bipartisan government relations firm.

1 (1) Is the principal challenge of disruptive technology qualifying displaced workers for newly created roles and jobs, rather than attempting to preserve jobs that would otherwise be eliminated? (2) What are the future roles and jobs created by disruptive technologies, how does the current workforce become qualified for those positions, and who in our society is or should be responsible for the transition? (3) How will the global contingent workforce be transformed as disruptive technologies mold the future workforce within the context of current and changing workforce laws and regulations? (4) Can and how are privacy and cybersecurity being redefined and maintained within the future global workforce? (5) Is full transparency possible, needed, or counterproductive in the context of identifying, reducing, or preventing algorithmic bias? (6) How is a workable legislative, regulatory, and judicial roadmap developed that identifies needs, respects the role of innovation, and considers unintended consequences? (7) Will AI and robotics competition among governments accelerate the arrival of disruptive technologies in U.S. and EU workplaces? Do existing workplace laws protecting privacy and prohibiting discrimination based on protected categories inhibit U.S. and EU competitiveness? (8) What are the most important and immediate ethical challenges of AI in the workplace? In recognizing the AI imperative and building a practical roadmap for businesses, what are the best responses to these ethical challenges?

2 Michael J. Lotito and Matthew U. Scherer, Thought Leaders Predict AI’s Impact on the Workforce, WPI Report (Dec. 3, 2018).


Redefining and Maintaining Global Workforce Privacy and Cybersecurity

Inquiry 4: Can and how are privacy and cybersecurity being redefined and maintained within the future global workforce?

In 2018, data privacy and digital identification were at the center of employers’ regulatory radar. Participants agreed that companies have gained access to more and more data in recent years, and that the amount of employee data employers possess and store is increasing rapidly. Such data may be collected purposefully or inadvertently, and can be drawn from a variety of internal and external sources — from wearable devices that track health and fitness information, to security protocols that rely on biometric data, to information obtained from outside data vendors. Participants noted that over half of all employers are using and/or collecting biometric identity information on workers. Ranging from fingerprint and retinal scans to facial recognition and vein scanning, these biometrics can improve security and protect one’s identity, but proper adoption and use are critical to avoid a host of unintended privacy and cybersecurity dangers.

Multiple federal and state statutes and regulations govern the collection, storage, and privacy of health information, and workplace biometric information in particular is subject to a patchwork of legal consent and security requirements. It was reported that over 100 class actions are pending under the Illinois Biometric Information Privacy Act (BIPA). Currently, Illinois is the only state providing a private right of action for persons, including workers, who are “aggrieved by” violations of BIPA’s information and written consent requirements.

While increased privacy and security controls were greatly anticipated, especially regarding the duty to protect the privacy of personnel information, some Roundtable participants noted that because actual damages were difficult to establish, many proposed privacy right-of-action statutes had been unsuccessful. Earlier this year, however, the U.S. District Court for the Northern District of California refused to dismiss a BIPA class action, finding plaintiffs had standing to bring their claims under the law. The Ninth Circuit will review the ruling.

In addition, eight days after the Roundtable, the Illinois Supreme Court heard oral argument in Rosenbach v. Six Flags Entertainment Corporation regarding whether to overturn a lower court decision requiring actual damages to justify a cause of action under the BIPA. Several participants reported that the pace and scope of data collection is likely to increase further as companies incorporate machine-learning systems into their operations, as most machine-learning applications require substantial data for training and validation.

While employers are finding increasing uses for (and increasing access to) employee data, regulators worldwide have, as predicted, focused attention on regulating companies’ ability to collect, maintain, and use that data. The EU’s General Data Protection Regulation (GDPR), which took effect earlier this year, is the most visible and expansive regulatory effort in this space, but legislatures and regulatory agencies in many U.S. states are actively considering enhanced data privacy laws. The GDPR defines “personal data” far more broadly than U.S. breach notification laws do, which will be challenging for many U.S. companies required to comply with it. Employers collect “personal data” as defined by the GDPR as early as the application phase of employment, so many will have difficulty obtaining the required level of consent from applicants.

The GDPR imposes notice obligations, requires truly voluntary consent for the use of personal data, includes strict security regulations, and often creates a need for new technologies to manage the requirements at scale. Several participants anticipated that U.S. privacy consent requirements would move closer to the GDPR standards. Other Roundtable members expressed misgivings about relying on the US-EU Privacy Shield, which is being challenged in the EU for not offering sufficient data protections, including the failure, until recently, to nominate a State Department Ombudsman to oversee compliance complaints.

As in other areas involving new technologies, government efforts on data protection and cybersecurity may not properly account for the realities of current technology. Even though multiple means of establishing digital identity exist, many can be imitated, especially fingerprints and facial identity. These challenges are driven by the underlying digital technologies, which often were not designed to be digitally secure. But while technology is in some sense the problem in companies’ efforts to comply with data privacy and security laws, it may also create solutions. Blockchains and deep learning may provide companies with new ways of validating and securing the data in their possession and of creating the required roadmap for where the data goes once it is collected. The new laws require a level of monitoring and reporting of data collection and organization that is nearly impossible for humans to manage unaided.
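By way of illustration only — the class, field names, and downstream systems below are hypothetical and are not drawn from the Roundtable discussion — an append-only, hash-chained audit log is one simple way a company might record, and later verify, where a piece of collected employee data has gone:

```python
# Illustrative sketch: a minimal hash-chained ("blockchain-style") audit log
# for recording where collected employee data goes. All names and fields are
# hypothetical; this is not a description of any specific product or legal standard.
import hashlib
import json
import time


def _hash_entry(entry: dict) -> str:
    """Return a SHA-256 digest of a canonical JSON encoding of the entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


class DataAuditLog:
    """Append-only log; each record is chained to the hash of the previous one,
    so later tampering with earlier records is detectable."""

    def __init__(self):
        self.entries = []

    def record(self, data_subject: str, data_type: str, action: str, recipient: str):
        previous_hash = self.entries[-1]["hash"] if self.entries else "GENESIS"
        body = {
            "timestamp": time.time(),
            "data_subject": data_subject,   # e.g., an employee ID
            "data_type": data_type,         # e.g., "fingerprint_template"
            "action": action,               # e.g., "collected", "shared", "deleted"
            "recipient": recipient,         # e.g., an internal system or vendor
            "previous_hash": previous_hash,
        }
        self.entries.append({**body, "hash": _hash_entry(body)})

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        previous_hash = "GENESIS"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["previous_hash"] != previous_hash or _hash_entry(body) != entry["hash"]:
                return False
            previous_hash = entry["hash"]
        return True


if __name__ == "__main__":
    log = DataAuditLog()
    log.record("EMP-001", "fingerprint_template", "collected", "hr_timeclock_system")
    log.record("EMP-001", "fingerprint_template", "shared", "payroll_vendor")
    print("Chain intact:", log.verify())
```

Because each record incorporates the hash of the one before it, altering or deleting an earlier entry breaks the chain — the property that makes such logs attractive for the kind of monitoring and reporting the new laws demand.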

Several participants reported that monitoring technology is being developed and should be available soon. Implementing this new and evolving technology will require cybersecurity professionals, who are already scarce. Given the increasing attention that data privacy and cybersecurity are receiving from governments and the media, Roundtable participants agreed that companies must continue to closely monitor regulatory developments to ensure that their collection and retention of employee data remains compliant with the law, invest in cybersecurity classroom and online training and apprenticeships, and support vocational and community college cybersecurity programs. Compliance will be a process rather than a single solution, and informed regulators are critical to the development of governmental oversight and controls.

The Challenge of Transparency and Explainability

Inquiry 5: Is full transparency possible, needed, or counterproductive in the context of identifying, reducing, or preventing algorithmic bias?

AI applications that rely on complex machine-learning algorithms, including applications that use multilayered neural networks whose inner workings are an indecipherable “black box,” have been proliferating rapidly. The spread of such algorithmic systems is spurring growing calls for tech companies to build transparency—or “explainability”—into the systems they design. Employers that adopt such technologies will increasingly face similar pressures. Several Roundtable members opined that if an employee or a member of the public sues a company because of a decision made by an algorithmic system, the company should be prepared to disclose aspects of how the algorithm operates, anticipate defending the lack or impossibility of full transparency, and show evidence of continuing human oversight. Employers should also anticipate increasingly finding themselves in litigation defending their AI systems alongside the outside developers that built them.

But the very nature of deep learning and other powerful modern forms of AI makes true explainability, much less full transparency, very difficult or impossible. Several participants commented that a blanket transparency requirement would severely limit innovation and place U.S. and European AI development at a competitive disadvantage. Participants nonetheless strongly agreed that ethical standards must be applied to AI decision-making and to algorithmic biases that violate legal requirements or core values. A more workable and achievable goal for companies adopting complex machine-learning systems is to focus less on transparency, or on explaining the workings of the algorithmic “black box,” and more on taking a preventive approach. Such a strategy must involve close monitoring of the inputs into such systems, and careful tracking and scrutiny of the outputs generated from those inputs. Monitoring and periodic auditing will give companies the ability to check whether the algorithms are working as intended and to raise red flags when outcomes are unexpected or deviate from what was intended. Having human oversight or control somewhere in the decision-making process is recommended. Several participants suggested acquiring AI programs that have been pre-tested for algorithmic biases on disclosable sample data. Even if true transparency cannot be achieved, companies can reduce their litigation and compliance risks through such proactive measures.
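To make the preventive approach concrete, the sketch below shows one way a company might periodically audit a screening tool’s outputs by comparing selection rates across groups and flagging disparities for human review. The groups, outcomes, and threshold are hypothetical, and the four-fifths figure is simply a commonly cited benchmark from EEOC adverse-impact guidance, not a standard the Roundtable endorsed.

```python
# Illustrative sketch: periodic audit of an algorithmic screening tool's outputs.
# Group labels, records, and the 0.8 threshold are hypothetical and configurable;
# a real audit program would be designed with counsel and domain experts.
from collections import defaultdict


def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> {group: selection_rate}"""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}


def flag_disparities(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the widely cited "four-fifths" benchmark)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r / best < threshold}


if __name__ == "__main__":
    # Hypothetical monthly log of (group, hired?) outcomes produced by the tool.
    log = [("A", True)] * 40 + [("A", False)] * 60 + [("B", True)] * 25 + [("B", False)] * 75
    print("Selection rates:", selection_rates(log))
    print("Groups flagged for human review:", flag_disparities(log))
```

In practice, flagged results would feed into the human oversight and periodic auditing the participants described, rather than triggering any automated action.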

The Value of Informed Regulators and of Developing Needed, Narrowly Focused Regulations While Minimizing Both Unintended Harm and Disincentives to Innovation

Inquiry 6: How is a workable legislative, regulatory and judicial roadmap developed that identifies needs, respects the role of innovation and considers unintended consequences?

One recurring theme of the Roundtable discussions related to the uncertainty surrounding the regulatory climate for AI and robotics. While the absence of regulatory activity is generally seen as a boon to innovation, that advantage dissipates when new technologies must comply with preexisting legal frameworks. The consensus of the Roundtable participants is that numerous technologies exist that could be deployed in the workplace, but are not being embraced (or used at all) because employers are unsure whether the technologies comply with existing laws. This unsettled regulatory climate has had a negative impact on the deployment of potentially advantageous technologies in some sectors.

For example, a participant explained that until this year, potentially life-saving telemedicine technologies that could have allowed stroke specialists to examine patients in rural areas were effectively banned in Texas—the state with the largest rural population in the country—because the Texas medical board required physicians to have an in-person visit with a patient to establish care. Similarly, algorithmic employee recruitment and selection tools that would allow employers to create a more diverse and equitable workforce are already available, and many more are in development, yet some employers have been reluctant to adopt them because of uncertainty regarding how such tools will be assessed by the federal Equal Employment Opportunity Commission (EEOC) and equivalent state agencies.

Some of this regulatory inaction can be attributed to regulators’ lack of knowledge about the benefits and risk profiles of new technologies. Relatedly, some governmental institutions take a reactive, rather than a preventive, approach to regulation when assessing the novel and unfamiliar. Roundtable participants expressed concern that if regulators do not take a more proactive and preventive approach, legislatures or regulators will act only after a catastrophic event. Governmental institutions may then feel political pressure to enact strict, innovation-stifling regulations rather than regulations that account for the benefits of AI and robotics as well as the risks.

All this points to an overriding need for employers, tech companies, and educators to engage with policymakers to educate them on the benefits and risks of the many applications of AI and robotics in the workplace. Helping regulators and policymakers adopt a forward-looking mindset, focusing on prevention rather than correction, will help ensure that regulation enhances, rather than hinders, the competitiveness of American companies.

Part of such an approach could be the creation of regulatory “safe harbors” that allow companies to test novel applications of AI and robotics designed to make hiring, retention, and promotion practices fairer, even where compliance with the letter of the law is unclear in the event of an unintended and unanticipated adverse impact on certain protected categories. For example, legislatures and, where constitutionally appropriate, enforcement agencies could permit companies to use algorithmic hiring tools to identify and recruit promising candidates from disadvantaged protected groups without facing risk of liability for disparate treatment discrimination. Another example could be providing employers a safe harbor against discrimination claims, or at least a strong presumption of compliance with anti-discrimination laws, if a third-party audit of similar algorithmic hiring tools shows no adverse disparate impact on protected groups.

Roundtable members realized that adopting forward-thinking regulations and statutes will be difficult in the current polarized political climate. But with America’s economic rivals increasingly prioritizing the development and rapid deployment of new applications of AI, the need to leverage the potential of AI for the greater good of America’s workers and companies could represent a rare potential space for bipartisan action. A combination of safe harbors to test tools designed for this greater good, while incentivizing employers to audit AI general population recruiting tools to identify and prevent unlawful discrimination, could gain more bipartisan support than considering the safe harbors separately.

Page 6: Future Workforce - Robotics Business Review · event. Littler’s Robotics and AI practice and WPI assembled 40 world-class thought leaders and authorities in science, government,

Building a Practical and Ethical Roadmap for AI and Robotics Development While China and Russia Race to Become the World Leaders

Inquiries 7 and 8: Will AI and Robotic competition among governments accelerate the arrival of disruptive technologies in U.S. and EU workplaces? Do existing workplace laws protecting privacy and prohibiting discrimination based on protected categories inhibit U.S. and EU competitiveness?

What are the most important and immediate ethical challenges of AI in the workplace? In recognizing the AI imperative and building a practical roadmap for businesses, what are the best responses to these ethical challenges?

Most Roundtable participants agreed that a titanic global competition has formed around AI and robotics research, development, and deployment, both civilian and military. But awareness of this new “cold war”-type competition is limited, and U.S. national priorities are scattered. Russian President Vladimir Putin has announced that the nation that leads in AI will “be the ruler of the world.” Similarly, China has announced it will dominate the field of AI and robotics by 2030. A participant—one of the world’s leading experts on robotics in China—reported that China’s spending on robotics in 2017 increased three times faster than it did in the U.S. The Congressional Research Service identified China as a “leading competitor” in using AI to develop military applications. During the Roundtable, it was reported that a year earlier, China mandated that all advanced civilian technology developments be made simultaneously available to the military. Meanwhile, in the U.S., some worker-based anti-war groups are challenging the ethics of technology firms developing AI for military projects.

Three key takeaways from the discussion included: (1) The U.S., EU, and their allies hold the advantage in developing AI and robotics, especially from a talent perspective. The U.S., however, lacks a clear awareness of the importance of this competition, has no national priority or spending initiative comparable to those of China and Russia, has failed to produce an adequate number of STEM graduates, and maintains an immigration policy that threatens to erode the talent advantage; (2) Western privacy concerns, especially regarding biometrics, have slowed development compared to nations such as China, where over half of the country’s 1.38 billion population is now in the national facial identification system. Privacy and individualism, however, are key values and would not preclude competitiveness if AI development became a national priority; and (3) U.S. military leaders are aware of this challenge and are responding without compromising deeply held ethical mandates, such as keeping the decision to kill under human control.

Participants strongly support the establishment and continuation of ethical standards for AI and robotics in the workplace. Brief reference was made to the 23 Asilomar AI Principles sponsored by the Future of Life Institute, a set of principles intended to promote the safe and beneficial development of artificial intelligence. The State of California adopted these principles — which include research issues, ethics and values, and longer-term issues — in 2017. One of the participating Littler attorneys contributed to their development.

During the Roundtable, a new term was advanced: “algorithmic workplace governance.” The term derives from the phrase “algorithmic governance.” Essentially, under this form of governance, algorithms are used to make key decisions that enforce the processes and values of a society. For example, in Russia and China, central government control can be enhanced by algorithms programmed to enforce established government policies. Combined with digital identification, this technology has the potential to function as a complete police force. Through cameras and other data sources, regulations and laws can be enforced without human involvement. This places great power in the hands of the few who make and program the rules.


Applied to the workforce, algorithms could decide who is hired and where workers are assigned, monitor worker performance, identify infractions of workplace rules, decide who deserves promotion, set and administer compensation levels, and identify who is to be laid off or terminated. While some fairness advantages could come from such a world, it removes human decision-making and maximizes control over workers. Participants viewed our workplaces as primarily remaining under human control, which is more consistent with a form of democratic workplace governance. Reserved for a future Roundtable is whether the workforces of 2020 and beyond would benefit from some elements of objective, AI-tested selection, such as for promoting diversity, while continuing to benefit from human, outside-the-box decision-making.



Thought Leaders Predict AI’s Impact on the Workforce

By Michael J. Lotito and Matthew U. Scherer


On November 12, 2018, Littler Mendelson P.C., the world’s largest labor and employment law firm, hosted a Roundtable of distinguished leaders from government, industry, and academia to discuss the rapid evolution of the workplace and workforce due to AI, robotics, and other automation technologies. The Roundtable attendees had diverse backgrounds and perspectives. They participated in a wide-ranging conversation regarding many of the legal and ethical issues surrounding these emerging technologies.

The primary topic of discussion during the morning session of the Roundtable was the impact that automation will have on the workforce, and what companies, industry groups, and workers1 must do to prepare for the disruptive impact of AI, robotics, and other emerging technologies. These issues go to the very core of the mission of the Emma Coalition,2 a nonpartisan, nonprofit organization formed by Littler with the goal of saving American capitalism in the 21st century by preparing America’s employers and workers for the coming Technology-Induced Displacement of Employees, or TIDE. In furtherance of that mission, the Emma Coalition has prepared the following report summarizing the thoughts that the Roundtable participants voiced and the points of consensus they reached during the morning session of the Roundtable.3

1 We use the term “worker” rather than “employee” to emphasize the important role that freelancers and other contingent workers will play in the economy of the future. Companies’ efforts to build a workforce with the necessary skill set can and must extend to contingent workers, who some studies suggest are more likely than traditional employees to participate in programs that provide them with new skills.

2 https://www.littler.com/service-solutions/wpi/emma-coalition

3 Tom Green, founding Editor in Chief of Robotics Business Review and current Editor in Chief of Asian Robotics Review, attended the Roundtable and prepared his own summary of the gathering and the discussion surrounding the Emma Coalition and its mission. Tom Green, Workplace Survival Skills in an Age of Automation, Asian Robotics Review, available at https://asianroboticsreview.com/home240-html.


The Coming of the TIDE

The consensus of Roundtable participants is that while automation is likely to displace workers in many occupations, it also will spur enormous demand for workers in both existing fields and in new occupations that technological change will generate. The founders of the Emma Coalition refer to this process of labor market disruption as Technology-Induced Displacement of Employees, or TIDE.

Jobs Created, Jobs Displaced, and Jobs Transformed

During the rapid rise of sophisticated AI over the past decade, considerable public attention has focused on the potential threat to workers in jobs that are vulnerable—or, at least, that appear to be vulnerable—to automation. In the media, stories have proliferated suggesting that a third or more of developed-world jobs will be vulnerable to automation within the next two decades.

The unspoken assumption underlying these concerns about automation is that the workers at companies that automate are more vulnerable to economic dislocation than workers at companies that do not automate. But in reality, the outlook for workers at companies that do not automate may be far more bleak. The increases in efficiency and productivity from the incorporation of AI and robotics into companies’ operations mean that companies that refrain from adopting automation technologies may find themselves at a decisive competitive disadvantage. Over the past two decades, retail companies that failed to recognize and leverage the potential benefits to their business from e-commerce struggled to survive in the digital economy. With the rise of AI and robotics, companies that fail to automate tasks that can be done more efficiently and reliably by automated systems will fall behind their competitors. In the worst case, such companies will fail altogether, which likely would mean the loss of all jobs at that company.

Fortunately, automation is not a zero-sum game in which the automation of certain tasks means a reduction in the amount of work available for humans to do. Today, in an era when companies in sectors ranging from financial services to grocery stores have aggressively incorporated AI and automation into their operations, unemployment in the United States stands at less than 4%, a historically low level. Going forward, the most successful companies will not be those that simply automate for the sake of automation or that take a short-term approach focused only on the savings and profit margins that automation can bring. Instead, successful companies will identify the tasks that they will need performed in the near future, identify which tasks are best-suited for humans, and use the profits and savings generated by automation to upskill and leverage the capabilities of their workers. Such companies will provide their workers with the skills needed to perform the tasks for which humans will remain best-suited.

Automation is also likely to transform many jobs that it does not directly displace, just as technological change has brought sweepingly broad changes to many industries in recent decades. Consider the jobs related to flying a commercial airliner. In the cockpit, increasingly sophisticated autopilot and computer systems in commercial airliners have eliminated the need for flight engineers during the past 30 years while increasing airlines’ demand for IT specialists. At the same time, the job of the airline pilot has changed substantially, with pilots having to learn how to operate and navigate the electronic displays of modern cockpits while also having to take over the active monitoring of aircraft systems that flight engineers used to perform.

A similar trend will undoubtedly play out in many other industries and jobs with the rise of AI and automated systems. In addition to the jobs that automation will create and those it will displace, AI and other emerging technologies will radically transform entire industries and economic sectors in the coming decades. As a result, many jobs that are not directly displaced by automation will, like modern airline pilots, nevertheless require workers to develop new and different skills than those jobs require today. Autonomous vehicles will require mechanics with skills quite different from those required to maintain traditional human-driven vehicles. In health care, future surgeons will need to learn how to operate robotic surgical systems. The rise of telemedicine and virtual office visits will require physicians to master completely different ways of examining and interacting with patients.

Forward-thinking companies, both in traditional blue-collar industries such as automotive manufacturing and in digital-era sectors such as e-commerce, are already establishing training programs that provide their workers with the skills they will need to remain part of the future workforce. The consensus of Roundtable participants is that the models established by these companies will be a blueprint for companies that seek to achieve the correct mix of appropriately skilled human workers and smart automation that will be the formula for economic success in the global economy.

Companies know the skills they currently need from their workers. But the quickening pace of automation will force companies to identify not merely the skills they currently require, but also the skills they will need their workers to have several years into the future. This is not a trivial task, given the continuously evolving labor market landscape both within and across industries. Indeed, in some cases, it may be wholly impossible to identify some of the future positions that companies will have available, since many of the jobs that workers will hold a decade from now may not even exist yet—just as the job of “Social Media Manager” did not exist in the middle of the last decade.

The McKinsey Global Institute and the World Economic Forum, along with many other think tanks and other institutions, have issued reports about jobs most vulnerable to automation as well as which jobs they project to be in demand in the coming years and decades. But while these reports provide companies with valuable data regarding the future of the labor market as a whole, they do not speak to the conditions within specific industries, much less take into account the future plans and circumstances of individual employers. In projecting future talent needs, companies will have to rely on AI and the power of predictive analytics to spot trends and identify the jobs that will constitute their future workforce—and, by extension, the skills they will require workers to have to perform those jobs effectively.
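As a minimal illustration of that kind of projection — the skill names, head counts, and simple linear trend below are invented for the example, and real analyses would draw on far richer data and methods — a company could fit a trend to historical demand for each skill and extrapolate a few years ahead:

```python
# Illustrative sketch: naive linear-trend projection of internal skill demand.
# The historical head counts and skill names are invented for the example.
import numpy as np

# Hypothetical annual head counts needed per skill, 2014-2018.
history = {
    "robot_maintenance": [5, 8, 12, 18, 25],
    "manual_data_entry": [40, 38, 33, 27, 20],
}
years = np.arange(2014, 2019)


def project(counts, future_years):
    """Fit a straight line to the historical counts and extrapolate."""
    slope, intercept = np.polyfit(years, counts, deg=1)
    return {int(y): max(0.0, slope * y + intercept) for y in future_years}


if __name__ == "__main__":
    for skill, counts in history.items():
        forecast = project(counts, np.arange(2019, 2022))
        print(skill, {y: round(n, 1) for y, n in forecast.items()})
```

Even a toy model like this makes the underlying point: the inputs are a company’s own historical demand, so the resulting projections reflect that employer’s circumstances in a way that economy-wide reports cannot.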

Training the Workforce of the Future

The key challenge that employers and workers will face in minimizing the disruptive impact of automation will be providing workers—especially those displaced by automation—with access to training programs that equip them with the skills needed for the jobs of the future. Roundtable participants were asked what steps companies, trade groups, and educational institutions could take to provide workers with the training needed to remain competitive in the global labor market. Three points of consensus emerged during the discussion that followed: (1) building the workforce of the future will require a greater emphasis on lifelong learning; (2) training programs should be designed to be flexible, so that they can quickly adjust and adapt to the frequent changes in labor-market demands that will accompany automation; and (3) training programs should place greater emphasis on providing workers with general competencies and skills that will be transferable to jobs in multiple fields.

Lifelong Learning

The traditional model of education has long seemed to operate on the unspoken assumption that most workers will complete their education in their late teens or early 20s and then embark on an occupation or profession that will keep them employed for several decades. The shortcomings of that model have been laid bare in recent years by the decline of once-robust manufacturing regions, many of which entered periods of economic decline as workers lost their jobs without being provided a viable path to retool their skill sets and reenter the workforce. The “one career” model is likely to break down completely in the coming decades, as automation disrupts one industry after another and forces workers to learn new skills—or hone existing ones—periodically to remain part of the workforce.

Worker training in the 21st century still must start with traditional K-12 education, and there will continue to be demand for workers with college or postgraduate degrees in many fields. Improving the quality of America’s traditional education system, particularly as it relates to the STEM subjects (Science, Technology, Engineering, and Mathematics), will thus be essential. But the Roundtable participants were unanimous that a sustainable model of workforce training also will require building an educational culture that places a greater emphasis on lifelong learning and providing workers with new skills throughout their careers. The most successful companies will be those that build such lifelong learning into their business models, thus bringing the skill and talent pipelines of their workforce within their control. This will allow companies to integrate training, whether in the form of “upskilling” or “reskilling,” into their employees’ daily work.


Flexibility and Agility

To effectively train workers in a labor market landscape that will be continuously changing due to automation, the best training programs will be those that are flexible and agile. The operators of training programs must be able to reorient and even redesign them with some frequency as it becomes clear which jobs will be most affected by automation and, as new data becomes available, which skills will be most in demand in the coming years.

Such programs will have to provide workers with skills that will remain in demand long enough to provide the workers (and the companies or other institutions that underwrite the costs of the programs) with an adequate return on the investment in training. This requires a delicate balance: projections regarding which jobs will be available in five years will be more accurate than projections regarding the jobs that will be available in 10 years, but a job that remains in demand in 10 years will provide companies and workers with a greater return on the time and money they invest in training. This further underscores the need for individual companies and industry groups to actively participate in the management of such training programs, because those industry actors will have the best sense of the prevailing labor market trends within their industries and spheres of operations. Such direct participation will also allow individual companies to incorporate workforce training programs into their own medium- and long-term strategic plans.

To implement these training programs, companies and industry groups should partner with America’s strong network of community colleges and increasingly sophisticated online learning platforms. Companies will need to leverage AI, as well as longer-established digital technologies, to spot deficiencies in workers’ skills and knowledge and to help reduce overhead on training costs. But effective training programs will not be able to rely on digital technology alone; many American workers lack the computer skills necessary to complete an educational program through digital platforms, and computer proficiency will not be necessary for many of the jobs projected to grow in the future. Consequently, it will be essential for companies to partner with established traditional education providers to ensure that workers of all ages and learning styles are able to participate in the training programs.

Many companies today—including several participants at the Roundtable—have already partnered with community colleges and other local educational institutions to establish training programs that are both well-targeted and diverse. Designing and adapting training programs that will remain robust in a constantly shifting labor-market climate will require continuous engagement between industry actors and education providers.

Focusing on Fundamental Skills and Competencies

Another point of consensus from Roundtable participants was that training programs should place a greater emphasis on broader competencies whose value will be durable and applicable to many jobs, and not just on discrete skills required for particular jobs. Training thus should include providing workers with generalizable skills such as problem-solving, communications, and analytical reasoning—competencies that are relevant to many different categories of jobs, and for which humans are likely to outperform machines for years to come. Providing workers with such broader competencies will enable those workers to complete and transition between training programs more quickly than workers whose training programs are designed to prepare them merely for one specific job.

Of course, a focus on such broader competencies must be in addition to, and not in lieu of, concrete skills that will allow a worker to successfully obtain employment upon completion of the training program. For these more job-specific skill sets, the focus should be on providing workers with the skills needed for entry-level jobs. This will allow workers to reenter the workforce and obtain the skills needed for further advancement as necessary, either through on-the-job training and experience or through additional training or certification programs catered to the skills needed for higher-level positions.


The Road Ahead

The foregoing principles should guide employers, industry groups, and the educational system as they attempt to build a workforce with the skills needed for America’s workers and companies to remain competitive in the coming decades. There are, of course, many questions that still must be answered. How do we design training programs with the flexibility, agility, and breadth that will be necessary? Who will underwrite the costs of establishing training programs? And how can the funding for them be structured so that all workers can participate, including and perhaps especially those unable to pay out-of-pocket for the cost of training? What role will federal, state, and local governments play?4

America’s institutions in both the public and private sectors are long overdue in addressing these issues, but there are promising signs on the horizon. In August 2018, the Department of Commerce announced the creation of the American Workforce Policy Advisory Board, whose mission is to address the need to provide workers with the skills needed to thrive in an economy that “is changing at a rapid pace because of the technology, automation, and artificial intelligence that is shaping many industries.” It remains to be seen how the Advisory Board goes about fulfilling its mission, or how much federal funding will be made available to implement any recommendations it makes. In the meantime, the “rapid pace” of technological change means that real, implementable solutions need to be formulated sooner rather than later.

The challenge of finding those solutions is even more daunting than this description suggests, because technological change is far from the only disruptive force affecting America’s workers and workplaces. The increasing prominence of the contingent workforce may require regulators and legislators to reexamine the longstanding model, predicated on the employer/employee relationship, for providing workers with workplace protections and benefits. The contingent workforce also raises new questions about how workers can have an effective voice regarding workplace conditions and policies and what role organized labor could play in providing that voice. States have traditionally been laboratories of experimentation for labor-market policies, but given the enormous resources that other economic powers5—particularly China—are investing in reorienting their workforces, it is not clear that state and local governments will be able to marshal the necessary resources, much less develop a cohesive national strategy, for making America’s workforce competitive in the coming decades.

Consequently, while the establishment of the Advisory Board is a promising development, companies and industry groups must be proactive in facing the challenges of the TIDE. Companies should, without delay, begin expanding in-house training programs, building relationships with community colleges and online learning platforms, and establishing public-private partnerships to ensure that companies will have access to workers with the skills they will need workers to have in the future global labor market. It is the Emma Coalition’s mission to ensure that companies take the necessary steps to make sure that they—and America’s workers—remain competitive in the 21st century economy.

4 Legislation dealing with these issues has been introduced in the U.S. Senate by Senators Warner and Coons. https://www.scribd.com/document/393386960/LLTA-Leg-Text-11-16-18.

5 Little Hoover Commission Report #245, Artificial Intelligence: A Roadmap for California, https://lhc.ca.gov/sites/lhc.ca.gov/files/Reports/245/Report245.pdf.


LITTLER MENDELSON, P.C. | EMPLOYMENT & LABOR LAW SOLUTIONS WORLDWIDE™

ABOUT LITTLER: Littler is the largest global employment and labor law practice, representing management in all aspects of employment and labor law and serving as a single-source solution provider to the global employer community. Consistently recognized in the industry as a leading and innovative law practice, Littler has been litigating, mediating and negotiating some of the most influential employment law cases and labor contracts on record for more than 75 years. Littler Global is the collective trade name for an international legal practice, the practicing member entities of which are separate and distinct professional firms. For more information visit littler.com.