
FEATURING RESEARCH FROM FORRESTER

AI Deep Learning Workloads Demand A New Approach To Infrastructure

Unlock The True Potential Of AI For An Intelligent Connected World


IN THIS DOCUMENT

Research From Forrester: AI Deep Learning Workloads Demand A New Approach To Infrastructure
About Tata Consultancy Services Ltd. (TCS)

AI is now mainstream and the driving force for the future ‘Intelligent Connected World’, poised to create exponential value across a range of industries. It is pushing the boundaries of compute with complex, data-centric workloads that require developers to create different sets of software solutions for diverse computing architectures spanning scalar, vector, matrix, and spatial processing. The range of innovative AI hardware-accelerator architectures continues to expand, with chip manufacturers introducing specialized architectures such as neural network processing units (NNPUs), field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) in addition to graphics processing units (GPUs) for handling different AI workloads at both the edge and in the cloud. This diversity adds a further dimension of complexity to developing optimized AI solutions, because the choice of platform plays a critical role in building fit-for-purpose solutions. In addition, any change to hardware platform choices at a later stage will result in significant cost and time implications.

There is a transformational shift across the AI development lifecycle that is defining the evolution of multi-architecture AI paradigms. We are partnering to drive this transition towards unified, standards-based programming models that empower developers with software tools able to target any processing resource across diverse architectures.

TCS is fostering an ecosystem of technology players to enable enterprises to explore avenues of innovation, leveraging this new paradigm of AI development to foray into new markets, create new products, and provide cutting-edge domain solutions. For instance, leveraging its deep domain, technology, and business process expertise, TCS is partnering with leading innovators such as Intel to help enterprises unlock the full potential of AI with the right mix of hardware and software architectures.

AI continues to push boundaries, creating tremendous impact across industries, and it holds great opportunity for both enterprises and society at large. However, realizing true end-to-end value from AI requires ecosystems of players across technology, industry, and academia to come together.

V. Rajanna

Senior Vice President & Global Head – Technology Business Unit, Tata Consultancy Services (TCS)

Rajanna is a key member of the senior leadership team at TCS and has held various critical roles. He is a multifaceted leader who, over two and a half decades, has left a valuable imprint on industry, academia, and government. His experience spans countries across the globe and encompasses areas such as sales, delivery, business operations, human resources, research & development, marketing, and education. Prior to his current role, Rajanna headed the Telecom OEM Business Unit; under his astute leadership, revenues from the unit grew two and a half times. Rajanna nurtured and built the largest technology customer partnership for TCS. He was the first CEO for TCS in China.


INTEL’S MISSION: AI EVERYWHERE

“Intel is focused on enabling businesses to spur innovation and discovery through Artificial Intelligence. Our extensive collaboration with TCS combines the best innovations from both companies to define the next wave of AI solutions across industry verticals. The collaboration allows us to accelerate innovation by integrating AI into critical business processes and address ever-increasing business complexity. Together, we are driving advanced AI solutions to enable enterprises to extract maximum value from their IT investment.”

- Prakash Mallya | VP & MD, SMG, Intel India

Intel is a data-centric company that recognizes the transformational power of artificial intelligence (AI), and is investing heavily in the future of AI. Our system-level strategy provides customers with the most diverse portfolio of compute, matched with advanced memory, storage, and communications fabric. We offer the industry’s only mainstream CPUs with AI acceleration built in, and an accessible range of discrete AI accelerators, including edge VPUs, FPGAs, forthcoming discrete GPUs, and domain-specific architectures.

But hardware can’t do AI by itself. This high-performance hardware is supported by mature software built on open components optimized to run best on Intel® hardware. The Intel® Distribution of OpenVINO™ toolkit greatly simplifies the deployment of models across heterogeneous hardware. Currently, Intel IT is using OpenVINO to simplify defect detection in our manufacturing process.
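To make that deployment flow concrete, the following is a minimal, hypothetical sketch using the OpenVINO Python runtime. The model file name, input shape, and device choices are illustrative assumptions, not details taken from this document or from Intel IT’s actual pipeline.

```python
# Minimal, hypothetical sketch of deploying one trained model across heterogeneous
# hardware with the OpenVINO Python runtime. The IR file name, input shape, and
# device strings below are assumptions for illustration only.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_detector.xml")           # model already converted to OpenVINO IR

# The same model can be retargeted by changing a single device string,
# e.g. "CPU", "GPU" (integrated graphics), or another supported accelerator.
compiled = core.compile_model(model, device_name="CPU")

image = np.random.rand(1, 3, 224, 224).astype(np.float32)   # placeholder input tensor
result = compiled([image])                                    # run synchronous inference
scores = result[compiled.output(0)]
print("Predicted class:", int(np.argmax(scores)))
```

The point of the sketch is that the application code stays the same while the compile target changes, which is what “deployment across heterogeneous hardware” amounts to in practice.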

Intel IT is focused on integrating AI into critical business processes to help the company address exponentially growing business complexity in product development, manufacturing, sales, and supply chain. Our Sales AI platform simplifies account management and recommends actions based on both customer and market activities, while AI-based transformation of our supply chain has reduced time to decision from six months to one week. IT collaboration with product testing teams built AI into Intel’s validation processes, saving significant time and money. Most recently, we worked with Intel’s Client Computing Group to incorporate AI (Intel® Dynamic Tuning Technology) into select designs of an upcoming Intel® Core™ processor generation. This new feature will offer AI-based pre-trained algorithms to predict workloads, allow higher turbo boost when responsiveness is needed, and allow extended time in turbo for sustained workloads.

Intel IT’s AI collaboration with Intel’s business units has delivered clear, validated business value of more than $1 billion over the last three years. We have hundreds of AI models running every day at Intel. These solutions have helped us achieve predictable business outcomes, better products, and optimized manufacturing processes at scale. Using AI insights also helps Intel reduce product cost and time to market. We are rapidly expanding our AI efforts across Intel as we see the value it has already delivered and its enormous potential.


Prakash Mallya

Vice President, Sales and Marketing Group, and Managing Director, Intel India

Prakash Mallya is Vice President in the Sales and Marketing Group and India country manager at Intel Corporation. He is responsible for developing new growth areas for the company in the region. Mallya joined Intel in 2000 as a business development manager for the financial services segment in India. Earlier in his Intel career, Mallya had overall responsibility for sales, marketing and the enabling of Intel products and solutions across Southeast Asia. Mallya also previously served as country manager for Malaysia and Singapore. As head of sales and marketing operations in those countries, he was responsible for the growth of Intel’s business through channel distribution, local PC and server manufacturers and multinationals. He has held various leadership roles in multiple regions during his two decades at Intel. Mallya holds a bachelor’s degree in electrical and electronics engineering from Regional Engineering College, Tiruchirappalli, and earned his MBA degree from Bharathidasan Institute of Management, Tiruchirappalli, both in India.


AI Deep Learning Workloads Demand A New Approach To Infrastructure
GPUs Dominate Now, But A Broader Landscape Of AI Chips And Systems Is Evolving Quickly

by Mike Gualtieri and Christopher Voce
May 4, 2018 | Updated: May 18, 2018

Key Takeaways

AI Deep Learning Workloads Thrive With Massively Parallel Architectures
AI chips are a collection of traditional and emerging options, sometimes sporting thousands of cores, specifically designed to perform computations conducive to deep learning. Without AI chips such as graphics processing units (GPUs), deep learning would not be practical.

GPUs Got This Party Started
Nvidia GPUs are the most popular chips for deep learning. But field programmable gate arrays (FPGAs) and a parade of new options from vendors such as Intel and startups are on the way.

Buy Now, But Prepare For Obsolescence
Enterprises must do AI, therefore they must do deep learning, and therefore they must use AI chips and systems. The AI chips and systems you buy or use in the cloud today will be obsolete in about one year because AI chip innovation is so rapid.

Why Read This Report
One breakthrough of AI is deep learning: a branch of machine learning that can uncannily identify objects in images, recognize voices, and create other predictive models by analyzing enterprise data. Deep learning can use regular CPUs, but for serious enterprise projects, data science teams must use AI chips such as GPUs that can handle massively parallel workloads to more quickly train and retrain models on large data sets. This report will help I&O professionals understand their AI infrastructure options — chips, systems, and cloud — to execute on deep learning.


© 2018 Forrester Research, Inc. Opinions reflect judgment at the time and are subject to change. Forrester®, Technographics®, Forrester Wave, TechRadar, and Total Economic Impact are trademarks of Forrester Research, Inc. All other trademarks are the property of their respective companies. Unauthorized copying or distributing is a violation of copyright law. [email protected] or +1 866-367-7378

Forrester Research, Inc., 60 Acorn Park Drive, Cambridge, MA 02140 USA | +1 617-613-6000 | Fax: +1 617-613-5000 | forrester.com

Table Of Contents

AI Is The Fastest-Growing Workload On The Planet

AI Workloads Require AI Chips And Systems

GPUs Are The Dominant Option For Training, But The Landscape Is Diverse

Recommendations

Buy Short-Term, Think Long-Term

What It Means

Cleverness Can’t Compete Without Brute Force

Supplemental Material

Related Research Documents

AI Is Ready For Employees, Not Just Customers

Automation Drives The I&O Industrial Revolution

Deep Learning: An AI Revolution Started For Courageous Enterprises

TechRadar™: Automation Technologies, Robotics, And AI In The Workforce, Q2 2017

FOR INFRASTRUCTURE & OPERATIONS PROFESSIONALS

AI Deep Learning Workloads Demand A New Approach To Infrastructure
GPUs Dominate Now, But A Broader Landscape Of AI Chips And Systems Is Evolving Quickly

by Mike Gualtieri and Christopher Voce
with Srividya Sridharan, Michele Goetz, and Renee Taylor

May 4, 2018 | Updated: May 18, 2018



AI Is The Fastest-Growing Workload On The Planet

AI is not one monolithic technology.1 It is composed of building-block technologies, one of which is deep learning.2 Deep learning is a branch of machine learning that can uncannily identify objects in images, recognize voices, and create other predictive models by analyzing enterprise data. Enterprise AI engineering teams use deep learning to build AI models, and application development teams use those models to add AI smarts to applications.3 There’s one important thing about AI deep learning: It has an insatiable appetite for silicon, requiring the compute power to accommodate two types of workloads (see Figure 1):4

› Training to build models. AI engineers and data scientists use deep learning frameworks such as TensorFlow or Microsoft Cognitive Toolkit to analyze historical data about a specific domain, such as image data about car accident damage. The algorithms analyze that data and correlate it to existing adjusters’ reports. The result is a trained model that can analyze new images of car accident damage to predict the type of damage and cost to repair.5

› Inferencing to make decisions based on trained models. Once AI engineers or data scientists create a model, they use it in production applications to make predictions. For example, an insurer can use a trained deep learning model to analyze photographs of property damage to identify the type of damage and estimate the cost of repairs. Think of inferencing as an input/output service — an application passes the necessary data to a service, and the service uses the inferencing model to return a result.
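As a concrete illustration of these two workloads, the following is a minimal, hypothetical sketch in TensorFlow (one of the frameworks named above). The tiny network and the synthetic “claim photo” arrays are assumptions for illustration only, not an example from the report.

```python
# Minimal, hypothetical sketch of the two deep learning workloads described above:
# (1) training a model on labeled historical data, (2) inferencing on new data with
# the trained model. Assumes TensorFlow 2.x; the tiny network and synthetic data are
# placeholders.
import numpy as np
import tensorflow as tf

# --- Training workload: learn from labeled historical images ---
train_images = np.random.rand(256, 64, 64, 3).astype("float32")  # stand-in photos
train_labels = np.random.randint(0, 2, size=(256,))              # 0 = no damage, 1 = damaged fender

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_images, train_labels, epochs=3, batch_size=32)    # the compute-heavy step

# --- Inference workload: score a new image with the trained model ---
# In production the trained model would be saved and loaded by a serving application;
# here we simply reuse the in-memory model.
new_image = np.random.rand(1, 64, 64, 3).astype("float32")        # a "new claim photo"
probabilities = model.predict(new_image)
print("Predicted class:", int(np.argmax(probabilities, axis=1)[0]))
```

Training dominates compute cost because it repeatedly passes the whole data set through the network; inference is a per-request input/output call against the finished model, exactly as the second bullet describes.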


FIGURE 1 Two AI Deep Learning Workloads Need Immense Compute Power

Source: Adapted graphic from Nvidia

[Figure: A training panel shows a deep learning model learning from labeled training data (damaged fender, no damage), with the goal of recognizing automobile damage as competently as a human expert insurance adjuster. An inference panel shows the trained model classifying new data (“Damaged fender”) inside an application that automates automobile damage assessments.]


AI Workloads Require AI Chips And Systems

This isn’t your grandfather’s analytics. It’s not about querying data. Deep learning algorithms are all math. AI infrastructure must not only accommodate big data; it also must supply massive compute capacity for math operations on vectors, matrices, and tensors.6 Deep learning is not practical without special infrastructure that is conducive to both high volumes of data and high volumes of calculations. That’s why AI infrastructure is necessary. That’s why all the internet giants, including Amazon, Facebook, Google, and Microsoft, have massive investments in AI infrastructure. Forrester defines AI infrastructure as:

Integrated circuits, computer systems, and/or cloud services that are designed to optimize the performance of AI workloads, such as deep learning model training and inferencing.
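To give a feel for why that parallel math capacity matters, here is a minimal, hypothetical sketch (not part of the Forrester report) that times the same large matrix multiplication on a CPU and, if the framework can see one, on a GPU. The matrix size and the use of TensorFlow are assumptions for illustration.

```python
# Minimal, hypothetical sketch: the same tensor math (a large matrix multiply) run
# on CPU and, when available, on GPU. Assumes TensorFlow 2.x; matrix size is arbitrary.
import time
import tensorflow as tf

def time_matmul(device: str, n: int = 4000) -> float:
    with tf.device(device):
        a = tf.random.uniform((n, n))
        b = tf.random.uniform((n, n))
        start = time.perf_counter()
        c = tf.matmul(a, b)
        _ = c.numpy()                      # force execution to finish before stopping the clock
        return time.perf_counter() - start

print("CPU:", time_matmul("/CPU:0"), "seconds")
if tf.config.list_physical_devices("GPU"):
    print("GPU:", time_matmul("/GPU:0"), "seconds")
else:
    print("No GPU visible to TensorFlow on this machine.")
```

Deep learning training repeats operations like this billions of times, which is why infrastructure built for massively parallel vector, matrix, and tensor math is the deciding factor.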

AI infrastructure is composed of AI chips, AI systems, and AI cloud systems (see Figure 2 and see Figure 3):

› AI chips cater to specific deep learning and inference demands. Enterprises can run deep learning on regular CPUs, but for more intense enterprise projects, AI engineering teams often employ high-core-count chips such as Nvidia GPUs to train and retrain models on large data sets more quickly.7 GPUs were designed to perform the math operations needed to render complex graphics, but it just so happens that those same math operations can also be used for deep learning as well as other math-intensive high-performance compute (HPC) applications, such as simulation.8 But GPUs are no longer the only game in town. Chip vendors such as Intel, cloud providers such as Google (whose AI chips are available only in its cloud), and a slew of startups offer alternative chips conducive to deep learning. Intel is also optimizing future Xeons to handle more AI workloads with enhancements to the processor instruction set.9

› AI systems are packaged infrastructure solutions. AI systems can be as simple as dropping a GPU card into an existing computer system. Many AI engineering teams do just that to an existing workstation. However, for more intense deep learning projects, more compute power is necessary. Vendors such as Cray, Dell, IBM, and Hewlett Packard Enterprise (HPE) have developed AI-specific servers and offerings. These are often based on existing HPC systems and add in one or more processing cards full of GPUs, such as Nvidia’s TESLA P100. Vendors also bundle AI systems with the software necessary for AI engineering teams to do projects. Nvidia offers its own GPU-based DGX systems. Systems integrators also offer services to “build” AI systems. Although AI systems are available as on-premises hardware, most vendors offer access to them in the cloud.

› AI cloud solutions offer tremendous scalability and pay-as-you-go pricing. Public cloud providers such as Amazon Web Services (AWS), Google, Microsoft, and others offer instances that are powered by CPUs, GPUs, FPGAs, and other options. Google has designed a chip called a tensor processing unit (TPU) that is optimized to use Google’s popular open source machine learning framework — TensorFlow. Microsoft and AWS are reported to be designing their own chips as well for use in the cloud and on devices.10 Enterprises that wish to get started quickly can always choose a cloud option, but remember: Deep learning workloads can quickly consume resources, so expenses can mount.11

FIGURE 2 AI Infrastructure Is Composed Of Three Elements: AI Chips, AI Systems, And AI Cloud

AI chips can massively parallelize operations amenable to AI model training and/or inferencing.

AI systems include clusters of AI chips and additional high-performance features, such as fast interconnect and data access.

AI cloud provides AI systems on demand, and therefore it is instantly scalable.


FIGURE 3 AI Infrastructure: Representative Vendors And Products

AI chip vendors and product(s):
Graphcore: Intelligence Processing Unit (IPU)
IBM: PowerAI
Intel: Xeon, FPGAs, Nervana Neural Network Processor (NNP), Movidius VPU
Nvidia: Tesla GPU
Wave Computing: Dataflow Processing Unit (DPU)

AI systems vendors and product(s):
Cray: CS-Storm, XC Series
Dell EMC: Ready Solutions
Exxact: Exxact Tensor series
HPE: Apollo Systems
IBM: PowerAI
Lambda Labs: TensorBook, Quad, and Blade
Nvidia: DGX Systems
Wave Computing: Wave Systems

AI cloud vendors and product(s):
Amazon Web Services (AWS): AWS Deep Learning AMIs, GPU- and FPGA-based instances
Google: Cloud TPU; Cloud GPU
IBM: GPU cloud servers
Microsoft: Azure Deep Learning Virtual Machine
Oracle: Oracle Cloud Infrastructure Bare Metal GPU

GPUS ARE THE DOMINANT OPTION FOR TRAINING, BUT THE LANDSCAPE IS DIVERSE

When it comes to AI deep learning, GPUs get all the press. That’s because GPU systems are readily available and dramatically reduce the time necessary to train models. Model training that took days on CPU systems takes hours on GPU systems. But it is still early days for AI and deep learning. Today (see Figure 4):

› Nvidia GPUs dominate the market for training deep learning models. Nvidia was prescient in seeing the demand for deep learning and has outpaced rival chip manufacturers thus far. The most popular deep learning software frameworks work with Nvidia, and most of the hardware and cloud vendors offer systems that include Nvidia GPUs. Full-stream oil and gas firm Baker Hughes, a GE company, uses Nvidia GPUs to create deep learning models for well planning and to predict machinery failure.12 Forrester has interviewed numerous enterprise customers from a diverse set of industries, including banking, insurance, retail, and healthcare — all use Nvidia GPUs to train models. Most of the public cloud providers, including Microsoft and Amazon, also use Nvidia GPUs to train deep learning, although they use other technologies as well.

› Emerging application-specific integrated circuits (ASICs) aim to outperform GPUs. Chip and cloud vendors are not ceding the AI market to Nvidia. In addition to Google and its TPUs, a whole host of existing and startup vendors offer or are designing chips to make deep learning model training even faster. Giants such as Intel offer optimized math engines in Xeon, but they also plan to offer accelerator chips purpose-built for deep learning applications. There are too many startups to list, but noteworthy are Graphcore and Wave Computing, which aim to enter the market with purpose-designed chips for deep learning.

› Inferencing can benefit from different options. While GPUs are the dominant option in training, there are differentiated options for inferencing. Once a model is trained, it can be used in production applications. A trained model has a certain topology that is static until AI engineers or data scientists build another iteration of the model. For example, FPGAs have programmable logic blocks that you can optimize to run trained models faster than GPUs. Intel’s Movidius VPU chips offer lower-power inferencing in edge use cases like surveillance for detection, tracking, and classification.

FIGURE 4 AI Chips Vary In Silicon Architectures

CPUs (central processing units)
• Already present in AI infrastructure; some have AI-optimized instruction sets
• Suitable for experimentation and modest training

GPUs (graphics processing units)
• Hundreds of cores amenable to parallelized operations; ideal for training deep learning models
• Existing support for popular deep learning frameworks like TensorFlow and MXNet

FPGAs (field programmable gate arrays)
• Programmable architecture ideal for inferencing on already-trained models
• Special software is required to translate a trained model to the FPGA’s configurable logic blocks.

ASICs (application-specific integrated circuits)
• Purpose-designed chip architectures to handle AI/deep learning training and/or inferencing workloads
• Vendors that create these chips often label them as IPU, DPU, NNP, etc., to reflect their design and branding.


Recommendations

Buy Short-Term, Think Long-Term

Remember when you got a new laptop every other year because the pace of innovation was so rapid? That’s where we are with AI chips, systems, and cloud. The pace of AI infrastructure innovation is fueled by the insane growth of AI, highly competitive chip and cloud vendors, and deep learning software innovations. It doesn’t mean that enterprises should wait for the dust to settle. No. Enterprises have to move forward with AI and, more importantly, make their scarce AI engineering and data science teams as productive as possible by giving them the most performant infrastructure available to train AI models. Infrastructure and operations (I&O) pros must collaborate with application development and delivery (AD&D) pros to:

› Match AI chips with machine learning frameworks. AI chips are impotent without deep learning software that knows how to use them. AI engineering and data science teams must help decision makers understand what deep learning frameworks they are using now and plan to use in the future. It’s highly likely that they are using more than one. Most frameworks support Nvidia GPUs, but as new AI chips appear, they may not support the frameworks you are using (a minimal compatibility check is sketched after this list). Also, some AI chips will be specifically optimized for a single framework. Google TPUs are AI chips optimized for TensorFlow and are currently available only in Google Cloud. AI engineering teams that use TensorFlow may prefer that option because Google claims that TPUs are orders of magnitude faster than TensorFlow workloads running on GPUs.

› Think “hybrid,” because cloud can get expensive. Cloud often appears to be the perfect future-proof solution for AI, since you pay as you go and cloud providers often offer the latest and greatest AI chips and systems. However, cloud can get more expensive than an on-premises solution if your AI engineers are running workloads around the clock and if you already have open space in a data center. Other factors to consider include the time it may take to convince enterprise security and risk teams to let data into the cloud. A hybrid cloud solution is ideal for most enterprises to optimize the speed of experimentation, overall cost, quick access to new technology, and solution time-to-market.

› Consider AI systems that can leverage future AI chips. Many types of vendors offer AI systems that you can use on-premises or in the cloud, including enterprise systems vendors (Cisco Systems, Cray, Dell, HPE, and IBM), systems integrators, and even Nvidia with its DGX platform. The best of these vendors will architect their systems to replace existing or add new AI chips at a reasonable price. They design these systems to support multiple GPUs and optimize inter-GPU communication.

› Separate architecture options for training and inferencing. The fact that GPUs are the dominant solution for training doesn’t lock you into the same option for inferencing. For instance, Microsoft trains models using GPUs. It then uses Microsoft Brainwave software to convert the models to run on optimized FPGAs in order to rapidly inference them. Power requirements and algorithm optimization might make ASICs, FPGAs, or even CPUs a more attractive option.
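As a small aid to the first recommendation above, here is a minimal, hypothetical compatibility check that asks each installed framework which accelerators it can actually see before you commit to a chip or system. The choice of TensorFlow and PyTorch as the installed frameworks is an assumption for illustration, standing in for whatever mix of frameworks your teams use.

```python
# Minimal, hypothetical check: which accelerators do the deep learning frameworks on
# this machine actually see? TensorFlow and PyTorch are assumed here purely as examples
# of "more than one" framework; swap in whatever your teams run.
def report_visible_devices() -> None:
    try:
        import tensorflow as tf
        devices = [d.device_type for d in tf.config.list_physical_devices()]
        print("TensorFlow sees:", devices)                 # e.g. ['CPU', 'GPU']
    except ImportError:
        print("TensorFlow is not installed.")

    try:
        import torch
        print("PyTorch CUDA available:", torch.cuda.is_available(),
              "| GPU count:", torch.cuda.device_count())
    except ImportError:
        print("PyTorch is not installed.")

if __name__ == "__main__":
    report_visible_devices()
```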


What It Means

Cleverness Can’t Compete Without Brute Force

Remember the Planet of the Apes movies? The orangutans are the politicians. Chimpanzees are the scientists. Gorillas are the muscle. Well, AI infrastructures are . . . the gorillas of AI. Enterprises and I&O leaders who wish to leverage AI to remain or become leaders in their industry must equip their AI engineering and data science teams with the best and fastest tools. That certainly means staying abreast of open source innovation and leveraging differentiated enterprise data. But it also means providing those same teams with the fastest possible AI infrastructure to accelerate the AI business innovation life cycle. Why take three days to train one iteration of a deep learning model when you could do it in 1 hour? The algorithms, amount of data, and number of iterations necessary to train a good model will only get more intense. Don’t make data science and AI engineering teams beg I&O for AI infrastructure, or your enterprise will fall behind.

Engage With An Analyst

Gain greater confidence in your decisions by working with Forrester thought leaders to apply our research to your specific business and technology initiatives.

Forrester’s research apps for iOS and Android. Stay ahead of your competition no matter where you are.

Analyst Inquiry

To help you put research into practice, connect with an analyst to discuss your questions in a 30-minute phone session — or opt for a response via email.

Learn more.

Analyst Advisory

Translate research into action by working with an analyst on a specific engagement in the form of custom strategy sessions, workshops, or speeches.

Learn more.

Webinar

Join our online sessions on the latest research affecting your business. Each call includes analyst Q&A and slides and is available on-demand.

Learn more.


Supplemental Material

COMPANIES INTERVIEWED FOR THIS REPORT

We would like to thank the individuals from the following companies who generously gave their time during the research for this report.

Amazon

Google

Graphcore

Intel

Microsoft

NVIDIA

Endnotes

1 Forrester offers two definitions for artificial intelligence: pure AI and pragmatic AI. See the Forrester report “Artificial Intelligence: What’s Possible For Enterprises In 2017.” For details on the building block technologies of AI, see the Forrester report “TechRadar™: Artificial Intelligence Technologies, Q1 2017.”

2 For more information on how deep learning is a revolution, see the Forrester report “Deep Learning: An AI Revolution Started For Courageous Enterprises.”

3 See the Forrester report “Deep Learning: An AI Revolution Started For Courageous Enterprises.”

4 By “silicon” we mean integrated circuits also known as chips.

5 See the Forrester report “A Machine Learning Primer For BT Professionals.”

6 In mathematics a vector is a quantity that has a direction and magnitude. In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions that is arranged in rows and columns. In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Source: “What is a Tensor?” Dissemination of IT for the Promotion of Materials Science (DoITPoMS), University of Cambridge (https://www.doitpoms.ac.uk/tlplib/tensors/what_is_tensor.php).

7 GPUs are the most popular AI chips used today. GPUs are also used for graphics, of course, but they are also used for blockchain applications — specifically, mining.

8 Multiple research studies have shown that GPUs are orders of magnitude faster than CPUs to train deep learning models. Source: John Lawrence, Jonas Malmsten, Andrey Rybka, Daniel A. Sabol, and Ken Triplin, “Comparing TensorFlow Deep Learning Performance Using CPUs, GPUs, Local PCs and Cloud,” Semantic Scholar, May 5, 2017 (https://pdfs.semanticscholar.org/42ce/ccb61c35613bc262c47b35e392ec79ac247d.pdf).

9 Intel’s current Xeon Scalable processors include new architecture and instructions that benefit many workloads, including AI deep learning training and inference.

10 Source: Dina Bass and Ian King, “Microsoft pushes further into chip design as it jockeys to be artificial-intelligence leader,” The Seattle Times, July 24, 2017 (https://www.seattletimes.com/business/microsoft-pushes-further-into-chip-design-as-it-jockeys-to-be-artificial-intelligence-leader/) and Jon Swartz and Barron’s, “Amazon Has Designs On A.I. Chips,” Nasdaq, February 25, 2018 (https://www.nasdaq.com/article/amazon-has-designs-on-ai-chips-cm926372).

11 Workloads consume lots of processing power and tend to run for long periods of time.

12 Source: Tony Paikeday, “NVIDIA and Baker Hughes, a GE Company, Pump AI into Oil & Gas Industry,” NVIDIA Blog, January 29, 2018 (https://blogs.nvidia.com/blog/2018/01/29/baker-hughes-ge-nvidia-ai/).


We work with business and technology leaders to develop customer-obsessed strategies that drive growth.

PRODUCTS AND SERVICES

› Core research and tools
› Data and analytics
› Peer collaboration
› Analyst engagement
› Consulting
› Events

Forrester Research (Nasdaq: FORR) is one of the most influential research and advisory firms in the world. We work with business and technology leaders to develop customer-obsessed strategies that drive growth. Through proprietary research, data, custom consulting, exclusive executive peer groups, and events, the Forrester experience is about a singular and powerful purpose: to challenge the thinking of our clients to help them lead change in their organizations. For more information, visit forrester.com.

CLIENT SUPPORT

For information on hard-copy or electronic reprints, please contact Client Support at +1 866-367-7378, +1 617-613-5730, or [email protected]. We offer quantity discounts and special pricing for academic and nonprofit institutions.

Forrester’s research and insights are tailored to your role and critical business initiatives.

ROLES WE SERVE

Marketing & Strategy Professionals
CMO
B2B Marketing
B2C Marketing
Customer Experience
Customer Insights
eBusiness & Channel Strategy

Technology Management Professionals
CIO
Application Development & Delivery
Enterprise Architecture
› Infrastructure & Operations
Security & Risk
Sourcing & Vendor Management

Technology Industry Professionals
Analyst Relations



ABOUT TATA CONSULTANCY SERVICES LTD. (TCS)

Tata Consultancy Services is an IT services, consulting, and business solutions organization that has been partnering with many of the world’s largest businesses in their transformation journeys for over 50 years. TCS offers a consulting-led, cognitive-powered, integrated portfolio of business, technology, and engineering services and solutions. This is delivered through its unique Location Independent Agile™ delivery model, recognized as a benchmark of excellence in software development.

A part of the Tata group, India’s largest multinational business group, TCS has over 448,000 of the world’s best-trained consultants in 46 countries. The company generated consolidated revenues of US $22 billion in the fiscal year ended March 31, 2020, and is listed on the BSE (formerly Bombay Stock Exchange) and the NSE (National Stock Exchange) in India. TCS’ proactive stance on climate change and award-winning work with communities across the world have earned it a place in leading sustainability indices such as the Dow Jones Sustainability Index (DJSI), MSCI Global Sustainability Index, and the FTSE4Good Emerging Index.

For more information, visit us at www.tcs.com.