
Page 1

Page 2

Today’s Data Centers

• O(100K) servers per data center

• Tens of megawatts; difficult to power and cool

• Very noisy

• Security taken very seriously

• Incrementally upgraded: 3-year server depreciation, with purchases quarterly

• Applications change very rapidly (weekly, monthly)

• Many advantages, including economies of scale, data all in one place, etc.

• Very high volumes mean every efficiency improvement counts

• At data center scale, an improvement doesn't need to be an order of magnitude to make sense

• Positive ROI is easier to achieve at large scale

• How can we improve efficiencies? (See the sketch below.)
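To make the scale argument concrete, here is a back-of-the-envelope sketch; all numbers are illustrative assumptions, not figures from the talk:

```python
# Back-of-the-envelope ROI of an efficiency improvement at data center scale.
# All inputs are illustrative assumptions, not numbers from the talk.

SERVERS = 100_000             # O(100K) servers per data center
COST_PER_SERVER_YEAR = 3_000  # assumed all-in cost (amortized capex + power), $/year
EFFICIENCY_GAIN = 0.10        # a modest 10% improvement, far from an order of magnitude
NRE_COST = 10_000_000         # assumed one-time engineering cost, $

annual_savings = SERVERS * COST_PER_SERVER_YEAR * EFFICIENCY_GAIN
payback_years = NRE_COST / annual_savings

print(f"Annual savings: ${annual_savings:,.0f}")    # $30,000,000
print(f"Payback period: {payback_years:.2f} years") # 0.33 years
```

Even at a modest assumed 10% gain, the investment pays back in months, which is why improvements that would never pencil out at small scale make sense at data center volumes.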

Page 3

Microsoft Cloud Services

Page 4

Efficiency via Specialization

[Chart: energy efficiency of fully dedicated ASICs vs. FPGAs vs. general-purpose processors. Source: Bob Brodersen, Berkeley Wireless group]

Page 5

Original Design Requirements

Page 6

Version 1: Designed for Bing

Page 7

Configuration?

Page 8

~2013 Microsoft Open Compute Server

• Two 8-core Xeon 2.1 GHz CPUs
• 64 GB DRAM
• 4 HDDs, 2 SSDs
• 10 Gb Ethernet

Page 9

Catapult V1 Accelerator Card

• Altera Stratix V D5 (2011 part)
  • Logic: ~5.5M ASIC-equivalent gates + registers
  • 2,014 20 Kb memories: 2 × 40b ports @ 200 MHz ≈ 4 TB/sec aggregate
• PCIe Gen 3 x8, 8 GB DDR3
• What if a single FPGA is too small?

[Card diagram: Stratix V FPGA, 8 GB DDR3, PCIe Gen 3 x8]
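The ~4 TB/sec figure is just the block-RAM geometry multiplied out; here is the arithmetic as a quick sanity check (reading "2 × 40b @ 200 MHz" as two 40-bit ports per block, which is my interpretation of the slide):

```python
# Sanity check of the ~4 TB/sec aggregate on-chip memory bandwidth:
# 2,014 blocks, each assumed to have two 40-bit ports clocked at 200 MHz.

BLOCKS = 2014        # 20 Kb memory blocks in the Stratix V D5
PORTS = 2            # dual-ported block RAM
WIDTH_BITS = 40      # bits per port per cycle
CLOCK_HZ = 200e6     # 200 MHz

bits_per_second = BLOCKS * PORTS * WIDTH_BITS * CLOCK_HZ
print(f"{bits_per_second / 8 / 1e12:.1f} TB/s")  # 4.0 TB/s
```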

Page 10

Scalable Reconfigurable Fabric

• 1 FPGA board per Server

• 48 Servers per ½ Rack

• 6x8 Torus Network among FPGAs

• 12.5 Gb/s over SAS SFF-8088 cables

Data Center Server (1U, ½ width)
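To illustrate the 6×8 torus wiring, here is a minimal sketch of neighbor computation with wraparound (my own illustration; the production FPGA routing logic is not shown in the talk):

```python
# Neighbors in a 6x8 2-D torus of FPGAs (one per server, 48 per half-rack).
# Illustrative sketch only, not the production routing logic.

ROWS, COLS = 6, 8

def neighbors(row: int, col: int) -> list[tuple[int, int]]:
    """Return the four torus neighbors of FPGA (row, col), with wraparound."""
    return [
        ((row - 1) % ROWS, col),  # north
        ((row + 1) % ROWS, col),  # south
        (row, (col - 1) % COLS),  # west
        (row, (col + 1) % COLS),  # east
    ]

# A corner node wraps to the opposite edges:
print(neighbors(0, 0))  # [(5, 0), (1, 0), (0, 7), (0, 1)]
```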

Page 11

Version 2: Designed for all of Microsoft

Page 12

Economies of Scale

In the data center, sameness == goodness

• Not possible everywhere, e.g., GPU SKUs

There is already enough divergence just from new components

• E.g., Intel processors change every 1-2 years

Divergence is very costly

• Stock spares for each type of machine

• Many parts already end-of-lifed

• Keeping track of what needs to be purchased

• Reduced volumes == increased pricing

V1 was designed for Bing; could others use V1?

Can we design a system that is useful for all data center users?

Page 13

FPGAs to Azure

• At the time, Microsoft was moving to a "single SKU"

• Azure needed to adopt FPGAs as well

• Azure networking saw a need for:
  • Network crypto
  • Software-defined networking acceleration

• But could you do it with the V1 architecture?
  • V1 was designed without network modifications, because Azure was not yet on board

• Developed a new architecture to accommodate all known uses

Page 14

Converged Bing/Azure Architecture

[Diagram: WCS 2.0 server blade with Catapult V2. Two CPUs (each with DRAM) linked by QPI; the FPGA has its own DRAM; PCIe links (Gen3 2x8, Gen3 x8) attach the FPGA and NIC to the host, and 40 Gb/s QSFP links run NIC-to-FPGA and FPGA-to-switch, placing the FPGA between the NIC and the network. Photos: WCS Gen4.1 blade with Mellanox NIC and Catapult FPGA, "Pikes Peak," WCS tray backplane, option card mezzanine connectors, Catapult V2 mezzanine card]

• Completely flexible architecture:

1. Can act as a local compute accelerator

2. Can act as a network/storage accelerator

3. Can act as a remote compute accelerator

Page 15

Network Connectivity (IP)

Page 16

How Should We View Our Data Center Servers?

Depends on your point of view!

Page 17

http://www.vectorworldmap.com/vectormaps/vector-world-map-v2.1.jpg

Page 18

http://texina.net/world_map_chinese.jpg

Page 19

A Texan’s View of the US

From Cardcow.com

Page 20

http://pandawhale.com/post/15302/california-for-beginners

Page 21

Classic View of Computer

[Diagram: the CPU at the center, connected to DRAM, storage, and the network]

Page 22

Networking View of Computer

[Diagram: the CPU with DRAM and storage; a network "offload" accelerator sits on the path between the CPU and the network]

Page 23

“Offload” Accelerator view of Server

[Diagram: the CPU with DRAM, storage, and NIC, plus several accelerators (Acc) hanging off it; the CPU still mediates between the network and the accelerators]

Page 24

Our View of a Data Center Computer

[Diagram: the FPGA sits directly on the network, in front of the rest of the machine; behind it are the CPUs with their DRAM, storage, and other accelerators, so every packet passes through the FPGA first]

Page 25

Benefits

• Software receives packets slowly
  • Interrupt or polling
  • Parse the packet, start the right work

• The FPGA processes every packet anyway
  • Packet arrival is an event the FPGA deals with
  • Identify FPGA work; pass CPU work to the CPU

• Map common-case work to the FPGA
  • The processor never sees the packet
  • Can read/modify system memory to keep app state consistent

• The CPU is a complexity-offload engine for the FPGA!

• Many possibilities (sketch below)
  • Distributed machine learning
  • Software-defined networking
  • Memcached GET
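To make the dispatch discipline concrete, here is a minimal software model (purely illustrative: the handler names and packet-kind set are my own inventions, and the real fast path is gateware, not Python):

```python
# Illustrative model of the FPGA-first discipline described above: every
# packet hits the FPGA; common-case work stays in the fabric, and only the
# complex residue is escalated to the CPU. All names here are hypothetical.

COMMON_CASES = {"memcached_get", "sdn_rewrite", "ml_inference"}

def handle_in_fabric(packet: dict) -> str:
    """Fast path: work the fabric pipeline completes; the CPU never sees it."""
    return f"FPGA handled {packet['kind']}"

def escalate_to_cpu(packet: dict) -> str:
    """Slow path: the CPU acts as the FPGA's complexity-offload engine."""
    return f"CPU handled {packet['kind']}"

def on_packet_arrival(packet: dict) -> str:
    if packet["kind"] in COMMON_CASES:
        return handle_in_fabric(packet)
    return escalate_to_cpu(packet)

print(on_packet_arrival({"kind": "memcached_get"}))  # FPGA handled memcached_get
print(on_packet_arrival({"kind": "tcp_setup"}))      # CPU handled tcp_setup
```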

Page 26

Case 1: Use as a local accelerator

Bing Ranking as a Service

Page 27

Bing Document Ranking Flow

• Selection-as-a-Service (SaaS): find all docs that contain the query terms; filter and select candidate documents for ranking

• Ranking-as-a-Service (RaaS): compute scores for how relevant each selected document is to the search query; sort the scores and return the results

[Diagram: a query fans out to SaaS instances (SaaS 1-48), each drawing on IFM 1-44; the selected documents flow to RaaS instances (RaaS 1-48), which return the "10 blue links"]

Page 28

FE: Feature Extraction

Example: for the query "FPGA Configuration" and a given document, FE produces features such as NumberOfOccurrences_0 = 7, NumberOfOccurrences_1 = 4, NumberOfTuples_0_1 = 1.

[Pipeline: {Query, Document} → ~4K dynamic features → ~2K synthetic features → L2 score → document score]
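A toy software version of the dynamic-feature computation may help; it mirrors the example features above (illustrative only: real FE computes ~4K features per {query, document} pair, in hardware):

```python
# Toy dynamic-feature extraction for a {query, document} pair, mirroring the
# example features above. Illustrative only: real FE computes ~4K features.

def extract_features(query: str, document: str) -> dict[str, int]:
    q_terms = query.lower().split()
    doc_terms = document.lower().split()
    features = {}
    # NumberOfOccurrences_i: how often query term i appears in the document.
    for i, term in enumerate(q_terms):
        features[f"NumberOfOccurrences_{i}"] = doc_terms.count(term)
    # NumberOfTuples_0_1: query terms 0 and 1 adjacent and in order.
    features["NumberOfTuples_0_1"] = sum(
        1 for a, b in zip(doc_terms, doc_terms[1:]) if [a, b] == q_terms[:2]
    )
    return features

doc = "fpga configuration tools ease fpga reuse and configuration management"
print(extract_features("FPGA Configuration", doc))
# {'NumberOfOccurrences_0': 2, 'NumberOfOccurrences_1': 2, 'NumberOfTuples_0_1': 1}
```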

Page 29

Bing Production Results

[Plot: 99.9th-percentile query latency versus queries/sec, software vs. FPGA. Curves: 99.9% software latency and 99.9% FPGA latency; markers: average software load and average FPGA query load]

Page 30

Case 2: Use as a remote accelerator

Page 31

Feature Extraction FPGA faster than needed

• A single feature-extraction FPGA is much faster than a single server

• Wasted capacity and/or wasted FPGA resources

• Two choices:
  • Somehow reduce performance and save FPGA resources
  • Allow multiple servers to use a single FPGA?

• Use the network to transfer requests and return responses (see the sketch below)
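A quick capacity sketch of the sharing option (all numbers are illustrative assumptions, not measurements from the talk):

```python
# If one feature-extraction FPGA is k times faster than one server can feed
# it, roughly k servers can share it over the network. Numbers are assumed.

FPGA_THROUGHPUT = 1_000_000  # assumed FE requests/sec one FPGA sustains
SERVER_DEMAND = 125_000      # assumed FE requests/sec one server generates

print(f"One FPGA can serve ~{FPGA_THROUGHPUT // SERVER_DEMAND} servers")  # ~8
# The price is a network hop each way; the low-latency transport (LTL) on
# the next slide is what keeps that overhead small.
```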

Page 32

Inter-FPGA communication

[Diagram: two racks, each with a ToR switch and four servers (CS0-CS3 and SP0-SP3); in every server the FPGA sits between the NIC and the ToR, and the ToRs connect upward through the L0 and L1/L2 switch tiers]

• FPGAs can encapsulate their own UDP packets

• Low-latency inter-FPGA communication (LTL)

• Can provide strong network primitives

• But this topology opens up other opportunities
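In software terms, FPGA-generated UDP encapsulation might look like the following sketch (the header fields and port number here are hypothetical; the talk does not specify the LTL wire format):

```python
# Illustrative encapsulation of an inter-FPGA message in a UDP datagram.
# The header layout (destination FPGA id, sequence number) and the port
# number are hypothetical; the real LTL wire format is not given in the talk.

import socket
import struct

LTL_PORT = 51000  # assumed UDP port for FPGA-to-FPGA traffic

def ltl_datagram(dest_fpga: int, seq: int, payload: bytes) -> bytes:
    """Prepend a minimal (hypothetical) LTL header to the payload."""
    return struct.pack("!HI", dest_fpga, seq) + payload

def send_ltl(dest_ip: str, dest_fpga: int, seq: int, payload: bytes) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(ltl_datagram(dest_fpga, seq, payload), (dest_ip, LTL_PORT))

# Example: ship a feature-extraction request to the FPGA behind 10.0.0.7.
send_ltl("10.0.0.7", dest_fpga=3, seq=42, payload=b"FE request bytes")
```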

Page 33

Hardware Acceleration as a Service Across the Data Center (or even across the Internet)

[Diagram: racks of ToRs and compute servers (CS) across the data center, with FPGA pools allocated to different workloads: Bing Ranking SW, Bing Ranking HW, HPC, speech to text, large-scale deep learning]

Page 34

Current Bing Acceleration: DNNs

Page 35

BrainWave: Scaling FPGAs to Ultra-Large DNN Models

• Use FPGAs to implement deep neural network evaluation (inference)

• Map model weights to internal FPGA memories
  • Huge amounts of bandwidth

• Since FPGA memories are limited, distribute models across as many FPGAs as needed

• Use HaaS to manage multi-FPGA execution, LTL to communicate

• Designed for a batch size of 1
  • GPUs and Google's TPU are designed for larger batch sizes, which increases queuing delay or decreases efficiency
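The partitioning arithmetic is simple; a sketch under assumed capacities (both numbers below are illustrative, not BrainWave specifications):

```python
# How many FPGAs does a model need if all weights must live in on-chip
# memory? Both capacities are illustrative assumptions, not BrainWave specs.

import math

MODEL_WEIGHTS_MB = 600   # assumed model size in MB
SRAM_PER_FPGA_MB = 40    # assumed usable on-chip memory per FPGA in MB

print(f"Distribute across {math.ceil(MODEL_WEIGHTS_MB / SRAM_PER_FPGA_MB)} FPGAs")  # 15
# A single batch-size-1 request then streams through this pipeline of FPGAs
# over LTL, rather than waiting for a large batch to fill.
```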

Page 36

Case 3: Use as an infrastructure accelerator

Page 37

FPGA SmartNIC for Cloud Networking

• Azure runs Software-Defined Networking on the hosts
  • Software Load Balancer, Virtual Networks: new features each month

• Before, we relied on ASICs to scale and to be COGS-competitive at 40G+
  • The 12-to-18-month ASIC cycle plus the time to roll out new HW is too slow to keep up with SDN

• SmartNIC gives us the agility of SDN with the speed and COGS of HW
  • The base SmartNIC provides common functions like crypto, GFT, QoS, and RDMA on all hosts

[Diagram: in the VFP/VMSwitch, SDN layers (SLB Decap, SLB NAT, VNET, ACL, Metering) each match a rule and apply an action (Decap*, DNAT*, Rewrite*, Allow*, Meter*); a transposition engine composes the per-layer actions into a single GFT flow action (e.g., Decap, DNAT, Rewrite, Meter: 1.2.3.1->1.3.4.1, 62362->80), which the SmartNIC applies at 50G alongside QoS, crypto, and RDMA while the VM bypasses the host via SR-IOV]
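A toy model of the rule/action composition the transposition engine performs (the dict-based packet representation and helper names are my own; real GFT matches and rewrites parsed headers in hardware):

```python
# Toy model of GFT-style action composition: each SDN layer contributes a
# header transformation, and the transposition engine collapses them into
# one rewrite applied per flow. Representation is illustrative, not real GFT.

from functools import reduce

def slb_decap(h):    return {k: v for k, v in h.items() if k != "outer_ip"}
def slb_dnat(h):     return {**h, "dst_ip": "1.3.4.1", "dst_port": 80}
def vnet_rewrite(h): return {**h, "vnet_id": 7}
def metering(h):     return {**h, "metered": True}

LAYERS = [slb_decap, slb_dnat, vnet_rewrite, metering]  # ACL omitted: Allow*

def composed_action(header: dict) -> dict:
    """Apply all per-layer actions as one transposition, like a GFT flow."""
    return reduce(lambda h, layer: layer(h), LAYERS, header)

pkt = {"outer_ip": "100.1.1.1", "dst_ip": "1.2.3.1", "dst_port": 62362}
print(composed_action(pkt))
# {'dst_ip': '1.3.4.1', 'dst_port': 80, 'vnet_id': 7, 'metered': True}
```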

Page 38

Azure Accelerated Networking

• SR-IOV turned on
  • The VM accesses NIC hardware directly, sending messages with no OS/hypervisor call

• The FPGA determines the flow of each packet and rewrites headers to be compatible with the data center network

• Reduces latency to roughly bare metal

• Azure now has the fastest public cloud network
  • 25 Gb/s at 25 µs latency

• Fast crypto developed

[Diagram: conventionally, VM traffic passes through the guest OS, hypervisor, and VFP to reach the NIC; with SR-IOV and GFT on the FPGA, the VM talks to the NIC directly]

Page 39

Catapult Academic Program

• Jointly funded by Intel and Microsoft
  • Some system administration/tools servers funded by NSF under FAbRIC

• Hosted in a UTexas supercomputer facility under the FAbRIC project

• Provides:
  • PCIe device driver and shell (initially compiled/encrypted; source access under NDA being discussed with lawyers)
  • "Hello, world!" programs
  • Individual V1 boards sent to you
  • Remote access to 6×48 servers, each with a V1 board

• Accessible with a one-page proposal to [email protected]

• See https://aka.ms/catapult-academic for details

Page 40

Conclusion: Hardware Specialization on Homogeneous Machines

• FPGAs are being deployed in all new Azure and Bing machines
  • Many other properties as well

• Ability to reprogram a data center's hardware
  • Converts homogeneous machines into specialized SKUs dynamically

• Same hardware supports DNNs, Bing features, Azure networking

• Hyperscale performance with low-latency communication
  • Exa-ops of performance with an O(10 µs) diameter