Differentiation in Flexible Data Center Interconnect
© Copyright 2016 Xilinx .
Industry Leadership Momentum
$2.38B FY15 revenue; 3,500 employees worldwide
>55% market segment share
3,500+ patents; 60 industry firsts
Leading process technology: 28nm, 20nm, 16nm, 7nm
Founded 1984; 20,000 customers
Momentum in Multi-Market Growth Drivers
Smarter. Connected. Differentiated.
Cloud Computing Embedded Vision Industrial IoT 5G Wireless
On Demand, Scalable, Efficient, SDN
Differentiation in Data Center Interconnect (DCI)
Why is Data Center Interconnect Needed?
Site limitations force DCs to be diversely located within the metro
– Multiple DCs a few km apart, powered from different sub-stations
Economically driven data center locations
– Drivers include subsidies, power cost, environment & scale
Transactional magnification: a single user request fans out into many inter-DC transactions
Large distances between user and data
Optimization of IT infrastructure spend
*Source: "Rise of High-Capacity Data Center Interconnect in Hyper-Scale Service Provider Systems," ACG Research; Infinera Cloud Xpress infographic
Why is Data Center Interconnect Needed?*
100G DCI ports needed, as a percentage of the number of servers in the building
– Ranges from 1.25% for a hyperscale DC down to 0.6% for a medium-size DC
– e.g., a hyperscale DC with 200,000 servers needs ~2,520 100G ports, i.e. ~252 Tb/s
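The slide's sizing rule is simple arithmetic, and can be sketched as below. The function name and the 1.26% ratio are illustrative (the slide's figure of ~2,520 ports for 200,000 servers implies a ratio slightly above the quoted 1.25%):

```python
# Hypothetical back-of-envelope sizing of 100G DCI ports, based on the
# server-count ratios cited on this slide (ACG Research). Names are
# illustrative, not from any Xilinx tool.

def dci_ports_needed(num_servers: int, ratio: float) -> int:
    """100G DCI ports as a fraction of the building's server count."""
    return round(num_servers * ratio)

# Hyperscale DC: ~1.26% of 200,000 servers, as implied by the slide's figures
ports = dci_ports_needed(200_000, 0.0126)
capacity_tbps = ports * 100 / 1000  # each port carries 100 Gb/s

print(ports, capacity_tbps)  # 2520 252.0
```

The same function with a 0.6% ratio reproduces the medium-size DC case.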
*Source: "Rise of High-Capacity Data Center Interconnect in Hyper-Scale Service Provider Systems," ACG Research
Need to Connect Clouds to the Transport Boxes
Want large connections between devices
– Physical ports carry a large pipe
– Maximize the use of deployed transport gear
Address router & transport asymmetry
– Transport is expensive; maximize the entire link budget
– Decouples router and transport development
Seamless mechanism to aggregate ports into a single link
– Channelization to address future MLG applications
Facilitate bandwidth on demand (BWoD)
– Easily grow and shrink links; add and tear down links as needed
Transport Today
Transport 2017
Diagram Source: Ethernet Alliance
DCI Transport: Automation Network Discovery – Key for Mega DCs
DCI customers want to control the entire link through SDN (high-level orchestration)
The SDN controller should know the details of all connectivity along the link
– Routers talk to each other with LLDP (Link Layer Discovery Protocol)
– Automatically detect which ports are connected and build a map of the connections
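The discovery step above can be sketched as follows. This is a minimal illustration of how a controller might fold per-device LLDP neighbor reports into a link map, not a real LLDP implementation; the device and port names are hypothetical:

```python
# Minimal sketch (assumed data model, not real LLDP frames): an SDN
# controller collects neighbor reports from each device and deduplicates
# them into a set of undirected links.

# Each report: local (chassis, port) -> neighbor (chassis, port) seen via LLDP
lldp_reports = [
    (("routerA", "et-0/0/1"), ("transport1", "client-1")),
    (("transport1", "client-1"), ("routerA", "et-0/0/1")),  # reverse view
    (("routerB", "et-0/0/1"), ("transport2", "client-1")),
]

def build_link_map(reports):
    """Deduplicate bidirectional LLDP reports into undirected links."""
    links = set()
    for local, remote in reports:
        # frozenset makes A<->B and B<->A the same link
        links.add(frozenset([local, remote]))
    return links

links = build_link_map(lldp_reports)
print(len(links))  # 2 undirected links
```

With such a map the controller can verify that ports are cabled as intended before provisioning the FlexE group.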
[Diagram: Packet Gear (Router/Switch) ↔ >100G Data Pipe ↔ FlexE ↔ QSFP28 links ↔ Transport Gear (FlexE ↔ >100G Data Pipe ↔ Transport Frame) ↔ Optical Network ↔ Transport Gear ↔ Packet Gear (Router/Switch). FlexE is transported in a single wavelength; modulations & symbol rates provide flexibility; a Management Entity (SDN) oversees the link]
Security is Critical
Everyone wants to secure their communication
– Data centers are trying to make sure their data is secure
– Today, DCs are very secure inside the building
• See Google's DC videos (biometric entrances, etc.)
– But what about the data in flight between data centers?
The need for security is key
– Security requirements are changing and evolving
– The security function can live in the FPGA
• MACsec or bulk encryption
Involve Xilinx in network security architecture discussions
– We can help you push this angle; this is a key differentiator
Router to Transport
Data Center Interconnect Value Proposition: FlexE System Value
5 Advantages
– Efficient LAG: match transport bandwidth
– Increased interface reliability
– Flexibility for any speed, any client, any network & custom rates
– Breakthrough in network architecture & integration with different generations
– Performance/Watt for data center interconnect
Breaking the marriage between the Ethernet and transport roadmaps
Next-Generation Connectivity
Data Center Interconnection
Transport Infrastructure
Diagram Source: (1) Keynote_Layer0vsLayer1_SDN_Wellbrock.pdf @ POTE 2013 (Verizon; Glen Wellbrock)
(2) OFC 2016 – Cisco diagrams from presentation ‘FlexE, a New Reality’ (OFC Expo Theater, March 23rd 2016)
Generic Block Diagram
DCI boxes are simple
– Client optics on one side; line DSP and optics on the other
Inside the FPGA we want
– Fixed or Flex Ethernet on the client side, FlexE on the line side (no need for OTN here)
– LLDP snooping
– Security features such as MACsec & bulk encryption
[Block diagram: Client Optics ↔ FPGA (FlexE or Fixed Ethernet → Encryption (MACsec / Bulk) → FlexE or other Transport Rate Matching, with LLDP Snooping / Statistics) ↔ DSP Functions ↔ line]
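The FlexE and rate-matching functions above revolve around a slot calendar: each 100G PHY is divided into 5 Gb/s calendar slots, and client rates are built from slots across the bonded group. The sketch below illustrates the idea only; it is not the OIF FlexE Implementation Agreement's actual calendar mechanism, and all names are hypothetical:

```python
# Illustrative FlexE-style calendar allocation (assumed simplification of
# the OIF FlexE slot calendar): each 100G PHY carries 20 slots of 5 Gb/s,
# and each client is assigned enough (phy, slot) pairs to cover its rate.

SLOT_GBPS = 5
SLOTS_PER_PHY = 20  # 20 x 5G = 100G per PHY

def allocate(clients_gbps, num_phys):
    """Greedily assign 5G calendar slots to each client rate (in Gb/s)."""
    free = [(p, s) for p in range(num_phys) for s in range(SLOTS_PER_PHY)]
    alloc = {}
    for name, rate in clients_gbps.items():
        need = rate // SLOT_GBPS
        if need > len(free):
            raise ValueError(f"not enough slots for {name}")
        alloc[name] = [free.pop(0) for _ in range(need)]
    return alloc

# Two bonded 100G PHYs carrying a 150G and a 50G client
a = allocate({"client150": 150, "client50": 50}, num_phys=2)
print(len(a["client150"]), len(a["client50"]))  # 30 10
```

Growing or shrinking a client for bandwidth on demand then amounts to adding or freeing slots in this calendar, which is what makes the FlexE pipe SDN-controllable.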
Simple 500G DCI Chip Based on Xilinx Virtex UltraScale+ FPGA
500G of DCI with future application space for security & aggregation; can scale beyond 1 Tb/s in a VU13P device if no application space is needed
[Block diagram: 5 QSFP28 client ports into a Xilinx Virtex UltraScale+ (VU9P). Each 100GE DCI slice contains: 802.3bj FEC decoding & correction, 100GE RX PCS, 100GE MAC (CMAC hard IP), LLDP snooping, and a 100GE TX path with partial PCS and 802.3bj FEC encoding. Line side: QSFP28 to DSP(s). Application space reserved in the FPGA fabric for the future]
FlexE Demo at OFC 2016
World's first public demonstration of FlexE equipment interoperation
[Demo setup: Spirent n × 100G test set (FlexE + traffic analyzer & generator), connected via QSFP28 to a Xilinx development board implementing the FlexE shim and a FlexMAC (up to 400 Gb/s) with packet generator & monitor; CFP4 line-side ports carry 200/300 Gb/s]
The Xilinx Differentiating Advantage
Combining Software Intelligence with Flexible Hardware Optimization and Any-to-Any Connectivity
ALL PROGRAMMABLE
Technology Leaders Join Forces to Bring an Open Acceleration
Framework to Data Centers and Other Markets
AMD, ARM, Huawei, IBM, Mellanox, Qualcomm, and Xilinx collaborate on the new Cache Coherent Interconnect for Accelerators (CCIX), which will allow multiple processor architectures and accelerators to seamlessly share data
Xilinx Expands its 16nm UltraScale+ Product Roadmap to Include
Acceleration Enhanced Technologies for the Data Center
Combines 16nm UltraScale+ programmable logic with HBM memory and new
accelerator interconnect technology for heterogeneous computing
Acceleration Enhanced FPGAs
Thank You.
Aerospace & Defense
Industrial & Medical
Test, Measure & Emulation
Automotive
Wireless Communications
Audio, Video & Broadcast
Consumer
Wired Comms & Data Center
Contact ([email protected]) to see
Xilinx’s Data Center Interconnect Demonstration
Video 1: FlexE 200G & 300G; Video 2: Tier 1 platform