DATA CENTER SOLUTION
Vladimir Urayev
Sr. DC Sales Specialist, EMEA
Industry Trends
CLOUD ADOPTION
Platforms and Vendors of Choice are … Shifting
Data Centers and Clouds are Changing
7.7 ZB [2] — projected data center network traffic by 2017 (1 ZB = 1 trillion GB)
48% [1] — data center outages caused by human error
82% [3] — enterprises that have a hybrid cloud strategy
Scale & Automation will be critical to creating future-proof clouds and data centers.
1. Ponemon Institute, 2013 Study on Data Center Outages
2. Cisco Global Cloud Index
3. RightScale 2015 State of the Cloud Report
Cloud Adoption Driving Higher Bandwidth
100GbE spending growing to nearly $8 billion over the next 4 years
Source: Dell’Oro Ethernet Switch Market Update 1Q2014
WHY JUNIPER NETWORKS?
JUNIPER’S DATA CENTER SOLUTIONS HELP ADDRESS CHALLENGES
• How does Juniper’s solution help address challenges?
• How does Juniper’s solution help create value-added cloud services?
Technology Innovations
• Open standards
• Chipset innovations
• Software innovations
Partnerships
• Solution Partnerships
• Product Partnerships
• VARs, PS Partners
Proven Solutions
• Tested and validated designs
• Design, Implementation guides
Market Deployments
• Close interaction with leading edge customers
• Feedback loops
JUNIPER’S DATA CENTER SOLUTION Consists of a Rich Set of Partners and Ecosystem Members
DCI
MX vMX
PTX QFX
SECURITY
vGW, SRX
SDN
Contrail
SWITCH FABRIC
QFX
VC Fabric
IP Clos
Storage
Virtualization/
Cloud
HBA/NIC
Global
Load-balancing
Metro
Transport
Handsets
Security
DATA CENTER CUSTOMERS
Cloud Operator, SP background
Cloud Operator, automation background
Financial Data Center
Content Provider, Data Center
Enterprise Data Center
QFX10000 LINE OF SWITCHES
Industry’s most scalable, open and future-proof spine/core cloud switches
Internet
MX (USG)
Virtual & Physical Security
QFX, EX, and QFabric Switching
Private Cloud
Hosted/Managed
MX (USG)
Virtual & Physical Security
QFX, EX, and QFabric Switching
Private Cloud
Public Cloud (Hybrid)
Junos Space
Network Director
WAN
Multi-Data Center, Multi-Cloud, One Network Architecture
Campus and Branch
ANY NETWORK OR SDN
Networking End to End
Cloud Switching Portfolio
How to Fit into a Spine-Leaf Solution
SPINE
MODULAR
LEAF
FIXED
EX9200 QFX10000
QFX5100
QFX5100-24Q QFX10002
10 GIGABIT ETHERNET
OCP NETWORKING
APPLICATION INTEGRATED SWITCHING
SCALE-UP ARCHITECTURE: Up to 480 x 100 GbE ports
GIGABIT ETHERNET
EX4300
QFX5100-24Q-AA
QFX-PFA-4Q
OCX1100
MOST SCALABLE
System Throughput
• The only 100G-capable 2RU switches
• 2x the 100G port density for a given size
• Up to 4x the total system throughput
Buffers
• 100x the buffer size
Logical Tables
• Up to 4x FIB scale
• Up to 8x MAC scale
• Industry’s only 2M host route (or VM) scale
2.88 Tbps (QFX10002-36Q)
5.76 Tbps (QFX10002-72Q)
8-slot: 48 Tbps
16-slot: 96 Tbps
Linux (CentOS)
OPEN
Open & standard automation, monitoring and SDN API
Open architecture to create & run applications alongside JUNOS
Open & standards-based for a multi-vendor network
[Architecture diagram: disaggregated Junos on x86]
• x86 CPU with KVM hypervisor on Linux (Yocto)
• Junos RE 0 and Junos RE 1 run as BSD-based VMs
• Guest VMs host guest apps such as analytics and automation
• PFE and platform software run as native Linux processes
Open, standardized APIs:
• NETCONF, Junos XML-RPC, DMI
• CLI, SNMP, PyEZ, RubyEZ, Junos Script
• Programmable access to control plane, data plane and platform: Thrift, REST, JSON/XML, YANG
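Those open interfaces can be exercised with plain tooling. For instance, a NETCONF `<rpc>` wrapping the standard Junos `get-interface-information` call can be framed with nothing but the Python standard library; the RPC and argument names below are the documented Junos ones, but treat this as an illustrative sketch of the message format rather than a tested client:

```python
import xml.etree.ElementTree as ET

NETCONF_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_netconf_rpc(rpc_name: str, message_id: str = "101", **params) -> str:
    """Frame a Junos RPC (e.g. get-interface-information) as a NETCONF <rpc>."""
    rpc = ET.Element(f"{{{NETCONF_NS}}}rpc", attrib={"message-id": message_id})
    body = ET.SubElement(rpc, rpc_name)
    for key, value in params.items():
        # Junos RPC arguments use hyphenated element names
        arg = ET.SubElement(body, key.replace("_", "-"))
        arg.text = str(value)
    return ET.tostring(rpc, encoding="unicode")

msg = build_netconf_rpc("get-interface-information", interface_name="et-0/0/0")
```

In practice the same RPC is one line in PyEZ (`dev.rpc.get_interface_information(interface_name=...)`); the point here is only that the API surface is plain XML over a standard protocol.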
QFX10002 FIXED SWITCHES
QFX10002-72Q
2RU
5.76 Tbps
• 2RU Fixed Switches:
• 36 x 40G QSFP+ / 12 x 100G QSFP28 / 144 x 10G SFP+
• 72 x 40G QSFP+ / 24 x 100G QSFP28 / 288 x 10G SFP+
• Intel Quad Core Ivy Bridge 2.4 GHz CPU, 16GB SDRAM
• Front-to-back airflow with 3 rear fan trays
• AC & DC Power
• QFX10002-72Q: 2+2 / 2+1 redundancy
• QFX10002-36Q: 1+1 redundancy
QFX10002-36Q QFX10002-72Q
System throughput 2.88 Tbps 5.76 Tbps
10G Density (SFP+) (breakout) 144 288
40G Density (QSFP+) 36 72
100G Density (QSFP28) 12 24
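The 10G breakout densities in the table follow directly from the 4 x 10G breakout on each 40G QSFP+ port; a quick sanity check:

```python
def breakout_10g(qsfp_40g_ports: int) -> int:
    """Each 40G QSFP+ port splits into 4 x 10G SFP+ via breakout cables."""
    return qsfp_40g_ports * 4

# QFX10002-36Q and QFX10002-72Q from the table above
assert breakout_10g(36) == 144
assert breakout_10g(72) == 288
```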
QFX10008 / QFX10016 MODULAR SWITCHES
QFX10008
13RU
8 Slot
48 Tbps
QFX10016
21RU
16 Slot
96 Tbps
• Midplane-less orthogonal interconnect architecture
• 6 cell-based switch fabric cards with N + 1 redundancy
• Redundant Routing Engines
• Intel Quad Core Ivy Bridge 2.4 GHz CPU, 32GB SDRAM
• Front-to-back airflow with 2 rear fan trays
• AC & DC Power with N+1 redundancy
• 8-slot: 6 PSUs, 16-slot: 10 PSUs
• Line Cards:
• 60 x 10G SFP+ with 6 x 40G QSFP+ / 2 x 100G QSFP28
• 36 x 40G QSFP+ / 12 x 100G QSFP28
• 30 x 100G QSFP28 / 24 x 40G QSFP+ + 6 x 100G QSFP28
QFX10008 QFX10016
10G Density (SFP+) (Native) 480 960
10G Density (SFP+) (breakout) 1152 2304
40G Density (QSFP+) 288 576
100G Density (QSFP28) 240 480
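The modular densities are just slots times per-card ports: native 10G from the 60-port SFP+ card, and 10G via breakout from the 36 x 40G card. A quick check against the table:

```python
SLOTS = {"QFX10008": 8, "QFX10016": 16}
NATIVE_10G_PER_CARD = 60   # 60 x 10G SFP+ line card
QSFP40_PER_CARD = 36       # 36 x 40G QSFP+ line card

def native_10g(model: str) -> int:
    """Native 10G density: one 60 x 10G card in every slot."""
    return SLOTS[model] * NATIVE_10G_PER_CARD

def breakout_10g(model: str) -> int:
    """Breakout 10G density: 36 x 40G cards, each 40G port split 4 x 10G."""
    return SLOTS[model] * QSFP40_PER_CARD * 4

assert native_10g("QFX10008") == 480
assert breakout_10g("QFX10016") == 2304
```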
QFX10000-36Q
QFX10000-30C
QFX10000-60S-6Q
Q5 PACKET FORWARDING ENGINE
10GE/40GE/100GE (MAC & PHY)
500 Gbps, 28 nm, 10G/25G SerDes
400GE Ready*
On-Chip Analytics
Adaptive ECMP LB
Flow Table, MPLS
L2, L3 overlay tunnel routing (VXLAN, NVGRE)
High Frequency Monitoring & PTP
L2, L3 - IPv4, IPv6
Virtual Output Queuing &
Traffic Engineering
DEEP BUFFERS: THE NEED
• Incast
• Speed mismatch
• Elephant & mice flows in multipath networks
• Microbursts
[Chart: latency/drain-time scale from 280 usec and 400 usec up through 60 msec, 320 msec, 600 msec – 1.2 seconds, 1.6 – 2 seconds and 8 seconds]
Benefits
o High speed memory & 20-50 msec buffer
o No head of line blocking
o 95% bandwidth efficiency at any traffic load
o Efficient elephant and mice flow handling
o Predictable & measurable quality assurance
Juniper solution
o QFX10000 with Q5 ASIC & hybrid memory cube
o Virtual output queue
o Variable size cell fabric
o Dynamic load balancing
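To see why deep buffering demands so much memory, note that buffer capacity scales linearly with line rate: bytes = rate / 8 x time. The port speed and buffering time below are my own illustration, not figures from the deck:

```python
def buffer_bytes(line_rate_gbps: float, buffer_ms: float) -> int:
    """Bytes needed to absorb buffer_ms of traffic arriving at line rate."""
    return int(line_rate_gbps * 1e9 / 8 * buffer_ms / 1000)

# Absorbing 50 ms of traffic on a single 100 GbE port needs ~625 MB —
# which is why on-chip SRAM alone is not enough and external
# high-speed memory (e.g. hybrid memory cube) is used.
assert buffer_bytes(100, 50) == 625_000_000
```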
Ethernet Fabric
Junos Fusion
SOLUTION ARCHITECTURES
QFX10000
L2/L3
IP/MPLS Fabric
L3
Multi-tierMC-LAG
L2/L3
L2
Virtual Network
Overlays
QFX10000 SCALE
Feature                               QFX10002-36Q / QFX10002-72Q / QFX10008-QFX10016
Routes (IPv4/IPv6)                    256K (all)
Host Routes                           2M (all)
MACs                                  Up to 256K / Up to 512K / Up to 1M
ARPs                                  Up to 144K / Up to 256K / Up to 256K
Multicast Routes / Groups             Up to 128K (all)
VoQs                                  384K
Output Queues                         8 per port
Forwarding Classes / Loss Priorities  8 / 3
VPNs                                  4K
Labels                                32K / 80K / 32K
L2 Domains                            4K at FRS (16K in FRS+1)
GRE Tunnels                           4K
LAGs                                  72 / 144 / 1K
Members per LAG                       64
ECMP                                  64-way
Filters / Policers                    8K per PFE
Filter Terms                          64K per PFE
vMembers                              256K
SPAN / RSPAN / ERSPAN                 48 sessions
SOFTWARE LICENSING
Base: L2, IPv4/v6, routing protocols (OSPFv2/3, PIM v4/v6)
PFL: advanced routing protocols (BGP v4/v6, IS-IS v4/v6); overlays (VXLAN, OVSDB, EVPN-VXLAN)
AFL: MPLS (L3VPN, L2VPN, EVPN, MVPN, FRR)
QFX5100-AA
QFX5100-AA: Application Acceleration Switch
40G Data Center switch
Network application hosting on native Guest VMs
Innovative hardware design
Very low latency, as low as 550 ns*
Carrier grade JUNOS network operating system
Holistic data center network solutions – Connect seamlessly, Simplify network operations
*Based on Broadcom PFE value. Lower latency values can be derived from PFA depending on the custom application logic
Application Acceleration Switch
QFX5100-AA
24 x 40G QSFP+ ports, 4 x 40G FPGA ports
QFX-PFA: Packet Flow Accelerator Module
Optional add-on FPGA Packet Flow Acceleration module
Accelerate compute intensive, real-time business critical application processing
Use Java to program packet flows through FPGA
Lower latency based on logic customization
Packet Flow Accelerator Module
QFX-PFA
24x40GE
Base System – Application Acceleration Switch
Routing Engine –Junos VM (System)
Guest VMUser Application
KVM Hypervisor
Linux Host OS
QFX5100-AA Switch: Use Cases
• Host network applications and containers in guest VMs native to the switch
• Performance monitoring / analytics applications
• Cloud Analytics Engine compute agent, pre-built analytics collector
• Wireshark in text version
• Container support to run Docker on CentOS
• Hadoop MapReduce
Comparing QFX5100 and QFX5100-AA
Feature                  QFX5100             QFX5100-AA
CPU                      Dual Core 1.5 GHz   Quad Core 2.5 GHz
Memory                   8 GB                32 GB
Storage                  32 GB               128 GB
Guest VM I/O Bandwidth   1 Gbps              20 Gbps
MPLS                     Yes                 Yes
L3VPN                    Yes                 Yes
ISSU                     Yes                 Yes (without PFA module)
BGP                      Yes                 Yes
VXLAN                    Yes                 Yes
IS-IS                    Yes                 Yes
Virtual Chassis          Yes                 No
Virtual Chassis Fabric   Yes                 No
IP Clos Fabric           Yes                 No
Junos Fusion
Coherent Data Center Fabric Architecture
Q: When a bear fights a shark, who wins?
A: It depends on whether the fight was on the beach or in the water. We should pick the location where we choose to invest our energy fighting.
Multi-tier
MC-LAG VCF Junos Fusion
IP
Fabric
Ethernet Fabric
JUNOS: one common operating system for all fabrics
Business Critical IT & Private Cloud SaaS, Web Services
QFabric
[Chart: server scale per fabric option — < 1,500, < 4,260, < 6,000 and 10,000+ servers]
Virtual Network
…
What is Junos Fusion?
Data center networking with simplified management
at scale
Open Standards & programmability
IEEE 802.1BR and JSON-RPC APIs
Resilient
Plug-and-play provisioning
1GE-100GE
Junos Fusion
Cascade
Port
Upstream
Port
Extended Port
Server/Storage Ports
1GE/10GE/40GE
Junos Fusion
Junos Fusion Terminology
Aggregation
Device
Satellite
Device
Internal and External Ports
Junos Fusion
Internal Ports
IEEE 802.1BR
External Ports
Any Protocol
Routers,
Switches,
Servers,
Storage,
Appliances
Simplicity of Operations at Scale
Junos Fusion
Single point of configuration & management for a DC POD of up to 128 racks (64 at FRS)
High Availability: Dual Control Planes
Junos Fusion
Active-active Routing Engines for maximum resiliency
Active REs can run different Junos versions for maximum availability & separation
Flexible Deployment Models
• MC-LAG style of provisioning
• Server dual homing across satellites on two different fusion clusters
• Single homed satellites
• Simplified provisioning
• No MC-LAG peer configuration CLI
• ICL VLAN auto-provisioning
• Auto provisioned LAGs between AD and SD
• Server dual homing across two satellites in same fusion cluster
Supported Aggregation and Satellite Devices
Aggregation devices: QFX10000 series (FRS with QFX10002, Junos 15.1X53-D15)
Satellite devices: EX4300 copper (1GE), QFX5100 (10GE/40GE)
Modes of Operation
• Extended Mode
• All traffic from SD is sent to AD for forwarding
• Local Switching Mode
• Traffic to destinations on originating SD switched locally
• Traffic to non-local destinations sent to AD for forwarding
Extended Mode
Aggregation Device
IEEE 802.1BR
Satellite Device
In extended mode, each physical port on the satellite device is represented in the aggregation device's management, control and forwarding planes.
Extended Mode: Forwarding
[Diagram: Aggregation Device and Satellite Device with ports 1 and 2]
• Ethernet traffic enters and leaves the fusion system as a standard frame (Ethernet header + payload)
• Between the satellite and aggregation device, each frame carries an IEEE 802.1BR tag whose ECID identifies the satellite port (ECID: Port 1, ECID: Port 2)
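The ECID bookkeeping can be sketched as a toy model; field names are simplified and the real IEEE 802.1BR E-TAG carries more fields than shown:

```python
from dataclasses import dataclass

@dataclass
class ETaggedFrame:
    """Simplified 802.1BR-tagged frame: the ECID maps to a satellite port."""
    ecid: int
    payload: bytes

def tag_upstream(port: int, payload: bytes) -> ETaggedFrame:
    # Satellite device inserts the E-TAG so the AD knows the ingress port
    return ETaggedFrame(ecid=port, payload=payload)

def untag_downstream(frame: ETaggedFrame) -> tuple:
    # Aggregation device reads the ECID to choose the egress satellite port,
    # then strips the tag before the frame leaves on an external port
    return frame.ecid, frame.payload
```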
Local Switching Mode
Aggregation Device
API Interface to AD &
3rd Party Applications
Satellite Device
• In local switching mode, an API interface (JSON-RPC) can be used from the AD to configure local switching on the SD
• IEEE 802.1BR alone does not provide these additional functions
• Local switching mode uses a combination of IEEE 802.1BR and the JSON-RPC API
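The JSON-RPC 2.0 envelope such a request would use is easy to frame with the standard library; the method and parameter names below are hypothetical, chosen only to illustrate the format:

```python
import json

def jsonrpc_request(method: str, params: dict, req_id: int = 1) -> str:
    """Frame a JSON-RPC 2.0 request of the kind the AD could send to an SD."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# Hypothetical method and parameters, purely for illustration
req = jsonrpc_request("enable-local-switching", {"ports": [0, 1], "vlan": 100})
```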
Forwarding Applications with API
Aggregation Device
Satellite Device
[Diagram: Ethernet frames from satellite ports are steered to a sniffer/monitoring device (port mirroring), with App-1 storage traffic and App-2 transactional data on separate uplinks]
APPLICATIONS ENABLED WITH API BETWEEN SD & AD
• Local Switching for destination on same SD
• Port Mirroring
• Application-specific uplink selection
• Storage vs transactional data traffic
• Mice vs elephant flows
• Port based pinning (Coarse)
• Flow based pinning (Granular)
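Flow-based pinning of elephants versus mice can be sketched as a simple classifier. The 10 MB threshold and the pin-elephants-to-one-uplink policy below are illustrative assumptions, not Juniper's algorithm:

```python
ELEPHANT_BYTES = 10 * 1024 * 1024  # illustrative 10 MB threshold

def pick_uplink(flow_bytes: int, uplinks: list) -> str:
    """Pin elephant flows to a dedicated uplink; hash mice across the rest."""
    if flow_bytes >= ELEPHANT_BYTES:
        return uplinks[-1]
    # toy hash: spread small flows over the remaining uplinks
    return uplinks[flow_bytes % (len(uplinks) - 1)]

uplinks = ["uplink-0", "uplink-1", "uplink-2"]
assert pick_uplink(20 * 1024 * 1024, uplinks) == "uplink-2"  # elephant
```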
Examples of Applications Enabled Using APIs in Extended and Local Switching Modes
• Interface Statistics Collection
• Software Image Management
• Environmental monitoring
• Visibility (Centralized event monitoring)
• Traffic Steering (Only available in local switching mode)
High Level Software Architecture
Satellite
Device
Satellite Network OS
LLDP API
(json-rpc)
Aggregation
Device
IEEE 802.1BR, CSP
CSP: Control and Status Protocol
Yocto Linux
Software Upgrade
• SD software management from AD
• 3rd party application or Network Director using REST/JSON API
• SD Software image automatically upgraded when discovered
• Group SDs into different software upgrade groups for flexibility
• SDs in different software upgrade groups can run different images
Junos Fusion
Software Upgrade
Group 1
Software Upgrade
Group N
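The upgrade-group idea amounts to a mapping from group to image and member SDs; a minimal sketch with made-up device and image names:

```python
# Illustrative group/image/SD names — not real Junos identifiers
upgrade_groups = {
    "group-1": {"image": "satellite-3.1R1.tgz", "sds": ["sd-101", "sd-102"]},
    "group-n": {"image": "satellite-3.0R2.tgz", "sds": ["sd-201"]},
}

def image_for(sd: str) -> str:
    """SDs in different upgrade groups may run different satellite images."""
    for group in upgrade_groups.values():
        if sd in group["sds"]:
            return group["image"]
    raise KeyError(f"unknown satellite device: {sd}")
```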
NG Satellites without SW Upgrade on AD
Next Gen Satellite
Junos Fusion
Connecting Multiple Junos Fusion Systems
EVPN-VXLAN
Fusion-2
Fusion-1
Fusion-4
Fusion-3
Cloud Analytics Engine
CLOUD ANALYTICS ENGINE
Cloud Analytics Engine (CAE)
Network tells you what you need to know
Automated, proactive, end-to-end
Visualize and correlate physical and virtual
Data collected and streamed at wire rate
Application-centric view of intelligent network
The Old Way
• User requests data from the device
• User-driven, per-device
• Low-frequency, low-capacity data extraction
• You need to know what you want to know
• Limited visibility into virtual tunnels and paths
• Network-centric approach to data collection
Analytics
DevOps
Operations
Apps Developer
Network Admin
Application Visibility & Performance Management
Capacity Planning and Optimization
Troubleshooting & root cause analysis
CAE Use Cases
CAE enables coordinated problem solving across teams, improving IT efficiency and reducing cost
Analytics and Orchestration Layer
USE CASE: PATH DETECTION & VISIBILITY
Provide integrated visibility into the actual physical network in use
[Diagram: Compute Node A and Compute Node B connected across switches S1–S4. A REST call to the compute agent returns per-application flow paths for the Red, Green and Blue apps (e.g. Green App: S1 → S3 → S4)]
USE CASE: PATH ATTRIBUTES
Data recorded by the device agent in the OAM reply
[Diagram: the same per-application flow paths across S1–S4, reported up to the analytics and orchestration layer]
• Timestamp of probe ingress and egress
• Per Hop Latency
• Ingress Interface
• Hash Computed Egress Interface
• Buffer and Queue Statistics
• Interface Error Statistics
• Bandwidth Utilization at Ingress and Egress
• ECMP Bucket Utilization
• CPU Utilization
• Memory Utilization
Network statistics and host statistics
Analytics and Orchestration Layer
USE CASE: LATENCY CALCULATIONS
Provide per-hop latency per traffic flow
[Diagram: Compute Node A and Compute Node B across switches S1–S4. An OVERLAY_INFO probe for the Red App flow (S1 → S2 → S4) is time-stamped T+1 at S1, T+2 at S2 and T+3 at S4; the analytics and orchestration layer retrieves the results via a REST call to the compute agent]
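Given per-hop probe timestamps like T+1, T+2, T+3, per-hop latency is just successive differences; a minimal sketch (switch names and millisecond units are my own illustration):

```python
def per_hop_latency(stamps: dict) -> dict:
    """Per-hop latency from probe timestamps recorded at each switch,
    in the order the probe traversed them."""
    hops = list(stamps.items())
    return {
        f"{a}->{b}": tb - ta
        for (a, ta), (b, tb) in zip(hops, hops[1:])
    }

# Red App probe stamped at S1, S2 and S4 (times in milliseconds)
latency = per_hop_latency({"S1": 1.0, "S2": 2.0, "S4": 3.0})
```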
USE CASE: OVERLAY / UNDERLAY CORRELATION
Provide integrated visibility into the actual physical network in use
Compute Node A Compute Node B
S1
S2
S3
S4
Analytics and Orchestration Layer
S1
S2
S3
S4
VNI: Red
VNI: Blue
VNI: Green
VM 1 VM 2
VM 3 VM 4
VM 5
VM 6 VM 7
VM 8 VM 9
VM 10
Overlay Awareness
S1> show analytics overlay vxlan summary
VNI Red: VM1, VM2, VM6, VM7
VNI Blue: VM3, VM4, VM8, VM9
VNI Green: VM5, VM10
Overlay Awareness
S2> show analytics overlay vxlan summary
VNI Red: VM1, VM2, VM6, VM7
VNI Blue: VM3, VM4, VM8, VM9
Overlay Awareness
S3> show analytics overlay vxlan summary
VNI Blue: VM3, VM4, VM8, VM9
VNI Green: VM5, VM10
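Output in the shape shown above is straightforward to post-process into a VNI-to-VM mapping; a small parser sketch, assuming exactly the `VNI <name>: VM, VM, …` line format:

```python
def parse_overlay_summary(output: str) -> dict:
    """Parse 'show analytics overlay vxlan summary'-style lines into VNI -> VMs."""
    summary = {}
    for line in output.strip().splitlines():
        vni, vms = line.split(":")
        summary[vni.strip()] = [vm.strip() for vm in vms.split(",")]
    return summary

s1_output = """
VNI Red: VM1, VM2, VM6, VM7
VNI Blue: VM3, VM4, VM8, VM9
VNI Green: VM5, VM10
"""
tables = parse_overlay_summary(s1_output)
```

Comparing such tables across switches (S1 versus S2 and S3 above) shows which VNIs actually traverse which underlay devices.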
Thank you