
Operational SDN - Networking and Information Technology


2009 … 2012 … 2015

SDN invisible to IT
• GENI VLANs to lab
• Data analysis network

Up and Running Now

KC Wang, Clemson University. Apr 30 2014, Arlington, CC-NIE PI Meeting

[Figure: CC-NIE/Science DMZ topology. A Brocade MLX-e core (100gig, Palmetto) connects Dell Z9000, S4820, and S4810 switches and Pica8 3920 switches in campus buildings (Rhodes, Brown Room, Sirrine, Poole, McAdams, Biotech, EIB, Barre, Riggs, Jordan, Daniel) over 40gig LR and 10gig SR/TX links, with uplinks to I2 AL2S, the campus network, and the Internet via C-Light, plus Perfsonar 01 & 03 nodes and end-user connections.]

Controller: Layer 2, untagged, default forwarding by controller; VLAN-based static flows for selected projects.
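The controller policy on this slide (Layer-2 default forwarding for untagged traffic, plus VLAN-based static flows for selected projects) can be sketched as a simple dispatch rule. This is a toy model, not the deployed controller: the VLAN IDs, port names, and the drop-unknown-tagged behavior are all assumptions.

```python
# Toy dispatch model of the slide's controller policy. The VLAN IDs,
# port names, and drop-unknown behavior are illustrative assumptions.
STATIC_VLAN_FLOWS = {
    1750: "port-to-GENI-rack",     # hypothetical pinned project flow
    2020: "port-to-Palmetto-DTN",  # hypothetical pinned project flow
}

def forward(vlan_id, dst_mac, mac_table):
    """Return the forwarding decision for one frame."""
    if vlan_id in STATIC_VLAN_FLOWS:        # selected project: static flow
        return STATIC_VLAN_FLOWS[vlan_id]
    if vlan_id is None:                     # untagged: default L2 forwarding
        return mac_table.get(dst_mac, "flood")
    return "drop"                           # unknown tagged traffic (assumed)

learned = {"aa:bb:cc:dd:ee:ff": "port-3"}
print(forward(None, "aa:bb:cc:dd:ee:ff", learned))  # port-3
print(forward(1750, "aa:bb:cc:dd:ee:ff", learned))  # port-to-GENI-rack
```

The point of the split is operational: static flows for selected projects are installed once and never consult the controller's learning logic, so research traffic is insulated from default-path behavior.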

[Figure: Campus legacy network. Access layer (Cisco 3560 and stacked Cisco 3750 switches, up to 1G to desktop, inside a single building) connects via dual cross-stack EtherChannel at 1G (upgradeable to 10G) to the distribution layer, which uplinks at 10G to a Cisco 6509 core; the far side mirrors distribution and access.]

[Figure: Clemson Campus legacy I2/Internet connectivity. Clemson (AS 12148) reaches SoX (AS 10490) over C-Light optical transport with a 10G trunk to Atlanta, and MUSC (AS 13429) over SCLR transport; SoX peers with TransitRail (AS 11164), Internet2 (AS 11537), NLR (AS 19401), and Cogent (AS 174); a 1G trunk to Greenville provides Internet via Qwest (AS 209).]

High-Level Overview of CU Brocade Connectivity

[Figure: The Brocade connects the SciDMZ, Palmetto, 10G research servers, the EIB GENI rack, and OrangeFS (testing, 10G/40G [2]) to Internet2 AL2S [1] nodes (ATLA, CHIC, ASHB, DENV, SALT, CLEV, HOUH, BOST, SEAT, LOSA) reaching CENIC, USC, CIC OmniPOP, FRGP, Colorado [2], Wisc, UEN, Utah, NIH, PNWGP (TL/PW via UW/CENIC), Hawaii [2], OARnet, NCBI, Ohio, UW [2], SoX, Vandy [2], Wa St [2], NoX, Harvard [2], and GENI [3] (ION-ATL; FOAM). perfSONAR 01 [4]|02 and 03|04 nodes attach near the borders. The ITC and Poole VSS and border routers and the CLight T1600 connect at 10G (each), 40G, and 100G.]

Notes:
1. Paths through AL2S are mapped with the OESS GUI.
2. Upcoming/planned, or preliminary discussions have occurred.
3. GENI is being migrated off ION (to new mappings via FOAM).
4. perfSONAR01 is being migrated from the Brocade to the T1600, so there will be dedicated pairs facing the Internet and AL2S.

Traffic from private (RFC 1918) sources is policy-routed through campus firewalls, which do the necessary translation to "real" addresses. Static routes carry non-RFC 1918 traffic across AL2S, based on destination hosts/subnets. The default pathway out of the Brocade is through CLight.

Original by CKonger 04-Mar-2014 20:00 ET; revised by CKonger 07-Apr-2015 17:00 ET.
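The routing notes on this diagram reduce to a three-step decision order: RFC 1918 sources go through the campus firewalls, non-RFC 1918 traffic toward statically routed destinations crosses AL2S, and everything else defaults out through CLight. A minimal sketch with Python's standard `ipaddress` module; the AL2S subnet shown is a documentation placeholder, not a real Clemson static.

```python
import ipaddress

# RFC 1918 source ranges (from the note) and a placeholder AL2S static.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
AL2S_STATICS = [ipaddress.ip_network("192.0.2.0/24")]  # placeholder, not real

def next_hop(src, dst):
    """Egress decision in the order the diagram notes describe."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if any(src in net for net in RFC1918):
        return "campus-firewalls"   # policy-routed, NAT to "real" addresses
    if any(dst in net for net in AL2S_STATICS):
        return "AL2S"               # static route by destination subnet
    return "CLight"                 # default pathway out of the Brocade

print(next_hop("10.1.2.3", "198.51.100.9"))     # campus-firewalls
print(next_hop("198.51.100.9", "192.0.2.7"))    # AL2S
print(next_hop("198.51.100.9", "203.0.113.5"))  # CLight
```

Note the ordering matters: the source check runs first, so a private host bound for an AL2S-routed subnet is still translated at the firewalls before any destination-based choice applies.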

SDN trial by IT
• 20-building science DMZ: lots of issues, sparse p2p research traffic, production-grade service

SDN in production by IT
• New data center pods
• New strategies for networking, security, disaster recovery

Roadmap to Operational SDN Workshop, 7/14/2015. KC Wang, Clemson University

SDN Landscape - View from Clemson

[Figure: Clemson SDN landscape. Science DMZ, HPC/Palmetto, Admin Comp, Medicaid, and Disaster Recovery domains connect through an OpenFlow hybrid switch alongside legacy border routers and an ONOS controller, with Internet and 100G AL2S uplinks.]


Campus IT Operation
• SDN domains in and around campus
  – Each is an "autonomous" system with its own authority, architecture, resources, and constraints
  – Operation focuses on design of network boundaries
    • Not just SDNs
    • Design around resources present at the boundary
      – VLANs, e.g., as used by GENI, I2 AL2S, CloudLab
      – SDN-enabled flow space, e.g., I2 FlowSpaceFirewall
• Implications for monitoring
• New demands driven by applications
  – Beyond network connectivity: compute, storage, security, load balancing, firewall, …

Campus IT Infrastructure
• Cloud orchestration
  – VMware, OpenStack
  – CloudLab
• VDI
  – OpenStack + Neutron networking
• Disaster recovery
  – Big Switch BCF + Big Tap pods
• SDN-ready Wi-Fi
• Identity management
  – Fedushare
  – Application/attribute-based access control

Wants in an SDN
• Basic feature parity
  – Campus: connectivity, IAM, scalability
  – Data center: multi-tenancy, scalability, HA; flood protection, disaster-recovery design
• Simplification + backward compatibility
  – Simplification is great: e.g., manage/monitor one "big switch" instead of 100s
  – Backward compatibility is crucial: legacy "interfaces" such as SNMP MIBs are important for operational continuity
• New features
  – Great, one at a time; not the biggest rush
• Managed/manageable programmability (e.g., multiple independent SDN instances)
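One concrete reading of "managed/manageable programmability" is the flow-space slicing model the deck mentions earlier (I2 FlowSpaceFirewall): each independent SDN instance may only install rules inside its assigned slice. A toy admission check, with invented instance names and VLAN ranges:

```python
# Toy flow-space admission check: an SDN instance's rules must stay
# inside its assigned slice. Names and VLAN ranges are invented.
SLICES = {
    "research-controller": range(1700, 1800),
    "dr-controller":       range(2000, 2100),
}

def admit(instance, rule_vlan):
    """Accept a rule only if its VLAN lies in the instance's slice."""
    return rule_vlan in SLICES.get(instance, range(0))

print(admit("research-controller", 1750))  # True
print(admit("research-controller", 2050))  # False: another slice's space
```

Under this model, multiple controllers can be handed to different groups while IT retains the outer guarantee that no instance can touch flows outside its slice.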

Wants for "Things Out There"
• Resources over multi-domain networks
  – Software Defined Exchange (SDX)
• Current SDX examples have demonstrated
  – Finer-grain-than-BGP peering
  – Inter-domain resource stitching
  – Inter-domain policy handling
• Research needed on resource abstraction: a new, possibly composite view
  – Abstraction, policy, economics
• Multi-domain agreement
  – Model, solution, resiliency
• Identity and access management
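The "finer-grain-than-BGP" point about SDX can be illustrated directly: BGP picks one next hop per destination prefix, while an SDX policy can split the same prefix on additional header fields. A small sketch with made-up prefixes, ports, and peer names:

```python
import ipaddress

PREFIX = ipaddress.ip_network("203.0.113.0/24")  # documentation prefix

def bgp_choice(dst):
    """BGP-style: one next hop for the whole destination prefix."""
    return "peer-A" if ipaddress.ip_address(dst) in PREFIX else "default"

def sdx_choice(dst, dst_port):
    """SDX-style: the same prefix can split on further header fields."""
    if ipaddress.ip_address(dst) in PREFIX:
        return "peer-B" if dst_port == 443 else "peer-A"  # per-application
    return "default"

print(bgp_choice("203.0.113.9"))       # peer-A, regardless of application
print(sdx_choice("203.0.113.9", 443))  # peer-B for HTTPS only
print(sdx_choice("203.0.113.9", 22))   # peer-A for everything else
```

The per-application steering in `sdx_choice` is exactly what destination-prefix-only BGP cannot express, which is why the slide treats SDX peering as a finer-grained capability.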