
Extending the Internet Exchange to the Metropolitan Area

Keith Mitchell
[email protected]

Executive Chairman

London Internet Exchange

ISPcon, London

23rd February 1999

Mostly a Case Study

• Background

• IXP Architectures & Technology

• LINX Growth Issues

• New LINX Switches

• LINX Second Site

What is the LINX?

• UK National IXP

• Not-for-profit co-operative of ISPs

• Main aim: to keep UK domestic Internet traffic in the UK

• Increasingly keeping EU traffic in EU

• Largest IXP in Europe

LINX Status

• Established Oct 94 by 5 member ISPs

• Now has 63 members
• 7 FTE dedicated staff
• Sub-contracts co-location to 2 neutral sites in London Docklands:
  • Telehouse
  • TeleCity
• Traffic doubling every ~4 months!

LINX Members

AT&T, ANS UK, Atlas, BT Internet Services, Cable & Wireless, TeleWest (Cable Internet), Carrier1, Cerbernet, Claranet, COLT, Compuserve, Demon Internet Services, Deutsche Telekom, DIALnet, Direct Connection, Easynet, Esat Net, EuroNet, Exodus, Freedom 2 Surf, Frontier Technology, GlobalCenter, GlobalOne, Graphnet, GTS (Sovam), GX Networks (Xara), HighwayOne, IBM Global Network, ICL (ECRC), INSnet, IPf, Ireland Online, mediaWays, Mistral, Nacamar, NETCOM, NetKonect, Nildram, NTL Internet, Oleane, Onyx, Planet Online, PSI UK, RedNet, QUZA, Technocom, Tele Danmark, Teleglobe, Telia, U-Net Internet, UUNET UK, UKERNA (JANET), VASnet, VBCnet, WireHub!, Wisper Bandwidth, XTML, Zoo Corporation

LINX Members by Country

UK 33, COM/US 14, DE 5, IE 3, SE 1, CA 1, FR 1, RU 1, DK 1, EU/CH 1

Exchange Point History

• Initially established in 1992 by:
  • MFS, Washington DC - “MAE-East”
  • Commercial Internet Exchange, Silicon Valley - “CIX-West”

• Amsterdam, Stockholm, others soon afterwards

• Now at least one in every European, G8, OECD etc country

IXP Architectures

• Initially:
  • 10baseT router to switch
  • FDDI between switches
  • commonly DEC Gigaswitches
• More recently:
  • 100baseT between routers and switches
  • Cisco Catalyst 5000 popular

IXP Technologies

• 10Mbps Ethernet

• 100Mbps Ethernet

• FDDI

• ATM

• Gigabit Ethernet

IXP Technologies - Ethernet

• 10baseT is only really an option for small members with 1 or 2 E1 circuits and no servers at the IXP site (see the sketch below)

• All speeds of Ethernet will be present in ISP backbones for servers for some time to come
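As a rough illustration of the E1 point above, here is a back-of-the-envelope Python check using the standard 2.048 Mbps E1 rate; the circuit counts are illustrative, not member figures:

```python
# Rough sanity check: can a member's transit circuits fit on a 10baseT port?
E1_MBPS = 2.048        # payload rate of one E1 circuit
PORT_MBPS = 10.0       # nominal 10baseT port speed

for circuits in (1, 2, 6):
    offered = circuits * E1_MBPS
    verdict = "fits" if offered < PORT_MBPS else "exceeds"
    print(f"{circuits} x E1 = {offered:.2f} Mbps -> {verdict} a 10 Mbps port")
# A small member with 1-2 E1s offers well under 10 Mbps;
# larger members quickly need 100baseT.
```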

IXP Technologies - 100baseT

• Cheap

• Proven

• Supports full duplex

• Meets most non-US ISP switch port bandwidth requirements

• Range limitations can be overcome using 100baseFX fibre

IXP Technologies - FDDI

• Proven

• Bigger 4k MTU

• Dual-attached more resilient

• Longer maximum distance

• Full-duplex proprietary only

IXP Technologies - ATM

• Only used at US federally-sponsored NAPs, PARIX:
  • Ameritech, PacBell, Sprint, Worldcom; FT
• Initially serious deployment problems:
  • “packet-shredding” led to poor bandwidth efficiency (see the sketch below)
• Now >1Gbps traffic at NAPs
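One contributor to the poor bandwidth efficiency is the fixed ATM cell framing: AAL5 adds an 8-byte trailer and pads every packet to whole 53-byte cells. A minimal Python sketch of that “cell tax”, assuming plain AAL5 encapsulation:

```python
# Illustrative only: AAL5 "cell tax" for common IP packet sizes.
# Each ATM cell carries 48 payload bytes in a 53-byte cell; AAL5 adds an
# 8-byte trailer and pads the packet out to a whole number of cells.
import math

def atm_efficiency(packet_bytes: int) -> float:
    cells = math.ceil((packet_bytes + 8) / 48)
    return packet_bytes / (cells * 53)

for size in (40, 576, 1500):
    print(f"{size:5d}-byte packet -> {atm_efficiency(size):5.1%} wire efficiency")
# Small packets (e.g. TCP ACKs) lose roughly a quarter of the bandwidth to framing.
```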

IXP Technologies - ATM

• Some advantages:
  • inter-member bandwidth limits
  • inter-member bandwidth measurement
  • “hard” enforcement of peering policy restrictions
• But:
  • High per-port cost, especially for >155Mbps
  • Limited track record for IXP applications

IXP Technologies - Gigabit Ethernet

• Cost-effective and simple high bandwidth

• Ideal to scale inter-switch links

• Router vendor support not yet good

• Standards very new

• Highly promising for metropolitan and even longer distance links

LINX Architecture

• Originally Cisco Catalyst 1200s:
  • 10baseT to member routers
  • FDDI ring between switches
• Until 98Q3:
  • Member primary connections by FDDI and 100baseT
  • Backup connections by 10baseT
  • FDDI and 100baseT inter-switch

Old LINX Topology

Old LINX Infrastructure

• 5 Cisco switches:
  • 2 x Catalyst 5000, 3 x Catalyst 1200
• 2 Plaintree switches:
  • 2 x WaveSwitch 4800
• FDDI backbone
• Switched FDDI ports
• 10baseT & 100baseT ports
• Media convertors for fibre Ethernet (>100m)

Growth Issues

• Lack of space for new members

• Exponential traffic growth

• Bottleneck in inter-switch links

• Needed to upgrade to Gigabit backbone within existing site 98Q3

• Nx100Mbps trunking does not scale (MAE problems) - see the sketch below

Statistics and looking glass at http://www2.linx.net/
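To illustrate why Nx100Mbps trunking does not keep up, here is a minimal Python sketch of how quickly traffic doubling every ~4 months saturates an inter-switch link; the 50 Mbps starting load and the helper name are illustrative assumptions, not LINX measurements:

```python
# Illustrative only: months until an inter-switch link saturates, assuming
# traffic doubles every 4 months. The 50 Mbps starting load is a placeholder.
import math

def months_until_full(start_mbps: float, link_mbps: float,
                      doubling_months: float = 4.0) -> float:
    """Months before exponentially growing traffic fills the link."""
    return doubling_months * math.log2(link_mbps / start_mbps)

start = 50.0  # assumed current inter-switch load in Mbps
for link in (100, 200, 400, 1000):
    print(f"{link:4d} Mbps link full in ~{months_until_full(start, link):4.1f} months")
# Each extra 100 Mbps trunk buys only ~4 more months; a gigabit link buys ~17.
```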

Switch Issues

• Catalyst and Plaintree switches no longer in use
• Catalyst 5000s appeared to have broadcast scaling issues regardless of Supervisor Engine
• FDDI could no longer cope
• Plaintree switches had proven too unstable and unmanageable
• Catalyst 1200s at end of useful life

LINX Growth Solutions

• Find second site within 5km Gigabit Ethernet range via open tender

• Secure diverse dark/dim fibre between sites from carriers

• Upgrade switches to support Gigabit links between them

• Do not offer Gigabit member connections yet

LINX Growth Obstacles

• Existing Telehouse site full until 99Q3 extension ready

• Poor response to Q4 97 site ITT:
  • only 3 serious bidders
  • successful bidder pulled out after messing us around for 6 months :-(

• Only two carriers were prepared and able to offer dark/dim fibre after months of discussions

Gigabit Switch Options

• Evaluated 6 vendors:
  • Cabletron/Digital, Cisco, Extreme, Foundry, Packet Engines, Plaintree

• Some highly cost-effective options available

• But needed non-blocking, modular, future-proof equipment, not workgroup boxes

Metro Gigabit

• No real MAN-distance fibre to test kit out on :-(

• LINX member COLT kindly lent us a “big drum of fibre”

• Most kit appears to work to at least 5km

• Some interoperability issues with dim to dark management convertor boxes

Telehouse

• Located in London Docklands
  • on meridian line at 0º longitude!
• 24x7 manned, controlled access
• Highly resilient infrastructure
• Diverse SDH fibre from most UK carriers
• Diverse power from national grid, multiple generators
• Owned by consortium of Japanese banks, KDD, BT

LINX and Telehouse

• Telehouse is “co-locate” provider:
  • computer and telecoms “hotel”

• LINX is customer

• About 100 ISPs are customers, including 50 LINX members
  • other members get space from LINX

• Facilitates LAN interconnection

LINX 2nd Site

• Secured good deal with two carriers for diverse fibre
  • but only because LINX is a special case
• New ITT:
  • bid deadline mid-Aug 98
  • 8 submissions

• Awarded to TeleCity Sep 98

LINX and TeleCity

• TeleCity is new VC co-lo startup:
  • sites in Manchester, London
  • London site 3 miles from Telehouse
• Same LINX relationship as Telehouse:
  • choice for members

• Space for 800 customer racks

• LINX has 16-rack suite

New Infrastructure

• Packet Engines PR-5200:
  • Chassis-based 16-slot switch
  • Non-blocking 52Gbps backplane
  • Used for our core, primary switches
  • One in Telehouse, one in TeleCity
  • Will need a second one in Telehouse within this quarter
  • Supports 1000LX, 1000SX, FDDI and 10/100 Ethernet

New Infrastructure

• Packet Engines PR-1000:
  • Small version of PR-5200
  • 1U switch; 2x SX and 20x 10/100
  • Same chipset as 5200
• Extreme Summit 48:
  • Used for second connections
  • Gives vendor resiliency
  • Excellent edge switch - low cost per port
  • 2x Gigabit, 48x 10/100 Ethernet

New Infrastructure

• Topology changes:
  • Aim to be able to have a major failure in one switch without affecting member connectivity (a simple check is sketched below)
  • Aim to have major failures on inter-switch links without affecting connectivity
  • Ensure that inter-switch connections are not bottlenecks
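One way to reason about the single-switch-failure aim is to model the fabric as a graph and check that dual-homed members can still reach each other with any one switch removed. This is a simplified Python sketch; the switch names, links and member placements are invented for illustration and are not the actual LINX wiring:

```python
# Simplified resilience check: does losing any single switch cut members off?
# Topology and member placements below are invented for illustration only.
from itertools import combinations

# Inter-switch links: a core and an edge switch at each of two sites.
switch_links = {
    ("core-thn", "core-tcy"), ("core-thn", "edge-thn"),
    ("core-tcy", "edge-tcy"), ("edge-thn", "core-tcy"), ("edge-tcy", "core-thn"),
}
# Each member has a primary and a backup port on two different switches.
member_ports = {"member-A": {"core-thn", "edge-thn"},
                "member-B": {"core-tcy", "edge-tcy"}}

def reachable(start, links):
    """Return every switch reachable from `start` over the surviving links."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for a, b in links:
            for nxt in ((b,) if a == node else (a,) if b == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return seen

def members_connected(failed):
    """True if every member pair can still reach each other without `failed`."""
    links = {(a, b) for a, b in switch_links if failed not in (a, b)}
    for m1, m2 in combinations(member_ports, 2):
        if not any(s2 in reachable(s1, links)
                   for s1 in member_ports[m1] - {failed}
                   for s2 in member_ports[m2] - {failed}):
            return False
    return True

for switch in sorted({s for link in switch_links for s in link}):
    status = "still connected" if members_connected(switch) else "PARTITIONED"
    print(f"lose {switch}: members {status}")
```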

New backbone

• All primary inter-switch links are now gigabit

• New kit on order to ensure that all inter-switch links are gigabit

• Inter-switch traffic minimised by keeping all primary and all backup traffic on their own switches

Current Status

• Old switches no longer in use

• New Switches live since Dec 98

• TeleCity site has been running since Dec 98

• First in-service member connections at TeleCity soon

• Capacity for up to 100x traffic growth

IXP Switch Futures

• Vendor claims of proprietary 1000base optics with 50km+ range are interesting

• Need abuse prevention tools:
  • port filtering, RMON
• Need traffic control tools:
  • member/member bandwidth limiting and measurement (sketch below)
• What inter-switch technology will support Gigabit member connections?
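As a rough illustration of the member-to-member measurement idea, the Python sketch below aggregates per-(source, destination) byte counts from flow-style records; the record format and the MAC-to-member mapping are assumptions made up for this sketch, not a real RMON or switch API:

```python
# Illustrative only: aggregate member-to-member traffic from flow-style records.
# The record layout and the MAC-to-member mapping are invented for this sketch.
from collections import defaultdict

mac_to_member = {"00:00:0c:aa:aa:aa": "member-A",
                 "00:00:0c:bb:bb:bb": "member-B"}

# Each record: (source MAC, destination MAC, bytes), as a probe might export.
records = [("00:00:0c:aa:aa:aa", "00:00:0c:bb:bb:bb", 1_500_000),
           ("00:00:0c:bb:bb:bb", "00:00:0c:aa:aa:aa", 300_000)]

totals = defaultdict(int)
for src_mac, dst_mac, octets in records:
    src = mac_to_member.get(src_mac, "unknown")
    dst = mac_to_member.get(dst_mac, "unknown")
    totals[(src, dst)] += octets

for (src, dst), octets in sorted(totals.items()):
    print(f"{src} -> {dst}: {octets / 1e6:.2f} MB")
```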

Conclusions

• Extending Gigabit beyond your LAN is hard, but not for technical reasons

• Only worth trying if you have your own fibre

• Some London carriers are meeting the challenge of providing dark fibre:
  • now 4-5 will do this

Contact Information

• http://www.linx.net/

[email protected]

• Tel +44 1733 705000

• Fax +44 1733 353929