
FIND: Postmodern Internet Architecture Project
Ken Calvert, University of Kentucky, Laboratory for Advanced Networking
Collaborators: J. Griffioen, O. Ascigil, S. Yuan
Also: U. Maryland (Spring, Bhattacharjee), U. Kansas (Sterbenz)
Thanks: NSF

FIND PI Meeting

Because direct connection is too expensive – need to share channels.

[Figure: forwarding nodes and shared (infrastructure) channels]

How might the (inter)network layer be designed “from scratch” today?

Function of the network layer: allow infrastructure to be shared – “Get packets from A to B via C”

Goals:
Separate concerns to allow access to a greater portion of the design space; distribute control
Decouple routing from forwarding
Support autoconfiguration (except where policy is involved)

Approach:
Separate mechanism and policy
“Design for tussle” – ask “What mechanism could help with this?”
Omit what’s not crucial
Use good ideas from 20 years of research – including FIND
Don’t be overly constrained by today’s technology; have faith in engineering


Node Identifiers
The network exists to share channels
Instead: name channels
Why:
Don’t have to change names when changing levels of abstraction
A clean inductive approach


Naming nodes

[Figure: example topology with identifiers assigned to nodes]

Naming Channels

[Figure: the same topology with identifiers assigned to channels instead of nodes]

Topology-based addressing
Why:
Avoid need for address-assignment authority
Hierarchical addresses leak information about other addresses (i.e., attackers can easily guess/enumerate host addresses)
Instead: self-configuring, self-certifying channel IDs from a large, flat space
ID = hash of public key [cf. CGA]
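A minimal sketch of the self-certifying idea (the hash function, ID length, and function name are illustrative assumptions, not taken from the slides): the owner derives a channel ID directly from its public key, so no assignment authority is needed and one ID reveals nothing about others.

    import hashlib

    def channel_id(public_key_bytes: bytes, id_bits: int = 160) -> str:
        """Self-certifying channel ID: a truncated hash of the owner's
        public key (cf. Cryptographically Generated Addresses)."""
        digest = hashlib.sha256(public_key_bytes).digest()
        return digest[: id_bits // 8].hex()

    # The holder of the matching private key can later prove ownership of
    # the ID by signing a challenge; the ID itself needs no registry and
    # gives an attacker no basis for guessing other valid IDs.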


Hop-by-hop path determination
Why:
Enable a greater range of path selection policies, competition
Allow providers to control (only) the use of their infrastructure
Instead: layered source routing (sketched below)
Packets carry sequence of channel IDs; nodes forward to next channel
Simple forwarding lookup: exact match
Small forwarding tables
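A minimal sketch of what exact-match lookup buys a forwarding node; the Packet/ForwardingNode names and the pop-the-head representation of the source route are assumptions for illustration, not the project's wire format.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        path: list             # remaining channel IDs; head is the next channel
        payload: bytes = b""

    class ForwardingNode:
        def __init__(self, attached: dict):
            # channel ID -> local interface: an exact-match table whose size is
            # just the number of directly attached channels
            self.attached = attached

        def forward(self, pkt: Packet):
            if not pkt.path:
                return None                    # source route exhausted: deliver locally
            next_channel = pkt.path[0]
            iface = self.attached.get(next_channel)
            if iface is None:
                # "path fault": next channel not directly attached
                # (handled by realm traversal later in the talk)
                raise LookupError(f"path fault: {next_channel} not attached")
            pkt.path.pop(0)                    # consume this hop
            return iface

    # Example: a node attached to channels "a" and "b" forwards a packet
    # whose remaining path is ["b", "e", "j"] onto channel "b".
    node = ForwardingNode({"a": 0, "b": 1})
    assert node.forward(Packet(path=["b", "e", "j"])) == 1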


Universal Destinationhood
Why: Give endpoints control over their own reachability
Instead: Endpoints register with EID-to-Locator mapping service to be “findable”


Simple Inductive Model
No fixed number of levels of hierarchy
Abstraction for scalability: hide topology inside realms

Source routing
Packets carry sequence of channel IDs
Users choose providers, providers choose paths through their infrastructure [cf. NIRA]

Identifier-locator separation
Locators tie destination IDs to infrastructure

Explicit data-plane policy-compliance check (“Motivation”)
Decouple customer-provider relationship from topology
Packets carry proof of policy-compliance


Base case:
Link-state discovery of infrastructure channel topology
Nodes advertise willingness to relay packets between channels
Optionally: QoS-related information, e.g., classes of service w/ pricing, long-term statistics
Topology/routing service collects information about infrastructure channels and connectivity
Topology/routing service computes paths between infrastructure channels (sketched below)
Destinations register with EID-to-Locator (E2L) service
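A sketch of the base-case topology/routing service, assuming it models the network as a graph whose vertices are infrastructure channel IDs and whose edges are nodes that advertised willingness to relay between a pair of channels; the advertisement format and function names are illustrative.

    from collections import deque

    def build_channel_graph(advertisements):
        """advertisements: iterable of (node_id, channel_a, channel_b)."""
        graph = {}
        for _node, a, b in advertisements:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
        return graph

    def channel_path(graph, src_channel, dst_channel):
        """Shortest sequence of channel IDs from src to dst (BFS)."""
        queue = deque([[src_channel]])
        seen = {src_channel}
        while queue:
            path = queue.popleft()
            if path[-1] == dst_channel:
                return path
            for nxt in graph.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    # e.g. channel_path(build_channel_graph([("N1", "a", "b"), ("N2", "b", "c")]),
    #                   "a", "c")  ->  ["a", "b", "c"]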

[Figure: example topology showing endpoints (applications), forwarding nodes, infrastructure channels (a–e), and end channels]


Nodes advertise transit capabilities to topology/routing service.

Locators
Topology/routing service only knows about (shared) infrastructure channels
Locator ties EID to infrastructure topology: (EID, {path1, ..., pathk})
EID-to-locator resolution system (part of infrastructure)

Consequences:
Destinations control whether they are “findable”
Multihoming is handled naturally
Additional resolution step
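A sketch of the locator and E2L pieces, assuming a locator is just (EID, {path1, ..., pathk}) and that registration is entirely under the destination's control; class and method names are illustrative, not the project's API.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Locator:
        eid: str          # flat, self-certifying endpoint ID
        paths: tuple      # one or more ingress paths (tuples of channel IDs)

    class E2LService:
        def __init__(self):
            self._table = {}

        def register(self, loc: Locator):
            # Destinations register (and may later withdraw) their own mapping;
            # an unregistered endpoint is simply not "findable".
            self._table[loc.eid] = loc

        def withdraw(self, eid: str):
            self._table.pop(eid, None)

        def resolve(self, eid: str):
            return self._table.get(eid)    # the extra resolution step before sending

    # A multihomed destination E2 reachable via infrastructure channel d or e:
    e2l = E2LService()
    e2l.register(Locator("E2", (("d",), ("e",))))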

[Figure: endpoint E2 attached via infrastructure channels d and e. Locator: (E2, {d, e})]

Destination endpoints register their mappings with the EID-to-Locator service.

[Figure: sending a packet to endpoint E2 across the infrastructure]

To send to E2:
1. Get locator from E2L.
2. Ask topology service for paths connecting source and destination locators.
3. Choose path, transmit.
4. Cache paths for later use.

All paths are symmetric. Destination uses reverse path to respond.
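The four sending steps, sketched as code. E2LService, Locator, and Packet come from the earlier sketches; topology.paths_between() is a hypothetical query against the topology/routing service, not a documented interface.

    class Sender:
        def __init__(self, e2l, topology, my_locator):
            self.e2l = e2l
            self.topology = topology
            self.my_locator = my_locator
            self.path_cache = {}                 # destination EID -> chosen path

        def send(self, dest_eid, payload):
            path = self.path_cache.get(dest_eid)
            if path is None:
                loc = self.e2l.resolve(dest_eid)                  # 1. get locator from E2L
                if loc is None:
                    raise LookupError("destination is not findable")
                candidates = self.topology.paths_between(self.my_locator, loc)  # 2. ask topology service
                path = candidates[0]                              # 3. choose a path (policy hook)
                self.path_cache[dest_eid] = path                  # 4. cache for later use
            # Paths are symmetric: the destination can reverse this path to respond.
            return Packet(path=list(path), payload=payload)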

Inductive step
Realms = administrative regions – parts of the network that look like nodes
Border channels mark boundaries where abstraction happens
Internal topology info does not cross realm borders
Only transit service is advertised outside the realm
Border channels are visible on both sides of realm boundary
Locators: E2L service is hierarchical
Each level extends the ingress/egress path associated with the lower-level locator according to its own policies


Border channels are visible at both levels
E2L services have a hierarchical relationship
Locators registered at the lower level propagate to the higher level (if desired)
Paths are extended according to provider policy (traffic engineering)

[Figure: a packet traversing a realm]

Packet enters realm – path fault: next channel not directly attached.
Ingress node pushes header with forwarding directive (and motivation) to get packet across realm.
Egress node pops outer header, forwards packet.
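A minimal sketch of the push/pop behavior, assuming realm traversal works by stacking headers; it builds on the Packet sketch above, and the names (RealmHeader, ingress_push, egress_pop) are illustrative.

    from dataclasses import dataclass

    @dataclass
    class RealmHeader:
        path: list              # forwarding directive: channel IDs across the realm
        motivation: bytes       # evidence that this realm should relay the packet
        inner: object           # the encapsulated original packet

    def ingress_push(pkt, intra_realm_path, motivation):
        # Path fault at the border: pkt's next channel is not directly attached,
        # so wrap it in an outer header that carries it to the egress border channel.
        return RealmHeader(path=list(intra_realm_path),
                           motivation=motivation, inner=pkt)

    def egress_pop(hdr: RealmHeader):
        # The outer path ends at the egress; pop it and resume the original route.
        return hdr.inner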


Paths chosen according to stakeholders’ policies:
Destination provider chooses ingress path(s) during locator construction
Source provider chooses egress paths, similarly
Source or proxy/broker chooses transit path

[Figure: end-to-end example. Destination locator: {p.s.v.z, q.s.u.z}; chosen path: a.b.c.e.j.p.s.v.z]
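How the three policy-controlled segments might be spliced together; the slides do not say exactly which channels in the figure belong to which segment, so the split used in the example is only an assumption.

    def compose_path(egress, transit, ingress):
        """egress: chosen by the source's provider; transit: chosen by the
        source or a proxy/broker; ingress: taken from the destination's locator."""
        return list(egress) + list(transit) + list(ingress)

    # With the locator's ingress path p.s.v.z, one possible split of the
    # figure's path a.b.c.e.j.p.s.v.z is:
    expected = ["a", "b", "c", "e", "j", "p", "s", "v", "z"]
    assert compose_path(["a", "b", "c"], ["e", "j"], ["p", "s", "v", "z"]) == expected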

Motivation: in-band policy enforcement mechanism
Forwarding nodes use it to answer the question: “Why should I relay this packet?”
Contains hard-to-forge evidence that
the source is a customer of the provider, or
the packet pertains to the operation of the network, or
a trusted upstream party vouches for the packet
Why: enables competition among transit providers; empowers users [cf. NIRA, Nimrod]
Implement: packet carries proof that somebody knew a secret also known by the forwarding node [cf. Platypus, TVA]
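One way to realize “proof that somebody knew a secret also known by the forwarding node” is a keyed MAC over fields the forwarder can check; HMAC-SHA256 and the choice of covered fields are assumptions for illustration, not the project's wire format.

    import hashlib
    import hmac

    def make_motivation(secret: bytes, dest_eid: str, path: list) -> bytes:
        # Sender (or its broker) binds the proof to the destination and path.
        msg = dest_eid.encode() + b"|" + ".".join(path).encode()
        return hmac.new(secret, msg, hashlib.sha256).digest()

    def check_motivation(secret: bytes, dest_eid: str, path: list,
                         motivation: bytes) -> bool:
        # Forwarding node answers "why should I relay this packet?" by
        # recomputing the MAC with the secret it shares (or can derive).
        return hmac.compare_digest(make_motivation(secret, dest_eid, path),
                                   motivation)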


Problem: Forwarding nodes cannot share a key with every user
Solution: Delegation hierarchy
Nodes share secrets with providers
Derive others on demand (and cache)

[Figure: delegation hierarchy. A forwarding node holds secret x and shares it with its provider; brokers and users have hierarchical IDs such as b0.bk, and the secret delegated to principal b0.bk is h(h(x, b0), bk)]
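A sketch of the delegation chain in the figure: each component of a principal's hierarchical ID is folded in with one hash, so a provider holding h(x, b0) can hand a broker h(h(x, b0), bk) without revealing x. SHA-256 and the string encoding are illustrative assumptions.

    import hashlib

    def delegate(secret: bytes, component: str) -> bytes:
        return hashlib.sha256(secret + component.encode()).digest()

    def secret_for(x: bytes, principal_id: str) -> bytes:
        # e.g. principal "b0.bk" gets h(h(x, "b0"), "bk")
        s = x
        for component in principal_id.split("."):
            s = delegate(s, component)
        return s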

Problem: Need to strictly limit the number of crypto operations per packet, but don’t want to limit the length of the delegation chain
Solution: Nodes precompute and store 2N hashes (for N-bit principal IDs) and derive the delegated secret by XORing the hashes selected by the ID’s bits (sketched below)

[Figure: per-bit hash table derived from secret x (one entry for bit value 0 and one for bit value 1 at each position); the secret for ID 100...1 is built from the hashes its bits select]
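A sketch of the constant-cost variant: for N-bit principal IDs the node stores 2N precomputed hashes (one per bit position and bit value) and XORs the N hashes selected by the ID’s bits, so the per-packet work is fixed regardless of delegation depth. The h(x, position, bit) construction and N = 128 are illustrative assumptions.

    import hashlib

    N = 128  # bits in a principal ID (illustrative)

    def precompute_table(x: bytes, n: int = N):
        # 2N hashes: table[pos][bit] = h(x, pos, bit)
        return [[hashlib.sha256(x + bytes([pos, bit])).digest() for bit in (0, 1)]
                for pos in range(n)]

    def derive_secret(table, principal_id: int, n: int = N) -> bytes:
        secret = bytes(32)
        for pos in range(n):
            h = table[pos][(principal_id >> pos) & 1]
            secret = bytes(a ^ b for a, b in zip(secret, h))
        return secret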

Small headers: path segment ~10² bytes, Motivation ~10² bytes
... times hierarchy depth
Need larger MTUs!

Global EID-to-locator resolution system

Current approach: hierarchical DHT [cf. Canon]

Paths are not transferable between endpoints


          Header Size    Access                Backbone
1981      10² bits       10⁴ bits/sec          10⁶ bits/sec
Today     10² bits       10⁷–10⁸ bits/sec      10¹⁰ bits/sec

Header sizes have stayed at ~10² bits while link speeds have grown by three to four orders of magnitude; scaling headers similarly would put them at ~10⁵ bits.

PoMo routing and forwarding: minimalist network layer, designed for tussle
Channel-oriented paradigm, simple inductive structure
Top and bottom levels use the same mechanisms
Self-configuring EIDs w/ EID-to-locator resolution
Distributed, hierarchical, policy-compliant path selection
Intended to provide access to a greater range of path-determination policies, competition, QoS
Explicit data-plane policy enforcement
Endpoint control over visibility/reachability
Implementation status: prototype, Emulab evaluation
