
FIND experimental requirements

David D. Clark

FIND

Future Internet Design (FIND) is an NSF program (now folded into NetSE) to envision the Internet of 15 years from now.

FIND was one of the early drivers for the development of GENI.

FIND research spans the layers from technology through classic Internet architecture to application design.

The original Science Plan

As part of the original work on GENI, we prepared a Science Plan (also called the Research Plan), which listed various requirements.

Those requirements have not changed. The GENI plan, as then proposed (and now?), did not fully meet those requirements. This suggests a need to revisit some assumptions and design approaches.

General requirements

A real, distributed network. Not a bunch of routers in a rack. As wide a reach as possible.

Reach to the edge. Allow experimental edge equipment direct connection to the GENI-based experiments. Access to real users.

Creates a tension with the desire for realistic lower layer technology, e.g. optical layer.

Long-running experiments.

Background commentary

There is a wide range of experiments (with a wide range of requirements) that might be posed for GENI. It is not realistic to imagine that we can build a single fully general facility that can support all of these experiments at reasonable cost. This implies a need to be creative, make clever choices, and make some compromises.

The MREFC process distorted the translation of requirements into facilities, e.g. a single unified facility.

All of these assumptions should be reconsidered. One or many? Is GENI one thing? A better set of outcomes?

The excitement is at the edge

While (some of us) want to mess with the core (security, management, etc.), the real action is at the edge.

New devices (mobile, embedded). New networks (wireless of all sorts).

Not all parts of the network will look like Ethernet! Cars and other mobile networks.

To emulate the future, we need all this in the experiment. It is not so clear how to virtualize it. Does this matter? Remember the goal of reaching real users.

The excitement is at higher layers

Design patterns for applications. Highly distributed, clouds, etc.

New support mechanisms: identity frameworks, location frameworks, etc.

Important to ask: to what extent can we explore these on the existing Internet?

New protocol stacks

Researchers want to try new protocols at the network, transport (and higher) layers. New means of authentication. New mechanisms to deal with soft state in the network. …

Implies the need to replace the protocol stack in the end node. Facilitating this should be part of the GENI effort. Mobile devices, not just PCs. Remember, we want real users. A real experimental tension here…
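To make the end-node point concrete, here is a minimal sketch of how an experiment might inject frames for a non-IP protocol stack, assuming a Linux end node with raw-socket access; the interface name, header fields, and the local-experimental EtherType are illustrative assumptions, not part of the FIND/GENI plan.

```python
import socket
import struct

EXPERIMENTAL_ETHERTYPE = 0x88B5  # assumption: an EtherType set aside for local experiments

# Open a raw link-layer socket (Linux only; requires root or CAP_NET_RAW).
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(EXPERIMENTAL_ETHERTYPE))
sock.bind(("eth0", 0))                    # hypothetical interface name

dst_mac = bytes.fromhex("ffffffffffff")   # broadcast, just for the sketch
src_mac = sock.getsockname()[4]           # this interface's MAC address

# A made-up experimental header: version, hop limit, 8-byte flat destination label.
payload = struct.pack("!BB8s", 1, 16, b"node-42\x00") + b"hello, new internet"

# Hand the complete frame to the driver, bypassing the kernel's IP stack entirely.
sock.send(dst_mac + src_mac + struct.pack("!H", EXPERIMENTAL_ETHERTYPE) + payload)
```

Doing the same on phones and other mobile devices is much harder, which is exactly the experimental tension noted above.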

Lower level research?

Two sorts of reasons.

We need to do a better job supporting apps: security, availability, management, economics, etc. (It is not the data plane, except indirectly…) What happens there will influence the design of applications. The research is not independent.

We don’t have the lower levels right. Management, operations, etc. Some of this is perhaps more independent.

Packets

We want to try out packet formats that are not IPv4 and IPv6. Congestion and its control. Novel addressing modes. New mechanisms for security. New concepts in network management. New schemes for tracking payment. …

This capability must reach all parts of the experiment.
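As one illustration of what "not IPv4 and IPv6" might mean, here is a sketch of an experimental header with a flat destination label, an explicit congestion field, and a payment-tracking token; every field name and width here is an assumption invented for illustration, not a real FIND format.

```python
import struct

# Hypothetical experimental header: version, hop limit, an 8-byte flat
# destination label (novel addressing), a 16-bit congestion feedback field,
# and a 4-byte payment-tracking token.
EXP_HDR = struct.Struct("!BB8sH4s")

def build_packet(dst_label: bytes, congestion: int, token: bytes, payload: bytes) -> bytes:
    return EXP_HDR.pack(1, 32, dst_label, congestion, token) + payload

def parse_packet(packet: bytes):
    version, hops, dst_label, congestion, token = EXP_HDR.unpack_from(packet)
    return version, hops, dst_label, congestion, token, packet[EXP_HDR.size:]
```

The requirement is that packets like these can be carried, inspected, and forwarded in every part of the experiment, not just at the end nodes.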

New functions “in” or “on” the net

Not all boxes that are topologically “in the network” are routers. Security enforcement devices. Encryption devices. Packet inspection devices. Application support devices. …

Implies different node requirements. More storage, processing, etc. Very high performance. Network devices today are highly purpose-tuned. How can we provide generality for a range of experiments with different topological requirements? Processing nodes should be in the net, not just at the edges.
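One way to read this requirement is that a GENI node should be able to host an arbitrary per-packet function, not just a forwarding table. The sketch below is an assumed shape for such an element; `receive` and `forward` stand in for whatever link or tunnel the slice provides, and the field offsets follow the made-up header above.

```python
from typing import Callable, Optional

# A packet function returns the (possibly rewritten) packet, or None to drop it.
PacketFn = Callable[[bytes], Optional[bytes]]

def processing_node(inspect: PacketFn,
                    receive: Callable[[], bytes],
                    forward: Callable[[bytes], None]) -> None:
    """An in-network element that is not a router: security enforcement,
    encryption, inspection, or application support, depending on `inspect`."""
    while True:
        packet = receive()
        result = inspect(packet)
        if result is not None:
            forward(result)

def make_blocker(blocked_prefix: bytes) -> PacketFn:
    """Example inspection function: drop packets whose destination label
    (bytes 2..9 in the sketch header above) starts with a blocked prefix."""
    def inspect(packet: bytes) -> Optional[bytes]:
        return None if packet[2:10].startswith(blocked_prefix) else packet
    return inspect
```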

Emulating a real network

The core of a big ISP today does not forward packets, but is built of flows that carry aggregates of packets. Optical lambdas, MPLS circuits, etc. Not all parts of the network will look like Ethernet!

Cost and complexity is a major driver in real nets.

A future architecture should do a better job of linking the management of these layers.

How should these capabilities be made available to experimenters? Should this be a GENI goal?

Optics in the core

The original proposal for the GENI platform had rather sophisticated optical components in the core, e.g. ROADMs.

This had major cost implications. This had major “virtualization” implications.

How do you virtualize a ROADM, since it is not a packet device?

Not well done in the original proposal. The goal was to at least emulate how real networks look today.

One alternative

Let the folks who want to play with real optics build their own environment. Smaller scale?

Build the large-scale testbed out of packets (IP, Ethernet: does it matter?) and tunnel our “new packets” inside them.

Is this a better idea? (It has limitations that should be recognized, as well as benefits.)
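A minimal sketch of what the tunneling alternative looks like in practice, assuming the “new packets” are simply carried as UDP payloads between testbed nodes; the port number and function names are hypothetical.

```python
import socket

TUNNEL_PORT = 40000  # assumption: a UDP port agreed between testbed nodes

def send_tunneled(exp_packet: bytes, peer_ip: str) -> None:
    """Carry an experimental (non-IP) packet across today's Internet inside a UDP datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(exp_packet, (peer_ip, TUNNEL_PORT))

def run_tunnel_endpoint(deliver) -> None:
    """Receive tunneled packets and hand each one to the experimental stack via `deliver`."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", TUNNEL_PORT))
    while True:
        data, (peer_ip, _port) = s.recvfrom(65535)
        deliver(data, peer_ip)
```

The obvious limitation is that the experimental packets inherit the outer network's MTU, forwarding behavior, and failure modes, which is exactly the hourglass risk discussed on the following slides.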

Picking compromises

If we fold optics in… More realistic (but for what class of experiments?)

QoS, non-packet end-to-end services. Intrinsic availability, security, management, etc. Cross-layer protocol designs.

If we use packets and tunnel… Much easier to achieve scale and reach real users. Lower layer “technology complexity” will have to be simulated. Is this an issue?

Another way of saying this

The independence of the parts of the Internet (e.g. apps from link technology) is a result of the “hourglass” design, the end-to-end design, etc.

Assuming that experiments can tunnel over packets risks baking today’s hourglass architecture into tomorrow’s experiments. My opinion: the risk of guiding research toward a presumption of the hourglass has to be mitigated in some way.

Management

As the previous slide tried to emphasize, a lot of future Internet research is about management, not the data plane.

So GENI is not just virtualized data planes, but virtualized network management schemes. Fault diagnosis.

Virtualization messes this up. Setup and tear-down of circuits. We want to mimic real operators, not just users.

A general challenge for GENI

Many of the proposed ideas for a future Internet stress management, security, etc.

It may be less clear how to virtualize the experimental infrastructure to allow these to be demonstrated than to virtualize the data plane. For example, to demonstrate improved availability, the GENI platform would ideally mimic the baseline failure modes of the eventual technology (see the sketch below).

What is the best set of compromises? Again: one GENI facility or many?
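For the availability example, one way the platform could mimic baseline failure modes is to inject failures on a slice's virtual links. The sketch below is an assumed, simplified model (exponentially distributed up and down intervals); the class name and parameters are invented for illustration.

```python
import random

class FlakyLink:
    """A virtual link that alternates between up and down states with
    exponentially distributed durations, silently dropping packets while down."""

    def __init__(self, forward, mean_up_s: float, mean_down_s: float, seed: int = 0):
        self.forward = forward                      # callable that actually transmits a packet
        self.mean_up_s = mean_up_s
        self.mean_down_s = mean_down_s
        self.rng = random.Random(seed)
        self.up = True
        self.next_transition_s = self.rng.expovariate(1.0 / mean_up_s)

    def send(self, packet: bytes, now_s: float) -> None:
        # Advance the up/down state machine to the current time.
        while now_s >= self.next_transition_s:
            self.up = not self.up
            mean = self.mean_up_s if self.up else self.mean_down_s
            self.next_transition_s += self.rng.expovariate(1.0 / mean)
        if self.up:
            self.forward(packet)                    # packets sent while down are lost
```

An experiment claiming better availability could then be measured against the same injected failure pattern as the baseline.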

Scale and speed

Some experiments stress scale: 100’s of routers, 100’s of fixed end-nodes, N00’s of mobile devices, and rich connectivity.

Others mention “thousands of interconnected devices”.

Speed was not an issue for this experiment. These are minuscule experiments!

Real scale

10K-100K AS regions. Routers with 500 10 Gb ports. 100 striped lambdas… Millions of multi-homed end-nodes. Highly heterogeneous environment.

How are we going to try (and validate) ideas at these scales?

Some specific requirements

Rich connectivity (to experiment with novel routing). Need for multiple regions (emulate different operators). High bandwidth paths between slices?

Heavy-duty isolation among slices. Detecting physical location.

All devices should be able to do this. Universal crypto capability. Allow experiments in virtualized architectures.

Recursive virtualization?

Instrumentation

In some respects, a confused discussion. Clearly, we need to gather data on how experiments are going. But is this done in the infrastructure, or in the slice?

A future Internet must include native capabilities for instrumentation.

Is “data” something generic that is shared? Wishful thinking?

Non-requirements

Satellite: not a sufficiently distinct challenge.

Residential access networks: again, a challenge, but not sufficiently distinctive. If we have wireless nets and high-bandwidth links to end-nodes, that is good enough.

The experimental landscape

How many experiments? PlanetLab has demonstrated that there can be 1000’s of experiments. But how many folks want to try out a new Internet?

Perhaps we need different sorts of tools for experimental deployment at different “layers”. Again, one GENI or many?