
1

Emulab Node Lifecycle

2

Overview: Node Lifecycle State Machines

3

State Machines and stated

• Emulab uses a centralized service, stated, to track what each node does
  – Booting, self-configuration, reloading images, shutdown, etc.
  – One server for all nodes; each node tracked individually

• Tracking based on state machines that describe what nodes should be doing

4

Example: Normal Node Boot
(State machine diagrams: local PC and local vnode)

5

Example: PC w/ partial OS support
(State machine diagrams: local PC vs. PC+OS w/ minimal Emulab support)

6

State Overview

• Each OS has a state machine
  – Describes what is valid
  – Many OSes can follow the same state machine

• Node (or boss) sends event for changes

• stated records the event and checks the state machine (in the DB) to see if the transition from state A to B is valid
  – Takes actions if not (mail, reboot, retry, etc.)
  – Also takes action when we “time out” in a state
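A minimal sketch of the idea in Python (the real stated is not Python): the state names below also appear elsewhere in these slides, but the transition table, timeout values, and helper names are illustrative only.

    import time

    # Hypothetical per-OS state machine: the set of valid (from, to) transitions.
    NORMAL_BOOT = {
        ("SHUTDOWN", "BOOTING"),
        ("BOOTING", "TBSETUP"),
        ("TBSETUP", "ISUP"),
        ("ISUP", "SHUTDOWN"),
    }

    # Illustrative per-state timeouts (seconds); expiry would trigger an action.
    TIMEOUTS = {"BOOTING": 300, "TBSETUP": 600}

    def handle_event(states, deadlines, node, new_state, machine=NORMAL_BOOT):
        """Record a state event; flag invalid transitions and arm a timeout."""
        old_state = states.get(node, "SHUTDOWN")
        valid = (old_state, new_state) in machine
        if not valid:
            # The real stated would mail operators, retry, or reboot the node here.
            print(f"invalid transition on {node}: {old_state} -> {new_state}")
        states[node] = new_state
        deadlines[node] = time.time() + TIMEOUTS.get(new_state, 0)
        return valid

    # Example: one node going through a normal boot.
    states, deadlines = {}, {}
    for s in ("BOOTING", "TBSETUP", "ISUP"):
        handle_event(states, deadlines, "pc42", s)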

7

States for Communication

• Programs on boss depend on state transitions to find out about important events
  – Reboot nodes, wait for ISUP state
  – Reloading: wait for RELOADDONE, then ISUP

• States can also have actions associated with them (“state triggers”)
  – E.g. when reloading finishes, we check if it is a node being cleaned up before going free, and release it if necessary
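A rough sketch of the boss-side waiting pattern, assuming a hypothetical get_state lookup; the real tools react to stated’s events and DB state rather than polling like this.

    import time

    def wait_for_state(get_state, node, wanted, timeout=600, poll=5):
        """Block until stated reports `wanted` for `node`, or time out."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            if get_state(node) == wanted:
                return True
            time.sleep(poll)
        return False

    # Reload workflow from the slide: wait for RELOADDONE, then for ISUP.
    def wait_for_reload(get_state, node):
        return (wait_for_state(get_state, node, "RELOADDONE")
                and wait_for_state(get_state, node, "ISUP"))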

8

Example: Node Reload

9

All together now…

12

Now, more depth on some of the node lifecycle pieces

• Node bootstrapping

• Node self-configuration

• TMCC/CD: Testbed Master Control Client & Daemon

13

Emulab Node Bootstrapping

14

Requirements

• Ability to take control of a node regardless of current state

• Ability to restore node to a known state

• All with no manual intervention

• Provide this capability to users

15

Taking Control: Rebooting a Node

• Multi-step approach (sketched below)
  – “ssh reboot”
  – ICMP “Ping of Death” (IPoD)
  – Power cycle
• Encapsulated in node_reboot
• Available to users
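A sketch of the escalation logic only; the command names and arguments are placeholders, not the real Emulab tools’ syntax, and the real node_reboot also watches stated to confirm the node actually went down and came back.

    import subprocess

    # Gentlest method first, escalate on failure.
    METHODS = (
        lambda node: ["ssh", node, "reboot"],    # 1. clean reboot over ssh
        lambda node: ["ipod", node],             # 2. ICMP "Ping of Death"
        lambda node: ["power", "cycle", node],   # 3. power-cycle the outlet
    )

    def reboot_node(node):
        """Try each method in order until one appears to work."""
        for make_cmd in METHODS:
            if subprocess.run(make_cmd(node)).returncode == 0:
                return True   # assume success if the command itself succeeded
        return False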

16

node_reboot

• Authenticates the caller

• Sends an event to stated
  – stated knows what the node is doing
  – Can prevent reboots at a bad time
• stated reruns node_reboot in “really do it” mode

19

Taking control: Catching the Boot

• PXE-enabled NICs, first boot choice

• PXE downloads a boot loader via TFTP

• Emulab boot loaders may then
  – Boot from a particular disk partition
  – Download a standalone kernel
  – Download an MFS-based FreeBSD

20

The PXE boot process

21

Taking control: Catching the Boot (wide-area nodes)

• Use bootable CDROM

• CDROM contains an MFS-based FreeBSD system

• Contact Emulab (using secure tmcc) for instructions:
  – Apply patches
  – Reload disk
  – Just boot from disk

22

Restoring a Node: Disk reloading

• Frisbee: the multi-threaded, multi-filesystem, multicast marvel!

• Images are intelligently compressed using filesystem-specific knowledge
• Image distribution is client-driven (sketched below):
  – Each client independently requests data
  – Clients “snoop” each other’s requests
• Client has network, unzip, and disk threads
• Server takes requests from multiple clients and multicasts data
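A toy sketch of the client-driven chunk bookkeeping, not the real Frisbee wire protocol or its threading: the client tracks which chunks it still needs, requests missing ones, and keeps any chunk heard on the multicast channel, including chunks that some other client asked for.

    def make_client(total_chunks):
        return {"needed": set(range(total_chunks)), "have": {}}

    def next_request(client):
        """Pick a still-missing chunk to request (real Frisbee randomizes this)."""
        return min(client["needed"]) if client["needed"] else None

    def on_multicast_chunk(client, chunk_id, data):
        """Consume every chunk seen on the wire, whether or not we requested it."""
        if chunk_id in client["needed"]:
            client["have"][chunk_id] = data   # hand off to the unzip/disk threads
            client["needed"].discard(chunk_id)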

23

Disk reloading II

• Frisbee server (frisbeed) is started up to feed appropriate image

• Client node boots into FreeBSD MFS
  – Obtains info about what image to load
  – Runs the frisbee client (frisbee)
  – Performs post-frisbee customizations

• Users can reload their own disks at any time (os_load)

24

Disk reloading (wide-area)

• Initiated by the CDROM system

• Copies or streams the image from Emulab via ssh

• Feeds into imageunzip
• Distribution in this manner means:
  – TCP for flow control and reliability
  – ssh for privacy
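Roughly, the streaming path looks like the pipeline below; the host name, image path, and disk device are hypothetical, and the assumption that imageunzip reads the image from stdin (“-”) should be checked against the real tool before use.

    import subprocess

    # Stream the compressed image over ssh and pipe it into imageunzip.
    ssh = subprocess.Popen(
        ["ssh", "boss.emulab.net", "cat", "/usr/testbed/images/DEFAULT.ndz"],
        stdout=subprocess.PIPE)
    subprocess.run(["imageunzip", "-", "/dev/ad0"], stdin=ssh.stdout)
    ssh.stdout.close()
    ssh.wait()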

25

Summary:A typical bootstrap scenario

• Experiment creation requests nodes with FreeBSD or Linux

• DB state is setup, nodes are rebooted

• On each, PXE loads pxeboot, which boots from the appropriate disk partition

• Nodes come up and self-configure

• When freed, nodes are reloaded with the default image

26

Bootstrap Issues

• PXE-based boot does not scale well

• PXE (DHCP) requires MAC broadcast

• Alternative: CD/floppy/flash-based

• Speed issues:
  – Biggest time sink: the BIOS (2 min)
  – DHCP can be slow (10-15 sec)
  – Disk reload not a problem (30-90 sec)

28

Emulab Node Self-configuration

29

What is “Self-configuration”?

• Nodes run “stock” OS install and customize themselves at boot time

• Alternative: pre-customized disk images
  – Issues: speed and space
• Alternative: post-disk-load customization
  – Issue: compatibility
• Disadvantage of self-configuration:
  – Portability: must be adapted to every OS

30

Emulab self-configuration

(Table comparing which configuration features are handled by traditional configuration, Emulab local nodes, and Emulab remote nodes: network identity, shared filesystems, user accounts and keys, hosts file, network interfaces, IP tunnels, link shaping, routing, tar and RPM installation, daemon and agent startup, and custom user script execution.)

31

Features of the Implementation

• Non-intrusive
  – Single hook on the host (rc scripts)
  – Adds two directories of scripts (mostly perl)
  – Some changes/replacements of standard files
• Mostly “just works” on Unix-like systems
  – Linux, FreeBSD, OpenBSD to date
  – Should be easy to port to others

• Windows XP support partially done

32

Where is my Control Net??

• Must locate the control net interface

• Bus search order different in BSD and Linux

• Cannot rely on DB since we can’t reach it!

• Current:
  – Hack scripts to ID based on kernel boot output
  – Lame: must be customized, tied to node type
• Future:
  – DHCP on all interfaces, use the IF that replies?
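A sketch of that “future” approach, assuming ISC dhclient is available; this is not what the current scripts do, and real node software would need per-OS handling.

    import subprocess

    def find_control_net(interfaces):
        """Try DHCP on each interface; whichever gets a lease is the control net."""
        for ifc in interfaces:
            # dhclient -1: try to get a lease once, then give up
            if subprocess.run(["dhclient", "-1", ifc]).returncode == 0:
                return ifc
        return None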

33

The process

• rc.testbed run as last step of node boot

• Tell boss we are configuring (TBSETUP)

• If node status is “free,” we are done

• Get all TMCD-provided info in one transfer

• Set up FS mounts, accounts

• Construct rc.foo scripts for initializing the rest (flow sketched below)
  – interfaces, routes, tarballs...
  – Scripts generated depend on target environment
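A condensed sketch of this slide’s flow, assuming the tmcc commands listed later in these slides; the status check, the exact output formats, and the script generation are all simplified here.

    import subprocess

    def tmcc(*args):
        """Thin wrapper around the tmcc client (argument handling simplified)."""
        return subprocess.run(["tmcc", *args],
                              capture_output=True, text=True).stdout

    def rc_testbed():
        tmcc("state", "TBSETUP")       # tell boss we are configuring
        if "FREE" in tmcc("status"):   # free node: nothing more to do
            return                     # (exact status format is an assumption)
        config = tmcc("fullconfig")    # one bulk transfer of all boot-time info
        # ...split `config` into rc.ifconfig, rc.routes, rc.tarballs, etc.,
        # set up mounts and accounts, then run the generated scripts.
        tmcc("state", "ISUP")          # finally, tell boss we are up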

34

The process (we're not done yet!)

• Network setup: run scripts for setup of interfaces, tunnels, link shaping, routes

• User files: run scripts for RPMs and tarballs

• Run daemons: healthd, idled, watchdog

• Run agents: program, link, trafgen

• Tell boss we are up (ISUP)

• Configure virtual nodes

35

Configuring non-PC nodes

• IXP network processor
  – Parasitic relationship with the host PC
  – Much of the configuration done from the host PC
• Multiplexed (“virtual”) nodes
  – Like IXPs, have a “sub node” relationship with the host
  – Many-to-one relationship makes it advantageous to perform setup from the physical node
  – Still many aspects performed by the node itself

36

Configuring non-PC nodes (cont.)

• Future: Cisco routers
  – First cut: build config file (from template) and have router reconfigure
  – More advanced: allow custom router OS and config file

41

The Emulab Master Control Protocol (TMCC/TMCD)

42

Executive Summary

• Simple protocol for transferring state between nodes and “boss” (essentially a database proxy)

• Primarily used for node self-configuration

• Flexible authentication and transport protocol

• The “Swiss-army knife” (or “kitchen sink”) of Emulab protocols

43

TMCC/TMCD

• Testbed Master Control Client and Daemon

• Used to request and return:
  – Configuration info for a node
  – State transitions

• Uses a simple ASCII message format, suitable for perl parsing

• Can use UDP, TCP, SSL on TCP

• Client has “caching” and proxy modes

44

TMCC API

• Usage: tmcc command argument …

• Returns NAME=VALUE pairs (usually)

• Examples:

  Command      Action
  nodeid       Returns Emulab node name
  status       Returns project and experiment ID
  ifconfig     Returns IP info for network interfaces
  accounts     Returns user names and public keys to install
  rpms         Returns list of RPMs to install
  mounts       Returns list of NFS directories to mount
  routing      Returns list of static routes to install
  state        Sets current node state
  vnodelist    List of virtual nodes for this physical node
  fullconfig   Bulk return of all info needed at boot time
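For illustration, a minimal client sketch in Python rather than the real C tmcc; the port number and wire framing are assumptions, and quoting in replies is ignored.

    import socket

    def tmcc_request(server, command, port=7777):
        """Send one ASCII command to tmcd over TCP and return the raw reply."""
        with socket.create_connection((server, port)) as s:
            s.sendall((command + "\n").encode("ascii"))
            s.shutdown(socket.SHUT_WR)
            reply = b""
            while chunk := s.recv(4096):
                reply += chunk
        return reply.decode("ascii", "replace")

    def parse_pairs(line):
        """Turn 'NAME=value NAME2=value2 ...' into a dict (quoting ignored)."""
        return dict(tok.split("=", 1) for tok in line.split() if "=" in tok)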

45

TMCC Security(local node)

• Clients identify their server via config file, compiled-in name, or DNS

• Authentication at the server based solely on IP address

• Vulnerabilities:
  – Malicious impersonation of the server on the control net
  – Malicious impersonation of another node

46

TMCC Security(wide area nodes)

• SSL: a single node private key is used by all nodes

• Nodes can ensure they are talking to the server

• Server can ensure it is talking to some node

• Vulnerability: the node private key is in the filesystem of every node; crack one node and you have them all

47

Issues

• Scaling
  – Every client made 20+ calls at boot time
  – Mitigated with “bulk transfer” and proxies
  – Still a hot spot (e.g., ISALIVE messages)
• Security
  – Highly DOS-able
  – Mitigate with caching, per-experiment proxy?