Index: script.txt
==================================================================
--- script.txt
+++ script.txt
@@ -45,19 +45,19 @@
 ** Simple management vs. Distributed node ownership
 - In contrast with esp. indoor testbeds that belong wholly to the same entity.
 ** Features vs. Lightweight, low cost (free & open)
-- Devices ranging from PCs to embedded boards located on roofs (or worse).
-# Node on roof, frozen tower.
+- Devices ranging from PCs to embedded boards.
 - Need light system able to run on a variety of devices.
 ** Familiarity & flexibility vs. System stability
 - Familiar Linux env with root access to researchers.
 - Keep env isolation (nodes are shared by experiments).
 - Keep node stability (to avoid in-place maintenance, some difficult-to-reach
   node locations).
+# Frozen tower.
 ** Flexibility vs. Network stability
 - Network experiments running on nodes in a production network.
 - Allow interaction with CN at the lowest level possible but not disrupting or
   overusing it.
@@ -77,27 +77,23 @@
 - Lots of different devices (disparate connectivity and software openness).
 - Lots of different link technologies (wireless, wired, fiber).
 * Community-Lab testbed architecture
 ** Overall architecture
-This architecture applies to all testbeds using the CONFINE software. All
-CONFINE software and documentation is released under Free licenses. Anyone
+This architecture applies to all testbeds using the CONFINE software. Since
+all CONFINE software and documentation is released under Free licenses, anyone
 can set up a CONFINE testbed.
 # Move over overlay diagram less overlay connections plus overlay network.
 - A testbed consists of a set of nodes managed by the same server.
   - Server managed by testbed admins.
   - Network and node managed by node admins (usually node owners).
   - Node admins must adhere to a set of conditions.
-    - Problematic nodes are not eligible for experimentation.
   - Solves management vs. ownership problem.
 - All components in testbed reachable via management network (tinc mesh VPN).
-  - Server and nodes offer APIs on that network.
+  - Avoids problems with firewalls and private networks.
   - Avoids address scarcity and incompatibility (well-structured IPv6 schema).
-  - Avoids problems with firewalls and private networks.
-  - Thus avoids most CONFINE-specific network configuration of the node (CD).
   - Public addresses still used for experiments when available.
-  - Odd hosts can also connect to the management network.
 - Gateways connect disjoint parts of the management network.
   - Allows a testbed spanning different CNs and islands through external means
     (e.g. FEDERICA, the Internet).
   - A gateway reachable from the Internet can expose the management network
     (if using public addresses).
@@ -118,57 +114,46 @@
 # Node simplified diagram, hover to interesting parts.
 - The community device
   - Completely normal CN network device, possibly already existing.
   - Routes traffic between the CN and devices in the node's local network
     (wired, runs no routing protocol).
-  - CD/RD separation allows minimum CONFINE-specific configuration for RD, but
-    adds one hop for experiments to CN.
 - The research device
   - More powerful than CD, it runs OpenWrt (Attitude Adjustment) firmware
     customized by CONFINE.
+  - Experiments run here. The separation between CD and RD allows:
+    - Minimum CONFINE-specific tampering with CN hardware.
+    - Minimum CN-specific configuration for RDs.
+    - Greater compatibility and stability for the CN.
 - Slivers are implemented as Linux containers.
   - LXC: lightweight virtualization (in Linux mainstream).
-    - Resource limitation.
-    - Allows a familiar env with resource isolation and keeping node
-      stability.
-    - Root access to slivers always available to researchers via SSH to RD.
+    - Easier resource limitation, resource isolation and node stability.
+    - Provides a familiar env for researchers (see the sketch below).
 - Control software
-  - Manages containers and resource isolation using LXC.
+  - Manages containers and resource isolation through LXC tools.
   - Ensures network isolation and stability through traffic control (QoS) and
     filtering (from L2 upwards).
   - Protects users' privacy through traffic filtering and anonymization.
-  - Provides various services to slivers through internal bridge.
   - Optional, controlled direct interfaces for experiments to interact
-    directly with the CN.
-  - CD/RD separation allows greater compatibility and stability, as well as
-    minimum CN-specific configuration, avoids managing CN hardware.
+    directly with the CN (avoiding the CD).
 - The recovery device can force a hardware reboot of the RD from several
   triggers and help with upgrade and recovery.
-** Alternative node arrangements
-Compatible with the current architecture.
-- RD hosts CD as a community container: low cost (one device), less stable.
-  Not yet implemented.
-- CD hosts RD as a KVM: for a powerful node such as a PC, in the future with
-  radios linked over Ethernet and DLEP.
-
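+A minimal sketch of how a sliver maps onto stock LXC tools (the container
+name, template and memory limit are hypothetical, not what the CONFINE
+control software actually uses):
+#+BEGIN_SRC sh
+lxc-create -n sliver-example -t debian   # sliver container from a template
+lxc-cgroup -n sliver-example memory.limit_in_bytes 256M   # resource cap
+lxc-start -n sliver-example -d           # run it in the background
+#+END_SRC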
 ** Node and sliver connectivity
 # Node simplified diagram, hover to interesting parts.
 Slivers can be configured with different types of network interfaces
 depending on what connectivity researchers need for experiments:
-- Home computer behind a NAT router: a private interface placed into the
-  internal bridge, where traffic is forwarded using NAT to the CN. Outgoing
-  traffic is filtered to ensure network stability.
-- Publicly open service: a public interface (with a public CN address) placed
-  into the local bridge, with traffic routed directly to the CN. Outgoing
-  traffic is filtered to ensure network stability.
-- Traffic capture: a passive interface placed on the bridge of the direct
-  interface used for capture. Incoming traffic is filtered and anonimized by
-  control software.
+- Home computer behind a NAT router: a private interface with traffic
+  forwarded using NAT to the CN (see the sketch after this list). Outgoing
+  traffic is filtered to ensure network stability.
+- Publicly open service: a public interface (with a public CN address) with
+  traffic routed directly to the CN. Outgoing traffic is filtered to ensure
+  network stability.
+- Traffic capture: a passive interface using a direct interface for capture.
+  Incoming traffic is filtered and anonymized by control software.
 - Routing: an isolated interface using a VLAN on top of a direct interface.
-  Other slivers with isolated interfaces must be within link layer reach. All
-  traffic is allowed.
+  It can only reach other slivers of the same slice with isolated interfaces
+  on the same link. All traffic is allowed.
 - Low-level testing: the sliver is given raw access to the interface. For
   privacy, isolation and stability reasons this should only be allowed on
   exceptional occasions.
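+A sketch of what the control software might set up for the private and
+isolated interface types above (interface names, addresses and VLAN id are
+hypothetical):
+#+BEGIN_SRC sh
+# Private interface: NAT sliver traffic from the internal bridge to the CN.
+iptables -t nat -A POSTROUTING -s 192.168.241.0/24 -o eth0 -j MASQUERADE
+# Isolated interface: a VLAN on a direct interface, reachable only by
+# slivers of the same slice on the same link.
+ip link add link wlan1 name wlan1.242 type vlan id 242
+ip link set wlan1.242 up
+#+END_SRC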
 * How the testbed works
@@ -176,13 +161,12 @@
 An example experiment: two slivers, one of them (source sliver) pings the
 other one (target sliver).
 1. The researcher first contacts the server and creates a slice description
    which specifies a template for slivers (e.g. Debian Squeeze i386).
-   Experiment data is attached including a program to setup the experiment
-   (e.g. a script that runs =apt-get install iputils-ping=) and another one to
-   run it.
+   Experiment data is attached including a program to set up the experiment
+   and another one to run it.
 2. The server updates the registry which holds all definitions of testbed,
    nodes, users, slices, slivers, etc.
 3. The researcher chooses a couple of nodes and creates sliver descriptions
    for them in the previous slice. Both sliver descriptions include a public
    interface to the CN and user-defined properties for telling apart the
@@ -206,24 +190,25 @@
    collect results.
 * Community-Lab integration in existing community networks
 # CN diagram (buildings and cloud).
 A typical CN looks like this, with most nodes linked using WiFi technology
-(cheap and ubiquitous), but sometimes others as optical fiber. Remember that
-CNs are production networks with distributed ownership. Strategies:
+(cheap and ubiquitous), but sometimes others such as optical fiber. The
+CONFINE project follows three strategies, taking into account that CNs are
+production networks with distributed ownership:
 # CN diagram extended with CONFINE devices (hover over interesting part).
 - Take an existing node owned by CN members: CONFINE provides an RD and
   connects it via Ethernet. Experiments are restricted to the application
   layer unless the node owner allows the RD to include a direct interface
   (i.e. antenna).
 - Extend the CN with complete nodes: CONFINE provides both the CD and the RD
-  and uses a CN member's location. All but low-level experiments are
-  possible with direct interfaces.
+  and uses a CN member's location. All but low-level experiments are possible
+  using direct interfaces.
 - Set up a physically separated cloud of nodes: CONFINE extends the CN with a
   full installation of connected nodes at a site controlled by a partner
-  (e.g. campus). All kinds of experiments are possible with direct
+  (e.g. campus). All kinds of experiments are possible using direct
   interfaces. Users are warned about the experimental nature of the network.
 * Recap
 - Community networks are an emerging field to provide citizens with