Index: script.txt
==================================================================
--- script.txt
+++ script.txt
@@ -83,22 +83,21 @@
 ** Overall architecture
 This architecture applies to all testbeds using the CONFINE software.
 # Move over overlay diagram, less overlay connections, plus overlay network.
 - A testbed consists of a set of nodes managed by the same server.
   - Server managed by testbed admins.
-  - Network and node managed by node admins (usually node owners).
-    - Node admins must adhere to a set of conditions.
-    - Solves management vs. ownersip problem.
-- All components in testbed reachable via management network (tinc mesh VPN).
+  - Network and node managed by node admins (usually owners and CN members).
+    - Node admins must adhere to testbed conditions.
+    - This decouples testbed management from infrastructure ownership and mgmt.
+- Testbed management traffic uses a tinc mesh VPN:
   - Avoids problems with firewalls and private networks in nodes.
-  - Avoids address scarcity and incompatibility (well structured IPv6 schema).
-  - Public CN addresses still used for experiments when available.
-- Gateways connect disjoint parts of the management network.
-  - Allows a testbed spanning different CNs and islands through external means
-    (e.g. FEDERICA, the Internet).
-  - A gateway reachable from the Internet can expose the management network
-    (if using public addresses).
+  - Uses IPv6 to avoid address scarcity and incompatibility between CNs.
+  - Short-lived mgmt connections make components mostly autonomous and
+    tolerant to link instability.
+- A testbed can span multiple CNs thanks to gateways.
+  - Bridging the mgmt net over external means (e.g. FEDERICA, the Internet).
+  - Gateways can route the management network to the Internet.
 - A researcher runs the experiments of a slice in slivers, each running in a
   different node…
 
 ** Nodes, slices and slivers
 - …a model inspired by PlanetLab.
@@ -107,49 +106,45 @@
 allocated for a slice in a given node.
 # Diagram: Slices and slivers, two or three nodes with a few slivers on them,
 #   each with a color identifying it with a slice.
 ** Node architecture
-Mostly autonomous, no long-running connections to server, asynchronous
-operation: robust under link instability.
 # Node simplified diagram, hover to interesting parts.
 - The community device
-  - Completely normal CN network device, possibly already existing.
-  - Routes traffic between the CN and devices in the node's local network
-    (wired, runs no routing protocol).
+  - Completely normal CN device, so existing ones can be used.
+  - Routes traffic between the CN and devices in the node's wired local
+    network (which runs no routing protocol).
 - The research device
-  - More powerful than CD, it runs OpenWrt firmware customized by CONFINE.
-  - Experiments run here. The separation between CD and RD allows:
-    - Minumum CONFINE-specific tampering with CN hardware.
-    - Minimum CN-specific configuration for RDs.
-    - Greater compatibility and stability for the CN.
+  - Usually more powerful than the CD, since experiments run here.
+  - Separating CD/RD makes integration with any CN simple and safe:
+    - Little CONFINE-specific tampering with CN infrastructure.
+    - Little CN-specific configuration for RDs.
+    - Misbehaving experiments can't crash CN infrastructure.
+  - Runs OpenWrt firmware customized by CONFINE.
 - Slivers are implemented as Linux containers.
-  - LXC: lightweight virtualization (in Linux mainstream).
-  - Provides a familiar env for researchers.
-  - Easier resource limitation, resource isolation and node stability.
+  - Lightweight virtualization supported in mainline Linux.
+  - Provides a familiar and flexible env for researchers.
+  - Direct interfaces allow experiments to bypass the CD when interacting with
+    the CN.
 - Control software
-  - Manages containers and resource isolation through LXC tools.
-  - Ensures network isolation and stability through traffic control (QoS)
-    and filtering (from L2 upwards).
-  - Protects users' privacy through traffic filtering and anonimization.
-  - Optional, controlled direct interfaces for experiments to interact
-    directly with the CN (avoiding the CD).
+  - Uses LXC tools on containers to enforce resource limitation, resource
+    isolation and node stability.
+  - Uses traffic control, filtering and anonymization to ensure network
+    stability, isolation and privacy.
 - The recovery device can force a hardware reboot of the RD from several
   triggers and help with upgrade and recovery.
 
 ** Node and sliver connectivity
 # Node simplified diagram, hover to interesting parts.
 Slivers can be configured with different types of network interfaces
 depending on what connectivity researchers need for experiments:
 - Home computer behind a NAT router: a private interface with traffic
-  forwarded using NAT to the CN. Outgoing traffic is filtered to ensure
-  network stability.
+  forwarded using NAT to the CN and filtered to ensure network stability.
 - Publicly open service: a public interface (with a public CN address) with
-  traffic routed directly to the CN. Outgoing traffic is filtered to ensure
-  network stability.
+  traffic routed directly to the CN and filtered to ensure network stability.
 - Traffic capture: a passive interface using a direct interface for capture.
-  Incoming traffic is filtered and anonymized by control software.
+  Incoming traffic is filtered and anonymized to ensure network privacy.
 - Routing: an isolated interface using a VLAN on top of a direct interface.
   It can only reach other slivers of the same slice with isolated interfaces
   on the same link. All traffic is allowed.
 - Low-level testing: the sliver is given raw access to the interface. For
   privacy, isolation and stability reasons, this should only be allowed in
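
The tinc mesh VPN mentioned under "Overall architecture" can be pictured
with a minimal configuration sketch. This is illustrative only: the names,
paths and the ULA prefix below are hypothetical, not the actual CONFINE
addressing schema, and the real node software may set this up differently.

  # /etc/tinc/mgmt/tinc.conf on one node (hypothetical netname "mgmt")
  Name = node0042          # this node's name in the mesh
  Mode = router            # forward routed IPv6 between mesh members
  ConnectTo = server       # bootstrap metaconnection; links form on demand

  # /etc/tinc/mgmt/hosts/server (copied to every member with its public key)
  Address = testbed.example.org    # a gateway can expose this on the Internet
  Subnet = fd00:1234::1/128        # hypothetical mgmt address of the server

In router mode each host file announces the mgmt subnet it owns, which fits
the script's point about a well-structured IPv6 schema spanning several CNs.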
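
The sliver interface types under "Node and sliver connectivity" also map
quite directly onto LXC's network configuration. The sketch below uses stock
LXC 1.x config keys; the interface and bridge names and the VLAN id are made
up, and the actual CONFINE control software may configure containers
differently.

  # Private interface: veth pair attached to an RD-internal bridge;
  # the RD NATs and filters this traffic towards the CN.
  lxc.network.type = veth
  lxc.network.link = br-internal   # hypothetical internal bridge
  lxc.network.name = priv0

  # Isolated interface: a VLAN on top of a direct interface, so the sliver
  # only reaches same-slice slivers with isolated interfaces on that link.
  lxc.network.type = vlan
  lxc.network.link = eth1          # hypothetical direct interface
  lxc.network.vlan.id = 42         # hypothetical per-slice VLAN tag
  lxc.network.name = isol0

  # Low-level testing: move a direct interface into the container for raw
  # access (which the script restricts for privacy, isolation and stability
  # reasons).
  lxc.network.type = phys
  lxc.network.link = eth2          # hypothetical second direct interface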