#+title: Community-Lab: A Community Networking Testbed for the Future Internet

* Introduction

Hello, I'm (Speaker) from (organization). I work on the CONFINE project, and I'm going to talk to you about Community-Lab, a community networking testbed for the future Internet.

*##*

** Community networks

- Infrastructure deployed by organized groups of people for self-provision of broadband networking that works and grows according to their own interests.
- Characteristics: Open participation, open and transparent management, distributed ownership.
- The EU regards CNs as fundamental for
*##*
  the universalization of broadband networking.
- This poses a new research challenge: How to support the growth and sustainability of CNs by providing the means to conduct experimentally driven research.

*##*

** The CONFINE project: Community Networks Testbed for the Future Internet

- The CONFINE project takes on the previous challenge.
- Project supported by the European Community's Framework Programme 7 within the Future Internet Research and Experimentation (FIRE) initiative.
- Partners:
*##*
  - (community networks) guifi.net, Funkfeuer, Athens Wireless Metropolitan Network;
*##*
  - (research institutions) Universitat Politècnica de Catalunya, Fraunhofer Institute for Communication, Information Processing and Ergonomics, Interdisciplinary Institute for Broadband Technology;
*##*
  - (supporting NGOs) OPLAN Foundation, Pangea.
*##*
- Objective: Provide a testbed and associated tools and knowledge for researchers to experiment on real community networks.

*##*

** Testbed?

- Environment built with real hardware for realistic experimental research on network technologies.
*##*
- Several wireless testbeds exist, both indoor and outdoor.
- Problems: their limited local scale, their unrealistic controlled environment, and the fact that experiments can't share resources simultaneously.
- On the Internet: PlanetLab, a planet-scale testbed with resource sharing on nodes. Main inspiration for Community-Lab.
*##*

** Community-Lab: a testbed for community networks

- Community-Lab is the testbed developed by CONFINE.
- Integrates and extends the participating community networks.
- Uses the FEDERICA research backbone for interconnection.
*##*
- All of Community-Lab's software and documentation is “free as in freedom”, so people can use it to set up their own CONFINE testbed.

* Requirements and challenges

A testbed has requirements that are challenged by the unique characteristics of CNs. For instance, how to…

*##*

** Simple management vs. Distributed node ownership

- manage devices belonging to diverse owners?

*##*

** Features vs. Lightweight & low cost

- support devices ranging from PCs to embedded boards?

*##*

** Compatibility vs. Heterogeneity

- work with devices which allow little customization?
- support diverse connectivity models and link technologies, including wireless, wired and fiber?

*##*

** Familiarity & flexibility vs. System stability

- Researchers usually prefer a familiar Linux environment with root access.
- isolate experiments that share the same node?
*##*
- Sometimes accessing node locations can be hard.
*##*
- keep nodes stable to avoid in-place maintenance?

*##*

** Flexibility vs. Network stability

- Remember that network experiments run on a production network.
- allow interaction at the lowest possible layer of the CN while not disrupting or saturating it?

*##*

** Traffic collection vs. Privacy of community network users

- allow experiments performing traffic collection and characterization?
- while preventing researchers from spying on users' data?

*##*

** Management robustness vs. Link instability

- deal with frequent outages in the CN when managing nodes?

*##*

** Reachability vs. IP address provisioning

- CNs suffer from IPv4 address scarcity and incompatible addressing schemes, as well as little IPv6 support.
- support a testbed spanning different CNs?
*##*

* Community-Lab testbed architecture

** Overall architecture

This is the architecture developed by the CONFINE project to handle the previous challenges. It applies to all testbeds using CONFINE software.

*##*

# Axel: Introduce scenario: CNs, nodes, admins.
# Ivan: Don't zoom.

- A testbed consists of a set of nodes managed by the same server.
*##*
- The server is managed by testbed admins.
- The network and nodes are managed by CN members.
- Node admins must adhere to the testbed's terms and conditions.
- This decouples testbed management from infrastructure ownership & mgmt.
*##*
- Testbed management traffic uses a tinc mesh VPN:
  - Avoids problems with firewalls and private networks in nodes.
  - Uses IPv6 to avoid address scarcity and incompatibility between CNs.
  - Mgmt connections are short-lived to tolerate link instability.
*##*
- Gateways are entry points to the mgmt network.
  - They help extend it over multiple CNs by external means (e.g. FEDERICA, the Internet).
  - They can also route the management network to the Internet.
*##*
- Researchers run experiments in slices spread over several nodes (as slivers).

*##*

** Slices, slivers and nodes

# Axel: Reverse, from PoV of researcher: select nodes, run as slivers, group in slices.

- These concepts are inspired by PlanetLab.
- A slice is a management concept that groups a set of related slivers.
- A sliver holds the resources (CPU, memory, disk, bandwidth, interfaces…) allocated for a slice in a given node.
- A node hosts several slivers at the same time.

*##*

** Node architecture

# Axel: More stress on node itself.
# Ivan: Don't zoom!!

The node architecture allows the realization of these concepts.

*##*

A node consists of a community device (CD), a research device (RD) and a recovery device connected to the same wired local network.

*##*

- The community device
  - Completely normal CN device, so existing ones can be used.
  - Routes traffic between the CN and the local network (which runs no routing protocol).
*##*
- The research device
  - Usually more powerful than the CD, since experiments run here.
  - Separating the RD from the CD minimizes tampering with CN infrastructure.
  - Also, experiments can't crash CN devices.
  - Runs the versatile, light & free OpenWrt distro, customized by CONFINE.
*##*
  - Slivers are implemented as lightweight Linux containers.
    - So researchers get root access to a familiar environment.
*##*
  - Provides direct interfaces to allow low-level interaction of experiments with the CN, bypassing the CD.
*##*
  - Runs CONFINE control software:
    - Uses LXC tools to manage containers and enforce resource limits, isolation and node stability.
    - Uses traffic control, filtering and anonymization to ensure network stability, isolation and privacy (partially implemented).
*##*
- The recovery device (not implemented) can force a remote hardware reboot of the RD in case it hangs. It also helps with upgrade and recovery.

*##*

* Experiment support

# Axel: Turn around as of mail: from PoV of researcher: 1) testbed through API, choose nodes, 2) login OoB, 3) auto creation, 4) specific interfaces.

Researchers can configure slivers with different types of network interfaces depending on the connectivity needs of experiments. For instance, to…

*##*
- mimic a home PC: use the private interface,
*##*
  which has L3 traffic forwarded using NAT to the CN but filtered to ensure network stability.
*##*
- implement a network service: create a public interface,
*##*
  which has a CN address and L3 traffic routed directly to the CN, but filtered to ensure network stability.
*##*
- experiment with routing algorithms: create an isolated interface,
*##*
  which uses a VLAN on top of a direct interface. All L2 traffic is allowed, but only between other slivers of the same slice with isolated interfaces on the same physical link.

These were demonstrated with BitTorrent and mesh routing experiments at the IEEE P2P'12 conference.
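The access rules of these three interface types can be sketched in a few lines of Python. This is a simplified illustration only: the function and type names below are invented for clarity and are not part of the actual CONFINE control software.

```python
# Illustrative model of the three sliver interface types described above.
# NOT the real CONFINE implementation; names and policies are a sketch
# of the rules stated in the talk.

def allows_raw_l2(iface_type: str) -> bool:
    """Only isolated interfaces pass raw L2 frames."""
    return iface_type == "isolated"

def l3_path(iface_type: str) -> str:
    """How L3 traffic from the sliver reaches the community network."""
    return {
        "private": "NATed to the CN and filtered for network stability",
        "public": "routed directly to the CN (own CN address) and filtered",
        "isolated": "none: L2 only, over a VLAN on a direct interface",
    }[iface_type]

def isolated_frame_allowed(same_slice: bool, same_link: bool) -> bool:
    """L2 frames on an isolated interface are only delivered between
    slivers of the same slice with isolated interfaces on the same
    physical link."""
    return same_slice and same_link
```

For example, `isolated_frame_allowed(True, False)` is `False`: two slivers of the same slice attached to different physical links cannot exchange L2 frames through their isolated interfaces.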
*##*

Future support is also planned for experiments that:

- analyze traffic: create a passive interface
*##*
  to capture traffic on a direct interface, which is filtered and anonymized to ensure network privacy.
*##*
- perform low-level testing:
*##*
  the sliver is given free raw access to a direct interface. For privacy, isolation and stability reasons, this should only be allowed on exceptional occasions.

*##*

Besides experiments run in slices, researchers will soon be able to collect link quality and bandwidth usage measurements of all RDs' interfaces through the DLEP protocol.

*##*

Moreover, the server and nodes will soon publish management information through an API that can be used to study the testbed itself, or to implement external services like node monitoring and selection.

** An example experiment

To show how the testbed works, we'll create two slivers which ping each other.

*##*

# Use summary diagram, maybe colorise labels.

1. The researcher first contacts the server and registers a slice description which specifies a template for slivers (e.g. Debian Squeeze) and includes data and programs to set up slivers and run experiments.
*##*
2. This and all subsequent changes performed by the researcher are stored in the registry, which holds the configuration of all components in the testbed.
*##*
3. The researcher chooses two nodes and registers sliver descriptions for them in the previous slice. Each one includes a public interface to the CN. Then the researcher tells the server to instantiate the slice.
*##*
4. Each of the chosen nodes gets its sliver description. If enough resources are available, a container is created by applying the sliver configuration over the selected template.
*##*
5. Once the researcher knows that slivers have been instantiated, the server can be commanded to activate the slice.
*##*
6. When nodes get instructions to activate slivers, they start containers.
*##*
7. Containers execute the setup & run programs provided by the researcher.
*##*
8. Researchers interact directly with containers if needed (e.g. via SSH) and collect results from them.
*##*
9. When finished, the researcher tells the server to deactivate and deinstantiate the slice.
*##*
10. Nodes get the instructions and stop and remove the containers.

*##*

This is a summary of all the previous steps.

*##*

* Cooperation between community networks and Community-Lab

Cooperation can take different forms. Given a typical CN like this, with most nodes linked using cheap and ubiquitous WiFi technology:

*##*

# Axel: Keep CN on sight, explain RDs and RD links (DIs) in cloud.

- CN members can provide an existing CD and let CONFINE connect an RD to it via Ethernet. Experiments are restricted to the application layer unless the node owner allows the RD to include a direct interface (i.e. an antenna).
*##*
- CN members can provide a location and let CONFINE set up a complete node there (CD and RD). In this way CONFINE helps extend the CN.
*##*
- CONFINE can also extend the CN by setting up a physically separated cloud of connected nodes. Experiments in all layers are possible in this setup, but users should be warned about the research nature of the network.

*##*

These are only a few ways of cooperating, but more can be envisioned.

*##*

* Participate!

We introduced you to Community-Lab, a new testbed being developed by the CONFINE project to support research that can help CNs become a key part of the Internet in the near future.

More information: http://community-lab.net/, http://confine-project.eu/

Community networks and researchers: we look forward to your participation!

(Questions? Thanks!)

# Local Variables:
# mode: org
# End: