#+title: Community-Lab: A Community Networking Testbed for the Future Internet
* Introduction
** Community networks
- Origins: Despite the importance of the Internet, companies and governments
  left behind people and regions of little economic interest to them, so some
  groups started coordinating the deployment of their own networks for
  self-provision.
- Characteristics: Open participation, open and transparent management,
distributed ownership, works and grows according to users' interests.
- Prospects: Strategic importance for the expansion of broadband access
  throughout Europe (as stated in the European Digital Agenda).
- A challenge: How to support the growth and sustainability of community
networks by providing the means to conduct experimentally driven research.
** The CONFINE project (Community Networks Testbed for the Future Internet)
- Takes on the previous challenge.
- Project supported by the European Community Framework Programme 7 within the
Future Internet Research and Experimentation Initiative (FIRE).
- Partners (list with logos): Fundació guifi.net, FunkFeuer, Athens Wireless
Metropolitan Network (community networks); Universitat Politècnica de
Catalunya, Fraunhofer Institute for Communication, Information Processing
and Ergonomics, Interdisciplinary Institute for Broadband Technology
(research centres); the OPLAN Foundation, Pangea (NGOs).
- Objectives: Provide a testbed and associated tools and knowledge for
researchers to experiment on real community networks.
** Testbeds
- Environments built with real hardware for realistic experimental research on
network technologies (instead of simulations).
- Wireless: Berlin RoofNet, MIT Roofnet (outdoor); IBBT's w-iLab.t, CERTH's
NITOS, WINLAB's ORBIT (indoor). Limited local scale, controlled
environment, no resource sharing between experiments.
- Internet: PlanetLab, planet-scale testbed with resource sharing on nodes.
Main inspiration for Community-Lab.
** Community-Lab: a testbed for community networks
- The testbed developed by CONFINE.
- Integrates and extends three Community Networks: guifi.net, FunkFeuer, AWMN.
# Node maps here for CNs with captures from node DBs.
- Also nodes in participating research centres.
- Linked together over the FEDERICA backbone.
- All its software and documentation are released under free licenses, so
  anyone can set up a CONFINE testbed like Community-Lab.
* Challenges and requirements
** Simple management vs. Distributed node ownership
- In contrast with e.g. indoor testbeds that belong wholly to a single
  entity.
** Features vs. Lightweight, low cost (free & open)
- Devices ranging from PCs to embedded boards.
- Need a lightweight system able to run on very different devices.
** Familiarity & flexibility vs. System stability
- Familiar Linux environment with root access for researchers.
- Keep environment isolation (nodes are shared by experiments).
- Keep node stability (to avoid in-place maintenance; some node locations are
  difficult to reach).
# Frozen tower.
** Flexibility vs. Network stability
- Network experiments running on nodes in a production network.
- Allow interaction with the CN at the lowest possible level without
  disrupting or overusing it.
** Traffic collection vs. Privacy of CN users
- Experiments performing traffic collection and characterization.
- Avoid researchers spying on users' data.
** Link instability vs. Management robustness
- Deal with frequent network outages in the CN.
** Reachability vs. IP address provisioning
- Testbed spanning different CNs.
- IPv4 address scarcity and incompatible address spaces between CNs; lack of
  IPv6 support.
** Heterogeneity vs. Compatibility
- Lots of different devices (disparate connectivity and software openness).
- Lots of different link technologies (wireless, wired, fiber).
* Community-Lab testbed architecture
** Overall architecture
This architecture applies to all testbeds using the CONFINE software.
# Move over overlay diagram less overlay connections plus overlay network.
- A testbed consists of a set of nodes managed by the same server.
- Server managed by testbed admins.
- Network and node managed by node admins (usually owners and CN members).
- Node admins must adhere to testbed conditions.
- This decouples testbed management from infrastructure ownership and mgmt.
- Testbed management traffic uses a tinc mesh VPN:
- Avoids problems with firewalls and private networks in nodes.
- Uses IPv6 to avoid address scarcity and incompatibility between CNs.
  - Short-lived mgmt connections make components mostly autonomous and
    tolerant to link instability (see the sketch after this list).
- A testbed can span multiple CNs thanks to gateways.
- Bridging the mgmt net over external means (e.g. FEDERICA, the Internet).
- Gateways can route the management network to the Internet.
- A researcher runs the experiments of a slice in slivers, each running on a
  different node.
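To illustrate why short-lived connections matter, a component can poll its
peers with small, retried requests instead of holding long sessions across
flaky links. A minimal sketch in Python, assuming a hypothetical HTTP endpoint
on a node's management address (the address, path and timings are invented):
#+begin_src python
import time

import requests  # third-party HTTP library

# Hypothetical node address on the IPv6 management overlay (the
# prefix and API path are assumptions for this sketch).
NODE_URL = "http://[fdf5:5351:1dfd::2]/confine/api/node"

def fetch_node_state(retries=5, delay=30):
    """Short-lived polling request, tolerant of CN link outages."""
    for _ in range(retries):
        try:
            return requests.get(NODE_URL, timeout=10).json()
        except requests.RequestException:
            time.sleep(delay)  # the link may be down; retry later
    return None  # give up for now; the next poll cycle will retry
#+end_src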
** Nodes, slices and slivers
- The node/slice/sliver model is inspired by PlanetLab.
- The slice (a management concept) groups a set of related slivers.
- A sliver holds the resources (CPU, memory, disk, bandwidth, interfaces…)
  allocated for a slice in a given node (see the data-model sketch below).
# Diagram: Slices and slivers, two or three nodes with a few slivers on them,
# each with a color identifying it with a slice.
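A minimal sketch of this model as plain data structures (field names and
values are illustrative assumptions, not the actual CONFINE schema):
#+begin_src python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sliver:
    """Resources allocated for a slice in one node."""
    node: str          # node hosting this sliver
    cpu_share: float   # fraction of the node's CPU
    memory_mb: int
    disk_mb: int
    interfaces: List[str] = field(default_factory=list)

@dataclass
class Slice:
    """Management concept grouping a set of related slivers."""
    name: str
    template: str      # e.g. "debian-squeeze-i386"
    slivers: List[Sliver] = field(default_factory=list)

# One slice with two slivers on different nodes:
demo = Slice("ping-demo", "debian-squeeze-i386", [
    Sliver("node-a", 0.25, 128, 512, ["public"]),
    Sliver("node-b", 0.25, 128, 512, ["public"]),
])
#+end_src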
** Node architecture
# Node simplified diagram, hover to interesting parts.
- The community device
- Completely normal CN device, so existing ones can be used.
- Routes traffic between the CN and devices in the node's wired local
network (which runs no routing protocol).
- The research device
  - Usually more powerful than the CD, since experiments run here.
- Separating CD/RD makes integration with any CN simple and safe:
- Little CONFINE-specific tampering with CN infrastructure.
- Little CN-specific configuration for RDs.
- Misbehaving experiments can't crash CN infrastructure.
- Runs OpenWrt firmware customized by CONFINE.
- Slivers are implemented as Linux containers.
    - Lightweight virtualization supported in mainline Linux.
- Provides a familiar and flexible env for researchers.
- Direct interfaces allow experiments to bypass the CD when interacting with
the CN.
- Control software
  - Uses LXC tools on containers to enforce resource limitation, resource
    isolation and node stability (see the sketch after this list).
  - Uses traffic control, filtering and anonymization to ensure network
    stability, isolation and privacy (partially implemented).
- The recovery device can force a hardware reboot of the RD on several
  triggers and help with upgrades and recovery (not implemented).
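As an illustration of how control software could enforce such limits with
LXC, here is a minimal sketch using the python3-lxc bindings (the container
name, template arguments and limit values are assumptions; the actual CONFINE
tooling differs):
#+begin_src python
import lxc

# Create a sliver container from a template (the name and template
# arguments are illustrative; CONFINE ships its own sliver templates).
sliver = lxc.Container("sliver-example")
sliver.create("debian", 0, {"release": "squeeze"})

# Resource limits enforced through cgroups keep the node stable even
# if the experiment misbehaves (values are arbitrary examples).
sliver.set_config_item("lxc.cgroup.memory.limit_in_bytes", "268435456")
sliver.set_config_item("lxc.cgroup.cpu.shares", "256")
sliver.save_config()

sliver.start()
#+end_src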
* Supported experiments
# Node simplified diagram, hover to interesting parts.
Researchers can configure slivers with different types of network interfaces
depending on the connectivity needs of experiments (see the sketch after this
list):
- Home PC-like access: a private interface with traffic forwarded using NAT to
the CN (filtered to ensure network stability).
- Internet service: a public interface (with a public CN address) with traffic
routed directly to the CN (filtered to ensure network stability).
- Traffic analysis (not implemented): a passive interface capturing traffic on
a direct interface (filtered and anonymized to ensure network privacy).
- Routing: an isolated interface using a VLAN on top of a direct interface.
All traffic is allowed, but it can only reach other slivers of the same
slice with isolated interfaces on the same physical link.
- Low-level testing (not implemented): the sliver is given raw access to the
  interface. For privacy, isolation and stability reasons this should only be
  allowed on exceptional occasions.
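As a simple illustration of choosing one of these interface types, the sliver
description sent to the server might look like this (field names and values
are assumptions for the sketch, not the exact CONFINE API):
#+begin_src python
import json

# Hypothetical sliver description for an "Internet service" style
# experiment: one public interface routed directly to the CN.
sliver_description = {
    "node": "node-42",
    "slice": "ping-demo",
    "interfaces": [{"name": "pub0", "type": "public"}],
    "properties": {"role": "source"},
}
print(json.dumps(sliver_description, indent=2))
#+end_src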
Besides low-level access, RDs also offer link quality and bandwidth usage
measurements for all their interfaces through DLEP (available soon).
Finally, the server and nodes publish management information through an API
that can be used to study the testbed itself or to implement external services
(like node monitoring and selection); a sketch of such a query follows.
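A minimal sketch of querying such an API, assuming a REST-style endpoint that
returns JSON (the URL and field names are assumptions, not the documented
API):
#+begin_src python
import requests  # third-party HTTP library

# Hypothetical testbed server API endpoint.
API = "https://server.community-lab.net/api"

# List nodes and their state, e.g. to pick well-connected nodes for a
# new slice (field names are illustrative).
for node in requests.get(API + "/nodes", timeout=10).json():
    print(node.get("name"), node.get("state"))
#+end_src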
** An example experiment
# Event diagram, hover over components explained.
To show how the testbed works, consider a simple experiment with two slivers
where one pings the other. Let's call them the source and the target sliver,
respectively.
1. The researcher first contacts the server and creates a slice description
   which specifies a template for slivers (e.g. Debian Squeeze i386). The
   researcher attaches experiment data including a program to set up slivers
   for the experiment and another one to run it.
2. This and all subsequent changes initiated by the researcher are stored in
   the registry, which holds the configuration of all components in the
   testbed.
3. The researcher chooses a couple of nodes and creates sliver descriptions
for them belonging to the previous slice. Both sliver descriptions include
a public interface to the CN and user-defined properties to mark slivers as
either source or target.
4. Each of the chosen nodes retrieves its sliver description. If enough
   resources are available, a container is created by applying the desired
   configuration to the selected template.
5. Once the researcher knows that slivers have been instantiated, the server
can be commanded to activate the slice.
6. When nodes get instructions to activate slivers, they start the
   containers.
7. Containers execute the experiment's setup and run programs. The programs
   query sliver properties to decide whether to act as source or target (see
   the sketch after this list).
8. Researchers interact directly with containers if needed (e.g. via SSH) and
   collect results from them.
9. When finished, the researcher tells the server to deactivate and
deinstantiate the slice.
10. Nodes get the instructions and they stop and remove containers.
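For example, the run program inside each container could be as small as the
following sketch. How sliver properties reach the container is an assumption
here (environment variables), and all names are invented:
#+begin_src python
import os
import subprocess
import time

# Assumed: the node exposes sliver properties to the container as
# environment variables (the real mechanism may differ).
role = os.environ.get("SLIVER_PROP_ROLE", "target")
target = os.environ.get("SLIVER_PROP_TARGET_ADDR", "")

if role == "source":
    # Source sliver: ping the target over the CN and keep the results
    # where the researcher can collect them later (e.g. via SSH).
    with open("/tmp/ping-results.txt", "w") as results:
        subprocess.run(["ping", "-c", "100", target], stdout=results)
else:
    # Target sliver: nothing to run; just stay up to answer the pings.
    while True:
        time.sleep(60)
#+end_src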
* Cooperation between community networks and Community-Lab
# CN diagram (buildings and cloud).
There are different ways to cooperate. Given a typical CN like this one, with
most nodes linked using cheap and ubiquitous WiFi technology:
# CN diagram extended with CONFINE devices (hover over interesting part).
- CN members can provide an existing CD and let CONFINE connect an RD to it
  via Ethernet. Experiments are restricted to the application layer unless
  the node owner allows the RD to include a direct interface (i.e. an
  antenna).
- CN members can provide a location and let CONFINE set up a complete node
there (CD and RD). All but low-level experiments are possible using direct
interfaces. In this way CONFINE helps extend the CN.
- CONFINE can also extend the CN by setting up a physically separated cloud of
connected nodes at a site controlled by a partner (e.g. campus). All kinds
of experiments are possible using direct interfaces. Users are warned about
the experimental nature of the network.
* Participate!
We introduced you to Community-Lab, a new testbed being developed by the
CONFINE project to support research that aims to make CNs a key part of the
future Internet infrastructure.
Community networks and researchers: We look forward to your participation!
- More information: http://community-lab.net/, http://confine-project.eu/
- Questions?
# Commenters: Less attention on architecture, more on global working of
# testbed.
# Ivan: Describe simple experiment, show diagram (UML-like timing diagram?
# small animation?) showing the steps from slice creation to instantiation,
# activation, deactivation and deletion for that example experiment.
# Axel: Maybe the difference of push and pull can be a bit hidden since
# concepts of allocation and deployment remain somehow.
# Ivan: Explain sliver connectivity options using a table with examples ("for
# this experiment you can use that type of sliver interface").
# Axel: I think there are also many figures and lists in the paper that can be
# reused as buzzwords.
# Axel: For example it's nice if RDs, sliver connectivity, experiment
# status,... can be instantly demonstrated using globally routable IPv6
# addresses to anybody without having to prepare complex tunnels. These are
# attractive advantages of our design/implementation over PlanetLab and we
# should make use of it and exploit them in demonstrations, dissemination,
# open-call...
# Ivan: We may show more or less the same presentation in the upcoming SAX
# 2012 (Tortosa, September 29-29). We may add (or dedicate more time to) a
# couple of points more related with Community Networks, namely the Open Call
# and how to participate in Community-Lab.
# Local Variables:
# mode: org
# End: