#+title: Community-Lab: A Community Networking Testbed for the Future Internet

* Introduction
** Community networks
- Infrastructure deployed by organized groups of people for self-provision of
  broadband networking that works and grows according to their own interests.
- Characteristics: Open participation, open and transparent management,
  distributed ownership.
- The EU regards CNs as fundamental for the universalization of broadband
  networking.
- New research challenge: How to support the growth and sustainability of CNs
  by providing the means to conduct experimentally driven research.

** The CONFINE project: Community Networks Testbed for the Future Internet
- Takes on the previous challenge.
- Project supported by the European Community Framework Programme 7 within the
  Future Internet Research and Experimentation Initiative (FIRE).
# List partner's logos.
- Partners: (community networks) guifi.net, Funkfeuer, Athens Wireless
  Metropolitan Network; (research centres) Universitat Politècnica de
  Catalunya, Fraunhofer Institute for Communication, Information Processing
  and Ergonomics, Interdisciplinary Institute for Broadband Technology; (NGOs)
  OPLAN Foundation, Pangea.
- Objective: Provide a testbed and associated tools and knowledge for
  researchers to experiment on real community networks.

** Testbed?
- Environment built with real hardware for realistic experimental research on
  network technologies.
- Wireless, both indoor (IBBT's w-iLab.t, CERTH's NITOS, WINLAB's ORBIT) and
  outdoor (HU's Berlin RoofNet, MIT Roofnet).  Problems: limited local scale,
  controlled environment, no resource sharing between experiments.
- Internet: PlanetLab, planet-scale testbed with resource sharing on nodes.
  Main inspiration for Community-Lab.

** Community-Lab: a testbed for community networks
- The testbed developed by CONFINE.
# Node maps here for CNs with captures from node DBs.
- Integrates and extends three community networks: guifi.net, FunkFeuer, AWMN.
- Also includes nodes in participating research centres.
- All linked together over the FEDERICA research backbone.
- All its software and documentation are “free as in freedom”, so anyone can
  set up a CONFINE testbed like Community-Lab.

* Requirements and challenges
A testbed has requirements that are challenged by the unique characteristics
of CNs.  For instance, how to

** Simple management vs. Distributed node ownership
- manage devices belonging to diverse owners?

** Features vs. Lightweight, low cost
- support devices ranging from PCs to embedded boards?

** Compatibility vs. Heterogeneity
- work with devices which allow little customization?
- support diverse connectivity and link technologies (wireless, wired, fiber)?

** Familiarity & flexibility vs. System stability
- Researchers prefer a familiar Linux environment with root access.
- isolate experiments that share the same node?
- keep nodes stable to avoid in-place maintenance?  Accessing node locations
  can be hard.
# Frozen tower.

** Flexibility vs. Network stability
- Network experiments run on nodes in a production network.
- allow interaction at the lowest possible layer of the CN while not
  disrupting or overusing it?

** Traffic collection vs. Privacy of CN users
- allow experiments performing traffic collection and characterization?
- avoid researchers spying on users' data?

** Management robustness vs. Link instability
- deal with frequent network outages in the CN when managing nodes?

** Reachability vs. IP address provisioning
- There is IPv4 address scarcity, addressing incompatibility between CNs, and
  a lack of IPv6 support.
- support testbed spanning different CNs?

* Community-Lab testbed architecture
This is the architecture developed by the CONFINE project to handle the
previous challenges.

** Overall architecture
This architecture applies to all testbeds using the CONFINE software.
# Move over overlay diagram less overlay connections plus overlay network.
- A testbed consists of a set of nodes managed by the same server.
  - Server managed by testbed admins.
  - Network and node managed by CN members.
  - Node admins must adhere to testbed terms and conditions.
  - This decouples testbed management from infrastructure ownership and mgmt.
- Testbed management traffic uses a tinc mesh VPN:
  - Avoids problems with firewalls and private networks in nodes.
  - IPv6 is used to avoid address scarcity and incompatibility between CNs.
  - Link instability is tolerated by using short-lived mgmt connections (see
    the sketch after this list).
- Gateways allow a testbed to span multiple CNs.
  - Connecting the mgmt net over external means (e.g. FEDERICA, the Internet).
  - Gateways can make the management network available to the Internet.
- A researcher runs the experiments of a slice in slivers each running in a
  different node.
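
A minimal sketch (not CONFINE code) of what short-lived management connections
mean in practice: the server reaches a node at an assumed IPv6 address inside
the tinc mesh and simply retries a brief request after a pause, so a CN outage
only delays synchronization.  The address, path and timings are invented for
illustration.

#+begin_src python
import time
import urllib.request

# Hypothetical IPv6 management address and path of a node; not real values.
NODE_STATE_URL = "http://[fdbd:e804:6aa9::2]/state"

def fetch_node_state(url, retries=5, timeout=10, backoff=30):
    """Issue a short-lived HTTP request; back off and retry on failure."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except OSError:
            time.sleep(backoff)  # tolerate a temporary CN or VPN outage
    return None  # node unreachable for now; retry on the next sync cycle

state = fetch_node_state(NODE_STATE_URL)
#+end_src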

** Slices, slivers and nodes
# Diagram: Slices and slivers, two or three nodes with a few slivers on them,
# each with a color identifying it with a slice.)
- These concepts are inspired by PlanetLab (a minimal sketch follows this
  list).
- The slice (a management concept) groups a set of related slivers.
- A sliver holds the resources (CPU, memory, disk, bandwidth, interfaces…)
  allocated for a slice in a given node.
- A node hosts several slivers at the same time.
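
A minimal sketch of how these concepts relate, using assumed names and
resource fields rather than the actual CONFINE data model:

#+begin_src python
from dataclasses import dataclass, field

@dataclass
class Sliver:
    node_id: str       # the node hosting this sliver
    cpu_share: float   # fraction of CPU allocated on that node
    memory_mb: int
    disk_mb: int
    interfaces: list = field(default_factory=list)  # e.g. ["private", "public"]

@dataclass
class Slice:
    name: str
    template: str                                # e.g. "debian-squeeze"
    slivers: dict = field(default_factory=dict)  # node_id -> Sliver

ping = Slice(name="ping-experiment", template="debian-squeeze")
ping.slivers["node-17"] = Sliver("node-17", 0.25, 128, 512, ["public"])
ping.slivers["node-42"] = Sliver("node-42", 0.25, 128, 512, ["public"])
#+end_src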

** Node architecture
allows the realization of these concepts.  A node consists of:
# Node simplified diagram, hover to interesting parts.
- The community device
  - Completely normal CN device, so existing ones can be used.
  - Routes traffic between the CN and the node's wired local network (which
    runs no routing protocol).
- The research device
  - Usually more powerful than the CD, since experiments run here.
  - A separate RD minimizes tampering with CN infrastructure.
    - It also prevents experiments from crashing the CD.
  - Runs the versatile, light and free OpenWrt distro, customized by CONFINE.
  - Slivers are implemented as lightweight Linux containers.
    - So researchers get root access to a familiar environment.
  - Direct interfaces allow low-level interaction of experiments with the CN
    bypassing the CD.
  - Control software
    - Uses LXC tools to manage containers and enforce resource limits,
      isolation and node stability (a sketch follows this list).
    - Uses traffic control, filtering and anonymization to ensure network
      stability, isolation and privacy (partially implemented).
- The recovery device (not yet implemented) will be able to force a remote
  hardware reboot of the RD if it hangs, and will also help with upgrades and
  recovery.
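
A minimal sketch of the container-handling part of the control software,
assuming only the standard LXC command-line tools; the container name,
template and CPU value are illustrative, and this is not the actual CONFINE
node software.

#+begin_src python
import subprocess

def run(*cmd):
    """Run an LXC tool and fail loudly if it returns an error."""
    subprocess.run(cmd, check=True)

def deploy_sliver(name, template="debian-squeeze"):
    # Create the container from the slice's template ...
    run("lxc-create", "-n", name, "-t", template)
    # ... start it in the background when the slice is activated ...
    run("lxc-start", "-n", name, "-d")
    # ... and cap its CPU weight so other slivers on the node stay responsive.
    run("lxc-cgroup", "-n", name, "cpu.shares", "256")

def teardown_sliver(name):
    run("lxc-stop", "-n", name)
    run("lxc-destroy", "-n", name)

deploy_sliver("ping-experiment-sliver")
#+end_src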

* Support for experiments
# Node simplified diagram, hover to interesting parts.
Researchers can configure slivers with different types of network interfaces
depending on the connectivity needs of experiments (a sketch follows this
list).  For instance, to

- mimic a home PC: use the private interface, which has L3 traffic forwarded
  using NAT to the CN but filtered to ensure network stability.
- implement a network service: create a public interface, which has a CN
  address and L3 traffic routed directly to the CN but filtered to ensure
  network stability.
- experiment with routing algorithms: create an isolated interface, which uses
  a VLAN on top of a direct interface.  All L2 traffic is allowed, but only
  between other slivers of the same slice with isolated interfaces on the same
  physical link.
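
A minimal sketch of these choices as data, with assumed type names matching
the list above and a hypothetical "direct0" parent interface; this is not the
CONFINE sliver description format.

#+begin_src python
# Map an experiment's connectivity need to the sliver interfaces it would
# request; structure and field names are assumptions for illustration only.
SLIVER_INTERFACE_EXAMPLES = {
    # mimic a home PC: L3 traffic NATed to the CN and filtered
    "home-pc": [{"type": "private"}],
    # network service: own CN address, L3 traffic routed directly but filtered
    "service": [{"type": "public"}],
    # routing experiment: VLAN on a (hypothetical) direct interface, raw L2
    # traffic only between isolated interfaces of the same slice on one link
    "routing": [{"type": "isolated", "parent": "direct0"}],
}

print(SLIVER_INTERFACE_EXAMPLES["routing"])
#+end_src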

These were demonstrated with BitTorrent and mesh routing experiments at the
IEEE P2P'12 conference.  Future support is planned for experiments that:

- analyze traffic: create a passive interface to capture traffic on a direct
  interface, which is filtered and anonymized to ensure network privacy.
- perform low-level testing: the sliver is given free raw access to a direct
  interface.  For privacy, isolation and stability reasons this should only be
  allowed in exceptional cases.

# List example experiments, add these.
Besides running experiments in slices, researchers will soon be able to
collect link quality and bandwidth usage measurements from all RDs' interfaces
through the DLEP protocol.

Moreover, the server and nodes will soon publish management information
through an API that can be used to study the testbed itself or to implement
external services like node monitoring and selection.
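
As an illustration of what such an API could enable, here is a minimal sketch
of an external monitoring service; the URL, endpoint and field names are
assumptions, not the published API.

#+begin_src python
import json
import urllib.request

# Assumed endpoint listing node descriptions as JSON; not a real URL.
REGISTRY_URL = "https://controller.example.net/api/nodes"

def list_production_nodes(url=REGISTRY_URL):
    with urllib.request.urlopen(url, timeout=10) as response:
        nodes = json.load(response)
    # "state" and its values are assumed field names, used for illustration.
    return [n for n in nodes if n.get("state") == "production"]

print(len(list_production_nodes()), "nodes available for new slivers")
#+end_src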

** An example experiment
# Event diagram, hover over components explained.
To show how the testbed works, consider a simple experiment: two slivers that
ping each other (the steps are also condensed into a sketch after this list).

1. The researcher first contacts the server and registers a slice description
   which specifies a template for slivers (e.g. Debian Squeeze) and includes
   data and programs to set up slivers and run experiments.
2. This and all subsequent changes performed by the researcher are stored in
   the registry, which holds the config of all components in the testbed.
3. The researcher chooses two nodes and registers sliver descriptions for them
   in the previous slice.  Each one includes a public interface to the CN.
   The researcher tells the server to instantiate the slice.
4. Each of the chosen nodes gets its sliver description.  If enough resources
   are available, the node creates a container by applying the sliver
   configuration to the selected template.
5. Once the researcher knows that slivers have been instantiated, the server
   can be commanded to activate the slice.
6. When nodes get instructions to activate slivers they start the containers.
7. Containers execute the setup and run programs provided by the researcher.
8. The researcher interacts directly with the containers if needed (e.g. via
   SSH) and collects results from them.
9. When finished, the researcher tells the server to deactivate and
   deinstantiate the slice.
10. Nodes get the instructions and they stop and remove containers.
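
The same lifecycle condensed into a minimal sketch with a hypothetical client
class; the method names and state values are assumptions, not the real API.

#+begin_src python
class TestbedClient:
    """Toy stand-in for a registry/server client; it only stores state."""
    def __init__(self):
        self.slices = {}

    def register_slice(self, name, template, exp_data):
        self.slices[name] = {"template": template, "exp_data": exp_data,
                             "slivers": {}, "state": "registered"}

    def register_sliver(self, slice_name, node_id, interfaces):
        self.slices[slice_name]["slivers"][node_id] = {"interfaces": interfaces}

    def set_state(self, slice_name, state):
        # In the real testbed, nodes learn of this state and converge on it.
        self.slices[slice_name]["state"] = state

client = TestbedClient()
client.register_slice("ping", "debian-squeeze", exp_data="ping-scripts.tar.gz")
client.register_sliver("ping", "node-17", interfaces=["public"])
client.register_sliver("ping", "node-42", interfaces=["public"])
client.set_state("ping", "instantiate")    # nodes create the containers
client.set_state("ping", "activate")       # nodes start them; experiment runs
client.set_state("ping", "deactivate")     # containers are stopped
client.set_state("ping", "deinstantiate")  # containers are removed
#+end_src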

* Cooperation between community networks and Community-Lab
# CN diagram (buildings and cloud).
can take different forms.  Given a typical CN like this, with most nodes
linked using cheap and ubiquitous WiFi technology:

# CN diagram extended with CONFINE devices (hover over interesting part).
- CN members can provide an existing CD and let CONFINE connect an RD to it
  via Ethernet.  Experiments are restricted to the application layer unless
  the node owner allows the RD to include a direct interface (i.e. an
  antenna).
- CN members can provide a location and let CONFINE set up a complete node
  there (CD and RD).  In this way CONFINE helps extend the CN.
- CONFINE can also extend the CN by setting up a physically separate cloud of
  connected nodes at a site controlled by a partner (e.g. a campus).  All
  kinds of experiments are possible using direct interfaces.  Users should be
  warned about the research nature of the network.

* Participate!
We introduced you to Community-Lab, a new testbed being developed by the
CONFINE project to support research that can help CNs become a key part of the
Internet in the near future.

Community networks and researchers: We look forward to your participation!
- More information: http://community-lab.net/, http://confine-project.eu/
- Questions?

# Commenters: Less attention on architecture, more on global working of
# testbed.

# Ivan: Describe simple experiment, show diagram (UML-like timing diagram?
# small animation?) showing the steps from slice creation to instantiation,
# activation, deactivation and deletion for that example experiment.

# Axel: Maybe the difference of push and pull can be a bit hidden since
# concepts of allocation and deployment remain somehow.

# Ivan: Explain sliver connectivity options using a table with examples ("for
# this experiment you can use that type of sliver interface").

# Axel: I think there are also many figures and lists in the paper that can be
# reused as buzzwords.

# Axel: For example it's nice if RDs, sliver connectivity, experiment
# status,... can be instantly demonstrated using globally routable IPv6
# addresses to anybody without having to prepare complex tunnels.  These are
# attractive advantages of our design/implementation over PlanetLab and we
# should make use of it and exploit them in demonstrations, dissemination,
# open-call...

# Ivan: We may show more or less the same presentation in the upcoming SAX
# 2012 (Tortosa, September 29-29).  We may add (or dedicate more time to) a
# couple of points more related with Community Networks, namely the Open Call
# and how to participate in Community-Lab.

# Local Variables:
# mode: org
# End: