#+title: Community-Lab: A Community Networking Testbed for the Future Internet

* Introduction
** Community networks
- Origins: Despite the importance of the Internet, companies leave behind
  people and regions of little economic interest to them.  Some groups
  started coordinating the deployment of their own networks for
  self-provision.
- Characteristics: Open participation, open and transparent management,
  distributed ownership; the network works and grows according to users'
  interests.
- Prospects: Strategic importance for the expansion of broadband access
  throughout Europe (Digital Agenda).

** Testbeds
- Environments built with real hardware for realistic experimental research on
  network technologies (instead of simulations).
- Wireless: Berlin RoofNet, MIT Roofnet (outdoor); IBBT's w-iLab.t, CERTH's
  NITOS, WINLAB's ORBIT (indoor).  Limited local scale, controlled
  environment, no resource sharing mechanisms.
- Internet: PlanetLab, a planet-scale testbed with resource sharing on nodes.
  The main inspiration for Community-Lab.

** The CONFINE project
- Meaning: Community Networks Testbed for the Future Internet
- A project supported by the European Community's Framework Programme 7
  within the Future Internet Research and Experimentation (FIRE) initiative.
- Motivation: Support the growth and sustainability of community networks by
  providing the means to conduct experimentally driven research.
- Objectives: Provide a testbed and associated tools and knowledge for
  researchers to experiment on real community networks.
- Partners (list with logos): Fundació guifi.net, FunkFeuer, Athens Wireless
  Metropolitan Network (community networks); Universitat Politècnica de
  Catalunya, Fraunhofer Institute for Communication, Information Processing
  and Ergonomics, Interdisciplinary Institute for Broadband Technology
  (research centres); the OPLAN Foundation, Pangea (NGOs).

** Community-Lab: a testbed for community networks
- The testbed developed by CONFINE.
- Integrates and extends three community networks: guifi.net, FunkFeuer and
  AWMN.
# Node maps here for CNs with captures from node DBs.
- Also nodes in participating research institutions.
- Linked together over FEDERICA.

* Challenges and requirements
** Simple management vs. Distributed node ownership
- In contrast with testbeds (especially indoor ones) that belong wholly to a
  single entity.

** Features vs. Lightweight, low cost (free & open)
- Devices ranging from PCs to embedded boards located on roofs (or worse).
# Node on roof, frozen tower.
- A lightweight system is needed, able to run on a variety of devices.

** Familiarity & flexibility vs. System stability
- A familiar Linux environment with root access for researchers.
- Keep environment isolation (nodes are shared by experiments).
- Keep node stability to avoid in-place maintenance (some node locations are
  difficult to reach).

** Flexibility vs. Network stability
- Network experiments running on nodes in a production network.
- Allow interaction with the CN at the lowest level possible without
  disrupting or overusing it.

** Traffic collection vs. Privacy of CN users
- Experiments performing traffic collection and characterization.
- Prevent researchers from spying on users' data.

** Link instability vs. Management robustness
- Deal with frequent network outages in the CN.

** Reachability vs. IP address provisioning
- Testbed spanning different CNs.
- IPv4 address scarcity, address incompatibility between CNs, and lack of
  IPv6 support.

** Heterogeneity vs. Compatibility
- Lots of different devices (disparate connectivity and software openness).
- Lots of different link technologies (wireless, wired, fiber).

* Community-Lab testbed architecture
** Overall architecture
This architecture applies to all testbeds using the CONFINE software.  All
CONFINE software and documentation is released under Free licenses.  Anyone
can set up a CONFINE testbed.
# Move over overlay diagram less overlay connections plus overlay network.
- A testbed consists of a set of nodes managed by the same server.
  - Server managed by testbed admins.
  - Network and node managed by node admins (usually node owners).
  - Node admins must adhere to a set of conditions.
  - Problematic nodes are not eligible for experimentation.
  - Solves the management vs. ownership problem.
- All components in the testbed are reachable via the management network
  (a tinc mesh VPN).
  - Server and nodes offer APIs on that network.
  - Avoids address scarcity and incompatibility (well-structured IPv6 schema).
  - Avoids problems with firewalls and private networks.
  - Thus avoids most CONFINE-specific network configuration on the node's
    community device (CD).
  - Public addresses are still used for experiments when available.
  - External hosts (e.g. researchers' machines) can also connect to the
    management network.
- Gateways connect disjoint parts of the management network.
  - Allows a testbed spanning different CNs and islands through external means
    (e.g. FEDERICA, the Internet).
  - A gateway reachable from the Internet can expose the management network
    (if using public addresses).
- A researcher runs the experiments of a slice in slivers, each running in a
  different node…

** Nodes, slices and slivers
- …a model inspired by PlanetLab.
- A slice groups a set of related slivers.
- A sliver holds the resources (CPU, memory, disk, bandwidth, interfaces…)
  allocated for a slice in a given node (see the sketch below).
# Diagram: Slices and slivers, two or three nodes with a few slivers on them,
# each with a color identifying it with a slice.
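
A rough sketch of this data model is shown below, using hypothetical Python
classes; the class and field names are illustrative assumptions, not the
actual registry schema.

#+begin_src python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sliver:
    """Resources allocated to a slice on one node (illustrative fields)."""
    node_id: int
    cpu_share: float              # fraction of the node's CPU
    memory_mb: int
    disk_mb: int
    interfaces: List[str] = field(default_factory=list)

@dataclass
class Slice:
    """A set of related slivers that make up one experiment."""
    name: str
    template: str                 # e.g. "debian-squeeze-i386"
    slivers: List[Sliver] = field(default_factory=list)

# Example: a slice with one sliver on each of two (hypothetical) nodes.
ping = Slice(name="ping-demo", template="debian-squeeze-i386")
for node in (7, 12):
    ping.slivers.append(Sliver(node_id=node, cpu_share=0.1, memory_mb=64,
                               disk_mb=512, interfaces=["public"]))
#+end_src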

** Node architecture
Mostly autonomous: no long-running connections to the server and asynchronous
operation make the node robust under link instability.
# Node simplified diagram, hover to interesting parts.
- The community device (CD)
  - A completely normal CN network device, possibly an already existing one.
  - Routes traffic between the CN and the devices in the node's local network
    (wired, runs no routing protocol).
  - CD/RD separation allows minimum CONFINE-specific configuration on the CD,
    but adds one hop between experiments and the CN.
- The research device (RD)
  - More powerful than the CD, it runs an OpenWrt (Attitude Adjustment)
    firmware customized by CONFINE.
  - Slivers are implemented as Linux containers.
    - LXC: lightweight virtualization (based on mainline Linux kernel
      features).
    - Resource limitation.
    - Allows a familiar environment with resource isolation while keeping
      node stability.
    - Root access to slivers is always available to researchers via SSH to
      the RD.
  - Control software
    - Manages containers and resource isolation using LXC (see the sketch
      after this list).
    - Ensures network isolation and stability through traffic control (QoS)
      and filtering (from L2 upwards).
    - Protects users' privacy through traffic filtering and anonymization.
  - Provides various services to slivers through an internal bridge.
  - Optional, controlled direct interfaces for experiments to interact
    directly with the CN.
  - CD/RD separation allows greater compatibility and stability, as well as
    minimum CN-specific configuration, and avoids managing CN hardware.
- The recovery device can force a hardware reboot of the RD based on several
  triggers, and can help with upgrades and recovery.
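
As a rough sketch of how a node's control software could drive LXC, the
Python fragment below shells out to the standard =lxc-create=, =lxc-start=
and =lxc-stop= tools and appends a cgroup memory limit to the container
configuration; the container name, template and limit value are illustrative
assumptions, not the actual CONFINE implementation.

#+begin_src python
import subprocess

LXC_DIR = "/var/lib/lxc"          # default LXC container directory

def create_sliver(name, template="debian", mem_limit="64M"):
    """Create a container for a sliver and cap its memory via a cgroup limit."""
    subprocess.check_call(["lxc-create", "-n", name, "-t", template])
    # Append a (cgroup v1 style) memory limit to the container configuration.
    with open("%s/%s/config" % (LXC_DIR, name), "a") as cfg:
        cfg.write("lxc.cgroup.memory.limit_in_bytes = %s\n" % mem_limit)

def start_sliver(name):
    subprocess.check_call(["lxc-start", "-n", name, "-d"])   # start detached

def stop_sliver(name):
    subprocess.check_call(["lxc-stop", "-n", name])

if __name__ == "__main__":
    create_sliver("slice42-sliver1")
    start_sliver("slice42-sliver1")
#+end_src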

** Alternative node arrangements
These arrangements are compatible with the current architecture.
- The RD hosts the CD as a community container: low cost (a single device)
  but less stable.  Not yet implemented.
- The CD hosts the RD as a KVM virtual machine: for a powerful node such as a
  PC; in the future, with radios linked over Ethernet and DLEP.

** Node and sliver connectivity
# Node simplified diagram, hover to interesting parts.
Slivers can be configured with different types of network interfaces,
depending on what connectivity researchers need for their experiments (see
the sketch after this list):
- Home computer behind a NAT router: a private interface placed into the
  internal bridge, where traffic is forwarded using NAT to the CN.  Outgoing
  traffic is filtered to ensure network stability.
- Publicly open service: a public interface (with a public CN address) placed
  into the local bridge, with traffic routed directly to the CN.  Outgoing
  traffic is filtered to ensure network stability.
- Traffic capture: a passive interface placed on the bridge of the direct
  interface used for capture.  Incoming traffic is filtered and anonymized by
  the control software.
- Routing: an isolated interface using a VLAN on top of a direct interface.
  Other slivers with isolated interfaces must be within link layer reach.  All
  traffic is allowed.
- Low-level testing: the sliver is given raw access to the interface.  For
  privacy, isolation and stability reasons this should only be allowed on
  exceptional occasions.
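
For illustration only, a sliver description could declare such interfaces
roughly as below; the field names and values are assumptions for this sketch,
not the actual registry format.

#+begin_src python
# One entry per experiment need; "type" selects one of the behaviours above.
SLIVER_INTERFACES = [
    {"name": "priv0", "type": "private"},                     # NATed via the internal bridge
    {"name": "pub0",  "type": "public"},                      # public CN address, local bridge
    {"name": "cap0",  "type": "passive",  "parent": "wlan0"}, # filtered, anonymized capture
    {"name": "iso0",  "type": "isolated", "parent": "wlan1", "vlan": 42},
]

def needs_egress_filtering(iface):
    """Private and public interfaces get outgoing traffic filtered for CN stability."""
    return iface["type"] in ("private", "public")

for iface in SLIVER_INTERFACES:
    print(iface["name"], "filtered" if needs_egress_filtering(iface) else "unfiltered")
#+end_src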

* How the testbed works
# Event diagram, hover over components explained.
An example experiment: two slivers, one of them (source sliver) pings the
other one (target sliver).

1. The researcher first contacts the server and creates a slice description
   which specifies a template for slivers (e.g. Debian Squeeze i386).
   Experiment data is attached, including a program to set up the experiment
   (e.g. a script that runs =apt-get install iputils-ping=) and another one
   to run it.
2. The server updates the registry, which holds all the definitions of the
   testbed: nodes, users, slices, slivers, etc.
3. The researcher chooses a couple of nodes and creates sliver descriptions
   for them in the previous slice.  Both sliver descriptions include a public
   interface to the CN and user-defined properties to tell the source sliver
   apart from the target one.  Sliver descriptions go to the registry.
4. Each of the chosen nodes retrieves its sliver description.  If enough
   resources are available, a container is created with the desired
   configuration.
5. Once the researcher knows that slivers have been instantiated, the server
   can be commanded to activate the slice.  The server updates the registry.
6. When nodes get instructions to activate slivers they start the containers.
7. Containers run the experiment setup program and the run program.  The
   programs query sliver properties to decide their behaviour.
8. Researchers interact with containers if needed (e.g. via SSH) and collect
   results straight from them.
9. When finished, the researcher tells the server to deactivate and
   deinstantiate the slice.
10. Nodes get the instructions and they stop and remove containers.
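
For illustration only, the steps above could be driven against the server API
roughly as in the Python sketch below; the endpoints, payload fields and the
placeholder management address are assumptions, not the actual Community-Lab
API.

#+begin_src python
import requests

API = "http://[fd00::1]/api"      # placeholder management-network address of the server

# Steps 1-3: create the slice and two sliver descriptions in the registry.
slice_desc = {"name": "ping-demo", "template": "debian-squeeze-i386",
              "exp_data": "ping-demo.tar.gz"}        # setup and run programs packed together
slice_id = requests.post(API + "/slices", json=slice_desc).json()["id"]

for node, role in ((7, "source"), (12, "target")):
    sliver = {"node": node, "interfaces": [{"type": "public"}],
              "properties": {"role": role}}          # tells the source and target apart
    requests.post("%s/slices/%d/slivers" % (API, slice_id), json=sliver)

# Step 5: once the slivers are instantiated, activate the slice.
requests.patch("%s/slices/%d" % (API, slice_id), json={"state": "activate"})

# Step 9: when finished, deactivate and deinstantiate the slice.
requests.patch("%s/slices/%d" % (API, slice_id), json={"state": "deactivate"})
requests.delete("%s/slices/%d" % (API, slice_id))
#+end_src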

At all times there can be external services interacting with researchers, the
server, nodes and slivers, e.g. to help choose nodes, monitor them or collect
results.

* Community-Lab integration in existing community networks
# CN diagram (buildings and cloud).
A typical CN looks like this: most nodes are linked using WiFi technology
(cheap and ubiquitous), but sometimes other technologies such as optical
fiber are used.  Remember that CNs are production networks with distributed
ownership.  Strategies:

# CN diagram extended with CONFINE devices (hover over interesting part).
- Take an existing node owned by CN members: CONFINE provides an RD and
  connects it via Ethernet.  Experiments are restricted to the application
  layer unless the node owner allows the RD to include a direct interface
  (i.e. an antenna).
- Extend the CN with complete nodes: CONFINE provides both the CD and the RD
  and uses a CN member's location.  All but low-level experiments are
  possible with direct interfaces.
- Set up a physically separated cloud of nodes: CONFINE extends the CN with a
  full installation of connected nodes at a site controlled by a partner
  (e.g. a campus).  All kinds of experiments are possible with direct
  interfaces.  Users are warned about the experimental nature of the network.

* Recap

- Community networks are an emerging way to provide citizens with
  connectivity in a sustainable and distributed manner, in which the owners
  of the networks are the users themselves.
- Research in this field is necessary to support the growth of CNs while
  improving their operation and quality.
- Experimental tools are still lacking because of the peculiarities of CNs.
- The CONFINE project aims to fill this gap by deploying Community-Lab, a
  testbed for community networks inside existing community networks.

# Commenters: Less attention on architecture, more on global working of
# testbed.

# Ivan: Describe simple experiment, show diagram (UML-like timing diagram?
# small animation?) showing the steps from slice creation to instantiation,
# activation, deactivation and deletion for that example experiment.

# Axel: Maybe the difference of push and pull can be a bit hidden since
# concepts of allocation and deployment remain somehow.

# Ivan: Explain sliver connectivity options using a table with examples ("for
# this experiment you can use that type of sliver interface").

# Axel: I think there are also many figures and lists in the paper that can be
# reused as buzzwords.

# Axel: For example it's nice if RDs, sliver connectivity, experiment
# status,... can be instantly demonstrated using globally routable IPv6
# addresses to anybody without having to prepare complex tunnels.  These are
# attractive advantages of our design/implementation over PlanetLab and we
# should make use of it and exploit them in demonstrations, dissemination,
# open-call...

# Ivan: We may show more or less the same presentation in the upcoming SAX
# 2012 (Tortosa, September 29-29).  We may add (or dedicate more time to) a
# couple of points more related with Community Networks, namely the Open Call
# and how to participate in Community-Lab.

# Local Variables:
# mode: org
# End: