Community-Lab introduction

Check-in [9e70b6e675]
Overview
Comment: Added initial full version of script.
SHA1: 9e70b6e6759907d5494ddc8f68f3008f529a238d
User & Date: ivan on 2012-09-17 10:46:00
Context
2012-09-17 11:52  Leaner info on architecture, mainly alt node arrangements and refs to bridges. check-in: 86aa53413c user: ivan tags: trunk
2012-09-17 10:46  Added initial full version of script. check-in: 9e70b6e675 user: ivan tags: trunk
2012-09-17 09:49  initial empty check-in. check-in: 43df7e8769 user: ivan tags: trunk
Changes

Added script.txt version [f4b6ad912c].

            1  +#+title: Community-Lab: A Community Networking Testbed for the Future Internet
            2  +
            3  +* Introduction
            4  +** Community networks
            5  +- Origins: In spite of the importance of the Internet, companies leave behind
            6  +  people and regions of little economic interest to them.  Some groups
            7  +  started coordinating the deployment of their own networks for
            8  +  self-provision.
            9  +- Characteristics: Open participation, open and transparent management,
           10  +  distributed ownership, works and grows according to users' interests.
           11  +- Prospects: Strategic importance for the expansion of broadband access
           12  +  throughout Europe (Digital Agenda).
           13  +
           14  +** Testbeds
           15  +- Environments built with real hardware for realistic experimental research on
           16  +  network technologies (instead of simulations).
           17  +- Wireless: Berlin RoofNet, MIT Roofnet (outdoor); IBBT's w-iLab.t, CERTH's
           18  +  NITOS, WINLAB's ORBIT (indoor).  Limited local scale, controlled
           19  +  environment, no resource sharing mechanisms.
           20  +- Internet: PlanetLab, planet-scale testbed with resource sharing on nodes.
           21  +  Main inspiration for Community-Lab.
           22  +
           23  +** The CONFINE project
           24  +- Meaning: Community Networks Testbed for the Future Internet
           25  +- Project supported by the European Community Framework Programme 7 within the
           26  +  Future Internet Research and Experimentation Initiative (FIRE).
           27  +- Motivation: Support the growth and sustainability of community networks by
           28  +  providing the means to conduct experimentally driven research.
           29  +- Objectives: Provide a testbed and associated tools and knowledge for
           30  +  researchers to experiment on real community networks.
           31  +- Partners (list with logos): Fundació guifi.net, FunkFeuer, Athens Wireless
           32  +  Metropolitan Network (community networks); Universitat Politècnica de
           33  +  Catalunya, Fraunhofer Institute for Communication, Information Processing
           34  +  and Ergonomics, Interdisciplinary Institute for Broadband Technology
           35  +  (research centres); the OPLAN Foundation, Pangea (NGOs).
           36  +
           37  +** Community-Lab: a testbed for community networks
           38  +- The testbed developed by CONFINE.
           39  +- Integrates and extends three Community Networks: guifi.net, FunkFeuer, AWMN.
           40  +# Node maps here for CNs with captures from node DBs.
           41  +- Also nodes in participating research institutions.
           42  +- Linked together over FEDERICA.
           43  +
           44  +* Challenges and requirements
           45  +** Simple management vs. Distributed node ownership
           46  +- In contrast with testbeds (especially indoor ones) that belong wholly to
           47  +  a single entity.
           48  +
           49  +** Features vs. Lightweight, low cost (free & open)
           50  +- Devices ranging from PCs to embedded boards located on roofs (or worse).
           51  +# Node on roof, frozen tower.
           52  +- Need light system able to run on a variety of devices.
           53  +
           54  +** Familiarity & flexibility vs. System stability
           55  +- Familiar Linux env with root access for researchers.
           56  +- Keep env isolation (nodes are shared by experiments).
           57  +- Keep node stability (to avoid in-place maintenance; some node locations
           58  +  are difficult to reach).
           59  +
           60  +** Flexibility vs. Network stability
           61  +- Network experiments running on nodes in a production network.
           62  +- Allow interaction with the CN at the lowest level possible without
           63  +  disrupting or overusing it.
           64  +
           65  +** Traffic collection vs. Privacy of CN users
           66  +- Experiments performing traffic collection and characterization.
           67  +- Avoid researchers spying on users' data.
           68  +
           69  +** Link instability vs. Management robustness
           70  +- Deal with frequent network outages in the CN.
           71  +
           72  +** Reachability vs. IP address provisioning
           73  +- Testbed spanning different CNs.
           74  +- IPv4 scarcity and incompatibility between CNs, lack of IPv6 support.
           75  +
           76  +** Heterogeneity vs. Compatibility
           77  +- Lots of different devices (disparate connectivity and software openness).
           78  +- Lots of different link technologies (wireless, wired, fiber).
           79  +
           80  +* Community-Lab testbed architecture
           81  +** Overall architecture
           82  +This architecture applies to all testbeds using the CONFINE software.  All
           83  +CONFINE software and documentation are released under Free licenses.  Anyone
           84  +can set up a CONFINE testbed.
           85  +# Move over overlay diagram less overlay connections plus overlay network.
           86  +- A testbed consists of a set of nodes managed by the same server.
           87  +  - Server managed by testbed admins.
           88  +  - Network and node managed by node admins (usually node owners).
           89  +  - Node admins must adhere to a set of conditions.
           90  +  - Problematic nodes are not eligible for experimentation.
           91  +  - Solves the management vs. ownership problem.
           92  +- All components in testbed reachable via management network (tinc mesh VPN).
           93  +  - Server and nodes offer APIs on that network.
           94  +  - Avoids address scarcity and incompatibility (well-structured IPv6 schema).
           95  +  - Avoids problems with firewalls and private networks.
           96  +  - Thus avoids most CONFINE-specific network configuration of the node (CD).
           97  +  - Public addresses still used for experiments when available.
           98  +  - Odd hosts can also connect to the management network.
           99  +- Gateways connect disjoint parts of the management network.
          100  +  - Allows a testbed spanning different CNs and islands through external means
          101  +    (e.g. FEDERICA, the Internet).
          102  +  - A gateway reachable from the Internet can expose the management network
          103  +    (if using public addresses).
          104  +- A researcher runs the experiments of a slice in slivers each running in a
          105  +  different node…
          106  +
          107  +** Nodes, slices and slivers
          108  +- …a model inspired by PlanetLab.
          109  +- A slice groups a set of related slivers.
          110  +- A sliver holds the resources (CPU, memory, disk, bandwidth, interfaces…)
          111  +  allocated for a slice in a given node (see the sketch below).
          112  +# Diagram: Slices and slivers, two or three nodes with a few slivers on them,
          113  +# each with a color identifying it with a slice.)
          114  +
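The slice and sliver concepts above could be modelled roughly as follows.  This
is only an illustrative Python sketch; the class and field names are
assumptions, not the actual registry schema used by the testbed.

#+begin_src python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Sliver:
    """Resources allocated for a slice on one node (illustrative fields)."""
    node_id: str
    cpu_share: float                  # fraction of the node's CPU
    memory_mb: int
    disk_mb: int
    interfaces: List[str] = field(default_factory=list)       # e.g. ["private"]
    properties: Dict[str, str] = field(default_factory=dict)  # user-defined

@dataclass
class Slice:
    """Groups the related slivers of one experiment, at most one per node."""
    name: str
    template: str                     # e.g. "debian-squeeze-i386"
    slivers: Dict[str, Sliver] = field(default_factory=dict)  # keyed by node

    def add_sliver(self, sliver: Sliver) -> None:
        self.slivers[sliver.node_id] = sliver
#+end_src
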
          115  +** Node architecture
          116  +Mostly autonomous, no long-running connections to server, asynchronous
          117  +operation: robust under link instability.
          118  +# Node simplified diagram, hover to interesting parts.
          119  +- The community device
          120  +  - Completely normal CN network device, possibly already existing.
          121  +  - Routes traffic between the CN and devices in the node's local network
          122  +    (wired, runs no routing protocol).
          123  +  - CD/RD separation allows minimum CONFINE-specific configuration on the CD,
          124  +    but adds one hop between experiments and the CN.
          125  +- The research device
          126  +  - More powerful than CD, it runs OpenWrt (Attitude Adjustment) firmware
          127  +    customized by CONFINE.
          128  +  - Slivers are implemented as Linux containers (sketched in code below).
          129  +    - LXC: lightweight virtualization (in Linux mainstream).
          130  +    - Resource limitation.
          131  +    - Allows a familiar env with resource isolation and keeping node
          132  +      stability.
          133  +    - Root access to slivers always available to researchers via SSH to RD.
          134  +  - Control software
          135  +    - Manages containers and resource isolation using LXC.
          136  +    - Ensures network isolation and stability through traffic control (QoS)
          137  +      and filtering (from L2 upwards).
          138  +    - Protects users' privacy through traffic filtering and anonymization.
          139  +  - Provides various services to slivers through an internal bridge.
          140  +  - Optional, controlled direct interfaces for experiments to interact
          141  +    directly with the CN.
          142  +  - CD/RD separation allows greater compatibility and stability, as well as
          143  +    minimum CN-specific configuration, avoids managing CN hardware.
          144  +- The recovery device can force a hardware reboot of the RD from several
          145  +  triggers and help with upgrade and recovery.
          146  +
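To make the container-based sliver concept concrete, below is a minimal sketch
of how a node's control software might create, limit and start a sliver as an
LXC container by driving the standard LXC command-line tools from Python.  The
names and the memory limit are illustrative assumptions, and traffic control
and filtering are omitted; the actual CONFINE node software is more elaborate.

#+begin_src python
import subprocess

def run(cmd):
    """Run a command and fail loudly if it does not succeed (no recovery)."""
    subprocess.run(cmd, check=True)

def create_sliver(name, template="debian", memory_limit="64M"):
    # Create the container's root filesystem from an LXC template.
    run(["lxc-create", "-n", name, "-t", template])
    # Start the container detached in the background.
    run(["lxc-start", "-n", name, "-d"])
    # Cap its memory through the container's cgroup.
    run(["lxc-cgroup", "-n", name, "memory.limit_in_bytes", memory_limit])

def destroy_sliver(name):
    # Stop and remove the container when the sliver is deinstantiated.
    run(["lxc-stop", "-n", name])
    run(["lxc-destroy", "-n", name])

if __name__ == "__main__":
    create_sliver("slice42-sliver1")
#+end_src
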
          147  +** Alternative node arrangements
          148  +Compatible with the current architecture.
          149  +- RD hosts CD as a community container: low cost (one device), less stable.
          150  +  Not yet implemented.
          151  +- CD hosts the RD as a KVM virtual machine: for a powerful node such as a PC,
          152  +  in the future with radios linked over Ethernet and DLEP.
          153  +
          154  +** Node and sliver connectivity
          155  +# Node simplified diagram, hover to interesting parts.
          156  +Slivers can be configured with different types of network interfaces depending
          157  +on what connectivity researchers need for experiments (sketch after the list):
          158  +- Home computer behind a NAT router: a private interface placed into the
          159  +  internal bridge, where traffic is forwarded using NAT to the CN.  Outgoing
          160  +  traffic is filtered to ensure network stability.
          161  +- Publicly open service: a public interface (with a public CN address) placed
          162  +  into the local bridge, with traffic routed directly to the CN.  Outgoing
          163  +  traffic is filtered to ensure network stability.
          164  +- Traffic capture: a passive interface placed on the bridge of the direct
          165  +  interface used for capture.  Incoming traffic is filtered and anonymized by
          166  +  the control software.
          167  +- Routing: an isolated interface using a VLAN on top of a direct interface.
          168  +  Other slivers with isolated interfaces must be within link layer reach.  All
          169  +  traffic is allowed.
          170  +- Low-level testing: the sliver is given raw access to the interface.  For
          171  +  privacy, isolation and stability reasons this should only be allowed on
          172  +  exceptional occasions.
          173  +
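As a compact summary of the options above, the interface types could be
described with a small declarative table such as the following.  The field
names and bridge names are hypothetical, chosen only to mirror the list above.

#+begin_src python
# One entry per sliver interface type discussed above.  "filtered" means the
# node's control software restricts the traffic for network stability and
# privacy reasons.
SLIVER_INTERFACE_TYPES = {
    "private":  {"bridge": "internal", "addressing": "NAT to the CN", "filtered": True},
    "public":   {"bridge": "local", "addressing": "public CN address", "filtered": True},
    "passive":  {"bridge": "direct", "addressing": "none (capture only)", "filtered": True},
    "isolated": {"bridge": "direct", "addressing": "VLAN on a direct interface", "filtered": False},
    "raw":      {"bridge": None, "addressing": "raw access to the device", "filtered": False},
}

def describe(if_type):
    """Print a one-line summary of a sliver interface type."""
    cfg = SLIVER_INTERFACE_TYPES[if_type]
    state = "filtered" if cfg["filtered"] else "unfiltered"
    print(f"{if_type}: bridge={cfg['bridge']}, {cfg['addressing']}, {state}")
#+end_src
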
          174  +* How the testbed works
          175  +# Event diagram, hover over components explained.
          176  +An example experiment: two slivers, one of them (source sliver) pings the
          177  +other one (target sliver).  A rough code sketch of these steps appears below.
          178  +
          179  +1. The researcher first contacts the server and creates a slice description
          180  +   which specifies a template for slivers (e.g. Debian Squeeze i386).
          181  +   Experiment data is attached including a program to setup the experiment
          182  +   (e.g. a script that runs =apt-get install iputils-ping=) and another one to
          183  +   run it.
          184  +2. The server updates the registry, which holds all definitions of the
          185  +   testbed: nodes, users, slices, slivers, etc.
          186  +3. The researcher chooses a couple of nodes and creates sliver descriptions
          187  +   for them in the previous slice.  Both sliver descriptions include a public
          188  +   interface to the CN and user-defined properties for telling apart the
          189  +   source sliver from the target one.  Sliver descriptions go to the registry.
          190  +4. Each of the previous nodes gets its own sliver description.  If enough
          191  +   resources are available, a container is created with the desired
          192  +   configuration.
          193  +5. Once the researcher knows that slivers have been instantiated, the server
          194  +   can be commanded to activate the slice.  The server updates the registry.
          195  +6. When nodes get instructions to activate slivers they start the containers.
          196  +7. Containers run the experiment setup program and the run program.  The
          197  +   programs query sliver properties to decide their behaviour.
          198  +8. Researchers interact with containers if needed (e.g. via SSH) and collect
          199  +   results straight from them.
          200  +9. When finished, the researcher tells the server to deactivate and
          201  +   deinstantiate the slice.
          202  +10. Nodes get the instructions and they stop and remove containers.
          203  +
          204  +At all times there can be external services interacting with researchers,
          205  +server, nodes and slivers, e.g. to help choose nodes, monitor nodes or
          206  +collect results.
          207  +
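From the researcher's side, the steps above could look roughly like the
following sketch.  The server URL, endpoints, field names and the use of the
=requests= library are purely illustrative assumptions; the actual server API
is not specified here.

#+begin_src python
import requests  # hypothetical REST-style interaction with the testbed server

SERVER = "https://server.example.org/api"   # placeholder URL
AUTH = ("researcher", "secret")             # placeholder credentials

# 1-2. Create the slice description in the registry: sliver template plus
#      experiment data (setup and run programs).
slice_desc = {
    "name": "ping-demo",
    "template": "debian-squeeze-i386",
    "exp_data": {"setup": "apt-get install -y iputils-ping",
                 "run": "./run-ping.sh"},
}
slice_id = requests.post(f"{SERVER}/slices", json=slice_desc, auth=AUTH).json()["id"]

# 3. One sliver description per chosen node, each with a public interface and
#    a user-defined property telling the source and target slivers apart.
for node, role in [("node-17", "source"), ("node-42", "target")]:
    sliver = {"node": node,
              "interfaces": [{"type": "public"}],
              "properties": {"role": role}}
    requests.post(f"{SERVER}/slices/{slice_id}/slivers", json=sliver, auth=AUTH)

# 4-6. Nodes instantiate the containers; once they are ready, activate the
#      slice so the nodes start them.
requests.post(f"{SERVER}/slices/{slice_id}/activate", auth=AUTH)

# 7-8. The containers run the setup and run programs; the researcher collects
#      results straight from the slivers (e.g. over SSH), not shown here.

# 9-10. Deactivate and deinstantiate the slice; nodes stop and remove the
#       containers.
requests.post(f"{SERVER}/slices/{slice_id}/deactivate", auth=AUTH)
requests.delete(f"{SERVER}/slices/{slice_id}", auth=AUTH)
#+end_src
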
          208  +* Community-Lab integration in existing community networks
          209  +# CN diagram (buildings and cloud).
          210  +A typical CN looks like this, with most nodes linked using WiFi technology
          211  +(cheap and ubiquitous), but sometimes others such as optical fiber.  Remember
          212  +that CNs are production networks with distributed ownership.  Strategies:
          213  +
          214  +# CN diagram extended with CONFINE devices (hover over interesting part).
          215  +- Take an existing node owned by CN members: CONFINE provides an RD and
          216  +  connects it via Ethernet.  Experiments are restricted to the application
          217  +  layer unless the node owner allows the RD to include a direct interface
          218  +  (i.e. antenna).
          219  +- Extend the CN with complete nodes: CONFINE provides both the CD and the RD
          220  +  and uses a CN member's location.  All but low-level experiments are
          221  +  possible with direct interfaces.
          222  +- Set up a physically separated cloud of nodes: CONFINE extends the CN with a
          223  +  full installation of connected nodes at a site controlled by a partner
          224  +  (e.g. campus).  All kinds of experiments are possible with direct
          225  +  interfaces.  Users are warned about the experimental nature of the network.
          226  +
          227  +* Recap
          228  +
          229  +- Community networks are an emerging way to provide citizens with
          230  +  connectivity in a sustainable and distributed manner, in which the owners
          231  +  of the networks are the users themselves.
          232  +- Research in this field is necessary to support the growth of CNs while
          233  +  improving their operation and quality.
          234  +- Experimental tools are still lacking because of the peculiarities of CNs.
          235  +- The CONFINE project aims to fill this gap by deploying Community-Lab, a
          236  +  testbed for community networks inside existing community networks.
          237  +
          238  +# Commenters: Less attention on architecture, more on global working of
          239  +# testbed.
          240  +
          241  +# Ivan: Describe simple experiment, show diagram (UML-like timing diagram?
          242  +# small animation?) showing the steps from slice creation to instantiation,
          243  +# activation, deactivation and deletion for that example experiment.
          244  +
          245  +# Axel: Maybe the difference of push and pull can be a bit hidden since
          246  +# concepts of allocation and deployment remain somehow.
          247  +
          248  +# Ivan: Explain sliver connectivity options using a table with examples ("for
          249  +# this experiment you can use that type of sliver interface").
          250  +
          251  +# Axel: I think there are also many figures and lists in the paper that can be
          252  +# reused as buzzwords.
          253  +
          254  +# Axel: For example its nice if RDs, sliver connectivity, experiment
          255  +# status,... can be instantly demonstrated using globally routable IPv6
          256  +# addresses to anybody without having to prepare complex tunnels.  These are
          257  +# attractive advantages of our design/implementation over PlanetLab and we
          258  +# should make use of it and exploit them in demonstrations, dissemination,
          259  +# open-call...
          260  +
          261  +# Ivan: We may show more or less the same presentation in the upcoming SAX
          262  +# 2012 (Tortosa, September 29-29).  We may add (or dedicate more time to) a
          263  +# couple of points more related with Community Networks, namely the Open Call
          264  +# and how to participate in Community-Lab.
          265  +
          266  +# Local Variables:
          267  +# mode: org
          268  +# End: