Community-Lab introduction

Check-in [a0f6c29ca9]
Comment: Minor corrections after reading.
SHA1: a0f6c29ca91dbac00580359ec56e388d805e814b
User & Date: ivan on 2012-09-18 09:01:04
Changes

Modified script.txt from [f759c594c4] to [d908ae5804].

    26     26     researchers to experiment on real community networks.
    27     27   
    28     28   ** Testbeds
    29     29   - Environments built with real hardware for realistic experimental research on
    30     30     network technologies (instead of simulations).
    31     31   - Wireless: Berlin RoofNet, MIT Roofnet (outdoor); IBBT's w-iLab.t, CERTH's
    32     32     NITOS, WINLAB's ORBIT (indoor).  Limited local scale, controlled
    33         -  environment, no resource sharing mechanisms.
           33  +  environment, no resource sharing between experiments.
    34     34   - Internet: PlanetLab, planet-scale testbed with resource sharing on nodes.
    35     35     Main inspiration for Community-Lab.
    36     36   
    37     37   ** Community-Lab: a testbed for community networks
    38     38   - The testbed developed by CONFINE.
    39     39   - Integrates and extends three Community Networks: guifi.net, FunkFeuer, AWMN.
    40     40   # Node maps here for CNs with captures from node DBs.
................................................................................
    41     41   - Also nodes in participating research centres.
    42     42   - Linked together over the FEDERICA backbone.
    43     43   - All its software and documentation is released under Free licenses, anyone
    44     44     can set up a CONFINE testbed like Community-Lab.
    45     45   
    46     46   * Challenges and requirements
    47     47   ** Simple management vs. Distributed node ownership
    48         -- In contrast with esp. indoors testbeds that belong wholly to the same
           48  +- In contrast with e.g. indoor testbeds that belong wholly to the same
    49     49     entity.
    50     50   
    51     51   ** Features vs. Lightweight, low cost (free & open)
    52     52   - Devices ranging from PCs to embedded boards.
    53         -- Need light system able to run on a variety of devices.
           53  +- Need light system able to run on very different devices.
    54     54   
    55     55   ** Familiarity & flexibility vs. System stability
    56     56   - Familiar Linux env with root access to researchers.
    57     57   - Keep env isolation (nodes are shared by experiments).
    58     58   - Keep node stability (to avoid in-place maintenance, some difficult to reach
    59     59     node locations).
    60     60   # Frozen tower.
................................................................................
    85     85   # Move over overlay diagram less overlay connections plus overlay network.
    86     86   - A testbed consists of a set of nodes managed by the same server.
    87     87     - Server managed by testbed admins.
    88     88     - Network and node managed by node admins (usually node owners).
    89     89     - Node admins must adhere to a set of conditions.
    90     90     - Solves management vs. ownership problem.
    91     91   - All components in testbed reachable via management network (tinc mesh VPN).
    92         -  - Avoids problems with firewalls and private networks.
           92  +  - Avoids problems with firewalls and private networks in nodes.
    93     93     - Avoids address scarcity and incompatibility (well structured IPv6 schema).
    94         -  - Public addresses still used for experiments when available.
           94  +  - Public CN addresses still used for experiments when available.
    95     95   - Gateways connect disjoint parts of the management network.
    96     96     - Allows a testbed spanning different CNs and islands through external means
    97     97       (e.g. FEDERICA, the Internet).
    98     98     - A gateway reachable from the Internet can expose the management network
    99     99       (if using public addresses).
   100    100   - A researcher runs the experiments of a slice in slivers each running in a
   101    101     different node…
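The "well structured IPv6 schema" mentioned above can be sketched as follows. The prefix and field layout here (16 bits of node id, the low bits for the host) are hypothetical, not CONFINE's actual plan; the point is that deriving management addresses from a fixed structure avoids both address scarcity and collisions between testbed components.

```python
import ipaddress

# Hypothetical layout: a ULA /48 for the whole testbed, 16 bits for the
# node id, and the low 64 bits for the host within that node.  The real
# CONFINE schema differs; this only illustrates the idea of a structured
# IPv6 management plan.
TESTBED_PREFIX = ipaddress.IPv6Network("fd5f:eee5:e6ad::/48")

def mgmt_address(node_id: int, host_id: int = 1) -> ipaddress.IPv6Address:
    """Derive a management address from testbed prefix + node id + host id."""
    base = int(TESTBED_PREFIX.network_address)
    return ipaddress.IPv6Address(base | (node_id << 64) | host_id)

print(mgmt_address(42))   # fd5f:eee5:e6ad:2a::1
```

Because every address is computed, no central coordination is needed beyond assigning node ids, which matches the testbed's asynchronous, loosely coupled operation.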
   102    102   
   103    103   ** Nodes, slices and slivers
   104    104   - …a model inspired by PlanetLab.
   105         -- A slice groups a set of related slivers.
          105  +- The slice (a management concept) groups a set of related slivers.
   106    106   - A sliver holds the resources (CPU, memory, disk, bandwidth, interfaces…)
   107    107     allocated for a slice in a given node.
   108    108   # Diagram: Slices and slivers, two or three nodes with a few slivers on them,
   109    109   # each with a color identifying it with a slice.)
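The slice/sliver relationship above can be sketched as a minimal data model. Class and field names are illustrative, not the actual Community-Lab registry schema; the invariant it encodes (a slice holds at most one sliver per node) follows from the definition of a sliver as the slice's resources on a given node.

```python
from dataclasses import dataclass, field

@dataclass
class Sliver:
    node: str                # node hosting the container
    cpu_share: int           # resources allocated on that node
    memory_mb: int
    interfaces: list = field(default_factory=list)

@dataclass
class Slice:
    name: str
    slivers: list = field(default_factory=list)

    def add_sliver(self, sliver: Sliver):
        # A sliver is "the resources allocated for a slice in a given
        # node", so two slivers of one slice cannot share a node.
        assert all(s.node != sliver.node for s in self.slivers), \
            "a slice holds at most one sliver per node"
        self.slivers.append(sliver)

exp = Slice("ping-study")
exp.add_sliver(Sliver(node="guifi-17", cpu_share=10, memory_mb=64))
exp.add_sliver(Sliver(node="awmn-03", cpu_share=10, memory_mb=64))
print(len(exp.slivers))  # 2
```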
   110    110   
   111    111   ** Node architecture
   112    112   Mostly autonomous, no long-running connections to server, asynchronous
................................................................................
   113    113   operation: robust under link instability.
   114    114   # Node simplified diagram, hover to interesting parts.
   115    115   - The community device
   116    116     - Completely normal CN network device, possibly already existing.
   117    117     - Routes traffic between the CN and devices in the node's local network
   118    118       (wired, runs no routing protocol).
   119    119   - The research device
   120         -  - More powerful than CD, it runs OpenWrt (Attitude Adjustment) firmware
   121         -    customized by CONFINE.
          120  +  - More powerful than CD, it runs OpenWrt firmware customized by CONFINE.
   122    121     - Experiments run here.  The separation between CD and RD allows:
   123    122       - Minimum CONFINE-specific tampering with CN hardware.
   124    123       - Minimum CN-specific configuration for RDs.
   125    124       - Greater compatibility and stability for the CN.
   126    125     - Slivers are implemented as Linux containers.
   127    126       - LXC: lightweight virtualization (in Linux mainstream).
   128         -    - Easier resource limitation, resource isolation and node stability.
   129    127       - Provides a familiar env for researchers.
          128  +    - Easier resource limitation, resource isolation and node stability.
   130    129     - Control software
   131    130       - Manages containers and resource isolation through LXC tools.
   132    131       - Ensures network isolation and stability through traffic control (QoS)
   133    132         and filtering (from L2 upwards).
   134    133       - Protects users' privacy through traffic filtering and anonymization.
   135    134     - Optional, controlled direct interfaces for experiments to interact
   136    135       directly with the CN (avoiding the CD).
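Since slivers are LXC containers, the node's control software ultimately drives the standard LXC command-line tools. `lxc-create` and `lxc-start` are real commands from the LXC suite, but the template, flags and naming convention below are illustrative, not CONFINE's actual container setup; the sketch builds the commands without executing them.

```python
# Hypothetical mapping from a sliver to the LXC commands that would
# instantiate and activate it on the research device.
def sliver_commands(slice_name: str, node_id: int, template: str = "debian"):
    name = f"{slice_name}-{node_id}"      # one container per slice per node
    return [
        ["lxc-create", "-n", name, "-t", template],   # instantiate container
        ["lxc-start", "-d", "-n", name],              # activate, detached
    ]

for cmd in sliver_commands("ping-study", 17):
    print(" ".join(cmd))
```

In practice resource limits (CPU, memory, disk) would be applied through the container configuration before start, which is what makes LXC's lightweight virtualization attractive for isolation on low-cost hardware.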
................................................................................
   144    143   - Home computer behind a NAT router: a private interface with traffic
   145    144     forwarded using NAT to the CN.  Outgoing traffic is filtered to ensure
   146    145     network stability.
   147    146   - Publicly open service: a public interface (with a public CN address) with
   148    147     traffic routed directly to the CN.  Outgoing traffic is filtered to ensure
   149    148     network stability.
   150    149   - Traffic capture: a passive interface using a direct interface for capture.
   151         -  Incoming traffic is filtered and anonimized by control software.
          150  +  Incoming traffic is filtered and anonymized by control software.
   152    151   - Routing: an isolated interface using a VLAN on top of a direct interface.
   153    152     It can only reach other slivers of the same slice with isolated interfaces
   154    153     on the same link.  All traffic is allowed.
   155    154   - Low-level testing: the sliver is given raw access to the interface.  For
   156    155     privacy, isolation and stability reasons this should only be allowed on
   157    156     exceptional occasions.
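The five connectivity options above reduce to a per-interface policy the node's control software can look up. This table is a paraphrase of the text, not the real node configuration format; labels and fields are invented for illustration.

```python
# type:      (forwarding,                filter outgoing, anonymize incoming)
IFACE_POLICY = {
    "private":  ("NAT to the CN",           True,  False),
    "public":   ("routed, public CN addr",  True,  False),
    "passive":  ("capture only",            False, True),   # no egress at all
    "isolated": ("VLAN, same-slice only",   False, False),  # L2-confined
    "raw":      ("raw direct interface",    False, False),  # exceptional
}

def outgoing_filtered(iface_type: str) -> bool:
    """Whether egress is filtered to protect CN stability."""
    _forwarding, filtered, _anonymize = IFACE_POLICY[iface_type]
    return filtered
```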
   158    157   
................................................................................
   164    163   1. The researcher first contacts the server and creates a slice description
   165    164      which specifies a template for slivers (e.g. Debian Squeeze i386).
   166    165      Experiment data is attached including a program to set up the experiment and
   167    166      another one to run it.
   168    167   2. The server updates the registry which holds all definitions of testbed,
   169    168      nodes, users, slices, slivers, etc.
   170    169   3. The researcher chooses a couple of nodes and creates sliver descriptions
   171         -   for them in the previous slice.  Both sliver descriptions include a public
   172         -   interface to the CN and user-defined properties for telling apart the
   173         -   source sliver from the target one.  Sliver descriptions go to the registry.
          170  +   for them belonging to the previous slice.  Both sliver descriptions include
          171  +   a public interface to the CN and user-defined properties for telling apart
          172  +   the source sliver from the target one.  Sliver descriptions go to the
          173  +   registry.
   174    174   4. Each of the previous nodes gets a sliver description for it.  If enough
   175    175      resources are available, a container is created with the desired
   176    176      configuration.
   177    177   5. Once the researcher knows that slivers have been instantiated, the server
   178    178      can be commanded to activate the slice.  The server updates the registry.
   179    179   6. When nodes get instructions to activate slivers they start the containers.
   180    180   7. Containers run the experiment setup program and the run program.  The
   181    181      programs query sliver properties to decide their behaviour.
   182         -8. Researchers interact with containers if needed (e.g. via SSH) and collect
   183         -   results straight from them.
          182  +8. Researchers interact straight with containers if needed (e.g. via SSH) and
          183  +   collect results from them.
   184    184   9. When finished, the researcher tells the server to deactivate and
   185    185      deinstantiate the slice.
   186    186   10. Nodes get the instructions and they stop and remove containers.
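The ten steps above can be sketched as calls against the registry held by the server. The class, endpoint-like method names and payloads are invented for illustration; the real Community-Lab server exposes a different interface. Steps that happen on the nodes (instantiation, starting containers) appear only as comments.

```python
# Hypothetical client for the server-side registry of testbed state.
class RegistryClient:
    def __init__(self):
        self.registry = {"slices": {}, "slivers": []}

    def create_slice(self, name, template, exp_data):       # step 1
        # step 2: the server records the definition in the registry
        self.registry["slices"][name] = {
            "template": template, "exp_data": exp_data, "state": "registered"}

    def create_sliver(self, slice_name, node, properties):  # step 3
        # step 4: the node later fetches this description and, if it has
        # enough resources, creates the container
        self.registry["slivers"].append(
            {"slice": slice_name, "node": node, "properties": properties})

    def set_state(self, slice_name, state):                 # steps 5-6, 9-10
        self.registry["slices"][slice_name]["state"] = state

c = RegistryClient()
c.create_slice("ping-study", "debian-squeeze-i386",
               {"setup": "setup.sh", "run": "run.sh"})      # step 1
c.create_sliver("ping-study", "node-a", {"role": "source"}) # step 3
c.create_sliver("ping-study", "node-b", {"role": "target"})
c.set_state("ping-study", "active")          # steps 5-6: containers start
# steps 7-8: programs run; researcher connects to containers (e.g. SSH)
c.set_state("ping-study", "deinstantiated")  # steps 9-10: containers removed
```

Note the asynchronous shape: the researcher only ever talks to the server, and nodes pull their sliver descriptions from the registry, which is what keeps the testbed robust under link instability.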
   187    187   
   188    188   At all times there can be external services interacting with researchers,
   189    189   server, nodes and slivers, e.g. to help choosing nodes, monitor nodes or
   190    190   collect results.
   191    191   
   192    192   * Community-Lab integration in existing community networks
   193    193   # CN diagram (buildings and cloud).
   194         -A typical CN looks like this, with most nodes linked using WiFi technology
   195         -(cheap and ubiquitous), but sometimes others as optical fiber.  The CONFINE
   196         -project follows three strategies taking into account that CNs are production
   197         -networks with distributed ownership:
          194  +A typical CN looks like this, with most nodes linked using cheap and
          195  +ubiquitous WiFi technology (and less frequently Ethernet, optical fiber or
          196  +others).  The CONFINE project follows three strategies taking into account
          197  +that CNs are production networks with distributed ownership:
   198    198   
   199    199   # CN diagram extended with CONFINE devices (hover over interesting part).
   200    200   - Take an existing node owned by CN members, CONFINE provides a RD and
   201         -  connects it via Ethernet.  Experiments are restricted to the application
   202         -  layer unless the node owner allows the RD to include a direct interface
   203         -  (i.e. antenna).
          201  +  connects it via Ethernet to the CD.  Experiments are restricted to the
          202  +  application layer unless the node owner allows the RD to include a direct
          203  +  interface (i.e. antenna).
   204    204   - Extend the CN with complete nodes, CONFINE provides both the CD and the RD
   205    205     and uses a CN member's location.  All but low-level experiments are possible
   206    206     using direct interfaces.
   207    207   - Set up a physically separated cloud of nodes, CONFINE extends the CN with a
   208    208     full installation of connected nodes at a site controlled by a partner
   209    209     (e.g. campus).  All kinds of experiments are possible using direct
   210    210     interfaces.  Users are warned about the experimental nature of the network.
   211    211   
   212    212   * Recap
   213    213   
   214    214   - Community networks are an emerging field to provide citizens with
   215    215     connectivity in a sustainable and distributed manner in which the owners of
   216    216     the networks are the users themselves.
   217         -- Research on this field is necessary to support CNs growth while improving
           217  +- Research in this field is necessary to support CNs' growth while improving
   218    218     their operation and quality.
   219    219   - Experimental tools are still lacking because of the peculiarities of CNs.
   220    220   - The CONFINE project aims to fill this gap by deploying Community-Lab, a
   221         -  testbed for community networks inside existing community networks.
          221  +  testbed for existing community networks.
   222    222   
   223    223   # Commenters: Less attention on architecture, more on global working of
   224    224   # testbed.
   225    225   
   226    226   # Ivan: Describe simple experiment, show diagram (UML-like timing diagram?
   227    227   # small animation?) showing the steps from slice creation to instantiation,
   228    228   # activation, deactivation and deletion for that example experiment.