Community-Lab introduction

Check-in [8bc1c13885]
Overview
Comment: Some notes from Axel for simplifying the presentation.
SHA1: 8bc1c13885ca25dfb82238b1716253b716b6d4f4
User & Date: ivan on 2012-10-07 14:21:17
Context
2012-10-07 18:06  Applied some suggestions by Axel.  check-in: 29ee45c594  user: ivan  tags: trunk
2012-10-07 14:21  Some notes from Axel for simplifying the presentation.  check-in: 8bc1c13885  user: ivan  tags: trunk
2012-09-28 09:46  Argh, forgot to include FIRE name with logo.  check-in: 896dcfe009  user: ivan  tags: trunk, cnbub-2012-1.0.4
Changes

Modified script.txt from [777bb9d984] to [63bbd39286].

    26     26     supporting NGOs) OPLAN Foundation, Pangea. *##*
    27     27   - Objective: Provide a testbed and associated tools and knowledge for
    28     28     researchers to experiment on real community networks. *##*
    29     29   
    30     30   ** Testbed?
    31     31   - Environment built with real hardware for realistic experimental research on
    32     32     network technologies. *##*
           33  +# Axel: Insert headline "Examples of existing testbeds".
    33     34   - Some wireless testbeds, both indoor and outdoor.
    34     35     - Problems: limited local scale, unrealistic controlled environments,
    35     36       and experiments can't share resources simultaneously.
    36     37   - Internet: PlanetLab, planet-scale testbed with resource sharing on nodes.
    37     38     Main inspiration for Community-Lab. *##*
    38     39   
    39     40   ** Community-Lab: a testbed for community networks
    40     41   - Community-Lab is the testbed developed by CONFINE.
    41     42   - Integrates and extends the participating community networks.
    42     43   - Using the FEDERICA research backbone for interconnection. *##*
    43     44   - All Community-Lab's software and documentation is “free as in freedom” so
    44     45     people can use it to set up their own CONFINE testbed.
    45     46   
           47  +# Axel: Headline: for what?
    46     48   * Requirements and challenges
    47     49   A testbed has requirements that are challenged by the unique characteristics
    48     50   of CNs.  For instance, how to *##*
    49     51   
    50     52   ** Simple management vs. Distributed node ownership
    51     53   - manage devices belonging to diverse owners? *##*
    52     54   
................................................................................
    82     84   - support a testbed spanning different CNs? *##*
    83     85   
    84     86   * Community-Lab testbed architecture
    85     87   ** Overall architecture
    86     88   This is the architecture developed by the CONFINE project to handle the
    87     89   previous challenges.  It applies to all testbeds using CONFINE software. *##*
    88     90   
           91  +# Axel: Introduce scenario: CNs, nodes, admins.
           92  +# Ivan: Don't zoom.
    89     93   - A testbed consists of a set of nodes managed by the same server. *##*
    90     94     - Server managed by testbed admins.
    91     95     - Network and nodes managed by CN members.
    92     96     - Node admins must adhere to testbed terms and conditions.
    93     97     - This decouples testbed management from infrastructure ownership & mgmt. *##*
    94     98   - Testbed management traffic uses a tinc mesh VPN:
    95     99     - Avoids problems with firewalls and private networks in nodes.
................................................................................
    99    103     - They help extend it over multiple CNs by external means (e.g. FEDERICA, the
   100    104       Internet).
   101    105     - They can also route the management network to the Internet. *##*
   102    106   - Researchers run experiments in slices spread over several nodes (as
   103    107     slivers). *##*
   104    108   
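To keep the roles above straight, here is a minimal sketch of the overall architecture as a data model. The class and field names are illustrative assumptions, not CONFINE's actual code, and the overlay address is a placeholder.

    # Illustrative sketch only; names and fields are assumptions, not CONFINE code.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        owner: str            # CN member who owns and manages the node
        mgmt_addr: str        # address on the tinc management overlay (placeholder value below)
        accepted_terms: bool  # node admins must adhere to the testbed terms and conditions

    @dataclass
    class Testbed:
        server_admin: str                                # testbed admins manage the server...
        nodes: list[Node] = field(default_factory=list)  # ...not the nodes or the CN itself

        def register(self, node: Node) -> None:
            # Management is decoupled from ownership: the server only accepts
            # nodes whose admins agreed to the terms and conditions.
            if not node.accepted_terms:
                raise ValueError(f"{node.name}: terms and conditions not accepted")
            self.nodes.append(node)

    testbed = Testbed(server_admin="testbed-admins")
    testbed.register(Node("cn-node-1", owner="cn-member", mgmt_addr="fd00::1", accepted_terms=True))
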
   105    109   ** Slices, slivers and nodes
           110  +# Axel: Reverse, from PoV of researcher: select nodes, run as slivers, group in slices.
    106    111   - These concepts are inspired by PlanetLab.
   107    112   - A slice is a management concept that groups a set of related slivers.
   108    113   - A sliver holds the resources (CPU, memory, disk, bandwidth, interfaces…)
   109    114     allocated for a slice in a given node.
   110    115   - A node hosts several slivers at the same time. *##*
   111    116   
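The slice/sliver/node relationship above can be sketched as a tiny data model; again the names and resource fields are illustrative assumptions rather than the real CONFINE schema.

    # Illustrative only; not the real CONFINE schema.
    from dataclasses import dataclass, field

    @dataclass
    class Sliver:
        node: str          # the node that hosts this sliver
        cpu_share: float   # resources allocated for the slice on that node
        memory_mb: int
        disk_mb: int
        interfaces: list[str] = field(default_factory=list)

    @dataclass
    class Slice:
        name: str                                            # management concept grouping related slivers
        slivers: list[Sliver] = field(default_factory=list)

    ping = Slice("ping-experiment")
    ping.slivers.append(Sliver("node-a", 0.1, 256, 512, ["public"]))
    ping.slivers.append(Sliver("node-b", 0.1, 256, 512, ["public"]))
    # A node may host slivers from several different slices at the same time.
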
   112    117   ** Node architecture
          118  +# Axel: More stress on node itself.
          119  +# Ivan: Don't zoom!!
    113    120   allows the realization of these concepts.  *##* A node consists of a CD, an RD
    114    121   and a recovery device connected to the same wired local network. *##*
   115    122   
   116    123   - The community device
   117    124     - Completely normal CN device, so existing ones can be used.
   118    125     - routes traffic between the CN and the local network (which runs no routing
   119    126       protocol). *##*
................................................................................
   131    138         isolation and node stability.
   132    139       - uses traffic control, filtering and anonymization to ensure network
   133    140         stability, isolation and privacy (partially implemented). *##*
   134    141   - The recovery device (not implemented) can force a remote hardware reboot of
   135    142     the RD in case it hangs.  It also helps with upgrade and recovery. *##*
   136    143   
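A compact summary of the three devices in a node, paraphrasing the bullets above (the code and the role wording are illustrative, not part of the script):

    # Summary of the node's three devices, paraphrasing the section above.
    NODE_DEVICES = {
        "CD (community device)": "an ordinary CN device; routes traffic between the CN and "
                                 "the node's wired local network, which runs no routing protocol",
        "RD":                    "runs the slivers, using traffic control, filtering and "
                                 "anonymization for network stability, isolation and privacy",
        "recovery device":       "not implemented yet; can force a hardware reboot of the RD "
                                 "if it hangs, and helps with upgrade and recovery",
    }

    for device, role in NODE_DEVICES.items():
        print(f"{device}: {role}")
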
   137    144   * Experiments support
          145  +# Axel: Turn around as of mail: from PoV of researcher: 1) testbed through API, choose nodes, 2) login OoB, 3) auto creation, 4) specific interfaces.
   138    146   Researchers can configure slivers with different types of network interfaces
   139    147   depending on the connectivity needs of experiments.  For instance, to *##*
   140    148   
   141    149   - mimic a home PC: use the private interface, *##* which has L3 traffic
   142    150     forwarded using NAT to the CN but filtered to ensure network stability. *##*
   143    151   - implement a network service: create a public interface, *##* which has a CN
   144    152     address and L3 traffic routed directly to the CN but filtered to ensure
................................................................................
   166    174   through an API that can be used to study the testbed itself, or to implement
   167    175   external services like node monitoring and selection.
   168    176   
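To illustrate the two interface types and the API use mentioned above, here is a hedged sketch: the field names, the /api/nodes endpoint and the "production" state are assumptions, not the actual Community-Lab interface.

    import json
    import urllib.request

    # Hypothetical sliver descriptions; field names are assumptions.
    home_pc_like = {          # mimic a home PC
        "template": "debian-squeeze",
        "interfaces": [{"type": "private", "nat": True, "filtered": True}],
    }
    network_service = {       # implement a network service reachable from the CN
        "template": "debian-squeeze",
        "interfaces": [{"type": "public", "filtered": True}],
    }

    # The testbed API could also back external services such as node
    # monitoring and selection; endpoint and fields are assumptions.
    def select_nodes(base_url: str) -> list[dict]:
        with urllib.request.urlopen(f"{base_url}/api/nodes") as resp:
            nodes = json.load(resp)
        return [n for n in nodes if n.get("state") == "production"]
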
   169    177   ** An example experiment
   170    178   to show how the testbed works.  We'll create two slivers which ping each
   171    179   other. *##*
   172    180   
          181  +# Use summary diagram, maybe colorise labels.
   173    182   1. The researcher first contacts the server and registers a slice description
   174    183      which specifies a template for slivers (e.g. Debian Squeeze) and includes
   175    184      data and programs to set up slivers and run experiments. *##*
   176    185   2. This and all subsequent changes performed by the researcher are stored in
   177    186      the registry, which holds the config of all components in the testbed. *##*
   178    187   3. The researcher chooses two nodes and registers sliver descriptions for them
   179    188      in the previous slice.  Each one includes a public interface to the CN.
................................................................................
   193    202   
   194    203   This is a summary of all the previous steps. *##*
   195    204   
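The researcher's steps above could look roughly like this against a hypothetical HTTP registry API; the base URL, endpoint paths and field names are assumptions made for illustration, not the real registry interface.

    import json
    import urllib.request

    REGISTRY = "https://registry.example.org"   # placeholder, not the real server

    def post(path: str, payload: dict) -> dict:
        # Minimal JSON POST helper for the hypothetical registry API.
        req = urllib.request.Request(f"{REGISTRY}{path}",
                                     data=json.dumps(payload).encode(),
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # 1. Register a slice description: a sliver template plus the data and
    #    programs that set up the slivers and run the experiment (mutual ping).
    slice_desc = post("/slices", {
        "name": "ping-experiment",
        "template": "debian-squeeze",
        "exp_data": "ping-experiment.tar.gz",
    })

    # 2. The registry stores this and every later change made by the researcher.
    # 3. Register one sliver description per chosen node, each with a public
    #    interface to the CN so the two slivers can reach each other.
    for node in ("node-a", "node-b"):
        post(f"/slices/{slice_desc['id']}/slivers", {
            "node": node,
            "interfaces": [{"type": "public"}],
        })
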
   196    205   * Cooperation between community networks and Community-Lab
   197    206   can take different forms.  Given a typical CN like this, with most nodes
   198    207   linked using cheap and ubiquitous WiFi technology: *##*
   199    208   
           209  +# Axel: Keep CN in sight, explain RDs and RD links (DIs) in cloud.
    200    210   - CN members can provide an existing CD and let CONFINE connect an RD to it via
   201    211     Ethernet.  Experiments are restricted to the application layer unless the
   202    212     node owner allows the RD to include a direct interface (i.e. antenna). *##*
   203    213   - CN members can provide a location and let CONFINE set up a complete node
   204    214     there (CD and RD).  In this way CONFINE helps extend the CN. *##*
   205    215   - CONFINE can also extend the CN by setting up a physically separated cloud of
   206    216     connected nodes.  Experiments in all layers are possible in this setup, but