Community-Lab introduction

Check-in [a0366fa17f]
Comment: Streamlined several points after reading, simplified ping example.
SHA1: a0366fa17f8d143bd1559f84ecc33f19a364e6d0
User & Date: ivan on 2012-09-20 11:41:53
Changes

Modified script.txt from [daf35fb5e3] to [2e758bb5c1].

     3      3   * Introduction
     4      4   ** Community networks
     5      5   - Infrastructure deployed by organized groups of people for self-provision of
     6      6     broadband networking that works and grows according to their own interests.
     7      7   - Characteristics: Open participation, open and transparent management,
     8      8     distributed ownership.
     9      9   - CNs are of strategic importance for the universal availability of broadband
    10         -  networking (according to the European Digital Agenda).
    11         -- A challenge: How to support the growth and sustainability of community
    12         -  networks by providing the means to conduct experimentally driven research.
           10  +  networking (an initiative for the European Digital Agenda).
           11  +- A challenge for researchers: How to support the growth and sustainability of
           12  +  CNs by providing the means to conduct experimentally driven research.
    13     13   
    14     14   ** The CONFINE project: Community Networks Testbed for the Future Internet
    15     15   - Takes on the previous challenge.
    16     16   - Project supported by the European Community Framework Programme 7 within the
    17     17     Future Internet Research and Experimentation Initiative (FIRE).
    18         -- Partners (list with logos): (community networks) guifi.net, Funkfeuer,
    19         -  Athens Wireless Metropolitan Network; (research centres) Universitat
    20         -  Politècnica de Catalunya, Fraunhofer Institute for Communication,
    21         -  Information Processing and Ergonomics, Interdisciplinary Institute for
    22         -  Broadband Technology; (NGOs) the OPLAN Foundation, Pangea.
    23         -- Objectives: Provide a testbed and associated tools and knowledge for
           18  +# List partners' logos.
           19  +- Partners: (community networks) guifi.net, Funkfeuer, Athens Wireless
           20  +  Metropolitan Network; (research centres) Universitat Politècnica de
           21  +  Catalunya, Fraunhofer Institute for Communication, Information Processing
           22  +  and Ergonomics, Interdisciplinary Institute for Broadband Technology; (NGOs)
           23  +  OPLAN Foundation, Pangea.
           24  +- Objective: Provide a testbed and associated tools and knowledge for
    24     25     researchers to experiment on real community networks.
    25     26   
    26     27   ** Testbeds
    27     28   - Environments built with real hardware for realistic experimental research on
    28         -  network technologies (instead of simulations).
    29         -- Wireless: (outdoor) Berlin RoofNet, MIT Roofnet; (indoor) IBBT's w-iLab.t,
    30         -  CERTH's NITOS, WINLAB's ORBIT.  Limited local scale, controlled environment,
    31         -  no resource sharing between experiments.
           29  +  network technologies.
           30  +- Wireless, both outdoor (HU's Berlin RoofNet, MIT Roofnet) and indoor (IBBT's
           31  +  w-iLab.t, CERTH's NITOS, WINLAB's ORBIT).  Problems: limited local scale,
           32  +  controlled environment, no resource sharing between experiments.
    32     33   - Internet: PlanetLab, planet-scale testbed with resource sharing on nodes.
    33     34     Main inspiration for Community-Lab.
    34     35   
    35     36   ** Community-Lab: a testbed for community networks
    36     37   - The testbed developed by CONFINE.
    37         -- Integrates and extends three Community Networks: guifi.net, FunkFeuer, AWMN.
    38     38   # Node maps here for CNs with captures from node DBs.
    39         -- Also nodes in participating research centres.
    40         -- Linked together over the FEDERICA academic backbone.
    41         -- All its software and documentation is released under Free licenses, anyone
    42         -  can setup a CONFINE testbed like Community-Lab.
           39  +- Integrates and extends three community networks: guifi.net, FunkFeuer, AWMN.
           40  +- Also includes nodes in participating research centres.
           41  +- All linked together over the FEDERICA research backbone.
           42  +- All its software and documentation are “free as in freedom”: anyone can
           43  +  set up a CONFINE testbed like Community-Lab.
    43     44   
    44     45   * Challenges and requirements
           46  +CNs pose unique challenges for a testbed.  How to…
           47  +
    45     48   ** Simple management vs. Distributed node ownership
    46         -- How to manage devices belonging to diverse owners.
           49  +- manage devices belonging to diverse owners?
    47     50   
    48     51   ** Features vs. Lightweight, low cost (free & open)
    49         -- Devices ranging from PCs to embedded boards.
    50         -- Need light system able to run on very different devices.
           52  +- support devices ranging from PCs to embedded boards?
    51     53   
    52     54   ** Heterogeneity vs. Compatibility
    53         -- Some devices allow hacking while others don't.
    54         -- Diverse connectivity and link technologies (wireless, wired, fiber).
           55  +- work with devices which allow little customization?
           56  +- support diverse connectivity and link technologies (wireless, wired, fiber)?
    55     57   
    56     58   ** Familiarity & flexibility vs. System stability
    57     59   - Researchers prefer a familiar Linux env with root access.
    58         -- But experiments sharing the same node must be isolated.
           60  +- isolate experiments that share the same node?
           61  +- keep nodes stable to avoid in-place maintenance?  Accessing node locations
           62  +  can be hard.
    59     63   # Frozen tower.
    60         -- Accessing node locations can be hard, so keep node stability to avoid
    61         -  in-place maintenance.
    62     64   
    63     65   ** Flexibility vs. Network stability
    64         -- Network experiments running on nodes in a production network.
    65         -- Allow interaction at the lowest possible layer of the CN while not
    66         -  disrupting or overusing it.
           66  +- Network experiments run on nodes in a production network.
           67  +- allow interaction at the lowest possible layer of the CN while not
           68  +  disrupting or overusing it?
    67     69   
    68     70   ** Traffic collection vs. Privacy of CN users
    69         -- Experiments performing traffic collection and characterization.
    70         -- Avoid researchers spying on users' data.
           71  +- allow experiments performing traffic collection and characterization?
           72  +- avoid researchers spying on users' data?
    71     73   
    72     74   ** Link instability vs. Management robustness
    73         -- Management must deal with frequent network outages in the CN.
           75  +- deal with frequent network outages in the CN when managing nodes?
    74     76   
    75     77   ** Reachability vs. IP address provisioning
    76         -- Testbed spanning different CNs.
    77         -- IPv4 scarcity and incompatibility between CNs, lack of IPv6 support.
           78  +- We face IPv4 scarcity, incompatibility between CNs, and a lack of IPv6 support.
           79  +- support a testbed spanning different CNs?
    78     80   
    79     81   * Community-Lab testbed architecture
           82  +This is the architecture developed by the CONFINE project to handle the
           83  +previous challenges.
           84  +
    80     85   ** Overall architecture
    81     86   This architecture applies to all testbeds using the CONFINE software.
    82     87   # Move over overlay diagram less overlay connections plus overlay network.
    83     88   - A testbed consists of a set of nodes managed by the same server.
    84     89     - Server managed by testbed admins.
    85         -  - Network and node managed by node admins (usually owners and CN members).
    86         -  - Node admins must adhere to testbed conditions.
           90  +  - Network and node managed by CN members.
           91  +  - Node admins must adhere to testbed terms and conditions.
    87     92     - This decouples testbed management from infrastructure ownership and mgmt.
    88     93   - Testbed management traffic uses a tinc mesh VPN:
    89     94     - Avoids problems with firewalls and private networks in nodes.
    90         -  - Uses IPv6 to avoid address scarcity and incompatibility between CNs.
           95  +  - Mgmt network uses IPv6 to avoid address scarcity and incompatibility
           96  +    between CNs.
    91     97     - Short-lived mgmt connections make components mostly autonomous and
    92    98       tolerant to link instability (see the sketch below).
    93         -- A testbed can span multiple CNs thanks to gateways.
           99  +- Gateways allow a testbed to span multiple CNs.
    94    100     - Bridging the mgmt net over external means (e.g. FEDERICA, the Internet).
    95    101     - Gateways can route the management network to the Internet.
    96    102   - A researcher runs the experiments of a slice in slivers, each running in a
    97         -  different node…
          103  +  different node.
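
The short-lived connections mentioned above can be as simple as connect,
query, disconnect, with retries to ride out outages.  A minimal Python
sketch, assuming a hypothetical status service listening on a node's IPv6
management address (address, port and payload are invented for illustration):

    # Minimal sketch: a short-lived, retry-tolerant management request over
    # the IPv6 management network.  Address, port and payload are made up.
    import socket
    import time

    def fetch_status(addr, port, retries=5):
        delay = 1.0
        for _ in range(retries):
            try:
                with socket.create_connection((addr, port), timeout=10) as s:
                    s.sendall(b"GET /status\n")
                    return s.recv(4096)
            except OSError:
                time.sleep(delay)  # CN links flap: back off and retry
                delay *= 2
        raise ConnectionError("management network unreachable")

    # fetch_status("fdf5:5351:1dfd::2", 8080)  # hypothetical node address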
    98    104   
    99    105   ** Nodes, slices and slivers
   100         -- …a model inspired in PlanetLab.
          106  +# Diagram: Slices and slivers, two or three nodes with a few slivers on them,
           107  +# each with a color identifying the slice it belongs to.
           108  +- These concepts are inspired by PlanetLab.
   101    109   - The slice (a management concept) groups a set of related slivers.
   102    110   - A sliver holds the resources (CPU, memory, disk, bandwidth, interfaces…)
   103    111     allocated for a slice in a given node (see the sketch below).
   104         -# Diagram: Slices and slivers, two or three nodes with a few slivers on them,
   105         -# each with a color identifying it with a slice.)
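
The slice/sliver relationship can be pictured as a small data model.  A
minimal Python sketch (class and field names are hypothetical, not the
registry's actual schema):

    from dataclasses import dataclass, field

    @dataclass
    class Sliver:
        """Resources allocated for one slice on one node."""
        node_id: str
        cpu_share: float
        memory_mb: int
        interfaces: list = field(default_factory=list)

    @dataclass
    class Slice:
        """Management concept grouping a set of related slivers."""
        name: str
        template: str
        slivers: list = field(default_factory=list)

    ping = Slice(name="ping-demo", template="debian-squeeze-i386")
    ping.slivers.append(Sliver(node_id="node-17", cpu_share=0.1,
                               memory_mb=64, interfaces=["public0"]))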
   106    112   
   107    113   ** Node architecture
          114  +allows the realization of these concepts.  A node consists of:
   108    115   # Node simplified diagram, hover over interesting parts.
   109    116   - The community device
   110    117     - Completely normal CN device, so existing ones can be used.
   111         -  - Routes traffic between the CN and devices in the node's wired local
   112         -    network (which runs no routing protocol).
          118  +  - Routes traffic between the CN and the node's wired local network (which
          119  +    runs no routing protocol).
   113    120   - The research device
   114    121     - Usually more powerful than the CD, since experiments run here.
   115    122     - Separating CD/RD makes integration with any CN simple and safe:
   116         -    - Little CONFINE-specific tampering with CN infrastructure.
   117         -    - Little CN-specific configuration for RDs.
           123  +    - Little CONFINE-specific tampering with CN infrastructure.
           124  +    - Little CN-specific configuration for RDs.
   118    125       - Misbehaving experiments can't crash CN infrastructure.
   119    126     - Runs OpenWrt firmware customized by CONFINE.
   120    127     - Slivers are implemented as Linux containers.
   121    128       - Lightweight virtualization supported by the mainline Linux kernel.
   122    129       - Provides a familiar and flexible env for researchers.
   123    130     - Direct interfaces allow experiments to bypass the CD when interacting with
   124    131       the CN.
   125    132     - Control software
   126    133       - Uses LXC tools on containers to enforce resource limitation, resource
   127    134         isolation and node stability (see the sketch below).
   128    135       - Uses traffic control, filtering and anonymization to ensure network
   129    136         stability, isolation and privacy (partially implemented).
   130         -- The recovery device can force a hardware reboot of the RD from several
   131         -  triggers and help with upgrade and recovery (not implemented).
          137  +- The recovery device (not implemented) can force a remote hardware reboot of
          138  +  the RD in case it hangs.  It also helps with upgrade and recovery.
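
To illustrate the control software's job, capping a container's resources can
be done with standard LXC tools.  A minimal Python sketch (the container name
and limits are hypothetical; the actual CONFINE code may differ):

    # Cap a sliver container's memory and CPU with LXC tools (cgroup v1).
    import subprocess

    def limit_sliver(container, mem_bytes, cpu_shares):
        # Hard memory limit for the container.
        subprocess.check_call(["lxc-cgroup", "-n", container,
                               "memory.limit_in_bytes", str(mem_bytes)])
        # CPU weight relative to other slivers on the node.
        subprocess.check_call(["lxc-cgroup", "-n", container,
                               "cpu.shares", str(cpu_shares)])

    # limit_sliver("sliver-0042", 256 * 1024 * 1024, 512)  # needs LXC + root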
   132    139   
   133    140   * Supported experiments
   134    141   # Node simplified diagram, hover over interesting parts.
   135    142   Researchers can configure slivers with different types of network interfaces
   136         -depending on the connectivity needs of experiments:
          143  +depending on the connectivity needs of experiments.  For instance, to
   137    144   
   138         -- Home PC-like access: a private interface with traffic forwarded using NAT to
   139         -  the CN (filtered to ensure network stability).
   140         -- Internet service: a public interface (with a public CN address) with traffic
   141         -  routed directly to the CN (filtered to ensure network stability).
   142         -- Traffic analysis (not implemented): a passive interface capturing traffic on
   143         -  a direct interface (filtered and anonymized to ensure network privacy).
   144         -- Routing: an isolated interface using a VLAN on top of a direct interface.
   145         -  All traffic is allowed, but it can only reach other slivers of the same
   146         -  slice with isolated interfaces on the same physical link.
   147         -- Low-level testing (not implemented): the sliver is given raw access to the
          145  +- mimic a home PC: use the private interface, which has traffic forwarded
          146  +  using NAT to the CN but filtered to ensure network stability.
          147  +- implement a network service: create a public interface, which has a CN
          148  +  address and traffic routed directly to the CN but filtered to ensure network
          149  +  stability.
           150  +- experiment with routing algorithms: create an isolated interface, which
           151  +  uses a VLAN on top of a direct interface (see the sketch below).  All
           152  +  traffic is allowed, but only between slivers of the same slice with
           153  +  isolated interfaces on the same physical link.
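
The isolated interface, for instance, boils down to a VLAN on top of a direct
interface.  A minimal Python sketch using standard iproute2 commands (the
interface name and VLAN id are hypothetical):

    # Create an isolated interface: a VLAN riding on a direct interface.
    import subprocess

    def add_isolated_iface(direct="wlan1", vlan_id=42):
        vlan = "%s.%d" % (direct, vlan_id)
        subprocess.check_call(["ip", "link", "add", "link", direct,
                               "name", vlan, "type", "vlan",
                               "id", str(vlan_id)])
        subprocess.check_call(["ip", "link", "set", vlan, "up"])

    # add_isolated_iface()  # needs root and a real direct interface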
          154  +
          155  +Not yet implemented:
          156  +
          157  +- analyze traffic: create a passive interface to capture traffic on a direct
          158  +  interface, which is filtered and anonymized to ensure network privacy.
          159  +- perform low-level testing: the sliver is given free raw access to a direct
   148    160     interface.  For privacy, isolation and stability reasons this should only be
   149    161     allowed in exceptional cases.
   150    162   
   151         -Besides low level access, RDs also offer link quality and bandwidth usage
   152         -measurements for all their interfaces through DLEP (available soon).
          163  +RDs will soon be able to provide link quality and bandwidth usage measurements
          164  +for all their interfaces through the DLEP protocol.
   153    165   
   154    166   Finally, the server and nodes publish management information through an API
   155         -that can be used to study the testbed itself or to implement external services
   156         -(like node monitoring and selection).
          167  +that can be used to study the testbed itself, or to implement external
          168  +services like node monitoring and selection.
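
A minimal Python sketch of an external service polling that API (the URL and
JSON fields are hypothetical, not the documented API):

    import json
    import urllib.request

    BASE = "https://controller.community-lab.net/api"  # hypothetical URL

    with urllib.request.urlopen(BASE + "/nodes") as resp:
        nodes = json.load(resp)

    # A node-selection service could pick only nodes that are online.
    online = [n["id"] for n in nodes if n.get("state") == "production"]
    print("candidate nodes:", online)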
   157    169   
   158    170   ** An example experiment
   159    171   # Event diagram, hover over components explained.
   160         -To show how the testbed works: two slivers, one of them pings the other one.
   161         -Let's call them the source and target sliver, respectively.
          172  +To show how the testbed works: two slivers which ping each other.
   162    173   
   163    174   1. The researcher first contacts the server and creates a slice description
   164         -   which specifies a template for slivers (e.g. Debian Squeeze i386).  The
   165         -   researcher attaches experiment data including a program to setup slivers
   166         -   for the experiments and another one to run them.
   167         -2. This and all subsequent changes initiated by the researcher are stored in
          175  +   which specifies a template for slivers (e.g. Debian Squeeze) and includes
           176  +   data and programs to set up slivers and run experiments.
          177  +2. This and all subsequent changes performed by the researcher are stored in
   168    178      the registry, which holds the config of all components in the testbed.
   169         -3. The researcher chooses a couple of nodes and creates sliver descriptions
   170         -   for them belonging to the previous slice.  Both sliver descriptions include
   171         -   a public interface to the CN and user-defined properties to mark slivers as
   172         -   either source or target.
          179  +3. The researcher chooses two nodes and adds sliver descriptions for them in
          180  +   the previous slice.  Each one includes a public interface to the CN.
   173    181   4. Each of the chosen nodes receives its sliver description.  If enough
   174         -   resources are available, a container is created by applying the desired
          182  +   resources are available, a container is created by applying the sliver
   175    183      configuration over the selected template.
   176    184   5. Once the researcher knows that slivers have been instantiated, the server
   177    185      can be commanded to activate the slice.
   178    186   6. When nodes get instructions to activate slivers, they start the containers.
   179         -7. Containers execute the experiment's setup and run programs.  The programs
   180         -   query sliver properties to decide whether to act as source or target.
           187  +7. Containers execute the setup and run programs provided by the researcher (sketched below).
   181    188   8. Researchers interact directly with containers if needed (e.g. via SSH) and
   182    189      collect results from them.
   183    190   9. When finished, the researcher tells the server to deactivate and
   184    191      deinstantiate the slice.
   185    192   10. Nodes get the instructions and they stop and remove containers.
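
The run program from step 7 can be very small.  A minimal sketch, assuming
(hypothetically) that the researcher passes the peer sliver's public CN
address in a PEER_ADDR environment variable:

    # Run program, executed inside each container: ping the peer sliver.
    import os
    import subprocess
    import sys

    peer = os.environ.get("PEER_ADDR")
    if not peer:
        sys.exit("PEER_ADDR not set")

    # Send five ICMP echo requests to the peer sliver's public CN address.
    sys.exit(subprocess.call(["ping", "-c", "5", peer]))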
   186    193   
   187    194   * Cooperation between community networks and Community-Lab
   188    195   # CN diagram (buildings and cloud).
   189         -There are different ways.  Given a typical CN like this, with most nodes
          196  +can take different forms.  Given a typical CN like this, with most nodes
   190    197   linked using cheap and ubiquitous WiFi technology:
   191    198   
   192    199   # CN diagram extended with CONFINE devices (hover over interesting part).
   193    200   - CN members can provide an existing CD and let CONFINE connect a RD to it via
   194    201     Ethernet.  Experiments are restricted to the application layer unless the
   195    202     node owner allows the RD to include a direct interface (i.e. an antenna).
   196    203   - CN members can provide a location and let CONFINE set up a complete node
   197         -  there (CD and RD).  All but low-level experiments are possible using direct
   198         -  interfaces.  In this way CONFINE helps extend the CN.
          204  +  there (CD and RD).  In this way CONFINE helps extend the CN.
   199    205   - CONFINE can also extend the CN by setting up a physically separated cloud of
   200    206     connected nodes at a site controlled by a partner (e.g. campus).  All kinds
   201         -  of experiments are possible using direct interfaces.  Users are warned about
   202         -  the experimental nature of the network.
          207  +  of experiments are possible using direct interfaces.  Users should be warned
          208  +  about the research nature of the network.
   203    209   
   204    210   * Participate!
   205    211   We introduced you to Community-Lab, a new testbed being developed by the
   206         -CONFINE project to support research targeted to allow CNs to become a key part
   207         -of Internet infrastructure in the future.
          212  +CONFINE project to support research that can help CNs become a key part of the
           213  +Internet in the near future.
   208    214   
   209    215   Community networks and researchers: We look forward to your participation!
   210    216   - More information: http://community-lab.net/, http://confine-project.eu/
   211    217   - Questions?
   212    218   
   213    219   # Commenters: Less attention on architecture, more on global working of
   214    220   # testbed.