Index: script.txt ================================================================== --- script.txt +++ script.txt @@ -1,224 +1,226 @@ #+title: Community-Lab: A Community Networking Testbed for the Future Internet * Introduction +Hello, I'm Blah Blah from Blah Blah, I work on the CONFINE project and I'm +going to talk to you about *##* Community-Lab, a community networking testbed for +the future Internet. ** Community networks - Infrastructure deployed by organized groups of people for self-provision of broadband networking that works and grows according to their own interests. - Characteristics: Open participation, open and transparent management, distributed ownership. -- The EU regards CNs as fundamental for the universalization of broadband +- The EU regards CNs as fundamental for *##* the universalization of broadband networking. -- New research challenge: How to support the growth and sustainability of CNs - by providing the means to conduct experimentally driven research. +- This raises a new research challenge: How to support the growth and + sustainability of CNs by providing the means to conduct experimentally driven research. *##* ** The CONFINE project: Community Networks Testbed for the Future Internet -- Takes on the previous challenge. +- The CONFINE project takes on the previous challenge. - Project supported by the European Community Framework Programme 7 within the Future Internet Research and Experimentation Initiative (FIRE). -# List partner's logos. -- Partners: (community networks) guifi.net, Funkfeuer, Athens Wireless - Metropolitan Network; (research centres) Universitat Politècnica de +- Partners: (*##* community networks) guifi.net, Funkfeuer, Athens Wireless + Metropolitan Network; (*##* research institutions) Universitat Politècnica de Catalunya, Fraunhofer Institute for Communication, Information Processing - and Ergonomics, Interdisciplinary Institute for Broadband Technology; (NGOs) - OPLAN Foundation, Pangea. 
+ and Ergonomics, Interdisciplinary Institute for Broadband Technology; (*##* + supporting NGOs) OPLAN Foundation, Pangea. *##* - Objective: Provide a testbed and associated tools and knowledge for - researchers to experiment on real community networks. + researchers to experiment on real community networks. *##* ** Testbed? - Environment built with real hardware for realistic experimental research on - network technologies. -- Wireless, both indoor (IBBT's w-iLab.t, CERTH's NITOS, WINLAB's ORBIT) and - outdoor (HU's Berlin RoofNet, MIT Roofnet). Problems: limited local scale, - controlled environment, no resource sharing between experiments. + network technologies. *##* +- Some wireless testbeds, both indoor and outdoor. + - Problems: their limited local scale, their unrealistically controlled + environment, and no resource sharing between simultaneous experiments. - Internet: PlanetLab, planet-scale testbed with resource sharing on nodes. - Main inspiration for Community-Lab. + Main inspiration for Community-Lab. *##* ** Community-Lab: a testbed for community networks -- The testbed developed by CONFINE. -# Node maps here for CNs with captures from node DBs. -- Integrates and extends three community networks: guifi.net, FunkFeuer, AWMN. -- Also includes nodes in participating research centres. -- All linked together over the FEDERICA research backbone. -- All its software and documentation is “free as in freedom”, anyone can setup - a CONFINE testbed like Community-Lab. +- Community-Lab is the testbed developed by CONFINE. +- Integrates and extends the participating community networks. +# Place CN logos over green blobs of CONFINE logo, +# with FEDERICA logo in center blob. +- Using the FEDERICA research backbone for interconnection. *##* +- All Community-Lab's software and documentation is “free as in freedom” so + people can use it to set up their own CONFINE testbed. 
* Requirements and challenges A testbed has requirements that are challenged by the unique characteristics -of CNs. For instance, how to +of CNs. For instance, how to *##* ** Simple management vs. Distributed node ownership -- manage devices belonging to diverse owners? +- manage devices belonging to diverse owners? *##* ** Features vs. Lightweight, low cost -- support devices ranging from PCs to embedded boards? +- support devices ranging from PCs to embedded boards? *##* ** Compatibility vs. Heterogeneity - work with devices which allow little customization? -- support diverse connectivity and link technologies (wireless, wired, fiber)? +- support diverse connectivity models and link technologies including + wireless, wired and fiber? *##* ** Familiarity & flexibility vs. System stability -- Researchers prefer a familiar Linux env with root access. +- Researchers usually prefer a familiar Linux environment with root access. - isolate experiments that share the same node? -- keep nodes stable to avoid in-place maintenance? Accessing node locations - can be hard. -# Frozen tower. +- *##* Sometimes accessing node locations can be hard. *##* + - keep nodes stable to avoid in-place maintenance? *##* ** Flexibility vs. Network stability -- Network experiments run on nodes in a production network. +- Remember that network experiments run on a production network. - allow interaction at the lowest possible layer of the CN while not - disrupting or overusing it? + disrupting or saturating it? *##* ** Traffic collection vs. Privacy of CN users - allow experiments performing traffic collection and characterization? -- avoid researchers spying on users' data? +- while preventing researchers from spying on users' data? *##* ** Management robustness vs. Link instability -- deal with frequent network outages in the CN when managing nodes? +- deal with frequent network outages in the CN when managing nodes? *##* ** Reachability vs. 
IP address provisioning -- We have IPv4 scarcity and incompatibility between CNs, lack of IPv6 support. -- support testbed spanning different CNs? +- CNs have IPv4 scarcity and incompatible addressing with little IPv6 support. +- support a testbed spanning different CNs? *##* * Community-Lab testbed architecture -This is the architecture developed by the CONFINE project to handle the -previous challenges. - ** Overall architecture -This architecture applies to all testbeds using the CONFINE software. -# Move over overlay diagram less overlay connections plus overlay network. -- A testbed consists of a set of nodes managed by the same server. +This is the architecture developed by the CONFINE project to handle the +previous challenges. It applies to all testbeds using CONFINE software. *##* + +- A testbed consists of a set of nodes managed by the same server. *##* - Server managed by testbed admins. - - Network and node managed by CN members. + - Network and nodes managed by CN members. - Node admins must adhere to testbed terms and conditions. - - This decouples testbed management from infrastructure ownership and mgmt. + - This decouples testbed management from infrastructure ownership & mgmt. *##* - Testbed management traffic uses a tinc mesh VPN: - Avoids problems with firewalls and private networks in nodes. - - IPv6 is used to avoid address scarcity and incompatibility between CNs. - - Link instability is tolerated by using short-lived mgmt connections. + - Uses IPv6 to avoid address scarcity and incompatibility between CNs. + - Mgmt connections are short-lived to tolerate link instability. *##* - Gateways are entry points to the mgmt network. - - They can extend it over multiple CNs by external means (e.g. FEDERICA, the + - They help extend it over multiple CNs by external means (e.g. FEDERICA, the Internet). - - They can also route the management network to the Internet. 
-- A researcher runs the experiments of a slice in slivers each running in a - different node. + - They can also route the management network to the Internet. *##* +- Researchers run experiments in slices spread over several nodes (as + slivers). *##* ** Slices, slivers and nodes -# Diagram: Slices and slivers, two or three nodes with a few slivers on them, -# each with a color identifying it with a slice.) - These concepts are inspired by PlanetLab. -- The slice (a management concept) groups a set of related slivers. +- A slice is a management concept that groups a set of related slivers. - A sliver holds the resources (CPU, memory, disk, bandwidth, interfaces…) allocated for a slice in a given node. -- A node hosts several slivers at the same time. +- A node hosts several slivers at the same time. *##* ** Node architecture -allows the realization of these concepts. A node consists of: -# Node simplified diagram, hover to interesting parts. +allows the realization of these concepts. *##* A node consists of a CD, an RD +and a recovery device on a wired local network. *##* + - The community device - Completely normal CN device, so existing ones can be used. - - Routes traffic between the CN and the node's wired local network (which - runs no routing protocol). + - routes traffic between the CN and the local network (which runs no routing + protocol). *##* - The research device - Usually more powerful than the CD, since experiments run here. - - A separated RD minimizes tampering with CN infrastructure. - - Also experiments can't crash the CD. - - Runs the versatile, light and free OpenWrt distro, customized by CONFINE. - - Slivers are implemented as lightweight Linux containers. - - So researchers get root access to a familiar environment. - - Direct interfaces allow low-level interaction of experiments with the CN - bypassing the CD. 
- - Control software - - Uses LXC tools to manage containers and enforce resource limits, + - Separating the RD from the CD minimizes tampering with CN infrastructure. + - Also experiments can't crash CN devices. + - runs the versatile, light & free OpenWrt distro, customized by CONFINE. *##* + - Slivers are implemented as lightweight Linux containers. + - So researchers get root access to a familiar environment. *##* + - provides direct interfaces to allow low-level interaction of experiments + with the CN, bypassing the CD. *##* + - runs CONFINE control software + - uses LXC tools to manage containers and enforce resource limits, isolation and node stability. - - Uses traffic control, filtering and anonymization to ensure network - stability, isolation and privacy (partialy implemented). + - uses traffic control, filtering and anonymization to ensure network + stability, isolation and privacy (partially implemented). *##* - The recovery device (not implemented) can force a remote hardware reboot of - the RD in case it hangs. It also helps with upgrade and recovery. + the RD in case it hangs. It also helps with upgrade and recovery. *##* * Experiments support -# Node simplified diagram, hover to interesting parts. +# Tag diagram with new title. Researchers can configure slivers with different types of network interfaces -depending on the connectivity needs of experiments. For instance, to +depending on the connectivity needs of experiments. For instance, to *##* -- mimic a home PC: use the private interface, which has L3 traffic forwarded - using NAT to the CN but filtered to ensure network stability. -- implement a network service: create a public interface, which has a CN +- mimic a home PC: use the private interface, *##* which has L3 traffic + forwarded using NAT to the CN but filtered to ensure network stability. 
*##* +- implement a network service: create a public interface, *##* which has a CN address and L3 traffic routed directly to the CN but filtered to ensure - network stability. + network stability. *##* +- experiment with routing algorithms: create an isolated interface, *##* which + uses a VLAN on top of a direct interface. All L2 traffic is allowed, but + only between other slivers of the same slice with isolated interfaces on the + same physical link. These were demonstrated with BitTorrent and mesh routing experiments at the IEEE -P2P'12 Conference. Future support is planned for experiments that: +P2P'12 Conference. *##* Future support is also planned for experiments that: -- analyze traffic: create a passive interface to capture traffic on a direct - interface, which is filtered and anonymized to ensure network privacy. -- perform low-level testing: the sliver is given free raw access to a direct - interface. For privacy, isolation and stability reasons this should only be - allowed in exceptional occasions. +- analyze traffic: create a passive interface *##* to capture traffic on a + direct interface, which is filtered and anonymized to ensure network + privacy. *##* +- perform low-level testing: *##* the sliver is given free raw access to a + direct interface. For privacy, isolation and stability reasons this should + only be allowed in exceptional cases. *##* -# List example experiments, add these. Besides experiments run in slices, researchers will soon be able to collect link quality and bandwidth usage measurements of all RDs' interfaces through -the DLEP protocol. 
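The sliver interface types described above can be summarized in a small toy model. This is only an illustrative sketch of the trade-offs each type makes; the class, field and function names here are invented for the example and are not the CONFINE control software's API.

```python
# Toy model of the sliver interface types described in the script.
# NOT the CONFINE API: all names and fields are invented for illustration.

from dataclasses import dataclass


@dataclass(frozen=True)
class IfaceType:
    name: str
    layer: int        # lowest network layer the experiment can touch (2 or 3)
    cn_address: bool  # does the sliver get its own community network address?
    filtered: bool    # is traffic filtered for network stability/privacy?


PRIVATE = IfaceType("private", layer=3, cn_address=False, filtered=True)     # NATed, like a home PC
PUBLIC = IfaceType("public", layer=3, cn_address=True, filtered=True)        # routed, for services
ISOLATED = IfaceType("isolated", layer=2, cn_address=False, filtered=False)  # VLAN between slivers of a slice
PASSIVE = IfaceType("passive", layer=2, cn_address=False, filtered=True)     # capture only (planned)


def pick_iface(need: str) -> IfaceType:
    """Map an experiment's connectivity need to an interface type."""
    return {
        "client": PRIVATE,    # mimic a home PC
        "service": PUBLIC,    # implement a network service
        "routing": ISOLATED,  # experiment with routing algorithms
        "capture": PASSIVE,   # analyze traffic (future support)
    }[need]
```

For example, `pick_iface("service")` returns the public type, which is the only one that gives the sliver its own CN address.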
*##* Moreover, the server and nodes will soon publish management information -through an API that would be used to study the testbed itself, or to implement +through an API that can be used to study the testbed itself, or to implement external services like node monitoring and selection. ** An example experiment -# Event diagram, hover over components explained. -To show how the testbed works: two slivers which ping each other. +to show how the testbed works. We'll create two slivers which ping each +other. *##* 1. The researcher first contacts the server and registers a slice description which specifies a template for slivers (e.g. Debian Squeeze) and includes - data and programs to setup slivers and run experiments. + data and programs to set up slivers and run experiments. *##* 2. This and all subsequent changes performed by the researcher are stored in - the registry, which holds the config of all components in the testbed. + the registry, which holds the config of all components in the testbed. *##* 3. The researcher chooses two nodes and registers sliver descriptions for them in the previous slice. Each one includes a public interface to the CN. - The researcher tells the server to instantiate the slice. + Then the researcher tells the server to instantiate the slice. *##* 4. Each of the previous nodes gets a sliver description for it. If enough resources are available, a container is created by applying the sliver - configuration over the selected template. + configuration over the selected template. *##* 5. Once the researcher knows that slivers have been instantiated, the server - can be commanded to activate the slice. -6. When nodes get instructions to activate slivers they start the containers. -7. Containers execute the setup and run programs provided by the researcher. + can be commanded to activate the slice. *##* +6. When nodes get instructions to activate slivers, they start containers. *##* +7. 
Containers execute the setup & run programs provided by the researcher. *##* 8. Researchers interact directly with containers if needed (e.g. via SSH) and - collect results from them. + collect results from them. *##* 9. When finished, the researcher tells the server to deactivate and - deinstantiate the slice. -10. Nodes get the instructions and they stop and remove containers. + deinstantiate the slice. *##* +10. Nodes get the instructions and they stop and remove containers. *##* + +This is a summary of all the previous steps. *##* * Cooperation between community networks and Community-Lab -# CN diagram (buildings and cloud). can take different forms. Given a typical CN like this, with most nodes -linked using cheap and ubiquitous WiFi technology: +linked using cheap and ubiquitous WiFi technology: *##* -# CN diagram extended with CONFINE devices (hover over interesting part). - CN members can provide an existing CD and let CONFINE connect an RD to it via Ethernet. Experiments are restricted to the application layer unless the - node owner allows the RD to include a direct interface (i.e. antenna). + node owner allows the RD to include a direct interface (i.e. an antenna). *##* - CN members can provide a location and let CONFINE set up a complete node - there (CD and RD). In this way CONFINE helps extend the CN. + there (CD and RD). In this way CONFINE helps extend the CN. *##* - CONFINE can also extend the CN by setting up a physically separated cloud of - connected nodes at a site controlled by a partner (e.g. campus). All kinds - of experiments are possible using direct interfaces. Users should be warned - about the research nature of the network. + connected nodes. Experiments in all layers are possible in this setup, but + users should be warned about the research nature of the network. *##* + +These are only a few forms of cooperation, but more can be envisioned. *##* * Participate! 
We introduced you to Community-Lab, a new testbed being developed by the CONFINE project to support research that can help CNs become a key part of the Internet in the near future. + +More information: http://community-lab.net/, http://confine-project.eu/ Community networks and researchers: We look forward to your participation! -- More information: http://community-lab.net/, http://confine-project.eu/ -- Questions? + +Questions? Thanks. # Commenters: Less attention on architecture, more on global working of # testbed. # Ivan: Describe simple experiment, show diagram (UML-like timing diagram? Index: slides.svg ================================================================== --- slides.svg +++ slides.svg cannot compute difference between binary files
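The slice lifecycle that the example experiment in the script walks through (register, instantiate, activate, deactivate, deinstantiate) can be sketched as a small state machine. This is only an illustration of the ordering of those steps; the class, state and command names are invented for the sketch and are not the CONFINE server's API.

```python
# Toy state machine for the slice lifecycle in the example experiment.
# States and transitions follow the script's steps; the names are
# invented for illustration and are NOT the CONFINE server API.

class SliceLifecycle:
    # command -> (state required before the command, state after it)
    TRANSITIONS = {
        "instantiate":   ("registered",   "instantiated"),  # nodes create containers
        "activate":      ("instantiated", "active"),        # nodes start containers
        "deactivate":    ("active",       "instantiated"),  # nodes stop containers
        "deinstantiate": ("instantiated", "registered"),    # nodes remove containers
    }

    def __init__(self) -> None:
        # A freshly registered slice exists only as a description in the registry.
        self.state = "registered"

    def command(self, cmd: str) -> str:
        required, after = self.TRANSITIONS[cmd]
        if self.state != required:
            raise ValueError(f"cannot {cmd} a slice in state {self.state!r}")
        self.state = after
        return self.state
```

Running the four commands in the script's order moves the slice through `instantiated` and `active` and back to `registered`; issuing a command out of order (e.g. activating twice) raises an error, mirroring how the server only acts on slices in the right state.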