Overview
Comment:      Minor corrections after reading.
Downloads:    Tarball | ZIP archive | SQL archive
Timelines:    family | ancestors | descendants | both | trunk
Files:        files | file ages | folders
SHA1:         a0f6c29ca91dbac00580359ec56e388d
User & Date:  ivan on 2012-09-18 09:01:04
Other Links:  manifest | tags
Context
2012-09-18
  09:43  Base file for Sozi presentation.               check-in: fe496df47b  user: ivan  tags: trunk
  09:01  Minor corrections after reading.               check-in: a0f6c29ca9  user: ivan  tags: trunk
2012-09-17
  21:28  Added diagram with nodes, slices and slivers.  check-in: 81d6ba9caa  user: ivan  tags: trunk
Changes
Modified script.txt from [f759c594c4] to [d908ae5804].
researchers to experiment on real community networks.

** Testbeds

- Environments built with real hardware for realistic experimental research
  on network technologies (instead of simulations).
- Wireless: Berlin RoofNet, MIT Roofnet (outdoor); IBBT's w-iLab.t, CERTH's
  NITOS, WINLAB's ORBIT (indoor).  Limited local scale, controlled
  environment, no resource sharing between experiments.
- Internet: PlanetLab, planet-scale testbed with resource sharing on nodes.
  Main inspiration for Community-Lab.

** Community-Lab: a testbed for community networks

- The testbed developed by CONFINE.
- Integrates and extends three community networks: guifi.net, FunkFeuer, AWMN.

# Node maps here for CNs with captures from node DBs.

................................................................................

- Also nodes in participating research centres.
- Linked together over the FEDERICA backbone.
- All its software and documentation is released under Free licenses, so
  anyone can set up a CONFINE testbed like Community-Lab.

* Challenges and requirements

** Simple management vs. Distributed node ownership

- In contrast with e.g. indoor testbeds that belong wholly to the same entity.

** Features vs. Lightweight, low cost (free & open)

- Devices ranging from PCs to embedded boards.
- Need a light system able to run on very different devices.

** Familiarity & flexibility vs. System stability

- Familiar Linux env with root access for researchers.
- Keep env isolation (nodes are shared by experiments).
- Keep node stability (to avoid in-place maintenance; some node locations are
  difficult to reach).

# Frozen tower.

................................................................................

# Move over overlay diagram, less overlay connections, plus overlay network.

- A testbed consists of a set of nodes managed by the same server.
  - Server managed by testbed admins.
  - Network and node managed by node admins (usually node owners).
  - Node admins must adhere to a set of conditions.
  - Solves the management vs. ownership problem.
- All components in the testbed are reachable via the management network
  (tinc mesh VPN).
  - Avoids problems with firewalls and private networks in nodes.
  - Avoids address scarcity and incompatibility (well-structured IPv6 schema).
  - Public CN addresses are still used for experiments when available.
- Gateways connect disjoint parts of the management network.
  - Allows a testbed spanning different CNs and islands through external
    means (e.g. FEDERICA, the Internet).
  - A gateway reachable from the Internet can expose the management network
    (if using public addresses).
- A researcher runs the experiments of a slice in slivers, each running in a
  different node…

** Nodes, slices and slivers

- …a model inspired by PlanetLab.
- The slice (a management concept) groups a set of related slivers.
- A sliver holds the resources (CPU, memory, disk, bandwidth, interfaces…)
  allocated for a slice in a given node.

# Diagram: Slices and slivers, two or three nodes with a few slivers on them,
# each with a color identifying it with a slice.
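How these concepts fit together can be sketched in a few lines of code. The
Python dataclasses below are only an illustration (the class and field names
are assumptions made for this example, not the actual registry schema): a
slice groups slivers, and each sliver ties a resource allocation to one node.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """A testbed node, managed by its node admin."""
        node_id: str
        mgmt_ipv6: str  # illustrative address on the tinc management network

    @dataclass
    class Sliver:
        """Resources allocated to one slice on one node (one LXC container)."""
        node: Node
        cpu_share: float
        memory_mb: int
        disk_mb: int
        interfaces: list = field(default_factory=list)  # e.g. ["private"]

    @dataclass
    class Slice:
        """Management concept: groups the related slivers of one experiment."""
        name: str
        template: str  # e.g. "debian-squeeze-i386"
        slivers: list = field(default_factory=list)

        def add_sliver(self, node, **resources):
            sliver = Sliver(node=node, **resources)
            self.slivers.append(sliver)
            return sliver

    # One slice spanning two nodes, one sliver per node (hypothetical names).
    demo = Slice(name="ping-test", template="debian-squeeze-i386")
    demo.add_sliver(Node("node-14", "fdf5::e"), cpu_share=0.1, memory_mb=64, disk_mb=512)
    demo.add_sliver(Node("node-27", "fdf5::1b"), cpu_share=0.1, memory_mb=64, disk_mb=512)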
** Node architecture

Mostly autonomous, no long-running connections to the server, asynchronous
operation: robust under link instability.

# Node simplified diagram, hover to interesting parts.

- The community device
  - A completely normal CN network device, possibly already existing.
  - Routes traffic between the CN and devices in the node's local network
    (wired, runs no routing protocol).
- The research device
  - More powerful than the CD, it runs OpenWrt firmware customized by CONFINE.
  - Experiments run here.
- The separation between CD and RD allows:
  - Minimum CONFINE-specific tampering with CN hardware.
  - Minimum CN-specific configuration for RDs.
  - Greater compatibility and stability for the CN.
- Slivers are implemented as Linux containers.
  - LXC: lightweight virtualization (in the Linux mainstream).
  - Provides a familiar env for researchers.
  - Easier resource limitation, resource isolation and node stability.
- Control software
  - Manages containers and resource isolation through LXC tools.
  - Ensures network isolation and stability through traffic control (QoS)
    and filtering (from L2 upwards).
  - Protects users' privacy through traffic filtering and anonymization.
- Optional, controlled direct interfaces for experiments to interact directly
  with the CN (avoiding the CD).

................................................................................

- Home computer behind a NAT router: a private interface with traffic
  forwarded to the CN using NAT.  Outgoing traffic is filtered to ensure
  network stability.
- Publicly open service: a public interface (with a public CN address) with
  traffic routed directly to the CN.  Outgoing traffic is filtered to ensure
  network stability.
- Traffic capture: a passive interface using a direct interface for capture.
  Incoming traffic is filtered and anonymized by the control software.
- Routing: an isolated interface using a VLAN on top of a direct interface.
  It can only reach other slivers of the same slice with isolated interfaces
  on the same link.  All traffic is allowed.
- Low-level testing: the sliver is given raw access to the interface.  For
  privacy, isolation and stability reasons this should only be allowed on
  exceptional occasions.

................................................................................

1. The researcher first contacts the server and creates a slice description
   which specifies a template for slivers (e.g. Debian Squeeze i386).
   Experiment data is attached, including a program to set up the experiment
   and another one to run it.
2. The server updates the registry, which holds all definitions of the
   testbed, nodes, users, slices, slivers, etc.
3. The researcher chooses a couple of nodes and creates sliver descriptions
   for them belonging to the previous slice.  Both sliver descriptions
   include a public interface to the CN and user-defined properties for
   telling apart the source sliver from the target one.  Sliver descriptions
   go to the registry.
4. Each of the previous nodes gets the sliver description meant for it.  If
   enough resources are available, a container is created with the desired
   configuration.
5. Once the researcher knows that the slivers have been instantiated, the
   server can be commanded to activate the slice.  The server updates the
   registry.
6. When nodes get instructions to activate slivers, they start the containers.
7. Containers run the experiment setup program and then the run program.
   The programs query sliver properties to decide their behaviour.
8. Researchers interact directly with containers if needed (e.g. via SSH)
   and collect results from them.
9. When finished, the researcher tells the server to deactivate and
   deinstantiate the slice.
10. Nodes get the instructions and they stop and remove the containers.

At all times there can be external services interacting with researchers,
the server, nodes and slivers, e.g. to help choose nodes, monitor nodes or
collect results.
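From the researcher's side, those ten steps boil down to a handful of updates
to the registry, with nodes reacting asynchronously.  The sketch below walks
through them with a hypothetical client; RegistryClient and its methods are
illustrative stand-ins, not the actual Community-Lab API.

    # Hypothetical walk-through of the workflow above; the class, methods and
    # node names are assumptions made for this sketch.

    class RegistryClient:
        """Researcher-side view of the registry kept by the testbed server."""

        def __init__(self, server_url):
            self.server_url = server_url
            self._slices = {}

        def create_slice(self, name, template, exp_data):
            # Steps 1-2: the slice description (sliver template plus the
            # experiment's setup and run programs) is stored in the registry.
            self._slices[name] = {"template": template, "exp_data": exp_data,
                                  "slivers": [], "state": "registered"}

        def create_sliver(self, slice_name, node_id, interfaces, properties):
            # Step 3: one sliver description per chosen node, with
            # user-defined properties (e.g. source vs. target role).
            self._slices[slice_name]["slivers"].append(
                {"node": node_id, "interfaces": interfaces,
                 "properties": properties})

        def set_slice_state(self, slice_name, state):
            # Steps 5 and 9: the server only records the requested state;
            # nodes poll the registry and act on it asynchronously.
            self._slices[slice_name]["state"] = state

    registry = RegistryClient("https://server.example.org/api")

    # 1-2. Describe the slice: sliver template plus setup/run programs.
    registry.create_slice("ping-test", template="debian-squeeze-i386",
                          exp_data={"setup": "setup.sh", "run": "run.sh"})

    # 3-4. One sliver per chosen node; each node creates a container for it
    #      if enough resources are available.
    registry.create_sliver("ping-test", "node-14",
                           interfaces=["public"], properties={"role": "source"})
    registry.create_sliver("ping-test", "node-27",
                           interfaces=["public"], properties={"role": "target"})

    # 5-7. Activate: nodes start the containers, which run setup.sh then run.sh.
    registry.set_slice_state("ping-test", "activated")

    # 8.   The researcher reaches the containers directly (e.g. via SSH) and
    #      collects results from them.

    # 9-10. Tear down: nodes stop and remove the containers.
    registry.set_slice_state("ping-test", "deactivated")
    registry.set_slice_state("ping-test", "deinstantiated")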
* Community-Lab integration in existing community networks

# CN diagram (buildings and cloud).

A typical CN looks like this, with most nodes linked using cheap and
ubiquitous WiFi technology (and, less frequently, Ethernet, optical fiber or
others).  The CONFINE project follows three strategies, taking into account
that CNs are production networks with distributed ownership:

# CN diagram extended with CONFINE devices (hover over interesting part).

- Take an existing node owned by CN members: CONFINE provides an RD and
  connects it via Ethernet to the CD.  Experiments are restricted to the
  application layer unless the node owner allows the RD to include a direct
  interface (i.e. an antenna).
- Extend the CN with complete nodes: CONFINE provides both the CD and the RD
  and uses a CN member's location.  All but low-level experiments are
  possible using direct interfaces.
- Set up a physically separated cloud of nodes: CONFINE extends the CN with a
  full installation of connected nodes at a site controlled by a partner
  (e.g. a campus).  All kinds of experiments are possible using direct
  interfaces.  Users are warned about the experimental nature of the network.

* Recap

- Community networks are an emerging field that provides citizens with
  connectivity in a sustainable and distributed manner, in which the owners
  of the network are the users themselves.
- Research in this field is necessary to support CNs' growth while improving
  their operation and quality.
- Experimental tools are still lacking because of the peculiarities of CNs.
- The CONFINE project aims to fill this gap by deploying Community-Lab, a
  testbed for existing community networks.

# Commenters: Less attention on architecture, more on global working of
# testbed.
# Ivan: Describe simple experiment, show diagram (UML-like timing diagram?
# small animation?) showing the steps from slice creation to instantiation,
# activation, deactivation and deletion for that example experiment.
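For the simple-experiment walkthrough suggested in the last comment, the
slice lifecycle can be condensed into a small state table.  The state names
below merely mirror the steps mentioned there (creation, instantiation,
activation, deactivation, deletion); this is an illustrative sketch, not the
registry's actual state model.

    # Allowed lifecycle transitions for the example experiment's slice
    # (illustrative only).
    TRANSITIONS = {
        "registered":   {"instantiated"},  # sliver descriptions accepted by nodes
        "instantiated": {"activated"},     # containers created, not yet running
        "activated":    {"deactivated"},   # containers running the setup/run programs
        "deactivated":  {"deleted"},       # containers stopped
        "deleted":      set(),             # containers removed from the nodes
    }

    def can_move(current, target):
        """Return True if a slice may move directly between the two states."""
        return target in TRANSITIONS.get(current, set())

    assert can_move("registered", "instantiated")
    assert not can_move("registered", "activated")  # must be instantiated first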