Differences From Artifact [f4b6ad912c]:
43 43
44 44 * Challenges and requirements
45 45 ** Simple management vs. Distributed node ownership
46 46 - In contrast with testbeds (esp. indoor ones) that belong wholly to the same
47 47 entity.
48 48
49 49 ** Features vs. Lightweight, low cost (free & open)
50 -- Devices ranging from PCs to embedded boards located on roofs (or worse).
51 -# Node on roof, frozen tower.
50 +- Devices ranging from PCs to embedded boards.
52 51 - Need a lightweight system able to run on a variety of devices.
53 52
54 53 ** Familiarity & flexibility vs. System stability
55 54 - Familiar Linux env with root access for researchers.
56 55 - Keep env isolation (nodes are shared by experiments).
57 56 - Keep node stability (to avoid in-place maintenance; some node locations are
58 57   difficult to reach).
58 +# Frozen tower.
59 59
60 60 ** Flexibility vs. Network stability
61 61 - Network experiments running on nodes in a production network.
62 62 - Allow interaction with the CN at the lowest level possible without disrupting
63 63   or overusing it.
64 64
65 65 ** Traffic collection vs. Privacy of CN users
................................................................................
75 75
76 76 ** Heterogeneity vs. Compatibility
77 77 - Lots of different devices (disparate connectivity and software openness).
78 78 - Lots of different link technologies (wireless, wired, fiber).
79 79
80 80 * Community-Lab testbed architecture
81 81 ** Overall architecture
82 -This architecture applies to all testbeds using the CONFINE software. All
83 -CONFINE software and documentation is released under Free licenses. Anyone
82 +This architecture applies to all testbeds using the CONFINE software. Since
83 +all CONFINE software and documentation is released under Free licenses, anyone
84 84 can set up a CONFINE testbed.
85 85 # Move over overlay diagram less overlay connections plus overlay network.
86 86 - A testbed consists of a set of nodes managed by the same server.
87 87 - Server managed by testbed admins.
88 88 - Network and node managed by node admins (usually node owners).
89 89 - Node admins must adhere to a set of conditions.
90 - - Problematic nodes are not eligible for experimentation.
91 90   - Solves the management vs. ownership problem.
92 91 - All components in testbed reachable via management network (tinc mesh VPN).
93 - - Server and nodes offer APIs on that network.
92 + - Avoids problems with firewalls and private networks.
94 93   - Avoids address scarcity and incompatibility (well-structured IPv6 schema; see the sketch after this list).
95 - - Avoids problems with firewalls and private networks.
96 - - Thus avoids most CONFINE-specific network configuration of the node (CD).
97 94 - Public addresses still used for experiments when available.
98 - - Odd hosts can also connect to the management network.
99 95 - Gateways connect disjoint parts of the management network.
100 96 - Allows a testbed spanning different CNs and islands through external means
101 97 (e.g. FEDERICA, the Internet).
102 98 - A gateway reachable from the Internet can expose the management network
103 99 (if using public addresses).
104 100 - A researcher runs the experiments of a slice in slivers, each running in a
105 101   different node…
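
To make the "well-structured IPv6 schema" idea concrete, here is a minimal
sketch of how management addresses could be derived for nodes and slivers.
The documentation prefix, the per-node /64 layout and the helper names are
assumptions for illustration, not the actual CONFINE addressing plan.

#+BEGIN_SRC python
import ipaddress

# Hypothetical management prefix for one testbed (documentation prefix, example only).
TESTBED_PREFIX = ipaddress.IPv6Network("2001:db8:cafe::/48")

def node_network(node_id):
    """Give every node its own /64 inside the testbed prefix (assumed layout)."""
    base = int(TESTBED_PREFIX.network_address)
    return ipaddress.IPv6Network((base + (node_id << 64), 64))

def sliver_address(node_id, sliver_id):
    """Derive a sliver's management address inside its node's /64 (assumed layout)."""
    return node_network(node_id)[sliver_id + 1]  # index 0 kept free for the RD itself

print(node_network(5))        # 2001:db8:cafe:5::/64
print(sliver_address(5, 2))   # 2001:db8:cafe:5::3
#+END_SRC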
................................................................................
116 112 Mostly autonomous, no long-running connections to server, asynchronous
117 113 operation: robust under link instability.
118 114 # Node simplified diagram, hover over interesting parts.
119 115 - The community device
120 116 - Completely normal CN network device, possibly already existing.
121 117 - Routes traffic between the CN and devices in the node's local network
122 118 (wired, runs no routing protocol).
123 - - CD/RD separation allows minimum CONFINE-specific configuration for RD, but
124 - adds one hop for experiments to CN.
125 119 - The research device
126 120 - More powerful than CD, it runs OpenWrt (Attitude Adjustment) firmware
127 121 customized by CONFINE.
122 + - Experiments run here. The separation between CD and RD allows:
123 +    - Minimum CONFINE-specific tampering with CN hardware.
124 + - Minimum CN-specific configuration for RDs.
125 + - Greater compatibility and stability for the CN.
128 126 - Slivers are implemented as Linux containers.
129 127     - LXC: lightweight virtualization (in mainline Linux).
130 - - Resource limitation.
131 - - Allows a familiar env with resource isolation and keeping node
132 - stability.
133 - - Root access to slivers always available to researchers via SSH to RD.
128 + - Easier resource limitation, resource isolation and node stability.
129 + - Provides a familiar env for researchers.
134 130 - Control software
135 - - Manages containers and resource isolation using LXC.
131 +      - Manages containers and resource isolation through LXC tools (sketched below).
136 132 - Ensures network isolation and stability through traffic control (QoS)
137 133 and filtering (from L2 upwards).
138 134      - Protects users' privacy through traffic filtering and anonymization.
139 - - Provides various services to slivers through internal bridge.
140 135 - Optional, controlled direct interfaces for experiments to interact
141 - directly with the CN.
142 - - CD/RD separation allows greater compatibility and stability, as well as
143 - minimum CN-specific configuration, avoids managing CN hardware.
136 + directly with the CN (avoiding the CD).
144 137 - The recovery device can force a hardware reboot of the RD from several
145 138 triggers and help with upgrade and recovery.
146 139
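To give a rough idea of what the control software does when it deploys a
sliver, here is a minimal sketch that creates an LXC container and rate-limits
its traffic with tc. The container name, interface name and rate are made-up
example values; the real CONFINE node software is considerably more involved.

#+BEGIN_SRC python
import subprocess

def run(cmd):
    """Run a command on the RD, raising if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def deploy_sliver(name="slice01-sliver01", template="debian", veth="veth-sl01",
                  rate="2mbit"):
    # Create and start the sliver as an LXC container (names are examples).
    run(["lxc-create", "-n", name, "-t", template])
    run(["lxc-start", "-n", name, "-d"])
    # Limit egress traffic on the sliver's host-side interface so a single
    # experiment cannot overuse the CN (rate and burst are example values).
    run(["tc", "qdisc", "add", "dev", veth, "root", "tbf",
         "rate", rate, "burst", "32kbit", "latency", "400ms"])

if __name__ == "__main__":
    deploy_sliver()
#+END_SRC
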
147 -** Alternative node arrangements
148 -Compatible with the current architecture.
149 -- RD hosts CD as a community container: low cost (one device), less stable.
150 - Not yet implemented.
151 -- CD hosts RD as a KVM: for a powerful node such as a PC, in the future with
152 - radios linked over Ethernet and DLEP.
153 -
154 140 ** Node and sliver connectivity
155 141 # Node simplified diagram, hover over interesting parts.
156 142 Slivers can be configured with different types of network interfaces depending
157 143 on what connectivity researchers need for experiments (a sketch follows the list):
158 -- Home computer behind a NAT router: a private interface placed into the
159 - internal bridge, where traffic is forwarded using NAT to the CN. Outgoing
160 - traffic is filtered to ensure network stability.
161 -- Publicly open service: a public interface (with a public CN address) placed
162 - into the local bridge, with traffic routed directly to the CN. Outgoing
163 - traffic is filtered to ensure network stability.
164 -- Traffic capture: a passive interface placed on the bridge of the direct
165 - interface used for capture. Incoming traffic is filtered and anonimized by
166 - control software.
144 +- Home computer behind a NAT router: a private interface with traffic
145 + forwarded using NAT to the CN. Outgoing traffic is filtered to ensure
146 + network stability.
147 +- Publicly open service: a public interface (with a public CN address) with
148 + traffic routed directly to the CN. Outgoing traffic is filtered to ensure
149 + network stability.
150 +- Traffic capture: a passive interface using a direct interface for capture.
151 +   Incoming traffic is filtered and anonymized by control software.
167 152 - Routing: an isolated interface using a VLAN on top of a direct interface.
168 - Other slivers with isolated interfaces must be within link layer reach. All
169 - traffic is allowed.
153 +   It can only reach other slivers of the same slice with isolated interfaces
154 + on the same link. All traffic is allowed.
170 155 - Low-level testing: the sliver is given raw access to the interface. For
171 156   privacy, isolation and stability reasons, this should only be allowed in
172 157   exceptional cases.
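
A rough sketch of how these interface types could be declared in a sliver
description follows; the class, field names and validation rules are invented
for illustration and do not reproduce the real registry schema.

#+BEGIN_SRC python
from dataclasses import dataclass
from typing import Optional

# Interface types named after the list above; the strings are assumptions.
VALID_TYPES = {"private", "public", "passive", "isolated", "raw"}

@dataclass
class SliverInterface:
    type: str                      # one of VALID_TYPES
    parent: Optional[str] = None   # direct interface for passive/isolated/raw use
    vlan: Optional[int] = None     # VLAN tag for isolated interfaces

    def __post_init__(self):
        if self.type not in VALID_TYPES:
            raise ValueError(f"unknown interface type: {self.type}")
        if self.type in {"passive", "isolated", "raw"} and self.parent is None:
            raise ValueError(f"{self.type} interfaces need a direct interface")
        if self.type == "isolated" and self.vlan is None:
            raise ValueError("isolated interfaces need a VLAN tag")

# Example: a public interface for an open service plus an isolated interface
# for routing experiments on top of a (hypothetical) direct interface wlan0.
ifaces = [
    SliverInterface(type="public"),
    SliverInterface(type="isolated", parent="wlan0", vlan=101),
]
#+END_SRC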
173 158
174 159 * How the testbed works
175 160 # Event diagram, hover over components explained.
176 161 An example experiment: two slivers, one of them (source sliver) pings the
177 162 other one (target sliver).
178 163
179 164 1. The researcher first contacts the server and creates a slice description
180 165 which specifies a template for slivers (e.g. Debian Squeeze i386).
181 - Experiment data is attached including a program to setup the experiment
182 - (e.g. a script that runs =apt-get install iputils-ping=) and another one to
183 - run it.
166 +   Experiment data is attached, including a program to set up the experiment and
167 + another one to run it.
184 168 2. The server updates the registry, which holds all definitions of the testbed,
185 169 nodes, users, slices, slivers, etc.
186 170 3. The researcher chooses a couple of nodes and creates sliver descriptions
187 171 for them in the previous slice. Both sliver descriptions include a public
188 172    interface to the CN and user-defined properties for telling the
189 173 source sliver from the target one. Sliver descriptions go to the registry.
190 174 4. Each of the previous nodes gets a sliver description for it. If enough
................................................................................
204 188 At all times there can be external services interacting with researchers,
205 189 server, nodes and slivers, e.g. to help choose nodes, monitor them or
206 190 collect results.
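
The workflow above can be pictured as a few registry objects created through
the server's API over the management network. The sketch below shows what a
researcher-side client might send; the endpoint paths, field names and the
JSON-over-HTTP encoding are assumptions for illustration, not the actual
Community-Lab API.

#+BEGIN_SRC python
import json
import urllib.request

# Hypothetical server API endpoint on the management network (example address).
REGISTRY = "http://[2001:db8:cafe::1]/api"

slice_desc = {
    "name": "ping-experiment",
    "template": "debian-squeeze-i386",                   # sliver template
    "exp_data": {"setup": "setup.sh", "run": "run.sh"},   # attached programs
}

slivers = [
    {"node": node, "interfaces": [{"type": "public"}],   # public CN interface
     "properties": {"role": role}}                       # tell source from target
    for node, role in [(17, "source"), (42, "target")]   # example node ids
]

def post(path, obj):
    """POST a JSON object to the (assumed) registry API and return the reply."""
    req = urllib.request.Request(REGISTRY + path,
                                 data=json.dumps(obj).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    created = post("/slices", slice_desc)                # assumes reply carries an id
    for sliver in slivers:
        post("/slices/%s/slivers" % created["id"], sliver)
#+END_SRC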
207 191
208 192 * Community-Lab integration in existing community networks
209 193 # CN diagram (buildings and cloud).
210 194 A typical CN looks like this, with most nodes linked using WiFi technology
211 -(cheap and ubiquitous), but sometimes others as optical fiber. Remember that
212 -CNs are production networks with distributed ownership. Strategies:
195 +(cheap and ubiquitous), but sometimes other technologies such as optical fiber.
196 +The CONFINE project follows three strategies, taking into account that CNs are
197 +production networks with distributed ownership:
213 198
214 199 # CN diagram extended with CONFINE devices (hover over interesting part).
215 200 - Take an existing node owned by CN members: CONFINE provides an RD and
216 201 connects it via Ethernet. Experiments are restricted to the application
217 202 layer unless the node owner allows the RD to include a direct interface
218 203   (i.e. an antenna).
219 204 - Extend the CN with complete nodes: CONFINE provides both the CD and the RD
220 - and uses a CN member's location. All but low-level experiments are
221 - possible with direct interfaces.
205 + and uses a CN member's location. All but low-level experiments are possible
206 + using direct interfaces.
222 207 - Set up a physically separated cloud of nodes: CONFINE extends the CN with a
223 208 full installation of connected nodes at a site controlled by a partner
224 - (e.g. campus). All kinds of experiments are possible with direct
209 + (e.g. campus). All kinds of experiments are possible using direct
225 210 interfaces. Users are warned about the experimental nature of the network.
226 211
227 212 * Recap
228 213
229 214 - Community networks are an emerging approach to provide citizens with
230 215   connectivity in a sustainable and distributed manner, in which the owners of
231 216   the networks are the users themselves.