* Challenges and requirements
** Simple management vs. Distributed node ownership
- In contrast with testbeds (especially indoor ones) that belong wholly to a
  single entity.
** Features vs. Lightweight, low cost (free & open)
- Devices ranging from PCs to embedded boards located on roofs (or worse).
# Node on roof, frozen tower.
- Need a lightweight system able to run on a variety of devices.
** Familiarity & flexibility vs. System stability
- Familiar Linux env with root access for researchers.
- Keep env isolation (nodes are shared by experiments).
- Keep node stability (to avoid in-place maintenance; some node locations are
  difficult to reach).
** Flexibility vs. Network stability
- Network experiments running on nodes in a production network.
- Allow interaction with the CN at the lowest level possible without disrupting
  or overusing it.
** Traffic collection vs. Privacy of CN users
................................................................................
** Heterogeneity vs. Compatibility
- Lots of different devices (disparate connectivity and software openness).
- Lots of different link technologies (wireless, wired, fiber).
* Community-Lab testbed architecture
** Overall architecture
This architecture applies to all testbeds using the CONFINE software. Since
all CONFINE software and documentation is released under Free licenses, anyone
can set up a CONFINE testbed.
# Move over overlay diagram less overlay connections plus overlay network.
- A testbed consists of a set of nodes managed by the same server.
- Server managed by testbed admins.
- Network and node managed by node admins (usually node owners).
- Node admins must adhere to a set of conditions.
- Problematic nodes are not eligible for experimentation.
- Solves the management vs. ownership problem.
- All components in a testbed are reachable via the management network (a tinc
  mesh VPN); a configuration sketch appears further below.
- Server and nodes offer APIs on that network.
- Avoids address scarcity and incompatibility (well-structured IPv6 addressing
  scheme).
- Avoids problems with firewalls and private networks.
- Thus avoids most CONFINE-specific network configuration on the community
  device (CD).
- Public addresses are still used for experiments when available.
- Other hosts can also connect to the management network.
- Gateways connect disjoint parts of the management network.
- Allows a testbed spanning different CNs and islands through external means
(e.g. FEDERICA, the Internet).
- A gateway reachable from the Internet can expose the management network
(if using public addresses).
- A researcher runs the experiments of a slice in slivers each running in a
different node…
................................................................................
Mostly autonomous, no long-running connections to server, asynchronous
operation: robust under link instability.
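As a rough illustration of the management network described above, this is how
a node might join the tinc mesh VPN. The net name =mgmt=, node name, key
handling and IPv6 prefix are hypothetical assumptions for the sketch, not the
actual Community-Lab values.
#+BEGIN_SRC sh
# Hypothetical sketch: attach a node to the tinc-based management network.
mkdir -p /etc/tinc/mgmt/hosts
cat > /etc/tinc/mgmt/tinc.conf <<'EOF'
# Connect to the testbed server; a hosts/server file with the server's
# Address and public key is also required.
Name = node0042
ConnectTo = server
# Switch mode keeps this sketch simple; the real deployment may differ.
Mode = switch
EOF
# tinc-up runs when the VPN interface comes up; it assigns the node's
# management address (the ULA prefix below is invented).
cat > /etc/tinc/mgmt/tinc-up <<'EOF'
#!/bin/sh
ip -6 addr add fd5f:eee5:e6ad::42/48 dev "$INTERFACE"
ip link set "$INTERFACE" up
EOF
chmod +x /etc/tinc/mgmt/tinc-up
tincd -n mgmt -K          # generate this node's RSA key pair
tincd -n mgmt             # start the daemon for the "mgmt" network
#+END_SRC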
# Node simplified diagram, hover to interesting parts.
- The community device
- Completely normal CN network device, possibly already existing.
- Routes traffic between the CN and devices in the node's local network
(wired, runs no routing protocol).
- CD/RD separation allows minimum CONFINE-specific configuration of the CD and
  minimum CN-specific configuration of the RD, but adds one hop for experiments
  to reach the CN.
- The research device
- Experiments run here. More powerful than the CD, it runs an OpenWrt
  (Attitude Adjustment) firmware customized by CONFINE.
- Slivers are implemented as Linux containers.
- LXC: lightweight virtualization (based on features in mainline Linux).
- Resource limitation.
- Allows a familiar env with resource isolation while keeping node
  stability.
- Root access to slivers always available to researchers via SSH to RD.
- Control software
- Manages containers and resource isolation using LXC.
- Ensures network isolation and stability through traffic control (QoS)
  and filtering (from L2 upwards); see the sketch after this list.
- Protects users' privacy through traffic filtering and anonymization.
- Provides various services to slivers through an internal bridge.
- Optional, controlled direct interfaces for experiments to interact
  directly with the CN (bypassing the CD).
- CD/RD separation allows greater compatibility and stability, as well as
  minimum CN-specific configuration, and avoids managing CN hardware.
- The recovery device can force a hardware reboot of the RD from several
triggers and help with upgrade and recovery.
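As a rough illustration of the network isolation and traffic control mentioned
above, the control software could apply per-sliver limits along these lines.
The interface name, rate and prefix are invented and do not reflect the actual
CONFINE rules.
#+BEGIN_SRC sh
# Hypothetical per-sliver limits (all names and values are illustrative).
# Cap the bandwidth a sliver may push towards the CN with a token bucket:
tc qdisc add dev veth-sl42 root tbf rate 2mbit burst 32kbit latency 400ms
# Drop traffic from the sliver towards addresses it must not reach,
# e.g. an internal CN prefix (the prefix is made up):
iptables -A FORWARD -i veth-sl42 -d 10.139.0.0/16 -j DROP
#+END_SRC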
** Alternative node arrangements
Compatible with the current architecture.
- RD hosts CD as a community container: low cost (one device), less stable.
Not yet implemented.
- CD hosts RD as a KVM virtual machine: for a powerful node such as a PC; in
  the future, with radios linked over Ethernet and DLEP.
** Node and sliver connectivity
# Node simplified diagram, hover to interesting parts.
Slivers can be configured with different types of network interfaces depending
on what connectivity researchers need for experiments (a wiring sketch follows
the list):
- Home computer behind a NAT router: a private interface placed into the
internal bridge, where traffic is forwarded using NAT to the CN. Outgoing
traffic is filtered to ensure network stability.
- Publicly open service: a public interface (with a public CN address) placed
into the local bridge, with traffic routed directly to the CN. Outgoing
traffic is filtered to ensure network stability.
- Traffic capture: a passive interface placed on the bridge of the direct
  interface used for capture. Incoming traffic is filtered and anonymized by
  the control software.
- Routing: an isolated interface using a VLAN on top of a direct interface.
  It can only reach other slivers of the same slice with isolated interfaces
  within link-layer reach. All traffic is allowed.
- Low-level testing: the sliver is given raw access to the interface. For
  privacy, isolation and stability reasons this should only be allowed on
  exceptional occasions.
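As a rough sketch of the isolated interface case above, one way such an
interface could be wired is a VLAN on top of a direct interface, bridged into
the sliver's container. Interface names, the VLAN ID and bridge names are
invented, and the LXC keys quoted in the comment are the old-style ones from
the Attitude Adjustment era.
#+BEGIN_SRC sh
# Hypothetical wiring for an "isolated" sliver interface.
ip link add link wlan1 name wlan1.123 type vlan id 123  # VLAN on the direct interface
ip link set wlan1.123 up
brctl addbr br-isol123                                  # bridge for this VLAN
brctl addif br-isol123 wlan1.123
ip link set br-isol123 up
# The sliver's container config would then attach a veth pair to that
# bridge, e.g.:
#   lxc.network.type = veth
#   lxc.network.link = br-isol123
#   lxc.network.name = isol0
#+END_SRC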
* How the testbed works
# Event diagram, hover over components explained.
An example experiment: two slivers, one of them (source sliver) pings the
other one (target sliver).
1. The researcher first contacts the server and creates a slice description
   which specifies a template for slivers (e.g. Debian Squeeze i386).
   Experiment data is attached, including a program to set up the experiment
   (e.g. a script that runs =apt-get install iputils-ping=) and another one
   to run it (both sketched at the end of this section).
2. The server updates the registry, which holds the definitions of the
   testbed: its nodes, users, slices, slivers, etc.
3. The researcher chooses a couple of nodes and creates sliver descriptions
for them in the previous slice. Both sliver descriptions include a public
interface to the CN and user-defined properties for telling apart the
source sliver from the target one. Sliver descriptions go to the registry.
4. Each of the previous nodes gets a sliver description for it. If enough
................................................................................
At all times there can be external services interacting with researchers, the
server, nodes and slivers, e.g. to help choose nodes, monitor them or collect
results.
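To make the example concrete, the two programs attached to the slice
description in step 1 could be as simple as the following sketches. How the
target's address reaches the source sliver (here an environment variable
=TARGET_ADDR=) is an assumption for illustration, not a defined mechanism.
#+BEGIN_SRC sh
# Setup script: prepare the sliver (from the example in step 1).
apt-get update
apt-get install -y iputils-ping
#+END_SRC
#+BEGIN_SRC sh
# Run script: the source sliver pings the target sliver's public CN
# address. TARGET_ADDR is assumed to be derived from the user-defined
# sliver properties; the variable name is hypothetical.
ping -c 100 "$TARGET_ADDR" > /tmp/ping.log
#+END_SRC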
* Community-Lab integration in existing community networks
# CN diagram (buildings and cloud).
A typical CN looks like this, with most nodes linked using WiFi technology
(cheap and ubiquitous), but sometimes using other technologies such as optical
fiber. CONFINE follows three integration strategies, taking into account that
CNs are production networks with distributed ownership:
# CN diagram extended with CONFINE devices (hover over interesting part).
- Take an existing node owned by CN members: CONFINE provides an RD and
  connects it via Ethernet. Experiments are restricted to the application
  layer unless the node owner allows the RD to include a direct interface
  (i.e. an antenna).
- Extend the CN with complete nodes: CONFINE provides both the CD and the RD
  and uses a CN member's location. All but low-level experiments are
  possible using direct interfaces.
- Set up a physically separated cloud of nodes: CONFINE extends the CN with a
  full installation of connected nodes at a site controlled by a partner
  (e.g. a campus). All kinds of experiments are possible using direct
  interfaces. Users are warned about the experimental nature of the network.
* Recap
- Community networks are an emerging way of providing citizens with
  connectivity in a sustainable and distributed manner, in which the owners of
  the networks are the users themselves.