Hi list!
Been lurking for almost a year on the battlemesh list and recently joined here as well.
I'm curious about how well batman-adv scales. Would a network of 10 000 nodes work well? What about 100 000 nodes or 1 000 000?
Thanks! fuumind
I think this could scale, but you would need sufficient power and Ethernet capacity at the nodes.
The biggest problem may be that these endpoints generate a lot of noise at layer 2 (discovery protocols and the like).
You can easily imagine what would happen if you had a layer-2 switch with 100 000 ports.
In practice you would probably not do this, but instead implement some sort of routing between several clouds of batman-adv networks.
On 26.04.2017 16:20, fuumind wrote:
> Hi list!
> Been lurking for almost a year on the battlemesh list and recently joined here as well.
> I'm curious about how well batman-adv scales. Would a network of 10 000 nodes work well? What about 100 000 nodes or 1 000 000?
> Thanks! fuumind
I can't see how it could scale. Each node needs a table of all other nodes to calculate the cost of each path. That is a LOT of data, and each node rebroadcasts its neighbors' information, so the more neighbors, the more rebroadcasting.
Maybe if you had a long string of nodes, with each node only seeing one or two others, it might scale up higher, but that's not the point of mesh networking.
It really means that even under ideal circumstances, overhead grows as a percentage with node count. A few nodes means low overhead, but 50 nodes might be as much as 50%, and so on.
This is a critical flaw in scaling batman-adv up: requiring each node to know all other nodes. If this rule could be changed so that batman-adv only kept track of gateway nodes and nodes on the path to a gateway, it could scale vastly further.
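A rough back-of-envelope model of that argument (all numbers here are assumptions for illustration, not batman-adv measurements): if every node must keep an entry for every other node, and every announcement a node makes is repeated by each node that hears it, both the per-node state and the network-wide chatter grow much faster than the node count.

```python
def mesh_state_and_rebroadcasts(nodes, avg_neighbors, entry_bytes=32):
    """Estimate per-node table size and network-wide rebroadcast events
    per announcement interval for a naive full-topology mesh.
    entry_bytes and avg_neighbors are illustrative guesses."""
    table_bytes = (nodes - 1) * entry_bytes      # one entry per other node
    # each node's announcement is repeated once by every neighbor that hears it
    rebroadcasts = nodes * avg_neighbors
    return table_bytes, rebroadcasts

for n in (100, 10_000, 100_000):
    table, rb = mesh_state_and_rebroadcasts(n, avg_neighbors=6)
    print(f"{n:>7} nodes: ~{table / 1024:.0f} KiB table, ~{rb:,} rebroadcasts/interval")
```

Even with these toy numbers, a 100 000-node mesh means megabytes of table state on every node and hundreds of thousands of repeated announcements per interval.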
On Wed, Apr 26, 2017 at 9:10 AM, jens jens@viisauksena.de wrote:
> i think this could scale , but you will need sufficient power / ethernet capacities at the nodes.
> and the biggest problem may be that these endpoints make much noise on the layer2 level. (disovery protocolls and stuff like this)
> you could easily imagine what would happen if you have a layer2 switch with 100000 ports.
> i think that most of the time you will not do this, and implement some sort of routing between some clouds of batman-adv networks.
-- make the world nicer, please use PGP encryption
Hi Jens!
By 'sufficient power', do you mean processing power to handle the overhead traffic? I'm imagining a network that primarily connects WLANs.
Clouds of batman-adv networks sound like a good idea. Any ideas on how to implement the interfaces between these clouds? How big should a cloud be allowed to grow before forming a new one?
fuumind
ons 2017-04-26 klockan 17:10 +0200 skrev jens:
> i think this could scale , but you will need sufficient power / ethernet capacities at the nodes.
> and the biggest problem may be that these endpoints make much noise on the layer2 level. (disovery protocolls and stuff like this)
> you could easily imagine what would happen if you have a layer2 switch with 100000 ports.
> i think that most of the time you will not do this, and implement some sort of routing between some clouds of batman-adv networks.
I don't think CPU is the issue; it's WLAN overhead. Too many nodes means too much overhead traffic.
The web tells me there is an upper limit of about 300 nodes for a 'basic' cluster, and with some multicast filtering you can hit maybe 1500. OGMs are eventually going to saturate the network, though.
That makes me think you should be looking at small clouds and some other routing protocol to connect those clusters. Maybe an OSPF ring linking up the batman-adv clouds.
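To get a feel for why OGMs eventually saturate a cluster, here is a hypothetical airtime estimate. The OGM size, interval, neighborhood size, and link rate are all assumed values for illustration, not protocol constants, and real batman-adv aggregates OGMs, which helps considerably:

```python
def ogm_airtime_fraction(total_nodes, domain_nodes=20, ogm_bytes=100,
                         interval_s=1.0, link_bps=6_000_000):
    """Toy model: airtime share of one wireless collision domain if each
    of its nodes forwards every originator's OGM once per interval."""
    frames_per_s = domain_nodes * total_nodes / interval_s
    return frames_per_s * ogm_bytes * 8 / link_bps

for n in (300, 1500, 5000):
    print(f"{n:>5} nodes: {ogm_airtime_fraction(n):.0%} of a 6 Mbit/s channel")
```

With these assumptions, a 300-node mesh already spends most of a slow channel on OGM forwarding, and 1500 nodes is only plausible with aggressive aggregation and filtering, which lines up with the rough limits quoted above.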
On Wed, Apr 26, 2017 at 10:06 AM, fuumind fuumind@openmailbox.org wrote:
> Hi Jens!
> By 'sufficient power', do you mean processing power to handle overhead traffic? I'm imagining a network which primarily connects WLANs.
> Clouds of batman-adv networks sounds like a good idea. Any idea about how to implement the interfaces between these clouds? How big should they be allowed to grow before forming a new cloud?
> fuumind
Hi,
"Works well" depends a lot on your scenario.
I've seen many well-working networks with 100-300 nodes. There are Freifunk community networks with over 1000 nodes running batman-adv (i.e. "standard" users like laptops and smartphones), but they employ a lot of filtering to avoid excessive broadcast traffic.
Unless you have special conditions such as minimal broadcast traffic, or high capacity on the wireless links plus plenty of memory and CPU, I would not recommend planning for 10 000 nodes and beyond.
Cheers, Simon
On Wednesday, April 26, 2017 4:20:59 PM CEST fuumind wrote:
> Hi list!
> Been lurking for almost a year on the battlemesh list and recently joined here as well.
> I'm curious about how well batman-adv scales. Would a network of 10 000 nodes work well? What about 100 000 nodes or 1 000 000?
> Thanks! fuumind
Ok, so clouds of a few hundred nodes would work reasonably well while keeping customization to a minimum.
What makes OSPF a better candidate for interfacing between the clouds? Does it keep a more limited routing table and thus minimize overhead traffic? I haven't been able to figure that out well enough yet.
fuumind
ons 2017-04-26 klockan 18:05 +0200 skrev Simon Wunderlich:
> Hi,
> "works well" depends much on your scenario.
> I've seen many well-working networks with 100-300 nodes. There are Freifunk community networks with over 1000 nodes running batman-adv (i.e. "standard" users like laptops and smartphones), but they employ a lot of filtering to avoid too much broadcast.
> Unless you have special restrictions like minimal broadcast traffic or high capacity on the wireless links, memory and CPU, I would not recommend to plan for 10 000 nodes and beyond.
> Cheers, Simon
Hi,
On Friday, April 28, 2017 12:44:08 PM CEST fuumind wrote:
> Ok, so clouds of a few hundred nodes would work reasonably well and at the same time keep customization to a minimum.
Yes.
> What makes OSPF a better candidate for interfacing between the clouds? Does it keep a more limited routing table and thus minimize overhead traffic? I haven't been able to figure it out good enough yet.
The idea would be that you run a layer-3 protocol only on a few selected "gateway" nodes in each cloud. Those gateways would forward traffic from one cloud to the other without carrying the details (and thus the overhead) of the full mesh state.
Which technology is best is beyond my knowledge.
Cheers, Simon
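The effect of that gateway split on per-node protocol state can be sketched numerically. The cloud size and total node count below are hypothetical, chosen only to show the order-of-magnitude difference:

```python
def state_entries(total_nodes, cloud_size=None):
    """Entries a node must track: flat mesh vs. clouds joined at
    layer-3 gateways. cloud_size=None models the flat case."""
    if cloud_size is None:                   # flat: every node sees every node
        return total_nodes - 1
    clouds = -(-total_nodes // cloud_size)   # ceiling division
    # a gateway tracks its own cloud plus one route per other cloud
    return (cloud_size - 1) + (clouds - 1)

print(state_entries(100_000))                  # flat mesh: 99999 entries
print(state_entries(100_000, cloud_size=300))  # gateway node: 632 entries
```

The gateway only needs coarse routes to other clouds, so its state shrinks by roughly two orders of magnitude, and ordinary mesh nodes inside a cloud need even less.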
Now I get it. :) That makes sense.
Thanks!
fre 2017-04-28 klockan 14:23 +0200 skrev Simon Wunderlich:
> Hi,
> On Friday, April 28, 2017 12:44:08 PM CEST fuumind wrote:
>> Ok, so clouds of a few hundred nodes would work reasonably well and at the same time keep customization to a minimum.
> Yes
>> What makes OSPF a better candidate for interfacing between the clouds? Does it keep a more limited routing table and thus minimize overhead traffic? I haven't been able to figure it out good enough yet.
> The idea would be that you do a layer 3 protocol only on a few selected "gateway" nodes of the clouds. Those gateways would forward traffic from one cloud to the other, without having the details (and thus not having the overhead) of the state of the mesh.
> Which technology is the best is beyond my knowledge.
> Cheers, Simon
On Fri, Apr 28, 2017 at 02:23:16PM +0200, Simon Wunderlich wrote:
> On Friday, April 28, 2017 12:44:08 PM CEST fuumind wrote:
>> Ok, so clouds of a few hundred nodes would work reasonably well and at the same time keep customization to a minimum.
> Yes
>> What makes OSPF a better candidate for interfacing between the clouds? Does it keep a more limited routing table and thus minimize overhead traffic? I haven't been able to figure it out good enough yet.
> Which technology is the best is beyond my knowledge.
We (Freifunk Stuttgart) use a setup similar to this. A split into different layer-3 routing domains became necessary, as the batman management traffic reached 1 Mbit/s at about 300 batman nodes. Geographically we depend on VPN links to servers for our network, so we rarely have longer lines that could aggregate some information. The local uplinks are often consumer internet lines as slow as 16 Mbit/s down and 1 Mbit/s up.

Our different batman segments are distributed between several core gateways. These gateways are linked together via a backbone VPN; no batman there, just routing within that VPN.

Our first idea had been tinc VPN (a meshing VPN) plus OSPF. However, tinc mesh and OSPF don't play nice, and we had a really unstable network. In my eyes, OSPF needs a stable network; a VPN between some internet servers acting as a virtual switch is not good enough. At the moment we use tinc plus tinc's internal routing. This works well but is somewhat inflexible when it comes to new networks. In the future we will try to convert to point-to-point VPN links (probably OpenVPN) between the gateways and use BGP for the routing.
Regards, Adrian
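Adrian's measurement (about 1 Mbit/s of management traffic at roughly 300 nodes) gives a single data point to extrapolate from. Assuming management traffic grows roughly quadratically with node count (more originators, each flooded across more forwarders), which is a deliberate simplification rather than a batman-adv guarantee:

```python
def mgmt_traffic_mbit(nodes, ref_nodes=300, ref_mbit=1.0):
    """Extrapolate mesh management traffic from one measured point,
    assuming ~quadratic growth in node count (a rough simplification)."""
    return ref_mbit * (nodes / ref_nodes) ** 2

for n in (300, 600, 1000):
    print(f"{n} nodes: ~{mgmt_traffic_mbit(n):.1f} Mbit/s")
```

Under this assumption, a 1000-node flat segment would already need around 11 Mbit/s just for management traffic, which would drown the 1 Mbit/s consumer upstream links mentioned above; hence the split into smaller routing domains.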