On Fri, Apr 28, 2017 at 02:23:16PM +0200, Simon Wunderlich wrote:
On Friday, April 28, 2017 12:44:08 PM CEST fuumind wrote:
Ok, so clouds of a few hundred nodes would work reasonably well and at the same time keep customization to a minimum.
Yes
What makes OSPF a better candidate for interfacing between the clouds? Does it keep a more limited routing table and thus minimize overhead traffic? I haven't been able to figure it out well enough yet.
Which technology is best is beyond my knowledge.
We (Freifunk Stuttgart) use a setup similar to this. A split into different Layer 3 routing domains became necessary because the batman management traffic reached 1 Mbit/s at about 300 batman nodes.

Geographically, our network depends on VPN links to servers, so we rarely have long-distance links that could aggregate some of the traffic. The local uplinks are often consumer internet lines, as slow as 16 Mbit/s downstream and 1 Mbit/s upstream. Our batman segments are distributed across several core gateways. These gateways are linked together via a backbone VPN; there is no batman on it, just routing within that VPN.

Our first idea had been tinc VPN (a meshing VPN) plus OSPF. However, a tinc mesh and OSPF don't play nicely together, and we ended up with a really unstable network. In my eyes, OSPF needs a stable underlay; a VPN between a few internet servers acting as a virtual switch is not good enough. At the moment we use tinc plus tinc's internal routing. This works well but is somewhat inflexible when it comes to adding new networks. In the future we will try to convert to point-to-point VPN links (probably OpenVPN) between the gateways and use BGP for the routing, roughly as in the sketch below.
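Just as a rough sketch of the idea (nothing we run in production yet), a per-neighbour BGP session in bird (1.x) on one gateway could look something like this; the AS numbers, tunnel addresses and client prefix are made up, and each gateway would announce only the prefixes of its own batman segments over its OpenVPN p2p link:

    # /etc/bird/bird.conf on gateway 1 (example values only)
    router id 10.254.0.1;

    protocol bgp gw2 {
            local as 65001;                  # private AS of gateway 1
            neighbor 10.254.0.2 as 65002;    # gateway 2 at the far end of the p2p tunnel
            import all;                      # learn the other segments' prefixes
            export where net ~ [ 10.190.0.0/16+ ];  # announce only our own client range
    }

The appeal compared to OSPF over a meshed VPN is that each BGP session only depends on its own tunnel being up, so one flapping link between two gateways should not destabilize the routing of the whole backbone.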
Regards, Adrian