On Mon, Apr 2, 2012 at 3:32 AM, Marek Lindner lindner_marek@yahoo.de wrote:
On Monday, April 02, 2012 00:23:36 dan wrote:
I am making some assumptions. I assume that the link will at some point become saturated. If we simply track the maximum throughput, we can advertise an available amount.
This will result in a metric optimizing paths for the highest throughput ever recorded. In reality one can easily observe many links with variable throughput. Sometimes you get a spike of high throughput although the average speed is lower. Or your wifi environment changes with a negative impact on the throughput.
True, maybe we shouldn't keep the maximum. Maybe watch the interface queue and measure the throughput when frames start to get queued, updating the 'max' speed whenever an interface starts to queue frames.
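Roughly what I have in mind, as a sketch only (all names here are made up, and the sampling interval and decay factor would need tuning):

/* Sketch: only treat a measured rate as the link's "max" while the tx
 * queue is actually backlogged (i.e. the link is saturated), and decay
 * the estimate slowly so one lucky spike doesn't stick forever. */
#include <stdint.h>

struct link_estimate {
	uint64_t last_tx_bytes;	/* tx byte counter at the last sample */
	uint32_t max_kbps;	/* throughput seen while saturated */
};

static void sample_link(struct link_estimate *le, uint64_t tx_bytes,
			uint32_t queue_len, uint32_t interval_ms)
{
	uint64_t delta = tx_bytes - le->last_tx_bytes;
	uint32_t kbps = (uint32_t)(delta * 8 / interval_ms); /* bits per ms == kbit/s */

	le->last_tx_bytes = tx_bytes;

	if (queue_len > 0 && kbps > le->max_kbps)
		le->max_kbps = kbps;		/* saturated: record new max */
	else
		le->max_kbps -= le->max_kbps / 64; /* slow decay between saturations */
}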
How do we identify this? Well, if the C radio has historically transferred 10Mbit/s (on the interface closest to the gateway) and we have tracked it, we can take the current 5Mbit/s away from that and see that there is 5Mbit/s remaining. This does also assume that a specific interface has a consistent speed.
That is not a safe assumption.
Maybe it's OK to assume that an interface has a consistent speed for some period of time...
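To put numbers on it: if we measured 10Mbit/s the last time the interface was saturated and it is currently carrying 5Mbit/s, we can advertise roughly 5Mbit/s as spare. As a trivial sketch (names made up):

/* Illustrative only: spare capacity = best rate measured while saturated
 * minus the load the interface is carrying right now. */
static uint32_t available_kbps(uint32_t max_seen_kbps, uint32_t current_kbps)
{
	return current_kbps >= max_seen_kbps ? 0 : max_seen_kbps - current_kbps;
}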
I don't suggest making throughput the #1 route selection method, only what would be used if similar-quality links were available. In this case, A<>B and A<>F are of very similar quality, so we would use available throughput in the decision making. Have a tunable threshold for the TQ difference before this load balancing is taken into account.
Interesting idea. Have to think about this a little bit.
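To make the tie-break concrete, something along these lines (the threshold value and the availability field are invented for illustration):

/* Sketch: keep TQ as the primary metric; only when two candidates' TQ
 * values are within a tunable threshold of each other, fall back to the
 * advertised spare capacity for load balancing. */
#include <stdint.h>

#define TQ_SIMILARITY_THRESHOLD 10	/* tunable; TQ range is 0..255 */

struct candidate {
	uint8_t tq;		/* link quality of this next hop */
	uint32_t avail_kbps;	/* advertised spare capacity */
};

static const struct candidate *pick_route(const struct candidate *a,
					  const struct candidate *b)
{
	int tq_diff = (int)a->tq - (int)b->tq;

	if (tq_diff > TQ_SIMILARITY_THRESHOLD)
		return a;
	if (tq_diff < -TQ_SIMILARITY_THRESHOLD)
		return b;

	/* TQ is "similar enough": decide on spare capacity instead */
	return a->avail_kbps >= b->avail_kbps ? a : b;
}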
I have another thought on how to determine maximum speed, but it is more 'destructive': have batman-adv run a test on each link for tx, rx, and bi-directional throughput, store the results, and consider these the interface's potential. Also identify whether an interface is full duplex (FD) or half duplex (HD). Retest on an interval, and/or when the TQ on a link is consistently worse than when it was last tested. If the test were thorough enough, it would be able to identify at what throughput ping times and packet loss spike, giving an effective 'safe' maximum versus the absolute maximum.
Yes, we still have the "costly" way of detecting the link throughput ourselves. What do you think about the idea of asking the wifi rate algorithm for the link speed?
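Right, that is the costly path. The bookkeeping I had in mind for it is roughly this (all invented names; the retest margin is arbitrary):

/* Sketch: store the active test results per link and retest when they
 * are stale or when TQ has been consistently worse than at test time. */
#include <stdint.h>
#include <stdbool.h>

struct link_probe_result {
	uint32_t tx_kbps;	/* one-way test towards the neighbour */
	uint32_t rx_kbps;	/* one-way test from the neighbour */
	uint32_t bidir_kbps;	/* simultaneous test in both directions */
	bool full_duplex;	/* bidir roughly equals tx + rx */
	uint32_t safe_kbps;	/* rate before latency/loss started to spike */
	uint8_t tq_at_test;	/* TQ average when the test ran */
	uint64_t tested_at;	/* timestamp of the last test */
};

static bool needs_retest(const struct link_probe_result *r, uint8_t tq_avg,
			 uint64_t now, uint64_t interval)
{
	if (now - r->tested_at > interval)
		return true;
	return tq_avg + 20 < r->tq_at_test;	/* arbitrary "consistently worse" margin */
}

As for asking the wifi rate algorithm: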
I am a WISP. In my experience, the wifi sync rate isn't reliable. In perfect conditions, yes, but when there is Fresnel zone incursion on a wireless link, the rate algorithm can't account for reflected signals as noise because they don't exist yet. Not until you start transferring data does the signal get reflected back (as noise), and the radio has to adjust the rate down. The problem is that this happens after you have already dropped 5% of your packets, which drops the TQ on the link, and the link is effectively down. Then the data stops, the reflections stop, the link speed goes back up, and very light traffic (pings, OGMs) travels safely, so TQ rises. Rinse and repeat.
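(For completeness: assuming a kernel where cfg80211 exposes an expected-throughput estimate per station — cfg80211_get_station() and station_info.expected_throughput, which are newer interfaces than what we run here — the query itself is cheap and would look roughly like the snippet below. But per the above I wouldn't trust the number on a marginal link.)

/* Sketch: ask cfg80211/mac80211 what it currently expects this station
 * to achieve (kbit/s). Returns 0 if the peer isn't a wifi station or
 * the driver doesn't report an estimate. Error handling trimmed. */
#include <net/cfg80211.h>

static u32 wifi_expected_kbps(struct net_device *dev, const u8 *neigh_mac)
{
	struct station_info sinfo = {};

	if (cfg80211_get_station(dev, neigh_mac, &sinfo))
		return 0;

	if (!(sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT)))
		return 0;

	return sinfo.expected_throughput;
}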
Regards, Marek
I wish I had a really great solution to this. I don't really have anything to complain about; batman-adv is already a mile ahead of the next best mesh routing protocol :)