I am testing an implementation of batman-adv, alfred, 802.11s and 802.11ac. In this setup my mesh interface is on the 5 GHz radio (QCA9882), using LEDE/OpenWrt 17.01.4.
I am getting:

root@Daniel Node:~# batctl tp 9c:b7:93:e3:56:e4
Test duration 10020ms.
Sent 142966836 Bytes.
Throughput: 13.61 MB/s (114.15 Mbps)

Is this in line with what everyone has seen? The established data rate is high, yet the throughput is very low.
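As a sanity check on the units: the two numbers batctl prints are consistent with each other; the "MB/s" figure is mebibytes per second, while the "Mbps" figure uses 10^6 bits. A quick check against the values above:

```python
sent_bytes = 142966836      # from the batctl tp output above
duration_s = 10.020         # 10020 ms

rate_bps = sent_bytes / duration_s           # bytes per second
mib_per_s = rate_bps / (1024 * 1024)         # batctl's "MB/s" is MiB/s
mbit_per_s = rate_bps * 8 / 1e6              # "Mbps" uses 10^6 bits

print(f"{mib_per_s:.2f} MB/s ({mbit_per_s:.2f} Mbps)")   # 13.61 MB/s (114.15 Mbps)
```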
root@Daniel Node:~# iw mesh0 station dump
Station 9c:b7:93:e3:56:e4 (on mesh0)
        inactive time:  100 ms
        rx bytes:       75007154
        rx packets:     910406
        tx bytes:       968480244
        tx packets:     635811
        tx retries:     0
        tx failed:      7
        rx drop misc:   15
        signal:         -45 dBm
        signal avg:     -44 dBm
        Toffset:        18446744061225933893 us
        tx bitrate:     6.0 MBit/s
        rx bitrate:     650.0 MBit/s VHT-MCS 7 80MHz short GI VHT-NSS 2
        rx duration:    10763380 us
        mesh llid:      44135
        mesh plid:      29835
        mesh plink:     ESTAB
        mesh local PS mode:     ACTIVE
        mesh peer PS mode:      ACTIVE
        mesh non-peer PS mode:  ACTIVE
        authorized:     yes
        authenticated:  yes
        associated:     yes
        preamble:       long
        WMM/WME:        yes
        MFP:            no
        TDLS peer:      no
        DTIM period:    2
        beacon interval:100
        short slot time:yes
        connected time: 12825 seconds
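For scripting, the relevant fields can be pulled out of the station dump with a few lines of Python. This is only a sketch; it parses a hard-coded excerpt of the output shown above rather than calling `iw` itself:

```python
import re

# Excerpt of `iw mesh0 station dump` output (taken from the dump above).
dump = """\
Station 9c:b7:93:e3:56:e4 (on mesh0)
        signal avg:     -44 dBm
        tx bitrate:     6.0 MBit/s
        rx bitrate:     650.0 MBit/s VHT-MCS 7 80MHz short GI VHT-NSS 2
"""

def field(name, text):
    # Return everything after "name:" on its line, stripped of whitespace.
    m = re.search(rf"^\s*{re.escape(name)}:\s*(.+)$", text, re.M)
    return m.group(1).strip() if m else None

print(field("tx bitrate", dump))   # the bogus firmware-reported value
print(field("rx bitrate", dump))
```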
What may I be doing wrong?
Getting about 1/6 of the connection rate as throughput is pretty typical on 80 MHz channels unless you are in a truly clean RF environment, your walls aren't reflective, and so on. This is why MIMO/MU-MIMO is such a big thing, and you don't have that going for you. That's not a batman-adv thing. You should try 20 and 40 MHz channels and see how that works out.
Also, the tx rate is very low. I'm not sure how accurate this test is, or whether it uses some ACK mechanism that would rely on the TX side. I would try an iperf test between the nodes to get a more accurate number.
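A rough sketch of such a comparison: run `iperf3 -s` on one node and `iperf3 -c <peer> -t 30 -J` on the other (addresses and flags are assumptions, adjust to your setup), then read the summary rate out of the JSON. The snippet below parses a canned stand-in result, with a made-up number, so the extraction itself is reproducible:

```python
import json

# Canned stand-in for `iperf3 -c <peer> -t 30 -J` output; the structure follows
# iperf3's JSON report, but the value here is purely illustrative.
sample = '{"end": {"sum_received": {"bits_per_second": 114150000.0}}}'

result = json.loads(sample)
mbps = result["end"]["sum_received"]["bits_per_second"] / 1e6
print(f"{mbps:.2f} Mbps")   # 114.15 Mbps
```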
Also, are there other radios?
On Sun, Dec 31, 2017 at 6:41 AM, Daniel Ghansah smartwires@gmail.com wrote:
On Sunday, December 31, 2017 8:41:03 AM HKT Daniel Ghansah wrote:
Throughput: 13.61 MB/s (114.15 Mbps). Is this in line with what everyone has seen? The established data rate is high, yet the throughput is very low.
tx bitrate: 6.0 MBit/s
Is the tx bitrate field what you are wondering about (as opposed to the measured speed)?
This is a known bug caused by the QCA WiFi driver firmware blob. The exported TX bitrate value is utterly bogus. Only QCA is in a position to fix that.
There have been attempts such as this one: https://github.com/torvalds/linux/commit/c1dd8016ae02557e4f3fcf7339865924d93... Not sure this fix addresses your case. Sven might know.
Cheers, Marek
On Montag, 1. Januar 2018 13:12:53 CET Marek Lindner wrote: [...]
This is a known bug caused by the QCA WiFi driver firmware blob. The exported TX bitrate value is utterly bogus. Only QCA is in the position to fix that.
There have been attempts such as this one: https://github.com/torvalds/linux/commit/c1dd8016ae02557e4f3fcf7339865924d93... Not sure this fix addresses your case. Sven might know.
This only works for the 10.4 firmware versions with peer stats enabled. The 10.2.4 firmware versions (only some of which are actually supported) require the following patchset:
- https://patchwork.kernel.org/patch/10092915/
- https://patchwork.kernel.org/patch/10092917/
- https://patchwork.kernel.org/patch/10092919/
And you need the patch mentioned by Marek (+ the patch referenced in it) to get any TX rate values at all.
But QCA already knows that they (relatively often) still report completely bogus values (and only QCA can fix it). And you must understand that the values you get here are *a lot* higher than what you can realistically achieve over this link, since the TX/RX data rates are physical data rates, not the throughput measured via TCP/UDP/... or by looking at the payload of the actually transported (QoS) data packets.
So let us assume for now that you lose 50% of the physical data rate to expected overhead. Then you might still have the problem that each packet has to be retransmitted 4 times (or more) before it is received by the other end, and that the aggregation factor is 1. The TX/RX data rate information in iw will not capture any of that.
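To make the back-of-the-envelope concrete (the 50% overhead and the 4 transmissions per packet are illustrative assumptions from the paragraph above, not measurements):

```python
phy_rate_mbps = 650.0     # rx bitrate reported by iw (physical data rate)
mac_efficiency = 0.5      # assume ~50% of the PHY rate lost to overhead
tx_per_packet = 4         # assume each packet is sent 4 times in total

effective = phy_rate_mbps * mac_efficiency / tx_per_packet
print(f"~{effective:.0f} Mbps usable")
```

Under these assumed numbers the usable rate drops to roughly 81 Mbps, far below the 650 Mbps the station dump suggests.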
But there can also be a lot of other factors which influence the performance: maybe your CPU is not capable of handling the packet generation and transmission, the MTU is not configured properly on the slave device, the NIC might not be able to transmit a single flow fast enough to saturate the link, batman-adv's throughput meter might not handle packet loss as well as expected, the qdisc (flow dissector) might fail to handle the batman-adv tp packets properly, ...
Kind regards, Sven
@Marek Thanks for your response. I am aware of the bogus 6 Mbit/s display; I was referring to the results of batctl tp. Should I be using iperf?

@Sven Thanks for the explanation. I am going to ask a friend to make a build with those patches for me. The CPU I have been using is a QCA9557, and I also have a Gateworks GW5310, so I can do some comparisons and report back my results. Should I be using iperf, or is there something else you would recommend for more accurate throughput values?
Daniel
On Mon, Jan 1, 2018 at 4:13 AM, Sven Eckelmann sven@narfation.org wrote:
On Monday, January 1, 2018 11:33:11 AM HKT Daniel Ghansah wrote:
Thanks for your response. I am aware of the bogus 6 Mbit/s display; I was referring to the results of batctl tp. Should I be using iperf?
Your message wasn't entirely clear to me; that is why I added this bit. You can certainly run iperf and compare the results. Let us know what you find.
Please note that the batman-adv throughput meter was developed because iperf is not able to saturate the link when running on embedded devices. So, if you're using iperf you should run it on modern notebook CPUs connected to your test devices. Otherwise the results will disappoint.
Cheers, Marek
b.a.t.m.a.n@lists.open-mesh.org