Hi,
Before we proceed any further, would you please solve the riddle of *how* you obtain the CPU usage values? I have asked this question several times but could not find an answer in your mails. Maybe it is something obvious I am missing?
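For what it's worth, one common way to obtain such values on a Linux node is to sample the aggregate counters in `/proc/stat` and compute the busy fraction from the deltas; whether that matches your method is exactly the open question. A minimal sketch, assuming the standard user/nice/system/idle/iowait column ordering:

```python
import time

def read_cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

def busy_percent(t0, t1):
    # Busy fraction between two samples: everything except idle + iowait.
    deltas = [b - a for a, b in zip(t0, t1)]
    total = sum(deltas)
    if total == 0:
        return 0.0
    idle = deltas[3] + deltas[4]  # idle and iowait columns
    return 100.0 * (total - idle) / total

def sample_cpu_usage(interval=1.0):
    t0 = read_cpu_times()
    time.sleep(interval)
    return busy_percent(t0, read_cpu_times())
```

Calling `sample_cpu_usage()` while the traffic generator is running would give one such number; tools like top or mpstat do essentially the same delta computation.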
> Without batman-advanced protocol between the nodes, and when all nodes are communicating with each other, the results for TCP and UDP CPU usage were:
>
> A B C (all in communication range)
>
> TCP CPU utilization, send local from C to B = 5.85%
> TCP CPU utilization, send local from B to A = 0.90%
> TCP CPU utilization, send local from C to A = 5.10%
Why are you not building the same setup? Comparing two different setups to draw conclusions is a bit weird.
> CPU load due to batman-adv (from C to B) = 0.19% (1-hop) (which is 6.04 - 5.85)
> CPU load due to batman-adv (from B to A) = 0.27% (1-hop) (which is 1.127 - 0.90)
> CPU load due to batman-adv (from C to A) = 1.865% (2-hop) (which is 8.615 - 5.85 - 0.90)
>
> The CPU load for 2-hop is more than that for 1-hop, which is obvious. But shouldn't the sum of the individual 1-hops (0.19% + 0.27% = 0.46%) be equal to the 2-hop value (1.865%)?
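As a quick sanity check, the subtraction behind those figures can be restated as below; note that 1.127 - 0.90 actually comes out at 0.227, so the quoted 0.27% looks like a rounding slip or typo:

```python
# Baseline TCP CPU usage (no batman-adv) and usage with batman-adv running,
# all figures in percent, copied from the thread above.
baseline = {"C->B": 5.85, "B->A": 0.90, "C->A": 5.10}
with_batman = {"C->B": 6.04, "B->A": 1.127, "C->A": 8.615}

overhead_1hop_cb = with_batman["C->B"] - baseline["C->B"]  # ~0.19
overhead_1hop_ba = with_batman["B->A"] - baseline["B->A"]  # ~0.227 (quoted as 0.27)
# The 2-hop figure subtracts both 1-hop baselines from the end-to-end measurement:
overhead_2hop = with_batman["C->A"] - baseline["C->B"] - baseline["B->A"]  # ~1.865

print(overhead_1hop_cb + overhead_1hop_ba, "vs", overhead_2hop)
```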
In your first email you explain that all three systems use different hardware (800 MHz / 1000 MHz / 2000 MHz). Adding or subtracting percentage values measured on different hardware is weird too. Also, you will probably notice that you obtain different values depending on which node has to generate the packets. For instance, A -> C won't give you the same results as C -> A.
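To make that objection concrete: a CPU-usage percentage is relative to the local clock, so the same figure represents very different amounts of work on different machines. A rough sketch (the mapping of 800/1000/2000 MHz onto nodes A/B/C is an assumption here, as is the crude single-core model):

```python
# Hypothetical mapping of the three clock rates onto the three nodes.
freq_mhz = {"A": 800, "B": 1000, "C": 2000}

def busy_cycles_per_sec(node, cpu_percent):
    # Crude single-core model: fraction of the node's clock spent busy.
    return cpu_percent / 100.0 * freq_mhz[node] * 1e6

# 0.90% on one node is not the same cost as 0.90% on another:
print(busy_cycles_per_sec("A", 0.90))  # ~7.2 million cycles/s
print(busy_cycles_per_sec("C", 0.90))  # ~18 million cycles/s
```

Under this model, adding a percentage measured on A to one measured on C mixes units that differ by a factor of 2.5, which is why the per-hop percentages cannot simply be summed.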
> Also, what about the CPU load on the relaying node itself (node B)?
What about it ?
Regards, Marek