The CPU usage is the amount of CPU consumed to run the netperf test: netperf itself reports the CPU utilization it measures while sending a TCP_STREAM from the client to the server. The CPU usage is measured on the network twice, first without batman-adv and then with batman-adv enabled on each node, so the difference between the two measurements gives the CPU load created by batman-adv. That is why I don't use top / uptime / etc.
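Roughly, the invocation looks like this (the server address and test length are just placeholders; -c and -C are the options that make netperf report local and remote CPU utilization):

    netperf -H 192.168.1.1 -t TCP_STREAM -l 60 -c -C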
The whole point of using batman-adv in this experiment is to create an ad-hoc connection between the end nodes (A and C), which are supposed to be out of direct contact, so that the intermediate node (B) acts as the relaying node. Since all three nodes are actually in communication range, I create a virtual 2-hop condition by using ebtables to disable the direct connection between A and C (a rough sketch of the rules is below). Once the 2-hop case is formed, I want to see its effect in terms of CPU usage on node B. I don't think I can use a better setup than this one.
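The filtering is along these lines on each end node (the MAC addresses are placeholders for the wifi interface of the opposite end node; the exact rules may differ, but the idea is to drop frames coming directly from the other end node):

    # on node A: drop all frames received directly from C
    ebtables -A INPUT -s <MAC of C> -j DROP

    # on node C: drop all frames received directly from A
    ebtables -A INPUT -s <MAC of A> -j DROP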
The percentage values I have stated are already normalized to 800 MHz for simplicity. For example, if node B (CPU clock 1000 MHz) shows a CPU usage of 1%, I take 1% of 1000, which is 10, and 10/800 gives 1.25%. I have therefore used 1.25% instead of 1%.
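In other words, the scaling applied to every reported value is:

    normalized % = measured % * (node clock in MHz / 800)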
I still don't get why the CPU load generated by batman-adv for the 2-hop case is so high (1.865%, when I was expecting 0.46% as described in my previous email).
I hope I have explained it better now.
Max
On Thu, Aug 11, 2011 at 4:37 PM, Marek Lindner <lindner_marek@yahoo.de> wrote:
> Hi,
>
> before we proceed any further, would you please solve the riddle of *how* you obtain the CPU usage values? I have asked this question several times but could not find any answer in your mails. Maybe it is something obvious I am missing?
> > Without the batman-adv protocol between the nodes, and with all nodes communicating with each other, the results for TCP and UDP CPU usage were:
> >
> > A   B   C   (all in communication range)
> >
> > TCP CPU utilization send local from C to B = 5.85%
> > TCP CPU utilization send local from B to A = 0.90%
> > TCP CPU utilization send local from C to A = 5.10%
> Why are you not building the same setup? Comparing 2 different setups to draw conclusions is a bit weird...
> > CPU load due to batman-adv (from C to B) = 0.19% (1-hop) (which is 6.04 - 5.85)
> > CPU load due to batman-adv (from B to A) = 0.27% (1-hop) (which is 1.127 - 0.90)
> > CPU load due to batman-adv (from C to A) = 1.865% (2-hop) (which is 8.615 - 5.85 - 0.90)
> >
> > The CPU load for the 2-hop case is higher than for the 1-hop cases, which is obvious. But shouldn't the sum of the individual 1-hop values (0.19% + 0.27% = 0.46%) be equal to the 2-hop value (1.865%)?
> In your first email you explain that all 3 systems use different hardware components (800 MHz / 1000 MHz / 2000 MHz). Adding / subtracting percentage values based on different hardware is weird too... Also, you will probably notice that you obtain different values depending on which node has to generate the packets. For instance, A -> C won't give you the same results as C -> A.
> > Also, what about the CPU load on the relaying node itself (node B)?
> What about it?
> Regards,
> Marek