On Monday, August 15, 2011 12:02:34 Max Ip wrote:
The CPU usage is the amount of resources consumed to run the netperf test. netperf reports the CPU it consumes while sending the TCP_STREAM from the client to the server. The CPU usage is measured on the network first without batman-adv and then with batman-adv enabled on each node, so the difference should reflect the CPU load created by batman-adv. That is why I don't use top / uptime / etc.
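For reference, this is the kind of invocation I mean on the client side (the server address and test length are just examples):

    # 60 second TCP_STREAM test; -c / -C make netperf report the local and
    # remote CPU utilisation it measures for its own run
    netperf -H 192.168.1.3 -t TCP_STREAM -l 60 -c -C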
When netperf tells you how many CPU cycles it consumed, how can you deduce the batman-adv load from that? netperf runs in user space and has no knowledge of what is going on in the kernel. In fact, netperf probably asks the kernel how many CPU cycles were consumed by its own process. In other words: netperf has no way of knowing the load created by the kernel, as that is simply beyond its scope. I suggest looking at tools that are able to analyze kernel load, for example ftrace.
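A rough sketch of what I have in mind (assuming the kernel was built with the function profiler and debugfs is mounted; the filter syntax is the generic ftrace one, nothing batman-adv specific):

    cd /sys/kernel/debug/tracing
    echo ':mod:batman_adv' > set_ftrace_filter    # restrict profiling to the batman-adv module
    echo 1 > function_profile_enabled             # start per-function time accounting
    # ... run the netperf test ...
    echo 0 > function_profile_enabled
    cat trace_stat/function*                      # time spent in each traced function, per CPU

That gives you the time spent inside the batman-adv code itself instead of guessing it from a user space counter.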
The whole point of using batman-adv in this experiment is to create an ad-hoc connection between the end nodes (A and C), which are out of direct contact, so the intermediate node (B) acts as the relay. I would like to create a virtual 2-hop condition (using ebtables to disable the connection between the end nodes A and C) and then see, once the 2-hop case is formed, what its effect is in terms of CPU usage on node B. I don't think I can use a better setup than this one.
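The filtering itself is nothing fancy; on node A I drop everything coming from node C's MAC and the mirror rule goes on node C (the address below is only a placeholder):

    ebtables -A INPUT -s 00:11:22:33:44:55 -j DROP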
What you want to achieve is understandable, but you were running different test setups and comparing the results: without batman-adv you had no relay, and with batman-adv you had a relay. Trying to correct for that difference with a simple calculation should only be done if one understands all the implications.
Regards, Marek