Sven Eckelmann wrote:
OpenVZ containers use the same kernel as the host system, and I can build and load the module on the host, but I don't know how to access /proc/net/batman-adv from within the container. The container uses its own procfs.
Ok, that is correct. I wanted to check how to work around such problems and how to implement it in the new configuration API/multiple bat-device implementation for 0.3 (or later) - but unfortunately OpenVZ for 2.6.32 was only good at crashing my system very hard (at least it is a known upstream bug :)). So it may take a little longer until I have really checked what we can do here.
Just to document some stuff I ran into:
Ok, it was a bit of work to get 2.6.32 running with OpenVZ and without those nasty VFS crashes, but now it boots up correctly. The first thing I checked was whether it is possible to create bridge devices in VEs - and the result was quite impressive. You can create a socket for the bridge ioctl, but the ioctl itself fails. The problem seems to be that get_exec_env()->features doesn't have VE_FEATURE_BRIDGE set (ENOTTY).
vzctl set 777 --features "bridges:on" --save
should fix it - but that is not supported by the current vzctl. Even after enabling this feature, the ioctl fails with EPERM - which should only happen when we don't have the CAP_NET_ADMIN capability. I find it a little irritating, since capabilities shouldn't have to be enabled explicitly - but OpenVZ seems to manage them per VE, and somebody must enable it using
vzctl set 777 --capability "NET_ADMIN:on" --save
... which can only be done while the VE is stopped (unlike the --features stuff, which irritates me even more).
So, now we have everything - let's try `brctl addbr br0` - and what would we expect to happen? Yes, after a day like that, it simply crashed my complete kernel in br_sysfs_addbr... A good time to stop doing anything practical with OpenVZ and to concentrate on the bare source code, ignoring reality :)
Note for Marek:
proc and sysfs are created using init_ve_proc and init_ve_sysfs for the VE. It doesn't look like much is done there. You must use register_pernet_subsys to register functions that are called when a network namespace is created or deleted. This could be used to create the appropriate files in /proc/net - it should work with /sys too (but I haven't found anything that uses it). It is documented in net/core/net_namespace.c. There are more namespaces in the current OpenVZ kernel, but I don't think they are interesting for us.
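A minimal sketch of how that could look (not the actual batman-adv code - just the register_pernet_subsys pattern with the 2.6.32-era proc_net_fops_create/proc_net_remove helpers; the file_operations are left empty here):

```c
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <net/net_namespace.h>

/* fill in .open/.read handlers for the real file */
static const struct file_operations batadv_proc_fops;

/* called once for every network namespace that comes up */
static int __net_init batadv_net_init(struct net *net)
{
	if (!proc_net_fops_create(net, "batman-adv", S_IRUGO,
				  &batadv_proc_fops))
		return -ENOMEM;
	return 0;
}

/* called when the namespace goes away again */
static void __net_exit batadv_net_exit(struct net *net)
{
	proc_net_remove(net, "batman-adv");
}

static struct pernet_operations batadv_net_ops = {
	.init = batadv_net_init,
	.exit = batadv_net_exit,
};

static int __init batadv_init(void)
{
	return register_pernet_subsys(&batadv_net_ops);
}

static void __exit batadv_exit(void)
{
	unregister_pernet_subsys(&batadv_net_ops);
}

module_init(batadv_init);
module_exit(batadv_exit);
MODULE_LICENSE("GPL");
```

The nice property is that init/exit also run for init_net, so the host's /proc/net/batman-adv falls out of the same code path.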
We should get informed about the removal of a device only in its own namespace via hard_if_event - so it should not be a problem when the namespace disappears. Also, we only use init_net in hard_interface.c when we add an interface - so the actual namespace should not be a problem. But I am a little curious why we only deactivate the interface in hard_if_event when it gets NETDEV_UNREGISTER. I would think that batman_if->net_dev of that interface isn't valid anymore and we aren't allowed to access it, so removing it should be the right choice.
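To make the NETDEV_UNREGISTER point concrete, here is a hedged sketch of a netdevice notifier in the shape of hard_if_event (on 2.6.32 the notifier's ptr argument is the struct net_device itself); batadv_remove_interface is a hypothetical helper standing in for whatever the real removal path is:

```c
#include <linux/netdevice.h>
#include <linux/notifier.h>

/* hypothetical helper: drop our reference and forget the interface */
void batadv_remove_interface(struct net_device *net_dev);

static int hard_if_event_sketch(struct notifier_block *this,
				unsigned long event, void *ptr)
{
	struct net_device *net_dev = ptr;

	switch (event) {
	case NETDEV_UNREGISTER:
		/* the device is going away for good - remove it
		 * instead of just deactivating, since net_dev must
		 * not be touched afterwards */
		batadv_remove_interface(net_dev);
		break;
	default:
		break;
	}
	return NOTIFY_DONE;
}

/* registered once via register_netdevice_notifier() */
static struct notifier_block hard_if_notifier = {
	.notifier_call = hard_if_event_sketch,
};
```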
I started to write some tests, but quickly noticed that (at least in 0.2.1) all variables storing the proc filesystem information are global variables - which will not work.
proc_interface_write must also check the net_device structure and not just the name string (if sharing a batX device between host and VE is to be allowed).
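The reason is that "eth0" in the host and "eth0" in a VE are different devices with the same name. A sketch of how the lookup could be keyed on the device rather than the string - batadv_find_batman_if is a hypothetical lookup, the rest is the stock dev_get_by_name/dev_put API, which resolves names per namespace:

```c
#include <linux/netdevice.h>
#include <net/net_namespace.h>

/* hypothetical: is this net_device already attached to a bat device? */
struct batman_if *batadv_find_batman_if(const struct net_device *net_dev);

static int batadv_attach_by_name(struct net *net, const char *name)
{
	struct net_device *net_dev;

	/* resolve the name inside the caller's namespace ... */
	net_dev = dev_get_by_name(net, name);
	if (!net_dev)
		return -ENODEV;

	/* ... but key every further comparison on the device pointer,
	 * never on the name string */
	if (batadv_find_batman_if(net_dev)) {
		dev_put(net_dev);
		return -EEXIST;
	}

	/* register net_dev with the bat device here ... */
	dev_put(net_dev);
	return 0;
}
```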
I've added a small (and really unclean) test of the pernet stuff - just to get a feeling for what must be done to use it. I haven't really done anything useful, due to OpenVZ's way of telling me that it doesn't want to cooperate... but at least there is now some idea of how we (ok... you) can deal with the net namespaces :)
Best regards, Sven