Hi everyone,
The following patches add optimized, group-aware multicast handling for the case of symmetric multicast memberships (a node being both sender and receiver). They aim at sending multicast data packets only to nodes which are actually receivers of the specific multicast group, instead of flooding those packets through the whole mesh as is done at the moment.
Please see the attached document for details about the algorithm, the integration into the current B.A.T.M.A.N.-Advanced code and how to activate/use this mode.
Comments and further testing of these patches are highly appreciated :).
Cheers, Linus
PS: Chris Lang has also been working on optimizations for multicast traffic, so we'd have to see how best to merge things after he publishes them, too.
This adds the possibility to attach multicast announcements - so called MCAs - to OGMs. It also adds a packet structure for the multicast path selection and the packet types needed for the future multicast optimizations.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 hard-interface.c |    1 +
 packet.h         |   43 ++++++++++++++++++++++++++++++++++---------
 2 files changed, 35 insertions(+), 9 deletions(-)
diff --git a/hard-interface.c b/hard-interface.c
index 4f95777..2b502be 100644
--- a/hard-interface.c
+++ b/hard-interface.c
@@ -313,6 +313,7 @@ int hardif_enable_interface(struct batman_if *batman_if, char *iface_name)
         batman_packet->ttl = 2;
         batman_packet->tq = TQ_MAX_VALUE;
         batman_packet->num_hna = 0;
+        batman_packet->num_mca = 0;

         batman_if->if_num = bat_priv->num_ifaces;
         bat_priv->num_ifaces++;

diff --git a/packet.h b/packet.h
index b49fdf7..bf87ef6 100644
--- a/packet.h
+++ b/packet.h
@@ -24,15 +24,17 @@

 #define ETH_P_BATMAN  0x4305 /* unofficial/not registered Ethertype */

-#define BAT_PACKET       0x01
-#define BAT_ICMP         0x02
-#define BAT_UNICAST      0x03
-#define BAT_BCAST        0x04
-#define BAT_VIS          0x05
-#define BAT_UNICAST_FRAG 0x06
+#define BAT_PACKET        0x01
+#define BAT_ICMP          0x02
+#define BAT_UNICAST       0x03
+#define BAT_BCAST         0x04
+#define BAT_VIS           0x05
+#define BAT_UNICAST_FRAG  0x06
+#define BAT_MCAST         0x07
+#define BAT_MCAST_TRACKER 0x08

 /* this file is included by batctl which needs these defines */
-#define COMPAT_VERSION 12
+#define COMPAT_VERSION 13
 #define DIRECTLINK 0x40
 #define VIS_SERVER 0x20
 #define PRIMARIES_FIRST_HOP 0x10

@@ -60,9 +62,9 @@ struct batman_packet {
         uint8_t orig[6];
         uint8_t prev_sender[6];
         uint8_t ttl;
-        uint8_t num_hna;
         uint8_t gw_flags;  /* flags related to gateway class */
-        uint8_t align;
+        uint8_t num_hna;
+        uint8_t num_mca;
 } __attribute__((packed));

 #define BAT_PACKET_LEN sizeof(struct batman_packet)

@@ -120,6 +122,29 @@ struct bcast_packet {
         uint32_t seqno;
 } __attribute__((packed));

+struct mcast_packet {
+        uint8_t packet_type;    /* BAT_MCAST */
+        uint8_t version;        /* batman version field */
+        uint8_t orig[6];
+        uint8_t ttl;
+        uint32_t seqno;
+} __attribute__((packed));
+
+/* marks the path for multicast streams */
+struct mcast_tracker_packet {
+        uint8_t packet_type;    /* BAT_MCAST_TRACKER */
+        uint8_t version;        /* batman version field */
+        uint8_t orig[6];
+        uint8_t ttl;
+        uint8_t num_mcast_entries;
+        uint8_t align[2];
+} __attribute__((packed));
+
+struct mcast_entry {
+        uint8_t mcast_addr[6];
+        uint8_t num_dest;       /* number of multicast data receivers */
+};
+
 struct vis_packet {
         uint8_t packet_type;
         uint8_t version;        /* batman version field */
> +struct mcast_packet {
> +        uint8_t packet_type;    /* BAT_MCAST */
> +        uint8_t version;        /* batman version field */
> +        uint8_t orig[6];
> +        uint8_t ttl;
> +        uint32_t seqno;
> +} __attribute__((packed));
It would be better to put seqno before ttl, so that it is 32bit aligned.
Andrew
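For illustration, one way the struct could be reordered along those lines (just a sketch of the suggestion, not taken from the posted series): packet_type, version and orig[6] fill the first 8 bytes, so seqno then starts at a 4-byte boundary.

	struct mcast_packet {
		uint8_t  packet_type;	/* BAT_MCAST */
		uint8_t  version;	/* batman version field */
		uint8_t  orig[6];
		uint32_t seqno;		/* offset 8, 32-bit aligned */
		uint8_t  ttl;
		uint8_t  align[3];	/* pad to a multiple of 4 bytes */
	} __attribute__((packed));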
This commit adds the needed configurable variables in bat_priv and the corresponding user interfaces in sysfs for the future multicast optimizations.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 Makefile.kbuild  |    1 +
 bat_sysfs.c      |  160 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 multicast.c      |  121 +++++++++++++++++++++++++++++++++++++++++
 multicast.h      |   30 ++++++++++
 packet.h         |    4 ++
 soft-interface.c |    4 ++
 types.h          |    4 ++
 7 files changed, 324 insertions(+), 0 deletions(-)
 create mode 100644 multicast.c
 create mode 100644 multicast.h
diff --git a/Makefile.kbuild b/Makefile.kbuild
index e99c198..56296c4 100644
--- a/Makefile.kbuild
+++ b/Makefile.kbuild
@@ -49,5 +49,6 @@ batman-adv-y += send.o
 batman-adv-y += soft-interface.o
 batman-adv-y += translation-table.o
 batman-adv-y += unicast.o
+batman-adv-y += multicast.o
 batman-adv-y += vis.o
 batman-adv-y += bat_printk.o

diff --git a/bat_sysfs.c b/bat_sysfs.c
index cd7bb51..f627d70 100644
--- a/bat_sysfs.c
+++ b/bat_sysfs.c
@@ -27,6 +27,7 @@
 #include "gateway_common.h"
 #include "gateway_client.h"
 #include "vis.h"
+#include "multicast.h"

 #define to_dev(obj)             container_of(obj, struct device, kobj)
 #define kobj_to_netdev(obj)     to_net_dev(to_dev(obj->parent))

@@ -356,6 +357,153 @@ static ssize_t store_gw_bwidth(struct kobject *kobj, struct attribute *attr,
         return gw_bandwidth_set(net_dev, buff, count);
 }
+static ssize_t show_mcast_mode(struct kobject *kobj, struct attribute *attr,
+                               char *buff)
+{
+        struct device *dev = to_dev(kobj->parent);
+        struct bat_priv *bat_priv = netdev_priv(to_net_dev(dev));
+        int mcast_mode = atomic_read(&bat_priv->mcast_mode);
+        int ret;
+
+        switch (mcast_mode) {
+        case MCAST_MODE_CLASSIC_FLOODING:
+                ret = sprintf(buff, "classic_flooding\n");
+                break;
+        case MCAST_MODE_PROACT_TRACKING:
+                ret = sprintf(buff, "proact_tracking\n");
+                break;
+        default:
+                ret = -1;
+                break;
+        }
+
+        return ret;
+}
+
+static ssize_t store_mcast_mode(struct kobject *kobj, struct attribute *attr,
+                                char *buff, size_t count)
+{
+        struct device *dev = to_dev(kobj->parent);
+        struct net_device *net_dev = to_net_dev(dev);
+        struct bat_priv *bat_priv = netdev_priv(net_dev);
+        unsigned long val;
+        int ret, mcast_mode_tmp = -1;
+
+        ret = strict_strtoul(buff, 10, &val);
+
+        if (((count == 2) && (!ret) && (val == MCAST_MODE_CLASSIC_FLOODING)) ||
+            (strncmp(buff, "classic_flooding", 16) == 0))
+                mcast_mode_tmp = MCAST_MODE_CLASSIC_FLOODING;
+
+        if (((count == 2) && (!ret) && (val == MCAST_MODE_PROACT_TRACKING)) ||
+            (strncmp(buff, "proact_tracking", 15) == 0))
+                mcast_mode_tmp = MCAST_MODE_PROACT_TRACKING;
+
+        if (mcast_mode_tmp < 0) {
+                if (buff[count - 1] == '\n')
+                        buff[count - 1] = '\0';
+
+                bat_info(net_dev,
+                         "Invalid parameter for 'mcast mode' setting received: "
+                         "%s\n", buff);
+                return -EINVAL;
+        }
+
+        if (atomic_read(&bat_priv->mcast_mode) == mcast_mode_tmp)
+                return count;
+
+        bat_info(net_dev, "Changing mcast mode from: %s to: %s\n",
+                 atomic_read(&bat_priv->mcast_mode) ==
+                 MCAST_MODE_CLASSIC_FLOODING ?
+                 "classic_flooding" : "proact_tracking",
+                 mcast_mode_tmp == MCAST_MODE_CLASSIC_FLOODING ?
+                 "classic_flooding" : "proact_tracking");
+
+        atomic_set(&bat_priv->mcast_mode, (unsigned)mcast_mode_tmp);
+        return count;
+}
+
+static ssize_t show_mcast_tracker_interval(struct kobject *kobj,
+                                           struct attribute *attr, char *buff)
+{
+        struct device *dev = to_dev(kobj->parent);
+        struct bat_priv *bat_priv = netdev_priv(to_net_dev(dev));
+        int tracker_interval = atomic_read(&bat_priv->mcast_tracker_interval);
+
+        if (!tracker_interval)
+                return sprintf(buff, "auto\n");
+        else
+                return sprintf(buff, "%i\n", tracker_interval);
+}
+
+static ssize_t store_mcast_tracker_interval(struct kobject *kobj,
+                struct attribute *attr, char *buff, size_t count)
+{
+        struct device *dev = to_dev(kobj->parent);
+        struct net_device *net_dev = to_net_dev(dev);
+
+        return mcast_tracker_interval_set(net_dev, buff, count);
+}
+
+static ssize_t show_mcast_tracker_timeout(struct kobject *kobj,
+                                          struct attribute *attr, char *buff)
+{
+        struct device *dev = to_dev(kobj->parent);
+        struct bat_priv *bat_priv = netdev_priv(to_net_dev(dev));
+        int tracker_timeout = atomic_read(&bat_priv->mcast_tracker_timeout);
+
+        if (!tracker_timeout)
+                return sprintf(buff, "auto\n");
+        else
+                return sprintf(buff, "%i\n", tracker_timeout);
+}
+
+static ssize_t store_mcast_tracker_timeout(struct kobject *kobj,
+                struct attribute *attr, char *buff, size_t count)
+{
+        struct device *dev = to_dev(kobj->parent);
+        struct net_device *net_dev = to_net_dev(dev);
+
+        return mcast_tracker_timeout_set(net_dev, buff, count);
+}
+
+static ssize_t show_mcast_fanout(struct kobject *kobj,
+                                 struct attribute *attr, char *buff)
+{
+        struct device *dev = to_dev(kobj->parent);
+        struct bat_priv *bat_priv = netdev_priv(to_net_dev(dev));
+
+        return sprintf(buff, "%i\n",
+                       atomic_read(&bat_priv->mcast_fanout));
+}
+
+static ssize_t store_mcast_fanout(struct kobject *kobj,
+                struct attribute *attr, char *buff, size_t count)
+{
+        struct device *dev = to_dev(kobj->parent);
+        struct net_device *net_dev = to_net_dev(dev);
+        struct bat_priv *bat_priv = netdev_priv(net_dev);
+        unsigned long mcast_fanout_tmp;
+        int ret;
+
+        ret = strict_strtoul(buff, 10, &mcast_fanout_tmp);
+        if (ret) {
+                bat_info(net_dev, "Invalid parameter for 'mcast_fanout' "
+                         "setting received: %s\n", buff);
+                return -EINVAL;
+        }
+
+        if (atomic_read(&bat_priv->mcast_fanout) == mcast_fanout_tmp)
+                return count;
+
+        bat_info(net_dev, "Changing mcast fanout interval from: %i to: %li\n",
+                 atomic_read(&bat_priv->mcast_fanout),
+                 mcast_fanout_tmp);
+
+        atomic_set(&bat_priv->mcast_fanout, mcast_fanout_tmp);
+        return count;
+}
+
 BAT_ATTR_BOOL(aggregated_ogms, S_IRUGO | S_IWUSR, NULL);
 BAT_ATTR_BOOL(bonding, S_IRUGO | S_IWUSR, NULL);
 BAT_ATTR_BOOL(fragmentation, S_IRUGO | S_IWUSR, update_min_mtu);
@@ -367,6 +515,14 @@ BAT_ATTR_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE,
               post_gw_deselect);
 static BAT_ATTR(gw_bandwidth, S_IRUGO | S_IWUSR, show_gw_bwidth,
                 store_gw_bwidth);
+static BAT_ATTR(mcast_mode, S_IRUGO | S_IWUSR,
+                show_mcast_mode, store_mcast_mode);
+static BAT_ATTR(mcast_tracker_interval, S_IRUGO | S_IWUSR,
+                show_mcast_tracker_interval, store_mcast_tracker_interval);
+static BAT_ATTR(mcast_tracker_timeout, S_IRUGO | S_IWUSR,
+                show_mcast_tracker_timeout, store_mcast_tracker_timeout);
+static BAT_ATTR(mcast_fanout, S_IRUGO | S_IWUSR,
+                show_mcast_fanout, store_mcast_fanout);
 #ifdef CONFIG_BATMAN_ADV_DEBUG
 BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 3, NULL);
 #endif
@@ -381,6 +537,10 @@ static struct bat_attribute *mesh_attrs[] = {
         &bat_attr_hop_penalty,
         &bat_attr_gw_sel_class,
         &bat_attr_gw_bandwidth,
+        &bat_attr_mcast_mode,
+        &bat_attr_mcast_tracker_interval,
+        &bat_attr_mcast_tracker_timeout,
+        &bat_attr_mcast_fanout,
 #ifdef CONFIG_BATMAN_ADV_DEBUG
         &bat_attr_log_level,
 #endif

diff --git a/multicast.c b/multicast.c
new file mode 100644
index 0000000..0598873
--- /dev/null
+++ b/multicast.c
@@ -0,0 +1,121 @@
+/*
+ * Copyright (C) 2010 B.A.T.M.A.N. contributors:
+ *
+ * Linus Lüssing
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+ * 02110-1301, USA
+ *
+ */
+
+#include "main.h"
+#include "multicast.h"
+
+int mcast_tracker_interval_set(struct net_device *net_dev, char *buff,
+                               size_t count)
+{
+        struct bat_priv *bat_priv = netdev_priv(net_dev);
+        unsigned long new_tracker_interval;
+        int cur_tracker_interval;
+        int ret;
+
+        ret = strict_strtoul(buff, 10, &new_tracker_interval);
+
+        if (ret && !strncmp(buff, "auto", 4)) {
+                new_tracker_interval = 0;
+                goto ok;
+        }
+
+        else if (ret) {
+                bat_info(net_dev, "Invalid parameter for "
+                         "'mcast_tracker_interval' setting received: %s\n",
+                         buff);
+                return -EINVAL;
+        }
+
+        if (new_tracker_interval < JITTER) {
+                bat_info(net_dev, "New mcast tracker interval too small: %li "
+                         "(min: %i or auto)\n", new_tracker_interval, JITTER);
+                return -EINVAL;
+        }
+
+ok:
+        cur_tracker_interval = atomic_read(&bat_priv->mcast_tracker_interval);
+
+        if (cur_tracker_interval == new_tracker_interval)
+                return count;
+
+        if (!cur_tracker_interval && new_tracker_interval)
+                bat_info(net_dev, "Tracker interval change from: %s to: %li\n",
+                         "auto", new_tracker_interval);
+        else if (cur_tracker_interval && !new_tracker_interval)
+                bat_info(net_dev, "Tracker interval change from: %i to: %s\n",
+                         cur_tracker_interval, "auto");
+        else
+                bat_info(net_dev, "Tracker interval change from: %i to: %li\n",
+                         cur_tracker_interval, new_tracker_interval);
+
+        atomic_set(&bat_priv->mcast_tracker_interval, new_tracker_interval);
+
+        return count;
+}
+
+int mcast_tracker_timeout_set(struct net_device *net_dev, char *buff,
+                              size_t count)
+{
+        struct bat_priv *bat_priv = netdev_priv(net_dev);
+        unsigned long new_tracker_timeout;
+        int cur_tracker_timeout;
+        int ret;
+
+        ret = strict_strtoul(buff, 10, &new_tracker_timeout);
+
+        if (ret && !strncmp(buff, "auto", 4)) {
+                new_tracker_timeout = 0;
+                goto ok;
+        }
+
+        else if (ret) {
+                bat_info(net_dev, "Invalid parameter for "
+                         "'mcast_tracker_timeout' setting received: %s\n",
+                         buff);
+                return -EINVAL;
+        }
+
+        if (new_tracker_timeout < JITTER) {
+                bat_info(net_dev, "New mcast tracker timeout too small: %li "
+                         "(min: %i or auto)\n", new_tracker_timeout, JITTER);
+                return -EINVAL;
+        }
+
+ok:
+        cur_tracker_timeout = atomic_read(&bat_priv->mcast_tracker_timeout);
+
+        if (cur_tracker_timeout == new_tracker_timeout)
+                return count;
+
+        if (!cur_tracker_timeout && new_tracker_timeout)
+                bat_info(net_dev, "Tracker timeout change from: %s to: %li\n",
+                         "auto", new_tracker_timeout);
+        else if (cur_tracker_timeout && !new_tracker_timeout)
+                bat_info(net_dev, "Tracker timeout change from: %i to: %s\n",
+                         cur_tracker_timeout, "auto");
+        else
+                bat_info(net_dev, "Tracker timeout change from: %i to: %li\n",
+                         cur_tracker_timeout, new_tracker_timeout);
+
+        atomic_set(&bat_priv->mcast_tracker_timeout, new_tracker_timeout);
+
+        return count;
+}

diff --git a/multicast.h b/multicast.h
new file mode 100644
index 0000000..12a3376
--- /dev/null
+++ b/multicast.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (C) 2010 B.A.T.M.A.N. contributors:
+ *
+ * Linus Lüssing
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+ * 02110-1301, USA
+ *
+ */
+
+#ifndef _NET_BATMAN_ADV_MULTICAST_H_
+#define _NET_BATMAN_ADV_MULTICAST_H_
+
+int mcast_tracker_interval_set(struct net_device *net_dev, char *buff,
+                               size_t count);
+int mcast_tracker_timeout_set(struct net_device *net_dev, char *buff,
+                              size_t count);
+
+#endif /* _NET_BATMAN_ADV_MULTICAST_H_ */

diff --git a/packet.h b/packet.h
index bf87ef6..6926ca4 100644
--- a/packet.h
+++ b/packet.h
@@ -50,6 +50,10 @@
 #define VIS_TYPE_SERVER_SYNC    0
 #define VIS_TYPE_CLIENT_UPDATE  1
+/* mcast defines */
+#define MCAST_MODE_CLASSIC_FLOODING 0
+#define MCAST_MODE_PROACT_TRACKING  1
+
 /* fragmentation defines */
 #define UNI_FRAG_HEAD 0x01

diff --git a/soft-interface.c b/soft-interface.c
index e89ede1..7cea678 100644
--- a/soft-interface.c
+++ b/soft-interface.c
@@ -597,6 +597,10 @@ struct net_device *softif_create(char *name)
         atomic_set(&bat_priv->gw_bandwidth, 41);
         atomic_set(&bat_priv->orig_interval, 1000);
         atomic_set(&bat_priv->hop_penalty, 10);
+        atomic_set(&bat_priv->mcast_mode, MCAST_MODE_CLASSIC_FLOODING);
+        atomic_set(&bat_priv->mcast_tracker_interval, 0); /* = auto */
+        atomic_set(&bat_priv->mcast_tracker_timeout, 0); /* = auto */
+        atomic_set(&bat_priv->mcast_fanout, 2);
         atomic_set(&bat_priv->log_level, 0);
         atomic_set(&bat_priv->fragmentation, 1);
         atomic_set(&bat_priv->bcast_queue_left, BCAST_QUEUE_LEN);

diff --git a/types.h b/types.h
index 1d00849..b61d5a8 100644
--- a/types.h
+++ b/types.h
@@ -132,6 +132,10 @@ struct bat_priv {
         atomic_t gw_bandwidth;          /* gw bandwidth */
         atomic_t orig_interval;         /* uint */
         atomic_t hop_penalty;           /* uint */
+        atomic_t mcast_mode;            /* MCAST_MODE_* */
+        atomic_t mcast_tracker_interval;/* uint, auto */
+        atomic_t mcast_tracker_timeout; /* uint, auto */
+        atomic_t mcast_fanout;          /* uint */
         atomic_t log_level;             /* uint */
         atomic_t bcast_seqno;
         atomic_t bcast_queue_left;
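For reference, with this patch applied the new knobs show up next to the existing mesh attributes (assuming the usual batman-adv sysfs location and a bat0 soft interface - paths here are illustrative):

	# echo proact_tracking > /sys/class/net/bat0/mesh/mcast_mode
	# echo 500 > /sys/class/net/bat0/mesh/mcast_tracker_interval
	# cat /sys/class/net/bat0/mesh/mcast_tracker_timeout
	auto

store_mcast_mode also accepts the numeric mode index (e.g. "1" for proact_tracking), matching its strict_strtoul branch.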
The data structures and locking mechanisms for fetching multicast mac addresses from a net_device have changed a little between kernel versions 2.6.21 and 2.6.35.

Therefore this commit backports two macros (netdev_mc_count(), netdev_for_each_mc_addr()) for older kernel versions and abstracts the locking and variable access behind custom macros of our own.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 compat.h |   49 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 49 insertions(+), 0 deletions(-)
diff --git a/compat.h b/compat.h
index b01455f..bbb1dad 100644
--- a/compat.h
+++ b/compat.h
@@ -264,4 +264,53 @@ int bat_seq_printf(struct seq_file *m, const char *f, ...);

 #endif /* < KERNEL_VERSION(2, 6, 29) */

+#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 34)
+
+#define netdev_mc_count(dev) ((dev)->mc_count)
+#define netdev_for_each_mc_addr(mclist, dev) \
+        for (mclist = dev->mc_list; mclist; mclist = mclist->next)
+
+#endif /* < KERNEL_VERSION(2, 6, 34) */
+
+
+/*
+ * net_device - multicast list handling
+ *	structures
+ */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 35)
+
+#define MC_LIST struct dev_addr_list
+#define MC_LIST_ADDR da_addr
+
+#endif /* < KERNEL_VERSION(2, 6, 35) */
+
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 34)
+
+#define MC_LIST struct netdev_hw_addr_list_mc
+#define MC_LIST_ADDR addr
+
+#endif /* > KERNEL_VERSION(2, 6, 34) */
+
+/*
+ * net_device - multicast list handling
+ *	locking
+ */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 27)
+
+#define MC_LIST_LOCK(soft_iface, flags) \
+        spin_lock_irqsave(&soft_iface->_xmit_lock, flags)
+#define MC_LIST_UNLOCK(soft_iface, flags) \
+        spin_unlock_irqrestore(&soft_iface->_xmit_lock, flags)
+
+#endif /* LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 27) */
+
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 26)
+
+#define MC_LIST_LOCK(soft_iface, flags) \
+        spin_lock_irqsave(&soft_iface->addr_list_lock, flags)
+#define MC_LIST_UNLOCK(soft_iface, flags) \
+        spin_unlock_irqrestore(&soft_iface->addr_list_lock, flags)
+
+#endif /* > KERNEL_VERSION(2, 6, 26) */
+
 #endif /* _NET_BATMAN_ADV_COMPAT_H_ */
This patch introduces multicast announcements - MCAs for short - which get attached to an OGM if an optimized multicast mode that needs MCAs has been selected (i.e. proact_tracking).

MCA entries are multicast mac addresses in use by a multicast receiver in the mesh cloud. Currently, MCAs are only fetched locally from the batman interface itself; bridged-in hosts will not get announced yet and will need a more complex patch adding IGMP/MLD snooping support. However, the local fetching already allows multicast optimizations on layer 2 between batman nodes, without depending on IP at all.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 aggregation.c |   12 +++++++-
 aggregation.h |    6 +++-
 main.h        |    2 +
 send.c        |   82 +++++++++++++++++++++++++++++++++++++++++++++++------
 4 files changed, 87 insertions(+), 15 deletions(-)
diff --git a/aggregation.c b/aggregation.c
index 0c92e3b..d4de296 100644
--- a/aggregation.c
+++ b/aggregation.c
@@ -30,6 +30,12 @@ static int hna_len(struct batman_packet *batman_packet)
         return batman_packet->num_hna * ETH_ALEN;
 }

+/* calculate the size of the mca information for a given packet */
+static int mca_len(struct batman_packet *batman_packet)
+{
+        return batman_packet->num_mca * ETH_ALEN;
+}
+
 /* return true if new_packet can be aggregated with forw_packet */
 static bool can_aggregate_with(struct batman_packet *new_batman_packet,
                                int packet_len,
@@ -265,9 +271,11 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff,
                         hna_buff, hna_len(batman_packet), if_incoming);

-                buff_pos += BAT_PACKET_LEN + hna_len(batman_packet);
+                buff_pos += BAT_PACKET_LEN + hna_len(batman_packet) +
+                        mca_len(batman_packet);
                 batman_packet = (struct batman_packet *)
                         (packet_buff + buff_pos);
         } while (aggregated_packet(buff_pos, packet_len,
-                                   batman_packet->num_hna));
+                                   batman_packet->num_hna,
+                                   batman_packet->num_mca));
 }

diff --git a/aggregation.h b/aggregation.h
index 71a91b3..93f2496 100644
--- a/aggregation.h
+++ b/aggregation.h
@@ -25,9 +25,11 @@
 #include "main.h"

 /* is there another aggregated packet here? */
-static inline int aggregated_packet(int buff_pos, int packet_len, int num_hna)
+static inline int aggregated_packet(int buff_pos, int packet_len, int num_hna,
+                                    int num_mca)
 {
-        int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_hna * ETH_ALEN);
+        int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_hna * ETH_ALEN) +
+                            (num_mca * ETH_ALEN);

         return (next_buff_pos <= packet_len) &&
                (next_buff_pos <= MAX_AGGREGATION_BYTES);

diff --git a/main.h b/main.h
index a362433..772d621 100644
--- a/main.h
+++ b/main.h
@@ -105,6 +105,8 @@

 /* #define VIS_SUBCLUSTERS_DISABLED */

+#define UINT8_MAX 255
+
 /*
  * Kernel headers
  */

diff --git a/send.c b/send.c
index b89b9f7..ba7ebfe 100644
--- a/send.c
+++ b/send.c
@@ -122,7 +122,8 @@ static void send_packet_to_if(struct forw_packet *forw_packet,
         /* adjust all flags and log packets */
         while (aggregated_packet(buff_pos, forw_packet->packet_len,
-                                 batman_packet->num_hna)) {
+                                 batman_packet->num_hna,
+                                 batman_packet->num_mca)) {

                 /* we might have aggregated direct link packets with an
                  * ordinary base packet */
@@ -214,18 +215,71 @@ static void send_packet(struct forw_packet *forw_packet)
         rcu_read_unlock();
 }

+static void add_own_MCA(struct batman_packet *batman_packet, int num_mca,
+                        struct net_device *soft_iface)
+{
+        MC_LIST *mc_list_entry;
+        int num_mca_done = 0;
+        unsigned long flags;
+        char *mca_entry = (char *)(batman_packet + 1);
+
+        if (num_mca == 0)
+                goto out;
+
+        if (num_mca > UINT8_MAX) {
+                pr_warning("Too many multicast announcements here, "
+                           "just adding %i\n", UINT8_MAX);
+                num_mca = UINT8_MAX;
+        }
+
+        mca_entry = mca_entry + batman_packet->num_hna * ETH_ALEN;
+
+        MC_LIST_LOCK(soft_iface, flags);
+        netdev_for_each_mc_addr(mc_list_entry, soft_iface) {
+                memcpy(mca_entry, &mc_list_entry->MC_LIST_ADDR, ETH_ALEN);
+                mca_entry += ETH_ALEN;
+
+                /* A multicast address might just have been added,
+                 * avoid writing outside of buffer */
+                if (++num_mca_done == num_mca)
+                        break;
+        }
+        MC_LIST_UNLOCK(soft_iface, flags);
+
+out:
+        batman_packet->num_mca = num_mca_done;
+}
+
 static void rebuild_batman_packet(struct bat_priv *bat_priv,
                                   struct batman_if *batman_if)
 {
-        int new_len;
-        unsigned char *new_buff;
+        int new_len, mcast_mode, num_mca = 0;
+        unsigned long flags;
+        unsigned char *new_buff = NULL;
         struct batman_packet *batman_packet;

-        new_len = sizeof(struct batman_packet) +
-                (bat_priv->num_local_hna * ETH_ALEN);
-        new_buff = kmalloc(new_len, GFP_ATOMIC);
+        batman_packet = (struct batman_packet *)batman_if->packet_buff;
+        mcast_mode = atomic_read(&bat_priv->mcast_mode);
+
+        /* Avoid attaching MCAs, if multicast optimization is disabled */
+        if (mcast_mode == MCAST_MODE_PROACT_TRACKING) {
+                MC_LIST_LOCK(batman_if->soft_iface, flags);
+                num_mca = netdev_mc_count(batman_if->soft_iface);
+                MC_LIST_UNLOCK(batman_if->soft_iface, flags);
+        }

-        /* keep old buffer if kmalloc should fail */
+        if (atomic_read(&bat_priv->hna_local_changed) ||
+            num_mca != batman_packet->num_mca) {
+                new_len = sizeof(struct batman_packet) +
+                          (bat_priv->num_local_hna * ETH_ALEN) +
+                          num_mca * ETH_ALEN;
+                new_buff = kmalloc(new_len, GFP_ATOMIC);
+        }
+
+        /*
+         * if local hna or mca has changed but kmalloc failed
+         * then just keep the old buffer
+         */
         if (new_buff) {
                 memcpy(new_buff, batman_if->packet_buff,
                        sizeof(struct batman_packet));
@@ -239,6 +293,13 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv,
                 batman_if->packet_buff = new_buff;
                 batman_if->packet_len = new_len;
         }
+
+        /**
+         * always copy mca entries (if there are any) - we have to
+         * traverse the list anyway, so we can just do a memcpy instead
+         * of memcmp for the sake of simplicity
+         */
+        add_own_MCA(batman_packet, num_mca, batman_if->soft_iface);
 }

@@ -264,9 +325,7 @@ void schedule_own_packet(struct batman_if *batman_if)
         if (batman_if->if_status == IF_TO_BE_ACTIVATED)
                 batman_if->if_status = IF_ACTIVE;

-        /* if local hna has changed and interface is a primary interface */
-        if ((atomic_read(&bat_priv->hna_local_changed)) &&
-            (batman_if == bat_priv->primary_if))
+        if (batman_if == bat_priv->primary_if)
                 rebuild_batman_packet(bat_priv, batman_if);

         /**
@@ -359,7 +418,8 @@ void schedule_forward_packet(struct orig_node *orig_node,
         send_time = forward_send_time(bat_priv);
         add_bat_packet_to_list(bat_priv,
                                (unsigned char *)batman_packet,
-                               sizeof(struct batman_packet) + hna_buff_len,
+                               sizeof(struct batman_packet) + hna_buff_len +
+                               batman_packet->num_mca * ETH_ALEN,
                                if_incoming, 0, send_time);
 }
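To recap the resulting on-wire layout (as implied by add_own_MCA() above), the MCA entries are appended directly behind the HNA entries in the OGM buffer:

	struct batman_packet | num_hna * ETH_ALEN (HNA) | num_mca * ETH_ALEN (MCA)

so a receiver can locate the MCA block at offset BAT_PACKET_LEN + num_hna * ETH_ALEN.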
This commit adds a timer for sending periodic tracker packets (the sending itself is not in the scope of this patch). Furthermore, the timer gets restarted if the tracker interval is changed, or if the originator interval changes while auto mode is selected for the tracker interval.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 bat_sysfs.c |   13 +++++++++++--
 main.c      |    5 +++++
 multicast.c |   57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 multicast.h |    3 +++
 types.h     |    1 +
 5 files changed, 77 insertions(+), 2 deletions(-)
diff --git a/bat_sysfs.c b/bat_sysfs.c
index f627d70..8f688db 100644
--- a/bat_sysfs.c
+++ b/bat_sysfs.c
@@ -357,8 +357,16 @@ static ssize_t store_gw_bwidth(struct kobject *kobj, struct attribute *attr,
         return gw_bandwidth_set(net_dev, buff, count);
 }

+void update_mcast_tracker(struct net_device *net_dev)
+{
+        struct bat_priv *bat_priv = netdev_priv(net_dev);
+
+        if (!atomic_read(&bat_priv->mcast_tracker_interval))
+                mcast_tracker_reset(bat_priv);
+}
+
 static ssize_t show_mcast_mode(struct kobject *kobj, struct attribute *attr,
-               char *buff)
+                               char *buff)
 {
         struct device *dev = to_dev(kobj->parent);
         struct bat_priv *bat_priv = netdev_priv(to_net_dev(dev));
@@ -509,7 +517,8 @@ BAT_ATTR_BOOL(bonding, S_IRUGO | S_IWUSR, NULL);
 BAT_ATTR_BOOL(fragmentation, S_IRUGO | S_IWUSR, update_min_mtu);
 static BAT_ATTR(vis_mode, S_IRUGO | S_IWUSR, show_vis_mode, store_vis_mode);
 static BAT_ATTR(gw_mode, S_IRUGO | S_IWUSR, show_gw_mode, store_gw_mode);
-BAT_ATTR_UINT(orig_interval, S_IRUGO | S_IWUSR, 2 * JITTER, INT_MAX, NULL);
+BAT_ATTR_UINT(orig_interval, S_IRUGO | S_IWUSR, 2 * JITTER, INT_MAX,
+              update_mcast_tracker);
 BAT_ATTR_UINT(hop_penalty, S_IRUGO | S_IWUSR, 0, TQ_MAX_VALUE, NULL);
 BAT_ATTR_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE,
               post_gw_deselect);

diff --git a/main.c b/main.c
index b827f6a..135ac89 100644
--- a/main.c
+++ b/main.c
@@ -32,6 +32,7 @@
 #include "gateway_client.h"
 #include "types.h"
 #include "vis.h"
+#include "multicast.h"
 #include "hash.h"

 struct list_head if_list;
@@ -109,6 +110,9 @@ int mesh_init(struct net_device *soft_iface)
         if (vis_init(bat_priv) < 1)
                 goto err;

+        if (mcast_init(bat_priv) < 1)
+                goto err;
+
         atomic_set(&bat_priv->mesh_state, MESH_ACTIVE);
         goto end;

@@ -139,6 +143,7 @@ void mesh_free(struct net_device *soft_iface)
         hna_global_free(bat_priv);

         softif_neigh_purge(bat_priv);
+        mcast_free(bat_priv);

         atomic_set(&bat_priv->mesh_state, MESH_INACTIVE);
 }

diff --git a/multicast.c b/multicast.c
index 0598873..1ddd37b 100644
--- a/multicast.c
+++ b/multicast.c
@@ -22,6 +22,48 @@
 #include "main.h"
 #include "multicast.h"

+/* how long to wait until sending a multicast tracker packet */
+static int tracker_send_delay(struct bat_priv *bat_priv)
+{
+        int tracker_interval = atomic_read(&bat_priv->mcast_tracker_interval);
+
+        /* auto mode, set to 1/2 ogm interval */
+        if (!tracker_interval)
+                tracker_interval = atomic_read(&bat_priv->orig_interval) / 2;
+
+        /* multicast tracker packets get half as much jitter as ogms as
+         * they're limited down to JITTER and not JITTER*2 */
+        return msecs_to_jiffies(tracker_interval -
+                                JITTER/2 + (random32() % JITTER));
+}
+
+static void start_mcast_tracker(struct bat_priv *bat_priv)
+{
+        /* adding some jitter */
+        unsigned long tracker_interval = tracker_send_delay(bat_priv);
+        queue_delayed_work(bat_event_workqueue, &bat_priv->mcast_tracker_work,
+                           tracker_interval);
+}
+
+static void stop_mcast_tracker(struct bat_priv *bat_priv)
+{
+        cancel_delayed_work_sync(&bat_priv->mcast_tracker_work);
+}
+
+void mcast_tracker_reset(struct bat_priv *bat_priv)
+{
+        stop_mcast_tracker(bat_priv);
+        start_mcast_tracker(bat_priv);
+}
+
+static void mcast_tracker_timer(struct work_struct *work)
+{
+        struct bat_priv *bat_priv = container_of(work, struct bat_priv,
+                                                 mcast_tracker_work.work);
+
+        start_mcast_tracker(bat_priv);
+}
+
 int mcast_tracker_interval_set(struct net_device *net_dev, char *buff,
                                size_t count)
 {
@@ -68,6 +110,8 @@ ok:

         atomic_set(&bat_priv->mcast_tracker_interval, new_tracker_interval);

+        mcast_tracker_reset(bat_priv);
+
         return count;
 }

@@ -119,3 +163,16 @@ ok:

         return count;
 }
+
+int mcast_init(struct bat_priv *bat_priv)
+{
+        INIT_DELAYED_WORK(&bat_priv->mcast_tracker_work, mcast_tracker_timer);
+        start_mcast_tracker(bat_priv);
+
+        return 1;
+}
+
+void mcast_free(struct bat_priv *bat_priv)
+{
+        stop_mcast_tracker(bat_priv);
+}

diff --git a/multicast.h b/multicast.h
index 12a3376..26ce6d8 100644
--- a/multicast.h
+++ b/multicast.h
@@ -26,5 +26,8 @@ int mcast_tracker_interval_set(struct net_device *net_dev, char *buff,
                                size_t count);
 int mcast_tracker_timeout_set(struct net_device *net_dev, char *buff,
                               size_t count);
+void mcast_tracker_reset(struct bat_priv *bat_priv);
+int mcast_init(struct bat_priv *bat_priv);
+void mcast_free(struct bat_priv *bat_priv);

 #endif /* _NET_BATMAN_ADV_MULTICAST_H_ */

diff --git a/types.h b/types.h
index b61d5a8..41eabfe 100644
--- a/types.h
+++ b/types.h
@@ -169,6 +169,7 @@ struct bat_priv {
         struct delayed_work hna_work;
         struct delayed_work orig_work;
         struct delayed_work vis_work;
+        struct delayed_work mcast_tracker_work;
         struct gw_node *curr_gw;
         struct vis_info *my_vis_info;
 };
We need to store the MCA information attached to the OGMs so that we can later prepare the tracker packets from it.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 originator.c |    7 ++++++-
 routing.c    |   40 +++++++++++++++++++++++++++++++++++++---
 routing.h    |    2 +-
 types.h      |    2 ++
 4 files changed, 46 insertions(+), 5 deletions(-)
diff --git a/originator.c b/originator.c
index 89ec021..900d5fc 100644
--- a/originator.c
+++ b/originator.c
@@ -101,6 +101,8 @@ static void free_orig_node(void *data, void *arg)
         frag_list_free(&orig_node->frag_list);
         hna_global_del_orig(bat_priv, orig_node, "originator timed out");

+        kfree(orig_node->mca_buff);
+
         kfree(orig_node->bcast_own);
         kfree(orig_node->bcast_own_sum);
         kfree(orig_node);
@@ -147,6 +149,8 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr)
         memcpy(orig_node->orig, addr, ETH_ALEN);
         orig_node->router = NULL;
         orig_node->hna_buff = NULL;
+        orig_node->mca_buff = NULL;
+        orig_node->num_mca = 0;
         orig_node->bcast_seqno_reset = jiffies - 1
                 - msecs_to_jiffies(RESET_PROTECTION_MS);
         orig_node->batman_seqno_reset = jiffies - 1
@@ -256,7 +260,8 @@ static bool purge_orig_node(struct bat_priv *bat_priv,
                         update_routes(bat_priv, orig_node,
                                       best_neigh_node,
                                       orig_node->hna_buff,
-                                      orig_node->hna_buff_len);
+                                      orig_node->hna_buff_len,
+                                      orig_node->mca_buff, orig_node->num_mca);

                         /* update bonding candidates, we could have lost
                          * some candidates. */
                         update_bonding_candidates(bat_priv, orig_node);

diff --git a/routing.c b/routing.c
index d8b0c5a..cf145b7 100644
--- a/routing.c
+++ b/routing.c
@@ -77,6 +77,34 @@ static void update_HNA(struct bat_priv *bat_priv, struct orig_node *orig_node,
         }
 }

+/* Copy the mca buffer again if something has changed */
+static void update_MCA(struct orig_node *orig_node,
+                       unsigned char *mca_buff, int num_mca)
+{
+        /* numbers differ? then reallocate buffer */
+        if (num_mca != orig_node->num_mca) {
+                kfree(orig_node->mca_buff);
+                if (num_mca > 0) {
+                        orig_node->mca_buff =
+                                kmalloc(num_mca * ETH_ALEN, GFP_ATOMIC);
+                        if (orig_node->mca_buff)
+                                goto update;
+                }
+                orig_node->mca_buff = NULL;
+                orig_node->num_mca = 0;
+        /* size ok, just update? */
+        } else if (num_mca > 0 &&
+                   memcmp(orig_node->mca_buff, mca_buff, num_mca * ETH_ALEN))
+                goto update;
+
+        /* it's the same, leave it like that */
+        return;
+
+update:
+        memcpy(orig_node->mca_buff, mca_buff, num_mca * ETH_ALEN);
+        orig_node->num_mca = num_mca;
+}
+
 static void update_route(struct bat_priv *bat_priv,
                          struct orig_node *orig_node,
                          struct neigh_node *neigh_node,
@@ -114,7 +142,7 @@

 void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node,
                    struct neigh_node *neigh_node, unsigned char *hna_buff,
-                   int hna_buff_len)
+                   int hna_buff_len, unsigned char *mca_buff, int num_mca)
 {

         if (orig_node == NULL)
@@ -126,6 +154,8 @@ void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node,
         /* may be just HNA changed */
         else
                 update_HNA(bat_priv, orig_node, hna_buff, hna_buff_len);
+
+        update_MCA(orig_node, mca_buff, num_mca);
 }

 static int is_bidirectional_neigh(struct orig_node *orig_node,
@@ -247,6 +277,7 @@ static void update_orig(struct bat_priv *bat_priv,
 {
         struct neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL;
         int tmp_hna_buff_len;
+        unsigned char *mca_buff;

         bat_dbg(DBG_BATMAN, bat_priv, "update_originator(): "
                 "Searching and updating originator entry of received packet\n");
@@ -297,6 +328,7 @@ static void update_orig(struct bat_priv *bat_priv,

         tmp_hna_buff_len = (hna_buff_len > batman_packet->num_hna * ETH_ALEN ?
                             batman_packet->num_hna * ETH_ALEN : hna_buff_len);
+        mca_buff = (char *)batman_packet + BAT_PACKET_LEN + tmp_hna_buff_len;

         /* if this neighbor already is our next hop there is nothing
          * to change */
@@ -317,12 +349,14 @@ static void update_orig(struct bat_priv *bat_priv,
                 goto update_hna;

         update_routes(bat_priv, orig_node, neigh_node,
-                      hna_buff, tmp_hna_buff_len);
+                      hna_buff, tmp_hna_buff_len, mca_buff,
+                      batman_packet->num_mca);
         goto update_gw;

 update_hna:
         update_routes(bat_priv, orig_node, orig_node->router,
-                      hna_buff, tmp_hna_buff_len);
+                      hna_buff, tmp_hna_buff_len, mca_buff,
+                      batman_packet->num_mca);

 update_gw:
         if (orig_node->gw_flags != batman_packet->gw_flags)

diff --git a/routing.h b/routing.h
index f108f23..d44b540 100644
--- a/routing.h
+++ b/routing.h
@@ -31,7 +31,7 @@ void receive_bat_packet(struct ethhdr *ethhdr,
                         struct batman_if *if_incoming);
 void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node,
                    struct neigh_node *neigh_node, unsigned char *hna_buff,
-                   int hna_buff_len);
+                   int hna_buff_len, unsigned char *mca_buff, int num_mca);
 int route_unicast_packet(struct sk_buff *skb, struct batman_if *recv_if,
                          int hdr_size);
 int recv_icmp_packet(struct sk_buff *skb, struct batman_if *recv_if);

diff --git a/types.h b/types.h
index 41eabfe..0129b1f 100644
--- a/types.h
+++ b/types.h
@@ -79,6 +79,8 @@ struct orig_node {
         uint8_t flags;
         unsigned char *hna_buff;
         int16_t hna_buff_len;
+        unsigned char *mca_buff;
+        uint8_t num_mca;
         uint32_t last_real_seqno;
         uint8_t last_ttl;
         TYPE_OF_WORD bcast_bits[NUM_WORDS];
This commit introduces batman multicast tracker packets. Their job is to mark the nodes responsible for forwarding multicast data later (so a plain multicast receiver will not be marked, only the forwarding nodes).

With the proact_tracking multicast mode activated, a path between all multicast _receivers_ of a group will be marked - in fact, in this mode BATMAN assumes that a multicast receiver is also a multicast sender, so a multicast sender should simply join the same multicast group.

The advantage of this is lower complexity, and the paths are marked in advance, before an actual data packet has been sent, which decreases delays. The disadvantage is a higher protocol overhead.

One large tracker packet is first created on the generating node, which then gets split into one tracker packet per necessary next hop destination.
This commit does not add forwarding of tracker packets but just local generation and local sending of them.
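For orientation, the on-wire format assembled by mcast_proact_tracker_prepare() below nests the destinations of each group directly behind their mcast_entry:

	struct mcast_tracker_packet  (type, version, orig[6], ttl,
	                              num_mcast_entries, 2 bytes padding)
	struct mcast_entry #1        (mcast_addr[6] + num_dest)
	  num_dest * ETH_ALEN        (destination MACs of group #1)
	struct mcast_entry #2
	  ...

so the total size is sizeof(struct mcast_tracker_packet) + used_mcast_entries * sizeof(struct mcast_entry) + dest_entries_total * ETH_ALEN, capped at ETH_DATA_LEN.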
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 hash.h      |    4 +
 multicast.c |  469 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 multicast.h |    3 +
 3 files changed, 476 insertions(+), 0 deletions(-)
diff --git a/hash.h b/hash.h
index 0b61c6e..d61c185 100644
--- a/hash.h
+++ b/hash.h
@@ -28,6 +28,10 @@
                  .index = 0, .walk = NULL, \
                  .safe = NULL}

+#define HASHIT_RESET(name) \
+        name.index = 0, name.walk = NULL; \
+        name.safe = NULL
+
 /* callback to a compare function.  should
  * compare 2 element datas for their keys,
  * return 0 if same and not 0 if not

diff --git a/multicast.c b/multicast.c
index 1ddd37b..bfb1410 100644
--- a/multicast.c
+++ b/multicast.c
@@ -21,6 +21,24 @@

 #include "main.h"
 #include "multicast.h"
+#include "hash.h"
+#include "send.h"
+#include "compat.h"
+
+#define tracker_packet_for_each_dest(mcast_entry, dest_entry, mcast_num, dest_num, tracker_packet) \
+        for (mcast_num = 0, mcast_entry = (struct mcast_entry *)(tracker_packet + 1), \
+             dest_entry = (uint8_t *)(mcast_entry + 1); \
+             mcast_num < tracker_packet->num_mcast_entries; mcast_num++, \
+             mcast_entry = (struct mcast_entry *)dest_entry, \
+             dest_entry = (uint8_t *)(mcast_entry + 1)) \
+                for (dest_num = 0; dest_num < mcast_entry->num_dest; dest_num++, \
+                     dest_entry += ETH_ALEN)
+
+struct dest_entries_list {
+        struct list_head list;
+        uint8_t dest[6];
+        struct batman_if *batman_if;
+};

 /* how long to wait until sending a multicast tracker packet */
 static int tracker_send_delay(struct bat_priv *bat_priv)
@@ -56,11 +74,462 @@ void mcast_tracker_reset(struct bat_priv *bat_priv)
         start_mcast_tracker(bat_priv);
 }

+static inline int find_mca_match(struct orig_node *orig_node,
+                int mca_pos, uint8_t *mc_addr_list, int num_mcast_entries)
+{
+        int pos;
+
+        for (pos = 0; pos < num_mcast_entries; pos++)
+                if (!memcmp(&mc_addr_list[pos*ETH_ALEN],
+                            &orig_node->mca_buff[ETH_ALEN*mca_pos], ETH_ALEN))
+                        return pos;
+        return -1;
+}
+
+/**
+ * Prepares a multicast tracker packet on a multicast member with all its
+ * groups and their members attached. Note that the proactive tracking
+ * mode does not differentiate between multicast senders and receivers,
+ * resulting in tracker packets between each node.
+ *
+ * Returns NULL if this node is not a member of any group or if there are
+ * no other members in its groups.
+ *
+ * @bat_priv: bat_priv for the mesh we are preparing this packet
+ */
+static struct mcast_tracker_packet *mcast_proact_tracker_prepare(
+                struct bat_priv *bat_priv, int *tracker_packet_len)
+{
+        struct net_device *soft_iface = bat_priv->primary_if->soft_iface;
+        uint8_t *mc_addr_list;
+        MC_LIST *mc_entry;
+        struct element_t *bucket;
+        struct orig_node *orig_node;
+
+        /* one dest_entries_list per multicast group,
+         * they'll collect dest_entries[x] */
+        int num_mcast_entries, used_mcast_entries = 0;
+        struct list_head *dest_entries_list;
+        struct dest_entries_list dest_entries[UINT8_MAX], *dest, *tmp;
+        int num_dest_entries, dest_entries_total = 0;
+
+        uint8_t *dest_entry;
+        int pos, mca_pos;
+        unsigned long flags;
+        struct mcast_tracker_packet *tracker_packet = NULL;
+        struct mcast_entry *mcast_entry;
+        HASHIT(hashit);
+
+        /* Make a copy so we don't have to rush because of locking */
+        MC_LIST_LOCK(soft_iface, flags);
+        num_mcast_entries = netdev_mc_count(soft_iface);
+        mc_addr_list = kmalloc(ETH_ALEN * num_mcast_entries, GFP_ATOMIC);
+        if (!mc_addr_list) {
+                MC_LIST_UNLOCK(soft_iface, flags);
+                goto out;
+        }
+        pos = 0;
+        netdev_for_each_mc_addr(mc_entry, soft_iface) {
+                memcpy(&mc_addr_list[pos * ETH_ALEN], mc_entry->MC_LIST_ADDR,
+                       ETH_ALEN);
+                pos++;
+        }
+        MC_LIST_UNLOCK(soft_iface, flags);
+
+        if (num_mcast_entries > UINT8_MAX)
+                num_mcast_entries = UINT8_MAX;
+        dest_entries_list = kmalloc(num_mcast_entries *
+                                    sizeof(struct list_head), GFP_ATOMIC);
+        if (!dest_entries_list)
+                goto free;
+
+        for (pos = 0; pos < num_mcast_entries; pos++)
+                INIT_LIST_HEAD(&dest_entries_list[pos]);
+
+        /* fill the lists and buffers */
+        spin_lock_irqsave(&bat_priv->orig_hash_lock, flags);
+        while (hash_iterate(bat_priv->orig_hash, &hashit)) {
+                bucket = hlist_entry(hashit.walk, struct element_t, hlist);
+                orig_node = bucket->data;
+                if (!orig_node->num_mca)
+                        continue;
+
+                num_dest_entries = 0;
+                for (mca_pos = 0; mca_pos < orig_node->num_mca &&
+                     dest_entries_total != UINT8_MAX; mca_pos++) {
+                        pos = find_mca_match(orig_node, mca_pos, mc_addr_list,
+                                             num_mcast_entries);
+                        if (pos > UINT8_MAX || pos < 0)
+                                continue;
+                        memcpy(dest_entries[dest_entries_total].dest,
+                               orig_node->orig, ETH_ALEN);
+                        list_add(&dest_entries[dest_entries_total].list,
+                                 &dest_entries_list[pos]);
+
+                        num_dest_entries++;
+                        dest_entries_total++;
+                }
+        }
+        spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags);
+
+        /* Any list left empty? */
+        for (pos = 0; pos < num_mcast_entries; pos++)
+                if (!list_empty(&dest_entries_list[pos]))
+                        used_mcast_entries++;
+
+        if (!used_mcast_entries)
+                goto free_all;
+
+        /* prepare tracker packet, finally! */
+        *tracker_packet_len = sizeof(struct mcast_tracker_packet) +
+                              sizeof(struct mcast_entry) * used_mcast_entries +
+                              ETH_ALEN * dest_entries_total;
+        if (*tracker_packet_len > ETH_DATA_LEN) {
+                pr_warning("mcast tracker packet got too large (%i Bytes), "
+                           "forcing reduced size of %i Bytes\n",
+                           *tracker_packet_len, ETH_DATA_LEN);
+                *tracker_packet_len = ETH_DATA_LEN;
+        }
+        tracker_packet = kmalloc(*tracker_packet_len, GFP_ATOMIC);
+
+        tracker_packet->packet_type = BAT_MCAST_TRACKER;
+        tracker_packet->version = COMPAT_VERSION;
+        memcpy(tracker_packet->orig, bat_priv->primary_if->net_dev->dev_addr,
+               ETH_ALEN);
+        tracker_packet->ttl = TTL;
+        tracker_packet->num_mcast_entries = (used_mcast_entries > UINT8_MAX) ?
+                                            UINT8_MAX : used_mcast_entries;
+        memset(tracker_packet->align, 0, sizeof(tracker_packet->align));
+
+        /* append all collected entries */
+        mcast_entry = (struct mcast_entry *)(tracker_packet + 1);
+        for (pos = 0; pos < num_mcast_entries; pos++) {
+                if (list_empty(&dest_entries_list[pos]))
+                        continue;
+
+                if ((char *)(mcast_entry + 1) <=
+                    (char *)tracker_packet + ETH_DATA_LEN) {
+                        memcpy(mcast_entry->mcast_addr,
+                               &mc_addr_list[pos*ETH_ALEN], ETH_ALEN);
+                        mcast_entry->num_dest = 0;
+                }
+
+                dest_entry = (uint8_t *)(mcast_entry + 1);
+                list_for_each_entry_safe(dest, tmp, &dest_entries_list[pos],
+                                         list) {
+                        /* still place for a dest_entry left?
+                         * watch out for overflow here, stop at UINT8_MAX */
+                        if ((char *)dest_entry + ETH_ALEN <=
+                            (char *)tracker_packet + ETH_DATA_LEN &&
+                            mcast_entry->num_dest != UINT8_MAX) {
+                                mcast_entry->num_dest++;
+                                memcpy(dest_entry, dest->dest, ETH_ALEN);
+                                dest_entry += ETH_ALEN;
+                        }
+                        list_del(&dest->list);
+                }
+                /* still space for another mcast_entry left? */
+                if ((char *)(mcast_entry + 1) <=
+                    (char *)tracker_packet + ETH_DATA_LEN)
+                        mcast_entry = (struct mcast_entry *)dest_entry;
+        }
+
+
+        /* outstanding cleanup */
+free_all:
+        kfree(dest_entries_list);
+free:
+        kfree(mc_addr_list);
+out:
+
+        return tracker_packet;
+}
+
+/* Adds the router for the destination address to the next_hop list and its
+ * interface to the forw_if_list - but only if this router has not been
+ * added yet */
+static int add_router_of_dest(struct dest_entries_list *next_hops,
+                              uint8_t *dest, struct bat_priv *bat_priv)
+{
+        struct dest_entries_list *next_hop_tmp, *next_hop_entry;
+        unsigned long flags;
+        struct element_t *bucket;
+        struct orig_node *orig_node;
+        HASHIT(hashit);
+
+        next_hop_entry = kmalloc(sizeof(struct dest_entries_list),
+                                 GFP_ATOMIC);
+        if (!next_hop_entry)
+                return 1;
+
+        next_hop_entry->batman_if = NULL;
+        spin_lock_irqsave(&bat_priv->orig_hash_lock, flags);
+        while (hash_iterate(bat_priv->orig_hash, &hashit)) {
+                bucket = hlist_entry(hashit.walk, struct element_t, hlist);
+                orig_node = bucket->data;
+
+                if (memcmp(orig_node->orig, dest, ETH_ALEN))
+                        continue;
+
+                if (!orig_node->router)
+                        break;
+
+                memcpy(next_hop_entry->dest, orig_node->router->addr,
+                       ETH_ALEN);
+                next_hop_entry->batman_if = orig_node->router->if_incoming;
+                break;
+        }
+        spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags);
+        if (!next_hop_entry->batman_if)
+                goto free;
+
+        list_for_each_entry(next_hop_tmp, &next_hops->list, list)
+                if (!memcmp(next_hop_tmp->dest, next_hop_entry->dest,
+                            ETH_ALEN))
+                        goto free;
+
+        list_add(&next_hop_entry->list, &next_hops->list);
+
+        return 0;
+
+free:
+        kfree(next_hop_entry);
+        return 1;
+}
+
+/* Collect nexthops for all dest entries specified in this tracker packet */
+static int tracker_next_hops(struct mcast_tracker_packet *tracker_packet,
+                             struct dest_entries_list *next_hops,
+                             struct bat_priv *bat_priv)
+{
+        int num_next_hops = 0, mcast_num, dest_num, ret;
+        struct mcast_entry *mcast_entry;
+        uint8_t *dest_entry;
+
+        INIT_LIST_HEAD(&next_hops->list);
+
+        tracker_packet_for_each_dest(mcast_entry, dest_entry,
+                                     mcast_num, dest_num, tracker_packet) {
+                ret = add_router_of_dest(next_hops, dest_entry,
+                                         bat_priv);
+                if (!ret)
+                        num_next_hops++;
+        }
+
+        return num_next_hops;
+}
+
+static void zero_tracker_packet(struct mcast_tracker_packet *tracker_packet,
+                                uint8_t *next_hop, struct bat_priv *bat_priv)
+{
+        struct mcast_entry *mcast_entry;
+        uint8_t *dest_entry;
+        int mcast_num, dest_num;
+
+        unsigned long flags;
+        struct element_t *bucket;
+        struct orig_node *orig_node;
+        HASHIT(hashit);
+
+        spin_lock_irqsave(&bat_priv->orig_hash_lock, flags);
+        tracker_packet_for_each_dest(mcast_entry, dest_entry,
+                                     mcast_num, dest_num, tracker_packet) {
+                while (hash_iterate(bat_priv->orig_hash, &hashit)) {
+                        bucket = hlist_entry(hashit.walk, struct element_t,
+                                             hlist);
+                        orig_node = bucket->data;
+
+                        if (memcmp(orig_node->orig, dest_entry, ETH_ALEN))
+                                continue;
+
+                        /* is the next hop already our destination? */
+                        if (!memcmp(orig_node->orig, next_hop, ETH_ALEN))
+                                memset(dest_entry, '\0', ETH_ALEN);
+                        else if (!orig_node->router)
+                                memset(dest_entry, '\0', ETH_ALEN);
+                        else if (!memcmp(orig_node->orig,
+                                         orig_node->router->orig_node->
+                                         primary_addr, ETH_ALEN))
+                                memset(dest_entry, '\0', ETH_ALEN);
+                        /* is this the wrong next hop for our destination? */
+                        else if (memcmp(orig_node->router->addr,
+                                        next_hop, ETH_ALEN))
+                                memset(dest_entry, '\0', ETH_ALEN);
+
+                        break;
+                }
+                HASHIT_RESET(hashit);
+        }
+        spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags);
+}
+
+static int shrink_tracker_packet(struct mcast_tracker_packet *tracker_packet,
+                                 int tracker_packet_len)
+{
+        struct mcast_entry *mcast_entry;
+        uint8_t *dest_entry;
+        uint8_t *tail = (uint8_t *)tracker_packet + tracker_packet_len;
+        int mcast_num, dest_num;
+        int new_tracker_packet_len = sizeof(struct mcast_tracker_packet);
+
+        tracker_packet_for_each_dest(mcast_entry, dest_entry,
+                                     mcast_num, dest_num, tracker_packet) {
+                if (memcmp(dest_entry, "\0\0\0\0\0\0", ETH_ALEN)) {
+                        new_tracker_packet_len += ETH_ALEN;
+                        continue;
+                }
+
+                memmove(dest_entry, dest_entry + ETH_ALEN,
+                        tail - dest_entry - ETH_ALEN);
+
+                mcast_entry->num_dest--;
+                tail -= ETH_ALEN;
+
+                if (mcast_entry->num_dest) {
+                        dest_num--;
+                        dest_entry -= ETH_ALEN;
+                        continue;
+                }
+
+                /* = mcast_entry */
+                dest_entry -= sizeof(struct mcast_entry);
+
+                memmove(dest_entry, dest_entry + sizeof(struct mcast_entry),
+                        tail - dest_entry - sizeof(struct mcast_entry));
+
+                tracker_packet->num_mcast_entries--;
+                tail -= sizeof(struct mcast_entry);
+
+                mcast_num--;
+
+                /* Avoid mcast_entry check of tracker_packet_for_each_dest's
+                 * inner loop */
+                break;
+        }
+
+        new_tracker_packet_len += sizeof(struct mcast_entry) *
+                                  tracker_packet->num_mcast_entries;
+
+        return new_tracker_packet_len;
+}
+
+static struct sk_buff *build_tracker_packet_skb(
+                struct mcast_tracker_packet *tracker_packet,
+                int tracker_packet_len, uint8_t *dest)
+{
+        struct sk_buff *skb;
+        struct mcast_tracker_packet *skb_tracker_data;
+
+        skb = dev_alloc_skb(tracker_packet_len + sizeof(struct ethhdr));
+        if (!skb)
+                return NULL;
+
+        skb_reserve(skb, sizeof(struct ethhdr));
+        skb_tracker_data = (struct mcast_tracker_packet *)
+                           skb_put(skb, tracker_packet_len);
+
+        memcpy(skb_tracker_data, tracker_packet, tracker_packet_len);
+
+        return skb;
+}
+
+/**
+ * Sends (split parts of) a multicast tracker packet on the corresponding
+ * interfaces.
+ *
+ * @tracker_packet: A compact multicast tracker packet with all groups and
+ *                  destinations attached.
+ */
+void route_mcast_tracker_packet(
+                struct mcast_tracker_packet *tracker_packet,
+                int tracker_packet_len, struct bat_priv *bat_priv)
+{
+        struct dest_entries_list next_hops, *tmp;
+        struct mcast_tracker_packet *next_hop_tracker_packets,
+                                    *next_hop_tracker_packet;
+        struct dest_entries_list *next_hop;
+        struct sk_buff *skb;
+        int num_next_hops, i;
+        int *tracker_packet_lengths;
+
+        rcu_read_lock();
+        num_next_hops = tracker_next_hops(tracker_packet, &next_hops,
+                                          bat_priv);
+        if (!num_next_hops)
+                goto out;
+        next_hop_tracker_packets = kmalloc(tracker_packet_len *
+                                           num_next_hops, GFP_ATOMIC);
+        if (!next_hop_tracker_packets)
+                goto free;
+
+        tracker_packet_lengths = kmalloc(sizeof(int) * num_next_hops,
+                                         GFP_ATOMIC);
+        if (!tracker_packet_lengths)
+                goto free2;
+
+        i = 0;
+        list_for_each_entry_safe(next_hop, tmp, &next_hops.list, list) {
+                next_hop_tracker_packet = (struct mcast_tracker_packet *)
+                                          ((char *)next_hop_tracker_packets +
+                                           i * tracker_packet_len);
+                memcpy(next_hop_tracker_packet, tracker_packet,
+                       tracker_packet_len);
+                zero_tracker_packet(next_hop_tracker_packet, next_hop->dest,
+                                    bat_priv);
+                tracker_packet_lengths[i] = shrink_tracker_packet(
+                                next_hop_tracker_packet, tracker_packet_len);
+                i++;
+        }
+
+        i = 0;
+        /* Add ethernet header, send 'em! */
+        list_for_each_entry_safe(next_hop, tmp, &next_hops.list, list) {
+                if (tracker_packet_lengths[i] ==
+                    sizeof(struct mcast_tracker_packet))
+                        goto skip_send;
+
+                skb = build_tracker_packet_skb(&next_hop_tracker_packets[i],
+                                               tracker_packet_lengths[i],
+                                               next_hop->dest);
+                if (skb)
+                        send_skb_packet(skb, next_hop->batman_if,
+                                        next_hop->dest);
+skip_send:
+                list_del(&next_hop->list);
+                kfree(next_hop);
+                i++;
+        }
+
+        kfree(tracker_packet_lengths);
+        kfree(next_hop_tracker_packets);
+        rcu_read_unlock();
+        return;
+
+free2:
+        kfree(next_hop_tracker_packets);
+free:
+        list_for_each_entry_safe(next_hop, tmp, &next_hops.list, list) {
+                list_del(&next_hop->list);
+                kfree(next_hop);
+        }
+out:
+        rcu_read_unlock();
+}
+
 static void mcast_tracker_timer(struct work_struct *work)
 {
         struct bat_priv *bat_priv = container_of(work, struct bat_priv,
                                                  mcast_tracker_work.work);
+        struct mcast_tracker_packet *tracker_packet = NULL;
+        int tracker_packet_len = 0;
+
+        if (atomic_read(&bat_priv->mcast_mode) == MCAST_MODE_PROACT_TRACKING)
+                tracker_packet = mcast_proact_tracker_prepare(bat_priv,
+                                                &tracker_packet_len);
+
+        if (!tracker_packet)
+                goto out;
+
+        route_mcast_tracker_packet(tracker_packet, tracker_packet_len,
+                                   bat_priv);
+        kfree(tracker_packet);

+out:
         start_mcast_tracker(bat_priv);
 }

diff --git a/multicast.h b/multicast.h
index 26ce6d8..2711d8b 100644
--- a/multicast.h
+++ b/multicast.h
@@ -27,6 +27,9 @@ int mcast_tracker_interval_set(struct net_device *net_dev, char *buff,
 int mcast_tracker_timeout_set(struct net_device *net_dev, char *buff,
                               size_t count);
 void mcast_tracker_reset(struct bat_priv *bat_priv);
+void route_mcast_tracker_packet(
+                struct mcast_tracker_packet *tracker_packet,
+                int tracker_packet_len, struct bat_priv *bat_priv);
 int mcast_init(struct bat_priv *bat_priv);
 void mcast_free(struct bat_priv *bat_priv);
> +#define tracker_packet_for_each_dest(mcast_entry, dest_entry, mcast_num, dest_num, tracker_packet) \
> +        for (mcast_num = 0, mcast_entry = (struct mcast_entry *)(tracker_packet + 1), \
> +             dest_entry = (uint8_t *)(mcast_entry + 1); \
> +             mcast_num < tracker_packet->num_mcast_entries; mcast_num++, \
> +             mcast_entry = (struct mcast_entry *)dest_entry, \
> +             dest_entry = (uint8_t *)(mcast_entry + 1)) \
> +                for (dest_num = 0; dest_num < mcast_entry->num_dest; dest_num++, \
> +                     dest_entry += ETH_ALEN)
It is probably not a good idea to have nested for loops inside a macro like this. What happens with code like:
	tracker_packet_for_each_dest(mcast_entry, dest_entry,
				     mcast_num, dest_num, tracker_packet) {
		...

		if (foo == bar)
			break;

		if (foo)
			continue;
	}
These don't do what you would expect.
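The reason is that break and continue only act on the innermost for: a break inside the block leaves the per-destination loop, after which the outer per-group loop happily carries on with the next mcast_entry. A tiny standalone example (hypothetical, just to illustrate the effect):

	#define for_each_cell(i, j) \
		for (i = 0; i < 3; i++) \
			for (j = 0; j < 3; j++)

	int i, j;

	for_each_cell(i, j) {
		if (i == 1)
			break;	/* exits only the inner j-loop; iteration
				 * resumes with i == 2 instead of stopping */
		printf("%d,%d\n", i, j);
	}

This prints 0,0 through 0,2 and then 2,0 through 2,2 - the break skipped i == 1 but did not terminate the walk, as a caller would expect.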
Andrew
Before/while a tracker packet is being searched for next hops for its destination entries, we now also check whether the number of destination and mcast entries claimed might exceed tracker_packet_len. Otherwise we might read/write unallocated memory. Such a broken tracker packet could potentially occur once route_mcast_tracker_packet gets reused for tracker packets received from a neighbour node.

In such a case, we simply reduce the stated mcast / dest numbers in the tracker packet to fit the size of the allocated buffer.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 multicast.c |   29 +++++++++++++++++++++++++----
 1 files changed, 25 insertions(+), 4 deletions(-)
diff --git a/multicast.c b/multicast.c
index bfb1410..2b3d613 100644
--- a/multicast.c
+++ b/multicast.c
@@ -293,25 +293,46 @@ free:
         return 1;
 }

-/* Collect nexthops for all dest entries specified in this tracker packet */
+/* Collect nexthops for all dest entries specified in this tracker packet.
+ * It also reduces the number of elements in the tracker packet if they exceed
+ * the buffer's length (e.g. because of a received, broken tracker packet) to
+ * avoid writing in unallocated memory. */
 static int tracker_next_hops(struct mcast_tracker_packet *tracker_packet,
+                             int tracker_packet_len,
                              struct dest_entries_list *next_hops,
                              struct bat_priv *bat_priv)
 {
         int num_next_hops = 0, mcast_num, dest_num, ret;
         struct mcast_entry *mcast_entry;
         uint8_t *dest_entry;
+        uint8_t *tail = (uint8_t *)tracker_packet + tracker_packet_len;

         INIT_LIST_HEAD(&next_hops->list);

         tracker_packet_for_each_dest(mcast_entry, dest_entry,
                                      mcast_num, dest_num, tracker_packet) {
+                /* avoid writing outside of unallocated memory later */
+                if (dest_entry + ETH_ALEN > tail) {
+                        bat_dbg(DBG_BATMAN, bat_priv,
+                                "mcast tracker packet is broken, too many "
+                                "entries claimed for its length, repairing");
+
+                        tracker_packet->num_mcast_entries = mcast_num;
+
+                        if (dest_num) {
+                                tracker_packet->num_mcast_entries++;
+                                mcast_entry->num_dest = dest_num;
+                        }
+
+                        goto out;
+                }
+
                 ret = add_router_of_dest(next_hops, dest_entry,
                                          bat_priv);
                 if (!ret)
                         num_next_hops++;
         }
-
+out:
         return num_next_hops;
 }

@@ -450,8 +471,8 @@ void route_mcast_tracker_packet(
         int *tracker_packet_lengths;

         rcu_read_lock();
-        num_next_hops = tracker_next_hops(tracker_packet, &next_hops,
-                                          bat_priv);
+        num_next_hops = tracker_next_hops(tracker_packet, tracker_packet_len,
+                                          &next_hops, bat_priv);
         if (!num_next_hops)
                 goto out;
         next_hop_tracker_packets = kmalloc(tracker_packet_len * num_next_hops,
This commit adds the ability to forward a received multicast tracker packet as well (if necessary). In case of multiple next hop destinations, it makes use of the same splitting methods introduced in one of the previous commits.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 hard-interface.c |    5 +++++
 routing.c        |   19 +++++++++++++++++++
 routing.h        |    1 +
 3 files changed, 25 insertions(+), 0 deletions(-)
diff --git a/hard-interface.c b/hard-interface.c
index 2b502be..3b380e1 100644
--- a/hard-interface.c
+++ b/hard-interface.c
@@ -624,6 +624,11 @@ int batman_skb_recv(struct sk_buff *skb, struct net_device *dev,
                 ret = recv_bcast_packet(skb, batman_if);
                 break;

+        /* multicast tracker packet */
+        case BAT_MCAST_TRACKER:
+                ret = recv_mcast_tracker_packet(skb, batman_if);
+                break;
+
         /* vis packet */
         case BAT_VIS:
                 ret = recv_vis_packet(skb, batman_if);

diff --git a/routing.c b/routing.c
index cf145b7..9c83006 100644
--- a/routing.c
+++ b/routing.c
@@ -35,6 +35,7 @@
 #include "gateway_common.h"
 #include "gateway_client.h"
 #include "unicast.h"
+#include "multicast.h"

 void slide_own_bcast_window(struct batman_if *batman_if)
 {
@@ -1378,6 +1379,24 @@ int recv_bcast_packet(struct sk_buff *skb, struct batman_if *recv_if)
         return NET_RX_SUCCESS;
 }

+int recv_mcast_tracker_packet(struct sk_buff *skb, struct batman_if *recv_if)
+{
+        struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface);
+        struct mcast_tracker_packet *tracker_packet;
+        int hdr_size = sizeof(struct mcast_tracker_packet);
+
+        if (check_unicast_packet(skb, hdr_size) < 0)
+                return NET_RX_DROP;
+
+        tracker_packet = (struct mcast_tracker_packet *)skb->data;
+
+        route_mcast_tracker_packet(tracker_packet, skb->len, bat_priv);
+
+        dev_kfree_skb(skb);
+
+        return NET_RX_SUCCESS;
+}
+
 int recv_vis_packet(struct sk_buff *skb, struct batman_if *recv_if)
 {
         struct vis_packet *vis_packet;

diff --git a/routing.h b/routing.h
index d44b540..ad3f054 100644
--- a/routing.h
+++ b/routing.h
@@ -38,6 +38,7 @@ int recv_icmp_packet(struct sk_buff *skb, struct batman_if *recv_if);
 int recv_unicast_packet(struct sk_buff *skb, struct batman_if *recv_if);
 int recv_ucast_frag_packet(struct sk_buff *skb, struct batman_if *recv_if);
 int recv_bcast_packet(struct sk_buff *skb, struct batman_if *recv_if);
+int recv_mcast_tracker_packet(struct sk_buff *skb, struct batman_if *recv_if);
 int recv_vis_packet(struct sk_buff *skb, struct batman_if *recv_if);
 int recv_bat_packet(struct sk_buff *skb, struct batman_if *recv_if);
 struct neigh_node *find_router(struct bat_priv *bat_priv,
On reception of a multicast tracker packet (whether locally generated or received over an interface), a node now memorizes its forwarding state as a tuple of multicast group, originator, and next hops (plus their corresponding outgoing interfaces).
The first two elements are necessary to determine whether a node shall forward a multicast data packet upon reception later. The next-hop and interface information makes it quick to decide whether a multicast data packet shall be forwarded via unicast to each single next hop or via broadcast.
This commit does not yet purge multicast forwarding table entries after the configured tracker timeout.
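For reference, the nesting of the forwarding table as introduced by the structs below:

mcast_forw_table (per bat_priv, protected by mcast_forw_table_lock)
  mcast_forw_table_entry - one per multicast group MAC
    mcast_forw_orig_entry - one per originator announcing that group
      mcast_forw_if_entry - one per outgoing interface (if_num)
        mcast_forw_nexthop_entry - one per next hop, with a timeout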
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- multicast.c | 278 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++- types.h | 2 + 2 files changed, 277 insertions(+), 3 deletions(-)
diff --git a/multicast.c b/multicast.c index 2b3d613..c7edbef 100644 --- a/multicast.c +++ b/multicast.c @@ -25,6 +25,10 @@ #include "send.h" #include "compat.h"
+/* If auto mode for tracker timeout has been selected, + * how many times of tracker_interval to wait */ +#define TRACKER_TIMEOUT_AUTO_X 5 + #define tracker_packet_for_each_dest(mcast_entry, dest_entry, mcast_num, dest_num, tracker_packet) \ for (mcast_num = 0, mcast_entry = (struct mcast_entry *)(tracker_packet + 1), \ dest_entry = (uint8_t *)(mcast_entry + 1); \ @@ -40,6 +44,34 @@ struct dest_entries_list { struct batman_if *batman_if; };
+ +struct mcast_forw_nexthop_entry { + struct list_head list; + uint8_t neigh_addr[6]; + unsigned long timeout; /* old jiffies value */ +}; + +struct mcast_forw_if_entry { + struct list_head list; + int16_t if_num; + int num_nexthops; + struct list_head mcast_nexthop_list; +}; + +struct mcast_forw_orig_entry { + struct list_head list; + uint8_t orig[6]; + uint32_t last_mcast_seqno; + TYPE_OF_WORD mcast_bits[NUM_WORDS]; + struct list_head mcast_if_list; +}; + +struct mcast_forw_table_entry { + struct list_head list; + uint8_t mcast_addr[6]; + struct list_head mcast_orig_list; +}; + /* how long to wait until sending a multicast tracker packet */ static int tracker_send_delay(struct bat_priv *bat_priv) { @@ -74,6 +106,218 @@ void mcast_tracker_reset(struct bat_priv *bat_priv) start_mcast_tracker(bat_priv); }
+static void prepare_forw_if_entry(struct list_head *forw_if_list, + int16_t if_num, uint8_t *neigh_addr) +{ + struct mcast_forw_if_entry *forw_if_entry; + struct mcast_forw_nexthop_entry *forw_nexthop_entry; + + list_for_each_entry (forw_if_entry, forw_if_list, list) + if (forw_if_entry->if_num == if_num) + goto skip_create_if; + + forw_if_entry = kmalloc(sizeof(struct mcast_forw_if_entry), + GFP_ATOMIC); + if (!forw_if_entry) + return; + + forw_if_entry->if_num = if_num; + forw_if_entry->num_nexthops = 0; + INIT_LIST_HEAD(&forw_if_entry->mcast_nexthop_list); + list_add(&forw_if_entry->list, forw_if_list); + +skip_create_if: + list_for_each_entry (forw_nexthop_entry, + &forw_if_entry->mcast_nexthop_list, list) { + if (!memcmp(forw_nexthop_entry->neigh_addr, + neigh_addr, ETH_ALEN)) + return; + } + + forw_nexthop_entry = kmalloc(sizeof(struct mcast_forw_nexthop_entry), + GFP_ATOMIC); + if (!forw_nexthop_entry && forw_if_entry->num_nexthops) + return; + else if(!forw_nexthop_entry) + goto free; + + memcpy(forw_nexthop_entry->neigh_addr, neigh_addr, ETH_ALEN); + forw_if_entry->num_nexthops++; + if (forw_if_entry->num_nexthops < 0) { + kfree(forw_nexthop_entry); + goto free; + } + + list_add(&forw_nexthop_entry->list, + &forw_if_entry->mcast_nexthop_list); + return; +free: + list_del(&forw_if_entry->list); + kfree(forw_if_entry); +} + +static struct list_head *prepare_forw_table_entry( + struct mcast_forw_table_entry *forw_table, + uint8_t *mcast_addr, uint8_t *orig) +{ + struct mcast_forw_table_entry *forw_table_entry; + struct mcast_forw_orig_entry *orig_entry; + + forw_table_entry = kmalloc(sizeof(struct mcast_forw_table_entry), + GFP_ATOMIC); + if (!forw_table_entry) + return NULL; + + memcpy(forw_table_entry->mcast_addr, mcast_addr, ETH_ALEN); + list_add(&forw_table_entry->list, &forw_table->list); + + INIT_LIST_HEAD(&forw_table_entry->mcast_orig_list); + orig_entry = kmalloc(sizeof(struct mcast_forw_orig_entry), GFP_ATOMIC); + if (!orig_entry) + goto free; + + memcpy(orig_entry->orig, orig, ETH_ALEN); + INIT_LIST_HEAD(&orig_entry->mcast_if_list); + list_add(&orig_entry->list, &forw_table_entry->mcast_orig_list); + + return &orig_entry->mcast_if_list; + +free: + list_del(&forw_table_entry->list); + kfree(forw_table_entry); + return NULL; +} + +static int sync_nexthop(struct mcast_forw_nexthop_entry *sync_nexthop_entry, + struct list_head *nexthop_list) +{ + struct mcast_forw_nexthop_entry *nexthop_entry; + int synced = 0; + + list_for_each_entry(nexthop_entry, nexthop_list, list) { + if (memcmp(sync_nexthop_entry->neigh_addr, + nexthop_entry->neigh_addr, ETH_ALEN)) + continue; + + nexthop_entry->timeout = jiffies; + list_del(&sync_nexthop_entry->list); + kfree(sync_nexthop_entry); + + synced = 1; + break; + } + + if (!synced) { + sync_nexthop_entry->timeout = jiffies; + list_move(&sync_nexthop_entry->list, nexthop_list); + return 1; + } + + return 0; +} + +static void sync_if(struct mcast_forw_if_entry *sync_if_entry, + struct list_head *if_list) +{ + struct mcast_forw_if_entry *if_entry; + struct mcast_forw_nexthop_entry *sync_nexthop_entry, *tmp; + int synced = 0; + + list_for_each_entry(if_entry, if_list, list) { + if (sync_if_entry->if_num != if_entry->if_num) + continue; + + list_for_each_entry_safe(sync_nexthop_entry, tmp, + &sync_if_entry->mcast_nexthop_list, list) + if (sync_nexthop(sync_nexthop_entry, + &if_entry->mcast_nexthop_list)) + if_entry->num_nexthops++; + + list_del(&sync_if_entry->list); + kfree(sync_if_entry); + + synced = 1; + break; + } + + if (!synced) + 
list_move(&sync_if_entry->list, if_list); +} + +/* syncs all multicast entries of sync_table_entry to forw_table */ +static void sync_orig(struct mcast_forw_orig_entry *sync_orig_entry, + struct list_head *orig_list) +{ + struct mcast_forw_orig_entry *orig_entry; + struct mcast_forw_if_entry *sync_if_entry, *tmp; + int synced = 0; + + list_for_each_entry(orig_entry, orig_list, list) { + if (memcmp(sync_orig_entry->orig, + orig_entry->orig, ETH_ALEN)) + continue; + + list_for_each_entry_safe(sync_if_entry, tmp, + &sync_orig_entry->mcast_if_list, list) + sync_if(sync_if_entry, &orig_entry->mcast_if_list); + + list_del(&sync_orig_entry->list); + kfree(sync_orig_entry); + + synced = 1; + break; + } + + if (!synced) + list_move(&sync_orig_entry->list, orig_list); +} + + +/* syncs all multicast entries of sync_table_entry to forw_table */ +static void sync_table(struct mcast_forw_table_entry *sync_table_entry, + struct list_head *forw_table) +{ + struct mcast_forw_table_entry *table_entry; + struct mcast_forw_orig_entry *sync_orig_entry, *tmp; + int synced = 0; + + list_for_each_entry(table_entry, forw_table, list) { + if (memcmp(sync_table_entry->mcast_addr, + table_entry->mcast_addr, ETH_ALEN)) + continue; + + list_for_each_entry_safe(sync_orig_entry, tmp, + &sync_table_entry->mcast_orig_list, list) + sync_orig(sync_orig_entry, + &table_entry->mcast_orig_list); + + list_del(&sync_table_entry->list); + kfree(sync_table_entry); + + synced = 1; + break; + } + + if (!synced) + list_move(&sync_table_entry->list, forw_table); +} + +/* Updates the old multicast forwarding table with the information gained + * from the generated/received tracker packet. It also frees the generated + * table for syncing (*forw_table). */ +static void update_mcast_forw_table(struct mcast_forw_table_entry *forw_table, + struct bat_priv *bat_priv) +{ + struct mcast_forw_table_entry *sync_table_entry, *tmp; + unsigned long flags; + + spin_lock_irqsave(&bat_priv->mcast_forw_table_lock, flags); + list_for_each_entry_safe(sync_table_entry, tmp, &forw_table->list, + list) + sync_table(sync_table_entry, &bat_priv->mcast_forw_table); + spin_unlock_irqrestore(&bat_priv->mcast_forw_table_lock, flags); +} + static inline int find_mca_match(struct orig_node *orig_node, int mca_pos, uint8_t *mc_addr_list, int num_mcast_entries) { @@ -246,13 +490,16 @@ out: * interface to the forw_if_list - but only if this router has not been * added yet */ static int add_router_of_dest(struct dest_entries_list *next_hops, - uint8_t *dest, struct bat_priv *bat_priv) + uint8_t *dest, + struct list_head *forw_if_list, + struct bat_priv *bat_priv) { struct dest_entries_list *next_hop_tmp, *next_hop_entry; unsigned long flags; struct element_t *bucket; struct orig_node *orig_node; HASHIT(hashit); + int16_t if_num;
next_hop_entry = kmalloc(sizeof(struct dest_entries_list), GFP_ATOMIC); if (!next_hop_entry) @@ -273,12 +520,17 @@ static int add_router_of_dest(struct dest_entries_list *next_hops, memcpy(next_hop_entry->dest, orig_node->router->addr, ETH_ALEN); next_hop_entry->batman_if = orig_node->router->if_incoming; + if_num = next_hop_entry->batman_if->if_num; break; } spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags); if (!next_hop_entry->batman_if) goto free;
+ if (forw_if_list) + prepare_forw_if_entry(forw_if_list, if_num, + next_hop_entry->dest); + list_for_each_entry(next_hop_tmp, &next_hops->list, list) if (!memcmp(next_hop_tmp->dest, next_hop_entry->dest, ETH_ALEN)) @@ -300,14 +552,17 @@ free: static int tracker_next_hops(struct mcast_tracker_packet *tracker_packet, int tracker_packet_len, struct dest_entries_list *next_hops, + struct mcast_forw_table_entry *forw_table, struct bat_priv *bat_priv) { int num_next_hops = 0, mcast_num, dest_num, ret; struct mcast_entry *mcast_entry; uint8_t *dest_entry; uint8_t *tail = (uint8_t *)tracker_packet + tracker_packet_len; + struct list_head *forw_table_if = NULL;
INIT_LIST_HEAD(&next_hops->list); + INIT_LIST_HEAD(&forw_table->list);
tracker_packet_for_each_dest(mcast_entry, dest_entry, mcast_num, dest_num, tracker_packet) { @@ -327,8 +582,15 @@ static int tracker_next_hops(struct mcast_tracker_packet *tracker_packet, goto out; }
+ if (dest_num) + goto skip; + + forw_table_if = prepare_forw_table_entry(forw_table, + mcast_entry->mcast_addr, + tracker_packet->orig); +skip: ret = add_router_of_dest(next_hops, dest_entry, - bat_priv); + forw_table_if, bat_priv); if (!ret) num_next_hops++; } @@ -336,6 +598,8 @@ out: return num_next_hops; }
+/* Zero destination entries not destined for the specified next hop in the + * tracker packet */ static void zero_tracker_packet(struct mcast_tracker_packet *tracker_packet, uint8_t *next_hop, struct bat_priv *bat_priv) { @@ -380,6 +644,8 @@ static void zero_tracker_packet(struct mcast_tracker_packet *tracker_packet, spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags); }
+/* Remove zeroed destination entries and empty multicast entries in tracker + * packet */ static int shrink_tracker_packet(struct mcast_tracker_packet *tracker_packet, int tracker_packet_len) { @@ -466,15 +732,19 @@ void route_mcast_tracker_packet( struct mcast_tracker_packet *next_hop_tracker_packets, *next_hop_tracker_packet; struct dest_entries_list *next_hop; + struct mcast_forw_table_entry forw_table; struct sk_buff *skb; int num_next_hops, i; int *tracker_packet_lengths;
rcu_read_lock(); num_next_hops = tracker_next_hops(tracker_packet, tracker_packet_len, - &next_hops, bat_priv); + &next_hops, &forw_table, bat_priv); if (!num_next_hops) goto out; + + update_mcast_forw_table(&forw_table, bat_priv); + next_hop_tracker_packets = kmalloc(tracker_packet_len * num_next_hops, GFP_ATOMIC); if (!next_hop_tracker_packets) @@ -657,6 +927,8 @@ ok: int mcast_init(struct bat_priv *bat_priv) { INIT_DELAYED_WORK(&bat_priv->mcast_tracker_work, mcast_tracker_timer); + INIT_LIST_HEAD(&bat_priv->mcast_forw_table); + start_mcast_tracker(bat_priv);
return 1; diff --git a/types.h b/types.h index 0129b1f..17ccd5a 100644 --- a/types.h +++ b/types.h @@ -153,6 +153,7 @@ struct bat_priv { struct hlist_head forw_bcast_list; struct hlist_head gw_list; struct list_head vis_send_list; + struct list_head mcast_forw_table; struct hashtable_t *orig_hash; struct hashtable_t *hna_local_hash; struct hashtable_t *hna_global_hash; @@ -166,6 +167,7 @@ struct bat_priv { spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ spinlock_t softif_neigh_lock; /* protects soft-interface neigh list */ + spinlock_t mcast_forw_table_lock; /* protects mcast_forw_table */ int16_t num_local_hna; atomic_t hna_local_changed; struct delayed_work hna_work;
With this commit the full multicast forwarding table, which is used for determining whether to forward a multicast data packet or not, can now be displayed via the mcast_forw_table file in batman-adv's debugfs directory.
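A made-up example of the resulting output (addresses, version and timeout values purely illustrative), following the seq_printf format strings used below:

[B.A.T.M.A.N. adv ..., MainIF/MAC: eth0/fe:fe:00:00:01:01 (bat0)]
Multicast group MAC	Originator	Outgoing interface	Nexthop - timeout in msecs
01:00:5e:00:01:02
	fe:fe:00:00:02:01
		eth0
			fe:fe:00:00:03:01 - 4250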
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- bat_debugfs.c | 9 ++++++ multicast.c | 83 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ multicast.h | 1 + 3 files changed, 93 insertions(+), 0 deletions(-)
diff --git a/bat_debugfs.c b/bat_debugfs.c index 0ae81d0..7b1b57d 100644 --- a/bat_debugfs.c +++ b/bat_debugfs.c @@ -32,6 +32,7 @@ #include "soft-interface.h" #include "vis.h" #include "icmp_socket.h" +#include "multicast.h"
static struct dentry *bat_debugfs;
@@ -252,6 +253,12 @@ static int transtable_local_open(struct inode *inode, struct file *file) return single_open(file, hna_local_seq_print_text, net_dev); }
+static int mcast_forw_table_open(struct inode *inode, struct file *file) +{ + struct net_device *net_dev = (struct net_device *)inode->i_private; + return single_open(file, mcast_forw_table_seq_print_text, net_dev); +} + static int vis_data_open(struct inode *inode, struct file *file) { struct net_device *net_dev = (struct net_device *)inode->i_private; @@ -280,6 +287,7 @@ static BAT_DEBUGINFO(gateways, S_IRUGO, gateways_open); static BAT_DEBUGINFO(softif_neigh, S_IRUGO, softif_neigh_open); static BAT_DEBUGINFO(transtable_global, S_IRUGO, transtable_global_open); static BAT_DEBUGINFO(transtable_local, S_IRUGO, transtable_local_open); +static BAT_DEBUGINFO(mcast_forw_table, S_IRUGO, mcast_forw_table_open); static BAT_DEBUGINFO(vis_data, S_IRUGO, vis_data_open);
static struct bat_debuginfo *mesh_debuginfos[] = { @@ -288,6 +296,7 @@ static struct bat_debuginfo *mesh_debuginfos[] = { &bat_debuginfo_softif_neigh, &bat_debuginfo_transtable_global, &bat_debuginfo_transtable_local, + &bat_debuginfo_mcast_forw_table, &bat_debuginfo_vis_data, NULL, }; diff --git a/multicast.c b/multicast.c index c7edbef..edfe7e2 100644 --- a/multicast.c +++ b/multicast.c @@ -106,6 +106,24 @@ void mcast_tracker_reset(struct bat_priv *bat_priv) start_mcast_tracker(bat_priv); }
+static inline int get_remaining_timeout( + struct mcast_forw_nexthop_entry *nexthop_entry, + struct bat_priv *bat_priv) +{ + int tracker_timeout = atomic_read(&bat_priv->mcast_tracker_timeout); + if (!tracker_timeout) + tracker_timeout = atomic_read(&bat_priv->mcast_tracker_interval) + * TRACKER_TIMEOUT_AUTO_X; + if (!tracker_timeout) + tracker_timeout = atomic_read(&bat_priv->orig_interval) + * TRACKER_TIMEOUT_AUTO_X / 2; + + tracker_timeout = jiffies_to_msecs(nexthop_entry->timeout) + + tracker_timeout - jiffies_to_msecs(jiffies); + + return (tracker_timeout > 0 ? tracker_timeout : 0); +} + static void prepare_forw_if_entry(struct list_head *forw_if_list, int16_t if_num, uint8_t *neigh_addr) { @@ -924,6 +942,71 @@ ok: return count; }
+static inline struct batman_if *if_num_to_batman_if(int16_t if_num) +{ + struct batman_if *batman_if; + + list_for_each_entry_rcu(batman_if, &if_list, list) + if (batman_if->if_num == if_num) + return batman_if; + + return NULL; +} + +int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset) +{ + struct net_device *net_dev = (struct net_device *)seq->private; + struct bat_priv *bat_priv = netdev_priv(net_dev); + unsigned long flags; + struct batman_if *batman_if; + struct mcast_forw_table_entry *table_entry; + struct mcast_forw_orig_entry *orig_entry; + struct mcast_forw_if_entry *if_entry; + struct mcast_forw_nexthop_entry *nexthop_entry; + + seq_printf(seq, "[B.A.T.M.A.N. adv %s%s, MainIF/MAC: %s/%pM (%s)]\n", + SOURCE_VERSION, REVISION_VERSION_STR, + bat_priv->primary_if->net_dev->name, + bat_priv->primary_if->net_dev->dev_addr, net_dev->name); + seq_printf(seq, "Multicast group MAC\tOriginator\t" + "Outgoing interface\tNexthop - timeout in msecs\n"); + + rcu_read_lock(); + spin_lock_irqsave(&bat_priv->mcast_forw_table_lock, flags); + list_for_each_entry(table_entry, &bat_priv->mcast_forw_table, list) { + seq_printf(seq, "%pM\n", table_entry->mcast_addr); + + list_for_each_entry(orig_entry, &table_entry->mcast_orig_list, + list) { + seq_printf(seq, "\t%pM\n", orig_entry->orig); + + list_for_each_entry(if_entry, + &orig_entry->mcast_if_list, list) { + batman_if = + if_num_to_batman_if(if_entry->if_num); + if (!batman_if) + continue; + + seq_printf(seq, "\t\t%s\n", + batman_if->net_dev->name); + + list_for_each_entry(nexthop_entry, + &if_entry->mcast_nexthop_list, + list) { + seq_printf(seq, "\t\t\t%pM - %i\n", + nexthop_entry->neigh_addr, + get_remaining_timeout( + nexthop_entry, bat_priv)); + } + } + } + } + spin_unlock_irqrestore(&bat_priv->mcast_forw_table_lock, flags); + rcu_read_unlock(); + + return 0; +} + int mcast_init(struct bat_priv *bat_priv) { INIT_DELAYED_WORK(&bat_priv->mcast_tracker_work, mcast_tracker_timer); diff --git a/multicast.h b/multicast.h index 2711d8b..0bd0590 100644 --- a/multicast.h +++ b/multicast.h @@ -30,6 +30,7 @@ void mcast_tracker_reset(struct bat_priv *bat_priv); void route_mcast_tracker_packet( struct mcast_tracker_packet *tracker_packet, int tracker_packet_len, struct bat_priv *bat_priv); +int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset); int mcast_init(struct bat_priv *bat_priv); void mcast_free(struct bat_priv *bat_priv);
With this commit, the multicast forwarding table, which has previously been filled by received multicast tracker packets, is now checked regularly (once per second) for timed-out entries; such entries get removed from the table.
Note that a more frequent check interval is not necessary: multicast data is only forwarded over an entry that both exists and has not timed out yet, so an entry that expired but has not been purged yet is never used.
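The purge routine below walks the four nesting levels of the forwarding table from the innermost to the outermost, deleting a parent entry as soon as its child list has become empty: first timed-out next hops, then interface entries without next hops, then originator entries without interfaces, and finally group entries without originators.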
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- multicast.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++++ multicast.h | 1 + originator.c | 2 ++ 3 files changed, 54 insertions(+), 0 deletions(-)
diff --git a/multicast.c b/multicast.c index edfe7e2..2b1bfde 100644 --- a/multicast.c +++ b/multicast.c @@ -821,6 +821,57 @@ out: rcu_read_unlock(); }
+void purge_mcast_forw_table(struct bat_priv *bat_priv) +{ + unsigned long flags; + struct mcast_forw_table_entry *table_entry, *tmp_table_entry; + struct mcast_forw_orig_entry *orig_entry, *tmp_orig_entry; + struct mcast_forw_if_entry *if_entry, *tmp_if_entry; + struct mcast_forw_nexthop_entry *nexthop_entry, *tmp_nexthop_entry; + + spin_lock_irqsave(&bat_priv->mcast_forw_table_lock, flags); + list_for_each_entry_safe(table_entry, tmp_table_entry, + &bat_priv->mcast_forw_table, list) { + list_for_each_entry_safe(orig_entry, tmp_orig_entry, + &table_entry->mcast_orig_list, list) { + list_for_each_entry_safe(if_entry, tmp_if_entry, + &orig_entry->mcast_if_list, list) { + list_for_each_entry_safe(nexthop_entry, + tmp_nexthop_entry, + &if_entry->mcast_nexthop_list, + list) { + if (get_remaining_timeout( + nexthop_entry, bat_priv)) + continue; + + list_del(&nexthop_entry->list); + kfree(nexthop_entry); + if_entry->num_nexthops--; + } + + if (!list_empty(&if_entry->mcast_nexthop_list)) + continue; + + list_del(&if_entry->list); + kfree(if_entry); + } + + if (!list_empty(&orig_entry->mcast_if_list)) + continue; + + list_del(&orig_entry->list); + kfree(orig_entry); + } + + if (!list_empty(&table_entry->mcast_orig_list)) + continue; + + list_del(&table_entry->list); + kfree(table_entry); + } + spin_unlock_irqrestore(&bat_priv->mcast_forw_table_lock, flags); +} + static void mcast_tracker_timer(struct work_struct *work) { struct bat_priv *bat_priv = container_of(work, struct bat_priv, diff --git a/multicast.h b/multicast.h index 0bd0590..7312afa 100644 --- a/multicast.h +++ b/multicast.h @@ -30,6 +30,7 @@ void mcast_tracker_reset(struct bat_priv *bat_priv); void route_mcast_tracker_packet( struct mcast_tracker_packet *tracker_packet, int tracker_packet_len, struct bat_priv *bat_priv); +void purge_mcast_forw_table(struct bat_priv *bat_priv); int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset); int mcast_init(struct bat_priv *bat_priv); void mcast_free(struct bat_priv *bat_priv); diff --git a/originator.c b/originator.c index 900d5fc..39ce8d5 100644 --- a/originator.c +++ b/originator.c @@ -30,6 +30,7 @@ #include "hard-interface.h" #include "unicast.h" #include "soft-interface.h" +#include "multicast.h"
static void purge_orig(struct work_struct *work);
@@ -311,6 +312,7 @@ static void purge_orig(struct work_struct *work) struct bat_priv *bat_priv = container_of(delayed_work, struct bat_priv, orig_work);
+ purge_mcast_forw_table(bat_priv); _purge_orig(bat_priv); start_purge_timer(bat_priv); }
On Tue, Dec 07, 2010 at 11:32:22PM +0100, Linus Lüssing wrote:
With this commit, the multicast forwarding table, which has previously been filled by received multicast tracker packets, is now checked regularly (once per second) for timed-out entries; such entries get removed from the table.

Note that a more frequent check interval is not necessary: multicast data is only forwarded over an entry that both exists and has not timed out yet, so an entry that expired but has not been purged yet is never used.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de
multicast.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++++ multicast.h | 1 + originator.c | 2 ++ 3 files changed, 54 insertions(+), 0 deletions(-)
diff --git a/multicast.c b/multicast.c index edfe7e2..2b1bfde 100644 --- a/multicast.c +++ b/multicast.c @@ -821,6 +821,57 @@ out: rcu_read_unlock(); }
+void purge_mcast_forw_table(struct bat_priv *bat_priv) +{
+ unsigned long flags;
+ struct mcast_forw_table_entry *table_entry, *tmp_table_entry;
+ struct mcast_forw_orig_entry *orig_entry, *tmp_orig_entry;
+ struct mcast_forw_if_entry *if_entry, *tmp_if_entry;
+ struct mcast_forw_nexthop_entry *nexthop_entry, *tmp_nexthop_entry;
+ spin_lock_irqsave(&bat_priv->mcast_forw_table_lock, flags);
+ list_for_each_entry_safe(table_entry, tmp_table_entry,
+ &bat_priv->mcast_forw_table, list) {
+ list_for_each_entry_safe(orig_entry, tmp_orig_entry,
+ &table_entry->mcast_orig_list, list) {
+ list_for_each_entry_safe(if_entry, tmp_if_entry,
+ &orig_entry->mcast_if_list, list) {
+ list_for_each_entry_safe(nexthop_entry,
+ tmp_nexthop_entry,
+ &if_entry->mcast_nexthop_list,
+ list) {
I would probably break this up into four functions.
Andrew
This patch adds the capability to encapsulate and send a node's own multicast data packets. Based on the previously established multicast forwarding table, the sender can decide whether it actually has to send the multicast data on one or more of its interfaces or not.
Furthermore, the sending procedure decides whether to broadcast or unicast a multicast data packet to its next hops, depending on the configured mcast_fanout (default: fewer than 3 next hops on an interface, send separate unicast packets).
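With the default fanout of 2 this means, for example: an interface with two tracked next hops gets two separate unicast packets, while an interface with three next hops gets a single broadcast instead (which route_mcast_packet() then sends three times, as with ordinary broadcasts). The mcast_fanout threshold is assumed here to be tunable like the other mesh parameters, e.g. via /sys/class/net/bat0/mesh/mcast_fanout in the usual batman-adv sysfs layout.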
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- multicast.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ multicast.h | 1 + soft-interface.c | 25 +++++++++-- types.h | 1 + 4 files changed, 156 insertions(+), 4 deletions(-)
diff --git a/multicast.c b/multicast.c index 2b1bfde..72249ef 100644 --- a/multicast.c +++ b/multicast.c @@ -23,6 +23,7 @@ #include "multicast.h" #include "hash.h" #include "send.h" +#include "soft-interface.h" #include "compat.h"
/* If auto mode for tracker timeout has been selected, @@ -1058,6 +1059,138 @@ int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset) return 0; }
+static void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv) +{ + struct sk_buff *skb1; + struct mcast_packet *mcast_packet; + struct ethhdr *ethhdr; + struct batman_if *batman_if; + unsigned long flags; + struct mcast_forw_table_entry *table_entry; + struct mcast_forw_orig_entry *orig_entry; + struct mcast_forw_if_entry *if_entry; + struct mcast_forw_nexthop_entry *nexthop_entry; + int mcast_fanout = atomic_read(&bat_priv->mcast_fanout); + int num_bcasts = 3, i; + struct dest_entries_list dest_list, *dest_entry, *tmp; + + mcast_packet = (struct mcast_packet*)skb->data; + ethhdr = (struct ethhdr*)(mcast_packet + 1); + + INIT_LIST_HEAD(&dest_list.list); + + mcast_packet->ttl--; + + rcu_read_lock(); + spin_lock_irqsave(&bat_priv->mcast_forw_table_lock, flags); + list_for_each_entry(table_entry, &bat_priv->mcast_forw_table, list) { + if (memcmp(ethhdr->h_dest, table_entry->mcast_addr, ETH_ALEN)) + continue; + + list_for_each_entry(orig_entry, &table_entry->mcast_orig_list, + list) { + if (memcmp(mcast_packet->orig, + orig_entry->orig, ETH_ALEN)) + continue; + + list_for_each_entry(if_entry, + &orig_entry->mcast_if_list, list) { + batman_if = if_num_to_batman_if( + if_entry->if_num); + + /* send via broadcast */ + if (if_entry->num_nexthops > mcast_fanout) { + dest_entry = kmalloc(sizeof(struct + dest_entries_list), + GFP_ATOMIC); + memcpy(dest_entry->dest, + broadcast_addr, ETH_ALEN); + dest_entry->batman_if = batman_if; + list_add(&dest_entry->list, + &dest_list.list); + continue; + } + + /* send seperate unicast packets */ + list_for_each_entry(nexthop_entry, + &if_entry->mcast_nexthop_list, + list) { + if (!get_remaining_timeout( + nexthop_entry, + bat_priv)) + continue; + + dest_entry = kmalloc(sizeof(struct + dest_entries_list), + GFP_ATOMIC); + memcpy(dest_entry->dest, + nexthop_entry->neigh_addr, + ETH_ALEN); + dest_entry->batman_if = batman_if; + list_add(&dest_entry->list, + &dest_list.list); + } + } + break; + } + break; + } + spin_unlock_irqrestore(&bat_priv->mcast_forw_table_lock, flags); + + list_for_each_entry_safe (dest_entry, tmp, &dest_list.list, list) { + if (is_broadcast_ether_addr(dest_entry->dest)) { + for (i = 0; i < num_bcasts; i++) { + skb1 = skb_clone(skb, GFP_ATOMIC); + send_skb_packet(skb1, dest_entry->batman_if, + dest_entry->dest); + } + } else { + skb1 = skb_clone(skb, GFP_ATOMIC); + send_skb_packet(skb1, dest_entry->batman_if, + dest_entry->dest); + } + list_del(&dest_entry->list); + kfree(dest_entry); + } + rcu_read_unlock(); +} + +int mcast_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv) +{ + struct mcast_packet *mcast_packet; + + if (!bat_priv->primary_if) + goto dropped; + + if (my_skb_head_push(skb, sizeof(struct mcast_packet)) < 0) + goto dropped; + + mcast_packet = (struct mcast_packet *)skb->data; + mcast_packet->version = COMPAT_VERSION; + mcast_packet->ttl = TTL; + + /* batman packet type: broadcast */ + mcast_packet->packet_type = BAT_MCAST; + + /* hw address of first interface is the orig mac because only + * this mac is known throughout the mesh */ + memcpy(mcast_packet->orig, + bat_priv->primary_if->net_dev->dev_addr, ETH_ALEN); + + /* set broadcast sequence number */ + mcast_packet->seqno = + htonl(atomic_inc_return(&bat_priv->mcast_seqno)); + + route_mcast_packet(skb, bat_priv); + + kfree_skb(skb); + return 0; + +dropped: + kfree_skb(skb); + return 1; +} + int mcast_init(struct bat_priv *bat_priv) { INIT_DELAYED_WORK(&bat_priv->mcast_tracker_work, mcast_tracker_timer); diff --git a/multicast.h b/multicast.h index 
7312afa..06dd398 100644 --- a/multicast.h +++ b/multicast.h @@ -32,6 +32,7 @@ void route_mcast_tracker_packet( int tracker_packet_len, struct bat_priv *bat_priv); void purge_mcast_forw_table(struct bat_priv *bat_priv); int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset); +int mcast_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv); int mcast_init(struct bat_priv *bat_priv); void mcast_free(struct bat_priv *bat_priv);
diff --git a/soft-interface.c b/soft-interface.c index 7cea678..2a5a728 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -38,6 +38,7 @@ #include <linux/if_vlan.h> #include "unicast.h" #include "routing.h" +#include "multicast.h"
static int bat_get_settings(struct net_device *dev, struct ethtool_cmd *cmd); @@ -347,7 +348,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) struct vlan_ethhdr *vhdr; int data_len = skb->len, ret; short vid = -1; - bool do_bcast = false; + bool bcast_dst = false, mcast_dst = false;
if (atomic_read(&bat_priv->mesh_state) != MESH_ACTIVE) goto dropped; @@ -384,12 +385,20 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) if (ret < 0) goto dropped;
- if (ret == 0) - do_bcast = true; + /* dhcp request, which should be sent to the gateway directly? */ + if (ret) + goto unicast; + + if (is_broadcast_ether_addr(ethhdr->h_dest)) + bcast_dst = true; + else if (atomic_read(&bat_priv->mcast_mode) == MCAST_MODE_PROACT_TRACKING) + mcast_dst = true; + else + bcast_dst = true; }
/* ethernet packet should be broadcasted */ - if (do_bcast) { + if (bcast_dst) { if (!bat_priv->primary_if) goto dropped;
@@ -418,8 +427,15 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) * the original skb. */ kfree_skb(skb);
+ /* multicast data with path optimization */ + } else if (mcast_dst) { + ret = mcast_send_skb(skb, bat_priv); + if (ret != 0) + goto dropped_freed; + /* unicast packet */ } else { +unicast: ret = unicast_send_skb(skb, bat_priv); if (ret != 0) goto dropped_freed; @@ -608,6 +624,7 @@ struct net_device *softif_create(char *name)
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1); + atomic_set(&bat_priv->mcast_seqno, 1); atomic_set(&bat_priv->hna_local_changed, 0);
bat_priv->primary_if = NULL; diff --git a/types.h b/types.h index 17ccd5a..c12fd2c 100644 --- a/types.h +++ b/types.h @@ -140,6 +140,7 @@ struct bat_priv { atomic_t mcast_fanout; /* uint */ atomic_t log_level; /* uint */ atomic_t bcast_seqno; + atomic_t mcast_seqno; atomic_t bcast_queue_left; atomic_t batman_queue_left; char num_ifaces;
On Tue, Dec 07, 2010 at 11:32:23PM +0100, Linus Lüssing wrote:
This patch adds the capability to encapsulate and send a node's own multicast data packets. Based on the previously established multicast forwarding table, the sender can decide whether it actually has to send the multicast data on one or more of its interfaces or not.

Furthermore, the sending procedure decides whether to broadcast or unicast a multicast data packet to its next hops, depending on the configured mcast_fanout (default: fewer than 3 next hops on an interface, send separate unicast packets).
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de
multicast.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ multicast.h | 1 + soft-interface.c | 25 +++++++++-- types.h | 1 + 4 files changed, 156 insertions(+), 4 deletions(-)
diff --git a/multicast.c b/multicast.c index 2b1bfde..72249ef 100644 --- a/multicast.c +++ b/multicast.c @@ -23,6 +23,7 @@ #include "multicast.h" #include "hash.h" #include "send.h" +#include "soft-interface.h" #include "compat.h"
/* If auto mode for tracker timeout has been selected, @@ -1058,6 +1059,138 @@ int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset) return 0; }
+static void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv) +{
+ struct sk_buff *skb1;
+ struct mcast_packet *mcast_packet;
+ struct ethhdr *ethhdr;
+ struct batman_if *batman_if;
+ unsigned long flags;
+ struct mcast_forw_table_entry *table_entry;
+ struct mcast_forw_orig_entry *orig_entry;
+ struct mcast_forw_if_entry *if_entry;
+ struct mcast_forw_nexthop_entry *nexthop_entry;
+ int mcast_fanout = atomic_read(&bat_priv->mcast_fanout);
+ int num_bcasts = 3, i;
+ struct dest_entries_list dest_list, *dest_entry, *tmp;
+ mcast_packet = (struct mcast_packet*)skb->data;
+ ethhdr = (struct ethhdr*)(mcast_packet + 1);
+ INIT_LIST_HEAD(&dest_list.list);
+ mcast_packet->ttl--;
+ rcu_read_lock();
+ spin_lock_irqsave(&bat_priv->mcast_forw_table_lock, flags);
+ list_for_each_entry(table_entry, &bat_priv->mcast_forw_table, list) {
+ if (memcmp(ethhdr->h_dest, table_entry->mcast_addr, ETH_ALEN))
+ continue;
+ list_for_each_entry(orig_entry, &table_entry->mcast_orig_list,
+ list) {
+ if (memcmp(mcast_packet->orig,
+ orig_entry->orig, ETH_ALEN))
+ continue;
+ list_for_each_entry(if_entry,
+ &orig_entry->mcast_if_list, list) {
+ batman_if = if_num_to_batman_if(
+ if_entry->if_num);
+ /* send via broadcast */
+ if (if_entry->num_nexthops > mcast_fanout) {
+ dest_entry = kmalloc(sizeof(struct
+ dest_entries_list),
+ GFP_ATOMIC);
+ memcpy(dest_entry->dest,
+ broadcast_addr, ETH_ALEN);
+ dest_entry->batman_if = batman_if;
+ list_add(&dest_entry->list,
+ &dest_list.list);
+ continue;
+ }
+ /* send seperate unicast packets */
+ list_for_each_entry(nexthop_entry,
+ &if_entry->mcast_nexthop_list,
+ list) {
+ if (!get_remaining_timeout(
+ nexthop_entry,
+ bat_priv))
+ continue;
Again, refactor this into four functions.
Andrew
We need to check similar things for BAT_MCAST packets later, too, therefore move these checks into a separate function.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- routing.c | 43 ++++++++++++++++++++++++++----------------- 1 files changed, 26 insertions(+), 17 deletions(-)
diff --git a/routing.c b/routing.c index 9c83006..ff74bd1 100644 --- a/routing.c +++ b/routing.c @@ -1167,6 +1167,31 @@ static int check_unicast_packet(struct sk_buff *skb, int hdr_size) return 0; }
+static int check_broadcast_packet(struct sk_buff *skb, int hdr_size) +{ + struct ethhdr *ethhdr; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, hdr_size))) + return -1; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with broadcast indication but unicast recipient */ + if (!is_broadcast_ether_addr(ethhdr->h_dest)) + return -1; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + return -1; + + /* ignore broadcasts sent by myself */ + if (is_my_mac(ethhdr->h_source)) + return -1; + + return 0; +} + int route_unicast_packet(struct sk_buff *skb, struct batman_if *recv_if, int hdr_size) { @@ -1306,26 +1331,10 @@ int recv_bcast_packet(struct sk_buff *skb, struct batman_if *recv_if) struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); struct orig_node *orig_node; struct bcast_packet *bcast_packet; - struct ethhdr *ethhdr; int hdr_size = sizeof(struct bcast_packet); int32_t seq_diff;
- /* drop packet if it has not necessary minimum size */ - if (unlikely(!pskb_may_pull(skb, hdr_size))) - return NET_RX_DROP; - - ethhdr = (struct ethhdr *)skb_mac_header(skb); - - /* packet with broadcast indication but unicast recipient */ - if (!is_broadcast_ether_addr(ethhdr->h_dest)) - return NET_RX_DROP; - - /* packet with broadcast sender address */ - if (is_broadcast_ether_addr(ethhdr->h_source)) - return NET_RX_DROP; - - /* ignore broadcasts sent by myself */ - if (is_my_mac(ethhdr->h_source)) + if (check_broadcast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
bcast_packet = (struct bcast_packet *)skb->data;
This patch adds the delivery of multicast data packets to the local soft interface if the receiving node is a member of the multicast group specified in the multicast packet.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- hard-interface.c | 5 +++++ routing.c | 30 ++++++++++++++++++++++++++++++ routing.h | 1 + 3 files changed, 36 insertions(+), 0 deletions(-)
diff --git a/hard-interface.c b/hard-interface.c index 3b380e1..b668ae6 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -624,6 +624,11 @@ int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, ret = recv_bcast_packet(skb, batman_if); break;
+ /* multicast packet */ + case BAT_MCAST: + ret = recv_mcast_packet(skb, batman_if); + break; + /* multicast tracker packet */ case BAT_MCAST_TRACKER: ret = recv_mcast_tracker_packet(skb, batman_if); diff --git a/routing.c b/routing.c index ff74bd1..b1aad26 100644 --- a/routing.c +++ b/routing.c @@ -1388,6 +1388,36 @@ int recv_bcast_packet(struct sk_buff *skb, struct batman_if *recv_if) return NET_RX_SUCCESS; }
+int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if) +{ + struct ethhdr *ethhdr; + MC_LIST *mc_entry; + unsigned long flags; + int ret = 1; + int hdr_size = sizeof(struct mcast_packet); + + /* multicast data packets might be received via unicast or broadcast */ + if (check_unicast_packet(skb, hdr_size) < 0 && + check_broadcast_packet(skb, hdr_size) < 0) + return NET_RX_DROP; + + ethhdr = (struct ethhdr *)(skb->data + sizeof(struct mcast_packet)); + + /* multicast for me? */ + MC_LIST_LOCK(recv_if->soft_iface, flags); + netdev_for_each_mc_addr(mc_entry, recv_if->soft_iface) { + ret = memcmp(mc_entry->MC_LIST_ADDR, ethhdr->h_dest, ETH_ALEN); + if (!ret) + break; + } + MC_LIST_UNLOCK(recv_if->soft_iface, flags); + + if (!ret) + interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size); + + return NET_RX_SUCCESS; +} + int recv_mcast_tracker_packet(struct sk_buff *skb, struct batman_if *recv_if) { struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); diff --git a/routing.h b/routing.h index ad3f054..6b45212 100644 --- a/routing.h +++ b/routing.h @@ -38,6 +38,7 @@ int recv_icmp_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_ucast_frag_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_bcast_packet(struct sk_buff *skb, struct batman_if *recv_if); +int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_mcast_tracker_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_vis_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_bat_packet(struct sk_buff *skb, struct batman_if *recv_if);
This patch enables the forwarding of multicast data packets and uses the same methods for deciding whether to forward via broadcast or unicast(s) as the local packet encapsulation already does.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- multicast.c | 2 +- multicast.h | 1 + routing.c | 4 ++++ 3 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/multicast.c b/multicast.c index 72249ef..042d392 100644 --- a/multicast.c +++ b/multicast.c @@ -1059,7 +1059,7 @@ int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset) return 0; }
-static void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv) +void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv) { struct sk_buff *skb1; struct mcast_packet *mcast_packet; diff --git a/multicast.h b/multicast.h index 06dd398..6dcf537 100644 --- a/multicast.h +++ b/multicast.h @@ -32,6 +32,7 @@ void route_mcast_tracker_packet( int tracker_packet_len, struct bat_priv *bat_priv); void purge_mcast_forw_table(struct bat_priv *bat_priv); int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset); +void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv); int mcast_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv); int mcast_init(struct bat_priv *bat_priv); void mcast_free(struct bat_priv *bat_priv); diff --git a/routing.c b/routing.c index b1aad26..f9582be 100644 --- a/routing.c +++ b/routing.c @@ -1390,6 +1390,7 @@ int recv_bcast_packet(struct sk_buff *skb, struct batman_if *recv_if)
int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if) { + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); struct ethhdr *ethhdr; MC_LIST *mc_entry; unsigned long flags; @@ -1401,6 +1402,9 @@ int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if) check_broadcast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
+ /* forward multicast packet if necessary */ + route_mcast_packet(skb, bat_priv); + ethhdr = (struct ethhdr *)(skb->data + sizeof(struct mcast_packet));
/* multicast for me? */
This commit adds duplicate checks to avoid endless rebroadcasts when multicast data packets are forwarded via broadcast. As with ordinary broadcasts, a per-originator sequence number window (mcast_bits / last_mcast_seqno) marks packets already seen, and a reset-protection timestamp (mcast_seqno_reset) guards against sequence number resets after a host restart.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- originator.c | 2 ++ routing.c | 48 +++++++++++++++++++++++++++++++++++++++++++++++- types.h | 3 +++ 3 files changed, 52 insertions(+), 1 deletions(-)
diff --git a/originator.c b/originator.c index 39ce8d5..f882292 100644 --- a/originator.c +++ b/originator.c @@ -154,6 +154,8 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) orig_node->num_mca = 0; orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); + orig_node->mcast_seqno_reset = jiffies - 1 + - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS);
diff --git a/routing.c b/routing.c index f9582be..19f045a 100644 --- a/routing.c +++ b/routing.c @@ -1391,8 +1391,11 @@ int recv_bcast_packet(struct sk_buff *skb, struct batman_if *recv_if) int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if) { struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct orig_node *orig_node; + struct mcast_packet *mcast_packet; struct ethhdr *ethhdr; MC_LIST *mc_entry; + int32_t seq_diff; unsigned long flags; int ret = 1; int hdr_size = sizeof(struct mcast_packet); @@ -1402,10 +1405,53 @@ int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if) check_broadcast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
+ mcast_packet = (struct mcast_packet *)skb->data; + + /* ignore broadcasts originated by myself */ + if (is_my_mac(mcast_packet->orig)) + return NET_RX_DROP; + + if (mcast_packet->ttl < 2) + return NET_RX_DROP; + + spin_lock_irqsave(&bat_priv->orig_hash_lock, flags); + orig_node = ((struct orig_node *) + hash_find(bat_priv->orig_hash, compare_orig, choose_orig, + mcast_packet->orig)); + + if (orig_node == NULL) { + spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags); + return NET_RX_DROP; + } + + /* check whether the packet is a duplicate */ + if (get_bit_status(orig_node->mcast_bits, + orig_node->last_mcast_seqno, + ntohl(mcast_packet->seqno))) { + spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags); + return NET_RX_DROP; + } + + seq_diff = ntohl(mcast_packet->seqno) - orig_node->last_mcast_seqno; + + /* check whether the packet is old and the host just restarted. */ + if (window_protected(bat_priv, seq_diff, + &orig_node->mcast_seqno_reset)) { + spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags); + return NET_RX_DROP; + } + + /* mark broadcast in flood history, update window position + * if required. */ + if (bit_get_packet(bat_priv, orig_node->mcast_bits, seq_diff, 1)) + orig_node->last_mcast_seqno = ntohl(mcast_packet->seqno); + + spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags); + /* forward multicast packet if necessary */ route_mcast_packet(skb, bat_priv);
- ethhdr = (struct ethhdr *)(skb->data + sizeof(struct mcast_packet)); + ethhdr = (struct ethhdr *)(mcast_packet + 1);
/* multicast for me? */ MC_LIST_LOCK(recv_if->soft_iface, flags); diff --git a/types.h b/types.h index c12fd2c..890822f 100644 --- a/types.h +++ b/types.h @@ -74,6 +74,7 @@ struct orig_node { int tq_asym_penalty; unsigned long last_valid; unsigned long bcast_seqno_reset; + unsigned long mcast_seqno_reset; unsigned long batman_seqno_reset; uint8_t gw_flags; uint8_t flags; @@ -84,7 +85,9 @@ struct orig_node { uint32_t last_real_seqno; uint8_t last_ttl; TYPE_OF_WORD bcast_bits[NUM_WORDS]; + TYPE_OF_WORD mcast_bits[NUM_WORDS]; uint32_t last_bcast_seqno; + uint32_t last_mcast_seqno; struct list_head neigh_list; struct list_head frag_list; unsigned long last_frag_packet;
We may only optimize the multicast packet flow if a multicast mode has been activated and if we are a multicast receiver of the same group. Otherwise, flood the multicast packet through the mesh without optimizations.
This allows us to keep flooding, instead of dropping, multicast packets of protocols where it is not easily possible for a multicast sender to also be a multicast receiver of the same group (for instance IPv6 NDP).
This commit therefore also makes IPv6 usable again if the proact_tracking multicast mode has been activated.
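As a concrete example (addresses illustrative): an IPv6 neighbor solicitation for fe80::1 is sent to the solicited-node group ff02::1:ff00:1, i.e. to the multicast MAC 33:33:ff:00:00:01 - a group which the soliciting node itself usually has not joined. mcast_may_optimize() therefore returns 0 for such a frame, and it keeps being flooded classically instead of being dropped for lack of a forwarding table entry.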
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- multicast.c | 25 +++++++++++++++++++++++++ multicast.h | 1 + soft-interface.c | 2 +- 3 files changed, 27 insertions(+), 1 deletions(-)
diff --git a/multicast.c b/multicast.c index 042d392..1f84f7c 100644 --- a/multicast.c +++ b/multicast.c @@ -107,6 +107,31 @@ void mcast_tracker_reset(struct bat_priv *bat_priv) start_mcast_tracker(bat_priv); }
+inline int mcast_may_optimize(uint8_t *dest, struct net_device *soft_iface) { + MC_LIST *mc_entry; + unsigned long flags; + struct bat_priv *bat_priv = netdev_priv(soft_iface); + int mcast_mode = atomic_read(&bat_priv->mcast_mode); + + if (mcast_mode != MCAST_MODE_PROACT_TRACKING) + return 0; + + /* Still allow flooding of multicast packets of protocols where it is + * not easily possible for a multicast sender to be a multicast + * receiver of the same group (for instance IPv6 NDP) */ + MC_LIST_LOCK(soft_iface, flags); + netdev_for_each_mc_addr(mc_entry, soft_iface) { + if (memcmp(dest, mc_entry->MC_LIST_ADDR, ETH_ALEN)) + continue; + + MC_LIST_UNLOCK(soft_iface, flags); + return 1; + } + MC_LIST_UNLOCK(soft_iface, flags); + + return 0; +} + static inline int get_remaining_timeout( struct mcast_forw_nexthop_entry *nexthop_entry, struct bat_priv *bat_priv) diff --git a/multicast.h b/multicast.h index 6dcf537..630a0ae 100644 --- a/multicast.h +++ b/multicast.h @@ -27,6 +27,7 @@ int mcast_tracker_interval_set(struct net_device *net_dev, char *buff, int mcast_tracker_timeout_set(struct net_device *net_dev, char *buff, size_t count); void mcast_tracker_reset(struct bat_priv *bat_priv); +int mcast_may_optimize(uint8_t *dest, struct net_device *soft_iface); void route_mcast_tracker_packet( struct mcast_tracker_packet *tracker_packet, int tracker_packet_len, struct bat_priv *bat_priv); diff --git a/soft-interface.c b/soft-interface.c index 2a5a728..444028d 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -391,7 +391,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface)
if (is_broadcast_ether_addr(ethhdr->h_dest)) bcast_dst = true; - else if (atomic_read(&bat_priv->mcast_mode) == MCAST_MODE_PROACT_TRACKING) + else if (mcast_may_optimize(ethhdr->h_dest, soft_iface)) mcast_dst = true; else bcast_dst = true;
On Tue, Dec 07, 2010 at 11:32:28PM +0100, Linus Lüssing wrote:
We may only optimize the multicast packet flow if a multicast mode has been activated and if we are a multicast receiver of the same group. Otherwise, flood the multicast packet through the mesh without optimizations.

This allows us to keep flooding, instead of dropping, multicast packets of protocols where it is not easily possible for a multicast sender to also be a multicast receiver of the same group (for instance IPv6 NDP).

This commit therefore also makes IPv6 usable again if the proact_tracking multicast mode has been activated.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de
multicast.c | 25 +++++++++++++++++++++++++ multicast.h | 1 + soft-interface.c | 2 +- 3 files changed, 27 insertions(+), 1 deletions(-)
diff --git a/multicast.c b/multicast.c index 042d392..1f84f7c 100644 --- a/multicast.c +++ b/multicast.c @@ -107,6 +107,31 @@ void mcast_tracker_reset(struct bat_priv *bat_priv) start_mcast_tracker(bat_priv); }
+inline int mcast_may_optimize(uint8_t *dest, struct net_device *soft_iface) {
+ MC_LIST *mc_entry;
+ unsigned long flags;
+ struct bat_priv *bat_priv = netdev_priv(soft_iface);
+ int mcast_mode = atomic_read(&bat_priv->mcast_mode);

--- a/soft-interface.c +++ b/soft-interface.c @@ -391,7 +391,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface)

if (is_broadcast_ether_addr(ethhdr->h_dest)) bcast_dst = true;
- else if (atomic_read(&bat_priv->mcast_mode) == MCAST_MODE_PROACT_TRACKING)
+ else if (mcast_may_optimize(ethhdr->h_dest, soft_iface))
mcast_dst = true; else bcast_dst = true;
You define mcast_may_optimize as inline, and then use it from a different file. This makes the inline pointless, as far as I know.
Andrew
Depending on the scenario, people might want to adjust the number of (re)broadcasts of data packets - usually to higher values in sparse or lower values in dense networks.
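Assuming the usual batman-adv sysfs layout, the new tunable can then be adjusted at runtime, for example:

echo 5 > /sys/class/net/bat0/mesh/num_bcasts

to make a node send five rebroadcasts per broadcast packet.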
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- bat_sysfs.c | 2 ++ send.c | 3 ++- types.h | 1 + 3 files changed, 5 insertions(+), 1 deletions(-)
diff --git a/bat_sysfs.c b/bat_sysfs.c index 8f688db..7135c08 100644 --- a/bat_sysfs.c +++ b/bat_sysfs.c @@ -520,6 +520,7 @@ static BAT_ATTR(gw_mode, S_IRUGO | S_IWUSR, show_gw_mode, store_gw_mode); BAT_ATTR_UINT(orig_interval, S_IRUGO | S_IWUSR, 2 * JITTER, INT_MAX, update_mcast_tracker); BAT_ATTR_UINT(hop_penalty, S_IRUGO | S_IWUSR, 0, TQ_MAX_VALUE, NULL); +BAT_ATTR_UINT(num_bcasts, S_IRUGO | S_IWUSR, 0, INT_MAX, NULL); BAT_ATTR_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE, post_gw_deselect); static BAT_ATTR(gw_bandwidth, S_IRUGO | S_IWUSR, show_gw_bwidth, @@ -544,6 +545,7 @@ static struct bat_attribute *mesh_attrs[] = { &bat_attr_gw_mode, &bat_attr_orig_interval, &bat_attr_hop_penalty, + &bat_attr_num_bcasts, &bat_attr_gw_sel_class, &bat_attr_gw_bandwidth, &bat_attr_mcast_mode, diff --git a/send.c b/send.c index ba7ebfe..26a6c99 100644 --- a/send.c +++ b/send.c @@ -512,6 +512,7 @@ static void send_outstanding_bcast_packet(struct work_struct *work) struct sk_buff *skb1; struct net_device *soft_iface = forw_packet->if_incoming->soft_iface; struct bat_priv *bat_priv = netdev_priv(soft_iface); + int num_bcasts = atomic_read(&bat_priv->num_bcasts);
spin_lock_bh(&bat_priv->forw_bcast_list_lock); hlist_del(&forw_packet->list); @@ -536,7 +537,7 @@ static void send_outstanding_bcast_packet(struct work_struct *work) forw_packet->num_packets++;
/* if we still have some more bcasts to send */ - if (forw_packet->num_packets < 3) { + if (forw_packet->num_packets < num_bcasts) { _add_bcast_packet_to_list(bat_priv, forw_packet, ((5 * HZ) / 1000)); return; diff --git a/types.h b/types.h index 890822f..938fc6d 100644 --- a/types.h +++ b/types.h @@ -137,6 +137,7 @@ struct bat_priv { atomic_t gw_bandwidth; /* gw bandwidth */ atomic_t orig_interval; /* uint */ atomic_t hop_penalty; /* uint */ + atomic_t num_bcasts; /* uint */ atomic_t mcast_mode; /* MCAST_MODE_* */ atomic_t mcast_tracker_interval;/* uint, auto */ atomic_t mcast_tracker_timeout; /* uint, auto */
We are never going to take those spinlocks in hardware interrupt context for the multicast specific code, therefore just disabling bottom halves is enough.
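In short, the pattern applied throughout this patch (a sketch of the actual changes below):

/* before: disables hard interrupts, saves/restores the irq flags */
spin_lock_irqsave(&bat_priv->mcast_forw_table_lock, flags);
/* ... critical section ... */
spin_unlock_irqrestore(&bat_priv->mcast_forw_table_lock, flags);

/* after: only disables bottom halves, no flags variable needed */
spin_lock_bh(&bat_priv->mcast_forw_table_lock);
/* ... critical section ... */
spin_unlock_bh(&bat_priv->mcast_forw_table_lock);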
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- compat.h | 16 ++++++++-------- multicast.c | 48 ++++++++++++++++++++---------------------------- routing.c | 15 +++++++-------- send.c | 10 ++++------ 4 files changed, 39 insertions(+), 50 deletions(-)
diff --git a/compat.h b/compat.h index bbb1dad..8836fff 100644 --- a/compat.h +++ b/compat.h @@ -297,19 +297,19 @@ int bat_seq_printf(struct seq_file *m, const char *f, ...); */ #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 27)
-#define MC_LIST_LOCK(soft_iface, flags) \ - spin_lock_irqsave(&soft_iface->_xmit_lock, flags) -#define MC_LIST_UNLOCK(soft_iface, flags) \ - spin_unlock_irqrestore(&soft_iface->_xmit_lock, flags) +#define MC_LIST_LOCK(soft_iface) \ + netif_tx_lock_bh(soft_iface) +#define MC_LIST_UNLOCK(soft_iface) \ + netif_tx_unlock_bh(soft_iface)
#endif /* LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 27) */
#if LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 26)
-#define MC_LIST_LOCK(soft_iface, flags) \ - spin_lock_irqsave(&soft_iface->addr_list_lock, flags) -#define MC_LIST_UNLOCK(soft_iface, flags) \ - spin_unlock_irqrestore(&soft_iface->addr_list_lock, flags) +#define MC_LIST_LOCK(soft_iface) \ + netif_addr_lock_bh(soft_iface) +#define MC_LIST_UNLOCK(soft_iface) \ + netif_addr_unlock_bh(soft_iface)
#endif /* > KERNEL_VERSION(2, 6, 26) */
diff --git a/multicast.c b/multicast.c index 1f84f7c..4681046 100644 --- a/multicast.c +++ b/multicast.c @@ -109,7 +109,6 @@ void mcast_tracker_reset(struct bat_priv *bat_priv)
inline int mcast_may_optimize(uint8_t *dest, struct net_device *soft_iface) { MC_LIST *mc_entry; - unsigned long flags; struct bat_priv *bat_priv = netdev_priv(soft_iface); int mcast_mode = atomic_read(&bat_priv->mcast_mode);
@@ -119,15 +118,15 @@ inline int mcast_may_optimize(uint8_t *dest, struct net_device *soft_iface) { /* Still allow flooding of multicast packets of protocols where it is * not easily possible for a multicast sender to be a multicast * receiver of the same group (for instance IPv6 NDP) */ - MC_LIST_LOCK(soft_iface, flags); + MC_LIST_LOCK(soft_iface); netdev_for_each_mc_addr(mc_entry, soft_iface) { if (memcmp(dest, mc_entry->MC_LIST_ADDR, ETH_ALEN)) continue;
- MC_LIST_UNLOCK(soft_iface, flags); + MC_LIST_UNLOCK(soft_iface); return 1; } - MC_LIST_UNLOCK(soft_iface, flags); + MC_LIST_UNLOCK(soft_iface);
return 0; } @@ -353,13 +352,12 @@ static void update_mcast_forw_table(struct mcast_forw_table_entry *forw_table, struct bat_priv *bat_priv) { struct mcast_forw_table_entry *sync_table_entry, *tmp; - unsigned long flags;
- spin_lock_irqsave(&bat_priv->mcast_forw_table_lock, flags); + spin_lock_bh(&bat_priv->mcast_forw_table_lock); list_for_each_entry_safe(sync_table_entry, tmp, &forw_table->list, list) sync_table(sync_table_entry, &bat_priv->mcast_forw_table); - spin_unlock_irqrestore(&bat_priv->mcast_forw_table_lock, flags); + spin_unlock_bh(&bat_priv->mcast_forw_table_lock); }
 static inline int find_mca_match(struct orig_node *orig_node,
@@ -403,17 +401,16 @@ static struct mcast_tracker_packet *mcast_proact_tracker_prepare(
 	uint8_t *dest_entry;
 	int pos, mca_pos;
-	unsigned long flags;
 	struct mcast_tracker_packet *tracker_packet = NULL;
 	struct mcast_entry *mcast_entry;
 	HASHIT(hashit);
 
 	/* Make a copy so we don't have to rush because of locking */
-	MC_LIST_LOCK(soft_iface, flags);
+	MC_LIST_LOCK(soft_iface);
 	num_mcast_entries = netdev_mc_count(soft_iface);
 	mc_addr_list = kmalloc(ETH_ALEN * num_mcast_entries, GFP_ATOMIC);
 	if (!mc_addr_list) {
-		MC_LIST_UNLOCK(soft_iface, flags);
+		MC_LIST_UNLOCK(soft_iface);
 		goto out;
 	}
 	pos = 0;
@@ -422,7 +419,7 @@ static struct mcast_tracker_packet *mcast_proact_tracker_prepare(
 		       ETH_ALEN);
 		pos++;
 	}
-	MC_LIST_UNLOCK(soft_iface, flags);
+	MC_LIST_UNLOCK(soft_iface);
 
 	if (num_mcast_entries > UINT8_MAX)
 		num_mcast_entries = UINT8_MAX;
@@ -435,7 +432,7 @@ static struct mcast_tracker_packet *mcast_proact_tracker_prepare(
 		INIT_LIST_HEAD(&dest_entries_list[pos]);
 
 	/* fill the lists and buffers */
-	spin_lock_irqsave(&bat_priv->orig_hash_lock, flags);
+	spin_lock_bh(&bat_priv->orig_hash_lock);
 	while (hash_iterate(bat_priv->orig_hash, &hashit)) {
 		bucket = hlist_entry(hashit.walk, struct element_t, hlist);
 		orig_node = bucket->data;
@@ -455,7 +452,7 @@ static struct mcast_tracker_packet *mcast_proact_tracker_prepare(
 			dest_entries_total++;
 		}
 	}
-	spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags);
+	spin_unlock_bh(&bat_priv->orig_hash_lock);
 
 	/* Any list left empty? */
 	for (pos = 0; pos < num_mcast_entries; pos++)
@@ -539,7 +536,6 @@ static int add_router_of_dest(struct dest_entries_list *next_hops,
 			      struct bat_priv *bat_priv)
 {
 	struct dest_entries_list *next_hop_tmp, *next_hop_entry;
-	unsigned long flags;
 	struct element_t *bucket;
 	struct orig_node *orig_node;
 	HASHIT(hashit);
@@ -550,7 +546,7 @@ static int add_router_of_dest(struct dest_entries_list *next_hops,
 		return 1;
 
 	next_hop_entry->batman_if = NULL;
-	spin_lock_irqsave(&bat_priv->orig_hash_lock, flags);
+	spin_lock_bh(&bat_priv->orig_hash_lock);
 	while (hash_iterate(bat_priv->orig_hash, &hashit)) {
 		bucket = hlist_entry(hashit.walk, struct element_t, hlist);
 		orig_node = bucket->data;
@@ -567,7 +563,7 @@ static int add_router_of_dest(struct dest_entries_list *next_hops,
 		if_num = next_hop_entry->batman_if->if_num;
 		break;
 	}
-	spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags);
+	spin_unlock_bh(&bat_priv->orig_hash_lock);
 	if (!next_hop_entry->batman_if)
 		goto free;
 
@@ -651,12 +647,11 @@ static void zero_tracker_packet(struct mcast_tracker_packet *tracker_packet,
 	uint8_t *dest_entry;
 	int mcast_num, dest_num;
-	unsigned long flags;
 	struct element_t *bucket;
 	struct orig_node *orig_node;
 	HASHIT(hashit);
 
-	spin_lock_irqsave(&bat_priv->orig_hash_lock, flags);
+	spin_lock_bh(&bat_priv->orig_hash_lock);
 	tracker_packet_for_each_dest(mcast_entry, dest_entry,
 				     mcast_num, dest_num, tracker_packet) {
 		while (hash_iterate(bat_priv->orig_hash, &hashit)) {
@@ -685,7 +680,7 @@ static void zero_tracker_packet(struct mcast_tracker_packet *tracker_packet,
 		}
 		HASHIT_RESET(hashit);
 	}
-	spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags);
+	spin_unlock_bh(&bat_priv->orig_hash_lock);
 }
 
 /* Remove zeroed destination entries and empty multicast entries in tracker
@@ -849,13 +844,12 @@ out:
 void purge_mcast_forw_table(struct bat_priv *bat_priv)
 {
-	unsigned long flags;
 	struct mcast_forw_table_entry *table_entry, *tmp_table_entry;
 	struct mcast_forw_orig_entry *orig_entry, *tmp_orig_entry;
 	struct mcast_forw_if_entry *if_entry, *tmp_if_entry;
 	struct mcast_forw_nexthop_entry *nexthop_entry, *tmp_nexthop_entry;
 
-	spin_lock_irqsave(&bat_priv->mcast_forw_table_lock, flags);
+	spin_lock_bh(&bat_priv->mcast_forw_table_lock);
 	list_for_each_entry_safe(table_entry, tmp_table_entry,
 				 &bat_priv->mcast_forw_table, list) {
 		list_for_each_entry_safe(orig_entry, tmp_orig_entry,
@@ -895,7 +889,7 @@ void purge_mcast_forw_table(struct bat_priv *bat_priv)
 		list_del(&table_entry->list);
 		kfree(table_entry);
 	}
-	spin_unlock_irqrestore(&bat_priv->mcast_forw_table_lock, flags);
+	spin_unlock_bh(&bat_priv->mcast_forw_table_lock);
 }
 
 static void mcast_tracker_timer(struct work_struct *work)
@@ -1034,7 +1028,6 @@ int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset)
 {
 	struct net_device *net_dev = (struct net_device *)seq->private;
 	struct bat_priv *bat_priv = netdev_priv(net_dev);
-	unsigned long flags;
 	struct batman_if *batman_if;
 	struct mcast_forw_table_entry *table_entry;
 	struct mcast_forw_orig_entry *orig_entry;
@@ -1049,7 +1042,7 @@ int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset)
 		   "Outgoing interface\tNexthop - timeout in msecs\n");
 
 	rcu_read_lock();
-	spin_lock_irqsave(&bat_priv->mcast_forw_table_lock, flags);
+	spin_lock_bh(&bat_priv->mcast_forw_table_lock);
 	list_for_each_entry(table_entry, &bat_priv->mcast_forw_table, list) {
 		seq_printf(seq, "%pM\n", table_entry->mcast_addr);
 
@@ -1078,7 +1071,7 @@ int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset)
 			}
 		}
 	}
-	spin_unlock_irqrestore(&bat_priv->mcast_forw_table_lock, flags);
+	spin_unlock_bh(&bat_priv->mcast_forw_table_lock);
 	rcu_read_unlock();
 
 	return 0;
@@ -1090,7 +1083,6 @@ void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv)
 	struct mcast_packet *mcast_packet;
 	struct ethhdr *ethhdr;
 	struct batman_if *batman_if;
-	unsigned long flags;
 	struct mcast_forw_table_entry *table_entry;
 	struct mcast_forw_orig_entry *orig_entry;
 	struct mcast_forw_if_entry *if_entry;
@@ -1107,7 +1099,7 @@ void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv)
 	mcast_packet->ttl--;
 
 	rcu_read_lock();
-	spin_lock_irqsave(&bat_priv->mcast_forw_table_lock, flags);
+	spin_lock_bh(&bat_priv->mcast_forw_table_lock);
 	list_for_each_entry(table_entry, &bat_priv->mcast_forw_table, list) {
 		if (memcmp(ethhdr->h_dest, table_entry->mcast_addr, ETH_ALEN))
 			continue;
@@ -1160,7 +1152,7 @@ void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv)
 		}
 		break;
 	}
-	spin_unlock_irqrestore(&bat_priv->mcast_forw_table_lock, flags);
+	spin_unlock_bh(&bat_priv->mcast_forw_table_lock);
 
 	list_for_each_entry_safe(dest_entry, tmp, &dest_list.list, list) {
 		if (is_broadcast_ether_addr(dest_entry->dest)) {
diff --git a/routing.c b/routing.c
index 19f045a..4f99134 100644
--- a/routing.c
+++ b/routing.c
@@ -1396,7 +1396,6 @@ int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if)
 	struct ethhdr *ethhdr;
 	MC_LIST *mc_entry;
 	int32_t seq_diff;
-	unsigned long flags;
 	int ret = 1;
 	int hdr_size = sizeof(struct mcast_packet);
 
@@ -1414,13 +1413,13 @@ int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if)
 	if (mcast_packet->ttl < 2)
 		return NET_RX_DROP;
 
-	spin_lock_irqsave(&bat_priv->orig_hash_lock, flags);
+	spin_lock_bh(&bat_priv->orig_hash_lock);
 	orig_node = ((struct orig_node *)
 		     hash_find(bat_priv->orig_hash, compare_orig, choose_orig,
 			       mcast_packet->orig));
 
 	if (orig_node == NULL) {
-		spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags);
+		spin_unlock_bh(&bat_priv->orig_hash_lock);
 		return NET_RX_DROP;
 	}
 
@@ -1428,7 +1427,7 @@ int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if)
 	if (get_bit_status(orig_node->mcast_bits,
 			   orig_node->last_mcast_seqno,
 			   ntohl(mcast_packet->seqno))) {
-		spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags);
+		spin_unlock_bh(&bat_priv->orig_hash_lock);
 		return NET_RX_DROP;
 	}
 
@@ -1437,7 +1436,7 @@ int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if)
 	/* check whether the packet is old and the host just restarted. */
 	if (window_protected(bat_priv, seq_diff,
 			     &orig_node->mcast_seqno_reset)) {
-		spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags);
+		spin_unlock_bh(&bat_priv->orig_hash_lock);
 		return NET_RX_DROP;
 	}
 
@@ -1446,7 +1445,7 @@ int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if)
 	if (bit_get_packet(bat_priv, orig_node->mcast_bits, seq_diff, 1))
 		orig_node->last_mcast_seqno = ntohl(mcast_packet->seqno);
 
-	spin_unlock_irqrestore(&bat_priv->orig_hash_lock, flags);
+	spin_unlock_bh(&bat_priv->orig_hash_lock);
 
 	/* forward multicast packet if necessary */
 	route_mcast_packet(skb, bat_priv);
@@ -1454,13 +1453,13 @@ int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if)
 	ethhdr = (struct ethhdr *)(mcast_packet + 1);
 
 	/* multicast for me? */
-	MC_LIST_LOCK(recv_if->soft_iface, flags);
+	MC_LIST_LOCK(recv_if->soft_iface);
 	netdev_for_each_mc_addr(mc_entry, recv_if->soft_iface) {
 		ret = memcmp(mc_entry->MC_LIST_ADDR, ethhdr->h_dest, ETH_ALEN);
 		if (!ret)
 			break;
 	}
-	MC_LIST_UNLOCK(recv_if->soft_iface, flags);
+	MC_LIST_UNLOCK(recv_if->soft_iface);
 
 	if (!ret)
 		interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size);
diff --git a/send.c b/send.c
index 26a6c99..eef95bc 100644
--- a/send.c
+++ b/send.c
@@ -220,7 +220,6 @@ static void add_own_MCA(struct batman_packet *batman_packet, int num_mca,
 {
 	MC_LIST *mc_list_entry;
 	int num_mca_done = 0;
-	unsigned long flags;
 	char *mca_entry = (char *)(batman_packet + 1);
 
 	if (num_mca == 0)
@@ -234,7 +233,7 @@ static void add_own_MCA(struct batman_packet *batman_packet, int num_mca,
 
 	mca_entry = mca_entry + batman_packet->num_hna * ETH_ALEN;
 
-	MC_LIST_LOCK(soft_iface, flags);
+	MC_LIST_LOCK(soft_iface);
 	netdev_for_each_mc_addr(mc_list_entry, soft_iface) {
 		memcpy(mca_entry, &mc_list_entry->MC_LIST_ADDR, ETH_ALEN);
 		mca_entry += ETH_ALEN;
@@ -244,7 +243,7 @@ static void add_own_MCA(struct batman_packet *batman_packet, int num_mca,
 		if (++num_mca_done == num_mca)
 			break;
 	}
-	MC_LIST_UNLOCK(soft_iface, flags);
+	MC_LIST_UNLOCK(soft_iface);
 
 out:
 	batman_packet->num_mca = num_mca_done;
@@ -254,7 +253,6 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv,
 				  struct batman_if *batman_if)
 {
 	int new_len, mcast_mode, num_mca = 0;
-	unsigned long flags;
 	unsigned char *new_buff = NULL;
 	struct batman_packet *batman_packet;
 
@@ -263,9 +261,9 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv,
 
 	/* Avoid attaching MCAs, if multicast optimization is disabled */
 	if (mcast_mode == MCAST_MODE_PROACT_TRACKING) {
-		MC_LIST_LOCK(batman_if->soft_iface, flags);
+		MC_LIST_LOCK(batman_if->soft_iface);
 		num_mca = netdev_mc_count(batman_if->soft_iface);
-		MC_LIST_UNLOCK(batman_if->soft_iface, flags);
+		MC_LIST_UNLOCK(batman_if->soft_iface);
 	}
 
 	if (atomic_read(&bat_priv->hna_local_changed) ||
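A side note on the locking changes above, since the same conversion recurs in every hunk: the patch assumes these locks are only ever taken from process and softirq context, never from hardirq context, so the interrupt-disabling spin_lock_irqsave()/spin_unlock_irqrestore() pair can be relaxed to spin_lock_bh()/spin_unlock_bh(), and the flags variable goes away. A minimal standalone sketch of the before/after pattern (not part of the patch itself; the lock name just mirrors bat_priv->orig_hash_lock):

/* Standalone sketch of the locking conversion used throughout
 * this patch. */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(orig_hash_lock);

static void walk_orig_hash_old(void)
{
	unsigned long flags;

	/* disables local interrupts - only needed if the lock could
	 * also be taken from hardirq context */
	spin_lock_irqsave(&orig_hash_lock, flags);
	/* ... iterate the originator hash ... */
	spin_unlock_irqrestore(&orig_hash_lock, flags);
}

static void walk_orig_hash_new(void)
{
	/* disables softirqs only - sufficient when the lock is taken
	 * from process context and softirq (packet RX) context */
	spin_lock_bh(&orig_hash_lock);
	/* ... iterate the originator hash ... */
	spin_unlock_bh(&orig_hash_lock);
}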
On Tue, Dec 07, 2010 at 11:13:51PM +0100, Linus Lüssing wrote:
> Please see the attached document for details about the algorithm, the integration into the current B.A.T.M.A.N.-Advanced code and how to activate/use this mode.
Hi Linus, Simon
Nice document.
However, one thing I don't like about it is the use of the term "multicast group". In the Terminology section you define it as:
multicast group: B.A.T.M.A.N. advanced will not distinguish between different multicast group IDs / IP protocols on layer 3. Instead, the term multicast group will be used analogously for a multicast MAC address.
This is quite different from the normal usage of the term. In the RFCs, a multicast group is always an IP address in the range 224.0.0.0/4.

When looking at B.A.T.M.A.N. on its own, your definition is O.K., but when you start to consider the whole protocol stack, it will lead to confusion.
Do you think you could do a search/replace with "multicast MAC address"?
Also, the meaning of symmetric multicast group membership takes a bit of understanding. Once I read section 2.2.1 it became clear, but maybe that section should come earlier in the document? Also, from experience talking to people about IP multicast, I know people have trouble grasping the concept that you can send a multicast packet without being a member of the group. Maybe a short explanation of how IP multicast works, or a link to a good tutorial, would help.

Is this symmetric assumption a problem? That depends on the use case. Your PIM gateway into the multicast cloud should be a member of the group: it has to receive the packets from the local hosts so it can forward them upstream to the RP, the root of the distribution tree. So your traffic coming from upstream should be O.K. However, if you have a webcam which is multicasting a video stream, it might not be a member of the group, since it is not interested in receiving video streams, just in sending them. In that use case you won't get any benefit from your scheme.

The symmetric assumption is a nice simplification to get started, but I think you need a good plan for allowing non-members to send multicast traffic.

Have you considered handling broadcast packets as multicast packets? Broadcast is just a special case of multicast.
Andrew
Hey Andrew,
Thank you for your comments and the review!
On Wed, Dec 08, 2010 at 08:29:15AM +0100, Andrew Lunn wrote:
[...]
> Do you think you could do a search/replace with "multicast MAC address"?
You've got a point, we can change that.
[...]
> Is this symmetric assumption a problem? That depends on the use case. Your PIM gateway into the multicast cloud should be a member of the group: it has to receive the packets from the local hosts so it can forward them upstream to the RP, the root of the distribution tree. So your traffic coming from upstream should be O.K. However, if you have a webcam which is multicasting a video stream, it might not be a member of the group, since it is not interested in receiving video streams, just in sending them. In that use case you won't get any benefit from your scheme.

> The symmetric assumption is a nice simplification to get started, but I think you need a good plan for allowing non-members to send multicast traffic.
Yup, we had a "symmetric" application in mind when designing this algorithm, but we have discussed ideas for extending it to a more general scheme. For example, we could detect when packets to a multicast MAC address are being sent through the soft interface and then start sending tracker packets for that address accordingly. This method might need some tuning, however: the build-up of the forwarding state might take some time, and maybe we should avoid building it up for single packets.
Detecting non-local receivers (as IGMP snooping does) appears to me to be the harder part of becoming more general - you will most likely not receive your webcam stream with your WiFi AP. :)
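To make the sender-detection idea a bit more concrete, here is a rough, hypothetical sketch of such a hook on the soft interface TX path - none of the helpers called below exist in this patchset, they just name what would have to be implemented:

/* Hypothetical sketch only: detect a local multicast sender and
 * trigger tracker packets for that multicast MAC address. */
#include <linux/etherdevice.h>

struct bat_priv;

/* hypothetical helpers - not part of the patchset */
bool mcast_sender_seen_often_enough(struct bat_priv *bat_priv,
				    const uint8_t *mcast_addr);
void mcast_tracker_start(struct bat_priv *bat_priv,
			 const uint8_t *mcast_addr);

static void mcast_detect_local_sender(struct bat_priv *bat_priv,
				      struct ethhdr *ethhdr)
{
	/* broadcast stays on the classic flooding path */
	if (!is_multicast_ether_addr(ethhdr->h_dest) ||
	    is_broadcast_ether_addr(ethhdr->h_dest))
		return;

	/* avoid building up state for single packets: only react
	 * once several packets were seen within a short window */
	if (!mcast_sender_seen_often_enough(bat_priv, ethhdr->h_dest))
		return;

	mcast_tracker_start(bat_priv, ethhdr->h_dest);
}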
> Have you considered handling broadcast packets as multicast packets? Broadcast is just a special case of multicast.
That is right, but I don't think it is a good idea to use this multicast approach for broadcast. From a theoretical point of view, this algorithm is a group-aware one for "sparse" mesh networks, where the number of group members is quite small (< 50% of all mesh nodes). If all nodes were in a group (as would be the case for broadcast), the overhead of the tracker packets would most likely nullify any gain. Non-group-aware algorithms like MPR and its variations [1] are probably better suited for that case. But maybe someone will find a clever workaround. ;)
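To put a rough number on that intuition, a little back-of-envelope sketch (standalone userspace C; all cost assumptions are invented for illustration, and ignoring path overlap makes the tree look worse than it really is):

/* Back-of-envelope model: a flood costs one transmission per node,
 * the tree costs one transmission per hop per receiver (path overlap
 * ignored), and tracker packets re-walk those paths once every
 * TRACKER_INTERVAL data packets. */
#include <stdio.h>

#define NODES            100
#define AVG_PATH_LEN       3
#define TRACKER_INTERVAL  10

int main(void)
{
	for (int receivers = 10; receivers <= NODES; receivers += 30) {
		int flood_cost = NODES;
		int tree_cost = receivers * AVG_PATH_LEN
			      + receivers * AVG_PATH_LEN / TRACKER_INTERVAL;

		printf("receivers=%3d  flood=%3d  tree=%3d\n",
		       receivers, flood_cost, tree_cost);
	}
	return 0;
}

Under these made-up numbers the tree wins only for small receiver counts and loses well before all nodes are members - which is the "sparse" regime the algorithm targets.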
best regards, Simon
[1] http://tools.ietf.org/html/draft-ietf-manet-smf-10 Appendices A to C