Hi everyone,
Here's the next series of patches which should address the comments I got for the first one. Thanks for all the feedback!
Changelog:
* rebased onto commit 65e0869478bce153a799c0e774a117ba5fc78025, using new orig_hash methods
* put seqno before ttl, 4-byte-aligning mcast_packet [01/20]
* adapted compat.h to not use custom lock macros; instead only one macro for netif_addr_lock_bh() in case of older kernel versions
* merged spinlock-irqsave-to-bh commit into previous commits [20/20]
* moved mcast_may_optimize() to soft-interface.c [18/20], removed inlining (won't optimize much anyway...)
* purge_mcast_forw_table: split list operations into separate functions [12/20]
* use batman_if refcounting to reduce the time of rcu-locking [13/20]
* do not create a nexthop entry if the according batman_if is NULL [13/20]
* route_mcast_packet: split into separate functions [13/20]
* fix typo "seperate" [13/20]
* fix typo "i.g." [08/20]
* COMPAT_VERSION to 14 [01/20]
* use rcu-locking + refcounting for orig_node, remove orig_hash_lock [07/20], [17/20]
* made checkpatch-clean
* use __packed instead of __attribute__((packed)) [01/20]
* changed tracker_packet_for_each_dest macro [07/20]: make a "break" in this macro behave as usual, export parts of the macro into their own functions
TODO:
* directly prepare mcast-tracker-packet in sk_buff
* only create methods / variables in patches that need them
* update mcast-doc
* upload updated mcast-doc to wiki

Maybe TODO?
* use hlist instead of list for mcast-table?
* use rcu-locking / refcounting for mcast_forw_table?
Cheers, Linus
This adds the possibility to attach multicast announcements - so-called MCAs - to OGMs. It also adds a packet structure for the multicast path selection as well as the packet types needed for the future multicast optimizations.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 hard-interface.c |    1 +
 packet.h         |   41 +++++++++++++++++++++++++++++++++--------
 2 files changed, 34 insertions(+), 8 deletions(-)
diff --git a/batman-adv/hard-interface.c b/batman-adv/hard-interface.c
index 3ab9a20..2bae3e4 100644
--- a/batman-adv/hard-interface.c
+++ b/batman-adv/hard-interface.c
@@ -313,6 +313,7 @@ int hardif_enable_interface(struct batman_if *batman_if, char *iface_name)
 	batman_packet->ttl = 2;
 	batman_packet->tq = TQ_MAX_VALUE;
 	batman_packet->num_hna = 0;
+	batman_packet->num_mca = 0;
batman_if->if_num = bat_priv->num_ifaces; bat_priv->num_ifaces++; diff --git a/batman-adv/packet.h b/batman-adv/packet.h index 2284e81..a02f793 100644 --- a/batman-adv/packet.h +++ b/batman-adv/packet.h @@ -24,15 +24,17 @@
#define ETH_P_BATMAN 0x4305 /* unofficial/not registered Ethertype */
-#define BAT_PACKET 0x01
-#define BAT_ICMP 0x02
-#define BAT_UNICAST 0x03
-#define BAT_BCAST 0x04
-#define BAT_VIS 0x05
-#define BAT_UNICAST_FRAG 0x06
+#define BAT_PACKET        0x01
+#define BAT_ICMP          0x02
+#define BAT_UNICAST       0x03
+#define BAT_BCAST         0x04
+#define BAT_VIS           0x05
+#define BAT_UNICAST_FRAG  0x06
+#define BAT_MCAST         0x07
+#define BAT_MCAST_TRACKER 0x08
 /* this file is included by batctl which needs these defines */
-#define COMPAT_VERSION 12
+#define COMPAT_VERSION 14
 #define DIRECTLINK 0x40
 #define VIS_SERVER 0x20
 #define PRIMARIES_FIRST_HOP 0x10
@@ -61,8 +63,8 @@ struct batman_packet {
 	uint8_t prev_sender[6];
 	uint8_t ttl;
 	uint8_t num_hna;
+	uint8_t num_mca;
 	uint8_t gw_flags; /* flags related to gateway class */
-	uint8_t align;
 } __packed;
 #define BAT_PACKET_LEN sizeof(struct batman_packet)

@@ -120,6 +122,29 @@ struct bcast_packet {
 	uint32_t seqno;
 } __packed;

+struct mcast_packet {
+	uint8_t packet_type;	/* BAT_MCAST */
+	uint8_t version;	/* batman version field */
+	uint8_t orig[6];
+	uint32_t seqno;
+	uint8_t ttl;
+} __packed;
+
+/* marks the path for multicast streams */
+struct mcast_tracker_packet {
+	uint8_t packet_type;	/* BAT_MCAST_TRACKER */
+	uint8_t version;	/* batman version field */
+	uint8_t orig[6];
+	uint8_t ttl;
+	uint8_t num_mcast_entries;
+	uint8_t align[2];
+} __packed;
+
+struct mcast_entry {
+	uint8_t mcast_addr[6];
+	uint8_t num_dest;	/* number of multicast data receivers */
+};
+
 struct vis_packet {
 	uint8_t packet_type;
 	uint8_t version;	/* batman version field */
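To double-check the layouts above, here is a small userspace sketch of the two new headers, substituting GCC's `__attribute__((packed))` for the kernel's `__packed` shorthand. The seqno-before-ttl ordering (see the changelog) keeps the 32-bit seqno field naturally 4-byte aligned within the packed header.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace mirror of the on-wire layouts added in packet.h */
struct mcast_packet {
	uint8_t packet_type;	/* BAT_MCAST */
	uint8_t version;	/* batman version field */
	uint8_t orig[6];
	uint32_t seqno;		/* at offset 8, so naturally aligned */
	uint8_t ttl;
} __attribute__((packed));

struct mcast_tracker_packet {
	uint8_t packet_type;	/* BAT_MCAST_TRACKER */
	uint8_t version;	/* batman version field */
	uint8_t orig[6];
	uint8_t ttl;
	uint8_t num_mcast_entries;
	uint8_t align[2];
} __attribute__((packed));

struct mcast_entry {
	uint8_t mcast_addr[6];
	uint8_t num_dest;	/* number of multicast data receivers */
};
```

With these layouts, a BAT_MCAST header occupies 13 bytes and a tracker header 12 bytes before the attached mcast_entry list.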
This commit adds the needed configuration variables in bat_priv and the corresponding user interfaces in sysfs for the future multicast optimizations.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 Makefile.kbuild  |    1 +
 bat_sysfs.c      |  160 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 multicast.c      |  121 +++++++++++++++++++++++++++++++++++++++++
 multicast.h      |   30 ++++++++++
 packet.h         |    4 ++
 soft-interface.c |    4 ++
 types.h          |    4 ++
 7 files changed, 324 insertions(+), 0 deletions(-)
 create mode 100644 multicast.c
 create mode 100644 multicast.h
diff --git a/batman-adv/Makefile.kbuild b/batman-adv/Makefile.kbuild index e99c198..56296c4 100644 --- a/batman-adv/Makefile.kbuild +++ b/batman-adv/Makefile.kbuild @@ -49,5 +49,6 @@ batman-adv-y += send.o batman-adv-y += soft-interface.o batman-adv-y += translation-table.o batman-adv-y += unicast.o +batman-adv-y += multicast.o batman-adv-y += vis.o batman-adv-y += bat_printk.o diff --git a/batman-adv/bat_sysfs.c b/batman-adv/bat_sysfs.c index cd7bb51..f627d70 100644 --- a/batman-adv/bat_sysfs.c +++ b/batman-adv/bat_sysfs.c @@ -27,6 +27,7 @@ #include "gateway_common.h" #include "gateway_client.h" #include "vis.h" +#include "multicast.h"
#define to_dev(obj) container_of(obj, struct device, kobj) #define kobj_to_netdev(obj) to_net_dev(to_dev(obj->parent)) @@ -356,6 +357,153 @@ static ssize_t store_gw_bwidth(struct kobject *kobj, struct attribute *attr, return gw_bandwidth_set(net_dev, buff, count); }
+static ssize_t show_mcast_mode(struct kobject *kobj, struct attribute *attr, + char *buff) +{ + struct device *dev = to_dev(kobj->parent); + struct bat_priv *bat_priv = netdev_priv(to_net_dev(dev)); + int mcast_mode = atomic_read(&bat_priv->mcast_mode); + int ret; + + switch (mcast_mode) { + case MCAST_MODE_CLASSIC_FLOODING: + ret = sprintf(buff, "classic_flooding\n"); + break; + case MCAST_MODE_PROACT_TRACKING: + ret = sprintf(buff, "proactive_tracking\n"); + break; + default: + ret = -1; + break; + } + + return ret; +} + +static ssize_t store_mcast_mode(struct kobject *kobj, struct attribute *attr, + char *buff, size_t count) +{ + struct device *dev = to_dev(kobj->parent); + struct net_device *net_dev = to_net_dev(dev); + struct bat_priv *bat_priv = netdev_priv(net_dev); + unsigned long val; + int ret, mcast_mode_tmp = -1; + + ret = strict_strtoul(buff, 10, &val); + + if (((count == 2) && (!ret) && (val == MCAST_MODE_CLASSIC_FLOODING)) || + (strncmp(buff, "classic_flooding", 16) == 0)) + mcast_mode_tmp = MCAST_MODE_CLASSIC_FLOODING; + + if (((count == 2) && (!ret) && (val == MCAST_MODE_PROACT_TRACKING)) || + (strncmp(buff, "proact_tracking", 15) == 0)) + mcast_mode_tmp = MCAST_MODE_PROACT_TRACKING; + + if (mcast_mode_tmp < 0) { + if (buff[count - 1] == '\n') + buff[count - 1] = '\0'; + + bat_info(net_dev, + "Invalid parameter for 'mcast mode' setting received: " + "%s\n", buff); + return -EINVAL; + } + + if (atomic_read(&bat_priv->mcast_mode) == mcast_mode_tmp) + return count; + + bat_info(net_dev, "Changing mcast mode from: %s to: %s\n", + atomic_read(&bat_priv->mcast_mode) == + MCAST_MODE_CLASSIC_FLOODING ? + "classic_flooding" : "proact_tracking", + mcast_mode_tmp == MCAST_MODE_CLASSIC_FLOODING ? 
+ "classic_flooding" : "proact_tracking"); + + atomic_set(&bat_priv->mcast_mode, (unsigned)mcast_mode_tmp); + return count; +} + +static ssize_t show_mcast_tracker_interval(struct kobject *kobj, + struct attribute *attr, char *buff) +{ + struct device *dev = to_dev(kobj->parent); + struct bat_priv *bat_priv = netdev_priv(to_net_dev(dev)); + int tracker_interval = atomic_read(&bat_priv->mcast_tracker_interval); + + if (!tracker_interval) + return sprintf(buff, "auto\n"); + else + return sprintf(buff, "%i\n", tracker_interval); +} + +static ssize_t store_mcast_tracker_interval(struct kobject *kobj, + struct attribute *attr, char *buff, size_t count) +{ + struct device *dev = to_dev(kobj->parent); + struct net_device *net_dev = to_net_dev(dev); + + return mcast_tracker_interval_set(net_dev, buff, count); +} + +static ssize_t show_mcast_tracker_timeout(struct kobject *kobj, + struct attribute *attr, char *buff) +{ + struct device *dev = to_dev(kobj->parent); + struct bat_priv *bat_priv = netdev_priv(to_net_dev(dev)); + int tracker_timeout = atomic_read(&bat_priv->mcast_tracker_timeout); + + if (!tracker_timeout) + return sprintf(buff, "auto\n"); + else + return sprintf(buff, "%i\n", tracker_timeout); +} + +static ssize_t store_mcast_tracker_timeout(struct kobject *kobj, + struct attribute *attr, char *buff, size_t count) +{ + struct device *dev = to_dev(kobj->parent); + struct net_device *net_dev = to_net_dev(dev); + + return mcast_tracker_timeout_set(net_dev, buff, count); +} + +static ssize_t show_mcast_fanout(struct kobject *kobj, + struct attribute *attr, char *buff) +{ + struct device *dev = to_dev(kobj->parent); + struct bat_priv *bat_priv = netdev_priv(to_net_dev(dev)); + + return sprintf(buff, "%i\n", + atomic_read(&bat_priv->mcast_fanout)); +} + +static ssize_t store_mcast_fanout(struct kobject *kobj, + struct attribute *attr, char *buff, size_t count) +{ + struct device *dev = to_dev(kobj->parent); + struct net_device *net_dev = to_net_dev(dev); + struct 
bat_priv *bat_priv = netdev_priv(net_dev); + unsigned long mcast_fanout_tmp; + int ret; + + ret = strict_strtoul(buff, 10, &mcast_fanout_tmp); + if (ret) { + bat_info(net_dev, "Invalid parameter for 'mcast_fanout' " + "setting received: %s\n", buff); + return -EINVAL; + } + + if (atomic_read(&bat_priv->mcast_fanout) == mcast_fanout_tmp) + return count; + + bat_info(net_dev, "Changing mcast fanout interval from: %i to: %li\n", + atomic_read(&bat_priv->mcast_fanout), + mcast_fanout_tmp); + + atomic_set(&bat_priv->mcast_fanout, mcast_fanout_tmp); + return count; +} + BAT_ATTR_BOOL(aggregated_ogms, S_IRUGO | S_IWUSR, NULL); BAT_ATTR_BOOL(bonding, S_IRUGO | S_IWUSR, NULL); BAT_ATTR_BOOL(fragmentation, S_IRUGO | S_IWUSR, update_min_mtu); @@ -367,6 +515,14 @@ BAT_ATTR_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE, post_gw_deselect); static BAT_ATTR(gw_bandwidth, S_IRUGO | S_IWUSR, show_gw_bwidth, store_gw_bwidth); +static BAT_ATTR(mcast_mode, S_IRUGO | S_IWUSR, + show_mcast_mode, store_mcast_mode); +static BAT_ATTR(mcast_tracker_interval, S_IRUGO | S_IWUSR, + show_mcast_tracker_interval, store_mcast_tracker_interval); +static BAT_ATTR(mcast_tracker_timeout, S_IRUGO | S_IWUSR, + show_mcast_tracker_timeout, store_mcast_tracker_timeout); +static BAT_ATTR(mcast_fanout, S_IRUGO | S_IWUSR, + show_mcast_fanout, store_mcast_fanout); #ifdef CONFIG_BATMAN_ADV_DEBUG BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 3, NULL); #endif @@ -381,6 +537,10 @@ static struct bat_attribute *mesh_attrs[] = { &bat_attr_hop_penalty, &bat_attr_gw_sel_class, &bat_attr_gw_bandwidth, + &bat_attr_mcast_mode, + &bat_attr_mcast_tracker_interval, + &bat_attr_mcast_tracker_timeout, + &bat_attr_mcast_fanout, #ifdef CONFIG_BATMAN_ADV_DEBUG &bat_attr_log_level, #endif diff --git a/batman-adv/multicast.c b/batman-adv/multicast.c new file mode 100644 index 0000000..0598873 --- /dev/null +++ b/batman-adv/multicast.c @@ -0,0 +1,121 @@ +/* + * Copyright (C) 2010 B.A.T.M.A.N. 
contributors: + * + * Linus Lüssing + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of version 2 of the GNU General Public + * License as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA + * 02110-1301, USA + * + */ + +#include "main.h" +#include "multicast.h" + +int mcast_tracker_interval_set(struct net_device *net_dev, char *buff, + size_t count) +{ + struct bat_priv *bat_priv = netdev_priv(net_dev); + unsigned long new_tracker_interval; + int cur_tracker_interval; + int ret; + + ret = strict_strtoul(buff, 10, &new_tracker_interval); + + if (ret && !strncmp(buff, "auto", 4)) { + new_tracker_interval = 0; + goto ok; + } + + else if (ret) { + bat_info(net_dev, "Invalid parameter for " + "'mcast_tracker_interval' setting received: %s\n", + buff); + return -EINVAL; + } + + if (new_tracker_interval < JITTER) { + bat_info(net_dev, "New mcast tracker interval too small: %li " + "(min: %i or auto)\n", new_tracker_interval, JITTER); + return -EINVAL; + } + +ok: + cur_tracker_interval = atomic_read(&bat_priv->mcast_tracker_interval); + + if (cur_tracker_interval == new_tracker_interval) + return count; + + if (!cur_tracker_interval && new_tracker_interval) + bat_info(net_dev, "Tracker interval change from: %s to: %li\n", + "auto", new_tracker_interval); + else if (cur_tracker_interval && !new_tracker_interval) + bat_info(net_dev, "Tracker interval change from: %i to: %s\n", + cur_tracker_interval, "auto"); + else + bat_info(net_dev, "Tracker interval change from: %i to: %li\n", + 
cur_tracker_interval, new_tracker_interval); + + atomic_set(&bat_priv->mcast_tracker_interval, new_tracker_interval); + + return count; +} + +int mcast_tracker_timeout_set(struct net_device *net_dev, char *buff, + size_t count) +{ + struct bat_priv *bat_priv = netdev_priv(net_dev); + unsigned long new_tracker_timeout; + int cur_tracker_timeout; + int ret; + + ret = strict_strtoul(buff, 10, &new_tracker_timeout); + + if (ret && !strncmp(buff, "auto", 4)) { + new_tracker_timeout = 0; + goto ok; + } + + else if (ret) { + bat_info(net_dev, "Invalid parameter for " + "'mcast_tracker_timeout' setting received: %s\n", + buff); + return -EINVAL; + } + + if (new_tracker_timeout < JITTER) { + bat_info(net_dev, "New mcast tracker timeout too small: %li " + "(min: %i or auto)\n", new_tracker_timeout, JITTER); + return -EINVAL; + } + +ok: + cur_tracker_timeout = atomic_read(&bat_priv->mcast_tracker_timeout); + + if (cur_tracker_timeout == new_tracker_timeout) + return count; + + if (!cur_tracker_timeout && new_tracker_timeout) + bat_info(net_dev, "Tracker timeout change from: %s to: %li\n", + "auto", new_tracker_timeout); + else if (cur_tracker_timeout && !new_tracker_timeout) + bat_info(net_dev, "Tracker timeout change from: %i to: %s\n", + cur_tracker_timeout, "auto"); + else + bat_info(net_dev, "Tracker timeout change from: %i to: %li\n", + cur_tracker_timeout, new_tracker_timeout); + + atomic_set(&bat_priv->mcast_tracker_timeout, new_tracker_timeout); + + return count; +} diff --git a/batman-adv/multicast.h b/batman-adv/multicast.h new file mode 100644 index 0000000..12a3376 --- /dev/null +++ b/batman-adv/multicast.h @@ -0,0 +1,30 @@ +/* + * Copyright (C) 2010 B.A.T.M.A.N. contributors: + * + * Linus Lüssing + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of version 2 of the GNU General Public + * License as published by the Free Software Foundation. 
+ * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA + * 02110-1301, USA + * + */ + +#ifndef _NET_BATMAN_ADV_MULTICAST_H_ +#define _NET_BATMAN_ADV_MULTICAST_H_ + +int mcast_tracker_interval_set(struct net_device *net_dev, char *buff, + size_t count); +int mcast_tracker_timeout_set(struct net_device *net_dev, char *buff, + size_t count); + +#endif /* _NET_BATMAN_ADV_MULTICAST_H_ */ diff --git a/batman-adv/packet.h b/batman-adv/packet.h index a02f793..519bca8 100644 --- a/batman-adv/packet.h +++ b/batman-adv/packet.h @@ -50,6 +50,10 @@ #define VIS_TYPE_SERVER_SYNC 0 #define VIS_TYPE_CLIENT_UPDATE 1
+/* mcast defines */
+#define MCAST_MODE_CLASSIC_FLOODING 0
+#define MCAST_MODE_PROACT_TRACKING 1
+
 /* fragmentation defines */
 #define UNI_FRAG_HEAD 0x01
diff --git a/batman-adv/soft-interface.c b/batman-adv/soft-interface.c index e89ede1..7cea678 100644 --- a/batman-adv/soft-interface.c +++ b/batman-adv/soft-interface.c @@ -597,6 +597,10 @@ struct net_device *softif_create(char *name) atomic_set(&bat_priv->gw_bandwidth, 41); atomic_set(&bat_priv->orig_interval, 1000); atomic_set(&bat_priv->hop_penalty, 10); + atomic_set(&bat_priv->mcast_mode, MCAST_MODE_CLASSIC_FLOODING); + atomic_set(&bat_priv->mcast_tracker_interval, 0); /* = auto */ + atomic_set(&bat_priv->mcast_tracker_timeout, 0); /* = auto */ + atomic_set(&bat_priv->mcast_fanout, 2); atomic_set(&bat_priv->log_level, 0); atomic_set(&bat_priv->fragmentation, 1); atomic_set(&bat_priv->bcast_queue_left, BCAST_QUEUE_LEN); diff --git a/batman-adv/types.h b/batman-adv/types.h index 8e97861..3abf6d9 100644 --- a/batman-adv/types.h +++ b/batman-adv/types.h @@ -137,6 +137,10 @@ struct bat_priv { atomic_t gw_bandwidth; /* gw bandwidth */ atomic_t orig_interval; /* uint */ atomic_t hop_penalty; /* uint */ + atomic_t mcast_mode; /* MCAST_MODE_* */ + atomic_t mcast_tracker_interval;/* uint, auto */ + atomic_t mcast_tracker_timeout; /* uint, auto */ + atomic_t mcast_fanout; /* uint */ atomic_t log_level; /* uint */ atomic_t bcast_seqno; atomic_t bcast_queue_left;
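The store_mcast_mode() handler above accepts either the numeric mode value or the mode name. A userspace sketch of that parsing logic, with libc strtoul() standing in for the kernel's strict_strtoul() (the helper name parse_mcast_mode is mine, not from the patch):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MCAST_MODE_CLASSIC_FLOODING 0
#define MCAST_MODE_PROACT_TRACKING  1

/* Accept either "0"/"1" (a digit plus newline, count == 2) or the
 * mode name; return -1 on invalid input, as store_mcast_mode() does. */
static int parse_mcast_mode(const char *buff, size_t count)
{
	char *end;
	unsigned long val = strtoul(buff, &end, 10);
	int numeric = (end != buff);

	if ((count == 2 && numeric && val == MCAST_MODE_CLASSIC_FLOODING) ||
	    strncmp(buff, "classic_flooding", 16) == 0)
		return MCAST_MODE_CLASSIC_FLOODING;

	if ((count == 2 && numeric && val == MCAST_MODE_PROACT_TRACKING) ||
	    strncmp(buff, "proact_tracking", 15) == 0)
		return MCAST_MODE_PROACT_TRACKING;

	return -1;
}
```

Note that the show handler prints "proactive_tracking" while the store handler matches "proact_tracking", so echoing the shown value back is rejected; the sketch reproduces the store side as posted.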
The data structures and locking mechanisms for fetching multicast MAC addresses from a net_device have changed a little between kernel versions 2.6.21 and 2.6.35.
Therefore this commit backports two macros (netdev_mc_count(), netdev_for_each_mc_addr()) for older kernel versions and abstracts the way of locking and accessing these variables with its own custom macros.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 compat.h |   40 ++++++++++++++++++++++++++++++++++++++++
 1 files changed, 40 insertions(+), 0 deletions(-)
diff --git a/batman-adv/compat.h b/batman-adv/compat.h index 6074969..ffeec34 100644 --- a/batman-adv/compat.h +++ b/batman-adv/compat.h @@ -270,4 +270,44 @@ int bat_seq_printf(struct seq_file *m, const char *f, ...);
#endif /* < KERNEL_VERSION(2, 6, 33) */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 34)
+
+#define netdev_mc_count(dev) ((dev)->mc_count)
+#define netdev_for_each_mc_addr(mclist, dev) \
+	for (mclist = dev->mc_list; mclist; mclist = mclist->next)
+
+#endif /* < KERNEL_VERSION(2, 6, 34) */
+
+
+/*
+ * net_device - multicast list handling
+ *	structures
+ */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 35)
+
+#define MC_LIST struct dev_addr_list
+#define MC_LIST_ADDR da_addr
+
+#endif /* < KERNEL_VERSION(2, 6, 35) */
+
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 34)
+
+#define MC_LIST struct netdev_hw_addr_list_mc
+#define MC_LIST_ADDR addr
+
+#endif /* > KERNEL_VERSION(2, 6, 34) */
+
+/*
+ * net_device - multicast list handling
+ *	locking
+ */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 27)
+
+#define netif_addr_lock_bh(soft_iface) \
+	netif_tx_lock_bh(soft_iface)
+#define netif_addr_unlock_bh(soft_iface) \
+	netif_tx_unlock_bh(soft_iface)
+
+#endif /* LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 27) */
+
 #endif /* _NET_BATMAN_ADV_COMPAT_H_ */
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 34)
+#define MC_LIST struct netdev_hw_addr_list_mc
+#define MC_LIST_ADDR addr
+#endif /* > KERNEL_VERSION(2, 6, 34) */
Should have been "MC_LIST struct netdev_hw_addr" instead... fixed upstream (and will be fixed in next patchset here).
Cheers, Linus
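For illustration, here is a userspace mock of the pre-2.6.34 data layout these compat macros paper over: a singly linked dev_addr_list hanging off the net_device, walked by the backported netdev_for_each_mc_addr(). The struct definitions are simplified stand-ins, not the real kernel structures.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the old multicast list shape */
struct dev_addr_list {
	struct dev_addr_list *next;
	unsigned char da_addr[6];
};

struct net_device {
	struct dev_addr_list *mc_list;
	int mc_count;
};

/* The backported compat macros from the patch */
#define netdev_mc_count(dev) ((dev)->mc_count)
#define netdev_for_each_mc_addr(mclist, dev) \
	for (mclist = (dev)->mc_list; mclist; mclist = mclist->next)

/* Build a two-entry list and count it via the compat iterator */
static int demo_mc_count(void)
{
	struct dev_addr_list second = { NULL, { 0x33, 0x33, 0, 0, 0, 2 } };
	struct dev_addr_list first = { &second, { 0x01, 0x00, 0x5e, 0, 0, 1 } };
	struct net_device dev = { &first, 2 };
	struct dev_addr_list *mclist;
	int n = 0;

	netdev_for_each_mc_addr(mclist, &dev)
		n++;
	return n;
}
```

On 2.6.35+ the same caller code compiles unchanged because MC_LIST/MC_LIST_ADDR and the iterator resolve to the new netdev_hw_addr based API instead.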
This patch introduces multicast announcements - MCA for short - which are now attached to an OGM if an optimized multicast mode that needs MCAs (i.e. proact_tracking) has been selected.
MCA entries are multicast MAC addresses in use by a multicast receiver in the mesh cloud. Currently MCAs are only fetched locally from the according batman interface itself; bridged-in hosts will not be announced yet and will need a more complex patch adding IGMP/MLD snooping support. However, the local fetching already allows multicast optimizations on layer 2 for batman nodes, without depending on IP at all.
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 aggregation.c |   12 +++++++-
 aggregation.h |    6 +++-
 main.h        |    2 +
 send.c        |   80 +++++++++++++++++++++++++++++++++++++++++++++++++--------
 4 files changed, 85 insertions(+), 15 deletions(-)
diff --git a/batman-adv/aggregation.c b/batman-adv/aggregation.c index 3850a3e..c3aff27 100644 --- a/batman-adv/aggregation.c +++ b/batman-adv/aggregation.c @@ -30,6 +30,12 @@ static int hna_len(struct batman_packet *batman_packet) return batman_packet->num_hna * ETH_ALEN; }
+/* calculate the size of the mca information for a given packet */
+static int mca_len(struct batman_packet *batman_packet)
+{
+	return batman_packet->num_mca * ETH_ALEN;
+}
+
 /* return true if new_packet can be aggregated with forw_packet */
 static bool can_aggregate_with(struct batman_packet *new_batman_packet,
			       int packet_len,
@@ -265,9 +271,11 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr,
			       unsigned char *packet_buff,
 			hna_buff, hna_len(batman_packet), if_incoming);
- buff_pos += BAT_PACKET_LEN + hna_len(batman_packet); + buff_pos += BAT_PACKET_LEN + hna_len(batman_packet) + + mca_len(batman_packet); batman_packet = (struct batman_packet *) (packet_buff + buff_pos); } while (aggregated_packet(buff_pos, packet_len, - batman_packet->num_hna)); + batman_packet->num_hna, + batman_packet->num_mca)); } diff --git a/batman-adv/aggregation.h b/batman-adv/aggregation.h index 71a91b3..93f2496 100644 --- a/batman-adv/aggregation.h +++ b/batman-adv/aggregation.h @@ -25,9 +25,11 @@ #include "main.h"
 /* is there another aggregated packet here? */
-static inline int aggregated_packet(int buff_pos, int packet_len, int num_hna)
+static inline int aggregated_packet(int buff_pos, int packet_len, int num_hna,
+				    int num_mca)
 {
-	int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_hna * ETH_ALEN);
+	int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_hna * ETH_ALEN) +
+			    (num_mca * ETH_ALEN);
return (next_buff_pos <= packet_len) && (next_buff_pos <= MAX_AGGREGATION_BYTES); diff --git a/batman-adv/main.h b/batman-adv/main.h index c239c97..4913f12 100644 --- a/batman-adv/main.h +++ b/batman-adv/main.h @@ -105,6 +105,8 @@
/* #define VIS_SUBCLUSTERS_DISABLED */
+#define UINT8_MAX 255 + /* * Kernel headers */ diff --git a/batman-adv/send.c b/batman-adv/send.c index 77f8297..03db894 100644 --- a/batman-adv/send.c +++ b/batman-adv/send.c @@ -122,7 +122,8 @@ static void send_packet_to_if(struct forw_packet *forw_packet, /* adjust all flags and log packets */ while (aggregated_packet(buff_pos, forw_packet->packet_len, - batman_packet->num_hna)) { + batman_packet->num_hna, + batman_packet->num_mca)) {
/* we might have aggregated direct link packets with an * ordinary base packet */ @@ -214,18 +215,69 @@ static void send_packet(struct forw_packet *forw_packet) rcu_read_unlock(); }
+static void add_own_MCA(struct batman_packet *batman_packet, int num_mca,
+			struct net_device *soft_iface)
+{
+	MC_LIST *mc_list_entry;
+	int num_mca_done = 0;
+	char *mca_entry = (char *)(batman_packet + 1);
+
+	if (num_mca == 0)
+		goto out;
+
+	if (num_mca > UINT8_MAX) {
+		pr_warning("Too many multicast announcements here, "
+			   "just adding %i\n", UINT8_MAX);
+		num_mca = UINT8_MAX;
+	}
+
+	mca_entry = mca_entry + batman_packet->num_hna * ETH_ALEN;
+
+	netif_addr_lock_bh(soft_iface);
+	netdev_for_each_mc_addr(mc_list_entry, soft_iface) {
+		memcpy(mca_entry, &mc_list_entry->MC_LIST_ADDR, ETH_ALEN);
+		mca_entry += ETH_ALEN;
+
+		/* A multicast address might just have been added,
+		 * avoid writing outside of buffer */
+		if (++num_mca_done == num_mca)
+			break;
+	}
+	netif_addr_unlock_bh(soft_iface);
+
+out:
+	batman_packet->num_mca = num_mca_done;
+}
+
 static void rebuild_batman_packet(struct bat_priv *bat_priv,
 				  struct batman_if *batman_if)
 {
-	int new_len;
-	unsigned char *new_buff;
+	int new_len, mcast_mode, num_mca = 0;
+	unsigned char *new_buff = NULL;
 	struct batman_packet *batman_packet;
- new_len = sizeof(struct batman_packet) + - (bat_priv->num_local_hna * ETH_ALEN); - new_buff = kmalloc(new_len, GFP_ATOMIC); + batman_packet = (struct batman_packet *)batman_if->packet_buff; + mcast_mode = atomic_read(&bat_priv->mcast_mode);
- /* keep old buffer if kmalloc should fail */ + /* Avoid attaching MCAs, if multicast optimization is disabled */ + if (mcast_mode == MCAST_MODE_PROACT_TRACKING) { + netif_addr_lock_bh(batman_if->soft_iface); + num_mca = netdev_mc_count(batman_if->soft_iface); + netif_addr_unlock_bh(batman_if->soft_iface); + } + + if (atomic_read(&bat_priv->hna_local_changed) || + num_mca != batman_packet->num_mca) { + new_len = sizeof(struct batman_packet) + + (bat_priv->num_local_hna * ETH_ALEN) + + num_mca * ETH_ALEN; + new_buff = kmalloc(new_len, GFP_ATOMIC); + } + + /* + * if local hna or mca has changed but kmalloc failed + * then just keep the old buffer + */ if (new_buff) { memcpy(new_buff, batman_if->packet_buff, sizeof(struct batman_packet)); @@ -239,6 +291,13 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, batman_if->packet_buff = new_buff; batman_if->packet_len = new_len; } + + /** + * always copy mca entries (if there are any) - we have to + * traverse the list anyway, so we can just do a memcpy instead of + * memcmp for the sake of simplicity + */ + add_own_MCA(batman_packet, num_mca, batman_if->soft_iface); }
void schedule_own_packet(struct batman_if *batman_if) @@ -264,9 +323,7 @@ void schedule_own_packet(struct batman_if *batman_if) if (batman_if->if_status == IF_TO_BE_ACTIVATED) batman_if->if_status = IF_ACTIVE;
- /* if local hna has changed and interface is a primary interface */ - if ((atomic_read(&bat_priv->hna_local_changed)) && - (batman_if == bat_priv->primary_if)) + if (batman_if == bat_priv->primary_if) rebuild_batman_packet(bat_priv, batman_if);
/** @@ -359,7 +416,8 @@ void schedule_forward_packet(struct orig_node *orig_node, send_time = forward_send_time(); add_bat_packet_to_list(bat_priv, (unsigned char *)batman_packet, - sizeof(struct batman_packet) + hna_buff_len, + sizeof(struct batman_packet) + hna_buff_len + + batman_packet->num_mca * ETH_ALEN, if_incoming, 0, send_time); }
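The extended aggregation walk can be sanity-checked in userspace. In this sketch BAT_PACKET_LEN and MAX_AGGREGATION_BYTES are placeholder values of my choosing (the real ones come from packet.h and main.h); the point is the length arithmetic that now includes num_mca.

```c
#include <assert.h>

#define ETH_ALEN 6
#define BAT_PACKET_LEN 26         /* placeholder for sizeof(struct batman_packet) */
#define MAX_AGGREGATION_BYTES 512 /* placeholder aggregation limit */

/* is there another aggregated packet here? hna and mca entries are
 * both ETH_ALEN bytes each and sit behind the base packet */
static int aggregated_packet(int buff_pos, int packet_len, int num_hna,
			     int num_mca)
{
	int next_buff_pos = buff_pos + BAT_PACKET_LEN +
			    (num_hna + num_mca) * ETH_ALEN;

	return next_buff_pos <= packet_len &&
	       next_buff_pos <= MAX_AGGREGATION_BYTES;
}
```

A receiver that forgot the num_mca term here would misparse every aggregate that follows an OGM carrying MCAs, which is why receive_aggr_bat_packet() and send_packet_to_if() are updated in lockstep.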
This commit adds a timer for sending periodic tracker packets (the sending itself is not in the scope of this patch). Furthermore, the timer gets restarted if the tracker interval changes, or if the originator interval changes while the tracker interval is set to auto mode.
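The delay computation this patch adds (tracker_send_delay() in multicast.c) can be sketched in userspace as follows, assuming batman-adv's JITTER of 20 ms; auto mode (interval 0) falls back to half the OGM interval, and tracker packets get half the jitter of OGMs:

```c
#include <assert.h>
#include <stdlib.h>

#define JITTER 20 /* ms, assumed value of batman-adv's JITTER define */

/* returns the next send delay in ms (the kernel code additionally
 * converts this with msecs_to_jiffies()) */
static int tracker_send_delay_ms(int tracker_interval, int orig_interval)
{
	/* auto mode: use half the OGM interval */
	if (!tracker_interval)
		tracker_interval = orig_interval / 2;

	/* jitter by +/- JITTER/2 instead of the +/- JITTER used for OGMs */
	return tracker_interval - JITTER / 2 + (rand() % JITTER);
}
```

So with the default 1000 ms orig_interval and the tracker interval on auto, tracker packets go out roughly every 490-509 ms.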
Signed-off-by: Linus Lüssing <linus.luessing@saxnet.de>
---
 bat_sysfs.c |   13 +++++++++++--
 main.c      |    5 +++++
 multicast.c |   57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 multicast.h |    3 +++
 types.h     |    1 +
 5 files changed, 77 insertions(+), 2 deletions(-)
diff --git a/batman-adv/bat_sysfs.c b/batman-adv/bat_sysfs.c index f627d70..8f688db 100644 --- a/batman-adv/bat_sysfs.c +++ b/batman-adv/bat_sysfs.c @@ -357,8 +357,16 @@ static ssize_t store_gw_bwidth(struct kobject *kobj, struct attribute *attr, return gw_bandwidth_set(net_dev, buff, count); }
+void update_mcast_tracker(struct net_device *net_dev) +{ + struct bat_priv *bat_priv = netdev_priv(net_dev); + + if (!atomic_read(&bat_priv->mcast_tracker_interval)) + mcast_tracker_reset(bat_priv); +} + static ssize_t show_mcast_mode(struct kobject *kobj, struct attribute *attr, - char *buff) + char *buff) { struct device *dev = to_dev(kobj->parent); struct bat_priv *bat_priv = netdev_priv(to_net_dev(dev)); @@ -509,7 +517,8 @@ BAT_ATTR_BOOL(bonding, S_IRUGO | S_IWUSR, NULL); BAT_ATTR_BOOL(fragmentation, S_IRUGO | S_IWUSR, update_min_mtu); static BAT_ATTR(vis_mode, S_IRUGO | S_IWUSR, show_vis_mode, store_vis_mode); static BAT_ATTR(gw_mode, S_IRUGO | S_IWUSR, show_gw_mode, store_gw_mode); -BAT_ATTR_UINT(orig_interval, S_IRUGO | S_IWUSR, 2 * JITTER, INT_MAX, NULL); +BAT_ATTR_UINT(orig_interval, S_IRUGO | S_IWUSR, 2 * JITTER, INT_MAX, + update_mcast_tracker); BAT_ATTR_UINT(hop_penalty, S_IRUGO | S_IWUSR, 0, TQ_MAX_VALUE, NULL); BAT_ATTR_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE, post_gw_deselect); diff --git a/batman-adv/main.c b/batman-adv/main.c index cb3c3a3..1641075 100644 --- a/batman-adv/main.c +++ b/batman-adv/main.c @@ -32,6 +32,7 @@ #include "gateway_client.h" #include "types.h" #include "vis.h" +#include "multicast.h" #include "hash.h"
struct list_head if_list; @@ -108,6 +109,9 @@ int mesh_init(struct net_device *soft_iface) if (vis_init(bat_priv) < 1) goto err;
+ if (mcast_init(bat_priv) < 1) + goto err; + atomic_set(&bat_priv->mesh_state, MESH_ACTIVE); goto end;
@@ -138,6 +142,7 @@ void mesh_free(struct net_device *soft_iface) hna_global_free(bat_priv);
softif_neigh_purge(bat_priv); + mcast_free(bat_priv);
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); } diff --git a/batman-adv/multicast.c b/batman-adv/multicast.c index 0598873..cc83937 100644 --- a/batman-adv/multicast.c +++ b/batman-adv/multicast.c @@ -22,6 +22,48 @@ #include "main.h" #include "multicast.h"
+/* how long to wait until sending a multicast tracker packet */ +static int tracker_send_delay(struct bat_priv *bat_priv) +{ + int tracker_interval = atomic_read(&bat_priv->mcast_tracker_interval); + + /* auto mode, set to 1/2 ogm interval */ + if (!tracker_interval) + tracker_interval = atomic_read(&bat_priv->orig_interval) / 2; + + /* multicast tracker packets get half as much jitter as ogms as they're + * limited down to JITTER and not JITTER*2 */ + return msecs_to_jiffies(tracker_interval - + JITTER/2 + (random32() % JITTER)); +} + +static void start_mcast_tracker(struct bat_priv *bat_priv) +{ + /* adding some jitter */ + unsigned long tracker_interval = tracker_send_delay(bat_priv); + queue_delayed_work(bat_event_workqueue, &bat_priv->mcast_tracker_work, + tracker_interval); +} + +static void stop_mcast_tracker(struct bat_priv *bat_priv) +{ + cancel_delayed_work_sync(&bat_priv->mcast_tracker_work); +} + +void mcast_tracker_reset(struct bat_priv *bat_priv) +{ + stop_mcast_tracker(bat_priv); + start_mcast_tracker(bat_priv); +} + +static void mcast_tracker_timer(struct work_struct *work) +{ + struct bat_priv *bat_priv = container_of(work, struct bat_priv, + mcast_tracker_work.work); + + start_mcast_tracker(bat_priv); +} + int mcast_tracker_interval_set(struct net_device *net_dev, char *buff, size_t count) { @@ -68,6 +110,8 @@ ok:
atomic_set(&bat_priv->mcast_tracker_interval, new_tracker_interval);
+ mcast_tracker_reset(bat_priv); + return count; }
@@ -119,3 +163,16 @@ ok:
return count; } + +int mcast_init(struct bat_priv *bat_priv) +{ + INIT_DELAYED_WORK(&bat_priv->mcast_tracker_work, mcast_tracker_timer); + start_mcast_tracker(bat_priv); + + return 1; +} + +void mcast_free(struct bat_priv *bat_priv) +{ + stop_mcast_tracker(bat_priv); +} diff --git a/batman-adv/multicast.h b/batman-adv/multicast.h index 12a3376..26ce6d8 100644 --- a/batman-adv/multicast.h +++ b/batman-adv/multicast.h @@ -26,5 +26,8 @@ int mcast_tracker_interval_set(struct net_device *net_dev, char *buff, size_t count); int mcast_tracker_timeout_set(struct net_device *net_dev, char *buff, size_t count); +void mcast_tracker_reset(struct bat_priv *bat_priv); +int mcast_init(struct bat_priv *bat_priv); +void mcast_free(struct bat_priv *bat_priv);
#endif /* _NET_BATMAN_ADV_MULTICAST_H_ */ diff --git a/batman-adv/types.h b/batman-adv/types.h index 3abf6d9..c4ae252 100644 --- a/batman-adv/types.h +++ b/batman-adv/types.h @@ -173,6 +173,7 @@ struct bat_priv { struct delayed_work hna_work; struct delayed_work orig_work; struct delayed_work vis_work; + struct delayed_work mcast_tracker_work; struct gw_node *curr_gw; struct vis_info *my_vis_info; };
We need to store the MCA information attached to the OGMs so that we can fill in the tracker packets with it later.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- originator.c | 7 ++++++- routing.c | 40 +++++++++++++++++++++++++++++++++++++--- routing.h | 2 +- types.h | 2 ++ 4 files changed, 46 insertions(+), 5 deletions(-)
diff --git a/batman-adv/originator.c b/batman-adv/originator.c index cf2ec37..aee77d3 100644 --- a/batman-adv/originator.c +++ b/batman-adv/originator.c @@ -139,6 +139,8 @@ void orig_node_free_ref(struct kref *refcount) hna_global_del_orig(orig_node->bat_priv, orig_node, "originator timed out");
+ kfree(orig_node->mca_buff); + kfree(orig_node->bcast_own); kfree(orig_node->bcast_own_sum); kfree(orig_node); @@ -227,6 +229,8 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; orig_node->hna_buff = NULL; + orig_node->mca_buff = NULL; + orig_node->num_mca = 0; orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1 @@ -340,7 +344,8 @@ static bool purge_orig_node(struct bat_priv *bat_priv, update_routes(bat_priv, orig_node, best_neigh_node, orig_node->hna_buff, - orig_node->hna_buff_len); + orig_node->hna_buff_len, + orig_node->mca_buff, orig_node->num_mca); } }
diff --git a/batman-adv/routing.c b/batman-adv/routing.c index a90d105..4f55715 100644 --- a/batman-adv/routing.c +++ b/batman-adv/routing.c @@ -85,6 +85,34 @@ static void update_HNA(struct bat_priv *bat_priv, struct orig_node *orig_node, } }
+/* Copy the mca buffer again if something has changed */ +static void update_MCA(struct orig_node *orig_node, + unsigned char *mca_buff, int num_mca) +{ + /* numbers differ? then reallocate buffer */ + if (num_mca != orig_node->num_mca) { + kfree(orig_node->mca_buff); + if (num_mca > 0) { + orig_node->mca_buff = + kmalloc(num_mca * ETH_ALEN, GFP_ATOMIC); + if (orig_node->mca_buff) + goto update; + } + orig_node->mca_buff = NULL; + orig_node->num_mca = 0; + /* size ok, just update? */ + } else if (num_mca > 0 && + memcmp(orig_node->mca_buff, mca_buff, num_mca * ETH_ALEN)) + goto update; + + /* it's the same, leave it like that */ + return; + +update: + memcpy(orig_node->mca_buff, mca_buff, num_mca * ETH_ALEN); + orig_node->num_mca = num_mca; +} + static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, struct neigh_node *neigh_node, @@ -129,7 +157,7 @@ static void update_route(struct bat_priv *bat_priv,
void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, struct neigh_node *neigh_node, unsigned char *hna_buff, - int hna_buff_len) + int hna_buff_len, unsigned char *mca_buff, int num_mca) {
if (!orig_node) @@ -141,6 +169,8 @@ void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, /* may be just HNA changed */ else update_HNA(bat_priv, orig_node, hna_buff, hna_buff_len); + + update_MCA(orig_node, mca_buff, num_mca); }
static int is_bidirectional_neigh(struct orig_node *orig_node, @@ -376,6 +406,7 @@ static void update_orig(struct bat_priv *bat_priv, struct hlist_node *node; int tmp_hna_buff_len; uint8_t bcast_own_sum_orig, bcast_own_sum_neigh; + unsigned char *mca_buff;
bat_dbg(DBG_BATMAN, bat_priv, "update_originator(): " "Searching and updating originator entry of received packet\n"); @@ -435,6 +466,7 @@ static void update_orig(struct bat_priv *bat_priv,
tmp_hna_buff_len = (hna_buff_len > batman_packet->num_hna * ETH_ALEN ? batman_packet->num_hna * ETH_ALEN : hna_buff_len); + mca_buff = (char *)batman_packet + BAT_PACKET_LEN + tmp_hna_buff_len;
/* if this neighbor already is our next hop there is nothing * to change */ @@ -467,12 +499,14 @@ static void update_orig(struct bat_priv *bat_priv, }
update_routes(bat_priv, orig_node, neigh_node, - hna_buff, tmp_hna_buff_len); + hna_buff, tmp_hna_buff_len, mca_buff, + batman_packet->num_mca); goto update_gw;
update_hna: update_routes(bat_priv, orig_node, orig_node->router, - hna_buff, tmp_hna_buff_len); + hna_buff, tmp_hna_buff_len, mca_buff, + batman_packet->num_mca);
update_gw: if (orig_node->gw_flags != batman_packet->gw_flags) diff --git a/batman-adv/routing.h b/batman-adv/routing.h index e02789e..bf508e6 100644 --- a/batman-adv/routing.h +++ b/batman-adv/routing.h @@ -31,7 +31,7 @@ void receive_bat_packet(struct ethhdr *ethhdr, struct batman_if *if_incoming); void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, struct neigh_node *neigh_node, unsigned char *hna_buff, - int hna_buff_len); + int hna_buff_len, unsigned char *mca_buff, int num_mca); int route_unicast_packet(struct sk_buff *skb, struct batman_if *recv_if, int hdr_size); int recv_icmp_packet(struct sk_buff *skb, struct batman_if *recv_if); diff --git a/batman-adv/types.h b/batman-adv/types.h index c4ae252..675a50f 100644 --- a/batman-adv/types.h +++ b/batman-adv/types.h @@ -79,6 +79,8 @@ struct orig_node { uint8_t flags; unsigned char *hna_buff; int16_t hna_buff_len; + unsigned char *mca_buff; + uint8_t num_mca; uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS];
This commit introduces batman multicast tracker packets. Their job is to mark the nodes responsible for forwarding multicast data later (a plain multicast receiver will not be marked, only the forwarding nodes).
With the proact_tracking multicast mode activated, a path between all multicast _receivers_ of a group will be marked - in this mode BATMAN assumes that a multicast receiver is also a multicast sender, so a multicast sender should join the same multicast group as well.
The advantage of this is lower complexity, and paths are marked in advance, before any actual data packet has been sent, which decreases delays. The disadvantage is higher protocol overhead.
One large tracker packet will be created on the generating node first, which then gets split into one packet per necessary next-hop destination.
This commit does not add forwarding of tracker packets but just local generation and local sending of them.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- multicast.c | 548 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ multicast.h | 3 + 2 files changed, 551 insertions(+), 0 deletions(-)
diff --git a/batman-adv/multicast.c b/batman-adv/multicast.c index cc83937..34e89a8 100644 --- a/batman-adv/multicast.c +++ b/batman-adv/multicast.c @@ -21,6 +21,76 @@
#include "main.h" #include "multicast.h" +#include "hash.h" +#include "send.h" +#include "compat.h" + +struct tracker_packet_state { + int mcast_num, dest_num; + struct mcast_entry *mcast_entry; + uint8_t *dest_entry; + int break_flag; +}; + +static void init_state_mcast_entry(struct tracker_packet_state *state, + struct mcast_tracker_packet *tracker_packet) +{ + state->mcast_num = 0; + state->mcast_entry = (struct mcast_entry *)(tracker_packet + 1); + state->dest_entry = (uint8_t *)(state->mcast_entry + 1); +} + +static int check_state_mcast_entry(struct tracker_packet_state *state, + struct mcast_tracker_packet *tracker_packet) +{ + if (state->mcast_num < tracker_packet->num_mcast_entries && + !state->break_flag) + return 1; + + return 0; +} + +static void inc_state_mcast_entry(struct tracker_packet_state *state) +{ + state->mcast_num++; + state->mcast_entry = (struct mcast_entry *)state->dest_entry; + state->dest_entry = (uint8_t *)(state->mcast_entry + 1); +} + +static void init_state_dest_entry(struct tracker_packet_state *state) +{ + state->dest_num = 0; + state->break_flag = 1; +} + +static int check_state_dest_entry(struct tracker_packet_state *state) +{ + if (state->dest_num < state->mcast_entry->num_dest) + return 1; + + return 0; +} + +static void inc_state_dest_entry(struct tracker_packet_state *state) +{ + state->dest_num++; + state->dest_entry += ETH_ALEN; + state->break_flag = 0; +} + +#define tracker_packet_for_each_dest(state, tracker_packet) \ + for (init_state_mcast_entry(state, tracker_packet); \ + check_state_mcast_entry(state, tracker_packet); \ + inc_state_mcast_entry(state)) \ + for (init_state_dest_entry(state); \ + check_state_dest_entry(state); \ + inc_state_dest_entry(state)) + +struct dest_entries_list { + struct list_head list; + uint8_t dest[6]; + struct batman_if *batman_if; +};
/* how long to wait until sending a multicast tracker packet */ static int tracker_send_delay(struct bat_priv *bat_priv) @@ -56,11 +126,489 @@ void mcast_tracker_reset(struct bat_priv *bat_priv) start_mcast_tracker(bat_priv); }
+static inline int find_mca_match(struct orig_node *orig_node, + int mca_pos, uint8_t *mc_addr_list, int num_mcast_entries) +{ + int pos; + + for (pos = 0; pos < num_mcast_entries; pos++) + if (!memcmp(&mc_addr_list[pos*ETH_ALEN], + &orig_node->mca_buff[ETH_ALEN*mca_pos], ETH_ALEN)) + return pos; + return -1; +} + +/** + * Prepares a multicast tracker packet on a multicast member with all its + * groups and their members attached. Note, that the proactive tracking + * mode does not differentiate between multicast senders and receivers, + * resulting in tracker packets between each node. + * + * Returns NULL if this node is not a member of any group or if there are + * no other members in its groups. + * + * @bat_priv: bat_priv for the mesh we are preparing this packet + */ +static struct mcast_tracker_packet *mcast_proact_tracker_prepare( + struct bat_priv *bat_priv, int *tracker_packet_len) +{ + struct net_device *soft_iface = bat_priv->primary_if->soft_iface; + uint8_t *mc_addr_list; + MC_LIST *mc_entry; + struct element_t *bucket; + struct orig_node *orig_node; + struct hashtable_t *hash = bat_priv->orig_hash; + struct hlist_node *walk; + struct hlist_head *head; + int i; + + /* one dest_entries_list per multicast group, + * they'll collect dest_entries[x] */ + int num_mcast_entries, used_mcast_entries = 0; + struct list_head *dest_entries_list; + struct dest_entries_list dest_entries[UINT8_MAX], *dest, *tmp; + int num_dest_entries, dest_entries_total = 0; + + uint8_t *dest_entry; + int pos, mca_pos; + struct mcast_tracker_packet *tracker_packet = NULL; + struct mcast_entry *mcast_entry; + + if (!hash) + goto out; + + /* Make a copy so we don't have to rush because of locking */ + netif_addr_lock_bh(soft_iface); + num_mcast_entries = netdev_mc_count(soft_iface); + mc_addr_list = kmalloc(ETH_ALEN * num_mcast_entries, GFP_ATOMIC); + if (!mc_addr_list) { + netif_addr_unlock_bh(soft_iface); + goto out; + } + pos = 0; + netdev_for_each_mc_addr(mc_entry, soft_iface) { 
+ memcpy(&mc_addr_list[pos * ETH_ALEN], mc_entry->MC_LIST_ADDR, + ETH_ALEN); + pos++; + } + netif_addr_unlock_bh(soft_iface); + + if (num_mcast_entries > UINT8_MAX) + num_mcast_entries = UINT8_MAX; + dest_entries_list = kmalloc(num_mcast_entries * + sizeof(struct list_head), GFP_ATOMIC); + if (!dest_entries_list) + goto free; + + for (pos = 0; pos < num_mcast_entries; pos++) + INIT_LIST_HEAD(&dest_entries_list[pos]); + + /* fill the lists and buffers */ + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(bucket, walk, head, hlist) { + orig_node = bucket->data; + if (!orig_node->num_mca) + continue; + + num_dest_entries = 0; + for (mca_pos = 0; mca_pos < orig_node->num_mca && + dest_entries_total != UINT8_MAX; mca_pos++) { + pos = find_mca_match(orig_node, mca_pos, + mc_addr_list, num_mcast_entries); + if (pos > UINT8_MAX || pos < 0) + continue; + memcpy(dest_entries[dest_entries_total].dest, + orig_node->orig, ETH_ALEN); + list_add( + &dest_entries[dest_entries_total].list, + &dest_entries_list[pos]); + + num_dest_entries++; + dest_entries_total++; + } + } + rcu_read_unlock(); + } + + /* Any list left empty? */ + for (pos = 0; pos < num_mcast_entries; pos++) + if (!list_empty(&dest_entries_list[pos])) + used_mcast_entries++; + + if (!used_mcast_entries) + goto free_all; + + /* prepare tracker packet, finally! 
*/ + *tracker_packet_len = sizeof(struct mcast_tracker_packet) + + used_mcast_entries * sizeof(struct mcast_entry) + + ETH_ALEN * dest_entries_total; + if (*tracker_packet_len > ETH_DATA_LEN) { + pr_warning("mcast tracker packet got too large (%i Bytes), " + "forcing reduced size of %i Bytes\n", + *tracker_packet_len, ETH_DATA_LEN); + *tracker_packet_len = ETH_DATA_LEN; + } + tracker_packet = kmalloc(*tracker_packet_len, GFP_ATOMIC); + + tracker_packet->packet_type = BAT_MCAST_TRACKER; + tracker_packet->version = COMPAT_VERSION; + memcpy(tracker_packet->orig, bat_priv->primary_if->net_dev->dev_addr, + ETH_ALEN); + tracker_packet->ttl = TTL; + tracker_packet->num_mcast_entries = (used_mcast_entries > UINT8_MAX) ? + UINT8_MAX : used_mcast_entries; + memset(tracker_packet->align, 0, sizeof(tracker_packet->align)); + + /* append all collected entries */ + mcast_entry = (struct mcast_entry *)(tracker_packet + 1); + for (pos = 0; pos < num_mcast_entries; pos++) { + if (list_empty(&dest_entries_list[pos])) + continue; + + if ((char *)(mcast_entry + 1) <= + (char *)tracker_packet + ETH_DATA_LEN) { + memcpy(mcast_entry->mcast_addr, + &mc_addr_list[pos*ETH_ALEN], ETH_ALEN); + mcast_entry->num_dest = 0; + } + + dest_entry = (uint8_t *)(mcast_entry + 1); + list_for_each_entry_safe(dest, tmp, &dest_entries_list[pos], + list) { + /* still place for a dest_entry left? + * watch out for overflow here, stop at UINT8_MAX */ + if ((char *)dest_entry + ETH_ALEN <= + (char *)tracker_packet + ETH_DATA_LEN && + mcast_entry->num_dest != UINT8_MAX) { + mcast_entry->num_dest++; + memcpy(dest_entry, dest->dest, ETH_ALEN); + dest_entry += ETH_ALEN; + } + list_del(&dest->list); + } + /* still space for another mcast_entry left? 
*/ + if ((char *)(mcast_entry + 1) <= + (char *)tracker_packet + ETH_DATA_LEN) + mcast_entry = (struct mcast_entry *)dest_entry; + } + + + /* outstanding cleanup */ +free_all: + kfree(dest_entries_list); +free: + kfree(mc_addr_list); +out: + + return tracker_packet; +} + +/* Adds the router for the destination address to the next_hop list and its + * interface to the forw_if_list - but only if this router has not been + * added yet */ +static int add_router_of_dest(struct dest_entries_list *next_hops, + uint8_t *dest, struct bat_priv *bat_priv) +{ + struct dest_entries_list *next_hop_tmp, *next_hop_entry; + struct element_t *bucket; + struct orig_node *orig_node; + struct hashtable_t *hash = bat_priv->orig_hash; + struct hlist_node *walk; + struct hlist_head *head; + int i; + + next_hop_entry = kmalloc(sizeof(struct dest_entries_list), GFP_ATOMIC); + if (!next_hop_entry) + return 1; + + next_hop_entry->batman_if = NULL; + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(bucket, walk, head, hlist) { + orig_node = bucket->data; + + if (memcmp(orig_node->orig, dest, ETH_ALEN)) + continue; + + if (!orig_node->router) { + i = hash->size; + break; + } + + memcpy(next_hop_entry->dest, orig_node->router->addr, + ETH_ALEN); + next_hop_entry->batman_if = + orig_node->router->if_incoming; + i = hash->size; + break; + } + rcu_read_unlock(); + } + if (!next_hop_entry->batman_if) + goto free; + + list_for_each_entry(next_hop_tmp, &next_hops->list, list) + if (!memcmp(next_hop_tmp->dest, next_hop_entry->dest, + ETH_ALEN)) + goto free; + + list_add(&next_hop_entry->list, &next_hops->list); + + return 0; + +free: + kfree(next_hop_entry); + return 1; +} + +/* Collect nexthops for all dest entries specified in this tracker packet */ +static int tracker_next_hops(struct mcast_tracker_packet *tracker_packet, + struct dest_entries_list *next_hops, + struct bat_priv *bat_priv) +{ + int num_next_hops = 0, ret; + struct 
tracker_packet_state state; + + INIT_LIST_HEAD(&next_hops->list); + + tracker_packet_for_each_dest(&state, tracker_packet) { + ret = add_router_of_dest(next_hops, state.dest_entry, + bat_priv); + if (!ret) + num_next_hops++; + } + + return num_next_hops; +} + +static void zero_tracker_packet(struct mcast_tracker_packet *tracker_packet, + uint8_t *next_hop, struct bat_priv *bat_priv) +{ + struct tracker_packet_state state; + + struct element_t *bucket; + struct orig_node *orig_node; + struct hashtable_t *hash = bat_priv->orig_hash; + struct hlist_node *walk; + struct hlist_head *head; + int i; + + tracker_packet_for_each_dest(&state, tracker_packet) { + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(bucket, walk, head, hlist) { + orig_node = bucket->data; + + if (memcmp(orig_node->orig, state.dest_entry, + ETH_ALEN)) + continue; + + /* is the next hop already our destination? */ + if (!memcmp(orig_node->orig, next_hop, + ETH_ALEN)) + memset(state.dest_entry, '\0', + ETH_ALEN); + else if (!orig_node->router) + memset(state.dest_entry, '\0', + ETH_ALEN); + else if (!memcmp(orig_node->orig, + orig_node->router->orig_node-> + primary_addr, ETH_ALEN)) + memset(state.dest_entry, '\0', + ETH_ALEN); + /* is this the wrong next hop for our + * destination? 
*/ + else if (memcmp(orig_node->router->addr, + next_hop, ETH_ALEN)) + memset(state.dest_entry, '\0', + ETH_ALEN); + + i = hash->size; + break; + } + rcu_read_unlock(); + } + } +} + +static int shrink_tracker_packet(struct mcast_tracker_packet *tracker_packet, + int tracker_packet_len) +{ + struct tracker_packet_state state; + uint8_t *tail = (uint8_t *)tracker_packet + tracker_packet_len; + int new_tracker_packet_len = sizeof(struct mcast_tracker_packet); + + tracker_packet_for_each_dest(&state, tracker_packet) { + if (memcmp(state.dest_entry, "\0\0\0\0\0\0", ETH_ALEN)) { + new_tracker_packet_len += ETH_ALEN; + continue; + } + + memmove(state.dest_entry, state.dest_entry + ETH_ALEN, + tail - state.dest_entry - ETH_ALEN); + + state.mcast_entry->num_dest--; + tail -= ETH_ALEN; + + if (state.mcast_entry->num_dest) { + state.dest_num--; + state.dest_entry -= ETH_ALEN; + continue; + } + + /* = mcast_entry */ + state.dest_entry -= sizeof(struct mcast_entry); + + memmove(state.dest_entry, state.dest_entry + + sizeof(struct mcast_entry), + tail - state.dest_entry - sizeof(struct mcast_entry)); + + tracker_packet->num_mcast_entries--; + tail -= sizeof(struct mcast_entry); + + state.mcast_num--; + + /* Avoid mcast_entry check of tracker_packet_for_each_dest's + * inner loop */ + state.break_flag = 0; + break; + } + + new_tracker_packet_len += sizeof(struct mcast_entry) * + tracker_packet->num_mcast_entries; + + return new_tracker_packet_len; +} + +static struct sk_buff *build_tracker_packet_skb( + struct mcast_tracker_packet *tracker_packet, + int tracker_packet_len, uint8_t *dest) +{ + struct sk_buff *skb; + struct mcast_tracker_packet *skb_tracker_data; + + skb = dev_alloc_skb(tracker_packet_len + sizeof(struct ethhdr)); + if (!skb) + return NULL; + + skb_reserve(skb, sizeof(struct ethhdr)); + skb_tracker_data = (struct mcast_tracker_packet *) + skb_put(skb, tracker_packet_len); + + memcpy(skb_tracker_data, tracker_packet, tracker_packet_len); + + return skb; +} + + +/** 
+ * Sends (splitted parts of) a multicast tracker packet on the according + * interfaces. + * + * @tracker_packet: A compact multicast tracker packet with all groups and + * destinations attached. + */ +void route_mcast_tracker_packet( + struct mcast_tracker_packet *tracker_packet, + int tracker_packet_len, struct bat_priv *bat_priv) +{ + struct dest_entries_list next_hops, *tmp; + struct mcast_tracker_packet *next_hop_tracker_packets, + *next_hop_tracker_packet; + struct dest_entries_list *next_hop; + struct sk_buff *skb; + int num_next_hops, i; + int *tracker_packet_lengths; + + rcu_read_lock(); + num_next_hops = tracker_next_hops(tracker_packet, &next_hops, + bat_priv); + if (!num_next_hops) + goto out; + next_hop_tracker_packets = kmalloc(tracker_packet_len * num_next_hops, + GFP_ATOMIC); + if (!next_hop_tracker_packets) + goto free; + + tracker_packet_lengths = kmalloc(num_next_hops * sizeof(int), + GFP_ATOMIC); + if (!tracker_packet_lengths) + goto free2; + + i = 0; + list_for_each_entry_safe(next_hop, tmp, &next_hops.list, list) { + next_hop_tracker_packet = (struct mcast_tracker_packet *) + ((char *)next_hop_tracker_packets + + i * tracker_packet_len); + memcpy(next_hop_tracker_packet, tracker_packet, + tracker_packet_len); + zero_tracker_packet(next_hop_tracker_packet, next_hop->dest, + bat_priv); + tracker_packet_lengths[i] = shrink_tracker_packet( + next_hop_tracker_packet, tracker_packet_len); + i++; + } + + i = 0; + /* Add ethernet header, send 'em! 
*/ + list_for_each_entry_safe(next_hop, tmp, &next_hops.list, list) { + if (tracker_packet_lengths[i] == + sizeof(struct mcast_tracker_packet)) + goto skip_send; + + skb = build_tracker_packet_skb(&next_hop_tracker_packets[i], + tracker_packet_lengths[i], + next_hop->dest); + if (skb) + send_skb_packet(skb, next_hop->batman_if, + next_hop->dest); +skip_send: + list_del(&next_hop->list); + kfree(next_hop); + i++; + } + + kfree(tracker_packet_lengths); + kfree(next_hop_tracker_packets); + return; + +free2: + kfree(next_hop_tracker_packets); +free: + list_for_each_entry_safe(next_hop, tmp, &next_hops.list, list) { + list_del(&next_hop->list); + kfree(next_hop); + } +out: + rcu_read_unlock(); +} + static void mcast_tracker_timer(struct work_struct *work) { struct bat_priv *bat_priv = container_of(work, struct bat_priv, mcast_tracker_work.work); + struct mcast_tracker_packet *tracker_packet = NULL; + int tracker_packet_len = 0;
+ if (atomic_read(&bat_priv->mcast_mode) == MCAST_MODE_PROACT_TRACKING) + tracker_packet = mcast_proact_tracker_prepare(bat_priv, + &tracker_packet_len); + + if (!tracker_packet) + goto out; + + route_mcast_tracker_packet(tracker_packet, tracker_packet_len, + bat_priv); + kfree(tracker_packet); + +out: start_mcast_tracker(bat_priv); }
diff --git a/batman-adv/multicast.h b/batman-adv/multicast.h index 26ce6d8..2711d8b 100644 --- a/batman-adv/multicast.h +++ b/batman-adv/multicast.h @@ -27,6 +27,9 @@ int mcast_tracker_interval_set(struct net_device *net_dev, char *buff, int mcast_tracker_timeout_set(struct net_device *net_dev, char *buff, size_t count); void mcast_tracker_reset(struct bat_priv *bat_priv); +void route_mcast_tracker_packet( + struct mcast_tracker_packet *tracker_packet, + int tracker_packet_len, struct bat_priv *bat_priv); int mcast_init(struct bat_priv *bat_priv); void mcast_free(struct bat_priv *bat_priv);
Hey Linus,
please see some comments inline.
On Sat, Jan 22, 2011 at 02:21:30AM +0100, Linus Lüssing wrote:
[...]
+static inline int find_mca_match(struct orig_node *orig_node,
int mca_pos, uint8_t *mc_addr_list, int num_mcast_entries)
+{
- int pos;
- for (pos = 0; pos < num_mcast_entries; pos++)
if (!memcmp(&mc_addr_list[pos*ETH_ALEN],
&orig_node->mca_buff[ETH_ALEN*mca_pos], ETH_ALEN))
return pos;
- return -1;
+}
A comment explaining the function find_mca_match() would be nice.
+/**
- Prepares a multicast tracker packet on a multicast member with all its
- groups and their members attached. Note, that the proactive tracking
- mode does not differentiate between multicast senders and receivers,
- resulting in tracker packets between each node.
- Returns NULL if this node is not a member of any group or if there are
- no other members in its groups.
- @bat_priv: bat_priv for the mesh we are preparing this packet
- */
+static struct mcast_tracker_packet *mcast_proact_tracker_prepare(
struct bat_priv *bat_priv, int *tracker_packet_len)
+{
- struct net_device *soft_iface = bat_priv->primary_if->soft_iface;
- uint8_t *mc_addr_list;
- MC_LIST *mc_entry;
- struct element_t *bucket;
- struct orig_node *orig_node;
- struct hashtable_t *hash = bat_priv->orig_hash;
- struct hlist_node *walk;
- struct hlist_head *head;
- int i;
- /* one dest_entries_list per multicast group,
* they'll collect dest_entries[x] */
- int num_mcast_entries, used_mcast_entries = 0;
- struct list_head *dest_entries_list;
- struct dest_entries_list dest_entries[UINT8_MAX], *dest, *tmp;
This will reserve 256 * 18 = 4608 bytes on the stack, which is too much for 4k stacks. Please allocate this somewhere else.
- int num_dest_entries, dest_entries_total = 0;
- uint8_t *dest_entry;
- int pos, mca_pos;
- struct mcast_tracker_packet *tracker_packet = NULL;
- struct mcast_entry *mcast_entry;
- if (!hash)
goto out;
- /* Make a copy so we don't have to rush because of locking */
- netif_addr_lock_bh(soft_iface);
- num_mcast_entries = netdev_mc_count(soft_iface);
- mc_addr_list = kmalloc(ETH_ALEN * num_mcast_entries, GFP_ATOMIC);
- if (!mc_addr_list) {
netif_addr_unlock_bh(soft_iface);
goto out;
- }
- pos = 0;
- netdev_for_each_mc_addr(mc_entry, soft_iface) {
memcpy(&mc_addr_list[pos * ETH_ALEN], mc_entry->MC_LIST_ADDR,
ETH_ALEN);
pos++;
- }
- netif_addr_unlock_bh(soft_iface);
- if (num_mcast_entries > UINT8_MAX)
num_mcast_entries = UINT8_MAX;
- dest_entries_list = kmalloc(num_mcast_entries *
sizeof(struct list_head), GFP_ATOMIC);
- if (!dest_entries_list)
goto free;
- for (pos = 0; pos < num_mcast_entries; pos++)
INIT_LIST_HEAD(&dest_entries_list[pos]);
dest_entries[...].list should be initialized here too, shouldn't they? BTW, the names are a little bit confusing (dest_entries_list vs. dest_entries), other names and an explanation would be helpful.
- /* fill the lists and buffers */
- for (i = 0; i < hash->size; i++) {
head = &hash->table[i];
rcu_read_lock();
hlist_for_each_entry_rcu(bucket, walk, head, hlist) {
orig_node = bucket->data;
if (!orig_node->num_mca)
continue;
num_dest_entries = 0;
for (mca_pos = 0; mca_pos < orig_node->num_mca &&
dest_entries_total != UINT8_MAX; mca_pos++) {
pos = find_mca_match(orig_node, mca_pos,
mc_addr_list, num_mcast_entries);
if (pos > UINT8_MAX || pos < 0)
Shouldn't this rather be if (pos >= num_mcast_entries || pos < 0)
(it is used for dest_entries_list after all).
continue;
memcpy(dest_entries[dest_entries_total].dest,
orig_node->orig, ETH_ALEN);
list_add(
&dest_entries[dest_entries_total].list,
&dest_entries_list[pos]);
num_dest_entries++;
num_dest_entries is obsolete IMHO, it is only increased but never used.
dest_entries_total++;
}
}
rcu_read_unlock();
- }
- /* Any list left empty? */
- for (pos = 0; pos < num_mcast_entries; pos++)
if (!list_empty(&dest_entries_list[pos]))
used_mcast_entries++;
- if (!used_mcast_entries)
dest_entries[...].list should be initialized here too, shouldn't they? BTW, the names are a little bit confusing (dest_entries_list vs. dest_entries), other names and an explanation would be helpful.
Nope, afaik I don't need to initialize list items which I'm not using as a list's head - list_add() will set the next and prev pointers anyway. Hmm, for the names I couldn't come up with something much better yet - renamed dest_entries_list to dest_entries_buckets though, to make it clearer that dest_entries are some reserved items which will be sorted into the right buckets later.
Hmm, there are actually two lines of explanation in mcast_tracker_packet_prepare(): 480 /* one dest_entries_buckets[x] per multicast group, 481 * they'll collect dest_entries[y] items */ 482 int num_mcast_entries, used_mcast_entries = 0; 483 struct list_head *dest_entries_buckets; 484 struct dest_entries_list *dest_entries, *dest, *tmp;
Or do you mean that the term "dest_entries" is ambiguous, in that it is not clear whether 'dest' refers to a next hop or a final destination?
Cheers, Linus
+struct tracker_packet_state {
- int mcast_num, dest_num;
- struct mcast_entry *mcast_entry;
- uint8_t *dest_entry;
- int break_flag;
+};
+static void init_state_mcast_entry(struct tracker_packet_state *state,
struct mcast_tracker_packet *tracker_packet)
+{
- state->mcast_num = 0;
- state->mcast_entry = (struct mcast_entry *)(tracker_packet + 1);
- state->dest_entry = (uint8_t *)(state->mcast_entry + 1);
+}
+static int check_state_mcast_entry(struct tracker_packet_state *state,
struct mcast_tracker_packet *tracker_packet)
+{
- if (state->mcast_num < tracker_packet->num_mcast_entries &&
!state->break_flag)
return 1;
- return 0;
+}
+static void inc_state_mcast_entry(struct tracker_packet_state *state) +{
- state->mcast_num++;
- state->mcast_entry = (struct mcast_entry *)state->dest_entry;
- state->dest_entry = (uint8_t *)(state->mcast_entry + 1);
+}
+static void init_state_dest_entry(struct tracker_packet_state *state) +{
- state->dest_num = 0;
- state->break_flag = 1;
+}
+static int check_state_dest_entry(struct tracker_packet_state *state) +{
- if (state->dest_num < state->mcast_entry->num_dest)
return 1;
- return 0;
+}
+static void inc_state_dest_entry(struct tracker_packet_state *state) +{
- state->dest_num++;
- state->dest_entry += ETH_ALEN;
- state->break_flag = 0;
+}
+#define tracker_packet_for_each_dest(state, tracker_packet) \
- for (init_state_mcast_entry(state, tracker_packet); \
check_state_mcast_entry(state, tracker_packet); \
inc_state_mcast_entry(state)) \
for (init_state_dest_entry(state); \
check_state_dest_entry(state); \
inc_state_dest_entry(state))
I mixed up the logic of the new state.break_flag a little. It might be fixed upstream already (I didn't have the time to test it yet). It will be fixed and tested with the next patchset.
Cheers, Linus
Hello,
found one more thing which will break packet sending, please see inline:
On Sat, Jan 22, 2011 at 02:21:30AM +0100, Linus Lüssing wrote:
[...] +/**
- Sends (splitted parts of) a multicast tracker packet on the according
- interfaces.
- @tracker_packet: A compact multicast tracker packet with all groups and
destinations attached.
- */
+void route_mcast_tracker_packet(
struct mcast_tracker_packet *tracker_packet,
int tracker_packet_len, struct bat_priv *bat_priv)
+{
- struct dest_entries_list next_hops, *tmp;
- struct mcast_tracker_packet *next_hop_tracker_packets,
*next_hop_tracker_packet;
- struct dest_entries_list *next_hop;
- struct sk_buff *skb;
- int num_next_hops, i;
- int *tracker_packet_lengths;
- rcu_read_lock();
- num_next_hops = tracker_next_hops(tracker_packet, &next_hops,
bat_priv);
- if (!num_next_hops)
goto out;
- next_hop_tracker_packets = kmalloc(tracker_packet_len * num_next_hops,
GFP_ATOMIC);
- if (!next_hop_tracker_packets)
goto free;
- tracker_packet_lengths = kmalloc(num_next_hops * sizeof(int),
GFP_ATOMIC);
- if (!tracker_packet_lengths)
goto free2;
- i = 0;
- list_for_each_entry_safe(next_hop, tmp, &next_hops.list, list) {
next_hop_tracker_packet = (struct mcast_tracker_packet *)
((char *)next_hop_tracker_packets +
i * tracker_packet_len);
memcpy(next_hop_tracker_packet, tracker_packet,
tracker_packet_len);
zero_tracker_packet(next_hop_tracker_packet, next_hop->dest,
bat_priv);
tracker_packet_lengths[i] = shrink_tracker_packet(
next_hop_tracker_packet, tracker_packet_len);
i++;
- }
- i = 0;
- /* Add ethernet header, send 'em! */
- list_for_each_entry_safe(next_hop, tmp, &next_hops.list, list) {
if (tracker_packet_lengths[i] ==
sizeof(struct mcast_tracker_packet))
goto skip_send;
skb = build_tracker_packet_skb(&next_hop_tracker_packets[i],
Don't use next_hop_tracker_packets[i]! This will give you a wrong pointer, because array indexing advances the pointer by sizeof(struct mcast_tracker_packet) and not by tracker_packet_len. Instead, please use a "next_hop_tracker_packet" construct like in the loop above.
(BTW, I think you could merge the two loops and drop the "tracker_packet_lengths" array used for keeping the lengths ...)
tracker_packet_lengths[i],
next_hop->dest);
if (skb)
send_skb_packet(skb, next_hop->batman_if,
next_hop->dest);
+skip_send:
list_del(&next_hop->list);
kfree(next_hop);
i++;
- }
Hi,
please see comments inline.
On Sat, Jan 22, 2011 at 02:21:30AM +0100, Linus Lüssing wrote:
[...]

+static int add_router_of_dest(struct dest_entries_list *next_hops,
+			      uint8_t *dest, struct bat_priv *bat_priv)
+{
+	struct dest_entries_list *next_hop_tmp, *next_hop_entry;
+	struct element_t *bucket;
+	struct orig_node *orig_node;
+	struct hashtable_t *hash = bat_priv->orig_hash;
+	struct hlist_node *walk;
+	struct hlist_head *head;
+	int i;
+
+	next_hop_entry = kmalloc(sizeof(struct dest_entries_list), GFP_ATOMIC);
+	if (!next_hop_entry)
+		return 1;
+
+	next_hop_entry->batman_if = NULL;
+	for (i = 0; i < hash->size; i++) {
+		head = &hash->table[i];
+
+		rcu_read_lock();
+		hlist_for_each_entry_rcu(bucket, walk, head, hlist) {
+			orig_node = bucket->data;
+			if (memcmp(orig_node->orig, dest, ETH_ALEN))
+				continue;

Traversing the hash yourself seems pretty redundant. If you need to find a specific node, you can just use hash_find().

+			if (!orig_node->router) {
+				i = hash->size;
+				break;
+			}
+
+			memcpy(next_hop_entry->dest, orig_node->router->addr,
+			       ETH_ALEN);
+			next_hop_entry->batman_if =
+				orig_node->router->if_incoming;
+			i = hash->size;
+			break;
+		}
+		rcu_read_unlock();
+	}
[...]

+static void zero_tracker_packet(struct mcast_tracker_packet *tracker_packet,
+				uint8_t *next_hop, struct bat_priv *bat_priv)
+{
+	struct tracker_packet_state state;
+	struct element_t *bucket;
+	struct orig_node *orig_node;
+	struct hashtable_t *hash = bat_priv->orig_hash;
+	struct hlist_node *walk;
+	struct hlist_head *head;
+	int i;
+
+	tracker_packet_for_each_dest(&state, tracker_packet) {
+		for (i = 0; i < hash->size; i++) {
+			head = &hash->table[i];
+
+			rcu_read_lock();
+			hlist_for_each_entry_rcu(bucket, walk, head, hlist) {
+				orig_node = bucket->data;
+				if (memcmp(orig_node->orig, state.dest_entry,
+					   ETH_ALEN))
+					continue;

Same here. Just use hash_find().

+				/* is the next hop already our destination? */
+				if (!memcmp(orig_node->orig, next_hop,
+					    ETH_ALEN))
+					memset(state.dest_entry, '\0',
+					       ETH_ALEN);
+				else if (!orig_node->router)
+					memset(state.dest_entry, '\0',
+					       ETH_ALEN);
+				else if (!memcmp(orig_node->orig,
+						 orig_node->router->orig_node->
+						 primary_addr, ETH_ALEN))
+					memset(state.dest_entry, '\0',
+					       ETH_ALEN);
+				/* is this the wrong next hop for our
+				 * destination? */
+				else if (memcmp(orig_node->router->addr,
+						next_hop, ETH_ALEN))
+					memset(state.dest_entry, '\0',
+					       ETH_ALEN);
+
+				i = hash->size;
+				break;
+			}
+			rcu_read_unlock();
+		}
+	}
+}
Before/while a tracker packet is being searched for next hops for its destination entries, we now also check whether the number of destination and multicast entries claimed in the packet would exceed tracker_packet_len. Otherwise we might read from or write past the end of the allocated buffer. Such a broken tracker packet can potentially occur once we reuse route_mcast_tracker_packet for tracker packets received from a neighbour node.
In that case, we simply reduce the stated mcast/dest numbers in the tracker packet to fit the size of the allocated buffer.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- multicast.c | 27 ++++++++++++++++++++++++--- 1 files changed, 24 insertions(+), 3 deletions(-)
diff --git a/batman-adv/multicast.c b/batman-adv/multicast.c index 34e89a8..c06035f 100644 --- a/batman-adv/multicast.c +++ b/batman-adv/multicast.c @@ -366,17 +366,38 @@ free: return 1; }
-/* Collect nexthops for all dest entries specified in this tracker packet */ +/* Collect nexthops for all dest entries specified in this tracker packet. + * It also reduces the number of elements in the tracker packet if they exceed + * the buffers length (e.g. because of a received, broken tracker packet) to + * avoid writing in unallocated memory. */ static int tracker_next_hops(struct mcast_tracker_packet *tracker_packet, + int tracker_packet_len, struct dest_entries_list *next_hops, struct bat_priv *bat_priv) { int num_next_hops = 0, ret; struct tracker_packet_state state; + uint8_t *tail = (uint8_t *)tracker_packet + tracker_packet_len;
INIT_LIST_HEAD(&next_hops->list);
tracker_packet_for_each_dest(&state, tracker_packet) { + /* avoid writing outside of unallocated memory later */ + if (state.dest_entry + ETH_ALEN > tail) { + bat_dbg(DBG_BATMAN, bat_priv, + "mcast tracker packet is broken, too many " + "entries claimed for its length, repairing"); + + tracker_packet->num_mcast_entries = state.mcast_num; + + if (state.dest_num) { + tracker_packet->num_mcast_entries++; + state.mcast_entry->num_dest = state.dest_num; + } + + break; + } + ret = add_router_of_dest(next_hops, state.dest_entry, bat_priv); if (!ret) @@ -528,8 +549,8 @@ void route_mcast_tracker_packet( int *tracker_packet_lengths;
rcu_read_lock(); - num_next_hops = tracker_next_hops(tracker_packet, &next_hops, - bat_priv); + num_next_hops = tracker_next_hops(tracker_packet, tracker_packet_len, + &next_hops, bat_priv); if (!num_next_hops) goto out; next_hop_tracker_packets = kmalloc(tracker_packet_len * num_next_hops,
This commit adds the ability to also forward a received multicast tracker packet when necessary. In the case of multiple next-hop destinations, it reuses the same splitting methods introduced in one of the previous commits.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- hard-interface.c | 5 +++++ routing.c | 19 +++++++++++++++++++ routing.h | 1 + 3 files changed, 25 insertions(+), 0 deletions(-)
diff --git a/batman-adv/hard-interface.c b/batman-adv/hard-interface.c index 2bae3e4..f478c4b 100644 --- a/batman-adv/hard-interface.c +++ b/batman-adv/hard-interface.c @@ -624,6 +624,11 @@ int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, ret = recv_bcast_packet(skb, batman_if); break;
+ /* multicast tracker packet */ + case BAT_MCAST_TRACKER: + ret = recv_mcast_tracker_packet(skb, batman_if); + break; + /* vis packet */ case BAT_VIS: ret = recv_vis_packet(skb, batman_if); diff --git a/batman-adv/routing.c b/batman-adv/routing.c index 4f55715..944dc94 100644 --- a/batman-adv/routing.c +++ b/batman-adv/routing.c @@ -35,6 +35,7 @@ #include "gateway_common.h" #include "gateway_client.h" #include "unicast.h" +#include "multicast.h"
void slide_own_bcast_window(struct batman_if *batman_if) { @@ -1500,6 +1501,24 @@ out: return ret; }
+int recv_mcast_tracker_packet(struct sk_buff *skb, struct batman_if *recv_if) +{ + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct mcast_tracker_packet *tracker_packet; + int hdr_size = sizeof(struct mcast_tracker_packet); + + if (check_unicast_packet(skb, hdr_size) < 0) + return NET_RX_DROP; + + tracker_packet = (struct mcast_tracker_packet *)skb->data; + + route_mcast_tracker_packet(tracker_packet, skb->len, bat_priv); + + dev_kfree_skb(skb); + + return NET_RX_SUCCESS; +} + int recv_vis_packet(struct sk_buff *skb, struct batman_if *recv_if) { struct vis_packet *vis_packet; diff --git a/batman-adv/routing.h b/batman-adv/routing.h index bf508e6..0fad12a 100644 --- a/batman-adv/routing.h +++ b/batman-adv/routing.h @@ -38,6 +38,7 @@ int recv_icmp_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_ucast_frag_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_bcast_packet(struct sk_buff *skb, struct batman_if *recv_if); +int recv_mcast_tracker_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_vis_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_bat_packet(struct sk_buff *skb, struct batman_if *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv,
On reception of a multicast tracker packet (whether locally generated or received over an interface), a node now memorizes its forwarding state for the tuple of multicast group, originator, and next hops (plus their corresponding outgoing interfaces).
The first two elements are needed to decide whether a node shall forward a multicast data packet when it receives one later. The next-hop and interface information is needed to quickly decide whether a multicast data packet shall be forwarded via unicast to each individual next hop or via broadcast.
This commit does not yet purge multicast forwarding table entries after the configured tracker timeout.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- multicast.c | 277 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++- types.h | 2 + 2 files changed, 276 insertions(+), 3 deletions(-)
diff --git a/batman-adv/multicast.c b/batman-adv/multicast.c index c06035f..bef3972 100644 --- a/batman-adv/multicast.c +++ b/batman-adv/multicast.c @@ -25,6 +25,10 @@ #include "send.h" #include "compat.h"
+/* If auto mode for tracker timeout has been selected, + * how many times of tracker_interval to wait */ +#define TRACKER_TIMEOUT_AUTO_X 5 + struct tracker_packet_state { int mcast_num, dest_num; struct mcast_entry *mcast_entry; @@ -92,6 +96,34 @@ struct dest_entries_list { struct batman_if *batman_if; };
+ +struct mcast_forw_nexthop_entry { + struct list_head list; + uint8_t neigh_addr[6]; + unsigned long timeout; /* old jiffies value */ +}; + +struct mcast_forw_if_entry { + struct list_head list; + int16_t if_num; + int num_nexthops; + struct list_head mcast_nexthop_list; +}; + +struct mcast_forw_orig_entry { + struct list_head list; + uint8_t orig[6]; + uint32_t last_mcast_seqno; + unsigned long mcast_bits[NUM_WORDS]; + struct list_head mcast_if_list; +}; + +struct mcast_forw_table_entry { + struct list_head list; + uint8_t mcast_addr[6]; + struct list_head mcast_orig_list; +}; + /* how long to wait until sending a multicast tracker packet */ static int tracker_send_delay(struct bat_priv *bat_priv) { @@ -126,6 +158,217 @@ void mcast_tracker_reset(struct bat_priv *bat_priv) start_mcast_tracker(bat_priv); }
+static void prepare_forw_if_entry(struct list_head *forw_if_list, + int16_t if_num, uint8_t *neigh_addr) +{ + struct mcast_forw_if_entry *forw_if_entry; + struct mcast_forw_nexthop_entry *forw_nexthop_entry; + + list_for_each_entry(forw_if_entry, forw_if_list, list) + if (forw_if_entry->if_num == if_num) + goto skip_create_if; + + forw_if_entry = kmalloc(sizeof(struct mcast_forw_if_entry), + GFP_ATOMIC); + if (!forw_if_entry) + return; + + forw_if_entry->if_num = if_num; + forw_if_entry->num_nexthops = 0; + INIT_LIST_HEAD(&forw_if_entry->mcast_nexthop_list); + list_add(&forw_if_entry->list, forw_if_list); + +skip_create_if: + list_for_each_entry(forw_nexthop_entry, + &forw_if_entry->mcast_nexthop_list, list) { + if (!memcmp(forw_nexthop_entry->neigh_addr, + neigh_addr, ETH_ALEN)) + return; + } + + forw_nexthop_entry = kmalloc(sizeof(struct mcast_forw_nexthop_entry), + GFP_ATOMIC); + if (!forw_nexthop_entry && forw_if_entry->num_nexthops) + return; + else if (!forw_nexthop_entry) + goto free; + + memcpy(forw_nexthop_entry->neigh_addr, neigh_addr, ETH_ALEN); + forw_if_entry->num_nexthops++; + if (forw_if_entry->num_nexthops < 0) { + kfree(forw_nexthop_entry); + goto free; + } + + list_add(&forw_nexthop_entry->list, + &forw_if_entry->mcast_nexthop_list); + return; +free: + list_del(&forw_if_entry->list); + kfree(forw_if_entry); +} + +static struct list_head *prepare_forw_table_entry( + struct mcast_forw_table_entry *forw_table, + uint8_t *mcast_addr, uint8_t *orig) +{ + struct mcast_forw_table_entry *forw_table_entry; + struct mcast_forw_orig_entry *orig_entry; + + forw_table_entry = kmalloc(sizeof(struct mcast_forw_table_entry), + GFP_ATOMIC); + if (!forw_table_entry) + return NULL; + + memcpy(forw_table_entry->mcast_addr, mcast_addr, ETH_ALEN); + list_add(&forw_table_entry->list, &forw_table->list); + + INIT_LIST_HEAD(&forw_table_entry->mcast_orig_list); + orig_entry = kmalloc(sizeof(struct mcast_forw_orig_entry), GFP_ATOMIC); + if (!orig_entry) + goto free; + + 
memcpy(orig_entry->orig, orig, ETH_ALEN); + INIT_LIST_HEAD(&orig_entry->mcast_if_list); + list_add(&orig_entry->list, &forw_table_entry->mcast_orig_list); + + return &orig_entry->mcast_if_list; + +free: + list_del(&forw_table_entry->list); + kfree(forw_table_entry); + return NULL; +} + +static int sync_nexthop(struct mcast_forw_nexthop_entry *sync_nexthop_entry, + struct list_head *nexthop_list) +{ + struct mcast_forw_nexthop_entry *nexthop_entry; + int synced = 0; + + list_for_each_entry(nexthop_entry, nexthop_list, list) { + if (memcmp(sync_nexthop_entry->neigh_addr, + nexthop_entry->neigh_addr, ETH_ALEN)) + continue; + + nexthop_entry->timeout = jiffies; + list_del(&sync_nexthop_entry->list); + kfree(sync_nexthop_entry); + + synced = 1; + break; + } + + if (!synced) { + sync_nexthop_entry->timeout = jiffies; + list_move(&sync_nexthop_entry->list, nexthop_list); + return 1; + } + + return 0; +} + +static void sync_if(struct mcast_forw_if_entry *sync_if_entry, + struct list_head *if_list) +{ + struct mcast_forw_if_entry *if_entry; + struct mcast_forw_nexthop_entry *sync_nexthop_entry, *tmp; + int synced = 0; + + list_for_each_entry(if_entry, if_list, list) { + if (sync_if_entry->if_num != if_entry->if_num) + continue; + + list_for_each_entry_safe(sync_nexthop_entry, tmp, + &sync_if_entry->mcast_nexthop_list, list) + if (sync_nexthop(sync_nexthop_entry, + &if_entry->mcast_nexthop_list)) + if_entry->num_nexthops++; + + list_del(&sync_if_entry->list); + kfree(sync_if_entry); + + synced = 1; + break; + } + + if (!synced) + list_move(&sync_if_entry->list, if_list); +} + +/* syncs all multicast entries of sync_table_entry to forw_table */ +static void sync_orig(struct mcast_forw_orig_entry *sync_orig_entry, + struct list_head *orig_list) +{ + struct mcast_forw_orig_entry *orig_entry; + struct mcast_forw_if_entry *sync_if_entry, *tmp; + int synced = 0; + + list_for_each_entry(orig_entry, orig_list, list) { + if (memcmp(sync_orig_entry->orig, + orig_entry->orig, 
ETH_ALEN)) + continue; + + list_for_each_entry_safe(sync_if_entry, tmp, + &sync_orig_entry->mcast_if_list, list) + sync_if(sync_if_entry, &orig_entry->mcast_if_list); + + list_del(&sync_orig_entry->list); + kfree(sync_orig_entry); + + synced = 1; + break; + } + + if (!synced) + list_move(&sync_orig_entry->list, orig_list); +} + + +/* syncs all multicast entries of sync_table_entry to forw_table */ +static void sync_table(struct mcast_forw_table_entry *sync_table_entry, + struct list_head *forw_table) +{ + struct mcast_forw_table_entry *table_entry; + struct mcast_forw_orig_entry *sync_orig_entry, *tmp; + int synced = 0; + + list_for_each_entry(table_entry, forw_table, list) { + if (memcmp(sync_table_entry->mcast_addr, + table_entry->mcast_addr, ETH_ALEN)) + continue; + + list_for_each_entry_safe(sync_orig_entry, tmp, + &sync_table_entry->mcast_orig_list, list) + sync_orig(sync_orig_entry, + &table_entry->mcast_orig_list); + + list_del(&sync_table_entry->list); + kfree(sync_table_entry); + + synced = 1; + break; + } + + if (!synced) + list_move(&sync_table_entry->list, forw_table); +} + +/* Updates the old multicast forwarding table with the information gained + * from the generated/received tracker packet. It also frees the generated + * table for syncing (*forw_table). 
*/ +static void update_mcast_forw_table(struct mcast_forw_table_entry *forw_table, + struct bat_priv *bat_priv) +{ + struct mcast_forw_table_entry *sync_table_entry, *tmp; + + spin_lock_bh(&bat_priv->mcast_forw_table_lock); + list_for_each_entry_safe(sync_table_entry, tmp, &forw_table->list, + list) + sync_table(sync_table_entry, &bat_priv->mcast_forw_table); + spin_unlock_bh(&bat_priv->mcast_forw_table_lock); +} + static inline int find_mca_match(struct orig_node *orig_node, int mca_pos, uint8_t *mc_addr_list, int num_mcast_entries) { @@ -310,9 +553,12 @@ out: * interface to the forw_if_list - but only if this router has not been * added yet */ static int add_router_of_dest(struct dest_entries_list *next_hops, - uint8_t *dest, struct bat_priv *bat_priv) + uint8_t *dest, + struct list_head *forw_if_list, + struct bat_priv *bat_priv) { struct dest_entries_list *next_hop_tmp, *next_hop_entry; + int16_t if_num; struct element_t *bucket; struct orig_node *orig_node; struct hashtable_t *hash = bat_priv->orig_hash; @@ -344,6 +590,7 @@ static int add_router_of_dest(struct dest_entries_list *next_hops, ETH_ALEN); next_hop_entry->batman_if = orig_node->router->if_incoming; + if_num = next_hop_entry->batman_if->if_num; i = hash->size; break; } @@ -352,6 +599,10 @@ static int add_router_of_dest(struct dest_entries_list *next_hops, if (!next_hop_entry->batman_if) goto free;
+ if (forw_if_list) + prepare_forw_if_entry(forw_if_list, if_num, + next_hop_entry->dest); + list_for_each_entry(next_hop_tmp, &next_hops->list, list) if (!memcmp(next_hop_tmp->dest, next_hop_entry->dest, ETH_ALEN)) @@ -373,13 +624,16 @@ free: static int tracker_next_hops(struct mcast_tracker_packet *tracker_packet, int tracker_packet_len, struct dest_entries_list *next_hops, + struct mcast_forw_table_entry *forw_table, struct bat_priv *bat_priv) { int num_next_hops = 0, ret; struct tracker_packet_state state; uint8_t *tail = (uint8_t *)tracker_packet + tracker_packet_len; + struct list_head *forw_table_if = NULL;
INIT_LIST_HEAD(&next_hops->list); + INIT_LIST_HEAD(&forw_table->list);
tracker_packet_for_each_dest(&state, tracker_packet) { /* avoid writing outside of unallocated memory later */ @@ -398,8 +652,15 @@ static int tracker_next_hops(struct mcast_tracker_packet *tracker_packet, break; }
+ if (state.dest_num) + goto skip; + + forw_table_if = prepare_forw_table_entry(forw_table, + state.mcast_entry->mcast_addr, + tracker_packet->orig); +skip: ret = add_router_of_dest(next_hops, state.dest_entry, - bat_priv); + forw_table_if, bat_priv); if (!ret) num_next_hops++; } @@ -407,6 +668,8 @@ static int tracker_next_hops(struct mcast_tracker_packet *tracker_packet, return num_next_hops; }
+/* Zero destination entries not destined for the specified next hop in the + * tracker packet */ static void zero_tracker_packet(struct mcast_tracker_packet *tracker_packet, uint8_t *next_hop, struct bat_priv *bat_priv) { @@ -459,6 +722,8 @@ static void zero_tracker_packet(struct mcast_tracker_packet *tracker_packet, } }
+/* Remove zeroed destination entries and empty multicast entries in tracker + * packet */ static int shrink_tracker_packet(struct mcast_tracker_packet *tracker_packet, int tracker_packet_len) { @@ -544,15 +809,19 @@ void route_mcast_tracker_packet( struct mcast_tracker_packet *next_hop_tracker_packets, *next_hop_tracker_packet; struct dest_entries_list *next_hop; + struct mcast_forw_table_entry forw_table; struct sk_buff *skb; int num_next_hops, i; int *tracker_packet_lengths;
rcu_read_lock(); num_next_hops = tracker_next_hops(tracker_packet, tracker_packet_len, - &next_hops, bat_priv); + &next_hops, &forw_table, bat_priv); if (!num_next_hops) goto out; + + update_mcast_forw_table(&forw_table, bat_priv); + next_hop_tracker_packets = kmalloc(tracker_packet_len * num_next_hops, GFP_ATOMIC); if (!next_hop_tracker_packets) @@ -736,6 +1005,8 @@ ok: int mcast_init(struct bat_priv *bat_priv) { INIT_DELAYED_WORK(&bat_priv->mcast_tracker_work, mcast_tracker_timer); + INIT_LIST_HEAD(&bat_priv->mcast_forw_table); + start_mcast_tracker(bat_priv);
return 1; diff --git a/batman-adv/types.h b/batman-adv/types.h index 675a50f..75adec9 100644 --- a/batman-adv/types.h +++ b/batman-adv/types.h @@ -158,6 +158,7 @@ struct bat_priv { struct hlist_head forw_bcast_list; struct hlist_head gw_list; struct list_head vis_send_list; + struct list_head mcast_forw_table; struct hashtable_t *orig_hash; struct hashtable_t *hna_local_hash; struct hashtable_t *hna_global_hash; @@ -170,6 +171,7 @@ struct bat_priv { spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ spinlock_t softif_neigh_lock; /* protects soft-interface neigh list */ + spinlock_t mcast_forw_table_lock; /* protects mcast_forw_table */ int16_t num_local_hna; atomic_t hna_local_changed; struct delayed_work hna_work;
Hey Linus,
the spinlock initialization is missing. I'd suggest adding a line to main.c/mesh_init(): spin_lock_init(&bat_priv->mcast_forw_table_lock);
On Sat, Jan 22, 2011 at 02:21:33AM +0100, Linus Lüssing wrote:
With this commit the full multicast forwarding table, which is used to decide whether or not to forward a multicast data packet, can now be displayed via mcast_forw_table in batman-adv's debugfs directory.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- bat_debugfs.c | 9 ++++++ multicast.c | 82 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ multicast.h | 1 + 3 files changed, 92 insertions(+), 0 deletions(-)
diff --git a/batman-adv/bat_debugfs.c b/batman-adv/bat_debugfs.c index 0ae81d0..7b1b57d 100644 --- a/batman-adv/bat_debugfs.c +++ b/batman-adv/bat_debugfs.c @@ -32,6 +32,7 @@ #include "soft-interface.h" #include "vis.h" #include "icmp_socket.h" +#include "multicast.h"
static struct dentry *bat_debugfs;
@@ -252,6 +253,12 @@ static int transtable_local_open(struct inode *inode, struct file *file) return single_open(file, hna_local_seq_print_text, net_dev); }
+static int mcast_forw_table_open(struct inode *inode, struct file *file) +{ + struct net_device *net_dev = (struct net_device *)inode->i_private; + return single_open(file, mcast_forw_table_seq_print_text, net_dev); +} + static int vis_data_open(struct inode *inode, struct file *file) { struct net_device *net_dev = (struct net_device *)inode->i_private; @@ -280,6 +287,7 @@ static BAT_DEBUGINFO(gateways, S_IRUGO, gateways_open); static BAT_DEBUGINFO(softif_neigh, S_IRUGO, softif_neigh_open); static BAT_DEBUGINFO(transtable_global, S_IRUGO, transtable_global_open); static BAT_DEBUGINFO(transtable_local, S_IRUGO, transtable_local_open); +static BAT_DEBUGINFO(mcast_forw_table, S_IRUGO, mcast_forw_table_open); static BAT_DEBUGINFO(vis_data, S_IRUGO, vis_data_open);
static struct bat_debuginfo *mesh_debuginfos[] = { @@ -288,6 +296,7 @@ static struct bat_debuginfo *mesh_debuginfos[] = { &bat_debuginfo_softif_neigh, &bat_debuginfo_transtable_global, &bat_debuginfo_transtable_local, + &bat_debuginfo_mcast_forw_table, &bat_debuginfo_vis_data, NULL, }; diff --git a/batman-adv/multicast.c b/batman-adv/multicast.c index bef3972..686e10b 100644 --- a/batman-adv/multicast.c +++ b/batman-adv/multicast.c @@ -158,6 +158,24 @@ void mcast_tracker_reset(struct bat_priv *bat_priv) start_mcast_tracker(bat_priv); }
+static inline int get_remaining_timeout( + struct mcast_forw_nexthop_entry *nexthop_entry, + struct bat_priv *bat_priv) +{ + int tracker_timeout = atomic_read(&bat_priv->mcast_tracker_timeout); + if (!tracker_timeout) + tracker_timeout = atomic_read(&bat_priv->mcast_tracker_interval) + * TRACKER_TIMEOUT_AUTO_X; + if (!tracker_timeout) + tracker_timeout = atomic_read(&bat_priv->orig_interval) + * TRACKER_TIMEOUT_AUTO_X / 2; + + tracker_timeout = jiffies_to_msecs(nexthop_entry->timeout) + + tracker_timeout - jiffies_to_msecs(jiffies); + + return (tracker_timeout > 0 ? tracker_timeout : 0); +} + static void prepare_forw_if_entry(struct list_head *forw_if_list, int16_t if_num, uint8_t *neigh_addr) { @@ -1002,6 +1020,70 @@ ok: return count; }
+static inline struct batman_if *if_num_to_batman_if(int16_t if_num) +{ + struct batman_if *batman_if; + + list_for_each_entry_rcu(batman_if, &if_list, list) + if (batman_if->if_num == if_num) + return batman_if; + + return NULL; +} + +int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset) +{ + struct net_device *net_dev = (struct net_device *)seq->private; + struct bat_priv *bat_priv = netdev_priv(net_dev); + struct batman_if *batman_if; + struct mcast_forw_table_entry *table_entry; + struct mcast_forw_orig_entry *orig_entry; + struct mcast_forw_if_entry *if_entry; + struct mcast_forw_nexthop_entry *nexthop_entry; + + seq_printf(seq, "[B.A.T.M.A.N. adv %s%s, MainIF/MAC: %s/%pM (%s)]\n", + SOURCE_VERSION, REVISION_VERSION_STR, + bat_priv->primary_if->net_dev->name, + bat_priv->primary_if->net_dev->dev_addr, net_dev->name); + seq_printf(seq, "Multicast group MAC\tOriginator\t" + "Outgoing interface\tNexthop - timeout in msecs\n"); + + rcu_read_lock(); + spin_lock_bh(&bat_priv->mcast_forw_table_lock); + list_for_each_entry(table_entry, &bat_priv->mcast_forw_table, list) { + seq_printf(seq, "%pM\n", table_entry->mcast_addr); + + list_for_each_entry(orig_entry, &table_entry->mcast_orig_list, + list) { + seq_printf(seq, "\t%pM\n", orig_entry->orig); + + list_for_each_entry(if_entry, + &orig_entry->mcast_if_list, list) { + batman_if = + if_num_to_batman_if(if_entry->if_num); + if (!batman_if) + continue; + + seq_printf(seq, "\t\t%s\n", + batman_if->net_dev->name); + + list_for_each_entry(nexthop_entry, + &if_entry->mcast_nexthop_list, + list) { + seq_printf(seq, "\t\t\t%pM - %i\n", + nexthop_entry->neigh_addr, + get_remaining_timeout( + nexthop_entry, bat_priv)); + } + } + } + } + spin_unlock_bh(&bat_priv->mcast_forw_table_lock); + rcu_read_unlock(); + + return 0; +} + int mcast_init(struct bat_priv *bat_priv) { INIT_DELAYED_WORK(&bat_priv->mcast_tracker_work, mcast_tracker_timer); diff --git a/batman-adv/multicast.h b/batman-adv/multicast.h index 
2711d8b..0bd0590 100644 --- a/batman-adv/multicast.h +++ b/batman-adv/multicast.h @@ -30,6 +30,7 @@ void mcast_tracker_reset(struct bat_priv *bat_priv); void route_mcast_tracker_packet( struct mcast_tracker_packet *tracker_packet, int tracker_packet_len, struct bat_priv *bat_priv); +int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset); int mcast_init(struct bat_priv *bat_priv); void mcast_free(struct bat_priv *bat_priv);
With this commit, the multicast forwarding table, which has previously been filled by multicast tracker packets, is now checked frequently (once per second) for timed-out entries. Such entries are then removed from the table.
Note that a more frequent check interval is not necessary: multicast data is only forwarded if a matching entry exists and that entry has not timed out yet.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- multicast.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ multicast.h | 1 + originator.c | 2 + 3 files changed, 73 insertions(+), 0 deletions(-)
diff --git a/batman-adv/multicast.c b/batman-adv/multicast.c index 686e10b..e47cc56 100644 --- a/batman-adv/multicast.c +++ b/batman-adv/multicast.c @@ -898,6 +898,76 @@ out: rcu_read_unlock(); }
+static void purge_mcast_nexthop_list(struct list_head *mcast_nexthop_list, + int *num_nexthops, + struct bat_priv *bat_priv) +{ + struct mcast_forw_nexthop_entry *nexthop_entry, *tmp_nexthop_entry; + + list_for_each_entry_safe(nexthop_entry, tmp_nexthop_entry, + mcast_nexthop_list, list) { + if (get_remaining_timeout(nexthop_entry, bat_priv)) + continue; + + list_del(&nexthop_entry->list); + kfree(nexthop_entry); + *num_nexthops = *num_nexthops - 1; + } +} + +static void purge_mcast_if_list(struct list_head *mcast_if_list, + struct bat_priv *bat_priv) +{ + struct mcast_forw_if_entry *if_entry, *tmp_if_entry; + + list_for_each_entry_safe(if_entry, tmp_if_entry, mcast_if_list, list) { + purge_mcast_nexthop_list(&if_entry->mcast_nexthop_list, + &if_entry->num_nexthops, + bat_priv); + + if (!list_empty(&if_entry->mcast_nexthop_list)) + continue; + + list_del(&if_entry->list); + kfree(if_entry); + } +} + +static void purge_mcast_orig_list(struct list_head *mcast_orig_list, + struct bat_priv *bat_priv) +{ + struct mcast_forw_orig_entry *orig_entry, *tmp_orig_entry; + + list_for_each_entry_safe(orig_entry, tmp_orig_entry, mcast_orig_list, + list) { + purge_mcast_if_list(&orig_entry->mcast_if_list, bat_priv); + + if (!list_empty(&orig_entry->mcast_if_list)) + continue; + + list_del(&orig_entry->list); + kfree(orig_entry); + } +} + +void purge_mcast_forw_table(struct bat_priv *bat_priv) +{ + struct mcast_forw_table_entry *table_entry, *tmp_table_entry; + + spin_lock_bh(&bat_priv->mcast_forw_table_lock); + list_for_each_entry_safe(table_entry, tmp_table_entry, + &bat_priv->mcast_forw_table, list) { + purge_mcast_orig_list(&table_entry->mcast_orig_list, bat_priv); + + if (!list_empty(&table_entry->mcast_orig_list)) + continue; + + list_del(&table_entry->list); + kfree(table_entry); + } + spin_unlock_bh(&bat_priv->mcast_forw_table_lock); +} + static void mcast_tracker_timer(struct work_struct *work) { struct bat_priv *bat_priv = container_of(work, struct bat_priv, diff --git 
a/batman-adv/multicast.h b/batman-adv/multicast.h index 0bd0590..7312afa 100644 --- a/batman-adv/multicast.h +++ b/batman-adv/multicast.h @@ -30,6 +30,7 @@ void mcast_tracker_reset(struct bat_priv *bat_priv); void route_mcast_tracker_packet( struct mcast_tracker_packet *tracker_packet, int tracker_packet_len, struct bat_priv *bat_priv); +void purge_mcast_forw_table(struct bat_priv *bat_priv); int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset); int mcast_init(struct bat_priv *bat_priv); void mcast_free(struct bat_priv *bat_priv); diff --git a/batman-adv/originator.c b/batman-adv/originator.c index aee77d3..6c964e5 100644 --- a/batman-adv/originator.c +++ b/batman-adv/originator.c @@ -30,6 +30,7 @@ #include "hard-interface.h" #include "unicast.h" #include "soft-interface.h" +#include "multicast.h"
static void purge_orig(struct work_struct *work);
@@ -402,6 +403,7 @@ static void purge_orig(struct work_struct *work) struct bat_priv *bat_priv = container_of(delayed_work, struct bat_priv, orig_work);
+ purge_mcast_forw_table(bat_priv); _purge_orig(bat_priv); start_purge_timer(bat_priv); }
This patch adds the capability to encapsulate and send a node's own multicast data packets. Based on the previously established multicast forwarding table, the sender can decide whether it actually has to send the multicast data to one or more of its interfaces or not.
Furthermore, the sending procedure decides whether to broadcast or unicast a multicast data packet to its next hops, depending on the configured mcast_fanout (default: with fewer than 3 next hops on an interface, send separate unicast packets).
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- multicast.c | 163 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ multicast.h | 1 + soft-interface.c | 27 ++++++++- types.h | 1 + 4 files changed, 188 insertions(+), 4 deletions(-)
diff --git a/batman-adv/multicast.c b/batman-adv/multicast.c index e47cc56..a8613ef 100644 --- a/batman-adv/multicast.c +++ b/batman-adv/multicast.c @@ -23,6 +23,8 @@ #include "multicast.h" #include "hash.h" #include "send.h" +#include "soft-interface.h" +#include "hard-interface.h" #include "compat.h"
/* If auto mode for tracker timeout has been selected, @@ -1154,6 +1156,167 @@ int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset) return 0; }
+static inline void nexthops_from_if_list(struct list_head *mcast_if_list, + struct list_head *nexthop_list, + struct bat_priv *bat_priv) +{ + struct batman_if *batman_if; + struct mcast_forw_if_entry *if_entry; + struct mcast_forw_nexthop_entry *nexthop_entry; + struct dest_entries_list *dest_entry; + int mcast_fanout = atomic_read(&bat_priv->mcast_fanout); + + list_for_each_entry(if_entry, mcast_if_list, list) { + rcu_read_lock(); + batman_if = if_num_to_batman_if(if_entry->if_num); + if (!batman_if) { + rcu_read_unlock(); + continue; + } + + kref_get(&batman_if->refcount); + rcu_read_unlock(); + + + /* send via broadcast */ + if (if_entry->num_nexthops > mcast_fanout) { + dest_entry = kmalloc(sizeof(struct dest_entries_list), + GFP_ATOMIC); + memcpy(dest_entry->dest, broadcast_addr, ETH_ALEN); + dest_entry->batman_if = batman_if; + list_add(&dest_entry->list, nexthop_list); + continue; + } + + /* send separate unicast packets */ + list_for_each_entry(nexthop_entry, + &if_entry->mcast_nexthop_list, list) { + if (!get_remaining_timeout(nexthop_entry, bat_priv)) + continue; + + dest_entry = kmalloc(sizeof(struct dest_entries_list), + GFP_ATOMIC); + memcpy(dest_entry->dest, nexthop_entry->neigh_addr, + ETH_ALEN); + + kref_get(&batman_if->refcount); + dest_entry->batman_if = batman_if; + list_add(&dest_entry->list, nexthop_list); + } + kref_put(&batman_if->refcount, hardif_free_ref); + } +} + +static inline void nexthops_from_orig_list(uint8_t *orig, + struct list_head *mcast_orig_list, + struct list_head *nexthop_list, + struct bat_priv *bat_priv) +{ + struct mcast_forw_orig_entry *orig_entry; + + list_for_each_entry(orig_entry, mcast_orig_list, list) { + if (memcmp(orig, orig_entry->orig, ETH_ALEN)) + continue; + + nexthops_from_if_list(&orig_entry->mcast_if_list, nexthop_list, + bat_priv); + break; + } +} + +static inline void nexthops_from_table(uint8_t *dest, uint8_t *orig, + struct list_head *mcast_forw_table, + struct list_head *nexthop_list, + struct bat_priv 
*bat_priv) +{ + struct mcast_forw_table_entry *table_entry; + + list_for_each_entry(table_entry, mcast_forw_table, list) { + if (memcmp(dest, table_entry->mcast_addr, ETH_ALEN)) + continue; + + nexthops_from_orig_list(orig, &table_entry->mcast_orig_list, + nexthop_list, bat_priv); + break; + } +} + +static void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv) +{ + struct sk_buff *skb1; + struct mcast_packet *mcast_packet; + struct ethhdr *ethhdr; + int num_bcasts = 3, i; + struct list_head nexthop_list; + struct dest_entries_list *dest_entry, *tmp; + + mcast_packet = (struct mcast_packet *)skb->data; + ethhdr = (struct ethhdr *)(mcast_packet + 1); + + INIT_LIST_HEAD(&nexthop_list); + + mcast_packet->ttl--; + + spin_lock_bh(&bat_priv->mcast_forw_table_lock); + nexthops_from_table(ethhdr->h_dest, mcast_packet->orig, + &bat_priv->mcast_forw_table, &nexthop_list, + bat_priv); + spin_unlock_bh(&bat_priv->mcast_forw_table_lock); + + list_for_each_entry_safe(dest_entry, tmp, &nexthop_list, list) { + if (is_broadcast_ether_addr(dest_entry->dest)) { + for (i = 0; i < num_bcasts; i++) { + skb1 = skb_clone(skb, GFP_ATOMIC); + send_skb_packet(skb1, dest_entry->batman_if, + dest_entry->dest); + } + } else { + skb1 = skb_clone(skb, GFP_ATOMIC); + send_skb_packet(skb1, dest_entry->batman_if, + dest_entry->dest); + } + kref_put(&dest_entry->batman_if->refcount, hardif_free_ref); + list_del(&dest_entry->list); + kfree(dest_entry); + } +} + +int mcast_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv) +{ + struct mcast_packet *mcast_packet; + + if (!bat_priv->primary_if) + goto dropped; + + if (my_skb_head_push(skb, sizeof(struct mcast_packet)) < 0) + goto dropped; + + mcast_packet = (struct mcast_packet *)skb->data; + mcast_packet->version = COMPAT_VERSION; + mcast_packet->ttl = TTL; + + /* batman packet type: broadcast */ + mcast_packet->packet_type = BAT_MCAST; + + /* hw address of first interface is the orig mac because only + * this mac is known 
throughout the mesh */ + memcpy(mcast_packet->orig, + bat_priv->primary_if->net_dev->dev_addr, ETH_ALEN); + + /* set broadcast sequence number */ + mcast_packet->seqno = + htonl(atomic_inc_return(&bat_priv->mcast_seqno)); + + route_mcast_packet(skb, bat_priv); + + kfree_skb(skb); + return 0; + +dropped: + kfree_skb(skb); + return 1; +} + int mcast_init(struct bat_priv *bat_priv) { INIT_DELAYED_WORK(&bat_priv->mcast_tracker_work, mcast_tracker_timer); diff --git a/batman-adv/multicast.h b/batman-adv/multicast.h index 7312afa..06dd398 100644 --- a/batman-adv/multicast.h +++ b/batman-adv/multicast.h @@ -32,6 +32,7 @@ void route_mcast_tracker_packet( int tracker_packet_len, struct bat_priv *bat_priv); void purge_mcast_forw_table(struct bat_priv *bat_priv); int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset); +int mcast_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv); int mcast_init(struct bat_priv *bat_priv); void mcast_free(struct bat_priv *bat_priv);
diff --git a/batman-adv/soft-interface.c b/batman-adv/soft-interface.c index 7cea678..2f327c4 100644 --- a/batman-adv/soft-interface.c +++ b/batman-adv/soft-interface.c @@ -38,6 +38,7 @@ #include <linux/if_vlan.h> #include "unicast.h" #include "routing.h" +#include "multicast.h"
static int bat_get_settings(struct net_device *dev, struct ethtool_cmd *cmd); @@ -347,7 +348,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) struct vlan_ethhdr *vhdr; int data_len = skb->len, ret; short vid = -1; - bool do_bcast = false; + bool bcast_dst = false, mcast_dst = false;
if (atomic_read(&bat_priv->mesh_state) != MESH_ACTIVE) goto dropped; @@ -384,12 +385,22 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) if (ret < 0) goto dropped;
- if (ret == 0) - do_bcast = true; + /* dhcp request, which should be sent to the gateway + * directly? */ + if (ret) + goto unicast; + + if (is_broadcast_ether_addr(ethhdr->h_dest)) + bcast_dst = true; + else if (atomic_read(&bat_priv->mcast_mode) == + MCAST_MODE_PROACT_TRACKING) + mcast_dst = true; + else + bcast_dst = true; }
/* ethernet packet should be broadcasted */ - if (do_bcast) { + if (bcast_dst) { if (!bat_priv->primary_if) goto dropped;
@@ -418,8 +429,15 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) * the original skb. */ kfree_skb(skb);
+ /* multicast data with path optimization */ + } else if (mcast_dst) { + ret = mcast_send_skb(skb, bat_priv); + if (ret != 0) + goto dropped_freed; + /* unicast packet */ } else { +unicast: ret = unicast_send_skb(skb, bat_priv); if (ret != 0) goto dropped_freed; @@ -608,6 +626,7 @@ struct net_device *softif_create(char *name)
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1); + atomic_set(&bat_priv->mcast_seqno, 1); atomic_set(&bat_priv->hna_local_changed, 0);
bat_priv->primary_if = NULL; diff --git a/batman-adv/types.h b/batman-adv/types.h index 75adec9..2857fba 100644 --- a/batman-adv/types.h +++ b/batman-adv/types.h @@ -145,6 +145,7 @@ struct bat_priv { atomic_t mcast_fanout; /* uint */ atomic_t log_level; /* uint */ atomic_t bcast_seqno; + atomic_t mcast_seqno; atomic_t bcast_queue_left; atomic_t batman_queue_left; char num_ifaces;
We will need to perform similar checks for BAT_MCAST packets later, therefore move them to a separate function.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- routing.c | 43 ++++++++++++++++++++++++++----------------- 1 files changed, 26 insertions(+), 17 deletions(-)
diff --git a/batman-adv/routing.c b/batman-adv/routing.c index 944dc94..9482db2 100644 --- a/batman-adv/routing.c +++ b/batman-adv/routing.c @@ -1278,6 +1278,31 @@ static int check_unicast_packet(struct sk_buff *skb, int hdr_size) return 0; }
+static int check_broadcast_packet(struct sk_buff *skb, int hdr_size) +{ + struct ethhdr *ethhdr; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, hdr_size))) + return -1; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with broadcast indication but unicast recipient */ + if (!is_broadcast_ether_addr(ethhdr->h_dest)) + return -1; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + return -1; + + /* ignore broadcasts sent by myself */ + if (is_my_mac(ethhdr->h_source)) + return -1; + + return 0; +} + int route_unicast_packet(struct sk_buff *skb, struct batman_if *recv_if, int hdr_size) { @@ -1425,27 +1450,11 @@ int recv_bcast_packet(struct sk_buff *skb, struct batman_if *recv_if) struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); struct orig_node *orig_node = NULL; struct bcast_packet *bcast_packet; - struct ethhdr *ethhdr; int hdr_size = sizeof(struct bcast_packet); int ret = NET_RX_DROP; int32_t seq_diff;
- /* drop packet if it has not necessary minimum size */ - if (unlikely(!pskb_may_pull(skb, hdr_size))) - goto out; - - ethhdr = (struct ethhdr *)skb_mac_header(skb); - - /* packet with broadcast indication but unicast recipient */ - if (!is_broadcast_ether_addr(ethhdr->h_dest)) - goto out; - - /* packet with broadcast sender address */ - if (is_broadcast_ether_addr(ethhdr->h_source)) - goto out; - - /* ignore broadcasts sent by myself */ - if (is_my_mac(ethhdr->h_source)) + if (check_broadcast_packet(skb, hdr_size) < 0) goto out;
bcast_packet = (struct bcast_packet *)skb->data;
This patch adds the forwarding of multicast data packets to the local soft interface if the receiving node is a member of the multicast group specified in the multicast packet.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- hard-interface.c | 5 +++++ routing.c | 29 +++++++++++++++++++++++++++++ routing.h | 1 + 3 files changed, 35 insertions(+), 0 deletions(-)
diff --git a/batman-adv/hard-interface.c b/batman-adv/hard-interface.c index f478c4b..a535e09 100644 --- a/batman-adv/hard-interface.c +++ b/batman-adv/hard-interface.c @@ -624,6 +624,11 @@ int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, ret = recv_bcast_packet(skb, batman_if); break;
+ /* multicast packet */ + case BAT_MCAST: + ret = recv_mcast_packet(skb, batman_if); + break; + /* multicast tracker packet */ case BAT_MCAST_TRACKER: ret = recv_mcast_tracker_packet(skb, batman_if); diff --git a/batman-adv/routing.c b/batman-adv/routing.c index 9482db2..b5559c0 100644 --- a/batman-adv/routing.c +++ b/batman-adv/routing.c @@ -1510,6 +1510,35 @@ out: return ret; }
+int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if) +{ + struct ethhdr *ethhdr; + MC_LIST *mc_entry; + int ret = 1; + int hdr_size = sizeof(struct mcast_packet); + + /* multicast data packets might be received via unicast or broadcast */ + if (check_unicast_packet(skb, hdr_size) < 0 && + check_broadcast_packet(skb, hdr_size) < 0) + return NET_RX_DROP; + + ethhdr = (struct ethhdr *)(skb->data + sizeof(struct mcast_packet)); + + /* multicast for me? */ + netif_addr_lock_bh(recv_if->soft_iface); + netdev_for_each_mc_addr(mc_entry, recv_if->soft_iface) { + ret = memcmp(mc_entry->MC_LIST_ADDR, ethhdr->h_dest, ETH_ALEN); + if (!ret) + break; + } + netif_addr_unlock_bh(recv_if->soft_iface); + + if (!ret) + interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size); + + return NET_RX_SUCCESS; +} + int recv_mcast_tracker_packet(struct sk_buff *skb, struct batman_if *recv_if) { struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); diff --git a/batman-adv/routing.h b/batman-adv/routing.h index 0fad12a..722d837 100644 --- a/batman-adv/routing.h +++ b/batman-adv/routing.h @@ -38,6 +38,7 @@ int recv_icmp_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_ucast_frag_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_bcast_packet(struct sk_buff *skb, struct batman_if *recv_if); +int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_mcast_tracker_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_vis_packet(struct sk_buff *skb, struct batman_if *recv_if); int recv_bat_packet(struct sk_buff *skb, struct batman_if *recv_if);
This patch enables the forwarding of multicast data and uses the same methods for deciding between forwarding via broadcast or unicast(s) as the local packet encapsulation already does.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- multicast.c | 2 +- multicast.h | 1 + routing.c | 4 ++++ 3 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/batman-adv/multicast.c b/batman-adv/multicast.c index a8613ef..70184cf 100644 --- a/batman-adv/multicast.c +++ b/batman-adv/multicast.c @@ -1241,7 +1241,7 @@ static inline void nexthops_from_table(uint8_t *dest, uint8_t *orig, } }
-static void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv) +void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv) { struct sk_buff *skb1; struct mcast_packet *mcast_packet; diff --git a/batman-adv/multicast.h b/batman-adv/multicast.h index 06dd398..6dcf537 100644 --- a/batman-adv/multicast.h +++ b/batman-adv/multicast.h @@ -32,6 +32,7 @@ void route_mcast_tracker_packet( int tracker_packet_len, struct bat_priv *bat_priv); void purge_mcast_forw_table(struct bat_priv *bat_priv); int mcast_forw_table_seq_print_text(struct seq_file *seq, void *offset); +void route_mcast_packet(struct sk_buff *skb, struct bat_priv *bat_priv); int mcast_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv); int mcast_init(struct bat_priv *bat_priv); void mcast_free(struct bat_priv *bat_priv); diff --git a/batman-adv/routing.c b/batman-adv/routing.c index b5559c0..512d9ba 100644 --- a/batman-adv/routing.c +++ b/batman-adv/routing.c @@ -1512,6 +1512,7 @@ out:
int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if) { + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); struct ethhdr *ethhdr; MC_LIST *mc_entry; int ret = 1; @@ -1522,6 +1523,9 @@ int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if) check_broadcast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
+ /* forward multicast packet if necessary */ + route_mcast_packet(skb, bat_priv); + ethhdr = (struct ethhdr *)(skb->data + sizeof(struct mcast_packet));
/* multicast for me? */
This commit adds duplicate checks to avoid endless rebroadcasts when forwarding multicast data packets via broadcast.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- originator.c | 2 ++ routing.c | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++---- types.h | 3 +++ 3 files changed, 58 insertions(+), 4 deletions(-)
diff --git a/batman-adv/originator.c b/batman-adv/originator.c index 6c964e5..c819189 100644 --- a/batman-adv/originator.c +++ b/batman-adv/originator.c @@ -234,6 +234,8 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) orig_node->num_mca = 0; orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); + orig_node->mcast_seqno_reset = jiffies - 1 + - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS);
diff --git a/batman-adv/routing.c b/batman-adv/routing.c index 512d9ba..8167526 100644 --- a/batman-adv/routing.c +++ b/batman-adv/routing.c @@ -1513,20 +1513,61 @@ out: int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if) { struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct orig_node *orig_node = NULL; + struct mcast_packet *mcast_packet; struct ethhdr *ethhdr; MC_LIST *mc_entry; - int ret = 1; + int32_t seq_diff; + int ret = NET_RX_DROP; int hdr_size = sizeof(struct mcast_packet);
/* multicast data packets might be received via unicast or broadcast */ if (check_unicast_packet(skb, hdr_size) < 0 && check_broadcast_packet(skb, hdr_size) < 0) - return NET_RX_DROP; + goto out; + + mcast_packet = (struct mcast_packet *)skb->data; + + /* ignore broadcasts originated by myself */ + if (is_my_mac(mcast_packet->orig)) + goto out; + + if (mcast_packet->ttl < 2) + goto out; + + rcu_read_lock(); + orig_node = ((struct orig_node *) + hash_find(bat_priv->orig_hash, compare_orig, choose_orig, + mcast_packet->orig)); + + if (!orig_node) + goto unlock; + + kref_get(&orig_node->refcount); + rcu_read_unlock(); + + /* check whether the packet is a duplicate */ + if (get_bit_status(orig_node->mcast_bits, + orig_node->last_mcast_seqno, + ntohl(mcast_packet->seqno))) + goto out; + + seq_diff = ntohl(mcast_packet->seqno) - orig_node->last_mcast_seqno; + + /* check whether the packet is old and the host just restarted. */ + if (window_protected(bat_priv, seq_diff, + &orig_node->mcast_seqno_reset)) + goto out; + + /* mark broadcast in flood history, update window position + * if required. */ + if (bit_get_packet(bat_priv, orig_node->mcast_bits, seq_diff, 1)) + orig_node->last_mcast_seqno = ntohl(mcast_packet->seqno);
/* forward multicast packet if necessary */ route_mcast_packet(skb, bat_priv);
- ethhdr = (struct ethhdr *)(skb->data + sizeof(struct mcast_packet)); + ethhdr = (struct ethhdr *)(mcast_packet + 1);
/* multicast for me? */ netif_addr_lock_bh(recv_if->soft_iface); @@ -1540,7 +1581,15 @@ int recv_mcast_packet(struct sk_buff *skb, struct batman_if *recv_if) if (!ret) interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size);
- return NET_RX_SUCCESS; + ret = NET_RX_SUCCESS; + goto out; + +unlock: + rcu_read_unlock(); +out: + if (orig_node) + kref_put(&orig_node->refcount, orig_node_free_ref); + return ret; }
int recv_mcast_tracker_packet(struct sk_buff *skb, struct batman_if *recv_if) diff --git a/batman-adv/types.h b/batman-adv/types.h index 2857fba..d9b0429 100644 --- a/batman-adv/types.h +++ b/batman-adv/types.h @@ -74,6 +74,7 @@ struct orig_node { int tq_asym_penalty; unsigned long last_valid; unsigned long bcast_seqno_reset; + unsigned long mcast_seqno_reset; unsigned long batman_seqno_reset; uint8_t gw_flags; uint8_t flags; @@ -84,7 +85,9 @@ struct orig_node { uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; + unsigned long mcast_bits[NUM_WORDS]; uint32_t last_bcast_seqno; + uint32_t last_mcast_seqno; struct hlist_head neigh_list; struct list_head frag_list; spinlock_t neigh_list_lock; /* protects neighbor list */
We may only optimize the multicast packet flow if a multicast optimization mode has been activated and if we are a multicast receiver of the same group. Otherwise, flood the multicast packet without optimizations.
This allows us to still flood multicast packets of protocols where it is not easily possible for a multicast sender to be a multicast receiver of the same group (for instance IPv6 NDP), instead of dropping them.
This commit therefore also makes IPv6 usable again, if the proact_tracking multicast mode has been activated.
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- soft-interface.c | 28 ++++++++++++++++++++++++++-- 1 files changed, 26 insertions(+), 2 deletions(-)
diff --git a/batman-adv/soft-interface.c b/batman-adv/soft-interface.c index 2f327c4..2b202ae 100644 --- a/batman-adv/soft-interface.c +++ b/batman-adv/soft-interface.c @@ -340,6 +340,31 @@ static int interface_change_mtu(struct net_device *dev, int new_mtu) return 0; }
+static int mcast_may_optimize(uint8_t *dest, struct net_device *soft_iface) +{ + MC_LIST *mc_entry; + struct bat_priv *bat_priv = netdev_priv(soft_iface); + int mcast_mode = atomic_read(&bat_priv->mcast_mode); + + if (mcast_mode != MCAST_MODE_PROACT_TRACKING) + return 0; + + /* Still allow flooding of multicast packets of protocols where it is + * not easily possible for a multicast sender to be a multicast + * receiver of the same group (for instance IPv6 NDP) */ + netif_addr_lock_bh(soft_iface); + netdev_for_each_mc_addr(mc_entry, soft_iface) { + if (memcmp(dest, mc_entry->MC_LIST_ADDR, ETH_ALEN)) + continue; + + netif_addr_unlock_bh(soft_iface); + return 1; + } + netif_addr_unlock_bh(soft_iface); + + return 0; +} + int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) { struct ethhdr *ethhdr = (struct ethhdr *)skb->data; @@ -392,8 +417,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface)
if (is_broadcast_ether_addr(ethhdr->h_dest)) bcast_dst = true; - else if (atomic_read(&bat_priv->mcast_mode) == - MCAST_MODE_PROACT_TRACKING) + else if (mcast_may_optimize(ethhdr->h_dest, soft_iface)) mcast_dst = true; else bcast_dst = true;
Depending on the scenario, people might want to adjust the number of (re)broadcasts of data packets - usually higher values in sparse and lower values in dense networks.
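Assuming the usual batman-adv sysfs layout under /sys/class/net/&lt;soft-iface&gt;/mesh/ (the exact path and interface name "bat0" are assumptions, adjust to your setup), the new tunable could then be used like this:

```shell
# Sparse network: rebroadcast data packets five times instead of three.
echo 5 > /sys/class/net/bat0/mesh/num_bcasts

# Check the currently configured value.
cat /sys/class/net/bat0/mesh/num_bcasts
```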
Signed-off-by: Linus Lüssing linus.luessing@saxnet.de --- bat_sysfs.c | 2 ++ send.c | 3 ++- types.h | 1 + 3 files changed, 5 insertions(+), 1 deletions(-)
diff --git a/batman-adv/bat_sysfs.c b/batman-adv/bat_sysfs.c index 8f688db..7135c08 100644 --- a/batman-adv/bat_sysfs.c +++ b/batman-adv/bat_sysfs.c @@ -520,6 +520,7 @@ static BAT_ATTR(gw_mode, S_IRUGO | S_IWUSR, show_gw_mode, store_gw_mode); BAT_ATTR_UINT(orig_interval, S_IRUGO | S_IWUSR, 2 * JITTER, INT_MAX, update_mcast_tracker); BAT_ATTR_UINT(hop_penalty, S_IRUGO | S_IWUSR, 0, TQ_MAX_VALUE, NULL); +BAT_ATTR_UINT(num_bcasts, S_IRUGO | S_IWUSR, 0, INT_MAX, NULL); BAT_ATTR_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE, post_gw_deselect); static BAT_ATTR(gw_bandwidth, S_IRUGO | S_IWUSR, show_gw_bwidth, @@ -544,6 +545,7 @@ static struct bat_attribute *mesh_attrs[] = { &bat_attr_gw_mode, &bat_attr_orig_interval, &bat_attr_hop_penalty, + &bat_attr_num_bcasts, &bat_attr_gw_sel_class, &bat_attr_gw_bandwidth, &bat_attr_mcast_mode, diff --git a/batman-adv/send.c b/batman-adv/send.c index 03db894..4169f6e 100644 --- a/batman-adv/send.c +++ b/batman-adv/send.c @@ -510,6 +510,7 @@ static void send_outstanding_bcast_packet(struct work_struct *work) struct sk_buff *skb1; struct net_device *soft_iface = forw_packet->if_incoming->soft_iface; struct bat_priv *bat_priv = netdev_priv(soft_iface); + int num_bcasts = atomic_read(&bat_priv->num_bcasts);
spin_lock_bh(&bat_priv->forw_bcast_list_lock); hlist_del(&forw_packet->list); @@ -534,7 +535,7 @@ static void send_outstanding_bcast_packet(struct work_struct *work) forw_packet->num_packets++;
/* if we still have some more bcasts to send */ - if (forw_packet->num_packets < 3) { + if (forw_packet->num_packets < num_bcasts) { _add_bcast_packet_to_list(bat_priv, forw_packet, ((5 * HZ) / 1000)); return; diff --git a/batman-adv/types.h b/batman-adv/types.h index d9b0429..5d72d4c 100644 --- a/batman-adv/types.h +++ b/batman-adv/types.h @@ -142,6 +142,7 @@ struct bat_priv { atomic_t gw_bandwidth; /* gw bandwidth */ atomic_t orig_interval; /* uint */ atomic_t hop_penalty; /* uint */ + atomic_t num_bcasts; /* uint */ atomic_t mcast_mode; /* MCAST_MODE_* */ atomic_t mcast_tracker_interval;/* uint, auto */ atomic_t mcast_tracker_timeout; /* uint, auto */
By the way, you can also pull these commits directly from http://git.open-mesh.org/?p=t_x/batman-adv.git;a=summary and easily monitor/review new changes here in the "multicast" branch.
This patchset is tagged as "multicast-v2", obviously.
Cheers, Linus
PS: The NDP patchsets are available there as well.
On Sat, Jan 22, 2011 at 02:21:23AM +0100, Linus Lüssing wrote: