This patchset introduces a new client announcement mechanism (previously called HNA) which completely replaces the old one.
B.A.T.M.A.N.-advanced manages clients by means of two translation tables: a local table and a global table. The former stores all the clients directly connected to the node itself, while the latter stores all the clients announced by other nodes in the mesh network.
In the current implementation the whole local table is sent within each OGM, causing significant protocol overhead.
The core of the new implementation, instead, consists in avoiding this and sending only the local table _changes_ that occurred during the last OGM interval. In this way, every node updates its global table by applying the changes it finds in each OGM.
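To illustrate the idea (this is only a sketch with hypothetical helper names and an assumed struct tt_change layout, not the code from patch 2/4), applying a per-OGM change list to the global table of the sending originator boils down to:

/* Sketch only: apply the TT changes advertised in one OGM to the
 * global table entries belonging to orig_node. tt_global_add()/
 * tt_global_del() and the tt_change layout are assumptions here;
 * the real implementation is in translation-table.c (patch 2/4).
 */
static void tt_apply_changes(struct bat_priv *bat_priv,
			     struct orig_node *orig_node,
			     struct tt_change *change, int num_changes)
{
	int i;

	for (i = 0; i < num_changes; i++, change++) {
		if (change->flags & TT_DEL)
			tt_global_del(bat_priv, orig_node, change->addr,
				      "tt change: client left");
		else /* TT_ADD */
			tt_global_add(bat_priv, orig_node, change->addr);
	}
}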
A roaming improvement is also provided, exploiting the newly implemented announcement mechanism.
Moreover, the global and local translation tables are now lock-free and RCU-protected :-)
Patchset description:
1) Rename all the variables/functions/constants from *hna* to *tt*
2) Implement the new announcement mechanism
3) Implement the roaming optimisation
4) Protect the local and global tables with RCU
** Patch 2/4 also introduces a dependency on the crc16 module, since the new mechanism uses the CRC16 computation function it provides. **
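For reference, the crc16 module exports a single function, u16 crc16(u16 crc, const u8 *buffer, size_t len), declared in <linux/crc16.h>. A minimal sketch of how such a checksum can be folded over a set of announced MAC addresses (the flat array is a simplification; the real code walks the translation-table hash):

#include <linux/if_ether.h>
#include <linux/crc16.h>

/* Sketch only: fold all announced client MAC addresses into one
 * 16 bit checksum that can be compared between nodes. */
static u16 tt_crc_sketch(const u8 (*clients)[ETH_ALEN], int num_clients)
{
	u16 crc = 0;
	int i;

	for (i = 0; i < num_clients; i++)
		crc = crc16(crc, clients[i], ETH_ALEN);

	return crc;
}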
For more details, please refer to the commit message of each patch.
Regards, Antonio Quartulli
For consistency, all the functions/variables/constants have been renamed to the TranslationTable style.
Signed-off-by: Antonio Quartulli <ordex@autistici.org>
---
 README              |    8 +-
 aggregation.c       |   16 +-
 aggregation.h       |    4 +-
 bat_debugfs.c       |    4 +-
 hard-interface.c    |    6 +-
 main.c              |   14 +-
 main.h              |    4 +-
 originator.c        |    8 +-
 packet.h            |    2 +-
 routing.c           |   70 +++++-----
 routing.h           |    6 +-
 send.c              |   16 +-
 send.h              |    2 +-
 soft-interface.c    |   10 +-
 translation-table.c |  414 +++++++++++++++++++++++++-------------------------
 translation-table.h |   24 ++--
 types.h             |   24 ++--
 unicast.c           |    2 +-
 vis.c               |   18 +-
 19 files changed, 326 insertions(+), 326 deletions(-)
diff --git a/README b/README index 6aa36eb..47a840e 100644 --- a/README +++ b/README @@ -176,13 +176,13 @@ face. Each entry can/has to have the following values: -> "TQ mac value" - src mac's link quality towards mac address of a neighbor originator's interface which is being used for routing --> "HNA mac" - HNA announced by source mac +-> "TT mac" - TT announced by source mac -> "PRIMARY" - this is a primary interface -> "SEC mac" - secondary mac address of source (requires preceding PRIMARY)
The TQ value has a range from 4 to 255 with 255 being the best. -The HNA entries are showing which hosts are connected to the mesh +The TT entries are showing which hosts are connected to the mesh via bat0 or being bridged into the mesh network. The PRIMARY/SEC values are only applied on primary interfaces
@@ -219,7 +219,7 @@ abled during run time. Following log_levels are defined:
0 - All debug output disabled 1 - Enable messages related to routing / flooding / broadcasting -2 - Enable route or hna added / changed / deleted +2 - Enable route or tt added / changed / deleted 3 - Enable all messages
The debug output can be changed at runtime using the file @@ -227,7 +227,7 @@ The debug output can be changed at runtime using the file
# echo 2 > /sys/class/net/bat0/mesh/log_level
-will enable debug messages for when routes or HNAs change. +will enable debug messages for when routes or TTs change.
BATCTL diff --git a/aggregation.c b/aggregation.c index c11788c..9b94590 100644 --- a/aggregation.c +++ b/aggregation.c @@ -24,10 +24,10 @@ #include "send.h" #include "routing.h"
-/* calculate the size of the hna information for a given packet */ -static int hna_len(struct batman_packet *batman_packet) +/* calculate the size of the tt information for a given packet */ +static int tt_len(struct batman_packet *batman_packet) { - return batman_packet->num_hna * ETH_ALEN; + return batman_packet->num_tt * ETH_ALEN; }
/* return true if new_packet can be aggregated with forw_packet */ @@ -250,7 +250,7 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff, { struct batman_packet *batman_packet; int buff_pos = 0; - unsigned char *hna_buff; + unsigned char *tt_buff;
batman_packet = (struct batman_packet *)packet_buff;
@@ -259,14 +259,14 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff, orig_interval. */ batman_packet->seqno = ntohl(batman_packet->seqno);
- hna_buff = packet_buff + buff_pos + BAT_PACKET_LEN; + tt_buff = packet_buff + buff_pos + BAT_PACKET_LEN; receive_bat_packet(ethhdr, batman_packet, - hna_buff, hna_len(batman_packet), + tt_buff, tt_len(batman_packet), if_incoming);
- buff_pos += BAT_PACKET_LEN + hna_len(batman_packet); + buff_pos += BAT_PACKET_LEN + tt_len(batman_packet); batman_packet = (struct batman_packet *) (packet_buff + buff_pos); } while (aggregated_packet(buff_pos, packet_len, - batman_packet->num_hna)); + batman_packet->num_tt)); } diff --git a/aggregation.h b/aggregation.h index 0622042..7e6d72f 100644 --- a/aggregation.h +++ b/aggregation.h @@ -25,9 +25,9 @@ #include "main.h"
/* is there another aggregated packet here? */ -static inline int aggregated_packet(int buff_pos, int packet_len, int num_hna) +static inline int aggregated_packet(int buff_pos, int packet_len, int num_tt) { - int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_hna * ETH_ALEN); + int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_tt * ETH_ALEN);
return (next_buff_pos <= packet_len) && (next_buff_pos <= MAX_AGGREGATION_BYTES); diff --git a/bat_debugfs.c b/bat_debugfs.c index 0e9d435..abaeec5 100644 --- a/bat_debugfs.c +++ b/bat_debugfs.c @@ -241,13 +241,13 @@ static int softif_neigh_open(struct inode *inode, struct file *file) static int transtable_global_open(struct inode *inode, struct file *file) { struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, hna_global_seq_print_text, net_dev); + return single_open(file, tt_global_seq_print_text, net_dev); }
static int transtable_local_open(struct inode *inode, struct file *file) { struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, hna_local_seq_print_text, net_dev); + return single_open(file, tt_local_seq_print_text, net_dev); }
static int vis_data_open(struct inode *inode, struct file *file) diff --git a/hard-interface.c b/hard-interface.c index 3e888f1..9e4ac7d 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -157,10 +157,10 @@ static void primary_if_select(struct bat_priv *bat_priv, primary_if_update_addr(bat_priv);
/*** - * hacky trick to make sure that we send the HNA information via + * hacky trick to make sure that we send the TT information via * our new primary interface */ - atomic_set(&bat_priv->hna_local_changed, 1); + atomic_set(&bat_priv->tt_local_changed, 1);
out: spin_unlock_bh(&hardif_list_lock); @@ -345,7 +345,7 @@ int hardif_enable_interface(struct hard_iface *hard_iface, char *iface_name) batman_packet->flags = 0; batman_packet->ttl = 2; batman_packet->tq = TQ_MAX_VALUE; - batman_packet->num_hna = 0; + batman_packet->num_tt = 0;
hard_iface->if_num = bat_priv->num_ifaces; bat_priv->num_ifaces++; diff --git a/main.c b/main.c index 709b33b..2970908 100644 --- a/main.c +++ b/main.c @@ -81,8 +81,8 @@ int mesh_init(struct net_device *soft_iface)
spin_lock_init(&bat_priv->forw_bat_list_lock); spin_lock_init(&bat_priv->forw_bcast_list_lock); - spin_lock_init(&bat_priv->hna_lhash_lock); - spin_lock_init(&bat_priv->hna_ghash_lock); + spin_lock_init(&bat_priv->tt_lhash_lock); + spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); spin_lock_init(&bat_priv->vis_list_lock); @@ -96,13 +96,13 @@ int mesh_init(struct net_device *soft_iface) if (originator_init(bat_priv) < 1) goto err;
- if (hna_local_init(bat_priv) < 1) + if (tt_local_init(bat_priv) < 1) goto err;
- if (hna_global_init(bat_priv) < 1) + if (tt_global_init(bat_priv) < 1) goto err;
- hna_local_add(soft_iface, soft_iface->dev_addr); + tt_local_add(soft_iface, soft_iface->dev_addr);
if (vis_init(bat_priv) < 1) goto err; @@ -133,8 +133,8 @@ void mesh_free(struct net_device *soft_iface) gw_node_purge(bat_priv); originator_free(bat_priv);
- hna_local_free(bat_priv); - hna_global_free(bat_priv); + tt_local_free(bat_priv); + tt_global_free(bat_priv);
softif_neigh_purge(bat_priv);
diff --git a/main.h b/main.h index 101d9dc..50eb819 100644 --- a/main.h +++ b/main.h @@ -39,7 +39,7 @@ #define PURGE_TIMEOUT 200 /* purge originators after time in seconds if no * valid packet comes in -> TODO: check * influence on TQ_LOCAL_WINDOW_SIZE */ -#define LOCAL_HNA_TIMEOUT 3600 /* in seconds */ +#define TT_LOCAL_TIMEOUT 3600 /* in seconds */
#define TQ_LOCAL_WINDOW_SIZE 64 /* sliding packet range of received originator * messages in squence numbers (should be a @@ -89,7 +89,7 @@
#define DBG_BATMAN 1 /* all messages related to routing / flooding / * broadcasting / etc */ -#define DBG_ROUTES 2 /* route or hna added / changed / deleted */ +#define DBG_ROUTES 2 /* route or tt added / changed / deleted */ #define DBG_ALL 3
diff --git a/originator.c b/originator.c index ef4a9be..0314875 100644 --- a/originator.c +++ b/originator.c @@ -144,7 +144,7 @@ static void orig_node_free_rcu(struct rcu_head *rcu) spin_unlock_bh(&orig_node->neigh_list_lock);
frag_list_free(&orig_node->frag_list); - hna_global_del_orig(orig_node->bat_priv, orig_node, + tt_global_del_orig(orig_node->bat_priv, orig_node, "originator timed out");
kfree(orig_node->bcast_own); @@ -222,7 +222,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) orig_node->bat_priv = bat_priv; memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; - orig_node->hna_buff = NULL; + orig_node->tt_buff = NULL; orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1 @@ -333,8 +333,8 @@ static bool purge_orig_node(struct bat_priv *bat_priv, &best_neigh_node)) { update_routes(bat_priv, orig_node, best_neigh_node, - orig_node->hna_buff, - orig_node->hna_buff_len); + orig_node->tt_buff, + orig_node->tt_buff_len); } }
diff --git a/packet.h b/packet.h index e757187..c225c3a 100644 --- a/packet.h +++ b/packet.h @@ -61,7 +61,7 @@ struct batman_packet { uint8_t orig[6]; uint8_t prev_sender[6]; uint8_t ttl; - uint8_t num_hna; + uint8_t num_tt; uint8_t gw_flags; /* flags related to gateway class */ uint8_t align; } __packed; diff --git a/routing.c b/routing.c index 49f5715..91b3709 100644 --- a/routing.c +++ b/routing.c @@ -64,28 +64,28 @@ void slide_own_bcast_window(struct hard_iface *hard_iface) } }
-static void update_HNA(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *hna_buff, int hna_buff_len) +static void update_TT(struct bat_priv *bat_priv, struct orig_node *orig_node, + unsigned char *tt_buff, int tt_buff_len) { - if ((hna_buff_len != orig_node->hna_buff_len) || - ((hna_buff_len > 0) && - (orig_node->hna_buff_len > 0) && - (memcmp(orig_node->hna_buff, hna_buff, hna_buff_len) != 0))) { + if ((tt_buff_len != orig_node->tt_buff_len) || + ((tt_buff_len > 0) && + (orig_node->tt_buff_len > 0) && + (memcmp(orig_node->tt_buff, tt_buff, tt_buff_len) != 0))) {
- if (orig_node->hna_buff_len > 0) - hna_global_del_orig(bat_priv, orig_node, - "originator changed hna"); + if (orig_node->tt_buff_len > 0) + tt_global_del_orig(bat_priv, orig_node, + "originator changed tt");
- if ((hna_buff_len > 0) && (hna_buff)) - hna_global_add_orig(bat_priv, orig_node, - hna_buff, hna_buff_len); + if ((tt_buff_len > 0) && (tt_buff)) + tt_global_add_orig(bat_priv, orig_node, + tt_buff, tt_buff_len); } }
static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, struct neigh_node *neigh_node, - unsigned char *hna_buff, int hna_buff_len) + unsigned char *tt_buff, int tt_buff_len) { struct neigh_node *curr_router;
@@ -96,7 +96,7 @@ static void update_route(struct bat_priv *bat_priv,
bat_dbg(DBG_ROUTES, bat_priv, "Deleting route towards: %pM\n", orig_node->orig); - hna_global_del_orig(bat_priv, orig_node, + tt_global_del_orig(bat_priv, orig_node, "originator timed out");
/* route added */ @@ -105,8 +105,8 @@ static void update_route(struct bat_priv *bat_priv, bat_dbg(DBG_ROUTES, bat_priv, "Adding route towards: %pM (via %pM)\n", orig_node->orig, neigh_node->addr); - hna_global_add_orig(bat_priv, orig_node, - hna_buff, hna_buff_len); + tt_global_add_orig(bat_priv, orig_node, + tt_buff, tt_buff_len);
/* route changed */ } else { @@ -135,8 +135,8 @@ static void update_route(struct bat_priv *bat_priv,
void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *hna_buff, - int hna_buff_len) + struct neigh_node *neigh_node, unsigned char *tt_buff, + int tt_buff_len) { struct neigh_node *router = NULL;
@@ -147,10 +147,10 @@ void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node,
if (router != neigh_node) update_route(bat_priv, orig_node, neigh_node, - hna_buff, hna_buff_len); - /* may be just HNA changed */ + tt_buff, tt_buff_len); + /* may be just TT changed */ else - update_HNA(bat_priv, orig_node, hna_buff, hna_buff_len); + update_TT(bat_priv, orig_node, tt_buff, tt_buff_len);
out: if (router) @@ -387,14 +387,14 @@ static void update_orig(struct bat_priv *bat_priv, struct ethhdr *ethhdr, struct batman_packet *batman_packet, struct hard_iface *if_incoming, - unsigned char *hna_buff, int hna_buff_len, + unsigned char *tt_buff, int tt_buff_len, char is_duplicate) { struct neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; struct neigh_node *router = NULL; struct orig_node *orig_node_tmp; struct hlist_node *node; - int tmp_hna_buff_len; + int tmp_tt_buff_len; uint8_t bcast_own_sum_orig, bcast_own_sum_neigh;
bat_dbg(DBG_BATMAN, bat_priv, "update_originator(): " @@ -459,18 +459,18 @@ static void update_orig(struct bat_priv *bat_priv,
bonding_candidate_add(orig_node, neigh_node);
- tmp_hna_buff_len = (hna_buff_len > batman_packet->num_hna * ETH_ALEN ? - batman_packet->num_hna * ETH_ALEN : hna_buff_len); + tmp_tt_buff_len = (tt_buff_len > batman_packet->num_tt * ETH_ALEN ? + batman_packet->num_tt * ETH_ALEN : tt_buff_len);
/* if this neighbor already is our next hop there is nothing * to change */ router = orig_node_get_router(orig_node); if (router == neigh_node) - goto update_hna; + goto update_tt;
/* if this neighbor does not offer a better TQ we won't consider it */ if (router && (router->tq_avg > neigh_node->tq_avg)) - goto update_hna; + goto update_tt;
/* if the TQ is the same and the link not more symetric we * won't consider it either */ @@ -488,16 +488,16 @@ static void update_orig(struct bat_priv *bat_priv, spin_unlock_bh(&orig_node_tmp->ogm_cnt_lock);
if (bcast_own_sum_orig >= bcast_own_sum_neigh) - goto update_hna; + goto update_tt; }
update_routes(bat_priv, orig_node, neigh_node, - hna_buff, tmp_hna_buff_len); + tt_buff, tmp_tt_buff_len); goto update_gw;
-update_hna: +update_tt: update_routes(bat_priv, orig_node, router, - hna_buff, tmp_hna_buff_len); + tt_buff, tmp_tt_buff_len);
update_gw: if (orig_node->gw_flags != batman_packet->gw_flags) @@ -621,7 +621,7 @@ out:
void receive_bat_packet(struct ethhdr *ethhdr, struct batman_packet *batman_packet, - unsigned char *hna_buff, int hna_buff_len, + unsigned char *tt_buff, int tt_buff_len, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -818,14 +818,14 @@ void receive_bat_packet(struct ethhdr *ethhdr, ((orig_node->last_real_seqno == batman_packet->seqno) && (orig_node->last_ttl - 3 <= batman_packet->ttl)))) update_orig(bat_priv, orig_node, ethhdr, batman_packet, - if_incoming, hna_buff, hna_buff_len, is_duplicate); + if_incoming, tt_buff, tt_buff_len, is_duplicate);
/* is single hop (direct) neighbor */ if (is_single_hop_neigh) {
/* mark direct link on incoming interface */ schedule_forward_packet(orig_node, ethhdr, batman_packet, - 1, hna_buff_len, if_incoming); + 1, tt_buff_len, if_incoming);
bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: " "rebroadcast neighbor packet with direct link flag\n"); @@ -848,7 +848,7 @@ void receive_bat_packet(struct ethhdr *ethhdr, bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: rebroadcast originator packet\n"); schedule_forward_packet(orig_node, ethhdr, batman_packet, - 0, hna_buff_len, if_incoming); + 0, tt_buff_len, if_incoming);
out_neigh: if ((orig_neigh_node) && (!is_single_hop_neigh)) diff --git a/routing.h b/routing.h index b5a064c..870f298 100644 --- a/routing.h +++ b/routing.h @@ -25,11 +25,11 @@ void slide_own_bcast_window(struct hard_iface *hard_iface); void receive_bat_packet(struct ethhdr *ethhdr, struct batman_packet *batman_packet, - unsigned char *hna_buff, int hna_buff_len, + unsigned char *tt_buff, int tt_buff_len, struct hard_iface *if_incoming); void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *hna_buff, - int hna_buff_len); + struct neigh_node *neigh_node, unsigned char *tt_buff, + int tt_buff_len); int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); diff --git a/send.c b/send.c index 02b541a..f30d0c6 100644 --- a/send.c +++ b/send.c @@ -121,7 +121,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, /* adjust all flags and log packets */ while (aggregated_packet(buff_pos, forw_packet->packet_len, - batman_packet->num_hna)) { + batman_packet->num_tt)) {
/* we might have aggregated direct link packets with an * ordinary base packet */ @@ -146,7 +146,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, hard_iface->net_dev->dev_addr);
buff_pos += sizeof(struct batman_packet) + - (batman_packet->num_hna * ETH_ALEN); + (batman_packet->num_tt * ETH_ALEN); packet_num++; batman_packet = (struct batman_packet *) (forw_packet->skb->data + buff_pos); @@ -222,7 +222,7 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, struct batman_packet *batman_packet;
new_len = sizeof(struct batman_packet) + - (bat_priv->num_local_hna * ETH_ALEN); + (bat_priv->num_local_tt * ETH_ALEN); new_buff = kmalloc(new_len, GFP_ATOMIC);
/* keep old buffer if kmalloc should fail */ @@ -231,7 +231,7 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, sizeof(struct batman_packet)); batman_packet = (struct batman_packet *)new_buff;
- batman_packet->num_hna = hna_local_fill_buffer(bat_priv, + batman_packet->num_tt = tt_local_fill_buffer(bat_priv, new_buff + sizeof(struct batman_packet), new_len - sizeof(struct batman_packet));
@@ -266,8 +266,8 @@ void schedule_own_packet(struct hard_iface *hard_iface) if (hard_iface->if_status == IF_TO_BE_ACTIVATED) hard_iface->if_status = IF_ACTIVE;
- /* if local hna has changed and interface is a primary interface */ - if ((atomic_read(&bat_priv->hna_local_changed)) && + /* if local tt has changed and interface is a primary interface */ + if ((atomic_read(&bat_priv->tt_local_changed)) && (hard_iface == primary_if)) rebuild_batman_packet(bat_priv, hard_iface);
@@ -309,7 +309,7 @@ void schedule_own_packet(struct hard_iface *hard_iface) void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int hna_buff_len, + uint8_t directlink, int tt_buff_len, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -369,7 +369,7 @@ void schedule_forward_packet(struct orig_node *orig_node, send_time = forward_send_time(); add_bat_packet_to_list(bat_priv, (unsigned char *)batman_packet, - sizeof(struct batman_packet) + hna_buff_len, + sizeof(struct batman_packet) + tt_buff_len, if_incoming, 0, send_time); }
diff --git a/send.h b/send.h index 7b2ff19..247172d 100644 --- a/send.h +++ b/send.h @@ -29,7 +29,7 @@ void schedule_own_packet(struct hard_iface *hard_iface); void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int hna_buff_len, + uint8_t directlink, int tt_buff_len, struct hard_iface *if_outgoing); int add_bcast_packet_to_list(struct bat_priv *bat_priv, struct sk_buff *skb); void send_outstanding_bat_packet(struct work_struct *work); diff --git a/soft-interface.c b/soft-interface.c index 1772e2b..89a940a 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -363,11 +363,11 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) if (!is_valid_ether_addr(addr->sa_data)) return -EADDRNOTAVAIL;
- /* only modify hna-table if it has been initialised before */ + /* only modify tt-table if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { - hna_local_remove(bat_priv, dev->dev_addr, + tt_local_remove(bat_priv, dev->dev_addr, "mac address changed"); - hna_local_add(dev, addr->sa_data); + tt_local_add(dev, addr->sa_data); }
memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN); @@ -425,7 +425,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) goto dropped;
/* TODO: check this for locks */ - hna_local_add(soft_iface, ethhdr->h_source); + tt_local_add(soft_iface, ethhdr->h_source);
if (is_multicast_ether_addr(ethhdr->h_dest)) { ret = gw_is_target(bat_priv, skb); @@ -663,7 +663,7 @@ struct net_device *softif_create(char *name)
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1); - atomic_set(&bat_priv->hna_local_changed, 0); + atomic_set(&bat_priv->tt_local_changed, 0);
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index f931830..25e6939 100644 --- a/translation-table.c +++ b/translation-table.c @@ -26,40 +26,40 @@ #include "hash.h" #include "originator.h"
-static void hna_local_purge(struct work_struct *work); -static void _hna_global_del_orig(struct bat_priv *bat_priv, - struct hna_global_entry *hna_global_entry, +static void tt_local_purge(struct work_struct *work); +static void _tt_global_del_orig(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, char *message);
/* returns 1 if they are the same mac addr */ -static int compare_lhna(struct hlist_node *node, void *data2) +static int compare_ltt(struct hlist_node *node, void *data2) { - void *data1 = container_of(node, struct hna_local_entry, hash_entry); + void *data1 = container_of(node, struct tt_local_entry, hash_entry);
return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
/* returns 1 if they are the same mac addr */ -static int compare_ghna(struct hlist_node *node, void *data2) +static int compare_gtt(struct hlist_node *node, void *data2) { - void *data1 = container_of(node, struct hna_global_entry, hash_entry); + void *data1 = container_of(node, struct tt_global_entry, hash_entry);
return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
-static void hna_local_start_timer(struct bat_priv *bat_priv) +static void tt_local_start_timer(struct bat_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->hna_work, hna_local_purge); - queue_delayed_work(bat_event_workqueue, &bat_priv->hna_work, 10 * HZ); + INIT_DELAYED_WORK(&bat_priv->tt_work, tt_local_purge); + queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, 10 * HZ); }
-static struct hna_local_entry *hna_local_hash_find(struct bat_priv *bat_priv, +static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, void *data) { - struct hashtable_t *hash = bat_priv->hna_local_hash; + struct hashtable_t *hash = bat_priv->tt_local_hash; struct hlist_head *head; struct hlist_node *node; - struct hna_local_entry *hna_local_entry, *hna_local_entry_tmp = NULL; + struct tt_local_entry *tt_local_entry, *tt_local_entry_tmp = NULL; int index;
if (!hash) @@ -69,26 +69,26 @@ static struct hna_local_entry *hna_local_hash_find(struct bat_priv *bat_priv, head = &hash->table[index];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_local_entry, node, head, hash_entry) { - if (!compare_eth(hna_local_entry, data)) + hlist_for_each_entry_rcu(tt_local_entry, node, head, hash_entry) { + if (!compare_eth(tt_local_entry, data)) continue;
- hna_local_entry_tmp = hna_local_entry; + tt_local_entry_tmp = tt_local_entry; break; } rcu_read_unlock();
- return hna_local_entry_tmp; + return tt_local_entry_tmp; }
-static struct hna_global_entry *hna_global_hash_find(struct bat_priv *bat_priv, +static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, void *data) { - struct hashtable_t *hash = bat_priv->hna_global_hash; + struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_head *head; struct hlist_node *node; - struct hna_global_entry *hna_global_entry; - struct hna_global_entry *hna_global_entry_tmp = NULL; + struct tt_global_entry *tt_global_entry; + struct tt_global_entry *tt_global_entry_tmp = NULL; int index;
if (!hash) @@ -98,125 +98,125 @@ static struct hna_global_entry *hna_global_hash_find(struct bat_priv *bat_priv, head = &hash->table[index];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_global_entry, node, head, hash_entry) { - if (!compare_eth(hna_global_entry, data)) + hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) { + if (!compare_eth(tt_global_entry, data)) continue;
- hna_global_entry_tmp = hna_global_entry; + tt_global_entry_tmp = tt_global_entry; break; } rcu_read_unlock();
- return hna_global_entry_tmp; + return tt_global_entry_tmp; }
-int hna_local_init(struct bat_priv *bat_priv) +int tt_local_init(struct bat_priv *bat_priv) { - if (bat_priv->hna_local_hash) + if (bat_priv->tt_local_hash) return 1;
- bat_priv->hna_local_hash = hash_new(1024); + bat_priv->tt_local_hash = hash_new(1024);
- if (!bat_priv->hna_local_hash) + if (!bat_priv->tt_local_hash) return 0;
- atomic_set(&bat_priv->hna_local_changed, 0); - hna_local_start_timer(bat_priv); + atomic_set(&bat_priv->tt_local_changed, 0); + tt_local_start_timer(bat_priv);
return 1; }
-void hna_local_add(struct net_device *soft_iface, uint8_t *addr) +void tt_local_add(struct net_device *soft_iface, uint8_t *addr) { struct bat_priv *bat_priv = netdev_priv(soft_iface); - struct hna_local_entry *hna_local_entry; - struct hna_global_entry *hna_global_entry; + struct tt_local_entry *tt_local_entry; + struct tt_global_entry *tt_global_entry; int required_bytes;
- spin_lock_bh(&bat_priv->hna_lhash_lock); - hna_local_entry = hna_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = tt_local_hash_find(bat_priv, addr); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
- if (hna_local_entry) { - hna_local_entry->last_seen = jiffies; + if (tt_local_entry) { + tt_local_entry->last_seen = jiffies; return; }
/* only announce as many hosts as possible in the batman-packet and - space in batman_packet->num_hna That also should give a limit to + space in batman_packet->num_tt That also should give a limit to MAC-flooding. */ - required_bytes = (bat_priv->num_local_hna + 1) * ETH_ALEN; + required_bytes = (bat_priv->num_local_tt + 1) * ETH_ALEN; required_bytes += BAT_PACKET_LEN;
if ((required_bytes > ETH_DATA_LEN) || (atomic_read(&bat_priv->aggregated_ogms) && required_bytes > MAX_AGGREGATION_BYTES) || - (bat_priv->num_local_hna + 1 > 255)) { + (bat_priv->num_local_tt + 1 > 255)) { bat_dbg(DBG_ROUTES, bat_priv, - "Can't add new local hna entry (%pM): " - "number of local hna entries exceeds packet size\n", + "Can't add new local tt entry (%pM): " + "number of local tt entries exceeds packet size\n", addr); return; }
bat_dbg(DBG_ROUTES, bat_priv, - "Creating new local hna entry: %pM\n", addr); + "Creating new local tt entry: %pM\n", addr);
- hna_local_entry = kmalloc(sizeof(struct hna_local_entry), GFP_ATOMIC); - if (!hna_local_entry) + tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); + if (!tt_local_entry) return;
- memcpy(hna_local_entry->addr, addr, ETH_ALEN); - hna_local_entry->last_seen = jiffies; + memcpy(tt_local_entry->addr, addr, ETH_ALEN); + tt_local_entry->last_seen = jiffies;
/* the batman interface mac address should never be purged */ if (compare_eth(addr, soft_iface->dev_addr)) - hna_local_entry->never_purge = 1; + tt_local_entry->never_purge = 1; else - hna_local_entry->never_purge = 0; + tt_local_entry->never_purge = 0;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
- hash_add(bat_priv->hna_local_hash, compare_lhna, choose_orig, - hna_local_entry, &hna_local_entry->hash_entry); - bat_priv->num_local_hna++; - atomic_set(&bat_priv->hna_local_changed, 1); + hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, + tt_local_entry, &tt_local_entry->hash_entry); + bat_priv->num_local_tt++; + atomic_set(&bat_priv->tt_local_changed, 1);
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ - spin_lock_bh(&bat_priv->hna_ghash_lock); + spin_lock_bh(&bat_priv->tt_ghash_lock);
- hna_global_entry = hna_global_hash_find(bat_priv, addr); + tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (hna_global_entry) - _hna_global_del_orig(bat_priv, hna_global_entry, - "local hna received"); + if (tt_global_entry) + _tt_global_del_orig(bat_priv, tt_global_entry, + "local tt received");
- spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock); }
-int hna_local_fill_buffer(struct bat_priv *bat_priv, +int tt_local_fill_buffer(struct bat_priv *bat_priv, unsigned char *buff, int buff_len) { - struct hashtable_t *hash = bat_priv->hna_local_hash; - struct hna_local_entry *hna_local_entry; + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; struct hlist_node *node; struct hlist_head *head; int i, count = 0;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
for (i = 0; i < hash->size; i++) { head = &hash->table[i];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_local_entry, node, + hlist_for_each_entry_rcu(tt_local_entry, node, head, hash_entry) { if (buff_len < (count + 1) * ETH_ALEN) break;
- memcpy(buff + (count * ETH_ALEN), hna_local_entry->addr, + memcpy(buff + (count * ETH_ALEN), tt_local_entry->addr, ETH_ALEN);
count++; @@ -224,20 +224,20 @@ int hna_local_fill_buffer(struct bat_priv *bat_priv, rcu_read_unlock(); }
- /* if we did not get all new local hnas see you next time ;-) */ - if (count == bat_priv->num_local_hna) - atomic_set(&bat_priv->hna_local_changed, 0); + /* if we did not get all new local tts see you next time ;-) */ + if (count == bat_priv->num_local_tt) + atomic_set(&bat_priv->tt_local_changed, 0);
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return count; }
-int hna_local_seq_print_text(struct seq_file *seq, void *offset) +int tt_local_seq_print_text(struct seq_file *seq, void *offset) { struct net_device *net_dev = (struct net_device *)seq->private; struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hashtable_t *hash = bat_priv->hna_local_hash; - struct hna_local_entry *hna_local_entry; + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; struct hard_iface *primary_if; struct hlist_node *node; struct hlist_head *head; @@ -261,10 +261,10 @@ int hna_local_seq_print_text(struct seq_file *seq, void *offset) }
seq_printf(seq, "Locally retrieved addresses (from %s) " - "announced via HNA:\n", + "announced via TT:\n", net_dev->name);
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx\n" */ @@ -279,7 +279,7 @@ int hna_local_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); ret = -ENOMEM; goto out; } @@ -291,15 +291,15 @@ int hna_local_seq_print_text(struct seq_file *seq, void *offset) head = &hash->table[i];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_local_entry, node, + hlist_for_each_entry_rcu(tt_local_entry, node, head, hash_entry) { pos += snprintf(buff + pos, 22, " * %pM\n", - hna_local_entry->addr); + tt_local_entry->addr); } rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
seq_printf(seq, "%s", buff); kfree(buff); @@ -309,180 +309,180 @@ out: return ret; }
-static void _hna_local_del(struct hlist_node *node, void *arg) +static void _tt_local_del(struct hlist_node *node, void *arg) { struct bat_priv *bat_priv = (struct bat_priv *)arg; - void *data = container_of(node, struct hna_local_entry, hash_entry); + void *data = container_of(node, struct tt_local_entry, hash_entry);
kfree(data); - bat_priv->num_local_hna--; - atomic_set(&bat_priv->hna_local_changed, 1); + bat_priv->num_local_tt--; + atomic_set(&bat_priv->tt_local_changed, 1); }
-static void hna_local_del(struct bat_priv *bat_priv, - struct hna_local_entry *hna_local_entry, +static void tt_local_del(struct bat_priv *bat_priv, + struct tt_local_entry *tt_local_entry, char *message) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting local hna entry (%pM): %s\n", - hna_local_entry->addr, message); + bat_dbg(DBG_ROUTES, bat_priv, "Deleting local tt entry (%pM): %s\n", + tt_local_entry->addr, message);
- hash_remove(bat_priv->hna_local_hash, compare_lhna, choose_orig, - hna_local_entry->addr); - _hna_local_del(&hna_local_entry->hash_entry, bat_priv); + hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, + tt_local_entry->addr); + _tt_local_del(&tt_local_entry->hash_entry, bat_priv); }
-void hna_local_remove(struct bat_priv *bat_priv, +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) { - struct hna_local_entry *hna_local_entry; + struct tt_local_entry *tt_local_entry;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
- hna_local_entry = hna_local_hash_find(bat_priv, addr); + tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (hna_local_entry) - hna_local_del(bat_priv, hna_local_entry, message); + if (tt_local_entry) + tt_local_del(bat_priv, tt_local_entry, message);
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-static void hna_local_purge(struct work_struct *work) +static void tt_local_purge(struct work_struct *work) { struct delayed_work *delayed_work = container_of(work, struct delayed_work, work); struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, hna_work); - struct hashtable_t *hash = bat_priv->hna_local_hash; - struct hna_local_entry *hna_local_entry; + container_of(delayed_work, struct bat_priv, tt_work); + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; unsigned long timeout; int i;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
for (i = 0; i < hash->size; i++) { head = &hash->table[i];
- hlist_for_each_entry_safe(hna_local_entry, node, node_tmp, + hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { - if (hna_local_entry->never_purge) + if (tt_local_entry->never_purge) continue;
- timeout = hna_local_entry->last_seen; - timeout += LOCAL_HNA_TIMEOUT * HZ; + timeout = tt_local_entry->last_seen; + timeout += TT_LOCAL_TIMEOUT * HZ;
if (time_before(jiffies, timeout)) continue;
- hna_local_del(bat_priv, hna_local_entry, + tt_local_del(bat_priv, tt_local_entry, "address timed out"); } }
- spin_unlock_bh(&bat_priv->hna_lhash_lock); - hna_local_start_timer(bat_priv); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + tt_local_start_timer(bat_priv); }
-void hna_local_free(struct bat_priv *bat_priv) +void tt_local_free(struct bat_priv *bat_priv) { - if (!bat_priv->hna_local_hash) + if (!bat_priv->tt_local_hash) return;
- cancel_delayed_work_sync(&bat_priv->hna_work); - hash_delete(bat_priv->hna_local_hash, _hna_local_del, bat_priv); - bat_priv->hna_local_hash = NULL; + cancel_delayed_work_sync(&bat_priv->tt_work); + hash_delete(bat_priv->tt_local_hash, _tt_local_del, bat_priv); + bat_priv->tt_local_hash = NULL; }
-int hna_global_init(struct bat_priv *bat_priv) +int tt_global_init(struct bat_priv *bat_priv) { - if (bat_priv->hna_global_hash) + if (bat_priv->tt_global_hash) return 1;
- bat_priv->hna_global_hash = hash_new(1024); + bat_priv->tt_global_hash = hash_new(1024);
- if (!bat_priv->hna_global_hash) + if (!bat_priv->tt_global_hash) return 0;
return 1; }
-void hna_global_add_orig(struct bat_priv *bat_priv, +void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *hna_buff, int hna_buff_len) + unsigned char *tt_buff, int tt_buff_len) { - struct hna_global_entry *hna_global_entry; - struct hna_local_entry *hna_local_entry; - int hna_buff_count = 0; - unsigned char *hna_ptr; + struct tt_global_entry *tt_global_entry; + struct tt_local_entry *tt_local_entry; + int tt_buff_count = 0; + unsigned char *tt_ptr;
- while ((hna_buff_count + 1) * ETH_ALEN <= hna_buff_len) { - spin_lock_bh(&bat_priv->hna_ghash_lock); + while ((tt_buff_count + 1) * ETH_ALEN <= tt_buff_len) { + spin_lock_bh(&bat_priv->tt_ghash_lock);
- hna_ptr = hna_buff + (hna_buff_count * ETH_ALEN); - hna_global_entry = hna_global_hash_find(bat_priv, hna_ptr); + tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); + tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr);
- if (!hna_global_entry) { - spin_unlock_bh(&bat_priv->hna_ghash_lock); + if (!tt_global_entry) { + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- hna_global_entry = - kmalloc(sizeof(struct hna_global_entry), + tt_global_entry = + kmalloc(sizeof(struct tt_global_entry), GFP_ATOMIC);
- if (!hna_global_entry) + if (!tt_global_entry) break;
- memcpy(hna_global_entry->addr, hna_ptr, ETH_ALEN); + memcpy(tt_global_entry->addr, tt_ptr, ETH_ALEN);
bat_dbg(DBG_ROUTES, bat_priv, - "Creating new global hna entry: " + "Creating new global tt entry: " "%pM (via %pM)\n", - hna_global_entry->addr, orig_node->orig); + tt_global_entry->addr, orig_node->orig);
- spin_lock_bh(&bat_priv->hna_ghash_lock); - hash_add(bat_priv->hna_global_hash, compare_ghna, - choose_orig, hna_global_entry, - &hna_global_entry->hash_entry); + spin_lock_bh(&bat_priv->tt_ghash_lock); + hash_add(bat_priv->tt_global_hash, compare_gtt, + choose_orig, tt_global_entry, + &tt_global_entry->hash_entry);
}
- hna_global_entry->orig_node = orig_node; - spin_unlock_bh(&bat_priv->hna_ghash_lock); + tt_global_entry->orig_node = orig_node; + spin_unlock_bh(&bat_priv->tt_ghash_lock);
/* remove address from local hash if present */ - spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
- hna_ptr = hna_buff + (hna_buff_count * ETH_ALEN); - hna_local_entry = hna_local_hash_find(bat_priv, hna_ptr); + tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); + tt_local_entry = tt_local_hash_find(bat_priv, tt_ptr);
- if (hna_local_entry) - hna_local_del(bat_priv, hna_local_entry, - "global hna received"); + if (tt_local_entry) + tt_local_del(bat_priv, tt_local_entry, + "global tt received");
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
- hna_buff_count++; + tt_buff_count++; }
/* initialize, and overwrite if malloc succeeds */ - orig_node->hna_buff = NULL; - orig_node->hna_buff_len = 0; + orig_node->tt_buff = NULL; + orig_node->tt_buff_len = 0;
- if (hna_buff_len > 0) { - orig_node->hna_buff = kmalloc(hna_buff_len, GFP_ATOMIC); - if (orig_node->hna_buff) { - memcpy(orig_node->hna_buff, hna_buff, hna_buff_len); - orig_node->hna_buff_len = hna_buff_len; + if (tt_buff_len > 0) { + orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); + if (orig_node->tt_buff) { + memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); + orig_node->tt_buff_len = tt_buff_len; } } }
-int hna_global_seq_print_text(struct seq_file *seq, void *offset) +int tt_global_seq_print_text(struct seq_file *seq, void *offset) { struct net_device *net_dev = (struct net_device *)seq->private; struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hashtable_t *hash = bat_priv->hna_global_hash; - struct hna_global_entry *hna_global_entry; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct tt_global_entry *tt_global_entry; struct hard_iface *primary_if; struct hlist_node *node; struct hlist_head *head; @@ -505,10 +505,10 @@ int hna_global_seq_print_text(struct seq_file *seq, void *offset) goto out; }
- seq_printf(seq, "Globally announced HNAs received via the mesh %s\n", + seq_printf(seq, "Globally announced TTs received via the mesh %s\n", net_dev->name);
- spin_lock_bh(&bat_priv->hna_ghash_lock); + spin_lock_bh(&bat_priv->tt_ghash_lock);
buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx via xx:xx:xx:xx:xx:xx\n"*/ @@ -523,7 +523,7 @@ int hna_global_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock); ret = -ENOMEM; goto out; } @@ -534,17 +534,17 @@ int hna_global_seq_print_text(struct seq_file *seq, void *offset) head = &hash->table[i];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_global_entry, node, + hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) { pos += snprintf(buff + pos, 44, " * %pM via %pM\n", - hna_global_entry->addr, - hna_global_entry->orig_node->orig); + tt_global_entry->addr, + tt_global_entry->orig_node->orig); } rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock);
seq_printf(seq, "%s", buff); kfree(buff); @@ -554,84 +554,84 @@ out: return ret; }
-static void _hna_global_del_orig(struct bat_priv *bat_priv, - struct hna_global_entry *hna_global_entry, +static void _tt_global_del_orig(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, char *message) { bat_dbg(DBG_ROUTES, bat_priv, - "Deleting global hna entry %pM (via %pM): %s\n", - hna_global_entry->addr, hna_global_entry->orig_node->orig, + "Deleting global tt entry %pM (via %pM): %s\n", + tt_global_entry->addr, tt_global_entry->orig_node->orig, message);
- hash_remove(bat_priv->hna_global_hash, compare_ghna, choose_orig, - hna_global_entry->addr); - kfree(hna_global_entry); + hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, + tt_global_entry->addr); + kfree(tt_global_entry); }
-void hna_global_del_orig(struct bat_priv *bat_priv, +void tt_global_del_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, char *message) { - struct hna_global_entry *hna_global_entry; - int hna_buff_count = 0; - unsigned char *hna_ptr; + struct tt_global_entry *tt_global_entry; + int tt_buff_count = 0; + unsigned char *tt_ptr;
- if (orig_node->hna_buff_len == 0) + if (orig_node->tt_buff_len == 0) return;
- spin_lock_bh(&bat_priv->hna_ghash_lock); + spin_lock_bh(&bat_priv->tt_ghash_lock);
- while ((hna_buff_count + 1) * ETH_ALEN <= orig_node->hna_buff_len) { - hna_ptr = orig_node->hna_buff + (hna_buff_count * ETH_ALEN); - hna_global_entry = hna_global_hash_find(bat_priv, hna_ptr); + while ((tt_buff_count + 1) * ETH_ALEN <= orig_node->tt_buff_len) { + tt_ptr = orig_node->tt_buff + (tt_buff_count * ETH_ALEN); + tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr);
- if ((hna_global_entry) && - (hna_global_entry->orig_node == orig_node)) - _hna_global_del_orig(bat_priv, hna_global_entry, + if ((tt_global_entry) && + (tt_global_entry->orig_node == orig_node)) + _tt_global_del_orig(bat_priv, tt_global_entry, message);
- hna_buff_count++; + tt_buff_count++; }
- spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- orig_node->hna_buff_len = 0; - kfree(orig_node->hna_buff); - orig_node->hna_buff = NULL; + orig_node->tt_buff_len = 0; + kfree(orig_node->tt_buff); + orig_node->tt_buff = NULL; }
-static void hna_global_del(struct hlist_node *node, void *arg) +static void tt_global_del(struct hlist_node *node, void *arg) { - void *data = container_of(node, struct hna_global_entry, hash_entry); + void *data = container_of(node, struct tt_global_entry, hash_entry);
kfree(data); }
-void hna_global_free(struct bat_priv *bat_priv) +void tt_global_free(struct bat_priv *bat_priv) { - if (!bat_priv->hna_global_hash) + if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->hna_global_hash, hna_global_del, NULL); - bat_priv->hna_global_hash = NULL; + hash_delete(bat_priv->tt_global_hash, tt_global_del, NULL); + bat_priv->tt_global_hash = NULL; }
struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr) { - struct hna_global_entry *hna_global_entry; + struct tt_global_entry *tt_global_entry; struct orig_node *orig_node = NULL;
- spin_lock_bh(&bat_priv->hna_ghash_lock); - hna_global_entry = hna_global_hash_find(bat_priv, addr); + spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (!hna_global_entry) + if (!tt_global_entry) goto out;
- if (!atomic_inc_not_zero(&hna_global_entry->orig_node->refcount)) + if (!atomic_inc_not_zero(&tt_global_entry->orig_node->refcount)) goto out;
- orig_node = hna_global_entry->orig_node; + orig_node = tt_global_entry->orig_node;
out: - spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; } diff --git a/translation-table.h b/translation-table.h index f19931c..46152c3 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,22 +22,22 @@ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
-int hna_local_init(struct bat_priv *bat_priv); -void hna_local_add(struct net_device *soft_iface, uint8_t *addr); -void hna_local_remove(struct bat_priv *bat_priv, +int tt_local_init(struct bat_priv *bat_priv); +void tt_local_add(struct net_device *soft_iface, uint8_t *addr); +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message); -int hna_local_fill_buffer(struct bat_priv *bat_priv, +int tt_local_fill_buffer(struct bat_priv *bat_priv, unsigned char *buff, int buff_len); -int hna_local_seq_print_text(struct seq_file *seq, void *offset); -void hna_local_free(struct bat_priv *bat_priv); -int hna_global_init(struct bat_priv *bat_priv); -void hna_global_add_orig(struct bat_priv *bat_priv, +int tt_local_seq_print_text(struct seq_file *seq, void *offset); +void tt_local_free(struct bat_priv *bat_priv); +int tt_global_init(struct bat_priv *bat_priv); +void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *hna_buff, int hna_buff_len); -int hna_global_seq_print_text(struct seq_file *seq, void *offset); -void hna_global_del_orig(struct bat_priv *bat_priv, + unsigned char *tt_buff, int tt_buff_len); +int tt_global_seq_print_text(struct seq_file *seq, void *offset); +void tt_global_del_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, char *message); -void hna_global_free(struct bat_priv *bat_priv); +void tt_global_free(struct bat_priv *bat_priv); struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index 947bafc..b8c72c3 100644 --- a/types.h +++ b/types.h @@ -75,8 +75,8 @@ struct orig_node { unsigned long batman_seqno_reset; uint8_t gw_flags; uint8_t flags; - unsigned char *hna_buff; - int16_t hna_buff_len; + unsigned char *tt_buff; + int16_t tt_buff_len; uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -156,20 +156,20 @@ struct bat_priv { struct hlist_head gw_list; struct list_head vis_send_list; struct hashtable_t *orig_hash; - struct hashtable_t *hna_local_hash; - struct hashtable_t *hna_global_hash; + struct hashtable_t *tt_local_hash; + struct hashtable_t *tt_global_hash; struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ - spinlock_t hna_lhash_lock; /* protects hna_local_hash */ - spinlock_t hna_ghash_lock; /* protects hna_global_hash */ + spinlock_t tt_lhash_lock; /* protects tt_local_hash */ + spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ spinlock_t softif_neigh_lock; /* protects soft-interface neigh list */ - int16_t num_local_hna; - atomic_t hna_local_changed; - struct delayed_work hna_work; + int16_t num_local_tt; + atomic_t tt_local_changed; + struct delayed_work tt_work; struct delayed_work orig_work; struct delayed_work vis_work; struct gw_node __rcu *curr_gw; /* rcu protected pointer */ @@ -192,14 +192,14 @@ struct socket_packet { struct icmp_packet_rr icmp_packet; };
-struct hna_local_entry { +struct tt_local_entry { uint8_t addr[ETH_ALEN]; unsigned long last_seen; char never_purge; struct hlist_node hash_entry; };
-struct hna_global_entry { +struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; struct hlist_node hash_entry; @@ -262,7 +262,7 @@ struct vis_info { struct vis_info_entry { uint8_t src[ETH_ALEN]; uint8_t dest[ETH_ALEN]; - uint8_t quality; /* quality = 0 means HNA */ + uint8_t quality; /* quality = 0 means TT */ } __packed;
struct recvlist_node { diff --git a/unicast.c b/unicast.c index b46cbf1..19c3daf 100644 --- a/unicast.c +++ b/unicast.c @@ -300,7 +300,7 @@ int unicast_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv) goto find_router; }
- /* check for hna host - increases orig_node refcount */ + /* check for tt host - increases orig_node refcount */ orig_node = transtable_search(bat_priv, ethhdr->h_dest);
find_router: diff --git a/vis.c b/vis.c index c8f571d..c39f20c 100644 --- a/vis.c +++ b/vis.c @@ -194,7 +194,7 @@ static ssize_t vis_data_read_entry(char *buff, struct vis_info_entry *entry, { /* maximal length: max(4+17+2, 3+17+1+3+2) == 26 */ if (primary && entry->quality == 0) - return sprintf(buff, "HNA %pM, ", entry->dest); + return sprintf(buff, "TT %pM, ", entry->dest); else if (compare_eth(entry->src, src)) return sprintf(buff, "TQ %pM %d, ", entry->dest, entry->quality); @@ -622,7 +622,7 @@ static int generate_vis_packet(struct bat_priv *bat_priv) struct vis_info *info = (struct vis_info *)bat_priv->my_vis_info; struct vis_packet *packet = (struct vis_packet *)info->skb_packet->data; struct vis_info_entry *entry; - struct hna_local_entry *hna_local_entry; + struct tt_local_entry *tt_local_entry; int best_tq = -1, i;
info->first_seen = jiffies; @@ -678,29 +678,29 @@ next: rcu_read_unlock(); }
- hash = bat_priv->hna_local_hash; + hash = bat_priv->tt_local_hash;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i];
- hlist_for_each_entry(hna_local_entry, node, head, hash_entry) { + hlist_for_each_entry(tt_local_entry, node, head, hash_entry) { entry = (struct vis_info_entry *) skb_put(info->skb_packet, sizeof(*entry)); memset(entry->src, 0, ETH_ALEN); - memcpy(entry->dest, hna_local_entry->addr, ETH_ALEN); - entry->quality = 0; /* 0 means HNA */ + memcpy(entry->dest, tt_local_entry->addr, ETH_ALEN); + entry->quality = 0; /* 0 means TT */ packet->entries++;
if (vis_packet_full(info)) { - spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return 0; } } }
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return 0;
unlock:
The old HNA mechanism has been completely rewritten from scratch. The new mechanism consists in announcing only local translation-table changes, thus reducing the protocol overhead.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Hna-improvements
Moreover:
- COMPAT_VERSION has been increased to 14
- batman-adv now depends on the "crc16" module for TT CRC computation
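To make the receiver side of the mechanism easier to follow, here is a rough sketch of the decision an OGM receiver has to take; the helper names, the last_ttvn field and the flag combination are assumptions for illustration, while BAT_TT_QUERY, TT_REQUEST and TT_FULL_TABLE are the constants actually added by this patch:

/* Sketch only: decide whether the per-OGM diff is enough or whether
 * the full table has to be requested. Helper names are hypothetical;
 * the real logic lives in routing.c / translation-table.c. */
static void tt_handle_ogm_sketch(struct bat_priv *bat_priv,
				 struct orig_node *orig_node,
				 struct batman_packet *packet,
				 unsigned char *tt_buff)
{
	uint8_t orig_ttvn = atomic_read(&orig_node->last_ttvn);

	if (packet->tt_ver_num == (uint8_t)(orig_ttvn + 1)) {
		/* in sync: applying the advertised changes is enough */
		tt_update_changes(bat_priv, orig_node, tt_buff,
				  packet->tt_num_changes);
	} else if (packet->tt_ver_num != orig_ttvn ||
		   packet->tt_crc != tt_global_crc(bat_priv, orig_node)) {
		/* missed one or more diffs, or the tables diverged:
		 * ask for the full table via a BAT_TT_QUERY packet */
		send_tt_request(bat_priv, orig_node,
				TT_REQUEST | TT_FULL_TABLE);
	}
}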
Signed-off-by: Antonio Quartulli <ordex@autistici.org>
---
 aggregation.c       |   23 +-
 aggregation.h       |    6 +-
 hard-interface.c    |   13 +-
 main.c              |   13 +-
 main.h              |   10 +-
 originator.c        |    8 +-
 packet.h            |   34 ++-
 routing.c           |  237 +++++++++--
 routing.h           |   10 +-
 send.c              |   90 +++-
 send.h              |    2 +-
 soft-interface.c    |   11 +-
 translation-table.c | 1151 ++++++++++++++++++++++++++++++++++++++++++---------
 translation-table.h |   39 ++-
 types.h             |   38 ++-
 unicast.c           |    3 +
 16 files changed, 1374 insertions(+), 314 deletions(-)
diff --git a/aggregation.c b/aggregation.c index 9b94590..de59b5f 100644 --- a/aggregation.c +++ b/aggregation.c @@ -20,16 +20,11 @@ */
#include "main.h" +#include "translation-table.h" #include "aggregation.h" #include "send.h" #include "routing.h"
-/* calculate the size of the tt information for a given packet */ -static int tt_len(struct batman_packet *batman_packet) -{ - return batman_packet->num_tt * ETH_ALEN; -} - /* return true if new_packet can be aggregated with forw_packet */ static bool can_aggregate_with(struct batman_packet *new_batman_packet, int packet_len, @@ -255,18 +250,20 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff, batman_packet = (struct batman_packet *)packet_buff;
do { - /* network to host order for our 32bit seqno, and the - orig_interval. */ + /* network to host order for our 32bit seqno and the + orig_interval */ batman_packet->seqno = ntohl(batman_packet->seqno); + batman_packet->tt_crc = ntohs(batman_packet->tt_crc);
tt_buff = packet_buff + buff_pos + BAT_PACKET_LEN; - receive_bat_packet(ethhdr, batman_packet, - tt_buff, tt_len(batman_packet), - if_incoming);
- buff_pos += BAT_PACKET_LEN + tt_len(batman_packet); + receive_bat_packet(ethhdr, batman_packet, tt_buff, if_incoming); + + buff_pos += BAT_PACKET_LEN + + tt_len(batman_packet->tt_num_changes); + batman_packet = (struct batman_packet *) (packet_buff + buff_pos); } while (aggregated_packet(buff_pos, packet_len, - batman_packet->num_tt)); + batman_packet->tt_num_changes)); } diff --git a/aggregation.h b/aggregation.h index 7e6d72f..c631a4c 100644 --- a/aggregation.h +++ b/aggregation.h @@ -25,9 +25,11 @@ #include "main.h"
/* is there another aggregated packet here? */ -static inline int aggregated_packet(int buff_pos, int packet_len, int num_tt) +static inline int aggregated_packet(int buff_pos, int packet_len, + int tt_num_changes) { - int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_tt * ETH_ALEN); + int next_buff_pos = buff_pos + BAT_PACKET_LEN + (tt_num_changes * + sizeof(struct tt_change));
return (next_buff_pos <= packet_len) && (next_buff_pos <= MAX_AGGREGATION_BYTES); diff --git a/hard-interface.c b/hard-interface.c index 9e4ac7d..2a7c533 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -156,12 +156,6 @@ static void primary_if_select(struct bat_priv *bat_priv,
primary_if_update_addr(bat_priv);
- /*** - * hacky trick to make sure that we send the TT information via - * our new primary interface - */ - atomic_set(&bat_priv->tt_local_changed, 1); - out: spin_unlock_bh(&hardif_list_lock); } @@ -345,7 +339,8 @@ int hardif_enable_interface(struct hard_iface *hard_iface, char *iface_name) batman_packet->flags = 0; batman_packet->ttl = 2; batman_packet->tq = TQ_MAX_VALUE; - batman_packet->num_tt = 0; + batman_packet->tt_num_changes = 0; + batman_packet->tt_ver_num = 0;
hard_iface->if_num = bat_priv->num_ifaces; bat_priv->num_ifaces++; @@ -674,6 +669,10 @@ static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, case BAT_VIS: ret = recv_vis_packet(skb, hard_iface); break; + /* Translation table query (request or response) */ + case BAT_TT_QUERY: + ret = recv_tt_query(skb, hard_iface); + break; default: ret = NET_RX_DROP; } diff --git a/main.c b/main.c index 2970908..a84679a 100644 --- a/main.c +++ b/main.c @@ -83,6 +83,9 @@ int mesh_init(struct net_device *soft_iface) spin_lock_init(&bat_priv->forw_bcast_list_lock); spin_lock_init(&bat_priv->tt_lhash_lock); spin_lock_init(&bat_priv->tt_ghash_lock); + spin_lock_init(&bat_priv->tt_changes_list_lock); + spin_lock_init(&bat_priv->tt_req_list_lock); + spin_lock_init(&bat_priv->tt_buff_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); spin_lock_init(&bat_priv->vis_list_lock); @@ -92,14 +95,13 @@ int mesh_init(struct net_device *soft_iface) INIT_HLIST_HEAD(&bat_priv->forw_bcast_list); INIT_HLIST_HEAD(&bat_priv->gw_list); INIT_HLIST_HEAD(&bat_priv->softif_neigh_list); + INIT_LIST_HEAD(&bat_priv->tt_changes_list); + INIT_LIST_HEAD(&bat_priv->tt_req_list);
if (originator_init(bat_priv) < 1) goto err;
- if (tt_local_init(bat_priv) < 1) - goto err; - - if (tt_global_init(bat_priv) < 1) + if (tt_init(bat_priv) < 1) goto err;
tt_local_add(soft_iface, soft_iface->dev_addr); @@ -133,8 +135,7 @@ void mesh_free(struct net_device *soft_iface) gw_node_purge(bat_priv); originator_free(bat_priv);
- tt_local_free(bat_priv); - tt_global_free(bat_priv); + tt_free(bat_priv);
softif_neigh_purge(bat_priv);
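The translation table now has a single setup/teardown entry point: mesh_init() only calls tt_init() and mesh_free() only calls tt_free(), while the per-table helpers become static in translation-table.c further down. The bodies of the new wrappers are not part of this hunk, so the following stand-alone sketch (stub types, invented helper names) only shows the obvious shape they can take; presumably the real tt_init() also arms the periodic tt_purge() work via tt_start_timer().

/*
 * Stand-alone sketch, not part of the patch: illustration of a combined
 * tt_init()/tt_free() that simply chains the local and global table helpers,
 * using stub types instead of struct bat_priv.
 */
#include <stdio.h>

struct tt_state { int local_up; int global_up; };

static int  local_table_init(struct tt_state *s)  { s->local_up = 1;  return 1; }
static int  global_table_init(struct tt_state *s) { s->global_up = 1; return 1; }
static void local_table_free(struct tt_state *s)  { s->local_up = 0; }
static void global_table_free(struct tt_state *s) { s->global_up = 0; }

static int tt_init(struct tt_state *s)
{
        if (local_table_init(s) < 1)
                return 0;
        if (global_table_init(s) < 1)
                return 0;
        return 1;       /* callers treat < 1 as failure, as in mesh_init() */
}

static void tt_free(struct tt_state *s)
{
        local_table_free(s);
        global_table_free(s);
}

int main(void)
{
        struct tt_state s = { 0, 0 };

        if (tt_init(&s) < 1)
                return 1;
        printf("tt tables up: local=%d global=%d\n", s.local_up, s.global_up);
        tt_free(&s);
        return 0;
}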
diff --git a/main.h b/main.h index 50eb819..cc1c277 100644 --- a/main.h +++ b/main.h @@ -39,8 +39,8 @@ #define PURGE_TIMEOUT 200 /* purge originators after time in seconds if no * valid packet comes in -> TODO: check * influence on TQ_LOCAL_WINDOW_SIZE */ -#define TT_LOCAL_TIMEOUT 3600 /* in seconds */ - +#define TT_LOCAL_TIMEOUT 3600 /* in seconds */ +#define TT_REQUEST_TIMEOUT 3 /* seconds we have to keep pending tt_req */ #define TQ_LOCAL_WINDOW_SIZE 64 /* sliding packet range of received originator * messages in squence numbers (should be a * multiple of our word size) */ @@ -49,6 +49,12 @@ #define TQ_LOCAL_BIDRECT_RECV_MINIMUM 1 #define TQ_TOTAL_BIDRECT_LIMIT 1
+#define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */ + +/* Transtable operations */ +#define TT_ADD 0 +#define TT_DEL 1 + #define NUM_WORDS (TQ_LOCAL_WINDOW_SIZE / WORD_BIT_SIZE)
#define LOG_BUF_LEN 8192 /* has to be a power of 2 */ diff --git a/originator.c b/originator.c index 0314875..be7257b 100644 --- a/originator.c +++ b/originator.c @@ -147,6 +147,7 @@ static void orig_node_free_rcu(struct rcu_head *rcu) tt_global_del_orig(orig_node->bat_priv, orig_node, "originator timed out");
+ kfree(orig_node->tt_buff); kfree(orig_node->bcast_own); kfree(orig_node->bcast_own_sum); kfree(orig_node); @@ -215,6 +216,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) spin_lock_init(&orig_node->ogm_cnt_lock); spin_lock_init(&orig_node->bcast_seqno_lock); spin_lock_init(&orig_node->neigh_list_lock); + spin_lock_init(&orig_node->tt_buff_lock);
/* extra reference for return */ atomic_set(&orig_node->refcount, 2); @@ -223,6 +225,8 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; orig_node->tt_buff = NULL; + orig_node->tt_buff_len = 0; + atomic_set(&orig_node->tt_size, 0); orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1 @@ -332,9 +336,7 @@ static bool purge_orig_node(struct bat_priv *bat_priv, if (purge_orig_neighbors(bat_priv, orig_node, &best_neigh_node)) { update_routes(bat_priv, orig_node, - best_neigh_node, - orig_node->tt_buff, - orig_node->tt_buff_len); + best_neigh_node); } }
diff --git a/packet.h b/packet.h index c225c3a..34a2775 100644 --- a/packet.h +++ b/packet.h @@ -30,9 +30,10 @@ #define BAT_BCAST 0x04 #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 +#define BAT_TT_QUERY 0x07
/* this file is included by batctl which needs these defines */ -#define COMPAT_VERSION 12 +#define COMPAT_VERSION 14 #define DIRECTLINK 0x40 #define VIS_SERVER 0x20 #define PRIMARIES_FIRST_HOP 0x10 @@ -52,6 +53,11 @@ #define UNI_FRAG_HEAD 0x01 #define UNI_FRAG_LARGETAIL 0x02
+/* TT flags */ +#define TT_RESPONSE 0x00 +#define TT_REQUEST 0x01 +#define TT_FULL_TABLE 0x02 + struct batman_packet { uint8_t packet_type; uint8_t version; /* batman version field */ @@ -61,7 +67,9 @@ struct batman_packet { uint8_t orig[6]; uint8_t prev_sender[6]; uint8_t ttl; - uint8_t num_tt; + uint8_t tt_ver_num; + uint16_t tt_crc; + uint8_t tt_num_changes; uint8_t gw_flags; /* flags related to gateway class */ uint8_t align; } __packed; @@ -101,6 +109,7 @@ struct unicast_packet { uint8_t version; /* batman version field */ uint8_t dest[6]; uint8_t ttl; + uint8_t ttvn; /* destination ttvn */ } __packed;
struct unicast_frag_packet { @@ -134,4 +143,25 @@ struct vis_packet { uint8_t sender_orig[6]; /* who sent or rebroadcasted this packet */ } __packed;
+struct tt_query_packet { + uint8_t packet_type; + uint8_t version; /* batman version field */ + uint8_t dst[6]; + uint8_t ttl; + uint8_t flags; /* bit0: 0: -> tt_request + * 1: -> tt_response + * bit1: request the full table + */ + uint8_t src[6]; + uint8_t ttvn; /* if tt_request: ttvn that triggered the + * request + * if tt_response: new ttvn for the src + * orig_node + */ + uint16_t tt_data; /* if tt_request: crc associated with the + * ttvn + * if tt_response: table_size + */ +} __packed; + #endif /* _NET_BATMAN_ADV_PACKET_H_ */ diff --git a/routing.c b/routing.c index 91b3709..838394b 100644 --- a/routing.c +++ b/routing.c @@ -64,28 +64,68 @@ void slide_own_bcast_window(struct hard_iface *hard_iface) } }
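The new tt_query_packet serves both directions of the query traffic: bit0 of flags distinguishes request from response, bit1 asks for the full table, and tt_data carries the table CRC on a request but the table size on a response. Below is a stand-alone sketch, not part of the patch, of how a TT_REQUEST is laid out under this definition; fill_tt_request(), the TTL value and the example addresses are made up for illustration, and tt_data is converted to network order here to match the ntohs() done on reception in recv_tt_query().

/*
 * Stand-alone sketch (not part of the patch): filling in a TT_REQUEST
 * according to the tt_query_packet layout defined above. The constants are
 * copied from this hunk; fill_tt_request() and the addresses are invented.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>          /* htons() */

#define ETH_ALEN        6
#define BAT_TT_QUERY    0x07
#define COMPAT_VERSION  14
#define TT_REQUEST      0x01
#define TT_FULL_TABLE   0x02

struct tt_query_packet {
        uint8_t  packet_type;
        uint8_t  version;
        uint8_t  dst[ETH_ALEN];
        uint8_t  ttl;
        uint8_t  flags;
        uint8_t  src[ETH_ALEN];
        uint8_t  ttvn;          /* ttvn that triggered the request */
        uint16_t tt_data;       /* CRC we computed for that ttvn */
} __attribute__((packed));

static void fill_tt_request(struct tt_query_packet *p,
                            const uint8_t *src, const uint8_t *dst,
                            uint8_t ttvn, uint16_t crc, int full_table)
{
        memset(p, 0, sizeof(*p));
        p->packet_type = BAT_TT_QUERY;
        p->version = COMPAT_VERSION;
        p->ttl = 50;                    /* the patch uses the TTL constant */
        p->flags = TT_REQUEST;
        if (full_table)
                p->flags |= TT_FULL_TABLE;      /* ask for the whole table */
        memcpy(p->src, src, ETH_ALEN);
        memcpy(p->dst, dst, ETH_ALEN);
        p->ttvn = ttvn;
        p->tt_data = htons(crc);        /* CRC travels in network order */
}

int main(void)
{
        const uint8_t me[ETH_ALEN]   = { 0x02, 0x11, 0x22, 0x33, 0x44, 0x55 };
        const uint8_t peer[ETH_ALEN] = { 0x02, 0xaa, 0xbb, 0xcc, 0xdd, 0xee };
        struct tt_query_packet req;

        fill_tt_request(&req, me, peer, 7, 0xbeef, 0);
        printf("TT_REQUEST for ttvn %u, %zu bytes on the wire\n",
               req.ttvn, sizeof(req));
        return 0;
}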
-static void update_TT(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len) +static void update_transtable(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes, + uint8_t ttvn, uint16_t tt_crc) { - if ((tt_buff_len != orig_node->tt_buff_len) || - ((tt_buff_len > 0) && - (orig_node->tt_buff_len > 0) && - (memcmp(orig_node->tt_buff, tt_buff, tt_buff_len) != 0))) { - - if (orig_node->tt_buff_len > 0) - tt_global_del_orig(bat_priv, orig_node, - "originator changed tt"); - - if ((tt_buff_len > 0) && (tt_buff)) - tt_global_add_orig(bat_priv, orig_node, - tt_buff, tt_buff_len); + struct tt_change *tt_change; + int count; + uint8_t orig_ttvn = (uint8_t)atomic_read(&orig_node->last_tt_ver_num); + + /* the ttvn increased by one -> we can apply the attached changes */ + if (ttvn - orig_ttvn == 1) { + /* if it does not contain the changes send a tt request */ + if (!tt_num_changes) + goto request_table; + + for (count = 0; count < tt_num_changes; count++) { + tt_change = (struct tt_change *) tt_buff + count; + /* Check for the change op */ + if (tt_change->op == TT_DEL) + tt_global_del(bat_priv, orig_node, + tt_change->addr, + "tt remotely removed"); + else + if (!tt_global_add(bat_priv, orig_node, + tt_change->addr, + ttvn)) + /* In case of problem while storing a + * global_entry, we stop the updating + * procedure without committing the + * ttvn change. This will avoid to send + * corrupted data on tt_request + */ + return; + } + /* Let's save the buffer (if any) */ + tt_save_orig_buffer(bat_priv, orig_node, + tt_buff, tt_num_changes); + + atomic_set(&orig_node->last_tt_ver_num, ttvn); + + /* Even if we received the crc into the OGM, we prefer + * to recompute it to spot any possible inconsistency + * in the global table */ + orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + } else { + /* if we missed more than one change or our tables are not + * in sync anymore -> request fresh tt data */ + if (ttvn != orig_ttvn || orig_node->tt_crc != tt_crc) { +request_table: + bat_dbg(DBG_ROUTES, bat_priv, "TT changes missing " + "for %pM. Need to retrieve last OGM buffer\n", + orig_node->orig); + send_tt_request(bat_priv, orig_node, ttvn, tt_crc, + true); + return; + } } }
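Stripped of the locking and hash handling, update_transtable() is a three-way decision on the received translation table version number (ttvn): apply the attached diff when the originator is exactly one version ahead and the changes are actually present, fall back to a tt_request when the diff is missing, more than one version was skipped or the CRC disagrees, and do nothing when the tables are already in sync. A stand-alone sketch of just that decision, with the kernel structures replaced by plain values and invented helper names:

/*
 * Stand-alone sketch (not part of the patch) of the decision made by
 * update_transtable() above; orig_ttvn/orig_crc model the state kept per
 * originator, ogm_* is what arrived with the OGM.
 */
#include <stdint.h>
#include <stdio.h>

enum tt_action { APPLY_DIFF, REQUEST_TABLE, NOTHING_TO_DO };

static enum tt_action transtable_action(uint8_t orig_ttvn, uint16_t orig_crc,
                                        uint8_t ogm_ttvn, uint16_t ogm_crc,
                                        uint8_t num_changes)
{
        /* exactly one version ahead: the OGM may carry the diff
         * (plain integer subtraction, as in the hunk above; a wrap from
         * 255 to 0 falls through to the request path) */
        if (ogm_ttvn - orig_ttvn == 1) {
                if (!num_changes)
                        return REQUEST_TABLE;   /* diff missing, ask for it */
                return APPLY_DIFF;
        }

        /* more than one version missed, or same ttvn but CRC mismatch */
        if (ogm_ttvn != orig_ttvn || ogm_crc != orig_crc)
                return REQUEST_TABLE;

        return NOTHING_TO_DO;
}

int main(void)
{
        /* in sync, nothing attached -> 2 (NOTHING_TO_DO) */
        printf("%d\n", transtable_action(5, 0x1234, 5, 0x1234, 0));
        /* one version ahead, 3 changes attached -> 0 (APPLY_DIFF) */
        printf("%d\n", transtable_action(5, 0x1234, 6, 0x9999, 3));
        /* two versions missed -> 1 (REQUEST_TABLE) */
        printf("%d\n", transtable_action(5, 0x1234, 7, 0x9999, 4));
        return 0;
}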
static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, - unsigned char *tt_buff, int tt_buff_len) + struct neigh_node *neigh_node) { struct neigh_node *curr_router;
@@ -93,7 +133,6 @@ static void update_route(struct bat_priv *bat_priv,
/* route deleted */ if ((curr_router) && (!neigh_node)) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting route towards: %pM\n", orig_node->orig); tt_global_del_orig(bat_priv, orig_node, @@ -105,9 +144,6 @@ static void update_route(struct bat_priv *bat_priv, bat_dbg(DBG_ROUTES, bat_priv, "Adding route towards: %pM (via %pM)\n", orig_node->orig, neigh_node->addr); - tt_global_add_orig(bat_priv, orig_node, - tt_buff, tt_buff_len); - /* route changed */ } else { bat_dbg(DBG_ROUTES, bat_priv, @@ -135,8 +171,7 @@ static void update_route(struct bat_priv *bat_priv,
void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *tt_buff, - int tt_buff_len) + struct neigh_node *neigh_node) { struct neigh_node *router = NULL;
@@ -146,11 +181,7 @@ void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, router = orig_node_get_router(orig_node);
if (router != neigh_node) - update_route(bat_priv, orig_node, neigh_node, - tt_buff, tt_buff_len); - /* may be just TT changed */ - else - update_TT(bat_priv, orig_node, tt_buff, tt_buff_len); + update_route(bat_priv, orig_node, neigh_node);
out: if (router) @@ -387,14 +418,12 @@ static void update_orig(struct bat_priv *bat_priv, struct ethhdr *ethhdr, struct batman_packet *batman_packet, struct hard_iface *if_incoming, - unsigned char *tt_buff, int tt_buff_len, - char is_duplicate) + unsigned char *tt_buff, char is_duplicate) { struct neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; struct neigh_node *router = NULL; struct orig_node *orig_node_tmp; struct hlist_node *node; - int tmp_tt_buff_len; uint8_t bcast_own_sum_orig, bcast_own_sum_neigh;
bat_dbg(DBG_BATMAN, bat_priv, "update_originator(): " @@ -459,9 +488,6 @@ static void update_orig(struct bat_priv *bat_priv,
bonding_candidate_add(orig_node, neigh_node);
- tmp_tt_buff_len = (tt_buff_len > batman_packet->num_tt * ETH_ALEN ? - batman_packet->num_tt * ETH_ALEN : tt_buff_len); - /* if this neighbor already is our next hop there is nothing * to change */ router = orig_node_get_router(orig_node); @@ -491,15 +517,19 @@ static void update_orig(struct bat_priv *bat_priv, goto update_tt; }
- update_routes(bat_priv, orig_node, neigh_node, - tt_buff, tmp_tt_buff_len); - goto update_gw; + update_routes(bat_priv, orig_node, neigh_node);
update_tt: - update_routes(bat_priv, orig_node, router, - tt_buff, tmp_tt_buff_len); + /* I have to check for transtable changes only if the OGM has been + * sent through a primary interface */ + if (((batman_packet->orig != ethhdr->h_source) && + (batman_packet->ttl > 2)) || + (batman_packet->flags & PRIMARIES_FIRST_HOP)) + update_transtable(bat_priv, orig_node, tt_buff, + batman_packet->tt_num_changes, + batman_packet->tt_ver_num, + batman_packet->tt_crc);
-update_gw: if (orig_node->gw_flags != batman_packet->gw_flags) gw_node_update(bat_priv, orig_node, batman_packet->gw_flags);
@@ -621,7 +651,7 @@ out:
void receive_bat_packet(struct ethhdr *ethhdr, struct batman_packet *batman_packet, - unsigned char *tt_buff, int tt_buff_len, + unsigned char *tt_buff, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -660,12 +690,14 @@ void receive_bat_packet(struct ethhdr *ethhdr,
bat_dbg(DBG_BATMAN, bat_priv, "Received BATMAN packet via NB: %pM, IF: %s [%pM] " - "(from OG: %pM, via prev OG: %pM, seqno %d, tq %d, " - "TTL %d, V %d, IDF %d)\n", + "(from OG: %pM, via prev OG: %pM, seqno %d, ttvn %u, " + "crc %u, changes %u, td %d, TTL %d, V %d, IDF %d)\n", ethhdr->h_source, if_incoming->net_dev->name, if_incoming->net_dev->dev_addr, batman_packet->orig, batman_packet->prev_sender, batman_packet->seqno, - batman_packet->tq, batman_packet->ttl, batman_packet->version, + batman_packet->tt_ver_num, batman_packet->tt_crc, + batman_packet->tt_num_changes, batman_packet->tq, + batman_packet->ttl, batman_packet->version, has_directlink_flag);
rcu_read_lock(); @@ -818,14 +850,14 @@ void receive_bat_packet(struct ethhdr *ethhdr, ((orig_node->last_real_seqno == batman_packet->seqno) && (orig_node->last_ttl - 3 <= batman_packet->ttl)))) update_orig(bat_priv, orig_node, ethhdr, batman_packet, - if_incoming, tt_buff, tt_buff_len, is_duplicate); + if_incoming, tt_buff, is_duplicate);
/* is single hop (direct) neighbor */ if (is_single_hop_neigh) {
/* mark direct link on incoming interface */ schedule_forward_packet(orig_node, ethhdr, batman_packet, - 1, tt_buff_len, if_incoming); + 1, if_incoming);
bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: " "rebroadcast neighbor packet with direct link flag\n"); @@ -848,7 +880,7 @@ void receive_bat_packet(struct ethhdr *ethhdr, bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: rebroadcast originator packet\n"); schedule_forward_packet(orig_node, ethhdr, batman_packet, - 0, tt_buff_len, if_incoming); + 0, if_incoming);
out_neigh: if ((orig_neigh_node) && (!is_single_hop_neigh)) @@ -1195,6 +1227,69 @@ static struct neigh_node *find_ifalter_router(struct orig_node *primary_orig, return router; }
+int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if) +{ + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct tt_query_packet *tt_query; + struct ethhdr *ethhdr; + int ret = NET_RX_DROP; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, sizeof(struct tt_query_packet)))) + goto out; + + /* I could need to modify it */ + if (skb_cow(skb, sizeof(struct tt_query_packet)) < 0) + goto out; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with unicast indication but broadcast recipient */ + if (is_broadcast_ether_addr(ethhdr->h_dest)) + goto out; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + goto out; + + tt_query = (struct tt_query_packet *)skb->data; + + tt_query->tt_data = ntohs(tt_query->tt_data); + + if (tt_query->flags & TT_REQUEST) { + /* Try to reply to this tt_request */ + ret = send_tt_response(bat_priv, tt_query); + if (ret != NET_RX_SUCCESS) { + bat_dbg(DBG_ROUTES, bat_priv, + "Routing TT_REQUEST to %pM [%c]\n", + tt_query->dst, + (tt_query->flags & TT_FULL_TABLE ? 'F' : '.')); + tt_query->tt_data = htons(tt_query->tt_data); + return route_unicast_packet(skb, recv_if); + } + goto out; + } + /* We need to linearize the packet to access the TT data */ + if (skb_linearize(skb) < 0) + goto out; + + if (is_my_mac(tt_query->dst)) + handle_tt_response(bat_priv, tt_query); + else { + bat_dbg(DBG_ROUTES, bat_priv, + "Routing TT_RESPONSE to %pM [%c]\n", + tt_query->dst, + (tt_query->flags & TT_FULL_TABLE ? 'F' : '.')); + tt_query->tt_data = htons(tt_query->tt_data); + return route_unicast_packet(skb, recv_if); + } + ret = NET_RX_SUCCESS; + +out: + kfree_skb(skb); + return ret; +} + /* find a suitable router for this originator, and use * bonding if possible. increases the found neighbors * refcount.*/ @@ -1376,14 +1471,64 @@ out:
int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) { + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); struct unicast_packet *unicast_packet; int hdr_size = sizeof(struct unicast_packet); + struct orig_node *orig_node; + struct ethhdr *ethhdr; + uint8_t curr_ttvn; + int16_t diff;
if (check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
unicast_packet = (struct unicast_packet *)skb->data;
+ if (is_my_mac(unicast_packet->dest)) + curr_ttvn = (uint8_t)atomic_read(&bat_priv->tt_ver_num); + else { + orig_node = orig_hash_find(bat_priv, unicast_packet->dest); + + if (!orig_node) + return NET_RX_DROP; + + curr_ttvn = (uint8_t)atomic_read(&orig_node->last_tt_ver_num); + orig_node_free_ref(orig_node); + } + + diff = unicast_packet->ttvn - curr_ttvn; + /* Check whether I have to reroute the packet */ + if (unicast_packet->packet_type == BAT_UNICAST && + (diff < 0 && diff > -0xff/2)) { + /* Linearize the skb before accessing it */ + if (skb_linearize(skb) < 0) + return NET_RX_DROP; + + ethhdr = (struct ethhdr *)(skb->data + + sizeof(struct unicast_packet)); + + orig_node = transtable_search(bat_priv, ethhdr->h_dest); + + if (!orig_node) { + if (!is_my_client(bat_priv, ethhdr->h_dest)) + return NET_RX_DROP; + memcpy(unicast_packet->dest, + bat_priv->primary_if->net_dev->dev_addr, + ETH_ALEN); + } else { + memcpy(unicast_packet->dest, orig_node->orig, + ETH_ALEN); + curr_ttvn = (uint8_t) + atomic_read(&orig_node->last_tt_ver_num); + orig_node_free_ref(orig_node); + } + + unicast_packet->ttvn = curr_ttvn; + + bat_dbg(DBG_ROUTES, bat_priv, "HVN mismatch! " + "Rerouting unicast packet (for %pM) to %pM\n", + ethhdr->h_dest, unicast_packet->dest); + } /* packet for me */ if (is_my_mac(unicast_packet->dest)) { interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size); diff --git a/routing.h b/routing.h index 870f298..6f6a5f8 100644 --- a/routing.h +++ b/routing.h @@ -24,12 +24,11 @@
void slide_own_bcast_window(struct hard_iface *hard_iface); void receive_bat_packet(struct ethhdr *ethhdr, - struct batman_packet *batman_packet, - unsigned char *tt_buff, int tt_buff_len, - struct hard_iface *if_incoming); + struct batman_packet *batman_packet, + unsigned char *tt_buff, + struct hard_iface *if_incoming); void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *tt_buff, - int tt_buff_len); + struct neigh_node *neigh_node); int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); @@ -37,6 +36,7 @@ int recv_ucast_frag_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bat_packet(struct sk_buff *skb, struct hard_iface *recv_if); +int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv, struct orig_node *orig_node, struct hard_iface *recv_if); diff --git a/send.c b/send.c index f30d0c6..f85913e 100644 --- a/send.c +++ b/send.c @@ -121,7 +121,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, /* adjust all flags and log packets */ while (aggregated_packet(buff_pos, forw_packet->packet_len, - batman_packet->num_tt)) { + batman_packet->tt_num_changes)) {
/* we might have aggregated direct link packets with an * ordinary base packet */ @@ -136,17 +136,18 @@ static void send_packet_to_if(struct forw_packet *forw_packet, "Forwarding")); bat_dbg(DBG_BATMAN, bat_priv, "%s %spacket (originator %pM, seqno %d, TQ %d, TTL %d," - " IDF %s) on interface %s [%pM]\n", + " IDF %s, ttvn %d) on interface %s [%pM]\n", fwd_str, (packet_num > 0 ? "aggregated " : ""), batman_packet->orig, ntohl(batman_packet->seqno), batman_packet->tq, batman_packet->ttl, (batman_packet->flags & DIRECTLINK ? "on" : "off"), + batman_packet->tt_ver_num, hard_iface->net_dev->name, - hard_iface->net_dev->dev_addr); + hard_iface->net_dev->dev_addr);
buff_pos += sizeof(struct batman_packet) + - (batman_packet->num_tt * ETH_ALEN); + tt_len(batman_packet->tt_num_changes); packet_num++; batman_packet = (struct batman_packet *) (forw_packet->skb->data + buff_pos); @@ -214,26 +215,17 @@ static void send_packet(struct forw_packet *forw_packet) rcu_read_unlock(); }
-static void rebuild_batman_packet(struct bat_priv *bat_priv, - struct hard_iface *hard_iface) +static void realloc_packet_buffer(struct hard_iface *hard_iface, + int new_len) { - int new_len; unsigned char *new_buff; - struct batman_packet *batman_packet;
- new_len = sizeof(struct batman_packet) + - (bat_priv->num_local_tt * ETH_ALEN); new_buff = kmalloc(new_len, GFP_ATOMIC);
/* keep old buffer if kmalloc should fail */ if (new_buff) { memcpy(new_buff, hard_iface->packet_buff, sizeof(struct batman_packet)); - batman_packet = (struct batman_packet *)new_buff; - - batman_packet->num_tt = tt_local_fill_buffer(bat_priv, - new_buff + sizeof(struct batman_packet), - new_len - sizeof(struct batman_packet));
kfree(hard_iface->packet_buff); hard_iface->packet_buff = new_buff; @@ -241,6 +233,45 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, } }
+static void prepare_packet_buffer(struct bat_priv *bat_priv, + struct hard_iface *hard_iface) +{ + int new_len; + struct batman_packet *batman_packet; + + new_len = BAT_PACKET_LEN + + tt_len((uint8_t)atomic_read(&bat_priv->tt_local_changes)); + + /* if we have too many changes for one packet don't send any + * and wait for the tt table request which will be fragmented */ + if (new_len > bat_priv->primary_if->soft_iface->mtu) + new_len = BAT_PACKET_LEN; + + realloc_packet_buffer(hard_iface, new_len); + batman_packet = (struct batman_packet *)hard_iface->packet_buff; + + atomic_set(&bat_priv->tt_crc, tt_local_crc(bat_priv)); + + /* reset the sending counter */ + atomic_set(&bat_priv->tt_ogm_append_cnt, TT_OGM_APPEND_MAX); + + batman_packet->tt_num_changes = tt_changes_fill_buffer(bat_priv, + hard_iface->packet_buff + BAT_PACKET_LEN, + hard_iface->packet_len - BAT_PACKET_LEN); + +} + +static void reset_packet_buffer(struct bat_priv *bat_priv, + struct hard_iface *hard_iface) +{ + struct batman_packet *batman_packet; + + realloc_packet_buffer(hard_iface, BAT_PACKET_LEN); + + batman_packet = (struct batman_packet *)hard_iface->packet_buff; + batman_packet->tt_num_changes = 0; +} + void schedule_own_packet(struct hard_iface *hard_iface) { struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); @@ -266,14 +297,22 @@ void schedule_own_packet(struct hard_iface *hard_iface) if (hard_iface->if_status == IF_TO_BE_ACTIVATED) hard_iface->if_status = IF_ACTIVE;
- /* if local tt has changed and interface is a primary interface */ - if ((atomic_read(&bat_priv->tt_local_changed)) && - (hard_iface == primary_if)) - rebuild_batman_packet(bat_priv, hard_iface); + if (hard_iface == primary_if) { + /* if at least one change happened */ + if (atomic_read(&bat_priv->tt_local_changes) > 0) { + prepare_packet_buffer(bat_priv, hard_iface); + /* Increment the TTVN only once per OGM interval */ + atomic_inc(&bat_priv->tt_ver_num); + } + + /* if the changes have been sent enough times */ + if (!atomic_dec_not_zero(&bat_priv->tt_ogm_append_cnt)) + reset_packet_buffer(bat_priv, hard_iface); + }
/** * NOTE: packet_buff might just have been re-allocated in - * rebuild_batman_packet() + * prepare_packet_buffer() or in reset_packet_buffer() */ batman_packet = (struct batman_packet *)hard_iface->packet_buff;
@@ -281,6 +320,9 @@ void schedule_own_packet(struct hard_iface *hard_iface) batman_packet->seqno = htonl((uint32_t)atomic_read(&hard_iface->seqno));
+ batman_packet->tt_ver_num = atomic_read(&bat_priv->tt_ver_num); + batman_packet->tt_crc = htons((uint16_t)atomic_read(&bat_priv->tt_crc)); + if (vis_server == VIS_TYPE_SERVER_SYNC) batman_packet->flags |= VIS_SERVER; else @@ -309,13 +351,14 @@ void schedule_own_packet(struct hard_iface *hard_iface) void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int tt_buff_len, + uint8_t directlink, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); struct neigh_node *router; unsigned char in_tq, in_ttl, tq_avg = 0; unsigned long send_time; + uint8_t tt_num_changes;
if (batman_packet->ttl <= 1) { bat_dbg(DBG_BATMAN, bat_priv, "ttl exceeded\n"); @@ -326,6 +369,7 @@ void schedule_forward_packet(struct orig_node *orig_node,
in_tq = batman_packet->tq; in_ttl = batman_packet->ttl; + tt_num_changes = batman_packet->tt_num_changes;
batman_packet->ttl--; memcpy(batman_packet->prev_sender, ethhdr->h_source, ETH_ALEN); @@ -358,6 +402,7 @@ void schedule_forward_packet(struct orig_node *orig_node, batman_packet->ttl);
batman_packet->seqno = htonl(batman_packet->seqno); + batman_packet->tt_crc = htons(batman_packet->tt_crc);
/* switch of primaries first hop flag when forwarding */ batman_packet->flags &= ~PRIMARIES_FIRST_HOP; @@ -369,7 +414,8 @@ void schedule_forward_packet(struct orig_node *orig_node, send_time = forward_send_time(); add_bat_packet_to_list(bat_priv, (unsigned char *)batman_packet, - sizeof(struct batman_packet) + tt_buff_len, + sizeof(struct batman_packet) + + tt_len(tt_num_changes), if_incoming, 0, send_time); }
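On the primary interface, schedule_own_packet() now rebuilds the diff buffer and bumps the ttvn at most once per OGM interval (only when local clients actually changed), then keeps attaching the same diff for TT_OGM_APPEND_MAX OGMs before stripping it again; nodes that missed every copy recover through the tt_request/response path. A stand-alone model of that bookkeeping, with a stub state struct instead of bat_priv:

/*
 * Stand-alone sketch (not part of the patch) of the per-OGM bookkeeping done
 * in schedule_own_packet() on the primary interface: when local clients
 * changed, rebuild the diff buffer, bump the ttvn and arm the append counter;
 * once the diff has gone out TT_OGM_APPEND_MAX times, stop attaching it.
 */
#include <stdint.h>
#include <stdio.h>

#define TT_OGM_APPEND_MAX 3     /* as defined in main.h by this patchset */

struct node_state {
        int local_changes;      /* ~ bat_priv->tt_local_changes */
        uint8_t ttvn;           /* ~ bat_priv->tt_ver_num */
        int append_cnt;         /* ~ bat_priv->tt_ogm_append_cnt */
};

/* returns 1 if the diff buffer is attached to this OGM */
static int prepare_ogm(struct node_state *s)
{
        if (s->local_changes > 0) {
                /* prepare_packet_buffer(): serialize the diff, arm counter */
                s->append_cnt = TT_OGM_APPEND_MAX;
                s->ttvn++;                      /* one bump per interval */
                s->local_changes = 0;
        }

        if (s->append_cnt > 0) {
                s->append_cnt--;                /* atomic_dec_not_zero() */
                return 1;                       /* keep repeating the diff */
        }
        return 0;                               /* reset_packet_buffer() */
}

int main(void)
{
        struct node_state s = { .local_changes = 2, .ttvn = 4, .append_cnt = 0 };
        int i;

        for (i = 0; i < 5; i++) {
                int attached = prepare_ogm(&s);

                printf("OGM %d: ttvn=%u diff_attached=%d\n",
                       i, s.ttvn, attached);
        }
        return 0;
}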
diff --git a/send.h b/send.h index 247172d..842f4d1 100644 --- a/send.h +++ b/send.h @@ -29,7 +29,7 @@ void schedule_own_packet(struct hard_iface *hard_iface); void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int tt_buff_len, + uint8_t directlink, struct hard_iface *if_outgoing); int add_bcast_packet_to_list(struct bat_priv *bat_priv, struct sk_buff *skb); void send_outstanding_bat_packet(struct work_struct *work); diff --git a/soft-interface.c b/soft-interface.c index 89a940a..fedb1ed 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -366,7 +366,7 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) /* only modify tt-table if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { tt_local_remove(bat_priv, dev->dev_addr, - "mac address changed"); + "mac address changed"); tt_local_add(dev, addr->sa_data); }
@@ -424,7 +424,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) if ((curr_softif_neigh) && (curr_softif_neigh->vid == vid)) goto dropped;
- /* TODO: check this for locks */ + /* Register the client MAC in the transtable */ tt_local_add(soft_iface, ethhdr->h_source);
if (is_multicast_ether_addr(ethhdr->h_dest)) { @@ -663,7 +663,12 @@ struct net_device *softif_create(char *name)
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1); - atomic_set(&bat_priv->tt_local_changed, 0); + atomic_set(&bat_priv->tt_ver_num, 0); + atomic_set(&bat_priv->tt_local_changes, 0); + atomic_set(&bat_priv->tt_ogm_append_cnt, 0); + + bat_priv->tt_buff = NULL; + bat_priv->tt_buff_len = 0;
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index 25e6939..d55eeb5 100644 --- a/translation-table.c +++ b/translation-table.c @@ -23,13 +23,17 @@ #include "translation-table.h" #include "soft-interface.h" #include "hard-interface.h" +#include "send.h" #include "hash.h" #include "originator.h" +#include "routing.h"
-static void tt_local_purge(struct work_struct *work); -static void _tt_global_del_orig(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - char *message); +#include <linux/crc16.h> + +static void _tt_global_del(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, + char *message); +static void tt_purge(struct work_struct *work);
/* returns 1 if they are the same mac addr */ static int compare_ltt(struct hlist_node *node, void *data2) @@ -47,14 +51,15 @@ static int compare_gtt(struct hlist_node *node, void *data2) return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
-static void tt_local_start_timer(struct bat_priv *bat_priv) +static void tt_start_timer(struct bat_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->tt_work, tt_local_purge); - queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, 10 * HZ); + INIT_DELAYED_WORK(&bat_priv->tt_work, tt_purge); + queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, + msecs_to_jiffies(5000)); }
static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, - void *data) + void *data) { struct hashtable_t *hash = bat_priv->tt_local_hash; struct hlist_head *head; @@ -82,7 +87,7 @@ static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, }
static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, - void *data) + void *data) { struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_head *head; @@ -110,7 +115,42 @@ static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, return tt_global_entry_tmp; }
-int tt_local_init(struct bat_priv *bat_priv) +static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) +{ + unsigned long deadline; + deadline = starting_time + msecs_to_jiffies(timeout); + + return time_after(jiffies, deadline); +} + +static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr) +{ + struct tt_change_node *tt_change_node; + + tt_change_node = (struct tt_change_node *) + kmalloc(sizeof(struct tt_change_node), GFP_ATOMIC); + + if (!tt_change_node) + return; + + tt_change_node->change.op = op; + memcpy(tt_change_node->change.addr, addr, ETH_ALEN); + + spin_lock_bh(&bat_priv->tt_changes_list_lock); + /* track the change in the OGMinterval list */ + list_add_tail(&tt_change_node->list, &bat_priv->tt_changes_list); + atomic_inc(&bat_priv->tt_local_changes); + spin_unlock_bh(&bat_priv->tt_changes_list_lock); + + atomic_set(&bat_priv->tt_ogm_append_cnt, 0); +} + +int tt_len(int changes_num) +{ + return changes_num * sizeof(struct tt_change); +} + +static int tt_local_init(struct bat_priv *bat_priv) { if (bat_priv->tt_local_hash) return 1; @@ -120,9 +160,6 @@ int tt_local_init(struct bat_priv *bat_priv) if (!bat_priv->tt_local_hash) return 0;
- atomic_set(&bat_priv->tt_local_changed, 0); - tt_local_start_timer(bat_priv); - return 1; }
@@ -131,40 +168,24 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) struct bat_priv *bat_priv = netdev_priv(soft_iface); struct tt_local_entry *tt_local_entry; struct tt_global_entry *tt_global_entry; - int required_bytes;
spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->tt_lhash_lock);
if (tt_local_entry) { tt_local_entry->last_seen = jiffies; - return; + goto unlock; }
- /* only announce as many hosts as possible in the batman-packet and - space in batman_packet->num_tt That also should give a limit to - MAC-flooding. */ - required_bytes = (bat_priv->num_local_tt + 1) * ETH_ALEN; - required_bytes += BAT_PACKET_LEN; - - if ((required_bytes > ETH_DATA_LEN) || - (atomic_read(&bat_priv->aggregated_ogms) && - required_bytes > MAX_AGGREGATION_BYTES) || - (bat_priv->num_local_tt + 1 > 255)) { - bat_dbg(DBG_ROUTES, bat_priv, - "Can't add new local tt entry (%pM): " - "number of local tt entries exceeds packet size\n", - addr); - return; - } - - bat_dbg(DBG_ROUTES, bat_priv, - "Creating new local tt entry: %pM\n", addr); - tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); if (!tt_local_entry) - return; + goto unlock; + + tt_local_event(bat_priv, TT_ADD, addr); + + bat_dbg(DBG_ROUTES, bat_priv, + "Creating new local tt entry: %pM (ttvn: %d)\n", addr, + (uint8_t)atomic_read(&bat_priv->tt_ver_num));
memcpy(tt_local_entry->addr, addr, ETH_ALEN); tt_local_entry->last_seen = jiffies; @@ -175,13 +196,9 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) else tt_local_entry->never_purge = 0;
- spin_lock_bh(&bat_priv->tt_lhash_lock); - hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry, &tt_local_entry->hash_entry); - bat_priv->num_local_tt++; - atomic_set(&bat_priv->tt_local_changed, 1); - + atomic_inc(&bat_priv->num_local_tt); spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ @@ -190,46 +207,60 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (tt_global_entry) - _tt_global_del_orig(bat_priv, tt_global_entry, - "local tt received"); + _tt_global_del(bat_priv, tt_global_entry, + "local tt received");
spin_unlock_bh(&bat_priv->tt_ghash_lock); + +unlock: + spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-int tt_local_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len) +int tt_changes_fill_buffer(struct bat_priv *bat_priv, + unsigned char *buff, int buff_len) { - struct hashtable_t *hash = bat_priv->tt_local_hash; - struct tt_local_entry *tt_local_entry; - struct hlist_node *node; - struct hlist_head *head; - int i, count = 0; + int count = 0, tot_changes = 0; + struct tt_change_node *entry, *safe;
- spin_lock_bh(&bat_priv->tt_lhash_lock); + if (buff_len > 0) + tot_changes = buff_len / tt_len(1);
- for (i = 0; i < hash->size; i++) { - head = &hash->table[i]; - - rcu_read_lock(); - hlist_for_each_entry_rcu(tt_local_entry, node, - head, hash_entry) { - if (buff_len < (count + 1) * ETH_ALEN) - break; - - memcpy(buff + (count * ETH_ALEN), tt_local_entry->addr, - ETH_ALEN); + spin_lock_bh(&bat_priv->tt_changes_list_lock); + atomic_set(&bat_priv->tt_local_changes, 0);
+ list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, + list) { + if (count < tot_changes) { + memcpy(buff + tt_len(count), + &entry->change, sizeof(struct tt_change)); count++; } - rcu_read_unlock(); + list_del(&entry->list); + kfree(entry); } + spin_unlock_bh(&bat_priv->tt_changes_list_lock);
- /* if we did not get all new local tts see you next time ;-) */ - if (count == bat_priv->num_local_tt) - atomic_set(&bat_priv->tt_local_changed, 0); + /* Keep the buffer for possible tt_request */ + spin_lock_bh(&bat_priv->tt_buff_lock); + kfree(bat_priv->tt_buff); + bat_priv->tt_buff_len = 0; + bat_priv->tt_buff = NULL; + /* We check whether this new OGM has no changes due to size + * problems */ + if (buff_len > 0) { + /** + * if kmalloc() fails we will reply with the full table + * instead of providing the diff + */ + bat_priv->tt_buff = kmalloc(buff_len, GFP_ATOMIC); + if (bat_priv->tt_buff) { + memcpy(bat_priv->tt_buff, buff, buff_len); + bat_priv->tt_buff_len = buff_len; + } + } + spin_unlock_bh(&bat_priv->tt_buff_lock);
- spin_unlock_bh(&bat_priv->tt_lhash_lock); - return count; + return tot_changes; }
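tt_changes_fill_buffer() drains the change list accumulated by tt_local_event() into the flat array of struct tt_change that rides on the OGM, and keeps a private copy of that buffer for answering later tt_requests. A compressed user-space model of the queue-and-drain cycle, using a plain array instead of the kernel list API and an invented queue limit:

/*
 * Stand-alone sketch (not part of the patch) of the local change queue:
 * tt_local_event() records one tt_change per added/deleted client, and
 * tt_changes_fill_buffer() later serializes as many of them as fit into
 * the buffer appended to the OGM.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ETH_ALEN 6
#define TT_ADD   0
#define TT_DEL   1

struct tt_change {
        uint8_t op;             /* TT_ADD or TT_DEL */
        uint8_t addr[ETH_ALEN];
};

#define MAX_PENDING 16          /* the kernel list has no such limit */
static struct tt_change pending[MAX_PENDING];
static int num_pending;

static void tt_local_event(uint8_t op, const uint8_t *addr)
{
        if (num_pending >= MAX_PENDING)
                return;
        pending[num_pending].op = op;
        memcpy(pending[num_pending].addr, addr, ETH_ALEN);
        num_pending++;
}

/* drain the queue into buff; returns how many changes were written */
static int tt_changes_fill_buffer(uint8_t *buff, int buff_len)
{
        int room = buff_len / (int)sizeof(struct tt_change);
        int count = 0, i;

        for (i = 0; i < num_pending; i++) {
                if (count >= room)
                        continue;       /* dropped, as in the patch; peers
                                         * recover via tt_request/response */
                memcpy(buff + count * sizeof(struct tt_change),
                       &pending[i], sizeof(struct tt_change));
                count++;
        }
        num_pending = 0;
        return count;
}

int main(void)
{
        const uint8_t a[ETH_ALEN] = { 0x02, 1, 2, 3, 4, 5 };
        const uint8_t b[ETH_ALEN] = { 0x02, 6, 7, 8, 9, 10 };
        uint8_t ogm_buff[4 * sizeof(struct tt_change)];

        tt_local_event(TT_ADD, a);
        tt_local_event(TT_ADD, b);
        tt_local_event(TT_DEL, a);

        printf("serialized %d changes\n",
               tt_changes_fill_buffer(ogm_buff, sizeof(ogm_buff)));
        return 0;
}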
int tt_local_seq_print_text(struct seq_file *seq, void *offset) @@ -261,8 +292,8 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) }
seq_printf(seq, "Locally retrieved addresses (from %s) " - "announced via TT:\n", - net_dev->name); + "announced via TT (TTVN: %u):\n", + net_dev->name, (uint8_t)atomic_read(&bat_priv->tt_ver_num));
spin_lock_bh(&bat_priv->tt_lhash_lock);
@@ -309,54 +340,50 @@ out: return ret; }
-static void _tt_local_del(struct hlist_node *node, void *arg) +static void tt_local_entry_free(struct hlist_node *node, void *arg) { struct bat_priv *bat_priv = (struct bat_priv *)arg; void *data = container_of(node, struct tt_local_entry, hash_entry);
kfree(data); - bat_priv->num_local_tt--; - atomic_set(&bat_priv->tt_local_changed, 1); + atomic_dec(&bat_priv->num_local_tt); }
static void tt_local_del(struct bat_priv *bat_priv, - struct tt_local_entry *tt_local_entry, - char *message) + struct tt_local_entry *tt_local_entry, + char *message) { bat_dbg(DBG_ROUTES, bat_priv, "Deleting local tt entry (%pM): %s\n", tt_local_entry->addr, message);
+ atomic_dec(&bat_priv->num_local_tt); + hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry->addr); - _tt_local_del(&tt_local_entry->hash_entry, bat_priv); + + tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); }
-void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message) +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) { struct tt_local_entry *tt_local_entry;
spin_lock_bh(&bat_priv->tt_lhash_lock); - tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (tt_local_entry) + if (tt_local_entry) { + tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); tt_local_del(bat_priv, tt_local_entry, message); - + } spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-static void tt_local_purge(struct work_struct *work) +static void tt_local_purge(struct bat_priv *bat_priv) { - struct delayed_work *delayed_work = - container_of(work, struct delayed_work, work); - struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, tt_work); struct hashtable_t *hash = bat_priv->tt_local_hash; struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; - unsigned long timeout; int i;
spin_lock_bh(&bat_priv->tt_lhash_lock); @@ -369,32 +396,52 @@ static void tt_local_purge(struct work_struct *work) if (tt_local_entry->never_purge) continue;
- timeout = tt_local_entry->last_seen; - timeout += TT_LOCAL_TIMEOUT * HZ; - - if (time_before(jiffies, timeout)) + if (!is_out_of_time(tt_local_entry->last_seen, + TT_LOCAL_TIMEOUT * 1000)) continue;
+ tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); tt_local_del(bat_priv, tt_local_entry, - "address timed out"); + "address timed out"); } }
spin_unlock_bh(&bat_priv->tt_lhash_lock); - tt_local_start_timer(bat_priv); }
-void tt_local_free(struct bat_priv *bat_priv) +static void tt_local_table_free(struct bat_priv *bat_priv) { + struct hashtable_t *hash; + int i; + spinlock_t *list_lock; + struct hlist_head *head; + struct hlist_node *node, *node_tmp; + struct tt_local_entry *tt_local_entry; + if (!bat_priv->tt_local_hash) return;
- cancel_delayed_work_sync(&bat_priv->tt_work); - hash_delete(bat_priv->tt_local_hash, _tt_local_del, bat_priv); + hash = bat_priv->tt_local_hash; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + list_lock = &hash->list_locks[i]; + + spin_lock_bh(list_lock); + hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, + head, hash_entry) { + hlist_del_rcu(node); + kfree(tt_local_entry); + } + spin_unlock_bh(list_lock); + } + + hash_destroy(hash); + bat_priv->tt_local_hash = NULL; }
-int tt_global_init(struct bat_priv *bat_priv) +static int tt_global_init(struct bat_priv *bat_priv) { if (bat_priv->tt_global_hash) return 1; @@ -407,74 +454,79 @@ int tt_global_init(struct bat_priv *bat_priv) return 1; }
-void tt_global_add_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len) +static void tt_changes_list_free(struct bat_priv *bat_priv) +{ + struct tt_change_node *entry, *safe; + + spin_lock_bh(&bat_priv->tt_changes_list_lock); + + list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, + list) { + list_del(&entry->list); + kfree(entry); + } + + atomic_set(&bat_priv->tt_local_changes, 0); + spin_unlock_bh(&bat_priv->tt_changes_list_lock); +} + +/* caller must hold orig_node recount */ +int tt_global_add(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *tt_addr, uint8_t ttvn) { struct tt_global_entry *tt_global_entry; struct tt_local_entry *tt_local_entry; - int tt_buff_count = 0; - unsigned char *tt_ptr; - - while ((tt_buff_count + 1) * ETH_ALEN <= tt_buff_len) { - spin_lock_bh(&bat_priv->tt_ghash_lock); - - tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); - tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr); - - if (!tt_global_entry) { - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - tt_global_entry = - kmalloc(sizeof(struct tt_global_entry), - GFP_ATOMIC); - - if (!tt_global_entry) - break; - - memcpy(tt_global_entry->addr, tt_ptr, ETH_ALEN); - - bat_dbg(DBG_ROUTES, bat_priv, - "Creating new global tt entry: " - "%pM (via %pM)\n", - tt_global_entry->addr, orig_node->orig); - - spin_lock_bh(&bat_priv->tt_ghash_lock); - hash_add(bat_priv->tt_global_hash, compare_gtt, - choose_orig, tt_global_entry, - &tt_global_entry->hash_entry); - - } - + struct orig_node *orig_node_tmp; + + spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, tt_addr); + + if (!tt_global_entry) { + tt_global_entry = + kmalloc(sizeof(struct tt_global_entry), + GFP_ATOMIC); + if (!tt_global_entry) + goto unlock; + memcpy(tt_global_entry->addr, tt_addr, ETH_ALEN); + /* Assign the new orig_node */ + atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - /* remove address from local hash if present */ - spin_lock_bh(&bat_priv->tt_lhash_lock); - - tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); - tt_local_entry = tt_local_hash_find(bat_priv, tt_ptr); - - if (tt_local_entry) - tt_local_del(bat_priv, tt_local_entry, - "global tt received"); - - spin_unlock_bh(&bat_priv->tt_lhash_lock); - - tt_buff_count++; - } - - /* initialize, and overwrite if malloc succeeds */ - orig_node->tt_buff = NULL; - orig_node->tt_buff_len = 0; - - if (tt_buff_len > 0) { - orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); - if (orig_node->tt_buff) { - memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); - orig_node->tt_buff_len = tt_buff_len; + tt_global_entry->ttvn = ttvn; + atomic_inc(&orig_node->tt_size); + hash_add(bat_priv->tt_global_hash, compare_gtt, + choose_orig, tt_global_entry, + &tt_global_entry->hash_entry); + } else { + if (tt_global_entry->orig_node != orig_node) { + atomic_dec(&tt_global_entry->orig_node->tt_size); + orig_node_tmp = tt_global_entry->orig_node; + atomic_inc(&orig_node->refcount); + tt_global_entry->orig_node = orig_node; + tt_global_entry->ttvn = ttvn; + orig_node_free_ref(orig_node_tmp); + atomic_inc(&orig_node->tt_size); } } + + spin_unlock_bh(&bat_priv->tt_ghash_lock); + + bat_dbg(DBG_ROUTES, bat_priv, + "Creating new global tt entry: %pM (via %pM)\n", + tt_global_entry->addr, orig_node->orig); + + /* remove address from local hash if present */ + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = 
tt_local_hash_find(bat_priv, tt_addr); + + if (tt_local_entry) + tt_local_del(bat_priv, tt_local_entry, + "global tt received"); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + return 1; +unlock: + spin_unlock_bh(&bat_priv->tt_ghash_lock); + return 0; }
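When a client that is already in the global table gets announced by a different originator (the roaming case), tt_global_add() above does not delete and re-create the entry: it takes a reference on the new orig_node first, swaps the pointer and the ttvn, fixes up the per-originator tt_size counters, and only then drops the reference on the previous owner. A reduced sketch of that swap with a plain reference counter and stub types:

/*
 * Stand-alone sketch (not part of the patch) of the ownership swap done by
 * tt_global_add() when a known client roams to a new originator: pin the new
 * owner before releasing the old one, so the entry never points at an
 * originator whose last reference has already been dropped.
 */
#include <stdio.h>

struct orig_node_stub {
        const char *name;
        int refcount;
        int tt_size;            /* number of global entries owned */
};

struct tt_global_entry_stub {
        struct orig_node_stub *orig_node;
        int ttvn;
};

static void orig_get(struct orig_node_stub *o) { o->refcount++; }
static void orig_put(struct orig_node_stub *o) { o->refcount--; }

static void tt_global_move(struct tt_global_entry_stub *e,
                           struct orig_node_stub *new_owner, int ttvn)
{
        struct orig_node_stub *old_owner = e->orig_node;

        if (old_owner == new_owner)
                return;

        old_owner->tt_size--;
        orig_get(new_owner);            /* pin the new owner first */
        e->orig_node = new_owner;
        e->ttvn = ttvn;
        new_owner->tt_size++;
        orig_put(old_owner);            /* now the old one may go away */
}

int main(void)
{
        struct orig_node_stub a = { "A", 1, 1 }, b = { "B", 1, 0 };
        struct tt_global_entry_stub client = { &a, 3 };

        tt_global_move(&client, &b, 4);
        printf("client now via %s (ttvn %d), A.tt_size=%d B.tt_size=%d\n",
               client.orig_node->name, client.ttvn, a.tt_size, b.tt_size);
        return 0;
}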
int tt_global_seq_print_text(struct seq_file *seq, void *offset) @@ -507,17 +559,20 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset)
seq_printf(seq, "Globally announced TTs received via the mesh %s\n", net_dev->name); + seq_printf(seq, " %-13s %s %-15s %s\n", + "Client", "(TTVN)", "Originator", "(Curr TTVN)");
spin_lock_bh(&bat_priv->tt_ghash_lock);
buf_size = 1; - /* Estimate length for: " * xx:xx:xx:xx:xx:xx via xx:xx:xx:xx:xx:xx\n"*/ + /* Estimate length for: " * xx:xx:xx:xx:xx:xx (ttvn) via + * xx:xx:xx:xx:xx:xx (cur_ttvn)\n"*/ for (i = 0; i < hash->size; i++) { head = &hash->table[i];
rcu_read_lock(); __hlist_for_each_rcu(node, head) - buf_size += 43; + buf_size += 59; rcu_read_unlock(); }
@@ -536,10 +591,14 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) rcu_read_lock(); hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) { - pos += snprintf(buff + pos, 44, - " * %pM via %pM\n", + pos += snprintf(buff + pos, 61, + " * %pM (%3u) via %pM (%3u)\n", tt_global_entry->addr, - tt_global_entry->orig_node->orig); + tt_global_entry->ttvn, + tt_global_entry->orig_node->orig, + (uint8_t) atomic_read( + &tt_global_entry->orig_node-> + last_tt_ver_num)); } rcu_read_unlock(); } @@ -554,64 +613,80 @@ out: return ret; }
-static void _tt_global_del_orig(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - char *message) +static void _tt_global_del(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, + char *message) { + if (!tt_global_entry) + return; + bat_dbg(DBG_ROUTES, bat_priv, "Deleting global tt entry %pM (via %pM): %s\n", tt_global_entry->addr, tt_global_entry->orig_node->orig, message);
+ atomic_dec(&tt_global_entry->orig_node->tt_size); hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry->addr); kfree(tt_global_entry); }
+void tt_global_del(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *addr, char *message) +{ + struct tt_global_entry *tt_global_entry; + + spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, addr); + + if (tt_global_entry && tt_global_entry->orig_node == orig_node) { + atomic_dec(&orig_node->tt_size); + _tt_global_del(bat_priv, tt_global_entry, message); + } + spin_unlock_bh(&bat_priv->tt_ghash_lock); +} + void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, char *message) + struct orig_node *orig_node, char *message) { struct tt_global_entry *tt_global_entry; - int tt_buff_count = 0; - unsigned char *tt_ptr; + int i; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct hlist_node *node, *safe; + struct hlist_head *head;
- if (orig_node->tt_buff_len == 0) + if (!bat_priv->tt_global_hash) return;
spin_lock_bh(&bat_priv->tt_ghash_lock); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i];
- while ((tt_buff_count + 1) * ETH_ALEN <= orig_node->tt_buff_len) { - tt_ptr = orig_node->tt_buff + (tt_buff_count * ETH_ALEN); - tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr); - - if ((tt_global_entry) && - (tt_global_entry->orig_node == orig_node)) - _tt_global_del_orig(bat_priv, tt_global_entry, - message); - - tt_buff_count++; + hlist_for_each_entry_safe(tt_global_entry, node, safe, + head, hash_entry) { + if (tt_global_entry->orig_node == orig_node) + _tt_global_del(bat_priv, tt_global_entry, + message); + } } + atomic_set(&orig_node->tt_size, 0);
spin_unlock_bh(&bat_priv->tt_ghash_lock); - - orig_node->tt_buff_len = 0; - kfree(orig_node->tt_buff); - orig_node->tt_buff = NULL; }
-static void tt_global_del(struct hlist_node *node, void *arg) +static void tt_global_entry_free(struct hlist_node *node, void *arg) { void *data = container_of(node, struct tt_global_entry, hash_entry); - kfree(data); }
-void tt_global_free(struct bat_priv *bat_priv) +static void tt_global_table_free(struct bat_priv *bat_priv) { if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->tt_global_hash, tt_global_del, NULL); + hash_delete(bat_priv->tt_global_hash, tt_global_entry_free, NULL); bat_priv->tt_global_hash = NULL; }
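The tt_crc advertised in the OGM is computed below by CRC16-ing each client address on its own and XOR-folding the per-client values, which makes the checksum independent of the hash bucket iteration order. A stand-alone sketch of the same computation; crc16_update() here is a bitwise equivalent of the crc16_byte() provided by the kernel's crc16 module (poly 0xA001, LSB first, initial value 0).

/*
 * Stand-alone sketch (not part of the patch) of the table checksum used for
 * tt_crc: each 6-byte client address is CRC16'd on its own and the per-client
 * CRCs are XOR-folded, so insertion order does not matter.
 */
#include <stdint.h>
#include <stdio.h>

#define ETH_ALEN 6

static uint16_t crc16_update(uint16_t crc, uint8_t data)
{
        int i;

        crc ^= data;
        for (i = 0; i < 8; i++)
                crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
        return crc;
}

static uint16_t tt_table_crc(const uint8_t (*clients)[ETH_ALEN], int num)
{
        uint16_t total = 0, total_one;
        int i, j;

        for (i = 0; i < num; i++) {
                total_one = 0;
                for (j = 0; j < ETH_ALEN; j++)
                        total_one = crc16_update(total_one, clients[i][j]);
                total ^= total_one;     /* order-independent fold */
        }
        return total;
}

int main(void)
{
        const uint8_t table_a[2][ETH_ALEN] = {
                { 0x02, 0x11, 0x22, 0x33, 0x44, 0x55 },
                { 0x02, 0xaa, 0xbb, 0xcc, 0xdd, 0xee },
        };
        const uint8_t table_b[2][ETH_ALEN] = {  /* same clients, other order */
                { 0x02, 0xaa, 0xbb, 0xcc, 0xdd, 0xee },
                { 0x02, 0x11, 0x22, 0x33, 0x44, 0x55 },
        };

        printf("crc a=0x%04x b=0x%04x (must match)\n",
               tt_table_crc(table_a, 2), tt_table_crc(table_b, 2));
        return 0;
}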
@@ -635,3 +710,699 @@ out: spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; } + +/* Calculates the checksum of the local table of a given orig_node */ +uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node) +{ + uint16_t total = 0, total_one; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct tt_global_entry *tt_global_entry; + struct hlist_node *node; + struct hlist_head *head; + int i, j; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_global_entry, node, + head, hash_entry) { + if (compare_eth(tt_global_entry->orig_node, + orig_node)) { + total_one = 0; + for (j = 0; j < ETH_ALEN; j++) + total_one = crc16_byte(total_one, + tt_global_entry->addr[j]); + total ^= total_one; + } + } + rcu_read_unlock(); + } + + return total; +} + +/* Calculates the checksum of the local table */ +uint16_t tt_local_crc(struct bat_priv *bat_priv) +{ + uint16_t total = 0, total_one; + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; + struct hlist_node *node; + struct hlist_head *head; + int i, j; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_local_entry, node, + head, hash_entry) { + total_one = 0; + for (j = 0; j < ETH_ALEN; j++) + total_one = crc16_byte(total_one, + tt_local_entry->addr[j]); + total ^= total_one; + } + + rcu_read_unlock(); + } + + return total; +} + +static void tt_req_list_free(struct bat_priv *bat_priv) +{ + struct tt_req_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + list_del(&node->list); + kfree(node); + } + + spin_unlock_bh(&bat_priv->tt_req_list_lock); +} + +void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes) +{ + uint16_t tt_buff_len = tt_len(tt_num_changes); + + /* Replace the old buffer only if I received something in the + * last OGM (the OGM could carry no changes) */ + spin_lock_bh(&orig_node->tt_buff_lock); + if (tt_buff_len > 0) { + kfree(orig_node->tt_buff); + orig_node->tt_buff_len = 0; + orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); + if (orig_node->tt_buff) { + memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); + orig_node->tt_buff_len = tt_buff_len; + } + } + spin_unlock_bh(&orig_node->tt_buff_lock); +} + +static void tt_req_purge(struct bat_priv *bat_priv) +{ + struct tt_req_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + if (is_out_of_time(node->issued_at, + TT_REQUEST_TIMEOUT * 1000)) { + list_del(&node->list); + kfree(node); + } + } + spin_unlock_bh(&bat_priv->tt_req_list_lock); +} + +int send_tt_request(struct bat_priv *bat_priv, struct orig_node *dst_orig_node, + uint8_t ttvn, uint16_t tt_crc, bool full_table) +{ + struct sk_buff *skb; + struct tt_query_packet *tt_request; + struct neigh_node *neigh_node = NULL; + struct hard_iface *primary_if; + struct tt_req_node *tt_req_node = NULL; + int ret = 0; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + /* The new tt_req will be issued only if I'm not waiting for a + * reply from the same orig_node yet */ + list_for_each_entry(tt_req_node, &bat_priv->tt_req_list, list) { + if (compare_eth(tt_req_node, dst_orig_node) && + 
!is_out_of_time(tt_req_node->issued_at, + TT_REQUEST_TIMEOUT * 1000)) + goto unlock_tt; + } + + tt_req_node = kmalloc(sizeof(struct tt_req_node), GFP_ATOMIC); + if (!tt_req_node) { + ret = 1; + goto unlock_tt; + } + + memcpy(tt_req_node->addr, dst_orig_node->orig, ETH_ALEN); + tt_req_node->issued_at = jiffies; + + list_add(&tt_req_node->list, &bat_priv->tt_req_list); + spin_unlock_bh(&bat_priv->tt_req_list_lock); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + + tt_request = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet)); + + tt_request->packet_type = BAT_TT_QUERY; + tt_request->version = COMPAT_VERSION; + memcpy(tt_request->src, primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(tt_request->dst, dst_orig_node->orig, ETH_ALEN); + tt_request->ttl = TTL; + tt_request->ttvn = ttvn; + tt_request->tt_data = tt_crc; + tt_request->flags = TT_REQUEST; + + /* Request the full table if needed */ + if (full_table) + tt_request->flags |= TT_FULL_TABLE; + + neigh_node = find_router(bat_priv, dst_orig_node, NULL); + if (!neigh_node) + goto out; + + if (neigh_node->if_incoming->if_status != IF_ACTIVE) + goto out; + + bat_dbg(DBG_ROUTES, bat_priv, "Sending TT_REQUEST to %pM via %pM " + "[%c]\n", dst_orig_node->orig, neigh_node->addr, + (full_table ? 'F' : '.')); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = 0; + +out: + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (ret == 1) { + kfree_skb(skb); + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_del(&tt_req_node->list); + spin_unlock_bh(&bat_priv->tt_req_list_lock); + kfree(tt_req_node); + } + return ret; +unlock_tt: + spin_unlock_bh(&bat_priv->tt_req_list_lock); + return ret; +} + +static int send_other_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + struct orig_node *req_dst_orig_node = NULL, *res_dst_orig_node = NULL; + struct neigh_node *neigh_node = NULL; + struct hard_iface *primary_if = NULL; + struct tt_global_entry *tt_global_entry; + struct hlist_node *node; + struct hlist_head *head; + struct hashtable_t *hash; + uint8_t orig_ttvn, req_ttvn; + int i, ret = NET_RX_DROP; + unsigned char *tt_buff; + bool full_table; + uint16_t tt_len, tt_tot, tt_count; + struct sk_buff *skb = NULL; + struct tt_query_packet *tt_response; + + bat_dbg(DBG_ROUTES, bat_priv, + "Received TT_REQUEST from %pM for " + "ttvn: %u (%pM) [%c]\n", tt_request->src, + tt_request->ttvn, tt_request->dst, + (tt_request->flags & TT_FULL_TABLE ? 'F' : '.')); + + /* Let's get the orig node of the REAL destination */ + req_dst_orig_node = get_orig_node(bat_priv, tt_request->dst); + if (!req_dst_orig_node) + goto out; + + /* I don't have any info about this node yet! 
*/ + if (!req_dst_orig_node->tt_crc) + goto out; + + res_dst_orig_node = get_orig_node(bat_priv, tt_request->src); + if (!res_dst_orig_node) + goto out; + + neigh_node = find_router(bat_priv, res_dst_orig_node, NULL); + if (!neigh_node) + goto out; + + if (neigh_node->if_incoming->if_status != IF_ACTIVE) + goto out; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + orig_ttvn = (uint8_t)atomic_read(&req_dst_orig_node->last_tt_ver_num); + req_ttvn = tt_request->ttvn; + + /* I have not the requested data */ + if (orig_ttvn != req_ttvn || + tt_request->tt_data != req_dst_orig_node->tt_crc) + goto out; + + /* If it has explicitly been requested the full table */ + if (tt_request->flags & TT_FULL_TABLE || + !req_dst_orig_node->tt_buff) + full_table = true; + else + full_table = false; + + /* In this version, fragmentation is not implemented, then + * I'll send only one packet with as much TT entries as I can */ + if (!full_table) { + spin_lock_bh(&req_dst_orig_node->tt_buff_lock); + tt_len = req_dst_orig_node->tt_buff_len; + tt_tot = tt_len / sizeof(struct tt_change); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto unlock; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = req_ttvn; + + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Copy the last orig_node's OGM buffer */ + memcpy(tt_buff, req_dst_orig_node->tt_buff, + req_dst_orig_node->tt_buff_len); + + spin_unlock_bh(&req_dst_orig_node->tt_buff_lock); + } else { + tt_len = (uint16_t)atomic_read(&req_dst_orig_node->tt_size) * + ETH_ALEN; + if (sizeof(struct tt_query_packet) + tt_len > + primary_if->soft_iface->mtu) { + tt_len = primary_if->soft_iface->mtu - + sizeof(struct tt_query_packet); + tt_len -= tt_len % ETH_ALEN; + } + tt_tot = tt_len / ETH_ALEN; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = (uint8_t) + atomic_read(&req_dst_orig_node->last_tt_ver_num); + + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Fill the packet with the orig_node's local table */ + hash = bat_priv->tt_global_hash; + tt_count = 0; + rcu_read_lock(); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + hlist_for_each_entry_rcu(tt_global_entry, node, + head, hash_entry) { + if (tt_count == tt_tot) + break; + if (tt_global_entry->orig_node == + req_dst_orig_node) { + memcpy(tt_buff + tt_count * ETH_ALEN, + tt_global_entry->addr, + ETH_ALEN); + tt_count++; + } + } + } + rcu_read_unlock(); + } + + tt_response->packet_type = BAT_TT_QUERY; + tt_response->version = COMPAT_VERSION; + memcpy(tt_response->src, req_dst_orig_node->orig, ETH_ALEN); + memcpy(tt_response->dst, tt_request->src, ETH_ALEN); + tt_response->tt_data = htons(tt_tot); + tt_response->flags = TT_RESPONSE; + + if (full_table) + tt_response->flags |= TT_FULL_TABLE; + + bat_dbg(DBG_ROUTES, bat_priv, + "Sending TT_RESPONSE %pM via %pM for %pM (ttvn: %u)\n", + res_dst_orig_node->orig, neigh_node->addr, + req_dst_orig_node->orig, req_ttvn); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = NET_RX_SUCCESS; + goto out; + +unlock: + spin_unlock_bh(&req_dst_orig_node->tt_buff_lock); + +out: + if (res_dst_orig_node) + orig_node_free_ref(res_dst_orig_node); + if 
(req_dst_orig_node) + orig_node_free_ref(req_dst_orig_node); + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (ret == NET_RX_DROP) + kfree_skb(skb); + return ret; + +} +static int send_my_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + struct orig_node *orig_node = NULL; + struct neigh_node *neigh_node = NULL; + struct tt_local_entry *tt_local_entry; + struct hard_iface *primary_if = NULL; + struct hlist_node *node; + struct hlist_head *head; + struct hashtable_t *hash; + uint8_t my_ttvn, req_ttvn; + int i, ret = NET_RX_DROP; + unsigned char *tt_buff; + bool full_table; + uint16_t tt_len, tt_tot, tt_count; + struct sk_buff *skb = NULL; + struct tt_query_packet *tt_response; + + bat_dbg(DBG_ROUTES, bat_priv, + "Received TT_REQUEST from %pM for " + "ttvn: %u (me) [%c]\n", tt_request->src, + tt_request->ttvn, + (tt_request->flags & TT_FULL_TABLE ? 'F' : '.')); + + + my_ttvn = (uint8_t)atomic_read(&bat_priv->tt_ver_num); + req_ttvn = tt_request->ttvn; + + orig_node = get_orig_node(bat_priv, tt_request->src); + if (!orig_node) + goto out; + + neigh_node = find_router(bat_priv, orig_node, NULL); + if (!neigh_node) + goto out; + + if (neigh_node->if_incoming->if_status != IF_ACTIVE) + goto out; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + /* If the full table has been explicitly requested or the gap + * is too big send the whole local translation table */ + if (tt_request->flags & TT_FULL_TABLE || my_ttvn != req_ttvn || + !bat_priv->tt_buff) + full_table = true; + else + full_table = false; + + /* In this version, fragmentation is not implemented, then + * I'll send only one packet with as much TT entries as I can */ + if (!full_table) { + spin_lock_bh(&bat_priv->tt_buff_lock); + tt_len = bat_priv->tt_buff_len; + tt_tot = tt_len / sizeof(struct tt_change); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto unlock; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = req_ttvn; + + tt_buff = skb->data + sizeof(struct tt_query_packet); + memcpy(tt_buff, bat_priv->tt_buff, + bat_priv->tt_buff_len); + spin_unlock_bh(&bat_priv->tt_buff_lock); + } else { + tt_len = (uint16_t)atomic_read(&bat_priv->num_local_tt) * + ETH_ALEN; + if (sizeof(struct tt_query_packet) + tt_len > + bat_priv->primary_if->soft_iface->mtu) { + tt_len = bat_priv->primary_if->soft_iface->mtu - + sizeof(struct tt_query_packet); + tt_len -= tt_len % ETH_ALEN; + } + tt_tot = tt_len / ETH_ALEN; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Fill the packet with the local table */ + tt_response->ttvn = + (uint8_t)atomic_read(&bat_priv->tt_ver_num); + + hash = bat_priv->tt_local_hash; + tt_count = 0; + rcu_read_lock(); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + hlist_for_each_entry_rcu(tt_local_entry, node, + head, hash_entry) { + if (tt_count == tt_tot) + break; + memcpy(tt_buff + tt_count * ETH_ALEN, + tt_local_entry->addr, + ETH_ALEN); + tt_count++; + } + } + rcu_read_unlock(); + } + + tt_response->packet_type = BAT_TT_QUERY; + tt_response->version = COMPAT_VERSION; + memcpy(tt_response->src, 
primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(tt_response->dst, tt_request->src, ETH_ALEN); + tt_response->tt_data = htons(tt_tot); + tt_response->flags = TT_RESPONSE; + + if (full_table) + tt_response->flags |= TT_FULL_TABLE; + + bat_dbg(DBG_ROUTES, bat_priv, + "Sending TT_RESPONSE to %pM via %pM [%c]\n", + orig_node->orig, neigh_node->addr, + (tt_response->flags & TT_FULL_TABLE ? 'F' : '.')); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = NET_RX_SUCCESS; + goto out; + +unlock: + spin_unlock_bh(&bat_priv->tt_buff_lock); +out: + if (orig_node) + orig_node_free_ref(orig_node); + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (ret == NET_RX_DROP) + kfree_skb(skb); + return ret; + +} + +int send_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + if (is_my_mac(tt_request->dst)) + return send_my_tt_response(bat_priv, tt_request); + else + return send_other_tt_response(bat_priv, tt_request); +} + +/* Substitute the TT response source's table with the newone carried by the + * packet */ +static void _tt_fill_gtable(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *tt_buff, + uint16_t table_size, uint8_t ttvn) +{ + int count; + unsigned char *tt_ptr; + + for (count = 0; count < table_size; count++) { + tt_ptr = tt_buff + (count * ETH_ALEN); + + /* If we fail to allocate a new entry we return immediatly */ + if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn)) + return; + } + atomic_set(&orig_node->last_tt_ver_num, ttvn); +} + +static void tt_fill_gtable(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response) +{ + struct orig_node *orig_node = NULL; + + orig_node = orig_hash_find(bat_priv, tt_response->src); + if (!orig_node) + goto out; + + /* Purge the old table first.. 
*/ + tt_global_del_orig(bat_priv, orig_node, "Received full table"); + + _tt_fill_gtable(bat_priv, orig_node, + ((unsigned char *)tt_response) + + sizeof(struct tt_query_packet), + tt_response->tt_data, + tt_response->ttvn); + + spin_lock_bh(&orig_node->tt_buff_lock); + kfree(orig_node->tt_buff); + orig_node->tt_buff_len = 0; + orig_node->tt_buff = NULL; + spin_unlock_bh(&orig_node->tt_buff_lock); + +out: + if (orig_node) + orig_node_free_ref(orig_node); +} + +static void tt_update_changes(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response, + struct tt_change *tt_change) +{ + struct orig_node *orig_node = NULL; + int i; + + orig_node = orig_hash_find(bat_priv, tt_response->src); + + if (!orig_node) + goto out; + + for (i = 0; i < tt_response->tt_data; i++) { + if ((tt_change + i)->op == TT_DEL) + tt_global_del(bat_priv, orig_node, + (tt_change + i)->addr, + "tt removed by tt_response"); + else + if (!tt_global_add(bat_priv, orig_node, + (tt_change + i)->addr, tt_response->ttvn)) + return; + } + + tt_save_orig_buffer(bat_priv, orig_node, (unsigned char *)tt_change, + tt_response->tt_data); + atomic_set(&orig_node->last_tt_ver_num, tt_response->ttvn); + +out: + if (orig_node) + orig_node_free_ref(orig_node); +} + +bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr) +{ + struct tt_local_entry *tt_local_entry; + + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = tt_local_hash_find(bat_priv, addr); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + + if (tt_local_entry) + return true; + return false; +} + +void handle_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response) +{ + struct tt_req_node *node, *safe; + struct orig_node *orig_node = NULL; + + bat_dbg(DBG_ROUTES, bat_priv, "Received TT_RESPONSE from %pM for " + "ttvn %d t_size: %d [%c]\n", + tt_response->src, tt_response->ttvn, + tt_response->tt_data, + (tt_response->flags & TT_FULL_TABLE ? 
'F' : '.')); + + if (tt_response->flags & TT_FULL_TABLE) + tt_fill_gtable(bat_priv, tt_response); + else + tt_update_changes(bat_priv, tt_response, + (struct tt_change *)(tt_response + 1)); + + /* Delete the tt_req_node from pending tt_requests list */ + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + if (!compare_eth(node->addr, tt_response->src)) + continue; + list_del(&node->list); + kfree(node); + } + spin_unlock_bh(&bat_priv->tt_req_list_lock); + + /* Recalculate the CRC for this orig_node and store it */ + orig_node = orig_hash_find(bat_priv, tt_response->src); + if (!orig_node) + goto out; + + spin_lock_bh(&bat_priv->tt_ghash_lock); + orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + spin_unlock_bh(&bat_priv->tt_ghash_lock); +out: + orig_node_free_ref(orig_node); +} + +int tt_init(struct bat_priv *bat_priv) +{ + if (!tt_local_init(bat_priv)) + return 0; + + if (!tt_global_init(bat_priv)) + return 0; + + tt_start_timer(bat_priv); + + return 1; +} + +void tt_free(struct bat_priv *bat_priv) +{ + cancel_delayed_work_sync(&bat_priv->tt_work); + + tt_local_table_free(bat_priv); + tt_global_table_free(bat_priv); + tt_req_list_free(bat_priv); + tt_changes_list_free(bat_priv); + + kfree(bat_priv->tt_buff); +} + +static void tt_purge(struct work_struct *work) +{ + struct delayed_work *delayed_work = + container_of(work, struct delayed_work, work); + struct bat_priv *bat_priv = + container_of(delayed_work, struct bat_priv, tt_work); + + tt_local_purge(bat_priv); + tt_req_purge(bat_priv); + + tt_start_timer(bat_priv); +} diff --git a/translation-table.h b/translation-table.h index 46152c3..4eef4f8 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,22 +22,41 @@ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
-int tt_local_init(struct bat_priv *bat_priv); +int tt_len(int changes_num); +void tt_changes_primary_if(struct bat_priv *bat_priv, uint8_t *old_addr, + uint8_t *new_addr); +int tt_changes_fill_buffer(struct bat_priv *bat_priv, + unsigned char *buff, int buff_len); +int tt_init(struct bat_priv *bat_priv); void tt_local_add(struct net_device *soft_iface, uint8_t *addr); void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message); -int tt_local_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len); + uint8_t *addr, char *message); int tt_local_seq_print_text(struct seq_file *seq, void *offset); -void tt_local_free(struct bat_priv *bat_priv); -int tt_global_init(struct bat_priv *bat_priv); void tt_global_add_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len); + struct orig_node *orig_node, + unsigned char *tt_buff, int tt_buff_len); +int tt_global_add(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *addr, + uint8_t ttvn); int tt_global_seq_print_text(struct seq_file *seq, void *offset); void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, char *message); -void tt_global_free(struct bat_priv *bat_priv); + struct orig_node *orig_node, char *message); +void tt_global_del(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *addr, + char *message); struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr); +void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes); +uint16_t tt_local_crc(struct bat_priv *bat_priv); +uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node); +void tt_free(struct bat_priv *bat_priv); +int send_tt_request(struct bat_priv *bat_priv, + struct orig_node *dst_orig_node, uint8_t hvn, + uint16_t tt_crc, bool full_table); +int send_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request); +bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr); +void handle_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index b8c72c3..3a629a3 100644 --- a/types.h +++ b/types.h @@ -75,8 +75,12 @@ struct orig_node { unsigned long batman_seqno_reset; uint8_t gw_flags; uint8_t flags; + atomic_t last_tt_ver_num; + uint16_t tt_crc; unsigned char *tt_buff; int16_t tt_buff_len; + spinlock_t tt_buff_lock; + atomic_t tt_size; uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -94,10 +98,16 @@ struct orig_node { * neigh_node->real_packet_count */ spinlock_t bcast_seqno_lock; /* protects bcast_bits, * last_bcast_seqno */ + spinlock_t tt_list_lock; /* protects tt_list */ atomic_t bond_candidates; struct list_head bond_list; };
+struct tt_change { + uint8_t op; + uint8_t addr[ETH_ALEN]; +}; + struct gw_node { struct hlist_node list; struct orig_node *orig_node; @@ -145,6 +155,9 @@ struct bat_priv { atomic_t bcast_seqno; atomic_t bcast_queue_left; atomic_t batman_queue_left; + atomic_t tt_ver_num; + atomic_t tt_ogm_append_cnt; + atomic_t tt_local_changes; /* changes registered in a OGM interval */ char num_ifaces; struct hlist_head softif_neigh_list; struct softif_neigh __rcu *softif_neigh; @@ -154,21 +167,29 @@ struct bat_priv { struct hlist_head forw_bat_list; struct hlist_head forw_bcast_list; struct hlist_head gw_list; + struct list_head tt_changes_list; /* tracks changes in a OGM int */ struct list_head vis_send_list; struct hashtable_t *orig_hash; struct hashtable_t *tt_local_hash; struct hashtable_t *tt_global_hash; + struct list_head tt_req_list; /* list of pending tt_requests */ struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ + spinlock_t tt_changes_list_lock; /* protects tt_changes */ spinlock_t tt_lhash_lock; /* protects tt_local_hash */ spinlock_t tt_ghash_lock; /* protects tt_global_hash */ + spinlock_t tt_req_list_lock; /* protects tt_req_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ spinlock_t softif_neigh_lock; /* protects soft-interface neigh list */ - int16_t num_local_tt; - atomic_t tt_local_changed; + atomic_t num_local_tt; + atomic_t tt_crc; /* Checksum of the local table, recomputed before + * sending a new OGM */ + unsigned char *tt_buff; + int16_t tt_buff_len; + spinlock_t tt_buff_lock; struct delayed_work tt_work; struct delayed_work orig_work; struct delayed_work vis_work; @@ -202,9 +223,22 @@ struct tt_local_entry { struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; + uint8_t ttvn; + /* entry in the global table */ struct hlist_node hash_entry; };
+struct tt_change_node { + struct list_head list; + struct tt_change change; +}; + +struct tt_req_node { + uint8_t addr[ETH_ALEN]; + unsigned long issued_at; + struct list_head list; +}; + /** * forw_packet - structure for forw_list maintaining packets to be * send/forwarded diff --git a/unicast.c b/unicast.c index 19c3daf..4a8cb9e 100644 --- a/unicast.c +++ b/unicast.c @@ -329,6 +329,9 @@ find_router: unicast_packet->ttl = TTL; /* copy the destination for faster routing */ memcpy(unicast_packet->dest, orig_node->orig, ETH_ALEN); + /* set the destination tt version number */ + unicast_packet->ttvn = + (uint8_t)atomic_read(&orig_node->last_tt_ver_num);
if (atomic_read(&bat_priv->fragmentation) && data_len + sizeof(struct unicast_packet) >
On Wed, Apr 27, 2011 at 11:35:04PM +0200, Antonio Quartulli wrote:
The old HNA mechanism has been totally rewritten from scratch. The new mechanism consists in announcing local translation-table changes only, reducing the protocol overhead.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Hna-improvements
Moreover:
- COMPAT_VERSION has been increased to 14
- batman-adv now depends on module "crc16" for tt crc computation
Hi Antonio
Shouldn't this dependency be listed in the Kconfig file? I think you need to add
select CRC16
See for example ./drivers/w1/slaves/Kconfig.
Andrew
On Thursday 28 April 2011 18:10:31 Andrew Lunn wrote: [..]
Moreover:
- COMPAT_VERSION has been increased to 14
- batman-adv now depends on module "crc16" for tt crc computation
[..]
Shouldn't this dependency be listed in the Kconfig file? I think you need to add
select CRC16
See for example ./drivers/w1/slaves/Kconfig.
Yes, but this patch is against the external module.
Kind regards, Sven
On Thursday 28 April 2011 18:10:31 Andrew Lunn wrote:
Shouldn't this dependency be listed in the Kconfig file? I think you need to add
select CRC16
Actually, the question is whether it is a wise move to introduce this dependency just for that function or whether somebody has an idea how to avoid it. Obviously, copying this function into our module won't be accepted by the kernel developers.
Regards, Marek
On Thu, Apr 28, 2011 at 07:34:29PM +0200, Marek Lindner wrote:
On Thursday 28 April 2011 18:10:31 Andrew Lunn wrote:
Shouldn't this dependency be listed in the Kconfig file? I think you need to add
select CRC16
Actually, the question is whether it is a wise move to introduce this dependency just for that function or whether somebody has an idea how to avoid it. Obviously, copying this function into our module won't be accepted by the kernel developers.
In my opinion it would not be so bad to include this dependency, as the crc16 module provides only this function and nothing more. Anyway, any idea is welcome!
In my opinion it would not be so bad to include this dependency, as the crc16 module provides only this function and nothing more.
+1
2011/4/28 Antonio Quartulli ordex@autistici.org:
On Thu, Apr 28, 2011 at 07:34:29PM +0200, Marek Lindner wrote:
On Thursday 28 April 2011 18:10:31 Andrew Lunn wrote:
Shouldn't this dependency be listed in the Kconfig file? I think you need to add
select CRC16
Actually, the question is whether it is a wise move to introduce this dependency just for that function or whether somebody has an idea how to avoid it. Obviously, copying this function into our module won't be accepted by the kernel developers.
In my opinion it would not be so bad to include this dependency, as the crc16 module provides only this function and nothing more. Anyway, any idea is welcome!
-- Antonio Quartulli
..each of us alone is worth nothing.. Ernesto "Che" Guevara
On Wed, Apr 27, 2011 at 11:35:04PM +0200, Antonio Quartulli wrote:
The old HNA mechanism has been totally rewritten from scratch. The new mechanism consists in announcing local translation-table changes only, reducing the protocol overhead.
Hi Antonia
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Hna-improvements
This is a nice summary of the idea. The LaTeX document is also good. Great to see documentation...
/* is there another aggregated packet here? */ -static inline int aggregated_packet(int buff_pos, int packet_len, int num_tt) +static inline int aggregated_packet(int buff_pos, int packet_len,
int tt_num_changes)
{
- int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_tt * ETH_ALEN);
- int next_buff_pos = buff_pos + BAT_PACKET_LEN + (tt_num_changes *
sizeof(struct tt_change));
Your indentation/wrapping is a bit strange. In the function declaration, I would have put the int tt_num_changes directly under int buff_pos. For next_buff_pos I would have put the whole ( ) subexpression on the next line, not split it in half. This happens throughout the patch.
+#define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */
+/* Transtable operations */ +#define TT_ADD 0 +#define TT_DEL 1
+++ b/packet.h @@ -30,9 +30,10 @@ #define BAT_BCAST 0x04 #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 +#define BAT_TT_QUERY 0x07
Indentation?
/* this file is included by batctl which needs these defines */ -#define COMPAT_VERSION 12 +#define COMPAT_VERSION 14
What happened to version 13? It suggests this diff is against an older version of batman. Are there going to be merging problems?
@@ -52,6 +53,11 @@ #define UNI_FRAG_HEAD 0x01 #define UNI_FRAG_LARGETAIL 0x02
+/* TT flags */ +#define TT_RESPONSE 0x00 +#define TT_REQUEST 0x01 +#define TT_FULL_TABLE 0x02
@@ -101,6 +109,7 @@ struct unicast_packet { uint8_t version; /* batman version field */ uint8_t dest[6]; uint8_t ttl;
- uint8_t ttvn; /* destination ttvn */
} __packed;
What is ttvn? The vn in particular? Is it version? ver and version are already used; do we want yet another way to say version?
@@ -134,4 +143,25 @@ struct vis_packet { uint8_t sender_orig[6]; /* who sent or rebroadcasted this packet */ } __packed;
+struct tt_query_packet {
- uint8_t packet_type;
- uint8_t version; /* batman version field */
- uint8_t dst[6];
- uint8_t ttl;
- uint8_t flags; /* bit0: 0: -> tt_request
* 1: -> tt_response
* bit1: request the full table
*/
Rather than document the bits, it would be better to reference the TT_* macros. Somebody at some time will add new flags, or change the values, and not update this description.
- uint8_t src[6];
- uint8_t ttvn; /* if tt_request: ttvn that triggered the
* request
* if tt_response: new ttvn for the src
* orig_node
*/
- uint16_t tt_data; /* if tt_request: crc associated with the
* ttvn
* if tt_response: table_size
*/
Maybe a union instead of tt_data being used for two different things? Makes it less confusing when reading the code.
diff --git a/routing.c b/routing.c index 91b3709..838394b 100644 --- a/routing.c +++ b/routing.c @@ -64,28 +64,68 @@ void slide_own_bcast_window(struct hard_iface *hard_iface) } }
-static void update_TT(struct bat_priv *bat_priv, struct orig_node *orig_node,
unsigned char *tt_buff, int tt_buff_len)
+static void update_transtable(struct bat_priv *bat_priv,
struct orig_node *orig_node,
unsigned char *tt_buff, uint8_t tt_num_changes,
uint8_t ttvn, uint16_t tt_crc)
{
- if ((tt_buff_len != orig_node->tt_buff_len) ||
((tt_buff_len > 0) &&
(orig_node->tt_buff_len > 0) &&
(memcmp(orig_node->tt_buff, tt_buff, tt_buff_len) != 0))) {
if (orig_node->tt_buff_len > 0)
tt_global_del_orig(bat_priv, orig_node,
"originator changed tt");
if ((tt_buff_len > 0) && (tt_buff))
tt_global_add_orig(bat_priv, orig_node,
tt_buff, tt_buff_len);
- struct tt_change *tt_change;
- int count;
- uint8_t orig_ttvn = (uint8_t)atomic_read(&orig_node->last_tt_ver_num);
- /* the ttvn increased by one -> we can apply the attached changes */
- if (ttvn - orig_ttvn == 1) {
/* if it does not contain the changes send a tt request */
if (!tt_num_changes)
goto request_table;
Why would that happen? It sounds like you are handling a bug, not something which is designed to happen.
for (count = 0; count < tt_num_changes; count++) {
tt_change = (struct tt_change *) tt_buff + count;
/* Check for the change op */
if (tt_change->op == TT_DEL)
tt_global_del(bat_priv, orig_node,
tt_change->addr,
"tt remotely removed");
else
if (!tt_global_add(bat_priv, orig_node,
tt_change->addr,
ttvn))
/* In case of problem while storing a
* global_entry, we stop the updating
* procedure without committing the
* ttvn change. This will avoid to send
* corrupted data on tt_request
*/
return;
Why would an add fail? Because we are out of space? Does it make sense to have two passes over the changes? The first pass does all the deletes and the second pass the adds. Does that make it less likely the add will fail?
Also, the ttvn still has the old value, but some of the new content. Does this cause problems when somebody makes a request for the ttvn with the old value? The requester gets something between ttvn and ttvn+1, but thinks it has ttvn. Can subsequent updates work?
bat_dbg(DBG_BATMAN, bat_priv,
	"Received BATMAN packet via NB: %pM, IF: %s [%pM] "
-	"(from OG: %pM, via prev OG: %pM, seqno %d, tq %d, "
-	"TTL %d, V %d, IDF %d)\n",
+	"(from OG: %pM, via prev OG: %pM, seqno %d, ttvn %u, "
+	"crc %u, changes %u, td %d, TTL %d, V %d, IDF %d)\n",
	ethhdr->h_source, if_incoming->net_dev->name,
	if_incoming->net_dev->dev_addr, batman_packet->orig,
	batman_packet->prev_sender, batman_packet->seqno,
-	batman_packet->tq, batman_packet->ttl, batman_packet->version,
+	batman_packet->tt_ver_num, batman_packet->tt_crc,
+	batman_packet->tt_num_changes, batman_packet->tq,
+	batman_packet->ttl, batman_packet->version,
	has_directlink_flag);
I think this is the information bisect uses to look for routing loops etc. Do you plan to extend bisect to look for TT problems? Does it make sense to add a new DBG_TT which dumps the adds and removes in the OGM?
+int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if) +{
- struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface);
- struct tt_query_packet *tt_query;
- struct ethhdr *ethhdr;
- int ret = NET_RX_DROP;
- /* drop packet if it has not necessary minimum size */
- if (unlikely(!pskb_may_pull(skb, sizeof(struct tt_query_packet))))
goto out;
- /* I could need to modify it */
- if (skb_cow(skb, sizeof(struct tt_query_packet)) < 0)
goto out;
- ethhdr = (struct ethhdr *)skb_mac_header(skb);
- /* packet with unicast indication but broadcast recipient */
- if (is_broadcast_ether_addr(ethhdr->h_dest))
goto out;
- /* packet with broadcast sender address */
- if (is_broadcast_ether_addr(ethhdr->h_source))
goto out;
- tt_query = (struct tt_query_packet *)skb->data;
- tt_query->tt_data = ntohs(tt_query->tt_data);
- if (tt_query->flags & TT_REQUEST) {
/* Try to reply to this tt_request */
ret = send_tt_response(bat_priv, tt_query);
if (ret != NET_RX_SUCCESS) {
This looks wrong. The name send_tt_response() suggests we are sending, but you compare against NET_RX_SUCCESS!
bat_dbg(DBG_ROUTES, bat_priv,
"Routing TT_REQUEST to %pM [%c]\n",
tt_query->dst,
(tt_query->flags & TT_FULL_TABLE ? 'F' : '.'));
tt_query->tt_data = htons(tt_query->tt_data);
return route_unicast_packet(skb, recv_if);
}
goto out;
- }
- /* We need to linearize the packet to access the TT data */
- if (skb_linearize(skb) < 0)
goto out;
Isn't this too late? You have already accessed tt_query->tt_data in the code above.
- diff = unicast_packet->ttvn - curr_ttvn;
- /* Check whether I have to reroute the packet */
- if (unicast_packet->packet_type == BAT_UNICAST &&
(diff < 0 && diff > -0xff/2)) {
Are there no helper methods to do this wrap around comparison in one of the linux header files?
Andrew
On sab, apr 30, 2011 at 10:42:26 +0200, Andrew Lunn wrote:
Hi Antonia
Hi Andrew, hi all
(don't worry about the typo ;) )
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Hna-improvements
This is a nice summary of the idea. The LaTeX document is also good. Great to see documentation...
A research project has been carried out on this topic; that document is in fact an old draft of part of the project report. (The whole report will be published as soon as it is ready.)
/* is there another aggregated packet here? */ -static inline int aggregated_packet(int buff_pos, int packet_len, int num_tt) +static inline int aggregated_packet(int buff_pos, int packet_len,
int tt_num_changes)
{
- int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_tt * ETH_ALEN);
- int next_buff_pos = buff_pos + BAT_PACKET_LEN + (tt_num_changes *
sizeof(struct tt_change));
Your indentation/wrapping is a bit strange. In the function declaration, I would have put the int tt_num_changes directly under int buff_pos.
This is what I've done, but it seems that your mail client is messing up the tabs (I think).
For next_buff_pos I would have put the whole ( ) subexpression on the next line, not split it in half. This happens throughout the patch.
+#define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */
+/* Transtable operations */ +#define TT_ADD 0 +#define TT_DEL 1
+++ b/packet.h @@ -30,9 +30,10 @@ #define BAT_BCAST 0x04 #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 +#define BAT_TT_QUERY 0x07
Indentation?
As above, but in this case I think I'll substitute the tab with spaces so that all the BAT_* definitions can be homogeneous
/* this file is included by batctl which needs these defines */ -#define COMPAT_VERSION 12 +#define COMPAT_VERSION 14
What happened to version 13? It suggests this diff is against an older version of batman. Is there going to be merging problems?
There was a problem with the COMPAT_VERSION so I had to jump to 14 (I can't really remember the details, Marek should know something more :))
@@ -52,6 +53,11 @@ #define UNI_FRAG_HEAD 0x01 #define UNI_FRAG_LARGETAIL 0x02
+/* TT flags */ +#define TT_RESPONSE 0x00 +#define TT_REQUEST 0x01 +#define TT_FULL_TABLE 0x02
@@ -101,6 +109,7 @@ struct unicast_packet { uint8_t version; /* batman version field */ uint8_t dest[6]; uint8_t ttl;
- uint8_t ttvn; /* destination ttvn */
} __packed;
What is ttvn? The vn in particular? Is it version? ver and version are already used; do we want yet another way to say version?
Translation Table Version Number. 'ttvn' is the abbreviation used in the documentation, so I decided to use it as the field name. Only in struct orig_node is it called last_tt_ver_num. Do you think I should use the latter everywhere? 'ttvn' is really nice and compact :)
@@ -134,4 +143,25 @@ struct vis_packet { uint8_t sender_orig[6]; /* who sent or rebroadcasted this packet */ } __packed;
+struct tt_query_packet {
- uint8_t packet_type;
- uint8_t version; /* batman version field */
- uint8_t dst[6];
- uint8_t ttl;
- uint8_t flags; /* bit0: 0: -> tt_request
* 1: -> tt_response
* bit1: request the full table
*/
Rather than document the bits, it would be better to reference the TT_* macros. Somebody at some time will add new flags, or change the values, and not update this description.
Mh.. Honestly I prefer to understand what each bit means in a bitfield flag. What do you mean by referencing the macros? Should I explain here which macros can be assigned to the field?
- uint8_t src[6];
- uint8_t ttvn; /* if tt_request: ttvn that triggered the
* request
* if tt_response: new ttvn for the src
* orig_node
*/
- uint16_t tt_data; /* if tt_request: crc associated with the
* ttvn
* if tt_response: table_size
*/
Maybe a union instead of tt_data being used for two different things? Makes it less confusing when reading the code.
I decided to avoid a union because here we have two different things which have exactly the same length. So I opted for a "generic" name. What do style experts suggest? :) A union would probably make it easier to understand what is going on while reading the code, as Andrew suggested.
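For what it is worth, the union Andrew has in mind could look roughly like this. This is only a sketch: the member names crc and table_size are made up for illustration, while the field sizes and order are the ones from the patch, so the on-wire format would not change.

	struct tt_query_packet {
		uint8_t packet_type;
		uint8_t version;	/* batman version field */
		uint8_t dst[6];
		uint8_t ttl;
		uint8_t flags;		/* combination of the TT_* macros */
		uint8_t src[6];
		uint8_t ttvn;
		union {
			uint16_t crc;		/* carried by a tt_request */
			uint16_t table_size;	/* carried by a tt_response */
		} tt_data;
	} __packed;

Both members are 16 bit wide, so only the reader of the code gains information.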
- /* the ttvn increased by one -> we can apply the attached changes */
- if (ttvn - orig_ttvn == 1) {
/* if it does not contain the changes send a tt request */
if (!tt_num_changes)
goto request_table;
Why would that happen? It sounds like you are handling a bug, not something which is designed to happen.
We have two cases which would lead to this situation: 1) An OGM, after being sent TT_OGM_APPEND_MAX times, will not contain the changes anymore. If a node missed all the "full" OGMs, it will end up in this situation when receiving the next one. 2) The set of changes is too big to be appended to the OGM (due to the frame maximum size). The receiving node will send a tt_request to recover the changes (later on, it could also exploit the fragmentation, while the OGM cannot)
for (count = 0; count < tt_num_changes; count++) {
tt_change = (struct tt_change *) tt_buff + count;
/* Check for the change op */
if (tt_change->op == TT_DEL)
tt_global_del(bat_priv, orig_node,
tt_change->addr,
"tt remotely removed");
else
if (!tt_global_add(bat_priv, orig_node,
tt_change->addr,
ttvn))
/* In case of problem while storing a
* global_entry, we stop the updating
* procedure without committing the
* ttvn change. This will avoid to send
* corrupted data on tt_request
*/
return;
Why would an add fail? Because we are out of space? Does it make sense to have two passes over the changes? The first pass does all the deletes and the second pass the adds. Does that make it less likely the add will fail?
Yes, memory problem. Actually it is not possible to make two passes: e.g. imagine that the set of changes is the following:
- DEL A
- ADD A
- DEL A
(ok, it is probably not really common, but still possible)
If we make two passes we will have A again in the table while it should not be there. By the way, if we are going to add a client which is already in the table, we will not allocate more memory, but will simply change the "pointer" to the originator serving that client in our structure (tt_global_entry->orig_node).
Also, the ttvn still has the old value, but some of the new content. Does this cause problems when somebody makes a request for the ttvn with the old value? The requester gets something between ttvn and ttvn+1, but thinks it has ttvn. Can subsequent updates work?
Remember that we added the TT_CRC. It was born as a countermeasure against node reboots, but now we are exploiting it as a consistency check! This is why the code recomputes the crc after applying every change set. If something went wrong, on the next OGM the node will recognise the problem and ask for a "full table". Moreover the crc is sent within the tt_request message, so that if an intermediate node's table does not match it, the request is forwarded instead of being answered immediately.
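As a side note to the Kconfig sub-thread: the crc16 module provides just the crc16() helper from <linux/crc16.h>, and folding the announced client addresses through it is all this checksum needs. A minimal sketch, assuming a flat array of client MACs; the real tt_local_crc()/tt_global_crc() in the patch walk the RCU-protected hash tables instead, and tt_crc_sketch() is a made-up name:

	#include <linux/crc16.h>
	#include <linux/if_ether.h>

	/* Illustrative only: fold every announced client MAC into one CRC16. */
	static uint16_t tt_crc_sketch(const uint8_t (*clients)[ETH_ALEN],
				      unsigned int num_clients)
	{
		uint16_t crc = 0;
		unsigned int i;

		for (i = 0; i < num_clients; i++)
			crc = crc16(crc, clients[i], ETH_ALEN);

		return crc;
	}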
bat_dbg(DBG_BATMAN, bat_priv,
	"Received BATMAN packet via NB: %pM, IF: %s [%pM] "
-	"(from OG: %pM, via prev OG: %pM, seqno %d, tq %d, "
-	"TTL %d, V %d, IDF %d)\n",
+	"(from OG: %pM, via prev OG: %pM, seqno %d, ttvn %u, "
+	"crc %u, changes %u, td %d, TTL %d, V %d, IDF %d)\n",
	ethhdr->h_source, if_incoming->net_dev->name,
	if_incoming->net_dev->dev_addr, batman_packet->orig,
	batman_packet->prev_sender, batman_packet->seqno,
-	batman_packet->tq, batman_packet->ttl, batman_packet->version,
+	batman_packet->tt_ver_num, batman_packet->tt_crc,
+	batman_packet->tt_num_changes, batman_packet->tq,
+	batman_packet->ttl, batman_packet->version,
	has_directlink_flag);
I think this is the information bisect uses to look for routing loops etc. Do you plan to extend bisect to look for TT problems? Does it make sense to add a new DBG_TT which dumps the adds and removes in the OGM?
Sounds good to me :)
- if (tt_query->flags & TT_REQUEST) {
/* Try to reply to this tt_request */
ret = send_tt_response(bat_priv, tt_query);
if (ret != NET_RX_SUCCESS) {
This looks wrong. The name send_tt_response() suggests we are sending, but you compare against NET_RX_SUCCESS!
eheh you're nearly right. We are sending a tt_response here, BUT only if we have enough information to send such a message can we consider the tt_request as correctly received; otherwise, as the code below suggests, we have to re-route the packet and so let route_unicast_packet() decide whether the message was correctly received or not.
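A comment along these lines at the call site would make that intent explicit (suggested wording only, it is not in the posted patch):

	if (tt_query->flags & TT_REQUEST) {
		/* Try to answer the request ourselves. Only if we hold
		 * enough information to build the TT_RESPONSE is the
		 * request considered received here (NET_RX_SUCCESS);
		 * otherwise the skb is handed to route_unicast_packet(),
		 * which decides the final return code. */
		ret = send_tt_response(bat_priv, tt_query);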
bat_dbg(DBG_ROUTES, bat_priv,
"Routing TT_REQUEST to %pM [%c]\n",
tt_query->dst,
(tt_query->flags & TT_FULL_TABLE ? 'F' : '.'));
tt_query->tt_data = htons(tt_query->tt_data);
return route_unicast_packet(skb, recv_if);
}
goto out;
- }
- /* We need to linearize the packet to access the TT data */
- if (skb_linearize(skb) < 0)
goto out;
Isn't this too late? You have already accessed tt_query->tt_data in the code above.
the access to the tt_data field is guaranteed by
pskb_may_pull(skb, sizeof(struct tt_query_packet))
(a few lines above inside the function), while here we are linearising the skb to let handle_tt_response access the data contained after the header. (If I have correctly understood how skbs work, this should be ok.) The comment refers to the data carried by the tt_response, not to the tt_data field.
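In other words the ordering being defended is the usual one. A condensed sketch of that part of recv_tt_query(); struct tt_query_packet and struct tt_change are the ones added by the patch, while the helper name tt_query_payload() is made up here:

	#include <linux/skbuff.h>

	/* Return a pointer to the TT change records carried after the
	 * header, or NULL if the skb cannot be prepared. */
	static struct tt_change *tt_query_payload(struct sk_buff *skb,
						  uint16_t *tt_count)
	{
		struct tt_query_packet *tt_query;

		/* the header is guaranteed to be accessible once this succeeds */
		if (unlikely(!pskb_may_pull(skb, sizeof(struct tt_query_packet))))
			return NULL;

		tt_query = (struct tt_query_packet *)skb->data;
		*tt_count = ntohs(tt_query->tt_data);	/* header field: fine to read */

		/* the records *after* the header are only safely reachable
		 * through skb->data once the skb has been linearised */
		if (skb_linearize(skb) < 0)
			return NULL;

		return (struct tt_change *)(skb->data +
					    sizeof(struct tt_query_packet));
	}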
- diff = unicast_packet->ttvn - curr_ttvn;
- /* Check whether I have to reroute the packet */
- if (unicast_packet->packet_type == BAT_UNICAST &&
(diff < 0 && diff > -0xff/2)) {
Are there no helper methods to do this wrap around comparison in one of the linux header files?
Honestly, I don't know. I'll look into it..
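As far as I know the generic headers only offer wrap-around helpers for specific types (time_after() and friends for jiffies, for example), so an 8-bit sequence comparison would most likely stay a small local helper along these lines; seq_before_u8() is a made-up name, not an existing kernel API:

	#include <linux/types.h>

	/* True when a is "older" than b on a circular uint8_t counter.
	 * Casting the difference to int8_t handles the wrap-around, in the
	 * same spirit as the (diff < 0 && diff > -0xff/2) test above. */
	static inline bool seq_before_u8(uint8_t a, uint8_t b)
	{
		return (int8_t)(a - b) < 0;
	}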
Andrew
Andrew, thank you very much for reading the patches and for all your suggestions/criticisms/corrections!
Regards,
Your indentation/wrapping is a bit strange. In the function declaration, I would have put the int tt_num_changes directly under int buff_pos.
This is what I've done, but it seems that your mail client is messing up the tabs (I think).
Possibly. Or the list server. I use mutt, same as you, and it normally gets tabs and the like correct.
+++ b/packet.h @@ -30,9 +30,10 @@ #define BAT_BCAST 0x04 #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 +#define BAT_TT_QUERY 0x07
Indentation?
As above, but in this case I think I'll substitute the tab with spaces so that all the BAT_* definitions can be homogeneous
checkpatch might complain, depending on the number of spaces.
+/* TT flags */ +#define TT_RESPONSE 0x00 +#define TT_REQUEST 0x01 +#define TT_FULL_TABLE 0x02
@@ -101,6 +109,7 @@ struct unicast_packet { uint8_t version; /* batman version field */ uint8_t dest[6]; uint8_t ttl;
- uint8_t ttvn; /* destination ttvn */
} __packed;
What is ttvn? The vn in particular? Is it version? ver and version are already used; do we want yet another way to say version?
Translation Table Version Number. 'ttvn' is the abbreviation used in the documentation, so I decided to use it as the field name. Only in struct orig_node is it called last_tt_ver_num. Do you think I should use the latter everywhere? 'ttvn' is really nice and compact :)
It is a minor point. ttvn is O.K. But how about ttver?
@@ -134,4 +143,25 @@ struct vis_packet { uint8_t sender_orig[6]; /* who sent or rebroadcasted this packet */ } __packed;
+struct tt_query_packet {
- uint8_t packet_type;
- uint8_t version; /* batman version field */
- uint8_t dst[6];
- uint8_t ttl;
- uint8_t flags; /* bit0: 0: -> tt_request
* 1: -> tt_response
* bit1: request the full table
*/
Rather than document the bits, it would be better to reference the TT_* macros. Somebody at some time will add new flags, or change the values, and not update this description.
Mh.. Honestly I prefer to understand what each bit means in a bitfield flag. What do you mean by referencing the macros?
I mean: say that flags is a combination of TT_RESPONSE, TT_REQUEST and TT_FULL_TABLE, i.e. the TT_* macros.
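So the field comment could shrink to something like this (suggested wording only):

	uint8_t flags;	/* TT_REQUEST or TT_RESPONSE selects the message
			 * type; TT_FULL_TABLE asks for / announces the full
			 * table. See the TT_* defines in packet.h */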
- uint8_t src[6];
- uint8_t ttvn; /* if tt_request: ttvn that triggered the
* request
* if tt_response: new ttvn for the src
* orig_node
*/
- uint16_t tt_data; /* if tt_request: crc associated with the
* ttvn
* if tt_response: table_size
*/
Maybe a union instead of tt_data being used for two different things? Makes it less confusing when reading the code.
I decided to avoid a union because here we have two different things which have exactly the same length. So I opted for a "generic" name. What do style experts suggest? :) A union would probably make it easier to understand what is going on while reading the code, as Andrew suggested.
I believe in the saying: code is written once, read a hundred times, so make it easy to read.
- /* the ttvn increased by one -> we can apply the attached changes */
- if (ttvn - orig_ttvn == 1) {
/* if it does not contain the changes send a tt request */
if (!tt_num_changes)
goto request_table;
Why would that happen? It sounds like you are handling a bug, not something which is designed to happen.
We have two cases which would lead to this situation:
1) An OGM, after being sent TT_OGM_APPEND_MAX times, will not contain the changes anymore. If a node missed all the "full" OGMs, it will end up in this situation when receiving the next one.
2) The set of changes is too big to be appended to the OGM (due to the frame maximum size). The receiving node will send a tt_request to recover the changes (later on, it could also exploit the fragmentation, while the OGM cannot).
O.K. so there is a good reason. So maybe hint about these reasons in the comment? Help the reader understand why it might happen.
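For instance, the hint could be folded into the code roughly like this (suggested wording only, based on the two cases listed above):

	/* the ttvn increased by one -> we can apply the attached changes */
	if (ttvn - orig_ttvn == 1) {
		/* The OGM can legitimately carry no changes here: either
		 * the diff has already been sent TT_OGM_APPEND_MAX times
		 * and is no longer appended, or it was too large to fit
		 * into the OGM at all. In both cases fall back to an
		 * explicit TT_REQUEST. */
		if (!tt_num_changes)
			goto request_table;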
Yes, memory problem. Actually it is not possible to make two passes: e.g. imagine that the set of changes is the following:
- DEL A
- ADD A
- DEL A
(ok it is probably not really common, but still possible)
And since it will not happen too often, it is not worth the code to find such situations and simplify the transactions.
Remember that we added the TT_CRC. It was born as a countermeasure against node reboots, but now we are exploiting it as a consistency check! This is why the code recomputes the crc after applying every change set. If something went wrong, on the next OGM the node will recognise the problem and ask for a "full table".
O.K. a clean self recovery. That is good.
- if (tt_query->flags & TT_REQUEST) {
/* Try to reply to this tt_request */
ret = send_tt_response(bat_priv, tt_query);
if (ret != NET_RX_SUCCESS) {
This looks wrong. The name send_tt_response() suggests we are sending, but you compare against NET_RX_SUCCESS!
eheh you're nearly right. We are sending a tt_response here, BUT only if we have enough information to send such a message can we consider the tt_request as correctly received; otherwise, as the code below suggests, we have to re-route the packet and so let route_unicast_packet() decide whether the message was correctly received or not.
You definitely need a comment here. It is so counter-intuitive that you are bound to get bug reports and patches from people who see this.
bat_dbg(DBG_ROUTES, bat_priv,
"Routing TT_REQUEST to %pM [%c]\n",
tt_query->dst,
(tt_query->flags & TT_FULL_TABLE ? 'F' : '.'));
tt_query->tt_data = htons(tt_query->tt_data);
return route_unicast_packet(skb, recv_if);
}
goto out;
- }
- /* We need to linearize the packet to access the TT data */
- if (skb_linearize(skb) < 0)
goto out;
Isn't this too late? You have already accessed tt_query->tt_data in the code above.
the access to the tt_data field is guaranteed by
pskb_may_pull(skb, sizeof(struct tt_query_packet))
(a few lines above inside the function), while here we are linearising the skb to let handle_tt_response access the data contained after the header.
Ah, O.K. The comment is ambiguous and I got the wrong meaning. Maybe the comment could be:
/* We need to linearize the packet to access the TT change records */
Andrew
On Sat, Apr 30, 2011 at 07:48:39PM +0200, Andrew Lunn wrote:
Your indentation/wrapping is a bit strange. In the function declaration, I would have put the int tt_num_changes directly under int buff_pos.
This is what I've done, but it seems that your mail client is messing up the tabs (I think).
Possibly. Or the list server. I use mutt, same as you, and it normally gets tabs and the like correct.
Yes..then I don't know :)
+++ b/packet.h @@ -30,9 +30,10 @@ #define BAT_BCAST 0x04 #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 +#define BAT_TT_QUERY 0x07
Indentation?
As above, but in this case I think I'll substitute the tab with spaces so that all the BAT_* definitions can be homogeneous
checkpatch might complain, depending on the number of spaces.
Yep, I'll keep the patch checkpatch-compliant ;)
+/* TT flags */ +#define TT_RESPONSE 0x00 +#define TT_REQUEST 0x01 +#define TT_FULL_TABLE 0x02
@@ -101,6 +109,7 @@ struct unicast_packet { uint8_t version; /* batman version field */ uint8_t dest[6]; uint8_t ttl;
- uint8_t ttvn; /* destination ttvn */
} __packed;
What is ttvn? The vn in particular? Is it version? ver and version are already used; do we want yet another way to say version?
Translation Table Version Number. 'ttvn' is the abbreviation used in the documentation, so I decided to use it as the field name. Only in struct orig_node is it called last_tt_ver_num. Do you think I should use the latter everywhere? 'ttvn' is really nice and compact :)
It is a minor point. ttvn is O.K. But how about ttver?
Mh.. Honestly I like ttvn, but I can put an explicit explanation in the field declaration in types.h.
I would also like to know what other people think about it.
Rather than document the bits, it would be better to reference the TT_* macros. Somebody at some time will add new flags, or change the values, and not update this description.
Mh..Honestly I prefer to understand what each bit means in a bitfield flag. What do you mean with reference to the macros?
I mean: say that flags is a combination of TT_RESPONSE, TT_REQUEST and TT_FULL_TABLE, i.e. the TT_* macros.
Oky!
- uint16_t tt_data; /* if tt_request: crc associated with the
* ttvn
* if tt_response: table_size
*/
Maybe a union instead of tt_data being used for two different things? Makes it less confusing when reading the code.
I decided to avoid a union because here we have two different things which have exactly the same length. So I opted for a "generic" name. What do style experts suggest? :) A union would probably make it easier to understand what is going on while reading the code, as Andrew suggested.
I believe in the saying: code is written once, read a hundred times, so make it easy to read.
- /* the ttvn increased by one -> we can apply the attached changes */
- if (ttvn - orig_ttvn == 1) {
/* if it does not contain the changes send a tt request */
if (!tt_num_changes)
goto request_table;
Why would that happen? It sounds like you are handling a bug, not something which is designed to happen.
We have two cases which would lead to this situation:
1) An OGM, after being sent TT_OGM_APPEND_MAX times, will not contain the changes anymore. If a node missed all the "full" OGMs, it will end up in this situation when receiving the next one.
2) The set of changes is too big to be appended to the OGM (due to the frame maximum size). The receiving node will send a tt_request to recover the changes (later on, it could also exploit the fragmentation, while the OGM cannot).
O.K. so there is a good reason. So maybe hint about these reasons in the comment? Help the reader understand why it might happen.
Ok, I can add some more comments. But should I reason as if we had no documentation at all? I mean, when deciding whether to put in a comment or not..
Because, in my opinion, this piece of code would be clearer after reading the doc.
Yes, memory problem. Actually it is not possible to make two passes: e.g. imagine that the set of changes is the following:
- DEL A
- ADD A
- DEL A
(ok it is probably not really common, but still possible)
And since it will not happen too often, it is not worth the code to find such situations and simplify the transactions.
What exactly do you mean? Sorry, but I didn't fully understand your sentence :(
Remember that we added the TT_CRC. It was born as a countermeasure against node reboots, but now we are exploiting it as a consistency check! This is why the code recomputes the crc after applying every change set. If something went wrong, on the next OGM the node will recognise the problem and ask for a "full table".
O.K. a clean self recovery. That is good.
;)
- if (tt_query->flags & TT_REQUEST) {
/* Try to reply to this tt_request */
ret = send_tt_response(bat_priv, tt_query);
if (ret != NET_RX_SUCCESS) {
This looks wrong. The name send_tt_response() suggests we are sending, but you compare against NET_RX_SUCCESS!
eheh you're nearly right. We are sending a tt_response here, BUT only if we have enough information to send such a message can we consider the tt_request as correctly received; otherwise, as the code below suggests, we have to re-route the packet and so let route_unicast_packet() decide whether the message was correctly received or not.
You definitely need a comment here. It is so counter-intuitive that you are bound to get bug reports and patches from people who see this.
Ok, I'll add a comment here too
bat_dbg(DBG_ROUTES, bat_priv,
"Routing TT_REQUEST to %pM [%c]\n",
tt_query->dst,
(tt_query->flags & TT_FULL_TABLE ? 'F' : '.'));
tt_query->tt_data = htons(tt_query->tt_data);
return route_unicast_packet(skb, recv_if);
}
goto out;
- }
- /* We need to linearize the packet to access the TT data */
- if (skb_linearize(skb) < 0)
goto out;
Isn't this too late? You have already accessed tt_query->tt_data in the code above.
the access to the tt_data field is guaranteed by
pskb_may_pull(skb, sizeof(struct tt_query_packet))
(a few lines above inside the function), while here we are linearising the skb to let handle_tt_response access the data contained after the header.
Ah, O.K. The comment is ambiguous and I got the wrong meaning. Maybe the comment could be:
/* We need to linearize the packet to access the TT change records */
Oky I'll correct the comment :-)
I understood that I have to work harder to write comments :D Thank you again!
Regards,
O.K. so there is a good reason. So maybe hint about these reasons in the comment? Help the reader understand why it might happen.
Ok, I can add some more comments. But should I reason as if we had no documentation at all? I mean, when deciding whether to put in a comment or not..
If you want to reference documentation, I think it should be in the kernel documentation. So I would make the documentation part of this patch set, i.e. include a file Documentation/networking/batman-adv-tt.txt, and reference it.
Yes, memory problem. Actually it is not possible to make two passes: e.g. imagine that the set of changes is the following:
- DEL A
- ADD A
- DEL A
(ok it is probably not really common, but still possible)
And since it will not happen too often, it is not worth the code to find such situations and simplify the transactions.
What exactly do you mean? Sorry, but I didn't fully understand your sentence :(
You could parse the changes, DEL A, ADD A, DEL A, and optimize them down to just DEL A. But I guess it is not worth the effort.
I understood that I have to work harder to write comments :D
That is one approach. I often take another. Lots of very small functions, with names which make it clear what they do. The function names replace the comments.
This is no right/wrong, just different styles.
Andrew
On sab, apr 30, 2011 at 10:42:26 +0200, Andrew Lunn wrote:
bat_dbg(DBG_BATMAN, bat_priv,
	"Received BATMAN packet via NB: %pM, IF: %s [%pM] "
-	"(from OG: %pM, via prev OG: %pM, seqno %d, tq %d, "
-	"TTL %d, V %d, IDF %d)\n",
+	"(from OG: %pM, via prev OG: %pM, seqno %d, ttvn %u, "
+	"crc %u, changes %u, td %d, TTL %d, V %d, IDF %d)\n",
	ethhdr->h_source, if_incoming->net_dev->name,
	if_incoming->net_dev->dev_addr, batman_packet->orig,
	batman_packet->prev_sender, batman_packet->seqno,
-	batman_packet->tq, batman_packet->ttl, batman_packet->version,
+	batman_packet->tt_ver_num, batman_packet->tt_crc,
+	batman_packet->tt_num_changes, batman_packet->tq,
+	batman_packet->ttl, batman_packet->version,
	has_directlink_flag);
I think this is the information bisect uses to look for routing loops etc. Do you plan to extend bisect to look for TT problems? Does it make sense to add a new DBG_TT which dumps the adds and removes in the OGM?
I don't think we really need a new log "channel". Until now all the HNA operations were printed on DBG_ROUTES, so I think we could continue using it.
The bisect TT extension is not currently planned, but at least it is now supported :)
Regards,
On Tuesday 03 May 2011 17:50:07 Antonio Quartulli wrote:
I think this is the information bisect uses to look for routing loops etc. Do you plan to extend bisect to look for TT problems? Does it make sense to add a new DBG_TT which dumps the adds and removes in the OGM?
I don't think we really need a new log "channel". Until now all the HNA operations were printed on DBG_ROUTES, so I think we could continue using it.
Actually, I liked Andrew's suggestion. So far the HNA handling did not have its own debug "channel" because it was plain simple - nothing much to debug there. The advanced handling we are going to add might require debugging in the future ...
Even if you don't plan to extend bisect at the moment, extra TT debug info would make it easier to add it later on. I'd be surprised if the current concept / code "just works". Bugs tend to hide in unexpected places. ;-)
Regards, Marek
On Tue, May 03, 2011 at 05:56:45PM +0200, Marek Lindner wrote:
On Tuesday 03 May 2011 17:50:07 Antonio Quartulli wrote:
I think this is the information bisect uses to look for routing loops etc. Do you plan to extend bisect to look for TT problems? Does it make sense to add a new DBG_TT which dumps the adds and removes in the OGM?
I don't think we really need a new log "channel". Until now all the HNA operations were printed on DBG_ROUTES, so I think we could continue using it.
Actually, I liked Andrew's suggestion. So far the HNA handling did not have its own debug "channel" because it was plain simple - nothing much to debug there. The advanced handling we are going to add might require debugging in the future ...
Even if you don't plan to extend bisect at the moment, extra TT debug info would make it easier to add it later on. I'd be surprised if the current concept / code "just works". Bugs tend to hide in unexpected places. ;-)
Definitely :)
At this point I think it is better to introduce this new log "channel": DBG_TT. I'll redirect all the TT-related messages to this new channel.
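Assuming the new channel follows the existing DBG_* bitmask in main.h (the values below are an assumption based on the log levels 1/2/3 documented in the README, not taken from a posted patch), the change would boil down to:

	#define DBG_BATMAN	1	/* routing / flooding / broadcasting (unchanged) */
	#define DBG_ROUTES	2	/* route add / change / delete (unchanged) */
	#define DBG_TT		4	/* new: translation table operations */
	#define DBG_ALL		7	/* was 3 */

plus switching the TT-related bat_dbg() calls from DBG_ROUTES to DBG_TT.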
Thank you
Regards,
Exploiting the new announcement implementation, it has been possible to improve the roaming mechanism and reduce the number of packet drops.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Roaming-improvements
Signed-off-by: Antonio Quartulli ordex@autistici.org --- hard-interface.c | 4 + main.c | 2 + main.h | 4 + originator.c | 1 + packet.h | 10 +++ routing.c | 70 ++++++++++++++++++++-- routing.h | 1 + send.c | 1 + soft-interface.c | 3 +- translation-table.c | 169 +++++++++++++++++++++++++++++++++++++++++++++----- translation-table.h | 7 ++- types.h | 24 +++++++ 12 files changed, 271 insertions(+), 25 deletions(-)
diff --git a/hard-interface.c b/hard-interface.c index 2a7c533..e88fccd 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -673,6 +673,10 @@ static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, case BAT_TT_QUERY: ret = recv_tt_query(skb, hard_iface); break; + /* Roaming advertisement */ + case BAT_ROAM_ADV: + ret = recv_roam_adv(skb, hard_iface); + break; default: ret = NET_RX_DROP; } diff --git a/main.c b/main.c index a84679a..31cbecc 100644 --- a/main.c +++ b/main.c @@ -85,6 +85,7 @@ int mesh_init(struct net_device *soft_iface) spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->tt_changes_list_lock); spin_lock_init(&bat_priv->tt_req_list_lock); + spin_lock_init(&bat_priv->tt_roam_list_lock); spin_lock_init(&bat_priv->tt_buff_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); @@ -97,6 +98,7 @@ int mesh_init(struct net_device *soft_iface) INIT_HLIST_HEAD(&bat_priv->softif_neigh_list); INIT_LIST_HEAD(&bat_priv->tt_changes_list); INIT_LIST_HEAD(&bat_priv->tt_req_list); + INIT_LIST_HEAD(&bat_priv->tt_roam_list);
if (originator_init(bat_priv) < 1) goto err; diff --git a/main.h b/main.h index cc1c277..802d87a 100644 --- a/main.h +++ b/main.h @@ -55,6 +55,10 @@ #define TT_ADD 0 #define TT_DEL 1
+#define ROAMING_MAX_TIME 20 /* Time in which a client can roam at most + * ROAMING_MAX_COUNT times */ +#define ROAMING_MAX_COUNT 5 + #define NUM_WORDS (TQ_LOCAL_WINDOW_SIZE / WORD_BIT_SIZE)
#define LOG_BUF_LEN 8192 /* has to be a power of 2 */ diff --git a/originator.c b/originator.c index be7257b..2cb7425 100644 --- a/originator.c +++ b/originator.c @@ -221,6 +221,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) /* extra reference for return */ atomic_set(&orig_node->refcount, 2);
+ orig_node->tt_poss_change = false; orig_node->bat_priv = bat_priv; memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; diff --git a/packet.h b/packet.h index 34a2775..e7ac34c 100644 --- a/packet.h +++ b/packet.h @@ -31,6 +31,7 @@ #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 #define BAT_TT_QUERY 0x07 +#define BAT_ROAM_ADV 0x08
/* this file is included by batctl which needs these defines */ #define COMPAT_VERSION 14 @@ -164,4 +165,13 @@ struct tt_query_packet { */ } __packed;
+struct roam_adv_packet { + uint8_t packet_type; + uint8_t version; + uint8_t dst[6]; + uint8_t ttl; + uint8_t src[6]; + uint8_t client[6]; +} __packed; + #endif /* _NET_BATMAN_ADV_PACKET_H_ */ diff --git a/routing.c b/routing.c index 838394b..431370a 100644 --- a/routing.c +++ b/routing.c @@ -89,7 +89,7 @@ static void update_transtable(struct bat_priv *bat_priv, else if (!tt_global_add(bat_priv, orig_node, tt_change->addr, - ttvn)) + ttvn, false)) /* In case of problem while storing a * global_entry, we stop the updating * procedure without committing the @@ -108,6 +108,10 @@ static void update_transtable(struct bat_priv *bat_priv, * to recompute it to spot any possible inconsistency * in the global table */ orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + /* Roaming phase is over: tables are in sync again. I can + * unset the flag */ + if (tt_num_changes) + orig_node->tt_poss_change = false; } else { /* if we missed more than one change or our tables are not * in sync anymore -> request fresh tt data */ @@ -1290,6 +1294,56 @@ out: return ret; }
+int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if) +{ + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct roam_adv_packet *roam_adv_packet; + struct orig_node *orig_node; + struct ethhdr *ethhdr; + int ret = NET_RX_DROP; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, sizeof(struct roam_adv_packet)))) + goto out; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with unicast indication but broadcast recipient */ + if (is_broadcast_ether_addr(ethhdr->h_dest)) + goto out; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + goto out; + + roam_adv_packet = (struct roam_adv_packet *)skb->data; + + if (!is_my_mac(roam_adv_packet->dst)) + return route_unicast_packet(skb, recv_if); + + orig_node = orig_hash_find(bat_priv, roam_adv_packet->src); + if (!orig_node) + goto out; + + tt_global_add(bat_priv, orig_node, roam_adv_packet->client, + atomic_read(&orig_node->last_tt_ver_num) + 1, true); + + /* Roaming phase starts: I have a new information but the ttvn has been + * incremented yet. This flag will make me check all the incoming + * packets for the correct destination. */ + bat_priv->tt_poss_change = true; + + bat_dbg(DBG_ROUTES, bat_priv, "Received ROAMING_ADV from %pM " + "(client %pM)\n", roam_adv_packet->src, + roam_adv_packet->client); + + orig_node_free_ref(orig_node); + ret = NET_RX_SUCCESS; +out: + kfree(skb); + return ret; +} + /* find a suitable router for this originator, and use * bonding if possible. increases the found neighbors * refcount.*/ @@ -1478,35 +1532,41 @@ int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) struct ethhdr *ethhdr; uint8_t curr_ttvn; int16_t diff; + bool tt_poss_change;
if (check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
+ /* I could need to modify it */ + if (skb_cow(skb, sizeof(struct unicast_packet)) < 0) + return NET_RX_DROP; + unicast_packet = (struct unicast_packet *)skb->data;
- if (is_my_mac(unicast_packet->dest)) + if (is_my_mac(unicast_packet->dest)) { + tt_poss_change = bat_priv->tt_poss_change; curr_ttvn = (uint8_t)atomic_read(&bat_priv->tt_ver_num); - else { + } else { orig_node = orig_hash_find(bat_priv, unicast_packet->dest);
if (!orig_node) return NET_RX_DROP;
curr_ttvn = (uint8_t)atomic_read(&orig_node->last_tt_ver_num); + tt_poss_change = orig_node->tt_poss_change; orig_node_free_ref(orig_node); }
diff = unicast_packet->ttvn - curr_ttvn; /* Check whether I have to reroute the packet */ if (unicast_packet->packet_type == BAT_UNICAST && - (diff < 0 && diff > -0xff/2)) { + ((diff < 0 && diff > -0xff/2) || tt_poss_change)) { /* Linearize the skb before accessing it */ if (skb_linearize(skb) < 0) return NET_RX_DROP;
ethhdr = (struct ethhdr *)(skb->data + sizeof(struct unicast_packet)); - orig_node = transtable_search(bat_priv, ethhdr->h_dest);
if (!orig_node) { diff --git a/routing.h b/routing.h index 6f6a5f8..e2943e0 100644 --- a/routing.h +++ b/routing.h @@ -37,6 +37,7 @@ int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bat_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); +int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv, struct orig_node *orig_node, struct hard_iface *recv_if); diff --git a/send.c b/send.c index f85913e..e975417 100644 --- a/send.c +++ b/send.c @@ -303,6 +303,7 @@ void schedule_own_packet(struct hard_iface *hard_iface) prepare_packet_buffer(bat_priv, hard_iface); /* Increment the TTVN only once per OGM interval */ atomic_inc(&bat_priv->tt_ver_num); + bat_priv->tt_poss_change = false; }
/* if the changes have been sent enough times */ diff --git a/soft-interface.c b/soft-interface.c index fedb1ed..eb35ae9 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -366,7 +366,7 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) /* only modify tt-table if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { tt_local_remove(bat_priv, dev->dev_addr, - "mac address changed"); + "mac address changed", false); tt_local_add(dev, addr->sa_data); }
@@ -669,6 +669,7 @@ struct net_device *softif_create(char *name)
bat_priv->tt_buff = NULL; bat_priv->tt_buff_len = 0; + bat_priv->tt_poss_change = false;
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index d55eeb5..0b13473 100644 --- a/translation-table.c +++ b/translation-table.c @@ -168,6 +168,8 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) struct bat_priv *bat_priv = netdev_priv(soft_iface); struct tt_local_entry *tt_local_entry; struct tt_global_entry *tt_global_entry; + uint8_t roam_addr[ETH_ALEN]; + struct orig_node *roam_orig_node;
spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); @@ -206,12 +208,21 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (tt_global_entry) + /* Check whether it is a roaming! */ + if (tt_global_entry) { + memcpy(roam_addr, tt_global_entry->addr, ETH_ALEN); + roam_orig_node = tt_global_entry->orig_node; + /* This node is probably going to update its tt table */ + tt_global_entry->orig_node->tt_poss_change = true; _tt_global_del(bat_priv, tt_global_entry, "local tt received"); + spin_unlock_bh(&bat_priv->tt_ghash_lock); + send_roam_adv(bat_priv, tt_global_entry->addr, + tt_global_entry->orig_node); + } else + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - + return; unlock: spin_unlock_bh(&bat_priv->tt_lhash_lock); } @@ -364,7 +375,8 @@ static void tt_local_del(struct bat_priv *bat_priv, tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); }
-void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, + char *message, bool roaming) { struct tt_local_entry *tt_local_entry;
@@ -372,7 +384,11 @@ void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) tt_local_entry = tt_local_hash_find(bat_priv, addr);
if (tt_local_entry) { - tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); + if (roaming) + tt_local_event(bat_priv, TT_DEL, broadcast_addr); + else + tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); + tt_local_del(bat_priv, tt_local_entry, message); } spin_unlock_bh(&bat_priv->tt_lhash_lock); @@ -473,7 +489,7 @@ static void tt_changes_list_free(struct bat_priv *bat_priv) /* caller must hold orig_node recount */ int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *tt_addr, uint8_t ttvn) + unsigned char *tt_addr, uint8_t ttvn, bool roaming) { struct tt_global_entry *tt_global_entry; struct tt_local_entry *tt_local_entry; @@ -520,8 +536,9 @@ int tt_global_add(struct bat_priv *bat_priv, tt_local_entry = tt_local_hash_find(bat_priv, tt_addr);
if (tt_local_entry) - tt_local_del(bat_priv, tt_local_entry, - "global tt received"); + tt_local_remove(bat_priv, tt_global_entry->addr, + "global tt received", roaming); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return 1; unlock: @@ -905,6 +922,7 @@ out: kfree(tt_req_node); } return ret; + unlock_tt: spin_unlock_bh(&bat_priv->tt_req_list_lock); return ret; @@ -1249,7 +1267,7 @@ static void _tt_fill_gtable(struct bat_priv *bat_priv, tt_ptr = tt_buff + (count * ETH_ALEN);
/* If we fail to allocate a new entry we return immediatly */ - if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn)) + if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn, false)) return; } atomic_set(&orig_node->last_tt_ver_num, ttvn); @@ -1303,7 +1321,8 @@ static void tt_update_changes(struct bat_priv *bat_priv, "tt removed by tt_response"); else if (!tt_global_add(bat_priv, orig_node, - (tt_change + i)->addr, tt_response->ttvn)) + (tt_change + i)->addr, + tt_response->ttvn, false)) return; }
@@ -1382,16 +1401,118 @@ int tt_init(struct bat_priv *bat_priv) return 1; }
-void tt_free(struct bat_priv *bat_priv) +static void tt_roam_list_free(struct bat_priv *bat_priv) { - cancel_delayed_work_sync(&bat_priv->tt_work); + struct tt_roam_node *node, *safe;
- tt_local_table_free(bat_priv); - tt_global_table_free(bat_priv); - tt_req_list_free(bat_priv); - tt_changes_list_free(bat_priv); + spin_lock_bh(&bat_priv->tt_roam_list_lock);
- kfree(bat_priv->tt_buff); + list_for_each_entry_safe(node, safe, &bat_priv->tt_roam_list, list) { + list_del(&node->list); + kfree(node); + } + + spin_unlock_bh(&bat_priv->tt_roam_list_lock); +} + +static void tt_roam_purge(struct bat_priv *bat_priv) +{ + struct tt_roam_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_roam_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_roam_list, list) { + if (!is_out_of_time(node->first_time, + ROAMING_MAX_TIME * 1000)) + continue; + + list_del(&node->list); + kfree(node); + } + spin_unlock_bh(&bat_priv->tt_roam_list_lock); +} + +void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, + struct orig_node *orig_node) +{ + struct neigh_node *neigh_node; + struct sk_buff *skb; + struct roam_adv_packet *roam_adv_packet; + struct tt_roam_node *tt_roam_node; + bool found = false; + int ret = 1; + + spin_lock_bh(&bat_priv->tt_roam_list_lock); + /* Before going on, check whether this client has already roamed + * around too many times in the last ROAMING_MAX_TIME */ + list_for_each_entry(tt_roam_node, &bat_priv->tt_roam_list, list) { + if (!compare_eth(tt_roam_node->addr, client)) + continue; + + if (is_out_of_time(tt_roam_node->first_time, + ROAMING_MAX_TIME * 1000)) + continue; + + if (!atomic_dec_not_zero(&tt_roam_node->counter)) + /* Sorry, you roamed too many times! */ + goto unlock; + + found = true; + break; + } + + if (!found) { + tt_roam_node = kmalloc(sizeof(struct tt_roam_node), GFP_ATOMIC); + if (!tt_roam_node) + goto unlock; + + tt_roam_node->first_time = jiffies; + atomic_set(&tt_roam_node->counter, ROAMING_MAX_COUNT - 1); + memcpy(tt_roam_node->addr, client, ETH_ALEN); + + list_add(&tt_roam_node->list, &bat_priv->tt_roam_list); + } + spin_unlock_bh(&bat_priv->tt_roam_list_lock); + + skb = dev_alloc_skb(sizeof(struct roam_adv_packet) + ETH_HLEN); + if (!skb) + goto free_skb; + + skb_reserve(skb, ETH_HLEN); + + roam_adv_packet = (struct roam_adv_packet *)skb_put(skb, + sizeof(struct roam_adv_packet)); + + roam_adv_packet->packet_type = BAT_ROAM_ADV; + roam_adv_packet->version = COMPAT_VERSION; + roam_adv_packet->ttl = TTL; + memcpy(roam_adv_packet->src, + bat_priv->primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(roam_adv_packet->dst, orig_node->orig, ETH_ALEN); + memcpy(roam_adv_packet->client, client, ETH_ALEN); + + neigh_node = find_router(bat_priv, orig_node, NULL); + if (!neigh_node) + goto free_skb; + + if (neigh_node->if_incoming->if_status != IF_ACTIVE) + goto free_neigh; + + bat_dbg(DBG_ROUTES, bat_priv, + "Sending ROAMING_ADV to %pM (client %pM) via %pM\n", + orig_node->orig, client, neigh_node->addr); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = 0; + +free_neigh: + if (neigh_node) + neigh_node_free_ref(neigh_node); +free_skb: + if (ret) + kfree_skb(skb); + return; +unlock: + spin_unlock_bh(&bat_priv->tt_roam_list_lock); }
static void tt_purge(struct work_struct *work) @@ -1403,6 +1524,20 @@ static void tt_purge(struct work_struct *work)
tt_local_purge(bat_priv); tt_req_purge(bat_priv); + tt_roam_purge(bat_priv);
tt_start_timer(bat_priv); } + +void tt_free(struct bat_priv *bat_priv) +{ + cancel_delayed_work_sync(&bat_priv->tt_work); + + tt_local_table_free(bat_priv); + tt_global_table_free(bat_priv); + tt_req_list_free(bat_priv); + tt_changes_list_free(bat_priv); + tt_roam_list_free(bat_priv); + + kfree(bat_priv->tt_buff); +} diff --git a/translation-table.h b/translation-table.h index 4eef4f8..05ef22c 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,6 +22,7 @@ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
+struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr); int tt_len(int changes_num); void tt_changes_primary_if(struct bat_priv *bat_priv, uint8_t *old_addr, uint8_t *new_addr); @@ -30,14 +31,14 @@ int tt_changes_fill_buffer(struct bat_priv *bat_priv, int tt_init(struct bat_priv *bat_priv); void tt_local_add(struct net_device *soft_iface, uint8_t *addr); void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message); + uint8_t *addr, char *message, bool roaming); int tt_local_seq_print_text(struct seq_file *seq, void *offset); void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *tt_buff, int tt_buff_len); int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *addr, - uint8_t ttvn); + uint8_t ttvn, bool roaming); int tt_global_seq_print_text(struct seq_file *seq, void *offset); void tt_global_del_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, char *message); @@ -58,5 +59,7 @@ int send_tt_response(struct bat_priv *bat_priv, bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr); void handle_tt_response(struct bat_priv *bat_priv, struct tt_query_packet *tt_response); +void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, + struct orig_node *orig_node);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index 3a629a3..da7eda8 100644 --- a/types.h +++ b/types.h @@ -81,6 +81,14 @@ struct orig_node { int16_t tt_buff_len; spinlock_t tt_buff_lock; atomic_t tt_size; + bool tt_poss_change; /* this flag is needed to detect an ongoing + * roaming event. If it is true, it means that + * in the last OGM interval I sent a Roaming_adv + * to this node, so I have to check every packet + * going to it whether the destination is still + * one of its clients or not. It will be reset as + * soon as I receive a new TTVN from it */ + uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -158,6 +166,13 @@ struct bat_priv { atomic_t tt_ver_num; atomic_t tt_ogm_append_cnt; atomic_t tt_local_changes; /* changes registered in a OGM interval */ + bool tt_poss_change; /* this flag is needed to detect an ongoing + * roaming event. If it is true, it means that + * in the last OGM interval I received a + * Roaming_adv, so I have to check every packet + * going to me whether the destination is still + * one of my clients or not. It will be reset as + * soon as I increase my TTVN */ char num_ifaces; struct hlist_head softif_neigh_list; struct softif_neigh __rcu *softif_neigh; @@ -173,6 +188,7 @@ struct bat_priv { struct hashtable_t *tt_local_hash; struct hashtable_t *tt_global_hash; struct list_head tt_req_list; /* list of pending tt_requests */ + struct list_head tt_roam_list; struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ @@ -180,6 +196,7 @@ struct bat_priv { spinlock_t tt_lhash_lock; /* protects tt_local_hash */ spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t tt_req_list_lock; /* protects tt_req_list */ + spinlock_t tt_roam_list_lock; /* protects tt_roam_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ @@ -239,6 +256,13 @@ struct tt_req_node { struct list_head list; };
+struct tt_roam_node { + uint8_t addr[ETH_ALEN]; + atomic_t counter; + unsigned long first_time; + struct list_head list; +}; + /** * forw_packet - structure for forw_list maintaining packets to be * send/forwarded
The local and the global translation tables are now lock-free and RCU-protected.
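In short, every table entry now carries a refcount and an rcu_head: readers take a reference under rcu_read_lock() with atomic_inc_not_zero(), and the memory is reclaimed via call_rcu() only after the last reference has been dropped. The snippet below is just a condensed sketch of that pattern for tt_local_entry (tt_global_entry works the same way); the names tt_local_entry_get()/tt_local_entry_put() and the tt_local_hash_head() helper are illustrative only, the patch itself uses tt_local_hash_find() and tt_local_entry_free_ref():

static void tt_local_entry_free_rcu(struct rcu_head *rcu)
{
	kfree(container_of(rcu, struct tt_local_entry, rcu));
}

/* lookup under RCU: only return the entry if a reference could be taken */
static struct tt_local_entry *tt_local_entry_get(struct bat_priv *bat_priv,
						 const uint8_t *addr)
{
	struct tt_local_entry *entry = NULL, *tmp;
	struct hlist_node *node;
	struct hlist_head *head = tt_local_hash_head(bat_priv, addr);

	rcu_read_lock();
	hlist_for_each_entry_rcu(tmp, node, head, hash_entry) {
		if (!compare_eth(tmp->addr, addr))
			continue;

		/* refcount already dropped to zero: entry is being deleted */
		if (!atomic_inc_not_zero(&tmp->refcount))
			continue;

		entry = tmp;
		break;
	}
	rcu_read_unlock();

	return entry;
}

/* drop a reference: free only after the current RCU grace period */
static void tt_local_entry_put(struct tt_local_entry *entry)
{
	if (atomic_dec_and_test(&entry->refcount))
		call_rcu(&entry->rcu, tt_local_entry_free_rcu);
}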
Signed-off-by: Antonio Quartulli ordex@autistici.org --- main.c | 2 - translation-table.c | 266 +++++++++++++++++++++++++++++---------------------- types.h | 6 +- vis.c | 13 +-- 4 files changed, 161 insertions(+), 126 deletions(-)
diff --git a/main.c b/main.c index 31cbecc..a3783f8 100644 --- a/main.c +++ b/main.c @@ -81,8 +81,6 @@ int mesh_init(struct net_device *soft_iface)
spin_lock_init(&bat_priv->forw_bat_list_lock); spin_lock_init(&bat_priv->forw_bcast_list_lock); - spin_lock_init(&bat_priv->tt_lhash_lock); - spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->tt_changes_list_lock); spin_lock_init(&bat_priv->tt_req_list_lock); spin_lock_init(&bat_priv->tt_roam_list_lock); diff --git a/translation-table.c b/translation-table.c index 0b13473..700d57b 100644 --- a/translation-table.c +++ b/translation-table.c @@ -78,6 +78,9 @@ static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, if (!compare_eth(tt_local_entry, data)) continue;
+ if (!atomic_inc_not_zero(&tt_local_entry->refcount)) + continue; + tt_local_entry_tmp = tt_local_entry; break; } @@ -107,6 +110,9 @@ static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, if (!compare_eth(tt_global_entry, data)) continue;
+ if (!atomic_inc_not_zero(&tt_global_entry->refcount)) + continue; + tt_global_entry_tmp = tt_global_entry; break; } @@ -123,6 +129,34 @@ static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) return time_after(jiffies, deadline); }
+static void tt_local_entry_free_rcu(struct rcu_head *rcu) +{ + struct tt_local_entry *tt_local_entry; + + tt_local_entry = container_of(rcu, struct tt_local_entry, rcu); + kfree(tt_local_entry); +} + +static void tt_local_entry_free_ref(struct tt_local_entry *tt_local_entry) +{ + if (atomic_dec_and_test(&tt_local_entry->refcount)) + call_rcu(&tt_local_entry->rcu, tt_local_entry_free_rcu); +} + +static void tt_global_entry_free_rcu(struct rcu_head *rcu) +{ + struct tt_global_entry *tt_global_entry; + + tt_global_entry = container_of(rcu, struct tt_global_entry, rcu); + kfree(tt_global_entry); +} + +static void tt_global_entry_free_ref(struct tt_global_entry *tt_global_entry) +{ + if (atomic_dec_and_test(&tt_global_entry->refcount)) + call_rcu(&tt_global_entry->rcu, tt_global_entry_free_rcu); +} + static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr) { struct tt_change_node *tt_change_node; @@ -166,22 +200,19 @@ static int tt_local_init(struct bat_priv *bat_priv) void tt_local_add(struct net_device *soft_iface, uint8_t *addr) { struct bat_priv *bat_priv = netdev_priv(soft_iface); - struct tt_local_entry *tt_local_entry; - struct tt_global_entry *tt_global_entry; - uint8_t roam_addr[ETH_ALEN]; - struct orig_node *roam_orig_node; + struct tt_local_entry *tt_local_entry = NULL; + struct tt_global_entry *tt_global_entry = NULL;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr);
if (tt_local_entry) { tt_local_entry->last_seen = jiffies; - goto unlock; + goto out; }
tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); if (!tt_local_entry) - goto unlock; + goto out;
tt_local_event(bat_priv, TT_ADD, addr);
@@ -191,6 +222,7 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
memcpy(tt_local_entry->addr, addr, ETH_ALEN); tt_local_entry->last_seen = jiffies; + atomic_set(&tt_local_entry->refcount, 2);
/* the batman interface mac address should never be purged */ if (compare_eth(addr, soft_iface->dev_addr)) @@ -200,31 +232,26 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry, &tt_local_entry->hash_entry); + atomic_inc(&bat_priv->num_local_tt); - spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ - spin_lock_bh(&bat_priv->tt_ghash_lock); - tt_global_entry = tt_global_hash_find(bat_priv, addr);
/* Check whether it is a roaming! */ if (tt_global_entry) { - memcpy(roam_addr, tt_global_entry->addr, ETH_ALEN); - roam_orig_node = tt_global_entry->orig_node; /* This node is probably going to update its tt table */ tt_global_entry->orig_node->tt_poss_change = true; _tt_global_del(bat_priv, tt_global_entry, "local tt received"); - spin_unlock_bh(&bat_priv->tt_ghash_lock); send_roam_adv(bat_priv, tt_global_entry->addr, - tt_global_entry->orig_node); - } else - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - return; -unlock: - spin_unlock_bh(&bat_priv->tt_lhash_lock); + tt_global_entry->orig_node); + } +out: + if (tt_local_entry) + tt_local_entry_free_ref(tt_local_entry); + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
int tt_changes_fill_buffer(struct bat_priv *bat_priv, @@ -306,8 +333,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) "announced via TT (TTVN: %u):\n", net_dev->name, (uint8_t)atomic_read(&bat_priv->tt_ver_num));
- spin_lock_bh(&bat_priv->tt_lhash_lock); - buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx\n" */ for (i = 0; i < hash->size; i++) { @@ -321,7 +346,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->tt_lhash_lock); ret = -ENOMEM; goto out; } @@ -341,8 +365,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); - seq_printf(seq, "%s", buff); kfree(buff); out: @@ -351,15 +373,6 @@ out: return ret; }
-static void tt_local_entry_free(struct hlist_node *node, void *arg) -{ - struct bat_priv *bat_priv = (struct bat_priv *)arg; - void *data = container_of(node, struct tt_local_entry, hash_entry); - - kfree(data); - atomic_dec(&bat_priv->num_local_tt); -} - static void tt_local_del(struct bat_priv *bat_priv, struct tt_local_entry *tt_local_entry, char *message) @@ -372,26 +385,28 @@ static void tt_local_del(struct bat_priv *bat_priv, hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry->addr);
- tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); + tt_local_entry_free_ref(tt_local_entry); }
void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message, bool roaming) { - struct tt_local_entry *tt_local_entry; + struct tt_local_entry *tt_local_entry = NULL;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (tt_local_entry) { - if (roaming) - tt_local_event(bat_priv, TT_DEL, broadcast_addr); - else - tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); + if (!tt_local_entry) + goto out;
- tt_local_del(bat_priv, tt_local_entry, message); - } - spin_unlock_bh(&bat_priv->tt_lhash_lock); + if (roaming) + tt_local_event(bat_priv, TT_DEL, broadcast_addr); + else + tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); + + tt_local_del(bat_priv, tt_local_entry, message); +out: + if (tt_local_entry) + tt_local_entry_free_ref(tt_local_entry); }
static void tt_local_purge(struct bat_priv *bat_priv) @@ -400,13 +415,14 @@ static void tt_local_purge(struct bat_priv *bat_priv) struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; + spinlock_t *list_lock; int i;
- spin_lock_bh(&bat_priv->tt_lhash_lock); - for (i = 0; i < hash->size; i++) { head = &hash->table[i]; + list_lock = &hash->list_locks[i];
+ spin_lock_bh(list_lock); hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { if (tt_local_entry->never_purge) @@ -417,22 +433,26 @@ static void tt_local_purge(struct bat_priv *bat_priv) continue;
tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); - tt_local_del(bat_priv, tt_local_entry, - "address timed out"); + atomic_dec(&bat_priv->num_local_tt); + bat_dbg(DBG_ROUTES, bat_priv, "Deleting local " + "tt entry (%pM): timed out\n", + tt_local_entry->addr); + hlist_del_rcu(node); + tt_local_entry_free_ref(tt_local_entry); } + spin_unlock_bh(list_lock); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); }
static void tt_local_table_free(struct bat_priv *bat_priv) { struct hashtable_t *hash; - int i; spinlock_t *list_lock; - struct hlist_head *head; - struct hlist_node *node, *node_tmp; struct tt_local_entry *tt_local_entry; + struct hlist_node *node, *node_tmp; + struct hlist_head *head; + int i;
if (!bat_priv->tt_local_hash) return; @@ -447,7 +467,7 @@ static void tt_local_table_free(struct bat_priv *bat_priv) hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { hlist_del_rcu(node); - kfree(tt_local_entry); + tt_local_entry_free_ref(tt_local_entry); } spin_unlock_bh(list_lock); } @@ -492,10 +512,9 @@ int tt_global_add(struct bat_priv *bat_priv, unsigned char *tt_addr, uint8_t ttvn, bool roaming) { struct tt_global_entry *tt_global_entry; - struct tt_local_entry *tt_local_entry; struct orig_node *orig_node_tmp; + int ret = 0;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, tt_addr);
if (!tt_global_entry) { @@ -503,16 +522,19 @@ int tt_global_add(struct bat_priv *bat_priv, kmalloc(sizeof(struct tt_global_entry), GFP_ATOMIC); if (!tt_global_entry) - goto unlock; + goto out; + memcpy(tt_global_entry->addr, tt_addr, ETH_ALEN); /* Assign the new orig_node */ atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; tt_global_entry->ttvn = ttvn; - atomic_inc(&orig_node->tt_size); + atomic_set(&tt_global_entry->refcount, 2); + hash_add(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry, &tt_global_entry->hash_entry); + atomic_inc(&orig_node->tt_size); } else { if (tt_global_entry->orig_node != orig_node) { atomic_dec(&tt_global_entry->orig_node->tt_size); @@ -525,25 +547,18 @@ int tt_global_add(struct bat_priv *bat_priv, } }
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - bat_dbg(DBG_ROUTES, bat_priv, "Creating new global tt entry: %pM (via %pM)\n", tt_global_entry->addr, orig_node->orig);
/* remove address from local hash if present */ - spin_lock_bh(&bat_priv->tt_lhash_lock); - tt_local_entry = tt_local_hash_find(bat_priv, tt_addr); - - if (tt_local_entry) - tt_local_remove(bat_priv, tt_global_entry->addr, - "global tt received", roaming); - - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return 1; -unlock: - spin_unlock_bh(&bat_priv->tt_ghash_lock); - return 0; + tt_local_remove(bat_priv, tt_global_entry->addr, + "global tt received", roaming); + ret = 1; +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); + return ret; }
int tt_global_seq_print_text(struct seq_file *seq, void *offset) @@ -579,8 +594,6 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) seq_printf(seq, " %-13s %s %-15s %s\n", "Client", "(TTVN)", "Originator", "(Curr TTVN)");
- spin_lock_bh(&bat_priv->tt_ghash_lock); - buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx (ttvn) via * xx:xx:xx:xx:xx:xx (cur_ttvn)\n"*/ @@ -595,10 +608,10 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->tt_ghash_lock); ret = -ENOMEM; goto out; } + buff[0] = '\0'; pos = 0;
@@ -620,8 +633,6 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - seq_printf(seq, "%s", buff); kfree(buff); out: @@ -635,7 +646,7 @@ static void _tt_global_del(struct bat_priv *bat_priv, char *message) { if (!tt_global_entry) - return; + goto out;
bat_dbg(DBG_ROUTES, bat_priv, "Deleting global tt entry %pM (via %pM): %s\n", @@ -643,25 +654,29 @@ static void _tt_global_del(struct bat_priv *bat_priv, message);
atomic_dec(&tt_global_entry->orig_node->tt_size); + hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry->addr); - kfree(tt_global_entry); +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
void tt_global_del(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *addr, char *message) { - struct tt_global_entry *tt_global_entry; + struct tt_global_entry *tt_global_entry = NULL;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, addr); + if (!tt_global_entry) + goto out;
- if (tt_global_entry && tt_global_entry->orig_node == orig_node) { - atomic_dec(&orig_node->tt_size); + if (tt_global_entry->orig_node == orig_node) _tt_global_del(bat_priv, tt_global_entry, message); - } - spin_unlock_bh(&bat_priv->tt_ghash_lock); +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
void tt_global_del_orig(struct bat_priv *bat_priv, @@ -672,38 +687,59 @@ void tt_global_del_orig(struct bat_priv *bat_priv, struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_node *node, *safe; struct hlist_head *head; + spinlock_t *list_lock;
- if (!bat_priv->tt_global_hash) - return; - - spin_lock_bh(&bat_priv->tt_ghash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i]; + list_lock = &hash->list_locks[i];
+ spin_lock_bh(list_lock); hlist_for_each_entry_safe(tt_global_entry, node, safe, head, hash_entry) { - if (tt_global_entry->orig_node == orig_node) - _tt_global_del(bat_priv, tt_global_entry, - message); + if (tt_global_entry->orig_node == orig_node) { + bat_dbg(DBG_ROUTES, bat_priv, + "Deleting global tt entry %pM " + "(via %pM): originator time out\n", + tt_global_entry->addr, + tt_global_entry->orig_node->orig); + hlist_del_rcu(node); + tt_global_entry_free_ref(tt_global_entry); + } } + spin_unlock_bh(list_lock); } atomic_set(&orig_node->tt_size, 0); - - spin_unlock_bh(&bat_priv->tt_ghash_lock); -} - -static void tt_global_entry_free(struct hlist_node *node, void *arg) -{ - void *data = container_of(node, struct tt_global_entry, hash_entry); - kfree(data); }
static void tt_global_table_free(struct bat_priv *bat_priv) { + struct hashtable_t *hash; + spinlock_t *list_lock; + struct tt_global_entry *tt_global_entry; + struct hlist_node *node, *node_tmp; + struct hlist_head *head; + int i; + if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->tt_global_hash, tt_global_entry_free, NULL); + hash = bat_priv->tt_global_hash; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + list_lock = &hash->list_locks[i]; + + spin_lock_bh(list_lock); + hlist_for_each_entry_safe(tt_global_entry, node, node_tmp, + head, hash_entry) { + hlist_del_rcu(node); + tt_global_entry_free_ref(tt_global_entry); + } + spin_unlock_bh(list_lock); + } + + hash_destroy(hash); + bat_priv->tt_global_hash = NULL; }
@@ -712,19 +748,19 @@ struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr) struct tt_global_entry *tt_global_entry; struct orig_node *orig_node = NULL;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (!tt_global_entry) goto out;
if (!atomic_inc_not_zero(&tt_global_entry->orig_node->refcount)) - goto out; + goto free_tt;
orig_node = tt_global_entry->orig_node;
+free_tt: + tt_global_entry_free_ref(tt_global_entry); out: - spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; }
@@ -781,7 +817,6 @@ uint16_t tt_local_crc(struct bat_priv *bat_priv) tt_local_entry->addr[j]); total ^= total_one; } - rcu_read_unlock(); }
@@ -1323,7 +1358,7 @@ static void tt_update_changes(struct bat_priv *bat_priv, if (!tt_global_add(bat_priv, orig_node, (tt_change + i)->addr, tt_response->ttvn, false)) - return; + goto out; }
tt_save_orig_buffer(bat_priv, orig_node, (unsigned char *)tt_change, @@ -1337,15 +1372,17 @@ out:
bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr) { - struct tt_local_entry *tt_local_entry; + struct tt_local_entry *tt_local_entry = NULL; + bool ret = false;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->tt_lhash_lock); - + if (!tt_local_entry) + goto out; + ret = true; +out: if (tt_local_entry) - return true; - return false; + tt_local_entry_free_ref(tt_local_entry); + return ret; }
void handle_tt_response(struct bat_priv *bat_priv, @@ -1381,11 +1418,10 @@ void handle_tt_response(struct bat_priv *bat_priv, if (!orig_node) goto out;
- spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); - spin_unlock_bh(&bat_priv->tt_ghash_lock); out: - orig_node_free_ref(orig_node); + if (orig_node) + orig_node_free_ref(orig_node); }
int tt_init(struct bat_priv *bat_priv) diff --git a/types.h b/types.h index da7eda8..e0cdc5f 100644 --- a/types.h +++ b/types.h @@ -193,8 +193,6 @@ struct bat_priv { spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ spinlock_t tt_changes_list_lock; /* protects tt_changes */ - spinlock_t tt_lhash_lock; /* protects tt_local_hash */ - spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t tt_req_list_lock; /* protects tt_req_list */ spinlock_t tt_roam_list_lock; /* protects tt_roam_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ @@ -234,6 +232,8 @@ struct tt_local_entry { uint8_t addr[ETH_ALEN]; unsigned long last_seen; char never_purge; + atomic_t refcount; + struct rcu_head rcu; struct hlist_node hash_entry; };
@@ -241,6 +241,8 @@ struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; uint8_t ttvn; + atomic_t refcount; + struct rcu_head rcu; /* entry in the global table */ struct hlist_node hash_entry; }; diff --git a/vis.c b/vis.c index c39f20c..4c27950 100644 --- a/vis.c +++ b/vis.c @@ -680,11 +680,12 @@ next:
hash = bat_priv->tt_local_hash;
- spin_lock_bh(&bat_priv->tt_lhash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i];
- hlist_for_each_entry(tt_local_entry, node, head, hash_entry) { + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_local_entry, node, head, + hash_entry) { entry = (struct vis_info_entry *) skb_put(info->skb_packet, sizeof(*entry)); @@ -693,14 +694,12 @@ next: entry->quality = 0; /* 0 means TT */ packet->entries++;
- if (vis_packet_full(info)) { - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return 0; - } + if (vis_packet_full(info)) + goto unlock; } + rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); return 0;
unlock:
In this new patchset, patch 2/4 has been revised following Andrew's suggestions. In particular, all the changes are new comments, except for one: send_tt_response() has been slightly modified so that it no longer returns NET_RX_SUCCESS/NET_RX_DROP, which avoids confusion.
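To illustrate the idea behind that last change (this is only a sketch of the intended calling convention, not the actual diff; the bool return type and the body of recv_tt_query() shown here are assumptions, and skb handling is omitted for brevity): send_tt_response() now reports plain success or failure, and only the receive handler maps that onto the NET_RX_* codes.

/* purely illustrative sketch, not the real code */
bool send_tt_response(struct bat_priv *bat_priv,
		      struct tt_query_packet *tt_request)
{
	/* ... build and send the TT_RESPONSE packet ... */
	return true;	/* false if the response could not be sent */
}

int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if)
{
	struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface);
	struct tt_query_packet *tt_query = (struct tt_query_packet *)skb->data;

	/* the NET_RX_* semantics stay confined to the receive path */
	if (!send_tt_response(bat_priv, tt_query))
		return NET_RX_DROP;

	return NET_RX_SUCCESS;
}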
Regards, Antonio Quartulli
To be coherent, all the functions/variables/constants have been renamed to the TranslationTable style
Signed-off-by: Antonio Quartulli ordex@autistici.org --- README | 8 +- aggregation.c | 16 +- aggregation.h | 4 +- bat_debugfs.c | 4 +- hard-interface.c | 6 +- main.c | 14 +- main.h | 4 +- originator.c | 8 +- packet.h | 2 +- routing.c | 70 +++++----- routing.h | 6 +- send.c | 16 +- send.h | 2 +- soft-interface.c | 10 +- translation-table.c | 414 +++++++++++++++++++++++++------------------------- translation-table.h | 24 ++-- types.h | 24 ++-- unicast.c | 2 +- vis.c | 18 +- 19 files changed, 326 insertions(+), 326 deletions(-)
diff --git a/README b/README index 6aa36eb..47a840e 100644 --- a/README +++ b/README @@ -176,13 +176,13 @@ face. Each entry can/has to have the following values: -> "TQ mac value" - src mac's link quality towards mac address of a neighbor originator's interface which is being used for routing --> "HNA mac" - HNA announced by source mac +-> "TT mac" - TT announced by source mac -> "PRIMARY" - this is a primary interface -> "SEC mac" - secondary mac address of source (requires preceding PRIMARY)
The TQ value has a range from 4 to 255 with 255 being the best. -The HNA entries are showing which hosts are connected to the mesh +The TT entries are showing which hosts are connected to the mesh via bat0 or being bridged into the mesh network. The PRIMARY/SEC values are only applied on primary interfaces
@@ -219,7 +219,7 @@ abled during run time. Following log_levels are defined:
0 - All debug output disabled 1 - Enable messages related to routing / flooding / broadcasting -2 - Enable route or hna added / changed / deleted +2 - Enable route or tt added / changed / deleted 3 - Enable all messages
The debug output can be changed at runtime using the file @@ -227,7 +227,7 @@ The debug output can be changed at runtime using the file
# echo 2 > /sys/class/net/bat0/mesh/log_level
-will enable debug messages for when routes or HNAs change. +will enable debug messages for when routes or TTs change.
BATCTL diff --git a/aggregation.c b/aggregation.c index c11788c..9b94590 100644 --- a/aggregation.c +++ b/aggregation.c @@ -24,10 +24,10 @@ #include "send.h" #include "routing.h"
-/* calculate the size of the hna information for a given packet */ -static int hna_len(struct batman_packet *batman_packet) +/* calculate the size of the tt information for a given packet */ +static int tt_len(struct batman_packet *batman_packet) { - return batman_packet->num_hna * ETH_ALEN; + return batman_packet->num_tt * ETH_ALEN; }
/* return true if new_packet can be aggregated with forw_packet */ @@ -250,7 +250,7 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff, { struct batman_packet *batman_packet; int buff_pos = 0; - unsigned char *hna_buff; + unsigned char *tt_buff;
batman_packet = (struct batman_packet *)packet_buff;
@@ -259,14 +259,14 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff, orig_interval. */ batman_packet->seqno = ntohl(batman_packet->seqno);
- hna_buff = packet_buff + buff_pos + BAT_PACKET_LEN; + tt_buff = packet_buff + buff_pos + BAT_PACKET_LEN; receive_bat_packet(ethhdr, batman_packet, - hna_buff, hna_len(batman_packet), + tt_buff, tt_len(batman_packet), if_incoming);
- buff_pos += BAT_PACKET_LEN + hna_len(batman_packet); + buff_pos += BAT_PACKET_LEN + tt_len(batman_packet); batman_packet = (struct batman_packet *) (packet_buff + buff_pos); } while (aggregated_packet(buff_pos, packet_len, - batman_packet->num_hna)); + batman_packet->num_tt)); } diff --git a/aggregation.h b/aggregation.h index 0622042..7e6d72f 100644 --- a/aggregation.h +++ b/aggregation.h @@ -25,9 +25,9 @@ #include "main.h"
/* is there another aggregated packet here? */ -static inline int aggregated_packet(int buff_pos, int packet_len, int num_hna) +static inline int aggregated_packet(int buff_pos, int packet_len, int num_tt) { - int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_hna * ETH_ALEN); + int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_tt * ETH_ALEN);
return (next_buff_pos <= packet_len) && (next_buff_pos <= MAX_AGGREGATION_BYTES); diff --git a/bat_debugfs.c b/bat_debugfs.c index 0e9d435..abaeec5 100644 --- a/bat_debugfs.c +++ b/bat_debugfs.c @@ -241,13 +241,13 @@ static int softif_neigh_open(struct inode *inode, struct file *file) static int transtable_global_open(struct inode *inode, struct file *file) { struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, hna_global_seq_print_text, net_dev); + return single_open(file, tt_global_seq_print_text, net_dev); }
static int transtable_local_open(struct inode *inode, struct file *file) { struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, hna_local_seq_print_text, net_dev); + return single_open(file, tt_local_seq_print_text, net_dev); }
static int vis_data_open(struct inode *inode, struct file *file) diff --git a/hard-interface.c b/hard-interface.c index 3e888f1..9e4ac7d 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -157,10 +157,10 @@ static void primary_if_select(struct bat_priv *bat_priv, primary_if_update_addr(bat_priv);
/*** - * hacky trick to make sure that we send the HNA information via + * hacky trick to make sure that we send the TT information via * our new primary interface */ - atomic_set(&bat_priv->hna_local_changed, 1); + atomic_set(&bat_priv->tt_local_changed, 1);
out: spin_unlock_bh(&hardif_list_lock); @@ -345,7 +345,7 @@ int hardif_enable_interface(struct hard_iface *hard_iface, char *iface_name) batman_packet->flags = 0; batman_packet->ttl = 2; batman_packet->tq = TQ_MAX_VALUE; - batman_packet->num_hna = 0; + batman_packet->num_tt = 0;
hard_iface->if_num = bat_priv->num_ifaces; bat_priv->num_ifaces++; diff --git a/main.c b/main.c index 709b33b..2970908 100644 --- a/main.c +++ b/main.c @@ -81,8 +81,8 @@ int mesh_init(struct net_device *soft_iface)
spin_lock_init(&bat_priv->forw_bat_list_lock); spin_lock_init(&bat_priv->forw_bcast_list_lock); - spin_lock_init(&bat_priv->hna_lhash_lock); - spin_lock_init(&bat_priv->hna_ghash_lock); + spin_lock_init(&bat_priv->tt_lhash_lock); + spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); spin_lock_init(&bat_priv->vis_list_lock); @@ -96,13 +96,13 @@ int mesh_init(struct net_device *soft_iface) if (originator_init(bat_priv) < 1) goto err;
- if (hna_local_init(bat_priv) < 1) + if (tt_local_init(bat_priv) < 1) goto err;
- if (hna_global_init(bat_priv) < 1) + if (tt_global_init(bat_priv) < 1) goto err;
- hna_local_add(soft_iface, soft_iface->dev_addr); + tt_local_add(soft_iface, soft_iface->dev_addr);
if (vis_init(bat_priv) < 1) goto err; @@ -133,8 +133,8 @@ void mesh_free(struct net_device *soft_iface) gw_node_purge(bat_priv); originator_free(bat_priv);
- hna_local_free(bat_priv); - hna_global_free(bat_priv); + tt_local_free(bat_priv); + tt_global_free(bat_priv);
softif_neigh_purge(bat_priv);
diff --git a/main.h b/main.h index 101d9dc..50eb819 100644 --- a/main.h +++ b/main.h @@ -39,7 +39,7 @@ #define PURGE_TIMEOUT 200 /* purge originators after time in seconds if no * valid packet comes in -> TODO: check * influence on TQ_LOCAL_WINDOW_SIZE */ -#define LOCAL_HNA_TIMEOUT 3600 /* in seconds */ +#define TT_LOCAL_TIMEOUT 3600 /* in seconds */
#define TQ_LOCAL_WINDOW_SIZE 64 /* sliding packet range of received originator * messages in squence numbers (should be a @@ -89,7 +89,7 @@
#define DBG_BATMAN 1 /* all messages related to routing / flooding / * broadcasting / etc */ -#define DBG_ROUTES 2 /* route or hna added / changed / deleted */ +#define DBG_ROUTES 2 /* route or tt added / changed / deleted */ #define DBG_ALL 3
diff --git a/originator.c b/originator.c index ef4a9be..0314875 100644 --- a/originator.c +++ b/originator.c @@ -144,7 +144,7 @@ static void orig_node_free_rcu(struct rcu_head *rcu) spin_unlock_bh(&orig_node->neigh_list_lock);
frag_list_free(&orig_node->frag_list); - hna_global_del_orig(orig_node->bat_priv, orig_node, + tt_global_del_orig(orig_node->bat_priv, orig_node, "originator timed out");
kfree(orig_node->bcast_own); @@ -222,7 +222,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) orig_node->bat_priv = bat_priv; memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; - orig_node->hna_buff = NULL; + orig_node->tt_buff = NULL; orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1 @@ -333,8 +333,8 @@ static bool purge_orig_node(struct bat_priv *bat_priv, &best_neigh_node)) { update_routes(bat_priv, orig_node, best_neigh_node, - orig_node->hna_buff, - orig_node->hna_buff_len); + orig_node->tt_buff, + orig_node->tt_buff_len); } }
diff --git a/packet.h b/packet.h index e757187..c225c3a 100644 --- a/packet.h +++ b/packet.h @@ -61,7 +61,7 @@ struct batman_packet { uint8_t orig[6]; uint8_t prev_sender[6]; uint8_t ttl; - uint8_t num_hna; + uint8_t num_tt; uint8_t gw_flags; /* flags related to gateway class */ uint8_t align; } __packed; diff --git a/routing.c b/routing.c index 49f5715..91b3709 100644 --- a/routing.c +++ b/routing.c @@ -64,28 +64,28 @@ void slide_own_bcast_window(struct hard_iface *hard_iface) } }
-static void update_HNA(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *hna_buff, int hna_buff_len) +static void update_TT(struct bat_priv *bat_priv, struct orig_node *orig_node, + unsigned char *tt_buff, int tt_buff_len) { - if ((hna_buff_len != orig_node->hna_buff_len) || - ((hna_buff_len > 0) && - (orig_node->hna_buff_len > 0) && - (memcmp(orig_node->hna_buff, hna_buff, hna_buff_len) != 0))) { + if ((tt_buff_len != orig_node->tt_buff_len) || + ((tt_buff_len > 0) && + (orig_node->tt_buff_len > 0) && + (memcmp(orig_node->tt_buff, tt_buff, tt_buff_len) != 0))) {
- if (orig_node->hna_buff_len > 0) - hna_global_del_orig(bat_priv, orig_node, - "originator changed hna"); + if (orig_node->tt_buff_len > 0) + tt_global_del_orig(bat_priv, orig_node, + "originator changed tt");
- if ((hna_buff_len > 0) && (hna_buff)) - hna_global_add_orig(bat_priv, orig_node, - hna_buff, hna_buff_len); + if ((tt_buff_len > 0) && (tt_buff)) + tt_global_add_orig(bat_priv, orig_node, + tt_buff, tt_buff_len); } }
static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, struct neigh_node *neigh_node, - unsigned char *hna_buff, int hna_buff_len) + unsigned char *tt_buff, int tt_buff_len) { struct neigh_node *curr_router;
@@ -96,7 +96,7 @@ static void update_route(struct bat_priv *bat_priv,
bat_dbg(DBG_ROUTES, bat_priv, "Deleting route towards: %pM\n", orig_node->orig); - hna_global_del_orig(bat_priv, orig_node, + tt_global_del_orig(bat_priv, orig_node, "originator timed out");
/* route added */ @@ -105,8 +105,8 @@ static void update_route(struct bat_priv *bat_priv, bat_dbg(DBG_ROUTES, bat_priv, "Adding route towards: %pM (via %pM)\n", orig_node->orig, neigh_node->addr); - hna_global_add_orig(bat_priv, orig_node, - hna_buff, hna_buff_len); + tt_global_add_orig(bat_priv, orig_node, + tt_buff, tt_buff_len);
/* route changed */ } else { @@ -135,8 +135,8 @@ static void update_route(struct bat_priv *bat_priv,
void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *hna_buff, - int hna_buff_len) + struct neigh_node *neigh_node, unsigned char *tt_buff, + int tt_buff_len) { struct neigh_node *router = NULL;
@@ -147,10 +147,10 @@ void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node,
if (router != neigh_node) update_route(bat_priv, orig_node, neigh_node, - hna_buff, hna_buff_len); - /* may be just HNA changed */ + tt_buff, tt_buff_len); + /* may be just TT changed */ else - update_HNA(bat_priv, orig_node, hna_buff, hna_buff_len); + update_TT(bat_priv, orig_node, tt_buff, tt_buff_len);
out: if (router) @@ -387,14 +387,14 @@ static void update_orig(struct bat_priv *bat_priv, struct ethhdr *ethhdr, struct batman_packet *batman_packet, struct hard_iface *if_incoming, - unsigned char *hna_buff, int hna_buff_len, + unsigned char *tt_buff, int tt_buff_len, char is_duplicate) { struct neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; struct neigh_node *router = NULL; struct orig_node *orig_node_tmp; struct hlist_node *node; - int tmp_hna_buff_len; + int tmp_tt_buff_len; uint8_t bcast_own_sum_orig, bcast_own_sum_neigh;
bat_dbg(DBG_BATMAN, bat_priv, "update_originator(): " @@ -459,18 +459,18 @@ static void update_orig(struct bat_priv *bat_priv,
bonding_candidate_add(orig_node, neigh_node);
- tmp_hna_buff_len = (hna_buff_len > batman_packet->num_hna * ETH_ALEN ? - batman_packet->num_hna * ETH_ALEN : hna_buff_len); + tmp_tt_buff_len = (tt_buff_len > batman_packet->num_tt * ETH_ALEN ? + batman_packet->num_tt * ETH_ALEN : tt_buff_len);
/* if this neighbor already is our next hop there is nothing * to change */ router = orig_node_get_router(orig_node); if (router == neigh_node) - goto update_hna; + goto update_tt;
/* if this neighbor does not offer a better TQ we won't consider it */ if (router && (router->tq_avg > neigh_node->tq_avg)) - goto update_hna; + goto update_tt;
/* if the TQ is the same and the link not more symetric we * won't consider it either */ @@ -488,16 +488,16 @@ static void update_orig(struct bat_priv *bat_priv, spin_unlock_bh(&orig_node_tmp->ogm_cnt_lock);
if (bcast_own_sum_orig >= bcast_own_sum_neigh) - goto update_hna; + goto update_tt; }
update_routes(bat_priv, orig_node, neigh_node, - hna_buff, tmp_hna_buff_len); + tt_buff, tmp_tt_buff_len); goto update_gw;
-update_hna: +update_tt: update_routes(bat_priv, orig_node, router, - hna_buff, tmp_hna_buff_len); + tt_buff, tmp_tt_buff_len);
update_gw: if (orig_node->gw_flags != batman_packet->gw_flags) @@ -621,7 +621,7 @@ out:
void receive_bat_packet(struct ethhdr *ethhdr, struct batman_packet *batman_packet, - unsigned char *hna_buff, int hna_buff_len, + unsigned char *tt_buff, int tt_buff_len, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -818,14 +818,14 @@ void receive_bat_packet(struct ethhdr *ethhdr, ((orig_node->last_real_seqno == batman_packet->seqno) && (orig_node->last_ttl - 3 <= batman_packet->ttl)))) update_orig(bat_priv, orig_node, ethhdr, batman_packet, - if_incoming, hna_buff, hna_buff_len, is_duplicate); + if_incoming, tt_buff, tt_buff_len, is_duplicate);
/* is single hop (direct) neighbor */ if (is_single_hop_neigh) {
/* mark direct link on incoming interface */ schedule_forward_packet(orig_node, ethhdr, batman_packet, - 1, hna_buff_len, if_incoming); + 1, tt_buff_len, if_incoming);
bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: " "rebroadcast neighbor packet with direct link flag\n"); @@ -848,7 +848,7 @@ void receive_bat_packet(struct ethhdr *ethhdr, bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: rebroadcast originator packet\n"); schedule_forward_packet(orig_node, ethhdr, batman_packet, - 0, hna_buff_len, if_incoming); + 0, tt_buff_len, if_incoming);
out_neigh: if ((orig_neigh_node) && (!is_single_hop_neigh)) diff --git a/routing.h b/routing.h index b5a064c..870f298 100644 --- a/routing.h +++ b/routing.h @@ -25,11 +25,11 @@ void slide_own_bcast_window(struct hard_iface *hard_iface); void receive_bat_packet(struct ethhdr *ethhdr, struct batman_packet *batman_packet, - unsigned char *hna_buff, int hna_buff_len, + unsigned char *tt_buff, int tt_buff_len, struct hard_iface *if_incoming); void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *hna_buff, - int hna_buff_len); + struct neigh_node *neigh_node, unsigned char *tt_buff, + int tt_buff_len); int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); diff --git a/send.c b/send.c index 02b541a..f30d0c6 100644 --- a/send.c +++ b/send.c @@ -121,7 +121,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, /* adjust all flags and log packets */ while (aggregated_packet(buff_pos, forw_packet->packet_len, - batman_packet->num_hna)) { + batman_packet->num_tt)) {
/* we might have aggregated direct link packets with an * ordinary base packet */ @@ -146,7 +146,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, hard_iface->net_dev->dev_addr);
buff_pos += sizeof(struct batman_packet) + - (batman_packet->num_hna * ETH_ALEN); + (batman_packet->num_tt * ETH_ALEN); packet_num++; batman_packet = (struct batman_packet *) (forw_packet->skb->data + buff_pos); @@ -222,7 +222,7 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, struct batman_packet *batman_packet;
new_len = sizeof(struct batman_packet) + - (bat_priv->num_local_hna * ETH_ALEN); + (bat_priv->num_local_tt * ETH_ALEN); new_buff = kmalloc(new_len, GFP_ATOMIC);
/* keep old buffer if kmalloc should fail */ @@ -231,7 +231,7 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, sizeof(struct batman_packet)); batman_packet = (struct batman_packet *)new_buff;
- batman_packet->num_hna = hna_local_fill_buffer(bat_priv, + batman_packet->num_tt = tt_local_fill_buffer(bat_priv, new_buff + sizeof(struct batman_packet), new_len - sizeof(struct batman_packet));
@@ -266,8 +266,8 @@ void schedule_own_packet(struct hard_iface *hard_iface) if (hard_iface->if_status == IF_TO_BE_ACTIVATED) hard_iface->if_status = IF_ACTIVE;
- /* if local hna has changed and interface is a primary interface */ - if ((atomic_read(&bat_priv->hna_local_changed)) && + /* if local tt has changed and interface is a primary interface */ + if ((atomic_read(&bat_priv->tt_local_changed)) && (hard_iface == primary_if)) rebuild_batman_packet(bat_priv, hard_iface);
@@ -309,7 +309,7 @@ void schedule_own_packet(struct hard_iface *hard_iface) void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int hna_buff_len, + uint8_t directlink, int tt_buff_len, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -369,7 +369,7 @@ void schedule_forward_packet(struct orig_node *orig_node, send_time = forward_send_time(); add_bat_packet_to_list(bat_priv, (unsigned char *)batman_packet, - sizeof(struct batman_packet) + hna_buff_len, + sizeof(struct batman_packet) + tt_buff_len, if_incoming, 0, send_time); }
diff --git a/send.h b/send.h index 7b2ff19..247172d 100644 --- a/send.h +++ b/send.h @@ -29,7 +29,7 @@ void schedule_own_packet(struct hard_iface *hard_iface); void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int hna_buff_len, + uint8_t directlink, int tt_buff_len, struct hard_iface *if_outgoing); int add_bcast_packet_to_list(struct bat_priv *bat_priv, struct sk_buff *skb); void send_outstanding_bat_packet(struct work_struct *work); diff --git a/soft-interface.c b/soft-interface.c index 1772e2b..89a940a 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -363,11 +363,11 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) if (!is_valid_ether_addr(addr->sa_data)) return -EADDRNOTAVAIL;
- /* only modify hna-table if it has been initialised before */ + /* only modify tt-table if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { - hna_local_remove(bat_priv, dev->dev_addr, + tt_local_remove(bat_priv, dev->dev_addr, "mac address changed"); - hna_local_add(dev, addr->sa_data); + tt_local_add(dev, addr->sa_data); }
memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN); @@ -425,7 +425,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) goto dropped;
/* TODO: check this for locks */ - hna_local_add(soft_iface, ethhdr->h_source); + tt_local_add(soft_iface, ethhdr->h_source);
if (is_multicast_ether_addr(ethhdr->h_dest)) { ret = gw_is_target(bat_priv, skb); @@ -663,7 +663,7 @@ struct net_device *softif_create(char *name)
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1); - atomic_set(&bat_priv->hna_local_changed, 0); + atomic_set(&bat_priv->tt_local_changed, 0);
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index f931830..25e6939 100644 --- a/translation-table.c +++ b/translation-table.c @@ -26,40 +26,40 @@ #include "hash.h" #include "originator.h"
-static void hna_local_purge(struct work_struct *work); -static void _hna_global_del_orig(struct bat_priv *bat_priv, - struct hna_global_entry *hna_global_entry, +static void tt_local_purge(struct work_struct *work); +static void _tt_global_del_orig(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, char *message);
/* returns 1 if they are the same mac addr */ -static int compare_lhna(struct hlist_node *node, void *data2) +static int compare_ltt(struct hlist_node *node, void *data2) { - void *data1 = container_of(node, struct hna_local_entry, hash_entry); + void *data1 = container_of(node, struct tt_local_entry, hash_entry);
return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
/* returns 1 if they are the same mac addr */ -static int compare_ghna(struct hlist_node *node, void *data2) +static int compare_gtt(struct hlist_node *node, void *data2) { - void *data1 = container_of(node, struct hna_global_entry, hash_entry); + void *data1 = container_of(node, struct tt_global_entry, hash_entry);
return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
-static void hna_local_start_timer(struct bat_priv *bat_priv) +static void tt_local_start_timer(struct bat_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->hna_work, hna_local_purge); - queue_delayed_work(bat_event_workqueue, &bat_priv->hna_work, 10 * HZ); + INIT_DELAYED_WORK(&bat_priv->tt_work, tt_local_purge); + queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, 10 * HZ); }
-static struct hna_local_entry *hna_local_hash_find(struct bat_priv *bat_priv, +static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, void *data) { - struct hashtable_t *hash = bat_priv->hna_local_hash; + struct hashtable_t *hash = bat_priv->tt_local_hash; struct hlist_head *head; struct hlist_node *node; - struct hna_local_entry *hna_local_entry, *hna_local_entry_tmp = NULL; + struct tt_local_entry *tt_local_entry, *tt_local_entry_tmp = NULL; int index;
if (!hash) @@ -69,26 +69,26 @@ static struct hna_local_entry *hna_local_hash_find(struct bat_priv *bat_priv, head = &hash->table[index];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_local_entry, node, head, hash_entry) { - if (!compare_eth(hna_local_entry, data)) + hlist_for_each_entry_rcu(tt_local_entry, node, head, hash_entry) { + if (!compare_eth(tt_local_entry, data)) continue;
- hna_local_entry_tmp = hna_local_entry; + tt_local_entry_tmp = tt_local_entry; break; } rcu_read_unlock();
- return hna_local_entry_tmp; + return tt_local_entry_tmp; }
-static struct hna_global_entry *hna_global_hash_find(struct bat_priv *bat_priv, +static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, void *data) { - struct hashtable_t *hash = bat_priv->hna_global_hash; + struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_head *head; struct hlist_node *node; - struct hna_global_entry *hna_global_entry; - struct hna_global_entry *hna_global_entry_tmp = NULL; + struct tt_global_entry *tt_global_entry; + struct tt_global_entry *tt_global_entry_tmp = NULL; int index;
if (!hash) @@ -98,125 +98,125 @@ static struct hna_global_entry *hna_global_hash_find(struct bat_priv *bat_priv, head = &hash->table[index];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_global_entry, node, head, hash_entry) { - if (!compare_eth(hna_global_entry, data)) + hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) { + if (!compare_eth(tt_global_entry, data)) continue;
- hna_global_entry_tmp = hna_global_entry; + tt_global_entry_tmp = tt_global_entry; break; } rcu_read_unlock();
- return hna_global_entry_tmp; + return tt_global_entry_tmp; }
-int hna_local_init(struct bat_priv *bat_priv) +int tt_local_init(struct bat_priv *bat_priv) { - if (bat_priv->hna_local_hash) + if (bat_priv->tt_local_hash) return 1;
- bat_priv->hna_local_hash = hash_new(1024); + bat_priv->tt_local_hash = hash_new(1024);
- if (!bat_priv->hna_local_hash) + if (!bat_priv->tt_local_hash) return 0;
- atomic_set(&bat_priv->hna_local_changed, 0); - hna_local_start_timer(bat_priv); + atomic_set(&bat_priv->tt_local_changed, 0); + tt_local_start_timer(bat_priv);
return 1; }
-void hna_local_add(struct net_device *soft_iface, uint8_t *addr) +void tt_local_add(struct net_device *soft_iface, uint8_t *addr) { struct bat_priv *bat_priv = netdev_priv(soft_iface); - struct hna_local_entry *hna_local_entry; - struct hna_global_entry *hna_global_entry; + struct tt_local_entry *tt_local_entry; + struct tt_global_entry *tt_global_entry; int required_bytes;
- spin_lock_bh(&bat_priv->hna_lhash_lock); - hna_local_entry = hna_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = tt_local_hash_find(bat_priv, addr); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
- if (hna_local_entry) { - hna_local_entry->last_seen = jiffies; + if (tt_local_entry) { + tt_local_entry->last_seen = jiffies; return; }
/* only announce as many hosts as possible in the batman-packet and - space in batman_packet->num_hna That also should give a limit to + space in batman_packet->num_tt That also should give a limit to MAC-flooding. */ - required_bytes = (bat_priv->num_local_hna + 1) * ETH_ALEN; + required_bytes = (bat_priv->num_local_tt + 1) * ETH_ALEN; required_bytes += BAT_PACKET_LEN;
if ((required_bytes > ETH_DATA_LEN) || (atomic_read(&bat_priv->aggregated_ogms) && required_bytes > MAX_AGGREGATION_BYTES) || - (bat_priv->num_local_hna + 1 > 255)) { + (bat_priv->num_local_tt + 1 > 255)) { bat_dbg(DBG_ROUTES, bat_priv, - "Can't add new local hna entry (%pM): " - "number of local hna entries exceeds packet size\n", + "Can't add new local tt entry (%pM): " + "number of local tt entries exceeds packet size\n", addr); return; }
bat_dbg(DBG_ROUTES, bat_priv, - "Creating new local hna entry: %pM\n", addr); + "Creating new local tt entry: %pM\n", addr);
- hna_local_entry = kmalloc(sizeof(struct hna_local_entry), GFP_ATOMIC); - if (!hna_local_entry) + tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); + if (!tt_local_entry) return;
- memcpy(hna_local_entry->addr, addr, ETH_ALEN); - hna_local_entry->last_seen = jiffies; + memcpy(tt_local_entry->addr, addr, ETH_ALEN); + tt_local_entry->last_seen = jiffies;
/* the batman interface mac address should never be purged */ if (compare_eth(addr, soft_iface->dev_addr)) - hna_local_entry->never_purge = 1; + tt_local_entry->never_purge = 1; else - hna_local_entry->never_purge = 0; + tt_local_entry->never_purge = 0;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
- hash_add(bat_priv->hna_local_hash, compare_lhna, choose_orig, - hna_local_entry, &hna_local_entry->hash_entry); - bat_priv->num_local_hna++; - atomic_set(&bat_priv->hna_local_changed, 1); + hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, + tt_local_entry, &tt_local_entry->hash_entry); + bat_priv->num_local_tt++; + atomic_set(&bat_priv->tt_local_changed, 1);
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ - spin_lock_bh(&bat_priv->hna_ghash_lock); + spin_lock_bh(&bat_priv->tt_ghash_lock);
- hna_global_entry = hna_global_hash_find(bat_priv, addr); + tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (hna_global_entry) - _hna_global_del_orig(bat_priv, hna_global_entry, - "local hna received"); + if (tt_global_entry) + _tt_global_del_orig(bat_priv, tt_global_entry, + "local tt received");
- spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock); }
-int hna_local_fill_buffer(struct bat_priv *bat_priv, +int tt_local_fill_buffer(struct bat_priv *bat_priv, unsigned char *buff, int buff_len) { - struct hashtable_t *hash = bat_priv->hna_local_hash; - struct hna_local_entry *hna_local_entry; + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; struct hlist_node *node; struct hlist_head *head; int i, count = 0;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
for (i = 0; i < hash->size; i++) { head = &hash->table[i];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_local_entry, node, + hlist_for_each_entry_rcu(tt_local_entry, node, head, hash_entry) { if (buff_len < (count + 1) * ETH_ALEN) break;
- memcpy(buff + (count * ETH_ALEN), hna_local_entry->addr, + memcpy(buff + (count * ETH_ALEN), tt_local_entry->addr, ETH_ALEN);
count++; @@ -224,20 +224,20 @@ int hna_local_fill_buffer(struct bat_priv *bat_priv, rcu_read_unlock(); }
- /* if we did not get all new local hnas see you next time ;-) */ - if (count == bat_priv->num_local_hna) - atomic_set(&bat_priv->hna_local_changed, 0); + /* if we did not get all new local tts see you next time ;-) */ + if (count == bat_priv->num_local_tt) + atomic_set(&bat_priv->tt_local_changed, 0);
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return count; }
-int hna_local_seq_print_text(struct seq_file *seq, void *offset) +int tt_local_seq_print_text(struct seq_file *seq, void *offset) { struct net_device *net_dev = (struct net_device *)seq->private; struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hashtable_t *hash = bat_priv->hna_local_hash; - struct hna_local_entry *hna_local_entry; + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; struct hard_iface *primary_if; struct hlist_node *node; struct hlist_head *head; @@ -261,10 +261,10 @@ int hna_local_seq_print_text(struct seq_file *seq, void *offset) }
seq_printf(seq, "Locally retrieved addresses (from %s) " - "announced via HNA:\n", + "announced via TT:\n", net_dev->name);
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx\n" */ @@ -279,7 +279,7 @@ int hna_local_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); ret = -ENOMEM; goto out; } @@ -291,15 +291,15 @@ int hna_local_seq_print_text(struct seq_file *seq, void *offset) head = &hash->table[i];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_local_entry, node, + hlist_for_each_entry_rcu(tt_local_entry, node, head, hash_entry) { pos += snprintf(buff + pos, 22, " * %pM\n", - hna_local_entry->addr); + tt_local_entry->addr); } rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
seq_printf(seq, "%s", buff); kfree(buff); @@ -309,180 +309,180 @@ out: return ret; }
-static void _hna_local_del(struct hlist_node *node, void *arg) +static void _tt_local_del(struct hlist_node *node, void *arg) { struct bat_priv *bat_priv = (struct bat_priv *)arg; - void *data = container_of(node, struct hna_local_entry, hash_entry); + void *data = container_of(node, struct tt_local_entry, hash_entry);
kfree(data); - bat_priv->num_local_hna--; - atomic_set(&bat_priv->hna_local_changed, 1); + bat_priv->num_local_tt--; + atomic_set(&bat_priv->tt_local_changed, 1); }
-static void hna_local_del(struct bat_priv *bat_priv, - struct hna_local_entry *hna_local_entry, +static void tt_local_del(struct bat_priv *bat_priv, + struct tt_local_entry *tt_local_entry, char *message) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting local hna entry (%pM): %s\n", - hna_local_entry->addr, message); + bat_dbg(DBG_ROUTES, bat_priv, "Deleting local tt entry (%pM): %s\n", + tt_local_entry->addr, message);
- hash_remove(bat_priv->hna_local_hash, compare_lhna, choose_orig, - hna_local_entry->addr); - _hna_local_del(&hna_local_entry->hash_entry, bat_priv); + hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, + tt_local_entry->addr); + _tt_local_del(&tt_local_entry->hash_entry, bat_priv); }
-void hna_local_remove(struct bat_priv *bat_priv, +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) { - struct hna_local_entry *hna_local_entry; + struct tt_local_entry *tt_local_entry;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
- hna_local_entry = hna_local_hash_find(bat_priv, addr); + tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (hna_local_entry) - hna_local_del(bat_priv, hna_local_entry, message); + if (tt_local_entry) + tt_local_del(bat_priv, tt_local_entry, message);
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-static void hna_local_purge(struct work_struct *work) +static void tt_local_purge(struct work_struct *work) { struct delayed_work *delayed_work = container_of(work, struct delayed_work, work); struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, hna_work); - struct hashtable_t *hash = bat_priv->hna_local_hash; - struct hna_local_entry *hna_local_entry; + container_of(delayed_work, struct bat_priv, tt_work); + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; unsigned long timeout; int i;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
for (i = 0; i < hash->size; i++) { head = &hash->table[i];
- hlist_for_each_entry_safe(hna_local_entry, node, node_tmp, + hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { - if (hna_local_entry->never_purge) + if (tt_local_entry->never_purge) continue;
- timeout = hna_local_entry->last_seen; - timeout += LOCAL_HNA_TIMEOUT * HZ; + timeout = tt_local_entry->last_seen; + timeout += TT_LOCAL_TIMEOUT * HZ;
if (time_before(jiffies, timeout)) continue;
- hna_local_del(bat_priv, hna_local_entry, + tt_local_del(bat_priv, tt_local_entry, "address timed out"); } }
- spin_unlock_bh(&bat_priv->hna_lhash_lock); - hna_local_start_timer(bat_priv); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + tt_local_start_timer(bat_priv); }
-void hna_local_free(struct bat_priv *bat_priv) +void tt_local_free(struct bat_priv *bat_priv) { - if (!bat_priv->hna_local_hash) + if (!bat_priv->tt_local_hash) return;
- cancel_delayed_work_sync(&bat_priv->hna_work); - hash_delete(bat_priv->hna_local_hash, _hna_local_del, bat_priv); - bat_priv->hna_local_hash = NULL; + cancel_delayed_work_sync(&bat_priv->tt_work); + hash_delete(bat_priv->tt_local_hash, _tt_local_del, bat_priv); + bat_priv->tt_local_hash = NULL; }
-int hna_global_init(struct bat_priv *bat_priv) +int tt_global_init(struct bat_priv *bat_priv) { - if (bat_priv->hna_global_hash) + if (bat_priv->tt_global_hash) return 1;
- bat_priv->hna_global_hash = hash_new(1024); + bat_priv->tt_global_hash = hash_new(1024);
- if (!bat_priv->hna_global_hash) + if (!bat_priv->tt_global_hash) return 0;
return 1; }
-void hna_global_add_orig(struct bat_priv *bat_priv, +void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *hna_buff, int hna_buff_len) + unsigned char *tt_buff, int tt_buff_len) { - struct hna_global_entry *hna_global_entry; - struct hna_local_entry *hna_local_entry; - int hna_buff_count = 0; - unsigned char *hna_ptr; + struct tt_global_entry *tt_global_entry; + struct tt_local_entry *tt_local_entry; + int tt_buff_count = 0; + unsigned char *tt_ptr;
- while ((hna_buff_count + 1) * ETH_ALEN <= hna_buff_len) { - spin_lock_bh(&bat_priv->hna_ghash_lock); + while ((tt_buff_count + 1) * ETH_ALEN <= tt_buff_len) { + spin_lock_bh(&bat_priv->tt_ghash_lock);
- hna_ptr = hna_buff + (hna_buff_count * ETH_ALEN); - hna_global_entry = hna_global_hash_find(bat_priv, hna_ptr); + tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); + tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr);
- if (!hna_global_entry) { - spin_unlock_bh(&bat_priv->hna_ghash_lock); + if (!tt_global_entry) { + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- hna_global_entry = - kmalloc(sizeof(struct hna_global_entry), + tt_global_entry = + kmalloc(sizeof(struct tt_global_entry), GFP_ATOMIC);
- if (!hna_global_entry) + if (!tt_global_entry) break;
- memcpy(hna_global_entry->addr, hna_ptr, ETH_ALEN); + memcpy(tt_global_entry->addr, tt_ptr, ETH_ALEN);
bat_dbg(DBG_ROUTES, bat_priv, - "Creating new global hna entry: " + "Creating new global tt entry: " "%pM (via %pM)\n", - hna_global_entry->addr, orig_node->orig); + tt_global_entry->addr, orig_node->orig);
- spin_lock_bh(&bat_priv->hna_ghash_lock); - hash_add(bat_priv->hna_global_hash, compare_ghna, - choose_orig, hna_global_entry, - &hna_global_entry->hash_entry); + spin_lock_bh(&bat_priv->tt_ghash_lock); + hash_add(bat_priv->tt_global_hash, compare_gtt, + choose_orig, tt_global_entry, + &tt_global_entry->hash_entry);
}
- hna_global_entry->orig_node = orig_node; - spin_unlock_bh(&bat_priv->hna_ghash_lock); + tt_global_entry->orig_node = orig_node; + spin_unlock_bh(&bat_priv->tt_ghash_lock);
/* remove address from local hash if present */ - spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
- hna_ptr = hna_buff + (hna_buff_count * ETH_ALEN); - hna_local_entry = hna_local_hash_find(bat_priv, hna_ptr); + tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); + tt_local_entry = tt_local_hash_find(bat_priv, tt_ptr);
- if (hna_local_entry) - hna_local_del(bat_priv, hna_local_entry, - "global hna received"); + if (tt_local_entry) + tt_local_del(bat_priv, tt_local_entry, + "global tt received");
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
- hna_buff_count++; + tt_buff_count++; }
/* initialize, and overwrite if malloc succeeds */ - orig_node->hna_buff = NULL; - orig_node->hna_buff_len = 0; + orig_node->tt_buff = NULL; + orig_node->tt_buff_len = 0;
- if (hna_buff_len > 0) { - orig_node->hna_buff = kmalloc(hna_buff_len, GFP_ATOMIC); - if (orig_node->hna_buff) { - memcpy(orig_node->hna_buff, hna_buff, hna_buff_len); - orig_node->hna_buff_len = hna_buff_len; + if (tt_buff_len > 0) { + orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); + if (orig_node->tt_buff) { + memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); + orig_node->tt_buff_len = tt_buff_len; } } }
-int hna_global_seq_print_text(struct seq_file *seq, void *offset) +int tt_global_seq_print_text(struct seq_file *seq, void *offset) { struct net_device *net_dev = (struct net_device *)seq->private; struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hashtable_t *hash = bat_priv->hna_global_hash; - struct hna_global_entry *hna_global_entry; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct tt_global_entry *tt_global_entry; struct hard_iface *primary_if; struct hlist_node *node; struct hlist_head *head; @@ -505,10 +505,10 @@ int hna_global_seq_print_text(struct seq_file *seq, void *offset) goto out; }
- seq_printf(seq, "Globally announced HNAs received via the mesh %s\n", + seq_printf(seq, "Globally announced TTs received via the mesh %s\n", net_dev->name);
- spin_lock_bh(&bat_priv->hna_ghash_lock); + spin_lock_bh(&bat_priv->tt_ghash_lock);
buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx via xx:xx:xx:xx:xx:xx\n"*/ @@ -523,7 +523,7 @@ int hna_global_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock); ret = -ENOMEM; goto out; } @@ -534,17 +534,17 @@ int hna_global_seq_print_text(struct seq_file *seq, void *offset) head = &hash->table[i];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_global_entry, node, + hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) { pos += snprintf(buff + pos, 44, " * %pM via %pM\n", - hna_global_entry->addr, - hna_global_entry->orig_node->orig); + tt_global_entry->addr, + tt_global_entry->orig_node->orig); } rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock);
seq_printf(seq, "%s", buff); kfree(buff); @@ -554,84 +554,84 @@ out: return ret; }
-static void _hna_global_del_orig(struct bat_priv *bat_priv, - struct hna_global_entry *hna_global_entry, +static void _tt_global_del_orig(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, char *message) { bat_dbg(DBG_ROUTES, bat_priv, - "Deleting global hna entry %pM (via %pM): %s\n", - hna_global_entry->addr, hna_global_entry->orig_node->orig, + "Deleting global tt entry %pM (via %pM): %s\n", + tt_global_entry->addr, tt_global_entry->orig_node->orig, message);
- hash_remove(bat_priv->hna_global_hash, compare_ghna, choose_orig, - hna_global_entry->addr); - kfree(hna_global_entry); + hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, + tt_global_entry->addr); + kfree(tt_global_entry); }
-void hna_global_del_orig(struct bat_priv *bat_priv, +void tt_global_del_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, char *message) { - struct hna_global_entry *hna_global_entry; - int hna_buff_count = 0; - unsigned char *hna_ptr; + struct tt_global_entry *tt_global_entry; + int tt_buff_count = 0; + unsigned char *tt_ptr;
- if (orig_node->hna_buff_len == 0) + if (orig_node->tt_buff_len == 0) return;
- spin_lock_bh(&bat_priv->hna_ghash_lock); + spin_lock_bh(&bat_priv->tt_ghash_lock);
- while ((hna_buff_count + 1) * ETH_ALEN <= orig_node->hna_buff_len) { - hna_ptr = orig_node->hna_buff + (hna_buff_count * ETH_ALEN); - hna_global_entry = hna_global_hash_find(bat_priv, hna_ptr); + while ((tt_buff_count + 1) * ETH_ALEN <= orig_node->tt_buff_len) { + tt_ptr = orig_node->tt_buff + (tt_buff_count * ETH_ALEN); + tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr);
- if ((hna_global_entry) && - (hna_global_entry->orig_node == orig_node)) - _hna_global_del_orig(bat_priv, hna_global_entry, + if ((tt_global_entry) && + (tt_global_entry->orig_node == orig_node)) + _tt_global_del_orig(bat_priv, tt_global_entry, message);
- hna_buff_count++; + tt_buff_count++; }
- spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- orig_node->hna_buff_len = 0; - kfree(orig_node->hna_buff); - orig_node->hna_buff = NULL; + orig_node->tt_buff_len = 0; + kfree(orig_node->tt_buff); + orig_node->tt_buff = NULL; }
-static void hna_global_del(struct hlist_node *node, void *arg) +static void tt_global_del(struct hlist_node *node, void *arg) { - void *data = container_of(node, struct hna_global_entry, hash_entry); + void *data = container_of(node, struct tt_global_entry, hash_entry);
kfree(data); }
-void hna_global_free(struct bat_priv *bat_priv) +void tt_global_free(struct bat_priv *bat_priv) { - if (!bat_priv->hna_global_hash) + if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->hna_global_hash, hna_global_del, NULL); - bat_priv->hna_global_hash = NULL; + hash_delete(bat_priv->tt_global_hash, tt_global_del, NULL); + bat_priv->tt_global_hash = NULL; }
struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr) { - struct hna_global_entry *hna_global_entry; + struct tt_global_entry *tt_global_entry; struct orig_node *orig_node = NULL;
- spin_lock_bh(&bat_priv->hna_ghash_lock); - hna_global_entry = hna_global_hash_find(bat_priv, addr); + spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (!hna_global_entry) + if (!tt_global_entry) goto out;
- if (!atomic_inc_not_zero(&hna_global_entry->orig_node->refcount)) + if (!atomic_inc_not_zero(&tt_global_entry->orig_node->refcount)) goto out;
- orig_node = hna_global_entry->orig_node; + orig_node = tt_global_entry->orig_node;
out: - spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; } diff --git a/translation-table.h b/translation-table.h index f19931c..46152c3 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,22 +22,22 @@ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
-int hna_local_init(struct bat_priv *bat_priv); -void hna_local_add(struct net_device *soft_iface, uint8_t *addr); -void hna_local_remove(struct bat_priv *bat_priv, +int tt_local_init(struct bat_priv *bat_priv); +void tt_local_add(struct net_device *soft_iface, uint8_t *addr); +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message); -int hna_local_fill_buffer(struct bat_priv *bat_priv, +int tt_local_fill_buffer(struct bat_priv *bat_priv, unsigned char *buff, int buff_len); -int hna_local_seq_print_text(struct seq_file *seq, void *offset); -void hna_local_free(struct bat_priv *bat_priv); -int hna_global_init(struct bat_priv *bat_priv); -void hna_global_add_orig(struct bat_priv *bat_priv, +int tt_local_seq_print_text(struct seq_file *seq, void *offset); +void tt_local_free(struct bat_priv *bat_priv); +int tt_global_init(struct bat_priv *bat_priv); +void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *hna_buff, int hna_buff_len); -int hna_global_seq_print_text(struct seq_file *seq, void *offset); -void hna_global_del_orig(struct bat_priv *bat_priv, + unsigned char *tt_buff, int tt_buff_len); +int tt_global_seq_print_text(struct seq_file *seq, void *offset); +void tt_global_del_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, char *message); -void hna_global_free(struct bat_priv *bat_priv); +void tt_global_free(struct bat_priv *bat_priv); struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index 947bafc..b8c72c3 100644 --- a/types.h +++ b/types.h @@ -75,8 +75,8 @@ struct orig_node { unsigned long batman_seqno_reset; uint8_t gw_flags; uint8_t flags; - unsigned char *hna_buff; - int16_t hna_buff_len; + unsigned char *tt_buff; + int16_t tt_buff_len; uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -156,20 +156,20 @@ struct bat_priv { struct hlist_head gw_list; struct list_head vis_send_list; struct hashtable_t *orig_hash; - struct hashtable_t *hna_local_hash; - struct hashtable_t *hna_global_hash; + struct hashtable_t *tt_local_hash; + struct hashtable_t *tt_global_hash; struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ - spinlock_t hna_lhash_lock; /* protects hna_local_hash */ - spinlock_t hna_ghash_lock; /* protects hna_global_hash */ + spinlock_t tt_lhash_lock; /* protects tt_local_hash */ + spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ spinlock_t softif_neigh_lock; /* protects soft-interface neigh list */ - int16_t num_local_hna; - atomic_t hna_local_changed; - struct delayed_work hna_work; + int16_t num_local_tt; + atomic_t tt_local_changed; + struct delayed_work tt_work; struct delayed_work orig_work; struct delayed_work vis_work; struct gw_node __rcu *curr_gw; /* rcu protected pointer */ @@ -192,14 +192,14 @@ struct socket_packet { struct icmp_packet_rr icmp_packet; };
-struct hna_local_entry { +struct tt_local_entry { uint8_t addr[ETH_ALEN]; unsigned long last_seen; char never_purge; struct hlist_node hash_entry; };
-struct hna_global_entry { +struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; struct hlist_node hash_entry; @@ -262,7 +262,7 @@ struct vis_info { struct vis_info_entry { uint8_t src[ETH_ALEN]; uint8_t dest[ETH_ALEN]; - uint8_t quality; /* quality = 0 means HNA */ + uint8_t quality; /* quality = 0 means TT */ } __packed;
struct recvlist_node { diff --git a/unicast.c b/unicast.c index b46cbf1..19c3daf 100644 --- a/unicast.c +++ b/unicast.c @@ -300,7 +300,7 @@ int unicast_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv) goto find_router; }
- /* check for hna host - increases orig_node refcount */ + /* check for tt host - increases orig_node refcount */ orig_node = transtable_search(bat_priv, ethhdr->h_dest);
find_router: diff --git a/vis.c b/vis.c index c8f571d..c39f20c 100644 --- a/vis.c +++ b/vis.c @@ -194,7 +194,7 @@ static ssize_t vis_data_read_entry(char *buff, struct vis_info_entry *entry, { /* maximal length: max(4+17+2, 3+17+1+3+2) == 26 */ if (primary && entry->quality == 0) - return sprintf(buff, "HNA %pM, ", entry->dest); + return sprintf(buff, "TT %pM, ", entry->dest); else if (compare_eth(entry->src, src)) return sprintf(buff, "TQ %pM %d, ", entry->dest, entry->quality); @@ -622,7 +622,7 @@ static int generate_vis_packet(struct bat_priv *bat_priv) struct vis_info *info = (struct vis_info *)bat_priv->my_vis_info; struct vis_packet *packet = (struct vis_packet *)info->skb_packet->data; struct vis_info_entry *entry; - struct hna_local_entry *hna_local_entry; + struct tt_local_entry *tt_local_entry; int best_tq = -1, i;
info->first_seen = jiffies; @@ -678,29 +678,29 @@ next: rcu_read_unlock(); }
- hash = bat_priv->hna_local_hash; + hash = bat_priv->tt_local_hash;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i];
- hlist_for_each_entry(hna_local_entry, node, head, hash_entry) { + hlist_for_each_entry(tt_local_entry, node, head, hash_entry) { entry = (struct vis_info_entry *) skb_put(info->skb_packet, sizeof(*entry)); memset(entry->src, 0, ETH_ALEN); - memcpy(entry->dest, hna_local_entry->addr, ETH_ALEN); - entry->quality = 0; /* 0 means HNA */ + memcpy(entry->dest, tt_local_entry->addr, ETH_ALEN); + entry->quality = 0; /* 0 means TT */ packet->entries++;
if (vis_packet_full(info)) { - spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return 0; } } }
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return 0;
unlock:
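As a side note on the renamed lookup API shown above: transtable_search() still returns the orig_node that announced a given client and takes a reference on it, so callers have to release that reference themselves. A minimal caller sketch (not part of the patch), assuming the existing orig_node_free_ref() helper is what drops the reference taken via atomic_inc_not_zero() inside transtable_search():

	/* Sketch only: typical caller pattern for transtable_search().
	 * orig_node_free_ref() is assumed to be the existing batman-adv
	 * helper that releases the reference taken by transtable_search(). */
	static void tt_route_to_client_example(struct bat_priv *bat_priv,
					       uint8_t *client_addr)
	{
		struct orig_node *orig_node;

		/* which originator announced this client? (refcount is increased) */
		orig_node = transtable_search(bat_priv, client_addr);
		if (!orig_node)
			return;	/* unknown client: fall back to other routing */

		/* ... pick a router towards orig_node and hand the frame over ... */

		orig_node_free_ref(orig_node);
	}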
To be consistent, all functions/variables/constants have been renamed to the TranslationTable style
Signed-off-by: Antonio Quartulli <ordex@autistici.org> --- In this new version, some sentences have been slightly reworded.
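For quick reference, the renaming convention applied throughout the patch boils down to the following mapping (all names are taken verbatim from the hunks below):

	hna_local_* / hna_global_*          ->  tt_local_* / tt_global_*
	struct hna_local_entry              ->  struct tt_local_entry
	struct hna_global_entry             ->  struct tt_global_entry
	batman_packet->num_hna              ->  batman_packet->num_tt
	orig_node->hna_buff / hna_buff_len  ->  orig_node->tt_buff / tt_buff_len
	hna_lhash_lock / hna_ghash_lock     ->  tt_lhash_lock / tt_ghash_lock
	LOCAL_HNA_TIMEOUT                   ->  TT_LOCAL_TIMEOUT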
README | 8 +- aggregation.c | 16 +- aggregation.h | 4 +- bat_debugfs.c | 4 +- hard-interface.c | 6 +- main.c | 14 +- main.h | 4 +- originator.c | 8 +- packet.h | 2 +- routing.c | 70 +++++----- routing.h | 6 +- send.c | 16 +- send.h | 2 +- soft-interface.c | 10 +- translation-table.c | 414 +++++++++++++++++++++++++------------------------- translation-table.h | 24 ++-- types.h | 24 ++-- unicast.c | 2 +- vis.c | 18 +- 19 files changed, 326 insertions(+), 326 deletions(-)
diff --git a/README b/README index 6aa36eb..22e799e 100644 --- a/README +++ b/README @@ -176,13 +176,13 @@ face. Each entry can/has to have the following values: -> "TQ mac value" - src mac's link quality towards mac address of a neighbor originator's interface which is being used for routing --> "HNA mac" - HNA announced by source mac +-> "TT mac" - TT announced by source mac -> "PRIMARY" - this is a primary interface -> "SEC mac" - secondary mac address of source (requires preceding PRIMARY)
The TQ value has a range from 4 to 255 with 255 being the best. -The HNA entries are showing which hosts are connected to the mesh +The TT entries are showing which hosts are connected to the mesh via bat0 or being bridged into the mesh network. The PRIMARY/SEC values are only applied on primary interfaces
@@ -219,7 +219,7 @@ abled during run time. Following log_levels are defined:
0 - All debug output disabled 1 - Enable messages related to routing / flooding / broadcasting -2 - Enable route or hna added / changed / deleted +2 - Enable route or tt entry added / changed / deleted 3 - Enable all messages
The debug output can be changed at runtime using the file @@ -227,7 +227,7 @@ The debug output can be changed at runtime using the file
# echo 2 > /sys/class/net/bat0/mesh/log_level
-will enable debug messages for when routes or HNAs change. +will enable debug messages for when routes or TTs change.
BATCTL diff --git a/aggregation.c b/aggregation.c index c11788c..9b94590 100644 --- a/aggregation.c +++ b/aggregation.c @@ -24,10 +24,10 @@ #include "send.h" #include "routing.h"
-/* calculate the size of the hna information for a given packet */ -static int hna_len(struct batman_packet *batman_packet) +/* calculate the size of the tt information for a given packet */ +static int tt_len(struct batman_packet *batman_packet) { - return batman_packet->num_hna * ETH_ALEN; + return batman_packet->num_tt * ETH_ALEN; }
/* return true if new_packet can be aggregated with forw_packet */ @@ -250,7 +250,7 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff, { struct batman_packet *batman_packet; int buff_pos = 0; - unsigned char *hna_buff; + unsigned char *tt_buff;
batman_packet = (struct batman_packet *)packet_buff;
@@ -259,14 +259,14 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff, orig_interval. */ batman_packet->seqno = ntohl(batman_packet->seqno);
- hna_buff = packet_buff + buff_pos + BAT_PACKET_LEN; + tt_buff = packet_buff + buff_pos + BAT_PACKET_LEN; receive_bat_packet(ethhdr, batman_packet, - hna_buff, hna_len(batman_packet), + tt_buff, tt_len(batman_packet), if_incoming);
- buff_pos += BAT_PACKET_LEN + hna_len(batman_packet); + buff_pos += BAT_PACKET_LEN + tt_len(batman_packet); batman_packet = (struct batman_packet *) (packet_buff + buff_pos); } while (aggregated_packet(buff_pos, packet_len, - batman_packet->num_hna)); + batman_packet->num_tt)); } diff --git a/aggregation.h b/aggregation.h index 0622042..7e6d72f 100644 --- a/aggregation.h +++ b/aggregation.h @@ -25,9 +25,9 @@ #include "main.h"
/* is there another aggregated packet here? */ -static inline int aggregated_packet(int buff_pos, int packet_len, int num_hna) +static inline int aggregated_packet(int buff_pos, int packet_len, int num_tt) { - int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_hna * ETH_ALEN); + int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_tt * ETH_ALEN);
return (next_buff_pos <= packet_len) && (next_buff_pos <= MAX_AGGREGATION_BYTES); diff --git a/bat_debugfs.c b/bat_debugfs.c index 0e9d435..abaeec5 100644 --- a/bat_debugfs.c +++ b/bat_debugfs.c @@ -241,13 +241,13 @@ static int softif_neigh_open(struct inode *inode, struct file *file) static int transtable_global_open(struct inode *inode, struct file *file) { struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, hna_global_seq_print_text, net_dev); + return single_open(file, tt_global_seq_print_text, net_dev); }
static int transtable_local_open(struct inode *inode, struct file *file) { struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, hna_local_seq_print_text, net_dev); + return single_open(file, tt_local_seq_print_text, net_dev); }
static int vis_data_open(struct inode *inode, struct file *file) diff --git a/hard-interface.c b/hard-interface.c index 7e2f772..dfbfccc 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -154,10 +154,10 @@ static void primary_if_select(struct bat_priv *bat_priv, primary_if_update_addr(bat_priv);
/*** - * hacky trick to make sure that we send the HNA information via + * hacky trick to make sure that we send the TT information via * our new primary interface */ - atomic_set(&bat_priv->hna_local_changed, 1); + atomic_set(&bat_priv->tt_local_changed, 1); }
static bool hardif_is_iface_up(struct hard_iface *hard_iface) @@ -339,7 +339,7 @@ int hardif_enable_interface(struct hard_iface *hard_iface, char *iface_name) batman_packet->flags = 0; batman_packet->ttl = 2; batman_packet->tq = TQ_MAX_VALUE; - batman_packet->num_hna = 0; + batman_packet->num_tt = 0;
hard_iface->if_num = bat_priv->num_ifaces; bat_priv->num_ifaces++; diff --git a/main.c b/main.c index 7edf8d7..0a7cee0 100644 --- a/main.c +++ b/main.c @@ -84,8 +84,8 @@ int mesh_init(struct net_device *soft_iface)
spin_lock_init(&bat_priv->forw_bat_list_lock); spin_lock_init(&bat_priv->forw_bcast_list_lock); - spin_lock_init(&bat_priv->hna_lhash_lock); - spin_lock_init(&bat_priv->hna_ghash_lock); + spin_lock_init(&bat_priv->tt_lhash_lock); + spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); spin_lock_init(&bat_priv->vis_list_lock); @@ -100,13 +100,13 @@ int mesh_init(struct net_device *soft_iface) if (originator_init(bat_priv) < 1) goto err;
- if (hna_local_init(bat_priv) < 1) + if (tt_local_init(bat_priv) < 1) goto err;
- if (hna_global_init(bat_priv) < 1) + if (tt_global_init(bat_priv) < 1) goto err;
- hna_local_add(soft_iface, soft_iface->dev_addr); + tt_local_add(soft_iface, soft_iface->dev_addr);
if (vis_init(bat_priv) < 1) goto err; @@ -137,8 +137,8 @@ void mesh_free(struct net_device *soft_iface) gw_node_purge(bat_priv); originator_free(bat_priv);
- hna_local_free(bat_priv); - hna_global_free(bat_priv); + tt_local_free(bat_priv); + tt_global_free(bat_priv);
softif_neigh_purge(bat_priv);
diff --git a/main.h b/main.h index 101d9dc..4832f32 100644 --- a/main.h +++ b/main.h @@ -39,7 +39,7 @@ #define PURGE_TIMEOUT 200 /* purge originators after time in seconds if no * valid packet comes in -> TODO: check * influence on TQ_LOCAL_WINDOW_SIZE */ -#define LOCAL_HNA_TIMEOUT 3600 /* in seconds */ +#define TT_LOCAL_TIMEOUT 3600 /* in seconds */
#define TQ_LOCAL_WINDOW_SIZE 64 /* sliding packet range of received originator * messages in squence numbers (should be a @@ -89,7 +89,7 @@
#define DBG_BATMAN 1 /* all messages related to routing / flooding / * broadcasting / etc */ -#define DBG_ROUTES 2 /* route or hna added / changed / deleted */ +#define DBG_ROUTES 2 /* route or tt entry added / changed / deleted */ #define DBG_ALL 3
diff --git a/originator.c b/originator.c index 51af91b..080ec88 100644 --- a/originator.c +++ b/originator.c @@ -142,7 +142,7 @@ static void orig_node_free_rcu(struct rcu_head *rcu) spin_unlock_bh(&orig_node->neigh_list_lock);
frag_list_free(&orig_node->frag_list); - hna_global_del_orig(orig_node->bat_priv, orig_node, + tt_global_del_orig(orig_node->bat_priv, orig_node, "originator timed out");
kfree(orig_node->bcast_own); @@ -220,7 +220,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) orig_node->bat_priv = bat_priv; memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; - orig_node->hna_buff = NULL; + orig_node->tt_buff = NULL; orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1 @@ -331,8 +331,8 @@ static bool purge_orig_node(struct bat_priv *bat_priv, &best_neigh_node)) { update_routes(bat_priv, orig_node, best_neigh_node, - orig_node->hna_buff, - orig_node->hna_buff_len); + orig_node->tt_buff, + orig_node->tt_buff_len); } }
diff --git a/packet.h b/packet.h index e757187..c225c3a 100644 --- a/packet.h +++ b/packet.h @@ -61,7 +61,7 @@ struct batman_packet { uint8_t orig[6]; uint8_t prev_sender[6]; uint8_t ttl; - uint8_t num_hna; + uint8_t num_tt; uint8_t gw_flags; /* flags related to gateway class */ uint8_t align; } __packed; diff --git a/routing.c b/routing.c index 49f5715..91b3709 100644 --- a/routing.c +++ b/routing.c @@ -64,28 +64,28 @@ void slide_own_bcast_window(struct hard_iface *hard_iface) } }
-static void update_HNA(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *hna_buff, int hna_buff_len) +static void update_TT(struct bat_priv *bat_priv, struct orig_node *orig_node, + unsigned char *tt_buff, int tt_buff_len) { - if ((hna_buff_len != orig_node->hna_buff_len) || - ((hna_buff_len > 0) && - (orig_node->hna_buff_len > 0) && - (memcmp(orig_node->hna_buff, hna_buff, hna_buff_len) != 0))) { + if ((tt_buff_len != orig_node->tt_buff_len) || + ((tt_buff_len > 0) && + (orig_node->tt_buff_len > 0) && + (memcmp(orig_node->tt_buff, tt_buff, tt_buff_len) != 0))) {
- if (orig_node->hna_buff_len > 0) - hna_global_del_orig(bat_priv, orig_node, - "originator changed hna"); + if (orig_node->tt_buff_len > 0) + tt_global_del_orig(bat_priv, orig_node, + "originator changed tt");
- if ((hna_buff_len > 0) && (hna_buff)) - hna_global_add_orig(bat_priv, orig_node, - hna_buff, hna_buff_len); + if ((tt_buff_len > 0) && (tt_buff)) + tt_global_add_orig(bat_priv, orig_node, + tt_buff, tt_buff_len); } }
static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, struct neigh_node *neigh_node, - unsigned char *hna_buff, int hna_buff_len) + unsigned char *tt_buff, int tt_buff_len) { struct neigh_node *curr_router;
@@ -96,7 +96,7 @@ static void update_route(struct bat_priv *bat_priv,
bat_dbg(DBG_ROUTES, bat_priv, "Deleting route towards: %pM\n", orig_node->orig); - hna_global_del_orig(bat_priv, orig_node, + tt_global_del_orig(bat_priv, orig_node, "originator timed out");
/* route added */ @@ -105,8 +105,8 @@ static void update_route(struct bat_priv *bat_priv, bat_dbg(DBG_ROUTES, bat_priv, "Adding route towards: %pM (via %pM)\n", orig_node->orig, neigh_node->addr); - hna_global_add_orig(bat_priv, orig_node, - hna_buff, hna_buff_len); + tt_global_add_orig(bat_priv, orig_node, + tt_buff, tt_buff_len);
/* route changed */ } else { @@ -135,8 +135,8 @@ static void update_route(struct bat_priv *bat_priv,
void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *hna_buff, - int hna_buff_len) + struct neigh_node *neigh_node, unsigned char *tt_buff, + int tt_buff_len) { struct neigh_node *router = NULL;
@@ -147,10 +147,10 @@ void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node,
if (router != neigh_node) update_route(bat_priv, orig_node, neigh_node, - hna_buff, hna_buff_len); - /* may be just HNA changed */ + tt_buff, tt_buff_len); + /* may be just TT changed */ else - update_HNA(bat_priv, orig_node, hna_buff, hna_buff_len); + update_TT(bat_priv, orig_node, tt_buff, tt_buff_len);
out: if (router) @@ -387,14 +387,14 @@ static void update_orig(struct bat_priv *bat_priv, struct ethhdr *ethhdr, struct batman_packet *batman_packet, struct hard_iface *if_incoming, - unsigned char *hna_buff, int hna_buff_len, + unsigned char *tt_buff, int tt_buff_len, char is_duplicate) { struct neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; struct neigh_node *router = NULL; struct orig_node *orig_node_tmp; struct hlist_node *node; - int tmp_hna_buff_len; + int tmp_tt_buff_len; uint8_t bcast_own_sum_orig, bcast_own_sum_neigh;
bat_dbg(DBG_BATMAN, bat_priv, "update_originator(): " @@ -459,18 +459,18 @@ static void update_orig(struct bat_priv *bat_priv,
bonding_candidate_add(orig_node, neigh_node);
- tmp_hna_buff_len = (hna_buff_len > batman_packet->num_hna * ETH_ALEN ? - batman_packet->num_hna * ETH_ALEN : hna_buff_len); + tmp_tt_buff_len = (tt_buff_len > batman_packet->num_tt * ETH_ALEN ? + batman_packet->num_tt * ETH_ALEN : tt_buff_len);
/* if this neighbor already is our next hop there is nothing * to change */ router = orig_node_get_router(orig_node); if (router == neigh_node) - goto update_hna; + goto update_tt;
/* if this neighbor does not offer a better TQ we won't consider it */ if (router && (router->tq_avg > neigh_node->tq_avg)) - goto update_hna; + goto update_tt;
/* if the TQ is the same and the link not more symetric we * won't consider it either */ @@ -488,16 +488,16 @@ static void update_orig(struct bat_priv *bat_priv, spin_unlock_bh(&orig_node_tmp->ogm_cnt_lock);
if (bcast_own_sum_orig >= bcast_own_sum_neigh) - goto update_hna; + goto update_tt; }
update_routes(bat_priv, orig_node, neigh_node, - hna_buff, tmp_hna_buff_len); + tt_buff, tmp_tt_buff_len); goto update_gw;
-update_hna: +update_tt: update_routes(bat_priv, orig_node, router, - hna_buff, tmp_hna_buff_len); + tt_buff, tmp_tt_buff_len);
update_gw: if (orig_node->gw_flags != batman_packet->gw_flags) @@ -621,7 +621,7 @@ out:
void receive_bat_packet(struct ethhdr *ethhdr, struct batman_packet *batman_packet, - unsigned char *hna_buff, int hna_buff_len, + unsigned char *tt_buff, int tt_buff_len, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -818,14 +818,14 @@ void receive_bat_packet(struct ethhdr *ethhdr, ((orig_node->last_real_seqno == batman_packet->seqno) && (orig_node->last_ttl - 3 <= batman_packet->ttl)))) update_orig(bat_priv, orig_node, ethhdr, batman_packet, - if_incoming, hna_buff, hna_buff_len, is_duplicate); + if_incoming, tt_buff, tt_buff_len, is_duplicate);
/* is single hop (direct) neighbor */ if (is_single_hop_neigh) {
/* mark direct link on incoming interface */ schedule_forward_packet(orig_node, ethhdr, batman_packet, - 1, hna_buff_len, if_incoming); + 1, tt_buff_len, if_incoming);
bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: " "rebroadcast neighbor packet with direct link flag\n"); @@ -848,7 +848,7 @@ void receive_bat_packet(struct ethhdr *ethhdr, bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: rebroadcast originator packet\n"); schedule_forward_packet(orig_node, ethhdr, batman_packet, - 0, hna_buff_len, if_incoming); + 0, tt_buff_len, if_incoming);
out_neigh: if ((orig_neigh_node) && (!is_single_hop_neigh)) diff --git a/routing.h b/routing.h index b5a064c..870f298 100644 --- a/routing.h +++ b/routing.h @@ -25,11 +25,11 @@ void slide_own_bcast_window(struct hard_iface *hard_iface); void receive_bat_packet(struct ethhdr *ethhdr, struct batman_packet *batman_packet, - unsigned char *hna_buff, int hna_buff_len, + unsigned char *tt_buff, int tt_buff_len, struct hard_iface *if_incoming); void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *hna_buff, - int hna_buff_len); + struct neigh_node *neigh_node, unsigned char *tt_buff, + int tt_buff_len); int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); diff --git a/send.c b/send.c index 02b541a..f30d0c6 100644 --- a/send.c +++ b/send.c @@ -121,7 +121,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, /* adjust all flags and log packets */ while (aggregated_packet(buff_pos, forw_packet->packet_len, - batman_packet->num_hna)) { + batman_packet->num_tt)) {
/* we might have aggregated direct link packets with an * ordinary base packet */ @@ -146,7 +146,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, hard_iface->net_dev->dev_addr);
buff_pos += sizeof(struct batman_packet) + - (batman_packet->num_hna * ETH_ALEN); + (batman_packet->num_tt * ETH_ALEN); packet_num++; batman_packet = (struct batman_packet *) (forw_packet->skb->data + buff_pos); @@ -222,7 +222,7 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, struct batman_packet *batman_packet;
new_len = sizeof(struct batman_packet) + - (bat_priv->num_local_hna * ETH_ALEN); + (bat_priv->num_local_tt * ETH_ALEN); new_buff = kmalloc(new_len, GFP_ATOMIC);
/* keep old buffer if kmalloc should fail */ @@ -231,7 +231,7 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, sizeof(struct batman_packet)); batman_packet = (struct batman_packet *)new_buff;
- batman_packet->num_hna = hna_local_fill_buffer(bat_priv, + batman_packet->num_tt = tt_local_fill_buffer(bat_priv, new_buff + sizeof(struct batman_packet), new_len - sizeof(struct batman_packet));
@@ -266,8 +266,8 @@ void schedule_own_packet(struct hard_iface *hard_iface) if (hard_iface->if_status == IF_TO_BE_ACTIVATED) hard_iface->if_status = IF_ACTIVE;
- /* if local hna has changed and interface is a primary interface */ - if ((atomic_read(&bat_priv->hna_local_changed)) && + /* if local tt has changed and interface is a primary interface */ + if ((atomic_read(&bat_priv->tt_local_changed)) && (hard_iface == primary_if)) rebuild_batman_packet(bat_priv, hard_iface);
@@ -309,7 +309,7 @@ void schedule_own_packet(struct hard_iface *hard_iface) void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int hna_buff_len, + uint8_t directlink, int tt_buff_len, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -369,7 +369,7 @@ void schedule_forward_packet(struct orig_node *orig_node, send_time = forward_send_time(); add_bat_packet_to_list(bat_priv, (unsigned char *)batman_packet, - sizeof(struct batman_packet) + hna_buff_len, + sizeof(struct batman_packet) + tt_buff_len, if_incoming, 0, send_time); }
diff --git a/send.h b/send.h index 7b2ff19..247172d 100644 --- a/send.h +++ b/send.h @@ -29,7 +29,7 @@ void schedule_own_packet(struct hard_iface *hard_iface); void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int hna_buff_len, + uint8_t directlink, int tt_buff_len, struct hard_iface *if_outgoing); int add_bcast_packet_to_list(struct bat_priv *bat_priv, struct sk_buff *skb); void send_outstanding_bat_packet(struct work_struct *work); diff --git a/soft-interface.c b/soft-interface.c index 8023c4e..ae10ecc 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -539,11 +539,11 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) if (!is_valid_ether_addr(addr->sa_data)) return -EADDRNOTAVAIL;
- /* only modify hna-table if it has been initialised before */ + /* only modify transtable if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { - hna_local_remove(bat_priv, dev->dev_addr, + tt_local_remove(bat_priv, dev->dev_addr, "mac address changed"); - hna_local_add(dev, addr->sa_data); + tt_local_add(dev, addr->sa_data); }
memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN); @@ -601,7 +601,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) goto dropped;
/* TODO: check this for locks */ - hna_local_add(soft_iface, ethhdr->h_source); + tt_local_add(soft_iface, ethhdr->h_source);
if (is_multicast_ether_addr(ethhdr->h_dest)) { ret = gw_is_target(bat_priv, skb); @@ -839,7 +839,7 @@ struct net_device *softif_create(char *name)
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1); - atomic_set(&bat_priv->hna_local_changed, 0); + atomic_set(&bat_priv->tt_local_changed, 0);
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index f931830..2bd6a31 100644 --- a/translation-table.c +++ b/translation-table.c @@ -26,40 +26,40 @@ #include "hash.h" #include "originator.h"
-static void hna_local_purge(struct work_struct *work); -static void _hna_global_del_orig(struct bat_priv *bat_priv, - struct hna_global_entry *hna_global_entry, +static void tt_local_purge(struct work_struct *work); +static void _tt_global_del_orig(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, char *message);
/* returns 1 if they are the same mac addr */ -static int compare_lhna(struct hlist_node *node, void *data2) +static int compare_ltt(struct hlist_node *node, void *data2) { - void *data1 = container_of(node, struct hna_local_entry, hash_entry); + void *data1 = container_of(node, struct tt_local_entry, hash_entry);
return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
/* returns 1 if they are the same mac addr */ -static int compare_ghna(struct hlist_node *node, void *data2) +static int compare_gtt(struct hlist_node *node, void *data2) { - void *data1 = container_of(node, struct hna_global_entry, hash_entry); + void *data1 = container_of(node, struct tt_global_entry, hash_entry);
return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
-static void hna_local_start_timer(struct bat_priv *bat_priv) +static void tt_local_start_timer(struct bat_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->hna_work, hna_local_purge); - queue_delayed_work(bat_event_workqueue, &bat_priv->hna_work, 10 * HZ); + INIT_DELAYED_WORK(&bat_priv->tt_work, tt_local_purge); + queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, 10 * HZ); }
-static struct hna_local_entry *hna_local_hash_find(struct bat_priv *bat_priv, +static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, void *data) { - struct hashtable_t *hash = bat_priv->hna_local_hash; + struct hashtable_t *hash = bat_priv->tt_local_hash; struct hlist_head *head; struct hlist_node *node; - struct hna_local_entry *hna_local_entry, *hna_local_entry_tmp = NULL; + struct tt_local_entry *tt_local_entry, *tt_local_entry_tmp = NULL; int index;
if (!hash) @@ -69,26 +69,26 @@ static struct hna_local_entry *hna_local_hash_find(struct bat_priv *bat_priv, head = &hash->table[index];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_local_entry, node, head, hash_entry) { - if (!compare_eth(hna_local_entry, data)) + hlist_for_each_entry_rcu(tt_local_entry, node, head, hash_entry) { + if (!compare_eth(tt_local_entry, data)) continue;
- hna_local_entry_tmp = hna_local_entry; + tt_local_entry_tmp = tt_local_entry; break; } rcu_read_unlock();
- return hna_local_entry_tmp; + return tt_local_entry_tmp; }
-static struct hna_global_entry *hna_global_hash_find(struct bat_priv *bat_priv, +static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, void *data) { - struct hashtable_t *hash = bat_priv->hna_global_hash; + struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_head *head; struct hlist_node *node; - struct hna_global_entry *hna_global_entry; - struct hna_global_entry *hna_global_entry_tmp = NULL; + struct tt_global_entry *tt_global_entry; + struct tt_global_entry *tt_global_entry_tmp = NULL; int index;
if (!hash) @@ -98,125 +98,125 @@ static struct hna_global_entry *hna_global_hash_find(struct bat_priv *bat_priv, head = &hash->table[index];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_global_entry, node, head, hash_entry) { - if (!compare_eth(hna_global_entry, data)) + hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) { + if (!compare_eth(tt_global_entry, data)) continue;
- hna_global_entry_tmp = hna_global_entry; + tt_global_entry_tmp = tt_global_entry; break; } rcu_read_unlock();
- return hna_global_entry_tmp; + return tt_global_entry_tmp; }
-int hna_local_init(struct bat_priv *bat_priv) +int tt_local_init(struct bat_priv *bat_priv) { - if (bat_priv->hna_local_hash) + if (bat_priv->tt_local_hash) return 1;
- bat_priv->hna_local_hash = hash_new(1024); + bat_priv->tt_local_hash = hash_new(1024);
- if (!bat_priv->hna_local_hash) + if (!bat_priv->tt_local_hash) return 0;
- atomic_set(&bat_priv->hna_local_changed, 0); - hna_local_start_timer(bat_priv); + atomic_set(&bat_priv->tt_local_changed, 0); + tt_local_start_timer(bat_priv);
return 1; }
-void hna_local_add(struct net_device *soft_iface, uint8_t *addr) +void tt_local_add(struct net_device *soft_iface, uint8_t *addr) { struct bat_priv *bat_priv = netdev_priv(soft_iface); - struct hna_local_entry *hna_local_entry; - struct hna_global_entry *hna_global_entry; + struct tt_local_entry *tt_local_entry; + struct tt_global_entry *tt_global_entry; int required_bytes;
- spin_lock_bh(&bat_priv->hna_lhash_lock); - hna_local_entry = hna_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = tt_local_hash_find(bat_priv, addr); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
- if (hna_local_entry) { - hna_local_entry->last_seen = jiffies; + if (tt_local_entry) { + tt_local_entry->last_seen = jiffies; return; }
/* only announce as many hosts as possible in the batman-packet and - space in batman_packet->num_hna That also should give a limit to + space in batman_packet->num_tt That also should give a limit to MAC-flooding. */ - required_bytes = (bat_priv->num_local_hna + 1) * ETH_ALEN; + required_bytes = (bat_priv->num_local_tt + 1) * ETH_ALEN; required_bytes += BAT_PACKET_LEN;
if ((required_bytes > ETH_DATA_LEN) || (atomic_read(&bat_priv->aggregated_ogms) && required_bytes > MAX_AGGREGATION_BYTES) || - (bat_priv->num_local_hna + 1 > 255)) { + (bat_priv->num_local_tt + 1 > 255)) { bat_dbg(DBG_ROUTES, bat_priv, - "Can't add new local hna entry (%pM): " - "number of local hna entries exceeds packet size\n", + "Can't add new local tt entry (%pM): " + "number of local tt entries exceeds packet size\n", addr); return; }
bat_dbg(DBG_ROUTES, bat_priv, - "Creating new local hna entry: %pM\n", addr); + "Creating new local tt entry: %pM\n", addr);
- hna_local_entry = kmalloc(sizeof(struct hna_local_entry), GFP_ATOMIC); - if (!hna_local_entry) + tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); + if (!tt_local_entry) return;
- memcpy(hna_local_entry->addr, addr, ETH_ALEN); - hna_local_entry->last_seen = jiffies; + memcpy(tt_local_entry->addr, addr, ETH_ALEN); + tt_local_entry->last_seen = jiffies;
/* the batman interface mac address should never be purged */ if (compare_eth(addr, soft_iface->dev_addr)) - hna_local_entry->never_purge = 1; + tt_local_entry->never_purge = 1; else - hna_local_entry->never_purge = 0; + tt_local_entry->never_purge = 0;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
- hash_add(bat_priv->hna_local_hash, compare_lhna, choose_orig, - hna_local_entry, &hna_local_entry->hash_entry); - bat_priv->num_local_hna++; - atomic_set(&bat_priv->hna_local_changed, 1); + hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, + tt_local_entry, &tt_local_entry->hash_entry); + bat_priv->num_local_tt++; + atomic_set(&bat_priv->tt_local_changed, 1);
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ - spin_lock_bh(&bat_priv->hna_ghash_lock); + spin_lock_bh(&bat_priv->tt_ghash_lock);
- hna_global_entry = hna_global_hash_find(bat_priv, addr); + tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (hna_global_entry) - _hna_global_del_orig(bat_priv, hna_global_entry, - "local hna received"); + if (tt_global_entry) + _tt_global_del_orig(bat_priv, tt_global_entry, + "local tt received");
- spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock); }
-int hna_local_fill_buffer(struct bat_priv *bat_priv, +int tt_local_fill_buffer(struct bat_priv *bat_priv, unsigned char *buff, int buff_len) { - struct hashtable_t *hash = bat_priv->hna_local_hash; - struct hna_local_entry *hna_local_entry; + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; struct hlist_node *node; struct hlist_head *head; int i, count = 0;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
for (i = 0; i < hash->size; i++) { head = &hash->table[i];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_local_entry, node, + hlist_for_each_entry_rcu(tt_local_entry, node, head, hash_entry) { if (buff_len < (count + 1) * ETH_ALEN) break;
- memcpy(buff + (count * ETH_ALEN), hna_local_entry->addr, + memcpy(buff + (count * ETH_ALEN), tt_local_entry->addr, ETH_ALEN);
count++; @@ -224,20 +224,20 @@ int hna_local_fill_buffer(struct bat_priv *bat_priv, rcu_read_unlock(); }
- /* if we did not get all new local hnas see you next time ;-) */ - if (count == bat_priv->num_local_hna) - atomic_set(&bat_priv->hna_local_changed, 0); + /* if we did not get all new local tts see you next time ;-) */ + if (count == bat_priv->num_local_tt) + atomic_set(&bat_priv->tt_local_changed, 0);
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return count; }
-int hna_local_seq_print_text(struct seq_file *seq, void *offset) +int tt_local_seq_print_text(struct seq_file *seq, void *offset) { struct net_device *net_dev = (struct net_device *)seq->private; struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hashtable_t *hash = bat_priv->hna_local_hash; - struct hna_local_entry *hna_local_entry; + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; struct hard_iface *primary_if; struct hlist_node *node; struct hlist_head *head; @@ -261,10 +261,10 @@ int hna_local_seq_print_text(struct seq_file *seq, void *offset) }
seq_printf(seq, "Locally retrieved addresses (from %s) " - "announced via HNA:\n", + "announced via TT:\n", net_dev->name);
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx\n" */ @@ -279,7 +279,7 @@ int hna_local_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); ret = -ENOMEM; goto out; } @@ -291,15 +291,15 @@ int hna_local_seq_print_text(struct seq_file *seq, void *offset) head = &hash->table[i];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_local_entry, node, + hlist_for_each_entry_rcu(tt_local_entry, node, head, hash_entry) { pos += snprintf(buff + pos, 22, " * %pM\n", - hna_local_entry->addr); + tt_local_entry->addr); } rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
seq_printf(seq, "%s", buff); kfree(buff); @@ -309,180 +309,180 @@ out: return ret; }
-static void _hna_local_del(struct hlist_node *node, void *arg) +static void _tt_local_del(struct hlist_node *node, void *arg) { struct bat_priv *bat_priv = (struct bat_priv *)arg; - void *data = container_of(node, struct hna_local_entry, hash_entry); + void *data = container_of(node, struct tt_local_entry, hash_entry);
kfree(data); - bat_priv->num_local_hna--; - atomic_set(&bat_priv->hna_local_changed, 1); + bat_priv->num_local_tt--; + atomic_set(&bat_priv->tt_local_changed, 1); }
-static void hna_local_del(struct bat_priv *bat_priv, - struct hna_local_entry *hna_local_entry, +static void tt_local_del(struct bat_priv *bat_priv, + struct tt_local_entry *tt_local_entry, char *message) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting local hna entry (%pM): %s\n", - hna_local_entry->addr, message); + bat_dbg(DBG_ROUTES, bat_priv, "Deleting local tt entry (%pM): %s\n", + tt_local_entry->addr, message);
- hash_remove(bat_priv->hna_local_hash, compare_lhna, choose_orig, - hna_local_entry->addr); - _hna_local_del(&hna_local_entry->hash_entry, bat_priv); + hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, + tt_local_entry->addr); + _tt_local_del(&tt_local_entry->hash_entry, bat_priv); }
-void hna_local_remove(struct bat_priv *bat_priv, +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) { - struct hna_local_entry *hna_local_entry; + struct tt_local_entry *tt_local_entry;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
- hna_local_entry = hna_local_hash_find(bat_priv, addr); + tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (hna_local_entry) - hna_local_del(bat_priv, hna_local_entry, message); + if (tt_local_entry) + tt_local_del(bat_priv, tt_local_entry, message);
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-static void hna_local_purge(struct work_struct *work) +static void tt_local_purge(struct work_struct *work) { struct delayed_work *delayed_work = container_of(work, struct delayed_work, work); struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, hna_work); - struct hashtable_t *hash = bat_priv->hna_local_hash; - struct hna_local_entry *hna_local_entry; + container_of(delayed_work, struct bat_priv, tt_work); + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; unsigned long timeout; int i;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
for (i = 0; i < hash->size; i++) { head = &hash->table[i];
- hlist_for_each_entry_safe(hna_local_entry, node, node_tmp, + hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { - if (hna_local_entry->never_purge) + if (tt_local_entry->never_purge) continue;
- timeout = hna_local_entry->last_seen; - timeout += LOCAL_HNA_TIMEOUT * HZ; + timeout = tt_local_entry->last_seen; + timeout += TT_LOCAL_TIMEOUT * HZ;
if (time_before(jiffies, timeout)) continue;
- hna_local_del(bat_priv, hna_local_entry, + tt_local_del(bat_priv, tt_local_entry, "address timed out"); } }
- spin_unlock_bh(&bat_priv->hna_lhash_lock); - hna_local_start_timer(bat_priv); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + tt_local_start_timer(bat_priv); }
-void hna_local_free(struct bat_priv *bat_priv) +void tt_local_free(struct bat_priv *bat_priv) { - if (!bat_priv->hna_local_hash) + if (!bat_priv->tt_local_hash) return;
- cancel_delayed_work_sync(&bat_priv->hna_work); - hash_delete(bat_priv->hna_local_hash, _hna_local_del, bat_priv); - bat_priv->hna_local_hash = NULL; + cancel_delayed_work_sync(&bat_priv->tt_work); + hash_delete(bat_priv->tt_local_hash, _tt_local_del, bat_priv); + bat_priv->tt_local_hash = NULL; }
-int hna_global_init(struct bat_priv *bat_priv) +int tt_global_init(struct bat_priv *bat_priv) { - if (bat_priv->hna_global_hash) + if (bat_priv->tt_global_hash) return 1;
- bat_priv->hna_global_hash = hash_new(1024); + bat_priv->tt_global_hash = hash_new(1024);
- if (!bat_priv->hna_global_hash) + if (!bat_priv->tt_global_hash) return 0;
return 1; }
-void hna_global_add_orig(struct bat_priv *bat_priv, +void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *hna_buff, int hna_buff_len) + unsigned char *tt_buff, int tt_buff_len) { - struct hna_global_entry *hna_global_entry; - struct hna_local_entry *hna_local_entry; - int hna_buff_count = 0; - unsigned char *hna_ptr; + struct tt_global_entry *tt_global_entry; + struct tt_local_entry *tt_local_entry; + int tt_buff_count = 0; + unsigned char *tt_ptr;
- while ((hna_buff_count + 1) * ETH_ALEN <= hna_buff_len) { - spin_lock_bh(&bat_priv->hna_ghash_lock); + while ((tt_buff_count + 1) * ETH_ALEN <= tt_buff_len) { + spin_lock_bh(&bat_priv->tt_ghash_lock);
- hna_ptr = hna_buff + (hna_buff_count * ETH_ALEN); - hna_global_entry = hna_global_hash_find(bat_priv, hna_ptr); + tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); + tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr);
- if (!hna_global_entry) { - spin_unlock_bh(&bat_priv->hna_ghash_lock); + if (!tt_global_entry) { + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- hna_global_entry = - kmalloc(sizeof(struct hna_global_entry), + tt_global_entry = + kmalloc(sizeof(struct tt_global_entry), GFP_ATOMIC);
- if (!hna_global_entry) + if (!tt_global_entry) break;
- memcpy(hna_global_entry->addr, hna_ptr, ETH_ALEN); + memcpy(tt_global_entry->addr, tt_ptr, ETH_ALEN);
bat_dbg(DBG_ROUTES, bat_priv, - "Creating new global hna entry: " + "Creating new global tt entry: " "%pM (via %pM)\n", - hna_global_entry->addr, orig_node->orig); + tt_global_entry->addr, orig_node->orig);
- spin_lock_bh(&bat_priv->hna_ghash_lock); - hash_add(bat_priv->hna_global_hash, compare_ghna, - choose_orig, hna_global_entry, - &hna_global_entry->hash_entry); + spin_lock_bh(&bat_priv->tt_ghash_lock); + hash_add(bat_priv->tt_global_hash, compare_gtt, + choose_orig, tt_global_entry, + &tt_global_entry->hash_entry);
}
- hna_global_entry->orig_node = orig_node; - spin_unlock_bh(&bat_priv->hna_ghash_lock); + tt_global_entry->orig_node = orig_node; + spin_unlock_bh(&bat_priv->tt_ghash_lock);
/* remove address from local hash if present */ - spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock);
- hna_ptr = hna_buff + (hna_buff_count * ETH_ALEN); - hna_local_entry = hna_local_hash_find(bat_priv, hna_ptr); + tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); + tt_local_entry = tt_local_hash_find(bat_priv, tt_ptr);
- if (hna_local_entry) - hna_local_del(bat_priv, hna_local_entry, - "global hna received"); + if (tt_local_entry) + tt_local_del(bat_priv, tt_local_entry, + "global tt received");
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock);
- hna_buff_count++; + tt_buff_count++; }
/* initialize, and overwrite if malloc succeeds */ - orig_node->hna_buff = NULL; - orig_node->hna_buff_len = 0; + orig_node->tt_buff = NULL; + orig_node->tt_buff_len = 0;
- if (hna_buff_len > 0) { - orig_node->hna_buff = kmalloc(hna_buff_len, GFP_ATOMIC); - if (orig_node->hna_buff) { - memcpy(orig_node->hna_buff, hna_buff, hna_buff_len); - orig_node->hna_buff_len = hna_buff_len; + if (tt_buff_len > 0) { + orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); + if (orig_node->tt_buff) { + memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); + orig_node->tt_buff_len = tt_buff_len; } } }
-int hna_global_seq_print_text(struct seq_file *seq, void *offset) +int tt_global_seq_print_text(struct seq_file *seq, void *offset) { struct net_device *net_dev = (struct net_device *)seq->private; struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hashtable_t *hash = bat_priv->hna_global_hash; - struct hna_global_entry *hna_global_entry; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct tt_global_entry *tt_global_entry; struct hard_iface *primary_if; struct hlist_node *node; struct hlist_head *head; @@ -505,10 +505,10 @@ int hna_global_seq_print_text(struct seq_file *seq, void *offset) goto out; }
- seq_printf(seq, "Globally announced HNAs received via the mesh %s\n", + seq_printf(seq, "Globally announced TT entries received via the mesh %s\n", net_dev->name);
- spin_lock_bh(&bat_priv->hna_ghash_lock); + spin_lock_bh(&bat_priv->tt_ghash_lock);
buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx via xx:xx:xx:xx:xx:xx\n"*/ @@ -523,7 +523,7 @@ int hna_global_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock); ret = -ENOMEM; goto out; } @@ -534,17 +534,17 @@ int hna_global_seq_print_text(struct seq_file *seq, void *offset) head = &hash->table[i];
rcu_read_lock(); - hlist_for_each_entry_rcu(hna_global_entry, node, + hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) { pos += snprintf(buff + pos, 44, " * %pM via %pM\n", - hna_global_entry->addr, - hna_global_entry->orig_node->orig); + tt_global_entry->addr, + tt_global_entry->orig_node->orig); } rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock);
seq_printf(seq, "%s", buff); kfree(buff); @@ -554,84 +554,84 @@ out: return ret; }
-static void _hna_global_del_orig(struct bat_priv *bat_priv, - struct hna_global_entry *hna_global_entry, +static void _tt_global_del_orig(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, char *message) { bat_dbg(DBG_ROUTES, bat_priv, - "Deleting global hna entry %pM (via %pM): %s\n", - hna_global_entry->addr, hna_global_entry->orig_node->orig, + "Deleting global tt entry %pM (via %pM): %s\n", + tt_global_entry->addr, tt_global_entry->orig_node->orig, message);
- hash_remove(bat_priv->hna_global_hash, compare_ghna, choose_orig, - hna_global_entry->addr); - kfree(hna_global_entry); + hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, + tt_global_entry->addr); + kfree(tt_global_entry); }
-void hna_global_del_orig(struct bat_priv *bat_priv, +void tt_global_del_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, char *message) { - struct hna_global_entry *hna_global_entry; - int hna_buff_count = 0; - unsigned char *hna_ptr; + struct tt_global_entry *tt_global_entry; + int tt_buff_count = 0; + unsigned char *tt_ptr;
- if (orig_node->hna_buff_len == 0) + if (orig_node->tt_buff_len == 0) return;
- spin_lock_bh(&bat_priv->hna_ghash_lock); + spin_lock_bh(&bat_priv->tt_ghash_lock);
- while ((hna_buff_count + 1) * ETH_ALEN <= orig_node->hna_buff_len) { - hna_ptr = orig_node->hna_buff + (hna_buff_count * ETH_ALEN); - hna_global_entry = hna_global_hash_find(bat_priv, hna_ptr); + while ((tt_buff_count + 1) * ETH_ALEN <= orig_node->tt_buff_len) { + tt_ptr = orig_node->tt_buff + (tt_buff_count * ETH_ALEN); + tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr);
- if ((hna_global_entry) && - (hna_global_entry->orig_node == orig_node)) - _hna_global_del_orig(bat_priv, hna_global_entry, + if ((tt_global_entry) && + (tt_global_entry->orig_node == orig_node)) + _tt_global_del_orig(bat_priv, tt_global_entry, message);
- hna_buff_count++; + tt_buff_count++; }
- spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- orig_node->hna_buff_len = 0; - kfree(orig_node->hna_buff); - orig_node->hna_buff = NULL; + orig_node->tt_buff_len = 0; + kfree(orig_node->tt_buff); + orig_node->tt_buff = NULL; }
-static void hna_global_del(struct hlist_node *node, void *arg) +static void tt_global_del(struct hlist_node *node, void *arg) { - void *data = container_of(node, struct hna_global_entry, hash_entry); + void *data = container_of(node, struct tt_global_entry, hash_entry);
kfree(data); }
-void hna_global_free(struct bat_priv *bat_priv) +void tt_global_free(struct bat_priv *bat_priv) { - if (!bat_priv->hna_global_hash) + if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->hna_global_hash, hna_global_del, NULL); - bat_priv->hna_global_hash = NULL; + hash_delete(bat_priv->tt_global_hash, tt_global_del, NULL); + bat_priv->tt_global_hash = NULL; }
struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr) { - struct hna_global_entry *hna_global_entry; + struct tt_global_entry *tt_global_entry; struct orig_node *orig_node = NULL;
- spin_lock_bh(&bat_priv->hna_ghash_lock); - hna_global_entry = hna_global_hash_find(bat_priv, addr); + spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (!hna_global_entry) + if (!tt_global_entry) goto out;
- if (!atomic_inc_not_zero(&hna_global_entry->orig_node->refcount)) + if (!atomic_inc_not_zero(&tt_global_entry->orig_node->refcount)) goto out;
- orig_node = hna_global_entry->orig_node; + orig_node = tt_global_entry->orig_node;
out: - spin_unlock_bh(&bat_priv->hna_ghash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; } diff --git a/translation-table.h b/translation-table.h index f19931c..46152c3 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,22 +22,22 @@ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
-int hna_local_init(struct bat_priv *bat_priv); -void hna_local_add(struct net_device *soft_iface, uint8_t *addr); -void hna_local_remove(struct bat_priv *bat_priv, +int tt_local_init(struct bat_priv *bat_priv); +void tt_local_add(struct net_device *soft_iface, uint8_t *addr); +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message); -int hna_local_fill_buffer(struct bat_priv *bat_priv, +int tt_local_fill_buffer(struct bat_priv *bat_priv, unsigned char *buff, int buff_len); -int hna_local_seq_print_text(struct seq_file *seq, void *offset); -void hna_local_free(struct bat_priv *bat_priv); -int hna_global_init(struct bat_priv *bat_priv); -void hna_global_add_orig(struct bat_priv *bat_priv, +int tt_local_seq_print_text(struct seq_file *seq, void *offset); +void tt_local_free(struct bat_priv *bat_priv); +int tt_global_init(struct bat_priv *bat_priv); +void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *hna_buff, int hna_buff_len); -int hna_global_seq_print_text(struct seq_file *seq, void *offset); -void hna_global_del_orig(struct bat_priv *bat_priv, + unsigned char *tt_buff, int tt_buff_len); +int tt_global_seq_print_text(struct seq_file *seq, void *offset); +void tt_global_del_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, char *message); -void hna_global_free(struct bat_priv *bat_priv); +void tt_global_free(struct bat_priv *bat_priv); struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index 9ae507a..6b6c32e 100644 --- a/types.h +++ b/types.h @@ -75,8 +75,8 @@ struct orig_node { unsigned long batman_seqno_reset; uint8_t gw_flags; uint8_t flags; - unsigned char *hna_buff; - int16_t hna_buff_len; + unsigned char *tt_buff; + int16_t tt_buff_len; uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -155,21 +155,21 @@ struct bat_priv { struct hlist_head softif_neigh_vids; struct list_head vis_send_list; struct hashtable_t *orig_hash; - struct hashtable_t *hna_local_hash; - struct hashtable_t *hna_global_hash; + struct hashtable_t *tt_local_hash; + struct hashtable_t *tt_global_hash; struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ - spinlock_t hna_lhash_lock; /* protects hna_local_hash */ - spinlock_t hna_ghash_lock; /* protects hna_global_hash */ + spinlock_t tt_lhash_lock; /* protects tt_local_hash */ + spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ spinlock_t softif_neigh_lock; /* protects soft-interface neigh list */ spinlock_t softif_neigh_vid_lock; /* protects soft-interface vid list */ - int16_t num_local_hna; - atomic_t hna_local_changed; - struct delayed_work hna_work; + int16_t num_local_tt; + atomic_t tt_local_changed; + struct delayed_work tt_work; struct delayed_work orig_work; struct delayed_work vis_work; struct gw_node __rcu *curr_gw; /* rcu protected pointer */ @@ -192,14 +192,14 @@ struct socket_packet { struct icmp_packet_rr icmp_packet; };
-struct hna_local_entry { +struct tt_local_entry { uint8_t addr[ETH_ALEN]; unsigned long last_seen; char never_purge; struct hlist_node hash_entry; };
-struct hna_global_entry { +struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; struct hlist_node hash_entry; @@ -262,7 +262,7 @@ struct vis_info { struct vis_info_entry { uint8_t src[ETH_ALEN]; uint8_t dest[ETH_ALEN]; - uint8_t quality; /* quality = 0 means HNA */ + uint8_t quality; /* quality = 0 client */ } __packed;
struct recvlist_node { diff --git a/unicast.c b/unicast.c index b46cbf1..19c3daf 100644 --- a/unicast.c +++ b/unicast.c @@ -300,7 +300,7 @@ int unicast_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv) goto find_router; }
- /* check for hna host - increases orig_node refcount */ + /* check for tt host - increases orig_node refcount */ orig_node = transtable_search(bat_priv, ethhdr->h_dest);
find_router: diff --git a/vis.c b/vis.c index c8f571d..c39f20c 100644 --- a/vis.c +++ b/vis.c @@ -194,7 +194,7 @@ static ssize_t vis_data_read_entry(char *buff, struct vis_info_entry *entry, { /* maximal length: max(4+17+2, 3+17+1+3+2) == 26 */ if (primary && entry->quality == 0) - return sprintf(buff, "HNA %pM, ", entry->dest); + return sprintf(buff, "TT %pM, ", entry->dest); else if (compare_eth(entry->src, src)) return sprintf(buff, "TQ %pM %d, ", entry->dest, entry->quality); @@ -622,7 +622,7 @@ static int generate_vis_packet(struct bat_priv *bat_priv) struct vis_info *info = (struct vis_info *)bat_priv->my_vis_info; struct vis_packet *packet = (struct vis_packet *)info->skb_packet->data; struct vis_info_entry *entry; - struct hna_local_entry *hna_local_entry; + struct tt_local_entry *tt_local_entry; int best_tq = -1, i;
info->first_seen = jiffies; @@ -678,29 +678,29 @@ next: rcu_read_unlock(); }
- hash = bat_priv->hna_local_hash; + hash = bat_priv->tt_local_hash;
- spin_lock_bh(&bat_priv->hna_lhash_lock); + spin_lock_bh(&bat_priv->tt_lhash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i];
- hlist_for_each_entry(hna_local_entry, node, head, hash_entry) { + hlist_for_each_entry(tt_local_entry, node, head, hash_entry) { entry = (struct vis_info_entry *) skb_put(info->skb_packet, sizeof(*entry)); memset(entry->src, 0, ETH_ALEN); - memcpy(entry->dest, hna_local_entry->addr, ETH_ALEN); - entry->quality = 0; /* 0 means HNA */ + memcpy(entry->dest, tt_local_entry->addr, ETH_ALEN); + entry->quality = 0; /* 0 means TT */ packet->entries++;
if (vis_packet_full(info)) { - spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return 0; } } }
- spin_unlock_bh(&bat_priv->hna_lhash_lock); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return 0;
unlock:
On Thursday 05 May 2011 08:42:45 Antonio Quartulli wrote:
To be coherent, all the functions/variables/constants have been renamed to the TranslationTable style
Applied in revision 160dd13.
Thanks, Marek
The old HNA mechanism has been rewritten from scratch. The new mechanism announces only the local translation-table changes that occurred during the last interval, which greatly reduces the protocol overhead.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Client-announcement
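To make the idea easier to follow, here is a minimal, self-contained C sketch (hypothetical names and layout, not the patch code): a receiver keeps its copy of an originator's table together with the last applied translation-table version number (ttvn) and applies a received change set only when it advances the version by exactly one; anything else would require requesting the full table, as described above.

```c
/* Illustrative sketch only -- not batman-adv code. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define ETH_ALEN   6
#define TT_ADD     0
#define TT_DEL     1
#define TABLE_SIZE 16

struct tt_change {
	uint8_t op;                /* TT_ADD or TT_DEL */
	uint8_t addr[ETH_ALEN];    /* client MAC address */
};

struct tt_table {
	uint8_t entries[TABLE_SIZE][ETH_ALEN];
	int     num_entries;
	uint8_t ttvn;              /* last version applied */
};

static int tt_find(const struct tt_table *tt, const uint8_t *addr)
{
	int i;

	for (i = 0; i < tt->num_entries; i++)
		if (memcmp(tt->entries[i], addr, ETH_ALEN) == 0)
			return i;
	return -1;
}

/* apply a change set only if it brings us exactly one version forward;
 * otherwise the caller has to ask for the full table */
static int tt_apply_changes(struct tt_table *tt, uint8_t new_ttvn,
			    const struct tt_change *changes, int num_changes)
{
	int i, pos;

	if ((uint8_t)(new_ttvn - tt->ttvn) != 1)
		return -1; /* out of sync */

	for (i = 0; i < num_changes; i++) {
		pos = tt_find(tt, changes[i].addr);
		if (changes[i].op == TT_ADD && pos < 0 &&
		    tt->num_entries < TABLE_SIZE) {
			memcpy(tt->entries[tt->num_entries++],
			       changes[i].addr, ETH_ALEN);
		} else if (changes[i].op == TT_DEL && pos >= 0) {
			/* remove by swapping in the last entry */
			tt->num_entries--;
			if (pos != tt->num_entries)
				memcpy(tt->entries[pos],
				       tt->entries[tt->num_entries], ETH_ALEN);
		}
	}

	tt->ttvn = new_ttvn;
	return 0;
}

int main(void)
{
	struct tt_table tt = { .num_entries = 0, .ttvn = 0 };
	struct tt_change diff[] = {
		{ TT_ADD, { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } },
		{ TT_ADD, { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 } },
	};

	if (tt_apply_changes(&tt, 1, diff, 2) == 0)
		printf("table now holds %d clients at ttvn %u\n",
		       tt.num_entries, (unsigned)tt.ttvn);
	return 0;
}
```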
Moreover:
- COMPAT_VERSION has been increased to 14
- batman-adv now depends on module "crc16" for tt crc computation
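The crc16 dependency is used to derive a checksum over an originator's announced clients, so sender and receiver can notice when their tables have drifted apart and a full-table request is needed. The rough idea is sketched below in plain user-space C; the seed, iteration order and the XOR combination are assumptions of this sketch, not necessarily the exact computation in the patch (in-kernel code would simply use the crc16() helper exported by the crc16 module instead of the local crc16_update()).

```c
/* Illustrative sketch only -- not batman-adv code. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* bitwise CRC-16 with the reflected polynomial 0xA001 */
static uint16_t crc16_update(uint16_t crc, const uint8_t *buf, size_t len)
{
	size_t i;
	int j;

	for (i = 0; i < len; i++) {
		crc ^= buf[i];
		for (j = 0; j < 8; j++)
			crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
	}
	return crc;
}

int main(void)
{
	const uint8_t clients[][6] = {
		{ 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
		{ 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 },
	};
	uint16_t total = 0;
	size_t i;

	/* XOR per-entry CRCs so the result does not depend on the order
	 * in which the table is walked (an assumption of this sketch) */
	for (i = 0; i < sizeof(clients) / sizeof(clients[0]); i++)
		total ^= crc16_update(0, clients[i], 6);

	printf("tt crc: 0x%04x\n", (unsigned)total);
	return 0;
}
```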
Signed-off-by: Antonio Quartulli ordex@autistici.org --- aggregation.c | 23 +- aggregation.h | 6 +- hard-interface.c | 13 +- main.c | 13 +- main.h | 10 +- originator.c | 8 +- packet.h | 34 ++- routing.c | 241 +++++++++-- routing.h | 10 +- send.c | 90 +++- send.h | 2 +- soft-interface.c | 11 +- translation-table.c | 1147 ++++++++++++++++++++++++++++++++++++++++++--------- translation-table.h | 39 ++- types.h | 38 ++- unicast.c | 3 + 16 files changed, 1374 insertions(+), 314 deletions(-)
diff --git a/aggregation.c b/aggregation.c index 9b94590..de59b5f 100644 --- a/aggregation.c +++ b/aggregation.c @@ -20,16 +20,11 @@ */
#include "main.h" +#include "translation-table.h" #include "aggregation.h" #include "send.h" #include "routing.h"
-/* calculate the size of the tt information for a given packet */ -static int tt_len(struct batman_packet *batman_packet) -{ - return batman_packet->num_tt * ETH_ALEN; -} - /* return true if new_packet can be aggregated with forw_packet */ static bool can_aggregate_with(struct batman_packet *new_batman_packet, int packet_len, @@ -255,18 +250,20 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff, batman_packet = (struct batman_packet *)packet_buff;
do { - /* network to host order for our 32bit seqno, and the - orig_interval. */ + /* network to host order for our 32bit seqno and the + orig_interval */ batman_packet->seqno = ntohl(batman_packet->seqno); + batman_packet->tt_crc = ntohs(batman_packet->tt_crc);
tt_buff = packet_buff + buff_pos + BAT_PACKET_LEN; - receive_bat_packet(ethhdr, batman_packet, - tt_buff, tt_len(batman_packet), - if_incoming);
- buff_pos += BAT_PACKET_LEN + tt_len(batman_packet); + receive_bat_packet(ethhdr, batman_packet, tt_buff, if_incoming); + + buff_pos += BAT_PACKET_LEN + + tt_len(batman_packet->tt_num_changes); + batman_packet = (struct batman_packet *) (packet_buff + buff_pos); } while (aggregated_packet(buff_pos, packet_len, - batman_packet->num_tt)); + batman_packet->tt_num_changes)); } diff --git a/aggregation.h b/aggregation.h index 7e6d72f..c631a4c 100644 --- a/aggregation.h +++ b/aggregation.h @@ -25,9 +25,11 @@ #include "main.h"
/* is there another aggregated packet here? */ -static inline int aggregated_packet(int buff_pos, int packet_len, int num_tt) +static inline int aggregated_packet(int buff_pos, int packet_len, + int tt_num_changes) { - int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_tt * ETH_ALEN); + int next_buff_pos = buff_pos + BAT_PACKET_LEN + (tt_num_changes * + sizeof(struct tt_change));
return (next_buff_pos <= packet_len) && (next_buff_pos <= MAX_AGGREGATION_BYTES); diff --git a/hard-interface.c b/hard-interface.c index 9e4ac7d..4fcd22e 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -156,12 +156,6 @@ static void primary_if_select(struct bat_priv *bat_priv,
primary_if_update_addr(bat_priv);
- /*** - * hacky trick to make sure that we send the TT information via - * our new primary interface - */ - atomic_set(&bat_priv->tt_local_changed, 1); - out: spin_unlock_bh(&hardif_list_lock); } @@ -345,7 +339,8 @@ int hardif_enable_interface(struct hard_iface *hard_iface, char *iface_name) batman_packet->flags = 0; batman_packet->ttl = 2; batman_packet->tq = TQ_MAX_VALUE; - batman_packet->num_tt = 0; + batman_packet->tt_num_changes = 0; + batman_packet->ttvn = 0;
hard_iface->if_num = bat_priv->num_ifaces; bat_priv->num_ifaces++; @@ -674,6 +669,10 @@ static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, case BAT_VIS: ret = recv_vis_packet(skb, hard_iface); break; + /* Translation table query (request or response) */ + case BAT_TT_QUERY: + ret = recv_tt_query(skb, hard_iface); + break; default: ret = NET_RX_DROP; } diff --git a/main.c b/main.c index 2970908..a84679a 100644 --- a/main.c +++ b/main.c @@ -83,6 +83,9 @@ int mesh_init(struct net_device *soft_iface) spin_lock_init(&bat_priv->forw_bcast_list_lock); spin_lock_init(&bat_priv->tt_lhash_lock); spin_lock_init(&bat_priv->tt_ghash_lock); + spin_lock_init(&bat_priv->tt_changes_list_lock); + spin_lock_init(&bat_priv->tt_req_list_lock); + spin_lock_init(&bat_priv->tt_buff_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); spin_lock_init(&bat_priv->vis_list_lock); @@ -92,14 +95,13 @@ int mesh_init(struct net_device *soft_iface) INIT_HLIST_HEAD(&bat_priv->forw_bcast_list); INIT_HLIST_HEAD(&bat_priv->gw_list); INIT_HLIST_HEAD(&bat_priv->softif_neigh_list); + INIT_LIST_HEAD(&bat_priv->tt_changes_list); + INIT_LIST_HEAD(&bat_priv->tt_req_list);
if (originator_init(bat_priv) < 1) goto err;
- if (tt_local_init(bat_priv) < 1) - goto err; - - if (tt_global_init(bat_priv) < 1) + if (tt_init(bat_priv) < 1) goto err;
tt_local_add(soft_iface, soft_iface->dev_addr); @@ -133,8 +135,7 @@ void mesh_free(struct net_device *soft_iface) gw_node_purge(bat_priv); originator_free(bat_priv);
- tt_local_free(bat_priv); - tt_global_free(bat_priv); + tt_free(bat_priv);
softif_neigh_purge(bat_priv);
diff --git a/main.h b/main.h index 50eb819..cc1c277 100644 --- a/main.h +++ b/main.h @@ -39,8 +39,8 @@ #define PURGE_TIMEOUT 200 /* purge originators after time in seconds if no * valid packet comes in -> TODO: check * influence on TQ_LOCAL_WINDOW_SIZE */ -#define TT_LOCAL_TIMEOUT 3600 /* in seconds */ - +#define TT_LOCAL_TIMEOUT 3600 /* in seconds */ +#define TT_REQUEST_TIMEOUT 3 /* seconds we have to keep pending tt_req */ #define TQ_LOCAL_WINDOW_SIZE 64 /* sliding packet range of received originator * messages in squence numbers (should be a * multiple of our word size) */ @@ -49,6 +49,12 @@ #define TQ_LOCAL_BIDRECT_RECV_MINIMUM 1 #define TQ_TOTAL_BIDRECT_LIMIT 1
+#define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */ + +/* Transtable operations */ +#define TT_ADD 0 +#define TT_DEL 1 + #define NUM_WORDS (TQ_LOCAL_WINDOW_SIZE / WORD_BIT_SIZE)
#define LOG_BUF_LEN 8192 /* has to be a power of 2 */ diff --git a/originator.c b/originator.c index 0314875..be7257b 100644 --- a/originator.c +++ b/originator.c @@ -147,6 +147,7 @@ static void orig_node_free_rcu(struct rcu_head *rcu) tt_global_del_orig(orig_node->bat_priv, orig_node, "originator timed out");
+ kfree(orig_node->tt_buff); kfree(orig_node->bcast_own); kfree(orig_node->bcast_own_sum); kfree(orig_node); @@ -215,6 +216,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) spin_lock_init(&orig_node->ogm_cnt_lock); spin_lock_init(&orig_node->bcast_seqno_lock); spin_lock_init(&orig_node->neigh_list_lock); + spin_lock_init(&orig_node->tt_buff_lock);
/* extra reference for return */ atomic_set(&orig_node->refcount, 2); @@ -223,6 +225,8 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; orig_node->tt_buff = NULL; + orig_node->tt_buff_len = 0; + atomic_set(&orig_node->tt_size, 0); orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1 @@ -332,9 +336,7 @@ static bool purge_orig_node(struct bat_priv *bat_priv, if (purge_orig_neighbors(bat_priv, orig_node, &best_neigh_node)) { update_routes(bat_priv, orig_node, - best_neigh_node, - orig_node->tt_buff, - orig_node->tt_buff_len); + best_neigh_node); } }
diff --git a/packet.h b/packet.h index c225c3a..de8bf3b 100644 --- a/packet.h +++ b/packet.h @@ -30,9 +30,10 @@ #define BAT_BCAST 0x04 #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 +#define BAT_TT_QUERY 0x07
/* this file is included by batctl which needs these defines */ -#define COMPAT_VERSION 12 +#define COMPAT_VERSION 14 #define DIRECTLINK 0x40 #define VIS_SERVER 0x20 #define PRIMARIES_FIRST_HOP 0x10 @@ -52,6 +53,11 @@ #define UNI_FRAG_HEAD 0x01 #define UNI_FRAG_LARGETAIL 0x02
+/* TT flags */ +#define TT_RESPONSE 0x00 +#define TT_REQUEST 0x01 +#define TT_FULL_TABLE 0x02 + struct batman_packet { uint8_t packet_type; uint8_t version; /* batman version field */ @@ -61,7 +67,9 @@ struct batman_packet { uint8_t orig[6]; uint8_t prev_sender[6]; uint8_t ttl; - uint8_t num_tt; + uint8_t ttvn; /* translation table version number */ + uint16_t tt_crc; + uint8_t tt_num_changes; uint8_t gw_flags; /* flags related to gateway class */ uint8_t align; } __packed; @@ -101,6 +109,7 @@ struct unicast_packet { uint8_t version; /* batman version field */ uint8_t dest[6]; uint8_t ttl; + uint8_t ttvn; /* destination translation table version number */ } __packed;
struct unicast_frag_packet { @@ -134,4 +143,25 @@ struct vis_packet { uint8_t sender_orig[6]; /* who sent or rebroadcasted this packet */ } __packed;
+struct tt_query_packet { + uint8_t packet_type; + uint8_t version; /* batman version field */ + uint8_t dst[6]; + uint8_t ttl; + uint8_t flags; /* this field is a combination of: + * - TT_REQUEST or TT_RESPONSE + * - TT_FULL_TABLE + */ + uint8_t src[6]; + uint8_t ttvn; /* if TT_REQUEST: ttvn that triggered the + * request + * if TT_RESPONSE: new ttvn for the src + * orig_node + */ + uint16_t tt_data; /* if TT_REQUEST: crc associated with the + * ttvn + * if TT_RESPONSE: table_size + */ +} __packed; + #endif /* _NET_BATMAN_ADV_PACKET_H_ */ diff --git a/routing.c b/routing.c index 91b3709..52107cd 100644 --- a/routing.c +++ b/routing.c @@ -64,28 +64,71 @@ void slide_own_bcast_window(struct hard_iface *hard_iface) } }
-static void update_TT(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len) +static void update_transtable(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes, + uint8_t ttvn, uint16_t tt_crc) { - if ((tt_buff_len != orig_node->tt_buff_len) || - ((tt_buff_len > 0) && - (orig_node->tt_buff_len > 0) && - (memcmp(orig_node->tt_buff, tt_buff, tt_buff_len) != 0))) { - - if (orig_node->tt_buff_len > 0) - tt_global_del_orig(bat_priv, orig_node, - "originator changed tt"); - - if ((tt_buff_len > 0) && (tt_buff)) - tt_global_add_orig(bat_priv, orig_node, - tt_buff, tt_buff_len); + struct tt_change *tt_change; + int count; + uint8_t orig_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + + /* the ttvn increased by one -> we can apply the attached changes */ + if (ttvn - orig_ttvn == 1) { + /* the OGM could not contain the changes because they were too + * many to fit in one frame or because they have already been + * sent TT_OGM_APPEND_MAX times. In this case send a tt + * request */ + if (!tt_num_changes) + goto request_table; + + for (count = 0; count < tt_num_changes; count++) { + tt_change = (struct tt_change *) tt_buff + count; + /* Check for the change op */ + if (tt_change->op == TT_DEL) + tt_global_del(bat_priv, orig_node, + tt_change->addr, + "tt remotely removed"); + else + if (!tt_global_add(bat_priv, orig_node, + tt_change->addr, + ttvn)) + /* In case of problem while storing a + * global_entry, we stop the updating + * procedure without committing the + * ttvn change. This will avoid to send + * corrupted data on tt_request + */ + return; + } + /* Let's save the buffer (if any) */ + tt_save_orig_buffer(bat_priv, orig_node, + tt_buff, tt_num_changes); + + atomic_set(&orig_node->last_ttvn, ttvn); + + /* Even if we received the crc into the OGM, we prefer + * to recompute it to spot any possible inconsistency + * in the global table */ + orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + } else { + /* if we missed more than one change or our tables are not + * in sync anymore -> request fresh tt data */ + if (ttvn != orig_ttvn || orig_node->tt_crc != tt_crc) { +request_table: + bat_dbg(DBG_ROUTES, bat_priv, "TT changes missing " + "for %pM. Need to retrieve last OGM buffer\n", + orig_node->orig); + send_tt_request(bat_priv, orig_node, ttvn, tt_crc, + true); + return; + } } }
static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, - unsigned char *tt_buff, int tt_buff_len) + struct neigh_node *neigh_node) { struct neigh_node *curr_router;
@@ -93,7 +136,6 @@ static void update_route(struct bat_priv *bat_priv,
/* route deleted */ if ((curr_router) && (!neigh_node)) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting route towards: %pM\n", orig_node->orig); tt_global_del_orig(bat_priv, orig_node, @@ -105,9 +147,6 @@ static void update_route(struct bat_priv *bat_priv, bat_dbg(DBG_ROUTES, bat_priv, "Adding route towards: %pM (via %pM)\n", orig_node->orig, neigh_node->addr); - tt_global_add_orig(bat_priv, orig_node, - tt_buff, tt_buff_len); - /* route changed */ } else { bat_dbg(DBG_ROUTES, bat_priv, @@ -135,8 +174,7 @@ static void update_route(struct bat_priv *bat_priv,
void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *tt_buff, - int tt_buff_len) + struct neigh_node *neigh_node) { struct neigh_node *router = NULL;
@@ -146,11 +184,7 @@ void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, router = orig_node_get_router(orig_node);
if (router != neigh_node) - update_route(bat_priv, orig_node, neigh_node, - tt_buff, tt_buff_len); - /* may be just TT changed */ - else - update_TT(bat_priv, orig_node, tt_buff, tt_buff_len); + update_route(bat_priv, orig_node, neigh_node);
out: if (router) @@ -387,14 +421,12 @@ static void update_orig(struct bat_priv *bat_priv, struct ethhdr *ethhdr, struct batman_packet *batman_packet, struct hard_iface *if_incoming, - unsigned char *tt_buff, int tt_buff_len, - char is_duplicate) + unsigned char *tt_buff, char is_duplicate) { struct neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; struct neigh_node *router = NULL; struct orig_node *orig_node_tmp; struct hlist_node *node; - int tmp_tt_buff_len; uint8_t bcast_own_sum_orig, bcast_own_sum_neigh;
bat_dbg(DBG_BATMAN, bat_priv, "update_originator(): " @@ -459,9 +491,6 @@ static void update_orig(struct bat_priv *bat_priv,
bonding_candidate_add(orig_node, neigh_node);
- tmp_tt_buff_len = (tt_buff_len > batman_packet->num_tt * ETH_ALEN ? - batman_packet->num_tt * ETH_ALEN : tt_buff_len); - /* if this neighbor already is our next hop there is nothing * to change */ router = orig_node_get_router(orig_node); @@ -491,15 +520,19 @@ static void update_orig(struct bat_priv *bat_priv, goto update_tt; }
- update_routes(bat_priv, orig_node, neigh_node, - tt_buff, tmp_tt_buff_len); - goto update_gw; + update_routes(bat_priv, orig_node, neigh_node);
update_tt: - update_routes(bat_priv, orig_node, router, - tt_buff, tmp_tt_buff_len); + /* I have to check for transtable changes only if the OGM has been + * sent through a primary interface */ + if (((batman_packet->orig != ethhdr->h_source) && + (batman_packet->ttl > 2)) || + (batman_packet->flags & PRIMARIES_FIRST_HOP)) + update_transtable(bat_priv, orig_node, tt_buff, + batman_packet->tt_num_changes, + batman_packet->ttvn, + batman_packet->tt_crc);
-update_gw: if (orig_node->gw_flags != batman_packet->gw_flags) gw_node_update(bat_priv, orig_node, batman_packet->gw_flags);
@@ -621,7 +654,7 @@ out:
void receive_bat_packet(struct ethhdr *ethhdr, struct batman_packet *batman_packet, - unsigned char *tt_buff, int tt_buff_len, + unsigned char *tt_buff, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -660,12 +693,14 @@ void receive_bat_packet(struct ethhdr *ethhdr,
bat_dbg(DBG_BATMAN, bat_priv, "Received BATMAN packet via NB: %pM, IF: %s [%pM] " - "(from OG: %pM, via prev OG: %pM, seqno %d, tq %d, " - "TTL %d, V %d, IDF %d)\n", + "(from OG: %pM, via prev OG: %pM, seqno %d, ttvn %u, " + "crc %u, changes %u, td %d, TTL %d, V %d, IDF %d)\n", ethhdr->h_source, if_incoming->net_dev->name, if_incoming->net_dev->dev_addr, batman_packet->orig, batman_packet->prev_sender, batman_packet->seqno, - batman_packet->tq, batman_packet->ttl, batman_packet->version, + batman_packet->ttvn, batman_packet->tt_crc, + batman_packet->tt_num_changes, batman_packet->tq, + batman_packet->ttl, batman_packet->version, has_directlink_flag);
rcu_read_lock(); @@ -818,14 +853,14 @@ void receive_bat_packet(struct ethhdr *ethhdr, ((orig_node->last_real_seqno == batman_packet->seqno) && (orig_node->last_ttl - 3 <= batman_packet->ttl)))) update_orig(bat_priv, orig_node, ethhdr, batman_packet, - if_incoming, tt_buff, tt_buff_len, is_duplicate); + if_incoming, tt_buff, is_duplicate);
/* is single hop (direct) neighbor */ if (is_single_hop_neigh) {
/* mark direct link on incoming interface */ schedule_forward_packet(orig_node, ethhdr, batman_packet, - 1, tt_buff_len, if_incoming); + 1, if_incoming);
bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: " "rebroadcast neighbor packet with direct link flag\n"); @@ -848,7 +883,7 @@ void receive_bat_packet(struct ethhdr *ethhdr, bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: rebroadcast originator packet\n"); schedule_forward_packet(orig_node, ethhdr, batman_packet, - 0, tt_buff_len, if_incoming); + 0, if_incoming);
out_neigh: if ((orig_neigh_node) && (!is_single_hop_neigh)) @@ -1195,6 +1230,70 @@ static struct neigh_node *find_ifalter_router(struct orig_node *primary_orig, return router; }
+int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if) +{ + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct tt_query_packet *tt_query; + struct ethhdr *ethhdr; + int ret = NET_RX_DROP; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, sizeof(struct tt_query_packet)))) + goto out; + + /* I could need to modify it */ + if (skb_cow(skb, sizeof(struct tt_query_packet)) < 0) + goto out; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with unicast indication but broadcast recipient */ + if (is_broadcast_ether_addr(ethhdr->h_dest)) + goto out; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + goto out; + + tt_query = (struct tt_query_packet *)skb->data; + + tt_query->tt_data = ntohs(tt_query->tt_data); + + if (tt_query->flags & TT_REQUEST) { + /* If we cannot provide an answer the tt_request is + * forwarded */ + if (!send_tt_response(bat_priv, tt_query)) { + bat_dbg(DBG_ROUTES, bat_priv, + "Routing TT_REQUEST to %pM [%c]\n", + tt_query->dst, + (tt_query->flags & TT_FULL_TABLE ? 'F' : '.')); + tt_query->tt_data = htons(tt_query->tt_data); + return route_unicast_packet(skb, recv_if); + } + ret = NET_RX_SUCCESS; + goto out; + } + /* packet needs to be linearised to access the TT changes records */ + if (skb_linearize(skb) < 0) + goto out; + + if (is_my_mac(tt_query->dst)) + handle_tt_response(bat_priv, tt_query); + else { + bat_dbg(DBG_ROUTES, bat_priv, + "Routing TT_RESPONSE to %pM [%c]\n", + tt_query->dst, + (tt_query->flags & TT_FULL_TABLE ? 'F' : '.')); + tt_query->tt_data = htons(tt_query->tt_data); + return route_unicast_packet(skb, recv_if); + } + ret = NET_RX_SUCCESS; + +out: + kfree_skb(skb); + return ret; +} + /* find a suitable router for this originator, and use * bonding if possible. increases the found neighbors * refcount.*/ @@ -1376,14 +1475,64 @@ out:
int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) { + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); struct unicast_packet *unicast_packet; int hdr_size = sizeof(struct unicast_packet); + struct orig_node *orig_node; + struct ethhdr *ethhdr; + uint8_t curr_ttvn; + int16_t diff;
if (check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
unicast_packet = (struct unicast_packet *)skb->data;
+ if (is_my_mac(unicast_packet->dest)) + curr_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); + else { + orig_node = orig_hash_find(bat_priv, unicast_packet->dest); + + if (!orig_node) + return NET_RX_DROP; + + curr_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + orig_node_free_ref(orig_node); + } + + diff = unicast_packet->ttvn - curr_ttvn; + /* Check whether I have to reroute the packet */ + if (unicast_packet->packet_type == BAT_UNICAST && + (diff < 0 && diff > -0xff/2)) { + /* Linearize the skb before accessing it */ + if (skb_linearize(skb) < 0) + return NET_RX_DROP; + + ethhdr = (struct ethhdr *)(skb->data + + sizeof(struct unicast_packet)); + + orig_node = transtable_search(bat_priv, ethhdr->h_dest); + + if (!orig_node) { + if (!is_my_client(bat_priv, ethhdr->h_dest)) + return NET_RX_DROP; + memcpy(unicast_packet->dest, + bat_priv->primary_if->net_dev->dev_addr, + ETH_ALEN); + } else { + memcpy(unicast_packet->dest, orig_node->orig, + ETH_ALEN); + curr_ttvn = (uint8_t) + atomic_read(&orig_node->last_ttvn); + orig_node_free_ref(orig_node); + } + + unicast_packet->ttvn = curr_ttvn; + + bat_dbg(DBG_ROUTES, bat_priv, "HVN mismatch! " + "Rerouting unicast packet (for %pM) to %pM\n", + ethhdr->h_dest, unicast_packet->dest); + } /* packet for me */ if (is_my_mac(unicast_packet->dest)) { interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size); diff --git a/routing.h b/routing.h index 870f298..6f6a5f8 100644 --- a/routing.h +++ b/routing.h @@ -24,12 +24,11 @@
void slide_own_bcast_window(struct hard_iface *hard_iface); void receive_bat_packet(struct ethhdr *ethhdr, - struct batman_packet *batman_packet, - unsigned char *tt_buff, int tt_buff_len, - struct hard_iface *if_incoming); + struct batman_packet *batman_packet, + unsigned char *tt_buff, + struct hard_iface *if_incoming); void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *tt_buff, - int tt_buff_len); + struct neigh_node *neigh_node); int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); @@ -37,6 +36,7 @@ int recv_ucast_frag_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bat_packet(struct sk_buff *skb, struct hard_iface *recv_if); +int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv, struct orig_node *orig_node, struct hard_iface *recv_if); diff --git a/send.c b/send.c index f30d0c6..aa0ad64 100644 --- a/send.c +++ b/send.c @@ -121,7 +121,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, /* adjust all flags and log packets */ while (aggregated_packet(buff_pos, forw_packet->packet_len, - batman_packet->num_tt)) { + batman_packet->tt_num_changes)) {
/* we might have aggregated direct link packets with an * ordinary base packet */ @@ -136,17 +136,17 @@ static void send_packet_to_if(struct forw_packet *forw_packet, "Forwarding")); bat_dbg(DBG_BATMAN, bat_priv, "%s %spacket (originator %pM, seqno %d, TQ %d, TTL %d," - " IDF %s) on interface %s [%pM]\n", + " IDF %s, hvn %d) on interface %s [%pM]\n", fwd_str, (packet_num > 0 ? "aggregated " : ""), batman_packet->orig, ntohl(batman_packet->seqno), batman_packet->tq, batman_packet->ttl, (batman_packet->flags & DIRECTLINK ? "on" : "off"), - hard_iface->net_dev->name, + batman_packet->ttvn, hard_iface->net_dev->name, hard_iface->net_dev->dev_addr);
buff_pos += sizeof(struct batman_packet) + - (batman_packet->num_tt * ETH_ALEN); + tt_len(batman_packet->tt_num_changes); packet_num++; batman_packet = (struct batman_packet *) (forw_packet->skb->data + buff_pos); @@ -214,26 +214,17 @@ static void send_packet(struct forw_packet *forw_packet) rcu_read_unlock(); }
-static void rebuild_batman_packet(struct bat_priv *bat_priv, - struct hard_iface *hard_iface) +static void realloc_packet_buffer(struct hard_iface *hard_iface, + int new_len) { - int new_len; unsigned char *new_buff; - struct batman_packet *batman_packet;
- new_len = sizeof(struct batman_packet) + - (bat_priv->num_local_tt * ETH_ALEN); new_buff = kmalloc(new_len, GFP_ATOMIC);
/* keep old buffer if kmalloc should fail */ if (new_buff) { memcpy(new_buff, hard_iface->packet_buff, sizeof(struct batman_packet)); - batman_packet = (struct batman_packet *)new_buff; - - batman_packet->num_tt = tt_local_fill_buffer(bat_priv, - new_buff + sizeof(struct batman_packet), - new_len - sizeof(struct batman_packet));
kfree(hard_iface->packet_buff); hard_iface->packet_buff = new_buff; @@ -241,6 +232,46 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, } }
+/* when calling this function (hard_iface == primary_if) has to be true */ +static void prepare_packet_buffer(struct bat_priv *bat_priv, + struct hard_iface *hard_iface) +{ + int new_len; + struct batman_packet *batman_packet; + + new_len = BAT_PACKET_LEN + + tt_len((uint8_t)atomic_read(&bat_priv->tt_local_changes)); + + /* if we have too many changes for one packet don't send any + * and wait for the tt table request which will be fragmented */ + if (new_len > hard_iface->soft_iface->mtu) + new_len = BAT_PACKET_LEN; + + realloc_packet_buffer(hard_iface, new_len); + batman_packet = (struct batman_packet *)hard_iface->packet_buff; + + atomic_set(&bat_priv->tt_crc, tt_local_crc(bat_priv)); + + /* reset the sending counter */ + atomic_set(&bat_priv->tt_ogm_append_cnt, TT_OGM_APPEND_MAX); + + batman_packet->tt_num_changes = tt_changes_fill_buffer(bat_priv, + hard_iface->packet_buff + BAT_PACKET_LEN, + hard_iface->packet_len - BAT_PACKET_LEN); + +} + +static void reset_packet_buffer(struct bat_priv *bat_priv, + struct hard_iface *hard_iface) +{ + struct batman_packet *batman_packet; + + realloc_packet_buffer(hard_iface, BAT_PACKET_LEN); + + batman_packet = (struct batman_packet *)hard_iface->packet_buff; + batman_packet->tt_num_changes = 0; +} + void schedule_own_packet(struct hard_iface *hard_iface) { struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); @@ -266,14 +297,22 @@ void schedule_own_packet(struct hard_iface *hard_iface) if (hard_iface->if_status == IF_TO_BE_ACTIVATED) hard_iface->if_status = IF_ACTIVE;
- /* if local tt has changed and interface is a primary interface */ - if ((atomic_read(&bat_priv->tt_local_changed)) && - (hard_iface == primary_if)) - rebuild_batman_packet(bat_priv, hard_iface); + if (hard_iface == primary_if) { + /* if at least one change happened */ + if (atomic_read(&bat_priv->tt_local_changes) > 0) { + prepare_packet_buffer(bat_priv, hard_iface); + /* Increment the TTVN only once per OGM interval */ + atomic_inc(&bat_priv->ttvn); + } + + /* if the changes have been sent enough times */ + if (!atomic_dec_not_zero(&bat_priv->tt_ogm_append_cnt)) + reset_packet_buffer(bat_priv, hard_iface); + }
/** * NOTE: packet_buff might just have been re-allocated in - * rebuild_batman_packet() + * prepare_packet_buffer() or in reset_packet_buffer() */ batman_packet = (struct batman_packet *)hard_iface->packet_buff;
@@ -281,6 +320,9 @@ void schedule_own_packet(struct hard_iface *hard_iface) batman_packet->seqno = htonl((uint32_t)atomic_read(&hard_iface->seqno));
+ batman_packet->ttvn = atomic_read(&bat_priv->ttvn); + batman_packet->tt_crc = htons((uint16_t)atomic_read(&bat_priv->tt_crc)); + if (vis_server == VIS_TYPE_SERVER_SYNC) batman_packet->flags |= VIS_SERVER; else @@ -309,13 +351,14 @@ void schedule_own_packet(struct hard_iface *hard_iface) void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int tt_buff_len, + uint8_t directlink, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); struct neigh_node *router; unsigned char in_tq, in_ttl, tq_avg = 0; unsigned long send_time; + uint8_t tt_num_changes;
if (batman_packet->ttl <= 1) { bat_dbg(DBG_BATMAN, bat_priv, "ttl exceeded\n"); @@ -326,6 +369,7 @@ void schedule_forward_packet(struct orig_node *orig_node,
in_tq = batman_packet->tq; in_ttl = batman_packet->ttl; + tt_num_changes = batman_packet->tt_num_changes;
batman_packet->ttl--; memcpy(batman_packet->prev_sender, ethhdr->h_source, ETH_ALEN); @@ -358,6 +402,7 @@ void schedule_forward_packet(struct orig_node *orig_node, batman_packet->ttl);
batman_packet->seqno = htonl(batman_packet->seqno); + batman_packet->tt_crc = htons(batman_packet->tt_crc);
/* switch of primaries first hop flag when forwarding */ batman_packet->flags &= ~PRIMARIES_FIRST_HOP; @@ -369,7 +414,8 @@ void schedule_forward_packet(struct orig_node *orig_node, send_time = forward_send_time(); add_bat_packet_to_list(bat_priv, (unsigned char *)batman_packet, - sizeof(struct batman_packet) + tt_buff_len, + sizeof(struct batman_packet) + + tt_len(tt_num_changes), if_incoming, 0, send_time); }
diff --git a/send.h b/send.h index 247172d..842f4d1 100644 --- a/send.h +++ b/send.h @@ -29,7 +29,7 @@ void schedule_own_packet(struct hard_iface *hard_iface); void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int tt_buff_len, + uint8_t directlink, struct hard_iface *if_outgoing); int add_bcast_packet_to_list(struct bat_priv *bat_priv, struct sk_buff *skb); void send_outstanding_bat_packet(struct work_struct *work); diff --git a/soft-interface.c b/soft-interface.c index 89a940a..96b98f7 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -366,7 +366,7 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) /* only modify tt-table if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { tt_local_remove(bat_priv, dev->dev_addr, - "mac address changed"); + "mac address changed"); tt_local_add(dev, addr->sa_data); }
@@ -424,7 +424,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) if ((curr_softif_neigh) && (curr_softif_neigh->vid == vid)) goto dropped;
- /* TODO: check this for locks */ + /* Register the client MAC in the transtable */ tt_local_add(soft_iface, ethhdr->h_source);
if (is_multicast_ether_addr(ethhdr->h_dest)) { @@ -663,7 +663,12 @@ struct net_device *softif_create(char *name)
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1); - atomic_set(&bat_priv->tt_local_changed, 0); + atomic_set(&bat_priv->ttvn, 0); + atomic_set(&bat_priv->tt_local_changes, 0); + atomic_set(&bat_priv->tt_ogm_append_cnt, 0); + + bat_priv->tt_buff = NULL; + bat_priv->tt_buff_len = 0;
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index 25e6939..698c3d4 100644 --- a/translation-table.c +++ b/translation-table.c @@ -23,13 +23,17 @@ #include "translation-table.h" #include "soft-interface.h" #include "hard-interface.h" +#include "send.h" #include "hash.h" #include "originator.h" +#include "routing.h"
-static void tt_local_purge(struct work_struct *work); -static void _tt_global_del_orig(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - char *message); +#include <linux/crc16.h> + +static void _tt_global_del(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, + char *message); +static void tt_purge(struct work_struct *work);
/* returns 1 if they are the same mac addr */ static int compare_ltt(struct hlist_node *node, void *data2) @@ -47,14 +51,15 @@ static int compare_gtt(struct hlist_node *node, void *data2) return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
-static void tt_local_start_timer(struct bat_priv *bat_priv) +static void tt_start_timer(struct bat_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->tt_work, tt_local_purge); - queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, 10 * HZ); + INIT_DELAYED_WORK(&bat_priv->tt_work, tt_purge); + queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, + msecs_to_jiffies(5000)); }
static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, - void *data) + void *data) { struct hashtable_t *hash = bat_priv->tt_local_hash; struct hlist_head *head; @@ -82,7 +87,7 @@ static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, }
static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, - void *data) + void *data) { struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_head *head; @@ -110,7 +115,42 @@ static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, return tt_global_entry_tmp; }
-int tt_local_init(struct bat_priv *bat_priv) +static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) +{ + unsigned long deadline; + deadline = starting_time + msecs_to_jiffies(timeout); + + return time_after(jiffies, deadline); +} + +static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr) +{ + struct tt_change_node *tt_change_node; + + tt_change_node = (struct tt_change_node *) + kmalloc(sizeof(struct tt_change_node), GFP_ATOMIC); + + if (!tt_change_node) + return; + + tt_change_node->change.op = op; + memcpy(tt_change_node->change.addr, addr, ETH_ALEN); + + spin_lock_bh(&bat_priv->tt_changes_list_lock); + /* track the change in the OGMinterval list */ + list_add_tail(&tt_change_node->list, &bat_priv->tt_changes_list); + atomic_inc(&bat_priv->tt_local_changes); + spin_unlock_bh(&bat_priv->tt_changes_list_lock); + + atomic_set(&bat_priv->tt_ogm_append_cnt, 0); +} + +int tt_len(int changes_num) +{ + return changes_num * sizeof(struct tt_change); +} + +static int tt_local_init(struct bat_priv *bat_priv) { if (bat_priv->tt_local_hash) return 1; @@ -120,9 +160,6 @@ int tt_local_init(struct bat_priv *bat_priv) if (!bat_priv->tt_local_hash) return 0;
- atomic_set(&bat_priv->tt_local_changed, 0); - tt_local_start_timer(bat_priv); - return 1; }
@@ -131,40 +168,24 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) struct bat_priv *bat_priv = netdev_priv(soft_iface); struct tt_local_entry *tt_local_entry; struct tt_global_entry *tt_global_entry; - int required_bytes;
spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->tt_lhash_lock);
if (tt_local_entry) { tt_local_entry->last_seen = jiffies; - return; + goto unlock; }
- /* only announce as many hosts as possible in the batman-packet and - space in batman_packet->num_tt That also should give a limit to - MAC-flooding. */ - required_bytes = (bat_priv->num_local_tt + 1) * ETH_ALEN; - required_bytes += BAT_PACKET_LEN; - - if ((required_bytes > ETH_DATA_LEN) || - (atomic_read(&bat_priv->aggregated_ogms) && - required_bytes > MAX_AGGREGATION_BYTES) || - (bat_priv->num_local_tt + 1 > 255)) { - bat_dbg(DBG_ROUTES, bat_priv, - "Can't add new local tt entry (%pM): " - "number of local tt entries exceeds packet size\n", - addr); - return; - } - - bat_dbg(DBG_ROUTES, bat_priv, - "Creating new local tt entry: %pM\n", addr); - tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); if (!tt_local_entry) - return; + goto unlock; + + tt_local_event(bat_priv, TT_ADD, addr); + + bat_dbg(DBG_ROUTES, bat_priv, + "Creating new local tt entry: %pM (ttvn: %d\n", addr, + (uint8_t)atomic_read(&bat_priv->ttvn));
memcpy(tt_local_entry->addr, addr, ETH_ALEN); tt_local_entry->last_seen = jiffies; @@ -175,13 +196,9 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) else tt_local_entry->never_purge = 0;
- spin_lock_bh(&bat_priv->tt_lhash_lock); - hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry, &tt_local_entry->hash_entry); - bat_priv->num_local_tt++; - atomic_set(&bat_priv->tt_local_changed, 1); - + atomic_inc(&bat_priv->num_local_tt); spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ @@ -190,46 +207,60 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (tt_global_entry) - _tt_global_del_orig(bat_priv, tt_global_entry, - "local tt received"); + _tt_global_del(bat_priv, tt_global_entry, + "local tt received");
spin_unlock_bh(&bat_priv->tt_ghash_lock); + +unlock: + spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-int tt_local_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len) +int tt_changes_fill_buffer(struct bat_priv *bat_priv, + unsigned char *buff, int buff_len) { - struct hashtable_t *hash = bat_priv->tt_local_hash; - struct tt_local_entry *tt_local_entry; - struct hlist_node *node; - struct hlist_head *head; - int i, count = 0; + int count = 0, tot_changes = 0; + struct tt_change_node *entry, *safe;
- spin_lock_bh(&bat_priv->tt_lhash_lock); + if (buff_len > 0) + tot_changes = buff_len / tt_len(1);
- for (i = 0; i < hash->size; i++) { - head = &hash->table[i]; - - rcu_read_lock(); - hlist_for_each_entry_rcu(tt_local_entry, node, - head, hash_entry) { - if (buff_len < (count + 1) * ETH_ALEN) - break; - - memcpy(buff + (count * ETH_ALEN), tt_local_entry->addr, - ETH_ALEN); + spin_lock_bh(&bat_priv->tt_changes_list_lock); + atomic_set(&bat_priv->tt_local_changes, 0);
+ list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, + list) { + if (count < tot_changes) { + memcpy(buff + tt_len(count), + &entry->change, sizeof(struct tt_change)); count++; } - rcu_read_unlock(); + list_del(&entry->list); + kfree(entry); } + spin_unlock_bh(&bat_priv->tt_changes_list_lock);
- /* if we did not get all new local tts see you next time ;-) */ - if (count == bat_priv->num_local_tt) - atomic_set(&bat_priv->tt_local_changed, 0); + /* Keep the buffer for possible tt_request */ + spin_lock_bh(&bat_priv->tt_buff_lock); + kfree(bat_priv->tt_buff); + bat_priv->tt_buff_len = 0; + bat_priv->tt_buff = NULL; + /* We check whether this new OGM has no changes due to size + * problems */ + if (buff_len > 0) { + /** + * if kmalloc() fails we will reply with the full table + * instead of providing the diff + */ + bat_priv->tt_buff = kmalloc(buff_len, GFP_ATOMIC); + if (bat_priv->tt_buff) { + memcpy(bat_priv->tt_buff, buff, buff_len); + bat_priv->tt_buff_len = buff_len; + } + } + spin_unlock_bh(&bat_priv->tt_buff_lock);
- spin_unlock_bh(&bat_priv->tt_lhash_lock); - return count; + return tot_changes; }
int tt_local_seq_print_text(struct seq_file *seq, void *offset) @@ -261,8 +292,8 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) }
seq_printf(seq, "Locally retrieved addresses (from %s) " - "announced via TT:\n", - net_dev->name); + "announced via TT (TTVN: %u):\n", + net_dev->name, (uint8_t)atomic_read(&bat_priv->ttvn));
spin_lock_bh(&bat_priv->tt_lhash_lock);
@@ -309,54 +340,50 @@ out: return ret; }
-static void _tt_local_del(struct hlist_node *node, void *arg) +static void tt_local_entry_free(struct hlist_node *node, void *arg) { struct bat_priv *bat_priv = (struct bat_priv *)arg; void *data = container_of(node, struct tt_local_entry, hash_entry);
kfree(data); - bat_priv->num_local_tt--; - atomic_set(&bat_priv->tt_local_changed, 1); + atomic_dec(&bat_priv->num_local_tt); }
static void tt_local_del(struct bat_priv *bat_priv, - struct tt_local_entry *tt_local_entry, - char *message) + struct tt_local_entry *tt_local_entry, + char *message) { bat_dbg(DBG_ROUTES, bat_priv, "Deleting local tt entry (%pM): %s\n", tt_local_entry->addr, message);
+ atomic_dec(&bat_priv->num_local_tt); + hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry->addr); - _tt_local_del(&tt_local_entry->hash_entry, bat_priv); + + tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); }
-void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message) +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) { struct tt_local_entry *tt_local_entry;
spin_lock_bh(&bat_priv->tt_lhash_lock); - tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (tt_local_entry) + if (tt_local_entry) { + tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); tt_local_del(bat_priv, tt_local_entry, message); - + } spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-static void tt_local_purge(struct work_struct *work) +static void tt_local_purge(struct bat_priv *bat_priv) { - struct delayed_work *delayed_work = - container_of(work, struct delayed_work, work); - struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, tt_work); struct hashtable_t *hash = bat_priv->tt_local_hash; struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; - unsigned long timeout; int i;
spin_lock_bh(&bat_priv->tt_lhash_lock); @@ -369,32 +396,52 @@ static void tt_local_purge(struct work_struct *work) if (tt_local_entry->never_purge) continue;
- timeout = tt_local_entry->last_seen; - timeout += TT_LOCAL_TIMEOUT * HZ; - - if (time_before(jiffies, timeout)) + if (!is_out_of_time(tt_local_entry->last_seen, + TT_LOCAL_TIMEOUT * 1000)) continue;
+ tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); tt_local_del(bat_priv, tt_local_entry, - "address timed out"); + "address timed out"); } }
spin_unlock_bh(&bat_priv->tt_lhash_lock); - tt_local_start_timer(bat_priv); }
-void tt_local_free(struct bat_priv *bat_priv) +static void tt_local_table_free(struct bat_priv *bat_priv) { + struct hashtable_t *hash; + int i; + spinlock_t *list_lock; /* protects write access to the hash lists */ + struct hlist_head *head; + struct hlist_node *node, *node_tmp; + struct tt_local_entry *tt_local_entry; + if (!bat_priv->tt_local_hash) return;
- cancel_delayed_work_sync(&bat_priv->tt_work); - hash_delete(bat_priv->tt_local_hash, _tt_local_del, bat_priv); + hash = bat_priv->tt_local_hash; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + list_lock = &hash->list_locks[i]; + + spin_lock_bh(list_lock); + hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, + head, hash_entry) { + hlist_del_rcu(node); + kfree(tt_local_entry); + } + spin_unlock_bh(list_lock); + } + + hash_destroy(hash); + bat_priv->tt_local_hash = NULL; }
-int tt_global_init(struct bat_priv *bat_priv) +static int tt_global_init(struct bat_priv *bat_priv) { if (bat_priv->tt_global_hash) return 1; @@ -407,74 +454,79 @@ int tt_global_init(struct bat_priv *bat_priv) return 1; }
-void tt_global_add_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len) +static void tt_changes_list_free(struct bat_priv *bat_priv) +{ + struct tt_change_node *entry, *safe; + + spin_lock_bh(&bat_priv->tt_changes_list_lock); + + list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, + list) { + list_del(&entry->list); + kfree(entry); + } + + atomic_set(&bat_priv->tt_local_changes, 0); + spin_unlock_bh(&bat_priv->tt_changes_list_lock); +} + +/* caller must hold orig_node recount */ +int tt_global_add(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *tt_addr, uint8_t ttvn) { struct tt_global_entry *tt_global_entry; struct tt_local_entry *tt_local_entry; - int tt_buff_count = 0; - unsigned char *tt_ptr; - - while ((tt_buff_count + 1) * ETH_ALEN <= tt_buff_len) { - spin_lock_bh(&bat_priv->tt_ghash_lock); - - tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); - tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr); - - if (!tt_global_entry) { - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - tt_global_entry = - kmalloc(sizeof(struct tt_global_entry), - GFP_ATOMIC); - - if (!tt_global_entry) - break; - - memcpy(tt_global_entry->addr, tt_ptr, ETH_ALEN); - - bat_dbg(DBG_ROUTES, bat_priv, - "Creating new global tt entry: " - "%pM (via %pM)\n", - tt_global_entry->addr, orig_node->orig); - - spin_lock_bh(&bat_priv->tt_ghash_lock); - hash_add(bat_priv->tt_global_hash, compare_gtt, - choose_orig, tt_global_entry, - &tt_global_entry->hash_entry); - - } - + struct orig_node *orig_node_tmp; + + spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, tt_addr); + + if (!tt_global_entry) { + tt_global_entry = + kmalloc(sizeof(struct tt_global_entry), + GFP_ATOMIC); + if (!tt_global_entry) + goto unlock; + memcpy(tt_global_entry->addr, tt_addr, ETH_ALEN); + /* Assign the new orig_node */ + atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - /* remove address from local hash if present */ - spin_lock_bh(&bat_priv->tt_lhash_lock); - - tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); - tt_local_entry = tt_local_hash_find(bat_priv, tt_ptr); - - if (tt_local_entry) - tt_local_del(bat_priv, tt_local_entry, - "global tt received"); - - spin_unlock_bh(&bat_priv->tt_lhash_lock); - - tt_buff_count++; - } - - /* initialize, and overwrite if malloc succeeds */ - orig_node->tt_buff = NULL; - orig_node->tt_buff_len = 0; - - if (tt_buff_len > 0) { - orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); - if (orig_node->tt_buff) { - memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); - orig_node->tt_buff_len = tt_buff_len; + tt_global_entry->ttvn = ttvn; + atomic_inc(&orig_node->tt_size); + hash_add(bat_priv->tt_global_hash, compare_gtt, + choose_orig, tt_global_entry, + &tt_global_entry->hash_entry); + } else { + if (tt_global_entry->orig_node != orig_node) { + atomic_dec(&tt_global_entry->orig_node->tt_size); + orig_node_tmp = tt_global_entry->orig_node; + atomic_inc(&orig_node->refcount); + tt_global_entry->orig_node = orig_node; + tt_global_entry->ttvn = ttvn; + orig_node_free_ref(orig_node_tmp); + atomic_inc(&orig_node->tt_size); } } + + spin_unlock_bh(&bat_priv->tt_ghash_lock); + + bat_dbg(DBG_ROUTES, bat_priv, + "Creating new global tt entry: %pM (via %pM)\n", + tt_global_entry->addr, orig_node->orig); + + /* remove address from local hash if present */ + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = 
tt_local_hash_find(bat_priv, tt_addr); + + if (tt_local_entry) + tt_local_del(bat_priv, tt_local_entry, + "global tt received"); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + return 1; +unlock: + spin_unlock_bh(&bat_priv->tt_ghash_lock); + return 0; }
int tt_global_seq_print_text(struct seq_file *seq, void *offset) @@ -507,17 +559,20 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset)
seq_printf(seq, "Globally announced TTs received via the mesh %s\n", net_dev->name); + seq_printf(seq, " %-13s %s %-15s %s\n", + "Client", "(TTVN)", "Originator", "(Curr TTVN)");
spin_lock_bh(&bat_priv->tt_ghash_lock);
buf_size = 1; - /* Estimate length for: " * xx:xx:xx:xx:xx:xx via xx:xx:xx:xx:xx:xx\n"*/ + /* Estimate length for: " * xx:xx:xx:xx:xx:xx (ttvn) via + * xx:xx:xx:xx:xx:xx (cur_ttvn)\n"*/ for (i = 0; i < hash->size; i++) { head = &hash->table[i];
rcu_read_lock(); __hlist_for_each_rcu(node, head) - buf_size += 43; + buf_size += 59; rcu_read_unlock(); }
@@ -536,10 +591,14 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) rcu_read_lock(); hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) { - pos += snprintf(buff + pos, 44, - " * %pM via %pM\n", + pos += snprintf(buff + pos, 61, + " * %pM (%3u) via %pM (%3u)\n", tt_global_entry->addr, - tt_global_entry->orig_node->orig); + tt_global_entry->ttvn, + tt_global_entry->orig_node->orig, + (uint8_t) atomic_read( + &tt_global_entry->orig_node-> + last_ttvn)); } rcu_read_unlock(); } @@ -554,64 +613,80 @@ out: return ret; }
-static void _tt_global_del_orig(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - char *message) +static void _tt_global_del(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, + char *message) { + if (!tt_global_entry) + return; + bat_dbg(DBG_ROUTES, bat_priv, "Deleting global tt entry %pM (via %pM): %s\n", tt_global_entry->addr, tt_global_entry->orig_node->orig, message);
+ atomic_dec(&tt_global_entry->orig_node->tt_size); hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry->addr); kfree(tt_global_entry); }
+void tt_global_del(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *addr, char *message) +{ + struct tt_global_entry *tt_global_entry; + + spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, addr); + + if (tt_global_entry && tt_global_entry->orig_node == orig_node) { + atomic_dec(&orig_node->tt_size); + _tt_global_del(bat_priv, tt_global_entry, message); + } + spin_unlock_bh(&bat_priv->tt_ghash_lock); +} + void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, char *message) + struct orig_node *orig_node, char *message) { struct tt_global_entry *tt_global_entry; - int tt_buff_count = 0; - unsigned char *tt_ptr; + int i; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct hlist_node *node, *safe; + struct hlist_head *head;
- if (orig_node->tt_buff_len == 0) + if (!bat_priv->tt_global_hash) return;
spin_lock_bh(&bat_priv->tt_ghash_lock); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i];
- while ((tt_buff_count + 1) * ETH_ALEN <= orig_node->tt_buff_len) { - tt_ptr = orig_node->tt_buff + (tt_buff_count * ETH_ALEN); - tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr); - - if ((tt_global_entry) && - (tt_global_entry->orig_node == orig_node)) - _tt_global_del_orig(bat_priv, tt_global_entry, - message); - - tt_buff_count++; + hlist_for_each_entry_safe(tt_global_entry, node, safe, + head, hash_entry) { + if (tt_global_entry->orig_node == orig_node) + _tt_global_del(bat_priv, tt_global_entry, + message); + } } + atomic_set(&orig_node->tt_size, 0);
spin_unlock_bh(&bat_priv->tt_ghash_lock); - - orig_node->tt_buff_len = 0; - kfree(orig_node->tt_buff); - orig_node->tt_buff = NULL; }
-static void tt_global_del(struct hlist_node *node, void *arg) +static void tt_global_entry_free(struct hlist_node *node, void *arg) { void *data = container_of(node, struct tt_global_entry, hash_entry); - kfree(data); }
-void tt_global_free(struct bat_priv *bat_priv) +static void tt_global_table_free(struct bat_priv *bat_priv) { if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->tt_global_hash, tt_global_del, NULL); + hash_delete(bat_priv->tt_global_hash, tt_global_entry_free, NULL); bat_priv->tt_global_hash = NULL; }
@@ -635,3 +710,695 @@ out: spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; } + +/* Calculates the checksum of the local table of a given orig_node */ +uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node) +{ + uint16_t total = 0, total_one; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct tt_global_entry *tt_global_entry; + struct hlist_node *node; + struct hlist_head *head; + int i, j; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_global_entry, node, + head, hash_entry) { + if (compare_eth(tt_global_entry->orig_node, + orig_node)) { + total_one = 0; + for (j = 0; j < ETH_ALEN; j++) + total_one = crc16_byte(total_one, + tt_global_entry->addr[j]); + total ^= total_one; + } + } + rcu_read_unlock(); + } + + return total; +} + +/* Calculates the checksum of the local table */ +uint16_t tt_local_crc(struct bat_priv *bat_priv) +{ + uint16_t total = 0, total_one; + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; + struct hlist_node *node; + struct hlist_head *head; + int i, j; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_local_entry, node, + head, hash_entry) { + total_one = 0; + for (j = 0; j < ETH_ALEN; j++) + total_one = crc16_byte(total_one, + tt_local_entry->addr[j]); + total ^= total_one; + } + + rcu_read_unlock(); + } + + return total; +} + +static void tt_req_list_free(struct bat_priv *bat_priv) +{ + struct tt_req_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + list_del(&node->list); + kfree(node); + } + + spin_unlock_bh(&bat_priv->tt_req_list_lock); +} + +void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes) +{ + uint16_t tt_buff_len = tt_len(tt_num_changes); + + /* Replace the old buffer only if I received something in the + * last OGM (the OGM could carry no changes) */ + spin_lock_bh(&orig_node->tt_buff_lock); + if (tt_buff_len > 0) { + kfree(orig_node->tt_buff); + orig_node->tt_buff_len = 0; + orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); + if (orig_node->tt_buff) { + memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); + orig_node->tt_buff_len = tt_buff_len; + } + } + spin_unlock_bh(&orig_node->tt_buff_lock); +} + +static void tt_req_purge(struct bat_priv *bat_priv) +{ + struct tt_req_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + if (is_out_of_time(node->issued_at, + TT_REQUEST_TIMEOUT * 1000)) { + list_del(&node->list); + kfree(node); + } + } + spin_unlock_bh(&bat_priv->tt_req_list_lock); +} + +int send_tt_request(struct bat_priv *bat_priv, struct orig_node *dst_orig_node, + uint8_t ttvn, uint16_t tt_crc, bool full_table) +{ + struct sk_buff *skb; + struct tt_query_packet *tt_request; + struct neigh_node *neigh_node = NULL; + struct hard_iface *primary_if; + struct tt_req_node *tt_req_node = NULL; + int ret = 0; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + /* The new tt_req will be issued only if I'm not waiting for a + * reply from the same orig_node yet */ + list_for_each_entry(tt_req_node, &bat_priv->tt_req_list, list) { + if (compare_eth(tt_req_node, dst_orig_node) && + 
!is_out_of_time(tt_req_node->issued_at, + TT_REQUEST_TIMEOUT * 1000)) + goto unlock_tt; + } + + tt_req_node = kmalloc(sizeof(struct tt_req_node), GFP_ATOMIC); + if (!tt_req_node) { + ret = 1; + goto unlock_tt; + } + + memcpy(tt_req_node->addr, dst_orig_node->orig, ETH_ALEN); + tt_req_node->issued_at = jiffies; + + list_add(&tt_req_node->list, &bat_priv->tt_req_list); + spin_unlock_bh(&bat_priv->tt_req_list_lock); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + + tt_request = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet)); + + tt_request->packet_type = BAT_TT_QUERY; + tt_request->version = COMPAT_VERSION; + memcpy(tt_request->src, primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(tt_request->dst, dst_orig_node->orig, ETH_ALEN); + tt_request->ttl = TTL; + tt_request->ttvn = ttvn; + tt_request->tt_data = tt_crc; + tt_request->flags = TT_REQUEST; + + /* Request the full table if needed */ + if (full_table) + tt_request->flags |= TT_FULL_TABLE; + + neigh_node = find_router(bat_priv, dst_orig_node, NULL); + if (!neigh_node) + goto out; + + if (neigh_node->if_incoming->if_status != IF_ACTIVE) + goto out; + + bat_dbg(DBG_ROUTES, bat_priv, "Sending TT_REQUEST to %pM via %pM " + "[%c]\n", dst_orig_node->orig, neigh_node->addr, + (full_table ? 'F' : '.')); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = 0; + +out: + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (ret == 1) { + kfree_skb(skb); + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_del(&tt_req_node->list); + spin_unlock_bh(&bat_priv->tt_req_list_lock); + kfree(tt_req_node); + } + return ret; +unlock_tt: + spin_unlock_bh(&bat_priv->tt_req_list_lock); + return ret; +} + +static bool send_other_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + struct orig_node *req_dst_orig_node = NULL, *res_dst_orig_node = NULL; + struct neigh_node *neigh_node = NULL; + struct hard_iface *primary_if = NULL; + struct tt_global_entry *tt_global_entry; + struct hlist_node *node; + struct hlist_head *head; + struct hashtable_t *hash; + uint8_t orig_ttvn, req_ttvn; + int i, ret = false; + unsigned char *tt_buff; + bool full_table; + uint16_t tt_len, tt_tot, tt_count; + struct sk_buff *skb = NULL; + struct tt_query_packet *tt_response; + + bat_dbg(DBG_ROUTES, bat_priv, + "Received TT_REQUEST from %pM for " + "ttvn: %u (%pM) [%c]\n", tt_request->src, + tt_request->ttvn, tt_request->dst, + (tt_request->flags & TT_FULL_TABLE ? 
'F' : '.')); + + /* Let's get the orig node of the REAL destination */ + req_dst_orig_node = get_orig_node(bat_priv, tt_request->dst); + if (!req_dst_orig_node) + goto out; + + res_dst_orig_node = get_orig_node(bat_priv, tt_request->src); + if (!res_dst_orig_node) + goto out; + + neigh_node = find_router(bat_priv, res_dst_orig_node, NULL); + if (!neigh_node) + goto out; + + if (neigh_node->if_incoming->if_status != IF_ACTIVE) + goto out; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + orig_ttvn = (uint8_t)atomic_read(&req_dst_orig_node->last_ttvn); + req_ttvn = tt_request->ttvn; + + /* I have not the requested data */ + if (orig_ttvn != req_ttvn || + tt_request->tt_data != req_dst_orig_node->tt_crc) + goto out; + + /* If it has explicitly been requested the full table */ + if (tt_request->flags & TT_FULL_TABLE || + !req_dst_orig_node->tt_buff) + full_table = true; + else + full_table = false; + + /* In this version, fragmentation is not implemented, then + * I'll send only one packet with as much TT entries as I can */ + if (!full_table) { + spin_lock_bh(&req_dst_orig_node->tt_buff_lock); + tt_len = req_dst_orig_node->tt_buff_len; + tt_tot = tt_len / sizeof(struct tt_change); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto unlock; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = req_ttvn; + + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Copy the last orig_node's OGM buffer */ + memcpy(tt_buff, req_dst_orig_node->tt_buff, + req_dst_orig_node->tt_buff_len); + + spin_unlock_bh(&req_dst_orig_node->tt_buff_lock); + } else { + tt_len = (uint16_t)atomic_read(&req_dst_orig_node->tt_size) * + ETH_ALEN; + if (sizeof(struct tt_query_packet) + tt_len > + primary_if->soft_iface->mtu) { + tt_len = primary_if->soft_iface->mtu - + sizeof(struct tt_query_packet); + tt_len -= tt_len % ETH_ALEN; + } + tt_tot = tt_len / ETH_ALEN; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = (uint8_t) + atomic_read(&req_dst_orig_node->last_ttvn); + + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Fill the packet with the orig_node's local table */ + hash = bat_priv->tt_global_hash; + tt_count = 0; + rcu_read_lock(); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + hlist_for_each_entry_rcu(tt_global_entry, node, + head, hash_entry) { + if (tt_count == tt_tot) + break; + if (tt_global_entry->orig_node == + req_dst_orig_node) { + memcpy(tt_buff + tt_count * ETH_ALEN, + tt_global_entry->addr, + ETH_ALEN); + tt_count++; + } + } + } + rcu_read_unlock(); + } + + tt_response->packet_type = BAT_TT_QUERY; + tt_response->version = COMPAT_VERSION; + memcpy(tt_response->src, req_dst_orig_node->orig, ETH_ALEN); + memcpy(tt_response->dst, tt_request->src, ETH_ALEN); + tt_response->tt_data = htons(tt_tot); + tt_response->flags = TT_RESPONSE; + + if (full_table) + tt_response->flags |= TT_FULL_TABLE; + + bat_dbg(DBG_ROUTES, bat_priv, + "Sending TT_RESPONSE %pM via %pM for %pM (ttvn: %u)\n", + res_dst_orig_node->orig, neigh_node->addr, + req_dst_orig_node->orig, req_ttvn); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = true; + goto out; + +unlock: + 
spin_unlock_bh(&req_dst_orig_node->tt_buff_lock); + +out: + if (res_dst_orig_node) + orig_node_free_ref(res_dst_orig_node); + if (req_dst_orig_node) + orig_node_free_ref(req_dst_orig_node); + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (!ret) + kfree(skb); + return ret; + +} +static bool send_my_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + struct orig_node *orig_node = NULL; + struct neigh_node *neigh_node = NULL; + struct tt_local_entry *tt_local_entry; + struct hard_iface *primary_if = NULL; + struct hlist_node *node; + struct hlist_head *head; + struct hashtable_t *hash; + uint8_t my_ttvn, req_ttvn; + int i, ret = false; + unsigned char *tt_buff; + bool full_table; + uint16_t tt_len, tt_tot, tt_count; + struct sk_buff *skb = NULL; + struct tt_query_packet *tt_response; + + bat_dbg(DBG_ROUTES, bat_priv, + "Received TT_REQUEST from %pM for " + "ttvn: %u (me) [%c]\n", tt_request->src, + tt_request->ttvn, + (tt_request->flags & TT_FULL_TABLE ? 'F' : '.')); + + + my_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); + req_ttvn = tt_request->ttvn; + + orig_node = get_orig_node(bat_priv, tt_request->src); + if (!orig_node) + goto out; + + neigh_node = find_router(bat_priv, orig_node, NULL); + if (!neigh_node) + goto out; + + if (neigh_node->if_incoming->if_status != IF_ACTIVE) + goto out; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + /* If the full table has been explicitly requested or the gap + * is too big send the whole local translation table */ + if (tt_request->flags & TT_FULL_TABLE || my_ttvn != req_ttvn || + !bat_priv->tt_buff) + full_table = true; + else + full_table = false; + + /* In this version, fragmentation is not implemented, then + * I'll send only one packet with as much TT entries as I can */ + if (!full_table) { + spin_lock_bh(&bat_priv->tt_buff_lock); + tt_len = bat_priv->tt_buff_len; + tt_tot = tt_len / sizeof(struct tt_change); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto unlock; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = req_ttvn; + + tt_buff = skb->data + sizeof(struct tt_query_packet); + memcpy(tt_buff, bat_priv->tt_buff, + bat_priv->tt_buff_len); + spin_unlock_bh(&bat_priv->tt_buff_lock); + } else { + tt_len = (uint16_t)atomic_read(&bat_priv->num_local_tt) * + ETH_ALEN; + if (sizeof(struct tt_query_packet) + tt_len > + bat_priv->primary_if->soft_iface->mtu) { + tt_len = bat_priv->primary_if->soft_iface->mtu - + sizeof(struct tt_query_packet); + tt_len -= tt_len % ETH_ALEN; + } + tt_tot = tt_len / ETH_ALEN; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Fill the packet with the local table */ + tt_response->ttvn = + (uint8_t)atomic_read(&bat_priv->ttvn); + + hash = bat_priv->tt_local_hash; + tt_count = 0; + rcu_read_lock(); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + hlist_for_each_entry_rcu(tt_local_entry, node, + head, hash_entry) { + if (tt_count == tt_tot) + break; + memcpy(tt_buff + tt_count * ETH_ALEN, + tt_local_entry->addr, + ETH_ALEN); + tt_count++; + } + } + rcu_read_unlock(); + } + + 
tt_response->packet_type = BAT_TT_QUERY; + tt_response->version = COMPAT_VERSION; + memcpy(tt_response->src, primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(tt_response->dst, tt_request->src, ETH_ALEN); + tt_response->tt_data = htons(tt_tot); + tt_response->flags = TT_RESPONSE; + + if (full_table) + tt_response->flags |= TT_FULL_TABLE; + + bat_dbg(DBG_ROUTES, bat_priv, + "Sending TT_RESPONSE to %pM via %pM [%c]\n", + orig_node->orig, neigh_node->addr, + (tt_response->flags & TT_FULL_TABLE ? 'F' : '.')); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = true; + goto out; + +unlock: + spin_unlock_bh(&bat_priv->tt_buff_lock); +out: + if (orig_node) + orig_node_free_ref(orig_node); + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (!ret) + kfree(skb); + /* This packet was for me, so it doesn't need to be re-routed */ + return true; +} + +bool send_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + if (is_my_mac(tt_request->dst)) + return send_my_tt_response(bat_priv, tt_request); + else + return send_other_tt_response(bat_priv, tt_request); +} + +/* Substitute the TT response source's table with the newone carried by the + * packet */ +static void _tt_fill_gtable(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *tt_buff, + uint16_t table_size, uint8_t ttvn) +{ + int count; + unsigned char *tt_ptr; + + for (count = 0; count < table_size; count++) { + tt_ptr = tt_buff + (count * ETH_ALEN); + + /* If we fail to allocate a new entry we return immediatly */ + if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn)) + return; + } + atomic_set(&orig_node->last_ttvn, ttvn); +} + +static void tt_fill_gtable(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response) +{ + struct orig_node *orig_node = NULL; + + orig_node = orig_hash_find(bat_priv, tt_response->src); + if (!orig_node) + goto out; + + /* Purge the old table first.. 
*/ + tt_global_del_orig(bat_priv, orig_node, "Received full table"); + + _tt_fill_gtable(bat_priv, orig_node, + ((unsigned char *)tt_response) + + sizeof(struct tt_query_packet), + tt_response->tt_data, + tt_response->ttvn); + + spin_lock_bh(&orig_node->tt_buff_lock); + kfree(orig_node->tt_buff); + orig_node->tt_buff_len = 0; + orig_node->tt_buff = NULL; + spin_unlock_bh(&orig_node->tt_buff_lock); + +out: + if (orig_node) + orig_node_free_ref(orig_node); +} + +static void tt_update_changes(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response, + struct tt_change *tt_change) +{ + struct orig_node *orig_node = NULL; + int i; + + orig_node = orig_hash_find(bat_priv, tt_response->src); + + if (!orig_node) + goto out; + + for (i = 0; i < tt_response->tt_data; i++) { + if ((tt_change + i)->op == TT_DEL) + tt_global_del(bat_priv, orig_node, + (tt_change + i)->addr, + "tt removed by tt_response"); + else + if (!tt_global_add(bat_priv, orig_node, + (tt_change + i)->addr, tt_response->ttvn)) + return; + } + + tt_save_orig_buffer(bat_priv, orig_node, (unsigned char *)tt_change, + tt_response->tt_data); + atomic_set(&orig_node->last_ttvn, tt_response->ttvn); + +out: + if (orig_node) + orig_node_free_ref(orig_node); +} + +bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr) +{ + struct tt_local_entry *tt_local_entry; + + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = tt_local_hash_find(bat_priv, addr); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + + if (tt_local_entry) + return true; + return false; +} + +void handle_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response) +{ + struct tt_req_node *node, *safe; + struct orig_node *orig_node = NULL; + + bat_dbg(DBG_ROUTES, bat_priv, "Received TT_RESPONSE from %pM for " + "ttvn %d t_size: %d [%c]\n", + tt_response->src, tt_response->ttvn, + tt_response->tt_data, + (tt_response->flags & TT_FULL_TABLE ? 
'F' : '.')); + + if (tt_response->flags & TT_FULL_TABLE) + tt_fill_gtable(bat_priv, tt_response); + else + tt_update_changes(bat_priv, tt_response, + (struct tt_change *)(tt_response + 1)); + + /* Delete the tt_req_node from pending tt_requests list */ + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + if (!compare_eth(node->addr, tt_response->src)) + continue; + list_del(&node->list); + kfree(node); + } + spin_unlock_bh(&bat_priv->tt_req_list_lock); + + /* Recalculate the CRC for this orig_node and store it */ + orig_node = orig_hash_find(bat_priv, tt_response->src); + if (!orig_node) + goto out; + + spin_lock_bh(&bat_priv->tt_ghash_lock); + orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + spin_unlock_bh(&bat_priv->tt_ghash_lock); +out: + orig_node_free_ref(orig_node); +} + +int tt_init(struct bat_priv *bat_priv) +{ + if (!tt_local_init(bat_priv)) + return 0; + + if (!tt_global_init(bat_priv)) + return 0; + + tt_start_timer(bat_priv); + + return 1; +} + +void tt_free(struct bat_priv *bat_priv) +{ + cancel_delayed_work_sync(&bat_priv->tt_work); + + tt_local_table_free(bat_priv); + tt_global_table_free(bat_priv); + tt_req_list_free(bat_priv); + tt_changes_list_free(bat_priv); + + kfree(bat_priv->tt_buff); +} + +static void tt_purge(struct work_struct *work) +{ + struct delayed_work *delayed_work = + container_of(work, struct delayed_work, work); + struct bat_priv *bat_priv = + container_of(delayed_work, struct bat_priv, tt_work); + + tt_local_purge(bat_priv); + tt_req_purge(bat_priv); + + tt_start_timer(bat_priv); +} diff --git a/translation-table.h b/translation-table.h index 46152c3..68cb1bc 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,22 +22,41 @@ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
-int tt_local_init(struct bat_priv *bat_priv); +int tt_len(int changes_num); +void tt_changes_primary_if(struct bat_priv *bat_priv, uint8_t *old_addr, + uint8_t *new_addr); +int tt_changes_fill_buffer(struct bat_priv *bat_priv, + unsigned char *buff, int buff_len); +int tt_init(struct bat_priv *bat_priv); void tt_local_add(struct net_device *soft_iface, uint8_t *addr); void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message); -int tt_local_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len); + uint8_t *addr, char *message); int tt_local_seq_print_text(struct seq_file *seq, void *offset); -void tt_local_free(struct bat_priv *bat_priv); -int tt_global_init(struct bat_priv *bat_priv); void tt_global_add_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len); + struct orig_node *orig_node, + unsigned char *tt_buff, int tt_buff_len); +int tt_global_add(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *addr, + uint8_t ttvn); int tt_global_seq_print_text(struct seq_file *seq, void *offset); void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, char *message); -void tt_global_free(struct bat_priv *bat_priv); + struct orig_node *orig_node, char *message); +void tt_global_del(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *addr, + char *message); struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr); +void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes); +uint16_t tt_local_crc(struct bat_priv *bat_priv); +uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node); +void tt_free(struct bat_priv *bat_priv); +int send_tt_request(struct bat_priv *bat_priv, + struct orig_node *dst_orig_node, uint8_t hvn, + uint16_t tt_crc, bool full_table); +bool send_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request); +bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr); +void handle_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index b8c72c3..ba97028 100644 --- a/types.h +++ b/types.h @@ -75,8 +75,12 @@ struct orig_node { unsigned long batman_seqno_reset; uint8_t gw_flags; uint8_t flags; + atomic_t last_ttvn; /* last seen translation table version number */ + uint16_t tt_crc; unsigned char *tt_buff; int16_t tt_buff_len; + spinlock_t tt_buff_lock; /* protects tt_buff */ + atomic_t tt_size; uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -94,10 +98,16 @@ struct orig_node { * neigh_node->real_packet_count */ spinlock_t bcast_seqno_lock; /* protects bcast_bits, * last_bcast_seqno */ + spinlock_t tt_list_lock; /* protects tt_list */ atomic_t bond_candidates; struct list_head bond_list; };
+struct tt_change { + uint8_t op; + uint8_t addr[ETH_ALEN]; +}; + struct gw_node { struct hlist_node list; struct orig_node *orig_node; @@ -145,6 +155,9 @@ struct bat_priv { atomic_t bcast_seqno; atomic_t bcast_queue_left; atomic_t batman_queue_left; + atomic_t ttvn; /* tranlation table version number */ + atomic_t tt_ogm_append_cnt; + atomic_t tt_local_changes; /* changes registered in a OGM interval */ char num_ifaces; struct hlist_head softif_neigh_list; struct softif_neigh __rcu *softif_neigh; @@ -154,21 +167,29 @@ struct bat_priv { struct hlist_head forw_bat_list; struct hlist_head forw_bcast_list; struct hlist_head gw_list; + struct list_head tt_changes_list; /* tracks changes in a OGM int */ struct list_head vis_send_list; struct hashtable_t *orig_hash; struct hashtable_t *tt_local_hash; struct hashtable_t *tt_global_hash; + struct list_head tt_req_list; /* list of pending tt_requests */ struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ + spinlock_t tt_changes_list_lock; /* protects tt_changes */ spinlock_t tt_lhash_lock; /* protects tt_local_hash */ spinlock_t tt_ghash_lock; /* protects tt_global_hash */ + spinlock_t tt_req_list_lock; /* protects tt_req_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ spinlock_t softif_neigh_lock; /* protects soft-interface neigh list */ - int16_t num_local_tt; - atomic_t tt_local_changed; + atomic_t num_local_tt; + atomic_t tt_crc; /* Checksum of the local table, recomputed before + * sending a new OGM */ + unsigned char *tt_buff; + int16_t tt_buff_len; + spinlock_t tt_buff_lock; /* protects tt_buff */ struct delayed_work tt_work; struct delayed_work orig_work; struct delayed_work vis_work; @@ -202,9 +223,22 @@ struct tt_local_entry { struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; + uint8_t ttvn; + /* entry in the global table */ struct hlist_node hash_entry; };
+struct tt_change_node { + struct list_head list; + struct tt_change change; +}; + +struct tt_req_node { + uint8_t addr[ETH_ALEN]; + unsigned long issued_at; + struct list_head list; +}; + /** * forw_packet - structure for forw_list maintaining packets to be * send/forwarded diff --git a/unicast.c b/unicast.c index 19c3daf..7a9c02c 100644 --- a/unicast.c +++ b/unicast.c @@ -329,6 +329,9 @@ find_router: unicast_packet->ttl = TTL; /* copy the destination for faster routing */ memcpy(unicast_packet->dest, orig_node->orig, ETH_ALEN); + /* set the destination tt version number */ + unicast_packet->ttvn = + (uint8_t)atomic_read(&orig_node->last_ttvn);
if (atomic_read(&bat_priv->fragmentation) && data_len + sizeof(struct unicast_packet) >
Exploiting the new announcement implementation, it has been possible to improve the roaming mechanism and reduce the number of packet drops.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Roaming-improvements
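In short, the mechanism implemented below works roughly as follows (reconstructed from the code in this patch, only as a reading aid):

/*
 * Rough roaming sequence (function names refer to the code added below):
 *
 * 1) client X roams from node A to node B
 * 2) B sees X as a local client, finds X in its global table (still
 *    announced by A), removes that entry and sends a ROAM_ADV to A
 *    (send_roam_adv())
 * 3) A handles the ROAM_ADV in recv_roam_adv(): it immediately adds X
 *    to its global table as reachable via B and sets tt_poss_change
 * 4) until the new TTVN has propagated, nodes with tt_poss_change set
 *    re-check the inner destination of unicast packets, so traffic for
 *    X keeps flowing towards B during the transition
 */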
Signed-off-by: Antonio Quartulli ordex@autistici.org --- hard-interface.c | 4 + main.c | 2 + main.h | 4 + originator.c | 1 + packet.h | 10 +++ routing.c | 70 ++++++++++++++++++++-- routing.h | 1 + send.c | 1 + soft-interface.c | 3 +- translation-table.c | 169 +++++++++++++++++++++++++++++++++++++++++++++----- translation-table.h | 7 ++- types.h | 24 +++++++ 12 files changed, 271 insertions(+), 25 deletions(-)
diff --git a/hard-interface.c b/hard-interface.c index 4fcd22e..a1cf040 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -673,6 +673,10 @@ static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, case BAT_TT_QUERY: ret = recv_tt_query(skb, hard_iface); break; + /* Roaming advertisement */ + case BAT_ROAM_ADV: + ret = recv_roam_adv(skb, hard_iface); + break; default: ret = NET_RX_DROP; } diff --git a/main.c b/main.c index a84679a..31cbecc 100644 --- a/main.c +++ b/main.c @@ -85,6 +85,7 @@ int mesh_init(struct net_device *soft_iface) spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->tt_changes_list_lock); spin_lock_init(&bat_priv->tt_req_list_lock); + spin_lock_init(&bat_priv->tt_roam_list_lock); spin_lock_init(&bat_priv->tt_buff_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); @@ -97,6 +98,7 @@ int mesh_init(struct net_device *soft_iface) INIT_HLIST_HEAD(&bat_priv->softif_neigh_list); INIT_LIST_HEAD(&bat_priv->tt_changes_list); INIT_LIST_HEAD(&bat_priv->tt_req_list); + INIT_LIST_HEAD(&bat_priv->tt_roam_list);
if (originator_init(bat_priv) < 1) goto err; diff --git a/main.h b/main.h index cc1c277..802d87a 100644 --- a/main.h +++ b/main.h @@ -55,6 +55,10 @@ #define TT_ADD 0 #define TT_DEL 1
+#define ROAMING_MAX_TIME 20 /* Time in which a client can roam at most + * ROAMING_MAX_COUNT times */ +#define ROAMING_MAX_COUNT 5 + #define NUM_WORDS (TQ_LOCAL_WINDOW_SIZE / WORD_BIT_SIZE)
#define LOG_BUF_LEN 8192 /* has to be a power of 2 */ diff --git a/originator.c b/originator.c index be7257b..2cb7425 100644 --- a/originator.c +++ b/originator.c @@ -221,6 +221,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) /* extra reference for return */ atomic_set(&orig_node->refcount, 2);
+ orig_node->tt_poss_change = false; orig_node->bat_priv = bat_priv; memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; diff --git a/packet.h b/packet.h index de8bf3b..396a2e6 100644 --- a/packet.h +++ b/packet.h @@ -31,6 +31,7 @@ #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 #define BAT_TT_QUERY 0x07 +#define BAT_ROAM_ADV 0x08
/* this file is included by batctl which needs these defines */ #define COMPAT_VERSION 14 @@ -164,4 +165,13 @@ struct tt_query_packet { */ } __packed;
+struct roam_adv_packet { + uint8_t packet_type; + uint8_t version; + uint8_t dst[6]; + uint8_t ttl; + uint8_t src[6]; + uint8_t client[6]; +} __packed; + #endif /* _NET_BATMAN_ADV_PACKET_H_ */ diff --git a/routing.c b/routing.c index 52107cd..39bc7b2 100644 --- a/routing.c +++ b/routing.c @@ -92,7 +92,7 @@ static void update_transtable(struct bat_priv *bat_priv, else if (!tt_global_add(bat_priv, orig_node, tt_change->addr, - ttvn)) + ttvn, false)) /* In case of problem while storing a * global_entry, we stop the updating * procedure without committing the @@ -111,6 +111,10 @@ static void update_transtable(struct bat_priv *bat_priv, * to recompute it to spot any possible inconsistency * in the global table */ orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + /* Roaming phase is over: tables are in sync again. I can + * unset the flag */ + if (tt_num_changes) + orig_node->tt_poss_change = false; } else { /* if we missed more than one change or our tables are not * in sync anymore -> request fresh tt data */ @@ -1294,6 +1298,56 @@ out: return ret; }
+int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if) +{ + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct roam_adv_packet *roam_adv_packet; + struct orig_node *orig_node; + struct ethhdr *ethhdr; + int ret = NET_RX_DROP; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, sizeof(struct roam_adv_packet)))) + goto out; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with unicast indication but broadcast recipient */ + if (is_broadcast_ether_addr(ethhdr->h_dest)) + goto out; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + goto out; + + roam_adv_packet = (struct roam_adv_packet *)skb->data; + + if (!is_my_mac(roam_adv_packet->dst)) + return route_unicast_packet(skb, recv_if); + + orig_node = orig_hash_find(bat_priv, roam_adv_packet->src); + if (!orig_node) + goto out; + + tt_global_add(bat_priv, orig_node, roam_adv_packet->client, + atomic_read(&orig_node->last_ttvn) + 1, true); + + /* Roaming phase starts: I have a new information but the ttvn has been + * incremented yet. This flag will make me check all the incoming + * packets for the correct destination. */ + bat_priv->tt_poss_change = true; + + bat_dbg(DBG_ROUTES, bat_priv, "Received ROAMING_ADV from %pM " + "(client %pM)\n", roam_adv_packet->src, + roam_adv_packet->client); + + orig_node_free_ref(orig_node); + ret = NET_RX_SUCCESS; +out: + kfree(skb); + return ret; +} + /* find a suitable router for this originator, and use * bonding if possible. increases the found neighbors * refcount.*/ @@ -1482,35 +1536,41 @@ int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) struct ethhdr *ethhdr; uint8_t curr_ttvn; int16_t diff; + bool tt_poss_change;
if (check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
+ /* I could need to modify it */ + if (skb_cow(skb, sizeof(struct unicast_packet)) < 0) + return NET_RX_DROP; + unicast_packet = (struct unicast_packet *)skb->data;
- if (is_my_mac(unicast_packet->dest)) + if (is_my_mac(unicast_packet->dest)) { + tt_poss_change = bat_priv->tt_poss_change; curr_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); - else { + } else { orig_node = orig_hash_find(bat_priv, unicast_packet->dest);
if (!orig_node) return NET_RX_DROP;
curr_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + tt_poss_change = orig_node->tt_poss_change; orig_node_free_ref(orig_node); }
diff = unicast_packet->ttvn - curr_ttvn; /* Check whether I have to reroute the packet */ if (unicast_packet->packet_type == BAT_UNICAST && - (diff < 0 && diff > -0xff/2)) { + ((diff < 0 && diff > -0xff/2) || tt_poss_change)) { /* Linearize the skb before accessing it */ if (skb_linearize(skb) < 0) return NET_RX_DROP;
ethhdr = (struct ethhdr *)(skb->data + sizeof(struct unicast_packet)); - orig_node = transtable_search(bat_priv, ethhdr->h_dest);
if (!orig_node) { diff --git a/routing.h b/routing.h index 6f6a5f8..e2943e0 100644 --- a/routing.h +++ b/routing.h @@ -37,6 +37,7 @@ int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bat_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); +int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv, struct orig_node *orig_node, struct hard_iface *recv_if); diff --git a/send.c b/send.c index aa0ad64..3f45f39 100644 --- a/send.c +++ b/send.c @@ -303,6 +303,7 @@ void schedule_own_packet(struct hard_iface *hard_iface) prepare_packet_buffer(bat_priv, hard_iface); /* Increment the TTVN only once per OGM interval */ atomic_inc(&bat_priv->ttvn); + bat_priv->tt_poss_change = false; }
/* if the changes have been sent enough times */ diff --git a/soft-interface.c b/soft-interface.c index 96b98f7..3be105d 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -366,7 +366,7 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) /* only modify tt-table if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { tt_local_remove(bat_priv, dev->dev_addr, - "mac address changed"); + "mac address changed", false); tt_local_add(dev, addr->sa_data); }
@@ -669,6 +669,7 @@ struct net_device *softif_create(char *name)
bat_priv->tt_buff = NULL; bat_priv->tt_buff_len = 0; + bat_priv->tt_poss_change = false;
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index 698c3d4..b533f0a 100644 --- a/translation-table.c +++ b/translation-table.c @@ -168,6 +168,8 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) struct bat_priv *bat_priv = netdev_priv(soft_iface); struct tt_local_entry *tt_local_entry; struct tt_global_entry *tt_global_entry; + uint8_t roam_addr[ETH_ALEN]; + struct orig_node *roam_orig_node;
spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); @@ -206,12 +208,21 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (tt_global_entry) + /* Check whether it is a roaming! */ + if (tt_global_entry) { + memcpy(roam_addr, tt_global_entry->addr, ETH_ALEN); + roam_orig_node = tt_global_entry->orig_node; + /* This node is probably going to update its tt table */ + tt_global_entry->orig_node->tt_poss_change = true; _tt_global_del(bat_priv, tt_global_entry, "local tt received"); + spin_unlock_bh(&bat_priv->tt_ghash_lock); + send_roam_adv(bat_priv, tt_global_entry->addr, + tt_global_entry->orig_node); + } else + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - + return; unlock: spin_unlock_bh(&bat_priv->tt_lhash_lock); } @@ -364,7 +375,8 @@ static void tt_local_del(struct bat_priv *bat_priv, tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); }
-void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, + char *message, bool roaming) { struct tt_local_entry *tt_local_entry;
@@ -372,7 +384,11 @@ void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) tt_local_entry = tt_local_hash_find(bat_priv, addr);
if (tt_local_entry) { - tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); + if (roaming) + tt_local_event(bat_priv, TT_DEL, broadcast_addr); + else + tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); + tt_local_del(bat_priv, tt_local_entry, message); } spin_unlock_bh(&bat_priv->tt_lhash_lock); @@ -473,7 +489,7 @@ static void tt_changes_list_free(struct bat_priv *bat_priv) /* caller must hold orig_node recount */ int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *tt_addr, uint8_t ttvn) + unsigned char *tt_addr, uint8_t ttvn, bool roaming) { struct tt_global_entry *tt_global_entry; struct tt_local_entry *tt_local_entry; @@ -520,8 +536,9 @@ int tt_global_add(struct bat_priv *bat_priv, tt_local_entry = tt_local_hash_find(bat_priv, tt_addr);
if (tt_local_entry) - tt_local_del(bat_priv, tt_local_entry, - "global tt received"); + tt_local_remove(bat_priv, tt_global_entry->addr, + "global tt received", roaming); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return 1; unlock: @@ -905,6 +922,7 @@ out: kfree(tt_req_node); } return ret; + unlock_tt: spin_unlock_bh(&bat_priv->tt_req_list_lock); return ret; @@ -1245,7 +1263,7 @@ static void _tt_fill_gtable(struct bat_priv *bat_priv, tt_ptr = tt_buff + (count * ETH_ALEN);
/* If we fail to allocate a new entry we return immediatly */ - if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn)) + if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn, false)) return; } atomic_set(&orig_node->last_ttvn, ttvn); @@ -1299,7 +1317,8 @@ static void tt_update_changes(struct bat_priv *bat_priv, "tt removed by tt_response"); else if (!tt_global_add(bat_priv, orig_node, - (tt_change + i)->addr, tt_response->ttvn)) + (tt_change + i)->addr, + tt_response->ttvn, false)) return; }
@@ -1378,16 +1397,118 @@ int tt_init(struct bat_priv *bat_priv) return 1; }
-void tt_free(struct bat_priv *bat_priv) +static void tt_roam_list_free(struct bat_priv *bat_priv) { - cancel_delayed_work_sync(&bat_priv->tt_work); + struct tt_roam_node *node, *safe;
- tt_local_table_free(bat_priv); - tt_global_table_free(bat_priv); - tt_req_list_free(bat_priv); - tt_changes_list_free(bat_priv); + spin_lock_bh(&bat_priv->tt_roam_list_lock);
- kfree(bat_priv->tt_buff); + list_for_each_entry_safe(node, safe, &bat_priv->tt_roam_list, list) { + list_del(&node->list); + kfree(node); + } + + spin_unlock_bh(&bat_priv->tt_roam_list_lock); +} + +static void tt_roam_purge(struct bat_priv *bat_priv) +{ + struct tt_roam_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_roam_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_roam_list, list) { + if (!is_out_of_time(node->first_time, + ROAMING_MAX_TIME * 1000)) + continue; + + list_del(&node->list); + kfree(node); + } + spin_unlock_bh(&bat_priv->tt_roam_list_lock); +} + +void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, + struct orig_node *orig_node) +{ + struct neigh_node *neigh_node; + struct sk_buff *skb; + struct roam_adv_packet *roam_adv_packet; + struct tt_roam_node *tt_roam_node; + bool found = false; + int ret = 1; + + spin_lock_bh(&bat_priv->tt_roam_list_lock); + /* The new tt_req will be issued only if I'm not waiting for a + * reply from the same orig_node yet */ + list_for_each_entry(tt_roam_node, &bat_priv->tt_roam_list, list) { + if (!compare_eth(tt_roam_node->addr, client)) + continue; + + if (is_out_of_time(tt_roam_node->first_time, + ROAMING_MAX_TIME * 1000)) + continue; + + if (!atomic_dec_not_zero(&tt_roam_node->counter)) + /* Sorry, you roamed too many times! */ + goto unlock; + + found = true; + break; + } + + if (!found) { + tt_roam_node = kmalloc(sizeof(struct tt_roam_node), GFP_ATOMIC); + if (!tt_roam_node) + goto unlock; + + tt_roam_node->first_time = jiffies; + atomic_set(&tt_roam_node->counter, ROAMING_MAX_COUNT - 1); + memcpy(tt_roam_node->addr, client, ETH_ALEN); + + list_add(&tt_roam_node->list, &bat_priv->tt_roam_list); + } + spin_unlock_bh(&bat_priv->tt_roam_list_lock); + + skb = dev_alloc_skb(sizeof(struct roam_adv_packet) + ETH_HLEN); + if (!skb) + goto free_skb; + + skb_reserve(skb, ETH_HLEN); + + roam_adv_packet = (struct roam_adv_packet *)skb_put(skb, + sizeof(struct roam_adv_packet)); + + roam_adv_packet->packet_type = BAT_ROAM_ADV; + roam_adv_packet->version = COMPAT_VERSION; + roam_adv_packet->ttl = TTL; + memcpy(roam_adv_packet->src, + bat_priv->primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(roam_adv_packet->dst, orig_node->orig, ETH_ALEN); + memcpy(roam_adv_packet->client, client, ETH_ALEN); + + neigh_node = find_router(bat_priv, orig_node, NULL); + if (!neigh_node) + goto free_skb; + + if (neigh_node->if_incoming->if_status != IF_ACTIVE) + goto free_neigh; + + bat_dbg(DBG_ROUTES, bat_priv, + "Sending ROAMING_ADV to %pM (client %pM) via %pM\n", + orig_node->orig, client, neigh_node->addr); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = 0; + +free_neigh: + if (neigh_node) + neigh_node_free_ref(neigh_node); +free_skb: + if (ret) + kfree_skb(skb); + return; +unlock: + spin_unlock_bh(&bat_priv->tt_roam_list_lock); }
static void tt_purge(struct work_struct *work) @@ -1399,6 +1520,20 @@ static void tt_purge(struct work_struct *work)
tt_local_purge(bat_priv); tt_req_purge(bat_priv); + tt_roam_purge(bat_priv);
tt_start_timer(bat_priv); } + +void tt_free(struct bat_priv *bat_priv) +{ + cancel_delayed_work_sync(&bat_priv->tt_work); + + tt_local_table_free(bat_priv); + tt_global_table_free(bat_priv); + tt_req_list_free(bat_priv); + tt_changes_list_free(bat_priv); + tt_roam_list_free(bat_priv); + + kfree(bat_priv->tt_buff); +} diff --git a/translation-table.h b/translation-table.h index 68cb1bc..7344415 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,6 +22,7 @@ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
+struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr); int tt_len(int changes_num); void tt_changes_primary_if(struct bat_priv *bat_priv, uint8_t *old_addr, uint8_t *new_addr); @@ -30,14 +31,14 @@ int tt_changes_fill_buffer(struct bat_priv *bat_priv, int tt_init(struct bat_priv *bat_priv); void tt_local_add(struct net_device *soft_iface, uint8_t *addr); void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message); + uint8_t *addr, char *message, bool roaming); int tt_local_seq_print_text(struct seq_file *seq, void *offset); void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *tt_buff, int tt_buff_len); int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *addr, - uint8_t ttvn); + uint8_t ttvn, bool roaming); int tt_global_seq_print_text(struct seq_file *seq, void *offset); void tt_global_del_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, char *message); @@ -58,5 +59,7 @@ bool send_tt_response(struct bat_priv *bat_priv, bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr); void handle_tt_response(struct bat_priv *bat_priv, struct tt_query_packet *tt_response); +void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, + struct orig_node *orig_node);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index ba97028..c1e88a4 100644 --- a/types.h +++ b/types.h @@ -81,6 +81,14 @@ struct orig_node { int16_t tt_buff_len; spinlock_t tt_buff_lock; /* protects tt_buff */ atomic_t tt_size; + bool tt_poss_change; /* this flag is needed to detect an ongoing + * roaming event. If it is true, it means that + * in the last OGM interval I sent a Roaming_adv, + * so I have to check every packet going to it + * whether the destination is still a client of + * its or not, it will be reset as soon as I'll + * receive a new TTVN from it */ + uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -158,6 +166,13 @@ struct bat_priv { atomic_t ttvn; /* tranlation table version number */ atomic_t tt_ogm_append_cnt; atomic_t tt_local_changes; /* changes registered in a OGM interval */ + bool tt_poss_change; /* this flag is needed to detect an ongoing + * roaming event. If it is true, it means that + * in the last OGM interval I received a + * Roaming_adv, so I have to check every packet + * going to me whether the destination is still + * a client of mine or not, it will be reset as + * soon as I'll increase my TTVN */ char num_ifaces; struct hlist_head softif_neigh_list; struct softif_neigh __rcu *softif_neigh; @@ -173,6 +188,7 @@ struct bat_priv { struct hashtable_t *tt_local_hash; struct hashtable_t *tt_global_hash; struct list_head tt_req_list; /* list of pending tt_requests */ + struct list_head tt_roam_list; struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ @@ -180,6 +196,7 @@ struct bat_priv { spinlock_t tt_lhash_lock; /* protects tt_local_hash */ spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t tt_req_list_lock; /* protects tt_req_list */ + spinlock_t tt_roam_list_lock; /* protects tt_roam_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ @@ -239,6 +256,13 @@ struct tt_req_node { struct list_head list; };
+struct tt_roam_node { + uint8_t addr[ETH_ALEN]; + atomic_t counter; + unsigned long first_time; + struct list_head list; +}; + /** * forw_packet - structure for forw_list maintaining packets to be * send/forwarded
+struct roam_adv_packet {
+	uint8_t packet_type;
+	uint8_t version;
+	uint8_t dst[6];
+	uint8_t ttl;
+	uint8_t src[6];
+	uint8_t client[6];
+} __packed;
Maybe put ttl at the end, to help with alignment?
- tt_global_add(bat_priv, orig_node, roam_adv_packet->client,
atomic_read(&orig_node->last_ttvn) + 1, true);
- /* Roaming phase starts: I have a new information but the ttvn has been
* incremented yet. This flag will make me check all the incoming
* packets for the correct destination. */
The grammar in that comment could be better:
/* Roaming phase starts: I have new information but the ttvn has not
 * been incremented yet. This flag will make me check all the incoming
 * packets for the correct destination. */
+void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client,
struct orig_node *orig_node)
+{
- struct neigh_node *neigh_node;
- struct sk_buff *skb;
- struct roam_adv_packet *roam_adv_packet;
- struct tt_roam_node *tt_roam_node;
- bool found = false;
- int ret = 1;
- spin_lock_bh(&bat_priv->tt_roam_list_lock);
- /* The new tt_req will be issued only if I'm not waiting for a
* reply from the same orig_node yet */
- list_for_each_entry(tt_roam_node, &bat_priv->tt_roam_list, list) {
if (!compare_eth(tt_roam_node->addr, client))
continue;
if (is_out_of_time(tt_roam_node->first_time,
ROAMING_MAX_TIME * 1000))
continue;
if (!atomic_dec_not_zero(&tt_roam_node->counter))
/* Sorry, you roamed too many times! */
goto unlock;
found = true;
break;
- }
- if (!found) {
tt_roam_node = kmalloc(sizeof(struct tt_roam_node), GFP_ATOMIC);
if (!tt_roam_node)
goto unlock;
tt_roam_node->first_time = jiffies;
atomic_set(&tt_roam_node->counter, ROAMING_MAX_COUNT - 1);
memcpy(tt_roam_node->addr, client, ETH_ALEN);
list_add(&tt_roam_node->list, &bat_priv->tt_roam_list);
- }
- spin_unlock_bh(&bat_priv->tt_roam_list_lock);
- skb = dev_alloc_skb(sizeof(struct roam_adv_packet) + ETH_HLEN);
- if (!skb)
goto free_skb;
If the allocation fails, go free it?
+	skb_reserve(skb, ETH_HLEN);
+
+	roam_adv_packet = (struct roam_adv_packet *)skb_put(skb,
+					sizeof(struct roam_adv_packet));
+
+	roam_adv_packet->packet_type = BAT_ROAM_ADV;
+	roam_adv_packet->version = COMPAT_VERSION;
+	roam_adv_packet->ttl = TTL;
+	memcpy(roam_adv_packet->src,
+	       bat_priv->primary_if->net_dev->dev_addr, ETH_ALEN);
+	memcpy(roam_adv_packet->dst, orig_node->orig, ETH_ALEN);
+	memcpy(roam_adv_packet->client, client, ETH_ALEN);
+
+	neigh_node = find_router(bat_priv, orig_node, NULL);
+	if (!neigh_node)
+		goto free_skb;
+
+	if (neigh_node->if_incoming->if_status != IF_ACTIVE)
+		goto free_neigh;
+
+	bat_dbg(DBG_ROUTES, bat_priv,
+		"Sending ROAMING_ADV to %pM (client %pM) via %pM\n",
+		orig_node->orig, client, neigh_node->addr);
+
+	send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr);
+	ret = 0;
+
+free_neigh:
+	if (neigh_node)
+		neigh_node_free_ref(neigh_node);
+free_skb:
+	if (ret)
+		kfree_skb(skb);
+	return;
+unlock:
+	spin_unlock_bh(&bat_priv->tt_roam_list_lock);
+}
All these different gotos make me think of BASIC. How about breaking this function up into a number of functions?
1) find an existing tt_roam_node
2) Create a new tt_roam_node
3) Allocate and fill in the roam_adv_packet.
4) Find the neigh_node and send the packet.
You then end up with something like
void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client,
		   struct orig_node *orig_node)
{
	spin_lock_bh(&bat_priv->tt_roam_list_lock);
	tt_roam_node = find_tt_roam_node(client);
	if (!tt_roam_node) {
		tt_roam_node = new_tt_roam_node(client, bat_priv);
	}
	spin_unlock_bh(&bat_priv->tt_roam_list_lock);

	if (tt_roam_node)
		roam_pkt = build_roam_pkt(bat_priv, orig_node, client);
	if (roam_pkt)
		send_roam_pkt(roam_pkt, orig_node, client);
}
No gotos, and it is easier to understand. It also makes it clear that tt_roam_node is not actually used while sending the packet, so maybe it does not belong inside send_roam_adv()?
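For instance, the first two helpers could look roughly like this (a sketch only: the names follow the outline above, the exact signatures are assumptions, the timeout/counter handling from the original loop is left out, and the caller is assumed to hold tt_roam_list_lock):

/* sketch: look up a pending roaming entry for this client */
static struct tt_roam_node *find_tt_roam_node(struct bat_priv *bat_priv,
					      uint8_t *client)
{
	struct tt_roam_node *tt_roam_node;

	list_for_each_entry(tt_roam_node, &bat_priv->tt_roam_list, list) {
		if (compare_eth(tt_roam_node->addr, client))
			return tt_roam_node;
	}

	return NULL;
}

/* sketch: allocate a new entry and link it into the roaming list */
static struct tt_roam_node *new_tt_roam_node(struct bat_priv *bat_priv,
					     uint8_t *client)
{
	struct tt_roam_node *tt_roam_node;

	tt_roam_node = kmalloc(sizeof(struct tt_roam_node), GFP_ATOMIC);
	if (!tt_roam_node)
		return NULL;

	tt_roam_node->first_time = jiffies;
	atomic_set(&tt_roam_node->counter, ROAMING_MAX_COUNT - 1);
	memcpy(tt_roam_node->addr, client, ETH_ALEN);
	list_add(&tt_roam_node->list, &bat_priv->tt_roam_list);

	return tt_roam_node;
}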
+	bool tt_poss_change; /* this flag is needed to detect an ongoing
+			      * roaming event. If it is true, it means that
+			      * in the last OGM interval I sent a Roaming_adv,
+			      * so I have to check every packet going to it
+			      * whether the destination is still a client of
+			      * its or not, it will be reset as soon as I'll
+			      * receive a new TTVN from it */
Too many it/its. I have a hard time understanding what it is.
So, mostly comments about the comments and style issues.
Andrew
On Wed, May 04, 2011 at 01:22:34PM +0200, Andrew Lunn wrote:
+struct roam_adv_packet {
+	uint8_t packet_type;
+	uint8_t version;
+	uint8_t dst[6];
+	uint8_t ttl;
+	uint8_t src[6];
+	uint8_t client[6];
+} __packed;
Maybe put ttl at the end, to help with alignment?
As I did for the tt_query packet, the initial four fields are the same as in the unicast_packet, so that I can exploit route_unicast_packet() instead of writing a new routing function.
Is that a major issue?
+	tt_global_add(bat_priv, orig_node, roam_adv_packet->client,
+		      atomic_read(&orig_node->last_ttvn) + 1, true);
+	/* Roaming phase starts: I have a new information but the ttvn has been
+	 * incremented yet. This flag will make me check all the incoming
+	 * packets for the correct destination. */
The grammar in that comment could be better:
/* Roaming phase starts: I have new information but the ttvn has not
 * been incremented yet. This flag will make me check all the incoming
 * packets for the correct destination. */
Thanks and sorry for my poor grammar :)
+	skb = dev_alloc_skb(sizeof(struct roam_adv_packet) + ETH_HLEN);
+	if (!skb)
+		goto free_skb;
If the allocation fails, go free it ?
It's just a matter of the label name. I'll correct it.
+	skb_reserve(skb, ETH_HLEN);
+
+	roam_adv_packet = (struct roam_adv_packet *)skb_put(skb,
+					sizeof(struct roam_adv_packet));
+
+	roam_adv_packet->packet_type = BAT_ROAM_ADV;
+	roam_adv_packet->version = COMPAT_VERSION;
+	roam_adv_packet->ttl = TTL;
+	memcpy(roam_adv_packet->src,
+	       bat_priv->primary_if->net_dev->dev_addr, ETH_ALEN);
+	memcpy(roam_adv_packet->dst, orig_node->orig, ETH_ALEN);
+	memcpy(roam_adv_packet->client, client, ETH_ALEN);
+
+	neigh_node = find_router(bat_priv, orig_node, NULL);
+	if (!neigh_node)
+		goto free_skb;
+
+	if (neigh_node->if_incoming->if_status != IF_ACTIVE)
+		goto free_neigh;
+
+	bat_dbg(DBG_ROUTES, bat_priv,
+		"Sending ROAMING_ADV to %pM (client %pM) via %pM\n",
+		orig_node->orig, client, neigh_node->addr);
+
+	send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr);
+	ret = 0;
+
+free_neigh:
+	if (neigh_node)
+		neigh_node_free_ref(neigh_node);
+free_skb:
+	if (ret)
+		kfree_skb(skb);
+	return;
+unlock:
+	spin_unlock_bh(&bat_priv->tt_roam_list_lock);
+}
All these different gotos make me think of BASIC. How about breaking this function up into a number of functions?
- find an existing tt_roam_node
- Create a new tt_roam_node
- Allocate and fill in the roam_adv_packet.
- Find the neigh_node and send the packet.
You then end up with something like
void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client,
		   struct orig_node *orig_node)
{
	spin_lock_bh(&bat_priv->tt_roam_list_lock);
	tt_roam_node = find_tt_roam_node(client);
	if (!tt_roam_node) {
		tt_roam_node = new_tt_roam_node(client, bat_priv);
	}
	spin_unlock_bh(&bat_priv->tt_roam_list_lock);

	if (tt_roam_node)
		roam_pkt = build_roam_pkt(bat_priv, orig_node, client);
	if (roam_pkt)
		send_roam_pkt(roam_pkt, orig_node, client);
}
No gotos, and it is easier to understand. It also makes it clear that tt_roam_node is not actually used while sending the packet, so maybe it does not belong inside send_roam_adv()?
Ok, I got the point. Maybe I will not be so drastic, but I will follow your suggestion.
+	bool tt_poss_change; /* this flag is needed to detect an ongoing
+			      * roaming event. If it is true, it means that
+			      * in the last OGM interval I sent a Roaming_adv,
+			      * so I have to check every packet going to it
+			      * whether the destination is still a client of
+			      * its or not, it will be reset as soon as I'll
+			      * receive a new TTVN from it */
Too many it/its. I have a hard time understanding what it is.
You are definitely right. I'll rewrite the comment.
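For example, something along these lines would already be clearer (purely an illustration of the intent, not the final wording):

	bool tt_poss_change; /* Set when a roaming_adv has been sent to this
			      * originator during the last OGM interval.
			      * While set, every packet destined to it is
			      * checked to verify that the destination is
			      * still one of its clients. Cleared as soon as
			      * a new TTVN is received from this originator */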
So, mostly comments about the comments and style issues.
Andrew
Thank you very much Andrew!
Regards,
On Wed, May 04, 2011 at 03:36:37PM +0200, Antonio Quartulli wrote:
On Wed, May 04, 2011 at 01:22:34PM +0200, Andrew Lunn wrote:
+struct roam_adv_packet {
+	uint8_t packet_type;
+	uint8_t version;
+	uint8_t dst[6];
+	uint8_t ttl;
+	uint8_t src[6];
+	uint8_t client[6];
+} __packed;
Maybe put ttl at the end, to help with alignment?
As I did for the tt_query packet, the initial four fields are the same as in the unicast_packet, so that I can exploit route_unicast_packet() instead of writing a new routing function.
Is that a major issue?
No. It's just that gcc might optimize accesses to src and client as a word read + 1/2 word read, if they were 1/2 word aligned. With ttl where it is, src and client sit at odd alignments, so gcc will have to do byte accesses. But this is not the fast path, so it does not matter much.
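Spelling the offsets out for the layout as posted (worked out by hand, only to illustrate the point):

struct roam_adv_packet {
	uint8_t packet_type;	/* offset  0 */
	uint8_t version;	/* offset  1 */
	uint8_t dst[6];		/* offset  2 - even, 2-byte aligned */
	uint8_t ttl;		/* offset  8 */
	uint8_t src[6];		/* offset  9 - odd */
	uint8_t client[6];	/* offset 15 - odd */
} __packed;			/* 21 bytes in total */

/* with ttl moved to the end, src would start at offset 8 and client at
 * offset 14, so both address copies could use half-word/word accesses */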
+	tt_global_add(bat_priv, orig_node, roam_adv_packet->client,
+		      atomic_read(&orig_node->last_ttvn) + 1, true);
+	/* Roaming phase starts: I have a new information but the ttvn has been
+	 * incremented yet. This flag will make me check all the incoming
+	 * packets for the correct destination. */
The grammar in that comment could be better:
/* Roaming phase starts: I have new information but the ttvn has not * been incremented yet. This flag will make me check all the incoming * packets for the correct destination. */
Thanks and sorry for my poor grammar :)
Actually, it is mostly very good....
Ok, I got the point. Maybe I will not be so drastic, but I will follow your suggestion.
Lots of small functions is my style. However, the Linux coding style documentation says something similar:
Chapter 6: Functions
Functions should be short and sweet, and do just one thing. They should fit on one or two screenfuls of text (the ISO/ANSI screen size is 80x24, as we all know), and do one thing and do that well.
It is well worth reading Documentation/CodingStyle
Andrew
On Wed, May 04, 2011 at 03:52:23PM +0200, Andrew Lunn wrote:
On Wed, May 04, 2011 at 03:36:37PM +0200, Antonio Quartulli wrote:
On Wed, May 04, 2011 at 01:22:34PM +0200, Andrew Lunn wrote:
+struct roam_adv_packet {
+	uint8_t packet_type;
+	uint8_t version;
+	uint8_t dst[6];
+	uint8_t ttl;
+	uint8_t src[6];
+	uint8_t client[6];
+} __packed;
Maybe put ttl at the end, to help with alignment?
As I did for the tt_query packet, the initial four fields are the same as in the unicast_packet, so that I can exploit route_unicast_packet() instead of writing a new routing function.
Is that a major issue?
No. It's just that gcc might optimize accesses to src and client as a word read + 1/2 word read, if they were 1/2 word aligned. With ttl where it is, src and client sit at odd alignments, so gcc will have to do byte accesses.
Understood. Thanks for the explanation.
But this is not the fast path, so it does not matter much.
Exactly. So I think we can leave it as it is in this case.
+	tt_global_add(bat_priv, orig_node, roam_adv_packet->client,
+		      atomic_read(&orig_node->last_ttvn) + 1, true);
+	/* Roaming phase starts: I have a new information but the ttvn has been
+	 * incremented yet. This flag will make me check all the incoming
+	 * packets for the correct destination. */
The grammar in that comment could be better:
/* Roaming phase starts: I have new information but the ttvn has not * been incremented yet. This flag will make me check all the incoming * packets for the correct destination. */
Thanks and sorry for my poor grammar :)
Actually, it is mostly very good....
Ok, I got the point. Maybe I will not be so drastic, but I will follow your suggestion.
Lots of small functions is my style. However, the Linux coding style documentation says something similar:
Chapter 6: Functions
Functions should be short and sweet, and do just one thing. They should fit on one or two screenfuls of text (the ISO/ANSI screen size is 80x24, as we all know), and do one thing and do that well.
It is well worth reading Documentation/CodingStyle
Mh, thank you for showing me this document. I'll read it thoroughly as soon as possible! :)
Regards,
The local and the global translation-tables are now lock free and rcu protected.
Signed-off-by: Antonio Quartulli ordex@autistici.org --- main.c | 2 - translation-table.c | 266 +++++++++++++++++++++++++++++---------------------- types.h | 6 +- vis.c | 13 +-- 4 files changed, 161 insertions(+), 126 deletions(-)
diff --git a/main.c b/main.c index 31cbecc..a3783f8 100644 --- a/main.c +++ b/main.c @@ -81,8 +81,6 @@ int mesh_init(struct net_device *soft_iface)
spin_lock_init(&bat_priv->forw_bat_list_lock); spin_lock_init(&bat_priv->forw_bcast_list_lock); - spin_lock_init(&bat_priv->tt_lhash_lock); - spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->tt_changes_list_lock); spin_lock_init(&bat_priv->tt_req_list_lock); spin_lock_init(&bat_priv->tt_roam_list_lock); diff --git a/translation-table.c b/translation-table.c index b533f0a..00d0255 100644 --- a/translation-table.c +++ b/translation-table.c @@ -78,6 +78,9 @@ static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, if (!compare_eth(tt_local_entry, data)) continue;
+ if (!atomic_inc_not_zero(&tt_local_entry->refcount)) + continue; + tt_local_entry_tmp = tt_local_entry; break; } @@ -107,6 +110,9 @@ static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, if (!compare_eth(tt_global_entry, data)) continue;
+ if (!atomic_inc_not_zero(&tt_global_entry->refcount)) + continue; + tt_global_entry_tmp = tt_global_entry; break; } @@ -123,6 +129,34 @@ static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) return time_after(jiffies, deadline); }
+static void tt_local_entry_free_rcu(struct rcu_head *rcu) +{ + struct tt_local_entry *tt_local_entry; + + tt_local_entry = container_of(rcu, struct tt_local_entry, rcu); + kfree(tt_local_entry); +} + +static void tt_local_entry_free_ref(struct tt_local_entry *tt_local_entry) +{ + if (atomic_dec_and_test(&tt_local_entry->refcount)) + call_rcu(&tt_local_entry->rcu, tt_local_entry_free_rcu); +} + +static void tt_global_entry_free_rcu(struct rcu_head *rcu) +{ + struct tt_global_entry *tt_global_entry; + + tt_global_entry = container_of(rcu, struct tt_global_entry, rcu); + kfree(tt_global_entry); +} + +static void tt_global_entry_free_ref(struct tt_global_entry *tt_global_entry) +{ + if (atomic_dec_and_test(&tt_global_entry->refcount)) + call_rcu(&tt_global_entry->rcu, tt_global_entry_free_rcu); +} + static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr) { struct tt_change_node *tt_change_node; @@ -166,22 +200,19 @@ static int tt_local_init(struct bat_priv *bat_priv) void tt_local_add(struct net_device *soft_iface, uint8_t *addr) { struct bat_priv *bat_priv = netdev_priv(soft_iface); - struct tt_local_entry *tt_local_entry; - struct tt_global_entry *tt_global_entry; - uint8_t roam_addr[ETH_ALEN]; - struct orig_node *roam_orig_node; + struct tt_local_entry *tt_local_entry = NULL; + struct tt_global_entry *tt_global_entry = NULL;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr);
if (tt_local_entry) { tt_local_entry->last_seen = jiffies; - goto unlock; + goto out; }
tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); if (!tt_local_entry) - goto unlock; + goto out;
tt_local_event(bat_priv, TT_ADD, addr);
@@ -191,6 +222,7 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
memcpy(tt_local_entry->addr, addr, ETH_ALEN); tt_local_entry->last_seen = jiffies; + atomic_set(&tt_local_entry->refcount, 2);
/* the batman interface mac address should never be purged */ if (compare_eth(addr, soft_iface->dev_addr)) @@ -200,31 +232,26 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry, &tt_local_entry->hash_entry); + atomic_inc(&bat_priv->num_local_tt); - spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ - spin_lock_bh(&bat_priv->tt_ghash_lock); - tt_global_entry = tt_global_hash_find(bat_priv, addr);
/* Check whether it is a roaming! */ if (tt_global_entry) { - memcpy(roam_addr, tt_global_entry->addr, ETH_ALEN); - roam_orig_node = tt_global_entry->orig_node; /* This node is probably going to update its tt table */ tt_global_entry->orig_node->tt_poss_change = true; _tt_global_del(bat_priv, tt_global_entry, "local tt received"); - spin_unlock_bh(&bat_priv->tt_ghash_lock); send_roam_adv(bat_priv, tt_global_entry->addr, - tt_global_entry->orig_node); - } else - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - return; -unlock: - spin_unlock_bh(&bat_priv->tt_lhash_lock); + tt_global_entry->orig_node); + } +out: + if (tt_local_entry) + tt_local_entry_free_ref(tt_local_entry); + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
int tt_changes_fill_buffer(struct bat_priv *bat_priv, @@ -306,8 +333,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) "announced via TT (TTVN: %u):\n", net_dev->name, (uint8_t)atomic_read(&bat_priv->ttvn));
- spin_lock_bh(&bat_priv->tt_lhash_lock); - buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx\n" */ for (i = 0; i < hash->size; i++) { @@ -321,7 +346,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->tt_lhash_lock); ret = -ENOMEM; goto out; } @@ -341,8 +365,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); - seq_printf(seq, "%s", buff); kfree(buff); out: @@ -351,15 +373,6 @@ out: return ret; }
-static void tt_local_entry_free(struct hlist_node *node, void *arg) -{ - struct bat_priv *bat_priv = (struct bat_priv *)arg; - void *data = container_of(node, struct tt_local_entry, hash_entry); - - kfree(data); - atomic_dec(&bat_priv->num_local_tt); -} - static void tt_local_del(struct bat_priv *bat_priv, struct tt_local_entry *tt_local_entry, char *message) @@ -372,26 +385,28 @@ static void tt_local_del(struct bat_priv *bat_priv, hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry->addr);
- tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); + tt_local_entry_free_ref(tt_local_entry); }
void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message, bool roaming) { - struct tt_local_entry *tt_local_entry; + struct tt_local_entry *tt_local_entry = NULL;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (tt_local_entry) { - if (roaming) - tt_local_event(bat_priv, TT_DEL, broadcast_addr); - else - tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); + if (!tt_local_entry) + goto out;
- tt_local_del(bat_priv, tt_local_entry, message); - } - spin_unlock_bh(&bat_priv->tt_lhash_lock); + if (roaming) + tt_local_event(bat_priv, TT_DEL, broadcast_addr); + else + tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); + + tt_local_del(bat_priv, tt_local_entry, message); +out: + if (tt_local_entry) + tt_local_entry_free_ref(tt_local_entry); }
static void tt_local_purge(struct bat_priv *bat_priv) @@ -400,13 +415,14 @@ static void tt_local_purge(struct bat_priv *bat_priv) struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; + spinlock_t *list_lock; /* protects write access to the hash lists */ int i;
- spin_lock_bh(&bat_priv->tt_lhash_lock); - for (i = 0; i < hash->size; i++) { head = &hash->table[i]; + list_lock = &hash->list_locks[i];
+ spin_lock_bh(list_lock); hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { if (tt_local_entry->never_purge) @@ -417,22 +433,26 @@ static void tt_local_purge(struct bat_priv *bat_priv) continue;
tt_local_event(bat_priv, TT_DEL, tt_local_entry->addr); - tt_local_del(bat_priv, tt_local_entry, - "address timed out"); + atomic_dec(&bat_priv->num_local_tt); + bat_dbg(DBG_ROUTES, bat_priv, "Deleting local " + "tt entry (%pM): timed out\n", + tt_local_entry->addr); + hlist_del_rcu(node); + tt_local_entry_free_ref(tt_local_entry); } + spin_unlock_bh(list_lock); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); }
static void tt_local_table_free(struct bat_priv *bat_priv) { struct hashtable_t *hash; - int i; spinlock_t *list_lock; /* protects write access to the hash lists */ - struct hlist_head *head; - struct hlist_node *node, *node_tmp; struct tt_local_entry *tt_local_entry; + struct hlist_node *node, *node_tmp; + struct hlist_head *head; + int i;
if (!bat_priv->tt_local_hash) return; @@ -447,7 +467,7 @@ static void tt_local_table_free(struct bat_priv *bat_priv) hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { hlist_del_rcu(node); - kfree(tt_local_entry); + tt_local_entry_free_ref(tt_local_entry); } spin_unlock_bh(list_lock); } @@ -492,10 +512,9 @@ int tt_global_add(struct bat_priv *bat_priv, unsigned char *tt_addr, uint8_t ttvn, bool roaming) { struct tt_global_entry *tt_global_entry; - struct tt_local_entry *tt_local_entry; struct orig_node *orig_node_tmp; + int ret = 0;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, tt_addr);
if (!tt_global_entry) { @@ -503,16 +522,19 @@ int tt_global_add(struct bat_priv *bat_priv, kmalloc(sizeof(struct tt_global_entry), GFP_ATOMIC); if (!tt_global_entry) - goto unlock; + goto out; + memcpy(tt_global_entry->addr, tt_addr, ETH_ALEN); /* Assign the new orig_node */ atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; tt_global_entry->ttvn = ttvn; - atomic_inc(&orig_node->tt_size); + atomic_set(&tt_global_entry->refcount, 2); + hash_add(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry, &tt_global_entry->hash_entry); + atomic_inc(&orig_node->tt_size); } else { if (tt_global_entry->orig_node != orig_node) { atomic_dec(&tt_global_entry->orig_node->tt_size); @@ -525,25 +547,18 @@ int tt_global_add(struct bat_priv *bat_priv, } }
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - bat_dbg(DBG_ROUTES, bat_priv, "Creating new global tt entry: %pM (via %pM)\n", tt_global_entry->addr, orig_node->orig);
/* remove address from local hash if present */ - spin_lock_bh(&bat_priv->tt_lhash_lock); - tt_local_entry = tt_local_hash_find(bat_priv, tt_addr); - - if (tt_local_entry) - tt_local_remove(bat_priv, tt_global_entry->addr, - "global tt received", roaming); - - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return 1; -unlock: - spin_unlock_bh(&bat_priv->tt_ghash_lock); - return 0; + tt_local_remove(bat_priv, tt_global_entry->addr, + "global tt received", roaming); + ret = 1; +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); + return ret; }
int tt_global_seq_print_text(struct seq_file *seq, void *offset) @@ -579,8 +594,6 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) seq_printf(seq, " %-13s %s %-15s %s\n", "Client", "(TTVN)", "Originator", "(Curr TTVN)");
- spin_lock_bh(&bat_priv->tt_ghash_lock); - buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx (ttvn) via * xx:xx:xx:xx:xx:xx (cur_ttvn)\n"*/ @@ -595,10 +608,10 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->tt_ghash_lock); ret = -ENOMEM; goto out; } + buff[0] = '\0'; pos = 0;
@@ -620,8 +633,6 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - seq_printf(seq, "%s", buff); kfree(buff); out: @@ -635,7 +646,7 @@ static void _tt_global_del(struct bat_priv *bat_priv, char *message) { if (!tt_global_entry) - return; + goto out;
bat_dbg(DBG_ROUTES, bat_priv, "Deleting global tt entry %pM (via %pM): %s\n", @@ -643,25 +654,29 @@ static void _tt_global_del(struct bat_priv *bat_priv, message);
atomic_dec(&tt_global_entry->orig_node->tt_size); + hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry->addr); - kfree(tt_global_entry); +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
void tt_global_del(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *addr, char *message) { - struct tt_global_entry *tt_global_entry; + struct tt_global_entry *tt_global_entry = NULL;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, addr); + if (!tt_global_entry) + goto out;
- if (tt_global_entry && tt_global_entry->orig_node == orig_node) { - atomic_dec(&orig_node->tt_size); + if (tt_global_entry->orig_node == orig_node) _tt_global_del(bat_priv, tt_global_entry, message); - } - spin_unlock_bh(&bat_priv->tt_ghash_lock); +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
void tt_global_del_orig(struct bat_priv *bat_priv, @@ -672,38 +687,59 @@ void tt_global_del_orig(struct bat_priv *bat_priv, struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_node *node, *safe; struct hlist_head *head; + spinlock_t *list_lock; /* protects write access to the hash lists */
- if (!bat_priv->tt_global_hash) - return; - - spin_lock_bh(&bat_priv->tt_ghash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i]; + list_lock = &hash->list_locks[i];
+ spin_lock_bh(list_lock); hlist_for_each_entry_safe(tt_global_entry, node, safe, head, hash_entry) { - if (tt_global_entry->orig_node == orig_node) - _tt_global_del(bat_priv, tt_global_entry, - message); + if (tt_global_entry->orig_node == orig_node) { + bat_dbg(DBG_ROUTES, bat_priv, + "Deleting global tt entry %pM " + "(via %pM): originator time out\n", + tt_global_entry->addr, + tt_global_entry->orig_node->orig); + hlist_del_rcu(node); + tt_global_entry_free_ref(tt_global_entry); + } } + spin_unlock_bh(list_lock); } atomic_set(&orig_node->tt_size, 0); - - spin_unlock_bh(&bat_priv->tt_ghash_lock); -} - -static void tt_global_entry_free(struct hlist_node *node, void *arg) -{ - void *data = container_of(node, struct tt_global_entry, hash_entry); - kfree(data); }
static void tt_global_table_free(struct bat_priv *bat_priv) { + struct hashtable_t *hash; + spinlock_t *list_lock; /* protects write access to the hash lists */ + struct tt_global_entry *tt_global_entry; + struct hlist_node *node, *node_tmp; + struct hlist_head *head; + int i; + if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->tt_global_hash, tt_global_entry_free, NULL); + hash = bat_priv->tt_global_hash; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + list_lock = &hash->list_locks[i]; + + spin_lock_bh(list_lock); + hlist_for_each_entry_safe(tt_global_entry, node, node_tmp, + head, hash_entry) { + hlist_del_rcu(node); + tt_global_entry_free_ref(tt_global_entry); + } + spin_unlock_bh(list_lock); + } + + hash_destroy(hash); + bat_priv->tt_global_hash = NULL; }
@@ -712,19 +748,19 @@ struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr) struct tt_global_entry *tt_global_entry; struct orig_node *orig_node = NULL;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (!tt_global_entry) goto out;
if (!atomic_inc_not_zero(&tt_global_entry->orig_node->refcount)) - goto out; + goto free_tt;
orig_node = tt_global_entry->orig_node;
+free_tt: + tt_global_entry_free_ref(tt_global_entry); out: - spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; }
@@ -781,7 +817,6 @@ uint16_t tt_local_crc(struct bat_priv *bat_priv) tt_local_entry->addr[j]); total ^= total_one; } - rcu_read_unlock(); }
@@ -1319,7 +1354,7 @@ static void tt_update_changes(struct bat_priv *bat_priv, if (!tt_global_add(bat_priv, orig_node, (tt_change + i)->addr, tt_response->ttvn, false)) - return; + goto out; }
tt_save_orig_buffer(bat_priv, orig_node, (unsigned char *)tt_change, @@ -1333,15 +1368,17 @@ out:
bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr) { - struct tt_local_entry *tt_local_entry; + struct tt_local_entry *tt_local_entry = NULL; + bool ret = false;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->tt_lhash_lock); - + if (!tt_local_entry) + goto out; + ret = true; +out: if (tt_local_entry) - return true; - return false; + tt_local_entry_free_ref(tt_local_entry); + return ret; }
void handle_tt_response(struct bat_priv *bat_priv, @@ -1377,11 +1414,10 @@ void handle_tt_response(struct bat_priv *bat_priv, if (!orig_node) goto out;
- spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); - spin_unlock_bh(&bat_priv->tt_ghash_lock); out: - orig_node_free_ref(orig_node); + if (orig_node) + orig_node_free_ref(orig_node); }
int tt_init(struct bat_priv *bat_priv) diff --git a/types.h b/types.h index c1e88a4..e831230 100644 --- a/types.h +++ b/types.h @@ -193,8 +193,6 @@ struct bat_priv { spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ spinlock_t tt_changes_list_lock; /* protects tt_changes */ - spinlock_t tt_lhash_lock; /* protects tt_local_hash */ - spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t tt_req_list_lock; /* protects tt_req_list */ spinlock_t tt_roam_list_lock; /* protects tt_roam_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ @@ -234,6 +232,8 @@ struct tt_local_entry { uint8_t addr[ETH_ALEN]; unsigned long last_seen; char never_purge; + atomic_t refcount; + struct rcu_head rcu; struct hlist_node hash_entry; };
@@ -241,6 +241,8 @@ struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; uint8_t ttvn; + atomic_t refcount; + struct rcu_head rcu; /* entry in the global table */ struct hlist_node hash_entry; }; diff --git a/vis.c b/vis.c index c39f20c..4c27950 100644 --- a/vis.c +++ b/vis.c @@ -680,11 +680,12 @@ next:
hash = bat_priv->tt_local_hash;
- spin_lock_bh(&bat_priv->tt_lhash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i];
- hlist_for_each_entry(tt_local_entry, node, head, hash_entry) { + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_local_entry, node, head, + hash_entry) { entry = (struct vis_info_entry *) skb_put(info->skb_packet, sizeof(*entry)); @@ -693,14 +694,12 @@ next: entry->quality = 0; /* 0 means TT */ packet->entries++;
- if (vis_packet_full(info)) { - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return 0; - } + if (vis_packet_full(info)) + goto unlock; } + rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); return 0;
unlock:
The old HNA mechanism has been rewritten from scratch. The new mechanism consists of announcing only the local translation-table changes, which reduces the protocol overhead.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Client-announcement
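In rough terms, a node receiving an OGM walks the attached list of tt_change records and applies each entry to its global table. A simplified sketch of that step (the helper names and the TT_CHANGE_* values follow the patch; the "flags" field name and the omitted error handling are assumptions of this sketch):

static void apply_tt_changes(struct bat_priv *bat_priv,
			     struct orig_node *orig_node,
			     struct tt_change *tt_change,
			     uint8_t tt_num_changes, uint8_t ttvn)
{
	int i;

	for (i = 0; i < tt_num_changes; i++) {
		if (tt_change[i].flags == TT_CHANGE_DEL)
			tt_global_del(bat_priv, orig_node, tt_change[i].addr,
				      "tt removed by changes");
		else
			tt_global_add(bat_priv, orig_node, tt_change[i].addr,
				      ttvn, false);
	}
}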
Moreover:
- COMPAT_VERSION has been increased to 14
- batman-adv now depends on module "crc16" for tt crc computation
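For reference, the crc the new dependency is needed for is an order-independent checksum of the announced clients: a crc16 is computed over each client MAC and the per-client values are XORed together, so the result does not depend on the order in which the table is walked. A minimal sketch of the idea (the walk over the hash table itself is omitted here):

#include <linux/crc16.h>
#include <linux/if_ether.h>

/* sketch: order-independent checksum over a set of client MAC addresses */
static uint16_t tt_crc_of_clients(const uint8_t (*addrs)[ETH_ALEN], int num)
{
	uint16_t total = 0, total_one;
	int i, j;

	for (i = 0; i < num; i++) {
		total_one = 0;
		for (j = 0; j < ETH_ALEN; j++)
			total_one = crc16_byte(total_one, addrs[i][j]);
		total ^= total_one;	/* XOR keeps the result order independent */
	}

	return total;
}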
Signed-off-by: Antonio Quartulli ordex@autistici.org --- 1) send_tt_request() has been modified in order to keep the tt_req_node management code in separate helper functions.
2) the DBG_TT log channel has been added
aggregation.c | 23 +- aggregation.h | 6 +- bat_sysfs.c | 2 +- hard-interface.c | 13 +- main.c | 13 +- main.h | 14 +- originator.c | 8 +- packet.h | 34 ++- routing.c | 227 ++++++++--- routing.h | 10 +- send.c | 90 +++- send.h | 2 +- soft-interface.c | 11 +- translation-table.c | 1141 ++++++++++++++++++++++++++++++++++++++++++--------- translation-table.h | 42 ++- types.h | 38 ++- unicast.c | 3 + 17 files changed, 1362 insertions(+), 315 deletions(-)
diff --git a/aggregation.c b/aggregation.c index 9b94590..de59b5f 100644 --- a/aggregation.c +++ b/aggregation.c @@ -20,16 +20,11 @@ */
#include "main.h" +#include "translation-table.h" #include "aggregation.h" #include "send.h" #include "routing.h"
-/* calculate the size of the tt information for a given packet */ -static int tt_len(struct batman_packet *batman_packet) -{ - return batman_packet->num_tt * ETH_ALEN; -} - /* return true if new_packet can be aggregated with forw_packet */ static bool can_aggregate_with(struct batman_packet *new_batman_packet, int packet_len, @@ -255,18 +250,20 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff, batman_packet = (struct batman_packet *)packet_buff;
do { - /* network to host order for our 32bit seqno, and the - orig_interval. */ + /* network to host order for our 32bit seqno and the + orig_interval */ batman_packet->seqno = ntohl(batman_packet->seqno); + batman_packet->tt_crc = ntohs(batman_packet->tt_crc);
tt_buff = packet_buff + buff_pos + BAT_PACKET_LEN; - receive_bat_packet(ethhdr, batman_packet, - tt_buff, tt_len(batman_packet), - if_incoming);
- buff_pos += BAT_PACKET_LEN + tt_len(batman_packet); + receive_bat_packet(ethhdr, batman_packet, tt_buff, if_incoming); + + buff_pos += BAT_PACKET_LEN + + tt_len(batman_packet->tt_num_changes); + batman_packet = (struct batman_packet *) (packet_buff + buff_pos); } while (aggregated_packet(buff_pos, packet_len, - batman_packet->num_tt)); + batman_packet->tt_num_changes)); } diff --git a/aggregation.h b/aggregation.h index 7e6d72f..c631a4c 100644 --- a/aggregation.h +++ b/aggregation.h @@ -25,9 +25,11 @@ #include "main.h"
/* is there another aggregated packet here? */ -static inline int aggregated_packet(int buff_pos, int packet_len, int num_tt) +static inline int aggregated_packet(int buff_pos, int packet_len, + int tt_num_changes) { - int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_tt * ETH_ALEN); + int next_buff_pos = buff_pos + BAT_PACKET_LEN + (tt_num_changes * + sizeof(struct tt_change));
return (next_buff_pos <= packet_len) && (next_buff_pos <= MAX_AGGREGATION_BYTES); diff --git a/bat_sysfs.c b/bat_sysfs.c index 497a070..5c85834 100644 --- a/bat_sysfs.c +++ b/bat_sysfs.c @@ -368,7 +368,7 @@ BAT_ATTR_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE, static BAT_ATTR(gw_bandwidth, S_IRUGO | S_IWUSR, show_gw_bwidth, store_gw_bwidth); #ifdef CONFIG_BATMAN_ADV_DEBUG -BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 3, NULL); +BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 7, NULL); #endif
static struct bat_attribute *mesh_attrs[] = { diff --git a/hard-interface.c b/hard-interface.c index dfbfccc..69ef99a 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -152,12 +152,6 @@ static void primary_if_select(struct bat_priv *bat_priv, batman_packet->ttl = TTL;
primary_if_update_addr(bat_priv); - - /*** - * hacky trick to make sure that we send the TT information via - * our new primary interface - */ - atomic_set(&bat_priv->tt_local_changed, 1); }
static bool hardif_is_iface_up(struct hard_iface *hard_iface) @@ -339,7 +333,8 @@ int hardif_enable_interface(struct hard_iface *hard_iface, char *iface_name) batman_packet->flags = 0; batman_packet->ttl = 2; batman_packet->tq = TQ_MAX_VALUE; - batman_packet->num_tt = 0; + batman_packet->tt_num_changes = 0; + batman_packet->ttvn = 0;
hard_iface->if_num = bat_priv->num_ifaces; bat_priv->num_ifaces++; @@ -658,6 +653,10 @@ static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, case BAT_VIS: ret = recv_vis_packet(skb, hard_iface); break; + /* Translation table query (request or response) */ + case BAT_TT_QUERY: + ret = recv_tt_query(skb, hard_iface); + break; default: ret = NET_RX_DROP; } diff --git a/main.c b/main.c index 0a7cee0..edb3e07 100644 --- a/main.c +++ b/main.c @@ -86,6 +86,9 @@ int mesh_init(struct net_device *soft_iface) spin_lock_init(&bat_priv->forw_bcast_list_lock); spin_lock_init(&bat_priv->tt_lhash_lock); spin_lock_init(&bat_priv->tt_ghash_lock); + spin_lock_init(&bat_priv->tt_changes_list_lock); + spin_lock_init(&bat_priv->tt_req_list_lock); + spin_lock_init(&bat_priv->tt_buff_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); spin_lock_init(&bat_priv->vis_list_lock); @@ -96,14 +99,13 @@ int mesh_init(struct net_device *soft_iface) INIT_HLIST_HEAD(&bat_priv->forw_bcast_list); INIT_HLIST_HEAD(&bat_priv->gw_list); INIT_HLIST_HEAD(&bat_priv->softif_neigh_vids); + INIT_LIST_HEAD(&bat_priv->tt_changes_list); + INIT_LIST_HEAD(&bat_priv->tt_req_list);
if (originator_init(bat_priv) < 1) goto err;
- if (tt_local_init(bat_priv) < 1) - goto err; - - if (tt_global_init(bat_priv) < 1) + if (tt_init(bat_priv) < 1) goto err;
tt_local_add(soft_iface, soft_iface->dev_addr); @@ -137,8 +139,7 @@ void mesh_free(struct net_device *soft_iface) gw_node_purge(bat_priv); originator_free(bat_priv);
- tt_local_free(bat_priv); - tt_global_free(bat_priv); + tt_free(bat_priv);
softif_neigh_purge(bat_priv);
diff --git a/main.h b/main.h index 3ca3941..f2b6fcc 100644 --- a/main.h +++ b/main.h @@ -46,11 +46,19 @@ /* sliding packet range of received originator messages in squence numbers * (should be a multiple of our word size) */ #define TQ_LOCAL_WINDOW_SIZE 64 +#define TT_REQUEST_TIMEOUT 3 /* seconds we have to keep pending tt_req */ + #define TQ_GLOBAL_WINDOW_SIZE 5 #define TQ_LOCAL_BIDRECT_SEND_MINIMUM 1 #define TQ_LOCAL_BIDRECT_RECV_MINIMUM 1 #define TQ_TOTAL_BIDRECT_LIMIT 1
+#define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */ + +/* Transtable operations */ +#define TT_CHANGE_ADD 0x00 +#define TT_CHANGE_DEL 0x01 + #define NUM_WORDS (TQ_LOCAL_WINDOW_SIZE / WORD_BIT_SIZE)
#define LOG_BUF_LEN 8192 /* has to be a power of 2 */ @@ -90,9 +98,9 @@
/* all messages related to routing / flooding / broadcasting / etc */ #define DBG_BATMAN 1 -/* route or tt entry added / changed / deleted */ -#define DBG_ROUTES 2 -#define DBG_ALL 3 +#define DBG_ROUTES 2 /* route added / changed / deleted */ +#define DBG_TT 4 /* translation table operations */ +#define DBG_ALL 7
/* diff --git a/originator.c b/originator.c index 080ec88..d4e26fd 100644 --- a/originator.c +++ b/originator.c @@ -145,6 +145,7 @@ static void orig_node_free_rcu(struct rcu_head *rcu) tt_global_del_orig(orig_node->bat_priv, orig_node, "originator timed out");
+ kfree(orig_node->tt_buff); kfree(orig_node->bcast_own); kfree(orig_node->bcast_own_sum); kfree(orig_node); @@ -213,6 +214,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) spin_lock_init(&orig_node->ogm_cnt_lock); spin_lock_init(&orig_node->bcast_seqno_lock); spin_lock_init(&orig_node->neigh_list_lock); + spin_lock_init(&orig_node->tt_buff_lock);
/* extra reference for return */ atomic_set(&orig_node->refcount, 2); @@ -221,6 +223,8 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; orig_node->tt_buff = NULL; + orig_node->tt_buff_len = 0; + atomic_set(&orig_node->tt_size, 0); orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1 @@ -330,9 +334,7 @@ static bool purge_orig_node(struct bat_priv *bat_priv, if (purge_orig_neighbors(bat_priv, orig_node, &best_neigh_node)) { update_routes(bat_priv, orig_node, - best_neigh_node, - orig_node->tt_buff, - orig_node->tt_buff_len); + best_neigh_node); } }
diff --git a/packet.h b/packet.h index eda9965..14f501e 100644 --- a/packet.h +++ b/packet.h @@ -30,9 +30,10 @@ #define BAT_BCAST 0x04 #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 +#define BAT_TT_QUERY 0x07
/* this file is included by batctl which needs these defines */ -#define COMPAT_VERSION 12 +#define COMPAT_VERSION 14 #define DIRECTLINK 0x40 #define VIS_SERVER 0x20 #define PRIMARIES_FIRST_HOP 0x10 @@ -52,6 +53,11 @@ #define UNI_FRAG_HEAD 0x01 #define UNI_FRAG_LARGETAIL 0x02
+/* TT flags */ +#define TT_RESPONSE 0x00 +#define TT_REQUEST 0x01 +#define TT_FULL_TABLE 0x02 + struct batman_packet { uint8_t packet_type; uint8_t version; /* batman version field */ @@ -61,7 +67,9 @@ struct batman_packet { uint8_t orig[6]; uint8_t prev_sender[6]; uint8_t ttl; - uint8_t num_tt; + uint8_t ttvn; /* translation table version number */ + uint16_t tt_crc; + uint8_t tt_num_changes; uint8_t gw_flags; /* flags related to gateway class */ uint8_t align; } __packed; @@ -101,6 +109,7 @@ struct unicast_packet { uint8_t version; /* batman version field */ uint8_t dest[6]; uint8_t ttl; + uint8_t ttvn; /* destination translation table version number */ } __packed;
struct unicast_frag_packet { @@ -133,4 +142,25 @@ struct vis_packet { uint8_t sender_orig[6]; /* who sent or rebroadcasted this packet */ } __packed;
+struct tt_query_packet { + uint8_t packet_type; + uint8_t version; /* batman version field */ + uint8_t dst[ETH_ALEN]; + uint8_t ttl; + uint8_t flags; /* this field is a combination of: + * - TT_REQUEST or TT_RESPONSE + * - TT_FULL_TABLE + */ + uint8_t src[ETH_ALEN]; + uint8_t ttvn; /* if TT_REQUEST: ttvn that triggered the + * request + * if TT_RESPONSE: new ttvn for the src + * orig_node + */ + uint16_t tt_data; /* if TT_REQUEST: crc associated with the + * ttvn + * if TT_RESPONSE: table_size + */ +} __packed; + #endif /* _NET_BATMAN_ADV_PACKET_H_ */ diff --git a/routing.c b/routing.c index bb1c3ec..ad526e5 100644 --- a/routing.c +++ b/routing.c @@ -64,28 +64,55 @@ void slide_own_bcast_window(struct hard_iface *hard_iface) } }
-static void update_TT(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len) +static void update_transtable(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes, + uint8_t ttvn, uint16_t tt_crc) { - if ((tt_buff_len != orig_node->tt_buff_len) || - ((tt_buff_len > 0) && - (orig_node->tt_buff_len > 0) && - (memcmp(orig_node->tt_buff, tt_buff, tt_buff_len) != 0))) { - - if (orig_node->tt_buff_len > 0) - tt_global_del_orig(bat_priv, orig_node, - "originator changed tt"); - - if ((tt_buff_len > 0) && (tt_buff)) - tt_global_add_orig(bat_priv, orig_node, - tt_buff, tt_buff_len); + uint8_t orig_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + bool full_table = true; + + /* the ttvn increased by one -> we can apply the attached changes */ + if (ttvn - orig_ttvn == 1) { + /* the OGM could not contain the changes because they were too + * many to fit in one frame or because they have already been + * sent TT_OGM_APPEND_MAX times. In this case send a tt + * request */ + if (!tt_num_changes) { + full_table = false; + goto request_table; + } + + tt_update_changes(bat_priv, orig_node, tt_num_changes, ttvn, + (struct tt_change *)tt_buff); + + /* Even if we received the crc into the OGM, we prefer + * to recompute it to spot any possible inconsistency + * in the global table */ + spin_lock_bh(&bat_priv->tt_ghash_lock); + orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + spin_unlock_bh(&bat_priv->tt_ghash_lock); + } else { + /* if we missed more than one change or our tables are not + * in sync anymore -> request fresh tt data */ + if (ttvn != orig_ttvn || orig_node->tt_crc != tt_crc) { +request_table: + bat_dbg(DBG_TT, bat_priv, "TT inconsistency for %pM. " + "Need to retrieve the correct information " + "(ttvn: %u last_ttvn: %u crc: %u last_crc: " + "%u num_changes: %u)\n", orig_node->orig, ttvn, + orig_ttvn, tt_crc, orig_node->tt_crc, + tt_num_changes); + send_tt_request(bat_priv, orig_node, ttvn, tt_crc, + full_table); + return; + } } }
static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, - unsigned char *tt_buff, int tt_buff_len) + struct neigh_node *neigh_node) { struct neigh_node *curr_router;
@@ -93,11 +120,10 @@ static void update_route(struct bat_priv *bat_priv,
/* route deleted */ if ((curr_router) && (!neigh_node)) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting route towards: %pM\n", orig_node->orig); tt_global_del_orig(bat_priv, orig_node, - "originator timed out"); + "Deleted route towards originator");
/* route added */ } else if ((!curr_router) && (neigh_node)) { @@ -105,9 +131,6 @@ static void update_route(struct bat_priv *bat_priv, bat_dbg(DBG_ROUTES, bat_priv, "Adding route towards: %pM (via %pM)\n", orig_node->orig, neigh_node->addr); - tt_global_add_orig(bat_priv, orig_node, - tt_buff, tt_buff_len); - /* route changed */ } else { bat_dbg(DBG_ROUTES, bat_priv, @@ -135,8 +158,7 @@ static void update_route(struct bat_priv *bat_priv,
void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *tt_buff, - int tt_buff_len) + struct neigh_node *neigh_node) { struct neigh_node *router = NULL;
@@ -146,11 +168,7 @@ void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, router = orig_node_get_router(orig_node);
if (router != neigh_node) - update_route(bat_priv, orig_node, neigh_node, - tt_buff, tt_buff_len); - /* may be just TT changed */ - else - update_TT(bat_priv, orig_node, tt_buff, tt_buff_len); + update_route(bat_priv, orig_node, neigh_node);
out: if (router) @@ -363,14 +381,12 @@ static void update_orig(struct bat_priv *bat_priv, struct ethhdr *ethhdr, struct batman_packet *batman_packet, struct hard_iface *if_incoming, - unsigned char *tt_buff, int tt_buff_len, - char is_duplicate) + unsigned char *tt_buff, char is_duplicate) { struct neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; struct neigh_node *router = NULL; struct orig_node *orig_node_tmp; struct hlist_node *node; - int tmp_tt_buff_len; uint8_t bcast_own_sum_orig, bcast_own_sum_neigh;
bat_dbg(DBG_BATMAN, bat_priv, "update_originator(): " @@ -435,9 +451,6 @@ static void update_orig(struct bat_priv *bat_priv,
bonding_candidate_add(orig_node, neigh_node);
- tmp_tt_buff_len = (tt_buff_len > batman_packet->num_tt * ETH_ALEN ? - batman_packet->num_tt * ETH_ALEN : tt_buff_len); - /* if this neighbor already is our next hop there is nothing * to change */ router = orig_node_get_router(orig_node); @@ -467,15 +480,19 @@ static void update_orig(struct bat_priv *bat_priv, goto update_tt; }
- update_routes(bat_priv, orig_node, neigh_node, - tt_buff, tmp_tt_buff_len); - goto update_gw; + update_routes(bat_priv, orig_node, neigh_node);
update_tt: - update_routes(bat_priv, orig_node, router, - tt_buff, tmp_tt_buff_len); + /* I have to check for transtable changes only if the OGM has been + * sent through a primary interface */ + if (((batman_packet->orig != ethhdr->h_source) && + (batman_packet->ttl > 2)) || + (batman_packet->flags & PRIMARIES_FIRST_HOP)) + update_transtable(bat_priv, orig_node, tt_buff, + batman_packet->tt_num_changes, + batman_packet->ttvn, + batman_packet->tt_crc);
-update_gw: if (orig_node->gw_flags != batman_packet->gw_flags) gw_node_update(bat_priv, orig_node, batman_packet->gw_flags);
@@ -597,7 +614,7 @@ out:
void receive_bat_packet(struct ethhdr *ethhdr, struct batman_packet *batman_packet, - unsigned char *tt_buff, int tt_buff_len, + unsigned char *tt_buff, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -636,12 +653,14 @@ void receive_bat_packet(struct ethhdr *ethhdr,
bat_dbg(DBG_BATMAN, bat_priv, "Received BATMAN packet via NB: %pM, IF: %s [%pM] " - "(from OG: %pM, via prev OG: %pM, seqno %d, tq %d, " - "TTL %d, V %d, IDF %d)\n", + "(from OG: %pM, via prev OG: %pM, seqno %d, ttvn %u, " + "crc %u, changes %u, td %d, TTL %d, V %d, IDF %d)\n", ethhdr->h_source, if_incoming->net_dev->name, if_incoming->net_dev->dev_addr, batman_packet->orig, batman_packet->prev_sender, batman_packet->seqno, - batman_packet->tq, batman_packet->ttl, batman_packet->version, + batman_packet->ttvn, batman_packet->tt_crc, + batman_packet->tt_num_changes, batman_packet->tq, + batman_packet->ttl, batman_packet->version, has_directlink_flag);
rcu_read_lock(); @@ -794,14 +813,14 @@ void receive_bat_packet(struct ethhdr *ethhdr, ((orig_node->last_real_seqno == batman_packet->seqno) && (orig_node->last_ttl - 3 <= batman_packet->ttl)))) update_orig(bat_priv, orig_node, ethhdr, batman_packet, - if_incoming, tt_buff, tt_buff_len, is_duplicate); + if_incoming, tt_buff, is_duplicate);
/* is single hop (direct) neighbor */ if (is_single_hop_neigh) {
/* mark direct link on incoming interface */ schedule_forward_packet(orig_node, ethhdr, batman_packet, - 1, tt_buff_len, if_incoming); + 1, if_incoming);
bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: " "rebroadcast neighbor packet with direct link flag\n"); @@ -824,7 +843,7 @@ void receive_bat_packet(struct ethhdr *ethhdr, bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: rebroadcast originator packet\n"); schedule_forward_packet(orig_node, ethhdr, batman_packet, - 0, tt_buff_len, if_incoming); + 0, if_incoming);
out_neigh: if ((orig_neigh_node) && (!is_single_hop_neigh)) @@ -1171,6 +1190,70 @@ static struct neigh_node *find_ifalter_router(struct orig_node *primary_orig, return router; }
+int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if) +{ + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct tt_query_packet *tt_query; + struct ethhdr *ethhdr; + int ret = NET_RX_DROP; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, sizeof(struct tt_query_packet)))) + goto out; + + /* I could need to modify it */ + if (skb_cow(skb, sizeof(struct tt_query_packet)) < 0) + goto out; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with unicast indication but broadcast recipient */ + if (is_broadcast_ether_addr(ethhdr->h_dest)) + goto out; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + goto out; + + tt_query = (struct tt_query_packet *)skb->data; + + tt_query->tt_data = ntohs(tt_query->tt_data); + + if (tt_query->flags & TT_REQUEST) { + /* If we cannot provide an answer the tt_request is + * forwarded */ + if (!send_tt_response(bat_priv, tt_query)) { + bat_dbg(DBG_TT, bat_priv, + "Routing TT_REQUEST to %pM [%c]\n", + tt_query->dst, + (tt_query->flags & TT_FULL_TABLE ? 'F' : '.')); + tt_query->tt_data = htons(tt_query->tt_data); + return route_unicast_packet(skb, recv_if); + } + ret = NET_RX_SUCCESS; + goto out; + } + /* packet needs to be linearised to access the TT changes records */ + if (skb_linearize(skb) < 0) + goto out; + + if (is_my_mac(tt_query->dst)) + handle_tt_response(bat_priv, tt_query); + else { + bat_dbg(DBG_TT, bat_priv, + "Routing TT_RESPONSE to %pM [%c]\n", + tt_query->dst, + (tt_query->flags & TT_FULL_TABLE ? 'F' : '.')); + tt_query->tt_data = htons(tt_query->tt_data); + return route_unicast_packet(skb, recv_if); + } + ret = NET_RX_SUCCESS; + +out: + kfree_skb(skb); + return ret; +} + /* find a suitable router for this originator, and use * bonding if possible. increases the found neighbors * refcount.*/ @@ -1356,14 +1439,64 @@ out:
int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) { + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); struct unicast_packet *unicast_packet; int hdr_size = sizeof(struct unicast_packet); + struct orig_node *orig_node; + struct ethhdr *ethhdr; + uint8_t curr_ttvn; + int16_t diff;
if (check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
unicast_packet = (struct unicast_packet *)skb->data;
+ if (is_my_mac(unicast_packet->dest)) + curr_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); + else { + orig_node = orig_hash_find(bat_priv, unicast_packet->dest); + + if (!orig_node) + return NET_RX_DROP; + + curr_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + orig_node_free_ref(orig_node); + } + + diff = unicast_packet->ttvn - curr_ttvn; + /* Check whether I have to reroute the packet */ + if (unicast_packet->packet_type == BAT_UNICAST && + (diff < 0 && diff > -0xff/2)) { + /* Linearize the skb before accessing it */ + if (skb_linearize(skb) < 0) + return NET_RX_DROP; + + ethhdr = (struct ethhdr *)(skb->data + + sizeof(struct unicast_packet)); + + orig_node = transtable_search(bat_priv, ethhdr->h_dest); + + if (!orig_node) { + if (!is_my_client(bat_priv, ethhdr->h_dest)) + return NET_RX_DROP; + memcpy(unicast_packet->dest, + bat_priv->primary_if->net_dev->dev_addr, + ETH_ALEN); + } else { + memcpy(unicast_packet->dest, orig_node->orig, + ETH_ALEN); + curr_ttvn = (uint8_t) + atomic_read(&orig_node->last_ttvn); + orig_node_free_ref(orig_node); + } + + bat_dbg(DBG_ROUTES, bat_priv, "TTVN mismatch (old_ttvn %u " + "new_ttvn %u)! Rerouting unicast packet (for %pM) to " + "%pM\n", ethhdr->h_dest, unicast_packet->dest); + + unicast_packet->ttvn = curr_ttvn; + } /* packet for me */ if (is_my_mac(unicast_packet->dest)) { interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size); diff --git a/routing.h b/routing.h index 870f298..6f6a5f8 100644 --- a/routing.h +++ b/routing.h @@ -24,12 +24,11 @@
void slide_own_bcast_window(struct hard_iface *hard_iface); void receive_bat_packet(struct ethhdr *ethhdr, - struct batman_packet *batman_packet, - unsigned char *tt_buff, int tt_buff_len, - struct hard_iface *if_incoming); + struct batman_packet *batman_packet, + unsigned char *tt_buff, + struct hard_iface *if_incoming); void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *tt_buff, - int tt_buff_len); + struct neigh_node *neigh_node); int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); @@ -37,6 +36,7 @@ int recv_ucast_frag_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bat_packet(struct sk_buff *skb, struct hard_iface *recv_if); +int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv, struct orig_node *orig_node, struct hard_iface *recv_if); diff --git a/send.c b/send.c index f30d0c6..aa0ad64 100644 --- a/send.c +++ b/send.c @@ -121,7 +121,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, /* adjust all flags and log packets */ while (aggregated_packet(buff_pos, forw_packet->packet_len, - batman_packet->num_tt)) { + batman_packet->tt_num_changes)) {
/* we might have aggregated direct link packets with an * ordinary base packet */ @@ -136,17 +136,17 @@ static void send_packet_to_if(struct forw_packet *forw_packet, "Forwarding")); bat_dbg(DBG_BATMAN, bat_priv, "%s %spacket (originator %pM, seqno %d, TQ %d, TTL %d," - " IDF %s) on interface %s [%pM]\n", + " IDF %s, hvn %d) on interface %s [%pM]\n", fwd_str, (packet_num > 0 ? "aggregated " : ""), batman_packet->orig, ntohl(batman_packet->seqno), batman_packet->tq, batman_packet->ttl, (batman_packet->flags & DIRECTLINK ? "on" : "off"), - hard_iface->net_dev->name, + batman_packet->ttvn, hard_iface->net_dev->name, hard_iface->net_dev->dev_addr);
buff_pos += sizeof(struct batman_packet) + - (batman_packet->num_tt * ETH_ALEN); + tt_len(batman_packet->tt_num_changes); packet_num++; batman_packet = (struct batman_packet *) (forw_packet->skb->data + buff_pos); @@ -214,26 +214,17 @@ static void send_packet(struct forw_packet *forw_packet) rcu_read_unlock(); }
-static void rebuild_batman_packet(struct bat_priv *bat_priv, - struct hard_iface *hard_iface) +static void realloc_packet_buffer(struct hard_iface *hard_iface, + int new_len) { - int new_len; unsigned char *new_buff; - struct batman_packet *batman_packet;
- new_len = sizeof(struct batman_packet) + - (bat_priv->num_local_tt * ETH_ALEN); new_buff = kmalloc(new_len, GFP_ATOMIC);
/* keep old buffer if kmalloc should fail */ if (new_buff) { memcpy(new_buff, hard_iface->packet_buff, sizeof(struct batman_packet)); - batman_packet = (struct batman_packet *)new_buff; - - batman_packet->num_tt = tt_local_fill_buffer(bat_priv, - new_buff + sizeof(struct batman_packet), - new_len - sizeof(struct batman_packet));
kfree(hard_iface->packet_buff); hard_iface->packet_buff = new_buff; @@ -241,6 +232,46 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, } }
+/* when calling this function (hard_iface == primary_if) has to be true */ +static void prepare_packet_buffer(struct bat_priv *bat_priv, + struct hard_iface *hard_iface) +{ + int new_len; + struct batman_packet *batman_packet; + + new_len = BAT_PACKET_LEN + + tt_len((uint8_t)atomic_read(&bat_priv->tt_local_changes)); + + /* if we have too many changes for one packet don't send any + * and wait for the tt table request which will be fragmented */ + if (new_len > hard_iface->soft_iface->mtu) + new_len = BAT_PACKET_LEN; + + realloc_packet_buffer(hard_iface, new_len); + batman_packet = (struct batman_packet *)hard_iface->packet_buff; + + atomic_set(&bat_priv->tt_crc, tt_local_crc(bat_priv)); + + /* reset the sending counter */ + atomic_set(&bat_priv->tt_ogm_append_cnt, TT_OGM_APPEND_MAX); + + batman_packet->tt_num_changes = tt_changes_fill_buffer(bat_priv, + hard_iface->packet_buff + BAT_PACKET_LEN, + hard_iface->packet_len - BAT_PACKET_LEN); + +} + +static void reset_packet_buffer(struct bat_priv *bat_priv, + struct hard_iface *hard_iface) +{ + struct batman_packet *batman_packet; + + realloc_packet_buffer(hard_iface, BAT_PACKET_LEN); + + batman_packet = (struct batman_packet *)hard_iface->packet_buff; + batman_packet->tt_num_changes = 0; +} + void schedule_own_packet(struct hard_iface *hard_iface) { struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); @@ -266,14 +297,22 @@ void schedule_own_packet(struct hard_iface *hard_iface) if (hard_iface->if_status == IF_TO_BE_ACTIVATED) hard_iface->if_status = IF_ACTIVE;
- /* if local tt has changed and interface is a primary interface */ - if ((atomic_read(&bat_priv->tt_local_changed)) && - (hard_iface == primary_if)) - rebuild_batman_packet(bat_priv, hard_iface); + if (hard_iface == primary_if) { + /* if at least one change happened */ + if (atomic_read(&bat_priv->tt_local_changes) > 0) { + prepare_packet_buffer(bat_priv, hard_iface); + /* Increment the TTVN only once per OGM interval */ + atomic_inc(&bat_priv->ttvn); + } + + /* if the changes have been sent enough times */ + if (!atomic_dec_not_zero(&bat_priv->tt_ogm_append_cnt)) + reset_packet_buffer(bat_priv, hard_iface); + }
/** * NOTE: packet_buff might just have been re-allocated in - * rebuild_batman_packet() + * prepare_packet_buffer() or in reset_packet_buffer() */ batman_packet = (struct batman_packet *)hard_iface->packet_buff;
@@ -281,6 +320,9 @@ void schedule_own_packet(struct hard_iface *hard_iface) batman_packet->seqno = htonl((uint32_t)atomic_read(&hard_iface->seqno));
+ batman_packet->ttvn = atomic_read(&bat_priv->ttvn); + batman_packet->tt_crc = htons((uint16_t)atomic_read(&bat_priv->tt_crc)); + if (vis_server == VIS_TYPE_SERVER_SYNC) batman_packet->flags |= VIS_SERVER; else @@ -309,13 +351,14 @@ void schedule_own_packet(struct hard_iface *hard_iface) void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int tt_buff_len, + uint8_t directlink, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); struct neigh_node *router; unsigned char in_tq, in_ttl, tq_avg = 0; unsigned long send_time; + uint8_t tt_num_changes;
if (batman_packet->ttl <= 1) { bat_dbg(DBG_BATMAN, bat_priv, "ttl exceeded\n"); @@ -326,6 +369,7 @@ void schedule_forward_packet(struct orig_node *orig_node,
in_tq = batman_packet->tq; in_ttl = batman_packet->ttl; + tt_num_changes = batman_packet->tt_num_changes;
batman_packet->ttl--; memcpy(batman_packet->prev_sender, ethhdr->h_source, ETH_ALEN); @@ -358,6 +402,7 @@ void schedule_forward_packet(struct orig_node *orig_node, batman_packet->ttl);
batman_packet->seqno = htonl(batman_packet->seqno); + batman_packet->tt_crc = htons(batman_packet->tt_crc);
/* switch of primaries first hop flag when forwarding */ batman_packet->flags &= ~PRIMARIES_FIRST_HOP; @@ -369,7 +414,8 @@ void schedule_forward_packet(struct orig_node *orig_node, send_time = forward_send_time(); add_bat_packet_to_list(bat_priv, (unsigned char *)batman_packet, - sizeof(struct batman_packet) + tt_buff_len, + sizeof(struct batman_packet) + + tt_len(tt_num_changes), if_incoming, 0, send_time); }
diff --git a/send.h b/send.h index 247172d..842f4d1 100644 --- a/send.h +++ b/send.h @@ -29,7 +29,7 @@ void schedule_own_packet(struct hard_iface *hard_iface); void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int tt_buff_len, + uint8_t directlink, struct hard_iface *if_outgoing); int add_bcast_packet_to_list(struct bat_priv *bat_priv, struct sk_buff *skb); void send_outstanding_bat_packet(struct work_struct *work); diff --git a/soft-interface.c b/soft-interface.c index c76a33e..5c34bcc 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -542,7 +542,7 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) /* only modify transtable if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { tt_local_remove(bat_priv, dev->dev_addr, - "mac address changed"); + "mac address changed"); tt_local_add(dev, addr->sa_data); }
@@ -600,7 +600,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) if (curr_softif_neigh) goto dropped;
- /* TODO: check this for locks */ + /* Register the client MAC in the transtable */ tt_local_add(soft_iface, ethhdr->h_source);
if (is_multicast_ether_addr(ethhdr->h_dest)) { @@ -839,7 +839,12 @@ struct net_device *softif_create(char *name)
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1); - atomic_set(&bat_priv->tt_local_changed, 0); + atomic_set(&bat_priv->ttvn, 0); + atomic_set(&bat_priv->tt_local_changes, 0); + atomic_set(&bat_priv->tt_ogm_append_cnt, 0); + + bat_priv->tt_buff = NULL; + bat_priv->tt_buff_len = 0;
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index 7b72966..66b6bf7 100644 --- a/translation-table.c +++ b/translation-table.c @@ -23,13 +23,17 @@ #include "translation-table.h" #include "soft-interface.h" #include "hard-interface.h" +#include "send.h" #include "hash.h" #include "originator.h" +#include "routing.h"
-static void tt_local_purge(struct work_struct *work); -static void _tt_global_del_orig(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - char *message); +#include <linux/crc16.h> + +static void _tt_global_del(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, + char *message); +static void tt_purge(struct work_struct *work);
/* returns 1 if they are the same mac addr */ static int compare_ltt(struct hlist_node *node, void *data2) @@ -47,14 +51,15 @@ static int compare_gtt(struct hlist_node *node, void *data2) return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
-static void tt_local_start_timer(struct bat_priv *bat_priv) +static void tt_start_timer(struct bat_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->tt_work, tt_local_purge); - queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, 10 * HZ); + INIT_DELAYED_WORK(&bat_priv->tt_work, tt_purge); + queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, + msecs_to_jiffies(5000)); }
static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, - void *data) + void *data) { struct hashtable_t *hash = bat_priv->tt_local_hash; struct hlist_head *head; @@ -82,7 +87,7 @@ static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, }
static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, - void *data) + void *data) { struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_head *head; @@ -110,7 +115,42 @@ static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, return tt_global_entry_tmp; }
-int tt_local_init(struct bat_priv *bat_priv) +static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) +{ + unsigned long deadline; + deadline = starting_time + msecs_to_jiffies(timeout); + + return time_after(jiffies, deadline); +} + +static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr) +{ + struct tt_change_node *tt_change_node; + + tt_change_node = (struct tt_change_node *) + kmalloc(sizeof(struct tt_change_node), GFP_ATOMIC); + + if (!tt_change_node) + return; + + tt_change_node->change.flags = op; + memcpy(tt_change_node->change.addr, addr, ETH_ALEN); + + spin_lock_bh(&bat_priv->tt_changes_list_lock); + /* track the change in the OGMinterval list */ + list_add_tail(&tt_change_node->list, &bat_priv->tt_changes_list); + atomic_inc(&bat_priv->tt_local_changes); + spin_unlock_bh(&bat_priv->tt_changes_list_lock); + + atomic_set(&bat_priv->tt_ogm_append_cnt, 0); +} + +int tt_len(int changes_num) +{ + return changes_num * sizeof(struct tt_change); +} + +static int tt_local_init(struct bat_priv *bat_priv) { if (bat_priv->tt_local_hash) return 1; @@ -120,9 +160,6 @@ int tt_local_init(struct bat_priv *bat_priv) if (!bat_priv->tt_local_hash) return 0;
- atomic_set(&bat_priv->tt_local_changed, 0); - tt_local_start_timer(bat_priv); - return 1; }
@@ -131,40 +168,24 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) struct bat_priv *bat_priv = netdev_priv(soft_iface); struct tt_local_entry *tt_local_entry; struct tt_global_entry *tt_global_entry; - int required_bytes;
spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->tt_lhash_lock);
if (tt_local_entry) { tt_local_entry->last_seen = jiffies; - return; - } - - /* only announce as many hosts as possible in the batman-packet and - space in batman_packet->num_tt That also should give a limit to - MAC-flooding. */ - required_bytes = (bat_priv->num_local_tt + 1) * ETH_ALEN; - required_bytes += BAT_PACKET_LEN; - - if ((required_bytes > ETH_DATA_LEN) || - (atomic_read(&bat_priv->aggregated_ogms) && - required_bytes > MAX_AGGREGATION_BYTES) || - (bat_priv->num_local_tt + 1 > 255)) { - bat_dbg(DBG_ROUTES, bat_priv, - "Can't add new local tt entry (%pM): " - "number of local tt entries exceeds packet size\n", - addr); - return; + goto unlock; }
- bat_dbg(DBG_ROUTES, bat_priv, - "Creating new local tt entry: %pM\n", addr); - tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); if (!tt_local_entry) - return; + goto unlock; + + tt_local_event(bat_priv, TT_CHANGE_ADD, addr); + + bat_dbg(DBG_TT, bat_priv, + "Creating new local tt entry: %pM (ttvn: %d)\n", addr, + (uint8_t)atomic_read(&bat_priv->ttvn));
memcpy(tt_local_entry->addr, addr, ETH_ALEN); tt_local_entry->last_seen = jiffies; @@ -175,13 +196,9 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) else tt_local_entry->never_purge = 0;
- spin_lock_bh(&bat_priv->tt_lhash_lock); - hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry, &tt_local_entry->hash_entry); - bat_priv->num_local_tt++; - atomic_set(&bat_priv->tt_local_changed, 1); - + atomic_inc(&bat_priv->num_local_tt); spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ @@ -190,46 +207,60 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (tt_global_entry) - _tt_global_del_orig(bat_priv, tt_global_entry, - "local tt received"); + _tt_global_del(bat_priv, tt_global_entry, + "local tt received");
spin_unlock_bh(&bat_priv->tt_ghash_lock); + return; +unlock: + spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-int tt_local_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len) +int tt_changes_fill_buffer(struct bat_priv *bat_priv, + unsigned char *buff, int buff_len) { - struct hashtable_t *hash = bat_priv->tt_local_hash; - struct tt_local_entry *tt_local_entry; - struct hlist_node *node; - struct hlist_head *head; - int i, count = 0; + int count = 0, tot_changes = 0; + struct tt_change_node *entry, *safe;
- spin_lock_bh(&bat_priv->tt_lhash_lock); - - for (i = 0; i < hash->size; i++) { - head = &hash->table[i]; - - rcu_read_lock(); - hlist_for_each_entry_rcu(tt_local_entry, node, - head, hash_entry) { - if (buff_len < (count + 1) * ETH_ALEN) - break; + if (buff_len > 0) + tot_changes = buff_len / tt_len(1);
- memcpy(buff + (count * ETH_ALEN), tt_local_entry->addr, - ETH_ALEN); + spin_lock_bh(&bat_priv->tt_changes_list_lock); + atomic_set(&bat_priv->tt_local_changes, 0);
+ list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, + list) { + if (count < tot_changes) { + memcpy(buff + tt_len(count), + &entry->change, sizeof(struct tt_change)); count++; } - rcu_read_unlock(); + list_del(&entry->list); + kfree(entry); } + spin_unlock_bh(&bat_priv->tt_changes_list_lock); + + /* Keep the buffer for possible tt_request */ + spin_lock_bh(&bat_priv->tt_buff_lock); + kfree(bat_priv->tt_buff); + bat_priv->tt_buff_len = 0; + bat_priv->tt_buff = NULL; + /* We check whether this new OGM has no changes due to size + * problems */ + if (buff_len > 0) { + /** + * if kmalloc() fails we will reply with the full table + * instead of providing the diff + */ + bat_priv->tt_buff = kmalloc(buff_len, GFP_ATOMIC); + if (bat_priv->tt_buff) { + memcpy(bat_priv->tt_buff, buff, buff_len); + bat_priv->tt_buff_len = buff_len; + } + } + spin_unlock_bh(&bat_priv->tt_buff_lock);
- /* if we did not get all new local tts see you next time ;-) */ - if (count == bat_priv->num_local_tt) - atomic_set(&bat_priv->tt_local_changed, 0); - - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return count; + return tot_changes; }
int tt_local_seq_print_text(struct seq_file *seq, void *offset) @@ -261,8 +292,8 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) }
seq_printf(seq, "Locally retrieved addresses (from %s) " - "announced via TT:\n", - net_dev->name); + "announced via TT (TTVN: %u):\n", + net_dev->name, (uint8_t)atomic_read(&bat_priv->ttvn));
spin_lock_bh(&bat_priv->tt_lhash_lock);
@@ -309,54 +340,50 @@ out: return ret; }
-static void _tt_local_del(struct hlist_node *node, void *arg) +static void tt_local_entry_free(struct hlist_node *node, void *arg) { struct bat_priv *bat_priv = (struct bat_priv *)arg; void *data = container_of(node, struct tt_local_entry, hash_entry);
kfree(data); - bat_priv->num_local_tt--; - atomic_set(&bat_priv->tt_local_changed, 1); + atomic_dec(&bat_priv->num_local_tt); }
static void tt_local_del(struct bat_priv *bat_priv, - struct tt_local_entry *tt_local_entry, - char *message) + struct tt_local_entry *tt_local_entry, + char *message) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting local tt entry (%pM): %s\n", + bat_dbg(DBG_TT, bat_priv, "Deleting local tt entry (%pM): %s\n", tt_local_entry->addr, message);
+ atomic_dec(&bat_priv->num_local_tt); + hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry->addr); - _tt_local_del(&tt_local_entry->hash_entry, bat_priv); + + tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); }
-void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message) +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) { struct tt_local_entry *tt_local_entry;
spin_lock_bh(&bat_priv->tt_lhash_lock); - tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (tt_local_entry) + if (tt_local_entry) { + tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr); tt_local_del(bat_priv, tt_local_entry, message); - + } spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-static void tt_local_purge(struct work_struct *work) +static void tt_local_purge(struct bat_priv *bat_priv) { - struct delayed_work *delayed_work = - container_of(work, struct delayed_work, work); - struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, tt_work); struct hashtable_t *hash = bat_priv->tt_local_hash; struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; - unsigned long timeout; int i;
spin_lock_bh(&bat_priv->tt_lhash_lock); @@ -369,32 +396,53 @@ static void tt_local_purge(struct work_struct *work) if (tt_local_entry->never_purge) continue;
- timeout = tt_local_entry->last_seen; - timeout += TT_LOCAL_TIMEOUT * HZ; - - if (time_before(jiffies, timeout)) + if (!is_out_of_time(tt_local_entry->last_seen, + TT_LOCAL_TIMEOUT * 1000)) continue;
+ tt_local_event(bat_priv, TT_CHANGE_DEL, + tt_local_entry->addr); tt_local_del(bat_priv, tt_local_entry, - "address timed out"); + "address timed out"); } }
spin_unlock_bh(&bat_priv->tt_lhash_lock); - tt_local_start_timer(bat_priv); }
-void tt_local_free(struct bat_priv *bat_priv) +static void tt_local_table_free(struct bat_priv *bat_priv) { + struct hashtable_t *hash; + int i; + spinlock_t *list_lock; /* protects write access to the hash lists */ + struct hlist_head *head; + struct hlist_node *node, *node_tmp; + struct tt_local_entry *tt_local_entry; + if (!bat_priv->tt_local_hash) return;
- cancel_delayed_work_sync(&bat_priv->tt_work); - hash_delete(bat_priv->tt_local_hash, _tt_local_del, bat_priv); + hash = bat_priv->tt_local_hash; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + list_lock = &hash->list_locks[i]; + + spin_lock_bh(list_lock); + hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, + head, hash_entry) { + hlist_del_rcu(node); + kfree(tt_local_entry); + } + spin_unlock_bh(list_lock); + } + + hash_destroy(hash); + bat_priv->tt_local_hash = NULL; }
-int tt_global_init(struct bat_priv *bat_priv) +static int tt_global_init(struct bat_priv *bat_priv) { if (bat_priv->tt_global_hash) return 1; @@ -407,74 +455,79 @@ int tt_global_init(struct bat_priv *bat_priv) return 1; }
-void tt_global_add_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len) +static void tt_changes_list_free(struct bat_priv *bat_priv) { - struct tt_global_entry *tt_global_entry; - struct tt_local_entry *tt_local_entry; - int tt_buff_count = 0; - unsigned char *tt_ptr; - - while ((tt_buff_count + 1) * ETH_ALEN <= tt_buff_len) { - spin_lock_bh(&bat_priv->tt_ghash_lock); - - tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); - tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr); - - if (!tt_global_entry) { - spin_unlock_bh(&bat_priv->tt_ghash_lock); + struct tt_change_node *entry, *safe;
- tt_global_entry = - kmalloc(sizeof(struct tt_global_entry), - GFP_ATOMIC); + spin_lock_bh(&bat_priv->tt_changes_list_lock);
- if (!tt_global_entry) - break; - - memcpy(tt_global_entry->addr, tt_ptr, ETH_ALEN); - - bat_dbg(DBG_ROUTES, bat_priv, - "Creating new global tt entry: " - "%pM (via %pM)\n", - tt_global_entry->addr, orig_node->orig); + list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, + list) { + list_del(&entry->list); + kfree(entry); + }
- spin_lock_bh(&bat_priv->tt_ghash_lock); - hash_add(bat_priv->tt_global_hash, compare_gtt, - choose_orig, tt_global_entry, - &tt_global_entry->hash_entry); + atomic_set(&bat_priv->tt_local_changes, 0); + spin_unlock_bh(&bat_priv->tt_changes_list_lock); +}
- } +/* caller must hold orig_node refcount */ +int tt_global_add(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *tt_addr, uint8_t ttvn) +{ + struct tt_global_entry *tt_global_entry; + struct tt_local_entry *tt_local_entry; + struct orig_node *orig_node_tmp;
+ spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, tt_addr); + + if (!tt_global_entry) { + tt_global_entry = + kmalloc(sizeof(struct tt_global_entry), + GFP_ATOMIC); + if (!tt_global_entry) + goto unlock; + memcpy(tt_global_entry->addr, tt_addr, ETH_ALEN); + /* Assign the new orig_node */ + atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - /* remove address from local hash if present */ - spin_lock_bh(&bat_priv->tt_lhash_lock); - - tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); - tt_local_entry = tt_local_hash_find(bat_priv, tt_ptr); - - if (tt_local_entry) - tt_local_del(bat_priv, tt_local_entry, - "global tt received"); + tt_global_entry->ttvn = ttvn; + atomic_inc(&orig_node->tt_size); + hash_add(bat_priv->tt_global_hash, compare_gtt, + choose_orig, tt_global_entry, + &tt_global_entry->hash_entry); + } else { + if (tt_global_entry->orig_node != orig_node) { + atomic_dec(&tt_global_entry->orig_node->tt_size); + orig_node_tmp = tt_global_entry->orig_node; + atomic_inc(&orig_node->refcount); + tt_global_entry->orig_node = orig_node; + tt_global_entry->ttvn = ttvn; + orig_node_free_ref(orig_node_tmp); + atomic_inc(&orig_node->tt_size); + } + }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- tt_buff_count++; - } + bat_dbg(DBG_TT, bat_priv, + "Creating new global tt entry: %pM (via %pM)\n", + tt_global_entry->addr, orig_node->orig);
- /* initialize, and overwrite if malloc succeeds */ - orig_node->tt_buff = NULL; - orig_node->tt_buff_len = 0; + /* remove address from local hash if present */ + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = tt_local_hash_find(bat_priv, tt_addr);
- if (tt_buff_len > 0) { - orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); - if (orig_node->tt_buff) { - memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); - orig_node->tt_buff_len = tt_buff_len; - } - } + if (tt_local_entry) + tt_local_del(bat_priv, tt_local_entry, + "global tt received"); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + return 1; +unlock: + spin_unlock_bh(&bat_priv->tt_ghash_lock); + return 0; }
int tt_global_seq_print_text(struct seq_file *seq, void *offset) @@ -508,17 +561,20 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) seq_printf(seq, "Globally announced TT entries received via the mesh %s\n", net_dev->name); + seq_printf(seq, " %-13s %s %-15s %s\n", + "Client", "(TTVN)", "Originator", "(Curr TTVN)");
spin_lock_bh(&bat_priv->tt_ghash_lock);
buf_size = 1; - /* Estimate length for: " * xx:xx:xx:xx:xx:xx via xx:xx:xx:xx:xx:xx\n"*/ + /* Estimate length for: " * xx:xx:xx:xx:xx:xx (ttvn) via + * xx:xx:xx:xx:xx:xx (cur_ttvn)\n"*/ for (i = 0; i < hash->size; i++) { head = &hash->table[i];
rcu_read_lock(); __hlist_for_each_rcu(node, head) - buf_size += 43; + buf_size += 59; rcu_read_unlock(); }
@@ -537,10 +593,14 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) rcu_read_lock(); hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) { - pos += snprintf(buff + pos, 44, - " * %pM via %pM\n", + pos += snprintf(buff + pos, 61, + " * %pM (%3u) via %pM (%3u)\n", tt_global_entry->addr, - tt_global_entry->orig_node->orig); + tt_global_entry->ttvn, + tt_global_entry->orig_node->orig, + (uint8_t) atomic_read( + &tt_global_entry->orig_node-> + last_ttvn)); } rcu_read_unlock(); } @@ -555,64 +615,80 @@ out: return ret; }
-static void _tt_global_del_orig(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - char *message) +static void _tt_global_del(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, + char *message) { - bat_dbg(DBG_ROUTES, bat_priv, + if (!tt_global_entry) + return; + + bat_dbg(DBG_TT, bat_priv, "Deleting global tt entry %pM (via %pM): %s\n", tt_global_entry->addr, tt_global_entry->orig_node->orig, message);
+ atomic_dec(&tt_global_entry->orig_node->tt_size); hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry->addr); kfree(tt_global_entry); }
+void tt_global_del(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *addr, char *message) +{ + struct tt_global_entry *tt_global_entry; + + spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, addr); + + if (tt_global_entry && tt_global_entry->orig_node == orig_node) { + atomic_dec(&orig_node->tt_size); + _tt_global_del(bat_priv, tt_global_entry, message); + } + spin_unlock_bh(&bat_priv->tt_ghash_lock); +} + void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, char *message) + struct orig_node *orig_node, char *message) { struct tt_global_entry *tt_global_entry; - int tt_buff_count = 0; - unsigned char *tt_ptr; + int i; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct hlist_node *node, *safe; + struct hlist_head *head;
- if (orig_node->tt_buff_len == 0) + if (!bat_priv->tt_global_hash) return;
spin_lock_bh(&bat_priv->tt_ghash_lock); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i];
- while ((tt_buff_count + 1) * ETH_ALEN <= orig_node->tt_buff_len) { - tt_ptr = orig_node->tt_buff + (tt_buff_count * ETH_ALEN); - tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr); - - if ((tt_global_entry) && - (tt_global_entry->orig_node == orig_node)) - _tt_global_del_orig(bat_priv, tt_global_entry, - message); - - tt_buff_count++; + hlist_for_each_entry_safe(tt_global_entry, node, safe, + head, hash_entry) { + if (tt_global_entry->orig_node == orig_node) + _tt_global_del(bat_priv, tt_global_entry, + message); + } } + atomic_set(&orig_node->tt_size, 0);
spin_unlock_bh(&bat_priv->tt_ghash_lock); - - orig_node->tt_buff_len = 0; - kfree(orig_node->tt_buff); - orig_node->tt_buff = NULL; }
-static void tt_global_del(struct hlist_node *node, void *arg) +static void tt_global_entry_free(struct hlist_node *node, void *arg) { void *data = container_of(node, struct tt_global_entry, hash_entry); - kfree(data); }
-void tt_global_free(struct bat_priv *bat_priv) +static void tt_global_table_free(struct bat_priv *bat_priv) { if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->tt_global_hash, tt_global_del, NULL); + hash_delete(bat_priv->tt_global_hash, tt_global_entry_free, NULL); bat_priv->tt_global_hash = NULL; }
@@ -636,3 +712,692 @@ out: spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; } + +/* Calculates the checksum of the local table of a given orig_node */ +uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node) +{ + uint16_t total = 0, total_one; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct tt_global_entry *tt_global_entry; + struct hlist_node *node; + struct hlist_head *head; + int i, j; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_global_entry, node, + head, hash_entry) { + if (compare_eth(tt_global_entry->orig_node, + orig_node)) { + total_one = 0; + for (j = 0; j < ETH_ALEN; j++) + total_one = crc16_byte(total_one, + tt_global_entry->addr[j]); + total ^= total_one; + } + } + rcu_read_unlock(); + } + + return total; +} + +/* Calculates the checksum of the local table */ +uint16_t tt_local_crc(struct bat_priv *bat_priv) +{ + uint16_t total = 0, total_one; + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; + struct hlist_node *node; + struct hlist_head *head; + int i, j; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_local_entry, node, + head, hash_entry) { + total_one = 0; + for (j = 0; j < ETH_ALEN; j++) + total_one = crc16_byte(total_one, + tt_local_entry->addr[j]); + total ^= total_one; + } + + rcu_read_unlock(); + } + + return total; +} + +static void tt_req_list_free(struct bat_priv *bat_priv) +{ + struct tt_req_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + list_del(&node->list); + kfree(node); + } + + spin_unlock_bh(&bat_priv->tt_req_list_lock); +} + +void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes) +{ + uint16_t tt_buff_len = tt_len(tt_num_changes); + + /* Replace the old buffer only if I received something in the + * last OGM (the OGM could carry no changes) */ + spin_lock_bh(&orig_node->tt_buff_lock); + if (tt_buff_len > 0) { + kfree(orig_node->tt_buff); + orig_node->tt_buff_len = 0; + orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); + if (orig_node->tt_buff) { + memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); + orig_node->tt_buff_len = tt_buff_len; + } + } + spin_unlock_bh(&orig_node->tt_buff_lock); +} + +static void tt_req_purge(struct bat_priv *bat_priv) +{ + struct tt_req_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + if (is_out_of_time(node->issued_at, + TT_REQUEST_TIMEOUT * 1000)) { + list_del(&node->list); + kfree(node); + } + } + spin_unlock_bh(&bat_priv->tt_req_list_lock); +} + +/* returns the pointer to the new tt_req_node struct if no request + * has already been issued for this orig_node, NULL otherwise */ +static struct tt_req_node *new_tt_req_node(struct bat_priv *bat_priv, + struct orig_node *orig_node) +{ + struct tt_req_node *tt_req_node_tmp, *tt_req_node = NULL; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry(tt_req_node_tmp, &bat_priv->tt_req_list, list) { + if (compare_eth(tt_req_node_tmp, orig_node) && + !is_out_of_time(tt_req_node_tmp->issued_at, + TT_REQUEST_TIMEOUT * 1000)) + goto unlock; + } + + tt_req_node = kmalloc(sizeof(struct tt_req_node), GFP_ATOMIC); + if (!tt_req_node) + goto unlock; + + memcpy(tt_req_node->addr, 
orig_node->orig, ETH_ALEN); + tt_req_node->issued_at = jiffies; + + list_add(&tt_req_node->list, &bat_priv->tt_req_list); +unlock: + spin_unlock_bh(&bat_priv->tt_req_list_lock); + return tt_req_node; +} + +int send_tt_request(struct bat_priv *bat_priv, struct orig_node *dst_orig_node, + uint8_t ttvn, uint16_t tt_crc, bool full_table) +{ + struct sk_buff *skb; + struct tt_query_packet *tt_request; + struct neigh_node *neigh_node = NULL; + struct hard_iface *primary_if; + struct tt_req_node *tt_req_node; + int ret = 0; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + /* The new tt_req will be issued only if I'm not waiting for a + * reply from the same orig_node yet */ + tt_req_node = new_tt_req_node(bat_priv, dst_orig_node); + if (!tt_req_node) + goto out; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + + tt_request = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet)); + + tt_request->packet_type = BAT_TT_QUERY; + tt_request->version = COMPAT_VERSION; + memcpy(tt_request->src, primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(tt_request->dst, dst_orig_node->orig, ETH_ALEN); + tt_request->ttl = TTL; + tt_request->ttvn = ttvn; + tt_request->tt_data = tt_crc; + tt_request->flags = TT_REQUEST; + + if (full_table) + tt_request->flags |= TT_FULL_TABLE; + + neigh_node = find_router(bat_priv, dst_orig_node, NULL); + if (!neigh_node) + goto out; + + bat_dbg(DBG_TT, bat_priv, "Sending TT_REQUEST to %pM via %pM " + "[%c]\n", dst_orig_node->orig, neigh_node->addr, + (full_table ? 'F' : '.')); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = 0; + +out: + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (ret) { + kfree_skb(skb); + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_del(&tt_req_node->list); + spin_unlock_bh(&bat_priv->tt_req_list_lock); + kfree(tt_req_node); + } + return ret; +} + +static bool send_other_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + struct orig_node *req_dst_orig_node = NULL, *res_dst_orig_node = NULL; + struct neigh_node *neigh_node = NULL; + struct hard_iface *primary_if = NULL; + struct tt_global_entry *tt_global_entry; + struct hlist_node *node; + struct hlist_head *head; + struct hashtable_t *hash; + uint8_t orig_ttvn, req_ttvn; + int i, ret = false; + unsigned char *tt_buff; + bool full_table; + uint16_t tt_len, tt_tot, tt_count; + struct sk_buff *skb = NULL; + struct tt_query_packet *tt_response; + + bat_dbg(DBG_TT, bat_priv, + "Received TT_REQUEST from %pM for " + "ttvn: %u (%pM) [%c]\n", tt_request->src, + tt_request->ttvn, tt_request->dst, + (tt_request->flags & TT_FULL_TABLE ? 
'F' : '.')); + + /* Let's get the orig node of the REAL destination */ + req_dst_orig_node = get_orig_node(bat_priv, tt_request->dst); + if (!req_dst_orig_node) + goto out; + + res_dst_orig_node = get_orig_node(bat_priv, tt_request->src); + if (!res_dst_orig_node) + goto out; + + neigh_node = find_router(bat_priv, res_dst_orig_node, NULL); + if (!neigh_node) + goto out; + + if (neigh_node->if_incoming->if_status != IF_ACTIVE) + goto out; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + orig_ttvn = (uint8_t)atomic_read(&req_dst_orig_node->last_ttvn); + req_ttvn = tt_request->ttvn; + + /* I have not the requested data */ + if (orig_ttvn != req_ttvn || + tt_request->tt_data != req_dst_orig_node->tt_crc) + goto out; + + /* If it has explicitly been requested the full table */ + if (tt_request->flags & TT_FULL_TABLE || + !req_dst_orig_node->tt_buff) + full_table = true; + else + full_table = false; + + /* In this version, fragmentation is not implemented, then + * I'll send only one packet with as much TT entries as I can */ + if (!full_table) { + spin_lock_bh(&req_dst_orig_node->tt_buff_lock); + tt_len = req_dst_orig_node->tt_buff_len; + tt_tot = tt_len / sizeof(struct tt_change); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto unlock; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = req_ttvn; + + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Copy the last orig_node's OGM buffer */ + memcpy(tt_buff, req_dst_orig_node->tt_buff, + req_dst_orig_node->tt_buff_len); + + spin_unlock_bh(&req_dst_orig_node->tt_buff_lock); + } else { + tt_len = (uint16_t)atomic_read(&req_dst_orig_node->tt_size) * + ETH_ALEN; + if (sizeof(struct tt_query_packet) + tt_len > + primary_if->soft_iface->mtu) { + tt_len = primary_if->soft_iface->mtu - + sizeof(struct tt_query_packet); + tt_len -= tt_len % ETH_ALEN; + } + tt_tot = tt_len / ETH_ALEN; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = (uint8_t) + atomic_read(&req_dst_orig_node->last_ttvn); + + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Fill the packet with the orig_node's local table */ + hash = bat_priv->tt_global_hash; + tt_count = 0; + rcu_read_lock(); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + hlist_for_each_entry_rcu(tt_global_entry, node, + head, hash_entry) { + if (tt_count == tt_tot) + break; + if (tt_global_entry->orig_node == + req_dst_orig_node) { + memcpy(tt_buff + tt_count * ETH_ALEN, + tt_global_entry->addr, + ETH_ALEN); + tt_count++; + } + } + } + rcu_read_unlock(); + } + + tt_response->packet_type = BAT_TT_QUERY; + tt_response->version = COMPAT_VERSION; + memcpy(tt_response->src, req_dst_orig_node->orig, ETH_ALEN); + memcpy(tt_response->dst, tt_request->src, ETH_ALEN); + tt_response->tt_data = htons(tt_tot); + tt_response->flags = TT_RESPONSE; + + if (full_table) + tt_response->flags |= TT_FULL_TABLE; + + bat_dbg(DBG_TT, bat_priv, + "Sending TT_RESPONSE %pM via %pM for %pM (ttvn: %u)\n", + res_dst_orig_node->orig, neigh_node->addr, + req_dst_orig_node->orig, req_ttvn); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = true; + goto out; + +unlock: + 
spin_unlock_bh(&req_dst_orig_node->tt_buff_lock); + +out: + if (res_dst_orig_node) + orig_node_free_ref(res_dst_orig_node); + if (req_dst_orig_node) + orig_node_free_ref(req_dst_orig_node); + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (!ret) + kfree(skb); + return ret; + +} +static bool send_my_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + struct orig_node *orig_node = NULL; + struct neigh_node *neigh_node = NULL; + struct tt_local_entry *tt_local_entry; + struct hard_iface *primary_if = NULL; + struct hlist_node *node; + struct hlist_head *head; + struct hashtable_t *hash; + uint8_t my_ttvn, req_ttvn; + int i, ret = false; + unsigned char *tt_buff; + bool full_table; + uint16_t tt_len, tt_tot, tt_count; + struct sk_buff *skb = NULL; + struct tt_query_packet *tt_response; + + bat_dbg(DBG_TT, bat_priv, + "Received TT_REQUEST from %pM for " + "ttvn: %u (me) [%c]\n", tt_request->src, + tt_request->ttvn, + (tt_request->flags & TT_FULL_TABLE ? 'F' : '.')); + + + my_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); + req_ttvn = tt_request->ttvn; + + orig_node = get_orig_node(bat_priv, tt_request->src); + if (!orig_node) + goto out; + + neigh_node = find_router(bat_priv, orig_node, NULL); + if (!neigh_node) + goto out; + + if (neigh_node->if_incoming->if_status != IF_ACTIVE) + goto out; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + /* If the full table has been explicitly requested or the gap + * is too big send the whole local translation table */ + if (tt_request->flags & TT_FULL_TABLE || my_ttvn != req_ttvn || + !bat_priv->tt_buff) + full_table = true; + else + full_table = false; + + /* In this version, fragmentation is not implemented, then + * I'll send only one packet with as much TT entries as I can */ + if (!full_table) { + spin_lock_bh(&bat_priv->tt_buff_lock); + tt_len = bat_priv->tt_buff_len; + tt_tot = tt_len / sizeof(struct tt_change); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto unlock; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = req_ttvn; + + tt_buff = skb->data + sizeof(struct tt_query_packet); + memcpy(tt_buff, bat_priv->tt_buff, + bat_priv->tt_buff_len); + spin_unlock_bh(&bat_priv->tt_buff_lock); + } else { + tt_len = (uint16_t)atomic_read(&bat_priv->num_local_tt) * + ETH_ALEN; + if (sizeof(struct tt_query_packet) + tt_len > + bat_priv->primary_if->soft_iface->mtu) { + tt_len = bat_priv->primary_if->soft_iface->mtu - + sizeof(struct tt_query_packet); + tt_len -= tt_len % ETH_ALEN; + } + tt_tot = tt_len / ETH_ALEN; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Fill the packet with the local table */ + tt_response->ttvn = + (uint8_t)atomic_read(&bat_priv->ttvn); + + hash = bat_priv->tt_local_hash; + tt_count = 0; + rcu_read_lock(); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + hlist_for_each_entry_rcu(tt_local_entry, node, + head, hash_entry) { + if (tt_count == tt_tot) + break; + memcpy(tt_buff + tt_count * ETH_ALEN, + tt_local_entry->addr, + ETH_ALEN); + tt_count++; + } + } + rcu_read_unlock(); + } + + 
tt_response->packet_type = BAT_TT_QUERY; + tt_response->version = COMPAT_VERSION; + memcpy(tt_response->src, primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(tt_response->dst, tt_request->src, ETH_ALEN); + tt_response->tt_data = htons(tt_tot); + tt_response->flags = TT_RESPONSE; + + if (full_table) + tt_response->flags |= TT_FULL_TABLE; + + bat_dbg(DBG_TT, bat_priv, + "Sending TT_RESPONSE to %pM via %pM [%c]\n", + orig_node->orig, neigh_node->addr, + (tt_response->flags & TT_FULL_TABLE ? 'F' : '.')); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = true; + goto out; + +unlock: + spin_unlock_bh(&bat_priv->tt_buff_lock); +out: + if (orig_node) + orig_node_free_ref(orig_node); + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (!ret) + kfree(skb); + /* This packet was for me, so it doesn't need to be re-routed */ + return true; +} + +bool send_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + if (is_my_mac(tt_request->dst)) + return send_my_tt_response(bat_priv, tt_request); + else + return send_other_tt_response(bat_priv, tt_request); +} + +/* Substitute the TT response source's table with the newone carried by the + * packet */ +static void _tt_fill_gtable(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *tt_buff, + uint16_t table_size, uint8_t ttvn) +{ + int count; + unsigned char *tt_ptr; + + for (count = 0; count < table_size; count++) { + tt_ptr = tt_buff + (count * ETH_ALEN); + + /* If we fail to allocate a new entry we return immediatly */ + if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn)) + return; + } + atomic_set(&orig_node->last_ttvn, ttvn); +} + +static void tt_fill_gtable(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response) +{ + struct orig_node *orig_node = NULL; + + orig_node = orig_hash_find(bat_priv, tt_response->src); + if (!orig_node) + goto out; + + /* Purge the old table first.. 
*/ + tt_global_del_orig(bat_priv, orig_node, "Received full table"); + + _tt_fill_gtable(bat_priv, orig_node, + ((unsigned char *)tt_response) + + sizeof(struct tt_query_packet), + tt_response->tt_data, + tt_response->ttvn); + + spin_lock_bh(&orig_node->tt_buff_lock); + kfree(orig_node->tt_buff); + orig_node->tt_buff_len = 0; + orig_node->tt_buff = NULL; + spin_unlock_bh(&orig_node->tt_buff_lock); + +out: + if (orig_node) + orig_node_free_ref(orig_node); +} + +void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, + uint16_t tt_num_changes, uint8_t ttvn, + struct tt_change *tt_change) +{ + int i; + + for (i = 0; i < tt_num_changes; i++) { + if ((tt_change + i)->flags & TT_CHANGE_DEL) + tt_global_del(bat_priv, orig_node, + (tt_change + i)->addr, + "tt removed by changes"); + else + if (!tt_global_add(bat_priv, orig_node, + (tt_change + i)->addr, ttvn)) + return; + } + + tt_save_orig_buffer(bat_priv, orig_node, (unsigned char *)tt_change, + tt_num_changes); + atomic_set(&orig_node->last_ttvn, ttvn); +} + +bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr) +{ + struct tt_local_entry *tt_local_entry; + + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = tt_local_hash_find(bat_priv, addr); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + + if (tt_local_entry) + return true; + return false; +} + +void handle_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response) +{ + struct tt_req_node *node, *safe; + struct orig_node *orig_node = NULL; + + bat_dbg(DBG_TT, bat_priv, "Received TT_RESPONSE from %pM for " + "ttvn %d t_size: %d [%c]\n", + tt_response->src, tt_response->ttvn, + tt_response->tt_data, + (tt_response->flags & TT_FULL_TABLE ? 'F' : '.')); + + orig_node = orig_hash_find(bat_priv, tt_response->src); + if (!orig_node) + goto out; + + if (tt_response->flags & TT_FULL_TABLE) + tt_fill_gtable(bat_priv, tt_response); + else + tt_update_changes(bat_priv, orig_node, tt_response->tt_data, + tt_response->ttvn, + (struct tt_change *)(tt_response + 1)); + + /* Delete the tt_req_node from pending tt_requests list */ + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + if (!compare_eth(node->addr, tt_response->src)) + continue; + list_del(&node->list); + kfree(node); + } + spin_unlock_bh(&bat_priv->tt_req_list_lock); + + /* Recalculate the CRC for this orig_node and store it */ + spin_lock_bh(&bat_priv->tt_ghash_lock); + orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + spin_unlock_bh(&bat_priv->tt_ghash_lock); +out: + if (orig_node) + orig_node_free_ref(orig_node); +} + +int tt_init(struct bat_priv *bat_priv) +{ + if (!tt_local_init(bat_priv)) + return 0; + + if (!tt_global_init(bat_priv)) + return 0; + + tt_start_timer(bat_priv); + + return 1; +} + +void tt_free(struct bat_priv *bat_priv) +{ + cancel_delayed_work_sync(&bat_priv->tt_work); + + tt_local_table_free(bat_priv); + tt_global_table_free(bat_priv); + tt_req_list_free(bat_priv); + tt_changes_list_free(bat_priv); + + kfree(bat_priv->tt_buff); +} + +static void tt_purge(struct work_struct *work) +{ + struct delayed_work *delayed_work = + container_of(work, struct delayed_work, work); + struct bat_priv *bat_priv = + container_of(delayed_work, struct bat_priv, tt_work); + + tt_local_purge(bat_priv); + tt_req_purge(bat_priv); + + tt_start_timer(bat_priv); +} diff --git a/translation-table.h b/translation-table.h index 46152c3..f203b49 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,22 +22,44 @@ 
#ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
-int tt_local_init(struct bat_priv *bat_priv); +int tt_len(int changes_num); +void tt_changes_primary_if(struct bat_priv *bat_priv, uint8_t *old_addr, + uint8_t *new_addr); +int tt_changes_fill_buffer(struct bat_priv *bat_priv, + unsigned char *buff, int buff_len); +int tt_init(struct bat_priv *bat_priv); void tt_local_add(struct net_device *soft_iface, uint8_t *addr); void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message); -int tt_local_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len); + uint8_t *addr, char *message); int tt_local_seq_print_text(struct seq_file *seq, void *offset); -void tt_local_free(struct bat_priv *bat_priv); -int tt_global_init(struct bat_priv *bat_priv); void tt_global_add_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len); + struct orig_node *orig_node, + unsigned char *tt_buff, int tt_buff_len); +int tt_global_add(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *addr, + uint8_t ttvn); int tt_global_seq_print_text(struct seq_file *seq, void *offset); void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, char *message); -void tt_global_free(struct bat_priv *bat_priv); + struct orig_node *orig_node, char *message); +void tt_global_del(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *addr, + char *message); struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr); +void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes); +uint16_t tt_local_crc(struct bat_priv *bat_priv); +uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node); +void tt_free(struct bat_priv *bat_priv); +int send_tt_request(struct bat_priv *bat_priv, + struct orig_node *dst_orig_node, uint8_t hvn, + uint16_t tt_crc, bool full_table); +bool send_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request); +void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, + uint16_t tt_num_changes, uint8_t ttvn, + struct tt_change *tt_change); +bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr); +void handle_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index fab70e8..0848fcc 100644 --- a/types.h +++ b/types.h @@ -75,8 +75,12 @@ struct orig_node { unsigned long batman_seqno_reset; uint8_t gw_flags; uint8_t flags; + atomic_t last_ttvn; /* last seen translation table version number */ + uint16_t tt_crc; unsigned char *tt_buff; int16_t tt_buff_len; + spinlock_t tt_buff_lock; /* protects tt_buff */ + atomic_t tt_size; uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -94,10 +98,16 @@ struct orig_node { spinlock_t ogm_cnt_lock; /* bcast_seqno_lock protects bcast_bits, last_bcast_seqno */ spinlock_t bcast_seqno_lock; + spinlock_t tt_list_lock; /* protects tt_list */ atomic_t bond_candidates; struct list_head bond_list; };
+struct tt_change { + uint8_t flags; + uint8_t addr[ETH_ALEN]; +}; + struct gw_node { struct hlist_node list; struct orig_node *orig_node; @@ -145,6 +155,9 @@ struct bat_priv { atomic_t bcast_seqno; atomic_t bcast_queue_left; atomic_t batman_queue_left; + atomic_t ttvn; /* tranlation table version number */ + atomic_t tt_ogm_append_cnt; + atomic_t tt_local_changes; /* changes registered in a OGM interval */ char num_ifaces; struct debug_log *debug_log; struct kobject *mesh_obj; @@ -153,22 +166,30 @@ struct bat_priv { struct hlist_head forw_bcast_list; struct hlist_head gw_list; struct hlist_head softif_neigh_vids; + struct list_head tt_changes_list; /* tracks changes in a OGM int */ struct list_head vis_send_list; struct hashtable_t *orig_hash; struct hashtable_t *tt_local_hash; struct hashtable_t *tt_global_hash; + struct list_head tt_req_list; /* list of pending tt_requests */ struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ + spinlock_t tt_changes_list_lock; /* protects tt_changes */ spinlock_t tt_lhash_lock; /* protects tt_local_hash */ spinlock_t tt_ghash_lock; /* protects tt_global_hash */ + spinlock_t tt_req_list_lock; /* protects tt_req_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ spinlock_t softif_neigh_lock; /* protects soft-interface neigh list */ spinlock_t softif_neigh_vid_lock; /* protects soft-interface vid list */ - int16_t num_local_tt; - atomic_t tt_local_changed; + atomic_t num_local_tt; + /* Checksum of the local table, recomputed before sending a new OGM */ + atomic_t tt_crc; + unsigned char *tt_buff; + int16_t tt_buff_len; + spinlock_t tt_buff_lock; /* protects tt_buff */ struct delayed_work tt_work; struct delayed_work orig_work; struct delayed_work vis_work; @@ -202,9 +223,22 @@ struct tt_local_entry { struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; + uint8_t ttvn; + /* entry in the global table */ struct hlist_node hash_entry; };
+struct tt_change_node { + struct list_head list; + struct tt_change change; +}; + +struct tt_req_node { + uint8_t addr[ETH_ALEN]; + unsigned long issued_at; + struct list_head list; +}; + /** * forw_packet - structure for forw_list maintaining packets to be * send/forwarded diff --git a/unicast.c b/unicast.c index 19c3daf..7a9c02c 100644 --- a/unicast.c +++ b/unicast.c @@ -329,6 +329,9 @@ find_router: unicast_packet->ttl = TTL; /* copy the destination for faster routing */ memcpy(unicast_packet->dest, orig_node->orig, ETH_ALEN); + /* set the destination tt version number */ + unicast_packet->ttvn = + (uint8_t)atomic_read(&orig_node->last_ttvn);
if (atomic_read(&bat_priv->fragmentation) && data_len + sizeof(struct unicast_packet) >
Exploiting the new announcement implementation, it has been possible to improve the roaming mechanism and reduce the number of packet drops.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Roaming-improvements
Signed-off-by: Antonio Quartulli ordex@autistici.org --- 1) To avoid crc inconsistency during the roaming phase, a flag field has been added to the tt_global_entry struct (for the TT_GLOBAL_ROAM flag) and a new flag (TT_CHANGE_ROAM) has been added to the tt_change structure. In this way, roaming-marked global entries are not taken into account during the global_crc computation, keeping the checksum consistent on both sides.
This problem is caused by entries that are deleted from a node's local table (due to the roaming_adv) but that are still present in the other nodes' global tables (these entries will be removed with the next OGM). Obviously, this situation would lead to a crc computation inconsistency. (A small sketch of the intended crc check follows after this list.)
2) The send_roam_adv() function has been modified to keep the code managing tt_roam_node in separate helper functions (a sketch of one such helper follows after this list).
3) The tt_poss_change comment has been revised.
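To illustrate point 1, here is a minimal sketch of the intended check inside the global crc loop. Only the TT_GLOBAL_ROAM test is new; the surrounding loop mirrors tt_global_crc() as introduced by the previous patch:

	/* inner loop of tt_global_crc(): skip roaming-marked entries so the
	 * checksum only covers clients both ends still agree on */
	hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) {
		if (tt_global_entry->flags & TT_GLOBAL_ROAM)
			continue;	/* removed with the next OGM anyway */
		if (!compare_eth(tt_global_entry->orig_node, orig_node))
			continue;
		total_one = 0;
		for (j = 0; j < ETH_ALEN; j++)
			total_one = crc16_byte(total_one,
					       tt_global_entry->addr[j]);
		total ^= total_one;
	}

Since the roaming client has already been removed from the old node's local table, skipping the roaming-marked entry on the receiver side makes both nodes hash the same set of clients.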
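To illustrate point 2, the tt_roam_node handling moved into helpers essentially boils down to a rate-limit check: a client may trigger at most ROAMING_MAX_COUNT roaming advertisements within ROAMING_MAX_TIME. The sketch below only shows the shape of such a helper; the function name and the tt_roam_node fields (addr, first_time, counter) are illustrative and may differ from the actual patch:

	/* sketch: return true if another roaming advertisement may be sent
	 * for this client (helper name and tt_roam_node fields are
	 * placeholders) */
	static bool tt_roam_allowed(struct bat_priv *bat_priv, uint8_t *client)
	{
		struct tt_roam_node *tt_roam_node;
		bool ret = true;

		spin_lock_bh(&bat_priv->tt_roam_list_lock);
		list_for_each_entry(tt_roam_node, &bat_priv->tt_roam_list,
				    list) {
			if (!compare_eth(tt_roam_node->addr, client))
				continue;
			if (is_out_of_time(tt_roam_node->first_time,
					   ROAMING_MAX_TIME * 1000)) {
				/* window expired: restart counting */
				tt_roam_node->first_time = jiffies;
				atomic_set(&tt_roam_node->counter,
					   ROAMING_MAX_COUNT - 1);
			} else if (!atomic_dec_not_zero(&tt_roam_node->counter)) {
				/* too many events within the window */
				ret = false;
			}
			goto unlock;
		}
		/* client not tracked yet: a new tt_roam_node would be
		 * allocated and added to bat_priv->tt_roam_list here */
	unlock:
		spin_unlock_bh(&bat_priv->tt_roam_list_lock);
		return ret;
	}

send_roam_adv() can then bail out early when the helper returns false, so a bouncing client cannot flood the mesh with roaming advertisements.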
hard-interface.c | 4 + main.c | 2 + main.h | 14 +++- originator.c | 1 + packet.h | 10 +++ routing.c | 67 +++++++++++++++- routing.h | 1 + send.c | 1 + soft-interface.c | 3 +- translation-table.c | 214 ++++++++++++++++++++++++++++++++++++++++++++++----- translation-table.h | 9 ++- types.h | 26 ++++++- 12 files changed, 318 insertions(+), 34 deletions(-)
diff --git a/hard-interface.c b/hard-interface.c index 69ef99a..815caf7 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -657,6 +657,10 @@ static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, case BAT_TT_QUERY: ret = recv_tt_query(skb, hard_iface); break; + /* Roaming advertisement */ + case BAT_ROAM_ADV: + ret = recv_roam_adv(skb, hard_iface); + break; default: ret = NET_RX_DROP; } diff --git a/main.c b/main.c index edb3e07..6e96fd6 100644 --- a/main.c +++ b/main.c @@ -88,6 +88,7 @@ int mesh_init(struct net_device *soft_iface) spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->tt_changes_list_lock); spin_lock_init(&bat_priv->tt_req_list_lock); + spin_lock_init(&bat_priv->tt_roam_list_lock); spin_lock_init(&bat_priv->tt_buff_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); @@ -101,6 +102,7 @@ int mesh_init(struct net_device *soft_iface) INIT_HLIST_HEAD(&bat_priv->softif_neigh_vids); INIT_LIST_HEAD(&bat_priv->tt_changes_list); INIT_LIST_HEAD(&bat_priv->tt_req_list); + INIT_LIST_HEAD(&bat_priv->tt_roam_list);
if (originator_init(bat_priv) < 1) goto err; diff --git a/main.h b/main.h index f2b6fcc..af782ec 100644 --- a/main.h +++ b/main.h @@ -55,9 +55,17 @@
#define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */
-/* Transtable operations */ -#define TT_CHANGE_ADD 0x00 -#define TT_CHANGE_DEL 0x01 +/* Transtable change flags */ +#define TT_CHANGE_ADD 0x00 +#define TT_CHANGE_DEL 0x01 +#define TT_CHANGE_ROAM 0x02 + +/* Transtable global entry flags */ +#define TT_GLOBAL_ROAM 0x01 + +#define ROAMING_MAX_TIME 20 /* Time in which a client can roam at most + * ROAMING_MAX_COUNT times */ +#define ROAMING_MAX_COUNT 5
#define NUM_WORDS (TQ_LOCAL_WINDOW_SIZE / WORD_BIT_SIZE)
diff --git a/originator.c b/originator.c index d4e26fd..bece4da 100644 --- a/originator.c +++ b/originator.c @@ -219,6 +219,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) /* extra reference for return */ atomic_set(&orig_node->refcount, 2);
+ orig_node->tt_poss_change = false; orig_node->bat_priv = bat_priv; memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; diff --git a/packet.h b/packet.h index 14f501e..3a4ecbf 100644 --- a/packet.h +++ b/packet.h @@ -31,6 +31,7 @@ #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 #define BAT_TT_QUERY 0x07 +#define BAT_ROAM_ADV 0x08
/* this file is included by batctl which needs these defines */ #define COMPAT_VERSION 14 @@ -163,4 +164,13 @@ struct tt_query_packet { */ } __packed;
+struct roam_adv_packet { + uint8_t packet_type; + uint8_t version; + uint8_t dst[6]; + uint8_t ttl; + uint8_t src[6]; + uint8_t client[6]; +} __packed; + #endif /* _NET_BATMAN_ADV_PACKET_H_ */ diff --git a/routing.c b/routing.c index ad526e5..00b0dee 100644 --- a/routing.c +++ b/routing.c @@ -92,6 +92,9 @@ static void update_transtable(struct bat_priv *bat_priv, spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); spin_unlock_bh(&bat_priv->tt_ghash_lock); + /* Roaming phase is over: tables are in sync again. I can + * unset the flag */ + orig_node->tt_poss_change = false; } else { /* if we missed more than one change or our tables are not * in sync anymore -> request fresh tt data */ @@ -1254,6 +1257,56 @@ out: return ret; }
+int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if) +{ + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct roam_adv_packet *roam_adv_packet; + struct orig_node *orig_node; + struct ethhdr *ethhdr; + int ret = NET_RX_DROP; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, sizeof(struct roam_adv_packet)))) + goto out; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with unicast indication but broadcast recipient */ + if (is_broadcast_ether_addr(ethhdr->h_dest)) + goto out; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + goto out; + + roam_adv_packet = (struct roam_adv_packet *)skb->data; + + if (!is_my_mac(roam_adv_packet->dst)) + return route_unicast_packet(skb, recv_if); + + orig_node = orig_hash_find(bat_priv, roam_adv_packet->src); + if (!orig_node) + goto out; + + bat_dbg(DBG_TT, bat_priv, "Received ROAMING_ADV from %pM " + "(client %pM)\n", roam_adv_packet->src, + roam_adv_packet->client); + + tt_global_add(bat_priv, orig_node, roam_adv_packet->client, + atomic_read(&orig_node->last_ttvn) + 1, true); + + /* Roaming phase starts: I have new information but the ttvn has not + * been incremented yet. This flag will make me check all the incoming + * packets for the correct destination. */ + bat_priv->tt_poss_change = true; + + orig_node_free_ref(orig_node); + ret = NET_RX_SUCCESS; +out: + kfree(skb); + return ret; +} + /* find a suitable router for this originator, and use * bonding if possible. increases the found neighbors * refcount.*/ @@ -1446,35 +1499,41 @@ int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) struct ethhdr *ethhdr; uint8_t curr_ttvn; int16_t diff; + bool tt_poss_change;
if (check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
+ /* I could need to modify it */ + if (skb_cow(skb, sizeof(struct unicast_packet)) < 0) + return NET_RX_DROP; + unicast_packet = (struct unicast_packet *)skb->data;
- if (is_my_mac(unicast_packet->dest)) + if (is_my_mac(unicast_packet->dest)) { + tt_poss_change = bat_priv->tt_poss_change; curr_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); - else { + } else { orig_node = orig_hash_find(bat_priv, unicast_packet->dest);
if (!orig_node) return NET_RX_DROP;
curr_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + tt_poss_change = orig_node->tt_poss_change; orig_node_free_ref(orig_node); }
diff = unicast_packet->ttvn - curr_ttvn; /* Check whether I have to reroute the packet */ if (unicast_packet->packet_type == BAT_UNICAST && - (diff < 0 && diff > -0xff/2)) { + ((diff < 0 && diff > -0xff/2) || tt_poss_change)) { /* Linearize the skb before accessing it */ if (skb_linearize(skb) < 0) return NET_RX_DROP;
ethhdr = (struct ethhdr *)(skb->data + sizeof(struct unicast_packet)); - orig_node = transtable_search(bat_priv, ethhdr->h_dest);
if (!orig_node) { diff --git a/routing.h b/routing.h index 6f6a5f8..e2943e0 100644 --- a/routing.h +++ b/routing.h @@ -37,6 +37,7 @@ int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bat_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); +int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv, struct orig_node *orig_node, struct hard_iface *recv_if); diff --git a/send.c b/send.c index aa0ad64..3f45f39 100644 --- a/send.c +++ b/send.c @@ -303,6 +303,7 @@ void schedule_own_packet(struct hard_iface *hard_iface) prepare_packet_buffer(bat_priv, hard_iface); /* Increment the TTVN only once per OGM interval */ atomic_inc(&bat_priv->ttvn); + bat_priv->tt_poss_change = false; }
/* if the changes have been sent enough times */ diff --git a/soft-interface.c b/soft-interface.c index 5c34bcc..613b833 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -542,7 +542,7 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) /* only modify transtable if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { tt_local_remove(bat_priv, dev->dev_addr, - "mac address changed"); + "mac address changed", false); tt_local_add(dev, addr->sa_data); }
@@ -845,6 +845,7 @@ struct net_device *softif_create(char *name)
bat_priv->tt_buff = NULL; bat_priv->tt_buff_len = 0; + bat_priv->tt_poss_change = false;
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index 66b6bf7..d14072f 100644 --- a/translation-table.c +++ b/translation-table.c @@ -123,7 +123,8 @@ static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) return time_after(jiffies, deadline); }
-static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr) +static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr, + uint8_t roaming) { struct tt_change_node *tt_change_node;
@@ -134,6 +135,9 @@ static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr) return;
tt_change_node->change.flags = op; + if (roaming) + tt_change_node->change.flags |= TT_GLOBAL_ROAM; + memcpy(tt_change_node->change.addr, addr, ETH_ALEN);
spin_lock_bh(&bat_priv->tt_changes_list_lock); @@ -168,6 +172,8 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) struct bat_priv *bat_priv = netdev_priv(soft_iface); struct tt_local_entry *tt_local_entry; struct tt_global_entry *tt_global_entry; + uint8_t roam_addr[ETH_ALEN]; + struct orig_node *roam_orig_node;
spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); @@ -181,7 +187,7 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) if (!tt_local_entry) goto unlock;
- tt_local_event(bat_priv, TT_CHANGE_ADD, addr); + tt_local_event(bat_priv, TT_CHANGE_ADD, addr, false);
bat_dbg(DBG_TT, bat_priv, "Creating new local tt entry: %pM (ttvn: %d)\n", addr, @@ -206,11 +212,20 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (tt_global_entry) + /* Check whether it is a roaming! */ + if (tt_global_entry) { + memcpy(roam_addr, tt_global_entry->addr, ETH_ALEN); + roam_orig_node = tt_global_entry->orig_node; + /* This node is probably going to update its tt table */ + tt_global_entry->orig_node->tt_poss_change = true; _tt_global_del(bat_priv, tt_global_entry, "local tt received"); + spin_unlock_bh(&bat_priv->tt_ghash_lock); + send_roam_adv(bat_priv, tt_global_entry->addr, + tt_global_entry->orig_node); + } else + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- spin_unlock_bh(&bat_priv->tt_ghash_lock); return; unlock: spin_unlock_bh(&bat_priv->tt_lhash_lock); @@ -364,7 +379,8 @@ static void tt_local_del(struct bat_priv *bat_priv, tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); }
-void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, + char *message, bool roaming) { struct tt_local_entry *tt_local_entry;
@@ -372,7 +388,8 @@ void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) tt_local_entry = tt_local_hash_find(bat_priv, addr);
if (tt_local_entry) { - tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr); + tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, + roaming); tt_local_del(bat_priv, tt_local_entry, message); } spin_unlock_bh(&bat_priv->tt_lhash_lock); @@ -401,7 +418,7 @@ static void tt_local_purge(struct bat_priv *bat_priv) continue;
tt_local_event(bat_priv, TT_CHANGE_DEL, - tt_local_entry->addr); + tt_local_entry->addr, false); tt_local_del(bat_priv, tt_local_entry, "address timed out"); } @@ -474,7 +491,7 @@ static void tt_changes_list_free(struct bat_priv *bat_priv) /* caller must hold orig_node recount */ int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *tt_addr, uint8_t ttvn) + unsigned char *tt_addr, uint8_t ttvn, bool roaming) { struct tt_global_entry *tt_global_entry; struct tt_local_entry *tt_local_entry; @@ -494,6 +511,7 @@ int tt_global_add(struct bat_priv *bat_priv, atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; tt_global_entry->ttvn = ttvn; + tt_global_entry->flags = 0x00; atomic_inc(&orig_node->tt_size); hash_add(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry, @@ -505,6 +523,7 @@ int tt_global_add(struct bat_priv *bat_priv, atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; tt_global_entry->ttvn = ttvn; + tt_global_entry->flags = 0x00; orig_node_free_ref(orig_node_tmp); atomic_inc(&orig_node->tt_size); } @@ -521,8 +540,9 @@ int tt_global_add(struct bat_priv *bat_priv, tt_local_entry = tt_local_hash_find(bat_priv, tt_addr);
if (tt_local_entry) - tt_local_del(bat_priv, tt_local_entry, - "global tt received"); + tt_local_remove(bat_priv, tt_global_entry->addr, + "global tt received", roaming); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return 1; unlock: @@ -635,7 +655,7 @@ static void _tt_global_del(struct bat_priv *bat_priv,
void tt_global_del(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *addr, char *message) + unsigned char *addr, char *message, bool roaming) { struct tt_global_entry *tt_global_entry;
@@ -643,9 +663,14 @@ void tt_global_del(struct bat_priv *bat_priv, tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (tt_global_entry && tt_global_entry->orig_node == orig_node) { + if (roaming) { + tt_global_entry->flags |= TT_GLOBAL_ROAM; + goto out; + } atomic_dec(&orig_node->tt_size); _tt_global_del(bat_priv, tt_global_entry, message); } +out: spin_unlock_bh(&bat_priv->tt_ghash_lock); }
@@ -731,6 +756,12 @@ uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node) head, hash_entry) { if (compare_eth(tt_global_entry->orig_node, orig_node)) { + /* Roaming clients are in the global table for + * consistency only. They don't have to be + * taken into account while computing the + * global crc */ + if (tt_global_entry->flags & TT_GLOBAL_ROAM) + continue; total_one = 0; for (j = 0; j < ETH_ALEN; j++) total_one = crc16_byte(total_one, @@ -1252,7 +1283,7 @@ static void _tt_fill_gtable(struct bat_priv *bat_priv, tt_ptr = tt_buff + (count * ETH_ALEN);
/* If we fail to allocate a new entry we return immediatly */ - if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn)) + if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn, false)) return; } atomic_set(&orig_node->last_ttvn, ttvn); @@ -1297,10 +1328,17 @@ void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, if ((tt_change + i)->flags & TT_CHANGE_DEL) tt_global_del(bat_priv, orig_node, (tt_change + i)->addr, - "tt removed by changes"); + "tt removed by changes", + (tt_change + i)->flags & TT_CHANGE_ROAM); else if (!tt_global_add(bat_priv, orig_node, - (tt_change + i)->addr, ttvn)) + (tt_change + i)->addr, ttvn, false)) + /* In case of problem while storing a + * global_entry, we stop the updating + * procedure without committing the + * ttvn change. This will avoid to send + * corrupted data on tt_request + */ return; }
@@ -1359,6 +1397,9 @@ void handle_tt_response(struct bat_priv *bat_priv, spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); spin_unlock_bh(&bat_priv->tt_ghash_lock); + /* Roaming phase is over: tables are in sync again. I can + * unset the flag */ + orig_node->tt_poss_change = false; out: if (orig_node) orig_node_free_ref(orig_node); @@ -1377,16 +1418,133 @@ int tt_init(struct bat_priv *bat_priv) return 1; }
-void tt_free(struct bat_priv *bat_priv) +static void tt_roam_list_free(struct bat_priv *bat_priv) { - cancel_delayed_work_sync(&bat_priv->tt_work); + struct tt_roam_node *node, *safe;
- tt_local_table_free(bat_priv); - tt_global_table_free(bat_priv); - tt_req_list_free(bat_priv); - tt_changes_list_free(bat_priv); + spin_lock_bh(&bat_priv->tt_roam_list_lock);
- kfree(bat_priv->tt_buff); + list_for_each_entry_safe(node, safe, &bat_priv->tt_roam_list, list) { + list_del(&node->list); + kfree(node); + } + + spin_unlock_bh(&bat_priv->tt_roam_list_lock); +} + +static void tt_roam_purge(struct bat_priv *bat_priv) +{ + struct tt_roam_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_roam_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_roam_list, list) { + if (!is_out_of_time(node->first_time, + ROAMING_MAX_TIME * 1000)) + continue; + + list_del(&node->list); + kfree(node); + } + spin_unlock_bh(&bat_priv->tt_roam_list_lock); +} + +/* This function checks whether the client already reached the + * maximum number of possible roaming phases. In this case the ROAMING_ADV + * will not be sent. + * + * returns true if the ROAMING_ADV can be sent, false otherwise */ +static bool tt_check_roam_count(struct bat_priv *bat_priv, + uint8_t *client) +{ + struct tt_roam_node *tt_roam_node; + bool ret = false; + + spin_lock_bh(&bat_priv->tt_roam_list_lock); + /* The new tt_req will be issued only if I'm not waiting for a + * reply from the same orig_node yet */ + list_for_each_entry(tt_roam_node, &bat_priv->tt_roam_list, list) { + if (!compare_eth(tt_roam_node->addr, client)) + continue; + + if (is_out_of_time(tt_roam_node->first_time, + ROAMING_MAX_TIME * 1000)) + continue; + + if (!atomic_dec_not_zero(&tt_roam_node->counter)) + /* Sorry, you roamed too many times! */ + goto unlock; + ret = true; + break; + } + + if (!ret) { + tt_roam_node = kmalloc(sizeof(struct tt_roam_node), GFP_ATOMIC); + if (!tt_roam_node) + goto unlock; + + tt_roam_node->first_time = jiffies; + atomic_set(&tt_roam_node->counter, ROAMING_MAX_COUNT - 1); + memcpy(tt_roam_node->addr, client, ETH_ALEN); + + list_add(&tt_roam_node->list, &bat_priv->tt_roam_list); + ret = true; + } + +unlock: + spin_unlock_bh(&bat_priv->tt_roam_list_lock); + return ret; +} + +void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, + struct orig_node *orig_node) +{ + struct neigh_node *neigh_node = NULL; + struct sk_buff *skb = NULL; + struct roam_adv_packet *roam_adv_packet; + int ret = 1; + + /* before going on we have to check whether the client has + * already roamed to us too many times */ + if (!tt_check_roam_count(bat_priv, client)) + goto out; + + skb = dev_alloc_skb(sizeof(struct roam_adv_packet) + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + + roam_adv_packet = (struct roam_adv_packet *)skb_put(skb, + sizeof(struct roam_adv_packet)); + + roam_adv_packet->packet_type = BAT_ROAM_ADV; + roam_adv_packet->version = COMPAT_VERSION; + roam_adv_packet->ttl = TTL; + memcpy(roam_adv_packet->src, + bat_priv->primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(roam_adv_packet->dst, orig_node->orig, ETH_ALEN); + memcpy(roam_adv_packet->client, client, ETH_ALEN); + + neigh_node = find_router(bat_priv, orig_node, NULL); + if (!neigh_node) + goto out; + + if (neigh_node->if_incoming->if_status != IF_ACTIVE) + goto out; + + bat_dbg(DBG_TT, bat_priv, + "Sending ROAMING_ADV to %pM (client %pM) via %pM\n", + orig_node->orig, client, neigh_node->addr); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = 0; + +out: + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (ret) + kfree_skb(skb); + return; }
static void tt_purge(struct work_struct *work) @@ -1398,6 +1556,20 @@ static void tt_purge(struct work_struct *work)
tt_local_purge(bat_priv); tt_req_purge(bat_priv); + tt_roam_purge(bat_priv);
tt_start_timer(bat_priv); } + +void tt_free(struct bat_priv *bat_priv) +{ + cancel_delayed_work_sync(&bat_priv->tt_work); + + tt_local_table_free(bat_priv); + tt_global_table_free(bat_priv); + tt_req_list_free(bat_priv); + tt_changes_list_free(bat_priv); + tt_roam_list_free(bat_priv); + + kfree(bat_priv->tt_buff); +} diff --git a/translation-table.h b/translation-table.h index f203b49..b08d30a 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,6 +22,7 @@ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
+struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr); int tt_len(int changes_num); void tt_changes_primary_if(struct bat_priv *bat_priv, uint8_t *old_addr, uint8_t *new_addr); @@ -30,20 +31,20 @@ int tt_changes_fill_buffer(struct bat_priv *bat_priv, int tt_init(struct bat_priv *bat_priv); void tt_local_add(struct net_device *soft_iface, uint8_t *addr); void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message); + uint8_t *addr, char *message, bool roaming); int tt_local_seq_print_text(struct seq_file *seq, void *offset); void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *tt_buff, int tt_buff_len); int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *addr, - uint8_t ttvn); + uint8_t ttvn, bool roaming); int tt_global_seq_print_text(struct seq_file *seq, void *offset); void tt_global_del_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, char *message); void tt_global_del(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *addr, - char *message); + char *message, bool roaming); struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr); void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *tt_buff, uint8_t tt_num_changes); @@ -61,5 +62,7 @@ void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr); void handle_tt_response(struct bat_priv *bat_priv, struct tt_query_packet *tt_response); +void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, + struct orig_node *orig_node);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index 0848fcc..b148bc3 100644 --- a/types.h +++ b/types.h @@ -81,6 +81,13 @@ struct orig_node { int16_t tt_buff_len; spinlock_t tt_buff_lock; /* protects tt_buff */ atomic_t tt_size; + bool tt_poss_change; /* This flag is used to detect an ongoing roaming + * phase. If true, then I sent a Roaming_adv to + * this orig_node and I have to inspect every + * packet directed to it to check whether it is + * still the true destination or not. This flag + * will be reset to false as soon as I receive a + * new TTVN from this orig_node */ uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -158,6 +165,12 @@ struct bat_priv { atomic_t ttvn; /* tranlation table version number */ atomic_t tt_ogm_append_cnt; atomic_t tt_local_changes; /* changes registered in a OGM interval */ + bool tt_poss_change; /* This flag is used to detect an ongoing roaming + * phase. If true, then I received a Roaming_adv + * and I have to inspect every packet directed to + * me to check whether I am still the true + * destination or not. This flag will be reset to + * false as soon as I increase my TTVN */ char num_ifaces; struct debug_log *debug_log; struct kobject *mesh_obj; @@ -172,6 +185,7 @@ struct bat_priv { struct hashtable_t *tt_local_hash; struct hashtable_t *tt_global_hash; struct list_head tt_req_list; /* list of pending tt_requests */ + struct list_head tt_roam_list; struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ @@ -179,6 +193,7 @@ struct bat_priv { spinlock_t tt_lhash_lock; /* protects tt_local_hash */ spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t tt_req_list_lock; /* protects tt_req_list */ + spinlock_t tt_roam_list_lock; /* protects tt_roam_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ @@ -224,8 +239,8 @@ struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; uint8_t ttvn; - /* entry in the global table */ - struct hlist_node hash_entry; + uint8_t flags; /* only TT_GLOBAL_ROAM is used */ + struct hlist_node hash_entry; /* entry in the global table */ };
struct tt_change_node { @@ -239,6 +254,13 @@ struct tt_req_node { struct list_head list; };
+struct tt_roam_node { + uint8_t addr[ETH_ALEN]; + atomic_t counter; + unsigned long first_time; + struct list_head list; +}; + /** * forw_packet - structure for forw_list maintaining packets to be * send/forwarded
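A note on the rerouting check added to recv_unicast_packet() above: every unicast packet carries the ttvn under which its destination was translated, and the packet is pushed through the translation table again when that ttvn lags behind the one currently known for the destination, or while tt_poss_change signals a possibly ongoing roaming phase. The standalone C sketch below only illustrates that condition; needs_reroute() and the sample values are invented for this example and are not part of the patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* invented helper mirroring the check in recv_unicast_packet(): reroute when
 * the ttvn carried by the packet lags behind the destination's current ttvn,
 * or while a roaming phase may still be ongoing */
static bool needs_reroute(uint8_t packet_ttvn, uint8_t curr_ttvn,
                          bool tt_poss_change)
{
    int16_t diff = packet_ttvn - curr_ttvn;

    return (diff < 0 && diff > -0xff / 2) || tt_poss_change;
}

int main(void)
{
    /* translated with ttvn 3 while the destination is already at 5:
     * the translation may be stale, look the client up again */
    printf("%d\n", needs_reroute(3, 5, false));

    /* sender and destination agree on the ttvn: deliver as-is */
    printf("%d\n", needs_reroute(5, 5, false));

    /* ttvn matches, but a ROAMING_ADV was received in the meantime:
     * re-check the destination anyway */
    printf("%d\n", needs_reroute(5, 5, true));

    return 0;
}

The -0xff/2 bound means that only a lag of less than half the 8-bit space is treated as a stale translation; a larger apparent lag is assumed to be the version number having wrapped around, i.e. the packet being ahead rather than behind.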
The local and the global translation tables are now lock-free and RCU-protected.
Signed-off-by: Antonio Quartulli ordex@autistici.org --- main.c | 2 - routing.c | 2 - translation-table.c | 256 +++++++++++++++++++++++++++++---------------------- types.h | 6 +- vis.c | 13 +-- 5 files changed, 155 insertions(+), 124 deletions(-)
diff --git a/main.c b/main.c index 6e96fd6..5f3cab1 100644 --- a/main.c +++ b/main.c @@ -84,8 +84,6 @@ int mesh_init(struct net_device *soft_iface)
spin_lock_init(&bat_priv->forw_bat_list_lock); spin_lock_init(&bat_priv->forw_bcast_list_lock); - spin_lock_init(&bat_priv->tt_lhash_lock); - spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->tt_changes_list_lock); spin_lock_init(&bat_priv->tt_req_list_lock); spin_lock_init(&bat_priv->tt_roam_list_lock); diff --git a/routing.c b/routing.c index 00b0dee..b681568 100644 --- a/routing.c +++ b/routing.c @@ -89,9 +89,7 @@ static void update_transtable(struct bat_priv *bat_priv, /* Even if we received the crc into the OGM, we prefer * to recompute it to spot any possible inconsistency * in the global table */ - spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); - spin_unlock_bh(&bat_priv->tt_ghash_lock); /* Roaming phase is over: tables are in sync again. I can * unset the flag */ orig_node->tt_poss_change = false; diff --git a/translation-table.c b/translation-table.c index d14072f..be4f851 100644 --- a/translation-table.c +++ b/translation-table.c @@ -78,6 +78,9 @@ static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, if (!compare_eth(tt_local_entry, data)) continue;
+ if (!atomic_inc_not_zero(&tt_local_entry->refcount)) + continue; + tt_local_entry_tmp = tt_local_entry; break; } @@ -107,6 +110,9 @@ static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, if (!compare_eth(tt_global_entry, data)) continue;
+ if (!atomic_inc_not_zero(&tt_global_entry->refcount)) + continue; + tt_global_entry_tmp = tt_global_entry; break; } @@ -123,8 +129,36 @@ static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) return time_after(jiffies, deadline); }
+static void tt_local_entry_free_rcu(struct rcu_head *rcu) +{ + struct tt_local_entry *tt_local_entry; + + tt_local_entry = container_of(rcu, struct tt_local_entry, rcu); + kfree(tt_local_entry); +} + +static void tt_local_entry_free_ref(struct tt_local_entry *tt_local_entry) +{ + if (atomic_dec_and_test(&tt_local_entry->refcount)) + call_rcu(&tt_local_entry->rcu, tt_local_entry_free_rcu); +} + +static void tt_global_entry_free_rcu(struct rcu_head *rcu) +{ + struct tt_global_entry *tt_global_entry; + + tt_global_entry = container_of(rcu, struct tt_global_entry, rcu); + kfree(tt_global_entry); +} + +static void tt_global_entry_free_ref(struct tt_global_entry *tt_global_entry) +{ + if (atomic_dec_and_test(&tt_global_entry->refcount)) + call_rcu(&tt_global_entry->rcu, tt_global_entry_free_rcu); +} + static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr, - uint8_t roaming) + bool roaming) { struct tt_change_node *tt_change_node;
@@ -170,22 +204,19 @@ static int tt_local_init(struct bat_priv *bat_priv) void tt_local_add(struct net_device *soft_iface, uint8_t *addr) { struct bat_priv *bat_priv = netdev_priv(soft_iface); - struct tt_local_entry *tt_local_entry; - struct tt_global_entry *tt_global_entry; - uint8_t roam_addr[ETH_ALEN]; - struct orig_node *roam_orig_node; + struct tt_local_entry *tt_local_entry = NULL; + struct tt_global_entry *tt_global_entry = NULL;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr);
if (tt_local_entry) { tt_local_entry->last_seen = jiffies; - goto unlock; + goto out; }
tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); if (!tt_local_entry) - goto unlock; + goto out;
tt_local_event(bat_priv, TT_CHANGE_ADD, addr, false);
@@ -195,6 +226,7 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
memcpy(tt_local_entry->addr, addr, ETH_ALEN); tt_local_entry->last_seen = jiffies; + atomic_set(&tt_local_entry->refcount, 2);
/* the batman interface mac address should never be purged */ if (compare_eth(addr, soft_iface->dev_addr)) @@ -204,31 +236,26 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry, &tt_local_entry->hash_entry); + atomic_inc(&bat_priv->num_local_tt); - spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ - spin_lock_bh(&bat_priv->tt_ghash_lock); - tt_global_entry = tt_global_hash_find(bat_priv, addr);
/* Check whether it is a roaming! */ if (tt_global_entry) { - memcpy(roam_addr, tt_global_entry->addr, ETH_ALEN); - roam_orig_node = tt_global_entry->orig_node; /* This node is probably going to update its tt table */ tt_global_entry->orig_node->tt_poss_change = true; _tt_global_del(bat_priv, tt_global_entry, "local tt received"); - spin_unlock_bh(&bat_priv->tt_ghash_lock); send_roam_adv(bat_priv, tt_global_entry->addr, - tt_global_entry->orig_node); - } else - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - return; -unlock: - spin_unlock_bh(&bat_priv->tt_lhash_lock); + tt_global_entry->orig_node); + } +out: + if (tt_local_entry) + tt_local_entry_free_ref(tt_local_entry); + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
int tt_changes_fill_buffer(struct bat_priv *bat_priv, @@ -310,8 +337,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) "announced via TT (TTVN: %u):\n", net_dev->name, (uint8_t)atomic_read(&bat_priv->ttvn));
- spin_lock_bh(&bat_priv->tt_lhash_lock); - buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx\n" */ for (i = 0; i < hash->size; i++) { @@ -325,7 +350,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->tt_lhash_lock); ret = -ENOMEM; goto out; } @@ -345,8 +369,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); - seq_printf(seq, "%s", buff); kfree(buff); out: @@ -355,15 +377,6 @@ out: return ret; }
-static void tt_local_entry_free(struct hlist_node *node, void *arg) -{ - struct bat_priv *bat_priv = (struct bat_priv *)arg; - void *data = container_of(node, struct tt_local_entry, hash_entry); - - kfree(data); - atomic_dec(&bat_priv->num_local_tt); -} - static void tt_local_del(struct bat_priv *bat_priv, struct tt_local_entry *tt_local_entry, char *message) @@ -376,23 +389,24 @@ static void tt_local_del(struct bat_priv *bat_priv, hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry->addr);
- tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); + tt_local_entry_free_ref(tt_local_entry); }
void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message, bool roaming) { - struct tt_local_entry *tt_local_entry; + struct tt_local_entry *tt_local_entry = NULL;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (tt_local_entry) { - tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, - roaming); - tt_local_del(bat_priv, tt_local_entry, message); - } - spin_unlock_bh(&bat_priv->tt_lhash_lock); + if (!tt_local_entry) + goto out; + + tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, roaming); + tt_local_del(bat_priv, tt_local_entry, message); +out: + if (tt_local_entry) + tt_local_entry_free_ref(tt_local_entry); }
static void tt_local_purge(struct bat_priv *bat_priv) @@ -401,13 +415,14 @@ static void tt_local_purge(struct bat_priv *bat_priv) struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; + spinlock_t *list_lock; /* protects write access to the hash lists */ int i;
- spin_lock_bh(&bat_priv->tt_lhash_lock); - for (i = 0; i < hash->size; i++) { head = &hash->table[i]; + list_lock = &hash->list_locks[i];
+ spin_lock_bh(list_lock); hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { if (tt_local_entry->never_purge) @@ -419,22 +434,26 @@ static void tt_local_purge(struct bat_priv *bat_priv)
tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, false); - tt_local_del(bat_priv, tt_local_entry, - "address timed out"); + atomic_dec(&bat_priv->num_local_tt); + bat_dbg(DBG_TT, bat_priv, "Deleting local " + "tt entry (%pM): timed out\n", + tt_local_entry->addr); + hlist_del_rcu(node); + tt_local_entry_free_ref(tt_local_entry); } + spin_unlock_bh(list_lock); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); }
static void tt_local_table_free(struct bat_priv *bat_priv) { struct hashtable_t *hash; - int i; spinlock_t *list_lock; /* protects write access to the hash lists */ - struct hlist_head *head; - struct hlist_node *node, *node_tmp; struct tt_local_entry *tt_local_entry; + struct hlist_node *node, *node_tmp; + struct hlist_head *head; + int i;
if (!bat_priv->tt_local_hash) return; @@ -449,7 +468,7 @@ static void tt_local_table_free(struct bat_priv *bat_priv) hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { hlist_del_rcu(node); - kfree(tt_local_entry); + tt_local_entry_free_ref(tt_local_entry); } spin_unlock_bh(list_lock); } @@ -494,10 +513,9 @@ int tt_global_add(struct bat_priv *bat_priv, unsigned char *tt_addr, uint8_t ttvn, bool roaming) { struct tt_global_entry *tt_global_entry; - struct tt_local_entry *tt_local_entry; struct orig_node *orig_node_tmp; + int ret = 0;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, tt_addr);
if (!tt_global_entry) { @@ -505,17 +523,20 @@ int tt_global_add(struct bat_priv *bat_priv, kmalloc(sizeof(struct tt_global_entry), GFP_ATOMIC); if (!tt_global_entry) - goto unlock; + goto out; + memcpy(tt_global_entry->addr, tt_addr, ETH_ALEN); /* Assign the new orig_node */ atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; tt_global_entry->ttvn = ttvn; tt_global_entry->flags = 0x00; - atomic_inc(&orig_node->tt_size); + atomic_set(&tt_global_entry->refcount, 2); + hash_add(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry, &tt_global_entry->hash_entry); + atomic_inc(&orig_node->tt_size); } else { if (tt_global_entry->orig_node != orig_node) { atomic_dec(&tt_global_entry->orig_node->tt_size); @@ -529,25 +550,18 @@ int tt_global_add(struct bat_priv *bat_priv, } }
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - bat_dbg(DBG_TT, bat_priv, "Creating new global tt entry: %pM (via %pM)\n", tt_global_entry->addr, orig_node->orig);
/* remove address from local hash if present */ - spin_lock_bh(&bat_priv->tt_lhash_lock); - tt_local_entry = tt_local_hash_find(bat_priv, tt_addr); - - if (tt_local_entry) - tt_local_remove(bat_priv, tt_global_entry->addr, - "global tt received", roaming); - - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return 1; -unlock: - spin_unlock_bh(&bat_priv->tt_ghash_lock); - return 0; + tt_local_remove(bat_priv, tt_global_entry->addr, + "global tt received", roaming); + ret = 1; +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); + return ret; }
int tt_global_seq_print_text(struct seq_file *seq, void *offset) @@ -584,8 +598,6 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) seq_printf(seq, " %-13s %s %-15s %s\n", "Client", "(TTVN)", "Originator", "(Curr TTVN)");
- spin_lock_bh(&bat_priv->tt_ghash_lock); - buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx (ttvn) via * xx:xx:xx:xx:xx:xx (cur_ttvn)\n"*/ @@ -600,10 +612,10 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->tt_ghash_lock); ret = -ENOMEM; goto out; } + buff[0] = '\0'; pos = 0;
@@ -625,8 +637,6 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - seq_printf(seq, "%s", buff); kfree(buff); out: @@ -640,7 +650,7 @@ static void _tt_global_del(struct bat_priv *bat_priv, char *message) { if (!tt_global_entry) - return; + goto out;
bat_dbg(DBG_TT, bat_priv, "Deleting global tt entry %pM (via %pM): %s\n", @@ -648,30 +658,34 @@ static void _tt_global_del(struct bat_priv *bat_priv, message);
atomic_dec(&tt_global_entry->orig_node->tt_size); + hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry->addr); - kfree(tt_global_entry); +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
void tt_global_del(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *addr, char *message, bool roaming) { - struct tt_global_entry *tt_global_entry; + struct tt_global_entry *tt_global_entry = NULL;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, addr); + if (!tt_global_entry) + goto out;
- if (tt_global_entry && tt_global_entry->orig_node == orig_node) { + if (tt_global_entry->orig_node == orig_node) { if (roaming) { tt_global_entry->flags |= TT_GLOBAL_ROAM; goto out; } - atomic_dec(&orig_node->tt_size); _tt_global_del(bat_priv, tt_global_entry, message); } out: - spin_unlock_bh(&bat_priv->tt_ghash_lock); + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
void tt_global_del_orig(struct bat_priv *bat_priv, @@ -682,38 +696,59 @@ void tt_global_del_orig(struct bat_priv *bat_priv, struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_node *node, *safe; struct hlist_head *head; + spinlock_t *list_lock; /* protects write access to the hash lists */
- if (!bat_priv->tt_global_hash) - return; - - spin_lock_bh(&bat_priv->tt_ghash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i]; + list_lock = &hash->list_locks[i];
+ spin_lock_bh(list_lock); hlist_for_each_entry_safe(tt_global_entry, node, safe, head, hash_entry) { - if (tt_global_entry->orig_node == orig_node) - _tt_global_del(bat_priv, tt_global_entry, - message); + if (tt_global_entry->orig_node == orig_node) { + bat_dbg(DBG_TT, bat_priv, + "Deleting global tt entry %pM " + "(via %pM): originator time out\n", + tt_global_entry->addr, + tt_global_entry->orig_node->orig); + hlist_del_rcu(node); + tt_global_entry_free_ref(tt_global_entry); + } } + spin_unlock_bh(list_lock); } atomic_set(&orig_node->tt_size, 0); - - spin_unlock_bh(&bat_priv->tt_ghash_lock); -} - -static void tt_global_entry_free(struct hlist_node *node, void *arg) -{ - void *data = container_of(node, struct tt_global_entry, hash_entry); - kfree(data); }
static void tt_global_table_free(struct bat_priv *bat_priv) { + struct hashtable_t *hash; + spinlock_t *list_lock; /* protects write access to the hash lists */ + struct tt_global_entry *tt_global_entry; + struct hlist_node *node, *node_tmp; + struct hlist_head *head; + int i; + if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->tt_global_hash, tt_global_entry_free, NULL); + hash = bat_priv->tt_global_hash; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + list_lock = &hash->list_locks[i]; + + spin_lock_bh(list_lock); + hlist_for_each_entry_safe(tt_global_entry, node, node_tmp, + head, hash_entry) { + hlist_del_rcu(node); + tt_global_entry_free_ref(tt_global_entry); + } + spin_unlock_bh(list_lock); + } + + hash_destroy(hash); + bat_priv->tt_global_hash = NULL; }
@@ -722,19 +757,19 @@ struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr) struct tt_global_entry *tt_global_entry; struct orig_node *orig_node = NULL;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (!tt_global_entry) goto out;
if (!atomic_inc_not_zero(&tt_global_entry->orig_node->refcount)) - goto out; + goto free_tt;
orig_node = tt_global_entry->orig_node;
+free_tt: + tt_global_entry_free_ref(tt_global_entry); out: - spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; }
@@ -797,7 +832,6 @@ uint16_t tt_local_crc(struct bat_priv *bat_priv) tt_local_entry->addr[j]); total ^= total_one; } - rcu_read_unlock(); }
@@ -1349,15 +1383,17 @@ void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node,
bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr) { - struct tt_local_entry *tt_local_entry; + struct tt_local_entry *tt_local_entry = NULL; + bool ret = false;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->tt_lhash_lock); - + if (!tt_local_entry) + goto out; + ret = true; +out: if (tt_local_entry) - return true; - return false; + tt_local_entry_free_ref(tt_local_entry); + return ret; }
void handle_tt_response(struct bat_priv *bat_priv, @@ -1394,9 +1430,7 @@ void handle_tt_response(struct bat_priv *bat_priv, spin_unlock_bh(&bat_priv->tt_req_list_lock);
/* Recalculate the CRC for this orig_node and store it */ - spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); - spin_unlock_bh(&bat_priv->tt_ghash_lock); /* Roaming phase is over: tables are in sync again. I can * unset the flag */ orig_node->tt_poss_change = false; diff --git a/types.h b/types.h index b148bc3..fdc6993 100644 --- a/types.h +++ b/types.h @@ -190,8 +190,6 @@ struct bat_priv { spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ spinlock_t tt_changes_list_lock; /* protects tt_changes */ - spinlock_t tt_lhash_lock; /* protects tt_local_hash */ - spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t tt_req_list_lock; /* protects tt_req_list */ spinlock_t tt_roam_list_lock; /* protects tt_roam_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ @@ -232,6 +230,8 @@ struct tt_local_entry { uint8_t addr[ETH_ALEN]; unsigned long last_seen; char never_purge; + atomic_t refcount; + struct rcu_head rcu; struct hlist_node hash_entry; };
@@ -240,6 +240,8 @@ struct tt_global_entry { struct orig_node *orig_node; uint8_t ttvn; uint8_t flags; /* only TT_GLOBAL_ROAM is used */ + atomic_t refcount; + struct rcu_head rcu; struct hlist_node hash_entry; /* entry in the global table */ };
diff --git a/vis.c b/vis.c index c39f20c..4c27950 100644 --- a/vis.c +++ b/vis.c @@ -680,11 +680,12 @@ next:
hash = bat_priv->tt_local_hash;
- spin_lock_bh(&bat_priv->tt_lhash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i];
- hlist_for_each_entry(tt_local_entry, node, head, hash_entry) { + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_local_entry, node, head, + hash_entry) { entry = (struct vis_info_entry *) skb_put(info->skb_packet, sizeof(*entry)); @@ -693,14 +694,12 @@ next: entry->quality = 0; /* 0 means TT */ packet->entries++;
- if (vis_packet_full(info)) { - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return 0; - } + if (vis_packet_full(info)) + goto unlock; } + rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); return 0;
unlock:
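The reference-counting scheme that replaces tt_lhash_lock/tt_ghash_lock above may be easier to follow in isolation: entries are created with a refcount of 2 (one reference owned by the hash table, one by the creating caller), tt_local_hash_find()/tt_global_hash_find() only return entries on which atomic_inc_not_zero() succeeded, and the memory is released through call_rcu() once the last reference is dropped. The sketch below is a hypothetical userspace rendering of that pattern with C11 atomics; entry_create(), entry_get() and entry_put() are made-up names, and a plain free() stands in for the RCU-deferred kfree.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct entry {
    atomic_int refcount;
    int data;
};

static struct entry *entry_create(int data)
{
    struct entry *e = malloc(sizeof(*e));

    if (!e)
        return NULL;
    /* one reference for the hash table, one for the creating caller */
    atomic_init(&e->refcount, 2);
    e->data = data;
    return e;
}

/* equivalent of atomic_inc_not_zero(): never resurrect a dying entry */
static bool entry_get(struct entry *e)
{
    int old = atomic_load(&e->refcount);

    do {
        if (old == 0)
            return false;
    } while (!atomic_compare_exchange_weak(&e->refcount, &old, old + 1));
    return true;
}

static void entry_put(struct entry *e)
{
    /* in the kernel the free is deferred via call_rcu() so that concurrent
     * RCU readers can still walk the hash chain safely */
    if (atomic_fetch_sub(&e->refcount, 1) == 1)
        free(e);
}

int main(void)
{
    struct entry *e = entry_create(42);

    if (!e)
        return 1;

    if (entry_get(e)) {     /* a lookup takes its own reference */
        printf("found %d\n", e->data);
        entry_put(e);       /* lookup done */
    }

    entry_put(e);           /* creating caller drops its reference */
    entry_put(e);           /* removal from the hash drops the last one */
    return 0;
}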
The old HNA mechanism has been completely rewritten. The new mechanism announces only the local translation-table changes, reducing the protocol overhead.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Client-announcement
Moreover:
- COMPAT_VERSION has been increased to 14
- batman-adv now depends on module "crc16" for tt crc computation
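As a rough illustration of the mechanism (simplified, invented code, not taken from batman-adv: tt_apply(), the fixed-size table and the sample addresses only exist in this sketch): instead of the full local table, only the TT_CHANGE_ADD/TT_CHANGE_DEL events of one OGM interval travel with the OGM; the receiver applies them to its copy of the sender's table and commits the new ttvn only after all changes have been applied.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ETH_ALEN 6
#define MAX_CLIENTS 16

/* same values as the flags this patch adds to main.h */
#define TT_CHANGE_ADD 0x00
#define TT_CHANGE_DEL 0x01

struct tt_change {
    uint8_t flags;
    uint8_t addr[ETH_ALEN];
};

/* toy stand-in for the per-originator global table */
struct tt_table {
    uint8_t addr[MAX_CLIENTS][ETH_ALEN];
    bool used[MAX_CLIENTS];
    uint8_t ttvn;
};

static int tt_find(const struct tt_table *t, const uint8_t *addr)
{
    int i;

    for (i = 0; i < MAX_CLIENTS; i++)
        if (t->used[i] && !memcmp(t->addr[i], addr, ETH_ALEN))
            return i;
    return -1;
}

static void tt_apply(struct tt_table *t, const struct tt_change *changes,
                     int num_changes, uint8_t new_ttvn)
{
    int i, slot;

    for (i = 0; i < num_changes; i++) {
        slot = tt_find(t, changes[i].addr);

        if (changes[i].flags == TT_CHANGE_DEL) {
            if (slot >= 0)
                t->used[slot] = false;
            continue;
        }

        /* TT_CHANGE_ADD: reuse the slot if the client is already known */
        if (slot < 0)
            for (slot = 0; slot < MAX_CLIENTS; slot++)
                if (!t->used[slot])
                    break;
        if (slot < MAX_CLIENTS) {
            memcpy(t->addr[slot], changes[i].addr, ETH_ALEN);
            t->used[slot] = true;
        }
    }
    /* the new table version is committed only after all changes applied */
    t->ttvn = new_ttvn;
}

int main(void)
{
    struct tt_table global = { .ttvn = 0 };
    struct tt_change ogm_changes[] = {
        { TT_CHANGE_ADD, { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } },
        { TT_CHANGE_ADD, { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 } },
        { TT_CHANGE_DEL, { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } },
    };
    int i;

    /* only these three changes travel with the OGM, not the whole table */
    tt_apply(&global, ogm_changes, 3, (uint8_t)(global.ttvn + 1));

    for (i = 0; i < MAX_CLIENTS; i++)
        if (global.used[i])
            printf("client %02x:%02x:%02x:%02x:%02x:%02x (ttvn %u)\n",
                   global.addr[i][0], global.addr[i][1], global.addr[i][2],
                   global.addr[i][3], global.addr[i][4], global.addr[i][5],
                   global.ttvn);
    return 0;
}

In the patch itself the ttvn, together with the crc16 computed over the announced clients, is what lets a receiver detect an inconsistency and fall back to a full tt_request/tt_response exchange.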
Signed-off-by: Antonio Quartulli ordex@autistici.org --- aggregation.c | 23 +- aggregation.h | 6 +- bat_sysfs.c | 2 +- hard-interface.c | 13 +- main.c | 13 +- main.h | 14 +- originator.c | 8 +- packet.h | 34 ++- routing.c | 227 ++++++++--- routing.h | 10 +- send.c | 90 +++- send.h | 2 +- soft-interface.c | 11 +- translation-table.c | 1135 ++++++++++++++++++++++++++++++++++++++++++--------- translation-table.h | 42 ++- types.h | 38 ++- unicast.c | 3 + 17 files changed, 1356 insertions(+), 315 deletions(-)
diff --git a/aggregation.c b/aggregation.c index 9b94590..de59b5f 100644 --- a/aggregation.c +++ b/aggregation.c @@ -20,16 +20,11 @@ */
#include "main.h" +#include "translation-table.h" #include "aggregation.h" #include "send.h" #include "routing.h"
-/* calculate the size of the tt information for a given packet */ -static int tt_len(struct batman_packet *batman_packet) -{ - return batman_packet->num_tt * ETH_ALEN; -} - /* return true if new_packet can be aggregated with forw_packet */ static bool can_aggregate_with(struct batman_packet *new_batman_packet, int packet_len, @@ -255,18 +250,20 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff, batman_packet = (struct batman_packet *)packet_buff;
do { - /* network to host order for our 32bit seqno, and the - orig_interval. */ + /* network to host order for our 32bit seqno and the + orig_interval */ batman_packet->seqno = ntohl(batman_packet->seqno); + batman_packet->tt_crc = ntohs(batman_packet->tt_crc);
tt_buff = packet_buff + buff_pos + BAT_PACKET_LEN; - receive_bat_packet(ethhdr, batman_packet, - tt_buff, tt_len(batman_packet), - if_incoming);
- buff_pos += BAT_PACKET_LEN + tt_len(batman_packet); + receive_bat_packet(ethhdr, batman_packet, tt_buff, if_incoming); + + buff_pos += BAT_PACKET_LEN + + tt_len(batman_packet->tt_num_changes); + batman_packet = (struct batman_packet *) (packet_buff + buff_pos); } while (aggregated_packet(buff_pos, packet_len, - batman_packet->num_tt)); + batman_packet->tt_num_changes)); } diff --git a/aggregation.h b/aggregation.h index 7e6d72f..c631a4c 100644 --- a/aggregation.h +++ b/aggregation.h @@ -25,9 +25,11 @@ #include "main.h"
/* is there another aggregated packet here? */ -static inline int aggregated_packet(int buff_pos, int packet_len, int num_tt) +static inline int aggregated_packet(int buff_pos, int packet_len, + int tt_num_changes) { - int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_tt * ETH_ALEN); + int next_buff_pos = buff_pos + BAT_PACKET_LEN + (tt_num_changes * + sizeof(struct tt_change));
return (next_buff_pos <= packet_len) && (next_buff_pos <= MAX_AGGREGATION_BYTES); diff --git a/bat_sysfs.c b/bat_sysfs.c index 497a070..5c85834 100644 --- a/bat_sysfs.c +++ b/bat_sysfs.c @@ -368,7 +368,7 @@ BAT_ATTR_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE, static BAT_ATTR(gw_bandwidth, S_IRUGO | S_IWUSR, show_gw_bwidth, store_gw_bwidth); #ifdef CONFIG_BATMAN_ADV_DEBUG -BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 3, NULL); +BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 7, NULL); #endif
static struct bat_attribute *mesh_attrs[] = { diff --git a/hard-interface.c b/hard-interface.c index dfbfccc..69ef99a 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -152,12 +152,6 @@ static void primary_if_select(struct bat_priv *bat_priv, batman_packet->ttl = TTL;
primary_if_update_addr(bat_priv); - - /*** - * hacky trick to make sure that we send the TT information via - * our new primary interface - */ - atomic_set(&bat_priv->tt_local_changed, 1); }
static bool hardif_is_iface_up(struct hard_iface *hard_iface) @@ -339,7 +333,8 @@ int hardif_enable_interface(struct hard_iface *hard_iface, char *iface_name) batman_packet->flags = 0; batman_packet->ttl = 2; batman_packet->tq = TQ_MAX_VALUE; - batman_packet->num_tt = 0; + batman_packet->tt_num_changes = 0; + batman_packet->ttvn = 0;
hard_iface->if_num = bat_priv->num_ifaces; bat_priv->num_ifaces++; @@ -658,6 +653,10 @@ static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, case BAT_VIS: ret = recv_vis_packet(skb, hard_iface); break; + /* Translation table query (request or response) */ + case BAT_TT_QUERY: + ret = recv_tt_query(skb, hard_iface); + break; default: ret = NET_RX_DROP; } diff --git a/main.c b/main.c index 0a7cee0..edb3e07 100644 --- a/main.c +++ b/main.c @@ -86,6 +86,9 @@ int mesh_init(struct net_device *soft_iface) spin_lock_init(&bat_priv->forw_bcast_list_lock); spin_lock_init(&bat_priv->tt_lhash_lock); spin_lock_init(&bat_priv->tt_ghash_lock); + spin_lock_init(&bat_priv->tt_changes_list_lock); + spin_lock_init(&bat_priv->tt_req_list_lock); + spin_lock_init(&bat_priv->tt_buff_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); spin_lock_init(&bat_priv->vis_list_lock); @@ -96,14 +99,13 @@ int mesh_init(struct net_device *soft_iface) INIT_HLIST_HEAD(&bat_priv->forw_bcast_list); INIT_HLIST_HEAD(&bat_priv->gw_list); INIT_HLIST_HEAD(&bat_priv->softif_neigh_vids); + INIT_LIST_HEAD(&bat_priv->tt_changes_list); + INIT_LIST_HEAD(&bat_priv->tt_req_list);
if (originator_init(bat_priv) < 1) goto err;
- if (tt_local_init(bat_priv) < 1) - goto err; - - if (tt_global_init(bat_priv) < 1) + if (tt_init(bat_priv) < 1) goto err;
tt_local_add(soft_iface, soft_iface->dev_addr); @@ -137,8 +139,7 @@ void mesh_free(struct net_device *soft_iface) gw_node_purge(bat_priv); originator_free(bat_priv);
- tt_local_free(bat_priv); - tt_global_free(bat_priv); + tt_free(bat_priv);
softif_neigh_purge(bat_priv);
diff --git a/main.h b/main.h index 3ca3941..883e467 100644 --- a/main.h +++ b/main.h @@ -46,11 +46,19 @@ /* sliding packet range of received originator messages in squence numbers * (should be a multiple of our word size) */ #define TQ_LOCAL_WINDOW_SIZE 64 +#define TT_REQUEST_TIMEOUT 3 /* seconds we have to keep pending tt_req */ + #define TQ_GLOBAL_WINDOW_SIZE 5 #define TQ_LOCAL_BIDRECT_SEND_MINIMUM 1 #define TQ_LOCAL_BIDRECT_RECV_MINIMUM 1 #define TQ_TOTAL_BIDRECT_LIMIT 1
+#define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */ + +/* Transtable change flags */ +#define TT_CHANGE_ADD 0x00 +#define TT_CHANGE_DEL 0x01 + #define NUM_WORDS (TQ_LOCAL_WINDOW_SIZE / WORD_BIT_SIZE)
#define LOG_BUF_LEN 8192 /* has to be a power of 2 */ @@ -90,9 +98,9 @@
/* all messages related to routing / flooding / broadcasting / etc */ #define DBG_BATMAN 1 -/* route or tt entry added / changed / deleted */ -#define DBG_ROUTES 2 -#define DBG_ALL 3 +#define DBG_ROUTES 2 /* route added / changed / deleted */ +#define DBG_TT 4 /* translation table operations */ +#define DBG_ALL 7
/* diff --git a/originator.c b/originator.c index 080ec88..d4e26fd 100644 --- a/originator.c +++ b/originator.c @@ -145,6 +145,7 @@ static void orig_node_free_rcu(struct rcu_head *rcu) tt_global_del_orig(orig_node->bat_priv, orig_node, "originator timed out");
+ kfree(orig_node->tt_buff); kfree(orig_node->bcast_own); kfree(orig_node->bcast_own_sum); kfree(orig_node); @@ -213,6 +214,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) spin_lock_init(&orig_node->ogm_cnt_lock); spin_lock_init(&orig_node->bcast_seqno_lock); spin_lock_init(&orig_node->neigh_list_lock); + spin_lock_init(&orig_node->tt_buff_lock);
/* extra reference for return */ atomic_set(&orig_node->refcount, 2); @@ -221,6 +223,8 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; orig_node->tt_buff = NULL; + orig_node->tt_buff_len = 0; + atomic_set(&orig_node->tt_size, 0); orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1 @@ -330,9 +334,7 @@ static bool purge_orig_node(struct bat_priv *bat_priv, if (purge_orig_neighbors(bat_priv, orig_node, &best_neigh_node)) { update_routes(bat_priv, orig_node, - best_neigh_node, - orig_node->tt_buff, - orig_node->tt_buff_len); + best_neigh_node); } }
diff --git a/packet.h b/packet.h index eda9965..14f501e 100644 --- a/packet.h +++ b/packet.h @@ -30,9 +30,10 @@ #define BAT_BCAST 0x04 #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 +#define BAT_TT_QUERY 0x07
/* this file is included by batctl which needs these defines */ -#define COMPAT_VERSION 12 +#define COMPAT_VERSION 14 #define DIRECTLINK 0x40 #define VIS_SERVER 0x20 #define PRIMARIES_FIRST_HOP 0x10 @@ -52,6 +53,11 @@ #define UNI_FRAG_HEAD 0x01 #define UNI_FRAG_LARGETAIL 0x02
+/* TT flags */ +#define TT_RESPONSE 0x00 +#define TT_REQUEST 0x01 +#define TT_FULL_TABLE 0x02 + struct batman_packet { uint8_t packet_type; uint8_t version; /* batman version field */ @@ -61,7 +67,9 @@ struct batman_packet { uint8_t orig[6]; uint8_t prev_sender[6]; uint8_t ttl; - uint8_t num_tt; + uint8_t ttvn; /* translation table version number */ + uint16_t tt_crc; + uint8_t tt_num_changes; uint8_t gw_flags; /* flags related to gateway class */ uint8_t align; } __packed; @@ -101,6 +109,7 @@ struct unicast_packet { uint8_t version; /* batman version field */ uint8_t dest[6]; uint8_t ttl; + uint8_t ttvn; /* destination translation table version number */ } __packed;
struct unicast_frag_packet { @@ -133,4 +142,25 @@ struct vis_packet { uint8_t sender_orig[6]; /* who sent or rebroadcasted this packet */ } __packed;
+struct tt_query_packet { + uint8_t packet_type; + uint8_t version; /* batman version field */ + uint8_t dst[ETH_ALEN]; + uint8_t ttl; + uint8_t flags; /* this field is a combination of: + * - TT_REQUEST or TT_RESPONSE + * - TT_FULL_TABLE + */ + uint8_t src[ETH_ALEN]; + uint8_t ttvn; /* if TT_REQUEST: ttvn that triggered the + * request + * if TT_RESPONSE: new ttvn for the src + * orig_node + */ + uint16_t tt_data; /* if TT_REQUEST: crc associated with the + * ttvn + * if TT_RESPONSE: table_size + */ +} __packed; + #endif /* _NET_BATMAN_ADV_PACKET_H_ */ diff --git a/routing.c b/routing.c index 8c403ce..80218fc 100644 --- a/routing.c +++ b/routing.c @@ -64,28 +64,55 @@ void slide_own_bcast_window(struct hard_iface *hard_iface) } }
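To make the new tt_query_packet format above easier to read: a TT_REQUEST simply carries the requester's view of the destination's table state (ttvn and crc). Below is a minimal sketch of the fields that send_tt_request(), introduced later in translation-table.c, fills in; the right-hand side names are placeholders and the skb handling is omitted:

struct tt_query_packet req = {
	.packet_type = BAT_TT_QUERY,
	.version     = COMPAT_VERSION,
	.ttl         = TTL,
	.flags       = TT_REQUEST,	/* optionally OR-ed with TT_FULL_TABLE */
	.ttvn        = ttvn_we_could_not_apply,
	.tt_data     = crc_associated_with_that_ttvn,
};
memcpy(req.dst, queried_originator_mac, ETH_ALEN);
memcpy(req.src, our_primary_if_mac, ETH_ALEN);

The answering node replies with a TT_RESPONSE in which ttvn is the current table version of the queried originator and tt_data is reused to carry the number of entries in the response.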
-static void update_TT(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len) +static void update_transtable(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes, + uint8_t ttvn, uint16_t tt_crc) { - if ((tt_buff_len != orig_node->tt_buff_len) || - ((tt_buff_len > 0) && - (orig_node->tt_buff_len > 0) && - (memcmp(orig_node->tt_buff, tt_buff, tt_buff_len) != 0))) { - - if (orig_node->tt_buff_len > 0) - tt_global_del_orig(bat_priv, orig_node, - "originator changed tt"); - - if ((tt_buff_len > 0) && (tt_buff)) - tt_global_add_orig(bat_priv, orig_node, - tt_buff, tt_buff_len); + uint8_t orig_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + bool full_table = true; + + /* the ttvn increased by one -> we can apply the attached changes */ + if (ttvn - orig_ttvn == 1) { + /* the OGM could not contain the changes because they were too + * many to fit in one frame or because they have already been + * sent TT_OGM_APPEND_MAX times. In this case send a tt + * request */ + if (!tt_num_changes) { + full_table = false; + goto request_table; + } + + tt_update_changes(bat_priv, orig_node, tt_num_changes, ttvn, + (struct tt_change *)tt_buff); + + /* Even if we received the crc into the OGM, we prefer + * to recompute it to spot any possible inconsistency + * in the global table */ + spin_lock_bh(&bat_priv->tt_ghash_lock); + orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + spin_unlock_bh(&bat_priv->tt_ghash_lock); + } else { + /* if we missed more than one change or our tables are not + * in sync anymore -> request fresh tt data */ + if (ttvn != orig_ttvn || orig_node->tt_crc != tt_crc) { +request_table: + bat_dbg(DBG_TT, bat_priv, "TT inconsistency for %pM. " + "Need to retrieve the correct information " + "(ttvn: %u last_ttvn: %u crc: %u last_crc: " + "%u num_changes: %u)\n", orig_node->orig, ttvn, + orig_ttvn, tt_crc, orig_node->tt_crc, + tt_num_changes); + send_tt_request(bat_priv, orig_node, ttvn, tt_crc, + full_table); + return; + } } }
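Stated compactly, the decision implemented by update_transtable() above is (placeholder helper names, locking and debug output stripped):

if (ttvn == last_ttvn + 1 && tt_num_changes) {	/* +1 modulo 256 */
	apply_tt_diff();         /* tt_update_changes(): add/delete each entry */
	recompute_global_crc();  /* tt_global_crc(): spot inconsistencies */
} else if (ttvn != last_ttvn || tt_crc != last_tt_crc) {
	request_fresh_table();   /* send_tt_request(): diff lost or tables diverged */
}

In other words, the diff carried by an OGM is only applied when it is exactly the next expected version; anything else falls back to an explicit TT_REQUEST.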
static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, - unsigned char *tt_buff, int tt_buff_len) + struct neigh_node *neigh_node) { struct neigh_node *curr_router;
@@ -93,11 +120,10 @@ static void update_route(struct bat_priv *bat_priv,
/* route deleted */ if ((curr_router) && (!neigh_node)) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting route towards: %pM\n", orig_node->orig); tt_global_del_orig(bat_priv, orig_node, - "originator timed out"); + "Deleted route towards originator");
/* route added */ } else if ((!curr_router) && (neigh_node)) { @@ -105,9 +131,6 @@ static void update_route(struct bat_priv *bat_priv, bat_dbg(DBG_ROUTES, bat_priv, "Adding route towards: %pM (via %pM)\n", orig_node->orig, neigh_node->addr); - tt_global_add_orig(bat_priv, orig_node, - tt_buff, tt_buff_len); - /* route changed */ } else { bat_dbg(DBG_ROUTES, bat_priv, @@ -135,8 +158,7 @@ static void update_route(struct bat_priv *bat_priv,
void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *tt_buff, - int tt_buff_len) + struct neigh_node *neigh_node) { struct neigh_node *router = NULL;
@@ -146,11 +168,7 @@ void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, router = orig_node_get_router(orig_node);
if (router != neigh_node) - update_route(bat_priv, orig_node, neigh_node, - tt_buff, tt_buff_len); - /* may be just TT changed */ - else - update_TT(bat_priv, orig_node, tt_buff, tt_buff_len); + update_route(bat_priv, orig_node, neigh_node);
out: if (router) @@ -363,14 +381,12 @@ static void update_orig(struct bat_priv *bat_priv, struct ethhdr *ethhdr, struct batman_packet *batman_packet, struct hard_iface *if_incoming, - unsigned char *tt_buff, int tt_buff_len, - char is_duplicate) + unsigned char *tt_buff, char is_duplicate) { struct neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; struct neigh_node *router = NULL; struct orig_node *orig_node_tmp; struct hlist_node *node; - int tmp_tt_buff_len; uint8_t bcast_own_sum_orig, bcast_own_sum_neigh;
bat_dbg(DBG_BATMAN, bat_priv, "update_originator(): " @@ -435,9 +451,6 @@ static void update_orig(struct bat_priv *bat_priv,
bonding_candidate_add(orig_node, neigh_node);
- tmp_tt_buff_len = (tt_buff_len > batman_packet->num_tt * ETH_ALEN ? - batman_packet->num_tt * ETH_ALEN : tt_buff_len); - /* if this neighbor already is our next hop there is nothing * to change */ router = orig_node_get_router(orig_node); @@ -467,15 +480,19 @@ static void update_orig(struct bat_priv *bat_priv, goto update_tt; }
- update_routes(bat_priv, orig_node, neigh_node, - tt_buff, tmp_tt_buff_len); - goto update_gw; + update_routes(bat_priv, orig_node, neigh_node);
update_tt: - update_routes(bat_priv, orig_node, router, - tt_buff, tmp_tt_buff_len); + /* I have to check for transtable changes only if the OGM has been + * sent through a primary interface */ + if (((batman_packet->orig != ethhdr->h_source) && + (batman_packet->ttl > 2)) || + (batman_packet->flags & PRIMARIES_FIRST_HOP)) + update_transtable(bat_priv, orig_node, tt_buff, + batman_packet->tt_num_changes, + batman_packet->ttvn, + batman_packet->tt_crc);
-update_gw: if (orig_node->gw_flags != batman_packet->gw_flags) gw_node_update(bat_priv, orig_node, batman_packet->gw_flags);
@@ -597,7 +614,7 @@ out:
void receive_bat_packet(struct ethhdr *ethhdr, struct batman_packet *batman_packet, - unsigned char *tt_buff, int tt_buff_len, + unsigned char *tt_buff, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -636,12 +653,14 @@ void receive_bat_packet(struct ethhdr *ethhdr,
bat_dbg(DBG_BATMAN, bat_priv, "Received BATMAN packet via NB: %pM, IF: %s [%pM] " - "(from OG: %pM, via prev OG: %pM, seqno %d, tq %d, " - "TTL %d, V %d, IDF %d)\n", + "(from OG: %pM, via prev OG: %pM, seqno %d, ttvn %u, " + "crc %u, changes %u, td %d, TTL %d, V %d, IDF %d)\n", ethhdr->h_source, if_incoming->net_dev->name, if_incoming->net_dev->dev_addr, batman_packet->orig, batman_packet->prev_sender, batman_packet->seqno, - batman_packet->tq, batman_packet->ttl, batman_packet->version, + batman_packet->ttvn, batman_packet->tt_crc, + batman_packet->tt_num_changes, batman_packet->tq, + batman_packet->ttl, batman_packet->version, has_directlink_flag);
rcu_read_lock(); @@ -794,14 +813,14 @@ void receive_bat_packet(struct ethhdr *ethhdr, ((orig_node->last_real_seqno == batman_packet->seqno) && (orig_node->last_ttl - 3 <= batman_packet->ttl)))) update_orig(bat_priv, orig_node, ethhdr, batman_packet, - if_incoming, tt_buff, tt_buff_len, is_duplicate); + if_incoming, tt_buff, is_duplicate);
/* is single hop (direct) neighbor */ if (is_single_hop_neigh) {
/* mark direct link on incoming interface */ schedule_forward_packet(orig_node, ethhdr, batman_packet, - 1, tt_buff_len, if_incoming); + 1, if_incoming);
bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: " "rebroadcast neighbor packet with direct link flag\n"); @@ -824,7 +843,7 @@ void receive_bat_packet(struct ethhdr *ethhdr, bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: rebroadcast originator packet\n"); schedule_forward_packet(orig_node, ethhdr, batman_packet, - 0, tt_buff_len, if_incoming); + 0, if_incoming);
out_neigh: if ((orig_neigh_node) && (!is_single_hop_neigh)) @@ -1171,6 +1190,70 @@ static struct neigh_node *find_ifalter_router(struct orig_node *primary_orig, return router; }
+int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if) +{ + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct tt_query_packet *tt_query; + struct ethhdr *ethhdr; + int ret = NET_RX_DROP; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, sizeof(struct tt_query_packet)))) + goto out; + + /* I could need to modify it */ + if (skb_cow(skb, sizeof(struct tt_query_packet)) < 0) + goto out; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with unicast indication but broadcast recipient */ + if (is_broadcast_ether_addr(ethhdr->h_dest)) + goto out; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + goto out; + + tt_query = (struct tt_query_packet *)skb->data; + + tt_query->tt_data = ntohs(tt_query->tt_data); + + if (tt_query->flags & TT_REQUEST) { + /* If we cannot provide an answer the tt_request is + * forwarded */ + if (!send_tt_response(bat_priv, tt_query)) { + bat_dbg(DBG_TT, bat_priv, + "Routing TT_REQUEST to %pM [%c]\n", + tt_query->dst, + (tt_query->flags & TT_FULL_TABLE ? 'F' : '.')); + tt_query->tt_data = htons(tt_query->tt_data); + return route_unicast_packet(skb, recv_if); + } + ret = NET_RX_SUCCESS; + goto out; + } + /* packet needs to be linearised to access the TT changes records */ + if (skb_linearize(skb) < 0) + goto out; + + if (is_my_mac(tt_query->dst)) + handle_tt_response(bat_priv, tt_query); + else { + bat_dbg(DBG_TT, bat_priv, + "Routing TT_RESPONSE to %pM [%c]\n", + tt_query->dst, + (tt_query->flags & TT_FULL_TABLE ? 'F' : '.')); + tt_query->tt_data = htons(tt_query->tt_data); + return route_unicast_packet(skb, recv_if); + } + ret = NET_RX_SUCCESS; + +out: + kfree_skb(skb); + return ret; +} + /* find a suitable router for this originator, and use * bonding if possible. increases the found neighbors * refcount.*/ @@ -1359,14 +1442,64 @@ out:
int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) { + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); struct unicast_packet *unicast_packet; int hdr_size = sizeof(struct unicast_packet); + struct orig_node *orig_node; + struct ethhdr *ethhdr; + uint8_t curr_ttvn; + int16_t diff;
if (check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
unicast_packet = (struct unicast_packet *)skb->data;
+ if (is_my_mac(unicast_packet->dest)) + curr_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); + else { + orig_node = orig_hash_find(bat_priv, unicast_packet->dest); + + if (!orig_node) + return NET_RX_DROP; + + curr_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + orig_node_free_ref(orig_node); + } + + diff = unicast_packet->ttvn - curr_ttvn; + /* Check whether I have to reroute the packet */ + if (unicast_packet->packet_type == BAT_UNICAST && + (diff < 0 && diff > -0xff/2)) { + /* Linearize the skb before accessing it */ + if (skb_linearize(skb) < 0) + return NET_RX_DROP; + + ethhdr = (struct ethhdr *)(skb->data + + sizeof(struct unicast_packet)); + + orig_node = transtable_search(bat_priv, ethhdr->h_dest); + + if (!orig_node) { + if (!is_my_client(bat_priv, ethhdr->h_dest)) + return NET_RX_DROP; + memcpy(unicast_packet->dest, + bat_priv->primary_if->net_dev->dev_addr, + ETH_ALEN); + } else { + memcpy(unicast_packet->dest, orig_node->orig, + ETH_ALEN); + curr_ttvn = (uint8_t) + atomic_read(&orig_node->last_ttvn); + orig_node_free_ref(orig_node); + } + + bat_dbg(DBG_ROUTES, bat_priv, "TTVN mismatch (old_ttvn %u " + "new_ttvn %u)! Rerouting unicast packet (for %pM) to " + "%pM\n", ethhdr->h_dest, unicast_packet->dest); + + unicast_packet->ttvn = curr_ttvn; + } /* packet for me */ if (is_my_mac(unicast_packet->dest)) { interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size); diff --git a/routing.h b/routing.h index 870f298..6f6a5f8 100644 --- a/routing.h +++ b/routing.h @@ -24,12 +24,11 @@
void slide_own_bcast_window(struct hard_iface *hard_iface); void receive_bat_packet(struct ethhdr *ethhdr, - struct batman_packet *batman_packet, - unsigned char *tt_buff, int tt_buff_len, - struct hard_iface *if_incoming); + struct batman_packet *batman_packet, + unsigned char *tt_buff, + struct hard_iface *if_incoming); void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, unsigned char *tt_buff, - int tt_buff_len); + struct neigh_node *neigh_node); int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); @@ -37,6 +36,7 @@ int recv_ucast_frag_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bat_packet(struct sk_buff *skb, struct hard_iface *recv_if); +int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv, struct orig_node *orig_node, struct hard_iface *recv_if); diff --git a/send.c b/send.c index f30d0c6..aa0ad64 100644 --- a/send.c +++ b/send.c @@ -121,7 +121,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, /* adjust all flags and log packets */ while (aggregated_packet(buff_pos, forw_packet->packet_len, - batman_packet->num_tt)) { + batman_packet->tt_num_changes)) {
/* we might have aggregated direct link packets with an * ordinary base packet */ @@ -136,17 +136,17 @@ static void send_packet_to_if(struct forw_packet *forw_packet, "Forwarding")); bat_dbg(DBG_BATMAN, bat_priv, "%s %spacket (originator %pM, seqno %d, TQ %d, TTL %d," - " IDF %s) on interface %s [%pM]\n", + " IDF %s, hvn %d) on interface %s [%pM]\n", fwd_str, (packet_num > 0 ? "aggregated " : ""), batman_packet->orig, ntohl(batman_packet->seqno), batman_packet->tq, batman_packet->ttl, (batman_packet->flags & DIRECTLINK ? "on" : "off"), - hard_iface->net_dev->name, + batman_packet->ttvn, hard_iface->net_dev->name, hard_iface->net_dev->dev_addr);
buff_pos += sizeof(struct batman_packet) + - (batman_packet->num_tt * ETH_ALEN); + tt_len(batman_packet->tt_num_changes); packet_num++; batman_packet = (struct batman_packet *) (forw_packet->skb->data + buff_pos); @@ -214,26 +214,17 @@ static void send_packet(struct forw_packet *forw_packet) rcu_read_unlock(); }
-static void rebuild_batman_packet(struct bat_priv *bat_priv, - struct hard_iface *hard_iface) +static void realloc_packet_buffer(struct hard_iface *hard_iface, + int new_len) { - int new_len; unsigned char *new_buff; - struct batman_packet *batman_packet;
- new_len = sizeof(struct batman_packet) + - (bat_priv->num_local_tt * ETH_ALEN); new_buff = kmalloc(new_len, GFP_ATOMIC);
/* keep old buffer if kmalloc should fail */ if (new_buff) { memcpy(new_buff, hard_iface->packet_buff, sizeof(struct batman_packet)); - batman_packet = (struct batman_packet *)new_buff; - - batman_packet->num_tt = tt_local_fill_buffer(bat_priv, - new_buff + sizeof(struct batman_packet), - new_len - sizeof(struct batman_packet));
kfree(hard_iface->packet_buff); hard_iface->packet_buff = new_buff; @@ -241,6 +232,46 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, } }
+/* when calling this function (hard_iface == primary_if) has to be true */ +static void prepare_packet_buffer(struct bat_priv *bat_priv, + struct hard_iface *hard_iface) +{ + int new_len; + struct batman_packet *batman_packet; + + new_len = BAT_PACKET_LEN + + tt_len((uint8_t)atomic_read(&bat_priv->tt_local_changes)); + + /* if we have too many changes for one packet don't send any + * and wait for the tt table request which will be fragmented */ + if (new_len > hard_iface->soft_iface->mtu) + new_len = BAT_PACKET_LEN; + + realloc_packet_buffer(hard_iface, new_len); + batman_packet = (struct batman_packet *)hard_iface->packet_buff; + + atomic_set(&bat_priv->tt_crc, tt_local_crc(bat_priv)); + + /* reset the sending counter */ + atomic_set(&bat_priv->tt_ogm_append_cnt, TT_OGM_APPEND_MAX); + + batman_packet->tt_num_changes = tt_changes_fill_buffer(bat_priv, + hard_iface->packet_buff + BAT_PACKET_LEN, + hard_iface->packet_len - BAT_PACKET_LEN); + +} + +static void reset_packet_buffer(struct bat_priv *bat_priv, + struct hard_iface *hard_iface) +{ + struct batman_packet *batman_packet; + + realloc_packet_buffer(hard_iface, BAT_PACKET_LEN); + + batman_packet = (struct batman_packet *)hard_iface->packet_buff; + batman_packet->tt_num_changes = 0; +} + void schedule_own_packet(struct hard_iface *hard_iface) { struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); @@ -266,14 +297,22 @@ void schedule_own_packet(struct hard_iface *hard_iface) if (hard_iface->if_status == IF_TO_BE_ACTIVATED) hard_iface->if_status = IF_ACTIVE;
- /* if local tt has changed and interface is a primary interface */ - if ((atomic_read(&bat_priv->tt_local_changed)) && - (hard_iface == primary_if)) - rebuild_batman_packet(bat_priv, hard_iface); + if (hard_iface == primary_if) { + /* if at least one change happened */ + if (atomic_read(&bat_priv->tt_local_changes) > 0) { + prepare_packet_buffer(bat_priv, hard_iface); + /* Increment the TTVN only once per OGM interval */ + atomic_inc(&bat_priv->ttvn); + } + + /* if the changes have been sent enough times */ + if (!atomic_dec_not_zero(&bat_priv->tt_ogm_append_cnt)) + reset_packet_buffer(bat_priv, hard_iface); + }
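Spelling out the redundancy scheme implemented just above: when at least one local change happened, the diff buffer is rebuilt, the ttvn is incremented once, and the very same diff is then appended to TT_OGM_APPEND_MAX consecutive OGMs (3, see main.h) before reset_packet_buffer() strips it again. Assuming a single client change right before interval n and nothing afterwards, the emitted OGMs would roughly look like this (the ttvn values are just example numbers):

OGM n   : ttvn=5, tt_num_changes=1   (diff built, ttvn bumped from 4 to 5)
OGM n+1 : ttvn=5, tt_num_changes=1   (same diff repeated)
OGM n+2 : ttvn=5, tt_num_changes=1   (same diff repeated, append counter exhausted)
OGM n+3 : ttvn=5, tt_num_changes=0   (buffer reset; nodes that missed the diff recover via TT_REQUEST)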
/** * NOTE: packet_buff might just have been re-allocated in - * rebuild_batman_packet() + * prepare_packet_buffer() or in reset_packet_buffer() */ batman_packet = (struct batman_packet *)hard_iface->packet_buff;
@@ -281,6 +320,9 @@ void schedule_own_packet(struct hard_iface *hard_iface) batman_packet->seqno = htonl((uint32_t)atomic_read(&hard_iface->seqno));
+ batman_packet->ttvn = atomic_read(&bat_priv->ttvn); + batman_packet->tt_crc = htons((uint16_t)atomic_read(&bat_priv->tt_crc)); + if (vis_server == VIS_TYPE_SERVER_SYNC) batman_packet->flags |= VIS_SERVER; else @@ -309,13 +351,14 @@ void schedule_own_packet(struct hard_iface *hard_iface) void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int tt_buff_len, + uint8_t directlink, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); struct neigh_node *router; unsigned char in_tq, in_ttl, tq_avg = 0; unsigned long send_time; + uint8_t tt_num_changes;
if (batman_packet->ttl <= 1) { bat_dbg(DBG_BATMAN, bat_priv, "ttl exceeded\n"); @@ -326,6 +369,7 @@ void schedule_forward_packet(struct orig_node *orig_node,
in_tq = batman_packet->tq; in_ttl = batman_packet->ttl; + tt_num_changes = batman_packet->tt_num_changes;
batman_packet->ttl--; memcpy(batman_packet->prev_sender, ethhdr->h_source, ETH_ALEN); @@ -358,6 +402,7 @@ void schedule_forward_packet(struct orig_node *orig_node, batman_packet->ttl);
batman_packet->seqno = htonl(batman_packet->seqno); + batman_packet->tt_crc = htons(batman_packet->tt_crc);
/* switch of primaries first hop flag when forwarding */ batman_packet->flags &= ~PRIMARIES_FIRST_HOP; @@ -369,7 +414,8 @@ void schedule_forward_packet(struct orig_node *orig_node, send_time = forward_send_time(); add_bat_packet_to_list(bat_priv, (unsigned char *)batman_packet, - sizeof(struct batman_packet) + tt_buff_len, + sizeof(struct batman_packet) + + tt_len(tt_num_changes), if_incoming, 0, send_time); }
diff --git a/send.h b/send.h index 247172d..842f4d1 100644 --- a/send.h +++ b/send.h @@ -29,7 +29,7 @@ void schedule_own_packet(struct hard_iface *hard_iface); void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int tt_buff_len, + uint8_t directlink, struct hard_iface *if_outgoing); int add_bcast_packet_to_list(struct bat_priv *bat_priv, struct sk_buff *skb); void send_outstanding_bat_packet(struct work_struct *work); diff --git a/soft-interface.c b/soft-interface.c index c76a33e..5c34bcc 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -542,7 +542,7 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) /* only modify transtable if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { tt_local_remove(bat_priv, dev->dev_addr, - "mac address changed"); + "mac address changed"); tt_local_add(dev, addr->sa_data); }
@@ -600,7 +600,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) if (curr_softif_neigh) goto dropped;
- /* TODO: check this for locks */ + /* Register the client MAC in the transtable */ tt_local_add(soft_iface, ethhdr->h_source);
if (is_multicast_ether_addr(ethhdr->h_dest)) { @@ -839,7 +839,12 @@ struct net_device *softif_create(char *name)
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1); - atomic_set(&bat_priv->tt_local_changed, 0); + atomic_set(&bat_priv->ttvn, 0); + atomic_set(&bat_priv->tt_local_changes, 0); + atomic_set(&bat_priv->tt_ogm_append_cnt, 0); + + bat_priv->tt_buff = NULL; + bat_priv->tt_buff_len = 0;
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index 7b72966..bf3d3aa 100644 --- a/translation-table.c +++ b/translation-table.c @@ -23,13 +23,17 @@ #include "translation-table.h" #include "soft-interface.h" #include "hard-interface.h" +#include "send.h" #include "hash.h" #include "originator.h" +#include "routing.h"
-static void tt_local_purge(struct work_struct *work); -static void _tt_global_del_orig(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - char *message); +#include <linux/crc16.h> + +static void _tt_global_del(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, + char *message); +static void tt_purge(struct work_struct *work);
/* returns 1 if they are the same mac addr */ static int compare_ltt(struct hlist_node *node, void *data2) @@ -47,14 +51,15 @@ static int compare_gtt(struct hlist_node *node, void *data2) return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
-static void tt_local_start_timer(struct bat_priv *bat_priv) +static void tt_start_timer(struct bat_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->tt_work, tt_local_purge); - queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, 10 * HZ); + INIT_DELAYED_WORK(&bat_priv->tt_work, tt_purge); + queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, + msecs_to_jiffies(5000)); }
static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, - void *data) + void *data) { struct hashtable_t *hash = bat_priv->tt_local_hash; struct hlist_head *head; @@ -82,7 +87,7 @@ static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, }
static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, - void *data) + void *data) { struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_head *head; @@ -110,7 +115,42 @@ static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, return tt_global_entry_tmp; }
-int tt_local_init(struct bat_priv *bat_priv) +static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) +{ + unsigned long deadline; + deadline = starting_time + msecs_to_jiffies(timeout); + + return time_after(jiffies, deadline); +} + +static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr) +{ + struct tt_change_node *tt_change_node; + + tt_change_node = (struct tt_change_node *) + kmalloc(sizeof(struct tt_change_node), GFP_ATOMIC); + + if (!tt_change_node) + return; + + tt_change_node->change.flags = op; + memcpy(tt_change_node->change.addr, addr, ETH_ALEN); + + spin_lock_bh(&bat_priv->tt_changes_list_lock); + /* track the change in the OGMinterval list */ + list_add_tail(&tt_change_node->list, &bat_priv->tt_changes_list); + atomic_inc(&bat_priv->tt_local_changes); + spin_unlock_bh(&bat_priv->tt_changes_list_lock); + + atomic_set(&bat_priv->tt_ogm_append_cnt, 0); +} + +int tt_len(int changes_num) +{ + return changes_num * sizeof(struct tt_change); +} + +static int tt_local_init(struct bat_priv *bat_priv) { if (bat_priv->tt_local_hash) return 1; @@ -120,9 +160,6 @@ int tt_local_init(struct bat_priv *bat_priv) if (!bat_priv->tt_local_hash) return 0;
- atomic_set(&bat_priv->tt_local_changed, 0); - tt_local_start_timer(bat_priv); - return 1; }
@@ -131,40 +168,24 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) struct bat_priv *bat_priv = netdev_priv(soft_iface); struct tt_local_entry *tt_local_entry; struct tt_global_entry *tt_global_entry; - int required_bytes;
spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->tt_lhash_lock);
if (tt_local_entry) { tt_local_entry->last_seen = jiffies; - return; - } - - /* only announce as many hosts as possible in the batman-packet and - space in batman_packet->num_tt That also should give a limit to - MAC-flooding. */ - required_bytes = (bat_priv->num_local_tt + 1) * ETH_ALEN; - required_bytes += BAT_PACKET_LEN; - - if ((required_bytes > ETH_DATA_LEN) || - (atomic_read(&bat_priv->aggregated_ogms) && - required_bytes > MAX_AGGREGATION_BYTES) || - (bat_priv->num_local_tt + 1 > 255)) { - bat_dbg(DBG_ROUTES, bat_priv, - "Can't add new local tt entry (%pM): " - "number of local tt entries exceeds packet size\n", - addr); - return; + goto unlock; }
- bat_dbg(DBG_ROUTES, bat_priv, - "Creating new local tt entry: %pM\n", addr); - tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); if (!tt_local_entry) - return; + goto unlock; + + tt_local_event(bat_priv, TT_CHANGE_ADD, addr); + + bat_dbg(DBG_TT, bat_priv, + "Creating new local tt entry: %pM (ttvn: %d)\n", addr, + (uint8_t)atomic_read(&bat_priv->ttvn));
memcpy(tt_local_entry->addr, addr, ETH_ALEN); tt_local_entry->last_seen = jiffies; @@ -175,13 +196,9 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) else tt_local_entry->never_purge = 0;
- spin_lock_bh(&bat_priv->tt_lhash_lock); - hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry, &tt_local_entry->hash_entry); - bat_priv->num_local_tt++; - atomic_set(&bat_priv->tt_local_changed, 1); - + atomic_inc(&bat_priv->num_local_tt); spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ @@ -190,46 +207,60 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (tt_global_entry) - _tt_global_del_orig(bat_priv, tt_global_entry, - "local tt received"); + _tt_global_del(bat_priv, tt_global_entry, + "local tt received");
spin_unlock_bh(&bat_priv->tt_ghash_lock); + return; +unlock: + spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-int tt_local_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len) +int tt_changes_fill_buffer(struct bat_priv *bat_priv, + unsigned char *buff, int buff_len) { - struct hashtable_t *hash = bat_priv->tt_local_hash; - struct tt_local_entry *tt_local_entry; - struct hlist_node *node; - struct hlist_head *head; - int i, count = 0; - - spin_lock_bh(&bat_priv->tt_lhash_lock); - - for (i = 0; i < hash->size; i++) { - head = &hash->table[i]; + int count = 0, tot_changes = 0; + struct tt_change_node *entry, *safe;
- rcu_read_lock(); - hlist_for_each_entry_rcu(tt_local_entry, node, - head, hash_entry) { - if (buff_len < (count + 1) * ETH_ALEN) - break; + if (buff_len > 0) + tot_changes = buff_len / tt_len(1);
- memcpy(buff + (count * ETH_ALEN), tt_local_entry->addr, - ETH_ALEN); + spin_lock_bh(&bat_priv->tt_changes_list_lock); + atomic_set(&bat_priv->tt_local_changes, 0);
+ list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, + list) { + if (count < tot_changes) { + memcpy(buff + tt_len(count), + &entry->change, sizeof(struct tt_change)); count++; } - rcu_read_unlock(); + list_del(&entry->list); + kfree(entry); } + spin_unlock_bh(&bat_priv->tt_changes_list_lock); + + /* Keep the buffer for possible tt_request */ + spin_lock_bh(&bat_priv->tt_buff_lock); + kfree(bat_priv->tt_buff); + bat_priv->tt_buff_len = 0; + bat_priv->tt_buff = NULL; + /* We check whether this new OGM has no changes due to size + * problems */ + if (buff_len > 0) { + /** + * if kmalloc() fails we will reply with the full table + * instead of providing the diff + */ + bat_priv->tt_buff = kmalloc(buff_len, GFP_ATOMIC); + if (bat_priv->tt_buff) { + memcpy(bat_priv->tt_buff, buff, buff_len); + bat_priv->tt_buff_len = buff_len; + } + } + spin_unlock_bh(&bat_priv->tt_buff_lock);
- /* if we did not get all new local tts see you next time ;-) */ - if (count == bat_priv->num_local_tt) - atomic_set(&bat_priv->tt_local_changed, 0); - - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return count; + return tot_changes; }
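Tying tt_local_event() and tt_changes_fill_buffer() together: every local client add/del is queued as one change record (a flags byte plus the client MAC, see struct tt_change elsewhere in this patch), and once per OGM interval the queue is drained into the packet buffer while a copy is kept in bat_priv->tt_buff to answer later TT_REQUESTs. Purely as an illustration (made-up MAC addresses), the TT payload appended to an OGM after one client joined and another left would be:

struct tt_change diff[] = {
	{ .flags = TT_CHANGE_ADD, .addr = { 0x02, 0xba, 0x7e, 0x00, 0x00, 0x01 } },
	{ .flags = TT_CHANGE_DEL, .addr = { 0x02, 0xba, 0x7e, 0x00, 0x00, 0x02 } },
};
/* batman_packet->tt_num_changes = 2, payload length = tt_len(2) */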
int tt_local_seq_print_text(struct seq_file *seq, void *offset) @@ -261,8 +292,8 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) }
seq_printf(seq, "Locally retrieved addresses (from %s) " - "announced via TT:\n", - net_dev->name); + "announced via TT (TTVN: %u):\n", + net_dev->name, (uint8_t)atomic_read(&bat_priv->ttvn));
spin_lock_bh(&bat_priv->tt_lhash_lock);
@@ -309,54 +340,50 @@ out: return ret; }
-static void _tt_local_del(struct hlist_node *node, void *arg) +static void tt_local_entry_free(struct hlist_node *node, void *arg) { struct bat_priv *bat_priv = (struct bat_priv *)arg; void *data = container_of(node, struct tt_local_entry, hash_entry);
kfree(data); - bat_priv->num_local_tt--; - atomic_set(&bat_priv->tt_local_changed, 1); + atomic_dec(&bat_priv->num_local_tt); }
static void tt_local_del(struct bat_priv *bat_priv, - struct tt_local_entry *tt_local_entry, - char *message) + struct tt_local_entry *tt_local_entry, + char *message) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting local tt entry (%pM): %s\n", + bat_dbg(DBG_TT, bat_priv, "Deleting local tt entry (%pM): %s\n", tt_local_entry->addr, message);
+ atomic_dec(&bat_priv->num_local_tt); + hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry->addr); - _tt_local_del(&tt_local_entry->hash_entry, bat_priv); + + tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); }
-void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message) +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) { struct tt_local_entry *tt_local_entry;
spin_lock_bh(&bat_priv->tt_lhash_lock); - tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (tt_local_entry) + if (tt_local_entry) { + tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr); tt_local_del(bat_priv, tt_local_entry, message); - + } spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-static void tt_local_purge(struct work_struct *work) +static void tt_local_purge(struct bat_priv *bat_priv) { - struct delayed_work *delayed_work = - container_of(work, struct delayed_work, work); - struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, tt_work); struct hashtable_t *hash = bat_priv->tt_local_hash; struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; - unsigned long timeout; int i;
spin_lock_bh(&bat_priv->tt_lhash_lock); @@ -369,32 +396,53 @@ static void tt_local_purge(struct work_struct *work) if (tt_local_entry->never_purge) continue;
- timeout = tt_local_entry->last_seen; - timeout += TT_LOCAL_TIMEOUT * HZ; - - if (time_before(jiffies, timeout)) + if (!is_out_of_time(tt_local_entry->last_seen, + TT_LOCAL_TIMEOUT * 1000)) continue;
+ tt_local_event(bat_priv, TT_CHANGE_DEL, + tt_local_entry->addr); tt_local_del(bat_priv, tt_local_entry, - "address timed out"); + "address timed out"); } }
spin_unlock_bh(&bat_priv->tt_lhash_lock); - tt_local_start_timer(bat_priv); }
-void tt_local_free(struct bat_priv *bat_priv) +static void tt_local_table_free(struct bat_priv *bat_priv) { + struct hashtable_t *hash; + int i; + spinlock_t *list_lock; /* protects write access to the hash lists */ + struct hlist_head *head; + struct hlist_node *node, *node_tmp; + struct tt_local_entry *tt_local_entry; + if (!bat_priv->tt_local_hash) return;
- cancel_delayed_work_sync(&bat_priv->tt_work); - hash_delete(bat_priv->tt_local_hash, _tt_local_del, bat_priv); + hash = bat_priv->tt_local_hash; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + list_lock = &hash->list_locks[i]; + + spin_lock_bh(list_lock); + hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, + head, hash_entry) { + hlist_del_rcu(node); + kfree(tt_local_entry); + } + spin_unlock_bh(list_lock); + } + + hash_destroy(hash); + bat_priv->tt_local_hash = NULL; }
-int tt_global_init(struct bat_priv *bat_priv) +static int tt_global_init(struct bat_priv *bat_priv) { if (bat_priv->tt_global_hash) return 1; @@ -407,74 +455,79 @@ int tt_global_init(struct bat_priv *bat_priv) return 1; }
-void tt_global_add_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len) +static void tt_changes_list_free(struct bat_priv *bat_priv) { - struct tt_global_entry *tt_global_entry; - struct tt_local_entry *tt_local_entry; - int tt_buff_count = 0; - unsigned char *tt_ptr; - - while ((tt_buff_count + 1) * ETH_ALEN <= tt_buff_len) { - spin_lock_bh(&bat_priv->tt_ghash_lock); - - tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); - tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr); + struct tt_change_node *entry, *safe;
- if (!tt_global_entry) { - spin_unlock_bh(&bat_priv->tt_ghash_lock); + spin_lock_bh(&bat_priv->tt_changes_list_lock);
- tt_global_entry = - kmalloc(sizeof(struct tt_global_entry), - GFP_ATOMIC); - - if (!tt_global_entry) - break; - - memcpy(tt_global_entry->addr, tt_ptr, ETH_ALEN); - - bat_dbg(DBG_ROUTES, bat_priv, - "Creating new global tt entry: " - "%pM (via %pM)\n", - tt_global_entry->addr, orig_node->orig); + list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, + list) { + list_del(&entry->list); + kfree(entry); + }
- spin_lock_bh(&bat_priv->tt_ghash_lock); - hash_add(bat_priv->tt_global_hash, compare_gtt, - choose_orig, tt_global_entry, - &tt_global_entry->hash_entry); + atomic_set(&bat_priv->tt_local_changes, 0); + spin_unlock_bh(&bat_priv->tt_changes_list_lock); +}
- } +/* caller must hold orig_node refcount */ +int tt_global_add(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *tt_addr, uint8_t ttvn) +{ + struct tt_global_entry *tt_global_entry; + struct tt_local_entry *tt_local_entry; + struct orig_node *orig_node_tmp;
+ spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, tt_addr); + + if (!tt_global_entry) { + tt_global_entry = + kmalloc(sizeof(struct tt_global_entry), + GFP_ATOMIC); + if (!tt_global_entry) + goto unlock; + memcpy(tt_global_entry->addr, tt_addr, ETH_ALEN); + /* Assign the new orig_node */ + atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - /* remove address from local hash if present */ - spin_lock_bh(&bat_priv->tt_lhash_lock); - - tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); - tt_local_entry = tt_local_hash_find(bat_priv, tt_ptr); - - if (tt_local_entry) - tt_local_del(bat_priv, tt_local_entry, - "global tt received"); + tt_global_entry->ttvn = ttvn; + atomic_inc(&orig_node->tt_size); + hash_add(bat_priv->tt_global_hash, compare_gtt, + choose_orig, tt_global_entry, + &tt_global_entry->hash_entry); + } else { + if (tt_global_entry->orig_node != orig_node) { + atomic_dec(&tt_global_entry->orig_node->tt_size); + orig_node_tmp = tt_global_entry->orig_node; + atomic_inc(&orig_node->refcount); + tt_global_entry->orig_node = orig_node; + tt_global_entry->ttvn = ttvn; + orig_node_free_ref(orig_node_tmp); + atomic_inc(&orig_node->tt_size); + } + }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- tt_buff_count++; - } + bat_dbg(DBG_TT, bat_priv, + "Creating new global tt entry: %pM (via %pM)\n", + tt_global_entry->addr, orig_node->orig);
- /* initialize, and overwrite if malloc succeeds */ - orig_node->tt_buff = NULL; - orig_node->tt_buff_len = 0; + /* remove address from local hash if present */ + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = tt_local_hash_find(bat_priv, tt_addr);
- if (tt_buff_len > 0) { - orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); - if (orig_node->tt_buff) { - memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); - orig_node->tt_buff_len = tt_buff_len; - } - } + if (tt_local_entry) + tt_local_del(bat_priv, tt_local_entry, + "global tt received"); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + return 1; +unlock: + spin_unlock_bh(&bat_priv->tt_ghash_lock); + return 0; }
int tt_global_seq_print_text(struct seq_file *seq, void *offset) @@ -508,17 +561,20 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) seq_printf(seq, "Globally announced TT entries received via the mesh %s\n", net_dev->name); + seq_printf(seq, " %-13s %s %-15s %s\n", + "Client", "(TTVN)", "Originator", "(Curr TTVN)");
spin_lock_bh(&bat_priv->tt_ghash_lock);
buf_size = 1; - /* Estimate length for: " * xx:xx:xx:xx:xx:xx via xx:xx:xx:xx:xx:xx\n"*/ + /* Estimate length for: " * xx:xx:xx:xx:xx:xx (ttvn) via + * xx:xx:xx:xx:xx:xx (cur_ttvn)\n"*/ for (i = 0; i < hash->size; i++) { head = &hash->table[i];
rcu_read_lock(); __hlist_for_each_rcu(node, head) - buf_size += 43; + buf_size += 59; rcu_read_unlock(); }
@@ -537,10 +593,14 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) rcu_read_lock(); hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) { - pos += snprintf(buff + pos, 44, - " * %pM via %pM\n", + pos += snprintf(buff + pos, 61, + " * %pM (%3u) via %pM (%3u)\n", tt_global_entry->addr, - tt_global_entry->orig_node->orig); + tt_global_entry->ttvn, + tt_global_entry->orig_node->orig, + (uint8_t) atomic_read( + &tt_global_entry->orig_node-> + last_ttvn)); } rcu_read_unlock(); } @@ -555,64 +615,80 @@ out: return ret; }
-static void _tt_global_del_orig(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - char *message) +static void _tt_global_del(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, + char *message) { - bat_dbg(DBG_ROUTES, bat_priv, + if (!tt_global_entry) + return; + + bat_dbg(DBG_TT, bat_priv, "Deleting global tt entry %pM (via %pM): %s\n", tt_global_entry->addr, tt_global_entry->orig_node->orig, message);
+ atomic_dec(&tt_global_entry->orig_node->tt_size); hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry->addr); kfree(tt_global_entry); }
+void tt_global_del(struct bat_priv *bat_priv, + struct orig_node *orig_node, + unsigned char *addr, char *message) +{ + struct tt_global_entry *tt_global_entry; + + spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, addr); + + if (tt_global_entry && tt_global_entry->orig_node == orig_node) { + atomic_dec(&orig_node->tt_size); + _tt_global_del(bat_priv, tt_global_entry, message); + } + spin_unlock_bh(&bat_priv->tt_ghash_lock); +} + void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, char *message) + struct orig_node *orig_node, char *message) { struct tt_global_entry *tt_global_entry; - int tt_buff_count = 0; - unsigned char *tt_ptr; + int i; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct hlist_node *node, *safe; + struct hlist_head *head;
- if (orig_node->tt_buff_len == 0) + if (!bat_priv->tt_global_hash) return;
spin_lock_bh(&bat_priv->tt_ghash_lock); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i];
- while ((tt_buff_count + 1) * ETH_ALEN <= orig_node->tt_buff_len) { - tt_ptr = orig_node->tt_buff + (tt_buff_count * ETH_ALEN); - tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr); - - if ((tt_global_entry) && - (tt_global_entry->orig_node == orig_node)) - _tt_global_del_orig(bat_priv, tt_global_entry, - message); - - tt_buff_count++; + hlist_for_each_entry_safe(tt_global_entry, node, safe, + head, hash_entry) { + if (tt_global_entry->orig_node == orig_node) + _tt_global_del(bat_priv, tt_global_entry, + message); + } } + atomic_set(&orig_node->tt_size, 0);
spin_unlock_bh(&bat_priv->tt_ghash_lock); - - orig_node->tt_buff_len = 0; - kfree(orig_node->tt_buff); - orig_node->tt_buff = NULL; }
-static void tt_global_del(struct hlist_node *node, void *arg) +static void tt_global_entry_free(struct hlist_node *node, void *arg) { void *data = container_of(node, struct tt_global_entry, hash_entry); - kfree(data); }
-void tt_global_free(struct bat_priv *bat_priv) +static void tt_global_table_free(struct bat_priv *bat_priv) { if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->tt_global_hash, tt_global_del, NULL); + hash_delete(bat_priv->tt_global_hash, tt_global_entry_free, NULL); bat_priv->tt_global_hash = NULL; }
@@ -636,3 +712,686 @@ out: spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; } + +/* Calculates the checksum of the local table of a given orig_node */ +uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node) +{ + uint16_t total = 0, total_one; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct tt_global_entry *tt_global_entry; + struct hlist_node *node; + struct hlist_head *head; + int i, j; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_global_entry, node, + head, hash_entry) { + if (compare_eth(tt_global_entry->orig_node, + orig_node)) { + total_one = 0; + for (j = 0; j < ETH_ALEN; j++) + total_one = crc16_byte(total_one, + tt_global_entry->addr[j]); + total ^= total_one; + } + } + rcu_read_unlock(); + } + + return total; +} + +/* Calculates the checksum of the local table */ +uint16_t tt_local_crc(struct bat_priv *bat_priv) +{ + uint16_t total = 0, total_one; + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; + struct hlist_node *node; + struct hlist_head *head; + int i, j; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_local_entry, node, + head, hash_entry) { + total_one = 0; + for (j = 0; j < ETH_ALEN; j++) + total_one = crc16_byte(total_one, + tt_local_entry->addr[j]); + total ^= total_one; + } + + rcu_read_unlock(); + } + + return total; +} + +static void tt_req_list_free(struct bat_priv *bat_priv) +{ + struct tt_req_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + list_del(&node->list); + kfree(node); + } + + spin_unlock_bh(&bat_priv->tt_req_list_lock); +} + +void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes) +{ + uint16_t tt_buff_len = tt_len(tt_num_changes); + + /* Replace the old buffer only if I received something in the + * last OGM (the OGM could carry no changes) */ + spin_lock_bh(&orig_node->tt_buff_lock); + if (tt_buff_len > 0) { + kfree(orig_node->tt_buff); + orig_node->tt_buff_len = 0; + orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); + if (orig_node->tt_buff) { + memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); + orig_node->tt_buff_len = tt_buff_len; + } + } + spin_unlock_bh(&orig_node->tt_buff_lock); +} + +static void tt_req_purge(struct bat_priv *bat_priv) +{ + struct tt_req_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + if (is_out_of_time(node->issued_at, + TT_REQUEST_TIMEOUT * 1000)) { + list_del(&node->list); + kfree(node); + } + } + spin_unlock_bh(&bat_priv->tt_req_list_lock); +} + +/* returns the pointer to the new tt_req_node struct if no request + * has already been issued for this orig_node, NULL otherwise */ +static struct tt_req_node *new_tt_req_node(struct bat_priv *bat_priv, + struct orig_node *orig_node) +{ + struct tt_req_node *tt_req_node_tmp, *tt_req_node = NULL; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry(tt_req_node_tmp, &bat_priv->tt_req_list, list) { + if (compare_eth(tt_req_node_tmp, orig_node) && + !is_out_of_time(tt_req_node_tmp->issued_at, + TT_REQUEST_TIMEOUT * 1000)) + goto unlock; + } + + tt_req_node = kmalloc(sizeof(struct tt_req_node), GFP_ATOMIC); + if (!tt_req_node) + goto unlock; + + memcpy(tt_req_node->addr, 
orig_node->orig, ETH_ALEN); + tt_req_node->issued_at = jiffies; + + list_add(&tt_req_node->list, &bat_priv->tt_req_list); +unlock: + spin_unlock_bh(&bat_priv->tt_req_list_lock); + return tt_req_node; +} + +int send_tt_request(struct bat_priv *bat_priv, struct orig_node *dst_orig_node, + uint8_t ttvn, uint16_t tt_crc, bool full_table) +{ + struct sk_buff *skb; + struct tt_query_packet *tt_request; + struct neigh_node *neigh_node = NULL; + struct hard_iface *primary_if; + struct tt_req_node *tt_req_node; + int ret = 0; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + /* The new tt_req will be issued only if I'm not waiting for a + * reply from the same orig_node yet */ + tt_req_node = new_tt_req_node(bat_priv, dst_orig_node); + if (!tt_req_node) + goto out; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + + tt_request = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet)); + + tt_request->packet_type = BAT_TT_QUERY; + tt_request->version = COMPAT_VERSION; + memcpy(tt_request->src, primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(tt_request->dst, dst_orig_node->orig, ETH_ALEN); + tt_request->ttl = TTL; + tt_request->ttvn = ttvn; + tt_request->tt_data = tt_crc; + tt_request->flags = TT_REQUEST; + + if (full_table) + tt_request->flags |= TT_FULL_TABLE; + + neigh_node = orig_node_get_router(dst_orig_node); + if (!neigh_node) + goto out; + + bat_dbg(DBG_TT, bat_priv, "Sending TT_REQUEST to %pM via %pM " + "[%c]\n", dst_orig_node->orig, neigh_node->addr, + (full_table ? 'F' : '.')); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = 0; + +out: + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (ret) { + kfree_skb(skb); + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_del(&tt_req_node->list); + spin_unlock_bh(&bat_priv->tt_req_list_lock); + kfree(tt_req_node); + } + return ret; +} + +static bool send_other_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + struct orig_node *req_dst_orig_node = NULL, *res_dst_orig_node = NULL; + struct neigh_node *neigh_node = NULL; + struct hard_iface *primary_if = NULL; + struct tt_global_entry *tt_global_entry; + struct hlist_node *node; + struct hlist_head *head; + struct hashtable_t *hash; + uint8_t orig_ttvn, req_ttvn; + int i, ret = false; + unsigned char *tt_buff; + bool full_table; + uint16_t tt_len, tt_tot, tt_count; + struct sk_buff *skb = NULL; + struct tt_query_packet *tt_response; + + bat_dbg(DBG_TT, bat_priv, + "Received TT_REQUEST from %pM for " + "ttvn: %u (%pM) [%c]\n", tt_request->src, + tt_request->ttvn, tt_request->dst, + (tt_request->flags & TT_FULL_TABLE ? 
'F' : '.')); + + /* Let's get the orig node of the REAL destination */ + req_dst_orig_node = get_orig_node(bat_priv, tt_request->dst); + if (!req_dst_orig_node) + goto out; + + res_dst_orig_node = get_orig_node(bat_priv, tt_request->src); + if (!res_dst_orig_node) + goto out; + + neigh_node = orig_node_get_router(res_dst_orig_node); + if (!neigh_node) + goto out; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + orig_ttvn = (uint8_t)atomic_read(&req_dst_orig_node->last_ttvn); + req_ttvn = tt_request->ttvn; + + /* I have not the requested data */ + if (orig_ttvn != req_ttvn || + tt_request->tt_data != req_dst_orig_node->tt_crc) + goto out; + + /* If it has explicitly been requested the full table */ + if (tt_request->flags & TT_FULL_TABLE || + !req_dst_orig_node->tt_buff) + full_table = true; + else + full_table = false; + + /* In this version, fragmentation is not implemented, then + * I'll send only one packet with as much TT entries as I can */ + if (!full_table) { + spin_lock_bh(&req_dst_orig_node->tt_buff_lock); + tt_len = req_dst_orig_node->tt_buff_len; + tt_tot = tt_len / sizeof(struct tt_change); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto unlock; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = req_ttvn; + + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Copy the last orig_node's OGM buffer */ + memcpy(tt_buff, req_dst_orig_node->tt_buff, + req_dst_orig_node->tt_buff_len); + + spin_unlock_bh(&req_dst_orig_node->tt_buff_lock); + } else { + tt_len = (uint16_t)atomic_read(&req_dst_orig_node->tt_size) * + ETH_ALEN; + if (sizeof(struct tt_query_packet) + tt_len > + primary_if->soft_iface->mtu) { + tt_len = primary_if->soft_iface->mtu - + sizeof(struct tt_query_packet); + tt_len -= tt_len % ETH_ALEN; + } + tt_tot = tt_len / ETH_ALEN; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = (uint8_t) + atomic_read(&req_dst_orig_node->last_ttvn); + + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Fill the packet with the orig_node's local table */ + hash = bat_priv->tt_global_hash; + tt_count = 0; + rcu_read_lock(); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + hlist_for_each_entry_rcu(tt_global_entry, node, + head, hash_entry) { + if (tt_count == tt_tot) + break; + if (tt_global_entry->orig_node == + req_dst_orig_node) { + memcpy(tt_buff + tt_count * ETH_ALEN, + tt_global_entry->addr, + ETH_ALEN); + tt_count++; + } + } + } + rcu_read_unlock(); + } + + tt_response->packet_type = BAT_TT_QUERY; + tt_response->version = COMPAT_VERSION; + memcpy(tt_response->src, req_dst_orig_node->orig, ETH_ALEN); + memcpy(tt_response->dst, tt_request->src, ETH_ALEN); + tt_response->tt_data = htons(tt_tot); + tt_response->flags = TT_RESPONSE; + + if (full_table) + tt_response->flags |= TT_FULL_TABLE; + + bat_dbg(DBG_TT, bat_priv, + "Sending TT_RESPONSE %pM via %pM for %pM (ttvn: %u)\n", + res_dst_orig_node->orig, neigh_node->addr, + req_dst_orig_node->orig, req_ttvn); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = true; + goto out; + +unlock: + spin_unlock_bh(&req_dst_orig_node->tt_buff_lock); + +out: + if (res_dst_orig_node) + 
orig_node_free_ref(res_dst_orig_node); + if (req_dst_orig_node) + orig_node_free_ref(req_dst_orig_node); + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (!ret) + kfree(skb); + return ret; + +} +static bool send_my_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + struct orig_node *orig_node = NULL; + struct neigh_node *neigh_node = NULL; + struct tt_local_entry *tt_local_entry; + struct hard_iface *primary_if = NULL; + struct hlist_node *node; + struct hlist_head *head; + struct hashtable_t *hash; + uint8_t my_ttvn, req_ttvn; + int i, ret = false; + unsigned char *tt_buff; + bool full_table; + uint16_t tt_len, tt_tot, tt_count; + struct sk_buff *skb = NULL; + struct tt_query_packet *tt_response; + + bat_dbg(DBG_TT, bat_priv, + "Received TT_REQUEST from %pM for " + "ttvn: %u (me) [%c]\n", tt_request->src, + tt_request->ttvn, + (tt_request->flags & TT_FULL_TABLE ? 'F' : '.')); + + + my_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); + req_ttvn = tt_request->ttvn; + + orig_node = get_orig_node(bat_priv, tt_request->src); + if (!orig_node) + goto out; + + neigh_node = orig_node_get_router(orig_node); + if (!neigh_node) + goto out; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + /* If the full table has been explicitly requested or the gap + * is too big send the whole local translation table */ + if (tt_request->flags & TT_FULL_TABLE || my_ttvn != req_ttvn || + !bat_priv->tt_buff) + full_table = true; + else + full_table = false; + + /* In this version, fragmentation is not implemented, then + * I'll send only one packet with as much TT entries as I can */ + if (!full_table) { + spin_lock_bh(&bat_priv->tt_buff_lock); + tt_len = bat_priv->tt_buff_len; + tt_tot = tt_len / sizeof(struct tt_change); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto unlock; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = req_ttvn; + + tt_buff = skb->data + sizeof(struct tt_query_packet); + memcpy(tt_buff, bat_priv->tt_buff, + bat_priv->tt_buff_len); + spin_unlock_bh(&bat_priv->tt_buff_lock); + } else { + tt_len = (uint16_t)atomic_read(&bat_priv->num_local_tt) * + ETH_ALEN; + if (sizeof(struct tt_query_packet) + tt_len > + bat_priv->primary_if->soft_iface->mtu) { + tt_len = bat_priv->primary_if->soft_iface->mtu - + sizeof(struct tt_query_packet); + tt_len -= tt_len % ETH_ALEN; + } + tt_tot = tt_len / ETH_ALEN; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Fill the packet with the local table */ + tt_response->ttvn = + (uint8_t)atomic_read(&bat_priv->ttvn); + + hash = bat_priv->tt_local_hash; + tt_count = 0; + rcu_read_lock(); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + hlist_for_each_entry_rcu(tt_local_entry, node, + head, hash_entry) { + if (tt_count == tt_tot) + break; + memcpy(tt_buff + tt_count * ETH_ALEN, + tt_local_entry->addr, + ETH_ALEN); + tt_count++; + } + } + rcu_read_unlock(); + } + + tt_response->packet_type = BAT_TT_QUERY; + tt_response->version = COMPAT_VERSION; + memcpy(tt_response->src, primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(tt_response->dst, 
tt_request->src, ETH_ALEN); + tt_response->tt_data = htons(tt_tot); + tt_response->flags = TT_RESPONSE; + + if (full_table) + tt_response->flags |= TT_FULL_TABLE; + + bat_dbg(DBG_TT, bat_priv, + "Sending TT_RESPONSE to %pM via %pM [%c]\n", + orig_node->orig, neigh_node->addr, + (tt_response->flags & TT_FULL_TABLE ? 'F' : '.')); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = true; + goto out; + +unlock: + spin_unlock_bh(&bat_priv->tt_buff_lock); +out: + if (orig_node) + orig_node_free_ref(orig_node); + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (!ret) + kfree(skb); + /* This packet was for me, so it doesn't need to be re-routed */ + return true; +} + +bool send_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + if (is_my_mac(tt_request->dst)) + return send_my_tt_response(bat_priv, tt_request); + else + return send_other_tt_response(bat_priv, tt_request); +} + +/* Substitute the TT response source's table with the newone carried by the + * packet */ +static void _tt_fill_gtable(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *tt_buff, + uint16_t table_size, uint8_t ttvn) +{ + int count; + unsigned char *tt_ptr; + + for (count = 0; count < table_size; count++) { + tt_ptr = tt_buff + (count * ETH_ALEN); + + /* If we fail to allocate a new entry we return immediatly */ + if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn)) + return; + } + atomic_set(&orig_node->last_ttvn, ttvn); +} + +static void tt_fill_gtable(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response) +{ + struct orig_node *orig_node = NULL; + + orig_node = orig_hash_find(bat_priv, tt_response->src); + if (!orig_node) + goto out; + + /* Purge the old table first.. */ + tt_global_del_orig(bat_priv, orig_node, "Received full table"); + + _tt_fill_gtable(bat_priv, orig_node, + ((unsigned char *)tt_response) + + sizeof(struct tt_query_packet), + tt_response->tt_data, + tt_response->ttvn); + + spin_lock_bh(&orig_node->tt_buff_lock); + kfree(orig_node->tt_buff); + orig_node->tt_buff_len = 0; + orig_node->tt_buff = NULL; + spin_unlock_bh(&orig_node->tt_buff_lock); + +out: + if (orig_node) + orig_node_free_ref(orig_node); +} + +void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, + uint16_t tt_num_changes, uint8_t ttvn, + struct tt_change *tt_change) +{ + int i; + + for (i = 0; i < tt_num_changes; i++) { + if ((tt_change + i)->flags & TT_CHANGE_DEL) + tt_global_del(bat_priv, orig_node, + (tt_change + i)->addr, + "tt removed by changes"); + else + if (!tt_global_add(bat_priv, orig_node, + (tt_change + i)->addr, ttvn)) + return; + } + + tt_save_orig_buffer(bat_priv, orig_node, (unsigned char *)tt_change, + tt_num_changes); + atomic_set(&orig_node->last_ttvn, ttvn); +} + +bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr) +{ + struct tt_local_entry *tt_local_entry; + + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = tt_local_hash_find(bat_priv, addr); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + + if (tt_local_entry) + return true; + return false; +} + +void handle_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response) +{ + struct tt_req_node *node, *safe; + struct orig_node *orig_node = NULL; + + bat_dbg(DBG_TT, bat_priv, "Received TT_RESPONSE from %pM for " + "ttvn %d t_size: %d [%c]\n", + tt_response->src, tt_response->ttvn, + tt_response->tt_data, + (tt_response->flags & TT_FULL_TABLE ? 
'F' : '.')); + + orig_node = orig_hash_find(bat_priv, tt_response->src); + if (!orig_node) + goto out; + + if (tt_response->flags & TT_FULL_TABLE) + tt_fill_gtable(bat_priv, tt_response); + else + tt_update_changes(bat_priv, orig_node, tt_response->tt_data, + tt_response->ttvn, + (struct tt_change *)(tt_response + 1)); + + /* Delete the tt_req_node from pending tt_requests list */ + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + if (!compare_eth(node->addr, tt_response->src)) + continue; + list_del(&node->list); + kfree(node); + } + spin_unlock_bh(&bat_priv->tt_req_list_lock); + + /* Recalculate the CRC for this orig_node and store it */ + spin_lock_bh(&bat_priv->tt_ghash_lock); + orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + spin_unlock_bh(&bat_priv->tt_ghash_lock); +out: + if (orig_node) + orig_node_free_ref(orig_node); +} + +int tt_init(struct bat_priv *bat_priv) +{ + if (!tt_local_init(bat_priv)) + return 0; + + if (!tt_global_init(bat_priv)) + return 0; + + tt_start_timer(bat_priv); + + return 1; +} + +void tt_free(struct bat_priv *bat_priv) +{ + cancel_delayed_work_sync(&bat_priv->tt_work); + + tt_local_table_free(bat_priv); + tt_global_table_free(bat_priv); + tt_req_list_free(bat_priv); + tt_changes_list_free(bat_priv); + + kfree(bat_priv->tt_buff); +} + +static void tt_purge(struct work_struct *work) +{ + struct delayed_work *delayed_work = + container_of(work, struct delayed_work, work); + struct bat_priv *bat_priv = + container_of(delayed_work, struct bat_priv, tt_work); + + tt_local_purge(bat_priv); + tt_req_purge(bat_priv); + + tt_start_timer(bat_priv); +} diff --git a/translation-table.h b/translation-table.h index 46152c3..f203b49 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,22 +22,44 @@ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
-int tt_local_init(struct bat_priv *bat_priv); +int tt_len(int changes_num); +void tt_changes_primary_if(struct bat_priv *bat_priv, uint8_t *old_addr, + uint8_t *new_addr); +int tt_changes_fill_buffer(struct bat_priv *bat_priv, + unsigned char *buff, int buff_len); +int tt_init(struct bat_priv *bat_priv); void tt_local_add(struct net_device *soft_iface, uint8_t *addr); void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message); -int tt_local_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len); + uint8_t *addr, char *message); int tt_local_seq_print_text(struct seq_file *seq, void *offset); -void tt_local_free(struct bat_priv *bat_priv); -int tt_global_init(struct bat_priv *bat_priv); void tt_global_add_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, - unsigned char *tt_buff, int tt_buff_len); + struct orig_node *orig_node, + unsigned char *tt_buff, int tt_buff_len); +int tt_global_add(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *addr, + uint8_t ttvn); int tt_global_seq_print_text(struct seq_file *seq, void *offset); void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, char *message); -void tt_global_free(struct bat_priv *bat_priv); + struct orig_node *orig_node, char *message); +void tt_global_del(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *addr, + char *message); struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr); +void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, + unsigned char *tt_buff, uint8_t tt_num_changes); +uint16_t tt_local_crc(struct bat_priv *bat_priv); +uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node); +void tt_free(struct bat_priv *bat_priv); +int send_tt_request(struct bat_priv *bat_priv, + struct orig_node *dst_orig_node, uint8_t hvn, + uint16_t tt_crc, bool full_table); +bool send_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request); +void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, + uint16_t tt_num_changes, uint8_t ttvn, + struct tt_change *tt_change); +bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr); +void handle_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index fab70e8..0848fcc 100644 --- a/types.h +++ b/types.h @@ -75,8 +75,12 @@ struct orig_node { unsigned long batman_seqno_reset; uint8_t gw_flags; uint8_t flags; + atomic_t last_ttvn; /* last seen translation table version number */ + uint16_t tt_crc; unsigned char *tt_buff; int16_t tt_buff_len; + spinlock_t tt_buff_lock; /* protects tt_buff */ + atomic_t tt_size; uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -94,10 +98,16 @@ struct orig_node { spinlock_t ogm_cnt_lock; /* bcast_seqno_lock protects bcast_bits, last_bcast_seqno */ spinlock_t bcast_seqno_lock; + spinlock_t tt_list_lock; /* protects tt_list */ atomic_t bond_candidates; struct list_head bond_list; };
+struct tt_change { + uint8_t flags; + uint8_t addr[ETH_ALEN]; +}; + struct gw_node { struct hlist_node list; struct orig_node *orig_node; @@ -145,6 +155,9 @@ struct bat_priv { atomic_t bcast_seqno; atomic_t bcast_queue_left; atomic_t batman_queue_left; + atomic_t ttvn; /* tranlation table version number */ + atomic_t tt_ogm_append_cnt; + atomic_t tt_local_changes; /* changes registered in a OGM interval */ char num_ifaces; struct debug_log *debug_log; struct kobject *mesh_obj; @@ -153,22 +166,30 @@ struct bat_priv { struct hlist_head forw_bcast_list; struct hlist_head gw_list; struct hlist_head softif_neigh_vids; + struct list_head tt_changes_list; /* tracks changes in a OGM int */ struct list_head vis_send_list; struct hashtable_t *orig_hash; struct hashtable_t *tt_local_hash; struct hashtable_t *tt_global_hash; + struct list_head tt_req_list; /* list of pending tt_requests */ struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ + spinlock_t tt_changes_list_lock; /* protects tt_changes */ spinlock_t tt_lhash_lock; /* protects tt_local_hash */ spinlock_t tt_ghash_lock; /* protects tt_global_hash */ + spinlock_t tt_req_list_lock; /* protects tt_req_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ spinlock_t softif_neigh_lock; /* protects soft-interface neigh list */ spinlock_t softif_neigh_vid_lock; /* protects soft-interface vid list */ - int16_t num_local_tt; - atomic_t tt_local_changed; + atomic_t num_local_tt; + /* Checksum of the local table, recomputed before sending a new OGM */ + atomic_t tt_crc; + unsigned char *tt_buff; + int16_t tt_buff_len; + spinlock_t tt_buff_lock; /* protects tt_buff */ struct delayed_work tt_work; struct delayed_work orig_work; struct delayed_work vis_work; @@ -202,9 +223,22 @@ struct tt_local_entry { struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; + uint8_t ttvn; + /* entry in the global table */ struct hlist_node hash_entry; };
+struct tt_change_node { + struct list_head list; + struct tt_change change; +}; + +struct tt_req_node { + uint8_t addr[ETH_ALEN]; + unsigned long issued_at; + struct list_head list; +}; + /** * forw_packet - structure for forw_list maintaining packets to be * send/forwarded diff --git a/unicast.c b/unicast.c index bab6076..d6cb0f3 100644 --- a/unicast.c +++ b/unicast.c @@ -326,6 +326,9 @@ find_router: unicast_packet->ttl = TTL; /* copy the destination for faster routing */ memcpy(unicast_packet->dest, orig_node->orig, ETH_ALEN); + /* set the destination tt version number */ + unicast_packet->ttvn = + (uint8_t)atomic_read(&orig_node->last_ttvn);
if (atomic_read(&bat_priv->fragmentation) && data_len + sizeof(struct unicast_packet) >
Hello Antonio,
thanks for the patches. I hope it's not inappropriate to reply to the first patch (I could not find a 0/3 mail in my mailbox). I've tested your patchset from commit 515c6ae18d90c7c403be0260048f72f42c2959b0 from your repo (date: Apr 27 14:28:07) in my KVM emulation setup [1]. I used a cross-like topology [2] and moved node 1 to other nodes (by killing/restarting wirefilters). The OGM intervals were set to a very long value (60000).
My findings were:
* code compiled fine on kernels 2.6.21 - 2.6.39
* roaming works fine and promptly when moving node 1 from 5 to 6
* roaming back and forth is also fine (5 -> 6 -> 5)
* roaming 2 steps in one period fails (5 -> 6 -> 7), but connectivity is back after one OGM period
* MAC address conflicts lead to very short roaming advertisement floods (10 packets per host), after which the fighting hosts remain silent for quite some time (> 30 seconds) - this is fine
* clean sys/debug interface tables, nice
* no "surprises" like panics or other weird things happened :)
Regarding the multi-step roaming: as we discussed, this might be solved by forwarding the advertisements to the already-known new host. However, this is not critical and can be worked on in a later patch, as the current patchset already improves the situation and does not make it worse in any way. Instead of calling it "fast roaming", we can then call it "rapid roaming". :D
Note that I only did functional tests. No code was harmed during the tests.
You may add my Acked-by signature if you want.
Nice job, thanks
Simon
[1] http://www.open-mesh.org/wiki/open-mesh/Emulation [2] http://packetmixer.de/setup.png
On Tue, May 10, 2011 at 03:02:09PM +0200, Antonio Quartulli wrote:
The old HNA mechanism has been completely rewritten from scratch. The new mechanism announces only the local translation-table changes, reducing the protocol overhead.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Client-announcement
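A minimal, self-contained sketch of the core idea (plain userspace C, not the kernel code): each OGM carries only a list of tt_change records describing which clients appeared or disappeared during the last interval, and the receiver replays that diff on its copy of the originator's table while tracking the translation table version number (ttvn). The struct layout and the ADD/DEL flags mirror the ones introduced by this patch; the toy array and table_add()/table_del() are hypothetical stand-ins for tt_global_add()/tt_global_del().

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ETH_ALEN	6
#define TT_CHANGE_ADD	0x00	/* client appeared behind the originator */
#define TT_CHANGE_DEL	0x01	/* client disappeared */
#define TABLE_MAX	32	/* toy table size, for this sketch only */

struct tt_change {
	uint8_t flags;			/* TT_CHANGE_ADD or TT_CHANGE_DEL */
	uint8_t addr[ETH_ALEN];		/* client MAC address */
};

static uint8_t table[TABLE_MAX][ETH_ALEN];
static int table_size;

/* toy stand-in for tt_global_add() */
static void table_add(const uint8_t *addr)
{
	if (table_size < TABLE_MAX)
		memcpy(table[table_size++], addr, ETH_ALEN);
}

/* toy stand-in for tt_global_del() */
static void table_del(const uint8_t *addr)
{
	int i;

	for (i = 0; i < table_size; i++) {
		if (memcmp(table[i], addr, ETH_ALEN))
			continue;
		/* replace the removed entry with the last one */
		table_size--;
		if (i != table_size)
			memcpy(table[i], table[table_size], ETH_ALEN);
		return;
	}
}

/* Replay the diff carried by one OGM, then remember its version number. */
static void apply_changes(const struct tt_change *changes, int num,
			  uint8_t ttvn, uint8_t *last_ttvn)
{
	int i;

	for (i = 0; i < num; i++) {
		if (changes[i].flags & TT_CHANGE_DEL)
			table_del(changes[i].addr);
		else
			table_add(changes[i].addr);
	}
	*last_ttvn = ttvn;	/* our copy is now in sync with this ttvn */
}

int main(void)
{
	uint8_t last_ttvn = 0;
	struct tt_change diff[] = {
		{ TT_CHANGE_ADD, { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } },
		{ TT_CHANGE_DEL, { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } },
	};

	apply_changes(diff, 2, 1, &last_ttvn);
	printf("entries: %d, last ttvn: %d\n", table_size, last_ttvn);
	return 0;
}

If a node misses an OGM or its table otherwise drifts out of sync, it cannot simply replay the next diff; the real code detects this via the ttvn and CRC carried in the OGM and falls back to a full-table TT_REQUEST/TT_RESPONSE exchange.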
Moreover:
- COMPAT_VERSION has been increased to 14
- batman-adv now depends on module "crc16" for tt crc computation
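The tt_local_crc()/tt_global_crc() bodies are not part of the hunks quoted here, so the following is only a rough sketch of why crc16 is needed: fold every announced client MAC into a 16-bit checksum with the kernel's crc16() helper. The flat clients[] array and the name tt_crc_sketch are illustrative assumptions; the real functions walk the local/global hash tables instead.

#include <linux/crc16.h>
#include <linux/if_ether.h>
#include <linux/types.h>

/* minimal sketch: checksum over a flat list of announced client MACs */
static u16 tt_crc_sketch(const u8 (*clients)[ETH_ALEN], int num_clients)
{
	u16 crc = 0;
	int i;

	for (i = 0; i < num_clients; i++)
		crc = crc16(crc, clients[i], ETH_ALEN);

	return crc;
}

The resulting value travels in the new tt_crc field of the OGM (and in TT_REQUESTs), so a receiver can compare it against the CRC it computes over its own copy of that originator's table and ask for fresh data when they differ.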
Signed-off-by: Antonio Quartulli ordex@autistici.org
 aggregation.c       |   23 +-
 aggregation.h       |    6 +-
 bat_sysfs.c         |    2 +-
 hard-interface.c    |   13 +-
 main.c              |   13 +-
 main.h              |   14 +-
 originator.c        |    8 +-
 packet.h            |   34 ++-
 routing.c           |  227 ++++++++---
 routing.h           |   10 +-
 send.c              |   90 +++-
 send.h              |    2 +-
 soft-interface.c    |   11 +-
 translation-table.c | 1135 ++++++++++++++++++++++++++++++++++++++++++---------
 translation-table.h |   42 ++-
 types.h             |   38 ++-
 unicast.c           |    3 +
 17 files changed, 1356 insertions(+), 315 deletions(-)
diff --git a/aggregation.c b/aggregation.c index 9b94590..de59b5f 100644 --- a/aggregation.c +++ b/aggregation.c @@ -20,16 +20,11 @@ */
#include "main.h" +#include "translation-table.h" #include "aggregation.h" #include "send.h" #include "routing.h"
-/* calculate the size of the tt information for a given packet */ -static int tt_len(struct batman_packet *batman_packet) -{
- return batman_packet->num_tt * ETH_ALEN;
-}
/* return true if new_packet can be aggregated with forw_packet */ static bool can_aggregate_with(struct batman_packet *new_batman_packet, int packet_len, @@ -255,18 +250,20 @@ void receive_aggr_bat_packet(struct ethhdr *ethhdr, unsigned char *packet_buff, batman_packet = (struct batman_packet *)packet_buff;
do {
/* network to host order for our 32bit seqno, and the
orig_interval. */
/* network to host order for our 32bit seqno and the
orig_interval */
batman_packet->seqno = ntohl(batman_packet->seqno);
batman_packet->tt_crc = ntohs(batman_packet->tt_crc);
tt_buff = packet_buff + buff_pos + BAT_PACKET_LEN;
receive_bat_packet(ethhdr, batman_packet,
tt_buff, tt_len(batman_packet),
if_incoming);
buff_pos += BAT_PACKET_LEN + tt_len(batman_packet);
receive_bat_packet(ethhdr, batman_packet, tt_buff, if_incoming);
buff_pos += BAT_PACKET_LEN +
tt_len(batman_packet->tt_num_changes);
batman_packet = (struct batman_packet *)
	(packet_buff + buff_pos);
} while (aggregated_packet(buff_pos, packet_len,
batman_packet->num_tt));
batman_packet->tt_num_changes));
} diff --git a/aggregation.h b/aggregation.h index 7e6d72f..c631a4c 100644 --- a/aggregation.h +++ b/aggregation.h @@ -25,9 +25,11 @@ #include "main.h"
/* is there another aggregated packet here? */ -static inline int aggregated_packet(int buff_pos, int packet_len, int num_tt) +static inline int aggregated_packet(int buff_pos, int packet_len,
int tt_num_changes)
{
- int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_tt * ETH_ALEN);
int next_buff_pos = buff_pos + BAT_PACKET_LEN + (tt_num_changes *
sizeof(struct tt_change));
return (next_buff_pos <= packet_len) && (next_buff_pos <= MAX_AGGREGATION_BYTES);
diff --git a/bat_sysfs.c b/bat_sysfs.c index 497a070..5c85834 100644 --- a/bat_sysfs.c +++ b/bat_sysfs.c @@ -368,7 +368,7 @@ BAT_ATTR_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE, static BAT_ATTR(gw_bandwidth, S_IRUGO | S_IWUSR, show_gw_bwidth, store_gw_bwidth); #ifdef CONFIG_BATMAN_ADV_DEBUG -BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 3, NULL); +BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 7, NULL); #endif
static struct bat_attribute *mesh_attrs[] = { diff --git a/hard-interface.c b/hard-interface.c index dfbfccc..69ef99a 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -152,12 +152,6 @@ static void primary_if_select(struct bat_priv *bat_priv, batman_packet->ttl = TTL;
primary_if_update_addr(bat_priv);
- /***
* hacky trick to make sure that we send the TT information via
* our new primary interface
*/
- atomic_set(&bat_priv->tt_local_changed, 1);
}
static bool hardif_is_iface_up(struct hard_iface *hard_iface) @@ -339,7 +333,8 @@ int hardif_enable_interface(struct hard_iface *hard_iface, char *iface_name) batman_packet->flags = 0; batman_packet->ttl = 2; batman_packet->tq = TQ_MAX_VALUE;
- batman_packet->num_tt = 0;
batman_packet->tt_num_changes = 0;
batman_packet->ttvn = 0;
hard_iface->if_num = bat_priv->num_ifaces; bat_priv->num_ifaces++;
@@ -658,6 +653,10 @@ static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, case BAT_VIS: ret = recv_vis_packet(skb, hard_iface); break;
/* Translation table query (request or response) */
- case BAT_TT_QUERY:
ret = recv_tt_query(skb, hard_iface);
		break;
	default:
		ret = NET_RX_DROP;
	}
diff --git a/main.c b/main.c index 0a7cee0..edb3e07 100644 --- a/main.c +++ b/main.c @@ -86,6 +86,9 @@ int mesh_init(struct net_device *soft_iface) spin_lock_init(&bat_priv->forw_bcast_list_lock); spin_lock_init(&bat_priv->tt_lhash_lock); spin_lock_init(&bat_priv->tt_ghash_lock);
- spin_lock_init(&bat_priv->tt_changes_list_lock);
- spin_lock_init(&bat_priv->tt_req_list_lock);
- spin_lock_init(&bat_priv->tt_buff_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); spin_lock_init(&bat_priv->vis_list_lock);
@@ -96,14 +99,13 @@ int mesh_init(struct net_device *soft_iface) INIT_HLIST_HEAD(&bat_priv->forw_bcast_list); INIT_HLIST_HEAD(&bat_priv->gw_list); INIT_HLIST_HEAD(&bat_priv->softif_neigh_vids);
INIT_LIST_HEAD(&bat_priv->tt_changes_list);
INIT_LIST_HEAD(&bat_priv->tt_req_list);
if (originator_init(bat_priv) < 1) goto err;
- if (tt_local_init(bat_priv) < 1)
goto err;
- if (tt_global_init(bat_priv) < 1)
if (tt_init(bat_priv) < 1) goto err;
tt_local_add(soft_iface, soft_iface->dev_addr);
@@ -137,8 +139,7 @@ void mesh_free(struct net_device *soft_iface) gw_node_purge(bat_priv); originator_free(bat_priv);
- tt_local_free(bat_priv);
- tt_global_free(bat_priv);
tt_free(bat_priv);
softif_neigh_purge(bat_priv);
diff --git a/main.h b/main.h index 3ca3941..883e467 100644 --- a/main.h +++ b/main.h @@ -46,11 +46,19 @@ /* sliding packet range of received originator messages in squence numbers
- (should be a multiple of our word size) */
#define TQ_LOCAL_WINDOW_SIZE 64 +#define TT_REQUEST_TIMEOUT 3 /* seconds we have to keep pending tt_req */
#define TQ_GLOBAL_WINDOW_SIZE 5 #define TQ_LOCAL_BIDRECT_SEND_MINIMUM 1 #define TQ_LOCAL_BIDRECT_RECV_MINIMUM 1 #define TQ_TOTAL_BIDRECT_LIMIT 1
+#define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */
+/* Transtable change flags */ +#define TT_CHANGE_ADD 0x00 +#define TT_CHANGE_DEL 0x01
#define NUM_WORDS (TQ_LOCAL_WINDOW_SIZE / WORD_BIT_SIZE)
#define LOG_BUF_LEN 8192 /* has to be a power of 2 */ @@ -90,9 +98,9 @@
/* all messages related to routing / flooding / broadcasting / etc */ #define DBG_BATMAN 1 -/* route or tt entry added / changed / deleted */ -#define DBG_ROUTES 2 -#define DBG_ALL 3 +#define DBG_ROUTES 2 /* route added / changed / deleted */ +#define DBG_TT 4 /* translation table operations */ +#define DBG_ALL 7
/* diff --git a/originator.c b/originator.c index 080ec88..d4e26fd 100644 --- a/originator.c +++ b/originator.c @@ -145,6 +145,7 @@ static void orig_node_free_rcu(struct rcu_head *rcu) tt_global_del_orig(orig_node->bat_priv, orig_node, "originator timed out");
- kfree(orig_node->tt_buff); kfree(orig_node->bcast_own); kfree(orig_node->bcast_own_sum); kfree(orig_node);
@@ -213,6 +214,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) spin_lock_init(&orig_node->ogm_cnt_lock); spin_lock_init(&orig_node->bcast_seqno_lock); spin_lock_init(&orig_node->neigh_list_lock);
spin_lock_init(&orig_node->tt_buff_lock);
/* extra reference for return */ atomic_set(&orig_node->refcount, 2);
@@ -221,6 +223,8 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; orig_node->tt_buff = NULL;
- orig_node->tt_buff_len = 0;
- atomic_set(&orig_node->tt_size, 0); orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1
@@ -330,9 +334,7 @@ static bool purge_orig_node(struct bat_priv *bat_priv, if (purge_orig_neighbors(bat_priv, orig_node, &best_neigh_node)) { update_routes(bat_priv, orig_node,
best_neigh_node,
orig_node->tt_buff,
orig_node->tt_buff_len);
best_neigh_node);
		}
	}
diff --git a/packet.h b/packet.h index eda9965..14f501e 100644 --- a/packet.h +++ b/packet.h @@ -30,9 +30,10 @@ #define BAT_BCAST 0x04 #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 +#define BAT_TT_QUERY 0x07
/* this file is included by batctl which needs these defines */ -#define COMPAT_VERSION 12 +#define COMPAT_VERSION 14 #define DIRECTLINK 0x40 #define VIS_SERVER 0x20 #define PRIMARIES_FIRST_HOP 0x10 @@ -52,6 +53,11 @@ #define UNI_FRAG_HEAD 0x01 #define UNI_FRAG_LARGETAIL 0x02
+/* TT flags */ +#define TT_RESPONSE 0x00 +#define TT_REQUEST 0x01 +#define TT_FULL_TABLE 0x02
struct batman_packet { uint8_t packet_type; uint8_t version; /* batman version field */ @@ -61,7 +67,9 @@ struct batman_packet { uint8_t orig[6]; uint8_t prev_sender[6]; uint8_t ttl;
- uint8_t num_tt;
- uint8_t ttvn; /* translation table version number */
- uint16_t tt_crc;
- uint8_t tt_num_changes; uint8_t gw_flags; /* flags related to gateway class */ uint8_t align;
} __packed; @@ -101,6 +109,7 @@ struct unicast_packet { uint8_t version; /* batman version field */ uint8_t dest[6]; uint8_t ttl;
- uint8_t ttvn; /* destination translation table version number */
} __packed;
struct unicast_frag_packet { @@ -133,4 +142,25 @@ struct vis_packet { uint8_t sender_orig[6]; /* who sent or rebroadcasted this packet */ } __packed;
+struct tt_query_packet {
- uint8_t packet_type;
- uint8_t version; /* batman version field */
- uint8_t dst[ETH_ALEN];
- uint8_t ttl;
- uint8_t flags; /* this field is a combination of:
* - TT_REQUEST or TT_RESPONSE
* - TT_FULL_TABLE
*/
- uint8_t src[ETH_ALEN];
- uint8_t ttvn; /* if TT_REQUEST: ttvn that triggered the
* request
* if TT_RESPONSE: new ttvn for the src
* orig_node
*/
- uint16_t tt_data; /* if TT_REQUEST: crc associated with the
* ttvn
* if TT_RESPONSE: table_size
*/
+} __packed;
#endif /* _NET_BATMAN_ADV_PACKET_H_ */ diff --git a/routing.c b/routing.c index 8c403ce..80218fc 100644 --- a/routing.c +++ b/routing.c @@ -64,28 +64,55 @@ void slide_own_bcast_window(struct hard_iface *hard_iface) } }
-static void update_TT(struct bat_priv *bat_priv, struct orig_node *orig_node,
unsigned char *tt_buff, int tt_buff_len)
+static void update_transtable(struct bat_priv *bat_priv,
struct orig_node *orig_node,
unsigned char *tt_buff, uint8_t tt_num_changes,
uint8_t ttvn, uint16_t tt_crc)
{
- if ((tt_buff_len != orig_node->tt_buff_len) ||
((tt_buff_len > 0) &&
(orig_node->tt_buff_len > 0) &&
(memcmp(orig_node->tt_buff, tt_buff, tt_buff_len) != 0))) {
if (orig_node->tt_buff_len > 0)
tt_global_del_orig(bat_priv, orig_node,
"originator changed tt");
if ((tt_buff_len > 0) && (tt_buff))
tt_global_add_orig(bat_priv, orig_node,
tt_buff, tt_buff_len);
- uint8_t orig_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn);
- bool full_table = true;
- /* the ttvn increased by one -> we can apply the attached changes */
- if (ttvn - orig_ttvn == 1) {
/* the OGM could not contain the changes because they were too
* many to fit in one frame or because they have already been
* sent TT_OGM_APPEND_MAX times. In this case send a tt
* request */
if (!tt_num_changes) {
full_table = false;
goto request_table;
}
tt_update_changes(bat_priv, orig_node, tt_num_changes, ttvn,
(struct tt_change *)tt_buff);
/* Even if we received the crc into the OGM, we prefer
* to recompute it to spot any possible inconsistency
* in the global table */
spin_lock_bh(&bat_priv->tt_ghash_lock);
orig_node->tt_crc = tt_global_crc(bat_priv, orig_node);
spin_unlock_bh(&bat_priv->tt_ghash_lock);
- } else {
/* if we missed more than one change or our tables are not
* in sync anymore -> request fresh tt data */
if (ttvn != orig_ttvn || orig_node->tt_crc != tt_crc) {
+request_table:
bat_dbg(DBG_TT, bat_priv, "TT inconsistency for %pM. "
"Need to retrieve the correct information "
"(ttvn: %u last_ttvn: %u crc: %u last_crc: "
"%u num_changes: %u)\n", orig_node->orig, ttvn,
orig_ttvn, tt_crc, orig_node->tt_crc,
tt_num_changes);
send_tt_request(bat_priv, orig_node, ttvn, tt_crc,
full_table);
return;
}}
}
static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node,
struct neigh_node *neigh_node,
unsigned char *tt_buff, int tt_buff_len)
struct neigh_node *neigh_node)
{ struct neigh_node *curr_router;
@@ -93,11 +120,10 @@ static void update_route(struct bat_priv *bat_priv,
/* route deleted */ if ((curr_router) && (!neigh_node)) {
- bat_dbg(DBG_ROUTES, bat_priv, "Deleting route towards: %pM\n", orig_node->orig); tt_global_del_orig(bat_priv, orig_node,
"originator timed out");
"Deleted route towards originator");
/* route added */ } else if ((!curr_router) && (neigh_node)) {
@@ -105,9 +131,6 @@ static void update_route(struct bat_priv *bat_priv, bat_dbg(DBG_ROUTES, bat_priv, "Adding route towards: %pM (via %pM)\n", orig_node->orig, neigh_node->addr);
tt_global_add_orig(bat_priv, orig_node,
tt_buff, tt_buff_len);
- /* route changed */ } else { bat_dbg(DBG_ROUTES, bat_priv,
@@ -135,8 +158,7 @@ static void update_route(struct bat_priv *bat_priv,
void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node,
struct neigh_node *neigh_node, unsigned char *tt_buff,
int tt_buff_len)
struct neigh_node *neigh_node)
{ struct neigh_node *router = NULL;
@@ -146,11 +168,7 @@ void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, router = orig_node_get_router(orig_node);
if (router != neigh_node)
update_route(bat_priv, orig_node, neigh_node,
tt_buff, tt_buff_len);
- /* may be just TT changed */
- else
update_TT(bat_priv, orig_node, tt_buff, tt_buff_len);
update_route(bat_priv, orig_node, neigh_node);
out: if (router) @@ -363,14 +381,12 @@ static void update_orig(struct bat_priv *bat_priv, struct ethhdr *ethhdr, struct batman_packet *batman_packet, struct hard_iface *if_incoming,
unsigned char *tt_buff, int tt_buff_len,
char is_duplicate)
unsigned char *tt_buff, char is_duplicate)
{ struct neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; struct neigh_node *router = NULL; struct orig_node *orig_node_tmp; struct hlist_node *node;
int tmp_tt_buff_len; uint8_t bcast_own_sum_orig, bcast_own_sum_neigh;
bat_dbg(DBG_BATMAN, bat_priv, "update_originator(): "
@@ -435,9 +451,6 @@ static void update_orig(struct bat_priv *bat_priv,
bonding_candidate_add(orig_node, neigh_node);
- tmp_tt_buff_len = (tt_buff_len > batman_packet->num_tt * ETH_ALEN ?
batman_packet->num_tt * ETH_ALEN : tt_buff_len);
- /* if this neighbor already is our next hop there is nothing
router = orig_node_get_router(orig_node);
- to change */
@@ -467,15 +480,19 @@ static void update_orig(struct bat_priv *bat_priv, goto update_tt; }
- update_routes(bat_priv, orig_node, neigh_node,
tt_buff, tmp_tt_buff_len);
- goto update_gw;
- update_routes(bat_priv, orig_node, neigh_node);
update_tt:
- update_routes(bat_priv, orig_node, router,
tt_buff, tmp_tt_buff_len);
- /* I have to check for transtable changes only if the OGM has been
* sent through a primary interface */
- if (((batman_packet->orig != ethhdr->h_source) &&
(batman_packet->ttl > 2)) ||
(batman_packet->flags & PRIMARIES_FIRST_HOP))
update_transtable(bat_priv, orig_node, tt_buff,
batman_packet->tt_num_changes,
batman_packet->ttvn,
batman_packet->tt_crc);
-update_gw: if (orig_node->gw_flags != batman_packet->gw_flags) gw_node_update(bat_priv, orig_node, batman_packet->gw_flags);
@@ -597,7 +614,7 @@ out:
void receive_bat_packet(struct ethhdr *ethhdr, struct batman_packet *batman_packet,
unsigned char *tt_buff, int tt_buff_len,
unsigned char *tt_buff, struct hard_iface *if_incoming)
{ struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -636,12 +653,14 @@ void receive_bat_packet(struct ethhdr *ethhdr,
bat_dbg(DBG_BATMAN, bat_priv, "Received BATMAN packet via NB: %pM, IF: %s [%pM] "
"(from OG: %pM, via prev OG: %pM, seqno %d, tq %d, "
"TTL %d, V %d, IDF %d)\n",
"(from OG: %pM, via prev OG: %pM, seqno %d, ttvn %u, "
"crc %u, changes %u, td %d, TTL %d, V %d, IDF %d)\n",
ethhdr->h_source, if_incoming->net_dev->name,
if_incoming->net_dev->dev_addr, batman_packet->orig,
batman_packet->prev_sender, batman_packet->seqno,
batman_packet->tq, batman_packet->ttl, batman_packet->version,
batman_packet->ttvn, batman_packet->tt_crc,
batman_packet->tt_num_changes, batman_packet->tq,
batman_packet->ttl, batman_packet->version,
has_directlink_flag);
rcu_read_lock();
@@ -794,14 +813,14 @@ void receive_bat_packet(struct ethhdr *ethhdr, ((orig_node->last_real_seqno == batman_packet->seqno) && (orig_node->last_ttl - 3 <= batman_packet->ttl)))) update_orig(bat_priv, orig_node, ethhdr, batman_packet,
if_incoming, tt_buff, tt_buff_len, is_duplicate);
if_incoming, tt_buff, is_duplicate);
/* is single hop (direct) neighbor */ if (is_single_hop_neigh) {
/* mark direct link on incoming interface */ schedule_forward_packet(orig_node, ethhdr, batman_packet,
1, tt_buff_len, if_incoming);
1, if_incoming);
bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: " "rebroadcast neighbor packet with direct link flag\n");
@@ -824,7 +843,7 @@ void receive_bat_packet(struct ethhdr *ethhdr, bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: rebroadcast originator packet\n"); schedule_forward_packet(orig_node, ethhdr, batman_packet,
0, tt_buff_len, if_incoming);
0, if_incoming);
out_neigh: if ((orig_neigh_node) && (!is_single_hop_neigh)) @@ -1171,6 +1190,70 @@ static struct neigh_node *find_ifalter_router(struct orig_node *primary_orig, return router; }
+int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if) +{
- struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface);
- struct tt_query_packet *tt_query;
- struct ethhdr *ethhdr;
- int ret = NET_RX_DROP;
- /* drop packet if it has not necessary minimum size */
- if (unlikely(!pskb_may_pull(skb, sizeof(struct tt_query_packet))))
goto out;
- /* I could need to modify it */
- if (skb_cow(skb, sizeof(struct tt_query_packet)) < 0)
goto out;
- ethhdr = (struct ethhdr *)skb_mac_header(skb);
- /* packet with unicast indication but broadcast recipient */
- if (is_broadcast_ether_addr(ethhdr->h_dest))
goto out;
- /* packet with broadcast sender address */
- if (is_broadcast_ether_addr(ethhdr->h_source))
goto out;
- tt_query = (struct tt_query_packet *)skb->data;
- tt_query->tt_data = ntohs(tt_query->tt_data);
- if (tt_query->flags & TT_REQUEST) {
/* If we cannot provide an answer the tt_request is
* forwarded */
if (!send_tt_response(bat_priv, tt_query)) {
bat_dbg(DBG_TT, bat_priv,
"Routing TT_REQUEST to %pM [%c]\n",
tt_query->dst,
(tt_query->flags & TT_FULL_TABLE ? 'F' : '.'));
tt_query->tt_data = htons(tt_query->tt_data);
return route_unicast_packet(skb, recv_if);
}
ret = NET_RX_SUCCESS;
goto out;
- }
- /* packet needs to be linearised to access the TT changes records */
- if (skb_linearize(skb) < 0)
goto out;
- if (is_my_mac(tt_query->dst))
handle_tt_response(bat_priv, tt_query);
- else {
bat_dbg(DBG_TT, bat_priv,
"Routing TT_RESPONSE to %pM [%c]\n",
tt_query->dst,
(tt_query->flags & TT_FULL_TABLE ? 'F' : '.'));
tt_query->tt_data = htons(tt_query->tt_data);
return route_unicast_packet(skb, recv_if);
- }
- ret = NET_RX_SUCCESS;
+out:
- kfree_skb(skb);
- return ret;
+}
/* find a suitable router for this originator, and use
- bonding if possible. increases the found neighbors
- refcount.*/
@@ -1359,14 +1442,64 @@ out:
int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) {
struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); struct unicast_packet *unicast_packet; int hdr_size = sizeof(struct unicast_packet);
struct orig_node *orig_node;
struct ethhdr *ethhdr;
uint8_t curr_ttvn;
int16_t diff;
if (check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
unicast_packet = (struct unicast_packet *)skb->data;
if (is_my_mac(unicast_packet->dest))
curr_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn);
else {
orig_node = orig_hash_find(bat_priv, unicast_packet->dest);
if (!orig_node)
return NET_RX_DROP;
curr_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn);
orig_node_free_ref(orig_node);
}
diff = unicast_packet->ttvn - curr_ttvn;
/* Check whether I have to reroute the packet */
if (unicast_packet->packet_type == BAT_UNICAST &&
(diff < 0 && diff > -0xff/2)) {
/* Linearize the skb before accessing it */
if (skb_linearize(skb) < 0)
return NET_RX_DROP;
ethhdr = (struct ethhdr *)(skb->data +
sizeof(struct unicast_packet));
orig_node = transtable_search(bat_priv, ethhdr->h_dest);
if (!orig_node) {
if (!is_my_client(bat_priv, ethhdr->h_dest))
return NET_RX_DROP;
memcpy(unicast_packet->dest,
bat_priv->primary_if->net_dev->dev_addr,
ETH_ALEN);
} else {
memcpy(unicast_packet->dest, orig_node->orig,
ETH_ALEN);
curr_ttvn = (uint8_t)
atomic_read(&orig_node->last_ttvn);
orig_node_free_ref(orig_node);
}
bat_dbg(DBG_ROUTES, bat_priv, "TTVN mismatch (old_ttvn %u "
"new_ttvn %u)! Rerouting unicast packet (for %pM) to "
"%pM\n", ethhdr->h_dest, unicast_packet->dest);
unicast_packet->ttvn = curr_ttvn;
} /* packet for me */ if (is_my_mac(unicast_packet->dest)) { interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size);
diff --git a/routing.h b/routing.h index 870f298..6f6a5f8 100644 --- a/routing.h +++ b/routing.h @@ -24,12 +24,11 @@
void slide_own_bcast_window(struct hard_iface *hard_iface); void receive_bat_packet(struct ethhdr *ethhdr,
struct batman_packet *batman_packet,
unsigned char *tt_buff, int tt_buff_len,
struct hard_iface *if_incoming);
struct batman_packet *batman_packet,
unsigned char *tt_buff,
struct hard_iface *if_incoming);
void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node,
struct neigh_node *neigh_node, unsigned char *tt_buff,
int tt_buff_len);
struct neigh_node *neigh_node);
int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); @@ -37,6 +36,7 @@ int recv_ucast_frag_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bat_packet(struct sk_buff *skb, struct hard_iface *recv_if); +int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv, struct orig_node *orig_node, struct hard_iface *recv_if); diff --git a/send.c b/send.c index f30d0c6..aa0ad64 100644 --- a/send.c +++ b/send.c @@ -121,7 +121,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, /* adjust all flags and log packets */ while (aggregated_packet(buff_pos, forw_packet->packet_len,
batman_packet->num_tt)) {
batman_packet->tt_num_changes)) {
/* we might have aggregated direct link packets with an
- ordinary base packet */
@@ -136,17 +136,17 @@ static void send_packet_to_if(struct forw_packet *forw_packet, "Forwarding")); bat_dbg(DBG_BATMAN, bat_priv, "%s %spacket (originator %pM, seqno %d, TQ %d, TTL %d,"
" IDF %s) on interface %s [%pM]\n",
" IDF %s, hvn %d) on interface %s [%pM]\n", fwd_str, (packet_num > 0 ? "aggregated " : ""), batman_packet->orig, ntohl(batman_packet->seqno), batman_packet->tq, batman_packet->ttl, (batman_packet->flags & DIRECTLINK ? "on" : "off"),
hard_iface->net_dev->name,
batman_packet->ttvn, hard_iface->net_dev->name, hard_iface->net_dev->dev_addr);
buff_pos += sizeof(struct batman_packet) +
(batman_packet->num_tt * ETH_ALEN);
tt_len(batman_packet->tt_num_changes);

packet_num++;
batman_packet = (struct batman_packet *)
	(forw_packet->skb->data + buff_pos);
@@ -214,26 +214,17 @@ static void send_packet(struct forw_packet *forw_packet) rcu_read_unlock(); }
-static void rebuild_batman_packet(struct bat_priv *bat_priv,
struct hard_iface *hard_iface)
+static void realloc_packet_buffer(struct hard_iface *hard_iface,
int new_len)
{
int new_len; unsigned char *new_buff;
struct batman_packet *batman_packet;
new_len = sizeof(struct batman_packet) +
(bat_priv->num_local_tt * ETH_ALEN);
new_buff = kmalloc(new_len, GFP_ATOMIC);
/* keep old buffer if kmalloc should fail */ if (new_buff) { memcpy(new_buff, hard_iface->packet_buff, sizeof(struct batman_packet));
batman_packet = (struct batman_packet *)new_buff;
batman_packet->num_tt = tt_local_fill_buffer(bat_priv,
new_buff + sizeof(struct batman_packet),
new_len - sizeof(struct batman_packet));
kfree(hard_iface->packet_buff); hard_iface->packet_buff = new_buff;
@@ -241,6 +232,46 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, } }
+/* when calling this function (hard_iface == primary_if) has to be true */ +static void prepare_packet_buffer(struct bat_priv *bat_priv,
struct hard_iface *hard_iface)
+{
- int new_len;
- struct batman_packet *batman_packet;
- new_len = BAT_PACKET_LEN +
tt_len((uint8_t)atomic_read(&bat_priv->tt_local_changes));
- /* if we have too many changes for one packet don't send any
* and wait for the tt table request which will be fragmented */
- if (new_len > hard_iface->soft_iface->mtu)
new_len = BAT_PACKET_LEN;
- realloc_packet_buffer(hard_iface, new_len);
- batman_packet = (struct batman_packet *)hard_iface->packet_buff;
- atomic_set(&bat_priv->tt_crc, tt_local_crc(bat_priv));
- /* reset the sending counter */
- atomic_set(&bat_priv->tt_ogm_append_cnt, TT_OGM_APPEND_MAX);
- batman_packet->tt_num_changes = tt_changes_fill_buffer(bat_priv,
hard_iface->packet_buff + BAT_PACKET_LEN,
hard_iface->packet_len - BAT_PACKET_LEN);
+}
+static void reset_packet_buffer(struct bat_priv *bat_priv,
- struct hard_iface *hard_iface)
+{
- struct batman_packet *batman_packet;
- realloc_packet_buffer(hard_iface, BAT_PACKET_LEN);
- batman_packet = (struct batman_packet *)hard_iface->packet_buff;
- batman_packet->tt_num_changes = 0;
+}
void schedule_own_packet(struct hard_iface *hard_iface) { struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); @@ -266,14 +297,22 @@ void schedule_own_packet(struct hard_iface *hard_iface) if (hard_iface->if_status == IF_TO_BE_ACTIVATED) hard_iface->if_status = IF_ACTIVE;
- /* if local tt has changed and interface is a primary interface */
- if ((atomic_read(&bat_priv->tt_local_changed)) &&
(hard_iface == primary_if))
rebuild_batman_packet(bat_priv, hard_iface);
if (hard_iface == primary_if) {
/* if at least one change happened */
if (atomic_read(&bat_priv->tt_local_changes) > 0) {
prepare_packet_buffer(bat_priv, hard_iface);
/* Increment the TTVN only once per OGM interval */
atomic_inc(&bat_priv->ttvn);
}
/* if the changes have been sent enough times */
if (!atomic_dec_not_zero(&bat_priv->tt_ogm_append_cnt))
reset_packet_buffer(bat_priv, hard_iface);
}
/**
- NOTE: packet_buff might just have been re-allocated in
* rebuild_batman_packet()
* prepare_packet_buffer() or in reset_packet_buffer()
*/
batman_packet = (struct batman_packet *)hard_iface->packet_buff;
@@ -281,6 +320,9 @@ void schedule_own_packet(struct hard_iface *hard_iface) batman_packet->seqno = htonl((uint32_t)atomic_read(&hard_iface->seqno));
- batman_packet->ttvn = atomic_read(&bat_priv->ttvn);
- batman_packet->tt_crc = htons((uint16_t)atomic_read(&bat_priv->tt_crc));
- if (vis_server == VIS_TYPE_SERVER_SYNC) batman_packet->flags |= VIS_SERVER; else
@@ -309,13 +351,14 @@ void schedule_own_packet(struct hard_iface *hard_iface) void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet,
uint8_t directlink, int tt_buff_len,
uint8_t directlink, struct hard_iface *if_incoming)
{ struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); struct neigh_node *router; unsigned char in_tq, in_ttl, tq_avg = 0; unsigned long send_time;
uint8_t tt_num_changes;
if (batman_packet->ttl <= 1) { bat_dbg(DBG_BATMAN, bat_priv, "ttl exceeded\n");
@@ -326,6 +369,7 @@ void schedule_forward_packet(struct orig_node *orig_node,
in_tq = batman_packet->tq; in_ttl = batman_packet->ttl;
tt_num_changes = batman_packet->tt_num_changes;
batman_packet->ttl--; memcpy(batman_packet->prev_sender, ethhdr->h_source, ETH_ALEN);
@@ -358,6 +402,7 @@ void schedule_forward_packet(struct orig_node *orig_node, batman_packet->ttl);
batman_packet->seqno = htonl(batman_packet->seqno);
batman_packet->tt_crc = htons(batman_packet->tt_crc);
/* switch of primaries first hop flag when forwarding */ batman_packet->flags &= ~PRIMARIES_FIRST_HOP;
@@ -369,7 +414,8 @@ void schedule_forward_packet(struct orig_node *orig_node, send_time = forward_send_time(); add_bat_packet_to_list(bat_priv, (unsigned char *)batman_packet,
sizeof(struct batman_packet) + tt_buff_len,
sizeof(struct batman_packet) +
tt_len(tt_num_changes), if_incoming, 0, send_time);
}
diff --git a/send.h b/send.h index 247172d..842f4d1 100644 --- a/send.h +++ b/send.h @@ -29,7 +29,7 @@ void schedule_own_packet(struct hard_iface *hard_iface); void schedule_forward_packet(struct orig_node *orig_node, struct ethhdr *ethhdr, struct batman_packet *batman_packet,
uint8_t directlink, int tt_buff_len,
uint8_t directlink, struct hard_iface *if_outgoing);
int add_bcast_packet_to_list(struct bat_priv *bat_priv, struct sk_buff *skb); void send_outstanding_bat_packet(struct work_struct *work); diff --git a/soft-interface.c b/soft-interface.c index c76a33e..5c34bcc 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -542,7 +542,7 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) /* only modify transtable if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { tt_local_remove(bat_priv, dev->dev_addr,
"mac address changed");
tt_local_add(dev, addr->sa_data); }"mac address changed");
@@ -600,7 +600,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) if (curr_softif_neigh) goto dropped;
- /* TODO: check this for locks */
/* Register the client MAC in the transtable */ tt_local_add(soft_iface, ethhdr->h_source);
if (is_multicast_ether_addr(ethhdr->h_dest)) {
@@ -839,7 +839,12 @@ struct net_device *softif_create(char *name)
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1);
- atomic_set(&bat_priv->tt_local_changed, 0);
atomic_set(&bat_priv->ttvn, 0);
atomic_set(&bat_priv->tt_local_changes, 0);
atomic_set(&bat_priv->tt_ogm_append_cnt, 0);
bat_priv->tt_buff = NULL;
bat_priv->tt_buff_len = 0;
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0;
diff --git a/translation-table.c b/translation-table.c index 7b72966..bf3d3aa 100644 --- a/translation-table.c +++ b/translation-table.c @@ -23,13 +23,17 @@ #include "translation-table.h" #include "soft-interface.h" #include "hard-interface.h" +#include "send.h" #include "hash.h" #include "originator.h" +#include "routing.h"
-static void tt_local_purge(struct work_struct *work); -static void _tt_global_del_orig(struct bat_priv *bat_priv,
struct tt_global_entry *tt_global_entry,
char *message);
+#include <linux/crc16.h>
+static void _tt_global_del(struct bat_priv *bat_priv,
struct tt_global_entry *tt_global_entry,
char *message);
+static void tt_purge(struct work_struct *work);
/* returns 1 if they are the same mac addr */ static int compare_ltt(struct hlist_node *node, void *data2) @@ -47,14 +51,15 @@ static int compare_gtt(struct hlist_node *node, void *data2) return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
-static void tt_local_start_timer(struct bat_priv *bat_priv) +static void tt_start_timer(struct bat_priv *bat_priv) {
- INIT_DELAYED_WORK(&bat_priv->tt_work, tt_local_purge);
- queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, 10 * HZ);
- INIT_DELAYED_WORK(&bat_priv->tt_work, tt_purge);
- queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work,
msecs_to_jiffies(5000));
}
static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv,
void *data)
void *data)
{ struct hashtable_t *hash = bat_priv->tt_local_hash; struct hlist_head *head; @@ -82,7 +87,7 @@ static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, }
static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv,
void *data)
void *data)
{ struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_head *head; @@ -110,7 +115,42 @@ static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, return tt_global_entry_tmp; }
-int tt_local_init(struct bat_priv *bat_priv) +static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) +{
- unsigned long deadline;
- deadline = starting_time + msecs_to_jiffies(timeout);
- return time_after(jiffies, deadline);
+}
+static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr) +{
- struct tt_change_node *tt_change_node;
- tt_change_node = (struct tt_change_node *)
kmalloc(sizeof(struct tt_change_node), GFP_ATOMIC);
- if (!tt_change_node)
return;
- tt_change_node->change.flags = op;
- memcpy(tt_change_node->change.addr, addr, ETH_ALEN);
- spin_lock_bh(&bat_priv->tt_changes_list_lock);
- /* track the change in the OGM interval list */
- list_add_tail(&tt_change_node->list, &bat_priv->tt_changes_list);
- atomic_inc(&bat_priv->tt_local_changes);
- spin_unlock_bh(&bat_priv->tt_changes_list_lock);
- atomic_set(&bat_priv->tt_ogm_append_cnt, 0);
+}
+int tt_len(int changes_num) +{
- return changes_num * sizeof(struct tt_change);
+}
+static int tt_local_init(struct bat_priv *bat_priv) { if (bat_priv->tt_local_hash) return 1; @@ -120,9 +160,6 @@ int tt_local_init(struct bat_priv *bat_priv) if (!bat_priv->tt_local_hash) return 0;
- atomic_set(&bat_priv->tt_local_changed, 0);
- tt_local_start_timer(bat_priv);
- return 1;
}
@@ -131,40 +168,24 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) struct bat_priv *bat_priv = netdev_priv(soft_iface); struct tt_local_entry *tt_local_entry; struct tt_global_entry *tt_global_entry;
int required_bytes;
spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr);
spin_unlock_bh(&bat_priv->tt_lhash_lock);
if (tt_local_entry) { tt_local_entry->last_seen = jiffies;
return;
}
/* only announce as many hosts as possible in the batman-packet and
space in batman_packet->num_tt That also should give a limit to
MAC-flooding. */
required_bytes = (bat_priv->num_local_tt + 1) * ETH_ALEN;
required_bytes += BAT_PACKET_LEN;
if ((required_bytes > ETH_DATA_LEN) ||
(atomic_read(&bat_priv->aggregated_ogms) &&
required_bytes > MAX_AGGREGATION_BYTES) ||
(bat_priv->num_local_tt + 1 > 255)) {
bat_dbg(DBG_ROUTES, bat_priv,
"Can't add new local tt entry (%pM): "
"number of local tt entries exceeds packet size\n",
addr);
return;
}goto unlock;
- bat_dbg(DBG_ROUTES, bat_priv,
"Creating new local tt entry: %pM\n", addr);
- tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); if (!tt_local_entry)
return;
goto unlock;
tt_local_event(bat_priv, TT_CHANGE_ADD, addr);
bat_dbg(DBG_TT, bat_priv,
"Creating new local tt entry: %pM (ttvn: %d)\n", addr,
(uint8_t)atomic_read(&bat_priv->ttvn));
memcpy(tt_local_entry->addr, addr, ETH_ALEN); tt_local_entry->last_seen = jiffies;
@@ -175,13 +196,9 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) else tt_local_entry->never_purge = 0;
- spin_lock_bh(&bat_priv->tt_lhash_lock);
- hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry, &tt_local_entry->hash_entry);
- bat_priv->num_local_tt++;
- atomic_set(&bat_priv->tt_local_changed, 1);
atomic_inc(&bat_priv->num_local_tt); spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */
@@ -190,46 +207,60 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (tt_global_entry)
_tt_global_del_orig(bat_priv, tt_global_entry,
"local tt received");
_tt_global_del(bat_priv, tt_global_entry,
"local tt received");
spin_unlock_bh(&bat_priv->tt_ghash_lock);
return;
+unlock:
- spin_unlock_bh(&bat_priv->tt_lhash_lock);
}
-int tt_local_fill_buffer(struct bat_priv *bat_priv,
unsigned char *buff, int buff_len)
+int tt_changes_fill_buffer(struct bat_priv *bat_priv,
unsigned char *buff, int buff_len)
{
- struct hashtable_t *hash = bat_priv->tt_local_hash;
- struct tt_local_entry *tt_local_entry;
- struct hlist_node *node;
- struct hlist_head *head;
- int i, count = 0;
- spin_lock_bh(&bat_priv->tt_lhash_lock);
- for (i = 0; i < hash->size; i++) {
head = &hash->table[i];
- int count = 0, tot_changes = 0;
- struct tt_change_node *entry, *safe;
rcu_read_lock();
hlist_for_each_entry_rcu(tt_local_entry, node,
head, hash_entry) {
if (buff_len < (count + 1) * ETH_ALEN)
break;
- if (buff_len > 0)
tot_changes = buff_len / tt_len(1);
memcpy(buff + (count * ETH_ALEN), tt_local_entry->addr,
ETH_ALEN);
spin_lock_bh(&bat_priv->tt_changes_list_lock);
atomic_set(&bat_priv->tt_local_changes, 0);
list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list,
list) {
if (count < tot_changes) {
memcpy(buff + tt_len(count),
&entry->change, sizeof(struct tt_change)); count++;
}
rcu_read_unlock();
list_del(&entry->list);
}kfree(entry);
- spin_unlock_bh(&bat_priv->tt_changes_list_lock);
- /* Keep the buffer for possible tt_request */
- spin_lock_bh(&bat_priv->tt_buff_lock);
- kfree(bat_priv->tt_buff);
- bat_priv->tt_buff_len = 0;
- bat_priv->tt_buff = NULL;
- /* We check whether this new OGM has no changes due to size
* problems */
- if (buff_len > 0) {
/**
* if kmalloc() fails we will reply with the full table
* instead of providing the diff
*/
bat_priv->tt_buff = kmalloc(buff_len, GFP_ATOMIC);
if (bat_priv->tt_buff) {
memcpy(bat_priv->tt_buff, buff, buff_len);
bat_priv->tt_buff_len = buff_len;
}
- }
- spin_unlock_bh(&bat_priv->tt_buff_lock);
- /* if we did not get all new local tts see you next time ;-) */
- if (count == bat_priv->num_local_tt)
atomic_set(&bat_priv->tt_local_changed, 0);
- spin_unlock_bh(&bat_priv->tt_lhash_lock);
- return count;
- return tot_changes;
}
int tt_local_seq_print_text(struct seq_file *seq, void *offset) @@ -261,8 +292,8 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) }
seq_printf(seq, "Locally retrieved addresses (from %s) "
"announced via TT:\n",
net_dev->name);
"announced via TT (TTVN: %u):\n",
net_dev->name, (uint8_t)atomic_read(&bat_priv->ttvn));
spin_lock_bh(&bat_priv->tt_lhash_lock);
@@ -309,54 +340,50 @@ out: return ret; }
-static void _tt_local_del(struct hlist_node *node, void *arg) +static void tt_local_entry_free(struct hlist_node *node, void *arg) { struct bat_priv *bat_priv = (struct bat_priv *)arg; void *data = container_of(node, struct tt_local_entry, hash_entry);
kfree(data);
- bat_priv->num_local_tt--;
- atomic_set(&bat_priv->tt_local_changed, 1);
- atomic_dec(&bat_priv->num_local_tt);
}
static void tt_local_del(struct bat_priv *bat_priv,
struct tt_local_entry *tt_local_entry,
char *message)
struct tt_local_entry *tt_local_entry,
char *message)
{
- bat_dbg(DBG_ROUTES, bat_priv, "Deleting local tt entry (%pM): %s\n",
bat_dbg(DBG_TT, bat_priv, "Deleting local tt entry (%pM): %s\n", tt_local_entry->addr, message);
atomic_dec(&bat_priv->num_local_tt);
hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry->addr);
- _tt_local_del(&tt_local_entry->hash_entry, bat_priv);
- tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv);
}
-void tt_local_remove(struct bat_priv *bat_priv,
uint8_t *addr, char *message)
+void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) { struct tt_local_entry *tt_local_entry;
spin_lock_bh(&bat_priv->tt_lhash_lock);
tt_local_entry = tt_local_hash_find(bat_priv, addr);
if (tt_local_entry)
- if (tt_local_entry) {
tt_local_del(bat_priv, tt_local_entry, message);tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr);
- } spin_unlock_bh(&bat_priv->tt_lhash_lock);
}
-static void tt_local_purge(struct work_struct *work)
+static void tt_local_purge(struct bat_priv *bat_priv)
 {
-	struct delayed_work *delayed_work =
-		container_of(work, struct delayed_work, work);
-	struct bat_priv *bat_priv =
-		container_of(delayed_work, struct bat_priv, tt_work);
 	struct hashtable_t *hash = bat_priv->tt_local_hash;
 	struct tt_local_entry *tt_local_entry;
 	struct hlist_node *node, *node_tmp;
 	struct hlist_head *head;
-	unsigned long timeout;
 	int i;
spin_lock_bh(&bat_priv->tt_lhash_lock);
@@ -369,32 +396,53 @@ static void tt_local_purge(struct work_struct *work)
 			if (tt_local_entry->never_purge)
 				continue;
timeout = tt_local_entry->last_seen;
timeout += TT_LOCAL_TIMEOUT * HZ;
if (time_before(jiffies, timeout))
if (!is_out_of_time(tt_local_entry->last_seen,
TT_LOCAL_TIMEOUT * 1000)) continue;
tt_local_event(bat_priv, TT_CHANGE_DEL,
tt_local_entry->addr); tt_local_del(bat_priv, tt_local_entry,
"address timed out");
"address timed out");
} }
spin_unlock_bh(&bat_priv->tt_lhash_lock);
- tt_local_start_timer(bat_priv);
}
-void tt_local_free(struct bat_priv *bat_priv)
+static void tt_local_table_free(struct bat_priv *bat_priv)
 {
- struct hashtable_t *hash;
- int i;
- spinlock_t *list_lock; /* protects write access to the hash lists */
- struct hlist_head *head;
- struct hlist_node *node, *node_tmp;
- struct tt_local_entry *tt_local_entry;
- if (!bat_priv->tt_local_hash) return;
- cancel_delayed_work_sync(&bat_priv->tt_work);
- hash_delete(bat_priv->tt_local_hash, _tt_local_del, bat_priv);
- hash = bat_priv->tt_local_hash;
- for (i = 0; i < hash->size; i++) {
head = &hash->table[i];
list_lock = &hash->list_locks[i];
spin_lock_bh(list_lock);
hlist_for_each_entry_safe(tt_local_entry, node, node_tmp,
head, hash_entry) {
hlist_del_rcu(node);
kfree(tt_local_entry);
}
spin_unlock_bh(list_lock);
- }
- hash_destroy(hash);
- bat_priv->tt_local_hash = NULL;
}
-int tt_global_init(struct bat_priv *bat_priv)
+static int tt_global_init(struct bat_priv *bat_priv)
 {
 	if (bat_priv->tt_global_hash)
 		return 1;
@@ -407,74 +455,79 @@ int tt_global_init(struct bat_priv *bat_priv)
 	return 1;
 }
-void tt_global_add_orig(struct bat_priv *bat_priv,
struct orig_node *orig_node,
unsigned char *tt_buff, int tt_buff_len)
+static void tt_changes_list_free(struct bat_priv *bat_priv) {
- struct tt_global_entry *tt_global_entry;
- struct tt_local_entry *tt_local_entry;
- int tt_buff_count = 0;
- unsigned char *tt_ptr;
- while ((tt_buff_count + 1) * ETH_ALEN <= tt_buff_len) {
spin_lock_bh(&bat_priv->tt_ghash_lock);
tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN);
tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr);
- struct tt_change_node *entry, *safe;
if (!tt_global_entry) {
spin_unlock_bh(&bat_priv->tt_ghash_lock);
- spin_lock_bh(&bat_priv->tt_changes_list_lock);
tt_global_entry =
kmalloc(sizeof(struct tt_global_entry),
GFP_ATOMIC);
if (!tt_global_entry)
break;
memcpy(tt_global_entry->addr, tt_ptr, ETH_ALEN);
bat_dbg(DBG_ROUTES, bat_priv,
"Creating new global tt entry: "
"%pM (via %pM)\n",
tt_global_entry->addr, orig_node->orig);
- list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list,
list) {
list_del(&entry->list);
kfree(entry);
- }
spin_lock_bh(&bat_priv->tt_ghash_lock);
hash_add(bat_priv->tt_global_hash, compare_gtt,
choose_orig, tt_global_entry,
&tt_global_entry->hash_entry);
- atomic_set(&bat_priv->tt_local_changes, 0);
- spin_unlock_bh(&bat_priv->tt_changes_list_lock);
+}
}
+/* caller must hold orig_node refcount */
+int tt_global_add(struct bat_priv *bat_priv,
struct orig_node *orig_node,
unsigned char *tt_addr, uint8_t ttvn)
+{
struct tt_global_entry *tt_global_entry;
struct tt_local_entry *tt_local_entry;
struct orig_node *orig_node_tmp;
spin_lock_bh(&bat_priv->tt_ghash_lock);
tt_global_entry = tt_global_hash_find(bat_priv, tt_addr);
if (!tt_global_entry) {
tt_global_entry =
kmalloc(sizeof(struct tt_global_entry),
GFP_ATOMIC);
if (!tt_global_entry)
goto unlock;
memcpy(tt_global_entry->addr, tt_addr, ETH_ALEN);
/* Assign the new orig_node */
atomic_inc(&orig_node->refcount);
tt_global_entry->orig_node = orig_node;
spin_unlock_bh(&bat_priv->tt_ghash_lock);
/* remove address from local hash if present */
spin_lock_bh(&bat_priv->tt_lhash_lock);
tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN);
tt_local_entry = tt_local_hash_find(bat_priv, tt_ptr);
if (tt_local_entry)
tt_local_del(bat_priv, tt_local_entry,
"global tt received");
tt_global_entry->ttvn = ttvn;
atomic_inc(&orig_node->tt_size);
hash_add(bat_priv->tt_global_hash, compare_gtt,
choose_orig, tt_global_entry,
&tt_global_entry->hash_entry);
- } else {
if (tt_global_entry->orig_node != orig_node) {
atomic_dec(&tt_global_entry->orig_node->tt_size);
orig_node_tmp = tt_global_entry->orig_node;
atomic_inc(&orig_node->refcount);
tt_global_entry->orig_node = orig_node;
tt_global_entry->ttvn = ttvn;
orig_node_free_ref(orig_node_tmp);
atomic_inc(&orig_node->tt_size);
}
- }
spin_unlock_bh(&bat_priv->tt_lhash_lock);
- spin_unlock_bh(&bat_priv->tt_ghash_lock);
tt_buff_count++;
- }
- bat_dbg(DBG_TT, bat_priv,
"Creating new global tt entry: %pM (via %pM)\n",
tt_global_entry->addr, orig_node->orig);
- /* initialize, and overwrite if malloc succeeds */
- orig_node->tt_buff = NULL;
- orig_node->tt_buff_len = 0;
- /* remove address from local hash if present */
- spin_lock_bh(&bat_priv->tt_lhash_lock);
- tt_local_entry = tt_local_hash_find(bat_priv, tt_addr);
- if (tt_buff_len > 0) {
orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC);
if (orig_node->tt_buff) {
memcpy(orig_node->tt_buff, tt_buff, tt_buff_len);
orig_node->tt_buff_len = tt_buff_len;
}
- }
- if (tt_local_entry)
tt_local_del(bat_priv, tt_local_entry,
"global tt received");
- spin_unlock_bh(&bat_priv->tt_lhash_lock);
- return 1;
+unlock:
- spin_unlock_bh(&bat_priv->tt_ghash_lock);
- return 0;
}
int tt_global_seq_print_text(struct seq_file *seq, void *offset)
@@ -508,17 +561,20 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset)
 	seq_printf(seq,
 		   "Globally announced TT entries received via the mesh %s\n",
 		   net_dev->name);
seq_printf(seq, " %-13s %s %-15s %s\n",
"Client", "(TTVN)", "Originator", "(Curr TTVN)");
spin_lock_bh(&bat_priv->tt_ghash_lock);
buf_size = 1;
- /* Estimate length for: " * xx:xx:xx:xx:xx:xx via xx:xx:xx:xx:xx:xx\n"*/
/* Estimate length for: " * xx:xx:xx:xx:xx:xx (ttvn) via
* xx:xx:xx:xx:xx:xx (cur_ttvn)\n"*/
	for (i = 0; i < hash->size; i++) {
		head = &hash->table[i];

		rcu_read_lock();
		__hlist_for_each_rcu(node, head)
-			buf_size += 43;
+			buf_size += 59;
		rcu_read_unlock();
	}
@@ -537,10 +593,14 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset)
 		rcu_read_lock();
 		hlist_for_each_entry_rcu(tt_global_entry, node,
 					 head, hash_entry) {
-			pos += snprintf(buff + pos, 44,
-					" * %pM via %pM\n",
-					tt_global_entry->addr,
-					tt_global_entry->orig_node->orig);
+			pos += snprintf(buff + pos, 61,
+					" * %pM (%3u) via %pM (%3u)\n",
+					tt_global_entry->addr,
+					tt_global_entry->ttvn,
+					tt_global_entry->orig_node->orig,
+					(uint8_t) atomic_read(
+						&tt_global_entry->orig_node->
+						last_ttvn));
 		}
 		rcu_read_unlock();
 	}
@@ -555,64 +615,80 @@ out:
 	return ret;
 }
-static void _tt_global_del_orig(struct bat_priv *bat_priv,
struct tt_global_entry *tt_global_entry,
char *message)
+static void _tt_global_del(struct bat_priv *bat_priv,
struct tt_global_entry *tt_global_entry,
char *message)
{
- bat_dbg(DBG_ROUTES, bat_priv,
if (!tt_global_entry)
return;
	bat_dbg(DBG_TT, bat_priv,
		"Deleting global tt entry %pM (via %pM): %s\n",
		tt_global_entry->addr, tt_global_entry->orig_node->orig,
		message);

	atomic_dec(&tt_global_entry->orig_node->tt_size);
	hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig,
		    tt_global_entry->addr);
	kfree(tt_global_entry);
}
+void tt_global_del(struct bat_priv *bat_priv,
struct orig_node *orig_node,
unsigned char *addr, char *message)
+{
- struct tt_global_entry *tt_global_entry;
- spin_lock_bh(&bat_priv->tt_ghash_lock);
- tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (tt_global_entry && tt_global_entry->orig_node == orig_node) {
atomic_dec(&orig_node->tt_size);
_tt_global_del(bat_priv, tt_global_entry, message);
- }
- spin_unlock_bh(&bat_priv->tt_ghash_lock);
+}
void tt_global_del_orig(struct bat_priv *bat_priv,
struct orig_node *orig_node, char *message)
struct orig_node *orig_node, char *message)
{ struct tt_global_entry *tt_global_entry;
- int tt_buff_count = 0;
- unsigned char *tt_ptr;
- int i;
- struct hashtable_t *hash = bat_priv->tt_global_hash;
- struct hlist_node *node, *safe;
- struct hlist_head *head;
- if (orig_node->tt_buff_len == 0)
if (!bat_priv->tt_global_hash) return;
spin_lock_bh(&bat_priv->tt_ghash_lock);
for (i = 0; i < hash->size; i++) {
head = &hash->table[i];
- while ((tt_buff_count + 1) * ETH_ALEN <= orig_node->tt_buff_len) {
tt_ptr = orig_node->tt_buff + (tt_buff_count * ETH_ALEN);
tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr);
if ((tt_global_entry) &&
(tt_global_entry->orig_node == orig_node))
_tt_global_del_orig(bat_priv, tt_global_entry,
message);
tt_buff_count++;
hlist_for_each_entry_safe(tt_global_entry, node, safe,
head, hash_entry) {
if (tt_global_entry->orig_node == orig_node)
_tt_global_del(bat_priv, tt_global_entry,
message);
}
}
atomic_set(&orig_node->tt_size, 0);
spin_unlock_bh(&bat_priv->tt_ghash_lock);
- orig_node->tt_buff_len = 0;
- kfree(orig_node->tt_buff);
- orig_node->tt_buff = NULL;
}
-static void tt_global_del(struct hlist_node *node, void *arg)
+static void tt_global_entry_free(struct hlist_node *node, void *arg)
 {
	void *data = container_of(node, struct tt_global_entry, hash_entry);
- kfree(data);
}
-void tt_global_free(struct bat_priv *bat_priv)
+static void tt_global_table_free(struct bat_priv *bat_priv)
 {
	if (!bat_priv->tt_global_hash)
		return;

-	hash_delete(bat_priv->tt_global_hash, tt_global_del, NULL);
+	hash_delete(bat_priv->tt_global_hash, tt_global_entry_free, NULL);
	bat_priv->tt_global_hash = NULL;
}
@@ -636,3 +712,686 @@ out:
 	spin_unlock_bh(&bat_priv->tt_ghash_lock);
 	return orig_node;
 }
+/* Calculates the checksum of the local table of a given orig_node */
+uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node)
+{
- uint16_t total = 0, total_one;
- struct hashtable_t *hash = bat_priv->tt_global_hash;
- struct tt_global_entry *tt_global_entry;
- struct hlist_node *node;
- struct hlist_head *head;
- int i, j;
- for (i = 0; i < hash->size; i++) {
head = &hash->table[i];
rcu_read_lock();
hlist_for_each_entry_rcu(tt_global_entry, node,
head, hash_entry) {
if (compare_eth(tt_global_entry->orig_node,
orig_node)) {
total_one = 0;
for (j = 0; j < ETH_ALEN; j++)
total_one = crc16_byte(total_one,
tt_global_entry->addr[j]);
total ^= total_one;
}
}
rcu_read_unlock();
- }
- return total;
+}
+/* Calculates the checksum of the local table */
+uint16_t tt_local_crc(struct bat_priv *bat_priv)
+{
- uint16_t total = 0, total_one;
- struct hashtable_t *hash = bat_priv->tt_local_hash;
- struct tt_local_entry *tt_local_entry;
- struct hlist_node *node;
- struct hlist_head *head;
- int i, j;
- for (i = 0; i < hash->size; i++) {
head = &hash->table[i];
rcu_read_lock();
hlist_for_each_entry_rcu(tt_local_entry, node,
head, hash_entry) {
total_one = 0;
for (j = 0; j < ETH_ALEN; j++)
total_one = crc16_byte(total_one,
tt_local_entry->addr[j]);
total ^= total_one;
}
rcu_read_unlock();
- }
- return total;
+}
+static void tt_req_list_free(struct bat_priv *bat_priv)
+{
- struct tt_req_node *node, *safe;
- spin_lock_bh(&bat_priv->tt_req_list_lock);
- list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) {
list_del(&node->list);
kfree(node);
- }
- spin_unlock_bh(&bat_priv->tt_req_list_lock);
+}
+void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node,
unsigned char *tt_buff, uint8_t tt_num_changes)
+{
- uint16_t tt_buff_len = tt_len(tt_num_changes);
- /* Replace the old buffer only if I received something in the
* last OGM (the OGM could carry no changes) */
- spin_lock_bh(&orig_node->tt_buff_lock);
- if (tt_buff_len > 0) {
kfree(orig_node->tt_buff);
orig_node->tt_buff_len = 0;
orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC);
if (orig_node->tt_buff) {
memcpy(orig_node->tt_buff, tt_buff, tt_buff_len);
orig_node->tt_buff_len = tt_buff_len;
}
- }
- spin_unlock_bh(&orig_node->tt_buff_lock);
+}
+static void tt_req_purge(struct bat_priv *bat_priv)
+{
- struct tt_req_node *node, *safe;
- spin_lock_bh(&bat_priv->tt_req_list_lock);
- list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) {
if (is_out_of_time(node->issued_at,
TT_REQUEST_TIMEOUT * 1000)) {
list_del(&node->list);
kfree(node);
}
- }
- spin_unlock_bh(&bat_priv->tt_req_list_lock);
+}
+/* returns the pointer to the new tt_req_node struct if no request
- has already been issued for this orig_node, NULL otherwise */
+static struct tt_req_node *new_tt_req_node(struct bat_priv *bat_priv,
struct orig_node *orig_node)
+{
- struct tt_req_node *tt_req_node_tmp, *tt_req_node = NULL;
- spin_lock_bh(&bat_priv->tt_req_list_lock);
- list_for_each_entry(tt_req_node_tmp, &bat_priv->tt_req_list, list) {
if (compare_eth(tt_req_node_tmp, orig_node) &&
!is_out_of_time(tt_req_node_tmp->issued_at,
TT_REQUEST_TIMEOUT * 1000))
goto unlock;
- }
- tt_req_node = kmalloc(sizeof(struct tt_req_node), GFP_ATOMIC);
- if (!tt_req_node)
goto unlock;
- memcpy(tt_req_node->addr, orig_node->orig, ETH_ALEN);
- tt_req_node->issued_at = jiffies;
- list_add(&tt_req_node->list, &bat_priv->tt_req_list);
+unlock:
- spin_unlock_bh(&bat_priv->tt_req_list_lock);
- return tt_req_node;
+}
+int send_tt_request(struct bat_priv *bat_priv, struct orig_node *dst_orig_node,
uint8_t ttvn, uint16_t tt_crc, bool full_table)
+{
- struct sk_buff *skb;
- struct tt_query_packet *tt_request;
- struct neigh_node *neigh_node = NULL;
- struct hard_iface *primary_if;
- struct tt_req_node *tt_req_node;
- int ret = 0;
- primary_if = primary_if_get_selected(bat_priv);
- if (!primary_if)
goto out;
- /* The new tt_req will be issued only if I'm not waiting for a
* reply from the same orig_node yet */
- tt_req_node = new_tt_req_node(bat_priv, dst_orig_node);
- if (!tt_req_node)
goto out;
- skb = dev_alloc_skb(sizeof(struct tt_query_packet) + ETH_HLEN);
- if (!skb)
goto out;
- skb_reserve(skb, ETH_HLEN);
- tt_request = (struct tt_query_packet *)skb_put(skb,
sizeof(struct tt_query_packet));
- tt_request->packet_type = BAT_TT_QUERY;
- tt_request->version = COMPAT_VERSION;
- memcpy(tt_request->src, primary_if->net_dev->dev_addr, ETH_ALEN);
- memcpy(tt_request->dst, dst_orig_node->orig, ETH_ALEN);
- tt_request->ttl = TTL;
- tt_request->ttvn = ttvn;
- tt_request->tt_data = tt_crc;
- tt_request->flags = TT_REQUEST;
- if (full_table)
tt_request->flags |= TT_FULL_TABLE;
- neigh_node = orig_node_get_router(dst_orig_node);
- if (!neigh_node)
goto out;
- bat_dbg(DBG_TT, bat_priv, "Sending TT_REQUEST to %pM via %pM "
"[%c]\n", dst_orig_node->orig, neigh_node->addr,
(full_table ? 'F' : '.'));
- send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr);
- ret = 0;
+out:
- if (neigh_node)
neigh_node_free_ref(neigh_node);
- if (primary_if)
hardif_free_ref(primary_if);
- if (ret) {
kfree_skb(skb);
spin_lock_bh(&bat_priv->tt_req_list_lock);
list_del(&tt_req_node->list);
spin_unlock_bh(&bat_priv->tt_req_list_lock);
kfree(tt_req_node);
- }
- return ret;
+}
+static bool send_other_tt_response(struct bat_priv *bat_priv,
struct tt_query_packet *tt_request)
+{
- struct orig_node *req_dst_orig_node = NULL, *res_dst_orig_node = NULL;
- struct neigh_node *neigh_node = NULL;
- struct hard_iface *primary_if = NULL;
- struct tt_global_entry *tt_global_entry;
- struct hlist_node *node;
- struct hlist_head *head;
- struct hashtable_t *hash;
- uint8_t orig_ttvn, req_ttvn;
- int i, ret = false;
- unsigned char *tt_buff;
- bool full_table;
- uint16_t tt_len, tt_tot, tt_count;
- struct sk_buff *skb = NULL;
- struct tt_query_packet *tt_response;
- bat_dbg(DBG_TT, bat_priv,
"Received TT_REQUEST from %pM for "
"ttvn: %u (%pM) [%c]\n", tt_request->src,
tt_request->ttvn, tt_request->dst,
(tt_request->flags & TT_FULL_TABLE ? 'F' : '.'));
- /* Let's get the orig node of the REAL destination */
- req_dst_orig_node = get_orig_node(bat_priv, tt_request->dst);
- if (!req_dst_orig_node)
goto out;
- res_dst_orig_node = get_orig_node(bat_priv, tt_request->src);
- if (!res_dst_orig_node)
goto out;
- neigh_node = orig_node_get_router(res_dst_orig_node);
- if (!neigh_node)
goto out;
- primary_if = primary_if_get_selected(bat_priv);
- if (!primary_if)
goto out;
- orig_ttvn = (uint8_t)atomic_read(&req_dst_orig_node->last_ttvn);
- req_ttvn = tt_request->ttvn;
- /* I don't have the requested data */
- if (orig_ttvn != req_ttvn ||
tt_request->tt_data != req_dst_orig_node->tt_crc)
goto out;
- /* If it has explicitly been requested the full table */
- if (tt_request->flags & TT_FULL_TABLE ||
!req_dst_orig_node->tt_buff)
full_table = true;
- else
full_table = false;
- /* In this version, fragmentation is not implemented, so
* I'll send only one packet with as many TT entries as possible */
- if (!full_table) {
spin_lock_bh(&req_dst_orig_node->tt_buff_lock);
tt_len = req_dst_orig_node->tt_buff_len;
tt_tot = tt_len / sizeof(struct tt_change);
skb = dev_alloc_skb(sizeof(struct tt_query_packet) +
tt_len + ETH_HLEN);
if (!skb)
goto unlock;
skb_reserve(skb, ETH_HLEN);
tt_response = (struct tt_query_packet *)skb_put(skb,
sizeof(struct tt_query_packet) + tt_len);
tt_response->ttvn = req_ttvn;
tt_buff = skb->data + sizeof(struct tt_query_packet);
/* Copy the last orig_node's OGM buffer */
memcpy(tt_buff, req_dst_orig_node->tt_buff,
req_dst_orig_node->tt_buff_len);
spin_unlock_bh(&req_dst_orig_node->tt_buff_lock);
- } else {
tt_len = (uint16_t)atomic_read(&req_dst_orig_node->tt_size) *
ETH_ALEN;
if (sizeof(struct tt_query_packet) + tt_len >
primary_if->soft_iface->mtu) {
tt_len = primary_if->soft_iface->mtu -
sizeof(struct tt_query_packet);
tt_len -= tt_len % ETH_ALEN;
}
tt_tot = tt_len / ETH_ALEN;
skb = dev_alloc_skb(sizeof(struct tt_query_packet) +
tt_len + ETH_HLEN);
if (!skb)
goto out;
skb_reserve(skb, ETH_HLEN);
tt_response = (struct tt_query_packet *)skb_put(skb,
sizeof(struct tt_query_packet) + tt_len);
tt_response->ttvn = (uint8_t)
atomic_read(&req_dst_orig_node->last_ttvn);
tt_buff = skb->data + sizeof(struct tt_query_packet);
/* Fill the packet with the orig_node's local table */
hash = bat_priv->tt_global_hash;
tt_count = 0;
rcu_read_lock();
for (i = 0; i < hash->size; i++) {
head = &hash->table[i];
hlist_for_each_entry_rcu(tt_global_entry, node,
head, hash_entry) {
if (tt_count == tt_tot)
break;
if (tt_global_entry->orig_node ==
req_dst_orig_node) {
memcpy(tt_buff + tt_count * ETH_ALEN,
tt_global_entry->addr,
ETH_ALEN);
tt_count++;
}
}
}
rcu_read_unlock();
- }
- tt_response->packet_type = BAT_TT_QUERY;
- tt_response->version = COMPAT_VERSION;
- memcpy(tt_response->src, req_dst_orig_node->orig, ETH_ALEN);
- memcpy(tt_response->dst, tt_request->src, ETH_ALEN);
- tt_response->tt_data = htons(tt_tot);
- tt_response->flags = TT_RESPONSE;
- if (full_table)
tt_response->flags |= TT_FULL_TABLE;
- bat_dbg(DBG_TT, bat_priv,
"Sending TT_RESPONSE %pM via %pM for %pM (ttvn: %u)\n",
res_dst_orig_node->orig, neigh_node->addr,
req_dst_orig_node->orig, req_ttvn);
- send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr);
- ret = true;
- goto out;
+unlock:
- spin_unlock_bh(&req_dst_orig_node->tt_buff_lock);
+out:
- if (res_dst_orig_node)
orig_node_free_ref(res_dst_orig_node);
- if (req_dst_orig_node)
orig_node_free_ref(req_dst_orig_node);
- if (neigh_node)
neigh_node_free_ref(neigh_node);
- if (primary_if)
hardif_free_ref(primary_if);
- if (!ret)
kfree(skb);
- return ret;
+}

+static bool send_my_tt_response(struct bat_priv *bat_priv,
struct tt_query_packet *tt_request)
+{
- struct orig_node *orig_node = NULL;
- struct neigh_node *neigh_node = NULL;
- struct tt_local_entry *tt_local_entry;
- struct hard_iface *primary_if = NULL;
- struct hlist_node *node;
- struct hlist_head *head;
- struct hashtable_t *hash;
- uint8_t my_ttvn, req_ttvn;
- int i, ret = false;
- unsigned char *tt_buff;
- bool full_table;
- uint16_t tt_len, tt_tot, tt_count;
- struct sk_buff *skb = NULL;
- struct tt_query_packet *tt_response;
- bat_dbg(DBG_TT, bat_priv,
"Received TT_REQUEST from %pM for "
"ttvn: %u (me) [%c]\n", tt_request->src,
tt_request->ttvn,
(tt_request->flags & TT_FULL_TABLE ? 'F' : '.'));
- my_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn);
- req_ttvn = tt_request->ttvn;
- orig_node = get_orig_node(bat_priv, tt_request->src);
- if (!orig_node)
goto out;
- neigh_node = orig_node_get_router(orig_node);
- if (!neigh_node)
goto out;
- primary_if = primary_if_get_selected(bat_priv);
- if (!primary_if)
goto out;
- /* If the full table has been explicitly requested or the gap
* is too big send the whole local translation table */
- if (tt_request->flags & TT_FULL_TABLE || my_ttvn != req_ttvn ||
!bat_priv->tt_buff)
full_table = true;
- else
full_table = false;
- /* In this version, fragmentation is not implemented, so
* I'll send only one packet with as many TT entries as possible */
- if (!full_table) {
spin_lock_bh(&bat_priv->tt_buff_lock);
tt_len = bat_priv->tt_buff_len;
tt_tot = tt_len / sizeof(struct tt_change);
skb = dev_alloc_skb(sizeof(struct tt_query_packet) +
tt_len + ETH_HLEN);
if (!skb)
goto unlock;
skb_reserve(skb, ETH_HLEN);
tt_response = (struct tt_query_packet *)skb_put(skb,
sizeof(struct tt_query_packet) + tt_len);
tt_response->ttvn = req_ttvn;
tt_buff = skb->data + sizeof(struct tt_query_packet);
memcpy(tt_buff, bat_priv->tt_buff,
bat_priv->tt_buff_len);
spin_unlock_bh(&bat_priv->tt_buff_lock);
- } else {
tt_len = (uint16_t)atomic_read(&bat_priv->num_local_tt) *
ETH_ALEN;
if (sizeof(struct tt_query_packet) + tt_len >
bat_priv->primary_if->soft_iface->mtu) {
tt_len = bat_priv->primary_if->soft_iface->mtu -
sizeof(struct tt_query_packet);
tt_len -= tt_len % ETH_ALEN;
}
tt_tot = tt_len / ETH_ALEN;
skb = dev_alloc_skb(sizeof(struct tt_query_packet) +
tt_len + ETH_HLEN);
if (!skb)
goto out;
skb_reserve(skb, ETH_HLEN);
tt_response = (struct tt_query_packet *)skb_put(skb,
sizeof(struct tt_query_packet) + tt_len);
tt_buff = skb->data + sizeof(struct tt_query_packet);
/* Fill the packet with the local table */
tt_response->ttvn =
(uint8_t)atomic_read(&bat_priv->ttvn);
hash = bat_priv->tt_local_hash;
tt_count = 0;
rcu_read_lock();
for (i = 0; i < hash->size; i++) {
head = &hash->table[i];
hlist_for_each_entry_rcu(tt_local_entry, node,
head, hash_entry) {
if (tt_count == tt_tot)
break;
memcpy(tt_buff + tt_count * ETH_ALEN,
tt_local_entry->addr,
ETH_ALEN);
tt_count++;
}
}
rcu_read_unlock();
- }
- tt_response->packet_type = BAT_TT_QUERY;
- tt_response->version = COMPAT_VERSION;
- memcpy(tt_response->src, primary_if->net_dev->dev_addr, ETH_ALEN);
- memcpy(tt_response->dst, tt_request->src, ETH_ALEN);
- tt_response->tt_data = htons(tt_tot);
- tt_response->flags = TT_RESPONSE;
- if (full_table)
tt_response->flags |= TT_FULL_TABLE;
- bat_dbg(DBG_TT, bat_priv,
"Sending TT_RESPONSE to %pM via %pM [%c]\n",
orig_node->orig, neigh_node->addr,
(tt_response->flags & TT_FULL_TABLE ? 'F' : '.'));
- send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr);
- ret = true;
- goto out;
+unlock:
- spin_unlock_bh(&bat_priv->tt_buff_lock);
+out:
- if (orig_node)
orig_node_free_ref(orig_node);
- if (neigh_node)
neigh_node_free_ref(neigh_node);
- if (primary_if)
hardif_free_ref(primary_if);
- if (!ret)
kfree(skb);
- /* This packet was for me, so it doesn't need to be re-routed */
- return true;
+}
+bool send_tt_response(struct bat_priv *bat_priv,
struct tt_query_packet *tt_request)
+{
- if (is_my_mac(tt_request->dst))
return send_my_tt_response(bat_priv, tt_request);
- else
return send_other_tt_response(bat_priv, tt_request);
+}
+/* Substitute the TT response source's table with the new one carried by the
- packet */
+static void _tt_fill_gtable(struct bat_priv *bat_priv,
struct orig_node *orig_node, unsigned char *tt_buff,
uint16_t table_size, uint8_t ttvn)
+{
- int count;
- unsigned char *tt_ptr;
- for (count = 0; count < table_size; count++) {
tt_ptr = tt_buff + (count * ETH_ALEN);
/* If we fail to allocate a new entry we return immediately */
if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn))
return;
- }
- atomic_set(&orig_node->last_ttvn, ttvn);
+}
+static void tt_fill_gtable(struct bat_priv *bat_priv,
struct tt_query_packet *tt_response)
+{
- struct orig_node *orig_node = NULL;
- orig_node = orig_hash_find(bat_priv, tt_response->src);
- if (!orig_node)
goto out;
- /* Purge the old table first.. */
- tt_global_del_orig(bat_priv, orig_node, "Received full table");
- _tt_fill_gtable(bat_priv, orig_node,
((unsigned char *)tt_response) +
sizeof(struct tt_query_packet),
tt_response->tt_data,
tt_response->ttvn);
- spin_lock_bh(&orig_node->tt_buff_lock);
- kfree(orig_node->tt_buff);
- orig_node->tt_buff_len = 0;
- orig_node->tt_buff = NULL;
- spin_unlock_bh(&orig_node->tt_buff_lock);
+out:
- if (orig_node)
orig_node_free_ref(orig_node);
+}
+void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node,
uint16_t tt_num_changes, uint8_t ttvn,
struct tt_change *tt_change)
+{
- int i;
- for (i = 0; i < tt_num_changes; i++) {
if ((tt_change + i)->flags & TT_CHANGE_DEL)
tt_global_del(bat_priv, orig_node,
(tt_change + i)->addr,
"tt removed by changes");
else
if (!tt_global_add(bat_priv, orig_node,
(tt_change + i)->addr, ttvn))
return;
- }
- tt_save_orig_buffer(bat_priv, orig_node, (unsigned char *)tt_change,
tt_num_changes);
- atomic_set(&orig_node->last_ttvn, ttvn);
+}
+bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr)
+{
- struct tt_local_entry *tt_local_entry;
- spin_lock_bh(&bat_priv->tt_lhash_lock);
- tt_local_entry = tt_local_hash_find(bat_priv, addr);
- spin_unlock_bh(&bat_priv->tt_lhash_lock);
- if (tt_local_entry)
return true;
- return false;
+}
+void handle_tt_response(struct bat_priv *bat_priv,
struct tt_query_packet *tt_response)
+{
- struct tt_req_node *node, *safe;
- struct orig_node *orig_node = NULL;
- bat_dbg(DBG_TT, bat_priv, "Received TT_RESPONSE from %pM for "
"ttvn %d t_size: %d [%c]\n",
tt_response->src, tt_response->ttvn,
tt_response->tt_data,
(tt_response->flags & TT_FULL_TABLE ? 'F' : '.'));
- orig_node = orig_hash_find(bat_priv, tt_response->src);
- if (!orig_node)
goto out;
- if (tt_response->flags & TT_FULL_TABLE)
tt_fill_gtable(bat_priv, tt_response);
- else
tt_update_changes(bat_priv, orig_node, tt_response->tt_data,
tt_response->ttvn,
(struct tt_change *)(tt_response + 1));
- /* Delete the tt_req_node from pending tt_requests list */
- spin_lock_bh(&bat_priv->tt_req_list_lock);
- list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) {
if (!compare_eth(node->addr, tt_response->src))
continue;
list_del(&node->list);
kfree(node);
- }
- spin_unlock_bh(&bat_priv->tt_req_list_lock);
- /* Recalculate the CRC for this orig_node and store it */
- spin_lock_bh(&bat_priv->tt_ghash_lock);
- orig_node->tt_crc = tt_global_crc(bat_priv, orig_node);
- spin_unlock_bh(&bat_priv->tt_ghash_lock);
+out:
- if (orig_node)
orig_node_free_ref(orig_node);
+}
+int tt_init(struct bat_priv *bat_priv)
+{
- if (!tt_local_init(bat_priv))
return 0;
- if (!tt_global_init(bat_priv))
return 0;
- tt_start_timer(bat_priv);
- return 1;
+}
+void tt_free(struct bat_priv *bat_priv)
+{
- cancel_delayed_work_sync(&bat_priv->tt_work);
- tt_local_table_free(bat_priv);
- tt_global_table_free(bat_priv);
- tt_req_list_free(bat_priv);
- tt_changes_list_free(bat_priv);
- kfree(bat_priv->tt_buff);
+}
+static void tt_purge(struct work_struct *work)
+{
- struct delayed_work *delayed_work =
container_of(work, struct delayed_work, work);
- struct bat_priv *bat_priv =
container_of(delayed_work, struct bat_priv, tt_work);
- tt_local_purge(bat_priv);
- tt_req_purge(bat_priv);
- tt_start_timer(bat_priv);
+}

diff --git a/translation-table.h b/translation-table.h
index 46152c3..f203b49 100644
--- a/translation-table.h
+++ b/translation-table.h
@@ -22,22 +22,44 @@
 #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
 #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
-int tt_local_init(struct bat_priv *bat_priv); +int tt_len(int changes_num); +void tt_changes_primary_if(struct bat_priv *bat_priv, uint8_t *old_addr,
uint8_t *new_addr);
+int tt_changes_fill_buffer(struct bat_priv *bat_priv,
unsigned char *buff, int buff_len);
+int tt_init(struct bat_priv *bat_priv); void tt_local_add(struct net_device *soft_iface, uint8_t *addr); void tt_local_remove(struct bat_priv *bat_priv,
uint8_t *addr, char *message);
-int tt_local_fill_buffer(struct bat_priv *bat_priv,
unsigned char *buff, int buff_len);
uint8_t *addr, char *message);
int tt_local_seq_print_text(struct seq_file *seq, void *offset); -void tt_local_free(struct bat_priv *bat_priv); -int tt_global_init(struct bat_priv *bat_priv); void tt_global_add_orig(struct bat_priv *bat_priv,
struct orig_node *orig_node,
unsigned char *tt_buff, int tt_buff_len);
struct orig_node *orig_node,
unsigned char *tt_buff, int tt_buff_len);
+int tt_global_add(struct bat_priv *bat_priv,
struct orig_node *orig_node, unsigned char *addr,
uint8_t ttvn);
int tt_global_seq_print_text(struct seq_file *seq, void *offset); void tt_global_del_orig(struct bat_priv *bat_priv,
struct orig_node *orig_node, char *message);
-void tt_global_free(struct bat_priv *bat_priv);
struct orig_node *orig_node, char *message);
+void tt_global_del(struct bat_priv *bat_priv,
struct orig_node *orig_node, unsigned char *addr,
char *message);
struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr); +void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node,
unsigned char *tt_buff, uint8_t tt_num_changes);
+uint16_t tt_local_crc(struct bat_priv *bat_priv); +uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node); +void tt_free(struct bat_priv *bat_priv); +int send_tt_request(struct bat_priv *bat_priv,
struct orig_node *dst_orig_node, uint8_t hvn,
uint16_t tt_crc, bool full_table);
+bool send_tt_response(struct bat_priv *bat_priv,
struct tt_query_packet *tt_request);
+void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node,
uint16_t tt_num_changes, uint8_t ttvn,
struct tt_change *tt_change);
+bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr); +void handle_tt_response(struct bat_priv *bat_priv,
struct tt_query_packet *tt_response);
 #endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */

diff --git a/types.h b/types.h
index fab70e8..0848fcc 100644
--- a/types.h
+++ b/types.h
@@ -75,8 +75,12 @@ struct orig_node {
 	unsigned long batman_seqno_reset;
 	uint8_t gw_flags;
 	uint8_t flags;
- atomic_t last_ttvn; /* last seen translation table version number */
- uint16_t tt_crc; unsigned char *tt_buff; int16_t tt_buff_len;
- spinlock_t tt_buff_lock; /* protects tt_buff */
- atomic_t tt_size; uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS];
@@ -94,10 +98,16 @@ struct orig_node { spinlock_t ogm_cnt_lock; /* bcast_seqno_lock protects bcast_bits, last_bcast_seqno */ spinlock_t bcast_seqno_lock;
- spinlock_t tt_list_lock; /* protects tt_list */ atomic_t bond_candidates; struct list_head bond_list;
};
+struct tt_change {
- uint8_t flags;
- uint8_t addr[ETH_ALEN];
+};
struct gw_node { struct hlist_node list; struct orig_node *orig_node; @@ -145,6 +155,9 @@ struct bat_priv { atomic_t bcast_seqno; atomic_t bcast_queue_left; atomic_t batman_queue_left;
- atomic_t ttvn; /* translation table version number */
- atomic_t tt_ogm_append_cnt;
- atomic_t tt_local_changes; /* changes registered in a OGM interval */ char num_ifaces; struct debug_log *debug_log; struct kobject *mesh_obj;
@@ -153,22 +166,30 @@ struct bat_priv { struct hlist_head forw_bcast_list; struct hlist_head gw_list; struct hlist_head softif_neigh_vids;
- struct list_head tt_changes_list; /* tracks changes in a OGM int */ struct list_head vis_send_list; struct hashtable_t *orig_hash; struct hashtable_t *tt_local_hash; struct hashtable_t *tt_global_hash;
- struct list_head tt_req_list; /* list of pending tt_requests */ struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */
- spinlock_t tt_changes_list_lock; /* protects tt_changes */ spinlock_t tt_lhash_lock; /* protects tt_local_hash */ spinlock_t tt_ghash_lock; /* protects tt_global_hash */
- spinlock_t tt_req_list_lock; /* protects tt_req_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ spinlock_t softif_neigh_lock; /* protects soft-interface neigh list */ spinlock_t softif_neigh_vid_lock; /* protects soft-interface vid list */
- int16_t num_local_tt;
- atomic_t tt_local_changed;
- atomic_t num_local_tt;
- /* Checksum of the local table, recomputed before sending a new OGM */
- atomic_t tt_crc;
- unsigned char *tt_buff;
- int16_t tt_buff_len;
- spinlock_t tt_buff_lock; /* protects tt_buff */ struct delayed_work tt_work; struct delayed_work orig_work; struct delayed_work vis_work;
@@ -202,9 +223,22 @@ struct tt_local_entry { struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node;
- uint8_t ttvn;
- /* entry in the global table */ struct hlist_node hash_entry;
};
+struct tt_change_node {
- struct list_head list;
- struct tt_change change;
+};
+struct tt_req_node {
- uint8_t addr[ETH_ALEN];
- unsigned long issued_at;
- struct list_head list;
+};
/**
- forw_packet - structure for forw_list maintaining packets to be
send/forwarded
diff --git a/unicast.c b/unicast.c
index bab6076..d6cb0f3 100644
--- a/unicast.c
+++ b/unicast.c
@@ -326,6 +326,9 @@ find_router:
 	unicast_packet->ttl = TTL;
 	/* copy the destination for faster routing */
 	memcpy(unicast_packet->dest, orig_node->orig, ETH_ALEN);
/* set the destination tt version number */
unicast_packet->ttvn =
(uint8_t)atomic_read(&orig_node->last_ttvn);
if (atomic_read(&bat_priv->fragmentation) && data_len + sizeof(struct unicast_packet) >
-- 1.7.3.4
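[Editor's note] The checksum scheme used by tt_local_crc()/tt_global_crc() in the patch above boils down to XOR-ing the CRC16 of every announced client address. Below is a minimal user-space sketch of that idea; the bitwise crc16_byte() is only an assumed stand-in for the kernel's crc16 module (reflected polynomial 0xA001), and the client addresses are made up for illustration.

#include <stdint.h>
#include <stdio.h>

#define ETH_ALEN 6

/* assumed stand-in for the kernel's crc16_byte() (reflected polynomial
 * 0xA001); bitwise instead of table driven */
static uint16_t crc16_byte(uint16_t crc, uint8_t data)
{
	int i;

	crc ^= data;
	for (i = 0; i < 8; i++)
		crc = (crc >> 1) ^ ((crc & 1) ? 0xA001 : 0);
	return crc;
}

int main(void)
{
	/* made-up local table: two announced client MAC addresses */
	uint8_t clients[2][ETH_ALEN] = {
		{ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 },
		{ 0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb },
	};
	uint16_t total = 0, total_one;
	int i, j;

	/* same scheme as tt_local_crc(): CRC16 per client, XORed together,
	 * so the result does not depend on the hash iteration order */
	for (i = 0; i < 2; i++) {
		total_one = 0;
		for (j = 0; j < ETH_ALEN; j++)
			total_one = crc16_byte(total_one, clients[i][j]);
		total ^= total_one;
	}

	printf("table crc: 0x%04x\n", total);
	return 0;
}

Because the per-entry CRCs are XORed, sender and receiver obtain the same checksum regardless of the order in which they walk their hash tables, which is what makes the value usable as a cheap consistency check between a node's announced clients and the tt_crc carried in its OGMs.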
Nack
There are two many small things that need to be changed. I didn't to a complete review, but at least the primary_if dereference is unacceptable.
From IRC:
<ecsv_> could you do me the favour and go through my cleanup patches and also fix your code... it is too much to send every problem to the list <ecsv_> and the kfree_rcu patches will be merged soon in linux mainline... so please prepare your patches for that <ecsv_> and why is there still an align in batman_packet - this doesn't make any sense <ecsv_> and we already have functionality to do the before-after check of the ttvn <ecsv_> and you try to dereference primary_if (without using primary_if_get_selected) directly... this is not allowed
Kind regards, Sven
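[Editor's note] The objection about the primary_if dereference refers to reading bat_priv->primary_if directly instead of going through the refcounting helper. A rough sketch of the expected pattern, using the primary_if_get_selected()/hardif_free_ref() helpers that already appear elsewhere in this patchset (the surrounding function is purely illustrative):

/* illustrative only: take a counted reference on the primary interface
 * instead of dereferencing bat_priv->primary_if directly */
static int example_use_primary_if(struct bat_priv *bat_priv)
{
	struct hard_iface *primary_if;
	int ret = -1;

	primary_if = primary_if_get_selected(bat_priv);
	if (!primary_if)
		goto out;

	/* ... use primary_if->soft_iface->mtu, primary_if->net_dev, ... */

	ret = 0;
out:
	if (primary_if)
		hardif_free_ref(primary_if);
	return ret;
}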
Sven Eckelmann wrote:
Nack
There are two many small things that need to be changed. I didn't to a complete review, but at least the primary_if dereference is unacceptable.
I'm also too broken right now... "too many small problems", "I haven't done a complete review"
thanks, Sven
Exploiting the new announcement implementation, it has been possible to improve the roaming mechanism and reduce the number of packet drops.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Roaming-improvements
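[Editor's note] A condensed sketch of the bookkeeping this patch adds, with made-up minimal types (the real fields live in struct bat_priv and struct orig_node): when a client shows up at a new node, that node informs the client's previous node with a ROAMING_ADV; the previous node then treats its own translation state as possibly stale and keeps rechecking unicast packets addressed to it until the updated table has been announced with a new TTVN.

#include <stdbool.h>
#include <stdint.h>

/* minimal made-up state, standing in for the tt_poss_change / ttvn
 * fields added by this patch */
struct node_state {
	uint8_t ttvn;         /* local translation table version */
	bool tt_poss_change;  /* a client of mine may have roamed away */
};

/* previous node: a ROAMING_ADV for one of my former clients came in */
static void on_roam_adv_received(struct node_state *me)
{
	/* until my tables are updated and announced, packets addressed
	 * to me may actually belong to the node the client roamed to */
	me->tt_poss_change = true;
}

/* previous node: the next own OGM goes out with the TT changes applied */
static void on_own_ogm_scheduled(struct node_state *me)
{
	me->ttvn++;
	me->tt_poss_change = false;
}

On the forwarding path this pairs with the extra check in recv_unicast_packet(): a stale TTVN or a set tt_poss_change flag triggers a fresh transtable_search() so the packet can be re-addressed to the client's new location.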
Signed-off-by: Antonio Quartulli ordex@autistici.org --- Corrected orig_node_get_router() invocations
hard-interface.c | 4 + main.c | 2 + main.h | 12 +++- originator.c | 1 + packet.h | 10 +++ routing.c | 67 +++++++++++++++- routing.h | 1 + send.c | 1 + soft-interface.c | 3 +- translation-table.c | 211 ++++++++++++++++++++++++++++++++++++++++++++++----- translation-table.h | 9 ++- types.h | 26 ++++++- 12 files changed, 314 insertions(+), 33 deletions(-)
diff --git a/hard-interface.c b/hard-interface.c index 69ef99a..815caf7 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -657,6 +657,10 @@ static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, case BAT_TT_QUERY: ret = recv_tt_query(skb, hard_iface); break; + /* Roaming advertisement */ + case BAT_ROAM_ADV: + ret = recv_roam_adv(skb, hard_iface); + break; default: ret = NET_RX_DROP; } diff --git a/main.c b/main.c index edb3e07..6e96fd6 100644 --- a/main.c +++ b/main.c @@ -88,6 +88,7 @@ int mesh_init(struct net_device *soft_iface) spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->tt_changes_list_lock); spin_lock_init(&bat_priv->tt_req_list_lock); + spin_lock_init(&bat_priv->tt_roam_list_lock); spin_lock_init(&bat_priv->tt_buff_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); @@ -101,6 +102,7 @@ int mesh_init(struct net_device *soft_iface) INIT_HLIST_HEAD(&bat_priv->softif_neigh_vids); INIT_LIST_HEAD(&bat_priv->tt_changes_list); INIT_LIST_HEAD(&bat_priv->tt_req_list); + INIT_LIST_HEAD(&bat_priv->tt_roam_list);
if (originator_init(bat_priv) < 1) goto err; diff --git a/main.h b/main.h index 883e467..215d85d 100644 --- a/main.h +++ b/main.h @@ -56,8 +56,16 @@ #define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */
/* Transtable change flags */ -#define TT_CHANGE_ADD 0x00 -#define TT_CHANGE_DEL 0x01 +#define TT_CHANGE_ADD 0x00 +#define TT_CHANGE_DEL 0x01 +#define TT_CHANGE_ROAM 0x02 + +/* Transtable global entry flags */ +#define TT_GLOBAL_ROAM 0x01 + +#define ROAMING_MAX_TIME 20 /* Time in which a client can roam at most + * ROAMING_MAX_COUNT times */ +#define ROAMING_MAX_COUNT 5
#define NUM_WORDS (TQ_LOCAL_WINDOW_SIZE / WORD_BIT_SIZE)
diff --git a/originator.c b/originator.c index d4e26fd..bece4da 100644 --- a/originator.c +++ b/originator.c @@ -219,6 +219,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, uint8_t *addr) /* extra reference for return */ atomic_set(&orig_node->refcount, 2);
+ orig_node->tt_poss_change = false; orig_node->bat_priv = bat_priv; memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; diff --git a/packet.h b/packet.h index 14f501e..3a4ecbf 100644 --- a/packet.h +++ b/packet.h @@ -31,6 +31,7 @@ #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 #define BAT_TT_QUERY 0x07 +#define BAT_ROAM_ADV 0x08
/* this file is included by batctl which needs these defines */ #define COMPAT_VERSION 14 @@ -163,4 +164,13 @@ struct tt_query_packet { */ } __packed;
+struct roam_adv_packet { + uint8_t packet_type; + uint8_t version; + uint8_t dst[6]; + uint8_t ttl; + uint8_t src[6]; + uint8_t client[6]; +} __packed; + #endif /* _NET_BATMAN_ADV_PACKET_H_ */ diff --git a/routing.c b/routing.c index 80218fc..9038687 100644 --- a/routing.c +++ b/routing.c @@ -92,6 +92,9 @@ static void update_transtable(struct bat_priv *bat_priv, spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); spin_unlock_bh(&bat_priv->tt_ghash_lock); + /* Roaming phase is over: tables are in sync again. I can + * unset the flag */ + orig_node->tt_poss_change = false; } else { /* if we missed more than one change or our tables are not * in sync anymore -> request fresh tt data */ @@ -1254,6 +1257,56 @@ out: return ret; }
+int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if) +{ + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct roam_adv_packet *roam_adv_packet; + struct orig_node *orig_node; + struct ethhdr *ethhdr; + int ret = NET_RX_DROP; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, sizeof(struct roam_adv_packet)))) + goto out; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with unicast indication but broadcast recipient */ + if (is_broadcast_ether_addr(ethhdr->h_dest)) + goto out; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + goto out; + + roam_adv_packet = (struct roam_adv_packet *)skb->data; + + if (!is_my_mac(roam_adv_packet->dst)) + return route_unicast_packet(skb, recv_if); + + orig_node = orig_hash_find(bat_priv, roam_adv_packet->src); + if (!orig_node) + goto out; + + bat_dbg(DBG_TT, bat_priv, "Received ROAMING_ADV from %pM " + "(client %pM)\n", roam_adv_packet->src, + roam_adv_packet->client); + + tt_global_add(bat_priv, orig_node, roam_adv_packet->client, + atomic_read(&orig_node->last_ttvn) + 1, true); + + /* Roaming phase starts: I have new information but the ttvn has not + * been incremented yet. This flag will make me check all the incoming + * packets for the correct destination. */ + bat_priv->tt_poss_change = true; + + orig_node_free_ref(orig_node); + ret = NET_RX_SUCCESS; +out: + kfree(skb); + return ret; +} + /* find a suitable router for this originator, and use * bonding if possible. increases the found neighbors * refcount.*/ @@ -1449,35 +1502,41 @@ int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) struct ethhdr *ethhdr; uint8_t curr_ttvn; int16_t diff; + bool tt_poss_change;
if (check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
+ /* I could need to modify it */ + if (skb_cow(skb, sizeof(struct unicast_packet)) < 0) + return NET_RX_DROP; + unicast_packet = (struct unicast_packet *)skb->data;
- if (is_my_mac(unicast_packet->dest)) + if (is_my_mac(unicast_packet->dest)) { + tt_poss_change = bat_priv->tt_poss_change; curr_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); - else { + } else { orig_node = orig_hash_find(bat_priv, unicast_packet->dest);
if (!orig_node) return NET_RX_DROP;
curr_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + tt_poss_change = orig_node->tt_poss_change; orig_node_free_ref(orig_node); }
diff = unicast_packet->ttvn - curr_ttvn; /* Check whether I have to reroute the packet */ if (unicast_packet->packet_type == BAT_UNICAST && - (diff < 0 && diff > -0xff/2)) { + ((diff < 0 && diff > -0xff/2) || tt_poss_change)) { /* Linearize the skb before accessing it */ if (skb_linearize(skb) < 0) return NET_RX_DROP;
ethhdr = (struct ethhdr *)(skb->data + sizeof(struct unicast_packet)); - orig_node = transtable_search(bat_priv, ethhdr->h_dest);
if (!orig_node) { diff --git a/routing.h b/routing.h index 6f6a5f8..e2943e0 100644 --- a/routing.h +++ b/routing.h @@ -37,6 +37,7 @@ int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bat_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); +int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv, struct orig_node *orig_node, struct hard_iface *recv_if); diff --git a/send.c b/send.c index aa0ad64..3f45f39 100644 --- a/send.c +++ b/send.c @@ -303,6 +303,7 @@ void schedule_own_packet(struct hard_iface *hard_iface) prepare_packet_buffer(bat_priv, hard_iface); /* Increment the TTVN only once per OGM interval */ atomic_inc(&bat_priv->ttvn); + bat_priv->tt_poss_change = false; }
/* if the changes have been sent enough times */ diff --git a/soft-interface.c b/soft-interface.c index 5c34bcc..613b833 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -542,7 +542,7 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) /* only modify transtable if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { tt_local_remove(bat_priv, dev->dev_addr, - "mac address changed"); + "mac address changed", false); tt_local_add(dev, addr->sa_data); }
@@ -845,6 +845,7 @@ struct net_device *softif_create(char *name)
bat_priv->tt_buff = NULL; bat_priv->tt_buff_len = 0; + bat_priv->tt_poss_change = false;
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index bf3d3aa..c77aa1e 100644 --- a/translation-table.c +++ b/translation-table.c @@ -123,7 +123,8 @@ static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) return time_after(jiffies, deadline); }
-static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr) +static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr, + uint8_t roaming) { struct tt_change_node *tt_change_node;
@@ -134,6 +135,9 @@ static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr) return;
tt_change_node->change.flags = op; + if (roaming) + tt_change_node->change.flags |= TT_GLOBAL_ROAM; + memcpy(tt_change_node->change.addr, addr, ETH_ALEN);
spin_lock_bh(&bat_priv->tt_changes_list_lock); @@ -168,6 +172,8 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) struct bat_priv *bat_priv = netdev_priv(soft_iface); struct tt_local_entry *tt_local_entry; struct tt_global_entry *tt_global_entry; + uint8_t roam_addr[ETH_ALEN]; + struct orig_node *roam_orig_node;
spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); @@ -181,7 +187,7 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr) if (!tt_local_entry) goto unlock;
- tt_local_event(bat_priv, TT_CHANGE_ADD, addr); + tt_local_event(bat_priv, TT_CHANGE_ADD, addr, false);
bat_dbg(DBG_TT, bat_priv, "Creating new local tt entry: %pM (ttvn: %d)\n", addr, @@ -206,11 +212,20 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (tt_global_entry) + /* Check whether it is a roaming! */ + if (tt_global_entry) { + memcpy(roam_addr, tt_global_entry->addr, ETH_ALEN); + roam_orig_node = tt_global_entry->orig_node; + /* This node is probably going to update its tt table */ + tt_global_entry->orig_node->tt_poss_change = true; _tt_global_del(bat_priv, tt_global_entry, "local tt received"); + spin_unlock_bh(&bat_priv->tt_ghash_lock); + send_roam_adv(bat_priv, tt_global_entry->addr, + tt_global_entry->orig_node); + } else + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- spin_unlock_bh(&bat_priv->tt_ghash_lock); return; unlock: spin_unlock_bh(&bat_priv->tt_lhash_lock); @@ -364,7 +379,8 @@ static void tt_local_del(struct bat_priv *bat_priv, tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); }
-void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) +void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, + char *message, bool roaming) { struct tt_local_entry *tt_local_entry;
@@ -372,7 +388,8 @@ void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message) tt_local_entry = tt_local_hash_find(bat_priv, addr);
if (tt_local_entry) { - tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr); + tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, + roaming); tt_local_del(bat_priv, tt_local_entry, message); } spin_unlock_bh(&bat_priv->tt_lhash_lock); @@ -401,7 +418,7 @@ static void tt_local_purge(struct bat_priv *bat_priv) continue;
tt_local_event(bat_priv, TT_CHANGE_DEL, - tt_local_entry->addr); + tt_local_entry->addr, false); tt_local_del(bat_priv, tt_local_entry, "address timed out"); } @@ -474,7 +491,7 @@ static void tt_changes_list_free(struct bat_priv *bat_priv) /* caller must hold orig_node recount */ int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *tt_addr, uint8_t ttvn) + unsigned char *tt_addr, uint8_t ttvn, bool roaming) { struct tt_global_entry *tt_global_entry; struct tt_local_entry *tt_local_entry; @@ -494,6 +511,7 @@ int tt_global_add(struct bat_priv *bat_priv, atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; tt_global_entry->ttvn = ttvn; + tt_global_entry->flags = 0x00; atomic_inc(&orig_node->tt_size); hash_add(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry, @@ -505,6 +523,7 @@ int tt_global_add(struct bat_priv *bat_priv, atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; tt_global_entry->ttvn = ttvn; + tt_global_entry->flags = 0x00; orig_node_free_ref(orig_node_tmp); atomic_inc(&orig_node->tt_size); } @@ -521,8 +540,9 @@ int tt_global_add(struct bat_priv *bat_priv, tt_local_entry = tt_local_hash_find(bat_priv, tt_addr);
if (tt_local_entry) - tt_local_del(bat_priv, tt_local_entry, - "global tt received"); + tt_local_remove(bat_priv, tt_global_entry->addr, + "global tt received", roaming); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return 1; unlock: @@ -635,7 +655,7 @@ static void _tt_global_del(struct bat_priv *bat_priv,
void tt_global_del(struct bat_priv *bat_priv, struct orig_node *orig_node, - unsigned char *addr, char *message) + unsigned char *addr, char *message, bool roaming) { struct tt_global_entry *tt_global_entry;
@@ -643,9 +663,14 @@ void tt_global_del(struct bat_priv *bat_priv, tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (tt_global_entry && tt_global_entry->orig_node == orig_node) { + if (roaming) { + tt_global_entry->flags |= TT_GLOBAL_ROAM; + goto out; + } atomic_dec(&orig_node->tt_size); _tt_global_del(bat_priv, tt_global_entry, message); } +out: spin_unlock_bh(&bat_priv->tt_ghash_lock); }
@@ -731,6 +756,12 @@ uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node) head, hash_entry) { if (compare_eth(tt_global_entry->orig_node, orig_node)) { + /* Roaming clients are in the global table for + * consistency only. They don't have to be + * taken into account while computing the + * global crc */ + if (tt_global_entry->flags & TT_GLOBAL_ROAM) + continue; total_one = 0; for (j = 0; j < ETH_ALEN; j++) total_one = crc16_byte(total_one, @@ -1246,7 +1277,7 @@ static void _tt_fill_gtable(struct bat_priv *bat_priv, tt_ptr = tt_buff + (count * ETH_ALEN);
/* If we fail to allocate a new entry we return immediatly */ - if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn)) + if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn, false)) return; } atomic_set(&orig_node->last_ttvn, ttvn); @@ -1291,10 +1322,17 @@ void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, if ((tt_change + i)->flags & TT_CHANGE_DEL) tt_global_del(bat_priv, orig_node, (tt_change + i)->addr, - "tt removed by changes"); + "tt removed by changes", + (tt_change + i)->flags & TT_CHANGE_ROAM); else if (!tt_global_add(bat_priv, orig_node, - (tt_change + i)->addr, ttvn)) + (tt_change + i)->addr, ttvn, false)) + /* In case of problem while storing a + * global_entry, we stop the updating + * procedure without committing the + * ttvn change. This will avoid to send + * corrupted data on tt_request + */ return; }
@@ -1353,6 +1391,9 @@ void handle_tt_response(struct bat_priv *bat_priv, spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); spin_unlock_bh(&bat_priv->tt_ghash_lock); + /* Roaming phase is over: tables are in sync again. I can + * unset the flag */ + orig_node->tt_poss_change = false; out: if (orig_node) orig_node_free_ref(orig_node); @@ -1371,16 +1412,130 @@ int tt_init(struct bat_priv *bat_priv) return 1; }
-void tt_free(struct bat_priv *bat_priv) +static void tt_roam_list_free(struct bat_priv *bat_priv) { - cancel_delayed_work_sync(&bat_priv->tt_work); + struct tt_roam_node *node, *safe;
- tt_local_table_free(bat_priv); - tt_global_table_free(bat_priv); - tt_req_list_free(bat_priv); - tt_changes_list_free(bat_priv); + spin_lock_bh(&bat_priv->tt_roam_list_lock);
- kfree(bat_priv->tt_buff); + list_for_each_entry_safe(node, safe, &bat_priv->tt_roam_list, list) { + list_del(&node->list); + kfree(node); + } + + spin_unlock_bh(&bat_priv->tt_roam_list_lock); +} + +static void tt_roam_purge(struct bat_priv *bat_priv) +{ + struct tt_roam_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_roam_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_roam_list, list) { + if (!is_out_of_time(node->first_time, + ROAMING_MAX_TIME * 1000)) + continue; + + list_del(&node->list); + kfree(node); + } + spin_unlock_bh(&bat_priv->tt_roam_list_lock); +} + +/* This function checks whether the client already reached the + * maximum number of possible roaming phases. In this case the ROAMING_ADV + * will not be sent. + * + * returns true if the ROAMING_ADV can be sent, false otherwise */ +static bool tt_check_roam_count(struct bat_priv *bat_priv, + uint8_t *client) +{ + struct tt_roam_node *tt_roam_node; + bool ret = false; + + spin_lock_bh(&bat_priv->tt_roam_list_lock); + /* The new tt_req will be issued only if I'm not waiting for a + * reply from the same orig_node yet */ + list_for_each_entry(tt_roam_node, &bat_priv->tt_roam_list, list) { + if (!compare_eth(tt_roam_node->addr, client)) + continue; + + if (is_out_of_time(tt_roam_node->first_time, + ROAMING_MAX_TIME * 1000)) + continue; + + if (!atomic_dec_not_zero(&tt_roam_node->counter)) + /* Sorry, you roamed too many times! */ + goto unlock; + ret = true; + break; + } + + if (!ret) { + tt_roam_node = kmalloc(sizeof(struct tt_roam_node), GFP_ATOMIC); + if (!tt_roam_node) + goto unlock; + + tt_roam_node->first_time = jiffies; + atomic_set(&tt_roam_node->counter, ROAMING_MAX_COUNT - 1); + memcpy(tt_roam_node->addr, client, ETH_ALEN); + + list_add(&tt_roam_node->list, &bat_priv->tt_roam_list); + ret = true; + } + +unlock: + spin_unlock_bh(&bat_priv->tt_roam_list_lock); + return ret; +} + +void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, + struct orig_node *orig_node) +{ + struct neigh_node *neigh_node = NULL; + struct sk_buff *skb = NULL; + struct roam_adv_packet *roam_adv_packet; + int ret = 1; + + /* before going on we have to check whether the client has + * already roamed to us too many times */ + if (!tt_check_roam_count(bat_priv, client)) + goto out; + + skb = dev_alloc_skb(sizeof(struct roam_adv_packet) + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + + roam_adv_packet = (struct roam_adv_packet *)skb_put(skb, + sizeof(struct roam_adv_packet)); + + roam_adv_packet->packet_type = BAT_ROAM_ADV; + roam_adv_packet->version = COMPAT_VERSION; + roam_adv_packet->ttl = TTL; + memcpy(roam_adv_packet->src, + bat_priv->primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(roam_adv_packet->dst, orig_node->orig, ETH_ALEN); + memcpy(roam_adv_packet->client, client, ETH_ALEN); + + neigh_node = orig_node_get_router(orig_node); + if (!neigh_node) + goto out; + + bat_dbg(DBG_TT, bat_priv, + "Sending ROAMING_ADV to %pM (client %pM) via %pM\n", + orig_node->orig, client, neigh_node->addr); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = 0; + +out: + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (ret) + kfree_skb(skb); + return; }
static void tt_purge(struct work_struct *work) @@ -1392,6 +1547,20 @@ static void tt_purge(struct work_struct *work)
tt_local_purge(bat_priv); tt_req_purge(bat_priv); + tt_roam_purge(bat_priv);
tt_start_timer(bat_priv); } + +void tt_free(struct bat_priv *bat_priv) +{ + cancel_delayed_work_sync(&bat_priv->tt_work); + + tt_local_table_free(bat_priv); + tt_global_table_free(bat_priv); + tt_req_list_free(bat_priv); + tt_changes_list_free(bat_priv); + tt_roam_list_free(bat_priv); + + kfree(bat_priv->tt_buff); +} diff --git a/translation-table.h b/translation-table.h index f203b49..b08d30a 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,6 +22,7 @@ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
+struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr); int tt_len(int changes_num); void tt_changes_primary_if(struct bat_priv *bat_priv, uint8_t *old_addr, uint8_t *new_addr); @@ -30,20 +31,20 @@ int tt_changes_fill_buffer(struct bat_priv *bat_priv, int tt_init(struct bat_priv *bat_priv); void tt_local_add(struct net_device *soft_iface, uint8_t *addr); void tt_local_remove(struct bat_priv *bat_priv, - uint8_t *addr, char *message); + uint8_t *addr, char *message, bool roaming); int tt_local_seq_print_text(struct seq_file *seq, void *offset); void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *tt_buff, int tt_buff_len); int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *addr, - uint8_t ttvn); + uint8_t ttvn, bool roaming); int tt_global_seq_print_text(struct seq_file *seq, void *offset); void tt_global_del_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, char *message); void tt_global_del(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *addr, - char *message); + char *message, bool roaming); struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr); void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *tt_buff, uint8_t tt_num_changes); @@ -61,5 +62,7 @@ void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr); void handle_tt_response(struct bat_priv *bat_priv, struct tt_query_packet *tt_response); +void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, + struct orig_node *orig_node);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index 0848fcc..b148bc3 100644 --- a/types.h +++ b/types.h @@ -81,6 +81,13 @@ struct orig_node { int16_t tt_buff_len; spinlock_t tt_buff_lock; /* protects tt_buff */ atomic_t tt_size; + bool tt_poss_change; /* This flag is used to detect an ongoing roaming + * phase. If true, then I sent a Roaming_adv to + * this orig_node and I have to inspect every + * packet directed to it to check whether it is + * still the true destination or not. This flag + * will be reset to false as soon as I receive a + * new TTVN from this orig_node */ uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -158,6 +165,12 @@ struct bat_priv { atomic_t ttvn; /* tranlation table version number */ atomic_t tt_ogm_append_cnt; atomic_t tt_local_changes; /* changes registered in a OGM interval */ + bool tt_poss_change; /* This flag is used to detect an ongoing roaming + * phase. If true, then I received a Roaming_adv + * and I have to inspect every packet directed to + * me to check whether I am still the true + * destination or not. This flag will be reset to + * false as soon as I increase my TTVN */ char num_ifaces; struct debug_log *debug_log; struct kobject *mesh_obj; @@ -172,6 +185,7 @@ struct bat_priv { struct hashtable_t *tt_local_hash; struct hashtable_t *tt_global_hash; struct list_head tt_req_list; /* list of pending tt_requests */ + struct list_head tt_roam_list; struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ @@ -179,6 +193,7 @@ struct bat_priv { spinlock_t tt_lhash_lock; /* protects tt_local_hash */ spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t tt_req_list_lock; /* protects tt_req_list */ + spinlock_t tt_roam_list_lock; /* protects tt_roam_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ @@ -224,8 +239,8 @@ struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; uint8_t ttvn; - /* entry in the global table */ - struct hlist_node hash_entry; + uint8_t flags; /* only TT_GLOBAL_ROAM is used */ + struct hlist_node hash_entry; /* entry in the global table */ };
struct tt_change_node { @@ -239,6 +254,13 @@ struct tt_req_node { struct list_head list; };
+struct tt_roam_node { + uint8_t addr[ETH_ALEN]; + atomic_t counter; + unsigned long first_time; + struct list_head list; +}; + /** * forw_packet - structure for forw_list maintaining packets to be * send/forwarded
The local and the global translation tables are now lock-free and RCU-protected.
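To make the approach concrete, here is a minimal sketch (not part of the patch) of the refcount + call_rcu() pattern that the patch applies to both tables: lookups walk the hash buckets under rcu_read_lock() and only keep an entry if they can still take a reference, while the last reference holder defers the kfree() by one RCU grace period. The struct and helper names below are illustrative only; the real helpers are tt_local_entry_free_ref()/tt_global_entry_free_ref() in translation-table.c.

#include <linux/kernel.h>
#include <linux/atomic.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct tt_entry_sketch {
	atomic_t refcount;
	struct rcu_head rcu;
	/* ... entry payload ... */
};

static void tt_entry_sketch_free_rcu(struct rcu_head *rcu)
{
	kfree(container_of(rcu, struct tt_entry_sketch, rcu));
}

/* Drop a reference; memory is reclaimed only after a grace period, so
 * concurrent lock-free readers may still dereference the entry. */
static void tt_entry_sketch_free_ref(struct tt_entry_sketch *entry)
{
	if (atomic_dec_and_test(&entry->refcount))
		call_rcu(&entry->rcu, tt_entry_sketch_free_rcu);
}

/* Readers take a reference only if the entry is not already dying. */
static struct tt_entry_sketch *tt_entry_sketch_get(struct tt_entry_sketch *entry)
{
	return atomic_inc_not_zero(&entry->refcount) ? entry : NULL;
}

The initial refcount of 2 seen in the patch covers both the hash table's own reference and the reference returned to the caller that created the entry.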
Signed-off-by: Antonio Quartulli ordex@autistici.org
---
Corrected orig_node_get_router() invocation
main.c | 2 - routing.c | 2 - translation-table.c | 256 +++++++++++++++++++++++++++++---------------------- types.h | 6 +- vis.c | 13 +-- 5 files changed, 155 insertions(+), 124 deletions(-)
diff --git a/main.c b/main.c index 6e96fd6..5f3cab1 100644 --- a/main.c +++ b/main.c @@ -84,8 +84,6 @@ int mesh_init(struct net_device *soft_iface)
spin_lock_init(&bat_priv->forw_bat_list_lock); spin_lock_init(&bat_priv->forw_bcast_list_lock); - spin_lock_init(&bat_priv->tt_lhash_lock); - spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->tt_changes_list_lock); spin_lock_init(&bat_priv->tt_req_list_lock); spin_lock_init(&bat_priv->tt_roam_list_lock); diff --git a/routing.c b/routing.c index 9038687..c7f0519 100644 --- a/routing.c +++ b/routing.c @@ -89,9 +89,7 @@ static void update_transtable(struct bat_priv *bat_priv, /* Even if we received the crc into the OGM, we prefer * to recompute it to spot any possible inconsistency * in the global table */ - spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); - spin_unlock_bh(&bat_priv->tt_ghash_lock); /* Roaming phase is over: tables are in sync again. I can * unset the flag */ orig_node->tt_poss_change = false; diff --git a/translation-table.c b/translation-table.c index c77aa1e..4f50f7d 100644 --- a/translation-table.c +++ b/translation-table.c @@ -78,6 +78,9 @@ static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, if (!compare_eth(tt_local_entry, data)) continue;
+ if (!atomic_inc_not_zero(&tt_local_entry->refcount)) + continue; + tt_local_entry_tmp = tt_local_entry; break; } @@ -107,6 +110,9 @@ static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, if (!compare_eth(tt_global_entry, data)) continue;
+ if (!atomic_inc_not_zero(&tt_global_entry->refcount)) + continue; + tt_global_entry_tmp = tt_global_entry; break; } @@ -123,8 +129,36 @@ static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) return time_after(jiffies, deadline); }
+static void tt_local_entry_free_rcu(struct rcu_head *rcu) +{ + struct tt_local_entry *tt_local_entry; + + tt_local_entry = container_of(rcu, struct tt_local_entry, rcu); + kfree(tt_local_entry); +} + +static void tt_local_entry_free_ref(struct tt_local_entry *tt_local_entry) +{ + if (atomic_dec_and_test(&tt_local_entry->refcount)) + call_rcu(&tt_local_entry->rcu, tt_local_entry_free_rcu); +} + +static void tt_global_entry_free_rcu(struct rcu_head *rcu) +{ + struct tt_global_entry *tt_global_entry; + + tt_global_entry = container_of(rcu, struct tt_global_entry, rcu); + kfree(tt_global_entry); +} + +static void tt_global_entry_free_ref(struct tt_global_entry *tt_global_entry) +{ + if (atomic_dec_and_test(&tt_global_entry->refcount)) + call_rcu(&tt_global_entry->rcu, tt_global_entry_free_rcu); +} + static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, uint8_t *addr, - uint8_t roaming) + bool roaming) { struct tt_change_node *tt_change_node;
@@ -170,22 +204,19 @@ static int tt_local_init(struct bat_priv *bat_priv) void tt_local_add(struct net_device *soft_iface, uint8_t *addr) { struct bat_priv *bat_priv = netdev_priv(soft_iface); - struct tt_local_entry *tt_local_entry; - struct tt_global_entry *tt_global_entry; - uint8_t roam_addr[ETH_ALEN]; - struct orig_node *roam_orig_node; + struct tt_local_entry *tt_local_entry = NULL; + struct tt_global_entry *tt_global_entry = NULL;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr);
if (tt_local_entry) { tt_local_entry->last_seen = jiffies; - goto unlock; + goto out; }
tt_local_entry = kmalloc(sizeof(struct tt_local_entry), GFP_ATOMIC); if (!tt_local_entry) - goto unlock; + goto out;
tt_local_event(bat_priv, TT_CHANGE_ADD, addr, false);
@@ -195,6 +226,7 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
memcpy(tt_local_entry->addr, addr, ETH_ALEN); tt_local_entry->last_seen = jiffies; + atomic_set(&tt_local_entry->refcount, 2);
/* the batman interface mac address should never be purged */ if (compare_eth(addr, soft_iface->dev_addr)) @@ -204,31 +236,26 @@ void tt_local_add(struct net_device *soft_iface, uint8_t *addr)
hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry, &tt_local_entry->hash_entry); + atomic_inc(&bat_priv->num_local_tt); - spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ - spin_lock_bh(&bat_priv->tt_ghash_lock); - tt_global_entry = tt_global_hash_find(bat_priv, addr);
/* Check whether it is a roaming! */ if (tt_global_entry) { - memcpy(roam_addr, tt_global_entry->addr, ETH_ALEN); - roam_orig_node = tt_global_entry->orig_node; /* This node is probably going to update its tt table */ tt_global_entry->orig_node->tt_poss_change = true; _tt_global_del(bat_priv, tt_global_entry, "local tt received"); - spin_unlock_bh(&bat_priv->tt_ghash_lock); send_roam_adv(bat_priv, tt_global_entry->addr, - tt_global_entry->orig_node); - } else - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - return; -unlock: - spin_unlock_bh(&bat_priv->tt_lhash_lock); + tt_global_entry->orig_node); + } +out: + if (tt_local_entry) + tt_local_entry_free_ref(tt_local_entry); + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
int tt_changes_fill_buffer(struct bat_priv *bat_priv, @@ -310,8 +337,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) "announced via TT (TTVN: %u):\n", net_dev->name, (uint8_t)atomic_read(&bat_priv->ttvn));
- spin_lock_bh(&bat_priv->tt_lhash_lock); - buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx\n" */ for (i = 0; i < hash->size; i++) { @@ -325,7 +350,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->tt_lhash_lock); ret = -ENOMEM; goto out; } @@ -345,8 +369,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); - seq_printf(seq, "%s", buff); kfree(buff); out: @@ -355,15 +377,6 @@ out: return ret; }
-static void tt_local_entry_free(struct hlist_node *node, void *arg) -{ - struct bat_priv *bat_priv = (struct bat_priv *)arg; - void *data = container_of(node, struct tt_local_entry, hash_entry); - - kfree(data); - atomic_dec(&bat_priv->num_local_tt); -} - static void tt_local_del(struct bat_priv *bat_priv, struct tt_local_entry *tt_local_entry, char *message) @@ -376,23 +389,24 @@ static void tt_local_del(struct bat_priv *bat_priv, hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry->addr);
- tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); + tt_local_entry_free_ref(tt_local_entry); }
void tt_local_remove(struct bat_priv *bat_priv, uint8_t *addr, char *message, bool roaming) { - struct tt_local_entry *tt_local_entry; + struct tt_local_entry *tt_local_entry = NULL;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (tt_local_entry) { - tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, - roaming); - tt_local_del(bat_priv, tt_local_entry, message); - } - spin_unlock_bh(&bat_priv->tt_lhash_lock); + if (!tt_local_entry) + goto out; + + tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, roaming); + tt_local_del(bat_priv, tt_local_entry, message); +out: + if (tt_local_entry) + tt_local_entry_free_ref(tt_local_entry); }
static void tt_local_purge(struct bat_priv *bat_priv) @@ -401,13 +415,14 @@ static void tt_local_purge(struct bat_priv *bat_priv) struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; + spinlock_t *list_lock; /* protects write access to the hash lists */ int i;
- spin_lock_bh(&bat_priv->tt_lhash_lock); - for (i = 0; i < hash->size; i++) { head = &hash->table[i]; + list_lock = &hash->list_locks[i];
+ spin_lock_bh(list_lock); hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { if (tt_local_entry->never_purge) @@ -419,22 +434,26 @@ static void tt_local_purge(struct bat_priv *bat_priv)
tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, false); - tt_local_del(bat_priv, tt_local_entry, - "address timed out"); + atomic_dec(&bat_priv->num_local_tt); + bat_dbg(DBG_TT, bat_priv, "Deleting local " + "tt entry (%pM): timed out\n", + tt_local_entry->addr); + hlist_del_rcu(node); + tt_local_entry_free_ref(tt_local_entry); } + spin_unlock_bh(list_lock); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); }
static void tt_local_table_free(struct bat_priv *bat_priv) { struct hashtable_t *hash; - int i; spinlock_t *list_lock; /* protects write access to the hash lists */ - struct hlist_head *head; - struct hlist_node *node, *node_tmp; struct tt_local_entry *tt_local_entry; + struct hlist_node *node, *node_tmp; + struct hlist_head *head; + int i;
if (!bat_priv->tt_local_hash) return; @@ -449,7 +468,7 @@ static void tt_local_table_free(struct bat_priv *bat_priv) hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { hlist_del_rcu(node); - kfree(tt_local_entry); + tt_local_entry_free_ref(tt_local_entry); } spin_unlock_bh(list_lock); } @@ -494,10 +513,9 @@ int tt_global_add(struct bat_priv *bat_priv, unsigned char *tt_addr, uint8_t ttvn, bool roaming) { struct tt_global_entry *tt_global_entry; - struct tt_local_entry *tt_local_entry; struct orig_node *orig_node_tmp; + int ret = 0;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, tt_addr);
if (!tt_global_entry) { @@ -505,17 +523,20 @@ int tt_global_add(struct bat_priv *bat_priv, kmalloc(sizeof(struct tt_global_entry), GFP_ATOMIC); if (!tt_global_entry) - goto unlock; + goto out; + memcpy(tt_global_entry->addr, tt_addr, ETH_ALEN); /* Assign the new orig_node */ atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; tt_global_entry->ttvn = ttvn; tt_global_entry->flags = 0x00; - atomic_inc(&orig_node->tt_size); + atomic_set(&tt_global_entry->refcount, 2); + hash_add(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry, &tt_global_entry->hash_entry); + atomic_inc(&orig_node->tt_size); } else { if (tt_global_entry->orig_node != orig_node) { atomic_dec(&tt_global_entry->orig_node->tt_size); @@ -529,25 +550,18 @@ int tt_global_add(struct bat_priv *bat_priv, } }
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - bat_dbg(DBG_TT, bat_priv, "Creating new global tt entry: %pM (via %pM)\n", tt_global_entry->addr, orig_node->orig);
/* remove address from local hash if present */ - spin_lock_bh(&bat_priv->tt_lhash_lock); - tt_local_entry = tt_local_hash_find(bat_priv, tt_addr); - - if (tt_local_entry) - tt_local_remove(bat_priv, tt_global_entry->addr, - "global tt received", roaming); - - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return 1; -unlock: - spin_unlock_bh(&bat_priv->tt_ghash_lock); - return 0; + tt_local_remove(bat_priv, tt_global_entry->addr, + "global tt received", roaming); + ret = 1; +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); + return ret; }
int tt_global_seq_print_text(struct seq_file *seq, void *offset) @@ -584,8 +598,6 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) seq_printf(seq, " %-13s %s %-15s %s\n", "Client", "(TTVN)", "Originator", "(Curr TTVN)");
- spin_lock_bh(&bat_priv->tt_ghash_lock); - buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx (ttvn) via * xx:xx:xx:xx:xx:xx (cur_ttvn)\n"*/ @@ -600,10 +612,10 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->tt_ghash_lock); ret = -ENOMEM; goto out; } + buff[0] = '\0'; pos = 0;
@@ -625,8 +637,6 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - seq_printf(seq, "%s", buff); kfree(buff); out: @@ -640,7 +650,7 @@ static void _tt_global_del(struct bat_priv *bat_priv, char *message) { if (!tt_global_entry) - return; + goto out;
bat_dbg(DBG_TT, bat_priv, "Deleting global tt entry %pM (via %pM): %s\n", @@ -648,30 +658,34 @@ static void _tt_global_del(struct bat_priv *bat_priv, message);
atomic_dec(&tt_global_entry->orig_node->tt_size); + hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry->addr); - kfree(tt_global_entry); +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
void tt_global_del(struct bat_priv *bat_priv, struct orig_node *orig_node, unsigned char *addr, char *message, bool roaming) { - struct tt_global_entry *tt_global_entry; + struct tt_global_entry *tt_global_entry = NULL;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, addr); + if (!tt_global_entry) + goto out;
- if (tt_global_entry && tt_global_entry->orig_node == orig_node) { + if (tt_global_entry->orig_node == orig_node) { if (roaming) { tt_global_entry->flags |= TT_GLOBAL_ROAM; goto out; } - atomic_dec(&orig_node->tt_size); _tt_global_del(bat_priv, tt_global_entry, message); } out: - spin_unlock_bh(&bat_priv->tt_ghash_lock); + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
void tt_global_del_orig(struct bat_priv *bat_priv, @@ -682,38 +696,59 @@ void tt_global_del_orig(struct bat_priv *bat_priv, struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_node *node, *safe; struct hlist_head *head; + spinlock_t *list_lock; /* protects write access to the hash lists */
- if (!bat_priv->tt_global_hash) - return; - - spin_lock_bh(&bat_priv->tt_ghash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i]; + list_lock = &hash->list_locks[i];
+ spin_lock_bh(list_lock); hlist_for_each_entry_safe(tt_global_entry, node, safe, head, hash_entry) { - if (tt_global_entry->orig_node == orig_node) - _tt_global_del(bat_priv, tt_global_entry, - message); + if (tt_global_entry->orig_node == orig_node) { + bat_dbg(DBG_TT, bat_priv, + "Deleting global tt entry %pM " + "(via %pM): originator time out\n", + tt_global_entry->addr, + tt_global_entry->orig_node->orig); + hlist_del_rcu(node); + tt_global_entry_free_ref(tt_global_entry); + } } + spin_unlock_bh(list_lock); } atomic_set(&orig_node->tt_size, 0); - - spin_unlock_bh(&bat_priv->tt_ghash_lock); -} - -static void tt_global_entry_free(struct hlist_node *node, void *arg) -{ - void *data = container_of(node, struct tt_global_entry, hash_entry); - kfree(data); }
static void tt_global_table_free(struct bat_priv *bat_priv) { + struct hashtable_t *hash; + spinlock_t *list_lock; /* protects write access to the hash lists */ + struct tt_global_entry *tt_global_entry; + struct hlist_node *node, *node_tmp; + struct hlist_head *head; + int i; + if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->tt_global_hash, tt_global_entry_free, NULL); + hash = bat_priv->tt_global_hash; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + list_lock = &hash->list_locks[i]; + + spin_lock_bh(list_lock); + hlist_for_each_entry_safe(tt_global_entry, node, node_tmp, + head, hash_entry) { + hlist_del_rcu(node); + tt_global_entry_free_ref(tt_global_entry); + } + spin_unlock_bh(list_lock); + } + + hash_destroy(hash); + bat_priv->tt_global_hash = NULL; }
@@ -722,19 +757,19 @@ struct orig_node *transtable_search(struct bat_priv *bat_priv, uint8_t *addr) struct tt_global_entry *tt_global_entry; struct orig_node *orig_node = NULL;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (!tt_global_entry) goto out;
if (!atomic_inc_not_zero(&tt_global_entry->orig_node->refcount)) - goto out; + goto free_tt;
orig_node = tt_global_entry->orig_node;
+free_tt: + tt_global_entry_free_ref(tt_global_entry); out: - spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; }
@@ -797,7 +832,6 @@ uint16_t tt_local_crc(struct bat_priv *bat_priv) tt_local_entry->addr[j]); total ^= total_one; } - rcu_read_unlock(); }
@@ -1343,15 +1377,17 @@ void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node,
bool is_my_client(struct bat_priv *bat_priv, uint8_t *addr) { - struct tt_local_entry *tt_local_entry; + struct tt_local_entry *tt_local_entry = NULL; + bool ret = false;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->tt_lhash_lock); - + if (!tt_local_entry) + goto out; + ret = true; +out: if (tt_local_entry) - return true; - return false; + tt_local_entry_free_ref(tt_local_entry); + return ret; }
void handle_tt_response(struct bat_priv *bat_priv, @@ -1388,9 +1424,7 @@ void handle_tt_response(struct bat_priv *bat_priv, spin_unlock_bh(&bat_priv->tt_req_list_lock);
/* Recalculate the CRC for this orig_node and store it */ - spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); - spin_unlock_bh(&bat_priv->tt_ghash_lock); /* Roaming phase is over: tables are in sync again. I can * unset the flag */ orig_node->tt_poss_change = false; diff --git a/types.h b/types.h index b148bc3..fdc6993 100644 --- a/types.h +++ b/types.h @@ -190,8 +190,6 @@ struct bat_priv { spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ spinlock_t tt_changes_list_lock; /* protects tt_changes */ - spinlock_t tt_lhash_lock; /* protects tt_local_hash */ - spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t tt_req_list_lock; /* protects tt_req_list */ spinlock_t tt_roam_list_lock; /* protects tt_roam_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ @@ -232,6 +230,8 @@ struct tt_local_entry { uint8_t addr[ETH_ALEN]; unsigned long last_seen; char never_purge; + atomic_t refcount; + struct rcu_head rcu; struct hlist_node hash_entry; };
@@ -240,6 +240,8 @@ struct tt_global_entry { struct orig_node *orig_node; uint8_t ttvn; uint8_t flags; /* only TT_GLOBAL_ROAM is used */ + atomic_t refcount; + struct rcu_head rcu; struct hlist_node hash_entry; /* entry in the global table */ };
diff --git a/vis.c b/vis.c index c39f20c..4c27950 100644 --- a/vis.c +++ b/vis.c @@ -680,11 +680,12 @@ next:
hash = bat_priv->tt_local_hash;
- spin_lock_bh(&bat_priv->tt_lhash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i];
- hlist_for_each_entry(tt_local_entry, node, head, hash_entry) { + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_local_entry, node, head, + hash_entry) { entry = (struct vis_info_entry *) skb_put(info->skb_packet, sizeof(*entry)); @@ -693,14 +694,12 @@ next: entry->quality = 0; /* 0 means TT */ packet->entries++;
- if (vis_packet_full(info)) { - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return 0; - } + if (vis_packet_full(info)) + goto unlock; } + rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); return 0;
unlock:
Hi,
this new patchset provides some fixes/improvements:
- Code has been rebased on top of the current master branch
- Support for kfree_rcu() has been added (with the related compat code)
- 'Fix & clean' thanks to Sven's suggestions/patches
- seq_before() is now used for ttvn comparison in recv_unicast_packet()
*** This patchset assumes that seq_before/after have been moved to main.h using these two patches:
- https://lists.open-mesh.org/pipermail/b.a.t.m.a.n/2011-May/004730.html
- https://lists.open-mesh.org/pipermail/b.a.t.m.a.n/2011-May/004748.html ***
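For reference, the point of using seq_before() on the 8-bit TTVN is to keep the comparison safe across wraparound. A minimal standalone sketch of the idea (the helper name is mine; the real code uses the generic seq_before()/seq_after() macros from main.h):

#include <linux/types.h>

/* Illustrative only: interpret the unsigned difference of two 8-bit
 * version numbers as signed, so that 0x01 counts as "after" 0xff even
 * though it is numerically smaller. */
static inline bool ttvn_is_before(u8 x, u8 y)
{
	return (s8)(x - y) < 0;
}

With a check like this, recv_unicast_packet() can detect that a packet was addressed using an older TTVN than the destination currently announces and reroute it accordingly.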
Thank you all.
Regards, Antonio Quartulli
The old HNA mechanism has been rewritten from scratch. The new mechanism announces only the local translation-table changes, which reduces the protocol overhead.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Client-announcement
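To give an idea of what a receiving node does with these change-only announcements, here is a simplified decision sketch (names are illustrative; the in-tree logic is update_transtable() in routing.c, which additionally distinguishes between diff and full-table requests):

#include <linux/types.h>

enum tt_action {
	TT_APPLY_CHANGES,	/* ttvn advanced by one: apply the attached diff */
	TT_REQUEST_TABLE,	/* tables drifted: ask the originator for fresh data */
	TT_IN_SYNC,		/* nothing to do */
};

static enum tt_action tt_decide_sketch(u8 ttvn, u8 last_ttvn,
				       u8 tt_num_changes,
				       u16 tt_crc, u16 last_crc)
{
	/* exactly one version ahead and the OGM carries the changes */
	if ((u8)(ttvn - last_ttvn) == 1 && tt_num_changes)
		return TT_APPLY_CHANGES;

	/* missed more than one change, diff too big to fit in the OGM,
	 * or CRC mismatch: fall back to a TT request */
	if (ttvn != last_ttvn || tt_crc != last_crc)
		return TT_REQUEST_TABLE;

	return TT_IN_SYNC;
}

The CRC attached to each OGM lets a node spot a desynchronised global table even when the TTVN looks consistent.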
Moreover:
- COMPAT_VERSION has been increased to 14
- batman-adv now depends on module "crc16" for tt crc computation
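As an illustration of how the crc16 module is used: the CRC16 of every announced client MAC is XOR-ed into a single 16-bit table checksum, so the result does not depend on the order in which the hash buckets are walked. The function name and array layout below are mine; the in-tree versions are tt_local_crc()/tt_global_crc() in translation-table.c.

#include <linux/crc16.h>
#include <linux/if_ether.h>
#include <linux/types.h>

static u16 tt_crc_sketch(const u8 (*clients)[ETH_ALEN], int num_clients)
{
	u16 total = 0;
	int i;

	/* XOR of per-client CRCs: order-independent, cheap to recompute */
	for (i = 0; i < num_clients; i++)
		total ^= crc16(0, clients[i], ETH_ALEN);

	return total;
}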
Signed-off-by: Antonio Quartulli ordex@autistici.org
Acked-by: Simon Wunderlich siwu@hrz.tu-chemnitz.de
---
- Cleaned following ecsv's patches/suggestions.
- recv_unicast_packet() now uses seq_before()
aggregation.c | 23 +- aggregation.h | 6 +- bat_sysfs.c | 2 +- hard-interface.c | 13 +- main.c | 13 +- main.h | 14 +- originator.c | 8 +- packet.h | 37 ++- routing.c | 236 +++++++++--- routing.h | 6 +- send.c | 87 +++- send.h | 2 +- soft-interface.c | 11 +- translation-table.c | 1128 ++++++++++++++++++++++++++++++++++++++++++--------- translation-table.h | 34 ++- types.h | 38 ++- unicast.c | 3 + 17 files changed, 1356 insertions(+), 305 deletions(-)
diff --git a/aggregation.c b/aggregation.c index b41f25b..ef26011 100644 --- a/aggregation.c +++ b/aggregation.c @@ -20,17 +20,12 @@ */
#include "main.h" +#include "translation-table.h" #include "aggregation.h" #include "send.h" #include "routing.h" #include "hard-interface.h"
-/* calculate the size of the tt information for a given packet */ -static int tt_len(const struct batman_packet *batman_packet) -{ - return batman_packet->num_tt * ETH_ALEN; -} - /* return true if new_packet can be aggregated with forw_packet */ static bool can_aggregate_with(const struct batman_packet *new_batman_packet, int packet_len, @@ -264,18 +259,20 @@ void receive_aggr_bat_packet(const struct ethhdr *ethhdr, batman_packet = (struct batman_packet *)packet_buff;
do { - /* network to host order for our 32bit seqno, and the - orig_interval. */ + /* network to host order for our 32bit seqno and the + orig_interval */ batman_packet->seqno = ntohl(batman_packet->seqno); + batman_packet->tt_crc = ntohs(batman_packet->tt_crc);
tt_buff = packet_buff + buff_pos + BAT_PACKET_LEN; - receive_bat_packet(ethhdr, batman_packet, - tt_buff, tt_len(batman_packet), - if_incoming);
- buff_pos += BAT_PACKET_LEN + tt_len(batman_packet); + receive_bat_packet(ethhdr, batman_packet, tt_buff, if_incoming); + + buff_pos += BAT_PACKET_LEN + + tt_len(batman_packet->tt_num_changes); + batman_packet = (struct batman_packet *) (packet_buff + buff_pos); } while (aggregated_packet(buff_pos, packet_len, - batman_packet->num_tt)); + batman_packet->tt_num_changes)); } diff --git a/aggregation.h b/aggregation.h index fedeb8d..2b7b852 100644 --- a/aggregation.h +++ b/aggregation.h @@ -25,9 +25,11 @@ #include "main.h"
/* is there another aggregated packet here? */ -static inline int aggregated_packet(int buff_pos, int packet_len, int num_tt) +static inline int aggregated_packet(int buff_pos, int packet_len, + int tt_num_changes) { - int next_buff_pos = buff_pos + BAT_PACKET_LEN + (num_tt * ETH_ALEN); + int next_buff_pos = buff_pos + BAT_PACKET_LEN + (tt_num_changes * + sizeof(struct tt_change));
return (next_buff_pos <= packet_len) && (next_buff_pos <= MAX_AGGREGATION_BYTES); diff --git a/bat_sysfs.c b/bat_sysfs.c index 6f70560..df8a283 100644 --- a/bat_sysfs.c +++ b/bat_sysfs.c @@ -367,7 +367,7 @@ BAT_ATTR_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE, static BAT_ATTR(gw_bandwidth, S_IRUGO | S_IWUSR, show_gw_bwidth, store_gw_bwidth); #ifdef CONFIG_BATMAN_ADV_DEBUG -BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 3, NULL); +BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 7, NULL); #endif
static struct bat_attribute *mesh_attrs[] = { diff --git a/hard-interface.c b/hard-interface.c index a3fbfb5..5ade5a8 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -152,12 +152,6 @@ static void primary_if_select(struct bat_priv *bat_priv, batman_packet->ttl = TTL;
primary_if_update_addr(bat_priv); - - /*** - * hacky trick to make sure that we send the TT information via - * our new primary interface - */ - atomic_set(&bat_priv->tt_local_changed, 1); }
static bool hardif_is_iface_up(const struct hard_iface *hard_iface) @@ -340,7 +334,8 @@ int hardif_enable_interface(struct hard_iface *hard_iface, batman_packet->flags = 0; batman_packet->ttl = 2; batman_packet->tq = TQ_MAX_VALUE; - batman_packet->num_tt = 0; + batman_packet->tt_num_changes = 0; + batman_packet->ttvn = 0;
hard_iface->if_num = bat_priv->num_ifaces; bat_priv->num_ifaces++; @@ -659,6 +654,10 @@ static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, case BAT_VIS: ret = recv_vis_packet(skb, hard_iface); break; + /* Translation table query (request or response) */ + case BAT_TT_QUERY: + ret = recv_tt_query(skb, hard_iface); + break; default: ret = NET_RX_DROP; } diff --git a/main.c b/main.c index 2d6445e..49a5e64 100644 --- a/main.c +++ b/main.c @@ -86,6 +86,9 @@ int mesh_init(struct net_device *soft_iface) spin_lock_init(&bat_priv->forw_bcast_list_lock); spin_lock_init(&bat_priv->tt_lhash_lock); spin_lock_init(&bat_priv->tt_ghash_lock); + spin_lock_init(&bat_priv->tt_changes_list_lock); + spin_lock_init(&bat_priv->tt_req_list_lock); + spin_lock_init(&bat_priv->tt_buff_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); spin_lock_init(&bat_priv->vis_list_lock); @@ -96,14 +99,13 @@ int mesh_init(struct net_device *soft_iface) INIT_HLIST_HEAD(&bat_priv->forw_bcast_list); INIT_HLIST_HEAD(&bat_priv->gw_list); INIT_HLIST_HEAD(&bat_priv->softif_neigh_vids); + INIT_LIST_HEAD(&bat_priv->tt_changes_list); + INIT_LIST_HEAD(&bat_priv->tt_req_list);
if (originator_init(bat_priv) < 1) goto err;
- if (tt_local_init(bat_priv) < 1) - goto err; - - if (tt_global_init(bat_priv) < 1) + if (tt_init(bat_priv) < 1) goto err;
tt_local_add(soft_iface, soft_iface->dev_addr); @@ -137,8 +139,7 @@ void mesh_free(struct net_device *soft_iface) gw_node_purge(bat_priv); originator_free(bat_priv);
- tt_local_free(bat_priv); - tt_global_free(bat_priv); + tt_free(bat_priv);
softif_neigh_purge(bat_priv);
diff --git a/main.h b/main.h index db29444..930fbdb 100644 --- a/main.h +++ b/main.h @@ -46,11 +46,19 @@ /* sliding packet range of received originator messages in squence numbers * (should be a multiple of our word size) */ #define TQ_LOCAL_WINDOW_SIZE 64 +#define TT_REQUEST_TIMEOUT 3 /* seconds we have to keep pending tt_req */ + #define TQ_GLOBAL_WINDOW_SIZE 5 #define TQ_LOCAL_BIDRECT_SEND_MINIMUM 1 #define TQ_LOCAL_BIDRECT_RECV_MINIMUM 1 #define TQ_TOTAL_BIDRECT_LIMIT 1
+#define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */ + +/* Transtable change flags */ +#define TT_CHANGE_ADD 0x00 +#define TT_CHANGE_DEL 0x01 + #define NUM_WORDS (TQ_LOCAL_WINDOW_SIZE / WORD_BIT_SIZE)
#define LOG_BUF_LEN 8192 /* has to be a power of 2 */ @@ -90,9 +98,9 @@
/* all messages related to routing / flooding / broadcasting / etc */ #define DBG_BATMAN 1 -/* route or tt entry added / changed / deleted */ -#define DBG_ROUTES 2 -#define DBG_ALL 3 +#define DBG_ROUTES 2 /* route added / changed / deleted */ +#define DBG_TT 4 /* translation table operations */ +#define DBG_ALL 7
/* diff --git a/originator.c b/originator.c index a6c35d4..66938fa 100644 --- a/originator.c +++ b/originator.c @@ -137,6 +137,7 @@ static void orig_node_free_rcu(struct rcu_head *rcu) tt_global_del_orig(orig_node->bat_priv, orig_node, "originator timed out");
+ kfree(orig_node->tt_buff); kfree(orig_node->bcast_own); kfree(orig_node->bcast_own_sum); kfree(orig_node); @@ -205,6 +206,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, const uint8_t *addr) spin_lock_init(&orig_node->ogm_cnt_lock); spin_lock_init(&orig_node->bcast_seqno_lock); spin_lock_init(&orig_node->neigh_list_lock); + spin_lock_init(&orig_node->tt_buff_lock);
/* extra reference for return */ atomic_set(&orig_node->refcount, 2); @@ -213,6 +215,8 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, const uint8_t *addr) memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; orig_node->tt_buff = NULL; + orig_node->tt_buff_len = 0; + atomic_set(&orig_node->tt_size, 0); orig_node->bcast_seqno_reset = jiffies - 1 - msecs_to_jiffies(RESET_PROTECTION_MS); orig_node->batman_seqno_reset = jiffies - 1 @@ -322,9 +326,7 @@ static bool purge_orig_node(struct bat_priv *bat_priv, if (purge_orig_neighbors(bat_priv, orig_node, &best_neigh_node)) { update_routes(bat_priv, orig_node, - best_neigh_node, - orig_node->tt_buff, - orig_node->tt_buff_len); + best_neigh_node); } }
diff --git a/packet.h b/packet.h index eda9965..525fee5 100644 --- a/packet.h +++ b/packet.h @@ -30,9 +30,10 @@ #define BAT_BCAST 0x04 #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 +#define BAT_TT_QUERY 0x07
/* this file is included by batctl which needs these defines */ -#define COMPAT_VERSION 12 +#define COMPAT_VERSION 14 #define DIRECTLINK 0x40 #define VIS_SERVER 0x20 #define PRIMARIES_FIRST_HOP 0x10 @@ -52,6 +53,11 @@ #define UNI_FRAG_HEAD 0x01 #define UNI_FRAG_LARGETAIL 0x02
+/* TT flags */ +#define TT_RESPONSE 0x00 +#define TT_REQUEST 0x01 +#define TT_FULL_TABLE 0x02 + struct batman_packet { uint8_t packet_type; uint8_t version; /* batman version field */ @@ -61,9 +67,10 @@ struct batman_packet { uint8_t orig[6]; uint8_t prev_sender[6]; uint8_t ttl; - uint8_t num_tt; + uint8_t ttvn; /* translation table version number */ + uint16_t tt_crc; + uint8_t tt_num_changes; uint8_t gw_flags; /* flags related to gateway class */ - uint8_t align; } __packed;
#define BAT_PACKET_LEN sizeof(struct batman_packet) @@ -101,6 +108,7 @@ struct unicast_packet { uint8_t version; /* batman version field */ uint8_t dest[6]; uint8_t ttl; + uint8_t ttvn; /* destination translation table version number */ } __packed;
struct unicast_frag_packet { @@ -133,4 +141,27 @@ struct vis_packet { uint8_t sender_orig[6]; /* who sent or rebroadcasted this packet */ } __packed;
+struct tt_query_packet { + uint8_t packet_type; + uint8_t version; /* batman version field */ + uint8_t dst[ETH_ALEN]; + uint8_t ttl; + /* the flag field is a combination of: + * - TT_REQUEST or TT_RESPONSE + * - TT_FULL_TABLE */ + uint8_t flags; + uint8_t src[ETH_ALEN]; + /* the ttvn field is: + * if TT_REQUEST: ttvn that triggered the + * request + * if TT_RESPONSE: new ttvn for the src + * orig_node */ + uint8_t ttvn; + /* tt_data field is: + * if TT_REQUEST: crc associated with the + * ttvn + * if TT_RESPONSE: table_size */ + uint16_t tt_data; +} __packed; + #endif /* _NET_BATMAN_ADV_PACKET_H_ */ diff --git a/routing.c b/routing.c index 07f23ba..e1b04a7 100644 --- a/routing.c +++ b/routing.c @@ -64,27 +64,56 @@ void slide_own_bcast_window(struct hard_iface *hard_iface) } }
-static void update_TT(struct bat_priv *bat_priv, struct orig_node *orig_node, - const unsigned char *tt_buff, int tt_buff_len) +static void update_transtable(struct bat_priv *bat_priv, + struct orig_node *orig_node, + const unsigned char *tt_buff, + uint8_t tt_num_changes, uint8_t ttvn, + uint16_t tt_crc) { - if ((tt_buff_len != orig_node->tt_buff_len) || - ((tt_buff_len > 0) && - (orig_node->tt_buff_len > 0) && - (memcmp(orig_node->tt_buff, tt_buff, tt_buff_len) != 0))) { - - if (orig_node->tt_buff_len > 0) - tt_global_del_orig(bat_priv, orig_node, - "originator changed tt"); - - if ((tt_buff_len > 0) && (tt_buff)) - tt_global_add_orig(bat_priv, orig_node, - tt_buff, tt_buff_len); + uint8_t orig_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + bool full_table = true; + + /* the ttvn increased by one -> we can apply the attached changes */ + if (ttvn - orig_ttvn == 1) { + /* the OGM could not contain the changes because they were too + * many to fit in one frame or because they have already been + * sent TT_OGM_APPEND_MAX times. In this case send a tt + * request */ + if (!tt_num_changes) { + full_table = false; + goto request_table; + } + + tt_update_changes(bat_priv, orig_node, tt_num_changes, ttvn, + (struct tt_change *)tt_buff); + + /* Even if we received the crc into the OGM, we prefer + * to recompute it to spot any possible inconsistency + * in the global table */ + spin_lock_bh(&bat_priv->tt_ghash_lock); + orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + spin_unlock_bh(&bat_priv->tt_ghash_lock); + } else { + /* if we missed more than one change or our tables are not + * in sync anymore -> request fresh tt data */ + if (ttvn != orig_ttvn || orig_node->tt_crc != tt_crc) { +request_table: + bat_dbg(DBG_TT, bat_priv, "TT inconsistency for %pM. " + "Need to retrieve the correct information " + "(ttvn: %u last_ttvn: %u crc: %u last_crc: " + "%u num_changes: %u)\n", orig_node->orig, ttvn, + orig_ttvn, tt_crc, orig_node->tt_crc, + tt_num_changes); + send_tt_request(bat_priv, orig_node, ttvn, tt_crc, + full_table); + return; + } } }
-static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, - const unsigned char *tt_buff, int tt_buff_len) +static void update_route(struct bat_priv *bat_priv, + struct orig_node *orig_node, + struct neigh_node *neigh_node) { struct neigh_node *curr_router;
@@ -92,11 +121,10 @@ static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node,
/* route deleted */ if ((curr_router) && (!neigh_node)) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting route towards: %pM\n", orig_node->orig); tt_global_del_orig(bat_priv, orig_node, - "originator timed out"); + "Deleted route towards originator");
/* route added */ } else if ((!curr_router) && (neigh_node)) { @@ -104,9 +132,6 @@ static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, bat_dbg(DBG_ROUTES, bat_priv, "Adding route towards: %pM (via %pM)\n", orig_node->orig, neigh_node->addr); - tt_global_add_orig(bat_priv, orig_node, - tt_buff, tt_buff_len); - /* route changed */ } else { bat_dbg(DBG_ROUTES, bat_priv, @@ -133,8 +158,7 @@ static void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, }
void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, const unsigned char *tt_buff, - int tt_buff_len) + struct neigh_node *neigh_node) { struct neigh_node *router = NULL;
@@ -144,11 +168,7 @@ void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, router = orig_node_get_router(orig_node);
if (router != neigh_node) - update_route(bat_priv, orig_node, neigh_node, - tt_buff, tt_buff_len); - /* may be just TT changed */ - else - update_TT(bat_priv, orig_node, tt_buff, tt_buff_len); + update_route(bat_priv, orig_node, neigh_node);
out: if (router) @@ -360,14 +380,12 @@ static void update_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, const struct ethhdr *ethhdr, const struct batman_packet *batman_packet, struct hard_iface *if_incoming, - const unsigned char *tt_buff, int tt_buff_len, - char is_duplicate) + const unsigned char *tt_buff, char is_duplicate) { struct neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; struct neigh_node *router = NULL; struct orig_node *orig_node_tmp; struct hlist_node *node; - int tmp_tt_buff_len; uint8_t bcast_own_sum_orig, bcast_own_sum_neigh;
bat_dbg(DBG_BATMAN, bat_priv, "update_originator(): " @@ -432,9 +450,6 @@ static void update_orig(struct bat_priv *bat_priv, struct orig_node *orig_node,
bonding_candidate_add(orig_node, neigh_node);
- tmp_tt_buff_len = (tt_buff_len > batman_packet->num_tt * ETH_ALEN ? - batman_packet->num_tt * ETH_ALEN : tt_buff_len); - /* if this neighbor already is our next hop there is nothing * to change */ router = orig_node_get_router(orig_node); @@ -464,15 +479,19 @@ static void update_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, goto update_tt; }
- update_routes(bat_priv, orig_node, neigh_node, - tt_buff, tmp_tt_buff_len); - goto update_gw; + update_routes(bat_priv, orig_node, neigh_node);
update_tt: - update_routes(bat_priv, orig_node, router, - tt_buff, tmp_tt_buff_len); + /* I have to check for transtable changes only if the OGM has been + * sent through a primary interface */ + if (((batman_packet->orig != ethhdr->h_source) && + (batman_packet->ttl > 2)) || + (batman_packet->flags & PRIMARIES_FIRST_HOP)) + update_transtable(bat_priv, orig_node, tt_buff, + batman_packet->tt_num_changes, + batman_packet->ttvn, + batman_packet->tt_crc);
-update_gw: if (orig_node->gw_flags != batman_packet->gw_flags) gw_node_update(bat_priv, orig_node, batman_packet->gw_flags);
@@ -594,7 +613,7 @@ out:
void receive_bat_packet(const struct ethhdr *ethhdr, struct batman_packet *batman_packet, - const unsigned char *tt_buff, int tt_buff_len, + const unsigned char *tt_buff, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); @@ -633,12 +652,14 @@ void receive_bat_packet(const struct ethhdr *ethhdr,
bat_dbg(DBG_BATMAN, bat_priv, "Received BATMAN packet via NB: %pM, IF: %s [%pM] " - "(from OG: %pM, via prev OG: %pM, seqno %d, tq %d, " - "TTL %d, V %d, IDF %d)\n", + "(from OG: %pM, via prev OG: %pM, seqno %d, ttvn %u, " + "crc %u, changes %u, td %d, TTL %d, V %d, IDF %d)\n", ethhdr->h_source, if_incoming->net_dev->name, if_incoming->net_dev->dev_addr, batman_packet->orig, batman_packet->prev_sender, batman_packet->seqno, - batman_packet->tq, batman_packet->ttl, batman_packet->version, + batman_packet->ttvn, batman_packet->tt_crc, + batman_packet->tt_num_changes, batman_packet->tq, + batman_packet->ttl, batman_packet->version, has_directlink_flag);
rcu_read_lock(); @@ -791,14 +812,14 @@ void receive_bat_packet(const struct ethhdr *ethhdr, ((orig_node->last_real_seqno == batman_packet->seqno) && (orig_node->last_ttl - 3 <= batman_packet->ttl)))) update_orig(bat_priv, orig_node, ethhdr, batman_packet, - if_incoming, tt_buff, tt_buff_len, is_duplicate); + if_incoming, tt_buff, is_duplicate);
/* is single hop (direct) neighbor */ if (is_single_hop_neigh) {
/* mark direct link on incoming interface */ schedule_forward_packet(orig_node, ethhdr, batman_packet, - 1, tt_buff_len, if_incoming); + 1, if_incoming);
bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: " "rebroadcast neighbor packet with direct link flag\n"); @@ -821,7 +842,7 @@ void receive_bat_packet(const struct ethhdr *ethhdr, bat_dbg(DBG_BATMAN, bat_priv, "Forwarding packet: rebroadcast originator packet\n"); schedule_forward_packet(orig_node, ethhdr, batman_packet, - 0, tt_buff_len, if_incoming); + 0, if_incoming);
out_neigh: if ((orig_neigh_node) && (!is_single_hop_neigh)) @@ -1168,6 +1189,70 @@ static struct neigh_node *find_ifalter_router(struct orig_node *primary_orig, return router; }
+int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if) +{ + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct tt_query_packet *tt_query; + struct ethhdr *ethhdr; + int ret = NET_RX_DROP; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, sizeof(struct tt_query_packet)))) + goto out; + + /* I could need to modify it */ + if (skb_cow(skb, sizeof(struct tt_query_packet)) < 0) + goto out; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with unicast indication but broadcast recipient */ + if (is_broadcast_ether_addr(ethhdr->h_dest)) + goto out; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + goto out; + + tt_query = (struct tt_query_packet *)skb->data; + + tt_query->tt_data = ntohs(tt_query->tt_data); + + if (tt_query->flags & TT_REQUEST) { + /* If we cannot provide an answer the tt_request is + * forwarded */ + if (!send_tt_response(bat_priv, tt_query)) { + bat_dbg(DBG_TT, bat_priv, + "Routing TT_REQUEST to %pM [%c]\n", + tt_query->dst, + (tt_query->flags & TT_FULL_TABLE ? 'F' : '.')); + tt_query->tt_data = htons(tt_query->tt_data); + return route_unicast_packet(skb, recv_if); + } + ret = NET_RX_SUCCESS; + goto out; + } + /* packet needs to be linearised to access the TT changes records */ + if (skb_linearize(skb) < 0) + goto out; + + if (is_my_mac(tt_query->dst)) + handle_tt_response(bat_priv, tt_query); + else { + bat_dbg(DBG_TT, bat_priv, + "Routing TT_RESPONSE to %pM [%c]\n", + tt_query->dst, + (tt_query->flags & TT_FULL_TABLE ? 'F' : '.')); + tt_query->tt_data = htons(tt_query->tt_data); + return route_unicast_packet(skb, recv_if); + } + ret = NET_RX_SUCCESS; + +out: + kfree_skb(skb); + return ret; +} + /* find a suitable router for this originator, and use * bonding if possible. increases the found neighbors * refcount.*/ @@ -1356,14 +1441,69 @@ out:
int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) { + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); struct unicast_packet *unicast_packet; int hdr_size = sizeof(*unicast_packet); + struct orig_node *orig_node; + struct ethhdr *ethhdr; + uint8_t curr_ttvn; + int16_t diff; + struct hard_iface *primary_if;
if (check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
unicast_packet = (struct unicast_packet *)skb->data;
+ if (is_my_mac(unicast_packet->dest)) + curr_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); + else { + orig_node = orig_hash_find(bat_priv, unicast_packet->dest); + + if (!orig_node) + return NET_RX_DROP; + + curr_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + orig_node_free_ref(orig_node); + } + + diff = unicast_packet->ttvn - curr_ttvn; + /* Check whether I have to reroute the packet */ + if (unicast_packet->packet_type == BAT_UNICAST && + (diff < 0 && diff > -0xff/2)) { + /* Linearize the skb before accessing it */ + if (skb_linearize(skb) < 0) + return NET_RX_DROP; + + ethhdr = (struct ethhdr *)(skb->data + + sizeof(struct unicast_packet)); + + orig_node = transtable_search(bat_priv, ethhdr->h_dest); + + if (!orig_node) { + if (!is_my_client(bat_priv, ethhdr->h_dest)) + return NET_RX_DROP; + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + return NET_RX_DROP; + memcpy(unicast_packet->dest, + primary_if->net_dev->dev_addr, ETH_ALEN); + hardif_free_ref(primary_if); + } else { + memcpy(unicast_packet->dest, orig_node->orig, + ETH_ALEN); + curr_ttvn = (uint8_t) + atomic_read(&orig_node->last_ttvn); + orig_node_free_ref(orig_node); + } + + bat_dbg(DBG_ROUTES, bat_priv, "TTVN mismatch (old_ttvn %u " + "new_ttvn %u)! Rerouting unicast packet (for %pM) to " + "%pM\n", unicast_packet->ttvn, curr_ttvn, + ethhdr->h_dest, unicast_packet->dest); + + unicast_packet->ttvn = curr_ttvn; + } /* packet for me */ if (is_my_mac(unicast_packet->dest)) { interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size); diff --git a/routing.h b/routing.h index 0ce0392..e77d464 100644 --- a/routing.h +++ b/routing.h @@ -25,11 +25,10 @@ void slide_own_bcast_window(struct hard_iface *hard_iface); void receive_bat_packet(const struct ethhdr *ethhdr, struct batman_packet *batman_packet, - const unsigned char *tt_buff, int tt_buff_len, + const unsigned char *tt_buff, struct hard_iface *if_incoming); void update_routes(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node, const unsigned char *tt_buff, - int tt_buff_len); + struct neigh_node *neigh_node); int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); @@ -37,6 +36,7 @@ int recv_ucast_frag_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bat_packet(struct sk_buff *skb, struct hard_iface *recv_if); +int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv, struct orig_node *orig_node, const struct hard_iface *recv_if); diff --git a/send.c b/send.c index d0cfa95..13e5d20 100644 --- a/send.c +++ b/send.c @@ -120,7 +120,7 @@ static void send_packet_to_if(struct forw_packet *forw_packet, /* adjust all flags and log packets */ while (aggregated_packet(buff_pos, forw_packet->packet_len, - batman_packet->num_tt)) { + batman_packet->tt_num_changes)) {
/* we might have aggregated direct link packets with an * ordinary base packet */ @@ -135,17 +135,17 @@ static void send_packet_to_if(struct forw_packet *forw_packet, "Forwarding")); bat_dbg(DBG_BATMAN, bat_priv, "%s %spacket (originator %pM, seqno %d, TQ %d, TTL %d," - " IDF %s) on interface %s [%pM]\n", + " IDF %s, hvn %d) on interface %s [%pM]\n", fwd_str, (packet_num > 0 ? "aggregated " : ""), batman_packet->orig, ntohl(batman_packet->seqno), batman_packet->tq, batman_packet->ttl, (batman_packet->flags & DIRECTLINK ? "on" : "off"), - hard_iface->net_dev->name, + batman_packet->ttvn, hard_iface->net_dev->name, hard_iface->net_dev->dev_addr);
buff_pos += sizeof(*batman_packet) + - (batman_packet->num_tt * ETH_ALEN); + tt_len(batman_packet->tt_num_changes); packet_num++; batman_packet = (struct batman_packet *) (forw_packet->skb->data + buff_pos); @@ -213,25 +213,18 @@ static void send_packet(struct forw_packet *forw_packet) rcu_read_unlock(); }
-static void rebuild_batman_packet(struct bat_priv *bat_priv, - struct hard_iface *hard_iface) +static void realloc_packet_buffer(struct hard_iface *hard_iface, + int new_len) { - int new_len; unsigned char *new_buff; struct batman_packet *batman_packet;
- new_len = sizeof(*batman_packet) + (bat_priv->num_local_tt * ETH_ALEN); new_buff = kmalloc(new_len, GFP_ATOMIC);
/* keep old buffer if kmalloc should fail */ if (new_buff) { memcpy(new_buff, hard_iface->packet_buff, sizeof(*batman_packet)); - batman_packet = (struct batman_packet *)new_buff; - - batman_packet->num_tt = tt_local_fill_buffer(bat_priv, - new_buff + sizeof(*batman_packet), - new_len - sizeof(*batman_packet));
kfree(hard_iface->packet_buff); hard_iface->packet_buff = new_buff; @@ -239,6 +232,46 @@ static void rebuild_batman_packet(struct bat_priv *bat_priv, } }
+/* when calling this function (hard_iface == primary_if) has to be true */ +static void prepare_packet_buffer(struct bat_priv *bat_priv, + struct hard_iface *hard_iface) +{ + int new_len; + struct batman_packet *batman_packet; + + new_len = BAT_PACKET_LEN + + tt_len((uint8_t)atomic_read(&bat_priv->tt_local_changes)); + + /* if we have too many changes for one packet don't send any + * and wait for the tt table request which will be fragmented */ + if (new_len > hard_iface->soft_iface->mtu) + new_len = BAT_PACKET_LEN; + + realloc_packet_buffer(hard_iface, new_len); + batman_packet = (struct batman_packet *)hard_iface->packet_buff; + + atomic_set(&bat_priv->tt_crc, tt_local_crc(bat_priv)); + + /* reset the sending counter */ + atomic_set(&bat_priv->tt_ogm_append_cnt, TT_OGM_APPEND_MAX); + + batman_packet->tt_num_changes = tt_changes_fill_buffer(bat_priv, + hard_iface->packet_buff + BAT_PACKET_LEN, + hard_iface->packet_len - BAT_PACKET_LEN); + +} + +static void reset_packet_buffer(struct bat_priv *bat_priv, + struct hard_iface *hard_iface) +{ + struct batman_packet *batman_packet; + + realloc_packet_buffer(hard_iface, BAT_PACKET_LEN); + + batman_packet = (struct batman_packet *)hard_iface->packet_buff; + batman_packet->tt_num_changes = 0; +} + void schedule_own_packet(struct hard_iface *hard_iface) { struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); @@ -264,14 +297,22 @@ void schedule_own_packet(struct hard_iface *hard_iface) if (hard_iface->if_status == IF_TO_BE_ACTIVATED) hard_iface->if_status = IF_ACTIVE;
- /* if local tt has changed and interface is a primary interface */ - if ((atomic_read(&bat_priv->tt_local_changed)) && - (hard_iface == primary_if)) - rebuild_batman_packet(bat_priv, hard_iface); + if (hard_iface == primary_if) { + /* if at least one change happened */ + if (atomic_read(&bat_priv->tt_local_changes) > 0) { + prepare_packet_buffer(bat_priv, hard_iface); + /* Increment the TTVN only once per OGM interval */ + atomic_inc(&bat_priv->ttvn); + } + + /* if the changes have been sent enough times */ + if (!atomic_dec_not_zero(&bat_priv->tt_ogm_append_cnt)) + reset_packet_buffer(bat_priv, hard_iface); + }
/** * NOTE: packet_buff might just have been re-allocated in - * rebuild_batman_packet() + * prepare_packet_buffer() or in reset_packet_buffer() */ batman_packet = (struct batman_packet *)hard_iface->packet_buff;
@@ -279,6 +320,9 @@ void schedule_own_packet(struct hard_iface *hard_iface) batman_packet->seqno = htonl((uint32_t)atomic_read(&hard_iface->seqno));
+ batman_packet->ttvn = atomic_read(&bat_priv->ttvn); + batman_packet->tt_crc = htons((uint16_t)atomic_read(&bat_priv->tt_crc)); + if (vis_server == VIS_TYPE_SERVER_SYNC) batman_packet->flags |= VIS_SERVER; else @@ -307,13 +351,14 @@ void schedule_own_packet(struct hard_iface *hard_iface) void schedule_forward_packet(struct orig_node *orig_node, const struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int tt_buff_len, + uint8_t directlink, struct hard_iface *if_incoming) { struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); struct neigh_node *router; unsigned char in_tq, in_ttl, tq_avg = 0; unsigned long send_time; + uint8_t tt_num_changes;
if (batman_packet->ttl <= 1) { bat_dbg(DBG_BATMAN, bat_priv, "ttl exceeded\n"); @@ -324,6 +369,7 @@ void schedule_forward_packet(struct orig_node *orig_node,
in_tq = batman_packet->tq; in_ttl = batman_packet->ttl; + tt_num_changes = batman_packet->tt_num_changes;
batman_packet->ttl--; memcpy(batman_packet->prev_sender, ethhdr->h_source, ETH_ALEN); @@ -356,6 +402,7 @@ void schedule_forward_packet(struct orig_node *orig_node, batman_packet->ttl);
batman_packet->seqno = htonl(batman_packet->seqno); + batman_packet->tt_crc = htons(batman_packet->tt_crc);
/* switch of primaries first hop flag when forwarding */ batman_packet->flags &= ~PRIMARIES_FIRST_HOP; @@ -367,7 +414,7 @@ void schedule_forward_packet(struct orig_node *orig_node, send_time = forward_send_time(); add_bat_packet_to_list(bat_priv, (unsigned char *)batman_packet, - sizeof(*batman_packet) + tt_buff_len, + sizeof(*batman_packet) + tt_len(tt_num_changes), if_incoming, 0, send_time); }
diff --git a/send.h b/send.h index eceab87..bd5ab77 100644 --- a/send.h +++ b/send.h @@ -28,7 +28,7 @@ void schedule_own_packet(struct hard_iface *hard_iface); void schedule_forward_packet(struct orig_node *orig_node, const struct ethhdr *ethhdr, struct batman_packet *batman_packet, - uint8_t directlink, int tt_buff_len, + uint8_t directlink, struct hard_iface *if_outgoing); int add_bcast_packet_to_list(struct bat_priv *bat_priv, const struct sk_buff *skb); diff --git a/soft-interface.c b/soft-interface.c index b8d3f24..8e94273 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -534,7 +534,7 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) /* only modify transtable if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { tt_local_remove(bat_priv, dev->dev_addr, - "mac address changed"); + "mac address changed"); tt_local_add(dev, addr->sa_data); }
@@ -592,7 +592,7 @@ int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) if (curr_softif_neigh) goto dropped;
- /* TODO: check this for locks */ + /* Register the client MAC in the transtable */ tt_local_add(soft_iface, ethhdr->h_source);
if (is_multicast_ether_addr(ethhdr->h_dest)) { @@ -830,7 +830,12 @@ struct net_device *softif_create(const char *name)
atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1); - atomic_set(&bat_priv->tt_local_changed, 0); + atomic_set(&bat_priv->ttvn, 0); + atomic_set(&bat_priv->tt_local_changes, 0); + atomic_set(&bat_priv->tt_ogm_append_cnt, 0); + + bat_priv->tt_buff = NULL; + bat_priv->tt_buff_len = 0;
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index 561f769..4c355b4 100644 --- a/translation-table.c +++ b/translation-table.c @@ -23,13 +23,17 @@ #include "translation-table.h" #include "soft-interface.h" #include "hard-interface.h" +#include "send.h" #include "hash.h" #include "originator.h" +#include "routing.h"
-static void tt_local_purge(struct work_struct *work); -static void _tt_global_del_orig(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - const char *message); +#include <linux/crc16.h> + +static void _tt_global_del(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, + const char *message); +static void tt_purge(struct work_struct *work);
/* returns 1 if they are the same mac addr */ static int compare_ltt(const struct hlist_node *node, const void *data2) @@ -49,10 +53,11 @@ static int compare_gtt(const struct hlist_node *node, const void *data2) return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); }
-static void tt_local_start_timer(struct bat_priv *bat_priv) +static void tt_start_timer(struct bat_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->tt_work, tt_local_purge); - queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, 10 * HZ); + INIT_DELAYED_WORK(&bat_priv->tt_work, tt_purge); + queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, + msecs_to_jiffies(5000)); }
static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, @@ -112,7 +117,43 @@ static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, return tt_global_entry_tmp; }
-int tt_local_init(struct bat_priv *bat_priv) +static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) +{ + unsigned long deadline; + deadline = starting_time + msecs_to_jiffies(timeout); + + return time_after(jiffies, deadline); +} + +static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, + const uint8_t *addr) +{ + struct tt_change_node *tt_change_node; + + tt_change_node = (struct tt_change_node *) + kmalloc(sizeof(*tt_change_node), GFP_ATOMIC); + + if (!tt_change_node) + return; + + tt_change_node->change.flags = op; + memcpy(tt_change_node->change.addr, addr, ETH_ALEN); + + spin_lock_bh(&bat_priv->tt_changes_list_lock); + /* track the change in the OGMinterval list */ + list_add_tail(&tt_change_node->list, &bat_priv->tt_changes_list); + atomic_inc(&bat_priv->tt_local_changes); + spin_unlock_bh(&bat_priv->tt_changes_list_lock); + + atomic_set(&bat_priv->tt_ogm_append_cnt, 0); +} + +int tt_len(int changes_num) +{ + return changes_num * sizeof(struct tt_change); +} + +static int tt_local_init(struct bat_priv *bat_priv) { if (bat_priv->tt_local_hash) return 1; @@ -122,9 +163,6 @@ int tt_local_init(struct bat_priv *bat_priv) if (!bat_priv->tt_local_hash) return 0;
- atomic_set(&bat_priv->tt_local_changed, 0); - tt_local_start_timer(bat_priv); - return 1; }
@@ -133,40 +171,24 @@ void tt_local_add(struct net_device *soft_iface, const uint8_t *addr) struct bat_priv *bat_priv = netdev_priv(soft_iface); struct tt_local_entry *tt_local_entry; struct tt_global_entry *tt_global_entry; - int required_bytes;
spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->tt_lhash_lock);
if (tt_local_entry) { tt_local_entry->last_seen = jiffies; - return; - } - - /* only announce as many hosts as possible in the batman-packet and - space in batman_packet->num_tt That also should give a limit to - MAC-flooding. */ - required_bytes = (bat_priv->num_local_tt + 1) * ETH_ALEN; - required_bytes += BAT_PACKET_LEN; - - if ((required_bytes > ETH_DATA_LEN) || - (atomic_read(&bat_priv->aggregated_ogms) && - required_bytes > MAX_AGGREGATION_BYTES) || - (bat_priv->num_local_tt + 1 > 255)) { - bat_dbg(DBG_ROUTES, bat_priv, - "Can't add new local tt entry (%pM): " - "number of local tt entries exceeds packet size\n", - addr); - return; + goto unlock; }
- bat_dbg(DBG_ROUTES, bat_priv, - "Creating new local tt entry: %pM\n", addr); - tt_local_entry = kmalloc(sizeof(*tt_local_entry), GFP_ATOMIC); if (!tt_local_entry) - return; + goto unlock; + + tt_local_event(bat_priv, TT_CHANGE_ADD, addr); + + bat_dbg(DBG_TT, bat_priv, + "Creating new local tt entry: %pM (ttvn: %d)\n", addr, + (uint8_t)atomic_read(&bat_priv->ttvn));
memcpy(tt_local_entry->addr, addr, ETH_ALEN); tt_local_entry->last_seen = jiffies; @@ -177,13 +199,9 @@ void tt_local_add(struct net_device *soft_iface, const uint8_t *addr) else tt_local_entry->never_purge = 0;
- spin_lock_bh(&bat_priv->tt_lhash_lock); - hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry, &tt_local_entry->hash_entry); - bat_priv->num_local_tt++; - atomic_set(&bat_priv->tt_local_changed, 1); - + atomic_inc(&bat_priv->num_local_tt); spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ @@ -192,46 +210,60 @@ void tt_local_add(struct net_device *soft_iface, const uint8_t *addr) tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (tt_global_entry) - _tt_global_del_orig(bat_priv, tt_global_entry, - "local tt received"); + _tt_global_del(bat_priv, tt_global_entry, + "local tt received");
spin_unlock_bh(&bat_priv->tt_ghash_lock); + return; +unlock: + spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-int tt_local_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len) +int tt_changes_fill_buffer(struct bat_priv *bat_priv, + unsigned char *buff, int buff_len) { - struct hashtable_t *hash = bat_priv->tt_local_hash; - struct tt_local_entry *tt_local_entry; - struct hlist_node *node; - struct hlist_head *head; - int i, count = 0; - - spin_lock_bh(&bat_priv->tt_lhash_lock); - - for (i = 0; i < hash->size; i++) { - head = &hash->table[i]; + int count = 0, tot_changes = 0; + struct tt_change_node *entry, *safe;
- rcu_read_lock(); - hlist_for_each_entry_rcu(tt_local_entry, node, - head, hash_entry) { - if (buff_len < (count + 1) * ETH_ALEN) - break; + if (buff_len > 0) + tot_changes = buff_len / tt_len(1);
- memcpy(buff + (count * ETH_ALEN), tt_local_entry->addr, - ETH_ALEN); + spin_lock_bh(&bat_priv->tt_changes_list_lock); + atomic_set(&bat_priv->tt_local_changes, 0);
+ list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, + list) { + if (count < tot_changes) { + memcpy(buff + tt_len(count), + &entry->change, sizeof(struct tt_change)); count++; } - rcu_read_unlock(); + list_del(&entry->list); + kfree(entry); } + spin_unlock_bh(&bat_priv->tt_changes_list_lock); + + /* Keep the buffer for possible tt_request */ + spin_lock_bh(&bat_priv->tt_buff_lock); + kfree(bat_priv->tt_buff); + bat_priv->tt_buff_len = 0; + bat_priv->tt_buff = NULL; + /* We check whether this new OGM has no changes due to size + * problems */ + if (buff_len > 0) { + /** + * if kmalloc() fails we will reply with the full table + * instead of providing the diff + */ + bat_priv->tt_buff = kmalloc(buff_len, GFP_ATOMIC); + if (bat_priv->tt_buff) { + memcpy(bat_priv->tt_buff, buff, buff_len); + bat_priv->tt_buff_len = buff_len; + } + } + spin_unlock_bh(&bat_priv->tt_buff_lock);
- /* if we did not get all new local tts see you next time ;-) */ - if (count == bat_priv->num_local_tt) - atomic_set(&bat_priv->tt_local_changed, 0); - - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return count; + return tot_changes; }
int tt_local_seq_print_text(struct seq_file *seq, void *offset) @@ -263,8 +295,8 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) }
seq_printf(seq, "Locally retrieved addresses (from %s) " - "announced via TT:\n", - net_dev->name); + "announced via TT (TTVN: %u):\n", + net_dev->name, (uint8_t)atomic_read(&bat_priv->ttvn));
spin_lock_bh(&bat_priv->tt_lhash_lock);
@@ -311,54 +343,51 @@ out: return ret; }
-static void _tt_local_del(struct hlist_node *node, void *arg) +static void tt_local_entry_free(struct hlist_node *node, void *arg) { struct bat_priv *bat_priv = arg; void *data = container_of(node, struct tt_local_entry, hash_entry);
kfree(data); - bat_priv->num_local_tt--; - atomic_set(&bat_priv->tt_local_changed, 1); + atomic_dec(&bat_priv->num_local_tt); }
static void tt_local_del(struct bat_priv *bat_priv, struct tt_local_entry *tt_local_entry, const char *message) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting local tt entry (%pM): %s\n", + bat_dbg(DBG_TT, bat_priv, "Deleting local tt entry (%pM): %s\n", tt_local_entry->addr, message);
+ atomic_dec(&bat_priv->num_local_tt); + hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry->addr); - _tt_local_del(&tt_local_entry->hash_entry, bat_priv); + + tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); }
-void tt_local_remove(struct bat_priv *bat_priv, - const uint8_t *addr, const char *message) +void tt_local_remove(struct bat_priv *bat_priv, const uint8_t *addr, + const char *message) { struct tt_local_entry *tt_local_entry;
spin_lock_bh(&bat_priv->tt_lhash_lock); - tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (tt_local_entry) + if (tt_local_entry) { + tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr); tt_local_del(bat_priv, tt_local_entry, message); - + } spin_unlock_bh(&bat_priv->tt_lhash_lock); }
-static void tt_local_purge(struct work_struct *work) +static void tt_local_purge(struct bat_priv *bat_priv) { - struct delayed_work *delayed_work = - container_of(work, struct delayed_work, work); - struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, tt_work); struct hashtable_t *hash = bat_priv->tt_local_hash; struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; - unsigned long timeout; int i;
spin_lock_bh(&bat_priv->tt_lhash_lock); @@ -371,32 +400,53 @@ static void tt_local_purge(struct work_struct *work) if (tt_local_entry->never_purge) continue;
- timeout = tt_local_entry->last_seen; - timeout += TT_LOCAL_TIMEOUT * HZ; - - if (time_before(jiffies, timeout)) + if (!is_out_of_time(tt_local_entry->last_seen, + TT_LOCAL_TIMEOUT * 1000)) continue;
+ tt_local_event(bat_priv, TT_CHANGE_DEL, + tt_local_entry->addr); tt_local_del(bat_priv, tt_local_entry, - "address timed out"); + "address timed out"); } }
spin_unlock_bh(&bat_priv->tt_lhash_lock); - tt_local_start_timer(bat_priv); }
-void tt_local_free(struct bat_priv *bat_priv) +static void tt_local_table_free(struct bat_priv *bat_priv) { + struct hashtable_t *hash; + int i; + spinlock_t *list_lock; /* protects write access to the hash lists */ + struct hlist_head *head; + struct hlist_node *node, *node_tmp; + struct tt_local_entry *tt_local_entry; + if (!bat_priv->tt_local_hash) return;
- cancel_delayed_work_sync(&bat_priv->tt_work); - hash_delete(bat_priv->tt_local_hash, _tt_local_del, bat_priv); + hash = bat_priv->tt_local_hash; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + list_lock = &hash->list_locks[i]; + + spin_lock_bh(list_lock); + hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, + head, hash_entry) { + hlist_del_rcu(node); + kfree(tt_local_entry); + } + spin_unlock_bh(list_lock); + } + + hash_destroy(hash); + bat_priv->tt_local_hash = NULL; }
-int tt_global_init(struct bat_priv *bat_priv) +static int tt_global_init(struct bat_priv *bat_priv) { if (bat_priv->tt_global_hash) return 1; @@ -409,73 +459,79 @@ int tt_global_init(struct bat_priv *bat_priv) return 1; }
-void tt_global_add_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, - const unsigned char *tt_buff, int tt_buff_len) +static void tt_changes_list_free(struct bat_priv *bat_priv) { - struct tt_global_entry *tt_global_entry; - struct tt_local_entry *tt_local_entry; - int tt_buff_count = 0; - const unsigned char *tt_ptr; - - while ((tt_buff_count + 1) * ETH_ALEN <= tt_buff_len) { - spin_lock_bh(&bat_priv->tt_ghash_lock); - - tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); - tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr); - - if (!tt_global_entry) { - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - tt_global_entry = kmalloc(sizeof(*tt_global_entry), - GFP_ATOMIC); - - if (!tt_global_entry) - break; + struct tt_change_node *entry, *safe;
- memcpy(tt_global_entry->addr, tt_ptr, ETH_ALEN); + spin_lock_bh(&bat_priv->tt_changes_list_lock);
- bat_dbg(DBG_ROUTES, bat_priv, - "Creating new global tt entry: " - "%pM (via %pM)\n", - tt_global_entry->addr, orig_node->orig); + list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, + list) { + list_del(&entry->list); + kfree(entry); + }
- spin_lock_bh(&bat_priv->tt_ghash_lock); - hash_add(bat_priv->tt_global_hash, compare_gtt, - choose_orig, tt_global_entry, - &tt_global_entry->hash_entry); + atomic_set(&bat_priv->tt_local_changes, 0); + spin_unlock_bh(&bat_priv->tt_changes_list_lock); +}
- }
+/* caller must hold orig_node refcount */
+int tt_global_add(struct bat_priv *bat_priv,
+ struct orig_node *orig_node,
+ const unsigned char *tt_addr, uint8_t ttvn)
+{
+ struct tt_global_entry *tt_global_entry;
+ struct tt_local_entry *tt_local_entry;
+ struct orig_node *orig_node_tmp;
+ spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, tt_addr); + + if (!tt_global_entry) { + tt_global_entry = + kmalloc(sizeof(*tt_global_entry), + GFP_ATOMIC); + if (!tt_global_entry) + goto unlock; + memcpy(tt_global_entry->addr, tt_addr, ETH_ALEN); + /* Assign the new orig_node */ + atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - /* remove address from local hash if present */ - spin_lock_bh(&bat_priv->tt_lhash_lock); - - tt_ptr = tt_buff + (tt_buff_count * ETH_ALEN); - tt_local_entry = tt_local_hash_find(bat_priv, tt_ptr); - - if (tt_local_entry) - tt_local_del(bat_priv, tt_local_entry, - "global tt received"); + tt_global_entry->ttvn = ttvn; + atomic_inc(&orig_node->tt_size); + hash_add(bat_priv->tt_global_hash, compare_gtt, + choose_orig, tt_global_entry, + &tt_global_entry->hash_entry); + } else { + if (tt_global_entry->orig_node != orig_node) { + atomic_dec(&tt_global_entry->orig_node->tt_size); + orig_node_tmp = tt_global_entry->orig_node; + atomic_inc(&orig_node->refcount); + tt_global_entry->orig_node = orig_node; + tt_global_entry->ttvn = ttvn; + orig_node_free_ref(orig_node_tmp); + atomic_inc(&orig_node->tt_size); + } + }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- tt_buff_count++; - } + bat_dbg(DBG_TT, bat_priv, + "Creating new global tt entry: %pM (via %pM)\n", + tt_global_entry->addr, orig_node->orig);
- /* initialize, and overwrite if malloc succeeds */ - orig_node->tt_buff = NULL; - orig_node->tt_buff_len = 0; + /* remove address from local hash if present */ + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = tt_local_hash_find(bat_priv, tt_addr);
- if (tt_buff_len > 0) { - orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); - if (orig_node->tt_buff) { - memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); - orig_node->tt_buff_len = tt_buff_len; - } - } + if (tt_local_entry) + tt_local_del(bat_priv, tt_local_entry, + "global tt received"); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + return 1; +unlock: + spin_unlock_bh(&bat_priv->tt_ghash_lock); + return 0; }
int tt_global_seq_print_text(struct seq_file *seq, void *offset) @@ -509,17 +565,20 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) seq_printf(seq, "Globally announced TT entries received via the mesh %s\n", net_dev->name); + seq_printf(seq, " %-13s %s %-15s %s\n", + "Client", "(TTVN)", "Originator", "(Curr TTVN)");
spin_lock_bh(&bat_priv->tt_ghash_lock);
buf_size = 1; - /* Estimate length for: " * xx:xx:xx:xx:xx:xx via xx:xx:xx:xx:xx:xx\n"*/ + /* Estimate length for: " * xx:xx:xx:xx:xx:xx (ttvn) via + * xx:xx:xx:xx:xx:xx (cur_ttvn)\n"*/ for (i = 0; i < hash->size; i++) { head = &hash->table[i];
rcu_read_lock(); __hlist_for_each_rcu(node, head) - buf_size += 43; + buf_size += 59; rcu_read_unlock(); }
@@ -538,10 +597,14 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) rcu_read_lock(); hlist_for_each_entry_rcu(tt_global_entry, node, head, hash_entry) { - pos += snprintf(buff + pos, 44, - " * %pM via %pM\n", + pos += snprintf(buff + pos, 61, + " * %pM (%3u) via %pM (%3u)\n", tt_global_entry->addr, - tt_global_entry->orig_node->orig); + tt_global_entry->ttvn, + tt_global_entry->orig_node->orig, + (uint8_t) atomic_read( + &tt_global_entry->orig_node-> + last_ttvn)); } rcu_read_unlock(); } @@ -556,64 +619,80 @@ out: return ret; }
-static void _tt_global_del_orig(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - const char *message) +static void _tt_global_del(struct bat_priv *bat_priv, + struct tt_global_entry *tt_global_entry, + const char *message) { - bat_dbg(DBG_ROUTES, bat_priv, + if (!tt_global_entry) + return; + + bat_dbg(DBG_TT, bat_priv, "Deleting global tt entry %pM (via %pM): %s\n", tt_global_entry->addr, tt_global_entry->orig_node->orig, message);
+ atomic_dec(&tt_global_entry->orig_node->tt_size); hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry->addr); kfree(tt_global_entry); }
+void tt_global_del(struct bat_priv *bat_priv, + struct orig_node *orig_node, const unsigned char *addr, + const char *message) +{ + struct tt_global_entry *tt_global_entry; + + spin_lock_bh(&bat_priv->tt_ghash_lock); + tt_global_entry = tt_global_hash_find(bat_priv, addr); + + if (tt_global_entry && tt_global_entry->orig_node == orig_node) { + atomic_dec(&orig_node->tt_size); + _tt_global_del(bat_priv, tt_global_entry, message); + } + spin_unlock_bh(&bat_priv->tt_ghash_lock); +} + void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, const char *message) + struct orig_node *orig_node, const char *message) { struct tt_global_entry *tt_global_entry; - int tt_buff_count = 0; - unsigned char *tt_ptr; + int i; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct hlist_node *node, *safe; + struct hlist_head *head;
- if (orig_node->tt_buff_len == 0) + if (!bat_priv->tt_global_hash) return;
spin_lock_bh(&bat_priv->tt_ghash_lock); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i];
- while ((tt_buff_count + 1) * ETH_ALEN <= orig_node->tt_buff_len) { - tt_ptr = orig_node->tt_buff + (tt_buff_count * ETH_ALEN); - tt_global_entry = tt_global_hash_find(bat_priv, tt_ptr); - - if ((tt_global_entry) && - (tt_global_entry->orig_node == orig_node)) - _tt_global_del_orig(bat_priv, tt_global_entry, - message); - - tt_buff_count++; + hlist_for_each_entry_safe(tt_global_entry, node, safe, + head, hash_entry) { + if (tt_global_entry->orig_node == orig_node) + _tt_global_del(bat_priv, tt_global_entry, + message); + } } + atomic_set(&orig_node->tt_size, 0);
spin_unlock_bh(&bat_priv->tt_ghash_lock); - - orig_node->tt_buff_len = 0; - kfree(orig_node->tt_buff); - orig_node->tt_buff = NULL; }
-static void tt_global_del(struct hlist_node *node, void *arg) +static void tt_global_entry_free(struct hlist_node *node, void *arg) { void *data = container_of(node, struct tt_global_entry, hash_entry); - kfree(data); }
-void tt_global_free(struct bat_priv *bat_priv) +static void tt_global_table_free(struct bat_priv *bat_priv) { if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->tt_global_hash, tt_global_del, NULL); + hash_delete(bat_priv->tt_global_hash, tt_global_entry_free, NULL); bat_priv->tt_global_hash = NULL; }
@@ -638,3 +717,686 @@ out: spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; } + +/* Calculates the checksum of the local table of a given orig_node */ +uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node) +{ + uint16_t total = 0, total_one; + struct hashtable_t *hash = bat_priv->tt_global_hash; + struct tt_global_entry *tt_global_entry; + struct hlist_node *node; + struct hlist_head *head; + int i, j; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_global_entry, node, + head, hash_entry) { + if (compare_eth(tt_global_entry->orig_node, + orig_node)) { + total_one = 0; + for (j = 0; j < ETH_ALEN; j++) + total_one = crc16_byte(total_one, + tt_global_entry->addr[j]); + total ^= total_one; + } + } + rcu_read_unlock(); + } + + return total; +} + +/* Calculates the checksum of the local table */ +uint16_t tt_local_crc(struct bat_priv *bat_priv) +{ + uint16_t total = 0, total_one; + struct hashtable_t *hash = bat_priv->tt_local_hash; + struct tt_local_entry *tt_local_entry; + struct hlist_node *node; + struct hlist_head *head; + int i, j; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_local_entry, node, + head, hash_entry) { + total_one = 0; + for (j = 0; j < ETH_ALEN; j++) + total_one = crc16_byte(total_one, + tt_local_entry->addr[j]); + total ^= total_one; + } + + rcu_read_unlock(); + } + + return total; +} + +static void tt_req_list_free(struct bat_priv *bat_priv) +{ + struct tt_req_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + list_del(&node->list); + kfree(node); + } + + spin_unlock_bh(&bat_priv->tt_req_list_lock); +} + +void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, + const unsigned char *tt_buff, uint8_t tt_num_changes) +{ + uint16_t tt_buff_len = tt_len(tt_num_changes); + + /* Replace the old buffer only if I received something in the + * last OGM (the OGM could carry no changes) */ + spin_lock_bh(&orig_node->tt_buff_lock); + if (tt_buff_len > 0) { + kfree(orig_node->tt_buff); + orig_node->tt_buff_len = 0; + orig_node->tt_buff = kmalloc(tt_buff_len, GFP_ATOMIC); + if (orig_node->tt_buff) { + memcpy(orig_node->tt_buff, tt_buff, tt_buff_len); + orig_node->tt_buff_len = tt_buff_len; + } + } + spin_unlock_bh(&orig_node->tt_buff_lock); +} + +static void tt_req_purge(struct bat_priv *bat_priv) +{ + struct tt_req_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + if (is_out_of_time(node->issued_at, + TT_REQUEST_TIMEOUT * 1000)) { + list_del(&node->list); + kfree(node); + } + } + spin_unlock_bh(&bat_priv->tt_req_list_lock); +} + +/* returns the pointer to the new tt_req_node struct if no request + * has already been issued for this orig_node, NULL otherwise */ +static struct tt_req_node *new_tt_req_node(struct bat_priv *bat_priv, + struct orig_node *orig_node) +{ + struct tt_req_node *tt_req_node_tmp, *tt_req_node = NULL; + + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry(tt_req_node_tmp, &bat_priv->tt_req_list, list) { + if (compare_eth(tt_req_node_tmp, orig_node) && + !is_out_of_time(tt_req_node_tmp->issued_at, + TT_REQUEST_TIMEOUT * 1000)) + goto unlock; + } + + tt_req_node = kmalloc(sizeof(*tt_req_node), GFP_ATOMIC); + if (!tt_req_node) + goto unlock; + + memcpy(tt_req_node->addr, 
orig_node->orig, ETH_ALEN); + tt_req_node->issued_at = jiffies; + + list_add(&tt_req_node->list, &bat_priv->tt_req_list); +unlock: + spin_unlock_bh(&bat_priv->tt_req_list_lock); + return tt_req_node; +} + +int send_tt_request(struct bat_priv *bat_priv, struct orig_node *dst_orig_node, + uint8_t ttvn, uint16_t tt_crc, bool full_table) +{ + struct sk_buff *skb; + struct tt_query_packet *tt_request; + struct neigh_node *neigh_node = NULL; + struct hard_iface *primary_if; + struct tt_req_node *tt_req_node; + int ret = 0; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + /* The new tt_req will be issued only if I'm not waiting for a + * reply from the same orig_node yet */ + tt_req_node = new_tt_req_node(bat_priv, dst_orig_node); + if (!tt_req_node) + goto out; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + + tt_request = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet)); + + tt_request->packet_type = BAT_TT_QUERY; + tt_request->version = COMPAT_VERSION; + memcpy(tt_request->src, primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(tt_request->dst, dst_orig_node->orig, ETH_ALEN); + tt_request->ttl = TTL; + tt_request->ttvn = ttvn; + tt_request->tt_data = tt_crc; + tt_request->flags = TT_REQUEST; + + if (full_table) + tt_request->flags |= TT_FULL_TABLE; + + neigh_node = orig_node_get_router(dst_orig_node); + if (!neigh_node) + goto out; + + bat_dbg(DBG_TT, bat_priv, "Sending TT_REQUEST to %pM via %pM " + "[%c]\n", dst_orig_node->orig, neigh_node->addr, + (full_table ? 'F' : '.')); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = 0; + +out: + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (ret) { + kfree_skb(skb); + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_del(&tt_req_node->list); + spin_unlock_bh(&bat_priv->tt_req_list_lock); + kfree(tt_req_node); + } + return ret; +} + +static bool send_other_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + struct orig_node *req_dst_orig_node = NULL, *res_dst_orig_node = NULL; + struct neigh_node *neigh_node = NULL; + struct hard_iface *primary_if = NULL; + struct tt_global_entry *tt_global_entry; + struct hlist_node *node; + struct hlist_head *head; + struct hashtable_t *hash; + uint8_t orig_ttvn, req_ttvn; + int i, ret = false; + unsigned char *tt_buff; + bool full_table; + uint16_t tt_len, tt_tot, tt_count; + struct sk_buff *skb = NULL; + struct tt_query_packet *tt_response; + + bat_dbg(DBG_TT, bat_priv, + "Received TT_REQUEST from %pM for " + "ttvn: %u (%pM) [%c]\n", tt_request->src, + tt_request->ttvn, tt_request->dst, + (tt_request->flags & TT_FULL_TABLE ? 
'F' : '.')); + + /* Let's get the orig node of the REAL destination */ + req_dst_orig_node = get_orig_node(bat_priv, tt_request->dst); + if (!req_dst_orig_node) + goto out; + + res_dst_orig_node = get_orig_node(bat_priv, tt_request->src); + if (!res_dst_orig_node) + goto out; + + neigh_node = orig_node_get_router(res_dst_orig_node); + if (!neigh_node) + goto out; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + orig_ttvn = (uint8_t)atomic_read(&req_dst_orig_node->last_ttvn); + req_ttvn = tt_request->ttvn; + + /* I have not the requested data */ + if (orig_ttvn != req_ttvn || + tt_request->tt_data != req_dst_orig_node->tt_crc) + goto out; + + /* If it has explicitly been requested the full table */ + if (tt_request->flags & TT_FULL_TABLE || + !req_dst_orig_node->tt_buff) + full_table = true; + else + full_table = false; + + /* In this version, fragmentation is not implemented, then + * I'll send only one packet with as much TT entries as I can */ + if (!full_table) { + spin_lock_bh(&req_dst_orig_node->tt_buff_lock); + tt_len = req_dst_orig_node->tt_buff_len; + tt_tot = tt_len / sizeof(struct tt_change); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto unlock; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = req_ttvn; + + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Copy the last orig_node's OGM buffer */ + memcpy(tt_buff, req_dst_orig_node->tt_buff, + req_dst_orig_node->tt_buff_len); + + spin_unlock_bh(&req_dst_orig_node->tt_buff_lock); + } else { + tt_len = (uint16_t)atomic_read(&req_dst_orig_node->tt_size) * + ETH_ALEN; + if (sizeof(struct tt_query_packet) + tt_len > + primary_if->soft_iface->mtu) { + tt_len = primary_if->soft_iface->mtu - + sizeof(struct tt_query_packet); + tt_len -= tt_len % ETH_ALEN; + } + tt_tot = tt_len / ETH_ALEN; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = (uint8_t) + atomic_read(&req_dst_orig_node->last_ttvn); + + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Fill the packet with the orig_node's local table */ + hash = bat_priv->tt_global_hash; + tt_count = 0; + rcu_read_lock(); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + hlist_for_each_entry_rcu(tt_global_entry, node, + head, hash_entry) { + if (tt_count == tt_tot) + break; + if (tt_global_entry->orig_node == + req_dst_orig_node) { + memcpy(tt_buff + tt_count * ETH_ALEN, + tt_global_entry->addr, + ETH_ALEN); + tt_count++; + } + } + } + rcu_read_unlock(); + } + + tt_response->packet_type = BAT_TT_QUERY; + tt_response->version = COMPAT_VERSION; + memcpy(tt_response->src, req_dst_orig_node->orig, ETH_ALEN); + memcpy(tt_response->dst, tt_request->src, ETH_ALEN); + tt_response->tt_data = htons(tt_tot); + tt_response->flags = TT_RESPONSE; + + if (full_table) + tt_response->flags |= TT_FULL_TABLE; + + bat_dbg(DBG_TT, bat_priv, + "Sending TT_RESPONSE %pM via %pM for %pM (ttvn: %u)\n", + res_dst_orig_node->orig, neigh_node->addr, + req_dst_orig_node->orig, req_ttvn); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = true; + goto out; + +unlock: + spin_unlock_bh(&req_dst_orig_node->tt_buff_lock); + +out: + if (res_dst_orig_node) + 
orig_node_free_ref(res_dst_orig_node); + if (req_dst_orig_node) + orig_node_free_ref(req_dst_orig_node); + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (!ret) + kfree(skb); + return ret; + +} +static bool send_my_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + struct orig_node *orig_node = NULL; + struct neigh_node *neigh_node = NULL; + struct tt_local_entry *tt_local_entry; + struct hard_iface *primary_if = NULL; + struct hlist_node *node; + struct hlist_head *head; + struct hashtable_t *hash; + uint8_t my_ttvn, req_ttvn; + int i, ret = false; + unsigned char *tt_buff; + bool full_table; + uint16_t tt_len, tt_tot, tt_count; + struct sk_buff *skb = NULL; + struct tt_query_packet *tt_response; + + bat_dbg(DBG_TT, bat_priv, + "Received TT_REQUEST from %pM for " + "ttvn: %u (me) [%c]\n", tt_request->src, + tt_request->ttvn, + (tt_request->flags & TT_FULL_TABLE ? 'F' : '.')); + + + my_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); + req_ttvn = tt_request->ttvn; + + orig_node = get_orig_node(bat_priv, tt_request->src); + if (!orig_node) + goto out; + + neigh_node = orig_node_get_router(orig_node); + if (!neigh_node) + goto out; + + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + /* If the full table has been explicitly requested or the gap + * is too big send the whole local translation table */ + if (tt_request->flags & TT_FULL_TABLE || my_ttvn != req_ttvn || + !bat_priv->tt_buff) + full_table = true; + else + full_table = false; + + /* In this version, fragmentation is not implemented, then + * I'll send only one packet with as much TT entries as I can */ + if (!full_table) { + spin_lock_bh(&bat_priv->tt_buff_lock); + tt_len = bat_priv->tt_buff_len; + tt_tot = tt_len / sizeof(struct tt_change); + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto unlock; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_response->ttvn = req_ttvn; + + tt_buff = skb->data + sizeof(struct tt_query_packet); + memcpy(tt_buff, bat_priv->tt_buff, + bat_priv->tt_buff_len); + spin_unlock_bh(&bat_priv->tt_buff_lock); + } else { + tt_len = (uint16_t)atomic_read(&bat_priv->num_local_tt) * + ETH_ALEN; + if (sizeof(struct tt_query_packet) + tt_len > + primary_if->soft_iface->mtu) { + tt_len = primary_if->soft_iface->mtu - + sizeof(struct tt_query_packet); + tt_len -= tt_len % ETH_ALEN; + } + tt_tot = tt_len / ETH_ALEN; + + skb = dev_alloc_skb(sizeof(struct tt_query_packet) + + tt_len + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + tt_response = (struct tt_query_packet *)skb_put(skb, + sizeof(struct tt_query_packet) + tt_len); + tt_buff = skb->data + sizeof(struct tt_query_packet); + /* Fill the packet with the local table */ + tt_response->ttvn = + (uint8_t)atomic_read(&bat_priv->ttvn); + + hash = bat_priv->tt_local_hash; + tt_count = 0; + rcu_read_lock(); + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + + hlist_for_each_entry_rcu(tt_local_entry, node, + head, hash_entry) { + if (tt_count == tt_tot) + break; + memcpy(tt_buff + tt_count * ETH_ALEN, + tt_local_entry->addr, + ETH_ALEN); + tt_count++; + } + } + rcu_read_unlock(); + } + + tt_response->packet_type = BAT_TT_QUERY; + tt_response->version = COMPAT_VERSION; + memcpy(tt_response->src, primary_if->net_dev->dev_addr, ETH_ALEN); + memcpy(tt_response->dst, tt_request->src, 
ETH_ALEN); + tt_response->tt_data = htons(tt_tot); + tt_response->flags = TT_RESPONSE; + + if (full_table) + tt_response->flags |= TT_FULL_TABLE; + + bat_dbg(DBG_TT, bat_priv, + "Sending TT_RESPONSE to %pM via %pM [%c]\n", + orig_node->orig, neigh_node->addr, + (tt_response->flags & TT_FULL_TABLE ? 'F' : '.')); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = true; + goto out; + +unlock: + spin_unlock_bh(&bat_priv->tt_buff_lock); +out: + if (orig_node) + orig_node_free_ref(orig_node); + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (primary_if) + hardif_free_ref(primary_if); + if (!ret) + kfree(skb); + /* This packet was for me, so it doesn't need to be re-routed */ + return true; +} + +bool send_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request) +{ + if (is_my_mac(tt_request->dst)) + return send_my_tt_response(bat_priv, tt_request); + else + return send_other_tt_response(bat_priv, tt_request); +} + +/* Substitute the TT response source's table with the newone carried by the + * packet */ +static void _tt_fill_gtable(struct bat_priv *bat_priv, + struct orig_node *orig_node, unsigned char *tt_buff, + uint16_t table_size, uint8_t ttvn) +{ + int count; + unsigned char *tt_ptr; + + for (count = 0; count < table_size; count++) { + tt_ptr = tt_buff + (count * ETH_ALEN); + + /* If we fail to allocate a new entry we return immediatly */ + if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn)) + return; + } + atomic_set(&orig_node->last_ttvn, ttvn); +} + +static void tt_fill_gtable(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response) +{ + struct orig_node *orig_node = NULL; + + orig_node = orig_hash_find(bat_priv, tt_response->src); + if (!orig_node) + goto out; + + /* Purge the old table first.. */ + tt_global_del_orig(bat_priv, orig_node, "Received full table"); + + _tt_fill_gtable(bat_priv, orig_node, + ((unsigned char *)tt_response) + + sizeof(struct tt_query_packet), + tt_response->tt_data, + tt_response->ttvn); + + spin_lock_bh(&orig_node->tt_buff_lock); + kfree(orig_node->tt_buff); + orig_node->tt_buff_len = 0; + orig_node->tt_buff = NULL; + spin_unlock_bh(&orig_node->tt_buff_lock); + +out: + if (orig_node) + orig_node_free_ref(orig_node); +} + +void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, + uint16_t tt_num_changes, uint8_t ttvn, + struct tt_change *tt_change) +{ + int i; + + for (i = 0; i < tt_num_changes; i++) { + if ((tt_change + i)->flags & TT_CHANGE_DEL) + tt_global_del(bat_priv, orig_node, + (tt_change + i)->addr, + "tt removed by changes"); + else + if (!tt_global_add(bat_priv, orig_node, + (tt_change + i)->addr, ttvn)) + return; + } + + tt_save_orig_buffer(bat_priv, orig_node, (unsigned char *)tt_change, + tt_num_changes); + atomic_set(&orig_node->last_ttvn, ttvn); +} + +bool is_my_client(struct bat_priv *bat_priv, const uint8_t *addr) +{ + struct tt_local_entry *tt_local_entry; + + spin_lock_bh(&bat_priv->tt_lhash_lock); + tt_local_entry = tt_local_hash_find(bat_priv, addr); + spin_unlock_bh(&bat_priv->tt_lhash_lock); + + if (tt_local_entry) + return true; + return false; +} + +void handle_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response) +{ + struct tt_req_node *node, *safe; + struct orig_node *orig_node = NULL; + + bat_dbg(DBG_TT, bat_priv, "Received TT_RESPONSE from %pM for " + "ttvn %d t_size: %d [%c]\n", + tt_response->src, tt_response->ttvn, + tt_response->tt_data, + (tt_response->flags & TT_FULL_TABLE ? 
'F' : '.')); + + orig_node = orig_hash_find(bat_priv, tt_response->src); + if (!orig_node) + goto out; + + if (tt_response->flags & TT_FULL_TABLE) + tt_fill_gtable(bat_priv, tt_response); + else + tt_update_changes(bat_priv, orig_node, tt_response->tt_data, + tt_response->ttvn, + (struct tt_change *)(tt_response + 1)); + + /* Delete the tt_req_node from pending tt_requests list */ + spin_lock_bh(&bat_priv->tt_req_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { + if (!compare_eth(node->addr, tt_response->src)) + continue; + list_del(&node->list); + kfree(node); + } + spin_unlock_bh(&bat_priv->tt_req_list_lock); + + /* Recalculate the CRC for this orig_node and store it */ + spin_lock_bh(&bat_priv->tt_ghash_lock); + orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + spin_unlock_bh(&bat_priv->tt_ghash_lock); +out: + if (orig_node) + orig_node_free_ref(orig_node); +} + +int tt_init(struct bat_priv *bat_priv) +{ + if (!tt_local_init(bat_priv)) + return 0; + + if (!tt_global_init(bat_priv)) + return 0; + + tt_start_timer(bat_priv); + + return 1; +} + +void tt_free(struct bat_priv *bat_priv) +{ + cancel_delayed_work_sync(&bat_priv->tt_work); + + tt_local_table_free(bat_priv); + tt_global_table_free(bat_priv); + tt_req_list_free(bat_priv); + tt_changes_list_free(bat_priv); + + kfree(bat_priv->tt_buff); +} + +static void tt_purge(struct work_struct *work) +{ + struct delayed_work *delayed_work = + container_of(work, struct delayed_work, work); + struct bat_priv *bat_priv = + container_of(delayed_work, struct bat_priv, tt_work); + + tt_local_purge(bat_priv); + tt_req_purge(bat_priv); + + tt_start_timer(bat_priv); +} diff --git a/translation-table.h b/translation-table.h index 0f2b990..51f7e30 100644 --- a/translation-table.h +++ b/translation-table.h @@ -22,23 +22,43 @@ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_
-int tt_local_init(struct bat_priv *bat_priv); +int tt_len(int changes_num); +int tt_changes_fill_buffer(struct bat_priv *bat_priv, + unsigned char *buff, int buff_len); +int tt_init(struct bat_priv *bat_priv); void tt_local_add(struct net_device *soft_iface, const uint8_t *addr); void tt_local_remove(struct bat_priv *bat_priv, const uint8_t *addr, const char *message); -int tt_local_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len); int tt_local_seq_print_text(struct seq_file *seq, void *offset); -void tt_local_free(struct bat_priv *bat_priv); -int tt_global_init(struct bat_priv *bat_priv); void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, const unsigned char *tt_buff, int tt_buff_len); +int tt_global_add(struct bat_priv *bat_priv, + struct orig_node *orig_node, const unsigned char *addr, + uint8_t ttvn); int tt_global_seq_print_text(struct seq_file *seq, void *offset); void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, const char *message); -void tt_global_free(struct bat_priv *bat_priv); + struct orig_node *orig_node, const char *message); +void tt_global_del(struct bat_priv *bat_priv, + struct orig_node *orig_node, const unsigned char *addr, + const char *message); struct orig_node *transtable_search(struct bat_priv *bat_priv, const uint8_t *addr); +void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, + const unsigned char *tt_buff, uint8_t tt_num_changes); +uint16_t tt_local_crc(struct bat_priv *bat_priv); +uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node); +void tt_free(struct bat_priv *bat_priv); +int send_tt_request(struct bat_priv *bat_priv, + struct orig_node *dst_orig_node, uint8_t hvn, + uint16_t tt_crc, bool full_table); +bool send_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_request); +void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, + uint16_t tt_num_changes, uint8_t ttvn, + struct tt_change *tt_change); +bool is_my_client(struct bat_priv *bat_priv, const uint8_t *addr); +void handle_tt_response(struct bat_priv *bat_priv, + struct tt_query_packet *tt_response);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index fab70e8..0848fcc 100644 --- a/types.h +++ b/types.h @@ -75,8 +75,12 @@ struct orig_node { unsigned long batman_seqno_reset; uint8_t gw_flags; uint8_t flags; + atomic_t last_ttvn; /* last seen translation table version number */ + uint16_t tt_crc; unsigned char *tt_buff; int16_t tt_buff_len; + spinlock_t tt_buff_lock; /* protects tt_buff */ + atomic_t tt_size; uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -94,10 +98,16 @@ struct orig_node { spinlock_t ogm_cnt_lock; /* bcast_seqno_lock protects bcast_bits, last_bcast_seqno */ spinlock_t bcast_seqno_lock; + spinlock_t tt_list_lock; /* protects tt_list */ atomic_t bond_candidates; struct list_head bond_list; };
+struct tt_change { + uint8_t flags; + uint8_t addr[ETH_ALEN]; +}; + struct gw_node { struct hlist_node list; struct orig_node *orig_node; @@ -145,6 +155,9 @@ struct bat_priv { atomic_t bcast_seqno; atomic_t bcast_queue_left; atomic_t batman_queue_left; + atomic_t ttvn; /* tranlation table version number */ + atomic_t tt_ogm_append_cnt; + atomic_t tt_local_changes; /* changes registered in a OGM interval */ char num_ifaces; struct debug_log *debug_log; struct kobject *mesh_obj; @@ -153,22 +166,30 @@ struct bat_priv { struct hlist_head forw_bcast_list; struct hlist_head gw_list; struct hlist_head softif_neigh_vids; + struct list_head tt_changes_list; /* tracks changes in a OGM int */ struct list_head vis_send_list; struct hashtable_t *orig_hash; struct hashtable_t *tt_local_hash; struct hashtable_t *tt_global_hash; + struct list_head tt_req_list; /* list of pending tt_requests */ struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ + spinlock_t tt_changes_list_lock; /* protects tt_changes */ spinlock_t tt_lhash_lock; /* protects tt_local_hash */ spinlock_t tt_ghash_lock; /* protects tt_global_hash */ + spinlock_t tt_req_list_lock; /* protects tt_req_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ spinlock_t softif_neigh_lock; /* protects soft-interface neigh list */ spinlock_t softif_neigh_vid_lock; /* protects soft-interface vid list */ - int16_t num_local_tt; - atomic_t tt_local_changed; + atomic_t num_local_tt; + /* Checksum of the local table, recomputed before sending a new OGM */ + atomic_t tt_crc; + unsigned char *tt_buff; + int16_t tt_buff_len; + spinlock_t tt_buff_lock; /* protects tt_buff */ struct delayed_work tt_work; struct delayed_work orig_work; struct delayed_work vis_work; @@ -202,9 +223,22 @@ struct tt_local_entry { struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; + uint8_t ttvn; + /* entry in the global table */ struct hlist_node hash_entry; };
+struct tt_change_node { + struct list_head list; + struct tt_change change; +}; + +struct tt_req_node { + uint8_t addr[ETH_ALEN]; + unsigned long issued_at; + struct list_head list; +}; + /** * forw_packet - structure for forw_list maintaining packets to be * send/forwarded diff --git a/unicast.c b/unicast.c index 6eabf42..32b125f 100644 --- a/unicast.c +++ b/unicast.c @@ -325,6 +325,9 @@ find_router: unicast_packet->ttl = TTL; /* copy the destination for faster routing */ memcpy(unicast_packet->dest, orig_node->orig, ETH_ALEN); + /* set the destination tt version number */ + unicast_packet->ttvn = + (uint8_t)atomic_read(&orig_node->last_ttvn);
if (atomic_read(&bat_priv->fragmentation) && data_len + sizeof(*unicast_packet) >
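[Editorial aside on the checksum introduced above: tt_local_crc() and tt_global_crc() fold a per-client CRC-16 into the table checksum with XOR, so the result does not depend on the order in which the hash buckets are walked and two nodes can compare their view of a table with a single 16-bit value. Below is a stand-alone user-space sketch of the same computation; it assumes the CRC-16 (ARC, reflected polynomial 0xA001) byte step provided by the crc16 module, and the helper names are illustrative.]

#include <stdint.h>

/* Bitwise sketch of a crc16_byte()-style step (CRC-16/ARC, assumed
 * reflected polynomial 0xA001; the kernel module is table-driven). */
static uint16_t crc16_step(uint16_t crc, uint8_t data)
{
	int i;

	crc ^= data;
	for (i = 0; i < 8; i++)
		crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
	return crc;
}

/* XOR-fold one CRC-16 per client MAC address into a table checksum,
 * mirroring tt_local_crc()/tt_global_crc(): adding or removing a client
 * toggles exactly its own contribution, independent of iteration order. */
static uint16_t tt_table_crc(const uint8_t clients[][6], int num_clients)
{
	uint16_t total = 0, one;
	int i, j;

	for (i = 0; i < num_clients; i++) {
		one = 0;
		for (j = 0; j < 6; j++)
			one = crc16_step(one, clients[i][j]);
		total ^= one;
	}
	return total;
}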
Antonio Quartulli wrote:
> - Cleaned following ecsv's patches/suggestions.
> - recv_unicast_packet() now uses seq_before()

Antonio Quartulli wrote:
> (diff < 0 && diff > -0xff/2)) {
> /* Linearize the skb before accessing it */

What?

Sry, have to Nack this one.

Kind regards, Sven
On Sun, May 22, 2011 at 01:01:02PM +0200, Sven Eckelmann wrote:
> Antonio Quartulli wrote:
> > - Cleaned following ecsv's patches/suggestions.
> > - recv_unicast_packet() now uses seq_before()
>
> Antonio Quartulli wrote:
> > (diff < 0 && diff > -0xff/2)) {
> > /* Linearize the skb before accessing it */
>
> What?
>
> Sry, have to Nack this one.

Thanks! Fixed
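[Editorial note on the fix discussed above: comparing 8-bit translation-table version numbers with a plain signed difference breaks once the counter wraps, which is why the check was switched to a seq_before()-style comparison. A minimal stand-alone sketch of a wraparound-safe check of this kind follows; it is illustrative only, not the in-tree seq_before() macro, and the name ttvn_seq_before is hypothetical.]

#include <stdbool.h>
#include <stdint.h>

/* Wraparound-safe "is x older than y" for 8-bit version numbers such as
 * the ttvn. The unsigned difference keeps the comparison window centred,
 * so e.g. 0x01 is still recognised as newer than 0xfe after a wrap. */
static bool ttvn_seq_before(uint8_t x, uint8_t y)
{
	return (uint8_t)(x - y) > 127;
}

[With a check of this kind the rerouting condition in recv_unicast_packet() no longer depends on the sign of an int16_t difference, as in the updated hunk above.]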
Exploiting the new announcement implementation, the roaming mechanism has been improved and the number of packet drops reduced.
For details, please visit: http://www.open-mesh.org/wiki/batman-adv/Roaming-improvements
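[Editorial note: the patch below also rate-limits these advertisements; tt_check_roam_count() allows at most ROAMING_MAX_COUNT ROAMING_ADV packets per client within a ROAMING_MAX_TIME window. A simplified user-space sketch of that logic follows; the struct and function names are illustrative, and the kernel code uses jiffies plus a per-client list protected by tt_roam_list_lock rather than time_t.]

#include <stdbool.h>
#include <time.h>

#define ROAMING_MAX_TIME  20	/* seconds, as defined in main.h */
#define ROAMING_MAX_COUNT 5

/* Per-client roaming state; the kernel keeps one such entry per client
 * address in bat_priv->tt_roam_list. */
struct roam_state {
	time_t first_time;
	int counter;
};

/* Returns true if another ROAMING_ADV may be sent for this client. */
static bool roam_adv_allowed(struct roam_state *rs, time_t now)
{
	if (now - rs->first_time > ROAMING_MAX_TIME) {
		/* window expired: restart it; this ROAMING_ADV uses one slot */
		rs->first_time = now;
		rs->counter = ROAMING_MAX_COUNT - 1;
		return true;
	}

	if (rs->counter > 0) {
		rs->counter--;	/* still within the budget for this window */
		return true;
	}

	return false;		/* client roamed too many times: suppress */
}

[As in tt_check_roam_count(), a freshly created entry starts with a budget of ROAMING_MAX_COUNT - 1 because the advertisement that created it already consumed one slot.]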
Signed-off-by: Antonio Quartulli ordex@autistici.org Acked-by: Simon Wunderlich siwu@hrz.tu-chemnitz.de ---
- Cleaned following ecsv's patches/suggestions.
hard-interface.c | 4 + main.c | 2 + main.h | 12 +++- originator.c | 1 + packet.h | 10 +++ routing.c | 67 +++++++++++++++- routing.h | 1 + send.c | 1 + soft-interface.c | 3 +- translation-table.c | 214 ++++++++++++++++++++++++++++++++++++++++++++++----- translation-table.h | 8 +- types.h | 25 ++++++- 12 files changed, 315 insertions(+), 33 deletions(-)
diff --git a/hard-interface.c b/hard-interface.c index 5ade5a8..288d68e 100644 --- a/hard-interface.c +++ b/hard-interface.c @@ -658,6 +658,10 @@ static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, case BAT_TT_QUERY: ret = recv_tt_query(skb, hard_iface); break; + /* Roaming advertisement */ + case BAT_ROAM_ADV: + ret = recv_roam_adv(skb, hard_iface); + break; default: ret = NET_RX_DROP; } diff --git a/main.c b/main.c index 49a5e64..3318ee2 100644 --- a/main.c +++ b/main.c @@ -88,6 +88,7 @@ int mesh_init(struct net_device *soft_iface) spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->tt_changes_list_lock); spin_lock_init(&bat_priv->tt_req_list_lock); + spin_lock_init(&bat_priv->tt_roam_list_lock); spin_lock_init(&bat_priv->tt_buff_lock); spin_lock_init(&bat_priv->gw_list_lock); spin_lock_init(&bat_priv->vis_hash_lock); @@ -101,6 +102,7 @@ int mesh_init(struct net_device *soft_iface) INIT_HLIST_HEAD(&bat_priv->softif_neigh_vids); INIT_LIST_HEAD(&bat_priv->tt_changes_list); INIT_LIST_HEAD(&bat_priv->tt_req_list); + INIT_LIST_HEAD(&bat_priv->tt_roam_list);
if (originator_init(bat_priv) < 1) goto err; diff --git a/main.h b/main.h index 930fbdb..1207fbb 100644 --- a/main.h +++ b/main.h @@ -56,8 +56,16 @@ #define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */
/* Transtable change flags */ -#define TT_CHANGE_ADD 0x00 -#define TT_CHANGE_DEL 0x01 +#define TT_CHANGE_ADD 0x00 +#define TT_CHANGE_DEL 0x01 +#define TT_CHANGE_ROAM 0x02 + +/* Transtable global entry flags */ +#define TT_GLOBAL_ROAM 0x01 + +#define ROAMING_MAX_TIME 20 /* Time in which a client can roam at most + * ROAMING_MAX_COUNT times */ +#define ROAMING_MAX_COUNT 5
#define NUM_WORDS (TQ_LOCAL_WINDOW_SIZE / WORD_BIT_SIZE)
diff --git a/originator.c b/originator.c index 66938fa..62d8196 100644 --- a/originator.c +++ b/originator.c @@ -211,6 +211,7 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, const uint8_t *addr) /* extra reference for return */ atomic_set(&orig_node->refcount, 2);
+ orig_node->tt_poss_change = false; orig_node->bat_priv = bat_priv; memcpy(orig_node->orig, addr, ETH_ALEN); orig_node->router = NULL; diff --git a/packet.h b/packet.h index 525fee5..f218f0d 100644 --- a/packet.h +++ b/packet.h @@ -31,6 +31,7 @@ #define BAT_VIS 0x05 #define BAT_UNICAST_FRAG 0x06 #define BAT_TT_QUERY 0x07 +#define BAT_ROAM_ADV 0x08
/* this file is included by batctl which needs these defines */ #define COMPAT_VERSION 14 @@ -164,4 +165,13 @@ struct tt_query_packet { uint16_t tt_data; } __packed;
+struct roam_adv_packet { + uint8_t packet_type; + uint8_t version; + uint8_t dst[6]; + uint8_t ttl; + uint8_t src[6]; + uint8_t client[6]; +} __packed; + #endif /* _NET_BATMAN_ADV_PACKET_H_ */ diff --git a/routing.c b/routing.c index e1b04a7..ed25b82 100644 --- a/routing.c +++ b/routing.c @@ -93,6 +93,9 @@ static void update_transtable(struct bat_priv *bat_priv, spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); spin_unlock_bh(&bat_priv->tt_ghash_lock); + /* Roaming phase is over: tables are in sync again. I can + * unset the flag */ + orig_node->tt_poss_change = false; } else { /* if we missed more than one change or our tables are not * in sync anymore -> request fresh tt data */ @@ -1253,6 +1256,56 @@ out: return ret; }
+int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if) +{ + struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct roam_adv_packet *roam_adv_packet; + struct orig_node *orig_node; + struct ethhdr *ethhdr; + int ret = NET_RX_DROP; + + /* drop packet if it has not necessary minimum size */ + if (unlikely(!pskb_may_pull(skb, sizeof(struct roam_adv_packet)))) + goto out; + + ethhdr = (struct ethhdr *)skb_mac_header(skb); + + /* packet with unicast indication but broadcast recipient */ + if (is_broadcast_ether_addr(ethhdr->h_dest)) + goto out; + + /* packet with broadcast sender address */ + if (is_broadcast_ether_addr(ethhdr->h_source)) + goto out; + + roam_adv_packet = (struct roam_adv_packet *)skb->data; + + if (!is_my_mac(roam_adv_packet->dst)) + return route_unicast_packet(skb, recv_if); + + orig_node = orig_hash_find(bat_priv, roam_adv_packet->src); + if (!orig_node) + goto out; + + bat_dbg(DBG_TT, bat_priv, "Received ROAMING_ADV from %pM " + "(client %pM)\n", roam_adv_packet->src, + roam_adv_packet->client); + + tt_global_add(bat_priv, orig_node, roam_adv_packet->client, + atomic_read(&orig_node->last_ttvn) + 1, true); + + /* Roaming phase starts: I have new information but the ttvn has not + * been incremented yet. This flag will make me check all the incoming + * packets for the correct destination. */ + bat_priv->tt_poss_change = true; + + orig_node_free_ref(orig_node); + ret = NET_RX_SUCCESS; +out: + kfree(skb); + return ret; +} + /* find a suitable router for this originator, and use * bonding if possible. increases the found neighbors * refcount.*/ @@ -1449,35 +1502,41 @@ int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) uint8_t curr_ttvn; int16_t diff; struct hard_iface *primary_if; + bool tt_poss_change;
if (check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP;
+ /* I could need to modify it */ + if (skb_cow(skb, sizeof(struct unicast_packet)) < 0) + return NET_RX_DROP; + unicast_packet = (struct unicast_packet *)skb->data;
- if (is_my_mac(unicast_packet->dest)) + if (is_my_mac(unicast_packet->dest)) { + tt_poss_change = bat_priv->tt_poss_change; curr_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); - else { + } else { orig_node = orig_hash_find(bat_priv, unicast_packet->dest);
if (!orig_node) return NET_RX_DROP;
curr_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); + tt_poss_change = orig_node->tt_poss_change; orig_node_free_ref(orig_node); }
diff = unicast_packet->ttvn - curr_ttvn; /* Check whether I have to reroute the packet */ if (unicast_packet->packet_type == BAT_UNICAST && - (diff < 0 && diff > -0xff/2)) { + (seq_before(unicast_packet->ttvn, curr_ttvn) || tt_poss_change)) { /* Linearize the skb before accessing it */ if (skb_linearize(skb) < 0) return NET_RX_DROP;
ethhdr = (struct ethhdr *)(skb->data + sizeof(struct unicast_packet)); - orig_node = transtable_search(bat_priv, ethhdr->h_dest);
if (!orig_node) { diff --git a/routing.h b/routing.h index e77d464..fb14e95 100644 --- a/routing.h +++ b/routing.h @@ -37,6 +37,7 @@ int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_bat_packet(struct sk_buff *skb, struct hard_iface *recv_if); int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); +int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if); struct neigh_node *find_router(struct bat_priv *bat_priv, struct orig_node *orig_node, const struct hard_iface *recv_if); diff --git a/send.c b/send.c index 13e5d20..0c5d671 100644 --- a/send.c +++ b/send.c @@ -303,6 +303,7 @@ void schedule_own_packet(struct hard_iface *hard_iface) prepare_packet_buffer(bat_priv, hard_iface); /* Increment the TTVN only once per OGM interval */ atomic_inc(&bat_priv->ttvn); + bat_priv->tt_poss_change = false; }
/* if the changes have been sent enough times */ diff --git a/soft-interface.c b/soft-interface.c index 8e94273..b268f85 100644 --- a/soft-interface.c +++ b/soft-interface.c @@ -534,7 +534,7 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) /* only modify transtable if it has been initialised before */ if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { tt_local_remove(bat_priv, dev->dev_addr, - "mac address changed"); + "mac address changed", false); tt_local_add(dev, addr->sa_data); }
@@ -836,6 +836,7 @@ struct net_device *softif_create(const char *name)
bat_priv->tt_buff = NULL; bat_priv->tt_buff_len = 0; + bat_priv->tt_poss_change = false;
bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; diff --git a/translation-table.c b/translation-table.c index 4c355b4..902635a 100644 --- a/translation-table.c +++ b/translation-table.c @@ -126,7 +126,7 @@ static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) }
static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, - const uint8_t *addr) + const uint8_t *addr, uint8_t roaming) { struct tt_change_node *tt_change_node;
@@ -137,6 +137,9 @@ static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, return;
tt_change_node->change.flags = op; + if (roaming) + tt_change_node->change.flags |= TT_GLOBAL_ROAM; + memcpy(tt_change_node->change.addr, addr, ETH_ALEN);
spin_lock_bh(&bat_priv->tt_changes_list_lock); @@ -171,6 +174,8 @@ void tt_local_add(struct net_device *soft_iface, const uint8_t *addr) struct bat_priv *bat_priv = netdev_priv(soft_iface); struct tt_local_entry *tt_local_entry; struct tt_global_entry *tt_global_entry; + uint8_t roam_addr[ETH_ALEN]; + struct orig_node *roam_orig_node;
spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); @@ -184,7 +189,7 @@ void tt_local_add(struct net_device *soft_iface, const uint8_t *addr) if (!tt_local_entry) goto unlock;
- tt_local_event(bat_priv, TT_CHANGE_ADD, addr); + tt_local_event(bat_priv, TT_CHANGE_ADD, addr, false);
bat_dbg(DBG_TT, bat_priv, "Creating new local tt entry: %pM (ttvn: %d)\n", addr, @@ -209,11 +214,20 @@ void tt_local_add(struct net_device *soft_iface, const uint8_t *addr)
tt_global_entry = tt_global_hash_find(bat_priv, addr);
- if (tt_global_entry) + /* Check whether it is a roaming! */ + if (tt_global_entry) { + memcpy(roam_addr, tt_global_entry->addr, ETH_ALEN); + roam_orig_node = tt_global_entry->orig_node; + /* This node is probably going to update its tt table */ + tt_global_entry->orig_node->tt_poss_change = true; _tt_global_del(bat_priv, tt_global_entry, "local tt received"); + spin_unlock_bh(&bat_priv->tt_ghash_lock); + send_roam_adv(bat_priv, tt_global_entry->addr, + tt_global_entry->orig_node); + } else + spin_unlock_bh(&bat_priv->tt_ghash_lock);
- spin_unlock_bh(&bat_priv->tt_ghash_lock); return; unlock: spin_unlock_bh(&bat_priv->tt_lhash_lock); @@ -368,7 +382,7 @@ static void tt_local_del(struct bat_priv *bat_priv, }
void tt_local_remove(struct bat_priv *bat_priv, const uint8_t *addr, - const char *message) + const char *message, bool roaming) { struct tt_local_entry *tt_local_entry;
@@ -376,7 +390,8 @@ void tt_local_remove(struct bat_priv *bat_priv, const uint8_t *addr, tt_local_entry = tt_local_hash_find(bat_priv, addr);
if (tt_local_entry) { - tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr); + tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, + roaming); tt_local_del(bat_priv, tt_local_entry, message); } spin_unlock_bh(&bat_priv->tt_lhash_lock); @@ -405,7 +420,7 @@ static void tt_local_purge(struct bat_priv *bat_priv) continue;
tt_local_event(bat_priv, TT_CHANGE_DEL, - tt_local_entry->addr); + tt_local_entry->addr, false); tt_local_del(bat_priv, tt_local_entry, "address timed out"); } @@ -478,7 +493,7 @@ static void tt_changes_list_free(struct bat_priv *bat_priv) /* caller must hold orig_node recount */ int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, - const unsigned char *tt_addr, uint8_t ttvn) + const unsigned char *tt_addr, uint8_t ttvn, bool roaming) { struct tt_global_entry *tt_global_entry; struct tt_local_entry *tt_local_entry; @@ -498,6 +513,7 @@ int tt_global_add(struct bat_priv *bat_priv, atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; tt_global_entry->ttvn = ttvn; + tt_global_entry->flags = 0x00; atomic_inc(&orig_node->tt_size); hash_add(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry, @@ -509,6 +525,7 @@ int tt_global_add(struct bat_priv *bat_priv, atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; tt_global_entry->ttvn = ttvn; + tt_global_entry->flags = 0x00; orig_node_free_ref(orig_node_tmp); atomic_inc(&orig_node->tt_size); } @@ -525,8 +542,9 @@ int tt_global_add(struct bat_priv *bat_priv, tt_local_entry = tt_local_hash_find(bat_priv, tt_addr);
if (tt_local_entry) - tt_local_del(bat_priv, tt_local_entry, - "global tt received"); + tt_local_remove(bat_priv, tt_global_entry->addr, + "global tt received", roaming); + spin_unlock_bh(&bat_priv->tt_lhash_lock); return 1; unlock: @@ -639,7 +657,7 @@ static void _tt_global_del(struct bat_priv *bat_priv,
void tt_global_del(struct bat_priv *bat_priv, struct orig_node *orig_node, const unsigned char *addr, - const char *message) + const char *message, bool roaming) { struct tt_global_entry *tt_global_entry;
@@ -647,9 +665,14 @@ void tt_global_del(struct bat_priv *bat_priv, tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (tt_global_entry && tt_global_entry->orig_node == orig_node) { + if (roaming) { + tt_global_entry->flags |= TT_GLOBAL_ROAM; + goto out; + } atomic_dec(&orig_node->tt_size); _tt_global_del(bat_priv, tt_global_entry, message); } +out: spin_unlock_bh(&bat_priv->tt_ghash_lock); }
@@ -736,6 +759,12 @@ uint16_t tt_global_crc(struct bat_priv *bat_priv, struct orig_node *orig_node) head, hash_entry) { if (compare_eth(tt_global_entry->orig_node, orig_node)) { + /* Roaming clients are in the global table for + * consistency only. They don't have to be + * taken into account while computing the + * global crc */ + if (tt_global_entry->flags & TT_GLOBAL_ROAM) + continue; total_one = 0; for (j = 0; j < ETH_ALEN; j++) total_one = crc16_byte(total_one, @@ -1251,7 +1280,7 @@ static void _tt_fill_gtable(struct bat_priv *bat_priv, tt_ptr = tt_buff + (count * ETH_ALEN);
/* If we fail to allocate a new entry we return immediatly */ - if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn)) + if (!tt_global_add(bat_priv, orig_node, tt_ptr, ttvn, false)) return; } atomic_set(&orig_node->last_ttvn, ttvn); @@ -1296,10 +1325,17 @@ void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, if ((tt_change + i)->flags & TT_CHANGE_DEL) tt_global_del(bat_priv, orig_node, (tt_change + i)->addr, - "tt removed by changes"); + "tt removed by changes", + (tt_change + i)->flags & TT_CHANGE_ROAM); else if (!tt_global_add(bat_priv, orig_node, - (tt_change + i)->addr, ttvn)) + (tt_change + i)->addr, ttvn, false)) + /* In case of problem while storing a + * global_entry, we stop the updating + * procedure without committing the + * ttvn change. This will avoid to send + * corrupted data on tt_request + */ return; }
@@ -1358,6 +1394,9 @@ void handle_tt_response(struct bat_priv *bat_priv, spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); spin_unlock_bh(&bat_priv->tt_ghash_lock); + /* Roaming phase is over: tables are in sync again. I can + * unset the flag */ + orig_node->tt_poss_change = false; out: if (orig_node) orig_node_free_ref(orig_node); @@ -1376,16 +1415,135 @@ int tt_init(struct bat_priv *bat_priv) return 1; }
-void tt_free(struct bat_priv *bat_priv) +static void tt_roam_list_free(struct bat_priv *bat_priv) { - cancel_delayed_work_sync(&bat_priv->tt_work); + struct tt_roam_node *node, *safe;
- tt_local_table_free(bat_priv); - tt_global_table_free(bat_priv); - tt_req_list_free(bat_priv); - tt_changes_list_free(bat_priv); + spin_lock_bh(&bat_priv->tt_roam_list_lock);
- kfree(bat_priv->tt_buff); + list_for_each_entry_safe(node, safe, &bat_priv->tt_roam_list, list) { + list_del(&node->list); + kfree(node); + } + + spin_unlock_bh(&bat_priv->tt_roam_list_lock); +} + +static void tt_roam_purge(struct bat_priv *bat_priv) +{ + struct tt_roam_node *node, *safe; + + spin_lock_bh(&bat_priv->tt_roam_list_lock); + list_for_each_entry_safe(node, safe, &bat_priv->tt_roam_list, list) { + if (!is_out_of_time(node->first_time, + ROAMING_MAX_TIME * 1000)) + continue; + + list_del(&node->list); + kfree(node); + } + spin_unlock_bh(&bat_priv->tt_roam_list_lock); +} + +/* This function checks whether the client already reached the + * maximum number of possible roaming phases. In this case the ROAMING_ADV + * will not be sent. + * + * returns true if the ROAMING_ADV can be sent, false otherwise */ +static bool tt_check_roam_count(struct bat_priv *bat_priv, + uint8_t *client) +{ + struct tt_roam_node *tt_roam_node; + bool ret = false; + + spin_lock_bh(&bat_priv->tt_roam_list_lock); + /* The new tt_req will be issued only if I'm not waiting for a + * reply from the same orig_node yet */ + list_for_each_entry(tt_roam_node, &bat_priv->tt_roam_list, list) { + if (!compare_eth(tt_roam_node->addr, client)) + continue; + + if (is_out_of_time(tt_roam_node->first_time, + ROAMING_MAX_TIME * 1000)) + continue; + + if (!atomic_dec_not_zero(&tt_roam_node->counter)) + /* Sorry, you roamed too many times! */ + goto unlock; + ret = true; + break; + } + + if (!ret) { + tt_roam_node = kmalloc(sizeof(*tt_roam_node), GFP_ATOMIC); + if (!tt_roam_node) + goto unlock; + + tt_roam_node->first_time = jiffies; + atomic_set(&tt_roam_node->counter, ROAMING_MAX_COUNT - 1); + memcpy(tt_roam_node->addr, client, ETH_ALEN); + + list_add(&tt_roam_node->list, &bat_priv->tt_roam_list); + ret = true; + } + +unlock: + spin_unlock_bh(&bat_priv->tt_roam_list_lock); + return ret; +} + +void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, + struct orig_node *orig_node) +{ + struct neigh_node *neigh_node = NULL; + struct sk_buff *skb = NULL; + struct roam_adv_packet *roam_adv_packet; + int ret = 1; + struct hard_iface *primary_if; + + /* before going on we have to check whether the client has + * already roamed to us too many times */ + if (!tt_check_roam_count(bat_priv, client)) + goto out; + + skb = dev_alloc_skb(sizeof(struct roam_adv_packet) + ETH_HLEN); + if (!skb) + goto out; + + skb_reserve(skb, ETH_HLEN); + + roam_adv_packet = (struct roam_adv_packet *)skb_put(skb, + sizeof(struct roam_adv_packet)); + + roam_adv_packet->packet_type = BAT_ROAM_ADV; + roam_adv_packet->version = COMPAT_VERSION; + roam_adv_packet->ttl = TTL; + primary_if = primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + memcpy(roam_adv_packet->src, + bat_priv->primary_if->net_dev->dev_addr, ETH_ALEN); + hardif_free_ref(primary_if); + memcpy(roam_adv_packet->dst, orig_node->orig, ETH_ALEN); + memcpy(roam_adv_packet->client, client, ETH_ALEN); + + neigh_node = orig_node_get_router(orig_node); + if (!neigh_node) + goto out; + + bat_dbg(DBG_TT, bat_priv, + "Sending ROAMING_ADV to %pM (client %pM) via %pM\n", + orig_node->orig, client, neigh_node->addr); + + send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + ret = 0; + +out: + if (neigh_node) + neigh_node_free_ref(neigh_node); + if (ret) + kfree_skb(skb); + return; }
static void tt_purge(struct work_struct *work) @@ -1397,6 +1555,20 @@ static void tt_purge(struct work_struct *work)
tt_local_purge(bat_priv); tt_req_purge(bat_priv); + tt_roam_purge(bat_priv);
tt_start_timer(bat_priv); } + +void tt_free(struct bat_priv *bat_priv) +{ + cancel_delayed_work_sync(&bat_priv->tt_work); + + tt_local_table_free(bat_priv); + tt_global_table_free(bat_priv); + tt_req_list_free(bat_priv); + tt_changes_list_free(bat_priv); + tt_roam_list_free(bat_priv); + + kfree(bat_priv->tt_buff); +} diff --git a/translation-table.h b/translation-table.h index 51f7e30..1cd2d39 100644 --- a/translation-table.h +++ b/translation-table.h @@ -28,20 +28,20 @@ int tt_changes_fill_buffer(struct bat_priv *bat_priv, int tt_init(struct bat_priv *bat_priv); void tt_local_add(struct net_device *soft_iface, const uint8_t *addr); void tt_local_remove(struct bat_priv *bat_priv, - const uint8_t *addr, const char *message); + const uint8_t *addr, const char *message, bool roaming); int tt_local_seq_print_text(struct seq_file *seq, void *offset); void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, const unsigned char *tt_buff, int tt_buff_len); int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, const unsigned char *addr, - uint8_t ttvn); + uint8_t ttvn, bool roaming); int tt_global_seq_print_text(struct seq_file *seq, void *offset); void tt_global_del_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, const char *message); void tt_global_del(struct bat_priv *bat_priv, struct orig_node *orig_node, const unsigned char *addr, - const char *message); + const char *message, bool roaming); struct orig_node *transtable_search(struct bat_priv *bat_priv, const uint8_t *addr); void tt_save_orig_buffer(struct bat_priv *bat_priv, struct orig_node *orig_node, @@ -60,5 +60,7 @@ void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node, bool is_my_client(struct bat_priv *bat_priv, const uint8_t *addr); void handle_tt_response(struct bat_priv *bat_priv, struct tt_query_packet *tt_response); +void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, + struct orig_node *orig_node);
#endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/types.h b/types.h index 0848fcc..8f05632 100644 --- a/types.h +++ b/types.h @@ -81,6 +81,12 @@ struct orig_node { int16_t tt_buff_len; spinlock_t tt_buff_lock; /* protects tt_buff */ atomic_t tt_size; + /* The tt_poss_change flag is used to detect an ongoing roaming phase. + * If true, then I sent a Roaming_adv to this orig_node and I have to + * inspect every packet directed to it to check whether it is still + * the true destination or not. This flag will be reset to false as + * soon as I receive a new TTVN from this orig_node */ + bool tt_poss_change; uint32_t last_real_seqno; uint8_t last_ttl; unsigned long bcast_bits[NUM_WORDS]; @@ -158,6 +164,12 @@ struct bat_priv { atomic_t ttvn; /* tranlation table version number */ atomic_t tt_ogm_append_cnt; atomic_t tt_local_changes; /* changes registered in a OGM interval */ + /* The tt_poss_change flag is used to detect an ongoing roaming phase. + * If true, then I received a Roaming_adv and I have to inspect every + * packet directed to me to check whether I am still the true + * destination or not. This flag will be reset to false as soon as I + * increase my TTVN */ + bool tt_poss_change; char num_ifaces; struct debug_log *debug_log; struct kobject *mesh_obj; @@ -172,6 +184,7 @@ struct bat_priv { struct hashtable_t *tt_local_hash; struct hashtable_t *tt_global_hash; struct list_head tt_req_list; /* list of pending tt_requests */ + struct list_head tt_roam_list; struct hashtable_t *vis_hash; spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ @@ -179,6 +192,7 @@ struct bat_priv { spinlock_t tt_lhash_lock; /* protects tt_local_hash */ spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t tt_req_list_lock; /* protects tt_req_list */ + spinlock_t tt_roam_list_lock; /* protects tt_roam_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ spinlock_t vis_hash_lock; /* protects vis_hash */ spinlock_t vis_list_lock; /* protects vis_info::recv_list */ @@ -224,8 +238,8 @@ struct tt_global_entry { uint8_t addr[ETH_ALEN]; struct orig_node *orig_node; uint8_t ttvn; - /* entry in the global table */ - struct hlist_node hash_entry; + uint8_t flags; /* only TT_GLOBAL_ROAM is used */ + struct hlist_node hash_entry; /* entry in the global table */ };
struct tt_change_node { @@ -239,6 +253,13 @@ struct tt_req_node { struct list_head list; };
+struct tt_roam_node { + uint8_t addr[ETH_ALEN]; + atomic_t counter; + unsigned long first_time; + struct list_head list; +}; + /** * forw_packet - structure for forw_list maintaining packets to be * send/forwarded
Antonio Quartulli wrote:
- memcpy(roam_adv_packet->src,
bat_priv->primary_if->net_dev->dev_addr, ETH_ALEN);
Again, Nack.
This is _definitely_ not allowed. Please use the primary_if pointer that you obtained (and NULL-checked) through primary_if_get_selected() instead of dereferencing bat_priv->primary_if directly.
Kind regards, Sven
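A minimal sketch of the corrected excerpt along the lines of this review (not the author's follow-up patch): the address is copied from the primary_if reference that was obtained and NULL-checked via primary_if_get_selected(), rather than from bat_priv->primary_if.

	primary_if = primary_if_get_selected(bat_priv);
	if (!primary_if)
		goto out;

	/* use the reference we actually hold and have checked */
	memcpy(roam_adv_packet->src, primary_if->net_dev->dev_addr, ETH_ALEN);
	hardif_free_ref(primary_if);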
The local and global translation tables are now lock-free and RCU-protected.
Signed-off-by: Antonio Quartulli <ordex@autistici.org>
Acked-by: Simon Wunderlich <siwu@hrz.tu-chemnitz.de>
---
- Cleaned following ecsv's patches/suggestions.
- Patched to use kfree_rcu()
- Added compat code for kfree_rcu()
 compat.c            |   16 ++++
 compat.h            |    2 +
 main.c              |    2 -
 routing.c           |    2 -
 translation-table.c |  240 +++++++++++++++++++++++++++------------------------
 types.h             |    6 +-
 vis.c               |   13 ++--
 7 files changed, 157 insertions(+), 124 deletions(-)
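A short aside before the hunks (not part of the patch): the tt_lhash_lock/tt_ghash_lock spinlocks disappear below and are replaced by per-entry reference counting plus RCU. A minimal sketch of the resulting caller-side pattern, using the names introduced in this patch:

	struct tt_local_entry *tt_local_entry;

	/* tt_local_hash_find() now takes a reference on the entry via
	 * atomic_inc_not_zero(), so no table-wide lock is needed here */
	tt_local_entry = tt_local_hash_find(bat_priv, addr);
	if (!tt_local_entry)
		return;

	/* ... use the entry without holding any table lock ... */

	/* the last put frees the entry after a grace period (kfree_rcu) */
	tt_local_entry_free_ref(tt_local_entry);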
diff --git a/compat.c b/compat.c index e040486..ebedae8 100644 --- a/compat.c +++ b/compat.c @@ -1015,4 +1015,20 @@ void free_rcu_softif_neigh(struct rcu_head *rcu) kfree(softif_neigh); }
+void free_rcu_tt_local_entry(struct rcu_head *rcu) +{ + struct tt_local_entry *tt_local_entry; + + tt_local_entry = container_of(rcu, struct tt_local_entry, rcu); + kfree(tt_local_entry); +} + +void free_rcu_tt_global_entry(struct rcu_head *rcu) +{ + struct tt_global_entry *tt_global_entry; + + tt_global_entry = container_of(rcu, struct tt_global_entry, rcu); + kfree(tt_global_entry); +} + #endif /* < KERNEL_VERSION(2, 6, 40) */ diff --git a/compat.h b/compat.h index 2bd9e0a..6842a26 100644 --- a/compat.h +++ b/compat.h @@ -282,6 +282,8 @@ int bat_seq_printf(struct seq_file *m, const char *f, ...); void free_rcu_gw_node(struct rcu_head *rcu); void free_rcu_neigh_node(struct rcu_head *rcu); void free_rcu_softif_neigh(struct rcu_head *rcu); +void free_rcu_tt_local_entry(struct rcu_head *rcu); +void free_rcu_tt_global_entry(struct rcu_head *rcu);
#endif /* < KERNEL_VERSION(2, 6, 40) */
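The free_rcu_tt_local_entry()/free_rcu_tt_global_entry() helpers above only matter on kernels that lack kfree_rcu(); the matching fallback macro lives in compat.h and is not part of this hunk. A hypothetical reconstruction (the real macro may differ) that would explain why each callback is named after the variable it frees:

	/* hypothetical compat fallback for kernels < 2.6.40: only valid when
	 * 'ptr' is a plain variable name with a matching free_rcu_<ptr>()
	 * callback, e.g. kfree_rcu(tt_local_entry, rcu) */
	#define kfree_rcu(ptr, rcu_head) \
		call_rcu(&(ptr)->rcu_head, free_rcu_##ptr)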
diff --git a/main.c b/main.c index 3318ee2..c2b06b7 100644 --- a/main.c +++ b/main.c @@ -84,8 +84,6 @@ int mesh_init(struct net_device *soft_iface)
spin_lock_init(&bat_priv->forw_bat_list_lock); spin_lock_init(&bat_priv->forw_bcast_list_lock); - spin_lock_init(&bat_priv->tt_lhash_lock); - spin_lock_init(&bat_priv->tt_ghash_lock); spin_lock_init(&bat_priv->tt_changes_list_lock); spin_lock_init(&bat_priv->tt_req_list_lock); spin_lock_init(&bat_priv->tt_roam_list_lock); diff --git a/routing.c b/routing.c index ed25b82..7b8f360 100644 --- a/routing.c +++ b/routing.c @@ -90,9 +90,7 @@ static void update_transtable(struct bat_priv *bat_priv, /* Even if we received the crc into the OGM, we prefer * to recompute it to spot any possible inconsistency * in the global table */ - spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); - spin_unlock_bh(&bat_priv->tt_ghash_lock); /* Roaming phase is over: tables are in sync again. I can * unset the flag */ orig_node->tt_poss_change = false; diff --git a/translation-table.c b/translation-table.c index 902635a..5f89812 100644 --- a/translation-table.c +++ b/translation-table.c @@ -80,6 +80,9 @@ static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, if (!compare_eth(tt_local_entry, data)) continue;
+ if (!atomic_inc_not_zero(&tt_local_entry->refcount)) + continue; + tt_local_entry_tmp = tt_local_entry; break; } @@ -109,6 +112,9 @@ static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, if (!compare_eth(tt_global_entry, data)) continue;
+ if (!atomic_inc_not_zero(&tt_global_entry->refcount)) + continue; + tt_global_entry_tmp = tt_global_entry; break; } @@ -125,8 +131,20 @@ static bool is_out_of_time(unsigned long starting_time, unsigned long timeout) return time_after(jiffies, deadline); }
+static void tt_local_entry_free_ref(struct tt_local_entry *tt_local_entry) +{ + if (atomic_dec_and_test(&tt_local_entry->refcount)) + kfree_rcu(tt_local_entry, rcu); +} + +static void tt_global_entry_free_ref(struct tt_global_entry *tt_global_entry) +{ + if (atomic_dec_and_test(&tt_global_entry->refcount)) + kfree_rcu(tt_global_entry, rcu); +} + static void tt_local_event(struct bat_priv *bat_priv, uint8_t op, - const uint8_t *addr, uint8_t roaming) + const uint8_t *addr, bool roaming) { struct tt_change_node *tt_change_node;
@@ -172,22 +190,19 @@ static int tt_local_init(struct bat_priv *bat_priv) void tt_local_add(struct net_device *soft_iface, const uint8_t *addr) { struct bat_priv *bat_priv = netdev_priv(soft_iface); - struct tt_local_entry *tt_local_entry; - struct tt_global_entry *tt_global_entry; - uint8_t roam_addr[ETH_ALEN]; - struct orig_node *roam_orig_node; + struct tt_local_entry *tt_local_entry = NULL; + struct tt_global_entry *tt_global_entry = NULL;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr);
if (tt_local_entry) { tt_local_entry->last_seen = jiffies; - goto unlock; + goto out; }
tt_local_entry = kmalloc(sizeof(*tt_local_entry), GFP_ATOMIC); if (!tt_local_entry) - goto unlock; + goto out;
tt_local_event(bat_priv, TT_CHANGE_ADD, addr, false);
@@ -197,6 +212,7 @@ void tt_local_add(struct net_device *soft_iface, const uint8_t *addr)
memcpy(tt_local_entry->addr, addr, ETH_ALEN); tt_local_entry->last_seen = jiffies; + atomic_set(&tt_local_entry->refcount, 2);
/* the batman interface mac address should never be purged */ if (compare_eth(addr, soft_iface->dev_addr)) @@ -206,31 +222,26 @@ void tt_local_add(struct net_device *soft_iface, const uint8_t *addr)
hash_add(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry, &tt_local_entry->hash_entry); + atomic_inc(&bat_priv->num_local_tt); - spin_unlock_bh(&bat_priv->tt_lhash_lock);
/* remove address from global hash if present */ - spin_lock_bh(&bat_priv->tt_ghash_lock); - tt_global_entry = tt_global_hash_find(bat_priv, addr);
/* Check whether it is a roaming! */ if (tt_global_entry) { - memcpy(roam_addr, tt_global_entry->addr, ETH_ALEN); - roam_orig_node = tt_global_entry->orig_node; /* This node is probably going to update its tt table */ tt_global_entry->orig_node->tt_poss_change = true; _tt_global_del(bat_priv, tt_global_entry, "local tt received"); - spin_unlock_bh(&bat_priv->tt_ghash_lock); send_roam_adv(bat_priv, tt_global_entry->addr, - tt_global_entry->orig_node); - } else - spin_unlock_bh(&bat_priv->tt_ghash_lock); - - return; -unlock: - spin_unlock_bh(&bat_priv->tt_lhash_lock); + tt_global_entry->orig_node); + } +out: + if (tt_local_entry) + tt_local_entry_free_ref(tt_local_entry); + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
int tt_changes_fill_buffer(struct bat_priv *bat_priv, @@ -312,8 +323,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) "announced via TT (TTVN: %u):\n", net_dev->name, (uint8_t)atomic_read(&bat_priv->ttvn));
- spin_lock_bh(&bat_priv->tt_lhash_lock); - buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx\n" */ for (i = 0; i < hash->size; i++) { @@ -327,7 +336,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->tt_lhash_lock); ret = -ENOMEM; goto out; } @@ -347,8 +355,6 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); - seq_printf(seq, "%s", buff); kfree(buff); out: @@ -357,15 +363,6 @@ out: return ret; }
-static void tt_local_entry_free(struct hlist_node *node, void *arg) -{ - struct bat_priv *bat_priv = arg; - void *data = container_of(node, struct tt_local_entry, hash_entry); - - kfree(data); - atomic_dec(&bat_priv->num_local_tt); -} - static void tt_local_del(struct bat_priv *bat_priv, struct tt_local_entry *tt_local_entry, const char *message) @@ -378,23 +375,24 @@ static void tt_local_del(struct bat_priv *bat_priv, hash_remove(bat_priv->tt_local_hash, compare_ltt, choose_orig, tt_local_entry->addr);
- tt_local_entry_free(&tt_local_entry->hash_entry, bat_priv); + tt_local_entry_free_ref(tt_local_entry); }
void tt_local_remove(struct bat_priv *bat_priv, const uint8_t *addr, const char *message, bool roaming) { - struct tt_local_entry *tt_local_entry; + struct tt_local_entry *tt_local_entry = NULL;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr);
- if (tt_local_entry) { - tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, - roaming); - tt_local_del(bat_priv, tt_local_entry, message); - } - spin_unlock_bh(&bat_priv->tt_lhash_lock); + if (!tt_local_entry) + goto out; + + tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, roaming); + tt_local_del(bat_priv, tt_local_entry, message); +out: + if (tt_local_entry) + tt_local_entry_free_ref(tt_local_entry); }
static void tt_local_purge(struct bat_priv *bat_priv) @@ -403,13 +401,14 @@ static void tt_local_purge(struct bat_priv *bat_priv) struct tt_local_entry *tt_local_entry; struct hlist_node *node, *node_tmp; struct hlist_head *head; + spinlock_t *list_lock; /* protects write access to the hash lists */ int i;
- spin_lock_bh(&bat_priv->tt_lhash_lock); - for (i = 0; i < hash->size; i++) { head = &hash->table[i]; + list_lock = &hash->list_locks[i];
+ spin_lock_bh(list_lock); hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { if (tt_local_entry->never_purge) @@ -421,22 +420,26 @@ static void tt_local_purge(struct bat_priv *bat_priv)
tt_local_event(bat_priv, TT_CHANGE_DEL, tt_local_entry->addr, false); - tt_local_del(bat_priv, tt_local_entry, - "address timed out"); + atomic_dec(&bat_priv->num_local_tt); + bat_dbg(DBG_TT, bat_priv, "Deleting local " + "tt entry (%pM): timed out\n", + tt_local_entry->addr); + hlist_del_rcu(node); + tt_local_entry_free_ref(tt_local_entry); } + spin_unlock_bh(list_lock); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); }
static void tt_local_table_free(struct bat_priv *bat_priv) { struct hashtable_t *hash; - int i; spinlock_t *list_lock; /* protects write access to the hash lists */ - struct hlist_head *head; - struct hlist_node *node, *node_tmp; struct tt_local_entry *tt_local_entry; + struct hlist_node *node, *node_tmp; + struct hlist_head *head; + int i;
if (!bat_priv->tt_local_hash) return; @@ -451,7 +454,7 @@ static void tt_local_table_free(struct bat_priv *bat_priv) hlist_for_each_entry_safe(tt_local_entry, node, node_tmp, head, hash_entry) { hlist_del_rcu(node); - kfree(tt_local_entry); + tt_local_entry_free_ref(tt_local_entry); } spin_unlock_bh(list_lock); } @@ -496,10 +499,9 @@ int tt_global_add(struct bat_priv *bat_priv, const unsigned char *tt_addr, uint8_t ttvn, bool roaming) { struct tt_global_entry *tt_global_entry; - struct tt_local_entry *tt_local_entry; struct orig_node *orig_node_tmp; + int ret = 0;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, tt_addr);
if (!tt_global_entry) { @@ -507,17 +509,20 @@ int tt_global_add(struct bat_priv *bat_priv, kmalloc(sizeof(*tt_global_entry), GFP_ATOMIC); if (!tt_global_entry) - goto unlock; + goto out; + memcpy(tt_global_entry->addr, tt_addr, ETH_ALEN); /* Assign the new orig_node */ atomic_inc(&orig_node->refcount); tt_global_entry->orig_node = orig_node; tt_global_entry->ttvn = ttvn; tt_global_entry->flags = 0x00; - atomic_inc(&orig_node->tt_size); + atomic_set(&tt_global_entry->refcount, 2); + hash_add(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry, &tt_global_entry->hash_entry); + atomic_inc(&orig_node->tt_size); } else { if (tt_global_entry->orig_node != orig_node) { atomic_dec(&tt_global_entry->orig_node->tt_size); @@ -531,25 +536,18 @@ int tt_global_add(struct bat_priv *bat_priv, } }
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - bat_dbg(DBG_TT, bat_priv, "Creating new global tt entry: %pM (via %pM)\n", tt_global_entry->addr, orig_node->orig);
/* remove address from local hash if present */ - spin_lock_bh(&bat_priv->tt_lhash_lock); - tt_local_entry = tt_local_hash_find(bat_priv, tt_addr); - - if (tt_local_entry) - tt_local_remove(bat_priv, tt_global_entry->addr, - "global tt received", roaming); - - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return 1; -unlock: - spin_unlock_bh(&bat_priv->tt_ghash_lock); - return 0; + tt_local_remove(bat_priv, tt_global_entry->addr, + "global tt received", roaming); + ret = 1; +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); + return ret; }
int tt_global_seq_print_text(struct seq_file *seq, void *offset) @@ -586,8 +584,6 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) seq_printf(seq, " %-13s %s %-15s %s\n", "Client", "(TTVN)", "Originator", "(Curr TTVN)");
- spin_lock_bh(&bat_priv->tt_ghash_lock); - buf_size = 1; /* Estimate length for: " * xx:xx:xx:xx:xx:xx (ttvn) via * xx:xx:xx:xx:xx:xx (cur_ttvn)\n"*/ @@ -602,10 +598,10 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset)
buff = kmalloc(buf_size, GFP_ATOMIC); if (!buff) { - spin_unlock_bh(&bat_priv->tt_ghash_lock); ret = -ENOMEM; goto out; } + buff[0] = '\0'; pos = 0;
@@ -627,8 +623,6 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_ghash_lock); - seq_printf(seq, "%s", buff); kfree(buff); out: @@ -642,7 +636,7 @@ static void _tt_global_del(struct bat_priv *bat_priv, const char *message) { if (!tt_global_entry) - return; + goto out;
bat_dbg(DBG_TT, bat_priv, "Deleting global tt entry %pM (via %pM): %s\n", @@ -650,30 +644,34 @@ static void _tt_global_del(struct bat_priv *bat_priv, message);
atomic_dec(&tt_global_entry->orig_node->tt_size); + hash_remove(bat_priv->tt_global_hash, compare_gtt, choose_orig, tt_global_entry->addr); - kfree(tt_global_entry); +out: + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
void tt_global_del(struct bat_priv *bat_priv, struct orig_node *orig_node, const unsigned char *addr, const char *message, bool roaming) { - struct tt_global_entry *tt_global_entry; + struct tt_global_entry *tt_global_entry = NULL;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, addr); + if (!tt_global_entry) + goto out;
- if (tt_global_entry && tt_global_entry->orig_node == orig_node) { + if (tt_global_entry->orig_node == orig_node) { if (roaming) { tt_global_entry->flags |= TT_GLOBAL_ROAM; goto out; } - atomic_dec(&orig_node->tt_size); _tt_global_del(bat_priv, tt_global_entry, message); } out: - spin_unlock_bh(&bat_priv->tt_ghash_lock); + if (tt_global_entry) + tt_global_entry_free_ref(tt_global_entry); }
void tt_global_del_orig(struct bat_priv *bat_priv, @@ -684,38 +682,59 @@ void tt_global_del_orig(struct bat_priv *bat_priv, struct hashtable_t *hash = bat_priv->tt_global_hash; struct hlist_node *node, *safe; struct hlist_head *head; + spinlock_t *list_lock; /* protects write access to the hash lists */
- if (!bat_priv->tt_global_hash) - return; - - spin_lock_bh(&bat_priv->tt_ghash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i]; + list_lock = &hash->list_locks[i];
+ spin_lock_bh(list_lock); hlist_for_each_entry_safe(tt_global_entry, node, safe, head, hash_entry) { - if (tt_global_entry->orig_node == orig_node) - _tt_global_del(bat_priv, tt_global_entry, - message); + if (tt_global_entry->orig_node == orig_node) { + bat_dbg(DBG_TT, bat_priv, + "Deleting global tt entry %pM " + "(via %pM): originator time out\n", + tt_global_entry->addr, + tt_global_entry->orig_node->orig); + hlist_del_rcu(node); + tt_global_entry_free_ref(tt_global_entry); + } } + spin_unlock_bh(list_lock); } atomic_set(&orig_node->tt_size, 0); - - spin_unlock_bh(&bat_priv->tt_ghash_lock); -} - -static void tt_global_entry_free(struct hlist_node *node, void *arg) -{ - void *data = container_of(node, struct tt_global_entry, hash_entry); - kfree(data); }
static void tt_global_table_free(struct bat_priv *bat_priv) { + struct hashtable_t *hash; + spinlock_t *list_lock; /* protects write access to the hash lists */ + struct tt_global_entry *tt_global_entry; + struct hlist_node *node, *node_tmp; + struct hlist_head *head; + int i; + if (!bat_priv->tt_global_hash) return;
- hash_delete(bat_priv->tt_global_hash, tt_global_entry_free, NULL); + hash = bat_priv->tt_global_hash; + + for (i = 0; i < hash->size; i++) { + head = &hash->table[i]; + list_lock = &hash->list_locks[i]; + + spin_lock_bh(list_lock); + hlist_for_each_entry_safe(tt_global_entry, node, node_tmp, + head, hash_entry) { + hlist_del_rcu(node); + tt_global_entry_free_ref(tt_global_entry); + } + spin_unlock_bh(list_lock); + } + + hash_destroy(hash); + bat_priv->tt_global_hash = NULL; }
@@ -725,19 +744,19 @@ struct orig_node *transtable_search(struct bat_priv *bat_priv, struct tt_global_entry *tt_global_entry; struct orig_node *orig_node = NULL;
- spin_lock_bh(&bat_priv->tt_ghash_lock); tt_global_entry = tt_global_hash_find(bat_priv, addr);
if (!tt_global_entry) goto out;
if (!atomic_inc_not_zero(&tt_global_entry->orig_node->refcount)) - goto out; + goto free_tt;
orig_node = tt_global_entry->orig_node;
+free_tt: + tt_global_entry_free_ref(tt_global_entry); out: - spin_unlock_bh(&bat_priv->tt_ghash_lock); return orig_node; }
@@ -800,7 +819,6 @@ uint16_t tt_local_crc(struct bat_priv *bat_priv) tt_local_entry->addr[j]); total ^= total_one; } - rcu_read_unlock(); }
@@ -1346,15 +1364,17 @@ void tt_update_changes(struct bat_priv *bat_priv, struct orig_node *orig_node,
bool is_my_client(struct bat_priv *bat_priv, const uint8_t *addr) { - struct tt_local_entry *tt_local_entry; + struct tt_local_entry *tt_local_entry = NULL; + bool ret = false;
- spin_lock_bh(&bat_priv->tt_lhash_lock); tt_local_entry = tt_local_hash_find(bat_priv, addr); - spin_unlock_bh(&bat_priv->tt_lhash_lock); - + if (!tt_local_entry) + goto out; + ret = true; +out: if (tt_local_entry) - return true; - return false; + tt_local_entry_free_ref(tt_local_entry); + return ret; }
void handle_tt_response(struct bat_priv *bat_priv, @@ -1391,9 +1411,7 @@ void handle_tt_response(struct bat_priv *bat_priv, spin_unlock_bh(&bat_priv->tt_req_list_lock);
/* Recalculate the CRC for this orig_node and store it */ - spin_lock_bh(&bat_priv->tt_ghash_lock); orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); - spin_unlock_bh(&bat_priv->tt_ghash_lock); /* Roaming phase is over: tables are in sync again. I can * unset the flag */ orig_node->tt_poss_change = false; diff --git a/types.h b/types.h index 8f05632..1a8f20e 100644 --- a/types.h +++ b/types.h @@ -189,8 +189,6 @@ struct bat_priv { spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ spinlock_t tt_changes_list_lock; /* protects tt_changes */ - spinlock_t tt_lhash_lock; /* protects tt_local_hash */ - spinlock_t tt_ghash_lock; /* protects tt_global_hash */ spinlock_t tt_req_list_lock; /* protects tt_req_list */ spinlock_t tt_roam_list_lock; /* protects tt_roam_list */ spinlock_t gw_list_lock; /* protects gw_list and curr_gw */ @@ -231,6 +229,8 @@ struct tt_local_entry { uint8_t addr[ETH_ALEN]; unsigned long last_seen; char never_purge; + atomic_t refcount; + struct rcu_head rcu; struct hlist_node hash_entry; };
@@ -239,6 +239,8 @@ struct tt_global_entry { struct orig_node *orig_node; uint8_t ttvn; uint8_t flags; /* only TT_GLOBAL_ROAM is used */ + atomic_t refcount; + struct rcu_head rcu; struct hlist_node hash_entry; /* entry in the global table */ };
diff --git a/vis.c b/vis.c index 355c6e5..8a1b985 100644 --- a/vis.c +++ b/vis.c @@ -665,11 +665,12 @@ next:
hash = bat_priv->tt_local_hash;
- spin_lock_bh(&bat_priv->tt_lhash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i];
- hlist_for_each_entry(tt_local_entry, node, head, hash_entry) { + rcu_read_lock(); + hlist_for_each_entry_rcu(tt_local_entry, node, head, + hash_entry) { entry = (struct vis_info_entry *) skb_put(info->skb_packet, sizeof(*entry)); @@ -678,14 +679,12 @@ next: entry->quality = 0; /* 0 means TT */ packet->entries++;
- if (vis_packet_full(info)) { - spin_unlock_bh(&bat_priv->tt_lhash_lock); - return 0; - } + if (vis_packet_full(info)) + goto unlock; } + rcu_read_unlock(); }
- spin_unlock_bh(&bat_priv->tt_lhash_lock); return 0;
unlock:
On Wednesday 27 April 2011 23:35:02 Antonio Quartulli wrote:
Patchset description:
- Rename all the variables/functions/constants from hna to tt
- Implement the new announcement mechanism
- Implement the roaming optimisation
- Protect by RCU the local and global table
I just applied the remaining 3 patches from your repository (the first one already entered the master branch a while ago). The corresponding commits are: 4dea027, cea194d & 7bad463.
** Patch 2/4 also introduces a dependency on the crc16 module since the new mechanism uses the crc16 computation function provided by this module. **
Guess we have to adjust our Kconfig file? I'll modify the OpenWRT package to handle this dependency.
Thanks for your work, Marek
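If the crc16 dependency is handled in batman-adv's Kconfig, a minimal sketch would be a single select line; the surrounding entry shown here is an assumption, not the actual file contents:

	config BATMAN_ADV
		tristate "B.A.T.M.A.N. Advanced Meshing Protocol"
		depends on NET
		select CRC16
		# rest of the entry unchanged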