Hi Antonio,
Looks good, I like the idea of using refcounting for the tt global hash. That will indeed nicely remove the ordering dependency I introduced with my earlier patch.
@@ -844,7 +897,17 @@ int batadv_tt_global_add(struct batadv_priv *bat_priv,
 	if (unlikely(hash_added != 0)) {
 		/* remove the reference for the hash */
-		batadv_tt_global_entry_free_ref(tt_global_entry);
+		batadv_tt_global_entry_free_ref(bat_priv,
+						tt_global_entry);
 		goto out_remove;
 	}
+
+	/* increase the refcounter for this new "reference" to the global
+	 * table represented by the global entry. If the increment
+	 * fails remove the entry from the hash
+	 */
+	if (!atomic_inc_not_zero(&bat_priv->tt.global_refcount)) {
+		hlist_del_rcu(&tt_global_entry->common.hash_entry);
+		goto out_remove;
+	}
Looks like there's a spin-lock missing here; we usually need to do such RCU list adds/dels while holding the bucket's spin-lock.
Also the ordering looks a little different from what we usually do: normally we take all the references first and add things to the lists at the very end. Changing the order here should remove the need for the hlist_del_rcu()/spin-lock in the failure path in the first place.
Cheers, Linus