It is hard to understand why a reference counter is increased when this is not done near the place that actually uses the new reference. Calling kref_get right before the code which requires the reference, and in the same function, helps to avoid accidental problems caused by incorrect reference counting.
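As a minimal sketch of the intended pattern (hypothetical struct foo and
foo_create(), not the batman-adv code): the reference created by kref_init
belongs to the caller, and the extra reference is taken directly before the
call that consumes it:

	#include <linux/kref.h>
	#include <linux/slab.h>

	struct foo {
		struct kref refcount;
	};

	static struct foo *foo_create(void)
	{
		struct foo *f = kzalloc(sizeof(*f), GFP_ATOMIC);

		if (!f)
			return NULL;

		/* kref_init() starts the counter at 1; this reference
		 * is the one returned to the caller.
		 */
		kref_init(&f->refcount);

		/* The reference for the hash is taken directly before
		 * the call that consumes it, not bundled with the
		 * kref_init() above.
		 */
		kref_get(&f->refcount);
		/* foo_hash_add(f); (hypothetical consumer of the ref) */

		return f;
	}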
Signed-off-by: Sven Eckelmann <sven@narfation.org>
---
v2:
 - split patch based on type
 net/batman-adv/bridge_loop_avoidance.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
index 21bc3d0..0035b8f 100644
--- a/net/batman-adv/bridge_loop_avoidance.c
+++ b/net/batman-adv/bridge_loop_avoidance.c
@@ -519,11 +519,9 @@ batadv_bla_get_backbone_gw(struct batadv_priv *bat_priv, u8 *orig,
 	atomic_set(&entry->wait_periods, 0);
 	ether_addr_copy(entry->orig, orig);
 	INIT_WORK(&entry->report_work, batadv_bla_loopdetect_report);
-
-	/* one for the hash, one for returning */
 	kref_init(&entry->refcount);
-	kref_get(&entry->refcount);
 
+	kref_get(&entry->refcount);
 	hash_added = batadv_hash_add(bat_priv->bla.backbone_hash,
 				     batadv_compare_backbone_gw,
 				     batadv_choose_backbone_gw, entry,