On Sun, Nov 29, 2009 at 09:09:50PM +0100, Simon Wunderlich wrote:
> I did some testing, including loading, unloading, killing individual nodes etc., which seems to be clean so far. However, there might be more race conditions introduced by this large patch, and I'm therefore requesting a careful review before committing.
Hi Simon
I've not done a careful review yet, just a quick look.
Two ideas for improvement:
In the purge code, add some debug code which looks at the refcount value. If it is not between 1 and 5, printk() a warning that there is probably a missing refcount operation. Since the purge code does not run very often, it should not add much overhead, yet it is still a useful debug tool.
The following bit of code happens quite a lot:
while (NULL != (orig_node = orig_hash_iterate(&hashit, orig_node))) {
There is also a comment about having to free hashit if you don't iterate to the end of the hash. How about refactoring this to be more like the Linux list.h?
Make hashit a stack variable, with an init macro:
#define HASHIT(name) struct hash_it_t name = { \
		.index = -1, \
		.bucket = NULL, \
		.prev_bucket = NULL, \
		.first_bucket = NULL }
and a macro for iterating over the hash
HASHIT(hashit);
orig_hash_for_each(orig_node, hashit) {
	foo(orig_node);
}
Andrew