The current implementation of hash_resize calls hash_add directly to populate the new hash table. But hash_add has two error cases: the data already exists, or malloc fails.
The duplicate check is not really harmful (besides increasing the time it takes to re-add each element), but the malloc can fail. This malloc is unnecessary: it costs extra time and is a potential source of errors. Instead, the bucket from the old hash table can be re-used, since the element already has an allocated node that only needs to be relinked into the new slot array.
Applied in revision 6bbde09.
Thanks! Simon