On Mon, Nov 26, 2012 at 10:40:07AM +0100, Sven Eckelmann wrote:
On Monday 26 November 2012 10:33:09 Antonio Quartulli wrote:
On Sun, Nov 25, 2012 at 07:23:27PM +0100, Sven Eckelmann wrote:
An unoptimized version of the Jenkins one-at-a-time hash function is copied all over the code wherever a hash table is used. Instead, the optimized version shared across the whole kernel should be used, to reduce code duplication and keep bugs in a single place.
Only the TT and DAT code must keep the old implementation, to guarantee the same distribution of elements across the hash buckets. The TT code needs it because the CRC exchanged between mesh nodes is computed over the entries in the hash table.
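For reference, a minimal sketch of the change being described (the function names here are illustrative, not the actual batman-adv symbols):

#include <linux/types.h>
#include <linux/jhash.h>
#include <linux/if_ether.h>

/* the Jenkins one-at-a-time hash, duplicated today by every hash
 * table user in batman-adv */
static u32 one_at_a_time(const u8 *key, size_t len, u32 size)
{
	u32 hash = 0;
	size_t i;

	for (i = 0; i < len; i++) {
		hash += key[i];
		hash += (hash << 10);
		hash ^= (hash >> 6);
	}

	hash += (hash << 3);
	hash ^= (hash >> 11);
	hash += (hash << 15);

	return hash % size;
}

/* the proposed replacement: one shared, optimized in-kernel
 * implementation instead of a per-user copy */
static u32 choose_jhash(const u8 *addr, u32 size)
{
	return jhash(addr, ETH_ALEN, 0) % size;
}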
Hi Sven,
I don't fully get why we can't use this new implementation in TT. What's wrong with the CRC computation?
The in-kernel implementation will create a different hash sum -> TT entries will end up in a different bucket -> the CRC will be different (please correct me on the last step... I just had this problem in the back of my head).
The CRC computation does not rely on entry positions: the real CRC16 is computed over the client MAC address only (and that is the same everywhere), and the results are then XOR'd together. Since XOR is commutative, we do not need to keep the same order network-wide.
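To make the order independence concrete, here is a sketch of the scheme described above (names are illustrative): a CRC16 over each client MAC, folded together with XOR, so the bucket and iteration order cannot change the result:

#include <linux/types.h>
#include <linux/crc16.h>
#include <linux/if_ether.h>

/* XOR-fold of per-client CRC16s: since a ^ b == b ^ a, any two nodes
 * holding the same set of clients compute the same checksum, no
 * matter how their hash tables distribute the entries */
static u16 tt_crc_sketch(const u8 (*clients)[ETH_ALEN], size_t num)
{
	u16 total = 0;
	size_t i;

	for (i = 0; i < num; i++)
		total ^= crc16(0, clients[i], ETH_ALEN);

	return total;
}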
For DAT, instead, your reasoning is correct, but only for the global DAT hash function. The local one can be whatever we need, so we can use jhash there as well.
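A sketch of the resulting split for DAT (hypothetical names; DAT_ADDR_MAX stands in for the real DHT address-space bound, and one_at_a_time() is the legacy hash from the earlier sketch):

#include <linux/types.h>
#include <linux/jhash.h>

#define DAT_ADDR_MAX 0xffff	/* hypothetical DHT address-space bound */

/* legacy one-at-a-time hash, body as in the earlier sketch; its
 * output must stay byte-identical on every node */
u32 one_at_a_time(const u8 *key, size_t len, u32 size);

/* global key: selects WHICH nodes store an entry in the DHT, so all
 * nodes must agree -> keep the old implementation */
static u32 dat_global_key(__be32 ip)
{
	return one_at_a_time((const u8 *)&ip, sizeof(ip), DAT_ADDR_MAX);
}

/* local bucket: only used inside one node's own table, so the shared
 * jhash is fine here */
static u32 dat_local_bucket(__be32 ip, u32 table_size)
{
	return jhash(&ip, sizeof(ip), 0) % table_size;
}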
Cheers,