From: Amit Klein
> Sent: 16 June 2021 14:20
>
> On Wed, Jun 16, 2021 at 1:19 PM David Laight <David.Laight@aculab.com> wrote:
> > Can someone explain why this is a good idea for a 'normal' system?
>
> This patch mitigates some techniques that leak internal state due to table hash collisions.
As you say below, it isn't really effective...
> > Why should my desktop system 'waste' 2MB of memory on a massive hash table that I don't need?
>
> In the patch's defense, it only consumes 2MB when the physical RAM is >= 16GB.
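A quick sketch of that sizing rule in plain C - only the 16GB-to-2MB pair comes from this thread; the function name and the small-table fallback are placeholders:

#include <stddef.h>

/* The big (2MB) table is allocated only when physical RAM is >= 16GB.
 * The small-table size is a placeholder - the thread doesn't say what
 * the patch uses below that threshold. */
static size_t hash_table_bytes(unsigned long long ram_bytes)
{
	if (ram_bytes >= 16ULL << 30)	/* >= 16GB of RAM */
		return 2u << 20;	/* 2MB table */
	return 256u << 10;		/* placeholder: 256KB */
}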
Right, but I've 16GB of memory because I run some very large applications that need all the memory they can get (and more).
In any case, for many architectures 8GB is 'entry level'. That includes some of the 'small' ARM CPUs. They are unlikely to have tens of connections, never mind thousands.
> > It might be needed by systems that handle massive numbers of concurrent connections - but that isn't 'most systems'.
> >
> > Surely it would be better to detect when the number of entries is comparable to the table size and then resize the table.
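A minimal sketch of the growth test being suggested, assuming a simple chained table (the struct layout and the 3/4 load-factor threshold are illustrative, not from any patch):

#include <stdbool.h>
#include <stddef.h>

struct entry {
	struct entry *next;
	/* key/value omitted */
};

struct htab {
	struct entry **buckets;
	size_t nbuckets;	/* bucket count, a power of two */
	size_t nentries;	/* live entries */
};

/* Grow (e.g. double nbuckets and rehash) once entries/buckets crosses
 * 3/4, i.e. once the number of entries becomes comparable to the
 * table size. */
static bool htab_needs_resize(const struct htab *t)
{
	return t->nentries * 4 >= t->nbuckets * 3;
}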
> Security-wise, this approach is not effective. The table size was increased to reduce the likelihood of hash collisions. These start happening when you have ~N^(1/2) elements (for table size N), so you'll need to resize pretty quickly anyway.
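To put a number on that birthday bound - a standalone userspace sketch, with a made-up bucket count (build with -lm):

#include <math.h>
#include <stdio.h>

/* Birthday approximation: the chance that at least two of k uniformly
 * hashed entries share one of n buckets is p ~= 1 - exp(-k*(k-1)/(2*n)).
 * At k = sqrt(n) that is already ~39%. */
static double collision_prob(double n, double k)
{
	return 1.0 - exp(-k * (k - 1.0) / (2.0 * n));
}

int main(void)
{
	double n = 1 << 18;	/* 256Ki buckets - illustrative only */
	double k = sqrt(n);	/* 512 entries */

	printf("k = sqrt(n)   (%4.0f entries): p = %.2f\n", k, collision_prob(n, k));
	printf("k = 4*sqrt(n) (%4.0f entries): p = %.4f\n", 4 * k, collision_prob(n, 4 * k));
	return 0;
}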
You really have to live with some hash collisions. Resizing at N^(1/2) entries just doesn't scale. Not only that, the hash function will stop generating non-colliding values once the table starts getting big. (A very expensive hash function might help - but I wouldn't count on it.)

In any case, the chance of a new entry hitting a hash collision is slightly less than (active_entries / table_size) - e.g. about 1.5% for 1,000 active entries in a 2^16-bucket table.
What you want to avoid is the ELF symbol hash situation, where the average chain length is about 1.5 and you have to separately check the hash table of every shared library. That quickly balloons into a lot of string compares. (I looked at that for mozilla many years ago - it was horrid.)
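For reference, this is the classic System V ABI ELF symbol hash in question (the standard DT_HASH function); a lookup that lands on a chain still has to strcmp() each candidate symbol name, and that cost repeats for every shared object searched:

#include <stdio.h>

/* System V ABI ELF symbol hash, as used by the DT_HASH section. */
unsigned long elf_hash(const unsigned char *name)
{
	unsigned long h = 0, g;

	while (*name) {
		h = (h << 4) + *name++;
		g = h & 0xf0000000;
		if (g)
			h ^= g >> 24;
		h &= ~g;
	}
	return h;
}

int main(void)
{
	printf("elf_hash(\"printf\") = 0x%lx\n",
	       elf_hash((const unsigned char *)"printf"));
	return 0;
}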
David