On Wed, Jun 16, 2021 at 10:19:15AM +0000, David Laight wrote:
> From: Amit Klein
> > Sent: 16 June 2021 10:17
> > ...
> > -#define IP_IDENTS_SZ 2048u
> > +/* Hash tables of size 2048..262144 depending on RAM size.
> > + * Each bucket uses 8 bytes.
> > + */
> > +static u32 ip_idents_mask __read_mostly;
> > ...
> > +	/* For modern hosts, this will use 2 MB of memory */
> > +	idents_hash = alloc_large_system_hash("IP idents",
> > +					      sizeof(*ip_idents) + sizeof(*ip_tstamps),
> > +					      0,
> > +					      16, /* one bucket per 64 KB */
> > +					      HASH_ZERO,
> > +					      NULL,
> > +					      &ip_idents_mask,
> > +					      2048,
> > +					      256*1024);
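
For readers following along: the quoted hunks stop before the lookup side. Below is a sketch of how a mask handed back by alloc_large_system_hash() would typically replace the old compile-time modulo. The variable names follow the patch, but the lookup hunk itself is not quoted above, and ip_idents_lookup() is a hypothetical helper for illustration only.

/* Sketch of the consumer side, assuming the patch also converts the
 * old 'hash % IP_IDENTS_SZ' indexing; the exact hunk is not shown above.
 */
static u32 ip_idents_mask __read_mostly; /* set by alloc_large_system_hash() */
static atomic_t *ip_idents __read_mostly;
static u32 *ip_tstamps __read_mostly;

static void ip_idents_lookup(u32 hash, atomic_t **p_id, u32 **p_tstamp)
{
	/* the table size is a power of two, so '& mask' replaces '% size' */
	u32 bucket = hash & ip_idents_mask;

	*p_id = ip_idents + bucket;
	*p_tstamp = ip_tstamps + bucket;
}
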
>
> Can someone explain why this is a good idea for a 'normal' system?
>
> Why should my desktop system 'waste' 2MB of memory on a massive hash
> table that I don't need? It might be needed by systems that handle
> massive numbers of concurrent connections - but that isn't 'most
> systems'.
>
> Surely it would be better to detect when the number of entries is
> comparable to the table size and then resize the table.

Patches always gladly accepted.
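
For the record, the arithmetic behind the figures above: with scale 16 ("one bucket per 64 KB") the 256K-bucket high limit is only reached at about 16 GB of RAM, and each bucket costs sizeof(atomic_t) + sizeof(u32) = 8 bytes, so smaller machines get a proportionally smaller table down to the 2048-bucket floor. A back-of-envelope userspace sketch of that sizing, taking the "one bucket per 64 KB" comment at face value and ignoring the power-of-two rounding alloc_large_system_hash() applies internally:

/* Back-of-envelope check of the table sizing, userspace only.
 * Assumes scale 16 really means one bucket per 64 KB of RAM.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long ram;

	for (ram = 1ULL << 30; ram <= 32ULL << 30; ram <<= 1) {
		unsigned long long buckets = ram >> 16; /* one per 64 KB */

		if (buckets < 2048)
			buckets = 2048;         /* low limit */
		if (buckets > 256 * 1024)
			buckets = 256 * 1024;   /* high limit */

		/* each bucket: sizeof(atomic_t) + sizeof(u32) == 8 bytes */
		printf("%2llu GB RAM -> %6llu buckets, %4llu KB table\n",
		       ram >> 30, buckets, buckets * 8 >> 10);
	}
	return 0;
}

Under those assumptions, a 4 GB desktop ends up with 65536 buckets and a 512 KB table rather than the full 2 MB, which only appears at 16 GB and above.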