On 12.11.2024 19:31, Sabrina Dubroca wrote:
2024-11-10, 15:38:27 +0200, Sergey Ryazanov wrote:
On 29.10.2024 12:47, Antonio Quartulli wrote:
An ovpn_peer object holds the whole status of a remote peer (regardless of whether it is a server or a client).
This includes status for crypto, tx/rx buffers, napi, etc.
Only support for one peer is introduced (P2P mode). Multi-peer support is introduced in a later patch.
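For context, the object described above has roughly this shape (an illustrative sketch only, not the exact layout from the patch; the field names are guesses based on the commit message):

struct ovpn_peer {
	struct ovpn_struct *ovpn;	/* parent ovpn instance */
	u32 id;				/* unique peer identifier */
	struct ovpn_crypto_state crypto; /* per-peer cipher state */
	struct napi_struct napi;	/* RX NAPI context */
	struct ptr_ring tx_ring;	/* TX packet queue */
	struct ptr_ring rx_ring;	/* RX packet queue */
	struct kref refcount;
};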
Reviewing the peer creation/destruction code, I came to a general question. Did you consider keeping the single P2P peer in the peers table as well?
It looks like such an approach could greatly simplify the code by dropping all these 'switch (ovpn->mode)' checks and implementing unified peer management. The 'peer' field in the main private data structure can be kept to accelerate lookups, while still using the peers table for management tasks like removing all peers on interface teardown.
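Something along these lines (a rough sketch; ovpn_peer_table_lookup() is a hypothetical helper, and refcounting is omitted for brevity):

/* Unified lookup for both P2P and MP modes: the P2P instance keeps
 * its only peer in the table too, and ovpn->peer merely short-cuts
 * the lookup. Caller is assumed to hold the RCU read lock.
 */
static struct ovpn_peer *ovpn_peer_get(struct ovpn_struct *ovpn, u32 peer_id)
{
	struct ovpn_peer *peer;

	/* fast path: a P2P instance caches its single peer */
	peer = rcu_dereference(ovpn->peer);
	if (peer && peer->id == peer_id)
		return peer;

	/* slow path: the same hashtable walk for P2P and MP */
	return ovpn_peer_table_lookup(ovpn, peer_id);
}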
It would save a few 'switch (ovpn->mode)' checks, but force every client to allocate the hashtable for no reason at all. That tradeoff doesn't look very beneficial to me; the P2P-specific code is really simple. And if you keep ovpn->peer to make lookups faster, you're not removing that many 'switch (ovpn->mode)' checks anyway.
Looking back at the review, I can conclude in retrospect that I personally just do not like short 'switch' statements and special-case handlers :)
Seriously, this module has the highest density of switch statements per KLOC I have seen so far, and a major part of them is dedicated to handling the special case of a P2P connection. Together this looks unusual enough to feel like a flaw in the design. I racked my brains to come up with a better solution and failed. So I took a different approach: inviting people to discuss individual pieces of the code, either to find a solution collectively or to conclude that there is no better solution for now.
The problem is that all these hash tables become inefficient with a single entry (the P2P case). I was thinking about allocating a table with a single bin, but a lookup still has to run the hash function before it can index that one bin.
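To illustrate (a minimal sketch using linux/hashtable.h; the macros do not even allow going below two bins):

#include <linux/hashtable.h>

struct ovpn_peer {
	u32 id;
	struct hlist_node hnode;
};

/* bits must be >= 1, so two bins is the practical minimum */
static DEFINE_HASHTABLE(peer_table, 1);

static struct ovpn_peer *peer_lookup(u32 peer_id)
{
	struct ovpn_peer *peer;

	/* hash_min() runs on every lookup, even when the table only
	 * ever holds the single P2P peer */
	hash_for_each_possible(peer_table, peer, hnode, peer_id)
		if (peer->id == peer_id)
			return peer;

	return NULL;
}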
And back to the hashtable size for MP mode. An 8k-bin table looks like a good choice for a normal server with a 1-2Gb uplink serving up to 1k connections. But it is still unclear how this choice affects installations with a much larger number of connections. And is this module meant to be usable in embedded setups? E.g. running a couple of VPN servers on a home router with a handful of actual connections makes an 8k-bin table look like a waste of RAM. I was about to suggest rhashtable due to its dynamic resizing, but the module needs three tables. Any better ideas?
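For reference, here is roughly what one of the three indexes would look like with rhashtable (a sketch only; a separate rhashtable instance per index would still be needed):

#include <linux/rhashtable.h>

struct ovpn_peer {
	u32 id;
	struct rhash_head hnode;
};

/* the table starts small and grows/shrinks with the peer count,
 * which would suit both a loaded server and a home router */
static const struct rhashtable_params ovpn_peer_id_params = {
	.key_len	= sizeof_field(struct ovpn_peer, id),
	.key_offset	= offsetof(struct ovpn_peer, id),
	.head_offset	= offsetof(struct ovpn_peer, hnode),
	.automatic_shrinking = true,
};

/* usage: rhashtable_init(&peers_by_id, &ovpn_peer_id_params);
 *        rhashtable_lookup_fast(&peers_by_id, &peer_id,
 *                               ovpn_peer_id_params);
 */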
-- Sergey