On 8/12/24 19:55, Mina Almasry wrote:
On Mon, Aug 12, 2024 at 1:57 PM Jakub Kicinski <kuba@kernel.org> wrote:
On Sun, 11 Aug 2024 22:51:13 +0100 Pavel Begunkov wrote:
I think we're talking about 2 slightly different flags, AFAIU.

Pavel and I are suggesting the driver reports "I support memory providers" directly to core (via the queue API or what not), and we check that flag directly in netdev_rx_queue_restart(), and fail immediately if the support is not there.
I might've misread Jakub, but yes, I believe it's different. It'd communicate about support for providers to upper layers, so we can fail even before attempting to allocate a new queue and init a page pool.
Got it. Since allocating memory happens before stopping traffic I think it's acceptable to stick to a single flag.
Jakub is suggesting a page_pool_params flag which lets the driver report "I support memory providers". If the driver doesn't support it but core is trying to configure a provider, then page_pool_create() will fail, which will cause the queue API operation (ndo_queue_alloc_mem) to fail, which in turn causes netdev_rx_queue_restart() to fail.
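That failure chain can be modelled in a few lines of userspace C. All names below are stand-ins for the kernel APIs discussed in this thread (page_pool_create(), the queue-API alloc op, netdev_rx_queue_restart()), not the real signatures:

```c
#include <assert.h>
#include <errno.h>

/* Toy model of the failure chain: configuration state the queue
 * carries, and whether the driver opted into memory providers. */
static int driver_supports_providers;
static int queue_has_mp_params;

/* page_pool_create() fails when core asks for a provider the driver
 * did not opt into. */
static int page_pool_create_stub(void)
{
	if (queue_has_mp_params && !driver_supports_providers)
		return -EOPNOTSUPP;
	return 0;
}

/* The driver's queue-API alloc hook just propagates the pool error... */
static int ndo_queue_alloc_mem_stub(void)
{
	return page_pool_create_stub();
}

/* ...and netdev_rx_queue_restart() bails out before stopping traffic,
 * which is why a single flag checked at pool-create time is enough. */
static int netdev_rx_queue_restart_stub(void)
{
	int err = ndo_queue_alloc_mem_stub();

	if (err)
		return err;
	/* would stop the old queue and start the new one here */
	return 0;
}
```

The key property is that memory allocation happens before the old queue is stopped, so the error surfaces to the caller without any traffic disruption.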
And I'm not against this way either if we explicitly get an error back instead of trying to figure it out post-factum like by checking the references and possibly reverting the allocation. Maybe that's where I was confused, and that refcount thing was suggested as a WARN_ONCE?
Yup, the refcount (now: check of the page pool list) was meant as a WARN_ONCE() to catch bad drivers.
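A rough userspace sketch of that debug check (helper names here are hypothetical; the real check would walk the queue's page pools after the restart and WARN_ONCE() if none is backed by the configured provider):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static bool warned;

/* Userspace stand-in for WARN_ONCE(): log the first time, return cond. */
static bool warn_once_stub(bool cond, const char *msg)
{
	if (cond && !warned) {
		warned = true;
		fprintf(stderr, "WARNING: %s\n", msg);
	}
	return cond;
}

/* After a successful restart, a queue configured with a memory provider
 * should have at least one page pool actually backed by it; if not, the
 * driver silently ignored mp_params -- a driver bug, not a user error,
 * hence a warning rather than an error return. */
static bool restarted_queue_ok(bool queue_has_mp, bool pool_uses_mp)
{
	return !warn_once_stub(queue_has_mp && !pool_uses_mp,
			       "driver ignored queue mp_params");
}
```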
FWIW, I think it warrants two flags. The first saying that the driver supports providers at all:
page_pool_init()
{
	if (rxq->mp_params)
		if (!(flags & PP_PROVIDERS_SUPPORTED))
			goto fail;
}
And the second telling whether the driver wants to install providers for this particular page pool, so if there is a separate pool for headers we can set it with plain old kernel pages.
The implementation of the queue API should be resilient against failures in alloc, and not being MP capable is just a form of alloc failure. I don't see the upside of the double flag.
payload_pool = page_pool_create(rxq, PP_PROVIDERS_SUPPORTED);
header_pool = page_pool_create(rxq, PP_PROVIDERS_SUPPORTED |
                                    PP_IGNORE_PROVIDERS);
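Modelled in userspace, the two-flag init check would look roughly like the sketch below. The flag and structure names follow the hypothetical ones proposed in this thread, not actual kernel symbols:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

#define PP_PROVIDERS_SUPPORTED	(1 << 0)  /* driver can drive providers */
#define PP_IGNORE_PROVIDERS	(1 << 1)  /* this pool opts out (headers) */

struct mp_params_stub { int token; };

/* Two decisions in one place: reject early if the driver can't do
 * providers at all, and skip provider installation for pools that
 * explicitly opt out, falling back to plain kernel pages. */
static int page_pool_init_stub(unsigned int flags,
			       const struct mp_params_stub *mp_params,
			       bool *install_provider)
{
	*install_provider = false;

	if (!mp_params)
		return 0;		/* no provider configured on the queue */
	if (!(flags & PP_PROVIDERS_SUPPORTED))
		return -EOPNOTSUPP;	/* driver never opted in: fail early */
	if (flags & PP_IGNORE_PROVIDERS)
		return 0;		/* header pool: plain kernel pages */

	*install_provider = true;	/* payload pool: install the provider */
	return 0;
}
```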
Also don't see the upside of the explicit "non-capable" flag, but I haven't thought it through. Is there any use?
There are 2 things we're trying to accomplish:
- Drivers need to be able to say "I support unreadable netmem".
  Failure to report unreadable netmem support should cause the netlink
  API to fail when the user tries to bind dmabuf/io_uring memory.
- Drivers need to be able to say "I want a header pool (with readable
  netmem)" or "I want a data pool (potentially with unreadable netmem)".
Pavel is suggesting implementing both of these in 2 different flags.
Jakub is suggesting implementing both with 1 flag which says "I can support unreadable netmem for this pool", and guarding against #1 with a refcount check to detect if a dmabuf pool should have been created but wasn't.
That would be iffy IIUC, but I think Jakub just explicitly said that the refcount trick was just for debugging purposes and not for gauging errors like "providers are not supported by the driver".
"Yup, the refcount (now: check of the page pool list) was meant as a WARN_ONCE() to catch bad drivers."
I prefer Jakub's suggestion, but beware that if we go with Jakub's suggestion, we can't WARN_ON when the core-net check fails, because it's not a kernel warning. All that's happening is that the user asked for dmabuf binding but the driver didn't ask for it (because likely it doesn't support it). That's not cause for a warning to be printed in the kernel. The netlink API should just fail and return -EOPNOTSUPP or something.
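A minimal sketch of that netlink-side behaviour (the function name and capability lookup are hypothetical; the real code would query the driver's advertised support however core ends up recording it):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Binding dmabuf memory to a queue of a driver that never advertised
 * unreadable-netmem support is a user error, not a kernel bug: no
 * WARN_ON(), just an error code back through the netlink API. */
static int netdev_nl_bind_dmabuf_stub(bool driver_supports_unreadable_netmem)
{
	if (!driver_supports_unreadable_netmem)
		return -EOPNOTSUPP;
	/* would create the binding and restart the queue here */
	return 0;
}
```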