On 04/23, Cosmin Ratiu wrote:
> Drivers that are told to allocate RX buffers from pools of DMA memory
> should have enough memory in the pool to satisfy projected allocation
> requests (a function of ring size, MTU & other parameters). If there's
> not enough memory, RX ring refill might fail later at inconvenient
> times (e.g. during NAPI poll).
>
> This commit adds a check at dmabuf pool init time that compares the
> amount of memory in the underlying chunk pool (configured by the user
> space application providing dmabuf memory) with the desired pool size
> (previously set by the driver) and fails with an error message if
> chunk memory isn't enough.
> Fixes: 0f9214046893 ("memory-provider: dmabuf devmem memory provider")
> Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com>
> ---
>  net/core/devmem.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> diff --git a/net/core/devmem.c b/net/core/devmem.c
> index 6e27a47d0493..651cd55ebb28 100644
> --- a/net/core/devmem.c
> +++ b/net/core/devmem.c
> @@ -299,6 +299,7 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
>  int mp_dmabuf_devmem_init(struct page_pool *pool)
>  {
>  	struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
> +	size_t size;
>
>  	if (!binding)
>  		return -EINVAL;
> @@ -312,6 +313,16 @@ int mp_dmabuf_devmem_init(struct page_pool *pool)
>  	if (pool->p.order != 0)
>  		return -E2BIG;
>
> +	/* Validate that the underlying dmabuf has enough memory to satisfy
> +	 * requested pool size.
> +	 */
I think it's useful to have a check, but note that this check is in no way a guarantee that the genpool has enough capacity. We can use the same binding on multiple queues... Can you expand the comment a bit to explain that it's more of a sanity check than a guarantee?
> +	size = gen_pool_size(binding->chunk_pool) >> PAGE_SHIFT;
> +	if (size < pool->p.pool_size) {
> +		pr_warn("%s: Insufficient dmabuf memory (%zu pages) to satisfy pool_size (%u pages)\n",
Let's print the sizes in bytes? We might have order>0 pages soon in the pp: https://lore.kernel.org/netdev/20250421222827.283737-1-kuba@kernel.org/T/#t