Drivers that allocate RX buffers from pools of DMA memory need enough memory in the pool to satisfy projected allocation requests (a function of ring size, MTU and other parameters). If there isn't enough memory, RX ring refill can fail later at inconvenient times (e.g. during NAPI poll).
This commit adds a check at dmabuf pool init time that compares the amount of memory in the underlying chunk pool (configured by the user space application providing the dmabuf memory) with the requested pool size (previously set by the driver) and fails with an error message if the chunk memory is insufficient.
Fixes: 0f9214046893 ("memory-provider: dmabuf devmem memory provider")
Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com>
---
 net/core/devmem.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/net/core/devmem.c b/net/core/devmem.c
index 6e27a47d0493..651cd55ebb28 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -299,6 +299,7 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 int mp_dmabuf_devmem_init(struct page_pool *pool)
 {
 	struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
+	size_t size;
 
 	if (!binding)
 		return -EINVAL;
@@ -312,6 +313,16 @@ int mp_dmabuf_devmem_init(struct page_pool *pool)
 	if (pool->p.order != 0)
 		return -E2BIG;
 
+	/* Validate that the underlying dmabuf has enough memory to satisfy
+	 * requested pool size.
+	 */
+	size = gen_pool_size(binding->chunk_pool) >> PAGE_SHIFT;
+	if (size < pool->p.pool_size) {
+		pr_warn("%s: Insufficient dmabuf memory (%zu pages) to satisfy pool_size (%u pages)\n",
+			__func__, size, pool->p.pool_size);
+		return -ENOMEM;
+	}
+
 	net_devmem_dmabuf_binding_get(binding);
 	return 0;
 }