On 2023/11/9 10:22, Mina Almasry wrote:
On Tue, Nov 7, 2023 at 7:40 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
On 2023/11/8 5:59, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 11:46 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
+void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding)
+{
+	size_t size, avail;
+
+	gen_pool_for_each_chunk(binding->chunk_pool,
+				netdev_devmem_free_chunk_owner, NULL);
+
+	size = gen_pool_size(binding->chunk_pool);
+	avail = gen_pool_avail(binding->chunk_pool);
+
+	if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
+		  size, avail))
+		gen_pool_destroy(binding->chunk_pool);
Is there any other place calling gen_pool_destroy() when the above warning is triggered? Do we have a leak of binding->chunk_pool in that case?
gen_pool_destroy() BUG_ON()s if the pool is not empty at the time of destruction. Technically that should never happen, because __netdev_devmem_binding_free() should only be called when the refcount hits 0, so all the chunks have been freed back to the gen_pool. But, just in case, I don't want to crash the server just because I'm leaking a chunk... this is a bit of defensive programming that is typically frowned upon, but the behavior of gen_pool is so severe that I think the WARN() + check is warranted here.
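Roughly, the put path looks like the below (a simplified sketch only, assuming a refcount_t ref member in the binding and a put helper named along the lines of the series; the exact code may differ):

static inline void
netdev_devmem_binding_put(struct netdev_dmabuf_binding *binding)
{
	/* Sketch: the free routine runs only when the last reference is
	 * dropped, i.e. only after every chunk has been returned to the
	 * gen_pool by the iov free path.
	 */
	if (refcount_dec_and_test(&binding->ref))
		__netdev_devmem_binding_free(binding);
}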
It seems it is pretty normal for the above to happen nowadays because of the retransmit timeouts and NAPI defer schemes mentioned below:
https://lkml.kernel.org/netdev/168269854650.2191653.8465259808498269815.stgi...
And currently the page pool core handles that by using a workqueue.
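For reference, the pattern there is roughly the below (heavily simplified from net/core/page_pool.c, not the exact code): when pages are still inflight at destroy time, the release is retried from a delayed work item instead of freeing synchronously.

/* Heavily simplified sketch of the page_pool deferred-destroy pattern:
 * page_pool_release() returns the number of inflight pages; as long as
 * some remain (retransmit queues, deferred skb freeing, ...), the
 * destroy is retried later from a delayed work item.
 */
static void page_pool_release_retry(struct work_struct *wq)
{
	struct delayed_work *dwq = to_delayed_work(wq);
	struct page_pool *pool = container_of(dwq, typeof(*pool), release_dw);

	if (page_pool_release(pool))	/* pages still inflight? */
		schedule_delayed_work(&pool->release_dw, DEFER_TIME);
}

void page_pool_destroy(struct page_pool *pool)
{
	if (!page_pool_release(pool))
		return;			/* nothing inflight, freed right away */

	INIT_DELAYED_WORK(&pool->release_dw, page_pool_release_retry);
	schedule_delayed_work(&pool->release_dw, DEFER_TIME);
}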
Forgive me but I'm not understanding the concern here.
__netdev_devmem_binding_free() is called when binding->ref hits 0.
binding->ref is incremented when an iov slice of the dma-buf is allocated, and decremented when an iov is freed. So, __netdev_devmem_binding_free() can't really be called unless all the iovs have been freed, and gen_pool_size() == gen_pool_avail(), regardless of what's happening on the page_pool side of things, right?
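Put differently, the pairing is roughly the below (illustrative pseudocode only; devmem_iov_alloc/devmem_iov_free are placeholder names, not the exact functions in the series, and the put helper is the one sketched above):

static unsigned long devmem_iov_alloc(struct netdev_dmabuf_binding *binding,
				      size_t len)
{
	unsigned long addr = gen_pool_alloc(binding->chunk_pool, len);

	if (addr)
		refcount_inc(&binding->ref);	/* one reference per live iov */
	return addr;
}

static void devmem_iov_free(struct netdev_dmabuf_binding *binding,
			    unsigned long addr, size_t len)
{
	/* Return the chunk first, then drop the reference: by the time the
	 * last put triggers __netdev_devmem_binding_free(),
	 * gen_pool_size() == gen_pool_avail() must hold.
	 */
	gen_pool_free(binding->chunk_pool, addr, len);
	netdev_devmem_binding_put(binding);
}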
It seems I misunderstood it. In that case, it seems to be defensive programming like the other checks.
Looking at it more closely, it seems napi_frag_unref() calls page_pool_page_put_many() directly, which means devmem seems to be bypassing the napi_safe optimization.
Can napi_frag_unref() reuse napi_pp_put_page(), so that it also benefits from the napi_safe optimization?
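For example, something like the below, based on what the mainline napi_frag_unref() does today (only a sketch; whether napi_pp_put_page() can be taught to recognise a devmem page_pool_iov is an assumption on my side):

/* Sketch based on the mainline napi_frag_unref(): if the devmem iov
 * release also went through napi_pp_put_page(), it would keep the
 * napi_safe direct-recycling check instead of dropping the reference
 * via page_pool_page_put_many() directly.
 */
static void napi_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe)
{
	struct page *page = skb_frag_page(frag);

#ifdef CONFIG_PAGE_POOL
	/* Would need to cover both normal pp pages and (assumed) devmem
	 * iovs inside napi_pp_put_page().
	 */
	if (recycle && napi_pp_put_page(page, napi_safe))
		return;
#endif
	put_page(page);
}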