On 12/18/23 02:40, Mina Almasry wrote:
Implement a memory provider that allocates dmabuf devmem in the form of net_iov.
The provider receives a reference to the struct netdev_dmabuf_binding via the pool->mp_priv pointer. The driver needs to set this pointer for the provider in the net_iov.
The provider obtains a reference on the netdev_dmabuf_binding, which guarantees the binding and the underlying mapping remain alive until the provider is destroyed.
Usage of PP_FLAG_DMA_MAP is required for this memory provider such that the page_pool can provide the driver with the dma-addrs of the devmem.
Support for PP_FLAG_DMA_SYNC_DEV is omitted for simplicity & p.order != 0.
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Kaiyuan Zhang <kaiyuanz@google.com>
Signed-off-by: Mina Almasry <almasrymina@google.com>
...
+static bool mp_dmabuf_devmem_release_page(struct page_pool *pool,
+					  struct netmem *netmem)
+{
+	WARN_ON_ONCE(!netmem_is_net_iov(netmem));
+	page_pool_clear_pp_info(netmem);
+	netdev_free_dmabuf(netmem_to_net_iov(netmem));
+
+	/* We don't want the page pool put_page()ing our net_iovs. */
+	return false;
+}
+
+const struct memory_provider_ops dmabuf_devmem_ops = {
+	.init			= mp_dmabuf_devmem_init,
+	.destroy		= mp_dmabuf_devmem_destroy,
+	.alloc_pages		= mp_dmabuf_devmem_alloc_pages,
+	.release_page		= mp_dmabuf_devmem_release_page,
+};
+
+EXPORT_SYMBOL(dmabuf_devmem_ops);
It might make sense to move all these functions, together with the new code from core/dev.c, into a new file.