On Wed, Jul 23, 2025 at 04:00:03PM +0300, Leon Romanovsky wrote:
From: Leon Romanovsky <leonro@nvidia.com>
Extract the core P2PDMA provider information (device owner and bus offset) from the dev_pagemap into a dedicated p2pdma_provider structure. This creates a cleaner separation between the memory management layer and the P2PDMA functionality.
The new p2pdma_provider structure contains:
- owner: pointer to the providing device
- bus_offset: computed offset for non-host transactions
This refactoring simplifies the P2PDMA state management by removing the need to access pgmap internals directly. The pci_p2pdma_map_state now stores a pointer to the provider instead of the pgmap, making the API more explicit and easier to understand.
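Roughly, that ends up looking like this (just a sketch to illustrate the shape; only owner/bus_offset come from the description above, the name of the provider field in pci_p2pdma_map_state is my guess):

struct p2pdma_provider {
        struct device   *owner;         /* device providing the MMIO */
        u64             bus_offset;     /* applied for non-host transactions */
};

struct pci_p2pdma_map_state {
        struct p2pdma_provider  *mem;   /* was: struct dev_pagemap *pgmap */
        enum pci_p2pdma_map_type map;   /* cached mapping type */
};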
Based on the conversation, how about this as a commit message:
PCI/P2PDMA: Separate the mmap() support from the core logic
Currently the P2PDMA code requires a pgmap and a struct page to function. These were serving three important purposes:
- DMA API compatibility, where the scatterlist required a struct page as input
- Life cycle management, where the percpu_ref is used to prevent UAF during device hot unplug
- A way to get the P2P provider data through the pci_p2pdma_pagemap
The DMA API now has a new mapping flow with phys_addr_t support, so it no longer needs struct pages to perform P2P mapping.
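For illustration, that means a P2P MMIO slice can be mapped straight from its physical address; dma_map_phys() and DMA_ATTR_MMIO below are assumptions taken from that new DMA API work, not something this patch defines:

/* Sketch: map a slice of a peer's MMIO BAR without any struct page */
static int map_p2p_slice(struct device *dma_dev, phys_addr_t bar_phys,
                         size_t size, dma_addr_t *out)
{
        *out = dma_map_phys(dma_dev, bar_phys, size, DMA_BIDIRECTIONAL,
                            DMA_ATTR_MMIO);
        return dma_mapping_error(dma_dev, *out);
}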
Lifecycle management can be delegated to the user; DMABUF, for instance, has a suitable invalidation protocol that does not require struct page.
Finding the P2P provider data can also be managed by the caller without needing to look it up from the phys_addr.
Split the P2PDMA code into two layers. The optional upper layer provides a way to mmap() P2P memory into a VMA by providing struct page, a pgmap, a genalloc and sysfs.
The lower layer provides the actual P2P infrastructure and is wrapped up in a new struct p2pdma_provider. Rework the mmap layer to use the new p2pdma_provider based APIs.
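For contrast, the upper layer is what today's pci_p2pdma_add_resource()/pci_alloc_p2pmem() users consume, roughly like this (the BAR number and size are just example values, error unwinding omitted):

static int my_setup_p2pmem(struct pci_dev *pdev)
{
        void *buf;
        int rc;

        /* pgmap + genalloc + sysfs behind BAR 4 */
        rc = pci_p2pdma_add_resource(pdev, 4, 0, 0);
        if (rc)
                return rc;

        /* struct page backed memory, usable for mmap()/pin_user_pages() */
        buf = pci_alloc_p2pmem(pdev, SZ_64K);
        if (!buf)
                return -ENOMEM;

        pci_p2pmem_publish(pdev, true);
        return 0;
}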
Drivers that do not want to put P2P memory into VMAs can allocate a struct p2pdma_provider after probe() starts and free it before remove() completes. When DMA mapping, the driver must convey the struct p2pdma_provider to the DMA mapping code along with the phys_addr of the MMIO BAR slice to map. The driver must ensure that no DMA mapping outlives the lifetime of the struct p2pdma_provider.
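Something like this is the intended driver-side shape; p2pdma_provider_alloc()/p2pdma_provider_free() are placeholder names for whatever constructor/destructor the series ends up with, the point is only the ordering against probe()/remove():

static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        struct p2pdma_provider *provider;

        /* After probe() starts: describe the MMIO BAR we will export */
        provider = p2pdma_provider_alloc(&pdev->dev);  /* placeholder name */
        if (IS_ERR(provider))
                return PTR_ERR(provider);

        pci_set_drvdata(pdev, provider);
        return 0;
}

static void my_remove(struct pci_dev *pdev)
{
        struct p2pdma_provider *provider = pci_get_drvdata(pdev);

        /*
         * The driver must have torn down every DMA mapping made against
         * this provider before this point.
         */
        p2pdma_provider_free(provider);                /* placeholder name */
}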
The intended target of this new API layer is DMABUF. There is usually only a single p2pdma_provider for a DMABUF exporter. Most drivers can establish the p2pdma_provider during probe, access the single instance during DMABUF attach and use that to drive the DMA mapping.
DMABUF provides an invalidation mechanism that can guarantee all DMA is halted and the DMA mappings are undone prior to destroying the struct p2pdma_provider. This ensures there is no UAF through DMABUFs that are lingering past driver removal.
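As a sketch of how an importer honours that contract, move_notify is the standard dynamic DMABUF attachment hook; the importer struct, stop_device_dma() and the dma_unmap_phys() call are assumptions for illustration only:

struct my_importer {
        struct device *dev;
        dma_addr_t dma_addr;
        size_t size;
};

static void my_move_notify(struct dma_buf_attachment *attach)
{
        struct my_importer *imp = attach->importer_priv;

        /* Exporter is revoking the mapping, e.g. ahead of driver removal */
        stop_device_dma(imp);                          /* assumed helper */
        dma_unmap_phys(imp->dev, imp->dma_addr, imp->size,
                       DMA_BIDIRECTIONAL, DMA_ATTR_MMIO);
        imp->dma_addr = 0;
}

static const struct dma_buf_attach_ops my_attach_ops = {
        .allow_peer2peer = true,
        .move_notify = my_move_notify,
};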
The new p2pdma_provider layer cannot be used to create P2P memory that can be mapped into VMAs, used with pin_user_pages(), O_DIRECT, and so on. These use cases must still use the mmap() layer. The p2pdma_provider layer is principally for DMABUF-like use cases where DMABUF natively manages the life cycle and access instead of VMAs/pin_user_pages()/struct page.
Jason