On Wed, Nov 19, 2025 at 10:18:08AM +0100, Christian König wrote:
On 11/11/25 10:57, Leon Romanovsky wrote:
From: Jason Gunthorpe <jgg@nvidia.com>
Reflect latest changes in p2p implementation to support DMABUF lifecycle.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
 Documentation/driver-api/pci/p2pdma.rst | 95 +++++++++++++++++++++++++--------
 1 file changed, 72 insertions(+), 23 deletions(-)
<...>
These MMIO pages have no struct page, and
Well please drop "pages" here. Just say MMIO addresses.
+if used with mmap() must create special PTEs. As such there are very few
+kernel uAPIs that can accept pointers to them; in particular they cannot be used
+with read()/write(), including O_DIRECT.
<...>
+DMABUF provides an alternative to the above struct page-based
+client/provider/orchestrator system. In this mode the exporting driver will wrap
+some of its MMIO in a DMABUF and give the DMABUF FD to userspace.
+Userspace can then pass the FD to an importing driver which will ask the
+exporting driver to map it.
"to map it to the importer".
No problem, changed.
Regards, Christian.
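For readers following the thread, the importer side of the flow described above can be sketched roughly as below. This is illustrative only, not part of the patch: it uses the existing in-kernel DMA-BUF API (dma_buf_get(), dma_buf_dynamic_attach(), dma_buf_map_attachment()), with error paths and locking details trimmed.

```c
/* Illustrative importer-side sketch (NOT from the patch): map a DMABUF
 * FD received from userspace.  Error handling and teardown trimmed. */

static void importer_move_notify(struct dma_buf_attachment *attach)
{
	/* Exporter is revoking/moving: stop DMA and drop the mapping. */
}

static const struct dma_buf_attach_ops importer_attach_ops = {
	/* Required for P2P: the importer must cope with MMIO that has
	 * no struct page backing. */
	.allow_peer2peer = true,
	.move_notify = importer_move_notify,
};

static struct sg_table *importer_map(struct device *dev, int fd)
{
	struct dma_buf *dmabuf = dma_buf_get(fd);
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	/* Dynamic attachment, so the exporter can revoke via move_notify() */
	attach = dma_buf_dynamic_attach(dmabuf, dev, &importer_attach_ops,
					NULL);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return ERR_CAST(attach);
	}

	/* The exporter determines the P2P mapping type and hands back
	 * dma_addr_t's usable by this device. */
	dma_resv_lock(dmabuf->resv, NULL);
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	dma_resv_unlock(dmabuf->resv);
	return sgt;
}
```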
+In this case the initiator and target pci_devices are known and the P2P subsystem
+is used to determine the mapping type. The phys_addr_t-based DMA API is used to
+establish the dma_addr_t.
+Lifecycle is controlled by DMABUF move_notify(). When the exporting driver wants
+to remove() it must deliver an invalidation shutdown to all DMABUF importing
+drivers through move_notify() and synchronously DMA unmap all the MMIO.
+No importing driver can continue to have a DMA map to the MMIO after the
+exporting driver has destroyed its p2p_provider.

 P2P DMA Support Library
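The exporter-side shutdown described in that hunk might look roughly like the sketch below. Again illustrative only: dma_buf_move_notify() is the real in-kernel helper that walks all attachments and invokes each importer's move_notify(), but struct exporter_dev and the surrounding remove() flow here are hypothetical.

```c
/* Illustrative exporter-side sketch (NOT from the patch): on driver
 * remove(), synchronously revoke all importer mappings before tearing
 * down the p2p_provider.  struct exporter_dev is hypothetical. */
static void exporter_remove(struct exporter_dev *edev)
{
	struct dma_buf *dmabuf = edev->dmabuf;

	dma_resv_lock(dmabuf->resv, NULL);
	/* Deliver the invalidation shutdown: calls move_notify() on every
	 * importing driver's attachment. */
	dma_buf_move_notify(dmabuf);
	/* By here every importer must have DMA unmapped the MMIO; the
	 * exporter can now destroy its p2p_provider and release the BAR. */
	dma_resv_unlock(dmabuf->resv);
}
```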