On Wed, Nov 19, 2025 at 03:45:06PM -0400, Jason Gunthorpe wrote:
On Wed, Nov 19, 2025 at 03:06:18PM +0100, Christian König wrote:
On 11/19/25 14:35, Jason Gunthorpe wrote:
On Wed, Nov 19, 2025 at 10:18:08AM +0100, Christian König wrote:
+As this is not well-defined or well-supported in real HW the kernel defaults to
+blocking such routing. There is an allow list to allow detecting known-good HW,
+in which case P2P between any two PCIe devices will be permitted.
<...>
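[For illustration only, not part of the quoted patch: a minimal sketch of how a client driver can ask the kernel whether its P2PDMA policy, including the host bridge allow list, permits a route between a providing PCI device and a client device. my_p2p_route_ok() is a hypothetical helper; a negative distance from pci_p2pdma_distance() means the route was blocked.]

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

/*
 * Hypothetical helper: probe whether the kernel will permit P2P DMA
 * between @provider and @client under the current routing policy.
 */
static bool my_p2p_route_ok(struct pci_dev *provider, struct device *client)
{
	/* verbose=true makes the kernel log why a route was rejected */
	return pci_p2pdma_distance(provider, client, true) >= 0;
}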
The documentation makes it sound like DMA-buf is limited to cases without struct pages and direct I/O, but that is not true.
Okay, I see what you mean. The intention was to be very strong and say that if you are not using struct pages then you must use DMABUF, or something like it, to control lifetime; not to say that is the only way DMABUF can be used.
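[As a rough sketch of what "controlling lifetime through DMABUF" means on the importer side, with my_import() and the surrounding names purely illustrative: the exporter's memory stays alive through the DMABUF reference and attachment rather than through struct page refcounts. This assumes the unlocked attachment-mapping API available in recent kernels.]

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>

static int my_import(struct device *dev, int fd)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dmabuf = dma_buf_get(fd);		/* takes a reference */
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	attach = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return PTR_ERR(attach);
	}

	/* yields DMA addresses, not struct pages; valid until unmapped */
	sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
		return PTR_ERR(sgt);
	}

	/*
	 * Program the device with the addresses in sgt; later tear down
	 * with unmap/detach/put in the reverse order.
	 */
	return 0;
}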
Leon, let's try to clarify that a bit more:
diff --git a/Documentation/driver-api/pci/p2pdma.rst b/Documentation/driver-api/pci/p2pdma.rst
index 32e9b691508b..280673b50350 100644
--- a/Documentation/driver-api/pci/p2pdma.rst
+++ b/Documentation/driver-api/pci/p2pdma.rst
@@ -156,7 +156,8 @@ Usage With DMABUF
 =================

 DMABUF provides an alternative to the above struct page-based
-client/provider/orchestrator system. In this mode the exporting driver will wrap
+client/provider/orchestrator system and should be used when struct page
+doesn't exist. In this mode the exporting driver will wrap
 some of its MMIO in a DMABUF and give the DMABUF FD to userspace.

 Userspace can then pass the FD to an importing driver which will ask the
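[For reference, the exporter side described in the hunk above looks roughly like this. A sketch only: my_dmabuf_ops and my_export_mmio() are hypothetical names, and the attach/map callbacks, which must hand out P2P DMA addresses for the MMIO window, are elided.]

#include <linux/dma-buf.h>
#include <linux/fcntl.h>

/* callbacks elided; a real exporter implements attach/map/release here */
static const struct dma_buf_ops my_dmabuf_ops;

static int my_export_mmio(void *priv, size_t mmio_size)
{
	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
	struct dma_buf *dmabuf;

	exp_info.ops = &my_dmabuf_ops;
	exp_info.size = mmio_size;	/* length of the MMIO window */
	exp_info.flags = O_RDWR;
	exp_info.priv = priv;

	dmabuf = dma_buf_export(&exp_info);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	/* install an FD that userspace passes to the importing driver */
	return dma_buf_fd(dmabuf, O_CLOEXEC);
}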
Jason