On Mon, Jul 28, 2025 at 11:07:34AM -0600, Logan Gunthorpe wrote:
On 2025-07-28 10:41, Leon Romanovsky wrote:
On Mon, Jul 28, 2025 at 10:12:31AM -0600, Logan Gunthorpe wrote:
On 2025-07-27 13:05, Jason Gunthorpe wrote:
On Fri, Jul 25, 2025 at 10:30:46AM -0600, Logan Gunthorpe wrote:
On 2025-07-24 02:13, Leon Romanovsky wrote:
On Thu, Jul 24, 2025 at 10:03:13AM +0200, Christoph Hellwig wrote:
On Wed, Jul 23, 2025 at 04:00:06PM +0300, Leon Romanovsky wrote:
From: Leon Romanovsky <leonro@nvidia.com>

Export the pci_p2pdma_map_type() function to allow external modules
and subsystems to determine the appropriate mapping type for P2PDMA
transfers between a provider and target device.

External modules have no business doing this.
VFIO PCI code is built as a module. There is no way to access the PCI P2PDMA code without exporting functions from it.
The solution that would make more sense to me would be for either dma_iova_try_alloc() or another helper in dma-iommu.c to handle the P2PDMA case.
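Just to illustrate the shape of that idea, one could imagine a single entry point in the DMA core that takes the P2P provider and hides the type decision from the driver entirely. This is purely hypothetical - the name and signature below are made up and no such helper exists in this series:

/* Hypothetical helper, not an existing API. */
dma_addr_t dma_map_p2p_phys(struct device *dev,
			    struct p2pdma_provider *provider,
			    phys_addr_t phys, size_t size,
			    enum dma_data_direction dir,
			    unsigned long attrs);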
This has nothing to do with dma-iommu.c, the decisions here still need to be made even if dma-iommu.c is not compiled in.
Doesn't it though? Every single call in patch 10 to the newly exported PCI functions calls into the dma-iommu functions.
Patch 10 has lots of flows, only one will end up in dma-iommu.c
vfio_pci_dma_buf_map() calls pci_p2pdma_bus_addr_map(), dma_iova_link(), dma_map_phys().
Only dma_iova_link() would call into dma-iommu.c - if dma_map_phys() is called we know that dma-iommu.c won't be involved.
If there were non-iommu paths then I would expect the code would use the regular DMA api directly which would then call in to dma-iommu.
If the P2P type is PCI_P2PDMA_MAP_BUS_ADDR, there will be no dma-iommu and no DMA mapping at all.
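To make the flows concrete, a rough sketch of mapping a single region (along the lines of patch 10, assuming the helper signatures used in this series, with error unwind and the IOVA alloc/sync steps omitted; map_one_region() is a made-up name) might look like:

static dma_addr_t map_one_region(struct device *dev,
				 struct p2pdma_provider *provider,
				 struct dma_iova_state *state,
				 phys_addr_t phys, size_t len)
{
	switch (pci_p2pdma_map_type(provider, dev)) {
	case PCI_P2PDMA_MAP_BUS_ADDR:
		/* Pure bus address: no dma-iommu.c and no DMA API at all. */
		return pci_p2pdma_bus_addr_map(provider, phys);
	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
		if (dma_use_iova(state)) {
			/* The only flow that ends up in dma-iommu.c. */
			if (dma_iova_link(dev, state, phys, 0, len,
					  DMA_BIDIRECTIONAL, 0))
				return DMA_MAPPING_ERROR;
			return state->addr;
		}
		/* Direct path: dma_map_phys() will not reach dma-iommu.c. */
		return dma_map_phys(dev, phys, len, DMA_BIDIRECTIONAL, 0);
	default:
		return DMA_MAPPING_ERROR;
	}
}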
I understand that and it is completely beside my point.
If the DMA mapping for P2P memory doesn't need to create an IOMMU mapping then that's fine. But it should be the dma-iommu layer that decides that.
So, as above, we can't rely on dma-iommu.c: it might not be compiled into the kernel, but the dma_map_phys() path is still valid.
It's not a decision that should be made by every driver doing this kind of thing.
Sort of. I think we are trying to get to a place where there are subsystem-specific, or at least data-structure-specific, helpers that do this (e.g. nvme has BIO helpers), but the helpers should be running this logic directly for performance. Leon hasn't done it, but I think we should see helpers for DMABUF too, encapsulating the logic shown in patch 10. I think we need to prove out these basic points first before trying to go and convert a bunch of GPU drivers.
The vfio code in patch 10 is not the full example since it effectively has only a single "scatter/gather" entry, but the generalized version loops over pci_p2pdma_bus_addr_map(), dma_iova_link(), or dma_map_phys() for each page.
Part of the new API design is to only do one kind of mapping operation at once, and part of the design is that we know the P2P type is fixed. It makes no performance sense to check the type inside pci_p2pdma_bus_addr_map()/dma_iova_link()/dma_map_phys() within the per-page loop.
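As a sketch of that generalized loop (names like provider, pages[], dma[], state and npages are illustrative, the signatures are the ones assumed from this series, and error handling/unwind is omitted), the type check is hoisted out so each per-page iteration is a single direct call:

	enum pci_p2pdma_map_type type = pci_p2pdma_map_type(provider, dev);
	size_t i;

	if (type == PCI_P2PDMA_MAP_BUS_ADDR) {
		/* No DMA API or dma-iommu.c involvement at all. */
		for (i = 0; i < npages; i++)
			dma[i] = pci_p2pdma_bus_addr_map(provider,
						page_to_phys(pages[i]));
	} else if (dma_use_iova(&state)) {
		/* The one flow that reaches dma-iommu.c. */
		for (i = 0; i < npages; i++)
			dma_iova_link(dev, &state, page_to_phys(pages[i]),
				      i * PAGE_SIZE, PAGE_SIZE,
				      DMA_BIDIRECTIONAL, 0);
		dma_iova_sync(dev, &state, 0, npages * PAGE_SIZE);
	} else {
		/* Direct path: dma_map_phys() will not reach dma-iommu.c. */
		for (i = 0; i < npages; i++)
			dma[i] = dma_map_phys(dev, page_to_phys(pages[i]),
					      PAGE_SIZE, DMA_BIDIRECTIONAL, 0);
	}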
I do think some level of abstraction has been lost here in pursuit of performance. If someone does have a better way to structure this without a performance hit then fantastic, but that's going back and revising the new DMA API. This just builds on top of that, and yes, it is not so abstract.
Jason