Hi, Christian,
On Wed, 2026-01-21 at 10:20 +0100, Christian König wrote:
On 1/20/26 15:07, Leon Romanovsky wrote:
From: Leon Romanovsky <leonro@nvidia.com>
dma-buf invalidation is performed asynchronously by hardware, so VFIO must wait until all affected objects have been fully invalidated.
Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Please also keep in mind that while this waits for all fences for correctness, you also need to keep the mapping valid until dma_buf_unmap_attachment() has been called.
I'm wondering: shouldn't we require DMA_RESV_USAGE_BOOKKEEP here, as *any* unsignaled fence could indicate access through the map?
/Thomas
In other words, you can only redirect the DMA addresses previously given out into nirvana (or to a dummy memory or similar); you still need to avoid re-using them for something else.
Regards, Christian.
 drivers/vfio/pci/vfio_pci_dmabuf.c | 5 +++++
 1 file changed, 5 insertions(+)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index d4d0f7d08c53..33bc6a1909dd 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -321,6 +321,9 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
 		dma_resv_lock(priv->dmabuf->resv, NULL);
 		priv->revoked = revoked;
 		dma_buf_move_notify(priv->dmabuf);
+		dma_resv_wait_timeout(priv->dmabuf->resv,
+				      DMA_RESV_USAGE_KERNEL, false,
+				      MAX_SCHEDULE_TIMEOUT);
 		dma_resv_unlock(priv->dmabuf->resv);
 	}
 	fput(priv->dmabuf->file);
@@ -342,6 +345,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
 		priv->vdev = NULL;
 		priv->revoked = true;
 		dma_buf_move_notify(priv->dmabuf);
+		dma_resv_wait_timeout(priv->dmabuf->resv, DMA_RESV_USAGE_KERNEL,
+				      false, MAX_SCHEDULE_TIMEOUT);
 		dma_resv_unlock(priv->dmabuf->resv);
 		vfio_device_put_registration(&vdev->vdev);
 	fput(priv->dmabuf->file);