 
On Wed, Oct 29, 2025 at 05:25:03PM -0700, Samiullah Khawaja wrote:
On Mon, Oct 13, 2025 at 8:27 AM Leon Romanovsky <leon@kernel.org> wrote:
From: Leon Romanovsky <leonro@nvidia.com>
Add support for exporting PCI device MMIO regions through dma-buf, enabling safe sharing of non-struct page memory with controlled lifetime management. This allows RDMA and other subsystems to import dma-buf FDs and build them into memory regions for PCI P2P operations.
The implementation provides a revocable attachment mechanism using dma-buf move operations. MMIO regions are normally pinned as BARs don't change physical addresses, but access is revoked when the VFIO device is closed or a PCI reset is issued. This ensures kernel self-defense against potentially hostile userspace.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
 drivers/vfio/pci/Kconfig           |   3 +
 drivers/vfio/pci/Makefile          |   2 +
 drivers/vfio/pci/vfio_pci_config.c |  22 +-
 drivers/vfio/pci/vfio_pci_core.c   |  28 ++
 drivers/vfio/pci/vfio_pci_dmabuf.c | 446 +++++++++++++++++++++++++++++
 drivers/vfio/pci/vfio_pci_priv.h   |  23 ++
 include/linux/vfio_pci_core.h      |   1 +
 include/uapi/linux/vfio.h          |  25 ++
 8 files changed, 546 insertions(+), 4 deletions(-)
 create mode 100644 drivers/vfio/pci/vfio_pci_dmabuf.c
<...>
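For context on how the hunk quoted below is driven: the commit message says BAR access is revoked when the device is closed or a PCI reset is issued, and restored afterwards. The following is only a minimal sketch of that core-side sequence; the call site and the use of __vfio_pci_memory_enabled() are assumptions based on the commit message, not the patch's actual vfio_pci_core.c changes.

/*
 * Illustrative only -- not the patch's code.  Shows the expected
 * lock/revoke ordering around a function reset: revoke importer access
 * under memory_lock, reset, then restore access if MMIO decoding is
 * enabled again.
 */
static void example_reset_with_revoke(struct vfio_pci_core_device *vdev)
{
	down_write(&vdev->memory_lock);

	/* Tell importers (via move_notify) that the BARs are going away. */
	vfio_pci_dma_buf_move(vdev, true);

	pci_try_reset_function(vdev->pdev);

	/* Re-grant access only if memory decoding came back enabled. */
	if (__vfio_pci_memory_enabled(vdev))
		vfio_pci_dma_buf_move(vdev, false);

	up_write(&vdev->memory_lock);
}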
+void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
+{
+	struct vfio_pci_dma_buf *priv;
+	struct vfio_pci_dma_buf *tmp;
+
+	lockdep_assert_held_write(&vdev->memory_lock);
+
+	list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
+		if (!get_file_active(&priv->dmabuf->file))
+			continue;
+
+		if (priv->revoked != revoked) {
+			dma_resv_lock(priv->dmabuf->resv, NULL);
+			priv->revoked = revoked;
+			dma_buf_move_notify(priv->dmabuf);

I think this should only be called when revoked is true; otherwise this will be calling move_notify on already revoked dmabuf attachments.
This case is protected by the "if (priv->revoked)" check in both vfio_pci_dma_buf_map() and vfio_pci_dma_buf_attach(); while access is revoked they prevent any new DMABUF mapping from being created.

BTW, please trim your replies; it is time-consuming to find your comment among 600 lines of unrelated text.
Thanks
+			dma_resv_unlock(priv->dmabuf->resv);
+		}
+		dma_buf_put(priv->dmabuf);
+	}
+}
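To make the point Leon refers to above concrete, here is a hedged paraphrase of the guard in the attach and map callbacks; the exact bodies in vfio_pci_dmabuf.c are not quoted in this thread, so everything except the priv->revoked checks is elided or assumed.

/* Sketch, not the patch: attach refuses new importers while revoked. */
static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
				   struct dma_buf_attachment *attachment)
{
	struct vfio_pci_dma_buf *priv = dmabuf->priv;

	if (!attachment->peer2peer)
		return -EOPNOTSUPP;
	if (priv->revoked)
		return -ENODEV;		/* no new importers while revoked */
	return 0;			/* remaining setup elided */
}

/* Sketch, not the patch: map fails until a later move_notify() un-revokes. */
static struct sg_table *
vfio_pci_dma_buf_map(struct dma_buf_attachment *attachment,
		     enum dma_data_direction dir)
{
	struct vfio_pci_dma_buf *priv = attachment->dmabuf->priv;

	dma_resv_assert_held(priv->dmabuf->resv);
	if (priv->revoked)
		return ERR_PTR(-ENODEV);
	return ERR_PTR(-EOPNOTSUPP);	/* actual BAR mapping elided */
}

So while revoked is true nothing can attach or map; the open question in the thread is only what the second, un-revoking notification is for.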
 
On Thu, Oct 30, 2025 at 08:48:18AM +0200, Leon Romanovsky wrote:
+void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
+{
+	struct vfio_pci_dma_buf *priv;
+	struct vfio_pci_dma_buf *tmp;
+
+	lockdep_assert_held_write(&vdev->memory_lock);
+
+	list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
+		if (!get_file_active(&priv->dmabuf->file))
+			continue;
+
+		if (priv->revoked != revoked) {
+			dma_resv_lock(priv->dmabuf->resv, NULL);
+			priv->revoked = revoked;
+			dma_buf_move_notify(priv->dmabuf);

I think this should only be called when revoked is true; otherwise this will be calling move_notify on already revoked dmabuf attachments.
This case is protected by the "if (priv->revoked)" check in both vfio_pci_dma_buf_map() and vfio_pci_dma_buf_attach(); while access is revoked they prevent any new DMABUF mapping from being created.
The point was to call it when revoked becomes false as well, as that gives the importing driver an opportunity to restore any mapping it had.
Jason
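Jason's point is easiest to see from the importer side. Below is a minimal sketch of a dynamic importer's callback; struct my_importer and the my_importer_* names are made up, and a real driver would typically only invalidate here and re-map lazily before its next use rather than re-mapping inline.

struct my_importer {
	struct sg_table *sgt;	/* current mapping, NULL while revoked */
};

/* Called by dma_buf_move_notify() with the reservation lock held. */
static void my_importer_move_notify(struct dma_buf_attachment *attach)
{
	struct my_importer *imp = attach->importer_priv;

	/* Drop whatever mapping we had; it is stale either way. */
	if (imp->sgt) {
		dma_buf_unmap_attachment(attach, imp->sgt, DMA_BIDIRECTIONAL);
		imp->sgt = NULL;
	}

	/*
	 * Try to map again.  While the exporter is revoked this fails;
	 * the move_notify() issued when revoked flips back to false is
	 * exactly what gives the importer the chance to restore its
	 * mapping.
	 */
	imp->sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(imp->sgt))
		imp->sgt = NULL;
}

static const struct dma_buf_attach_ops my_importer_attach_ops = {
	.allow_peer2peer = true,
	.move_notify = my_importer_move_notify,
};

Such an importer registers with dma_buf_dynamic_attach(dmabuf, dev, &my_importer_attach_ops, imp); without the second notification it would never learn that mapping can succeed again.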