Changelog:
v2:
 * Changed series to document the revoke semantics instead of implementing it.
v1: https://patch.msgid.link/20260111-dmabuf-revoke-v1-0-fb4bcc8c259b@nvidia.com
-------------------------------------------------------------------------
This series documents a dma-buf “revoke” mechanism that allows a dma-buf
exporter to explicitly invalidate (“kill”) a shared buffer after it has been
distributed to importers, so that further CPU and device access is prevented
and importers reliably observe failure.

The change in this series properly documents and uses the existing core
“revoked” state on the dma-buf object and the corresponding exporter-triggered
revoke operation. Once a dma-buf is revoked, new access paths are blocked so
that attempts to DMA-map, vmap, or mmap the buffer fail in a consistent way.
Thanks
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: linux-kernel@vger.kernel.org
Cc: amd-gfx@lists.freedesktop.org
Cc: virtualization@lists.linux.dev
Cc: intel-xe@lists.freedesktop.org
Cc: linux-rdma@vger.kernel.org
Cc: iommu@lists.linux.dev
Cc: kvm@vger.kernel.org
To: Sumit Semwal <sumit.semwal@linaro.org>
To: Christian König <christian.koenig@amd.com>
To: Alex Deucher <alexander.deucher@amd.com>
To: David Airlie <airlied@gmail.com>
To: Simona Vetter <simona@ffwll.ch>
To: Gerd Hoffmann <kraxel@redhat.com>
To: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: Gurchetan Singh <gurchetansingh@chromium.org>
To: Chia-I Wu <olvaffe@gmail.com>
To: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
To: Maxime Ripard <mripard@kernel.org>
To: Thomas Zimmermann <tzimmermann@suse.de>
To: Lucas De Marchi <lucas.demarchi@intel.com>
To: Thomas Hellström <thomas.hellstrom@linux.intel.com>
To: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Jason Gunthorpe <jgg@ziepe.ca>
To: Leon Romanovsky <leon@kernel.org>
To: Kevin Tian <kevin.tian@intel.com>
To: Joerg Roedel <joro@8bytes.org>
To: Will Deacon <will@kernel.org>
To: Robin Murphy <robin.murphy@arm.com>
To: Alex Williamson <alex@shazbot.org>
---
Leon Romanovsky (4):
      dma-buf: Rename .move_notify() callback to a clearer identifier
      dma-buf: Document revoke semantics
      iommufd: Require DMABUF revoke semantics
      vfio: Add pinned interface to perform revoke semantics
 drivers/dma-buf/dma-buf.c                   |  6 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  4 ++--
 drivers/gpu/drm/virtio/virtgpu_prime.c      |  2 +-
 drivers/gpu/drm/xe/tests/xe_dma_buf.c       |  6 +++---
 drivers/gpu/drm/xe/xe_dma_buf.c             |  2 +-
 drivers/infiniband/core/umem_dmabuf.c       |  4 ++--
 drivers/infiniband/hw/mlx5/mr.c             |  2 +-
 drivers/iommu/iommufd/pages.c               | 11 +++++++++--
 drivers/vfio/pci/vfio_pci_dmabuf.c          | 16 ++++++++++++++++
 include/linux/dma-buf.h                     | 25 ++++++++++++++++++++++---
 10 files changed, 60 insertions(+), 18 deletions(-)
---
base-commit: 9ace4753a5202b02191d54e9fdf7f9e3d02b85eb
change-id: 20251221-dmabuf-revoke-b90ef16e4236
Best regards,
--
Leon Romanovsky <leonro@nvidia.com>
From: Leon Romanovsky <leonro@nvidia.com>
Rename the .move_notify() callback to .invalidate_mappings() to make its purpose explicit and highlight that it is responsible for invalidating existing mappings.
Suggested-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/dma-buf/dma-buf.c                   | 6 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 4 ++--
 drivers/gpu/drm/virtio/virtgpu_prime.c      | 2 +-
 drivers/gpu/drm/xe/tests/xe_dma_buf.c       | 6 +++---
 drivers/gpu/drm/xe/xe_dma_buf.c             | 2 +-
 drivers/infiniband/core/umem_dmabuf.c       | 4 ++--
 drivers/infiniband/hw/mlx5/mr.c             | 2 +-
 drivers/iommu/iommufd/pages.c               | 2 +-
 include/linux/dma-buf.h                     | 6 +++---
 9 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index edaa9e4ee4ae..59cc647bf40e 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -948,7 +948,7 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
 	if (WARN_ON(!dmabuf || !dev))
 		return ERR_PTR(-EINVAL);
 
-	if (WARN_ON(importer_ops && !importer_ops->move_notify))
+	if (WARN_ON(importer_ops && !importer_ops->invalidate_mappings))
 		return ERR_PTR(-EINVAL);
 
 	attach = kzalloc(sizeof(*attach), GFP_KERNEL);
@@ -1055,7 +1055,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_pin, "DMA_BUF");
  *
  * This unpins a buffer pinned by dma_buf_pin() and allows the exporter to move
  * any mapping of @attach again and inform the importer through
- * &dma_buf_attach_ops.move_notify.
+ * &dma_buf_attach_ops.invalidate_mappings.
  */
 void dma_buf_unpin(struct dma_buf_attachment *attach)
 {
@@ -1262,7 +1262,7 @@ void dma_buf_move_notify(struct dma_buf *dmabuf)
 
 	list_for_each_entry(attach, &dmabuf->attachments, node)
 		if (attach->importer_ops)
-			attach->importer_ops->move_notify(attach);
+			attach->importer_ops->invalidate_mappings(attach);
 }
 EXPORT_SYMBOL_NS_GPL(dma_buf_move_notify, "DMA_BUF");
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index e22cfa7c6d32..863454148b28 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -450,7 +450,7 @@ amdgpu_dma_buf_create_obj(struct drm_device *dev, struct dma_buf *dma_buf)
 }
 
 /**
- * amdgpu_dma_buf_move_notify - &attach.move_notify implementation
+ * amdgpu_dma_buf_move_notify - &attach.invalidate_mappings implementation
  *
  * @attach: the DMA-buf attachment
  *
@@ -521,7 +521,7 @@ amdgpu_dma_buf_move_notify(struct dma_buf_attachment *attach)
 
 static const struct dma_buf_attach_ops amdgpu_dma_buf_attach_ops = {
 	.allow_peer2peer = true,
-	.move_notify = amdgpu_dma_buf_move_notify
+	.invalidate_mappings = amdgpu_dma_buf_move_notify
 };
 
 /**
diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index ce49282198cb..19c78dd2ca77 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -288,7 +288,7 @@ static void virtgpu_dma_buf_move_notify(struct dma_buf_attachment *attach)
 
 static const struct dma_buf_attach_ops virtgpu_dma_buf_attach_ops = {
 	.allow_peer2peer = true,
-	.move_notify = virtgpu_dma_buf_move_notify
+	.invalidate_mappings = virtgpu_dma_buf_move_notify
 };
 
 struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
diff --git a/drivers/gpu/drm/xe/tests/xe_dma_buf.c b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
index 5df98de5ba3c..1f2cca5c2f81 100644
--- a/drivers/gpu/drm/xe/tests/xe_dma_buf.c
+++ b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
@@ -23,7 +23,7 @@ static bool p2p_enabled(struct dma_buf_test_params *params)
 static bool is_dynamic(struct dma_buf_test_params *params)
 {
 	return IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY) && params->attach_ops &&
-		params->attach_ops->move_notify;
+		params->attach_ops->invalidate_mappings;
 }
 
 static void check_residency(struct kunit *test, struct xe_bo *exported,
@@ -60,7 +60,7 @@ static void check_residency(struct kunit *test, struct xe_bo *exported,
 
 	/*
 	 * Evict exporter. Evicting the exported bo will
-	 * evict also the imported bo through the move_notify() functionality if
+	 * evict also the imported bo through the invalidate_mappings() functionality if
 	 * importer is on a different device. If they're on the same device,
 	 * the exporter and the importer should be the same bo.
 	 */
@@ -198,7 +198,7 @@ static void xe_test_dmabuf_import_same_driver(struct xe_device *xe)
 
 static const struct dma_buf_attach_ops nop2p_attach_ops = {
 	.allow_peer2peer = false,
-	.move_notify = xe_dma_buf_move_notify
+	.invalidate_mappings = xe_dma_buf_move_notify
 };
 
 /*
diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
index 7c74a31d4486..1b9cd043e517 100644
--- a/drivers/gpu/drm/xe/xe_dma_buf.c
+++ b/drivers/gpu/drm/xe/xe_dma_buf.c
@@ -287,7 +287,7 @@ static void xe_dma_buf_move_notify(struct dma_buf_attachment *attach)
 
 static const struct dma_buf_attach_ops xe_dma_buf_attach_ops = {
 	.allow_peer2peer = true,
-	.move_notify = xe_dma_buf_move_notify
+	.invalidate_mappings = xe_dma_buf_move_notify
 };
 
 #if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
index 0ec2e4120cc9..d77a739cfe7a 100644
--- a/drivers/infiniband/core/umem_dmabuf.c
+++ b/drivers/infiniband/core/umem_dmabuf.c
@@ -129,7 +129,7 @@ ib_umem_dmabuf_get_with_dma_device(struct ib_device *device,
 	if (check_add_overflow(offset, (unsigned long)size, &end))
 		return ret;
 
-	if (unlikely(!ops || !ops->move_notify))
+	if (unlikely(!ops || !ops->invalidate_mappings))
 		return ret;
 
 	dmabuf = dma_buf_get(fd);
@@ -195,7 +195,7 @@ ib_umem_dmabuf_unsupported_move_notify(struct dma_buf_attachment *attach)
 
 static struct dma_buf_attach_ops ib_umem_dmabuf_attach_pinned_ops = {
 	.allow_peer2peer = true,
-	.move_notify = ib_umem_dmabuf_unsupported_move_notify,
+	.invalidate_mappings = ib_umem_dmabuf_unsupported_move_notify,
 };
 
 struct ib_umem_dmabuf *
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 325fa04cbe8a..97099d3b1688 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1620,7 +1620,7 @@ static void mlx5_ib_dmabuf_invalidate_cb(struct dma_buf_attachment *attach)
 
 static struct dma_buf_attach_ops mlx5_ib_dmabuf_attach_ops = {
 	.allow_peer2peer = 1,
-	.move_notify = mlx5_ib_dmabuf_invalidate_cb,
+	.invalidate_mappings = mlx5_ib_dmabuf_invalidate_cb,
 };
 
 static struct ib_mr *
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c
index dbe51ecb9a20..76f900fa1687 100644
--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -1451,7 +1451,7 @@ static void iopt_revoke_notify(struct dma_buf_attachment *attach)
 
 static struct dma_buf_attach_ops iopt_dmabuf_attach_revoke_ops = {
 	.allow_peer2peer = true,
-	.move_notify = iopt_revoke_notify,
+	.invalidate_mappings = iopt_revoke_notify,
 };
 
 /*
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 0bc492090237..1b397635c793 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -407,7 +407,7 @@ struct dma_buf {
 	 * through the device.
 	 *
 	 * - Dynamic importers should set fences for any access that they can't
-	 *   disable immediately from their &dma_buf_attach_ops.move_notify
+	 *   disable immediately from their &dma_buf_attach_ops.invalidate_mappings
	 *   callback.
 	 *
 	 * IMPORTANT:
@@ -458,7 +458,7 @@ struct dma_buf_attach_ops {
 	bool allow_peer2peer;
 
 	/**
-	 * @move_notify: [optional] notification that the DMA-buf is moving
+	 * @invalidate_mappings: [optional] notification that the DMA-buf is moving
 	 *
 	 * If this callback is provided the framework can avoid pinning the
 	 * backing store while mappings exists.
@@ -475,7 +475,7 @@ struct dma_buf_attach_ops {
 	 * New mappings can be created after this callback returns, and will
 	 * point to the new location of the DMA-buf.
 	 */
-	void (*move_notify)(struct dma_buf_attachment *attach);
+	void (*invalidate_mappings)(struct dma_buf_attachment *attach);
 };
 
 /**
On 1/18/26 13:08, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Rename the .move_notify() callback to .invalidate_mappings() to make its
> purpose explicit and highlight that it is responsible for invalidating
> existing mappings.
>
> Suggested-by: Christian König <christian.koenig@amd.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
On Mon, Jan 19, 2026 at 11:22:27AM +0100, Christian König wrote:
> On 1/18/26 13:08, Leon Romanovsky wrote:
>> From: Leon Romanovsky <leonro@nvidia.com>
>>
>> Rename the .move_notify() callback to .invalidate_mappings() to make its
>> purpose explicit and highlight that it is responsible for invalidating
>> existing mappings.
>>
>> Suggested-by: Christian König <christian.koenig@amd.com>
>> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
>
> Reviewed-by: Christian König <christian.koenig@amd.com>
Thanks,
BTW, I didn't update the various xxx_move_notify() functions to use xxx_invalidate_mappings() names. Should those be converted as well?
On 1/19/26 12:38, Leon Romanovsky wrote:
> On Mon, Jan 19, 2026 at 11:22:27AM +0100, Christian König wrote:
>> On 1/18/26 13:08, Leon Romanovsky wrote:
>>> From: Leon Romanovsky <leonro@nvidia.com>
>>>
>>> Rename the .move_notify() callback to .invalidate_mappings() to make its
>>> purpose explicit and highlight that it is responsible for invalidating
>>> existing mappings.
>>>
>>> Suggested-by: Christian König <christian.koenig@amd.com>
>>> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
>>
>> Reviewed-by: Christian König <christian.koenig@amd.com>
>
> Thanks,
>
> BTW, I didn't update the various xxx_move_notify() functions to use
> xxx_invalidate_mappings() names. Should those be converted as well?

No, those importer-specific functions can keep their names.

More important is the config option. I haven't thought about that one.
Probably best if we either rename or completely remove it; it was there to
keep the MOVE_NOTIFY functionality separate for initial testing, but we have
clearly surpassed that long ago.
Regards,
Christian.
drivers/dma-buf/dma-buf.c | 6 +++--- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 4 ++-- drivers/gpu/drm/virtio/virtgpu_prime.c | 2 +- drivers/gpu/drm/xe/tests/xe_dma_buf.c | 6 +++--- drivers/gpu/drm/xe/xe_dma_buf.c | 2 +- drivers/infiniband/core/umem_dmabuf.c | 4 ++-- drivers/infiniband/hw/mlx5/mr.c | 2 +- drivers/iommu/iommufd/pages.c | 2 +- include/linux/dma-buf.h | 6 +++--- 9 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index edaa9e4ee4ae..59cc647bf40e 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -948,7 +948,7 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, if (WARN_ON(!dmabuf || !dev)) return ERR_PTR(-EINVAL);
- if (WARN_ON(importer_ops && !importer_ops->move_notify))
- if (WARN_ON(importer_ops && !importer_ops->invalidate_mappings)) return ERR_PTR(-EINVAL);
attach = kzalloc(sizeof(*attach), GFP_KERNEL); @@ -1055,7 +1055,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_pin, "DMA_BUF");
- This unpins a buffer pinned by dma_buf_pin() and allows the exporter to move
- any mapping of @attach again and inform the importer through
- &dma_buf_attach_ops.move_notify.
*/
- &dma_buf_attach_ops.invalidate_mappings.
void dma_buf_unpin(struct dma_buf_attachment *attach) { @@ -1262,7 +1262,7 @@ void dma_buf_move_notify(struct dma_buf *dmabuf) list_for_each_entry(attach, &dmabuf->attachments, node) if (attach->importer_ops)
attach->importer_ops->move_notify(attach);
attach->importer_ops->invalidate_mappings(attach);} EXPORT_SYMBOL_NS_GPL(dma_buf_move_notify, "DMA_BUF"); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c index e22cfa7c6d32..863454148b28 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c @@ -450,7 +450,7 @@ amdgpu_dma_buf_create_obj(struct drm_device *dev, struct dma_buf *dma_buf) } /**
- amdgpu_dma_buf_move_notify - &attach.move_notify implementation
- amdgpu_dma_buf_move_notify - &attach.invalidate_mappings implementation
- @attach: the DMA-buf attachment
@@ -521,7 +521,7 @@ amdgpu_dma_buf_move_notify(struct dma_buf_attachment *attach) static const struct dma_buf_attach_ops amdgpu_dma_buf_attach_ops = { .allow_peer2peer = true,
- .move_notify = amdgpu_dma_buf_move_notify
- .invalidate_mappings = amdgpu_dma_buf_move_notify
}; /** diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c index ce49282198cb..19c78dd2ca77 100644 --- a/drivers/gpu/drm/virtio/virtgpu_prime.c +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c @@ -288,7 +288,7 @@ static void virtgpu_dma_buf_move_notify(struct dma_buf_attachment *attach) static const struct dma_buf_attach_ops virtgpu_dma_buf_attach_ops = { .allow_peer2peer = true,
- .move_notify = virtgpu_dma_buf_move_notify
- .invalidate_mappings = virtgpu_dma_buf_move_notify
}; struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev, diff --git a/drivers/gpu/drm/xe/tests/xe_dma_buf.c b/drivers/gpu/drm/xe/tests/xe_dma_buf.c index 5df98de5ba3c..1f2cca5c2f81 100644 --- a/drivers/gpu/drm/xe/tests/xe_dma_buf.c +++ b/drivers/gpu/drm/xe/tests/xe_dma_buf.c @@ -23,7 +23,7 @@ static bool p2p_enabled(struct dma_buf_test_params *params) static bool is_dynamic(struct dma_buf_test_params *params) { return IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY) && params->attach_ops &&
params->attach_ops->move_notify;
params->attach_ops->invalidate_mappings;} static void check_residency(struct kunit *test, struct xe_bo *exported, @@ -60,7 +60,7 @@ static void check_residency(struct kunit *test, struct xe_bo *exported, /* * Evict exporter. Evicting the exported bo will
-	 * evict also the imported bo through the move_notify() functionality if
+	 * evict also the imported bo through the invalidate_mappings() functionality if
 	 * importer is on a different device. If they're on the same device,
 	 * the exporter and the importer should be the same bo.
 	 */
@@ -198,7 +198,7 @@ static void xe_test_dmabuf_import_same_driver(struct xe_device *xe)
 
 	static const struct dma_buf_attach_ops nop2p_attach_ops = {
 		.allow_peer2peer = false,
-		.move_notify = xe_dma_buf_move_notify
+		.invalidate_mappings = xe_dma_buf_move_notify
 	};
 
 	/*
diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
index 7c74a31d4486..1b9cd043e517 100644
--- a/drivers/gpu/drm/xe/xe_dma_buf.c
+++ b/drivers/gpu/drm/xe/xe_dma_buf.c
@@ -287,7 +287,7 @@ static void xe_dma_buf_move_notify(struct dma_buf_attachment *attach)
 
 static const struct dma_buf_attach_ops xe_dma_buf_attach_ops = {
 	.allow_peer2peer = true,
-	.move_notify = xe_dma_buf_move_notify
+	.invalidate_mappings = xe_dma_buf_move_notify
 };
 
 #if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
index 0ec2e4120cc9..d77a739cfe7a 100644
--- a/drivers/infiniband/core/umem_dmabuf.c
+++ b/drivers/infiniband/core/umem_dmabuf.c
@@ -129,7 +129,7 @@ ib_umem_dmabuf_get_with_dma_device(struct ib_device *device,
 	if (check_add_overflow(offset, (unsigned long)size, &end))
 		return ret;
 
-	if (unlikely(!ops || !ops->move_notify))
+	if (unlikely(!ops || !ops->invalidate_mappings))
 		return ret;
 	dmabuf = dma_buf_get(fd);
@@ -195,7 +195,7 @@ ib_umem_dmabuf_unsupported_move_notify(struct dma_buf_attachment *attach)
 
 static struct dma_buf_attach_ops ib_umem_dmabuf_attach_pinned_ops = {
 	.allow_peer2peer = true,
-	.move_notify = ib_umem_dmabuf_unsupported_move_notify,
+	.invalidate_mappings = ib_umem_dmabuf_unsupported_move_notify,
 };
 
 struct ib_umem_dmabuf *
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 325fa04cbe8a..97099d3b1688 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1620,7 +1620,7 @@ static void mlx5_ib_dmabuf_invalidate_cb(struct dma_buf_attachment *attach)
 
 static struct dma_buf_attach_ops mlx5_ib_dmabuf_attach_ops = {
 	.allow_peer2peer = 1,
-	.move_notify = mlx5_ib_dmabuf_invalidate_cb,
+	.invalidate_mappings = mlx5_ib_dmabuf_invalidate_cb,
 };
 
 static struct ib_mr *
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c
index dbe51ecb9a20..76f900fa1687 100644
--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -1451,7 +1451,7 @@ static void iopt_revoke_notify(struct dma_buf_attachment *attach)
 
 static struct dma_buf_attach_ops iopt_dmabuf_attach_revoke_ops = {
 	.allow_peer2peer = true,
-	.move_notify = iopt_revoke_notify,
+	.invalidate_mappings = iopt_revoke_notify,
 };
 
 /*
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 0bc492090237..1b397635c793 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -407,7 +407,7 @@ struct dma_buf {
 	 *   through the device.
 	 *
 	 * - Dynamic importers should set fences for any access that they can't
-	 *   disable immediately from their &dma_buf_attach_ops.move_notify
+	 *   disable immediately from their &dma_buf_attach_ops.invalidate_mappings
 	 *   callback.
 	 *
 	 * IMPORTANT:
@@ -458,7 +458,7 @@ struct dma_buf_attach_ops {
 	bool allow_peer2peer;
 
 	/**
-	 * @move_notify: [optional] notification that the DMA-buf is moving
+	 * @invalidate_mappings: [optional] notification that the DMA-buf is moving
 	 *
 	 * If this callback is provided the framework can avoid pinning the
 	 * backing store while mappings exists.
@@ -475,7 +475,7 @@ struct dma_buf_attach_ops {
 	 * New mappings can be created after this callback returns, and will
 	 * point to the new location of the DMA-buf.
 	 */
-	void (*move_notify)(struct dma_buf_attachment *attach);
+	void (*invalidate_mappings)(struct dma_buf_attachment *attach);
 };
 
 /**
On Mon, Jan 19, 2026 at 01:00:18PM +0100, Christian König wrote:
On 1/19/26 12:38, Leon Romanovsky wrote:
On Mon, Jan 19, 2026 at 11:22:27AM +0100, Christian König wrote:
On 1/18/26 13:08, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
Rename the .move_notify() callback to .invalidate_mappings() to make its purpose explicit and highlight that it is responsible for invalidating existing mappings.
Suggested-by: Christian König christian.koenig@amd.com Signed-off-by: Leon Romanovsky leonro@nvidia.com
Reviewed-by: Christian König christian.koenig@amd.com
Thanks,
BTW, I didn't update the various xxx_move_notify() functions to use xxx_invalidate_mappings() names. Should those be converted as well?
No, those importer specific functions can keep their name.
More important is the config option. Haven't thought about that one.
Probably best if we either rename or completely remove that one, it was to keep the MOVE_NOTIFY functionality separate for initial testing but we have clearly surpassed this a long time ago.
I removed it and will send in v3.
commit 05ad416fc0b8c9b07714f9b23dbb038c991b819d Author: Leon Romanovsky leonro@nvidia.com Date: Mon Jan 19 07:24:26 2026 -0500
dma-buf: Always build with DMABUF_MOVE_NOTIFY
DMABUF_MOVE_NOTIFY was introduced in 2018 and has been marked as experimental and disabled by default ever since. Six years later, all new importers implement this callback.
It is therefore reasonable to drop CONFIG_DMABUF_MOVE_NOTIFY and always build DMABUF with support for it enabled.
Suggested-by: Christian König christian.koenig@amd.com Signed-off-by: Leon Romanovsky leonro@nvidia.com
diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig index b46eb8a552d7..84d5e9b24e20 100644 --- a/drivers/dma-buf/Kconfig +++ b/drivers/dma-buf/Kconfig @@ -40,18 +40,6 @@ config UDMABUF A driver to let userspace turn memfd regions into dma-bufs. Qemu can use this to create host dmabufs for guest framebuffers.
-config DMABUF_MOVE_NOTIFY
-	bool "Move notify between drivers (EXPERIMENTAL)"
-	default n
-	depends on DMA_SHARED_BUFFER
-	help
-	  Don't pin buffers if the dynamic DMA-buf interface is available on
-	  both the exporter as well as the importer. This fixes a security
-	  problem where userspace is able to pin unrestricted amounts of memory
-	  through DMA-buf.
-
-	  This is marked experimental because we don't yet have a consistent
-	  execution context and memory management between drivers.
-
 config DMABUF_DEBUG
 	bool "DMA-BUF debug checks"
 	depends on DMA_SHARED_BUFFER
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 59cc647bf40e..cd3b60ce4863 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -837,18 +837,10 @@ static void mangle_sg_table(struct sg_table *sg_table)
}
-static inline bool
-dma_buf_attachment_is_dynamic(struct dma_buf_attachment *attach)
-{
-	return !!attach->importer_ops;
-}
-
 static bool dma_buf_pin_on_map(struct dma_buf_attachment *attach)
 {
-	return attach->dmabuf->ops->pin &&
-	       (!dma_buf_attachment_is_dynamic(attach) ||
-		!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY));
+	return attach->dmabuf->ops->pin && !attach->importer_ops;
 }
 /**
@@ -1124,7 +1116,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
 	/*
 	 * Importers with static attachments don't wait for fences.
 	 */
-	if (!dma_buf_attachment_is_dynamic(attach)) {
+	if (!attach->importer_ops) {
 		ret = dma_resv_wait_timeout(attach->dmabuf->resv,
 					    DMA_RESV_USAGE_KERNEL, true,
 					    MAX_SCHEDULE_TIMEOUT);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 863454148b28..349215549e8f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -145,13 +145,9 @@ static int amdgpu_dma_buf_pin(struct dma_buf_attachment *attach)
 	 * notifiers are disabled, only allow pinning in VRAM when move
 	 * notiers are enabled.
 	 */
-	if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) {
-		domains &= ~AMDGPU_GEM_DOMAIN_VRAM;
-	} else {
-		list_for_each_entry(attach, &dmabuf->attachments, node)
-			if (!attach->peer2peer)
-				domains &= ~AMDGPU_GEM_DOMAIN_VRAM;
-	}
+	list_for_each_entry(attach, &dmabuf->attachments, node)
+		if (!attach->peer2peer)
+			domains &= ~AMDGPU_GEM_DOMAIN_VRAM;
if (domains & AMDGPU_GEM_DOMAIN_VRAM) bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED; diff --git a/drivers/gpu/drm/amd/amdkfd/Kconfig b/drivers/gpu/drm/amd/amdkfd/Kconfig index 16e12c9913f9..a5d7467c2f34 100644 --- a/drivers/gpu/drm/amd/amdkfd/Kconfig +++ b/drivers/gpu/drm/amd/amdkfd/Kconfig @@ -27,7 +27,7 @@ config HSA_AMD_SVM
 config HSA_AMD_P2P
 	bool "HSA kernel driver support for peer-to-peer for AMD GPU devices"
-	depends on HSA_AMD && PCI_P2PDMA && DMABUF_MOVE_NOTIFY
+	depends on HSA_AMD && PCI_P2PDMA
 	help
 	  Enable peer-to-peer (P2P) communication between AMD GPUs over
 	  the PCIe bus. This can improve performance of multi-GPU compute
diff --git a/drivers/gpu/drm/xe/tests/xe_dma_buf.c b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
index 1f2cca5c2f81..c107687ef3c0 100644
--- a/drivers/gpu/drm/xe/tests/xe_dma_buf.c
+++ b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
@@ -22,8 +22,7 @@ static bool p2p_enabled(struct dma_buf_test_params *params)
 static bool is_dynamic(struct dma_buf_test_params *params)
 {
-	return IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY) && params->attach_ops &&
-	       params->attach_ops->invalidate_mappings;
+	return params->attach_ops && params->attach_ops->invalidate_mappings;
 }
static void check_residency(struct kunit *test, struct xe_bo *exported, diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c index 1b9cd043e517..ea370cd373e9 100644 --- a/drivers/gpu/drm/xe/xe_dma_buf.c +++ b/drivers/gpu/drm/xe/xe_dma_buf.c @@ -56,14 +56,10 @@ static int xe_dma_buf_pin(struct dma_buf_attachment *attach) bool allow_vram = true; int ret;
-	if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) {
-		allow_vram = false;
-	} else {
-		list_for_each_entry(attach, &dmabuf->attachments, node) {
-			if (!attach->peer2peer) {
-				allow_vram = false;
-				break;
-			}
+	list_for_each_entry(attach, &dmabuf->attachments, node) {
+		if (!attach->peer2peer) {
+			allow_vram = false;
+			break;
 		}
 	}
From: Leon Romanovsky leonro@nvidia.com
Document a DMA-buf revoke mechanism that allows an exporter to explicitly invalidate ("kill") a shared buffer after it has been handed out to importers. Once revoked, all further CPU and device access is blocked, and importers consistently observe failure.
This requires both importers and exporters to honor the revoke contract.
For importers, this means implementing .invalidate_mappings() and calling dma_buf_pin() after the DMA-buf is attached to verify the exporter's support for revocation.

For exporters, this means implementing the .pin() callback, which checks the DMA-buf attachment for a valid revoke implementation.
Signed-off-by: Leon Romanovsky leonro@nvidia.com --- include/linux/dma-buf.h | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 1b397635c793..e0bc0b7119f5 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -579,6 +579,25 @@ static inline bool dma_buf_is_dynamic(struct dma_buf *dmabuf) return !!dmabuf->ops->pin; }
+/** + * dma_buf_attachment_is_revoke - check if a DMA-buf importer implements + * revoke semantics. + * @attach: the DMA-buf attachment to check + * + * Returns true if DMA-buf importer honors revoke semantics, which is + * negotiated with the exporter, by making sure that importer implements + * .invalidate_mappings() callback and calls to dma_buf_pin() after + * DMA-buf attach. + */ +static inline bool +dma_buf_attachment_is_revoke(struct dma_buf_attachment *attach) +{ + return IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY) && + dma_buf_is_dynamic(attach->dmabuf) && + (attach->importer_ops && + attach->importer_ops->invalidate_mappings); +} + struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, struct device *dev); struct dma_buf_attachment *
On Sun, 2026-01-18 at 14:08 +0200, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
Document a DMA-buf revoke mechanism that allows an exporter to explicitly invalidate ("kill") a shared buffer after it has been handed out to importers. Once revoked, all further CPU and device access is blocked, and importers consistently observe failure.
See previous comment WRT this.
This requires both importers and exporters to honor the revoke contract.
For importers, this means implementing .invalidate_mappings() and calling dma_buf_pin() after the DMA‑buf is attached to verify the exporter’s support for revocation.
Why would the importer want to verify the exporter's support for revocation? If the exporter doesn't support it, the only consequence would be that invalidate_mappings() would never be called, and that dma_buf_pin() is a NOP. Besides, dma_buf_pin() would not return an error if the exporter doesn't implement the pin() callback?
Or perhaps I missed a prereq patch?
Thanks, Thomas
On Sun, Jan 18, 2026 at 03:29:02PM +0100, Thomas Hellström wrote:
On Sun, 2026-01-18 at 14:08 +0200, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
Document a DMA-buf revoke mechanism that allows an exporter to explicitly invalidate ("kill") a shared buffer after it has been handed out to importers. Once revoked, all further CPU and device access is blocked, and importers consistently observe failure.
See previous comment WRT this.
This requires both importers and exporters to honor the revoke contract.
For importers, this means implementing .invalidate_mappings() and calling dma_buf_pin() after the DMA‑buf is attached to verify the exporter’s support for revocation.
Why would the importer want to verify the exporter's support for revocation? If the exporter doesn't support it, the only consequence would be that invalidate_mappings() would never be called, and that dma_buf_pin() is a NOP. Besides, dma_buf_pin() would not return an error if the exporter doesn't implement the pin() callback?
The idea is that both should do revoke and there is a need to indicate that this exporter has some expectations from the importers. One of them is that invalidate_mappings exists.
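The two-sided negotiation described here can be sketched as a standalone C mock: the exporter's .pin() refuses importers that cannot be invalidated, and the importer signals its capability by providing .invalidate_mappings() and then pinning. All names are illustrative stand-ins for the kernel interfaces, not the real implementations.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative mocks of the revoke "handshake" -- not kernel code. */
struct dma_buf_attachment;

struct dma_buf_attach_ops {
	void (*invalidate_mappings)(struct dma_buf_attachment *attach);
};

struct dma_buf_attachment {
	const struct dma_buf_attach_ops *importer_ops;
	bool pinned;
};

/* Exporter side: a revoke-capable exporter's .pin() rejects importers
 * whose mappings it could never take back later. */
static int exporter_pin(struct dma_buf_attachment *attach)
{
	if (!attach->importer_ops || !attach->importer_ops->invalidate_mappings)
		return -EINVAL;
	attach->pinned = true;
	return 0;
}

/* Importer side: the callback the importer promises to honor; after it
 * runs, the importer must stop using its cached mappings. */
static void importer_invalidate(struct dma_buf_attachment *attach)
{
	attach->pinned = false;
}

static const struct dma_buf_attach_ops revoke_capable_importer = {
	.invalidate_mappings = importer_invalidate,
};
```

The importer attaches with the callback set and then pins; a failed pin tells it the exporter rejected the contract, while a NULL-ops (legacy) importer is rejected up front.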
Thanks
Or perhaps I missed a prereq patch?
Thanks, Thomas
On Sun, Jan 18, 2026 at 03:29:02PM +0100, Thomas Hellström wrote:
Why would the importer want to verify the exporter's support for revocation? If the exporter doesn't support it, the only consequence would be that invalidate_mappings() would never be called, and that dma_buf_pin() is a NOP. Besides, dma_buf_pin() would not return an error if the exporter doesn't implement the pin() callback?
I think the comment and commit message should be clarified that dma_buf_attachment_is_revoke() is called by the exporter.
The purpose is for the exporter that wants to call move_notify() on a pinned DMABUF to determine if the importer is going to support it.
Jason
On 1/18/26 13:08, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
Document a DMA-buf revoke mechanism that allows an exporter to explicitly invalidate ("kill") a shared buffer after it has been handed out to importers. Once revoked, all further CPU and device access is blocked, and importers consistently observe failure.
This requires both importers and exporters to honor the revoke contract.
For importers, this means implementing .invalidate_mappings() and calling dma_buf_pin() after the DMA‑buf is attached to verify the exporter’s support for revocation.
For exporters, this means implementing the .pin() callback, which checks the DMA‑buf attachment for a valid revoke implementation.
Signed-off-by: Leon Romanovsky leonro@nvidia.com
include/linux/dma-buf.h | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 1b397635c793..e0bc0b7119f5 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -579,6 +579,25 @@ static inline bool dma_buf_is_dynamic(struct dma_buf *dmabuf) return !!dmabuf->ops->pin; } +/**
+ * dma_buf_attachment_is_revoke - check if a DMA-buf importer implements
+ * revoke semantics.
+ * @attach: the DMA-buf attachment to check
+ *
+ * Returns true if DMA-buf importer honors revoke semantics, which is
+ * negotiated with the exporter, by making sure that importer implements
+ * .invalidate_mappings() callback and calls to dma_buf_pin() after
+ * DMA-buf attach.
That wording is too unclear. Something like:

Returns true if the DMA-buf importer can handle invalidating its mappings at any time, even after pinning a buffer.
+ */
+static inline bool
+dma_buf_attachment_is_revoke(struct dma_buf_attachment *attach)
That's clearly not a good name. But that is already discussed in another thread.
+{
+	return IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY) &&
Oh, we should have renamed that as well. Or maybe it is time to completely remove that config option.
+	       dma_buf_is_dynamic(attach->dmabuf) &&
This is checking exporter and not importer capabilities, please drop.
+	       (attach->importer_ops &&
+		attach->importer_ops->invalidate_mappings);
So when invalidate_mappings is implemented we need to be able to call it at any time. Yeah that sounds like a valid approach to me.
But we need to remove the RDMA callback with the warning then to properly signal that. And also please document that in the callback kerneldoc.
Regards, Christian.
+}
struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, struct device *dev); struct dma_buf_attachment *
On Mon, Jan 19, 2026 at 11:56:16AM +0100, Christian König wrote:
On 1/18/26 13:08, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
Document a DMA-buf revoke mechanism that allows an exporter to explicitly invalidate ("kill") a shared buffer after it has been handed out to importers. Once revoked, all further CPU and device access is blocked, and importers consistently observe failure.
This requires both importers and exporters to honor the revoke contract.
For importers, this means implementing .invalidate_mappings() and calling dma_buf_pin() after the DMA‑buf is attached to verify the exporter’s support for revocation.
For exporters, this means implementing the .pin() callback, which checks the DMA‑buf attachment for a valid revoke implementation.
Signed-off-by: Leon Romanovsky leonro@nvidia.com
include/linux/dma-buf.h | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
<...>
+ * Returns true if DMA-buf importer honors revoke semantics, which is
+ * negotiated with the exporter, by making sure that importer implements
+ * .invalidate_mappings() callback and calls to dma_buf_pin() after
+ * DMA-buf attach.
That wording is too unclear. Something like:

Returns true if the DMA-buf importer can handle invalidating its mappings at any time, even after pinning a buffer.
<...>
That's clearly not a good name. But that is already discussed in another thread.
<...>
Oh, we should have renamed that as well. Or maybe it is time to completely remove that config option.
<...>
This is checking exporter and not importer capabilities, please drop.
<...>
So when invalidate_mappings is implemented we need to be able to call it at any time. Yeah that sounds like a valid approach to me.
But we need to remove the RDMA callback with the warning then to properly signal that. And also please document that in the callback kerneldoc.
Will do, thanks
Regards, Christian.
+}
struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, struct device *dev); struct dma_buf_attachment *
On Sun, Jan 18, 2026 at 02:08:46PM +0200, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
Document a DMA-buf revoke mechanism that allows an exporter to explicitly invalidate ("kill") a shared buffer after it has been handed out to importers. Once revoked, all further CPU and device access is blocked, and importers consistently observe failure.
This requires both importers and exporters to honor the revoke contract.
For importers, this means implementing .invalidate_mappings() and calling dma_buf_pin() after the DMA‑buf is attached to verify the exporter’s support for revocation.
For exporters, this means implementing the .pin() callback, which checks the DMA‑buf attachment for a valid revoke implementation.
Signed-off-by: Leon Romanovsky leonro@nvidia.com
include/linux/dma-buf.h | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 1b397635c793..e0bc0b7119f5 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -579,6 +579,25 @@ static inline bool dma_buf_is_dynamic(struct dma_buf *dmabuf) return !!dmabuf->ops->pin; } +/**
+ * dma_buf_attachment_is_revoke - check if a DMA-buf importer implements
+ * revoke semantics.
+ * @attach: the DMA-buf attachment to check
+ *
+ * Returns true if DMA-buf importer honors revoke semantics, which is
+ * negotiated with the exporter, by making sure that importer implements
+ * .invalidate_mappings() callback and calls to dma_buf_pin() after
+ * DMA-buf attach.
+ */
I think this clarification should also have comment to dma_buf_move_notify(). Maybe like this:
@@ -1324,7 +1324,18 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_unmap_attachment_unlocked, "DMA_BUF"); * @dmabuf: [in] buffer which is moving * * Informs all attachments that they need to destroy and recreate all their - * mappings. + * mappings. If the attachment is dynamic then the dynamic importer is expected + * to invalidate any caches it has of the mapping result and perform a new + * mapping request before allowing HW to do any further DMA. + * + * If the attachment is pinned then this informs the pinned importer that + * the underlying mapping is no longer available. Pinned importers may take + * this is as a permanent revocation so exporters should not trigger it + * lightly. + * + * For legacy pinned importers that cannot support invalidation this is a NOP. + * Drivers can call dma_buf_attachment_is_revoke() to determine if the + * importer supports this. */
Also it would be nice to document what Christian pointed out regarding fences after move_notify.
+static inline bool
+dma_buf_attachment_is_revoke(struct dma_buf_attachment *attach)
+{
+	return IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY) &&
+	       dma_buf_is_dynamic(attach->dmabuf) &&
+	       (attach->importer_ops &&
+		attach->importer_ops->invalidate_mappings);
+}
And I don't think we should use a NULL invalidate_mappings function pointer to signal this.
It sounds like the direction is to require importers to support move_notify, so we should not make it easy to just drop a NULL in the ops struct to get out of the desired configuration.
I suggest defining a function "dma_buf_unsupported_invalidate_mappings" and use EXPORT_SYMBOL_FOR_MODULES so only RDMA can use it. Then check for that along with NULL importer_ops to cover the two cases where it is not allowed.
The only reason RDMA has to use dma_buf_dynamic_attach() is to set the allow_p2p=true ..
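Jason's sentinel idea can be illustrated with a standalone C sketch: a dedicated "unsupported" function lets the core distinguish an importer that deliberately opted out from one that merely forgot to set the callback. The names `dma_buf_unsupported_invalidate_mappings` and `importer_supports_invalidate` mirror the suggestion but are hypothetical here, and the EXPORT_SYMBOL_FOR_MODULES restriction is of course not modeled in userspace code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative mocks only -- not kernel code. */
struct dma_buf_attachment;

struct dma_buf_attach_ops {
	void (*invalidate_mappings)(struct dma_buf_attachment *attach);
};

struct dma_buf_attachment {
	const struct dma_buf_attach_ops *importer_ops;
};

/* Hypothetical sentinel: a legacy pinned importer (e.g. RDMA) sets this
 * instead of a real callback, so "cannot invalidate" is an explicit,
 * greppable choice rather than a silent NULL. Never actually called. */
static void dma_buf_unsupported_invalidate_mappings(struct dma_buf_attachment *attach)
{
	(void)attach; /* the real kernel version would WARN here */
}

/* A real importer callback, for contrast. */
static void real_invalidate(struct dma_buf_attachment *attach)
{
	(void)attach;
}

/* Core-side check: revoke is supported unless the attachment is static
 * (NULL ops) or the importer explicitly opted out via the sentinel. */
static bool importer_supports_invalidate(const struct dma_buf_attachment *attach)
{
	return attach->importer_ops &&
	       attach->importer_ops->invalidate_mappings !=
		       dma_buf_unsupported_invalidate_mappings;
}
```

Comparing against a known sentinel function pointer is what keeps the "opt out" path deliberate: a driver author has to name the unsupported helper explicitly instead of just leaving a field NULL.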
Jason
On Mon, Jan 19, 2026 at 12:44:21PM -0400, Jason Gunthorpe wrote:
On Sun, Jan 18, 2026 at 02:08:46PM +0200, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
Document a DMA-buf revoke mechanism that allows an exporter to explicitly invalidate ("kill") a shared buffer after it has been handed out to importers. Once revoked, all further CPU and device access is blocked, and importers consistently observe failure.
This requires both importers and exporters to honor the revoke contract.
For importers, this means implementing .invalidate_mappings() and calling dma_buf_pin() after the DMA‑buf is attached to verify the exporter’s support for revocation.
For exporters, this means implementing the .pin() callback, which checks the DMA‑buf attachment for a valid revoke implementation.
Signed-off-by: Leon Romanovsky leonro@nvidia.com
include/linux/dma-buf.h | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 1b397635c793..e0bc0b7119f5 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -579,6 +579,25 @@ static inline bool dma_buf_is_dynamic(struct dma_buf *dmabuf) return !!dmabuf->ops->pin; } +/**
+ * dma_buf_attachment_is_revoke - check if a DMA-buf importer implements
+ * revoke semantics.
+ * @attach: the DMA-buf attachment to check
+ *
+ * Returns true if DMA-buf importer honors revoke semantics, which is
+ * negotiated with the exporter, by making sure that importer implements
+ * .invalidate_mappings() callback and calls to dma_buf_pin() after
+ * DMA-buf attach.
+ */
I think this clarification should also have comment to dma_buf_move_notify(). Maybe like this:
@@ -1324,7 +1324,18 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_unmap_attachment_unlocked, "DMA_BUF");
  * @dmabuf:	[in]	buffer which is moving
  *
  * Informs all attachments that they need to destroy and recreate all their
- * mappings.
+ * mappings. If the attachment is dynamic then the dynamic importer is expected
+ * to invalidate any caches it has of the mapping result and perform a new
+ * mapping request before allowing HW to do any further DMA.
+ *
+ * If the attachment is pinned then this informs the pinned importer that
+ * the underlying mapping is no longer available. Pinned importers may take
+ * this is as a permanent revocation so exporters should not trigger it
+ * lightly.
+ *
+ * For legacy pinned importers that cannot support invalidation this is a NOP.
+ * Drivers can call dma_buf_attachment_is_revoke() to determine if the
+ * importer supports this.
  */
Also it would be nice to document what Christian pointed out regarding fences after move_notify.
I added this comment too: diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 6dd70f7b992d..478127dc63e9 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -1253,6 +1253,10 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_unlocked, "DMA_BUF"); * For legacy pinned importers that cannot support invalidation this is a NOP. * Drivers can call dma_buf_attach_revocable() to determine if the importer * supports this. + * + * NOTE: The invalidation triggers asynchronous HW operation and the callers + * need to wait for this operation to complete by calling + * to dma_resv_wait_timeout(). */ void dma_buf_move_notify(struct dma_buf *dmabuf) {
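The ordering that this NOTE documents — notify importers first, then wait for outstanding work before touching the backing store — can be sketched in standalone C. The structures and the `wait_fences()` stand-in for dma_resv_wait_timeout() are purely illustrative mocks, not kernel code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative mocks of an exporter-driven revoke -- not kernel code. */
#define MAX_ATTACH 4

struct attachment {
	bool mapped; /* importer still holds a cached mapping */
};

struct buffer {
	struct attachment *attachments[MAX_ATTACH];
	int n_attach;
	int pending_fences;  /* stand-in for unsignaled resv fences */
	bool backing_alive;
};

/* Importer callback: drop the cached mapping result. */
static void invalidate_mappings(struct attachment *a)
{
	a->mapped = false;
}

/* Stand-in for dma_resv_wait_timeout(): pretend HW drained and all
 * fences signaled. */
static void wait_fences(struct buffer *buf)
{
	buf->pending_fences = 0;
}

/* Revoke sequence: notify every attachment, wait for async HW work,
 * and only then tear down the backing store. */
static void revoke(struct buffer *buf)
{
	for (int i = 0; i < buf->n_attach; i++)
		invalidate_mappings(buf->attachments[i]);
	wait_fences(buf);           /* dma_resv_wait_timeout() analogue */
	buf->backing_alive = false; /* safe: no importer can touch it now */
}
```

Freeing the backing store before the wait would be the bug the NOTE warns about: invalidation only queues the teardown; the fence wait is what guarantees the hardware is actually done.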
+static inline bool
+dma_buf_attachment_is_revoke(struct dma_buf_attachment *attach)
+{
+	return IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY) &&
+	       dma_buf_is_dynamic(attach->dmabuf) &&
+	       (attach->importer_ops &&
+		attach->importer_ops->invalidate_mappings);
+}
And I don't think we should use a NULL invalidate_mappings function pointer to signal this.
It sounds like the direction is to require importers to support move_notify, so we should not make it easy to just drop a NULL in the ops struct to get out of the desired configuration.
I suggest defining a function "dma_buf_unsupported_invalidate_mappings" and use EXPORT_SYMBOL_FOR_MODULES so only RDMA can use it. Then check for that along with NULL importer_ops to cover the two cases where it is not allowed.
The only reason RDMA has to use dma_buf_dynamic_attach() is to set the allow_p2p=true ..
Will do.
Jason
From: Leon Romanovsky leonro@nvidia.com
IOMMUFD does not support page fault handling, and after a call to .invalidate_mappings() all mappings become invalid. Ensure that the IOMMUFD DMABUF importer is bound to a revoke-aware DMABUF exporter (for example, VFIO).
Signed-off-by: Leon Romanovsky leonro@nvidia.com --- drivers/iommu/iommufd/pages.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c index 76f900fa1687..a5eb2bc4ef48 100644 --- a/drivers/iommu/iommufd/pages.c +++ b/drivers/iommu/iommufd/pages.c @@ -1501,16 +1501,22 @@ static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages, mutex_unlock(&pages->mutex); }
-	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	rc = dma_buf_pin(attach);
 	if (rc)
 		goto err_detach;
 
+	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	if (rc)
+		goto err_unpin;
+
 	dma_resv_unlock(dmabuf->resv);
/* On success iopt_release_pages() will detach and put the dmabuf. */ pages->dmabuf.attach = attach; return 0;
+err_unpin:
+	dma_buf_unpin(attach);
 err_detach:
 	dma_resv_unlock(dmabuf->resv);
 	dma_buf_detach(dmabuf, attach);
@@ -1656,6 +1662,7 @@ void iopt_release_pages(struct kref *kref)
 	if (iopt_is_dmabuf(pages) && pages->dmabuf.attach) {
 		struct dma_buf *dmabuf = pages->dmabuf.attach->dmabuf;
 
+		dma_buf_unpin(pages->dmabuf.attach);
 		dma_buf_detach(dmabuf, pages->dmabuf.attach);
 		dma_buf_put(dmabuf);
 		WARN_ON(!list_empty(&pages->dmabuf.tracker));
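The attach → pin → map sequence and its goto-style unwind in the patch above can be modeled in standalone C. The stubs below are purely illustrative (the real iopt_map_dmabuf() also handles locking and the pages structure); the point is that a failure at the map step releases resources in reverse order, unpinning before detaching.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Illustrative mock of the acquire/unwind ordering -- not iommufd code. */
enum step { DETACHED, ATTACHED, PINNED };

static enum step state = DETACHED;
static bool fail_map; /* test knob: make the map step fail */

static int do_attach(void) { state = ATTACHED; return 0; }
static int do_pin(void)    { state = PINNED;   return 0; }
static int do_map(void)    { return fail_map ? -ENODEV : 0; }
static void do_unpin(void) { state = ATTACHED; }
static void do_detach(void){ state = DETACHED; }

/* Mirrors the shape of iopt_map_dmabuf(): attach, pin, map, with
 * kernel-style goto unwinding on error. */
static int map_dmabuf(void)
{
	int rc = do_attach();

	if (rc)
		return rc;
	rc = do_pin();
	if (rc)
		goto err_detach;
	rc = do_map();
	if (rc)
		goto err_unpin;
	return 0;

err_unpin:
	do_unpin();
err_detach:
	do_detach();
	return rc;
}
```

The fall-through from err_unpin into err_detach is what encodes "reverse order": every label undoes exactly one acquisition, so jumping to the right label releases everything obtained so far and nothing more.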
On Sun, Jan 18, 2026 at 02:08:47PM +0200, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
IOMMUFD does not support page fault handling, and after a call to .invalidate_mappings() all mappings become invalid. Ensure that the IOMMUFD DMABUF importer is bound to a revoke‑aware DMABUF exporter (for example, VFIO).
Signed-off-by: Leon Romanovsky leonro@nvidia.com
drivers/iommu/iommufd/pages.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c index 76f900fa1687..a5eb2bc4ef48 100644 --- a/drivers/iommu/iommufd/pages.c +++ b/drivers/iommu/iommufd/pages.c @@ -1501,16 +1501,22 @@ static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages, mutex_unlock(&pages->mutex); }
-	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	rc = dma_buf_pin(attach);
 	if (rc)
 		goto err_detach;
 
+	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	if (rc)
+		goto err_unpin;
+
 	dma_resv_unlock(dmabuf->resv);
/* On success iopt_release_pages() will detach and put the dmabuf. */ pages->dmabuf.attach = attach; return 0;
Don't we need an explicit unpin after unmapping?
Jason
On Mon, Jan 19, 2026 at 12:59:51PM -0400, Jason Gunthorpe wrote:
On Sun, Jan 18, 2026 at 02:08:47PM +0200, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
IOMMUFD does not support page fault handling, and after a call to .invalidate_mappings() all mappings become invalid. Ensure that the IOMMUFD DMABUF importer is bound to a revoke‑aware DMABUF exporter (for example, VFIO).
Signed-off-by: Leon Romanovsky leonro@nvidia.com
drivers/iommu/iommufd/pages.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c index 76f900fa1687..a5eb2bc4ef48 100644 --- a/drivers/iommu/iommufd/pages.c +++ b/drivers/iommu/iommufd/pages.c @@ -1501,16 +1501,22 @@ static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages, mutex_unlock(&pages->mutex); }
-	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	rc = dma_buf_pin(attach);
 	if (rc)
 		goto err_detach;
 
+	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	if (rc)
+		goto err_unpin;
+
 	dma_resv_unlock(dmabuf->resv);
/* On success iopt_release_pages() will detach and put the dmabuf. */ pages->dmabuf.attach = attach; return 0;
Don't we need an explicit unpin after unmapping?
Yes, but this patch is going to be dropped in v3 because of this suggestion. https://lore.kernel.org/all/a397ff1e-615f-4873-98a9-940f9c16f85c@amd.com
Thanks
Jason
On Mon, Jan 19, 2026 at 08:23:00PM +0200, Leon Romanovsky wrote:
On Mon, Jan 19, 2026 at 12:59:51PM -0400, Jason Gunthorpe wrote:
On Sun, Jan 18, 2026 at 02:08:47PM +0200, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
IOMMUFD does not support page fault handling, and after a call to .invalidate_mappings() all mappings become invalid. Ensure that the IOMMUFD DMABUF importer is bound to a revoke‑aware DMABUF exporter (for example, VFIO).
Signed-off-by: Leon Romanovsky leonro@nvidia.com
drivers/iommu/iommufd/pages.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c index 76f900fa1687..a5eb2bc4ef48 100644 --- a/drivers/iommu/iommufd/pages.c +++ b/drivers/iommu/iommufd/pages.c @@ -1501,16 +1501,22 @@ static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages, mutex_unlock(&pages->mutex); }
-	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	rc = dma_buf_pin(attach);
 	if (rc)
 		goto err_detach;
 
+	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	if (rc)
+		goto err_unpin;
+
 	dma_resv_unlock(dmabuf->resv);
/* On success iopt_release_pages() will detach and put the dmabuf. */ pages->dmabuf.attach = attach; return 0;
Don't we need an explicit unpin after unmapping?
Yes, but this patch is going to be dropped in v3 because of this suggestion. https://lore.kernel.org/all/a397ff1e-615f-4873-98a9-940f9c16f85c@amd.com
That's not right, that suggestion is about changing VFIO. iommufd must still act as a pinning importer!
Jason
On Mon, Jan 19, 2026 at 03:54:44PM -0400, Jason Gunthorpe wrote:
On Mon, Jan 19, 2026 at 08:23:00PM +0200, Leon Romanovsky wrote:
On Mon, Jan 19, 2026 at 12:59:51PM -0400, Jason Gunthorpe wrote:
On Sun, Jan 18, 2026 at 02:08:47PM +0200, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
IOMMUFD does not support page fault handling, and after a call to .invalidate_mappings() all mappings become invalid. Ensure that the IOMMUFD DMABUF importer is bound to a revoke‑aware DMABUF exporter (for example, VFIO).
Signed-off-by: Leon Romanovsky leonro@nvidia.com
drivers/iommu/iommufd/pages.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c
index 76f900fa1687..a5eb2bc4ef48 100644
--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -1501,16 +1501,22 @@ static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages,
 		mutex_unlock(&pages->mutex);
 	}

-	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	rc = dma_buf_pin(attach);
 	if (rc)
 		goto err_detach;
+
+	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	if (rc)
+		goto err_unpin;
+
 	dma_resv_unlock(dmabuf->resv);

 	/* On success iopt_release_pages() will detach and put the dmabuf. */
 	pages->dmabuf.attach = attach;
 	return 0;
Don't we need an explicit unpin after unmapping?
Yes, but this patch is going to be dropped in v3 because of this suggestion. https://lore.kernel.org/all/a397ff1e-615f-4873-98a9-940f9c16f85c@amd.com
That's not right, that suggestion is about changing VFIO. iommufd must still act as a pinning importer!
There is no change in iommufd, as it invokes dma_buf_dynamic_attach() with a valid &iopt_dmabuf_attach_revoke_ops. The check determining whether iommufd can perform a revoke is handled there.
Thanks
Jason
On Tue, Jan 20, 2026 at 03:10:46PM +0200, Leon Romanovsky wrote:
On Mon, Jan 19, 2026 at 03:54:44PM -0400, Jason Gunthorpe wrote:
On Mon, Jan 19, 2026 at 08:23:00PM +0200, Leon Romanovsky wrote:
On Mon, Jan 19, 2026 at 12:59:51PM -0400, Jason Gunthorpe wrote:
On Sun, Jan 18, 2026 at 02:08:47PM +0200, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
IOMMUFD does not support page fault handling, and after a call to .invalidate_mappings() all mappings become invalid. Ensure that the IOMMUFD DMABUF importer is bound to a revoke‑aware DMABUF exporter (for example, VFIO).
Signed-off-by: Leon Romanovsky leonro@nvidia.com
drivers/iommu/iommufd/pages.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c
index 76f900fa1687..a5eb2bc4ef48 100644
--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -1501,16 +1501,22 @@ static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages,
 		mutex_unlock(&pages->mutex);
 	}

-	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	rc = dma_buf_pin(attach);
 	if (rc)
 		goto err_detach;
+
+	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	if (rc)
+		goto err_unpin;
+
 	dma_resv_unlock(dmabuf->resv);

 	/* On success iopt_release_pages() will detach and put the dmabuf. */
 	pages->dmabuf.attach = attach;
 	return 0;
Don't we need an explicit unpin after unmapping?
Yes, but this patch is going to be dropped in v3 because of this suggestion. https://lore.kernel.org/all/a397ff1e-615f-4873-98a9-940f9c16f85c@amd.com
That's not right, that suggestion is about changing VFIO. iommufd must still act as a pinning importer!
There is no change in iommufd, as it invokes dma_buf_dynamic_attach() with a valid &iopt_dmabuf_attach_revoke_ops. The check determining whether iommufd can perform a revoke is handled there.
iommufd is a pinning importer. I did not add a call to pin because it previously worked only with VFIO, which would not support it. Now that this series fixes that, the pin must be added. Don't drop this patch.
All the explanations we just gave say this special revoke mode only activates if the buffer is pinned by the importer, so iommufd must pin it. Otherwise it declares that it is working in the move mode with faulting, which it cannot support.
Jason
On Tue, Jan 20, 2026 at 09:15:30AM -0400, Jason Gunthorpe wrote:
On Tue, Jan 20, 2026 at 03:10:46PM +0200, Leon Romanovsky wrote:
On Mon, Jan 19, 2026 at 03:54:44PM -0400, Jason Gunthorpe wrote:
On Mon, Jan 19, 2026 at 08:23:00PM +0200, Leon Romanovsky wrote:
On Mon, Jan 19, 2026 at 12:59:51PM -0400, Jason Gunthorpe wrote:
On Sun, Jan 18, 2026 at 02:08:47PM +0200, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
IOMMUFD does not support page fault handling, and after a call to .invalidate_mappings() all mappings become invalid. Ensure that the IOMMUFD DMABUF importer is bound to a revoke‑aware DMABUF exporter (for example, VFIO).
Signed-off-by: Leon Romanovsky leonro@nvidia.com
drivers/iommu/iommufd/pages.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c
index 76f900fa1687..a5eb2bc4ef48 100644
--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -1501,16 +1501,22 @@ static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages,
 		mutex_unlock(&pages->mutex);
 	}

-	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	rc = dma_buf_pin(attach);
 	if (rc)
 		goto err_detach;
+
+	rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys);
+	if (rc)
+		goto err_unpin;
+
 	dma_resv_unlock(dmabuf->resv);

 	/* On success iopt_release_pages() will detach and put the dmabuf. */
 	pages->dmabuf.attach = attach;
 	return 0;
Don't we need an explicit unpin after unmapping?
Yes, but this patch is going to be dropped in v3 because of this suggestion. https://lore.kernel.org/all/a397ff1e-615f-4873-98a9-940f9c16f85c@amd.com
That's not right, that suggestion is about changing VFIO. iommufd must still act as a pinning importer!
There is no change in iommufd, as it invokes dma_buf_dynamic_attach() with a valid &iopt_dmabuf_attach_revoke_ops. The check determining whether iommufd can perform a revoke is handled there.
iommufd is a pinning importer. I did not add a call to pin because it previously worked only with VFIO, which would not support it. Now that this series fixes that, the pin must be added. Don't drop this patch.
No problem, let's keep it.
Thanks
From: Leon Romanovsky leonro@nvidia.com
The DMABUF ->pin() interface is called when the DMABUF importer performs its DMA mapping, so let's use this opportunity to check whether the DMABUF exporter has revoked its buffer.
Signed-off-by: Leon Romanovsky leonro@nvidia.com --- drivers/vfio/pci/vfio_pci_dmabuf.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index d4d0f7d08c53..af9c315ddf71 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -20,6 +20,20 @@ struct vfio_pci_dma_buf {
 	u8 revoked : 1;
 };

+static int vfio_pci_dma_buf_pin(struct dma_buf_attachment *attachment)
+{
+	struct vfio_pci_dma_buf *priv = attachment->dmabuf->priv;
+
+	dma_resv_assert_held(priv->dmabuf->resv);
+
+	return dma_buf_attachment_is_revoke(attachment) ? 0 : -EOPNOTSUPP;
+}
+
+static void vfio_pci_dma_buf_unpin(struct dma_buf_attachment *attachment)
+{
+	/* Do nothing */
+}
+
 static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
 				   struct dma_buf_attachment *attachment)
 {
@@ -76,6 +90,8 @@ static void vfio_pci_dma_buf_release(struct dma_buf *dmabuf)
 }

 static const struct dma_buf_ops vfio_pci_dmabuf_ops = {
+	.pin = vfio_pci_dma_buf_pin,
+	.unpin = vfio_pci_dma_buf_unpin,
 	.attach = vfio_pci_dma_buf_attach,
 	.map_dma_buf = vfio_pci_dma_buf_map,
 	.unmap_dma_buf = vfio_pci_dma_buf_unmap,
On 1/18/26 13:08, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
The DMABUF ->pin() interface is called when the DMABUF importer performs its DMA mapping, so let's use this opportunity to check whether the DMABUF exporter has revoked its buffer.
Signed-off-by: Leon Romanovsky leonro@nvidia.com
drivers/vfio/pci/vfio_pci_dmabuf.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index d4d0f7d08c53..af9c315ddf71 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -20,6 +20,20 @@ struct vfio_pci_dma_buf {
 	u8 revoked : 1;
 };

+static int vfio_pci_dma_buf_pin(struct dma_buf_attachment *attachment)
+{
+	struct vfio_pci_dma_buf *priv = attachment->dmabuf->priv;
+
+	dma_resv_assert_held(priv->dmabuf->resv);
+
+	return dma_buf_attachment_is_revoke(attachment) ? 0 : -EOPNOTSUPP;
It's probably better to do that check in vfio_pci_dma_buf_attach.
And BTW the function vfio_pci_dma_buf_move() seems to be broken:
void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
{
	struct vfio_pci_dma_buf *priv;
	struct vfio_pci_dma_buf *tmp;

	lockdep_assert_held_write(&vdev->memory_lock);

	list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
		if (!get_file_active(&priv->dmabuf->file))
			continue;

		if (priv->revoked != revoked) {
			dma_resv_lock(priv->dmabuf->resv, NULL);
			priv->revoked = revoked;
			dma_buf_move_notify(priv->dmabuf);
A dma_buf_move_notify() just triggers asynchronous invalidation of the mapping!
You need to use dma_resv_wait() to wait for that to finish.
			dma_resv_unlock(priv->dmabuf->resv);
		}
		fput(priv->dmabuf->file);
	}
}
Regards, Christian.
+}
+static void vfio_pci_dma_buf_unpin(struct dma_buf_attachment *attachment)
+{
+	/* Do nothing */
+}
 static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
 				   struct dma_buf_attachment *attachment)
 {
@@ -76,6 +90,8 @@ static void vfio_pci_dma_buf_release(struct dma_buf *dmabuf)
 }

 static const struct dma_buf_ops vfio_pci_dmabuf_ops = {
+	.pin = vfio_pci_dma_buf_pin,
+	.unpin = vfio_pci_dma_buf_unpin,
 	.attach = vfio_pci_dma_buf_attach,
 	.map_dma_buf = vfio_pci_dma_buf_map,
 	.unmap_dma_buf = vfio_pci_dma_buf_unmap,
On Mon, Jan 19, 2026 at 01:12:45PM +0100, Christian König wrote:
On 1/18/26 13:08, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
The DMABUF ->pin() interface is called when the DMABUF importer performs its DMA mapping, so let's use this opportunity to check whether the DMABUF exporter has revoked its buffer.
Signed-off-by: Leon Romanovsky leonro@nvidia.com
drivers/vfio/pci/vfio_pci_dmabuf.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index d4d0f7d08c53..af9c315ddf71 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -20,6 +20,20 @@ struct vfio_pci_dma_buf {
 	u8 revoked : 1;
 };

+static int vfio_pci_dma_buf_pin(struct dma_buf_attachment *attachment)
+{
+	struct vfio_pci_dma_buf *priv = attachment->dmabuf->priv;
+
+	dma_resv_assert_held(priv->dmabuf->resv);
+
+	return dma_buf_attachment_is_revoke(attachment) ? 0 : -EOPNOTSUPP;
It's probably better to do that check in vfio_pci_dma_buf_attach.
I assume you are proposing to add this check in both vfio_pci_dma_buf_attach() and vfio_pci_dma_buf_pin(). Otherwise, importers that lack .invalidate_mapping() will invoke dma_buf_pin() and will not fail.
And BTW the function vfio_pci_dma_buf_move() seems to be broken:
void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
{
	struct vfio_pci_dma_buf *priv;
	struct vfio_pci_dma_buf *tmp;

	lockdep_assert_held_write(&vdev->memory_lock);

	list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
		if (!get_file_active(&priv->dmabuf->file))
			continue;

		if (priv->revoked != revoked) {
			dma_resv_lock(priv->dmabuf->resv, NULL);
			priv->revoked = revoked;
			dma_buf_move_notify(priv->dmabuf);

A dma_buf_move_notify() just triggers asynchronous invalidation of the mapping!
You need to use dma_resv_wait() to wait for that to finish.
We (VFIO and IOMMUFD) followed the same pattern used in amdgpu_bo_move_notify(), which also does not wait.
I'll add wait here.
Thanks
			dma_resv_unlock(priv->dmabuf->resv);
		}
		fput(priv->dmabuf->file);
	}
}
Regards, Christian.
+}
+static void vfio_pci_dma_buf_unpin(struct dma_buf_attachment *attachment)
+{
+	/* Do nothing */
+}
 static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
 				   struct dma_buf_attachment *attachment)
 {
@@ -76,6 +90,8 @@ static void vfio_pci_dma_buf_release(struct dma_buf *dmabuf)
 }

 static const struct dma_buf_ops vfio_pci_dmabuf_ops = {
+	.pin = vfio_pci_dma_buf_pin,
+	.unpin = vfio_pci_dma_buf_unpin,
 	.attach = vfio_pci_dma_buf_attach,
 	.map_dma_buf = vfio_pci_dma_buf_map,
 	.unmap_dma_buf = vfio_pci_dma_buf_unmap,
On 1/19/26 14:02, Leon Romanovsky wrote:
On Mon, Jan 19, 2026 at 01:12:45PM +0100, Christian König wrote:
On 1/18/26 13:08, Leon Romanovsky wrote:
From: Leon Romanovsky leonro@nvidia.com
The DMABUF ->pin() interface is called when the DMABUF importer performs its DMA mapping, so let's use this opportunity to check whether the DMABUF exporter has revoked its buffer.
Signed-off-by: Leon Romanovsky leonro@nvidia.com
drivers/vfio/pci/vfio_pci_dmabuf.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index d4d0f7d08c53..af9c315ddf71 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -20,6 +20,20 @@ struct vfio_pci_dma_buf {
 	u8 revoked : 1;
 };

+static int vfio_pci_dma_buf_pin(struct dma_buf_attachment *attachment)
+{
+	struct vfio_pci_dma_buf *priv = attachment->dmabuf->priv;
+
+	dma_resv_assert_held(priv->dmabuf->resv);
+
+	return dma_buf_attachment_is_revoke(attachment) ? 0 : -EOPNOTSUPP;
It's probably better to do that check in vfio_pci_dma_buf_attach.
I assume you are proposing to add this check in both vfio_pci_dma_buf_attach() and vfio_pci_dma_buf_pin(). Otherwise, importers that lack .invalidate_mapping() will invoke dma_buf_pin() and will not fail.
vfio_pci_dma_buf_attach() alone should be sufficient. It is always called, even for importers lacking invalidate_mapping().
Regards, Christian.
And BTW the function vfio_pci_dma_buf_move() seems to be broken:
void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
{
	struct vfio_pci_dma_buf *priv;
	struct vfio_pci_dma_buf *tmp;

	lockdep_assert_held_write(&vdev->memory_lock);

	list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
		if (!get_file_active(&priv->dmabuf->file))
			continue;

		if (priv->revoked != revoked) {
			dma_resv_lock(priv->dmabuf->resv, NULL);
			priv->revoked = revoked;
			dma_buf_move_notify(priv->dmabuf);

A dma_buf_move_notify() just triggers asynchronous invalidation of the mapping!
You need to use dma_resv_wait() to wait for that to finish.
We (VFIO and IOMMUFD) followed the same pattern used in amdgpu_bo_move_notify(), which also does not wait.
I'll add wait here.
Thanks
			dma_resv_unlock(priv->dmabuf->resv);
		}
		fput(priv->dmabuf->file);
	}
}
Regards, Christian.
+}
+static void vfio_pci_dma_buf_unpin(struct dma_buf_attachment *attachment)
+{
+	/* Do nothing */
+}
 static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
 				   struct dma_buf_attachment *attachment)
 {
@@ -76,6 +90,8 @@ static void vfio_pci_dma_buf_release(struct dma_buf *dmabuf)
 }

 static const struct dma_buf_ops vfio_pci_dmabuf_ops = {
+	.pin = vfio_pci_dma_buf_pin,
+	.unpin = vfio_pci_dma_buf_unpin,
 	.attach = vfio_pci_dma_buf_attach,
 	.map_dma_buf = vfio_pci_dma_buf_map,
 	.unmap_dma_buf = vfio_pci_dma_buf_unmap,
On Mon, Jan 19, 2026 at 03:02:44PM +0200, Leon Romanovsky wrote:
We (VFIO and IOMMUFD) followed the same pattern used in amdgpu_bo_move_notify(), which also does not wait.
You have to be really careful copying anything from the GPU drivers, as they have these waits hidden and batched in other parts of their operations.
Jason
Hi, Leon,
On Sun, 2026-01-18 at 14:08 +0200, Leon Romanovsky wrote:
Changelog: v2: * Changed series to document the revoke semantics instead of implementing it. v1: https://patch.msgid.link/20260111-dmabuf-revoke-v1-0-fb4bcc8c259b@nvidia.com
This series documents a dma-buf “revoke” mechanism: to allow a dma- buf exporter to explicitly invalidate (“kill”) a shared buffer after it has been distributed to importers, so that further CPU and device access is prevented and importers reliably observe failure.
The change in this series is to properly document and use existing core “revoked” state on the dma-buf object and a corresponding exporter- triggered revoke operation. Once a dma-buf is revoked, new access paths are blocked so that attempts to DMA-map, vmap, or mmap the buffer fail in a consistent way.
This sounds like it does not match how many GPU-drivers use the move_notify() callback.
move_notify() would typically invalidate any device maps, and any asynchronous part of that invalidation would be complete when the dma-buf's reservation object becomes idle WRT DMA_RESV_USAGE_BOOKKEEP fences.
However, the importer could, after obtaining the resv lock, obtain a new map using dma_buf_map_attachment(), and I'd assume the CPU maps work in the same way, i.e. move_notify() does not *permanently* revoke importer access.
/Thomas
Thanks
Cc: linux-media@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Cc: linaro-mm-sig@lists.linaro.org Cc: linux-kernel@vger.kernel.org Cc: amd-gfx@lists.freedesktop.org Cc: virtualization@lists.linux.dev Cc: intel-xe@lists.freedesktop.org Cc: linux-rdma@vger.kernel.org Cc: iommu@lists.linux.dev Cc: kvm@vger.kernel.org To: Sumit Semwal sumit.semwal@linaro.org To: Christian König christian.koenig@amd.com To: Alex Deucher alexander.deucher@amd.com To: David Airlie airlied@gmail.com To: Simona Vetter simona@ffwll.ch To: Gerd Hoffmann kraxel@redhat.com To: Dmitry Osipenko dmitry.osipenko@collabora.com To: Gurchetan Singh gurchetansingh@chromium.org To: Chia-I Wu olvaffe@gmail.com To: Maarten Lankhorst maarten.lankhorst@linux.intel.com To: Maxime Ripard mripard@kernel.org To: Thomas Zimmermann tzimmermann@suse.de To: Lucas De Marchi lucas.demarchi@intel.com To: Thomas Hellström thomas.hellstrom@linux.intel.com To: Rodrigo Vivi rodrigo.vivi@intel.com To: Jason Gunthorpe jgg@ziepe.ca To: Leon Romanovsky leon@kernel.org To: Kevin Tian kevin.tian@intel.com To: Joerg Roedel joro@8bytes.org To: Will Deacon will@kernel.org To: Robin Murphy robin.murphy@arm.com To: Alex Williamson alex@shazbot.org
Leon Romanovsky (4): dma-buf: Rename .move_notify() callback to a clearer identifier dma-buf: Document revoke semantics iommufd: Require DMABUF revoke semantics vfio: Add pinned interface to perform revoke semantics
drivers/dma-buf/dma-buf.c | 6 +++--- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 4 ++-- drivers/gpu/drm/virtio/virtgpu_prime.c | 2 +- drivers/gpu/drm/xe/tests/xe_dma_buf.c | 6 +++--- drivers/gpu/drm/xe/xe_dma_buf.c | 2 +- drivers/infiniband/core/umem_dmabuf.c | 4 ++-- drivers/infiniband/hw/mlx5/mr.c | 2 +- drivers/iommu/iommufd/pages.c | 11 +++++++++-- drivers/vfio/pci/vfio_pci_dmabuf.c | 16 ++++++++++++++++ include/linux/dma-buf.h | 25 ++++++++++++++++++++++--- 10 files changed, 60 insertions(+), 18 deletions(-)
base-commit: 9ace4753a5202b02191d54e9fdf7f9e3d02b85eb change-id: 20251221-dmabuf-revoke-b90ef16e4236
Best regards, -- Leon Romanovsky leonro@nvidia.com
On Sun, Jan 18, 2026 at 03:16:25PM +0100, Thomas Hellström wrote:
Hi, Leon,
On Sun, 2026-01-18 at 14:08 +0200, Leon Romanovsky wrote:
Changelog: v2: * Changed series to document the revoke semantics instead of implementing it. v1: https://patch.msgid.link/20260111-dmabuf-revoke-v1-0-fb4bcc8c259b@nvidia.com
This series documents a dma-buf “revoke” mechanism: to allow a dma-buf exporter to explicitly invalidate (“kill”) a shared buffer after it has been distributed to importers, so that further CPU and device access is prevented and importers reliably observe failure.
The change in this series is to properly document and use existing core “revoked” state on the dma-buf object and a corresponding exporter-triggered revoke operation. Once a dma-buf is revoked, new access paths are blocked so that attempts to DMA-map, vmap, or mmap the buffer fail in a consistent way.
This sounds like it does not match how many GPU-drivers use the move_notify() callback.
No change for them.
move_notify() would typically invalidate any device maps, and any asynchronous part of that invalidation would be complete when the dma-buf's reservation object becomes idle WRT DMA_RESV_USAGE_BOOKKEEP fences.
This part has not changed and remains the same for the revocation flow as well.
However, the importer could, after obtaining the resv lock, obtain a new map using dma_buf_map_attachment(), and I'd assume the CPU maps work in the same way, i.e. move_notify() does not *permanently* revoke importer access.
This part diverges by design and is documented to match revoke semantics. It defines what must occur after the exporter requests that the buffer be "killed". An importer that follows revoke semantics will not attempt to call dma_buf_map_attachment(), and the exporter will block any remapping attempts regardless. See the priv->revoked flag in the VFIO exporter.
In addition, in this email thread, Christian explains that revoke semantics already exists, with the combination of dma_buf_pin and dma_buf_move_notify, just not documented: https://lore.kernel.org/all/f7f1856a-44fa-44af-b496-eb1267a05d11@amd.com/
Thanks
/Thomas
Thanks
On Mon, 2026-01-19 at 09:52 +0200, Leon Romanovsky wrote:
On Sun, Jan 18, 2026 at 03:16:25PM +0100, Thomas Hellström wrote:
Hi, Leon,
On Sun, 2026-01-18 at 14:08 +0200, Leon Romanovsky wrote:
Changelog: v2: * Changed series to document the revoke semantics instead of implementing it. v1: https://patch.msgid.link/20260111-dmabuf-revoke-v1-0-fb4bcc8c259b@nvidia.com
This series documents a dma-buf “revoke” mechanism: to allow a dma-buf exporter to explicitly invalidate (“kill”) a shared buffer after it has been distributed to importers, so that further CPU and device access is prevented and importers reliably observe failure.
The change in this series is to properly document and use existing core “revoked” state on the dma-buf object and a corresponding exporter-triggered revoke operation. Once a dma-buf is revoked, new access paths are blocked so that attempts to DMA-map, vmap, or mmap the buffer fail in a consistent way.
This sounds like it does not match how many GPU-drivers use the move_notify() callback.
No change for them.
move_notify() would typically invalidate any device maps, and any asynchronous part of that invalidation would be complete when the dma-buf's reservation object becomes idle WRT DMA_RESV_USAGE_BOOKKEEP fences.
This part has not changed and remains the same for the revocation flow as well.
However, the importer could, after obtaining the resv lock, obtain a new map using dma_buf_map_attachment(), and I'd assume the CPU maps work in the same way, i.e. move_notify() does not *permanently* revoke importer access.
This part diverges by design and is documented to match revoke semantics. It defines what must occur after the exporter requests that the buffer be "killed". An importer that follows revoke semantics will not attempt to call dma_buf_map_attachment(), and the exporter will block any remapping attempts regardless. See the priv->revoked flag in the VFIO exporter.
In addition, in this email thread, Christian explains that revoke semantics already exists, with the combination of dma_buf_pin and dma_buf_move_notify, just not documented: https://lore.kernel.org/all/f7f1856a-44fa-44af-b496-eb1267a05d11@amd.com/
Hmm,
Considering
https://elixir.bootlin.com/linux/v6.19-rc5/source/drivers/infiniband/core/um...
this sounds like it's not just undocumented but also in some cases unimplemented. The xe driver for one doesn't expect move_notify() to be called on pinned buffers, so if that is indeed going to be part of the dma-buf protocol, wouldn't support for that need to be advertised by the importer?
Thanks, Thomas
Thanks
/Thomas
Thanks
On Mon, Jan 19, 2026 at 10:27:00AM +0100, Thomas Hellström wrote:
On Mon, 2026-01-19 at 09:52 +0200, Leon Romanovsky wrote:
On Sun, Jan 18, 2026 at 03:16:25PM +0100, Thomas Hellström wrote:
Hi, Leon,
On Sun, 2026-01-18 at 14:08 +0200, Leon Romanovsky wrote:
Changelog: v2: * Changed series to document the revoke semantics instead of implementing it. v1: https://patch.msgid.link/20260111-dmabuf-revoke-v1-0-fb4bcc8c259b@nvidia.com
This series documents a dma-buf “revoke” mechanism: to allow a dma-buf exporter to explicitly invalidate (“kill”) a shared buffer after it has been distributed to importers, so that further CPU and device access is prevented and importers reliably observe failure.
The change in this series is to properly document and use existing core “revoked” state on the dma-buf object and a corresponding exporter-triggered revoke operation. Once a dma-buf is revoked, new access paths are blocked so that attempts to DMA-map, vmap, or mmap the buffer fail in a consistent way.
This sounds like it does not match how many GPU-drivers use the move_notify() callback.
No change for them.
move_notify() would typically invalidate any device maps, and any asynchronous part of that invalidation would be complete when the dma-buf's reservation object becomes idle WRT DMA_RESV_USAGE_BOOKKEEP fences.
This part has not changed and remains the same for the revocation flow as well.
However, the importer could, after obtaining the resv lock, obtain a new map using dma_buf_map_attachment(), and I'd assume the CPU maps work in the same way, i.e. move_notify() does not *permanently* revoke importer access.
This part diverges by design and is documented to match revoke semantics. It defines what must occur after the exporter requests that the buffer be "killed". An importer that follows revoke semantics will not attempt to call dma_buf_map_attachment(), and the exporter will block any remapping attempts regardless. See the priv->revoked flag in the VFIO exporter.
In addition, in this email thread, Christian explains that revoke semantics already exists, with the combination of dma_buf_pin and dma_buf_move_notify, just not documented: https://lore.kernel.org/all/f7f1856a-44fa-44af-b496-eb1267a05d11@amd.com/
Hmm,
Considering
https://elixir.bootlin.com/linux/v6.19-rc5/source/drivers/infiniband/core/um...
this sounds like it's not just undocumented but also in some cases unimplemented.
Yes, it was discussed later in the thread https://lore.kernel.org/all/20260112153503.GF745888@ziepe.ca/. RDMA will need some adjustments later, but first we need to document the existing semantics.
The xe driver for one doesn't expect move_notify() to be called on pinned buffers, so if that is indeed going to be part of the dma-buf protocol, wouldn't support for that need to be advertised by the importer?
This is what Jason proposed with "enum dma_buf_move_notify_level", but for some reason we got no responses.
Thanks
Thanks, Thomas
Thanks
/Thomas
Thanks
On 1/19/26 10:27, Thomas Hellström wrote:
On Mon, 2026-01-19 at 09:52 +0200, Leon Romanovsky wrote:
On Sun, Jan 18, 2026 at 03:16:25PM +0100, Thomas Hellström wrote:
Hi, Leon,
On Sun, 2026-01-18 at 14:08 +0200, Leon Romanovsky wrote:
Changelog: v2: * Changed series to document the revoke semantics instead of implementing it. v1: https://patch.msgid.link/20260111-dmabuf-revoke-v1-0-fb4bcc8c259b@nvidia.com
This series documents a dma-buf “revoke” mechanism: to allow a dma- buf exporter to explicitly invalidate (“kill”) a shared buffer after it has been distributed to importers, so that further CPU and device access is prevented and importers reliably observe failure.
The change in this series is to properly document and use existing core “revoked” state on the dma-buf object and a corresponding exporter- triggered revoke operation. Once a dma-buf is revoked, new access paths are blocked so that attempts to DMA-map, vmap, or mmap the buffer fail in a consistent way.
This sounds like it does not match how many GPU-drivers use the move_notify() callback.
No change for them.
move_notify() would typically invalidate any device maps and any asynchronous part of that invalidation would be complete when the dma- buf's reservation object becomes idle WRT DMA_RESV_USAGE_BOOKKEEP fences.
This part has not changed and remains the same for the revocation flow as well.
However, the importer could, after obtaining the resv lock, obtain a new map using dma_buf_map_attachment(), and I'd assume the CPU maps work in the same way, I.E. move_notify() does not *permanently* revoke importer access.
This part diverges by design and is documented to match revoke semantics.
Please don't document that. This is specific exporter behavior and doesn't belong into DMA-buf at all.
It defines what must occur after the exporter requests that the buffer be "killed". An importer that follows revoke semantics will not attempt to call dma_buf_map_attachment(), and the exporter will block any remapping attempts regardless. See the priv->revoked flag in the VFIO exporter.
I have to clearly reject that.
It's the job of the exporter to reject such calls with an appropriate error and not the importer to not make them.
In addition, in this email thread, Christian explains that revoke semantics already exists, with the combination of dma_buf_pin and dma_buf_move_notify, just not documented: https://lore.kernel.org/all/f7f1856a-44fa-44af-b496-eb1267a05d11@amd.com/
Hmm,
Considering
https://elixir.bootlin.com/linux/v6.19-rc5/source/drivers/infiniband/core/um...
Yes, that case is well known.
this sounds like it's not just undocumented but also in some cases unimplemented. The xe driver for one doesn't expect move_notify() to be called on pinned buffers,
And that is what we need to change. See, move_notify() can happen on pinned buffers currently as well.
For example in the case of PCI hot unplug. After pinning we just don't call it for memory management needs any more.
We just haven't documented that properly.
so if that is indeed going to be part of the dma-buf protocol, wouldn't support for that need to be advertised by the importer?
That's what this patch set here should do, yes.
Regards, Christian.
On Mon, Jan 19, 2026 at 11:20:46AM +0100, Christian König wrote:
On 1/19/26 10:27, Thomas Hellström wrote:
On Mon, 2026-01-19 at 09:52 +0200, Leon Romanovsky wrote:
On Sun, Jan 18, 2026 at 03:16:25PM +0100, Thomas Hellström wrote:
Hi, Leon,
On Sun, 2026-01-18 at 14:08 +0200, Leon Romanovsky wrote:
Changelog: v2: * Changed series to document the revoke semantics instead of implementing it. v1: https://patch.msgid.link/20260111-dmabuf-revoke-v1-0-fb4bcc8c259b@nvidia.com
This series documents a dma-buf “revoke” mechanism: to allow a dma- buf exporter to explicitly invalidate (“kill”) a shared buffer after it has been distributed to importers, so that further CPU and device access is prevented and importers reliably observe failure.
The change in this series is to properly document and use existing core “revoked” state on the dma-buf object and a corresponding exporter- triggered revoke operation. Once a dma-buf is revoked, new access paths are blocked so that attempts to DMA-map, vmap, or mmap the buffer fail in a consistent way.
This sounds like it does not match how many GPU-drivers use the move_notify() callback.
No change for them.
move_notify() would typically invalidate any device maps and any asynchronous part of that invalidation would be complete when the dma- buf's reservation object becomes idle WRT DMA_RESV_USAGE_BOOKKEEP fences.
This part has not changed and remains the same for the revocation flow as well.
However, the importer could, after obtaining the resv lock, obtain a new map using dma_buf_map_attachment(), and I'd assume the CPU maps work in the same way, I.E. move_notify() does not *permanently* revoke importer access.
This part diverges by design and is documented to match revoke semantics.
Please don't document that. This is specific exporter behavior and doesn't belong into DMA-buf at all.
It defines what must occur after the exporter requests that the buffer be "killed". An importer that follows revoke semantics will not attempt to call dma_buf_map_attachment(), and the exporter will block any remapping attempts regardless. See the priv->revoked flag in the VFIO exporter.
I have to clearly reject that.
It's the job of the exporter to reject such calls with an appropriate error and not the importer to not make them.
Current code behaves as expected: the exporter rejects mapping attempts after .invalidate_mapping is called, and handles the logic internally.
However, it is not clear what exactly you are proposing. In v1 — which you objected to — I suggested negotiating revoke support along with the logic for rejecting mappings in the dma-buf core. In this version, you object to placing the rejection logic in the exporter.
In addition, in this email thread, Christian explains that revoke semantics already exists, with the combination of dma_buf_pin and dma_buf_move_notify, just not documented: https://lore.kernel.org/all/f7f1856a-44fa-44af-b496-eb1267a05d11@amd.com/
Hmm,
Considering
https://elixir.bootlin.com/linux/v6.19-rc5/source/drivers/infiniband/core/um...
Yes, that case is well known.
this sounds like it's not just undocumented but also in some cases unimplemented. The xe driver for one doesn't expect move_notify() to be called on pinned buffers,
And that is what we need to change. See, move_notify() can happen on pinned buffers currently as well.
For example in the case of PCI hot unplug. After pinning we just don't call it for memory management needs any more.
We just haven't documented that properly.
so if that is indeed going to be part of the dma-buf protocol, wouldn't support for that need to be advertised by the importer?
That's what this patch set here should do, yes.
Regards, Christian.
On 1/19/26 11:53, Leon Romanovsky wrote:
On Mon, Jan 19, 2026 at 11:20:46AM +0100, Christian König wrote:
On 1/19/26 10:27, Thomas Hellström wrote:
On Mon, 2026-01-19 at 09:52 +0200, Leon Romanovsky wrote:
On Sun, Jan 18, 2026 at 03:16:25PM +0100, Thomas Hellström wrote:
Hi, Leon,
On Sun, 2026-01-18 at 14:08 +0200, Leon Romanovsky wrote:
Changelog: v2: * Changed series to document the revoke semantics instead of implementing it. v1: https://patch.msgid.link/20260111-dmabuf-revoke-v1-0-fb4bcc8c259b@nvidia.com
This series documents a dma-buf “revoke” mechanism: to allow a dma- buf exporter to explicitly invalidate (“kill”) a shared buffer after it has been distributed to importers, so that further CPU and device access is prevented and importers reliably observe failure.
The change in this series is to properly document and use existing core “revoked” state on the dma-buf object and a corresponding exporter- triggered revoke operation. Once a dma-buf is revoked, new access paths are blocked so that attempts to DMA-map, vmap, or mmap the buffer fail in a consistent way.
This sounds like it does not match how many GPU-drivers use the move_notify() callback.
No change for them.
move_notify() would typically invalidate any device maps and any asynchronous part of that invalidation would be complete when the dma- buf's reservation object becomes idle WRT DMA_RESV_USAGE_BOOKKEEP fences.
This part has not changed and remains the same for the revocation flow as well.
However, the importer could, after obtaining the resv lock, obtain a new map using dma_buf_map_attachment(), and I'd assume the CPU maps work in the same way, I.E. move_notify() does not *permanently* revoke importer access.
This part diverges by design and is documented to match revoke semantics.
Please don't document that. This is specific exporter behavior and doesn't belong into DMA-buf at all.
It defines what must occur after the exporter requests that the buffer be "killed". An importer that follows revoke semantics will not attempt to call dma_buf_map_attachment(), and the exporter will block any remapping attempts regardless. See the priv->revoked flag in the VFIO exporter.
I have to clearly reject that.
It's the job of the exporter to reject such calls with an appropriate error and not the importer to not make them.
Current code behaves as expected: the exporter rejects mapping attempts after .invalidate_mapping is called, and handles the logic internally.
However, it is not clear what exactly you are proposing. In v1 — which you objected to — I suggested negotiating revoke support along with the logic for rejecting mappings in the dma-buf core. In this version, you object to placing the rejection logic in the exporter.
Sorry, I probably wasn't explaining this correctly.
I was rejecting the idea of doing this in the framework, i.e. the middle layer, or that importers would be forced to drop their references.
That an exporter rejects attempts to attach or map a resource is perfectly valid.
Regards, Christian.
In addition, in this email thread, Christian explains that revoke semantics already exists, with the combination of dma_buf_pin and dma_buf_move_notify, just not documented: https://lore.kernel.org/all/f7f1856a-44fa-44af-b496-eb1267a05d11@amd.com/
Hmm,
Considering
https://elixir.bootlin.com/linux/v6.19-rc5/source/drivers/infiniband/core/um...
Yes, that case is well known.
this sounds like it's not just undocumented but also in some cases unimplemented. The xe driver for one doesn't expect move_notify() to be called on pinned buffers,
And that is what we need to change. See, move_notify() can happen on pinned buffers currently as well.
For example in the case of PCI hot unplug. After pinning we just don't call it for memory management needs any more.
We just haven't documented that properly.
so if that is indeed going to be part of the dma-buf protocol, wouldn't support for that need to be advertised by the importer?
That's what this patch set here should do, yes.
Regards, Christian.
On Mon, Jan 19, 2026 at 10:27:00AM +0100, Thomas Hellström wrote:
this sounds like it's not just undocumented but also in some cases unimplemented. The xe driver for one doesn't expect move_notify() to be called on pinned buffers, so if that is indeed going to be part of the dma-buf protocol, wouldn't support for that need to be advertised by the importer?
Can you clarify this?
I don't see xe's importer calling dma_buf_pin() or dma_buf_attach() outside of tests? Its importer implements a fully functional-looking dynamic attach with move_notify()?
I see the exporter is checking for pinned and then not calling move_notify - is that what you mean?
When I looked through all the importers only RDMA obviously didn't support move_notify on pinned buffers.
Jason
On Mon, 2026-01-19 at 12:24 -0400, Jason Gunthorpe wrote:
On Mon, Jan 19, 2026 at 10:27:00AM +0100, Thomas Hellström wrote:
this sounds like it's not just undocumented but also in some cases unimplemented. The xe driver for one doesn't expect move_notify() to be called on pinned buffers, so if that is indeed going to be part of the dma-buf protocol, wouldn't support for that need to be advertised by the importer?
Can you clarify this?
I don't see xe's importer calling dma_buf_pin() or dma_buf_attach() outside of tests? Its importer implements a fully functional-looking dynamic attach with move_notify()?
I see the exporter is checking for pinned and then not calling move_notify - is that what you mean?
No, it was that if move_notify() is called on a pinned buffer, things will probably blow up.
And I was under the impression that we might be pinning imported framebuffers, but either we don't get any of those or we're using the incorrect interface to pin, so it might not be a big issue from the xe side. Need to check this.
In any case we'd want to support revoking of pinned buffers as well going forward, so the question really becomes whether, in the meantime, we need to flag somehow that we don't support it.
Thanks, Thomas
When I looked through all the importers only RDMA obviously didn't support move_notify on pinned buffers.
Jason
On Sun, Jan 18, 2026 at 03:16:25PM +0100, Thomas Hellström wrote:
core “revoked” state on the dma-buf object and a corresponding exporter- triggered revoke operation. Once a dma-buf is revoked, new access paths are blocked so that attempts to DMA-map, vmap, or mmap the buffer fail in a consistent way.
This sounds like it does not match how many GPU-drivers use the move_notify() callback.
move_notify() would typically invalidate any device maps and any asynchronous part of that invalidation would be complete when the dma- buf's reservation object becomes idle WRT DMA_RESV_USAGE_BOOKKEEP fences.
However, the importer could, after obtaining the resv lock, obtain a new map using dma_buf_map_attachment(), and I'd assume the CPU maps work in the same way, I.E. move_notify() does not *permanently* revoke importer access.
I think this was explained a bit in this thread, but I wanted to repeat the explanation to be really clear..
If the attachment is not pinned then calling move_notify() is as you say. The importer should expect multiple move_notify() calls and handle all of them. The exporter can move the location around and make it revoked/unrevoked at will. If it is revoked then dma_buf_map_attachment() fails; the importer could cache this and fail DMAs until the next move_notify().
If the attachment is *pinned* then we propose to allow the exporter to revoke only and not require restoration. IOW, a later move_notify() that signals a previously failing dma_buf_map_attachment() is no longer failing can be ignored by a pinned importer.
This at least matches what iommufd is able to do right now.
IOW, calling move_notify() on a pinned DMABUF is a special operation we are calling "revoke", and it means that the exporter accepts that the mapping is potentially gone from pinned importers forever, i.e. don't use it lightly.
Jason
On Sun, Jan 18, 2026 at 02:08:44PM +0200, Leon Romanovsky wrote:
Changelog: v2:
- Changed series to document the revoke semantics instead of implementing it.
v1: https://patch.msgid.link/20260111-dmabuf-revoke-v1-0-fb4bcc8c259b@nvidia.com
This series documents a dma-buf “revoke” mechanism: to allow a dma-buf exporter to explicitly invalidate (“kill”) a shared buffer after it has been distributed to importers, so that further CPU and device access is prevented and importers reliably observe failure.
The change in this series is to properly document and use existing core “revoked” state on the dma-buf object and a corresponding exporter-triggered revoke operation. Once a dma-buf is revoked, new access paths are blocked so that attempts to DMA-map, vmap, or mmap the buffer fail in a consistent way.
I think it would help to explain the bigger picture in the cover letter:
DMABUF has quietly allowed calling move_notify on pinned DMABUFs, even though legacy importers using dma_buf_attach() would simply ignore these calls.
RDMA saw this and needed to use allow_peer2peer=true, so it implemented a new-style pinned importer with an explicitly non-working move_notify() callback.
This has been tolerable because the existing exporters are thought to only call move_notify() on a pinned DMABUF under RAS events, and we have been willing to tolerate the resulting UAF by allowing the importer to continue using the mapping in this rare case.
VFIO wants to implement a pin-supporting exporter that will issue a revoking move_notify() around FLRs and a few other user-triggerable operations. Since this is much more common, we are not willing to tolerate the security UAF caused by interworking with drivers that do not support move_notify(). Thus, until now, VFIO has required dynamic importers, even though it never actually moves the buffer location.
To allow VFIO to work with pinned importers, according to how DMABUF was intended, we need to allow VFIO to detect if an importer is legacy or RDMA and does not actually implement move_notify().
Introduce a new function that exporters can call to detect these less capable importers. VFIO can then refuse to accept them during attach.
In theory, all exporters that call move_notify() on pinned DMABUFs should call this function; however, that would break a number of widely used NIC/GPU flows. Thus, for now, do not spread this further than VFIO until we can understand how much of RDMA can implement the full semantic.
In the process clarify how move_notify is intended to be used with pinned DMABUFs.
Jason