Since its introduction, DMA-buf has only supported scatterlist as the way for the exporter and importer to exchange address information. This is not sufficient for all use cases as dma_addr_t is a very specific and limited type that should not be abused for things unrelated to the DMA API.
There are several motivations for addressing this now:

1) VFIO to IOMMUFD and KVM requires a physical address, not a dma_addr_t
   scatterlist; it cannot be represented in the scatterlist structure
2) xe vGPU requires the host driver to accept a DMABUF from VFIO of its own
   VF and convert it into an internal VRAM address on the PF
3) We are starting to look at replacement datastructures for scatterlist
4) Ideas around UALink/etc are suggesting not using the DMA API
None of these can sanely be achieved using scatterlist.
Introduce a new mechanism called "mapping types" which allows DMA-buf to work with more map/unmap options than scatterlist. Each mapping type encompasses a full set of functions and data unique to itself. The core code provides a match-making system to select the best type offered by the exporter and importer to be the active mapping type for the attachment.
Everything related to scatterlist is moved into a DMA-buf SGT mapping type, and into the "dma_buf_sgt_*" namespace for clarity. Existing exporters are moved over to explicitly declare SGT mapping types and importers are adjusted to use the dma_buf_sgt_* named importer helpers.
Mapping types are designed to be extensible: a driver can declare its own mapping type for its internal private interconnect and use it without having to adjust the core code.
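As a rough sketch of what that could look like (the my_iconn_* names and the same-fabric check are purely illustrative, not part of this series), a private mapping type only needs its type singleton and the match/finish_match callbacks:

  static int my_iconn_match(struct dma_buf *dmabuf,
                            const struct dma_buf_mapping_match *exp,
                            const struct dma_buf_mapping_match *imp)
  {
          /* Hypothetical check that both ends share the private fabric */
          if (!my_iconn_same_fabric(exp, imp))
                  return -EOPNOTSUPP;     /* keep searching other types */
          return 0;
  }

  static void my_iconn_finish_match(struct dma_buf_match_args *args,
                                    const struct dma_buf_mapping_match *exp,
                                    const struct dma_buf_mapping_match *imp)
  {
          /* Record the winning exporter entry as the active mapping type;
           * a real type would also merge importer-side data here. */
          args->attach->map_type = *exp;
  }

  static struct dma_buf_mapping_type my_iconn_mapping_type = {
          .name = "my-driver private interconnect",
          .match = my_iconn_match,
          .finish_match = my_iconn_finish_match,
  };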
The new attachment sequence starts with the importing driver declaring what mapping types it can accept:
 struct dma_buf_mapping_match imp_match[] = {
         DMA_BUF_IMAPPING_MY_DRIVER(dev, ...),
         DMA_BUF_IMAPPING_SGT(dev, false),
 };
 attach = dma_buf_mapping_attach(dmabuf, imp_match, ...)
Most drivers will do this via a dma_buf_sgt_*attach() helper.
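For example (illustrative only, assuming the dma_buf_sgt_dynamic_attach() name introduced by the rename patches later in this series keeps the existing dma_buf_dynamic_attach() signature):

  attach = dma_buf_sgt_dynamic_attach(dmabuf, dev, &importer_ops,
                                      importer_priv);
  if (IS_ERR(attach))
          return PTR_ERR(attach);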
The exporting driver can then declare what mapping types it can supply:
 int exporter_match_mapping(struct dma_buf_match_args *args)
 {
         struct dma_buf_mapping_match exp_match[] = {
                 DMA_BUF_EMAPPING_MY_DRIVER(my_ops, dev, ...),
                 DMA_BUF_EMAPPING_SGT(sgt_ops, dev, false),
                 DMA_BUF_EMAPPING_PAL(PAL_ops),
         };

         return dma_buf_match_mapping(args, exp_match, ...);
 }
Most drivers will do this via a helper:

 static const struct dma_buf_ops ops = {
         DMA_BUF_SIMPLE_SGT_EXP_MATCH(map_func, unmap_func)
 };
During dma_buf_mapping_attach() the core code will select a mutual match between the importer and exporter and record it as the active match in the attach->map_type.
Each mapping type has its own types/function calls for mapping/unmapping, and storage in the attach->map_type for its information. As such each mapping type can offer function signatures and data that exactly matches its needs.
This series goes through a sequence of:

1) Introduce the basic mapping type framework and the main components of the
   SGT mapping type
2) Automatically make all existing exporters and importers use core generated
   SGT mapping types so every attachment has a SGT mapping type
3) Convert all exporter drivers to natively create a SGT mapping type
4) Move all dma_buf_* functions and types that are related to SGT into
   dma_buf_sgt_*
5) Remove all the now-unused items that have been moved into SGT specific
   structures
6) Demonstrate adding a new Physical Address List alongside SGT
Due to the high number of files touched I would expect this to be broken into phases, but this shows the entire picture.
This is on github: https://github.com/jgunthorpe/linux/commits/dmabuf_map_type
It is a followup to the discussion here:
https://lore.kernel.org/dri-devel/20251027044712.1676175-1-vivek.kasireddy@i...
Jason Gunthorpe (26):
  dma-buf: Introduce DMA-buf mapping types
  dma-buf: Add the SGT DMA mapping type
  dma-buf: Add dma_buf_mapping_attach()
  dma-buf: Route SGT related actions through attach->map_type
  dma-buf: Allow single exporter drivers to avoid the match_mapping function
  drm: Check the SGT ops for drm_gem_map_dma_buf()
  dma-buf: Convert all the simple exporters to use SGT mapping type
  drm/vmwgfx: Use match_mapping instead of dummy calls
  accel/habanalabs: Use the SGT mapping type
  drm/xe/dma-buf: Use the SGT mapping type
  drm/amdgpu: Use the SGT mapping type
  vfio/pci: Change the DMA-buf exporter to use mapping_type
  dma-buf: Update dma_buf_phys_vec_to_sgt() to use the SGT mapping type
  iio: buffer: convert to use the SGT mapping type
  functionfs: convert to use the SGT mapping type
  dma-buf: Remove unused SGT stuff from the common structures
  treewide: Rename dma_buf_map_attachment(_unlocked) to dma_buf_sgt_
  treewide: Rename dma_buf_unmap_attachment(_unlocked) to dma_buf_sgt_*
  treewide: Rename dma_buf_attach() to dma_buf_sgt_attach()
  treewide: Rename dma_buf_dynamic_attach() to dma_buf_sgt_dynamic_attach()
  dma-buf: Add the Physical Address List DMA mapping type
  vfio/pci: Add physical address list support to DMABUF
  iommufd: Use the PAL mapping type instead of a vfio function
  iommufd: Support DMA-bufs with multiple physical ranges
  iommufd/selftest: Check multi-phys DMA-buf scenarios
  dma-buf: Add kunit tests for mapping type
 Documentation/gpu/todo.rst | 2 +-
 drivers/accel/amdxdna/amdxdna_gem.c | 14 +-
 drivers/accel/amdxdna/amdxdna_ubuf.c | 10 +-
 drivers/accel/habanalabs/common/memory.c | 54 ++-
 drivers/accel/ivpu/ivpu_gem.c | 10 +-
 drivers/accel/ivpu/ivpu_gem_userptr.c | 11 +-
 drivers/accel/qaic/qaic_data.c | 8 +-
 drivers/dma-buf/Makefile | 1 +
 drivers/dma-buf/dma-buf-mapping.c | 186 ++++++++-
 drivers/dma-buf/dma-buf.c | 180 ++++++---
 drivers/dma-buf/heaps/cma_heap.c | 12 +-
 drivers/dma-buf/heaps/system_heap.c | 13 +-
 drivers/dma-buf/st-dma-mapping.c | 373 ++++++++++++++++++
 drivers/dma-buf/udmabuf.c | 8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 98 +++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 6 +-
 drivers/gpu/drm/armada/armada_gem.c | 33 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c | 2 +-
 drivers/gpu/drm/drm_prime.c | 31 +-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 18 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.c | 2 +-
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c | 8 +-
 .../gpu/drm/i915/gem/selftests/mock_dmabuf.c | 8 +-
 drivers/gpu/drm/msm/msm_gem_prime.c | 7 +-
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 11 +-
 drivers/gpu/drm/tegra/gem.c | 33 +-
 drivers/gpu/drm/virtio/virtgpu_prime.c | 23 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_prime.c | 32 +-
 drivers/gpu/drm/xe/xe_bo.c | 18 +-
 drivers/gpu/drm/xe/xe_dma_buf.c | 61 +--
 drivers/iio/industrialio-buffer.c | 15 +-
 drivers/infiniband/core/umem_dmabuf.c | 15 +-
 drivers/iommu/iommufd/io_pagetable.h | 4 +-
 drivers/iommu/iommufd/iommufd_private.h | 8 -
 drivers/iommu/iommufd/iommufd_test.h | 7 +
 drivers/iommu/iommufd/pages.c | 85 ++--
 drivers/iommu/iommufd/selftest.c | 177 ++++++---
 .../media/common/videobuf2/videobuf2-core.c | 2 +-
 .../common/videobuf2/videobuf2-dma-contig.c | 26 +-
 .../media/common/videobuf2/videobuf2-dma-sg.c | 21 +-
 .../common/videobuf2/videobuf2-vmalloc.c | 13 +-
 .../platform/nvidia/tegra-vde/dmabuf-cache.c | 9 +-
 drivers/misc/fastrpc.c | 21 +-
 drivers/tee/tee_heap.c | 13 +-
 drivers/usb/gadget/function/f_fs.c | 11 +-
 drivers/vfio/pci/vfio_pci_dmabuf.c | 79 ++--
 drivers/xen/gntdev-dmabuf.c | 29 +-
 include/linux/dma-buf-mapping.h | 297 ++++++++++++++
 include/linux/dma-buf.h | 168 ++++----
 io_uring/zcrx.c | 9 +-
 net/core/devmem.c | 14 +-
 samples/vfio-mdev/mbochs.c | 10 +-
 sound/soc/fsl/fsl_asrc_m2m.c | 12 +-
 tools/testing/selftests/iommu/iommufd.c | 43 ++
 tools/testing/selftests/iommu/iommufd_utils.h | 17 +
 55 files changed, 1764 insertions(+), 614 deletions(-)
 create mode 100644 drivers/dma-buf/st-dma-mapping.c
base-commit: c63e5a50e1dd291cd95b04291b028fdcaba4c534
DMA-buf mapping types allow the importer and exporter to negotiate the format of the map/unmap to be used during the attachment.
Currently DMA-buf only supports struct scatterlist as the format returned by the attachment map operation. This is not sufficient for all use cases as dma_addr_t is a very specific and limited type.
With mapping types the importing driver can declare what it supports. For example:
 struct dma_buf_mapping_match imp_match[] = {
         DMA_BUF_IMAPPING_MY_DRIVER(dev, ...),
         DMA_BUF_IMAPPING_SGT(dev, false),
 };
 attach = dma_buf_mapping_attach(dmabuf, imp_match, ...)
And the exporting driver can declare what it supports:
 int exporter_match_mapping(struct dma_buf_match_args *args)
 {
         struct dma_buf_mapping_match exp_match[] = {
                 DMA_BUF_EMAPPING_MY_DRIVER(my_ops, dev, ...),
                 DMA_BUF_EMAPPING_SGT(sgt_ops, dev, false),
                 DMA_BUF_EMAPPING_PAL(PAL_ops),
         };

         return dma_buf_match_mapping(args, exp_match, ...);
 }
During dma_buf_mapping_attach() the core code will select a mutual match between the importer and exporter and record it in the attach->map_type.
Add the basic types:
struct dma_buf_mapping_type
   Type tag and ops for each mapping type.

struct dma_buf_mapping_match
   Entry in a list of importer or exporter match specifications. The match
   specification can be extended by the mapping type with unique data.

dma_buf_match_mapping() / struct dma_buf_match_args
   Helper to do the matching. Called by the exporting driver via a
   dma_buf_ops callback.

struct dma_buf_mapping_exp_ops
   Base type for the per-mapping-type exporter provided functions. This would
   be the map/unmap callbacks. Each mapping type can provide its own functions
   for map/unmap type operations with optimal type signatures.
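For example, the SGT mapping type added in the next patch embeds the base struct in its own exporter ops so the map/unmap signatures can stay sg_table based:

  struct dma_buf_mapping_sgt_exp_ops {
          struct dma_buf_mapping_exp_ops ops;     /* base type tag */
          struct sg_table *(*map_dma_buf)(struct dma_buf_attachment *attach,
                                          enum dma_data_direction dir);
          void (*unmap_dma_buf)(struct dma_buf_attachment *attach,
                                struct sg_table *sgt,
                                enum dma_data_direction dir);
  };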
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/dma-buf/dma-buf-mapping.c | 46 +++++++++++++++++++
 include/linux/dma-buf-mapping.h   | 76 +++++++++++++++++++++++++++++++
 include/linux/dma-buf.h           | 18 ++++++++
 3 files changed, 140 insertions(+)
diff --git a/drivers/dma-buf/dma-buf-mapping.c b/drivers/dma-buf/dma-buf-mapping.c index b7352e609fbdfa..459c204cabb803 100644 --- a/drivers/dma-buf/dma-buf-mapping.c +++ b/drivers/dma-buf/dma-buf-mapping.c @@ -5,6 +5,7 @@ */ #include <linux/dma-buf-mapping.h> #include <linux/dma-resv.h> +#include <linux/dma-buf.h>
static struct scatterlist *fill_sg_entry(struct scatterlist *sgl, size_t length, dma_addr_t addr) @@ -246,3 +247,48 @@ void dma_buf_free_sgt(struct dma_buf_attachment *attach, struct sg_table *sgt,
} EXPORT_SYMBOL_NS_GPL(dma_buf_free_sgt, "DMA_BUF"); + +/** + * dma_buf_match_mapping - Select a mapping type agreed upon by exporter and + * importer + * @args: Match arguments from attach. On success this is updated with the + * matched exporter and importer entries. + * @exp: Array of mapping types supported by the exporter, in priority order + * @exp_len: Number of entries in @exp + * + * Iterate over the exporter's supported mapping types and for each one search + * the importer's list for a compatible matching type. args and args->attach are + * populated with the resulting match. + * + * Because the exporter list is walked in order, the exporter controls the + * priority of mapping types. + */ +int dma_buf_match_mapping(struct dma_buf_match_args *args, + const struct dma_buf_mapping_match *exp, + size_t exp_len) +{ + const struct dma_buf_mapping_match *exp_end = exp + exp_len; + const struct dma_buf_mapping_match *imp_end = + args->imp_matches + args->imp_len; + int ret; + + for (; exp != exp_end; exp++) { + const struct dma_buf_mapping_match *imp = args->imp_matches; + + for (; imp != imp_end; imp++) { + if (exp->type != imp->type) + continue; + if (exp->type->match) { + ret = exp->type->match(args->dmabuf, exp, imp); + if (ret == -EOPNOTSUPP) + continue; + if (ret != 0) + return ret; + } + exp->type->finish_match(args, exp, imp); + return 0; + } + } + return -EINVAL; +} +EXPORT_SYMBOL_NS_GPL(dma_buf_match_mapping, "DMA_BUF"); diff --git a/include/linux/dma-buf-mapping.h b/include/linux/dma-buf-mapping.h index a3c0ce2d3a42fe..080ccbf3a3f8b8 100644 --- a/include/linux/dma-buf-mapping.h +++ b/include/linux/dma-buf-mapping.h @@ -7,6 +7,77 @@ #define __DMA_BUF_MAPPING_H__ #include <linux/dma-buf.h>
+struct device; +struct dma_buf; +struct dma_buf_attachment; +struct dma_buf_mapping_exp_ops; + +/* Type tag for all mapping operations */ +struct dma_buf_mapping_exp_ops {}; + +/* + * Internal struct to pass arguments from the attach function to the matching + * function + */ +struct dma_buf_match_args { + struct dma_buf *dmabuf; + struct dma_buf_attachment *attach; + const struct dma_buf_mapping_match *imp_matches; + size_t imp_len; +}; + +/** + * struct dma_buf_mapping_type - Operations for a DMA-buf mapping type + * + * Each mapping type provides a singleton instance of this struct to describe + * the mapping type and its operations. + */ +struct dma_buf_mapping_type { + /** + * @name: Human-readable name for this mapping type, used in debugfs + * output + */ + const char *name; + + /** + * @match: + * + * Called during attach from dma_buf_match_mapping(). &exp and &imp are + * single items from the importer and exporter mapping match lists. + * Both will have the same instance of this struct as their type member. + * + * It determines if the exporter/importer are compatible. + * + * Returns: 0 on success + * -EOPNOTSUPP means ignore the failure and continue + * Everything else aborts the search and returns the -errno + */ + int (*match)(struct dma_buf *dmabuf, + const struct dma_buf_mapping_match *exp, + const struct dma_buf_mapping_match *imp); + + /** + * @finish_match: + * + * Called by dma_buf_match_mapping() after a successful match to store + * the negotiated result in @args->attach. The matched @exp and @imp + * entries are provided so the callback can copy type-specific data into + * the attachment. + */ + void (*finish_match)(struct dma_buf_match_args *args, + const struct dma_buf_mapping_match *exp, + const struct dma_buf_mapping_match *imp); + + /** + * @debugfs_dump: + * + * Optional callback to write mapping-type-specific diagnostic + * information about @attach to the debugfs seq_file @s. + */ + void (*debugfs_dump)(struct seq_file *s, + struct dma_buf_attachment *attach); +}; + struct sg_table *dma_buf_phys_vec_to_sgt(struct dma_buf_attachment *attach, struct p2pdma_provider *provider, struct dma_buf_phys_vec *phys_vec, @@ -14,4 +85,9 @@ struct sg_table *dma_buf_phys_vec_to_sgt(struct dma_buf_attachment *attach, enum dma_data_direction dir); void dma_buf_free_sgt(struct dma_buf_attachment *attach, struct sg_table *sgt, enum dma_data_direction dir); + +int dma_buf_match_mapping(struct dma_buf_match_args *args, + const struct dma_buf_mapping_match *exp_mappings, + size_t exp_len); + #endif diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 0bc492090237ed..a2b01b13026810 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -27,6 +27,21 @@ struct device; struct dma_buf; struct dma_buf_attachment; +struct dma_buf_mapping_type; +struct dma_buf_mapping_exp_ops; + +/* + * Match items are generated by the importer using the DMA_BUF_IMAPPING_*() and + * the exporter using the DMA_BUF_EMAPPING_*() functions. Each mapping type + * defines its own signature with its own data to make the match and attachment. + */ +struct dma_buf_mapping_match { + const struct dma_buf_mapping_type *type; + const struct dma_buf_mapping_exp_ops *exp_ops; + union { + /* Each mapping_type has unique match parameters here */ + }; +};
/** * struct dma_buf_ops - operations possible on struct dma_buf @@ -488,6 +503,8 @@ struct dma_buf_attach_ops { * @importer_ops: importer operations for this attachment, if provided * dma_buf_map/unmap_attachment() must be called with the dma_resv lock held. * @importer_priv: importer specific attachment data. + * @map_type: The match that defines the mutually compatible mapping type to use + * for this attachment. * * This structure holds the attachment information between the dma_buf buffer * and its user device(s). The list contains one attachment struct per device @@ -506,6 +523,7 @@ struct dma_buf_attachment { const struct dma_buf_attach_ops *importer_ops; void *importer_priv; void *priv; + struct dma_buf_mapping_match map_type; };
/**
The SGT (Scatter Gather Table) DMA mapping type represents the existing sg_table/scatterlist based DMA mapping. It provides an sg_table based map/unmap interface that exactly matches how things work today.
dma_buf_sgt_exp_compat_match will be used in the next patch to transparently wrap an unaware exporter with a mapping type.
The SGT type handles the allow_peer2peer flag directly through the matching logic. The importer indicates whether it is willing to accept peer2peer and the exporter indicates whether it requires peer2peer. An exporter that requires peer2peer will not match an importer that does not accept peer2peer.
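For example (my_sgt_exp_ops and my_pdev are illustrative placeholders; both arrays are built on the stack at attach/match time), a P2P pair would look like:

  /* Importer that is willing to accept P2P MMIO in the sg_table */
  struct dma_buf_mapping_match imp_match[] = {
          DMA_BUF_IMAPPING_SGT(dev, DMA_SGT_IMPORTER_ACCEPTS_P2P),
  };

  /* Exporter that will only hand out MMIO reachable over P2P */
  struct dma_buf_mapping_match exp_match[] = {
          DMA_BUF_EMAPPING_SGT_P2P(&my_sgt_exp_ops, my_pdev),
  };

The match succeeds only when the importer entry accepts P2P and pci_p2pdma_distance() reports the pair is reachable; otherwise the search moves on to the next entry.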
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/dma-buf/dma-buf-mapping.c |  95 ++++++++++++++++++++++++++++
 include/linux/dma-buf-mapping.h   | 101 ++++++++++++++++++++++++++++++
 include/linux/dma-buf.h           |  18 +++++-
 3 files changed, 213 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/dma-buf-mapping.c b/drivers/dma-buf/dma-buf-mapping.c index 459c204cabb803..02f5cf8b3def40 100644 --- a/drivers/dma-buf/dma-buf-mapping.c +++ b/drivers/dma-buf/dma-buf-mapping.c @@ -6,6 +6,7 @@ #include <linux/dma-buf-mapping.h> #include <linux/dma-resv.h> #include <linux/dma-buf.h> +#include <linux/seq_file.h>
static struct scatterlist *fill_sg_entry(struct scatterlist *sgl, size_t length, dma_addr_t addr) @@ -292,3 +293,97 @@ int dma_buf_match_mapping(struct dma_buf_match_args *args, return -EINVAL; } EXPORT_SYMBOL_NS_GPL(dma_buf_match_mapping, "DMA_BUF"); + +static int dma_buf_sgt_match(struct dma_buf *dmabuf, + const struct dma_buf_mapping_match *exp, + const struct dma_buf_mapping_match *imp) +{ + switch (exp->sgt_data.exporter_requires_p2p) { + case DMA_SGT_NO_P2P: + return 0; + case DMA_SGT_EXPORTER_REQUIRES_P2P_DISTANCE: + if (WARN_ON(!exp->sgt_data.exporting_p2p_device) || + imp->sgt_data.importer_accepts_p2p != + DMA_SGT_IMPORTER_ACCEPTS_P2P) + return -EOPNOTSUPP; + if (pci_p2pdma_distance(exp->sgt_data.exporting_p2p_device, + imp->sgt_data.importing_dma_device, + true) < 0) + return -EOPNOTSUPP; + return 0; + } + return 0; +} + +static inline void +dma_buf_sgt_finish_match(struct dma_buf_match_args *args, + const struct dma_buf_mapping_match *exp, + const struct dma_buf_mapping_match *imp) +{ + struct dma_buf_attachment *attach = args->attach; + + attach->map_type = (struct dma_buf_mapping_match) { + .type = &dma_buf_mapping_sgt_type, + .exp_ops = exp->exp_ops, + .sgt_data = { + .importing_dma_device = imp->sgt_data.importing_dma_device, + /* exporting_p2p_device is left opaque */ + .importer_accepts_p2p = imp->sgt_data.importer_accepts_p2p, + .exporter_requires_p2p = exp->sgt_data.exporter_requires_p2p, + }, + }; + + /* + * Setup the SGT type variables stored in attach because importers and + * exporters that do not natively use mappings expect them to be there. + * When converting to use mappings users should use the match versions + * of these instead. + */ + attach->dev = imp->sgt_data.importing_dma_device; + attach->peer2peer = attach->map_type.sgt_data.importer_accepts_p2p == + DMA_SGT_IMPORTER_ACCEPTS_P2P; +} + +static void dma_buf_sgt_debugfs_dump(struct seq_file *s, + struct dma_buf_attachment *attach) +{ + seq_printf(s, " %s", dev_name(dma_buf_sgt_dma_device(attach))); +} + +struct dma_buf_mapping_type dma_buf_mapping_sgt_type = { + .name = "DMA Mapped Scatter Gather Table", + .match = dma_buf_sgt_match, + .finish_match = dma_buf_sgt_finish_match, + .debugfs_dump = dma_buf_sgt_debugfs_dump, +}; +EXPORT_SYMBOL_NS_GPL(dma_buf_mapping_sgt_type, "DMA_BUF"); + +static struct sg_table * +dma_buf_sgt_compat_map_dma_buf(struct dma_buf_attachment *attach, + enum dma_data_direction dir) +{ + return attach->dmabuf->ops->map_dma_buf(attach, dir); +} + +static void dma_buf_sgt_compat_unmap_dma_buf(struct dma_buf_attachment *attach, + struct sg_table *sgt, + enum dma_data_direction dir) +{ + attach->dmabuf->ops->unmap_dma_buf(attach, sgt, dir); +} + +/* Route the classic map/unmap ops through the exp ops for old importers */ +static const struct dma_buf_mapping_sgt_exp_ops dma_buf_sgt_compat_exp_ops = { + .map_dma_buf = dma_buf_sgt_compat_map_dma_buf, + .unmap_dma_buf = dma_buf_sgt_compat_unmap_dma_buf, +}; + +/* + * This mapping type is used for unaware exporters that do not support + * match_mapping(). It wraps the dma_buf ops for SGT mappings into a mapping + * type so aware importers can transparently work with unaware exporters. This + * does not require p2p because old exporters will check it through the + * attach->peer2peer mechanism. 
+ */ +const struct dma_buf_mapping_match dma_buf_sgt_exp_compat_match = + DMA_BUF_EMAPPING_SGT(&dma_buf_sgt_compat_exp_ops); diff --git a/include/linux/dma-buf-mapping.h b/include/linux/dma-buf-mapping.h index 080ccbf3a3f8b8..360a7fe0b098be 100644 --- a/include/linux/dma-buf-mapping.h +++ b/include/linux/dma-buf-mapping.h @@ -12,6 +12,12 @@ struct dma_buf; struct dma_buf_attachment; struct dma_buf_mapping_exp_ops;
+enum dma_sgt_requires_p2p { + DMA_SGT_NO_P2P = 0, + DMA_SGT_EXPORTER_REQUIRES_P2P_DISTANCE, + DMA_SGT_IMPORTER_ACCEPTS_P2P, +}; + /* Type tag for all mapping operations */ struct dma_buf_mapping_exp_ops {};
@@ -90,4 +96,99 @@ int dma_buf_match_mapping(struct dma_buf_match_args *args, const struct dma_buf_mapping_match *exp_mappings, size_t exp_len);
+/* + * DMA Mapped Scatterlist Type + * + * When this type is matched the map/unmap functions are: + * + * dma_buf_map_attachment() + * dma_buf_unmap_attachment() + * + * The struct sg_table returned by those functions has only the DMA portions + * available. The caller must not try to use the struct page * information. + * + * importing_dma_device is passed to the DMA API to provide the dma_addr_t's. + */ +extern struct dma_buf_mapping_type dma_buf_mapping_sgt_type; + +struct dma_buf_mapping_sgt_exp_ops { + struct dma_buf_mapping_exp_ops ops; + struct sg_table *(*map_dma_buf)(struct dma_buf_attachment *attach, + enum dma_data_direction dir); + void (*unmap_dma_buf)(struct dma_buf_attachment *attach, + struct sg_table *sgt, + enum dma_data_direction dir); +}; + +/** + * dma_buf_sgt_dma_device - Return the device to use for DMA mapping + * @attach: sgt mapping type attachment + * + * Called by the exporter to get the struct device to pass to the DMA API + * during map and unmap callbacks. + */ +static inline struct device * +dma_buf_sgt_dma_device(struct dma_buf_attachment *attach) +{ + if (attach->map_type.type != &dma_buf_mapping_sgt_type) + return NULL; + return attach->map_type.sgt_data.importing_dma_device; +} + +/** + * dma_buf_sgt_p2p_allowed - True if MMIO memory can be used peer to peer + * @attach: sgt mapping type attachment + * + * Should be called by exporters, returns true if the exporter's + * DMA_SGT_EXPORTER_REQUIRES_P2P_DISTANCE was matched. + */ +static inline bool dma_buf_sgt_p2p_allowed(struct dma_buf_attachment *attach) +{ + if (attach->map_type.type != &dma_buf_mapping_sgt_type) + return false; + return attach->map_type.sgt_data.exporter_requires_p2p == + DMA_SGT_EXPORTER_REQUIRES_P2P_DISTANCE; +} + +static inline const struct dma_buf_mapping_sgt_exp_ops * +dma_buf_get_sgt_ops(struct dma_buf_attachment *attach) +{ + if (attach->map_type.type != &dma_buf_mapping_sgt_type) + return NULL; + return container_of(attach->map_type.exp_ops, + struct dma_buf_mapping_sgt_exp_ops, ops); +} + +static inline struct dma_buf_mapping_match +DMA_BUF_IMAPPING_SGT(struct device *importing_dma_device, + enum dma_sgt_requires_p2p importer_accepts_p2p) +{ + return (struct dma_buf_mapping_match){ + .type = &dma_buf_mapping_sgt_type, + .sgt_data = { .importing_dma_device = importing_dma_device, + .importer_accepts_p2p = importer_accepts_p2p }, + }; +} +#define DMA_BUF_EMAPPING_SGT(_exp_ops) \ + ((struct dma_buf_mapping_match){ .type = &dma_buf_mapping_sgt_type, \ + .exp_ops = &((_exp_ops)->ops) }) + +/* + * Only matches if the importing device is P2P capable and the P2P subsystem + * says P2P is possible from p2p_device. + */ +static inline struct dma_buf_mapping_match +DMA_BUF_EMAPPING_SGT_P2P(const struct dma_buf_mapping_sgt_exp_ops *exp_ops, + struct pci_dev *p2p_device) +{ + struct dma_buf_mapping_match match = DMA_BUF_EMAPPING_SGT(exp_ops); + + match.sgt_data.exporter_requires_p2p = + DMA_SGT_EXPORTER_REQUIRES_P2P_DISTANCE; + match.sgt_data.exporting_p2p_device = p2p_device; + return match; +} + +extern const struct dma_buf_mapping_match dma_buf_sgt_exp_compat_match; + #endif diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index a2b01b13026810..3bcd1d6d150188 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -30,6 +30,7 @@ struct dma_buf_attachment; struct dma_buf_mapping_type; struct dma_buf_mapping_exp_ops;
+ /* * Match items are generated by the importer using the DMA_BUF_IMAPPING_*() and * the exporter using the DMA_BUF_EMAPPING_*() functions. Each mapping type @@ -39,7 +40,22 @@ struct dma_buf_mapping_match { const struct dma_buf_mapping_type *type; const struct dma_buf_mapping_exp_ops *exp_ops; union { - /* Each mapping_type has unique match parameters here */ + /* For dma_buf_mapping_sgt_type */ + struct { + struct device *importing_dma_device; + /* Only used if DMA_SGT_EXPORTER_REQUIRES_P2P_DISTANCE */ + struct pci_dev *exporting_p2p_device; + /* + * These p2p flags are used to support the hard coded + * mechanism for p2p. If an exporting device knows it + * will put MMIO into the sgt then it should set + * exporter_requires_p2p. Importers should set + * importer_accepts_p2p unless it is known that the + * importing HW never supports P2P because of HW issues. + */ + u8 importer_accepts_p2p; + u8 exporter_requires_p2p; + } sgt_data; }; };
Introduce a new attach function that accepts the list of importer supported mapping types from the caller. Turn dma_buf_dynamic_attach() and dma_buf_attach() into simple wrappers calling this new function with a compatibility mapping type list that only includes sgt.
dma_buf_mapping_attach() checks if the exporter is mapping aware and calls its ops->match_mapping() function to pick up the exporter match list and call dma_buf_match_mapping().
If the exporter is unaware it will use dma_buf_sgt_exp_compat_match as a compatibility match list that routes through the unaware exporter's dma_buf_ops callbacks.
The resulting match is stored in attach->map_type.
In effect attach->map_type is always available and always makes sense regardless of what combination of aware/unaware importer/exporter is used.
For compatibility with unaware drivers, copy the SGT matching data into attach->dev and attach->peer2peer.
If the exporter sets exporter_requires_p2p then only the following are
allowed:
 - dma_buf_dynamic_attach() with importer_ops->allow_peer2peer = true
 - dma_buf_mapping_attach() with a
   DMA_BUF_IMAPPING_SGT(xx, DMA_SGT_IMPORTER_ACCEPTS_P2P)
Other combinations are blocked.
Exporters that want to behave differently based on the importer's capability can instead declare exporter_requires_p2p = DMA_SGT_NO_P2P and check attach->map_type.sgt_data.importer_accepts_p2p at map time. Or they can declare two SGT exporter entries with different map/unmap functions.
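A minimal sketch of the first option (the my_exporter_* names and helpers are hypothetical):

  static struct sg_table *my_exporter_map(struct dma_buf_attachment *attach,
                                          enum dma_data_direction dir)
  {
          bool p2p = attach->map_type.sgt_data.importer_accepts_p2p ==
                     DMA_SGT_IMPORTER_ACCEPTS_P2P;

          /* Place MMIO in the sg_table only if the importer accepts P2P */
          if (p2p)
                  return my_exporter_map_p2p(attach, dir);
          return my_exporter_map_system(attach, dir);
  }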
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/dma-buf/dma-buf.c | 91 +++++++++++++++++++++++++++++++--------
 include/linux/dma-buf.h   | 14 ++++++
 2 files changed, 87 insertions(+), 18 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index edaa9e4ee4aed0..6e89fcfdad3015 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -14,6 +14,7 @@ #include <linux/fs.h> #include <linux/slab.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-fence.h> #include <linux/dma-fence-unwrap.h> #include <linux/anon_inodes.h> @@ -689,11 +690,19 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) int ret;
if (WARN_ON(!exp_info->priv || !exp_info->ops - || !exp_info->ops->map_dma_buf - || !exp_info->ops->unmap_dma_buf || !exp_info->ops->release)) return ERR_PTR(-EINVAL);
+ if (exp_info->ops->match_mapping) { + if (WARN_ON(exp_info->ops->map_dma_buf || + exp_info->ops->unmap_dma_buf)) + return ERR_PTR(-EINVAL); + } else { + if (WARN_ON(!exp_info->ops->map_dma_buf || + !exp_info->ops->unmap_dma_buf)) + return ERR_PTR(-EINVAL); + } + if (WARN_ON(!exp_info->ops->pin != !exp_info->ops->unpin)) return ERR_PTR(-EINVAL);
@@ -916,9 +925,10 @@ dma_buf_pin_on_map(struct dma_buf_attachment *attach) */
/** - * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list + * dma_buf_mapping_attach - Add the device to dma_buf's attachments list * @dmabuf: [in] buffer to attach device to. - * @dev: [in] device to be attached. + * @importer_matches: [in] mapping types supported by the importer + * @match_len: [in] length of @importer_matches * @importer_ops: [in] importer operations for the attachment * @importer_priv: [in] importer private pointer for the attachment * @@ -934,31 +944,46 @@ dma_buf_pin_on_map(struct dma_buf_attachment *attach) * error code wrapped into a pointer on failure. * * Note that this can fail if the backing storage of @dmabuf is in a place not - * accessible to @dev, and cannot be moved to a more suitable place. This is - * indicated with the error code -EBUSY. + * accessible to any importers, and cannot be moved to a more suitable place. + * This is indicated with the error code -EBUSY. */ -struct dma_buf_attachment * -dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, - const struct dma_buf_attach_ops *importer_ops, - void *importer_priv) +struct dma_buf_attachment *dma_buf_mapping_attach( + struct dma_buf *dmabuf, struct dma_buf_mapping_match *importer_matches, + size_t match_len, const struct dma_buf_attach_ops *importer_ops, + void *importer_priv) { + struct dma_buf_match_args match_args = { + .dmabuf = dmabuf, + .imp_matches = importer_matches, + .imp_len = match_len, + }; struct dma_buf_attachment *attach; int ret;
- if (WARN_ON(!dmabuf || !dev)) + if (WARN_ON(!dmabuf)) return ERR_PTR(-EINVAL);
if (WARN_ON(importer_ops && !importer_ops->move_notify)) return ERR_PTR(-EINVAL);
+ attach = kzalloc(sizeof(*attach), GFP_KERNEL); if (!attach) return ERR_PTR(-ENOMEM);
- attach->dev = dev; + match_args.attach = attach; + if (dmabuf->ops->match_mapping) { + ret = dmabuf->ops->match_mapping(&match_args); + if (ret) + goto err_attach; + } else { + ret = dma_buf_match_mapping(&match_args, + &dma_buf_sgt_exp_compat_match, 1); + if (ret) + goto err_attach; + } + attach->dmabuf = dmabuf; - if (importer_ops) - attach->peer2peer = importer_ops->allow_peer2peer; attach->importer_ops = importer_ops; attach->importer_priv = importer_priv;
@@ -977,23 +1002,53 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, kfree(attach); return ERR_PTR(ret); } -EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, "DMA_BUF"); +EXPORT_SYMBOL_NS_GPL(dma_buf_mapping_attach, "DMA_BUF");
/** - * dma_buf_attach - Wrapper for dma_buf_dynamic_attach + * dma_buf_attach - Wrapper for dma_buf_mapping_attach * @dmabuf: [in] buffer to attach device to. * @dev: [in] device to be attached. * - * Wrapper to call dma_buf_dynamic_attach() for drivers which still use a static + * Wrapper to call dma_buf_mapping_attach() for drivers which still use a static * mapping. */ struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, struct device *dev) { - return dma_buf_dynamic_attach(dmabuf, dev, NULL, NULL); + struct dma_buf_mapping_match sgt_match[] = { + DMA_BUF_IMAPPING_SGT(dev, DMA_SGT_NO_P2P), + }; + + return dma_buf_mapping_attach(dmabuf, sgt_match, ARRAY_SIZE(sgt_match), + NULL, NULL); } EXPORT_SYMBOL_NS_GPL(dma_buf_attach, "DMA_BUF");
+/** + * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list + * @dmabuf: [in] buffer to attach device to. + * @dev: [in] device to be attached. + * @importer_ops: [in] importer operations for the attachment + * @importer_priv: [in] importer private pointer for the attachment + * + * Wrapper to call dma_buf_mapping_attach() for drivers which only support SGT. + */ +struct dma_buf_attachment * +dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, + const struct dma_buf_attach_ops *importer_ops, + void *importer_priv) +{ + struct dma_buf_mapping_match sgt_match[] = { + DMA_BUF_IMAPPING_SGT(dev, importer_ops->allow_peer2peer ? + DMA_SGT_IMPORTER_ACCEPTS_P2P : + DMA_SGT_NO_P2P), + }; + + return dma_buf_mapping_attach(dmabuf, sgt_match, ARRAY_SIZE(sgt_match), + importer_ops, importer_priv); +} +EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, "DMA_BUF"); + /** * dma_buf_detach - Remove the given attachment from dmabuf's attachments list * @dmabuf: [in] buffer to detach from. diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 3bcd1d6d150188..14d556bb022862 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -29,6 +29,7 @@ struct dma_buf; struct dma_buf_attachment; struct dma_buf_mapping_type; struct dma_buf_mapping_exp_ops; +struct dma_buf_match_args;
/* @@ -308,6 +309,14 @@ struct dma_buf_ops {
int (*vmap)(struct dma_buf *dmabuf, struct iosys_map *map); void (*vunmap)(struct dma_buf *dmabuf, struct iosys_map *map); + + /** + * @match_mapping: + * + * Called during attach. Allows the exporter to build its own exporter + * struct dma_buf_mapping_match[] and call dma_buf_match_mapping(). + */ + int (*match_mapping)(struct dma_buf_match_args *args); };
/** @@ -619,6 +628,11 @@ struct dma_buf_attachment * dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, const struct dma_buf_attach_ops *importer_ops, void *importer_priv); +struct dma_buf_attachment *dma_buf_mapping_attach( + struct dma_buf *dmabuf, struct dma_buf_mapping_match *importer_matches, + size_t match_len, const struct dma_buf_attach_ops *importer_ops, + void *importer_priv); + void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach); int dma_buf_pin(struct dma_buf_attachment *attach);
Now that dma_buf_mapping_attach() ensures a mapping_type exists, even for exporters and importers that don't provide it, route operations through the map_type.
For map/unmap this will go through dma_buf_sgt_compat_map_dma_buf() which calls the same attach->dmabuf->ops->map_dma_buf().
Move the debugfs processing unique to SGT into a callback too.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/dma-buf/dma-buf.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 6e89fcfdad3015..4211ae2b462bdd 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -1149,12 +1149,14 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, "DMA_BUF"); struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, enum dma_data_direction direction) { + const struct dma_buf_mapping_sgt_exp_ops *sgt_exp_ops = + dma_buf_get_sgt_ops(attach); struct sg_table *sg_table; signed long ret;
might_sleep();
- if (WARN_ON(!attach || !attach->dmabuf)) + if (WARN_ON(!attach || !attach->dmabuf || !sgt_exp_ops)) return ERR_PTR(-EINVAL);
dma_resv_assert_held(attach->dmabuf->resv); @@ -1170,7 +1172,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, return ERR_PTR(ret); }
- sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction); + sg_table = sgt_exp_ops->map_dma_buf(attach, direction); if (!sg_table) sg_table = ERR_PTR(-ENOMEM); if (IS_ERR(sg_table)) @@ -1208,7 +1210,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, return sg_table;
error_unmap: - attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, direction); + sgt_exp_ops->unmap_dma_buf(attach, sg_table, direction); sg_table = ERR_PTR(ret);
error_unpin: @@ -1261,15 +1263,18 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, struct sg_table *sg_table, enum dma_data_direction direction) { + const struct dma_buf_mapping_sgt_exp_ops *sgt_exp_ops = + dma_buf_get_sgt_ops(attach); + might_sleep();
- if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) + if (WARN_ON(!attach || !attach->dmabuf || !sg_table || !sgt_exp_ops)) return;
dma_resv_assert_held(attach->dmabuf->resv);
mangle_sg_table(sg_table); - attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, direction); + sgt_exp_ops->unmap_dma_buf(attach, sg_table, direction);
if (dma_buf_pin_on_map(attach)) attach->dmabuf->ops->unpin(attach); @@ -1700,7 +1705,11 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused) attach_count = 0;
list_for_each_entry(attach_obj, &buf_obj->attachments, node) { - seq_printf(s, "\t%s\n", dev_name(attach_obj->dev)); + seq_printf(s, "\t%s:", attach_obj->map_type.type->name); + if (attach_obj->map_type.type->debugfs_dump) + attach_obj->map_type.type->debugfs_dump( + s, attach_obj); + seq_putc(s, '\n'); attach_count++; } dma_resv_unlock(buf_obj->resv);
Introduce single_exporter_match into the dma_buf_ops so that single exporter drivers can simply set it using a static initializer to use the mapping type APIs. Provide a helper macro DMA_BUF_SIMPLE_SGT_EXP_MATCH() that generates the initializer for simple drivers that don't use P2P.
More complex exporters, especially those with P2P, need to implement the match_mapping callback to extract things like their DMA struct device from the dma_buf in order to do the P2P calculations.
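As a sketch (the my_exporter_* names and the way the pci_dev is obtained are illustrative, not taken from an in-tree driver), such an exporter provides match_mapping instead of the static initializer:

  static int my_exporter_match_mapping(struct dma_buf_match_args *args)
  {
          struct my_exporter *priv = args->dmabuf->priv;
          struct dma_buf_mapping_match exp_match[] = {
                  DMA_BUF_EMAPPING_SGT_P2P(&my_exporter_sgt_ops, priv->pdev),
          };

          return dma_buf_match_mapping(args, exp_match,
                                       ARRAY_SIZE(exp_match));
  }

  static const struct dma_buf_ops my_exporter_ops = {
          .release        = my_exporter_release,
          .match_mapping  = my_exporter_match_mapping,
  };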
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/dma-buf/dma-buf.c       | 14 +++++++++++---
 include/linux/dma-buf-mapping.h | 15 +++++++++++++++
 include/linux/dma-buf.h         |  9 +++++++++
 3 files changed, 35 insertions(+), 3 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 4211ae2b462bdd..ac755f358dc7b3 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -693,10 +693,14 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) || !exp_info->ops->release)) return ERR_PTR(-EINVAL);
- if (exp_info->ops->match_mapping) { + if (exp_info->ops->match_mapping || + exp_info->ops->single_exporter_match) { if (WARN_ON(exp_info->ops->map_dma_buf || exp_info->ops->unmap_dma_buf)) return ERR_PTR(-EINVAL); + if (WARN_ON(exp_info->ops->match_mapping && + exp_info->ops->single_exporter_match)) + return ERR_PTR(-EINVAL); } else { if (WARN_ON(!exp_info->ops->map_dma_buf || !exp_info->ops->unmap_dma_buf)) @@ -977,8 +981,12 @@ struct dma_buf_attachment *dma_buf_mapping_attach( if (ret) goto err_attach; } else { - ret = dma_buf_match_mapping(&match_args, - &dma_buf_sgt_exp_compat_match, 1); + const struct dma_buf_mapping_match *exp_match = + dmabuf->ops->single_exporter_match; + + if (!exp_match) + exp_match = &dma_buf_sgt_exp_compat_match; + ret = dma_buf_match_mapping(&match_args, exp_match, 1); if (ret) goto err_attach; } diff --git a/include/linux/dma-buf-mapping.h b/include/linux/dma-buf-mapping.h index 360a7fe0b098be..c11e32ef2a684f 100644 --- a/include/linux/dma-buf-mapping.h +++ b/include/linux/dma-buf-mapping.h @@ -191,4 +191,19 @@ DMA_BUF_EMAPPING_SGT_P2P(const struct dma_buf_mapping_sgt_exp_ops *exp_ops,
extern const struct dma_buf_mapping_match dma_buf_sgt_exp_compat_match;
+/* + * dma_buf_ops initializer helper for simple drivers that use a single + * SGT map/unmap operation without P2P. + */ +#define DMA_BUF_SIMPLE_SGT_EXP_MATCH(_map, _unmap) \ + .single_exporter_match = &((const struct dma_buf_mapping_match){ \ + .type = &dma_buf_mapping_sgt_type, \ + .exp_ops = &((const struct dma_buf_mapping_sgt_exp_ops){ \ + .map_dma_buf = _map, \ + .unmap_dma_buf = _unmap, \ + }.ops), \ + .sgt_data = { \ + .exporter_requires_p2p = DMA_SGT_NO_P2P, \ + } }) + #endif diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 14d556bb022862..a8cfbbafbe31fe 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -317,6 +317,15 @@ struct dma_buf_ops { * struct dma_buf_mapping_match[] and call dma_buf_match_mapping(). */ int (*match_mapping)(struct dma_buf_match_args *args); + + /** + * @single_exporter_match: + * + * Should only be set by the DMA_BUF_SIMPLE_*_EXP_MATCH() helper macros. + * Exactly one of @match_mapping or @single_exporter_match must be + * provided. + */ + const struct dma_buf_mapping_match *single_exporter_match; };
/**
When the next patch converts exporters to use SGT natively, dma_buf->ops->map_dma_buf will become NULL, so additionally check the sgt_exp_ops to find the function in its new location.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/gpu/drm/drm_prime.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index 21809a82187b12..d093a888b0df8f 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -28,6 +28,7 @@
#include <linux/export.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/rbtree.h> #include <linux/module.h>
@@ -587,6 +588,18 @@ int drm_prime_handle_to_fd_ioctl(struct drm_device *dev, void *data, * option for sharing lots of buffers for rendering. */
+static bool is_gem_map_dma_buf(struct dma_buf_attachment *attach) +{ + const struct dma_buf_mapping_sgt_exp_ops *sgt_exp_ops = + dma_buf_get_sgt_ops(attach); + + if (attach->dmabuf->ops->map_dma_buf == drm_gem_map_dma_buf) + return true; + if (sgt_exp_ops && sgt_exp_ops->map_dma_buf == drm_gem_map_dma_buf) + return true; + return false; +} + /** * drm_gem_map_attach - dma_buf attach implementation for GEM * @dma_buf: buffer to attach device to @@ -608,7 +621,7 @@ int drm_gem_map_attach(struct dma_buf *dma_buf, * drm_gem_map_dma_buf() requires obj->get_sg_table(), but drivers * that implement their own ->map_dma_buf() do not. */ - if (dma_buf->ops->map_dma_buf == drm_gem_map_dma_buf && + if (is_gem_map_dma_buf(attach) && !obj->funcs->get_sg_table) return -ENOSYS;
Update the exporters to use an SGT mapping type and the new style mapping type API. None of these exporters do anything with attach->peer2peer or importer_ops->allow_peer2peer, and they all follow the same pattern.
Change all the places that need to get the SGT's DMA device for DMA API use to use dma_buf_sgt_dma_device().
This is all a mechanical change of moving the map_dma_buf/unmap_dma_buf into DMA_BUF_SIMPLE_SGT_EXP_MATCH() arguments and switching attach->dev to dma_buf_sgt_dma_device(attach).
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/accel/amdxdna/amdxdna_gem.c | 5 +++--
 drivers/accel/amdxdna/amdxdna_ubuf.c | 10 ++++++----
 drivers/accel/ivpu/ivpu_gem_userptr.c | 11 +++++++----
 drivers/dma-buf/heaps/cma_heap.c | 12 +++++++-----
 drivers/dma-buf/heaps/system_heap.c | 13 ++++++++-----
 drivers/dma-buf/udmabuf.c | 8 ++++----
 drivers/gpu/drm/armada/armada_gem.c | 12 +++++++-----
 drivers/gpu/drm/drm_prime.c | 9 +++++----
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 8 +++++---
 .../gpu/drm/i915/gem/selftests/mock_dmabuf.c | 8 ++++----
 drivers/gpu/drm/msm/msm_gem_prime.c | 7 +++++--
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 5 +++--
 drivers/gpu/drm/tegra/gem.c | 12 +++++++-----
 drivers/gpu/drm/virtio/virtgpu_prime.c | 11 +++++++----
 drivers/iommu/iommufd/selftest.c | 18 +++---------------
 .../common/videobuf2/videobuf2-dma-contig.c | 15 ++++++++-------
 .../media/common/videobuf2/videobuf2-dma-sg.c | 14 +++++++++-----
 .../common/videobuf2/videobuf2-vmalloc.c | 13 ++++++++-----
 drivers/misc/fastrpc.c | 12 +++++++-----
 drivers/tee/tee_heap.c | 13 +++++++------
 drivers/xen/gntdev-dmabuf.c | 19 +++++++++++--------
 samples/vfio-mdev/mbochs.c | 10 +++++-----
 sound/soc/fsl/fsl_asrc_m2m.c | 12 +++++++-----
 23 files changed, 143 insertions(+), 114 deletions(-)
diff --git a/drivers/accel/amdxdna/amdxdna_gem.c b/drivers/accel/amdxdna/amdxdna_gem.c index dfa916eeb2d9c8..fb7c8de960cd2a 100644 --- a/drivers/accel/amdxdna/amdxdna_gem.c +++ b/drivers/accel/amdxdna/amdxdna_gem.c @@ -11,6 +11,7 @@ #include <drm/drm_print.h> #include <drm/gpu_scheduler.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-direct.h> #include <linux/iosys-map.h> #include <linux/pagemap.h> @@ -385,12 +386,12 @@ static int amdxdna_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struc static const struct dma_buf_ops amdxdna_dmabuf_ops = { .attach = drm_gem_map_attach, .detach = drm_gem_map_detach, - .map_dma_buf = drm_gem_map_dma_buf, - .unmap_dma_buf = drm_gem_unmap_dma_buf, .release = drm_gem_dmabuf_release, .mmap = amdxdna_gem_dmabuf_mmap, .vmap = drm_gem_dmabuf_vmap, .vunmap = drm_gem_dmabuf_vunmap, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(drm_gem_map_dma_buf, + drm_gem_unmap_dma_buf), };
static int amdxdna_gem_obj_vmap(struct amdxdna_gem_obj *abo, void **vaddr) diff --git a/drivers/accel/amdxdna/amdxdna_ubuf.c b/drivers/accel/amdxdna/amdxdna_ubuf.c index 077b2261cf2a04..ad3c9064f5c5cd 100644 --- a/drivers/accel/amdxdna/amdxdna_ubuf.c +++ b/drivers/accel/amdxdna/amdxdna_ubuf.c @@ -7,6 +7,7 @@ #include <drm/drm_device.h> #include <drm/drm_print.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/pagemap.h> #include <linux/vmalloc.h>
@@ -37,7 +38,8 @@ static struct sg_table *amdxdna_ubuf_map(struct dma_buf_attachment *attach, return ERR_PTR(ret);
if (ubuf->flags & AMDXDNA_UBUF_FLAG_MAP_DMA) { - ret = dma_map_sgtable(attach->dev, sg, direction, 0); + ret = dma_map_sgtable(dma_buf_sgt_dma_device(attach), sg, + direction, 0); if (ret) return ERR_PTR(ret); } @@ -52,7 +54,8 @@ static void amdxdna_ubuf_unmap(struct dma_buf_attachment *attach, struct amdxdna_ubuf_priv *ubuf = attach->dmabuf->priv;
if (ubuf->flags & AMDXDNA_UBUF_FLAG_MAP_DMA) - dma_unmap_sgtable(attach->dev, sg, direction, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(attach), sg, direction, + 0);
sg_free_table(sg); kfree(sg); @@ -117,12 +120,11 @@ static void amdxdna_ubuf_vunmap(struct dma_buf *dbuf, struct iosys_map *map) }
static const struct dma_buf_ops amdxdna_ubuf_dmabuf_ops = { - .map_dma_buf = amdxdna_ubuf_map, - .unmap_dma_buf = amdxdna_ubuf_unmap, .release = amdxdna_ubuf_release, .mmap = amdxdna_ubuf_mmap, .vmap = amdxdna_ubuf_vmap, .vunmap = amdxdna_ubuf_vunmap, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(amdxdna_ubuf_map, amdxdna_ubuf_unmap), };
struct dma_buf *amdxdna_get_ubuf(struct drm_device *dev, diff --git a/drivers/accel/ivpu/ivpu_gem_userptr.c b/drivers/accel/ivpu/ivpu_gem_userptr.c index 25ba606164c03c..32e9a37a15191d 100644 --- a/drivers/accel/ivpu/ivpu_gem_userptr.c +++ b/drivers/accel/ivpu/ivpu_gem_userptr.c @@ -4,6 +4,7 @@ */
#include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/err.h> #include <linux/highmem.h> #include <linux/mm.h> @@ -26,7 +27,8 @@ ivpu_gem_userptr_dmabuf_map(struct dma_buf_attachment *attachment, struct sg_table *sgt = attachment->dmabuf->priv; int ret;
- ret = dma_map_sgtable(attachment->dev, sgt, direction, DMA_ATTR_SKIP_CPU_SYNC); + ret = dma_map_sgtable(dma_buf_sgt_dma_device(attachment), sgt, + direction, DMA_ATTR_SKIP_CPU_SYNC); if (ret) return ERR_PTR(ret);
@@ -37,7 +39,8 @@ static void ivpu_gem_userptr_dmabuf_unmap(struct dma_buf_attachment *attachment, struct sg_table *sgt, enum dma_data_direction direction) { - dma_unmap_sgtable(attachment->dev, sgt, direction, DMA_ATTR_SKIP_CPU_SYNC); + dma_unmap_sgtable(dma_buf_sgt_dma_device(attachment), sgt, direction, + DMA_ATTR_SKIP_CPU_SYNC); }
static void ivpu_gem_userptr_dmabuf_release(struct dma_buf *dma_buf) @@ -56,9 +59,9 @@ static void ivpu_gem_userptr_dmabuf_release(struct dma_buf *dma_buf) }
static const struct dma_buf_ops ivpu_gem_userptr_dmabuf_ops = { - .map_dma_buf = ivpu_gem_userptr_dmabuf_map, - .unmap_dma_buf = ivpu_gem_userptr_dmabuf_unmap, .release = ivpu_gem_userptr_dmabuf_release, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(ivpu_gem_userptr_dmabuf_map, + ivpu_gem_userptr_dmabuf_unmap), };
static struct dma_buf * diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c index 42f88193eab9f8..a1ac415bbc512c 100644 --- a/drivers/dma-buf/heaps/cma_heap.c +++ b/drivers/dma-buf/heaps/cma_heap.c @@ -14,6 +14,7 @@
#include <linux/cma.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-buf/heaps/cma.h> #include <linux/dma-heap.h> #include <linux/dma-map-ops.h> @@ -87,7 +88,7 @@ static int cma_heap_attach(struct dma_buf *dmabuf, return ret; }
- a->dev = attachment->dev; + a->dev = dma_buf_sgt_dma_device(attachment); INIT_LIST_HEAD(&a->list); a->mapped = false;
@@ -121,7 +122,8 @@ static struct sg_table *cma_heap_map_dma_buf(struct dma_buf_attachment *attachme struct sg_table *table = &a->table; int ret;
- ret = dma_map_sgtable(attachment->dev, table, direction, 0); + ret = dma_map_sgtable(dma_buf_sgt_dma_device(attachment), table, + direction, 0); if (ret) return ERR_PTR(-ENOMEM); a->mapped = true; @@ -135,7 +137,8 @@ static void cma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, struct dma_heap_attachment *a = attachment->priv;
a->mapped = false; - dma_unmap_sgtable(attachment->dev, table, direction, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(attachment), table, direction, + 0); }
static int cma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, @@ -282,14 +285,13 @@ static void cma_heap_dma_buf_release(struct dma_buf *dmabuf) static const struct dma_buf_ops cma_heap_buf_ops = { .attach = cma_heap_attach, .detach = cma_heap_detach, - .map_dma_buf = cma_heap_map_dma_buf, - .unmap_dma_buf = cma_heap_unmap_dma_buf, .begin_cpu_access = cma_heap_dma_buf_begin_cpu_access, .end_cpu_access = cma_heap_dma_buf_end_cpu_access, .mmap = cma_heap_mmap, .vmap = cma_heap_vmap, .vunmap = cma_heap_vunmap, .release = cma_heap_dma_buf_release, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(cma_heap_map_dma_buf, cma_heap_unmap_dma_buf), };
static struct dma_buf *cma_heap_allocate(struct dma_heap *heap, diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c index 4c782fe33fd497..18c05d2fe27f0b 100644 --- a/drivers/dma-buf/heaps/system_heap.c +++ b/drivers/dma-buf/heaps/system_heap.c @@ -11,6 +11,7 @@ */
#include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-mapping.h> #include <linux/dma-heap.h> #include <linux/err.h> @@ -87,7 +88,7 @@ static int system_heap_attach(struct dma_buf *dmabuf, return ret; }
- a->dev = attachment->dev; + a->dev = dma_buf_sgt_dma_device(attachment); INIT_LIST_HEAD(&a->list); a->mapped = false;
@@ -121,7 +122,8 @@ static struct sg_table *system_heap_map_dma_buf(struct dma_buf_attachment *attac struct sg_table *table = &a->table; int ret;
- ret = dma_map_sgtable(attachment->dev, table, direction, 0); + ret = dma_map_sgtable(dma_buf_sgt_dma_device(attachment), table, + direction, 0); if (ret) return ERR_PTR(ret);
@@ -136,7 +138,8 @@ static void system_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, struct dma_heap_attachment *a = attachment->priv;
a->mapped = false; - dma_unmap_sgtable(attachment->dev, table, direction, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(attachment), table, direction, + 0); }
static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, @@ -305,14 +308,14 @@ static void system_heap_dma_buf_release(struct dma_buf *dmabuf) static const struct dma_buf_ops system_heap_buf_ops = { .attach = system_heap_attach, .detach = system_heap_detach, - .map_dma_buf = system_heap_map_dma_buf, - .unmap_dma_buf = system_heap_unmap_dma_buf, .begin_cpu_access = system_heap_dma_buf_begin_cpu_access, .end_cpu_access = system_heap_dma_buf_end_cpu_access, .mmap = system_heap_mmap, .vmap = system_heap_vmap, .vunmap = system_heap_vunmap, .release = system_heap_dma_buf_release, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(system_heap_map_dma_buf, + system_heap_unmap_dma_buf), };
static struct page *alloc_largest_available(unsigned long size, diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c index 40399c26e6be62..e1b75772df168f 100644 --- a/drivers/dma-buf/udmabuf.c +++ b/drivers/dma-buf/udmabuf.c @@ -2,6 +2,7 @@ #include <linux/cred.h> #include <linux/device.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-resv.h> #include <linux/highmem.h> #include <linux/init.h> @@ -185,14 +186,14 @@ static void put_sg_table(struct device *dev, struct sg_table *sg, static struct sg_table *map_udmabuf(struct dma_buf_attachment *at, enum dma_data_direction direction) { - return get_sg_table(at->dev, at->dmabuf, direction); + return get_sg_table(dma_buf_sgt_dma_device(at), at->dmabuf, direction); }
static void unmap_udmabuf(struct dma_buf_attachment *at, struct sg_table *sg, enum dma_data_direction direction) { - return put_sg_table(at->dev, sg, direction); + return put_sg_table(dma_buf_sgt_dma_device(at), sg, direction); }
static void unpin_all_folios(struct udmabuf *ubuf) @@ -277,14 +278,13 @@ static int end_cpu_udmabuf(struct dma_buf *buf, }
static const struct dma_buf_ops udmabuf_ops = { - .map_dma_buf = map_udmabuf, - .unmap_dma_buf = unmap_udmabuf, .release = release_udmabuf, .mmap = mmap_udmabuf, .vmap = vmap_udmabuf, .vunmap = vunmap_udmabuf, .begin_cpu_access = begin_cpu_udmabuf, .end_cpu_access = end_cpu_udmabuf, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(map_udmabuf, unmap_udmabuf), };
#define SEALS_WANTED (F_SEAL_SHRINK) diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c index 35fcfa0d85ff35..bf6968b1f22511 100644 --- a/drivers/gpu/drm/armada/armada_gem.c +++ b/drivers/gpu/drm/armada/armada_gem.c @@ -4,6 +4,7 @@ */
#include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-mapping.h> #include <linux/mman.h> #include <linux/shmem_fs.h> @@ -387,6 +388,7 @@ static struct sg_table * armada_gem_prime_map_dma_buf(struct dma_buf_attachment *attach, enum dma_data_direction dir) { + struct device *dma_dev = dma_buf_sgt_dma_device(attach); struct drm_gem_object *obj = attach->dmabuf->priv; struct armada_gem_object *dobj = drm_to_armada_gem(obj); struct scatterlist *sg; @@ -417,7 +419,7 @@ armada_gem_prime_map_dma_buf(struct dma_buf_attachment *attach, sg_set_page(sg, page, PAGE_SIZE, 0); }
- if (dma_map_sgtable(attach->dev, sgt, dir, 0)) + if (dma_map_sgtable(dma_dev, sgt, dir, 0)) goto release; } else if (dobj->page) { /* Single contiguous page */ @@ -426,7 +428,7 @@ armada_gem_prime_map_dma_buf(struct dma_buf_attachment *attach,
sg_set_page(sgt->sgl, dobj->page, dobj->obj.size, 0);
- if (dma_map_sgtable(attach->dev, sgt, dir, 0)) + if (dma_map_sgtable(dma_dev, sgt, dir, 0)) goto free_table; } else if (dobj->linear) { /* Single contiguous physical region - no struct page */ @@ -458,7 +460,7 @@ static void armada_gem_prime_unmap_dma_buf(struct dma_buf_attachment *attach, int i;
if (!dobj->linear) - dma_unmap_sgtable(attach->dev, sgt, dir, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(attach), sgt, dir, 0);
if (dobj->obj.filp) { struct scatterlist *sg; @@ -478,10 +480,10 @@ armada_gem_dmabuf_mmap(struct dma_buf *buf, struct vm_area_struct *vma) }
static const struct dma_buf_ops armada_gem_prime_dmabuf_ops = { - .map_dma_buf = armada_gem_prime_map_dma_buf, - .unmap_dma_buf = armada_gem_prime_unmap_dma_buf, .release = drm_gem_dmabuf_release, .mmap = armada_gem_dmabuf_mmap, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(armada_gem_prime_map_dma_buf, + armada_gem_prime_unmap_dma_buf), };
struct dma_buf * diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index d093a888b0df8f..94ec2483e40107 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -693,7 +693,7 @@ struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach, if (IS_ERR(sgt)) return sgt;
- ret = dma_map_sgtable(attach->dev, sgt, dir, + ret = dma_map_sgtable(dma_buf_sgt_dma_device(attach), sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); if (ret) { sg_free_table(sgt); @@ -720,7 +720,8 @@ void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach, if (!sgt) return;
- dma_unmap_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); + dma_unmap_sgtable(dma_buf_sgt_dma_device(attach), sgt, dir, + DMA_ATTR_SKIP_CPU_SYNC); sg_free_table(sgt); kfree(sgt); } @@ -840,12 +841,12 @@ EXPORT_SYMBOL(drm_gem_dmabuf_mmap); static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = { .attach = drm_gem_map_attach, .detach = drm_gem_map_detach, - .map_dma_buf = drm_gem_map_dma_buf, - .unmap_dma_buf = drm_gem_unmap_dma_buf, .release = drm_gem_dmabuf_release, .mmap = drm_gem_dmabuf_mmap, .vmap = drm_gem_dmabuf_vmap, .vunmap = drm_gem_dmabuf_vunmap, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(drm_gem_map_dma_buf, + drm_gem_unmap_dma_buf), };
/** diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index f4f1c979d1b9ca..a119623aed254b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -4,6 +4,7 @@ */
#include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/highmem.h> #include <linux/dma-resv.h> #include <linux/module.h> @@ -52,7 +53,8 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attach, dst = sg_next(dst); }
- ret = dma_map_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); + ret = dma_map_sgtable(dma_buf_sgt_dma_device(attach), sgt, dir, + DMA_ATTR_SKIP_CPU_SYNC); if (ret) goto err_free_sg;
@@ -203,14 +205,14 @@ static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf, static const struct dma_buf_ops i915_dmabuf_ops = { .attach = i915_gem_dmabuf_attach, .detach = i915_gem_dmabuf_detach, - .map_dma_buf = i915_gem_map_dma_buf, - .unmap_dma_buf = drm_gem_unmap_dma_buf, .release = drm_gem_dmabuf_release, .mmap = i915_gem_dmabuf_mmap, .vmap = i915_gem_dmabuf_vmap, .vunmap = i915_gem_dmabuf_vunmap, .begin_cpu_access = i915_gem_begin_cpu_access, .end_cpu_access = i915_gem_end_cpu_access, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(i915_gem_map_dma_buf, + drm_gem_unmap_dma_buf), };
struct dma_buf *i915_gem_prime_export(struct drm_gem_object *gem_obj, int flags) diff --git a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c index 5cd58e0f0dcf64..93a091280baf9e 100644 --- a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c @@ -4,6 +4,7 @@ * Copyright © 2016 Intel Corporation */
+#include <linux/dma-buf-mapping.h> #include <linux/vmalloc.h> #include "mock_dmabuf.h"
@@ -29,7 +30,7 @@ static struct sg_table *mock_map_dma_buf(struct dma_buf_attachment *attachment, sg = sg_next(sg); }
- err = dma_map_sgtable(attachment->dev, st, dir, 0); + err = dma_map_sgtable(dma_buf_sgt_dma_device(attachment), st, dir, 0); if (err) goto err_st;
@@ -46,7 +47,7 @@ static void mock_unmap_dma_buf(struct dma_buf_attachment *attachment, struct sg_table *st, enum dma_data_direction dir) { - dma_unmap_sgtable(attachment->dev, st, dir, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(attachment), st, dir, 0); sg_free_table(st); kfree(st); } @@ -88,12 +89,11 @@ static int mock_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma) }
static const struct dma_buf_ops mock_dmabuf_ops = { - .map_dma_buf = mock_map_dma_buf, - .unmap_dma_buf = mock_unmap_dma_buf, .release = mock_dmabuf_release, .mmap = mock_dmabuf_mmap, .vmap = mock_dmabuf_vmap, .vunmap = mock_dmabuf_vunmap, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(mock_map_dma_buf, mock_unmap_dma_buf), };
static struct dma_buf *mock_dmabuf(int npages) diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c index 036d34c674d9a2..ed7a9bfd33c288 100644 --- a/drivers/gpu/drm/msm/msm_gem_prime.c +++ b/drivers/gpu/drm/msm/msm_gem_prime.c @@ -5,6 +5,7 @@ */
#include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h>
#include <drm/drm_drv.h> #include <drm/drm_prime.h> @@ -54,12 +55,12 @@ static void msm_gem_dmabuf_release(struct dma_buf *dma_buf) static const struct dma_buf_ops msm_gem_prime_dmabuf_ops = { .attach = drm_gem_map_attach, .detach = drm_gem_map_detach, - .map_dma_buf = drm_gem_map_dma_buf, - .unmap_dma_buf = drm_gem_unmap_dma_buf, .release = msm_gem_dmabuf_release, .mmap = drm_gem_dmabuf_mmap, .vmap = drm_gem_dmabuf_vmap, .vunmap = drm_gem_dmabuf_vunmap, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(drm_gem_map_dma_buf, + drm_gem_unmap_dma_buf), };
struct drm_gem_object *msm_gem_prime_import(struct drm_device *dev, @@ -132,3 +133,5 @@ void msm_gem_prime_unpin(struct drm_gem_object *obj)
msm_gem_unpin_pages_locked(obj); } + +MODULE_IMPORT_NS("DMA_BUF"); diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c index 30cf1cdc1aa3c8..23beaeefab67d7 100644 --- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c +++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c @@ -5,6 +5,7 @@ */
#include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/highmem.h>
#include <drm/drm_prime.h> @@ -69,12 +70,12 @@ static int omap_gem_dmabuf_mmap(struct dma_buf *buffer, }
static const struct dma_buf_ops omap_dmabuf_ops = { - .map_dma_buf = omap_gem_map_dma_buf, - .unmap_dma_buf = omap_gem_unmap_dma_buf, .release = drm_gem_dmabuf_release, .begin_cpu_access = omap_gem_dmabuf_begin_cpu_access, .end_cpu_access = omap_gem_dmabuf_end_cpu_access, .mmap = omap_gem_dmabuf_mmap, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(omap_gem_map_dma_buf, + omap_gem_unmap_dma_buf), };
struct dma_buf *omap_gem_prime_export(struct drm_gem_object *obj, int flags) diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c index 6b14f1e919eb6b..244c01819d56b5 100644 --- a/drivers/gpu/drm/tegra/gem.c +++ b/drivers/gpu/drm/tegra/gem.c @@ -11,6 +11,7 @@ */
#include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/iommu.h> #include <linux/module.h> #include <linux/vmalloc.h> @@ -635,6 +636,7 @@ static struct sg_table * tegra_gem_prime_map_dma_buf(struct dma_buf_attachment *attach, enum dma_data_direction dir) { + struct device *dma_dev = dma_buf_sgt_dma_device(attach); struct drm_gem_object *gem = attach->dmabuf->priv; struct tegra_bo *bo = to_tegra_bo(gem); struct sg_table *sgt; @@ -648,12 +650,12 @@ tegra_gem_prime_map_dma_buf(struct dma_buf_attachment *attach, 0, gem->size, GFP_KERNEL) < 0) goto free; } else { - if (dma_get_sgtable(attach->dev, sgt, bo->vaddr, bo->iova, + if (dma_get_sgtable(dma_dev, sgt, bo->vaddr, bo->iova, gem->size) < 0) goto free; }
- if (dma_map_sgtable(attach->dev, sgt, dir, 0)) + if (dma_map_sgtable(dma_dev, sgt, dir, 0)) goto free;
return sgt; @@ -672,7 +674,7 @@ static void tegra_gem_prime_unmap_dma_buf(struct dma_buf_attachment *attach, struct tegra_bo *bo = to_tegra_bo(gem);
if (bo->pages) - dma_unmap_sgtable(attach->dev, sgt, dir, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(attach), sgt, dir, 0);
sg_free_table(sgt); kfree(sgt); @@ -745,14 +747,14 @@ static void tegra_gem_prime_vunmap(struct dma_buf *buf, struct iosys_map *map) }
static const struct dma_buf_ops tegra_gem_prime_dmabuf_ops = { - .map_dma_buf = tegra_gem_prime_map_dma_buf, - .unmap_dma_buf = tegra_gem_prime_unmap_dma_buf, .release = tegra_gem_prime_release, .begin_cpu_access = tegra_gem_prime_begin_cpu_access, .end_cpu_access = tegra_gem_prime_end_cpu_access, .mmap = tegra_gem_prime_mmap, .vmap = tegra_gem_prime_vmap, .vunmap = tegra_gem_prime_vunmap, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(tegra_gem_prime_map_dma_buf, + tegra_gem_prime_unmap_dma_buf), };
struct dma_buf *tegra_gem_prime_export(struct drm_gem_object *gem, diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c index ce49282198cbf6..d7e1f741f941a3 100644 --- a/drivers/gpu/drm/virtio/virtgpu_prime.c +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c @@ -23,6 +23,7 @@ */
#include <drm/drm_prime.h> +#include <linux/dma-buf-mapping.h> #include <linux/virtio_dma_buf.h>
#include "virtgpu_drv.h" @@ -53,7 +54,8 @@ virtgpu_gem_map_dma_buf(struct dma_buf_attachment *attach, struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
if (virtio_gpu_is_vram(bo)) - return virtio_gpu_vram_map_dma_buf(bo, attach->dev, dir); + return virtio_gpu_vram_map_dma_buf( + bo, dma_buf_sgt_dma_device(attach), dir);
return drm_gem_map_dma_buf(attach, dir); } @@ -66,7 +68,8 @@ static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach, struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
if (virtio_gpu_is_vram(bo)) { - virtio_gpu_vram_unmap_dma_buf(attach->dev, sgt, dir); + virtio_gpu_vram_unmap_dma_buf(dma_buf_sgt_dma_device(attach), + sgt, dir); return; }
@@ -77,12 +80,12 @@ static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = { .ops = { .attach = virtio_dma_buf_attach, .detach = drm_gem_map_detach, - .map_dma_buf = virtgpu_gem_map_dma_buf, - .unmap_dma_buf = virtgpu_gem_unmap_dma_buf, .release = drm_gem_dmabuf_release, .mmap = drm_gem_dmabuf_mmap, .vmap = drm_gem_dmabuf_vmap, .vunmap = drm_gem_dmabuf_vunmap, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(virtgpu_gem_map_dma_buf, + virtgpu_gem_unmap_dma_buf), }, .device_attach = drm_gem_map_attach, .get_uuid = virtgpu_virtio_get_uuid, diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c index 550ff36dec3a35..7aa6a58a5705f7 100644 --- a/drivers/iommu/iommufd/selftest.c +++ b/drivers/iommu/iommufd/selftest.c @@ -6,6 +6,7 @@ #include <linux/anon_inodes.h> #include <linux/debugfs.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-resv.h> #include <linux/fault-inject.h> #include <linux/file.h> @@ -1961,17 +1962,6 @@ struct iommufd_test_dma_buf { bool revoked; };
-static int iommufd_test_dma_buf_attach(struct dma_buf *dmabuf, - struct dma_buf_attachment *attachment) -{ - return 0; -} - -static void iommufd_test_dma_buf_detach(struct dma_buf *dmabuf, - struct dma_buf_attachment *attachment) -{ -} - static struct sg_table * iommufd_test_dma_buf_map(struct dma_buf_attachment *attachment, enum dma_data_direction dir) @@ -1994,11 +1984,9 @@ static void iommufd_test_dma_buf_release(struct dma_buf *dmabuf) }
static const struct dma_buf_ops iommufd_test_dmabuf_ops = { - .attach = iommufd_test_dma_buf_attach, - .detach = iommufd_test_dma_buf_detach, - .map_dma_buf = iommufd_test_dma_buf_map, .release = iommufd_test_dma_buf_release, - .unmap_dma_buf = iommufd_test_dma_buf_unmap, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(iommufd_test_dma_buf_map, + iommufd_test_dma_buf_unmap), };
int iommufd_test_dma_buf_iommufd_map(struct dma_buf_attachment *attachment, diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c index 7123c5fae92cee..7a3bc31699bb90 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c @@ -11,6 +11,7 @@ */
#include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/module.h> #include <linux/refcount.h> #include <linux/scatterlist.h> @@ -372,8 +373,8 @@ static void vb2_dc_dmabuf_ops_detach(struct dma_buf *dbuf, * memory locations do not require any explicit cache * maintenance prior or after being used by the device. */ - dma_unmap_sgtable(db_attach->dev, sgt, attach->dma_dir, - DMA_ATTR_SKIP_CPU_SYNC); + dma_unmap_sgtable(dma_buf_sgt_dma_device(db_attach), sgt, + attach->dma_dir, DMA_ATTR_SKIP_CPU_SYNC); sg_free_table(sgt); kfree(attach); db_attach->priv = NULL; @@ -392,8 +393,8 @@ static struct sg_table *vb2_dc_dmabuf_ops_map(
/* release any previous cache */ if (attach->dma_dir != DMA_NONE) { - dma_unmap_sgtable(db_attach->dev, sgt, attach->dma_dir, - DMA_ATTR_SKIP_CPU_SYNC); + dma_unmap_sgtable(dma_buf_sgt_dma_device(db_attach), sgt, + attach->dma_dir, DMA_ATTR_SKIP_CPU_SYNC); attach->dma_dir = DMA_NONE; }
@@ -401,7 +402,7 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( * mapping to the client with new direction, no cache sync * required see comment in vb2_dc_dmabuf_ops_detach() */ - if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, + if (dma_map_sgtable(dma_buf_sgt_dma_device(db_attach), sgt, dma_dir, DMA_ATTR_SKIP_CPU_SYNC)) { pr_err("failed to map scatterlist\n"); return ERR_PTR(-EIO); @@ -462,13 +463,13 @@ static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf, static const struct dma_buf_ops vb2_dc_dmabuf_ops = { .attach = vb2_dc_dmabuf_ops_attach, .detach = vb2_dc_dmabuf_ops_detach, - .map_dma_buf = vb2_dc_dmabuf_ops_map, - .unmap_dma_buf = vb2_dc_dmabuf_ops_unmap, .begin_cpu_access = vb2_dc_dmabuf_ops_begin_cpu_access, .end_cpu_access = vb2_dc_dmabuf_ops_end_cpu_access, .vmap = vb2_dc_dmabuf_ops_vmap, .mmap = vb2_dc_dmabuf_ops_mmap, .release = vb2_dc_dmabuf_ops_release, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(vb2_dc_dmabuf_ops_map, + vb2_dc_dmabuf_ops_unmap), };
static struct sg_table *vb2_dc_get_base_sgt(struct vb2_dc_buf *buf) diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c index b3bf2173c14e1b..03a836dce44f90 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c @@ -10,6 +10,7 @@ * the Free Software Foundation. */
+#include <linux/dma-buf-mapping.h> #include <linux/module.h> #include <linux/mm.h> #include <linux/refcount.h> @@ -416,7 +417,8 @@ static void vb2_dma_sg_dmabuf_ops_detach(struct dma_buf *dbuf,
/* release the scatterlist cache */ if (attach->dma_dir != DMA_NONE) - dma_unmap_sgtable(db_attach->dev, sgt, attach->dma_dir, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(db_attach), sgt, + attach->dma_dir, 0); sg_free_table(sgt); kfree(attach); db_attach->priv = NULL; @@ -435,12 +437,14 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map(
/* release any previous cache */ if (attach->dma_dir != DMA_NONE) { - dma_unmap_sgtable(db_attach->dev, sgt, attach->dma_dir, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(db_attach), sgt, + attach->dma_dir, 0); attach->dma_dir = DMA_NONE; }
/* mapping to the client with new direction */ - if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { + if (dma_map_sgtable(dma_buf_sgt_dma_device(db_attach), sgt, dma_dir, + 0)) { pr_err("failed to map scatterlist\n"); return ERR_PTR(-EIO); } @@ -509,13 +513,13 @@ static int vb2_dma_sg_dmabuf_ops_mmap(struct dma_buf *dbuf, static const struct dma_buf_ops vb2_dma_sg_dmabuf_ops = { .attach = vb2_dma_sg_dmabuf_ops_attach, .detach = vb2_dma_sg_dmabuf_ops_detach, - .map_dma_buf = vb2_dma_sg_dmabuf_ops_map, - .unmap_dma_buf = vb2_dma_sg_dmabuf_ops_unmap, .begin_cpu_access = vb2_dma_sg_dmabuf_ops_begin_cpu_access, .end_cpu_access = vb2_dma_sg_dmabuf_ops_end_cpu_access, .vmap = vb2_dma_sg_dmabuf_ops_vmap, .mmap = vb2_dma_sg_dmabuf_ops_mmap, .release = vb2_dma_sg_dmabuf_ops_release, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(vb2_dma_sg_dmabuf_ops_map, + vb2_dma_sg_dmabuf_ops_unmap), };
static struct dma_buf *vb2_dma_sg_get_dmabuf(struct vb2_buffer *vb, diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c index 3f777068cd34b7..b98d067acffe5d 100644 --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c @@ -10,6 +10,7 @@ * the Free Software Foundation. */
+#include <linux/dma-buf-mapping.h> #include <linux/io.h> #include <linux/module.h> #include <linux/mm.h> @@ -261,7 +262,8 @@ static void vb2_vmalloc_dmabuf_ops_detach(struct dma_buf *dbuf,
/* release the scatterlist cache */ if (attach->dma_dir != DMA_NONE) - dma_unmap_sgtable(db_attach->dev, sgt, attach->dma_dir, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(db_attach), sgt, + attach->dma_dir, 0); sg_free_table(sgt); kfree(attach); db_attach->priv = NULL; @@ -270,6 +272,7 @@ static void vb2_vmalloc_dmabuf_ops_detach(struct dma_buf *dbuf, static struct sg_table *vb2_vmalloc_dmabuf_ops_map( struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) { + struct device *dma_dev = dma_buf_sgt_dma_device(db_attach); struct vb2_vmalloc_attachment *attach = db_attach->priv; struct sg_table *sgt;
@@ -280,12 +283,12 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map(
/* release any previous cache */ if (attach->dma_dir != DMA_NONE) { - dma_unmap_sgtable(db_attach->dev, sgt, attach->dma_dir, 0); + dma_unmap_sgtable(dma_dev, sgt, attach->dma_dir, 0); attach->dma_dir = DMA_NONE; }
/* mapping to the client with new direction */ - if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { + if (dma_map_sgtable(dma_dev, sgt, dma_dir, 0)) { pr_err("failed to map scatterlist\n"); return ERR_PTR(-EIO); } @@ -326,11 +329,11 @@ static int vb2_vmalloc_dmabuf_ops_mmap(struct dma_buf *dbuf, static const struct dma_buf_ops vb2_vmalloc_dmabuf_ops = { .attach = vb2_vmalloc_dmabuf_ops_attach, .detach = vb2_vmalloc_dmabuf_ops_detach, - .map_dma_buf = vb2_vmalloc_dmabuf_ops_map, - .unmap_dma_buf = vb2_vmalloc_dmabuf_ops_unmap, .vmap = vb2_vmalloc_dmabuf_ops_vmap, .mmap = vb2_vmalloc_dmabuf_ops_mmap, .release = vb2_vmalloc_dmabuf_ops_release, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(vb2_vmalloc_dmabuf_ops_map, + vb2_vmalloc_dmabuf_ops_unmap), };
static struct dma_buf *vb2_vmalloc_get_dmabuf(struct vb2_buffer *vb, diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index ee652ef01534a8..2ea57170e56b3e 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -5,6 +5,7 @@ #include <linux/completion.h> #include <linux/device.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-mapping.h> #include <linux/dma-resv.h> #include <linux/idr.h> @@ -652,7 +653,8 @@ fastrpc_map_dma_buf(struct dma_buf_attachment *attachment,
table = &a->sgt;
- ret = dma_map_sgtable(attachment->dev, table, dir, 0); + ret = dma_map_sgtable(dma_buf_sgt_dma_device(attachment), table, dir, + 0); if (ret) table = ERR_PTR(ret); return table; @@ -662,7 +664,7 @@ static void fastrpc_unmap_dma_buf(struct dma_buf_attachment *attach, struct sg_table *table, enum dma_data_direction dir) { - dma_unmap_sgtable(attach->dev, table, dir, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(attach), table, dir, 0); }
static void fastrpc_release(struct dma_buf *dmabuf) @@ -691,7 +693,7 @@ static int fastrpc_dma_buf_attach(struct dma_buf *dmabuf, return -EINVAL; }
- a->dev = attachment->dev; + a->dev = dma_buf_sgt_dma_device(attachment); INIT_LIST_HEAD(&a->node); attachment->priv = a;
@@ -739,11 +741,11 @@ static int fastrpc_mmap(struct dma_buf *dmabuf, static const struct dma_buf_ops fastrpc_dma_buf_ops = { .attach = fastrpc_dma_buf_attach, .detach = fastrpc_dma_buf_detatch, - .map_dma_buf = fastrpc_map_dma_buf, - .unmap_dma_buf = fastrpc_unmap_dma_buf, .mmap = fastrpc_mmap, .vmap = fastrpc_vmap, .release = fastrpc_release, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(fastrpc_map_dma_buf, + fastrpc_unmap_dma_buf), };
static int fastrpc_map_attach(struct fastrpc_user *fl, int fd, diff --git a/drivers/tee/tee_heap.c b/drivers/tee/tee_heap.c index d8d7735cdffb9b..48948d39b94961 100644 --- a/drivers/tee/tee_heap.c +++ b/drivers/tee/tee_heap.c @@ -4,6 +4,7 @@ */
#include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-heap.h> #include <linux/genalloc.h> #include <linux/module.h> @@ -104,7 +105,7 @@ static int tee_heap_attach(struct dma_buf *dmabuf, return ret; }
- a->dev = attachment->dev; + a->dev = dma_buf_sgt_dma_device(attachment); attachment->priv = a;
return 0; @@ -126,8 +127,8 @@ tee_heap_map_dma_buf(struct dma_buf_attachment *attachment, struct tee_heap_attachment *a = attachment->priv; int ret;
- ret = dma_map_sgtable(attachment->dev, &a->table, direction, - DMA_ATTR_SKIP_CPU_SYNC); + ret = dma_map_sgtable(dma_buf_sgt_dma_device(attachment), &a->table, + direction, DMA_ATTR_SKIP_CPU_SYNC); if (ret) return ERR_PTR(ret);
@@ -142,7 +143,7 @@ static void tee_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
WARN_ON(&a->table != table);
- dma_unmap_sgtable(attachment->dev, table, direction, + dma_unmap_sgtable(dma_buf_sgt_dma_device(attachment), table, direction, DMA_ATTR_SKIP_CPU_SYNC); }
@@ -160,9 +161,9 @@ static void tee_heap_buf_free(struct dma_buf *dmabuf) static const struct dma_buf_ops tee_heap_buf_ops = { .attach = tee_heap_attach, .detach = tee_heap_detach, - .map_dma_buf = tee_heap_map_dma_buf, - .unmap_dma_buf = tee_heap_unmap_dma_buf, .release = tee_heap_buf_free, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(tee_heap_map_dma_buf, + tee_heap_unmap_dma_buf), };
static struct dma_buf *tee_dma_heap_alloc(struct dma_heap *heap, diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c index 550980dd3b0bc4..91a31a22ba98aa 100644 --- a/drivers/xen/gntdev-dmabuf.c +++ b/drivers/xen/gntdev-dmabuf.c @@ -11,6 +11,7 @@ #include <linux/kernel.h> #include <linux/errno.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-direct.h> #include <linux/slab.h> #include <linux/types.h> @@ -242,9 +243,10 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
if (sgt) { if (gntdev_dmabuf_attach->dir != DMA_NONE) - dma_unmap_sgtable(attach->dev, sgt, - gntdev_dmabuf_attach->dir, - DMA_ATTR_SKIP_CPU_SYNC); + dma_unmap_sgtable( + dma_buf_sgt_dma_device(attach), sgt, + gntdev_dmabuf_attach->dir, + DMA_ATTR_SKIP_CPU_SYNC); sg_free_table(sgt); }
@@ -258,12 +260,13 @@ static struct sg_table * dmabuf_exp_ops_map_dma_buf(struct dma_buf_attachment *attach, enum dma_data_direction dir) { + struct device *dma_dev = dma_buf_sgt_dma_device(attach); struct gntdev_dmabuf_attachment *gntdev_dmabuf_attach = attach->priv; struct gntdev_dmabuf *gntdev_dmabuf = attach->dmabuf->priv; struct sg_table *sgt;
pr_debug("Mapping %d pages for dev %p\n", gntdev_dmabuf->nr_pages, - attach->dev); + dma_dev);
if (dir == DMA_NONE || !gntdev_dmabuf_attach) return ERR_PTR(-EINVAL); @@ -282,7 +285,7 @@ dmabuf_exp_ops_map_dma_buf(struct dma_buf_attachment *attach, sgt = dmabuf_pages_to_sgt(gntdev_dmabuf->pages, gntdev_dmabuf->nr_pages); if (!IS_ERR(sgt)) { - if (dma_map_sgtable(attach->dev, sgt, dir, + if (dma_map_sgtable(dma_dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC)) { sg_free_table(sgt); kfree(sgt); @@ -293,7 +296,7 @@ dmabuf_exp_ops_map_dma_buf(struct dma_buf_attachment *attach, } } if (IS_ERR(sgt)) - pr_debug("Failed to map sg table for dev %p\n", attach->dev); + pr_debug("Failed to map sg table for dev %p\n", dma_dev); return sgt; }
@@ -339,9 +342,9 @@ static void dmabuf_exp_ops_release(struct dma_buf *dma_buf) static const struct dma_buf_ops dmabuf_exp_ops = { .attach = dmabuf_exp_ops_attach, .detach = dmabuf_exp_ops_detach, - .map_dma_buf = dmabuf_exp_ops_map_dma_buf, - .unmap_dma_buf = dmabuf_exp_ops_unmap_dma_buf, .release = dmabuf_exp_ops_release, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(dmabuf_exp_ops_map_dma_buf, + dmabuf_exp_ops_unmap_dma_buf), };
struct gntdev_dmabuf_export_args { diff --git a/samples/vfio-mdev/mbochs.c b/samples/vfio-mdev/mbochs.c index 64ea19253ee3ad..c2eaa14b9ddd64 100644 --- a/samples/vfio-mdev/mbochs.c +++ b/samples/vfio-mdev/mbochs.c @@ -32,6 +32,7 @@ #include <linux/pci.h> #include <linux/dma-buf.h> #include <linux/highmem.h> +#include <linux/dma-buf-mapping.h> #include <drm/drm_fourcc.h> #include <drm/drm_rect.h> #include <drm/drm_modeset_lock.h> @@ -872,7 +873,7 @@ static struct sg_table *mbochs_map_dmabuf(struct dma_buf_attachment *at, if (sg_alloc_table_from_pages(sg, dmabuf->pages, dmabuf->pagecount, 0, dmabuf->mode.size, GFP_KERNEL) < 0) goto err2; - if (dma_map_sgtable(at->dev, sg, direction, 0)) + if (dma_map_sgtable(dma_buf_sgt_dma_device(at), sg, direction, 0)) goto err3;
return sg; @@ -894,7 +895,7 @@ static void mbochs_unmap_dmabuf(struct dma_buf_attachment *at,
dev_dbg(dev, "%s: %d\n", __func__, dmabuf->id);
- dma_unmap_sgtable(at->dev, sg, direction, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(at), sg, direction, 0); sg_free_table(sg); kfree(sg); } @@ -918,11 +919,10 @@ static void mbochs_release_dmabuf(struct dma_buf *buf) mutex_unlock(&mdev_state->ops_lock); }
-static struct dma_buf_ops mbochs_dmabuf_ops = { - .map_dma_buf = mbochs_map_dmabuf, - .unmap_dma_buf = mbochs_unmap_dmabuf, +static const struct dma_buf_ops mbochs_dmabuf_ops = { .release = mbochs_release_dmabuf, .mmap = mbochs_mmap_dmabuf, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(mbochs_map_dmabuf, mbochs_unmap_dmabuf), };
static struct mbochs_dmabuf *mbochs_dmabuf_alloc(struct mdev_state *mdev_state, diff --git a/sound/soc/fsl/fsl_asrc_m2m.c b/sound/soc/fsl/fsl_asrc_m2m.c index f46881f71e4307..fef6a57fc7354a 100644 --- a/sound/soc/fsl/fsl_asrc_m2m.c +++ b/sound/soc/fsl/fsl_asrc_m2m.c @@ -7,6 +7,7 @@
#include <linux/dma/imx-dma.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-mapping.h> #include <linux/pm_runtime.h> #include <sound/asound.h> @@ -411,6 +412,7 @@ static int fsl_asrc_m2m_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma) static struct sg_table *fsl_asrc_m2m_map_dma_buf(struct dma_buf_attachment *attachment, enum dma_data_direction direction) { + struct device *dma_dev = dma_buf_sgt_dma_device(attachment); struct snd_dma_buffer *dmab = attachment->dmabuf->priv; struct sg_table *sgt;
@@ -418,10 +420,10 @@ static struct sg_table *fsl_asrc_m2m_map_dma_buf(struct dma_buf_attachment *atta if (!sgt) return NULL;
- if (dma_get_sgtable(attachment->dev, sgt, dmab->area, dmab->addr, dmab->bytes) < 0) + if (dma_get_sgtable(dma_dev, sgt, dmab->area, dmab->addr, dmab->bytes) < 0) goto free;
- if (dma_map_sgtable(attachment->dev, sgt, direction, 0)) + if (dma_map_sgtable(dma_dev, sgt, direction, 0)) goto free;
return sgt; @@ -436,7 +438,7 @@ static void fsl_asrc_m2m_unmap_dma_buf(struct dma_buf_attachment *attachment, struct sg_table *table, enum dma_data_direction direction) { - dma_unmap_sgtable(attachment->dev, table, direction, 0); + dma_unmap_sgtable(dma_buf_sgt_dma_device(attachment), table, direction, 0); }
static void fsl_asrc_m2m_release(struct dma_buf *dmabuf) @@ -446,9 +448,9 @@ static void fsl_asrc_m2m_release(struct dma_buf *dmabuf)
static const struct dma_buf_ops fsl_asrc_m2m_dma_buf_ops = { .mmap = fsl_asrc_m2m_mmap, - .map_dma_buf = fsl_asrc_m2m_map_dma_buf, - .unmap_dma_buf = fsl_asrc_m2m_unmap_dma_buf, .release = fsl_asrc_m2m_release, + DMA_BUF_SIMPLE_SGT_EXP_MATCH(fsl_asrc_m2m_map_dma_buf, + fsl_asrc_m2m_unmap_dma_buf), };
static int fsl_asrc_m2m_comp_task_create(struct snd_compr_stream *stream,
vmwgfx creates a DMA-buf that cannot be attached, by providing always-failing functions in its dma_buf_ops.
The attach/detach callbacks are already optional inside DMA-buf, but dma_buf_export() checks for non-NULL map/unmap callbacks.
Instead, use the mapping type interface and provide an always-failing match_mapping(). Remove the unused SGT and attach/detach functions; they can never be called once match_mapping() fails.
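In clean form the result is simply (mirroring the diff below):

static int vmw_prime_match_mapping(struct dma_buf_match_args *args)
{
	/* Offer no mapping types at all, so every attach attempt fails */
	return -EOPNOTSUPP;
}

const struct dma_buf_ops vmw_prime_dmabuf_ops = {
	.release = NULL,
	.match_mapping = vmw_prime_match_mapping,
};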
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/gpu/drm/vmwgfx/vmwgfx_prime.c | 32 +++++---------------------- 1 file changed, 5 insertions(+), 27 deletions(-)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_prime.c b/drivers/gpu/drm/vmwgfx/vmwgfx_prime.c index 598b90ac7590b5..90e4342a378d5d 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_prime.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_prime.c @@ -35,41 +35,19 @@ #include "vmwgfx_bo.h" #include "ttm_object.h" #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h>
/* - * DMA-BUF attach- and mapping methods. No need to implement - * these until we have other virtual devices use them. + * No need to implement these until we have other virtual devices use them. */ - -static int vmw_prime_map_attach(struct dma_buf *dma_buf, - struct dma_buf_attachment *attach) -{ - return -ENOSYS; -} - -static void vmw_prime_map_detach(struct dma_buf *dma_buf, - struct dma_buf_attachment *attach) -{ -} - -static struct sg_table *vmw_prime_map_dma_buf(struct dma_buf_attachment *attach, - enum dma_data_direction dir) -{ - return ERR_PTR(-ENOSYS); -} - -static void vmw_prime_unmap_dma_buf(struct dma_buf_attachment *attach, - struct sg_table *sgb, - enum dma_data_direction dir) +static int vmw_prime_match_mapping(struct dma_buf_match_args *args) { + return -EOPNOTSUPP; }
const struct dma_buf_ops vmw_prime_dmabuf_ops = { - .attach = vmw_prime_map_attach, - .detach = vmw_prime_map_detach, - .map_dma_buf = vmw_prime_map_dma_buf, - .unmap_dma_buf = vmw_prime_unmap_dma_buf, .release = NULL, + .match_mapping = vmw_prime_match_mapping, };
int vmw_prime_fd_to_handle(struct drm_device *dev,
habana has special code that checks pci_p2pdma_distance() and rejects any importer that cannot do P2P DMA to its MMIO.
Convert this directly to an SGT_P2P match, which performs the same check inside the matching logic.
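As a rough sketch of the pattern (the foo_* names and struct are placeholders; only the DMA-buf macros, types and helpers come from this series), a P2P-only exporter offers a single SGT_P2P entry built around its PCI device:

struct foo_device {
	struct pci_dev *pdev;	/* hypothetical driver-private state */
};

static const struct dma_buf_mapping_sgt_exp_ops foo_sgt_ops = {
	.map_dma_buf = foo_map_dmabuf,		/* driver's existing SGT callbacks */
	.unmap_dma_buf = foo_unmap_dmabuf,
};

static int foo_match_mapping(struct dma_buf_match_args *args)
{
	struct foo_device *fdev = args->dmabuf->priv;
	struct dma_buf_mapping_match sgt_match[] = {
		/* Only matches importers that can do P2P DMA to fdev's MMIO */
		DMA_BUF_EMAPPING_SGT_P2P(&foo_sgt_ops, fdev->pdev),
	};

	return dma_buf_match_mapping(args, sgt_match, ARRAY_SIZE(sgt_match));
}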
Someday this should be converted to use dma_buf_phys_vec_to_sgt(), which does the P2P checking correctly for both direct and IOMMU-based cases, instead of this hack.
Remove the now-empty attach function.
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/accel/habanalabs/common/memory.c | 54 +++++++++++------------- 1 file changed, 24 insertions(+), 30 deletions(-)
diff --git a/drivers/accel/habanalabs/common/memory.c b/drivers/accel/habanalabs/common/memory.c index 633db4bff46fc4..58dbc3c7f0877a 100644 --- a/drivers/accel/habanalabs/common/memory.c +++ b/drivers/accel/habanalabs/common/memory.c @@ -9,10 +9,10 @@ #include "habanalabs.h" #include "../include/hw_ip/mmu/mmu_general.h"
+#include <linux/dma-buf-mapping.h> #include <linux/uaccess.h> #include <linux/slab.h> #include <linux/vmalloc.h> -#include <linux/pci-p2pdma.h>
MODULE_IMPORT_NS("DMA_BUF");
@@ -1704,23 +1704,6 @@ static struct sg_table *alloc_sgt_from_device_pages(struct hl_device *hdev, u64 return ERR_PTR(rc); }
-static int hl_dmabuf_attach(struct dma_buf *dmabuf, - struct dma_buf_attachment *attachment) -{ - struct hl_dmabuf_priv *hl_dmabuf; - struct hl_device *hdev; - int rc; - - hl_dmabuf = dmabuf->priv; - hdev = hl_dmabuf->ctx->hdev; - - rc = pci_p2pdma_distance(hdev->pdev, attachment->dev, true); - - if (rc < 0) - attachment->peer2peer = false; - return 0; -} - static struct sg_table *hl_map_dmabuf(struct dma_buf_attachment *attachment, enum dma_data_direction dir) { @@ -1734,11 +1717,6 @@ static struct sg_table *hl_map_dmabuf(struct dma_buf_attachment *attachment, hl_dmabuf = dma_buf->priv; hdev = hl_dmabuf->ctx->hdev;
- if (!attachment->peer2peer) { - dev_dbg(hdev->dev, "Failed to map dmabuf because p2p is disabled\n"); - return ERR_PTR(-EPERM); - } - exported_size = hl_dmabuf->dmabuf->size; offset = hl_dmabuf->offset; phys_pg_pack = hl_dmabuf->phys_pg_pack; @@ -1753,8 +1731,10 @@ static struct sg_table *hl_map_dmabuf(struct dma_buf_attachment *attachment, page_size = hl_dmabuf->dmabuf->size; }
- sgt = alloc_sgt_from_device_pages(hdev, pages, npages, page_size, exported_size, offset, - attachment->dev, dir); + sgt = alloc_sgt_from_device_pages(hdev, pages, npages, page_size, + exported_size, offset, + dma_buf_sgt_dma_device(attachment), + dir); if (IS_ERR(sgt)) dev_err(hdev->dev, "failed (%ld) to initialize sgt for dmabuf\n", PTR_ERR(sgt));
@@ -1776,9 +1756,9 @@ static void hl_unmap_dmabuf(struct dma_buf_attachment *attachment, * a sync of the memory to the CPU's cache, as it never resided inside that cache. */ for_each_sgtable_dma_sg(sgt, sg, i) - dma_unmap_resource(attachment->dev, sg_dma_address(sg), - sg_dma_len(sg), dir, - DMA_ATTR_SKIP_CPU_SYNC); + dma_unmap_resource(dma_buf_sgt_dma_device(attachment), + sg_dma_address(sg), sg_dma_len(sg), dir, + DMA_ATTR_SKIP_CPU_SYNC);
/* Need to restore orig_nents because sg_free_table use that field */ sgt->orig_nents = sgt->nents; @@ -1848,11 +1828,25 @@ static void hl_release_dmabuf(struct dma_buf *dmabuf) kfree(hl_dmabuf); }
-static const struct dma_buf_ops habanalabs_dmabuf_ops = { - .attach = hl_dmabuf_attach, +static const struct dma_buf_mapping_sgt_exp_ops hl_dma_buf_sgt_ops = { .map_dma_buf = hl_map_dmabuf, .unmap_dma_buf = hl_unmap_dmabuf, +}; + +static int hl_match_mapping(struct dma_buf_match_args *args) +{ + struct hl_dmabuf_priv *hl_dmabuf = args->dmabuf->priv; + struct dma_buf_mapping_match sgt_match[] = { + DMA_BUF_EMAPPING_SGT_P2P(&hl_dma_buf_sgt_ops, + hl_dmabuf->ctx->hdev->pdev), + }; + + return dma_buf_match_mapping(args, sgt_match, ARRAY_SIZE(sgt_match)); +} + +static const struct dma_buf_ops habanalabs_dmabuf_ops = { .release = hl_release_dmabuf, + .match_mapping = hl_match_mapping, };
static int export_dmabuf(struct hl_ctx *ctx,
Like habana, xe wants to check pci_p2pdma_distance(), but unlike habana, it can migrate to system memory and support non-p2p DMAs as well.
Add two exporter SGT mapping types: one that matches P2P and one that matches everything else. The pin and map code will force a migration to system memory if the non-P2P one is matched.
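For illustration, the later pin/map paths key their placement decision off the selected type through dma_buf_sgt_p2p_allowed(); the foo_* helpers here are made up, only the DMA-buf accessor is from this series:

static int foo_pin(struct dma_buf_attachment *attach)
{
	/*
	 * The active match records whether the P2P variant was selected.
	 * If not, the buffer has to be placed in system memory before use.
	 */
	if (dma_buf_sgt_p2p_allowed(attach))
		return foo_pin_in_vram(attach);

	return foo_migrate_to_system_and_pin(attach);
}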
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/gpu/drm/xe/xe_dma_buf.c | 58 +++++++++++++++++++++------------ 1 file changed, 37 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c index 7c74a31d448602..9968f37657d57d 100644 --- a/drivers/gpu/drm/xe/xe_dma_buf.c +++ b/drivers/gpu/drm/xe/xe_dma_buf.c @@ -7,7 +7,7 @@
#include <kunit/test.h> #include <linux/dma-buf.h> -#include <linux/pci-p2pdma.h> +#include <linux/dma-buf-mapping.h>
#include <drm/drm_device.h> #include <drm/drm_prime.h> @@ -27,13 +27,6 @@ static int xe_dma_buf_attach(struct dma_buf *dmabuf, { struct drm_gem_object *obj = attach->dmabuf->priv;
- if (attach->peer2peer && - pci_p2pdma_distance(to_pci_dev(obj->dev->dev), attach->dev, false) < 0) - attach->peer2peer = false; - - if (!attach->peer2peer && !xe_bo_can_migrate(gem_to_xe_bo(obj), XE_PL_TT)) - return -EOPNOTSUPP; - xe_pm_runtime_get(to_xe_device(obj->dev)); return 0; } @@ -53,14 +46,12 @@ static int xe_dma_buf_pin(struct dma_buf_attachment *attach) struct xe_bo *bo = gem_to_xe_bo(obj); struct xe_device *xe = xe_bo_device(bo); struct drm_exec *exec = XE_VALIDATION_UNSUPPORTED; - bool allow_vram = true; + bool allow_vram = dma_buf_sgt_p2p_allowed(attach); int ret;
- if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) { - allow_vram = false; - } else { + if (allow_vram) { list_for_each_entry(attach, &dmabuf->attachments, node) { - if (!attach->peer2peer) { + if (!dma_buf_sgt_p2p_allowed(attach)) { allow_vram = false; break; } @@ -101,6 +92,8 @@ static void xe_dma_buf_unpin(struct dma_buf_attachment *attach) static struct sg_table *xe_dma_buf_map(struct dma_buf_attachment *attach, enum dma_data_direction dir) { + struct device *dma_dev = dma_buf_sgt_dma_device(attach); + bool peer2peer = dma_buf_sgt_p2p_allowed(attach); struct dma_buf *dma_buf = attach->dmabuf; struct drm_gem_object *obj = dma_buf->priv; struct xe_bo *bo = gem_to_xe_bo(obj); @@ -108,11 +101,11 @@ static struct sg_table *xe_dma_buf_map(struct dma_buf_attachment *attach, struct sg_table *sgt; int r = 0;
- if (!attach->peer2peer && !xe_bo_can_migrate(bo, XE_PL_TT)) + if (!peer2peer && !xe_bo_can_migrate(bo, XE_PL_TT)) return ERR_PTR(-EOPNOTSUPP);
if (!xe_bo_is_pinned(bo)) { - if (!attach->peer2peer) + if (!peer2peer) r = xe_bo_migrate(bo, XE_PL_TT, NULL, exec); else r = xe_bo_validate(bo, NULL, false, exec); @@ -128,7 +121,7 @@ static struct sg_table *xe_dma_buf_map(struct dma_buf_attachment *attach, if (IS_ERR(sgt)) return sgt;
- if (dma_map_sgtable(attach->dev, sgt, dir, + if (dma_map_sgtable(dma_dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC)) goto error_free; break; @@ -137,7 +130,7 @@ static struct sg_table *xe_dma_buf_map(struct dma_buf_attachment *attach, case XE_PL_VRAM1: r = xe_ttm_vram_mgr_alloc_sgt(xe_bo_device(bo), bo->ttm.resource, 0, - bo->ttm.base.size, attach->dev, + bo->ttm.base.size, dma_dev, dir, &sgt); if (r) return ERR_PTR(r); @@ -158,12 +151,14 @@ static void xe_dma_buf_unmap(struct dma_buf_attachment *attach, struct sg_table *sgt, enum dma_data_direction dir) { + struct device *dma_dev = dma_buf_sgt_dma_device(attach); + if (sg_page(sgt->sgl)) { - dma_unmap_sgtable(attach->dev, sgt, dir, 0); + dma_unmap_sgtable(dma_dev, sgt, dir, 0); sg_free_table(sgt); kfree(sgt); } else { - xe_ttm_vram_mgr_free_sgt(attach->dev, dir, sgt); + xe_ttm_vram_mgr_free_sgt(dma_dev, dir, sgt); } }
@@ -197,18 +192,39 @@ static int xe_dma_buf_begin_cpu_access(struct dma_buf *dma_buf, return 0; }
+static const struct dma_buf_mapping_sgt_exp_ops xe_dma_buf_sgt_ops = { + .map_dma_buf = xe_dma_buf_map, + .unmap_dma_buf = xe_dma_buf_unmap, +}; + +static int xe_dma_buf_match_mapping(struct dma_buf_match_args *args) +{ + struct drm_gem_object *obj = args->dmabuf->priv; + struct dma_buf_mapping_match sgt_match[2]; + unsigned int num_match = 0; + + if (IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) + sgt_match[num_match++] = DMA_BUF_EMAPPING_SGT_P2P( + &xe_dma_buf_sgt_ops, to_pci_dev(obj->dev->dev)); + + if (xe_bo_can_migrate(gem_to_xe_bo(obj), XE_PL_TT)) + sgt_match[num_match++] = + DMA_BUF_EMAPPING_SGT(&xe_dma_buf_sgt_ops); + + return dma_buf_match_mapping(args, sgt_match, ARRAY_SIZE(sgt_match)); +} + static const struct dma_buf_ops xe_dmabuf_ops = { .attach = xe_dma_buf_attach, .detach = xe_dma_buf_detach, .pin = xe_dma_buf_pin, .unpin = xe_dma_buf_unpin, - .map_dma_buf = xe_dma_buf_map, - .unmap_dma_buf = xe_dma_buf_unmap, .release = drm_gem_dmabuf_release, .begin_cpu_access = xe_dma_buf_begin_cpu_access, .mmap = drm_gem_dmabuf_mmap, .vmap = drm_gem_dmabuf_vmap, .vunmap = drm_gem_dmabuf_vunmap, + .match_mapping = xe_dma_buf_match_mapping, };
struct dma_buf *xe_gem_prime_export(struct drm_gem_object *obj, int flags)
Similar to xe, amdgpu wants to check pci_p2pdma_distance(), and it only needs that check if peer2peer can be supported by the GPU. It can migrate to system memory and support non-P2P DMA as well.
Further, it supports a private non-PCI XGMI path. For now hack this on top of an SGT type, but eventually this is likely better off as its own mapping type.
Add two exporter SGT mapping types: one that matches P2P and one that matches everything else. The pin and map code will force a migration to system memory if the non-P2P one is matched.
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 94 +++++++++++++++------ drivers/gpu/drm/xe/xe_dma_buf.c | 2 +- 2 files changed, 69 insertions(+), 27 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c index c1461317eb2987..bb9c602c061dc3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c @@ -40,6 +40,7 @@ #include <drm/amdgpu_drm.h> #include <drm/ttm/ttm_tt.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-fence-array.h> #include <linux/pci-p2pdma.h>
@@ -77,28 +78,10 @@ static struct amdgpu_device *dma_buf_attach_adev(struct dma_buf_attachment *atta static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) { - struct amdgpu_device *attach_adev = dma_buf_attach_adev(attach); struct drm_gem_object *obj = dmabuf->priv; struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); - struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); int r;
- /* - * Disable peer-to-peer access for DCC-enabled VRAM surfaces on GFX12+. - * Such buffers cannot be safely accessed over P2P due to device-local - * compression metadata. Fallback to system-memory path instead. - * Device supports GFX12 (GC 12.x or newer) - * BO was created with the AMDGPU_GEM_CREATE_GFX12_DCC flag - * - */ - if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(12, 0, 0) && - bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC) - attach->peer2peer = false; - - if (!amdgpu_dmabuf_is_xgmi_accessible(attach_adev, bo) && - pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0) - attach->peer2peer = false; - r = dma_resv_lock(bo->tbo.base.resv, NULL); if (r) return r; @@ -137,7 +120,7 @@ static int amdgpu_dma_buf_pin(struct dma_buf_attachment *attach) domains &= ~AMDGPU_GEM_DOMAIN_VRAM; } else { list_for_each_entry(attach, &dmabuf->attachments, node) - if (!attach->peer2peer) + if (!dma_buf_sgt_p2p_allowed(attach)) domains &= ~AMDGPU_GEM_DOMAIN_VRAM; }
@@ -181,6 +164,7 @@ static void amdgpu_dma_buf_unpin(struct dma_buf_attachment *attach) static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach, enum dma_data_direction dir) { + struct device *dma_dev = dma_buf_sgt_dma_device(attach); struct dma_buf *dma_buf = attach->dmabuf; struct drm_gem_object *obj = dma_buf->priv; struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); @@ -194,7 +178,7 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach, unsigned int domains = AMDGPU_GEM_DOMAIN_GTT;
if (bo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM && - attach->peer2peer) { + dma_buf_sgt_p2p_allowed(attach)) { bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED; domains |= AMDGPU_GEM_DOMAIN_VRAM; } @@ -212,7 +196,7 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach, if (IS_ERR(sgt)) return sgt;
- if (dma_map_sgtable(attach->dev, sgt, dir, + if (dma_map_sgtable(dma_dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC)) goto error_free; break; @@ -224,7 +208,7 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach, return ERR_PTR(-EINVAL);
r = amdgpu_vram_mgr_alloc_sgt(adev, bo->tbo.resource, 0, - bo->tbo.base.size, attach->dev, + bo->tbo.base.size, dma_dev, dir, &sgt); if (r) return ERR_PTR(r); @@ -254,12 +238,14 @@ static void amdgpu_dma_buf_unmap(struct dma_buf_attachment *attach, struct sg_table *sgt, enum dma_data_direction dir) { + struct device *dma_dev = dma_buf_sgt_dma_device(attach); + if (sg_page(sgt->sgl)) { - dma_unmap_sgtable(attach->dev, sgt, dir, 0); + dma_unmap_sgtable(dma_dev, sgt, dir, 0); sg_free_table(sgt); kfree(sgt); } else { - amdgpu_vram_mgr_free_sgt(attach->dev, dir, sgt); + amdgpu_vram_mgr_free_sgt(dma_dev, dir, sgt); } }
@@ -334,17 +320,73 @@ static void amdgpu_dma_buf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map amdgpu_bo_unpin(bo); }
+static const struct dma_buf_mapping_sgt_exp_ops amdgpu_dma_buf_sgt_ops = { + .map_dma_buf = amdgpu_dma_buf_map, + .unmap_dma_buf = amdgpu_dma_buf_unmap, +}; + +static int amdgpu_dma_buf_match_mapping(struct dma_buf_match_args *args) +{ + struct dma_buf_attachment *attach = args->attach; + struct drm_gem_object *obj = args->dmabuf->priv; + struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); + struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); + struct dma_buf_mapping_match sgt_match[2]; + unsigned int num_match = 0; + bool peer2peer = true; + int ret; + + /* + * Disable peer-to-peer access for DCC-enabled VRAM surfaces on GFX12+. + * Such buffers cannot be safely accessed over P2P due to device-local + * compression metadata. Fallback to system-memory path instead. + * Device supports GFX12 (GC 12.x or newer) + * BO was created with the AMDGPU_GEM_CREATE_GFX12_DCC flag + * + */ + if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(12, 0, 0) && + bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC) + peer2peer = false; + + if (peer2peer) + sgt_match[num_match++] = DMA_BUF_EMAPPING_SGT_P2P( + &amdgpu_dma_buf_sgt_ops, adev->pdev); + sgt_match[num_match++] = DMA_BUF_EMAPPING_SGT(&amdgpu_dma_buf_sgt_ops); + + ret = dma_buf_match_mapping(args, sgt_match, num_match); + if (ret) + return ret; + + /* If the transfer will use XGMI then force a P2P match. */ + if (peer2peer && !dma_buf_sgt_p2p_allowed(attach) && + amdgpu_dmabuf_is_xgmi_accessible(dma_buf_attach_adev(attach), bo)) + return attach->map_type.sgt_data.exporter_requires_p2p = + DMA_SGT_EXPORTER_REQUIRES_P2P_DISTANCE; + return 0; +} + const struct dma_buf_ops amdgpu_dmabuf_ops = { .attach = amdgpu_dma_buf_attach, .pin = amdgpu_dma_buf_pin, .unpin = amdgpu_dma_buf_unpin, - .map_dma_buf = amdgpu_dma_buf_map, - .unmap_dma_buf = amdgpu_dma_buf_unmap, .release = drm_gem_dmabuf_release, .begin_cpu_access = amdgpu_dma_buf_begin_cpu_access, .mmap = drm_gem_dmabuf_mmap, .vmap = amdgpu_dma_buf_vmap, .vunmap = amdgpu_dma_buf_vunmap, + .match_mapping = amdgpu_dma_buf_match_mapping, };
/** diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c index 9968f37657d57d..848532aca432db 100644 --- a/drivers/gpu/drm/xe/xe_dma_buf.c +++ b/drivers/gpu/drm/xe/xe_dma_buf.c @@ -211,7 +211,7 @@ static int xe_dma_buf_match_mapping(struct dma_buf_match_args *args) sgt_match[num_match++] = DMA_BUF_EMAPPING_SGT(&xe_dma_buf_sgt_ops);
- return dma_buf_match_mapping(args, sgt_match, ARRAY_SIZE(sgt_match)); + return dma_buf_match_mapping(args, sgt_match, num_match); }
static const struct dma_buf_ops xe_dmabuf_ops = {
Simple conversion to add a match_mapping() callback that offers an exporter SGT mapping type. Later patches will add a physical address exporter, so go straight to adding the match_mapping() function.
The check for attachment->peer2peer is replaced with setting exporter_requires_p2p=true, since VFIO always exports MMIO memory.
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/vfio/pci/vfio_pci_dmabuf.c | 31 +++++++++++++++++++++++++----- 1 file changed, 26 insertions(+), 5 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c index d4d0f7d08c53e2..c7addef5794abf 100644 --- a/drivers/vfio/pci/vfio_pci_dmabuf.c +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c @@ -25,9 +25,6 @@ static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf, { struct vfio_pci_dma_buf *priv = dmabuf->priv;
- if (!attachment->peer2peer) - return -EOPNOTSUPP; - if (priv->revoked) return -ENODEV;
@@ -75,11 +72,35 @@ static void vfio_pci_dma_buf_release(struct dma_buf *dmabuf) kfree(priv); }
-static const struct dma_buf_ops vfio_pci_dmabuf_ops = { - .attach = vfio_pci_dma_buf_attach, +static const struct dma_buf_mapping_sgt_exp_ops vfio_pci_dma_buf_sgt_ops = { .map_dma_buf = vfio_pci_dma_buf_map, .unmap_dma_buf = vfio_pci_dma_buf_unmap, +}; + +static int vfio_pci_dma_buf_match_mapping(struct dma_buf_match_args *args) +{ + struct vfio_pci_dma_buf *priv = args->dmabuf->priv; + struct dma_buf_mapping_match sgt_match[1]; + + dma_resv_assert_held(priv->dmabuf->resv); + + /* + * Once we pass vfio_pci_dma_buf_cleanup() the dmabuf will never be + * usable again. + */ + if (!priv->vdev) + return -ENODEV; + + sgt_match[0] = DMA_BUF_EMAPPING_SGT_P2P(&vfio_pci_dma_buf_sgt_ops, + priv->vdev->pdev); + + return dma_buf_match_mapping(args, sgt_match, ARRAY_SIZE(sgt_match)); +} + +static const struct dma_buf_ops vfio_pci_dmabuf_ops = { + .attach = vfio_pci_dma_buf_attach, .release = vfio_pci_dma_buf_release, + .match_mapping = vfio_pci_dma_buf_match_mapping, };
/*
These helpers only work when matched to an SGT mapping type, so call dma_buf_sgt_dma_device() to get the DMA device.
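For illustration only (foo_map_one_page() is a made-up helper): code that can only run once an SGT mapping type has been matched takes its DMA device from the attachment like this:

static dma_addr_t foo_map_one_page(struct dma_buf_attachment *attach,
				   struct page *page,
				   enum dma_data_direction dir)
{
	/* Only valid after the attachment has matched an SGT mapping type */
	struct device *dma_dev = dma_buf_sgt_dma_device(attach);

	return dma_map_page(dma_dev, page, 0, PAGE_SIZE, dir);
}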
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/dma-buf/dma-buf-mapping.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/dma-buf/dma-buf-mapping.c b/drivers/dma-buf/dma-buf-mapping.c index 02f5cf8b3def40..b5f320be0f24bf 100644 --- a/drivers/dma-buf/dma-buf-mapping.c +++ b/drivers/dma-buf/dma-buf-mapping.c @@ -97,6 +97,7 @@ struct sg_table *dma_buf_phys_vec_to_sgt(struct dma_buf_attachment *attach, size_t nr_ranges, size_t size, enum dma_data_direction dir) { + struct device *dma_dev = dma_buf_sgt_dma_device(attach); unsigned int nents, mapped_len = 0; struct dma_buf_dma *dma; struct scatterlist *sgl; @@ -114,7 +115,7 @@ struct sg_table *dma_buf_phys_vec_to_sgt(struct dma_buf_attachment *attach, if (!dma) return ERR_PTR(-ENOMEM);
- switch (pci_p2pdma_map_type(provider, attach->dev)) { + switch (pci_p2pdma_map_type(provider, dma_dev)) { case PCI_P2PDMA_MAP_BUS_ADDR: /* * There is no need in IOVA at all for this flow. @@ -127,7 +128,7 @@ struct sg_table *dma_buf_phys_vec_to_sgt(struct dma_buf_attachment *attach, goto err_free_dma; }
- dma_iova_try_alloc(attach->dev, dma->state, 0, size); + dma_iova_try_alloc(dma_dev, dma->state, 0, size); break; default: ret = -EINVAL; @@ -146,7 +147,7 @@ struct sg_table *dma_buf_phys_vec_to_sgt(struct dma_buf_attachment *attach, addr = pci_p2pdma_bus_addr_map(provider, phys_vec[i].paddr); } else if (dma_use_iova(dma->state)) { - ret = dma_iova_link(attach->dev, dma->state, + ret = dma_iova_link(dma_dev, dma->state, phys_vec[i].paddr, 0, phys_vec[i].len, dir, DMA_ATTR_MMIO); @@ -155,10 +156,10 @@ struct sg_table *dma_buf_phys_vec_to_sgt(struct dma_buf_attachment *attach,
mapped_len += phys_vec[i].len; } else { - addr = dma_map_phys(attach->dev, phys_vec[i].paddr, + addr = dma_map_phys(dma_dev, phys_vec[i].paddr, phys_vec[i].len, dir, DMA_ATTR_MMIO); - ret = dma_mapping_error(attach->dev, addr); + ret = dma_mapping_error(dma_dev, addr); if (ret) goto err_unmap_dma; } @@ -169,7 +170,7 @@ struct sg_table *dma_buf_phys_vec_to_sgt(struct dma_buf_attachment *attach,
if (dma->state && dma_use_iova(dma->state)) { WARN_ON_ONCE(mapped_len != size); - ret = dma_iova_sync(attach->dev, dma->state, 0, mapped_len); + ret = dma_iova_sync(dma_dev, dma->state, 0, mapped_len); if (ret) goto err_unmap_dma;
@@ -196,11 +197,11 @@ struct sg_table *dma_buf_phys_vec_to_sgt(struct dma_buf_attachment *attach, if (!i || !dma->state) { ; /* Do nothing */ } else if (dma_use_iova(dma->state)) { - dma_iova_destroy(attach->dev, dma->state, mapped_len, dir, + dma_iova_destroy(dma_dev, dma->state, mapped_len, dir, DMA_ATTR_MMIO); } else { for_each_sgtable_dma_sg(&dma->sgt, sgl, i) - dma_unmap_phys(attach->dev, sg_dma_address(sgl), + dma_unmap_phys(dma_dev, sg_dma_address(sgl), sg_dma_len(sgl), dir, DMA_ATTR_MMIO); } sg_free_table(&dma->sgt); @@ -225,6 +226,7 @@ void dma_buf_free_sgt(struct dma_buf_attachment *attach, struct sg_table *sgt, enum dma_data_direction dir) { struct dma_buf_dma *dma = container_of(sgt, struct dma_buf_dma, sgt); + struct device *dma_dev = dma_buf_sgt_dma_device(attach); int i;
dma_resv_assert_held(attach->dmabuf->resv); @@ -232,13 +234,13 @@ void dma_buf_free_sgt(struct dma_buf_attachment *attach, struct sg_table *sgt, if (!dma->state) { ; /* Do nothing */ } else if (dma_use_iova(dma->state)) { - dma_iova_destroy(attach->dev, dma->state, dma->size, dir, + dma_iova_destroy(dma_dev, dma->state, dma->size, dir, DMA_ATTR_MMIO); } else { struct scatterlist *sgl;
for_each_sgtable_dma_sg(sgt, sgl, i) - dma_unmap_phys(attach->dev, sg_dma_address(sgl), + dma_unmap_phys(dma_dev, sg_dma_address(sgl), sg_dma_len(sgl), dir, DMA_ATTR_MMIO); }
These importer helper functions should call dma_buf_sgt_dma_device() as they are always working with SGTs.
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/iio/industrialio-buffer.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c index c6259213e15035..7daac53c502e50 100644 --- a/drivers/iio/industrialio-buffer.c +++ b/drivers/iio/industrialio-buffer.c @@ -16,6 +16,7 @@ #include <linux/export.h> #include <linux/device.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-fence.h> #include <linux/dma-resv.h> #include <linux/file.h> @@ -1642,7 +1643,7 @@ iio_buffer_find_attachment(struct iio_dev_buffer_pair *ib, guard(mutex)(&buffer->dmabufs_mutex);
list_for_each_entry(priv, &buffer->dmabufs, entry) { - if (priv->attach->dev == dma_dev + if (dma_buf_sgt_dma_device(priv->attach) == dma_dev && priv->attach->dmabuf == dmabuf) { attach = priv->attach; break; @@ -1727,7 +1728,7 @@ static int iio_buffer_attach_dmabuf(struct iio_dev_buffer_pair *ib, * combo. If we do, refuse to attach. */ list_for_each_entry(each, &buffer->dmabufs, entry) { - if (each->attach->dev == dma_dev + if (dma_buf_sgt_dma_device(each->attach) == dma_dev && each->attach->dmabuf == dmabuf) { /* * We unlocked the reservation object, so going through @@ -1781,7 +1782,7 @@ static int iio_buffer_detach_dmabuf(struct iio_dev_buffer_pair *ib, guard(mutex)(&buffer->dmabufs_mutex);
list_for_each_entry(priv, &buffer->dmabufs, entry) { - if (priv->attach->dev == dma_dev + if (dma_buf_sgt_dma_device(priv->attach) == dma_dev && priv->attach->dmabuf == dmabuf) { list_del(&priv->entry);
These importer helper functions should call dma_buf_sgt_dma_device() as they are always working with SGTs.
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/usb/gadget/function/f_fs.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c index 05c6750702b609..5c81ea9afa1249 100644 --- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c @@ -16,6 +16,7 @@
#include <linux/blkdev.h> #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-fence.h> #include <linux/dma-resv.h> #include <linux/pagemap.h> @@ -1467,7 +1468,7 @@ ffs_dmabuf_find_attachment(struct ffs_epfile *epfile, struct dma_buf *dmabuf) mutex_lock(&epfile->dmabufs_mutex);
list_for_each_entry(priv, &epfile->dmabufs, entry) { - if (priv->attach->dev == dev + if (dma_buf_sgt_dma_device(priv->attach) == dev && priv->attach->dmabuf == dmabuf) { attach = priv->attach; break; @@ -1569,7 +1570,7 @@ static int ffs_dmabuf_detach(struct file *file, int fd) mutex_lock(&epfile->dmabufs_mutex);
list_for_each_entry_safe(priv, tmp, &epfile->dmabufs, entry) { - if (priv->attach->dev == dev + if (dma_buf_sgt_dma_device(priv->attach) == dev && priv->attach->dmabuf == dmabuf) { /* Cancel any pending transfer */ spin_lock_irq(&ffs->eps_lock);
Now that all exporters are converted, these compatibility items can be removed:
 - dma_buf_ops map_dma_buf/unmap_dma_buf: moved to dma_buf_mapping_sgt_exp_ops
 - dma_buf_attachment dev: moved to attach->map_type.importing_dma_device
 - dma_buf_attachment peer2peer: moved to attach->map_type.exporter_requires_p2p, accessed via dma_buf_sgt_p2p_allowed()
 - dma_buf_sgt_exp_compat_match: no compatibility exporters remain
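After this cleanup a simple exporter is left with only the mapping-type declaration, roughly (foo_* names are illustrative; the macro is the one used by the converted drivers above):

static const struct dma_buf_ops foo_dmabuf_ops = {
	.release = foo_release,
	.mmap = foo_mmap,
	/* Declares a single non-P2P SGT mapping type for this exporter */
	DMA_BUF_SIMPLE_SGT_EXP_MATCH(foo_map_dma_buf, foo_unmap_dma_buf),
};

Importer-side code correspondingly uses dma_buf_sgt_dma_device(attach) and dma_buf_sgt_p2p_allowed(attach) instead of the removed attach->dev and attach->peer2peer.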
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/dma-buf/dma-buf-mapping.c | 40 ----------------- drivers/dma-buf/dma-buf.c | 24 +++------- drivers/gpu/drm/drm_prime.c | 2 - include/linux/dma-buf-mapping.h | 67 +++++++++++++++++++++++++++- include/linux/dma-buf.h | 73 ------------------------------- 5 files changed, 70 insertions(+), 136 deletions(-)
diff --git a/drivers/dma-buf/dma-buf-mapping.c b/drivers/dma-buf/dma-buf-mapping.c index b5f320be0f24bf..baa96b37e2c6bd 100644 --- a/drivers/dma-buf/dma-buf-mapping.c +++ b/drivers/dma-buf/dma-buf-mapping.c @@ -334,16 +334,6 @@ dma_buf_sgt_finish_match(struct dma_buf_match_args *args, .exporter_requires_p2p = exp->sgt_data.exporter_requires_p2p, }, }; - - /* - * Setup the SGT type variables stored in attach because importers and - * exporters that do not natively use mappings expect them to be there. - * When converting to use mappings users should use the match versions - * of these instead. - */ - attach->dev = imp->sgt_data.importing_dma_device; - attach->peer2peer = attach->map_type.sgt_data.importer_accepts_p2p == - DMA_SGT_IMPORTER_ACCEPTS_P2P; }
static void dma_buf_sgt_debugfs_dump(struct seq_file *s, @@ -359,33 +349,3 @@ struct dma_buf_mapping_type dma_buf_mapping_sgt_type = { .debugfs_dump = dma_buf_sgt_debugfs_dump, }; EXPORT_SYMBOL_NS_GPL(dma_buf_mapping_sgt_type, "DMA_BUF"); - -static struct sg_table * -dma_buf_sgt_compat_map_dma_buf(struct dma_buf_attachment *attach, - enum dma_data_direction dir) -{ - return attach->dmabuf->ops->map_dma_buf(attach, dir); -} - -static void dma_buf_sgt_compat_unmap_dma_buf(struct dma_buf_attachment *attach, - struct sg_table *sgt, - enum dma_data_direction dir) -{ - attach->dmabuf->ops->unmap_dma_buf(attach, sgt, dir); -} - -/* Route the classic map/unmap ops through the exp ops for old importers */ -static const struct dma_buf_mapping_sgt_exp_ops dma_buf_sgt_compat_exp_ops = { - .map_dma_buf = dma_buf_sgt_compat_map_dma_buf, - .unmap_dma_buf = dma_buf_sgt_compat_unmap_dma_buf, -}; - -/* - * This mapping type is used for unaware exporters that do not support - * match_mapping(). It wraps the dma_buf ops for SGT mappings into a mapping - * type so aware importers can transparently work with unaware exporters. This - * does not require p2p because old exporters will check it through the - * attach->peer2peer mechanism. - */ -const struct dma_buf_mapping_match dma_buf_sgt_exp_compat_match = - DMA_BUF_EMAPPING_SGT(&dma_buf_sgt_compat_exp_ops); diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index ac755f358dc7b3..e773441abab65d 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -693,19 +693,9 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) || !exp_info->ops->release)) return ERR_PTR(-EINVAL);
- if (exp_info->ops->match_mapping || - exp_info->ops->single_exporter_match) { - if (WARN_ON(exp_info->ops->map_dma_buf || - exp_info->ops->unmap_dma_buf)) - return ERR_PTR(-EINVAL); - if (WARN_ON(exp_info->ops->match_mapping && - exp_info->ops->single_exporter_match)) - return ERR_PTR(-EINVAL); - } else { - if (WARN_ON(!exp_info->ops->map_dma_buf || - !exp_info->ops->unmap_dma_buf)) - return ERR_PTR(-EINVAL); - } + if (WARN_ON(!exp_info->ops->match_mapping && + !exp_info->ops->single_exporter_match)) + return ERR_PTR(-EINVAL);
if (WARN_ON(!exp_info->ops->pin != !exp_info->ops->unpin)) return ERR_PTR(-EINVAL); @@ -981,12 +971,8 @@ struct dma_buf_attachment *dma_buf_mapping_attach( if (ret) goto err_attach; } else { - const struct dma_buf_mapping_match *exp_match = - dmabuf->ops->single_exporter_match; - - if (!exp_match) - exp_match = &dma_buf_sgt_exp_compat_match; - ret = dma_buf_match_mapping(&match_args, exp_match, 1); + ret = dma_buf_match_mapping( + &match_args, dmabuf->ops->single_exporter_match, 1); if (ret) goto err_attach; } diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index 94ec2483e40107..0852c60a722b67 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -593,8 +593,6 @@ static bool is_gem_map_dma_buf(struct dma_buf_attachment *attach) const struct dma_buf_mapping_sgt_exp_ops *sgt_exp_ops = dma_buf_get_sgt_ops(attach);
- if (attach->dmabuf->ops->map_dma_buf == drm_gem_map_dma_buf) - return true; if (sgt_exp_ops && sgt_exp_ops->map_dma_buf == drm_gem_map_dma_buf) return true; return false; diff --git a/include/linux/dma-buf-mapping.h b/include/linux/dma-buf-mapping.h index c11e32ef2a684f..f81e215401b49d 100644 --- a/include/linux/dma-buf-mapping.h +++ b/include/linux/dma-buf-mapping.h @@ -113,8 +113,73 @@ extern struct dma_buf_mapping_type dma_buf_mapping_sgt_type;
struct dma_buf_mapping_sgt_exp_ops { struct dma_buf_mapping_exp_ops ops; + + /** + * @map_dma_buf: + * + * This is called by dma_buf_map_attachment() and is used to map a + * shared &dma_buf into device address space, and it is mandatory. It + * can only be called if @attach has been called successfully. + * + * This call may sleep, e.g. when the backing storage first needs to be + * allocated, or moved to a location suitable for all currently attached + * devices. + * + * Note that any specific buffer attributes required for this function + * should get added to device_dma_parameters accessible via + * &device.dma_params from the &dma_buf_attachment. The @attach callback + * should also check these constraints. + * + * If this is being called for the first time, the exporter can now + * choose to scan through the list of attachments for this buffer, + * collate the requirements of the attached devices, and choose an + * appropriate backing storage for the buffer. + * + * Based on enum dma_data_direction, it might be possible to have + * multiple users accessing at the same time (for reading, maybe), or + * any other kind of sharing that the exporter might wish to make + * available to buffer-users. + * + * This is always called with the dmabuf->resv object locked when + * the dynamic_mapping flag is true. + * + * Note that for non-dynamic exporters the driver must guarantee that + * that the memory is available for use and cleared of any old data by + * the time this function returns. Drivers which pipeline their buffer + * moves internally must wait for all moves and clears to complete. + * Dynamic exporters do not need to follow this rule: For non-dynamic + * importers the buffer is already pinned through @pin, which has the + * same requirements. Dynamic importers otoh are required to obey the + * dma_resv fences. + * + * Returns: + * + * A &sg_table scatter list of the backing storage of the DMA buffer, + * already mapped into the device address space of the &device attached + * with the provided &dma_buf_attachment. The addresses and lengths in + * the scatter list are PAGE_SIZE aligned. + * + * On failure, returns a negative error value wrapped into a pointer. + * May also return -EINTR when a signal was received while being + * blocked. + * + * Note that exporters should not try to cache the scatter list, or + * return the same one for multiple calls. Caching is done either by the + * DMA-BUF code (for non-dynamic importers) or the importer. Ownership + * of the scatter list is transferred to the caller, and returned by + * @unmap_dma_buf. + */ struct sg_table *(*map_dma_buf)(struct dma_buf_attachment *attach, enum dma_data_direction dir); + + /** + * @unmap_dma_buf: + * + * This is called by dma_buf_unmap_attachment() and should unmap and + * release the &sg_table allocated in @map_dma_buf, and it is mandatory. + * For static dma_buf handling this might also unpin the backing + * storage if this is the last mapping of the DMA buffer. + */ void (*unmap_dma_buf)(struct dma_buf_attachment *attach, struct sg_table *sgt, enum dma_data_direction dir); @@ -189,8 +254,6 @@ DMA_BUF_EMAPPING_SGT_P2P(const struct dma_buf_mapping_sgt_exp_ops *exp_ops, return match; }
-extern const struct dma_buf_mapping_match dma_buf_sgt_exp_compat_match; - /* * dma_buf_ops initializer helper for simple drivers that use a single * SGT map/unmap operation without P2P. diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index a8cfbbafbe31fe..5feab8b8b5d517 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -145,75 +145,6 @@ struct dma_buf_ops { */ void (*unpin)(struct dma_buf_attachment *attach);
- /** - * @map_dma_buf: - * - * This is called by dma_buf_map_attachment() and is used to map a - * shared &dma_buf into device address space, and it is mandatory. It - * can only be called if @attach has been called successfully. - * - * This call may sleep, e.g. when the backing storage first needs to be - * allocated, or moved to a location suitable for all currently attached - * devices. - * - * Note that any specific buffer attributes required for this function - * should get added to device_dma_parameters accessible via - * &device.dma_params from the &dma_buf_attachment. The @attach callback - * should also check these constraints. - * - * If this is being called for the first time, the exporter can now - * choose to scan through the list of attachments for this buffer, - * collate the requirements of the attached devices, and choose an - * appropriate backing storage for the buffer. - * - * Based on enum dma_data_direction, it might be possible to have - * multiple users accessing at the same time (for reading, maybe), or - * any other kind of sharing that the exporter might wish to make - * available to buffer-users. - * - * This is always called with the dmabuf->resv object locked when - * the dynamic_mapping flag is true. - * - * Note that for non-dynamic exporters the driver must guarantee that - * that the memory is available for use and cleared of any old data by - * the time this function returns. Drivers which pipeline their buffer - * moves internally must wait for all moves and clears to complete. - * Dynamic exporters do not need to follow this rule: For non-dynamic - * importers the buffer is already pinned through @pin, which has the - * same requirements. Dynamic importers otoh are required to obey the - * dma_resv fences. - * - * Returns: - * - * A &sg_table scatter list of the backing storage of the DMA buffer, - * already mapped into the device address space of the &device attached - * with the provided &dma_buf_attachment. The addresses and lengths in - * the scatter list are PAGE_SIZE aligned. - * - * On failure, returns a negative error value wrapped into a pointer. - * May also return -EINTR when a signal was received while being - * blocked. - * - * Note that exporters should not try to cache the scatter list, or - * return the same one for multiple calls. Caching is done either by the - * DMA-BUF code (for non-dynamic importers) or the importer. Ownership - * of the scatter list is transferred to the caller, and returned by - * @unmap_dma_buf. - */ - struct sg_table * (*map_dma_buf)(struct dma_buf_attachment *, - enum dma_data_direction); - /** - * @unmap_dma_buf: - * - * This is called by dma_buf_unmap_attachment() and should unmap and - * release the &sg_table allocated in @map_dma_buf, and it is mandatory. - * For static dma_buf handling this might also unpin the backing - * storage if this is the last mapping of the DMA buffer. - */ - void (*unmap_dma_buf)(struct dma_buf_attachment *, - struct sg_table *, - enum dma_data_direction); - /* TODO: Add try_map_dma_buf version, to return immed with -EBUSY * if the call would block. */ @@ -530,9 +461,7 @@ struct dma_buf_attach_ops { /** * struct dma_buf_attachment - holds device-buffer attachment data * @dmabuf: buffer for this attachment. - * @dev: device attached to the buffer. * @node: list of dma_buf_attachment, protected by dma_resv lock of the dmabuf. - * @peer2peer: true if the importer can handle peer resources without pages. * @priv: exporter specific attachment data. 
* @importer_ops: importer operations for this attachment, if provided * dma_buf_map/unmap_attachment() must be called with the dma_resv lock held. @@ -551,9 +480,7 @@ struct dma_buf_attach_ops { */ struct dma_buf_attachment { struct dma_buf *dmabuf; - struct device *dev; struct list_head node; - bool peer2peer; const struct dma_buf_attach_ops *importer_ops; void *importer_priv; void *priv;
dma_buf_map_attachment() and dma_buf_map_attachment_unlocked() only work with importers that matched the SGT mapping type, so move them into the dma_buf_sgt_* namespace as dma_buf_sgt_map_attachment() and dma_buf_sgt_map_attachment_unlocked(), and update all callers.
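For reference, a minimal importer sequence after this patch looks roughly like the following (illustrative sketch only, error handling omitted; the unmap and attach names are still unchanged at this point in the series):

	attach = dma_buf_attach(dmabuf, dev);
	sgt = dma_buf_sgt_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
	/* program the device using sg_dma_address()/sg_dma_len() */
	dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
	dma_buf_detach(dmabuf, attach);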
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/accel/amdxdna/amdxdna_gem.c | 2 +- drivers/accel/ivpu/ivpu_gem.c | 3 +- drivers/accel/qaic/qaic_data.c | 4 +-- drivers/dma-buf/dma-buf.c | 28 +++++++++---------- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 3 +- drivers/gpu/drm/armada/armada_gem.c | 14 ++++++---- drivers/gpu/drm/drm_prime.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 4 +-- .../drm/i915/gem/selftests/i915_gem_dmabuf.c | 3 +- drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 2 +- drivers/gpu/drm/tegra/gem.c | 6 ++-- drivers/gpu/drm/virtio/virtgpu_prime.c | 2 +- drivers/gpu/drm/xe/xe_bo.c | 2 +- drivers/iio/industrialio-buffer.c | 2 +- drivers/infiniband/core/umem_dmabuf.c | 4 +-- .../media/common/videobuf2/videobuf2-core.c | 2 +- .../common/videobuf2/videobuf2-dma-contig.c | 2 +- .../media/common/videobuf2/videobuf2-dma-sg.c | 2 +- .../platform/nvidia/tegra-vde/dmabuf-cache.c | 2 +- drivers/misc/fastrpc.c | 3 +- drivers/usb/gadget/function/f_fs.c | 2 +- drivers/xen/gntdev-dmabuf.c | 2 +- include/linux/dma-buf-mapping.h | 4 +-- include/linux/dma-buf.h | 10 +++---- io_uring/zcrx.c | 3 +- net/core/devmem.c | 4 +-- 26 files changed, 63 insertions(+), 54 deletions(-)
diff --git a/drivers/accel/amdxdna/amdxdna_gem.c b/drivers/accel/amdxdna/amdxdna_gem.c index fb7c8de960cd2a..ab7610375ad761 100644 --- a/drivers/accel/amdxdna/amdxdna_gem.c +++ b/drivers/accel/amdxdna/amdxdna_gem.c @@ -610,7 +610,7 @@ amdxdna_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf) goto put_buf; }
- sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_sgt_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) { ret = PTR_ERR(sgt); goto fail_detach; diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c index ece68f570b7ead..850dc82c7857e2 100644 --- a/drivers/accel/ivpu/ivpu_gem.c +++ b/drivers/accel/ivpu/ivpu_gem.c @@ -54,7 +54,8 @@ static struct sg_table *ivpu_bo_map_attachment(struct ivpu_device *vdev, struct
sgt = bo->base.sgt; if (!sgt) { - sgt = dma_buf_map_attachment(bo->base.base.import_attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_sgt_map_attachment(bo->base.base.import_attach, + DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) ivpu_err(vdev, "Failed to map BO in IOMMU: %ld\n", PTR_ERR(sgt)); else diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c index 60cb4d65d48ee7..0a7b8b9620bf9a 100644 --- a/drivers/accel/qaic/qaic_data.c +++ b/drivers/accel/qaic/qaic_data.c @@ -844,7 +844,7 @@ struct drm_gem_object *qaic_gem_prime_import(struct drm_device *dev, struct dma_
drm_gem_private_object_init(dev, obj, attach->dmabuf->size); /* - * skipping dma_buf_map_attachment() as we do not know the direction + * skipping dma_buf_sgt_map_attachment() as we do not know the direction * just yet. Once the direction is known in the subsequent IOCTL to * attach slicing, we can do it then. */ @@ -870,7 +870,7 @@ static int qaic_prepare_import_bo(struct qaic_bo *bo, struct qaic_attach_slice_h struct sg_table *sgt; int ret;
- sgt = dma_buf_map_attachment(obj->import_attach, hdr->dir); + sgt = dma_buf_sgt_map_attachment(obj->import_attach, hdr->dir); if (IS_ERR(sgt)) { ret = PTR_ERR(sgt); return ret; diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index e773441abab65d..73c599f84e121a 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -653,7 +653,7 @@ static struct file *dma_buf_getfile(size_t size, int flags) * * 3. Once the buffer is attached to all devices userspace can initiate DMA * access to the shared buffer. In the kernel this is done by calling - * dma_buf_map_attachment() and dma_buf_unmap_attachment(). + * dma_buf_sgt_map_attachment() and dma_buf_unmap_attachment(). * * 4. Once a driver is done with a shared buffer it needs to call * dma_buf_detach() (after cleaning up any mappings) and then release the @@ -867,7 +867,7 @@ dma_buf_pin_on_map(struct dma_buf_attachment *attach) * * - dma_buf_pin() * - dma_buf_unpin() - * - dma_buf_map_attachment() + * - dma_buf_sgt_map_attachment() * - dma_buf_unmap_attachment() * - dma_buf_vmap() * - dma_buf_vunmap() @@ -885,7 +885,7 @@ dma_buf_pin_on_map(struct dma_buf_attachment *attach) * - dma_buf_mmap() * - dma_buf_begin_cpu_access() * - dma_buf_end_cpu_access() - * - dma_buf_map_attachment_unlocked() + * - dma_buf_sgt_map_attachment_unlocked() * - dma_buf_unmap_attachment_unlocked() * - dma_buf_vmap_unlocked() * - dma_buf_vunmap_unlocked() @@ -1120,7 +1120,7 @@ void dma_buf_unpin(struct dma_buf_attachment *attach) EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, "DMA_BUF");
/** - * dma_buf_map_attachment - Returns the scatterlist table of the attachment; + * dma_buf_sgt_map_attachment - Returns the scatterlist table of the attachment; * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the * dma_buf_ops. * @attach: [in] attachment whose scatterlist is to be returned @@ -1140,8 +1140,8 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, "DMA_BUF"); * Important: Dynamic importers must wait for the exclusive fence of the struct * dma_resv attached to the DMA-BUF first. */ -struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, - enum dma_data_direction direction) +struct sg_table *dma_buf_sgt_map_attachment(struct dma_buf_attachment *attach, + enum dma_data_direction direction) { const struct dma_buf_mapping_sgt_exp_ops *sgt_exp_ops = dma_buf_get_sgt_ops(attach); @@ -1213,20 +1213,20 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
return sg_table; } -EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, "DMA_BUF"); +EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_map_attachment, "DMA_BUF");
/** - * dma_buf_map_attachment_unlocked - Returns the scatterlist table of the attachment; + * dma_buf_sgt_map_attachment_unlocked - Returns the scatterlist table of the attachment; * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the * dma_buf_ops. * @attach: [in] attachment whose scatterlist is to be returned * @direction: [in] direction of DMA transfer * - * Unlocked variant of dma_buf_map_attachment(). + * Unlocked variant of dma_buf_sgt_map_attachment(). */ struct sg_table * -dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach, - enum dma_data_direction direction) +dma_buf_sgt_map_attachment_unlocked(struct dma_buf_attachment *attach, + enum dma_data_direction direction) { struct sg_table *sg_table;
@@ -1236,12 +1236,12 @@ dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach, return ERR_PTR(-EINVAL);
dma_resv_lock(attach->dmabuf->resv, NULL); - sg_table = dma_buf_map_attachment(attach, direction); + sg_table = dma_buf_sgt_map_attachment(attach, direction); dma_resv_unlock(attach->dmabuf->resv);
return sg_table; } -EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_unlocked, "DMA_BUF"); +EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_map_attachment_unlocked, "DMA_BUF");
/** * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might @@ -1251,7 +1251,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_unlocked, "DMA_BUF"); * @sg_table: [in] scatterlist info of the buffer to unmap * @direction: [in] direction of DMA transfer * - * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). + * This unmaps a DMA mapping for @attached obtained by dma_buf_sgt_map_attachment(). */ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, struct sg_table *sg_table, diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index 2b931e855abd9d..6c8b2a3dde1f54 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -914,7 +914,8 @@ static int amdgpu_ttm_backend_bind(struct ttm_device *bdev, struct sg_table *sgt;
attach = gtt->gobj->import_attach; - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_sgt_map_attachment(attach, + DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) return PTR_ERR(sgt);
diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c index bf6968b1f22511..21b83b00b68254 100644 --- a/drivers/gpu/drm/armada/armada_gem.c +++ b/drivers/gpu/drm/armada/armada_gem.c @@ -531,7 +531,7 @@ armada_gem_prime_import(struct drm_device *dev, struct dma_buf *buf) get_dma_buf(buf);
/* - * Don't call dma_buf_map_attachment() here - it maps the + * Don't call dma_buf_sgt_map_attachment() here - it maps the * scatterlist immediately for DMA, and this is not always * an appropriate thing to do. */ @@ -542,20 +542,22 @@ int armada_gem_map_import(struct armada_gem_object *dobj) { int ret;
- dobj->sgt = dma_buf_map_attachment_unlocked(dobj->obj.import_attach, - DMA_TO_DEVICE); + dobj->sgt = dma_buf_sgt_map_attachment_unlocked(dobj->obj.import_attach, + DMA_TO_DEVICE); if (IS_ERR(dobj->sgt)) { ret = PTR_ERR(dobj->sgt); dobj->sgt = NULL; - DRM_ERROR("dma_buf_map_attachment() error: %d\n", ret); + DRM_ERROR("dma_buf_sgt_map_attachment() error: %d\n", ret); return ret; } if (dobj->sgt->nents > 1) { - DRM_ERROR("dma_buf_map_attachment() returned an (unsupported) scattered list\n"); + DRM_ERROR( + "dma_buf_sgt_map_attachment() returned an (unsupported) scattered list\n"); return -EINVAL; } if (sg_dma_len(dobj->sgt->sgl) < dobj->obj.size) { - DRM_ERROR("dma_buf_map_attachment() returned a small buffer\n"); + DRM_ERROR( + "dma_buf_sgt_map_attachment() returned a small buffer\n"); return -EINVAL; } dobj->dev_addr = sg_dma_address(dobj->sgt->sgl); diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index 0852c60a722b67..c1afb9e0886c4f 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -1005,7 +1005,7 @@ struct drm_gem_object *drm_gem_prime_import_dev(struct drm_device *dev,
get_dma_buf(dma_buf);
- sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_sgt_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) { ret = PTR_ERR(sgt); goto fail_detach; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index a119623aed254b..92e2677eb5a33b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -242,8 +242,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
assert_object_held(obj);
- sgt = dma_buf_map_attachment(obj->base.import_attach, - DMA_BIDIRECTIONAL); + sgt = dma_buf_sgt_map_attachment(obj->base.import_attach, + DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) return PTR_ERR(sgt);
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c index 2fda549dd82d2b..fcfa819caa389f 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c @@ -293,7 +293,8 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915, goto out_import; }
- st = dma_buf_map_attachment_unlocked(import_attach, DMA_BIDIRECTIONAL); + st = dma_buf_sgt_map_attachment_unlocked(import_attach, + DMA_BIDIRECTIONAL); if (IS_ERR(st)) { err = PTR_ERR(st); goto out_detach; diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c index 23beaeefab67d7..569ee2d3ab6f84 100644 --- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c +++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c @@ -121,7 +121,7 @@ struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev,
get_dma_buf(dma_buf);
- sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE); + sgt = dma_buf_sgt_map_attachment_unlocked(attach, DMA_TO_DEVICE); if (IS_ERR(sgt)) { ret = PTR_ERR(sgt); goto fail_detach; diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c index 244c01819d56b5..4866d639bbb026 100644 --- a/drivers/gpu/drm/tegra/gem.c +++ b/drivers/gpu/drm/tegra/gem.c @@ -86,7 +86,8 @@ static struct host1x_bo_mapping *tegra_bo_pin(struct device *dev, struct host1x_ goto free; }
- map->sgt = dma_buf_map_attachment_unlocked(map->attach, direction); + map->sgt = dma_buf_sgt_map_attachment_unlocked(map->attach, + direction); if (IS_ERR(map->sgt)) { dma_buf_detach(buf, map->attach); err = PTR_ERR(map->sgt); @@ -477,7 +478,8 @@ static struct tegra_bo *tegra_bo_import(struct drm_device *drm, goto free; }
- bo->sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE); + bo->sgt = dma_buf_sgt_map_attachment_unlocked(attach, + DMA_TO_DEVICE); if (IS_ERR(bo->sgt)) { err = PTR_ERR(bo->sgt); goto detach; diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c index d7e1f741f941a3..3dbc1b41052068 100644 --- a/drivers/gpu/drm/virtio/virtgpu_prime.c +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c @@ -163,7 +163,7 @@ int virtgpu_dma_buf_import_sgt(struct virtio_gpu_mem_entry **ents, if (ret <= 0) return ret < 0 ? ret : -ETIMEDOUT;
- sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_sgt_map_attachment(attach, DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) return PTR_ERR(sgt);
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c index 71acd45aa33b00..e5e716c5f33fa8 100644 --- a/drivers/gpu/drm/xe/xe_bo.c +++ b/drivers/gpu/drm/xe/xe_bo.c @@ -764,7 +764,7 @@ static int xe_bo_move_dmabuf(struct ttm_buffer_object *ttm_bo, ttm_bo->sg = NULL; }
- sg = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); + sg = dma_buf_sgt_map_attachment(attach, DMA_BIDIRECTIONAL); if (IS_ERR(sg)) return PTR_ERR(sg);
diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c index 7daac53c502e50..7556c3c7675c2c 100644 --- a/drivers/iio/industrialio-buffer.c +++ b/drivers/iio/industrialio-buffer.c @@ -1701,7 +1701,7 @@ static int iio_buffer_attach_dmabuf(struct iio_dev_buffer_pair *ib, priv->dir = buffer->direction == IIO_BUFFER_DIRECTION_IN ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
- priv->sgt = dma_buf_map_attachment(attach, priv->dir); + priv->sgt = dma_buf_sgt_map_attachment(attach, priv->dir); if (IS_ERR(priv->sgt)) { err = PTR_ERR(priv->sgt); dev_err(&indio_dev->dev, "Unable to map attachment: %d\n", err); diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c index 0ec2e4120cc94b..aac9f9d12f0f8f 100644 --- a/drivers/infiniband/core/umem_dmabuf.c +++ b/drivers/infiniband/core/umem_dmabuf.c @@ -29,8 +29,8 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf) if (umem_dmabuf->sgt) goto wait_fence;
- sgt = dma_buf_map_attachment(umem_dmabuf->attach, - DMA_BIDIRECTIONAL); + sgt = dma_buf_sgt_map_attachment(umem_dmabuf->attach, + DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) return PTR_ERR(sgt);
diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c index 2df566f409b65e..4fe30a21e1e687 100644 --- a/drivers/media/common/videobuf2/videobuf2-core.c +++ b/drivers/media/common/videobuf2/videobuf2-core.c @@ -1470,7 +1470,7 @@ static int __prepare_dmabuf(struct vb2_buffer *vb) vb->planes[plane].mem_priv = mem_priv;
/* - * This pins the buffer(s) with dma_buf_map_attachment()). It's done + * This pins the buffer(s) with dma_buf_sgt_map_attachment()). It's done * here instead just before the DMA, while queueing the buffer(s) so * userspace knows sooner rather than later if the dma-buf map fails. */ diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c index 7a3bc31699bb90..de3eb4121aadb0 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c @@ -706,7 +706,7 @@ static int vb2_dc_map_dmabuf(void *mem_priv) }
/* get the associated scatterlist for this buffer */ - sgt = dma_buf_map_attachment_unlocked(buf->db_attach, buf->dma_dir); + sgt = dma_buf_sgt_map_attachment_unlocked(buf->db_attach, buf->dma_dir); if (IS_ERR(sgt)) { pr_err("Error getting dmabuf scatterlist\n"); return -EINVAL; diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c index 03a836dce44f90..ed968d7e326449 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c @@ -568,7 +568,7 @@ static int vb2_dma_sg_map_dmabuf(void *mem_priv) }
/* get the associated scatterlist for this buffer */ - sgt = dma_buf_map_attachment_unlocked(buf->db_attach, buf->dma_dir); + sgt = dma_buf_sgt_map_attachment_unlocked(buf->db_attach, buf->dma_dir); if (IS_ERR(sgt)) { pr_err("Error getting dmabuf scatterlist\n"); return -EINVAL; diff --git a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c index b34244ea14dd06..595b759de4f939 100644 --- a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c +++ b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c @@ -102,7 +102,7 @@ int tegra_vde_dmabuf_cache_map(struct tegra_vde *vde, goto err_unlock; }
- sgt = dma_buf_map_attachment_unlocked(attachment, dma_dir); + sgt = dma_buf_sgt_map_attachment_unlocked(attachment, dma_dir); if (IS_ERR(sgt)) { dev_err(dev, "Failed to get dmabufs sg_table\n"); err = PTR_ERR(sgt); diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index 2ea57170e56b3e..52abf3290a580f 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -779,7 +779,8 @@ static int fastrpc_map_attach(struct fastrpc_user *fl, int fd, goto attach_err; }
- table = dma_buf_map_attachment_unlocked(map->attach, DMA_BIDIRECTIONAL); + table = dma_buf_sgt_map_attachment_unlocked(map->attach, + DMA_BIDIRECTIONAL); if (IS_ERR(table)) { err = PTR_ERR(table); goto map_err; diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c index 5c81ea9afa1249..d5d4bfc390ebc6 100644 --- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c @@ -1520,7 +1520,7 @@ static int ffs_dmabuf_attach(struct file *file, int fd) if (err) goto err_free_priv;
- sg_table = dma_buf_map_attachment(attach, dir); + sg_table = dma_buf_sgt_map_attachment(attach, dir); dma_resv_unlock(dmabuf->resv);
if (IS_ERR(sg_table)) { diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c index 91a31a22ba98aa..78125cc1aee322 100644 --- a/drivers/xen/gntdev-dmabuf.c +++ b/drivers/xen/gntdev-dmabuf.c @@ -590,7 +590,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
gntdev_dmabuf->u.imp.attach = attach;
- sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_sgt_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) { ret = ERR_CAST(sgt); goto fail_detach; diff --git a/include/linux/dma-buf-mapping.h b/include/linux/dma-buf-mapping.h index f81e215401b49d..daddf30d0eceae 100644 --- a/include/linux/dma-buf-mapping.h +++ b/include/linux/dma-buf-mapping.h @@ -101,7 +101,7 @@ int dma_buf_match_mapping(struct dma_buf_match_args *args, * * When this type is matched the map/unmap functions are: * - * dma_buf_map_attachment() + * dma_buf_sgt_map_attachment() * dma_buf_unmap_attachment() * * The struct sg_table returned by those functions has only the DMA portions @@ -117,7 +117,7 @@ struct dma_buf_mapping_sgt_exp_ops { /** * @map_dma_buf: * - * This is called by dma_buf_map_attachment() and is used to map a + * This is called by dma_buf_sgt_map_attachment() and is used to map a * shared &dma_buf into device address space, and it is mandatory. It * can only be called if @attach has been called successfully. * diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 5feab8b8b5d517..1ed50ec261022e 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -475,7 +475,7 @@ struct dma_buf_attach_ops { * * An attachment is created by calling dma_buf_attach(), and released again by * calling dma_buf_detach(). The DMA mapping itself needed to initiate a - * transfer is created by dma_buf_map_attachment() and freed again by calling + * transfer is created by dma_buf_sgt_map_attachment() and freed again by calling * dma_buf_unmap_attachment(). */ struct dma_buf_attachment { @@ -580,8 +580,8 @@ int dma_buf_fd(struct dma_buf *dmabuf, int flags); struct dma_buf *dma_buf_get(int fd); void dma_buf_put(struct dma_buf *dmabuf);
-struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *, - enum dma_data_direction); +struct sg_table *dma_buf_sgt_map_attachment(struct dma_buf_attachment *, + enum dma_data_direction); void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *, enum dma_data_direction); void dma_buf_move_notify(struct dma_buf *dma_buf); @@ -590,8 +590,8 @@ int dma_buf_begin_cpu_access(struct dma_buf *dma_buf, int dma_buf_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direction dir); struct sg_table * -dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach, - enum dma_data_direction direction); +dma_buf_sgt_map_attachment_unlocked(struct dma_buf_attachment *attach, + enum dma_data_direction direction); void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach, struct sg_table *sg_table, enum dma_data_direction direction); diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c index b99cf2c6670aa8..3b8c9752208bdf 100644 --- a/io_uring/zcrx.c +++ b/io_uring/zcrx.c @@ -137,7 +137,8 @@ static int io_import_dmabuf(struct io_zcrx_ifq *ifq, goto err; }
- mem->sgt = dma_buf_map_attachment_unlocked(mem->attach, DMA_FROM_DEVICE); + mem->sgt = dma_buf_sgt_map_attachment_unlocked(mem->attach, + DMA_FROM_DEVICE); if (IS_ERR(mem->sgt)) { ret = PTR_ERR(mem->sgt); mem->sgt = NULL; diff --git a/net/core/devmem.c b/net/core/devmem.c index ec4217d6c0b4fd..ccdf3f70a4de9b 100644 --- a/net/core/devmem.c +++ b/net/core/devmem.c @@ -223,8 +223,8 @@ net_devmem_bind_dmabuf(struct net_device *dev, goto err_free_binding; }
- binding->sgt = dma_buf_map_attachment_unlocked(binding->attachment, - direction); + binding->sgt = dma_buf_sgt_map_attachment_unlocked(binding->attachment, + direction); if (IS_ERR(binding->sgt)) { err = PTR_ERR(binding->sgt); NL_SET_ERR_MSG(extack, "Failed to map dmabuf attachment");
dma_buf_unmap_attachment() and dma_buf_unmap_attachment_unlocked() only work with importers that matched the SGT mapping type, so move them into the dma_buf_sgt_* namespace as dma_buf_sgt_unmap_attachment() and dma_buf_sgt_unmap_attachment_unlocked(), and update all callers.
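As an illustration (not part of the diff), the map/unmap pairing for an SGT importer now reads:

	sgt = dma_buf_sgt_map_attachment(attach, dir);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);
	/* ... */
	dma_buf_sgt_unmap_attachment(attach, sgt, dir);

with the _unlocked variants keeping the same pairing for callers that do not already hold the dma_resv lock.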
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/accel/amdxdna/amdxdna_gem.c | 5 +-- drivers/accel/ivpu/ivpu_gem.c | 5 +-- drivers/accel/qaic/qaic_data.c | 2 +- drivers/dma-buf/dma-buf.c | 32 +++++++++---------- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 3 +- drivers/gpu/drm/armada/armada_gem.c | 5 +-- drivers/gpu/drm/drm_prime.c | 5 +-- drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 4 +-- drivers/gpu/drm/i915/gem/i915_gem_object.c | 2 +- .../drm/i915/gem/selftests/i915_gem_dmabuf.c | 3 +- drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 2 +- drivers/gpu/drm/tegra/gem.c | 11 ++++--- drivers/gpu/drm/virtio/virtgpu_prime.c | 6 ++-- drivers/gpu/drm/xe/xe_bo.c | 16 ++++++---- drivers/iio/industrialio-buffer.c | 4 +-- drivers/infiniband/core/umem_dmabuf.c | 4 +-- .../common/videobuf2/videobuf2-dma-contig.c | 7 ++-- .../media/common/videobuf2/videobuf2-dma-sg.c | 3 +- .../platform/nvidia/tegra-vde/dmabuf-cache.c | 5 +-- drivers/misc/fastrpc.c | 4 +-- drivers/usb/gadget/function/f_fs.c | 2 +- drivers/xen/gntdev-dmabuf.c | 6 ++-- include/linux/dma-buf-mapping.h | 4 +-- include/linux/dma-buf.h | 12 +++---- io_uring/zcrx.c | 4 +-- net/core/devmem.c | 8 ++--- 26 files changed, 88 insertions(+), 76 deletions(-)
diff --git a/drivers/accel/amdxdna/amdxdna_gem.c b/drivers/accel/amdxdna/amdxdna_gem.c index ab7610375ad761..ccc78aeeb4c0fc 100644 --- a/drivers/accel/amdxdna/amdxdna_gem.c +++ b/drivers/accel/amdxdna/amdxdna_gem.c @@ -444,7 +444,8 @@ static struct dma_buf *amdxdna_gem_prime_export(struct drm_gem_object *gobj, int
static void amdxdna_imported_obj_free(struct amdxdna_gem_obj *abo) { - dma_buf_unmap_attachment_unlocked(abo->attach, abo->base.sgt, DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment_unlocked(abo->attach, abo->base.sgt, + DMA_BIDIRECTIONAL); dma_buf_detach(abo->dma_buf, abo->attach); dma_buf_put(abo->dma_buf); drm_gem_object_release(to_gobj(abo)); @@ -629,7 +630,7 @@ amdxdna_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf) return gobj;
fail_unmap: - dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL); fail_detach: dma_buf_detach(dma_buf, attach); put_buf: diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c index 850dc82c7857e2..1fcb454f4cb33b 100644 --- a/drivers/accel/ivpu/ivpu_gem.c +++ b/drivers/accel/ivpu/ivpu_gem.c @@ -159,8 +159,9 @@ static void ivpu_bo_unbind_locked(struct ivpu_bo *bo)
if (bo->base.sgt) { if (bo->base.base.import_attach) { - dma_buf_unmap_attachment(bo->base.base.import_attach, - bo->base.sgt, DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment( + bo->base.base.import_attach, bo->base.sgt, + DMA_BIDIRECTIONAL); } else { dma_unmap_sgtable(vdev->drm.dev, bo->base.sgt, DMA_BIDIRECTIONAL, 0); sg_free_table(bo->base.sgt); diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c index 0a7b8b9620bf9a..8e2e597bc1ff03 100644 --- a/drivers/accel/qaic/qaic_data.c +++ b/drivers/accel/qaic/qaic_data.c @@ -911,7 +911,7 @@ static int qaic_prepare_bo(struct qaic_device *qdev, struct qaic_bo *bo,
static void qaic_unprepare_import_bo(struct qaic_bo *bo) { - dma_buf_unmap_attachment(bo->base.import_attach, bo->sgt, bo->dir); + dma_buf_sgt_unmap_attachment(bo->base.import_attach, bo->sgt, bo->dir); bo->sgt = NULL; }
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 73c599f84e121a..35d3bbb4bb053c 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -653,7 +653,7 @@ static struct file *dma_buf_getfile(size_t size, int flags) * * 3. Once the buffer is attached to all devices userspace can initiate DMA * access to the shared buffer. In the kernel this is done by calling - * dma_buf_sgt_map_attachment() and dma_buf_unmap_attachment(). + * dma_buf_sgt_map_attachment() and dma_buf_sgt_unmap_attachment(). * * 4. Once a driver is done with a shared buffer it needs to call * dma_buf_detach() (after cleaning up any mappings) and then release the @@ -868,7 +868,7 @@ dma_buf_pin_on_map(struct dma_buf_attachment *attach) * - dma_buf_pin() * - dma_buf_unpin() * - dma_buf_sgt_map_attachment() - * - dma_buf_unmap_attachment() + * - dma_buf_sgt_unmap_attachment() * - dma_buf_vmap() * - dma_buf_vunmap() * @@ -886,7 +886,7 @@ dma_buf_pin_on_map(struct dma_buf_attachment *attach) * - dma_buf_begin_cpu_access() * - dma_buf_end_cpu_access() * - dma_buf_sgt_map_attachment_unlocked() - * - dma_buf_unmap_attachment_unlocked() + * - dma_buf_sgt_unmap_attachment_unlocked() * - dma_buf_vmap_unlocked() * - dma_buf_vunmap_unlocked() * @@ -1132,7 +1132,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, "DMA_BUF"); * On success, the DMA addresses and lengths in the returned scatterlist are * PAGE_SIZE aligned. * - * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that + * A mapping must be unmapped by using dma_buf_sgt_unmap_attachment(). Note that * the underlying backing storage is pinned for as long as a mapping exists, * therefore users/importers should not hold onto a mapping for undue amounts of * time. @@ -1244,7 +1244,7 @@ dma_buf_sgt_map_attachment_unlocked(struct dma_buf_attachment *attach, EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_map_attachment_unlocked, "DMA_BUF");
/** - * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might + * dma_buf_sgt_unmap_attachment - unmaps and decreases usecount of the buffer;might * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of * dma_buf_ops. * @attach: [in] attachment to unmap buffer from @@ -1253,9 +1253,9 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_map_attachment_unlocked, "DMA_BUF"); * * This unmaps a DMA mapping for @attached obtained by dma_buf_sgt_map_attachment(). */ -void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, - struct sg_table *sg_table, - enum dma_data_direction direction) +void dma_buf_sgt_unmap_attachment(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction) { const struct dma_buf_mapping_sgt_exp_ops *sgt_exp_ops = dma_buf_get_sgt_ops(attach); @@ -1273,21 +1273,21 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, if (dma_buf_pin_on_map(attach)) attach->dmabuf->ops->unpin(attach); } -EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, "DMA_BUF"); +EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_unmap_attachment, "DMA_BUF");
/** - * dma_buf_unmap_attachment_unlocked - unmaps and decreases usecount of the buffer;might + * dma_buf_sgt_unmap_attachment_unlocked - unmaps and decreases usecount of the buffer;might * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of * dma_buf_ops. * @attach: [in] attachment to unmap buffer from * @sg_table: [in] scatterlist info of the buffer to unmap * @direction: [in] direction of DMA transfer * - * Unlocked variant of dma_buf_unmap_attachment(). + * Unlocked variant of dma_buf_sgt_unmap_attachment(). */ -void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach, - struct sg_table *sg_table, - enum dma_data_direction direction) +void dma_buf_sgt_unmap_attachment_unlocked(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction) { might_sleep();
@@ -1295,10 +1295,10 @@ void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach, return;
dma_resv_lock(attach->dmabuf->resv, NULL); - dma_buf_unmap_attachment(attach, sg_table, direction); + dma_buf_sgt_unmap_attachment(attach, sg_table, direction); dma_resv_unlock(attach->dmabuf->resv); } -EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_unlocked, "DMA_BUF"); +EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_unmap_attachment_unlocked, "DMA_BUF");
/** * dma_buf_move_notify - notify attachments that DMA-buf is moving diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index 6c8b2a3dde1f54..9e80212fb096ba 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -1036,7 +1036,8 @@ static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev, struct dma_buf_attachment *attach;
attach = gtt->gobj->import_attach; - dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment(attach, ttm->sg, + DMA_BIDIRECTIONAL); ttm->sg = NULL; }
diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c index 21b83b00b68254..dee5fef5eb4f7b 100644 --- a/drivers/gpu/drm/armada/armada_gem.c +++ b/drivers/gpu/drm/armada/armada_gem.c @@ -68,8 +68,9 @@ void armada_gem_free_object(struct drm_gem_object *obj) if (dobj->obj.import_attach) { /* We only ever display imported data */ if (dobj->sgt) - dma_buf_unmap_attachment_unlocked(dobj->obj.import_attach, - dobj->sgt, DMA_TO_DEVICE); + dma_buf_sgt_unmap_attachment_unlocked( + dobj->obj.import_attach, dobj->sgt, + DMA_TO_DEVICE); drm_prime_gem_destroy(&dobj->obj, NULL); }
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index c1afb9e0886c4f..6f98d0c123dc8d 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -1023,7 +1023,7 @@ struct drm_gem_object *drm_gem_prime_import_dev(struct drm_device *dev, return obj;
fail_unmap: - dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL); fail_detach: dma_buf_detach(dma_buf, attach); dma_buf_put(dma_buf); @@ -1121,7 +1121,8 @@ void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg)
attach = obj->import_attach; if (sg) - dma_buf_unmap_attachment_unlocked(attach, sg, DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment_unlocked(attach, sg, + DMA_BIDIRECTIONAL); dma_buf = attach->dmabuf; dma_buf_detach(attach->dmabuf, attach); /* remove the reference */ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index 92e2677eb5a33b..325442948fafe0 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -270,8 +270,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj, struct sg_table *sgt) { - dma_buf_unmap_attachment(obj->base.import_attach, sgt, - DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment(obj->base.import_attach, sgt, + DMA_BIDIRECTIONAL); }
static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = { diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 3f6f040c359db0..0b9ba60b59c5c6 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -367,7 +367,7 @@ void __i915_gem_object_pages_fini(struct drm_i915_gem_object *obj) atomic_set(&obj->mm.pages_pin_count, 0);
/* - * dma_buf_unmap_attachment() requires reservation to be + * dma_buf_sgt_unmap_attachment() requires reservation to be * locked. The imported GEM shouldn't share reservation lock * and ttm_bo_cleanup_memtype_use() shouldn't be invoked for * dma-buf, so it's safe to take the lock. diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c index fcfa819caa389f..6b6d235fd3e9fd 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c @@ -307,7 +307,8 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915, timeout = -ETIME; } err = timeout > 0 ? 0 : timeout; - dma_buf_unmap_attachment_unlocked(import_attach, st, DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment_unlocked(import_attach, st, + DMA_BIDIRECTIONAL); out_detach: dma_buf_detach(dmabuf, import_attach); out_import: diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c index 569ee2d3ab6f84..c549b94b2e8ad5 100644 --- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c +++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c @@ -138,7 +138,7 @@ struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev, return obj;
fail_unmap: - dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_TO_DEVICE); + dma_buf_sgt_unmap_attachment_unlocked(attach, sgt, DMA_TO_DEVICE); fail_detach: dma_buf_detach(dma_buf, attach); dma_buf_put(dma_buf); diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c index 4866d639bbb026..6b93f4d42df26c 100644 --- a/drivers/gpu/drm/tegra/gem.c +++ b/drivers/gpu/drm/tegra/gem.c @@ -163,8 +163,8 @@ static struct host1x_bo_mapping *tegra_bo_pin(struct device *dev, struct host1x_ static void tegra_bo_unpin(struct host1x_bo_mapping *map) { if (map->attach) { - dma_buf_unmap_attachment_unlocked(map->attach, map->sgt, - map->direction); + dma_buf_sgt_unmap_attachment_unlocked(map->attach, map->sgt, + map->direction); dma_buf_detach(map->attach->dmabuf, map->attach); } else { dma_unmap_sgtable(map->dev, map->sgt, map->direction, 0); @@ -499,7 +499,8 @@ static struct tegra_bo *tegra_bo_import(struct drm_device *drm,
detach: if (!IS_ERR_OR_NULL(bo->sgt)) - dma_buf_unmap_attachment_unlocked(attach, bo->sgt, DMA_TO_DEVICE); + dma_buf_sgt_unmap_attachment_unlocked(attach, bo->sgt, + DMA_TO_DEVICE);
dma_buf_detach(buf, attach); dma_buf_put(buf); @@ -528,8 +529,8 @@ void tegra_bo_free_object(struct drm_gem_object *gem) tegra_bo_iommu_unmap(tegra, bo);
if (drm_gem_is_imported(gem)) { - dma_buf_unmap_attachment_unlocked(gem->import_attach, bo->sgt, - DMA_TO_DEVICE); + dma_buf_sgt_unmap_attachment_unlocked( + gem->import_attach, bo->sgt, DMA_TO_DEVICE); dma_buf_detach(gem->import_attach->dmabuf, gem->import_attach); } } diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c index 3dbc1b41052068..95582cfbd7e63f 100644 --- a/drivers/gpu/drm/virtio/virtgpu_prime.c +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c @@ -171,7 +171,7 @@ int virtgpu_dma_buf_import_sgt(struct virtio_gpu_mem_entry **ents, sizeof(struct virtio_gpu_mem_entry), GFP_KERNEL); if (!(*ents)) { - dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL); return -ENOMEM; }
@@ -196,8 +196,8 @@ static void virtgpu_dma_buf_unmap(struct virtio_gpu_object *bo) virtio_gpu_detach_object_fenced(bo);
if (bo->sgt) - dma_buf_unmap_attachment(attach, bo->sgt, - DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment(attach, bo->sgt, + DMA_BIDIRECTIONAL);
bo->sgt = NULL; } diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c index e5e716c5f33fa8..893a2023d66e60 100644 --- a/drivers/gpu/drm/xe/xe_bo.c +++ b/drivers/gpu/drm/xe/xe_bo.c @@ -752,7 +752,8 @@ static int xe_bo_move_dmabuf(struct ttm_buffer_object *ttm_bo, ttm_bo->sg) { dma_resv_wait_timeout(ttm_bo->base.resv, DMA_RESV_USAGE_BOOKKEEP, false, MAX_SCHEDULE_TIMEOUT); - dma_buf_unmap_attachment(attach, ttm_bo->sg, DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment(attach, ttm_bo->sg, + DMA_BIDIRECTIONAL); ttm_bo->sg = NULL; }
@@ -760,7 +761,8 @@ static int xe_bo_move_dmabuf(struct ttm_buffer_object *ttm_bo, goto out;
if (ttm_bo->sg) { - dma_buf_unmap_attachment(attach, ttm_bo->sg, DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment(attach, ttm_bo->sg, + DMA_BIDIRECTIONAL); ttm_bo->sg = NULL; }
@@ -1480,9 +1482,9 @@ int xe_bo_dma_unmap_pinned(struct xe_bo *bo) struct xe_ttm_tt *xe_tt = container_of(tt, typeof(*xe_tt), ttm);
if (ttm_bo->type == ttm_bo_type_sg && ttm_bo->sg) { - dma_buf_unmap_attachment(ttm_bo->base.import_attach, - ttm_bo->sg, - DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment(ttm_bo->base.import_attach, + ttm_bo->sg, + DMA_BIDIRECTIONAL); ttm_bo->sg = NULL; xe_tt->sg = NULL; } else if (xe_tt->sg) { @@ -1597,8 +1599,8 @@ static void xe_ttm_bo_delete_mem_notify(struct ttm_buffer_object *ttm_bo) struct xe_ttm_tt *xe_tt = container_of(ttm_bo->ttm, struct xe_ttm_tt, ttm);
- dma_buf_unmap_attachment(ttm_bo->base.import_attach, ttm_bo->sg, - DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment(ttm_bo->base.import_attach, + ttm_bo->sg, DMA_BIDIRECTIONAL); ttm_bo->sg = NULL; xe_tt->sg = NULL; } diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c index 7556c3c7675c2c..973db853525958 100644 --- a/drivers/iio/industrialio-buffer.c +++ b/drivers/iio/industrialio-buffer.c @@ -1564,7 +1564,7 @@ static void iio_buffer_dmabuf_release(struct kref *ref) struct iio_buffer *buffer = priv->buffer; struct dma_buf *dmabuf = attach->dmabuf;
- dma_buf_unmap_attachment_unlocked(attach, priv->sgt, priv->dir); + dma_buf_sgt_unmap_attachment_unlocked(attach, priv->sgt, priv->dir);
buffer->access->detach_dmabuf(buffer, priv->block);
@@ -1749,7 +1749,7 @@ static int iio_buffer_attach_dmabuf(struct iio_dev_buffer_pair *ib, return 0;
err_dmabuf_unmap_attachment: - dma_buf_unmap_attachment(attach, priv->sgt, priv->dir); + dma_buf_sgt_unmap_attachment(attach, priv->sgt, priv->dir); err_resv_unlock: dma_resv_unlock(dmabuf->resv); err_dmabuf_detach: diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c index aac9f9d12f0f8f..8401cd31763aa4 100644 --- a/drivers/infiniband/core/umem_dmabuf.c +++ b/drivers/infiniband/core/umem_dmabuf.c @@ -106,8 +106,8 @@ void ib_umem_dmabuf_unmap_pages(struct ib_umem_dmabuf *umem_dmabuf) umem_dmabuf->last_sg_trim = 0; }
- dma_buf_unmap_attachment(umem_dmabuf->attach, umem_dmabuf->sgt, - DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment(umem_dmabuf->attach, umem_dmabuf->sgt, + DMA_BIDIRECTIONAL);
umem_dmabuf->sgt = NULL; } diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c index de3eb4121aadb0..6c18a0b33546e8 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c @@ -717,8 +717,8 @@ static int vb2_dc_map_dmabuf(void *mem_priv) if (contig_size < buf->size) { pr_err("contiguous chunk is too small %lu/%lu\n", contig_size, buf->size); - dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, - buf->dma_dir); + dma_buf_sgt_unmap_attachment_unlocked(buf->db_attach, sgt, + buf->dma_dir); return -EFAULT; }
@@ -749,7 +749,8 @@ static void vb2_dc_unmap_dmabuf(void *mem_priv) dma_buf_vunmap_unlocked(buf->db_attach->dmabuf, &map); buf->vaddr = NULL; } - dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, buf->dma_dir); + dma_buf_sgt_unmap_attachment_unlocked(buf->db_attach, sgt, + buf->dma_dir);
buf->dma_addr = 0; buf->dma_sgt = NULL; diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c index ed968d7e326449..a5b855f055e358 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c @@ -600,7 +600,8 @@ static void vb2_dma_sg_unmap_dmabuf(void *mem_priv) dma_buf_vunmap_unlocked(buf->db_attach->dmabuf, &map); buf->vaddr = NULL; } - dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, buf->dma_dir); + dma_buf_sgt_unmap_attachment_unlocked(buf->db_attach, sgt, + buf->dma_dir);
buf->dma_sgt = NULL; } diff --git a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c index 595b759de4f939..04ea8ffd4836c9 100644 --- a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c +++ b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c @@ -38,7 +38,8 @@ static void tegra_vde_release_entry(struct tegra_vde_cache_entry *entry) if (entry->vde->domain) tegra_vde_iommu_unmap(entry->vde, entry->iova);
- dma_buf_unmap_attachment_unlocked(entry->a, entry->sgt, entry->dma_dir); + dma_buf_sgt_unmap_attachment_unlocked(entry->a, entry->sgt, + entry->dma_dir); dma_buf_detach(dmabuf, entry->a); dma_buf_put(dmabuf);
@@ -152,7 +153,7 @@ int tegra_vde_dmabuf_cache_map(struct tegra_vde *vde, err_free: kfree(entry); err_unmap: - dma_buf_unmap_attachment_unlocked(attachment, sgt, dma_dir); + dma_buf_sgt_unmap_attachment_unlocked(attachment, sgt, dma_dir); err_detach: dma_buf_detach(dmabuf, attachment); err_unlock: diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index 52abf3290a580f..a7376d4a07c73c 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -329,8 +329,8 @@ static void fastrpc_free_map(struct kref *ref) return; } } - dma_buf_unmap_attachment_unlocked(map->attach, map->table, - DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment_unlocked(map->attach, map->table, + DMA_BIDIRECTIONAL); dma_buf_detach(map->buf, map->attach); dma_buf_put(map->buf); } diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c index d5d4bfc390ebc6..a6adbd132669e3 100644 --- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c @@ -1333,7 +1333,7 @@ static void ffs_dmabuf_release(struct kref *ref) struct dma_buf *dmabuf = attach->dmabuf;
pr_vdebug("FFS DMABUF release\n"); - dma_buf_unmap_attachment_unlocked(attach, priv->sgt, priv->dir); + dma_buf_sgt_unmap_attachment_unlocked(attach, priv->sgt, priv->dir);
dma_buf_detach(attach->dmabuf, attach); dma_buf_put(dmabuf); diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c index 78125cc1aee322..927265ae7a5dc8 100644 --- a/drivers/xen/gntdev-dmabuf.c +++ b/drivers/xen/gntdev-dmabuf.c @@ -653,7 +653,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev, fail_end_access: dmabuf_imp_end_foreign_access(gntdev_dmabuf->u.imp.refs, count); fail_unmap: - dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL); fail_detach: dma_buf_detach(dma_buf, attach); fail_free_obj: @@ -703,8 +703,8 @@ static int dmabuf_imp_release(struct gntdev_dmabuf_priv *priv, u32 fd) attach = gntdev_dmabuf->u.imp.attach;
if (gntdev_dmabuf->u.imp.sgt) - dma_buf_unmap_attachment_unlocked(attach, gntdev_dmabuf->u.imp.sgt, - DMA_BIDIRECTIONAL); + dma_buf_sgt_unmap_attachment_unlocked( + attach, gntdev_dmabuf->u.imp.sgt, DMA_BIDIRECTIONAL); dma_buf = attach->dmabuf; dma_buf_detach(attach->dmabuf, attach); dma_buf_put(dma_buf); diff --git a/include/linux/dma-buf-mapping.h b/include/linux/dma-buf-mapping.h index daddf30d0eceae..ac859b8913edcd 100644 --- a/include/linux/dma-buf-mapping.h +++ b/include/linux/dma-buf-mapping.h @@ -102,7 +102,7 @@ int dma_buf_match_mapping(struct dma_buf_match_args *args, * When this type is matched the map/unmap functions are: * * dma_buf_sgt_map_attachment() - * dma_buf_unmap_attachment() + * dma_buf_sgt_unmap_attachment() * * The struct sg_table returned by those functions has only the DMA portions * available. The caller must not try to use the struct page * information. @@ -175,7 +175,7 @@ struct dma_buf_mapping_sgt_exp_ops { /** * @unmap_dma_buf: * - * This is called by dma_buf_unmap_attachment() and should unmap and + * This is called by dma_buf_sgt_unmap_attachment() and should unmap and * release the &sg_table allocated in @map_dma_buf, and it is mandatory. * For static dma_buf handling this might also unpin the backing * storage if this is the last mapping of the DMA buffer. diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 1ed50ec261022e..7fde67e1b4f459 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -476,7 +476,7 @@ struct dma_buf_attach_ops { * An attachment is created by calling dma_buf_attach(), and released again by * calling dma_buf_detach(). The DMA mapping itself needed to initiate a * transfer is created by dma_buf_sgt_map_attachment() and freed again by calling - * dma_buf_unmap_attachment(). + * dma_buf_sgt_unmap_attachment(). */ struct dma_buf_attachment { struct dma_buf *dmabuf; @@ -582,8 +582,8 @@ void dma_buf_put(struct dma_buf *dmabuf);
struct sg_table *dma_buf_sgt_map_attachment(struct dma_buf_attachment *, enum dma_data_direction); -void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *, - enum dma_data_direction); +void dma_buf_sgt_unmap_attachment(struct dma_buf_attachment *, + struct sg_table *, enum dma_data_direction); void dma_buf_move_notify(struct dma_buf *dma_buf); int dma_buf_begin_cpu_access(struct dma_buf *dma_buf, enum dma_data_direction dir); @@ -592,9 +592,9 @@ int dma_buf_end_cpu_access(struct dma_buf *dma_buf, struct sg_table * dma_buf_sgt_map_attachment_unlocked(struct dma_buf_attachment *attach, enum dma_data_direction direction); -void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach, - struct sg_table *sg_table, - enum dma_data_direction direction); +void dma_buf_sgt_unmap_attachment_unlocked(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction);
int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, unsigned long); diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c index 3b8c9752208bdf..623fb97b8c5209 100644 --- a/io_uring/zcrx.c +++ b/io_uring/zcrx.c @@ -92,8 +92,8 @@ static void io_release_dmabuf(struct io_zcrx_mem *mem) return;
if (mem->sgt) - dma_buf_unmap_attachment_unlocked(mem->attach, mem->sgt, - DMA_FROM_DEVICE); + dma_buf_sgt_unmap_attachment_unlocked(mem->attach, mem->sgt, + DMA_FROM_DEVICE); if (mem->attach) dma_buf_detach(mem->dmabuf, mem->attach); if (mem->dmabuf) diff --git a/net/core/devmem.c b/net/core/devmem.c index ccdf3f70a4de9b..9a1393d144e404 100644 --- a/net/core/devmem.c +++ b/net/core/devmem.c @@ -70,8 +70,8 @@ void __net_devmem_dmabuf_binding_free(struct work_struct *wq) size, avail)) gen_pool_destroy(binding->chunk_pool);
- dma_buf_unmap_attachment_unlocked(binding->attachment, binding->sgt, - binding->direction); + dma_buf_sgt_unmap_attachment_unlocked(binding->attachment, binding->sgt, + binding->direction); dma_buf_detach(binding->dmabuf, binding->attachment); dma_buf_put(binding->dmabuf); xa_destroy(&binding->bound_rxqs); @@ -318,8 +318,8 @@ net_devmem_bind_dmabuf(struct net_device *dev, err_tx_vec: kvfree(binding->tx_vec); err_unmap: - dma_buf_unmap_attachment_unlocked(binding->attachment, binding->sgt, - direction); + dma_buf_sgt_unmap_attachment_unlocked(binding->attachment, binding->sgt, + direction); err_detach: dma_buf_detach(dmabuf, binding->attachment); err_free_binding:
dma_buf_attach() always creates an importer with the SGT mapping type, so rename it to dma_buf_sgt_attach() and update all callers.
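After this patch the full importer lifecycle for the SGT mapping type, as a rough sketch (error handling omitted), is:

	attach = dma_buf_sgt_attach(dmabuf, dev);
	sgt = dma_buf_sgt_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
	/* ... use the mapping ... */
	dma_buf_sgt_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
	dma_buf_detach(dmabuf, attach);

Note that dma_buf_sgt_attach() requests DMA_SGT_NO_P2P, so importers that want P2P or dynamic move_notify presumably still go through dma_buf_dynamic_attach() or dma_buf_mapping_attach() directly.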
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- Documentation/gpu/todo.rst | 2 +- drivers/accel/amdxdna/amdxdna_gem.c | 2 +- drivers/accel/ivpu/ivpu_gem.c | 2 +- drivers/accel/qaic/qaic_data.c | 2 +- drivers/dma-buf/dma-buf.c | 14 +++++++------- drivers/gpu/drm/armada/armada_gem.c | 2 +- drivers/gpu/drm/drm_gem_shmem_helper.c | 2 +- drivers/gpu/drm/drm_prime.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 2 +- .../gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c | 2 +- drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 2 +- drivers/gpu/drm/tegra/gem.c | 4 ++-- drivers/iio/industrialio-buffer.c | 2 +- .../media/common/videobuf2/videobuf2-dma-contig.c | 2 +- drivers/media/common/videobuf2/videobuf2-dma-sg.c | 2 +- .../media/platform/nvidia/tegra-vde/dmabuf-cache.c | 2 +- drivers/misc/fastrpc.c | 2 +- drivers/usb/gadget/function/f_fs.c | 2 +- drivers/xen/gntdev-dmabuf.c | 2 +- include/linux/dma-buf.h | 10 +++++----- io_uring/zcrx.c | 2 +- net/core/devmem.c | 2 +- 22 files changed, 33 insertions(+), 33 deletions(-)
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst index 9013ced318cb97..9a690a1bf62b5a 100644 --- a/Documentation/gpu/todo.rst +++ b/Documentation/gpu/todo.rst @@ -608,7 +608,7 @@ Remove automatic page mapping from dma-buf importing
When importing dma-bufs, the dma-buf and PRIME frameworks automatically map imported pages into the importer's DMA area. drm_gem_prime_fd_to_handle() and -drm_gem_prime_handle_to_fd() require that importers call dma_buf_attach() +drm_gem_prime_handle_to_fd() require that importers call dma_buf_sgt_attach() even if they never do actual device DMA, but only CPU access through dma_buf_vmap(). This is a problem for USB devices, which do not support DMA operations. diff --git a/drivers/accel/amdxdna/amdxdna_gem.c b/drivers/accel/amdxdna/amdxdna_gem.c index ccc78aeeb4c0fc..ddaf3f59adaf6c 100644 --- a/drivers/accel/amdxdna/amdxdna_gem.c +++ b/drivers/accel/amdxdna/amdxdna_gem.c @@ -605,7 +605,7 @@ amdxdna_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf)
get_dma_buf(dma_buf);
- attach = dma_buf_attach(dma_buf, dev->dev); + attach = dma_buf_sgt_attach(dma_buf, dev->dev); if (IS_ERR(attach)) { ret = PTR_ERR(attach); goto put_buf; diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c index 1fcb454f4cb33b..4d26244a394daf 100644 --- a/drivers/accel/ivpu/ivpu_gem.c +++ b/drivers/accel/ivpu/ivpu_gem.c @@ -219,7 +219,7 @@ struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev, struct ivpu_bo *bo; int ret;
- attach = dma_buf_attach(dma_buf, attach_dev); + attach = dma_buf_sgt_attach(dma_buf, attach_dev); if (IS_ERR(attach)) return ERR_CAST(attach);
diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c index 8e2e597bc1ff03..19126309105165 100644 --- a/drivers/accel/qaic/qaic_data.c +++ b/drivers/accel/qaic/qaic_data.c @@ -831,7 +831,7 @@ struct drm_gem_object *qaic_gem_prime_import(struct drm_device *dev, struct dma_ obj = &bo->base; get_dma_buf(dma_buf);
- attach = dma_buf_attach(dma_buf, dev->dev); + attach = dma_buf_sgt_attach(dma_buf, dev->dev); if (IS_ERR(attach)) { ret = PTR_ERR(attach); goto attach_fail; diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 35d3bbb4bb053c..ded9331a493c36 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -646,7 +646,7 @@ static struct file *dma_buf_getfile(size_t size, int flags) * 2. Userspace passes this file-descriptors to all drivers it wants this buffer * to share with: First the file descriptor is converted to a &dma_buf using * dma_buf_get(). Then the buffer is attached to the device using - * dma_buf_attach(). + * dma_buf_sgt_attach(). * * Up to this stage the exporter is still free to migrate or reallocate the * backing storage. @@ -875,7 +875,7 @@ dma_buf_pin_on_map(struct dma_buf_attachment *attach) * 2. Importers must not hold the dma-buf reservation lock when calling these * functions: * - * - dma_buf_attach() + * - dma_buf_sgt_attach() * - dma_buf_dynamic_attach() * - dma_buf_detach() * - dma_buf_export() @@ -999,15 +999,15 @@ struct dma_buf_attachment *dma_buf_mapping_attach( EXPORT_SYMBOL_NS_GPL(dma_buf_mapping_attach, "DMA_BUF");
/** - * dma_buf_attach - Wrapper for dma_buf_mapping_attach + * dma_buf_sgt_attach - Wrapper for dma_buf_mapping_attach * @dmabuf: [in] buffer to attach device to. * @dev: [in] device to be attached. * * Wrapper to call dma_buf_mapping_attach() for drivers which still use a static * mapping. */ -struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, - struct device *dev) +struct dma_buf_attachment *dma_buf_sgt_attach(struct dma_buf *dmabuf, + struct device *dev) { struct dma_buf_mapping_match sgt_match[] = { DMA_BUF_IMAPPING_SGT(dev, DMA_SGT_NO_P2P), @@ -1016,7 +1016,7 @@ struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, return dma_buf_mapping_attach(dmabuf, sgt_match, ARRAY_SIZE(sgt_match), NULL, NULL); } -EXPORT_SYMBOL_NS_GPL(dma_buf_attach, "DMA_BUF"); +EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_attach, "DMA_BUF");
/** * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list @@ -1048,7 +1048,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, "DMA_BUF"); * @dmabuf: [in] buffer to detach from. * @attach: [in] attachment to be detached; is free'd after this call. * - * Clean up a device attachment obtained by calling dma_buf_attach(). + * Clean up a device attachment obtained by calling dma_buf_sgt_attach(). * * Optionally this calls &dma_buf_ops.detach for device-specific detach. */ diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c index dee5fef5eb4f7b..a2efa57114e283 100644 --- a/drivers/gpu/drm/armada/armada_gem.c +++ b/drivers/gpu/drm/armada/armada_gem.c @@ -518,7 +518,7 @@ armada_gem_prime_import(struct drm_device *dev, struct dma_buf *buf) } }
- attach = dma_buf_attach(buf, dev->dev); + attach = dma_buf_sgt_attach(buf, dev->dev); if (IS_ERR(attach)) return ERR_CAST(attach);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index f13eb5f36e8a97..8e7c4ac9ab85f8 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -865,7 +865,7 @@ struct drm_gem_object *drm_gem_shmem_prime_import_no_map(struct drm_device *dev, return obj; }
- attach = dma_buf_attach(dma_buf, dev->dev); + attach = dma_buf_sgt_attach(dma_buf, dev->dev); if (IS_ERR(attach)) return ERR_CAST(attach);
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index 6f98d0c123dc8d..6fecc3c1b362d3 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -999,7 +999,7 @@ struct drm_gem_object *drm_gem_prime_import_dev(struct drm_device *dev, if (!dev->driver->gem_prime_import_sg_table) return ERR_PTR(-EINVAL);
- attach = dma_buf_attach(dma_buf, attach_dev); + attach = dma_buf_sgt_attach(dma_buf, attach_dev); if (IS_ERR(attach)) return ERR_CAST(attach);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index 325442948fafe0..069367edcad2a5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -306,7 +306,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev, return ERR_PTR(-E2BIG);
/* need to attach */ - attach = dma_buf_attach(dma_buf, dev->dev); + attach = dma_buf_sgt_attach(dma_buf, dev->dev); if (IS_ERR(attach)) return ERR_CAST(attach);
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c index 6b6d235fd3e9fd..3c193e6d9d11e2 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c @@ -287,7 +287,7 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915, goto out_import;
/* Now try a fake an importer */ - import_attach = dma_buf_attach(dmabuf, obj->base.dev->dev); + import_attach = dma_buf_sgt_attach(dmabuf, obj->base.dev->dev); if (IS_ERR(import_attach)) { err = PTR_ERR(import_attach); goto out_import; diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c index c549b94b2e8ad5..ca0962a995099a 100644 --- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c +++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c @@ -115,7 +115,7 @@ struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev, } }
- attach = dma_buf_attach(dma_buf, dev->dev); + attach = dma_buf_sgt_attach(dma_buf, dev->dev); if (IS_ERR(attach)) return ERR_CAST(attach);
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c index 6b93f4d42df26c..19c83556e147c1 100644 --- a/drivers/gpu/drm/tegra/gem.c +++ b/drivers/gpu/drm/tegra/gem.c @@ -80,7 +80,7 @@ static struct host1x_bo_mapping *tegra_bo_pin(struct device *dev, struct host1x_ if (obj->dma_buf) { struct dma_buf *buf = obj->dma_buf;
- map->attach = dma_buf_attach(buf, dev); + map->attach = dma_buf_sgt_attach(buf, dev); if (IS_ERR(map->attach)) { err = PTR_ERR(map->attach); goto free; @@ -472,7 +472,7 @@ static struct tegra_bo *tegra_bo_import(struct drm_device *drm, * domain, map it first to the DRM device to get an sgt. */ if (tegra->domain) { - attach = dma_buf_attach(buf, drm->dev); + attach = dma_buf_sgt_attach(buf, drm->dev); if (IS_ERR(attach)) { err = PTR_ERR(attach); goto free; diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c index 973db853525958..0d170978108cae 100644 --- a/drivers/iio/industrialio-buffer.c +++ b/drivers/iio/industrialio-buffer.c @@ -1688,7 +1688,7 @@ static int iio_buffer_attach_dmabuf(struct iio_dev_buffer_pair *ib, goto err_free_priv; }
- attach = dma_buf_attach(dmabuf, dma_dev); + attach = dma_buf_sgt_attach(dmabuf, dma_dev); if (IS_ERR(attach)) { err = PTR_ERR(attach); goto err_dmabuf_put; diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c index 6c18a0b33546e8..0e40799687d4ee 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c @@ -789,7 +789,7 @@ static void *vb2_dc_attach_dmabuf(struct vb2_buffer *vb, struct device *dev, buf->vb = vb;
/* create attachment for the dmabuf with the user device */ - dba = dma_buf_attach(dbuf, buf->dev); + dba = dma_buf_sgt_attach(dbuf, buf->dev); if (IS_ERR(dba)) { pr_err("failed to attach dmabuf\n"); kfree(buf); diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c index a5b855f055e358..a397498d669111 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c @@ -637,7 +637,7 @@ static void *vb2_dma_sg_attach_dmabuf(struct vb2_buffer *vb, struct device *dev,
buf->dev = dev; /* create attachment for the dmabuf with the user device */ - dba = dma_buf_attach(dbuf, buf->dev); + dba = dma_buf_sgt_attach(dbuf, buf->dev); if (IS_ERR(dba)) { pr_err("failed to attach dmabuf\n"); kfree(buf); diff --git a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c index 04ea8ffd4836c9..02175c39cfddf9 100644 --- a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c +++ b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c @@ -96,7 +96,7 @@ int tegra_vde_dmabuf_cache_map(struct tegra_vde *vde, goto ref; }
- attachment = dma_buf_attach(dmabuf, dev); + attachment = dma_buf_sgt_attach(dmabuf, dev); if (IS_ERR(attachment)) { dev_err(dev, "Failed to attach dmabuf\n"); err = PTR_ERR(attachment); diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index a7376d4a07c73c..391026b15a6dc3 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -772,7 +772,7 @@ static int fastrpc_map_attach(struct fastrpc_user *fl, int fd, goto get_err; }
- map->attach = dma_buf_attach(map->buf, sess->dev); + map->attach = dma_buf_sgt_attach(map->buf, sess->dev); if (IS_ERR(map->attach)) { dev_err(sess->dev, "Failed to attach dmabuf\n"); err = PTR_ERR(map->attach); diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c index a6adbd132669e3..e66715f289d497 100644 --- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c @@ -1502,7 +1502,7 @@ static int ffs_dmabuf_attach(struct file *file, int fd) if (IS_ERR(dmabuf)) return PTR_ERR(dmabuf);
- attach = dma_buf_attach(dmabuf, gadget->dev.parent); + attach = dma_buf_sgt_attach(dmabuf, gadget->dev.parent); if (IS_ERR(attach)) { err = PTR_ERR(attach); goto err_dmabuf_put; diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c index 927265ae7a5dc8..b53bf6d92d27c2 100644 --- a/drivers/xen/gntdev-dmabuf.c +++ b/drivers/xen/gntdev-dmabuf.c @@ -582,7 +582,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev, gntdev_dmabuf->priv = priv; gntdev_dmabuf->fd = fd;
- attach = dma_buf_attach(dma_buf, dev); + attach = dma_buf_sgt_attach(dma_buf, dev); if (IS_ERR(attach)) { ret = ERR_CAST(attach); goto fail_free_obj; diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 7fde67e1b4f459..456ed5767c05eb 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -70,7 +70,7 @@ struct dma_buf_ops { /** * @attach: * - * This is called from dma_buf_attach() to make sure that a given + * This is called from dma_buf_sgt_attach() to make sure that a given * &dma_buf_attachment.dev can access the provided &dma_buf. Exporters * which support buffer objects in special locations like VRAM or * device-specific carveout areas should check whether the buffer could @@ -118,7 +118,7 @@ struct dma_buf_ops { * exclusive with @cache_sgt_mapping. * * This is called automatically for non-dynamic importers from - * dma_buf_attach(). + * dma_buf_sgt_attach(). * * Note that similar to non-dynamic exporters in their @map_dma_buf * callback the driver must guarantee that the memory is available for @@ -473,7 +473,7 @@ struct dma_buf_attach_ops { * and its user device(s). The list contains one attachment struct per device * attached to the buffer. * - * An attachment is created by calling dma_buf_attach(), and released again by + * An attachment is created by calling dma_buf_sgt_attach(), and released again by * calling dma_buf_detach(). The DMA mapping itself needed to initiate a * transfer is created by dma_buf_sgt_map_attachment() and freed again by calling * dma_buf_sgt_unmap_attachment(). @@ -558,8 +558,8 @@ static inline bool dma_buf_is_dynamic(struct dma_buf *dmabuf) return !!dmabuf->ops->pin; }
-struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, - struct device *dev); +struct dma_buf_attachment *dma_buf_sgt_attach(struct dma_buf *dmabuf, + struct device *dev); struct dma_buf_attachment * dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, const struct dma_buf_attach_ops *importer_ops, diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c index 623fb97b8c5209..acc7a0b48c660e 100644 --- a/io_uring/zcrx.c +++ b/io_uring/zcrx.c @@ -130,7 +130,7 @@ static int io_import_dmabuf(struct io_zcrx_ifq *ifq, goto err; }
- mem->attach = dma_buf_attach(mem->dmabuf, ifq->dev); + mem->attach = dma_buf_sgt_attach(mem->dmabuf, ifq->dev); if (IS_ERR(mem->attach)) { ret = PTR_ERR(mem->attach); mem->attach = NULL; diff --git a/net/core/devmem.c b/net/core/devmem.c index 9a1393d144e404..d4a86faf18c2f2 100644 --- a/net/core/devmem.c +++ b/net/core/devmem.c @@ -216,7 +216,7 @@ net_devmem_bind_dmabuf(struct net_device *dev, binding->dmabuf = dmabuf; binding->direction = direction;
- binding->attachment = dma_buf_attach(binding->dmabuf, dma_dev); + binding->attachment = dma_buf_sgt_attach(binding->dmabuf, dma_dev); if (IS_ERR(binding->attachment)) { err = PTR_ERR(binding->attachment); NL_SET_ERR_MSG(extack, "Failed to bind dmabuf to device");
Rename dma_buf_dynamic_attach() to dma_buf_sgt_dynamic_attach(). This attach function always creates a SGT mapping type importer.
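For a dynamic importer the conversion is the same rename (sketch only; my_attach_ops, my_move_notify and my_obj are placeholder names):

	static const struct dma_buf_attach_ops my_attach_ops = {
		.allow_peer2peer = true,
		.move_notify = my_move_notify,
	};

	attach = dma_buf_sgt_dynamic_attach(dmabuf, dev, &my_attach_ops, my_obj);

The wrapper builds a DMA_BUF_IMAPPING_SGT() match from allow_peer2peer and forwards to dma_buf_mapping_attach().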
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/dma-buf/dma-buf.c | 14 +++++++------- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 4 ++-- drivers/gpu/drm/virtio/virtgpu_prime.c | 4 ++-- drivers/gpu/drm/xe/xe_dma_buf.c | 3 ++- drivers/infiniband/core/umem_dmabuf.c | 7 ++----- drivers/iommu/iommufd/pages.c | 5 +++-- include/linux/dma-buf.h | 6 +++--- 7 files changed, 21 insertions(+), 22 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index ded9331a493c36..cfb64d27c1a628 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -876,7 +876,7 @@ dma_buf_pin_on_map(struct dma_buf_attachment *attach) * functions: * * - dma_buf_sgt_attach() - * - dma_buf_dynamic_attach() + * - dma_buf_sgt_dynamic_attach() * - dma_buf_detach() * - dma_buf_export() * - dma_buf_fd() @@ -1019,7 +1019,7 @@ struct dma_buf_attachment *dma_buf_sgt_attach(struct dma_buf *dmabuf, EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_attach, "DMA_BUF");
/** - * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list + * dma_buf_sgt_dynamic_attach - Add the device to dma_buf's attachments list * @dmabuf: [in] buffer to attach device to. * @dev: [in] device to be attached. * @importer_ops: [in] importer operations for the attachment @@ -1028,9 +1028,9 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_attach, "DMA_BUF"); * Wrapper to call dma_buf_mapping_attach() for drivers which only support SGT. */ struct dma_buf_attachment * -dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, - const struct dma_buf_attach_ops *importer_ops, - void *importer_priv) +dma_buf_sgt_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, + const struct dma_buf_attach_ops *importer_ops, + void *importer_priv) { struct dma_buf_mapping_match sgt_match[] = { DMA_BUF_IMAPPING_SGT(dev, importer_ops->allow_peer2peer ? @@ -1041,7 +1041,7 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, return dma_buf_mapping_attach(dmabuf, sgt_match, ARRAY_SIZE(sgt_match), importer_ops, importer_priv); } -EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, "DMA_BUF"); +EXPORT_SYMBOL_NS_GPL(dma_buf_sgt_dynamic_attach, "DMA_BUF");
/** * dma_buf_detach - Remove the given attachment from dmabuf's attachments list @@ -1072,7 +1072,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_detach, "DMA_BUF"); * dma_buf_pin - Lock down the DMA-buf * @attach: [in] attachment which should be pinned * - * Only dynamic importers (who set up @attach with dma_buf_dynamic_attach()) may + * Only dynamic importers (who set up @attach with dma_buf_sgt_dynamic_attach()) may * call this, and only for limited use cases like scanout and not for temporary * pin operations. It is not permitted to allow userspace to pin arbitrary * amounts of buffers through this interface. diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c index bb9c602c061dc3..8169ebe6ababf1 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c @@ -586,8 +586,8 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev, if (IS_ERR(obj)) return obj;
- attach = dma_buf_dynamic_attach(dma_buf, dev->dev, - &amdgpu_dma_buf_attach_ops, obj); + attach = dma_buf_sgt_dynamic_attach(dma_buf, dev->dev, + &amdgpu_dma_buf_attach_ops, obj); if (IS_ERR(attach)) { drm_gem_object_put(obj); return ERR_CAST(attach); diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c index 95582cfbd7e63f..7e9a42eaa96f4e 100644 --- a/drivers/gpu/drm/virtio/virtgpu_prime.c +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c @@ -327,8 +327,8 @@ struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev, obj->funcs = &virtgpu_gem_dma_buf_funcs; drm_gem_private_object_init(dev, obj, buf->size);
- attach = dma_buf_dynamic_attach(buf, dev->dev, - &virtgpu_dma_buf_attach_ops, obj); + attach = dma_buf_sgt_dynamic_attach(buf, dev->dev, + &virtgpu_dma_buf_attach_ops, obj); if (IS_ERR(attach)) { kfree(bo); return ERR_CAST(attach); diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c index 848532aca432db..ddd865ae0522ca 100644 --- a/drivers/gpu/drm/xe/xe_dma_buf.c +++ b/drivers/gpu/drm/xe/xe_dma_buf.c @@ -358,7 +358,8 @@ struct drm_gem_object *xe_gem_prime_import(struct drm_device *dev, attach_ops = test->attach_ops; #endif
- attach = dma_buf_dynamic_attach(dma_buf, dev->dev, attach_ops, &bo->ttm.base); + attach = dma_buf_sgt_dynamic_attach(dma_buf, dev->dev, attach_ops, + &bo->ttm.base); if (IS_ERR(attach)) { obj = ERR_CAST(attach); goto out_err; diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c index 8401cd31763aa4..c8785f6c08a3bd 100644 --- a/drivers/infiniband/core/umem_dmabuf.c +++ b/drivers/infiniband/core/umem_dmabuf.c @@ -155,11 +155,8 @@ ib_umem_dmabuf_get_with_dma_device(struct ib_device *device, if (!ib_umem_num_pages(umem)) goto out_free_umem;
- umem_dmabuf->attach = dma_buf_dynamic_attach( - dmabuf, - dma_device, - ops, - umem_dmabuf); + umem_dmabuf->attach = dma_buf_sgt_dynamic_attach(dmabuf, dma_device, + ops, umem_dmabuf); if (IS_ERR(umem_dmabuf->attach)) { ret = ERR_CAST(umem_dmabuf->attach); goto out_free_umem; diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c index dbe51ecb9a20f8..a487d93dacadab 100644 --- a/drivers/iommu/iommufd/pages.c +++ b/drivers/iommu/iommufd/pages.c @@ -1486,8 +1486,9 @@ static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages, struct dma_buf_attachment *attach; int rc;
- attach = dma_buf_dynamic_attach(dmabuf, iommufd_global_device(), - &iopt_dmabuf_attach_revoke_ops, pages); + attach = dma_buf_sgt_dynamic_attach(dmabuf, iommufd_global_device(), + &iopt_dmabuf_attach_revoke_ops, + pages); if (IS_ERR(attach)) return PTR_ERR(attach);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 456ed5767c05eb..11488b1e6936cf 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -561,9 +561,9 @@ static inline bool dma_buf_is_dynamic(struct dma_buf *dmabuf) struct dma_buf_attachment *dma_buf_sgt_attach(struct dma_buf *dmabuf, struct device *dev); struct dma_buf_attachment * -dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, - const struct dma_buf_attach_ops *importer_ops, - void *importer_priv); +dma_buf_sgt_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, + const struct dma_buf_attach_ops *importer_ops, + void *importer_priv); struct dma_buf_attachment *dma_buf_mapping_attach( struct dma_buf *dmabuf, struct dma_buf_mapping_match *importer_matches, size_t match_len, const struct dma_buf_attach_ops *importer_ops,
Add a Physical Address List (PAL) mapping type. This type is required by iommufd and KVM as DMA-buf importers.
Due to sensitivity about abusing physical addresses, restrict importers by using EXPORT_SYMBOL_FOR_MODULES(). Only iommufd can implement an importer; the kernel module loader will enforce this.
Allow anything to implement an exporter, as there are use cases in DPDK/SPDK to connect GPU memory into VFIO/iommufd, and it is hard to abuse the API from the exporter side.
The PAL exporter returns the physical address list in a simple kvmalloc'd array of struct dma_buf_phys_vec entries.
For now all entries are assumed to be MMIO, and iommufd will map them into the IOMMU using the IOMMU_MMIO flag.
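An importer (in practice only iommufd) would use the type roughly like this (sketch only; revoke_ops and priv are placeholder names):

	struct dma_buf_mapping_match pal_match[] = { DMA_BUF_IMAPPING_PAL() };
	struct dma_buf_phys_list *phys;

	attach = dma_buf_mapping_attach(dmabuf, pal_match, ARRAY_SIZE(pal_match),
					&revoke_ops, priv);
	dma_resv_lock(dmabuf->resv, NULL);
	phys = dma_buf_pal_map_phys(attach);
	/* ... consume phys->phys[i].paddr / phys->phys[i].len ... */
	dma_buf_pal_unmap_phys(attach, phys);
	dma_resv_unlock(dmabuf->resv);

The reservation lock is held across map/unmap because dma_buf_pal_map_phys() asserts it.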
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/dma-buf/dma-buf-mapping.c | 63 +++++++++++++++++++++++++++++++ include/linux/dma-buf-mapping.h | 42 +++++++++++++++++++++ 2 files changed, 105 insertions(+)
diff --git a/drivers/dma-buf/dma-buf-mapping.c b/drivers/dma-buf/dma-buf-mapping.c index baa96b37e2c6bd..d9d6b9b5bf05c6 100644 --- a/drivers/dma-buf/dma-buf-mapping.c +++ b/drivers/dma-buf/dma-buf-mapping.c @@ -349,3 +349,66 @@ struct dma_buf_mapping_type dma_buf_mapping_sgt_type = { .debugfs_dump = dma_buf_sgt_debugfs_dump, }; EXPORT_SYMBOL_NS_GPL(dma_buf_mapping_sgt_type, "DMA_BUF"); + +static const struct dma_buf_mapping_pal_exp_ops * +to_pal_exp_ops(struct dma_buf_attachment *attach) +{ + return container_of(attach->map_type.exp_ops, + struct dma_buf_mapping_pal_exp_ops, ops); +} + +/** + * dma_buf_pal_map_phys - Obtain the physical address list for a PAL attachment + * @attach: The DMA-buf attachment + * + * Calls the exporter's map_phys() callback to retrieve the physical address + * list for the buffer. The caller must hold the dma-buf's reservation lock. + * + * This symbol is restricted to iommufd to prevent misuse. + * + * Returns the physical address list on success, or an ERR_PTR on failure. + * The returned list must be freed with dma_buf_pal_unmap_phys(). + */ +struct dma_buf_phys_list * +dma_buf_pal_map_phys(struct dma_buf_attachment *attach) +{ + dma_resv_assert_held(attach->dmabuf->resv); + return to_pal_exp_ops(attach)->map_phys(attach); +} +/* + * Restricted, iommufd is the only importer allowed to prevent misuse of this + * API. + */ +EXPORT_SYMBOL_FOR_MODULES(dma_buf_pal_map_phys, "iommufd"); + +/** + * dma_buf_pal_unmap_phys - Unmap a physical address list + * @attach: The DMA-buf attachment + * @phys: The physical address list returned by dma_buf_pal_map_phys() + * + * Returns the mapping back to the exporter. After this point the importer may + * not touch any of the addresses in any way. + */ +void dma_buf_pal_unmap_phys(struct dma_buf_attachment *attach, + struct dma_buf_phys_list *phys) +{ + to_pal_exp_ops(attach)->unmap_phys(attach, phys); +} +EXPORT_SYMBOL_NS_GPL(dma_buf_pal_unmap_phys, "DMA_BUF"); + +static inline void +dma_buf_pal_finish_match(struct dma_buf_match_args *args, + const struct dma_buf_mapping_match *exp, + const struct dma_buf_mapping_match *imp) +{ + args->attach->map_type = (struct dma_buf_mapping_match){ + .type = &dma_buf_mapping_pal_type, + .exp_ops = exp->exp_ops, + }; +} + +struct dma_buf_mapping_type dma_buf_mapping_pal_type = { + .name = "Physical Address List", + .finish_match = dma_buf_pal_finish_match, +}; +EXPORT_SYMBOL_NS_GPL(dma_buf_mapping_pal_type, "DMA_BUF"); diff --git a/include/linux/dma-buf-mapping.h b/include/linux/dma-buf-mapping.h index ac859b8913edcd..10831ce2e72851 100644 --- a/include/linux/dma-buf-mapping.h +++ b/include/linux/dma-buf-mapping.h @@ -269,4 +269,46 @@ DMA_BUF_EMAPPING_SGT_P2P(const struct dma_buf_mapping_sgt_exp_ops *exp_ops, .exporter_requires_p2p = DMA_SGT_NO_P2P, \ } })
+/* + * Physical Address List mapping type + * + * Use of the Physical Address List type is restricted to prevent abuse of the + * physical addresses API. Please check with the DMA BUF maintainers before + * trying to use it. + */ +struct dma_buf_phys_list { + size_t length; + struct dma_buf_phys_vec phys[] __counted_by(length); +}; + +extern struct dma_buf_mapping_type dma_buf_mapping_pal_type; + +struct dma_buf_mapping_pal_exp_ops { + struct dma_buf_mapping_exp_ops ops; + struct dma_buf_phys_list *(*map_phys)(struct dma_buf_attachment *attach); + void (*unmap_phys)(struct dma_buf_attachment *attach, + struct dma_buf_phys_list *phys); +}; + +struct dma_buf_phys_list * +dma_buf_pal_map_phys(struct dma_buf_attachment *attach); +void dma_buf_pal_unmap_phys(struct dma_buf_attachment *attach, + struct dma_buf_phys_list *phys); + +static inline struct dma_buf_mapping_match DMA_BUF_IMAPPING_PAL(void) +{ + return (struct dma_buf_mapping_match){ + .type = &dma_buf_mapping_pal_type, + }; +} + +static inline struct dma_buf_mapping_match +DMA_BUF_EMAPPING_PAL(const struct dma_buf_mapping_pal_exp_ops *exp_ops) +{ + return (struct dma_buf_mapping_match){ + .type = &dma_buf_mapping_pal_type, + .exp_ops = &exp_ops->ops, + }; +} + #endif
Add a PAL exporter to VFIO PCI DMABUF that simply returns a copy of the phys_vec.
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/vfio/pci/vfio_pci_dmabuf.c | 34 ++++++++++++++++++++++++++++-- 1 file changed, 32 insertions(+), 2 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c index c7addef5794abf..f8d5848a47ff55 100644 --- a/drivers/vfio/pci/vfio_pci_dmabuf.c +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c @@ -77,10 +77,39 @@ static const struct dma_buf_mapping_sgt_exp_ops vfio_pci_dma_buf_sgt_ops = { .unmap_dma_buf = vfio_pci_dma_buf_unmap, };
+static struct dma_buf_phys_list * +vfio_pci_dma_pal_map_phys(struct dma_buf_attachment *attach) +{ + struct vfio_pci_dma_buf *priv = attach->dmabuf->priv; + struct dma_buf_phys_list *phys; + + phys = kvmalloc(struct_size(phys, phys, priv->nr_ranges), GFP_KERNEL); + if (!phys) + return ERR_PTR(-ENOMEM); + + phys->length = priv->nr_ranges; + memcpy(phys->phys, priv->phys_vec, + sizeof(phys->phys[0]) * priv->nr_ranges); + + return phys; +} + +static void vfio_pci_dma_pal_unmap_phys(struct dma_buf_attachment *attach, + struct dma_buf_phys_list *phys) +{ + /* FIXME when rebased on Leon's series this manages the refcount */ + kvfree(phys); +} + +static const struct dma_buf_mapping_pal_exp_ops vfio_pci_dma_buf_pal_ops = { + .map_phys = vfio_pci_dma_pal_map_phys, + .unmap_phys = vfio_pci_dma_pal_unmap_phys, +}; + static int vfio_pci_dma_buf_match_mapping(struct dma_buf_match_args *args) { struct vfio_pci_dma_buf *priv = args->dmabuf->priv; - struct dma_buf_mapping_match sgt_match[1]; + struct dma_buf_mapping_match sgt_match[2];
dma_resv_assert_held(priv->dmabuf->resv);
@@ -91,7 +120,8 @@ static int vfio_pci_dma_buf_match_mapping(struct dma_buf_match_args *args) if (!priv->vdev) return -ENODEV;
- sgt_match[0] = DMA_BUF_EMAPPING_SGT_P2P(&vfio_pci_dma_buf_sgt_ops, + sgt_match[0] = DMA_BUF_EMAPPING_PAL(&vfio_pci_dma_buf_pal_ops); + sgt_match[1] = DMA_BUF_EMAPPING_SGT_P2P(&vfio_pci_dma_buf_sgt_ops, priv->vdev->pdev);
return dma_buf_match_mapping(args, sgt_match, ARRAY_SIZE(sgt_match));
Switch iommufd over to use the PAL mapping type. iommufd is the only importer permitted to use this, and this is enforced by module name restrictions.
If the exporter does not support PAL then the import will fail, same as today.
If the exporter does offer PAL then the PAL functions are used to get a phys_addr_t array for use in iommufd. The exporter must offer a single-entry list for now.
Remove everything related to vfio_pci_dma_buf_iommufd_map(). Call the new unmap function when the attachment is revoked or released.
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/iommu/iommufd/io_pagetable.h | 1 + drivers/iommu/iommufd/iommufd_private.h | 8 ---- drivers/iommu/iommufd/pages.c | 58 +++++++++++----------- drivers/iommu/iommufd/selftest.c | 64 ++++++++++++++----------- drivers/vfio/pci/vfio_pci_dmabuf.c | 34 ------------- 5 files changed, 64 insertions(+), 101 deletions(-)
diff --git a/drivers/iommu/iommufd/io_pagetable.h b/drivers/iommu/iommufd/io_pagetable.h index 14cd052fd3204e..fcd1a2c75dfa3d 100644 --- a/drivers/iommu/iommufd/io_pagetable.h +++ b/drivers/iommu/iommufd/io_pagetable.h @@ -202,6 +202,7 @@ struct iopt_pages_dmabuf_track {
struct iopt_pages_dmabuf { struct dma_buf_attachment *attach; + struct dma_buf_phys_list *exp_phys; struct dma_buf_phys_vec phys; /* Always PAGE_SIZE aligned */ unsigned long start; diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h index eb6d1a70f6732c..cfb8637cb143ac 100644 --- a/drivers/iommu/iommufd/iommufd_private.h +++ b/drivers/iommu/iommufd/iommufd_private.h @@ -717,8 +717,6 @@ bool iommufd_should_fail(void); int __init iommufd_test_init(void); void iommufd_test_exit(void); bool iommufd_selftest_is_mock_dev(struct device *dev); -int iommufd_test_dma_buf_iommufd_map(struct dma_buf_attachment *attachment, - struct dma_buf_phys_vec *phys); #else static inline void iommufd_test_syz_conv_iova_id(struct iommufd_ucmd *ucmd, unsigned int ioas_id, @@ -740,11 +738,5 @@ static inline bool iommufd_selftest_is_mock_dev(struct device *dev) { return false; } -static inline int -iommufd_test_dma_buf_iommufd_map(struct dma_buf_attachment *attachment, - struct dma_buf_phys_vec *phys) -{ - return -EOPNOTSUPP; -} #endif #endif diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c index a487d93dacadab..9a23c3e30959a9 100644 --- a/drivers/iommu/iommufd/pages.c +++ b/drivers/iommu/iommufd/pages.c @@ -46,6 +46,7 @@ * ULONG_MAX so last_index + 1 cannot overflow. */ #include <linux/dma-buf.h> +#include <linux/dma-buf-mapping.h> #include <linux/dma-resv.h> #include <linux/file.h> #include <linux/highmem.h> @@ -1447,6 +1448,8 @@ static void iopt_revoke_notify(struct dma_buf_attachment *attach) iopt_area_last_index(area)); } pages->dmabuf.phys.len = 0; + dma_buf_pal_unmap_phys(pages->dmabuf.attach, pages->dmabuf.exp_phys); + pages->dmabuf.exp_phys = NULL; }
static struct dma_buf_attach_ops iopt_dmabuf_attach_revoke_ops = { @@ -1454,41 +1457,16 @@ static struct dma_buf_attach_ops iopt_dmabuf_attach_revoke_ops = { .move_notify = iopt_revoke_notify, };
-/* - * iommufd and vfio have a circular dependency. Future work for a phys - * based private interconnect will remove this. - */ -static int -sym_vfio_pci_dma_buf_iommufd_map(struct dma_buf_attachment *attachment, - struct dma_buf_phys_vec *phys) -{ - typeof(&vfio_pci_dma_buf_iommufd_map) fn; - int rc; - - rc = iommufd_test_dma_buf_iommufd_map(attachment, phys); - if (rc != -EOPNOTSUPP) - return rc; - - if (!IS_ENABLED(CONFIG_VFIO_PCI_DMABUF)) - return -EOPNOTSUPP; - - fn = symbol_get(vfio_pci_dma_buf_iommufd_map); - if (!fn) - return -EOPNOTSUPP; - rc = fn(attachment, phys); - symbol_put(vfio_pci_dma_buf_iommufd_map); - return rc; -} - static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages, struct dma_buf *dmabuf) { + struct dma_buf_mapping_match pal_match[] = { DMA_BUF_IMAPPING_PAL() }; struct dma_buf_attachment *attach; int rc;
- attach = dma_buf_sgt_dynamic_attach(dmabuf, iommufd_global_device(), - &iopt_dmabuf_attach_revoke_ops, - pages); + attach = dma_buf_mapping_attach(dmabuf, pal_match, + ARRAY_SIZE(pal_match), + &iopt_dmabuf_attach_revoke_ops, pages); if (IS_ERR(attach)) return PTR_ERR(attach);
@@ -1502,9 +1480,19 @@ static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages, mutex_unlock(&pages->mutex); }
- rc = sym_vfio_pci_dma_buf_iommufd_map(attach, &pages->dmabuf.phys); - if (rc) + + pages->dmabuf.exp_phys = dma_buf_pal_map_phys(attach); + if (IS_ERR(pages->dmabuf.exp_phys)) { + rc = PTR_ERR(pages->dmabuf.exp_phys); goto err_detach; + } + + /* For now only works with single range exporters */ + if (pages->dmabuf.exp_phys->length != 1) { + rc = -EINVAL; + goto err_unmap; + } + pages->dmabuf.phys = pages->dmabuf.exp_phys->phys[0];
dma_resv_unlock(dmabuf->resv);
@@ -1512,6 +1500,8 @@ static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages, pages->dmabuf.attach = attach; return 0;
+err_unmap: + dma_buf_pal_unmap_phys(attach, pages->dmabuf.exp_phys); err_detach: dma_resv_unlock(dmabuf->resv); dma_buf_detach(dmabuf, attach); @@ -1657,6 +1647,12 @@ void iopt_release_pages(struct kref *kref) if (iopt_is_dmabuf(pages) && pages->dmabuf.attach) { struct dma_buf *dmabuf = pages->dmabuf.attach->dmabuf;
+ dma_resv_lock(dmabuf->resv, NULL); + if (pages->dmabuf.exp_phys) + dma_buf_pal_unmap_phys(pages->dmabuf.attach, + pages->dmabuf.exp_phys); + dma_resv_unlock(dmabuf->resv); + dma_buf_detach(dmabuf, pages->dmabuf.attach); dma_buf_put(dmabuf); WARN_ON(!list_empty(&pages->dmabuf.tracker)); diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c index 7aa6a58a5705f7..06820a50d5d24c 100644 --- a/drivers/iommu/iommufd/selftest.c +++ b/drivers/iommu/iommufd/selftest.c @@ -1962,19 +1962,6 @@ struct iommufd_test_dma_buf { bool revoked; };
-static struct sg_table * -iommufd_test_dma_buf_map(struct dma_buf_attachment *attachment, - enum dma_data_direction dir) -{ - return ERR_PTR(-EOPNOTSUPP); -} - -static void iommufd_test_dma_buf_unmap(struct dma_buf_attachment *attachment, - struct sg_table *sgt, - enum dma_data_direction dir) -{ -} - static void iommufd_test_dma_buf_release(struct dma_buf *dmabuf) { struct iommufd_test_dma_buf *priv = dmabuf->priv; @@ -1983,30 +1970,51 @@ static void iommufd_test_dma_buf_release(struct dma_buf *dmabuf) kfree(priv); }
-static const struct dma_buf_ops iommufd_test_dmabuf_ops = { - .release = iommufd_test_dma_buf_release, - DMA_BUF_SIMPLE_SGT_EXP_MATCH(iommufd_test_dma_buf_map, - iommufd_test_dma_buf_unmap), -}; - -int iommufd_test_dma_buf_iommufd_map(struct dma_buf_attachment *attachment, - struct dma_buf_phys_vec *phys) +static struct dma_buf_phys_list * +iommufd_dma_pal_map_phys(struct dma_buf_attachment *attachment) { struct iommufd_test_dma_buf *priv = attachment->dmabuf->priv; + struct dma_buf_phys_list *phys;
dma_resv_assert_held(attachment->dmabuf->resv);
- if (attachment->dmabuf->ops != &iommufd_test_dmabuf_ops) - return -EOPNOTSUPP; - if (priv->revoked) - return -ENODEV; + return ERR_PTR(-ENODEV);
- phys->paddr = virt_to_phys(priv->memory); - phys->len = priv->length; - return 0; + phys = kvmalloc(struct_size(phys, phys, 1), GFP_KERNEL); + if (!phys) + return ERR_PTR(-ENOMEM); + + phys->length = 1; + phys->phys[0].paddr = virt_to_phys(priv->memory); + phys->phys[0].len = priv->length; + return phys; }
+static void iommufd_dma_pal_unmap_phys(struct dma_buf_attachment *attach, + struct dma_buf_phys_list *phys) +{ +} + +static const struct dma_buf_mapping_pal_exp_ops iommufd_test_dma_buf_pal_ops = { + .map_phys = iommufd_dma_pal_map_phys, + .unmap_phys = iommufd_dma_pal_unmap_phys, +}; + +static int iommufd_dma_buf_match_mapping(struct dma_buf_match_args *args) +{ + struct dma_buf_mapping_match pal_match[] = { + DMA_BUF_EMAPPING_PAL(&iommufd_test_dma_buf_pal_ops), + }; + + return dma_buf_match_mapping(args, pal_match, ARRAY_SIZE(pal_match)); +} + +static const struct dma_buf_ops iommufd_test_dmabuf_ops = { + .release = iommufd_test_dma_buf_release, + .match_mapping = iommufd_dma_buf_match_mapping, +}; + static int iommufd_test_dmabuf_get(struct iommufd_ucmd *ucmd, unsigned int open_flags, size_t len) diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c index f8d5848a47ff55..247c709541a937 100644 --- a/drivers/vfio/pci/vfio_pci_dmabuf.c +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c @@ -133,40 +133,6 @@ static const struct dma_buf_ops vfio_pci_dmabuf_ops = { .match_mapping = vfio_pci_dma_buf_match_mapping, };
-/* - * This is a temporary "private interconnect" between VFIO DMABUF and iommufd. - * It allows the two co-operating drivers to exchange the physical address of - * the BAR. This is to be replaced with a formal DMABUF system for negotiated - * interconnect types. - * - * If this function succeeds the following are true: - * - There is one physical range and it is pointing to MMIO - * - When move_notify is called it means revoke, not move, vfio_dma_buf_map - * will fail if it is currently revoked - */ -int vfio_pci_dma_buf_iommufd_map(struct dma_buf_attachment *attachment, - struct dma_buf_phys_vec *phys) -{ - struct vfio_pci_dma_buf *priv; - - dma_resv_assert_held(attachment->dmabuf->resv); - - if (attachment->dmabuf->ops != &vfio_pci_dmabuf_ops) - return -EOPNOTSUPP; - - priv = attachment->dmabuf->priv; - if (priv->revoked) - return -ENODEV; - - /* More than one range to iommufd will require proper DMABUF support */ - if (priv->nr_ranges != 1) - return -EOPNOTSUPP; - - *phys = priv->phys_vec[0]; - return 0; -} -EXPORT_SYMBOL_FOR_MODULES(vfio_pci_dma_buf_iommufd_map, "iommufd"); - int vfio_pci_core_fill_phys_vec(struct dma_buf_phys_vec *phys_vec, struct vfio_region_dma_range *dma_ranges, size_t nr_ranges, phys_addr_t start,
Support the full physical address list protocol. This requires seeking to the correct start entry in the physical list and maintaining the current offset as the population progresses.
Remove the phys field used by the replaced single-entry version.
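As a concrete example of the seek (assuming 4K pages): for a physical list with entries of 16K, 8K and 8K, a fill starting at buffer offset 20K first advances past the 16K entry (cur_base = 16K, cur_index = 1), then fills one page from the second entry at offset_in_entry = 4K before stepping on to the third entry.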
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/iommu/iommufd/io_pagetable.h | 3 +- drivers/iommu/iommufd/pages.c | 48 ++++++++++++++++++++-------- 2 files changed, 35 insertions(+), 16 deletions(-)
diff --git a/drivers/iommu/iommufd/io_pagetable.h b/drivers/iommu/iommufd/io_pagetable.h index fcd1a2c75dfa3d..3c95b631d86354 100644 --- a/drivers/iommu/iommufd/io_pagetable.h +++ b/drivers/iommu/iommufd/io_pagetable.h @@ -203,7 +203,6 @@ struct iopt_pages_dmabuf_track { struct iopt_pages_dmabuf { struct dma_buf_attachment *attach; struct dma_buf_phys_list *exp_phys; - struct dma_buf_phys_vec phys; /* Always PAGE_SIZE aligned */ unsigned long start; struct list_head tracker; @@ -260,7 +259,7 @@ static inline bool iopt_dmabuf_revoked(struct iopt_pages *pages) { lockdep_assert_held(&pages->mutex); if (iopt_is_dmabuf(pages)) - return pages->dmabuf.phys.len == 0; + return pages->dmabuf.exp_phys == NULL; return false; }
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c index 9a23c3e30959a9..85cb1f9ab2ae91 100644 --- a/drivers/iommu/iommufd/pages.c +++ b/drivers/iommu/iommufd/pages.c @@ -1078,7 +1078,9 @@ static int pfn_reader_user_update_pinned(struct pfn_reader_user *user, }
struct pfn_reader_dmabuf { - struct dma_buf_phys_vec phys; + struct dma_buf_phys_list *exp_phys; + unsigned int cur_index; + unsigned long cur_base; unsigned long start_offset; };
@@ -1089,8 +1091,10 @@ static int pfn_reader_dmabuf_init(struct pfn_reader_dmabuf *dmabuf, if (WARN_ON(iopt_dmabuf_revoked(pages))) return -EINVAL;
- dmabuf->phys = pages->dmabuf.phys; + dmabuf->exp_phys = pages->dmabuf.exp_phys; dmabuf->start_offset = pages->dmabuf.start; + dmabuf->cur_index = 0; + dmabuf->cur_base = 0; return 0; }
@@ -1100,6 +1104,15 @@ static int pfn_reader_fill_dmabuf(struct pfn_reader_dmabuf *dmabuf, unsigned long last_index) { unsigned long start = dmabuf->start_offset + start_index * PAGE_SIZE; + unsigned long npages = last_index - start_index + 1; + struct dma_buf_phys_vec *vec = + &dmabuf->exp_phys->phys[dmabuf->cur_index]; + + while (dmabuf->cur_base + vec->len <= start) { + dmabuf->cur_base += vec->len; + dmabuf->cur_index++; + vec++; + }
/* * start/last_index and start are all PAGE_SIZE aligned, the batch is @@ -1107,8 +1120,25 @@ static int pfn_reader_fill_dmabuf(struct pfn_reader_dmabuf *dmabuf, * If the dmabuf has been sliced on a sub page offset then the common * batch to domain code will adjust it before mapping to the domain. */ - batch_add_pfn_num(batch, PHYS_PFN(dmabuf->phys.paddr + start), - last_index - start_index + 1, BATCH_MMIO); + while (npages) { + unsigned long offset_in_entry = start - dmabuf->cur_base; + unsigned long avail_pages = (vec->len - offset_in_entry) >> + PAGE_SHIFT; + unsigned long nr = min(npages, avail_pages); + + if (!batch_add_pfn_num( + batch, (vec->paddr + offset_in_entry) >> PAGE_SHIFT, + nr, BATCH_MMIO)) + break; + + start += nr * PAGE_SIZE; + npages -= nr; + if (nr == avail_pages) { + dmabuf->cur_base += vec->len; + dmabuf->cur_index++; + vec++; + } + } return 0; }
@@ -1447,7 +1477,6 @@ static void iopt_revoke_notify(struct dma_buf_attachment *attach) iopt_area_index(area), iopt_area_last_index(area)); } - pages->dmabuf.phys.len = 0; dma_buf_pal_unmap_phys(pages->dmabuf.attach, pages->dmabuf.exp_phys); pages->dmabuf.exp_phys = NULL; } @@ -1487,21 +1516,12 @@ static int iopt_map_dmabuf(struct iommufd_ctx *ictx, struct iopt_pages *pages, goto err_detach; }
- /* For now only works with single range exporters */ - if (pages->dmabuf.exp_phys->length != 1) { - rc = -EINVAL; - goto err_unmap; - } - pages->dmabuf.phys = pages->dmabuf.exp_phys->phys[0]; - dma_resv_unlock(dmabuf->resv);
/* On success iopt_release_pages() will detach and put the dmabuf. */ pages->dmabuf.attach = attach; return 0;
-err_unmap: - dma_buf_pal_unmap_phys(attach, pages->dmabuf.exp_phys); err_detach: dma_resv_unlock(dmabuf->resv); dma_buf_detach(dmabuf, attach);
Improve the mock DMA-buf to have multiple physical ranges and add a method to compare the values loaded into the iommu_domain with the allocated page array.
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/iommu/iommufd/iommufd_test.h | 7 ++ drivers/iommu/iommufd/selftest.c | 107 +++++++++++++++--- tools/testing/selftests/iommu/iommufd.c | 43 +++++++ tools/testing/selftests/iommu/iommufd_utils.h | 17 +++ 4 files changed, 160 insertions(+), 14 deletions(-)
diff --git a/drivers/iommu/iommufd/iommufd_test.h b/drivers/iommu/iommufd/iommufd_test.h index 73e73e1ec15837..dae7d808b7bade 100644 --- a/drivers/iommu/iommufd/iommufd_test.h +++ b/drivers/iommu/iommufd/iommufd_test.h @@ -31,6 +31,7 @@ enum { IOMMU_TEST_OP_PASID_CHECK_HWPT, IOMMU_TEST_OP_DMABUF_GET, IOMMU_TEST_OP_DMABUF_REVOKE, + IOMMU_TEST_OP_MD_CHECK_DMABUF, };
enum { @@ -194,6 +195,12 @@ struct iommu_test_cmd { __s32 dmabuf_fd; __u32 revoked; } dmabuf_revoke; + struct { + __s32 dmabuf_fd; + __aligned_u64 iova; + __aligned_u64 length; + __aligned_u64 offset; + } check_dmabuf; }; __u32 last; }; diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c index 06820a50d5d24c..e924281840a07e 100644 --- a/drivers/iommu/iommufd/selftest.c +++ b/drivers/iommu/iommufd/selftest.c @@ -1957,16 +1957,19 @@ void iommufd_selftest_destroy(struct iommufd_object *obj) }
struct iommufd_test_dma_buf { - void *memory; size_t length; + unsigned int npages; bool revoked; + struct page *pages[] __counted_by(npages); };
static void iommufd_test_dma_buf_release(struct dma_buf *dmabuf) { struct iommufd_test_dma_buf *priv = dmabuf->priv; + unsigned int i;
- kfree(priv->memory); + for (i = 0; i < priv->npages; i++) + __free_page(priv->pages[i]); kfree(priv); }
@@ -1981,19 +1984,22 @@ iommufd_dma_pal_map_phys(struct dma_buf_attachment *attachment) if (priv->revoked) return ERR_PTR(-ENODEV);
- phys = kvmalloc(struct_size(phys, phys, 1), GFP_KERNEL); + phys = kvmalloc(struct_size(phys, phys, priv->npages), GFP_KERNEL); if (!phys) return ERR_PTR(-ENOMEM);
- phys->length = 1; - phys->phys[0].paddr = virt_to_phys(priv->memory); - phys->phys[0].len = priv->length; + phys->length = priv->npages; + for (unsigned int i = 0; i < priv->npages; i++) { + phys->phys[i].paddr = page_to_phys(priv->pages[i]); + phys->phys[i].len = PAGE_SIZE; + } return phys; }
 static void iommufd_dma_pal_unmap_phys(struct dma_buf_attachment *attach,
 					struct dma_buf_phys_list *phys)
 {
+	kvfree(phys);
 }
static const struct dma_buf_mapping_pal_exp_ops iommufd_test_dma_buf_pal_ops = { @@ -2022,21 +2028,27 @@ static int iommufd_test_dmabuf_get(struct iommufd_ucmd *ucmd, DEFINE_DMA_BUF_EXPORT_INFO(exp_info); struct iommufd_test_dma_buf *priv; struct dma_buf *dmabuf; + size_t i; int rc;
- len = ALIGN(len, PAGE_SIZE); - if (len == 0 || len > PAGE_SIZE * 512) + unsigned int npages; + + if (len == 0 || len % PAGE_SIZE || len > PAGE_SIZE * 512) return -EINVAL;
- priv = kzalloc(sizeof(*priv), GFP_KERNEL); + npages = len >> PAGE_SHIFT; + priv = kzalloc(struct_size(priv, pages, npages), GFP_KERNEL); if (!priv) return -ENOMEM;
priv->length = len; - priv->memory = kzalloc(len, GFP_KERNEL); - if (!priv->memory) { - rc = -ENOMEM; - goto err_free; + priv->npages = npages; + for (i = 0; i < npages; i++) { + priv->pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO); + if (!priv->pages[i]) { + rc = -ENOMEM; + goto err_free; + } }
exp_info.ops = &iommufd_test_dmabuf_ops; @@ -2053,7 +2065,11 @@ static int iommufd_test_dmabuf_get(struct iommufd_ucmd *ucmd, return dma_buf_fd(dmabuf, open_flags);
err_free: - kfree(priv->memory); + for (unsigned int i = 0; i < npages; i++) { + if (!priv->pages[i]) + break; + __free_page(priv->pages[i]); + } kfree(priv); return rc; } @@ -2085,6 +2101,64 @@ static int iommufd_test_dmabuf_revoke(struct iommufd_ucmd *ucmd, int fd, return rc; }
+static int iommufd_test_md_check_dmabuf(struct iommufd_ucmd *ucmd, + unsigned int mockpt_id, int dmabuf_fd, + unsigned long iova, size_t length, + unsigned long offset) +{ + struct iommufd_hw_pagetable *hwpt; + struct iommufd_test_dma_buf *priv; + struct mock_iommu_domain *mock; + struct dma_buf *dmabuf; + unsigned int page_size; + unsigned long end; + size_t i; + int rc; + + hwpt = get_md_pagetable(ucmd, mockpt_id, &mock); + if (IS_ERR(hwpt)) + return PTR_ERR(hwpt); + + dmabuf = dma_buf_get(dmabuf_fd); + if (IS_ERR(dmabuf)) { + rc = PTR_ERR(dmabuf); + goto out_put_hwpt; + } + + if (dmabuf->ops != &iommufd_test_dmabuf_ops) { + rc = -EINVAL; + goto out_put_dmabuf; + } + + priv = dmabuf->priv; + page_size = 1 << __ffs(mock->domain.pgsize_bitmap); + if (iova % page_size || length % page_size || offset % page_size || + check_add_overflow(offset, length, &end) || end > priv->length) { + rc = -EINVAL; + goto out_put_dmabuf; + } + + for (i = 0; i < length; i += page_size) { + phys_addr_t expected = + page_to_phys(priv->pages[(offset + i) / PAGE_SIZE]) + + ((offset + i) % PAGE_SIZE); + phys_addr_t io_phys = + mock->domain.ops->iova_to_phys(&mock->domain, iova + i); + + if (io_phys != expected) { + rc = -EINVAL; + goto out_put_dmabuf; + } + } + rc = 0; + +out_put_dmabuf: + dma_buf_put(dmabuf); +out_put_hwpt: + iommufd_put_object(ucmd->ictx, &hwpt->obj); + return rc; +} + int iommufd_test(struct iommufd_ucmd *ucmd) { struct iommu_test_cmd *cmd = ucmd->cmd; @@ -2170,6 +2244,11 @@ int iommufd_test(struct iommufd_ucmd *ucmd) return iommufd_test_dmabuf_revoke(ucmd, cmd->dmabuf_revoke.dmabuf_fd, cmd->dmabuf_revoke.revoked); + case IOMMU_TEST_OP_MD_CHECK_DMABUF: + return iommufd_test_md_check_dmabuf( + ucmd, cmd->id, cmd->check_dmabuf.dmabuf_fd, + cmd->check_dmabuf.iova, cmd->check_dmabuf.length, + cmd->check_dmabuf.offset); default: return -EOPNOTSUPP; } diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c index dadad277f4eb2e..2673f9f153392f 100644 --- a/tools/testing/selftests/iommu/iommufd.c +++ b/tools/testing/selftests/iommu/iommufd.c @@ -1580,10 +1580,53 @@ TEST_F(iommufd_ioas, dmabuf_simple) test_err_ioctl_ioas_map_file(EINVAL, dfd, buf_size, buf_size, &iova); test_err_ioctl_ioas_map_file(EINVAL, dfd, 0, buf_size + 1, &iova); test_ioctl_ioas_map_file(dfd, 0, buf_size, &iova); + if (variant->mock_domains) + test_cmd_check_dmabuf(self->hwpt_id, dfd, iova, buf_size, 0);
close(dfd); }
+TEST_F(iommufd_ioas, dmabuf_multi_page) +{ + __u64 iova; + int dfd; + + /* Single page */ + test_cmd_get_dmabuf(PAGE_SIZE, &dfd); + test_ioctl_ioas_map_file(dfd, 0, PAGE_SIZE, &iova); + if (variant->mock_domains) + test_cmd_check_dmabuf(self->hwpt_id, dfd, iova, PAGE_SIZE, 0); + close(dfd); + + /* Many pages - exercises batch filling across multiple phys entries */ + test_cmd_get_dmabuf(PAGE_SIZE * 64, &dfd); + test_ioctl_ioas_map_file(dfd, 0, PAGE_SIZE * 64, &iova); + if (variant->mock_domains) + test_cmd_check_dmabuf(self->hwpt_id, dfd, iova, PAGE_SIZE * 64, + 0); + close(dfd); + + /* Sub-range from the middle - exercises seeking into the phys array */ + test_cmd_get_dmabuf(PAGE_SIZE * 16, &dfd); + test_ioctl_ioas_map_file(dfd, PAGE_SIZE * 4, PAGE_SIZE * 8, &iova); + if (variant->mock_domains) + test_cmd_check_dmabuf(self->hwpt_id, dfd, iova, PAGE_SIZE * 8, + PAGE_SIZE * 4); + close(dfd); + + /* Multiple sub-ranges from the same dmabuf */ + test_cmd_get_dmabuf(PAGE_SIZE * 16, &dfd); + test_ioctl_ioas_map_file(dfd, 0, PAGE_SIZE * 4, &iova); + if (variant->mock_domains) + test_cmd_check_dmabuf(self->hwpt_id, dfd, iova, PAGE_SIZE * 4, + 0); + test_ioctl_ioas_map_file(dfd, PAGE_SIZE * 8, PAGE_SIZE * 4, &iova); + if (variant->mock_domains) + test_cmd_check_dmabuf(self->hwpt_id, dfd, iova, PAGE_SIZE * 4, + PAGE_SIZE * 8); + close(dfd); +} + TEST_F(iommufd_ioas, dmabuf_revoke) { size_t buf_size = PAGE_SIZE*4; diff --git a/tools/testing/selftests/iommu/iommufd_utils.h b/tools/testing/selftests/iommu/iommufd_utils.h index 5502751d500c89..35fd91d354f998 100644 --- a/tools/testing/selftests/iommu/iommufd_utils.h +++ b/tools/testing/selftests/iommu/iommufd_utils.h @@ -593,6 +593,23 @@ static int _test_cmd_revoke_dmabuf(int fd, int dmabuf_fd, bool revoked) #define test_cmd_revoke_dmabuf(dmabuf_fd, revoke) \ ASSERT_EQ(0, _test_cmd_revoke_dmabuf(self->fd, dmabuf_fd, revoke))
+#define test_cmd_check_dmabuf(_hwpt_id, _dmabuf_fd, _iova, _length, _offset) \ + ({ \ + struct iommu_test_cmd check_cmd = { \ + .size = sizeof(check_cmd), \ + .op = IOMMU_TEST_OP_MD_CHECK_DMABUF, \ + .id = _hwpt_id, \ + .check_dmabuf = { .dmabuf_fd = _dmabuf_fd, \ + .iova = _iova, \ + .length = _length, \ + .offset = _offset }, \ + }; \ + ASSERT_EQ(0, ioctl(self->fd, \ + _IOMMU_TEST_CMD( \ + IOMMU_TEST_OP_MD_CHECK_DMABUF), \ + &check_cmd)); \ + }) + static int _test_ioctl_destroy(int fd, unsigned int id) { struct iommu_destroy cmd = {
Some basic coverage of common flows:
 - Check dma_buf_match_mapping()'s rules. These choices effectively become driver-facing API and it would be a pain to change them later
 - Test the dma_buf_sgt attachment flow to see that the new wrappers work
Signed-off-by: Jason Gunthorpe jgg@nvidia.com --- drivers/dma-buf/Makefile | 1 + drivers/dma-buf/st-dma-mapping.c | 373 +++++++++++++++++++++++++++++++ 2 files changed, 374 insertions(+) create mode 100644 drivers/dma-buf/st-dma-mapping.c
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile index 12c86da25866c1..0ba311be8d3547 100644 --- a/drivers/dma-buf/Makefile +++ b/drivers/dma-buf/Makefile @@ -12,6 +12,7 @@ dmabuf_kunit-y := \ st-dma-fence.o \ st-dma-fence-chain.o \ st-dma-fence-unwrap.o \ + st-dma-mapping.o \ st-dma-resv.o
obj-$(CONFIG_DMABUF_KUNIT_TEST) += dmabuf_kunit.o
diff --git a/drivers/dma-buf/st-dma-mapping.c b/drivers/dma-buf/st-dma-mapping.c
new file mode 100644
index 00000000000000..1bccfe43a576d0
--- /dev/null
+++ b/drivers/dma-buf/st-dma-mapping.c
@@ -0,0 +1,373 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit tests for dma_buf_match_mapping()
+ */
+
+#include <kunit/device.h>
+#include <kunit/test.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-buf-mapping.h>
+#include <linux/errno.h>
+
+/* Mock tracking state -- reset before each test */
+static bool mock_match_called;
+static const struct dma_buf_mapping_match *mock_match_exp_arg;
+static const struct dma_buf_mapping_match *mock_match_imp_arg;
+static int mock_match_ret;
+
+static bool mock_finish_called;
+static const struct dma_buf_match_args *mock_finish_args_arg;
+static const struct dma_buf_mapping_match *mock_finish_exp_arg;
+static const struct dma_buf_mapping_match *mock_finish_imp_arg;
+
+static int reset_mock_state(struct kunit *test)
+{
+        mock_match_called = false;
+        mock_match_exp_arg = NULL;
+        mock_match_imp_arg = NULL;
+        mock_match_ret = 0;
+        mock_finish_called = false;
+        mock_finish_args_arg = NULL;
+        mock_finish_exp_arg = NULL;
+        mock_finish_imp_arg = NULL;
+        return 0;
+}
+
+static int mock_match(struct dma_buf *dmabuf,
+                      const struct dma_buf_mapping_match *exp,
+                      const struct dma_buf_mapping_match *imp)
+{
+        mock_match_called = true;
+        mock_match_exp_arg = exp;
+        mock_match_imp_arg = imp;
+        return mock_match_ret;
+}
+
+static void mock_finish_match(struct dma_buf_match_args *args,
+                              const struct dma_buf_mapping_match *exp,
+                              const struct dma_buf_mapping_match *imp)
+{
+        mock_finish_called = true;
+        mock_finish_args_arg = args;
+        mock_finish_exp_arg = exp;
+        mock_finish_imp_arg = imp;
+
+        /* Test doesn't always set attach */
+        if (args->attach)
+                args->attach->map_type = (struct dma_buf_mapping_match){
+                        .type = exp->type,
+                        .exp_ops = exp->exp_ops,
+                };
+}
+
+/* Type with both match and finish_match callbacks */
+static struct dma_buf_mapping_type mock_type_a = {
+        .name = "mock_type_a",
+        .match = mock_match,
+        .finish_match = mock_finish_match,
+};
+
+/* Second type -- distinct pointer identity from A */
+static struct dma_buf_mapping_type mock_type_b = {
+        .name = "mock_type_b",
+        .match = mock_match,
+        .finish_match = mock_finish_match,
+};
+
+static void test_match_fail(struct kunit *test)
+{
+        struct dma_buf_mapping_match matches[] = { { .type = &mock_type_a } };
+        struct dma_buf_mapping_match exp[] = { { .type = &mock_type_b } };
+        struct dma_buf_match_args args = {
+                .imp_matches = matches,
+                .imp_len = ARRAY_SIZE(matches),
+        };
+
+        /* Zero-length exporter array returns -EINVAL */
+        KUNIT_EXPECT_EQ(test, dma_buf_match_mapping(&args, NULL, 0), -EINVAL);
+        KUNIT_EXPECT_FALSE(test, mock_match_called);
+        KUNIT_EXPECT_FALSE(test, mock_finish_called);
+
+        /* Zero-length importer array returns -EINVAL */
+        args = (struct dma_buf_match_args){};
+        KUNIT_EXPECT_EQ(test,
+                        dma_buf_match_mapping(&args, matches,
+                                              ARRAY_SIZE(matches)),
+                        -EINVAL);
+        KUNIT_EXPECT_FALSE(test, mock_match_called);
+        KUNIT_EXPECT_FALSE(test, mock_finish_called);
+
+        /* Different types produce no match */
+        KUNIT_EXPECT_EQ(test,
+                        dma_buf_match_mapping(&args, exp, ARRAY_SIZE(exp)),
+                        -EINVAL);
+        KUNIT_EXPECT_FALSE(test, mock_match_called);
+        KUNIT_EXPECT_FALSE(test, mock_finish_called);
+}
+
+/* When type->match() is NULL same types always match */
+static void test_match_no_match_callback(struct kunit *test)
+{
+        static struct dma_buf_mapping_type mock_type_no_match = {
+                .name = "mock_type_no_match",
+                .finish_match = mock_finish_match,
+        };
+        struct dma_buf_mapping_match matches[] = {
+                { .type = &mock_type_no_match }
+        };
+        struct dma_buf_match_args args = {
+                .imp_matches = matches,
+                .imp_len = ARRAY_SIZE(matches),
+        };
+
+        KUNIT_EXPECT_EQ(
+                test,
+                dma_buf_match_mapping(&args, matches, ARRAY_SIZE(matches)), 0);
+        KUNIT_EXPECT_FALSE(test, mock_match_called);
+        KUNIT_EXPECT_TRUE(test, mock_finish_called);
+        KUNIT_EXPECT_PTR_EQ(test, mock_finish_args_arg, &args);
+        KUNIT_EXPECT_PTR_EQ(test, mock_finish_exp_arg, &matches[0]);
+        KUNIT_EXPECT_PTR_EQ(test, mock_finish_imp_arg, &matches[0]);
+}
+
+static void test_match_callback_returns(struct kunit *test)
+{
+        struct dma_buf_mapping_match matches[] = { { .type = &mock_type_a } };
+        struct dma_buf_match_args args = {
+                .imp_matches = matches,
+                .imp_len = ARRAY_SIZE(matches),
+        };
+
+        /* type->match() returns -EOPNOTSUPP. Skips to next */
+        mock_match_ret = -EOPNOTSUPP;
+        KUNIT_EXPECT_EQ(test,
+                        dma_buf_match_mapping(&args, matches,
+                                              ARRAY_SIZE(matches)),
+                        -EINVAL);
+        KUNIT_EXPECT_TRUE(test, mock_match_called);
+        KUNIT_EXPECT_FALSE(test, mock_finish_called);
+
+        /* type->match() returns an error code. Stops immediately, returns code */
+        mock_match_ret = -ENOMEM;
+        KUNIT_EXPECT_EQ(test,
+                        dma_buf_match_mapping(&args, matches,
+                                              ARRAY_SIZE(matches)),
+                        -ENOMEM);
+        KUNIT_EXPECT_TRUE(test, mock_match_called);
+        KUNIT_EXPECT_FALSE(test, mock_finish_called);
+}
+
+/* Multiple importers. First exporter compatible type wins */
+static void test_match_exporter_priority(struct kunit *test)
+{
+        struct dma_buf_mapping_match exp1[2] = {
+                { .type = &mock_type_a },
+                { .type = &mock_type_b },
+        };
+        struct dma_buf_mapping_match exp2[] = { { .type = &mock_type_b } };
+        struct dma_buf_mapping_match imp[2] = {
+                { .type = &mock_type_a },
+                { .type = &mock_type_b },
+        };
+        struct dma_buf_match_args args = {
+                .imp_matches = imp,
+                .imp_len = ARRAY_SIZE(imp),
+        };
+
+        /* First matches */
+        KUNIT_EXPECT_EQ(
+                test, dma_buf_match_mapping(&args, exp1, ARRAY_SIZE(exp1)), 0);
+        KUNIT_EXPECT_TRUE(test, mock_finish_called);
+        KUNIT_EXPECT_PTR_EQ(test, mock_finish_exp_arg, &exp1[0]);
+        KUNIT_EXPECT_PTR_EQ(test, mock_finish_imp_arg, &imp[0]);
+
+        /* Second matches */
+        KUNIT_EXPECT_EQ(
+                test, dma_buf_match_mapping(&args, exp2, ARRAY_SIZE(exp2)), 0);
+        KUNIT_EXPECT_TRUE(test, mock_finish_called);
+        KUNIT_EXPECT_PTR_EQ(test, mock_finish_exp_arg, &exp2[0]);
+        KUNIT_EXPECT_PTR_EQ(test, mock_finish_imp_arg, &imp[1]);
+}
+
+/* Multiple exporters. First exporter compatible type wins */
+static void test_match_importer_priority(struct kunit *test)
+{
+        struct dma_buf_mapping_match exp[] = {
+                { .type = &mock_type_a },
+                { .type = &mock_type_b },
+        };
+        struct dma_buf_mapping_match imp1[] = { { .type = &mock_type_b } };
+        struct dma_buf_mapping_match imp2[] = {
+                { .type = &mock_type_b },
+                { .type = &mock_type_a },
+        };
+        struct dma_buf_match_args args = {
+                .imp_matches = imp1,
+                .imp_len = ARRAY_SIZE(imp1),
+        };
+
+        /* Single importer */
+        KUNIT_EXPECT_EQ(test,
+                        dma_buf_match_mapping(&args, exp, ARRAY_SIZE(exp)), 0);
+        KUNIT_EXPECT_TRUE(test, mock_finish_called);
+        KUNIT_EXPECT_PTR_EQ(test, mock_finish_exp_arg, &exp[1]);
+        KUNIT_EXPECT_PTR_EQ(test, mock_finish_imp_arg, &imp1[0]);
+
+        /* Two importers, skipping the first */
+        args = (struct dma_buf_match_args){
+                .imp_matches = imp2,
+                .imp_len = ARRAY_SIZE(imp2),
+        };
+        KUNIT_EXPECT_EQ(test,
+                        dma_buf_match_mapping(&args, exp, ARRAY_SIZE(exp)), 0);
+        KUNIT_EXPECT_TRUE(test, mock_finish_called);
+        KUNIT_EXPECT_PTR_EQ(test, mock_finish_exp_arg, &exp[0]);
+        KUNIT_EXPECT_PTR_EQ(test, mock_finish_imp_arg, &imp2[1]);
+}
+
+static void mock_dmabuf_release(struct dma_buf *dmabuf)
+{
+}
+
+static struct sg_table *mock_map_dma_buf(struct dma_buf_attachment *attach,
+                                         enum dma_data_direction dir)
+{
+        return ERR_PTR(-ENODEV);
+}
+
+static void mock_unmap_dma_buf(struct dma_buf_attachment *attach,
+                               struct sg_table *sgt,
+                               enum dma_data_direction dir)
+{
+}
+
+static const struct dma_buf_mapping_sgt_exp_ops mock_sgt_ops = {
+        .map_dma_buf = mock_map_dma_buf,
+        .unmap_dma_buf = mock_unmap_dma_buf,
+};
+
+static const struct dma_buf_ops mock_dmabuf_simple_sgt_ops = {
+        .release = mock_dmabuf_release,
+        DMA_BUF_SIMPLE_SGT_EXP_MATCH(mock_map_dma_buf, mock_unmap_dma_buf),
+};
+
+static int mock_dmabuf_match_mapping(struct dma_buf_match_args *args)
+{
+        struct dma_buf_mapping_match sgt_match[2];
+        unsigned int num_match = 0;
+
+        sgt_match[num_match++] =
+                (struct dma_buf_mapping_match){ .type = &mock_type_a };
+
+        sgt_match[num_match++] = DMA_BUF_EMAPPING_SGT(&mock_sgt_ops);
+
+        return dma_buf_match_mapping(args, sgt_match, ARRAY_SIZE(sgt_match));
+}
+
+static const struct dma_buf_ops mock_dmabuf_two_exp_ops = {
+        .release = mock_dmabuf_release,
+        .match_mapping = mock_dmabuf_match_mapping,
+};
+
+struct dma_exporter {
+        const struct dma_buf_ops *ops;
+        const char *desc;
+};
+
+static struct dma_buf *mock_dmabuf_export(const struct dma_buf_ops *ops)
+{
+        DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+        exp_info.ops = ops;
+        exp_info.size = PAGE_SIZE;
+        exp_info.priv = ERR_PTR(-EINVAL);
+        return dma_buf_export(&exp_info);
+}
+
+/*
+ * Check that a simple SGT exporter with single_exporter_match works with
+ * dma_buf_sgt_attach()
+ */
+static void test_sgt_attach(struct kunit *test)
+{
+        const struct dma_exporter *param = test->param_value;
+        struct dma_buf *dmabuf;
+        struct dma_buf_attachment *attach;
+        struct device *dev;
+
+        dev = kunit_device_register(test, "dma-buf-test");
+        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
+
+        dmabuf = mock_dmabuf_export(param->ops);
+        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dmabuf);
+
+        attach = dma_buf_sgt_attach(dmabuf, dev);
+        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, attach);
+
+        KUNIT_EXPECT_PTR_EQ(test, attach->map_type.type,
+                            &dma_buf_mapping_sgt_type);
+        KUNIT_EXPECT_PTR_EQ(test, dma_buf_sgt_dma_device(attach), dev);
+        KUNIT_EXPECT_FALSE(test, dma_buf_sgt_p2p_allowed(attach));
+
+        dma_buf_detach(dmabuf, attach);
+        dma_buf_put(dmabuf);
+}
+
+static void mock_move_notify(struct dma_buf_attachment *attach)
+{
+}
+
+static const struct dma_buf_attach_ops mock_importer_ops = {
+        .move_notify = &mock_move_notify,
+};
+
+/* Check a dynamic attach with a non-sgt mapping type */
+static void test_mock_attach(struct kunit *test)
+{
+        struct dma_buf_mapping_match imp[] = { { .type = &mock_type_a } };
+        struct dma_buf *dmabuf;
+        struct dma_buf_attachment *attach;
+        struct device *dev;
+
+        dev = kunit_device_register(test, "dma-buf-test");
+        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
+
+        dmabuf = mock_dmabuf_export(&mock_dmabuf_two_exp_ops);
+        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dmabuf);
+
+        attach = dma_buf_mapping_attach(dmabuf, imp, ARRAY_SIZE(imp),
+                                        &mock_importer_ops, NULL);
+        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, attach);
+
+        KUNIT_EXPECT_PTR_EQ(test, attach->map_type.type, &mock_type_a);
+
+        dma_buf_detach(dmabuf, attach);
+        dma_buf_put(dmabuf);
+}
+
+static const struct dma_exporter dma_exporter_params[] = {
+        { &mock_dmabuf_simple_sgt_ops, "simple_sgt" },
+        { &mock_dmabuf_two_exp_ops, "two_exp" },
+};
+KUNIT_ARRAY_PARAM_DESC(dma_exporter, dma_exporter_params, desc);
+
+static struct kunit_case dma_mapping_cases[] = {
+        KUNIT_CASE(test_match_fail),
+        KUNIT_CASE(test_match_no_match_callback),
+        KUNIT_CASE(test_match_callback_returns),
+        KUNIT_CASE(test_match_exporter_priority),
+        KUNIT_CASE(test_match_importer_priority),
+        KUNIT_CASE_PARAM(test_sgt_attach, dma_exporter_gen_params),
+        KUNIT_CASE(test_mock_attach),
+        {}
+};
+
+static struct kunit_suite dma_mapping_test_suite = {
+        .name = "dma-buf-mapping",
+        .init = reset_mock_state,
+        .test_cases = dma_mapping_cases,
+};
+
+kunit_test_suite(dma_mapping_test_suite);
+
+MODULE_IMPORT_NS("DMA_BUF");
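
For reference, the "dma-buf-mapping" suite above can be exercised with the
standard KUnit wrapper. This is only a usage sketch: it assumes that
CONFIG_DMABUF_KUNIT_TEST (the symbol used in the Makefile hunk above) is the
only extra option required and that the rest of the dma-buf core is already
enabled in the kunitconfig; the exact set of --kconfig_add options may differ:

  ./tools/testing/kunit/kunit.py run \
          --kconfig_add CONFIG_DMABUF_KUNIT_TEST=y 'dma-buf-mapping'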