This patch introduces a new heap driver to expose DT non-reusable "shared-dma-pool" coherent regions as dma-buf heaps, so userspace can allocate buffers from each reserved, named region.
Because these regions are device-dependent, each heap instance binds a heap device to its reserved-mem region via a newly introduced helper function, of_reserved_mem_device_init_with_mem(), so that coherent allocations use the correct dev->dma_mem.
Charging to cgroups for these buffers is intentionally left out to keep review focused on the new heap; I plan to follow up based on Eric’s [1] and Maxime’s [2] work on dmem charging from userspace.
This series also makes the new heap driver modular, in line with the CMA heap change in [3].
[1] https://lore.kernel.org/all/20260218-dmabuf-heap-cma-dmem-v2-0-b249886fb7b2@...
[2] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.o...
[3] https://lore.kernel.org/all/20260303-dma-buf-heaps-as-modules-v3-0-24344812c...
Signed-off-by: Albert Esteve aesteve@redhat.com
---
Changes in v2:
- Removed dmem charging parts
- Moved coherent heap registering logic to coherent.c
- Made heap device a member of struct dma_heap
- Split dma_heap_add logic into create/register, to be able to access the stored heap device before it is registered.
- Avoid platform device in favour of heap device
- Added a wrapper to rmem device_init() op
- Switched from late_initcall() to module_init()
- Made the coherent heap driver modular
- Link to v1: https://lore.kernel.org/r/20260224-b4-dmabuf-heap-coherent-rmem-v1-1-dffef43...
---
Albert Esteve (5):
      dma-buf: dma-heap: split dma_heap_add
      of_reserved_mem: add a helper for rmem device_init op
      dma-buf: heaps: Add Coherent heap to dmabuf heaps
      dma: coherent: register to coherent heap
      dma-buf: heaps: coherent: Turn heap into a module
John Stultz (1): dma-buf: dma-heap: Keep track of the heap device struct
 drivers/dma-buf/dma-heap.c            | 138 +++++++++--
 drivers/dma-buf/heaps/Kconfig         |   9 +
 drivers/dma-buf/heaps/Makefile        |   1 +
 drivers/dma-buf/heaps/coherent_heap.c | 429 ++++++++++++++++++++++++++++++++++
 drivers/of/of_reserved_mem.c          |  27 ++-
 include/linux/dma-heap.h              |  16 ++
 include/linux/dma-map-ops.h           |   7 +
 include/linux/of_reserved_mem.h       |   8 +
 kernel/dma/coherent.c                 |  34 +++
 9 files changed, 642 insertions(+), 27 deletions(-)
---
base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
change-id: 20260223-b4-dmabuf-heap-coherent-rmem-91fd3926afe9
Best regards,
From: John Stultz john.stultz@linaro.org
Keep track of the heap device struct.
This will be useful for special DMA allocations and actions.
Signed-off-by: John Stultz john.stultz@linaro.org
Signed-off-by: Albert Esteve aesteve@redhat.com
---
 drivers/dma-buf/dma-heap.c | 34 ++++++++++++++++++++++++++--------
 include/linux/dma-heap.h   |  2 ++
 2 files changed, 28 insertions(+), 8 deletions(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c index ac5f8685a6494..1124d63eb1398 100644 --- a/drivers/dma-buf/dma-heap.c +++ b/drivers/dma-buf/dma-heap.c @@ -31,6 +31,7 @@ * @heap_devt: heap device node * @list: list head connecting to list of heaps * @heap_cdev: heap char device + * @heap_dev: heap device * * Represents a heap of memory from which buffers can be made. */ @@ -41,6 +42,7 @@ struct dma_heap { dev_t heap_devt; struct list_head list; struct cdev heap_cdev; + struct device *heap_dev; };
static LIST_HEAD(heap_list); @@ -223,6 +225,19 @@ const char *dma_heap_get_name(struct dma_heap *heap) } EXPORT_SYMBOL_NS_GPL(dma_heap_get_name, "DMA_BUF_HEAP");
+/** + * dma_heap_get_dev() - get device struct for the heap + * @heap: DMA-Heap to retrieve device struct from + * + * Returns: + * The device struct for the heap. + */ +struct device *dma_heap_get_dev(struct dma_heap *heap) +{ + return heap->heap_dev; +} +EXPORT_SYMBOL_NS_GPL(dma_heap_get_dev, "DMA_BUF_HEAP"); + /** * dma_heap_add - adds a heap to dmabuf heaps * @exp_info: information needed to register this heap @@ -230,7 +245,6 @@ EXPORT_SYMBOL_NS_GPL(dma_heap_get_name, "DMA_BUF_HEAP"); struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) { struct dma_heap *heap, *h, *err_ret; - struct device *dev_ret; unsigned int minor; int ret;
@@ -272,14 +286,14 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) goto err1; }
- dev_ret = device_create(dma_heap_class, - NULL, - heap->heap_devt, - NULL, - heap->name); - if (IS_ERR(dev_ret)) { + heap->heap_dev = device_create(dma_heap_class, + NULL, + heap->heap_devt, + NULL, + heap->name); + if (IS_ERR(heap->heap_dev)) { pr_err("dma_heap: Unable to create device\n"); - err_ret = ERR_CAST(dev_ret); + err_ret = ERR_CAST(heap->heap_dev); goto err2; }
@@ -295,6 +309,10 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) } }
+ /* Make sure it doesn't disappear on us */ + heap->heap_dev = get_device(heap->heap_dev); + + /* Add heap to the list */ list_add(&heap->list, &heap_list); mutex_unlock(&heap_list_lock); diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h index 648328a64b27e..493085e69b70e 100644 --- a/include/linux/dma-heap.h +++ b/include/linux/dma-heap.h @@ -12,6 +12,7 @@ #include <linux/types.h>
struct dma_heap; +struct device;
/** * struct dma_heap_ops - ops to operate on a given heap @@ -43,6 +44,7 @@ struct dma_heap_export_info { void *dma_heap_get_drvdata(struct dma_heap *heap);
const char *dma_heap_get_name(struct dma_heap *heap); +struct device *dma_heap_get_dev(struct dma_heap *heap);
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
On Tue, 3 Mar 2026 13:33:44 +0100, Albert Esteve wrote:
From: John Stultz john.stultz@linaro.org
Keep track of the heap device struct.
This will be useful for special DMA allocations
[ ... ]
Reviewed-by: Maxime Ripard mripard@kernel.org
Thanks! Maxime
Split dma_heap_add() into creation and registration phases while preserving the ordering between cdev_add() and device_add(), and ensuring all device fields are initialised.
This will allow accessing the heap_dev before it is registered and becomes available to userspace, making error handling easier.
Signed-off-by: Albert Esteve aesteve@redhat.com
---
 drivers/dma-buf/dma-heap.c | 126 +++++++++++++++++++++++++++++++++++----------
 include/linux/dma-heap.h   |   3 ++
 2 files changed, 103 insertions(+), 26 deletions(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c index 1124d63eb1398..88189d4e48561 100644 --- a/drivers/dma-buf/dma-heap.c +++ b/drivers/dma-buf/dma-heap.c @@ -238,15 +238,30 @@ struct device *dma_heap_get_dev(struct dma_heap *heap) } EXPORT_SYMBOL_NS_GPL(dma_heap_get_dev, "DMA_BUF_HEAP");
+static void dma_heap_dev_release(struct device *dev) +{ + struct dma_heap *heap; + + pr_debug("heap device: '%s': %s\n", dev_name(dev), __func__); + heap = dev_get_drvdata(dev); + kfree(heap->name); + kfree(heap); + kfree(dev); +} + /** - * dma_heap_add - adds a heap to dmabuf heaps - * @exp_info: information needed to register this heap + * dma_heap_create() - allocate and initialize a heap object + * @exp_info: information needed to create a heap + * + * Creates a heap instance but does not register it or create device nodes. + * Use dma_heap_register() to make it visible to userspace, or + * dma_heap_destroy() to release it. + * + * Returns a heap on success or ERR_PTR(-errno) on failure. */ -struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) +struct dma_heap *dma_heap_create(const struct dma_heap_export_info *exp_info) { - struct dma_heap *heap, *h, *err_ret; - unsigned int minor; - int ret; + struct dma_heap *heap;
if (!exp_info->name || !strcmp(exp_info->name, "")) { pr_err("dma_heap: Cannot add heap without a name\n"); @@ -265,13 +280,41 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) heap->name = exp_info->name; heap->ops = exp_info->ops; heap->priv = exp_info->priv; + heap->heap_dev = kzalloc_obj(*heap->heap_dev); + if (!heap->heap_dev) { + kfree(heap); + return ERR_PTR(-ENOMEM); + } + + device_initialize(heap->heap_dev); + dev_set_drvdata(heap->heap_dev, heap); + + dev_set_name(heap->heap_dev, heap->name); + heap->heap_dev->class = dma_heap_class; + heap->heap_dev->release = dma_heap_dev_release; + + return heap; +} +EXPORT_SYMBOL_NS_GPL(dma_heap_create, "DMA_BUF_HEAP"); + +/** + * dma_heap_register() - register a heap with the dma-heap framework + * @heap: heap instance created with dma_heap_create() + * + * Registers the heap, creating its device node and adding it to the heap + * list. Returns 0 on success or a negative error code on failure. + */ +int dma_heap_register(struct dma_heap *heap) +{ + struct dma_heap *h; + unsigned int minor; + int ret;
/* Find unused minor number */ ret = xa_alloc(&dma_heap_minors, &minor, heap, XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL); if (ret < 0) { pr_err("dma_heap: Unable to get minor number for heap\n"); - err_ret = ERR_PTR(ret); goto err0; }
@@ -282,42 +325,34 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1); if (ret < 0) { pr_err("dma_heap: Unable to add char device\n"); - err_ret = ERR_PTR(ret); goto err1; }
- heap->heap_dev = device_create(dma_heap_class, - NULL, - heap->heap_devt, - NULL, - heap->name); - if (IS_ERR(heap->heap_dev)) { - pr_err("dma_heap: Unable to create device\n"); - err_ret = ERR_CAST(heap->heap_dev); + heap->heap_dev->devt = heap->heap_devt; + + ret = device_add(heap->heap_dev); + if (ret) { + pr_err("dma_heap: Unable to add device\n"); goto err2; }
mutex_lock(&heap_list_lock); /* check the name is unique */ list_for_each_entry(h, &heap_list, list) { - if (!strcmp(h->name, exp_info->name)) { + if (!strcmp(h->name, heap->name)) { mutex_unlock(&heap_list_lock); pr_err("dma_heap: Already registered heap named %s\n", - exp_info->name); - err_ret = ERR_PTR(-EINVAL); + heap->name); + ret = -EINVAL; goto err3; } }
- /* Make sure it doesn't disappear on us */ - heap->heap_dev = get_device(heap->heap_dev); - - /* Add heap to the list */ list_add(&heap->list, &heap_list); mutex_unlock(&heap_list_lock);
- return heap; + return 0;
err3: device_destroy(dma_heap_class, heap->heap_devt); @@ -326,8 +361,47 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) err1: xa_erase(&dma_heap_minors, minor); err0: - kfree(heap); - return err_ret; + dma_heap_destroy(heap); + return ret; +} +EXPORT_SYMBOL_NS_GPL(dma_heap_register, "DMA_BUF_HEAP"); + +/** + * dma_heap_destroy() - release a heap created by dma_heap_create() + * @heap: heap instance to release + * + * Drops the heap device reference; the heap and its device are freed in the + * device release path when the last reference is gone. + */ +void dma_heap_destroy(struct dma_heap *heap) +{ + put_device(heap->heap_dev); +} +EXPORT_SYMBOL_NS_GPL(dma_heap_destroy, "DMA_BUF_HEAP"); + +/** + * dma_heap_add - adds a heap to dmabuf heaps + * @exp_info: information needed to register this heap + */ +struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) +{ + struct dma_heap *heap; + int ret; + + heap = dma_heap_create(exp_info); + if (IS_ERR(heap)) { + pr_err("dma_heap: failed to create heap (%d)\n", PTR_ERR(heap)); + return PTR_ERR(heap); + } + + ret = dma_heap_register(heap); + if (ret) { + pr_err("dma_heap: failed to register heap (%d)\n", ret); + dma_heap_destroy(heap); + return ERR_PTR(ret); + } + + return heap; } EXPORT_SYMBOL_NS_GPL(dma_heap_add, "DMA_BUF_HEAP");
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h index 493085e69b70e..1b0ea43ba66c3 100644 --- a/include/linux/dma-heap.h +++ b/include/linux/dma-heap.h @@ -46,6 +46,9 @@ void *dma_heap_get_drvdata(struct dma_heap *heap); const char *dma_heap_get_name(struct dma_heap *heap); struct device *dma_heap_get_dev(struct dma_heap *heap);
+struct dma_heap *dma_heap_create(const struct dma_heap_export_info *exp_info); +int dma_heap_register(struct dma_heap *heap); +void dma_heap_destroy(struct dma_heap *heap); struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
extern bool mem_accounting;
Add a helper function wrapping the internal reserved-memory device_init() call, and export it for external users.
Use the new helper function within of_reserved_mem_device_init_by_idx().
Signed-off-by: Albert Esteve aesteve@redhat.com
---
 drivers/of/of_reserved_mem.c    | 27 +++++++++++++++++++++++----
 include/linux/of_reserved_mem.h |  8 ++++++++
 2 files changed, 31 insertions(+), 4 deletions(-)
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c index 1fd28f8056108..3a350bef8f11e 100644 --- a/drivers/of/of_reserved_mem.c +++ b/drivers/of/of_reserved_mem.c @@ -605,6 +605,28 @@ struct rmem_assigned_device { static LIST_HEAD(of_rmem_assigned_device_list); static DEFINE_MUTEX(of_rmem_assigned_device_mutex);
+/** + * of_reserved_mem_device_init_with_mem() - assign reserved memory region to + * given device + * @dev: Pointer to the device to configure + * @rmem: Reserved memory region to assign + * + * This function assigns respective DMA-mapping operations based on the + * reserved memory region already provided in @rmem to the @dev device, + * without walking DT nodes. + * + * Returns error code or zero on success. + */ +int of_reserved_mem_device_init_with_mem(struct device *dev, + struct reserved_mem *rmem) +{ + if (!dev || !rmem || !rmem->ops || !rmem->ops->device_init) + return -EINVAL; + + return rmem->ops->device_init(rmem, dev); +} +EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_with_mem); + /** * of_reserved_mem_device_init_by_idx() - assign reserved memory region to * given device @@ -643,14 +665,11 @@ int of_reserved_mem_device_init_by_idx(struct device *dev, rmem = of_reserved_mem_lookup(target); of_node_put(target);
- if (!rmem || !rmem->ops || !rmem->ops->device_init) - return -EINVAL; - rd = kmalloc_obj(struct rmem_assigned_device); if (!rd) return -ENOMEM;
- ret = rmem->ops->device_init(rmem, dev); + ret = of_reserved_mem_device_init_with_mem(dev, rmem); if (ret == 0) { rd->dev = dev; rd->rmem = rmem; diff --git a/include/linux/of_reserved_mem.h b/include/linux/of_reserved_mem.h index f573423359f48..12f7ddb7ee61f 100644 --- a/include/linux/of_reserved_mem.h +++ b/include/linux/of_reserved_mem.h @@ -32,6 +32,8 @@ typedef int (*reservedmem_of_init_fn)(struct reserved_mem *rmem); #define RESERVEDMEM_OF_DECLARE(name, compat, init) \ _OF_DECLARE(reservedmem, name, compat, init, reservedmem_of_init_fn)
+int of_reserved_mem_device_init_with_mem(struct device *dev, + struct reserved_mem *rmem); int of_reserved_mem_device_init_by_idx(struct device *dev, struct device_node *np, int idx); int of_reserved_mem_device_init_by_name(struct device *dev, @@ -51,6 +53,12 @@ int of_reserved_mem_region_count(const struct device_node *np); #define RESERVEDMEM_OF_DECLARE(name, compat, init) \ _OF_DECLARE_STUB(reservedmem, name, compat, init, reservedmem_of_init_fn)
+static inline int of_reserved_mem_device_init_with_mem(struct device *dev, + struct reserved_mem *rmem) +{ + return -EOPNOTSUPP; +} + static inline int of_reserved_mem_device_init_by_idx(struct device *dev, struct device_node *np, int idx) {
Add a dma-buf heap for DT coherent reserved-memory regions (i.e., 'shared-dma-pool' without the 'reusable' property), exposing one heap per region for userspace buffers.
The heap binds its heap device to each memory region so that coherent allocations use the correct dev->dma_mem, and it defers registration until module_init(), when normal allocators are available.
Signed-off-by: Albert Esteve aesteve@redhat.com
---
 drivers/dma-buf/dma-heap.c            |   4 +-
 drivers/dma-buf/heaps/Kconfig         |   9 +
 drivers/dma-buf/heaps/Makefile        |   1 +
 drivers/dma-buf/heaps/coherent_heap.c | 426 ++++++++++++++++++++++++++++++++++
 include/linux/dma-heap.h              |  11 +
 include/linux/dma-map-ops.h           |   7 +
 6 files changed, 456 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c index 88189d4e48561..ba87e5ac16ae2 100644 --- a/drivers/dma-buf/dma-heap.c +++ b/drivers/dma-buf/dma-heap.c @@ -390,8 +390,8 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
heap = dma_heap_create(exp_info); if (IS_ERR(heap)) { - pr_err("dma_heap: failed to create heap (%d)\n", PTR_ERR(heap)); - return PTR_ERR(heap); + pr_err("dma_heap: failed to create heap (%ld)\n", PTR_ERR(heap)); + return ERR_CAST(heap); }
ret = dma_heap_register(heap); diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig index a5eef06c42264..aeb475e585048 100644 --- a/drivers/dma-buf/heaps/Kconfig +++ b/drivers/dma-buf/heaps/Kconfig @@ -12,3 +12,12 @@ config DMABUF_HEAPS_CMA Choose this option to enable dma-buf CMA heap. This heap is backed by the Contiguous Memory Allocator (CMA). If your system has these regions, you should say Y here. + +config DMABUF_HEAPS_COHERENT + bool "DMA-BUF Coherent Reserved-Memory Heap" + depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT + help + Choose this option to enable coherent reserved-memory dma-buf heaps. + This heap is backed by non-reusable DT "shared-dma-pool" regions. + If your system defines coherent reserved-memory regions, you should + say Y here. diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index 974467791032f..96bda7a65f041 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o +obj-$(CONFIG_DMABUF_HEAPS_COHERENT) += coherent_heap.o diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c new file mode 100644 index 0000000000000..d033d737bb9df --- /dev/null +++ b/drivers/dma-buf/heaps/coherent_heap.c @@ -0,0 +1,426 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DMABUF heap for coherent reserved-memory regions + * + * Copyright (C) 2026 Red Hat, Inc. 
+ * Author: Albert Esteve aesteve@redhat.com + * + */ + +#include <linux/dma-buf.h> +#include <linux/dma-heap.h> +#include <linux/dma-map-ops.h> +#include <linux/dma-mapping.h> +#include <linux/err.h> +#include <linux/highmem.h> +#include <linux/iosys-map.h> +#include <linux/of_reserved_mem.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/vmalloc.h> + +struct coherent_heap { + struct dma_heap *heap; + struct reserved_mem *rmem; + char *name; +}; + +struct coherent_heap_buffer { + struct coherent_heap *heap; + struct list_head attachments; + struct mutex lock; + unsigned long len; + dma_addr_t dma_addr; + void *alloc_vaddr; + struct page **pages; + pgoff_t pagecount; + int vmap_cnt; + void *vaddr; +}; + +struct dma_heap_attachment { + struct device *dev; + struct sg_table table; + struct list_head list; + bool mapped; +}; + +static int coherent_heap_attach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct coherent_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a; + int ret; + + a = kzalloc_obj(*a); + if (!a) + return -ENOMEM; + + ret = sg_alloc_table_from_pages(&a->table, buffer->pages, + buffer->pagecount, 0, + buffer->pagecount << PAGE_SHIFT, + GFP_KERNEL); + if (ret) { + kfree(a); + return ret; + } + + a->dev = attachment->dev; + INIT_LIST_HEAD(&a->list); + a->mapped = false; + + attachment->priv = a; + + mutex_lock(&buffer->lock); + list_add(&a->list, &buffer->attachments); + mutex_unlock(&buffer->lock); + + return 0; +} + +static void coherent_heap_detach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct coherent_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a = attachment->priv; + + mutex_lock(&buffer->lock); + list_del(&a->list); + mutex_unlock(&buffer->lock); + + sg_free_table(&a->table); + kfree(a); +} + +static struct sg_table *coherent_heap_map_dma_buf(struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + 
struct dma_heap_attachment *a = attachment->priv; + struct sg_table *table = &a->table; + int ret; + + ret = dma_map_sgtable(attachment->dev, table, direction, 0); + if (ret) + return ERR_PTR(-ENOMEM); + a->mapped = true; + + return table; +} + +static void coherent_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, + struct sg_table *table, + enum dma_data_direction direction) +{ + struct dma_heap_attachment *a = attachment->priv; + + a->mapped = false; + dma_unmap_sgtable(attachment->dev, table, direction, 0); +} + +static int coherent_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct coherent_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a; + + mutex_lock(&buffer->lock); + if (buffer->vmap_cnt) + invalidate_kernel_vmap_range(buffer->vaddr, buffer->len); + + list_for_each_entry(a, &buffer->attachments, list) { + if (!a->mapped) + continue; + dma_sync_sgtable_for_cpu(a->dev, &a->table, direction); + } + mutex_unlock(&buffer->lock); + + return 0; +} + +static int coherent_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct coherent_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a; + + mutex_lock(&buffer->lock); + if (buffer->vmap_cnt) + flush_kernel_vmap_range(buffer->vaddr, buffer->len); + + list_for_each_entry(a, &buffer->attachments, list) { + if (!a->mapped) + continue; + dma_sync_sgtable_for_device(a->dev, &a->table, direction); + } + mutex_unlock(&buffer->lock); + + return 0; +} + +static int coherent_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma) +{ + struct coherent_heap_buffer *buffer = dmabuf->priv; + struct coherent_heap *coh_heap = buffer->heap; + struct device *heap_dev = dma_heap_get_dev(coh_heap->heap); + + return dma_mmap_coherent(heap_dev, vma, buffer->alloc_vaddr, + buffer->dma_addr, buffer->len); +} + +static void *coherent_heap_do_vmap(struct coherent_heap_buffer *buffer) +{ + void 
*vaddr; + + vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL); + if (!vaddr) + return ERR_PTR(-ENOMEM); + + return vaddr; +} + +static int coherent_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map) +{ + struct coherent_heap_buffer *buffer = dmabuf->priv; + void *vaddr; + int ret = 0; + + mutex_lock(&buffer->lock); + if (buffer->vmap_cnt) { + buffer->vmap_cnt++; + iosys_map_set_vaddr(map, buffer->vaddr); + goto out; + } + + vaddr = coherent_heap_do_vmap(buffer); + if (IS_ERR(vaddr)) { + ret = PTR_ERR(vaddr); + goto out; + } + + buffer->vaddr = vaddr; + buffer->vmap_cnt++; + iosys_map_set_vaddr(map, buffer->vaddr); +out: + mutex_unlock(&buffer->lock); + + return ret; +} + +static void coherent_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) +{ + struct coherent_heap_buffer *buffer = dmabuf->priv; + + mutex_lock(&buffer->lock); + if (!--buffer->vmap_cnt) { + vunmap(buffer->vaddr); + buffer->vaddr = NULL; + } + mutex_unlock(&buffer->lock); + iosys_map_clear(map); +} + +static void coherent_heap_dma_buf_release(struct dma_buf *dmabuf) +{ + struct coherent_heap_buffer *buffer = dmabuf->priv; + struct coherent_heap *coh_heap = buffer->heap; + struct device *heap_dev = dma_heap_get_dev(coh_heap->heap); + + if (buffer->vmap_cnt > 0) { + WARN(1, "%s: buffer still mapped in the kernel\n", __func__); + vunmap(buffer->vaddr); + buffer->vaddr = NULL; + buffer->vmap_cnt = 0; + } + + if (buffer->alloc_vaddr) + dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr, + buffer->dma_addr); + kfree(buffer->pages); + kfree(buffer); +} + +static const struct dma_buf_ops coherent_heap_buf_ops = { + .attach = coherent_heap_attach, + .detach = coherent_heap_detach, + .map_dma_buf = coherent_heap_map_dma_buf, + .unmap_dma_buf = coherent_heap_unmap_dma_buf, + .begin_cpu_access = coherent_heap_dma_buf_begin_cpu_access, + .end_cpu_access = coherent_heap_dma_buf_end_cpu_access, + .mmap = coherent_heap_mmap, + .vmap = coherent_heap_vmap, + .vunmap = 
coherent_heap_vunmap, + .release = coherent_heap_dma_buf_release, +}; + +static struct dma_buf *coherent_heap_allocate(struct dma_heap *heap, + unsigned long len, + u32 fd_flags, + u64 heap_flags) +{ + struct coherent_heap *coh_heap; + struct coherent_heap_buffer *buffer; + struct device *heap_dev; + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + size_t size = PAGE_ALIGN(len); + pgoff_t pagecount = size >> PAGE_SHIFT; + struct dma_buf *dmabuf; + int ret = -ENOMEM; + pgoff_t pg; + + coh_heap = dma_heap_get_drvdata(heap); + if (!coh_heap) + return ERR_PTR(-EINVAL); + + heap_dev = dma_heap_get_dev(coh_heap->heap); + if (!heap_dev) + return ERR_PTR(-ENODEV); + + buffer = kzalloc_obj(*buffer); + if (!buffer) + return ERR_PTR(-ENOMEM); + + INIT_LIST_HEAD(&buffer->attachments); + mutex_init(&buffer->lock); + buffer->len = size; + buffer->heap = coh_heap; + buffer->pagecount = pagecount; + + buffer->alloc_vaddr = dma_alloc_coherent(heap_dev, buffer->len, + &buffer->dma_addr, GFP_KERNEL); + if (!buffer->alloc_vaddr) { + ret = -ENOMEM; + goto free_buffer; + } + + buffer->pages = kmalloc_array(pagecount, sizeof(*buffer->pages), + GFP_KERNEL); + if (!buffer->pages) { + ret = -ENOMEM; + goto free_dma; + } + + for (pg = 0; pg < pagecount; pg++) + buffer->pages[pg] = virt_to_page((char *)buffer->alloc_vaddr + + (pg * PAGE_SIZE)); + + /* create the dmabuf */ + exp_info.exp_name = dma_heap_get_name(heap); + exp_info.ops = &coherent_heap_buf_ops; + exp_info.size = buffer->len; + exp_info.flags = fd_flags; + exp_info.priv = buffer; + dmabuf = dma_buf_export(&exp_info); + if (IS_ERR(dmabuf)) { + ret = PTR_ERR(dmabuf); + goto free_pages; + } + return dmabuf; + +free_pages: + kfree(buffer->pages); +free_dma: + dma_free_coherent(heap_dev, buffer->len, buffer->alloc_vaddr, + buffer->dma_addr); +free_buffer: + kfree(buffer); + return ERR_PTR(ret); +} + +static const struct dma_heap_ops coherent_heap_ops = { + .allocate = coherent_heap_allocate, +}; + +static int coherent_heap_init_dma_mask(struct 
device *dev) +{ + int ret; + + ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64)); + if (!ret) + return 0; + + /* Fallback to 32-bit DMA mask */ + return dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32)); +} + +static int __coherent_heap_register(struct reserved_mem *rmem) +{ + struct dma_heap_export_info exp_info; + struct coherent_heap *coh_heap; + struct device *heap_dev; + int ret; + + if (!rmem || !rmem->name) + return -EINVAL; + + coh_heap = kzalloc_obj(*coh_heap); + if (!coh_heap) + return -ENOMEM; + + coh_heap->rmem = rmem; + coh_heap->name = kstrdup(rmem->name, GFP_KERNEL); + if (!coh_heap->name) { + ret = -ENOMEM; + goto free_coherent_heap; + } + + exp_info.name = coh_heap->name; + exp_info.ops = &coherent_heap_ops; + exp_info.priv = coh_heap; + + coh_heap->heap = dma_heap_create(&exp_info); + if (IS_ERR(coh_heap->heap)) { + ret = PTR_ERR(coh_heap->heap); + goto free_name; + } + + heap_dev = dma_heap_get_dev(coh_heap->heap); + ret = coherent_heap_init_dma_mask(heap_dev); + if (ret) { + pr_err("coherent_heap: failed to set DMA mask (%d)\n", ret); + goto destroy_heap; + } + + ret = of_reserved_mem_device_init_with_mem(heap_dev, rmem); + if (ret) { + pr_err("coherent_heap: failed to initialize memory (%d)\n", ret); + goto destroy_heap; + } + + ret = dma_heap_register(coh_heap->heap); + if (ret) { + pr_err("coherent_heap: failed to register heap (%d)\n", ret); + goto destroy_heap; + } + + return 0; + +destroy_heap: + dma_heap_destroy(coh_heap->heap); + coh_heap->heap = NULL; +free_name: + kfree(coh_heap->name); +free_coherent_heap: + kfree(coh_heap); + + return ret; +} + +static int __init coherent_heap_register(void) +{ + struct reserved_mem *rmem; + unsigned int i; + int ret; + + for (i = 0; (rmem = dma_coherent_get_reserved_region(i)) != NULL; i++) { + ret = __coherent_heap_register(rmem); + if (ret) { + pr_warn("Failed to add coherent heap %s", + rmem->name ? 
rmem->name : "unknown"); + continue; + } + } + + return 0; +} +module_init(coherent_heap_register); +MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions"); diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h index 1b0ea43ba66c3..77e6cb66ffce1 100644 --- a/include/linux/dma-heap.h +++ b/include/linux/dma-heap.h @@ -9,10 +9,12 @@ #ifndef _DMA_HEAPS_H #define _DMA_HEAPS_H
+#include <linux/errno.h> #include <linux/types.h>
struct dma_heap; struct device; +struct reserved_mem;
/** * struct dma_heap_ops - ops to operate on a given heap @@ -53,4 +55,13 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
extern bool mem_accounting;
+#if IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT) +int dma_heap_coherent_register(struct reserved_mem *rmem); +#else +static inline int dma_heap_coherent_register(struct reserved_mem *rmem) +{ + return -EOPNOTSUPP; +} +#endif + #endif /* _DMA_HEAPS_H */ diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index 60b63756df821..c87e5e44e5383 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -12,6 +12,7 @@
struct cma; struct iommu_ops; +struct reserved_mem;
struct dma_map_ops { void *(*alloc)(struct device *dev, size_t size, @@ -161,6 +162,7 @@ int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size, int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr); int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma, void *cpu_addr, size_t size, int *ret); +struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx); #else static inline int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr, dma_addr_t device_addr, size_t size) @@ -172,6 +174,11 @@ static inline int dma_declare_coherent_memory(struct device *dev, #define dma_release_from_dev_coherent(dev, order, vaddr) (0) #define dma_mmap_from_dev_coherent(dev, vma, vaddr, order, ret) (0) static inline void dma_release_coherent_memory(struct device *dev) { } +static inline +struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx) +{ + return NULL; +} #endif /* CONFIG_DMA_DECLARE_COHERENT */
#ifdef CONFIG_DMA_GLOBAL_POOL
Record coherent reserved-memory regions during the DMA setup of non-reusable DT nodes, so that the coherent heap driver can later look them up and register them.
Signed-off-by: Albert Esteve aesteve@redhat.com
---
 kernel/dma/coherent.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)
diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c index 1147497bc512c..d0d0979ffb153 100644 --- a/kernel/dma/coherent.c +++ b/kernel/dma/coherent.c @@ -9,6 +9,7 @@ #include <linux/module.h> #include <linux/dma-direct.h> #include <linux/dma-map-ops.h> +#include <linux/dma-heap.h>
struct dma_coherent_mem { void *virt_base; @@ -334,6 +335,31 @@ static phys_addr_t dma_reserved_default_memory_base __initdata; static phys_addr_t dma_reserved_default_memory_size __initdata; #endif
+#define MAX_COHERENT_REGIONS 64 + +static struct reserved_mem *rmem_coherent_areas[MAX_COHERENT_REGIONS]; +static unsigned int rmem_coherent_areas_num; + +static int rmem_coherent_insert_area(struct reserved_mem *rmem) +{ + if (rmem_coherent_areas_num >= MAX_COHERENT_REGIONS) { + pr_warn("Deferred heap areas list full, dropping %s\n", + rmem->name ? rmem->name : "unknown"); + return -EINVAL; + } + rmem_coherent_areas[rmem_coherent_areas_num++] = rmem; + return 0; +} + +struct reserved_mem *dma_coherent_get_reserved_region(unsigned int idx) +{ + if (idx >= rmem_coherent_areas_num) + return NULL; + + return rmem_coherent_areas[idx]; +} +EXPORT_SYMBOL_GPL(dma_coherent_get_reserved_region); + static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev) { struct dma_coherent_mem *mem = rmem->priv; @@ -393,6 +419,14 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem) rmem->ops = &rmem_dma_ops; pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n", &rmem->base, (unsigned long)rmem->size / SZ_1M); + + if (IS_ENABLED(CONFIG_DMABUF_HEAPS_COHERENT)) { + int ret = rmem_coherent_insert_area(rmem); + + if (ret) + pr_warn("Reserved memory: failed to store coherent area for %s (%d)\n", + rmem->name ? rmem->name : "unknown", ret); + } return 0; }
Following the current effort to make the CMA heap a module, turn the Coherent heap into a module as well, by changing its Kconfig entry to a tristate and importing the proper dma-buf namespaces.
This heap cannot be unloaded (the same applies to the CMA heap), since a big part of the infrastructure needed to make that safe is still missing.
Signed-off-by: Albert Esteve aesteve@redhat.com
---
 drivers/dma-buf/heaps/Kconfig         | 2 +-
 drivers/dma-buf/heaps/coherent_heap.c | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig index aeb475e585048..2f84a1018b900 100644 --- a/drivers/dma-buf/heaps/Kconfig +++ b/drivers/dma-buf/heaps/Kconfig @@ -14,7 +14,7 @@ config DMABUF_HEAPS_CMA regions, you should say Y here.
config DMABUF_HEAPS_COHERENT - bool "DMA-BUF Coherent Reserved-Memory Heap" + tristate "DMA-BUF Coherent Reserved-Memory Heap" depends on DMABUF_HEAPS && OF_RESERVED_MEM && DMA_DECLARE_COHERENT help Choose this option to enable coherent reserved-memory dma-buf heaps. diff --git a/drivers/dma-buf/heaps/coherent_heap.c b/drivers/dma-buf/heaps/coherent_heap.c index d033d737bb9df..cdf8efa6c1564 100644 --- a/drivers/dma-buf/heaps/coherent_heap.c +++ b/drivers/dma-buf/heaps/coherent_heap.c @@ -424,3 +424,6 @@ static int __init coherent_heap_register(void) } module_init(coherent_heap_register); MODULE_DESCRIPTION("DMA-BUF heap for coherent reserved-memory regions"); +MODULE_LICENSE("GPL"); +MODULE_IMPORT_NS("DMA_BUF"); +MODULE_IMPORT_NS("DMA_BUF_HEAP");
On Tue, Mar 3, 2026 at 4:34 AM Albert Esteve aesteve@redhat.com wrote:
This patch introduces a new heap driver to expose DT non-reusable "shared-dma-pool" coherent regions as dma-buf heaps, so userspace can allocate buffers from each reserved, named region.
Just a nit here: Might be good to provide some higher level context as to why this is wanted, and what it enables.
Also, "shared-dma-pool" is also used for CMA regions, so it might be unclear initially how this is different from the CMA heap (you do mention non-reusable, but that's a pretty subtle detail).
Might be good to add some of the rationale to the patch adding the heap implementation as well so it makes it into the git history.
thanks -john
On Tue, Mar 3, 2026 at 9:55 PM John Stultz jstultz@google.com wrote:
On Tue, Mar 3, 2026 at 4:34 AM Albert Esteve aesteve@redhat.com wrote:
This patch introduces a new heap driver to expose DT non-reusable "shared-dma-pool" coherent regions as dma-buf heaps, so userspace can allocate buffers from each reserved, named region.
Just a nit here: Might be good to provide some higher level context as to why this is wanted, and what it enables.
Also, "shared-dma-pool" is also used for CMA regions, so it might be unclear initially how this is different from the CMA heap (you do mention non-reusable, but that's a pretty subtle detail).
Sure, I will expand this for the next revision and try to clarify the points you mentioned here (and add these points to the relevant patch).
BR, Albert
Might be good to add some of the rationale to the patch adding the heap implementation as well so it makes it into the git history.
thanks -john