From: Jiri Pirko <jiri@nvidia.com>
Confidential computing (CoCo) VMs/guests, such as AMD SEV and Intel TDX, run with encrypted/protected memory, which creates a challenge for devices that do not support DMA to it (no TDISP support).

For kernel-only DMA operations, swiotlb bounce buffering provides a transparent solution by copying data through decrypted memory. However, the only way to get this memory into userspace is via the DMA API's dma_alloc_pages()/dma_mmap_pages()-style interfaces, which tie the memory to a single DMA device and are incompatible with pin_user_pages().
These limitations are particularly problematic for the RDMA subsystem which makes heavy use of pin_user_pages() and expects flexible memory usage between many different DMA devices.
This patch series enables userspace to explicitly request decrypted (shared) memory allocations from the dma-buf system heap. Userspace can mmap this memory and pass the dma-buf fd to existing importers, such as RDMA or DRM devices, to access the memory. The DMA API is extended so that the dma-buf heap exporter can DMA map the shared memory to each importing device.
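To make the flow concrete, here is a minimal userspace sketch (not part of the series; error handling trimmed, and the heap name is the one added in patch 2):

	#include <fcntl.h>
	#include <stddef.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <unistd.h>
	#include <linux/dma-heap.h>

	/* Returns a dma-buf fd (or -1) and maps the buffer at *vaddr. */
	static int alloc_cc_decrypted(size_t len, void **vaddr)
	{
		struct dma_heap_allocation_data data = {
			.len = len,
			.fd_flags = O_RDWR | O_CLOEXEC,
		};
		int heap_fd, ret;

		heap_fd = open("/dev/dma_heap/system_cc_decrypted",
			       O_RDONLY | O_CLOEXEC);
		if (heap_fd < 0)
			return -1;

		ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
		close(heap_fd);
		if (ret)
			return -1;

		/* data.fd is an ordinary dma-buf: mmap() it here and/or hand
		 * it to an RDMA/DRM importer, which attaches and DMA maps it
		 * through the heap's exporter. */
		*vaddr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			      data.fd, 0);
		return data.fd;
	}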
---
v1->v2:
patch1:
- rebased on top of recent dma-mapping-fixes
patch2:
- fixed build errors on s390 by including mem_encrypt.h
- converted system heap flag implementation to a separate heap
Based on dma-mapping-fixes HEAD d5b5e8149af0f5efed58653cbebf1cb3258ce49a
Jiri Pirko (2):
  dma-mapping: introduce DMA_ATTR_CC_DECRYPTED for pre-decrypted memory
  dma-buf: heaps: system: add system_cc_decrypted heap for explicitly
    decrypted memory
 drivers/dma-buf/heaps/system_heap.c | 103 ++++++++++++++++++++++++++--
 include/linux/dma-heap.h            |   1 +
 include/linux/dma-mapping.h         |   6 ++
 include/trace/events/dma.h          |   3 +-
 include/uapi/linux/dma-heap.h       |   3 +-
 kernel/dma/direct.h                 |  14 +++-
 6 files changed, 119 insertions(+), 11 deletions(-)
From: Jiri Pirko <jiri@nvidia.com>
Current CC designs don't place a vIOMMU in front of untrusted devices. Instead, the DMA API forces all untrusted device DMA through swiotlb bounce buffers (is_swiotlb_force_bounce()), which copies data into decrypted memory on behalf of the device.
When a caller has already arranged for the memory to be decrypted via set_memory_decrypted(), the DMA API needs to know so it can map directly using the unencrypted physical address rather than bounce buffering. Following the pattern of DMA_ATTR_MMIO, add DMA_ATTR_CC_DECRYPTED for this purpose. Like the MMIO case, only the caller knows what kind of memory it has and must inform the DMA API for it to work correctly.
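For illustration, a hypothetical caller would follow roughly this pattern (only set_memory_decrypted() and the DMA API calls are real interfaces; the surrounding function is a sketch, assuming <linux/dma-mapping.h> and <linux/set_memory.h>):

	static dma_addr_t map_shared_page(struct device *dev, struct page *page)
	{
		/* Decrypt first; the attribute only asserts this happened. */
		if (set_memory_decrypted((unsigned long)page_address(page), 1))
			return DMA_MAPPING_ERROR;

		/* Maps via phys_to_dma_unencrypted(); per this patch it
		 * fails with DMA_MAPPING_ERROR on devices that are not
		 * forced to bounce, where the attribute is meaningless. */
		return dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
					  DMA_BIDIRECTIONAL,
					  DMA_ATTR_CC_DECRYPTED);
	}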
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
---
v1->v2:
- rebased on top of recent dma-mapping-fixes
---
 include/linux/dma-mapping.h |  6 ++++++
 include/trace/events/dma.h  |  3 ++-
 kernel/dma/direct.h         | 14 +++++++++++---
 3 files changed, 19 insertions(+), 4 deletions(-)
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 29973baa0581..ae3d85e494ec 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -85,6 +85,12 @@
  * a cacheline must have this attribute for this to be considered safe.
  */
 #define DMA_ATTR_CPU_CACHE_CLEAN	(1UL << 11)
+/*
+ * DMA_ATTR_CC_DECRYPTED: Indicates memory that has been explicitly decrypted
+ * (shared) for confidential computing guests. The caller must have
+ * called set_memory_decrypted(). A struct page is required.
+ */
+#define DMA_ATTR_CC_DECRYPTED	(1UL << 12)

 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
index 33e99e792f1a..b8082d5177c4 100644
--- a/include/trace/events/dma.h
+++ b/include/trace/events/dma.h
@@ -32,7 +32,8 @@ TRACE_DEFINE_ENUM(DMA_NONE);
 	{ DMA_ATTR_ALLOC_SINGLE_PAGES, "ALLOC_SINGLE_PAGES" }, \
 	{ DMA_ATTR_NO_WARN, "NO_WARN" }, \
 	{ DMA_ATTR_PRIVILEGED, "PRIVILEGED" }, \
-	{ DMA_ATTR_MMIO, "MMIO" })
+	{ DMA_ATTR_MMIO, "MMIO" }, \
+	{ DMA_ATTR_CC_DECRYPTED, "CC_DECRYPTED" })

 DECLARE_EVENT_CLASS(dma_map,
 	TP_PROTO(struct device *dev, phys_addr_t phys_addr, dma_addr_t dma_addr,
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index e89f175e9c2d..c047a9d0fda3 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -84,16 +84,24 @@ static inline dma_addr_t dma_direct_map_phys(struct device *dev,
 	dma_addr_t dma_addr;

 	if (is_swiotlb_force_bounce(dev)) {
-		if (attrs & DMA_ATTR_MMIO)
-			return DMA_MAPPING_ERROR;
+		if (!(attrs & DMA_ATTR_CC_DECRYPTED)) {
+			if (attrs & DMA_ATTR_MMIO)
+				return DMA_MAPPING_ERROR;

-		return swiotlb_map(dev, phys, size, dir, attrs);
+			return swiotlb_map(dev, phys, size, dir, attrs);
+		}
+	} else if (attrs & DMA_ATTR_CC_DECRYPTED) {
+		return DMA_MAPPING_ERROR;
 	}

 	if (attrs & DMA_ATTR_MMIO) {
 		dma_addr = phys;
 		if (unlikely(!dma_capable(dev, dma_addr, size, false)))
 			goto err_overflow;
+	} else if (attrs & DMA_ATTR_CC_DECRYPTED) {
+		dma_addr = phys_to_dma_unencrypted(dev, phys);
+		if (unlikely(!dma_capable(dev, dma_addr, size, false)))
+			goto err_overflow;
 	} else {
 		dma_addr = phys_to_dma(dev, phys);
 		if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
From: Jiri Pirko <jiri@nvidia.com>
Add a new "system_cc_decrypted" dma-buf heap to allow userspace to allocate decrypted (shared) memory for confidential computing (CoCo) VMs.
On CoCo VMs, guest memory is encrypted by default. The hardware uses an encryption bit in page table entries (C-bit on AMD SEV, "shared" bit on Intel TDX) to control whether a given memory access is encrypted or decrypted. The kernel's direct map is set up with encryption enabled, so pages returned by alloc_pages() are encrypted in the direct map by default. To make this memory usable for devices that do not support DMA to encrypted memory (no TDISP support), it has to be explicitly decrypted. A couple of things are needed to properly handle decrypted memory for the dma-buf use case:
- set_memory_decrypted() on the direct map after allocation: Besides clearing the encryption bit in the direct map PTEs, this also notifies the hypervisor about the page state change. On free, the inverse set_memory_encrypted() must be called before returning pages to the allocator. If re-encryption fails, pages are intentionally leaked to prevent decrypted memory from being reused as private.
- pgprot_decrypted() for userspace and kernel virtual mappings: Any new mapping of the decrypted pages, be it to userspace via mmap or to kernel vmalloc space via vmap, creates PTEs independent of the direct map. These must also have the encryption bit cleared, otherwise accesses through them would see encrypted (garbage) data.
- DMA_ATTR_CC_DECRYPTED for DMA mapping: Since the pages are already decrypted, the DMA API needs to be informed via DMA_ATTR_CC_DECRYPTED so it can map them correctly as unencrypted for device access.
On non-CoCo VMs, the system_cc_decrypted heap is not registered to prevent misuse by userspace that does not understand the security implications of explicitly decrypted memory.
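Illustratively, userspace can then treat the absence of the device node as the capability check (a sketch, not part of the patch):

	#include <unistd.h>

	static int have_cc_heap(void)
	{
		/* Registered only when cc_platform_has(CC_ATTR_MEM_ENCRYPT)
		 * and !CONFIG_HIGHMEM, per system_heap_create() below. */
		return access("/dev/dma_heap/system_cc_decrypted", F_OK) == 0;
	}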
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
---
v1->v2:
- fixed build errors on s390 by including mem_encrypt.h
- converted system heap flag implementation to a separate heap
---
 drivers/dma-buf/heaps/system_heap.c | 103 ++++++++++++++++++++++++++--
 include/linux/dma-heap.h            |   1 +
 include/uapi/linux/dma-heap.h       |   3 +-
 3 files changed, 100 insertions(+), 7 deletions(-)
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index b3650d8fd651..a525e9aaaffa 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -10,17 +10,25 @@
  * Andrew F. Davis <afd@ti.com>
  */

+#include <linux/cc_platform.h>
 #include <linux/dma-buf.h>
 #include <linux/dma-mapping.h>
 #include <linux/dma-heap.h>
 #include <linux/err.h>
 #include <linux/highmem.h>
+#include <linux/mem_encrypt.h>
 #include <linux/mm.h>
+#include <linux/set_memory.h>
 #include <linux/module.h>
+#include <linux/pgtable.h>
 #include <linux/scatterlist.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>

+struct system_heap_priv {
+	bool decrypted;
+};
+
 struct system_heap_buffer {
 	struct dma_heap *heap;
 	struct list_head attachments;
@@ -29,6 +37,7 @@ struct system_heap_buffer {
 	struct sg_table sg_table;
 	int vmap_cnt;
 	void *vaddr;
+	bool decrypted;
 };

 struct dma_heap_attachment {
@@ -36,6 +45,7 @@ struct dma_heap_attachment {
 	struct sg_table table;
 	struct list_head list;
 	bool mapped;
+	bool decrypted;
 };

 #define LOW_ORDER_GFP  (GFP_HIGHUSER | __GFP_ZERO)
@@ -52,6 +62,34 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, HIGH_ORDER_GFP, LOW_ORDER_GFP};
 static const unsigned int orders[] = {8, 4, 0};
 #define NUM_ORDERS ARRAY_SIZE(orders)

+static int system_heap_set_page_decrypted(struct page *page)
+{
+	unsigned long addr = (unsigned long)page_address(page);
+	unsigned int nr_pages = 1 << compound_order(page);
+	int ret;
+
+	ret = set_memory_decrypted(addr, nr_pages);
+	if (ret)
+		pr_warn_ratelimited("dma-buf system heap: failed to decrypt page at %p\n",
+				    page_address(page));
+
+	return ret;
+}
+
+static int system_heap_set_page_encrypted(struct page *page)
+{
+	unsigned long addr = (unsigned long)page_address(page);
+	unsigned int nr_pages = 1 << compound_order(page);
+	int ret;
+
+	ret = set_memory_encrypted(addr, nr_pages);
+	if (ret)
+		pr_warn_ratelimited("dma-buf system heap: failed to re-encrypt page at %p, leaking memory\n",
+				    page_address(page));
+
+	return ret;
+}
+
 static int dup_sg_table(struct sg_table *from, struct sg_table *to)
 {
 	struct scatterlist *sg, *new_sg;
@@ -90,6 +128,7 @@ static int system_heap_attach(struct dma_buf *dmabuf,
 	a->dev = attachment->dev;
 	INIT_LIST_HEAD(&a->list);
 	a->mapped = false;
+	a->decrypted = buffer->decrypted;

 	attachment->priv = a;

@@ -119,9 +158,11 @@ static struct sg_table *system_heap_map_dma_buf(struct dma_buf_attachment *attachment,
 {
 	struct dma_heap_attachment *a = attachment->priv;
 	struct sg_table *table = &a->table;
+	unsigned long attrs;
 	int ret;

-	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+	attrs = a->decrypted ? DMA_ATTR_CC_DECRYPTED : 0;
+	ret = dma_map_sgtable(attachment->dev, table, direction, attrs);
 	if (ret)
 		return ERR_PTR(ret);

@@ -188,8 +229,13 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 	unsigned long addr = vma->vm_start;
 	unsigned long pgoff = vma->vm_pgoff;
 	struct scatterlist *sg;
+	pgprot_t prot;
 	int i, ret;

+	prot = vma->vm_page_prot;
+	if (buffer->decrypted)
+		prot = pgprot_decrypted(prot);
+
 	for_each_sgtable_sg(table, sg, i) {
 		unsigned long n = sg->length >> PAGE_SHIFT;

@@ -206,8 +252,7 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 		if (addr + size > vma->vm_end)
 			size = vma->vm_end - addr;

-		ret = remap_pfn_range(vma, addr, page_to_pfn(page),
-				      size, vma->vm_page_prot);
+		ret = remap_pfn_range(vma, addr, page_to_pfn(page), size, prot);
 		if (ret)
 			return ret;

@@ -225,6 +270,7 @@ static void *system_heap_do_vmap(struct system_heap_buffer *buffer)
 	struct page **pages = vmalloc(sizeof(struct page *) * npages);
 	struct page **tmp = pages;
 	struct sg_page_iter piter;
+	pgprot_t prot;
 	void *vaddr;

 	if (!pages)
@@ -235,7 +281,10 @@ static void *system_heap_do_vmap(struct system_heap_buffer *buffer)
 		*tmp++ = sg_page_iter_page(&piter);
 	}

-	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+	prot = PAGE_KERNEL;
+	if (buffer->decrypted)
+		prot = pgprot_decrypted(prot);
+	vaddr = vmap(pages, npages, VM_MAP, prot);
 	vfree(pages);

 	if (!vaddr)
@@ -296,6 +345,14 @@ static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
 	for_each_sgtable_sg(table, sg, i) {
 		struct page *page = sg_page(sg);

+		/*
+		 * Intentionally leak pages that cannot be re-encrypted
+		 * to prevent decrypted memory from being reused.
+		 */
+		if (buffer->decrypted &&
+		    system_heap_set_page_encrypted(page))
+			continue;
+
 		__free_pages(page, compound_order(page));
 	}
 	sg_free_table(table);
@@ -347,6 +404,8 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 	unsigned long size_remaining = len;
 	unsigned int max_order = orders[0];
+	struct system_heap_priv *priv = dma_heap_get_drvdata(heap);
+	bool decrypted = priv->decrypted;
 	struct dma_buf *dmabuf;
 	struct sg_table *table;
 	struct scatterlist *sg;
@@ -362,6 +421,7 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
 	mutex_init(&buffer->lock);
 	buffer->heap = heap;
 	buffer->len = len;
+	buffer->decrypted = decrypted;

 	INIT_LIST_HEAD(&pages);
 	i = 0;
@@ -396,6 +456,14 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
 		list_del(&page->lru);
 	}

+	if (decrypted) {
+		for_each_sgtable_sg(table, sg, i) {
+			ret = system_heap_set_page_decrypted(sg_page(sg));
+			if (ret)
+				goto free_pages;
+		}
+	}
+
 	/* create the dmabuf */
 	exp_info.exp_name = dma_heap_get_name(heap);
 	exp_info.ops = &system_heap_buf_ops;
@@ -413,6 +481,13 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
 	for_each_sgtable_sg(table, sg, i) {
 		struct page *p = sg_page(sg);

+		/*
+		 * Intentionally leak pages that cannot be re-encrypted
+		 * to prevent decrypted memory from being reused.
+		 */
+		if (buffer->decrypted &&
+		    system_heap_set_page_encrypted(p))
+			continue;
 		__free_pages(p, compound_order(p));
 	}
 	sg_free_table(table);
@@ -428,6 +503,14 @@ static const struct dma_heap_ops system_heap_ops = {
 	.allocate = system_heap_allocate,
 };

+static struct system_heap_priv system_heap_priv = {
+	.decrypted = false,
+};
+
+static struct system_heap_priv system_heap_cc_decrypted_priv = {
+	.decrypted = true,
+};
+
 static int __init system_heap_create(void)
 {
 	struct dma_heap_export_info exp_info;
@@ -435,8 +518,18 @@ static int __init system_heap_create(void)

 	exp_info.name = "system";
 	exp_info.ops = &system_heap_ops;
-	exp_info.priv = NULL;
+	exp_info.priv = &system_heap_priv;
+
+	sys_heap = dma_heap_add(&exp_info);
+	if (IS_ERR(sys_heap))
+		return PTR_ERR(sys_heap);
+
+	if (IS_ENABLED(CONFIG_HIGHMEM) ||
+	    !cc_platform_has(CC_ATTR_MEM_ENCRYPT))
+		return 0;

+	exp_info.name = "system_cc_decrypted";
+	exp_info.priv = &system_heap_cc_decrypted_priv;
 	sys_heap = dma_heap_add(&exp_info);
 	if (IS_ERR(sys_heap))
 		return PTR_ERR(sys_heap);
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 648328a64b27..d97b668413c1 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -10,6 +10,7 @@
 #define _DMA_HEAPS_H

 #include <linux/types.h>
+#include <uapi/linux/dma-heap.h>

 struct dma_heap;

diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h
index a4cf716a49fa..ab95bb355ed5 100644
--- a/include/uapi/linux/dma-heap.h
+++ b/include/uapi/linux/dma-heap.h
@@ -18,8 +18,7 @@
 /* Valid FD_FLAGS are O_CLOEXEC, O_RDONLY, O_WRONLY, O_RDWR */
 #define DMA_HEAP_VALID_FD_FLAGS (O_CLOEXEC | O_ACCMODE)

-/* Currently no heap flags */
-#define DMA_HEAP_VALID_HEAP_FLAGS (0ULL)
+#define DMA_HEAP_VALID_HEAP_FLAGS (0)

 /**
  * struct dma_heap_allocation_data - metadata passed from userspace for
On Mon, Feb 23, 2026 at 1:51 AM Jiri Pirko <jiri@resnulli.us> wrote:
From: Jiri Pirko <jiri@nvidia.com>
Add a new "system_cc_decrypted" dma-buf heap to allow userspace to allocate decrypted (shared) memory for confidential computing (CoCo) VMs.
On CoCo VMs, guest memory is encrypted by default. The hardware uses an encryption bit in page table entries (C-bit on AMD SEV, "shared" bit on Intel TDX) to control whether a given memory access is encrypted or decrypted. The kernel's direct map is set up with encryption enabled, so pages returned by alloc_pages() are encrypted in the direct map by default. To make this memory usable for devices that do not support DMA to encrypted memory (no TDISP support), it has to be explicitly decrypted. A couple of things are needed to properly handle decrypted memory for the dma-buf use case:
- set_memory_decrypted() on the direct map after allocation: Besides clearing the encryption bit in the direct map PTEs, this also notifies the hypervisor about the page state change. On free, the inverse set_memory_encrypted() must be called before returning pages to the allocator. If re-encryption fails, pages are intentionally leaked to prevent decrypted memory from being reused as private.

- pgprot_decrypted() for userspace and kernel virtual mappings: Any new mapping of the decrypted pages, be it to userspace via mmap or to kernel vmalloc space via vmap, creates PTEs independent of the direct map. These must also have the encryption bit cleared, otherwise accesses through them would see encrypted (garbage) data.

- DMA_ATTR_CC_DECRYPTED for DMA mapping: Since the pages are already decrypted, the DMA API needs to be informed via DMA_ATTR_CC_DECRYPTED so it can map them correctly as unencrypted for device access.
On non-CoCo VMs, the system_cc_decrypted heap is not registered to prevent misuse by userspace that does not understand the security implications of explicitly decrypted memory.
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Thanks for reworking this! I've not reviewed it super closely, but I believe it resolves my objection on your first version.
Few nits/questions below.
@@ -296,6 +345,14 @@ static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
 	for_each_sgtable_sg(table, sg, i) {
 		struct page *page = sg_page(sg);

+		/*
+		 * Intentionally leak pages that cannot be re-encrypted
+		 * to prevent decrypted memory from being reused.
+		 */
+		if (buffer->decrypted &&
+		    system_heap_set_page_encrypted(page))
+			continue;
What are the conditions where this would fail? How much of an edge case is this? I fret this opens a DoS vector if one is able to allocate from this heap and then stress the system when doing the free.
Should there be some global list of leaked decrypted pages such that the mm subsystem could try again later to recover these?
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 648328a64b27..d97b668413c1 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -10,6 +10,7 @@
 #define _DMA_HEAPS_H

 #include <linux/types.h>
+#include <uapi/linux/dma-heap.h>

 struct dma_heap;

diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h
index a4cf716a49fa..ab95bb355ed5 100644
--- a/include/uapi/linux/dma-heap.h
+++ b/include/uapi/linux/dma-heap.h
@@ -18,8 +18,7 @@
 /* Valid FD_FLAGS are O_CLOEXEC, O_RDONLY, O_WRONLY, O_RDWR */
 #define DMA_HEAP_VALID_FD_FLAGS (O_CLOEXEC | O_ACCMODE)

-/* Currently no heap flags */
-#define DMA_HEAP_VALID_HEAP_FLAGS (0ULL)
+#define DMA_HEAP_VALID_HEAP_FLAGS (0)

 /**
  * struct dma_heap_allocation_data - metadata passed from userspace for
Are these header changes still necessary?
thanks
-john
Mon, Feb 23, 2026 at 07:33:07PM +0100, jstultz@google.com wrote:
On Mon, Feb 23, 2026 at 1:51 AM Jiri Pirko <jiri@resnulli.us> wrote:
From: Jiri Pirko <jiri@nvidia.com>
Add a new "system_cc_decrypted" dma-buf heap to allow userspace to allocate decrypted (shared) memory for confidential computing (CoCo) VMs.
On CoCo VMs, guest memory is encrypted by default. The hardware uses an encryption bit in page table entries (C-bit on AMD SEV, "shared" bit on Intel TDX) to control whether a given memory access is encrypted or decrypted. The kernel's direct map is set up with encryption enabled, so pages returned by alloc_pages() are encrypted in the direct map by default. To make this memory usable for devices that do not support DMA to encrypted memory (no TDISP support), it has to be explicitly decrypted. A couple of things are needed to properly handle decrypted memory for the dma-buf use case:
- set_memory_decrypted() on the direct map after allocation: Besides clearing the encryption bit in the direct map PTEs, this also notifies the hypervisor about the page state change. On free, the inverse set_memory_encrypted() must be called before returning pages to the allocator. If re-encryption fails, pages are intentionally leaked to prevent decrypted memory from being reused as private.

- pgprot_decrypted() for userspace and kernel virtual mappings: Any new mapping of the decrypted pages, be it to userspace via mmap or to kernel vmalloc space via vmap, creates PTEs independent of the direct map. These must also have the encryption bit cleared, otherwise accesses through them would see encrypted (garbage) data.

- DMA_ATTR_CC_DECRYPTED for DMA mapping: Since the pages are already decrypted, the DMA API needs to be informed via DMA_ATTR_CC_DECRYPTED so it can map them correctly as unencrypted for device access.
On non-CoCo VMs, the system_cc_decrypted heap is not registered to prevent misuse by userspace that does not understand the security implications of explicitly decrypted memory.
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Thanks for reworking this! I've not reviewed it super closely, but I believe it resolves my objection on your first version.
Few nits/questions below.
@@ -296,6 +345,14 @@ static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
 	for_each_sgtable_sg(table, sg, i) {
 		struct page *page = sg_page(sg);

+		/*
+		 * Intentionally leak pages that cannot be re-encrypted
+		 * to prevent decrypted memory from being reused.
+		 */
+		if (buffer->decrypted &&
+		    system_heap_set_page_encrypted(page))
+			continue;

What are the conditions where this would fail? How much of an edge case is this? I fret this opens a DoS vector if one is able to allocate from this heap and then stress the system when doing the free.
From what I can see, failure of set_memory_encrypted() is quite rare. I don't see any real DoS scenario here. All the failures seem to be either theoretical (sanity checks, malicious VMM) or concurrent kexec execution in the case of x86/PAT.
Should there be some global list of leaked decrypted pages such that the mm subsystem could try again later to recover these?
swiotlb does the same non-recoverable leakage. I believe it is not worth implementing this at this time.
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 648328a64b27..d97b668413c1 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -10,6 +10,7 @@
 #define _DMA_HEAPS_H

 #include <linux/types.h>
+#include <uapi/linux/dma-heap.h>

 struct dma_heap;

diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h
index a4cf716a49fa..ab95bb355ed5 100644
--- a/include/uapi/linux/dma-heap.h
+++ b/include/uapi/linux/dma-heap.h
@@ -18,8 +18,7 @@
 /* Valid FD_FLAGS are O_CLOEXEC, O_RDONLY, O_WRONLY, O_RDWR */
 #define DMA_HEAP_VALID_FD_FLAGS (O_CLOEXEC | O_ACCMODE)

-/* Currently no heap flags */
-#define DMA_HEAP_VALID_HEAP_FLAGS (0ULL)
+#define DMA_HEAP_VALID_HEAP_FLAGS (0)

 /**
  * struct dma_heap_allocation_data - metadata passed from userspace for
Are these header changes still necessary?
Oops, leftovers. Will remove.
Thanks!
thanks
-john
On Tue, Feb 24, 2026 at 09:32:01AM +0100, Jiri Pirko wrote:
Should there be some global list of leaked decrypted pages such that the mm subsystem could try again later to recover these?
swiotlb does the same non-recoverable leakage. I believe it is not worth implementing this at this time.
Yeah, I agree
Looking at the callers, the purpose of the return code is to trigger the memory leak, because there is no way to recover from this. We have no idea when in the future the hypervisor might permit the operation, and we have no way to keep track of the memory until it does.
It is not a great API design at all; it only makes sense from the hypervisor's perspective, where it can run out of memory trying to make these changes.
Jason