This RFC builds on T.J. Mercier's earlier series [1] which added a memory.stat counter for exported dma-bufs and a binder-backed mechanism to transfer charges between cgroups.
The first commit is taken almost verbatim from TJ's series: it introduces MEMCG_DMABUF as a dedicated per-cgroup stat, so that the total exported dma-buf footprint is visible both system-wide (via the root cgroup) and per-application (via per-process cgroups). This avoids the overhead of DMABUF_SYSFS_STATS and integrates naturally into the existing cgroup memory hierarchy.
The rest of the series departs from TJ's approach. The first commit introduces the memcg stat infrastructure for dma-bufs, but the export-time charging it adds in dma_buf_export() is superseded later in the series: charging instead happens at dma_heap_ioctl_allocate() time, via a new charge_pid_fd field in struct dma_heap_allocation_data. The allocator opens a pidfd for its client (e.g., from binder's sender_pid), passes it to the ioctl, and the kernel charges the buffer directly to the client's cgroup at allocation time, so no transfer step is needed.
This decouples the accounting path from binder entirely: any allocator that knows its client's PID can use the pid_fd mechanism regardless of the IPC transport in use.
The cross-cgroup charging capability requires access control. Patches #3 and #4 add a generic LSM hook (security_dma_heap_alloc) and an SELinux implementation based on a new dma_heap object class with a charge_to permission, so policy authors can express which domains are allowed to charge memory to another domain's cgroup.
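A policy sketch using those names might look as follows; the dma_heap class and charge_to permission come from this series, while the domain names are hypothetical placeholders:

```
# Allow a central allocator domain to charge dma-heap allocations
# to a client app domain's cgroup via charge_pid_fd:
allow allocator_t appdomain_t:dma_heap charge_to;
```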
The last patch adds selftests that exercise the new charge_pid_fd field.
We are sending it as an RFC to spark broader discussion. It may or may not be the right path forward, and we welcome feedback on the trade-offs.
Collision note: Eric Chanudet's series [2] adds __GFP_ACCOUNT to system_heap page allocations as an opt-in module parameter. That approach charges pages to the allocator's own kmem, which overlaps with MEMCG_DMABUF. This series explicitly removes __GFP_ACCOUNT from system heap allocations and routes all accounting through the MEMCG_DMABUF path to avoid double-counting.
[1] https://lore.kernel.org/cgroups/20230109213809.418135-1-tjmercier@google.com...
[2] https://lore.kernel.org/r/20260113-dmabuf-heap-system-memcg-v2-0-e85722cc2f2...
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
Albert Esteve (4):
      dma-heap: charge dma-buf memory via explicit memcg
      security: dma-heap: Add dma_heap_alloc LSM hook
      selinux: Restrict cross-cgroup dma-heap charging
      selftests/dmabuf-heaps: Add dma-buf memcg accounting tests
T.J. Mercier (1):
      memcg: Track exported dma-buffers
 Documentation/admin-guide/cgroup-v2.rst            |   5 +
 drivers/dma-buf/dma-buf.c                          |   7 +
 drivers/dma-buf/dma-heap.c                         |  54 +++++-
 drivers/dma-buf/heaps/system_heap.c                |   2 -
 include/linux/dma-buf.h                            |   4 +
 include/linux/lsm_hook_defs.h                      |   1 +
 include/linux/memcontrol.h                         |  37 ++++
 include/linux/security.h                           |   7 +
 include/uapi/linux/dma-heap.h                      |   6 +
 mm/memcontrol.c                                    |  19 ++
 security/security.c                                |  16 ++
 security/selinux/hooks.c                           |   7 +
 security/selinux/include/classmap.h                |   1 +
 tools/testing/selftests/cgroup/Makefile            |   2 +-
 tools/testing/selftests/cgroup/test_memcontrol.c   | 143 +++++++++++++-
 tools/testing/selftests/dmabuf-heaps/config        |   1 +
 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 126 ++++++++++++-
 tools/testing/selftests/dmabuf-heaps/vmtest.sh     | 205 +++++++++++++++++++++
 18 files changed, 633 insertions(+), 10 deletions(-)
---
base-commit: 74fe02ce122a6103f207d29fafc8b3a53de6abaf
change-id: 20260508-v2_20230123_tjmercier_google_com-f44fcfb16530
Best regards,
From: "T.J. Mercier" <tjmercier@google.com>
When a buffer is exported to userspace, use memcg to attribute the buffer to the allocating cgroup until all buffer references are released.
Unlike the dmabuf sysfs stats implementation, this memcg accounting avoids contention over the kernfs_rwsem incurred when creating or removing nodes.
Signed-off-by: T.J. Mercier <tjmercier@google.com>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 Documentation/admin-guide/cgroup-v2.rst |  4 ++++
 drivers/dma-buf/dma-buf.c               | 13 ++++++++++++
 include/linux/dma-buf.h                 |  4 ++++
 include/linux/memcontrol.h              | 37 +++++++++++++++++++++++++++++++++
 mm/memcontrol.c                         | 19 +++++++++++++++++
 5 files changed, 77 insertions(+)
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst index 6efd0095ed995..8bdbc2e866430 100644 --- a/Documentation/admin-guide/cgroup-v2.rst +++ b/Documentation/admin-guide/cgroup-v2.rst @@ -1635,6 +1635,10 @@ The following nested keys are defined. Amount of memory used for storing in-kernel data structures.
+	dmabuf (npn)
+		Amount of memory used for exported DMA buffers allocated by the
+		cgroup. Stays with the allocating cgroup regardless of how the
+		buffer is shared.
+
 	  workingset_refault_anon
		Number of refaults of previously evicted anonymous pages.
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 71f37544a5c61..ce02377f48908 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -14,6 +14,7 @@ #include <linux/fs.h> #include <linux/slab.h> #include <linux/dma-buf.h> +#include <linux/memcontrol.h> #include <linux/dma-fence.h> #include <linux/dma-fence-unwrap.h> #include <linux/anon_inodes.h> @@ -180,6 +181,9 @@ static void dma_buf_release(struct dentry *dentry) */ BUG_ON(dmabuf->cb_in.active || dmabuf->cb_out.active);
+ mem_cgroup_uncharge_dmabuf(dmabuf->memcg, PAGE_ALIGN(dmabuf->size) / PAGE_SIZE); + mem_cgroup_put(dmabuf->memcg); + dmabuf->ops->release(dmabuf);
if (dmabuf->resv == (struct dma_resv *)&dmabuf[1]) @@ -760,6 +764,13 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) dmabuf->resv = resv; }
+ dmabuf->memcg = get_mem_cgroup_from_mm(current->mm); + if (!mem_cgroup_charge_dmabuf(dmabuf->memcg, PAGE_ALIGN(dmabuf->size) / PAGE_SIZE, + GFP_KERNEL)) { + ret = -ENOMEM; + goto err_memcg; + } + file->private_data = dmabuf; file->f_path.dentry->d_fsdata = dmabuf; dmabuf->file = file; @@ -770,6 +781,8 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
return dmabuf;
+err_memcg: + mem_cgroup_put(dmabuf->memcg); err_file: fput(file); err_module: diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index d1203da56fc5f..d9f1ccb51c60e 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -27,6 +27,7 @@ struct device; struct dma_buf; struct dma_buf_attachment; +struct mem_cgroup;
/** * struct dma_buf_ops - operations possible on struct dma_buf @@ -429,6 +430,9 @@ struct dma_buf {
__poll_t active; } cb_in, cb_out; + + /** @memcg: the cgroup to which this buffer is currently attributed */ + struct mem_cgroup *memcg; };
/** diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index dc3fa687759b4..10068a833ad9e 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -39,6 +39,7 @@ enum memcg_stat_item { MEMCG_ZSWAP_B, MEMCG_ZSWAPPED, MEMCG_ZSWAP_INCOMP, + MEMCG_DMABUF, MEMCG_NR_STAT, };
@@ -649,6 +650,24 @@ int mem_cgroup_charge_hugetlb(struct folio* folio, gfp_t gfp); int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm, gfp_t gfp, swp_entry_t entry);
+/** + * mem_cgroup_charge_dmabuf - Charge dma-buf memory to a cgroup and update stat counter + * @memcg: memcg to charge + * @nr_pages: number of pages to charge + * @gfp_mask: reclaim mode + * + * Charges @nr_pages to @memcg. Returns %true if the charge fit within + * @memcg's configured limit, %false if it doesn't. + */ +bool __mem_cgroup_charge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages, gfp_t gfp_mask); +static inline bool mem_cgroup_charge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages, + gfp_t gfp_mask) +{ + if (mem_cgroup_disabled()) + return true; + return __mem_cgroup_charge_dmabuf(memcg, nr_pages, gfp_mask); +} + void __mem_cgroup_uncharge(struct folio *folio);
/** @@ -664,6 +683,14 @@ static inline void mem_cgroup_uncharge(struct folio *folio) __mem_cgroup_uncharge(folio); }
+void __mem_cgroup_uncharge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages); +static inline void mem_cgroup_uncharge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages) +{ + if (mem_cgroup_disabled()) + return; + __mem_cgroup_uncharge_dmabuf(memcg, nr_pages); +} + void __mem_cgroup_uncharge_folios(struct folio_batch *folios); static inline void mem_cgroup_uncharge_folios(struct folio_batch *folios) { @@ -1142,10 +1169,20 @@ static inline int mem_cgroup_swapin_charge_folio(struct folio *folio, return 0; }
+static inline bool mem_cgroup_charge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages, + gfp_t gfp_mask) +{ + return true; +} + static inline void mem_cgroup_uncharge(struct folio *folio) { }
+static inline void mem_cgroup_uncharge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages) +{ +} + static inline void mem_cgroup_uncharge_folios(struct folio_batch *folios) { } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index c03d4787d4668..15cee13d3ccd6 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -433,6 +433,7 @@ static const unsigned int memcg_stat_items[] = { MEMCG_ZSWAP_B, MEMCG_ZSWAPPED, MEMCG_ZSWAP_INCOMP, + MEMCG_DMABUF, };
#define NR_MEMCG_NODE_STAT_ITEMS ARRAY_SIZE(memcg_node_stat_items) @@ -1580,6 +1581,7 @@ static const struct memory_stat memory_stats[] = { #ifdef CONFIG_HUGETLB_PAGE { "hugetlb", NR_HUGETLB }, #endif + { "dmabuf", MEMCG_DMABUF },
/* The memory events */ { "workingset_refault_anon", WORKINGSET_REFAULT_ANON }, @@ -5399,6 +5401,23 @@ void mem_cgroup_flush_workqueue(void) flush_workqueue(memcg_wq); }
+bool __mem_cgroup_charge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages, gfp_t gfp_mask) +{ + if (try_charge(memcg, gfp_mask, nr_pages) == 0) { + mod_memcg_state(memcg, MEMCG_DMABUF, nr_pages); + return true; + } + + return false; +} + +void __mem_cgroup_uncharge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages) +{ + mod_memcg_state(memcg, MEMCG_DMABUF, -nr_pages); + if (!mem_cgroup_is_root(memcg)) + refill_stock(memcg, nr_pages); +} + static int __init cgroup_memory(char *s) { char *token;
On embedded platforms a central process often allocates dma-buf memory on behalf of client applications. Without a way to attribute the charge to the requesting client's cgroup, the cost lands on the allocator, making per-cgroup memory limits ineffective for the actual consumers.
Add charge_pid_fd to struct dma_heap_allocation_data. When set to a valid pidfd, DMA_HEAP_IOCTL_ALLOC resolves the target task's memcg and charges the buffer there via mem_cgroup_charge_dmabuf() inside dma_heap_buffer_alloc(). Without charge_pid_fd, and with the mem_accounting module parameter enabled, the buffer is charged to the allocator's own cgroup.
Additionally, commit 3c227be90659 ("dma-buf: system_heap: account for system heap allocation in memcg") adds __GFP_ACCOUNT to system-heap page allocations. Keeping __GFP_ACCOUNT would charge the same pages twice (once to kmem, once to MEMCG_DMABUF), so remove it and route all accounting through the single MEMCG_DMABUF path.
Usage examples:
1. Central allocator charging to a client at allocation time. The allocator knows the client's PID (e.g., from binder's sender_pid) and uses pidfd to attribute the charge:
    pid_t client_pid = txn->sender_pid;
    int pidfd = pidfd_open(client_pid, 0);

    struct dma_heap_allocation_data alloc = {
        .len = buffer_size,
        .fd_flags = O_RDWR | O_CLOEXEC,
        .charge_pid_fd = pidfd,
    };
    ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc);
    close(pidfd);
    /* alloc.fd is now charged to the client's cgroup */
2. Default allocation (no pidfd, mem_accounting=1). When charge_pid_fd is not set and the mem_accounting module parameter is enabled, the buffer is charged to the allocator's own cgroup:
    struct dma_heap_allocation_data alloc = {
        .len = buffer_size,
        .fd_flags = O_RDWR | O_CLOEXEC,
    };
    ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc);
    /* charged to current process's cgroup */
Current limitations:
- Single-owner model: a dma-buf carries one memcg charge regardless of
  how many processes share it. This means only the first owner (and
  exporter) of the shared buffer bears the charge.
- Only memcg accounting is supported. While this makes sense for system
  heap buffers, other heaps (e.g., CMA heaps) will also require charging
  against the dmem controller.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 Documentation/admin-guide/cgroup-v2.rst |  5 ++--
 drivers/dma-buf/dma-buf.c               | 16 ++++---------
 drivers/dma-buf/dma-heap.c              | 42 ++++++++++++++++++++++++++++++---
 drivers/dma-buf/heaps/system_heap.c     |  2 --
 include/uapi/linux/dma-heap.h           |  6 +++++
 5 files changed, 53 insertions(+), 18 deletions(-)
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst index 8bdbc2e866430..824d269531eb1 100644 --- a/Documentation/admin-guide/cgroup-v2.rst +++ b/Documentation/admin-guide/cgroup-v2.rst @@ -1636,8 +1636,9 @@ The following nested keys are defined. structures.
 	dmabuf (npn)
-		Amount of memory used for exported DMA buffers allocated by the
-		cgroup. Stays with the allocating cgroup regardless of how the
-		buffer is shared.
+		Amount of memory used for exported DMA buffers allocated by or on
+		behalf of the cgroup. Stays with the allocating cgroup regardless
+		of how the buffer is shared.
workingset_refault_anon Number of refaults of previously evicted anonymous pages. diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index ce02377f48908..23fb758b78297 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -181,8 +181,11 @@ static void dma_buf_release(struct dentry *dentry) */ BUG_ON(dmabuf->cb_in.active || dmabuf->cb_out.active);
- mem_cgroup_uncharge_dmabuf(dmabuf->memcg, PAGE_ALIGN(dmabuf->size) / PAGE_SIZE); - mem_cgroup_put(dmabuf->memcg); + if (dmabuf->memcg) { + mem_cgroup_uncharge_dmabuf(dmabuf->memcg, + PAGE_ALIGN(dmabuf->size) / PAGE_SIZE); + mem_cgroup_put(dmabuf->memcg); + }
dmabuf->ops->release(dmabuf);
@@ -764,13 +767,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) dmabuf->resv = resv; }
- dmabuf->memcg = get_mem_cgroup_from_mm(current->mm); - if (!mem_cgroup_charge_dmabuf(dmabuf->memcg, PAGE_ALIGN(dmabuf->size) / PAGE_SIZE, - GFP_KERNEL)) { - ret = -ENOMEM; - goto err_memcg; - } - file->private_data = dmabuf; file->f_path.dentry->d_fsdata = dmabuf; dmabuf->file = file; @@ -781,8 +777,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
return dmabuf;
-err_memcg: - mem_cgroup_put(dmabuf->memcg); err_file: fput(file); err_module: diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c index ac5f8685a6494..ff6e259afcdc0 100644 --- a/drivers/dma-buf/dma-heap.c +++ b/drivers/dma-buf/dma-heap.c @@ -7,13 +7,17 @@ */
#include <linux/cdev.h> +#include <linux/cgroup.h> #include <linux/device.h> #include <linux/dma-buf.h> #include <linux/dma-heap.h> +#include <linux/memcontrol.h> +#include <linux/sched/mm.h> #include <linux/err.h> #include <linux/export.h> #include <linux/list.h> #include <linux/nospec.h> +#include <linux/pidfd.h> #include <linux/syscalls.h> #include <linux/uaccess.h> #include <linux/xarray.h> @@ -55,10 +59,12 @@ MODULE_PARM_DESC(mem_accounting, "Enable cgroup-based memory accounting for dma-buf heap allocations (default=false).");
static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len, - u32 fd_flags, - u64 heap_flags) + u32 fd_flags, u64 heap_flags, + struct mem_cgroup *charge_to) { struct dma_buf *dmabuf; + unsigned int nr_pages; + struct mem_cgroup *memcg = charge_to; int fd;
/* @@ -73,6 +79,22 @@ static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len, if (IS_ERR(dmabuf)) return PTR_ERR(dmabuf);
+ nr_pages = len / PAGE_SIZE; + + if (memcg) + css_get(&memcg->css); + else if (mem_accounting) + memcg = get_mem_cgroup_from_mm(current->mm); + + if (memcg) { + if (!mem_cgroup_charge_dmabuf(memcg, nr_pages, GFP_KERNEL)) { + mem_cgroup_put(memcg); + dma_buf_put(dmabuf); + return -ENOMEM; + } + dmabuf->memcg = memcg; + } + fd = dma_buf_fd(dmabuf, fd_flags); if (fd < 0) { dma_buf_put(dmabuf); @@ -102,6 +124,9 @@ static long dma_heap_ioctl_allocate(struct file *file, void *data) { struct dma_heap_allocation_data *heap_allocation = data; struct dma_heap *heap = file->private_data; + struct mem_cgroup *memcg = NULL; + struct task_struct *task; + unsigned int pidfd_flags; int fd;
if (heap_allocation->fd) @@ -113,9 +138,20 @@ static long dma_heap_ioctl_allocate(struct file *file, void *data) if (heap_allocation->heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS) return -EINVAL;
+ if (heap_allocation->charge_pid_fd) { + task = pidfd_get_task(heap_allocation->charge_pid_fd, &pidfd_flags); + if (IS_ERR(task)) + return PTR_ERR(task); + + memcg = get_mem_cgroup_from_mm(task->mm); + put_task_struct(task); + } + fd = dma_heap_buffer_alloc(heap, heap_allocation->len, heap_allocation->fd_flags, - heap_allocation->heap_flags); + heap_allocation->heap_flags, + memcg); + mem_cgroup_put(memcg); if (fd < 0) return fd;
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c index 03c2b87cb1112..95d7688167b93 100644 --- a/drivers/dma-buf/heaps/system_heap.c +++ b/drivers/dma-buf/heaps/system_heap.c @@ -385,8 +385,6 @@ static struct page *alloc_largest_available(unsigned long size, if (max_order < orders[i]) continue; flags = order_flags[i]; - if (mem_accounting) - flags |= __GFP_ACCOUNT; page = alloc_pages(flags, orders[i]); if (!page) continue; diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h index a4cf716a49fa6..e02b0f8cbc6a1 100644 --- a/include/uapi/linux/dma-heap.h +++ b/include/uapi/linux/dma-heap.h @@ -29,6 +29,10 @@ * handle to the allocated dma-buf * @fd_flags: file descriptor flags used when allocating * @heap_flags: flags passed to heap + * @charge_pid_fd: optional pidfd of the process whose cgroup should be + * charged for this allocation; 0 means charge the calling + * process's cgroup + * @__padding: reserved, must be zero * * Provided by userspace as an argument to the ioctl */ @@ -37,6 +41,8 @@ struct dma_heap_allocation_data { __u32 fd; __u32 fd_flags; __u64 heap_flags; + __u32 charge_pid_fd; + __u32 __padding; };
#define DMA_HEAP_IOC_MAGIC 'H'
On 5/12/26 11:10, Albert Esteve wrote:
On embedded platforms a central process often allocates dma-buf memory on behalf of client applications. Without a way to attribute the charge to the requesting client's cgroup, the cost lands on the allocator, making per-cgroup memory limits ineffective for the actual consumers.
Add charge_pid_fd to struct dma_heap_allocation_data. When set to a valid pidfd, DMA_HEAP_IOCTL_ALLOC resolves the target task's memcg and charges the buffer there via mem_cgroup_charge_dmabuf() inside dma_heap_buffer_alloc(). Without charge_pid_fd, and with the mem_accounting module parameter enabled, the buffer is charged to the allocator's own cgroup.
Additionally, commit 3c227be90659 ("dma-buf: system_heap: account for system heap allocation in memcg") adds __GFP_ACCOUNT to system-heap page allocations. Keeping __GFP_ACCOUNT would charge the same pages twice (once to kmem, once to MEMCG_DMABUF), thus remove it and route all accounting through a single MEMCG_DMABUF path.
Usage examples:
Central allocator charging to a client at allocation time. The allocator knows the client's PID (e.g., from binder's sender_pid) and uses pidfd to attribute the charge:
pid_t client_pid = txn->sender_pid; int pidfd = pidfd_open(client_pid, 0);
struct dma_heap_allocation_data alloc = { .len = buffer_size, .fd_flags = O_RDWR | O_CLOEXEC, .charge_pid_fd = pidfd, }; ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc); close(pidfd); /* alloc.fd is now charged to client's cgroup */
Default allocation (no pidfd, mem_accounting=1). When charge_pid_fd is not set and the mem_accounting module parameter is enabled, the buffer is charged to the allocator's own cgroup:
struct dma_heap_allocation_data alloc = { .len = buffer_size, .fd_flags = O_RDWR | O_CLOEXEC, }; ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc); /* charged to current process's cgroup */
Current limitations:
- Single-owner model: a dma-buf carries one memcg charge regardless of how many processes share it. Means only the first owner (and exporter) of the shared buffer bears the charge.
- Only memcg accounting supported. While this makes sense for system heap buffers, other heaps (e.g., CMA heaps) will require selectively charging also for the dmem controller.
Well, that doesn't look so bad; it at least seems to tackle the problem at hand for Android and some other embedded use cases.
I'm just not sure if this is future proof and will work for all use cases, e.g. cloud gaming, native contexts for automotive etc...
Essentially the problem boils down to two limitations:
1) a piece of memory can only be charged to one cgroup; the framework
   doesn't have a concept of charging shared memory to multiple groups
2) when memory references in the form of file descriptors are passed
   between applications, we have no way of changing the accounting to a
   different cgroup
The passing of the memory reference already has a well-defined uAPI. If we could solve those two limitations, we would not only solve the problem without introducing new uAPI (with its potential new security risks), but also solve it for all other use cases that pass file descriptors, e.g. memfd, accel and GPU drivers.
On the other hand, it is really nice to finally see this tackled, at least for DMA-buf heaps. On the GPU side I saw just another try of a driver doing some kind of special driver-specific accounting to solve this a few weeks ago. And to be honest, such single-driver island approaches have a tendency to break more often than they work correctly.
Regards, Christian.
On Tue, May 12, 2026 at 3:14 AM Christian König christian.koenig@amd.com wrote:
On 5/12/26 11:10, Albert Esteve wrote:
On embedded platforms a central process often allocates dma-buf memory on behalf of client applications. Without a way to attribute the charge to the requesting client's cgroup, the cost lands on the allocator, making per-cgroup memory limits ineffective for the actual consumers.
Add charge_pid_fd to struct dma_heap_allocation_data. When set to a valid pidfd, DMA_HEAP_IOCTL_ALLOC resolves the target task's memcg and charges the buffer there via mem_cgroup_charge_dmabuf() inside dma_heap_buffer_alloc(). Without charge_pid_fd, and with the mem_accounting module parameter enabled, the buffer is charged to the allocator's own cgroup.
Additionally, commit 3c227be90659 ("dma-buf: system_heap: account for system heap allocation in memcg") adds __GFP_ACCOUNT to system-heap page allocations. Keeping __GFP_ACCOUNT would charge the same pages twice (once to kmem, once to MEMCG_DMABUF), thus remove it and route all accounting through a single MEMCG_DMABUF path.
Usage examples:
Central allocator charging to a client at allocation time. The allocator knows the client's PID (e.g., from binder's sender_pid) and uses pidfd to attribute the charge:
pid_t client_pid = txn->sender_pid; int pidfd = pidfd_open(client_pid, 0);
struct dma_heap_allocation_data alloc = { .len = buffer_size, .fd_flags = O_RDWR | O_CLOEXEC, .charge_pid_fd = pidfd, }; ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc); close(pidfd); /* alloc.fd is now charged to client's cgroup */
Default allocation (no pidfd, mem_accounting=1). When charge_pid_fd is not set and the mem_accounting module parameter is enabled, the buffer is charged to the allocator's own cgroup:
struct dma_heap_allocation_data alloc = { .len = buffer_size, .fd_flags = O_RDWR | O_CLOEXEC, }; ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc); /* charged to current process's cgroup */
Current limitations:
- Single-owner model: a dma-buf carries one memcg charge regardless of how many processes share it. Means only the first owner (and exporter) of the shared buffer bears the charge.
- Only memcg accounting supported. While this makes sense for system heap buffers, other heaps (e.g., CMA heaps) will require selectively charging also for the dmem controller.
Well that doesn't looks soo bad, it at least seems to tackle the problem at hand for Android and some of other embedded use cases.
Yeah I think this might work. I know of 3 cases, and it trivially solves the first two. The third requires some work on our end to extend our userspace interfaces to include the pidfd but it seems doable. I'm checking with our graphics folks.
1) Direct allocation from user (e.g. app -> allocation ioctl on /dev/dma_heap/foo) No changes required to userspace. mem_accounting=1 charges the app.
2) Single hop remote allocation (e.g. app -> AHardwareBuffer_allocate -> gralloc) gralloc has the caller's pid as described in the commit message. Open a pidfd and pass it in the dma_heap_allocation_data.
3) Double hop remote allocation (e.g. app -> dequeueBuffer -> SurfaceFlinger -> gralloc) In this case gralloc knows SurfaceFlinger's pid, but not the app's. So we need to add the app's pidfd to the SurfaceFlinger -> gralloc interface, or transfer the memcg charge from SurfaceFlinger to the app after the allocation. It'd be nice to avoid the charge transfer option entirely, but if we need it that doesn't seem so bad in this case because it's a bulk charge for the entire dmabuf rather than per-page. So the exporter doesn't need to get involved (we wouldn't need a new dma_buf_op) and we wouldn't have to worry about looping and locking for each page.
I'm just not sure if this is future prove and will work for all use cases, e.g. cloud gaming, native context for automotive etc...
Essentially the problem boils down to two limitations:
- a piece of memory can only be charged to one cgroup, the framework doesn't has a concept of charging shared memory to multiple groups
Yup, memcg already has this problem with pagecache and shmem.
- when memory references in the form of file descriptors are passed between applications we have no way of changing the accounting to a different cgroup
The passing of the memory reference already has a well defined uAPI and if we could solve those two limitations we not only solve the problem without introducing new uAPI (with potential new security risks) but also solve it for all other use cases which uses file descriptors as well as. E.g. memfd, accel and GPU drivers etc...
On the other hand it is really nice to finally see this tackled for at least DMA-buf heaps.
I have a question about this part. Albert, I guess you are interested only in accounting dmabuf-heap allocations, or do you expect to add __GFP_ACCOUNT or mem_cgroup_charge_dmabuf calls to other non-dmabuf-heap exporters?
On the GPU side I have seen yet another driver doing some kind of special driver-specific accounting to solve this just a few weeks ago. And to be honest, such single-driver island approaches have a tendency to break more often than they work correctly.
Regards, Christian.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 Documentation/admin-guide/cgroup-v2.rst |  5 ++--
 drivers/dma-buf/dma-buf.c               | 16 ++++---------
 drivers/dma-buf/dma-heap.c              | 42 ++++++++++++++++++++++++++++++---
 drivers/dma-buf/heaps/system_heap.c     |  2 --
 include/uapi/linux/dma-heap.h           |  6 +++++
 5 files changed, 53 insertions(+), 18 deletions(-)
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 8bdbc2e866430..824d269531eb1 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1636,8 +1636,9 @@ The following nested keys are defined.
	  structures.

	  dmabuf (npn)
-		Amount of memory used for exported DMA buffers allocated by the cgroup.
-		Stays with the allocating cgroup regardless of how the buffer is shared.
+		Amount of memory used for exported DMA buffers allocated by or on
+		behalf of the cgroup. Stays with the allocating cgroup regardless
+		of how the buffer is shared.

	  workingset_refault_anon
		Number of refaults of previously evicted anonymous pages.
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index ce02377f48908..23fb758b78297 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -181,8 +181,11 @@ static void dma_buf_release(struct dentry *dentry)
	 */
	BUG_ON(dmabuf->cb_in.active || dmabuf->cb_out.active);

-	mem_cgroup_uncharge_dmabuf(dmabuf->memcg, PAGE_ALIGN(dmabuf->size) / PAGE_SIZE);
-	mem_cgroup_put(dmabuf->memcg);
+	if (dmabuf->memcg) {
+		mem_cgroup_uncharge_dmabuf(dmabuf->memcg,
+					   PAGE_ALIGN(dmabuf->size) / PAGE_SIZE);
+		mem_cgroup_put(dmabuf->memcg);
+	}

	dmabuf->ops->release(dmabuf);

@@ -764,13 +767,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
		dmabuf->resv = resv;
	}

-	dmabuf->memcg = get_mem_cgroup_from_mm(current->mm);
-	if (!mem_cgroup_charge_dmabuf(dmabuf->memcg, PAGE_ALIGN(dmabuf->size) / PAGE_SIZE,
-				      GFP_KERNEL)) {
-		ret = -ENOMEM;
-		goto err_memcg;
-	}
-
	file->private_data = dmabuf;
	file->f_path.dentry->d_fsdata = dmabuf;
	dmabuf->file = file;
@@ -781,8 +777,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)

	return dmabuf;

-err_memcg:
-	mem_cgroup_put(dmabuf->memcg);
 err_file:
	fput(file);
 err_module:
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index ac5f8685a6494..ff6e259afcdc0 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -7,13 +7,17 @@
  */
 #include <linux/cdev.h>
+#include <linux/cgroup.h>
 #include <linux/device.h>
 #include <linux/dma-buf.h>
 #include <linux/dma-heap.h>
+#include <linux/memcontrol.h>
+#include <linux/sched/mm.h>
 #include <linux/err.h>
 #include <linux/export.h>
 #include <linux/list.h>
 #include <linux/nospec.h>
+#include <linux/pidfd.h>
 #include <linux/syscalls.h>
 #include <linux/uaccess.h>
 #include <linux/xarray.h>
@@ -55,10 +59,12 @@ MODULE_PARM_DESC(mem_accounting, "Enable cgroup-based memory accounting for dma-buf heap allocations (default=false).");

 static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
-				 u32 fd_flags,
-				 u64 heap_flags)
+				 u32 fd_flags, u64 heap_flags,
+				 struct mem_cgroup *charge_to)
 {
	struct dma_buf *dmabuf;
+	unsigned int nr_pages;
+	struct mem_cgroup *memcg = charge_to;
	int fd;

	/*
@@ -73,6 +79,22 @@ static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

+	nr_pages = len / PAGE_SIZE;
+	if (memcg)
+		css_get(&memcg->css);
+	else if (mem_accounting)
+		memcg = get_mem_cgroup_from_mm(current->mm);
+
+	if (memcg) {
+		if (!mem_cgroup_charge_dmabuf(memcg, nr_pages, GFP_KERNEL)) {
+			mem_cgroup_put(memcg);
+			dma_buf_put(dmabuf);
+			return -ENOMEM;
+		}
+		dmabuf->memcg = memcg;
+	}
+
	fd = dma_buf_fd(dmabuf, fd_flags);
	if (fd < 0) {
		dma_buf_put(dmabuf);
@@ -102,6 +124,9 @@ static long dma_heap_ioctl_allocate(struct file *file, void *data)
 {
	struct dma_heap_allocation_data *heap_allocation = data;
	struct dma_heap *heap = file->private_data;
+	struct mem_cgroup *memcg = NULL;
+	struct task_struct *task;
+	unsigned int pidfd_flags;
	int fd;

	if (heap_allocation->fd)
@@ -113,9 +138,20 @@ static long dma_heap_ioctl_allocate(struct file *file, void *data)
	if (heap_allocation->heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS)
		return -EINVAL;

+	if (heap_allocation->charge_pid_fd) {
+		task = pidfd_get_task(heap_allocation->charge_pid_fd, &pidfd_flags);
+		if (IS_ERR(task))
+			return PTR_ERR(task);
+		memcg = get_mem_cgroup_from_mm(task->mm);
+		put_task_struct(task);
+	}
+
	fd = dma_heap_buffer_alloc(heap, heap_allocation->len,
				   heap_allocation->fd_flags,
-				   heap_allocation->heap_flags);
+				   heap_allocation->heap_flags,
+				   memcg);
+	mem_cgroup_put(memcg);
	if (fd < 0)
		return fd;
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 03c2b87cb1112..95d7688167b93 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -385,8 +385,6 @@ static struct page *alloc_largest_available(unsigned long size,
		if (max_order < orders[i])
			continue;

		flags = order_flags[i];
-		if (mem_accounting)
-			flags |= __GFP_ACCOUNT;
		page = alloc_pages(flags, orders[i]);
		if (!page)
			continue;
diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h
index a4cf716a49fa6..e02b0f8cbc6a1 100644
--- a/include/uapi/linux/dma-heap.h
+++ b/include/uapi/linux/dma-heap.h
@@ -29,6 +29,10 @@
 *		handle to the allocated dma-buf
 * @fd_flags:	file descriptor flags used when allocating
 * @heap_flags:	flags passed to heap
+ * @charge_pid_fd: optional pidfd of the process whose cgroup should be
+ *		charged for this allocation; 0 means charge the calling
+ *		process's cgroup
+ * @__padding:	reserved, must be zero
 *
 * Provided by userspace as an argument to the ioctl
 */
@@ -37,6 +41,8 @@ struct dma_heap_allocation_data {
	__u32 fd;
	__u32 fd_flags;
	__u64 heap_flags;
+	__u32 charge_pid_fd;
+	__u32 __padding;
 };

#define DMA_HEAP_IOC_MAGIC		'H'
DMA_HEAP_IOCTL_ALLOC accepts a charge_pid_fd field that, when set, causes the allocation to be charged to an arbitrary process's cgroup rather than the caller's.
Without an access-control point, any process that holds a handle to a dma-heap device node can charge unlimited memory to any other process's cgroup, potentially exhausting that cgroup's limit and triggering OOM kills independent of the victim's own activity or privileges.
Add security_dma_heap_alloc(), called in dma_heap_ioctl_allocate() when charge_pid_fd refers to another process. The hook receives the credentials of the allocating process (from) and the credentials of the process whose cgroup will be charged (to), giving security modules a controlled enforcement point for cross-cgroup dma-buf attribution policy.
When CONFIG_SECURITY is not set the hook compiles to an inline returning 0, adding no overhead to the fast path.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 drivers/dma-buf/dma-heap.c    | 12 +++++++++++-
 include/linux/lsm_hook_defs.h |  1 +
 include/linux/security.h      |  7 +++++++
 security/security.c           | 16 ++++++++++++++++
 4 files changed, 35 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index ff6e259afcdc0..e8ffb1031955e 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -18,6 +18,7 @@
 #include <linux/list.h>
 #include <linux/nospec.h>
 #include <linux/pidfd.h>
+#include <linux/security.h>
 #include <linux/syscalls.h>
 #include <linux/uaccess.h>
 #include <linux/xarray.h>
@@ -122,12 +123,13 @@ static int dma_heap_open(struct inode *inode, struct file *file)

 static long dma_heap_ioctl_allocate(struct file *file, void *data)
 {
+	const struct cred *tcred;
	struct dma_heap_allocation_data *heap_allocation = data;
	struct dma_heap *heap = file->private_data;
	struct mem_cgroup *memcg = NULL;
	struct task_struct *task;
	unsigned int pidfd_flags;
-	int fd;
+	int fd, ret;

	if (heap_allocation->fd)
		return -EINVAL;
@@ -143,6 +145,14 @@ static long dma_heap_ioctl_allocate(struct file *file, void *data)
		if (IS_ERR(task))
			return PTR_ERR(task);

+		tcred = get_task_cred(task);
+		ret = security_dma_heap_alloc(current_cred(), tcred);
+		put_cred(tcred);
+		if (ret) {
+			put_task_struct(task);
+			return ret;
+		}
+
		memcg = get_mem_cgroup_from_mm(task->mm);
		put_task_struct(task);
	}
diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
index 2b8dfb35caed3..6a91656f97e1e 100644
--- a/include/linux/lsm_hook_defs.h
+++ b/include/linux/lsm_hook_defs.h
@@ -43,6 +43,7 @@
 LSM_HOOK(int, 0, capset, struct cred *new, const struct cred *old,
	 const kernel_cap_t *permitted)
 LSM_HOOK(int, 0, capable, const struct cred *cred, struct user_namespace *ns,
	 int cap, unsigned int opts)
+LSM_HOOK(int, 0, dma_heap_alloc, const struct cred *from, const struct cred *to)
 LSM_HOOK(int, 0, quotactl, int cmds, int type, int id, const struct super_block *sb)
 LSM_HOOK(int, 0, quota_on, struct dentry *dentry)
 LSM_HOOK(int, 0, syslog, int type)
diff --git a/include/linux/security.h b/include/linux/security.h
index 41d7367cf4036..f1dad1eabe754 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -350,6 +350,7 @@
 int security_capable(const struct cred *cred, struct user_namespace *ns,
		     int cap, unsigned int opts);
+int security_dma_heap_alloc(const struct cred *from, const struct cred *to);
 int security_quotactl(int cmds, int type, int id, const struct super_block *sb);
 int security_quota_on(struct dentry *dentry);
 int security_syslog(int type);
@@ -701,6 +702,12 @@ static inline int security_capable(const struct cred *cred,
	return cap_capable(cred, ns, cap, opts);
 }

+static inline int security_dma_heap_alloc(const struct cred *from,
+					  const struct cred *to)
+{
+	return 0;
+}
+
 static inline int security_quotactl(int cmds, int type, int id,
				    const struct super_block *sb)
 {
diff --git a/security/security.c b/security/security.c
index 4e999f0236516..4adacef73c507 100644
--- a/security/security.c
+++ b/security/security.c
@@ -660,6 +660,22 @@ int security_capable(const struct cred *cred,
	return call_int_hook(capable, cred, ns, cap, opts);
 }

+/**
+ * security_dma_heap_alloc() - Check if cross-cgroup dma-heap charging is allowed
+ * @from: credentials of the allocating process
+ * @to: credentials of the process to charge
+ *
+ * Check whether the process with credentials @from is allowed to allocate
+ * dma-heap memory and charge it to the cgroup of the process with credentials
+ * @to.
+ *
+ * Return: Returns 0 if permission is granted.
+ */
+int security_dma_heap_alloc(const struct cred *from, const struct cred *to)
+{
+	return call_int_hook(dma_heap_alloc, from, to);
+}
+
 /**
  * security_quotactl() - Check if a quotactl() syscall is allowed for this fs
  * @cmds: commands
The security_dma_heap_alloc() hook allows security modules to control which processes may charge dma-buf allocations to another process's cgroup via the charge_pid_fd field of DMA_HEAP_IOCTL_ALLOC. Without a policy implementation, the hook is a no-op and the restriction is not enforced.
On SELinux-managed systems any domain with access to a dma-heap device node can therefore exhaust another cgroup's memory budget without restriction.
Implement selinux_dma_heap_alloc() using avc_has_perm() with a new dma_heap object class and a charge_to permission. Policy authors can then grant cross-cgroup charging selectively, for example:
allow allocator_app_t client_app_t:dma_heap charge_to;
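When no such rule exists, the failed check should surface as an ordinary AVC denial in the audit log, roughly as sketched below (the pid, comm, and domain names are hypothetical; the source context is the allocator, the target context is the process being charged):

```
avc:  denied  { charge_to } for  pid=1234 comm="gralloc-alloc"
  scontext=u:r:allocator_app_t:s0 tcontext=u:r:client_app_t:s0
  tclass=dma_heap permissive=0
```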
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 security/selinux/hooks.c            | 7 +++++++
 security/selinux/include/classmap.h | 1 +
 2 files changed, 8 insertions(+)
diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index 0f704380a8c81..ea1f410b9f619 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -2189,6 +2189,12 @@ static int selinux_capable(const struct cred *cred, struct user_namespace *ns,
	return cred_has_capability(cred, cap, opts, ns == &init_user_ns);
 }

+static int selinux_dma_heap_alloc(const struct cred *from, const struct cred *to)
+{
+	return avc_has_perm(cred_sid(from), cred_sid(to),
+			    SECCLASS_DMA_HEAP, DMA_HEAP__CHARGE_TO, NULL);
+}
+
 static int selinux_quotactl(int cmds, int type, int id, const struct super_block *sb)
 {
	const struct cred *cred = current_cred();
@@ -7541,6 +7547,7 @@ static struct security_hook_list selinux_hooks[] __ro_after_init = {
	LSM_HOOK_INIT(capget, selinux_capget),
	LSM_HOOK_INIT(capset, selinux_capset),
	LSM_HOOK_INIT(capable, selinux_capable),
+	LSM_HOOK_INIT(dma_heap_alloc, selinux_dma_heap_alloc),
	LSM_HOOK_INIT(quotactl, selinux_quotactl),
	LSM_HOOK_INIT(quota_on, selinux_quota_on),
	LSM_HOOK_INIT(syslog, selinux_syslog),
diff --git a/security/selinux/include/classmap.h b/security/selinux/include/classmap.h
index 90cb61b164256..d232f7808f6b8 100644
--- a/security/selinux/include/classmap.h
+++ b/security/selinux/include/classmap.h
@@ -181,6 +181,7 @@ const struct security_class_mapping secclass_map[] = {
	{ "user_namespace", { "create", NULL } },
	{ "memfd_file", { COMMON_FILE_PERMS, "execute_no_trans",
			  "entrypoint", NULL } },
+	{ "dma_heap", { "charge_to", NULL } },
	/* last one */
	{ NULL, {} }
 };
Add tests for the new charge_pid_fd field in struct dma_heap_allocation_data.
When the charge_pid_fd feature is absent (unpatched kernel), the probe in pidfd_alloc_supported() detects this and the tests are skipped gracefully.
Add vmtest.sh similar to other subsystem suites, to orchestrate building the selftests (optionally with a freshly compiled kernel) inside a virtme-ng VM, so the tests can be run without modifying the host system. Add a config fragment with required Kconfig symbols.
Also add test_memcg_dmabuf() to the existing test_memcontrol suite to verify end-to-end cross-cgroup accounting: a parent process opens a pidfd for a child in a separate cgroup, allocates a dma-buf via DMA_HEAP_IOCTL_ALLOC with that pidfd, and asserts that memory.stat dmabuf in the child's cgroup reflects the allocation. If the dmabuf key is missing (unpatched kernel) or /dev/dma_heap/system is absent, the test is skipped.
Assisted-by: Claude:claude-sonnet-4-6 Cursor
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 tools/testing/selftests/cgroup/Makefile            |   2 +-
 tools/testing/selftests/cgroup/test_memcontrol.c   | 143 +++++++++++++-
 tools/testing/selftests/dmabuf-heaps/config        |   1 +
 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 126 ++++++++++++-
 tools/testing/selftests/dmabuf-heaps/vmtest.sh     | 205 +++++++++++++++++++++
 5 files changed, 473 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/cgroup/Makefile b/tools/testing/selftests/cgroup/Makefile
index e01584c2189ac..9edfc9f1de5c4 100644
--- a/tools/testing/selftests/cgroup/Makefile
+++ b/tools/testing/selftests/cgroup/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
-CFLAGS += -Wall -pthread
+CFLAGS += -Wall -pthread $(KHDR_INCLUDES)

 all: ${HELPER_PROGS}

diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index b43da9bc20c49..b6a228407530f 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -19,9 +19,17 @@
 #include <errno.h>
 #include <sys/mman.h>

+#include <linux/dma-heap.h>
+#include <signal.h>
+#include <sys/ioctl.h>
+
+#include "../pidfd/pidfd.h"
 #include "kselftest.h"
 #include "cgroup_util.h"

+#define DMA_HEAP_SYSTEM "/dev/dma_heap/system"
+#define ONE_MEG (1024 * 1024)
+
 #define MEMCG_SOCKSTAT_WAIT_RETRIES	30

 static bool has_localevents;
@@ -1762,6 +1770,125 @@ static int test_memcg_inotify_delete_dir(const char *root)
	return ret;
 }

+static int memcg_dmabuf_child(const char *cgroup, void *arg)
+{
+	pause();
+	return 0;
+}
+
+/*
+ * This test allocates a dma-buf via DMA_HEAP_IOCTL_ALLOC with a pidfd
+ * pointing to a child process in a separate cgroup, then checks that
+ * memory.stat[dmabuf] in the child's cgroup rises by the allocation size
+ * and returns to zero after the buffer fd is closed.
+ */
+static int test_memcg_dmabuf(const char *root)
+{
+	char *parent = NULL, *child_cg = NULL;
+	int ret = KSFT_FAIL;
+	int heap_fd = -1, dmabuf_fd = -1, pidfd = -1;
+	pid_t child_pid;
+	int child_status;
+	long dmabuf_stat;
+	struct dma_heap_allocation_data alloc = {
+		.len = ONE_MEG,
+		.fd_flags = O_RDWR | O_CLOEXEC,
+	};
+
+	if (access(DMA_HEAP_SYSTEM, R_OK | W_OK)) {
+		ret = KSFT_SKIP;
+		goto cleanup;
+	}
+
+	parent = cg_name(root, "dmabuf_memcg_test");
+	if (!parent)
+		goto cleanup;
+
+	if (cg_create(parent))
+		goto cleanup_parent;
+
+	if (cg_write(parent, "cgroup.subtree_control", "+memory"))
+		goto cleanup_parent;
+
+	child_cg = cg_name(parent, "child");
+	if (!child_cg)
+		goto cleanup_parent;
+
+	if (cg_create(child_cg))
+		goto cleanup_parent;
+
+	child_pid = cg_run_nowait(child_cg, memcg_dmabuf_child, NULL);
+	if (child_pid < 0)
+		goto cleanup_child;
+
+	if (cg_wait_for_proc_count(child_cg, 1))
+		goto cleanup_kill;
+
+	pidfd = sys_pidfd_open(child_pid, 0);
+	if (pidfd < 0) {
+		ret = KSFT_SKIP;
+		goto cleanup_kill;
+	}
+
+	heap_fd = open(DMA_HEAP_SYSTEM, O_RDWR);
+	if (heap_fd < 0) {
+		ret = KSFT_SKIP;
+		goto cleanup_pidfd;
+	}
+
+	alloc.charge_pid_fd = (__u32)pidfd;
+	if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0)
+		goto cleanup_heap;
+	dmabuf_fd = (int)alloc.fd;
+
+	dmabuf_stat = cg_read_key_long(child_cg, "memory.stat", "dmabuf ");
+	if (dmabuf_stat == -1) {
+		ret = KSFT_SKIP;
+		goto cleanup_dmabuf;
+	}
+	if (dmabuf_stat != ONE_MEG)
+		dmabuf_stat = cg_read_key_long_poll(child_cg, "memory.stat",
+						    "dmabuf ", ONE_MEG,
+						    15, 200000);
+	if (dmabuf_stat != ONE_MEG) {
+		fprintf(stderr, "Expected dmabuf stat %d, got %ld\n",
+			ONE_MEG, dmabuf_stat);
+		goto cleanup_dmabuf;
+	}
+
+	close(dmabuf_fd);
+	dmabuf_fd = -1;
+
+	dmabuf_stat = cg_read_key_long_poll(child_cg, "memory.stat",
+					    "dmabuf ", 0, 15, 200000);
+	if (dmabuf_stat != 0) {
+		fprintf(stderr, "Expected dmabuf stat 0 after close, got %ld\n",
+			dmabuf_stat);
+		goto cleanup_heap;
+	}
+
+	ret = KSFT_PASS;
+
+cleanup_dmabuf:
+	if (dmabuf_fd >= 0)
+		close(dmabuf_fd);
+cleanup_heap:
+	close(heap_fd);
+cleanup_pidfd:
+	close(pidfd);
+cleanup_kill:
+	kill(child_pid, SIGTERM);
+	waitpid(child_pid, &child_status, 0);
+cleanup_child:
+	cg_destroy(child_cg);
+	free(child_cg);
+cleanup_parent:
+	cg_destroy(parent);
+	free(parent);
+cleanup:
+	return ret;
+}
+
 #define T(x) { x, #x }
 struct memcg_test {
	int (*fn)(const char *root);
@@ -1783,16 +1910,26 @@
	T(test_memcg_oom_group_score_events),
	T(test_memcg_inotify_delete_file),
	T(test_memcg_inotify_delete_dir),
+	T(test_memcg_dmabuf),
 };
 #undef T
 int main(int argc, char **argv)
 {
	char root[PATH_MAX];
-	int i, proc_status;
+	int i, proc_status, plan;
+	const char *filter = NULL;
+
+	if (argc > 1)
+		filter = argv[1];
+
+	plan = 0;
+	for (i = 0; i < ARRAY_SIZE(tests); i++)
+		if (!filter || !strcmp(tests[i].name, filter))
+			plan++;

	ksft_print_header();
-	ksft_set_plan(ARRAY_SIZE(tests));
+	ksft_set_plan(plan);

	if (cg_find_unified_root(root, sizeof(root), NULL))
		ksft_exit_skip("cgroup v2 isn't mounted\n");
@@ -1818,6 +1955,8 @@ int main(int argc, char **argv)
	has_localevents = proc_status;

	for (i = 0; i < ARRAY_SIZE(tests); i++) {
+		if (filter && strcmp(tests[i].name, filter))
+			continue;
		switch (tests[i].fn(root)) {
		case KSFT_PASS:
			ksft_test_result_pass("%s\n", tests[i].name);
diff --git a/tools/testing/selftests/dmabuf-heaps/config b/tools/testing/selftests/dmabuf-heaps/config
index be091f1cdfa04..94c8f33b71a28 100644
--- a/tools/testing/selftests/dmabuf-heaps/config
+++ b/tools/testing/selftests/dmabuf-heaps/config
@@ -1,3 +1,4 @@
+CONFIG_MEMCG=y
 CONFIG_DMABUF_HEAPS=y
 CONFIG_DMABUF_HEAPS_SYSTEM=y
 CONFIG_DRM_VGEM=y
diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
index fc9694fc4e89e..904332b17698a 100644
--- a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
+++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
@@ -3,6 +3,7 @@
 #include <dirent.h>
 #include <errno.h>
 #include <fcntl.h>
+#include <signal.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <stdint.h>
@@ -10,11 +11,14 @@
 #include <unistd.h>
 #include <sys/ioctl.h>
 #include <sys/mman.h>
+#include <sys/syscall.h>
 #include <sys/types.h>
+#include <sys/wait.h>
 #include <linux/dma-buf.h>
 #include <linux/dma-heap.h>
 #include <drm/drm.h>
+#include "../pidfd/pidfd.h"
 #include "kselftest.h"

 #define DEVPATH "/dev/dma_heap"
@@ -320,6 +324,8 @@ static int dmabuf_heap_alloc_newer(int fd, size_t len, unsigned int flags,
	__u32 fd;
	__u32 fd_flags;
	__u64 heap_flags;
+	__u32 charge_pid_fd;
+	__u32 __padding;
	__u64 garbage1;
	__u64 garbage2;
	__u64 garbage3;
@@ -328,6 +334,8 @@
	.fd = 0,
	.fd_flags = O_RDWR | O_CLOEXEC,
	.heap_flags = flags,
+	.charge_pid_fd = 0,
+	.__padding = 0,
	.garbage1 = 0xffffffff,
	.garbage2 = 0x88888888,
	.garbage3 = 0x11111111,
@@ -390,6 +398,120 @@ static void test_alloc_errors(char *heap_name)
	close(heap_fd);
 }

+static int dmabuf_heap_alloc_pidfd(int fd, size_t len, unsigned int heap_flags,
+				   unsigned int charge_pid_fd, int *dmabuf_fd)
+{
+	struct dma_heap_allocation_data data = {
+		.len = len,
+		.fd = 0,
+		.fd_flags = O_RDWR | O_CLOEXEC,
+		.heap_flags = heap_flags,
+		.charge_pid_fd = charge_pid_fd,
+	};
+	int ret;
+
+	if (!dmabuf_fd)
+		return -EINVAL;
+
+	ret = ioctl(fd, DMA_HEAP_IOCTL_ALLOC, &data);
+	if (ret < 0)
+		return ret;
+	*dmabuf_fd = (int)data.fd;
+	return ret;
+}
+
+/*
+ * Probe whether the kernel honours charge_pid_fd in DMA_HEAP_IOCTL_ALLOC.
+ */
+static bool pidfd_alloc_supported(int heap_fd)
+{
+	int devnull_fd, dmabuf_fd = -1, ret;
+
+	devnull_fd = open("/dev/null", O_RDONLY);
+	if (devnull_fd < 0)
+		return false;
+
+	ret = dmabuf_heap_alloc_pidfd(heap_fd, ONE_MEG, 0, devnull_fd, &dmabuf_fd);
+	if (dmabuf_fd >= 0) {
+		close(dmabuf_fd);
+		dmabuf_fd = -1;
+	}
+	close(devnull_fd);
+	return ret < 0;
+}
+
+/*
+ * Test: allocate charging the calling process's own cgroup via a self pidfd.
+ */
+static void test_alloc_pidfd_self(char *heap_name)
+{
+	int heap_fd = -1, pidfd = -1, dmabuf_fd = -1, ret;
+
+	heap_fd = dmabuf_heap_open(heap_name);
+
+	if (!pidfd_alloc_supported(heap_fd)) {
+		ksft_test_result_skip("charge_pid_fd not supported by this kernel\n");
+		goto out;
+	}
+
+	pidfd = sys_pidfd_open(getpid(), 0);
+	if (pidfd < 0) {
+		ksft_test_result_skip("pidfd_open not available\n");
+		goto out;
+	}
+
+	ret = dmabuf_heap_alloc_pidfd(heap_fd, ONE_MEG, 0, pidfd, &dmabuf_fd);
+	ksft_test_result(!ret, "Allocation with self pidfd %d\n", ret);
+	if (dmabuf_fd >= 0)
+		close(dmabuf_fd);
+	close(pidfd);
+out:
+	close(heap_fd);
+}
+
+/*
+ * Test: allocate charging a child process's cgroup via a child pidfd.
+ */
+static void test_alloc_pidfd_child(char *heap_name)
+{
+	int heap_fd = -1, pidfd = -1, dmabuf_fd = -1;
+	pid_t child_pid;
+	int status, ret;
+
+	heap_fd = dmabuf_heap_open(heap_name);
+
+	if (!pidfd_alloc_supported(heap_fd)) {
+		ksft_test_result_skip("charge_pid_fd not supported by this kernel\n");
+		goto out;
+	}
+
+	child_pid = fork();
+	if (child_pid == 0) {
+		pause();
+		_exit(0);
+	}
+	if (child_pid < 0)
+		ksft_exit_fail_msg("fork failed: %s\n", strerror(errno));
+
+	pidfd = sys_pidfd_open(child_pid, 0);
+	if (pidfd < 0) {
+		kill(child_pid, SIGTERM);
+		waitpid(child_pid, &status, 0);
+		ksft_test_result_skip("pidfd_open for child failed\n");
+		goto out;
+	}
+
+	ret = dmabuf_heap_alloc_pidfd(heap_fd, ONE_MEG, 0, pidfd, &dmabuf_fd);
+	ksft_test_result(!ret, "Allocation with child pidfd %d\n", ret);
+	if (dmabuf_fd >= 0)
+		close(dmabuf_fd);
+	close(pidfd);
+	kill(child_pid, SIGTERM);
+	waitpid(child_pid, &status, 0);
+out:
+	close(heap_fd);
+}
+
 static int numer_of_heaps(void)
 {
	DIR *d = opendir(DEVPATH);
@@ -420,7 +542,7 @@ int main(void)
		return KSFT_SKIP;
	}

-	ksft_set_plan(11 * numer_of_heaps());
+	ksft_set_plan(13 * numer_of_heaps());

	while ((dir = readdir(d))) {
		if (!strncmp(dir->d_name, ".", 2))
@@ -435,6 +557,8 @@ int main(void)
		test_alloc_zeroed(dir->d_name, ONE_MEG);
		test_alloc_compat(dir->d_name);
		test_alloc_errors(dir->d_name);
+		test_alloc_pidfd_self(dir->d_name);
+		test_alloc_pidfd_child(dir->d_name);
	}
	closedir(d);
diff --git a/tools/testing/selftests/dmabuf-heaps/vmtest.sh b/tools/testing/selftests/dmabuf-heaps/vmtest.sh
new file mode 100755
index 0000000000000..6f1a878384127
--- /dev/null
+++ b/tools/testing/selftests/dmabuf-heaps/vmtest.sh
@@ -0,0 +1,205 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (c) 2026 Red Hat
+#
+# Dependencies:
+# * virtme-ng
+# * qemu (used by virtme-ng)
+
+readonly SCRIPT_DIR="$(cd -P -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd -P)"
+readonly KERNEL_CHECKOUT=$(realpath "${SCRIPT_DIR}"/../../../../)
+readonly CGROUP_DIR="${KERNEL_CHECKOUT}/tools/testing/selftests/cgroup"
+
+source "${SCRIPT_DIR}"/../kselftest/ktap_helpers.sh
+
+readonly DMABUF_HEAP_TEST="${SCRIPT_DIR}"/dmabuf-heap
+readonly MEMCONTROL_TEST="${CGROUP_DIR}"/test_memcontrol
+readonly TMP_DIR=$(mktemp -d /tmp/dmabuf-vmtest.XXXXXXXX)
+
+VERBOSE=false
+BUILD=false
+BUILD_HOST=""
+BUILD_HOST_PODMAN_CONTAINER_NAME=""
+
+usage() {
+	echo
+	echo "$0 [OPTIONS]"
+	echo
+	echo "Options"
+	echo "  -b: build the kernel from the current source tree and use it for the VM"
+	echo "  -H: hostname for remote build host (used with -b)"
+	echo "  -p: podman container name for remote build host (used with -b)"
+	echo "      Example: -H beefyserver -p vng"
+	echo "  -v: enable verbose vng/qemu output"
+	echo
+	exit 1
+}
+
+die() {
+	echo "$*" >&2
+	exit "${KSFT_FAIL}"
+}
+
+cleanup() {
+	rm -rf "${TMP_DIR}"
+}
+
+check_deps() {
+	for dep in vng make; do
+		if [[ ! -x $(command -v "${dep}") ]]; then
+			echo -e "skip: dependency ${dep} not found!\n"
+			exit "${KSFT_SKIP}"
+		fi
+	done
+
+	if [[ ! -x "${DMABUF_HEAP_TEST}" ]]; then
+		printf "skip: %s not found!" "${DMABUF_HEAP_TEST}"
+		printf " Please build the kselftest dmabuf-heaps target (or use -b).\n"
+		exit "${KSFT_SKIP}"
+	fi
+
+	if [[ ! -x "${MEMCONTROL_TEST}" ]]; then
+		printf "skip: %s not found!" "${MEMCONTROL_TEST}"
+		printf " Please build the kselftest cgroup target (or use -b).\n"
+		exit "${KSFT_SKIP}"
+	fi
+}
+
+check_vng() {
+	local tested_versions=("1.36" "1.37")
+	local version
+	local ok=0
+
+	version="$(vng --version)"
+	for tv in "${tested_versions[@]}"; do
+		if [[ "${version}" == *"${tv}"* ]]; then
+			ok=1
+			break
+		fi
+	done
+
+	if [[ "${ok}" -eq 0 ]]; then
+		printf "warning: vng version '%s' has not been tested and may " "${version}" >&2
+		printf "not function properly.\n\tThe following versions have been tested: " >&2
+		echo "${tested_versions[@]}" >&2
+	fi
+}
+
+build_selftests() {
+	make -C "${KERNEL_CHECKOUT}" headers_install \
+		INSTALL_HDR_PATH="${TMP_DIR}/usr" -j"$(nproc)"
+
+	local khdr="-isystem ${TMP_DIR}/usr/include"
+
+	if ! make -C "${SCRIPT_DIR}" KHDR_INCLUDES="${khdr}" -j"$(nproc)"; then
+		die "failed to build dmabuf-heaps selftests"
+	fi
+
+	if ! make -C "${CGROUP_DIR}" KHDR_INCLUDES="${khdr}" \
+		"${MEMCONTROL_TEST}" -j"$(nproc)"; then
+		die "failed to build cgroup/test_memcontrol selftest"
+	fi
+}
+
+handle_build() {
+	if ! ${BUILD}; then
+		return
+	fi
+
+	if [[ ! -d "${KERNEL_CHECKOUT}" ]]; then
+		echo "-b requires vmtest.sh called from the kernel source tree" >&2
+		exit 1
+	fi
+
+	pushd "${KERNEL_CHECKOUT}" &>/dev/null
+
+	if ! vng --kconfig --config "${SCRIPT_DIR}/config"; then
+		die "failed to generate .config for kernel source tree (${KERNEL_CHECKOUT})"
+	fi
+
+	local vng_args=("-v" "--config" "${SCRIPT_DIR}/config" "--build")
+
+	if [[ -n "${BUILD_HOST}" ]]; then
+		vng_args+=("--build-host" "${BUILD_HOST}")
+	fi
+
+	if [[ -n "${BUILD_HOST_PODMAN_CONTAINER_NAME}" ]]; then
+		vng_args+=("--build-host-exec-prefix" \
+			"podman exec -ti ${BUILD_HOST_PODMAN_CONTAINER_NAME}")
+	fi
+
+	if ! vng "${vng_args[@]}"; then
+		die "failed to build kernel from source tree (${KERNEL_CHECKOUT})"
+	fi
+
+	build_selftests
+
+	popd &>/dev/null
+}
+
+make_runner() {
+	# virtme-ng shares the host filesystem, so TMP_DIR is accessible
+	# inside the VM at the same absolute path.  Shell parameters used
+	# inside the generated script are escaped so they expand at run
+	# time, not while the heredoc is written.
+	cat > "${TMP_DIR}/run_tests.sh" <<-EOF
+	#!/bin/sh
+	set -u
+	PASS=0; FAIL=0; SKIP=0; N=0
+
+	run() {
+		name="\$1"; shift
+		N=\$((N+1))
+		"\$@"; rc=\$?
+		if [ \$rc -eq 0 ]; then echo "ok \$N \$name"; PASS=\$((PASS+1))
+		elif [ \$rc -eq 4 ]; then echo "ok \$N \$name # SKIP"; SKIP=\$((SKIP+1))
+		else echo "not ok \$N \$name"; FAIL=\$((FAIL+1))
+		fi
+	}
+
+	run "dmabuf-heap charge_pid_fd ioctl" ${DMABUF_HEAP_TEST}
+	run "memcontrol dma-buf memcg" ${MEMCONTROL_TEST} test_memcg_dmabuf
+	echo "# PASS=\$PASS SKIP=\$SKIP FAIL=\$FAIL"
+	[ \$FAIL -eq 0 ]
+	EOF
+	chmod +x "${TMP_DIR}/run_tests.sh"
+}
+
+run_vm() {
+	local verbose_opt=""
+	local kernel_opt=""
+
+	${VERBOSE} && verbose_opt="--verbose"
+
+	# If we are running from within the kernel source tree, use the kernel
+	# source tree as the kernel to boot, otherwise use the running kernel.
+	if [[ "$(realpath "$(pwd)")" == "${KERNEL_CHECKOUT}"* ]]; then
+		kernel_opt="${KERNEL_CHECKOUT}"
+	fi
+
+	vng --run ${kernel_opt} ${verbose_opt} --user root --memory 512M \
+		--exec "${TMP_DIR}/run_tests.sh"
+}
+
+while getopts :hvbH:p: o
+do
+	case $o in
+	v) VERBOSE=true;;
+	b) BUILD=true;;
+	H) BUILD_HOST=$OPTARG;;
+	p) BUILD_HOST_PODMAN_CONTAINER_NAME=$OPTARG;;
+	h|*) usage;;
+	esac
+done
+shift $((OPTIND-1))
+
+trap cleanup EXIT
+
+check_vng
+handle_build
+check_deps
+make_runner
+
+echo "Booting VM and running tests..."
+run_vm