On Mon, Apr 21, 2025 at 4:39 PM Alexei Starovoitov
<alexei.starovoitov(a)gmail.com> wrote:
>
> On Mon, Apr 21, 2025 at 1:40 PM T.J. Mercier <tjmercier(a)google.com> wrote:
> >
> > > > new file mode 100644
> > > > index 000000000000..b4b8be1d6aa4
> > > > --- /dev/null
> > > > +++ b/kernel/bpf/dmabuf_iter.c
> > >
> > > Maybe we should add this file to drivers/dma-buf. I would like to
> > > hear other folks thoughts on this.
> >
> > This is fine with me, and would save us the extra
> > CONFIG_DMA_SHARED_BUFFER check that's currently needed in
> > kernel/bpf/Makefile but would require checking CONFIG_BPF instead.
> > Sumit / Christian any objections to moving the dmabuf bpf iterator
> > implementation into drivers/dma-buf?
>
> The driver directory would need to 'depends on BPF_SYSCALL'.
> Are you sure you want this?
> imo kernel/bpf/ is fine for this.
I don't have a strong preference so either way is fine with me. The
main difference I see is maintainership.
> You also probably want
> .feature = BPF_ITER_RESCHED
> in bpf_dmabuf_reg_info.
Thank you, this looks like a good idea.
> Also have you considered open coded iterator for dmabufs?
> Would it help with the interface to user space?
I read through the open coded iterator patches, and it looks like they
would be slightly more efficient by avoiding seq_file overhead. As far
as the interface to userspace, for the purpose of replacing what's
currently exposed by CONFIG_DMABUF_SYSFS_STATS I don't think there is
a difference. However, it looks like if I were to try to replace all of
our userspace analysis of dmabufs with a single bpf program, an
open-coded iterator would make that much easier. I had not considered
attempting that.
One problem I see with open coded iterators is that support is much
more recent (2023 vs 2020). We support longterm stable kernels (back
to 5.4 currently but probably 5.10 by the time this would be used), so
it seems like it would be harder to backport the kernel support for an
open-coded iterator that far since it only goes back as far as 6.6
now. Actually, it doesn't look like it is possible while also
maintaining the stable ABI we provide to device vendors, which means
we couldn't get rid of the dmabuf sysfs stats userspace dependency
until 6.1 EOL in Dec. 2027. :\ So I'm in favor of a traditional bpf
iterator here for now.
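For reference, a traditional iterator of this shape gets consumed by a bpf program along these lines. This is only a sketch: the context struct name follows from the DEFINE_BPF_ITER_FUNC(dmabuf, struct bpf_iter_meta *meta, struct dma_buf *dmabuf) line in the patch, and which dma_buf fields end up accessible is an assumption about the final version.

```c
// SPDX-License-Identifier: GPL-2.0-only
/* Sketch of a BPF program consuming the proposed iterator. The ctx type
 * bpf_iter__dmabuf is what DEFINE_BPF_ITER_FUNC(dmabuf, ...) generates. */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("iter/dmabuf")
int dmabuf_report(struct bpf_iter__dmabuf *ctx)
{
	struct seq_file *seq = ctx->meta->seq;
	struct dma_buf *dmabuf = ctx->dmabuf;

	if (!dmabuf) /* NULL on the final call, after the last buffer */
		return 0;

	BPF_SEQ_PRINTF(seq, "%lu %s %s\n", dmabuf->size,
		       dmabuf->exp_name, dmabuf->name);
	return 0;
}
```

Reading the output is then just a read() loop over the pinned iterator's fd, same as the other seq_file-backed iterators.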
On Mon, Apr 14, 2025 at 09:21:25PM +0200, Thomas Petazzoni wrote:
> > "UIO is a broken legacy mess, so let's add more broken things
> > to it as broken + broken => still broken, so no harm done", am I
> > getting that right?
>
> Who says UIO is a "broken legacy mess"? Only you say so. I don't see
> any indication anywhere in the kernel tree suggesting that UIO is
> considered a broken legacy mess.
Can you explain what the difference is between UIO and VFIO, especially
VFIO no-iommu mode?
I've always understood that UIO is for very simple devices that cannot
do DMA. So its very simple operating model and simple security model
work fine.
IMHO, if the device can use DMA it should use VFIO. If you have no
IOMMU then you should use the VFIO unsafe no-iommu path. It still
provides a solid framework.
As to this series, I have seen a number of requests to improve the
VFIO no-iommu path to allow working with the existing IOAS scheme to
register memory, but to allow the kernel to return the no-iommu
DMAable address of the IOAS-pinned memory. This would replace the
hacky use of mlock and /proc/XX/pagemap that people use today.
If that were done, could you use VFIO no-iommu?
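For context, the no-iommu path referred to above is opened from userspace roughly like this. This is a sketch only: error handling and the VFIO_GET_API_VERSION / VFIO_GROUP_GET_STATUS checks are omitted, and the group number and PCI address are placeholders.

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Sketch: opening a device via VFIO's unsafe no-iommu path. Needs the
 * vfio enable_unsafe_noiommu_mode option and CAP_SYS_RAWIO. */
int open_noiommu_device(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/noiommu-0", O_RDWR); /* example group */

	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_NOIOMMU_IOMMU);

	/* PCI address is a placeholder */
	return ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:00.0");
}
```

The point of the suggested improvement is that after VFIO_SET_IOMMU there is currently no clean way to learn the DMAable address of pinned memory, hence the mlock + /proc/XX/pagemap hacks.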
> Keep in mind that when you're running code as root, you can load a
> kernel module, which can do anything on the system security-wise. So
> letting UIO expose MMIO registers of devices to userspace applications
> running as root is not any worse than that.
That isn't fully true. UIO doesn't fit into the security model: it
allows DMA-capable devices to be exposed without checking for
CAP_SYS_RAWIO first.
Jason
On Mon, Apr 21, 2025 at 10:58 AM Song Liu <song(a)kernel.org> wrote:
>
> On Mon, Apr 14, 2025 at 3:53 PM T.J. Mercier <tjmercier(a)google.com> wrote:
> >
> > The dmabuf iterator traverses the list of all DMA buffers. The list is
> > maintained only when CONFIG_DEBUG_FS is enabled.
> >
> > DMA buffers are refcounted through their associated struct file. A
> > reference is taken on each buffer as the list is iterated to ensure each
> > buffer persists for the duration of the bpf program execution without
> > holding the list mutex.
> >
> > Signed-off-by: T.J. Mercier <tjmercier(a)google.com>
> > ---
> > include/linux/btf_ids.h | 1 +
> > kernel/bpf/Makefile | 3 +
> > kernel/bpf/dmabuf_iter.c | 130 +++++++++++++++++++++++++++++++++++++++
> > 3 files changed, 134 insertions(+)
> > create mode 100644 kernel/bpf/dmabuf_iter.c
> >
> > diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
> > index 139bdececdcf..899ead57d89d 100644
> > --- a/include/linux/btf_ids.h
> > +++ b/include/linux/btf_ids.h
> > @@ -284,5 +284,6 @@ extern u32 bpf_cgroup_btf_id[];
> > extern u32 bpf_local_storage_map_btf_id[];
> > extern u32 btf_bpf_map_id[];
> > extern u32 bpf_kmem_cache_btf_id[];
> > +extern u32 bpf_dmabuf_btf_id[];
>
> This is not necessary. See below.
>
> >
> > #endif
> > diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
> > index 70502f038b92..5b30d37ef055 100644
> > --- a/kernel/bpf/Makefile
> > +++ b/kernel/bpf/Makefile
> > @@ -53,6 +53,9 @@ obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
> > obj-$(CONFIG_BPF_SYSCALL) += btf_iter.o
> > obj-$(CONFIG_BPF_SYSCALL) += btf_relocate.o
> > obj-$(CONFIG_BPF_SYSCALL) += kmem_cache_iter.o
> > +ifeq ($(CONFIG_DEBUG_FS),y)
> > +obj-$(CONFIG_BPF_SYSCALL) += dmabuf_iter.o
> > +endif
> >
> > CFLAGS_REMOVE_percpu_freelist.o = $(CC_FLAGS_FTRACE)
> > CFLAGS_REMOVE_bpf_lru_list.o = $(CC_FLAGS_FTRACE)
> > diff --git a/kernel/bpf/dmabuf_iter.c b/kernel/bpf/dmabuf_iter.c
> > new file mode 100644
> > index 000000000000..b4b8be1d6aa4
> > --- /dev/null
> > +++ b/kernel/bpf/dmabuf_iter.c
>
> Maybe we should add this file to drivers/dma-buf. I would like to
> hear other folks thoughts on this.
This is fine with me, and would save us the extra
CONFIG_DMA_SHARED_BUFFER check that's currently needed in
kernel/bpf/Makefile but would require checking CONFIG_BPF instead.
Sumit / Christian any objections to moving the dmabuf bpf iterator
implementation into drivers/dma-buf?
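If the file did move, the build glue would presumably look something like the fragment below (a sketch only; the object list in the real drivers/dma-buf/Makefile differs, and the exact guard spelling is an assumption):

```make
# Hypothetical drivers/dma-buf/Makefile hunk: the guard flips from
# CONFIG_DMA_SHARED_BUFFER (in kernel/bpf/Makefile) to CONFIG_BPF_SYSCALL
# here; CONFIG_DEBUG_FS is still required since the dmabuf list only
# exists with debugfs enabled.
ifdef CONFIG_BPF_SYSCALL
ifdef CONFIG_DEBUG_FS
dma-buf-objs += dmabuf_iter.o
endif
endif
```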
> > @@ -0,0 +1,130 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/* Copyright (c) 2025 Google LLC */
> > +#include <linux/bpf.h>
> > +#include <linux/btf_ids.h>
> > +#include <linux/dma-buf.h>
> > +#include <linux/kernel.h>
> > +#include <linux/seq_file.h>
> > +
> > +BTF_ID_LIST_GLOBAL_SINGLE(bpf_dmabuf_btf_id, struct, dma_buf)
>
> Use BTF_ID_LIST_SINGLE(), then we don't need this in btf_ids.h
>
> > +DEFINE_BPF_ITER_FUNC(dmabuf, struct bpf_iter_meta *meta, struct dma_buf *dmabuf)
> > +
> > +static void *dmabuf_iter_seq_start(struct seq_file *seq, loff_t *pos)
> > +{
> > + struct dma_buf *dmabuf, *ret = NULL;
> > +
> > + if (*pos) {
> > + *pos = 0;
> > + return NULL;
> > + }
> > + /* Look for the first buffer we can obtain a reference to.
> > + * The list mutex does not protect a dmabuf's refcount, so it can be
> > + * zeroed while we are iterating. Therefore we cannot call get_dma_buf()
> > + * since the caller of this program may not already own a reference to
> > + * the buffer.
> > + */
>
> We should use kernel comment style for new code. IOW,
> /*
> * Look for ...
> */
>
>
> Thanks,
> Song
Thanks, I have incorporated all of your comments and retested. I will
give some time for more feedback before sending a v2.
>
> [...]
On Thu, Apr 17, 2025 at 1:26 PM Song Liu <song(a)kernel.org> wrote:
>
> On Thu, Apr 17, 2025 at 9:05 AM T.J. Mercier <tjmercier(a)google.com> wrote:
> >
> > On Wed, Apr 16, 2025 at 9:56 PM Song Liu <song(a)kernel.org> wrote:
> > >
> > > On Wed, Apr 16, 2025 at 7:09 PM T.J. Mercier <tjmercier(a)google.com> wrote:
> > > >
> > > > On Wed, Apr 16, 2025 at 6:26 PM Song Liu <song(a)kernel.org> wrote:
> > > [...]
> > > > >
> > > > > Here is another rookie question, it appears to me there is a file descriptor
> > > > > associated with each DMA buffer, can we achieve the same goal with
> > > > > a task-file iterator?
> > > >
> > > > That would find almost all of them, but not the kernel-only
> > > > allocations. (kernel_rss in the dmabuf_dump output I attached earlier.
> > > > If there's a leak, it's likely to show up in kernel_rss because some
> > > > driver forgot to release its reference(s).) Also wouldn't that be a
> > > > ton more iterations since we'd have to visit every FD to find the
> > > > small portion that are dmabufs? I'm not actually sure if buffers that
> > > > have been mapped, and then have had their file descriptors closed
> > > > would show up in task_struct->files; if not I think that would mean
> > > > scanning both files and vmas for each task.
> > >
> > > I don't think scanning all FDs to find a small portion of specific FDs
> > > is a real issue. We have a tool that scans all FDs in the system and
> > > only dumps data for perf_event FDs. I think it should be easy to
> > > prototype a tool by scanning all files and all vmas. If that turns out
> > > to be very slow, which I highly doubt will be, we can try other
> > > approaches.
> >
> > But this will not find *all* the buffers, and that defeats the purpose
> > of having the iterator.
>
> Do you mean this approach cannot get kernel only allocations? If
> that's the case, we probably do need a separate iterator. I am
> interested in other folks thoughts on this.
Correct.
> > > OTOH, I am wondering whether we can build a more generic iterator
> > > for a list of objects. Adding an iterator for each important kernel list
> > > seems not scalable in the long term.
> >
> > I think the wide variety of differences in locking for different
> > objects would make this difficult to do in a generic way.
>
> Agreed it is not easy to build a generic solution. But with the
> help from BTF, we can probably build something that covers
> a large number of use cases.
I'm curious what this would look like. I guess a good test would be to
see if anything would work for some subset of the already existing
iterators.
On Wed, Apr 16, 2025 at 9:56 PM Song Liu <song(a)kernel.org> wrote:
>
> On Wed, Apr 16, 2025 at 7:09 PM T.J. Mercier <tjmercier(a)google.com> wrote:
> >
> > On Wed, Apr 16, 2025 at 6:26 PM Song Liu <song(a)kernel.org> wrote:
> [...]
> > >
> > > Here is another rookie question, it appears to me there is a file descriptor
> > > associated with each DMA buffer, can we achieve the same goal with
> > > a task-file iterator?
> >
> > That would find almost all of them, but not the kernel-only
> > allocations. (kernel_rss in the dmabuf_dump output I attached earlier.
> > If there's a leak, it's likely to show up in kernel_rss because some
> > driver forgot to release its reference(s).) Also wouldn't that be a
> > ton more iterations since we'd have to visit every FD to find the
> > small portion that are dmabufs? I'm not actually sure if buffers that
> > have been mapped, and then have had their file descriptors closed
> > would show up in task_struct->files; if not I think that would mean
> > scanning both files and vmas for each task.
>
> I don't think scanning all FDs to find a small portion of specific FDs
> is a real issue. We have a tool that scans all FDs in the system and
> only dumps data for perf_event FDs. I think it should be easy to
> prototype a tool by scanning all files and all vmas. If that turns out
> to be very slow, which I highly doubt will be, we can try other
> approaches.
But this will not find *all* the buffers, and that defeats the purpose
of having the iterator.
> OTOH, I am wondering whether we can build a more generic iterator
> for a list of objects. Adding an iterator for each important kernel list
> seems not scalable in the long term.
I think the wide variety of differences in locking for different
objects would make this difficult to do in a generic way.
On 16.04.25 23:51, Juan Yescas wrote:
> On Wed, Apr 16, 2025 at 4:34 AM Christian König
> <christian.koenig(a)amd.com> wrote:
>>
>>
>> On 15.04.25 19:19, Juan Yescas wrote:
>>> This change sets the allocation orders for the different page sizes
>>> (4k, 16k, 64k) based on PAGE_SHIFT. Before this change, the orders
>>> for large page sizes were calculated incorrectly, this caused system
>>> heap to allocate from 2% to 4% more memory on 16KiB page size kernels.
>>>
>>> This change was tested on 4k/16k page size kernels.
>>>
>>> Signed-off-by: Juan Yescas <jyescas(a)google.com>
>>> ---
>>> drivers/dma-buf/heaps/system_heap.c | 9 ++++++++-
>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
>>> index 26d5dc89ea16..54674c02dcb4 100644
>>> --- a/drivers/dma-buf/heaps/system_heap.c
>>> +++ b/drivers/dma-buf/heaps/system_heap.c
>>> @@ -50,8 +50,15 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, HIGH_ORDER_GFP, LOW_ORDER_GFP};
>>> * to match with the sizes often found in IOMMUs. Using order 4 pages instead
>>> * of order 0 pages can significantly improve the performance of many IOMMUs
>>> * by reducing TLB pressure and time spent updating page tables.
>>> + *
>>> + * Note: When the order is 0, the minimum allocation is PAGE_SIZE. The possible
>>> + * page sizes for ARM devices could be 4K, 16K and 64K.
>>> */
>>> -static const unsigned int orders[] = {8, 4, 0};
>>> +#define ORDER_1M (20 - PAGE_SHIFT)
>>> +#define ORDER_64K (16 - PAGE_SHIFT)
>>> +#define ORDER_FOR_PAGE_SIZE (0)
>>> +static const unsigned int orders[] = {ORDER_1M, ORDER_64K, ORDER_FOR_PAGE_SIZE};
>>> +#
>> Good catch, but I think the defines are just overkill.
>>
>> What you should do instead is to subtract page shift when using the array.
>>
> There are several occurrences of orders in the file and I think it is
> better to do the calculations up front in the array. Would you be ok
> if we get rid of the defines, as per your suggestion, and make the
> calculations during the definition of the array? Something like this:
>
> static const unsigned int orders[] = {20 - PAGE_SHIFT, 16 - PAGE_SHIFT, 0};
Yeah that totally works for me as well. Just make sure that a code comment nearby explains why 20, 16 and 0.
>> Apart from that using 1M, 64K and then falling back to 4K just sounds random to me. We have especially pushed back on 64K more than once because it is actually not beneficial in almost all cases.
>>
> In the hardware where the driver is used, the 64K is beneficial for:
>
> Arm SMMUv3 (4KB granule size):
> 64KB (contiguous bit groups 16 4KB pages)
>
> SysMMU benefits from 64KB (“large” page) on 4k/16k page sizes.
Question: could this HW also work with pages larger than 64K? (I strongly expect yes.)
>> I suggest to fix the code in system_heap_allocate to not over allocate instead and just try the available orders like TTM does. This has proven to be working architecture independent.
>>
> Do you mean to have an implementation similar to __ttm_pool_alloc()?
Yeah something like that.
> https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree…
>
> If that is the case, we can try the change, run some benchmarks and
> submit the patch if we see positive results.
Could be that this doesn't matter for your platform, but asking for larger chunks first is minimal extra overhead, and it avoids fragmenting everything into 64K chunks.
That is kind of important since the code in DMA-heaps should be platform neutral if possible.
Regards,
Christian.
>
> Thanks
> Juan
>
>> Regards,
>> Christian.
>>
>>> #define NUM_ORDERS ARRAY_SIZE(orders)
>>>
>>> static struct sg_table *dup_sg_table(struct sg_table *table)
On Wed, Apr 16, 2025 at 6:26 PM Song Liu <song(a)kernel.org> wrote:
>
> On Wed, Apr 16, 2025 at 4:40 PM T.J. Mercier <tjmercier(a)google.com> wrote:
> >
> > On Wed, Apr 16, 2025 at 4:08 PM Song Liu <song(a)kernel.org> wrote:
> > >
> > > On Wed, Apr 16, 2025 at 3:51 PM T.J. Mercier <tjmercier(a)google.com> wrote:
> > > [...]
> > > > >
> > > > > IIUC, the iterator simply traverses elements in a linked list. I feel it is
> > > > > an overkill to implement a new BPF iterator for it.
> > > >
> > > > Like other BPF iterators such as kmem_cache_iter or task_iter.
> > > > Cgroup_iter iterates trees instead of lists. This is iterating over
> > > > kernel objects just like the docs say, "A BPF iterator is a type of
> > > > BPF program that allows users to iterate over specific types of kernel
> > > > objects". More complicated iteration should not be a requirement here.
> > > >
> > > > > Maybe we simply
> > > > > use debugging tools like crash or drgn for this? The access with
> > > > > these tools will not be protected by the mutex. But from my personal
> > > > > experience, this is not a big issue for user space debugging tools.
> > > >
> > > > drgn is *way* too slow, and even if it weren't the dependencies for
> > > > running it aren't available. crash needs debug symbols which also
> > > > aren't available on user builds. This is not just for manual
> > > > debugging, it's for reporting memory use in production. Or anything
> > > > else someone might care to extract like attachment info or refcounts.
> > >
> > > Could you please share more information about the use cases and
> > > the time constraint here, and why drgn is too slow. Is most of the delay
> > > comes from parsing DWARF? This is mostly for my curiosity, because
> > > I have been thinking about using drgn to do some monitoring in
> > > production.
> > >
> > > Thanks,
> > > Song
> >
> > These RunCommands have 10 second timeouts for example. It's rare that
> > I see them get exceeded, but it happens occasionally:
> > https://cs.android.com/android/platform/superproject/main/+/main:frameworks…
>
> Thanks for sharing this information.
>
> > The last time I used drgn (admittedly back in 2023) it took over a
> > minute to iterate through less than 200 cgroups. I'm not sure what the
> > root cause of the slowness was, but I'd expect the DWARF processing to
> > be done up-front once and the slowness I experienced was not just at
> > startup. Eventually I switched over to tracefs for that issue, which
> > we still use for some telemetry.
>
> I haven't tried drgn on Android. On the server side, iterating through
> 200 cgroups should be fairly fast (< 5 seconds, where DWARF parsing is
> the most expensive part).
>
> > Other uses are by statsd for telemetry, memory reporting on app kills
> > or death, and for "dumpsys meminfo".
>
> Here is another rookie question, it appears to me there is a file descriptor
> associated with each DMA buffer, can we achieve the same goal with
> a task-file iterator?
That would find almost all of them, but not the kernel-only
allocations. (kernel_rss in the dmabuf_dump output I attached earlier.
If there's a leak, it's likely to show up in kernel_rss because some
driver forgot to release its reference(s).) Also wouldn't that be a
ton more iterations since we'd have to visit every FD to find the
small portion that are dmabufs? I'm not actually sure if buffers that
have been mapped, and then have had their file descriptors closed
would show up in task_struct->files; if not I think that would mean
scanning both files and vmas for each task.
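As a point of comparison, the task-file approach would look roughly like the sketch below. Note this is a sketch of the trade-off, not working code: file_is_dmabuf() is a hypothetical helper standing in for exactly the awkward part (dma_buf_fops is static in dma-buf.c, so there is no exported symbol to compare file->f_op against), and as discussed it would still miss kernel-only buffers.

```c
SEC("iter/task_file")
int dmabuf_fds(struct bpf_iter__task_file *ctx)
{
	struct file *file = ctx->file;

	if (!file || !ctx->task)
		return 0;

	/* Hypothetical: some heuristic to recognize a dmabuf file would
	 * need to go here, and every fd in the system passes through it. */
	if (!file_is_dmabuf(file))
		return 0;

	BPF_SEQ_PRINTF(ctx->meta->seq, "pid=%d fd=%d size=%lu\n",
		       ctx->task->tgid, ctx->fd,
		       ((struct dma_buf *)file->private_data)->size);
	return 0;
}
```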
On Wed, Apr 16, 2025 at 4:08 PM Song Liu <song(a)kernel.org> wrote:
>
> On Wed, Apr 16, 2025 at 3:51 PM T.J. Mercier <tjmercier(a)google.com> wrote:
> [...]
> > >
> > > IIUC, the iterator simply traverses elements in a linked list. I feel it is
> > > an overkill to implement a new BPF iterator for it.
> >
> > Like other BPF iterators such as kmem_cache_iter or task_iter.
> > Cgroup_iter iterates trees instead of lists. This is iterating over
> > kernel objects just like the docs say, "A BPF iterator is a type of
> > BPF program that allows users to iterate over specific types of kernel
> > objects". More complicated iteration should not be a requirement here.
> >
> > > Maybe we simply
> > > use debugging tools like crash or drgn for this? The access with
> > > these tools will not be protected by the mutex. But from my personal
> > > experience, this is not a big issue for user space debugging tools.
> >
> > drgn is *way* too slow, and even if it weren't the dependencies for
> > running it aren't available. crash needs debug symbols which also
> > aren't available on user builds. This is not just for manual
> > debugging, it's for reporting memory use in production. Or anything
> > else someone might care to extract like attachment info or refcounts.
>
> Could you please share more information about the use cases and
> the time constraint here, and why drgn is too slow. Is most of the delay
> comes from parsing DWARF? This is mostly for my curiosity, because
> I have been thinking about using drgn to do some monitoring in
> production.
>
> Thanks,
> Song
These RunCommands have 10 second timeouts for example. It's rare that
I see them get exceeded, but it happens occasionally:
https://cs.android.com/android/platform/superproject/main/+/main:frameworks…
The last time I used drgn (admittedly back in 2023) it took over a
minute to iterate through less than 200 cgroups. I'm not sure what the
root cause of the slowness was, but I'd expect the DWARF processing to
be done up-front once and the slowness I experienced was not just at
startup. Eventually I switched over to tracefs for that issue, which
we still use for some telemetry.
Other uses are by statsd for telemetry, memory reporting on app kills
or death, and for "dumpsys meminfo".
Thanks,
T.J.