Hi Alain,
kernel test robot noticed the following build warnings:
[auto build test WARNING on atorgue-stm32/stm32-next]
[also build test WARNING on robh/for-next linus/master v6.19-rc1 next-20251219]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Alain-Volmat/media-stm32-dcm…
base: https://git.kernel.org/pub/scm/linux/kernel/git/atorgue/stm32.git stm32-next
patch link: https://lore.kernel.org/r/20251218-stm32-dcmi-dma-chaining-v1-1-39948ca6cbf…
patch subject: [PATCH 01/12] media: stm32: dcmi: Switch from __maybe_unused to pm_sleep_ptr()
config: arc-allyesconfig (https://download.01.org/0day-ci/archive/20251221/202512210044.xNNW6QJZ-lkp@…)
compiler: arc-linux-gcc (GCC) 15.1.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251221/202512210044.xNNW6QJZ-lkp@…)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp(a)intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202512210044.xNNW6QJZ-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/media/platform/st/stm32/stm32-dcmi.c:2127:12: warning: 'dcmi_resume' defined but not used [-Wunused-function]
2127 | static int dcmi_resume(struct device *dev)
| ^~~~~~~~~~~
>> drivers/media/platform/st/stm32/stm32-dcmi.c:2116:12: warning: 'dcmi_suspend' defined but not used [-Wunused-function]
2116 | static int dcmi_suspend(struct device *dev)
| ^~~~~~~~~~~~
vim +/dcmi_resume +2127 drivers/media/platform/st/stm32/stm32-dcmi.c
2115
> 2116 static int dcmi_suspend(struct device *dev)
2117 {
2118 /* disable clock */
2119 pm_runtime_force_suspend(dev);
2120
2121 /* change pinctrl state */
2122 pinctrl_pm_select_sleep_state(dev);
2123
2124 return 0;
2125 }
2126
> 2127 static int dcmi_resume(struct device *dev)
2128 {
2129 /* restore pinctl default state */
2130 pinctrl_pm_select_default_state(dev);
2131
2132 /* clock enable */
2133 pm_runtime_force_resume(dev);
2134
2135 return 0;
2136 }
2137
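For context, this warning typically appears when the __maybe_unused annotations are dropped while the dev_pm_ops is still assembled with the legacy SET_SYSTEM_SLEEP_PM_OPS() macro, which expands to nothing without CONFIG_PM_SLEEP and so leaves the callbacks unreferenced. A minimal sketch of the pattern pm_sleep_ptr() is meant to be paired with (simplified, omitting the driver's runtime-PM handling; the names below are only illustrative):

	/*
	 * Sketch only: SYSTEM_SLEEP_PM_OPS() always references the callbacks
	 * through pm_sleep_ptr(), so the compiler can discard them itself when
	 * CONFIG_PM_SLEEP is disabled instead of warning about unused functions.
	 */
	static const struct dev_pm_ops dcmi_pm_ops = {
		SYSTEM_SLEEP_PM_OPS(dcmi_suspend, dcmi_resume)
	};

	static struct platform_driver dcmi_driver = {
		.driver = {
			.name	= "stm32-dcmi",
			.pm	= pm_sleep_ptr(&dcmi_pm_ops),
		},
		/* .probe, .remove, etc. omitted */
	};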
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
On 12/19/25 11:25, Maxime Ripard wrote:
> On Mon, Dec 15, 2025 at 03:53:22PM +0100, Christian König wrote:
>> On 12/15/25 14:59, Maxime Ripard wrote:
...
>>>>> The shared ownership is indeed broken, but it's not more or less broken
>>>>> than, say, memfd + udmabuf, and I'm sure plenty of others.
>>>>>
>>>>> So we really improve the common case, but only make the "advanced"
>>>>> slightly more broken than it already is.
>>>>>
>>>>> Would you disagree?
>>>>
>>>> I strongly disagree. As far as I can see there is a huge chance we
>>>> break existing use cases with that.
>>>
>>> Which ones? And what about the ones that are already broken?
>>
>> Well everybody that expects that driver resources are *not* accounted to memcg.
>
> Which is a thing only because these buffers have never been accounted
> for in the first place.
Yeah, completely agree. By not accounting it for such a long time we ended up with people depending on this behavior.
Not nice, but that's what it is.
> So I guess the conclusion is that we shouldn't
> even try to do memory accounting, because someone somewhere might not
> expect that one of its application would take too much RAM in the
> system?
Well, we do need some kind of solution to the problem: either have some setting where you say "this memcg limit is inclusive/exclusive of device-driver-allocated memory", or have a completely separate limit for device-driver-allocated memory.
The key point is that we have both use cases, so we need to support both.
>>>> There has been some work on TTM by Dave but I still haven't found time
>>>> to wrap my head around all possible side effects such a change can
>>>> have.
>>>>
>>>> The fundamental problem is that neither memcg nor the classic resource
>>>> tracking (e.g. the OOM killer) has a good understanding of shared
>>>> resources.
>>>
>>> And yet heap allocations don't necessarily have to be shared. But they
>>> all have to be allocated.
>>>
>>>> For example you can use memfd to basically kill any process in the
>>>> system because the OOM killer can't identify the process which holds
>>>> the reference to the memory in question. And that is a *MUCH* bigger
>>>> problem than just inaccurate memcg accounting.
>>>
>>> When you frame it like that, sure. Also, you can use the system heap to
>>> DoS any process in the system. I'm not saying that what you're concerned
>>> about isn't an issue, but let's not brush off other people legitimate
>>> issues as well.
>>
>> Completely agree, but we should prioritize.
>>
>> That driver allocated memory is not memcg accounted is actually uAPI,
>> e.g. that is not something which can easily change.
>>
>> While fixing the OOM killer looks perfectly doable and will then most
>> likely also show a better path how to fix the memcg accounting.
>
> I don't necessarily disagree, but we don't necessarily have the same
> priorities either. Your use-cases are probably quite different from
> mine, and that's ok. But that's precisely why all these discussions
> should be made on the ML when possible, or at least have some notes when
> a discussion has happened at a conference or something.
>
> So far, my whole experience with this topic, despite being the only one
> (afaik) sending patches about this for the last 1.5y, is that everytime
> some work on this is done the answer is "oh but you shouldn't have
> worked on it because we completely changed our mind", and that's pretty
> frustrating.
Welcome to the club :)
I already posted patches to start addressing at least the OOM killer issue ~10 years ago.
Those patches were not well received because back then driver memory was negligible and the problem simply didn't hurt much.
But by now we have GPUs and AI accelerators which eat up 90% of your system memory, security researchers stumbling over it and IIRC even multiple CVE numbers for some of the resulting issues...
I should probably dig it up and re-send my patch set.
Happy holidays,
Christian.
>
> Maxime
Hi,
Here are kernel-doc fixes for the mm subsystem, based on the mm-hotfixes-unstable
branch. This series is split from the previous assorted kernel-doc fixes series
[1], with review trailers applied.
I'm also including the textsearch fix since there's currently no maintainer
for include/linux/textsearch.h (get_maintainer.pl only shows LKML).
Enjoy!
[1]: https://lore.kernel.org/linux-fsdevel/20251215113903.46555-1-bagasdotme@gma…
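For readers unfamiliar with this class of warning, each fix has the same shape: a kernel-doc comment that forgot to document one parameter or struct member. A generic sketch with a made-up helper (not taken from the series):

	/**
	 * example_flags_save() - stash the caller's allocation flags (hypothetical)
	 * @flags: the allocation flags to save; adding this @flags line is what
	 *         silences the "Function parameter or struct member 'flags' not
	 *         described" warning from scripts/kernel-doc
	 *
	 * Return: the previous flags value.
	 */
	static inline unsigned int example_flags_save(unsigned int flags)
	{
		return flags;
	}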
Bagas Sanjaya (4):
mm: Describe @flags parameter in memalloc_flags_save()
textsearch: Describe @list member in ts_ops search
mm: vmalloc: Fix up vrealloc_node_align() kernel-doc macro name
mm, kfence: Describe @slab parameter in __kfence_obj_info()
include/linux/kfence.h | 1 +
include/linux/sched/mm.h | 1 +
include/linux/textsearch.h | 1 +
mm/vmalloc.c | 2 +-
4 files changed, 4 insertions(+), 1 deletion(-)
base-commit: 980dbceadd50af9437257d8095d4a3606818e8c4
--
An old man doll... just what I always wanted! - Clara
On 12/11/25 16:04, Tvrtko Ursulin wrote:
...
>> @@ -90,6 +73,11 @@ static int test_signaling(void *arg)
>> goto err_free;
>> }
>> + if (rcu_dereference_protected(f->ops, true)) {
>> + pr_err("Fence ops not cleared on signal\n");
>> + goto err_free;
>> + }
>
> Bump to after the signaled check just below? Otherwise the signaled state hasn't been ascertained yet.
Done. I've put it at the end of the test.
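Roughly, the reordering amounts to this (sketch only, not the exact diff):

	dma_fence_signal(f);

	if (!dma_fence_is_signaled(f)) {
		pr_err("Fence not reporting signaled\n");
		goto err_free;
	}

	/*
	 * Only after the signaled state has been ascertained do we check
	 * that the ops pointer was cleared, which this series now does on
	 * signalling when no wait/release callback is given.
	 */
	if (rcu_dereference_protected(f->ops, true)) {
		pr_err("Fence ops not cleared on signal\n");
		goto err_free;
	}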
>> +
>> if (!dma_fence_is_signaled(f)) {
>> pr_err("Fence not reporting signaled\n");
>> goto err_free;
>> @@ -540,19 +528,7 @@ int dma_fence(void)
>> SUBTEST(test_stub),
>> SUBTEST(race_signal_callback),
>> };
>> - int ret;
>> pr_info("sizeof(dma_fence)=%zu\n", sizeof(struct dma_fence));
>> -
>> - slab_fences = KMEM_CACHE(mock_fence,
>> - SLAB_TYPESAFE_BY_RCU |
>
> Hm.. race_signal_callback looks like it could be depending on SLAB_TYPESAFE_BY_RCU. Do you not think so?
Huh? As far as I can see it doesn't.
The race_signal_callback test just depends on the general RCU functionality of fences.
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
>> - SLAB_HWCACHE_ALIGN);
>> - if (!slab_fences)
>> - return -ENOMEM;
>> -
>> - ret = subtests(tests, NULL);
>> -
>> - kmem_cache_destroy(slab_fences);
>> -
>> - return ret;
>> + return subtests(tests, NULL);
>> }
>
Hi,
Here are assorted kernel-doc fixes for the 6.19 cycle. As the name
implies, for the merging strategy, the patches can be taken by the
respective maintainers into appropriate fixes branches (targeting
6.19, of course; e.g. for mm it will be mm-hotfixes).
Enjoy!
Bagas Sanjaya (14):
genalloc: Describe @start_addr parameter in genpool_algo_t
mm: Describe @flags parameter in memalloc_flags_save()
textsearch: Describe @list member in ts_ops search
mm: vmalloc: Fix up vrealloc_node_align() kernel-doc macro name
mm, kfence: Describe @slab parameter in __kfence_obj_info()
virtio: Describe @map and @vmap members in virtio_device struct
fs: Describe @isnew parameter in ilookup5_nowait()
VFS: fix __start_dirop() kernel-doc warnings
drm/amd/display: Don't use kernel-doc comment in
dc_register_software_state struct
drm/amdgpu: Describe @AMD_IP_BLOCK_TYPE_RAS in amd_ip_block_type enum
drm/gem/shmem: Describe @shmem and @size parameters
drm/scheduler: Describe @result in drm_sched_job_done()
drm/gpusvm: Fix drm_gpusvm_pages_valid_unlocked() kernel-doc comment
net: bridge: Describe @tunnel_hash member in net_bridge_vlan_group
struct
drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
drivers/gpu/drm/amd/include/amd_shared.h | 1 +
drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
drivers/gpu/drm/drm_gpusvm.c | 4 ++--
drivers/gpu/drm/scheduler/sched_main.c | 1 +
fs/inode.c | 1 +
fs/namei.c | 3 ++-
include/linux/genalloc.h | 1 +
include/linux/kfence.h | 1 +
include/linux/sched/mm.h | 1 +
include/linux/textsearch.h | 1 +
include/linux/virtio.h | 2 ++
mm/vmalloc.c | 2 +-
net/bridge/br_private.h | 1 +
14 files changed, 18 insertions(+), 6 deletions(-)
base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
--
An old man doll... just what I always wanted! - Clara
On 12/17/25 11:25, Lukas Bulwahn wrote:
> From: Lukas Bulwahn <lukas.bulwahn(a)redhat.com>
>
> The intent of commit 646013f513f3 ("dma-buf: enable DMABUF_DEBUG by default
> on DEBUG kernels") is clear, but it mixes up the config option name. The
> config option for kernel debugging is named DEBUG_KERNEL, not DEBUG.
>
> Fix up the DMABUF_DEBUG definition to use the intended name.
>
> Fixes: 646013f513f3 ("dma-buf: enable DMABUF_DEBUG by default on DEBUG kernels")
> Signed-off-by: Lukas Bulwahn <lukas.bulwahn(a)redhat.com>
Ah, yeah. I mixed up the C define vs the config option. Thanks for pointing that out.
Reviewed-by: Christian König <christian.koenig(a)amd.com>
> ---
> drivers/dma-buf/Kconfig | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
> index fdd823e446cc..426c9ad3364f 100644
> --- a/drivers/dma-buf/Kconfig
> +++ b/drivers/dma-buf/Kconfig
> @@ -55,7 +55,7 @@ config DMABUF_MOVE_NOTIFY
> config DMABUF_DEBUG
> bool "DMA-BUF debug checks"
> depends on DMA_SHARED_BUFFER
> - default y if DEBUG
> + default y if DEBUG_KERNEL
> help
> This option enables additional checks for DMA-BUF importers and
> exporters. Specifically it validates that importers do not peek at the
On 12/15/25 14:59, Maxime Ripard wrote:
> On Mon, Dec 15, 2025 at 02:30:47PM +0100, Christian König wrote:
>> On 12/15/25 11:51, Maxime Ripard wrote:
>>> Hi TJ,
>>>
>>> On Fri, Dec 12, 2025 at 08:25:19AM +0900, T.J. Mercier wrote:
>>>> On Fri, Dec 12, 2025 at 4:31 AM Eric Chanudet <echanude(a)redhat.com> wrote:
>>>>>
>>>>> The system dma-buf heap lets userspace allocate buffers from the page
>>>>> allocator. However, these allocations are not accounted for in memcg,
>>>>> allowing processes to escape limits that may be configured.
>>>>>
>>>>> Pass the __GFP_ACCOUNT for our allocations to account them into memcg.
>>>>
>>>> We had a discussion just last night in the MM track at LPC about how
>>>> shared memory accounted in memcg is pretty broken. Without a way to
>>>> identify (and possibly transfer) ownership of a shared buffer, this
>>>> makes the accounting of shared memory, and zombie memcg problems
>>>> worse. :\
>>>
>>> Are there notes or a report from that discussion anywhere?
>>>
>>> The way I see it, the dma-buf heaps *trivial* case is non-existent at
>>> the moment and that's definitely broken. Any application can bypass its
>>> cgroups limits trivially, and that's a pretty big hole in the system.
>>
>> Well, that is just the tip of the iceberg.
>>
>> Pretty much all driver interfaces doesn't account to memcg at the
>> moment, all the way from alsa, over GPUs (both TTM and SHM-GEM) to
>> V4L2.
>
> Yes, I know, and step 1 of the plan we discussed earlier this year is to
> fix the heaps.
>
>>> The shared ownership is indeed broken, but it's not more or less broken
>>> than, say, memfd + udmabuf, and I'm sure plenty of others.
>>>
>>> So we really improve the common case, but only make the "advanced"
>>> slightly more broken than it already is.
>>>
>>> Would you disagree?
>>
>> I strongly disagree. As far as I can see there is a huge chance we
>> break existing use cases with that.
>
> Which ones? And what about the ones that are already broken?
Well everybody that expects that driver resources are *not* accounted to memcg.
>> There has been some work on TTM by Dave but I still haven't found time
>> to wrap my head around all possible side effects such a change can
>> have.
>>
>> The fundamental problem is that neither memcg nor the classic resource
>> tracking (e.g. the OOM killer) has a good understanding of shared
>> resources.
>
> And yet heap allocations don't necessarily have to be shared. But they
> all have to be allocated.
>
>> For example you can use memfd to basically kill any process in the
>> system because the OOM killer can't identify the process which holds
>> the reference to the memory in question. And that is a *MUCH* bigger
>> problem than just inaccurate memcg accounting.
>
> When you frame it like that, sure. Also, you can use the system heap to
> DoS any process in the system. I'm not saying that what you're concerned
> about isn't an issue, but let's not brush off other people legitimate
> issues as well.
Completely agree, but we should prioritize.
That driver-allocated memory is not memcg accounted is actually uAPI, i.e. that is not something which can easily change.
Fixing the OOM killer, on the other hand, looks perfectly doable and will then most likely also show a better path for how to fix the memcg accounting.
Christian.
>
> Maxime
On Mon, Dec 15, 2025 at 7:51 PM Maxime Ripard <mripard(a)redhat.com> wrote:
>
> Hi TJ,
Hi Maxime,
> On Fri, Dec 12, 2025 at 08:25:19AM +0900, T.J. Mercier wrote:
> > On Fri, Dec 12, 2025 at 4:31 AM Eric Chanudet <echanude(a)redhat.com> wrote:
> > >
> > > The system dma-buf heap lets userspace allocate buffers from the page
> > > allocator. However, these allocations are not accounted for in memcg,
> > > allowing processes to escape limits that may be configured.
> > >
> > > Pass the __GFP_ACCOUNT for our allocations to account them into memcg.
> >
> > We had a discussion just last night in the MM track at LPC about how
> > shared memory accounted in memcg is pretty broken. Without a way to
> > identify (and possibly transfer) ownership of a shared buffer, this
> > makes the accounting of shared memory, and zombie memcg problems
> > worse. :\
>
> Are there notes or a report from that discussion anywhere?
The LPC vids haven't been clipped yet, and actually I can't even find
the recorded full live stream from Hall A2 on the first day. So I
don't think there's anything to look at, but I bet there's probably
nothing there you don't already know.
> The way I see it, the dma-buf heaps *trivial* case is non-existent at
> the moment and that's definitely broken. Any application can bypass its
> cgroups limits trivially, and that's a pretty big hole in the system.
Agree, but if we only charge the first allocator then limits can still
easily be bypassed assuming an app can cause an allocation outside of
its cgroup tree.
I'm not sure using static memcg limits where a significant portion of
the memory can be shared is really feasible. Even with just pagecache
being charged to memcgs, we're having trouble defining a static memcg
limit that is really useful since it has to be high enough to
accommodate occasional spikes due to shared memory that might or might
not be charged (since it can only be charged to one memcg - it may be
spread around or it may all get charged to one memcg). So excessive
anonymous use has to get really bad before it gets punished.
What I've been hearing lately is that folks are polling memory.stat or
PSI or other metrics and using that to take actions (memory.reclaim /
killing / adjust memory.high) at runtime rather than relying on
memory.high/max behavior with a static limit.
> The shared ownership is indeed broken, but it's not more or less broken
> than, say, memfd + udmabuf, and I'm sure plenty of others.
One thing that's worse about system heap buffers is that unlike memfd
the memory isn't reclaimable. So without killing all users there's
currently no way to deal with the zombie issue. Harry's proposing
reparenting, but I don't think our current interfaces support that
because we'd have to mess with the page structs behind system heap
dmabufs to change the memcg during reparenting.
Ah... but udmabuf pins the memfd pages, so you're right that memfd +
udmabuf isn't worse.
> So we really improve the common case, but only make the "advanced"
> slightly more broken than it already is.
>
> Would you disagree?
I think memcg limits in this case just wouldn't be usable because of
what I mentioned above. In our common case the allocator is in a
different cgroup tree than the real users of the buffer.
> Maxime
On 12/15/25 16:53, Tvrtko Ursulin wrote:
>
> On 15/12/2025 15:38, Christian König wrote:
>> On 12/15/25 10:20, Tvrtko Ursulin wrote:
>>>
>>> On 12/12/2025 15:50, Christian König wrote:
>>>> On 12/11/25 16:13, Tvrtko Ursulin wrote:
>>>>>
>>>>> On 11/12/2025 13:16, Christian König wrote:
>>>>>> Using the inline lock is now the recommended way for dma_fence implementations.
>>>>>>
>>>>>> So use this approach for the scheduler fences as well just in case if
>>>>>> anybody uses this as blueprint for its own implementation.
>>>>>>
>>>>>> Also saves about 4 bytes for the external spinlock.
>>>>>>
>>>>>> Signed-off-by: Christian König <christian.koenig(a)amd.com>
>>>>>> ---
>>>>>> drivers/gpu/drm/scheduler/sched_fence.c | 7 +++----
>>>>>> include/drm/gpu_scheduler.h | 4 ----
>>>>>> 2 files changed, 3 insertions(+), 8 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
>>>>>> index 08ccbde8b2f5..47471b9e43f9 100644
>>>>>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
>>>>>> @@ -161,7 +161,7 @@ static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
>>>>>> /* If we already have an earlier deadline, keep it: */
>>>>>> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
>>>>>> ktime_before(fence->deadline, deadline)) {
>>>>>> - spin_unlock_irqrestore(&fence->lock, flags);
>>>>>> + dma_fence_unlock_irqrestore(f, flags);
>>>>>
>>>>> Rebase error I guess. Pull into the locking helpers patch.
>>>>
>>>> No that is actually completely intentional here.
>>>>
>>>> Previously we had a separate lock which protected both the DMA-fences as well as the deadline state.
>>>>
>>>> Now we turn that upside down by dropping the separate lock and protecting the deadline state with the dma_fence lock instead.
>>>
>>> I don't follow. The code is currently like this:
>>>
>>> static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
>>> ktime_t deadline)
>>> {
>>> struct drm_sched_fence *fence = to_drm_sched_fence(f);
>>> struct dma_fence *parent;
>>> unsigned long flags;
>>>
>>> spin_lock_irqsave(&fence->lock, flags);
>>>
>>> /* If we already have an earlier deadline, keep it: */
>>> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
>>> ktime_before(fence->deadline, deadline)) {
>>> spin_unlock_irqrestore(&fence->lock, flags);
>>> return;
>>> }
>>>
>>> fence->deadline = deadline;
>>> set_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);
>>>
>>> spin_unlock_irqrestore(&fence->lock, flags);...
>>>
>>> The diff changes one out of the three lock/unlock operations. Other two are changed in 3/19. All three should surely be changed in the same patch.
>>
>> We could change those spin_lock/unlock calls in patch #3, but I don't think that this is clean.
>>
>> See the code here currently uses fence->lock and patch #3 would change it to use fence->finished->lock instead. That might be the pointer at the moment, but that is just by coincident and not design.
>>
>> Only this change here ontop makes it intentional that we use fence->finished->lock for everything.
>
> Sorry I still don't follow. After 3/19 and before this 9/19 the function looks like this:
>
> static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
> ktime_t deadline)
> {
> struct drm_sched_fence *fence = to_drm_sched_fence(f);
> struct dma_fence *parent;
> unsigned long flags;
>
> dma_fence_lock_irqsave(f, flags);
>
> /* If we already have an earlier deadline, keep it: */
> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
> ktime_before(fence->deadline, deadline)) {
> spin_unlock_irqrestore(&fence->lock, flags);
> return;
> }
>
> fence->deadline = deadline;
> set_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);
>
> dma_fence_unlock_irqrestore(f, flags);
>
> Notice the lonely spin_unlock_irqrestore on the early return path while other two use the dma_fence_(un)lock helpers. Am I blind or how is that clean?
Oh, that's what you mean. Sorry I was blind!
Yeah that is clearly unintentional.
Thanks,
Christian.
>
> Regards,
>
> Tvrtko
>
>>
>> Regards,
>> Christian.
>>
>>>
>>> Regards,
>>>
>>> Tvrtko
>>>
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>>
>>>>> Regards,
>>>>>
>>>>> Tvrtko
>>>>>
>>>>>> return;
>>>>>> }
>>>>>> @@ -217,7 +217,6 @@ struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
>>>>>> fence->owner = owner;
>>>>>> fence->drm_client_id = drm_client_id;
>>>>>> - spin_lock_init(&fence->lock);
>>>>>> return fence;
>>>>>> }
>>>>>> @@ -230,9 +229,9 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
>>>>>> fence->sched = entity->rq->sched;
>>>>>> seq = atomic_inc_return(&entity->fence_seq);
>>>>>> dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
>>>>>> - &fence->lock, entity->fence_context, seq);
>>>>>> + NULL, entity->fence_context, seq);
>>>>>> dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
>>>>>> - &fence->lock, entity->fence_context + 1, seq);
>>>>>> + NULL, entity->fence_context + 1, seq);
>>>>>> }
>>>>>> module_init(drm_sched_fence_slab_init);
>>>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>>>> index fb88301b3c45..b77f24a783e3 100644
>>>>>> --- a/include/drm/gpu_scheduler.h
>>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>>> @@ -297,10 +297,6 @@ struct drm_sched_fence {
>>>>>> * belongs to.
>>>>>> */
>>>>>> struct drm_gpu_scheduler *sched;
>>>>>> - /**
>>>>>> - * @lock: the lock used by the scheduled and the finished fences.
>>>>>> - */
>>>>>> - spinlock_t lock;
>>>>>> /**
>>>>>> * @owner: job owner for debugging
>>>>>> */
>>>>>
>>>>
>>>
>>
>
On 12/15/25 10:20, Tvrtko Ursulin wrote:
>
> On 12/12/2025 15:50, Christian König wrote:
>> On 12/11/25 16:13, Tvrtko Ursulin wrote:
>>>
>>> On 11/12/2025 13:16, Christian König wrote:
>>>> Using the inline lock is now the recommended way for dma_fence implementations.
>>>>
>>>> So use this approach for the scheduler fences as well just in case if
>>>> anybody uses this as blueprint for its own implementation.
>>>>
>>>> Also saves about 4 bytes for the external spinlock.
>>>>
>>>> Signed-off-by: Christian König <christian.koenig(a)amd.com>
>>>> ---
>>>> drivers/gpu/drm/scheduler/sched_fence.c | 7 +++----
>>>> include/drm/gpu_scheduler.h | 4 ----
>>>> 2 files changed, 3 insertions(+), 8 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
>>>> index 08ccbde8b2f5..47471b9e43f9 100644
>>>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
>>>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
>>>> @@ -161,7 +161,7 @@ static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
>>>> /* If we already have an earlier deadline, keep it: */
>>>> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
>>>> ktime_before(fence->deadline, deadline)) {
>>>> - spin_unlock_irqrestore(&fence->lock, flags);
>>>> + dma_fence_unlock_irqrestore(f, flags);
>>>
>>> Rebase error I guess. Pull into the locking helpers patch.
>>
>> No that is actually completely intentional here.
>>
>> Previously we had a separate lock which protected both the DMA-fences as well as the deadline state.
>>
>> Now we turn that upside down by dropping the separate lock and protecting the deadline state with the dma_fence lock instead.
>
> I don't follow. The code is currently like this:
>
> static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
> ktime_t deadline)
> {
> struct drm_sched_fence *fence = to_drm_sched_fence(f);
> struct dma_fence *parent;
> unsigned long flags;
>
> spin_lock_irqsave(&fence->lock, flags);
>
> /* If we already have an earlier deadline, keep it: */
> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
> ktime_before(fence->deadline, deadline)) {
> spin_unlock_irqrestore(&fence->lock, flags);
> return;
> }
>
> fence->deadline = deadline;
> set_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);
>
> spin_unlock_irqrestore(&fence->lock, flags);...
>
> The diff changes one out of the three lock/unlock operations. Other two are changed in 3/19. All three should surely be changed in the same patch.
We could change those spin_lock/unlock calls in patch #3, but I don't think that this is clean.
See, the code here currently uses fence->lock, and patch #3 would change it to use fence->finished->lock instead. That might be the same pointer at the moment, but that is just by coincidence and not by design.
Only this change here on top makes it intentional that we use fence->finished->lock for everything.
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
>>
>> Regards,
>> Christian.
>>
>>>
>>> Regards,
>>>
>>> Tvrtko
>>>
>>>> return;
>>>> }
>>>> @@ -217,7 +217,6 @@ struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
>>>> fence->owner = owner;
>>>> fence->drm_client_id = drm_client_id;
>>>> - spin_lock_init(&fence->lock);
>>>> return fence;
>>>> }
>>>> @@ -230,9 +229,9 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
>>>> fence->sched = entity->rq->sched;
>>>> seq = atomic_inc_return(&entity->fence_seq);
>>>> dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
>>>> - &fence->lock, entity->fence_context, seq);
>>>> + NULL, entity->fence_context, seq);
>>>> dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
>>>> - &fence->lock, entity->fence_context + 1, seq);
>>>> + NULL, entity->fence_context + 1, seq);
>>>> }
>>>> module_init(drm_sched_fence_slab_init);
>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>> index fb88301b3c45..b77f24a783e3 100644
>>>> --- a/include/drm/gpu_scheduler.h
>>>> +++ b/include/drm/gpu_scheduler.h
>>>> @@ -297,10 +297,6 @@ struct drm_sched_fence {
>>>> * belongs to.
>>>> */
>>>> struct drm_gpu_scheduler *sched;
>>>> - /**
>>>> - * @lock: the lock used by the scheduled and the finished fences.
>>>> - */
>>>> - spinlock_t lock;
>>>> /**
>>>> * @owner: job owner for debugging
>>>> */
>>>
>>
>
On 12/15/25 11:51, Maxime Ripard wrote:
> Hi TJ,
>
> On Fri, Dec 12, 2025 at 08:25:19AM +0900, T.J. Mercier wrote:
>> On Fri, Dec 12, 2025 at 4:31 AM Eric Chanudet <echanude(a)redhat.com> wrote:
>>>
>>> The system dma-buf heap lets userspace allocate buffers from the page
>>> allocator. However, these allocations are not accounted for in memcg,
>>> allowing processes to escape limits that may be configured.
>>>
>>> Pass the __GFP_ACCOUNT for our allocations to account them into memcg.
>>
>> We had a discussion just last night in the MM track at LPC about how
>> shared memory accounted in memcg is pretty broken. Without a way to
>> identify (and possibly transfer) ownership of a shared buffer, this
>> makes the accounting of shared memory, and zombie memcg problems
>> worse. :\
>
> Are there notes or a report from that discussion anywhere?
>
> The way I see it, the dma-buf heaps *trivial* case is non-existent at
> the moment and that's definitely broken. Any application can bypass its
> cgroups limits trivially, and that's a pretty big hole in the system.
Well, that is just the tip of the iceberg.
Pretty much all driver interfaces don't account to memcg at the moment, all the way from ALSA, through GPUs (both TTM and SHM-GEM), to V4L2.
> The shared ownership is indeed broken, but it's not more or less broken
> than, say, memfd + udmabuf, and I'm sure plenty of others.
>
> So we really improve the common case, but only make the "advanced"
> slightly more broken than it already is.
>
> Would you disagree?
I strongly disagree. As far as I can see there is a huge chance we break existing use cases with that.
There has been some work on TTM by Dave but I still haven't found time to wrap my head around all possible side effects such a change can have.
The fundamental problem is that neither memcg nor the classic resource tracking (e.g. the OOM killer) has a good understanding of shared resources.
For example you can use memfd to basically kill any process in the system because the OOM killer can't identify the process which holds the reference to the memory in question. And that is a *MUCH* bigger problem than just inaccurate memcg accounting.
Regards,
Christian.
>
> Maxime
On 12/11/25 16:13, Tvrtko Ursulin wrote:
>
> On 11/12/2025 13:16, Christian König wrote:
>> Using the inline lock is now the recommended way for dma_fence implementations.
>>
>> So use this approach for the scheduler fences as well just in case if
>> anybody uses this as blueprint for its own implementation.
>>
>> Also saves about 4 bytes for the external spinlock.
>>
>> Signed-off-by: Christian König <christian.koenig(a)amd.com>
>> ---
>> drivers/gpu/drm/scheduler/sched_fence.c | 7 +++----
>> include/drm/gpu_scheduler.h | 4 ----
>> 2 files changed, 3 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
>> index 08ccbde8b2f5..47471b9e43f9 100644
>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
>> @@ -161,7 +161,7 @@ static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
>> /* If we already have an earlier deadline, keep it: */
>> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
>> ktime_before(fence->deadline, deadline)) {
>> - spin_unlock_irqrestore(&fence->lock, flags);
>> + dma_fence_unlock_irqrestore(f, flags);
>
> Rebase error I guess. Pull into the locking helpers patch.
No that is actually completely intentional here.
Previously we had a separate lock which protected both the DMA-fences as well as the deadline state.
Now we turn that upside down by dropping the separate lock and protecting the deadline state with the dma_fence lock instead.
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
>> return;
>> }
>> @@ -217,7 +217,6 @@ struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
>> fence->owner = owner;
>> fence->drm_client_id = drm_client_id;
>> - spin_lock_init(&fence->lock);
>> return fence;
>> }
>> @@ -230,9 +229,9 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
>> fence->sched = entity->rq->sched;
>> seq = atomic_inc_return(&entity->fence_seq);
>> dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
>> - &fence->lock, entity->fence_context, seq);
>> + NULL, entity->fence_context, seq);
>> dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
>> - &fence->lock, entity->fence_context + 1, seq);
>> + NULL, entity->fence_context + 1, seq);
>> }
>> module_init(drm_sched_fence_slab_init);
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index fb88301b3c45..b77f24a783e3 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -297,10 +297,6 @@ struct drm_sched_fence {
>> * belongs to.
>> */
>> struct drm_gpu_scheduler *sched;
>> - /**
>> - * @lock: the lock used by the scheduled and the finished fences.
>> - */
>> - spinlock_t lock;
>> /**
>> * @owner: job owner for debugging
>> */
>
On 12/12/25 14:20, Karol Wachowski wrote:
> Add missing drm_gem_object_put() call when drm_gem_object_lookup()
> successfully returns an object. This fixes a GEM object reference
> leak that can prevent driver modules from unloading when using
> prime buffers.
>
> Fixes: 53096728b891 ("drm: Add DRM prime interface to reassign GEM handle")
> Signed-off-by: Karol Wachowski <karol.wachowski(a)linux.intel.com>
> ---
> Changes between v1 and v2:
> - move setting ret value under if branch as suggested in review
> - add Cc: stable 6.18+
Oh don't CC the stable list on the review mail directly, just add "CC: stable(a)vger.kernel.org # 6.18+" to the tags. Greg is going to complain about that :(
With that done, Reviewed-by: Christian König <christian.koenig(a)amd.com>, and please push to drm-misc-fixes.
If you don't have commit rights for drm-misc-fixes, please ping me and I'll push it.
Thanks,
Christian.
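For reference, the trailer block would then look something like this (same tags as in this thread, just with the stable Cc carried in the commit message):

Fixes: 53096728b891 ("drm: Add DRM prime interface to reassign GEM handle")
Cc: stable(a)vger.kernel.org # 6.18+
Signed-off-by: Karol Wachowski <karol.wachowski(a)linux.intel.com>
Reviewed-by: Christian König <christian.koenig(a)amd.com>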
> ---
> drivers/gpu/drm/drm_gem.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index ca1956608261..bcc08a6aebf8 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1010,8 +1010,10 @@ int drm_gem_change_handle_ioctl(struct drm_device *dev, void *data,
> if (!obj)
> return -ENOENT;
>
> - if (args->handle == args->new_handle)
> - return 0;
> + if (args->handle == args->new_handle) {
> + ret = 0;
> + goto out;
> + }
>
> mutex_lock(&file_priv->prime.lock);
>
> @@ -1043,6 +1045,8 @@ int drm_gem_change_handle_ioctl(struct drm_device *dev, void *data,
>
> out_unlock:
> mutex_unlock(&file_priv->prime.lock);
> +out:
> + drm_gem_object_put(obj);
>
> return ret;
> }
On 12/11/25 15:35, Tvrtko Ursulin wrote:
>
> Hi,
>
> On 11/12/2025 13:16, Christian König wrote:
>> Implement per-fence spinlocks, allowing implementations to not give an
>> external spinlock to protect the fence internal statei. Instead a spinlock
>> embedded into the fence structure itself is used in this case.
>>
>> Shared spinlocks have the problem that implementations need to guarantee
>> that the lock lives at least as long as all fences referencing it.
>>
>> Using a per-fence spinlock allows completely decoupling spinlock producer
>> and consumer life times, simplifying the handling in most use cases.
>>
>> v2: improve naming, coverage and function documentation
>> v3: fix one additional locking in the selftests
>>
>> Signed-off-by: Christian König <christian.koenig(a)amd.com>
>> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
>
> I don't think I gave r-b on this one. Not just yet at least. Maybe you have missed the comments I had in the previous two rounds? I will repeat them below.
I was already wondering why you gave comments and an r-b but thought that the comments might just be optional.
Going to remove that and look at the comments below.
>> @@ -365,7 +364,7 @@ void dma_fence_signal_timestamp_locked(struct dma_fence *fence,
>> struct dma_fence_cb *cur, *tmp;
>> struct list_head cb_list;
>> - lockdep_assert_held(fence->lock);
>> + lockdep_assert_held(dma_fence_spinlock(fence));
>> if (unlikely(test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
>> &fence->flags)))
>> @@ -412,9 +411,9 @@ void dma_fence_signal_timestamp(struct dma_fence *fence, ktime_t timestamp)
>> if (WARN_ON(!fence))
>> return;
>> - spin_lock_irqsave(fence->lock, flags);
>> + dma_fence_lock_irqsave(fence, flags);
>
> For the locking wrappers I think it would be better to introduce them in a purely mechanical patch preceding this one. That is, just add the wrappers and nothing else.
That doesn't fully work for all cases, but I will separate it out a bit more.
>> static inline uint64_t amdgpu_vm_tlb_seq(struct amdgpu_vm *vm)
>> {
>> + struct dma_fence *fence;
>> unsigned long flags;
>> - spinlock_t *lock;
>> /*
>> * Workaround to stop racing between the fence signaling and handling
>> - * the cb. The lock is static after initially setting it up, just make
>> - * sure that the dma_fence structure isn't freed up.
>> + * the cb.
>> */
>> rcu_read_lock();
>> - lock = vm->last_tlb_flush->lock;
>> + fence = dma_fence_get_rcu(vm->last_tlb_flush);
>
> Why does this belong here? If taking a reference fixes some race it needs to be a separate patch. If it doesn't then this patch shouldn't be adding it.
The code previously assumed that the lock is global and can't go away while the function is called. When we start to use an inline lock that assumption is not true any more.
But you're right that can be a separate patch.
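Roughly, the resulting pattern looks like this (a sketch of the hunk above with the reference handling spelled out, not the exact patch):

	struct dma_fence *fence;
	unsigned long flags;

	/* Pin the fence so it (and its inline lock) can't be freed under us. */
	rcu_read_lock();
	fence = dma_fence_get_rcu(vm->last_tlb_flush);
	rcu_read_unlock();

	if (fence) {
		/* Take and drop the lock to sync with the signalling cb. */
		dma_fence_lock_irqsave(fence, flags);
		dma_fence_unlock_irqrestore(fence, flags);
		dma_fence_put(fence);
	}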
>> @@ -362,6 +368,38 @@ dma_fence_get_rcu_safe(struct dma_fence __rcu **fencep)
>> } while (1);
>> }
>> +/**
>> + * dma_fence_spinlock - return pointer to the spinlock protecting the fence
>> + * @fence: the fence to get the lock from
>> + *
>> + * Return either the pointer to the embedded or the external spin lock.
>> + */
>> +static inline spinlock_t *dma_fence_spinlock(struct dma_fence *fence)
>> +{
>> + return test_bit(DMA_FENCE_FLAG_INLINE_LOCK_BIT, &fence->flags) ?
>> + &fence->inline_lock : fence->extern_lock;
>
> Is sprinkling of conditionals better than growing the struct? Probably yes, since branch misses are cheaper than cache misses. Unless the code grows significantly on some hot path and we get instruction cache misses instead. Who knows. But let's say in the commit message we considered it and decided on this solution due to xyz.
Sure.
>
> On a quick grep there is one arch where this grows the struct past a cache line anyway, but as it is PA-RISC I guess no one cares. Lets mention that in the commit message as well.
Interesting, I was aware of the problems on Sparc regarding spinlocks but that PA-RISC also has something more complicated than an int is news to me.
Anyway I agree it doesn't really matter.
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>> +}
>> +
>> +/**
>> + * dma_fence_lock_irqsave - irqsave lock the fence
>> + * @fence: the fence to lock
>> + * @flags: where to store the CPU flags.
>> + *
>> + * Lock the fence, preventing it from changing to the signaled state.
>> + */
>> +#define dma_fence_lock_irqsave(fence, flags) \
>> + spin_lock_irqsave(dma_fence_spinlock(fence), flags)
>> +
>> +/**
>> + * dma_fence_unlock_irqrestore - unlock the fence and irqrestore
>> + * @fence: the fence to unlock
>> + * @flags the CPU flags to restore
>> + *
>> + * Unlock the fence, allowing it to change it's state to signaled again.
>> + */
>> +#define dma_fence_unlock_irqrestore(fence, flags) \
>> + spin_unlock_irqrestore(dma_fence_spinlock(fence), flags)
>> +
>> #ifdef CONFIG_LOCKDEP
>> bool dma_fence_begin_signalling(void);
>> void dma_fence_end_signalling(bool cookie);
>
On 12/11/25 13:33, Philipp Stanner wrote:
> On Thu, 2025-12-11 at 13:16 +0100, Christian König wrote:
>> Hi everyone,
>>
>> dma_fences have ever lived under the tyranny dictated by the module
>> lifetime of their issuer, leading to crashes should anybody still holding
>> a reference to a dma_fence when the module of the issuer was unloaded.
>>
>> The basic problem is that when buffer are shared between drivers
>> dma_fence objects can leak into external drivers and stay there even
>> after they are signaled. The dma_resv object for example only lazy releases
>> dma_fences.
>>
>> So what happens is that when the module who originally created the dma_fence
>> unloads the dma_fence_ops function table becomes unavailable as well and so
>> any attempt to release the fence crashes the system.
>>
>> Previously various approaches have been discussed, including changing the
>> locking semantics of the dma_fence callbacks (by me) as well as using the
>> drm scheduler as intermediate layer (by Sima) to disconnect dma_fences
>> from their actual users, but none of them are actually solving all problems.
>>
>> Tvrtko did some really nice prerequisite work by protecting the returned
>> strings of the dma_fence_ops by RCU. This way dma_fence creators where
>> able to just wait for an RCU grace period after fence signaling before
>> they could be save to free those data structures.
>>
>> Now this patch set here goes a step further and protects the whole
>> dma_fence_ops structure by RCU, so that after the fence signals the
>> pointer to the dma_fence_ops is set to NULL when there is no wait nor
>> release callback given. All functionality which use the dma_fence_ops
>> reference are put inside an RCU critical section, except for the
>> deprecated issuer specific wait and of course the optional release
>> callback.
>>
>> Additional to the RCU changes the lock protecting the dma_fence state
>> previously had to be allocated external. This set here now changes the
>> functionality to make that external lock optional and allows dma_fences
>> to use an inline lock and be self contained.
>>
>> v4:
>>
>> Rebases the whole set on upstream changes, especially the cleanup
>> from Philip in patch "drm/amdgpu: independence for the amdkfd_fence!".
>>
>> Adding two patches which brings the DMA-fence self tests up to date.
>> The first selftest changes removes the mock_wait and so actually starts
>> testing the default behavior instead of some hacky implementation in the
>> test. This one should probably go upstream independent of this set.
>> The second drops the mock_fence as well and tests the new RCU and inline
>> spinlock functionality.
>>
>> Especially the first patch still needs a Reviewed-by, apart from that I
>> think I've addressed all review comments.
>>
>> The plan is to push the core DMA-buf changes to drm-misc-next and then the
>> driver specific changes through the driver channels as approprite.
>
> This does not apply to drm-misc-next (unless I'm screwing up badly).
>
> Where can I apply it? I'd like to test the drm_sched changes before
> this gets merged.
drm-tip from a few days ago, otherwise the xe changes won't work.
Regards,
Christian.
>
> P.
>
>>
>> Please review and comment,
>> Christian.
>>
>>
>
On Fri, Dec 12, 2025 at 4:31 AM Eric Chanudet <echanude(a)redhat.com> wrote:
>
> The system dma-buf heap lets userspace allocate buffers from the page
> allocator. However, these allocations are not accounted for in memcg,
> allowing processes to escape limits that may be configured.
>
> Pass the __GFP_ACCOUNT for our allocations to account them into memcg.
We had a discussion just last night in the MM track at LPC about how
shared memory accounted in memcg is pretty broken. Without a way to
identify (and possibly transfer) ownership of a shared buffer, this
makes the accounting of shared memory, and zombie memcg problems
worse. :\
>
> Userspace components using the system heap can be constrained with, e.g:
> systemd-run --user --scope -p MemoryMax=10M ...
>
> Signed-off-by: Eric Chanudet <echanude(a)redhat.com>
> ---
> drivers/dma-buf/heaps/system_heap.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index 4c782fe33fd4..c91fcdff4b77 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -38,10 +38,10 @@ struct dma_heap_attachment {
> bool mapped;
> };
>
> -#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO)
> +#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO | __GFP_ACCOUNT)
> #define HIGH_ORDER_GFP (((GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN \
> | __GFP_NORETRY) & ~__GFP_RECLAIM) \
> - | __GFP_COMP)
> + | __GFP_COMP | __GFP_ACCOUNT)
> static gfp_t order_flags[] = {HIGH_ORDER_GFP, HIGH_ORDER_GFP, LOW_ORDER_GFP};
> /*
> * The selection of the orders used for allocation (1MB, 64K, 4K) is designed
> --
> 2.52.0
>
Hi everyone,
dma_fences have always lived under the tyranny dictated by the module
lifetime of their issuer, leading to crashes should anybody still be holding
a reference to a dma_fence when the module of the issuer is unloaded.
The basic problem is that when buffers are shared between drivers,
dma_fence objects can leak into external drivers and stay there even
after they are signaled. The dma_resv object, for example, only lazily
releases dma_fences.
So what happens is that when the module which originally created the dma_fence
unloads, the dma_fence_ops function table becomes unavailable as well, and so
any attempt to release the fence crashes the system.
Previously various approaches have been discussed, including changing the
locking semantics of the dma_fence callbacks (by me) as well as using the
drm scheduler as an intermediate layer (by Sima) to disconnect dma_fences
from their actual users, but none of them actually solves all the problems.
Tvrtko did some really nice prerequisite work by protecting the returned
strings of the dma_fence_ops by RCU. This way dma_fence creators were
able to just wait for an RCU grace period after fence signaling before
it was safe to free those data structures.
Now this patch set goes a step further and protects the whole
dma_fence_ops structure by RCU, so that after the fence signals the
pointer to the dma_fence_ops is set to NULL when there is no wait nor
release callback given. All functionality which uses the dma_fence_ops
reference is put inside an RCU critical section, except for the
deprecated issuer-specific wait and of course the optional release
callback.
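To illustrate the consumer side (not code from the patches, just a sketch of the access pattern described above, with made-up function names):

	static void example_fence_describe(struct dma_fence *fence, struct seq_file *m)
	{
		const struct dma_fence_ops *ops;

		rcu_read_lock();
		ops = rcu_dereference(fence->ops);
		if (!ops) {
			/* ops is cleared on signalling, nothing left to query. */
			seq_puts(m, "signaled fence\n");
		} else {
			seq_printf(m, "%s %s\n", ops->get_driver_name(fence),
				   ops->get_timeline_name(fence));
		}
		rcu_read_unlock();
	}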
In addition to the RCU changes: the lock protecting the dma_fence state
previously had to be allocated externally. This set now makes that
external lock optional and allows dma_fences to use an inline lock and
be self-contained.
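On the inline-lock side, a self-contained fence is simply initialized with a NULL lock (again just a sketch, mirroring what the scheduler fence patch does):

	/* No external spinlock: the fence uses its embedded one. */
	dma_fence_init(&my_fence->base, &my_fence_ops, NULL,
		       my_context, my_seqno);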
v4:
Rebases the whole set on upstream changes, especially the cleanup
from Philip in patch "drm/amdgpu: independence for the amdkfd_fence!".
Added two patches which bring the DMA-fence self tests up to date.
The first selftest change removes the mock_wait and so actually starts
testing the default behavior instead of some hacky implementation in the
test. This one should probably go upstream independently of this set.
The second drops the mock_fence as well and tests the new RCU and inline
spinlock functionality.
Especially the first patch still needs a Reviewed-by; apart from that I
think I've addressed all review comments.
The plan is to push the core DMA-buf changes to drm-misc-next and then the
driver-specific changes through the driver channels as appropriate.
Please review and comment,
Christian.
If there is a large number (hundreds) of dmabufs allocated, the text
output generated from dmabuf_iter_seq_show can exceed common user buffer
sizes (e.g. PAGE_SIZE) necessitating multiple start/stop cycles to
iterate through all dmabufs. However the dmabuf iterator currently
returns NULL in dmabuf_iter_seq_start for all non-zero pos values, which
results in the truncation of the output before all dmabufs are handled.
After dma_buf_iter_begin / dma_buf_iter_next, the refcount of the buffer
is elevated so that the BPF iterator program can run without holding any
locks. When a stop occurs, instead of immediately dropping the reference
on the buffer, stash a pointer to the buffer in seq->priv until
either start is called or the iterator is released. This also enables
the resumption of iteration without first walking through the list of
dmabufs based on the pos value.
Fixes: 76ea95534995 ("bpf: Add dmabuf iterator")
Signed-off-by: T.J. Mercier <tjmercier(a)google.com>
---
kernel/bpf/dmabuf_iter.c | 56 +++++++++++++++++++++++++++++++++++-----
1 file changed, 49 insertions(+), 7 deletions(-)
diff --git a/kernel/bpf/dmabuf_iter.c b/kernel/bpf/dmabuf_iter.c
index 4dd7ef7c145c..cd500248abd9 100644
--- a/kernel/bpf/dmabuf_iter.c
+++ b/kernel/bpf/dmabuf_iter.c
@@ -6,10 +6,33 @@
#include <linux/kernel.h>
#include <linux/seq_file.h>
+struct dmabuf_iter_priv {
+ /*
+ * If this pointer is non-NULL, the buffer's refcount is elevated to
+ * prevent destruction between stop/start. If reading is not resumed and
+ * start is never called again, then dmabuf_iter_seq_fini drops the
+ * reference when the iterator is released.
+ */
+ struct dma_buf *dmabuf;
+};
+
static void *dmabuf_iter_seq_start(struct seq_file *seq, loff_t *pos)
{
- if (*pos)
- return NULL;
+ struct dmabuf_iter_priv *p = seq->private;
+
+ if (*pos) {
+ struct dma_buf *dmabuf = p->dmabuf;
+
+ if (!dmabuf)
+ return NULL;
+
+ /*
+ * Always resume from where we stopped, regardless of the value
+ * of pos.
+ */
+ p->dmabuf = NULL;
+ return dmabuf;
+ }
return dma_buf_iter_begin();
}
@@ -54,8 +77,11 @@ static void dmabuf_iter_seq_stop(struct seq_file *seq, void *v)
{
struct dma_buf *dmabuf = v;
- if (dmabuf)
- dma_buf_put(dmabuf);
+ if (dmabuf) {
+ struct dmabuf_iter_priv *p = seq->private;
+
+ p->dmabuf = dmabuf;
+ }
}
static const struct seq_operations dmabuf_iter_seq_ops = {
@@ -71,11 +97,27 @@ static void bpf_iter_dmabuf_show_fdinfo(const struct bpf_iter_aux_info *aux,
seq_puts(seq, "dmabuf iter\n");
}
+static int dmabuf_iter_seq_init(void *priv, struct bpf_iter_aux_info *aux)
+{
+ struct dmabuf_iter_priv *p = (struct dmabuf_iter_priv *)priv;
+
+ p->dmabuf = NULL;
+ return 0;
+}
+
+static void dmabuf_iter_seq_fini(void *priv)
+{
+ struct dmabuf_iter_priv *p = (struct dmabuf_iter_priv *)priv;
+
+ if (p->dmabuf)
+ dma_buf_put(p->dmabuf);
+}
+
static const struct bpf_iter_seq_info dmabuf_iter_seq_info = {
.seq_ops = &dmabuf_iter_seq_ops,
- .init_seq_private = NULL,
- .fini_seq_private = NULL,
- .seq_priv_size = 0,
+ .init_seq_private = dmabuf_iter_seq_init,
+ .fini_seq_private = dmabuf_iter_seq_fini,
+ .seq_priv_size = sizeof(struct dmabuf_iter_priv),
};
static struct bpf_iter_reg bpf_dmabuf_reg_info = {
base-commit: 30f09200cc4aefbd8385b01e41bde2e4565a6f0e
--
2.52.0.177.g9f829587af-goog
On Fri, Dec 05, 2025 at 04:18:38PM +0900, Byungchul Park wrote:
> Add documents describing the concept and APIs of dept.
>
> Signed-off-by: Byungchul Park <byungchul(a)sk.com>
> ---
> Documentation/dev-tools/dept.rst | 778 +++++++++++++++++++++++++++
> Documentation/dev-tools/dept_api.rst | 125 +++++
You forgot to add toctree entries:
---- >8 ----
diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst
index 4b8425e348abd1..02c858f5ed1fa2 100644
--- a/Documentation/dev-tools/index.rst
+++ b/Documentation/dev-tools/index.rst
@@ -22,6 +22,8 @@ Documentation/process/debugging/index.rst
clang-format
coccinelle
sparse
+ dept
+ dept_api
kcov
gcov
kasan
> +Lockdep detects a deadlock by checking lock acquisition order. For
> +example, a graph to track acquisition order built by lockdep might look
> +like:
> +
> +.. literal::
> +
> + A -> B -
> + \
> + -> E
> + /
> + C -> D -
> +
> + where 'A -> B' means that acquisition A is prior to acquisition B
> + with A still held.
Use code-block directive for literal code blocks:
---- >8 ----
diff --git a/Documentation/dev-tools/dept.rst b/Documentation/dev-tools/dept.rst
index 333166464543d7..8394c4ea81bc2a 100644
--- a/Documentation/dev-tools/dept.rst
+++ b/Documentation/dev-tools/dept.rst
@@ -10,7 +10,7 @@ Lockdep detects a deadlock by checking lock acquisition order. For
example, a graph to track acquisition order built by lockdep might look
like:
-.. literal::
+.. code-block::
A -> B -
\
@@ -25,7 +25,7 @@ Lockdep keeps adding each new acquisition order into the graph at
runtime. For example, 'E -> C' will be added when the two locks have
been acquired in the order, E and then C. The graph will look like:
-.. literal::
+.. code-block::
A -> B -
\
@@ -41,7 +41,7 @@ been acquired in the order, E and then C. The graph will look like:
This graph contains a subgraph that demonstrates a loop like:
-.. literal::
+.. code-block::
-> E -
/ \
@@ -76,7 +76,7 @@ e.g. irq context, normal process context, wq worker context, or so on.
Can lockdep detect the following deadlock?
-.. literal::
+.. code-block::
context X context Y context Z
@@ -91,7 +91,7 @@ Can lockdep detect the following deadlock?
No. What about the following?
-.. literal::
+.. code-block::
context X context Y
@@ -116,7 +116,7 @@ What leads a deadlock
A deadlock occurs when one or multi contexts are waiting for events that
will never happen. For example:
-.. literal::
+.. code-block::
context X context Y context Z
@@ -148,7 +148,7 @@ In terms of dependency:
Dependency graph reflecting this example will look like:
-.. literal::
+.. code-block::
-> C -> A -> B -
/ \
@@ -171,7 +171,7 @@ Introduce DEPT
DEPT(DEPendency Tracker) tracks wait and event instead of lock
acquisition order so as to recognize the following situation:
-.. literal::
+.. code-block::
context X context Y context Z
@@ -186,7 +186,7 @@ acquisition order so as to recognize the following situation:
and builds up a dependency graph at runtime that is similar to lockdep.
The graph might look like:
-.. literal::
+.. code-block::
-> C -> A -> B -
/ \
@@ -199,7 +199,7 @@ DEPT keeps adding each new dependency into the graph at runtime. For
example, 'B -> D' will be added when event D occurrence is a
prerequisite to reaching event B like:
-.. literal::
+.. code-block::
context W
@@ -211,7 +211,7 @@ prerequisite to reaching event B like:
After the addition, the graph will look like:
-.. literal::
+.. code-block::
-> D
/
@@ -236,7 +236,7 @@ How DEPT works
Let's take a look how DEPT works with the 1st example in the section
'Limitation of lockdep'.
-.. literal::
+.. code-block::
context X context Y context Z
@@ -256,7 +256,7 @@ event.
Adding comments to describe DEPT's view in detail:
-.. literal::
+.. code-block::
context X context Y context Z
@@ -293,7 +293,7 @@ Adding comments to describe DEPT's view in detail:
Let's build up dependency graph with this example. Firstly, context X:
-.. literal::
+.. code-block::
context X
@@ -304,7 +304,7 @@ Let's build up dependency graph with this example. Firstly, context X:
There are no events to create dependency. Next, context Y:
-.. literal::
+.. code-block::
context Y
@@ -332,7 +332,7 @@ event A cannot be triggered if wait B cannot be awakened by event B.
Therefore, we can say event A depends on event B, say, 'A -> B'. The
graph will look like after adding the dependency:
-.. literal::
+.. code-block::
A -> B
@@ -340,7 +340,7 @@ graph will look like after adding the dependency:
Lastly, context Z:
-.. literal::
+.. code-block::
context Z
@@ -362,7 +362,7 @@ triggered if wait A cannot be awakened by event A. Therefore, we can
say event B depends on event A, say, 'B -> A'. The graph will look like
after adding the dependency:
-.. literal::
+.. code-block::
-> A -> B -
/ \
@@ -386,7 +386,7 @@ Interpret DEPT report
The following is the same example in the section 'How DEPT works'.
-.. literal::
+.. code-block::
context X context Y context Z
@@ -425,7 +425,7 @@ We can simplify this by labeling each waiting point with [W], each
point where its event's context starts with [S] and each event with [E].
This example will look like after the labeling:
-.. literal::
+.. code-block::
context X context Y context Z
@@ -443,7 +443,7 @@ DEPT uses the symbols [W], [S] and [E] in its report as described above.
The following is an example reported by DEPT for a real problem in
practice.
-.. literal::
+.. code-block::
Link: https://lore.kernel.org/lkml/6383cde5-cf4b-facf-6e07-1378a485657d@I-love.SA…
Link: https://lore.kernel.org/lkml/1674268856-31807-1-git-send-email-byungchul.pa…
@@ -646,7 +646,7 @@ practice.
Let's take a look at the summary that is the most important part.
-.. literal::
+.. code-block::
---------------------------------------------------
summary
@@ -669,7 +669,7 @@ Let's take a look at the summary that is the most important part.
The summary shows the following scenario:
-.. literal::
+.. code-block::
context A context B context ?(unknown)
@@ -684,7 +684,7 @@ The summary shows the following scenario:
Adding comments to describe DEPT's view in detail:
-.. literal::
+.. code-block::
context A context B context ?(unknown)
@@ -711,7 +711,7 @@ Adding comments to describe DEPT's view in detail:
Let's build up dependency graph with this report. Firstly, context A:
-.. literal::
+.. code-block::
context A
@@ -735,7 +735,7 @@ unlock(&ni->ni_lock:0) depends on folio_unlock(&f1), say,
The graph will look like after adding the dependency:
-.. literal::
+.. code-block::
unlock(&ni->ni_lock:0) -> folio_unlock(&f1)
@@ -743,7 +743,7 @@ The graph will look like after adding the dependency:
Secondly, context B:
-.. literal::
+.. code-block::
context B
@@ -762,7 +762,7 @@ folio_unlock(&f1) depends on unlock(&ni->ni_lock:0), say,
The graph will look like after adding the dependency:
-.. literal::
+.. code-block::
-> unlock(&ni->ni_lock:0) -> folio_unlock(&f1) -
/ \
> +Limitation of lockdep
> +---------------------
> +
> +Lockdep deals with a deadlock by typical lock e.g. spinlock and mutex,
> +that are supposed to be released within the acquisition context.
> +However, when it comes to a deadlock by folio lock that is not supposed
> +to be released within the acquisition context or other general
> +synchronization mechanisms, lockdep doesn't work.
> +
> +NOTE: In this document, 'context' refers to any type of unique context
> +e.g. irq context, normal process context, wq worker context, or so on.
> +
> +Can lockdep detect the following deadlock?
> +
> +.. literal::
> +
> + context X context Y context Z
> +
> + mutex_lock A
> + folio_lock B
> + folio_lock B <- DEADLOCK
> + mutex_lock A <- DEADLOCK
> + folio_unlock B
> + folio_unlock B
> + mutex_unlock A
> + mutex_unlock A
> +
> +No. What about the following?
> +
> +.. literal::
> +
> + context X context Y
> +
> + mutex_lock A
> + mutex_lock A <- DEADLOCK
> + wait_for_complete B <- DEADLOCK
> + complete B
> + mutex_unlock A
> + mutex_unlock A
> +
> +No.
One unanswered question from my v17 review [1]: you explain in the "How DEPT works"
section how DEPT detects the deadlock in the first example (the former, with three
contexts). Can you do the same for the second example (the latter, with two
contexts)?
Thanks.
[1]: https://lore.kernel.org/linux-doc/aN84jKyrE1BumpLj@archie.me/
--
An old man doll... just what I always wanted! - Clara