On 12/17/25 11:25, Lukas Bulwahn wrote:
> From: Lukas Bulwahn <lukas.bulwahn(a)redhat.com>
>
> The intent of commit 646013f513f3 ("dma-buf: enable DMABUF_DEBUG by default
> on DEBUG kernels") is clear, but it mixes up the config option name. The
> config option for kernel debugging is named DEBUG_KERNEL, not DEBUG.
>
> Fix up the DMABUF_DEBUG definition to use the intended name.
>
> Fixes: 646013f513f3 ("dma-buf: enable DMABUF_DEBUG by default on DEBUG kernels")
> Signed-off-by: Lukas Bulwahn <lukas.bulwahn(a)redhat.com>
Ah, yeah. I mixed up the C define vs the config option. Thanks for pointing that out.
Reviewed-by: Christian König <christian.koenig(a)amd.com>
> ---
> drivers/dma-buf/Kconfig | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
> index fdd823e446cc..426c9ad3364f 100644
> --- a/drivers/dma-buf/Kconfig
> +++ b/drivers/dma-buf/Kconfig
> @@ -55,7 +55,7 @@ config DMABUF_MOVE_NOTIFY
> config DMABUF_DEBUG
> bool "DMA-BUF debug checks"
> depends on DMA_SHARED_BUFFER
> - default y if DEBUG
> + default y if DEBUG_KERNEL
> help
> This option enables additional checks for DMA-BUF importers and
> exporters. Specifically it validates that importers do not peek at the
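For context on the fix above: Kconfig symbols are visible to C code only with a CONFIG_ prefix, while a bare DEBUG is an ordinary preprocessor define that Kconfig never sets. A minimal sketch of the two namespaces involved:

/*
 * "default y if DEBUG_KERNEL" tests a Kconfig symbol; C code later
 * sees it as CONFIG_DEBUG_KERNEL. There is no Kconfig symbol named
 * DEBUG, so "default y if DEBUG" could never evaluate to true; DEBUG
 * only exists as a plain C define, e.g. via ccflags-y += -DDEBUG.
 */
#ifdef CONFIG_DEBUG_KERNEL
/* CONFIG_DEBUG_KERNEL=y was selected in Kconfig */
#endif

#ifdef DEBUG
/* set only by an explicit -DDEBUG; no Kconfig option provides this */
#endif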
Hi,
Here are assorted kernel-doc fixes for the 6.19 cycle. As the name
implies, the merging strategy is that the patches can be taken by the
respective maintainers into their appropriate fixes branches (targeting
6.19, of course; e.g. for mm that will be mm-hotfixes).
Enjoy!
Bagas Sanjaya (14):
genalloc: Describe @start_addr parameter in genpool_algo_t
mm: Describe @flags parameter in memalloc_flags_save()
textsearch: Describe @list member in ts_ops search
mm: vmalloc: Fix up vrealloc_node_align() kernel-doc macro name
mm, kfence: Describe @slab parameter in __kfence_obj_info()
virtio: Describe @map and @vmap members in virtio_device struct
fs: Describe @isnew parameter in ilookup5_nowait()
VFS: fix __start_dirop() kernel-doc warnings
drm/amd/display: Don't use kernel-doc comment in
dc_register_software_state struct
drm/amdgpu: Describe @AMD_IP_BLOCK_TYPE_RAS in amd_ip_block_type enum
drm/gem/shmem: Describe @shmem and @size parameters
drm/scheduler: Describe @result in drm_sched_job_done()
drm/gpusvm: Fix drm_gpusvm_pages_valid_unlocked() kernel-doc comment
net: bridge: Describe @tunnel_hash member in net_bridge_vlan_group
struct
drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
drivers/gpu/drm/amd/include/amd_shared.h | 1 +
drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
drivers/gpu/drm/drm_gpusvm.c | 4 ++--
drivers/gpu/drm/scheduler/sched_main.c | 1 +
fs/inode.c | 1 +
fs/namei.c | 3 ++-
include/linux/genalloc.h | 1 +
include/linux/kfence.h | 1 +
include/linux/sched/mm.h | 1 +
include/linux/textsearch.h | 1 +
include/linux/virtio.h | 2 ++
mm/vmalloc.c | 2 +-
net/bridge/br_private.h | 1 +
14 files changed, 18 insertions(+), 6 deletions(-)
base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
--
An old man doll... just what I always wanted! - Clara
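As an illustration of what these one-line fixes look like: kernel-doc warns when a documented function or struct has a parameter or member with no matching @name line, and the fix is to add the missing description. A generic sketch with a hypothetical function (not one from the series):

/**
 * example_map_buffer - map a buffer into the device's address space
 * @buf: the buffer to map
 * @flags: GFP_* flags used for any bookkeeping allocations
 *
 * Return: 0 on success or a negative errno.
 */

Here the "@flags:" line is the kind of addition most of these patches make; without it, scripts/kernel-doc emits a warning that the parameter is not described.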
On 12/15/25 14:59, Maxime Ripard wrote:
> On Mon, Dec 15, 2025 at 02:30:47PM +0100, Christian König wrote:
>> On 12/15/25 11:51, Maxime Ripard wrote:
>>> Hi TJ,
>>>
>>> On Fri, Dec 12, 2025 at 08:25:19AM +0900, T.J. Mercier wrote:
>>>> On Fri, Dec 12, 2025 at 4:31 AM Eric Chanudet <echanude(a)redhat.com> wrote:
>>>>>
>>>>> The system dma-buf heap lets userspace allocate buffers from the page
>>>>> allocator. However, these allocations are not accounted for in memcg,
>>>>> allowing processes to escape limits that may be configured.
>>>>>
>>>>> Pass the __GFP_ACCOUNT flag for our allocations to account them into memcg.
>>>>
>>>> We had a discussion just last night in the MM track at LPC about how
>>>> shared memory accounted in memcg is pretty broken. Without a way to
>>>> identify (and possibly transfer) ownership of a shared buffer, this
>>>> makes the accounting of shared memory, and zombie memcg problems
>>>> worse. :\
>>>
>>> Are there notes or a report from that discussion anywhere?
>>>
>>> The way I see it, the dma-buf heaps *trivial* case is non-existent at
>>> the moment and that's definitely broken. Any application can bypass its
>>> cgroups limits trivially, and that's a pretty big hole in the system.
>>
>> Well, that is just the tip of the iceberg.
>>
>> Pretty much all driver interfaces don't account to memcg at the
>> moment, all the way from ALSA, through GPUs (both TTM and SHM-GEM), to
>> V4L2.
>
> Yes, I know, and step 1 of the plan we discussed earlier this year is to
> fix the heaps.
>
>>> The shared ownership is indeed broken, but it's not more or less broken
>>> than, say, memfd + udmabuf, and I'm sure plenty of others.
>>>
>>> So we really improve the common case, but only make the "advanced" case
>>> slightly more broken than it already is.
>>>
>>> Would you disagree?
>>
>> I strongly disagree. As far as I can see there is a huge chance we
>> break existing use cases with that.
>
> Which ones? And what about the ones that are already broken?
Well, everybody who expects that driver resources are *not* accounted to memcg.
>> There has been some work on TTM by Dave but I still haven't found time
>> to wrap my head around all possible side effects such a change can
>> have.
>>
>> The fundamental problem is that neither memcg nor the classic resource
>> tracking (e.g. the OOM killer) has a good understanding of shared
>> resources.
>
> And yet heap allocations don't necessarily have to be shared. But they
> all have to be allocated.
>
>> For example you can use memfd to basically kill any process in the
>> system because the OOM killer can't identify the process which holds
>> the reference to the memory in question. And that is a *MUCH* bigger
>> problem than just inaccurate memcg accounting.
>
> When you frame it like that, sure. Also, you can use the system heap to
> DoS any process in the system. I'm not saying that what you're concerned
> about isn't an issue, but let's not brush off other people legitimate
> issues as well.
Completely agree, but we should prioritize.
That driver-allocated memory is not memcg accounted is actually uAPI, i.e. not something which can easily be changed.
Fixing the OOM killer, on the other hand, looks perfectly doable, and will then most likely also show a better path toward fixing the memcg accounting.
Christian.
>
> Maxime
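For reference, the patch under discussion amounts to one flag in the system heap's allocation masks. A sketch of the idea, assuming the current shape of drivers/dma-buf/heaps/system_heap.c (the actual hunks may differ):

/* drivers/dma-buf/heaps/system_heap.c (sketch, not the literal patch) */
#define LOW_ORDER_GFP	(GFP_HIGHUSER | __GFP_ZERO | __GFP_ACCOUNT)
#define HIGH_ORDER_GFP	(((GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN \
			| __GFP_NORETRY) & ~__GFP_RECLAIM) \
			| __GFP_COMP | __GFP_ACCOUNT)

With __GFP_ACCOUNT set, the page allocator charges the pages to the memcg of the task doing the allocation, so the buffer counts against that cgroup's limits like ordinary anonymous memory. That is precisely the ownership question debated here: the charge lands on whichever task allocates, not on whoever ends up using the buffer.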
On Mon, Dec 15, 2025 at 7:51 PM Maxime Ripard <mripard(a)redhat.com> wrote:
>
> Hi TJ,
Hi Maxime,
> On Fri, Dec 12, 2025 at 08:25:19AM +0900, T.J. Mercier wrote:
> > On Fri, Dec 12, 2025 at 4:31 AM Eric Chanudet <echanude(a)redhat.com> wrote:
> > >
> > > The system dma-buf heap lets userspace allocate buffers from the page
> > > allocator. However, these allocations are not accounted for in memcg,
> > > allowing processes to escape limits that may be configured.
> > >
> > > Pass the __GFP_ACCOUNT flag for our allocations to account them into memcg.
> >
> > We had a discussion just last night in the MM track at LPC about how
> > shared memory accounted in memcg is pretty broken. Without a way to
> > identify (and possibly transfer) ownership of a shared buffer, this
> > makes the accounting of shared memory, and zombie memcg problems
> > worse. :\
>
> Are there notes or a report from that discussion anywhere?
The LPC vids haven't been clipped yet, and actually I can't even find
the recorded full live stream from Hall A2 on the first day. So I
don't think there's anything to look at, but there's probably nothing
there you don't already know.
> The way I see it, the dma-buf heaps *trivial* case is non-existent at
> the moment and that's definitely broken. Any application can bypass its
> cgroups limits trivially, and that's a pretty big hole in the system.
Agreed, but if we only charge the first allocator, then limits can still
easily be bypassed, assuming an app can cause an allocation outside of
its cgroup tree.
I'm not sure using static memcg limits where a significant portion of
the memory can be shared is really feasible. Even with just pagecache
being charged to memcgs, we're having trouble defining a static memcg
limit that is really useful, since it has to be high enough to
accommodate occasional spikes due to shared memory that might or might
not be charged (since it can only be charged to one memcg - it may be
spread around or it may all get charged to one memcg). So excessive
anonymous use has to get really bad before it gets punished.
What I've been hearing lately is that folks are polling memory.stat or
PSI or other metrics and using that to take actions (memory.reclaim /
killing / adjust memory.high) at runtime rather than relying on
memory.high/max behavior with a static limit.
> The shared ownership is indeed broken, but it's not more or less broken
> than, say, memfd + udmabuf, and I'm sure plenty of others.
One thing that's worse about system heap buffers is that unlike memfd
the memory isn't reclaimable. So without killing all users there's
currently no way to deal with the zombie issue. Harry's proposing
reparenting, but I don't think our current interfaces support that
because we'd have to mess with the page structs behind system heap
dmabufs to change the memcg during reparenting.
Ah... but udmabuf pins the memfd pages, so you're right that memfd +
udmabuf isn't worse.
> So we really improve the common case, but only make the "advanced" case
> slightly more broken than it already is.
>
> Would you disagree?
I think memcg limits in this case just wouldn't be usable because of
what I mentioned above. In our common case the allocator is in a
different cgroup tree than the real users of the buffer.
> Maxime
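A minimal sketch of the runtime-control pattern described above, in terms of the cgroup v2 files mentioned (memory.stat and memory.reclaim are real interfaces; the cgroup path, watermark, and reclaim amount are made-up examples, and error handling is elided):

#include <stdio.h>

/* Poll a cgroup's anon usage from memory.stat and ask the kernel to
 * reclaim proactively when a watermark is crossed, instead of relying
 * on a static memory.max limit. */
static void check_and_reclaim(const char *cg)
{
	char path[256], line[256];
	unsigned long long anon = 0;
	FILE *f;

	snprintf(path, sizeof(path), "%s/memory.stat", cg);
	f = fopen(path, "r");
	if (!f)
		return;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "anon %llu", &anon) == 1)
			break;
	fclose(f);

	if (anon > 512ULL << 20) {	/* example watermark: 512 MiB */
		snprintf(path, sizeof(path), "%s/memory.reclaim", cg);
		f = fopen(path, "w");
		if (f) {
			fprintf(f, "%llu", 64ULL << 20);	/* try to reclaim 64 MiB */
			fclose(f);
		}
	}
}

Note that memory.reclaim only helps with reclaimable memory; as pointed out above, system heap pages are not reclaimable, so this knob would not touch them.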
On 12/15/25 16:53, Tvrtko Ursulin wrote:
>
> On 15/12/2025 15:38, Christian König wrote:
>> On 12/15/25 10:20, Tvrtko Ursulin wrote:
>>>
>>> On 12/12/2025 15:50, Christian König wrote:
>>>> On 12/11/25 16:13, Tvrtko Ursulin wrote:
>>>>>
>>>>> On 11/12/2025 13:16, Christian König wrote:
>>>>>> Using the inline lock is now the recommended way for dma_fence implementations.
>>>>>>
>>>>>> So use this approach for the scheduler fences as well, just in case
>>>>>> anybody uses this as a blueprint for their own implementation.
>>>>>>
>>>>>> Also saves about 4 bytes for the external spinlock.
>>>>>>
>>>>>> Signed-off-by: Christian König <christian.koenig(a)amd.com>
>>>>>> ---
>>>>>> drivers/gpu/drm/scheduler/sched_fence.c | 7 +++----
>>>>>> include/drm/gpu_scheduler.h | 4 ----
>>>>>> 2 files changed, 3 insertions(+), 8 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
>>>>>> index 08ccbde8b2f5..47471b9e43f9 100644
>>>>>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
>>>>>> @@ -161,7 +161,7 @@ static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
>>>>>> /* If we already have an earlier deadline, keep it: */
>>>>>> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
>>>>>> ktime_before(fence->deadline, deadline)) {
>>>>>> - spin_unlock_irqrestore(&fence->lock, flags);
>>>>>> + dma_fence_unlock_irqrestore(f, flags);
>>>>>
>>>>> Rebase error I guess. Pull into the locking helpers patch.
>>>>
>>>> No, that is actually completely intentional here.
>>>>
>>>> Previously we had a separate lock which protected both the DMA-fences and the deadline state.
>>>>
>>>> Now we turn that upside down by dropping the separate lock and protecting the deadline state with the dma_fence lock instead.
>>>
>>> I don't follow. The code is currently like this:
>>>
>>> static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
>>> ktime_t deadline)
>>> {
>>> struct drm_sched_fence *fence = to_drm_sched_fence(f);
>>> struct dma_fence *parent;
>>> unsigned long flags;
>>>
>>> spin_lock_irqsave(&fence->lock, flags);
>>>
>>> /* If we already have an earlier deadline, keep it: */
>>> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
>>> ktime_before(fence->deadline, deadline)) {
>>> spin_unlock_irqrestore(&fence->lock, flags);
>>> return;
>>> }
>>>
>>> fence->deadline = deadline;
>>> set_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);
>>>
>>> spin_unlock_irqrestore(&fence->lock, flags);...
>>>
>>> The diff changes one out of the three lock/unlock operations. The other two are changed in 3/19. All three should surely be changed in the same patch.
>>
>> We could change those spin_lock/unlock calls in patch #3, but I don't think that would be clean.
>>
>> See, the code here currently uses fence->lock, and patch #3 would change it to use fence->finished->lock instead. That might be the same pointer at the moment, but that is just by coincidence and not by design.
>>
>> Only this change here on top makes it intentional that we use fence->finished->lock for everything.
>
> Sorry I still don't follow. After 3/19 and before this 9/19 the function looks like this:
>
> static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
> ktime_t deadline)
> {
> struct drm_sched_fence *fence = to_drm_sched_fence(f);
> struct dma_fence *parent;
> unsigned long flags;
>
> dma_fence_lock_irqsave(f, flags);
>
> /* If we already have an earlier deadline, keep it: */
> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
> ktime_before(fence->deadline, deadline)) {
> spin_unlock_irqrestore(&fence->lock, flags);
> return;
> }
>
> fence->deadline = deadline;
> set_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);
>
> dma_fence_unlock_irqrestore(f, flags);
>
> Notice the lonely spin_unlock_irqrestore on the early return path, while the other two use the dma_fence_(un)lock helpers. Am I blind, or how is that clean?
Oh, that's what you mean. Sorry, I was blind!
Yeah, that is clearly unintentional.
Thanks,
Christian.
>
> Regards,
>
> Tvrtko
>
>>
>> Regards,
>> Christian.
>>
>>>
>>> Regards,
>>>
>>> Tvrtko
>>>
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>>
>>>>> Regards,
>>>>>
>>>>> Tvrtko
>>>>>
>>>>>> return;
>>>>>> }
>>>>>> @@ -217,7 +217,6 @@ struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
>>>>>> fence->owner = owner;
>>>>>> fence->drm_client_id = drm_client_id;
>>>>>> - spin_lock_init(&fence->lock);
>>>>>> return fence;
>>>>>> }
>>>>>> @@ -230,9 +229,9 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
>>>>>> fence->sched = entity->rq->sched;
>>>>>> seq = atomic_inc_return(&entity->fence_seq);
>>>>>> dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
>>>>>> - &fence->lock, entity->fence_context, seq);
>>>>>> + NULL, entity->fence_context, seq);
>>>>>> dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
>>>>>> - &fence->lock, entity->fence_context + 1, seq);
>>>>>> + NULL, entity->fence_context + 1, seq);
>>>>>> }
>>>>>> module_init(drm_sched_fence_slab_init);
>>>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>>>> index fb88301b3c45..b77f24a783e3 100644
>>>>>> --- a/include/drm/gpu_scheduler.h
>>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>>> @@ -297,10 +297,6 @@ struct drm_sched_fence {
>>>>>> * belongs to.
>>>>>> */
>>>>>> struct drm_gpu_scheduler *sched;
>>>>>> - /**
>>>>>> - * @lock: the lock used by the scheduled and the finished fences.
>>>>>> - */
>>>>>> - spinlock_t lock;
>>>>>> /**
>>>>>> * @owner: job owner for debugging
>>>>>> */
>>>>>
>>>>
>>>
>>
>
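For clarity, the end state both are converging on is that all three lock operations in drm_sched_fence_set_deadline_finished() use the inline-lock helpers, including the early-return path flagged above. A sketch based on the quoted code and the helper names from this series, with the rest of the function elided:

static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
						  ktime_t deadline)
{
	struct drm_sched_fence *fence = to_drm_sched_fence(f);
	unsigned long flags;

	dma_fence_lock_irqsave(f, flags);

	/* If we already have an earlier deadline, keep it: */
	if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
	    ktime_before(fence->deadline, deadline)) {
		/* was the lone spin_unlock_irqrestore(&fence->lock, flags) */
		dma_fence_unlock_irqrestore(f, flags);
		return;
	}

	fence->deadline = deadline;
	set_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);

	dma_fence_unlock_irqrestore(f, flags);
	/* ... parent-fence handling elided ... */
}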
On 12/15/25 10:20, Tvrtko Ursulin wrote:
>
> On 12/12/2025 15:50, Christian König wrote:
>> On 12/11/25 16:13, Tvrtko Ursulin wrote:
>>>
>>> On 11/12/2025 13:16, Christian König wrote:
>>>> Using the inline lock is now the recommended way for dma_fence implementations.
>>>>
>>>> So use this approach for the scheduler fences as well, just in case
>>>> anybody uses this as a blueprint for their own implementation.
>>>>
>>>> Also saves about 4 bytes for the external spinlock.
>>>>
>>>> Signed-off-by: Christian König <christian.koenig(a)amd.com>
>>>> ---
>>>> drivers/gpu/drm/scheduler/sched_fence.c | 7 +++----
>>>> include/drm/gpu_scheduler.h | 4 ----
>>>> 2 files changed, 3 insertions(+), 8 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
>>>> index 08ccbde8b2f5..47471b9e43f9 100644
>>>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
>>>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
>>>> @@ -161,7 +161,7 @@ static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
>>>> /* If we already have an earlier deadline, keep it: */
>>>> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
>>>> ktime_before(fence->deadline, deadline)) {
>>>> - spin_unlock_irqrestore(&fence->lock, flags);
>>>> + dma_fence_unlock_irqrestore(f, flags);
>>>
>>> Rebase error I guess. Pull into the locking helpers patch.
>>
>> No, that is actually completely intentional here.
>>
>> Previously we had a separate lock which protected both the DMA-fences and the deadline state.
>>
>> Now we turn that upside down by dropping the separate lock and protecting the deadline state with the dma_fence lock instead.
>
> I don't follow. The code is currently like this:
>
> static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
> ktime_t deadline)
> {
> struct drm_sched_fence *fence = to_drm_sched_fence(f);
> struct dma_fence *parent;
> unsigned long flags;
>
> spin_lock_irqsave(&fence->lock, flags);
>
> /* If we already have an earlier deadline, keep it: */
> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
> ktime_before(fence->deadline, deadline)) {
> spin_unlock_irqrestore(&fence->lock, flags);
> return;
> }
>
> fence->deadline = deadline;
> set_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);
>
> spin_unlock_irqrestore(&fence->lock, flags);...
>
> The diff changes one out of the three lock/unlock operations. The other two are changed in 3/19. All three should surely be changed in the same patch.
We could change those spin_lock/unlock calls in patch #3, but I don't think that would be clean.
See, the code here currently uses fence->lock, and patch #3 would change it to use fence->finished->lock instead. That might be the same pointer at the moment, but that is just by coincidence and not by design.
Only this change here on top makes it intentional that we use fence->finished->lock for everything.
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
>>
>> Regards,
>> Christian.
>>
>>>
>>> Regards,
>>>
>>> Tvrtko
>>>
>>>> return;
>>>> }
>>>> @@ -217,7 +217,6 @@ struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
>>>> fence->owner = owner;
>>>> fence->drm_client_id = drm_client_id;
>>>> - spin_lock_init(&fence->lock);
>>>> return fence;
>>>> }
>>>> @@ -230,9 +229,9 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
>>>> fence->sched = entity->rq->sched;
>>>> seq = atomic_inc_return(&entity->fence_seq);
>>>> dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
>>>> - &fence->lock, entity->fence_context, seq);
>>>> + NULL, entity->fence_context, seq);
>>>> dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
>>>> - &fence->lock, entity->fence_context + 1, seq);
>>>> + NULL, entity->fence_context + 1, seq);
>>>> }
>>>> module_init(drm_sched_fence_slab_init);
>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>> index fb88301b3c45..b77f24a783e3 100644
>>>> --- a/include/drm/gpu_scheduler.h
>>>> +++ b/include/drm/gpu_scheduler.h
>>>> @@ -297,10 +297,6 @@ struct drm_sched_fence {
>>>> * belongs to.
>>>> */
>>>> struct drm_gpu_scheduler *sched;
>>>> - /**
>>>> - * @lock: the lock used by the scheduled and the finished fences.
>>>> - */
>>>> - spinlock_t lock;
>>>> /**
>>>> * @owner: job owner for debugging
>>>> */
>>>
>>
>
On 12/15/25 11:51, Maxime Ripard wrote:
> Hi TJ,
>
> On Fri, Dec 12, 2025 at 08:25:19AM +0900, T.J. Mercier wrote:
>> On Fri, Dec 12, 2025 at 4:31 AM Eric Chanudet <echanude(a)redhat.com> wrote:
>>>
>>> The system dma-buf heap lets userspace allocate buffers from the page
>>> allocator. However, these allocations are not accounted for in memcg,
>>> allowing processes to escape limits that may be configured.
>>>
>>> Pass the __GFP_ACCOUNT flag for our allocations to account them into memcg.
>>
>> We had a discussion just last night in the MM track at LPC about how
>> shared memory accounted in memcg is pretty broken. Without a way to
>> identify (and possibly transfer) ownership of a shared buffer, this
>> makes the accounting of shared memory, and zombie memcg problems
>> worse. :\
>
> Are there notes or a report from that discussion anywhere?
>
> The way I see it, the dma-buf heaps *trivial* case is non-existent at
> the moment and that's definitely broken. Any application can bypass its
> cgroups limits trivially, and that's a pretty big hole in the system.
Well, that is just the tip of the iceberg.
Pretty much all driver interfaces don't account to memcg at the moment, all the way from ALSA, through GPUs (both TTM and SHM-GEM), to V4L2.
> The shared ownership is indeed broken, but it's not more or less broken
> than, say, memfd + udmabuf, and I'm sure plenty of others.
>
> So we really improve the common case, but only make the "advanced" case
> slightly more broken than it already is.
>
> Would you disagree?
I strongly disagree. As far as I can see there is a huge chance we break existing use cases with that.
There has been some work on TTM by Dave but I still haven't found time to wrap my head around all possible side effects such a change can have.
The fundamental problem is that neither memcg nor the classic resource tracking (e.g. the OOM killer) has a good understanding of shared resources.
For example you can use memfd to basically kill any process in the system because the OOM killer can't identify the process which holds the reference to the memory in question. And that is a *MUCH* bigger problem than just inaccurate memcg accounting.
Regards,
Christian.
>
> Maxime
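A minimal userspace sketch of the memfd scenario Christian describes: the pages stay pinned to the fd, so once the fd has been passed to another process (e.g. over a unix socket), the creator can exit and the memory survives with no task the OOM killer can obviously blame. The function and sizes are illustrative, and error handling is simplified:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>
#include <string.h>

/* Allocate 1 GiB backed by a memfd and fault every page in. If the fd
 * is handed to another process and this one exits, the pages remain
 * allocated but no longer map cleanly to a "guilty" task. */
int make_big_memfd(void)
{
	const size_t sz = 1UL << 30;
	void *p;
	int fd;

	fd = memfd_create("big", MFD_CLOEXEC);
	if (fd < 0)
		return -1;
	if (ftruncate(fd, sz) < 0)
		return -1;
	p = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return -1;
	memset(p, 0xaa, sz);	/* force allocation of every page */
	munmap(p, sz);		/* the memfd keeps the pages alive */
	return fd;
}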
On 12/11/25 16:13, Tvrtko Ursulin wrote:
>
> On 11/12/2025 13:16, Christian König wrote:
>> Using the inline lock is now the recommended way for dma_fence implementations.
>>
>> So use this approach for the scheduler fences as well, just in case
>> anybody uses this as a blueprint for their own implementation.
>>
>> Also saves about 4 bytes for the external spinlock.
>>
>> Signed-off-by: Christian König <christian.koenig(a)amd.com>
>> ---
>> drivers/gpu/drm/scheduler/sched_fence.c | 7 +++----
>> include/drm/gpu_scheduler.h | 4 ----
>> 2 files changed, 3 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
>> index 08ccbde8b2f5..47471b9e43f9 100644
>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
>> @@ -161,7 +161,7 @@ static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
>> /* If we already have an earlier deadline, keep it: */
>> if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
>> ktime_before(fence->deadline, deadline)) {
>> - spin_unlock_irqrestore(&fence->lock, flags);
>> + dma_fence_unlock_irqrestore(f, flags);
>
> Rebase error I guess. Pull into the locking helpers patch.
No, that is actually completely intentional here.
Previously we had a separate lock which protected both the DMA-fences and the deadline state.
Now we turn that upside down by dropping the separate lock and protecting the deadline state with the dma_fence lock instead.
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
>> return;
>> }
>> @@ -217,7 +217,6 @@ struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
>> fence->owner = owner;
>> fence->drm_client_id = drm_client_id;
>> - spin_lock_init(&fence->lock);
>> return fence;
>> }
>> @@ -230,9 +229,9 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
>> fence->sched = entity->rq->sched;
>> seq = atomic_inc_return(&entity->fence_seq);
>> dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
>> - &fence->lock, entity->fence_context, seq);
>> + NULL, entity->fence_context, seq);
>> dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
>> - &fence->lock, entity->fence_context + 1, seq);
>> + NULL, entity->fence_context + 1, seq);
>> }
>> module_init(drm_sched_fence_slab_init);
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index fb88301b3c45..b77f24a783e3 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -297,10 +297,6 @@ struct drm_sched_fence {
>> * belongs to.
>> */
>> struct drm_gpu_scheduler *sched;
>> - /**
>> - * @lock: the lock used by the scheduled and the finished fences.
>> - */
>> - spinlock_t lock;
>> /**
>> * @owner: job owner for debugging
>> */
>