On 1/13/26 17:12, Philipp Stanner wrote:
On Tue, 2026-01-13 at 16:16 +0100, Christian König wrote:
Using the inline lock is now the recommended way for dma_fence implementations.
For the scheduler fence, use the inline lock for the scheduled fence part and then the lock from the scheduled fence as the external lock for the finished fence.
This way there is no functional difference, except for saving the space for the separate lock.
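As a rough sketch of what that recommendation means for a typical implementation (hypothetical names, and assuming this series' convention that passing NULL as the lock to dma_fence_init() selects the spinlock embedded in struct dma_fence):

struct my_fence {
        struct dma_fence base;
        /* no driver-private spinlock_t needed any more */
};

static void my_fence_setup(struct my_fence *f, const struct dma_fence_ops *ops,
                           u64 context, u64 seqno)
{
        /* NULL lock: use the spinlock embedded in struct dma_fence */
        dma_fence_init(&f->base, ops, NULL, context, seqno);
}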
v2: re-work the patch to avoid any functional difference
*cough cough*
Signed-off-by: Christian König <christian.koenig@amd.com>
 drivers/gpu/drm/scheduler/sched_fence.c | 6 +++---
 include/drm/gpu_scheduler.h             | 4 ----
 2 files changed, 3 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index 724d77694246..112677231f9a 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -217,7 +217,6 @@ struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
 	fence->owner = owner;
 	fence->drm_client_id = drm_client_id;
-	spin_lock_init(&fence->lock);
 
 	return fence;
 }
@@ -230,9 +229,10 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
 	fence->sched = entity->rq->sched;
 	seq = atomic_inc_return(&entity->fence_seq);
 	dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
-		       &fence->lock, entity->fence_context, seq);
+		       NULL, entity->fence_context, seq);
 	dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
-		       &fence->lock, entity->fence_context + 1, seq);
+		       dma_fence_spinlock(&fence->scheduled),

I think that while you are correct that there is no functional difference, this is still a bad idea which violates the entire idea of your series:
All fences are now independent of each other and of the fence context – except for those two.
Some fences are more equal than others ;)
Yeah, I was going back and forth once more on whether I should keep this patch at all or just drop it.
By implementing this, you would also show people browsing the code that it can be a good idea, or is at least acceptable, to have fences share locks. Do you want that?
Good question. In almost all cases we don't want this, but once more the scheduler is special.
In the scheduler we have two fences in one, the scheduled one and the finished one.
So here it technically makes sense to have this construct, just to be defensive.
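For reference, the object in question embeds both fences in one structure (abbreviated from include/drm/gpu_scheduler.h, other members elided):

struct drm_sched_fence {
        /** @scheduled: signaled when the job is pushed to the hardware */
        struct dma_fence scheduled;
        /** @finished: signaled when the job has completed on the hardware */
        struct dma_fence finished;
        /* ... parent, sched, owner, etc. ... */
};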
But on the other hand it has no practical value, because it still doesn't allow us to unload the scheduler module. We would need a much wider rework to be able to do that.
So maybe I should just really drop this patch or at least keep it back until we had time to figure out what the next steps are.
As far as I have learned from you and our discussions, that would be a very bombastic violation of the sacred "dma-fence-rules".
Well, using the inline lock is "only" a strong recommendation. It's not as heavy as the signaling rules, because when you mess those up you can easily kill the whole system.
I believe it's definitely worth sacrificing some bytes so that those two fences get fully decoupled. Who will have it on their radar that they are special? Think about future reworks.
This doesn't even save any bytes; my thinking was more that this is the more defensive approach, should anybody use the spinlock pointer from the scheduler fence to do some locking.
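A purely hypothetical example of that kind of use, with dma_fence_spinlock() being the accessor from the hunk above; with the shared lock it doesn't matter which of the two fences such code picks the lock from:

static bool example_peek_scheduled(struct drm_sched_fence *s_fence)
{
        spinlock_t *lock = dma_fence_spinlock(&s_fence->finished);
        unsigned long flags;
        bool signaled;

        spin_lock_irqsave(lock, flags);
        /*
         * Only safe because scheduled and finished share one lock:
         * the lock taken via the finished fence also covers the
         * scheduled fence.
         */
        signaled = dma_fence_is_signaled_locked(&s_fence->scheduled);
        spin_unlock_irqrestore(lock, flags);

        return signaled;
}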
Besides that, no objections from my side.
Thanks, Christian.
P.
+		       entity->fence_context + 1, seq);
 }
 
 module_init(drm_sched_fence_slab_init);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 78e07c2507c7..ad3704685163 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -297,10 +297,6 @@ struct drm_sched_fence {
 	 * belongs to.
 	 */
 	struct drm_gpu_scheduler *sched;
-	/**
-	 * @lock: the lock used by the scheduled and the finished fences.
-	 */
-	spinlock_t lock;
 	/**
 	 * @owner: job owner for debugging
 	 */