WW mutexes, and the dma-resv objects which embed them, typically have a number of locks belonging to the same lock class. However, code using them typically wants to verify locking at object granularity, not lock-class granularity.
This series adds ww_mutex functions to facilitate that (patch 1) and utilizes these functions in the dma-resv lock checks (patch 2).
Thomas Hellström (2):
  kernel/locking/ww_mutex: Add per-lock lock-check helpers
  dma-buf/dma-resv: Improve the dma-resv lockdep checks
 include/linux/dma-resv.h |  7 +++++--
 include/linux/ww_mutex.h | 18 ++++++++++++++++++
 kernel/locking/mutex.c   | 10 ++++++++++
 3 files changed, 33 insertions(+), 2 deletions(-)
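As a quick preview of the per-object check this enables (a hedged sketch, not part of the patches; "lock" is a hypothetical ww_mutex acquired by the current task):

	ww_mutex_assert_held(lock);	/* asserts current owns *this* lock,
					 * not just some lock of its class */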
Code using ww_mutexes typically, by design, has a number of such mutexes sharing the same ww_class, and within a ww transaction they are all lockdep-annotated using a nest_lock, which means that multiple ww_mutexes of the same lockdep class may be locked at the same time. That means that lock_is_held() returns true and lockdep_assert_held() doesn't fire as long as a *single* ww_mutex of the same class is held, IOW within a WW transaction.
Code using these mutexes typically wants to assert that individual ww_mutexes are held, not that any ww_mutex of the same class is held.
Introduce functions that can be used for that.
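To illustrate the failure mode and the new helpers, a minimal sketch (the ww_class, mtx_a and mtx_b names are made up for the example and not part of the patch):

#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(ww_class);

/* mtx_a and mtx_b were both initialized with ww_mutex_init(..., &ww_class) */
static void example(struct ww_mutex *mtx_a, struct ww_mutex *mtx_b)
{
	struct ww_acquire_ctx ctx;

	ww_acquire_init(&ctx, &ww_class);
	/* a single lock can't hit -EDEADLK; real code must handle it */
	if (ww_mutex_lock(mtx_a, &ctx))
		return;

	/*
	 * Within the transaction, lockdep folds same-class ww_mutexes
	 * into one held reference, so this class-level assert does NOT
	 * fire even though mtx_b is unlocked:
	 */
	lockdep_assert_held(&mtx_b->base);

	/* The new per-lock helper checks the actual owner and fires: */
	ww_mutex_assert_held(mtx_b);

	ww_mutex_unlock(mtx_a);
	ww_acquire_fini(&ctx);
}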
RFC: Placement of the functions? lockdep.c? Are the #ifdefs testing for the correct config?
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 include/linux/ww_mutex.h | 18 ++++++++++++++++++
 kernel/locking/mutex.c   | 10 ++++++++++
 2 files changed, 28 insertions(+)
diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index 45ff6f7a872b..7bc0f533dea6 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -380,4 +380,22 @@ static inline bool ww_mutex_is_locked(struct ww_mutex *lock)
 	return ww_mutex_base_is_locked(&lock->base);
 }
 
+#ifdef CONFIG_PROVE_LOCKING
+
+bool ww_mutex_held(struct ww_mutex *lock);
+
+#else /* CONFIG_PROVE_LOCKING */
+
+static inline bool ww_mutex_held(struct ww_mutex *lock)
+{
+	return true;
+}
+
+#endif /* CONFIG_PROVE_LOCKING */
+
+static inline void ww_mutex_assert_held(struct ww_mutex *lock)
+{
+	lockdep_assert(ww_mutex_held(lock));
+}
+
 #endif
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index de7d6702cd96..37868b739efd 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -1174,3 +1174,13 @@ int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock)
 	return 1;
 }
 EXPORT_SYMBOL(atomic_dec_and_mutex_lock);
+
+#ifdef CONFIG_PROVE_LOCKING
+
+bool ww_mutex_held(struct ww_mutex *lock)
+{
+	return __ww_mutex_owner(&lock->base) == current;
+}
+EXPORT_SYMBOL(ww_mutex_held);
+
+#endif /* CONFIG_PROVE_LOCKING */
On Thu, Nov 20, 2025 at 12:03:40PM +0100, Thomas Hellström wrote:
> Code using ww_mutexes typically, by design, has a number of such mutexes sharing the same ww_class, and within a ww transaction they are all lockdep-annotated using a nest_lock, which means that multiple ww_mutexes of the same lockdep class may be locked at the same time. That means that lock_is_held() returns true and lockdep_assert_held() doesn't fire as long as a *single* ww_mutex of the same class is held, IOW within a WW transaction.
> 
> Code using these mutexes typically wants to assert that individual ww_mutexes are held, not that any ww_mutex of the same class is held.
> 
> Introduce functions that can be used for that.
> 
> RFC: Placement of the functions? lockdep.c? Are the #ifdefs testing for the correct config?
Yeah, I think so.
Ack on this.
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>  include/linux/ww_mutex.h | 18 ++++++++++++++++++
>  kernel/locking/mutex.c   | 10 ++++++++++
>  2 files changed, 28 insertions(+)
> 
> diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
> index 45ff6f7a872b..7bc0f533dea6 100644
> --- a/include/linux/ww_mutex.h
> +++ b/include/linux/ww_mutex.h
> @@ -380,4 +380,22 @@ static inline bool ww_mutex_is_locked(struct ww_mutex *lock)
>  	return ww_mutex_base_is_locked(&lock->base);
>  }
>  
> +#ifdef CONFIG_PROVE_LOCKING
> +
> +bool ww_mutex_held(struct ww_mutex *lock);
> +
> +#else /* CONFIG_PROVE_LOCKING */
> +
> +static inline bool ww_mutex_held(struct ww_mutex *lock)
> +{
> +	return true;
> +}
> +
> +#endif /* CONFIG_PROVE_LOCKING */
> +
> +static inline void ww_mutex_assert_held(struct ww_mutex *lock)
> +{
> +	lockdep_assert(ww_mutex_held(lock));
> +}
> +
>  #endif
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index de7d6702cd96..37868b739efd 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -1174,3 +1174,13 @@ int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock)
>  	return 1;
>  }
>  EXPORT_SYMBOL(atomic_dec_and_mutex_lock);
> +
> +#ifdef CONFIG_PROVE_LOCKING
> +
> +bool ww_mutex_held(struct ww_mutex *lock)
> +{
> +	return __ww_mutex_owner(&lock->base) == current;
> +}
> +EXPORT_SYMBOL(ww_mutex_held);
> +
> +#endif /* CONFIG_PROVE_LOCKING */
> 
> -- 
> 2.51.1
Ensure that dma_resv_held() and dma_resv_assert_held() operate on individual reservation objects within a WW transaction rather than on the reservation WW class.
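A hedged sketch of the resulting behavior (obj_a and obj_b are assumed struct dma_resv pointers; only obj_a is locked in the transaction):

	struct ww_acquire_ctx ctx;

	ww_acquire_init(&ctx, &reservation_ww_class);
	if (dma_resv_lock(obj_a, &ctx))
		return;

	dma_resv_assert_held(obj_a);	/* passes: current owns obj_a->lock */
	dma_resv_assert_held(obj_b);	/* used to pass as well; with this
					 * patch it fires, since obj_b->lock
					 * isn't owned by current */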
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 include/linux/dma-resv.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index c5ab6fd9ebe8..001de3880fde 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -308,8 +308,11 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor)
 	     fence = dma_resv_iter_first(cursor); fence;	\
 	     fence = dma_resv_iter_next(cursor))
 
-#define dma_resv_held(obj) lockdep_is_held(&(obj)->lock.base)
-#define dma_resv_assert_held(obj) lockdep_assert_held(&(obj)->lock.base)
+#define dma_resv_held(obj) (lockdep_is_held(&(obj)->lock.base) && ww_mutex_held(&(obj)->lock))
+#define dma_resv_assert_held(obj) do {				\
+		lockdep_assert_held(&(obj)->lock.base);		\
+		ww_mutex_assert_held(&(obj)->lock);		\
+	} while (0)
 
 #ifdef CONFIG_DEBUG_MUTEXES
 void dma_resv_reset_max_fences(struct dma_resv *obj);
On 11/20/25 12:03, Thomas Hellström wrote:
> Ensure that dma_resv_held() and dma_resv_assert_held() operate on individual reservation objects within a WW transaction rather than on the reservation WW class.
> 
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
I can't judge the lockdep backend changes, but this patch here makes a lot of sense.
Reviewed-by: Christian König <christian.koenig@amd.com>
That reminds me that Pierre-Eric stumbled over some odd lockdep behavior while working on TTM as well. @Pierre-Eric, what was this issue?
Regards, Christian.
>  include/linux/dma-resv.h | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index c5ab6fd9ebe8..001de3880fde 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -308,8 +308,11 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor)
>  	     fence = dma_resv_iter_first(cursor); fence;	\
>  	     fence = dma_resv_iter_next(cursor))
>  
> -#define dma_resv_held(obj) lockdep_is_held(&(obj)->lock.base)
> -#define dma_resv_assert_held(obj) lockdep_assert_held(&(obj)->lock.base)
> +#define dma_resv_held(obj) (lockdep_is_held(&(obj)->lock.base) && ww_mutex_held(&(obj)->lock))
> +#define dma_resv_assert_held(obj) do {				\
> +		lockdep_assert_held(&(obj)->lock.base);		\
> +		ww_mutex_assert_held(&(obj)->lock);		\
> +	} while (0)
>  
>  #ifdef CONFIG_DEBUG_MUTEXES
>  void dma_resv_reset_max_fences(struct dma_resv *obj);