On Sun, Apr 28, 2013 at 07:04:07PM +0200, Maarten Lankhorst wrote:
Changes since RFC patch v1:
- Updated to use atomic_long instead of atomic, since the reservation_id was a long.
- added mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow
- removed mutex_locked_set_reservation_id (or whatever it was called)
Changes since RFC patch v2:
- remove use of __mutex_lock_retval_arg, add warnings when using wrong combination of mutex_(,reserve_)lock/unlock.
Changes since v1:
- Add __always_inline to __mutex_lock_common, otherwise reservation paths can be triggered from normal locks, because __builtin_constant_p might evaluate to false for the constant 0 in that case (see the sketch after this list). Tests for this have been added in the next patch.
- Updated documentation slightly.
Changes since v2:
- Renamed everything to ww_mutex. (mlankhorst)
- Added ww_acquire_ctx and ww_class. (mlankhorst)
- Added a lot of checks for wrong api usage. (mlankhorst)
- Documentation updates. (danvet)
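
To illustrate the __always_inline point in the v1 changelog above (my own minimal sketch, not the exact code from the patch): the NULL check can only constant-fold when the common helper is actually inlined into each caller:

	static __always_inline int
	__mutex_lock_common(struct mutex *lock, long state,
			    struct ww_acquire_ctx *ww_ctx)
	{
		/*
		 * Constant-folds away only when inlined into a caller
		 * passing a literal NULL; in an out-of-line copy shared
		 * by all callers, __builtin_constant_p() evaluates to
		 * false and plain mutex_lock() would take the ww path
		 * too.
		 */
		if (!__builtin_constant_p(ww_ctx == NULL)) {
			/* wound/wait handling ... */
		}
		/* ... common slowpath ... */
		return 0;
	}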
While writing the kerneldoc I've carefully checked that all restrictions are enforced through debug checks somehow. I think that with full mutex debugging (including lockdep) enabled, plus the slowpath injector patch I've just posted, _all_ interface abuse will be caught at runtime, as long as the single-threaded/uncontended cases are exercised sufficiently.
So I think we've fully achieved level 5 on the Rusty API safety scale here. Higher levels seem pretty hard to reach given that the concepts are rather fancy, but with the new (and much more consistent) naming, plus the explicit introduction of ww_class and ww_acquire_ctx as (more abstract) structures, the interface is about as intuitive as it gets.
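
To illustrate, here's roughly how I read the intended usage for grabbing two buffers (my own sketch based on the kerneldoc, not code from the patch; the class and variable names are made up, and the full retry loop a real caller needs around the backoff is elided):

	static DEFINE_WW_CLASS(buf_ww_class);	/* hypothetical class */

	struct ww_acquire_ctx ctx;
	int ret;

	ww_acquire_init(&ctx, &buf_ww_class);

	/* ctx holds nothing yet, so the first lock simply waits */
	ret = ww_mutex_lock(&buf1->lock, &ctx);
	if (!ret)
		ret = ww_mutex_lock(&buf2->lock, &ctx);

	if (ret == -EDEADLK) {
		/*
		 * Wounded by an older ctx: back off, then sleep on the
		 * contended lock in the slowpath before retrying buf1.
		 */
		ww_mutex_unlock(&buf1->lock);
		ww_mutex_lock_slow(&buf2->lock, &ctx);
	}

	ww_acquire_done(&ctx);
	/* ... use the buffers, ww_mutex_unlock() both, ww_acquire_fini(&ctx) */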
So altogether I'm pretty happy with what the interface looks like. And one quick bikeshed below on the implementation. -Daniel
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
 Documentation/ww-mutex-design.txt | 322 +++++++++++++++++++++++++++
 include/linux/mutex-debug.h       |   1
 include/linux/mutex.h             | 257 +++++++++++++++++++++
 kernel/mutex.c                    | 445 ++++++++++++++++++++++++++++++++++++-
 lib/debug_locks.c                 |   2
 5 files changed, 1010 insertions(+), 17 deletions(-)
 create mode 100644 Documentation/ww-mutex-design.txt
[snip]
+/*
+ * after acquiring lock with fastpath or when we lost out in contested
+ * slowpath, set ctx and wake up any waiters so they can recheck.
+ * This function is never called when CONFIG_DEBUG_LOCK_ALLOC is set,
+ * as the fastpath and opportunistic spinning are disabled in that case.
+ */
+static __always_inline void
+ww_mutex_set_context_fastpath(struct ww_mutex *lock,
+			       struct ww_acquire_ctx *ctx)
+{
+	unsigned long flags;
+	struct mutex_waiter *cur;
+
+	ww_mutex_lock_acquired(lock, ctx, false);
+
+	lock->ctx = ctx;
+	smp_mb__after_atomic_dec();
I think this should be
+	smp_mb__after_atomic_dec();
+	lock->ctx = ctx;
+	smp_mb();
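
My reading of why (the patch doesn't spell it out): the trailing smp_mb() has to keep the ctx store from being reordered past the atomic_read of lock->base.count below, pairing with the waiter's fully-ordered atomic op when it marks the lock contended. Roughly:

	/*
	 * this function                      waiter slowpath
	 *
	 * lock->ctx = ctx;                   atomic_xchg(&lock->base.count, -1);
	 * smp_mb();                          (xchg implies a full barrier)
	 * atomic_read(&lock->base.count);    recheck lock->ctx;
	 *
	 * Either we observe the contended count and wake the waiter, or
	 * the waiter observes the new ctx -- never neither.
	 */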
Also I wonder a bit how much this hurts the fastpath, and whether we should just shovel the ctx into the atomic field with a cmpxchg, like the rt mutex code does with the current pointer.
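
To make that musing concrete, a minimal sketch of the alternative (entirely hypothetical: the atomic ctx word and the helper name are invented here, with 0 standing for "no ctx"):

	/* hypothetical: publish ctx with a single atomic op, rtmutex-style */
	static inline bool ww_mutex_publish_ctx(atomic_long_t *ctx_word,
						struct ww_acquire_ctx *ctx)
	{
		/*
		 * cmpxchg implies a full barrier on success, so the
		 * explicit smp_mb() dance above would go away.
		 */
		return atomic_long_cmpxchg(ctx_word, 0UL,
					   (unsigned long)ctx) == 0UL;
	}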
+
+	/*
+	 * Check if lock is contended, if not there is nobody to wake up
+	 */
+	if (likely(atomic_read(&lock->base.count) == 0))
+		return;
+
+	/*
+	 * Uh oh, we raced in fastpath, wake up everyone in this case,
+	 * so they can see the new ctx
+	 */
+	spin_lock_mutex(&lock->base.wait_lock, flags);
+	list_for_each_entry(cur, &lock->base.wait_list, list) {
+		debug_mutex_wake_waiter(&lock->base, cur);
+		wake_up_process(cur->task);
+	}
+	spin_unlock_mutex(&lock->base.wait_lock, flags);
+}