Hi Juri,
Do you know how to test the rt-mutex?
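I was thinking of something along the lines of the sketch below, a userspace test that takes a PTHREAD_PRIO_INHERIT mutex so the PI futex path (and thus the kernel rt-mutex) gets exercised, but I am not sure it is enough to hit the deboost path. The helper names, priorities and sleep times are just placeholders; it needs root (or CAP_SYS_NICE) and -lpthread:

/* pi_test.c - rough sketch only; build with: gcc pi_test.c -o pi_test -lpthread */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t pi_lock;

static void *low_prio_fn(void *arg)
{
	/* Take the PI lock first, so the high prio waiter boosts us. */
	pthread_mutex_lock(&pi_lock);
	usleep(100 * 1000);			/* hold the lock while boosted */
	pthread_mutex_unlock(&pi_lock);		/* deboost should happen here */
	return NULL;
}

static void *high_prio_fn(void *arg)
{
	pthread_mutex_lock(&pi_lock);		/* blocks, boosts the low prio owner */
	pthread_mutex_unlock(&pi_lock);
	return NULL;
}

static pthread_t start_fifo_thread(void *(*fn)(void *), int prio)
{
	pthread_attr_t attr;
	struct sched_param sp = { .sched_priority = prio };
	pthread_t tid;

	pthread_attr_init(&attr);
	pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
	pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
	pthread_attr_setschedparam(&attr, &sp);
	if (pthread_create(&tid, &attr, fn, NULL))
		fprintf(stderr, "pthread_create failed (needs root/CAP_SYS_NICE)\n");
	return tid;
}

int main(void)
{
	pthread_mutexattr_t mattr;
	pthread_t low, high;

	pthread_mutexattr_init(&mattr);
	pthread_mutexattr_setprotocol(&mattr, PTHREAD_PRIO_INHERIT);
	pthread_mutex_init(&pi_lock, &mattr);

	low = start_fifo_thread(low_prio_fn, 10);
	usleep(10 * 1000);			/* let the low prio thread grab the lock */
	high = start_fifo_thread(high_prio_fn, 50);

	pthread_join(high, NULL);
	pthread_join(low, NULL);
	return 0;
}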
Regards
On 04/01/2017 01:37 PM, Alex Shi wrote:
Hi Juri & Mathieu,
Sorry, I sent out a wrong version of this patch; the correct one is below. Would you like to review this small patchset?
Regards,
Alex
From a9b9fdcb07bcfef969c1bd7c660063108e2e61c2 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@linaro.org>
Date: Fri, 31 Mar 2017 11:40:22 +0800
Subject: [RFC PATCH 2/3] rt-mutex: deboost priority conditionally when rt-mutex unlock
rt_mutex_fastunlock() deboosts the 'current' task only when it needs to be deboosted, but rt_mutex_slowunlock() sets the 'deboost' flag unconditionally. That causes unnecessary priority adjustments.

'current' releases the lock, so 'current' is a higher prio task than the next top waiter, unless the current prio was inherited from this top waiter; if so, we need to deboost 'current' after the lock release.
Signed-off-by: Alex Shi <alex.shi@linaro.org>
To: linux-kernel@vger.kernel.org
To: Ingo Molnar <mingo@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/locking/rtmutex.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 6edc32e..f283a12 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1037,10 +1037,11 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
  *
  * Called with lock->wait_lock held and interrupts disabled.
  */
-static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
+static bool mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 				    struct rt_mutex *lock)
 {
 	struct rt_mutex_waiter *waiter;
+	bool deboost = false;
 
 	raw_spin_lock(&current->pi_lock);
 
@@ -1055,6 +1056,15 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 	rt_mutex_dequeue_pi(current, waiter);
 
 	/*
+	 * 'current' releases this lock, so 'current' is a higher prio task
+	 * than the next top waiter, unless the current prio was inherited
+	 * from this top waiter; if so, we need to deboost 'current'
+	 * after the lock release.
+	 */
+	if (current->prio == waiter->prio)
+		deboost = true;
+
+	/*
 	 * As we are waking up the top waiter, and the waiter stays
 	 * queued on the lock until it gets the lock, this lock
 	 * obviously has waiters. Just set the bit here and this has
@@ -1067,6 +1077,8 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 	raw_spin_unlock(&current->pi_lock);
 
 	wake_q_add(wake_q, waiter->task);
+
+	return deboost;
 }
 
 /*
@@ -1336,6 +1348,7 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
 					struct wake_q_head *wake_q)
 {
 	unsigned long flags;
+	bool deboost = false;
 
 	/* irqsave required to support early boot calls */
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
@@ -1389,12 +1402,12 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
 	 *
 	 * Queue the next waiter for wakeup once we release the wait_lock.
 	 */
-	mark_wakeup_next_waiter(wake_q, lock);
+	deboost = mark_wakeup_next_waiter(wake_q, lock);
 
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	/* check PI boosting */
-	return true;
+	return deboost;
 }
 
 /*
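For completeness, below is roughly how I expect the caller to consume the new return value. It is only a simplified sketch of the rt_mutex_fastunlock() unlock path based on my reading of the current code, not a verbatim copy, so the surrounding details may differ:

/*
 * Simplified sketch (not verbatim kernel code): the slow unlock path
 * now reports whether a PI deboost is actually needed, and the caller
 * only pays for the priority adjustment in that case.
 */
static inline void
rt_mutex_fastunlock(struct rt_mutex *lock,
		    bool (*slowfn)(struct rt_mutex *lock,
				   struct wake_q_head *wqh))
{
	DEFINE_WAKE_Q(wake_q);
	bool deboost;

	/* Fast path: no waiters, just drop ownership. */
	if (likely(rt_mutex_cmpxchg_release(lock, current, NULL)))
		return;

	/* Slow path: wake the top waiter and learn whether we were boosted. */
	deboost = slowfn(lock, &wake_q);

	wake_up_q(&wake_q);

	/* Undo PI boosting only when the slow path asked for it. */
	if (deboost)
		rt_mutex_adjust_prio(current);
}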