On Thu, 15 Apr 2021 16:02:29 +0100, Will Deacon wrote:
On Thu, Apr 15, 2021 at 02:25:52PM +0000, Ali Saidi wrote:
While this code is executed with the wait_lock held, a reader can acquire the lock without holding wait_lock. The writer side loops checking the value with atomic_cond_read_acquire(), but only truly acquires the lock when the compare-and-exchange completes successfully, and that cmpxchg isn't ordered. This leaves a window in which reads that follow the lock acquisition in program order can be satisfied before the lock is truly acquired, violating the ordering guarantees the lock should provide.
I think it would be worth spelling this out with an example. The issue appears to be a concurrent reader in interrupt context taking and releasing the lock in the window where the writer has returned from the atomic_cond_read_acquire() but has not yet performed the cmpxchg(). Loads can be speculated during this time, but the A-B-A of the lock word from _QW_WAITING to (_QW_WAITING | _QR_BIAS) and back to _QW_WAITING allows the atomic_cmpxchg_relaxed() to succeed. Is that right?
You're right. What we're seeing is an A-B-A problem: atomic_cond_read_acquire() succeeds, and before the cmpxchg completes, a reader performs an A-B-A on the lock word. This allows the core to observe a read that follows the cmpxchg ahead of the cmpxchg itself succeeding.
We've seen a problem in epoll where the reader does an xchg while holding the read lock, but the writer can see a value change out from under it:
Writer                                    | Reader 2
------------------------------------------------------------------------------
ep_scan_ready_list()                      |
|- write_lock_irq()                       |
   |- queued_write_lock_slowpath()        |
      |- atomic_cond_read_acquire()       |
                                          | read_lock_irqsave(&ep->lock, flags);
                                          | chain_epi_lockless()
                                          |    epi->next = xchg(&ep->ovflist, epi);
                                          | read_unlock_irqrestore(&ep->lock, flags);
      atomic_cmpxchg_relaxed()            |
   READ_ONCE(ep->ovflist);                |
With that in mind, it would probably be a good idea to eyeball the qspinlock slowpath as well, as that uses both atomic_cond_read_acquire() and atomic_try_cmpxchg_relaxed().
It seems plausible that the same thing could occur here in qspinlock:

	if ((val & _Q_TAIL_MASK) == tail) {
		if (atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))
			goto release; /* No contention */
	}
Fixes: b519b56e378ee ("locking/qrwlock: Use atomic_cond_read_acquire() when spinning in qrwloc")
Typo in the quoted subject ('qrwloc').

Ack, will fix.
Signed-off-by: Ali Saidi <alisaidi@amazon.com>
Cc: stable@vger.kernel.org
 kernel/locking/qrwlock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index 4786dd271b45..10770f6ac4d9 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -73,8 +73,8 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 
 	/* When no more readers or writers, set the locked flag */
 	do {
-		atomic_cond_read_acquire(&lock->cnts, VAL == _QW_WAITING);
-	} while (atomic_cmpxchg_relaxed(&lock->cnts, _QW_WAITING,
-					_QW_LOCKED) != _QW_WAITING);
+		atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);
+	} while (atomic_cmpxchg_acquire(&lock->cnts, _QW_WAITING,
+					_QW_LOCKED) != _QW_WAITING);
 }
Patch looks good, so with an updated message:
Acked-by: Will Deacon <will@kernel.org>
Will