The patch below does not apply to the 5.14-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to <stable@vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From d3e3c102d107bb84251455a298cf475f24bab995 Mon Sep 17 00:00:00 2001
From: Jens Axboe <axboe@kernel.dk>
Date: Thu, 11 Nov 2021 17:32:53 -0700
Subject: [PATCH] io-wq: serialize hash clear with wakeup
We need to ensure that we serialize the stalled and hash bits with the wait_queue wait handler, or we could be racing with someone modifying the hashed state after we find it busy, but before we then give up and wait for it to be cleared. This can cause random delays or stalls when handling buffered writes for many files, where some of these files cause hash collisions between the worker threads.
Cc: stable@vger.kernel.org
Reported-by: Daniel Black <daniel@mariadb.org>
Fixes: e941894eae31 ("io-wq: make buffered file write hashed work map per-ctx")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
diff --git a/fs/io-wq.c b/fs/io-wq.c
index afd955d53db9..88202de519f6 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -423,9 +423,10 @@ static inline unsigned int io_get_work_hash(struct io_wq_work *work)
 	return work->flags >> IO_WQ_HASH_SHIFT;
 }
 
-static void io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
+static bool io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
 {
 	struct io_wq *wq = wqe->wq;
+	bool ret = false;
 
 	spin_lock_irq(&wq->hash->wait.lock);
 	if (list_empty(&wqe->wait.entry)) {
@@ -433,9 +434,11 @@ static void io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
 		if (!test_bit(hash, &wq->hash->map)) {
 			__set_current_state(TASK_RUNNING);
 			list_del_init(&wqe->wait.entry);
+			ret = true;
 		}
 	}
 	spin_unlock_irq(&wq->hash->wait.lock);
+	return ret;
 }
 
 static struct io_wq_work *io_get_next_work(struct io_wqe_acct *acct,
@@ -475,14 +478,21 @@ static struct io_wq_work *io_get_next_work(struct io_wqe_acct *acct,
 	}
 
 	if (stall_hash != -1U) {
+		bool unstalled;
+
 		/*
 		 * Set this before dropping the lock to avoid racing with new
 		 * work being added and clearing the stalled bit.
 		 */
 		set_bit(IO_ACCT_STALLED_BIT, &acct->flags);
 		raw_spin_unlock(&wqe->lock);
-		io_wait_on_hash(wqe, stall_hash);
+		unstalled = io_wait_on_hash(wqe, stall_hash);
 		raw_spin_lock(&wqe->lock);
+		if (unstalled) {
+			clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
+			if (wq_has_sleeper(&wqe->wq->hash->wait))
+				wake_up(&wqe->wq->hash->wait);
+		}
 	}
 
 	return NULL;
@@ -564,8 +574,11 @@ static void io_worker_handle_work(struct io_worker *worker)
 			io_wqe_enqueue(wqe, linked);
 
 		if (hash != -1U && !next_hashed) {
+			/* serialize hash clear with wake_up() */
+			spin_lock_irq(&wq->hash->wait.lock);
 			clear_bit(hash, &wq->hash->map);
 			clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
+			spin_unlock_irq(&wq->hash->wait.lock);
 			if (wq_has_sleeper(&wq->hash->wait))
 				wake_up(&wq->hash->wait);
 			raw_spin_lock(&wqe->lock);
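[Editor's note: the race the commit message describes is the classic lost-wakeup pattern. Below is a minimal userspace sketch of the same idea using pthreads; the names (hash_map, HASH_BIT, wait_lock) are illustrative stand-ins, not io-wq's actual structures. It demonstrates that the side clearing the "hash busy" bit and waking waiters must hold the same lock the waiter holds while re-checking the bit before sleeping, which is what moving clear_bit() under wq->hash->wait.lock buys in the patch above.]

/* Minimal lost-wakeup sketch (illustrative only, not kernel code). */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t wait_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wait_cond = PTHREAD_COND_INITIALIZER;
static unsigned long hash_map;		/* stand-in for wq->hash->map */
#define HASH_BIT 3

/* Finishing worker: clear the busy bit and wake waiters, with the
 * clear done under wait_lock - this mirrors the patch taking
 * wq->hash->wait.lock around clear_bit(). */
static void *clearing_worker(void *arg)
{
	(void)arg;
	sleep(1);				/* let the waiter get going */
	pthread_mutex_lock(&wait_lock);
	hash_map &= ~(1UL << HASH_BIT);		/* clear_bit(hash, map) */
	pthread_cond_broadcast(&wait_cond);	/* wake_up(&hash->wait) */
	pthread_mutex_unlock(&wait_lock);
	return NULL;
}

/* Stalled worker: re-check the bit under the same lock before
 * sleeping, as io_wait_on_hash() does. Checking the bit outside
 * wait_lock and then sleeping is exactly the race being fixed:
 * the clear+wake can land in that window and the wakeup is lost. */
static void *stalled_worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&wait_lock);
	while (hash_map & (1UL << HASH_BIT))
		pthread_cond_wait(&wait_cond, &wait_lock);
	pthread_mutex_unlock(&wait_lock);
	printf("hash clear observed, worker can retry\n");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	hash_map |= 1UL << HASH_BIT;	/* another worker holds the hash */
	pthread_create(&a, NULL, stalled_worker, NULL);
	pthread_create(&b, NULL, clearing_worker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

In the kernel there is no condition variable; the wait_queue_head's internal spinlock plays the role of wait_lock here, which is why the patch both takes wq->hash->wait.lock around the clear and has io_wait_on_hash() re-test the bit while holding that same lock.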
On 11/14/21 7:25 AM, gregkh@linuxfoundation.org wrote:
> The patch below does not apply to the 5.14-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to <stable@vger.kernel.org>.
This needs a few more backports - here are two backports that it depends on, and then this one backported as well.
On Sun, Nov 14, 2021 at 07:39:27AM -0700, Jens Axboe wrote:
> On 11/14/21 7:25 AM, gregkh@linuxfoundation.org wrote:
>> The patch below does not apply to the 5.14-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to <stable@vger.kernel.org>.
>
> This needs a few more backports - here are two backports that it depends on, and then this one backported as well.
Now queued up, thanks.
greg k-h