The patch below does not apply to the 5.10-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 44eff40a32e8f5228ae041006352e32638ad2368 Mon Sep 17 00:00:00 2001
From: Pavel Begunkov <asml.silence@gmail.com>
Date: Mon, 26 Jul 2021 14:14:31 +0100
Subject: [PATCH] io_uring: fix io_prep_async_link locking
io_prep_async_link() may be called after arming a linked timeout, automatically making it unsafe to traverse the linked list. Guard with completion_lock if there was a linked timeout.
 
Cc: stable@vger.kernel.org # 5.9+
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/93f7c617e2b4f012a2a175b3dab6bc2f27cebc48.162730443...
Signed-off-by: Jens Axboe <axboe@kernel.dk>
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 5a0fd6bcd318..c4d2b320cdd4 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1279,8 +1279,17 @@ static void io_prep_async_link(struct io_kiocb *req)
 {
 	struct io_kiocb *cur;
 
-	io_for_each_link(cur, req)
-		io_prep_async_work(cur);
+	if (req->flags & REQ_F_LINK_TIMEOUT) {
+		struct io_ring_ctx *ctx = req->ctx;
+
+		spin_lock_irq(&ctx->completion_lock);
+		io_for_each_link(cur, req)
+			io_prep_async_work(cur);
+		spin_unlock_irq(&ctx->completion_lock);
+	} else {
+		io_for_each_link(cur, req)
+			io_prep_async_work(cur);
+	}
 }
 
 static void io_queue_async_work(struct io_kiocb *req)
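
For anyone preparing a backport, the pattern the patch introduces is simply: take the completion lock only when a linked timeout may be concurrently editing the request chain, and keep the lock-free traversal for the common case. Below is a minimal, self-contained userspace sketch of that pattern for illustration; the names struct request, REQ_HAS_TIMEOUT, prep_work() and prep_link() are hypothetical stand-ins and are not io_uring APIs, and a pthread spinlock stands in for ctx->completion_lock.

/*
 * Sketch only: conditionally lock around traversal of a singly linked
 * chain when a concurrent writer (a "linked timeout") may modify it.
 * All identifiers here are made up for the example.
 */
#include <pthread.h>
#include <stdio.h>

#define REQ_HAS_TIMEOUT 0x1		/* stand-in for REQ_F_LINK_TIMEOUT */

struct request {
	unsigned int flags;
	struct request *link;		/* next request in the chain */
};

static pthread_spinlock_t completion_lock;	/* stand-in for ctx->completion_lock */

static void prep_work(struct request *req)
{
	/* placeholder for io_prep_async_work() */
	printf("prepping request %p\n", (void *)req);
}

static void prep_link(struct request *req)
{
	struct request *cur;

	if (req->flags & REQ_HAS_TIMEOUT) {
		/* a timeout handler may unlink entries concurrently: lock */
		pthread_spin_lock(&completion_lock);
		for (cur = req; cur; cur = cur->link)
			prep_work(cur);
		pthread_spin_unlock(&completion_lock);
	} else {
		/* no concurrent writer possible: traverse without the lock */
		for (cur = req; cur; cur = cur->link)
			prep_work(cur);
	}
}

int main(void)
{
	struct request b = { .flags = 0, .link = NULL };
	struct request a = { .flags = REQ_HAS_TIMEOUT, .link = &b };

	pthread_spin_init(&completion_lock, PTHREAD_PROCESS_PRIVATE);
	prep_link(&a);
	pthread_spin_destroy(&completion_lock);
	return 0;
}

Compile with "cc example.c -lpthread". In the kernel patch itself the locked path uses spin_lock_irq() because the completion lock is also taken from interrupt context; the userspace sketch has no such constraint.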