The patch below does not apply to the 5.15-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to <stable@vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e302f1046f4c209291b07ff7bc4d15ca26891f16 Mon Sep 17 00:00:00 2001
From: Hao Xu <haoxu@linux.alibaba.com>
Date: Thu, 25 Nov 2021 17:21:02 +0800
Subject: [PATCH] io_uring: fix no lock protection for ctx->cq_extra
ctx->cq_extra should be protected by the completion lock so that req_need_defer() does the right check.
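For reference, the check the commit message refers to looks roughly like this in fs/io_uring.c of that era (a paraphrased sketch, not verbatim source; double-check the exact expression against the target tree):

static bool req_need_defer(struct io_kiocb *req, u32 seq)
{
	if (unlikely(req->flags & REQ_F_IO_DRAIN)) {
		struct io_ring_ctx *ctx = req->ctx;

		/*
		 * cq_extra accounts for CQEs that do not map 1:1 onto
		 * submitted SQEs. Writers update it while holding
		 * completion_lock, so if this comparison runs without
		 * the lock it can race with an update and make the
		 * wrong drain decision.
		 */
		return seq + READ_ONCE(ctx->cq_extra) != ctx->cached_cq_tail;
	}
	return false;
}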
Cc: stable@vger.kernel.org
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20211125092103.224502-2-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
diff --git a/fs/io_uring.c b/fs/io_uring.c
index f666a0e7f5e8..ae9534382b26 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6537,12 +6537,15 @@ static __cold void io_drain_req(struct io_kiocb *req)
 	u32 seq = io_get_sequence(req);
 
 	/* Still need defer if there is pending req in defer list. */
+	spin_lock(&ctx->completion_lock);
 	if (!req_need_defer(req, seq) && list_empty_careful(&ctx->defer_list)) {
+		spin_unlock(&ctx->completion_lock);
 queue:
 		ctx->drain_active = false;
 		io_req_task_queue(req);
 		return;
 	}
+	spin_unlock(&ctx->completion_lock);
 
 	ret = io_req_prep_async(req);
 	if (ret) {
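For anyone applying this by hand, the resulting fast path looks roughly like the sketch below. Everything outside the hunk (the slow-path re-check that jumps back to the queue: label) is a paraphrase of the surrounding upstream source, not verbatim code; note that the slow-path re-check was already done under completion_lock, which is why only this fast-path check needed the lock added:

static __cold void io_drain_req(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	u32 seq = io_get_sequence(req);

	/* Still need defer if there is pending req in defer list. */
	spin_lock(&ctx->completion_lock);
	if (!req_need_defer(req, seq) && list_empty_careful(&ctx->defer_list)) {
		spin_unlock(&ctx->completion_lock);
queue:
		ctx->drain_active = false;
		io_req_task_queue(req);
		return;
	}
	spin_unlock(&ctx->completion_lock);

	/*
	 * Slow path (paraphrased): prepare the request for async
	 * execution, allocate an io_defer_entry, then retake
	 * completion_lock and repeat the check; if deferring is no
	 * longer needed, jump back to the queue: label above,
	 * otherwise append the entry to ctx->defer_list.
	 */
}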