Backports for the io_uring patches that recently failed to apply to stable-5.15.
Dylan Yudaken (1):
  io_uring: always lock in io_apoll_task_func

Pavel Begunkov (2):
  io_uring: break out of iowq iopoll on teardown
  io_uring: break iopolling on signal

 io_uring/io-wq.c    | 10 ++++++++++
 io_uring/io-wq.h    |  1 +
 io_uring/io_uring.c |  9 ++++++++-
 3 files changed, 19 insertions(+), 1 deletion(-)
From: Dylan Yudaken <dylany@meta.com>
[ upstream commit c06c6c5d276707e04cedbcc55625e984922118aa ]
This is required for the failure case (io_req_complete_failed) and is missing.
The alternative would be to only lock in the failure path; however, all of the non-error paths in io_poll_check_events that do not return IOU_POLL_NO_ACTION end up locking anyway. The only extraneous lock would be for a multishot poll overflowing the CQE ring, and multishot poll would probably benefit from being locked anyway, as it allows completions to be batched.
So it seems reasonable to lock always.
Signed-off-by: Dylan Yudaken <dylany@meta.com>
Link: https://lore.kernel.org/r/20221124093559.3780686-3-dylany@meta.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/io_uring.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index e4de493bcff4..fec6b6a409e7 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -5716,6 +5716,7 @@ static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
 	if (ret > 0)
 		return;
 
+	io_tw_lock(req->ctx, locked);
 	io_poll_remove_entries(req);
 	spin_lock(&ctx->completion_lock);
 	hash_del(&req->hash_node);
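For context, the pattern the patch enforces can be sketched in plain userspace C: take the lock unconditionally in the completion callback, since every path that posts a completion needs it anyway, and batching under one acquisition is cheap. This is only an illustrative analogue of the control flow above, not kernel code; ring_lock, post_completion() and apoll_task_func() are invented stand-ins.

/* Userspace analogue of "always lock in the task-work callback":
 * every path that produces a completion takes the ring lock up
 * front, so the failure path cannot run unlocked and back-to-back
 * completions batch under a single acquisition. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;
static int completion_ring[64];
static int ring_tail;

/* Hypothetical stand-in for posting a CQE; must be called locked. */
static void post_completion(int res)
{
	completion_ring[ring_tail++ % 64] = res;
}

/* Analogue of io_apoll_task_func(): 'ret' mimics the result of
 * io_poll_check_events(). */
static void apoll_task_func(int ret)
{
	if (ret > 0)	/* "no action" case: nothing to post */
		return;

	/* Lock unconditionally: the success path and the failure
	 * path below both post a completion. */
	pthread_mutex_lock(&ring_lock);
	if (ret == 0)
		post_completion(0);	/* normal completion */
	else
		post_completion(ret);	/* failure case needs the lock too */
	pthread_mutex_unlock(&ring_lock);
}

int main(void)
{
	apoll_task_func(0);	/* success */
	apoll_task_func(-1);	/* failure: previously could run unlocked */
	printf("completions posted: %d\n", ring_tail);
	return 0;
}

The point mirrors the commit message: only the ret > 0 path skips the lock, so making every remaining path uniformly locked costs almost nothing and removes the unlocked failure path.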
From: Pavel Begunkov <asml.silence@gmail.com>

[ upstream commit 45500dc4e01c167ee063f3dcc22f51ced5b2b1e9 ]
io-wq will retry iopoll even when it failed with -EAGAIN. If that races with task exit, which sets TIF_NOTIFY_SIGNAL for all its workers, such workers can spin indefinitely, retrying iopoll again and again and failing each time on some allocation / waiting / etc. Don't keep spinning if io-wq is dying.
Fixes: 561fb04a6a225 ("io_uring: replace workqueue usage with io-wq")
Cc: stable@vger.kernel.org
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io-wq.c    | 10 ++++++++++
 io_uring/io-wq.h    |  1 +
 io_uring/io_uring.c |  3 ++-
 3 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
index 81485c1a9879..fe8594a0396c 100644
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -176,6 +176,16 @@ static void io_worker_ref_put(struct io_wq *wq)
 		complete(&wq->worker_done);
 }
 
+bool io_wq_worker_stopped(void)
+{
+	struct io_worker *worker = current->pf_io_worker;
+
+	if (WARN_ON_ONCE(!io_wq_current_is_worker()))
+		return true;
+
+	return test_bit(IO_WQ_BIT_EXIT, &worker->wqe->wq->state);
+}
+
 static void io_worker_cancel_cb(struct io_worker *worker)
 {
 	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
index bf5c4c533760..48721cbd5f40 100644
--- a/io_uring/io-wq.h
+++ b/io_uring/io-wq.h
@@ -129,6 +129,7 @@ void io_wq_hash_work(struct io_wq_work *work, void *val);
 
 int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask);
 int io_wq_max_workers(struct io_wq *wq, int *new_count);
+bool io_wq_worker_stopped(void);
 
 static inline bool io_wq_is_hashed(struct io_wq_work *work)
 {
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index fec6b6a409e7..077c9527be37 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -7069,7 +7069,8 @@ static void io_wq_submit_work(struct io_wq_work *work)
 			 */
 			if (ret != -EAGAIN || !(req->ctx->flags & IORING_SETUP_IOPOLL))
 				break;
-
+			if (io_wq_worker_stopped())
+				break;
 			/*
 			 * If REQ_F_NOWAIT is set, then don't wait or retry with
 			 * poll. -EAGAIN is final for that case.
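The failure mode is easiest to see as a bare loop: the worker retries an operation that keeps failing with -EAGAIN, and nothing inside the loop notices that the pool has started exiting. Below is a minimal userspace sketch of the same shape, assuming C11 atomics; pool_exiting, do_iopoll_once() and submit_work() are invented names standing in for IO_WQ_BIT_EXIT, io_issue_sqe() and io_wq_submit_work().

/* Userspace sketch of the io-wq fix: a retry loop that previously
 * spun forever on EAGAIN now also breaks when the worker pool is
 * being torn down. */
#include <errno.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool pool_exiting;	/* analogue of IO_WQ_BIT_EXIT */

/* Stand-in for one iopoll attempt; always fails with EAGAIN here,
 * mimicking the pathological case from the commit message. */
static int do_iopoll_once(void)
{
	return -EAGAIN;
}

static int submit_work(void)
{
	int ret;

	do {
		ret = do_iopoll_once();
		if (ret != -EAGAIN)
			break;
		/* The fix: without this check the loop spins forever
		 * once teardown has begun. */
		if (atomic_load(&pool_exiting))
			break;
	} while (1);

	return ret;
}

int main(void)
{
	atomic_store(&pool_exiting, true);	/* simulate teardown racing in */
	printf("worker returned %d instead of spinning\n", submit_work());
	return 0;
}

Comment out the pool_exiting check and the program never returns, which is exactly the infinite spin the patch prevents.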
From: Pavel Begunkov <asml.silence@gmail.com>

[ upstream commit dc314886cb3d0e4ab2858003e8de2917f8a3ccbd ]
Don't keep spinning in iopoll with a signal set. The loop will eventually return, e.g. by virtue of need_resched(), but it's not a nice user experience.
Cc: stable@vger.kernel.org
Fixes: def596e9557c9 ("io_uring: support for IO polling")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/eeba551e82cad12af30c3220125eb6cb244cc94c.169159433...
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io_uring.c | 5 +++++
 1 file changed, 5 insertions(+)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 077c9527be37..1519125b9814 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2668,6 +2668,11 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
 			break;
 		}
 		ret = io_do_iopoll(ctx, &nr_events, min);
+
+		if (task_sigpending(current)) {
+			ret = -EINTR;
+			goto out;
+		}
 	} while (!ret && nr_events < min && !need_resched());
 out:
 	mutex_unlock(&ctx->uring_lock);
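The same idea in a standalone userspace sketch: a busy-poll loop should treat a pending signal as a reason to bail out with -EINTR instead of waiting for something else to break it. iopoll_once() and iopoll_check() are invented stand-ins for io_do_iopoll() and io_iopoll_check(); the SIGUSR1 plumbing just simulates a signal arriving mid-poll.

/* Userspace sketch of breaking a poll loop on a pending signal:
 * raise() queues a blocked SIGUSR1, and the loop notices it via
 * sigpending() and bails out with -EINTR instead of spinning. */
#include <errno.h>
#include <signal.h>
#include <stdio.h>

/* Stand-in for io_do_iopoll(): pretend no events completed yet. */
static int iopoll_once(int *nr_events)
{
	(void)nr_events;
	return 0;
}

static int iopoll_check(int min)
{
	sigset_t pending;
	int nr_events = 0, ret;

	do {
		ret = iopoll_once(&nr_events);

		/* The fix: a pending signal ends the loop right away. */
		if (sigpending(&pending) == 0 &&
		    sigismember(&pending, SIGUSR1))
			return -EINTR;
	} while (!ret && nr_events < min);

	return ret;
}

int main(void)
{
	sigset_t block;

	/* Block SIGUSR1 so raising it leaves it pending, mimicking a
	 * signal that arrives while the kernel is busy polling. */
	sigemptyset(&block);
	sigaddset(&block, SIGUSR1);
	sigprocmask(SIG_BLOCK, &block, NULL);
	raise(SIGUSR1);

	printf("iopoll_check() = %d (expected %d)\n", iopoll_check(1), -EINTR);
	return 0;
}

The kernel check sits at the same point in the loop, right after io_do_iopoll(), using task_sigpending() as shown in the diff above.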