The patch below does not apply to the 6.6-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.6.y
git checkout FETCH_HEAD
git cherry-pick -x 76b367a2d83163cf19173d5cb0b562acbabc8eac
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to 'stable@vger.kernel.org' --in-reply-to '2024021330-twice-pacify-2be5@gregkh' --subject-prefix 'PATCH 6.6.y' HEAD^..
Possible dependencies:
76b367a2d831 ("io_uring/net: limit inline multishot retries")
91e5d765a82f ("io_uring/net: un-indent mshot retry path in io_recv_finish()")
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 76b367a2d83163cf19173d5cb0b562acbabc8eac Mon Sep 17 00:00:00 2001
From: Jens Axboe <axboe@kernel.dk>
Date: Mon, 29 Jan 2024 12:00:58 -0700
Subject: [PATCH] io_uring/net: limit inline multishot retries
If we have multiple clients and some/all are flooding the receives to such an extent that we can retry a LOT handling multishot receives, then we can be starving some clients and hence serving traffic in an imbalanced fashion.
Limit multishot retry attempts to some arbitrary value, whose only purpose is to ensure that we don't keep serving a single connection for way too long. We default to 32 retries, which should be more than enough to provide fairness, yet not so small that we'll spend too much time requeuing rather than handling traffic.
Cc: stable@vger.kernel.org
Depends-on: 704ea888d646 ("io_uring/poll: add requeue return code from poll multishot handling")
Depends-on: 91e5d765a82f ("io_uring/net: un-indent mshot retry path in io_recv_finish()")
Depends-on: e84b01a880f6 ("io_uring/poll: move poll execution helpers higher up")
Fixes: b3fdea6ecb55 ("io_uring: multishot recv")
Fixes: 9bb66906f23e ("io_uring: support multishot in recvmsg")
Link: https://github.com/axboe/liburing/issues/1043
Signed-off-by: Jens Axboe <axboe@kernel.dk>
diff --git a/io_uring/net.c b/io_uring/net.c
index 740c6bfa5b59..a12ff69e6843 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -60,6 +60,7 @@ struct io_sr_msg {
 	unsigned	len;
 	unsigned	done_io;
 	unsigned	msg_flags;
+	unsigned	nr_multishot_loops;
 	u16		flags;
 	/* initialised and used only by !msg send variants */
 	u16		addr_len;
@@ -70,6 +71,13 @@ struct io_sr_msg {
 	struct io_kiocb	*notif;
 };
 
+/*
+ * Number of times we'll try and do receives if there's more data. If we
+ * exceed this limit, then add us to the back of the queue and retry from
+ * there. This helps fairness between flooding clients.
+ */
+#define MULTISHOT_MAX_RETRY	32
+
 static inline bool io_check_multishot(struct io_kiocb *req,
 				      unsigned int issue_flags)
 {
@@ -611,6 +619,7 @@ int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	sr->msg_flags |= MSG_CMSG_COMPAT;
 #endif
 	sr->done_io = 0;
+	sr->nr_multishot_loops = 0;
 	return 0;
 }
 
@@ -654,12 +663,20 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
 	 */
 	if (io_fill_cqe_req_aux(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
 				*ret, cflags | IORING_CQE_F_MORE)) {
+		struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
+		int mshot_retry_ret = IOU_ISSUE_SKIP_COMPLETE;
+
 		io_recv_prep_retry(req);
 		/* Known not-empty or unknown state, retry */
-		if (cflags & IORING_CQE_F_SOCK_NONEMPTY || msg->msg_inq == -1)
-			return false;
+		if (cflags & IORING_CQE_F_SOCK_NONEMPTY || msg->msg_inq == -1) {
+			if (sr->nr_multishot_loops++ < MULTISHOT_MAX_RETRY)
+				return false;
+			/* mshot retries exceeded, force a requeue */
+			sr->nr_multishot_loops = 0;
+			mshot_retry_ret = IOU_REQUEUE;
+		}
 		if (issue_flags & IO_URING_F_MULTISHOT)
-			*ret = IOU_ISSUE_SKIP_COMPLETE;
+			*ret = mshot_retry_ret;
 		else
 			*ret = -EAGAIN;
 		return true;
On 2/13/24 6:19 AM, gregkh@linuxfoundation.org wrote:
The patch below does not apply to the 6.6-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
Here's the series for 6.6-stable.
On Tue, Feb 13, 2024 at 07:52:53AM -0700, Jens Axboe wrote:
Here's the series for 6.6-stable.
-- Jens Axboe
From 582cc8795c22337041abc7ee06f9de34f1592922 Mon Sep 17 00:00:00 2001
From: Jens Axboe <axboe@kernel.dk>
Date: Mon, 29 Jan 2024 11:52:54 -0700
Subject: [PATCH 1/4] io_uring/poll: move poll execution helpers higher up
Commit e84b01a880f635e3084a361afba41f95ff500d12 upstream.
In preparation for calling __io_poll_execute() higher up, move the functions to avoid forward declarations.
No functional changes in this patch.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/poll.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index a4084acaff91..a2f21ae093dc 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -226,6 +226,30 @@ enum {
 	IOU_POLL_REISSUE = 3,
 };
 
+static void __io_poll_execute(struct io_kiocb *req, int mask)
+{
+	io_req_set_res(req, mask, 0);
+
+	/*
+	 * This is useful for poll that is armed on behalf of another
+	 * request, and where the wakeup path could be on a different
+	 * CPU. We want to avoid pulling in req->apoll->events for that
+	 * case.
+	 */
+	if (req->opcode == IORING_OP_POLL_ADD)
+		req->io_task_work.func = io_poll_task_func;
+	else
+		req->io_task_work.func = io_apoll_task_func;
+	trace_io_uring_task_add(req, mask);
+	io_req_task_work_add(req);
+}
+
+static inline void io_poll_execute(struct io_kiocb *req, int res)
+{
+	if (io_poll_get_ownership(req))
+		__io_poll_execute(req, res);
+}
+
 /*
  * All poll tw should go through this. Checks for poll events, manages
  * references, does rewait, etc.
@@ -372,30 +396,6 @@ static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
 		io_req_complete_failed(req, ret);
 }
 
-static void __io_poll_execute(struct io_kiocb *req, int mask)
-{
-	io_req_set_res(req, mask, 0);
-
-	/*
-	 * This is useful for poll that is armed on behalf of another
-	 * request, and where the wakeup path could be on a different
-	 * CPU. We want to avoid pulling in req->apoll->events for that
-	 * case.
-	 */
-	if (req->opcode == IORING_OP_POLL_ADD)
-		req->io_task_work.func = io_poll_task_func;
-	else
-		req->io_task_work.func = io_apoll_task_func;
-	trace_io_uring_task_add(req, mask);
-	io_req_task_work_add(req);
-}
-
-static inline void io_poll_execute(struct io_kiocb *req, int res)
-{
-	if (io_poll_get_ownership(req))
-		__io_poll_execute(req, res);
-}
This first patch fails to apply to the 6.6.y tree, are you sure you made it against the correct one? These functions do not look like this to me.
confused,
greg k-h
On 2/13/24 9:15 AM, Greg KH wrote:
This first patch fails to apply to the 6.6.y tree, are you sure you made it against the correct one? These functions do not look like this to me.
Sorry, my bad. I was refreshing them for 6.1-stable, and I guess I did that before I sent them out. Hence the MUA used the new copy...
Here are the ones I have in my local tree, from testing.
On Tue, Feb 13, 2024 at 09:18:43AM -0700, Jens Axboe wrote:
This first patch fails to apply to the 6.6.y tree, are you sure you made it against the correct one? These functions do not look like this to me.
Sorry, my bad. I was refreshing them for 6.1-stable, and I guess I did that before I sent them out. Hence the MUA used the new copy...
Here are the ones I have in my local tree, from testing.
Now queued up, but to confirm, 6.1.y did NOT need these, right?
thanks,
greg k-h
On 2/13/24 9:27 AM, Greg KH wrote:
Now queued up, but to confirm, 6.1.y did NOT need these, right?
Correct, it's not required. It would be nice to have, but that would mean backporting more patches, so I think it's better if we leave 6.1-stable as-is for now.