6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <bvanassche@acm.org>
[ Upstream commit 256aab46e31683d76d45ccbedc287b4d3f3e322b ]
The code "max(1U, 3 * (1U << shift) / 4)" comes from the Kyber I/O scheduler. The Kyber I/O scheduler maintains one internal queue per hwq and hence derives its async_depth from the number of hwq tags. Using this approach for the mq-deadline scheduler is wrong since the mq-deadline scheduler maintains one internal queue for all hwqs combined. Hence this revert.
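As an illustrative aside for reviewers, a minimal userspace C sketch shows how differently the two formulas scale. The shift and nr_requests values below are hypothetical, and the MAX macro merely stands in for the kernel's max(); this is not kernel code.

#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	/* Hypothetical example values, not taken from a real device. */
	unsigned int shift = 6;          /* per-hwq sbitmap word shift: 1 << 6 = 64 */
	unsigned long nr_requests = 256; /* queue-wide scheduler tag depth */

	/* Kyber-style: 3/4 of a per-hwq quantity; sensible for a scheduler
	 * that keeps one internal queue per hwq. */
	unsigned int kyber_style = MAX(1U, 3 * (1U << shift) / 4);

	/* Reverted-to mq-deadline behavior: 3/4 of the queue-wide request
	 * count, matching its single internal queue for all hwqs. */
	unsigned long deadline_style = MAX(1UL, 3 * nr_requests / 4);

	printf("kyber-style async_depth:    %u\n", kyber_style);     /* 48 */
	printf("deadline-style async_depth: %lu\n", deadline_style); /* 192 */
	return 0;
}

With these example numbers, the Kyber-style value (48) is derived from a single hwq's tag bitmap, while the queue-wide value (192) tracks q->nr_requests, which matches mq-deadline's single internal queue spanning all hwqs.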
Cc: stable@vger.kernel.org
Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
Cc: Zhiguo Niu <Zhiguo.Niu@unisoc.com>
Fixes: d47f9717e5cf ("block/mq-deadline: use correct way to throttling write requests")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20240313214218.1736147-1-bvanassche@acm.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 block/mq-deadline.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 55e26065c2e27..f10c2a0d18d41 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -622,9 +622,8 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
 	struct request_queue *q = hctx->queue;
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct blk_mq_tags *tags = hctx->sched_tags;
-	unsigned int shift = tags->bitmap_tags.sb.shift;
 
-	dd->async_depth = max(1U, 3 * (1U << shift) / 4);
+	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
 
 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
 }