6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <bvanassche@acm.org>
[ Upstream commit 6259151c04d4e0085e00d2dcb471ebdd1778e72e ]
Call .limit_depth() after data->hctx has been set such that data->hctx can be used in .limit_depth() implementations.
Cc: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Zhiguo Niu <zhiguo.niu@unisoc.com>
Fixes: 07757588e507 ("block/mq-deadline: Reserve 25% of scheduler tags for synchronous requests")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Tested-by: Zhiguo Niu <zhiguo.niu@unisoc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240509170149.7639-2-bvanassche@acm.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 block/blk-mq.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4c91889affa7c..7cc315527a44c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -447,6 +447,10 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 	if (data->cmd_flags & REQ_NOWAIT)
 		data->flags |= BLK_MQ_REQ_NOWAIT;
 
+retry:
+	data->ctx = blk_mq_get_ctx(q);
+	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
+
 	if (q->elevator) {
 		/*
 		 * All requests use scheduler tags when an I/O scheduler is
@@ -468,13 +472,9 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 			if (ops->limit_depth)
 				ops->limit_depth(data->cmd_flags, data);
 		}
-	}
-
-retry:
-	data->ctx = blk_mq_get_ctx(q);
-	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
-
-	if (!(data->rq_flags & RQF_SCHED_TAGS))
+	} else {
 		blk_mq_tag_busy(data->hctx);
+	}
 
 	if (data->flags & BLK_MQ_REQ_RESERVED)
 		data->rq_flags |= RQF_RESV;
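
Not part of the patch, just for reviewers: a minimal sketch of the kind of .limit_depth() implementation this reordering enables, assuming an elevator that wants to size its shallow depth from the per-hctx scheduler tags. The function name, the 3/4 split, and the use of sched_tags below are illustrative only, not taken from any in-tree driver:

	/* Illustrative only; lives in an elevator driver built in block/. */
	static void example_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
	{
		/* Valid only because data->hctx is now set before this hook runs. */
		struct blk_mq_tags *tags = data->hctx->sched_tags;

		/* Do not throttle synchronous reads. */
		if (op_is_sync(opf) && !op_is_write(opf))
			return;

		/*
		 * Restrict async requests and writes to a fraction of the
		 * scheduler tag space so synchronous requests are not starved.
		 */
		data->shallow_depth = max(1U, tags->bitmap_tags.sb.depth * 3 / 4);
	}

Before this change data->hctx was still NULL when .limit_depth() was called, so a dereference like the one above would not have been safe.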