On Sun, Aug 23, 2020 at 08:06:07PM -0700, Bart Van Assche wrote:
With the current implementation the following race can happen:
- blk_pre_runtime_suspend() calls blk_freeze_queue_start() and blk_mq_unfreeze_queue().
- blk_queue_enter() calls blk_queue_pm_only() and that function returns true.
- blk_queue_enter() calls blk_pm_request_resume() and that function does not call pm_request_resume() because the queue runtime status is RPM_ACTIVE.
- blk_pre_runtime_suspend() changes the queue status into RPM_SUSPENDING.
Fix this race by changing the queue runtime status into RPM_SUSPENDING before switching q_usage_counter to atomic mode.
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Stanley Chu <stanley.chu@mediatek.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: stable <stable@vger.kernel.org>
Fixes: 986d413b7c15 ("blk-mq: Enable support for runtime power management")
Signed-off-by: Can Guo <cang@codeaurora.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-pm.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/block/blk-pm.c b/block/blk-pm.c
index b85234d758f7..17bd020268d4 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -67,6 +67,10 @@ int blk_pre_runtime_suspend(struct request_queue *q)
 
 	WARN_ON_ONCE(q->rpm_status != RPM_ACTIVE);
 
+	spin_lock_irq(&q->queue_lock);
+	q->rpm_status = RPM_SUSPENDING;
+	spin_unlock_irq(&q->queue_lock);
+
 	/*
 	 * Increase the pm_only counter before checking whether any
 	 * non-PM blk_queue_enter() calls are in progress to avoid that any
@@ -89,15 +93,14 @@ int blk_pre_runtime_suspend(struct request_queue *q)
 	/* Switch q_usage_counter back to per-cpu mode. */
 	blk_mq_unfreeze_queue(q);
 
-	spin_lock_irq(&q->queue_lock);
-	if (ret < 0)
+	if (ret < 0) {
+		spin_lock_irq(&q->queue_lock);
+		q->rpm_status = RPM_ACTIVE;
 		pm_runtime_mark_last_busy(q->dev);
-	else
-		q->rpm_status = RPM_SUSPENDING;
-	spin_unlock_irq(&q->queue_lock);
+		spin_unlock_irq(&q->queue_lock);
 
-	if (ret)
 		blk_clear_pm_only(q);
+	}
 
 	return ret;
 }
Acked-by: Alan Stern stern@rowland.harvard.edu