On Thu, May 29, 2025 at 06:19:31AM +0000, Parav Pandit wrote:
When the PCI device is surprise removed, requests may not complete from the device, as the VQ is marked as broken. Due to this, disk deletion hangs.
Fix it by aborting the requests when the VQ is broken.
With this fix, fio now completes swiftly. An alternative using an I/O timeout was considered; however, when the driver knows the block device is unresponsive, clearing the requests swiftly enables users and upper layers to react quickly.
Verified with multiple device unplug iterations, with requests pending in the virtio used ring and some pending with the device.
Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
Cc: stable@vger.kernel.org
Reported-by: Li RongQing <lirongqing@baidu.com>
Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Parav Pandit <parav@nvidia.com>
v2->v3:
- Addressed comments from Michael
- updated comment for synchronizing with callbacks
v1->v2:
- Addressed comments from Stefan
- fixed spelling to 'waiting'
- Addressed comments from Michael
- Dropped checking broken vq from queue_rq() and queue_rqs() because it is checked in lower layer routines in virtio core
v0->v1:
- Fixed comments from Stefan to rename a cleanup function
- Improved logic for handling any outstanding requests in bio layer
- improved cancel callback to sync with ongoing done()
Thanks! Something else small to improve.
 drivers/block/virtio_blk.c | 82 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 7cffea01d868..d37df878f4e9 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1554,6 +1554,86 @@ static int virtblk_probe(struct virtio_device *vdev)
         return err;
 }
 
+static bool virtblk_request_cancel(struct request *rq, void *data)
it is more

virtblk_request_complete_broken_with_ioerr

and maybe a comment?

/*
 * If the vq is broken, device will not complete requests.
 * So we do it for the device.
 */
+{
+        struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
+        struct virtio_blk *vblk = data;
+        struct virtio_blk_vq *vq;
+        unsigned long flags;
+
+        vq = &vblk->vqs[rq->mq_hctx->queue_num];
+
+        spin_lock_irqsave(&vq->lock, flags);
+
+        vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
+        if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
+                blk_mq_complete_request(rq);
+
+        spin_unlock_irqrestore(&vq->lock, flags);
+        return true;
+}
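Putting the suggested rename and comment together, the callback might then read roughly as follows. This is only a sketch against the v3 code; the final name was still under discussion at this point:

/*
 * If the vq is broken, the device will not complete requests.
 * So we do it for the device, failing each started request with
 * VIRTIO_BLK_S_IOERR.
 */
static bool virtblk_request_complete_broken_with_ioerr(struct request *rq,
                                                       void *data)
{
        struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
        struct virtio_blk *vblk = data;
        struct virtio_blk_vq *vq = &vblk->vqs[rq->mq_hctx->queue_num];
        unsigned long flags;

        /* Serialize against the completion path for this vq. */
        spin_lock_irqsave(&vq->lock, flags);
        vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
        if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
                blk_mq_complete_request(rq);
        spin_unlock_irqrestore(&vq->lock, flags);

        /* Returning true keeps blk_mq_tagset_busy_iter() iterating. */
        return true;
}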
+static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
and one goes: okay, what does it do exactly? Clean up the device in a broken way? Turns out no, it cleans up a broken device. And an overview would be good. Maybe a small comment will help:
/*
 * if the device is broken, it will not use any buffers and waiting
 * for that to happen is pointless. We'll do it in the driver,
 * completing all requests for the device.
 */
+{
+        struct request_queue *q = vblk->disk->queue;
+
+        if (!virtqueue_is_broken(vblk->vqs[0].vq))
+                return;
+
so one has to read it and understand that we did not need to call it in the first place on a non-broken device. Moving the check to the caller would be cleaner.
+        /* Start freezing the queue, so that new requests keeps waiting at the

wrong style of comment for blk.

/* this is
 * net style
 */

/*
 * this is
 * rest of the linux style
 */

+         * door of bio_queue_enter(). We cannot fully freeze the queue because
+         * freezed queue is an empty queue and there are pending requests, so

a frozen queue

+         * only start freezing it.
+         */
+        blk_freeze_queue_start(q);
+
+        /* When quiescing completes, all ongoing dispatches have completed
+         * and no new dispatch will happen towards the driver.
+         * This ensures that later when cancel is attempted, then are not

they are not?

+         * getting processed by the queue_rq() or queue_rqs() handlers.
+         */
+        blk_mq_quiesce_queue(q);
+
+        /*
+         * Synchronize with any ongoing VQ callbacks that may have started
+         * before the VQs were marked as broken. Any outstanding requests
+         * will be completed by virtblk_request_cancel().
+         */
+        virtio_synchronize_cbs(vblk->vdev);
+
+        /* At this point, no new requests can enter the queue_rq() and
+         * completion routine will not complete any new requests either for the
+         * broken vq. Hence, it is safe to cancel all requests which are
+         * started.
+         */
+        blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, vblk);
+        blk_mq_tagset_wait_completed_request(&vblk->tag_set);
+
+        /* All pending requests are cleaned up. Time to resume so that disk
+         * deletion can be smooth. Start the HW queues so that when queue is
+         * unquiesced requests can again enter the driver.
+         */
+        blk_mq_start_stopped_hw_queues(q, true);
+
+        /* Unquiescing will trigger dispatching any pending requests to the
+         * driver which has crossed bio_queue_enter() to the driver.
+         */
+        blk_mq_unquiesce_queue(q);
+
+        /* Wait for all pending dispatches to terminate which may have been
+         * initiated after unquiescing.
+         */
+        blk_mq_freeze_queue_wait(q);
+
+        /* Mark the disk dead so that once queue unfreeze, the requests

... once we unfreeze the queue

+         * waiting at the door of bio_queue_enter() can be aborted right away.
+         */
+        blk_mark_disk_dead(vblk->disk);
+
+        /* Unfreeze the queue so that any waiting requests will be aborted. */
+        blk_mq_unfreeze_queue_nomemrestore(q);
+}
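For reference, with the comment-style and wording fixes requested above folded in, the helper would read roughly like this. A sketch only, against the v3 code; the broken-vq check is left to the caller, as suggested below:

static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
{
        struct request_queue *q = vblk->disk->queue;

        /*
         * Start freezing the queue, so that new requests keep waiting at
         * the door of bio_queue_enter(). We cannot fully freeze the queue,
         * because a frozen queue is an empty queue and there are pending
         * requests, so only start freezing it.
         */
        blk_freeze_queue_start(q);

        /*
         * When quiescing completes, all ongoing dispatches have completed
         * and no new dispatch will happen towards the driver, so that when
         * cancel is attempted later, requests are not being processed by
         * the queue_rq() or queue_rqs() handlers.
         */
        blk_mq_quiesce_queue(q);

        /*
         * Synchronize with any ongoing VQ callbacks that may have started
         * before the VQs were marked as broken. Any outstanding requests
         * will be completed by the cancel callback.
         */
        virtio_synchronize_cbs(vblk->vdev);

        /*
         * At this point no new requests can enter queue_rq(), and the
         * completion routine will not complete any new requests for the
         * broken vq either. Hence it is safe to cancel all started requests.
         */
        blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, vblk);
        blk_mq_tagset_wait_completed_request(&vblk->tag_set);

        /*
         * All pending requests are now cleaned up. Start the HW queues so
         * that, when the queue is unquiesced, requests can again enter the
         * driver and disk deletion can be smooth.
         */
        blk_mq_start_stopped_hw_queues(q, true);

        /*
         * Unquiescing will dispatch any pending requests that had already
         * crossed bio_queue_enter() to the driver.
         */
        blk_mq_unquiesce_queue(q);

        /* Wait for dispatches that may have started after unquiescing. */
        blk_mq_freeze_queue_wait(q);

        /*
         * Mark the disk dead so that, once we unfreeze the queue, requests
         * waiting at the door of bio_queue_enter() are aborted right away.
         */
        blk_mark_disk_dead(vblk->disk);

        /* Unfreeze the queue so that any waiting requests are aborted. */
        blk_mq_unfreeze_queue_nomemrestore(q);
}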
 static void virtblk_remove(struct virtio_device *vdev)
 {
         struct virtio_blk *vblk = vdev->priv;
@@ -1561,6 +1641,8 @@ static void virtblk_remove(struct virtio_device *vdev)
         /* Make sure no work handler is accessing the device. */
         flush_work(&vblk->config_work);
 
+        virtblk_broken_device_cleanup(vblk);
+
I prefer simply moving the test here:
if (virtqueue_is_broken(vblk->vqs[0].vq))
        virtblk_broken_device_cleanup(vblk);
makes it much clearer what is going on, imho.
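Concretely, the call site would then look something like this (a sketch; only the relevant part of virtblk_remove() is shown):

static void virtblk_remove(struct virtio_device *vdev)
{
        struct virtio_blk *vblk = vdev->priv;

        /* Make sure no work handler is accessing the device. */
        flush_work(&vblk->config_work);

        /* Only a surprise-removed (broken) device needs the special teardown. */
        if (virtqueue_is_broken(vblk->vqs[0].vq))
                virtblk_broken_device_cleanup(vblk);

        del_gendisk(vblk->disk);
        blk_mq_free_tag_set(&vblk->tag_set);
        /* ... rest of virtblk_remove() unchanged ... */
}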
         del_gendisk(vblk->disk);
         blk_mq_free_tag_set(&vblk->tag_set);

--
2.34.1
From: Michael S. Tsirkin <mst@redhat.com>
Sent: Thursday, May 29, 2025 1:34 PM
On Thu, May 29, 2025 at 06:19:31AM +0000, Parav Pandit wrote:
When the PCI device is surprise removed, requests may not complete from the device, as the VQ is marked as broken. Due to this, disk deletion hangs.
Fix it by aborting the requests when the VQ is broken.
With this fix, fio now completes swiftly. An alternative using an I/O timeout was considered; however, when the driver knows the block device is unresponsive, clearing the requests swiftly enables users and upper layers to react quickly.
Verified with multiple device unplug iterations, with requests pending in the virtio used ring and some pending with the device.
Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
Cc: stable@vger.kernel.org
Reported-by: Li RongQing <lirongqing@baidu.com>
Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Parav Pandit <parav@nvidia.com>
v2->v3:
- Addressed comments from Michael
- updated comment for synchronizing with callbacks
v1->v2:
- Addressed comments from Stefan
- fixed spelling to 'waiting'
- Addressed comments from Michael
- Dropped checking broken vq from queue_rq() and queue_rqs() because it is checked in lower layer routines in virtio core
v0->v1:
- Fixed comments from Stefan to rename a cleanup function
- Improved logic for handling any outstanding requests in bio layer
- improved cancel callback to sync with ongoing done()
Thanks! Something else small to improve.
 drivers/block/virtio_blk.c | 82 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 7cffea01d868..d37df878f4e9 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1554,6 +1554,86 @@ static int virtblk_probe(struct virtio_device *vdev)
         return err;
 }
 
+static bool virtblk_request_cancel(struct request *rq, void *data)
it is more

virtblk_request_complete_broken_with_ioerr

and maybe a comment?

/*
 * If the vq is broken, device will not complete requests.
 * So we do it for the device.
 */
Ok. will add.
+{
+        struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
+        struct virtio_blk *vblk = data;
+        struct virtio_blk_vq *vq;
+        unsigned long flags;
+
+        vq = &vblk->vqs[rq->mq_hctx->queue_num];
+
+        spin_lock_irqsave(&vq->lock, flags);
+
+        vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
+        if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
+                blk_mq_complete_request(rq);
+
+        spin_unlock_irqrestore(&vq->lock, flags);
+        return true;
+}
+static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
and one goes: okay, what does it do exactly? Clean up the device in a broken way? Turns out no, it cleans up a broken device. And an overview would be good. Maybe a small comment will help:
virtblk_cleanup_broken_device()?
Is that name ok?
/*
 * if the device is broken, it will not use any buffers and waiting
 * for that to happen is pointless. We'll do it in the driver,
 * completing all requests for the device.
 */
Will add it.
+{
+        struct request_queue *q = vblk->disk->queue;
+
+        if (!virtqueue_is_broken(vblk->vqs[0].vq))
+                return;
+
so one has to read it and understand that we did not need to call it in the first place on a non-broken device. Moving the check to the caller would be cleaner.
Ok. will move.
+        /* Start freezing the queue, so that new requests keeps waiting at the

wrong style of comment for blk.

/* this is
 * net style
 */

/*
 * this is
 * rest of the linux style
 */

Ok. will fix it.

+         * door of bio_queue_enter(). We cannot fully freeze the queue because
+         * freezed queue is an empty queue and there are pending requests, so

a frozen queue

Will fix it.

+         * only start freezing it.
+         */
+        blk_freeze_queue_start(q);
+
+        /* When quiescing completes, all ongoing dispatches have completed
+         * and no new dispatch will happen towards the driver.
+         * This ensures that later when cancel is attempted, then are not

they are not?

Will fix this too.

+         * getting processed by the queue_rq() or queue_rqs() handlers.
+         */
+        blk_mq_quiesce_queue(q);
+
+        /*
+         * Synchronize with any ongoing VQ callbacks that may have started
+         * before the VQs were marked as broken. Any outstanding requests
+         * will be completed by virtblk_request_cancel().
+         */
+        virtio_synchronize_cbs(vblk->vdev);
+
+        /* At this point, no new requests can enter the queue_rq() and
+         * completion routine will not complete any new requests either for the
+         * broken vq. Hence, it is safe to cancel all requests which are
+         * started.
+         */
+        blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, vblk);
+        blk_mq_tagset_wait_completed_request(&vblk->tag_set);
+
+        /* All pending requests are cleaned up. Time to resume so that disk
+         * deletion can be smooth. Start the HW queues so that when queue is
+         * unquiesced requests can again enter the driver.
+         */
+        blk_mq_start_stopped_hw_queues(q, true);
+
+        /* Unquiescing will trigger dispatching any pending requests to the
+         * driver which has crossed bio_queue_enter() to the driver.
+         */
+        blk_mq_unquiesce_queue(q);
+
+        /* Wait for all pending dispatches to terminate which may have been
+         * initiated after unquiescing.
+         */
+        blk_mq_freeze_queue_wait(q);
+
+        /* Mark the disk dead so that once queue unfreeze, the requests

... once we unfreeze the queue

Ok.

+         * waiting at the door of bio_queue_enter() can be aborted right away.
+         */
+        blk_mark_disk_dead(vblk->disk);
+
+        /* Unfreeze the queue so that any waiting requests will be aborted. */
+        blk_mq_unfreeze_queue_nomemrestore(q);
+}
 static void virtblk_remove(struct virtio_device *vdev)
 {
         struct virtio_blk *vblk = vdev->priv;
@@ -1561,6 +1641,8 @@ static void virtblk_remove(struct virtio_device *vdev)
         /* Make sure no work handler is accessing the device. */
         flush_work(&vblk->config_work);
 
+        virtblk_broken_device_cleanup(vblk);
+
I prefer simply moving the test here:
if (virtqueue_is_broken(vblk->vqs[0].vq))
        virtblk_broken_device_cleanup(vblk);
makes it much clearer what is going on, imho.
No strong preference; some maintainers prefer the current way, others the way you suggest. So I will fix it as you proposed here, along with the above fixes, in v4.
Thanks
         del_gendisk(vblk->disk);
         blk_mq_free_tag_set(&vblk->tag_set);

--
2.34.1
On Thu, May 29, 2025 at 09:57:51AM +0000, Parav Pandit wrote:
From: Michael S. Tsirkin <mst@redhat.com>
Sent: Thursday, May 29, 2025 1:34 PM
On Thu, May 29, 2025 at 06:19:31AM +0000, Parav Pandit wrote:
When the PCI device is surprise removed, requests may not complete from the device, as the VQ is marked as broken. Due to this, disk deletion hangs.
Fix it by aborting the requests when the VQ is broken.
With this fix, fio now completes swiftly. An alternative using an I/O timeout was considered; however, when the driver knows the block device is unresponsive, clearing the requests swiftly enables users and upper layers to react quickly.
Verified with multiple device unplug iterations, with requests pending in the virtio used ring and some pending with the device.
Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
Cc: stable@vger.kernel.org
Reported-by: Li RongQing <lirongqing@baidu.com>
Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Parav Pandit <parav@nvidia.com>
v2->v3:
- Addressed comments from Michael
- updated comment for synchronizing with callbacks
v1->v2:
- Addressed comments from Stefan
- fixed spelling to 'waiting'
- Addressed comments from Michael
- Dropped checking broken vq from queue_rq() and queue_rqs() because it is checked in lower layer routines in virtio core
v0->v1:
- Fixed comments from Stefan to rename a cleanup function
- Improved logic for handling any outstanding requests in bio layer
- improved cancel callback to sync with ongoing done()
Thanks! Something else small to improve.
 drivers/block/virtio_blk.c | 82 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 7cffea01d868..d37df878f4e9 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1554,6 +1554,86 @@ static int virtblk_probe(struct virtio_device *vdev)
         return err;
 }
 
+static bool virtblk_request_cancel(struct request *rq, void *data)
it is more

virtblk_request_complete_broken_with_ioerr

and maybe a comment?

/*
 * If the vq is broken, device will not complete requests.
 * So we do it for the device.
 */
Ok. will add.
+{
+        struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
+        struct virtio_blk *vblk = data;
+        struct virtio_blk_vq *vq;
+        unsigned long flags;
+
+        vq = &vblk->vqs[rq->mq_hctx->queue_num];
+
+        spin_lock_irqsave(&vq->lock, flags);
+
+        vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
+        if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
+                blk_mq_complete_request(rq);
+
+        spin_unlock_irqrestore(&vq->lock, flags);
+        return true;
+}
+static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
and one goes: okay, what does it do exactly? Clean up the device in a broken way? Turns out no, it cleans up a broken device. And an overview would be good. Maybe a small comment will help:
virtblk_cleanup_broken_device()?
Is that name ok?
better, I think.
/*
 * if the device is broken, it will not use any buffers and waiting
 * for that to happen is pointless. We'll do it in the driver,
 * completing all requests for the device.
 */
Will add it.
+{
+        struct request_queue *q = vblk->disk->queue;
+
+        if (!virtqueue_is_broken(vblk->vqs[0].vq))
+                return;
+
so one has to read it and understand that we did not need to call it in the first place on a non-broken device. Moving the check to the caller would be cleaner.
Ok. will move.
+        /* Start freezing the queue, so that new requests keeps waiting at the

wrong style of comment for blk.

/* this is
 * net style
 */

/*
 * this is
 * rest of the linux style
 */

Ok. will fix it.

+         * door of bio_queue_enter(). We cannot fully freeze the queue because
+         * freezed queue is an empty queue and there are pending requests, so

a frozen queue

Will fix it.

+         * only start freezing it.
+         */
+        blk_freeze_queue_start(q);
+
+        /* When quiescing completes, all ongoing dispatches have completed
+         * and no new dispatch will happen towards the driver.
+         * This ensures that later when cancel is attempted, then are not

they are not?

Will fix this too.

+         * getting processed by the queue_rq() or queue_rqs() handlers.
+         */
+        blk_mq_quiesce_queue(q);
+
+        /*
+         * Synchronize with any ongoing VQ callbacks that may have started
+         * before the VQs were marked as broken. Any outstanding requests
+         * will be completed by virtblk_request_cancel().
+         */
+        virtio_synchronize_cbs(vblk->vdev);
+
+        /* At this point, no new requests can enter the queue_rq() and
+         * completion routine will not complete any new requests either for the
+         * broken vq. Hence, it is safe to cancel all requests which are
+         * started.
+         */
+        blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, vblk);
+        blk_mq_tagset_wait_completed_request(&vblk->tag_set);
+
+        /* All pending requests are cleaned up. Time to resume so that disk
+         * deletion can be smooth. Start the HW queues so that when queue is
+         * unquiesced requests can again enter the driver.
+         */
+        blk_mq_start_stopped_hw_queues(q, true);
+
+        /* Unquiescing will trigger dispatching any pending requests to the
+         * driver which has crossed bio_queue_enter() to the driver.
+         */
+        blk_mq_unquiesce_queue(q);
+
+        /* Wait for all pending dispatches to terminate which may have been
+         * initiated after unquiescing.
+         */
+        blk_mq_freeze_queue_wait(q);
+
+        /* Mark the disk dead so that once queue unfreeze, the requests

... once we unfreeze the queue

Ok.

+         * waiting at the door of bio_queue_enter() can be aborted right away.
+         */
+        blk_mark_disk_dead(vblk->disk);
+
+        /* Unfreeze the queue so that any waiting requests will be aborted. */
+        blk_mq_unfreeze_queue_nomemrestore(q);
+}
 static void virtblk_remove(struct virtio_device *vdev)
 {
         struct virtio_blk *vblk = vdev->priv;
@@ -1561,6 +1641,8 @@ static void virtblk_remove(struct virtio_device *vdev)
         /* Make sure no work handler is accessing the device. */
         flush_work(&vblk->config_work);
 
+        virtblk_broken_device_cleanup(vblk);
+
I prefer simply moving the test here:
if (virtqueue_is_broken(vblk->vqs[0].vq))
        virtblk_broken_device_cleanup(vblk);
makes it much clearer what is going on, imho.
No strong preference; some maintainers prefer the current way, others the way you suggest. So I will fix it as you proposed here, along with the above fixes, in v4.
Thanks
         del_gendisk(vblk->disk);
         blk_mq_free_tag_set(&vblk->tag_set);

--
2.34.1