On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
When the PCI device is surprise removed, requests may not be completed by the device, as the VQ is marked broken. Due to this, disk deletion hangs.
There are loops in the core virtio driver code that expect device register reads to eventually return 0:
drivers/virtio/virtio_pci_modern.c:vp_reset()
drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
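[Both helpers poll a device register until it reads back zero. A minimal paraphrase of the pattern being referenced (illustrative, not the exact upstream source):

	/* Paraphrased from vp_reset(); not the exact upstream code. */
	vp_modern_set_status(mdev, 0);
	/*
	 * After a surprise removal, PCI config/MMIO reads return all-ones,
	 * so this status read never returns 0 and the loop spins forever.
	 */
	while (vp_modern_get_status(mdev))
		msleep(1);
]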
Is there a hang if these loops are hit when a device has been surprise removed? I'm trying to understand whether surprise removal is fully supported or whether this patch is one step in that direction.
Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Fix it by aborting the requests when the VQ is broken.
With this fix, fio now completes swiftly. An alternative based on an I/O timeout was considered; however, when the driver already knows the block device is unresponsive, clearing the requests immediately lets users and upper layers react quickly.
Verified across multiple device unplug iterations, with some requests pending in the virtio used ring and some pending with the device.
Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
Cc: stable@vger.kernel.org
Reported-by: Li RongQing <lirongqing@baidu.com>
Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Parav Pandit <parav@nvidia.com>
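[For context, the commit named in the Fixes tag makes the virtio-pci remove path mark all VQs broken when the device has already vanished from the bus. A paraphrased sketch of that check (illustrative, not the exact upstream diff):

	/*
	 * Paraphrased from the virtio-pci remove path as changed by
	 * commit 43bb40c5b926; illustrative only.
	 */
	if (!pci_device_is_present(pci_dev))
		virtio_break_device(&vp_dev->vdev);
]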
v4->v5:
- fixed comment style so comments start with an empty first line
- Addressed comments from Alok
- fixed typo in broken vq check
v3->v4:
- Addressed comments from Michael
- renamed virtblk_request_cancel() to virtblk_complete_request_with_ioerr()
- Added comments for virtblk_complete_request_with_ioerr()
- Renamed virtblk_broken_device_cleanup() to virtblk_cleanup_broken_device()
- Added comments for virtblk_cleanup_broken_device()
- Moved the broken vq check into virtblk_remove()
- Fixed comment style to have an empty first line
- replaced 'freezed' with 'frozen'
- rephrased comments
v2->v3:
- Addressed comments from Michael
- updated comment for synchronizing with callbacks
v1->v2:
- Addressed comments from Stefan
- fixed spelling to 'waiting'
- Addressed comments from Michael
- Dropped checking broken vq from queue_rq() and queue_rqs() because it is checked in lower-layer routines in the virtio core (see the sketch after this list)
v0->v1:
- Addressed Stefan's comment to rename a cleanup function
- Improved logic for handling any outstanding requests in the bio layer
- improved cancel callback to sync with ongoing done()
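[The queue_rq()/queue_rqs() checks could be dropped because the virtio core already refuses to add buffers to a broken VQ. A paraphrased sketch of that lower-layer check (illustrative, not the exact upstream code from drivers/virtio/virtio_ring.c):

	/*
	 * Inside virtqueue_add(): a broken VQ rejects new buffers, so
	 * queue_rq() sees -EIO without needing its own broken check.
	 */
	if (unlikely(vq->broken))
		return -EIO;
]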
 drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 7cffea01d868..c5e383c0ac48 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1554,6 +1554,98 @@ static int virtblk_probe(struct virtio_device *vdev)
 	return err;
 }
 
+/*
+ * If the vq is broken, device will not complete requests.
+ * So we do it for the device.
+ */
+static bool virtblk_complete_request_with_ioerr(struct request *rq, void *data)
+{
+	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
+	struct virtio_blk *vblk = data;
+	struct virtio_blk_vq *vq;
+	unsigned long flags;
+
+	vq = &vblk->vqs[rq->mq_hctx->queue_num];
+
+	spin_lock_irqsave(&vq->lock, flags);
+
+	vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
+	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
+		blk_mq_complete_request(rq);
+
+	spin_unlock_irqrestore(&vq->lock, flags);
+	return true;
+}
+
+/*
+ * If the device is broken, it will not use any buffers and waiting
+ * for that to happen is pointless. We'll do the cleanup in the driver,
+ * completing all requests for the device.
+ */
+static void virtblk_cleanup_broken_device(struct virtio_blk *vblk)
+{
+	struct request_queue *q = vblk->disk->queue;
+
+	/*
+	 * Start freezing the queue, so that new requests keeps waiting at the
+	 * door of bio_queue_enter(). We cannot fully freeze the queue because
+	 * frozen queue is an empty queue and there are pending requests, so
+	 * only start freezing it.
+	 */
+	blk_freeze_queue_start(q);
+
+	/*
+	 * When quiescing completes, all ongoing dispatches have completed
+	 * and no new dispatch will happen towards the driver.
+	 */
+	blk_mq_quiesce_queue(q);
+
+	/*
+	 * Synchronize with any ongoing VQ callbacks that may have started
+	 * before the VQs were marked as broken. Any outstanding requests
+	 * will be completed by virtblk_complete_request_with_ioerr().
+	 */
+	virtio_synchronize_cbs(vblk->vdev);
+
+	/*
+	 * At this point, no new requests can enter the queue_rq() and
+	 * completion routine will not complete any new requests either for the
+	 * broken vq. Hence, it is safe to cancel all requests which are
+	 * started.
+	 */
+	blk_mq_tagset_busy_iter(&vblk->tag_set,
+				virtblk_complete_request_with_ioerr, vblk);
+	blk_mq_tagset_wait_completed_request(&vblk->tag_set);
+
+	/*
+	 * All pending requests are cleaned up. Time to resume so that disk
+	 * deletion can be smooth. Start the HW queues so that when queue is
+	 * unquiesced requests can again enter the driver.
+	 */
+	blk_mq_start_stopped_hw_queues(q, true);
+
+	/*
+	 * Unquiescing will trigger dispatching any pending requests to the
+	 * driver which has crossed bio_queue_enter() to the driver.
+	 */
+	blk_mq_unquiesce_queue(q);
+
+	/*
+	 * Wait for all pending dispatches to terminate which may have been
+	 * initiated after unquiescing.
+	 */
+	blk_mq_freeze_queue_wait(q);
+
+	/*
+	 * Mark the disk dead so that once we unfreeze the queue, requests
+	 * waiting at the door of bio_queue_enter() can be aborted right away.
+	 */
+	blk_mark_disk_dead(vblk->disk);
+
+	/* Unfreeze the queue so that any waiting requests will be aborted. */
+	blk_mq_unfreeze_queue_nomemrestore(q);
+}
+
 static void virtblk_remove(struct virtio_device *vdev)
 {
 	struct virtio_blk *vblk = vdev->priv;
@@ -1561,6 +1653,9 @@ static void virtblk_remove(struct virtio_device *vdev)
 	/* Make sure no work handler is accessing the device. */
 	flush_work(&vblk->config_work);
 
+	if (virtqueue_is_broken(vblk->vqs[0].vq))
+		virtblk_cleanup_broken_device(vblk);
+
 	del_gendisk(vblk->disk);
 	blk_mq_free_tag_set(&vblk->tag_set);
 
--
2.34.1
From: Stefan Hajnoczi <stefanha@redhat.com>
Sent: 25 June 2025 12:26 AM
Is there a hang if these loops are hit when a device has been surprise removed? I'm trying to understand whether surprise removal is fully supported or whether this patch is one step in that direction.
In one of the previous replies I answered Michael, but I don't have the link handy. It is not fully supported by this patch; it will hang.
This patch restores the driver to the same state it was in before the patch in the Fixes tag. Virtio-stack-level work is needed to support surprise removal, including the reset flow you rightly pointed out.
Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Thanks.
On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
In one of the previous replies I answered Michael, but I don't have the link handy. It is not fully supported by this patch; it will hang.
This patch restores the driver to the same state it was in before the patch in the Fixes tag. Virtio-stack-level work is needed to support surprise removal, including the reset flow you rightly pointed out.
Have plans to do that?
From: Michael S. Tsirkin <mst@redhat.com>
Sent: 25 June 2025 12:37 AM
Have plans to do that?
I haven't given it enough thought yet.
On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
I haven't given it enough thought yet.
This one is kind of pointless then? It just fixes the specific race window that your test harness happens to hit?
Maybe it's better to wait until someone does a comprehensive fix.
From: Michael S. Tsirkin <mst@redhat.com>
Sent: 25 June 2025 01:24 AM
This one is kind of pointless then? It just fixes the specific race window that your test harness happens to hit?
It was reported by Li from Baidu, whose tests failed. I missed adding the Reported-by tag in v5; I had it until v4. :(
Maybe it's better to wait until someone does a comprehensive fix.
Oh, I was under the impression that you wanted to move the discussion of v4 forward. If you prefer comprehensive support across the virtio layers, I suggest reverting the patch cited in the Fixes tag.
Otherwise the driver stays in a degraded state, as virtio never supported surprise removal. By reverting the cited patch (or with this fix), requests and disk deletion will not hang.
Please let me know if I should re-send to revert the patch listed in the Fixes tag.
On Wed, Jun 25, 2025 at 02:55:27AM +0000, Parav Pandit wrote:
Oh, I was under the impression that you wanted to move the discussion of v4 forward. If you prefer comprehensive support across the virtio layers, I suggest reverting the patch cited in the Fixes tag.
Otherwise the driver stays in a degraded state, as virtio never supported surprise removal. By reverting the cited patch (or with this fix), requests and disk deletion will not hang.
But they will hang in virtio core on reset, will they not? The tests just do not happen to trigger this?
From: Michael S. Tsirkin <mst@redhat.com>
Sent: 25 June 2025 04:34 PM
But they will hang in virtio core on reset, will they not? The tests just do not happen to trigger this?
It will hang if it is a true surprise removal, which no device has done so far because it never worked (or it did, but always hung, and no one has reported it yet).
I am familiar with two or more PCI devices that report surprise removal, do not complete the requests, and yet still allow the device reset flow. This is because the device is still present on the PCI bus; its removal was reported only via side-band signals.
But I agree that for full support, changes across all virtio layers would be needed, as new functionality (without a Fixes tag :) ).
On Wed, Jun 25, 2025 at 11:08:42AM +0000, Parav Pandit wrote:
From: Michael S. Tsirkin mst@redhat.com Sent: 25 June 2025 04:34 PM
On Wed, Jun 25, 2025 at 02:55:27AM +0000, Parav Pandit wrote:
From: Michael S. Tsirkin mst@redhat.com Sent: 25 June 2025 01:24 AM
On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
From: Michael S. Tsirkin mst@redhat.com Sent: 25 June 2025 12:37 AM
On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:

[snip: Stefan's question about the polling loops in vp_reset() and vp_modern_set_queue_reset(), quoted in full earlier in the thread]

In one of the previous replies I answered this to Michael, but I don't have the link handy.
It is not fully supported by this patch. It will hang.

This patch restores the driver to the same state it was in before the patch in the Fixes tag.
Virtio-stack-level work is needed to support surprise removal, including the reset flow you rightly pointed out.
Have plans to do that?
I haven't given it enough thought yet.
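[Editorial note: for reference, the hang discussed above is the status polling in vp_reset(). The fragment below is paraphrased from drivers/virtio/virtio_pci_modern.c; the comments are mine, added to explain the hang, and are not quoted from the source.]

	/* 0 status tells the device to reset. */
	vp_modern_set_status(mdev, 0);

	/*
	 * The driver must wait until a read of the status register
	 * returns 0 before reinitializing the device. A surprise-removed
	 * device never answers with 0, so this loop spins forever.
	 */
	while (vp_modern_get_status(mdev))
		msleep(1);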
This one is kind of pointless then? It just fixes the specific race window that your test harness happens to hit?
It was reported by Li from Baidu, whose tests failed. I missed the "Reported-by" tag in v5; I had it until v4. :(
Maybe it's better to wait until someone does a comprehensive fix...
Oh, I was under the impression that you wanted to move the discussion of v4 forward.
If you prefer comprehensive support across the virtio layers, I suggest reverting the patch cited in the Fixes tag.
Otherwise, it is in a degraded state, as virtio never supported surprise removal. By reverting the cited patch (or with this fix), requests and disk deletion will not hang.
But they will hang in the virtio core on reset, will they not? The tests just do not happen to trigger this?
It will hang on a true surprise removal, which no device has done so far because it never worked. (Or one did, but it always hung and no one has reported it yet.)
I am familiar with two or more PCI devices that report surprise removal, which do not complete the requests but still allow the device reset flow. This is because the device is still present on the PCI bus; its removal was reported only via side-band signals.
So why do we care about it so much? I think it's great this patch exists, for example it makes it easier to test surprise removal and find more bugs. But is it better to just have it hang unconditionally? Are we now making a commitment that it's working - one we don't seem to intend to implement?
But I agree that for full support, changes across all virtio layers would be needed, as new functionality (without a Fixes tag :)).
Or with a fixes tag - lots of people just use it as a signal to mean "where can this be reasonably backported to".
Please let me know if I should re-send to revert the patch listed in the Fixes tag.
Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
Reviewed-by: Stefan Hajnoczi stefanha@redhat.com

Thanks.

[snip: quoted patch]
From: Michael S. Tsirkin mst@redhat.com Sent: 25 June 2025 05:15 PM
[snip: quoted discussion, repeated above]
So why do we care about it so much? I think it's great this patch exists, for example it makes it easier to test surprise removal and find more bugs. But is it better to just have it hang unconditionally? Are we now making a commitment that it's working - one we don't seem to intend to implement?
The patch improves the situation from its current state. But as you posted, more changes are needed in the PCI layer. I didn't audit where else they may be needed.
vp_reset() may need to return the reset status (success/failure). Otherwise, during probe(), vp_reset() aborts the reset and attempts to load the driver for a removed device.
I guess the suspend() callback, which waits indefinitely while freezing the queue, also needs adaptation.
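[Editorial note: to make the suspend() concern concrete, here is a minimal sketch of the virtio-blk freeze path, simplified from the in-tree virtblk_freeze(); the exact in-tree body differs, the point is the unbounded wait.]

	static int virtblk_freeze(struct virtio_device *vdev)
	{
		struct virtio_blk *vblk = vdev->priv;

		/*
		 * Wait for all in-flight requests to complete. A surprise
		 * removed device never completes anything on its broken VQs,
		 * so this wait never terminates.
		 */
		blk_mq_freeze_queue(vblk->disk->queue);

		/* Resetting the device also polls status; see vp_reset(). */
		virtio_reset_device(vdev);

		vdev->config->del_vqs(vdev);
		vblk->vqs = NULL;
		return 0;
	}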
But I agree that for full support, changes across all virtio layers would be needed, as new functionality (without a Fixes tag :)).
Or with a fixes tag - lots of people just use it as a signal to mean "where can this be reasonably backported to".
Yes, I think the fix for the older kernels is needed, hence I Cc'ed stable too.
[snip: remainder of quoted discussion and patch]
On Wed, Jun 25, 2025 at 07:08:54PM +0000, Parav Pandit wrote:
From: Michael S. Tsirkin mst@redhat.com Sent: 25 June 2025 05:15 PM
On Wed, Jun 25, 2025 at 11:08:42AM +0000, Parav Pandit wrote:
From: Michael S. Tsirkin mst@redhat.com Sent: 25 June 2025 04:34 PM
On Wed, Jun 25, 2025 at 02:55:27AM +0000, Parav Pandit wrote:
From: Michael S. Tsirkin mst@redhat.com Sent: 25 June 2025 01:24 AM
[snip: quoted discussion, repeated above]
The patch improves the situation from its current state. But as you posted, more changes are needed in the PCI layer. I didn't audit where else they may be needed.
vp_reset() may need to return the reset status (success/failure). Otherwise, during probe(), vp_reset() aborts the reset and attempts to load the driver for a removed device.
Yes, however this is not at all different from a hot-unplug right after reset.
I guess the suspend() callback, which waits indefinitely while freezing the queue, also needs adaptation.
Which callback is that? I don't understand.
[snip: remainder of quoted discussion and patch]
On Tue, Jun 24, 2025 at 02:56:22PM -0400, Stefan Hajnoczi wrote:
On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
When the PCI device is surprise removed, requests may not complete the device as the VQ is marked as broken. Due to this, the disk deletion hangs.
There are loops in the core virtio driver code that expect device register reads to eventually return 0: drivers/virtio/virtio_pci_modern.c:vp_reset() drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
Is there a hang if these loops are hit when a device has been surprise removed?
Probably, yes. We can't really check broken status there though, I think - reset is normally called with a device whose VQs are already marked broken.
I'm trying to understand whether surprise removal is fully supported or whether this patch is one step in that direction.
I think it's a step.
Apart from that, I'm happy with the virtio_blk.c aspects of the patch: Reviewed-by: Stefan Hajnoczi stefanha@redhat.com
[snip: quoted commit message and changelog]
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 7cffea01d868..c5e383c0ac48 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1554,6 +1554,98 @@ static int virtblk_probe(struct virtio_device *vdev)
 	return err;
 }
 
+/*
+ * If the vq is broken, device will not complete requests.
+ * So we do it for the device.
+ */
+static bool virtblk_complete_request_with_ioerr(struct request *rq, void *data)
+{
+	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
+	struct virtio_blk *vblk = data;
+	struct virtio_blk_vq *vq;
+	unsigned long flags;
+
+	vq = &vblk->vqs[rq->mq_hctx->queue_num];
+
+	spin_lock_irqsave(&vq->lock, flags);
+
+	vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
+	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
+		blk_mq_complete_request(rq);
+
+	spin_unlock_irqrestore(&vq->lock, flags);
+	return true;
+}
+
+/*
+ * If the device is broken, it will not use any buffers and waiting
+ * for that to happen is pointless. We'll do the cleanup in the driver,
+ * completing all requests for the device.
+ */
+static void virtblk_cleanup_broken_device(struct virtio_blk *vblk)
+{
+	struct request_queue *q = vblk->disk->queue;
+
+	/*
+	 * Start freezing the queue, so that new requests keeps waiting at the
+	 * door of bio_queue_enter(). We cannot fully freeze the queue because
+	 * frozen queue is an empty queue and there are pending requests, so
+	 * only start freezing it.
+	 */
+	blk_freeze_queue_start(q);
+
+	/*
+	 * When quiescing completes, all ongoing dispatches have completed
+	 * and no new dispatch will happen towards the driver.
+	 */
+	blk_mq_quiesce_queue(q);
+
+	/*
+	 * Synchronize with any ongoing VQ callbacks that may have started
+	 * before the VQs were marked as broken. Any outstanding requests
+	 * will be completed by virtblk_complete_request_with_ioerr().
+	 */
+	virtio_synchronize_cbs(vblk->vdev);
+
+	/*
+	 * At this point, no new requests can enter the queue_rq() and
+	 * completion routine will not complete any new requests either for the
+	 * broken vq. Hence, it is safe to cancel all requests which are
+	 * started.
+	 */
+	blk_mq_tagset_busy_iter(&vblk->tag_set,
+				virtblk_complete_request_with_ioerr, vblk);
+	blk_mq_tagset_wait_completed_request(&vblk->tag_set);
+
+	/*
+	 * All pending requests are cleaned up. Time to resume so that disk
+	 * deletion can be smooth. Start the HW queues so that when queue is
+	 * unquiesced requests can again enter the driver.
+	 */
+	blk_mq_start_stopped_hw_queues(q, true);
+
+	/*
+	 * Unquiescing will trigger dispatching any pending requests to the
+	 * driver which has crossed bio_queue_enter() to the driver.
+	 */
+	blk_mq_unquiesce_queue(q);
+
+	/*
+	 * Wait for all pending dispatches to terminate which may have been
+	 * initiated after unquiescing.
+	 */
+	blk_mq_freeze_queue_wait(q);
+
+	/*
+	 * Mark the disk dead so that once we unfreeze the queue, requests
+	 * waiting at the door of bio_queue_enter() can be aborted right away.
+	 */
+	blk_mark_disk_dead(vblk->disk);
+
+	/* Unfreeze the queue so that any waiting requests will be aborted. */
+	blk_mq_unfreeze_queue_nomemrestore(q);
+}
+
 static void virtblk_remove(struct virtio_device *vdev)
 {
 	struct virtio_blk *vblk = vdev->priv;
@@ -1561,6 +1653,9 @@ static void virtblk_remove(struct virtio_device *vdev)
 	/* Make sure no work handler is accessing the device. */
 	flush_work(&vblk->config_work);
 
+	if (virtqueue_is_broken(vblk->vqs[0].vq))
+		virtblk_cleanup_broken_device(vblk);
+
 	del_gendisk(vblk->disk);
 	blk_mq_free_tag_set(&vblk->tag_set);
 
-- 
2.34.1
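[Editorial note: on the virtqueue_is_broken(vblk->vqs[0].vq) check at the end of the patch - testing only the first VQ is sufficient because on surprise removal the virtio PCI layer breaks every VQ of the device in one pass. Upstream's virtio_break_device() in drivers/virtio/virtio_ring.c looks roughly like this; quoted from memory, so treat it as a sketch.]

	void virtio_break_device(struct virtio_device *dev)
	{
		struct virtqueue *_vq;

		spin_lock(&dev->vqs_list_lock);
		list_for_each_entry(_vq, &dev->vqs, list) {
			struct vring_virtqueue *vq = to_vvq(_vq);

			/* Pairs with READ_ONCE() in virtqueue_is_broken(). */
			WRITE_ONCE(vq->broken, true);
		}
		spin_unlock(&dev->vqs_list_lock);
	}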
On Tue, Jun 24, 2025 at 02:56:22PM -0400, Stefan Hajnoczi wrote:
On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
When the PCI device is surprise removed, requests may not complete the device as the VQ is marked as broken. Due to this, the disk deletion hangs.
There are loops in the core virtio driver code that expect device register reads to eventually return 0: drivers/virtio/virtio_pci_modern.c:vp_reset() drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
Is there a hang if these loops are hit when a device has been surprise removed? I'm trying to understand whether surprise removal is fully supported or whether this patch is one step in that direction.
Apart from that, I'm happy with the virtio_blk.c aspects of the patch: Reviewed-by: Stefan Hajnoczi stefanha@redhat.com
Is this as simple as this?
-->
Signed-off-by: Michael S. Tsirkin mst@redhat.com
---
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index 7182f43ed055..df983fa9046a 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -555,8 +555,12 @@ static void vp_reset(struct virtio_device *vdev)
 	 * This will flush out the status write, and flush in device writes,
 	 * including MSI-X interrupts, if any.
 	 */
-	while (vp_modern_get_status(mdev))
+	while (vp_modern_get_status(mdev)) {
+		/* If device is removed meanwhile, it will never respond. */
+		if (!pci_device_is_present(vp_dev->pci_dev))
+			break;
 		msleep(1);
+	}
vp_modern_avq_cleanup(vdev);
diff --git a/drivers/virtio/virtio_pci_modern_dev.c b/drivers/virtio/virtio_pci_modern_dev.c
index 0d3dbfaf4b23..7177ce0d63be 100644
--- a/drivers/virtio/virtio_pci_modern_dev.c
+++ b/drivers/virtio/virtio_pci_modern_dev.c
@@ -523,11 +523,19 @@ void vp_modern_set_queue_reset(struct virtio_pci_modern_device *mdev, u16 index)
 	vp_iowrite16(index, &cfg->cfg.queue_select);
 	vp_iowrite16(1, &cfg->queue_reset);
 
-	while (vp_ioread16(&cfg->queue_reset))
+	while (vp_ioread16(&cfg->queue_reset)) {
+		/* If device is removed meanwhile, it will never respond. */
+		if (!pci_device_is_present(mdev->pci_dev))
+			break;
 		msleep(1);
+	}
 
-	while (vp_ioread16(&cfg->cfg.queue_enable))
+	while (vp_ioread16(&cfg->cfg.queue_enable)) {
+		/* If device is removed meanwhile, it will never respond. */
+		if (!pci_device_is_present(mdev->pci_dev))
+			break;
 		msleep(1);
+	}
 }
 EXPORT_SYMBOL_GPL(vp_modern_set_queue_reset);
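[Editorial note: for context on the presence check used in both hunks - pci_device_is_present() re-reads the device's vendor ID from config space and returns false once the device has dropped off the bus, which is what lets it bound polling loops like these. The underlying assumption is that config reads to a removed device fail fast (returning all-ones) rather than hanging, making this a usable escape hatch after surprise removal. Note also that vp_modern_set_queue_reset() has no vp_dev in scope, so the check there is written against mdev->pci_dev.]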
From: Michael S. Tsirkin mst@redhat.com Sent: 25 June 2025 06:09 PM
[snip: quoted discussion, repeated above]
Is this as simple as this?
I didn't audit the code for where else changes are needed. I had a similar change a while ago, but I also recall hitting an assert somewhere in the PCI layer during the vp_reset() flow.
I don't have lab access today. I will check back tomorrow and update.
[snip: quoted diff, shown above]