On Mon, May 26, 2025 at 02:40:33PM +0000, Parav Pandit wrote:
When the PCI device is surprise-removed, requests may not complete on the device as the VQ is marked as broken. As a result, disk deletion hangs.
Fix it by aborting the requests when the VQ is broken.
With this fix, fio now completes swiftly. An alternative based on I/O timeouts was considered; however, when the driver already knows the block device is unresponsive, clearing the pending requests right away lets users and upper layers react quickly.
Verified with multiple device unplug iterations with pending requests in virtio used ring and some pending with the device.
Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
Cc: stable@vger.kernel.org
Reported-by: Li RongQing <lirongqing@baidu.com>
Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Parav Pandit <parav@nvidia.com>
Thanks! I like the patch. Yes something to improve:
v1->v2:
- Addressed comments from Stefan
- fixed spelling to 'waiting'
- Addressed comments from Michael
- Dropped checking broken vq from queue_rq() and queue_rqs() because it is checked in lower layer routines in virtio core
v0->v1:
- Fixed comments from Stefan to rename a cleanup function
- Improved logic for handling any outstanding requests in bio layer
- improved cancel callback to sync with ongoing done()
 drivers/block/virtio_blk.c | 83 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 7cffea01d868..12f31e6c00cb 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1554,6 +1554,87 @@ static int virtblk_probe(struct virtio_device *vdev)
 	return err;
 }
 
+static bool virtblk_request_cancel(struct request *rq, void *data)
+{
+	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
+	struct virtio_blk *vblk = data;
+	struct virtio_blk_vq *vq;
+	unsigned long flags;
+
+	vq = &vblk->vqs[rq->mq_hctx->queue_num];
+	spin_lock_irqsave(&vq->lock, flags);
+
+	vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
My understanding is that this is only safe to call when the device is not accessing the vq anymore, right? Otherwise the device can overwrite the status?
But I am not sure I understand what guarantees this. Is there an assumption here that if vq is broken and we are on remove path then device is already gone? It seems to hold but I'd prefer something that makes this guarantee at the API level.
+	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
+		blk_mq_complete_request(rq);
+
+	spin_unlock_irqrestore(&vq->lock, flags);
+	return true;
+}
+
+static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
+{
+	struct request_queue *q = vblk->disk->queue;
+
+	if (!virtqueue_is_broken(vblk->vqs[0].vq))
+		return;
Can marking vqs broken be in progress such that vq 0 is already broken but another one is not yet? If not, please add a comment explaining why.
+	/* Start freezing the queue, so that new requests keep waiting at the
+	 * door of bio_queue_enter(). We cannot fully freeze the queue because
+	 * a frozen queue is an empty queue and there are pending requests, so
+	 * only start freezing it.
+	 */
+	blk_freeze_queue_start(q);
+
+	/* When quiescing completes, all ongoing dispatches have completed
+	 * and no new dispatch will happen towards the driver.
+	 * This ensures that later, when cancel is attempted, they are not
+	 * getting processed by the queue_rq() or queue_rqs() handlers.
+	 */
+	blk_mq_quiesce_queue(q);
+
+	/*
+	 * Synchronize with any ongoing VQ callbacks, effectively quiescing
+	 * the device and preventing it from completing further requests
+	 * to the block layer. Any outstanding, incomplete requests will be
+	 * completed by virtblk_request_cancel().
I think what you really mean is: finish processing in callbacks that might have started before the vqs were marked as broken.
+	 */
+	virtio_synchronize_cbs(vblk->vdev);
+
+	/* At this point, no new requests can enter the queue_rq() and
+	 * completion routine will not complete any new requests either for the
+	 * broken vq. Hence, it is safe to cancel all requests which are
+	 * started.
+	 */
+	blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, vblk);
+	blk_mq_tagset_wait_completed_request(&vblk->tag_set);
+
+	/* All pending requests are cleaned up. Time to resume so that disk
+	 * deletion can be smooth. Start the HW queues so that when the queue is
+	 * unquiesced requests can again enter the driver.
+	 */
+	blk_mq_start_stopped_hw_queues(q, true);
+
+	/* Unquiescing will trigger dispatching any pending requests to the
+	 * driver which have crossed bio_queue_enter() to the driver.
+	 */
+	blk_mq_unquiesce_queue(q);
+
+	/* Wait for all pending dispatches to terminate which may have been
+	 * initiated after unquiescing.
+	 */
+	blk_mq_freeze_queue_wait(q);
+
+	/* Mark the disk dead so that once the queue is unfrozen, the requests
+	 * waiting at the door of bio_queue_enter() can be aborted right away.
+	 */
+	blk_mark_disk_dead(vblk->disk);
+
+	/* Unfreeze the queue so that any waiting requests will be aborted. */
+	blk_mq_unfreeze_queue_nomemrestore(q);
+}
+
 static void virtblk_remove(struct virtio_device *vdev)
 {
 	struct virtio_blk *vblk = vdev->priv;
@@ -1561,6 +1642,8 @@ static void virtblk_remove(struct virtio_device *vdev)
 	/* Make sure no work handler is accessing the device. */
 	flush_work(&vblk->config_work);
 
+	virtblk_broken_device_cleanup(vblk);
+
 	del_gendisk(vblk->disk);
 	blk_mq_free_tag_set(&vblk->tag_set);
--
2.34.1
From: Michael S. Tsirkin <mst@redhat.com>
Sent: Tuesday, May 27, 2025 1:45 AM
On Mon, May 26, 2025 at 02:40:33PM +0000, Parav Pandit wrote:
[..]

+	vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
My understanding is that this is only safe to call when the device is not accessing the vq anymore, right? Otherwise the device can overwrite the status?
Even if the device is still accessing the VQ (such as the descriptors) and its memory, freeing and unmapping the request is a problem. So the contract is: once the device has reported that the transport is broken, it must not touch the VQ.

The in_hdr.status is updated either here or in the done() handler, and both paths are protected by the spin lock, so they don't step on each other.
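As a rough illustration of that contract (a simplified sketch, not the exact driver code; virtblk_done_one() is a made-up name standing in for the done() path), both sides serialize on the same per-vq lock before touching the status and completing the request:

/* Illustrative sketch only: the IRQ-driven done() path and the cancel path
 * take the same per-vq lock, so exactly one of them observes the request as
 * started-but-not-completed and finishes it; the other sees it completed.
 */
static void virtblk_done_one(struct virtio_blk_vq *vq, struct request *rq)
{
        unsigned long flags;

        spin_lock_irqsave(&vq->lock, flags);    /* same lock as the cancel path */
        /* status was written by the device before the transport broke */
        if (!blk_mq_request_completed(rq))
                blk_mq_complete_request(rq);
        spin_unlock_irqrestore(&vq->lock, flags);
}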
But I am not sure I understand what guarantees this. Is there an assumption here that if vq is broken and we are on remove path then device is already gone?
True, the assumption is that the device must not touch the VQ, or the memory pointed to by the VQ, once it has reported that the device is broken.
It seems to hold but I'd prefer something that makes this guarantee at the API level.
Not sure how we can guarantee that from the device side. It is an assumption the driver makes, following the PCI spec.
[..]

+static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
+{
+	struct request_queue *q = vblk->disk->queue;
+
+	if (!virtqueue_is_broken(vblk->vqs[0].vq))
+		return;
Can marking vqs broken be in progress such that vq 0 is already broken but another one is not yet? If not, please add a comment explaining why.
[..]

+	/*
+	 * Synchronize with any ongoing VQ callbacks, effectively quiescing
+	 * the device and preventing it from completing further requests
+	 * to the block layer. Any outstanding, incomplete requests will be
+	 * completed by virtblk_request_cancel().
+	 */
+	virtio_synchronize_cbs(vblk->vdev);
I think what you really mean is: finish processing in callbacks that might have started before the vqs were marked as broken.
Yes, I will drop the words "effectively quiescing the device", because that is not done by pure SW code here; it is quiescing the async interrupt handler side. How about the rephrase below:

	/*
	 * Synchronize with any ongoing VQ callbacks which may have started
	 * before the VQs were marked broken, preventing the device from
	 * completing further requests to the block layer. Any outstanding,
	 * incomplete requests will be completed by virtblk_request_cancel().
	 */
[..]
From: Michael S. Tsirkin <mst@redhat.com>
Sent: Tuesday, May 27, 2025 1:45 AM
On Mon, May 26, 2025 at 02:40:33PM +0000, Parav Pandit wrote:
[..]
+static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
+{
+	struct request_queue *q = vblk->disk->queue;
+
+	if (!virtqueue_is_broken(vblk->vqs[0].vq))
+		return;
Can marking vqs broken be in progress such that vq 0 is already broken but another one is not yet? If not, please add a comment explaining why.
I missed answering this in the previous email.

It should not happen in the surprise removal case, because surprise removal and remove() are serialized operations, as listed below:

virtio_pci_remove()
    virtio_break_device()        <- marks all N VQs as broken
    unregister_virtio_device()
        virtblk_remove()

If one attempts to unbind the PCI device, and after remove() has crossed the broken check the PCI device is then surprise-removed (before the existing virtblk_remove() finishes), then yes, the same hang can occur. Such a case requires additional timeout handling, which should be handled in a separate patch as new functionality. Since timeout handling would be additional, it would not replace the existing, known surprise removal case anyway. I anticipate that it will likely reuse pieces from the proposed cleanup() function.
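For reference, the breaking itself is done for every VQ of the device in one pass; a simplified sketch of virtio_break_device(), based on my reading of drivers/virtio/virtio_ring.c and trimmed for illustration, looks roughly like this:

void virtio_break_device(struct virtio_device *dev)
{
        struct virtqueue *_vq;

        spin_lock(&dev->vqs_list_lock);
        list_for_each_entry(_vq, &dev->vqs, list) {
                struct vring_virtqueue *vq = to_vvq(_vq);

                /* Pairs with READ_ONCE() in virtqueue_is_broken(). */
                WRITE_ONCE(vq->broken, true);
        }
        spin_unlock(&dev->vqs_list_lock);
}

So by the time virtblk_remove() runs, either none of the VQs are broken (normal unbind) or all of them are (surprise removal), which is why checking vqs[0] is sufficient here.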