From: Michael S. Tsirkin <mst@redhat.com>
Sent: Wednesday, May 21, 2025 1:48 PM
On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote:
When the PCI device is surprise removed, requests issued to the device may not complete because the VQ is marked as broken. Due to this, the disk deletion hangs.
Fix it by aborting the requests when the VQ is broken.
With this fix, fio now completes swiftly. An alternative, relying on the I/O timeout, was considered; however, when the driver already knows the block device is unresponsive, failing the pending requests right away lets users and upper layers react quickly.
Verified over multiple device unplug iterations with requests pending in the virtio used ring and some pending with the device.
Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
Cc: stable@vger.kernel.org
Reported-by: lirongqing@baidu.com
Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Parav Pandit <parav@nvidia.com>
changelog:
v0->v1:
- Addressed Stefan's comment: renamed a cleanup function
- Improved the logic for handling any outstanding requests in the bio layer
- Improved the cancel callback to synchronize with an ongoing done()
Thanks for the patch! Questions below:
 drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 7cffea01d868..5212afdbd3c7 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 	blk_status_t status;
 	int err;
 
+	/* Immediately fail all incoming requests if the vq is broken.
+	 * Once the queue is unquiesced, upper block layer flushes any pending
+	 * queued requests; fail them right away.
+	 */
+	if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq)))
+		return BLK_STS_IOERR;
+
 	status = virtblk_prep_rq(hctx, vblk, req, vbr);
 	if (unlikely(status))
 		return status;
Just below this:

	spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
	err = virtblk_add_req(vblk->vqs[qid].vq, vbr);
	if (err) {
and virtblk_add_req calls virtqueue_add_sgs, so it will fail on a broken vq.
Why do we need to check it one extra time here?
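For reference, the check being alluded to sits in the virtqueue_add_*() implementations in drivers/virtio/virtio_ring.c, which virtqueue_add_sgs() dispatches to. A rough sketch of the relevant path (details vary across kernel versions):

	/* inside virtqueue_add_split()/virtqueue_add_packed() */
	if (unlikely(vq->broken)) {
		END_USE(vq);
		return -EIO;	/* surfaces to virtblk_add_req() as err */
	}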
It may work, but if for some reason the hw queue gets stopped in this flow, it can hang the flushing of I/Os. I considered it risky to rely on a specific error code (such as -ENOSPC) returned by a layer below the virtio-blk driver. In other words, if the lower layer changed for some reason, we may end up stopping the hw queue while the VQ is broken, and requests would hang.
Compared to that, a one-time check at entry seems more robust.
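The error-handling tail of virtio_queue_rq() that this refers to looks roughly like the following (paraphrased from upstream virtio_blk.c; exact code may differ by version). A lower layer returning -ENOSPC for a broken VQ would stop the hw queue with nothing left to restart it:

	if (err) {
		virtqueue_kick(vblk->vqs[qid].vq);
		/* Don't stop the queue if -ENOMEM: we may have failed to
		 * bounce the buffer due to global resource outage.
		 */
		if (err == -ENOSPC)
			blk_mq_stop_hw_queue(hctx);
		spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
		...
		switch (err) {
		case -ENOSPC:
			return BLK_STS_DEV_RESOURCE;
		case -ENOMEM:
			return BLK_STS_RESOURCE;
		default:
			return BLK_STS_IOERR;
		}
	}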
@@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist)
 	while ((req = rq_list_pop(rqlist))) {
 		struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
 
+		if (unlikely(virtqueue_is_broken(this_vq->vq))) {
+			rq_list_add_tail(&requeue_list, req);
+			continue;
+		}
+
 		if (vq && vq != this_vq)
 			virtblk_add_req_batch(vq, &submit_list);
 		vq = this_vq;
Similarly here.
The error code from virtblk_add_req() does not surface up to this point. We would end up checking for a special error code here as well, translating a broken VQ into -EIO in virtblk_add_req_batch() just to break out of the loop.
Weighing a data path that depends on specific error codes, which would need auditing in the lower layers now and in the future, against an explicit broken-VQ check in this layer, the explicit check seems better.
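For reference, virtblk_add_req_batch() consumes the error from virtblk_add_req() internally and requeues the failed request, so no error code reaches virtio_queue_rqs(). A rough sketch of its submit loop (paraphrased from upstream; details may differ):

	while ((req = rq_list_pop(rqlist))) {
		struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
		int err;

		err = virtblk_add_req(vq->vq, vbr);
		if (err) {
			virtblk_unmap_data(req, vbr);
			virtblk_cleanup_cmd(req);
			blk_mq_requeue_request(req, true);
		}
	}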
[..]