On Mon, May 26, 2025 at 5:23 AM Parav Pandit <parav@nvidia.com> wrote:
Hi Stefan,
From: Parav Pandit <parav@nvidia.com> Sent: Thursday, May 22, 2025 8:26 PM
From: Stefan Hajnoczi <stefanha@gmail.com> Sent: Thursday, May 22, 2025 8:06 PM
On Wed, May 21, 2025 at 10:57 PM Parav Pandit <parav@nvidia.com> wrote:
From: Stefan Hajnoczi <stefanha@redhat.com> Sent: Wednesday, May 21, 2025 8:27 PM
On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote:
When the PCI device is surprise removed, in-flight requests may never be completed by the device because the VQ is marked as broken. Due to this, the disk deletion hangs.
Fix it by aborting the requests when the VQ is broken.
With this fix, fio now completes swiftly. An alternative of an I/O timeout was considered; however, when the driver already knows the block device is unresponsive, failing requests immediately enables users and upper layers to react quickly.
Verified with multiple device unplug iterations while requests were pending: some in the virtio used ring and some still with the device.
Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
Cc: stable@vger.kernel.org
Reported-by: lirongqing@baidu.com
Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Parav Pandit <parav@nvidia.com>
Changelog:
v0->v1:
- Addressed Stefan's comment to rename a cleanup function
- Improved logic for handling any outstanding requests in the bio layer
- Improved the cancel callback to synchronize with an ongoing done()
 drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 7cffea01d868..5212afdbd3c7 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 	blk_status_t status;
 	int err;
 
+	/* Immediately fail all incoming requests if the vq is broken.
+	 * Once the queue is unquiesced, upper block layer flushes any
+	 * pending queued requests; fail them right away.
+	 */
+	if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq)))
+		return BLK_STS_IOERR;
+
 	status = virtblk_prep_rq(hctx, vblk, req, vbr);
 	if (unlikely(status))
 		return status;
@@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist)
 	while ((req = rq_list_pop(rqlist))) {
 		struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
 
+		if (unlikely(virtqueue_is_broken(this_vq->vq))) {
+			rq_list_add_tail(&requeue_list, req);
+			continue;
+		}
+
 		if (vq && vq != this_vq)
 			virtblk_add_req_batch(vq, &submit_list);
 		vq = this_vq;
@@ -1554,6 +1566,87 @@ static int virtblk_probe(struct virtio_device *vdev)
 	return err;
 }
 
+static bool virtblk_request_cancel(struct request *rq, void *data)
+{
+	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
+	struct virtio_blk *vblk = data;
+	struct virtio_blk_vq *vq;
+	unsigned long flags;
+
+	vq = &vblk->vqs[rq->mq_hctx->queue_num];
+
+	spin_lock_irqsave(&vq->lock, flags);
+
+	vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
+	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
+		blk_mq_complete_request(rq);
+
+	spin_unlock_irqrestore(&vq->lock, flags);
+	return true;
+}
+
+static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
+{
+	struct request_queue *q = vblk->disk->queue;
+
+	if (!virtqueue_is_broken(vblk->vqs[0].vq))
+		return;
Can a subset of virtqueues be broken? If so, then this code doesn't handle it.
On device removal, all the VQs are broken. This check only uses the first VQ as a representative to decide.
In the future, a more elaborate API such as virtio_dev_broken() could be added. I prefer to keep this patch from extending many APIs, given it carries a Fixes tag.
virtblk_remove() is called not just when a PCI device is hot unplugged. For example, removing the virtio_blk kernel module or unbinding a specific virtio device instance also calls it.
This is ok.
My concern is that virtblk_broken_device_cleanup() is only intended for the cases where all virtqueues are broken or none are broken. If just the first virtqueue is broken then it completes requests on operational virtqueues and they may still raise an interrupt.
I see that the vq broken flag has lately been extended to the per-VQ reset scenario too, in vp_modern_enable_vq_after_reset(). So yes, this patch was intended for the original surprise removal bug, where vq broken was not being set for reset cases.
I believe that to fix the cited patch, a device->broken flag should be used. Max indicated this in an internal review, but I was inclined to avoid adding many changes, and hence reused vq broken.
So one option is to extend virtio_break_device() to set a device-level flag like below and check it during remove():

	dev->broken = true;
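As a sketch, the extension would be a one-line addition to the existing helper. Note that struct virtio_device has no 'broken' field in mainline, so that field and its reader are assumptions here:

void virtio_break_device(struct virtio_device *dev)
{
	struct virtqueue *_vq;

	spin_lock(&dev->vqs_list_lock);
	dev->broken = true;	/* assumed new field, checked from remove() */
	list_for_each_entry(_vq, &dev->vqs, list) {
		struct vring_virtqueue *vq = to_vvq(_vq);

		/* Pairs with READ_ONCE() in virtqueue_is_broken(). */
		WRITE_ONCE(vq->broken, true);
	}
	spin_unlock(&dev->vqs_list_lock);
}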
I dug further. VQ resize is the only user that dynamically breaks and unbreaks a VQ, and it is specific to the virtio-net device. So the vq[0] broken check in this patch is sufficient for the proposed function, without adding the above dev->broken check.
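For reference, the proposed function would then take roughly the shape below. This is a sketch only, assuming the cancel callback shown in the diff is driven via blk_mq_tagset_busy_iter(); the exact v2 code may differ:

static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
{
	struct request_queue *q = vblk->disk->queue;

	if (!virtqueue_is_broken(vblk->vqs[0].vq))
		return;

	/* Block new requests from entering virtio_queue_rq(). */
	blk_mq_quiesce_queue(q);

	/* Fail every started, not-yet-completed request. */
	blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, vblk);
	blk_mq_tagset_wait_completed_request(&vblk->tag_set);

	/*
	 * Unquiesce so the block layer flushes any requests queued while
	 * quiesced; virtio_queue_rq() now fails them immediately via the
	 * broken-vq check added above.
	 */
	blk_mq_unquiesce_queue(q);
}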
If there are no further comments, I would like to post v2 addressing your and Michael's inputs. Please let me know.
Yes, please go ahead with the next revision.
Stefan