When swiotlb is enabled, dma_max_mapping_size() takes the device's min
align mask into account. Currently the mask is set after max_hw_sectors
is calculated, which can result in a request size that overflows the
swiotlb buffer. Set the min align mask for the nvme driver before
calling dma_max_mapping_size() while calculating max_hw_sectors.
Fixes: 7637de311bd2 ("nvme-pci: limit max_hw_sectors based on the DMA max mapping size")
Cc: stable@vger.kernel.org
Signed-off-by: Rishabh Bhatnagar <risbhat@amazon.com>
---
Changes in V2:
- Add Cc: stable@vger.kernel.org tag
- Improve the commit text
- Add patch version
Changes in V1:
- Add fixes tag
 drivers/nvme/host/pci.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 98864b853eef..30e71e41a0a2 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2834,6 +2834,8 @@ static void nvme_reset_work(struct work_struct *work)
 		nvme_start_admin_queue(&dev->ctrl);
 	}
 
+	dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
+
 	/*
 	 * Limit the max command size to prevent iod->sg allocations going
 	 * over a single page.
@@ -2846,7 +2848,6 @@ static void nvme_reset_work(struct work_struct *work)
 	 * Don't limit the IOMMU merged segment size.
 	 */
 	dma_set_max_seg_size(dev->dev, 0xffffffff);
-	dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
 
 	mutex_unlock(&dev->shutdown_lock);