 
npages in unmap_sg() is declared as int, so for a scatter-gather list of 2 GB or more the expression npages << PAGE_SHIFT overflows before the result reaches __unmap_single(), and the wrong size is unmapped.

A 2 GB region is 524,288 pages with 4 KiB pages (PAGE_SHIFT == 12). 524288 << 12 is 0x80000000, which does not fit in a 32-bit signed int (INT_MAX is 0x7FFFFFFF): the shift wraps to INT_MIN, and the implicit conversion to the size_t size parameter of __unmap_single() sign-extends it to 0xFFFFFFFF80000000, breaking the unmap size calculation.
Fix the overflow by casting npages to size_t before the left shift by PAGE_SHIFT, so the shift is performed in 64-bit arithmetic.
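For illustration only (not part of the patch), a minimal user-space sketch of the same arithmetic, assuming a 64-bit size_t as on x86-64 and a PAGE_SHIFT of 12 hard-coded for the demonstration; the kernel identifiers appear only in the comments:

  /* Stand-alone demonstration, not kernel code. Assumes 4 KiB pages
   * (shift of 12) and a 64-bit size_t.
   */
  #include <stdio.h>
  #include <stddef.h>

  int main(void)
  {
  	int npages = 524288;			/* 2 GB / 4 KiB */

  	/* 32-bit shift: formally undefined signed overflow, in practice
  	 * it wraps to INT_MIN and sign-extends when converted to the
  	 * 64-bit size_t parameter of __unmap_single().
  	 */
  	size_t broken = npages << 12;

  	/* Casting first widens npages, so the shift happens in 64 bits
  	 * and yields the intended 2 GB size.
  	 */
  	size_t fixed = (size_t)npages << 12;

  	printf("broken = %#zx\n", broken);	/* 0xffffffff80000000 */
  	printf("fixed  = %#zx\n", fixed);	/* 0x80000000 */
  	return 0;
  }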
Fixes: 89736a0ee81d ("Revert "iommu/amd: Remove the leftover of bypass support"")
Cc: stable@vger.kernel.org # 5.4
Signed-off-by: Jinhui Guo <guojinhui.liam@bytedance.com>
---
Hi,
We hit an IO_PAGE_FAULT on an AMD machine running 5.4-stable when mapping a 2 GB scatter-gather list.
The fault is caused by the overflow in unmap_sg(): the scatter-gather mapping path on stable-5.4 was never converted to the common IOMMU DMA framework, so the bug exists only in this branch.
Regards, Jinhui
 drivers/iommu/amd_iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index a30aac41af42..60872d7be52b 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2682,7 +2682,7 @@ static void unmap_sg(struct device *dev, struct scatterlist *sglist,
 	dma_dom = to_dma_ops_domain(domain);
 	npages = sg_num_pages(dev, sglist, nelems);
 
-	__unmap_single(dma_dom, startaddr, npages << PAGE_SHIFT, dir);
+	__unmap_single(dma_dom, startaddr, (size_t)npages << PAGE_SHIFT, dir);
 }
 
 /*