From: Yunfei Wang <yf.wang@mediatek.com>
commit a3884774d731f03d3a3dd4fb70ec2d9341ceb39d upstream.
The return type of iommu_map_sg_atomic() is ssize_t, but the IOVA size has type size_t, i.e. one is signed while the other is unsigned.
When the return value of iommu_map_sg_atomic() is compared with the IOVA size, the usual arithmetic conversions force the signed value to unsigned. If the IOVA mapping fails and iommu_map_sg_atomic() returns a negative error code, (ret < iova_len) is false, so the IOVA is not freed, and the master can still "successfully" obtain the IOVA of the failed mapping, which is not expected.
Therefore, check the return value of iommu_map_sg_atomic() in two cases, according to whether it is less than 0.
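As an illustration (not part of the patch), here is a minimal user-space sketch of the conversion pitfall; the values assume a 64-bit system where ssize_t is long and size_t is unsigned long:

#include <stdio.h>
#include <sys/types.h>	/* ssize_t */

int main(void)
{
	ssize_t ret = -22;	/* a negative error code, e.g. -EINVAL */
	size_t iova_len = 4096;

	/*
	 * Old-style check: the usual arithmetic conversions turn ret
	 * into a huge unsigned value (2^64 - 22), so the comparison
	 * is false and the map failure goes undetected.
	 */
	if (ret < iova_len)
		printf("old check: map failure detected\n");
	else
		printf("old check: map failure missed\n");

	/*
	 * Fixed check: test the sign first, while ret is still signed;
	 * the second comparison only runs for ret >= 0, where the
	 * conversion to unsigned is value-preserving.
	 */
	if (ret < 0 || ret < iova_len)
		printf("new check: map failure detected\n");

	return 0;
}

Built with -Wall -Wextra, the mixed comparison is exactly what -Wsign-compare warns about.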
Fixes: ad8f36e4b6b1 ("iommu: return full error code from iommu_map_sg[_atomic]()")
Signed-off-by: Yunfei Wang <yf.wang@mediatek.com>
Cc: <stable@vger.kernel.org> # 5.15.*
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Miles Chen <miles.chen@mediatek.com>
Link: https://lore.kernel.org/r/20220507085204.16914-1-yf.wang@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/iommu/dma-iommu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -619,6 +619,7 @@ static struct page **__iommu_dma_alloc_n
 	unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
 	struct page **pages;
 	dma_addr_t iova;
+	ssize_t ret;
 
 	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
 	    iommu_deferred_attach(dev, domain))
@@ -656,8 +657,8 @@ static struct page **__iommu_dma_alloc_n
 			arch_dma_prep_coherent(sg_page(sg), sg->length);
 	}
 
-	if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
-			< size)
+	ret = iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot);
+	if (ret < 0 || ret < size)
 		goto out_free_sg;
 
 	sgt->sgl->dma_address = iova;
@@ -1054,7 +1055,7 @@ static int iommu_dma_map_sg(struct devic
 	 * implementation - it knows better than we do.
 	 */
 	ret = iommu_map_sg_atomic(domain, iova, sg, nents, prot);
-	if (ret < iova_len)
+	if (ret < 0 || ret < iova_len)
 		goto out_free_iova;
 
 	return __finalise_sg(dev, sg, nents, iova);