From: Jacob Pan <jacob.jun.pan@linux.intel.com>
[ Upstream commit 1ff00279655d95ae9c285c39878aedf9ff008d25 ]
For guest requested IOTLB invalidation, the address and mask are provided as part of the invalidation data. VT-d HW silently ignores any address bits below the mask. SW shall also allow such a case, but give a warning if the address does not align with the mask. This patch relaxes the fault handling from an error to a warning and proceeds with the invalidation request using the given mask.
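As an aside (not part of the upstream patch), here is a minimal user-space sketch of the alignment check in question; the helper addr_aligned_to_mask, the main() driver, and the example values are hypothetical, while VTD_PAGE_SHIFT and the warning text mirror the driver code:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define VTD_PAGE_SHIFT 12 /* 4KiB base page, as in the VT-d driver */

  /*
   * Hypothetical helper mirroring the check in intel_iommu_sva_invalidate():
   * for an address-selective invalidation covering 2^size_order pages, HW
   * ignores any address bits below the mask, so a misaligned address only
   * warrants a warning rather than an error.
   */
  static bool addr_aligned_to_mask(uint64_t addr, uint64_t size_order)
  {
          uint64_t mask = (1ULL << (VTD_PAGE_SHIFT + size_order)) - 1;

          return (addr & mask) == 0;
  }

  int main(void)
  {
          /* 2^1 pages = 8KiB granule; 0x3000 is 4KiB-aligned but not 8KiB-aligned */
          uint64_t addr = 0x3000, size_order = 1;

          if (!addr_aligned_to_mask(addr, size_order))
                  printf("User address not aligned, 0x%llx, size order %llu\n",
                         (unsigned long long)addr,
                         (unsigned long long)size_order);
          return 0;
  }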
Fixes: 6ee1b77ba3ac0 ("iommu/vt-d: Add svm/sva invalidate function")
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200724014925.15523-7-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/iommu/intel/iommu.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index e7bce09a9f735..04e82f1756010 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -5452,13 +5452,12 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
 	switch (BIT(cache_type)) {
 	case IOMMU_CACHE_INV_TYPE_IOTLB:
+		/* HW will ignore LSB bits based on address mask */
 		if (inv_info->granularity == IOMMU_INV_GRANU_ADDR &&
 		    size &&
 		    (inv_info->addr_info.addr & ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
-			pr_err_ratelimited("Address out of range, 0x%llx, size order %llu\n",
+			pr_err_ratelimited("User address not aligned, 0x%llx, size order %llu\n",
 					   inv_info->addr_info.addr, size);
-			ret = -ERANGE;
-			goto out_unlock;
 		}
/*