Lu Baolu baolu.lu@linux.intel.com writes:
Commit 6bbd42e2df8f ("mmu_notifiers: call invalidate_range() when invalidating TLBs") moved the secondary TLB invalidations into the TLB invalidation functions to ensure that all secondary TLB invalidations happen at the same time as the CPU invalidation. It also added a flush-all type of secondary TLB invalidation for the batched mode, where a range of [0, -1UL) is used to indicate that the range extends to the end of the address space.
However, using an end address of -1UL caused an overflow in the Intel IOMMU driver, where the end address was rounded up to the next page. As a result, both the IOTLB and device ATC were not invalidated correctly.
Thanks for catching this. The fix looks good, so:
Reviewed-by: Alistair Popple apopple@nvidia.com
However, examining the fixes patch again, I note that we are calling mmu_notifier_invalidate_range(mm, 0, -1UL) from arch_tlbbatch_add_pending() in arch/x86/include/asm/tlbflush.h.
That seems suboptimal because we would be doing an invalidate-all for every page unmap, and as of db6c1f6f236d ("mm/tlbbatch: introduce arch_flush_tlb_batched_pending()") arch_flush_tlb_batched_pending() calls flush_tlb_mm() anyway. So I think we can probably drop the explicit notifier call from arch_tlbbatch_add_pending().
Will put together a patch for that.
- Alistair
Add a flush-all helper function and call it when the invalidation range is from 0 to -1UL, ensuring that the entire IOTLB and device ATC are invalidated correctly.
Fixes: 6bbd42e2df8f ("mmu_notifiers: call invalidate_range() when invalidating TLBs")
Cc: stable@vger.kernel.org
Cc: Huang Ying ying.huang@intel.com
Cc: Alistair Popple apopple@nvidia.com
Tested-by: Luo Yuzhang yuzhang.luo@intel.com # QAT
Tested-by: Tony Zhu tony.zhu@intel.com # DSA
Signed-off-by: Lu Baolu baolu.lu@linux.intel.com
 drivers/iommu/intel/svm.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 50a481c895b8..588385050a07 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -216,6 +216,27 @@ static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address,
 	rcu_read_unlock();
 }
 
+static void intel_flush_svm_all(struct intel_svm *svm)
+{
+	struct device_domain_info *info;
+	struct intel_svm_dev *sdev;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(sdev, &svm->devs, list) {
+		info = dev_iommu_priv_get(sdev->dev);
+
+		qi_flush_piotlb(sdev->iommu, sdev->did, svm->pasid, 0, -1UL, 1);
+		if (info->ats_enabled) {
+			qi_flush_dev_iotlb_pasid(sdev->iommu, sdev->sid, info->pfsid,
+						 svm->pasid, sdev->qdep,
+						 0, 64 - VTD_PAGE_SHIFT);
+			quirk_extra_dev_tlb_flush(info, 0, 64 - VTD_PAGE_SHIFT,
+						  svm->pasid, sdev->qdep);
+		}
+	}
+	rcu_read_unlock();
+}
+
 /* Pages have been freed at this point */
 static void intel_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn,
 					struct mm_struct *mm,
@@ -223,6 +244,11 @@ static void intel_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn,
 {
 	struct intel_svm *svm = container_of(mn, struct intel_svm, notifier);
 
+	if (start == 0 && end == -1UL) {
+		intel_flush_svm_all(svm);
+		return;
+	}
+
 	intel_flush_svm_range(svm, start,
 			      (end - start + PAGE_SIZE - 1) >> VTD_PAGE_SHIFT, 0);
 }