On 24.08.22 05:03, Alistair Popple wrote:
When clearing a PTE the TLB should be flushed whilst still holding the PTL to avoid a potential race with madvise/munmap/etc. For example, consider the following sequence:
  CPU0                          CPU1
  ----                          ----

  migrate_vma_collect_pmd()
  pte_unmap_unlock()
                                madvise(MADV_DONTNEED)
                                -> zap_pte_range()
                                pte_offset_map_lock()
                                [ PTE not present, TLB not flushed ]
                                pte_unmap_unlock()

                                [ page is still accessible via stale TLB ]
  flush_tlb_range()
In this case the page may still be accessed via the stale TLB entry after madvise returns. Fix this by flushing the TLB while holding the PTL.
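To make the ordering change concrete, here is the tail of migrate_vma_collect_pmd() written out both ways in plain C (an illustrative sketch of the two orderings only, not a compilable unit on its own):

	/* Before (racy): the PTL is dropped before the TLB flush, so a
	 * concurrent zap_pte_range() can take the PTL, find the PTE
	 * already non-present, skip its own flush and return while a
	 * stale TLB entry still maps the page. */
	arch_leave_lazy_mmu_mode();
	pte_unmap_unlock(ptep - 1, ptl);
	if (unmapped)
		flush_tlb_range(walk->vma, start, end);

	/* After (fixed): flush while still holding the PTL. Anyone who
	 * can observe the cleared PTE must first take the PTL, and by
	 * then the stale TLB entry is guaranteed to be gone. */
	if (unmapped)
		flush_tlb_range(walk->vma, start, end);
	arch_leave_lazy_mmu_mode();
	pte_unmap_unlock(ptep - 1, ptl);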
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Reported-by: Nadav Amit <nadav.amit@gmail.com>
Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
Cc: stable@vger.kernel.org
Changes for v3:
- New for v3
 mm/migrate_device.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 27fb37d..6a5ef9f 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -254,13 +254,14 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			migrate->dst[migrate->npages] = 0;
 			migrate->src[migrate->npages++] = mpfn;
 	}
-	arch_leave_lazy_mmu_mode();
-	pte_unmap_unlock(ptep - 1, ptl);
 
 	/* Only flush the TLB if we actually modified any entries */
 	if (unmapped)
 		flush_tlb_range(walk->vma, start, end);
 
+	arch_leave_lazy_mmu_mode();
+	pte_unmap_unlock(ptep - 1, ptl);
+
 	return 0;
 }
base-commit: ffcf9c5700e49c0aee42dcba9a12ba21338e8136
I'm not a TLB-flushing expert, but this matches my understanding (and some Linux TLB-flushing documentation I stumbled over a while ago but cannot quickly find right now).
In the ordinary try_to_migrate_one() path, flushing would happen via ptep_clear_flush() (just like we do for the anon_exclusive case here as well), correct?
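Roughly the pattern I have in mind there (heavily condensed from try_to_migrate_one(); the migration-entry setup and error handling are elided):

	while (page_vma_mapped_walk(&pvmw)) {
		/* page_vma_mapped_walk() returns with the PTL held. */
		pte_t pteval;

		/*
		 * ptep_clear_flush() clears the PTE and flushes the TLB
		 * in one step under the PTL, so there is no window in
		 * which another PTL holder can observe the cleared PTE
		 * while a stale TLB entry still maps the page.
		 */
		pteval = ptep_clear_flush(vma, pvmw.address, pvmw.pte);

		/* ... construct and install the migration entry ... */
	}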