A few new fields were added to struct mmu_gather to make TLB flushing smarter for huge pages by tracking which levels of the page tables have been changed.
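For reference, these are the bits in question. This is a simplified excerpt of struct mmu_gather as I recall it from include/asm-generic/tlb.h around this kernel version, with unrelated fields omitted, so treat it as a sketch rather than a verbatim quote:

struct mmu_gather {
	/* ... other fields ... */

	/* set when intermediate page table pages were freed in this range */
	unsigned int		freed_tables : 1;

	/* which levels of the page tables have been cleared for this range */
	unsigned int		cleared_ptes : 1;
	unsigned int		cleared_pmds : 1;
	unsigned int		cleared_puds : 1;
	unsigned int		cleared_p4ds : 1;

	/* ... other fields ... */
};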
__tlb_reset_range() is used to reset all this page table state to "unchanged"; the TLB flush path calls it when parallel mapping changes happen on the same range under a non-exclusive lock (i.e. read mmap_sem). Before commit dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap"), MADV_DONTNEED was the only operation that could zap pages in parallel, and it does not free page tables. Since that commit, however, munmap() may run under read mmap_sem and free page tables. This causes a bug [1] reported by Jan Stancek, because __tlb_reset_range() may pass the wrong page table state to the architecture-specific TLB flush operations.
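To make the problem concrete, __tlb_reset_range() throws away both the accumulated flush range and the per-level state, roughly as below. This is paraphrased from memory from mm/mmu_gather.c of this era, so it is a sketch, not a verbatim copy:

static void __tlb_reset_range(struct mmu_gather *tlb)
{
	if (tlb->fullmm) {
		tlb->start = tlb->end = ~0;
	} else {
		tlb->start = TASK_SIZE;
		tlb->end = 0;
	}

	/*
	 * Clearing these bits is what goes wrong under a parallel munmap():
	 * if the other thread freed page tables in this range, the
	 * architecture flush no longer sees freed_tables and may skip the
	 * page-table-walk cache invalidation.
	 */
	tlb->freed_tables = 0;
	tlb->cleared_ptes = 0;
	tlb->cleared_pmds = 0;
	tlb->cleared_puds = 0;
	tlb->cleared_p4ds = 0;
}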
So, removing the __tlb_reset_range() call sounds sane. This may cause more TLB flushes for MADV_DONTNEED, but MADV_DONTNEED should not be called very often, hence the impact should be negligible.
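To illustrate why keeping the conservative state is safe, here is a purely hypothetical example of how an architecture-specific flush might consume it. The example_* names are made up for illustration and do not correspond to any real arch code:

/* hypothetical helper: flush [start, end) for mm, optionally leaf-only */
void example_flush_tlb_range(struct mm_struct *mm, unsigned long start,
			     unsigned long end, bool leaf_only);

static inline void example_arch_tlb_flush(struct mmu_gather *tlb)
{
	/*
	 * When no page tables were freed, an architecture may restrict the
	 * flush to last-level (leaf) entries and skip walk-cache invalidation.
	 */
	bool leaf_only = !tlb->freed_tables;

	/*
	 * If __tlb_reset_range() wiped freed_tables behind a concurrent
	 * munmap(), leaf_only is wrongly true here and stale walk-cache
	 * entries can survive, which is the reported bug.  With the reset
	 * removed, the worst case is just a more conservative flush.
	 */
	example_flush_tlb_range(tlb->mm, tlb->start, tlb->end, leaf_only);
}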
The original proposed fix came from Jan Stancek, who did most of the debugging of this issue; I just wrapped everything up together.
[1] https://lore.kernel.org/linux-mm/342bf1fd-f1bf-ed62-1127-e911b5032274@linux....
Reported-by: Jan Stancek <jstancek@redhat.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Signed-off-by: Jan Stancek <jstancek@redhat.com>
---
 mm/mmu_gather.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 99740e1..9fd5272 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -249,11 +249,12 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
 	 * flush by batching, a thread has stable TLB entry can fail to flush
 	 * the TLB by observing pte_none|!pte_dirty, for example so flush TLB
 	 * forcefully if we detect parallel PTE batching threads.
+	 *
+	 * munmap() may change mapping under non-exclusive lock and also free
+	 * page tables.  Do not call __tlb_reset_range() for it.
 	 */
-	if (mm_tlb_flush_nested(tlb->mm)) {
-		__tlb_reset_range(tlb);
+	if (mm_tlb_flush_nested(tlb->mm))
 		__tlb_adjust_range(tlb, start, end - start);
-	}
 
 	tlb_flush_mmu(tlb);