On Fri, Jun 6, 2025 at 11:28 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
Commit 3ea277194daa ("mm, mprotect: flush TLB if potentially racing with a parallel reclaim leaving stale TLB entries") described a theoretical race as such:
""" Nadav Amit identified a theoritical race between page reclaim and mprotect due to TLB flushes being batched outside of the PTL being held.
He described the race as follows:
CPU0 CPU1 ---- ---- user accesses memory using RW PTE [PTE now cached in TLB] try_to_unmap_one() ==> ptep_get_and_clear() ==> set_tlb_ubc_flush_pending() mprotect(addr, PROT_READ) ==> change_pte_range() ==> [ PTE non-present - no flush ] user writes using cached RW PTE ... try_to_unmap_flush()
The same type of race exists for reads when protecting for PROT_NONE and also exists for operations that can leave an old TLB entry behind such as munmap, mremap and madvise. """
The solution was to introduce flush_tlb_batched_pending() and call it under the PTL from mprotect/madvise/munmap/mremap to complete any pending tlb flushes.
However, while madvise_free_pte_range() and madvise_cold_or_pageout_pte_range() were both retrofitted to call flush_tlb_batched_pending() immediately after initially acquiring the PTL, they both temporarily release the PTL to split a large folio if they stumble upon one. In this case, when the PTL is re-acquired, flush_tlb_batched_pending() must be called again, but previously it was not. Let's fix that.
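For reference, the fixed locking pattern looks roughly like this (a simplified sketch only, not the actual diff; pte_is_large_folio() and split_the_folio() are made-up stand-ins for the real folio-split logic):

static void sketch_pte_range(struct mm_struct *mm, pmd_t *pmd,
			     unsigned long addr, unsigned long end)
{
	pte_t *start_pte, *pte;
	spinlock_t *ptl;

	start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	if (!start_pte)
		return;
	/* Complete any TLB flush batched by parallel reclaim. */
	flush_tlb_batched_pending(mm);

	for (; addr < end; addr += PAGE_SIZE, pte++) {
		if (pte_is_large_folio(pte)) {	/* hypothetical check */
			/* The PTL must be dropped to split the folio... */
			pte_unmap_unlock(start_pte, ptl);
			split_the_folio();	/* hypothetical helper */
			start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
			if (!start_pte)
				return;
			/*
			 * ...and after re-taking the PTL, any flush that
			 * reclaim batched in the meantime must be completed
			 * again under the PTL. This second call is the one
			 * that was previously missing.
			 */
			flush_tlb_batched_pending(mm);
		}
		/* per-PTE processing elided */
	}
	pte_unmap_unlock(start_pte, ptl);
}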
There are 2 Fixes: tags here: the first is the commit that fixed madvise_free_pte_range(). The second is the commit that added madvise_cold_or_pageout_pte_range(), which looks like it copy/pasted the faulty pattern from madvise_free_pte_range().
This is a theoretical bug discovered during code review.
Yeah, good point. So we could race like this:
    CPU 0                               CPU 1
    madvise_free_pte_range
      pte_offset_map_lock
      flush_tlb_batched_pending
      pte_unmap_unlock
                                        try_to_unmap_one
                                          get_and_clear_full_ptes
                                          set_tlb_ubc_flush_pending
      pte_offset_map_lock
      [old PTE still cached in TLB]
which is not a security problem for the kernel (a TLB flush will happen before the page is actually freed) but affects userspace correctness.
(Maybe we could at some point refactor this into tlb_finish_mmu(), and give tlb_finish_mmu() a boolean parameter for "did we maybe try to unmap/protect some range of memory"; just like how tlb_finish_mmu() already does the safety flush against concurrent mmu_gather operations. Maybe that would make it harder to mess this up?)
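To make that concrete, a purely hypothetical shape of the idea (the parameter name and the placement of the flush are made up for illustration, this is not an actual kernel API, and whether completing the batched flush at this point would actually be safe needs more thought):

void tlb_finish_mmu(struct mmu_gather *tlb, bool changed_mappings)
{
	/*
	 * Analogous to the existing safety flush against concurrent
	 * mmu_gather operations: if the caller may have unmapped or
	 * protected some range while reclaim had a TLB flush batched,
	 * complete that flush here so individual callers can't forget to.
	 */
	if (changed_mappings)
		flush_tlb_batched_pending(tlb->mm);

	/* ... existing tlb_finish_mmu() body ... */
}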
Cc: stable@vger.kernel.org
Fixes: 3ea277194daa ("mm, mprotect: flush TLB if potentially racing with a parallel reclaim leaving stale TLB entries")
Fixes: 9c276cc65a58 ("mm: introduce MADV_COLD")
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Jann Horn <jannh@google.com>