On 12/19/25 13:37, Harry Yoo wrote:
> On Fri, Dec 12, 2025 at 08:10:19AM +0100, David Hildenbrand (Red Hat) wrote:
>> As reported, ever since commit 1013af4f585f ("mm/hugetlb: fix huge_pmd_unshare() vs GUP-fast race") we can end up in some situations where we perform so many IPI broadcasts when unsharing hugetlb PMD page tables that it severely regresses some workloads.
>>
>> In particular, when we fork()+exit(), or when we munmap() a large area backed by many shared PMD tables, we perform one IPI broadcast per unshared PMD table.
>>
>> [...snip...]
>>
>> Fixes: 1013af4f585f ("mm/hugetlb: fix huge_pmd_unshare() vs GUP-fast race")
>> Reported-by: "Uschakow, Stanislav" <suschako@amazon.de>
>> Closes: https://lore.kernel.org/all/4d3878531c76479d9f8ca9789dc6485d@amazon.de/
>> Tested-by: Laurence Oberman <loberman@redhat.com>
>> Cc: stable@vger.kernel.org
>> Signed-off-by: David Hildenbrand (Red Hat) <david@kernel.org>
>> ---
>>  include/asm-generic/tlb.h |  74 ++++++++++++++++++++++-
>>  include/linux/hugetlb.h   |  19 +++---
>>  mm/hugetlb.c              | 121 ++++++++++++++++++++++----------------
>>  mm/mmu_gather.c           |   7 +++
>>  mm/mprotect.c             |   2 +-
>>  mm/rmap.c                 |  25 +++++---
>>  6 files changed, 179 insertions(+), 69 deletions(-)
>> @@ -6522,22 +6511,16 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
>>  			pte = huge_pte_clear_uffd_wp(pte);
>>  			huge_ptep_modify_prot_commit(vma, address, ptep, old_pte, pte);
>>  			pages++;
>>  		}
>> +		tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
>>  next:
>>  		spin_unlock(ptl);
>>  		cond_resched();
>>  	}
>> -	/*
>> -	 * There is nothing protecting a previously-shared page table that we
>> -	 * unshared through huge_pmd_unshare() from getting freed after we
>> -	 * release i_mmap_rwsem, so flush the TLB now. If huge_pmd_unshare()
>> -	 * succeeded, flush the range corresponding to the pud.
>> -	 */
>> -	if (shared_pmd)
>> -		flush_hugetlb_tlb_range(vma, range.start, range.end);
>> -	else
>> -		flush_hugetlb_tlb_range(vma, start, end);
>> +	tlb_flush_mmu_tlbonly(tlb);
>> +	huge_pmd_unshare_flush(tlb, vma);
> Shouldn't we teach mmu_gather that it has to call
I hope not :) In the worst case we could keep the flush_hugetlb_tlb_range() in the !shared case. Suboptimal, but I am sick and tired of dealing with this hugetlb mess.
Let me CC Ryan and Catalin for the arm64 pieces and Christophe for the ppc pieces. See [1], where we convert away from some flush_hugetlb_tlb_range() users to operate on the mmu_gather, using

* tlb_remove_huge_tlb_entry() for mremap() and mprotect(); before, we would only use it in __unmap_hugepage_range().
* tlb_flush_pmd_range() for unsharing of shared PMD tables; we already used that in one call path.
[1] https://lore.kernel.org/all/20251212071019.471146-5-david@kernel.org/
> flush_hugetlb_tlb_range() instead of the ordinary TLB flush routine, otherwise it will break ARCHes that have "special requirements" for evicting hugetlb backing TLB entries?
Yeah, I was briefly wondering about that myself (and the inconsistency we had in the code). I would hope that we're good, but maybe there are some nasty corner cases we're missing. So thanks for raising that.
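FWIW, the "special requirements" wording is from the generic fallback in include/linux/hugetlb.h, which (quoting from memory, so please double-check) is guarded roughly like this; any arch that does not define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE was effectively doing a plain flush_tlb_range() all along, so the question really only concerns the arches that override it, arm64 being the obvious one:

/* Generic fallback: ARCHes with "special requirements" override this. */
#ifndef __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
#define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
#endif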
Given tlb_remove_huge_tlb_entry() exists (and is already getting used), I would assume that it does the right thing.
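To spell out why I assume that: the generic definition in include/asm-generic/tlb.h (again from memory, take it with a grain of salt) already derives the flush granularity from the hugetlb folio size and then hands the entry to the arch hook:

/* include/asm-generic/tlb.h (paraphrased): pick the flush granularity from the folio size. */
#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
	do {							\
		unsigned long _sz = huge_page_size(h);		\
		if (_sz >= P4D_SIZE)				\
			tlb_flush_p4d_range(tlb, address, _sz);	\
		else if (_sz >= PUD_SIZE)			\
			tlb_flush_pud_range(tlb, address, _sz);	\
		else if (_sz >= PMD_SIZE)			\
			tlb_flush_pmd_range(tlb, address, _sz);	\
		else						\
			tlb_flush_pte_range(tlb, address, _sz);	\
		__tlb_remove_tlb_entry(tlb, ptep, address);	\
	} while (0)

So the mmu_gather ends up knowing both the range and the page-table level that was cleared, which is what the arch tlb_flush() implementations can use to pick a suitable stride.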
In tlb_unshare_pmd_ptdesc(), I am now using tlb_flush_pmd_range(), because we know that we are dealing with PMD-sized hugetlb folios.
And in fact, we were already doing that in the case of __unmap_hugepage_range(), where we did exactly what I do now:
tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
So, again, something would already be broken there unless I am missing something important.
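And tlb_flush_pmd_range() itself is nothing magical in the generic code (again from memory): it only widens the gather range and records that PMD-level entries were cleared, so the eventual flush can use the right stride:

/* include/asm-generic/tlb.h (paraphrased): extend the range and note the cleared level. */
static inline void tlb_flush_pmd_range(struct mmu_gather *tlb,
				       unsigned long address, unsigned long size)
{
	__tlb_adjust_range(tlb, address, size);
	tlb->cleared_pmds = 1;
}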
Looking at it, I wonder whether we must do the tlb_remove_huge_tlb_entry() in move_hugetlb_page_tables() after the move_huge_pte(). Looks like tlb_remove_huge_tlb_entry() might do some flushing on ppc (and not just updating the mmu_gather) through __tlb_remove_tlb_entry(). But it's a bit confusing.
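For reference, the generic __tlb_remove_tlb_entry() fallback is a no-op (from memory once more):

/* include/asm-generic/tlb.h (paraphrased): default is a no-op; arches may override. */
#ifndef __tlb_remove_tlb_entry
#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
#endif

Only a handful of arches, ppc among them, provide their own hook there, which is exactly the part I would like the ppc folks to double-check for the move_hugetlb_page_tables() case.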