The patch below does not apply to the 5.15-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
Possible dependencies:
f268f6cf875f ("mm/khugepaged: invoke MMU notifiers in shmem/file collapse paths") 2ba99c5e0881 ("mm/khugepaged: fix GUP-fast interaction by sending IPI") 8d3c106e19e8 ("mm/khugepaged: take the right locks for page table retraction") 34488399fa08 ("mm/madvise: add file and shmem support to MADV_COLLAPSE") 58ac9a8993a1 ("mm/khugepaged: attempt to map file/shmem-backed pte-mapped THPs by pmds") 780a4b6fb865 ("mm/khugepaged: check compound_order() in collapse_pte_mapped_thp()") b26e27015ec9 ("mm: thp: convert to use common struct mm_slot") 685405020b9f ("mm/khugepaged: stop using vma linked list") 7d2c4385c341 ("mm/khugepaged: rename prefix of shared collapse functions") 7d8faaf15545 ("mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse") 507228044236 ("mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds hugepage") a7f4e6e4c47c ("mm/thp: add flag to enforce sysfs THP in hugepage_vma_check()") 50ad2f24b3b4 ("mm/khugepaged: propagate enum scan_result codes back to callers") 9710a78ab2ae ("mm/khugepaged: dedup and simplify hugepage alloc and charging") 34d6b470ab9c ("mm/khugepaged: add struct collapse_control") c6a7f445a272 ("mm: khugepaged: don't carry huge page to the next loop for !CONFIG_NUMA") 1064026bab9f ("mm: khugepaged: reorg some khugepaged helpers") 7da4e2cb8b1f ("mm: thp: kill __transhuge_page_enabled()") 9fec51689ff6 ("mm: thp: kill transparent_hugepage_active()") f707fa493784 ("mm: khugepaged: better comments for anon vma check in hugepage_vma_revalidate")
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From f268f6cf875f3220afc77bdd0bf1bb136eb54db9 Mon Sep 17 00:00:00 2001
From: Jann Horn <jannh@google.com>
Date: Fri, 25 Nov 2022 22:37:14 +0100
Subject: [PATCH] mm/khugepaged: invoke MMU notifiers in shmem/file collapse
 paths
Any codepath that zaps page table entries must invoke MMU notifiers to ensure that secondary MMUs (like KVM) don't keep accessing pages which aren't mapped anymore. Secondary MMUs don't hold their own references to pages that are mirrored over, so failing to notify them can lead to page use-after-free.
I'm marking this as addressing an issue introduced in commit f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages"), but most of the security impact of this only came in commit 27e1f8273113 ("khugepaged: enable collapse pmd for pte-mapped THP"), which actually omitted flushes for the removal of present PTEs, not just for the removal of empty page tables.
Link: https://lkml.kernel.org/r/20221129154730.2274278-3-jannh@google.com
Link: https://lkml.kernel.org/r/20221128180252.1684965-3-jannh@google.com
Link: https://lkml.kernel.org/r/20221125213714.4115729-3-jannh@google.com
Fixes: f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 294cb75d9c22..3703a56571c1 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1399,6 +1399,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 				  unsigned long addr, pmd_t *pmdp)
 {
 	pmd_t pmd;
+	struct mmu_notifier_range range;
 
 	mmap_assert_write_locked(mm);
 	if (vma->vm_file)
@@ -1410,8 +1411,12 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 	if (vma->anon_vma)
 		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
 
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm, addr,
+				addr + HPAGE_PMD_SIZE);
+	mmu_notifier_invalidate_range_start(&range);
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
 	tlb_remove_table_sync_one();
+	mmu_notifier_invalidate_range_end(&range);
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));
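[Editor's note: the sketch below is not part of the patch. It is a minimal illustration of the bracketing pattern the commit message describes: any code that clears PTEs or a page table must wrap the clearing between mmu_notifier_invalidate_range_start()/end() so secondary MMUs such as KVM drop their mirrored mappings. The helper name is hypothetical; the mmu_notifier_* and pmdp_collapse_flush() calls follow the 6.1-era signatures used in the hunk above.]

static void zap_pmd_with_notifiers(struct mm_struct *mm,
				   struct vm_area_struct *vma,
				   unsigned long addr, pmd_t *pmdp)
{
	struct mmu_notifier_range range;
	pmd_t pmd;

	/* Ask secondary MMUs to invalidate this range before touching it. */
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
				addr, addr + HPAGE_PMD_SIZE);
	mmu_notifier_invalidate_range_start(&range);

	/* Clear the PMD and flush the TLB while the invalidation is active. */
	pmd = pmdp_collapse_flush(vma, addr, pmdp);

	/* Only now may secondary MMUs fault the range back in. */
	mmu_notifier_invalidate_range_end(&range);

	/* The caller would free the page table that 'pmd' pointed to. */
}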