The patch below does not apply to the 6.6-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.6.y
git checkout FETCH_HEAD
git cherry-pick -x 5596d9e8b553dacb0ac34bcf873cbbfb16c3ba3e
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to 'stable@vger.kernel.org' --in-reply-to '2024071559-unroasted-trapper-8b66@gregkh' --subject-prefix 'PATCH 6.6.y' HEAD^..
Possible dependencies:
5596d9e8b553 ("mm/hugetlb: fix potential race in __update_and_free_hugetlb_folio()")
bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
51718e25c53f ("mm: convert arch_clear_hugepage_flags to take a folio")
831bc31a5e82 ("mm: hugetlb: improve the handling of hugetlb allocation failure for freed or in-use hugetlb")
ebc20dcac4ce ("mm: hugetlb_vmemmap: convert page to folio")
c5ad3233ead5 ("hugetlb_vmemmap: use folio argument for hugetlb_vmemmap_* functions")
c24f188b2289 ("hugetlb: batch TLB flushes when restoring vmemmap")
f13b83fdd996 ("hugetlb: batch TLB flushes when freeing vmemmap")
f4b7e3efaddb ("hugetlb: batch PMD split for bulk vmemmap dedup")
91f386bf0772 ("hugetlb: batch freeing of vmemmap pages")
cfb8c75099db ("hugetlb: perform vmemmap restoration on a list of pages")
79359d6d24df ("hugetlb: perform vmemmap optimization on a list of pages")
d67e32f26713 ("hugetlb: restructure pool allocations")
d2cf88c27f51 ("hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles")
30a89adf872d ("hugetlb: check for hugetlb folio before vmemmap_restore")
d5b43e9683ec ("hugetlb: convert remove_pool_huge_page() to remove_pool_hugetlb_folio()")
04bbfd844b99 ("hugetlb: remove a few calls to page_folio()")
fde1c4ecf916 ("mm: hugetlb: skip initialization of gigantic tail struct pages if freed by HVO")
3ee0aa9f0675 ("mm: move some shrinker-related function declarations to mm/internal.h")
d8f5f7e445f0 ("hugetlb: set hugetlb page flag before optimizing vmemmap")
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 5596d9e8b553dacb0ac34bcf873cbbfb16c3ba3e Mon Sep 17 00:00:00 2001
From: Miaohe Lin <linmiaohe@huawei.com>
Date: Mon, 8 Jul 2024 10:51:27 +0800
Subject: [PATCH] mm/hugetlb: fix potential race in __update_and_free_hugetlb_folio()
There is a potential race between __update_and_free_hugetlb_folio() and try_memory_failure_hugetlb():
CPU1					CPU2
__update_and_free_hugetlb_folio		try_memory_failure_hugetlb
					 folio_test_hugetlb
					  -- It's still hugetlb folio.
 folio_clear_hugetlb_hwpoison
					 spin_lock_irq(&hugetlb_lock);
					  __get_huge_page_for_hwpoison
					   folio_set_hugetlb_hwpoison
					 spin_unlock_irq(&hugetlb_lock);
 spin_lock_irq(&hugetlb_lock);
 __folio_clear_hugetlb(folio);
  -- Hugetlb flag is cleared but too late.
 spin_unlock_irq(&hugetlb_lock);
When the above race occurs, the raw error page info is leaked. Even worse, the raw error pages won't have the hwpoison flag set and will reach the pcplists/buddy allocator. Fix this issue by deferring folio_clear_hugetlb_hwpoison() until __folio_clear_hugetlb() is done, so that all raw error pages end up with the hwpoison flag set.
Link: https://lkml.kernel.org/r/20240708025127.107713-1-linmiaohe@huawei.com
Fixes: 32c877191e02 ("hugetlb: do not clear hugetlb dtor until allocating vmemmap")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Acked-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2afb70171b76..fe44324d6383 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1725,13 +1725,6 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
 		return;
 	}
 
-	/*
-	 * Move PageHWPoison flag from head page to the raw error pages,
-	 * which makes any healthy subpages reusable.
-	 */
-	if (unlikely(folio_test_hwpoison(folio)))
-		folio_clear_hugetlb_hwpoison(folio);
-
 	/*
 	 * If vmemmap pages were allocated above, then we need to clear the
 	 * hugetlb flag under the hugetlb lock.
@@ -1742,6 +1735,13 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
 		spin_unlock_irq(&hugetlb_lock);
 	}
 
+	/*
+	 * Move PageHWPoison flag from head page to the raw error pages,
+	 * which makes any healthy subpages reusable.
+	 */
+	if (unlikely(folio_test_hwpoison(folio)))
+		folio_clear_hugetlb_hwpoison(folio);
+
 	folio_ref_unfreeze(folio, 1);
 
 	/*