There is a potential race between __update_and_free_hugetlb_folio() and try_memory_failure_hugetlb():
 CPU1                                    CPU2
 __update_and_free_hugetlb_folio         try_memory_failure_hugetlb
                                          folio_test_hugetlb
                                           -- It's still hugetlb folio.
  folio_clear_hugetlb_hwpoison
                                          spin_lock_irq(&hugetlb_lock);
                                           __get_huge_page_for_hwpoison
                                            folio_set_hugetlb_hwpoison
                                          spin_unlock_irq(&hugetlb_lock);
  spin_lock_irq(&hugetlb_lock);
  __folio_clear_hugetlb(folio);
   -- Hugetlb flag is cleared but too late.
  spin_unlock_irq(&hugetlb_lock);
When the above race occurs, the raw error page info is leaked. Even worse, the raw error pages end up without the hwpoisoned flag set and can hit the pcplists/buddy allocator. Fix this issue by deferring folio_clear_hugetlb_hwpoison() until __folio_clear_hugetlb() is done, so that all raw error pages have the hwpoisoned flag set.
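For reference, the net effect of the reordering on __update_and_free_hugetlb_folio() can be sketched as below. This is a condensed, illustrative view only, not the literal function body: everything outside the two hunks of this patch is elided, and the folio_test_hugetlb() guard around the locked section is an assumption rather than something shown in the hunks.

	static void __update_and_free_hugetlb_folio(struct hstate *h,
						    struct folio *folio)
	{
		/* ... vmemmap restore / early-return paths elided ... */

		/*
		 * If vmemmap pages were allocated above, clear the hugetlb
		 * flag/destructor under the hugetlb lock first ...
		 * (this guard condition is not part of the hunks and is
		 * only indicative here)
		 */
		if (folio_test_hugetlb(folio)) {
			spin_lock_irq(&hugetlb_lock);
			__folio_clear_hugetlb(folio);
			spin_unlock_irq(&hugetlb_lock);
		}

		/*
		 * ... and only then move the PageHWPoison flag from the
		 * head page to the raw error pages.  At this point a racing
		 * try_memory_failure_hugetlb() no longer sees a hugetlb
		 * folio, so it cannot set hugetlb hwpoison state that would
		 * be thrown away.
		 */
		if (unlikely(folio_test_hwpoison(folio)))
			folio_clear_hugetlb_hwpoison(folio);

		/* ... CMA / free_gigantic_folio handling elided ... */
	}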
Link: https://lkml.kernel.org/r/20240708025127.107713-1-linmiaohe@huawei.com
Fixes: 32c877191e02 ("hugetlb: do not clear hugetlb dtor until allocating vmemmap")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Acked-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit 5596d9e8b553dacb0ac34bcf873cbbfb16c3ba3e)
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/hugetlb.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a480affd475b..fb7a531fce71 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1769,13 +1769,6 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
 		return;
 	}
 
-	/*
-	 * Move PageHWPoison flag from head page to the raw error pages,
-	 * which makes any healthy subpages reusable.
-	 */
-	if (unlikely(folio_test_hwpoison(folio)))
-		folio_clear_hugetlb_hwpoison(folio);
-
 	/*
 	 * If vmemmap pages were allocated above, then we need to clear the
 	 * hugetlb destructor under the hugetlb lock.
@@ -1786,6 +1779,13 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
 		spin_unlock_irq(&hugetlb_lock);
 	}
 
+	/*
+	 * Move PageHWPoison flag from head page to the raw error pages,
+	 * which makes any healthy subpages reusable.
+	 */
+	if (unlikely(folio_test_hwpoison(folio)))
+		folio_clear_hugetlb_hwpoison(folio);
+
 	/*
 	 * Non-gigantic pages demoted from CMA allocated gigantic pages
 	 * need to be given back to CMA in free_gigantic_folio.