6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ma Wupeng <mawupeng1@huawei.com>
commit 773b9a6aa6d38894b95088e3ed6f8a701d9f50fd upstream.
If a folio already has an elevated reference count, folio_try_get() acquires a reference to it, the necessary operations are performed, and the reference is then released. For a poisoned folio without an elevated reference count (which is unlikely for memory-failure), folio_try_get() simply fails, so the folio is skipped.
Therefore, move the folio_try_get() call, which checks for and acquires this reference, so that it runs first.
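As a loose illustration of the get-unless-zero behaviour described above, here is a minimal userspace sketch using C11 atomics; it is not the kernel's folio_try_get() implementation, and demo_ref, demo_try_get and demo_put are invented names for this sketch only. The count is only bumped while it is already non-zero, so an object nobody references fails the check outright.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Invented stand-in for an object with a folio-style reference count. */
struct demo_ref {
	atomic_int count;
};

/* Take a reference only if one is already held (get-unless-zero). */
static bool demo_try_get(struct demo_ref *r)
{
	int old = atomic_load(&r->count);

	do {
		if (old == 0)
			return false;	/* no existing reference: caller skips the object */
	} while (!atomic_compare_exchange_weak(&r->count, &old, old + 1));

	return true;
}

static void demo_put(struct demo_ref *r)
{
	atomic_fetch_sub(&r->count, 1);
}

int main(void)
{
	struct demo_ref referenced = { .count = 1 };
	struct demo_ref unreferenced = { .count = 0 };

	if (demo_try_get(&referenced)) {	/* count was elevated: acquired */
		puts("referenced: acquired, processed, released");
		demo_put(&referenced);
	}
	if (!demo_try_get(&unreferenced))	/* count was zero: bypassed */
		puts("unreferenced: skipped");
	return 0;
}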
Link: https://lkml.kernel.org/r/20250217014329.3610326-3-mawupeng1@huawei.com
Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 mm/memory_hotplug.c | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1795,12 +1795,12 @@ static void do_migrate_range(unsigned lo
 		if (folio_test_large(folio))
 			pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
 
-		/*
-		 * HWPoison pages have elevated reference counts so the migration would
-		 * fail on them. It also doesn't make any sense to migrate them in the
-		 * first place. Still try to unmap such a page in case it is still mapped
-		 * (keep the unmap as the catch all safety net).
-		 */
+		if (!folio_try_get(folio))
+			continue;
+
+		if (unlikely(page_folio(page) != folio))
+			goto put_folio;
+
 		if (folio_test_hwpoison(folio) ||
 		    (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
 			if (WARN_ON(folio_test_lru(folio)))
@@ -1811,14 +1811,8 @@ static void do_migrate_range(unsigned lo
 				folio_unlock(folio);
 			}
 
-			continue;
-		}
-
-		if (!folio_try_get(folio))
-			continue;
-
-		if (unlikely(page_folio(page) != folio))
 			goto put_folio;
+		}
 
 		if (!isolate_folio_to_list(folio, &source)) {
 			if (__ratelimit(&migrate_rs)) {
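To make the resulting control flow easier to follow, here is a minimal, single-threaded userspace sketch of the reordered loop; toy_folio, toy_try_get, toy_put and migrate_one are invented names that only model the logic of the hunks above, not the kernel code. The reference check now gates everything, and both the hwpoisoned path and the normal path leave through the shared put_folio label.

#include <stdbool.h>
#include <stdio.h>

/* Invented record standing in for one folio of the range being offlined. */
struct toy_folio {
	int refcount;
	bool hwpoison;
};

/* Take a reference only if one is already held. */
static bool toy_try_get(struct toy_folio *f)
{
	if (f->refcount == 0)
		return false;
	f->refcount++;
	return true;
}

static void toy_put(struct toy_folio *f)
{
	f->refcount--;
}

/* Models the patched ordering: get the reference first, then handle hwpoison. */
static void migrate_one(struct toy_folio *f, int idx)
{
	if (!toy_try_get(f)) {
		printf("folio %d: no reference held, skipped\n", idx);
		return;
	}

	if (f->hwpoison) {
		printf("folio %d: hwpoisoned, unmap only, no migration\n", idx);
		goto put_folio;
	}

	printf("folio %d: isolated for migration\n", idx);
put_folio:
	toy_put(f);
}

int main(void)
{
	struct toy_folio range[] = {
		{ .refcount = 1, .hwpoison = false },	/* isolated for migration */
		{ .refcount = 1, .hwpoison = true },	/* unmapped, then reference dropped */
		{ .refcount = 0, .hwpoison = true },	/* skipped before any hwpoison handling */
	};

	for (int i = 0; i < 3; i++)
		migrate_one(&range[i], i);
	return 0;
}

In the sketch the unreferenced poisoned entry is skipped up front, which mirrors the reordering in the hunks above: the folio_try_get() check now runs before the hwpoison handling instead of after it.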