The patch titled
     Subject: mm/thp: fix __split_huge_pmd_locked() for migration PMD
has been added to the -mm tree.  Its filename is
     mm-thp-fix-__split_huge_pmd_locked-for-migration-pmd.patch
This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-thp-fix-__split_huge_pmd_locked...
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-thp-fix-__split_huge_pmd_locked...
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated there every 3-4 working days
------------------------------------------------------
From: Ralph Campbell <rcampbell@nvidia.com>
Subject: mm/thp: fix __split_huge_pmd_locked() for migration PMD
A migrating transparent huge page has to already be unmapped.  Otherwise, the page could be modified while it is being copied to a new page and data could be lost.  The function __split_huge_pmd() checks for a PMD migration entry before calling __split_huge_pmd_locked(), leading one to think that __split_huge_pmd_locked() can handle splitting a migrating PMD.
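For context, the caller looks roughly like this (a condensed paraphrase of the __split_huge_pmd() locking path, not verbatim kernel source): the check explicitly admits migration entries, so the locked helper is expected to cope with them.

	ptl = pmd_lock(vma->vm_mm, pmd);
	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
	    is_pmd_migration_entry(*pmd))
		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
	spin_unlock(ptl);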
However, the code always increments page->_mapcount and adjusts the memory control group accounting as if the page were mapped.
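For reference, __split_huge_pmd_locked() already distinguishes the two cases near its top, along these lines (paraphrased, not verbatim): a migration PMD is a non-present, swap-style entry, so the page must be recovered from the swap entry rather than via pmd_page().

	pmd_migration = is_pmd_migration_entry(old_pmd);
	if (unlikely(pmd_migration)) {
		swp_entry_t entry;

		entry = pmd_to_swp_entry(old_pmd);
		page = pfn_to_page(swp_offset(entry));
	} else {
		page = pmd_page(old_pmd);
	}

The fix threads this same pmd_migration flag through the mapcount and memcg accounting paths further down, as the diff below shows.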
Also, if the PMD entry is a migration PMD entry, the call to is_huge_zero_pmd(*pmd) is incorrect because it calls pmd_pfn(pmd) instead of migration_entry_to_pfn(pmd_to_swp_entry(pmd)). Fix these problems by checking for a PMD migration entry.
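To illustrate the second problem: a pfn lookup that tolerated both cases would have to decode the entry first, along these lines (a minimal sketch; pmd_entry_pfn() is a hypothetical helper, not a kernel API):

	/*
	 * Hypothetical helper, for illustration only.  pmd_pfn() decodes
	 * a *present* PMD; a migration PMD is a swap-style entry, so its
	 * pfn must be recovered from the swap entry instead.
	 */
	static unsigned long pmd_entry_pfn(pmd_t pmd)
	{
		if (is_pmd_migration_entry(pmd))
			return migration_entry_to_pfn(pmd_to_swp_entry(pmd));
		return pmd_pfn(pmd);
	}

The patch takes the simpler route of gating is_huge_zero_pmd() behind pmd_trans_huge(), which suffices because the huge zero page is never the subject of migration.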
Link: https://lkml.kernel.org/r/20200903183140.19055-1-rcampbell@nvidia.com
Fixes: 84c3fc4e9c56 ("mm: thp: check pmd migration entry in common path")
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Bharata B Rao <bharata@linux.ibm.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: <stable@vger.kernel.org>	[4.14+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/huge_memory.c |   42 +++++++++++++++++++++++-------------------
 1 file changed, 23 insertions(+), 19 deletions(-)
--- a/mm/huge_memory.c~mm-thp-fix-__split_huge_pmd_locked-for-migration-pmd
+++ a/mm/huge_memory.c
@@ -2021,7 +2021,7 @@ static void __split_huge_pmd_locked(stru
 		put_page(page);
 		add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
 		return;
-	} else if (is_huge_zero_pmd(*pmd)) {
+	} else if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
 		/*
 		 * FIXME: Do we want to invalidate secondary mmu by calling
 		 * mmu_notifier_invalidate_range() see comments below inside
@@ -2115,30 +2115,34 @@ static void __split_huge_pmd_locked(stru
 		pte = pte_offset_map(&_pmd, addr);
 		BUG_ON(!pte_none(*pte));
 		set_pte_at(mm, addr, pte, entry);
-		atomic_inc(&page[i]._mapcount);
-		pte_unmap(pte);
-	}
-
-	/*
-	 * Set PG_double_map before dropping compound_mapcount to avoid
-	 * false-negative page_mapped().
-	 */
-	if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
-		for (i = 0; i < HPAGE_PMD_NR; i++)
+		if (!pmd_migration)
 			atomic_inc(&page[i]._mapcount);
+		pte_unmap(pte);
 	}
 
-	lock_page_memcg(page);
-	if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
-		/* Last compound_mapcount is gone. */
-		__dec_lruvec_page_state(page, NR_ANON_THPS);
-		if (TestClearPageDoubleMap(page)) {
-			/* No need in mapcount reference anymore */
+	if (!pmd_migration) {
+		/*
+		 * Set PG_double_map before dropping compound_mapcount to avoid
+		 * false-negative page_mapped().
+		 */
+		if (compound_mapcount(page) > 1 &&
+		    !TestSetPageDoubleMap(page)) {
 			for (i = 0; i < HPAGE_PMD_NR; i++)
-				atomic_dec(&page[i]._mapcount);
+				atomic_inc(&page[i]._mapcount);
+		}
+
+		lock_page_memcg(page);
+		if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
+			/* Last compound_mapcount is gone. */
+			__dec_lruvec_page_state(page, NR_ANON_THPS);
+			if (TestClearPageDoubleMap(page)) {
+				/* No need in mapcount reference anymore */
+				for (i = 0; i < HPAGE_PMD_NR; i++)
+					atomic_dec(&page[i]._mapcount);
+			}
 		}
+		unlock_page_memcg(page);
 	}
-	unlock_page_memcg(page);
 
 	smp_wmb(); /* make pte visible before pmd */
 	pmd_populate(mm, pmd, pgtable);
_
Patches currently in -mm which might be from rcampbell@nvidia.com are
mm-thp-fix-__split_huge_pmd_locked-for-migration-pmd.patch
mm-test-use-the-new-skip-macro.patch
mm-migrate-remove-cpages-in-migrate_vma_finalize.patch
mm-migrate-remove-obsolete-comment-about-device-public.patch