The patch titled
     Subject: mm/hmm: fix ZONE_DEVICE anon page mapping reuse
has been removed from the -mm tree.  Its filename was
     mm-hmm-fix-zone_device-anon-page-mapping-reuse.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Ralph Campbell <rcampbell@nvidia.com>
Subject: mm/hmm: fix ZONE_DEVICE anon page mapping reuse
When a ZONE_DEVICE private page is freed, the page->mapping field may still be set from the page's previous use.  If the page is then reused as an anonymous page, the stale value can prevent the page from being inserted into the CPU's anonymous reverse map (rmap) tables.  For example, when migrating a pte_none() page to device memory:
  migrate_vma(ops, vma, start, end, src, dst, private)
    migrate_vma_collect()
      src[] = MIGRATE_PFN_MIGRATE
    migrate_vma_prepare()
      /* no page to lock or isolate so OK */
    migrate_vma_unmap()
      /* no page to unmap so OK */
    ops->alloc_and_copy()
      /* driver allocates ZONE_DEVICE page for dst[] */
    migrate_vma_pages()
      migrate_vma_insert_page()
        page_add_new_anon_rmap()
          __page_set_anon_rmap()
            /* This check sees the page's stale mapping field */
            if (PageAnon(page))
                return
            /* page->mapping is not updated */
The result is that the migration appears to succeed, but a subsequent CPU fault will be unable to migrate the page back to system memory, or worse.
Clear the page->mapping field when freeing the ZONE_DEVICE page so stale pointer data doesn't affect future page use.
Link: http://lkml.kernel.org/r/20190719192955.30462-3-rcampbell@nvidia.com
Fixes: b7a523109fb5c9d2d6dd ("mm: don't clear ->mapping in hmm_devmem_free")
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Jan Kara <jack@suse.cz>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/memremap.c |   24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
--- a/mm/memremap.c~mm-hmm-fix-zone_device-anon-page-mapping-reuse
+++ a/mm/memremap.c
@@ -403,6 +403,30 @@ void __put_devmap_managed_page(struct pa
 
 		mem_cgroup_uncharge(page);
 
+		/*
+		 * When a device_private page is freed, the page->mapping field
+		 * may still contain a (stale) mapping value. For example, the
+		 * lower bits of page->mapping may still identify the page as
+		 * an anonymous page. Ultimately, this entire field is just
+		 * stale and wrong, and it will cause errors if not cleared.
+		 * One example is:
+		 *
+		 *  migrate_vma_pages()
+		 *    migrate_vma_insert_page()
+		 *      page_add_new_anon_rmap()
+		 *        __page_set_anon_rmap()
+		 *          ...checks page->mapping, via PageAnon(page) call,
+		 *            and incorrectly concludes that the page is an
+		 *            anonymous page. Therefore, it incorrectly,
+		 *            silently fails to set up the new anon rmap.
+		 *
+		 * For other types of ZONE_DEVICE pages, migration is either
+		 * handled differently or not done at all, so there is no need
+		 * to clear page->mapping.
+		 */
+		if (is_device_private_page(page))
+			page->mapping = NULL;
+
 		page->pgmap->ops->page_free(page);
 	} else if (!count)
 		__put_page(page);
_
Patches currently in -mm which might be from rcampbell@nvidia.com are