When migrating an anonymous private page to a ZONE_DEVICE private page, the source page->mapping and page->index fields are copied to the destination ZONE_DEVICE struct page and the page_mapcount() is increased. This is so rmap_walk() can be used to unmap and migrate the page back to system memory. However, try_to_unmap_one() computes the subpage pointer from a swap pte, which yields an invalid page pointer, and a kernel panic results such as:
BUG: unable to handle page fault for address: ffffea1fffffffc8
Currently, only single pages can be migrated to device private memory, so no subpage computation is needed and it can be set to "page".
Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/rmap.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/mm/rmap.c b/mm/rmap.c
index e5dfe2ae6b0d..ec1af8b60423 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1476,6 +1476,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			 * No need to invalidate here it will synchronize on
 			 * against the special swap migration pte.
 			 */
+			subpage = page;
 			goto discard;
 		}
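For context on how the bad pointer is produced, below is a simplified, annotated sketch of the relevant flow in try_to_unmap_one() (mm/rmap.c) at this point in the series. It is an approximation for illustration only, not a verbatim or buildable excerpt; elided code is marked with "...".

	while (page_vma_mapped_walk(&pvmw)) {
		/*
		 * For an ordinary present pte this recovers which subpage of
		 * a compound page the pte maps. A device private page,
		 * however, is mapped by a non-present device private swap
		 * entry, so pte_pfn() on that entry is meaningless and
		 * subpage ends up pointing at a bogus struct page.
		 */
		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
		address = pvmw.address;

		if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) &&
		    is_zone_device_page(page)) {
			/*
			 * ... replace the device private entry with a special
			 * swap migration pte ...
			 */

			/*
			 * The fix: only single (order-0) pages are migrated
			 * to device private memory, so the subpage is simply
			 * the page itself.
			 */
			subpage = page;
			goto discard;
		}
		/* ... other unmap cases ... */
discard:
		/*
		 * Without the fix, this is where the bogus subpage pointer
		 * gets dereferenced and the kernel panics.
		 */
		page_remove_rmap(subpage, PageHuge(page));
		put_page(page);
	}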
On 7/16/19 5:14 PM, Ralph Campbell wrote:
> When migrating an anonymous private page to a ZONE_DEVICE private page, the source page->mapping and page->index fields are copied to the destination ZONE_DEVICE struct page and the page_mapcount() is increased. This is so rmap_walk() can be used to unmap and migrate the page back to system memory. However, try_to_unmap_one() computes the subpage pointer from a swap pte, which yields an invalid page pointer, and a kernel panic results such as:
>
> BUG: unable to handle page fault for address: ffffea1fffffffc8
>
> Currently, only single pages can be migrated to device private memory, so no subpage computation is needed and it can be set to "page".
>
> Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> Cc: "Jérôme Glisse" <jglisse@redhat.com>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Jason Gunthorpe <jgg@mellanox.com>
> Cc: stable@vger.kernel.org
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>  mm/rmap.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index e5dfe2ae6b0d..ec1af8b60423 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1476,6 +1476,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  			 * No need to invalidate here it will synchronize on
>  			 * against the special swap migration pte.
>  			 */
> +			subpage = page;
>  			goto discard;
>  		}
The problem is clear, but the solution still leaves the code ever so slightly more confusing, and it was already pretty difficult to begin with.
I still hold out hope for some comment documentation at least, and maybe even just removing the subpage variable (as Jerome mentioned, offline) as well.
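To make that concrete, one possible shape of the comment documentation being asked for might look like the following; this is purely an illustration of the suggestion, not something proposed in the patch:

		if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) &&
		    is_zone_device_page(page)) {
			...
			/*
			 * Device private pages are only ever migrated as
			 * single (order-0) pages, and the pte here is a
			 * device private swap entry rather than a present
			 * pte, so the pte-based subpage computation above
			 * does not apply. The subpage is simply the page.
			 */
			subpage = page;
			goto discard;
		}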
Jerome?
thanks,