On 2025/9/29 12:44, Dev Jain wrote:
> On 28/09/25 10:18 am, Lance Yang wrote:
>> From: Lance Yang <lance.yang@linux.dev>
>>
>> When splitting an mTHP and replacing a zero-filled subpage with the
>> shared zeropage, try_to_map_unused_to_zeropage() currently drops the
>> soft-dirty bit.
>>
>> For userspace tools like CRIU, which rely on the soft-dirty mechanism
>> for incremental snapshots, losing this bit means modified pages are
>> missed, leading to inconsistent memory state after restore.
>>
>> Preserve the soft-dirty bit from the old PTE when creating the
>> zeropage mapping to ensure modified pages are correctly tracked.
>>
>> Cc: stable@vger.kernel.org
>> Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
>> Signed-off-by: Lance Yang <lance.yang@linux.dev>
>> ---
>>  mm/migrate.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index ce83c2c3c287..bf364ba07a3f 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -322,6 +322,10 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
>>  	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
>>  					pvmw->vma->vm_page_prot));
>> +
>> +	if (pte_swp_soft_dirty(ptep_get(pvmw->pte)))
>> +		newpte = pte_mksoft_dirty(newpte);
>> +
>>  	set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
>>  	dec_mm_counter(pvmw->vma->vm_mm, mm_counter(folio));
> I think this should work.
>
> You can pass old_pte = ptep_get(pvmw->pte) to this function to avoid
> calling ptep_get() multiple times.
Good catch! Will do in v2, thanks.