On Mon, Nov 14, 2022 at 05:09:32PM +0100, David Hildenbrand wrote:
On 10.11.22 21:31, Peter Xu wrote:
Ives van Hoorne from codesandbox.io reported an issue regarding possible data loss of uffd-wp when applied to memfds on heavily loaded systems. The symptom is that some pages read back with data mismatches in the snapshot child VMs.
Here I can also reproduce it with a Rust reproducer provided by Ives that keeps taking snapshots of a 256MB VM. On a 32G system, when I initiate 80 instances I can trigger the issue within ten minutes.
It turns out that some pages get written through even when uffd-wp is applied to the pte.
The problem is that, when removing migration entries, we never worried about the write bit as long as we knew it was not a write migration entry. That is not always safe: for some memory types (e.g. writable shmem) mk_pte() can return a pte with the write bit set, so to recover the migration entry to its original state we need to explicitly wr-protect the pte, otherwise it will keep the write bit even though it was a read migration entry.
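To make the write-bit leak concrete outside the kernel, here is a tiny userspace toy model (the names and flag values are made up purely for illustration, this is not kernel code): a writable shared shmem vma hands mk_pte() the write bit via vm_page_prot, so restoring a read migration entry without an explicit wr-protect comes back writable, while the fixed path clears it:

#include <assert.h>

/* Toy pte: only the write bit matters for this example. */
#define TOY_PTE_WRITE (1u << 0)

/* Stand-in for mk_pte(): the pte starts out as whatever the vma's
 * page protection says, so a writable shared shmem vma hands us the
 * write bit for free. */
static unsigned int toy_mk_pte(unsigned int vm_page_prot)
{
	return vm_page_prot;
}

static unsigned int toy_pte_wrprotect(unsigned int pte)
{
	return pte & ~TOY_PTE_WRITE;
}

/* Old behaviour: a read (non-writable) migration entry never had its
 * write bit touched, so the bit from toy_mk_pte() leaks through. */
static unsigned int restore_old(unsigned int prot, int writable_entry)
{
	unsigned int pte = toy_mk_pte(prot);

	if (writable_entry)
		pte |= TOY_PTE_WRITE;	/* what maybe_mkwrite() would do */
	return pte;
}

/* Fixed behaviour: explicitly wr-protect anything that is not a
 * write migration entry. */
static unsigned int restore_fixed(unsigned int prot, int writable_entry)
{
	unsigned int pte = toy_mk_pte(prot);

	if (writable_entry)
		pte |= TOY_PTE_WRITE;
	else
		pte = toy_pte_wrprotect(pte);
	return pte;
}

int main(void)
{
	unsigned int shmem_shared_prot = TOY_PTE_WRITE;	/* writable shmem vma */

	/* Read migration entry: old path comes back writable, fixed one not. */
	assert(restore_old(shmem_shared_prot, 0) & TOY_PTE_WRITE);
	assert(!(restore_fixed(shmem_shared_prot, 0) & TOY_PTE_WRITE));
	return 0;
}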
For uffd it can cause write-through. I didn't verify, but I think it will be the same for mprotect()ed pages, where after migration we could miss the sigbus instead.
I don't think so. mprotect() handling relies on vma->vm_page_prot, which is supposed to do the right thing. E.g., map the pte protnone without VM_READ/VM_WRITE/....
I removed that example when I posted v3; feel free to have a look.
The relevant uffd code was introduced with the anon support, in commit f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration", 2020-04-07). However, anon shouldn't suffer from this problem because anon ptes should always have the write bit cleared already, so that may not be the proper Fixes target. To satisfy the backporting need, I'm attaching the Fixes tag to the uffd-wp shmem support instead. Since no one has had an issue with mprotect, I assume that is also the kernel version from which we should start backporting to stable, and we shouldn't need to worry about anything before it.
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: stable@vger.kernel.org
Fixes: b1f9e876862d ("mm/uffd: enable write protection for shmem & hugetlbfs")
Reported-by: Ives van Hoorne <ives@codesandbox.io>
Signed-off-by: Peter Xu <peterx@redhat.com>
 mm/migrate.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index dff333593a8a..8b6351c08c78 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -213,8 +213,14 @@ static bool remove_migration_pte(struct folio *folio,
 			pte = pte_mkdirty(pte);
 		if (is_writable_migration_entry(entry))
 			pte = maybe_mkwrite(pte, vma);
-		else if (pte_swp_uffd_wp(*pvmw.pte))
+		else
+			/* NOTE: mk_pte can have write bit set */
+			pte = pte_wrprotect(pte);
Any particular reason not to simply glue this to pte_swp_uffd_wp(), because only that needs special care:
	if (pte_swp_uffd_wp(*pvmw.pte)) {
		pte = pte_wrprotect(pte);
		pte = pte_mkuffd_wp(pte);
	}
And that would match what actually should have been done in commit f45ec5ff16a7 -- only special-case uffd-wp.
Note that I think there are cases where we have a PTE that was !writable, but after migration we can map it writable.
The thing is, recovering the pte to its original form is the safest approach to me, so I think we need a justification for why it's always safe to set the write bit.
Or do you perhaps have a solid clue and think it's always safe?
BTW, does unuse_pte() need similar care?
	new_pte = pte_mkold(mk_pte(page, vma->vm_page_prot));
	if (pte_swp_uffd_wp(*pte))
		new_pte = pte_mkuffd_wp(new_pte);
	set_pte_at(vma->vm_mm, addr, pte, new_pte);
I think the unuse path is fine because unuse only applies to private mappings, so we should always have the write bit removed there within mk_pte().
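As a toy-level illustration of that point (a made-up helper, not the kernel's actual protection_map): a private mapping's page protection doesn't carry the hardware write bit since writes have to go through CoW, so the pte built in unuse_pte() can never come out writable, unlike the shared writable shmem case above:

#include <assert.h>

#define TOY_PTE_WRITE (1u << 0)

/* Toy version of the vm_get_page_prot() idea: shared+write gets the
 * hardware write bit, private+write does not (CoW handles the write
 * later at fault time). */
static unsigned int toy_page_prot(int shared, int write)
{
	if (shared && write)
		return TOY_PTE_WRITE;
	return 0;
}

int main(void)
{
	/* unuse_pte() only ever sees private mappings, so the pte built
	 * from the vma's page protection is never writable. */
	assert(!(toy_page_prot(/*shared=*/0, /*write=*/1) & TOY_PTE_WRITE));

	/* The shmem case discussed above is shared+write, which is. */
	assert(toy_page_prot(1, 1) & TOY_PTE_WRITE);
	return 0;
}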
Thanks,