This patch series introduces the UFFDIO_REMAP feature to userfaultfd. It has long been implemented and maintained by Andrea in his local tree [1], but was not upstreamed for lack of use cases where this approach would be better than allocating a new page and copying the contents.
UFFDIO_COPY performs ~20% better than UFFDIO_REMAP when the application needs pages to be allocated [2]. However, with UFFDIO_REMAP, if pages are available (in userspace) for recycling, as is usually the case in heap compaction algorithms, then we can avoid the page allocation and memcpy (done by UFFDIO_COPY). Also, since the pages are recycled in userspace, we avoid the need to release them back to the kernel (via madvise) [3]. We see over a 40% reduction (on a Google Pixel 6 device) in the compacting thread's completion time by using UFFDIO_REMAP instead of UFFDIO_COPY. This was measured with a benchmark that emulates a heap compaction implementation on top of userfaultfd (to allow concurrent accesses by application threads). More details of the use case are given in [3].
Furthermore, UFFDIO_REMAP enables remapping swapped-out pages without touching them within the same vma. Today this can only be done with mremap(), which however forces splitting the vma.
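To make the intended usage concrete, here is a minimal userspace sketch (illustrative only: the function name and surrounding setup are assumptions, and it relies on the uffdio_remap uAPI added later in this series) of how a compaction thread could hand an already-populated, recycled source page to a faulting destination address instead of allocating and copying:

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/userfaultfd.h>

/*
 * uffd is a userfaultfd registered on the destination range; src points
 * into the compaction pool, which is typically marked once at setup with
 * madvise(pool, pool_len, MADV_DONTFORK) so the source mapcount stays 1.
 */
static int resolve_with_remap(int uffd, unsigned long dst, unsigned long src,
			      unsigned long len)
{
	struct uffdio_remap remap = {
		.dst  = dst,	/* faulting (destination) address, page aligned */
		.src  = src,	/* recycled source page(s) in this address space */
		.len  = len,
		.mode = 0,	/* or UFFDIO_REMAP_MODE_DONTWAKE, _ALLOW_SRC_HOLES */
	};

	if (ioctl(uffd, UFFDIO_REMAP, &remap) < 0)
		return -1;	/* remap.remap reports how many bytes were moved */
	return 0;
}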
Main changes since Andrea's last version [1]:
- Trivial translations from page to folio, mmap_sem to mmap_lock
- Replace pmd_trans_unstable() with pte_offset_map_nolock() and handle its possible failure
- Move pte mapping into remap_pages_pte to allow for retries when the source page or anon_vma is contended. Since pte_offset_map_nolock() starts an RCU read section, we can't block anymore after mapping a pte, so we have to unmap the ptes, do the locking and retry.
- Add and use anon_vma_trylock_write() to avoid blocking while in an RCU read section.
- Accommodate changes in the mmu_notifier_range_init() API, switch to mmu_notifier_invalidate_range_start_nonblock() to avoid blocking while in an RCU read section.
- Open-code the now removed __swp_swapcount()
- Replace pmd_read_atomic() with pmdp_get_lockless()
- Add a new selftest for UFFDIO_REMAP
Changes since v1 [4]:
- add mmget_not_zero in userfaultfd_remap, per Jann Horn
- removed extern from function definitions, per Matthew Wilcox
- converted to folios in remap_pages_huge_pmd, per Matthew Wilcox
- use PageAnonExclusive in remap_pages_huge_pmd, per David Hildenbrand
- handle pgtable transfers between MMs, per Jann Horn
- ignore concurrent A/D pte bit changes, per Jann Horn
- split functions into smaller units, per David Hildenbrand
- test for folio_test_large in remap_anon_pte, per Matthew Wilcox
- use pte_swp_exclusive for swapcount check, per David Hildenbrand
- eliminated use of mmu_notifier_invalidate_range_start_nonblock, per Jann Horn
- simplified THP alignment checks, per Jann Horn
- refactored the loop inside remap_pages, per Jann Horn
- additional clarifying comments, per Jann Horn
[1] https://gitlab.com/aarcange/aa/-/commit/2aec7aea56b10438a3881a20a411aa4b1fc1...
[2] https://lore.kernel.org/all/1425575884-2574-1-git-send-email-aarcange@redhat...
[3] https://lore.kernel.org/linux-mm/CA+EESO4uO84SSnBhArH4HvLNhaUQ5nZKNKXqxRCyjn...
[4] https://lore.kernel.org/all/20230914152620.2743033-1-surenb@google.com/
Andrea Arcangeli (2):
  userfaultfd: UFFDIO_REMAP: rmap preparation
  userfaultfd: UFFDIO_REMAP uABI

Suren Baghdasaryan (1):
  selftests/mm: add UFFDIO_REMAP ioctl test
 fs/userfaultfd.c                             |  63 ++
 include/linux/rmap.h                         |   5 +
 include/linux/userfaultfd_k.h                |  12 +
 include/uapi/linux/userfaultfd.h             |  22 +
 mm/huge_memory.c                             | 130 ++++
 mm/khugepaged.c                              |   3 +
 mm/rmap.c                                    |  13 +
 mm/userfaultfd.c                             | 590 +++++++++++++++++++
 tools/testing/selftests/mm/uffd-common.c     |  41 +-
 tools/testing/selftests/mm/uffd-common.h     |   1 +
 tools/testing/selftests/mm/uffd-unit-tests.c |  62 ++
 11 files changed, 940 insertions(+), 2 deletions(-)
From: Andrea Arcangeli aarcange@redhat.com
As far as the rmap code is concerned, UFFDIO_REMAP only alters the page->mapping and page->index. It does it while holding the page lock. However folio_referenced() is doing rmap walks without taking the folio lock first, so folio_lock_anon_vma_read() must be updated to re-check that the folio->mapping didn't change after we obtained the anon_vma read lock.
UFFDIO_REMAP takes the anon_vma lock for writing before altering the folio->mapping, so if the folio->mapping is still the same after obtaining the anon_vma read lock (without the folio lock), the rmap walks can go ahead safely (and UFFDIO_REMAP will wait for the rmap walk to complete before proceeding).
UFFDIO_REMAP serializes against itself with the folio lock.
All other places that take the anon_vma lock while holding the mmap_lock for writing don't need to check whether the folio->mapping has changed after taking the anon_vma lock, regardless of the folio lock, because UFFDIO_REMAP holds the mmap_lock for reading.
There's one constraint enforced to allow this simplification: the source pages passed to UFFDIO_REMAP must be mapped only in one vma, but this constraint is an acceptable tradeoff for UFFDIO_REMAP users.
The source addresses passed to UFFDIO_REMAP can be set as VM_DONTCOPY with MADV_DONTFORK to avoid any risk of the mapcount of the pages increasing if some thread of the process forks() before UFFDIO_REMAP runs.
Signed-off-by: Andrea Arcangeli aarcange@redhat.com
Signed-off-by: Suren Baghdasaryan surenb@google.com
---
 mm/rmap.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)
diff --git a/mm/rmap.c b/mm/rmap.c
index ec7f8e6c9e48..c1ebbd23fa61 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -542,6 +542,7 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	struct anon_vma *root_anon_vma;
 	unsigned long anon_mapping;
 
+repeat:
 	rcu_read_lock();
 	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
@@ -586,6 +587,18 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	rcu_read_unlock();
 	anon_vma_lock_read(anon_vma);
 
+	/*
+	 * Check if UFFDIO_REMAP changed the anon_vma. This is needed
+	 * because we don't assume the folio was locked.
+	 */
+	if (unlikely((unsigned long) READ_ONCE(folio->mapping) !=
+		     anon_mapping)) {
+		anon_vma_unlock_read(anon_vma);
+		put_anon_vma(anon_vma);
+		anon_vma = NULL;
+		goto repeat;
+	}
+
 	if (atomic_dec_and_test(&anon_vma->refcount)) {
 		/*
 		 * Oops, we held the last refcount, release the lock
Suren,
Sorry to review so late.
On Fri, Sep 22, 2023 at 06:31:44PM -0700, Suren Baghdasaryan wrote:
diff --git a/mm/rmap.c b/mm/rmap.c
index ec7f8e6c9e48..c1ebbd23fa61 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -542,6 +542,7 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	struct anon_vma *root_anon_vma;
 	unsigned long anon_mapping;
 
+repeat:
 	rcu_read_lock();
 	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
@@ -586,6 +587,18 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	rcu_read_unlock();
 	anon_vma_lock_read(anon_vma);
+	/*
+	 * Check if UFFDIO_REMAP changed the anon_vma. This is needed
+	 * because we don't assume the folio was locked.
+	 */
+	if (unlikely((unsigned long) READ_ONCE(folio->mapping) !=
+		     anon_mapping)) {
+		anon_vma_unlock_read(anon_vma);
+		put_anon_vma(anon_vma);
+		anon_vma = NULL;
+		goto repeat;
+	}
We have an open-coded fast path above this:
	if (down_read_trylock(&root_anon_vma->rwsem)) {
		/*
		 * If the folio is still mapped, then this anon_vma is still
		 * its anon_vma, and holding the mutex ensures that it will
		 * not go away, see anon_vma_free().
		 */
		if (!folio_mapped(folio)) {
			up_read(&root_anon_vma->rwsem);
			anon_vma = NULL;
		}
		goto out;
	}
Would that also need such check?
	if (atomic_dec_and_test(&anon_vma->refcount)) {
		/*
		 * Oops, we held the last refcount, release the lock
-- 2.42.0.515.g380fc7ccd1-goog
On Thu, Sep 28, 2023 at 9:23 AM Peter Xu peterx@redhat.com wrote:
Suren,
Sorry to review so late.
On Fri, Sep 22, 2023 at 06:31:44PM -0700, Suren Baghdasaryan wrote:
diff --git a/mm/rmap.c b/mm/rmap.c
index ec7f8e6c9e48..c1ebbd23fa61 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -542,6 +542,7 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	struct anon_vma *root_anon_vma;
 	unsigned long anon_mapping;
 
+repeat:
 	rcu_read_lock();
 	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
@@ -586,6 +587,18 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	rcu_read_unlock();
 	anon_vma_lock_read(anon_vma);
/*
* Check if UFFDIO_REMAP changed the anon_vma. This is needed
* because we don't assume the folio was locked.
*/
if (unlikely((unsigned long) READ_ONCE(folio->mapping) !=
anon_mapping)) {
anon_vma_unlock_read(anon_vma);
put_anon_vma(anon_vma);
anon_vma = NULL;
goto repeat;
}
We have an open-coded fast path above this:
	if (down_read_trylock(&root_anon_vma->rwsem)) {
		/*
		 * If the folio is still mapped, then this anon_vma is still
		 * its anon_vma, and holding the mutex ensures that it will
		 * not go away, see anon_vma_free().
		 */
		if (!folio_mapped(folio)) {
			up_read(&root_anon_vma->rwsem);
			anon_vma = NULL;
		}
		goto out;
	}
Would that also need such check?
Yes, I think they should be handled the same way. Will fix. Thanks!
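For reference, one possible shape of such a fix, mirroring the re-check this patch adds after anon_vma_lock_read() (a sketch only, the actual next revision may differ):

	if (down_read_trylock(&root_anon_vma->rwsem)) {
		/*
		 * Re-check that UFFDIO_REMAP did not change folio->mapping
		 * between the READ_ONCE() above and taking the rwsem; the
		 * folio lock is not held here either.
		 */
		if (unlikely((unsigned long) READ_ONCE(folio->mapping) !=
			     anon_mapping)) {
			up_read(&root_anon_vma->rwsem);
			rcu_read_unlock();
			goto repeat;
		}
		/*
		 * If the folio is still mapped, then this anon_vma is still
		 * its anon_vma, and holding the mutex ensures that it will
		 * not go away, see anon_vma_free().
		 */
		if (!folio_mapped(folio)) {
			up_read(&root_anon_vma->rwsem);
			anon_vma = NULL;
		}
		goto out;
	}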
	if (atomic_dec_and_test(&anon_vma->refcount)) {
		/*
		 * Oops, we held the last refcount, release the lock
-- 2.42.0.515.g380fc7ccd1-goog
-- Peter Xu
On 23.09.23 03:31, Suren Baghdasaryan wrote:
From: Andrea Arcangeli aarcange@redhat.com
As far as the rmap code is concerned, UFFDIO_REMAP only alters the page->mapping and page->index. It does it while holding the page lock. However folio_referenced() is doing rmap walks without taking the folio lock first, so folio_lock_anon_vma_read() must be updated to re-check that the folio->mapping didn't change after we obtained the anon_vma read lock.
I'm curious: why don't we need this for existing users of page_move_anon_rmap()? What's special about UFFDIO_REMAP?
On Mon, Oct 02, 2023 at 04:42:50PM +0200, David Hildenbrand wrote:
On 23.09.23 03:31, Suren Baghdasaryan wrote:
From: Andrea Arcangeli aarcange@redhat.com
As far as the rmap code is concerned, UFFDIO_REMAP only alters the page->mapping and page->index. It does it while holding the page lock. However folio_referenced() is doing rmap walks without taking the folio lock first, so folio_lock_anon_vma_read() must be updated to re-check that the folio->mapping didn't change after we obtained the anon_vma read lock.
I'm curious: why don't we need this for existing users of page_move_anon_rmap()? What's special about UFFDIO_REMAP?
Totally no expert on anon vma so I'm prone to errors, but IIUC the difference here is root anon vma cannot change in page_move_anon_rmap(), while UFFDIO_REMAP can.
Thanks,
On 02.10.23 17:23, Peter Xu wrote:
On Mon, Oct 02, 2023 at 04:42:50PM +0200, David Hildenbrand wrote:
On 23.09.23 03:31, Suren Baghdasaryan wrote:
From: Andrea Arcangeli aarcange@redhat.com
As far as the rmap code is concerned, UFFDIO_REMAP only alters the page->mapping and page->index. It does it while holding the page lock. However folio_referenced() is doing rmap walks without taking the folio lock first, so folio_lock_anon_vma_read() must be updated to re-check that the folio->mapping didn't change after we obtained the anon_vma read lock.
I'm curious: why don't we need this for existing users of page_move_anon_rmap()? What's special about UFFDIO_REMAP?
Totally no expert on anon vma so I'm prone to errors, but IIUC the difference here is root anon vma cannot change in page_move_anon_rmap(), while UFFDIO_REMAP can.
That does sound reasonable, thanks.
Probably we can do better with the patch description (once [1] is used to move the folio to the other anon_vma).
"mm/rmap: support move to different root anon_vma in folio_move_anon_rmap()
For now, folio_move_anon_rmap() was only used to move a folio to a different anon_vma after fork(), whereby the root anon_vma stayed unchanged. For that, it was sufficient to hold the page lock when calling folio_move_anon_rmap().
However, we want to make use of folio_move_anon_rmap() to move folios between VMAs that have a different root anon_vma. As folio_referenced() performs an RMAP walk without holding the page lock but only holding the anon_vma in read mode, holding the page lock is insufficient.
When moving to an anon_vma with a different root anon_vma, we'll have to hold both, the page lock and the anon_vma lock in write mode. Consequently, whenever we succeeded in folio_lock_anon_vma_read() to read-lock the anon_vma, we have to re-check if the mapping was changed in the meantime. If that was the case, we have to retry.
Note that folio_move_anon_rmap() must only be called if the anon page is exclusive to a process, and must not be called on KSM folios.
This is a preparation for UFFDIO_REMAP, which will hold the page lock, the anon_vma lock in write mode, and the mmap_lock in read mode. "
In addition, we should document these locking details for folio_move_anon_rmap() and probably not mention UFFDIO_REMAP in the comment in folio_lock_anon_vma_read(), but instead say "folio_move_anon_rmap() might have changed the anon_vma as we might not hold the page lock here."
[1] https://lkml.kernel.org/r/20231002142949.235104-3-david@redhat.com
On Mon, Oct 2, 2023 at 10:30 AM David Hildenbrand david@redhat.com wrote:
On 02.10.23 17:23, Peter Xu wrote:
On Mon, Oct 02, 2023 at 04:42:50PM +0200, David Hildenbrand wrote:
On 23.09.23 03:31, Suren Baghdasaryan wrote:
From: Andrea Arcangeli aarcange@redhat.com
As far as the rmap code is concerned, UFFDIO_REMAP only alters the page->mapping and page->index. It does it while holding the page lock. However folio_referenced() is doing rmap walks without taking the folio lock first, so folio_lock_anon_vma_read() must be updated to re-check that the folio->mapping didn't change after we obtained the anon_vma read lock.
I'm curious: why don't we need this for existing users of page_move_anon_rmap()? What's special about UFFDIO_REMAP?
Totally no expert on anon vma so I'm prone to errors, but IIUC the difference here is root anon vma cannot change in page_move_anon_rmap(), while UFFDIO_REMAP can.
That does sound reasonable, thanks.
Probably we can do better with the patch description (once [1] is used to move the folio to the other anon_vma).
I'll develop the next version with your patches in the baseline. Hopefully by the time of my posting your patches will be in the mm-unstable.
"mm/rmap: support move to different root anon_vma in folio_move_anon_rmap()
For now, folio_move_anon_rmap() was only used to move a folio to a different anon_vma after fork(), whereby the root anon_vma stayed unchanged. For that, it was sufficient to hold the page lock when calling folio_move_anon_rmap().
However, we want to make use of folio_move_anon_rmap() to move folios between VMAs that have a different root anon_vma. As folio_referenced() performs an RMAP walk without holding the page lock but only holding the anon_vma in read mode, holding the page lock is insufficient.
When moving to an anon_vma with a different root anon_vma, we'll have to hold both, the page lock and the anon_vma lock in write mode. Consequently, whenever we succeeded in folio_lock_anon_vma_read() to read-lock the anon_vma, we have to re-check if the mapping was changed in the meantime. If that was the case, we have to retry.
Note that folio_move_anon_rmap() must only be called if the anon page is exclusive to a process, and must not be called on KSM folios.
This is a preparation for UFFDIO_REMAP, which will hold the page lock, the anon_vma lock in write mode, and the mmap_lock in read mode. "
Thanks for taking time to write this up! Looks really clear to me. I'll reuse it.
In addition, we should document these locking details for folio_move_anon_rmap() and probably not mention UFFDIO_REMAP in the comment in folio_lock_anon_vma_read(), but instead say "folio_move_anon_rmap() might have changed the anon_vma as we might not hold the page lock here."
Sounds good. Will add.
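For instance, the two comments could read roughly as follows (a sketch based on the wording above, not final text):

	/* In folio_lock_anon_vma_read(), above the re-check: */
	/*
	 * folio_move_anon_rmap() might have changed the anon_vma as we
	 * might not hold the folio lock here.
	 */

	/* Above folio_move_anon_rmap(): */
	/*
	 * The caller must hold the folio lock and the anon folio must be
	 * exclusive to a single process (never call this on KSM folios).
	 * When moving the folio to an anon_vma with a different root, the
	 * anon_vma lock must additionally be held in write mode so that
	 * concurrent rmap walks in folio_lock_anon_vma_read() can detect
	 * the change and retry.
	 */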
[1] https://lkml.kernel.org/r/20231002142949.235104-3-david@redhat.com
-- Cheers,
David / dhildenb
From: Andrea Arcangeli aarcange@redhat.com
This implements the uABI of UFFDIO_REMAP.
Notably one mode bitflag is also forwarded (and in turn known) by the lowlevel remap_pages method.
Signed-off-by: Andrea Arcangeli aarcange@redhat.com
Signed-off-by: Suren Baghdasaryan surenb@google.com
---
Changes since v1:
- add mmget_not_zero in userfaultfd_remap, per Jann Horn
- removed extern from function definitions, per Matthew Wilcox
- converted to folios in remap_pages_huge_pmd, per Matthew Wilcox
- use PageAnonExclusive in remap_pages_huge_pmd, per David Hildenbrand
- handle pgtable transfers between MMs, per Jann Horn
- ignore concurrent A/D pte bit changes, per Jann Horn
- split functions into smaller units, per David Hildenbrand
- test for folio_test_large in remap_anon_pte, per Matthew Wilcox
- use pte_swp_exclusive for swapcount check, per David Hildenbrand
- eliminated use of mmu_notifier_invalidate_range_start_nonblock, per Jann Horn
- simplified THP alignment checks, per Jann Horn
- refactored the loop inside remap_pages, per Jann Horn
- additional clarifying comments, per Jann Horn
 fs/userfaultfd.c                 |  63 ++++
 include/linux/rmap.h             |   5 +
 include/linux/userfaultfd_k.h    |  12 +
 include/uapi/linux/userfaultfd.h |  22 ++
 mm/huge_memory.c                 | 130 +++++++
 mm/khugepaged.c                  |   3 +
 mm/userfaultfd.c                 | 590 +++++++++++++++++++++++++++++++
 7 files changed, 825 insertions(+)
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index 56eaae9dac1a..5b6bb20f4518 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -2027,6 +2027,66 @@ static inline unsigned int uffd_ctx_features(__u64 user_features) return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED; }
+static int userfaultfd_remap(struct userfaultfd_ctx *ctx, + unsigned long arg) +{ + __s64 ret; + struct uffdio_remap uffdio_remap; + struct uffdio_remap __user *user_uffdio_remap; + struct userfaultfd_wake_range range; + + user_uffdio_remap = (struct uffdio_remap __user *) arg; + + ret = -EAGAIN; + if (atomic_read(&ctx->mmap_changing)) + goto out; + + ret = -EFAULT; + if (copy_from_user(&uffdio_remap, user_uffdio_remap, + /* don't copy "remap" last field */ + sizeof(uffdio_remap)-sizeof(__s64))) + goto out; + + ret = validate_range(ctx->mm, uffdio_remap.dst, uffdio_remap.len); + if (ret) + goto out; + + ret = validate_range(current->mm, uffdio_remap.src, uffdio_remap.len); + if (ret) + goto out; + + ret = -EINVAL; + if (uffdio_remap.mode & ~(UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES| + UFFDIO_REMAP_MODE_DONTWAKE)) + goto out; + + if (mmget_not_zero(ctx->mm)) { + ret = remap_pages(ctx->mm, current->mm, + uffdio_remap.dst, uffdio_remap.src, + uffdio_remap.len, uffdio_remap.mode); + mmput(ctx->mm); + } else { + return -ESRCH; + } + + if (unlikely(put_user(ret, &user_uffdio_remap->remap))) + return -EFAULT; + if (ret < 0) + goto out; + + /* len == 0 would wake all */ + BUG_ON(!ret); + range.len = ret; + if (!(uffdio_remap.mode & UFFDIO_REMAP_MODE_DONTWAKE)) { + range.start = uffdio_remap.dst; + wake_userfault(ctx, &range); + } + ret = range.len == uffdio_remap.len ? 0 : -EAGAIN; + +out: + return ret; +} + /* * userland asks for a certain API version and we return which bits * and ioctl commands are implemented in this kernel for such API @@ -2113,6 +2173,9 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd, case UFFDIO_ZEROPAGE: ret = userfaultfd_zeropage(ctx, arg); break; + case UFFDIO_REMAP: + ret = userfaultfd_remap(ctx, arg); + break; case UFFDIO_WRITEPROTECT: ret = userfaultfd_writeprotect(ctx, arg); break; diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 51cc21ebb568..614c4b439907 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -121,6 +121,11 @@ static inline void anon_vma_lock_write(struct anon_vma *anon_vma) down_write(&anon_vma->root->rwsem); }
+static inline int anon_vma_trylock_write(struct anon_vma *anon_vma) +{ + return down_write_trylock(&anon_vma->root->rwsem); +} + static inline void anon_vma_unlock_write(struct anon_vma *anon_vma) { up_write(&anon_vma->root->rwsem); diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h index ac8c6854097c..9ea2c43ad4b7 100644 --- a/include/linux/userfaultfd_k.h +++ b/include/linux/userfaultfd_k.h @@ -93,6 +93,18 @@ extern int mwriteprotect_range(struct mm_struct *dst_mm, extern long uffd_wp_range(struct vm_area_struct *vma, unsigned long start, unsigned long len, bool enable_wp);
+/* remap_pages */ +void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2); +void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2); +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm, + unsigned long dst_start, unsigned long src_start, + unsigned long len, __u64 flags); +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval, + struct vm_area_struct *dst_vma, + struct vm_area_struct *src_vma, + unsigned long dst_addr, unsigned long src_addr); + /* mm helpers */ static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma, struct vm_userfaultfd_ctx vm_ctx) diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h index 62151706c5a3..22d1c43e39f9 100644 --- a/include/uapi/linux/userfaultfd.h +++ b/include/uapi/linux/userfaultfd.h @@ -49,6 +49,7 @@ ((__u64)1 << _UFFDIO_WAKE | \ (__u64)1 << _UFFDIO_COPY | \ (__u64)1 << _UFFDIO_ZEROPAGE | \ + (__u64)1 << _UFFDIO_REMAP | \ (__u64)1 << _UFFDIO_WRITEPROTECT | \ (__u64)1 << _UFFDIO_CONTINUE | \ (__u64)1 << _UFFDIO_POISON) @@ -72,6 +73,7 @@ #define _UFFDIO_WAKE (0x02) #define _UFFDIO_COPY (0x03) #define _UFFDIO_ZEROPAGE (0x04) +#define _UFFDIO_REMAP (0x05) #define _UFFDIO_WRITEPROTECT (0x06) #define _UFFDIO_CONTINUE (0x07) #define _UFFDIO_POISON (0x08) @@ -91,6 +93,8 @@ struct uffdio_copy) #define UFFDIO_ZEROPAGE _IOWR(UFFDIO, _UFFDIO_ZEROPAGE, \ struct uffdio_zeropage) +#define UFFDIO_REMAP _IOWR(UFFDIO, _UFFDIO_REMAP, \ + struct uffdio_remap) #define UFFDIO_WRITEPROTECT _IOWR(UFFDIO, _UFFDIO_WRITEPROTECT, \ struct uffdio_writeprotect) #define UFFDIO_CONTINUE _IOWR(UFFDIO, _UFFDIO_CONTINUE, \ @@ -340,6 +344,24 @@ struct uffdio_poison { __s64 updated; };
+struct uffdio_remap { + __u64 dst; + __u64 src; + __u64 len; + /* + * Especially if used to atomically remove memory from the + * address space the wake on the dst range is not needed. + */ +#define UFFDIO_REMAP_MODE_DONTWAKE ((__u64)1<<0) +#define UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES ((__u64)1<<1) + __u64 mode; + /* + * "remap" is written by the ioctl and must be at the end: the + * copy_from_user will not read the last 8 bytes. + */ + __s64 remap; +}; + /* * Flags for the userfaultfd(2) system call itself. */ diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 064fbd90822b..a8c898df36db 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1932,6 +1932,136 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, return ret; }
+#ifdef CONFIG_USERFAULTFD +/* + * The PT lock for src_pmd and the mmap_lock for reading are held by + * the caller, but it must return after releasing the + * page_table_lock. We're guaranteed the src_pmd is a pmd_trans_huge + * until the PT lock of the src_pmd is released. Just move the page + * from src_pmd to dst_pmd if possible. Return zero if succeeded in + * moving the page, -EAGAIN if it needs to be repeated by the caller, + * or other errors in case of failure. + */ +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval, + struct vm_area_struct *dst_vma, + struct vm_area_struct *src_vma, + unsigned long dst_addr, unsigned long src_addr) +{ + pmd_t _dst_pmd, src_pmdval; + struct page *src_page; + struct folio *src_folio; + struct anon_vma *src_anon_vma, *dst_anon_vma; + spinlock_t *src_ptl, *dst_ptl; + pgtable_t src_pgtable, dst_pgtable; + struct mmu_notifier_range range; + int err = 0; + + src_pmdval = *src_pmd; + src_ptl = pmd_lockptr(src_mm, src_pmd); + + BUG_ON(!spin_is_locked(src_ptl)); + mmap_assert_locked(src_mm); + mmap_assert_locked(dst_mm); + + BUG_ON(!pmd_trans_huge(src_pmdval)); + BUG_ON(!pmd_none(dst_pmdval)); + BUG_ON(src_addr & ~HPAGE_PMD_MASK); + BUG_ON(dst_addr & ~HPAGE_PMD_MASK); + + src_page = pmd_page(src_pmdval); + if (unlikely(!PageAnonExclusive(src_page))) { + spin_unlock(src_ptl); + return -EBUSY; + } + + src_folio = page_folio(src_page); + folio_get(src_folio); + spin_unlock(src_ptl); + + /* preallocate dst_pgtable if needed */ + if (dst_mm != src_mm) { + dst_pgtable = pte_alloc_one(dst_mm); + if (unlikely(!dst_pgtable)) { + err = -ENOMEM; + goto put_folio; + } + } else { + dst_pgtable = NULL; + } + + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr, + src_addr + HPAGE_PMD_SIZE); + mmu_notifier_invalidate_range_start(&range); + + /* block all concurrent rmap walks */ + folio_lock(src_folio); + + /* + * split_huge_page walks the anon_vma chain without the page + * lock. Serialize against it with the anon_vma lock, the page + * lock is not enough. 
+ */ + src_anon_vma = folio_get_anon_vma(src_folio); + if (!src_anon_vma) { + err = -EAGAIN; + goto unlock_folio; + } + anon_vma_lock_write(src_anon_vma); + + dst_ptl = pmd_lockptr(dst_mm, dst_pmd); + double_pt_lock(src_ptl, dst_ptl); + if (unlikely(!pmd_same(*src_pmd, src_pmdval) || + !pmd_same(*dst_pmd, dst_pmdval) || + folio_mapcount(src_folio) != 1)) { + double_pt_unlock(src_ptl, dst_ptl); + err = -EAGAIN; + goto put_anon_vma; + } + + BUG_ON(!folio_test_head(src_folio)); + BUG_ON(!folio_test_anon(src_folio)); + + dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON; + WRITE_ONCE(src_folio->mapping, (struct address_space *) dst_anon_vma); + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr)); + + src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd); + _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot); + _dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma); + set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd); + + src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd); + if (dst_pgtable) { + pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable); + pte_free(src_mm, src_pgtable); + dst_pgtable = NULL; + + mm_inc_nr_ptes(dst_mm); + mm_dec_nr_ptes(src_mm); + add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR); + add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR); + } else { + pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable); + } + double_pt_unlock(src_ptl, dst_ptl); + +put_anon_vma: + anon_vma_unlock_write(src_anon_vma); + put_anon_vma(src_anon_vma); +unlock_folio: + /* unblock rmap walks */ + folio_unlock(src_folio); + mmu_notifier_invalidate_range_end(&range); + if (dst_pgtable) + pte_free(dst_mm, dst_pgtable); +put_folio: + folio_put(src_folio); + + return err; +} +#endif /* CONFIG_USERFAULTFD */ + /* * Returns page table lock pointer if a given pmd maps a thp, NULL otherwise. * diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 88433cc25d8a..af23248b3551 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1135,6 +1135,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address, * Prevent all access to pagetables with the exception of * gup_fast later handled by the ptep_clear_flush and the VM * handled by the anon_vma lock + PG_lock. + * + * UFFDIO_REMAP is prevented to race as well thanks to the + * mmap_lock. 
*/ mmap_write_lock(mm); result = hugepage_vma_revalidate(mm, address, true, &vma, cc); diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 96d9eae5c7cc..5ce5e364373c 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -842,3 +842,593 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, mmap_read_unlock(dst_mm); return err; } + + +void double_pt_lock(spinlock_t *ptl1, + spinlock_t *ptl2) + __acquires(ptl1) + __acquires(ptl2) +{ + spinlock_t *ptl_tmp; + + if (ptl1 > ptl2) { + /* exchange ptl1 and ptl2 */ + ptl_tmp = ptl1; + ptl1 = ptl2; + ptl2 = ptl_tmp; + } + /* lock in virtual address order to avoid lock inversion */ + spin_lock(ptl1); + if (ptl1 != ptl2) + spin_lock_nested(ptl2, SINGLE_DEPTH_NESTING); + else + __acquire(ptl2); +} + +void double_pt_unlock(spinlock_t *ptl1, + spinlock_t *ptl2) + __releases(ptl1) + __releases(ptl2) +{ + spin_unlock(ptl1); + if (ptl1 != ptl2) + spin_unlock(ptl2); + else + __release(ptl2); +} + + +static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, + struct vm_area_struct *dst_vma, + struct vm_area_struct *src_vma, + unsigned long dst_addr, unsigned long src_addr, + pte_t *dst_pte, pte_t *src_pte, + pte_t orig_dst_pte, pte_t orig_src_pte, + spinlock_t *dst_ptl, spinlock_t *src_ptl, + struct folio *src_folio) +{ + struct anon_vma *dst_anon_vma; + + double_pt_lock(dst_ptl, src_ptl); + + if (!pte_same(*src_pte, orig_src_pte) || + !pte_same(*dst_pte, orig_dst_pte) || + folio_test_large(src_folio) || + folio_estimated_sharers(src_folio) != 1) { + double_pt_unlock(dst_ptl, src_ptl); + return -EAGAIN; + } + + BUG_ON(!folio_test_anon(src_folio)); + + dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON; + WRITE_ONCE(src_folio->mapping, + (struct address_space *) dst_anon_vma); + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, + dst_addr)); + + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte); + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot); + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte), + dst_vma); + + set_pte_at(dst_mm, dst_addr, dst_pte, orig_dst_pte); + + if (dst_mm != src_mm) { + inc_mm_counter(dst_mm, MM_ANONPAGES); + dec_mm_counter(src_mm, MM_ANONPAGES); + } + + double_pt_unlock(dst_ptl, src_ptl); + + return 0; +} + +static int remap_swap_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, + unsigned long dst_addr, unsigned long src_addr, + pte_t *dst_pte, pte_t *src_pte, + pte_t orig_dst_pte, pte_t orig_src_pte, + spinlock_t *dst_ptl, spinlock_t *src_ptl) +{ + if (!pte_swp_exclusive(orig_src_pte)) + return -EBUSY; + + double_pt_lock(dst_ptl, src_ptl); + + if (!pte_same(*src_pte, orig_src_pte) || + !pte_same(*dst_pte, orig_dst_pte)) { + double_pt_unlock(dst_ptl, src_ptl); + return -EAGAIN; + } + + orig_src_pte = ptep_get_and_clear(src_mm, src_addr, src_pte); + set_pte_at(dst_mm, dst_addr, dst_pte, orig_src_pte); + + if (dst_mm != src_mm) { + inc_mm_counter(dst_mm, MM_ANONPAGES); + dec_mm_counter(src_mm, MM_ANONPAGES); + } + + double_pt_unlock(dst_ptl, src_ptl); + + return 0; +} + +/* + * The mmap_lock for reading is held by the caller. Just move the page + * from src_pmd to dst_pmd if possible, and return true if succeeded + * in moving the page. 
+ */ +static int remap_pages_pte(struct mm_struct *dst_mm, + struct mm_struct *src_mm, + pmd_t *dst_pmd, + pmd_t *src_pmd, + struct vm_area_struct *dst_vma, + struct vm_area_struct *src_vma, + unsigned long dst_addr, + unsigned long src_addr, + __u64 mode) +{ + swp_entry_t entry; + pte_t orig_src_pte, orig_dst_pte; + spinlock_t *src_ptl, *dst_ptl; + pte_t *src_pte = NULL; + pte_t *dst_pte = NULL; + + struct folio *src_folio = NULL; + struct anon_vma *src_anon_vma = NULL; + struct mmu_notifier_range range; + int err = 0; + + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, + src_addr, src_addr + PAGE_SIZE); + mmu_notifier_invalidate_range_start(&range); +retry: + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl); + + /* If an huge pmd materialized from under us fail */ + if (unlikely(!dst_pte)) { + err = -EFAULT; + goto out; + } + + src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl); + + /* + * We held the mmap_lock for reading so MADV_DONTNEED + * can zap transparent huge pages under us, or the + * transparent huge page fault can establish new + * transparent huge pages under us. + */ + if (unlikely(!src_pte)) { + err = -EFAULT; + goto out; + } + + BUG_ON(pmd_none(*dst_pmd)); + BUG_ON(pmd_none(*src_pmd)); + BUG_ON(pmd_trans_huge(*dst_pmd)); + BUG_ON(pmd_trans_huge(*src_pmd)); + + spin_lock(dst_ptl); + orig_dst_pte = *dst_pte; + spin_unlock(dst_ptl); + if (!pte_none(orig_dst_pte)) { + err = -EEXIST; + goto out; + } + + spin_lock(src_ptl); + orig_src_pte = *src_pte; + spin_unlock(src_ptl); + if (pte_none(orig_src_pte)) { + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) + err = -ENOENT; + else /* nothing to do to remap a hole */ + err = 0; + goto out; + } + + if (pte_present(orig_src_pte)) { + /* + * Pin and lock both source folio and anon_vma. Since we are in + * RCU read section, we can't block, so on contention have to + * unmap the ptes, obtain the lock and retry. + */ + if (!src_folio) { + struct folio *folio; + + /* + * Pin the page while holding the lock to be sure the + * page isn't freed under us + */ + spin_lock(src_ptl); + if (!pte_same(orig_src_pte, *src_pte)) { + spin_unlock(src_ptl); + err = -EAGAIN; + goto out; + } + + folio = vm_normal_folio(src_vma, src_addr, orig_src_pte); + if (!folio || !folio_test_anon(folio) || + folio_test_large(folio) || + folio_estimated_sharers(folio) != 1) { + spin_unlock(src_ptl); + err = -EBUSY; + goto out; + } + + folio_get(folio); + src_folio = folio; + spin_unlock(src_ptl); + + /* block all concurrent rmap walks */ + if (!folio_trylock(src_folio)) { + pte_unmap(&orig_src_pte); + pte_unmap(&orig_dst_pte); + src_pte = dst_pte = NULL; + /* now we can block and wait */ + folio_lock(src_folio); + goto retry; + } + } + + if (!src_anon_vma) { + /* + * folio_referenced walks the anon_vma chain + * without the folio lock. Serialize against it with + * the anon_vma lock, the folio lock is not enough. 
+ */ + src_anon_vma = folio_get_anon_vma(src_folio); + if (!src_anon_vma) { + /* page was unmapped from under us */ + err = -EAGAIN; + goto out; + } + if (!anon_vma_trylock_write(src_anon_vma)) { + pte_unmap(&orig_src_pte); + pte_unmap(&orig_dst_pte); + src_pte = dst_pte = NULL; + /* now we can block and wait */ + anon_vma_lock_write(src_anon_vma); + goto retry; + } + } + + err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma, + dst_addr, src_addr, dst_pte, src_pte, + orig_dst_pte, orig_src_pte, + dst_ptl, src_ptl, src_folio); + } else { + entry = pte_to_swp_entry(orig_src_pte); + if (non_swap_entry(entry)) { + if (is_migration_entry(entry)) { + pte_unmap(&orig_src_pte); + pte_unmap(&orig_dst_pte); + src_pte = dst_pte = NULL; + migration_entry_wait(src_mm, src_pmd, + src_addr); + err = -EAGAIN; + } else + err = -EFAULT; + goto out; + } + + err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr, + dst_pte, src_pte, + orig_dst_pte, orig_src_pte, + dst_ptl, src_ptl); + } + +out: + if (src_anon_vma) { + anon_vma_unlock_write(src_anon_vma); + put_anon_vma(src_anon_vma); + } + if (src_folio) { + folio_unlock(src_folio); + folio_put(src_folio); + } + if (dst_pte) + pte_unmap(dst_pte); + if (src_pte) + pte_unmap(src_pte); + mmu_notifier_invalidate_range_end(&range); + + return err; +} + +static int validate_remap_areas(struct vm_area_struct *src_vma, + struct vm_area_struct *dst_vma) +{ + /* Only allow remapping if both have the same access and protection */ + if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) || + pgprot_val(src_vma->vm_page_prot) != pgprot_val(dst_vma->vm_page_prot)) + return -EINVAL; + + /* Only allow remapping if both are mlocked or both aren't */ + if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED)) + return -EINVAL; + + /* + * Be strict and only allow remap_pages if either the src or + * dst range is registered in the userfaultfd to prevent + * userland errors going unnoticed. As far as the VM + * consistency is concerned, it would be perfectly safe to + * remove this check, but there's no useful usage for + * remap_pages ouside of userfaultfd registered ranges. This + * is after all why it is an ioctl belonging to the + * userfaultfd and not a syscall. + * + * Allow both vmas to be registered in the userfaultfd, just + * in case somebody finds a way to make such a case useful. + * Normally only one of the two vmas would be registered in + * the userfaultfd. + */ + if (!dst_vma->vm_userfaultfd_ctx.ctx && + !src_vma->vm_userfaultfd_ctx.ctx) + return -EINVAL; + + /* + * FIXME: only allow remapping across anonymous vmas, + * tmpfs should be added. + */ + if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma)) + return -EINVAL; + + /* + * Ensure the dst_vma has a anon_vma or this page + * would get a NULL anon_vma when moved in the + * dst_vma. + */ + if (unlikely(anon_vma_prepare(dst_vma))) + return -ENOMEM; + + return 0; +} + +/** + * remap_pages - remap arbitrary anonymous pages of an existing vma + * @dst_start: start of the destination virtual memory range + * @src_start: start of the source virtual memory range + * @len: length of the virtual memory range + * + * remap_pages() remaps arbitrary anonymous pages atomically in zero + * copy. It only works on non shared anonymous pages because those can + * be relocated without generating non linear anon_vmas in the rmap + * code. + * + * It provides a zero copy mechanism to handle userspace page faults. 
+ * The source vma pages should have mapcount == 1, which can be + * enforced by using madvise(MADV_DONTFORK) on src vma. + * + * The thread receiving the page during the userland page fault + * will receive the faulting page in the source vma through the network, + * storage or any other I/O device (MADV_DONTFORK in the source vma + * avoids remap_pages() to fail with -EBUSY if the process forks before + * remap_pages() is called), then it will call remap_pages() to map the + * page in the faulting address in the destination vma. + * + * This userfaultfd command works purely via pagetables, so it's the + * most efficient way to move physical non shared anonymous pages + * across different virtual addresses. Unlike mremap()/mmap()/munmap() + * it does not create any new vmas. The mapping in the destination + * address is atomic. + * + * It only works if the vma protection bits are identical from the + * source and destination vma. + * + * It can remap non shared anonymous pages within the same vma too. + * + * If the source virtual memory range has any unmapped holes, or if + * the destination virtual memory range is not a whole unmapped hole, + * remap_pages() will fail respectively with -ENOENT or -EEXIST. This + * provides a very strict behavior to avoid any chance of memory + * corruption going unnoticed if there are userland race conditions. + * Only one thread should resolve the userland page fault at any given + * time for any given faulting address. This means that if two threads + * try to both call remap_pages() on the same destination address at the + * same time, the second thread will get an explicit error from this + * command. + * + * The command retval will return "len" is successful. The command + * however can be interrupted by fatal signals or errors. If + * interrupted it will return the number of bytes successfully + * remapped before the interruption if any, or the negative error if + * none. It will never return zero. Either it will return an error or + * an amount of bytes successfully moved. If the retval reports a + * "short" remap, the remap_pages() command should be repeated by + * userland with src+retval, dst+reval, len-retval if it wants to know + * about the error that interrupted it. + * + * The UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES flag can be specified to + * prevent -ENOENT errors to materialize if there are holes in the + * source virtual range that is being remapped. The holes will be + * accounted as successfully remapped in the retval of the + * command. This is mostly useful to remap hugepage naturally aligned + * virtual regions without knowing if there are transparent hugepage + * in the regions or not, but preventing the risk of having to split + * the hugepmd during the remap. + * + * If there's any rmap walk that is taking the anon_vma locks without + * first obtaining the folio lock (for example split_huge_page and + * folio_referenced), they will have to verify if the folio->mapping + * has changed after taking the anon_vma lock. If it changed they + * should release the lock and retry obtaining a new anon_vma, because + * it means the anon_vma was changed by remap_pages() before the lock + * could be obtained. This is the only additional complexity added to + * the rmap code to provide this anonymous page remapping functionality. 
+ */ +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm, + unsigned long dst_start, unsigned long src_start, + unsigned long len, __u64 mode) +{ + struct vm_area_struct *src_vma, *dst_vma; + unsigned long src_addr, dst_addr; + pmd_t *src_pmd, *dst_pmd; + long err = -EINVAL; + ssize_t moved = 0; + + /* + * Sanitize the command parameters: + */ + BUG_ON(src_start & ~PAGE_MASK); + BUG_ON(dst_start & ~PAGE_MASK); + BUG_ON(len & ~PAGE_MASK); + + /* Does the address range wrap, or is the span zero-sized? */ + BUG_ON(src_start + len <= src_start); + BUG_ON(dst_start + len <= dst_start); + + /* + * Because these are read sempahores there's no risk of lock + * inversion. + */ + mmap_read_lock(dst_mm); + if (dst_mm != src_mm) + mmap_read_lock(src_mm); + + /* + * Make sure the vma is not shared, that the src and dst remap + * ranges are both valid and fully within a single existing + * vma. + */ + src_vma = find_vma(src_mm, src_start); + if (!src_vma || (src_vma->vm_flags & VM_SHARED)) + goto out; + if (src_start < src_vma->vm_start || + src_start + len > src_vma->vm_end) + goto out; + + dst_vma = find_vma(dst_mm, dst_start); + if (!dst_vma || (dst_vma->vm_flags & VM_SHARED)) + goto out; + if (dst_start < dst_vma->vm_start || + dst_start + len > dst_vma->vm_end) + goto out; + + err = validate_remap_areas(src_vma, dst_vma); + if (err) + goto out; + + for (src_addr = src_start, dst_addr = dst_start; + src_addr < src_start + len;) { + spinlock_t *ptl; + pmd_t dst_pmdval; + unsigned long step_size; + + BUG_ON(dst_addr >= dst_start + len); + /* + * Below works because anonymous area would not have a + * transparent huge PUD. If file-backed support is added, + * that case would need to be handled here. + */ + src_pmd = mm_find_pmd(src_mm, src_addr); + if (unlikely(!src_pmd)) { + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) { + err = -ENOENT; + break; + } + src_pmd = mm_alloc_pmd(src_mm, src_addr); + if (unlikely(!src_pmd)) { + err = -ENOMEM; + break; + } + } + dst_pmd = mm_alloc_pmd(dst_mm, dst_addr); + if (unlikely(!dst_pmd)) { + err = -ENOMEM; + break; + } + + dst_pmdval = pmdp_get_lockless(dst_pmd); + /* + * If the dst_pmd is mapped as THP don't override it and just + * be strict. If dst_pmd changes into TPH after this check, the + * remap_pages_huge_pmd() will detect the change and retry + * while remap_pages_pte() will detect the change and fail. + */ + if (unlikely(pmd_trans_huge(dst_pmdval))) { + err = -EEXIST; + break; + } + + ptl = pmd_trans_huge_lock(src_pmd, src_vma); + if (ptl && !pmd_trans_huge(*src_pmd)) { + spin_unlock(ptl); + ptl = NULL; + } + + if (ptl) { + /* + * Check if we can move the pmd without + * splitting it. First check the address + * alignment to be the same in src/dst. These + * checks don't actually need the PT lock but + * it's good to do it here to optimize this + * block away at build time if + * CONFIG_TRANSPARENT_HUGEPAGE is not set. 
+ */ + if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) || + src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) { + spin_unlock(ptl); + split_huge_pmd(src_vma, src_pmd, src_addr); + continue; + } + + err = remap_pages_huge_pmd(dst_mm, src_mm, + dst_pmd, src_pmd, + dst_pmdval, + dst_vma, src_vma, + dst_addr, src_addr); + step_size = HPAGE_PMD_SIZE; + } else { + if (pmd_none(*src_pmd)) { + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) { + err = -ENOENT; + break; + } + if (unlikely(__pte_alloc(src_mm, src_pmd))) { + err = -ENOMEM; + break; + } + } + + if (unlikely(pte_alloc(dst_mm, dst_pmd))) { + err = -ENOMEM; + break; + } + + err = remap_pages_pte(dst_mm, src_mm, + dst_pmd, src_pmd, + dst_vma, src_vma, + dst_addr, src_addr, + mode); + step_size = PAGE_SIZE; + } + + cond_resched(); + + if (!err) { + dst_addr += step_size; + src_addr += step_size; + moved += step_size; + } + + if ((!err || err == -EAGAIN) && + fatal_signal_pending(current)) + err = -EINTR; + + if (err && err != -EAGAIN) + break; + } + +out: + mmap_read_unlock(dst_mm); + if (dst_mm != src_mm) + mmap_read_unlock(src_mm); + BUG_ON(moved < 0); + BUG_ON(err > 0); + BUG_ON(!moved && !err); + return moved ? moved : err; +}
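For completeness, an illustrative userspace sketch of the "short remap" retry convention described in the remap_pages() comment above (the helper name is an assumption; on failure the ioctl sets errno and the 'remap' field carries either the bytes already moved or the negative error):

	static int remap_all(int uffd, unsigned long dst, unsigned long src,
			     unsigned long len)
	{
		while (len) {
			struct uffdio_remap req = {
				.dst = dst, .src = src, .len = len, .mode = 0,
			};

			if (ioctl(uffd, UFFDIO_REMAP, &req) == 0)
				return 0;	/* whole range was moved */
			if (req.remap <= 0)
				return -1;	/* hard error, see errno */
			/* short remap: skip what was already moved and retry */
			dst += req.remap;
			src += req.remap;
			len -= req.remap;
		}
		return 0;
	}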
[moving Hugh into "To:" recipients as FYI for khugepaged interaction]
On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan surenb@google.com wrote:
From: Andrea Arcangeli aarcange@redhat.com
This implements the uABI of UFFDIO_REMAP.
Notably one mode bitflag is also forwarded (and in turn known) by the lowlevel remap_pages method.
Signed-off-by: Andrea Arcangeli aarcange@redhat.com Signed-off-by: Suren Baghdasaryan surenb@google.com
[...]
+/*
+ * The mmap_lock for reading is held by the caller. Just move the page
+ * from src_pmd to dst_pmd if possible, and return true if succeeded
+ * in moving the page.
+ */
+static int remap_pages_pte(struct mm_struct *dst_mm,
struct mm_struct *src_mm,
pmd_t *dst_pmd,
pmd_t *src_pmd,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr,
unsigned long src_addr,
__u64 mode)
+{
swp_entry_t entry;
pte_t orig_src_pte, orig_dst_pte;
spinlock_t *src_ptl, *dst_ptl;
pte_t *src_pte = NULL;
pte_t *dst_pte = NULL;
struct folio *src_folio = NULL;
struct anon_vma *src_anon_vma = NULL;
struct mmu_notifier_range range;
int err = 0;
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
src_addr, src_addr + PAGE_SIZE);
mmu_notifier_invalidate_range_start(&range);
+retry:
dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
/* If an huge pmd materialized from under us fail */
if (unlikely(!dst_pte)) {
err = -EFAULT;
goto out;
}
src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
/*
* We held the mmap_lock for reading so MADV_DONTNEED
* can zap transparent huge pages under us, or the
* transparent huge page fault can establish new
* transparent huge pages under us.
*/
if (unlikely(!src_pte)) {
err = -EFAULT;
goto out;
}
BUG_ON(pmd_none(*dst_pmd));
BUG_ON(pmd_none(*src_pmd));
BUG_ON(pmd_trans_huge(*dst_pmd));
BUG_ON(pmd_trans_huge(*src_pmd));
This works for now, but note that Hugh Dickins has recently been reworking khugepaged such that PTE-based mappings can be collapsed into transhuge mappings under the mmap lock held in *read mode*; holders of the mmap lock in read mode can only synchronize against this by taking the right page table spinlock and rechecking the pmd value. This is only the case for file-based mappings so far, not for anonymous private VMAs; and this code only operates on anonymous private VMAs so far, so it works out.
But if either Hugh further reworks khugepaged such that anonymous VMAs can be collapsed under the mmap lock in read mode, or you expand this code to work on file-backed VMAs, then it will become possible to hit these BUG_ON() calls. I'm not sure what the plans for khugepaged going forward are, but the number of edgecases everyone has to keep in mind would go down if you changed this function to deal gracefully with page tables disappearing under you.
In the newest version of mm/pgtable-generic.c, above __pte_offset_map_lock(), there is a big comment block explaining the current rules for page table access; in particular, regarding the helper pte_offset_map_nolock() that you're using:
 * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
 * but when successful, it also outputs a pointer to the spinlock in ptlp - as
 * pte_offset_map_lock() does, but in this case without locking it. This helps
 * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
 * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
 * pointer for the page table that it returns. In principle, the caller should
 * recheck *pmd once the lock is taken; in practice, no callsite needs that -
 * either the mmap_lock for write, or pte_same() check on contents, is enough.
If this becomes hittable in the future, I think you will need to recheck *pmd, at least for dst_pte, to avoid copying PTEs into a detached page table.
spin_lock(dst_ptl);
orig_dst_pte = *dst_pte;
spin_unlock(dst_ptl);
if (!pte_none(orig_dst_pte)) {
err = -EEXIST;
goto out;
}
spin_lock(src_ptl);
orig_src_pte = *src_pte;
spin_unlock(src_ptl);
if (pte_none(orig_src_pte)) {
if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
err = -ENOENT;
else /* nothing to do to remap a hole */
err = 0;
goto out;
}
if (pte_present(orig_src_pte)) {
/*
* Pin and lock both source folio and anon_vma. Since we are in
* RCU read section, we can't block, so on contention have to
* unmap the ptes, obtain the lock and retry.
*/
if (!src_folio) {
struct folio *folio;
/*
* Pin the page while holding the lock to be sure the
* page isn't freed under us
*/
spin_lock(src_ptl);
if (!pte_same(orig_src_pte, *src_pte)) {
spin_unlock(src_ptl);
err = -EAGAIN;
goto out;
}
folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
if (!folio || !folio_test_anon(folio) ||
folio_test_large(folio) ||
folio_estimated_sharers(folio) != 1) {
spin_unlock(src_ptl);
err = -EBUSY;
goto out;
}
folio_get(folio);
src_folio = folio;
spin_unlock(src_ptl);
/* block all concurrent rmap walks */
if (!folio_trylock(src_folio)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
/* now we can block and wait */
folio_lock(src_folio);
goto retry;
}
}
if (!src_anon_vma) {
/*
* folio_referenced walks the anon_vma chain
* without the folio lock. Serialize against it with
* the anon_vma lock, the folio lock is not enough.
*/
src_anon_vma = folio_get_anon_vma(src_folio);
if (!src_anon_vma) {
/* page was unmapped from under us */
err = -EAGAIN;
goto out;
}
if (!anon_vma_trylock_write(src_anon_vma)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
/* now we can block and wait */
anon_vma_lock_write(src_anon_vma);
goto retry;
}
}
err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
dst_addr, src_addr, dst_pte, src_pte,
orig_dst_pte, orig_src_pte,
dst_ptl, src_ptl, src_folio);
} else {
entry = pte_to_swp_entry(orig_src_pte);
if (non_swap_entry(entry)) {
if (is_migration_entry(entry)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
migration_entry_wait(src_mm, src_pmd,
src_addr);
err = -EAGAIN;
} else
err = -EFAULT;
goto out;
}
err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr,
dst_pte, src_pte,
orig_dst_pte, orig_src_pte,
dst_ptl, src_ptl);
}
+out:
if (src_anon_vma) {
anon_vma_unlock_write(src_anon_vma);
put_anon_vma(src_anon_vma);
}
if (src_folio) {
folio_unlock(src_folio);
folio_put(src_folio);
}
if (dst_pte)
pte_unmap(dst_pte);
if (src_pte)
pte_unmap(src_pte);
mmu_notifier_invalidate_range_end(&range);
return err;
+}
On Wed, Sep 27, 2023 at 3:07 AM Jann Horn jannh@google.com wrote:
[moving Hugh into "To:" recipients as FYI for khugepaged interaction]
On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan surenb@google.com wrote:
From: Andrea Arcangeli aarcange@redhat.com
This implements the uABI of UFFDIO_REMAP.
Notably one mode bitflag is also forwarded (and in turn known) by the lowlevel remap_pages method.
Signed-off-by: Andrea Arcangeli aarcange@redhat.com Signed-off-by: Suren Baghdasaryan surenb@google.com
[...]
+/*
+ * The mmap_lock for reading is held by the caller. Just move the page
+ * from src_pmd to dst_pmd if possible, and return true if succeeded
+ * in moving the page.
+ */
+static int remap_pages_pte(struct mm_struct *dst_mm,
struct mm_struct *src_mm,
pmd_t *dst_pmd,
pmd_t *src_pmd,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr,
unsigned long src_addr,
__u64 mode)
+{
swp_entry_t entry;
pte_t orig_src_pte, orig_dst_pte;
spinlock_t *src_ptl, *dst_ptl;
pte_t *src_pte = NULL;
pte_t *dst_pte = NULL;
struct folio *src_folio = NULL;
struct anon_vma *src_anon_vma = NULL;
struct mmu_notifier_range range;
int err = 0;
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
src_addr, src_addr + PAGE_SIZE);
mmu_notifier_invalidate_range_start(&range);
+retry:
dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
/* If an huge pmd materialized from under us fail */
if (unlikely(!dst_pte)) {
err = -EFAULT;
goto out;
}
src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
/*
* We held the mmap_lock for reading so MADV_DONTNEED
* can zap transparent huge pages under us, or the
* transparent huge page fault can establish new
* transparent huge pages under us.
*/
if (unlikely(!src_pte)) {
err = -EFAULT;
goto out;
}
BUG_ON(pmd_none(*dst_pmd));
BUG_ON(pmd_none(*src_pmd));
BUG_ON(pmd_trans_huge(*dst_pmd));
BUG_ON(pmd_trans_huge(*src_pmd));
This works for now, but note that Hugh Dickins has recently been reworking khugepaged such that PTE-based mappings can be collapsed into transhuge mappings under the mmap lock held in *read mode*; holders of the mmap lock in read mode can only synchronize against this by taking the right page table spinlock and rechecking the pmd value. This is only the case for file-based mappings so far, not for anonymous private VMAs; and this code only operates on anonymous private VMAs so far, so it works out.
But if either Hugh further reworks khugepaged such that anonymous VMAs can be collapsed under the mmap lock in read mode, or you expand this code to work on file-backed VMAs, then it will become possible to hit these BUG_ON() calls. I'm not sure what the plans for khugepaged going forward are, but the number of edgecases everyone has to keep in mind would go down if you changed this function to deal gracefully with page tables disappearing under you.
In the newest version of mm/pgtable-generic.c, above __pte_offset_map_lock(), there is a big comment block explaining the current rules for page table access; in particular, regarding the helper pte_offset_map_nolock() that you're using:
 * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
 * but when successful, it also outputs a pointer to the spinlock in ptlp - as
 * pte_offset_map_lock() does, but in this case without locking it. This helps
 * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
 * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
 * pointer for the page table that it returns. In principle, the caller should
 * recheck *pmd once the lock is taken; in practice, no callsite needs that -
 * either the mmap_lock for write, or pte_same() check on contents, is enough.
If this becomes hittable in the future, I think you will need to recheck *pmd, at least for dst_pte, to avoid copying PTEs into a detached page table.
Thanks for the warning, Jann. It sounds to me it would be better to add this pmd check now even though it's not hittable. Does that sound good to everyone?
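For illustration, such a recheck could look roughly like the sketch below, replacing the BUG_ON()s with a retryable failure (assuming remap_pages() passes down the dst_pmdval it already read via pmdp_get_lockless(); the exact shape is of course up to the next revision):

	spin_lock(dst_ptl);
	/*
	 * Recheck the pmd under the PT lock: if the page table was zapped
	 * or collapsed after pte_offset_map_nolock(), bail out instead of
	 * installing a pte into a detached page table.
	 */
	if (unlikely(!pmd_same(*dst_pmd, dst_pmdval))) {
		spin_unlock(dst_ptl);
		err = -EAGAIN;
		goto out;
	}
	orig_dst_pte = *dst_pte;
	spin_unlock(dst_ptl);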
spin_lock(dst_ptl);
orig_dst_pte = *dst_pte;
spin_unlock(dst_ptl);
if (!pte_none(orig_dst_pte)) {
err = -EEXIST;
goto out;
}
spin_lock(src_ptl);
orig_src_pte = *src_pte;
spin_unlock(src_ptl);
if (pte_none(orig_src_pte)) {
if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
err = -ENOENT;
else /* nothing to do to remap a hole */
err = 0;
goto out;
}
if (pte_present(orig_src_pte)) {
/*
* Pin and lock both source folio and anon_vma. Since we are in
* RCU read section, we can't block, so on contention have to
* unmap the ptes, obtain the lock and retry.
*/
if (!src_folio) {
struct folio *folio;
/*
* Pin the page while holding the lock to be sure the
* page isn't freed under us
*/
spin_lock(src_ptl);
if (!pte_same(orig_src_pte, *src_pte)) {
spin_unlock(src_ptl);
err = -EAGAIN;
goto out;
}
folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
if (!folio || !folio_test_anon(folio) ||
folio_test_large(folio) ||
folio_estimated_sharers(folio) != 1) {
spin_unlock(src_ptl);
err = -EBUSY;
goto out;
}
folio_get(folio);
src_folio = folio;
spin_unlock(src_ptl);
/* block all concurrent rmap walks */
if (!folio_trylock(src_folio)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
/* now we can block and wait */
folio_lock(src_folio);
goto retry;
}
}
if (!src_anon_vma) {
/*
* folio_referenced walks the anon_vma chain
* without the folio lock. Serialize against it with
* the anon_vma lock, the folio lock is not enough.
*/
src_anon_vma = folio_get_anon_vma(src_folio);
if (!src_anon_vma) {
/* page was unmapped from under us */
err = -EAGAIN;
goto out;
}
if (!anon_vma_trylock_write(src_anon_vma)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
/* now we can block and wait */
anon_vma_lock_write(src_anon_vma);
goto retry;
}
}
err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
dst_addr, src_addr, dst_pte, src_pte,
orig_dst_pte, orig_src_pte,
dst_ptl, src_ptl, src_folio);
} else {
entry = pte_to_swp_entry(orig_src_pte);
if (non_swap_entry(entry)) {
if (is_migration_entry(entry)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
migration_entry_wait(src_mm, src_pmd,
src_addr);
err = -EAGAIN;
} else
err = -EFAULT;
goto out;
}
err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr,
dst_pte, src_pte,
orig_dst_pte, orig_src_pte,
dst_ptl, src_ptl);
}
+out:
if (src_anon_vma) {
anon_vma_unlock_write(src_anon_vma);
put_anon_vma(src_anon_vma);
}
if (src_folio) {
folio_unlock(src_folio);
folio_put(src_folio);
}
if (dst_pte)
pte_unmap(dst_pte);
if (src_pte)
pte_unmap(src_pte);
mmu_notifier_invalidate_range_end(&range);
return err;
+}
On Wed, Sep 27, 2023 at 7:12 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Sep 27, 2023 at 3:07 AM Jann Horn jannh@google.com wrote:
[moving Hugh into "To:" recipients as FYI for khugepaged interaction]
On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan surenb@google.com wrote:
From: Andrea Arcangeli aarcange@redhat.com
This implements the uABI of UFFDIO_REMAP.
Notably one mode bitflag is also forwarded (and in turn known) by the lowlevel remap_pages method.
Signed-off-by: Andrea Arcangeli aarcange@redhat.com Signed-off-by: Suren Baghdasaryan surenb@google.com
[...]
+/*
 * The mmap_lock for reading is held by the caller. Just move the page
 * from src_pmd to dst_pmd if possible, and return true if succeeded
 * in moving the page.
 */
+static int remap_pages_pte(struct mm_struct *dst_mm,
struct mm_struct *src_mm,
pmd_t *dst_pmd,
pmd_t *src_pmd,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr,
unsigned long src_addr,
__u64 mode)
+{
swp_entry_t entry;
pte_t orig_src_pte, orig_dst_pte;
spinlock_t *src_ptl, *dst_ptl;
pte_t *src_pte = NULL;
pte_t *dst_pte = NULL;
struct folio *src_folio = NULL;
struct anon_vma *src_anon_vma = NULL;
struct mmu_notifier_range range;
int err = 0;
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
src_addr, src_addr + PAGE_SIZE);
mmu_notifier_invalidate_range_start(&range);
+retry:
dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
/* If a huge pmd materialized from under us, fail */
if (unlikely(!dst_pte)) {
err = -EFAULT;
goto out;
}
src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
/*
* We held the mmap_lock for reading so MADV_DONTNEED
* can zap transparent huge pages under us, or the
* transparent huge page fault can establish new
* transparent huge pages under us.
*/
if (unlikely(!src_pte)) {
err = -EFAULT;
goto out;
}
BUG_ON(pmd_none(*dst_pmd));
BUG_ON(pmd_none(*src_pmd));
BUG_ON(pmd_trans_huge(*dst_pmd));
BUG_ON(pmd_trans_huge(*src_pmd));
This works for now, but note that Hugh Dickins has recently been reworking khugepaged such that PTE-based mappings can be collapsed into transhuge mappings under the mmap lock held in *read mode*; holders of the mmap lock in read mode can only synchronize against this by taking the right page table spinlock and rechecking the pmd value. This is only the case for file-based mappings so far, not for anonymous private VMAs; and this code only operates on anonymous private VMAs so far, so it works out.
But if either Hugh further reworks khugepaged such that anonymous VMAs can be collapsed under the mmap lock in read mode, or you expand this code to work on file-backed VMAs, then it will become possible to hit these BUG_ON() calls. I'm not sure what the plans for khugepaged going forward are, but the number of edgecases everyone has to keep in mind would go down if you changed this function to deal gracefully with page tables disappearing under you.
In the newest version of mm/pgtable-generic.c, above __pte_offset_map_lock(), there is a big comment block explaining the current rules for page table access; in particular, regarding the helper pte_offset_map_nolock() that you're using:
 * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
 * but when successful, it also outputs a pointer to the spinlock in ptlp - as
 * pte_offset_map_lock() does, but in this case without locking it. This helps
 * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
 * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
 * pointer for the page table that it returns. In principle, the caller should
 * recheck *pmd once the lock is taken; in practice, no callsite needs that -
 * either the mmap_lock for write, or pte_same() check on contents, is enough.
If this becomes hittable in the future, I think you will need to recheck *pmd, at least for dst_pte, to avoid copying PTEs into a detached page table.
Thanks for the warning, Jann. It sounds to me like it would be better to add this pmd check now even though it's not hittable. Does that sound good to everyone?
Sounds good to me.
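For reference, the recheck agreed on above could look roughly like the following sketch (an illustration only, not code from the posted series; it assumes dst_pmdval was sampled with pmdp_get_lockless() before the pte was mapped):

	spin_lock(dst_ptl);
	/*
	 * Re-validate the pmd under the PTE lock reported by
	 * pte_offset_map_nolock(), so we never install entries into a page
	 * table that was detached after the lockless lookup.
	 */
	if (unlikely(!pmd_same(dst_pmdval, pmdp_get_lockless(dst_pmd)))) {
		spin_unlock(dst_ptl);
		err = -EAGAIN;	/* caller retries with a fresh lookup */
		goto out;
	}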
On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan surenb@google.com wrote:
From: Andrea Arcangeli aarcange@redhat.com
This implements the uABI of UFFDIO_REMAP.
Notably one mode bitflag is also forwarded (and in turn known) by the lowlevel remap_pages method.
Signed-off-by: Andrea Arcangeli aarcange@redhat.com Signed-off-by: Suren Baghdasaryan surenb@google.com
[...]
+int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr, unsigned long src_addr)
+{
pmd_t _dst_pmd, src_pmdval;
struct page *src_page;
struct folio *src_folio;
struct anon_vma *src_anon_vma, *dst_anon_vma;
spinlock_t *src_ptl, *dst_ptl;
pgtable_t src_pgtable, dst_pgtable;
struct mmu_notifier_range range;
int err = 0;
src_pmdval = *src_pmd;
src_ptl = pmd_lockptr(src_mm, src_pmd);
BUG_ON(!spin_is_locked(src_ptl));
mmap_assert_locked(src_mm);
mmap_assert_locked(dst_mm);
BUG_ON(!pmd_trans_huge(src_pmdval));
BUG_ON(!pmd_none(dst_pmdval));
BUG_ON(src_addr & ~HPAGE_PMD_MASK);
BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
src_page = pmd_page(src_pmdval);
if (unlikely(!PageAnonExclusive(src_page))) {
spin_unlock(src_ptl);
return -EBUSY;
}
src_folio = page_folio(src_page);
folio_get(src_folio);
spin_unlock(src_ptl);
/* preallocate dst_pgtable if needed */
if (dst_mm != src_mm) {
dst_pgtable = pte_alloc_one(dst_mm);
if (unlikely(!dst_pgtable)) {
err = -ENOMEM;
goto put_folio;
}
} else {
dst_pgtable = NULL;
}
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
src_addr + HPAGE_PMD_SIZE);
mmu_notifier_invalidate_range_start(&range);
/* block all concurrent rmap walks */
folio_lock(src_folio);
/*
* split_huge_page walks the anon_vma chain without the page
* lock. Serialize against it with the anon_vma lock, the page
* lock is not enough.
*/
src_anon_vma = folio_get_anon_vma(src_folio);
if (!src_anon_vma) {
err = -EAGAIN;
goto unlock_folio;
}
anon_vma_lock_write(src_anon_vma);
dst_ptl = pmd_lockptr(dst_mm, dst_pmd);
double_pt_lock(src_ptl, dst_ptl);
if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
!pmd_same(*dst_pmd, dst_pmdval) ||
folio_mapcount(src_folio) != 1)) {
I think this is also supposed to be PageAnonExclusive()?
double_pt_unlock(src_ptl, dst_ptl);
err = -EAGAIN;
goto put_anon_vma;
}
BUG_ON(!folio_test_head(src_folio));
BUG_ON(!folio_test_anon(src_folio));
dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
WRITE_ONCE(src_folio->mapping, (struct address_space *) dst_anon_vma);
WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
_dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
_dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd);
src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd);
if (dst_pgtable) {
pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable);
pte_free(src_mm, src_pgtable);
dst_pgtable = NULL;
mm_inc_nr_ptes(dst_mm);
mm_dec_nr_ptes(src_mm);
add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR);
} else {
pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable);
}
double_pt_unlock(src_ptl, dst_ptl);
+put_anon_vma:
anon_vma_unlock_write(src_anon_vma);
put_anon_vma(src_anon_vma);
+unlock_folio:
/* unblock rmap walks */
folio_unlock(src_folio);
mmu_notifier_invalidate_range_end(&range);
if (dst_pgtable)
pte_free(dst_mm, dst_pgtable);
+put_folio:
folio_put(src_folio);
return err;
+}
+#endif /* CONFIG_USERFAULTFD */
[...]
+static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr, unsigned long src_addr,
pte_t *dst_pte, pte_t *src_pte,
pte_t orig_dst_pte, pte_t orig_src_pte,
spinlock_t *dst_ptl, spinlock_t *src_ptl,
struct folio *src_folio)
+{
struct anon_vma *dst_anon_vma;
double_pt_lock(dst_ptl, src_ptl);
if (!pte_same(*src_pte, orig_src_pte) ||
!pte_same(*dst_pte, orig_dst_pte) ||
folio_test_large(src_folio) ||
folio_estimated_sharers(src_folio) != 1) {
double_pt_unlock(dst_ptl, src_ptl);
return -EAGAIN;
}
BUG_ON(!folio_test_anon(src_folio));
dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
WRITE_ONCE(src_folio->mapping,
(struct address_space *) dst_anon_vma);
WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
dst_addr));
orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
dst_vma);
I think there's still a theoretical issue here that you could fix by checking for the AnonExclusive flag, similar to the huge page case.
Consider the following scenario:
1. process P1 does a write fault in a private anonymous VMA, creating and mapping a new anonymous page A1
2. process P1 forks and creates two children P2 and P3. afterwards, A1 is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
3. process P1 removes its mapping of A1, dropping its mapcount to 2.
4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
5. process P2 removes its mapping of A1, dropping its mapcount to 1.
If at this point P3 does a write fault on its mapping of A1, it will still trigger copy-on-write thanks to the AnonExclusive mechanism; and this is necessary to avoid P3 mapping A1 as writable and writing data into it that will become visible to P2, if P2 and P3 are in different security contexts.
But if P3 instead moves its mapping of A1 to another address with remap_anon_pte() which only does a page mapcount check, the maybe_mkwrite() will directly make the mapping writable, circumventing the AnonExclusive mechanism.
set_pte_at(dst_mm, dst_addr, dst_pte, orig_dst_pte);
if (dst_mm != src_mm) {
inc_mm_counter(dst_mm, MM_ANONPAGES);
dec_mm_counter(src_mm, MM_ANONPAGES);
}
double_pt_unlock(dst_ptl, src_ptl);
return 0;
+}
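A minimal sketch of the AnonExclusive-based check being suggested here (my illustration, not the posted patch): refuse to move a page that is not marked exclusively owned, and only then make the destination mapping writable:

	/* Sketch: bail out if the page may still be shared via COW or GUP. */
	if (!PageAnonExclusive(&src_folio->page)) {
		double_pt_unlock(dst_ptl, src_ptl);
		return -EBUSY;
	}
	/* Exclusive ownership established; safe to map writable at dst. */
	orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
	orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);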
+static int remap_swap_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
unsigned long dst_addr, unsigned long src_addr,
pte_t *dst_pte, pte_t *src_pte,
pte_t orig_dst_pte, pte_t orig_src_pte,
spinlock_t *dst_ptl, spinlock_t *src_ptl)
+{
if (!pte_swp_exclusive(orig_src_pte))
return -EBUSY;
double_pt_lock(dst_ptl, src_ptl);
if (!pte_same(*src_pte, orig_src_pte) ||
!pte_same(*dst_pte, orig_dst_pte)) {
double_pt_unlock(dst_ptl, src_ptl);
return -EAGAIN;
}
orig_src_pte = ptep_get_and_clear(src_mm, src_addr, src_pte);
set_pte_at(dst_mm, dst_addr, dst_pte, orig_src_pte);
if (dst_mm != src_mm) {
inc_mm_counter(dst_mm, MM_ANONPAGES);
dec_mm_counter(src_mm, MM_ANONPAGES);
I think this is the wrong counter. Looking at zap_pte_range(), in the "Genuine swap entry" case, we modify the MM_SWAPENTS counter, not MM_ANONPAGES.
}
double_pt_unlock(dst_ptl, src_ptl);
return 0;
+}
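For illustration, the counter fix Jann points out could look like the following sketch (matching what zap_pte_range() does for genuine swap entries; the posted code uses MM_ANONPAGES instead):

	/* Sketch: a moved swap entry is accounted as a swap entry. */
	if (dst_mm != src_mm) {
		inc_mm_counter(dst_mm, MM_SWAPENTS);
		dec_mm_counter(src_mm, MM_SWAPENTS);
	}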
+/*
 * The mmap_lock for reading is held by the caller. Just move the page
 * from src_pmd to dst_pmd if possible, and return true if succeeded
 * in moving the page.
 */
+static int remap_pages_pte(struct mm_struct *dst_mm,
struct mm_struct *src_mm,
pmd_t *dst_pmd,
pmd_t *src_pmd,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr,
unsigned long src_addr,
__u64 mode)
+{
swp_entry_t entry;
pte_t orig_src_pte, orig_dst_pte;
spinlock_t *src_ptl, *dst_ptl;
pte_t *src_pte = NULL;
pte_t *dst_pte = NULL;
struct folio *src_folio = NULL;
struct anon_vma *src_anon_vma = NULL;
struct mmu_notifier_range range;
int err = 0;
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
src_addr, src_addr + PAGE_SIZE);
mmu_notifier_invalidate_range_start(&range);
+retry:
This retry looks a bit dodgy. On this retry label, we restart almost the entire operation, including re-reading the source PTE; the only variables that carry state forward from the previous retry loop iteration are src_folio and src_anon_vma.
dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
/* If a huge pmd materialized from under us, fail */
if (unlikely(!dst_pte)) {
err = -EFAULT;
goto out;
}
[...]
spin_lock(dst_ptl);
orig_dst_pte = *dst_pte;
spin_unlock(dst_ptl);
if (!pte_none(orig_dst_pte)) {
err = -EEXIST;
goto out;
}
spin_lock(src_ptl);
orig_src_pte = *src_pte;
Here we read an entirely new orig_src_pte value. Something like a concurrent MADV_DONTNEED+pagefault could have made the PTE point to a different page between loop iterations.
spin_unlock(src_ptl);
I think you have to insert something like the following here:
if (src_folio && (orig_src_pte != previous_src_pte)) {
	err = -EAGAIN;
	goto out;
}
previous_src_pte = orig_src_pte;
Otherwise:
if (pte_none(orig_src_pte)) {
if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
err = -ENOENT;
else /* nothing to do to remap a hole */
err = 0;
goto out;
}
if (pte_present(orig_src_pte)) {
/*
* Pin and lock both source folio and anon_vma. Since we are in
* RCU read section, we can't block, so on contention have to
* unmap the ptes, obtain the lock and retry.
*/
if (!src_folio) {
If we already found a src_folio in the previous iteration (but the trylock failed), we keep using the same src_folio without rechecking if the current PTE still points to the same folio.
struct folio *folio;
/*
* Pin the page while holding the lock to be sure the
* page isn't freed under us
*/
spin_lock(src_ptl);
if (!pte_same(orig_src_pte, *src_pte)) {
spin_unlock(src_ptl);
err = -EAGAIN;
goto out;
}
folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
if (!folio || !folio_test_anon(folio) ||
folio_test_large(folio) ||
folio_estimated_sharers(folio) != 1) {
spin_unlock(src_ptl);
err = -EBUSY;
goto out;
}
folio_get(folio);
src_folio = folio;
spin_unlock(src_ptl);
/* block all concurrent rmap walks */
if (!folio_trylock(src_folio)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
/* now we can block and wait */
folio_lock(src_folio);
goto retry;
}
}
if (!src_anon_vma) {
(And here, if we already saw a src_anon_vma but the trylock failed, we'll keep using that src_anon_vma.)
/*
* folio_referenced walks the anon_vma chain
* without the folio lock. Serialize against it with
* the anon_vma lock, the folio lock is not enough.
*/
src_anon_vma = folio_get_anon_vma(src_folio);
if (!src_anon_vma) {
/* page was unmapped from under us */
err = -EAGAIN;
goto out;
}
if (!anon_vma_trylock_write(src_anon_vma)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
/* now we can block and wait */
anon_vma_lock_write(src_anon_vma);
goto retry;
}
}
So at this point we have:
- the current src_pte
- some referenced+locked src_folio that used to be mapped exclusively at src_addr
- (the anon_vma associated with the src_folio)
err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
dst_addr, src_addr, dst_pte, src_pte,
orig_dst_pte, orig_src_pte,
dst_ptl, src_ptl, src_folio);
And then this will, without touching folio mapcounts/refcounts, delete the current PTE at src_addr, and create a PTE at dst_addr pointing to the old src_folio, leading to incorrect refcounts/mapcounts?
} else {
[...]
}
+out:
if (src_anon_vma) {
anon_vma_unlock_write(src_anon_vma);
put_anon_vma(src_anon_vma);
}
if (src_folio) {
folio_unlock(src_folio);
folio_put(src_folio);
}
if (dst_pte)
pte_unmap(dst_pte);
if (src_pte)
pte_unmap(src_pte);
mmu_notifier_invalidate_range_end(&range);
return err;
+}
[...]
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
unsigned long dst_start, unsigned long src_start,
unsigned long len, __u64 mode)
+{
struct vm_area_struct *src_vma, *dst_vma;
unsigned long src_addr, dst_addr;
pmd_t *src_pmd, *dst_pmd;
long err = -EINVAL;
ssize_t moved = 0;
/*
* Sanitize the command parameters:
*/
BUG_ON(src_start & ~PAGE_MASK);
BUG_ON(dst_start & ~PAGE_MASK);
BUG_ON(len & ~PAGE_MASK);
/* Does the address range wrap, or is the span zero-sized? */
BUG_ON(src_start + len <= src_start);
BUG_ON(dst_start + len <= dst_start);
/*
* Because these are read semaphores there's no risk of lock
* inversion.
*/
mmap_read_lock(dst_mm);
if (dst_mm != src_mm)
mmap_read_lock(src_mm);
/*
* Make sure the vma is not shared, that the src and dst remap
* ranges are both valid and fully within a single existing
* vma.
*/
src_vma = find_vma(src_mm, src_start);
if (!src_vma || (src_vma->vm_flags & VM_SHARED))
goto out;
if (src_start < src_vma->vm_start ||
src_start + len > src_vma->vm_end)
goto out;
dst_vma = find_vma(dst_mm, dst_start);
if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
goto out;
if (dst_start < dst_vma->vm_start ||
dst_start + len > dst_vma->vm_end)
goto out;
err = validate_remap_areas(src_vma, dst_vma);
if (err)
goto out;
for (src_addr = src_start, dst_addr = dst_start;
src_addr < src_start + len;) {
spinlock_t *ptl;
pmd_t dst_pmdval;
unsigned long step_size;
BUG_ON(dst_addr >= dst_start + len);
/*
* Below works because anonymous area would not have a
* transparent huge PUD. If file-backed support is added,
* that case would need to be handled here.
*/
src_pmd = mm_find_pmd(src_mm, src_addr);
if (unlikely(!src_pmd)) {
if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
err = -ENOENT;
break;
}
src_pmd = mm_alloc_pmd(src_mm, src_addr);
if (unlikely(!src_pmd)) {
err = -ENOMEM;
break;
}
}
dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
if (unlikely(!dst_pmd)) {
err = -ENOMEM;
break;
}
dst_pmdval = pmdp_get_lockless(dst_pmd);
/*
* If the dst_pmd is mapped as THP don't override it and just
* be strict. If dst_pmd changes into THP after this check, the
* remap_pages_huge_pmd() will detect the change and retry
* while remap_pages_pte() will detect the change and fail.
*/
if (unlikely(pmd_trans_huge(dst_pmdval))) {
err = -EEXIST;
break;
}
ptl = pmd_trans_huge_lock(src_pmd, src_vma);
if (ptl && !pmd_trans_huge(*src_pmd)) {
spin_unlock(ptl);
ptl = NULL;
}
This still looks wrong - we do still have to split_huge_pmd() somewhere so that remap_pages_pte() works.
if (ptl) {
/*
* Check if we can move the pmd without
* splitting it. First check the address
* alignment to be the same in src/dst. These
* checks don't actually need the PT lock but
* it's good to do it here to optimize this
* block away at build time if
* CONFIG_TRANSPARENT_HUGEPAGE is not set.
*/
if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) {
spin_unlock(ptl);
split_huge_pmd(src_vma, src_pmd, src_addr);
continue;
}
err = remap_pages_huge_pmd(dst_mm, src_mm,
dst_pmd, src_pmd,
dst_pmdval,
dst_vma, src_vma,
dst_addr, src_addr);
step_size = HPAGE_PMD_SIZE;
} else {
if (pmd_none(*src_pmd)) {
if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
err = -ENOENT;
break;
}
if (unlikely(__pte_alloc(src_mm, src_pmd))) {
err = -ENOMEM;
break;
}
}
if (unlikely(pte_alloc(dst_mm, dst_pmd))) {
err = -ENOMEM;
break;
}
err = remap_pages_pte(dst_mm, src_mm,
dst_pmd, src_pmd,
dst_vma, src_vma,
dst_addr, src_addr,
mode);
step_size = PAGE_SIZE;
}
cond_resched();
if (!err) {
dst_addr += step_size;
src_addr += step_size;
moved += step_size;
}
if ((!err || err == -EAGAIN) &&
fatal_signal_pending(current))
err = -EINTR;
if (err && err != -EAGAIN)
break;
}
+out:
mmap_read_unlock(dst_mm);
if (dst_mm != src_mm)
mmap_read_unlock(src_mm);
BUG_ON(moved < 0);
BUG_ON(err > 0);
BUG_ON(!moved && !err);
return moved ? moved : err;
+}
+static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr, unsigned long src_addr,
pte_t *dst_pte, pte_t *src_pte,
pte_t orig_dst_pte, pte_t orig_src_pte,
spinlock_t *dst_ptl, spinlock_t *src_ptl,
struct folio *src_folio)
+{
struct anon_vma *dst_anon_vma;
double_pt_lock(dst_ptl, src_ptl);
if (!pte_same(*src_pte, orig_src_pte) ||
!pte_same(*dst_pte, orig_dst_pte) ||
folio_test_large(src_folio) ||
folio_estimated_sharers(src_folio) != 1) {
^ here you should check PageAnonExclusive. Please get rid of any explicit/implicit mapcount checks.
double_pt_unlock(dst_ptl, src_ptl);
return -EAGAIN;
}
BUG_ON(!folio_test_anon(src_folio));
dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
WRITE_ONCE(src_folio->mapping,
(struct address_space *) dst_anon_vma);
I have some cleanups pending for page_move_anon_rmap(), that moves the SetPageAnonExclusive hunk out. Here we should be using page_move_anon_rmap() [or rather, folio_move_anon_rmap() after my cleanups]
I'll send them out soonish.
WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
dst_addr));
orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
dst_vma);
I think there's still a theoretical issue here that you could fix by checking for the AnonExclusive flag, similar to the huge page case.
Consider the following scenario:
1. process P1 does a write fault in a private anonymous VMA, creating and mapping a new anonymous page A1
2. process P1 forks and creates two children P2 and P3. afterwards, A1 is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
3. process P1 removes its mapping of A1, dropping its mapcount to 2.
4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
5. process P2 removes its mapping of A1, dropping its mapcount to 1.
If at this point P3 does a write fault on its mapping of A1, it will still trigger copy-on-write thanks to the AnonExclusive mechanism; and this is necessary to avoid P3 mapping A1 as writable and writing data into it that will become visible to P2, if P2 and P3 are in different security contexts.
But if P3 instead moves its mapping of A1 to another address with remap_anon_pte() which only does a page mapcount check, the maybe_mkwrite() will directly make the mapping writable, circumventing the AnonExclusive mechanism.
Yes, can_change_pte_writable() contains the exact logic when we can turn something easily writable even if it wasn't writable before. which includes that PageAnonExclusive is set. (but with uffd-wp or softdirty tracking, there is more to consider)
On Wed, Sep 27, 2023 at 6:29 AM David Hildenbrand david@redhat.com wrote:
+static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr, unsigned long src_addr,
pte_t *dst_pte, pte_t *src_pte,
pte_t orig_dst_pte, pte_t orig_src_pte,
spinlock_t *dst_ptl, spinlock_t *src_ptl,
struct folio *src_folio)
+{
struct anon_vma *dst_anon_vma;
double_pt_lock(dst_ptl, src_ptl);
if (!pte_same(*src_pte, orig_src_pte) ||
!pte_same(*dst_pte, orig_dst_pte) ||
folio_test_large(src_folio) ||
folio_estimated_sharers(src_folio) != 1) {
^ here you should check PageAnonExclusive. Please get rid of any explicit/implicit mapcount checks.
Ack.
double_pt_unlock(dst_ptl, src_ptl);
return -EAGAIN;
}
BUG_ON(!folio_test_anon(src_folio));
dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
WRITE_ONCE(src_folio->mapping,
(struct address_space *) dst_anon_vma);
I have some cleanups pending for page_move_anon_rmap(), that moves the SetPageAnonExclusive hunk out. Here we should be using page_move_anon_rmap() [or rather, folio_move_anon_rmap() after my cleanups]
I'll send them out soonish.
Should I keep this as is in my next version until you post the cleanups? I can add a TODO comment to convert it to folio_move_anon_rmap() once it's ready.
WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
dst_addr));
orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
dst_vma);
I think there's still a theoretical issue here that you could fix by checking for the AnonExclusive flag, similar to the huge page case.
Consider the following scenario:
1. process P1 does a write fault in a private anonymous VMA, creating and mapping a new anonymous page A1
2. process P1 forks and creates two children P2 and P3. afterwards, A1 is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
3. process P1 removes its mapping of A1, dropping its mapcount to 2.
4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
5. process P2 removes its mapping of A1, dropping its mapcount to 1.
If at this point P3 does a write fault on its mapping of A1, it will still trigger copy-on-write thanks to the AnonExclusive mechanism; and this is necessary to avoid P3 mapping A1 as writable and writing data into it that will become visible to P2, if P2 and P3 are in different security contexts.
But if P3 instead moves its mapping of A1 to another address with remap_anon_pte() which only does a page mapcount check, the maybe_mkwrite() will directly make the mapping writable, circumventing the AnonExclusive mechanism.
Yes, can_change_pte_writable() contains the exact logic when we can turn something easily writable even if it wasn't writable before. which includes that PageAnonExclusive is set. (but with uffd-wp or softdirty tracking, there is more to consider)
For uffd_remap can_change_pte_writable() would fail if VM_WRITE is not set, but we want remapping to work for RO memory as well. Are you saying that a PageAnonExclusive() check alone would not be enough here?
Thanks, Suren.
On Wed, Sep 27, 2023 at 11:25:22AM -0700, Suren Baghdasaryan wrote:
For uffd_remap can_change_pte_writable() would fail if VM_WRITE is not set, but we want remapping to work for RO memory as well.
Is there an use case that we want remap to work on RO?
The thing is, either removing a page or installing a new one with valid content imply VM_WRITE to me on either side..
Thanks,
On 27.09.23 20:25, Suren Baghdasaryan wrote:
I have some cleanups pending for page_move_anon_rmap(), that moves the SetPageAnonExclusive hunk out. Here we should be using page_move_anon_rmap() [or rather, folio_move_anon_rmap() after my cleanups]
I'll send them out soonish.
Should I keep this as is in my next version until you post the cleanups? I can add a TODO comment to convert it to folio_move_anon_rmap() once it's ready.
You should just be able to use page_move_anon_rmap() and whatever gets in first cleans it up :)
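For illustration, the open-coded mapping update above could then shrink to something like the following sketch (assuming the page_move_anon_rmap() interface as it exists today, before the pending cleanups land):

	/* Sketch: let the rmap helper retarget the folio to dst_vma's anon_vma. */
	page_move_anon_rmap(&src_folio->page, dst_vma);
	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));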
WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
dst_addr));
orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
dst_vma);
I think there's still a theoretical issue here that you could fix by checking for the AnonExclusive flag, similar to the huge page case.
Consider the following scenario:
1. process P1 does a write fault in a private anonymous VMA, creating and mapping a new anonymous page A1
2. process P1 forks and creates two children P2 and P3. afterwards, A1 is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
3. process P1 removes its mapping of A1, dropping its mapcount to 2.
4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
5. process P2 removes its mapping of A1, dropping its mapcount to 1.
If at this point P3 does a write fault on its mapping of A1, it will still trigger copy-on-write thanks to the AnonExclusive mechanism; and this is necessary to avoid P3 mapping A1 as writable and writing data into it that will become visible to P2, if P2 and P3 are in different security contexts.
But if P3 instead moves its mapping of A1 to another address with remap_anon_pte() which only does a page mapcount check, the maybe_mkwrite() will directly make the mapping writable, circumventing the AnonExclusive mechanism.
Yes, can_change_pte_writable() contains the exact logic when we can turn something easily writable even if it wasn't writable before. which includes that PageAnonExclusive is set. (but with uffd-wp or softdirty tracking, there is more to consider)
For uffd_remap can_change_pte_writable() would fail if VM_WRITE is not set, but we want remapping to work for RO memory as well. Are you
In a VMA without VM_WRITE you certainly wouldn't want to make PTEs writable :) That's why that function just does a sanity check that it is not called in strange context. So one would only call it if VM_WRITE is set.
saying that a PageAnonExclusive() check alone would not be enough here?
There are some interesting questions to ask here:
1) What happens if the old VMA has VM_SOFTDIRTY set but the new one not? You most probably have to mark the PTE softdirty and not make it writable.
2) VM_UFFD_WP requires similar care I assume? Peter might know.
On Thu, Sep 28, 2023 at 10:15 AM David Hildenbrand david@redhat.com wrote:
On 27.09.23 20:25, Suren Baghdasaryan wrote:
I have some cleanups pending for page_move_anon_rmap(), that moves the SetPageAnonExclusive hunk out. Here we should be using page_move_anon_rmap() [or rather, folio_move_anon_rmap() after my cleanups]
I'll send them out soonish.
Should I keep this as is in my next version until you post the cleanups? I can add a TODO comment to convert it to folio_move_anon_rmap() once it's ready.
You should just be able to use page_move_anon_rmap() and whatever gets in first cleans it up :)
Ack.
WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
dst_addr));
orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
dst_vma);
I think there's still a theoretical issue here that you could fix by checking for the AnonExclusive flag, similar to the huge page case.
Consider the following scenario:
1. process P1 does a write fault in a private anonymous VMA, creating and mapping a new anonymous page A1
2. process P1 forks and creates two children P2 and P3. afterwards, A1 is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
3. process P1 removes its mapping of A1, dropping its mapcount to 2.
4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
5. process P2 removes its mapping of A1, dropping its mapcount to 1.
If at this point P3 does a write fault on its mapping of A1, it will still trigger copy-on-write thanks to the AnonExclusive mechanism; and this is necessary to avoid P3 mapping A1 as writable and writing data into it that will become visible to P2, if P2 and P3 are in different security contexts.
But if P3 instead moves its mapping of A1 to another address with remap_anon_pte() which only does a page mapcount check, the maybe_mkwrite() will directly make the mapping writable, circumventing the AnonExclusive mechanism.
Yes, can_change_pte_writable() contains the exact logic when we can turn something easily writable even if it wasn't writable before. which includes that PageAnonExclusive is set. (but with uffd-wp or softdirty tracking, there is more to consider)
For uffd_remap can_change_pte_writable() would fail if VM_WRITE is not set, but we want remapping to work for RO memory as well. Are you
In a VMA without VM_WRITE you certainly wouldn't want to make PTEs writable :) That's why that function just does a sanity check that it is not called in strange context. So one would only call it if VM_WRITE is set.
saying that a PageAnonExclusive() check alone would not be enough here?
There are some interesting questions to ask here:
- What happens if the old VMA has VM_SOFTDIRTY set but the new one not? You most probably have to mark the PTE softdirty and not make it writable.
- VM_UFFD_WP requires similar care I assume? Peter might know.
Let me look closer into these cases. I'll also double-check if we need to support uffd_remap for R/O vmas. I assumed we do but I actually never checked. Thanks!
On Thu, Sep 28, 2023 at 11:32 AM Suren Baghdasaryan surenb@google.com wrote:
On Thu, Sep 28, 2023 at 10:15 AM David Hildenbrand david@redhat.com wrote:
On 27.09.23 20:25, Suren Baghdasaryan wrote:
I have some cleanups pending for page_move_anon_rmap(), that moves the SetPageAnonExclusive hunk out. Here we should be using page_move_anon_rmap() [or rather, folio_move_anon_rmap() after my cleanups]
I'll send them out soonish.
Should I keep this as is in my next version until you post the cleanups? I can add a TODO comment to convert it to folio_move_anon_rmap() once it's ready.
You should just be able to use page_move_anon_rmap() and whatever gets in first cleans it up :)
Ack.
WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
dst_addr));
orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
dst_vma);
I think there's still a theoretical issue here that you could fix by checking for the AnonExclusive flag, similar to the huge page case.
Consider the following scenario:
1. process P1 does a write fault in a private anonymous VMA, creating and mapping a new anonymous page A1
2. process P1 forks and creates two children P2 and P3. afterwards, A1 is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
3. process P1 removes its mapping of A1, dropping its mapcount to 2.
4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
5. process P2 removes its mapping of A1, dropping its mapcount to 1.
If at this point P3 does a write fault on its mapping of A1, it will still trigger copy-on-write thanks to the AnonExclusive mechanism; and this is necessary to avoid P3 mapping A1 as writable and writing data into it that will become visible to P2, if P2 and P3 are in different security contexts.
But if P3 instead moves its mapping of A1 to another address with remap_anon_pte() which only does a page mapcount check, the maybe_mkwrite() will directly make the mapping writable, circumventing the AnonExclusive mechanism.
Yes, can_change_pte_writable() contains the exact logic when we can turn something easily writable even if it wasn't writable before. which includes that PageAnonExclusive is set. (but with uffd-wp or softdirty tracking, there is more to consider)
For uffd_remap can_change_pte_writable() would fail if VM_WRITE is not set, but we want remapping to work for RO memory as well. Are you
In a VMA without VM_WRITE you certainly wouldn't want to make PTEs writable :) That's why that function just does a sanity check that it is not called in strange context. So one would only call it if VM_WRITE is set.
saying that a PageAnonExclusive() check alone would not be enough here?
There are some interesting questions to ask here:
- What happens if the old VMA has VM_SOFTDIRTY set but the new one not? You most probably have to mark the PTE softdirty and not make it writable.
- VM_UFFD_WP requires similar care I assume? Peter might know.
Let me look closer into these cases. I'll also double-check if we need to support uffd_remap for R/O vmas. I assumed we do but I actually never checked.
Ok, I confirmed that we don't need remapping of R/O areas. So, I can use can_change_pte_writable() and keep things simple. Does that sound good?
Thanks!
On Thu, Sep 28, 2023 at 07:15:13PM +0200, David Hildenbrand wrote:
There are some interesting questions to ask here:
- What happens if the old VMA has VM_SOFTDIRTY set but the new one not? You most probably have to mark the PTE softdirty and not make it writable.
I don't know whether anyone would care about soft-dirty used with uffd remap, but if to think about it..
Logically if the dst vma has !SOFTDIRTY (meaning soft-dirty tracking is enabled), then IIUC the right thing to do is to assume this page is modified, hence mark it softdirty and perhaps proceed with other checks (where the write bit can be set if all checks pass)?
Because from a soft-dirty monitor POV on dst_vma I see this REMAP the same as writing data onto the missing page and getting a page fault (e.g. UFFDIO_COPY); we just avoided the allocation and copy.
The src vma seems also fine in this regard: soft-dirty should ignore holes always anyway (e.g. DONTNEED on a page should report !soft-dirty later even if tracking).
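A sketch of one possible handling along those lines when building the destination PTE (my assumption of the intent, not code from the series):

	/*
	 * Sketch: if soft-dirty tracking is active on dst_vma (VM_SOFTDIRTY
	 * clear), account the move as a write into the destination range.
	 */
	if (!(dst_vma->vm_flags & VM_SOFTDIRTY))
		orig_dst_pte = pte_mksoft_dirty(orig_dst_pte);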
- VM_UFFD_WP requires similar care I assume? Peter might know.
UFFD_WP shouldn't be affected, iiuc.
Let's first discuss dst vma side.
WP_UNPOPULATED made it slightly complicated but not so much. The core should be that REMAP only installs pages if it's exactly pte_none():
+	if (!pte_none(orig_dst_pte)) {
+		err = -EEXIST;
+		goto out;
+	}
Then it already covers things like pte markers, and any marker currently will fail the REMAP ioctl already. May not be always wanted, but no risk of losing wp notifications. If that'll be a valid use case we can work it out.
On src vma, REMAP ioctl should behave the same as DONTNEED. Now we drop the src pte along with the uffd-wp bit even if set, which is the correct behavior from that regard.
Again, I don't know whether anyone cares on any of those, though..
Thanks,
On 28.09.23 21:00, Peter Xu wrote:
On Thu, Sep 28, 2023 at 07:15:13PM +0200, David Hildenbrand wrote:
There are some interesting questions to ask here:
- What happens if the old VMA has VM_SOFTDIRTY set but the new one not? You most probably have to mark the PTE softdirty and not make it writable.
I don't know whether anyone would care about soft-dirty used with uffd remap, but if to think about it..
Logically if the dst vma has !SOFTDIRTY (meaning soft-dirty tracking is enabled), then IIUC the right thing to do is to assume this page is modified, hence mark it softdirty and perhaps proceed with other checks (where the write bit can be set if all checks pass)?
I think so, yes.
Because from a soft-dirty monitor POV on dst_vma I see this REMAP the same as writing data onto the missing page and getting a page fault (e.g. UFFDIO_COPY); we just avoided the allocation and copy.
The src vma seems also fine in this regard: soft-dirty should ignore holes always anyway (e.g. DONTNEED on a page should report !soft-dirty later even if tracking).
Sounds good to me.
- VM_UFFD_WP requires similar care I assume? Peter might know.
UFFD_WP shouldn't be affected, iiuc.
Let's first discuss dst vma side.
WP_UNPOPULATED made it slightly complicated but not so much. The core should be that REMAP only installs pages if it's exactly pte_none():
if (!pte_none(orig_dst_pte)) {
err = -EEXIST;
goto out;
}
Then it already covers things like pte markers, and any marker currently will fail the REMAP ioctl already. May not be always wanted, but no risk of losing wp notifications. If that'll be a valid use case we can work it out.
Agreed.
On src vma, REMAP ioctl should behave the same as DONTNEED. Now we drop the src pte along with the uffd-wp bit even if set, which is the correct behavior from that regard.
Again, I don't know whether anyone cares on any of those, though..
If it's easy to handle, we should just handle it or instead spell it out why we believe we can break other features. Seems to be very easy to handle.
On Wed, Sep 27, 2023 at 03:29:35PM +0200, David Hildenbrand wrote:
if (!pte_same(*src_pte, orig_src_pte) ||
!pte_same(*dst_pte, orig_dst_pte) ||
folio_test_large(src_folio) ||
folio_estimated_sharers(src_folio) != 1) {
^ here you should check PageAnonExclusive. Please get rid of any explicit/implicit mapcount checks.
David, is PageAnonExclusive 100% accurate now in the current tree?
IOW, can it be possible that the page has total_mapcount==1 but missing AnonExclusive bit in any possible way?
Thanks,
On 28.09.23 18:24, Peter Xu wrote:
On Wed, Sep 27, 2023 at 03:29:35PM +0200, David Hildenbrand wrote:
if (!pte_same(*src_pte, orig_src_pte) ||
!pte_same(*dst_pte, orig_dst_pte) ||
folio_test_large(src_folio) ||
folio_estimated_sharers(src_folio) != 1) {
^ here you should check PageAnonExclusive. Please get rid of any explicit/implicit mapcount checks.
David, is PageAnonExclusive 100% accurate now in the current tree?
IOW, can it be possible that the page has total_mapcount==1 but missing AnonExclusive bit in any possible way?
As described as reply to v1, without fork() and KSM, the PAE bit should stick around. If that's not the case, we should investigate why.
If we ever support the post-fork case (which the comment above remap_pages() excludes) we'll need good motivation why we'd want to make this overly-complicated feature even more complicated.
On Thu, Sep 28, 2023 at 07:05:40PM +0200, David Hildenbrand wrote:
As described as reply to v1, without fork() and KSM, the PAE bit should stick around. If that's not the case, we should investigate why.
If we ever support the post-fork case (which the comment above remap_pages() excludes) we'll need good motivation why we'd want to make this overly-complicated feature even more complicated.
The problem is DONTFORK is only a suggestion, but not yet restricted. If someone reaches on top of some !PAE page on src it'll never gonna proceed and keep failing, iiuc.
do_wp_page() doesn't have that issue of accuracy only because one round of CoW will just allocate a new page with PAE set guaranteed, which is pretty much self-heal and unnoticed.
So it'll be great if we can have similar self-heal way for PAE. If not, I think it's still fine we just always fail on !PAE src pages, but then maybe we should let the user know what's wrong, e.g., the user can just forgot to apply DONTFORK then forked. And then the user hits error and don't know what happened. Probably at least we should document it well in man pages.
Another option can be we keep using folio_mapcount() for pte, and another helper (perhaps: _nr_pages_mapped==COMPOUND_MAPPED && _entire_mapcount==1) for thp. But I know that's not ideal either.
On 28.09.23 19:21, Peter Xu wrote:
On Thu, Sep 28, 2023 at 07:05:40PM +0200, David Hildenbrand wrote:
As described as reply to v1, without fork() and KSM, the PAE bit should stick around. If that's not the case, we should investigate why.
If we ever support the post-fork case (which the comment above remap_pages() excludes) we'll need good motivation why we'd want to make this overly-complicated feature even more complicated.
The problem is DONTFORK is only a suggestion, but not yet restricted. If someone reaches on top of some !PAE page on src it'll never gonna proceed and keep failing, iiuc.
Yes. It won't work if you fork() and not use DONTFORK on the src VMA. We should document that as a limitation.
For example, you could return an error to the user that can just call UFFDIO_COPY. (or to the UFFDIO_COPY from inside uffd code, but that's probably ugly as well).
do_wp_page() doesn't have that issue of accuracy only because one round of CoW will just allocate a new page with PAE set guaranteed, which is pretty much self-heal and unnoticed.
Yes. But it might have to copy, at which point the whole optimization of remap is gone :)
So it'll be great if we can have similar self-heal way for PAE. If not, I think it's still fine we just always fail on !PAE src pages, but then maybe we should let the user know what's wrong, e.g., the user can just forgot to apply DONTFORK then forked. And then the user hits error and don't know what happened. Probably at least we should document it well in man pages.
Yes, exactly.
Another option can be we keep using folio_mapcount() for pte, and another helper (perhaps: _nr_pages_mapped==COMPOUND_MAPPED && _entire_mapcount==1) for thp. But I know that's not ideal either.
As long as we only set the pte writable if PAE is set, we're good from a CVE perspective. The other part is just simplicity of avoiding all these mapcount+swapcount games where possible.
(one day folio_mapcount() might be faster -- I'm still working on that patch in the bigger picture of handling PTE-mapped THP better)
On Thu, Sep 28, 2023 at 07:51:18PM +0200, David Hildenbrand wrote:
On 28.09.23 19:21, Peter Xu wrote:
On Thu, Sep 28, 2023 at 07:05:40PM +0200, David Hildenbrand wrote:
As described as reply to v1, without fork() and KSM, the PAE bit should stick around. If that's not the case, we should investigate why.
If we ever support the post-fork case (which the comment above remap_pages() excludes) we'll need good motivation why we'd want to make this overly-complicated feature even more complicated.
The problem is DONTFORK is only a suggestion, but not yet restricted. If someone reaches on top of some !PAE page on src it'll never gonna proceed and keep failing, iiuc.
Yes. It won't work if you fork() and not use DONTFORK on the src VMA. We should document that as a limitation.
For example, you could return an error to the user that can just call UFFDIO_COPY. (or to the UFFDIO_COPY from inside uffd code, but that's probably ugly as well).
We could indeed provide some special errno perhaps upon the PAE check, then document it explicitly in the man page and suggest resolutions (like DONTFORK) when user hit it.
do_wp_page() doesn't have that issue of accuracy only because one round of CoW will just allocate a new page with PAE set guaranteed, which is pretty much self-heal and unnoticed.
Yes. But it might have to copy, at which point the whole optimization of remap is gone :)
Right, but that's fine IMHO because it should still be very corner case, definitely not expected to be the majority to start impact the performance results.
So it'll be great if we can have similar self-heal way for PAE. If not, I think it's still fine we just always fail on !PAE src pages, but then maybe we should let the user know what's wrong, e.g., the user can just forgot to apply DONTFORK then forked. And then the user hits error and don't know what happened. Probably at least we should document it well in man pages.
Yes, exactly.
Another option can be we keep using folio_mapcount() for pte, and another helper (perhaps: _nr_pages_mapped==COMPOUND_MAPPED && _entire_mapcount==1) for thp. But I know that's not ideal either.
As long as we only set the pte writable if PAE is set, we're good from a CVE perspective. The other part is just simplicity of avoiding all these mapcount+swapcount games where possible.
(one day folio_mapcount() might be faster -- I'm still working on that patch in the bigger picture of handling PTE-mapped THP better)
Sure.
For now as long as we're crystal clear on the possibility of inaccuracy of PAE, it never hits besides fork() && !DONTFORK, and properly document it, then sounds good here.
Thanks,
On Thu, Sep 28, 2023 at 11:34 AM Peter Xu peterx@redhat.com wrote:
On Thu, Sep 28, 2023 at 07:51:18PM +0200, David Hildenbrand wrote:
On 28.09.23 19:21, Peter Xu wrote:
On Thu, Sep 28, 2023 at 07:05:40PM +0200, David Hildenbrand wrote:
As described as reply to v1, without fork() and KSM, the PAE bit should stick around. If that's not the case, we should investigate why.
If we ever support the post-fork case (which the comment above remap_pages() excludes) we'll need good motivation why we'd want to make this overly-complicated feature even more complicated.
The problem is DONTFORK is only a suggestion, but not yet restricted. If someone reaches on top of some !PAE page on src it'll never gonna proceed and keep failing, iiuc.
Yes. It won't work if you fork() and not use DONTFORK on the src VMA. We should document that as a limitation.
For example, you could return an error to the user that can just call UFFDIO_COPY. (or to the UFFDIO_COPY from inside uffd code, but that's probably ugly as well).
We could indeed provide some special errno perhaps upon the PAE check, then document it explicitly in the man page and suggest resolutions (like DONTFORK) when user hit it.
do_wp_page() doesn't have that issue of accuracy only because one round of CoW will just allocate a new page with PAE set guaranteed, which is pretty much self-heal and unnoticed.
Yes. But it might have to copy, at which point the whole optimization of remap is gone :)
Right, but that's fine IMHO because it should still be very corner case, definitely not expected to be the majority to start impact the performance results.
So it'll be great if we can have similar self-heal way for PAE. If not, I think it's still fine we just always fail on !PAE src pages, but then maybe we should let the user know what's wrong, e.g., the user can just forgot to apply DONTFORK then forked. And then the user hits error and don't know what happened. Probably at least we should document it well in man pages.
Yes, exactly.
Another option can be we keep using folio_mapcount() for pte, and another helper (perhaps: _nr_pages_mapped==COMPOUND_MAPPED && _entire_mapcount==1) for thp. But I know that's not ideal either.
As long as we only set the pte writable if PAE is set, we're good from a CVE perspective. The other part is just simplicity of avoiding all these mapcount+swapcount games where possible.
(one day folio_mapcount() might be faster -- I'm still working on that patch in the bigger picture of handling PTE-mapped THP better)
Sure.
For now as long as we're crystal clear on the possibility of inaccuracy of PAE, it never hits besides fork() && !DONTFORK, and properly document it, then sounds good here.
Ok, sounds like we have a consensus. I'll prepare manpage changes to document the DONTFORK requirement for uffd_remap.
Thanks,
On 28.09.23 20:34, Peter Xu wrote:
On Thu, Sep 28, 2023 at 07:51:18PM +0200, David Hildenbrand wrote:
On 28.09.23 19:21, Peter Xu wrote:
On Thu, Sep 28, 2023 at 07:05:40PM +0200, David Hildenbrand wrote:
As described as reply to v1, without fork() and KSM, the PAE bit should stick around. If that's not the case, we should investigate why.
If we ever support the post-fork case (which the comment above remap_pages() excludes) we'll need good motivation why we'd want to make this overly-complicated feature even more complicated.
The problem is DONTFORK is only a suggestion, but not yet restricted. If someone reaches on top of some !PAE page on src it'll never gonna proceed and keep failing, iiuc.
Yes. It won't work if you fork() and not use DONTFORK on the src VMA. We should document that as a limitation.
For example, you could return an error to the user that can just call UFFDIO_COPY. (or to the UFFDIO_COPY from inside uffd code, but that's probably ugly as well).
We could indeed provide some special errno perhaps upon the PAE check, then document it explicitly in the man page and suggest resolutions (like DONTFORK) when user hit it.
It might be reasonable to consider an operation that moves the page, even if it might do an internal copy. UFFDIO_MOVE might be a better name for something like that.
In case we cannot simply remap the page, the fallback sequence (from the cover letter) would be triggered.
1) UFFDIO_COPY
2) MADV_DONTNEED
So we would just handle the operation internally without a fallback.
The recommendation to the user to make the overall operation as fast as possible would be to not use KSM, to avoid fork(), or to use MADV_DONTFORK when fork() must be used.
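As a rough user-space illustration of that fallback (a sketch only; the struct uffdio_remap field names are assumed here to mirror struct uffdio_copy, and uffd, dst, src and len are assumed to be set up as for any UFFDIO_COPY user):

	struct uffdio_remap remap = {
		.dst = (unsigned long)dst, .src = (unsigned long)src,
		.len = len, .mode = 0,
	};
	/* Try the zero-copy move first. */
	if (ioctl(uffd, UFFDIO_REMAP, &remap) < 0) {
		struct uffdio_copy copy = {
			.dst = (unsigned long)dst, .src = (unsigned long)src,
			.len = len, .mode = 0,
		};
		/* Fall back to copying the data into the faulting range... */
		if (ioctl(uffd, UFFDIO_COPY, &copy) < 0)
			err(1, "UFFDIO_COPY");
		/* ...and then drop the now-redundant source pages. */
		if (madvise(src, len, MADV_DONTNEED))
			err(1, "madvise(MADV_DONTNEED)");
	}

As Peter notes below, such a user-space sequence is not atomic against concurrent writers the way a single UFFDIO_REMAP is, which is part of the discussion that follows.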
On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
In case we cannot simply remap the page, the fallback sequence (from the cover letter) would be triggered.
- UFFDIO_COPY
- MADV_DONTNEED
So we would just handle the operation internally without a fallback.
Note that I think there will be a slight difference on whole remap atomicity, on what happens if the page is modified after UFFDIO_COPY but before DONTNEED.
UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost during movement, and it will generate a missing event after moved, with latest data showing up on dest.
I'm not sure that means such a fallback is a problem, Suren may know better with the use case.
Thanks,
On Mon, Oct 2, 2023 at 4:21 PM Peter Xu peterx@redhat.com wrote:
On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
In case we cannot simply remap the page, the fallback sequence (from the cover letter) would be triggered.
- UFFDIO_COPY
- MADV_DONTNEED
So we would just handle the operation internally without a fallback.
Note that I think there will be a slight difference on whole remap atomicity, on what happens if the page is modified after UFFDIO_COPY but before DONTNEED.
UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost during movement, and it will generate a missing event after moved, with latest data showing up on dest.
I'm not sure that means such a fallback is a problem, Suren may know better with the use case.
Although there is no problem in using the fallback with our use case, as a user of userfaultfd I'd suggest leaving it to the developer. Failing with an appropriate errno makes more sense. If handled in the kernel, the user may assume at the end of the operation that the src vma is completely unmapped. And even if there are no correctness issues, it could lead to memory leaks.
Thanks,
On Mon, Oct 2, 2023 at 4:46 PM Lokesh Gidra lokeshgidra@google.com wrote:
On Mon, Oct 2, 2023 at 4:21 PM Peter Xu peterx@redhat.com wrote:
On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
In case we cannot simply remap the page, the fallback sequence (from the cover letter) would be triggered.
- UFFDIO_COPY
- MADV_DONTNEED
So we would just handle the operation internally without a fallback.
Note that I think there will be a slight difference on whole remap atomicity, on what happens if the page is modified after UFFDIO_COPY but before DONTNEED.
UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost during movement, and it will generate a missing event after moved, with latest data showing up on dest.
I'm not sure that means such a fallback is a problem, Suren may know better with the use case.
Although there is no problem in using the fallback with our use case, as a user of userfaultfd I'd suggest leaving it to the developer. Failing with an appropriate errno makes more sense. If handled in the kernel, the user may assume at the end of the operation that the src vma is completely unmapped. And even if there are no correctness issues, it could lead to memory leaks.
I meant that in addition to the possibility of correctness issues due to lack of atomicity, it could also lead to memory leaks, as the user may assume that src vma is empty post-operation. IMHO, it's better to fail with errno so that the user would fix the code with necessary changes (like using DONTFORK, if forking).
Thanks,
On 02.10.23 17:55, Lokesh Gidra wrote:
On Mon, Oct 2, 2023 at 4:46 PM Lokesh Gidra lokeshgidra@google.com wrote:
On Mon, Oct 2, 2023 at 4:21 PM Peter Xu peterx@redhat.com wrote:
On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
In case we cannot simply remap the page, the fallback sequence (from the cover letter) would be triggered.
- UFFDIO_COPY
- MADV_DONTNEED
So we would just handle the operation internally without a fallback.
Note that I think there will be a slight difference on whole remap atomicity, on what happens if the page is modified after UFFDIO_COPY but before DONTNEED.
UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost during movement, and it will generate a missing event after moved, with latest data showing up on dest.
I'm not sure that means such a fallback is a problem, Suren may know better with the use case.
Although there is no problem in using the fallback with our use case, as a user of userfaultfd I'd suggest leaving it to the developer. Failing with an appropriate errno makes more sense. If handled in the kernel, the user may assume at the end of the operation that the src vma is completely unmapped. And even if there are no correctness issues, it could lead to memory leaks.
I meant that in addition to the possibility of correctness issues due to lack of atomicity, it could also lead to memory leaks, as the user may assume that src vma is empty post-operation. IMHO, it's better to fail with errno so that the user would fix the code with necessary changes (like using DONTFORK, if forking).
Leaving the atomicity discussion out because I think this can just be handled (e.g., the src_vma would always be empty post-operation):
It might not necessarily be a good idea to only expose micro-operations to user space. If the user-space fallback will almost always be "UFFDIO_COPY+MADV_DONTNEED", then clearly the logical operation performed is moving data, ideally with zero-copy.
[as said as reply to Peter, one could still have magic flags for users that really want to detect when zero-copy is impossible]
With a logical MOVE API users like compaction [as given in the cover letter], not every such user has to eventually implement fallback paths.
But just my 2 cents, the UFFDIO_REMAP users probably can share what the exact use cases are and if fallbacks are required at all or if no-KSM + DONTFORK just does the trick.
On Mon, Oct 2, 2023 at 6:43 PM David Hildenbrand david@redhat.com wrote:
On 02.10.23 17:55, Lokesh Gidra wrote:
On Mon, Oct 2, 2023 at 4:46 PM Lokesh Gidra lokeshgidra@google.com wrote:
On Mon, Oct 2, 2023 at 4:21 PM Peter Xu peterx@redhat.com wrote:
On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
In case we cannot simply remap the page, the fallback sequence (from the cover letter) would be triggered.
- UFFDIO_COPY
- MADV_DONTNEED
So we would just handle the operation internally without a fallback.
Note that I think there will be a slight difference on whole remap atomicity, on what happens if the page is modified after UFFDIO_COPY but before DONTNEED.
UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads can be updating the pages while ioctl(UFFDIO_REMAP) runs, data won't get lost during the movement, and it will generate a missing event after the move, with the latest data showing up on dest.
I'm not sure that means such a fallback is a problem, Suren may know better with the use case.
Although there is no problem in using the fallback with our use case, as a user of userfaultfd I'd suggest leaving it to the developer. Failing with an appropriate errno makes more sense. If handled in the kernel, the user may assume at the end of the operation that the src vma is completely unmapped. Even if that doesn't cause correctness issues, it could lead to memory leaks.
I meant that in addition to the possibility of correctness issues due to lack of atomicity, it could also lead to memory leaks, as the user may assume that src vma is empty post-operation. IMHO, it's better to fail with errno so that the user would fix the code with necessary changes (like using DONTFORK, if forking).
Leaving the atomicity discussion out because I think this can just be handled (e.g., the src_vma would always be empty post-operation):
It might not necessarily be a good idea to only expose micro-operations to user space. If the user-space fallback will almost always be "UFFDIO_COPY+MADV_DONTNEED", then clearly the logical operation performed is moving data, ideally with zero-copy.
IMHO, such a fallback will be useful only if it's possible that only some pages in the src vma fail due to this. But even then it would be really useful to have a flag maybe like UFFDIO_REMAP_FALLBACK_COPY to control if the user wants the fallback or not. OTOH, if this is something that can be detected for the entire src vma, then failing with errno is more appropriate.
Given that the patch is already quite complicated, I humbly suggest leaving the fallback for now as a TODO.
[as said as reply to Peter, one could still have magic flags for users that really want to detect when zero-copy is impossible]
With a logical MOVE API users like compaction [as given in the cover letter], not every such user has to eventually implement fallback paths.
But just my 2 cents, the UFFDIO_REMAP users probably can share what the exact use cases are and if fallbacks are required at all or if no-KSM + DONTFORK just does the trick.
-- Cheers,
David / dhildenb
On Mon, Oct 2, 2023 at 12:34 PM Lokesh Gidra lokeshgidra@google.com wrote:
On Mon, Oct 2, 2023 at 6:43 PM David Hildenbrand david@redhat.com wrote:
On 02.10.23 17:55, Lokesh Gidra wrote:
On Mon, Oct 2, 2023 at 4:46 PM Lokesh Gidra lokeshgidra@google.com wrote:
On Mon, Oct 2, 2023 at 4:21 PM Peter Xu peterx@redhat.com wrote:
On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
In case we cannot simply remap the page, the fallback sequence (from the cover letter) would be triggered.
- UFFDIO_COPY
- MADV_DONTNEED
So we would just handle the operation internally without a fallback.
Note that I think there will be a slight difference on whole remap atomicity, on what happens if the page is modified after UFFDIO_COPY but before DONTNEED.
UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads can be updating the pages while ioctl(UFFDIO_REMAP) runs, data won't get lost during the movement, and it will generate a missing event after the move, with the latest data showing up on dest.
I'm not sure that means such a fallback is a problem, Suren may know better with the use case.
Although there is no problem in using the fallback with our use case, as a user of userfaultfd I'd suggest leaving it to the developer. Failing with an appropriate errno makes more sense. If handled in the kernel, the user may assume at the end of the operation that the src vma is completely unmapped. Even if that doesn't cause correctness issues, it could lead to memory leaks.
I meant that in addition to the possibility of correctness issues due to lack of atomicity, it could also lead to memory leaks, as the user may assume that src vma is empty post-operation. IMHO, it's better to fail with errno so that the user would fix the code with necessary changes (like using DONTFORK, if forking).
Leaving the atomicity discussion out because I think this can just be handled (e.g., the src_vma would always be empty post-operation):
It might not necessarily be a good idea to only expose micro-operations to user space. If the user-space fallback will almost always be "UFFDIO_COPY+MADV_DONTNEED", then clearly the logical operation performed is moving data, ideally with zero-copy.
IMHO, such a fallback will be useful only if it's possible that only some pages in the src vma fail due to this. But even then it would be really useful to have a flag maybe like UFFDIO_REMAP_FALLBACK_COPY to control if the user wants the fallback or not. OTOH, if this is something that can be detected for the entire src vma, then failing with errno is more appropriate.
Given that the patch is already quite complicated, I humbly suggest leaving the fallback for now as a TODO.
Ok, I think it makes sense to implement the strict remap logic but in a way that we can easily add copy fallback if that's needed in the future. So, I'll change UFFDIO_REMAP to UFFDIO_MOVE and will return some unique error, like EBUSY when the page is not PAE. If we need to add a copy fallback in the future, we will add a UFFDIO_MOVE_MODE_ALLOW_COPY flag and will implement the copy mechanism. Does that sound good?
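To make the proposed contract concrete, here is a rough userspace sketch of that flow with the fallback kept in userspace. It assumes uffd is a userfaultfd with the dst range registered, uses the uffdio_remap layout proposed in this series (field names are not final, and partial-progress handling via the result field is omitted for brevity), and uses err(3) for error handling:

    struct uffdio_remap remap = {
            .dst = (unsigned long)dst,
            .src = (unsigned long)src,
            .len = len,
            .mode = 0,
    };

    if (ioctl(uffd, UFFDIO_REMAP, &remap) == -1) {
            if (errno != EBUSY)
                    err(1, "UFFDIO_REMAP");
            /* page(s) not exclusive: fall back to copy + drop in userspace */
            struct uffdio_copy copy = {
                    .dst = (unsigned long)dst,
                    .src = (unsigned long)src,
                    .len = len,
                    .mode = 0,
            };
            if (ioctl(uffd, UFFDIO_COPY, &copy) == -1)
                    err(1, "UFFDIO_COPY");
            if (madvise(src, len, MADV_DONTNEED))
                    err(1, "MADV_DONTNEED");
    }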
[as said as reply to Peter, one could still have magic flags for users that really want to detect when zero-copy is impossible]
With a logical MOVE API users like compaction [as given in the cover letter], not every such user has to eventually implement fallback paths.
But just my 2 cents, the UFFDIO_REMAP users probably can share what the exact use cases are and if fallbacks are required at all or if no-KSM + DONTFORK just does the trick.
-- Cheers,
David / dhildenb
On Tue, Oct 03, 2023 at 01:04:44PM -0700, Suren Baghdasaryan wrote:
Ok, I think it makes sense to implement the strict remap logic but in a way that we can easily add copy fallback if that's needed in the future. So, I'll change UFFDIO_REMAP to UFFDIO_MOVE and will return some unique error, like EBUSY when the page is not PAE. If we need to add a copy fallback in the future, we will add a UFFDIO_MOVE_MODE_ALLOW_COPY flag and will implement the copy mechanism. Does that sound good?
For the clear failing approach, sounds all good here.
For the name, no strong opinion, but is there any strong one over MOVE? MOVE is a fine name, however considering UFFDIO_REMAP's long history.. I tend to prefer keeping it called as REMAP - it still sounds sane, and anyone who knows REMAP will know this is exactly that.
Thanks,
On 03.10.23 22:21, Peter Xu wrote:
On Tue, Oct 03, 2023 at 01:04:44PM -0700, Suren Baghdasaryan wrote:
Ok, I think it makes sense to implement the strict remap logic but in a way that we can easily add copy fallback if that's needed in the future. So, I'll change UFFDIO_REMAP to UFFDIO_MOVE and will return some unique error, like EBUSY when the page is not PAE. If we need to add a copy fallback in the future, we will add a UFFDIO_MOVE_MODE_ALLOW_COPY flag and will implement the copy mechanism. Does that sound good?
For the clear failing approach, sounds all good here.
For the name, no strong opinion, but is there any strong one over MOVE?
See my reply regarding MOVE (+zero-copy optimization) vs. REMAP. Just my thoughts.
REMAP reminds me of mremap, which would never perform any copies, because it can just do more expensive page remappings (modifying VMAs etc.).
MOVE is a fine name, however considering UFFDIO_REMAP's long history.. I tend to prefer keeping it called as REMAP - it still sounds sane, and anyone who knows REMAP will know this is exactly that.
Sorry I have to ask: has this ever been discussed on the list? I don't see any pointers. If not, then probably the number of people that know about the history can be counted with my two hands and that shouldn't be the basis for making decisions.
But again, remap vs. move is for me a semantical difference; and as I am not a native speaker others might disagree and I might be just wrong.
On Tue, Oct 03, 2023 at 11:08:07PM +0200, David Hildenbrand wrote:
Sorry I have to ask: has this ever been discussed on the list? I don't see any pointers. If not, then probably the number of people that know about the history can be counted with my two hands and that shouldn't be the basis for making decisions.
For example:
https://lore.kernel.org/all/1425575884-2574-21-git-send-email-aarcange@redha...
On Tue, Oct 3, 2023 at 2:21 PM Peter Xu peterx@redhat.com wrote:
On Tue, Oct 03, 2023 at 11:08:07PM +0200, David Hildenbrand wrote:
Sorry I have to ask: has this ever been discussed on the list? I don't see any pointers. If not, then probably the number of people that know about the history can be counted with my two hands and that shouldn't be the basis for making decisions.
For example:
https://lore.kernel.org/all/1425575884-2574-21-git-send-email-aarcange@redha...
There was another submission in 2019: https://lore.kernel.org/all/cover.1547251023.git.blake.caldwell@colorado.edu...
Though both times it did not generate much discussion. I don't have a strong preference though MOVE sounds more generic to me TBH (it specifies the operation rather than REMAP which hints on how that operation is carried out). But again, I'm fine either way. As for UFFDIO_MOVE_ZERO_COPY_ONLY vs UFFDIO_MOVE_MODE_ALLOW_COPY, I find it weird that the default (the most efficient/desired) mode of operation needs a flag. I would prefer to have no flag initially and add UFFDIO_MOVE_MODE_ALLOW_COPY or whatever name is more appropriate when/if we ever need it. Makes sense?
-- Peter Xu
On Tue, Oct 3, 2023 at 11:26 PM Suren Baghdasaryan surenb@google.com wrote:
On Tue, Oct 3, 2023 at 2:21 PM Peter Xu peterx@redhat.com wrote:
On Tue, Oct 03, 2023 at 11:08:07PM +0200, David Hildenbrand wrote:
Sorry I have to ask: has this ever been discussed on the list? I don't see any pointers. If not, then probably the number of people that know about the history can be counted with my two hands and that shouldn't be the basis for making decisions.
For example:
https://lore.kernel.org/all/1425575884-2574-21-git-send-email-aarcange@redha...
There was another submission in 2019: https://lore.kernel.org/all/cover.1547251023.git.blake.caldwell@colorado.edu...
Though both times it did not generate much discussion. I don't have a strong preference though MOVE sounds more generic to me TBH (it specifies the operation rather than REMAP which hints on how that operation is carried out). But again, I'm fine either way.
That's a good point. IMHO, if in future we want to have the fallback implemented, then MOVE would be a more appropriate name than REMAP.
As for UFFDIO_MOVE_ZERO_COPY_ONLY vs UFFDIO_MOVE_MODE_ALLOW_COPY, I find it weird that the default (the most efficient/desired) mode of operation needs a flag. I would prefer to have no flag initially and add UFFDIO_MOVE_MODE_ALLOW_COPY or whatever name is more appropriate when/if we ever need it. Makes sense?
Agreed!
-- Peter Xu
On 04.10.23 01:39, Lokesh Gidra wrote:
On Tue, Oct 3, 2023 at 11:26 PM Suren Baghdasaryan surenb@google.com wrote:
On Tue, Oct 3, 2023 at 2:21 PM Peter Xu peterx@redhat.com wrote:
On Tue, Oct 03, 2023 at 11:08:07PM +0200, David Hildenbrand wrote:
Sorry I have to ask: has this ever been discussed on the list? I don't see any pointers. If not, then probably the number of people that know about the history can be counted with my two hands and that shouldn't be the basis for making decisions.
For example:
https://lore.kernel.org/all/1425575884-2574-21-git-send-email-aarcange@redha...
Sorry, I had to process a family NMI the last couple of days.
There was another submission in 2019: https://lore.kernel.org/all/cover.1547251023.git.blake.caldwell@colorado.edu...
It would be good to link them in the cover letter and briefly explain why that wasn't merged back then (if there was any reason).
Though both times it did not generate much discussion. I don't have a strong preference though MOVE sounds more generic to me TBH (it specifies the operation rather than REMAP which hints on how that operation is carried out). But again, I'm fine either way.
That's a good point. IMHO, if in future we want to have the fallback implemented, then MOVE would be a more appropriate name than REMAP.
As for UFFDIO_MOVE_ZERO_COPY_ONLY vs UFFDIO_MOVE_MODE_ALLOW_COPY, I find it weird that the default (the most efficient/desired) mode of operation needs a flag. I would prefer to have no flag initially and add UFFDIO_MOVE_MODE_ALLOW_COPY or whatever name is more appropriate when/if we ever need it. Makes sense?
Agreed!
I agree. One could have UFFDIO_MOVE that is best-effort and documented like that, and a to-be-named future extension that always works but might be more expensive.
Ideally we'd have an interface that does not expose and/or rely on such low-level information and simply always works, but getting that would mean that we'd have to implement the fallback immediately ... so I guess we'll have to expose a best-effort interface first.
On Fri, Oct 6, 2023 at 5:30 AM David Hildenbrand david@redhat.com wrote:
On 04.10.23 01:39, Lokesh Gidra wrote:
On Tue, Oct 3, 2023 at 11:26 PM Suren Baghdasaryan surenb@google.com wrote:
On Tue, Oct 3, 2023 at 2:21 PM Peter Xu peterx@redhat.com wrote:
On Tue, Oct 03, 2023 at 11:08:07PM +0200, David Hildenbrand wrote:
Sorry I have to ask: has this ever been discussed on the list? I don't see any pointers. If not, then probably the number of people that know about the history can be counted with my two hands and that shouldn't be the basis for making decisions.
For example:
https://lore.kernel.org/all/1425575884-2574-21-git-send-email-aarcange@redha...
Sorry, I had to process a family NMI the last couple of days.
There was another submission in 2019: https://lore.kernel.org/all/cover.1547251023.git.blake.caldwell@colorado.edu...
It would be good to link them in the cover letter and briefly explain why that wasn't merged back then (if there was any reason).
Will do. I could not find the reason but will check again.
Though both times it did not generate much discussion. I don't have a strong preference though MOVE sounds more generic to me TBH (it specifies the operation rather than REMAP which hints on how that operation is carried out). But again, I'm fine either way.
That's a good point. IMHO, if in future we want to have the fallback implemented, then MOVE would be a more appropriate name than REMAP.
As for UFFDIO_MOVE_ZERO_COPY_ONLY vs UFFDIO_MOVE_MODE_ALLOW_COPY, I find it weird that the default (the most efficient/desired) mode of operation needs a flag. I would prefer to have no flag initially and add UFFDIO_MOVE_MODE_ALLOW_COPY or whatever name is more appropriate when/if we ever need it. Makes sense?
Agreed!
I agree. One could have UFFDIO_MOVE that is best-effort and documented like that, and a to-be-named future extension that always works but might be more expensive.
Ideally we'd have an interface that does not expose and/or rely on such low-level information and simply always works, but getting that would mean that we'd have to implement the fallback immediately ... so I guess we'll have to expose a best-effort interface first.
Sounds good. I'll try to post the next version early next week. Thanks for the input folks!
-- Cheers,
David / dhildenb
On 03.10.23 22:04, Suren Baghdasaryan wrote:
On Mon, Oct 2, 2023 at 12:34 PM Lokesh Gidra lokeshgidra@google.com wrote:
On Mon, Oct 2, 2023 at 6:43 PM David Hildenbrand david@redhat.com wrote:
On 02.10.23 17:55, Lokesh Gidra wrote:
On Mon, Oct 2, 2023 at 4:46 PM Lokesh Gidra lokeshgidra@google.com wrote:
On Mon, Oct 2, 2023 at 4:21 PM Peter Xu peterx@redhat.com wrote:
On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
In case we cannot simply remap the page, the fallback sequence (from the cover letter) would be triggered.
1) UFFDIO_COPY
2) MADV_DONTNEED
So we would just handle the operation internally without a fallback.
Note that I think there will be a slight difference on whole remap atomicity, on what happens if the page is modified after UFFDIO_COPY but before DONTNEED.
UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads can be updating the pages while ioctl(UFFDIO_REMAP) runs, data won't get lost during the movement, and it will generate a missing event after the move, with the latest data showing up on dest.
I'm not sure that means such a fallback is a problem, Suren may know better with the use case.
Although there is no problem in using the fallback with our use case, as a user of userfaultfd I'd suggest leaving it to the developer. Failing with an appropriate errno makes more sense. If handled in the kernel, the user may assume at the end of the operation that the src vma is completely unmapped. Even if that doesn't cause correctness issues, it could lead to memory leaks.
I meant that in addition to the possibility of correctness issues due to lack of atomicity, it could also lead to memory leaks, as the user may assume that src vma is empty post-operation. IMHO, it's better to fail with errno so that the user would fix the code with necessary changes (like using DONTFORK, if forking).
Leaving the atomicity discussion out because I think this can just be handled (e.g., the src_vma would always be empty post-operation):
It might not necessarily be a good idea to only expose micro-operations to user space. If the user-space fallback will almost always be "UFFDIO_COPY+MADV_DONTNEED", then clearly the logical operation performed is moving data, ideally with zero-copy.
IMHO, such a fallback will be useful only if it's possible that only some pages in the src vma fail due to this. But even then it would be really useful to have a flag maybe like UFFDIO_REMAP_FALLBACK_COPY to control if the user wants the fallback or not. OTOH, if this is something that can be detected for the entire src vma, then failing with errno is more appropriate.
Given that the patch is already quite complicated, I humbly suggest leaving the fallback for now as a TODO.
I agree about the complexity, and I hope we can reduce that further. Otherwise such things end up being a maintenance nightmare.
Ok, I think it makes sense to implement the strict remap logic but in a way that we can easily add copy fallback if that's needed in the
I think whatever we do, we should
a) never talk about any of the implementation details (mapcount, swapcount, PAE) to users
b) make it clear from the start that we might change the decision of when we fail (for better or worse); users should be prepared to implement backup paths. We certainly don't want such behavior to be ABI.
I'd suggest documenting something like the following
"The operation may fail for various reasons. Usually, remapping of pages that are not exclusive to the given process fail; once KSM might dedduplicate pages or fork() COW-shares pages during fork() with child processes, they are no longer exclusive. Further, the kernel might only perform lightweight checks for detecting whether the pages are exclusive, and return -EWHATSOEVER in case that check fails. To make the operation more likely to succeed, KSM should be disabled, fork() should be avoided or MADV_DONTFORK should be configured for the source VMA before fork()."
future. So, I'll change UFFDIO_REMAP to UFFDIO_MOVE and will return some unique error, like EBUSY when the page is not PAE. If we need to add a copy fallback in the future, we will add a UFFDIO_MOVE_MODE_ALLOW_COPY flag and will implement the copy mechanism. Does that sound good?
To me, if we're talking about moving data, then zero-copy is the optimization and copy+delete would be the (slower) default.
If we're talking about remapping, then there is no copy; we're remapping pages.
So if we'd ever want to support the copy case, one combination would be
UFFDIO_MOVE + UFFDIO_MOVE_ZERO_COPY_ONLY
whereby we would fail if the latter is not specified.
But just my thoughts.
On 02.10.23 17:21, Peter Xu wrote:
On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
In case we cannot simply remap the page, the fallback sequence (from the cover letter) would be triggered.
- UFFDIO_COPY
- MADV_DONTNEED
So we would just handle the operation internally without a fallback.
Note that I think there will be a slight difference on whole remap atomicity, on what happens if the page is modified after UFFDIO_COPY but before DONTNEED.
If the page is writable (implies PAE), we can always move it. If it is R/O, it cannot change before we get a page fault and grab the PT lock (well, and page lock).
So I think something atomic can be implemented without too much issues.
UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads can be updating the pages while ioctl(UFFDIO_REMAP) runs, data won't get lost during the movement, and it will generate a missing event after the move, with the latest data showing up on dest.
If the page has to be copied, grab a reference and unmap it, then copy it and map it into the new process. Should be doable and handle all kinds of situations just fine.
Just throwing out ideas to get a less low-level interface.
[if one really wants to get notified when one cannot move without a copy, one could have a flag for such power users to control the behavior]
On 02.10.23 19:33, David Hildenbrand wrote:
On 02.10.23 17:21, Peter Xu wrote:
On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
In case we cannot simply remap the page, the fallback sequence (from the cover letter) would be triggered.
- UFFDIO_COPY
- MADV_DONTNEED
So we would just handle the operation internally without a fallback.
Note that I think there will be a slight difference on whole remap atomicity, on what happens if the page is modified after UFFDIO_COPY but before DONTNEED.
If the page is writable (implies PAE), we can always move it. If it is R/O, it cannot change before we get a page fault and grab the PT lock (well, and page lock).
So I think something atomic can be implemented without too much issues.
UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads can be updating the pages while ioctl(UFFDIO_REMAP) runs, data won't get lost during the movement, and it will generate a missing event after the move, with the latest data showing up on dest.
If the page has to be copied, grab a reference and unmap it, then copy it and map it into the new process. Should be doable and handle all kinds of situations just fine.
Just throwing out ideas to get a less low-level interface.
[if one really wants to get notified when one cannot move without a copy, one could have a flag for such power users to control the behavior]
[of course, if someone would have a GUP-pin on such a page, the page exchange would be observable. Just have to documented the UFFDIO_MOVE semantics properly]
On Wed, Sep 27, 2023 at 5:47 AM Jann Horn jannh@google.com wrote:
On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan surenb@google.com wrote:
From: Andrea Arcangeli aarcange@redhat.com
This implements the uABI of UFFDIO_REMAP.
Notably one mode bitflag is also forwarded (and in turn known) by the lowlevel remap_pages method.
Signed-off-by: Andrea Arcangeli aarcange@redhat.com Signed-off-by: Suren Baghdasaryan surenb@google.com
[...]
+int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr, unsigned long src_addr)
+{
pmd_t _dst_pmd, src_pmdval;
struct page *src_page;
struct folio *src_folio;
struct anon_vma *src_anon_vma, *dst_anon_vma;
spinlock_t *src_ptl, *dst_ptl;
pgtable_t src_pgtable, dst_pgtable;
struct mmu_notifier_range range;
int err = 0;
src_pmdval = *src_pmd;
src_ptl = pmd_lockptr(src_mm, src_pmd);
BUG_ON(!spin_is_locked(src_ptl));
mmap_assert_locked(src_mm);
mmap_assert_locked(dst_mm);
BUG_ON(!pmd_trans_huge(src_pmdval));
BUG_ON(!pmd_none(dst_pmdval));
BUG_ON(src_addr & ~HPAGE_PMD_MASK);
BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
src_page = pmd_page(src_pmdval);
if (unlikely(!PageAnonExclusive(src_page))) {
spin_unlock(src_ptl);
return -EBUSY;
}
src_folio = page_folio(src_page);
folio_get(src_folio);
spin_unlock(src_ptl);
/* preallocate dst_pgtable if needed */
if (dst_mm != src_mm) {
dst_pgtable = pte_alloc_one(dst_mm);
if (unlikely(!dst_pgtable)) {
err = -ENOMEM;
goto put_folio;
}
} else {
dst_pgtable = NULL;
}
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
src_addr + HPAGE_PMD_SIZE);
mmu_notifier_invalidate_range_start(&range);
/* block all concurrent rmap walks */
folio_lock(src_folio);
/*
* split_huge_page walks the anon_vma chain without the page
* lock. Serialize against it with the anon_vma lock, the page
* lock is not enough.
*/
src_anon_vma = folio_get_anon_vma(src_folio);
if (!src_anon_vma) {
err = -EAGAIN;
goto unlock_folio;
}
anon_vma_lock_write(src_anon_vma);
dst_ptl = pmd_lockptr(dst_mm, dst_pmd);
double_pt_lock(src_ptl, dst_ptl);
if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
!pmd_same(*dst_pmd, dst_pmdval) ||
folio_mapcount(src_folio) != 1)) {
I think this is also supposed to be PageAnonExclusive()?
Yes. Will fix.
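A sketch of what the re-check under both page-table locks could look like with the exclusivity test in place of the mapcount test (not the final patch):

    double_pt_lock(src_ptl, dst_ptl);
    if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
                 !pmd_same(*dst_pmd, dst_pmdval) ||
                 !PageAnonExclusive(&src_folio->page))) {
            double_pt_unlock(src_ptl, dst_ptl);
            err = -EAGAIN;
            goto put_anon_vma;
    }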
double_pt_unlock(src_ptl, dst_ptl);
err = -EAGAIN;
goto put_anon_vma;
}
BUG_ON(!folio_test_head(src_folio));
BUG_ON(!folio_test_anon(src_folio));
dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
WRITE_ONCE(src_folio->mapping, (struct address_space *) dst_anon_vma);
WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
_dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
_dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd);
src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd);
if (dst_pgtable) {
pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable);
pte_free(src_mm, src_pgtable);
dst_pgtable = NULL;
mm_inc_nr_ptes(dst_mm);
mm_dec_nr_ptes(src_mm);
add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR);
} else {
pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable);
}
double_pt_unlock(src_ptl, dst_ptl);
+put_anon_vma:
anon_vma_unlock_write(src_anon_vma);
put_anon_vma(src_anon_vma);
+unlock_folio:
/* unblock rmap walks */
folio_unlock(src_folio);
mmu_notifier_invalidate_range_end(&range);
if (dst_pgtable)
pte_free(dst_mm, dst_pgtable);
+put_folio:
folio_put(src_folio);
return err;
+} +#endif /* CONFIG_USERFAULTFD */
[...]
+static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr, unsigned long src_addr,
pte_t *dst_pte, pte_t *src_pte,
pte_t orig_dst_pte, pte_t orig_src_pte,
spinlock_t *dst_ptl, spinlock_t *src_ptl,
struct folio *src_folio)
+{
struct anon_vma *dst_anon_vma;
double_pt_lock(dst_ptl, src_ptl);
if (!pte_same(*src_pte, orig_src_pte) ||
!pte_same(*dst_pte, orig_dst_pte) ||
folio_test_large(src_folio) ||
folio_estimated_sharers(src_folio) != 1) {
double_pt_unlock(dst_ptl, src_ptl);
return -EAGAIN;
}
BUG_ON(!folio_test_anon(src_folio));
dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
WRITE_ONCE(src_folio->mapping,
(struct address_space *) dst_anon_vma);
WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
dst_addr));
orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
dst_vma);
I think there's still a theoretical issue here that you could fix by checking for the AnonExclusive flag, similar to the huge page case.
Consider the following scenario:
1. process P1 does a write fault in a private anonymous VMA, creating and mapping a new anonymous page A1
2. process P1 forks and creates two children P2 and P3. Afterwards, A1 is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
3. process P1 removes its mapping of A1, dropping its mapcount to 2.
4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
5. process P2 removes its mapping of A1, dropping its mapcount to 1.
If at this point P3 does a write fault on its mapping of A1, it will still trigger copy-on-write thanks to the AnonExclusive mechanism; and this is necessary to avoid P3 mapping A1 as writable and writing data into it that will become visible to P2, if P2 and P3 are in different security contexts.
But if P3 instead moves its mapping of A1 to another address with remap_anon_pte() which only does a page mapcount check, the maybe_mkwrite() will directly make the mapping writable, circumventing the AnonExclusive mechanism.
I see. Thanks for the detailed explanation! I will add PageAnonExclusive() check in this path to prevent this scenario.
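A sketch of the corresponding check in remap_anon_pte(), with the sharers estimate replaced by the exclusivity test (again, not the final patch):

    double_pt_lock(dst_ptl, src_ptl);
    if (!pte_same(*src_pte, orig_src_pte) ||
        !pte_same(*dst_pte, orig_dst_pte) ||
        folio_test_large(src_folio) ||
        !PageAnonExclusive(&src_folio->page)) {
            double_pt_unlock(dst_ptl, src_ptl);
            return -EAGAIN;
    }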
set_pte_at(dst_mm, dst_addr, dst_pte, orig_dst_pte);
if (dst_mm != src_mm) {
inc_mm_counter(dst_mm, MM_ANONPAGES);
dec_mm_counter(src_mm, MM_ANONPAGES);
}
double_pt_unlock(dst_ptl, src_ptl);
return 0;
+}
+static int remap_swap_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
unsigned long dst_addr, unsigned long src_addr,
pte_t *dst_pte, pte_t *src_pte,
pte_t orig_dst_pte, pte_t orig_src_pte,
spinlock_t *dst_ptl, spinlock_t *src_ptl)
+{
if (!pte_swp_exclusive(orig_src_pte))
return -EBUSY;
double_pt_lock(dst_ptl, src_ptl);
if (!pte_same(*src_pte, orig_src_pte) ||
!pte_same(*dst_pte, orig_dst_pte)) {
double_pt_unlock(dst_ptl, src_ptl);
return -EAGAIN;
}
orig_src_pte = ptep_get_and_clear(src_mm, src_addr, src_pte);
set_pte_at(dst_mm, dst_addr, dst_pte, orig_src_pte);
if (dst_mm != src_mm) {
inc_mm_counter(dst_mm, MM_ANONPAGES);
dec_mm_counter(src_mm, MM_ANONPAGES);
I think this is the wrong counter. Looking at zap_pte_range(), in the "Genuine swap entry" case, we modify the MM_SWAPENTS counter, not MM_ANONPAGES.
Oops, my bad. Will fix.
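I.e. the counter update in remap_swap_pte() would become something like:

    if (dst_mm != src_mm) {
            inc_mm_counter(dst_mm, MM_SWAPENTS);
            dec_mm_counter(src_mm, MM_SWAPENTS);
    }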
}
double_pt_unlock(dst_ptl, src_ptl);
return 0;
+}
+/*
- The mmap_lock for reading is held by the caller. Just move the page
- from src_pmd to dst_pmd if possible, and return true if succeeded
- in moving the page.
- */
+static int remap_pages_pte(struct mm_struct *dst_mm,
struct mm_struct *src_mm,
pmd_t *dst_pmd,
pmd_t *src_pmd,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr,
unsigned long src_addr,
__u64 mode)
+{
swp_entry_t entry;
pte_t orig_src_pte, orig_dst_pte;
spinlock_t *src_ptl, *dst_ptl;
pte_t *src_pte = NULL;
pte_t *dst_pte = NULL;
struct folio *src_folio = NULL;
struct anon_vma *src_anon_vma = NULL;
struct mmu_notifier_range range;
int err = 0;
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
src_addr, src_addr + PAGE_SIZE);
mmu_notifier_invalidate_range_start(&range);
+retry:
This retry looks a bit dodgy. On this retry label, we restart almost the entire operation, including re-reading the source PTE; the only variables that carry state forward from the previous retry loop iteration are src_folio and src_anon_vma.
dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
/* If a huge pmd materialized from under us, fail */
if (unlikely(!dst_pte)) {
err = -EFAULT;
goto out;
}
[...]
spin_lock(dst_ptl);
orig_dst_pte = *dst_pte;
spin_unlock(dst_ptl);
if (!pte_none(orig_dst_pte)) {
err = -EEXIST;
goto out;
}
spin_lock(src_ptl);
orig_src_pte = *src_pte;
Here we read an entirely new orig_src_pte value. Something like a concurrent MADV_DONTNEED+pagefault could have made the PTE point to a different page between loop iterations.
spin_unlock(src_ptl);
I think you have to insert something like the following here:
if (src_folio && (orig_src_pte != previous_src_pte)) {
        err = -EAGAIN;
        goto out;
}
previous_src_pte = orig_src_pte;
Yes, this definitely needs to be rechecked. Will fix.
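One detail for the actual fix: pte_t is a struct on many architectures, so the comparison would go through pte_same(). A sketch of how the carried state could be declared and checked in remap_pages_pte():

    pte_t previous_src_pte;         /* carried across the retry label */
    ...
    /* after re-reading orig_src_pte under src_ptl on each pass: */
    if (src_folio && !pte_same(orig_src_pte, previous_src_pte)) {
            err = -EAGAIN;
            goto out;
    }
    previous_src_pte = orig_src_pte;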
Otherwise:
if (pte_none(orig_src_pte)) {
if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
err = -ENOENT;
else /* nothing to do to remap a hole */
err = 0;
goto out;
}
if (pte_present(orig_src_pte)) {
/*
* Pin and lock both source folio and anon_vma. Since we are in
* RCU read section, we can't block, so on contention have to
* unmap the ptes, obtain the lock and retry.
*/
if (!src_folio) {
If we already found a src_folio in the previous iteration (but the trylock failed), we keep using the same src_folio without rechecking if the current PTE still points to the same folio.
struct folio *folio;
/*
* Pin the page while holding the lock to be sure the
* page isn't freed under us
*/
spin_lock(src_ptl);
if (!pte_same(orig_src_pte, *src_pte)) {
spin_unlock(src_ptl);
err = -EAGAIN;
goto out;
}
folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
if (!folio || !folio_test_anon(folio) ||
folio_test_large(folio) ||
folio_estimated_sharers(folio) != 1) {
spin_unlock(src_ptl);
err = -EBUSY;
goto out;
}
folio_get(folio);
src_folio = folio;
spin_unlock(src_ptl);
/* block all concurrent rmap walks */
if (!folio_trylock(src_folio)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
/* now we can block and wait */
folio_lock(src_folio);
goto retry;
}
}
if (!src_anon_vma) {
(And here, if we already saw a src_anon_vma but the trylock failed, we'll keep using that src_anon_vma.)
Ack. The check for previous_src_pte should handle that as well.
/*
* folio_referenced walks the anon_vma chain
* without the folio lock. Serialize against it with
* the anon_vma lock, the folio lock is not enough.
*/
src_anon_vma = folio_get_anon_vma(src_folio);
if (!src_anon_vma) {
/* page was unmapped from under us */
err = -EAGAIN;
goto out;
}
if (!anon_vma_trylock_write(src_anon_vma)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
/* now we can block and wait */
anon_vma_lock_write(src_anon_vma);
goto retry;
}
}
So at this point we have:
- the current src_pte
- some referenced+locked src_folio that used to be mapped exclusively at src_addr
- (the anon_vma associated with the src_folio)
err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
dst_addr, src_addr, dst_pte, src_pte,
orig_dst_pte, orig_src_pte,
dst_ptl, src_ptl, src_folio);
And then this will, without touching folio mapcounts/refcounts, delete the current PTE at src_addr, and create a PTE at dst_addr pointing to the old src_folio, leading to incorrect refcounts/mapcounts?
I assume this still points to the missing previous_src_pte check discussed in the previous comments. Is that correct or is there yet another issue?
} else {
[...]
}
+out:
if (src_anon_vma) {
anon_vma_unlock_write(src_anon_vma);
put_anon_vma(src_anon_vma);
}
if (src_folio) {
folio_unlock(src_folio);
folio_put(src_folio);
}
if (dst_pte)
pte_unmap(dst_pte);
if (src_pte)
pte_unmap(src_pte);
mmu_notifier_invalidate_range_end(&range);
return err;
+}
[...]
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
unsigned long dst_start, unsigned long src_start,
unsigned long len, __u64 mode)
+{
struct vm_area_struct *src_vma, *dst_vma;
unsigned long src_addr, dst_addr;
pmd_t *src_pmd, *dst_pmd;
long err = -EINVAL;
ssize_t moved = 0;
/*
* Sanitize the command parameters:
*/
BUG_ON(src_start & ~PAGE_MASK);
BUG_ON(dst_start & ~PAGE_MASK);
BUG_ON(len & ~PAGE_MASK);
/* Does the address range wrap, or is the span zero-sized? */
BUG_ON(src_start + len <= src_start);
BUG_ON(dst_start + len <= dst_start);
/*
* Because these are read semaphores there's no risk of lock
* inversion.
*/
mmap_read_lock(dst_mm);
if (dst_mm != src_mm)
mmap_read_lock(src_mm);
/*
* Make sure the vma is not shared, that the src and dst remap
* ranges are both valid and fully within a single existing
* vma.
*/
src_vma = find_vma(src_mm, src_start);
if (!src_vma || (src_vma->vm_flags & VM_SHARED))
goto out;
if (src_start < src_vma->vm_start ||
src_start + len > src_vma->vm_end)
goto out;
dst_vma = find_vma(dst_mm, dst_start);
if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
goto out;
if (dst_start < dst_vma->vm_start ||
dst_start + len > dst_vma->vm_end)
goto out;
err = validate_remap_areas(src_vma, dst_vma);
if (err)
goto out;
for (src_addr = src_start, dst_addr = dst_start;
src_addr < src_start + len;) {
spinlock_t *ptl;
pmd_t dst_pmdval;
unsigned long step_size;
BUG_ON(dst_addr >= dst_start + len);
/*
* Below works because anonymous area would not have a
* transparent huge PUD. If file-backed support is added,
* that case would need to be handled here.
*/
src_pmd = mm_find_pmd(src_mm, src_addr);
if (unlikely(!src_pmd)) {
if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
err = -ENOENT;
break;
}
src_pmd = mm_alloc_pmd(src_mm, src_addr);
if (unlikely(!src_pmd)) {
err = -ENOMEM;
break;
}
}
dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
if (unlikely(!dst_pmd)) {
err = -ENOMEM;
break;
}
dst_pmdval = pmdp_get_lockless(dst_pmd);
/*
* If the dst_pmd is mapped as THP, don't override it and just
* be strict. If dst_pmd changes into THP after this check, the
* remap_pages_huge_pmd() will detect the change and retry
* while remap_pages_pte() will detect the change and fail.
*/
if (unlikely(pmd_trans_huge(dst_pmdval))) {
err = -EEXIST;
break;
}
ptl = pmd_trans_huge_lock(src_pmd, src_vma);
if (ptl && !pmd_trans_huge(*src_pmd)) {
spin_unlock(ptl);
ptl = NULL;
}
This still looks wrong - we do still have to split_huge_pmd() somewhere so that remap_pages_pte() works.
Hmm, I guess this extra check is not even needed...
if (ptl) {
/*
* Check if we can move the pmd without
* splitting it. First check the address
* alignment to be the same in src/dst. These
* checks don't actually need the PT lock but
* it's good to do it here to optimize this
* block away at build time if
* CONFIG_TRANSPARENT_HUGEPAGE is not set.
*/
if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) {
spin_unlock(ptl);
split_huge_pmd(src_vma, src_pmd, src_addr);
continue;
}
err = remap_pages_huge_pmd(dst_mm, src_mm,
dst_pmd, src_pmd,
dst_pmdval,
dst_vma, src_vma,
dst_addr, src_addr);
step_size = HPAGE_PMD_SIZE;
} else {
if (pmd_none(*src_pmd)) {
if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
err = -ENOENT;
break;
}
if (unlikely(__pte_alloc(src_mm, src_pmd))) {
err = -ENOMEM;
break;
}
}
if (unlikely(pte_alloc(dst_mm, dst_pmd))) {
err = -ENOMEM;
break;
}
err = remap_pages_pte(dst_mm, src_mm,
dst_pmd, src_pmd,
dst_vma, src_vma,
dst_addr, src_addr,
mode);
step_size = PAGE_SIZE;
}
cond_resched();
if (!err) {
dst_addr += step_size;
src_addr += step_size;
moved += step_size;
}
if ((!err || err == -EAGAIN) &&
fatal_signal_pending(current))
err = -EINTR;
if (err && err != -EAGAIN)
break;
}
+out:
mmap_read_unlock(dst_mm);
if (dst_mm != src_mm)
mmap_read_unlock(src_mm);
BUG_ON(moved < 0);
BUG_ON(err > 0);
BUG_ON(!moved && !err);
return moved ? moved : err;
+}
On Wed, Sep 27, 2023 at 8:08 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Sep 27, 2023 at 5:47 AM Jann Horn jannh@google.com wrote:
On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan surenb@google.com wrote:
From: Andrea Arcangeli aarcange@redhat.com
This implements the uABI of UFFDIO_REMAP.
Notably one mode bitflag is also forwarded (and in turn known) by the lowlevel remap_pages method.
Signed-off-by: Andrea Arcangeli aarcange@redhat.com Signed-off-by: Suren Baghdasaryan surenb@google.com
[...]
/*
* folio_referenced walks the anon_vma chain
* without the folio lock. Serialize against it with
* the anon_vma lock, the folio lock is not enough.
*/
src_anon_vma = folio_get_anon_vma(src_folio);
if (!src_anon_vma) {
/* page was unmapped from under us */
err = -EAGAIN;
goto out;
}
if (!anon_vma_trylock_write(src_anon_vma)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
/* now we can block and wait */
anon_vma_lock_write(src_anon_vma);
goto retry;
}
}
So at this point we have:
- the current src_pte
- some referenced+locked src_folio that used to be mapped exclusively at src_addr
- (the anon_vma associated with the src_folio)
err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
dst_addr, src_addr, dst_pte, src_pte,
orig_dst_pte, orig_src_pte,
dst_ptl, src_ptl, src_folio);
And then this will, without touching folio mapcounts/refcounts, delete the current PTE at src_addr, and create a PTE at dst_addr pointing to the old src_folio, leading to incorrect refcounts/mapcounts?
I assume this still points to the missing previous_src_pte check discussed in the previous comments. Is that correct or is there yet another issue?
This is still referring to the missing previous_src_pte check.
} else {
[...]
}
+out:
if (src_anon_vma) {
anon_vma_unlock_write(src_anon_vma);
put_anon_vma(src_anon_vma);
}
if (src_folio) {
folio_unlock(src_folio);
folio_put(src_folio);
}
if (dst_pte)
pte_unmap(dst_pte);
if (src_pte)
pte_unmap(src_pte);
mmu_notifier_invalidate_range_end(&range);
return err;
+}
[...]
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
unsigned long dst_start, unsigned long src_start,
unsigned long len, __u64 mode)
+{
struct vm_area_struct *src_vma, *dst_vma;
unsigned long src_addr, dst_addr;
pmd_t *src_pmd, *dst_pmd;
long err = -EINVAL;
ssize_t moved = 0;
/*
* Sanitize the command parameters:
*/
BUG_ON(src_start & ~PAGE_MASK);
BUG_ON(dst_start & ~PAGE_MASK);
BUG_ON(len & ~PAGE_MASK);
/* Does the address range wrap, or is the span zero-sized? */
BUG_ON(src_start + len <= src_start);
BUG_ON(dst_start + len <= dst_start);
/*
* Because these are read semaphores there's no risk of lock
* inversion.
*/
mmap_read_lock(dst_mm);
if (dst_mm != src_mm)
mmap_read_lock(src_mm);
/*
* Make sure the vma is not shared, that the src and dst remap
* ranges are both valid and fully within a single existing
* vma.
*/
src_vma = find_vma(src_mm, src_start);
if (!src_vma || (src_vma->vm_flags & VM_SHARED))
goto out;
if (src_start < src_vma->vm_start ||
src_start + len > src_vma->vm_end)
goto out;
dst_vma = find_vma(dst_mm, dst_start);
if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
goto out;
if (dst_start < dst_vma->vm_start ||
dst_start + len > dst_vma->vm_end)
goto out;
err = validate_remap_areas(src_vma, dst_vma);
if (err)
goto out;
for (src_addr = src_start, dst_addr = dst_start;
src_addr < src_start + len;) {
spinlock_t *ptl;
pmd_t dst_pmdval;
unsigned long step_size;
BUG_ON(dst_addr >= dst_start + len);
/*
* Below works because anonymous area would not have a
* transparent huge PUD. If file-backed support is added,
* that case would need to be handled here.
*/
src_pmd = mm_find_pmd(src_mm, src_addr);
if (unlikely(!src_pmd)) {
if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
err = -ENOENT;
break;
}
src_pmd = mm_alloc_pmd(src_mm, src_addr);
if (unlikely(!src_pmd)) {
err = -ENOMEM;
break;
}
}
dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
if (unlikely(!dst_pmd)) {
err = -ENOMEM;
break;
}
dst_pmdval = pmdp_get_lockless(dst_pmd);
/*
* If the dst_pmd is mapped as THP, don't override it and just
* be strict. If dst_pmd changes into THP after this check, the
* remap_pages_huge_pmd() will detect the change and retry
* while remap_pages_pte() will detect the change and fail.
*/
if (unlikely(pmd_trans_huge(dst_pmdval))) {
err = -EEXIST;
break;
}
ptl = pmd_trans_huge_lock(src_pmd, src_vma);
if (ptl && !pmd_trans_huge(*src_pmd)) {
spin_unlock(ptl);
ptl = NULL;
}
This still looks wrong - we do still have to split_huge_pmd() somewhere so that remap_pages_pte() works.
Hmm, I guess this extra check is not even needed...
Hm, and instead we'd bail at the pte_offset_map_nolock() in remap_pages_pte()? I guess that's unusual but works...
(It would be a thing to look out for if anyone tried to backport this, since the checks in pte_offset_map_nolock() were only introduced in 6.5, but idk if anyone's doing that)
On Wed, Sep 27, 2023 at 1:04 PM Jann Horn jannh@google.com wrote:
On Wed, Sep 27, 2023 at 8:08 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Sep 27, 2023 at 5:47 AM Jann Horn jannh@google.com wrote:
On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan surenb@google.com wrote:
From: Andrea Arcangeli aarcange@redhat.com
This implements the uABI of UFFDIO_REMAP.
Notably one mode bitflag is also forwarded (and in turn known) by the lowlevel remap_pages method.
Signed-off-by: Andrea Arcangeli aarcange@redhat.com Signed-off-by: Suren Baghdasaryan surenb@google.com
[...]
/*
* folio_referenced walks the anon_vma chain
* without the folio lock. Serialize against it with
* the anon_vma lock, the folio lock is not enough.
*/
src_anon_vma = folio_get_anon_vma(src_folio);
if (!src_anon_vma) {
/* page was unmapped from under us */
err = -EAGAIN;
goto out;
}
if (!anon_vma_trylock_write(src_anon_vma)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
/* now we can block and wait */
anon_vma_lock_write(src_anon_vma);
goto retry;
}
}
So at this point we have:
- the current src_pte
- some referenced+locked src_folio that used to be mapped exclusively at src_addr
- (the anon_vma associated with the src_folio)
err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
dst_addr, src_addr, dst_pte, src_pte,
orig_dst_pte, orig_src_pte,
dst_ptl, src_ptl, src_folio);
And then this will, without touching folio mapcounts/refcounts, delete the current PTE at src_addr, and create a PTE at dst_addr pointing to the old src_folio, leading to incorrect refcounts/mapcounts?
I assume this still points to the missing previous_src_pte check discussed in the previous comments. Is that correct or is there yet another issue?
This is still referring to the missing previous_src_pte check.
} else {
[...]
}
+out:
if (src_anon_vma) {
anon_vma_unlock_write(src_anon_vma);
put_anon_vma(src_anon_vma);
}
if (src_folio) {
folio_unlock(src_folio);
folio_put(src_folio);
}
if (dst_pte)
pte_unmap(dst_pte);
if (src_pte)
pte_unmap(src_pte);
mmu_notifier_invalidate_range_end(&range);
return err;
+}
[...]
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
unsigned long dst_start, unsigned long src_start,
unsigned long len, __u64 mode)
+{
struct vm_area_struct *src_vma, *dst_vma;
unsigned long src_addr, dst_addr;
pmd_t *src_pmd, *dst_pmd;
long err = -EINVAL;
ssize_t moved = 0;
/*
* Sanitize the command parameters:
*/
BUG_ON(src_start & ~PAGE_MASK);
BUG_ON(dst_start & ~PAGE_MASK);
BUG_ON(len & ~PAGE_MASK);
/* Does the address range wrap, or is the span zero-sized? */
BUG_ON(src_start + len <= src_start);
BUG_ON(dst_start + len <= dst_start);
/*
* Because these are read semaphores there's no risk of lock
* inversion.
*/
mmap_read_lock(dst_mm);
if (dst_mm != src_mm)
mmap_read_lock(src_mm);
/*
* Make sure the vma is not shared, that the src and dst remap
* ranges are both valid and fully within a single existing
* vma.
*/
src_vma = find_vma(src_mm, src_start);
if (!src_vma || (src_vma->vm_flags & VM_SHARED))
goto out;
if (src_start < src_vma->vm_start ||
src_start + len > src_vma->vm_end)
goto out;
dst_vma = find_vma(dst_mm, dst_start);
if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
goto out;
if (dst_start < dst_vma->vm_start ||
dst_start + len > dst_vma->vm_end)
goto out;
err = validate_remap_areas(src_vma, dst_vma);
if (err)
goto out;
for (src_addr = src_start, dst_addr = dst_start;
src_addr < src_start + len;) {
spinlock_t *ptl;
pmd_t dst_pmdval;
unsigned long step_size;
BUG_ON(dst_addr >= dst_start + len);
/*
* Below works because anonymous area would not have a
* transparent huge PUD. If file-backed support is added,
* that case would need to be handled here.
*/
src_pmd = mm_find_pmd(src_mm, src_addr);
if (unlikely(!src_pmd)) {
if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
err = -ENOENT;
break;
}
src_pmd = mm_alloc_pmd(src_mm, src_addr);
if (unlikely(!src_pmd)) {
err = -ENOMEM;
break;
}
}
dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
if (unlikely(!dst_pmd)) {
err = -ENOMEM;
break;
}
dst_pmdval = pmdp_get_lockless(dst_pmd);
/*
* If the dst_pmd is mapped as THP, don't override it and just
* be strict. If dst_pmd changes into THP after this check, the
* remap_pages_huge_pmd() will detect the change and retry
* while remap_pages_pte() will detect the change and fail.
*/
if (unlikely(pmd_trans_huge(dst_pmdval))) {
err = -EEXIST;
break;
}
ptl = pmd_trans_huge_lock(src_pmd, src_vma);
if (ptl && !pmd_trans_huge(*src_pmd)) {
spin_unlock(ptl);
ptl = NULL;
}
This still looks wrong - we do still have to split_huge_pmd() somewhere so that remap_pages_pte() works.
Hmm, I guess this extra check is not even needed...
Hm, and instead we'd bail at the pte_offset_map_nolock() in remap_pages_pte()? I guess that's unusual but works...
Yes, that's what I was thinking but I agree, that seems fragile. Maybe just bail out early if (ptl && !pmd_trans_huge())?
(It would be a thing to look out for if anyone tried to backport this, since the checks in pte_offset_map_nolock() were only introduced in 6.5, but idk if anyone's doing that)
On Wed, Sep 27, 2023 at 1:42 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Sep 27, 2023 at 1:04 PM Jann Horn jannh@google.com wrote:
On Wed, Sep 27, 2023 at 8:08 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Sep 27, 2023 at 5:47 AM Jann Horn jannh@google.com wrote:
On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan surenb@google.com wrote:
From: Andrea Arcangeli aarcange@redhat.com
This implements the uABI of UFFDIO_REMAP.
Notably one mode bitflag is also forwarded (and in turn known) by the lowlevel remap_pages method.
Signed-off-by: Andrea Arcangeli aarcange@redhat.com Signed-off-by: Suren Baghdasaryan surenb@google.com
[...]
/*
* folio_referenced walks the anon_vma chain
* without the folio lock. Serialize against it with
* the anon_vma lock, the folio lock is not enough.
*/
src_anon_vma = folio_get_anon_vma(src_folio);
if (!src_anon_vma) {
/* page was unmapped from under us */
err = -EAGAIN;
goto out;
}
if (!anon_vma_trylock_write(src_anon_vma)) {
pte_unmap(&orig_src_pte);
pte_unmap(&orig_dst_pte);
src_pte = dst_pte = NULL;
/* now we can block and wait */
anon_vma_lock_write(src_anon_vma);
goto retry;
}
}
So at this point we have:
- the current src_pte
- some referenced+locked src_folio that used to be mapped exclusively at src_addr
- (the anon_vma associated with the src_folio)
err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
dst_addr, src_addr, dst_pte, src_pte,
orig_dst_pte, orig_src_pte,
dst_ptl, src_ptl, src_folio);
And then this will, without touching folio mapcounts/refcounts, delete the current PTE at src_addr, and create a PTE at dst_addr pointing to the old src_folio, leading to incorrect refcounts/mapcounts?
I assume this still points to the missing previous_src_pte check discussed in the previous comments. Is that correct or is there yet another issue?
This is still referring to the missing previous_src_pte check.
} else {
[...]
}
+out:
if (src_anon_vma) {
anon_vma_unlock_write(src_anon_vma);
put_anon_vma(src_anon_vma);
}
if (src_folio) {
folio_unlock(src_folio);
folio_put(src_folio);
}
if (dst_pte)
pte_unmap(dst_pte);
if (src_pte)
pte_unmap(src_pte);
mmu_notifier_invalidate_range_end(&range);
return err;
+}
[...]
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
unsigned long dst_start, unsigned long src_start,
unsigned long len, __u64 mode)
+{
struct vm_area_struct *src_vma, *dst_vma;
unsigned long src_addr, dst_addr;
pmd_t *src_pmd, *dst_pmd;
long err = -EINVAL;
ssize_t moved = 0;
/*
* Sanitize the command parameters:
*/
BUG_ON(src_start & ~PAGE_MASK);
BUG_ON(dst_start & ~PAGE_MASK);
BUG_ON(len & ~PAGE_MASK);
/* Does the address range wrap, or is the span zero-sized? */
BUG_ON(src_start + len <= src_start);
BUG_ON(dst_start + len <= dst_start);
/*
* Because these are read semaphores there's no risk of lock
* inversion.
*/
mmap_read_lock(dst_mm);
if (dst_mm != src_mm)
mmap_read_lock(src_mm);
/*
* Make sure the vma is not shared, that the src and dst remap
* ranges are both valid and fully within a single existing
* vma.
*/
src_vma = find_vma(src_mm, src_start);
if (!src_vma || (src_vma->vm_flags & VM_SHARED))
goto out;
if (src_start < src_vma->vm_start ||
src_start + len > src_vma->vm_end)
goto out;
dst_vma = find_vma(dst_mm, dst_start);
if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
goto out;
if (dst_start < dst_vma->vm_start ||
dst_start + len > dst_vma->vm_end)
goto out;
err = validate_remap_areas(src_vma, dst_vma);
if (err)
goto out;
for (src_addr = src_start, dst_addr = dst_start;
src_addr < src_start + len;) {
spinlock_t *ptl;
pmd_t dst_pmdval;
unsigned long step_size;
BUG_ON(dst_addr >= dst_start + len);
/*
* Below works because anonymous area would not have a
* transparent huge PUD. If file-backed support is added,
* that case would need to be handled here.
*/
src_pmd = mm_find_pmd(src_mm, src_addr);
if (unlikely(!src_pmd)) {
if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
err = -ENOENT;
break;
}
src_pmd = mm_alloc_pmd(src_mm, src_addr);
if (unlikely(!src_pmd)) {
err = -ENOMEM;
break;
}
}
dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
if (unlikely(!dst_pmd)) {
err = -ENOMEM;
break;
}
dst_pmdval = pmdp_get_lockless(dst_pmd);
/*
* If the dst_pmd is mapped as THP, don't override it and just
* be strict. If dst_pmd changes into THP after this check, the
* remap_pages_huge_pmd() will detect the change and retry
* while remap_pages_pte() will detect the change and fail.
*/
if (unlikely(pmd_trans_huge(dst_pmdval))) {
err = -EEXIST;
break;
}
ptl = pmd_trans_huge_lock(src_pmd, src_vma);
if (ptl && !pmd_trans_huge(*src_pmd)) {
spin_unlock(ptl);
ptl = NULL;
}
This still looks wrong - we do still have to split_huge_pmd() somewhere so that remap_pages_pte() works.
Hmm, I guess this extra check is not even needed...
Hm, and instead we'd bail at the pte_offset_map_nolock() in remap_pages_pte()? I guess that's unusual but works...
Yes, that's what I was thinking but I agree, that seems fragile. Maybe just bail out early if (ptl && !pmd_trans_huge())?
No, actually we can still handle is_swap_pmd() case by splitting it and remapping the individual ptes. So, I can bail out only in case of pmd_devmap().
(It would be a thing to look out for if anyone tried to backport this, since the checks in pte_offset_map_nolock() were only introduced in 6.5, but idk if anyone's doing that)
On Wed, Sep 27, 2023 at 11:08 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Sep 27, 2023 at 1:42 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Sep 27, 2023 at 1:04 PM Jann Horn jannh@google.com wrote:
On Wed, Sep 27, 2023 at 8:08 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Sep 27, 2023 at 5:47 AM Jann Horn jannh@google.com wrote:
On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan surenb@google.com wrote:
dst_pmdval = pmdp_get_lockless(dst_pmd);
/*
* If the dst_pmd is mapped as THP, don't override it and just
* be strict. If dst_pmd changes into THP after this check, the
* remap_pages_huge_pmd() will detect the change and retry
* while remap_pages_pte() will detect the change and fail.
*/
if (unlikely(pmd_trans_huge(dst_pmdval))) {
err = -EEXIST;
break;
}
ptl = pmd_trans_huge_lock(src_pmd, src_vma);
if (ptl && !pmd_trans_huge(*src_pmd)) {
spin_unlock(ptl);
ptl = NULL;
}
This still looks wrong - we do still have to split_huge_pmd() somewhere so that remap_pages_pte() works.
Hmm, I guess this extra check is not even needed...
Hm, and instead we'd bail at the pte_offset_map_nolock() in remap_pages_pte()? I guess that's unusual but works...
Yes, that's what I was thinking but I agree, that seems fragile. Maybe just bail out early if (ptl && !pmd_trans_huge())?
No, actually we can still handle is_swap_pmd() case by splitting it and remapping the individual ptes. So, I can bail out only in case of pmd_devmap().
FWIW I only learned today that "real" swap PMDs don't actually exist - only migration entries, which are encoded as swap PMDs, exist. You can see that when you look through the cases that something like __split_huge_pmd() or zap_pmd_range() actually handles.
So I think if you wanted to handle all the PMD types properly here without splitting, you could do that without _too_ much extra code. But idk if it's worth it.
On Wed, Sep 27, 2023 at 3:49 PM Jann Horn jannh@google.com wrote:
On Wed, Sep 27, 2023 at 11:08 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Sep 27, 2023 at 1:42 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Sep 27, 2023 at 1:04 PM Jann Horn jannh@google.com wrote:
On Wed, Sep 27, 2023 at 8:08 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Sep 27, 2023 at 5:47 AM Jann Horn jannh@google.com wrote:
On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan surenb@google.com wrote:
dst_pmdval = pmdp_get_lockless(dst_pmd);
/*
* If the dst_pmd is mapped as THP don't override it and just
* be strict. If dst_pmd changes into THP after this check, the
* remap_pages_huge_pmd() will detect the change and retry
* while remap_pages_pte() will detect the change and fail.
*/
if (unlikely(pmd_trans_huge(dst_pmdval))) {
err = -EEXIST;
break;
}
ptl = pmd_trans_huge_lock(src_pmd, src_vma);
if (ptl && !pmd_trans_huge(*src_pmd)) {
spin_unlock(ptl);
ptl = NULL;
}
This still looks wrong - we do still have to split_huge_pmd() somewhere so that remap_pages_pte() works.
Hmm, I guess this extra check is not even needed...
Hm, and instead we'd bail at the pte_offset_map_nolock() in remap_pages_pte()? I guess that's unusual but works...
Yes, that's what I was thinking but I agree, that seems fragile. Maybe just bail out early if (ptl && !pmd_trans_huge())?
No, actually we can still handle is_swap_pmd() case by splitting it and remapping the individual ptes. So, I can bail out only in case of pmd_devmap().
FWIW I only learned today that "real" swap PMDs don't actually exist - only migration entries, which are encoded as swap PMDs, exist. You can see that when you look through the cases that something like __split_huge_pmd() or zap_pmd_range() actually handles.
Ah, good point.
So I think if you wanted to handle all the PMD types properly here without splitting, you could do that without _too_ much extra code. But idk if it's worth it.
Yeah, I guess I can call pmd_migration_entry_wait() and retry by returning EAGAIN, similar to how remap_pages_pte() handles PTE migration. Looks simple enough.
Thanks for all the pointers! I'll start cooking the next version.
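In other words, roughly the following on the source pmd (a sketch of the plan only, not the final patch; it assumes *src_pmd is re-read under the lock returned by pmd_trans_huge_lock(), and the errno/retry plumbing is illustrative):

	ptl = pmd_trans_huge_lock(src_pmd, src_vma);
	if (ptl) {
		if (pmd_devmap(*src_pmd)) {
			/* no support for moving device huge pages */
			spin_unlock(ptl);
			err = -EINVAL;
			break;
		}
		if (unlikely(is_pmd_migration_entry(*src_pmd))) {
			/* THP migration in progress: wait for it, then retry */
			spin_unlock(ptl);
			pmd_migration_entry_wait(src_mm, src_pmd);
			err = -EAGAIN;
			continue;
		}
		/* present THP: handled by remap_pages_huge_pmd() under ptl */
	} else {
		/* none or pte-mapped: fall through to remap_pages_pte() */
	}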
On Fri, Sep 22, 2023 at 06:31:45PM -0700, Suren Baghdasaryan wrote:
@@ -72,6 +73,7 @@
 #define _UFFDIO_WAKE			(0x02)
 #define _UFFDIO_COPY			(0x03)
 #define _UFFDIO_ZEROPAGE		(0x04)
+#define _UFFDIO_REMAP			(0x05)
 #define _UFFDIO_WRITEPROTECT		(0x06)
 #define _UFFDIO_CONTINUE		(0x07)
 #define _UFFDIO_POISON			(0x08)
Might be good to add a feature bit (UFFD_FEATURE_REMAP) for userspace to probe?
IIUC the whole remap feature was proposed at the birth of uffd even before COPY, but now we have tons of old kernels that will not support it.
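To illustrate, a userspace probe could then look roughly like this (a sketch only: UFFD_FEATURE_REMAP is the bit being proposed here, not something existing kernels define):

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

static bool uffd_supports_remap(int uffd)
{
	struct uffdio_api api = { .api = UFFD_API, .features = 0 };

	/*
	 * The UFFDIO_API handshake reports the supported feature bits in
	 * api.features; real code would also set the bits it intends to use.
	 */
	if (ioctl(uffd, UFFDIO_API, &api))
		return false;
	return api.features & UFFD_FEATURE_REMAP;
}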
[...]
+int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr, unsigned long src_addr)
+{
- pmd_t _dst_pmd, src_pmdval;
- struct page *src_page;
- struct folio *src_folio;
- struct anon_vma *src_anon_vma, *dst_anon_vma;
- spinlock_t *src_ptl, *dst_ptl;
- pgtable_t src_pgtable, dst_pgtable;
- struct mmu_notifier_range range;
- int err = 0;
- src_pmdval = *src_pmd;
- src_ptl = pmd_lockptr(src_mm, src_pmd);
- BUG_ON(!spin_is_locked(src_ptl));
- mmap_assert_locked(src_mm);
- mmap_assert_locked(dst_mm);
- BUG_ON(!pmd_trans_huge(src_pmdval));
- BUG_ON(!pmd_none(dst_pmdval));
- BUG_ON(src_addr & ~HPAGE_PMD_MASK);
- BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
- src_page = pmd_page(src_pmdval);
- if (unlikely(!PageAnonExclusive(src_page))) {
spin_unlock(src_ptl);
return -EBUSY;
- }
- src_folio = page_folio(src_page);
- folio_get(src_folio);
- spin_unlock(src_ptl);
- /* preallocate dst_pgtable if needed */
- if (dst_mm != src_mm) {
dst_pgtable = pte_alloc_one(dst_mm);
if (unlikely(!dst_pgtable)) {
err = -ENOMEM;
goto put_folio;
}
- } else {
dst_pgtable = NULL;
- }
- mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
src_addr + HPAGE_PMD_SIZE);
- mmu_notifier_invalidate_range_start(&range);
- /* block all concurrent rmap walks */
This is not accurate either, I think. Maybe we can do "s/all/most/", or just drop it (assuming the detailed and accurate version of the documentation lies above remap_pages() regarding REMAP locking)?
- folio_lock(src_folio);
[...]
+static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr, unsigned long src_addr,
pte_t *dst_pte, pte_t *src_pte,
pte_t orig_dst_pte, pte_t orig_src_pte,
spinlock_t *dst_ptl, spinlock_t *src_ptl,
struct folio *src_folio)
remap_present_pte?
[...]
+/**
- remap_pages - remap arbitrary anonymous pages of an existing vma
- @dst_start: start of the destination virtual memory range
- @src_start: start of the source virtual memory range
- @len: length of the virtual memory range
- remap_pages() remaps arbitrary anonymous pages atomically in zero
- copy. It only works on non shared anonymous pages because those can
- be relocated without generating non linear anon_vmas in the rmap
- code.
- It provides a zero copy mechanism to handle userspace page faults.
- The source vma pages should have mapcount == 1, which can be
- enforced by using madvise(MADV_DONTFORK) on src vma.
- The thread receiving the page during the userland page fault
- will receive the faulting page in the source vma through the network,
- storage or any other I/O device (MADV_DONTFORK in the source vma
- prevents remap_pages() from failing with -EBUSY if the process forks before
- remap_pages() is called), then it will call remap_pages() to map the
- page in the faulting address in the destination vma.
- This userfaultfd command works purely via pagetables, so it's the
- most efficient way to move physical non shared anonymous pages
- across different virtual addresses. Unlike mremap()/mmap()/munmap()
- it does not create any new vmas. The mapping in the destination
- address is atomic.
- It only works if the vma protection bits are identical from the
- source and destination vma.
- It can remap non shared anonymous pages within the same vma too.
- If the source virtual memory range has any unmapped holes, or if
- the destination virtual memory range is not a whole unmapped hole,
- remap_pages() will fail respectively with -ENOENT or -EEXIST. This
- provides a very strict behavior to avoid any chance of memory
- corruption going unnoticed if there are userland race conditions.
- Only one thread should resolve the userland page fault at any given
- time for any given faulting address. This means that if two threads
- try to both call remap_pages() on the same destination address at the
- same time, the second thread will get an explicit error from this
- command.
- The command retval will be "len" if successful. The command
- however can be interrupted by fatal signals or errors. If
- interrupted it will return the number of bytes successfully
- remapped before the interruption if any, or the negative error if
- none. It will never return zero. Either it will return an error or
- an amount of bytes successfully moved. If the retval reports a
- "short" remap, the remap_pages() command should be repeated by
- userland with src+retval, dst+retval, len-retval if it wants to know
- about the error that interrupted it.
- The UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES flag can be specified to
- prevent -ENOENT errors from materializing if there are holes in the
- source virtual range that is being remapped. The holes will be
- accounted as successfully remapped in the retval of the
- command. This is mostly useful to remap hugepage naturally aligned
- virtual regions without knowing if there are transparent hugepage
- in the regions or not, but preventing the risk of having to split
- the hugepmd during the remap.
- If there's any rmap walk that is taking the anon_vma locks without
- first obtaining the folio lock (for example split_huge_page and
- folio_referenced), they will have to verify if the folio->mapping
Hmm, this sentence no longer seems 100% accurate?
As split_huge_page() should need the folio lock and it'll serialize with REMAP with the folio lock too. It seems to me only folio_referenced() is the outlier so far, and that's covered by patch 1.
I did also check other users of folio_get_anon_vma() (similar to use case of split_huge_page()) and they're all with the folio lock held, so we should be good.
In summary, perhaps:
- Drop split_huge_page() example here?
- Should we document this peculiarity above folio_get_anon_vma(), given UFFDIO_REMAP? I'm thinking something like:
+ *
+ * NOTE: the caller should normally hold folio lock when calling this. If
+ * not, the caller needs to double check the anon_vma didn't change after
+ * taking the anon_vma lock for either read or write (UFFDIO_REMAP can
+ * modify it concurrently without folio lock protection). See
+ * folio_lock_anon_vma_read() which has already covered that, and comment
+ * above remap_pages().
  */
 struct anon_vma *folio_get_anon_vma(struct folio *folio)
 {
 	...
 }
- has changed after taking the anon_vma lock. If it changed they
- should release the lock and retry obtaining a new anon_vma, because
- it means the anon_vma was changed by remap_pages() before the lock
- could be obtained. This is the only additional complexity added to
- the rmap code to provide this anonymous page remapping functionality.
- */
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
unsigned long dst_start, unsigned long src_start,
unsigned long len, __u64 mode)
+{
[...]
if (!err) {
dst_addr += step_size;
src_addr += step_size;
moved += step_size;
}
if ((!err || err == -EAGAIN) &&
fatal_signal_pending(current))
err = -EINTR;
if (err && err != -EAGAIN)
break;
The err handling is slightly harder to read. I tried to rewrite it like this:
	switch (err) {
	case 0:
		dst_addr += step_size;
		src_addr += step_size;
		moved += step_size;
		/* fall through */
	case -EAGAIN:
		if (fatal_signal_pending(current)) {
			err = -EINTR;
			goto out;
		}
		/* Continue with the loop */
		break;
	default:
		goto out;
	}
Not super good but maybe slightly better? No strong opinions, but if you agree that looks better we can use it.
- }
+out:
- mmap_read_unlock(dst_mm);
- if (dst_mm != src_mm)
mmap_read_unlock(src_mm);
- BUG_ON(moved < 0);
- BUG_ON(err > 0);
- BUG_ON(!moved && !err);
- return moved ? moved : err;
+}
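For userland, the short-remap semantics documented above translate into a retry loop along these lines (a sketch assuming uffd, dst, src and len are already set up; struct uffdio_remap follows the layout used by this patch and the selftest further down):

	struct uffdio_remap req = {
		.dst = dst,
		.src = src,
		.len = len,
		.mode = 0,
	};

	while (req.len) {
		req.remap = 0;
		if (ioctl(uffd, UFFDIO_REMAP, &req) == 0)
			break;				/* whole range moved */
		if (req.remap < 0)
			return -req.remap;		/* nothing moved: hard error */
		/* short remap: skip what was moved and retry the rest */
		req.dst += req.remap;
		req.src += req.remap;
		req.len -= req.remap;
	}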
I think for the rest I'll read the new version (e.g. I saw discussion on proper handling of pmd swap entries, which is not yet addressed, but probably will be in the next one).
Thanks,
On Thu, Sep 28, 2023 at 10:09 AM Peter Xu peterx@redhat.com wrote:
On Fri, Sep 22, 2023 at 06:31:45PM -0700, Suren Baghdasaryan wrote:
@@ -72,6 +73,7 @@
 #define _UFFDIO_WAKE			(0x02)
 #define _UFFDIO_COPY			(0x03)
 #define _UFFDIO_ZEROPAGE		(0x04)
+#define _UFFDIO_REMAP			(0x05)
 #define _UFFDIO_WRITEPROTECT		(0x06)
 #define _UFFDIO_CONTINUE		(0x07)
 #define _UFFDIO_POISON			(0x08)
Might be good to add a feature bit (UFFD_FEATURE_REMAP) for userspace to probe?
Ack.
IIUC the whole remap feature was proposed at the birth of uffd even before COPY, but now we have tons of old kernels that will not support it.
[...]
+int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr, unsigned long src_addr)
+{
pmd_t _dst_pmd, src_pmdval;
struct page *src_page;
struct folio *src_folio;
struct anon_vma *src_anon_vma, *dst_anon_vma;
spinlock_t *src_ptl, *dst_ptl;
pgtable_t src_pgtable, dst_pgtable;
struct mmu_notifier_range range;
int err = 0;
src_pmdval = *src_pmd;
src_ptl = pmd_lockptr(src_mm, src_pmd);
BUG_ON(!spin_is_locked(src_ptl));
mmap_assert_locked(src_mm);
mmap_assert_locked(dst_mm);
BUG_ON(!pmd_trans_huge(src_pmdval));
BUG_ON(!pmd_none(dst_pmdval));
BUG_ON(src_addr & ~HPAGE_PMD_MASK);
BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
src_page = pmd_page(src_pmdval);
if (unlikely(!PageAnonExclusive(src_page))) {
spin_unlock(src_ptl);
return -EBUSY;
}
src_folio = page_folio(src_page);
folio_get(src_folio);
spin_unlock(src_ptl);
/* preallocate dst_pgtable if needed */
if (dst_mm != src_mm) {
dst_pgtable = pte_alloc_one(dst_mm);
if (unlikely(!dst_pgtable)) {
err = -ENOMEM;
goto put_folio;
}
} else {
dst_pgtable = NULL;
}
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
src_addr + HPAGE_PMD_SIZE);
mmu_notifier_invalidate_range_start(&range);
/* block all concurrent rmap walks */
This is not accurate either, I think. Maybe we can do "s/all/most/", or just drop it (assuming the detailed and accurate version of the documentation lies above remap_pages() regarding REMAP locking)?
Yes, comments from the original patch need to be rechecked and I'm sure I missed some new rules. Thanks for pointing this one out! I'll drop it.
folio_lock(src_folio);
[...]
+static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
struct vm_area_struct *dst_vma,
struct vm_area_struct *src_vma,
unsigned long dst_addr, unsigned long src_addr,
pte_t *dst_pte, pte_t *src_pte,
pte_t orig_dst_pte, pte_t orig_src_pte,
spinlock_t *dst_ptl, spinlock_t *src_ptl,
struct folio *src_folio)
remap_present_pte?
Sounds good.
[...]
+/**
- remap_pages - remap arbitrary anonymous pages of an existing vma
- @dst_start: start of the destination virtual memory range
- @src_start: start of the source virtual memory range
- @len: length of the virtual memory range
- remap_pages() remaps arbitrary anonymous pages atomically in zero
- copy. It only works on non shared anonymous pages because those can
- be relocated without generating non linear anon_vmas in the rmap
- code.
- It provides a zero copy mechanism to handle userspace page faults.
- The source vma pages should have mapcount == 1, which can be
- enforced by using madvise(MADV_DONTFORK) on src vma.
- The thread receiving the page during the userland page fault
- will receive the faulting page in the source vma through the network,
- storage or any other I/O device (MADV_DONTFORK in the source vma
- prevents remap_pages() from failing with -EBUSY if the process forks before
- remap_pages() is called), then it will call remap_pages() to map the
- page in the faulting address in the destination vma.
- This userfaultfd command works purely via pagetables, so it's the
- most efficient way to move physical non shared anonymous pages
- across different virtual addresses. Unlike mremap()/mmap()/munmap()
- it does not create any new vmas. The mapping in the destination
- address is atomic.
- It only works if the vma protection bits are identical from the
- source and destination vma.
- It can remap non shared anonymous pages within the same vma too.
- If the source virtual memory range has any unmapped holes, or if
- the destination virtual memory range is not a whole unmapped hole,
- remap_pages() will fail respectively with -ENOENT or -EEXIST. This
- provides a very strict behavior to avoid any chance of memory
- corruption going unnoticed if there are userland race conditions.
- Only one thread should resolve the userland page fault at any given
- time for any given faulting address. This means that if two threads
- try to both call remap_pages() on the same destination address at the
- same time, the second thread will get an explicit error from this
- command.
- The command retval will be "len" if successful. The command
- however can be interrupted by fatal signals or errors. If
- interrupted it will return the number of bytes successfully
- remapped before the interruption if any, or the negative error if
- none. It will never return zero. Either it will return an error or
- an amount of bytes successfully moved. If the retval reports a
- "short" remap, the remap_pages() command should be repeated by
- userland with src+retval, dst+retval, len-retval if it wants to know
- about the error that interrupted it.
- The UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES flag can be specified to
- prevent -ENOENT errors from materializing if there are holes in the
- source virtual range that is being remapped. The holes will be
- accounted as successfully remapped in the retval of the
- command. This is mostly useful to remap hugepage naturally aligned
- virtual regions without knowing if there are transparent hugepage
- in the regions or not, but preventing the risk of having to split
- the hugepmd during the remap.
- If there's any rmap walk that is taking the anon_vma locks without
- first obtaining the folio lock (for example split_huge_page and
- folio_referenced), they will have to verify if the folio->mapping
Hmm, this sentence no longer seems 100% accurate?
As split_huge_page() should need the folio lock and it'll serialize with REMAP with the folio lock too. It seems to me only folio_referenced() is the outlier so far, and that's covered by patch 1.
I did also check other users of folio_get_anon_vma() (similar to use case of split_huge_page()) and they're all with the folio lock held, so we should be good.
In summary, perhaps:
Drop split_huge_page() example here?
Should we document this peculiarity above folio_get_anon_vma(), given UFFDIO_REMAP? I'm thinking something like:
- NOTE: the caller should normally hold folio lock when calling this. If
- not, the caller needs to double check the anon_vma didn't change after
- taking the anon_vma lock for either read or write (UFFDIO_REMAP can
- modify it concurrently without folio lock protection). See
- folio_lock_anon_vma_read() which has already covered that, and comment
- above remap_pages().
*/
struct anon_vma *folio_get_anon_vma(struct folio *folio)
{
...
}
Ack. Will fix the remap_pages description and add the comment for folio_get_anon_vma.
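The recheck protocol described in the comment above boils down to something like this (a sketch of the pattern folio_lock_anon_vma_read() already implements; real code also needs RCU or an anon_vma reference to keep the anon_vma alive across the lookup):

	struct anon_vma *anon_vma;

retry:
	anon_vma = folio_anon_vma(folio);	/* derived from folio->mapping */
	if (!anon_vma)
		return NULL;
	anon_vma_lock_read(anon_vma);
	if (unlikely(folio_anon_vma(folio) != anon_vma)) {
		/* remap_pages() moved the folio to another anon_vma; retry */
		anon_vma_unlock_read(anon_vma);
		goto retry;
	}
	/* folio->mapping is now stable under the anon_vma lock */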
- has changed after taking the anon_vma lock. If it changed they
- should release the lock and retry obtaining a new anon_vma, because
- it means the anon_vma was changed by remap_pages() before the lock
- could be obtained. This is the only additional complexity added to
- the rmap code to provide this anonymous page remapping functionality.
- */
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
unsigned long dst_start, unsigned long src_start,
unsigned long len, __u64 mode)
+{
[...]
if (!err) {
dst_addr += step_size;
src_addr += step_size;
moved += step_size;
}
if ((!err || err == -EAGAIN) &&
fatal_signal_pending(current))
err = -EINTR;
if (err && err != -EAGAIN)
break;
The err handling is slightly harder to read. I tried to rewrite it like this:
	switch (err) {
	case 0:
		dst_addr += step_size;
		src_addr += step_size;
		moved += step_size;
		/* fall through */
	case -EAGAIN:
		if (fatal_signal_pending(current)) {
			err = -EINTR;
			goto out;
		}
		/* Continue with the loop */
		break;
	default:
		goto out;
	}
Not super good but maybe slightly better? No strong opinions, but if you agree that looks better we can use it.
Agree that this should be improved. Let me see if I can find a cleaner way to handle these errors.
}
+out:
mmap_read_unlock(dst_mm);
if (dst_mm != src_mm)
mmap_read_unlock(src_mm);
BUG_ON(moved < 0);
BUG_ON(err > 0);
BUG_ON(!moved && !err);
return moved ? moved : err;
+}
I think for the rest I'll read the new version (e.g. I saw discussion on proper handling of pmd swap entries, which is not yet addressed, but probably will be in the next one).
Appreciate your feedback! Thanks, Suren.
Thanks,
-- Peter Xu
One more thing..
On Fri, Sep 22, 2023 at 06:31:45PM -0700, Suren Baghdasaryan wrote:
+static int remap_pages_pte(struct mm_struct *dst_mm,
[...]
+retry:
- dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
- /* If a huge pmd materialized from under us, fail */
- if (unlikely(!dst_pte)) {
err = -EFAULT;
goto out;
- }
- src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
- /*
* We held the mmap_lock for reading so MADV_DONTNEED
* can zap transparent huge pages under us, or the
* transparent huge page fault can establish new
* transparent huge pages under us.
*/
- if (unlikely(!src_pte)) {
err = -EFAULT;
goto out;
- }
For these two places: I know that thp collapse with mmap read lock hasn't yet spread to anon (so I assume none of the above could trigger yet on the failure paths), but shall we consistently return -EAGAIN here just in case we forget that in the future?
For example, for UFFDIO_COPY over shmem which we can already hit similar case, mfill_atomic_install_pte() has:
	ret = -EAGAIN;
	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
	if (!dst_pte)
		goto out;
Thanks,
On Thu, Sep 28, 2023 at 11:43 AM Peter Xu peterx@redhat.com wrote:
One more thing..
On Fri, Sep 22, 2023 at 06:31:45PM -0700, Suren Baghdasaryan wrote:
+static int remap_pages_pte(struct mm_struct *dst_mm,
[...]
+retry:
dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
/* If a huge pmd materialized from under us, fail */
if (unlikely(!dst_pte)) {
err = -EFAULT;
goto out;
}
src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
/*
* We held the mmap_lock for reading so MADV_DONTNEED
* can zap transparent huge pages under us, or the
* transparent huge page fault can establish new
* transparent huge pages under us.
*/
if (unlikely(!src_pte)) {
err = -EFAULT;
goto out;
}
For these two places: I know that thp collapse with mmap read lock hasn't yet spread to anon (so I assume none of the above could trigger yet on the failure paths), but shall we consistently return -EAGAIN here just in case we forget that in the future?
For example, for UFFDIO_COPY over shmem which we can already hit similar case, mfill_atomic_install_pte() has:
	ret = -EAGAIN;
	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
	if (!dst_pte)
		goto out;
Thanks,
Retrying in this case makes sense to me. Will change.
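In other words (a sketch of the agreed change, keeping the surrounding cleanup in the out: label unchanged):

	dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
	/* A huge pmd materialized from under us: let the caller retry */
	if (unlikely(!dst_pte)) {
		err = -EAGAIN;
		goto out;
	}

	src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
	/* Same on the source side */
	if (unlikely(!src_pte)) {
		err = -EAGAIN;
		goto out;
	}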
-- Peter Xu
Add a test for the new UFFDIO_REMAP ioctl which uses uffd to remap the source into the destination buffer while checking the contents of both after remapping. After the operation the content of the destination buffer should match the original source buffer's content while the source buffer should be zeroed.
Signed-off-by: Suren Baghdasaryan surenb@google.com
---
 tools/testing/selftests/mm/uffd-common.c     | 41 ++++++++++++-
 tools/testing/selftests/mm/uffd-common.h     |  1 +
 tools/testing/selftests/mm/uffd-unit-tests.c | 62 ++++++++++++++++++++
 3 files changed, 102 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
index 02b89860e193..2a3ffd0ce96e 100644
--- a/tools/testing/selftests/mm/uffd-common.c
+++ b/tools/testing/selftests/mm/uffd-common.c
@@ -52,6 +52,13 @@ static int anon_allocate_area(void **alloc_area, bool is_src)
 		*alloc_area = NULL;
 		return -errno;
 	}
+
+	/* Prevent source pages from collapsing into THPs */
+	if (madvise(*alloc_area, nr_pages * page_size, MADV_NOHUGEPAGE)) {
+		*alloc_area = NULL;
+		return -errno;
+	}
+
 	return 0;
 }
@@ -484,8 +491,14 @@ void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args)
 	offset = (char *)(unsigned long)msg->arg.pagefault.address - area_dst;
 	offset &= ~(page_size-1);
 
-		if (copy_page(uffd, offset, args->apply_wp))
-			args->missing_faults++;
+		/* UFFD_REMAP is supported for anon non-shared mappings. */
+		if (uffd_test_ops == &anon_uffd_test_ops && !map_shared) {
+			if (remap_page(uffd, offset))
+				args->missing_faults++;
+		} else {
+			if (copy_page(uffd, offset, args->apply_wp))
+				args->missing_faults++;
+		}
 	}
 }
@@ -620,6 +633,30 @@ int copy_page(int ufd, unsigned long offset, bool wp)
 	return __copy_page(ufd, offset, false, wp);
 }
 
+int remap_page(int ufd, unsigned long offset)
+{
+	struct uffdio_remap uffdio_remap;
+
+	if (offset >= nr_pages * page_size)
+		err("unexpected offset %lu\n", offset);
+	uffdio_remap.dst = (unsigned long) area_dst + offset;
+	uffdio_remap.src = (unsigned long) area_src + offset;
+	uffdio_remap.len = page_size;
+	uffdio_remap.mode = UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES;
+	uffdio_remap.remap = 0;
+	if (ioctl(ufd, UFFDIO_REMAP, &uffdio_remap)) {
+		/* real retval in uffdio_remap.remap */
+		if (uffdio_remap.remap != -EEXIST)
+			err("UFFDIO_REMAP error: %"PRId64,
+			    (int64_t)uffdio_remap.remap);
+		wake_range(ufd, uffdio_remap.dst, page_size);
+	} else if (uffdio_remap.remap != page_size) {
+		err("UFFDIO_REMAP error: %"PRId64, (int64_t)uffdio_remap.remap);
+	} else
+		return 1;
+	return 0;
+}
+
 int uffd_open_dev(unsigned int flags)
 {
 	int fd, uffd;
diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h
index 7c4fa964c3b0..2bbb15d1920c 100644
--- a/tools/testing/selftests/mm/uffd-common.h
+++ b/tools/testing/selftests/mm/uffd-common.h
@@ -111,6 +111,7 @@ void wp_range(int ufd, __u64 start, __u64 len, bool wp);
 void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args);
 int __copy_page(int ufd, unsigned long offset, bool retry, bool wp);
 int copy_page(int ufd, unsigned long offset, bool wp);
+int remap_page(int ufd, unsigned long offset);
 void *uffd_poll_thread(void *arg);
 
 int uffd_open_dev(unsigned int flags);
diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
index 2709a34a39c5..a33819639187 100644
--- a/tools/testing/selftests/mm/uffd-unit-tests.c
+++ b/tools/testing/selftests/mm/uffd-unit-tests.c
@@ -824,6 +824,10 @@ static void uffd_events_test_common(bool wp)
 	char c;
 	struct uffd_args args = { 0 };
 
+	/* Prevent source pages from being mapped more than once */
+	if (madvise(area_src, nr_pages * page_size, MADV_DONTFORK))
+		err("madvise(MADV_DONTFORK) failed");
+
 	fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
 	if (uffd_register(uffd, area_dst, nr_pages * page_size,
 			  true, wp, false))
@@ -1062,6 +1066,58 @@ static void uffd_poison_test(uffd_test_args_t *targs)
 	uffd_test_pass();
 }
 
+static void uffd_remap_test(uffd_test_args_t *targs)
+{
+	unsigned long nr;
+	pthread_t uffd_mon;
+	char c;
+	unsigned long long count;
+	struct uffd_args args = { 0 };
+
+	if (uffd_register(uffd, area_dst, nr_pages * page_size,
+			  true, false, false))
+		err("register failure");
+
+	if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
+		err("uffd_poll_thread create");
+
+	/*
+	 * Read each of the pages back using the UFFD-registered mapping. We
+	 * expect that the first time we touch a page, it will result in a missing
+	 * fault. uffd_poll_thread will resolve the fault by remapping source
+	 * page to destination.
+	 */
+	for (nr = 0; nr < nr_pages; nr++) {
+		/* Check area_src content */
+		count = *area_count(area_src, nr);
+		if (count != count_verify[nr])
+			err("nr %lu source memory invalid %llu %llu\n",
+			    nr, count, count_verify[nr]);
+
+		/* Faulting into area_dst should remap the page */
+		count = *area_count(area_dst, nr);
+		if (count != count_verify[nr])
+			err("nr %lu memory corruption %llu %llu\n",
+			    nr, count, count_verify[nr]);
+
+		/* Re-check area_src content which should be empty */
+		count = *area_count(area_src, nr);
+		if (count != 0)
+			err("nr %lu remap failed %llu %llu\n",
+			    nr, count, count_verify[nr]);
+	}
+
+	if (write(pipefd[1], &c, sizeof(c)) != sizeof(c))
+		err("pipe write");
+	if (pthread_join(uffd_mon, NULL))
+		err("join() failed");
+
+	if (args.missing_faults != nr_pages || args.minor_faults != 0)
+		uffd_test_fail("stats check error");
+	else
+		uffd_test_pass();
+}
+
 /*
  * Test the returned uffdio_register.ioctls with different register modes.
  * Note that _UFFDIO_ZEROPAGE is tested separately in the zeropage test.
@@ -1139,6 +1195,12 @@ uffd_test_case_t uffd_tests[] = {
 		.mem_targets = MEM_ALL,
 		.uffd_feature_required = 0,
 	},
+	{
+		.name = "remap",
+		.uffd_fn = uffd_remap_test,
+		.mem_targets = MEM_ANON,
+		.uffd_feature_required = 0,
+	},
 	{
 		.name = "wp-fork",
 		.uffd_fn = uffd_wp_fork_test,