The patch titled
Subject: mm/khugepaged: fix GUP-fast interaction by freeing ptes via mmu_gather
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
mm-khugepaged-fix-gup-fast-interaction-by-freeing-ptes-via-mmu_gather.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Jann Horn <jannh@google.com>
Subject: mm/khugepaged: fix GUP-fast interaction by freeing ptes via mmu_gather
Date: Wed, 23 Nov 2022 17:56:51 +0100
Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
ensure that the page table was not removed by khugepaged in between.
However, lockless_pages_from_mm() still requires that the page table is
not concurrently freed. We could provide this guarantee in khugepaged by
using some variant of pte_free() with appropriate delay; but such a helper
doesn't really exist outside the mmu_gather infrastructure.
To avoid having to wire up a new codepath for freeing page tables that
might have been in use in the past, fix the issue by letting khugepaged
deposit a fresh page table (if required) instead of depositing the
existing page table, and free the old page table via mmu_gather.
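As a rough illustration of the approach (a minimal sketch, not the patch itself; the standalone helper and its name are invented for the example), the old table is queued on an mmu_gather so it is only freed after the TLB flush, while a freshly allocated table is kept aside for the deposit:

/*
 * Minimal sketch of the freeing side; assumes mm, address and the old
 * page table are set up as in collapse_huge_page() below.  The helper
 * name is illustrative only.
 */
static void free_old_page_table_sketch(struct mm_struct *mm,
				       unsigned long address,
				       pgtable_t old_table)
{
	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm);
	/* Make the batched flush cover the PTEs the old table held. */
	tlb_flush_pte_range(&tlb, address, HPAGE_PMD_SIZE);
	/* Queue the table page; it is freed only after the flush. */
	pte_free_tlb(&tlb, old_table, address);
	tlb_finish_mmu(&tlb);
}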
Link: https://lkml.kernel.org/r/20221123165652.2204925-4-jannh@google.com
Fixes: ba76149f47d8 ("thp: khugepaged")
Signed-off-by: Jann Horn <jannh@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/khugepaged.c | 47 ++++++++++++++++++++++++++++++++++++----------
1 file changed, 37 insertions(+), 10 deletions(-)
--- a/mm/khugepaged.c~mm-khugepaged-fix-gup-fast-interaction-by-freeing-ptes-via-mmu_gather
+++ a/mm/khugepaged.c
@@ -975,6 +975,8 @@ static int collapse_huge_page(struct mm_
int result = SCAN_FAIL;
struct vm_area_struct *vma;
struct mmu_notifier_range range;
+ struct mmu_gather tlb;
+ pgtable_t deposit_table = NULL;
VM_BUG_ON(address & ~HPAGE_PMD_MASK);
@@ -989,6 +991,11 @@ static int collapse_huge_page(struct mm_
result = alloc_charge_hpage(&hpage, mm, cc);
if (result != SCAN_SUCCEED)
goto out_nolock;
+ deposit_table = pte_alloc_one(mm);
+ if (!deposit_table) {
+ result = SCAN_FAIL;
+ goto out_nolock;
+ }
mmap_read_lock(mm);
result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
@@ -1041,12 +1048,12 @@ static int collapse_huge_page(struct mm_
pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
/*
- * This removes any huge TLB entry from the CPU so we won't allow
- * huge and small TLB entries for the same virtual address to
- * avoid the risk of CPU bugs in that area.
- *
- * Parallel fast GUP is fine since fast GUP will back off when
- * it detects PMD is changed.
+ * Unlink the page table from the PMD and do a TLB flush.
+ * This ensures that the CPUs can't write to the old pages anymore by
+ * the time __collapse_huge_page_copy() copies their contents, and it
+ * allows __collapse_huge_page_copy() to free the old pages.
+ * This also prevents lockless_pages_from_mm() from grabbing references
+ * on the old pages from here on.
*/
_pmd = pmdp_collapse_flush(vma, address, pmd);
spin_unlock(pmd_ptl);
@@ -1090,6 +1097,16 @@ static int collapse_huge_page(struct mm_
__SetPageUptodate(hpage);
pgtable = pmd_pgtable(_pmd);
+ /*
+ * Discard the old page table.
+ * The TLB flush that's implied here is redundant, but hard to avoid
+ * with the current API.
+ */
+ tlb_gather_mmu(&tlb, mm);
+ tlb_flush_pte_range(&tlb, address, HPAGE_PMD_SIZE);
+ pte_free_tlb(&tlb, pgtable, address);
+ tlb_finish_mmu(&tlb);
+
_pmd = mk_huge_pmd(hpage, vma->vm_page_prot);
_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
@@ -1097,7 +1114,8 @@ static int collapse_huge_page(struct mm_
BUG_ON(!pmd_none(*pmd));
page_add_new_anon_rmap(hpage, vma, address);
lru_cache_add_inactive_or_unevictable(hpage, vma);
- pgtable_trans_huge_deposit(mm, pmd, pgtable);
+ pgtable_trans_huge_deposit(mm, pmd, deposit_table);
+ deposit_table = NULL;
set_pmd_at(mm, address, pmd, _pmd);
update_mmu_cache_pmd(vma, address, pmd);
spin_unlock(pmd_ptl);
@@ -1112,6 +1130,8 @@ out_nolock:
mem_cgroup_uncharge(page_folio(hpage));
put_page(hpage);
}
+ if (deposit_table)
+ pte_free(mm, deposit_table);
trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
return result;
}
@@ -1393,11 +1413,14 @@ static int set_huge_pmd(struct vm_area_s
* The mmap lock together with this VMA's rmap locks covers all paths towards
* the page table entries we're messing with here, except for hardware page
* table walks and lockless_pages_from_mm().
+ *
+ * This function is similar to free_pte_range().
*/
static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, pmd_t *pmdp)
{
pmd_t pmd;
+ struct mmu_gather tlb;
mmap_assert_write_locked(mm);
if (vma->vm_file)
@@ -1408,11 +1431,15 @@ static void collapse_and_free_pmd(struct
*/
if (vma->anon_vma)
lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
+ page_table_check_pte_clear_range(mm, addr, pmd);
- pmd = pmdp_collapse_flush(vma, addr, pmdp);
+ tlb_gather_mmu(&tlb, mm);
+ pmd = READ_ONCE(*pmdp);
+ pmd_clear(pmdp);
+ tlb_flush_pte_range(&tlb, addr, HPAGE_PMD_SIZE);
+ pte_free_tlb(&tlb, pmd_pgtable(pmd), addr);
+ tlb_finish_mmu(&tlb);
mm_dec_nr_ptes(mm);
- page_table_check_pte_clear_range(mm, addr, pmd);
- pte_free(mm, pmd_pgtable(pmd));
}
/**
_
Patches currently in -mm which might be from jannh@google.com are
mm-khugepaged-take-the-right-locks-for-page-table-retraction.patch
mmu_gather-use-macro-arguments-more-carefully.patch
mm-khugepaged-fix-gup-fast-interaction-by-freeing-ptes-via-mmu_gather.patch
mm-khugepaged-invoke-mmu-notifiers-in-shmem-file-collapse-paths.patch
The patch titled
Subject: mmu_gather: Use macro arguments more carefully
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
mmu_gather-use-macro-arguments-more-carefully.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Jann Horn <jannh@google.com>
Subject: mmu_gather: Use macro arguments more carefully
Date: Wed, 23 Nov 2022 17:56:50 +0100
Parenthesize the tlb argument where it is dereferenced, so the *_free_tlb() macros do not break when the tlb parameter is an expression such as "&tlb".
The following commit relies on this when calling pte_free_tlb().
(Going forward it would probably be a good idea to change macros like this
into inline functions...)
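For illustration (a reduced example, not the kernel macros; the struct and macro names here are invented), the breakage is plain macro hygiene:

/* Reduced example of the problem; names are illustrative only. */
struct gather_example { unsigned int freed_tables; };

#define mark_freed_bad(tlb)	do { tlb->freed_tables = 1; } while (0)
#define mark_freed_good(tlb)	do { (tlb)->freed_tables = 1; } while (0)

static void caller_example(void)
{
	struct gather_example tlb = { 0 };

	/*
	 * mark_freed_bad(&tlb) would expand to "&tlb->freed_tables = 1"
	 * and fail to compile, because "->" is applied to the struct
	 * "tlb" before the "&" is.
	 */
	mark_freed_good(&tlb);	/* expands to "(&tlb)->freed_tables = 1" */
}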
Link: https://lkml.kernel.org/r/20221123165652.2204925-3-jannh@google.com
Fixes: a6d60245d6d9 ("asm-generic/tlb: Track which levels of the page tables have been cleared")
Signed-off-by: Jann Horn <jannh@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
include/asm-generic/tlb.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/include/asm-generic/tlb.h~mmu_gather-use-macro-arguments-more-carefully
+++ a/include/asm-generic/tlb.h
@@ -630,7 +630,7 @@ static inline void tlb_flush_p4d_range(s
#define pte_free_tlb(tlb, ptep, address) \
do { \
tlb_flush_pmd_range(tlb, address, PAGE_SIZE); \
- tlb->freed_tables = 1; \
+ (tlb)->freed_tables = 1; \
__pte_free_tlb(tlb, ptep, address); \
} while (0)
#endif
@@ -639,7 +639,7 @@ static inline void tlb_flush_p4d_range(s
#define pmd_free_tlb(tlb, pmdp, address) \
do { \
tlb_flush_pud_range(tlb, address, PAGE_SIZE); \
- tlb->freed_tables = 1; \
+ (tlb)->freed_tables = 1; \
__pmd_free_tlb(tlb, pmdp, address); \
} while (0)
#endif
@@ -648,7 +648,7 @@ static inline void tlb_flush_p4d_range(s
#define pud_free_tlb(tlb, pudp, address) \
do { \
tlb_flush_p4d_range(tlb, address, PAGE_SIZE); \
- tlb->freed_tables = 1; \
+ (tlb)->freed_tables = 1; \
__pud_free_tlb(tlb, pudp, address); \
} while (0)
#endif
@@ -657,7 +657,7 @@ static inline void tlb_flush_p4d_range(s
#define p4d_free_tlb(tlb, pudp, address) \
do { \
__tlb_adjust_range(tlb, address, PAGE_SIZE); \
- tlb->freed_tables = 1; \
+ (tlb)->freed_tables = 1; \
__p4d_free_tlb(tlb, pudp, address); \
} while (0)
#endif
_
Patches currently in -mm which might be from jannh@google.com are
mm-khugepaged-take-the-right-locks-for-page-table-retraction.patch
mmu_gather-use-macro-arguments-more-carefully.patch
mm-khugepaged-fix-gup-fast-interaction-by-freeing-ptes-via-mmu_gather.patch
mm-khugepaged-invoke-mmu-notifiers-in-shmem-file-collapse-paths.patch
The patch titled
Subject: mm/khugepaged: take the right locks for page table retraction
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
mm-khugepaged-take-the-right-locks-for-page-table-retraction.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Jann Horn <jannh@google.com>
Subject: mm/khugepaged: take the right locks for page table retraction
Date: Wed, 23 Nov 2022 17:56:49 +0100
Patch series "khugepaged fixes, take two", v2.
This patch (of 4):
Page table walks on address ranges mapped by VMAs can be done under the
mmap lock, the lock of an anon_vma attached to the VMA, or the lock of the
VMA's address_space. Only one of these needs to be held, and it does not
need to be held in exclusive mode.
Under those circumstances, the rules for concurrent access to page table
entries are:
- Terminal page table entries (entries that don't point to another page
table) can be arbitrarily changed under the page table lock, with the
exception that they always need to be consistent for
hardware page table walks and lockless_pages_from_mm().
In particular, they can be changed into non-terminal entries.
- Non-terminal page table entries (which point to another page table)
can not be modified; readers are allowed to READ_ONCE() an entry, verify
that it is non-terminal, and then assume that its value will stay as-is.
Retracting a page table involves modifying a non-terminal entry, so
page-table-level locks are insufficient to protect against concurrent page
table traversal; it requires taking all the higher-level locks under which
it is possible to start a page walk in the relevant range in exclusive
mode.
The collapse_huge_page() path for anonymous THP already follows this rule,
but the shmem/file THP path was getting it wrong, making it possible for
concurrent rmap-based operations to cause corruption.
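As a reader-side illustration of the second rule (a hedged sketch, not part of the patch; the function is made up, though sampling the entry into a local copy with READ_ONCE() matches what GUP-fast does), a walker holding just one of those locks may read a non-terminal entry once and keep relying on it:

/* Illustrative sketch only; kernel context assumed. */
static pte_t *walk_under_one_lock_sketch(pmd_t *pmd, unsigned long addr)
{
	pmd_t pmdval = READ_ONCE(*pmd);	/* sample the entry once */

	if (pmd_none(pmdval) || !pmd_present(pmdval) || pmd_trans_huge(pmdval))
		return NULL;		/* empty or terminal: nothing to walk */

	/*
	 * Safe to dereference: retracting the page table this entry points
	 * to requires taking all the higher-level locks in exclusive mode,
	 * and we hold one of them.  Caller must pte_unmap() the result.
	 */
	return pte_offset_map(&pmdval, addr);
}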
Link: https://lkml.kernel.org/r/20221123165652.2204925-1-jannh@google.com
Link: https://lkml.kernel.org/r/20221123165652.2204925-2-jannh@google.com
Fixes: 27e1f8273113 ("khugepaged: enable collapse pmd for pte-mapped THP")
Signed-off-by: Jann Horn <jannh@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/khugepaged.c | 55 ++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 51 insertions(+), 4 deletions(-)
--- a/mm/khugepaged.c~mm-khugepaged-take-the-right-locks-for-page-table-retraction
+++ a/mm/khugepaged.c
@@ -1379,16 +1379,37 @@ static int set_huge_pmd(struct vm_area_s
return SCAN_SUCCEED;
}
+/*
+ * A note about locking:
+ * Trying to take the page table spinlocks would be useless here because those
+ * are only used to synchronize:
+ *
+ * - modifying terminal entries (ones that point to a data page, not to another
+ * page table)
+ * - installing *new* non-terminal entries
+ *
+ * Instead, we need roughly the same kind of protection as free_pgtables() or
+ * mm_take_all_locks() (but only for a single VMA):
+ * The mmap lock together with this VMA's rmap locks covers all paths towards
+ * the page table entries we're messing with here, except for hardware page
+ * table walks and lockless_pages_from_mm().
+ */
static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, pmd_t *pmdp)
{
- spinlock_t *ptl;
pmd_t pmd;
mmap_assert_write_locked(mm);
- ptl = pmd_lock(vma->vm_mm, pmdp);
+ if (vma->vm_file)
+ lockdep_assert_held_write(&vma->vm_file->f_mapping->i_mmap_rwsem);
+ /*
+ * All anon_vmas attached to the VMA have the same root and are
+ * therefore locked by the same lock.
+ */
+ if (vma->anon_vma)
+ lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
+
pmd = pmdp_collapse_flush(vma, addr, pmdp);
- spin_unlock(ptl);
mm_dec_nr_ptes(mm);
page_table_check_pte_clear_range(mm, addr, pmd);
pte_free(mm, pmd_pgtable(pmd));
@@ -1439,6 +1460,14 @@ int collapse_pte_mapped_thp(struct mm_st
if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false))
return SCAN_VMA_CHECK;
+ /*
+ * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
+ * that got written to. Without this, we'd have to also lock the
+ * anon_vma if one exists.
+ */
+ if (vma->anon_vma)
+ return SCAN_VMA_CHECK;
+
/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
if (userfaultfd_wp(vma))
return SCAN_PTE_UFFD_WP;
@@ -1472,6 +1501,20 @@ int collapse_pte_mapped_thp(struct mm_st
goto drop_hpage;
}
+ /*
+ * We need to lock the mapping so that from here on, only GUP-fast and
+ * hardware page walks can access the parts of the page tables that
+ * we're operating on.
+ * See collapse_and_free_pmd().
+ */
+ i_mmap_lock_write(vma->vm_file->f_mapping);
+
+ /*
+ * This spinlock should be unnecessary: Nobody else should be accessing
+ * the page tables under spinlock protection here, only
+ * lockless_pages_from_mm() and the hardware page walker can access page
+ * tables while all the high-level locks are held in write mode.
+ */
start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
result = SCAN_FAIL;
@@ -1526,6 +1569,8 @@ int collapse_pte_mapped_thp(struct mm_st
/* step 4: remove pte entries */
collapse_and_free_pmd(mm, vma, haddr, pmd);
+ i_mmap_unlock_write(vma->vm_file->f_mapping);
+
maybe_install_pmd:
/* step 5: install pmd entry */
result = install_pmd
@@ -1539,6 +1584,7 @@ drop_hpage:
abort:
pte_unmap_unlock(start_pte, ptl);
+ i_mmap_unlock_write(vma->vm_file->f_mapping);
goto drop_hpage;
}
@@ -1595,7 +1641,8 @@ static int retract_page_tables(struct ad
* An alternative would be drop the check, but check that page
* table is clear before calling pmdp_collapse_flush() under
* ptl. It has higher chance to recover THP for the VMA, but
- * has higher cost too.
+ * has higher cost too. It would also probably require locking
+ * the anon_vma.
*/
if (vma->anon_vma) {
result = SCAN_PAGE_ANON;
_
Patches currently in -mm which might be from jannh@google.com are
mm-khugepaged-take-the-right-locks-for-page-table-retraction.patch
mmu_gather-use-macro-arguments-more-carefully.patch
mm-khugepaged-fix-gup-fast-interaction-by-freeing-ptes-via-mmu_gather.patch
mm-khugepaged-invoke-mmu-notifiers-in-shmem-file-collapse-paths.patch
Make nfsd_splice_actor() work for reads at a non-zero offset that do not end on a page boundary.
This was found when virtual machines with NFS-mounted qcow2 disks failed to boot properly
(originally seen on v6.0.5; the fix is also needed, and was tested, on v6.0.9 and v6.1-rc6).
Signed-off-by: Anders Blomdell <anders.blomdell@control.lth.se>
Link: https://bugzilla.redhat.com/show_bug.cgi?id=2142132
Fixes: bfbfb6182ad1 ("nfsd_splice_actor(): handle compound pages")
Cc: stable@vger.kernel.org # v6.0+
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -873,7 +873,7 @@ nfsd_splice_actor(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
unsigned offset = buf->offset;
page += offset / PAGE_SIZE;
- for (int i = sd->len; i > 0; i -= PAGE_SIZE)
+ for (int i = sd->len + offset % PAGE_SIZE; i > 0; i -= PAGE_SIZE)
svc_rqst_replace_page(rqstp, page++);
if (rqstp->rq_res.page_len == 0) // first call
rqstp->rq_res.page_base = offset % PAGE_SIZE;
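A worked example of the loop-bound arithmetic (a standalone illustration; the constant, helper and values are not part of the patch):

/* Standalone illustration of the loop bound; values are examples only. */
#define EX_PAGE_SIZE 4096u

static unsigned int pages_replaced(unsigned int len, unsigned int offset)
{
	unsigned int n = 0;

	/* Same shape as the fixed loop in nfsd_splice_actor(). */
	for (int i = len + offset % EX_PAGE_SIZE; i > 0; i -= EX_PAGE_SIZE)
		n++;
	return n;
}

/*
 * pages_replaced(8192, 6144) == 3: the data occupies bytes 6144..14335,
 * which touch three pages.  With the old bound (len alone) the loop ran
 * only twice, so the last page of such a read was never replaced.
 */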
--
Anders Blomdell Email: anders.blomdell@control.lth.se
Department of Automatic Control
Lund University Phone: +46 46 222 4625
P.O. Box 118
SE-221 00 Lund, Sweden