The patch titled
     Subject: mm: clean up apply_to_pte_range()
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-cleanup-apply_to_pte_range-routine.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-hotfixes-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: Alexander Gordeev <agordeev@linux.ibm.com>
Subject: mm: clean up apply_to_pte_range()
Date: Tue, 8 Apr 2025 18:07:31 +0200
Reverse the 'create' vs 'mm == &init_mm' conditions and move the page table mask modification out of the atomic context. This is a prerequisite for fixing the currently missing kernel page table locking.
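For orientation, the shape of apply_to_pte_range() after this change looks roughly as follows. This is a condensed, illustrative sketch only (the fn() loop over the range is abbreviated to a comment), not the verbatim mm/memory.c code; see the diff below for the actual change:

	static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
				      unsigned long addr, unsigned long end,
				      pte_fn_t fn, void *data, bool create,
				      pgtbl_mod_mask *mask)
	{
		int err = create ? -ENOMEM : -EINVAL;
		pte_t *pte, *mapped_pte;
		spinlock_t *ptl;

		if (mm == &init_mm) {
			/* kernel page tables: no PTE page lock is taken here */
			pte = create ? pte_alloc_kernel_track(pmd, addr, mask) :
				       pte_offset_kernel(pmd, addr);
			if (!pte)
				return err;
		} else {
			/* user page tables: map and lock the PTE page */
			pte = create ? pte_alloc_map_lock(mm, pmd, addr, &ptl) :
				       pte_offset_map_lock(mm, pmd, addr, &ptl);
			if (!pte)
				return err;
			mapped_pte = pte;
		}

		err = 0;
		arch_enter_lazy_mmu_mode();
		/* ... walk addr..end and call fn() on each pte ... */
		arch_leave_lazy_mmu_mode();

		if (mm != &init_mm)
			pte_unmap_unlock(mapped_pte, ptl);

		*mask |= PGTBL_PTE_MODIFIED;	/* now outside the lazy MMU section */
		return err;
	}

The outer condition is now 'mm == &init_mm' rather than 'create', and the PGTBL_PTE_MODIFIED update is performed after leaving lazy MMU mode instead of inside the atomic section.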
Link: https://lkml.kernel.org/r/0c65bc334f17ff1d7d92d31c69d7065769bbce4e.174412812...
Fixes: 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy updates")
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: <stable@vger.kernel.org>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/memory.c |   28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)
--- a/mm/memory.c~mm-cleanup-apply_to_pte_range-routine
+++ a/mm/memory.c
@@ -2915,24 +2915,28 @@ static int apply_to_pte_range(struct mm_
 			     pte_fn_t fn, void *data, bool create,
 			     pgtbl_mod_mask *mask)
 {
+	int err = create ? -ENOMEM : -EINVAL;
 	pte_t *pte, *mapped_pte;
-	int err = 0;
 	spinlock_t *ptl;
 
-	if (create) {
-		mapped_pte = pte = (mm == &init_mm) ?
-			pte_alloc_kernel_track(pmd, addr, mask) :
-			pte_alloc_map_lock(mm, pmd, addr, &ptl);
+	if (mm == &init_mm) {
+		if (create)
+			pte = pte_alloc_kernel_track(pmd, addr, mask);
+		else
+			pte = pte_offset_kernel(pmd, addr);
 		if (!pte)
-			return -ENOMEM;
+			return err;
 	} else {
-		mapped_pte = pte = (mm == &init_mm) ?
-			pte_offset_kernel(pmd, addr) :
-			pte_offset_map_lock(mm, pmd, addr, &ptl);
+		if (create)
+			pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
+		else
+			pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 		if (!pte)
-			return -EINVAL;
+			return err;
+		mapped_pte = pte;
 	}
 
+	err = 0;
 	arch_enter_lazy_mmu_mode();
 
 	if (fn) {
@@ -2944,12 +2948,14 @@ static int apply_to_pte_range(struct mm_
 			}
 		} while (addr += PAGE_SIZE, addr != end);
 	}
-	*mask |= PGTBL_PTE_MODIFIED;
 
 	arch_leave_lazy_mmu_mode();
 
 	if (mm != &init_mm)
 		pte_unmap_unlock(mapped_pte, ptl);
+
+	*mask |= PGTBL_PTE_MODIFIED;
+
 	return err;
 }
_
Patches currently in -mm which might be from agordeev@linux.ibm.com are
kasan-avoid-sleepable-page-allocation-from-atomic-context.patch
mm-cleanup-apply_to_pte_range-routine.patch
mm-protect-kernel-pgtables-in-apply_to_pte_range.patch