The patch titled
     Subject: mm: lock VMAs skipped by a failed queue_pages_range()
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-lock-vmas-skipped-by-a-failed-queue_pages_range.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-hotfixes-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: Suren Baghdasaryan <surenb@google.com>
Subject: mm: lock VMAs skipped by a failed queue_pages_range()
Date: Mon, 18 Sep 2023 14:16:08 -0700
When queue_pages_range() encounters an unmovable page, it terminates its page walk. This walk, among other things, locks the VMAs in the range. This termination might result in some VMAs being left unlocked after queue_pages_range() completes. Since do_mbind() continues to operate on these VMAs despite the failure from queue_pages_range(), it will encounter an unlocked VMA, leading to a BUG().
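For context, do_mbind() is the kernel-side implementation of the mbind(2)
syscall, and the path above is reached by an mbind() call that requests page
migration.  A minimal userspace sketch of such a call follows (illustrative
only, not the syzbot reproducer; the mapping size and node mask are arbitrary
assumptions):

/*
 * Hypothetical example, not the syzbot reproducer: exercise the
 * mbind(2) path that reaches do_mbind()/queue_pages_range().
 * Build with:  cc example.c -lnuma
 */
#include <numaif.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;
	unsigned long nodemask = 1UL << 0;	/* NUMA node 0 */
	void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(addr, 0, len);	/* populate the pages */

	/*
	 * MPOL_MF_MOVE asks the kernel to migrate existing pages;
	 * queue_pages_range() walks (and write-locks) the VMAs in
	 * [addr, addr + len) to queue those pages for migration.
	 */
	if (mbind(addr, len, MPOL_BIND, &nodemask, 2,
		  MPOL_MF_MOVE | MPOL_MF_STRICT) != 0)
		perror("mbind");	/* e.g. EIO if pages were unmovable */
	return 0;
}

With MPOL_MF_MOVE | MPOL_MF_STRICT, mbind() can fail with EIO when some pages
could not be moved, which is the queue_pages_range() failure mode described
above.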
This mbind() behavior has been modified several times before and might need further changes: either to finish the page walk even in the presence of unmovable pages, or to error out immediately when queue_pages_range() fails.  However, that requires more discussion, so to fix the immediate issue, explicitly lock the VMAs in the range if queue_pages_range() failed.  The added condition does not save much but documents when this extra locking is needed.
Link: https://lkml.kernel.org/r/20230918211608.3580629-1-surenb@google.com
Fixes: 49b0638502da ("mm: enable page walking API to lock vmas during the walk")
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reported-by: syzbot+b591856e0f0139f83023@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/000000000000f392a60604a65085@google.com/
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/mempolicy.c |    3 +++
 1 file changed, 3 insertions(+)
--- a/mm/mempolicy.c~mm-lock-vmas-skipped-by-a-failed-queue_pages_range
+++ a/mm/mempolicy.c
@@ -1342,6 +1342,9 @@ static long do_mbind(unsigned long start
 	vma_iter_init(&vmi, mm, start);
 	prev = vma_prev(&vmi);
 	for_each_vma_range(vmi, vma, end) {
+		/* If queue_pages_range failed then not all VMAs might be locked */
+		if (ret)
+			vma_start_write(vma);
 		err = mbind_range(&vmi, vma, &prev, start, end, new);
 		if (err)
 			break;
_
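Note that the "if (ret)" check is mainly documentation, as the changelog says:
vma_start_write() already returns early on a VMA that the interrupted walk
managed to write-lock, so calling it unconditionally would be equally correct.
The sketch below is a simplified paraphrase of the helper from
include/linux/mm.h of this kernel generation, not a verbatim copy; the exact
body and field names vary between versions:

/* Simplified sketch of vma_start_write(); exact body varies by version. */
static inline void vma_start_write(struct vm_area_struct *vma)
{
	int mm_lock_seq;

	/* Already write-locked in this mmap_lock cycle: nothing to do. */
	if (__is_vma_write_locked(vma, &mm_lock_seq))
		return;

	down_write(&vma->vm_lock->lock);
	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
	up_write(&vma->vm_lock->lock);
}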
Patches currently in -mm which might be from surenb@google.com are
mm-lock-vmas-skipped-by-a-failed-queue_pages_range.patch