The patch titled
     Subject: mm/z3fold.c: claim page in the beginning of free
has been added to the -mm tree.  Its filename is
     z3fold-claim-page-in-the-beginning-of-free.patch
This patch should soon appear at
     http://ozlabs.org/~akpm/mmots/broken-out/z3fold-claim-page-in-the-beginning-...
and later at
     http://ozlabs.org/~akpm/mmotm/broken-out/z3fold-claim-page-in-the-beginning-...
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated there every 3-4 working days
------------------------------------------------------
From: Vitaly Wool <vitalywool@gmail.com>
Subject: mm/z3fold.c: claim page in the beginning of free
There is a very hard to reproduce race in z3fold between z3fold_free() and z3fold_reclaim_page(): z3fold_reclaim_page() can claim the page after z3fold_free() has checked whether the page was claimed, and z3fold_free() will then schedule this page for compaction, which may in turn lead to random page faults (since the page will have been reclaimed by then).  Fix that by claiming the page at the beginning of z3fold_free().
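For illustration only (not part of the patch): the sketch below is a minimal userspace C analogue of the claim-first idea, assuming hypothetical stand-ins (struct fake_page, fake_test_and_set_bit(), the pthread scaffolding and file name) that merely mimic the semantics of the kernel's test_and_set_bit().  Because the atomic test-and-set both claims the page and reports whether reclaim claimed it first, at most one of the two paths can ever treat the page as its own.

/* build with: cc -pthread claim_first_sketch.c   (hypothetical file name) */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define PAGE_CLAIMED_BIT 0UL		/* stand-in for the kernel's PAGE_CLAIMED */

struct fake_page {			/* stand-in for struct page::private */
	atomic_ulong private;
};

static struct fake_page page;

/* Mimics test_and_set_bit(): atomically set the bit and return its old value. */
static int fake_test_and_set_bit(unsigned long nr, atomic_ulong *addr)
{
	unsigned long old = atomic_fetch_or(addr, 1UL << nr);

	return (old >> nr) & 1;
}

static void *free_path(void *arg)
{
	/* Claim first, as the patch now does at the top of z3fold_free(). */
	int page_claimed = fake_test_and_set_bit(PAGE_CLAIMED_BIT, &page.private);

	(void)arg;
	if (page_claimed)
		printf("free: reclaim got there first, do not touch the page\n");
	else
		printf("free: page is ours, safe to schedule compaction\n");
	return NULL;
}

static void *reclaim_path(void *arg)
{
	(void)arg;
	if (!fake_test_and_set_bit(PAGE_CLAIMED_BIT, &page.private))
		printf("reclaim: claimed the page\n");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, free_path, NULL);
	pthread_create(&b, NULL, reclaim_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

In the patch itself the same effect is obtained by moving the test_and_set_bit() call to the top of z3fold_free() and reusing its return value (page_claimed) in the later checks, instead of re-reading the bit after reclaim may already have changed it.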
Link: http://lkml.kernel.org/r/20190926104844.4f0c6efa1366b8f5741eaba9@gmail.com
Signed-off-by: Vitaly Wool <vitalywool@gmail.com>
Reported-by: Markus Linnala <markus.linnala@gmail.com>
Cc: Markus Linnala <markus.linnala@gmail.com>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Henry Burns <henrywolfeburns@gmail.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/z3fold.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/mm/z3fold.c~z3fold-claim-page-in-the-beginning-of-free
+++ a/mm/z3fold.c
@@ -998,9 +998,11 @@ static void z3fold_free(struct z3fold_po
 	struct z3fold_header *zhdr;
 	struct page *page;
 	enum buddy bud;
+	bool page_claimed;
 
 	zhdr = handle_to_z3fold_header(handle);
 	page = virt_to_page(zhdr);
+	page_claimed = test_and_set_bit(PAGE_CLAIMED, &page->private);
 
 	if (test_bit(PAGE_HEADLESS, &page->private)) {
 		/* if a headless page is under reclaim, just leave.
@@ -1008,7 +1010,7 @@ static void z3fold_free(struct z3fold_po
 		 * has not been set before, we release this page
 		 * immediately so we don't care about its value any more.
 		 */
-		if (!test_and_set_bit(PAGE_CLAIMED, &page->private)) {
+		if (!page_claimed) {
 			spin_lock(&pool->lock);
 			list_del(&page->lru);
 			spin_unlock(&pool->lock);
@@ -1044,7 +1046,7 @@ static void z3fold_free(struct z3fold_po
 		atomic64_dec(&pool->pages_nr);
 		return;
 	}
-	if (test_bit(PAGE_CLAIMED, &page->private)) {
+	if (page_claimed) {
 		z3fold_page_unlock(zhdr);
 		return;
 	}
_
Patches currently in -mm which might be from vitalywool@gmail.com are
z3fold-claim-page-in-the-beginning-of-free.patch