In some cases it appears the invalidation of a hwpoisoned page
fails because the page is still mapped in another process. This
can cause a program to be restarted over and over, dying each
time it page faults on the page that was not invalidated. Avoid
that problem by unmapping the hwpoisoned page when we find it.
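For background, invalidation cannot succeed while the page is mapped
anywhere: invalidate_inode_page() bails out on mapped pages. A
simplified sketch of that check, roughly what mm/truncate.c does
(details vary between kernel versions):

    int invalidate_inode_page(struct page *page)
    {
        struct address_space *mapping = page_mapping(page);

        if (!mapping)
            return 0;
        if (PageDirty(page) || PageWriteback(page))
            return 0;
        if (page_mapped(page))  /* still mapped elsewhere -> refuse */
            return 0;
        return invalidate_complete_page(mapping, page);
    }

Unmapping the page first removes the reason for that refusal.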
Another issue is that sometimes we end up oopsing in finish_fault
if the code tries to do something with the now-NULL vmf->page.
I did not hit this error when submitting the previous patch because
there are several opportunities for alloc_set_pte to bail out before
accessing vmf->page, and that apparently happened on those systems,
and most of the time on other systems too.

However, across several million systems that error does occur a
handful of times a day. It can be avoided by returning VM_FAULT_NOPAGE,
which causes do_read_fault to return before calling finish_fault.
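For illustration, here is a simplified sketch of the relevant part of
do_read_fault() (not the exact upstream code): VM_FAULT_NOPAGE is in
the early-return mask, so the poisoned-page case never reaches
finish_fault(), whereas a plain 0 return falls through to it.

    ret = __do_fault(vmf);
    if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
        return ret;                /* VM_FAULT_NOPAGE stops here */

    ret |= finish_fault(vmf);      /* 0 would have fallen through to here */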
Fixes: e53ac7374e64 ("mm: invalidate hwpoison page cache page in fault path")
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: stable@vger.kernel.org
---
mm/memory.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index be44d0b36b18..76e3af9639d9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3918,14 +3918,18 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 		return ret;
 
 	if (unlikely(PageHWPoison(vmf->page))) {
+		struct page *page = vmf->page;
 		vm_fault_t poisonret = VM_FAULT_HWPOISON;
 		if (ret & VM_FAULT_LOCKED) {
+			if (page_mapped(page))
+				unmap_mapping_pages(page_mapping(page),
+						    page->index, 1, false);
 			/* Retry if a clean page was removed from the cache. */
-			if (invalidate_inode_page(vmf->page))
-				poisonret = 0;
-			unlock_page(vmf->page);
+			if (invalidate_inode_page(page))
+				poisonret = VM_FAULT_NOPAGE;
+			unlock_page(page);
 		}
-		put_page(vmf->page);
+		put_page(page);
 		vmf->page = NULL;
 		return poisonret;
 	}
--
2.35.1
If an mremap() syscall with old_size=0 ends up in move_page_tables(),
it will call invalidate_range_start()/invalidate_range_end() unnecessarily,
i.e. with an empty range.
This causes a WARN in KVM's mmu_notifier. In the past, empty ranges
have been diagnosed to be off-by-one bugs, hence the WARNing.
Given the low (so far) number of unique reports, the benefits of
detecting more buggy callers seem to outweigh the cost of having
to fix cases such as this one, where userspace is doing something
silly. In this particular case, an early return from move_page_tables()
is enough to fix the issue.
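For reference, the kind of call that reaches move_page_tables() with
len == 0 looks roughly like the sketch below. This is only an
illustration of the "silly" userspace pattern, not the exact syzkaller
reproducer, and observing the WARN additionally requires an
mmu_notifier user (such as a KVM VM) in the same process:

    /* old_size == 0 duplicates a shared mapping instead of moving it,
     * so the move path runs with an empty source range. */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    void *q = mremap(p, 0, 4096, MREMAP_MAYMOVE);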
Reported-by: syzbot+6bde52d89cfdf9f61425@syzkaller.appspotmail.com
Cc: linux-mm@kvack.org
Cc: akpm@linux-foundation.org
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
mm/mremap.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/mremap.c b/mm/mremap.c
index 002eec83e91e..0e175aef536e 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -486,6 +486,9 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 	pmd_t *old_pmd, *new_pmd;
 	pud_t *old_pud, *new_pud;
 
+	if (!len)
+		return 0;
+
 	old_end = old_addr + len;
 	flush_cache_range(vma, old_addr, old_end);
 
--
2.31.1