From: Lance Yang <lance.yang@linux.dev>
When splitting an mTHP and replacing a zero-filled subpage with the shared zeropage, try_to_map_unused_to_zeropage() currently drops several important PTE bits.
For userspace tools like CRIU, which rely on the soft-dirty mechanism for incremental snapshots, losing the soft-dirty bit means modified pages are missed, leading to inconsistent memory state after restore.
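For context, the soft-dirty bit is what such tools sample from /proc/pid/pagemap (bit 55 of each 64-bit entry) after clearing it through /proc/pid/clear_refs. A minimal userspace sketch of that check (the helper name and error handling are ours, not CRIU's code):

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Return 1 if the page backing vaddr is soft-dirty, 0 if clean, -1 on
 * error. pagemap holds one 64-bit entry per virtual page; bit 55 of
 * each entry is the soft-dirty flag. */
static int page_is_soft_dirty(unsigned long vaddr)
{
	uint64_t entry = 0;
	long pagesize = sysconf(_SC_PAGESIZE);
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0)
		return -1;
	if (pread(fd, &entry, sizeof(entry),
		  (off_t)(vaddr / pagesize) * sizeof(entry)) != sizeof(entry))
		entry = 0;
	close(fd);
	return (int)((entry >> 55) & 1);
}

If the kernel drops the bit during the split, this check reports a modified page as clean and an incremental dump skips it.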
As pointed out by David, the more critical uffd-wp bit is also dropped. This breaks the userfaultfd write-protection mechanism, causing writes to be silently missed by monitoring applications, which can lead to data corruption.
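For reference, a monitor typically arms uffd-wp along the lines of the sketch below (error paths collapsed; the range is assumed to be an existing anonymous mapping owned by the caller):

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Register [start, start + len) for write-protect tracking and arm the
 * protection. Returns the userfaultfd on success, -1 on error. */
static int wp_protect(void *start, unsigned long len)
{
	int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)start, .len = len },
		.mode = UFFDIO_REGISTER_MODE_WP,
	};
	struct uffdio_writeprotect wp = {
		.range = { .start = (unsigned long)start, .len = len },
		.mode = UFFDIO_WRITEPROTECT_MODE_WP,
	};

	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) ||
	    ioctl(uffd, UFFDIO_REGISTER, &reg) ||
	    ioctl(uffd, UFFDIO_WRITEPROTECT, &wp))
		return -1;
	/* Every write into the range should now fault and surface as a
	 * UFFD_PAGEFAULT_FLAG_WP event; a dropped uffd-wp bit lets the
	 * write through with no event at all. */
	return uffd;
}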
Preserve both the soft-dirty and uffd-wp bits from the old PTE when creating the new zeropage mapping to ensure they are correctly tracked.
Cc: stable@vger.kernel.org
Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
Suggested-by: David Hildenbrand <david@redhat.com>
Suggested-by: Dev Jain <dev.jain@arm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
v3 -> v4:
- Minor formatting tweak in try_to_map_unused_to_zeropage() function signature (per David and Dev)
- Collect Reviewed-by from Dev - thanks!
- https://lore.kernel.org/linux-mm/20250930060557.85133-1-lance.yang@linux.dev...

v2 -> v3:
- ptep_get() gets called only once per iteration (per Dev)
- https://lore.kernel.org/linux-mm/20250930043351.34927-1-lance.yang@linux.dev...

v1 -> v2:
- Avoid calling ptep_get() multiple times (per Dev)
- Double-check the uffd-wp bit (per David)
- Collect Acked-by from David - thanks!
- https://lore.kernel.org/linux-mm/20250928044855.76359-1-lance.yang@linux.dev...
 mm/migrate.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index ce83c2c3c287..21a2a1bf89f7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -296,8 +296,7 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
 }
 
 static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
-					  struct folio *folio,
-					  unsigned long idx)
+		struct folio *folio, pte_t old_pte, unsigned long idx)
 {
 	struct page *page = folio_page(folio, idx);
 	pte_t newpte;
@@ -306,7 +305,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
 		return false;
 	VM_BUG_ON_PAGE(!PageAnon(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(pte_present(ptep_get(pvmw->pte)), page);
+	VM_BUG_ON_PAGE(pte_present(old_pte), page);
 
 	if (folio_test_mlocked(folio) || (pvmw->vma->vm_flags & VM_LOCKED) ||
 	    mm_forbids_zeropage(pvmw->vma->vm_mm))
@@ -322,6 +321,12 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
 
 	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
 					pvmw->vma->vm_page_prot));
+
+	if (pte_swp_soft_dirty(old_pte))
+		newpte = pte_mksoft_dirty(newpte);
+	if (pte_swp_uffd_wp(old_pte))
+		newpte = pte_mkuffd_wp(newpte);
+
 	set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
 
 	dec_mm_counter(pvmw->vma->vm_mm, mm_counter(folio));
@@ -344,7 +349,7 @@ static bool remove_migration_pte(struct folio *folio,
 
 	while (page_vma_mapped_walk(&pvmw)) {
 		rmap_t rmap_flags = RMAP_NONE;
-		pte_t old_pte;
+		pte_t old_pte = ptep_get(pvmw.pte);
 		pte_t pte;
 		swp_entry_t entry;
 		struct page *new;
@@ -365,12 +370,11 @@ static bool remove_migration_pte(struct folio *folio,
 		}
 #endif
 		if (rmap_walk_arg->map_unused_to_zeropage &&
-		    try_to_map_unused_to_zeropage(&pvmw, folio, idx))
+		    try_to_map_unused_to_zeropage(&pvmw, folio, old_pte, idx))
 			continue;
 
 		folio_get(folio);
 		pte = mk_pte(new, READ_ONCE(vma->vm_page_prot));
-		old_pte = ptep_get(pvmw.pte);
 
 		entry = pte_to_swp_entry(old_pte);
 		if (!is_migration_entry_young(entry))
On 2025/9/30 15:10, Lance Yang wrote:
> [...]
> @@ -344,7 +349,7 @@ static bool remove_migration_pte(struct folio *folio,
>
>  	while (page_vma_mapped_walk(&pvmw)) {
>  		rmap_t rmap_flags = RMAP_NONE;
> -		pte_t old_pte;
> +		pte_t old_pte = ptep_get(pvmw.pte);
Oops, I just found a NULL pointer dereference bug in my changes to remove_migration_pte() when we encounter a PMD-mapped THP migration entry.
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
		/* PMD-mapped THP migration entry */
		if (!pvmw.pte) {
			VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
					!folio_test_pmd_mappable(folio), folio);
			remove_migration_pmd(&pvmw, new);
			continue;
		}
#endif
ptep_get() is now called too early, before the !pvmw.pte check that handles PMD-mapped THP migration entries, so pvmw.pte is dereferenced while it is NULL. The initialization of old_pte must be moved below that if block.
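Roughly like this; a sketch of the intended ordering, trimmed to the relevant lines (not the exact v5 diff):

	while (page_vma_mapped_walk(&pvmw)) {
		rmap_t rmap_flags = RMAP_NONE;
		pte_t old_pte;
		...
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
		/* PMD-mapped THP migration entry: pvmw.pte is NULL here,
		 * so this must run before any ptep_get(pvmw.pte). */
		if (!pvmw.pte) {
			remove_migration_pmd(&pvmw, new);
			continue;
		}
#endif
		/* Safe now: this is known to be a PTE-mapped entry. */
		old_pte = ptep_get(pvmw.pte);

		if (rmap_walk_arg->map_unused_to_zeropage &&
		    try_to_map_unused_to_zeropage(&pvmw, folio, old_pte, idx))
			continue;
		...
	}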
Sorry for the churn :(

Lance
syzbot ci has tested the following series
[v4] mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage
https://lore.kernel.org/all/20250930071053.36158-1-lance.yang@linux.dev

* [PATCH v4 1/1] mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage
and found the following issue: general protection fault in remove_migration_pte
Full report is available here: https://ci.syzbot.org/series/8cc7e52f-a859-4251-bd08-9787cdaf7928
***
general protection fault in remove_migration_pte
tree:      linux-next
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
base:      262858079afde6d367ce3db183c74d8a43a0e83f
arch:      amd64
compiler:  Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config:    https://ci.syzbot.org/builds/97ee4826-5d29-472d-a85d-51543b0e45de/config
C repro:   https://ci.syzbot.org/findings/f4819db2-21f2-4280-8bc4-942445398953/c_repro
syz repro: https://ci.syzbot.org/findings/f4819db2-21f2-4280-8bc4-942445398953/syz_repr...
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN PTI
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 0 UID: 0 PID: 6025 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:ptep_get include/linux/pgtable.h:340 [inline]
RIP: 0010:remove_migration_pte+0x369/0x2320 mm/migrate.c:352
Code: 00 48 8d 43 20 48 89 44 24 68 49 8d 47 40 48 89 84 24 e8 00 00 00 4c 89 64 24 48 4c 8b b4 24 50 01 00 00 4c 89 f0 48 c1 e8 03 <42> 80 3c 28 00 74 08 4c 89 f7 e8 f8 3e ff ff 49 8b 06 48 89 44 24
RSP: 0018:ffffc90002fb73e0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff88802957e300 RCX: 1ffffd40008c9006
RDX: 0000000000000000 RSI: 0000000000030dff RDI: 0000000000030c00
RBP: ffffc90002fb75d0 R08: 0000000000000003 R09: 0000000000000004
R10: dffffc0000000000 R11: fffff520005f6e34 R12: ffffea0004648008
R13: dffffc0000000000 R14: 0000000000000000 R15: ffffea0004648000
FS:  00005555624de500(0000) GS:ffff8880b83fc000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000300 CR3: 000000010d8b8000 CR4: 00000000000006f0
Call Trace:
 <TASK>
 rmap_walk_anon+0x553/0x730 mm/rmap.c:2855
 remove_migration_ptes mm/migrate.c:469 [inline]
 migrate_folio_move mm/migrate.c:1381 [inline]
 migrate_folios_move mm/migrate.c:1711 [inline]
 migrate_pages_batch+0x202e/0x35e0 mm/migrate.c:1967
 migrate_pages_sync mm/migrate.c:1997 [inline]
 migrate_pages+0x1bcc/0x2930 mm/migrate.c:2106
 migrate_to_node mm/mempolicy.c:1244 [inline]
 do_migrate_pages+0x5ee/0x800 mm/mempolicy.c:1343
 kernel_migrate_pages mm/mempolicy.c:1858 [inline]
 __do_sys_migrate_pages mm/mempolicy.c:1876 [inline]
 __se_sys_migrate_pages+0x544/0x650 mm/mempolicy.c:1872
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb18e18ec29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffdca5c9838 EFLAGS: 00000246 ORIG_RAX: 0000000000000100
RAX: ffffffffffffffda RBX: 00007fb18e3d5fa0 RCX: 00007fb18e18ec29
RDX: 0000200000000300 RSI: 0000000000000003 RDI: 0000000000000000
RBP: 00007fb18e211e41 R08: 0000000000000000 R09: 0000000000000000
R10: 0000200000000040 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fb18e3d5fa0 R14: 00007fb18e3d5fa0 R15: 0000000000000004
 </TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:ptep_get include/linux/pgtable.h:340 [inline]
RIP: 0010:remove_migration_pte+0x369/0x2320 mm/migrate.c:352
Code: 00 48 8d 43 20 48 89 44 24 68 49 8d 47 40 48 89 84 24 e8 00 00 00 4c 89 64 24 48 4c 8b b4 24 50 01 00 00 4c 89 f0 48 c1 e8 03 <42> 80 3c 28 00 74 08 4c 89 f7 e8 f8 3e ff ff 49 8b 06 48 89 44 24
RSP: 0018:ffffc90002fb73e0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff88802957e300 RCX: 1ffffd40008c9006
RDX: 0000000000000000 RSI: 0000000000030dff RDI: 0000000000030c00
RBP: ffffc90002fb75d0 R08: 0000000000000003 R09: 0000000000000004
R10: dffffc0000000000 R11: fffff520005f6e34 R12: ffffea0004648008
R13: dffffc0000000000 R14: 0000000000000000 R15: ffffea0004648000
FS:  00005555624de500(0000) GS:ffff8880b83fc000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000300 CR3: 000000010d8b8000 CR4: 00000000000006f0
----------------
Code disassembly (best guess):
   0:	00 48 8d             	add    %cl,-0x73(%rax)
   3:	43 20 48 89          	rex.XB and %cl,-0x77(%r8)
   7:	44 24 68             	rex.R and $0x68,%al
   a:	49 8d 47 40          	lea    0x40(%r15),%rax
   e:	48 89 84 24 e8 00 00 	mov    %rax,0xe8(%rsp)
  15:	00
  16:	4c 89 64 24 48       	mov    %r12,0x48(%rsp)
  1b:	4c 8b b4 24 50 01 00 	mov    0x150(%rsp),%r14
  22:	00
  23:	4c 89 f0             	mov    %r14,%rax
  26:	48 c1 e8 03          	shr    $0x3,%rax
* 2a:	42 80 3c 28 00       	cmpb   $0x0,(%rax,%r13,1) <-- trapping instruction
  2f:	74 08                	je     0x39
  31:	4c 89 f7             	mov    %r14,%rdi
  34:	e8 f8 3e ff ff       	call   0xffff3f31
  39:	49 8b 06             	mov    (%r14),%rax
  3c:	48                   	rex.W
  3d:	89                   	.byte 0x89
  3e:	44                   	rex.R
  3f:	24                   	.byte 0x24
***
If these findings have caused you to resend the series or submit a separate fix, please add the following tag to your commit message: Tested-by: syzbot@syzkaller.appspotmail.com
--- This report is generated by a bot. It may contain errors. syzbot ci engineers can be reached at syzkaller@googlegroups.com.
On 2025/9/30 19:16, syzbot ci wrote:
> syzbot ci has tested the following series
>
> [v4] mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage
> https://lore.kernel.org/all/20250930071053.36158-1-lance.yang@linux.dev
>
> and found the following issue:
> general protection fault in remove_migration_pte
>
> [...]
This is a known issue that I introduced in the v3 patch. I spotted this exact NULL pointer dereference bug[1] myself and have already sent out a v5 version[2] with the fix.
The root cause is that ptep_get() is called before the !pvmw.pte check, which handles PMD-mapped THP migration entries.
[1] https://lore.kernel.org/linux-mm/2d21c9bc-e299-4ca6-85ba-b01a1f346d9d@linux.... [2] https://lore.kernel.org/linux-mm/20250930081040.80926-1-lance.yang@linux.dev
Thanks,
Lance