The arm64 page table dump code can race with concurrent modification of the kernel page tables. When leaf entries are modified concurrently, the dump code may log stale or inconsistent information for a VA range, but this is otherwise not harmful.
When intermediate levels of the page tables are freed, the dump code will continue to use memory which has been freed and potentially reallocated for another purpose. In such cases, the dump code may dereference bogus addresses, leading to a number of potential problems.
This problem was fixed for ptdump_show() earlier via commit bf2b59f60ee1 ("arm64/mm: Hold memory hotplug lock while walking for kernel page table dump"), but the same was missed for ptdump_check_wx(), which is subject to the same race. Take the memory hotplug lock while executing ptdump_check_wx() as well.
Cc: stable@vger.kernel.org
Fixes: bbd6ec605c0f ("arm64/mm: Enable memory hot remove")
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Reported-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
This patch applies on v6.16-rc1
Dev Jain found this via code inspection.
 arch/arm64/mm/ptdump.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index 421a5de806c62..551f80d41e8d2 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -328,7 +328,7 @@ static struct ptdump_info kernel_ptdump_info __ro_after_init = {
 	.mm		= &init_mm,
 };
 
-bool ptdump_check_wx(void)
+static bool __ptdump_check_wx(void)
 {
 	struct ptdump_pg_state st = {
 		.seq = NULL,
@@ -367,6 +367,16 @@ bool ptdump_check_wx(void)
 	}
 }
 
+bool ptdump_check_wx(void)
+{
+	bool ret;
+
+	get_online_mems();
+	ret = __ptdump_check_wx();
+	put_online_mems();
+	return ret;
+}
+
 static int __init ptdump_init(void)
 {
 	u64 page_offset = _PAGE_OFFSET(vabits_actual);