The quilt patch titled
     Subject: kasan: unpoison pcpu chunks with base address tag
has been removed from the -mm tree.  Its filename was
     kasan-unpoison-pcpu-chunks-with-base-address-tag.patch
This patch was dropped because an updated version will be issued
------------------------------------------------------
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Subject: kasan: unpoison pcpu chunks with base address tag
Date: Tue, 04 Nov 2025 14:49:08 +0000
Patch series "kasan: vmalloc: Fix incorrect tag assignment with multiple vm_structs".
A KASAN tag mismatch, possibly resulting in a kernel panic, can be observed on systems with tag-based KASAN enabled and with multiple NUMA nodes. Initially it was only noticed on x86 [1], but later a similar issue was also reported on arm64 [2].
Specifically, the problem is related to how vm_structs interact with pcpu_chunks: both when they are allocated and assigned, and when pcpu_chunk addresses are derived.
When vm_structs are allocated, they are tagged if vmalloc support is enabled alongside the KASAN mode. Later, when the first pcpu chunk is allocated, its 'base_addr' field is set to the address of the first allocated vm_struct, so it inherits that vm_struct's tag.
When pcpu_chunk addresses are later derived (by pcpu_chunk_addr(), for example in pcpu_alloc_noprof()), the base_addr field is used and offsets are added to it. If the initial conditions are satisfied, some of the offsets will point into memory allocated with a different vm_struct. So while the lower address bits are derived accurately, the tag bits in the top of the pointer won't match the shadow memory contents.
The solution (proposed in v2 of the x86 KASAN series [3]) is to assign the same tag to all the vm_structs allocated for the per-cpu allocator (in pcpu_get_vm_areas()).
Originally these patches were part of the x86 KASAN series [4].
This patch (of 2):
A KASAN tag mismatch, possibly causing a kernel panic, can be observed on systems with tag-based KASAN enabled and with multiple NUMA nodes. It was reported on arm64 and reproduced on x86. It can be explained in the following points:
1. There can be more than one virtual memory chunk.
2. Chunk's base address has a tag.
3. The base address points at the first chunk and thus inherits the tag of the first chunk.
4. The subsequent chunks will be accessed with the tag from the first chunk.
5. Thus, the subsequent chunks need to have their tag set to match that of the first chunk.
Refactor code by moving it into a helper in preparation for the actual fix.
Link: https://lkml.kernel.org/r/821677dd824d003cc5b7a77891db4723e23518ea.176226702...
Link: https://lore.kernel.org/all/e7e04692866d02e6d3b32bb43b998e5d17092ba4.1738686... [1]
Link: https://lore.kernel.org/all/aMUrW1Znp1GEj7St@MiWiFi-R3L-srv/ [2]
Link: https://lore.kernel.org/all/CAPAsAGxDRv_uFeMYu9TwhBVWHCCtkSxoWY4xmFB_vowMbi8... [3]
Link: https://lore.kernel.org/all/cover.1761763681.git.m.wieczorretman@pm.me/ [4]
Fixes: 1d96320f8d53 ("kasan, vmalloc: add vmalloc tagging for SW_TAGS")
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Tested-by: Baoquan He <bhe@redhat.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Dmitriy Vyukov <dvyukov@google.com>
Cc: Marco Elver <elver@google.com>
Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: <stable@vger.kernel.org> [6.1+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 include/linux/kasan.h |   10 ++++++++++
 mm/kasan/common.c     |   11 +++++++++++
 mm/vmalloc.c          |    4 +---
 3 files changed, 22 insertions(+), 3 deletions(-)
--- a/include/linux/kasan.h~kasan-unpoison-pcpu-chunks-with-base-address-tag
+++ a/include/linux/kasan.h
@@ -614,6 +614,13 @@ static __always_inline void kasan_poison
 	__kasan_poison_vmalloc(start, size);
 }
 
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms);
+static __always_inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+	if (kasan_enabled())
+		__kasan_unpoison_vmap_areas(vms, nr_vms);
+}
+
 #else /* CONFIG_KASAN_VMALLOC */
 
 static inline void kasan_populate_early_vm_area_shadow(void *start,
@@ -638,6 +645,9 @@ static inline void *kasan_unpoison_vmall
 static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
 { }
 
+static inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{ }
+
 #endif /* CONFIG_KASAN_VMALLOC */
 
 #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
--- a/mm/kasan/common.c~kasan-unpoison-pcpu-chunks-with-base-address-tag
+++ a/mm/kasan/common.c
@@ -28,6 +28,7 @@
 #include <linux/string.h>
 #include <linux/types.h>
 #include <linux/bug.h>
+#include <linux/vmalloc.h>
 
 #include "kasan.h"
 #include "../slab.h"
@@ -582,3 +583,13 @@ bool __kasan_check_byte(const void *addr
 	}
 	return true;
 }
+
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+	int area;
+
+	for (area = 0 ; area < nr_vms ; area++) {
+		kasan_poison(vms[area]->addr, vms[area]->size,
+			     arch_kasan_get_tag(vms[area]->addr), false);
+	}
+}
--- a/mm/vmalloc.c~kasan-unpoison-pcpu-chunks-with-base-address-tag
+++ a/mm/vmalloc.c
@@ -4870,9 +4870,7 @@ retry:
 	 * With hardware tag-based KASAN, marking is skipped for
 	 * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
 	 */
-	for (area = 0; area < nr_vms; area++)
-		vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
-				vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);
+	kasan_unpoison_vmap_areas(vms, nr_vms);
 
 	kfree(vas);
 	return vms;
_
Patches currently in -mm which might be from maciej.wieczor-retman@intel.com are
kasan-unpoison-vms-addresses-with-a-common-tag.patch