From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
The problem addressed here is specific to NUMA systems running the tag-based KASAN modes, both the software and the hardware one. It can be explained in the following points (see also the sketch after the list):
1. There can be more than one virtual memory chunk.
2. Each chunk's base address has a tag embedded in it.
3. The address returned to the caller points at the first chunk and
   thus carries the first chunk's tag.
4. The subsequent chunks will therefore be accessed with the tag of
   the first chunk.
5. Thus, the subsequent chunks need to have their tags set to match
   that of the first chunk.
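To make the failure concrete, here is a hypothetical sketch (the derived
address and the access are illustrative, not taken from this patch) of
how a caller trips over the mismatched tags:

    /*
     * Illustration only: under a tag-based KASAN mode, each vm area's
     * address carries its own random tag in its top bits.
     */
    void *base = vms[0]->addr;          /* pointer tag: chunk 0's */
    void *next = base + vms[0]->size;   /* derived address still carries chunk 0's tag */

    /*
     * The shadow memory backing vms[1] holds chunk 1's tag, so this
     * access compares chunk 0's pointer tag against chunk 1's memory
     * tag and produces a false-positive tag-mismatch report.
     */
    *(volatile char *)next;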
Refactor the unpoisoning loop in pcpu_get_vm_areas() into a helper in preparation for the actual fix; the helper's shape is sketched below.
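The helper keeps the convention kasan.h already uses for the rest of the
API (this is the same code as in the hunk below, repeated outside the
diff for readability): an __always_inline wrapper tests kasan_enabled()
and dispatches to the out-of-line implementation, so the call is skipped
when KASAN is not active:

    void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms);

    static __always_inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
    {
            if (kasan_enabled())
                    __kasan_unpoison_vmap_areas(vms, nr_vms);
    }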
Fixes: 1d96320f8d53 ("kasan, vmalloc: add vmalloc tagging for SW_TAGS")
Cc: stable@vger.kernel.org # 6.1+
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Tested-by: Baoquan He <bhe@redhat.com>
---
Changelog v6:
- Add Baoquan's Tested-by tag.
- Move patch to the beginning of the series as it is a fix.
- Move the refactored code to tags.c because both software and hardware modes compile it.
- Add Fixes tag.
Changelog v4:
- Redo the patch message numbered list.
- Do the refactoring in this patch and move the additions to a new follow-up patch.
Changelog v3:
- Remove the last version of this patch, which just reset the tag on base_addr, and add this patch, which unpoisons all areas with the same tag instead.
 include/linux/kasan.h | 10 ++++++++++
 mm/kasan/tags.c       | 11 +++++++++++
 mm/vmalloc.c          |  4 +---
 3 files changed, 22 insertions(+), 3 deletions(-)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index d12e1a5f5a9a..b00849ea8ffd 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -614,6 +614,13 @@ static __always_inline void kasan_poison_vmalloc(const void *start,
 		__kasan_poison_vmalloc(start, size);
 }
 
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms);
+static __always_inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+	if (kasan_enabled())
+		__kasan_unpoison_vmap_areas(vms, nr_vms);
+}
+
 #else /* CONFIG_KASAN_VMALLOC */
 static inline void kasan_populate_early_vm_area_shadow(void *start,
@@ -638,6 +645,9 @@ static inline void *kasan_unpoison_vmalloc(const void *start,
 static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
 { }
 
+static inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{ }
+
 #endif /* CONFIG_KASAN_VMALLOC */
 
 #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index b9f31293622b..ecc17c7c675a 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -18,6 +18,7 @@
 #include <linux/static_key.h>
 #include <linux/string.h>
 #include <linux/types.h>
+#include <linux/vmalloc.h>
 
#include "kasan.h" #include "../slab.h" @@ -146,3 +147,13 @@ void __kasan_save_free_info(struct kmem_cache *cache, void *object) { save_stack_info(cache, object, 0, true); } + +void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms) +{ + int area; + + for (area = 0 ; area < nr_vms ; area++) { + kasan_poison(vms[area]->addr, vms[area]->size, + arch_kasan_get_tag(vms[area]->addr), false); + } +} diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 798b2ed21e46..934c8bfbcebf 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -4870,9 +4870,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets, * With hardware tag-based KASAN, marking is skipped for * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc(). */ - for (area = 0; area < nr_vms; area++) - vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr, - vms[area]->size, KASAN_VMALLOC_PROT_NORMAL); + kasan_unpoison_vmap_areas(vms, nr_vms);
 	kfree(vas);
 	return vms;
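For reference, one possible shape of the actual fix this refactor
prepares for (hypothetical here, not part of this patch) is to derive the
tag once from the first area and apply it to all of them, per point 5 of
the list above:

    /* Hypothetical follow-up change, sketched for illustration only. */
    void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
    {
            u8 tag = arch_kasan_get_tag(vms[0]->addr);
            int area;

            /* Give every area the first area's tag so that accesses
             * derived from the returned base address match the shadow. */
            for (area = 0; area < nr_vms; area++)
                    kasan_poison(vms[area]->addr, vms[area]->size, tag, false);
    }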