On 10/27/25 1:00 PM, Harry Yoo wrote:
> When the SLAB_STORE_USER debug flag is used, any metadata placed after the original kmalloc request size (orig_size) is not properly aligned on 64-bit architectures because its type is unsigned int. When both KASAN and SLAB_STORE_USER are enabled, kasan_alloc_meta is misaligned.
kasan_alloc_meta is properly aligned. It consists of four 32-bit words, so its required alignment is 32-bit regardless of architecture bitness.
kasan_free_meta, however, requires 'unsigned long' alignment and can be misaligned if placed at a 32-bit boundary on a 64-bit architecture.
> Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS are assumed to require 64-bit accesses to be 64-bit aligned. See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
> Because not all architectures support unaligned memory accesses, ensure that all metadata (track, orig_size, kasan_{alloc,free}_meta) in a slab object are word-aligned. struct track, kasan_{alloc,free}_meta are aligned by adding __aligned(__alignof__(unsigned long)).
The __aligned() attribute ensures nothing here. It tells the compiler what alignment to expect and affects compiler-controlled placement of the struct in memory (e.g. stack/.bss/.data), but it can't enforce placement in dynamic memory.
Also, for struct kasan_free_meta and struct track, alignof(unsigned long) is already dictated by the C standard, since they contain unsigned long/pointer members, so adding this __aligned() has zero effect. And there is no reason to increase the alignment requirement of struct kasan_alloc_meta.
> For orig_size, use ALIGN(sizeof(unsigned int), sizeof(unsigned long)) to make clear that its size remains unsigned int but it must be aligned to a word boundary. On 64-bit architectures, this reserves 8 bytes for orig_size, which is acceptable since kmalloc's original request size tracking is intended for debugging rather than production use.
I would suggest using 'unsigned long' for orig_size instead. It changes nothing on 32-bit, and it shouldn't increase memory usage on 64-bit either, since we currently waste that space anyway to align the next object to ARCH_KMALLOC_MINALIGN.