On Wed, Jan 07, 2026 at 12:43:17PM +0100, Vlastimil Babka wrote:
> On 1/5/26 09:02, Harry Yoo wrote:
> > When both KASAN and SLAB_STORE_USER are enabled, accesses to struct
> > kasan_alloc_meta fields can be misaligned on 64-bit architectures.
> > This occurs because orig_size is currently defined as unsigned int,
> > which only guarantees 4-byte alignment. When struct kasan_alloc_meta
> > is placed after orig_size, it may end up at a 4-byte boundary rather
> > than the required 8-byte boundary on 64-bit systems.
>
> Oops.
>
> > Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
> > are assumed to require 64-bit accesses to be 64-bit aligned. See
> > HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
> > "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
> >
> > Change orig_size from unsigned int to unsigned long to ensure proper
> > alignment for any subsequent metadata. This should not waste
> > additional memory because kmalloc objects are already aligned to at
> > least ARCH_KMALLOC_MINALIGN.
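As a side note for readers, here's a minimal userspace sketch of the
offset arithmetic (illustrative only; the real accounting lives in
mm/slub.c, and the starting offset below is made up):

#include <stddef.h>
#include <stdio.h>

/*
 * Toy model, not kernel code: slub computes its debug metadata
 * offsets by summing sizes, so a 4-byte orig_size leaves whatever
 * follows it on a 4-byte boundary.
 */
int main(void)
{
	/* hypothetical: object plus earlier metadata, 8-byte aligned */
	size_t off = 64;

	/* before the fix: orig_size is unsigned int (4 bytes)... */
	size_t kasan_meta_off = off + sizeof(unsigned int);
	/* ...so kasan_alloc_meta's 8-byte fields can end up misaligned */
	printf("unsigned int:  kasan meta at %zu (%s)\n", kasan_meta_off,
	       kasan_meta_off % 8 ? "misaligned" : "aligned");

	/* after the fix: unsigned long keeps the offset 8-byte aligned */
	kasan_meta_off = off + sizeof(unsigned long);
	printf("unsigned long: kasan meta at %zu (%s)\n", kasan_meta_off,
	       kasan_meta_off % 8 ? "misaligned" : "aligned");
	return 0;
}

On LP64 this prints 68 (misaligned) for unsigned int and 72 (aligned)
for unsigned long.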
> I'll add:
>
> Closes: https://lore.kernel.org/all/aPrLF0OUK651M4dk@hyeyoo/
>
> since that's useful context and discussion.
Looks good to me.
> > Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> > Cc: stable@vger.kernel.org
> > Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
> > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> As the problem was introduced in 6.1, it doesn't seem urgent to push
> it as a 6.19 rc fix, so I'm keeping it as part of the series
Yeah, no need to hurry.
> (where it's a necessary prerequisite per the Closes: link above)
Technically it's no longer a necessary prerequisite, since the series
no longer unpoisons slabobj_ext, but later patches still depend on it
because of the change in object layout.
> and stable backporting later indeed seems sufficient. Thanks.
Backporting later sounds reasonable.
Thanks!