6.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pasha Tatashin <pasha.tatashin@soleen.com>
commit fa759cd75bce5489eed34596daa53f721849a86f upstream.
KHO allocates metadata for its preserved memory map using the slab allocator via kzalloc(). This metadata is temporary and is used by the next kernel during early boot to find preserved memory.
A problem arises when KFENCE is enabled. kzalloc() calls can be randomly intercepted by kfence_alloc(), which services the allocation from a dedicated KFENCE memory pool. This pool is allocated early in boot via memblock.
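For illustration only, here is a simplified sketch of that interception path. The function slab_alloc_sketch() and the helper normal_slab_path() are hypothetical names, not the real mm/slub.c internals; only kfence_alloc() is a real kernel interface:

	/*
	 * Hypothetical sketch: a slab allocation may be redirected to the
	 * KFENCE pool before the normal slab path runs.
	 */
	static void *slab_alloc_sketch(struct kmem_cache *s, size_t size, gfp_t flags)
	{
		/*
		 * KFENCE randomly samples allocations; when it hits, the object
		 * comes from a dedicated pool that was carved out of memblock
		 * early in boot -- inside the KHO scratch area when the kernel
		 * was booted via KHO.
		 */
		void *obj = kfence_alloc(s, size, flags);

		if (obj)
			return obj;

		return normal_slab_path(s, size, flags);	/* hypothetical helper */
	}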
When booting via KHO, the memblock allocator is restricted to a "scratch area", forcing the KFENCE pool to be allocated within it. This creates a conflict, as the scratch area is expected to be ephemeral and overwriteable by a subsequent kexec. If KHO metadata is placed in this KFENCE pool, it leads to memory corruption when the next kernel is loaded.
To fix this, modify KHO to allocate its metadata directly from the buddy allocator instead of slab.
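For clarity, a minimal sketch of the resulting allocation pattern, relying on the DEFINE_FREE(free_page, ...) helper and get_zeroed_page() conversion in the diff below. kho_alloc_meta_sketch() is a hypothetical example, not a function added by this patch; the real call sites are xa_load_or_alloc() and new_chunk():

	#include <linux/cleanup.h>
	#include <linux/err.h>
	#include <linux/gfp.h>

	/* Hypothetical example of the pattern used by the patch. */
	static void *kho_alloc_meta_sketch(void)
	{
		/*
		 * One zeroed page straight from the buddy allocator: it can
		 * never be serviced by KFENCE, so it cannot land in the
		 * scratch area.
		 */
		void *page __free(free_page) = (void *)get_zeroed_page(GFP_KERNEL);

		if (!page)
			return ERR_PTR(-ENOMEM);

		/* ... fill in preservation metadata ... */

		return no_free_ptr(page);	/* transfer ownership to the caller */
	}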
Link: https://lkml.kernel.org/r/20251021000852.2924827-4-pasha.tatashin@soleen.com
Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Pratyush Yadav <pratyush@kernel.org>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Reviewed-by: David Matlack <dmatlack@google.com>
Cc: Alexander Graf <graf@amazon.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Samiullah Khawaja <skhawaja@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 include/linux/gfp.h     | 3 +++
 kernel/kexec_handover.c | 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -7,6 +7,7 @@
 #include <linux/mmzone.h>
 #include <linux/topology.h>
 #include <linux/alloc_tag.h>
+#include <linux/cleanup.h>
 #include <linux/sched.h>
 
 struct vm_area_struct;
@@ -463,4 +464,6 @@ static inline struct folio *folio_alloc_
 /* This should be paired with folio_put() rather than free_contig_range(). */
 #define folio_alloc_gigantic(...)	alloc_hooks(folio_alloc_gigantic_noprof(__VA_ARGS__))
 
+DEFINE_FREE(free_page, void *, free_page((unsigned long)_T))
+
 #endif /* __LINUX_GFP_H */
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -102,7 +102,7 @@ static void *xa_load_or_alloc(struct xar
 	if (res)
 		return res;
 
-	void *elm __free(kfree) = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	void *elm __free(free_page) = (void *)get_zeroed_page(GFP_KERNEL);
 
 	if (!elm)
 		return ERR_PTR(-ENOMEM);
@@ -266,9 +266,9 @@ static_assert(sizeof(struct khoser_mem_c
 static struct khoser_mem_chunk *new_chunk(struct khoser_mem_chunk *cur_chunk,
 					  unsigned long order)
 {
-	struct khoser_mem_chunk *chunk __free(kfree) = NULL;
+	struct khoser_mem_chunk *chunk __free(free_page) = NULL;
 
-	chunk = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	chunk = (void *)get_zeroed_page(GFP_KERNEL);
 	if (!chunk)
 		return ERR_PTR(-ENOMEM);