Hibernation assumes the memory layout after resume is the same as that before sleep, but CONFIG_RANDOM_KMALLOC_CACHES breaks this assumption. At least on LoongArch and ARM64 we have observed failures when resuming from hibernation (on LoongArch non-boot CPUs fail to come up, on ARM64 some devices become unusable).
software_resume_initcall(), the function which resumes the target kernel, is itself an initcall. So, move the random_kmalloc_seed initialisation to after all initcalls have run.
Cc: stable@vger.kernel.org
Fixes: 3c6152940584290668 ("Randomized slab caches for kmalloc()")
Reported-by: Yuli Wang <wangyuli@uniontech.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
---
 init/main.c      | 3 +++
 mm/slab_common.c | 3 ---
 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/init/main.c b/init/main.c
index 2a1757826397..1362957bdbe4 100644
--- a/init/main.c
+++ b/init/main.c
@@ -1458,6 +1458,9 @@ static int __ref kernel_init(void *unused)
 	/* need to finish all async __init code before freeing the memory */
 	async_synchronize_full();
 
+#ifdef CONFIG_RANDOM_KMALLOC_CACHES
+	random_kmalloc_seed = get_random_u64();
+#endif
 	system_state = SYSTEM_FREEING_INITMEM;
 	kprobe_free_init_mem();
 	ftrace_free_init_mem();
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 4030907b6b7d..23e324aee218 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -971,9 +971,6 @@ void __init create_kmalloc_caches(void)
 		for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++)
 			new_kmalloc_cache(i, type);
 	}
-#ifdef CONFIG_RANDOM_KMALLOC_CACHES
-	random_kmalloc_seed = get_random_u64();
-#endif
 
 	/* Kmalloc array is now usable */
 	slab_state = UP;