The full RNG initialization relies on some timestamps, made possible with general functions like time_init() and timekeeping_init(). However, these are only available rather late in initialization. Meanwhile, other things, such as memory allocator functions, make use of the RNG much earlier.
So split RNG initialization into two phases. We can give arch randomness very early on, and then later, after timekeeping and such are available, initialize the rest.
This ensures that, for example, slabs are properly randomized if RDRAND is available. Without this, CONFIG_SLAB_FREELIST_RANDOM=y loses a degree of its security, because its random seed is potentially deterministic, since it hasn't yet incorporated RDRAND. It also makes it possible to use a better seed in kfence, which currently relies on only the cycle counter.
Another positive consequence is that on systems with RDRAND, running with CONFIG_WARN_ALL_UNSEEDED_RANDOM=y results in no warnings at all.
Cc: Kees Cook <keescook@chromium.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: stable@vger.kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c  | 47 ++++++++++++++++++++++++------------------
 include/linux/random.h |  3 ++-
 init/main.c            | 17 +++++++--------
 3 files changed, 37 insertions(+), 30 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index a90d96f4b3bb..1cb53495e8f7 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -772,18 +772,13 @@ static int random_pm_notification(struct notifier_block *nb, unsigned long actio
 static struct notifier_block pm_notifier = { .notifier_call = random_pm_notification };
 
 /*
- * The first collection of entropy occurs at system boot while interrupts
- * are still turned off. Here we push in latent entropy, RDSEED, a timestamp,
- * utsname(), and the command line. Depending on the above configuration knob,
- * RDSEED may be considered sufficient for initialization. Note that much
- * earlier setup may already have pushed entropy into the input pool by the
- * time we get here.
+ * This is called extremely early, before time keeping functionality is
+ * available, but arch randomness is. Interrupts are not yet enabled.
  */
-int __init random_init(const char *command_line)
+void __init random_init_early(const char *command_line)
 {
-	ktime_t now = ktime_get_real();
-	size_t i, longs, arch_bits;
 	unsigned long entropy[BLAKE2S_BLOCK_SIZE / sizeof(long)];
+	size_t i, longs, arch_bits;
 
 #if defined(LATENT_ENTROPY_PLUGIN)
 	static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy;
@@ -803,34 +798,46 @@ int __init random_init(const char *command_line)
 			i += longs;
 			continue;
 		}
-		entropy[0] = random_get_entropy();
-		_mix_pool_bytes(entropy, sizeof(*entropy));
 		arch_bits -= sizeof(*entropy) * 8;
 		++i;
 	}
-	_mix_pool_bytes(&now, sizeof(now));
-	_mix_pool_bytes(utsname(), sizeof(*(utsname())));
+
 	_mix_pool_bytes(command_line, strlen(command_line));
+
+	if (trust_cpu)
+		credit_init_bits(arch_bits);
+}
+
+/*
+ * This is called a little bit after the prior function, and now there is
+ * access to timestamps counters. Interrupts are not yet enabled.
+ */
+void __init random_init(void)
+{
+	unsigned long entropy = random_get_entropy();
+	ktime_t now = ktime_get_real();
+
+	_mix_pool_bytes(utsname(), sizeof(*(utsname())));
+	_mix_pool_bytes(&now, sizeof(now));
+	_mix_pool_bytes(&entropy, sizeof(entropy));
 	add_latent_entropy();
 
 	/*
-	 * If we were initialized by the bootloader before jump labels are
-	 * initialized, then we should enable the static branch here, where
+	 * If we were initialized by the cpu or bootloader before jump labels
+	 * are initialized, then we should enable the static branch here, where
 	 * it's guaranteed that jump labels have been initialized.
 	 */
 	if (!static_branch_likely(&crng_is_ready) && crng_init >= CRNG_READY)
 		crng_set_ready(NULL);
 
+	/* Reseed if already seeded by earlier phases. */
 	if (crng_ready())
 		crng_reseed();
-	else if (trust_cpu)
-		_credit_init_bits(arch_bits);
 
 	WARN_ON(register_pm_notifier(&pm_notifier));
 
-	WARN(!random_get_entropy(), "Missing cycle counter and fallback timer; RNG "
-				    "entropy collection will consequently suffer.");
-	return 0;
+	WARN(!entropy, "Missing cycle counter and fallback timer; RNG "
+		       "entropy collection will consequently suffer.");
 }
 
 /*
diff --git a/include/linux/random.h b/include/linux/random.h
index 3fec206487f6..a9e6e16f9774 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -72,7 +72,8 @@ static inline unsigned long get_random_canary(void)
 	return get_random_long() & CANARY_MASK;
 }
 
-int __init random_init(const char *command_line);
+void __init random_init_early(const char *command_line);
+void __init random_init(void);
 bool rng_is_initialized(void);
 int wait_for_random_bytes(void);
 
diff --git a/init/main.c b/init/main.c
index 1fe7942f5d4a..0866e5d0d467 100644
--- a/init/main.c
+++ b/init/main.c
@@ -976,6 +976,9 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
 		parse_args("Setting extra init args", extra_init_args,
 			   NULL, 0, -1, -1, NULL, set_init_arg);
 
+	/* Architectural and non-timekeeping rng init, before allocator init */
+	random_init_early(command_line);
+
 	/*
 	 * These use large bootmem allocations and must precede
 	 * kmem_cache_init()
@@ -1035,17 +1038,13 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
 	hrtimers_init();
 	softirq_init();
 	timekeeping_init();
-	kfence_init();
 	time_init();
 
-	/*
-	 * For best initial stack canary entropy, prepare it after:
-	 * - setup_arch() for any UEFI RNG entropy and boot cmdline access
-	 * - timekeeping_init() for ktime entropy used in random_init()
-	 * - time_init() for making random_get_entropy() work on some platforms
-	 * - random_init() to initialize the RNG from from early entropy sources
-	 */
-	random_init(command_line);
+	/* This must be after timekeeping is initialized */
+	random_init();
+
+	/* These make use of the fully initialized rng */
+	kfence_init();
 	boot_init_stack_canary();
 
 	perf_event_init();
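For orientation, the ordering that the init/main.c hunks above produce can be condensed into the following sketch. start_kernel_ordering() and the stub prototypes are only stand-ins to keep the fragment self-contained; the real code is start_kernel() in init/main.c, with many calls elided between the points shown.

/*
 * Condensed view of the post-patch ordering in start_kernel(); only the
 * calls relevant to this patch are shown, everything else is elided.
 * The stub prototypes exist solely so the sketch stands alone.
 */
void random_init_early(const char *command_line);
void mm_init(void);
void timekeeping_init(void);
void time_init(void);
void random_init(void);
void kfence_init(void);
void boot_init_stack_canary(void);

static void start_kernel_ordering(const char *command_line)
{
	/* Architectural and non-timekeeping rng init, before allocator init */
	random_init_early(command_line);

	/*
	 * Allocator init follows, so CONFIG_SLAB_FREELIST_RANDOM now draws on
	 * a pool that already contains RDSEED/RDRAND output when available.
	 */
	mm_init();

	/* ... */

	timekeeping_init();
	time_init();

	/* This must be after timekeeping is initialized */
	random_init();

	/* These make use of the fully initialized rng */
	kfence_init();
	boot_init_stack_canary();
}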
On Mon, Sep 26, 2022 at 11:31:29PM +0200, Jason A. Donenfeld wrote:
> The full RNG initialization relies on some timestamps, made possible with general functions like time_init() and timekeeping_init(). However, these are only available rather late in initialization. Meanwhile, other things, such as memory allocator functions, make use of the RNG much earlier.
>
> So split RNG initialization into two phases. We can give arch randomness very early on, and then later, after timekeeping and such are available, initialize the rest.
>
> This ensures that, for example, slabs are properly randomized if RDRAND is available. Without this, CONFIG_SLAB_FREELIST_RANDOM=y loses a degree of its security, because its random seed is potentially deterministic, since it hasn't yet incorporated RDRAND. It also makes it possible to use a better seed in kfence, which currently relies on only the cycle counter.
>
> Another positive consequence is that on systems with RDRAND, running with CONFIG_WARN_ALL_UNSEEDED_RANDOM=y results in no warnings at all.
Nice improvement. One question, though:
>  #if defined(LATENT_ENTROPY_PLUGIN)
>  	static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy;
> @@ -803,34 +798,46 @@ int __init random_init(const char *command_line)
>  			i += longs;
>  			continue;
>  		}
> -		entropy[0] = random_get_entropy();
> -		_mix_pool_bytes(entropy, sizeof(*entropy));
>  		arch_bits -= sizeof(*entropy) * 8;
>  		++i;
>  	}
Previously, random_get_entropy() was mixed into the pool ARRAY_SIZE(entropy) times.
> +/*
> + * This is called a little bit after the prior function, and now there is
> + * access to timestamps counters. Interrupts are not yet enabled.
> + */
> +void __init random_init(void)
> +{
> +	unsigned long entropy = random_get_entropy();
> +	ktime_t now = ktime_get_real();
> +
> +	_mix_pool_bytes(utsname(), sizeof(*(utsname())));
But now, it's only mixed into the pool once. Is this change on purpose?
Thanks, Dominik
On Tue, Sep 27, 2022 at 8:35 AM Dominik Brodowski <linux@dominikbrodowski.net> wrote:
> >  #if defined(LATENT_ENTROPY_PLUGIN)
> >  	static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy;
> > @@ -803,34 +798,46 @@ int __init random_init(const char *command_line)
> >  			i += longs;
> >  			continue;
> >  		}
> > -		entropy[0] = random_get_entropy();
> > -		_mix_pool_bytes(entropy, sizeof(*entropy));
> >  		arch_bits -= sizeof(*entropy) * 8;
> >  		++i;
> >  	}
> Previously, random_get_entropy() was mixed into the pool ARRAY_SIZE(entropy) times.
> > +/*
> > + * This is called a little bit after the prior function, and now there is
> > + * access to timestamps counters. Interrupts are not yet enabled.
> > + */
> > +void __init random_init(void)
> > +{
> > +	unsigned long entropy = random_get_entropy();
> > +	ktime_t now = ktime_get_real();
> > +
> > +	_mix_pool_bytes(utsname(), sizeof(*(utsname())));
> But now, it's only mixed into the pool once. Is this change on purpose?
Yea, it is. I don't think it's really doing much of use. Before we did it because it was convenient -- because we simply could. But in reality mostly what we care about is capturing when it gets to that point in the execution. For jitter, the actual jitter function (try_to_generate_entropy()) is better here.
However, before feeling too sad about it, remember that extract_entropy() is still filling a block with rdtsc when rdrand fails, the same way as this function was. So it's still in there anyway.
Jason
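To make that before/after concrete, here is a toy userspace model of just the fallback path discussed above; mix(), arch_random_longs() and cycle_counter() are made-up stand-ins for _mix_pool_bytes(), the arch_get_random_*_longs() helpers and random_get_entropy(), and the arch_bits bookkeeping is omitted for brevity.

/* Toy model, not kernel code: stand-ins for the real helpers. */
#include <stddef.h>
#include <stdio.h>

#define POOL_WORDS 8	/* stands in for BLAKE2S_BLOCK_SIZE / sizeof(long) */

static void mix(const void *buf, size_t len)
{
	(void)buf;
	printf("mixed %zu bytes\n", len);	/* would absorb into the input pool */
}

static size_t arch_random_longs(unsigned long *out, size_t max)
{
	(void)out;
	(void)max;
	return 0;	/* pretend RDSEED/RDRAND are unavailable */
}

static unsigned long cycle_counter(void)
{
	return 42;	/* stand-in for random_get_entropy() */
}

/* Old behaviour: the cycle counter is mixed once per failed iteration. */
static void old_init(void)
{
	unsigned long entropy[POOL_WORDS];
	size_t i = 0, longs;

	while (i < POOL_WORDS) {
		longs = arch_random_longs(entropy, POOL_WORDS - i);
		if (longs) {
			mix(entropy, sizeof(*entropy) * longs);
			i += longs;
			continue;
		}
		entropy[0] = cycle_counter();
		mix(entropy, sizeof(*entropy));
		++i;
	}
}

/*
 * New behaviour: the early loop only consumes arch randomness; a single
 * timestamp is mixed later, in the second phase.
 */
static void new_init_early(void)
{
	unsigned long entropy[POOL_WORDS];
	size_t i = 0, longs;

	while (i < POOL_WORDS) {
		longs = arch_random_longs(entropy, POOL_WORDS - i);
		if (longs) {
			mix(entropy, sizeof(*entropy) * longs);
			i += longs;
			continue;
		}
		++i;	/* fallback no longer touches the cycle counter */
	}
}

static void new_init_later(void)
{
	unsigned long stamp = cycle_counter();

	mix(&stamp, sizeof(stamp));
}

int main(void)
{
	puts("old:");
	old_init();
	puts("new:");
	new_init_early();
	new_init_later();
	return 0;
}

With no arch randomness available, the old loop reports one mix per pool word, while the new split reports a single mix in the later phase, which is the difference being discussed.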
On Tue, Sep 27, 2022 at 10:28:11AM +0200, Jason A. Donenfeld wrote:
> On Tue, Sep 27, 2022 at 8:35 AM Dominik Brodowski <linux@dominikbrodowski.net> wrote:
> > >  #if defined(LATENT_ENTROPY_PLUGIN)
> > >  	static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy;
> > > @@ -803,34 +798,46 @@ int __init random_init(const char *command_line)
> > >  			i += longs;
> > >  			continue;
> > >  		}
> > > -		entropy[0] = random_get_entropy();
> > > -		_mix_pool_bytes(entropy, sizeof(*entropy));
> > >  		arch_bits -= sizeof(*entropy) * 8;
> > >  		++i;
> > >  	}
> > Previously, random_get_entropy() was mixed into the pool ARRAY_SIZE(entropy) times.
> > > +/*
> > > + * This is called a little bit after the prior function, and now there is
> > > + * access to timestamps counters. Interrupts are not yet enabled.
> > > + */
> > > +void __init random_init(void)
> > > +{
> > > +	unsigned long entropy = random_get_entropy();
> > > +	ktime_t now = ktime_get_real();
> > > +
> > > +	_mix_pool_bytes(utsname(), sizeof(*(utsname())));
> > But now, it's only mixed into the pool once. Is this change on purpose?
> Yea, it is. I don't think it's really doing much of use. Before we did it because it was convenient -- because we simply could. But in reality mostly what we care about is capturing when it gets to that point in the execution. For jitter, the actual jitter function (try_to_generate_entropy()) is better here.
>
> However, before feeling too sad about it, remember that extract_entropy() is still filling a block with rdtsc when rdrand fails, the same way as this function was. So it's still in there anyway.
With that explanation on the record (I think it's important to make such subtle changes explicit),
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Thanks, Dominik
On Tue, Sep 27, 2022 at 10:30 AM Dominik Brodowski <linux@dominikbrodowski.net> wrote:
> On Tue, Sep 27, 2022 at 10:28:11AM +0200, Jason A. Donenfeld wrote:
> > On Tue, Sep 27, 2022 at 8:35 AM Dominik Brodowski <linux@dominikbrodowski.net> wrote:
> > > >  #if defined(LATENT_ENTROPY_PLUGIN)
> > > >  	static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy;
> > > > @@ -803,34 +798,46 @@ int __init random_init(const char *command_line)
> > > >  			i += longs;
> > > >  			continue;
> > > >  		}
> > > > -		entropy[0] = random_get_entropy();
> > > > -		_mix_pool_bytes(entropy, sizeof(*entropy));
> > > >  		arch_bits -= sizeof(*entropy) * 8;
> > > >  		++i;
> > > >  	}
> > > Previously, random_get_entropy() was mixed into the pool ARRAY_SIZE(entropy) times.
> > > > +/*
> > > > + * This is called a little bit after the prior function, and now there is
> > > > + * access to timestamps counters. Interrupts are not yet enabled.
> > > > + */
> > > > +void __init random_init(void)
> > > > +{
> > > > +	unsigned long entropy = random_get_entropy();
> > > > +	ktime_t now = ktime_get_real();
> > > > +
> > > > +	_mix_pool_bytes(utsname(), sizeof(*(utsname())));
> > > But now, it's only mixed into the pool once. Is this change on purpose?
> > Yea, it is. I don't think it's really doing much of use. Before we did it because it was convenient -- because we simply could. But in reality mostly what we care about is capturing when it gets to that point in the execution. For jitter, the actual jitter function (try_to_generate_entropy()) is better here.
> >
> > However, before feeling too sad about it, remember that extract_entropy() is still filling a block with rdtsc when rdrand fails, the same way as this function was. So it's still in there anyway.
> With that explanation on the record (I think it's important to make such subtle changes explicit),
> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
I'll augment the commit message to note this too. Thanks for the review.
Jason