In early boot, Linux creates identity virtual->physical address mappings so that it can enable the MMU before full memory management is ready. To ensure that physical memory is available to back these mapping structures, vmlinux.lds reserves space for them (and defines marker symbols) in the middle of the kernel image. However, because they are defined outside of any PROGBITS section, they aren't pre-initialized -- at least as far as ELF is concerned.
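For reference, the reservation pattern in the linker script looks roughly like this (a simplified sketch of what vmlinux.lds.S does, not the verbatim script):

  . = ALIGN(PAGE_SIZE);
  idmap_pg_dir = .;
  . += PAGE_SIZE;

  reserved_pg_dir = .;
  . += PAGE_SIZE;

Because the space is claimed only by advancing the location counter, no output section contents (and therefore no PROGBITS data) cover these pages.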
In the typical case, this isn't actually a problem: the boot image is prepared with objcopy, which zero-fills the gaps, so these structures are incidentally zero-initialized (an all-zeroes entry is considered absent, so zero-initialization is appropriate).
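As an illustration, the flat Image is produced along the lines of the following (the exact flags live in arch/arm64/boot/Makefile):

  objcopy -O binary vmlinux Image

In binary output mode, objcopy lays the sections out at their load addresses and fills the space between them with the --gap-fill byte, which defaults to zero.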
However, that is just a happy accident: the `vmlinux` ELF output authoritatively represents the state of memory at entry. If the ELF says a region of memory isn't initialized, we must treat it as uninitialized. Indeed, certain bootloaders (e.g. Broadcom CFE) ingest the ELF directly -- sidestepping the objcopy-produced image entirely -- and therefore do not initialize the gaps. This results in the early boot code crashing when it attempts to create identity mappings.
Therefore, add boot-time zero-initialization for the following:
- __pi_init_idmap_pg_dir..__pi_init_idmap_pg_end
- idmap_pg_dir
- reserved_pg_dir
- tramp_pg_dir (already done, but this patch corrects the size)
Note that swapper_pg_dir is already initialized (by copying from idmap_pg_dir) before use, so this patch does not need to address it.
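For reviewers, the head.S loops added below are morally equivalent to this C sketch (illustrative only -- the helper name is made up, and the real code must be assembly because it runs before the MMU and the normal C environment are available; u64 and PAGE_SIZE are the usual kernel definitions):

  static void zero_boot_pgtables(void)
  {
          extern char reserved_pg_dir[];
          extern char __pi_init_idmap_pg_dir[], __pi_init_idmap_pg_end[];
          u64 *p, *end;

          /* reserved_pg_dir is a single page */
          p   = (u64 *)reserved_pg_dir;
          end = (u64 *)(reserved_pg_dir + PAGE_SIZE);
          while (p < end)
                  *p++ = 0;

          /* the init idmap tables span from _pg_dir to _pg_end */
          p   = (u64 *)__pi_init_idmap_pg_dir;
          end = (u64 *)__pi_init_idmap_pg_end;
          while (p < end)
                  *p++ = 0;
  }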
Cc: stable@vger.kernel.org
Signed-off-by: Sam Edwards <CFSworks@gmail.com>
---
 arch/arm64/kernel/head.S | 12 ++++++++++++
 arch/arm64/mm/mmu.c      |  3 ++-
 2 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index ca04b338cb0d..0c3be11d0006 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -86,6 +86,18 @@ SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
 
+	adrp	x0, reserved_pg_dir
+	add	x1, x0, #PAGE_SIZE
+0:	str	xzr, [x0], 8
+	cmp	x0, x1
+	b.lo	0b
+
+	adrp	x0, __pi_init_idmap_pg_dir
+	adrp	x1, __pi_init_idmap_pg_end
+1:	str	xzr, [x0], 8
+	cmp	x0, x1
+	b.lo	1b
+
 	adrp	x1, early_init_stack
 	mov	sp, x1
 	mov	x29, xzr
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 34e5d78af076..aaf823565a65 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -761,7 +761,7 @@ static int __init map_entry_trampoline(void)
 	pgprot_val(prot) &= ~PTE_NG;
 
 	/* Map only the text into the trampoline page table */
-	memset(tramp_pg_dir, 0, PGD_SIZE);
+	memset(tramp_pg_dir, 0, PAGE_SIZE);
 	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS,
 			     entry_tramp_text_size(), prot,
 			     pgd_pgtable_alloc_init_mm, NO_BLOCK_MAPPINGS);
@@ -806,6 +806,7 @@ static void __init create_idmap(void)
 	u64 end   = __pa_symbol(__idmap_text_end);
 	u64 ptep  = __pa_symbol(idmap_ptes);
 
+	memset(idmap_pg_dir, 0, PAGE_SIZE);
 	__pi_map_range(&ptep, start, end, start, PAGE_KERNEL_ROX,
 		       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
 		       __phys_to_virt(ptep) - ptep);