Hello,
this is a backport of commit 0d519cadf751 ("arm64: kexec_file: use more system keyrings to verify kernel image signature") to the stable 5.15 tree, including the preparatory patches.
Some patches needed minor adjustment for context.
Thanks
Michal
Coiby Xu (3):
  kexec: clean up arch_kexec_kernel_verify_sig
  kexec, KEYS: make the code in bzImage64_verify_sig generic
  arm64: kexec_file: use more system keyrings to verify kernel image
    signature

Naveen N. Rao (2):
  kexec_file: drop weak attribute from functions
  kexec: drop weak attribute from functions

Sven Schnelle (1):
  s390/kexec_file: move kernel image size check

 arch/arm64/include/asm/kexec.h        | 20 ++++++-
 arch/arm64/kernel/kexec_image.c       | 11 +---
 arch/powerpc/include/asm/kexec.h      | 14 +++++
 arch/s390/boot/head.S                 |  2 -
 arch/s390/include/asm/kexec.h         | 14 +++++
 arch/s390/include/asm/setup.h         |  1 -
 arch/s390/kernel/machine_kexec_file.c | 17 +----
 arch/x86/include/asm/kexec.h          | 12 ++++
 arch/x86/kernel/kexec-bzimage64.c     | 20 +------
 include/linux/kexec.h                 | 82 ++++++++++++++++++++++----
 kernel/kexec_core.c                   | 27 ---------
 kernel/kexec_file.c                   | 83 ++++++++++-----------------
 12 files changed, 163 insertions(+), 140 deletions(-)
From: Sven Schnelle <svens@linux.ibm.com>
commit 277c8389386e2ccb8417afe4e36f67fc5dcd735d upstream.
In preparation for adding support for command lines with variable sizes on s390, the check that the new kernel image is at least HEAD_END bytes long is no longer correct. Move the check to kexec_file_add_components() so we can get the size of the parm area and check the size there.
The '.org HEAD_END' directive can now also be removed from head.S. This was used in the past to reserve space for the early sccb buffer, but with commit f1d3c5323772 ("s390/boot: move sclp early buffer from fixed address in asm to C") this is no longer required.
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
 arch/s390/boot/head.S                 |  2 --
 arch/s390/include/asm/setup.h         |  1 -
 arch/s390/kernel/machine_kexec_file.c | 17 ++---------------
 3 files changed, 2 insertions(+), 18 deletions(-)
diff --git a/arch/s390/boot/head.S b/arch/s390/boot/head.S
index 40f4cff538b8..f3a8dba7dd5d 100644
--- a/arch/s390/boot/head.S
+++ b/arch/s390/boot/head.S
@@ -383,5 +383,3 @@ SYM_DATA_START(parmarea)
 	.byte 0
 	.org PARMAREA+__PARMAREA_SIZE
 SYM_DATA_END(parmarea)
-
-	.org HEAD_END
diff --git a/arch/s390/include/asm/setup.h b/arch/s390/include/asm/setup.h
index b6606ffd85d8..121e1a8c41d7 100644
--- a/arch/s390/include/asm/setup.h
+++ b/arch/s390/include/asm/setup.h
@@ -11,7 +11,6 @@
 #include <linux/build_bug.h>
 
 #define PARMAREA		0x10400
-#define HEAD_END		0x11000
 
 /*
  * Machine features detected in early.c
diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
index 3459362c54ac..29a9178ff0d4 100644
--- a/arch/s390/kernel/machine_kexec_file.c
+++ b/arch/s390/kernel/machine_kexec_file.c
@@ -243,7 +243,8 @@ void *kexec_file_add_components(struct kimage *image,
 	if (ret)
 		goto out;
 
-	if (image->cmdline_buf_len >= ARCH_COMMAND_LINE_SIZE) {
+	if (image->kernel_buf_len < PARMAREA + sizeof(struct parmarea) ||
+	    image->cmdline_buf_len >= ARCH_COMMAND_LINE_SIZE) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -333,20 +334,6 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
 	return 0;
 }
 
-int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
-				  unsigned long buf_len)
-{
-	/* A kernel must be at least large enough to contain head.S. During
-	 * load memory in head.S will be accessed, e.g. to register the next
-	 * command line. If the next kernel were smaller the current kernel
-	 * will panic at load.
-	 */
-	if (buf_len < HEAD_END)
-		return -ENOEXEC;
-
-	return kexec_image_probe_default(image, buf, buf_len);
-}
-
 int arch_kimage_file_post_load_cleanup(struct kimage *image)
 {
 	vfree(image->arch.ipl_buf);
From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
commit 65d9a9a60fd71be964effb2e94747a6acb6e7015 upstream.
As requested (http://lkml.kernel.org/r/87ee0q7b92.fsf@email.froward.int.ebiederm.org), this series converts weak functions in kexec to use the #ifdef approach.
Quoting the 3e35142ef99fe ("kexec_file: drop weak attribute from arch_kexec_apply_relocations[_add]") changelog:
: Since commit d1bcae833b32f1 ("ELF: Don't generate unused section symbols") : [1], binutils (v2.36+) started dropping section symbols that it thought : were unused. This isn't an issue in general, but with kexec_file.c, gcc : is placing kexec_arch_apply_relocations[_add] into a separate : .text.unlikely section and the section symbol ".text.unlikely" is being : dropped. Due to this, recordmcount is unable to find a non-weak symbol in : .text.unlikely to generate a relocation record against.
This patch (of 2):
Drop __weak attribute from functions in kexec_file.c:
- arch_kexec_kernel_image_probe()
- arch_kimage_file_post_load_cleanup()
- arch_kexec_kernel_image_load()
- arch_kexec_locate_mem_hole()
- arch_kexec_kernel_verify_sig()
arch_kexec_kernel_image_load() calls into kexec_image_load_default(), so drop the static attribute for the latter.
arch_kexec_kernel_verify_sig() is not overridden by any architecture, so drop the __weak attribute.
Link: https://lkml.kernel.org/r/cover.1656659357.git.naveen.n.rao@linux.vnet.ibm.c...
Link: https://lkml.kernel.org/r/2cd7ca1fe4d6bb6ca38e3283c717878388ed6788.165665935...
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Suggested-by: Eric Biederman <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
 arch/arm64/include/asm/kexec.h   |  4 ++-
 arch/powerpc/include/asm/kexec.h |  9 +++++++
 arch/s390/include/asm/kexec.h    |  3 +++
 arch/x86/include/asm/kexec.h     |  6 +++++
 include/linux/kexec.h            | 44 +++++++++++++++++++++++++++-----
 kernel/kexec_file.c              | 35 ++-----------------------
 6 files changed, 61 insertions(+), 40 deletions(-)
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 00dbcc71aeb2..91d81824f869 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -103,7 +103,9 @@ extern const struct kexec_file_ops kexec_image_ops;
 
 struct kimage;
 
-extern int arch_kimage_file_post_load_cleanup(struct kimage *image);
+int arch_kimage_file_post_load_cleanup(struct kimage *image);
+#define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
+
 extern int load_other_segments(struct kimage *image,
 			unsigned long kernel_load_addr, unsigned long kernel_size,
 			char *initrd, unsigned long initrd_len,
diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
index 88d0d7cf3a79..6152fa220054 100644
--- a/arch/powerpc/include/asm/kexec.h
+++ b/arch/powerpc/include/asm/kexec.h
@@ -119,6 +119,15 @@ int setup_purgatory(struct kimage *image, const void *slave_code,
 #ifdef CONFIG_PPC64
 struct kexec_buf;
 
+int arch_kexec_kernel_image_probe(struct kimage *image, void *buf, unsigned long buf_len);
+#define arch_kexec_kernel_image_probe arch_kexec_kernel_image_probe
+
+int arch_kimage_file_post_load_cleanup(struct kimage *image);
+#define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
+
+int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf);
+#define arch_kexec_locate_mem_hole arch_kexec_locate_mem_hole
+
 int load_crashdump_segments_ppc64(struct kimage *image,
 				  struct kexec_buf *kbuf);
 int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
diff --git a/arch/s390/include/asm/kexec.h b/arch/s390/include/asm/kexec.h
index 63098df81c9f..d13bd221cd37 100644
--- a/arch/s390/include/asm/kexec.h
+++ b/arch/s390/include/asm/kexec.h
@@ -92,5 +92,8 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
 				     const Elf_Shdr *relsec,
 				     const Elf_Shdr *symtab);
 #define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
+
+int arch_kimage_file_post_load_cleanup(struct kimage *image);
+#define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
 #endif
 #endif /*_S390_KEXEC_H */
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index c7c924e15011..5b6e2ae54906 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -193,6 +193,12 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
 				     const Elf_Shdr *relsec,
 				     const Elf_Shdr *symtab);
 #define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
+
+void *arch_kexec_kernel_image_load(struct kimage *image);
+#define arch_kexec_kernel_image_load arch_kexec_kernel_image_load
+
+int arch_kimage_file_post_load_cleanup(struct kimage *image);
+#define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
 #endif
 #endif
 
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index cf042d41c87b..f1e5327a7bf8 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -182,21 +182,53 @@ int kexec_purgatory_get_set_symbol(struct kimage *image, const char *name,
 				   void *buf, unsigned int size,
 				   bool get_value);
 void *kexec_purgatory_get_symbol_addr(struct kimage *image, const char *name);
+void *kexec_image_load_default(struct kimage *image);
+
+#ifndef arch_kexec_kernel_image_probe
+static inline int
+arch_kexec_kernel_image_probe(struct kimage *image, void *buf, unsigned long buf_len)
+{
+	return kexec_image_probe_default(image, buf, buf_len);
+}
+#endif
+
+#ifndef arch_kimage_file_post_load_cleanup
+static inline int arch_kimage_file_post_load_cleanup(struct kimage *image)
+{
+	return kexec_image_post_load_cleanup_default(image);
+}
+#endif
+
+#ifndef arch_kexec_kernel_image_load
+static inline void *arch_kexec_kernel_image_load(struct kimage *image)
+{
+	return kexec_image_load_default(image);
+}
+#endif
 
-/* Architectures may override the below functions */
-int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
-				  unsigned long buf_len);
-void *arch_kexec_kernel_image_load(struct kimage *image);
-int arch_kimage_file_post_load_cleanup(struct kimage *image);
 #ifdef CONFIG_KEXEC_SIG
 int arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
 				 unsigned long buf_len);
 #endif
-int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf);
 
 extern int kexec_add_buffer(struct kexec_buf *kbuf);
 int kexec_locate_mem_hole(struct kexec_buf *kbuf);
 
+#ifndef arch_kexec_locate_mem_hole
+/**
+ * arch_kexec_locate_mem_hole - Find free memory to place the segments.
+ * @kbuf:                       Parameters for the memory search.
+ *
+ * On success, kbuf->mem will have the start address of the memory region found.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+static inline int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
+{
+	return kexec_locate_mem_hole(kbuf);
+}
+#endif
+
 /* Alignment required for elf header segment */
 #define ELF_CORE_HEADER_ALIGN   4096
 
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index f7a4fd4d243f..620021679405 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -62,14 +62,7 @@ int kexec_image_probe_default(struct kimage *image, void *buf,
 	return ret;
 }
 
-/* Architectures can provide this probe function */
-int __weak arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
-					 unsigned long buf_len)
-{
-	return kexec_image_probe_default(image, buf, buf_len);
-}
-
-static void *kexec_image_load_default(struct kimage *image)
+void *kexec_image_load_default(struct kimage *image)
 {
 	if (!image->fops || !image->fops->load)
 		return ERR_PTR(-ENOEXEC);
@@ -80,11 +73,6 @@ static void *kexec_image_load_default(struct kimage *image)
 					 image->cmdline_buf_len);
 }
 
-void * __weak arch_kexec_kernel_image_load(struct kimage *image)
-{
-	return kexec_image_load_default(image);
-}
-
 int kexec_image_post_load_cleanup_default(struct kimage *image)
 {
 	if (!image->fops || !image->fops->cleanup)
@@ -93,11 +81,6 @@ int kexec_image_post_load_cleanup_default(struct kimage *image)
 	return image->fops->cleanup(image->image_loader_data);
 }
 
-int __weak arch_kimage_file_post_load_cleanup(struct kimage *image)
-{
-	return kexec_image_post_load_cleanup_default(image);
-}
-
 #ifdef CONFIG_KEXEC_SIG
 static int kexec_image_verify_sig_default(struct kimage *image, void *buf,
 					  unsigned long buf_len)
 {
@@ -110,8 +93,7 @@ static int kexec_image_verify_sig_default(struct kimage *image, void *buf,
 	return image->fops->verify_sig(buf, buf_len);
 }
 
-int __weak arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
-					unsigned long buf_len)
+int arch_kexec_kernel_verify_sig(struct kimage *image, void *buf, unsigned long buf_len)
 {
 	return kexec_image_verify_sig_default(image, buf, buf_len);
 }
@@ -616,19 +598,6 @@ int kexec_locate_mem_hole(struct kexec_buf *kbuf)
 	return ret == 1 ? 0 : -EADDRNOTAVAIL;
 }
 
-/**
- * arch_kexec_locate_mem_hole - Find free memory to place the segments.
- * @kbuf:                       Parameters for the memory search.
- *
- * On success, kbuf->mem will have the start address of the memory region found.
- *
- * Return: 0 on success, negative errno on error.
- */
-int __weak arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
-{
-	return kexec_locate_mem_hole(kbuf);
-}
-
 /**
  * kexec_add_buffer - place a buffer in a kexec segment
  * @kbuf:	Buffer contents and memory parameters.
From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
commit 0738eceb6201691534df07e0928d0a6168a35787 upstream.
Drop __weak attribute from functions in kexec_core.c:
- machine_kexec_post_load()
- arch_kexec_protect_crashkres()
- arch_kexec_unprotect_crashkres()
- crash_free_reserved_phys_range()
Link: https://lkml.kernel.org/r/c0f6219e03cb399d166d518ab505095218a902dd.165665935...
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Suggested-by: Eric Biederman <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
 arch/arm64/include/asm/kexec.h   | 18 ++++++++++++++++--
 arch/powerpc/include/asm/kexec.h |  5 +++++
 arch/s390/include/asm/kexec.h    | 11 +++++++++++
 arch/x86/include/asm/kexec.h     |  6 ++++++
 include/linux/kexec.h            | 32 ++++++++++++++++++++++++++++----
 kernel/kexec_core.c              | 27 ---------------------------
 6 files changed, 66 insertions(+), 33 deletions(-)
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 91d81824f869..ae3695a15610 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -84,12 +84,28 @@ static inline void crash_setup_regs(struct pt_regs *newregs,
 extern bool crash_is_nosave(unsigned long pfn);
 extern void crash_prepare_suspend(void);
 extern void crash_post_resume(void);
+
+void crash_free_reserved_phys_range(unsigned long begin, unsigned long end);
+#define crash_free_reserved_phys_range crash_free_reserved_phys_range
 #else
 static inline bool crash_is_nosave(unsigned long pfn) {return false; }
 static inline void crash_prepare_suspend(void) {}
 static inline void crash_post_resume(void) {}
 #endif
 
+struct kimage;
+
+#if defined(CONFIG_KEXEC_CORE)
+int machine_kexec_post_load(struct kimage *image);
+#define machine_kexec_post_load machine_kexec_post_load
+
+void arch_kexec_protect_crashkres(void);
+#define arch_kexec_protect_crashkres arch_kexec_protect_crashkres
+
+void arch_kexec_unprotect_crashkres(void);
+#define arch_kexec_unprotect_crashkres arch_kexec_unprotect_crashkres
+#endif
+
 #define ARCH_HAS_KIMAGE_ARCH
 
 struct kimage_arch {
@@ -101,8 +117,6 @@ struct kimage_arch {
 #ifdef CONFIG_KEXEC_FILE
 extern const struct kexec_file_ops kexec_image_ops;
 
-struct kimage;
-
 int arch_kimage_file_post_load_cleanup(struct kimage *image);
 #define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
 
diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
index 6152fa220054..d8394e77e987 100644
--- a/arch/powerpc/include/asm/kexec.h
+++ b/arch/powerpc/include/asm/kexec.h
@@ -97,6 +97,11 @@ static inline bool kdump_in_progress(void)
 void relocate_new_kernel(unsigned long indirection_page, unsigned long reboot_code_buffer,
 			 unsigned long start_address) __noreturn;
 
+#if defined(CONFIG_CRASH_DUMP) && defined(CONFIG_PPC_RTAS)
+void crash_free_reserved_phys_range(unsigned long begin, unsigned long end);
+#define crash_free_reserved_phys_range crash_free_reserved_phys_range
+#endif
+
 #ifdef CONFIG_KEXEC_FILE
 extern const struct kexec_file_ops kexec_elf64_ops;
 
diff --git a/arch/s390/include/asm/kexec.h b/arch/s390/include/asm/kexec.h
index d13bd221cd37..4f713092e68c 100644
--- a/arch/s390/include/asm/kexec.h
+++ b/arch/s390/include/asm/kexec.h
@@ -85,6 +85,17 @@ struct kimage_arch {
 extern const struct kexec_file_ops s390_kexec_image_ops;
 extern const struct kexec_file_ops s390_kexec_elf_ops;
 
+#ifdef CONFIG_CRASH_DUMP
+void crash_free_reserved_phys_range(unsigned long begin, unsigned long end);
+#define crash_free_reserved_phys_range crash_free_reserved_phys_range
+
+void arch_kexec_protect_crashkres(void);
+#define arch_kexec_protect_crashkres arch_kexec_protect_crashkres
+
+void arch_kexec_unprotect_crashkres(void);
+#define arch_kexec_unprotect_crashkres arch_kexec_unprotect_crashkres
+#endif
+
 #ifdef CONFIG_KEXEC_FILE
 struct purgatory_info;
 int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 5b6e2ae54906..4fd92330f23d 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -186,6 +186,12 @@ extern int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages,
 extern void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages);
 #define arch_kexec_pre_free_pages arch_kexec_pre_free_pages
 
+void arch_kexec_protect_crashkres(void);
+#define arch_kexec_protect_crashkres arch_kexec_protect_crashkres
+
+void arch_kexec_unprotect_crashkres(void);
+#define arch_kexec_unprotect_crashkres arch_kexec_unprotect_crashkres
+
 #ifdef CONFIG_KEXEC_FILE
 struct purgatory_info;
 int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index f1e5327a7bf8..1638c8d7d216 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -384,7 +384,10 @@ extern void machine_kexec_cleanup(struct kimage *image);
 extern int kernel_kexec(void);
 extern struct page *kimage_alloc_control_pages(struct kimage *image,
 						unsigned int order);
-int machine_kexec_post_load(struct kimage *image);
+
+#ifndef machine_kexec_post_load
+static inline int machine_kexec_post_load(struct kimage *image) { return 0; }
+#endif
 
 extern void __crash_kexec(struct pt_regs *);
 extern void crash_kexec(struct pt_regs *);
@@ -423,10 +426,21 @@ extern bool kexec_in_progress;
 
 int crash_shrink_memory(unsigned long new_size);
 size_t crash_get_memory_size(void);
-void crash_free_reserved_phys_range(unsigned long begin, unsigned long end);
 
-void arch_kexec_protect_crashkres(void);
-void arch_kexec_unprotect_crashkres(void);
+#ifndef arch_kexec_protect_crashkres
+/*
+ * Protection mechanism for crashkernel reserved memory after
+ * the kdump kernel is loaded.
+ *
+ * Provide an empty default implementation here -- architecture
+ * code may override this
+ */
+static inline void arch_kexec_protect_crashkres(void) { }
+#endif
+
+#ifndef arch_kexec_unprotect_crashkres
+static inline void arch_kexec_unprotect_crashkres(void) { }
+#endif
 
 #ifndef page_to_boot_pfn
 static inline unsigned long page_to_boot_pfn(struct page *page)
@@ -456,6 +470,16 @@ static inline phys_addr_t boot_phys_to_phys(unsigned long boot_phys)
 }
 #endif
 
+#ifndef crash_free_reserved_phys_range
+static inline void crash_free_reserved_phys_range(unsigned long begin, unsigned long end)
+{
+	unsigned long addr;
+
+	for (addr = begin; addr < end; addr += PAGE_SIZE)
+		free_reserved_page(boot_pfn_to_page(addr >> PAGE_SHIFT));
+}
+#endif
+
 static inline unsigned long virt_to_boot_phys(void *addr)
 {
 	return phys_to_boot_phys(__pa((unsigned long)addr));
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 5a5d192a89ac..0951df148c1e 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -591,11 +591,6 @@ static void kimage_free_extra_pages(struct kimage *image)
 
 }
 
-int __weak machine_kexec_post_load(struct kimage *image)
-{
-	return 0;
-}
-
 void kimage_terminate(struct kimage *image)
 {
 	if (*image->entry != 0)
@@ -1000,15 +995,6 @@ size_t crash_get_memory_size(void)
 	return size;
 }
 
-void __weak crash_free_reserved_phys_range(unsigned long begin,
-					   unsigned long end)
-{
-	unsigned long addr;
-
-	for (addr = begin; addr < end; addr += PAGE_SIZE)
-		free_reserved_page(boot_pfn_to_page(addr >> PAGE_SHIFT));
-}
-
 int crash_shrink_memory(unsigned long new_size)
 {
 	int ret = 0;
@@ -1205,16 +1191,3 @@ int kernel_kexec(void)
 	mutex_unlock(&kexec_mutex);
 	return error;
 }
-
-/*
- * Protection mechanism for crashkernel reserved memory after
- * the kdump kernel is loaded.
- *
- * Provide an empty default implementation here -- architecture
- * code may override this
- */
-void __weak arch_kexec_protect_crashkres(void)
-{}
-
-void __weak arch_kexec_unprotect_crashkres(void)
-{}
From: Coiby Xu <coxu@redhat.com>
commit 689a71493bd2f31c024f8c0395f85a1fd4b2138e upstream.
Before commit 105e10e2cf1c ("kexec_file: drop weak attribute from functions"), there was already no arch-specific implementation of arch_kexec_kernel_verify_sig. With weak attribute dropped by that commit, arch_kexec_kernel_verify_sig is completely useless. So clean it up.
Note later patches are dependent on this patch so it should be backported to the stable tree as well.
Cc: stable@vger.kernel.org
Suggested-by: Eric W. Biederman <ebiederm@xmission.com>
Reviewed-by: Michal Suchanek <msuchanek@suse.de>
Acked-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
[zohar@linux.ibm.com: reworded patch description "Note"]
Link: https://lore.kernel.org/linux-integrity/20220714134027.394370-1-coxu@redhat....
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
 include/linux/kexec.h |  5 -----
 kernel/kexec_file.c   | 33 +++++++++++++--------------------
 2 files changed, 13 insertions(+), 25 deletions(-)
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 1638c8d7d216..46f113961dbc 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -206,11 +206,6 @@ static inline void *arch_kexec_kernel_image_load(struct kimage *image)
 }
 #endif
 
-#ifdef CONFIG_KEXEC_SIG
-int arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
-				 unsigned long buf_len);
-#endif
-
 extern int kexec_add_buffer(struct kexec_buf *kbuf);
 int kexec_locate_mem_hole(struct kexec_buf *kbuf);
 
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 620021679405..8d73d6d4f0a6 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -81,24 +81,6 @@ int kexec_image_post_load_cleanup_default(struct kimage *image)
 	return image->fops->cleanup(image->image_loader_data);
 }
 
-#ifdef CONFIG_KEXEC_SIG
-static int kexec_image_verify_sig_default(struct kimage *image, void *buf,
-					  unsigned long buf_len)
-{
-	if (!image->fops || !image->fops->verify_sig) {
-		pr_debug("kernel loader does not support signature verification.\n");
-		return -EKEYREJECTED;
-	}
-
-	return image->fops->verify_sig(buf, buf_len);
-}
-
-int arch_kexec_kernel_verify_sig(struct kimage *image, void *buf, unsigned long buf_len)
-{
-	return kexec_image_verify_sig_default(image, buf, buf_len);
-}
-#endif
-
 /*
  * Free up memory used by kernel, initrd, and command line. This is temporary
  * memory allocation which is not needed any more after these buffers have
@@ -141,13 +123,24 @@ void kimage_file_post_load_cleanup(struct kimage *image)
 }
 
 #ifdef CONFIG_KEXEC_SIG
+static int kexec_image_verify_sig(struct kimage *image, void *buf,
+				  unsigned long buf_len)
+{
+	if (!image->fops || !image->fops->verify_sig) {
+		pr_debug("kernel loader does not support signature verification.\n");
+		return -EKEYREJECTED;
+	}
+
+	return image->fops->verify_sig(buf, buf_len);
+}
+
 static int kimage_validate_signature(struct kimage *image)
 {
 	int ret;
 
-	ret = arch_kexec_kernel_verify_sig(image, image->kernel_buf,
-					   image->kernel_buf_len);
+	ret = kexec_image_verify_sig(image, image->kernel_buf,
+				     image->kernel_buf_len);
 	if (ret) {
 
 		if (sig_enforce) {
From: Coiby Xu <coxu@redhat.com>
commit c903dae8941deb55043ee46ded29e84e97cd84bb upstream.
commit 278311e417be ("kexec, KEYS: Make use of platform keyring for signature verify") adds platform keyring support on x86 kexec but not arm64.
The code in bzImage64_verify_sig uses the keys on the .builtin_trusted_keys, .machine, if configured and enabled, .secondary_trusted_keys, also if configured, and .platform keyrings to verify the signed kernel image as PE file.
Cc: kexec@lists.infradead.org
Cc: keyrings@vger.kernel.org
Cc: linux-security-module@vger.kernel.org
Reviewed-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Coiby Xu <coxu@redhat.com>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
 arch/x86/kernel/kexec-bzimage64.c | 20 +-------------------
 include/linux/kexec.h             |  7 +++++++
 kernel/kexec_file.c               | 17 +++++++++++++++++
 3 files changed, 25 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
index 170d0fd68b1f..f299b48f9c9f 100644
--- a/arch/x86/kernel/kexec-bzimage64.c
+++ b/arch/x86/kernel/kexec-bzimage64.c
@@ -17,7 +17,6 @@
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <linux/efi.h>
-#include <linux/verification.h>
 
 #include <asm/bootparam.h>
 #include <asm/setup.h>
@@ -528,28 +527,11 @@ static int bzImage64_cleanup(void *loader_data)
 	return 0;
 }
 
-#ifdef CONFIG_KEXEC_BZIMAGE_VERIFY_SIG
-static int bzImage64_verify_sig(const char *kernel, unsigned long kernel_len)
-{
-	int ret;
-
-	ret = verify_pefile_signature(kernel, kernel_len,
-				      VERIFY_USE_SECONDARY_KEYRING,
-				      VERIFYING_KEXEC_PE_SIGNATURE);
-	if (ret == -ENOKEY && IS_ENABLED(CONFIG_INTEGRITY_PLATFORM_KEYRING)) {
-		ret = verify_pefile_signature(kernel, kernel_len,
-					      VERIFY_USE_PLATFORM_KEYRING,
-					      VERIFYING_KEXEC_PE_SIGNATURE);
-	}
-	return ret;
-}
-#endif
-
 const struct kexec_file_ops kexec_bzImage64_ops = {
 	.probe = bzImage64_probe,
 	.load = bzImage64_load,
 	.cleanup = bzImage64_cleanup,
 #ifdef CONFIG_KEXEC_BZIMAGE_VERIFY_SIG
-	.verify_sig = bzImage64_verify_sig,
+	.verify_sig = kexec_kernel_verify_pe_sig,
 #endif
 };
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 46f113961dbc..1dd7b679fcf9 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -19,6 +19,7 @@
 #include <asm/io.h>
 
 #include <uapi/linux/kexec.h>
+#include <linux/verification.h>
 
 #ifdef CONFIG_KEXEC_CORE
 #include <linux/list.h>
@@ -206,6 +207,12 @@ static inline void *arch_kexec_kernel_image_load(struct kimage *image)
 }
 #endif
 
+#ifdef CONFIG_KEXEC_SIG
+#ifdef CONFIG_SIGNED_PE_FILE_VERIFICATION
+int kexec_kernel_verify_pe_sig(const char *kernel, unsigned long kernel_len);
+#endif
+#endif
+
 extern int kexec_add_buffer(struct kexec_buf *kbuf);
 int kexec_locate_mem_hole(struct kexec_buf *kbuf);
 
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 8d73d6d4f0a6..289bb20e6075 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -123,6 +123,23 @@ void kimage_file_post_load_cleanup(struct kimage *image)
 }
 
 #ifdef CONFIG_KEXEC_SIG
+#ifdef CONFIG_SIGNED_PE_FILE_VERIFICATION
+int kexec_kernel_verify_pe_sig(const char *kernel, unsigned long kernel_len)
+{
+	int ret;
+
+	ret = verify_pefile_signature(kernel, kernel_len,
+				      VERIFY_USE_SECONDARY_KEYRING,
+				      VERIFYING_KEXEC_PE_SIGNATURE);
+	if (ret == -ENOKEY && IS_ENABLED(CONFIG_INTEGRITY_PLATFORM_KEYRING)) {
+		ret = verify_pefile_signature(kernel, kernel_len,
+					      VERIFY_USE_PLATFORM_KEYRING,
+					      VERIFYING_KEXEC_PE_SIGNATURE);
+	}
+	return ret;
+}
+#endif
+
 static int kexec_image_verify_sig(struct kimage *image, void *buf,
 				  unsigned long buf_len)
 {
From: Coiby Xu <coxu@redhat.com>
commit 0d519cadf75184a24313568e7f489a7fc9b1be3b upstream.
Currently, when loading a kernel image via the kexec_file_load() system call, arm64 can only use the .builtin_trusted_keys keyring to verify a signature whereas x86 can use three more keyrings i.e. .secondary_trusted_keys, .machine and .platform keyrings. For example, one resulting problem is kexec'ing a kernel image would be rejected with the error "Lockdown: kexec: kexec of unsigned images is restricted; see man kernel_lockdown.7".
This patch set enables arm64 to make use of the same keyrings as x86 to verify the signature kexec'ed kernel image.
Fixes: 732b7b93d849 ("arm64: kexec_file: add kernel signature verification support")
Cc: stable@vger.kernel.org # 105e10e2cf1c: kexec_file: drop weak attribute from functions
Cc: stable@vger.kernel.org # 34d5960af253: kexec: clean up arch_kexec_kernel_verify_sig
Cc: stable@vger.kernel.org # 83b7bb2d49ae: kexec, KEYS: make the code in bzImage64_verify_sig generic
Acked-by: Baoquan He <bhe@redhat.com>
Cc: kexec@lists.infradead.org
Cc: keyrings@vger.kernel.org
Cc: linux-security-module@vger.kernel.org
Co-developed-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Coiby Xu <coxu@redhat.com>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
 arch/arm64/kernel/kexec_image.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)
diff --git a/arch/arm64/kernel/kexec_image.c b/arch/arm64/kernel/kexec_image.c
index 9ec34690e255..5ed6a585f21f 100644
--- a/arch/arm64/kernel/kexec_image.c
+++ b/arch/arm64/kernel/kexec_image.c
@@ -14,7 +14,6 @@
 #include <linux/kexec.h>
 #include <linux/pe.h>
 #include <linux/string.h>
-#include <linux/verification.h>
 #include <asm/byteorder.h>
 #include <asm/cpufeature.h>
 #include <asm/image.h>
@@ -130,18 +129,10 @@ static void *image_load(struct kimage *image,
 	return NULL;
 }
 
-#ifdef CONFIG_KEXEC_IMAGE_VERIFY_SIG
-static int image_verify_sig(const char *kernel, unsigned long kernel_len)
-{
-	return verify_pefile_signature(kernel, kernel_len, NULL,
-				       VERIFYING_KEXEC_PE_SIGNATURE);
-}
-#endif
-
 const struct kexec_file_ops kexec_image_ops = {
 	.probe = image_probe,
 	.load = image_load,
 #ifdef CONFIG_KEXEC_IMAGE_VERIFY_SIG
-	.verify_sig = image_verify_sig,
+	.verify_sig = kexec_kernel_verify_pe_sig,
 #endif
 };
On Fri, 2022-09-23 at 19:10 +0200, Michal Suchanek wrote:
> Hello,
>
> this is a backport of commit 0d519cadf751 ("arm64: kexec_file: use more system keyrings to verify kernel image signature") to the stable 5.15 tree, including the preparatory patches.
>
> Some patches needed minor adjustment for context.
In general when backporting this patch set, there should be a dependency on backporting these commits as well. In this instance for linux-5.15.y, they've already been backported.
543ce63b664e ("lockdown: Fix kexec lockdown bypass with ima policy")
af16df54b89d ("ima: force signature verification when CONFIG_KEXEC_SIG is configured")
Hello,
On Fri, Sep 23, 2022 at 03:03:36PM -0400, Mimi Zohar wrote:
> On Fri, 2022-09-23 at 19:10 +0200, Michal Suchanek wrote:
> > Hello,
> >
> > this is a backport of commit 0d519cadf751 ("arm64: kexec_file: use more system keyrings to verify kernel image signature") to the stable 5.15 tree, including the preparatory patches.
> >
> > Some patches needed minor adjustment for context.
>
> In general when backporting this patch set, there should be a dependency on backporting these commits as well. In this instance for linux-5.15.y, they've already been backported.
>
> 543ce63b664e ("lockdown: Fix kexec lockdown bypass with ima policy")
> af16df54b89d ("ima: force signature verification when CONFIG_KEXEC_SIG is configured")
Thanks for bringing these up. It might be in general useful to backport these fixes as well.
However, this patchset does one very specific thing: it lifts the x86 kexec_file signature verification to arch-independent and uses it on arm64 to unify all features (and any existing warts) between EFI architectures.
So unless I am missing something the fixes you pointed out are completely independent of this.
Thanks
Michal
On Fri, Sep 23, 2022 at 09:16:50PM +0200, Michal Suchánek wrote:
Hello,
On Fri, Sep 23, 2022 at 03:03:36PM -0400, Mimi Zohar wrote:
On Fri, 2022-09-23 at 19:10 +0200, Michal Suchanek wrote:
Hello,
this is a backport of commit 0d519cadf751 ("arm64: kexec_file: use more system keyrings to verify kernel image signature") to the stable 5.15 tree, including the preparatory patches.
Some patches needed minor adjustment for context.
In general when backporting this patch set, there should be a dependency on backporting these commits as well. In this instance for linux-5.15.y, they've already been backported.
543ce63b664e ("lockdown: Fix kexec lockdown bypass with ima policy")
AFAICT this one is already present everywhere relevant, likely because it's considered a CVE fix.
af16df54b89d ("ima: force signature verification when CONFIG_KEXEC_SIG is configured")
This is missing in 5.4, and 5.4 is missing this prerequisite: fd7af71be542 ("kexec: do not verify the signature without the lockdown or mandatory signature")
Thanks for bringing these up. It might be in general useful to backport these fixes as well.
However, this patchset does one very specific thing: it makes the x86 kexec_file signature verification arch-independent and uses it on arm64, unifying all features (and any existing warts) between EFI architectures.
So unless I am missing something, the fixes you pointed out are completely independent of this.
Thanks
Michal
On Fri, Sep 23, 2022 at 07:10:28PM +0200, Michal Suchanek wrote:
Hello,
this is a backport of commit 0d519cadf751 ("arm64: kexec_file: use more system keyrings to verify kernel image signature") to the stable 5.15 tree, including the preparatory patches.
This feels to me like a new feature for arm64, one that has never worked before, and you are just bringing it to feature parity with x86, right?
Or is this a regression fix somewhere? Why is this needed in 5.15.y and why can't people who need this new feature just use a newer kernel version (5.19?)
thanks,
greg k-h
On Sat, Sep 24, 2022 at 11:19:19AM +0200, Greg Kroah-Hartman wrote:
On Fri, Sep 23, 2022 at 07:10:28PM +0200, Michal Suchanek wrote:
Hello,
this is a backport of commit 0d519cadf751 ("arm64: kexec_file: use more system keyrings to verify kernel image signature") to the stable 5.15 tree, including the preparatory patches.
This feels to me like a new feature for arm64, one that has never worked before, and you are just bringing it to feature parity with x86, right?
Or is this a regression fix somewhere? Why is this needed in 5.15.y and why can't people who need this new feature just use a newer kernel version (5.19?)
It's a half-broken implementation of kexec kernel verification. At the time it was implemented for arm64, the platform and secondary keyrings already existed and x86 was using them, but the initial arm64 implementation ignored them.
Thanks
Michal
On Sat, Sep 24, 2022 at 11:45:21AM +0200, Michal Suchánek wrote:
On Sat, Sep 24, 2022 at 11:19:19AM +0200, Greg Kroah-Hartman wrote:
On Fri, Sep 23, 2022 at 07:10:28PM +0200, Michal Suchanek wrote:
Hello,
this is a backport of commit 0d519cadf751 ("arm64: kexec_file: use more system keyrings to verify kernel image signature") to the stable 5.15 tree, including the preparatory patches.
This feels to me like a new feature for arm64, one that has never worked before, and you are just bringing it to feature parity with x86, right?
Or is this a regression fix somewhere? Why is this needed in 5.15.y and why can't people who need this new feature just use a newer kernel version (5.19?)
It's a half-broken implementation of kexec kernel verification. At the time it was implemented for arm64, the platform and secondary keyrings already existed and x86 was using them, but the initial arm64 implementation ignored them.
Ok, so it's something that never worked. Adding support to get it to work doesn't really fall into the stable kernel rules, right?
Again, what's wrong with 5.19 for anyone who wants this? Who does want this?
thanks,
greg k-h
On Sat, Sep 24, 2022 at 12:13:34PM +0200, Greg Kroah-Hartman wrote:
On Sat, Sep 24, 2022 at 11:45:21AM +0200, Michal Suchánek wrote:
On Sat, Sep 24, 2022 at 11:19:19AM +0200, Greg Kroah-Hartman wrote:
On Fri, Sep 23, 2022 at 07:10:28PM +0200, Michal Suchanek wrote:
Hello,
this is a backport of commit 0d519cadf751 ("arm64: kexec_file: use more system keyrings to verify kernel image signature") to the stable 5.15 tree, including the preparatory patches.
This feels to me like a new feature for arm64, one that has never worked before, and you are just bringing it to feature parity with x86, right?
Or is this a regression fix somewhere? Why is this needed in 5.15.y and why can't people who need this new feature just use a newer kernel version (5.19?)
It's a half-broken implementation of kexec kernel verification. At the time it was implemented for arm64, the platform and secondary keyrings already existed and x86 was using them, but the initial arm64 implementation ignored them.
Ok, so it's something that never worked. Adding support to get it to work doesn't really fall into the stable kernel rules, right?
Not sure. It was defective, not correctly using the facilities available at the time, which translates to kernels that can be kexec'd on x86 failing to kexec on arm64 without any explanation (signed with the same key, built for the appropriate arch).
Again, what's wrong with 5.19 for anyone who wants this? Who does want this?
Not sure, really.
The final patch was repeatedly backported to stable and failed to build because the prerequisites were missing.
So this is a backport that includes the prerequisites for it to build.
If nobody wanted this, why is it repeatedly backported, generating the failure messages?
Thanks
Michal
On Sat, Sep 24, 2022 at 01:55:23PM +0200, Michal Suchánek wrote:
On Sat, Sep 24, 2022 at 12:13:34PM +0200, Greg Kroah-Hartman wrote:
On Sat, Sep 24, 2022 at 11:45:21AM +0200, Michal Suchánek wrote:
On Sat, Sep 24, 2022 at 11:19:19AM +0200, Greg Kroah-Hartman wrote:
On Fri, Sep 23, 2022 at 07:10:28PM +0200, Michal Suchanek wrote:
Hello,
this is a backport of commit 0d519cadf751 ("arm64: kexec_file: use more system keyrings to verify kernel image signature") to the stable 5.15 tree, including the preparatory patches.
This feels to me like a new feature for arm64, one that has never worked before, and you are just bringing it to feature parity with x86, right?
Or is this a regression fix somewhere? Why is this needed in 5.15.y and why can't people who need this new feature just use a newer kernel version (5.19?)
It's a half-broken implementation of kexec kernel verification. At the time it was implemented for arm64, the platform and secondary keyrings already existed and x86 was using them, but the initial arm64 implementation ignored them.
Ok, so it's something that never worked. Adding support to get it to work doesn't really fall into the stable kernel rules, right?
Not sure. It was defective, not correctly using the facilities available at the time, which translates to kernels that can be kexec'd on x86 failing to kexec on arm64 without any explanation (signed with the same key, built for the appropriate arch).
Feature parity across architectures is not a "regression", but rather a "this feature is not implemented for this architecture yet" type of thing.
Again, what's wrong with 5.19 for anyone who wants this? Who does want this?
Not sure, really.
The final patch was repeatedly backported to stable and failed to build because the prerequisites were missing.
That's because it was tagged, but now that you show the full set of requirements, it's pretty obvious to me that this is not relevant for going this far back.
thanks,
greg k-h
On Mon, Sep 26, 2022 at 08:47:32AM +0200, Greg Kroah-Hartman wrote:
On Sat, Sep 24, 2022 at 01:55:23PM +0200, Michal Suchánek wrote:
On Sat, Sep 24, 2022 at 12:13:34PM +0200, Greg Kroah-Hartman wrote:
On Sat, Sep 24, 2022 at 11:45:21AM +0200, Michal Suchánek wrote:
On Sat, Sep 24, 2022 at 11:19:19AM +0200, Greg Kroah-Hartman wrote:
On Fri, Sep 23, 2022 at 07:10:28PM +0200, Michal Suchanek wrote:
Hello,
this is a backport of commit 0d519cadf751 ("arm64: kexec_file: use more system keyrings to verify kernel image signature") to the stable 5.15 tree, including the preparatory patches.
This feels to me like a new feature for arm64, one that has never worked before, and you are just bringing it to feature parity with x86, right?
Or is this a regression fix somewhere? Why is this needed in 5.15.y and why can't people who need this new feature just use a newer kernel version (5.19?)
It's a half-broken implementation of kexec kernel verification. At the time it was implemented for arm64, the platform and secondary keyrings already existed and x86 was using them, but the initial arm64 implementation ignored them.
Ok, so it's something that never worked. Adding support to get it to work doesn't really fall into the stable kernel rules, right?
Not sure. It was defective, not correctly using the facilities available at the time, which translates to kernels that can be kexec'd on x86 failing to kexec on arm64 without any explanation (signed with the same key, built for the appropriate arch).
Feature parity across architectures is not a "regression", but rather a "this feature is not implemented for this architecture yet" type of thing.
That depends on the point of view: before kexec verification you could boot any kernel; now you can boot some kernels signed with a valid key, but not others. The initial implementation is buggy, probably because it is based on an old version of the x86 code.
Again, what's wrong with 5.19 for anyone who wants this? Who does want this?
Not sure, really.
The final patch was repeatedly backported to stable and failed to build because the prerequisites were missing.
That's because it was tagged, but now that you show the full set of requirements, it's pretty obvious to me that this is not relevant for going this far back.
That also works.
Thanks
Michal
On Mon, Sep 26, 2022 at 09:40:25AM +0200, Michal Suchánek wrote:
On Mon, Sep 26, 2022 at 08:47:32AM +0200, Greg Kroah-Hartman wrote:
On Sat, Sep 24, 2022 at 01:55:23PM +0200, Michal Suchánek wrote:
On Sat, Sep 24, 2022 at 12:13:34PM +0200, Greg Kroah-Hartman wrote:
On Sat, Sep 24, 2022 at 11:45:21AM +0200, Michal Suchánek wrote:
On Sat, Sep 24, 2022 at 11:19:19AM +0200, Greg Kroah-Hartman wrote:
On Fri, Sep 23, 2022 at 07:10:28PM +0200, Michal Suchanek wrote:
> Hello,
>
> this is a backport of commit 0d519cadf751
> ("arm64: kexec_file: use more system keyrings to verify kernel image signature")
> to the stable 5.15 tree, including the preparatory patches.
This feels to me like a new feature for arm64, one that has never worked before, and you are just bringing it to feature parity with x86, right?
Or is this a regression fix somewhere? Why is this needed in 5.15.y and why can't people who need this new feature just use a newer kernel version (5.19?)
It's a half-broken implementation of kexec kernel verification. At the time it was implemented for arm64, the platform and secondary keyrings already existed and x86 was using them, but the initial arm64 implementation ignored them.
Ok, so it's something that never worked. Adding support to get it to work doesn't really fall into the stable kernel rules, right?
Not sure. It was defective, not correctly using the facilities available at the time, which translates to kernels that can be kexec'd on x86 failing to kexec on arm64 without any explanation (signed with the same key, built for the appropriate arch).
Feature parity across architectures is not a "regression", but rather a "this feature is not implemented for this architecture yet" type of thing.
That depends on the point of view: before kexec verification you could boot any kernel; now you can boot some kernels signed with a valid key, but not others. The initial implementation is buggy, probably because it is based on an old version of the x86 code.
Buggy? Support for the platform keyring slipped in just before I submitted the latest version of the patch series, which was eventually merged. (I should have noticed it, though.)
Looking at the changes in commit 278311e417be ("kexec, KEYS: Make use of platform keyring for signature verify"), it seems obvious that it is a new feature, because it introduced a new Kconfig option, CONFIG_INTEGRITY_PLATFORM_KEYRING, which allows enabling/disabling platform keyring support.
-Takahiro Akashi
Again, what's wrong with 5.19 for anyone who wants this? Who does want this?
Not sure, really.
The final patch was repeatedly backported to stable and failed to build because the prerequisites were missing.
That's because it was tagged, but now that you show the full set of requirements, it's pretty obvious to me that this is not relevant for going this far back.
That also works.
Thanks
Michal
On Tue, Sep 27, 2022 at 11:39:52AM +0900, AKASHI Takahiro wrote:
On Mon, Sep 26, 2022 at 09:40:25AM +0200, Michal Suchánek wrote:
On Mon, Sep 26, 2022 at 08:47:32AM +0200, Greg Kroah-Hartman wrote:
On Sat, Sep 24, 2022 at 01:55:23PM +0200, Michal Suchánek wrote:
On Sat, Sep 24, 2022 at 12:13:34PM +0200, Greg Kroah-Hartman wrote:
On Sat, Sep 24, 2022 at 11:45:21AM +0200, Michal Suchánek wrote:
On Sat, Sep 24, 2022 at 11:19:19AM +0200, Greg Kroah-Hartman wrote:
> On Fri, Sep 23, 2022 at 07:10:28PM +0200, Michal Suchanek wrote:
> > Hello,
> >
> > this is a backport of commit 0d519cadf751
> > ("arm64: kexec_file: use more system keyrings to verify kernel image signature")
> > to the stable 5.15 tree, including the preparatory patches.
>
> This feels to me like a new feature for arm64, one that has never worked
> before, and you are just bringing it to feature parity with x86, right?
>
> Or is this a regression fix somewhere? Why is this needed in 5.15.y and
> why can't people who need this new feature just use a newer kernel
> version (5.19?)
It's a half-broken implementation of kexec kernel verification. At the time it was implemented for arm64, the platform and secondary keyrings already existed and x86 was using them, but the initial arm64 implementation ignored them.
Ok, so it's something that never worked. Adding support to get it to work doesn't really fall into the stable kernel rules, right?
Not sure. It was defective, not using the facilities available at the time correctly. Which translates to kernels that can be kexec'd on x86 failing to kexec on arm64 without any explanation (signed with same key, built for the appropriate arch).
Feature parity across architectures is not a "regression", but rather a "this feature is not implemented for this architecture yet" type of thing.
That depends on the point of view: before kexec verification you could boot any kernel; now you can boot some kernels signed with a valid key, but not others. The initial implementation is buggy, probably because it is based on an old version of the x86 code.
Buggy? Support for the platform keyring slipped in just before I submitted the latest version of the patch series, which was eventually merged. (I should have noticed it, though.)
It's difficult to notice another in-flight patch that does not conflict with yours and is for a different architecture. That's why we have followup patches and Fixes tags.
However, secondary keyring support was added in 4.19 by commit ea93102f3224 ("Fix kexec forbidding kernels signed with keys in the secondary keyring to boot"), which the arm64 code did not use either.
Looking at the changes in commit 278311e417be ("kexec, KEYS: Make use of platform keyring for signature verify"), it seems obvious that it is a new feature, because it introduced a new Kconfig option, CONFIG_INTEGRITY_PLATFORM_KEYRING, which allows enabling/disabling platform keyring support.
Yes, and that feature has existed since 5.1, and we are talking about 5.15 here. Not making use of a keyring that the kernel supports results in the inability to kexec kernels that are signed with a valid key, which is arguably a bug.
Thanks
Michal
linux-stable-mirror@lists.linaro.org