The kernel has recently added support for shadow stacks, currently x86 only using its CET feature, but both arm64 and RISC-V have equivalent features (GCS and Zicfiss respectively); I am actively working on GCS[1]. With shadow stacks the hardware maintains an additional stack containing only the return addresses for function calls. This stack is not generally writeable by userspace, and the hardware ensures that any return is to the recorded address. This provides some protection against ROP attacks and makes it easier to collect call stacks. These shadow stacks are allocated in the address space of the userspace process.
Our API for shadow stacks does not currently offer userspace any flexibility for managing the allocation of shadow stacks for newly created threads; instead the kernel allocates a new shadow stack with the same size as the normal stack whenever a thread is created with the feature enabled. The stacks allocated in this way are freed by the kernel when the thread exits or shadow stacks are disabled for the thread. This lack of flexibility and control isn't ideal: in the vast majority of cases the shadow stack will be over-allocated, and the implicit allocation and deallocation is not consistent with other interfaces. As far as I can tell the interface is done in this manner mainly because the shadow stack patches were in development since before clone3() was implemented.
Since clone3() is readily extensible, let's add support for specifying a shadow stack when creating a new thread or process in a similar manner to how the normal stack is specified, keeping the current implicit allocation behaviour if one is not specified either with clone3() or through the use of clone(). The user must provide a shadow stack address and size; this must point to memory mapped for use as a shadow stack by map_shadow_stack() with a shadow stack token at the top of the stack.
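For illustration, a rough sketch of the intended usage from userspace (this example is mine, not part of the series; struct clone_args_v3 and clone_with_shadow_stack() are hypothetical names mirroring the uapi additions in this series, and real users would use the updated uapi sched.h definitions directly):

    #define _GNU_SOURCE
    #include <linux/types.h>
    #include <sched.h>
    #include <signal.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef __NR_map_shadow_stack
    #define __NR_map_shadow_stack 453
    #endif

    /* Ask map_shadow_stack() to write a token at the top of the stack */
    #ifndef SHADOW_STACK_SET_TOKEN
    #define SHADOW_STACK_SET_TOKEN (1ULL << 0)
    #endif

    /* struct clone_args as extended by this series (CLONE_ARGS_SIZE_VER3) */
    struct clone_args_v3 {
            __aligned_u64 flags;
            __aligned_u64 pidfd;
            __aligned_u64 child_tid;
            __aligned_u64 parent_tid;
            __aligned_u64 exit_signal;
            __aligned_u64 stack;
            __aligned_u64 stack_size;
            __aligned_u64 tls;
            __aligned_u64 set_tid;
            __aligned_u64 set_tid_size;
            __aligned_u64 cgroup;
            __aligned_u64 shadow_stack;
            __aligned_u64 shadow_stack_size;
    };

    static pid_t clone_with_shadow_stack(uint64_t stack, uint64_t stack_size)
    {
            uint64_t shstk_size = 4 * 4096;
            void *shstk;

            /*
             * Map a shadow stack for the child with a token at the top;
             * clone3() will validate and consume the token.
             */
            shstk = (void *)syscall(__NR_map_shadow_stack, 0, shstk_size,
                                    SHADOW_STACK_SET_TOKEN);
            if (shstk == MAP_FAILED)
                    return -1;

            struct clone_args_v3 args = {
                    .flags             = CLONE_VM,
                    .exit_signal       = SIGCHLD,
                    .stack             = stack,
                    .stack_size        = stack_size,
                    .shadow_stack      = (uint64_t)(uintptr_t)shstk,
                    .shadow_stack_size = shstk_size,
            };

            /* Returns 0 in the child, which then runs on the new stacks */
            return syscall(__NR_clone3, &args, sizeof(args));
    }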
Please note that the x86 portions of this code are build tested only, I don't appear to have a system available to me that can run CET; I have done testing with an integration into my pending work for GCS. There is some possibility that the arm64 implementation may require the use of clone3() and explicit userspace allocation of shadow stacks; this is still under discussion.
Please further note that the token consumption done by clone3() is not currently implemented in an atomic fashion; Rick indicated that he would look into fixing this if people are OK with the implementation.
A new architecture feature Kconfig option for shadow stacks is added here; this was suggested as part of the review comments for the arm64 GCS series, and since we need to detect if shadow stacks are supported it seemed sensible to roll it in here.
[1] https://lore.kernel.org/r/20231009-arm64-gcs-v6-0-78e55deaa4dd@kernel.org/
Signed-off-by: Mark Brown <broonie@kernel.org>
---
Changes in v8:
- Fix token verification with user specified shadow stack.
- Don't track user managed shadow stacks for child processes.
- Link to v7: https://lore.kernel.org/r/20240731-clone3-shadow-stack-v7-0-a9532eebfb1d@ker...

Changes in v7:
- Rebase onto v6.11-rc1.
- Typo fixes.
- Link to v6: https://lore.kernel.org/r/20240623-clone3-shadow-stack-v6-0-9ee7783b1fb9@ker...

Changes in v6:
- Rebase onto v6.10-rc3.
- Ensure we don't try to free the parent shadow stack in error paths of
  x86 arch code.
- Spelling fixes in userspace API document.
- Additional cleanups and improvements to the clone3() tests to support
  the shadow stack tests.
- Link to v5: https://lore.kernel.org/r/20240203-clone3-shadow-stack-v5-0-322c69598e4b@ker...

Changes in v5:
- Rebase onto v6.8-rc2.
- Rework ABI to have the user allocate the shadow stack memory with
  map_shadow_stack() and a token.
- Force inlining of the x86 shadow stack enablement.
- Move shadow stack enablement out into a shared header for reuse by
  other tests.
- Link to v4: https://lore.kernel.org/r/20231128-clone3-shadow-stack-v4-0-8b28ffe4f676@ker...

Changes in v4:
- Formatting changes.
- Use a define for minimum shadow stack size and move some basic
  validation to fork.c.
- Link to v3: https://lore.kernel.org/r/20231120-clone3-shadow-stack-v3-0-a7b8ed3e2acc@ker...

Changes in v3:
- Rebase onto v6.7-rc2.
- Remove stale shadow_stack in internal kargs.
- If a shadow stack is specified unconditionally use it regardless of
  CLONE_ parameters.
- Force enable shadow stacks in the selftest.
- Update changelogs for RISC-V feature rename.
- Link to v2: https://lore.kernel.org/r/20231114-clone3-shadow-stack-v2-0-b613f8681155@ker...

Changes in v2:
- Rebase onto v6.7-rc1.
- Remove ability to provide preallocated shadow stack, just specify the
  desired size.
- Link to v1: https://lore.kernel.org/r/20231023-clone3-shadow-stack-v1-0-d867d0b5d4d0@ker...
---
Mark Brown (9):
      Documentation: userspace-api: Add shadow stack API documentation
      selftests: Provide helper header for shadow stack testing
      mm: Introduce ARCH_HAS_USER_SHADOW_STACK
      fork: Add shadow stack support to clone3()
      selftests/clone3: Remove redundant flushes of output streams
      selftests/clone3: Factor more of main loop into test_clone3()
      selftests/clone3: Explicitly handle child exits due to signals
      selftests/clone3: Allow tests to flag if -E2BIG is a valid error code
      selftests/clone3: Test shadow stack support

 Documentation/userspace-api/index.rst             |   1 +
 Documentation/userspace-api/shadow_stack.rst      |  41 ++++
 arch/x86/Kconfig                                  |   1 +
 arch/x86/include/asm/shstk.h                      |  11 +-
 arch/x86/kernel/process.c                         |   2 +-
 arch/x86/kernel/shstk.c                           | 105 +++++++--
 fs/proc/task_mmu.c                                |   2 +-
 include/linux/mm.h                                |   2 +-
 include/linux/sched/task.h                        |  13 ++
 include/uapi/linux/sched.h                        |  13 +-
 kernel/fork.c                                     |  76 ++++++--
 mm/Kconfig                                        |   6 +
 tools/testing/selftests/clone3/clone3.c           | 224 ++++++++++++++++++----
 tools/testing/selftests/clone3/clone3_selftests.h |  40 +++-
 tools/testing/selftests/ksft_shstk.h              |  63 ++++++
 15 files changed, 513 insertions(+), 87 deletions(-)
---
base-commit: 8400291e289ee6b2bf9779ff1c83a291501f017b
change-id: 20231019-clone3-shadow-stack-15d40d2bf536
Best regards,
There are a number of architectures with shadow stack features which we are presenting to userspace with as consistent an API as we can (though there are some architecture specifics). Especially given that there are some important considerations for userspace code interacting directly with the feature, let's provide some documentation covering the common aspects.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 Documentation/userspace-api/index.rst        |  1 +
 Documentation/userspace-api/shadow_stack.rst | 41 ++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)
diff --git a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst
index 274cc7546efc..c39709bfba2c 100644
--- a/Documentation/userspace-api/index.rst
+++ b/Documentation/userspace-api/index.rst
@@ -59,6 +59,7 @@ Everything else

    ELF
    netlink/index
+   shadow_stack
    sysfs-platform_profile
    vduse
    futex2
diff --git a/Documentation/userspace-api/shadow_stack.rst b/Documentation/userspace-api/shadow_stack.rst
new file mode 100644
index 000000000000..c576ad3d7ec1
--- /dev/null
+++ b/Documentation/userspace-api/shadow_stack.rst
@@ -0,0 +1,41 @@
+=============
+Shadow Stacks
+=============
+
+Introduction
+============
+
+Several architectures have features which provide backward edge
+control flow protection through a hardware maintained stack, only
+writeable by userspace through very limited operations.  This feature
+is referred to as shadow stacks on Linux, on x86 it is part of Intel
+Control-flow Enforcement Technology (CET), on arm64 it is the Guarded
+Control Stacks feature (FEAT_GCS) and for RISC-V it is the Zicfiss
+extension.  It is expected that this feature will normally be managed
+by the system dynamic linker and libc in ways broadly transparent to
+application code; this document covers interfaces and considerations.
+
+
+Enabling
+========
+
+Shadow stacks default to disabled when a userspace process is
+executed, they can be enabled for the current thread with a syscall:
+
+ - For x86 the ARCH_SHSTK_ENABLE arch_prctl()
+
+It is expected that this will normally be done by the dynamic linker.
+Any new threads created by a thread with shadow stacks enabled will
+themselves have shadow stacks enabled.
+
+
+Enablement considerations
+=========================
+
+- Returning from the function that enables shadow stacks without first
+  disabling them will cause a shadow stack exception.  This includes
+  any syscall wrapper or other library functions, the syscall will need
+  to be inlined.
+- A lock feature allows userspace to prevent disabling of shadow stacks.
+- Those that change the stack context like longjmp() or use of ucontext
+  changes on signal return will need support from libc.
On Thu, Aug 08, 2024 at 09:15:22AM +0100, Mark Brown wrote:
There are a number of architectures with shadow stack features which we are presenting to userspace with as consistent an API as we can (though there are some architecture specifics). Especially given that there are some important considerations for userspace code interacting directly with the feature, let's provide some documentation covering the common aspects.
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
While almost all users of shadow stacks should be relying on the dynamic linker and libc to enable the feature, there are several low level test programs where it is useful to enable without any libc support, allowing testing without full system enablement. This low level testing is helpful during bringup of the support itself, and also in enabling coverage by automated testing without needing all the system components in the target root filesystem to support shadow stacks.
Provide a header with helpers for this purpose, intended for use only by test programs directly exercising shadow stack interfaces.
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 tools/testing/selftests/ksft_shstk.h | 63 ++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)
diff --git a/tools/testing/selftests/ksft_shstk.h b/tools/testing/selftests/ksft_shstk.h
new file mode 100644
index 000000000000..85d0747c1802
--- /dev/null
+++ b/tools/testing/selftests/ksft_shstk.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Helpers for shadow stack enablement, this is intended to only be
+ * used by low level test programs directly exercising interfaces for
+ * working with shadow stacks.
+ *
+ * Copyright (C) 2024 ARM Ltd.
+ */
+
+#ifndef __KSFT_SHSTK_H
+#define __KSFT_SHSTK_H
+
+#include <asm/mman.h>
+
+/* This is currently only defined for x86 */
+#ifndef SHADOW_STACK_SET_TOKEN
+#define SHADOW_STACK_SET_TOKEN (1ULL << 0)
+#endif
+
+static bool shadow_stack_enabled;
+
+#ifdef __x86_64__
+#define ARCH_SHSTK_ENABLE	0x5001
+#define ARCH_SHSTK_SHSTK	(1ULL << 0)
+
+#define ARCH_PRCTL(arg1, arg2)					\
+({								\
+	long _ret;						\
+	register long _num  asm("eax") = __NR_arch_prctl;	\
+	register long _arg1 asm("rdi") = (long)(arg1);		\
+	register long _arg2 asm("rsi") = (long)(arg2);		\
+								\
+	asm volatile (						\
+		"syscall\n"					\
+		: "=a"(_ret)					\
+		: "r"(_arg1), "r"(_arg2),			\
+		  "0"(_num)					\
+		: "rcx", "r11", "memory", "cc"			\
+	);							\
+								\
+	_ret;							\
+})
+
+#define ENABLE_SHADOW_STACK
+static inline __attribute__((always_inline)) void enable_shadow_stack(void)
+{
+	int ret = ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK);
+	if (ret == 0)
+		shadow_stack_enabled = true;
+}
+
+#endif
+
+#ifndef __NR_map_shadow_stack
+#define __NR_map_shadow_stack 453
+#endif
+
+#ifndef ENABLE_SHADOW_STACK
+static inline void enable_shadow_stack(void) { }
+#endif
+
+#endif
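For illustration, a minimal test built on this header might look like the following hypothetical fragment (not part of the patch; ksft_exit_skip() and ksft_exit_pass() come from the existing kselftest.h, and the relative include paths assume a test directory one level below the header):

    #include "../kselftest.h"
    #include "../ksft_shstk.h"

    int main(void)
    {
            /*
             * enable_shadow_stack() is always_inline so no function
             * return happens between enabling and the code below.
             */
            enable_shadow_stack();

            if (!shadow_stack_enabled)
                    ksft_exit_skip("Shadow stack support unavailable\n");

            /* ... exercise shadow stack interfaces here ... */

            /* Exit via exit() rather than returning from main() */
            ksft_exit_pass();
    }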
Since multiple architectures have support for shadow stacks and we need to select support for this feature in several places in the generic code, provide a generic config option that the architectures can select.
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Deepak Gupta <debug@rivosinc.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/x86/Kconfig   | 1 +
 fs/proc/task_mmu.c | 2 +-
 include/linux/mm.h | 2 +-
 mm/Kconfig         | 6 ++++++
 4 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 007bab9f2a0e..320e1f411163 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1957,6 +1957,7 @@ config X86_USER_SHADOW_STACK
 	depends on AS_WRUSS
 	depends on X86_64
 	select ARCH_USES_HIGH_VMA_FLAGS
+	select ARCH_HAS_USER_SHADOW_STACK
 	select X86_CET
 	help
 	  Shadow stack protection is a hardware feature that detects function
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 5f171ad7b436..0ea49725f524 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -984,7 +984,7 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
 		[ilog2(VM_UFFD_MINOR)]	= "ui",
 #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
-#ifdef CONFIG_X86_USER_SHADOW_STACK
+#ifdef CONFIG_ARCH_HAS_USER_SHADOW_STACK
 		[ilog2(VM_SHADOW_STACK)] = "ss",
 #endif
 #ifdef CONFIG_64BIT
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c4b238a20b76..3357625c1db3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -342,7 +342,7 @@ extern unsigned int kobjsize(const void *objp);
 #endif
 #endif /* CONFIG_ARCH_HAS_PKEYS */

-#ifdef CONFIG_X86_USER_SHADOW_STACK
+#ifdef CONFIG_ARCH_HAS_USER_SHADOW_STACK
 /*
  * VM_SHADOW_STACK should not be set with VM_SHARED because of lack of
  * support core mm.
diff --git a/mm/Kconfig b/mm/Kconfig
index b72e7d040f78..3167be663bca 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1263,6 +1263,12 @@ config IOMMU_MM_DATA
 config EXECMEM
 	bool

+config ARCH_HAS_USER_SHADOW_STACK
+	bool
+	help
+	  The architecture has hardware support for userspace shadow call
+	  stacks (eg, x86 CET, arm64 GCS or RISC-V Zicfiss).
+
 source "mm/damon/Kconfig"

 endmenu
On Thu, Aug 08, 2024 at 09:15:24AM +0100, Mark Brown wrote:
Since multiple architectures have support for shadow stacks and we need to select support for this feature in several places in the generic code, provide a generic config option that the architectures can select.
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Deepak Gupta <debug@rivosinc.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Unlike with the normal stack there is no API for configuring the shadow stack for a new thread; instead the kernel will dynamically allocate a new shadow stack with the same size as the normal stack. This appears to be due to the shadow stack series having been in development since before the more extensible clone3() was added, rather than anything more deliberate.
Add parameters to clone3() specifying the location and size of a shadow stack for the newly created process. If no shadow stack is specified then the existing implicit allocation behaviour is maintained.
If a stack is specified then it is required to have an architecture defined token placed on the stack; this will be consumed by the new task. If the token is not provided then this will be reported as a segmentation fault with si_code SEGV_CPERR, as a runtime shadow stack protection error would be. This allows architectures to implement the validation of the token in the child process context.
If the architecture does not support shadow stacks the shadow stack parameters must be zero; architectures that do support the feature are expected to enforce the same requirement on individual systems that lack shadow stack support.
Update the existing x86 implementation to pay attention to the newly added arguments; in order to maintain compatibility we use the existing behaviour if no shadow stack is specified. Minimal validation is done of the supplied parameters, with detailed enforcement left to when the thread is executed. Since we are now using more fields from the kernel_clone_args we pass that structure into the shadow stack code rather than individual fields.
At present this implementation does not consume the shadow stack token atomically as would be desirable; it uses a separate read and write.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/x86/include/asm/shstk.h |  11 +++--
 arch/x86/kernel/process.c    |   2 +-
 arch/x86/kernel/shstk.c      | 105 ++++++++++++++++++++++++++++++++++---------
 include/linux/sched/task.h   |  13 ++++++
 include/uapi/linux/sched.h   |  13 ++++--
 kernel/fork.c                |  76 ++++++++++++++++++++++++++-----
 6 files changed, 178 insertions(+), 42 deletions(-)
diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h
index 4cb77e004615..252feeda6999 100644
--- a/arch/x86/include/asm/shstk.h
+++ b/arch/x86/include/asm/shstk.h
@@ -6,6 +6,7 @@
 #include <linux/types.h>

 struct task_struct;
+struct kernel_clone_args;
 struct ksignal;

 #ifdef CONFIG_X86_USER_SHADOW_STACK
@@ -16,8 +17,8 @@ struct thread_shstk {

 long shstk_prctl(struct task_struct *task, int option, unsigned long arg2);
 void reset_thread_features(void);
-unsigned long shstk_alloc_thread_stack(struct task_struct *p, unsigned long clone_flags,
-				       unsigned long stack_size);
+unsigned long shstk_alloc_thread_stack(struct task_struct *p,
+				       const struct kernel_clone_args *args);
 void shstk_free(struct task_struct *p);
 int setup_signal_shadow_stack(struct ksignal *ksig);
 int restore_signal_shadow_stack(void);
@@ -28,8 +29,10 @@ static inline long shstk_prctl(struct task_struct *task, int option,
 			       unsigned long arg2) { return -EINVAL; }
 static inline void reset_thread_features(void) {}
 static inline unsigned long shstk_alloc_thread_stack(struct task_struct *p,
-						     unsigned long clone_flags,
-						     unsigned long stack_size) { return 0; }
+						     const struct kernel_clone_args *args)
+{
+	return 0;
+}
 static inline void shstk_free(struct task_struct *p) {}
 static inline int setup_signal_shadow_stack(struct ksignal *ksig) { return 0; }
 static inline int restore_signal_shadow_stack(void) { return 0; }
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index f63f8fd00a91..59456ab8d93f 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -207,7 +207,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 	 * is disabled, new_ssp will remain 0, and fpu_clone() will know not to
 	 * update it.
 	 */
-	new_ssp = shstk_alloc_thread_stack(p, clone_flags, args->stack_size);
+	new_ssp = shstk_alloc_thread_stack(p, args);
 	if (IS_ERR_VALUE(new_ssp))
 		return PTR_ERR((void *)new_ssp);

diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index 059685612362..d7005974aff5 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -191,44 +191,105 @@ void reset_thread_features(void)
 	current->thread.features_locked = 0;
 }

-unsigned long shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags,
-				       unsigned long stack_size)
+int arch_shstk_post_fork(struct task_struct *t, struct kernel_clone_args *args)
+{
+	/*
+	 * SSP is aligned, so reserved bits and mode bit are a zero, just mark
+	 * the token 64-bit.
+	 */
+	struct mm_struct *mm;
+	unsigned long addr, ssp;
+	u64 expected;
+	u64 val;
+	int ret = -EINVAL;
+
+	ssp = args->shadow_stack + args->shadow_stack_size;
+	addr = ssp - SS_FRAME_SIZE;
+	expected = ssp | BIT(0);
+
+	mm = get_task_mm(t);
+	if (!mm)
+		return -EFAULT;
+
+	/* This should really be an atomic cmpxchg. It is not. */
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE) != sizeof(val))
+		goto out;
+
+	if (val != expected)
+		goto out;
+	val = 0;
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE | FOLL_WRITE) != sizeof(val))
+		goto out;
+
+	ret = 0;
+
+out:
+	mmput(mm);
+	return ret;
+}
+
+unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
+				       const struct kernel_clone_args *args)
 {
 	struct thread_shstk *shstk = &tsk->thread.shstk;
+	unsigned long clone_flags = args->flags;
 	unsigned long addr, size;

 	/*
 	 * If shadow stack is not enabled on the new thread, skip any
-	 * switch to a new shadow stack.
+	 * implicit switch to a new shadow stack and reject attempts to
+	 * explciitly specify one.
 	 */
-	if (!features_enabled(ARCH_SHSTK_SHSTK))
+	if (!features_enabled(ARCH_SHSTK_SHSTK)) {
+		if (args->shadow_stack || args->shadow_stack_size)
+			return (unsigned long)ERR_PTR(-EINVAL);
+
 		return 0;
+	}

 	/*
-	 * For CLONE_VFORK the child will share the parents shadow stack.
-	 * Make sure to clear the internal tracking of the thread shadow
-	 * stack so the freeing logic run for child knows to leave it alone.
+	 * If the user specified a shadow stack then do some basic
+	 * validation and use it, otherwise fall back to a default
+	 * shadow stack size if the clone_flags don't indicate an
+	 * allocation is unneeded.
 	 */
-	if (clone_flags & CLONE_VFORK) {
+	if (args->shadow_stack) {
+		addr = args->shadow_stack;
+		size = args->shadow_stack_size;
 		shstk->base = 0;
 		shstk->size = 0;
-		return 0;
-	}
+	} else {
+		/*
+		 * For CLONE_VFORK the child will share the parents
+		 * shadow stack.  Make sure to clear the internal
+		 * tracking of the thread shadow stack so the freeing
+		 * logic run for child knows to leave it alone.
+		 */
+		if (clone_flags & CLONE_VFORK) {
+			shstk->base = 0;
+			shstk->size = 0;
+			return 0;
+		}

-	/*
-	 * For !CLONE_VM the child will use a copy of the parents shadow
-	 * stack.
-	 */
-	if (!(clone_flags & CLONE_VM))
-		return 0;
+		/*
+		 * For !CLONE_VM the child will use a copy of the
+		 * parents shadow stack.
+		 */
+		if (!(clone_flags & CLONE_VM))
+			return 0;

-	size = adjust_shstk_size(stack_size);
-	addr = alloc_shstk(0, size, 0, false);
-	if (IS_ERR_VALUE(addr))
-		return addr;
+		size = args->stack_size;
+		size = adjust_shstk_size(size);
+		addr = alloc_shstk(0, size, 0, false);
+		if (IS_ERR_VALUE(addr))
+			return addr;

-	shstk->base = addr;
-	shstk->size = size;
+		/* We allocated the shadow stack, we should deallocate it. */
+		shstk->base = addr;
+		shstk->size = size;
+	}

 	return addr + size;
 }
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index d362aacf9f89..56b2013d7fe5 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -43,6 +43,8 @@ struct kernel_clone_args {
 	void *fn_arg;
 	struct cgroup *cgrp;
 	struct css_set *cset;
+	unsigned long shadow_stack;
+	unsigned long shadow_stack_size;
 };

 /*
@@ -230,4 +232,15 @@ static inline void task_unlock(struct task_struct *p)

 DEFINE_GUARD(task_lock, struct task_struct *, task_lock(_T), task_unlock(_T))

+#ifdef CONFIG_ARCH_HAS_USER_SHADOW_STACK
+int arch_shstk_post_fork(struct task_struct *p,
+			 struct kernel_clone_args *args);
+#else
+static inline int arch_shstk_post_fork(struct task_struct *p,
+				       struct kernel_clone_args *args)
+{
+	return 0;
+}
+#endif
+
 #endif /* _LINUX_SCHED_TASK_H */
diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index 3bac0a8ceab2..8b7af52548fd 100644
--- a/include/uapi/linux/sched.h
+++ b/include/uapi/linux/sched.h
@@ -84,6 +84,10 @@
 *                kernel's limit of nested PID namespaces.
 * @cgroup:       If CLONE_INTO_CGROUP is specified set this to
 *                a file descriptor for the cgroup.
+ * @shadow_stack: Pointer to the memory allocated for the child
+ *                shadow stack.
+ * @shadow_stack_size: Specify the size of the shadow stack for
+ *                the child process.
 *
 * The structure is versioned by size and thus extensible.
 * New struct members must go at the end of the struct and
@@ -101,12 +105,15 @@ struct clone_args {
 	__aligned_u64 set_tid;
 	__aligned_u64 set_tid_size;
 	__aligned_u64 cgroup;
+	__aligned_u64 shadow_stack;
+	__aligned_u64 shadow_stack_size;
 };
 #endif

-#define CLONE_ARGS_SIZE_VER0 64 /* sizeof first published struct */
-#define CLONE_ARGS_SIZE_VER1 80 /* sizeof second published struct */
-#define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */
+#define CLONE_ARGS_SIZE_VER0  64 /* sizeof first published struct */
+#define CLONE_ARGS_SIZE_VER1  80 /* sizeof second published struct */
+#define CLONE_ARGS_SIZE_VER2  88 /* sizeof third published struct */
+#define CLONE_ARGS_SIZE_VER3 104 /* sizeof fourth published struct */

 /*
 * Scheduling policies
diff --git a/kernel/fork.c b/kernel/fork.c
index cc760491f201..18278c72681c 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -128,6 +128,11 @@
 */
 #define MAX_THREADS FUTEX_TID_MASK

+/*
+ * Require that shadow stacks can store at least one element
+ */
+#define SHADOW_STACK_SIZE_MIN sizeof(void *)
+
 /*
 * Protected counters by write_lock_irq(&tasklist_lock)
 */
@@ -2729,6 +2734,19 @@ struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node)
 	return copy_process(NULL, 0, node, &args);
 }

+static void shstk_post_fork(struct task_struct *p,
+			    struct kernel_clone_args *args)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK))
+		return;
+
+	if (!args->shadow_stack)
+		return;
+
+	if (arch_shstk_post_fork(p, args) != 0)
+		force_sig_fault_to_task(SIGSEGV, SEGV_CPERR, NULL, p);
+}
+
 /*
 * Ok, this is the main fork-routine.
 *
@@ -2790,6 +2808,8 @@ pid_t kernel_clone(struct kernel_clone_args *args)
 	 */
 	trace_sched_process_fork(current, p);

+	shstk_post_fork(p, args);
+
 	pid = get_task_pid(p, PIDTYPE_PID);
 	nr = pid_vnr(pid);

@@ -2939,7 +2959,9 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
 		     CLONE_ARGS_SIZE_VER1);
 	BUILD_BUG_ON(offsetofend(struct clone_args, cgroup) !=
 		     CLONE_ARGS_SIZE_VER2);
-	BUILD_BUG_ON(sizeof(struct clone_args) != CLONE_ARGS_SIZE_VER2);
+	BUILD_BUG_ON(offsetofend(struct clone_args, shadow_stack_size) !=
+		     CLONE_ARGS_SIZE_VER3);
+	BUILD_BUG_ON(sizeof(struct clone_args) != CLONE_ARGS_SIZE_VER3);

 	if (unlikely(usize > PAGE_SIZE))
 		return -E2BIG;
@@ -2972,16 +2994,18 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
 		return -EINVAL;

 	*kargs = (struct kernel_clone_args){
-		.flags		= args.flags,
-		.pidfd		= u64_to_user_ptr(args.pidfd),
-		.child_tid	= u64_to_user_ptr(args.child_tid),
-		.parent_tid	= u64_to_user_ptr(args.parent_tid),
-		.exit_signal	= args.exit_signal,
-		.stack		= args.stack,
-		.stack_size	= args.stack_size,
-		.tls		= args.tls,
-		.set_tid_size	= args.set_tid_size,
-		.cgroup		= args.cgroup,
+		.flags			= args.flags,
+		.pidfd			= u64_to_user_ptr(args.pidfd),
+		.child_tid		= u64_to_user_ptr(args.child_tid),
+		.parent_tid		= u64_to_user_ptr(args.parent_tid),
+		.exit_signal		= args.exit_signal,
+		.stack			= args.stack,
+		.stack_size		= args.stack_size,
+		.tls			= args.tls,
+		.set_tid_size		= args.set_tid_size,
+		.cgroup			= args.cgroup,
+		.shadow_stack		= args.shadow_stack,
+		.shadow_stack_size	= args.shadow_stack_size,
 	};

 	if (args.set_tid &&
@@ -3022,6 +3046,34 @@ static inline bool clone3_stack_valid(struct kernel_clone_args *kargs)
 	return true;
 }

+/**
+ * clone3_shadow_stack_valid - check and prepare shadow stack
+ * @kargs: kernel clone args
+ *
+ * Verify that shadow stacks are only enabled if supported.
+ */
+static inline bool clone3_shadow_stack_valid(struct kernel_clone_args *kargs)
+{
+	if (kargs->shadow_stack) {
+		if (!kargs->shadow_stack_size)
+			return false;
+
+		if (kargs->shadow_stack_size < SHADOW_STACK_SIZE_MIN)
+			return false;
+
+		if (kargs->shadow_stack_size > rlimit(RLIMIT_STACK))
+			return false;
+
+		/*
+		 * The architecture must check support on the specific
+		 * machine.
+		 */
+		return IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK);
+	} else {
+		return !kargs->shadow_stack_size;
+	}
+}
+
 static bool clone3_args_valid(struct kernel_clone_args *kargs)
 {
 	/* Verify that no unknown flags are passed along. */
@@ -3044,7 +3096,7 @@ static bool clone3_args_valid(struct kernel_clone_args *kargs)
 	    kargs->exit_signal)
 		return false;

-	if (!clone3_stack_valid(kargs))
+	if (!clone3_stack_valid(kargs) || !clone3_shadow_stack_valid(kargs))
 		return false;

 	return true;
On Thu, Aug 08, 2024 at 09:15:25AM +0100, Mark Brown wrote:
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index 059685612362..d7005974aff5 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -191,44 +191,105 @@ void reset_thread_features(void)
 	current->thread.features_locked = 0;
 }

-unsigned long shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags,
-				       unsigned long stack_size)
+int arch_shstk_post_fork(struct task_struct *t, struct kernel_clone_args *args)
+{
+	/*
+	 * SSP is aligned, so reserved bits and mode bit are a zero, just mark
+	 * the token 64-bit.
+	 */
+	struct mm_struct *mm;
+	unsigned long addr, ssp;
+	u64 expected;
+	u64 val;
+	int ret = -EINVAL;
+
+	ssp = args->shadow_stack + args->shadow_stack_size;
+	addr = ssp - SS_FRAME_SIZE;
+	expected = ssp | BIT(0);
+
+	mm = get_task_mm(t);
+	if (!mm)
+		return -EFAULT;
+
+	/* This should really be an atomic cmpxchg. It is not. */
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE) != sizeof(val))
+		goto out;
If we restrict the shadow stack creation only to the CLONE_VM case, we'd not need the remote vm access, it's in the current mm context already. More on this below.
+	if (val != expected)
+		goto out;
+	val = 0;
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE | FOLL_WRITE) != sizeof(val))
+		goto out;
I'm confused that we need to consume the token here. I could not find the default shadow stack allocation doing this, only setting it via create_rstor_token() (or I did not search enough). In the default case, is the user consuming it? To me the only difference should have been the default allocation vs the one passed by the user via clone3(), with the latter maybe requiring the user to set the token initially.
+	ret = 0;
+
+out:
+	mmput(mm);
+	return ret;
+}
+
+unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
+				       const struct kernel_clone_args *args)
 {
 	struct thread_shstk *shstk = &tsk->thread.shstk;
+	unsigned long clone_flags = args->flags;
 	unsigned long addr, size;

 	/*
 	 * If shadow stack is not enabled on the new thread, skip any
-	 * switch to a new shadow stack.
+	 * implicit switch to a new shadow stack and reject attempts to
+	 * explciitly specify one.
Nit: explicitly.
 	 */
-	if (!features_enabled(ARCH_SHSTK_SHSTK))
+	if (!features_enabled(ARCH_SHSTK_SHSTK)) {
+		if (args->shadow_stack || args->shadow_stack_size)
+			return (unsigned long)ERR_PTR(-EINVAL);
+
 		return 0;
+	}

 	/*
-	 * For CLONE_VFORK the child will share the parents shadow stack.
-	 * Make sure to clear the internal tracking of the thread shadow
-	 * stack so the freeing logic run for child knows to leave it alone.
+	 * If the user specified a shadow stack then do some basic
+	 * validation and use it, otherwise fall back to a default
+	 * shadow stack size if the clone_flags don't indicate an
+	 * allocation is unneeded.
 	 */
-	if (clone_flags & CLONE_VFORK) {
+	if (args->shadow_stack) {
+		addr = args->shadow_stack;
+		size = args->shadow_stack_size;
 		shstk->base = 0;
 		shstk->size = 0;
-		return 0;
-	}
+	} else {
+		/*
+		 * For CLONE_VFORK the child will share the parents
+		 * shadow stack.  Make sure to clear the internal
+		 * tracking of the thread shadow stack so the freeing
+		 * logic run for child knows to leave it alone.
+		 */
+		if (clone_flags & CLONE_VFORK) {
+			shstk->base = 0;
+			shstk->size = 0;
+			return 0;
+		}
I think we should leave the CLONE_VFORK check on its own independent of the clone3() arguments. If one passes both CLONE_VFORK and specific shadow stack address/size, they should be ignored (or maybe return an error if you want to make it stricter).
-	/*
-	 * For !CLONE_VM the child will use a copy of the parents shadow
-	 * stack.
-	 */
-	if (!(clone_flags & CLONE_VM))
-		return 0;
+		/*
+		 * For !CLONE_VM the child will use a copy of the
+		 * parents shadow stack.
+		 */
+		if (!(clone_flags & CLONE_VM))
+			return 0;
Is the !CLONE_VM case specific only to the default shadow stack allocation? Sorry if this has been discussed already (or I completely forgot) but I thought we'd only implement this for the thread creation case. The typical fork() for a new process should inherit the parent's layout, so applicable to the clone3() with the shadow stack arguments as well (which should be ignored or maybe return an error with !CLONE_VM).
[...]
diff --git a/kernel/fork.c b/kernel/fork.c
index cc760491f201..18278c72681c 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -128,6 +128,11 @@
 */
 #define MAX_THREADS FUTEX_TID_MASK

+/*
+ * Require that shadow stacks can store at least one element
+ */
+#define SHADOW_STACK_SIZE_MIN sizeof(void *)
+
 /*
 * Protected counters by write_lock_irq(&tasklist_lock)
 */
@@ -2729,6 +2734,19 @@ struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node)
 	return copy_process(NULL, 0, node, &args);
 }

+static void shstk_post_fork(struct task_struct *p,
+			    struct kernel_clone_args *args)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK))
+		return;
+
+	if (!args->shadow_stack)
+		return;
+
+	if (arch_shstk_post_fork(p, args) != 0)
+		force_sig_fault_to_task(SIGSEGV, SEGV_CPERR, NULL, p);
+}
+
 /*
 * Ok, this is the main fork-routine.
@@ -2790,6 +2808,8 @@ pid_t kernel_clone(struct kernel_clone_args *args)
 	 */
 	trace_sched_process_fork(current, p);

+	shstk_post_fork(p, args);
Do we need this post fork call? Can we not handle the setup via the copy_thread() path in shstk_alloc_thread_stack()?
On Fri, Aug 09, 2024 at 07:19:26PM +0100, Catalin Marinas wrote:
On Thu, Aug 08, 2024 at 09:15:25AM +0100, Mark Brown wrote:
+	/* This should really be an atomic cmpxchg. It is not. */
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE) != sizeof(val))
+		goto out;
If we restrict the shadow stack creation only to the CLONE_VM case, we'd not need the remote vm access, it's in the current mm context already. More on this below.
The discussion in previous iterations was that it seemed better to allow even surprising use cases since it simplifies the analysis of what we have covered. If the user has specified a shadow stack we just do what they asked for and let them worry about if it's useful.
+	if (val != expected)
+		goto out;
I'm confused that we need to consume the token here. I could not find the default shadow stack allocation doing this, only setting it via create_rstor_token() (or I did not search enough). In the default case, is the user consuming it? To me the only difference should have been the default allocation vs the one passed by the user via clone3(), with the latter maybe requiring the user to set the token initially.
As discussed for a couple of previous versions if we don't have the token and userspace can specify any old shadow stack page as the shadow stack this allows clone3() to be used to overwrite the shadow stack of another thread, you can point to a shadow stack page which is currently in use and then run some code that causes shadow stack writes. This could potentially then in turn be used as part of a bigger exploit chain, probably it's hard to get anything beyond just causing the other thread to fault but won't be impossible.
With a kernel allocated shadow stack this is not an issue since we are placing the shadow stack in new memory, userspace can't control where we place it so it can't overwrite an existing shadow stack.
+		/*
+		 * For CLONE_VFORK the child will share the parents
+		 * shadow stack.  Make sure to clear the internal
+		 * tracking of the thread shadow stack so the freeing
+		 * logic run for child knows to leave it alone.
+		 */
+		if (clone_flags & CLONE_VFORK) {
+			shstk->base = 0;
+			shstk->size = 0;
+			return 0;
+		}
I think we should leave the CLONE_VFORK check on its own independent of the clone3() arguments. If one passes both CLONE_VFORK and specific shadow stack address/size, they should be ignored (or maybe return an error if you want to make it stricter).
This is existing logic from the current x86 code that's been reindented due to the addition of explicitly specified shadow stacks, it's not new behaviour. It is needed to stop the child thinking it has the parent's shadow stack in the CLONE_VFORK case.
-	/*
-	 * For !CLONE_VM the child will use a copy of the parents shadow
-	 * stack.
-	 */
-	if (!(clone_flags & CLONE_VM))
-		return 0;
+		/*
+		 * For !CLONE_VM the child will use a copy of the
+		 * parents shadow stack.
+		 */
+		if (!(clone_flags & CLONE_VM))
+			return 0;
Is the !CLONE_VM case specific only to the default shadow stack allocation? Sorry if this has been discussed already (or I completely forgot) but I thought we'd only implement this for the thread creation case. The typical fork() for a new process should inherit the parent's layout, so applicable to the clone3() with the shadow stack arguments as well (which should be ignored or maybe return an error with !CLONE_VM).
This is again all existing behaviour for the case where the user has not specified a shadow stack reindented, as mentioned above if the user has specified one explicitly then we just do what we were asked. The existing behaviour is to only create a new shadow stack for the child in the CLONE_VM case and leave the child using the same shadow stack as the parent in the copied mm for !CLONE_VM.
@@ -2790,6 +2808,8 @@ pid_t kernel_clone(struct kernel_clone_args *args)
 	 */
 	trace_sched_process_fork(current, p);

+	shstk_post_fork(p, args);
Do we need this post fork call? Can we not handle the setup via the copy_thread() path in shstk_alloc_thread_stack()?
It looks like we do actually have the new mm in the process before we call copy_thread() so we could move things into there, though we'd lose a small bit of factoring out of the error handling (at one point I had more code factored out but right now it's quite small, looking again we could also factor out the get_task_mm()/mmput()). ISTR having the new process' mm was the biggest reason for this initially but looking again I'm not sure why that was. It does still feel like even the small amount that's factored out currently is useful though, a bit less duplication in the architecture code which feels welcome here.
On Sat, Aug 10, 2024 at 12:06:12AM +0100, Mark Brown wrote:
On Fri, Aug 09, 2024 at 07:19:26PM +0100, Catalin Marinas wrote:
On Thu, Aug 08, 2024 at 09:15:25AM +0100, Mark Brown wrote:
+	/* This should really be an atomic cmpxchg. It is not. */
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE) != sizeof(val))
+		goto out;
If we restrict the shadow stack creation only to the CLONE_VM case, we'd not need the remote vm access, it's in the current mm context already. More on this below.
The discussion in previous iterations was that it seemed better to allow even surprising use cases since it simplifies the analysis of what we have covered. If the user has specified a shadow stack we just do what they asked for and let them worry about if it's useful.
Thanks for the summary of the past discussions, the patch makes more sense now. I guess it's easier to follow a clone*() syscall where one can set a new stack pointer even in the !CLONE_VM case. Just let it set the shadow stack as well with the new ABI.
However, the x86 would be slightly inconsistent here between clone() and clone3(). I guess it depends how you look at it. With the classic clone() syscall, if one doesn't pass CLONE_VM but does set a new stack, there's no new shadow stack allocated, which I'd expect since it's a new stack. Well, I doubt anyone cares about this scenario. Are there real cases of !CLONE_VM but with a new stack?
+	if (val != expected)
+		goto out;
I'm confused that we need to consume the token here. I could not find the default shadow stack allocation doing this, only setting it via create_rstor_token() (or I did not search enough). In the default case, is the user consuming it? To me the only difference should have been the default allocation vs the one passed by the user via clone3(), with the latter maybe requiring the user to set the token initially.
As discussed for a couple of previous versions if we don't have the token and userspace can specify any old shadow stack page as the shadow stack this allows clone3() to be used to overwrite the shadow stack of another thread, you can point to a shadow stack page which is currently in use and then run some code that causes shadow stack writes. This could potentially then in turn be used as part of a bigger exploit chain, probably it's hard to get anything beyond just causing the other thread to fault but won't be impossible.
With a kernel allocated shadow stack this is not an issue since we are placing the shadow stack in new memory, userspace can't control where we place it so it can't overwrite an existing shadow stack.
IIUC, the kernel-allocated shadow stack will have the token always set while the user-allocated one will be cleared. I was looking to understand the inconsistency between these two cases in terms of the final layout of the new shadow stack: one with the token, the other without. I can see the need for checking but maybe start with requiring it to be 0 and setting the token before returning, for consistency with clone().
In the kernel-allocated shadow stack, is the token used for anything? I can see it's used for signal delivery and return but I couldn't figure out what it is used for in a thread's shadow stack.
Also, can one not use the clone3() to point to the clone()-allocated shadow stack? Maybe that's unlikely as an app tends to stick to one syscall flavour or the other.
+		/*
+		 * For CLONE_VFORK the child will share the parents
+		 * shadow stack.  Make sure to clear the internal
+		 * tracking of the thread shadow stack so the freeing
+		 * logic run for child knows to leave it alone.
+		 */
+		if (clone_flags & CLONE_VFORK) {
+			shstk->base = 0;
+			shstk->size = 0;
+			return 0;
+		}
I think we should leave the CLONE_VFORK check on its own independent of the clone3() arguments. If one passes both CLONE_VFORK and specific shadow stack address/size, they should be ignored (or maybe return an error if you want to make it stricter).
This is existing logic from the current x86 code that's been reindented due to the addition of explicitly specified shadow stacks, it's not new behaviour. It is needed to stop the child thinking it has the parent's shadow stack in the CLONE_VFORK case.
I figured that. But similar to the current !CLONE_VM behaviour where no new shadow stack is allocated even if a new stack is passed to clone(), I was thinking of something similar here for consistency: don't set up a shadow stack in the CLONE_VFORK case or at least allow it only if a new stack is being set up (if we extend this to clone(), it would be a small ABI change).
-	/*
-	 * For !CLONE_VM the child will use a copy of the parents shadow
-	 * stack.
-	 */
-	if (!(clone_flags & CLONE_VM))
-		return 0;
+		/*
+		 * For !CLONE_VM the child will use a copy of the
+		 * parents shadow stack.
+		 */
+		if (!(clone_flags & CLONE_VM))
+			return 0;
Is the !CLONE_VM case specific only to the default shadow stack allocation? Sorry if this has been discussed already (or I completely forgot) but I thought we'd only implement this for the thread creation case. The typical fork() for a new process should inherit the parent's layout, so applicable to the clone3() with the shadow stack arguments as well (which should be ignored or maybe return an error with !CLONE_VM).
This is again all existing behaviour for the case where the user has not specified a shadow stack reindented, as mentioned above if the user has specified one explicitly then we just do what we were asked. The existing behaviour is to only create a new shadow stack for the child in the CLONE_VM case and leave the child using the same shadow stack as the parent in the copied mm for !CLONE_VM.
I guess I was questioning the current choices rather than the new clone3() ABI. But even for the new clone3() ABI, does it make sense to set up a shadow stack if the current stack isn't changed? We'll end up with a lot of possible combinations that will never get tested but potentially become obscure ABI. Limiting the options to the sane choices only helps with validation and unsurprising changes later on.
@@ -2790,6 +2808,8 @@ pid_t kernel_clone(struct kernel_clone_args *args)
 	 */
 	trace_sched_process_fork(current, p);

+	shstk_post_fork(p, args);
Do we need this post fork call? Can we not handle the setup via the copy_thread() path in shstk_alloc_thread_stack()?
It looks like we do actually have the new mm in the process before we call copy_thread() so we could move things into there, though we'd lose a small bit of factoring out of the error handling (at one point I had more code factored out but right now it's quite small, looking again we could also factor out the get_task_mm()/mmput()). ISTR having the new process' mm was the biggest reason for this initially but looking again I'm not sure why that was. It does still feel like even the small amount that's factored out currently is useful though, a bit less duplication in the architecture code which feels welcome here.
I think you can probably keep this. My comment was based on the assumption that we only support the CLONE_VM case where we wouldn't need the access_remote_vm(), just some direct write similar to write_user_shstk_64().
I still think we should have limited this ABI to the CLONE_VM and !CLONE_VFORK cases but I don't have a strong view if the consensus was to allow it for classic fork() and vfork() like uses (I just think they won't be used).
On Tue, Aug 13, 2024 at 05:25:47PM +0100, Catalin Marinas wrote:
However, the x86 would be slightly inconsistent here between clone() and clone3(). I guess it depends how you look at it. With the classic clone() syscall, if one doesn't pass CLONE_VM but does set a new stack, there's no new shadow stack allocated, which I'd expect since it's a new stack. Well, I doubt anyone cares about this scenario. Are there real cases of !CLONE_VM but with a new stack?
ISTR the concerns were around someone being clever with vfork() but I don't remember anything super concrete. In terms of the inconsistency here that was actually another thing that came up - if userspace specifies a stack for clone3() it'll just get used even with CLONE_VFORK so it seemed to make sense to do the same thing for the shadow stack. This was part of the thinking when we were looking at it, if you can specify a regular stack you should be able to specify a shadow stack.
I'm confused that we need to consume the token here. I could not find the default shadow stack allocation doing this, only setting it via create_rstor_token() (or I did not search enough). In the default case,
As discussed for a couple of previous versions if we don't have the token and userspace can specify any old shadow stack page as the shadow stack this allows clone3() to be used to overwrite the shadow stack of another thread, you can point to a shadow stack page which is currently
IIUC, the kernel-allocated shadow stack will have the token always set while the user-allocated one will be cleared. I was looking to
No, when the kernel allocates we don't bother with tokens at all. We only look for and clear a token with the user specified shadow stack.
understand the inconsistency between these two cases in terms of the final layout of the new shadow stack: one with the token, the other without. I can see the need for checking but maybe start with requiring it to be 0 and setting the token before returning, for consistency with clone().
The layout should be the same, the shadow stack will point to where the token would be - the only difference is if we checked to see if there was a token there. Since we either clear the token on use or allocate a fresh page in both cases the value there will be 0.
In the kernel-allocated shadow stack, is the token used for anything? I can see it's used for signal delivery and return but I couldn't figure out what it is used for in a thread's shadow stack.
For arm64 we place differently formatted tokens there during signal handling, and a token is placed at the top of the stack as part of the architected stack pivoting instructions (and a token at the destination consumed). I believe x86 has the same pivoting behaviour but ICBW. A user specified shadow stack is handled in a very similar way to what would happen if the newly created thread immediately pivoted to the specified stack.
Also, can one not use the clone3() to point to the clone()-allocated shadow stack? Maybe that's unlikely as an app tends to stick to one syscall flavour or the other.
A valid token will only be present on an inactive stack. If a thread pivots away from a kernel allocated stack then another thread could be started using the original kernel allocated stack; any program doing this should think carefully about the lifecycle of the kernel allocated stack, but it's possible. If a thread has not pivoted away from its stack then there won't be a token at the top of the stack and it won't be possible to pivot to it.
+		/*
+		 * For CLONE_VFORK the child will share the parents
+		 * shadow stack.  Make sure to clear the internal
+		 * tracking of the thread shadow stack so the freeing
+		 * logic run for child knows to leave it alone.
+		 */
+		if (clone_flags & CLONE_VFORK) {
+			shstk->base = 0;
+			shstk->size = 0;
+			return 0;
+		}
I think we should leave the CLONE_VFORK check on its own independent of the clone3() arguments. If one passes both CLONE_VFORK and specific shadow stack address/size, they should be ignored (or maybe return an error if you want to make it stricter).
This is existing logic from the current x86 code that's been reindented due to the addition of explicitly specified shadow stacks, it's not new behaviour. It is needed to stop the child thinking it has the parent's shadow stack in the CLONE_VFORK case.
I figured that. But similar to the current !CLONE_VM behaviour where no new shadow stack is allocated even if a new stack is passed to clone(), I was thinking of something similar here for consistency: don't set up a shadow stack in the CLONE_VFORK case or at least allow it only if a new stack is being set up (if we extend this to clone(), it would be a small ABI change).
We could restrict specifying a shadow stack to only be supported when a regular stack is also specified, if we're doing that I'd prefer to do it in all cases rather than only for vfork() since that reduces the number of special cases and we don't restrict normal stacks like that.
This is again all existing behaviour for the case where the user has not specified a shadow stack reindented, as mentioned above if the user has specified one explicitly then we just do what we were asked. The existing behaviour is to only create a new shadow stack for the child in the CLONE_VM case and leave the child using the same shadow stack as the parent in the copied mm for !CLONE_VM.
I guess I was questioning the current choices rather than the new clone3() ABI. But even for the new clone3() ABI, does it make sense to set up a shadow stack if the current stack isn't changed? We'll end up with a lot of possible combinations that will never get tested but potentially become obscure ABI. Limiting the options to the sane choices only helps with validation and unsurprising changes later on.
OTOH if we add the restrictions it's more code (and more test code) to check, and thinking about if we've missed some important use case. Not that it's a *huge* amount of code, like I say I'd not be too unhappy with adding a restriction on having a regular stack specified in order to specify a shadow stack.
On Tue, Aug 13, 2024 at 07:58:26PM +0100, Mark Brown wrote:
On Tue, Aug 13, 2024 at 05:25:47PM +0100, Catalin Marinas wrote:
However, the x86 would be slightly inconsistent here between clone() and clone3(). I guess it depends how you look at it. With the classic clone() syscall, if one doesn't pass CLONE_VM but does set a new stack, there's no new shadow stack allocated, which I'd expect since it's a new stack. Well, I doubt anyone cares about this scenario. Are there real cases of !CLONE_VM but with a new stack?
ISTR the concerns were around someone being clever with vfork() but I don't remember anything super concrete. In terms of the inconsistency here that was actually another thing that came up - if userspace specifies a stack for clone3() it'll just get used even with CLONE_VFORK so it seemed to make sense to do the same thing for the shadow stack. This was part of the thinking when we were looking at it, if you can specify a regular stack you should be able to specify a shadow stack.
Yes, I agree. But by this logic, I was wondering why the current clone() behaviour does not allocate a shadow stack when a new stack is requested with CLONE_VFORK. That's rather theoretical though and we may not want to change the ABI.
I'm confused that we need to consume the token here. I could not find the default shadow stack allocation doing this, only setting it via create_rstor_token() (or I did not search enough). In the default case,
As discussed for a couple of previous versions if we don't have the token and userspace can specify any old shadow stack page as the shadow stack this allows clone3() to be used to overwrite the shadow stack of another thread, you can point to a shadow stack page which is currently
IIUC, the kernel-allocated shadow stack will have the token always set while the user-allocated one will be cleared. I was looking to
No, when the kernel allocates we don't bother with tokens at all. We only look for and clear a token with the user specified shadow stack.
Ah, you are right, I misread the alloc_shstk() function. It takes a set_res_tok parameter which is false for the normal allocation.
I guess I was questioning the current choices rather than the new clone3() ABI. But even for the new clone3() ABI, does it make sense to set up a shadow stack if the current stack isn't changed? We'll end up with a lot of possible combinations that will never get tested but potentially become obscure ABI. Limiting the options to the sane choices only helps with validation and unsurprising changes later on.
OTOH if we add the restrictions it's more code (and more test code) to check, and thinking about if we've missed some important use case. Not that it's a *huge* amount of code, like I say I'd not be too unhappy with adding a restriction on having a regular stack specified in order to specify a shadow stack.
I guess we just follow the normal stack behaviour for clone3(), at least we'd be consistent with that.
Anyway, I understood this patch now and the ABI decisions. FWIW:
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
On Wed, Aug 14, 2024 at 10:38:54AM +0100, Catalin Marinas wrote:
On Tue, Aug 13, 2024 at 07:58:26PM +0100, Mark Brown wrote:
ISTR the concerns were around someone being clever with vfork() but I don't remember anything super concrete. In terms of the inconsistency here that was actually another thing that came up - if userspace specifies a stack for clone3() it'll just get used even with CLONE_VFORK so it seemed to make sense to do the same thing for the shadow stack. This was part of the thinking when we were looking at it, if you can specify a regular stack you should be able to specify a shadow stack.
Yes, I agree. But by this logic, I was wondering why the current clone() behaviour does not allocate a shadow stack when a new stack is requested with CLONE_VFORK. That's rather theoretical though and we may not want to change the ABI.
The default for vfork() is to reuse both the normal and shadow stacks, clone3() does make it all much more flexible. All the shadow stack ABI predates clone3(), even if it ended up getting merged after.
Anyway, I understood this patch now and the ABI decisions. FWIW:
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Thanks!
On Thu, 2024-08-08 at 09:15 +0100, Mark Brown wrote:
+int arch_shstk_post_fork(struct task_struct *t, struct kernel_clone_args *args)
+{
+	/*
+	 * SSP is aligned, so reserved bits and mode bit are a zero, just mark
+	 * the token 64-bit.
+	 */
+	struct mm_struct *mm;
+	unsigned long addr, ssp;
+	u64 expected;
+	u64 val;
+	int ret = -EINVAL;
We should probably?

	if (!features_enabled(ARCH_SHSTK_SHSTK))
		return 0;
+	ssp = args->shadow_stack + args->shadow_stack_size;
+	addr = ssp - SS_FRAME_SIZE;
+	expected = ssp | BIT(0);

+	mm = get_task_mm(t);
+	if (!mm)
+		return -EFAULT;
We could check that the VMA is shadow stack here. I'm not sure what could go wrong though. If you point it to RW memory it could start the thread with that as a shadow stack and just blow up at the first call. It might be nicer to fail earlier though.
+	/* This should really be an atomic cmpxchg. It is not. */
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE) != sizeof(val))
+		goto out;

+	if (val != expected)
+		goto out;
+	val = 0;
After a token is consumed normally, it doesn't set it to zero. Instead it sets it to a "previous-ssp token". I don't think we actually want to do that here though because it involves the old SSP, which doesn't really apply in this case. I don't see any problem with zero, but was there any special thinking behind it?
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE | FOLL_WRITE) != sizeof(val))
+		goto out;
The GUPs still seem a bit unfortunate for a couple reasons:
- We could do a CMPXCHG version and are just not (I see ARM has identical
  code in gcs_consume_token()). It's not the only race like this though
  FWIW.
- I *think* this is the only unprivileged FOLL_FORCE that can write to
  the current process in the kernel. As is, it could be used on normal
  RO mappings, at least in a limited way. Maybe another point for the
  VMA check. We'd want to check that it is normal shadow stack?
- Lingering doubts about the wisdom of doing GUPs during task creation.
I don't think they are show stoppers, but the VMA check would be nice to have in the first upstream support.
+	ret = 0;
+
+out:
+	mmput(mm);
+	return ret;
+}
[snip]
+static void shstk_post_fork(struct task_struct *p,
+			    struct kernel_clone_args *args)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK))
+		return;
+
+	if (!args->shadow_stack)
+		return;
+
+	if (arch_shstk_post_fork(p, args) != 0)
+		force_sig_fault_to_task(SIGSEGV, SEGV_CPERR, NULL, p);
+}
Hmm, is this forcing the signal on the new task, which is set up on a user provided shadow stack that failed the token check? It would handle the signal with an arbitrary SSP then I think. We should probably fail the clone call in the parent instead, which can be done by doing the work in copy_process(). Do you see a problem with doing it at the end of copy_process()? I don't know if there could be ordering constraints.
On Thu, Aug 15, 2024 at 12:18:23AM +0000, Edgecombe, Rick P wrote:
On Thu, 2024-08-08 at 09:15 +0100, Mark Brown wrote:
+	ssp = args->shadow_stack + args->shadow_stack_size;
+	addr = ssp - SS_FRAME_SIZE;
+	expected = ssp | BIT(0);

+	mm = get_task_mm(t);
+	if (!mm)
+		return -EFAULT;
We could check that the VMA is shadow stack here. I'm not sure what could go wrong though. If you point it to RW memory it could start the thread with that as a shadow stack and just blow up at the first call. It might be nicer to fail earlier though.
Sure, I wasn't doing anything since, like you say, the new thread will fail anyway, but we can do the check. As you point out below it'll close down the possibility of writing to memory.
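For illustration, a minimal sketch of the kind of VMA check under discussion, assuming it runs with mmap_read_lock() held from arch_shstk_post_fork(); the helper name and placement are hypothetical rather than what the series does:

	/*
	 * Hypothetical sketch: reject anything that is not shadow stack
	 * memory before consuming the token.  Assumes the caller holds
	 * mmap_read_lock(mm).
	 */
	static int check_shstk_vma(struct mm_struct *mm, unsigned long addr,
				   unsigned long size)
	{
		struct vm_area_struct *vma;

		vma = vma_lookup(mm, addr);
		if (!vma || !(vma->vm_flags & VM_SHADOW_STACK))
			return -EINVAL;

		/* The whole range must sit inside one shadow stack mapping */
		if (addr + size > vma->vm_end)
			return -EINVAL;

		return 0;
	}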
+	/* This should really be an atomic cmpxchg. It is not. */
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE) != sizeof(val))
+		goto out;

+	if (val != expected)
+		goto out;
+	val = 0;
After a token is consumed normally, it doesn't set it to zero. Instead it sets it to a "previous-ssp token". I don't think we actually want to do that here though because it involves the old SSP, which doesn't really apply in this case. I don't see any problem with zero, but was there any special thinking behind it?
I wasn't aware of the x86 behaviour for pivots here, 0 was just a default thing to choose for an invalid value. arm64 will also leave a value on the outgoing stack as a product of the two step pivots we have but it's not really something you'd look for.
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE | FOLL_WRITE) != sizeof(val))
+		goto out;
The GUPs still seem a bit unfortunate for a couple reasons:
 - We could do a CMPXCHG version and are just not (I see ARM has identical
   code in gcs_consume_token()). It's not the only race like this though FWIW.
 - I *think* this is the only unprivileged FOLL_FORCE that can write to the
   current process in the kernel. As is, it could be used on normal RO
   mappings, at least in a limited way. Maybe another point for the VMA check.
   We'd want to check that it is normal shadow stack?
 - Lingering doubts about the wisdom of doing GUPs during task creation.
I don't think they are show stoppers, but the VMA check would be nice to have in the first upstream support.
The check you suggest for shadow stack memory should avoid abuse of the FOLL_FORCE at least. It'd be a bit narrow, you'd only be able to overwrite a value where we managed to read a valid token, but it's there.
+static void shstk_post_fork(struct task_struct *p,
+			    struct kernel_clone_args *args)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK))
+		return;
+
+	if (!args->shadow_stack)
+		return;
+
+	if (arch_shstk_post_fork(p, args) != 0)
+		force_sig_fault_to_task(SIGSEGV, SEGV_CPERR, NULL, p);
+}
Hmm, is this forcing the signal on the new task, which is set up on a user provided shadow stack that failed the token check? It would handle the signal with an arbitrary SSP then I think. We should probably fail the clone call in the parent instead, which can be done by doing the work in copy_process(). Do
One thing I was thinking when writing this was that I wanted to make it possible to implement the check in the vDSO if there are any architectures that could do so, avoiding any need to GUP, but I can't see that that's actually possible.
you see a problem with doing it at the end of copy_process()? I don't know if there could be ordering constraints.
I was concerned when I was writing the code about ordering constraints, but I did revise what the code was doing several times and, as I was saying in reply to Catalin, I'm no longer sure those apply.
On Thu, Aug 15, 2024 at 12:18:23AM +0000, Edgecombe, Rick P wrote:
On Thu, 2024-08-08 at 09:15 +0100, Mark Brown wrote:
+int arch_shstk_post_fork(struct task_struct *t, struct kernel_clone_args *args)
[...]
+	/* This should really be an atomic cmpxchg. It is not. */
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE) != sizeof(val))
+		goto out;

+	if (val != expected)
+		goto out;
+	val = 0;
After a token is consumed normally, it doesn't set it to zero. Instead it sets it to a "previous-ssp token". I don't think we actually want to do that here though because it involves the old SSP, which doesn't really apply in this case. I don't see any problem with zero, but was there any special thinking behind it?
BTW, since it's the parent setting up the shadow stack in its own address space before forking, I think at least the read can avoid access_remote_vm() and we could do it earlier, even before the new process is created.
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE | FOLL_WRITE) != sizeof(val))
+		goto out;
The GUPs still seem a bit unfortunate for a couple reasons:
 - We could do a CMPXCHG version and are just not (I see ARM has identical
   code in gcs_consume_token()). It's not the only race like this though FWIW.
 - I *think* this is the only unprivileged FOLL_FORCE that can write to the
   current process in the kernel. As is, it could be used on normal RO
   mappings, at least in a limited way. Maybe another point for the VMA check.
   We'd want to check that it is normal shadow stack?
 - Lingering doubts about the wisdom of doing GUPs during task creation.
I don't like the access_remote_vm() either. In the common (practically only) case with CLONE_VM, the mm is actually current->mm, so no need for a GUP.
We could, in theory, consume this token in the parent before the child mm is created. The downside is that if a parent forks multiple processes using the same shadow stack, it will have to set the token each time. I'd be fine with this; that's really only for the mostly theoretical case where one doesn't use CLONE_VM and still wants a separate stack and shadow stack.
I don't think they are show stoppers, but the VMA check would be nice to have in the first upstream support.
Good point.
On Fri, Aug 16, 2024 at 09:44:46AM +0100, Catalin Marinas wrote:
We could, in theory, consume this token in the parent before the child mm is created. The downside is that if a parent forks multiple processes using the same shadow stack, it will have to set the token each time. I'd be fine with this; that's really only for the mostly theoretical case where one doesn't use CLONE_VM and still wants a separate stack and shadow stack.
I originally implemented things that way but people did complain about the !CLONE_VM case, which does TBH seem reasonable. Note that the parent won't as standard be able to set the token again - since the shadow stack is not writable to userspace by default it'd instead need to allocate a whole new shadow stack for each child.
I could change back to parsing the token in the parent, but I don't want to end up in a cycle of bouncing between the two implementations depending on who's reviewed the most recent version.
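To illustrate the cost being discussed (hypothetical userspace helper, not part of the series, assuming <sys/syscall.h> and <asm/mman.h> for SHADOW_STACK_SET_TOKEN): since a consumed token cannot be rewritten from userspace, the !CLONE_VM flow would need a fresh mapping per child, along these lines:

	/*
	 * Hypothetical helper: each child gets its own token-bearing
	 * shadow stack because the parent cannot re-arm a consumed token.
	 */
	static unsigned long alloc_child_shadow_stack(size_t size)
	{
		long addr;

		/* The kernel writes a valid token at the top of the mapping */
		addr = syscall(__NR_map_shadow_stack, 0, size,
			       SHADOW_STACK_SET_TOKEN);
		if (addr == -1)
			return 0;

		return addr;
	}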
On Fri, Aug 16, 2024 at 11:51:57AM +0100, Mark Brown wrote:
On Fri, Aug 16, 2024 at 09:44:46AM +0100, Catalin Marinas wrote:
We could, in theory, consume this token in the parent before the child mm is created. The downside is that if a parent forks multiple processes using the same shadow stack, it will have to set the token each time. I'd be fine with this; that's really only for the mostly theoretical case where one doesn't use CLONE_VM and still wants a separate stack and shadow stack.
I originally implemented things that way but people did complain about the !CLONE_VM case, which does TBH seem reasonable. Note that the parent won't as standard be able to set the token again - since the shadow stack is not writable to userspace by default it'd instead need to allocate a whole new shadow stack for each child.
Ah, good point.
I could change back to parsing the token in the parent, but I don't want to end up in a cycle of bouncing between the two implementations depending on who's reviewed the most recent version.
You and others spent a lot more time looking at shadow stacks than me. I'm not necessarily asking to change stuff but rather understand the choices made.
On Fri, Aug 16, 2024 at 04:29:13PM +0100, Catalin Marinas wrote:
On Fri, Aug 16, 2024 at 11:51:57AM +0100, Mark Brown wrote:
I could change back to parsing the token in the parent, but I don't want to end up in a cycle of bouncing between the two implementations depending on who's reviewed the most recent version.
You and others spent a lot more time looking at shadow stacks than me. I'm not necessarily asking to change stuff but rather understand the choices made.
I'm a little ambivalent on this - on the one hand accessing the child's memory is not a thing of great beauty but on the other hand it does make the !CLONE_VM case more solid. My general instinct is that the ugliness is less of an issue than the "oh, there's a gap there" stuff with the !CLONE_VM case since it's more "why are we doing that?" than "we missed this".
On Fri, 2024-08-16 at 09:44 +0100, Catalin Marinas wrote:
After a token is consumed normally, it doesn't set it to zero. Instead it sets it to a "previous-ssp token". I don't think we actually want to do that here though because it involves the old SSP, which doesn't really apply in this case. I don't see any problem with zero, but was there any special thinking behind it?
BTW, since it's the parent setting up the shadow stack in its own address space before forking, I think at least the read can avoid access_remote_vm() and we could do it earlier, even before the new process is created.
Hmm. Makes sense. It's a bit racy since the parent could consume that token from another thread, but it would be a race in any case.
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE | FOLL_WRITE) != sizeof(val))
+		goto out;

The GUPs still seem a bit unfortunate for a couple reasons:
 - We could do a CMPXCHG version and are just not (I see ARM has identical
   code in gcs_consume_token()). It's not the only race like this though FWIW.
 - I *think* this is the only unprivileged FOLL_FORCE that can write to the
   current process in the kernel. As is, it could be used on normal RO
   mappings, at least in a limited way. Maybe another point for the VMA check.
   We'd want to check that it is normal shadow stack?
 - Lingering doubts about the wisdom of doing GUPs during task creation.
I don't like the access_remote_vm() either. In the common (practically only) case with CLONE_VM, the mm is actually current->mm, so no need for a GUP.
On the x86 side, we don't have a shadow stack access CMPXCHG. We will have to GUP and do a normal CMPXCHG off of the direct map to handle it fully properly in any case (CLONE_VM or not).
We could, in theory, consume this token in the parent before the child mm is created. The downside is that if a parent forks multiple processes using the same shadow stack, it will have to set the token each time. I'd be fine with this; that's really only for the mostly theoretical case where one doesn't use CLONE_VM and still wants a separate stack and shadow stack.
I don't think they are show stoppers, but the VMA check would be nice to have in the first upstream support.
Good point.
On Fri, Aug 16, 2024 at 02:52:28PM +0000, Edgecombe, Rick P wrote:
On Fri, 2024-08-16 at 09:44 +0100, Catalin Marinas wrote:
BTW, since it's the parent setting up the shadow stack in its own address space before forking, I think at least the read can avoid access_remote_vm() and we could do it earlier, even before the new process is created.
Hmm. Makes sense. It's a bit racy since the parent could consume that token from another thread, but it would be a race in any case.
So it sounds like we might be coming round to this? I've got a new version that verifies VM_SHADOW_STACK ready to go, but if we're going to switch back to consuming the token in the parent context I may as well do that. Like I said in the other mail I'd rather not flip flop on this.
On Fri, Aug 16, 2024 at 02:52:28PM +0000, Edgecombe, Rick P wrote:
On Fri, 2024-08-16 at 09:44 +0100, Catalin Marinas wrote:
After a token is consumed normally, it doesn't set it to zero. Instead it sets it to a "previous-ssp token". I don't think we actually want to do that here though because it involves the old SSP, which doesn't really apply in this case. I don't see any problem with zero, but was there any special thinking behind it?
BTW, since it's the parent setting up the shadow stack in its own address space before forking, I think at least the read can avoid access_remote_vm() and we could do it earlier, even before the new process is created.
Hmm. Makes sense. It's a bit racy since the parent could consume that token from another thread, but it would be a race in any case.
More on the race below. If we handle it properly, we don't need the separate checks.
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE | FOLL_WRITE) != sizeof(val))
+		goto out;

The GUPs still seem a bit unfortunate for a couple reasons:
 - We could do a CMPXCHG version and are just not (I see ARM has identical
   code in gcs_consume_token()). It's not the only race like this though FWIW.
 - I *think* this is the only unprivileged FOLL_FORCE that can write to the
   current process in the kernel. As is, it could be used on normal RO
   mappings, at least in a limited way. Maybe another point for the VMA check.
   We'd want to check that it is normal shadow stack?
 - Lingering doubts about the wisdom of doing GUPs during task creation.
I don't like the access_remote_vm() either. In the common (practically only) case with CLONE_VM, the mm is actually current->mm, so no need for a GUP.
On the x86 side, we don't have a shadow stack access CMPXCHG. We will have to GUP and do a normal CMPXCHG off of the direct map to handle it fully properly in any case (CLONE_VM or not).
I guess we could do the same here and for the arm64 gcs_consume_token(). Basically get_user_page_vma_remote() gives us the page together with the vma that you mentioned needs checking. We can then do a cmpxchg directly on the page_address(). It's probably faster anyway than doing GUP twice.
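Roughly what that could look like, as an untested sketch of the suggestion rather than code from the series (FOLL_FORCE is assumed to still be needed since shadow stack PTEs are not conventionally writable):

	static int consume_shstk_token(struct mm_struct *mm, unsigned long addr,
				       u64 expected)
	{
		struct vm_area_struct *vma;
		struct page *page;
		u64 *token;
		int ret = -EINVAL;

		mmap_read_lock(mm);
		page = get_user_page_vma_remote(mm, addr,
						FOLL_FORCE | FOLL_WRITE, &vma);
		if (IS_ERR(page)) {
			mmap_read_unlock(mm);
			return PTR_ERR(page);
		}

		if (vma->vm_flags & VM_SHADOW_STACK) {
			/* Consume the token atomically via the direct map */
			token = page_address(page) + offset_in_page(addr);
			if (cmpxchg(token, expected, 0) == expected)
				ret = 0;
		}

		put_page(page);
		mmap_read_unlock(mm);
		return ret;
	}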
On Fri, Aug 16, 2024 at 04:38:48PM +0100, Catalin Marinas wrote:
On Fri, Aug 16, 2024 at 02:52:28PM +0000, Edgecombe, Rick P wrote:
On the x86 side, we don't have a shadow stack access CMPXCHG. We will have to GUP and do a normal CMPXCHG off of the direct map to handle it fully properly in any case (CLONE_VM or not).
I guess we could do the same here and for the arm64 gcs_consume_token(). Basically get_user_page_vma_remote() gives us the page together with the vma that you mentioned needs checking. We can then do a cmpxchg directly on the page_address(). It's probably faster anyway than doing GUP twice.
There was some complication with get_user_page_vma_remote() while I was working on an earlier version which meant I didn't use it, though with adding checking of VMAs perhaps whatever it was isn't such an issue any more.
On Thu, Aug 15, 2024 at 2:18 AM Edgecombe, Rick P rick.p.edgecombe@intel.com wrote:
On Thu, 2024-08-08 at 09:15 +0100, Mark Brown wrote:
+	if (access_remote_vm(mm, addr, &val, sizeof(val),
+			     FOLL_FORCE | FOLL_WRITE) != sizeof(val))
+		goto out;
The GUPs still seem a bit unfortunate for a couple reasons:
 - We could do a CMPXCHG version and are just not (I see ARM has identical
   code in gcs_consume_token()). It's not the only race like this though FWIW.
 - I *think* this is the only unprivileged FOLL_FORCE that can write to the
   current process in the kernel. As is, it could be used on normal RO
   mappings, at least in a limited way. Maybe another point for the VMA check.
   We'd want to check that it is normal shadow stack?
Yeah, having a FOLL_FORCE write in clone3 would be a weakness for userspace CFI and probably make it possible to violate mseal() restrictions that are supposed to enforce that address space regions are read-only.
- Lingering doubts about the wisdom of doing GUPs during task creation.
I don't think they are show stoppers, but the VMA check would be nice to have in the first upstream support.
[...]
+static void shstk_post_fork(struct task_struct *p,
+			    struct kernel_clone_args *args)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK))
+		return;
+
+	if (!args->shadow_stack)
+		return;
+
+	if (arch_shstk_post_fork(p, args) != 0)
+		force_sig_fault_to_task(SIGSEGV, SEGV_CPERR, NULL, p);
+}
Hmm, is this forcing the signal on the new task, which is set up on a user provided shadow stack that failed the token check? It would handle the signal with an arbitrary SSP then I think. We should probably fail the clone call in the parent instead, which can be done by doing the work in copy_process(). Do you see a problem with doing it at the end of copy_process()? I don't know if there could be ordering constraints.
FWIW I think we have things like force_fatal_sig() and force_exit_sig() to send signals that userspace can't catch with signal handlers - if you have to do the copying after the new task has been set up, something along those lines might be the right way to kill the child.
Though, did anyone in the thread yet suggest that you could do this before the child process has fully materialized but after the child MM has been set up? Somewhere in copy_process() between copy_mm() and the "/* No more failure paths after this point. */" comment?
On Fri, Aug 16, 2024 at 07:08:09PM +0200, Jann Horn wrote:
Yeah, having a FOLL_FORCE write in clone3 would be a weakness for userspace CFI and probably make it possible to violate mseal() restrictions that are supposed to enforce that address space regions are read-only.
Note that this will only happen for shadow stack pages (with the new version) and only for a valid token at the specific address. mseal()ing a shadow stack to be read only is hopefully not going to go terribly well for userspace.
Though, did anyone in the thread yet suggest that you could do this before the child process has fully materialized but after the child MM has been set up? Somewhere in copy_process() between copy_mm() and the "/* No more failure paths after this point. */" comment?
Yes, I've got a version that does that waiting to go, pending some discussion on whether we even do the check for the token in the child mm.
Since there were widespread issues with output not being flushed, the kselftest framework was modified to explicitly set the output streams unbuffered in commit 58e2847ad2e6 ("selftests: line buffer test program's stdout"), so there is no need to explicitly flush in the clone3 tests.
Signed-off-by: Mark Brown broonie@kernel.org
---
 tools/testing/selftests/clone3/clone3_selftests.h | 2 --
 1 file changed, 2 deletions(-)
diff --git a/tools/testing/selftests/clone3/clone3_selftests.h b/tools/testing/selftests/clone3/clone3_selftests.h
index 3d2663fe50ba..39b5dcba663c 100644
--- a/tools/testing/selftests/clone3/clone3_selftests.h
+++ b/tools/testing/selftests/clone3/clone3_selftests.h
@@ -35,8 +35,6 @@ struct __clone_args {
 
 static pid_t sys_clone3(struct __clone_args *args, size_t size)
 {
-	fflush(stdout);
-	fflush(stderr);
 	return syscall(__NR_clone3, args, size);
 }
In order to make it easier to add more configuration for the tests, and to better support runtime detection of when tests can be run, pass the structure describing the tests into test_clone3() rather than picking the arguments out of it, and have that function do all the per-test work.
No functional change.
Signed-off-by: Mark Brown broonie@kernel.org
---
 tools/testing/selftests/clone3/clone3.c | 77 ++++++++++++++++-----------------
 1 file changed, 37 insertions(+), 40 deletions(-)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
index e61f07973ce5..e066b201fa64 100644
--- a/tools/testing/selftests/clone3/clone3.c
+++ b/tools/testing/selftests/clone3/clone3.c
@@ -30,6 +30,19 @@ enum test_mode {
 	CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG,
 };
 
+typedef bool (*filter_function)(void);
+typedef size_t (*size_function)(void);
+
+struct test {
+	const char *name;
+	uint64_t flags;
+	size_t size;
+	size_function size_function;
+	int expected;
+	enum test_mode test_mode;
+	filter_function filter;
+};
+
 static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
 {
 	struct __clone_args args = {
@@ -109,30 +122,40 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
 	return 0;
 }
 
-static bool test_clone3(uint64_t flags, size_t size, int expected,
-			enum test_mode test_mode)
+static void test_clone3(const struct test *test)
 {
+	size_t size;
 	int ret;
 
+	if (test->filter && test->filter()) {
+		ksft_test_result_skip("%s\n", test->name);
+		return;
+	}
+
+	if (test->size_function)
+		size = test->size_function();
+	else
+		size = test->size;
+
+	ksft_print_msg("Running test '%s'\n", test->name);
+
 	ksft_print_msg(
 		"[%d] Trying clone3() with flags %#" PRIx64 " (size %zu)\n",
-		getpid(), flags, size);
-	ret = call_clone3(flags, size, test_mode);
+		getpid(), test->flags, size);
+	ret = call_clone3(test->flags, size, test->test_mode);
 	ksft_print_msg("[%d] clone3() with flags says: %d expected %d\n",
-		       getpid(), ret, expected);
-	if (ret != expected) {
+		       getpid(), ret, test->expected);
+	if (ret != test->expected) {
 		ksft_print_msg(
 			"[%d] Result (%d) is different than expected (%d)\n",
-			getpid(), ret, expected);
-		return false;
+			getpid(), ret, test->expected);
+		ksft_test_result_fail("%s\n", test->name);
+		return;
 	}
 
-	return true;
+	ksft_test_result_pass("%s\n", test->name);
 }
 
-typedef bool (*filter_function)(void);
-typedef size_t (*size_function)(void);
-
 static bool not_root(void)
 {
 	if (getuid() != 0) {
@@ -160,16 +183,6 @@ static size_t page_size_plus_8(void)
 	return getpagesize() + 8;
 }
 
-struct test {
-	const char *name;
-	uint64_t flags;
-	size_t size;
-	size_function size_function;
-	int expected;
-	enum test_mode test_mode;
-	filter_function filter;
-};
-
 static const struct test tests[] = {
 	{
 		.name = "simple clone3()",
@@ -319,24 +332,8 @@ int main(int argc, char *argv[])
 	ksft_set_plan(ARRAY_SIZE(tests));
 	test_clone3_supported();
 
-	for (i = 0; i < ARRAY_SIZE(tests); i++) {
-		if (tests[i].filter && tests[i].filter()) {
-			ksft_test_result_skip("%s\n", tests[i].name);
-			continue;
-		}
-
-		if (tests[i].size_function)
-			size = tests[i].size_function();
-		else
-			size = tests[i].size;
-
-		ksft_print_msg("Running test '%s'\n", tests[i].name);
-
-		ksft_test_result(test_clone3(tests[i].flags, size,
-					     tests[i].expected,
-					     tests[i].test_mode),
-				 "%s\n", tests[i].name);
-	}
+	for (i = 0; i < ARRAY_SIZE(tests); i++)
+		test_clone3(&tests[i]);
 
 	ksft_finished();
 }
In order to improve diagnostics and to allow tests to explicitly look for signals, check to see if the child exited due to a signal and, if it did, print the code and return it as a positive value, distinct from the negative errnos currently returned.
Signed-off-by: Mark Brown broonie@kernel.org
---
 tools/testing/selftests/clone3/clone3.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
index e066b201fa64..3b3a08e6a34d 100644
--- a/tools/testing/selftests/clone3/clone3.c
+++ b/tools/testing/selftests/clone3/clone3.c
@@ -111,6 +111,13 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
 		ksft_print_msg("waitpid() returned %s\n", strerror(errno));
 		return -errno;
 	}
+
+	if (WIFSIGNALED(status)) {
+		ksft_print_msg("Child exited with signal %d\n",
+			       WTERMSIG(status));
+		return WTERMSIG(status);
+	}
+
 	if (!WIFEXITED(status)) {
 		ksft_print_msg("Child did not exit normally, status 0x%x\n",
 			       status);
The clone_args structure is extensible, with the syscall passing in the length of the structure. Inside the kernel we use copy_struct_from_user() to read the struct, but this has the unfortunate side effect of silently accepting some overrun in the structure size provided the extra data is all zeros. This means that we can't discover the clone3() features that the running kernel supports by simply probing with various struct sizes. We need to check this for the benefit of test systems which run newer kselftests on old kernels.
Add a flag which can be set on a test to indicate that clone3() may return -E2BIG due to the use of newer struct versions. Currently no tests need this but it will become an issue for testing clone3() support for shadow stacks; the support for shadow stacks is already present on x86.
Signed-off-by: Mark Brown broonie@kernel.org
---
 tools/testing/selftests/clone3/clone3.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
index 3b3a08e6a34d..26221661e9ae 100644
--- a/tools/testing/selftests/clone3/clone3.c
+++ b/tools/testing/selftests/clone3/clone3.c
@@ -39,6 +39,7 @@ struct test {
 	size_t size;
 	size_function size_function;
 	int expected;
+	bool e2big_valid;
 	enum test_mode test_mode;
 	filter_function filter;
 };
@@ -153,6 +154,11 @@ static void test_clone3(const struct test *test)
 	ksft_print_msg("[%d] clone3() with flags says: %d expected %d\n",
 		       getpid(), ret, test->expected);
 	if (ret != test->expected) {
+		if (test->e2big_valid && ret == -E2BIG) {
+			ksft_print_msg("Test reported -E2BIG\n");
+			ksft_test_result_skip("%s\n", test->name);
+			return;
+		}
 		ksft_print_msg(
 			"[%d] Result (%d) is different than expected (%d)\n",
 			getpid(), ret, test->expected);
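For background, this is the sort of probe the new flag caters for; a hypothetical illustration only (the helper name is invented, and it relies on the series' behaviour of rejecting a shadow stack size without a pointer so no child is ever created):

	/*
	 * Hypothetical probe: an old kernel reports -E2BIG once a field
	 * beyond its known struct size is non-zero, since
	 * copy_struct_from_user() silently accepts zeroed overrun.
	 */
	static bool probe_clone3_shadow_stack(void)
	{
		struct __clone_args args = {
			/* Invalid on purpose so a new kernel fails with EINVAL */
			.shadow_stack_size = 1,
		};
		long ret = syscall(__NR_clone3, &args, CLONE_ARGS_SIZE_VER3);

		/* E2BIG means the running kernel predates the extension */
		return !(ret == -1 && errno == E2BIG);
	}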
Add basic test coverage for specifying the shadow stack for a newly created thread via clone3(), including coverage of the newly extended argument structure. We check that a user specified shadow stack can be provided, and that invalid combinations of parameters are rejected.
In order to facilitate testing on systems without userspace shadow stack support we manually enable shadow stacks on startup, this is architecture specific due to the use of an arch_prctl() on x86. Due to interactions with potential userspace locking of features we actually detect support for shadow stacks on the running system by attempting to allocate a shadow stack page during initialisation using map_shadow_stack(), warning if this succeeds when the enable failed.
In order to allow testing of user configured shadow stacks on architectures with that feature we need to ensure that we do not return from the function where the clone3() syscall is called in the child process, since doing so would trigger a shadow stack underflow. To do this we use inline assembly rather than the standard syscall wrapper to call clone3(). In order to avoid surprises we also use a syscall rather than the libc exit() function; this should be overly cautious.
Signed-off-by: Mark Brown broonie@kernel.org
---
 tools/testing/selftests/clone3/clone3.c           | 134 +++++++++++++++++++++-
 tools/testing/selftests/clone3/clone3_selftests.h |  38 ++++++
 2 files changed, 171 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
index 26221661e9ae..81c2e8648e8b 100644
--- a/tools/testing/selftests/clone3/clone3.c
+++ b/tools/testing/selftests/clone3/clone3.c
@@ -3,6 +3,7 @@
 /* Based on Christian Brauner's clone3() example */
 
 #define _GNU_SOURCE
+#include <asm/mman.h>
 #include <errno.h>
 #include <inttypes.h>
 #include <linux/types.h>
@@ -11,6 +12,7 @@
 #include <stdint.h>
 #include <stdio.h>
 #include <stdlib.h>
+#include <sys/mman.h>
 #include <sys/syscall.h>
 #include <sys/types.h>
 #include <sys/un.h>
@@ -19,8 +21,12 @@
 #include <sched.h>
 
 #include "../kselftest.h"
+#include "../ksft_shstk.h"
 #include "clone3_selftests.h"
 
+static bool shadow_stack_supported;
+static size_t max_supported_args_size;
+
 enum test_mode {
 	CLONE3_ARGS_NO_TEST,
 	CLONE3_ARGS_ALL_0,
@@ -28,6 +34,10 @@ enum test_mode {
 	CLONE3_ARGS_INVAL_EXIT_SIGNAL_NEG,
 	CLONE3_ARGS_INVAL_EXIT_SIGNAL_CSIG,
 	CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG,
+	CLONE3_ARGS_SHADOW_STACK,
+	CLONE3_ARGS_SHADOW_STACK_NO_SIZE,
+	CLONE3_ARGS_SHADOW_STACK_NO_POINTER,
+	CLONE3_ARGS_SHADOW_STACK_NO_TOKEN,
 };
 
 typedef bool (*filter_function)(void);
@@ -44,6 +54,44 @@ struct test {
 	filter_function filter;
 };
 
+/*
+ * We check for shadow stack support by attempting to use
+ * map_shadow_stack() since features may have been locked by the
+ * dynamic linker resulting in spurious errors when we attempt to
+ * enable on startup. We warn if the enable failed.
+ */
+static void test_shadow_stack_supported(void)
+{
+	long ret;
+
+	ret = syscall(__NR_map_shadow_stack, 0, getpagesize(), 0);
+	if (ret == -1) {
+		ksft_print_msg("map_shadow_stack() not supported\n");
+	} else if ((void *)ret == MAP_FAILED) {
+		ksft_print_msg("Failed to map shadow stack\n");
+	} else {
+		ksft_print_msg("Shadow stack supported\n");
+		shadow_stack_supported = true;
+
+		if (!shadow_stack_enabled)
+			ksft_print_msg("Mapped but did not enable shadow stack\n");
+	}
+}
+
+static unsigned long long get_shadow_stack_page(unsigned long flags)
+{
+	unsigned long long page;
+
+	page = syscall(__NR_map_shadow_stack, 0, getpagesize(), flags);
+	if ((void *)page == MAP_FAILED) {
+		ksft_print_msg("map_shadow_stack() failed: %d\n", errno);
+		return 0;
+	}
+
+	return page;
+}
+
 static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
 {
 	struct __clone_args args = {
@@ -89,6 +137,21 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
 	case CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG:
 		args.exit_signal = 0x00000000000000f0ULL;
 		break;
+	case CLONE3_ARGS_SHADOW_STACK:
+		/* We need to specify a normal stack too to avoid corruption */
+		args.shadow_stack = get_shadow_stack_page(SHADOW_STACK_SET_TOKEN);
+		args.shadow_stack_size = getpagesize();
+		break;
+	case CLONE3_ARGS_SHADOW_STACK_NO_POINTER:
+		args.shadow_stack_size = getpagesize();
+		break;
+	case CLONE3_ARGS_SHADOW_STACK_NO_SIZE:
+		args.shadow_stack = get_shadow_stack_page(SHADOW_STACK_SET_TOKEN);
+		break;
+	case CLONE3_ARGS_SHADOW_STACK_NO_TOKEN:
+		args.shadow_stack = get_shadow_stack_page(0);
+		args.shadow_stack_size = getpagesize();
+		break;
 	}
 	memcpy(&args_ext.args, &args, sizeof(struct __clone_args));
@@ -102,7 +165,12 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
 
 	if (pid == 0) {
 		ksft_print_msg("I am the child, my PID is %d\n", getpid());
-		_exit(EXIT_SUCCESS);
+		/*
+		 * Use a raw syscall to ensure we don't get issues
+		 * with manually specified shadow stack and exit handlers.
+		 */
+		syscall(__NR_exit, EXIT_SUCCESS);
+		ksft_print_msg("CHILD FAILED TO EXIT PID is %d\n", getpid());
 	}
 
 	ksft_print_msg("I am the parent (%d). My child's pid is %d\n",
@@ -191,6 +259,26 @@ static bool no_timenamespace(void)
 	return true;
 }
 
+static bool have_shadow_stack(void)
+{
+	if (shadow_stack_supported) {
+		ksft_print_msg("Shadow stack supported\n");
+		return true;
+	}
+
+	return false;
+}
+
+static bool no_shadow_stack(void)
+{
+	if (!shadow_stack_supported) {
+		ksft_print_msg("Shadow stack not supported\n");
+		return true;
+	}
+
+	return false;
+}
+
 static size_t page_size_plus_8(void)
 {
 	return getpagesize() + 8;
@@ -334,6 +422,47 @@ static const struct test tests[] = {
 		.expected = -EINVAL,
 		.test_mode = CLONE3_ARGS_NO_TEST,
 	},
+	{
+		.name = "Shadow stack on system with shadow stack",
+		.size = 0,
+		.expected = 0,
+		.e2big_valid = true,
+		.test_mode = CLONE3_ARGS_SHADOW_STACK,
+		.filter = no_shadow_stack,
+	},
+	{
+		.name = "Shadow stack with no pointer",
+		.size = 0,
+		.expected = -EINVAL,
+		.e2big_valid = true,
+		.test_mode = CLONE3_ARGS_SHADOW_STACK_NO_POINTER,
+	},
+	{
+		.name = "Shadow stack with no size",
+		.size = 0,
+		.expected = -EINVAL,
+		.e2big_valid = true,
+		.test_mode = CLONE3_ARGS_SHADOW_STACK_NO_SIZE,
+		.filter = no_shadow_stack,
+	},
+	{
+		.name = "Shadow stack with no token",
+		.flags = CLONE_VM,
+		.size = 0,
+		.expected = SIGSEGV,
+		.e2big_valid = true,
+		.test_mode = CLONE3_ARGS_SHADOW_STACK_NO_TOKEN,
+		.filter = no_shadow_stack,
+	},
+	{
+		.name = "Shadow stack on system without shadow stack",
+		.flags = CLONE_VM,
+		.size = 0,
+		.expected = -EINVAL,
+		.e2big_valid = true,
+		.test_mode = CLONE3_ARGS_SHADOW_STACK,
+		.filter = have_shadow_stack,
+	},
 };
 
 int main(int argc, char *argv[])
@@ -341,9 +470,12 @@ int main(int argc, char *argv[])
 	size_t size;
 	int i;
 
+	enable_shadow_stack();
+
 	ksft_print_header();
 	ksft_set_plan(ARRAY_SIZE(tests));
 	test_clone3_supported();
+	test_shadow_stack_supported();
 
 	for (i = 0; i < ARRAY_SIZE(tests); i++)
 		test_clone3(&tests[i]);
diff --git a/tools/testing/selftests/clone3/clone3_selftests.h b/tools/testing/selftests/clone3/clone3_selftests.h
index 39b5dcba663c..38d82934668a 100644
--- a/tools/testing/selftests/clone3/clone3_selftests.h
+++ b/tools/testing/selftests/clone3/clone3_selftests.h
@@ -31,12 +31,50 @@ struct __clone_args {
 	__aligned_u64 set_tid;
 	__aligned_u64 set_tid_size;
 	__aligned_u64 cgroup;
+#ifndef CLONE_ARGS_SIZE_VER2
+#define CLONE_ARGS_SIZE_VER2 88	/* sizeof third published struct */
+#endif
+	__aligned_u64 shadow_stack;
+	__aligned_u64 shadow_stack_size;
+#ifndef CLONE_ARGS_SIZE_VER3
+#define CLONE_ARGS_SIZE_VER3 104	/* sizeof fourth published struct */
+#endif
 };
 
+/*
+ * For architectures with shadow stack support we need to be
+ * absolutely sure that the clone3() syscall will be inline and not a
+ * function call so we open code.
+ */
+#ifdef __x86_64__
+static pid_t __always_inline sys_clone3(struct __clone_args *args, size_t size)
+{
+	long ret;
+	register long _num  __asm__ ("rax") = __NR_clone3;
+	register long _args __asm__ ("rdi") = (long)(args);
+	register long _size __asm__ ("rsi") = (long)(size);
+
+	__asm__ volatile (
+		"syscall\n"
+		: "=a"(ret)
+		: "r"(_args), "r"(_size),
+		  "0"(_num)
+		: "rcx", "r11", "memory", "cc"
+	);
+
+	if (ret < 0) {
+		errno = -ret;
+		return -1;
+	}
+
+	return ret;
+}
+#else
 static pid_t sys_clone3(struct __clone_args *args, size_t size)
 {
 	return syscall(__NR_clone3, args, size);
 }
+#endif
 
 static inline void test_clone3_supported(void)
 {
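For comparison, an equivalent raw wrapper for arm64 might look like the following; this is an untested sketch following the AArch64 syscall convention, not part of the patch:

	#ifdef __aarch64__
	/* Sketch only: keep the syscall inline so the child never returns
	 * through a frame recorded on the parent's shadow (GCS) stack. */
	static pid_t __always_inline sys_clone3(struct __clone_args *args, size_t size)
	{
		register long _num  __asm__ ("x8") = __NR_clone3;
		register long _arg0 __asm__ ("x0") = (long)(args);
		register long _arg1 __asm__ ("x1") = (long)(size);

		__asm__ volatile (
			"svc #0\n"
			: "+r"(_arg0)
			: "r"(_arg1), "r"(_num)
			: "memory", "cc"
		);

		if (_arg0 < 0) {
			errno = -_arg0;
			return -1;
		}

		return _arg0;
	}
	#endif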
On Thu, Aug 08, 2024 at 09:15:21AM +0100, Mark Brown wrote:
The kernel has recently added support for shadow stacks, currently x86 only using their CET feature but both arm64 and RISC-V have equivalent features (GCS and Zicfiss respectively), I am actively working on GCS[1]. With shadow stacks the hardware maintains an additional stack containing only the return addresses for branch instructions which is not generally writeable by userspace and ensures that any returns are to the recorded addresses. This provides some protection against ROP attacks and making it easier to collect call stacks. These shadow stacks are allocated in the address space of the userspace process.
Our API for shadow stacks does not currently offer userspace any flexiblity for managing the allocation of shadow stacks for newly created threads, instead the kernel allocates a new shadow stack with the same size as the normal stack whenever a thread is created with the feature enabled. The stacks allocated in this way are freed by the kernel when the thread exits or shadow stacks are disabled for the thread. This lack of flexibility and control isn't ideal, in the vast majority of cases the shadow stack will be over allocated and the implicit allocation and deallocation is not consistent with other interfaces. As far as I can tell the interface is done in this manner mainly because the shadow stack patches were in development since before clone3() was implemented.
Since clone3() is readily extensible let's add support for specifying a shadow stack when creating a new thread or process in a similar manner to how the normal stack is specified, keeping the current implicit allocation behaviour if one is not specified either with clone3() or through the use of clone(). The user must provide a shadow stack address and size, this must point to memory mapped for use as a shadow stackby map_shadow_stack() with a shadow stack token at the top of the stack.
Please note that the x86 portions of this code are build tested only, I don't appear to have a system that can run CET avaible to me, I have done testing with an integration into my pending work for GCS. There is some possibility that the arm64 implementation may require the use of clone3() and explicit userspace allocation of shadow stacks, this is still under discussion.
Please further note that the token consumption done by clone3() is not currently implemented in an atomic fashion, Rick indicated that he would look into fixing this if people are OK with the implementation.
A new architecture feature Kconfig option for shadow stacks is added as here, this was suggested as part of the review comments for the arm64 GCS series and since we need to detect if shadow stacks are supported it seemed sensible to roll it in here.
[1] https://lore.kernel.org/r/20231009-arm64-gcs-v6-0-78e55deaa4dd@kernel.org/
Signed-off-by: Mark Brown broonie@kernel.org
Reviewed-by: Kees Cook kees@kernel.org
Tested-by: Kees Cook kees@kernel.org
(Testing was done on CET hardware.)
On Thu, 2024-08-08 at 10:54 -0700, Kees Cook wrote:
Tested-by: Kees Cook kees@kernel.org
I regression tested it with the CET enabled glibc selftests. No issues.
On Thu, Aug 8, 2024 at 10:16 AM Mark Brown broonie@kernel.org wrote:
Since clone3() is readily extensible let's add support for specifying a shadow stack when creating a new thread or process in a similar manner to how the normal stack is specified, keeping the current implicit allocation behaviour if one is not specified either with clone3() or through the use of clone(). The user must provide a shadow stack address and size, this must point to memory mapped for use as a shadow stackby map_shadow_stack() with a shadow stack token at the top of the stack.
As a heads-up so you don't get surprised by this in the future:
Because clone3() does not pass the flags in a register like clone() does, it is not available in places like docker containers that use the default Docker seccomp policy (https://github.com/moby/moby/blob/master/profiles/seccomp/default.json). Docker uses seccomp to filter clone() arguments (to prevent stuff like namespace creation), and that's not possible with clone3(), so clone3() is blocked.
The same thing applies to things like sandboxed renderer processes of web browsers - they want to block anything other than creating normal threads, so they use seccomp to block stuff like namespace creation and creating new processes.
I briefly mentioned this here during clone3 development, though I probably should have been more explicit about how it would be beneficial for clone3 to pass flags in a register: https://lore.kernel.org/all/CAG48ez3q=BeNcuVTKBN79kJui4vC6nw0Bfq6xc-i0neheT17TA@mail.gmail.com/
So if you want your feature to be available in such contexts, you'll probably have to either add a new syscall clone4() that passes the flags in a register; or do the plumbing work required to make it possible to seccomp-filter things other than register contexts (by invoking seccomp again from the clone3 handler with some kinda pseudo-syscall?); or change the signature of the existing syscall (but that would require something like using the high bit of the size to signal that there's a flags argument in another register, which is probably more ugly than just adding a new syscall).
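As a concrete sketch of the filtering gap described above, using libseccomp (the policy is illustrative only, and the clone() flags argument is assumed to be arg0 as on x86_64):

	#include <errno.h>
	#include <linux/sched.h>
	#include <seccomp.h>

	/*
	 * Illustrative only: clone() flags sit in a register and can be
	 * matched, clone3()'s arguments live in user memory and cannot,
	 * so sandboxes deny clone3() outright (ENOSYS so libcs fall back).
	 */
	int install_thread_only_filter(void)
	{
		scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
		int ret;

		if (!ctx)
			return -1;

		/* Deny clone() calls that try to create a new user namespace */
		seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(clone), 1,
				 SCMP_A0(SCMP_CMP_MASKED_EQ,
					 CLONE_NEWUSER, CLONE_NEWUSER));

		/* clone3()'s struct cannot be inspected by seccomp at all */
		seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOSYS), SCMP_SYS(clone3), 0);

		ret = seccomp_load(ctx);
		seccomp_release(ctx);
		return ret;
	}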
On Fri, Aug 16, 2024 at 05:52:20PM +0200, Jann Horn wrote:
As a heads-up so you don't get surprised by this in the future:
Because clone3() does not pass the flags in a register like clone() does, it is not available in places like docker containers that use the default Docker seccomp policy (https://github.com/moby/moby/blob/master/profiles/seccomp/default.json). Docker uses seccomp to filter clone() arguments (to prevent stuff like namespace creation), and that's not possible with clone3(), so clone3() is blocked.
This is probably fine, the existing shadow stack ABI provides a sensible default behaviour for things that just use regular clone(). This series just adds more control for things using clone3(), the main issue would be anything that *needs* to specify stack size/placement and can't use clone3(). That would need a separate userspace API if required, and we'd still want to extend clone3() anyway.