From: "David A. Long" dave.long@linaro.org
V4.9 backport of spectre patches from Russell M. King's spectre branch. Patches have been kvm-unit-test'ed on an arndale, run through kernelci, and handed off to ARM for functional testing.
Julien Thierry (9):
  ARM: 8789/1: signal: copy registers using __copy_to_user()
  ARM: 8791/1: vfp: use __copy_to_user() when saving VFP state
  ARM: 8792/1: oabi-compat: copy oabi events using __copy_to_user()
  ARM: 8793/1: signal: replace __put_user_error with __put_user
  ARM: 8794/1: uaccess: Prevent speculative use of the current addr_limit
  ARM: 8795/1: spectre-v1.1: use put_user() for __put_user()
  ARM: 8796/1: spectre-v1,v1.1: provide helpers for address sanitization
  ARM: 8797/1: spectre-v1.1: harden __copy_to_user
  ARM: 8810/1: vfp: Fix wrong assignement to ufp_exc
Russell King (7):
  ARM: make lookup_processor_type() non-__init
  ARM: split out processor lookup
  ARM: clean up per-processor check_bugs method call
  ARM: add PROC_VTABLE and PROC_TABLE macros
  ARM: spectre-v2: per-CPU vtables to work around big.Little systems
  ARM: ensure that processor vtables is not lost after boot
  ARM: fix the cockup in the previous patch
 arch/arm/include/asm/assembler.h   | 11 +++++
 arch/arm/include/asm/cputype.h     |  1 +
 arch/arm/include/asm/proc-fns.h    | 61 +++++++++++++++++++++-----
 arch/arm/include/asm/thread_info.h |  4 +-
 arch/arm/include/asm/uaccess.h     | 49 ++++++++++++++++++---
 arch/arm/kernel/bugs.c             |  4 +-
 arch/arm/kernel/head-common.S      |  6 +--
 arch/arm/kernel/setup.c            | 40 ++++++++++-------
 arch/arm/kernel/signal.c           | 70 ++++++++++++++++--------------
 arch/arm/kernel/smp.c              | 32 ++++++++++++++
 arch/arm/kernel/sys_oabi-compat.c  |  8 +++-
 arch/arm/lib/copy_from_user.S      |  6 +--
 arch/arm/lib/copy_to_user.S        |  6 ++-
 arch/arm/lib/uaccess_with_memcpy.c |  3 +-
 arch/arm/mm/proc-macros.S          | 10 +++++
 arch/arm/mm/proc-v7-bugs.c         | 17 +-------
 arch/arm/vfp/vfpmodule.c           | 20 ++++-----
 17 files changed, 240 insertions(+), 108 deletions(-)
From: Julien Thierry <julien.thierry@arm.com>
Commit 5ca451cf6ed04443774bbb7ee45332dafa42e99f upstream.
When saving the ARM integer registers, use __copy_to_user() to copy them into the user signal frame, rather than __put_user_error(). This has the benefit of disabling/enabling PAN once for the whole copy instead of once per write.
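As an illustration of the pattern (hypothetical struct and function names, not code from the patch): each __put_user_error() toggles PAN around a single store, whereas staging the values in a kernel-space copy and issuing one __copy_to_user() pays that cost once for the whole structure.

#include <linux/uaccess.h>

struct foo {
	unsigned long a;
	unsigned long b;
};

/* Stage the values in a kernel copy, then write them out in one go. */
static int copy_foo_to_user(struct foo __user *ufoo,
			    unsigned long a, unsigned long b)
{
	struct foo kfoo = {
		.a = a,
		.b = b,
	};

	return __copy_to_user(ufoo, &kfoo, sizeof(kfoo)) ? -EFAULT : 0;
}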
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/kernel/signal.c | 49 ++++++++++++++++++++++------------------
 1 file changed, 27 insertions(+), 22 deletions(-)
diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
index 6bee5c9b1133..fbb325ff8acc 100644
--- a/arch/arm/kernel/signal.c
+++ b/arch/arm/kernel/signal.c
@@ -256,30 +256,35 @@ static int
 setup_sigframe(struct sigframe __user *sf, struct pt_regs *regs, sigset_t *set)
 {
 	struct aux_sigframe __user *aux;
+	struct sigcontext context;
 	int err = 0;

-	__put_user_error(regs->ARM_r0, &sf->uc.uc_mcontext.arm_r0, err);
-	__put_user_error(regs->ARM_r1, &sf->uc.uc_mcontext.arm_r1, err);
-	__put_user_error(regs->ARM_r2, &sf->uc.uc_mcontext.arm_r2, err);
-	__put_user_error(regs->ARM_r3, &sf->uc.uc_mcontext.arm_r3, err);
-	__put_user_error(regs->ARM_r4, &sf->uc.uc_mcontext.arm_r4, err);
-	__put_user_error(regs->ARM_r5, &sf->uc.uc_mcontext.arm_r5, err);
-	__put_user_error(regs->ARM_r6, &sf->uc.uc_mcontext.arm_r6, err);
-	__put_user_error(regs->ARM_r7, &sf->uc.uc_mcontext.arm_r7, err);
-	__put_user_error(regs->ARM_r8, &sf->uc.uc_mcontext.arm_r8, err);
-	__put_user_error(regs->ARM_r9, &sf->uc.uc_mcontext.arm_r9, err);
-	__put_user_error(regs->ARM_r10, &sf->uc.uc_mcontext.arm_r10, err);
-	__put_user_error(regs->ARM_fp, &sf->uc.uc_mcontext.arm_fp, err);
-	__put_user_error(regs->ARM_ip, &sf->uc.uc_mcontext.arm_ip, err);
-	__put_user_error(regs->ARM_sp, &sf->uc.uc_mcontext.arm_sp, err);
-	__put_user_error(regs->ARM_lr, &sf->uc.uc_mcontext.arm_lr, err);
-	__put_user_error(regs->ARM_pc, &sf->uc.uc_mcontext.arm_pc, err);
-	__put_user_error(regs->ARM_cpsr, &sf->uc.uc_mcontext.arm_cpsr, err);
-
-	__put_user_error(current->thread.trap_no, &sf->uc.uc_mcontext.trap_no, err);
-	__put_user_error(current->thread.error_code, &sf->uc.uc_mcontext.error_code, err);
-	__put_user_error(current->thread.address, &sf->uc.uc_mcontext.fault_address, err);
-	__put_user_error(set->sig[0], &sf->uc.uc_mcontext.oldmask, err);
+	context = (struct sigcontext) {
+		.arm_r0 = regs->ARM_r0,
+		.arm_r1 = regs->ARM_r1,
+		.arm_r2 = regs->ARM_r2,
+		.arm_r3 = regs->ARM_r3,
+		.arm_r4 = regs->ARM_r4,
+		.arm_r5 = regs->ARM_r5,
+		.arm_r6 = regs->ARM_r6,
+		.arm_r7 = regs->ARM_r7,
+		.arm_r8 = regs->ARM_r8,
+		.arm_r9 = regs->ARM_r9,
+		.arm_r10 = regs->ARM_r10,
+		.arm_fp = regs->ARM_fp,
+		.arm_ip = regs->ARM_ip,
+		.arm_sp = regs->ARM_sp,
+		.arm_lr = regs->ARM_lr,
+		.arm_pc = regs->ARM_pc,
+		.arm_cpsr = regs->ARM_cpsr,
+
+		.trap_no = current->thread.trap_no,
+		.error_code = current->thread.error_code,
+		.fault_address = current->thread.address,
+		.oldmask = set->sig[0],
+	};
+
+	err |= __copy_to_user(&sf->uc.uc_mcontext, &context, sizeof(context));

 	err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(*set));
From: Julien Thierry <julien.thierry@arm.com>
Commit 3aa2df6ec2ca6bc143a65351cca4266d03a8bc41 upstream.
Use __copy_to_user() rather than __put_user_error() for individual members when saving VFP state. This has the benefit of disabling/enabling PAN once per copied struct instead of once per write.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/thread_info.h |  4 ++--
 arch/arm/kernel/signal.c           | 13 +++++++------
 arch/arm/vfp/vfpmodule.c           | 20 ++++++++------------
 3 files changed, 17 insertions(+), 20 deletions(-)
diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 57d2ad9c75ca..df8420672c7e 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -124,8 +124,8 @@ extern void vfp_flush_hwstate(struct thread_info *);
 struct user_vfp;
 struct user_vfp_exc;

-extern int vfp_preserve_user_clear_hwstate(struct user_vfp __user *,
-					   struct user_vfp_exc __user *);
+extern int vfp_preserve_user_clear_hwstate(struct user_vfp *,
+					   struct user_vfp_exc *);
 extern int vfp_restore_user_hwstate(struct user_vfp *,
 				    struct user_vfp_exc *);
 #endif
diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
index fbb325ff8acc..135b1a8e12eb 100644
--- a/arch/arm/kernel/signal.c
+++ b/arch/arm/kernel/signal.c
@@ -94,17 +94,18 @@ static int restore_iwmmxt_context(struct iwmmxt_sigframe *frame)

 static int preserve_vfp_context(struct vfp_sigframe __user *frame)
 {
-	const unsigned long magic = VFP_MAGIC;
-	const unsigned long size = VFP_STORAGE_SIZE;
+	struct vfp_sigframe kframe;
 	int err = 0;

-	__put_user_error(magic, &frame->magic, err);
-	__put_user_error(size, &frame->size, err);
+	memset(&kframe, 0, sizeof(kframe));
+	kframe.magic = VFP_MAGIC;
+	kframe.size = VFP_STORAGE_SIZE;

+	err = vfp_preserve_user_clear_hwstate(&kframe.ufp, &kframe.ufp_exc);
 	if (err)
-		return -EFAULT;
+		return err;

-	return vfp_preserve_user_clear_hwstate(&frame->ufp, &frame->ufp_exc);
+	return __copy_to_user(frame, &kframe, sizeof(kframe));
 }

 static int restore_vfp_context(struct vfp_sigframe __user *auxp)
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 8e5e97989fda..df3fa52c0aa3 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -554,12 +554,11 @@ void vfp_flush_hwstate(struct thread_info *thread)
  * Save the current VFP state into the provided structures and prepare
  * for entry into a new function (signal handler).
  */
-int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp,
-				    struct user_vfp_exc __user *ufp_exc)
+int vfp_preserve_user_clear_hwstate(struct user_vfp *ufp,
+				    struct user_vfp_exc *ufp_exc)
 {
 	struct thread_info *thread = current_thread_info();
 	struct vfp_hard_struct *hwstate = &thread->vfpstate.hard;
-	int err = 0;

 	/* Ensure that the saved hwstate is up-to-date. */
 	vfp_sync_hwstate(thread);
@@ -568,22 +567,19 @@ int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp,
 	 * Copy the floating point registers. There can be unused
 	 * registers see asm/hwcap.h for details.
 	 */
-	err |= __copy_to_user(&ufp->fpregs, &hwstate->fpregs,
-			      sizeof(hwstate->fpregs));
+	memcpy(&ufp->fpregs, &hwstate->fpregs, sizeof(hwstate->fpregs));
+
 	/*
 	 * Copy the status and control register.
 	 */
-	__put_user_error(hwstate->fpscr, &ufp->fpscr, err);
+	ufp->fpscr = hwstate->fpscr;

 	/*
 	 * Copy the exception registers.
 	 */
-	__put_user_error(hwstate->fpexc, &ufp_exc->fpexc, err);
-	__put_user_error(hwstate->fpinst, &ufp_exc->fpinst, err);
-	__put_user_error(hwstate->fpinst2, &ufp_exc->fpinst2, err);
-
-	if (err)
-		return -EFAULT;
+	ufp_exc->fpexc = hwstate->fpexc;
+	ufp_exc->fpinst = hwstate->fpinst;
+	ufp_exc->fpinst2 = ufp_exc->fpinst2;

 	/* Ensure that VFP is disabled. */
 	vfp_flush_hwstate(thread);
From: Julien Thierry <julien.thierry@arm.com>
Commit 319508902600c2688e057750148487996396e9ca upstream.
Copy events to user space using __copy_to_user() rather than copying members individually with __put_user_error(). This has the benefit of disabling/enabling PAN once per event instead of once per event member.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/kernel/sys_oabi-compat.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/arm/kernel/sys_oabi-compat.c b/arch/arm/kernel/sys_oabi-compat.c
index 640748e27035..d844c5c9364b 100644
--- a/arch/arm/kernel/sys_oabi-compat.c
+++ b/arch/arm/kernel/sys_oabi-compat.c
@@ -276,6 +276,7 @@ asmlinkage long sys_oabi_epoll_wait(int epfd,
 				    int maxevents, int timeout)
 {
 	struct epoll_event *kbuf;
+	struct oabi_epoll_event e;
 	mm_segment_t fs;
 	long ret, err, i;
@@ -294,8 +295,11 @@ asmlinkage long sys_oabi_epoll_wait(int epfd,
 	set_fs(fs);
 	err = 0;
 	for (i = 0; i < ret; i++) {
-		__put_user_error(kbuf[i].events, &events->events, err);
-		__put_user_error(kbuf[i].data, &events->data, err);
+		e.events = kbuf[i].events;
+		e.data = kbuf[i].data;
+		err = __copy_to_user(events, &e, sizeof(e));
+		if (err)
+			break;
 		events++;
 	}
 	kfree(kbuf);
From: Julien Thierry <julien.thierry@arm.com>
Commit 18ea66bd6e7a95bdc598223d72757190916af28b upstream.
With Spectre-v1.1 mitigations, __put_user_error is pointless. In an attempt to remove it, replace its references in frame setups with __put_user.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/kernel/signal.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
index 135b1a8e12eb..0a066f03b5ec 100644
--- a/arch/arm/kernel/signal.c
+++ b/arch/arm/kernel/signal.c
@@ -302,7 +302,7 @@ setup_sigframe(struct sigframe __user *sf, struct pt_regs *regs, sigset_t *set)
 	if (err == 0)
 		err |= preserve_vfp_context(&aux->vfp);
 #endif
-	__put_user_error(0, &aux->end_magic, err);
+	err |= __put_user(0, &aux->end_magic);

 	return err;
 }
@@ -434,7 +434,7 @@ setup_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs)
 	/*
 	 * Set uc.uc_flags to a value which sc.trap_no would never have.
 	 */
-	__put_user_error(0x5ac3c35a, &frame->uc.uc_flags, err);
+	err = __put_user(0x5ac3c35a, &frame->uc.uc_flags);

 	err |= setup_sigframe(frame, regs, set);
 	if (err == 0)
@@ -454,8 +454,8 @@ setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs)

 	err |= copy_siginfo_to_user(&frame->info, &ksig->info);

-	__put_user_error(0, &frame->sig.uc.uc_flags, err);
-	__put_user_error(NULL, &frame->sig.uc.uc_link, err);
+	err |= __put_user(0, &frame->sig.uc.uc_flags);
+	err |= __put_user(NULL, &frame->sig.uc.uc_link);

 	err |= __save_altstack(&frame->sig.uc.uc_stack, regs->ARM_sp);
 	err |= setup_sigframe(&frame->sig, regs, set);
From: Julien Thierry <julien.thierry@arm.com>
Commit 621afc677465db231662ed126ae1f355bf8eac47 upstream.
A mispredicted conditional call to set_fs could result in the wrong addr_limit being forwarded under speculation to a subsequent access_ok check, potentially forming part of a spectre-v1 attack using uaccess routines.
This patch prevents this forwarding from taking place, by putting heavy barriers in set_fs after writing the addr_limit.
Porting commit c2f0ad4fc089cff8 ("arm64: uaccess: Prevent speculative use of the current addr_limit").
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/uaccess.h | 8 ++++++++
 1 file changed, 8 insertions(+)
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 7b17460127fd..9ae888775743 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -99,6 +99,14 @@ extern int __put_user_bad(void);
 static inline void set_fs(mm_segment_t fs)
 {
 	current_thread_info()->addr_limit = fs;
+
+	/*
+	 * Prevent a mispredicted conditional call to set_fs from forwarding
+	 * the wrong address limit to access_ok under speculation.
+	 */
+	dsb(nsh);
+	isb();
+
 	modify_domain(DOMAIN_KERNEL, fs ? DOMAIN_CLIENT : DOMAIN_MANAGER);
 }
From: Julien Thierry <julien.thierry@arm.com>
Commit e3aa6243434fd9a82e84bb79ab1abd14f2d9a5a7 upstream.
When Spectre mitigation is required, __put_user() needs to include check_uaccess. This is already the case for put_user(), so just make __put_user() an alias of put_user().
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/uaccess.h | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 9ae888775743..b61acd62cffb 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -400,6 +400,14 @@ do { \
 	__pu_err; \
 })

+#ifdef CONFIG_CPU_SPECTRE
+/*
+ * When mitigating Spectre variant 1.1, all accessors need to include
+ * verification of the address space.
+ */
+#define __put_user(x, ptr) put_user(x, ptr)
+
+#else
 #define __put_user(x, ptr) \
 ({ \
 	long __pu_err = 0; \
@@ -407,12 +415,6 @@ do { \
 	__pu_err; \
 })

-#define __put_user_error(x, ptr, err) \
-({ \
-	__put_user_switch((x), (ptr), (err), __put_user_nocheck); \
-	(void) 0; \
-})
-
 #define __put_user_nocheck(x, __pu_ptr, __err, __size) \
 do { \
 	unsigned long __pu_addr = (unsigned long)__pu_ptr; \
@@ -492,6 +494,7 @@ do { \
 	: "r" (x), "i" (-EFAULT) \
 	: "cc")

+#endif /* !CONFIG_CPU_SPECTRE */

 #ifdef CONFIG_MMU
 extern unsigned long __must_check
From: Julien Thierry <julien.thierry@arm.com>
Commit afaf6838f4bc896a711180b702b388b8cfa638fc upstream.
Introduce C and asm helpers to sanitize a user address, taking the address range the access targets into account.
Use asm helper for existing sanitization in __copy_from_user().
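Illustrative usage sketch (assumed caller, not part of the patch; the following patch in the series wires the C helper into arm_copy_to_user() in much the same way): if to + n would run past the current addr_limit, the masked pointer becomes NULL, so a mispredicted access cannot be steered at kernel memory and simply faults.

#include <linux/uaccess.h>

static unsigned long copy_out_masked(void __user *to, const void *from,
				     unsigned long n)
{
	/* The mask helper returns NULL when to + n exceeds addr_limit. */
	return __copy_to_user(uaccess_mask_range_ptr(to, n), from, n);
}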
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/assembler.h | 11 +++++++++++
 arch/arm/include/asm/uaccess.h   | 26 ++++++++++++++++++++++++++
 arch/arm/lib/copy_from_user.S    |  6 +-----
 3 files changed, 38 insertions(+), 5 deletions(-)
diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index e616f61f859d..7d727506096f 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -465,6 +465,17 @@ THUMB( orr \reg , \reg , #PSR_T_BIT )
 #endif
 	.endm

+	.macro uaccess_mask_range_ptr, addr:req, size:req, limit:req, tmp:req
+#ifdef CONFIG_CPU_SPECTRE
+	sub	\tmp, \limit, #1
+	subs	\tmp, \tmp, \addr	@ tmp = limit - 1 - addr
+	addhs	\tmp, \tmp, #1		@ if (tmp >= 0) {
+	subhss	\tmp, \tmp, \size	@ tmp = limit - (addr + size) }
+	movlo	\addr, #0		@ if (tmp < 0) addr = NULL
+	csdb
+#endif
+	.endm
+
 	.macro uaccess_disable, tmp, isb=1
 #ifdef CONFIG_CPU_SW_DOMAIN_PAN
 	/*
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index b61acd62cffb..0f6c6b873bc5 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -129,6 +129,32 @@ static inline void set_fs(mm_segment_t fs)
 #define __inttype(x) \
 	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))

+/*
+ * Sanitise a uaccess pointer such that it becomes NULL if addr+size
+ * is above the current addr_limit.
+ */
+#define uaccess_mask_range_ptr(ptr, size) \
+	((__typeof__(ptr))__uaccess_mask_range_ptr(ptr, size))
+static inline void __user *__uaccess_mask_range_ptr(const void __user *ptr,
+						    size_t size)
+{
+	void __user *safe_ptr = (void __user *)ptr;
+	unsigned long tmp;
+
+	asm volatile(
+	"	sub	%1, %3, #1\n"
+	"	subs	%1, %1, %0\n"
+	"	addhs	%1, %1, #1\n"
+	"	subhss	%1, %1, %2\n"
+	"	movlo	%0, #0\n"
+	: "+r" (safe_ptr), "=&r" (tmp)
+	: "r" (size), "r" (current_thread_info()->addr_limit)
+	: "cc");
+
+	csdb();
+	return safe_ptr;
+}
+
 /*
  * Single-value transfer routines. They automatically use the right
  * size if we just have the right pointer type. Note that the functions
diff --git a/arch/arm/lib/copy_from_user.S b/arch/arm/lib/copy_from_user.S
index a826df3d3814..6709a8d33963 100644
--- a/arch/arm/lib/copy_from_user.S
+++ b/arch/arm/lib/copy_from_user.S
@@ -93,11 +93,7 @@ ENTRY(arm_copy_from_user)
 #ifdef CONFIG_CPU_SPECTRE
	get_thread_info r3
	ldr	r3, [r3, #TI_ADDR_LIMIT]
-	adds	ip, r1, r2	@ ip=addr+size
-	sub	r3, r3, #1	@ addr_limit - 1
-	cmpcc	ip, r3		@ if (addr+size > addr_limit - 1)
-	movcs	r1, #0		@ addr = NULL
-	csdb
+	uaccess_mask_range_ptr r1, r2, r3, ip
 #endif

 #include "copy_template.S"
From: Julien Thierry <julien.thierry@arm.com>
Commit a1d09e074250fad24f1b993f327b18cc6812eb7a upstream.
Sanitize the user pointer given to __copy_to_user, for both the standard and the memcpy versions of the user accessor.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/lib/copy_to_user.S        | 6 +++++-
 arch/arm/lib/uaccess_with_memcpy.c | 3 ++-
 2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/arm/lib/copy_to_user.S b/arch/arm/lib/copy_to_user.S
index caf5019d8161..970abe521197 100644
--- a/arch/arm/lib/copy_to_user.S
+++ b/arch/arm/lib/copy_to_user.S
@@ -94,6 +94,11 @@

 ENTRY(__copy_to_user_std)
 WEAK(arm_copy_to_user)
+#ifdef CONFIG_CPU_SPECTRE
+	get_thread_info r3
+	ldr	r3, [r3, #TI_ADDR_LIMIT]
+	uaccess_mask_range_ptr r0, r2, r3, ip
+#endif

 #include "copy_template.S"

@@ -108,4 +113,3 @@ ENDPROC(__copy_to_user_std)
 	rsb	r0, r0, r2
 	copy_abort_end
 	.popsection
-
diff --git a/arch/arm/lib/uaccess_with_memcpy.c b/arch/arm/lib/uaccess_with_memcpy.c
index 6bd1089b07e0..f598d792bace 100644
--- a/arch/arm/lib/uaccess_with_memcpy.c
+++ b/arch/arm/lib/uaccess_with_memcpy.c
@@ -152,7 +152,8 @@ arm_copy_to_user(void __user *to, const void *from, unsigned long n)
 		n = __copy_to_user_std(to, from, n);
 		uaccess_restore(ua_flags);
 	} else {
-		n = __copy_to_user_memcpy(to, from, n);
+		n = __copy_to_user_memcpy(uaccess_mask_range_ptr(to, n),
+					  from, n);
 	}
 	return n;
 }
From: Julien Thierry <julien.thierry@arm.com>
Commit 5df7a99bdd0de4a0480320264c44c04543c29d5a upstream.
In vfp_preserve_user_clear_hwstate, ufp_exc->fpinst2 gets assigned to itself. It should actually be hwstate->fpinst2 that gets assigned to the ufp_exc field.
Fixes commit 3aa2df6ec2ca6bc143a65351cca4266d03a8bc41 ("ARM: 8791/1: vfp: use __copy_to_user() when saving VFP state").
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/vfp/vfpmodule.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index df3fa52c0aa3..00dd8cf36632 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -579,7 +579,7 @@ int vfp_preserve_user_clear_hwstate(struct user_vfp *ufp,
 	 */
 	ufp_exc->fpexc = hwstate->fpexc;
 	ufp_exc->fpinst = hwstate->fpinst;
-	ufp_exc->fpinst2 = ufp_exc->fpinst2;
+	ufp_exc->fpinst2 = hwstate->fpinst2;

 	/* Ensure that VFP is disabled. */
 	vfp_flush_hwstate(thread);
From: Russell King <rmk+kernel@armlinux.org.uk>
Commit 899a42f836678a595f7d2bc36a5a0c2b03d08cbc upstream.
Move lookup_processor_type() out of the __init section so it is callable from (eg) the secondary startup code during hotplug.
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/kernel/head-common.S | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 8733012d231f..7e662bdd5cb3 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -122,6 +122,9 @@ __mmap_switched_data:
 	.long	init_thread_union + THREAD_START_SP	@ sp
 	.size	__mmap_switched_data, . - __mmap_switched_data

+	__FINIT
+	.text
+
 /*
  * This provides a C-API version of __lookup_processor_type
  */
@@ -133,9 +136,6 @@ ENTRY(lookup_processor_type)
 	ldmfd	sp!, {r4 - r6, r9, pc}
 ENDPROC(lookup_processor_type)

-	__FINIT
-	.text
-
 /*
  * Read processor ID register (CP#15, CR0), and look up in the linker-built
  * supported processor list. Note that we can't use the absolute addresses
From: Russell King <rmk+kernel@armlinux.org.uk>
Commit 65987a8553061515b5851b472081aedb9837a391 upstream.
Split out the lookup of the processor type and associated error handling from the rest of setup_processor() - we will need to use this in the secondary CPU bringup path for big.Little Spectre variant 2 mitigation.
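For context, the per-CPU vtable patch later in this series ends up calling the split-out helper from the secondary bring-up path roughly as below (sketch only; the real code is in the smp.c hunk of that later patch):

#include <asm/cputype.h>
#include <asm/proc-fns.h>
#include <asm/procinfo.h>

static void secondary_init_processor(void)
{
	/* Look up this CPU's own MIDR and install the matching vtable. */
	init_proc_vtable(lookup_processor(read_cpuid_id())->proc);
}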
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/cputype.h |  1 +
 arch/arm/kernel/setup.c        | 31 +++++++++++++++++++------------
 2 files changed, 20 insertions(+), 12 deletions(-)
diff --git a/arch/arm/include/asm/cputype.h b/arch/arm/include/asm/cputype.h
index c55db1e22f0c..b9356dbfded0 100644
--- a/arch/arm/include/asm/cputype.h
+++ b/arch/arm/include/asm/cputype.h
@@ -106,6 +106,7 @@
 #define ARM_CPU_PART_SCORPION 0x510002d0

 extern unsigned int processor_id;
+struct proc_info_list *lookup_processor(u32 midr);

 #ifdef CONFIG_CPU_CP15
 #define read_cpuid(reg) \
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index f4e54503afa9..8d5c3a118abe 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -667,22 +667,29 @@ static void __init smp_build_mpidr_hash(void)
 }
 #endif

-static void __init setup_processor(void)
+/*
+ * locate processor in the list of supported processor types. The linker
+ * builds this table for us from the entries in arch/arm/mm/proc-*.S
+ */
+struct proc_info_list *lookup_processor(u32 midr)
 {
-	struct proc_info_list *list;
+	struct proc_info_list *list = lookup_processor_type(midr);

-	/*
-	 * locate processor in the list of supported processor
-	 * types. The linker builds this table for us from the
-	 * entries in arch/arm/mm/proc-*.S
-	 */
-	list = lookup_processor_type(read_cpuid_id());
 	if (!list) {
-		pr_err("CPU configuration botched (ID %08x), unable to continue.\n",
-		       read_cpuid_id());
-		while (1);
+		pr_err("CPU%u: configuration botched (ID %08x), CPU halted\n",
+		       smp_processor_id(), midr);
+		while (1)
+			/* can't use cpu_relax() here as it may require MMU setup */;
 	}

+	return list;
+}
+
+static void __init setup_processor(void)
+{
+	unsigned int midr = read_cpuid_id();
+	struct proc_info_list *list = lookup_processor(midr);
+
 	cpu_name = list->cpu_name;
 	__cpu_architecture = __get_cpu_architecture();
@@ -700,7 +707,7 @@ static void __init setup_processor(void)
 #endif

 	pr_info("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n",
-		cpu_name, read_cpuid_id(), read_cpuid_id() & 15,
+		list->cpu_name, midr, midr & 15,
 		proc_arch[cpu_architecture()], get_cr());

 	snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c",
From: Russell King <rmk+kernel@armlinux.org.uk>
Commit 945aceb1db8885d3a35790cf2e810f681db52756 upstream.
Call the per-processor type check_bugs() method in the same way as we do other per-processor functions - move the "processor." detail into proc-fns.h.
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/proc-fns.h | 1 +
 arch/arm/kernel/bugs.c          | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index f379f5f849a9..19939e88efca 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -99,6 +99,7 @@ extern void cpu_do_suspend(void *);
 extern void cpu_do_resume(void *);
 #else
 #define cpu_proc_init processor._proc_init
+#define cpu_check_bugs processor.check_bugs
 #define cpu_proc_fin processor._proc_fin
 #define cpu_reset processor.reset
 #define cpu_do_idle processor._do_idle
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
index 7be511310191..d41d3598e5e5 100644
--- a/arch/arm/kernel/bugs.c
+++ b/arch/arm/kernel/bugs.c
@@ -6,8 +6,8 @@ void check_other_bugs(void)
 {
 #ifdef MULTI_CPU
-	if (processor.check_bugs)
-		processor.check_bugs();
+	if (cpu_check_bugs)
+		cpu_check_bugs();
 #endif
 }
From: Russell King <rmk+kernel@armlinux.org.uk>
Commit e209950fdd065d2cc46e6338e47e52841b830cba upstream.
Allow the way we access members of the processor vtable to be changed at compile time. We will need to move to per-CPU vtables to fix the Spectre variant 2 issues on big.Little systems.
However, we have a couple of calls that do not need the vtable treatment, and indeed cause a kernel warning due to the (later) use of smp_processor_id(), so also introduce the PROC_TABLE macro for these which always use CPU 0's function pointers.
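To make the distinction concrete, this is roughly how the two macros end up being defined once the per-CPU vtable patch (two patches later in this series) is applied; in the present patch both still expand to the single 'processor' structure:

#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
#define PROC_VTABLE(f)	cpu_vtable[smp_processor_id()]->f	/* this CPU's ops */
#define PROC_TABLE(f)	cpu_vtable[0]->f	/* safe in preemptible context */
#else
#define PROC_VTABLE(f)	processor.f
#define PROC_TABLE(f)	processor.f
#endif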
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/proc-fns.h | 39 ++++++++++++++++++++++-----------
 arch/arm/kernel/setup.c         |  4 +---
 2 files changed, 27 insertions(+), 16 deletions(-)
diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index 19939e88efca..a1a71b068edc 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -23,7 +23,7 @@ struct mm_struct;
 /*
  * Don't change this structure - ASM code relies on it.
  */
-extern struct processor {
+struct processor {
 	/* MISC
 	 * get data abort address/flags
 	 */
@@ -79,9 +79,13 @@ extern struct processor {
 	unsigned int suspend_size;
 	void (*do_suspend)(void *);
 	void (*do_resume)(void *);
-} processor;
+};

 #ifndef MULTI_CPU
+static inline void init_proc_vtable(const struct processor *p)
+{
+}
+
 extern void cpu_proc_init(void);
 extern void cpu_proc_fin(void);
 extern int cpu_do_idle(void);
@@ -98,18 +102,27 @@ extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
 extern void cpu_do_suspend(void *);
 extern void cpu_do_resume(void *);
 #else
-#define cpu_proc_init processor._proc_init
-#define cpu_check_bugs processor.check_bugs
-#define cpu_proc_fin processor._proc_fin
-#define cpu_reset processor.reset
-#define cpu_do_idle processor._do_idle
-#define cpu_dcache_clean_area processor.dcache_clean_area
-#define cpu_set_pte_ext processor.set_pte_ext
-#define cpu_do_switch_mm processor.switch_mm

-/* These three are private to arch/arm/kernel/suspend.c */
-#define cpu_do_suspend processor.do_suspend
-#define cpu_do_resume processor.do_resume
+extern struct processor processor;
+#define PROC_VTABLE(f) processor.f
+#define PROC_TABLE(f) processor.f
+static inline void init_proc_vtable(const struct processor *p)
+{
+	processor = *p;
+}
+
+#define cpu_proc_init PROC_VTABLE(_proc_init)
+#define cpu_check_bugs PROC_VTABLE(check_bugs)
+#define cpu_proc_fin PROC_VTABLE(_proc_fin)
+#define cpu_reset PROC_VTABLE(reset)
+#define cpu_do_idle PROC_VTABLE(_do_idle)
+#define cpu_dcache_clean_area PROC_TABLE(dcache_clean_area)
+#define cpu_set_pte_ext PROC_TABLE(set_pte_ext)
+#define cpu_do_switch_mm PROC_VTABLE(switch_mm)
+
+/* These two are private to arch/arm/kernel/suspend.c */
+#define cpu_do_suspend PROC_VTABLE(do_suspend)
+#define cpu_do_resume PROC_VTABLE(do_resume)
 #endif

 extern void cpu_resume(void);
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 8d5c3a118abe..2eebb67fa08b 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -693,9 +693,7 @@ static void __init setup_processor(void)
 	cpu_name = list->cpu_name;
 	__cpu_architecture = __get_cpu_architecture();

-#ifdef MULTI_CPU
-	processor = *list->proc;
-#endif
+	init_proc_vtable(list->proc);
 #ifdef MULTI_TLB
 	cpu_tlb = *list->tlb;
 #endif
From: Russell King <rmk+kernel@armlinux.org.uk>
Commit 383fb3ee8024d596f488d2dbaf45e572897acbdb upstream.
In big.Little systems, some CPUs require the Spectre workarounds in paths such as the context switch, but other CPUs do not. In order to handle these differences, we need per-CPU vtables.
We are unable to use the kernel's per-CPU variables to support this as per-CPU is not initialised at times when we need access to the vtables, so we have to use an array indexed by logical CPU number.
We use an array-of-pointers to avoid having function pointers in the kernel's read/write .data section.
Note: Added include of linux/slab.h in arch/arm/kernel/smp.c.
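In outline, the hotplug flow this patch adds is as follows (condensed from the smp.c hunks below): the allocation has to happen on an already-running CPU that may sleep, while the vtable lookup has to run on the incoming CPU so that it reads its own MIDR.

/*
 * __cpu_up(cpu)                          - on a booted CPU, may sleep
 *     secondary_biglittle_prepare(cpu)
 *         cpu_vtable[cpu] = kzalloc(...)             allocate the slot
 *
 * secondary_start_kernel()               - on the incoming CPU
 *     secondary_biglittle_init()
 *         init_proc_vtable(lookup_processor(read_cpuid_id())->proc)
 *                                                    install its vtable
 */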
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/proc-fns.h | 23 +++++++++++++++++++++++
 arch/arm/kernel/setup.c         |  5 +++++
 arch/arm/kernel/smp.c           | 32 ++++++++++++++++++++++++++++++++
 arch/arm/mm/proc-v7-bugs.c      | 17 ++---------------
 4 files changed, 62 insertions(+), 15 deletions(-)
diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index a1a71b068edc..1bfcc3bcfc6d 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -104,12 +104,35 @@ extern void cpu_do_resume(void *);
 #else

 extern struct processor processor;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+#include <linux/smp.h>
+/*
+ * This can't be a per-cpu variable because we need to access it before
+ * per-cpu has been initialised. We have a couple of functions that are
+ * called in a pre-emptible context, and so can't use smp_processor_id()
+ * there, hence PROC_TABLE(). We insist in init_proc_vtable() that the
+ * function pointers for these are identical across all CPUs.
+ */
+extern struct processor *cpu_vtable[];
+#define PROC_VTABLE(f) cpu_vtable[smp_processor_id()]->f
+#define PROC_TABLE(f) cpu_vtable[0]->f
+static inline void init_proc_vtable(const struct processor *p)
+{
+	unsigned int cpu = smp_processor_id();
+	*cpu_vtable[cpu] = *p;
+	WARN_ON_ONCE(cpu_vtable[cpu]->dcache_clean_area !=
+		     cpu_vtable[0]->dcache_clean_area);
+	WARN_ON_ONCE(cpu_vtable[cpu]->set_pte_ext !=
+		     cpu_vtable[0]->set_pte_ext);
+}
+#else
 #define PROC_VTABLE(f) processor.f
 #define PROC_TABLE(f) processor.f
 static inline void init_proc_vtable(const struct processor *p)
 {
 	processor = *p;
 }
+#endif

 #define cpu_proc_init PROC_VTABLE(_proc_init)
 #define cpu_check_bugs PROC_VTABLE(check_bugs)
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 2eebb67fa08b..4764742db7b0 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -115,6 +115,11 @@ EXPORT_SYMBOL(elf_hwcap2);

 #ifdef MULTI_CPU
 struct processor processor __ro_after_init;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+struct processor *cpu_vtable[NR_CPUS] = {
+	[0] = &processor,
+};
+#endif
 #endif
 #ifdef MULTI_TLB
 struct cpu_tlb_fns cpu_tlb __ro_after_init;
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 4b129aac7233..8faf869e9fb2 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -27,6 +27,7 @@
 #include <linux/completion.h>
 #include <linux/cpufreq.h>
 #include <linux/irq_work.h>
+#include <linux/slab.h>

 #include <linux/atomic.h>
 #include <asm/bugs.h>
@@ -40,6 +41,7 @@
 #include <asm/mmu_context.h>
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
+#include <asm/procinfo.h>
 #include <asm/processor.h>
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
@@ -100,6 +102,30 @@ static unsigned long get_arch_pgd(pgd_t *pgd)
 #endif
 }

+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+	if (!cpu_vtable[cpu])
+		cpu_vtable[cpu] = kzalloc(sizeof(*cpu_vtable[cpu]), GFP_KERNEL);
+
+	return cpu_vtable[cpu] ? 0 : -ENOMEM;
+}
+
+static void secondary_biglittle_init(void)
+{
+	init_proc_vtable(lookup_processor(read_cpuid_id())->proc);
+}
+#else
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+	return 0;
+}
+
+static void secondary_biglittle_init(void)
+{
+}
+#endif
+
 int __cpu_up(unsigned int cpu, struct task_struct *idle)
 {
 	int ret;
@@ -107,6 +133,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	if (!smp_ops.smp_boot_secondary)
 		return -ENOSYS;

+	ret = secondary_biglittle_prepare(cpu);
+	if (ret)
+		return ret;
+
 	/*
 	 * We need to tell the secondary core where to find
 	 * its stack and the page tables.
@@ -358,6 +388,8 @@ asmlinkage void secondary_start_kernel(void)
 	struct mm_struct *mm = &init_mm;
 	unsigned int cpu;

+	secondary_biglittle_init();
+
 	/*
 	 * The identity mapping is uncached (strongly ordered), so
 	 * switch away from it before attempting any exclusive accesses.
diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
index 5544b82a2e7a..9a07916af8dd 100644
--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -52,8 +52,6 @@ static void cpu_v7_spectre_init(void)
 	case ARM_CPU_PART_CORTEX_A17:
 	case ARM_CPU_PART_CORTEX_A73:
 	case ARM_CPU_PART_CORTEX_A75:
-		if (processor.switch_mm != cpu_v7_bpiall_switch_mm)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			harden_branch_predictor_bpiall;
 		spectre_v2_method = "BPIALL";
@@ -61,8 +59,6 @@ static void cpu_v7_spectre_init(void)

 	case ARM_CPU_PART_CORTEX_A15:
 	case ARM_CPU_PART_BRAHMA_B15:
-		if (processor.switch_mm != cpu_v7_iciallu_switch_mm)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			harden_branch_predictor_iciallu;
 		spectre_v2_method = "ICIALLU";
@@ -88,11 +84,9 @@ static void cpu_v7_spectre_init(void)
 					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 != 0)
 			break;
-		if (processor.switch_mm != cpu_v7_hvc_switch_mm && cpu)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			call_hvc_arch_workaround_1;
-		processor.switch_mm = cpu_v7_hvc_switch_mm;
+		cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
 		spectre_v2_method = "hypervisor";
 		break;

@@ -101,11 +95,9 @@ static void cpu_v7_spectre_init(void)
 					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 != 0)
 			break;
-		if (processor.switch_mm != cpu_v7_smc_switch_mm && cpu)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			call_smc_arch_workaround_1;
-		processor.switch_mm = cpu_v7_smc_switch_mm;
+		cpu_do_switch_mm = cpu_v7_smc_switch_mm;
 		spectre_v2_method = "firmware";
 		break;

@@ -119,11 +111,6 @@ static void cpu_v7_spectre_init(void)
 	if (spectre_v2_method)
 		pr_info("CPU%u: Spectre v2: using %s workaround\n",
 			smp_processor_id(), spectre_v2_method);
-	return;
-
-bl_error:
-	pr_err("CPU%u: Spectre v2: incorrect context switching function, system vulnerable\n",
-	       cpu);
 }
 #else
 static void cpu_v7_spectre_init(void)
From: Russell King <rmk+kernel@armlinux.org.uk>
Commit 3a4d0c2172bcf15b7a3d9d498b2b355f9864286b upstream.
Marek Szyprowski reported problems with CPU hotplug in current kernels. This was tracked down to the processor vtables being located in an init section, and therefore discarded after kernel boot, despite being required after boot to properly initialise the non-boot CPUs.
Arrange for these tables to end up in .rodata when required.
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Krzysztof Kozlowski <krzk@kernel.org>
Fixes: 383fb3ee8024 ("ARM: spectre-v2: per-CPU vtables to work around big.Little systems")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/mm/proc-macros.S | 10 ++++++++++
 1 file changed, 10 insertions(+)
diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
index 7d9176c4a21d..7be1d7921342 100644
--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -275,6 +275,13 @@
 	.endm

 	.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0, bugs=0
+/*
+ * If we are building for big.Little with branch predictor hardening,
+ * we need the processor function tables to remain available after boot.
+ */
+#if 1 // defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+	.section ".rodata"
+#endif
 	.type	\name()_processor_functions, #object
 	.align 2
 ENTRY(\name()_processor_functions)
@@ -310,6 +317,9 @@ ENTRY(\name()_processor_functions)
 	.endif

 	.size	\name()_processor_functions, . - \name()_processor_functions
+#if 1 // defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+	.previous
+#endif
 	.endm

 	.macro define_cache_functions name:req
From: Russell King <rmk+kernel@armlinux.org.uk>
Commit d6951f582cc50ba0ad22ef46b599740966599b14 upstream.
The intention in the previous patch was to only place the processor tables in the .rodata section if big.Little was being built and we wanted the branch target hardening, but instead (due to the way it was tested) it ended up always placing the tables into the .rodata section.
Although harmless, let's correct this anyway.
Fixes: 3a4d0c2172bc ("ARM: ensure that processor vtables is not lost after boot")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/mm/proc-macros.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
index 7be1d7921342..f8bb65032b79 100644
--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -279,7 +279,7 @@
  * If we are building for big.Little with branch predictor hardening,
  * we need the processor function tables to remain available after boot.
  */
-#if 1 // defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
 	.section ".rodata"
 #endif
 	.type	\name()_processor_functions, #object
@@ -317,7 +317,7 @@ ENTRY(\name()_processor_functions)
 	.endif

 	.size	\name()_processor_functions, . - \name()_processor_functions
-#if 1 // defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
 	.previous
 #endif
 	.endm
Hi David,
On 14/02/2019 14:49, David Long wrote:
From: "David A. Long" dave.long@linaro.org
V4.9 backport of spectre patches from Russell M. King's spectre branch. Patches have been kvm-unit-test'ed on an arndale, run through kernelci, and handed off to ARM for functional testing.
As for 4.14 and 4.19, I reviewed those patches and checked the code generation for Spectre-v1.1. So for all the patches:
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
However, 4.9 stable does not boot on TC2 (the arm32 big.Little platform I have at hand), even without this backport (so at least it isn't these patches breaking things). So I've been unable to test the Spectre-v2 big.Little patches.
Thanks,
On Thu, Feb 14, 2019 at 09:49:14AM -0500, David Long wrote:
From: "David A. Long" dave.long@linaro.org
V4.9 backport of spectre patches from Russell M. King's spectre branch. Patches have been kvm-unit-test'ed on an arndale, run through kernelci, and handed off to ARM for functional testing.
Julien Thierry (9): ARM: 8789/1: signal: copy registers using __copy_to_user() ARM: 8791/1: vfp: use __copy_to_user() when saving VFP state ARM: 8792/1: oabi-compat: copy oabi events using __copy_to_user() ARM: 8793/1: signal: replace __put_user_error with __put_user ARM: 8794/1: uaccess: Prevent speculative use of the current addr_limit ARM: 8795/1: spectre-v1.1: use put_user() for __put_user() ARM: 8796/1: spectre-v1,v1.1: provide helpers for address sanitization ARM: 8797/1: spectre-v1.1: harden __copy_to_user ARM: 8810/1: vfp: Fix wrong assignement to ufp_exc
Russell King (7): ARM: make lookup_processor_type() non-__init ARM: split out processor lookup ARM: clean up per-processor check_bugs method call ARM: add PROC_VTABLE and PROC_TABLE macros ARM: spectre-v2: per-CPU vtables to work around big.Little systems ARM: ensure that processor vtables is not lost after boot ARM: fix the cockup in the previous patch
Queued for 4.9, thank you.
--
Thanks,
Sasha
linux-stable-mirror@lists.linaro.org