This is a note to let you know that I've just added the patch titled
x86/nospec: Fix header guards names
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-nospec-fix-header-guards-names.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Thu Feb 8 03:30:27 CET 2018
From: Borislav Petkov <bp@suse.de>
Date: Fri, 26 Jan 2018 13:11:37 +0100
Subject: x86/nospec: Fix header guards names
From: Borislav Petkov <bp@suse.de>
(cherry picked from commit 7a32fc51ca938e67974cbb9db31e1a43f98345a9)
... to adhere to the _ASM_X86_ naming scheme.
No functional change.
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: riel@redhat.com
Cc: ak@linux.intel.com
Cc: peterz@infradead.org
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: jikos@kernel.org
Cc: luto@amacapital.net
Cc: dave.hansen@intel.com
Cc: torvalds@linux-foundation.org
Cc: keescook@google.com
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: tim.c.chen@linux.intel.com
Cc: gregkh@linux-foundation.org
Cc: pjt@google.com
Link: https://lkml.kernel.org/r/20180126121139.31959-3-bp@alien8.de
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/include/asm/nospec-branch.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __NOSPEC_BRANCH_H__
-#define __NOSPEC_BRANCH_H__
+#ifndef _ASM_X86_NOSPEC_BRANCH_H_
+#define _ASM_X86_NOSPEC_BRANCH_H_
#include <asm/alternative.h>
#include <asm/alternative-asm.h>
@@ -232,4 +232,4 @@ static inline void indirect_branch_predi
}
#endif /* __ASSEMBLY__ */
-#endif /* __NOSPEC_BRANCH_H__ */
+#endif /* _ASM_X86_NOSPEC_BRANCH_H_ */
Patches currently in stable-queue which might be from bp@suse.de are
queue-4.9/x86-cpufeatures-add-intel-feature-bits-for-speculation-control.patch
queue-4.9/x86-retpoline-simplify-vmexit_fill_rsb.patch
queue-4.9/x86-cpufeatures-clean-up-spectre-v2-related-cpuid-flags.patch
queue-4.9/x86-cpufeatures-add-cpuid_7_edx-cpuid-leaf.patch
queue-4.9/x86-microcode-amd-do-not-load-when-running-on-a-hypervisor.patch
queue-4.9/x86-nospec-fix-header-guards-names.patch
queue-4.9/x86-alternative-print-unadorned-pointers.patch
queue-4.9/x86-spectre-fix-spelling-mistake-vunerable-vulnerable.patch
queue-4.9/x86-pti-mark-constant-arrays-as-__initconst.patch
queue-4.9/x86-bugs-drop-one-mitigation-from-dmesg.patch
queue-4.9/x86-pti-do-not-enable-pti-on-cpus-which-are-not-vulnerable-to-meltdown.patch
This is a note to let you know that I've just added the patch titled
x86: Introduce barrier_nospec
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-introduce-barrier_nospec.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Thu Feb 8 03:32:24 CET 2018
From: Dan Williams <dan.j.williams@intel.com>
Date: Mon, 29 Jan 2018 17:02:33 -0800
Subject: x86: Introduce barrier_nospec
From: Dan Williams <dan.j.williams@intel.com>
(cherry picked from commit b3d7ad85b80bbc404635dca80f5b129f6242bc7a)
Lift the open-coded form of this instruction sequence out of
rdtsc_ordered() and turn it into a generic barrier primitive, barrier_nospec().
One of the mitigations for Spectre variant 1 vulnerabilities is to fence
speculative execution after successfully validating a bounds check. That is,
force the result of a bounds check to resolve in the instruction pipeline
to ensure speculative execution honors that result before potentially
operating on out-of-bounds data.
No functional changes.
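To illustrate the intended usage, a sketch of a hypothetical caller
('table', 'size' and read_entry() are made-up names; only barrier_nospec()
itself comes from this patch):

    /* Sketch: fence a successfully validated bounds check. */
    static int read_entry(unsigned long idx)
    {
            if (idx >= size)
                    return -EINVAL;
            barrier_nospec();       /* check resolves before the load */
            return table[idx];
    }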
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Suggested-by: Andi Kleen <ak@linux.intel.com>
Suggested-by: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: kernel-hardening@lists.openwall.com
Cc: gregkh@linuxfoundation.org
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: alan@linux.intel.com
Link: https://lkml.kernel.org/r/151727415361.33451.9049453007262764675.stgit@dwil…
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/include/asm/barrier.h | 4 ++++
arch/x86/include/asm/msr.h | 3 +--
2 files changed, 5 insertions(+), 2 deletions(-)
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -47,6 +47,10 @@ static inline unsigned long array_index_
/* Override the default implementation from linux/nospec.h. */
#define array_index_mask_nospec array_index_mask_nospec
+/* Prevent speculative execution past this barrier. */
+#define barrier_nospec() alternative_2("", "mfence", X86_FEATURE_MFENCE_RDTSC, \
+ "lfence", X86_FEATURE_LFENCE_RDTSC)
+
#ifdef CONFIG_X86_PPRO_FENCE
#define dma_rmb() rmb()
#else
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -188,8 +188,7 @@ static __always_inline unsigned long lon
* that some other imaginary CPU is updating continuously with a
* time stamp.
*/
- alternative_2("", "mfence", X86_FEATURE_MFENCE_RDTSC,
- "lfence", X86_FEATURE_LFENCE_RDTSC);
+ barrier_nospec();
return rdtsc();
}
Patches currently in stable-queue which might be from dan.j.williams@intel.com are
queue-4.9/kvm-vmx-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/kvm-x86-add-ibpb-support.patch
queue-4.9/kvm-svm-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/x86-paravirt-remove-noreplace-paravirt-cmdline-option.patch
queue-4.9/documentation-document-array_index_nospec.patch
queue-4.9/x86-usercopy-replace-open-coded-stac-clac-with-__uaccess_-begin-end.patch
queue-4.9/kvm-x86-make-indirect-calls-in-emulator-speculation-safe.patch
queue-4.9/vfs-fdtable-prevent-bounds-check-bypass-via-speculative-execution.patch
queue-4.9/x86-uaccess-use-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/x86-implement-array_index_mask_nospec.patch
queue-4.9/array_index_nospec-sanitize-speculative-array-de-references.patch
queue-4.9/kvm-vmx-make-indirect-call-speculation-safe.patch
queue-4.9/x86-kvm-update-spectre-v1-mitigation.patch
queue-4.9/x86-get_user-use-pointer-masking-to-limit-speculation.patch
queue-4.9/x86-syscall-sanitize-syscall-table-de-references-under-speculation.patch
queue-4.9/x86-spectre-report-get_user-mitigation-for-spectre_v1.patch
queue-4.9/x86-introduce-barrier_nospec.patch
queue-4.9/kvm-vmx-emulate-msr_ia32_arch_capabilities.patch
queue-4.9/x86-introduce-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/nl80211-sanitize-array-index-in-parse_txq_params.patch
This is a note to let you know that I've just added the patch titled
x86/kvm: Update spectre-v1 mitigation
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kvm-update-spectre-v1-mitigation.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Thu Feb 8 03:32:24 CET 2018
From: Dan Williams <dan.j.williams@intel.com>
Date: Wed, 31 Jan 2018 17:47:03 -0800
Subject: x86/kvm: Update spectre-v1 mitigation
From: Dan Williams <dan.j.williams@intel.com>
(cherry picked from commit 085331dfc6bbe3501fb936e657331ca943827600)
Commit 75f139aaf896 "KVM: x86: Add memory barrier on vmcs field lookup"
added a raw 'asm("lfence");' to prevent a bounds check bypass of
'vmcs_field_to_offset_table'.
The lfence can be avoided in this path by using the array_index_nospec()
helper designed for these types of fixes.
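The generic shape of that pattern, sketched with made-up names ('idx',
'nr_entries', 'table' are illustrative only; the real hunk follows below):

    if (idx >= nr_entries)
            return -ENOENT;
    /* Clamp idx so it stays below nr_entries even under speculation. */
    idx = array_index_nospec(idx, nr_entries);
    val = table[idx];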
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Andrew Honig <ahonig@google.com>
Cc: kvm@vger.kernel.org
Cc: Jim Mattson <jmattson@google.com>
Link: https://lkml.kernel.org/r/151744959670.6342.3001723920950249067.stgit@dwill…
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/kvm/vmx.c | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -33,6 +33,7 @@
#include <linux/slab.h>
#include <linux/tboot.h>
#include <linux/hrtimer.h>
+#include <linux/nospec.h>
#include "kvm_cache_regs.h"
#include "x86.h"
@@ -856,21 +857,18 @@ static const unsigned short vmcs_field_t
static inline short vmcs_field_to_offset(unsigned long field)
{
- BUILD_BUG_ON(ARRAY_SIZE(vmcs_field_to_offset_table) > SHRT_MAX);
+ const size_t size = ARRAY_SIZE(vmcs_field_to_offset_table);
+ unsigned short offset;
- if (field >= ARRAY_SIZE(vmcs_field_to_offset_table))
+ BUILD_BUG_ON(size > SHRT_MAX);
+ if (field >= size)
return -ENOENT;
- /*
- * FIXME: Mitigation for CVE-2017-5753. To be replaced with a
- * generic mechanism.
- */
- asm("lfence");
-
- if (vmcs_field_to_offset_table[field] == 0)
+ field = array_index_nospec(field, size);
+ offset = vmcs_field_to_offset_table[field];
+ if (offset == 0)
return -ENOENT;
-
- return vmcs_field_to_offset_table[field];
+ return offset;
}
static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
Patches currently in stable-queue which might be from dan.j.williams@intel.com are
queue-4.9/kvm-vmx-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/kvm-x86-add-ibpb-support.patch
queue-4.9/kvm-svm-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/x86-paravirt-remove-noreplace-paravirt-cmdline-option.patch
queue-4.9/documentation-document-array_index_nospec.patch
queue-4.9/x86-usercopy-replace-open-coded-stac-clac-with-__uaccess_-begin-end.patch
queue-4.9/kvm-x86-make-indirect-calls-in-emulator-speculation-safe.patch
queue-4.9/vfs-fdtable-prevent-bounds-check-bypass-via-speculative-execution.patch
queue-4.9/x86-uaccess-use-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/x86-implement-array_index_mask_nospec.patch
queue-4.9/array_index_nospec-sanitize-speculative-array-de-references.patch
queue-4.9/kvm-vmx-make-indirect-call-speculation-safe.patch
queue-4.9/x86-kvm-update-spectre-v1-mitigation.patch
queue-4.9/x86-get_user-use-pointer-masking-to-limit-speculation.patch
queue-4.9/x86-syscall-sanitize-syscall-table-de-references-under-speculation.patch
queue-4.9/x86-spectre-report-get_user-mitigation-for-spectre_v1.patch
queue-4.9/x86-introduce-barrier_nospec.patch
queue-4.9/kvm-vmx-emulate-msr_ia32_arch_capabilities.patch
queue-4.9/x86-introduce-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/nl80211-sanitize-array-index-in-parse_txq_params.patch
This is a note to let you know that I've just added the patch titled
x86: Introduce __uaccess_begin_nospec() and uaccess_try_nospec
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-introduce-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Thu Feb 8 03:32:24 CET 2018
From: Dan Williams <dan.j.williams@intel.com>
Date: Mon, 29 Jan 2018 17:02:39 -0800
Subject: x86: Introduce __uaccess_begin_nospec() and uaccess_try_nospec
From: Dan Williams <dan.j.williams@intel.com>
(cherry picked from commit b3bbfb3fb5d25776b8e3f361d2eedaabb0b496cd)
For __get_user() paths, do not allow the kernel to speculate on the value
of a user controlled pointer. In addition to the 'stac' instruction for
Supervisor Mode Access Protection (SMAP), a barrier_nospec() causes the
access_ok() result to resolve in the pipeline before the CPU might take any
speculative action on the pointer value. Given the cost of 'stac' the
speculation barrier is placed after 'stac' to hopefully overlap the cost of
disabling SMAP with the cost of flushing the instruction pipeline.
Since __get_user is a major kernel interface that deals with user
controlled pointers, the __uaccess_begin_nospec() mechanism will prevent
speculative execution past an access_ok() permission check. While
speculative execution past access_ok() is not enough to lead to a kernel
memory leak, it is a necessary precondition.
To be clear, __uaccess_begin_nospec() is addressing a class of potential
problems near __get_user() usages.
Note that while the barrier_nospec() in __uaccess_begin_nospec() is used
to protect __get_user(), pointer masking similar to array_index_nospec()
will be used for get_user() since it incorporates a bounds check near the
usage.
uaccess_try_nospec provides the same mechanism for get_user_try.
No functional changes.
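A sketch of how the new helper composes with an access_ok() check
(hypothetical call site; only __uaccess_begin_nospec() is from this patch):

    if (!access_ok(VERIFY_READ, uptr, sizeof(*uptr)))
            return -EFAULT;
    __uaccess_begin_nospec();       /* stac, then barrier_nospec() */
    /* ... user-space loads happen here ... */
    __uaccess_end();                /* clac */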
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Suggested-by: Andi Kleen <ak@linux.intel.com>
Suggested-by: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: kernel-hardening@lists.openwall.com
Cc: gregkh@linuxfoundation.org
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: alan@linux.intel.com
Link: https://lkml.kernel.org/r/151727415922.33451.5796614273104346583.stgit@dwil…
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/include/asm/uaccess.h | 9 +++++++++
1 file changed, 9 insertions(+)
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -123,6 +123,11 @@ extern int __get_user_bad(void);
#define __uaccess_begin() stac()
#define __uaccess_end() clac()
+#define __uaccess_begin_nospec() \
+({ \
+ stac(); \
+ barrier_nospec(); \
+})
/*
* This is a type: either unsigned long, if the argument fits into
@@ -474,6 +479,10 @@ struct __large_struct { unsigned long bu
__uaccess_begin(); \
barrier();
+#define uaccess_try_nospec do { \
+ current->thread.uaccess_err = 0; \
+ __uaccess_begin_nospec(); \
+
#define uaccess_catch(err) \
__uaccess_end(); \
(err) |= (current->thread.uaccess_err ? -EFAULT : 0); \
Patches currently in stable-queue which might be from dan.j.williams@intel.com are
queue-4.9/kvm-vmx-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/kvm-x86-add-ibpb-support.patch
queue-4.9/kvm-svm-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/x86-paravirt-remove-noreplace-paravirt-cmdline-option.patch
queue-4.9/documentation-document-array_index_nospec.patch
queue-4.9/x86-usercopy-replace-open-coded-stac-clac-with-__uaccess_-begin-end.patch
queue-4.9/kvm-x86-make-indirect-calls-in-emulator-speculation-safe.patch
queue-4.9/vfs-fdtable-prevent-bounds-check-bypass-via-speculative-execution.patch
queue-4.9/x86-uaccess-use-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/x86-implement-array_index_mask_nospec.patch
queue-4.9/array_index_nospec-sanitize-speculative-array-de-references.patch
queue-4.9/kvm-vmx-make-indirect-call-speculation-safe.patch
queue-4.9/x86-kvm-update-spectre-v1-mitigation.patch
queue-4.9/x86-get_user-use-pointer-masking-to-limit-speculation.patch
queue-4.9/x86-syscall-sanitize-syscall-table-de-references-under-speculation.patch
queue-4.9/x86-spectre-report-get_user-mitigation-for-spectre_v1.patch
queue-4.9/x86-introduce-barrier_nospec.patch
queue-4.9/kvm-vmx-emulate-msr_ia32_arch_capabilities.patch
queue-4.9/x86-introduce-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/nl80211-sanitize-array-index-in-parse_txq_params.patch
This is a note to let you know that I've just added the patch titled
x86: Implement array_index_mask_nospec
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-implement-array_index_mask_nospec.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Thu Feb 8 03:32:24 CET 2018
From: Dan Williams <dan.j.williams@intel.com>
Date: Mon, 29 Jan 2018 17:02:28 -0800
Subject: x86: Implement array_index_mask_nospec
From: Dan Williams <dan.j.williams@intel.com>
(cherry picked from commit babdde2698d482b6c0de1eab4f697cf5856c5859)
array_index_nospec() uses a mask to sanitize user controllable array
indexes, i.e. generate a 0 mask if 'index' >= 'size', and a ~0 mask
otherwise. The default array_index_mask_nospec() handles the carry-bit
from the (index - size) result in software.
The x86 array_index_mask_nospec() does the same, but the carry-bit is
handled in the processor CF flag without conditional instructions in the
control flow.
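For comparison, the generic software fallback from the companion
linux/nospec.h patch computes the mask along these lines (a sketch, not
part of this diff):

    /* ~0UL if index < size, 0 otherwise; no branches in the data path. */
    return ~(long)(index | (size - 1UL - index)) >> (BITS_PER_LONG - 1);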
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
Cc: gregkh@linuxfoundation.org
Cc: alan@linux.intel.com
Link: https://lkml.kernel.org/r/151727414808.33451.1873237130672785331.stgit@dwil…
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/include/asm/barrier.h | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -23,6 +23,30 @@
#define wmb() asm volatile("sfence" ::: "memory")
#endif
+/**
+ * array_index_mask_nospec() - generate a mask that is ~0UL when the
+ * bounds check succeeds and 0 otherwise
+ * @index: array element index
+ * @size: number of elements in array
+ *
+ * Returns:
+ * 0 - (index < size)
+ */
+static inline unsigned long array_index_mask_nospec(unsigned long index,
+ unsigned long size)
+{
+ unsigned long mask;
+
+ asm ("cmp %1,%2; sbb %0,%0;"
+ :"=r" (mask)
+ :"r"(size),"r" (index)
+ :"cc");
+ return mask;
+}
+
+/* Override the default implementation from linux/nospec.h. */
+#define array_index_mask_nospec array_index_mask_nospec
+
#ifdef CONFIG_X86_PPRO_FENCE
#define dma_rmb() rmb()
#else
Patches currently in stable-queue which might be from dan.j.williams@intel.com are
queue-4.9/kvm-vmx-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/kvm-x86-add-ibpb-support.patch
queue-4.9/kvm-svm-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/x86-paravirt-remove-noreplace-paravirt-cmdline-option.patch
queue-4.9/documentation-document-array_index_nospec.patch
queue-4.9/x86-usercopy-replace-open-coded-stac-clac-with-__uaccess_-begin-end.patch
queue-4.9/kvm-x86-make-indirect-calls-in-emulator-speculation-safe.patch
queue-4.9/vfs-fdtable-prevent-bounds-check-bypass-via-speculative-execution.patch
queue-4.9/x86-uaccess-use-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/x86-implement-array_index_mask_nospec.patch
queue-4.9/array_index_nospec-sanitize-speculative-array-de-references.patch
queue-4.9/kvm-vmx-make-indirect-call-speculation-safe.patch
queue-4.9/x86-kvm-update-spectre-v1-mitigation.patch
queue-4.9/x86-get_user-use-pointer-masking-to-limit-speculation.patch
queue-4.9/x86-syscall-sanitize-syscall-table-de-references-under-speculation.patch
queue-4.9/x86-spectre-report-get_user-mitigation-for-spectre_v1.patch
queue-4.9/x86-introduce-barrier_nospec.patch
queue-4.9/kvm-vmx-emulate-msr_ia32_arch_capabilities.patch
queue-4.9/x86-introduce-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/nl80211-sanitize-array-index-in-parse_txq_params.patch
This is a note to let you know that I've just added the patch titled
x86/get_user: Use pointer masking to limit speculation
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-get_user-use-pointer-masking-to-limit-speculation.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Thu Feb 8 03:32:24 CET 2018
From: Dan Williams <dan.j.williams@intel.com>
Date: Mon, 29 Jan 2018 17:02:54 -0800
Subject: x86/get_user: Use pointer masking to limit speculation
From: Dan Williams <dan.j.williams@intel.com>
(cherry picked from commit c7f631cb07e7da06ac1d231ca178452339e32a94)
Quoting Linus:

    I do think that it would be a good idea to very expressly document
    the fact that it's not that the user access itself is unsafe. I do
    agree that things like "get_user()" want to be protected, but not
    because of any direct bugs or problems with get_user() and friends,
    but simply because get_user() is an excellent source of a pointer
    that is obviously controlled from a potentially attacking user
    space. So it's a prime candidate for then finding _subsequent_
    accesses that can then be used to perturb the cache.

Unlike the __get_user() case, get_user() includes the address limit check
near the pointer de-reference. With that locality the speculation can be
mitigated with pointer narrowing rather than a barrier, i.e.
array_index_nospec(). The narrowing is performed by:

    cmp %limit, %ptr
    sbb %mask, %mask
    and %mask, %ptr

With respect to speculation the value of %ptr is either less than %limit
or NULL.
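Walking that sequence through both outcomes (annotations added; the
instructions are the same as above):

    cmp %limit, %ptr    /* computes ptr - limit: CF=1 iff ptr < limit */
    sbb %mask, %mask    /* mask = -(CF): ~0 when in bounds, 0 when not */
    and %mask, %ptr     /* valid ptr passes through, bad ptr becomes NULL */

Even if the preceding jae is mispredicted, an out-of-bounds pointer is
therefore forced to NULL before any dereference.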
Co-developed-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: Kees Cook <keescook@chromium.org>
Cc: kernel-hardening@lists.openwall.com
Cc: gregkh@linuxfoundation.org
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: torvalds@linux-foundation.org
Cc: alan@linux.intel.com
Link: https://lkml.kernel.org/r/151727417469.33451.11804043010080838495.stgit@dwi…
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/lib/getuser.S | 10 ++++++++++
1 file changed, 10 insertions(+)
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -39,6 +39,8 @@ ENTRY(__get_user_1)
mov PER_CPU_VAR(current_task), %_ASM_DX
cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
jae bad_get_user
+ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
+ and %_ASM_DX, %_ASM_AX
ASM_STAC
1: movzbl (%_ASM_AX),%edx
xor %eax,%eax
@@ -53,6 +55,8 @@ ENTRY(__get_user_2)
mov PER_CPU_VAR(current_task), %_ASM_DX
cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
jae bad_get_user
+ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
+ and %_ASM_DX, %_ASM_AX
ASM_STAC
2: movzwl -1(%_ASM_AX),%edx
xor %eax,%eax
@@ -67,6 +71,8 @@ ENTRY(__get_user_4)
mov PER_CPU_VAR(current_task), %_ASM_DX
cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
jae bad_get_user
+ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
+ and %_ASM_DX, %_ASM_AX
ASM_STAC
3: movl -3(%_ASM_AX),%edx
xor %eax,%eax
@@ -82,6 +88,8 @@ ENTRY(__get_user_8)
mov PER_CPU_VAR(current_task), %_ASM_DX
cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
jae bad_get_user
+ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
+ and %_ASM_DX, %_ASM_AX
ASM_STAC
4: movq -7(%_ASM_AX),%rdx
xor %eax,%eax
@@ -93,6 +101,8 @@ ENTRY(__get_user_8)
mov PER_CPU_VAR(current_task), %_ASM_DX
cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
jae bad_get_user_8
+ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
+ and %_ASM_DX, %_ASM_AX
ASM_STAC
4: movl -7(%_ASM_AX),%edx
5: movl -3(%_ASM_AX),%ecx
Patches currently in stable-queue which might be from dan.j.williams@intel.com are
queue-4.9/kvm-vmx-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/kvm-x86-add-ibpb-support.patch
queue-4.9/kvm-svm-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/x86-paravirt-remove-noreplace-paravirt-cmdline-option.patch
queue-4.9/documentation-document-array_index_nospec.patch
queue-4.9/x86-usercopy-replace-open-coded-stac-clac-with-__uaccess_-begin-end.patch
queue-4.9/kvm-x86-make-indirect-calls-in-emulator-speculation-safe.patch
queue-4.9/vfs-fdtable-prevent-bounds-check-bypass-via-speculative-execution.patch
queue-4.9/x86-uaccess-use-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/x86-implement-array_index_mask_nospec.patch
queue-4.9/array_index_nospec-sanitize-speculative-array-de-references.patch
queue-4.9/kvm-vmx-make-indirect-call-speculation-safe.patch
queue-4.9/x86-kvm-update-spectre-v1-mitigation.patch
queue-4.9/x86-get_user-use-pointer-masking-to-limit-speculation.patch
queue-4.9/x86-syscall-sanitize-syscall-table-de-references-under-speculation.patch
queue-4.9/x86-spectre-report-get_user-mitigation-for-spectre_v1.patch
queue-4.9/x86-introduce-barrier_nospec.patch
queue-4.9/kvm-vmx-emulate-msr_ia32_arch_capabilities.patch
queue-4.9/x86-introduce-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/nl80211-sanitize-array-index-in-parse_txq_params.patch
This is a note to let you know that I've just added the patch titled
x86/entry/64: Push extra regs right away
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-entry-64-push-extra-regs-right-away.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Thu Feb 8 03:30:27 CET 2018
From: Andy Lutomirski <luto@kernel.org>
Date: Sun, 28 Jan 2018 10:38:49 -0800
Subject: x86/entry/64: Push extra regs right away
From: Andy Lutomirski <luto@kernel.org>
(cherry picked from commit d1f7732009e0549eedf8ea1db948dc37be77fd46)
With the fast path removed there is no point in splitting the push of the
normal and the extra register set. Just push the extra regs right away.
[ tglx: Split out from 'x86/entry/64: Remove the SYSCALL64 fast path' ]
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kernel Hardening <kernel-hardening@lists.openwall.com>
Link: https://lkml.kernel.org/r/462dff8d4d64dfbfc851fbf3130641809d980ecd.15171644…
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -177,10 +177,14 @@ GLOBAL(entry_SYSCALL_64_after_swapgs)
pushq %r9 /* pt_regs->r9 */
pushq %r10 /* pt_regs->r10 */
pushq %r11 /* pt_regs->r11 */
- sub $(6*8), %rsp /* pt_regs->bp, bx, r12-15 not saved */
+ pushq %rbx /* pt_regs->rbx */
+ pushq %rbp /* pt_regs->rbp */
+ pushq %r12 /* pt_regs->r12 */
+ pushq %r13 /* pt_regs->r13 */
+ pushq %r14 /* pt_regs->r14 */
+ pushq %r15 /* pt_regs->r15 */
/* IRQs are off. */
- SAVE_EXTRA_REGS
movq %rsp, %rdi
call do_syscall_64 /* returns with IRQs disabled */
Patches currently in stable-queue which might be from luto@kernel.org are
queue-4.9/x86-entry-64-push-extra-regs-right-away.patch
queue-4.9/kvm-vmx-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/kvm-x86-add-ibpb-support.patch
queue-4.9/kvm-svm-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/x86-paravirt-remove-noreplace-paravirt-cmdline-option.patch
queue-4.9/x86-asm-move-status-from-thread_struct-to-thread_info.patch
queue-4.9/kvm-x86-make-indirect-calls-in-emulator-speculation-safe.patch
queue-4.9/x86-entry-64-remove-the-syscall64-fast-path.patch
queue-4.9/x86-asm-fix-inline-asm-call-constraints-for-gcc-4.4.patch
queue-4.9/kvm-vmx-make-indirect-call-speculation-safe.patch
queue-4.9/x86-spectre-fix-spelling-mistake-vunerable-vulnerable.patch
queue-4.9/x86-get_user-use-pointer-masking-to-limit-speculation.patch
queue-4.9/x86-syscall-sanitize-syscall-table-de-references-under-speculation.patch
queue-4.9/x86-pti-make-unpoison-of-pgd-for-trusted-boot-work-for-real.patch
queue-4.9/x86-pti-mark-constant-arrays-as-__initconst.patch
queue-4.9/kvm-vmx-emulate-msr_ia32_arch_capabilities.patch
This is a note to let you know that I've just added the patch titled
x86/entry/64: Remove the SYSCALL64 fast path
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-entry-64-remove-the-syscall64-fast-path.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Thu Feb 8 03:30:27 CET 2018
From: Andy Lutomirski <luto@kernel.org>
Date: Sun, 28 Jan 2018 10:38:49 -0800
Subject: x86/entry/64: Remove the SYSCALL64 fast path
From: Andy Lutomirski <luto@kernel.org>
(cherry picked from commit 21d375b6b34ff511a507de27bf316b3dde6938d9)
The SYSCALL64 fast path was a nice, if small, optimization back in the good
old days when syscalls were actually reasonably fast. Now there is PTI to
slow everything down, and indirect branches are verboten, making everything
messier. The retpoline code in the fast path is particularly nasty.
Just get rid of the fast path. The slow path is barely slower.
[ tglx: Split out the 'push all extra regs' part ]
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kernel Hardening <kernel-hardening@lists.openwall.com>
Link: https://lkml.kernel.org/r/462dff8d4d64dfbfc851fbf3130641809d980ecd.15171644…
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 123 --------------------------------------------
arch/x86/entry/syscall_64.c | 7 --
2 files changed, 3 insertions(+), 127 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -179,94 +179,11 @@ GLOBAL(entry_SYSCALL_64_after_swapgs)
pushq %r11 /* pt_regs->r11 */
sub $(6*8), %rsp /* pt_regs->bp, bx, r12-15 not saved */
- /*
- * If we need to do entry work or if we guess we'll need to do
- * exit work, go straight to the slow path.
- */
- movq PER_CPU_VAR(current_task), %r11
- testl $_TIF_WORK_SYSCALL_ENTRY|_TIF_ALLWORK_MASK, TASK_TI_flags(%r11)
- jnz entry_SYSCALL64_slow_path
-
-entry_SYSCALL_64_fastpath:
- /*
- * Easy case: enable interrupts and issue the syscall. If the syscall
- * needs pt_regs, we'll call a stub that disables interrupts again
- * and jumps to the slow path.
- */
- TRACE_IRQS_ON
- ENABLE_INTERRUPTS(CLBR_NONE)
-#if __SYSCALL_MASK == ~0
- cmpq $__NR_syscall_max, %rax
-#else
- andl $__SYSCALL_MASK, %eax
- cmpl $__NR_syscall_max, %eax
-#endif
- ja 1f /* return -ENOSYS (already in pt_regs->ax) */
- movq %r10, %rcx
-
- /*
- * This call instruction is handled specially in stub_ptregs_64.
- * It might end up jumping to the slow path. If it jumps, RAX
- * and all argument registers are clobbered.
- */
-#ifdef CONFIG_RETPOLINE
- movq sys_call_table(, %rax, 8), %rax
- call __x86_indirect_thunk_rax
-#else
- call *sys_call_table(, %rax, 8)
-#endif
-.Lentry_SYSCALL_64_after_fastpath_call:
-
- movq %rax, RAX(%rsp)
-1:
-
- /*
- * If we get here, then we know that pt_regs is clean for SYSRET64.
- * If we see that no exit work is required (which we are required
- * to check with IRQs off), then we can go straight to SYSRET64.
- */
- DISABLE_INTERRUPTS(CLBR_NONE)
- TRACE_IRQS_OFF
- movq PER_CPU_VAR(current_task), %r11
- testl $_TIF_ALLWORK_MASK, TASK_TI_flags(%r11)
- jnz 1f
-
- LOCKDEP_SYS_EXIT
- TRACE_IRQS_ON /* user mode is traced as IRQs on */
- movq RIP(%rsp), %rcx
- movq EFLAGS(%rsp), %r11
- RESTORE_C_REGS_EXCEPT_RCX_R11
- /*
- * This opens a window where we have a user CR3, but are
- * running in the kernel. This makes using the CS
- * register useless for telling whether or not we need to
- * switch CR3 in NMIs. Normal interrupts are OK because
- * they are off here.
- */
- SWITCH_USER_CR3
- movq RSP(%rsp), %rsp
- USERGS_SYSRET64
-
-1:
- /*
- * The fast path looked good when we started, but something changed
- * along the way and we need to switch to the slow path. Calling
- * raise(3) will trigger this, for example. IRQs are off.
- */
- TRACE_IRQS_ON
- ENABLE_INTERRUPTS(CLBR_NONE)
- SAVE_EXTRA_REGS
- movq %rsp, %rdi
- call syscall_return_slowpath /* returns with IRQs disabled */
- jmp return_from_SYSCALL_64
-
-entry_SYSCALL64_slow_path:
/* IRQs are off. */
SAVE_EXTRA_REGS
movq %rsp, %rdi
call do_syscall_64 /* returns with IRQs disabled */
-return_from_SYSCALL_64:
RESTORE_EXTRA_REGS
TRACE_IRQS_IRETQ /* we're about to change IF */
@@ -339,6 +256,7 @@ return_from_SYSCALL_64:
syscall_return_via_sysret:
/* rcx and r11 are already restored (see code above) */
RESTORE_C_REGS_EXCEPT_RCX_R11
+
/*
* This opens a window where we have a user CR3, but are
* running in the kernel. This makes using the CS
@@ -363,45 +281,6 @@ opportunistic_sysret_failed:
jmp restore_c_regs_and_iret
END(entry_SYSCALL_64)
-ENTRY(stub_ptregs_64)
- /*
- * Syscalls marked as needing ptregs land here.
- * If we are on the fast path, we need to save the extra regs,
- * which we achieve by trying again on the slow path. If we are on
- * the slow path, the extra regs are already saved.
- *
- * RAX stores a pointer to the C function implementing the syscall.
- * IRQs are on.
- */
- cmpq $.Lentry_SYSCALL_64_after_fastpath_call, (%rsp)
- jne 1f
-
- /*
- * Called from fast path -- disable IRQs again, pop return address
- * and jump to slow path
- */
- DISABLE_INTERRUPTS(CLBR_NONE)
- TRACE_IRQS_OFF
- popq %rax
- jmp entry_SYSCALL64_slow_path
-
-1:
- JMP_NOSPEC %rax /* Called from C */
-END(stub_ptregs_64)
-
-.macro ptregs_stub func
-ENTRY(ptregs_\func)
- leaq \func(%rip), %rax
- jmp stub_ptregs_64
-END(ptregs_\func)
-.endm
-
-/* Instantiate ptregs_stub for each ptregs-using syscall */
-#define __SYSCALL_64_QUAL_(sym)
-#define __SYSCALL_64_QUAL_ptregs(sym) ptregs_stub sym
-#define __SYSCALL_64(nr, sym, qual) __SYSCALL_64_QUAL_##qual(sym)
-#include <asm/syscalls_64.h>
-
/*
* %rdi: prev task
* %rsi: next task
--- a/arch/x86/entry/syscall_64.c
+++ b/arch/x86/entry/syscall_64.c
@@ -6,14 +6,11 @@
#include <asm/asm-offsets.h>
#include <asm/syscall.h>
-#define __SYSCALL_64_QUAL_(sym) sym
-#define __SYSCALL_64_QUAL_ptregs(sym) ptregs_##sym
-
-#define __SYSCALL_64(nr, sym, qual) extern asmlinkage long __SYSCALL_64_QUAL_##qual(sym)(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
+#define __SYSCALL_64(nr, sym, qual) extern asmlinkage long sym(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
#include <asm/syscalls_64.h>
#undef __SYSCALL_64
-#define __SYSCALL_64(nr, sym, qual) [nr] = __SYSCALL_64_QUAL_##qual(sym),
+#define __SYSCALL_64(nr, sym, qual) [nr] = sym,
extern long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
Patches currently in stable-queue which might be from luto@kernel.org are
queue-4.9/x86-entry-64-push-extra-regs-right-away.patch
queue-4.9/kvm-vmx-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/kvm-x86-add-ibpb-support.patch
queue-4.9/kvm-svm-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/x86-paravirt-remove-noreplace-paravirt-cmdline-option.patch
queue-4.9/x86-asm-move-status-from-thread_struct-to-thread_info.patch
queue-4.9/kvm-x86-make-indirect-calls-in-emulator-speculation-safe.patch
queue-4.9/x86-entry-64-remove-the-syscall64-fast-path.patch
queue-4.9/x86-asm-fix-inline-asm-call-constraints-for-gcc-4.4.patch
queue-4.9/kvm-vmx-make-indirect-call-speculation-safe.patch
queue-4.9/x86-spectre-fix-spelling-mistake-vunerable-vulnerable.patch
queue-4.9/x86-get_user-use-pointer-masking-to-limit-speculation.patch
queue-4.9/x86-syscall-sanitize-syscall-table-de-references-under-speculation.patch
queue-4.9/x86-pti-make-unpoison-of-pgd-for-trusted-boot-work-for-real.patch
queue-4.9/x86-pti-mark-constant-arrays-as-__initconst.patch
queue-4.9/kvm-vmx-emulate-msr_ia32_arch_capabilities.patch
This is a note to let you know that I've just added the patch titled
x86/cpuid: Fix up "virtual" IBRS/IBPB/STIBP feature bits on Intel
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-cpuid-fix-up-virtual-ibrs-ibpb-stibp-feature-bits-on-intel.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Thu Feb 8 03:32:24 CET 2018
From: David Woodhouse <dwmw@amazon.co.uk>
Date: Tue, 30 Jan 2018 14:30:23 +0000
Subject: x86/cpuid: Fix up "virtual" IBRS/IBPB/STIBP feature bits on Intel
From: David Woodhouse <dwmw@amazon.co.uk>
(cherry picked from commit 7fcae1118f5fd44a862aa5c3525248e35ee67c3b)
Despite the fact that all the other code there seems to be doing it, just
using set_cpu_cap() in early_init_intel() doesn't actually work.
For CPUs with PKU support, setup_pku() calls get_cpu_cap() after
c->c_init() has set those feature bits. That resets those bits back to what
was queried from the hardware.
Turning the bits off for bad microcode is easy to fix. That can just use
setup_clear_cpu_cap() to force them off for all CPUs.
I was less keen on forcing the feature bits *on* that way, just in case
of inconsistencies. I appreciate that the kernel is going to get this
utterly wrong if CPU features are not consistent, because it has already
applied alternatives by the time secondary CPUs are brought up.
But at least if setup_force_cpu_cap() isn't being used, we might have a
chance of *detecting* the lack of the corresponding bit and either
panicking or refusing to bring the offending CPU online.
So ensure that the appropriate feature bits are set within get_cpu_cap()
regardless of how many extra times it's called.
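The problematic ordering, paraphrased from the commit text above
(simplified sketch, not a literal excerpt of arch/x86/kernel/cpu/common.c):

    identify_cpu(c)
        -> c->c_init(c)          /* early_init_intel(): set_cpu_cap(IBRS/IBPB/STIBP) */
        -> setup_pku(c)
            -> get_cpu_cap(c)    /* re-reads CPUID, wiping the bits set above */

Deriving the bits inside get_cpu_cap() itself means any later call
recomputes them instead of losing them.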
Fixes: 2961298e ("x86/cpufeatures: Clean up Spectre v2 related CPUID flags")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: karahmed@amazon.de
Cc: peterz@infradead.org
Cc: bp@alien8.de
Link: https://lkml.kernel.org/r/1517322623-15261-1-git-send-email-dwmw@amazon.co.…
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/kernel/cpu/common.c | 21 +++++++++++++++++++++
arch/x86/kernel/cpu/intel.c | 27 ++++++++-------------------
2 files changed, 29 insertions(+), 19 deletions(-)
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -718,6 +718,26 @@ static void apply_forced_caps(struct cpu
}
}
+static void init_speculation_control(struct cpuinfo_x86 *c)
+{
+ /*
+ * The Intel SPEC_CTRL CPUID bit implies IBRS and IBPB support,
+ * and they also have a different bit for STIBP support. Also,
+ * a hypervisor might have set the individual AMD bits even on
+ * Intel CPUs, for finer-grained selection of what's available.
+ *
+ * We use the AMD bits in 0x8000_0008 EBX as the generic hardware
+ * features, which are visible in /proc/cpuinfo and used by the
+ * kernel. So set those accordingly from the Intel bits.
+ */
+ if (cpu_has(c, X86_FEATURE_SPEC_CTRL)) {
+ set_cpu_cap(c, X86_FEATURE_IBRS);
+ set_cpu_cap(c, X86_FEATURE_IBPB);
+ }
+ if (cpu_has(c, X86_FEATURE_INTEL_STIBP))
+ set_cpu_cap(c, X86_FEATURE_STIBP);
+}
+
void get_cpu_cap(struct cpuinfo_x86 *c)
{
u32 eax, ebx, ecx, edx;
@@ -812,6 +832,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);
init_scattered_cpuid_features(c);
+ init_speculation_control(c);
}
static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -140,28 +140,17 @@ static void early_init_intel(struct cpui
rdmsr(MSR_IA32_UCODE_REV, lower_word, c->microcode);
}
- /*
- * The Intel SPEC_CTRL CPUID bit implies IBRS and IBPB support,
- * and they also have a different bit for STIBP support. Also,
- * a hypervisor might have set the individual AMD bits even on
- * Intel CPUs, for finer-grained selection of what's available.
- */
- if (cpu_has(c, X86_FEATURE_SPEC_CTRL)) {
- set_cpu_cap(c, X86_FEATURE_IBRS);
- set_cpu_cap(c, X86_FEATURE_IBPB);
- }
- if (cpu_has(c, X86_FEATURE_INTEL_STIBP))
- set_cpu_cap(c, X86_FEATURE_STIBP);
-
/* Now if any of them are set, check the blacklist and clear the lot */
- if ((cpu_has(c, X86_FEATURE_IBRS) || cpu_has(c, X86_FEATURE_IBPB) ||
+ if ((cpu_has(c, X86_FEATURE_SPEC_CTRL) ||
+ cpu_has(c, X86_FEATURE_INTEL_STIBP) ||
+ cpu_has(c, X86_FEATURE_IBRS) || cpu_has(c, X86_FEATURE_IBPB) ||
cpu_has(c, X86_FEATURE_STIBP)) && bad_spectre_microcode(c)) {
pr_warn("Intel Spectre v2 broken microcode detected; disabling Speculation Control\n");
- clear_cpu_cap(c, X86_FEATURE_IBRS);
- clear_cpu_cap(c, X86_FEATURE_IBPB);
- clear_cpu_cap(c, X86_FEATURE_STIBP);
- clear_cpu_cap(c, X86_FEATURE_SPEC_CTRL);
- clear_cpu_cap(c, X86_FEATURE_INTEL_STIBP);
+ setup_clear_cpu_cap(X86_FEATURE_IBRS);
+ setup_clear_cpu_cap(X86_FEATURE_IBPB);
+ setup_clear_cpu_cap(X86_FEATURE_STIBP);
+ setup_clear_cpu_cap(X86_FEATURE_SPEC_CTRL);
+ setup_clear_cpu_cap(X86_FEATURE_INTEL_STIBP);
}
/*
Patches currently in stable-queue which might be from dwmw@amazon.co.uk are
queue-4.9/x86-entry-64-push-extra-regs-right-away.patch
queue-4.9/kvm-vmx-introduce-alloc_loaded_vmcs.patch
queue-4.9/kvm-nvmx-eliminate-vmcs02-pool.patch
queue-4.9/kvm-vmx-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/kvm-x86-add-ibpb-support.patch
queue-4.9/kvm-svm-allow-direct-access-to-msr_ia32_spec_ctrl.patch
queue-4.9/x86-cpufeatures-add-intel-feature-bits-for-speculation-control.patch
queue-4.9/x86-retpoline-simplify-vmexit_fill_rsb.patch
queue-4.9/x86-paravirt-remove-noreplace-paravirt-cmdline-option.patch
queue-4.9/x86-cpufeatures-clean-up-spectre-v2-related-cpuid-flags.patch
queue-4.9/documentation-document-array_index_nospec.patch
queue-4.9/x86-usercopy-replace-open-coded-stac-clac-with-__uaccess_-begin-end.patch
queue-4.9/x86-asm-move-status-from-thread_struct-to-thread_info.patch
queue-4.9/x86-cpufeatures-add-cpuid_7_edx-cpuid-leaf.patch
queue-4.9/kvm-x86-make-indirect-calls-in-emulator-speculation-safe.patch
queue-4.9/x86-entry-64-remove-the-syscall64-fast-path.patch
queue-4.9/x86-cpufeature-blacklist-spec_ctrl-pred_cmd-on-early-spectre-v2-microcodes.patch
queue-4.9/x86-nospec-fix-header-guards-names.patch
queue-4.9/x86-retpoline-avoid-retpolines-for-built-in-__init-functions.patch
queue-4.9/vfs-fdtable-prevent-bounds-check-bypass-via-speculative-execution.patch
queue-4.9/x86-uaccess-use-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/x86-cpu-bugs-make-retpoline-module-warning-conditional.patch
queue-4.9/x86-spectre-check-config_retpoline-in-command-line-parser.patch
queue-4.9/x86-implement-array_index_mask_nospec.patch
queue-4.9/x86-alternative-print-unadorned-pointers.patch
queue-4.9/x86-cpuid-fix-up-virtual-ibrs-ibpb-stibp-feature-bits-on-intel.patch
queue-4.9/array_index_nospec-sanitize-speculative-array-de-references.patch
queue-4.9/kvm-vmx-make-indirect-call-speculation-safe.patch
queue-4.9/x86-cpufeatures-add-amd-feature-bits-for-speculation-control.patch
queue-4.9/x86-spectre-fix-spelling-mistake-vunerable-vulnerable.patch
queue-4.9/module-retpoline-warn-about-missing-retpoline-in-module.patch
queue-4.9/x86-kvm-update-spectre-v1-mitigation.patch
queue-4.9/x86-get_user-use-pointer-masking-to-limit-speculation.patch
queue-4.9/x86-syscall-sanitize-syscall-table-de-references-under-speculation.patch
queue-4.9/kvm-nvmx-vmx_complete_nested_posted_interrupt-can-t-fail.patch
queue-4.9/x86-spectre-simplify-spectre_v2-command-line-parsing.patch
queue-4.9/x86-msr-add-definitions-for-new-speculation-control-msrs.patch
queue-4.9/x86-pti-make-unpoison-of-pgd-for-trusted-boot-work-for-real.patch
queue-4.9/kvm-vmx-make-msr-bitmaps-per-vcpu.patch
queue-4.9/x86-speculation-add-basic-ibpb-indirect-branch-prediction-barrier-support.patch
queue-4.9/kvm-nvmx-mark-vmcs12-pages-dirty-on-l2-exit.patch
queue-4.9/x86-pti-mark-constant-arrays-as-__initconst.patch
queue-4.9/x86-speculation-fix-typo-ibrs_att-which-should-be-ibrs_all.patch
queue-4.9/x86-spectre-report-get_user-mitigation-for-spectre_v1.patch
queue-4.9/x86-introduce-barrier_nospec.patch
queue-4.9/kvm-vmx-emulate-msr_ia32_arch_capabilities.patch
queue-4.9/x86-bugs-drop-one-mitigation-from-dmesg.patch
queue-4.9/x86-retpoline-remove-the-esp-rsp-thunk.patch
queue-4.9/x86-pti-do-not-enable-pti-on-cpus-which-are-not-vulnerable-to-meltdown.patch
queue-4.9/x86-introduce-__uaccess_begin_nospec-and-uaccess_try_nospec.patch
queue-4.9/nl80211-sanitize-array-index-in-parse_txq_params.patch