This is a note to let you know that I've just added the patch titled
x86/spectre: Report get_user mitigation for spectre_v1
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86spectre_Report_get_user_mitigation_for_spectre_v1.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
Subject: x86/spectre: Report get_user mitigation for spectre_v1
From: Dan Williams <dan.j.williams@intel.com>
Date: Mon Jan 29 17:03:21 2018 -0800
From: Dan Williams <dan.j.williams@intel.com>
commit edfbae53dab8348fca778531be9f4855d2ca0360
Reflect the presence of get_user(), __get_user(), and 'syscall' protections
in sysfs. The expectation is that new and better tooling will allow the
kernel to grow more usages of array_index_nospec(), for now, only claim
mitigation for __user pointer de-references.
Reported-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
Cc: gregkh@linuxfoundation.org
Cc: torvalds@linux-foundation.org
Cc: alan@linux.intel.com
Link: https://lkml.kernel.org/r/151727420158.33451.11658324346540434635.stgit@dwi…
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/kernel/cpu/bugs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -297,7 +297,7 @@ ssize_t cpu_show_spectre_v1(struct devic
{
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1))
return sprintf(buf, "Not affected\n");
- return sprintf(buf, "Vulnerable\n");
+ return sprintf(buf, "Mitigation: __user pointer sanitization\n");
}
ssize_t cpu_show_spectre_v2(struct device *dev,
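For reference, the new string surfaces through the generic sysfs vulnerabilities ABI. A minimal user-space sketch that reads it back (the sysfs path is the standard one; the program itself is illustrative, not part of the patch):

#include <stdio.h>

int main(void)
{
	char buf[128];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v1", "r");

	if (!f)
		return 1;
	if (fgets(buf, sizeof(buf), f))
		printf("%s", buf);	/* e.g. "Mitigation: __user pointer sanitization" */
	fclose(f);
	return 0;
}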
Patches currently in stable-queue which might be from jslaby@suse.cz are
queue-4.15/x86spectre_Report_get_user_mitigation_for_spectre_v1.patch
This is a note to let you know that I've just added the patch titled
x86/retpoline: Avoid retpolines for built-in __init functions
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86retpoline_Avoid_retpolines_for_built-in___init_functions.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
Subject: x86/retpoline: Avoid retpolines for built-in __init functions
From: David Woodhouse <dwmw@amazon.co.uk>
Date: Thu Feb 1 11:27:20 2018 +0000
From: David Woodhouse <dwmw@amazon.co.uk>
commit 66f793099a636862a71c59d4a6ba91387b155e0c
There's no point in building init code with retpolines, since it runs before
any potentially hostile userspace does. And before the retpoline is actually
ALTERNATIVEd into place, for much of it.
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: karahmed@amazon.de
Cc: peterz@infradead.org
Cc: bp@alien8.de
Link: https://lkml.kernel.org/r/1517484441-1420-2-git-send-email-dwmw@amazon.co.uk
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
include/linux/init.h | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -5,6 +5,13 @@
#include <linux/compiler.h>
#include <linux/types.h>
+/* Built-in __init functions needn't be compiled with retpoline */
+#if defined(RETPOLINE) && !defined(MODULE)
+#define __noretpoline __attribute__((indirect_branch("keep")))
+#else
+#define __noretpoline
+#endif
+
/* These macros are used to mark some functions or
* initialized data (doesn't apply to uninitialized data)
* as `initialization' functions. The kernel can take this
@@ -40,7 +47,7 @@
/* These are for everybody (although not all archs will actually
discard it in modules) */
-#define __init __section(.init.text) __cold __latent_entropy
+#define __init __section(.init.text) __cold __latent_entropy __noretpoline
#define __initdata __section(.init.data)
#define __initconst __section(.init.rodata)
#define __exitdata __section(.exit.data)
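To see what the annotation does, here is a stand-alone sketch of the same macro outside the kernel (hypothetical file and function names; assumes a GCC with -mindirect-branch=thunk support, with RETPOLINE supplied on the command line the way the kernel Makefile does):

/* Build: gcc -O2 -mindirect-branch=thunk -DRETPOLINE demo.c */
#if defined(RETPOLINE) && !defined(MODULE)
#define __noretpoline __attribute__((indirect_branch("keep")))
#else
#define __noretpoline
#endif

typedef int (*op_t)(int);

static int add_one(int x) { return x + 1; }

/* volatile keeps the compiler from devirtualizing the calls below. */
static op_t volatile current_op = add_one;

/* The indirect call stays a plain call *%reg -- no thunk: */
static int __noretpoline early_dispatch(op_t op, int v)
{
	return op(v);
}

/* The indirect call is routed through an __x86_indirect_thunk_* stub: */
static int late_dispatch(op_t op, int v)
{
	return op(v);
}

int main(void)
{
	return early_dispatch(current_op, 1) + late_dispatch(current_op, 2) - 5;
}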
Patches currently in stable-queue which might be from dwmw@amazon.co.uk are
queue-4.15/x86spectre_Simplify_spectre_v2_command_line_parsing.patch
queue-4.15/x86pti_Mark_constant_arrays_as___initconst.patch
queue-4.15/x86pti_Do_not_enable_PTI_on_CPUs_which_are_not_vulnerable_to_Meltdown.patch
queue-4.15/x86speculation_Use_Indirect_Branch_Prediction_Barrier_in_context_switch.patch
queue-4.15/x86cpuid_Fix_up_virtual_IBRSIBPBSTIBP_feature_bits_on_Intel.patch
queue-4.15/x86spectre_Fix_spelling_mistake_vunerable-_vulnerable.patch
queue-4.15/x86cpufeature_Blacklist_SPEC_CTRLPRED_CMD_on_early_Spectre_v2_microcodes.patch
queue-4.15/x86cpufeatures_Add_Intel_feature_bits_for_Speculation_Control.patch
queue-4.15/KVM_VMX_Make_indirect_call_speculation_safe.patch
queue-4.15/x86spectre_Check_CONFIG_RETPOLINE_in_command_line_parser.patch
queue-4.15/x86msr_Add_definitions_for_new_speculation_control_MSRs.patch
queue-4.15/KVMVMX_Allow_direct_access_to_MSR_IA32_SPEC_CTRL.patch
queue-4.15/x86cpufeatures_Add_CPUID_7_EDX_CPUID_leaf.patch
queue-4.15/x86cpufeatures_Add_AMD_feature_bits_for_Speculation_Control.patch
queue-4.15/KVMSVM_Allow_direct_access_to_MSR_IA32_SPEC_CTRL.patch
queue-4.15/KVMx86_Add_IBPB_support.patch
queue-4.15/KVMVMX_Emulate_MSR_IA32_ARCH_CAPABILITIES.patch
queue-4.15/x86speculation_Add_basic_IBPB_(Indirect_Branch_Prediction_Barrier)_support.patch
queue-4.15/x86speculation_Simplify_indirect_branch_prediction_barrier().patch
queue-4.15/x86speculation_Fix_typo_IBRS_ATT_which_should_be_IBRS_ALL.patch
queue-4.15/KVM_x86_Make_indirect_calls_in_emulator_speculation_safe.patch
queue-4.15/x86retpoline_Simplify_vmexit_fill_RSB().patch
queue-4.15/x86cpufeatures_Clean_up_Spectre_v2_related_CPUID_flags.patch
queue-4.15/KVMx86_Update_the_reverse_cpuid_list_to_include_CPUID_7_EDX.patch
queue-4.15/x86retpoline_Avoid_retpolines_for_built-in___init_functions.patch
This is a note to let you know that I've just added the patch titled
x86/spectre: Check CONFIG_RETPOLINE in command line parser
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86spectre_Check_CONFIG_RETPOLINE_in_command_line_parser.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
Subject: x86/spectre: Check CONFIG_RETPOLINE in command line parser
From: Dou Liyang <douly.fnst@cn.fujitsu.com>
Date: Tue Jan 30 14:13:50 2018 +0800
From: Dou Liyang <douly.fnst@cn.fujitsu.com>
commit 9471eee9186a46893726e22ebb54cade3f9bc043
The spectre_v2 option 'auto' does not check whether CONFIG_RETPOLINE is
enabled. As a consequence it fails to emit the appropriate warning and sets
feature flags which have no effect at all.
Add the missing IS_ENABLED() check.
Fixes: da285121560e ("x86/spectre: Add boot time option to select Spectre v2 mitigation")
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: ak@linux.intel.com
Cc: peterz@infradead.org
Cc: Tomohiro <misono.tomohiro@jp.fujitsu.com>
Cc: dave.hansen@intel.com
Cc: bp@alien8.de
Cc: arjan@linux.intel.com
Cc: dwmw@amazon.co.uk
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/f5892721-7528-3647-08fb-f8d10e65ad87@cn.fujitsu.c…
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/kernel/cpu/bugs.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -213,10 +213,10 @@ static void __init spectre_v2_select_mit
return;
case SPECTRE_V2_CMD_FORCE:
- /* FALLTRHU */
case SPECTRE_V2_CMD_AUTO:
- goto retpoline_auto;
-
+ if (IS_ENABLED(CONFIG_RETPOLINE))
+ goto retpoline_auto;
+ break;
case SPECTRE_V2_CMD_RETPOLINE_AMD:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_amd;
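The reason IS_ENABLED() can be used in ordinary C conditions like this is a small preprocessor trick. A simplified, self-contained sketch modeled on include/linux/kconfig.h (Kconfig emits '#define CONFIG_FOO 1' for built-in options; undefined options fall through to 0):

#include <stdio.h>

#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) __is_defined(option)

#define CONFIG_RETPOLINE 1	/* pretend CONFIG_RETPOLINE=y */

int main(void)
{
	/* Prints "1 0": defined options become 1, unknown ones 0. */
	printf("%d %d\n", IS_ENABLED(CONFIG_RETPOLINE),
	       IS_ENABLED(CONFIG_NO_SUCH_OPTION));
	return 0;
}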
Patches currently in stable-queue which might be from douly.fnst@cn.fujitsu.com are
queue-4.15/x86spectre_Check_CONFIG_RETPOLINE_in_command_line_parser.patch
This is a note to let you know that I've just added the patch titled
x86/paravirt: Remove 'noreplace-paravirt' cmdline option
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86paravirt_Remove_noreplace-paravirt_cmdline_option.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
Subject: x86/paravirt: Remove 'noreplace-paravirt' cmdline option
From: Josh Poimboeuf <jpoimboe@redhat.com>
Date: Tue Jan 30 22:13:33 2018 -0600
From: Josh Poimboeuf <jpoimboe@redhat.com>
commit 12c69f1e94c89d40696e83804dd2f0965b5250cd
The 'noreplace-paravirt' option disables paravirt patching, leaving the
original pv indirect calls in place.
That's highly incompatible with retpolines, unless we want to uglify
paravirt even further and convert the paravirt calls to retpolines.
As far as I can tell, the option doesn't seem to be useful for much
other than introducing surprising corner cases and making the kernel
vulnerable to Spectre v2. It was probably a debug option from the early
paravirt days. So just remove it.
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jason Baron <jbaron@akamai.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Link: https://lkml.kernel.org/r/20180131041333.2x6blhxirc2kclrq@treble
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
Documentation/admin-guide/kernel-parameters.txt | 2 --
arch/x86/kernel/alternative.c | 14 --------------
2 files changed, 16 deletions(-)
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2742,8 +2742,6 @@
norandmaps Don't use address space randomization. Equivalent to
echo 0 > /proc/sys/kernel/randomize_va_space
- noreplace-paravirt [X86,IA-64,PV_OPS] Don't patch paravirt_ops
-
noreplace-smp [X86-32,SMP] Don't replace SMP instructions
with UP alternatives
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -46,17 +46,6 @@ static int __init setup_noreplace_smp(ch
}
__setup("noreplace-smp", setup_noreplace_smp);
-#ifdef CONFIG_PARAVIRT
-static int __initdata_or_module noreplace_paravirt = 0;
-
-static int __init setup_noreplace_paravirt(char *str)
-{
- noreplace_paravirt = 1;
- return 1;
-}
-__setup("noreplace-paravirt", setup_noreplace_paravirt);
-#endif
-
#define DPRINTK(fmt, args...) \
do { \
if (debug_alternative) \
@@ -599,9 +588,6 @@ void __init_or_module apply_paravirt(str
struct paravirt_patch_site *p;
char insnbuf[MAX_PATCH_LEN];
- if (noreplace_paravirt)
- return;
-
for (p = start; p < end; p++) {
unsigned int used;
Patches currently in stable-queue which might be from jpoimboe@redhat.com are
queue-4.15/objtool_Add_support_for_alternatives_at_the_end_of_a_section.patch
queue-4.15/x86paravirt_Remove_noreplace-paravirt_cmdline_option.patch
queue-4.15/KVM_VMX_Make_indirect_call_speculation_safe.patch
queue-4.15/x86alternative_Print_unadorned_pointers.patch
queue-4.15/x86bugs_Drop_one_mitigation_from_dmesg.patch
queue-4.15/x86nospec_Fix_header_guards_names.patch
queue-4.15/KVM_x86_Make_indirect_calls_in_emulator_speculation_safe.patch
queue-4.15/objtool_Warn_on_stripped_section_symbol.patch
queue-4.15/objtool_Improve_retpoline_alternative_handling.patch
This is a note to let you know that I've just added the patch titled
x86/pti: Mark constant arrays as __initconst
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86pti_Mark_constant_arrays_as___initconst.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
Subject: x86/pti: Mark constant arrays as __initconst
From: Arnd Bergmann <arnd@arndb.de>
Date: Fri Feb 2 22:39:23 2018 +0100
From: Arnd Bergmann <arnd@arndb.de>
commit 4bf5d56d429cbc96c23d809a08f63cd29e1a702e
I'm seeing build failures from the two newly introduced arrays that
are marked 'const' and '__initdata', which are mutually exclusive:
arch/x86/kernel/cpu/common.c:882:43: error: 'cpu_no_speculation' causes a section type conflict with 'e820_table_firmware_init'
arch/x86/kernel/cpu/common.c:895:43: error: 'cpu_no_meltdown' causes a section type conflict with 'e820_table_firmware_init'
The correct annotation is __initconst.
Fixes: fec9434a12f3 ("x86/pti: Do not enable PTI on CPUs which are not vulnerable to Meltdown")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lkml.kernel.org/r/20180202213959.611210-1-arnd@arndb.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/kernel/cpu/common.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -900,7 +900,7 @@ static void identify_cpu_without_cpuid(s
#endif
}
-static const __initdata struct x86_cpu_id cpu_no_speculation[] = {
+static const __initconst struct x86_cpu_id cpu_no_speculation[] = {
{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_CEDARVIEW, X86_FEATURE_ANY },
{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_CLOVERVIEW, X86_FEATURE_ANY },
{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_LINCROFT, X86_FEATURE_ANY },
@@ -913,7 +913,7 @@ static const __initdata struct x86_cpu_i
{}
};
-static const __initdata struct x86_cpu_id cpu_no_meltdown[] = {
+static const __initconst struct x86_cpu_id cpu_no_meltdown[] = {
{ X86_VENDOR_AMD },
{}
};
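The rule behind the fix: all objects placed in one named section must agree on that section's attributes, and const data demands a read-only section. A stand-alone sketch (section names mirror the kernel's; the commented line reproduces the conflict):

#include <stdio.h>

#define __initconst __attribute__((section(".init.rodata")))
#define __initdata  __attribute__((section(".init.data")))

static const int table[] __initconst = { 1, 2, 3 };	/* const -> read-only section */
static int scratch __initdata = 5;			/* writable section */

/*
 * static const int bad[] __initdata = { 4 };
 *
 * Uncommenting the line above asks for .init.data to be both
 * read-only (for 'bad') and writable (for 'scratch'), producing the
 * same "causes a section type conflict" error quoted in the changelog.
 */

int main(void)
{
	printf("%d %d\n", table[0], scratch);
	return 0;
}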
Patches currently in stable-queue which might be from arnd@arndb.de are
queue-4.15/x86pti_Mark_constant_arrays_as___initconst.patch
queue-4.15/auxdisplay-img-ascii-lcd-add-missing-module_description-author-license.patch
This is a note to let you know that I've just added the patch titled
x86/mm: Fix overlap of i386 CPU_ENTRY_AREA with FIX_BTMAP
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86mm_Fix_overlap_of_i386_CPU_ENTRY_AREA_with_FIX_BTMAP.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
Subject: x86/mm: Fix overlap of i386 CPU_ENTRY_AREA with FIX_BTMAP
From: William Grant <william.grant@canonical.com>
Date: Tue Jan 30 22:22:55 2018 +1100
From: William Grant <william.grant@canonical.com>
commit 55f49fcb879fbeebf2a8c1ac7c9e6d90df55f798
Since commit 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the
fixmap"), i386's CPU_ENTRY_AREA has been mapped to the memory area just
below FIXADDR_START. But already immediately before FIXADDR_START is the
FIX_BTMAP area, which means that early_ioremap can collide with the entry
area.
It's especially bad on PAE where FIX_BTMAP_BEGIN gets aligned to exactly
match CPU_ENTRY_AREA_BASE, so the first early_ioremap slot clobbers the
IDT and causes interrupts during early boot to reset the system.
The overlap wasn't a problem before the CPU entry area was introduced,
as the fixmap has classically been preceded by the pkmap or vmalloc
areas, neither of which is used until early_ioremap is out of the
picture.
Relocate CPU_ENTRY_AREA to below FIX_BTMAP, not just below the permanent
fixmap area.
Fixes: commit 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the fixmap")
Signed-off-by: William Grant <william.grant@canonical.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/7041d181-a019-e8b9-4e4e-48215f841e2c@canonical.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/include/asm/fixmap.h | 6 ++++--
arch/x86/include/asm/pgtable_32_types.h | 5 +++--
2 files changed, 7 insertions(+), 4 deletions(-)
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -137,8 +137,10 @@ enum fixed_addresses {
extern void reserve_top_address(unsigned long reserve);
-#define FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT)
-#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
+#define FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT)
+#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
+#define FIXADDR_TOT_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
+#define FIXADDR_TOT_START (FIXADDR_TOP - FIXADDR_TOT_SIZE)
extern int fixmaps_set;
--- a/arch/x86/include/asm/pgtable_32_types.h
+++ b/arch/x86/include/asm/pgtable_32_types.h
@@ -44,8 +44,9 @@ extern bool __vmalloc_start_set; /* set
*/
#define CPU_ENTRY_AREA_PAGES (NR_CPUS * 40)
-#define CPU_ENTRY_AREA_BASE \
- ((FIXADDR_START - PAGE_SIZE * (CPU_ENTRY_AREA_PAGES + 1)) & PMD_MASK)
+#define CPU_ENTRY_AREA_BASE \
+ ((FIXADDR_TOT_START - PAGE_SIZE * (CPU_ENTRY_AREA_PAGES + 1)) \
+ & PMD_MASK)
#define PKMAP_BASE \
((CPU_ENTRY_AREA_BASE - PAGE_SIZE) & PMD_MASK)
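The arithmetic is easier to see with concrete numbers. A sketch using entirely hypothetical constants (real values depend on NR_CPUS, PAE and the fixmap contents) that contrasts the old FIXADDR_START-based calculation with the new FIXADDR_TOT_START-based one:

#include <stdio.h>

#define PAGE_SIZE		0x1000UL
#define PMD_MASK		(~((1UL << 21) - 1))	/* 2 MiB PMDs (PAE-style) */

#define FIXADDR_TOP		0xfffff000UL	/* hypothetical */
#define PERMANENT_SLOTS		128UL		/* __end_of_permanent_fixed_addresses */
#define TOTAL_SLOTS		(PERMANENT_SLOTS + 512)	/* + FIX_BTMAP slots */
#define CPU_ENTRY_AREA_PAGES	(8 * 40)	/* NR_CPUS * 40 with NR_CPUS=8 */

int main(void)
{
	unsigned long start     = FIXADDR_TOP - PERMANENT_SLOTS * PAGE_SIZE;
	unsigned long tot_start = FIXADDR_TOP - TOTAL_SLOTS * PAGE_SIZE;
	unsigned long size      = PAGE_SIZE * (CPU_ENTRY_AREA_PAGES + 1);
	unsigned long old_base  = (start     - size) & PMD_MASK;
	unsigned long new_base  = (tot_start - size) & PMD_MASK;

	printf("FIX_BTMAP slots:   [%#lx, %#lx)\n", tot_start, start);
	printf("old entry area:    [%#lx, %#lx)%s\n", old_base, old_base + size,
	       old_base + size > tot_start ? "  <-- runs into FIX_BTMAP" : "");
	printf("new entry area:    [%#lx, %#lx)%s\n", new_base, new_base + size,
	       new_base + size > tot_start ? "  <-- runs into FIX_BTMAP" : "");
	return 0;
}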
Patches currently in stable-queue which might be from william.grant@canonical.com are
queue-4.15/x86mm_Fix_overlap_of_i386_CPU_ENTRY_AREA_with_FIX_BTMAP.patch
This is a note to let you know that I've just added the patch titled
x86/get_user: Use pointer masking to limit speculation
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86get_user_Use_pointer_masking_to_limit_speculation.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
Subject: x86/get_user: Use pointer masking to limit speculation
From: Dan Williams <dan.j.williams@intel.com>
Date: Mon Jan 29 17:02:54 2018 -0800
From: Dan Williams <dan.j.williams@intel.com>
commit c7f631cb07e7da06ac1d231ca178452339e32a94
Quoting Linus:
I do think that it would be a good idea to very expressly document
the fact that it's not that the user access itself is unsafe. I do
agree that things like "get_user()" want to be protected, but not
because of any direct bugs or problems with get_user() and friends,
but simply because get_user() is an excellent source of a pointer
that is obviously controlled from a potentially attacking user
space. So it's a prime candidate for then finding _subsequent_
accesses that can then be used to perturb the cache.
Unlike the __get_user() case get_user() includes the address limit check
near the pointer de-reference. With that locality the speculation can be
mitigated with pointer narrowing rather than a barrier, i.e.
array_index_nospec(). Where the narrowing is performed by:
cmp %limit, %ptr
sbb %mask, %mask
and %mask, %ptr
With respect to speculation the value of %ptr is either less than %limit
or NULL.
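A stand-alone x86 sketch of that three-instruction sequence (illustrative only; in the kernel this lives behind array_index_mask_nospec() in arch/x86):

#include <stdio.h>

static unsigned long mask_ptr(unsigned long ptr, unsigned long limit)
{
	unsigned long mask;

	/*
	 * cmp sets CF when ptr < limit; sbb then produces all-ones
	 * (keep the pointer) or zero (force NULL), with no branch
	 * for the CPU to speculate past.
	 */
	asm volatile ("cmp %1, %2; sbb %0, %0"
		      : "=r" (mask)
		      : "r" (limit), "r" (ptr)
		      : "cc");
	return ptr & mask;
}

int main(void)
{
	unsigned long limit = 0x7ffffffff000UL;	/* hypothetical addr_limit */

	printf("%#lx\n", mask_ptr(0x1000UL, limit));		 /* kept */
	printf("%#lx\n", mask_ptr(0xffff888000000000UL, limit)); /* -> 0 */
	return 0;
}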
Co-developed-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: Kees Cook <keescook@chromium.org>
Cc: kernel-hardening@lists.openwall.com
Cc: gregkh@linuxfoundation.org
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: torvalds@linux-foundation.org
Cc: alan@linux.intel.com
Link: https://lkml.kernel.org/r/151727417469.33451.11804043010080838495.stgit@dwi…
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/lib/getuser.S | 10 ++++++++++
1 file changed, 10 insertions(+)
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -40,6 +40,8 @@ ENTRY(__get_user_1)
mov PER_CPU_VAR(current_task), %_ASM_DX
cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
jae bad_get_user
+ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
+ and %_ASM_DX, %_ASM_AX
ASM_STAC
1: movzbl (%_ASM_AX),%edx
xor %eax,%eax
@@ -54,6 +56,8 @@ ENTRY(__get_user_2)
mov PER_CPU_VAR(current_task), %_ASM_DX
cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
jae bad_get_user
+ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
+ and %_ASM_DX, %_ASM_AX
ASM_STAC
2: movzwl -1(%_ASM_AX),%edx
xor %eax,%eax
@@ -68,6 +72,8 @@ ENTRY(__get_user_4)
mov PER_CPU_VAR(current_task), %_ASM_DX
cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
jae bad_get_user
+ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
+ and %_ASM_DX, %_ASM_AX
ASM_STAC
3: movl -3(%_ASM_AX),%edx
xor %eax,%eax
@@ -83,6 +89,8 @@ ENTRY(__get_user_8)
mov PER_CPU_VAR(current_task), %_ASM_DX
cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
jae bad_get_user
+ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
+ and %_ASM_DX, %_ASM_AX
ASM_STAC
4: movq -7(%_ASM_AX),%rdx
xor %eax,%eax
@@ -94,6 +102,8 @@ ENTRY(__get_user_8)
mov PER_CPU_VAR(current_task), %_ASM_DX
cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
jae bad_get_user_8
+ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
+ and %_ASM_DX, %_ASM_AX
ASM_STAC
4: movl -7(%_ASM_AX),%edx
5: movl -3(%_ASM_AX),%ecx
Patches currently in stable-queue which might be from torvalds@linux-foundation.org are
queue-4.15/objtool_Add_support_for_alternatives_at_the_end_of_a_section.patch
queue-4.15/x86pti_Do_not_enable_PTI_on_CPUs_which_are_not_vulnerable_to_Meltdown.patch
queue-4.15/x86_Introduce_barrier_nospec.patch
queue-4.15/x86speculation_Use_Indirect_Branch_Prediction_Barrier_in_context_switch.patch
queue-4.15/x86get_user_Use_pointer_masking_to_limit_speculation.patch
queue-4.15/x86_Introduce___uaccess_begin_nospec()_and_uaccess_try_nospec.patch
queue-4.15/x86cpufeature_Blacklist_SPEC_CTRLPRED_CMD_on_early_Spectre_v2_microcodes.patch
queue-4.15/x86cpufeatures_Add_Intel_feature_bits_for_Speculation_Control.patch
queue-4.15/x86paravirt_Remove_noreplace-paravirt_cmdline_option.patch
queue-4.15/KVM_VMX_Make_indirect_call_speculation_safe.patch
queue-4.15/x86msr_Add_definitions_for_new_speculation_control_MSRs.patch
queue-4.15/x86alternative_Print_unadorned_pointers.patch
queue-4.15/KVMVMX_Allow_direct_access_to_MSR_IA32_SPEC_CTRL.patch
queue-4.15/x86cpufeatures_Add_CPUID_7_EDX_CPUID_leaf.patch
queue-4.15/array_index_nospec_Sanitize_speculative_array_de-references.patch
queue-4.15/Documentation_Document_array_index_nospec.patch
queue-4.15/x86entry64_Remove_the_SYSCALL64_fast_path.patch
queue-4.15/x86bugs_Drop_one_mitigation_from_dmesg.patch
queue-4.15/x86cpufeatures_Add_AMD_feature_bits_for_Speculation_Control.patch
queue-4.15/KVMSVM_Allow_direct_access_to_MSR_IA32_SPEC_CTRL.patch
queue-4.15/x86asm_Move_status_from_thread_struct_to_thread_info.patch
queue-4.15/KVMx86_Add_IBPB_support.patch
queue-4.15/x86_Implement_array_index_mask_nospec.patch
queue-4.15/KVMVMX_Emulate_MSR_IA32_ARCH_CAPABILITIES.patch
queue-4.15/nl80211_Sanitize_array_index_in_parse_txq_params.patch
queue-4.15/moduleretpoline_Warn_about_missing_retpoline_in_module.patch
queue-4.15/x86speculation_Add_basic_IBPB_(Indirect_Branch_Prediction_Barrier)_support.patch
queue-4.15/x86speculation_Simplify_indirect_branch_prediction_barrier().patch
queue-4.15/x86nospec_Fix_header_guards_names.patch
queue-4.15/KVM_x86_Make_indirect_calls_in_emulator_speculation_safe.patch
queue-4.15/x86uaccess_Use___uaccess_begin_nospec()_and_uaccess_try_nospec.patch
queue-4.15/x86entry64_Push_extra_regs_right_away.patch
queue-4.15/x86usercopy_Replace_open_coded_stacclac_with___uaccess_begin_end.patch
queue-4.15/vfs_fdtable_Prevent_bounds-check_bypass_via_speculative_execution.patch
queue-4.15/x86retpoline_Simplify_vmexit_fill_RSB().patch
queue-4.15/objtool_Warn_on_stripped_section_symbol.patch
queue-4.15/x86spectre_Report_get_user_mitigation_for_spectre_v1.patch
queue-4.15/x86cpufeatures_Clean_up_Spectre_v2_related_CPUID_flags.patch
queue-4.15/x86syscall_Sanitize_syscall_table_de-references_under_speculation.patch
queue-4.15/objtool_Improve_retpoline_alternative_handling.patch
This is a note to let you know that I've just added the patch titled
x86/kvm: Update spectre-v1 mitigation
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86kvm_Update_spectre-v1_mitigation.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
Subject: x86/kvm: Update spectre-v1 mitigation
From: Dan Williams <dan.j.williams@intel.com>
Date: Wed Jan 31 17:47:03 2018 -0800
From: Dan Williams <dan.j.williams@intel.com>
commit 085331dfc6bbe3501fb936e657331ca943827600
Commit 75f139aaf896 "KVM: x86: Add memory barrier on vmcs field lookup"
added a raw 'asm("lfence");' to prevent a bounds check bypass of
'vmcs_field_to_offset_table'.
The lfence can be avoided in this path by using the array_index_nospec()
helper designed for these types of fixes.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Andrew Honig <ahonig@google.com>
Cc: kvm@vger.kernel.org
Cc: Jim Mattson <jmattson@google.com>
Link: https://lkml.kernel.org/r/151744959670.6342.3001723920950249067.stgit@dwill…
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/kvm/vmx.c | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -34,6 +34,7 @@
#include <linux/tboot.h>
#include <linux/hrtimer.h>
#include <linux/frame.h>
+#include <linux/nospec.h>
#include "kvm_cache_regs.h"
#include "x86.h"
@@ -898,21 +899,18 @@ static const unsigned short vmcs_field_t
static inline short vmcs_field_to_offset(unsigned long field)
{
- BUILD_BUG_ON(ARRAY_SIZE(vmcs_field_to_offset_table) > SHRT_MAX);
+ const size_t size = ARRAY_SIZE(vmcs_field_to_offset_table);
+ unsigned short offset;
- if (field >= ARRAY_SIZE(vmcs_field_to_offset_table))
+ BUILD_BUG_ON(size > SHRT_MAX);
+ if (field >= size)
return -ENOENT;
- /*
- * FIXME: Mitigation for CVE-2017-5753. To be replaced with a
- * generic mechanism.
- */
- asm("lfence");
-
- if (vmcs_field_to_offset_table[field] == 0)
+ field = array_index_nospec(field, size);
+ offset = vmcs_field_to_offset_table[field];
+ if (offset == 0)
return -ENOENT;
-
- return vmcs_field_to_offset_table[field];
+ return offset;
}
static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
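For context, array_index_nospec() itself is small. A self-contained sketch modeled on include/linux/nospec.h, using the generic C mask (x86 overrides it with the branch-free cmp/sbb sequence):

#include <stdio.h>

/* All-ones if 0 <= index < size, zero otherwise -- computed without a branch. */
static unsigned long array_index_mask_nospec(unsigned long index,
					     unsigned long size)
{
	/* Relies on arithmetic right shift of a negative long. */
	return ~(long)(index | (size - 1UL - index)) >> (sizeof(long) * 8 - 1);
}

#define array_index_nospec(index, size)				\
({								\
	unsigned long _i = (index);				\
	(_i & array_index_mask_nospec(_i, (size)));		\
})

static const unsigned short table[] = { 0, 450, 458, 466 };	/* hypothetical offsets */

int main(void)
{
	unsigned long field = 2;	/* pretend this is guest-controlled */
	const unsigned long size = sizeof(table) / sizeof(table[0]);

	if (field >= size)
		return 1;
	/* Even under speculation, field is now clamped to [0, size). */
	field = array_index_nospec(field, size);
	printf("%u\n", table[field]);
	return 0;
}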
Patches currently in stable-queue which might be from dan.j.williams@intel.com are
queue-4.15/x86kvm_Update_spectre-v1_mitigation.patch
queue-4.15/x86_Introduce_barrier_nospec.patch
queue-4.15/x86get_user_Use_pointer_masking_to_limit_speculation.patch
queue-4.15/x86_Introduce___uaccess_begin_nospec()_and_uaccess_try_nospec.patch
queue-4.15/x86paravirt_Remove_noreplace-paravirt_cmdline_option.patch
queue-4.15/KVM_VMX_Make_indirect_call_speculation_safe.patch
queue-4.15/KVMVMX_Allow_direct_access_to_MSR_IA32_SPEC_CTRL.patch
queue-4.15/array_index_nospec_Sanitize_speculative_array_de-references.patch
queue-4.15/Documentation_Document_array_index_nospec.patch
queue-4.15/KVMSVM_Allow_direct_access_to_MSR_IA32_SPEC_CTRL.patch
queue-4.15/KVMx86_Add_IBPB_support.patch
queue-4.15/x86_Implement_array_index_mask_nospec.patch
queue-4.15/KVMVMX_Emulate_MSR_IA32_ARCH_CAPABILITIES.patch
queue-4.15/nl80211_Sanitize_array_index_in_parse_txq_params.patch
queue-4.15/KVM_x86_Make_indirect_calls_in_emulator_speculation_safe.patch
queue-4.15/x86uaccess_Use___uaccess_begin_nospec()_and_uaccess_try_nospec.patch
queue-4.15/x86usercopy_Replace_open_coded_stacclac_with___uaccess_begin_end.patch
queue-4.15/vfs_fdtable_Prevent_bounds-check_bypass_via_speculative_execution.patch
queue-4.15/x86spectre_Report_get_user_mitigation_for_spectre_v1.patch
queue-4.15/x86syscall_Sanitize_syscall_table_de-references_under_speculation.patch
This is a note to let you know that I've just added the patch titled
x86/entry/64: Push extra regs right away
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86entry64_Push_extra_regs_right_away.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
Subject: x86/entry/64: Push extra regs right away
From: Andy Lutomirski <luto@kernel.org>
Date: Sun Jan 28 10:38:49 2018 -0800
From: Andy Lutomirski <luto@kernel.org>
commit d1f7732009e0549eedf8ea1db948dc37be77fd46
With the fast path removed there is no point in splitting the push of the
normal and the extra register set. Just push the extra regs right away.
[ tglx: Split out from 'x86/entry/64: Remove the SYSCALL64 fast path' ]
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kernel Hardening <kernel-hardening@lists.openwall.com>
Link: https://lkml.kernel.org/r/462dff8d4d64dfbfc851fbf3130641809d980ecd.15171644…
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -236,13 +236,17 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
pushq %r9 /* pt_regs->r9 */
pushq %r10 /* pt_regs->r10 */
pushq %r11 /* pt_regs->r11 */
- sub $(6*8), %rsp /* pt_regs->bp, bx, r12-15 not saved */
- UNWIND_HINT_REGS extra=0
+ pushq %rbx /* pt_regs->rbx */
+ pushq %rbp /* pt_regs->rbp */
+ pushq %r12 /* pt_regs->r12 */
+ pushq %r13 /* pt_regs->r13 */
+ pushq %r14 /* pt_regs->r14 */
+ pushq %r15 /* pt_regs->r15 */
+ UNWIND_HINT_REGS
TRACE_IRQS_OFF
/* IRQs are off. */
- SAVE_EXTRA_REGS
movq %rsp, %rdi
call do_syscall_64 /* returns with IRQs disabled */
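The push order above is dictated by struct pt_regs: the stack grows down, so the last pushq fills the struct's first member and %rsp ends up as a valid struct pt_regs pointer for do_syscall_64(). For reference, the x86-64 layout as in this kernel's arch/x86/include/asm/ptrace.h:

struct pt_regs {
	unsigned long r15;	/* pushed last => lowest address */
	unsigned long r14;
	unsigned long r13;
	unsigned long r12;
	unsigned long bp;
	unsigned long bx;	/* r15..bx: the six pushes added above */
	unsigned long r11;	/* r11..di were pushed earlier in the entry path */
	unsigned long r10;
	unsigned long r9;
	unsigned long r8;
	unsigned long ax;
	unsigned long cx;
	unsigned long dx;
	unsigned long si;
	unsigned long di;
	unsigned long orig_ax;	/* syscall number */
	unsigned long ip;	/* from here: hardware/iret frame */
	unsigned long cs;
	unsigned long flags;
	unsigned long sp;
	unsigned long ss;	/* pushed first => highest address */
};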
Patches currently in stable-queue which might be from luto@kernel.org are
queue-4.15/objtool_Add_support_for_alternatives_at_the_end_of_a_section.patch
queue-4.15/x86pti_Mark_constant_arrays_as___initconst.patch
queue-4.15/x86speculation_Use_Indirect_Branch_Prediction_Barrier_in_context_switch.patch
queue-4.15/x86spectre_Fix_spelling_mistake_vunerable-_vulnerable.patch
queue-4.15/x86get_user_Use_pointer_masking_to_limit_speculation.patch
queue-4.15/x86paravirt_Remove_noreplace-paravirt_cmdline_option.patch
queue-4.15/KVM_VMX_Make_indirect_call_speculation_safe.patch
queue-4.15/KVMVMX_Allow_direct_access_to_MSR_IA32_SPEC_CTRL.patch
queue-4.15/x86entry64_Remove_the_SYSCALL64_fast_path.patch
queue-4.15/KVMSVM_Allow_direct_access_to_MSR_IA32_SPEC_CTRL.patch
queue-4.15/x86asm_Move_status_from_thread_struct_to_thread_info.patch
queue-4.15/KVMx86_Add_IBPB_support.patch
queue-4.15/KVMVMX_Emulate_MSR_IA32_ARCH_CAPABILITIES.patch
queue-4.15/KVM_x86_Make_indirect_calls_in_emulator_speculation_safe.patch
queue-4.15/x86entry64_Push_extra_regs_right_away.patch
queue-4.15/objtool_Warn_on_stripped_section_symbol.patch
queue-4.15/x86syscall_Sanitize_syscall_table_de-references_under_speculation.patch
queue-4.15/objtool_Improve_retpoline_alternative_handling.patch
This is a note to let you know that I've just added the patch titled
x86/entry/64: Remove the SYSCALL64 fast path
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86entry64_Remove_the_SYSCALL64_fast_path.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
Subject: x86/entry/64: Remove the SYSCALL64 fast path
From: Andy Lutomirski <luto@kernel.org>
Date: Sun Jan 28 10:38:49 2018 -0800
From: Andy Lutomirski <luto@kernel.org>
commit 21d375b6b34ff511a507de27bf316b3dde6938d9
The SYSCALL64 fast path was a nice, if small, optimization back in the good
old days when syscalls were actually reasonably fast. Now there is PTI to
slow everything down, and indirect branches are verboten, making everything
messier. The retpoline code in the fast path is particularly nasty.
Just get rid of the fast path. The slow path is barely slower.
[ tglx: Split out the 'push all extra regs' part ]
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kernel Hardening <kernel-hardening@lists.openwall.com>
Link: https://lkml.kernel.org/r/462dff8d4d64dfbfc851fbf3130641809d980ecd.15171644…
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 117 --------------------------------------------
arch/x86/entry/syscall_64.c | 7 --
2 files changed, 2 insertions(+), 122 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -241,86 +241,11 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
TRACE_IRQS_OFF
- /*
- * If we need to do entry work or if we guess we'll need to do
- * exit work, go straight to the slow path.
- */
- movq PER_CPU_VAR(current_task), %r11
- testl $_TIF_WORK_SYSCALL_ENTRY|_TIF_ALLWORK_MASK, TASK_TI_flags(%r11)
- jnz entry_SYSCALL64_slow_path
-
-entry_SYSCALL_64_fastpath:
- /*
- * Easy case: enable interrupts and issue the syscall. If the syscall
- * needs pt_regs, we'll call a stub that disables interrupts again
- * and jumps to the slow path.
- */
- TRACE_IRQS_ON
- ENABLE_INTERRUPTS(CLBR_NONE)
-#if __SYSCALL_MASK == ~0
- cmpq $__NR_syscall_max, %rax
-#else
- andl $__SYSCALL_MASK, %eax
- cmpl $__NR_syscall_max, %eax
-#endif
- ja 1f /* return -ENOSYS (already in pt_regs->ax) */
- movq %r10, %rcx
-
- /*
- * This call instruction is handled specially in stub_ptregs_64.
- * It might end up jumping to the slow path. If it jumps, RAX
- * and all argument registers are clobbered.
- */
-#ifdef CONFIG_RETPOLINE
- movq sys_call_table(, %rax, 8), %rax
- call __x86_indirect_thunk_rax
-#else
- call *sys_call_table(, %rax, 8)
-#endif
-.Lentry_SYSCALL_64_after_fastpath_call:
-
- movq %rax, RAX(%rsp)
-1:
-
- /*
- * If we get here, then we know that pt_regs is clean for SYSRET64.
- * If we see that no exit work is required (which we are required
- * to check with IRQs off), then we can go straight to SYSRET64.
- */
- DISABLE_INTERRUPTS(CLBR_ANY)
- TRACE_IRQS_OFF
- movq PER_CPU_VAR(current_task), %r11
- testl $_TIF_ALLWORK_MASK, TASK_TI_flags(%r11)
- jnz 1f
-
- LOCKDEP_SYS_EXIT
- TRACE_IRQS_ON /* user mode is traced as IRQs on */
- movq RIP(%rsp), %rcx
- movq EFLAGS(%rsp), %r11
- addq $6*8, %rsp /* skip extra regs -- they were preserved */
- UNWIND_HINT_EMPTY
- jmp .Lpop_c_regs_except_rcx_r11_and_sysret
-
-1:
- /*
- * The fast path looked good when we started, but something changed
- * along the way and we need to switch to the slow path. Calling
- * raise(3) will trigger this, for example. IRQs are off.
- */
- TRACE_IRQS_ON
- ENABLE_INTERRUPTS(CLBR_ANY)
- SAVE_EXTRA_REGS
- movq %rsp, %rdi
- call syscall_return_slowpath /* returns with IRQs disabled */
- jmp return_from_SYSCALL_64
-
-entry_SYSCALL64_slow_path:
/* IRQs are off. */
SAVE_EXTRA_REGS
movq %rsp, %rdi
call do_syscall_64 /* returns with IRQs disabled */
-return_from_SYSCALL_64:
TRACE_IRQS_IRETQ /* we're about to change IF */
/*
@@ -393,7 +318,6 @@ syscall_return_via_sysret:
/* rcx and r11 are already restored (see code above) */
UNWIND_HINT_EMPTY
POP_EXTRA_REGS
-.Lpop_c_regs_except_rcx_r11_and_sysret:
popq %rsi /* skip r11 */
popq %r10
popq %r9
@@ -424,47 +348,6 @@ syscall_return_via_sysret:
USERGS_SYSRET64
END(entry_SYSCALL_64)
-ENTRY(stub_ptregs_64)
- /*
- * Syscalls marked as needing ptregs land here.
- * If we are on the fast path, we need to save the extra regs,
- * which we achieve by trying again on the slow path. If we are on
- * the slow path, the extra regs are already saved.
- *
- * RAX stores a pointer to the C function implementing the syscall.
- * IRQs are on.
- */
- cmpq $.Lentry_SYSCALL_64_after_fastpath_call, (%rsp)
- jne 1f
-
- /*
- * Called from fast path -- disable IRQs again, pop return address
- * and jump to slow path
- */
- DISABLE_INTERRUPTS(CLBR_ANY)
- TRACE_IRQS_OFF
- popq %rax
- UNWIND_HINT_REGS extra=0
- jmp entry_SYSCALL64_slow_path
-
-1:
- JMP_NOSPEC %rax /* Called from C */
-END(stub_ptregs_64)
-
-.macro ptregs_stub func
-ENTRY(ptregs_\func)
- UNWIND_HINT_FUNC
- leaq \func(%rip), %rax
- jmp stub_ptregs_64
-END(ptregs_\func)
-.endm
-
-/* Instantiate ptregs_stub for each ptregs-using syscall */
-#define __SYSCALL_64_QUAL_(sym)
-#define __SYSCALL_64_QUAL_ptregs(sym) ptregs_stub sym
-#define __SYSCALL_64(nr, sym, qual) __SYSCALL_64_QUAL_##qual(sym)
-#include <asm/syscalls_64.h>
-
/*
* %rdi: prev task
* %rsi: next task
--- a/arch/x86/entry/syscall_64.c
+++ b/arch/x86/entry/syscall_64.c
@@ -7,14 +7,11 @@
#include <asm/asm-offsets.h>
#include <asm/syscall.h>
-#define __SYSCALL_64_QUAL_(sym) sym
-#define __SYSCALL_64_QUAL_ptregs(sym) ptregs_##sym
-
-#define __SYSCALL_64(nr, sym, qual) extern asmlinkage long __SYSCALL_64_QUAL_##qual(sym)(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
+#define __SYSCALL_64(nr, sym, qual) extern asmlinkage long sym(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
#include <asm/syscalls_64.h>
#undef __SYSCALL_64
-#define __SYSCALL_64(nr, sym, qual) [nr] = __SYSCALL_64_QUAL_##qual(sym),
+#define __SYSCALL_64(nr, sym, qual) [nr] = sym,
extern long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
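With the ptregs qualifier machinery gone, the table build is a plain X-macro again: the same list expands once into prototypes and once into designated initializers. A toy stand-alone sketch of the pattern (hypothetical syscall names; the kernel's list is the generated asm/syscalls_64.h):

#include <stdio.h>

#define SYSCALLS		\
	X(0, toy_read)		\
	X(1, toy_write)

/* First expansion: one prototype per entry. */
#define X(nr, sym) static long sym(void);
SYSCALLS
#undef X

/* Second expansion: the table itself, indexed by syscall number. */
#define X(nr, sym) [nr] = sym,
static long (*const sys_call_table[])(void) = {
	SYSCALLS
};
#undef X

static long toy_read(void)  { return 100; }
static long toy_write(void) { return 200; }

int main(void)
{
	printf("%ld\n", sys_call_table[1]());	/* 200 */
	return 0;
}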
Patches currently in stable-queue which might be from luto@kernel.org are
queue-4.15/objtool_Add_support_for_alternatives_at_the_end_of_a_section.patch
queue-4.15/x86pti_Mark_constant_arrays_as___initconst.patch
queue-4.15/x86speculation_Use_Indirect_Branch_Prediction_Barrier_in_context_switch.patch
queue-4.15/x86spectre_Fix_spelling_mistake_vunerable-_vulnerable.patch
queue-4.15/x86get_user_Use_pointer_masking_to_limit_speculation.patch
queue-4.15/x86paravirt_Remove_noreplace-paravirt_cmdline_option.patch
queue-4.15/KVM_VMX_Make_indirect_call_speculation_safe.patch
queue-4.15/KVMVMX_Allow_direct_access_to_MSR_IA32_SPEC_CTRL.patch
queue-4.15/x86entry64_Remove_the_SYSCALL64_fast_path.patch
queue-4.15/KVMSVM_Allow_direct_access_to_MSR_IA32_SPEC_CTRL.patch
queue-4.15/x86asm_Move_status_from_thread_struct_to_thread_info.patch
queue-4.15/KVMx86_Add_IBPB_support.patch
queue-4.15/KVMVMX_Emulate_MSR_IA32_ARCH_CAPABILITIES.patch
queue-4.15/KVM_x86_Make_indirect_calls_in_emulator_speculation_safe.patch
queue-4.15/x86entry64_Push_extra_regs_right_away.patch
queue-4.15/objtool_Warn_on_stripped_section_symbol.patch
queue-4.15/x86syscall_Sanitize_syscall_table_de-references_under_speculation.patch
queue-4.15/objtool_Improve_retpoline_alternative_handling.patch