This is the start of the stable review cycle for the 3.16.77 release. There are 25 patches in this series, which will be posted as responses to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri Nov 15 00:00:00 UTC 2019. Anything received after that time might be too late.
All the patches have also been committed to the linux-3.16.y-rc branch of https://git.kernel.org/pub/scm/linux/kernel/git/bwh/linux-stable-rc.git . A shortlog and diffstat can be found below.
Ben.
-------------
Ben Hutchings (1):
      KVM: Introduce kvm_get_arch_capabilities() [5b76a3cff011df2dcb6186c965a2e4d809a05ad4]

Hui Peng (1):
      ath6kl: fix a NULL-ptr-deref bug in ath6kl_usb_alloc_urb_from_pipe() [39d170b3cb62ba98567f5c4f40c27b5864b304e5]

Imre Deak (1):
      drm/i915/gen8+: Add RC6 CTX corruption WA [7e34f4e4aad3fd34c02b294a3cf2321adf5b4438]

Jakub Kicinski (1):
      net: netem: fix error path for corrupted GSO frames [a7fa12d15855904aff1716e1fc723c03ba38c5cc]

Josh Poimboeuf (1):
      x86/speculation/taa: Fix printing of TAA_MSG_SMT on IBRS_ALL CPUs [012206a822a8b6ac09125bfaa210a95b9eb8f1c1]

Laura Abbott (1):
      rtlwifi: Fix potential overflow on P2P code [8c55dedb795be8ec0cf488f98c03a1c2176f7fb1]

Michal Hocko (1):
      x86/tsx: Add config options to set tsx=on|off|auto [db616173d787395787ecc93eef075fa975227b10]

Ori Nimron (5):
      appletalk: enforce CAP_NET_RAW for raw sockets [6cc03e8aa36c51f3b26a0d21a3c4ce2809c842ac]
      ax25: enforce CAP_NET_RAW for raw sockets [0614e2b73768b502fc32a75349823356d98aae2c]
      ieee802154: enforce CAP_NET_RAW for raw sockets [e69dbd4619e7674c1679cba49afd9dd9ac347eef]
      mISDN: enforce CAP_NET_RAW for raw sockets [b91ee4aa2a2199ba4d4650706c272985a5a32d80]
      nfc: enforce CAP_NET_RAW for raw sockets [3a359798b176183ef09efb7a3dc59abad1cc7104]

Paolo Bonzini (1):
      KVM: x86: use Intel speculation bugs and features as derived in generic x86 code [0c54914d0c52a15db9954a76ce80fee32cf318f4]

Pawan Gupta (8):
      kvm/x86: Export MDS_NO=0 to guests when TSX is enabled [e1d38b63acd843cfdd4222bf19a26700fd5c699e]
      x86/cpu: Add a "tsx=" cmdline option with TSX disabled by default [95c5824f75f3ba4c9e8e5a4b1a623c95390ac266]
      x86/cpu: Add a helper function x86_read_arch_cap_msr() [286836a70433fb64131d2590f4bf512097c255e1]
      x86/msr: Add the IA32_TSX_CTRL MSR [c2955f270a84762343000f103e0640d29c7a96f3]
      x86/speculation/taa: Add documentation for TSX Async Abort [a7a248c593e4fd7a67c50b5f5318fe42a0db335e]
      x86/speculation/taa: Add mitigation for TSX Async Abort [1b42f017415b46c317e71d41c34ec088417a1883]
      x86/speculation/taa: Add sysfs reporting for TSX Async Abort [6608b45ac5ecb56f9e171252229c39580cc85f0f]
      x86/tsx: Add "auto" option to the tsx= cmdline parameter [7531a3596e3272d1f6841e0d601a614555dc6b65]

Sean Young (1):
      media: technisat-usb2: break out of loop at end of buffer [0c4df39e504bf925ab666132ac3c98d6cbbe380b]

Vandana BN (1):
      media: usb:zr364xx:Fix KASAN:null-ptr-deref Read in zr364xx_vidioc_querycap [5d2e73a5f80a5b5aff3caf1ec6d39b5b3f54b26e]

Vineela Tummalapalli (1):
      x86/bugs: Add ITLB_MULTIHIT bug infrastructure [db4d30fbb71b47e4ecb11c4efa5d8aad4b03dfae]

Will Deacon (1):
      cfg80211: wext: avoid copying malformed SSIDs [4ac2813cc867ae563a1ba5a9414bfb554e5796fa]
 Documentation/ABI/testing/sysfs-devices-system-cpu |   2 +
 Documentation/hw-vuln/tsx_async_abort.rst          | 268 +++++++++++++++++++++
 Documentation/kernel-parameters.txt                |  62 +++++
 Documentation/x86/tsx_async_abort.rst              | 117 +++++++++
 Makefile                                           |   4 +-
 arch/x86/Kconfig                                   |  45 ++++
 arch/x86/include/asm/cpufeatures.h                 |   2 +
 arch/x86/include/asm/kvm_host.h                    |   1 +
 arch/x86/include/asm/nospec-branch.h               |   4 +-
 arch/x86/include/asm/processor.h                   |   7 +
 arch/x86/include/uapi/asm/msr-index.h              |  16 ++
 arch/x86/kernel/cpu/Makefile                       |   2 +-
 arch/x86/kernel/cpu/bugs.c                         | 143 ++++++++++-
 arch/x86/kernel/cpu/common.c                       |  93 ++++---
 arch/x86/kernel/cpu/cpu.h                          |  18 ++
 arch/x86/kernel/cpu/intel.c                        |   5 +
 arch/x86/kernel/cpu/tsx.c                          | 140 +++++++++++
 arch/x86/kvm/cpuid.c                               |   7 +
 arch/x86/kvm/x86.c                                 |  40 ++-
 drivers/base/cpu.c                                 |  17 ++
 drivers/gpu/drm/i915/i915_drv.c                    |   4 +
 drivers/gpu/drm/i915/i915_drv.h                    |   5 +
 drivers/gpu/drm/i915/i915_reg.h                    |   2 +
 drivers/gpu/drm/i915/intel_display.c               |   9 +
 drivers/gpu/drm/i915/intel_drv.h                   |   3 +
 drivers/gpu/drm/i915/intel_pm.c                    | 146 ++++++++++-
 drivers/isdn/mISDN/socket.c                        |   2 +
 drivers/media/usb/dvb-usb/technisat-usb2.c         |  21 +-
 drivers/media/usb/zr364xx/zr364xx.c                |   3 +-
 drivers/net/wireless/ath/ath6kl/usb.c              |   8 +
 drivers/net/wireless/rtlwifi/ps.c                  |   6 +
 include/linux/cpu.h                                |   5 +
 net/appletalk/ddp.c                                |   5 +
 net/ax25/af_ax25.c                                 |   2 +
 net/ieee802154/af_ieee802154.c                     |   3 +
 net/nfc/llcp_sock.c                                |   7 +-
 net/sched/sch_netem.c                              |   9 +-
 net/wireless/wext-sme.c                            |   8 +-
 38 files changed, 1172 insertions(+), 69 deletions(-)
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Ben Hutchings <ben@decadent.org.uk>
Extracted from commit 5b76a3cff011 "KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry". We will need this to let a nested hypervisor know that we have applied the mitigation for TAA.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/x86.c              | 13 +++++++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1062,6 +1062,7 @@ int kvm_arch_interrupt_allowed(struct kv
 int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
 void kvm_vcpu_reset(struct kvm_vcpu *vcpu);
 
+u64 kvm_get_arch_capabilities(void);
 void kvm_define_shared_msr(unsigned index, u32 msr);
 int kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
 
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -911,6 +911,16 @@ static u32 emulated_msrs[] = {
 
 static unsigned num_emulated_msrs;
 
+u64 kvm_get_arch_capabilities(void)
+{
+	u64 data;
+
+	rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, &data);
+
+	return data;
+}
+EXPORT_SYMBOL_GPL(kvm_get_arch_capabilities);
+
 bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	if (efer & efer_reserved_bits)
@@ -6969,8 +6979,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu
 	int r;
 
 	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES,
-		       vcpu->arch.arch_capabilities);
+		vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
 	vcpu->arch.mtrr_state.have_fixed = 1;
 	r = vcpu_load(vcpu);
 	if (r)
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Paolo Bonzini <pbonzini@redhat.com>
commit 0c54914d0c52a15db9954a76ce80fee32cf318f4 upstream.
Similar to AMD bits, set the Intel bits from the vendor-independent feature and bug flags, because KVM_GET_SUPPORTED_CPUID does not care about the vendor and they should be set on AMD processors as well.
Suggested-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/kvm/cpuid.c | 7 +++++++
 arch/x86/kvm/x86.c   | 8 ++++++++
 2 files changed, 15 insertions(+)
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -395,6 +395,13 @@ static inline int __do_cpuid_ent(struct
 			entry->ebx |= F(TSC_ADJUST);
 			entry->edx &= kvm_cpuid_7_0_edx_x86_features;
 			cpuid_mask(&entry->edx, 10);
+			if (boot_cpu_has(X86_FEATURE_IBPB) &&
+			    boot_cpu_has(X86_FEATURE_IBRS))
+				entry->edx |= F(SPEC_CTRL);
+			if (boot_cpu_has(X86_FEATURE_STIBP))
+				entry->edx |= F(INTEL_STIBP);
+			if (boot_cpu_has(X86_FEATURE_SSBD))
+				entry->edx |= F(SPEC_CTRL_SSBD);
 			/*
 			 * We emulate ARCH_CAPABILITIES in software even
 			 * if the host doesn't support it.
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -917,8 +917,16 @@ u64 kvm_get_arch_capabilities(void)
 
 	rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, &data);
 
+	if (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
+		data |= ARCH_CAP_RDCL_NO;
+	if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
+		data |= ARCH_CAP_SSB_NO;
+	if (!boot_cpu_has_bug(X86_BUG_MDS))
+		data |= ARCH_CAP_MDS_NO;
+
 	return data;
 }
+
 EXPORT_SYMBOL_GPL(kvm_get_arch_capabilities);
 
 bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
commit c2955f270a84762343000f103e0640d29c7a96f3 upstream.
Transactional Synchronization Extensions (TSX) may be used on certain processors as part of a speculative side channel attack. A microcode update for existing processors that are vulnerable to this attack will add a new MSR - IA32_TSX_CTRL to allow the system administrator the option to disable TSX as one of the possible mitigations.
The CPUs which get this new MSR after a microcode upgrade are the ones which do not set MSR_IA32_ARCH_CAPABILITIES.MDS_NO (bit 5) because those CPUs have CPUID.MD_CLEAR, i.e., the VERW implementation which clears all CPU buffers takes care of the TAA case as well.
[ Note that future processors that are not vulnerable will also support the IA32_TSX_CTRL MSR. ]
Add defines for the new IA32_TSX_CTRL MSR and its bits.
TSX has two sub-features:
1. Restricted Transactional Memory (RTM) is an explicitly-used feature
   where new instructions begin and end TSX transactions.

2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of
   "old" style locks are used by software.
Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the IA32_TSX_CTRL MSR.
There are two control bits in IA32_TSX_CTRL MSR:
Bit 0: When set, it disables the Restricted Transactional Memory (RTM) sub-feature of TSX (will force all transactions to abort on the XBEGIN instruction).
Bit 1: When set, it disables the enumeration of the RTM and HLE feature (i.e. it will make CPUID(EAX=7).EBX{bit4} and CPUID(EAX=7).EBX{bit11} read as 0).
The other TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally disabled by the new microcode but still enumerated as present by CPUID(EAX=7).EBX{bit4}, unless disabled by IA32_TSX_CTRL_MSR[1] - TSX_CTRL_CPUID_CLEAR.
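As an aside for reviewers: TSX_CTRL_CPUID_CLEAR matters because well-behaved
userspace probes CPUID before issuing XBEGIN. A minimal probe sketch
(illustrative only, not part of the patch; uses the compiler's <cpuid.h>
helper):

	#include <stdio.h>
	#include <cpuid.h>	/* __get_cpuid_count(), GCC/clang */

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* CPUID leaf 7, subleaf 0: structured extended features */
		if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
			return 1;

		/* EBX bit 4 = HLE, bit 11 = RTM; both read as 0 once
		   TSX_CTRL_CPUID_CLEAR is set in MSR_IA32_TSX_CTRL. */
		printf("HLE: %u RTM: %u\n",
		       (ebx >> 4) & 1, (ebx >> 11) & 1);
		return 0;
	}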
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
[bwh: Backported to 3.16:
 - Don't use BIT() in msr-index.h because it's a UAPI header
 - Adjust filename, context, indentation]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/include/uapi/asm/msr-index.h | 5 +++++
 1 file changed, 5 insertions(+)
--- a/arch/x86/include/uapi/asm/msr-index.h
+++ b/arch/x86/include/uapi/asm/msr-index.h
@@ -70,10 +70,15 @@
 						  * Microarchitectural Data
 						  * Sampling (MDS) vulnerabilities.
 						  */
+#define ARCH_CAP_TSX_CTRL_MSR		(1UL << 7) /* MSR for TSX control is available. */
 
 #define MSR_IA32_BBL_CR_CTL		0x00000119
 #define MSR_IA32_BBL_CR_CTL3		0x0000011e
 
+#define MSR_IA32_TSX_CTRL		0x00000122
+#define TSX_CTRL_RTM_DISABLE		(1UL << 0) /* Disable RTM feature */
+#define TSX_CTRL_CPUID_CLEAR		(1UL << 1) /* Disable TSX enumeration */
+
 #define MSR_IA32_SYSENTER_CS		0x00000174
 #define MSR_IA32_SYSENTER_ESP		0x00000175
 #define MSR_IA32_SYSENTER_EIP		0x00000176
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
commit 286836a70433fb64131d2590f4bf512097c255e1 upstream.
Add a helper function to read the IA32_ARCH_CAPABILITIES MSR.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/kernel/cpu/common.c | 15 +++++++++++----
 arch/x86/kernel/cpu/cpu.h    |  2 ++
 2 files changed, 13 insertions(+), 4 deletions(-)
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -878,19 +878,26 @@ static bool __init cpu_matches(unsigned
 	return m && !!(m->driver_data & which);
 }
 
-static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+u64 x86_read_arch_cap_msr(void)
 {
 	u64 ia32_cap = 0;
 
+	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
+		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+
+	return ia32_cap;
+}
+
+static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
 	if (cpu_matches(NO_SPECULATION))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
 
-	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
-
 	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
 	    !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -48,4 +48,6 @@ extern void cpu_detect_cache_sizes(struc
 
 extern void x86_spec_ctrl_setup_ap(void);
 
+extern u64 x86_read_arch_cap_msr(void);
+
 #endif /* ARCH_X86_CPU_H */
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
commit 95c5824f75f3ba4c9e8e5a4b1a623c95390ac266 upstream.
Add a kernel cmdline parameter "tsx" to control the Transactional Synchronization Extensions (TSX) feature. On CPUs that support TSX control, use "tsx=on|off" to enable or disable TSX. Not specifying this option is equivalent to "tsx=off". This is because on certain processors TSX may be used as a part of a speculative side channel attack.
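As a usage illustration (the boot entry below is made up), the switch is
simply appended to the kernel command line in the boot loader configuration:

	linux /boot/vmlinuz-3.16.77 root=/dev/sda1 ro tsx=on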
Carve out the TSX controlling functionality into a separate compilation unit because TSX is a CPU feature while the TSX async abort control machinery will go to cpu/bugs.c.
 [ bp: - Massage, shorten and clear the arg buffer.
       - Clarifications of the tsx= possible options - Josh.
       - Expand on TSX_CTRL availability - Pawan. ]
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
[bwh: Backported to 3.16:
 - Drop __ro_after_init attribute
 - Adjust filenames, context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 Documentation/kernel-parameters.txt |  26 ++++++
 arch/x86/kernel/cpu/Makefile        |   2 +-
 arch/x86/kernel/cpu/common.c        |   2 +
 arch/x86/kernel/cpu/cpu.h           |  16 ++++
 arch/x86/kernel/cpu/intel.c         |   5 ++
 arch/x86/kernel/cpu/tsx.c           | 125 ++++++++++++++++++++++++++++
 6 files changed, 175 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kernel/cpu/tsx.c
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -3581,6 +3581,32 @@ bytes respectively. Such letter suffixes
 			platforms where RDTSC is slow and this accounting
 			can add overhead.
 
+	tsx=		[X86] Control Transactional Synchronization
+			Extensions (TSX) feature in Intel processors that
+			support TSX control.
+
+			This parameter controls the TSX feature. The options are:
+
+			on	- Enable TSX on the system. Although there are
+				mitigations for all known security vulnerabilities,
+				TSX has been known to be an accelerator for
+				several previous speculation-related CVEs, and
+				so there may be unknown security risks associated
+				with leaving it enabled.
+
+			off	- Disable TSX on the system. (Note that this
+				option takes effect only on newer CPUs which are
+				not vulnerable to MDS, i.e., have
+				MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1 and which get
+				the new IA32_TSX_CTRL MSR through a microcode
+				update. This new MSR allows for the reliable
+				deactivation of the TSX functionality.)
+
+			Not specifying this option is equivalent to tsx=off.
+
+			See Documentation/hw-vuln/tsx_async_abort.rst
+			for more details.
+
 	turbografx.map[2|3]=	[HW,JOY]
 			TurboGraFX parallel port interface
 			Format:
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -18,7 +18,7 @@ obj-y			+= rdrand.o
 obj-y			+= match.o
 obj-y			+= bugs.o
 
-obj-$(CONFIG_CPU_SUP_INTEL)	+= intel.o
+obj-$(CONFIG_CPU_SUP_INTEL)	+= intel.o tsx.o
 obj-$(CONFIG_CPU_SUP_AMD)	+= amd.o
 obj-$(CONFIG_CPU_SUP_CYRIX_32)	+= cyrix.o
 obj-$(CONFIG_CPU_SUP_CENTAUR)	+= centaur.o
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1242,6 +1242,8 @@ void __init identify_boot_cpu(void)
 	vgetcpu_set_mode();
 #endif
 	cpu_detect_tlb(&boot_cpu_data);
+
+	tsx_init();
 }
 
 void identify_secondary_cpu(struct cpuinfo_x86 *c)
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -43,6 +43,22 @@ struct _tlb_table {
 extern const struct cpu_dev *const __x86_cpu_dev_start[],
 			    *const __x86_cpu_dev_end[];
 
+#ifdef CONFIG_CPU_SUP_INTEL
+enum tsx_ctrl_states {
+	TSX_CTRL_ENABLE,
+	TSX_CTRL_DISABLE,
+	TSX_CTRL_NOT_SUPPORTED,
+};
+
+extern enum tsx_ctrl_states tsx_ctrl_state;
+
+extern void __init tsx_init(void);
+extern void tsx_enable(void);
+extern void tsx_disable(void);
+#else
+static inline void tsx_init(void) { }
+#endif /* CONFIG_CPU_SUP_INTEL */
+
 extern void get_cpu_cap(struct cpuinfo_x86 *c);
 extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -566,6 +566,11 @@ static void init_intel(struct cpuinfo_x8
 			wrmsrl(MSR_IA32_ENERGY_PERF_BIAS, epb);
 		}
 	}
+
+	if (tsx_ctrl_state == TSX_CTRL_ENABLE)
+		tsx_enable();
+	if (tsx_ctrl_state == TSX_CTRL_DISABLE)
+		tsx_disable();
 }
 
 #ifdef CONFIG_X86_32
--- /dev/null
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -0,0 +1,125 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Intel Transactional Synchronization Extensions (TSX) control.
+ *
+ * Copyright (C) 2019 Intel Corporation
+ *
+ * Author:
+ *	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+ */
+
+#include <linux/cpufeature.h>
+
+#include <asm/cmdline.h>
+
+#include "cpu.h"
+
+enum tsx_ctrl_states tsx_ctrl_state = TSX_CTRL_NOT_SUPPORTED;
+
+void tsx_disable(void)
+{
+	u64 tsx;
+
+	rdmsrl(MSR_IA32_TSX_CTRL, tsx);
+
+	/* Force all transactions to immediately abort */
+	tsx |= TSX_CTRL_RTM_DISABLE;
+
+	/*
+	 * Ensure TSX support is not enumerated in CPUID.
+	 * This is visible to userspace and will ensure they
+	 * do not waste resources trying TSX transactions that
+	 * will always abort.
+	 */
+	tsx |= TSX_CTRL_CPUID_CLEAR;
+
+	wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+}
+
+void tsx_enable(void)
+{
+	u64 tsx;
+
+	rdmsrl(MSR_IA32_TSX_CTRL, tsx);
+
+	/* Enable the RTM feature in the cpu */
+	tsx &= ~TSX_CTRL_RTM_DISABLE;
+
+	/*
+	 * Ensure TSX support is enumerated in CPUID.
+	 * This is visible to userspace and will ensure they
+	 * can enumerate and use the TSX feature.
+	 */
+	tsx &= ~TSX_CTRL_CPUID_CLEAR;
+
+	wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+}
+
+static bool __init tsx_ctrl_is_supported(void)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
+	/*
+	 * TSX is controlled via MSR_IA32_TSX_CTRL.  However, support for this
+	 * MSR is enumerated by ARCH_CAP_TSX_MSR bit in MSR_IA32_ARCH_CAPABILITIES.
+	 *
+	 * TSX control (aka MSR_IA32_TSX_CTRL) is only available after a
+	 * microcode update on CPUs that have their MSR_IA32_ARCH_CAPABILITIES
+	 * bit MDS_NO=1. CPUs with MDS_NO=0 are not planned to get
+	 * MSR_IA32_TSX_CTRL support even after a microcode update. Thus,
+	 * tsx= cmdline requests will do nothing on CPUs without
+	 * MSR_IA32_TSX_CTRL support.
+	 */
+	return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
+}
+
+void __init tsx_init(void)
+{
+	char arg[4] = {};
+	int ret;
+
+	if (!tsx_ctrl_is_supported())
+		return;
+
+	ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
+	if (ret >= 0) {
+		if (!strcmp(arg, "on")) {
+			tsx_ctrl_state = TSX_CTRL_ENABLE;
+		} else if (!strcmp(arg, "off")) {
+			tsx_ctrl_state = TSX_CTRL_DISABLE;
+		} else {
+			tsx_ctrl_state = TSX_CTRL_DISABLE;
+			pr_err("tsx: invalid option, defaulting to off\n");
+		}
+	} else {
+		/* tsx= not provided, defaulting to off */
+		tsx_ctrl_state = TSX_CTRL_DISABLE;
+	}
+
+	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
+		tsx_disable();
+
+		/*
+		 * tsx_disable() will change the state of the
+		 * RTM CPUID bit.  Clear it here since it is now
+		 * expected to be not set.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_RTM);
+	} else if (tsx_ctrl_state == TSX_CTRL_ENABLE) {
+
+		/*
+		 * HW defaults TSX to be enabled at bootup.
+		 * We may still need the TSX enable support
+		 * during init for special cases like
+		 * kexec after TSX is disabled.
+		 */
+		tsx_enable();
+
+		/*
+		 * tsx_enable() will change the state of the
+		 * RTM CPUID bit.  Force it here since it is now
+		 * expected to be set.
+		 */
+		setup_force_cpu_cap(X86_FEATURE_RTM);
+	}
+}
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
commit 1b42f017415b46c317e71d41c34ec088417a1883 upstream.
TSX Async Abort (TAA) is a side channel vulnerability to the internal buffers in some Intel processors similar to Microarchitectural Data Sampling (MDS). In this case, certain loads may speculatively pass invalid data to dependent operations when an asynchronous abort condition is pending in a TSX transaction.
This includes loads with no fault or assist condition. Such loads may speculatively expose stale data from the uarch data structures as in MDS. Scope of exposure is within the same-thread and cross-thread. This issue affects all current processors that support TSX, but do not have ARCH_CAP_TAA_NO (bit 8) set in MSR_IA32_ARCH_CAPABILITIES.
On CPUs which have their IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0, CPUID.MD_CLEAR=1 and the MDS mitigation is clearing the CPU buffers using VERW or L1D_FLUSH, there is no additional mitigation needed for TAA. On affected CPUs with MDS_NO=1 this issue can be mitigated by disabling the Transactional Synchronization Extensions (TSX) feature.
A new MSR IA32_TSX_CTRL in future and current processors after a microcode update can be used to control the TSX feature. There are two bits in that MSR:
* TSX_CTRL_RTM_DISABLE disables the TSX sub-feature Restricted Transactional Memory (RTM).
* TSX_CTRL_CPUID_CLEAR clears the RTM enumeration in CPUID. The other TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally disabled with updated microcode but still enumerated as present by CPUID(EAX=7).EBX{bit4}.
The second mitigation approach is similar to MDS which is clearing the affected CPU buffers on return to user space and when entering a guest. Relevant microcode update is required for the mitigation to work. More details on this approach can be found here:
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
The TSX feature can be controlled by the "tsx" command line parameter. If it is force-enabled then "Clear CPU buffers" (MDS mitigation) is deployed. The effective mitigation state can be read from sysfs.
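For reference, the buffer-clearing primitive that this mitigation reuses is
the existing MDS helper in arch/x86/include/asm/nospec-branch.h (abridged
here; comments mine):

	static inline void mds_clear_cpu_buffers(void)
	{
		static const u16 ds = __KERNEL_DS;

		/*
		 * The memory-operand form of VERW is the one that updated
		 * microcode turns into a CPU buffer flush; VERW modifies
		 * ZF, hence the "cc" clobber.
		 */
		asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
	}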
 [ bp: - massage + comments cleanup
       - s/TAA_MITIGATION_TSX_DISABLE/TAA_MITIGATION_TSX_DISABLED/g - Josh.
       - remove partial TAA mitigation in update_mds_branch_idle() - Josh.
       - s/tsx_async_abort_cmdline/tsx_async_abort_parse_cmdline/g ]
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
[bwh: Backported to 3.16:
 - Use next available X86_BUG bit
 - Don't use BIT() in msr-index.h because it's a UAPI header
 - Add #include "cpu.h" in bugs.c
 - Drop __ro_after_init attribute
 - Drop "nosmt" support
 - Adjust context, indentation]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/include/asm/cpufeatures.h    |   1 +
 arch/x86/include/asm/nospec-branch.h  |   4 +-
 arch/x86/include/asm/processor.h      |   7 ++
 arch/x86/include/uapi/asm/msr-index.h |   4 +
 arch/x86/kernel/cpu/bugs.c            | 103 ++++++++++++++++++++++++++
 arch/x86/kernel/cpu/common.c          |  15 ++++
 6 files changed, 132 insertions(+), 2 deletions(-)
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -279,5 +279,6 @@
 #define X86_BUG_MDS		X86_BUG(10) /* CPU is affected by Microarchitectural data sampling */
 #define X86_BUG_MSBDS_ONLY	X86_BUG(11) /* CPU is only affected by the MSDBS variant of BUG_MDS */
 #define X86_BUG_SWAPGS		X86_BUG(12) /* CPU is affected by speculation through SWAPGS */
+#define X86_BUG_TAA		X86_BUG(13) /* CPU is affected by TSX Async Abort(TAA) */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -268,7 +268,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear)
 #include <asm/segment.h>
 
 /**
- * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
+ * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
  *
  * This uses the otherwise unused and obsolete VERW instruction in
  * combination with microcode which triggers a CPU buffer flush when the
@@ -291,7 +291,7 @@ static inline void mds_clear_cpu_buffers
 }
 
 /**
- * mds_user_clear_cpu_buffers - Mitigation for MDS vulnerability
+ * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
  *
  * Clear CPU buffers if the corresponding static key is enabled
  */
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -960,4 +960,11 @@ enum mds_mitigations {
 	MDS_MITIGATION_VMWERV,
 };
 
+enum taa_mitigations {
+	TAA_MITIGATION_OFF,
+	TAA_MITIGATION_UCODE_NEEDED,
+	TAA_MITIGATION_VERW,
+	TAA_MITIGATION_TSX_DISABLED,
+};
+
 #endif /* _ASM_X86_PROCESSOR_H */
--- a/arch/x86/include/uapi/asm/msr-index.h
+++ b/arch/x86/include/uapi/asm/msr-index.h
@@ -71,6 +71,10 @@
 						  * Sampling (MDS) vulnerabilities.
 						  */
 #define ARCH_CAP_TSX_CTRL_MSR		(1UL << 7) /* MSR for TSX control is available. */
+#define ARCH_CAP_TAA_NO			(1UL << 8) /*
+						    * Not susceptible to
+						    * TSX Async Abort (TAA) vulnerabilities.
+						    */
 
 #define MSR_IA32_BBL_CR_CTL		0x00000119
 #define MSR_IA32_BBL_CR_CTL3		0x0000011e
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -30,11 +30,14 @@
 #include <asm/intel-family.h>
 #include <asm/e820.h>
 
+#include "cpu.h"
+
 static void __init spectre_v1_select_mitigation(void);
 static void __init spectre_v2_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
+static void __init taa_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
 u64 x86_spec_ctrl_base;
@@ -155,6 +158,7 @@ void __init check_bugs(void)
 	ssb_select_mitigation();
 	l1tf_select_mitigation();
 	mds_select_mitigation();
+	taa_select_mitigation();
 
 	arch_smt_update();
 
@@ -313,6 +317,93 @@ static int __init mds_cmdline(char *str)
 early_param("mds", mds_cmdline);
 
 #undef pr_fmt
+#define pr_fmt(fmt)	"TAA: " fmt
+
+/* Default mitigation for TAA-affected CPUs */
+static enum taa_mitigations taa_mitigation = TAA_MITIGATION_VERW;
+
+static const char * const taa_strings[] = {
+	[TAA_MITIGATION_OFF]		= "Vulnerable",
+	[TAA_MITIGATION_UCODE_NEEDED]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
+	[TAA_MITIGATION_VERW]		= "Mitigation: Clear CPU buffers",
+	[TAA_MITIGATION_TSX_DISABLED]	= "Mitigation: TSX disabled",
+};
+
+static void __init taa_select_mitigation(void)
+{
+	u64 ia32_cap;
+
+	if (!boot_cpu_has_bug(X86_BUG_TAA)) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+		return;
+	}
+
+	/* TSX previously disabled by tsx=off */
+	if (!boot_cpu_has(X86_FEATURE_RTM)) {
+		taa_mitigation = TAA_MITIGATION_TSX_DISABLED;
+		goto out;
+	}
+
+	if (cpu_mitigations_off()) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+		return;
+	}
+
+	/* TAA mitigation is turned off on the cmdline (tsx_async_abort=off) */
+	if (taa_mitigation == TAA_MITIGATION_OFF)
+		goto out;
+
+	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+		taa_mitigation = TAA_MITIGATION_VERW;
+	else
+		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+
+	/*
+	 * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
+	 * A microcode update fixes this behavior to clear CPU buffers. It also
+	 * adds support for MSR_IA32_TSX_CTRL which is enumerated by the
+	 * ARCH_CAP_TSX_CTRL_MSR bit.
+	 *
+	 * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
+	 * update is required.
+	 */
+	ia32_cap = x86_read_arch_cap_msr();
+	if ( (ia32_cap & ARCH_CAP_MDS_NO) &&
+	    !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
+		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+
+	/*
+	 * TSX is enabled, select alternate mitigation for TAA which is
+	 * the same as MDS. Enable MDS static branch to clear CPU buffers.
+	 *
+	 * For guests that can't determine whether the correct microcode is
+	 * present on host, enable the mitigation for UCODE_NEEDED as well.
+	 */
+	static_branch_enable(&mds_user_clear);
+
+out:
+	pr_info("%s\n", taa_strings[taa_mitigation]);
+}
+
+static int __init tsx_async_abort_parse_cmdline(char *str)
+{
+	if (!boot_cpu_has_bug(X86_BUG_TAA))
+		return 0;
+
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off")) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+	} else if (!strcmp(str, "full")) {
+		taa_mitigation = TAA_MITIGATION_VERW;
+	}
+
+	return 0;
+}
+early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
+
+#undef pr_fmt
 #define pr_fmt(fmt)	"Spectre V1 : " fmt
 
 enum spectre_v1_mitigation {
@@ -824,6 +915,7 @@ static void update_mds_branch_idle(void)
 }
 
 #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
+#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
 
 void arch_smt_update(void)
 {
@@ -856,6 +948,17 @@ void arch_smt_update(void)
 		break;
 	}
 
+	switch (taa_mitigation) {
+	case TAA_MITIGATION_VERW:
+	case TAA_MITIGATION_UCODE_NEEDED:
+		if (sched_smt_active())
+			pr_warn_once(TAA_MSG_SMT);
+		break;
+	case TAA_MITIGATION_TSX_DISABLED:
+	case TAA_MITIGATION_OFF:
+		break;
+	}
+
 	mutex_unlock(&spec_ctrl_mutex);
 }
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -914,6 +914,21 @@ static void __init cpu_set_bug_bits(stru
 	if (!cpu_matches(NO_SWAPGS))
 		setup_force_cpu_bug(X86_BUG_SWAPGS);
 
+	/*
+	 * When the CPU is not mitigated for TAA (TAA_NO=0) set TAA bug when:
+	 *	- TSX is supported or
+	 *	- TSX_CTRL is present
+	 *
+	 * TSX_CTRL check is needed for cases when TSX could be disabled before
+	 * the kernel boot e.g. kexec.
+	 * TSX_CTRL check alone is not sufficient for cases when the microcode
+	 * update is not present or running as guest that don't get TSX_CTRL.
+	 */
+	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
+	    (cpu_has(c, X86_FEATURE_RTM) ||
+	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
+		setup_force_cpu_bug(X86_BUG_TAA);
+
 	if (cpu_matches(NO_MELTDOWN))
 		return;
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
commit 6608b45ac5ecb56f9e171252229c39580cc85f0f upstream.
Add the sysfs reporting file for TSX Async Abort. It exposes the vulnerability and the mitigation state similar to the existing files for the other hardware vulnerabilities.
Sysfs file path is: /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
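For illustration only (not part of the patch), checking the new file from
userspace is an ordinary sysfs read:

	#include <stdio.h>

	int main(void)
	{
		char line[128];
		FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/tsx_async_abort", "r");

		if (!f) {		/* older kernel: file not present */
			perror("tsx_async_abort");
			return 1;
		}
		if (fgets(line, sizeof(line), f))
			fputs(line, stdout);	/* e.g. "Mitigation: TSX disabled" */
		fclose(f);
		return 0;
	}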
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/kernel/cpu/bugs.c | 23 +++++++++++++++++++++++
 drivers/base/cpu.c         |  9 +++++++++
 include/linux/cpu.h        |  3 +++
 3 files changed, 35 insertions(+)
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1365,6 +1365,21 @@ static ssize_t mds_show_state(char *buf)
 		       sched_smt_active() ? "vulnerable" : "disabled");
 }
 
+static ssize_t tsx_async_abort_show_state(char *buf)
+{
+	if ((taa_mitigation == TAA_MITIGATION_TSX_DISABLED) ||
+	    (taa_mitigation == TAA_MITIGATION_OFF))
+		return sprintf(buf, "%s\n", taa_strings[taa_mitigation]);
+
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+		return sprintf(buf, "%s; SMT Host state unknown\n",
+			       taa_strings[taa_mitigation]);
+	}
+
+	return sprintf(buf, "%s; SMT %s\n", taa_strings[taa_mitigation],
+		       sched_smt_active() ? "vulnerable" : "disabled");
+}
+
 static char *stibp_state(void)
 {
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
@@ -1430,6 +1445,9 @@ static ssize_t cpu_show_common(struct de
 	case X86_BUG_MDS:
 		return mds_show_state(buf);
 
+	case X86_BUG_TAA:
+		return tsx_async_abort_show_state(buf);
+
 	default:
 		break;
 	}
@@ -1466,4 +1484,9 @@ ssize_t cpu_show_mds(struct device *dev,
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_MDS);
 }
+
+ssize_t cpu_show_tsx_async_abort(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_TAA);
+}
 #endif
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -456,12 +456,20 @@ ssize_t __weak cpu_show_mds(struct devic
 	return sprintf(buf, "Not affected\n");
 }
 
+ssize_t __weak cpu_show_tsx_async_abort(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	return sprintf(buf, "Not affected\n");
+}
+
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
 static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
 static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
 static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
+static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
 
 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -470,6 +478,7 @@ static struct attribute *cpu_root_vulner
 	&dev_attr_spec_store_bypass.attr,
 	&dev_attr_l1tf.attr,
 	&dev_attr_mds.attr,
+	&dev_attr_tsx_async_abort.attr,
 	NULL
 };
 
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -51,6 +51,9 @@ extern ssize_t cpu_show_l1tf(struct devi
 			     struct device_attribute *attr, char *buf);
 extern ssize_t cpu_show_mds(struct device *dev,
 			    struct device_attribute *attr, char *buf);
+extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
+					struct device_attribute *attr,
+					char *buf);
 
 #ifdef CONFIG_HOTPLUG_CPU
 extern void unregister_cpu(struct cpu *cpu);
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
commit e1d38b63acd843cfdd4222bf19a26700fd5c699e upstream.
Export the IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0 to guests on TSX Async Abort (TAA) affected hosts that have TSX enabled and updated microcode. This is required so that the guests don't complain,
"Vulnerable: Clear CPU buffers attempted, no microcode"
when the host has the updated microcode to clear CPU buffers.
Microcode update also adds support for MSR_IA32_TSX_CTRL which is enumerated by the ARCH_CAP_TSX_CTRL bit in IA32_ARCH_CAPABILITIES MSR. Guests can't do this check themselves when the ARCH_CAP_TSX_CTRL bit is not exported to the guests.
In this case export MDS_NO=0 to the guests. When guests have CPUID.MD_CLEAR=1, they deploy MDS mitigation which also mitigates TAA.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/kvm/x86.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -924,6 +924,25 @@ u64 kvm_get_arch_capabilities(void)
 	if (!boot_cpu_has_bug(X86_BUG_MDS))
 		data |= ARCH_CAP_MDS_NO;
 
+	/*
+	 * On TAA affected systems, export MDS_NO=0 when:
+	 *	- TSX is enabled on the host, i.e. X86_FEATURE_RTM=1.
+	 *	- Updated microcode is present. This is detected by
+	 *	  the presence of ARCH_CAP_TSX_CTRL_MSR and ensures
+	 *	  that VERW clears CPU buffers.
+	 *
+	 * When MDS_NO=0 is exported, guests deploy clear CPU buffer
+	 * mitigation and don't complain:
+	 *
+	 *	"Vulnerable: Clear CPU buffers attempted, no microcode"
+	 *
+	 * If TSX is disabled on the system, guests are also mitigated against
+	 * TAA and clear CPU buffer mitigation is not required for guests.
+	 */
+	if (boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM) &&
+	    (data & ARCH_CAP_TSX_CTRL_MSR))
+		data &= ~ARCH_CAP_MDS_NO;
+
 	return data;
 }
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
commit 7531a3596e3272d1f6841e0d601a614555dc6b65 upstream.
Platforms which are not affected by X86_BUG_TAA may want the TSX feature enabled. Add an "auto" option to the tsx= cmdline parameter: with tsx=auto, TSX is disabled when X86_BUG_TAA is present, otherwise TSX is enabled.

More details on X86_BUG_TAA can be found here: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html
[ bp: Extend the arg buffer to accommodate "auto\0". ]
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
[bwh: Backported to 3.16: adjust filename]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 Documentation/kernel-parameters.txt | 3 +++
 arch/x86/kernel/cpu/tsx.c           | 7 ++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -3602,6 +3602,9 @@ bytes respectively. Such letter suffixes
 			update. This new MSR allows for the reliable
 			deactivation of the TSX functionality.)
 
+			auto	- Disable TSX if X86_BUG_TAA is present,
+				  otherwise enable TSX on the system.
+
 			Not specifying this option is equivalent to tsx=off.
 
 			See Documentation/hw-vuln/tsx_async_abort.rst
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -75,7 +75,7 @@ static bool __init tsx_ctrl_is_supported
 
 void __init tsx_init(void)
 {
-	char arg[4] = {};
+	char arg[5] = {};
 	int ret;
 
 	if (!tsx_ctrl_is_supported())
@@ -87,6 +87,11 @@ void __init tsx_init(void)
 			tsx_ctrl_state = TSX_CTRL_ENABLE;
 		} else if (!strcmp(arg, "off")) {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
+		} else if (!strcmp(arg, "auto")) {
+			if (boot_cpu_has_bug(X86_BUG_TAA))
+				tsx_ctrl_state = TSX_CTRL_DISABLE;
+			else
+				tsx_ctrl_state = TSX_CTRL_ENABLE;
 		} else {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
 			pr_err("tsx: invalid option, defaulting to off\n");
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
commit a7a248c593e4fd7a67c50b5f5318fe42a0db335e upstream.
Add the documentation for TSX Async Abort. Include the description of the issue, how to check the mitigation state, how to control the mitigation, and guidance for system administrators.
[ bp: Add proper SPDX tags, touch ups by Josh and me. ]
Co-developed-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
[bwh: Backported to 3.16:
 - Drop changes to ReST index files
 - Drop "nosmt" documentation
 - Adjust filenames, context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 Documentation/ABI/testing/sysfs-devices-system-cpu |   1 +
 Documentation/hw-vuln/tsx_async_abort.rst          | 268 ++++++++++++++++++++
 Documentation/kernel-parameters.txt                |  33 +++
 Documentation/x86/tsx_async_abort.rst              | 117 ++++++++
 4 files changed, 419 insertions(+)
 create mode 100644 Documentation/hw-vuln/tsx_async_abort.rst
 create mode 100644 Documentation/x86/tsx_async_abort.rst
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -232,6 +232,7 @@ What:		/sys/devices/system/cpu/vulnerabi
 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
 		/sys/devices/system/cpu/vulnerabilities/l1tf
 		/sys/devices/system/cpu/vulnerabilities/mds
+		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
 Date:		January 2018
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 Description:	Information about CPU vulnerabilities
--- /dev/null
+++ b/Documentation/hw-vuln/tsx_async_abort.rst
@@ -0,0 +1,268 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+TAA - TSX Asynchronous Abort
+======================================
+
+TAA is a hardware vulnerability that allows unprivileged speculative access to
+data which is available in various CPU internal buffers by using asynchronous
+aborts within an Intel TSX transactional region.
+
+Affected processors
+-------------------
+
+This vulnerability only affects Intel processors that support Intel
+Transactional Synchronization Extensions (TSX) when the TAA_NO bit (bit 8)
+is 0 in the IA32_ARCH_CAPABILITIES MSR.  On processors where the MDS_NO bit
+(bit 5) is 0 in the IA32_ARCH_CAPABILITIES MSR, the existing MDS mitigations
+also mitigate against TAA.
+
+Whether a processor is affected or not can be read out from the TAA
+vulnerability file in sysfs.  See :ref:`tsx_async_abort_sys_info`.
+
+Related CVEs
+------------
+
+The following CVE entry is related to this TAA issue:
+
+   ==============  =====  ===================================================
+   CVE-2019-11135  TAA    TSX Asynchronous Abort (TAA) condition on some
+                          microprocessors utilizing speculative execution may
+                          allow an authenticated user to potentially enable
+                          information disclosure via a side channel with
+                          local access.
+   ==============  =====  ===================================================
+
+Problem
+-------
+
+When performing store, load or L1 refill operations, processors write
+data into temporary microarchitectural structures (buffers). The data in
+those buffers can be forwarded to load operations as an optimization.
+
+Intel TSX is an extension to the x86 instruction set architecture that adds
+hardware transactional memory support to improve performance of multi-threaded
+software. TSX lets the processor expose and exploit concurrency hidden in an
+application due to dynamically avoiding unnecessary synchronization.
+
+TSX supports atomic memory transactions that are either committed (success) or
+aborted. During an abort, operations that happened within the transactional
+region are rolled back. An asynchronous abort takes place, among other options,
+when a different thread accesses a cache line that is also used within the
+transactional region when that access might lead to a data race.
+
+Immediately after an uncompleted asynchronous abort, certain speculatively
+executed loads may read data from those internal buffers and pass it to
+dependent operations. This can be then used to infer the value via a cache
+side channel attack.
+
+Because the buffers are potentially shared between Hyper-Threads cross
+Hyper-Thread attacks are possible.
+
+The victim of a malicious actor does not need to make use of TSX. Only the
+attacker needs to begin a TSX transaction and raise an asynchronous abort
+which in turn potentially leaks data stored in the buffers.
+
+More detailed technical information is available in the TAA specific x86
+architecture section: :ref:`Documentation/x86/tsx_async_abort.rst <tsx_async_abort>`.
+
+
+Attack scenarios
+----------------
+
+Attacks against the TAA vulnerability can be implemented from unprivileged
+applications running on hosts or guests.
+
+As for MDS, the attacker has no control over the memory addresses that can
+be leaked. Only the victim is responsible for bringing data to the CPU. As
+a result, the malicious actor has to sample as much data as possible and
+then postprocess it to try to infer any useful information from it.
+
+A potential attacker only has read access to the data. Also, there is no direct
+privilege escalation by using this technique.
+
+
+.. _tsx_async_abort_sys_info:
+
+TAA system information
+-----------------------
+
+The Linux kernel provides a sysfs interface to enumerate the current TAA status
+of mitigated systems. The relevant sysfs file is:
+
+/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+
+The possible values in this file are:
+
+.. list-table::
+
+   * - 'Vulnerable'
+     - The CPU is affected by this vulnerability and the microcode and kernel mitigation are not applied.
+   * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
+     - The system tries to clear the buffers but the microcode might not support the operation.
+   * - 'Mitigation: Clear CPU buffers'
+     - The microcode has been updated to clear the buffers. TSX is still enabled.
+   * - 'Mitigation: TSX disabled'
+     - TSX is disabled.
+   * - 'Not affected'
+     - The CPU is not affected by this issue.
+
+.. _ucode_needed:
+
+Best effort mitigation mode
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If the processor is vulnerable, but the availability of the microcode-based
+mitigation mechanism is not advertised via CPUID the kernel selects a best
+effort mitigation mode.  This mode invokes the mitigation instructions
+without a guarantee that they clear the CPU buffers.
+
+This is done to address virtualization scenarios where the host has the
+microcode update applied, but the hypervisor is not yet updated to expose the
+CPUID to the guest. If the host has updated microcode the protection takes
+effect; otherwise a few CPU cycles are wasted pointlessly.
+
+The state in the tsx_async_abort sysfs file reflects this situation
+accordingly.
+
+
+Mitigation mechanism
+--------------------
+
+The kernel detects the affected CPUs and the presence of the microcode which is
+required. If a CPU is affected and the microcode is available, then the kernel
+enables the mitigation by default.
+
+
+The mitigation can be controlled at boot time via a kernel command line option.
+See :ref:`taa_mitigation_control_command_line`.
+
+.. _virt_mechanism:
+
+Virtualization mitigation
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Affected systems where the host has TAA microcode and TAA is mitigated by
+having disabled TSX previously, are not vulnerable regardless of the status
+of the VMs.
+
+In all other cases, if the host either does not have the TAA microcode or
+the kernel is not mitigated, the system might be vulnerable.
+
+
+.. _taa_mitigation_control_command_line:
+
+Mitigation control on the kernel command line
+---------------------------------------------
+
+The kernel command line allows to control the TAA mitigations at boot time with
+the option "tsx_async_abort=". The valid arguments for this option are:
+
+  ============  =============================================================
+  off           This option disables the TAA mitigation on affected platforms.
+                If the system has TSX enabled (see next parameter) and the CPU
+                is affected, the system is vulnerable.
+
+  full          TAA mitigation is enabled. If TSX is enabled, on an affected
+                system it will clear CPU buffers on ring transitions. On
+                systems which are MDS-affected and deploy MDS mitigation,
+                TAA is also mitigated. Specifying this option on those
+                systems will have no effect.
+  ============  =============================================================
+
+Not specifying this option is equivalent to "tsx_async_abort=full".
+
+The kernel command line also allows to control the TSX feature using the
+parameter "tsx=" on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used
+to control the TSX feature and the enumeration of the TSX feature bits (RTM
+and HLE) in CPUID.
+
+The valid options are:
+
+  ============  =============================================================
+  off           Disables TSX on the system.
+
+                Note that this option takes effect only on newer CPUs which are
+                not vulnerable to MDS, i.e., have MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1
+                and which get the new IA32_TSX_CTRL MSR through a microcode
+                update. This new MSR allows for the reliable deactivation of
+                the TSX functionality.
+
+  on            Enables TSX.
+
+                Although there are mitigations for all known security
+                vulnerabilities, TSX has been known to be an accelerator for
+                several previous speculation-related CVEs, and so there may be
+                unknown security risks associated with leaving it enabled.
+
+  auto          Disables TSX if X86_BUG_TAA is present, otherwise enables TSX
+                on the system.
+  ============  =============================================================
+
+Not specifying this option is equivalent to "tsx=off".
+
+The following combinations of the "tsx_async_abort" and "tsx" are possible. For
+affected platforms tsx=auto is equivalent to tsx=off and the result will be:
+
+  =========  ==========================  =========================================
+  tsx=on     tsx_async_abort=full        The system will use VERW to clear CPU
+                                         buffers. Cross-thread attacks are still
+                                         possible on SMT machines.
+  tsx=on     tsx_async_abort=off         The system is vulnerable.
+  tsx=off    tsx_async_abort=full        TSX might be disabled if microcode
+                                         provides a TSX control MSR. If so,
+                                         system is not vulnerable.
+  tsx=off    tsx_async_abort=off         ditto
+  =========  ==========================  =========================================
+
+
+For unaffected platforms "tsx=on" and "tsx_async_abort=full" does not clear CPU
+buffers.  For platforms without TSX control (MSR_IA32_ARCH_CAPABILITIES.MDS_NO=0)
+"tsx" command line argument has no effect.
+
+For the affected platforms below table indicates the mitigation status for the
+combinations of CPUID bit MD_CLEAR and IA32_ARCH_CAPABILITIES MSR bits MDS_NO
+and TSX_CTRL_MSR.
+
+  =======  =========  =============  ========================================
+  MDS_NO   MD_CLEAR   TSX_CTRL_MSR   Status
+  =======  =========  =============  ========================================
+    0          0            0        Vulnerable (needs microcode)
+    0          1            0        MDS and TAA mitigated via VERW
+    1          1            0        MDS fixed, TAA vulnerable if TSX enabled
+                                     because MD_CLEAR has no meaning and
+                                     VERW is not guaranteed to clear buffers
+    1          X            1        MDS fixed, TAA can be mitigated by
+                                     VERW or TSX_CTRL_MSR
+  =======  =========  =============  ========================================
+
+Mitigation selection guide
+--------------------------
+
+1. Trusted userspace and guests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If all user space applications are from a trusted source and do not execute
+untrusted code which is supplied externally, then the mitigation can be
+disabled. The same applies to virtualized environments with trusted guests.
+
+
+2. Untrusted userspace and guests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If there are untrusted applications or guests on the system, enabling TSX
+might allow a malicious actor to leak data from the host or from other
+processes running on the same physical core.
+
+If the microcode is available and the TSX is disabled on the host, attacks
+are prevented in a virtualized environment as well, even if the VMs do not
+explicitly enable the mitigation.
+
+
+.. _taa_default_mitigations:
+
+Default mitigations
+-------------------
+
+The kernel's default action for vulnerable processors is:
+
+  - Deploy TSX disable mitigation (tsx_async_abort=full tsx=off).
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -1922,6 +1922,7 @@ bytes respectively. Such letter suffixes
 				spectre_v2_user=off [X86]
 				spec_store_bypass_disable=off [X86]
 				mds=off [X86]
+				tsx_async_abort=off [X86]
 
 			auto (default)
 				Mitigate all CPU vulnerabilities, but leave SMT
@@ -3610,6 +3611,38 @@ bytes respectively. Such letter suffixes
 			See Documentation/hw-vuln/tsx_async_abort.rst
 			for more details.
 
+	tsx_async_abort= [X86,INTEL] Control mitigation for the TSX Async
+			Abort (TAA) vulnerability.
+
+			Similar to Micro-architectural Data Sampling (MDS)
+			certain CPUs that support Transactional
+			Synchronization Extensions (TSX) are vulnerable to an
+			exploit against CPU internal buffers which can forward
+			information to a disclosure gadget under certain
+			conditions.
+
+			In vulnerable processors, the speculatively forwarded
+			data can be used in a cache side channel attack, to
+			access data to which the attacker does not have direct
+			access.
+
+			This parameter controls the TAA mitigation.  The
+			options are:
+
+			full       - Enable TAA mitigation on vulnerable CPUs
+				     if TSX is enabled.
+
+			off        - Unconditionally disable TAA mitigation
+
+			Not specifying this option is equivalent to
+			tsx_async_abort=full.  On CPUs which are MDS affected
+			and deploy MDS mitigation, TAA mitigation is not
+			required and doesn't provide any additional
+			mitigation.
+
+			For details see:
+			Documentation/hw-vuln/tsx_async_abort.rst
+
 	turbografx.map[2|3]=	[HW,JOY]
 			TurboGraFX parallel port interface
 			Format:
--- /dev/null
+++ b/Documentation/x86/tsx_async_abort.rst
@@ -0,0 +1,117 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+TSX Async Abort (TAA) mitigation
+================================
+
+.. _tsx_async_abort:
+
+Overview
+--------
+
+TSX Async Abort (TAA) is a side channel attack on internal buffers in some
+Intel processors similar to Microarchitectural Data Sampling (MDS).  In this
+case certain loads may speculatively pass invalid data to dependent operations
+when an asynchronous abort condition is pending in a Transactional
+Synchronization Extensions (TSX) transaction.  This includes loads with no
+fault or assist condition. Such loads may speculatively expose stale data from
+the same uarch data structures as in MDS, with same scope of exposure i.e.
+same-thread and cross-thread. This issue affects all current processors that
+support TSX.
+
+Mitigation strategy
+-------------------
+
+a) TSX disable - one of the mitigations is to disable TSX. A new MSR
+IA32_TSX_CTRL will be available in future and current processors after
+microcode update which can be used to disable TSX. In addition, it
+controls the enumeration of the TSX feature bits (RTM and HLE) in CPUID.
+
+b) Clear CPU buffers - similar to MDS, clearing the CPU buffers mitigates this
+vulnerability. More details on this approach can be found in
+:ref:`Documentation/hw-vuln/mds.rst <mds>`.
+
+Kernel internal mitigation modes
+--------------------------------
+
+ =============    ============================================================
+ off              Mitigation is disabled. Either the CPU is not affected or
+                  tsx_async_abort=off is supplied on the kernel command line.
+
+ tsx disabled     Mitigation is enabled. TSX feature is disabled by default at
+                  bootup on processors that support TSX control.
+
+ verw             Mitigation is enabled. CPU is affected and MD_CLEAR is
+                  advertised in CPUID.
+
+ ucode needed     Mitigation is enabled. CPU is affected and MD_CLEAR is not
+                  advertised in CPUID. That is mainly for virtualization
+                  scenarios where the host has the updated microcode but the
+                  hypervisor does not expose MD_CLEAR in CPUID. It's a best
+                  effort approach without guarantee.
+ =============    ============================================================
+
+If the CPU is affected and the "tsx_async_abort" kernel command line parameter is
+not provided then the kernel selects an appropriate mitigation depending on the
+status of RTM and MD_CLEAR CPUID bits.
+
+Below tables indicate the impact of tsx=on|off|auto cmdline options on state of
+TAA mitigation, VERW behavior and TSX feature for various combinations of
+MSR_IA32_ARCH_CAPABILITIES bits.
+
+1. "tsx=off"
+
+=========  =========  ============  ============  ==============  ===================  ======================
+MSR_IA32_ARCH_CAPABILITIES bits     Result with cmdline tsx=off
+----------------------------------  -------------------------------------------------------------------------
+TAA_NO     MDS_NO     TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
+                                    after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
+=========  =========  ============  ============  ==============  ===================  ======================
+    0          0           0        HW default    Yes             Same as MDS          Same as MDS
+    0          0           1        Invalid case  Invalid case    Invalid case         Invalid case
+    0          1           0        HW default    No              Need ucode update    Need ucode update
+    0          1           1        Disabled      Yes             TSX disabled         TSX disabled
+    1          X           1        Disabled      X               None needed          None needed
+=========  =========  ============  ============  ==============  ===================  ======================
+
+2. "tsx=on"
+
+=========  =========  ============  ============  ==============  ===================  ======================
+MSR_IA32_ARCH_CAPABILITIES bits     Result with cmdline tsx=on
+----------------------------------  -------------------------------------------------------------------------
+TAA_NO     MDS_NO     TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
+                                    after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
+=========  =========  ============  ============  ==============  ===================  ======================
+    0          0           0        HW default    Yes             Same as MDS          Same as MDS
+    0          0           1        Invalid case  Invalid case    Invalid case         Invalid case
+    0          1           0        HW default    No              Need ucode update    Need ucode update
+    0          1           1        Enabled       Yes             None                 Same as MDS
+    1          X           1        Enabled       X               None needed          None needed
+=========  =========  ============  ============  ==============  ===================  ======================
+
+3. "tsx=auto"
+
+=========  =========  ============  ============  ==============  ===================  ======================
+MSR_IA32_ARCH_CAPABILITIES bits     Result with cmdline tsx=auto
+----------------------------------  -------------------------------------------------------------------------
+TAA_NO     MDS_NO     TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
+                                    after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
+=========  =========  ============  ============  ==============  ===================  ======================
+    0          0           0        HW default    Yes             Same as MDS          Same as MDS
+    0          0           1        Invalid case  Invalid case    Invalid case         Invalid case
+    0          1           0        HW default    No              Need ucode update    Need ucode update
+    0          1           1        Disabled      Yes             TSX disabled         TSX disabled
+    1          X           1        Enabled       X               None needed          None needed
+=========  =========  ============  ============  ==============  ===================  ======================
+
+In the tables, TSX_CTRL_MSR is a new bit in MSR_IA32_ARCH_CAPABILITIES that
+indicates whether MSR_IA32_TSX_CTRL is supported.
+
+There are two control bits in IA32_TSX_CTRL MSR:
+
+  Bit 0: When set it disables the Restricted Transactional Memory (RTM)
+         sub-feature of TSX (will force all transactions to abort on the
+         XBEGIN instruction).
+
+  Bit 1: When set it disables the enumeration of the RTM and HLE feature
+         (i.e. it will make CPUID(EAX=7).EBX{bit4} and
+         CPUID(EAX=7).EBX{bit11} read as 0).
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Michal Hocko <mhocko@suse.com>
commit db616173d787395787ecc93eef075fa975227b10 upstream.
There is a general consensus that TSX usage is not widespread, while history shows there is non-trivial room for possible side channel attacks. Therefore TSX is disabled by default, even on platforms that, according to current knowledge, might have a safe implementation of TSX. This is a fair trade-off to make.
There are, however, workloads that really do benefit from using TSX, and updating to a newer kernel with TSX disabled might introduce noticeable regressions. This would be a particular problem for Linux distributions which will provide TAA mitigations.
Introduce the config options X86_INTEL_TSX_MODE_OFF, X86_INTEL_TSX_MODE_ON and X86_INTEL_TSX_MODE_AUTO to control the TSX feature. The config setting can be overridden by the tsx= cmdline option.
[ bp: Text cleanups from Josh. ]
Suggested-by: Borislav Petkov <bpetkov@suse.de>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
[bwh: Backported to 3.16: adjust doc filename]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/Kconfig          | 45 +++++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/tsx.c | 22 +++++++++++++------
 2 files changed, 61 insertions(+), 6 deletions(-)
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1531,6 +1531,51 @@ config X86_SMAP

 	  If unsure, say Y.

+choice
+	prompt "TSX enable mode"
+	depends on CPU_SUP_INTEL
+	default X86_INTEL_TSX_MODE_OFF
+	help
+	  Intel's TSX (Transactional Synchronization Extensions) feature
+	  allows to optimize locking protocols through lock elision which
+	  can lead to a noticeable performance boost.
+
+	  On the other hand it has been shown that TSX can be exploited
+	  to form side channel attacks (e.g. TAA) and chances are there
+	  will be more of those attacks discovered in the future.
+
+	  Therefore TSX is not enabled by default (aka tsx=off). An admin
+	  might override this decision by tsx=on the command line parameter.
+	  Even with TSX enabled, the kernel will attempt to enable the best
+	  possible TAA mitigation setting depending on the microcode available
+	  for the particular machine.
+
+	  This option allows to set the default tsx mode between tsx=on, =off
+	  and =auto. See Documentation/kernel-parameters.txt for more
+	  details.
+
+	  Say off if not sure, auto if TSX is in use but it should be used on safe
+	  platforms or on if TSX is in use and the security aspect of tsx is not
+	  relevant.
+
+config X86_INTEL_TSX_MODE_OFF
+	bool "off"
+	help
+	  TSX is disabled if possible - equals to tsx=off command line parameter.
+
+config X86_INTEL_TSX_MODE_ON
+	bool "on"
+	help
+	  TSX is always enabled on TSX capable HW - equals the tsx=on command
+	  line parameter.
+
+config X86_INTEL_TSX_MODE_AUTO
+	bool "auto"
+	help
+	  TSX is enabled on TSX capable HW that is believed to be safe against
+	  side channel attacks - equals the tsx=auto command line parameter.
+endchoice
+
 config EFI
 	bool "EFI runtime service support"
 	depends on ACPI
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -73,6 +73,14 @@ static bool __init tsx_ctrl_is_supported
 	return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
 }

+static enum tsx_ctrl_states x86_get_tsx_auto_mode(void)
+{
+	if (boot_cpu_has_bug(X86_BUG_TAA))
+		return TSX_CTRL_DISABLE;
+
+	return TSX_CTRL_ENABLE;
+}
+
 void __init tsx_init(void)
 {
 	char arg[5] = {};
@@ -88,17 +96,19 @@ void __init tsx_init(void)
 		} else if (!strcmp(arg, "off")) {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
 		} else if (!strcmp(arg, "auto")) {
-			if (boot_cpu_has_bug(X86_BUG_TAA))
-				tsx_ctrl_state = TSX_CTRL_DISABLE;
-			else
-				tsx_ctrl_state = TSX_CTRL_ENABLE;
+			tsx_ctrl_state = x86_get_tsx_auto_mode();
 		} else {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
 			pr_err("tsx: invalid option, defaulting to off\n");
 		}
 	} else {
-		/* tsx= not provided, defaulting to off */
-		tsx_ctrl_state = TSX_CTRL_DISABLE;
+		/* tsx= not provided */
+		if (IS_ENABLED(CONFIG_X86_INTEL_TSX_MODE_AUTO))
+			tsx_ctrl_state = x86_get_tsx_auto_mode();
+		else if (IS_ENABLED(CONFIG_X86_INTEL_TSX_MODE_OFF))
+			tsx_ctrl_state = TSX_CTRL_DISABLE;
+		else
+			tsx_ctrl_state = TSX_CTRL_ENABLE;
 	}

 	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Josh Poimboeuf <jpoimboe@redhat.com>
commit 012206a822a8b6ac09125bfaa210a95b9eb8f1c1 upstream.
For new IBRS_ALL CPUs, the Enhanced IBRS check at the beginning of cpu_bugs_smt_update() causes the function to return early, unintentionally skipping the MDS and TAA logic.
This is not a problem for MDS, because there appears to be no overlap between IBRS_ALL and MDS-affected CPUs. So the MDS mitigation would be disabled and nothing would need to be done in this function anyway.
But for TAA, the TAA_MSG_SMT string will never get printed on Cascade Lake and newer.
The check is superfluous anyway: when 'spectre_v2_enabled' is SPECTRE_V2_IBRS_ENHANCED, 'spectre_v2_user' is always SPECTRE_V2_USER_NONE, and so the 'spectre_v2_user' switch statement handles it appropriately by doing nothing. So just remove the check.
Fixes: 1b42f017415b ("x86/speculation/taa: Add mitigation for TSX Async Abort")
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tyler Hicks <tyhicks@canonical.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/kernel/cpu/bugs.c | 4 ----
 1 file changed, 4 deletions(-)
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -919,10 +919,6 @@ static void update_mds_branch_idle(void)

 void arch_smt_update(void)
 {
-	/* Enhanced IBRS implies STIBP. No update required. */
-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
-		return;
-
 	mutex_lock(&spec_ctrl_mutex);

 	switch (spectre_v2_user) {
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Vineela Tummalapalli <vineela.tummalapalli@intel.com>
commit db4d30fbb71b47e4ecb11c4efa5d8aad4b03dfae upstream.
Some processors may incur a machine check error, possibly resulting in an unrecoverable CPU lockup, when an instruction fetch encounters a TLB multi-hit in the instruction TLB. This can occur when the page size is changed along with either the physical address or cache type. The relevant erratum can be found here:
https://bugzilla.kernel.org/show_bug.cgi?id=205195
There are other processors affected for which the erratum does not fully disclose the impact.
This issue affects both bare-metal x86 page tables and EPT.
It can be mitigated by either eliminating the use of large pages or by using careful TLB invalidations when changing the page size in the page tables.
As with Spectre, Meltdown, L1TF and MDS, a new bit has been allocated in MSR_IA32_ARCH_CAPABILITIES (PSCHANGE_MC_NO); it will be set on CPUs which are mitigated against this issue.
Signed-off-by: Vineela Tummalapalli <vineela.tummalapalli@intel.com>
Co-developed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bwh: Backported to 3.16:
 - Use next available X86_BUG bit
 - Don't use BIT() in msr-index.h because it's a UAPI header
 - No support for X86_VENDOR_HYGON, ATOM_AIRMONT_NP
 - Adjust filename, context, indentation]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 Documentation/ABI/testing/sysfs-devices-system-cpu |  1 +
 arch/x86/include/asm/cpufeatures.h                 |  1 +
 arch/x86/include/uapi/asm/msr-index.h              |  7 +++
 arch/x86/kernel/cpu/bugs.c                         | 13 ++++
 arch/x86/kernel/cpu/common.c                       | 61 ++++++++++---------
 drivers/base/cpu.c                                 |  8 +++
 include/linux/cpu.h                                |  2 +
 7 files changed, 65 insertions(+), 28 deletions(-)
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -233,6 +233,7 @@ What:		/sys/devices/system/cpu/vulnerabi
 		/sys/devices/system/cpu/vulnerabilities/l1tf
 		/sys/devices/system/cpu/vulnerabilities/mds
 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
Date:		January 2018
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Information about CPU vulnerabilities
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -280,5 +280,6 @@
 #define X86_BUG_MSBDS_ONLY	X86_BUG(11) /* CPU is only affected by the MSDBS variant of BUG_MDS */
 #define X86_BUG_SWAPGS		X86_BUG(12) /* CPU is affected by speculation through SWAPGS */
 #define X86_BUG_TAA		X86_BUG(13) /* CPU is affected by TSX Async Abort(TAA) */
+#define X86_BUG_ITLB_MULTIHIT	X86_BUG(14) /* CPU may incur MCE during certain page attribute changes */

 #endif /* _ASM_X86_CPUFEATURES_H */
--- a/arch/x86/include/uapi/asm/msr-index.h
+++ b/arch/x86/include/uapi/asm/msr-index.h
@@ -70,6 +70,13 @@
 						 * Microarchitectural Data
 						 * Sampling (MDS) vulnerabilities.
 						 */
+#define ARCH_CAP_PSCHANGE_MC_NO		(1UL << 6) /*
+						    * The processor is not susceptible to a
+						    * machine check error due to modifying the
+						    * code page size along with either the
+						    * physical address or cache type
+						    * without TLB invalidation.
+						    */
 #define ARCH_CAP_TSX_CTRL_MSR		(1UL << 7) /* MSR for TSX control is available. */
 #define ARCH_CAP_TAA_NO			(1UL << 8) /*
 						    * Not susceptible to
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1342,6 +1342,11 @@ static void __init l1tf_select_mitigatio

 #ifdef CONFIG_SYSFS

+static ssize_t itlb_multihit_show_state(char *buf)
+{
+	return sprintf(buf, "Processor vulnerable\n");
+}
+
 static ssize_t mds_show_state(char *buf)
 {
 #ifdef CONFIG_HYPERVISOR_GUEST
@@ -1444,6 +1449,9 @@ static ssize_t cpu_show_common(struct de
 	case X86_BUG_TAA:
 		return tsx_async_abort_show_state(buf);

+	case X86_BUG_ITLB_MULTIHIT:
+		return itlb_multihit_show_state(buf);
+
 	default:
 		break;
 	}
@@ -1485,4 +1493,9 @@ ssize_t cpu_show_tsx_async_abort(struct
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_TAA);
 }
+
+ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
+}
 #endif
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -807,13 +807,14 @@ static void identify_cpu_without_cpuid(s
 #endif
 }

-#define NO_SPECULATION	BIT(0)
-#define NO_MELTDOWN	BIT(1)
-#define NO_SSB		BIT(2)
-#define NO_L1TF		BIT(3)
-#define NO_MDS		BIT(4)
-#define MSBDS_ONLY	BIT(5)
-#define NO_SWAPGS	BIT(6)
+#define NO_SPECULATION		BIT(0)
+#define NO_MELTDOWN		BIT(1)
+#define NO_SSB			BIT(2)
+#define NO_L1TF			BIT(3)
+#define NO_MDS			BIT(4)
+#define MSBDS_ONLY		BIT(5)
+#define NO_SWAPGS		BIT(6)
+#define NO_ITLB_MULTIHIT	BIT(7)

 #define VULNWL(_vendor, _family, _model, _whitelist)	\
 	{ X86_VENDOR_##_vendor, _family, _model, X86_FEATURE_ANY, _whitelist }
@@ -831,26 +832,26 @@ static const __initconst struct x86_cpu_
 	VULNWL(NSC,	5, X86_MODEL_ANY,	NO_SPECULATION),
 	/* Intel Family 6 */
-	VULNWL_INTEL(ATOM_SALTWELL,		NO_SPECULATION),
-	VULNWL_INTEL(ATOM_SALTWELL_TABLET,	NO_SPECULATION),
-	VULNWL_INTEL(ATOM_SALTWELL_MID,		NO_SPECULATION),
-	VULNWL_INTEL(ATOM_BONNELL,		NO_SPECULATION),
-	VULNWL_INTEL(ATOM_BONNELL_MID,		NO_SPECULATION),
-
-	VULNWL_INTEL(ATOM_SILVERMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
-	VULNWL_INTEL(ATOM_SILVERMONT_X,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
-	VULNWL_INTEL(ATOM_SILVERMONT_MID,	NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
-	VULNWL_INTEL(ATOM_AIRMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
-	VULNWL_INTEL(XEON_PHI_KNL,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
-	VULNWL_INTEL(XEON_PHI_KNM,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
+	VULNWL_INTEL(ATOM_SALTWELL,		NO_SPECULATION | NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(ATOM_SALTWELL_TABLET,	NO_SPECULATION | NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(ATOM_SALTWELL_MID,		NO_SPECULATION | NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(ATOM_BONNELL,		NO_SPECULATION | NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(ATOM_BONNELL_MID,		NO_SPECULATION | NO_ITLB_MULTIHIT),
+
+	VULNWL_INTEL(ATOM_SILVERMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(ATOM_SILVERMONT_X,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(ATOM_SILVERMONT_MID,	NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(ATOM_AIRMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(XEON_PHI_KNL,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(XEON_PHI_KNM,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),

 	VULNWL_INTEL(CORE_YONAH,		NO_SSB),

-	VULNWL_INTEL(ATOM_AIRMONT_MID,		NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
+	VULNWL_INTEL(ATOM_AIRMONT_MID,		NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),

-	VULNWL_INTEL(ATOM_GOLDMONT,		NO_MDS | NO_L1TF | NO_SWAPGS),
-	VULNWL_INTEL(ATOM_GOLDMONT_X,		NO_MDS | NO_L1TF | NO_SWAPGS),
-	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF | NO_SWAPGS),
+	VULNWL_INTEL(ATOM_GOLDMONT,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(ATOM_GOLDMONT_X,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),

 	/*
 	 * Technically, swapgs isn't serializing on AMD (despite it previously
@@ -861,13 +862,13 @@ static const __initconst struct x86_cpu_
 	 */

 	/* AMD Family 0xf - 0x12 */
-	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
-	VULNWL_AMD(0x10,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
-	VULNWL_AMD(0x11,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
-	VULNWL_AMD(0x12,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
+	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+	VULNWL_AMD(0x10,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+	VULNWL_AMD(0x11,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+	VULNWL_AMD(0x12,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),

 	/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
-	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS),
+	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
 	{}
 };
@@ -892,6 +893,10 @@ static void __init cpu_set_bug_bits(stru
 {
 	u64 ia32_cap = x86_read_arch_cap_msr();

+	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
+	if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
+		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
+
 	if (cpu_matches(NO_SPECULATION))
 		return;

--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -463,6 +463,12 @@ ssize_t __weak cpu_show_tsx_async_abort(
 	return sprintf(buf, "Not affected\n");
 }

+ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
+				      struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "Not affected\n");
+}
+
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
@@ -470,6 +476,7 @@ static DEVICE_ATTR(spec_store_bypass, 04
 static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
 static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
 static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
+static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);

 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -479,6 +486,7 @@ static struct attribute *cpu_root_vulner
 	&dev_attr_l1tf.attr,
 	&dev_attr_mds.attr,
 	&dev_attr_tsx_async_abort.attr,
+	&dev_attr_itlb_multihit.attr,
 	NULL
 };

--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -54,6 +54,8 @@ extern ssize_t cpu_show_mds(struct devic
 extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
 					struct device_attribute *attr,
 					char *buf);
+extern ssize_t cpu_show_itlb_multihit(struct device *dev,
+				      struct device_attribute *attr, char *buf);

 #ifdef CONFIG_HOTPLUG_CPU
 extern void unregister_cpu(struct cpu *cpu);
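As a quick way to verify the resulting reporting from userspace, here is a
small illustrative program (not part of the patch) that reads the new sysfs
file added above:

	#include <stdio.h>

	int main(void)
	{
		char buf[128];
		FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/itlb_multihit", "r");

		if (!f)
			return 1;	/* absent on kernels without this patch */
		if (fgets(buf, sizeof(buf), f))
			fputs(buf, stdout);	/* e.g. "Processor vulnerable" or "Not affected" */
		fclose(f);
		return 0;
	}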
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Imre Deak <imre.deak@intel.com>
commit 7e34f4e4aad3fd34c02b294a3cf2321adf5b4438 upstream.
In some circumstances the RC6 context can get corrupted. We can detect this and take the required action, that is, disable RC6 and runtime PM. The HW recovers from the corrupted state after a system suspend/resume cycle, so detect the recovery and re-enable RC6 and runtime PM.
v2: rebase (Mika)
v3:
- Move intel_suspend_gt_powersave() to the end of the GEM suspend sequence.
- Add commit message.
v4:
- Rebased on intel_uncore_forcewake_put(i915->uncore, ...) API change.
v5:
- Rebased on latest upstream gt_pm refactoring.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 drivers/gpu/drm/i915/i915_drv.c      |   4 +
 drivers/gpu/drm/i915/i915_drv.h      |   5 +
 drivers/gpu/drm/i915/i915_reg.h      |   2 +
 drivers/gpu/drm/i915/intel_display.c |   9 ++
 drivers/gpu/drm/i915/intel_drv.h     |   3 +
 drivers/gpu/drm/i915/intel_pm.c      | 146 +++++++++++++++++++++++++--
 6 files changed, 162 insertions(+), 7 deletions(-)
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -604,6 +604,8 @@ static int i915_drm_thaw_early(struct dr
 	intel_uncore_sanitize(dev);
 	intel_power_domains_init_hw(dev_priv);

+	i915_rc6_ctx_wa_resume(dev_priv);
+
 	return 0;
 }

@@ -926,6 +928,8 @@ static int i915_pm_suspend_late(struct d
 	if (drm_dev->switch_power_state == DRM_SWITCH_POWER_OFF)
 		return 0;

+	i915_rc6_ctx_wa_suspend(to_i915(drm_dev));
+
 	pci_disable_device(pdev);
 	pci_set_power_state(pdev, PCI_D3hot);

--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -910,6 +910,7 @@ struct intel_gen6_power_mgmt {
 	enum { LOW_POWER, BETWEEN, HIGH_POWER } power;

 	bool enabled;
+	bool ctx_corrupted;
 	struct delayed_work delayed_resume_work;

 	/*
@@ -1959,6 +1960,10 @@ struct drm_i915_cmd_table {

 /* Early gen2 have a totally busted CS tlb and require pinned batches. */
 #define HAS_BROKEN_CS_TLB(dev)	(IS_I830(dev) || IS_845G(dev))
+
+#define NEEDS_RC6_CTX_CORRUPTION_WA(dev)	\
+	(IS_BROADWELL(dev) || INTEL_INFO(dev)->gen == 9)
+
 /*
  * dp aux and gmbus irq on gen4 seems to be able to generate legacy interrupts
  * even when in MSI mode. This results in spurious interrupt warnings if the
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -136,6 +136,8 @@
 #define   ECOCHK_PPGTT_WT_HSW		(0x2<<3)
 #define   ECOCHK_PPGTT_WB_HSW		(0x3<<3)

+#define GEN8_RC6_CTX_INFO		0x8504
+
 #define GAC_ECO_BITS			0x14090
 #define   ECOBITS_SNB_BIT		(1<<13)
 #define   ECOBITS_PPGTT_CACHE64B	(3<<8)
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -8787,6 +8787,10 @@ void intel_mark_busy(struct drm_device *
 		return;

 	intel_runtime_pm_get(dev_priv);
+
+	if (NEEDS_RC6_CTX_CORRUPTION_WA(dev))
+		gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL);
+
 	i915_update_gfx_val(dev_priv);
 	dev_priv->mm.busy = true;
 }
@@ -8814,6 +8818,11 @@ void intel_mark_idle(struct drm_device *
 	if (INTEL_INFO(dev)->gen >= 6)
 		gen6_rps_idle(dev->dev_private);

+	if (NEEDS_RC6_CTX_CORRUPTION_WA(dev)) {
+		i915_rc6_ctx_wa_check(dev_priv);
+		gen6_gt_force_wake_put(dev_priv, FORCEWAKE_ALL);
+	}
+
 out:
 	intel_runtime_pm_put(dev_priv);
 }
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -965,6 +965,9 @@ void intel_enable_gt_powersave(struct dr
 void intel_disable_gt_powersave(struct drm_device *dev);
 void intel_reset_gt_powersave(struct drm_device *dev);
 void ironlake_teardown_rc6(struct drm_device *dev);
+bool i915_rc6_ctx_wa_check(struct drm_i915_private *i915);
+void i915_rc6_ctx_wa_suspend(struct drm_i915_private *i915);
+void i915_rc6_ctx_wa_resume(struct drm_i915_private *i915);
 void gen6_update_ring_freq(struct drm_device *dev);
 void gen6_rps_idle(struct drm_i915_private *dev_priv);
 void gen6_rps_boost(struct drm_i915_private *dev_priv);
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3378,11 +3378,17 @@ static void gen6_disable_rps_interrupts(
 	I915_WRITE(GEN6_PMIIR, dev_priv->pm_rps_events);
 }

-static void gen6_disable_rps(struct drm_device *dev)
+static void gen6_disable_rc6(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;

 	I915_WRITE(GEN6_RC_CONTROL, 0);
+}
+
+static void gen6_disable_rps(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+
 	I915_WRITE(GEN6_RPNSWREQ, 1 << 31);

 	if (IS_BROADWELL(dev))
@@ -3391,12 +3397,15 @@ static void gen6_disable_rps(struct drm_
 	gen6_disable_rps_interrupts(dev);
 }

-static void valleyview_disable_rps(struct drm_device *dev)
+static void valleyview_disable_rc6(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;

 	I915_WRITE(GEN6_RC_CONTROL, 0);
+}

+static void valleyview_disable_rps(struct drm_device *dev)
+{
 	gen6_disable_rps_interrupts(dev);
 }

@@ -3529,7 +3538,8 @@ static void gen8_enable_rps(struct drm_d
 	I915_WRITE(GEN6_RC6_THRESHOLD, 50000); /* 50/125ms per EI */

 	/* 3: Enable RC6 */
-	if (intel_enable_rc6(dev) & INTEL_RC6_ENABLE)
+	if (!dev_priv->rps.ctx_corrupted &&
+	    intel_enable_rc6(dev) & INTEL_RC6_ENABLE)
 		rc6_mask = GEN6_RC_CTL_RC6_ENABLE;
 	intel_print_rc6_info(dev, rc6_mask);
 	I915_WRITE(GEN6_RC_CONTROL, GEN6_RC_CTL_HW_ENABLE |
@@ -4707,10 +4717,103 @@ static void intel_init_emon(struct drm_d
 	dev_priv->ips.corr = (lcfuse & LCFUSE_HIV_MASK);
 }

+static bool i915_rc6_ctx_corrupted(struct drm_i915_private *dev_priv)
+{
+	return !I915_READ(GEN8_RC6_CTX_INFO);
+}
+
+static void i915_rc6_ctx_wa_init(struct drm_i915_private *i915)
+{
+	if (!NEEDS_RC6_CTX_CORRUPTION_WA(i915->dev))
+		return;
+
+	if (i915_rc6_ctx_corrupted(i915)) {
+		DRM_INFO("RC6 context corrupted, disabling runtime power management\n");
+		i915->rps.ctx_corrupted = true;
+		intel_runtime_pm_get(i915);
+	}
+}
+
+static void i915_rc6_ctx_wa_cleanup(struct drm_i915_private *i915)
+{
+	if (i915->rps.ctx_corrupted) {
+		intel_runtime_pm_put(i915);
+		i915->rps.ctx_corrupted = false;
+	}
+}
+
+/**
+ * i915_rc6_ctx_wa_suspend - system suspend sequence for the RC6 CTX WA
+ * @i915: i915 device
+ *
+ * Perform any steps needed to clean up the RC6 CTX WA before system suspend.
+ */
+void i915_rc6_ctx_wa_suspend(struct drm_i915_private *i915)
+{
+	if (i915->rps.ctx_corrupted)
+		intel_runtime_pm_put(i915);
+}
+
+/**
+ * i915_rc6_ctx_wa_resume - system resume sequence for the RC6 CTX WA
+ * @i915: i915 device
+ *
+ * Perform any steps needed to re-init the RC6 CTX WA after system resume.
+ */
+void i915_rc6_ctx_wa_resume(struct drm_i915_private *i915)
+{
+	if (!i915->rps.ctx_corrupted)
+		return;
+
+	if (i915_rc6_ctx_corrupted(i915)) {
+		intel_runtime_pm_get(i915);
+		return;
+	}
+
+	DRM_INFO("RC6 context restored, re-enabling runtime power management\n");
+	i915->rps.ctx_corrupted = false;
+}
+
+static void intel_disable_rc6(struct drm_device *dev);
+
+/**
+ * i915_rc6_ctx_wa_check - check for a new RC6 CTX corruption
+ * @i915: i915 device
+ *
+ * Check if an RC6 CTX corruption has happened since the last check and if so
+ * disable RC6 and runtime power management.
+ *
+ * Return false if no context corruption has happened since the last call of
+ * this function, true otherwise.
+*/
+bool i915_rc6_ctx_wa_check(struct drm_i915_private *i915)
+{
+	if (!NEEDS_RC6_CTX_CORRUPTION_WA(i915->dev))
+		return false;
+
+	if (i915->rps.ctx_corrupted)
+		return false;
+
+	if (!i915_rc6_ctx_corrupted(i915))
+		return false;
+
+	printk(KERN_NOTICE "[" DRM_NAME "] "
+	       "RC6 context corruption, disabling runtime power management\n");
+
+	intel_disable_rc6(i915->dev);
+	i915->rps.ctx_corrupted = true;
+	intel_runtime_pm_get_noresume(i915);
+
+	return true;
+}
+
+
 void intel_init_gt_powersave(struct drm_device *dev)
 {
 	i915.enable_rc6 = sanitize_rc6_option(dev, i915.enable_rc6);

+	i915_rc6_ctx_wa_init(to_i915(dev));
+
 	if (IS_VALLEYVIEW(dev))
 		valleyview_init_gt_powersave(dev);
 }
@@ -4719,6 +4822,33 @@ void intel_cleanup_gt_powersave(struct d
 {
 	if (IS_VALLEYVIEW(dev))
 		valleyview_cleanup_gt_powersave(dev);
+
+	i915_rc6_ctx_wa_cleanup(to_i915(dev));
+}
+
+static void __intel_disable_rc6(struct drm_device *dev)
+{
+	if (IS_VALLEYVIEW(dev))
+		valleyview_disable_rc6(dev);
+	else
+		gen6_disable_rc6(dev);
+}
+
+static void intel_disable_rc6(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = to_i915(dev);
+
+	mutex_lock(&dev_priv->rps.hw_lock);
+	__intel_disable_rc6(dev);
+	mutex_unlock(&dev_priv->rps.hw_lock);
+}
+
+static void intel_disable_rps(struct drm_device *dev)
+{
+	if (IS_VALLEYVIEW(dev))
+		valleyview_disable_rps(dev);
+	else
+		gen6_disable_rps(dev);
 }

 void intel_disable_gt_powersave(struct drm_device *dev)
@@ -4736,12 +4866,14 @@ void intel_disable_gt_powersave(struct d
 		intel_runtime_pm_put(dev_priv);

 		cancel_work_sync(&dev_priv->rps.work);
+
 		mutex_lock(&dev_priv->rps.hw_lock);
-		if (IS_VALLEYVIEW(dev))
-			valleyview_disable_rps(dev);
-		else
-			gen6_disable_rps(dev);
+
+		__intel_disable_rc6(dev);
+		intel_disable_rps(dev);
+
 		dev_priv->rps.enabled = false;
+
 		mutex_unlock(&dev_priv->rps.hw_lock);
 	}
 }
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Jakub Kicinski <jakub.kicinski@netronome.com>
commit a7fa12d15855904aff1716e1fc723c03ba38c5cc upstream.
To corrupt a GSO frame we first perform segmentation. We then proceed using the first segment instead of the full GSO skb and requeue the rest of the segments as separate packets.
If there are any issues with processing the first segment we still want to process the rest; therefore we jump to the finish_segs label.
Commit 177b8007463c ("net: netem: fix backlog accounting for corrupted GSO frames") started using the pointer to the first segment in the "rest of segments processing", but as mentioned above the first segment may already have been freed at this point.
Backlog corrections for parent qdiscs have to be adjusted.
Fixes: 177b8007463c ("net: netem: fix backlog accounting for corrupted GSO frames")
Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 net/sched/sch_netem.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -506,6 +506,7 @@ static int netem_enqueue(struct sk_buff
 		    (skb->ip_summed == CHECKSUM_PARTIAL &&
 		     skb_checksum_help(skb))) {
 			rc = qdisc_drop(skb, sch);
+			skb = NULL;
 			goto finish_segs;
 		}

@@ -576,9 +577,10 @@ static int netem_enqueue(struct sk_buff
 finish_segs:
 	if (segs) {
 		unsigned int len, last_len;
-		int nb = 0;
+		int nb;

-		len = skb->len;
+		len = skb ? skb->len : 0;
+		nb = skb ? 1 : 0;

 		while (segs) {
 			skb2 = segs->next;
@@ -595,7 +597,8 @@ finish_segs:
 			}
 			segs = skb2;
 		}
-		qdisc_tree_reduce_backlog(sch, -nb, prev_len - len);
+		/* Parent qdiscs accounted for 1 skb of size @prev_len */
+		qdisc_tree_reduce_backlog(sch, -(nb - 1), -(len - prev_len));
 	}
 	return NET_XMIT_SUCCESS;
 }
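To spell out the accounting in the last hunk with a worked example (the
numbers are hypothetical): the parent qdiscs accounted for one packet of
prev_len bytes when the GSO skb was enqueued. If segmentation yields nb
packets totaling len bytes - say nb = 3, prev_len = 3000 and len = 3060 -
then qdisc_tree_reduce_backlog(sch, -(nb - 1), -(len - prev_len)) reduces
the parents' counts by -2 packets and -60 bytes, i.e. it credits them with
the 2 extra packets and 60 extra bytes they did not yet know about.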
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Hui Peng <benquike@gmail.com>
commit 39d170b3cb62ba98567f5c4f40c27b5864b304e5 upstream.
The `ar_usb` field of `ath6kl_usb_pipe_usb_pipe` objects is initialized to point to the containing `ath6kl_usb` object according to endpoint descriptors read from the device side, as shown below in `ath6kl_usb_setup_pipe_resources`:
	for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
		endpoint = &iface_desc->endpoint[i].desc;

		// get the address from endpoint descriptor
		pipe_num = ath6kl_usb_get_logical_pipe_num(ar_usb,
						endpoint->bEndpointAddress,
						&urbcount);
		......
		// select the pipe object
		pipe = &ar_usb->pipes[pipe_num];

		// initialize the ar_usb field
		pipe->ar_usb = ar_usb;
	}
The driver assumes that the addresses reported in endpoint descriptors from the device side are complete. If a device is malicious and does not report complete addresses, it may trigger a NULL-ptr-deref in `ath6kl_usb_alloc_urb_from_pipe` and `ath6kl_usb_free_urb_to_pipe`.
This patch fixes the bug by preventing the potential NULL-ptr-deref (CVE-2019-15098).
Signed-off-by: Hui Peng <benquike@gmail.com>
Reported-by: Hui Peng <benquike@gmail.com>
Reported-by: Mathias Payer <mathias.payer@nebelwelt.net>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 drivers/net/wireless/ath/ath6kl/usb.c | 8 ++++++++
 1 file changed, 8 insertions(+)
--- a/drivers/net/wireless/ath/ath6kl/usb.c
+++ b/drivers/net/wireless/ath/ath6kl/usb.c
@@ -132,6 +132,10 @@ ath6kl_usb_alloc_urb_from_pipe(struct at
 	struct ath6kl_urb_context *urb_context = NULL;
 	unsigned long flags;

+	/* bail if this pipe is not initialized */
+	if (!pipe->ar_usb)
+		return NULL;
+
 	spin_lock_irqsave(&pipe->ar_usb->cs_lock, flags);
 	if (!list_empty(&pipe->urb_list_head)) {
 		urb_context =
@@ -150,6 +154,10 @@ static void ath6kl_usb_free_urb_to_pipe(
 {
 	unsigned long flags;

+	/* bail if this pipe is not initialized */
+	if (!pipe->ar_usb)
+		return;
+
 	spin_lock_irqsave(&pipe->ar_usb->cs_lock, flags);
 	pipe->urb_cnt++;
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Vandana BN <bnvandana@gmail.com>
commit 5d2e73a5f80a5b5aff3caf1ec6d39b5b3f54b26e upstream.
SyzKaller hit a null pointer deref while reading from the uninitialized udev->product in zr364xx_vidioc_querycap().
==================================================================
BUG: KASAN: null-ptr-deref in read_word_at_a_time+0xe/0x20 include/linux/compiler.h:274
Read of size 1 at addr 0000000000000000 by task v4l_id/5287

CPU: 1 PID: 5287 Comm: v4l_id Not tainted 5.1.0-rc3-319004-g43151d6 #6
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0xe8/0x16e lib/dump_stack.c:113
 kasan_report.cold+0x5/0x3c mm/kasan/report.c:321
 read_word_at_a_time+0xe/0x20 include/linux/compiler.h:274
 strscpy+0x8a/0x280 lib/string.c:207
 zr364xx_vidioc_querycap+0xb5/0x210 drivers/media/usb/zr364xx/zr364xx.c:706
 v4l_querycap+0x12b/0x340 drivers/media/v4l2-core/v4l2-ioctl.c:1062
 __video_do_ioctl+0x5bb/0xb40 drivers/media/v4l2-core/v4l2-ioctl.c:2874
 video_usercopy+0x44e/0xf00 drivers/media/v4l2-core/v4l2-ioctl.c:3056
 v4l2_ioctl+0x14e/0x1a0 drivers/media/v4l2-core/v4l2-dev.c:364
 vfs_ioctl fs/ioctl.c:46 [inline]
 file_ioctl fs/ioctl.c:509 [inline]
 do_vfs_ioctl+0xced/0x12f0 fs/ioctl.c:696
 ksys_ioctl+0xa0/0xc0 fs/ioctl.c:713
 __do_sys_ioctl fs/ioctl.c:720 [inline]
 __se_sys_ioctl fs/ioctl.c:718 [inline]
 __x64_sys_ioctl+0x74/0xb0 fs/ioctl.c:718
 do_syscall_64+0xcf/0x4f0 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7f3b56d8b347
Code: 90 90 90 48 8b 05 f1 fa 2a 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 90 90 90 90 90 90 90 90 90 90 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d c1 fa 2a 00 31 d2 48 29 c2 64
RSP: 002b:00007ffe005d5d68 EFLAGS: 00000202 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f3b56d8b347
RDX: 00007ffe005d5d70 RSI: 0000000080685600 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000400884
R13: 00007ffe005d5ec0 R14: 0000000000000000 R15: 0000000000000000
==================================================================
For this device udev->product is not initialized and accessing it causes a NULL pointer deref.
The fix is to check for NULL before the strscpy() and to copy an empty string if product is NULL.
Reported-by: syzbot+66010012fd4c531a1a96@syzkaller.appspotmail.com
Signed-off-by: Vandana BN <bnvandana@gmail.com>
Signed-off-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 drivers/media/usb/zr364xx/zr364xx.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/media/usb/zr364xx/zr364xx.c
+++ b/drivers/media/usb/zr364xx/zr364xx.c
@@ -712,7 +712,8 @@ static int zr364xx_vidioc_querycap(struc
 	struct zr364xx_camera *cam = video_drvdata(file);

 	strlcpy(cap->driver, DRIVER_DESC, sizeof(cap->driver));
-	strlcpy(cap->card, cam->udev->product, sizeof(cap->card));
+	if (cam->udev->product)
+		strlcpy(cap->card, cam->udev->product, sizeof(cap->card));
 	strlcpy(cap->bus_info, dev_name(&cam->udev->dev),
 		sizeof(cap->bus_info));
 	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE |
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Sean Young <sean@mess.org>
commit 0c4df39e504bf925ab666132ac3c98d6cbbe380b upstream.
Ensure we do not access the buffer beyond the end if no 0xff byte is encountered.
Reported-by: syzbot+eaaaf38a95427be88f4b@syzkaller.appspotmail.com
Signed-off-by: Sean Young <sean@mess.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
[bwh: Backported to 3.16: technisat_usb2_get_ir() still uses a stack buffer,
 which is not worth fixing on this branch]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 drivers/media/usb/dvb-usb/technisat-usb2.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)
--- a/drivers/media/usb/dvb-usb/technisat-usb2.c
+++ b/drivers/media/usb/dvb-usb/technisat-usb2.c
@@ -591,9 +591,9 @@ static int technisat_usb2_frontend_attac

 static int technisat_usb2_get_ir(struct dvb_usb_device *d)
 {
-	u8 buf[62], *b;
-	int ret;
 	struct ir_raw_event ev;
+	u8 buf[62];
+	int i, ret;

 	buf[0] = GET_IR_DATA_VENDOR_REQUEST;
 	buf[1] = 0x08;
@@ -629,26 +629,25 @@ unlock:
 		return 0; /* no key pressed */

 	/* decoding */
-	b = buf+1;

 #if 0
 	deb_rc("RC: %d ", ret);
-	debug_dump(b, ret, deb_rc);
+	debug_dump(buf + 1, ret, deb_rc);
 #endif

 	ev.pulse = 0;
-	while (1) {
-		ev.pulse = !ev.pulse;
-		ev.duration = (*b * FIRMWARE_CLOCK_DIVISOR * FIRMWARE_CLOCK_TICK) / 1000;
-		ir_raw_event_store(d->rc_dev, &ev);
-
-		b++;
-		if (*b == 0xff) {
+	for (i = 1; i < ARRAY_SIZE(buf); i++) {
+		if (buf[i] == 0xff) {
 			ev.pulse = 0;
 			ev.duration = 888888*2;
 			ir_raw_event_store(d->rc_dev, &ev);
 			break;
 		}
+
+		ev.pulse = !ev.pulse;
+		ev.duration = (buf[i] * FIRMWARE_CLOCK_DIVISOR *
+			       FIRMWARE_CLOCK_TICK) / 1000;
+		ir_raw_event_store(d->rc_dev, &ev);
 	}

 	ir_raw_event_handle(d->rc_dev);
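To see why the new loop bound is safe: buf is 62 bytes and the IR data
starts at buf[1]. The old loop advanced the pointer b without any bound, so
a response containing no 0xff terminator walked past the end of the stack
buffer. With the for loop, i stays below ARRAY_SIZE(buf) (62), so at most
buf[61] is read even in the no-terminator case.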
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Ori Nimron <orinimron123@gmail.com>
commit 0614e2b73768b502fc32a75349823356d98aae2c upstream.
When creating a raw AF_AX25 socket, CAP_NET_RAW needs to be checked first.
Signed-off-by: Ori Nimron <orinimron123@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 net/ax25/af_ax25.c | 2 ++
 1 file changed, 2 insertions(+)
--- a/net/ax25/af_ax25.c
+++ b/net/ax25/af_ax25.c
@@ -853,6 +853,8 @@ static int ax25_create(struct net *net,
 		break;

 	case SOCK_RAW:
+		if (!capable(CAP_NET_RAW))
+			return -EPERM;
 		break;
 	default:
 		return -ESOCKTNOSUPPORT;
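Seen from userspace, the effect of this check (and of the similar checks in
the following patches) is an EPERM at socket creation time. A minimal
illustrative sketch, using AX.25 as the example family:

	#include <stdio.h>
	#include <sys/socket.h>

	#ifndef AF_AX25
	#define AF_AX25 3	/* value from linux/socket.h */
	#endif

	int main(void)
	{
		/* Without CAP_NET_RAW this now fails with EPERM instead of
		 * handing an unprivileged process a raw AX.25 socket. */
		int fd = socket(AF_AX25, SOCK_RAW, 0);

		if (fd < 0)
			perror("socket(AF_AX25, SOCK_RAW)");
		return 0;
	}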
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Ori Nimron <orinimron123@gmail.com>
commit e69dbd4619e7674c1679cba49afd9dd9ac347eef upstream.
When creating a raw AF_IEEE802154 socket, CAP_NET_RAW needs to be checked first.
Signed-off-by: Ori Nimron <orinimron123@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Stefan Schmidt <stefan@datenfreihafen.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
[bwh: Backported to 3.16: adjust filename]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 net/ieee802154/af_ieee802154.c | 3 +++
 1 file changed, 3 insertions(+)
--- a/net/ieee802154/af_ieee802154.c
+++ b/net/ieee802154/af_ieee802154.c
@@ -255,6 +255,9 @@ static int ieee802154_create(struct net

 	switch (sock->type) {
 	case SOCK_RAW:
+		rc = -EPERM;
+		if (!capable(CAP_NET_RAW))
+			goto out;
 		proto = &ieee802154_raw_prot;
 		ops = &ieee802154_raw_ops;
 		break;
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Ori Nimron <orinimron123@gmail.com>
commit 6cc03e8aa36c51f3b26a0d21a3c4ce2809c842ac upstream.
When creating a raw AF_APPLETALK socket, CAP_NET_RAW needs to be checked first.
Signed-off-by: Ori Nimron <orinimron123@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 net/appletalk/ddp.c | 5 +++++
 1 file changed, 5 insertions(+)
--- a/net/appletalk/ddp.c
+++ b/net/appletalk/ddp.c
@@ -1029,6 +1029,11 @@ static int atalk_create(struct net *net,
 	 */
 	if (sock->type != SOCK_RAW && sock->type != SOCK_DGRAM)
 		goto out;
+
+	rc = -EPERM;
+	if (sock->type == SOCK_RAW && !kern && !capable(CAP_NET_RAW))
+		goto out;
+
 	rc = -ENOMEM;
 	sk = sk_alloc(net, PF_APPLETALK, GFP_KERNEL, &ddp_proto);
 	if (!sk)
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Ori Nimron <orinimron123@gmail.com>
commit b91ee4aa2a2199ba4d4650706c272985a5a32d80 upstream.
When creating a raw AF_ISDN socket, CAP_NET_RAW needs to be checked first.
Signed-off-by: Ori Nimron <orinimron123@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 drivers/isdn/mISDN/socket.c | 2 ++
 1 file changed, 2 insertions(+)
--- a/drivers/isdn/mISDN/socket.c
+++ b/drivers/isdn/mISDN/socket.c
@@ -763,6 +763,8 @@ base_sock_create(struct net *net, struct

 	if (sock->type != SOCK_RAW)
 		return -ESOCKTNOSUPPORT;
+	if (!capable(CAP_NET_RAW))
+		return -EPERM;

 	sk = sk_alloc(net, PF_ISDN, GFP_KERNEL, &mISDN_proto);
 	if (!sk)
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Ori Nimron <orinimron123@gmail.com>
commit 3a359798b176183ef09efb7a3dc59abad1cc7104 upstream.
When creating a raw AF_NFC socket, CAP_NET_RAW needs to be checked first.
Signed-off-by: Ori Nimron <orinimron123@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 net/nfc/llcp_sock.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
--- a/net/nfc/llcp_sock.c
+++ b/net/nfc/llcp_sock.c
@@ -1004,10 +1004,13 @@ static int llcp_sock_create(struct net *
 	    sock->type != SOCK_RAW)
 		return -ESOCKTNOSUPPORT;

-	if (sock->type == SOCK_RAW)
+	if (sock->type == SOCK_RAW) {
+		if (!capable(CAP_NET_RAW))
+			return -EPERM;
 		sock->ops = &llcp_rawsock_ops;
-	else
+	} else {
 		sock->ops = &llcp_sock_ops;
+	}

 	sk = nfc_llcp_sock_alloc(sock, sock->type, GFP_ATOMIC);
 	if (sk == NULL)
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <will@kernel.org>
commit 4ac2813cc867ae563a1ba5a9414bfb554e5796fa upstream.
Ensure the SSID element is bounds-checked prior to invoking memcpy() with its length field, when copying to userspace.
Cc: Kees Cook <keescook@chromium.org>
Reported-by: Nicolas Waisman <nico@semmle.com>
Signed-off-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20191004095132.15777-2-will@kernel.org
[adjust commit log a bit]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 net/wireless/wext-sme.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--- a/net/wireless/wext-sme.c
+++ b/net/wireless/wext-sme.c
@@ -225,6 +225,7 @@ int cfg80211_mgd_wext_giwessid(struct ne
 			       struct iw_point *data, char *ssid)
 {
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
+	int ret = 0;

 	/* call only for station! */
 	if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION))
@@ -242,7 +243,10 @@ int cfg80211_mgd_wext_giwessid(struct ne
 		if (ie) {
 			data->flags = 1;
 			data->length = ie[1];
-			memcpy(ssid, ie + 2, data->length);
+			if (data->length > IW_ESSID_MAX_SIZE)
+				ret = -EINVAL;
+			else
+				memcpy(ssid, ie + 2, data->length);
 		}
 		rcu_read_unlock();
 	} else if (wdev->wext.connect.ssid && wdev->wext.connect.ssid_len) {
@@ -252,7 +256,7 @@ int cfg80211_mgd_wext_giwessid(struct ne
 	}
 	wdev_unlock(wdev);

-	return 0;
+	return ret;
 }

 int cfg80211_mgd_wext_siwap(struct net_device *dev,
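For scale: the SSID element's length byte (ie[1]) can claim up to 255
bytes, while the wext API only guarantees that the caller's ssid buffer
holds IW_ESSID_MAX_SIZE (32) bytes, so a malformed element could previously
overrun the destination by up to 223 bytes. The new check rejects anything
above the 32-byte limit with -EINVAL instead of copying it.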
3.16.77-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Laura Abbott <labbott@redhat.com>
commit 8c55dedb795be8ec0cf488f98c03a1c2176f7fb1 upstream.
Nicolas Waisman noticed that even though noa_len is checked for a compatible length, it's still possible to overrun the buffers of p2pinfo since there's no check on the upper bound of noa_num. Bound noa_num against P2P_MAX_NOA_NUM.
Reported-by: Nicolas Waisman <nico@semmle.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Ping-Ke Shih <pkshih@realtek.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 drivers/net/wireless/rtlwifi/ps.c | 6 ++++++
 1 file changed, 6 insertions(+)
--- a/drivers/net/wireless/rtlwifi/ps.c
+++ b/drivers/net/wireless/rtlwifi/ps.c
@@ -801,6 +801,9 @@ static void rtl_p2p_noa_ie(struct ieee80
 			return;
 		} else {
 			noa_num = (noa_len - 2) / 13;
+			if (noa_num > P2P_MAX_NOA_NUM)
+				noa_num = P2P_MAX_NOA_NUM;
+
 		}
 		noa_index = ie[3];
 		if (rtlpriv->psc.p2p_ps_info.p2p_ps_mode ==
@@ -895,6 +898,9 @@ static void rtl_p2p_action_ie(struct iee
 			return;
 		} else {
 			noa_num = (noa_len - 2) / 13;
+			if (noa_num > P2P_MAX_NOA_NUM)
+				noa_num = P2P_MAX_NOA_NUM;
+
 		}
 		noa_index = ie[3];
 		if (rtlpriv->psc.p2p_ps_info.p2p_ps_mode ==
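As a worked example of the bound (assuming P2P_MAX_NOA_NUM is 2, as defined
by this driver): a NoA attribute with noa_len = 28 gives
noa_num = (28 - 2) / 13 = 2, which fits the fixed-size NoA arrays in
p2p_ps_info; a forged attribute with noa_len = 54 gives noa_num = 4, and
the subsequent descriptor loop would write two entries past those arrays.
The new clamp caps noa_num at 2 and prevents the overrun.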
On Tue, Nov 12, 2019 at 11:47:57PM +0000, Ben Hutchings wrote:
> This is the start of the stable review cycle for the 3.16.77 release.
> There are 25 patches in this series, which will be posted as responses
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri Nov 15 00:00:00 UTC 2019.
> Anything received after that time might be too late.
Build results:
	total: 136 pass: 136 fail: 0
Qemu test results:
	total: 229 pass: 229 fail: 0
Guenter
On Wed, 2019-11-13 at 10:13 -0800, Guenter Roeck wrote:
> On Tue, Nov 12, 2019 at 11:47:57PM +0000, Ben Hutchings wrote:
> > This is the start of the stable review cycle for the 3.16.77 release.
> > There are 25 patches in this series, which will be posted as responses
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Fri Nov 15 00:00:00 UTC 2019.
> > Anything received after that time might be too late.
>
> Build results:
> 	total: 136 pass: 136 fail: 0
> Qemu test results:
> 	total: 229 pass: 229 fail: 0
Great, thanks for testing,
Ben.