This is the start of the stable review cycle for the 4.4.138 release. There are 24 patches in this series, all of which will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat Jun 16 13:27:15 UTC 2018. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.4.138-rc1...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.4.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Linux 4.4.138-rc1

Michael Ellerman <mpe@ellerman.id.au>
    crypto: vmx - Remove overly verbose printk from AES init routines

Johannes Wienke <languitar@semipol.de>
    Input: elan_i2c - add ELAN0612 (Lenovo v330 14IKB) ACPI ID

Ethan Lee <flibitijibibo@gmail.com>
    Input: goodix - add new ACPI id for GPD Win 2 touch screen

Paolo Bonzini <pbonzini@redhat.com>
    kvm: x86: use correct privilege level for sgdt/sidt/fxsave/fxrstor access

Gil Kupfer <gilkup@gmail.com>
    vmw_balloon: fixing double free when batching mode is off

Marek Szyprowski <m.szyprowski@samsung.com>
    serial: samsung: fix maxburst parameter for DMA transactions

Paolo Bonzini <pbonzini@redhat.com>
    KVM: x86: pass kvm_vcpu to kvm_read_guest_virt and kvm_write_guest_virt_system

Paolo Bonzini <pbonzini@redhat.com>
    KVM: x86: introduce linear_{read,write}_system

Linus Torvalds <torvalds@linux-foundation.org>
    Clarify (and fix) MAX_LFS_FILESIZE macros

Linus Walleij <linus.walleij@linaro.org>
    gpio: No NULL owner

Andy Lutomirski <luto@kernel.org>
    x86/crypto, x86/fpu: Remove X86_FEATURE_EAGER_FPU #ifdef from the crc32c code

Kevin Easton <kevin@guarana.org>
    af_key: Always verify length of provided sadb_key

Andy Lutomirski <luto@kernel.org>
    x86/fpu: Fix math emulation in eager fpu mode

Andy Lutomirski <luto@kernel.org>
    x86/fpu: Fix FNSAVE usage in eagerfpu mode

Andy Lutomirski <luto@kernel.org>
    x86/fpu: Hard-disable lazy FPU mode

Borislav Petkov <bp@alien8.de>
    x86/fpu: Fix eager-FPU handling on legacy FPU machines

Yu-cheng Yu <yu-cheng.yu@intel.com>
    x86/fpu: Revert ("x86/fpu: Disable AVX when eagerfpu is off")

Andy Lutomirski <luto@kernel.org>
    x86/fpu: Fix 'no387' regression

Andy Lutomirski <luto@kernel.org>
    x86/fpu: Default eagerfpu=on on all CPUs

yu-cheng yu <yu-cheng.yu@intel.com>
    x86/fpu: Disable AVX when eagerfpu is off

yu-cheng yu <yu-cheng.yu@intel.com>
    x86/fpu: Disable MPX when eagerfpu is off

Borislav Petkov <bp@suse.de>
    x86/cpufeature: Remove unused and seldomly used cpu_has_xx macros

Juergen Gross <jgross@suse.com>
    x86: Remove unused function cpu_has_ht_siblings()

yu-cheng yu <yu-cheng.yu@intel.com>
    x86/fpu: Fix early FPU command-line parsing
-------------
Diffstat:
 Makefile                                    |   4 +-
 arch/x86/crypto/chacha20_glue.c             |   2 +-
 arch/x86/crypto/crc32c-intel_glue.c         |   7 +-
 arch/x86/include/asm/cmpxchg_32.h           |   2 +-
 arch/x86/include/asm/cmpxchg_64.h           |   2 +-
 arch/x86/include/asm/cpufeature.h           |  39 +------
 arch/x86/include/asm/fpu/internal.h         |   6 +-
 arch/x86/include/asm/fpu/xstate.h           |   2 +-
 arch/x86/include/asm/kvm_emulate.h          |   6 +-
 arch/x86/include/asm/smp.h                  |   9 --
 arch/x86/include/asm/xor_32.h               |   2 +-
 arch/x86/kernel/cpu/amd.c                   |   4 +-
 arch/x86/kernel/cpu/common.c                |   4 +-
 arch/x86/kernel/cpu/intel.c                 |   3 +-
 arch/x86/kernel/cpu/intel_cacheinfo.c       |   6 +-
 arch/x86/kernel/cpu/mtrr/generic.c          |   2 +-
 arch/x86/kernel/cpu/mtrr/main.c             |   2 +-
 arch/x86/kernel/cpu/perf_event_amd.c        |   4 +-
 arch/x86/kernel/cpu/perf_event_amd_uncore.c |  11 +-
 arch/x86/kernel/fpu/core.c                  |  24 +++-
 arch/x86/kernel/fpu/init.c                  | 169 +++++++--------------------
 arch/x86/kernel/fpu/xstate.c                |   3 +-
 arch/x86/kernel/hw_breakpoint.c             |   6 +-
 arch/x86/kernel/smpboot.c                   |   2 +-
 arch/x86/kernel/traps.c                     |   1 -
 arch/x86/kernel/vm86_32.c                   |   4 +-
 arch/x86/kvm/emulate.c                      |  72 ++++++------
 arch/x86/kvm/vmx.c                          |  23 ++--
 arch/x86/kvm/x86.c                          |  51 ++++++---
 arch/x86/kvm/x86.h                          |   4 +-
 arch/x86/mm/setup_nx.c                      |   4 +-
 drivers/char/hw_random/via-rng.c            |   5 +-
 drivers/crypto/padlock-aes.c                |   2 +-
 drivers/crypto/padlock-sha.c                |   2 +-
 drivers/crypto/vmx/aes.c                    |   2 -
 drivers/crypto/vmx/aes_cbc.c                |   2 -
 drivers/crypto/vmx/aes_ctr.c                |   2 -
 drivers/crypto/vmx/ghash.c                  |   2 -
 drivers/gpio/gpiolib.c                      |   9 +-
 drivers/input/mouse/elan_i2c_core.c         |   1 +
 drivers/input/touchscreen/goodix.c          |   1 +
 drivers/iommu/intel_irq_remapping.c         |   2 +-
 drivers/misc/vmw_balloon.c                  |  23 ++--
 drivers/tty/serial/samsung.c                |   7 +-
 fs/btrfs/disk-io.c                          |   2 +-
 include/linux/fs.h                          |   4 +-
 net/key/af_key.c                            |  45 ++++++--
 47 files changed, 259 insertions(+), 332 deletions(-)
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: yu-cheng yu <yu-cheng.yu@intel.com>
commit 4f81cbafcce2c603db7865e9d0e461f7947d77d4 upstream.
The function fpu__init_system() is executed before parse_early_param(), which causes the FPU to be configured incorrectly. This patch fixes the issue by parsing boot_command_line at the beginning of fpu__init_system().
With all four patches in this series, each parameter disables features as follows:
eagerfpu=off: eagerfpu, avx, avx2, avx512, mpx
no387:        fpu
nofxsr:       fxsr, fxsropt, xmm
noxsave:      xsave, xsaveopt, xsaves, xsavec, avx, avx2, avx512, mpx, xgetbv1
noxsaveopt:   xsaveopt
noxsaves:     xsaves
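[Editorial note: cmdline_find_option_bool() can run this early because it is just a flat scan over the boot_command_line string, with no registration machinery. Below is a minimal user-space sketch of that kind of boolean-option scan; the function name and the whitespace-only token rules are illustrative, not the kernel's exact implementation.]

#include <stdio.h>
#include <string.h>

/* Return 1 if `opt` appears as a whole whitespace-delimited token in
 * `cmdline`, 0 otherwise -- roughly what a boolean early-param scan does. */
static int find_option_bool(const char *cmdline, const char *opt)
{
    size_t optlen = strlen(opt);
    const char *p = cmdline;

    while (*p) {
        const char *end;

        while (*p == ' ')           /* skip separators */
            p++;
        end = p;
        while (*end && *end != ' ') /* find the end of this token */
            end++;
        if ((size_t)(end - p) == optlen && !memcmp(p, opt, optlen))
            return 1;
        p = end;
    }
    return 0;
}

int main(void)
{
    const char *cmdline = "root=/dev/sda1 eagerfpu=off noxsave quiet";

    printf("eagerfpu=off: %d\n", find_option_bool(cmdline, "eagerfpu=off"));
    printf("no387:        %d\n", find_option_bool(cmdline, "no387"));
    return 0;
}

The real helper also handles quoting and option prefixes; the point is only that nothing beyond the raw string is needed, which is why it works before parse_early_param().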
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: yu-cheng yu <yu-cheng.yu@intel.com>
Link: http://lkml.kernel.org/r/1452119094-7252-2-git-send-email-yu-cheng.yu@intel....
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/kernel/fpu/init.c | 109 +++++++++++++++------------------------------
 1 file changed, 38 insertions(+), 71 deletions(-)

--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -3,8 +3,11 @@
  */
 #include <asm/fpu/internal.h>
 #include <asm/tlbflush.h>
+#include <asm/setup.h>
+#include <asm/cmdline.h>
 
 #include <linux/sched.h>
+#include <linux/init.h>
 
 /*
  * Initialize the TS bit in CR0 according to the style of context-switches
@@ -262,18 +265,6 @@ static void __init fpu__init_system_xsta
  */
 static enum { AUTO, ENABLE, DISABLE } eagerfpu = AUTO;
 
-static int __init eager_fpu_setup(char *s)
-{
-	if (!strcmp(s, "on"))
-		eagerfpu = ENABLE;
-	else if (!strcmp(s, "off"))
-		eagerfpu = DISABLE;
-	else if (!strcmp(s, "auto"))
-		eagerfpu = AUTO;
-	return 1;
-}
-__setup("eagerfpu=", eager_fpu_setup);
-
 /*
  * Pick the FPU context switching strategy:
  */
@@ -308,11 +299,46 @@ static void __init fpu__init_system_ctx_
 }
 
 /*
+ * We parse fpu parameters early because fpu__init_system() is executed
+ * before parse_early_param().
+ */
+static void __init fpu__init_parse_early_param(void)
+{
+	/*
+	 * No need to check "eagerfpu=auto" again, since it is the
+	 * initial default.
+	 */
+	if (cmdline_find_option_bool(boot_command_line, "eagerfpu=off"))
+		eagerfpu = DISABLE;
+	else if (cmdline_find_option_bool(boot_command_line, "eagerfpu=on"))
+		eagerfpu = ENABLE;
+
+	if (cmdline_find_option_bool(boot_command_line, "no387"))
+		setup_clear_cpu_cap(X86_FEATURE_FPU);
+
+	if (cmdline_find_option_bool(boot_command_line, "nofxsr")) {
+		setup_clear_cpu_cap(X86_FEATURE_FXSR);
+		setup_clear_cpu_cap(X86_FEATURE_FXSR_OPT);
+		setup_clear_cpu_cap(X86_FEATURE_XMM);
+	}
+
+	if (cmdline_find_option_bool(boot_command_line, "noxsave"))
+		fpu__xstate_clear_all_cpu_caps();
+
+	if (cmdline_find_option_bool(boot_command_line, "noxsaveopt"))
+		setup_clear_cpu_cap(X86_FEATURE_XSAVEOPT);
+
+	if (cmdline_find_option_bool(boot_command_line, "noxsaves"))
+		setup_clear_cpu_cap(X86_FEATURE_XSAVES);
+}
+
+/*
  * Called on the boot CPU once per system bootup, to set up the initial
  * FPU state that is later cloned into all processes:
  */
 void __init fpu__init_system(struct cpuinfo_x86 *c)
 {
+	fpu__init_parse_early_param();
 	fpu__init_system_early_generic(c);
 
 	/*
@@ -336,62 +362,3 @@ void __init fpu__init_system(struct cpui
 
 	fpu__init_system_ctx_switch();
 }
-
-/*
- * Boot parameter to turn off FPU support and fall back to math-emu:
- */
-static int __init no_387(char *s)
-{
-	setup_clear_cpu_cap(X86_FEATURE_FPU);
-	return 1;
-}
-__setup("no387", no_387);
-
-/*
- * Disable all xstate CPU features:
- */
-static int __init x86_noxsave_setup(char *s)
-{
-	if (strlen(s))
-		return 0;
-
-	fpu__xstate_clear_all_cpu_caps();
-
-	return 1;
-}
-__setup("noxsave", x86_noxsave_setup);
-
-/*
- * Disable the XSAVEOPT instruction specifically:
- */
-static int __init x86_noxsaveopt_setup(char *s)
-{
-	setup_clear_cpu_cap(X86_FEATURE_XSAVEOPT);
-
-	return 1;
-}
-__setup("noxsaveopt", x86_noxsaveopt_setup);
-
-/*
- * Disable the XSAVES instruction:
- */
-static int __init x86_noxsaves_setup(char *s)
-{
-	setup_clear_cpu_cap(X86_FEATURE_XSAVES);
-
-	return 1;
-}
-__setup("noxsaves", x86_noxsaves_setup);
-
-/*
- * Disable FX save/restore and SSE support:
- */
-static int __init x86_nofxsr_setup(char *s)
-{
-	setup_clear_cpu_cap(X86_FEATURE_FXSR);
-	setup_clear_cpu_cap(X86_FEATURE_FXSR_OPT);
-	setup_clear_cpu_cap(X86_FEATURE_XMM);
-
-	return 1;
-}
-__setup("nofxsr", x86_nofxsr_setup);
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Juergen Gross <jgross@suse.com>
commit ed29210cd6a67425026e78aa298fa434e11a74e3 upstream.
It is used nowhere.
Signed-off-by: Juergen Gross <jgross@suse.com>
Link: http://lkml.kernel.org/r/1447761943-770-1-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/include/asm/smp.h | 9 ---------
 1 file changed, 9 deletions(-)

--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -21,15 +21,6 @@ extern int smp_num_siblings;
 extern unsigned int num_processors;
 
-static inline bool cpu_has_ht_siblings(void)
-{
-	bool has_siblings = false;
-#ifdef CONFIG_SMP
-	has_siblings = cpu_has_ht && smp_num_siblings > 1;
-#endif
-	return has_siblings;
-}
-
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map);
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_map);
 /* cpus sharing the last level cache: */
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Borislav Petkov <bp@suse.de>
commit 362f924b64ba0f4be2ee0cb697690c33d40be721 upstream.
Those are stupid and code should use static_cpu_has_safe() or boot_cpu_has() instead. Kill the least used and unused ones.
The remaining ones need more careful inspection before a conversion can happen. On the TODO.
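[Editorial note: every conversion below funnels into one primitive -- a bit test against the boot CPU's capability bitmap. A hedged user-space sketch of that shape follows; the bit numbers, word count, and layout are illustrative, not the kernel's real x86_capability layout.]

#include <stdio.h>

#define NCAPINTS     2    /* illustrative: the real kernel has more words */
#define FEATURE_FPU  0    /* word 0, bit 0  -- made-up numbering */
#define FEATURE_MMX  23   /* word 0, bit 23 -- made-up numbering */
#define FEATURE_XMM  33   /* word 1, bit 1  -- made-up numbering */

struct cpuinfo {
    unsigned int caps[NCAPINTS];  /* capability bitmap */
};

/* Filled in once at boot by CPUID probing (here: hardcoded). */
static struct cpuinfo boot_cpu = {
    .caps = { (1u << FEATURE_FPU) | (1u << FEATURE_MMX),
              1u << (FEATURE_XMM - 32) },
};

/* The boot_cpu_has()-style test: word index plus bit index. */
static int cpu_has(const struct cpuinfo *c, int bit)
{
    return (c->caps[bit / 32] >> (bit % 32)) & 1;
}

int main(void)
{
    printf("FPU: %d\n", cpu_has(&boot_cpu, FEATURE_FPU));
    printf("MMX: %d\n", cpu_has(&boot_cpu, FEATURE_MMX));
    printf("XMM: %d\n", cpu_has(&boot_cpu, FEATURE_XMM));
    return 0;
}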
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/1449481182-27541-4-git-send-email-bp@alien8.de
Cc: David Sterba <dsterba@suse.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <jbacik@fb.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/crypto/chacha20_glue.c             |  2 -
 arch/x86/crypto/crc32c-intel_glue.c         |  2 -
 arch/x86/include/asm/cmpxchg_32.h           |  2 -
 arch/x86/include/asm/cmpxchg_64.h           |  2 -
 arch/x86/include/asm/cpufeature.h           | 37 +++-------------------------
 arch/x86/include/asm/xor_32.h               |  2 -
 arch/x86/kernel/cpu/amd.c                   |  4 +--
 arch/x86/kernel/cpu/common.c                |  4 ++-
 arch/x86/kernel/cpu/intel.c                 |  3 +-
 arch/x86/kernel/cpu/intel_cacheinfo.c       |  6 ++--
 arch/x86/kernel/cpu/mtrr/generic.c          |  2 -
 arch/x86/kernel/cpu/mtrr/main.c             |  2 -
 arch/x86/kernel/cpu/perf_event_amd.c        |  4 +--
 arch/x86/kernel/cpu/perf_event_amd_uncore.c | 11 ++++----
 arch/x86/kernel/fpu/init.c                  |  4 +--
 arch/x86/kernel/hw_breakpoint.c             |  6 +++-
 arch/x86/kernel/smpboot.c                   |  2 -
 arch/x86/kernel/vm86_32.c                   |  4 ++-
 arch/x86/mm/setup_nx.c                      |  4 +--
 drivers/char/hw_random/via-rng.c            |  5 ++-
 drivers/crypto/padlock-aes.c                |  2 -
 drivers/crypto/padlock-sha.c                |  2 -
 drivers/iommu/intel_irq_remapping.c         |  2 -
 fs/btrfs/disk-io.c                          |  2 -
 24 files changed, 48 insertions(+), 68 deletions(-)
--- a/arch/x86/crypto/chacha20_glue.c
+++ b/arch/x86/crypto/chacha20_glue.c
@@ -125,7 +125,7 @@ static struct crypto_alg alg = {
 
 static int __init chacha20_simd_mod_init(void)
 {
-	if (!cpu_has_ssse3)
+	if (!boot_cpu_has(X86_FEATURE_SSSE3))
 		return -ENODEV;
 
 #ifdef CONFIG_AS_AVX2
--- a/arch/x86/crypto/crc32c-intel_glue.c
+++ b/arch/x86/crypto/crc32c-intel_glue.c
@@ -257,7 +257,7 @@ static int __init crc32c_intel_mod_init(
 	if (!x86_match_cpu(crc32c_cpu_id))
 		return -ENODEV;
 #ifdef CONFIG_X86_64
-	if (cpu_has_pclmulqdq) {
+	if (boot_cpu_has(X86_FEATURE_PCLMULQDQ)) {
 		alg.update = crc32c_pcl_intel_update;
 		alg.finup = crc32c_pcl_intel_finup;
 		alg.digest = crc32c_pcl_intel_digest;
--- a/arch/x86/include/asm/cmpxchg_32.h
+++ b/arch/x86/include/asm/cmpxchg_32.h
@@ -109,6 +109,6 @@ static inline u64 __cmpxchg64_local(vola
 
 #endif
 
-#define system_has_cmpxchg_double() cpu_has_cx8
+#define system_has_cmpxchg_double() boot_cpu_has(X86_FEATURE_CX8)
 
 #endif /* _ASM_X86_CMPXCHG_32_H */
--- a/arch/x86/include/asm/cmpxchg_64.h
+++ b/arch/x86/include/asm/cmpxchg_64.h
@@ -18,6 +18,6 @@ static inline void set_64bit(volatile u6
 	cmpxchg_local((ptr), (o), (n));					\
 })
 
-#define system_has_cmpxchg_double() cpu_has_cx16
+#define system_has_cmpxchg_double() boot_cpu_has(X86_FEATURE_CX16)
 
 #endif /* _ASM_X86_CMPXCHG_64_H */
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -368,58 +368,29 @@ extern const char * const x86_bug_flags[
 #define setup_force_cpu_bug(bit) setup_force_cpu_cap(bit)
 
 #define cpu_has_fpu		boot_cpu_has(X86_FEATURE_FPU)
-#define cpu_has_de		boot_cpu_has(X86_FEATURE_DE)
 #define cpu_has_pse		boot_cpu_has(X86_FEATURE_PSE)
 #define cpu_has_tsc		boot_cpu_has(X86_FEATURE_TSC)
 #define cpu_has_pge		boot_cpu_has(X86_FEATURE_PGE)
 #define cpu_has_apic		boot_cpu_has(X86_FEATURE_APIC)
-#define cpu_has_sep		boot_cpu_has(X86_FEATURE_SEP)
-#define cpu_has_mtrr		boot_cpu_has(X86_FEATURE_MTRR)
-#define cpu_has_mmx		boot_cpu_has(X86_FEATURE_MMX)
 #define cpu_has_fxsr		boot_cpu_has(X86_FEATURE_FXSR)
 #define cpu_has_xmm		boot_cpu_has(X86_FEATURE_XMM)
 #define cpu_has_xmm2		boot_cpu_has(X86_FEATURE_XMM2)
-#define cpu_has_xmm3		boot_cpu_has(X86_FEATURE_XMM3)
-#define cpu_has_ssse3		boot_cpu_has(X86_FEATURE_SSSE3)
 #define cpu_has_aes		boot_cpu_has(X86_FEATURE_AES)
 #define cpu_has_avx		boot_cpu_has(X86_FEATURE_AVX)
 #define cpu_has_avx2		boot_cpu_has(X86_FEATURE_AVX2)
-#define cpu_has_ht		boot_cpu_has(X86_FEATURE_HT)
-#define cpu_has_nx		boot_cpu_has(X86_FEATURE_NX)
-#define cpu_has_xstore		boot_cpu_has(X86_FEATURE_XSTORE)
-#define cpu_has_xstore_enabled	boot_cpu_has(X86_FEATURE_XSTORE_EN)
-#define cpu_has_xcrypt		boot_cpu_has(X86_FEATURE_XCRYPT)
-#define cpu_has_xcrypt_enabled	boot_cpu_has(X86_FEATURE_XCRYPT_EN)
-#define cpu_has_ace2		boot_cpu_has(X86_FEATURE_ACE2)
-#define cpu_has_ace2_enabled	boot_cpu_has(X86_FEATURE_ACE2_EN)
-#define cpu_has_phe		boot_cpu_has(X86_FEATURE_PHE)
-#define cpu_has_phe_enabled	boot_cpu_has(X86_FEATURE_PHE_EN)
-#define cpu_has_pmm		boot_cpu_has(X86_FEATURE_PMM)
-#define cpu_has_pmm_enabled	boot_cpu_has(X86_FEATURE_PMM_EN)
-#define cpu_has_ds		boot_cpu_has(X86_FEATURE_DS)
-#define cpu_has_pebs		boot_cpu_has(X86_FEATURE_PEBS)
 #define cpu_has_clflush		boot_cpu_has(X86_FEATURE_CLFLUSH)
-#define cpu_has_bts		boot_cpu_has(X86_FEATURE_BTS)
 #define cpu_has_gbpages		boot_cpu_has(X86_FEATURE_GBPAGES)
 #define cpu_has_arch_perfmon	boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
 #define cpu_has_pat		boot_cpu_has(X86_FEATURE_PAT)
-#define cpu_has_xmm4_1		boot_cpu_has(X86_FEATURE_XMM4_1)
-#define cpu_has_xmm4_2		boot_cpu_has(X86_FEATURE_XMM4_2)
 #define cpu_has_x2apic		boot_cpu_has(X86_FEATURE_X2APIC)
 #define cpu_has_xsave		boot_cpu_has(X86_FEATURE_XSAVE)
-#define cpu_has_xsaveopt	boot_cpu_has(X86_FEATURE_XSAVEOPT)
 #define cpu_has_xsaves		boot_cpu_has(X86_FEATURE_XSAVES)
 #define cpu_has_osxsave		boot_cpu_has(X86_FEATURE_OSXSAVE)
 #define cpu_has_hypervisor	boot_cpu_has(X86_FEATURE_HYPERVISOR)
-#define cpu_has_pclmulqdq	boot_cpu_has(X86_FEATURE_PCLMULQDQ)
-#define cpu_has_perfctr_core	boot_cpu_has(X86_FEATURE_PERFCTR_CORE)
-#define cpu_has_perfctr_nb	boot_cpu_has(X86_FEATURE_PERFCTR_NB)
-#define cpu_has_perfctr_l2	boot_cpu_has(X86_FEATURE_PERFCTR_L2)
-#define cpu_has_cx8		boot_cpu_has(X86_FEATURE_CX8)
-#define cpu_has_cx16		boot_cpu_has(X86_FEATURE_CX16)
-#define cpu_has_eager_fpu	boot_cpu_has(X86_FEATURE_EAGER_FPU)
-#define cpu_has_topoext		boot_cpu_has(X86_FEATURE_TOPOEXT)
-#define cpu_has_bpext		boot_cpu_has(X86_FEATURE_BPEXT)
+/*
+ * Do not add any more of those clumsy macros - use static_cpu_has_safe() for
+ * fast paths and boot_cpu_has() otherwise!
+ */
 
 #if __GNUC__ >= 4
 extern void warn_pre_alternatives(void);
--- a/arch/x86/include/asm/xor_32.h
+++ b/arch/x86/include/asm/xor_32.h
@@ -553,7 +553,7 @@ do { \
 	if (cpu_has_xmm) {				\
 		xor_speed(&xor_block_pIII_sse);		\
 		xor_speed(&xor_block_sse_pf64);		\
-	} else if (cpu_has_mmx) {			\
+	} else if (boot_cpu_has(X86_FEATURE_MMX)) {	\
 		xor_speed(&xor_block_pII_mmx);		\
 		xor_speed(&xor_block_p5_mmx);		\
 	} else {					\
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -304,7 +304,7 @@ static void amd_get_topology(struct cpui
 	int cpu = smp_processor_id();
 
 	/* get information required for multi-node processors */
-	if (cpu_has_topoext) {
+	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
 		u32 eax, ebx, ecx, edx;
 
 		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
@@ -954,7 +954,7 @@ static bool cpu_has_amd_erratum(struct c
 
 void set_dr_addr_mask(unsigned long mask, int dr)
 {
-	if (!cpu_has_bpext)
+	if (!boot_cpu_has(X86_FEATURE_BPEXT))
 		return;
 
 	switch (dr) {
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1539,7 +1539,9 @@ void cpu_init(void)
 
 	printk(KERN_INFO "Initializing CPU#%d\n", cpu);
 
-	if (cpu_feature_enabled(X86_FEATURE_VME) || cpu_has_tsc || cpu_has_de)
+	if (cpu_feature_enabled(X86_FEATURE_VME) ||
+	    cpu_has_tsc ||
+	    boot_cpu_has(X86_FEATURE_DE))
 		cr4_clear_bits(X86_CR4_VME|X86_CR4_PVI|X86_CR4_TSD|X86_CR4_DE);
 
 	load_current_idt();
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -445,7 +445,8 @@ static void init_intel(struct cpuinfo_x8
 
 	if (cpu_has_xmm2)
 		set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
-	if (cpu_has_ds) {
+
+	if (boot_cpu_has(X86_FEATURE_DS)) {
 		unsigned int l1;
 		rdmsr(MSR_IA32_MISC_ENABLE, l1, l2);
 		if (!(l1 & (1<<11)))
--- a/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ b/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -591,7 +591,7 @@ cpuid4_cache_lookup_regs(int index, stru
 	unsigned		edx;
 
 	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
-		if (cpu_has_topoext)
+		if (boot_cpu_has(X86_FEATURE_TOPOEXT))
 			cpuid_count(0x8000001d, index, &eax.full,
 				    &ebx.full, &ecx.full, &edx);
 		else
@@ -637,7 +637,7 @@ static int find_num_cache_leaves(struct 
 
 void init_amd_cacheinfo(struct cpuinfo_x86 *c)
 {
-	if (cpu_has_topoext) {
+	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
 		num_cache_leaves = find_num_cache_leaves(c);
 	} else if (c->extended_cpuid_level >= 0x80000006) {
 		if (cpuid_edx(0x80000006) & 0xf000)
@@ -809,7 +809,7 @@ static int __cache_amd_cpumap_setup(unsi
 	struct cacheinfo *this_leaf;
 	int i, sibling;
 
-	if (cpu_has_topoext) {
+	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
 		unsigned int apicid, nshared, first, last;
 
 		this_leaf = this_cpu_ci->info_list + index;
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -349,7 +349,7 @@ static void get_fixed_ranges(mtrr_type *
 
 void mtrr_save_fixed_ranges(void *info)
 {
-	if (cpu_has_mtrr)
+	if (boot_cpu_has(X86_FEATURE_MTRR))
 		get_fixed_ranges(mtrr_state.fixed_ranges);
 }
 
--- a/arch/x86/kernel/cpu/mtrr/main.c
+++ b/arch/x86/kernel/cpu/mtrr/main.c
@@ -682,7 +682,7 @@ void __init mtrr_bp_init(void)
 
 	phys_addr = 32;
 
-	if (cpu_has_mtrr) {
+	if (boot_cpu_has(X86_FEATURE_MTRR)) {
 		mtrr_if = &generic_mtrr_ops;
 		size_or_mask = SIZE_OR_MASK_BITS(36);
 		size_and_mask = 0x00f00000;
--- a/arch/x86/kernel/cpu/perf_event_amd.c
+++ b/arch/x86/kernel/cpu/perf_event_amd.c
@@ -160,7 +160,7 @@ static inline int amd_pmu_addr_offset(in
 	if (offset)
 		return offset;
 
-	if (!cpu_has_perfctr_core)
+	if (!boot_cpu_has(X86_FEATURE_PERFCTR_CORE))
 		offset = index;
 	else
 		offset = index << 1;
@@ -652,7 +652,7 @@ static __initconst const struct x86_pmu 
 
 static int __init amd_core_pmu_init(void)
 {
-	if (!cpu_has_perfctr_core)
+	if (!boot_cpu_has(X86_FEATURE_PERFCTR_CORE))
 		return 0;
 
 	switch (boot_cpu_data.x86) {
--- a/arch/x86/kernel/cpu/perf_event_amd_uncore.c
+++ b/arch/x86/kernel/cpu/perf_event_amd_uncore.c
@@ -523,10 +523,10 @@ static int __init amd_uncore_init(void)
 	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
 		goto fail_nodev;
 
-	if (!cpu_has_topoext)
+	if (!boot_cpu_has(X86_FEATURE_TOPOEXT))
 		goto fail_nodev;
 
-	if (cpu_has_perfctr_nb) {
+	if (boot_cpu_has(X86_FEATURE_PERFCTR_NB)) {
 		amd_uncore_nb = alloc_percpu(struct amd_uncore *);
 		if (!amd_uncore_nb) {
 			ret = -ENOMEM;
@@ -540,7 +540,7 @@ static int __init amd_uncore_init(void)
 		ret = 0;
 	}
 
-	if (cpu_has_perfctr_l2) {
+	if (boot_cpu_has(X86_FEATURE_PERFCTR_L2)) {
 		amd_uncore_l2 = alloc_percpu(struct amd_uncore *);
 		if (!amd_uncore_l2) {
 			ret = -ENOMEM;
@@ -583,10 +583,11 @@ fail_online:
 
 	/* amd_uncore_nb/l2 should have been freed by cleanup_cpu_online */
 	amd_uncore_nb = amd_uncore_l2 = NULL;
-	if (cpu_has_perfctr_l2)
+
+	if (boot_cpu_has(X86_FEATURE_PERFCTR_L2))
 		perf_pmu_unregister(&amd_l2_pmu);
 fail_l2:
-	if (cpu_has_perfctr_nb)
+	if (boot_cpu_has(X86_FEATURE_PERFCTR_NB))
 		perf_pmu_unregister(&amd_nb_pmu);
 	if (amd_uncore_l2)
 		free_percpu(amd_uncore_l2);
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -15,7 +15,7 @@
  */
 static void fpu__init_cpu_ctx_switch(void)
 {
-	if (!cpu_has_eager_fpu)
+	if (!boot_cpu_has(X86_FEATURE_EAGER_FPU))
 		stts();
 	else
 		clts();
@@ -279,7 +279,7 @@ static void __init fpu__init_system_ctx_
 	current_thread_info()->status = 0;
 
 	/* Auto enable eagerfpu for xsaveopt */
-	if (cpu_has_xsaveopt && eagerfpu != DISABLE)
+	if (boot_cpu_has(X86_FEATURE_XSAVEOPT) && eagerfpu != DISABLE)
 		eagerfpu = ENABLE;
 
 	if (xfeatures_mask & XFEATURE_MASK_EAGER) {
--- a/arch/x86/kernel/hw_breakpoint.c
+++ b/arch/x86/kernel/hw_breakpoint.c
@@ -300,6 +300,10 @@ static int arch_build_bp_info(struct per
 			return -EINVAL;
 		if (bp->attr.bp_addr & (bp->attr.bp_len - 1))
 			return -EINVAL;
+
+		if (!boot_cpu_has(X86_FEATURE_BPEXT))
+			return -EOPNOTSUPP;
+
 		/*
 		 * It's impossible to use a range breakpoint to fake out
 		 * user vs kernel detection because bp_len - 1 can't
@@ -307,8 +311,6 @@ static int arch_build_bp_info(struct per
 		 * breakpoints, then we'll have to check for kprobe-blacklisted
 		 * addresses anywhere in the range.
 		 */
-		if (!cpu_has_bpext)
-			return -EOPNOTSUPP;
 		info->mask = bp->attr.bp_len - 1;
 		info->len = X86_BREAKPOINT_LEN_1;
 	}
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -295,7 +295,7 @@ do {						\
 
 static bool match_smt(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 {
-	if (cpu_has_topoext) {
+	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
 		int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
 
 		if (c->phys_proc_id == o->phys_proc_id &&
--- a/arch/x86/kernel/vm86_32.c
+++ b/arch/x86/kernel/vm86_32.c
@@ -357,8 +357,10 @@ static long do_sys_vm86(struct vm86plus_
 	tss = &per_cpu(cpu_tss, get_cpu());
 	/* make room for real-mode segments */
 	tsk->thread.sp0 += 16;
-	if (cpu_has_sep)
+
+	if (static_cpu_has_safe(X86_FEATURE_SEP))
 		tsk->thread.sysenter_cs = 0;
+
 	load_sp0(tss, &tsk->thread);
 	put_cpu();
 
--- a/arch/x86/mm/setup_nx.c
+++ b/arch/x86/mm/setup_nx.c
@@ -31,7 +31,7 @@ early_param("noexec", noexec_setup);
 
 void x86_configure_nx(void)
 {
-	if (cpu_has_nx && !disable_nx)
+	if (boot_cpu_has(X86_FEATURE_NX) && !disable_nx)
 		__supported_pte_mask |= _PAGE_NX;
 	else
 		__supported_pte_mask &= ~_PAGE_NX;
@@ -39,7 +39,7 @@ void x86_configure_nx(void)
 
 void __init x86_report_nx(void)
 {
-	if (!cpu_has_nx) {
+	if (!boot_cpu_has(X86_FEATURE_NX)) {
 		printk(KERN_NOTICE "Notice: NX (Execute Disable) protection "
 		       "missing in CPU!\n");
 	} else {
--- a/drivers/char/hw_random/via-rng.c
+++ b/drivers/char/hw_random/via-rng.c
@@ -140,7 +140,7 @@ static int via_rng_init(struct hwrng *rn
 	 * RNG configuration like it used to be the case in this
	 * register */
 	if ((c->x86 == 6) && (c->x86_model >= 0x0f)) {
-		if (!cpu_has_xstore_enabled) {
+		if (!boot_cpu_has(X86_FEATURE_XSTORE_EN)) {
 			pr_err(PFX "can't enable hardware RNG "
				"if XSTORE is not enabled\n");
 			return -ENODEV;
@@ -200,8 +200,9 @@ static int __init mod_init(void)
 {
 	int err;
 
-	if (!cpu_has_xstore)
+	if (!boot_cpu_has(X86_FEATURE_XSTORE))
 		return -ENODEV;
+
 	pr_info("VIA RNG detected\n");
 	err = hwrng_register(&via_rng);
 	if (err) {
--- a/drivers/crypto/padlock-aes.c
+++ b/drivers/crypto/padlock-aes.c
@@ -515,7 +515,7 @@ static int __init padlock_init(void)
 	if (!x86_match_cpu(padlock_cpu_id))
 		return -ENODEV;
 
-	if (!cpu_has_xcrypt_enabled) {
+	if (!boot_cpu_has(X86_FEATURE_XCRYPT_EN)) {
 		printk(KERN_NOTICE PFX "VIA PadLock detected, but not enabled. Hmm, strange...\n");
 		return -ENODEV;
 	}
--- a/drivers/crypto/padlock-sha.c
+++ b/drivers/crypto/padlock-sha.c
@@ -540,7 +540,7 @@ static int __init padlock_init(void)
 	struct shash_alg *sha1;
 	struct shash_alg *sha256;
 
-	if (!x86_match_cpu(padlock_sha_ids) || !cpu_has_phe_enabled)
+	if (!x86_match_cpu(padlock_sha_ids) || !boot_cpu_has(X86_FEATURE_PHE_EN))
 		return -ENODEV;
 
 	/* Register the newly added algorithm module if on *
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -753,7 +753,7 @@ static inline void set_irq_posting_cap(v
	 * should have X86_FEATURE_CX16 support, this has been confirmed
	 * with Intel hardware guys.
	 */
-	if ( cpu_has_cx16 )
+	if (boot_cpu_has(X86_FEATURE_CX16))
 		intel_irq_remap_ops.capability |= 1 << IRQ_POSTING_CAP;
 
 	for_each_iommu(iommu, drhd)
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -923,7 +923,7 @@ static int check_async_write(struct inod
 	if (bio_flags & EXTENT_BIO_TREE_LOG)
 		return 0;
 #ifdef CONFIG_X86
-	if (cpu_has_xmm4_2)
+	if (static_cpu_has_safe(X86_FEATURE_XMM4_2))
 		return 0;
 #endif
 	return 1;
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: yu-cheng yu <yu-cheng.yu@intel.com>
commit a5fe93a549c54838063d2952dd9643b0b18aa67f upstream.
This issue is a fallout from the command-line parsing move.
When "eagerfpu=off" is given as a command-line input, the kernel should disable MPX support. The decision for turning off MPX was made in fpu__init_system_ctx_switch(), which is after the selection of the XSAVE format. This patch fixes it by getting that decision done earlier in fpu__init_system_xstate().
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: yu-cheng yu <yu-cheng.yu@intel.com>
Link: http://lkml.kernel.org/r/1452119094-7252-4-git-send-email-yu-cheng.yu@intel....
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/include/asm/fpu/internal.h |  1 +
 arch/x86/kernel/fpu/init.c          | 56 ++++++++++++++++++++++++++++--------
 arch/x86/kernel/fpu/xstate.c        |  3 +-
 3 files changed, 46 insertions(+), 14 deletions(-)

--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -42,6 +42,7 @@ extern void fpu__init_cpu_xstate(void);
 extern void fpu__init_system(struct cpuinfo_x86 *c);
 extern void fpu__init_check_bugs(void);
 extern void fpu__resume_cpu(void);
+extern u64 fpu__get_supported_xfeatures_mask(void);
 
 /*
  * Debugging facility:
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -266,7 +266,45 @@ static void __init fpu__init_system_xsta
 static enum { AUTO, ENABLE, DISABLE } eagerfpu = AUTO;
 
 /*
+ * Find supported xfeatures based on cpu features and command-line input.
+ * This must be called after fpu__init_parse_early_param() is called and
+ * xfeatures_mask is enumerated.
+ */
+u64 __init fpu__get_supported_xfeatures_mask(void)
+{
+	/* Support all xfeatures known to us */
+	if (eagerfpu != DISABLE)
+		return XCNTXT_MASK;
+
+	/* Warning of xfeatures being disabled for no eagerfpu mode */
+	if (xfeatures_mask & XFEATURE_MASK_EAGER) {
+		pr_err("x86/fpu: eagerfpu switching disabled, disabling the following xstate features: 0x%llx.\n",
+			xfeatures_mask & XFEATURE_MASK_EAGER);
+	}
+
+	/* Return a mask that masks out all features requiring eagerfpu mode */
+	return ~XFEATURE_MASK_EAGER;
+}
+
+/*
+ * Disable features dependent on eagerfpu.
+ */
+static void __init fpu__clear_eager_fpu_features(void)
+{
+	setup_clear_cpu_cap(X86_FEATURE_MPX);
+}
+
+/*
  * Pick the FPU context switching strategy:
+ *
+ * When eagerfpu is AUTO or ENABLE, we ensure it is ENABLE if either of
+ * the following is true:
+ *
+ * (1) the cpu has xsaveopt, as it has the optimization and doing eager
+ *     FPU switching has a relatively low cost compared to a plain xsave;
+ * (2) the cpu has xsave features (e.g. MPX) that depend on eager FPU
+ *     switching. Should the kernel boot with noxsaveopt, we support MPX
+ *     with eager FPU switching at a higher cost.
  */
 static void __init fpu__init_system_ctx_switch(void)
 {
@@ -278,19 +316,11 @@ static void __init fpu__init_system_ctx_
 	WARN_ON_FPU(current->thread.fpu.fpstate_active);
 	current_thread_info()->status = 0;
 
-	/* Auto enable eagerfpu for xsaveopt */
 	if (boot_cpu_has(X86_FEATURE_XSAVEOPT) && eagerfpu != DISABLE)
 		eagerfpu = ENABLE;
 
-	if (xfeatures_mask & XFEATURE_MASK_EAGER) {
-		if (eagerfpu == DISABLE) {
-			pr_err("x86/fpu: eagerfpu switching disabled, disabling the following xstate features: 0x%llx.\n",
-			       xfeatures_mask & XFEATURE_MASK_EAGER);
-			xfeatures_mask &= ~XFEATURE_MASK_EAGER;
-		} else {
-			eagerfpu = ENABLE;
-		}
-	}
+	if (xfeatures_mask & XFEATURE_MASK_EAGER)
+		eagerfpu = ENABLE;
 
 	if (eagerfpu == ENABLE)
 		setup_force_cpu_cap(X86_FEATURE_EAGER_FPU);
@@ -308,10 +338,12 @@ static void __init fpu__init_parse_early
 	 * No need to check "eagerfpu=auto" again, since it is the
 	 * initial default.
 	 */
-	if (cmdline_find_option_bool(boot_command_line, "eagerfpu=off"))
+	if (cmdline_find_option_bool(boot_command_line, "eagerfpu=off")) {
 		eagerfpu = DISABLE;
-	else if (cmdline_find_option_bool(boot_command_line, "eagerfpu=on"))
+		fpu__clear_eager_fpu_features();
+	} else if (cmdline_find_option_bool(boot_command_line, "eagerfpu=on")) {
 		eagerfpu = ENABLE;
+	}
 
 	if (cmdline_find_option_bool(boot_command_line, "no387"))
 		setup_clear_cpu_cap(X86_FEATURE_FPU);
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -632,8 +632,7 @@ void __init fpu__init_system_xstate(void
 		BUG();
 	}
 
-	/* Support only the state known to the OS: */
-	xfeatures_mask = xfeatures_mask & XCNTXT_MASK;
+	xfeatures_mask &= fpu__get_supported_xfeatures_mask();
 
 	/* Enable xstate instructions to be able to continue with initialization: */
 	fpu__init_cpu_xstate();
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: yu-cheng yu <yu-cheng.yu@intel.com>
commit 394db20ca240741a08d472173db13d6f6a6e5a28 upstream.
When "eagerfpu=off" is given as a command-line input, the kernel should disable AVX support.
The Task Switched bit used for lazy context switching does not support AVX. If AVX is enabled without eagerfpu context switching, one task's AVX state could become corrupted or leak to other tasks. This is a bug and has bad security implications.
This only affects systems that have AVX/AVX2/AVX512 and this issue will be found only when one actually uses AVX/AVX2/AVX512 _AND_ does eagerfpu=off.
Reference: Intel Software Developer's Manual Vol. 3A
Sec. 2.5 Control Registers: TS Task Switched bit (bit 3 of CR0) -- Allows the saving of the x87 FPU/ MMX/SSE/SSE2/SSE3/SSSE3/SSE4 context on a task switch to be delayed until an x87 FPU/MMX/SSE/SSE2/SSE3/SSSE3/SSE4 instruction is actually executed by the new task.
Sec. 13.4.1 Using the TS Flag to Control the Saving of the X87 FPU and SSE State
When the TS flag is set, the processor monitors the instruction stream for x87 FPU, MMX, SSE instructions. When the processor detects one of these instructions, it raises a device-not-available exception (#NM) prior to executing the instruction.
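[Editorial note: to make the assumed failure mode concrete, here is a small user-space simulation of TS-style lazy switching with a hypothetical extension whose instructions do not fault when TS is set -- its state is never saved or restored across a switch, so one task continues on another's values. All names are illustrative, and note that the revert later in this series shows that real AVX instructions do in fact honor TS.]

#include <stdio.h>

struct task { int sse_reg; int ext_reg; };

static int cr0_ts = 1;            /* 1 = next FPU use "faults" (#NM) */
static int live_sse, live_ext;    /* the shared hardware registers */
static struct task *fpu_owner;    /* whose state is live in hardware */

static void lazy_switch_to(struct task *next)
{
    (void)next;
    cr0_ts = 1;                   /* defer save/restore to first FPU use */
}

static void use_sse(struct task *t)
{
    if (cr0_ts) {                 /* simulated #NM handler */
        if (fpu_owner) {
            fpu_owner->sse_reg = live_sse;   /* save old owner's state */
            fpu_owner->ext_reg = live_ext;
        }
        live_sse = t->sse_reg;               /* restore new owner's state */
        live_ext = t->ext_reg;
        fpu_owner = t;
        cr0_ts = 0;
    }
    live_sse++;
}

/* Hypothetical extension that (in the assumed model) ignores CR0.TS. */
static void use_ext_ignoring_ts(struct task *t)
{
    (void)t;
    live_ext++;                   /* no #NM: nothing gets saved/restored */
}

int main(void)
{
    struct task a = {0, 0}, b = {0, 0};

    use_sse(&a);                  /* A faults in, becomes FPU owner */
    use_ext_ignoring_ts(&a);      /* A does extension work */
    lazy_switch_to(&b);
    use_ext_ignoring_ts(&b);      /* no fault: B clobbers A's state */
    printf("live_ext = %d (task B continued on task A's state)\n",
           live_ext);
    return 0;
}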
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: yu-cheng yu <yu-cheng.yu@intel.com>
Link: http://lkml.kernel.org/r/1452119094-7252-5-git-send-email-yu-cheng.yu@intel....
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/include/asm/fpu/xstate.h | 11 ++++++-----
 arch/x86/kernel/fpu/init.c        |  6 ++++++
 2 files changed, 12 insertions(+), 5 deletions(-)

--- a/arch/x86/include/asm/fpu/xstate.h
+++ b/arch/x86/include/asm/fpu/xstate.h
@@ -20,15 +20,16 @@
 
 /* Supported features which support lazy state saving */
 #define XFEATURE_MASK_LAZY	(XFEATURE_MASK_FP | \
-				 XFEATURE_MASK_SSE | \
+				 XFEATURE_MASK_SSE)
+
+/* Supported features which require eager state saving */
+#define XFEATURE_MASK_EAGER	(XFEATURE_MASK_BNDREGS | \
+				 XFEATURE_MASK_BNDCSR | \
 				 XFEATURE_MASK_YMM | \
 				 XFEATURE_MASK_OPMASK | \
 				 XFEATURE_MASK_ZMM_Hi256 | \
 				 XFEATURE_MASK_Hi16_ZMM)
 
-/* Supported features which require eager state saving */
-#define XFEATURE_MASK_EAGER	(XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR)
-
 /* All currently supported features */
 #define XCNTXT_MASK	(XFEATURE_MASK_LAZY | XFEATURE_MASK_EAGER)
 
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -292,6 +292,12 @@ u64 __init fpu__get_supported_xfeatures_
 static void __init fpu__clear_eager_fpu_features(void)
 {
 	setup_clear_cpu_cap(X86_FEATURE_MPX);
+	setup_clear_cpu_cap(X86_FEATURE_AVX);
+	setup_clear_cpu_cap(X86_FEATURE_AVX2);
+	setup_clear_cpu_cap(X86_FEATURE_AVX512F);
+	setup_clear_cpu_cap(X86_FEATURE_AVX512PF);
+	setup_clear_cpu_cap(X86_FEATURE_AVX512ER);
+	setup_clear_cpu_cap(X86_FEATURE_AVX512CD);
 }
/*
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Lutomirski <luto@kernel.org>
commit 58122bf1d856a4ea9581d62a07c557d997d46a19 upstream.
We have eager and lazy FPU modes, introduced in:
304bceda6a18 ("x86, fpu: use non-lazy fpu restore for processors supporting xsave")
The result is rather messy. There are two code paths in almost all of the FPU code, and only one of them (the eager case) is tested frequently, since most kernel developers have new enough hardware that we use eagerfpu.
It seems that, on any remotely recent hardware, eagerfpu is a win: glibc uses SSE2, so laziness is probably overoptimistic, and, in any case, manipulating TS is far slower than saving and restoring the full state. (Stores to CR0.TS are serializing and are poorly optimized.)
To try to shake out any latent issues on old hardware, this changes the default to eager on all CPUs. If no performance or functionality problems show up, a subsequent patch could remove lazy mode entirely.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: yu-cheng yu <yu-cheng.yu@intel.com>
Link: http://lkml.kernel.org/r/ac290de61bf08d9cfc2664a4f5080257ffc1075a.1453675014...
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/kernel/fpu/init.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -252,7 +252,10 @@ static void __init fpu__init_system_xsta
  * not only saved the restores along the way, but we also have the
  * FPU ready to be used for the original task.
  *
- * 'eager' switching is used on modern CPUs, there we switch the FPU
+ * 'lazy' is deprecated because it's almost never a performance win
+ * and it's much more complicated than 'eager'.
+ *
+ * 'eager' switching is by default on all CPUs, there we switch the FPU
  * state during every context switch, regardless of whether the task
  * has used FPU instructions in that time slice or not. This is done
  * because modern FPU context saving instructions are able to optimize
@@ -263,7 +266,7 @@ static void __init fpu__init_system_xsta
  *   to use 'eager' restores, if we detect that a task is using the FPU
  *   frequently. See the fpu->counter logic in fpu/internal.h for that. ]
  */
-static enum { AUTO, ENABLE, DISABLE } eagerfpu = AUTO;
+static enum { ENABLE, DISABLE } eagerfpu = ENABLE;
 
 /*
  * Find supported xfeatures based on cpu features and command-line input.
@@ -340,15 +343,9 @@ static void __init fpu__init_system_ctx_
  */
 static void __init fpu__init_parse_early_param(void)
 {
-	/*
-	 * No need to check "eagerfpu=auto" again, since it is the
-	 * initial default.
-	 */
 	if (cmdline_find_option_bool(boot_command_line, "eagerfpu=off")) {
 		eagerfpu = DISABLE;
 		fpu__clear_eager_fpu_features();
-	} else if (cmdline_find_option_bool(boot_command_line, "eagerfpu=on")) {
-		eagerfpu = ENABLE;
 	}
 
 	if (cmdline_find_option_bool(boot_command_line, "no387"))
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Lutomirski <luto@kernel.org>
commit f363938c70a04e6bc99023a5e0c44ef7879b903f upstream.
After fixing FPU option parsing, we now parse the 'no387' boot option too early: no387 clears X86_FEATURE_FPU before it's even probed, so the boot CPU promptly re-enables it.
I suspect it gets even more confused on SMP.
Fix the probing code to leave X86_FEATURE_FPU off if it's been disabled by setup_clear_cpu_cap().
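[Editorial note: the shape of the fix, as a hedged user-space sketch -- probing must treat "cleared on the command line" as sticky instead of re-deriving the feature from hardware. The names are illustrative stand-ins, not the kernel's.]

#include <stdio.h>

#define FEATURE_FPU 0   /* illustrative bit number */

static unsigned long cpu_caps_cleared;  /* bits forced off by boot options */
static unsigned long cpu_caps;          /* bits set by probing */

static void setup_clear_cap(int bit)
{
    cpu_caps_cleared |= 1UL << bit;
}

static int fpu_probe_succeeds(void)
{
    return 1;   /* pretend the hardware does have an FPU */
}

static void probe_fpu(void)
{
    /* The regression: probing unconditionally would set the bit right
     * back. The fix: honor a command-line clear and skip the probe. */
    if (cpu_caps_cleared & (1UL << FEATURE_FPU))
        return;
    if (fpu_probe_succeeds())
        cpu_caps |= 1UL << FEATURE_FPU;
}

int main(void)
{
    setup_clear_cap(FEATURE_FPU);   /* effect of booting with "no387" */
    probe_fpu();
    printf("FPU enabled: %d\n", !!(cpu_caps & (1UL << FEATURE_FPU)));
    return 0;
}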
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: yu-cheng yu <yu-cheng.yu@intel.com>
Fixes: 4f81cbafcce2 ("x86/fpu: Fix early FPU command-line parsing")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/kernel/fpu/init.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -78,13 +78,15 @@ static void fpu__init_system_early_gener
 	cr0 &= ~(X86_CR0_TS | X86_CR0_EM);
 	write_cr0(cr0);
 
-	asm volatile("fninit ; fnstsw %0 ; fnstcw %1"
-		     : "+m" (fsw), "+m" (fcw));
+	if (!test_bit(X86_FEATURE_FPU, (unsigned long *)cpu_caps_cleared)) {
+		asm volatile("fninit ; fnstsw %0 ; fnstcw %1"
+			     : "+m" (fsw), "+m" (fcw));
 
-	if (fsw == 0 && (fcw & 0x103f) == 0x003f)
-		set_cpu_cap(c, X86_FEATURE_FPU);
-	else
-		clear_cpu_cap(c, X86_FEATURE_FPU);
+		if (fsw == 0 && (fcw & 0x103f) == 0x003f)
+			set_cpu_cap(c, X86_FEATURE_FPU);
+		else
+			clear_cpu_cap(c, X86_FEATURE_FPU);
+	}
 
 #ifndef CONFIG_MATH_EMULATION
 	if (!cpu_has_fpu) {
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yu-cheng Yu <yu-cheng.yu@intel.com>
commit a65050c6f17e52442716138d48d0a47301a8344b upstream.
Leonid Shatz noticed that the SDM interpretation of the following recent commit:
394db20ca240741 ("x86/fpu: Disable AVX when eagerfpu is off")
... is incorrect and that the original behavior of the FPU code was correct.
Because AVX is not mentioned in the CR0.TS bit description, it was mistakenly believed not to be supported for lazy context switching. This turns out to be false:
Intel Software Developer's Manual Vol. 3A, Sec. 2.5 Control Registers:
'TS Task Switched bit (bit 3 of CR0) -- Allows the saving of the x87 FPU/ MMX/SSE/SSE2/SSE3/SSSE3/SSE4 context on a task switch to be delayed until an x87 FPU/MMX/SSE/SSE2/SSE3/SSSE3/SSE4 instruction is actually executed by the new task.'
Intel Software Developer's Manual Vol. 2A, Sec. 2.4 Instruction Exception Specification:
'AVX instructions refer to exceptions by classes that include #NM "Device Not Available" exception for lazy context switch.'
So revert the commit.
Reported-by: Leonid Shatz <leonid.shatz@ravellosystems.com>
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1457569734-3785-1-git-send-email-yu-cheng.yu@intel....
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/include/asm/fpu/xstate.h | 9 ++++-----
 arch/x86/kernel/fpu/init.c        | 6 ------
 2 files changed, 4 insertions(+), 11 deletions(-)

--- a/arch/x86/include/asm/fpu/xstate.h
+++ b/arch/x86/include/asm/fpu/xstate.h
@@ -20,16 +20,15 @@
 
 /* Supported features which support lazy state saving */
 #define XFEATURE_MASK_LAZY	(XFEATURE_MASK_FP | \
-				 XFEATURE_MASK_SSE)
-
-/* Supported features which require eager state saving */
-#define XFEATURE_MASK_EAGER	(XFEATURE_MASK_BNDREGS | \
-				 XFEATURE_MASK_BNDCSR | \
+				 XFEATURE_MASK_SSE | \
 				 XFEATURE_MASK_YMM | \
 				 XFEATURE_MASK_OPMASK | \
 				 XFEATURE_MASK_ZMM_Hi256 | \
 				 XFEATURE_MASK_Hi16_ZMM)
 
+/* Supported features which require eager state saving */
+#define XFEATURE_MASK_EAGER	(XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR)
+
 /* All currently supported features */
 #define XCNTXT_MASK	(XFEATURE_MASK_LAZY | XFEATURE_MASK_EAGER)
 
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -297,12 +297,6 @@ u64 __init fpu__get_supported_xfeatures_
 static void __init fpu__clear_eager_fpu_features(void)
 {
 	setup_clear_cpu_cap(X86_FEATURE_MPX);
-	setup_clear_cpu_cap(X86_FEATURE_AVX);
-	setup_clear_cpu_cap(X86_FEATURE_AVX2);
-	setup_clear_cpu_cap(X86_FEATURE_AVX512F);
-	setup_clear_cpu_cap(X86_FEATURE_AVX512PF);
-	setup_clear_cpu_cap(X86_FEATURE_AVX512ER);
-	setup_clear_cpu_cap(X86_FEATURE_AVX512CD);
 }
/*
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Borislav Petkov <bp@alien8.de>
commit 6e6867093de35141f0a76b66ac13f9f2e2c8e77a upstream.
i486 derived cores like Intel Quark support only the very old, legacy x87 FPU (FSAVE/FRSTOR, CPUID bit FXSR is not set), and our FPU code wasn't handling the saving and restoring there properly in the 'eagerfpu' case.
So after we made eagerfpu the default for all CPU types:
58122bf1d856 x86/fpu: Default eagerfpu=on on all CPUs
these old FPU designs broke. First, Andy Shevchenko reported a splat:
WARNING: CPU: 0 PID: 823 at arch/x86/include/asm/fpu/internal.h:163 fpu__clear+0x8c/0x160
which was us trying to execute FXRSTOR on those machines even though they don't support it.
After taking care of that, Bryan O'Donoghue reported that a simple FPU test still failed because we weren't initializing the FPU state properly on those machines.
Take care of all that.
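[Editorial note: the core of the fix is a three-way dispatch on the restore instruction the CPU can actually execute, falling back to the legacy FSAVE/FRSTOR format when FXSR is absent. A hedged user-space sketch follows; the function names are illustrative stand-ins, not the kernel's.]

#include <stdio.h>

enum { HAS_FXSR = 1 << 0, HAS_XSAVE = 1 << 1 };  /* illustrative flags */

static void restore_xregs(void)  { puts("XRSTOR: restore full xstate"); }
static void restore_fxregs(void) { puts("FXRSTOR: restore FXSAVE format"); }
static void restore_fregs(void)  { puts("FRSTOR: restore legacy x87 format"); }

/* Dispatch on the most capable restore instruction the CPU supports. */
static void copy_init_state_to_fpregs(unsigned int caps)
{
    if (caps & HAS_XSAVE)
        restore_xregs();
    else if (caps & HAS_FXSR)
        restore_fxregs();
    else
        restore_fregs();   /* i486-class cores such as Intel Quark */
}

int main(void)
{
    copy_init_state_to_fpregs(HAS_XSAVE | HAS_FXSR);
    copy_init_state_to_fpregs(HAS_FXSR);
    copy_init_state_to_fpregs(0);   /* the case this patch fixes */
    return 0;
}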
Reported-and-tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Reported-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Yu-cheng <yu-cheng.yu@intel.com>
Link: http://lkml.kernel.org/r/20160311113206.GD4312@pd.tnic
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/kernel/fpu/core.c | 4 +++-
 arch/x86/kernel/fpu/init.c | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -409,8 +409,10 @@ static inline void copy_init_fpstate_to_
 {
 	if (use_xsave())
 		copy_kernel_to_xregs(&init_fpstate.xsave, -1);
-	else
+	else if (static_cpu_has(X86_FEATURE_FXSR))
 		copy_kernel_to_fxregs(&init_fpstate.fxsave);
+	else
+		copy_kernel_to_fregs(&init_fpstate.fsave);
 }
 
 /*
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -135,7 +135,7 @@ static void __init fpu__init_system_gene
	 * Set up the legacy init FPU context. (xstate init might overwrite this
	 * with a more modern format, if the CPU supports it.)
	 */
-	fpstate_init_fxstate(&init_fpstate.fxsave);
+	fpstate_init(&init_fpstate);
 
 	fpu__init_system_mxcsr();
 }
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Lutomirski <luto@kernel.org>
commit ca6938a1cd8a1c5e861a99b67f84ac166fc2b9e7 upstream.
Since commit:
58122bf1d856 ("x86/fpu: Default eagerfpu=on on all CPUs")
... in Linux 4.6, eager FPU mode has been the default on all x86 systems, and no one has reported any regressions.
This patch removes the ability to enable lazy mode: use_eager_fpu() becomes "return true" and all of the FPU mode selection machinery is removed.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: pbonzini@redhat.com
Link: http://lkml.kernel.org/r/1475627678-20788-3-git-send-email-riel@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/include/asm/cpufeature.h   |  2 +-
 arch/x86/include/asm/fpu/internal.h |  2 +-
 arch/x86/kernel/fpu/init.c          | 91 +++----------------------------------
 3 files changed, 5 insertions(+), 90 deletions(-)

--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -104,7 +104,7 @@
 #define X86_FEATURE_EXTD_APICID	( 3*32+26) /* has extended APICID (8 bits) */
 #define X86_FEATURE_AMD_DCM	( 3*32+27) /* multi-node processor */
 #define X86_FEATURE_APERFMPERF	( 3*32+28) /* APERFMPERF */
-#define X86_FEATURE_EAGER_FPU	( 3*32+29) /* "eagerfpu" Non lazy FPU restore */
+/* free, was #define X86_FEATURE_EAGER_FPU	( 3*32+29) * "eagerfpu" Non lazy FPU restore */
 #define X86_FEATURE_NONSTOP_TSC_S3 ( 3*32+30) /* TSC doesn't stop in S3 state */
 
 /* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -58,7 +58,7 @@ extern u64 fpu__get_supported_xfeatures_
  */
 static __always_inline __pure bool use_eager_fpu(void)
 {
-	return static_cpu_has_safe(X86_FEATURE_EAGER_FPU);
+	return true;
 }
 
 static __always_inline __pure bool use_xsaveopt(void)
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -15,10 +15,7 @@
  */
 static void fpu__init_cpu_ctx_switch(void)
 {
-	if (!boot_cpu_has(X86_FEATURE_EAGER_FPU))
-		stts();
-	else
-		clts();
+	clts();
 }
 
 /*
@@ -235,82 +232,16 @@ static void __init fpu__init_system_xsta
 }
 
 /*
- * FPU context switching strategies:
- *
- * Against popular belief, we don't do lazy FPU saves, due to the
- * task migration complications it brings on SMP - we only do
- * lazy FPU restores.
- *
- * 'lazy' is the traditional strategy, which is based on setting
- * CR0::TS to 1 during context-switch (instead of doing a full
- * restore of the FPU state), which causes the first FPU instruction
- * after the context switch (whenever it is executed) to fault - at
- * which point we lazily restore the FPU state into FPU registers.
- *
- * Tasks are of course under no obligation to execute FPU instructions,
- * so it can easily happen that another context-switch occurs without
- * a single FPU instruction being executed. If we eventually switch
- * back to the original task (that still owns the FPU) then we have
- * not only saved the restores along the way, but we also have the
- * FPU ready to be used for the original task.
- *
- * 'lazy' is deprecated because it's almost never a performance win
- * and it's much more complicated than 'eager'.
- *
- * 'eager' switching is by default on all CPUs, there we switch the FPU
- * state during every context switch, regardless of whether the task
- * has used FPU instructions in that time slice or not. This is done
- * because modern FPU context saving instructions are able to optimize
- * state saving and restoration in hardware: they can detect both
- * unused and untouched FPU state and optimize accordingly.
- *
- * [ Note that even in 'lazy' mode we might optimize context switches
- *   to use 'eager' restores, if we detect that a task is using the FPU
- *   frequently. See the fpu->counter logic in fpu/internal.h for that. ]
- */
-static enum { ENABLE, DISABLE } eagerfpu = ENABLE;
-
-/*
  * Find supported xfeatures based on cpu features and command-line input.
  * This must be called after fpu__init_parse_early_param() is called and
  * xfeatures_mask is enumerated.
  */
 u64 __init fpu__get_supported_xfeatures_mask(void)
 {
-	/* Support all xfeatures known to us */
-	if (eagerfpu != DISABLE)
-		return XCNTXT_MASK;
-
-	/* Warning of xfeatures being disabled for no eagerfpu mode */
-	if (xfeatures_mask & XFEATURE_MASK_EAGER) {
-		pr_err("x86/fpu: eagerfpu switching disabled, disabling the following xstate features: 0x%llx.\n",
-			xfeatures_mask & XFEATURE_MASK_EAGER);
-	}
-
-	/* Return a mask that masks out all features requiring eagerfpu mode */
-	return ~XFEATURE_MASK_EAGER;
+	return XCNTXT_MASK;
 }
 
-/*
- * Disable features dependent on eagerfpu.
- */
-static void __init fpu__clear_eager_fpu_features(void)
-{
-	setup_clear_cpu_cap(X86_FEATURE_MPX);
-}
-
-/*
- * Pick the FPU context switching strategy:
- *
- * When eagerfpu is AUTO or ENABLE, we ensure it is ENABLE if either of
- * the following is true:
- *
- * (1) the cpu has xsaveopt, as it has the optimization and doing eager
- *     FPU switching has a relatively low cost compared to a plain xsave;
- * (2) the cpu has xsave features (e.g. MPX) that depend on eager FPU
- *     switching. Should the kernel boot with noxsaveopt, we support MPX
- *     with eager FPU switching at a higher cost.
- */
+/* Legacy code to initialize eager fpu mode. */
 static void __init fpu__init_system_ctx_switch(void)
 {
 	static bool on_boot_cpu = 1;
@@ -320,17 +251,6 @@ static void __init fpu__init_system_ctx_
 
 	WARN_ON_FPU(current->thread.fpu.fpstate_active);
 	current_thread_info()->status = 0;
-
-	if (boot_cpu_has(X86_FEATURE_XSAVEOPT) && eagerfpu != DISABLE)
-		eagerfpu = ENABLE;
-
-	if (xfeatures_mask & XFEATURE_MASK_EAGER)
-		eagerfpu = ENABLE;
-
-	if (eagerfpu == ENABLE)
-		setup_force_cpu_cap(X86_FEATURE_EAGER_FPU);
-
-	printk(KERN_INFO "x86/fpu: Using '%s' FPU context switches.\n", eagerfpu == ENABLE ? "eager" : "lazy");
 }
 
 /*
@@ -339,11 +259,6 @@ static void __init fpu__init_system_ctx_
  */
 static void __init fpu__init_parse_early_param(void)
 {
-	if (cmdline_find_option_bool(boot_command_line, "eagerfpu=off")) {
-		eagerfpu = DISABLE;
-		fpu__clear_eager_fpu_features();
-	}
-
 	if (cmdline_find_option_bool(boot_command_line, "no387"))
 		setup_clear_cpu_cap(X86_FEATURE_FPU);
Hi Greg,
> /* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */
> --- a/arch/x86/include/asm/fpu/internal.h
> +++ b/arch/x86/include/asm/fpu/internal.h
> @@ -58,7 +58,7 @@ extern u64 fpu__get_supported_xfeatures_
>   */
>  static __always_inline __pure bool use_eager_fpu(void)
>  {
> -	return static_cpu_has_safe(X86_FEATURE_EAGER_FPU);
> +	return true;
>  }
Since this function now always returns true, we can remove the code that depends on lazy FPU mode. Actually, this has already been done in "x86/fpu: Remove use_eager_fpu()".
Ref: https://patchwork.kernel.org/patch/9365883/
>  static void __init fpu__init_parse_early_param(void)
>  {
> -	if (cmdline_find_option_bool(boot_command_line, "eagerfpu=off")) {
> -		eagerfpu = DISABLE;
> -		fpu__clear_eager_fpu_features();
> -	}
Since this patch removes the "eagerfpu" kernel boot parameter, maybe we should remove it from the Documentation as well. This has also been done, by commit "x86/fpu: Finish excising 'eagerfpu'".
Ref: https://patchwork.kernel.org/patch/9380673/
I will try backporting those patches unless anyone has any objections.
Thanks,
Daniel Sangorrin
On Fri, Jun 15, 2018 at 01:24:27PM +0900, Daniel Sangorrin wrote:
> Hi Greg,
> 
> > /* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */
> > --- a/arch/x86/include/asm/fpu/internal.h
> > +++ b/arch/x86/include/asm/fpu/internal.h
> > @@ -58,7 +58,7 @@ extern u64 fpu__get_supported_xfeatures_
> >   */
> >  static __always_inline __pure bool use_eager_fpu(void)
> >  {
> > -	return static_cpu_has_safe(X86_FEATURE_EAGER_FPU);
> > +	return true;
> >  }
> 
> Since this function now always returns true, we can remove the code that
> depends on lazy FPU mode. Actually, this has already been done in
> "x86/fpu: Remove use_eager_fpu()".
> Ref: https://patchwork.kernel.org/patch/9365883/
> 
> >  static void __init fpu__init_parse_early_param(void)
> >  {
> > -	if (cmdline_find_option_bool(boot_command_line, "eagerfpu=off")) {
> > -		eagerfpu = DISABLE;
> > -		fpu__clear_eager_fpu_features();
> > -	}
> 
> Since this patch removes the "eagerfpu" kernel boot parameter, maybe we
> should remove it from the Documentation as well. This has also been done,
> by commit "x86/fpu: Finish excising 'eagerfpu'".
> Ref: https://patchwork.kernel.org/patch/9380673/
> 
> I will try backporting those patches unless anyone has any objections.
What are the git commit ids of those patches in Linus's tree? No need to point to patchwork links, I don't use that tool.
thanks,
greg k-h
-----Original Message-----
From: stable-owner@vger.kernel.org [mailto:stable-owner@vger.kernel.org] On Behalf Of 'Greg Kroah-Hartman'
Sent: Friday, June 15, 2018 1:56 PM
To: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
Cc: linux-kernel@vger.kernel.org; stable@vger.kernel.org; 'Andy Lutomirski' <luto@kernel.org>; 'Rik van Riel' <riel@redhat.com>; 'Borislav Petkov' <bp@alien8.de>; 'Brian Gerst' <brgerst@gmail.com>; 'Dave Hansen' <dave.hansen@linux.intel.com>; 'Denys Vlasenko' <dvlasenk@redhat.com>; 'Fenghua Yu' <fenghua.yu@intel.com>; 'H. Peter Anvin' <hpa@zytor.com>; 'Josh Poimboeuf' <jpoimboe@redhat.com>; 'Linus Torvalds' <torvalds@linux-foundation.org>; 'Oleg Nesterov' <oleg@redhat.com>; 'Peter Zijlstra' <peterz@infradead.org>; 'Quentin Casasnovas' <quentin.casasnovas@oracle.com>; 'Thomas Gleixner' <tglx@linutronix.de>; pbonzini@redhat.com; 'Ingo Molnar' <mingo@kernel.org>
Subject: Re: [PATCH 4.4 10/24] x86/fpu: Hard-disable lazy FPU mode
> On Fri, Jun 15, 2018 at 01:24:27PM +0900, Daniel Sangorrin wrote:
> > Hi Greg,
> > 
> > > /* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */
> > > --- a/arch/x86/include/asm/fpu/internal.h
> > > +++ b/arch/x86/include/asm/fpu/internal.h
> > > @@ -58,7 +58,7 @@ extern u64 fpu__get_supported_xfeatures_
> > >   */
> > >  static __always_inline __pure bool use_eager_fpu(void)
> > >  {
> > > -	return static_cpu_has_safe(X86_FEATURE_EAGER_FPU);
> > > +	return true;
> > >  }
> > 
> > Since this function now always returns true, we can remove the code that
> > depends on lazy FPU mode. Actually, this has already been done in
> > "x86/fpu: Remove use_eager_fpu()".
> > Ref: https://patchwork.kernel.org/patch/9365883/
> > 
> > >  static void __init fpu__init_parse_early_param(void)
> > >  {
> > > -	if (cmdline_find_option_bool(boot_command_line, "eagerfpu=off")) {
> > > -		eagerfpu = DISABLE;
> > > -		fpu__clear_eager_fpu_features();
> > > -	}
> > 
> > Since this patch removes the "eagerfpu" kernel boot parameter, maybe we
> > should remove it from the Documentation as well. This has also been done,
> > by commit "x86/fpu: Finish excising 'eagerfpu'".
> > Ref: https://patchwork.kernel.org/patch/9380673/
> > 
> > I will try backporting those patches unless anyone has any objections.
> 
> What are the git commit ids of those patches in Linus's tree?  No need
> to point to patchwork links, I don't use that tool.
OK, I got it.
"x86/fpu: Remove use_eager_fpu()": c592b57347069abfc0dcad3b3a302cf882602597 "x86/fpu: Finish excising 'eagerfpu'": e63650840e8b053aa09ad934877e87e9941ed135
Unfortunately, they don't apply cleanly to stable kernels.
Thanks, Daniel Sangorrin
On Fri, Jun 15, 2018 at 02:23:08PM +0900, Daniel Sangorrin wrote:
-----Original Message-----
From: stable-owner@vger.kernel.org [mailto:stable-owner@vger.kernel.org] On Behalf Of 'Greg Kroah-Hartman'
Sent: Friday, June 15, 2018 1:56 PM
To: Daniel Sangorrin daniel.sangorrin@toshiba.co.jp
Cc: linux-kernel@vger.kernel.org; stable@vger.kernel.org; 'Andy Lutomirski' luto@kernel.org; 'Rik van Riel' riel@redhat.com; 'Borislav Petkov' bp@alien8.de; 'Brian Gerst' brgerst@gmail.com; 'Dave Hansen' dave.hansen@linux.intel.com; 'Denys Vlasenko' dvlasenk@redhat.com; 'Fenghua Yu' fenghua.yu@intel.com; 'H. Peter Anvin' hpa@zytor.com; 'Josh Poimboeuf' jpoimboe@redhat.com; 'Linus Torvalds' torvalds@linux-foundation.org; 'Oleg Nesterov' oleg@redhat.com; 'Peter Zijlstra' peterz@infradead.org; 'Quentin Casasnovas' quentin.casasnovas@oracle.com; 'Thomas Gleixner' tglx@linutronix.de; pbonzini@redhat.com; 'Ingo Molnar' mingo@kernel.org
Subject: Re: [PATCH 4.4 10/24] x86/fpu: Hard-disable lazy FPU mode
On Fri, Jun 15, 2018 at 01:24:27PM +0900, Daniel Sangorrin wrote:
Hi Greg,
/* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */

--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -58,7 +58,7 @@ extern u64 fpu__get_supported_xfeatures_
  */
 static __always_inline __pure bool use_eager_fpu(void)
 {
-	return static_cpu_has_safe(X86_FEATURE_EAGER_FPU);
+	return true;
 }

Since this function always returns true, we can remove the code that depends on lazy FPU mode. Actually, this has already been done in "x86/fpu: Remove use_eager_fpu()". Ref: https://patchwork.kernel.org/patch/9365883/

static void __init fpu__init_parse_early_param(void)
{
-	if (cmdline_find_option_bool(boot_command_line, "eagerfpu=off")) {
-		eagerfpu = DISABLE;
-		fpu__clear_eager_fpu_features();
-	}

Since this patch removes the kernel boot parameter "eagerfpu", maybe we should also remove it from the Documentation. This has also been done by commit "x86/fpu: Finish excising 'eagerfpu'". Ref: https://patchwork.kernel.org/patch/9380673/
I will try backporting those patches unless anyone has any objections.
What are the git commit ids of those patches in Linus's tree? No need to point to patchwork links, I don't use that tool.
OK, I got it.
"x86/fpu: Remove use_eager_fpu()": c592b57347069abfc0dcad3b3a302cf882602597 "x86/fpu: Finish excising 'eagerfpu'": e63650840e8b053aa09ad934877e87e9941ed135
Minor nit. For kernel commits, the "normal" way we reference them looks like this:

c592b5734706 ("x86/fpu: Remove use_eager_fpu()")
e63650840e8b ("x86/fpu: Finish excising 'eagerfpu'")

which makes it much easier to read and understand. That's what we use for the "Fixes:" tag in commits and in other places (e.g. text in commit messages).

To automatically generate that format, you can just do:

git show -s --abbrev-commit --abbrev=12 --pretty=format:'%h ("%s")%n'

I recommend just setting up an alias for the above line, otherwise it's a pain to have to remember how to do it all the time. Here's what I do:

$ alias gsr='git show -s --abbrev-commit --abbrev=12 --pretty=format:"%h (\"%s\")%n"'
$ gsr c592b57347069abfc0dcad3b3a302cf882602597
c592b5734706 ("x86/fpu: Remove use_eager_fpu()")
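A git-level alias works too (a sketch, assuming a reasonably recent git; git does its own quote-aware splitting of alias values, so the escaped inner quotes survive):

$ git config --global alias.sr 'show -s --abbrev-commit --abbrev=12 --pretty=format:"%h (\"%s\")%n"'
$ git sr c592b57347069abfc0dcad3b3a302cf882602597
c592b5734706 ("x86/fpu: Remove use_eager_fpu()")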
Unfortunately, they don't apply cleanly to stable kernels.
Should be very simple to backport if you want to. Also I need copies for the 4.9.y tree as well if you do so.
thanks,
greg k-h
-----Original Message-----
From: 'Greg Kroah-Hartman' [mailto:gregkh@linuxfoundation.org]
Sent: Friday, June 15, 2018 4:06 PM
To: Daniel Sangorrin daniel.sangorrin@toshiba.co.jp
Cc: linux-kernel@vger.kernel.org; stable@vger.kernel.org; 'Andy Lutomirski' luto@kernel.org; 'Rik van Riel' riel@redhat.com; 'Borislav Petkov' bp@alien8.de; 'Brian Gerst' brgerst@gmail.com; 'Dave Hansen' dave.hansen@linux.intel.com; 'Denys Vlasenko' dvlasenk@redhat.com; 'Fenghua Yu' fenghua.yu@intel.com; 'H. Peter Anvin' hpa@zytor.com; 'Josh Poimboeuf' jpoimboe@redhat.com; 'Linus Torvalds' torvalds@linux-foundation.org; 'Oleg Nesterov' oleg@redhat.com; 'Peter Zijlstra' peterz@infradead.org; 'Quentin Casasnovas' quentin.casasnovas@oracle.com; 'Thomas Gleixner' tglx@linutronix.de; pbonzini@redhat.com; 'Ingo Molnar' mingo@kernel.org
Subject: Re: [PATCH 4.4 10/24] x86/fpu: Hard-disable lazy FPU mode
On Fri, Jun 15, 2018 at 02:23:08PM +0900, Daniel Sangorrin wrote:
-----Original Message-----
From: stable-owner@vger.kernel.org [mailto:stable-owner@vger.kernel.org] On Behalf Of 'Greg Kroah-Hartman'
Sent: Friday, June 15, 2018 1:56 PM
To: Daniel Sangorrin daniel.sangorrin@toshiba.co.jp
Cc: linux-kernel@vger.kernel.org; stable@vger.kernel.org; 'Andy Lutomirski' luto@kernel.org; 'Rik van Riel' riel@redhat.com; 'Borislav Petkov' bp@alien8.de; 'Brian Gerst' brgerst@gmail.com; 'Dave Hansen' dave.hansen@linux.intel.com; 'Denys Vlasenko' dvlasenk@redhat.com; 'Fenghua Yu' fenghua.yu@intel.com; 'H. Peter Anvin' hpa@zytor.com; 'Josh Poimboeuf' jpoimboe@redhat.com; 'Linus Torvalds' torvalds@linux-foundation.org; 'Oleg Nesterov' oleg@redhat.com; 'Peter Zijlstra' peterz@infradead.org; 'Quentin Casasnovas' quentin.casasnovas@oracle.com; 'Thomas Gleixner' tglx@linutronix.de; pbonzini@redhat.com; 'Ingo Molnar' mingo@kernel.org
Subject: Re: [PATCH 4.4 10/24] x86/fpu: Hard-disable lazy FPU mode
On Fri, Jun 15, 2018 at 01:24:27PM +0900, Daniel Sangorrin wrote:
Hi Greg,
/* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */

--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -58,7 +58,7 @@ extern u64 fpu__get_supported_xfeatures_
  */
 static __always_inline __pure bool use_eager_fpu(void)
 {
-	return static_cpu_has_safe(X86_FEATURE_EAGER_FPU);
+	return true;
 }

Since this function always returns true, we can remove the code that depends on lazy FPU mode. Actually, this has already been done in "x86/fpu: Remove use_eager_fpu()". Ref: https://patchwork.kernel.org/patch/9365883/

static void __init fpu__init_parse_early_param(void)
{
-	if (cmdline_find_option_bool(boot_command_line, "eagerfpu=off")) {
-		eagerfpu = DISABLE;
-		fpu__clear_eager_fpu_features();
-	}

Since this patch removes the kernel boot parameter "eagerfpu", maybe we should also remove it from the Documentation. This has also been done by commit "x86/fpu: Finish excising 'eagerfpu'". Ref: https://patchwork.kernel.org/patch/9380673/
I will try backporting those patches unless anyone has any objections.
What are the git commit ids of those patches in Linus's tree? No need to point to patchwork links, I don't use that tool.
OK, I got it.
"x86/fpu: Remove use_eager_fpu()":
c592b57347069abfc0dcad3b3a302cf882602597
"x86/fpu: Finish excising 'eagerfpu'":
e63650840e8b053aa09ad934877e87e9941ed135
Minor nit. For kernel commits, the "normal" way we reference them looks like this:

c592b5734706 ("x86/fpu: Remove use_eager_fpu()")
e63650840e8b ("x86/fpu: Finish excising 'eagerfpu'")

which makes it much easier to read and understand. That's what we use for the "Fixes:" tag in commits and in other places (e.g. text in commit messages).

To automatically generate that format, you can just do:

git show -s --abbrev-commit --abbrev=12 --pretty=format:'%h ("%s")%n'

I recommend just setting up an alias for the above line, otherwise it's a pain to have to remember how to do it all the time. Here's what I do:

$ alias gsr='git show -s --abbrev-commit --abbrev=12 --pretty=format:"%h (\"%s\")%n"'
$ gsr c592b57347069abfc0dcad3b3a302cf882602597
c592b5734706 ("x86/fpu: Remove use_eager_fpu()")
Thanks a lot for the detailed explanation.
Unfortunately, they don't apply cleanly to stable kernels.
Should be very simple to backport if you want to. Also I need copies for the 4.9.y tree as well if you do so.
OK, I will do it after OSS Japan next week if you don't mind. I also want to run some tests on the FPU and try different boot parameter combinations (e.g. no387, nofxsr, etc.) to check that I didn't break anything.
Thanks, Daniel
thanks,
greg k-h
On Fri, 2018-06-15 at 13:24 +0900, Daniel Sangorrin wrote:
Hi Greg,
/* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */

--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -58,7 +58,7 @@ extern u64 fpu__get_supported_xfeatures_
  */
 static __always_inline __pure bool use_eager_fpu(void)
 {
-	return static_cpu_has_safe(X86_FEATURE_EAGER_FPU);
+	return true;
 }

Since this function always returns true, we can remove the code that depends on lazy FPU mode. Actually, this has already been done in "x86/fpu: Remove use_eager_fpu()". Ref: https://patchwork.kernel.org/patch/9365883/

static void __init fpu__init_parse_early_param(void)
{
-	if (cmdline_find_option_bool(boot_command_line, "eagerfpu=off")) {
-		eagerfpu = DISABLE;
-		fpu__clear_eager_fpu_features();
-	}

Since this patch removes the kernel boot parameter "eagerfpu", maybe we should also remove it from the Documentation. This has also been done by commit "x86/fpu: Finish excising 'eagerfpu'". Ref: https://patchwork.kernel.org/patch/9380673/
I will try backporting those patches unless anyone has any objections.
This does seem like a good idea; there is quite a bit of dead code left, and it may be hard to backport any further bug fixes in this area without that removal.
Ben.
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Lutomirski luto@kernel.org
commit 5ed73f40735c68d8a656b46d09b1885d3b8740ae upstream.
In eager fpu mode, having deactivated FPU without immediately reloading some other context is illegal. Therefore, to recover from FNSAVE, we can't just deactivate the state -- we need to reload it if we're not actively context switching.
We had this wrong in fpu__save() and fpu__copy(). Fix both. __kernel_fpu_begin() was fine -- add a comment.
This fixes a warning triggerable with nofxsr eagerfpu=on.
Signed-off-by: Andy Lutomirski luto@kernel.org
Cc: Andy Lutomirski luto@amacapital.net
Cc: Borislav Petkov bp@alien8.de
Cc: Dave Hansen dave.hansen@linux.intel.com
Cc: Fenghua Yu fenghua.yu@intel.com
Cc: H. Peter Anvin hpa@zytor.com
Cc: Linus Torvalds torvalds@linux-foundation.org
Cc: Oleg Nesterov oleg@redhat.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Quentin Casasnovas quentin.casasnovas@oracle.com
Cc: Rik van Riel riel@redhat.com
Cc: Sai Praneeth Prakhya sai.praneeth.prakhya@intel.com
Cc: Thomas Gleixner tglx@linutronix.de
Cc: yu-cheng yu yu-cheng.yu@intel.com
Link: http://lkml.kernel.org/r/60662444e13c76f06e23c15c5dcdba31b4ac3d67.1453675014...
Signed-off-by: Ingo Molnar mingo@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 arch/x86/kernel/fpu/core.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -114,6 +114,10 @@ void __kernel_fpu_begin(void)
 	kernel_fpu_disable();

 	if (fpu->fpregs_active) {
+		/*
+		 * Ignore return value -- we don't care if reg state
+		 * is clobbered.
+		 */
 		copy_fpregs_to_fpstate(fpu);
 	} else {
 		this_cpu_write(fpu_fpregs_owner_ctx, NULL);
@@ -189,8 +193,12 @@ void fpu__save(struct fpu *fpu)

 	preempt_disable();
 	if (fpu->fpregs_active) {
-		if (!copy_fpregs_to_fpstate(fpu))
-			fpregs_deactivate(fpu);
+		if (!copy_fpregs_to_fpstate(fpu)) {
+			if (use_eager_fpu())
+				copy_kernel_to_fpregs(&fpu->state);
+			else
+				fpregs_deactivate(fpu);
+		}
 	}
 	preempt_enable();
 }
@@ -259,7 +267,11 @@ static void fpu_copy(struct fpu *dst_fpu
 	preempt_disable();
 	if (!copy_fpregs_to_fpstate(dst_fpu)) {
 		memcpy(&src_fpu->state, &dst_fpu->state, xstate_size);
-		fpregs_deactivate(src_fpu);
+
+		if (use_eager_fpu())
+			copy_kernel_to_fpregs(&src_fpu->state);
+		else
+			fpregs_deactivate(src_fpu);
 	}
 	preempt_enable();
 }
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Lutomirski luto@kernel.org
commit 4ecd16ec7059390b430af34bd8bc3ca2b5dcef9a upstream.
Systems without an FPU are generally old and therefore use lazy FPU switching. Unsurprisingly, math emulation in eager FPU mode is a bit buggy. Fix it.
There were two bugs involving kernel code trying to use the FPU registers in eager mode even if they didn't exist, and one BUG_ON() that was incorrect.
Signed-off-by: Andy Lutomirski luto@kernel.org
Cc: Andy Lutomirski luto@amacapital.net
Cc: Borislav Petkov bp@alien8.de
Cc: Dave Hansen dave.hansen@linux.intel.com
Cc: Fenghua Yu fenghua.yu@intel.com
Cc: H. Peter Anvin hpa@zytor.com
Cc: Linus Torvalds torvalds@linux-foundation.org
Cc: Oleg Nesterov oleg@redhat.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Quentin Casasnovas quentin.casasnovas@oracle.com
Cc: Rik van Riel riel@redhat.com
Cc: Sai Praneeth Prakhya sai.praneeth.prakhya@intel.com
Cc: Thomas Gleixner tglx@linutronix.de
Cc: yu-cheng yu yu-cheng.yu@intel.com
Link: http://lkml.kernel.org/r/b4b8d112436bd6fab866e1b4011131507e8d7fbe.1453675014...
Signed-off-by: Ingo Molnar mingo@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 arch/x86/include/asm/fpu/internal.h | 3 ++-
 arch/x86/kernel/fpu/core.c          | 2 +-
 arch/x86/kernel/traps.c             | 1 -
 3 files changed, 3 insertions(+), 3 deletions(-)

--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -596,7 +596,8 @@ switch_fpu_prepare(struct fpu *old_fpu,
 	 * If the task has used the math, pre-load the FPU on xsave processors
 	 * or if the past 5 consecutive context-switches used math.
 	 */
-	fpu.preload = new_fpu->fpstate_active &&
+	fpu.preload = static_cpu_has(X86_FEATURE_FPU) &&
+		      new_fpu->fpstate_active &&
 		      (use_eager_fpu() || new_fpu->counter > 5);

 	if (old_fpu->fpregs_active) {
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -437,7 +437,7 @@ void fpu__clear(struct fpu *fpu)
 {
 	WARN_ON_FPU(fpu != &current->thread.fpu); /* Almost certainly an anomaly */

-	if (!use_eager_fpu()) {
+	if (!use_eager_fpu() || !static_cpu_has(X86_FEATURE_FPU)) {
 		/* FPU state will be reallocated lazily at the first use. */
 		fpu__drop(fpu);
 	} else {
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -751,7 +751,6 @@ dotraplinkage void
 do_device_not_available(struct pt_regs *regs, long error_code)
 {
 	RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
-	BUG_ON(use_eager_fpu());

 #ifdef CONFIG_MATH_EMULATION
 	if (read_cr0() & X86_CR0_EM) {
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kevin Easton kevin@guarana.org
commit 4b66af2d6356a00e94bcdea3e7fea324e8b5c6f4 upstream.
Key extensions (struct sadb_key) include a user-specified number of key bits. The kernel uses that number to determine how much key data to copy out of the message in pfkey_msg2xfrm_state().
The length of the sadb_key message must be verified to be long enough, even in the case of SADB_X_AALG_NULL. Furthermore, the sadb_key_len value must be long enough to include both the key data and the struct sadb_key itself.
Introduce a helper function verify_key_len(), and call it from parse_exthdrs() where other exthdr types are similarly checked for correctness.
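For concreteness: lengths in a sadb_key extension are counted in 8-byte (uint64_t) units, and the struct sadb_key header itself is 8 bytes per RFC 2367, so for example a 128-bit key needs sadb_key_len >= 3. A standalone userspace sketch of the same arithmetic (illustrative only, not kernel code; the 128-bit key is a hypothetical input):

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int key_bits  = 128;	/* hypothetical 128-bit auth key */
	unsigned int key_bytes = DIV_ROUND_UP(key_bits, 8);	/* 16 */
	unsigned int hdr_bytes = 8;	/* sizeof(struct sadb_key), RFC 2367 */
	unsigned int min_len   = DIV_ROUND_UP(hdr_bytes + key_bytes, 8); /* 3 */

	/* verify_key_len() now rejects any message with a smaller length */
	printf("minimum sadb_key_len for %u key bits: %u (8-byte units)\n",
	       key_bits, min_len);
	return 0;
}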
Signed-off-by: Kevin Easton kevin@guarana.org
Reported-by: syzbot+5022a34ca5a3d49b84223653fab632dfb7b4cf37@syzkaller.appspotmail.com
Signed-off-by: Steffen Klassert steffen.klassert@secunet.com
Cc: Zubin Mithra zsm@chromium.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 net/key/af_key.c | 45 +++++++++++++++++++++++++++++++++++----------
 1 file changed, 35 insertions(+), 10 deletions(-)

--- a/net/key/af_key.c
+++ b/net/key/af_key.c
@@ -437,6 +437,24 @@ static int verify_address_len(const void
 	return 0;
 }

+static inline int sadb_key_len(const struct sadb_key *key)
+{
+	int key_bytes = DIV_ROUND_UP(key->sadb_key_bits, 8);
+
+	return DIV_ROUND_UP(sizeof(struct sadb_key) + key_bytes,
+			    sizeof(uint64_t));
+}
+
+static int verify_key_len(const void *p)
+{
+	const struct sadb_key *key = p;
+
+	if (sadb_key_len(key) > key->sadb_key_len)
+		return -EINVAL;
+
+	return 0;
+}
+
 static inline int pfkey_sec_ctx_len(const struct sadb_x_sec_ctx *sec_ctx)
 {
 	return DIV_ROUND_UP(sizeof(struct sadb_x_sec_ctx) +
@@ -533,16 +551,25 @@ static int parse_exthdrs(struct sk_buff
 			return -EINVAL;
 		if (ext_hdrs[ext_type-1] != NULL)
 			return -EINVAL;
-		if (ext_type == SADB_EXT_ADDRESS_SRC ||
-		    ext_type == SADB_EXT_ADDRESS_DST ||
-		    ext_type == SADB_EXT_ADDRESS_PROXY ||
-		    ext_type == SADB_X_EXT_NAT_T_OA) {
+		switch (ext_type) {
+		case SADB_EXT_ADDRESS_SRC:
+		case SADB_EXT_ADDRESS_DST:
+		case SADB_EXT_ADDRESS_PROXY:
+		case SADB_X_EXT_NAT_T_OA:
 			if (verify_address_len(p))
 				return -EINVAL;
-		}
-		if (ext_type == SADB_X_EXT_SEC_CTX) {
+			break;
+		case SADB_X_EXT_SEC_CTX:
 			if (verify_sec_ctx_len(p))
 				return -EINVAL;
+			break;
+		case SADB_EXT_KEY_AUTH:
+		case SADB_EXT_KEY_ENCRYPT:
+			if (verify_key_len(p))
+				return -EINVAL;
+			break;
+		default:
+			break;
 		}
 		ext_hdrs[ext_type-1] = (void *) p;
 	}
@@ -1111,14 +1138,12 @@ static struct xfrm_state * pfkey_msg2xfr
 	key = ext_hdrs[SADB_EXT_KEY_AUTH - 1];
 	if (key != NULL &&
 	    sa->sadb_sa_auth != SADB_X_AALG_NULL &&
-	    ((key->sadb_key_bits+7) / 8 == 0 ||
-	     (key->sadb_key_bits+7) / 8 > key->sadb_key_len * sizeof(uint64_t)))
+	    key->sadb_key_bits == 0)
 		return ERR_PTR(-EINVAL);
 	key = ext_hdrs[SADB_EXT_KEY_ENCRYPT-1];
 	if (key != NULL &&
 	    sa->sadb_sa_encrypt != SADB_EALG_NULL &&
-	    ((key->sadb_key_bits+7) / 8 == 0 ||
-	     (key->sadb_key_bits+7) / 8 > key->sadb_key_len * sizeof(uint64_t)))
+	    key->sadb_key_bits == 0)
 		return ERR_PTR(-EINVAL);
x = xfrm_state_alloc(net);
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Lutomirski luto@kernel.org
commit 02f39b2379fb81557ae864ec8f85421c0250c954 upstream.
The crypto code was checking both use_eager_fpu() and defined(X86_FEATURE_EAGER_FPU). The latter was nonsensical, so remove it. This will avoid breakage when we remove X86_FEATURE_EAGER_FPU.
Signed-off-by: Andy Lutomirski luto@kernel.org
Signed-off-by: Rik van Riel riel@redhat.com
Cc: Borislav Petkov bp@alien8.de
Cc: Brian Gerst brgerst@gmail.com
Cc: Dave Hansen dave.hansen@linux.intel.com
Cc: Denys Vlasenko dvlasenk@redhat.com
Cc: Fenghua Yu fenghua.yu@intel.com
Cc: H. Peter Anvin hpa@zytor.com
Cc: Josh Poimboeuf jpoimboe@redhat.com
Cc: Linus Torvalds torvalds@linux-foundation.org
Cc: Oleg Nesterov oleg@redhat.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Quentin Casasnovas quentin.casasnovas@oracle.com
Cc: Thomas Gleixner tglx@linutronix.de
Cc: pbonzini@redhat.com
Link: http://lkml.kernel.org/r/1475627678-20788-2-git-send-email-riel@redhat.com
Signed-off-by: Ingo Molnar mingo@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 arch/x86/crypto/crc32c-intel_glue.c | 5 -----
 1 file changed, 5 deletions(-)

--- a/arch/x86/crypto/crc32c-intel_glue.c
+++ b/arch/x86/crypto/crc32c-intel_glue.c
@@ -58,16 +58,11 @@ asmlinkage unsigned int crc_pcl(const u8 *buffer, int len,
 				unsigned int crc_init);
 static int crc32c_pcl_breakeven = CRC32C_PCL_BREAKEVEN_EAGERFPU;
-#if defined(X86_FEATURE_EAGER_FPU)
 #define set_pcl_breakeven_point()					\
 do {									\
 	if (!use_eager_fpu())						\
 		crc32c_pcl_breakeven = CRC32C_PCL_BREAKEVEN_NOEAGERFPU;	\
 } while (0)
-#else
-#define set_pcl_breakeven_point()					\
-	(crc32c_pcl_breakeven = CRC32C_PCL_BREAKEVEN_NOEAGERFPU)
-#endif
 #endif /* CONFIG_X86_64 */

 static u32 crc32c_intel_le_hw_byte(u32 crc, unsigned char const *data,
 				   size_t length)
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Linus Walleij linus.walleij@linaro.org
commit 7d18f0a14aa6a0d6bad39111c1fb655f07f71d59 upstream.
Sometimes a GPIO is fetched with NULL as parent device, and that is just fine. So under these circumstances, avoid using dev_name() to provide a name for the GPIO line.
Signed-off-by: Linus Walleij linus.walleij@linaro.org
Cc: Daniel Rosenberg drosen@google.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 drivers/gpio/gpiolib.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

--- a/drivers/gpio/gpiolib.c
+++ b/drivers/gpio/gpiolib.c
@@ -2117,6 +2117,8 @@ struct gpio_desc *__must_check gpiod_get
 	struct gpio_desc *desc = NULL;
 	int status;
 	enum gpio_lookup_flags lookupflags = 0;
+	/* Maybe we have a device name, maybe not */
+	const char *devname = dev ? dev_name(dev) : "?";

 	dev_dbg(dev, "GPIO lookup for consumer %s\n", con_id);

@@ -2145,8 +2147,11 @@ struct gpio_desc *__must_check gpiod_get
 		return desc;
 	}

-	/* If a connection label was passed use that, else use the device name as label */
-	status = gpiod_request(desc, con_id ? con_id : dev_name(dev));
+	/*
+	 * If a connection label was passed use that, else attempt to use
+	 * the device name as label
+	 */
+	status = gpiod_request(desc, con_id ? con_id : devname);
 	if (status < 0)
 		return ERR_PTR(status);
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Linus Torvalds torvalds@linux-foundation.org
commit 0cc3b0ec23ce4c69e1e890ed2b8d2fa932b14aad upstream.
We have a MAX_LFS_FILESIZE macro that is meant to be filled in by filesystems (and other IO targets) that know they are 64-bit clean and don't have any 32-bit limits in their IO path.
It turns out that our 32-bit value for that limit was bogus. On 32-bit, the VM layer is limited by the page cache to only 32-bit index values, but our logic for that was confusing and actually wrong. We used to define that value to
(((loff_t)PAGE_SIZE << (BITS_PER_LONG-1))-1)
which is actually odd in several ways: it limits the index to 31 bits, and then it limits files so that they can't have data in that last byte of a page that has the highest 31-bit index (ie page index 0x7fffffff).
Neither of those limitations makes sense. The index is actually the full 32-bit unsigned value, and we can use that whole page. So the maximum size of the file would logically be "PAGE_SIZE << BITS_PER_LONG".
However, we do want to avoid the maximum index, because we have code that iterates over the page indexes, and we don't want that code to overflow. So the maximum size of a file on a 32-bit host should actually be one page less than the full 32-bit index.
So the actual limit is ULONG_MAX << PAGE_SHIFT. That means that we will not actually be using the page of that last index (ULONG_MAX), but we can grow a file up to that limit.
The wrong value of MAX_LFS_FILESIZE actually caused problems for Doug Nazar, who was still using a 32-bit host, but with a 9.7TB 2 x RAID5 volume. It turns out that our old MAX_LFS_FILESIZE was 8TiB (well, one byte less), but the actual true VM limit is one page less than 16TiB.
This was invisible until commit c2a9737f45e2 ("vfs,mm: fix a dead loop in truncate_inode_pages_range()"), which started applying that MAX_LFS_FILESIZE limit to block devices too.
NOTE! On 64-bit, the page index isn't a limiter at all, and the limit is actually just the offset type itself (loff_t), which is signed. But for clarity, on 64-bit, just use the maximum signed value, and don't make people have to count the number of 'f' characters in the hex constant.
So just use LLONG_MAX for the 64-bit case. That was what the value had been before too, just written out as a hex constant.
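For concreteness, with 4KiB pages (PAGE_SHIFT == 12) on a 32-bit host, the two 32-bit definitions work out as follows (a standalone sketch of the arithmetic, not kernel code):

#include <stdio.h>

int main(void)
{
	unsigned long long page_size = 1ULL << 12;	/* 4KiB pages */
	unsigned long long ulong_max = 0xffffffffULL;	/* 32-bit unsigned long */

	/* old macro: (PAGE_SIZE << (BITS_PER_LONG-1)) - 1 */
	unsigned long long old_limit = (page_size << 31) - 1;
	/* new macro: ULONG_MAX << PAGE_SHIFT */
	unsigned long long new_limit = ulong_max << 12;

	printf("old: %llu bytes (8TiB - 1)\n", old_limit);
	printf("new: %llu bytes (16TiB - 4KiB)\n", new_limit);
	return 0;
}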
Fixes: c2a9737f45e2 ("vfs,mm: fix a dead loop in truncate_inode_pages_range()")
Reported-and-tested-by: Doug Nazar nazard@nazar.ca
Cc: Andreas Dilger adilger@dilger.ca
Cc: Mark Fasheh mfasheh@versity.com
Cc: Joel Becker jlbec@evilplan.org
Cc: Dave Kleikamp shaggy@kernel.org
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds torvalds@linux-foundation.org
Cc: Rafael Tinoco rafael.tinoco@linaro.org
[backported to 4.4.y due to requests of failed LTP tests - gregkh]
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 include/linux/fs.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -926,9 +926,9 @@ static inline struct file *get_file(stru
 /* Page cache limit. The filesystems should put that into their s_maxbytes
    limits, otherwise bad things can happen in VM. */
 #if BITS_PER_LONG==32
-#define MAX_LFS_FILESIZE	(((loff_t)PAGE_CACHE_SIZE << (BITS_PER_LONG-1))-1)
+#define MAX_LFS_FILESIZE	((loff_t)ULONG_MAX << PAGE_SHIFT)
 #elif BITS_PER_LONG==64
-#define MAX_LFS_FILESIZE	((loff_t)0x7fffffffffffffffLL)
+#define MAX_LFS_FILESIZE	((loff_t)LLONG_MAX)
 #endif
#define FL_POSIX 1
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paolo Bonzini pbonzini@redhat.com
commit 79367a65743975e5cac8d24d08eccc7fdae832b0 upstream.
Wrap the common invocation of ctxt->ops->read_std and ctxt->ops->write_std, so as to have a smaller patch when the functions grow another argument.
Fixes: 129a72a0d3c8 ("KVM: x86: Introduce segmented_write_std", 2017-01-12)
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini pbonzini@redhat.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 arch/x86/kvm/emulate.c | 64 ++++++++++++++++++++++++-------------------------
 1 file changed, 32 insertions(+), 32 deletions(-)

--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -790,6 +790,19 @@ static inline int jmp_rel(struct x86_emu
 	return assign_eip_near(ctxt, ctxt->_eip + rel);
 }

+static int linear_read_system(struct x86_emulate_ctxt *ctxt, ulong linear,
+			      void *data, unsigned size)
+{
+	return ctxt->ops->read_std(ctxt, linear, data, size, &ctxt->exception);
+}
+
+static int linear_write_system(struct x86_emulate_ctxt *ctxt,
+			       ulong linear, void *data,
+			       unsigned int size)
+{
+	return ctxt->ops->write_std(ctxt, linear, data, size, &ctxt->exception);
+}
+
 static int segmented_read_std(struct x86_emulate_ctxt *ctxt,
 			      struct segmented_address addr,
 			      void *data,
@@ -1488,8 +1501,7 @@ static int read_interrupt_descriptor(str
 		return emulate_gp(ctxt, index << 3 | 0x2);

 	addr = dt.address + index * 8;
-	return ctxt->ops->read_std(ctxt, addr, desc, sizeof *desc,
-				   &ctxt->exception);
+	return linear_read_system(ctxt, addr, desc, sizeof *desc);
 }

 static void get_descriptor_table_ptr(struct x86_emulate_ctxt *ctxt,
@@ -1552,8 +1564,7 @@ static int read_segment_descriptor(struc
 	if (rc != X86EMUL_CONTINUE)
 		return rc;

-	return ctxt->ops->read_std(ctxt, *desc_addr_p, desc, sizeof(*desc),
-				   &ctxt->exception);
+	return linear_read_system(ctxt, *desc_addr_p, desc, sizeof(*desc));
 }

 /* allowed just for 8 bytes segments */
@@ -1567,8 +1578,7 @@ static int write_segment_descriptor(stru
 	if (rc != X86EMUL_CONTINUE)
 		return rc;

-	return ctxt->ops->write_std(ctxt, addr, desc, sizeof *desc,
-				    &ctxt->exception);
+	return linear_write_system(ctxt, addr, desc, sizeof *desc);
 }

 static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
@@ -1729,8 +1739,7 @@ static int __load_segment_descriptor(str
 			return ret;
 		}
 	} else if (ctxt->mode == X86EMUL_MODE_PROT64) {
-		ret = ctxt->ops->read_std(ctxt, desc_addr+8, &base3,
-				sizeof(base3), &ctxt->exception);
+		ret = linear_read_system(ctxt, desc_addr+8, &base3, sizeof(base3));
 		if (ret != X86EMUL_CONTINUE)
 			return ret;
 		if (is_noncanonical_address(get_desc_base(&seg_desc) |
@@ -2043,11 +2052,11 @@ static int __emulate_int_real(struct x86
 	eip_addr = dt.address + (irq << 2);
 	cs_addr = dt.address + (irq << 2) + 2;

-	rc = ops->read_std(ctxt, cs_addr, &cs, 2, &ctxt->exception);
+	rc = linear_read_system(ctxt, cs_addr, &cs, 2);
 	if (rc != X86EMUL_CONTINUE)
 		return rc;

-	rc = ops->read_std(ctxt, eip_addr, &eip, 2, &ctxt->exception);
+	rc = linear_read_system(ctxt, eip_addr, &eip, 2);
 	if (rc != X86EMUL_CONTINUE)
 		return rc;

@@ -3025,35 +3034,30 @@ static int task_switch_16(struct x86_emu
 			  u16 tss_selector, u16 old_tss_sel,
 			  ulong old_tss_base, struct desc_struct *new_desc)
 {
-	const struct x86_emulate_ops *ops = ctxt->ops;
 	struct tss_segment_16 tss_seg;
 	int ret;
 	u32 new_tss_base = get_desc_base(new_desc);

-	ret = ops->read_std(ctxt, old_tss_base, &tss_seg, sizeof tss_seg,
-			    &ctxt->exception);
+	ret = linear_read_system(ctxt, old_tss_base, &tss_seg, sizeof tss_seg);
 	if (ret != X86EMUL_CONTINUE)
 		return ret;

 	save_state_to_tss16(ctxt, &tss_seg);

-	ret = ops->write_std(ctxt, old_tss_base, &tss_seg, sizeof tss_seg,
-			     &ctxt->exception);
+	ret = linear_write_system(ctxt, old_tss_base, &tss_seg, sizeof tss_seg);
 	if (ret != X86EMUL_CONTINUE)
 		return ret;

-	ret = ops->read_std(ctxt, new_tss_base, &tss_seg, sizeof tss_seg,
-			    &ctxt->exception);
+	ret = linear_read_system(ctxt, new_tss_base, &tss_seg, sizeof tss_seg);
 	if (ret != X86EMUL_CONTINUE)
 		return ret;

 	if (old_tss_sel != 0xffff) {
 		tss_seg.prev_task_link = old_tss_sel;

-		ret = ops->write_std(ctxt, new_tss_base,
-				     &tss_seg.prev_task_link,
-				     sizeof tss_seg.prev_task_link,
-				     &ctxt->exception);
+		ret = linear_write_system(ctxt, new_tss_base,
+					  &tss_seg.prev_task_link,
+					  sizeof tss_seg.prev_task_link);
 		if (ret != X86EMUL_CONTINUE)
 			return ret;
 	}
@@ -3169,38 +3173,34 @@ static int task_switch_32(struct x86_emu
 			  u16 tss_selector, u16 old_tss_sel,
 			  ulong old_tss_base, struct desc_struct *new_desc)
 {
-	const struct x86_emulate_ops *ops = ctxt->ops;
 	struct tss_segment_32 tss_seg;
 	int ret;
 	u32 new_tss_base = get_desc_base(new_desc);
 	u32 eip_offset = offsetof(struct tss_segment_32, eip);
 	u32 ldt_sel_offset = offsetof(struct tss_segment_32, ldt_selector);

-	ret = ops->read_std(ctxt, old_tss_base, &tss_seg, sizeof tss_seg,
-			    &ctxt->exception);
+	ret = linear_read_system(ctxt, old_tss_base, &tss_seg, sizeof tss_seg);
 	if (ret != X86EMUL_CONTINUE)
 		return ret;

 	save_state_to_tss32(ctxt, &tss_seg);

 	/* Only GP registers and segment selectors are saved */
-	ret = ops->write_std(ctxt, old_tss_base + eip_offset, &tss_seg.eip,
-			     ldt_sel_offset - eip_offset, &ctxt->exception);
+	ret = linear_write_system(ctxt, old_tss_base + eip_offset, &tss_seg.eip,
+				  ldt_sel_offset - eip_offset);
 	if (ret != X86EMUL_CONTINUE)
 		return ret;

-	ret = ops->read_std(ctxt, new_tss_base, &tss_seg, sizeof tss_seg,
-			    &ctxt->exception);
+	ret = linear_read_system(ctxt, new_tss_base, &tss_seg, sizeof tss_seg);
 	if (ret != X86EMUL_CONTINUE)
 		return ret;

 	if (old_tss_sel != 0xffff) {
 		tss_seg.prev_task_link = old_tss_sel;

-		ret = ops->write_std(ctxt, new_tss_base,
-				     &tss_seg.prev_task_link,
-				     sizeof tss_seg.prev_task_link,
-				     &ctxt->exception);
+		ret = linear_write_system(ctxt, new_tss_base,
+					  &tss_seg.prev_task_link,
+					  sizeof tss_seg.prev_task_link);
 		if (ret != X86EMUL_CONTINUE)
 			return ret;
 	}
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paolo Bonzini pbonzini@redhat.com
commit ce14e868a54edeb2e30cb7a7b104a2fc4b9d76ca upstream.
In the next patch the emulator's .read_std and .write_std callbacks will grow another argument, which is not needed in kvm_read_guest_virt and kvm_write_guest_virt_system's callers. Since we have to make separate functions, let's give the currently existing names a nicer interface, too.
Fixes: 129a72a0d3c8 ("KVM: x86: Introduce segmented_write_std", 2017-01-12)
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini pbonzini@redhat.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 arch/x86/kvm/vmx.c | 23 ++++++++++-------------
 arch/x86/kvm/x86.c | 39 ++++++++++++++++++++++++++-------------
 arch/x86/kvm/x86.h |  4 ++--
 3 files changed, 38 insertions(+), 28 deletions(-)

--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -6692,8 +6692,7 @@ static int nested_vmx_check_vmptr(struct
 			vmcs_read32(VMX_INSTRUCTION_INFO), false, &gva))
 		return 1;

-	if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &vmptr,
-				sizeof(vmptr), &e)) {
+	if (kvm_read_guest_virt(vcpu, gva, &vmptr, sizeof(vmptr), &e)) {
 		kvm_inject_page_fault(vcpu, &e);
 		return 1;
 	}
@@ -7211,8 +7210,8 @@ static int handle_vmread(struct kvm_vcpu
 				vmx_instruction_info, true, &gva))
 			return 1;
 		/* _system ok, as nested_vmx_check_permission verified cpl=0 */
-		kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, gva,
-			     &field_value, (is_long_mode(vcpu) ? 8 : 4), NULL);
+		kvm_write_guest_virt_system(vcpu, gva, &field_value,
+					    (is_long_mode(vcpu) ? 8 : 4), NULL);
 	}

 	nested_vmx_succeed(vcpu);
@@ -7247,8 +7246,8 @@ static int handle_vmwrite(struct kvm_vcp
 	if (get_vmx_mem_address(vcpu, exit_qualification,
 			vmx_instruction_info, false, &gva))
 		return 1;
-	if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva,
-		   &field_value, (is_64_bit_mode(vcpu) ? 8 : 4), &e)) {
+	if (kvm_read_guest_virt(vcpu, gva, &field_value,
+				(is_64_bit_mode(vcpu) ? 8 : 4), &e)) {
 		kvm_inject_page_fault(vcpu, &e);
 		return 1;
 	}
@@ -7338,9 +7337,9 @@ static int handle_vmptrst(struct kvm_vcp
 			vmx_instruction_info, true, &vmcs_gva))
 		return 1;
 	/* ok to use *_system, as nested_vmx_check_permission verified cpl=0 */
-	if (kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, vmcs_gva,
-				 (void *)&to_vmx(vcpu)->nested.current_vmptr,
-				 sizeof(u64), &e)) {
+	if (kvm_write_guest_virt_system(vcpu, vmcs_gva,
+					(void *)&to_vmx(vcpu)->nested.current_vmptr,
+					sizeof(u64), &e)) {
 		kvm_inject_page_fault(vcpu, &e);
 		return 1;
 	}
@@ -7394,8 +7393,7 @@ static int handle_invept(struct kvm_vcpu
 	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
 			vmx_instruction_info, false, &gva))
 		return 1;
-	if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &operand,
-				sizeof(operand), &e)) {
+	if (kvm_read_guest_virt(vcpu, gva, &operand, sizeof(operand), &e)) {
 		kvm_inject_page_fault(vcpu, &e);
 		return 1;
 	}
@@ -7454,8 +7452,7 @@ static int handle_invvpid(struct kvm_vcp
 	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
 			vmx_instruction_info, false, &gva))
 		return 1;
-	if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &vpid,
-				sizeof(u32), &e)) {
+	if (kvm_read_guest_virt(vcpu, gva, &vpid, sizeof(u32), &e)) {
 		kvm_inject_page_fault(vcpu, &e);
 		return 1;
 	}
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4245,11 +4245,10 @@ static int kvm_fetch_guest_virt(struct x
 	return X86EMUL_CONTINUE;
 }

-int kvm_read_guest_virt(struct x86_emulate_ctxt *ctxt,
+int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
 			       gva_t addr, void *val, unsigned int bytes,
 			       struct x86_exception *exception)
 {
-	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;

 	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access,
@@ -4257,9 +4256,9 @@ int kvm_read_guest_virt(struct x86_emula
 }
 EXPORT_SYMBOL_GPL(kvm_read_guest_virt);

-static int kvm_read_guest_virt_system(struct x86_emulate_ctxt *ctxt,
-				      gva_t addr, void *val, unsigned int bytes,
-				      struct x86_exception *exception)
+static int emulator_read_std(struct x86_emulate_ctxt *ctxt,
+			     gva_t addr, void *val, unsigned int bytes,
+			     struct x86_exception *exception)
 {
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, 0, exception);
@@ -4274,18 +4273,16 @@ static int kvm_read_guest_phys_system(st
 	return r < 0 ? X86EMUL_IO_NEEDED : X86EMUL_CONTINUE;
 }

-int kvm_write_guest_virt_system(struct x86_emulate_ctxt *ctxt,
-				gva_t addr, void *val,
-				unsigned int bytes,
-				struct x86_exception *exception)
+static int kvm_write_guest_virt_helper(gva_t addr, void *val, unsigned int bytes,
+				       struct kvm_vcpu *vcpu, u32 access,
+				       struct x86_exception *exception)
 {
-	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	void *data = val;
 	int r = X86EMUL_CONTINUE;

 	while (bytes) {
 		gpa_t gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, addr,
-							    PFERR_WRITE_MASK,
+							    access,
 							    exception);
 		unsigned offset = addr & (PAGE_SIZE-1);
 		unsigned towrite = min(bytes, (unsigned)PAGE_SIZE - offset);
@@ -4306,6 +4303,22 @@ out:
 	return r;
 }
+
+static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *val,
+			      unsigned int bytes, struct x86_exception *exception)
+{
+	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+
+	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
+					   PFERR_WRITE_MASK, exception);
+}
+
+int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu, gva_t addr, void *val,
+				unsigned int bytes, struct x86_exception *exception)
+{
+	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
+					   PFERR_WRITE_MASK, exception);
+}
 EXPORT_SYMBOL_GPL(kvm_write_guest_virt_system);

 static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
@@ -5025,8 +5038,8 @@ static void emulator_set_hflags(struct x
 static const struct x86_emulate_ops emulate_ops = {
 	.read_gpr            = emulator_read_gpr,
 	.write_gpr           = emulator_write_gpr,
-	.read_std            = kvm_read_guest_virt_system,
-	.write_std           = kvm_write_guest_virt_system,
+	.read_std            = emulator_read_std,
+	.write_std           = emulator_write_std,
 	.read_phys           = kvm_read_guest_phys_system,
 	.fetch               = kvm_fetch_guest_virt,
 	.read_emulated       = emulator_read_emulated,
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -164,11 +164,11 @@ int kvm_inject_realmode_interrupt(struct

 void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr);

-int kvm_read_guest_virt(struct x86_emulate_ctxt *ctxt,
+int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
 	gva_t addr, void *val, unsigned int bytes,
 	struct x86_exception *exception);

-int kvm_write_guest_virt_system(struct x86_emulate_ctxt *ctxt,
+int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu,
 	gva_t addr, void *val, unsigned int bytes,
 	struct x86_exception *exception);
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marek Szyprowski m.szyprowski@samsung.com
commit aa2f80e752c75e593b3820f42c416ed9458fa73e upstream.
The best granularity of residue that the DMA engine can report is in BURST units, so the serial driver must use MAXBURST = 1 and DMA_SLAVE_BUSWIDTH_1_BYTE if it relies on the exact number of bytes transferred by the DMA engine.
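To illustrate the granularity argument (a standalone sketch under the assumption that the engine only accounts for completed bursts; not driver code): the reported residue can be off by up to maxburst * buswidth - 1 bytes, so the old setting of 16 one-byte bursts could hide up to 15 received bytes, while maxburst = 1 is byte-exact.

#include <stdio.h>

/* Worst-case number of bytes unaccounted for when residue is only
 * reported in whole bursts (illustration only).
 */
static unsigned int residue_uncertainty(unsigned int maxburst,
					unsigned int buswidth)
{
	return maxburst * buswidth - 1;
}

int main(void)
{
	printf("maxburst=16: up to %u bytes unaccounted\n",
	       residue_uncertainty(16, 1));	/* old setting: 15 */
	printf("maxburst=1:  up to %u bytes unaccounted\n",
	       residue_uncertainty(1, 1));	/* new setting: 0 */
	return 0;
}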
Fixes: 62c37eedb74c ("serial: samsung: add dma reqest/release functions")
Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com
Acked-by: Krzysztof Kozlowski krzk@kernel.org
Cc: stable stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 drivers/tty/serial/samsung.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

--- a/drivers/tty/serial/samsung.c
+++ b/drivers/tty/serial/samsung.c
@@ -860,15 +860,12 @@ static int s3c24xx_serial_request_dma(st
 	dma->rx_conf.direction = DMA_DEV_TO_MEM;
 	dma->rx_conf.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
 	dma->rx_conf.src_addr = p->port.mapbase + S3C2410_URXH;
-	dma->rx_conf.src_maxburst = 16;
+	dma->rx_conf.src_maxburst = 1;

 	dma->tx_conf.direction = DMA_MEM_TO_DEV;
 	dma->tx_conf.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
 	dma->tx_conf.dst_addr = p->port.mapbase + S3C2410_UTXH;
-	if (dma_get_cache_alignment() >= 16)
-		dma->tx_conf.dst_maxburst = 16;
-	else
-		dma->tx_conf.dst_maxburst = 1;
+	dma->tx_conf.dst_maxburst = 1;

 	dma_cap_zero(mask);
 	dma_cap_set(DMA_SLAVE, mask);
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gil Kupfer gilkup@gmail.com
commit b23220fe054e92f616b82450fae8cd3ab176cc60 upstream.
The balloon.page field is used for two different purposes, depending on whether batching is on or off. If batching is on, the field points to the page which is used to communicate with the hypervisor. If it is off, balloon.page points to the page that is about to be (un)locked.
Unfortunately, this dual-purpose of the field introduced a bug: when the balloon is popped (e.g., when the machine is reset or the balloon driver is explicitly removed), the balloon driver frees, unconditionally, the page that is held in balloon.page. As a result, if batching is disabled, this leads to double freeing the last page that is sent to the hypervisor.
The following error occurs during rmmod when kernel checkers are on, and the balloon is not empty:
[ 42.307653] ------------[ cut here ]------------
[ 42.307657] Kernel BUG at ffffffffba1e4b28 [verbose debug info unavailable]
[ 42.307720] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
[ 42.312512] Modules linked in: vmw_vsock_vmci_transport vsock ppdev joydev vmw_balloon(-) input_leds serio_raw vmw_vmci parport_pc shpchp parport i2c_piix4 nfit mac_hid autofs4 vmwgfx drm_kms_helper hid_generic syscopyarea sysfillrect usbhid sysimgblt fb_sys_fops hid ttm mptspi scsi_transport_spi ahci mptscsih drm psmouse vmxnet3 libahci mptbase pata_acpi
[ 42.312766] CPU: 10 PID: 1527 Comm: rmmod Not tainted 4.12.0+ #5
[ 42.312803] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 09/30/2016
[ 42.313042] task: ffff9bf9680f8000 task.stack: ffffbfefc1638000
[ 42.313290] RIP: 0010:__free_pages+0x38/0x40
[ 42.313510] RSP: 0018:ffffbfefc163be98 EFLAGS: 00010246
[ 42.313731] RAX: 000000000000003e RBX: ffffffffc02b9720 RCX: 0000000000000006
[ 42.313972] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9bf97e08e0a0
[ 42.314201] RBP: ffffbfefc163be98 R08: 0000000000000000 R09: 0000000000000000
[ 42.314435] R10: 0000000000000000 R11: 0000000000000000 R12: ffffffffc02b97e4
[ 42.314505] R13: ffffffffc02b9748 R14: ffffffffc02b9728 R15: 0000000000000200
[ 42.314550] FS: 00007f3af5fec700(0000) GS:ffff9bf97e080000(0000) knlGS:0000000000000000
[ 42.314599] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 42.314635] CR2: 00007f44f6f4ab24 CR3: 00000003a7d12000 CR4: 00000000000006e0
[ 42.314864] Call Trace:
[ 42.315774]  vmballoon_pop+0x102/0x130 [vmw_balloon]
[ 42.315816]  vmballoon_exit+0x42/0xd64 [vmw_balloon]
[ 42.315853]  SyS_delete_module+0x1e2/0x250
[ 42.315891]  entry_SYSCALL_64_fastpath+0x23/0xc2
[ 42.315924] RIP: 0033:0x7f3af5b0e8e7
[ 42.315949] RSP: 002b:00007fffe6ce0148 EFLAGS: 00000206 ORIG_RAX: 00000000000000b0
[ 42.315996] RAX: ffffffffffffffda RBX: 000055be676401e0 RCX: 00007f3af5b0e8e7
[ 42.316951] RDX: 000000000000000a RSI: 0000000000000800 RDI: 000055be67640248
[ 42.317887] RBP: 0000000000000003 R08: 0000000000000000 R09: 1999999999999999
[ 42.318845] R10: 0000000000000883 R11: 0000000000000206 R12: 00007fffe6cdf130
[ 42.319755] R13: 0000000000000000 R14: 0000000000000000 R15: 000055be676401e0
[ 42.320606] Code: c0 74 1c f0 ff 4f 1c 74 02 5d c3 85 f6 74 07 e8 0f d8 ff ff 5d c3 31 f6 e8 c6 fb ff ff 5d c3 48 c7 c6 c8 0f c5 ba e8 58 be 02 00 <0f> 0b 66 0f 1f 44 00 00 66 66 66 66 90 48 85 ff 75 01 c3 55 48
[ 42.323462] RIP: __free_pages+0x38/0x40 RSP: ffffbfefc163be98
[ 42.325735] ---[ end trace 872e008e33f81508 ]---
To solve the bug, we eliminate the dual purpose of balloon.page.
Fixes: f220a80f0c2e ("VMware balloon: add batching to the vmw_balloon.")
Cc: stable@vger.kernel.org
Reported-by: Oleksandr Natalenko onatalen@redhat.com
Signed-off-by: Gil Kupfer gilkup@gmail.com
Signed-off-by: Nadav Amit namit@vmware.com
Reviewed-by: Xavier Deguillard xdeguillard@vmware.com
Tested-by: Oleksandr Natalenko oleksandr@redhat.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 drivers/misc/vmw_balloon.c | 23 +++++++----------------
 1 file changed, 7 insertions(+), 16 deletions(-)

--- a/drivers/misc/vmw_balloon.c
+++ b/drivers/misc/vmw_balloon.c
@@ -576,15 +576,9 @@ static void vmballoon_pop(struct vmballo
 		}
 	}

-	if (b->batch_page) {
-		vunmap(b->batch_page);
-		b->batch_page = NULL;
-	}
-
-	if (b->page) {
-		__free_page(b->page);
-		b->page = NULL;
-	}
+	/* Clearing the batch_page unconditionally has no adverse effect */
+	free_page((unsigned long)b->batch_page);
+	b->batch_page = NULL;
 }

 /*
@@ -991,16 +985,13 @@ static const struct vmballoon_ops vmball

 static bool vmballoon_init_batching(struct vmballoon *b)
 {
-	b->page = alloc_page(VMW_PAGE_ALLOC_NOSLEEP);
-	if (!b->page)
-		return false;
+	struct page *page;

-	b->batch_page = vmap(&b->page, 1, VM_MAP, PAGE_KERNEL);
-	if (!b->batch_page) {
-		__free_page(b->page);
+	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	if (!page)
 		return false;
-	}

+	b->batch_page = page_address(page);
 	return true;
 }
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paolo Bonzini pbonzini@redhat.com
commit 3c9fa24ca7c9c47605672916491f79e8ccacb9e6 upstream.
The functions that were used in the emulation of fxrstor, fxsave, sgdt and sidt were originally meant for task switching, and as such they did not check privilege levels. This is very bad when the same functions are used in the emulation of unprivileged instructions. This is CVE-2018-10853.
The obvious fix is to add a new argument to ops->read_std and ops->write_std, which decides whether the access is a "system" access or should use the processor's CPL.
Fixes: 129a72a0d3c8 ("KVM: x86: Introduce segmented_write_std", 2017-01-12)
Signed-off-by: Paolo Bonzini pbonzini@redhat.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 arch/x86/include/asm/kvm_emulate.h |  6 ++++--
 arch/x86/kvm/emulate.c             | 12 ++++++------
 arch/x86/kvm/x86.c                 | 18 ++++++++++++++----
 3 files changed, 24 insertions(+), 12 deletions(-)

--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -105,11 +105,12 @@ struct x86_emulate_ops {
 	 *  @addr:  [IN ] Linear address from which to read.
 	 *  @val:   [OUT] Value read from memory, zero-extended to 'u_long'.
 	 *  @bytes: [IN ] Number of bytes to read from memory.
+	 *  @system:[IN ] Whether the access is forced to be at CPL0.
 	 */
 	int (*read_std)(struct x86_emulate_ctxt *ctxt,
 			unsigned long addr, void *val,
 			unsigned int bytes,
-			struct x86_exception *fault);
+			struct x86_exception *fault, bool system);

 	/*
 	 * read_phys: Read bytes of standard (non-emulated/special) memory.
@@ -127,10 +128,11 @@ struct x86_emulate_ops {
 	 *  @addr:  [IN ] Linear address to which to write.
 	 *  @val:   [OUT] Value write to memory, zero-extended to 'u_long'.
 	 *  @bytes: [IN ] Number of bytes to write to memory.
+	 *  @system:[IN ] Whether the access is forced to be at CPL0.
 	 */
 	int (*write_std)(struct x86_emulate_ctxt *ctxt,
 			 unsigned long addr, void *val,
 			 unsigned int bytes,
-			 struct x86_exception *fault);
+			 struct x86_exception *fault, bool system);
 	/*
 	 * fetch: Read bytes of standard (non-emulated/special) memory.
 	 *        Used for instruction fetch.
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -793,14 +793,14 @@ static inline int jmp_rel(struct x86_emu
 static int linear_read_system(struct x86_emulate_ctxt *ctxt, ulong linear,
 			      void *data, unsigned size)
 {
-	return ctxt->ops->read_std(ctxt, linear, data, size, &ctxt->exception);
+	return ctxt->ops->read_std(ctxt, linear, data, size, &ctxt->exception, true);
 }

 static int linear_write_system(struct x86_emulate_ctxt *ctxt,
 			       ulong linear, void *data,
 			       unsigned int size)
 {
-	return ctxt->ops->write_std(ctxt, linear, data, size, &ctxt->exception);
+	return ctxt->ops->write_std(ctxt, linear, data, size, &ctxt->exception, true);
 }

 static int segmented_read_std(struct x86_emulate_ctxt *ctxt,
@@ -814,7 +814,7 @@ static int segmented_read_std(struct x86
 	rc = linearize(ctxt, addr, size, false, &linear);
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
-	return ctxt->ops->read_std(ctxt, linear, data, size, &ctxt->exception);
+	return ctxt->ops->read_std(ctxt, linear, data, size, &ctxt->exception, false);
 }

 static int segmented_write_std(struct x86_emulate_ctxt *ctxt,
@@ -828,7 +828,7 @@ static int segmented_write_std(struct x8
 	rc = linearize(ctxt, addr, size, true, &linear);
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
-	return ctxt->ops->write_std(ctxt, linear, data, size, &ctxt->exception);
+	return ctxt->ops->write_std(ctxt, linear, data, size, &ctxt->exception, false);
 }

 /*
@@ -2900,12 +2900,12 @@ static bool emulator_io_port_access_allo
 #ifdef CONFIG_X86_64
 		base |= ((u64)base3) << 32;
 #endif
-	r = ops->read_std(ctxt, base + 102, &io_bitmap_ptr, 2, NULL);
+	r = ops->read_std(ctxt, base + 102, &io_bitmap_ptr, 2, NULL, true);
 	if (r != X86EMUL_CONTINUE)
 		return false;
 	if (io_bitmap_ptr + port/8 > desc_limit_scaled(&tr_seg))
 		return false;
-	r = ops->read_std(ctxt, base + io_bitmap_ptr + port/8, &perm, 2, NULL);
+	r = ops->read_std(ctxt, base + io_bitmap_ptr + port/8, &perm, 2, NULL, true);
 	if (r != X86EMUL_CONTINUE)
 		return false;
 	if ((perm >> bit_idx) & mask)
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4258,10 +4258,15 @@ EXPORT_SYMBOL_GPL(kvm_read_guest_virt);

 static int emulator_read_std(struct x86_emulate_ctxt *ctxt,
 			     gva_t addr, void *val, unsigned int bytes,
-			     struct x86_exception *exception)
+			     struct x86_exception *exception, bool system)
 {
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
-	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, 0, exception);
+	u32 access = 0;
+
+	if (!system && kvm_x86_ops->get_cpl(vcpu) == 3)
+		access |= PFERR_USER_MASK;
+
+	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access, exception);
 }

 static int kvm_read_guest_phys_system(struct x86_emulate_ctxt *ctxt,
@@ -4305,12 +4310,17 @@ out:
 }

 static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *val,
-			      unsigned int bytes, struct x86_exception *exception)
+			      unsigned int bytes, struct x86_exception *exception,
+			      bool system)
 {
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+	u32 access = PFERR_WRITE_MASK;
+
+	if (!system && kvm_x86_ops->get_cpl(vcpu) == 3)
+		access |= PFERR_USER_MASK;

 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
-					   PFERR_WRITE_MASK, exception);
+					   access, exception);
 }
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ethan Lee flibitijibibo@gmail.com
commit 5ca4d1ae9bad0f59bd6f851c39b19f5366953666 upstream.
GPD Win 2 Website: http://www.gpd.hk/gpdwin2.asp
Tested on a unit from the first production run sent to Indiegogo backers
Signed-off-by: Ethan Lee flibitijibibo@gmail.com
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov dmitry.torokhov@gmail.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 drivers/input/touchscreen/goodix.c | 1 +
 1 file changed, 1 insertion(+)

--- a/drivers/input/touchscreen/goodix.c
+++ b/drivers/input/touchscreen/goodix.c
@@ -425,6 +425,7 @@ MODULE_DEVICE_TABLE(i2c, goodix_ts_id);
 #ifdef CONFIG_ACPI
 static const struct acpi_device_id goodix_acpi_match[] = {
 	{ "GDIX1001", 0 },
+	{ "GDIX1002", 0 },
 	{ }
 };
 MODULE_DEVICE_TABLE(acpi, goodix_acpi_match);
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johannes Wienke languitar@semipol.de
commit e6e7e9cd8eed0e18217c899843bffbe8c7dae564 upstream.
Add ELAN0612 to the list of supported touchpads; this ID is used in Lenovo v330 14IKB devices.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=199253
Signed-off-by: Johannes Wienke languitar@semipol.de
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov dmitry.torokhov@gmail.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 drivers/input/mouse/elan_i2c_core.c | 1 +
 1 file changed, 1 insertion(+)

--- a/drivers/input/mouse/elan_i2c_core.c
+++ b/drivers/input/mouse/elan_i2c_core.c
@@ -1249,6 +1249,7 @@ static const struct acpi_device_id elan_
 	{ "ELAN060B", 0 },
 	{ "ELAN060C", 0 },
 	{ "ELAN0611", 0 },
+	{ "ELAN0612", 0 },
 	{ "ELAN1000", 0 },
 	{ }
 };
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Ellerman mpe@ellerman.id.au
commit 1411b5218adbcf1d45ddb260db5553c52e8d917c upstream.
In the vmx AES init routines we do a printk(KERN_INFO ...) to report the fallback implementation we're using.
However with a slow console this can significantly affect the speed of crypto operations. Using 'cryptsetup benchmark' the removal of the printk() leads to a ~5x speedup for aes-cbc decryption.
So remove them.
Fixes: 8676590a1593 ("crypto: vmx - Adding AES routines for VMX module")
Fixes: 8c755ace357c ("crypto: vmx - Adding CBC routines for VMX module")
Fixes: 4f7f60d312b3 ("crypto: vmx - Adding CTR routines for VMX module")
Fixes: cc333cd68dfa ("crypto: vmx - Adding GHASH routines for VMX module")
Cc: stable@vger.kernel.org # v4.1+
Signed-off-by: Michael Ellerman mpe@ellerman.id.au
Signed-off-by: Herbert Xu herbert@gondor.apana.org.au
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org

---
 drivers/crypto/vmx/aes.c     | 2 --
 drivers/crypto/vmx/aes_cbc.c | 2 --
 drivers/crypto/vmx/aes_ctr.c | 2 --
 drivers/crypto/vmx/ghash.c   | 2 --
 4 files changed, 8 deletions(-)

--- a/drivers/crypto/vmx/aes.c
+++ b/drivers/crypto/vmx/aes.c
@@ -53,8 +53,6 @@ static int p8_aes_init(struct crypto_tfm
 		       alg, PTR_ERR(fallback));
 		return PTR_ERR(fallback);
 	}
-	printk(KERN_INFO "Using '%s' as fallback implementation.\n",
-	       crypto_tfm_alg_driver_name((struct crypto_tfm *) fallback));

 	crypto_cipher_set_flags(fallback,
 				crypto_cipher_get_flags((struct
--- a/drivers/crypto/vmx/aes_cbc.c
+++ b/drivers/crypto/vmx/aes_cbc.c
@@ -55,8 +55,6 @@ static int p8_aes_cbc_init(struct crypto
 		       alg, PTR_ERR(fallback));
 		return PTR_ERR(fallback);
 	}
-	printk(KERN_INFO "Using '%s' as fallback implementation.\n",
-	       crypto_tfm_alg_driver_name((struct crypto_tfm *) fallback));

 	crypto_blkcipher_set_flags(
 		fallback,
--- a/drivers/crypto/vmx/aes_ctr.c
+++ b/drivers/crypto/vmx/aes_ctr.c
@@ -53,8 +53,6 @@ static int p8_aes_ctr_init(struct crypto
 		       alg, PTR_ERR(fallback));
 		return PTR_ERR(fallback);
 	}
-	printk(KERN_INFO "Using '%s' as fallback implementation.\n",
-	       crypto_tfm_alg_driver_name((struct crypto_tfm *) fallback));

 	crypto_blkcipher_set_flags(
 		fallback,
--- a/drivers/crypto/vmx/ghash.c
+++ b/drivers/crypto/vmx/ghash.c
@@ -64,8 +64,6 @@ static int p8_ghash_init_tfm(struct cryp
 		       alg, PTR_ERR(fallback));
 		return PTR_ERR(fallback);
 	}
-	printk(KERN_INFO "Using '%s' as fallback implementation.\n",
-	       crypto_tfm_alg_driver_name(crypto_shash_tfm(fallback)));

 	crypto_shash_set_flags(fallback,
 			       crypto_shash_get_flags((struct crypto_shash
On Thu, Jun 14, 2018 at 04:04:55PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 4.4.138 release. There are 24 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat Jun 16 13:27:15 UTC 2018. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.4.138-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.4.y and the diffstat can be found below.
thanks,
greg k-h
Merged, compiled with -Werror, and installed onto my Pixel 2 XL and OnePlus 5.
No initial issues noticed in dmesg or general usage.
Thanks! Nathan
On Thu, Jun 14, 2018 at 09:57:45AM -0700, Nathan Chancellor wrote:
On Thu, Jun 14, 2018 at 04:04:55PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 4.4.138 release. There are 24 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat Jun 16 13:27:15 UTC 2018. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.4.138-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.4.y and the diffstat can be found below.
thanks,
greg k-h
Merged, compiled with -Werror, and installed onto my Pixel 2 XL and OnePlus 5.
No initial issues noticed in dmesg or general usage.
Great, thanks for testing and letting me know.
greg k-h
On 06/14/2018 08:04 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 4.4.138 release. There are 24 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat Jun 16 13:27:15 UTC 2018. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.4.138-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.4.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted on my test system. No dmesg regressions.
thanks,
-- Shuah
On 14 June 2018 at 19:34, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 4.4.138 release. There are 24 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat Jun 16 13:27:15 UTC 2018. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.4.138-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.4.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm and x86_64.
NOTE: 1) The earlier reported CVE-2011-2496 test failure now passes:
vma03 1 TPASS : mremap failed as expected
2) LTP: cve-2015-3290 failed intermittently on qemu_x86_64 (https://bugs.linaro.org/show_bug.cgi?id=3910). We will investigate.
Summary
------------------------------------------------------------------------
kernel: 4.4.138-rc1
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-4.4.y
git commit: 64f298340be794f1500c602285b542c4bfd3eb21
git describe: v4.4.137-25-g64f298340be7
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-4.4-oe/build/v4.4.137-25-g64f298340be7
No regressions (compared to build v4.4.137-15-g7d690c56754e)
------------------------------------------------------------------------
Ran 7174 total tests in the following environments and test suites.
Environments
--------------
- juno-r2 - arm64
- qemu_arm
- qemu_x86_64
- x15 - arm
- x86_64
Test Suites
-----------
* boot
* kselftest
* libhugetlbfs
* ltp-cap_bounds-tests
* ltp-containers-tests
* ltp-cve-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* ltp-timers-tests
* kselftest-vsyscall-mode-native
* kselftest-vsyscall-mode-none

Summary
------------------------------------------------------------------------
kernel: 4.4.138-rc1
git repo: https://git.linaro.org/lkft/arm64-stable-rc.git
git branch: 4.4.138-rc1-hikey-20180614-218
git commit: 55a4e4dfb0ebf4bbc212a778883e72f06d3735b7
git describe: 4.4.138-rc1-hikey-20180614-218
Test details: https://qa-reports.linaro.org/lkft/linaro-hikey-stable-rc-4.4-oe/build/4.4.138-rc1-hikey-20180614-218
No regressions (compared to build 4.4.138-rc1-hikey-20180613-217)
Ran 2629 total tests in the following environments and test suites.
Environments
--------------
- hi6220-hikey - arm64
- qemu_arm64
Test Suites
-----------
* boot
* kselftest
* libhugetlbfs
* ltp-cap_bounds-tests
* ltp-containers-tests
* ltp-cve-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* ltp-timers-tests
* ltp-fs-tests
On Thu, Jun 14, 2018 at 04:04:55PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 4.4.138 release. There are 24 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat Jun 16 13:27:15 UTC 2018. Anything received after that time might be too late.
Build results:
	total: 148 pass: 148 fail: 0
Qemu test results:
	total: 135 pass: 135 fail: 0
Details are available at http://kerneltests.org/builders.
Guenter
On Thu, 2018-06-14 at 16:04 +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 4.4.138 release. There are 24 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat Jun 16 13:27:15 UTC 2018. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.4.138-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.4.y and the diffstat can be found below.
[...]
3.18 and 4.4 are still missing this important fix to early parameter parsing:
commit 02afeaae9843733a39cd9b11053748b2d1dc5ae7
Author: Dave Hansen dave.hansen@linux.intel.com
Date: Tue Dec 22 14:52:38 2015 -0800
x86/boot: Fix early command-line parsing when matching at end
Ben.
-----Original Message-----
From: stable-owner@vger.kernel.org stable-owner@vger.kernel.org On Behalf Of Ben Hutchings
[..]
3.18 and 4.4 are still missing this important fix to early parameter parsing:
commit 02afeaae9843733a39cd9b11053748b2d1dc5ae7
Author: Dave Hansen dave.hansen@linux.intel.com
Date: Tue Dec 22 14:52:38 2015 -0800
x86/boot: Fix early command-line parsing when matching at end
I have cherry-picked that commit into both 3.18.y and 4.4.y (it applies cleanly) and tested them on my machine. Both worked correctly.
Test method:
- Added printks to the functions that detect noxsave, noxsaves and noxsaveopt
- Booted both kernels, with and without the commit, passing the kernel parameter "noxsave"
- Checked that "noxsaves" and "noxsaveopt" no longer appear in dmesg after the commit

A simplified model of the matching bug this exercises is sketched below.
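This is not the kernel's cmdline_find_option_bool(), just a standalone userspace sketch of the word-boundary issue the commit addresses; the function names here are made up for illustration. The dmesg check above relies on exactly this behavior: a correct matcher must require the whole option to match and to end on a word boundary, so a command line ending in "noxsave" must not also report the longer options.

	/*
	 * Simplified model: find a boolean option in a kernel-style
	 * command line, accepting a match only on word boundaries.
	 * Build with: cc -o cmdline_demo cmdline_demo.c
	 */
	#include <stdio.h>
	#include <string.h>
	#include <ctype.h>

	static int find_option_bool(const char *cmdline, const char *option)
	{
		const char *p = cmdline;
		size_t len = strlen(option);

		while ((p = strstr(p, option)) != NULL) {
			/* Match must start at the beginning or after a space... */
			int at_start = (p == cmdline) ||
				       isspace((unsigned char)p[-1]);
			/* ...and end at the string end or before a space. */
			int at_end = (p[len] == '\0') ||
				     isspace((unsigned char)p[len]);

			if (at_start && at_end)
				return 1;
			p += 1;
		}
		return 0;
	}

	int main(void)
	{
		const char *cmdline = "root=/dev/sda1 quiet noxsave";
		const char *opts[] = { "noxsave", "noxsaves", "noxsaveopt" };

		for (size_t i = 0; i < sizeof(opts) / sizeof(opts[0]); i++)
			printf("%-10s -> %s\n", opts[i],
			       find_option_bool(cmdline, opts[i]) ?
			       "found" : "not found");
		/* Expected: only "noxsave" is found. */
		return 0;
	}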
Thanks,
Daniel
On Tue, Jun 19, 2018 at 03:28:40PM +0100, Ben Hutchings wrote:
On Thu, 2018-06-14 at 16:04 +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 4.4.138 release. There are 24 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat Jun 16 13:27:15 UTC 2018. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.4.138-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.4.y and the diffstat can be found below.
[...]
3.18 and 4.4 are still missing this important fix to early parameter parsing:
commit 02afeaae9843733a39cd9b11053748b2d1dc5ae7
Author: Dave Hansen dave.hansen@linux.intel.com
Date: Tue Dec 22 14:52:38 2015 -0800
x86/boot: Fix early command-line parsing when matching at end
Ben.
Thanks, now applied.
greg k-h