Hi Greg, Sasha,
While backporting 37640adbefd6 ("MIPS: PCI: remember nasid changed by
set interrupt affinity") something went wrong and an extra 'n' was added.
So 'data->nasid' became 'data->nnasid' and the MIPS builds started failing.
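For illustration, a minimal sketch of the kind of breakage (the type, struct
and function names below are assumptions, not the actual MIPS driver code):

/* Illustrative only -- identifiers are assumed, not the real driver's. */
typedef int nasid_t;			/* stand-in for the MIPS node-id type */

struct bridge_irq_chip_data {
	nasid_t nasid;			/* the member the backport should update */
};

static void remember_nasid(struct bridge_irq_chip_data *data, nasid_t new_nasid)
{
	/* The bad backport emitted "data->nnasid = new_nasid;"; there is
	 * no 'nnasid' member, so MIPS builds of the driver fail to
	 * compile.  The stable-only fix restores the intended assignment: */
	data->nasid = new_nasid;
}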
Since v5.4.78 is already released, I assume you will need a patch to
fix it. Please consider applying the attached patch; it is only needed
for the 5.4-stable tree.
--
Regards
Sudip
IBM Power9 processors can speculatively operate on data in the L1
cache before it has been completely validated, via a way-prediction
mechanism. It is not possible for an attacker to determine the
contents of impermissible memory using this method, since these
systems implement a combination of hardware and software security
measures to prevent scenarios where protected data could be leaked.
However, these measures don't address the scenario where an attacker
induces the operating system to speculatively execute instructions
using data that the attacker controls. This can be used, for example, to
speculatively bypass "kernel user access prevention" techniques, as
discovered by Anthony Steinhauser of Google's Safeside Project. This
is not an attack by itself, but there is a possibility it could be
used in conjunction with side-channels or other weaknesses in the
privileged code to construct an attack.
This issue can be mitigated by flushing the L1 cache between privilege
boundaries of concern. This series flushes the cache on kernel entry and
after kernel user accesses.
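At a high level the mitigation looks like the sketch below; the identifiers
are illustrative only and are not the actual symbols this series adds:

/*
 * Illustrative sketch, not the series' implementation: flush L1D on
 * kernel entry and when a kernel access to user memory ends, gated
 * by flags derived from firmware/CPU feature bits so unaffected
 * systems pay no cost.
 */
static bool entry_flush_enabled;	/* assumed: set during boot from feature bits */
static bool uaccess_flush_enabled;

static void flush_l1d(void)
{
	/* On real hardware this is a cache-geometry-sized flush sequence;
	 * elided here. */
}

void kernel_entry(void)
{
	if (entry_flush_enabled)
		flush_l1d();
	/* ... continue normal syscall/exception entry ... */
}

void end_user_access(void)
{
	/* ... re-lock user memory (KUAP) ... */
	if (uaccess_flush_enabled)
		flush_l1d();
}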
Thanks to Nick Piggin, Russell Currey, Christopher M. Riedl, Michael
Ellerman and Spoorthy S for their work in developing, optimising,
testing and backporting these fixes, and to the many others who helped
behind the scenes.
Daniel Axtens (1):
selftests/powerpc: entry flush test
Michael Ellerman (1):
powerpc: Only include kup-radix.h for 64-bit Book3S
Nicholas Piggin (2):
powerpc/64s: flush L1D on kernel entry
powerpc/64s: flush L1D after user accesses
Russell Currey (1):
selftests/powerpc: rfi_flush: disable entry flush if present
.../admin-guide/kernel-parameters.txt | 7 +
.../powerpc/include/asm/book3s/64/kup-radix.h | 66 +++---
arch/powerpc/include/asm/exception-64s.h | 12 +-
arch/powerpc/include/asm/feature-fixups.h | 19 ++
arch/powerpc/include/asm/kup.h | 26 ++-
arch/powerpc/include/asm/security_features.h | 7 +
arch/powerpc/include/asm/setup.h | 4 +
arch/powerpc/kernel/exceptions-64s.S | 80 +++----
arch/powerpc/kernel/setup_64.c | 122 ++++++++++-
arch/powerpc/kernel/syscall_64.c | 2 +-
arch/powerpc/kernel/vmlinux.lds.S | 14 ++
arch/powerpc/lib/feature-fixups.c | 104 +++++++++
arch/powerpc/platforms/powernv/setup.c | 17 ++
arch/powerpc/platforms/pseries/setup.c | 8 +
.../selftests/powerpc/security/.gitignore | 1 +
.../selftests/powerpc/security/Makefile | 2 +-
.../selftests/powerpc/security/entry_flush.c | 198 ++++++++++++++++++
.../selftests/powerpc/security/rfi_flush.c | 35 +++-
18 files changed, 646 insertions(+), 78 deletions(-)
create mode 100644 tools/testing/selftests/powerpc/security/entry_flush.c
--
2.25.1
This adds the crashkernel=auto feature, which configures the memory
reserved for vmcore creation on both the x86 and arm64 platforms based
on the total system memory size.
Cc: stable@vger.kernel.org
Signed-off-by: John Donnelly <john.p.donnelly@oracle.com>
Signed-off-by: Saeed Mirzamohammadi <saeed.mirzamohammadi@oracle.com>
---
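For illustration, here is a small userspace sketch (not the kernel's parser)
of how the default CONFIG_CRASH_AUTO_STR range string
"1G-64G:128M,64G-1T:256M,1T-:512M" is matched against system RAM; the
matching rule mirrors parse_crashkernel_mem() below:

/* Userspace illustration of matching system RAM against the default
 * CONFIG_CRASH_AUTO_STR ranges; not the kernel's parser. */
#include <stdio.h>

struct ck_range {
	unsigned long long start, end, size;	/* bytes; end is exclusive */
};

/* "1G-64G:128M,64G-1T:256M,1T-:512M" */
static const struct ck_range auto_ranges[] = {
	{ 1ULL << 30,  64ULL << 30, 128ULL << 20 },
	{ 64ULL << 30, 1ULL << 40,  256ULL << 20 },
	{ 1ULL << 40,  ~0ULL,       512ULL << 20 },
};

static unsigned long long crash_size_for(unsigned long long ram)
{
	unsigned int i;

	/* (The kernel additionally rounds RAM up to 128M first, to absorb
	 * regions the firmware has reserved for itself.) */
	for (i = 0; i < sizeof(auto_ranges) / sizeof(auto_ranges[0]); i++)
		if (ram >= auto_ranges[i].start && ram < auto_ranges[i].end)
			return auto_ranges[i].size;
	return 0;	/* below 1G: crashkernel=auto reserves nothing */
}

int main(void)
{
	unsigned long long ram[] = { 512ULL << 20, 8ULL << 30, 128ULL << 30, 2ULL << 40 };
	unsigned int i;

	for (i = 0; i < 4; i++)
		printf("RAM %6llu MiB -> reserve %llu MiB\n",
		       ram[i] >> 20, crash_size_for(ram[i]) >> 20);
	return 0;
}

So, for example, a machine with 128G of RAM gets a 256M reservation from
crashkernel=auto, while one with less than 1G gets none.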
Documentation/admin-guide/kdump/kdump.rst | 5 +++++
arch/arm64/Kconfig | 26 ++++++++++++++++++++++-
arch/arm64/configs/defconfig | 1 +
arch/x86/Kconfig | 26 ++++++++++++++++++++++-
arch/x86/configs/x86_64_defconfig | 1 +
kernel/crash_core.c | 20 +++++++++++++++--
6 files changed, 75 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/kdump/kdump.rst b/Documentation/admin-guide/kdump/kdump.rst
index 75a9dd98e76e..f95a2af64f59 100644
--- a/Documentation/admin-guide/kdump/kdump.rst
+++ b/Documentation/admin-guide/kdump/kdump.rst
@@ -285,7 +285,12 @@ This would mean:
2) if the RAM size is between 512M and 2G (exclusive), then reserve 64M
3) if the RAM size is larger than 2G, then reserve 128M
+Or you can use crashkernel=auto if you have enough memory. The threshold
+is 1G on x86_64 and arm64; if the system memory is below the threshold,
+crashkernel=auto will not reserve any memory. The reserved size scales
+with the system memory size, as shown below:
+    x86_64/arm64: 1G-64G:128M,64G-1T:256M,1T-:512M
Boot into System Kernel
=======================
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1515f6f153a0..d359dcffa80e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1124,7 +1124,7 @@ comment "Support for PE file signature verification disabled"
depends on KEXEC_SIG
depends on !EFI || !SIGNED_PE_FILE_VERIFICATION
-config CRASH_DUMP
+menuconfig CRASH_DUMP
bool "Build kdump crash kernel"
help
Generate crash dump after being started by kexec. This should
@@ -1135,6 +1135,30 @@ config CRASH_DUMP
For more details see Documentation/admin-guide/kdump/kdump.rst
+if CRASH_DUMP
+
+config CRASH_AUTO_STR
+ string "Memory reserved for crash kernel"
+ depends on CRASH_DUMP
+ default "1G-64G:128M,64G-1T:256M,1T-:512M"
+ help
+ This configures the reserved memory dependent
+ on the value of System RAM. The syntax is:
+ crashkernel=<range1>:<size1>[,<range2>:<size2>,...][@offset]
+ range=start-[end]
+
+ For example:
+ crashkernel=512M-2G:64M,2G-:128M
+
+ This would mean:
+
+ 1) if the RAM is smaller than 512M, then don't reserve anything
+ (this is the "rescue" case)
+ 2) if the RAM size is between 512M and 2G (exclusive), then reserve 64M
+ 3) if the RAM size is larger than 2G, then reserve 128M
+
+endif # CRASH_DUMP
+
config XEN_DOM0
def_bool y
depends on XEN
diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index 5cfe3cf6f2ac..899ef3b6a78f 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -69,6 +69,7 @@ CONFIG_SECCOMP=y
CONFIG_KEXEC=y
CONFIG_KEXEC_FILE=y
CONFIG_CRASH_DUMP=y
+# CONFIG_CRASH_AUTO_STR is not set
CONFIG_XEN=y
CONFIG_COMPAT=y
CONFIG_RANDOMIZE_BASE=y
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f6946b81f74a..bacd17312bb1 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2035,7 +2035,7 @@ config KEXEC_BZIMAGE_VERIFY_SIG
help
Enable bzImage signature verification support.
-config CRASH_DUMP
+menuconfig CRASH_DUMP
bool "kernel crash dumps"
depends on X86_64 || (X86_32 && HIGHMEM)
help
@@ -2049,6 +2049,30 @@ config CRASH_DUMP
(CONFIG_RELOCATABLE=y).
For more details see Documentation/admin-guide/kdump/kdump.rst
+if CRASH_DUMP
+
+config CRASH_AUTO_STR
+ string "Memory reserved for crash kernel" if X86_64
+ depends on CRASH_DUMP
+ default "1G-64G:128M,64G-1T:256M,1T-:512M"
+ help
+ This configures the reserved memory dependent
+ on the value of System RAM. The syntax is:
+ crashkernel=<range1>:<size1>[,<range2>:<size2>,...][@offset]
+ range=start-[end]
+
+ For example:
+ crashkernel=512M-2G:64M,2G-:128M
+
+ This would mean:
+
+ 1) if the RAM is smaller than 512M, then don't reserve anything
+ (this is the "rescue" case)
+ 2) if the RAM size is between 512M and 2G (exclusive), then reserve 64M
+ 3) if the RAM size is larger than 2G, then reserve 128M
+
+endif # CRASH_DUMP
+
config KEXEC_JUMP
bool "kexec jump"
depends on KEXEC && HIBERNATION
diff --git a/arch/x86/configs/x86_64_defconfig b/arch/x86/configs/x86_64_defconfig
index 9936528e1939..7a87fbecf40b 100644
--- a/arch/x86/configs/x86_64_defconfig
+++ b/arch/x86/configs/x86_64_defconfig
@@ -33,6 +33,7 @@ CONFIG_EFI_MIXED=y
CONFIG_HZ_1000=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
+# CONFIG_CRASH_AUTO_STR is not set
CONFIG_HIBERNATION=y
CONFIG_PM_DEBUG=y
CONFIG_PM_TRACE_RTC=y
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 106e4500fd53..a44cd9cc12c4 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -7,6 +7,7 @@
#include <linux/crash_core.h>
#include <linux/utsname.h>
#include <linux/vmalloc.h>
+#include <linux/kexec.h>
#include <asm/page.h>
#include <asm/sections.h>
@@ -41,6 +42,15 @@ static int __init parse_crashkernel_mem(char *cmdline,
unsigned long long *crash_base)
{
char *cur = cmdline, *tmp;
+ unsigned long long total_mem = system_ram;
+
+ /*
+ * Firmware sometimes reserves memory regions for its own use,
+ * so we may see less than the actual system memory size.
+ * Work around this by rounding the total size up to 128M,
+ * which is enough for most test cases.
+ */
+ total_mem = roundup(total_mem, SZ_128M);
/* for each entry of the comma-separated list */
do {
@@ -85,13 +95,13 @@ static int __init parse_crashkernel_mem(char *cmdline,
return -EINVAL;
}
cur = tmp;
- if (size >= system_ram) {
+ if (size >= total_mem) {
pr_warn("crashkernel: invalid size\n");
return -EINVAL;
}
/* match ? */
- if (system_ram >= start && system_ram < end) {
+ if (total_mem >= start && total_mem < end) {
*crash_size = size;
break;
}
@@ -250,6 +260,12 @@ static int __init __parse_crashkernel(char *cmdline,
if (suffix)
return parse_crashkernel_suffix(ck_cmdline, crash_size,
suffix);
+#ifdef CONFIG_CRASH_AUTO_STR
+ if (strncmp(ck_cmdline, "auto", 4) == 0) {
+ ck_cmdline = CONFIG_CRASH_AUTO_STR;
+ pr_info("Using crashkernel=auto, the size chosen is a best effort estimation.\n");
+ }
+#endif
/*
* if the commandline contains a ':', then that's the extended
* syntax -- if not, it must be the classic syntax
--
2.18.4
IBM Power9 processors can speculatively operate on data in the L1
cache before it has been completely validated, via a way-prediction
mechanism. It is not possible for an attacker to determine the
contents of impermissible memory using this method, since these
systems implement a combination of hardware and software security
measures to prevent scenarios where protected data could be leaked.
However, these measures don't address the scenario where an attacker
induces the operating system to speculatively execute instructions
using data that the attacker controls. This can be used, for example, to
speculatively bypass "kernel user access prevention" techniques, as
discovered by Anthony Steinhauser of Google's Safeside Project. This
is not an attack by itself, but there is a possibility it could be
used in conjunction with side-channels or other weaknesses in the
privileged code to construct an attack.
This issue can be mitigated by flushing the L1 cache between privilege
boundaries of concern. This series flushes the cache on kernel entry and
after kernel user accesses.
Thanks to Nick Piggin, Russell Currey, Christopher M. Riedl, Michael
Ellerman and Spoorthy S for their work in developing, optimising,
testing and backporting these fixes, and to the many others who helped
behind the scenes.
Andrew Donnellan (1):
powerpc: Fix __clear_user() with KUAP enabled
Christophe Leroy (2):
powerpc: Add a framework for user access tracking
powerpc: Implement user_access_begin and friends
Daniel Axtens (2):
powerpc/64s: Define MASKABLE_RELON_EXCEPTION_PSERIES_OOL
powerpc/64s: move some exception handlers out of line
Nicholas Piggin (3):
powerpc/64s: flush L1D on kernel entry
powerpc/uaccess: Evaluate macro arguments once, before user access is
allowed
powerpc/64s: flush L1D after user accesses
Documentation/kernel-parameters.txt | 7 +
.../powerpc/include/asm/book3s/64/kup-radix.h | 23 ++
arch/powerpc/include/asm/exception-64s.h | 15 +-
arch/powerpc/include/asm/feature-fixups.h | 19 ++
arch/powerpc/include/asm/futex.h | 4 +
arch/powerpc/include/asm/kup.h | 40 ++++
arch/powerpc/include/asm/security_features.h | 7 +
arch/powerpc/include/asm/setup.h | 4 +
arch/powerpc/include/asm/uaccess.h | 142 +++++++++---
arch/powerpc/kernel/exceptions-64s.S | 210 +++++++++++-------
arch/powerpc/kernel/ppc_ksyms.c | 10 +
arch/powerpc/kernel/setup_64.c | 138 ++++++++++++
arch/powerpc/kernel/vmlinux.lds.S | 14 ++
arch/powerpc/lib/checksum_wrappers_64.c | 4 +
arch/powerpc/lib/feature-fixups.c | 104 +++++++++
arch/powerpc/lib/string.S | 2 +-
arch/powerpc/lib/string_64.S | 4 +-
arch/powerpc/platforms/powernv/setup.c | 15 ++
arch/powerpc/platforms/pseries/setup.c | 8 +
19 files changed, 653 insertions(+), 117 deletions(-)
create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h
create mode 100644 arch/powerpc/include/asm/kup.h
--
2.25.1
IBM Power9 processors can speculatively operate on data in the L1
cache before it has been completely validated, via a way-prediction
mechanism. It is not possible for an attacker to determine the
contents of impermissible memory using this method, since these
systems implement a combination of hardware and software security
measures to prevent scenarios where protected data could be leaked.
However, these measures don't address the scenario where an attacker
induces the operating system to speculatively execute instructions
using data that the attacker controls. This can be used, for example, to
speculatively bypass "kernel user access prevention" techniques, as
discovered by Anthony Steinhauser of Google's Safeside Project. This
is not an attack by itself, but there is a possibility it could be
used in conjunction with side-channels or other weaknesses in the
privileged code to construct an attack.
This issue can be mitigated by flushing the L1 cache between privilege
boundaries of concern. This series flushes the cache on kernel entry and
after kernel user accesses.
Thanks to Nick Piggin, Russell Currey, Christopher M. Riedl, Michael
Ellerman and Spoorthy S for their work in developing, optimising,
testing and backporting these fixes, and to the many others who helped
behind the scenes.
Andrew Donnellan (1):
powerpc: Fix __clear_user() with KUAP enabled
Christophe Leroy (2):
powerpc: Add a framework for user access tracking
powerpc: Implement user_access_begin and friends
Daniel Axtens (2):
powerpc/64s: Define MASKABLE_RELON_EXCEPTION_PSERIES_OOL
powerpc/64s: move some exception handlers out of line
Nicholas Piggin (3):
powerpc/64s: flush L1D on kernel entry
powerpc/uaccess: Evaluate macro arguments once, before user access is
allowed
powerpc/64s: flush L1D after user accesses
Documentation/kernel-parameters.txt | 7 +
.../powerpc/include/asm/book3s/64/kup-radix.h | 22 +++
arch/powerpc/include/asm/exception-64s.h | 13 +-
arch/powerpc/include/asm/feature-fixups.h | 19 +++
arch/powerpc/include/asm/futex.h | 4 +
arch/powerpc/include/asm/kup.h | 40 +++++
arch/powerpc/include/asm/security_features.h | 7 +
arch/powerpc/include/asm/setup.h | 4 +
arch/powerpc/include/asm/uaccess.h | 143 ++++++++++++++----
arch/powerpc/kernel/exceptions-64s.S | 130 ++++++++--------
arch/powerpc/kernel/setup_64.c | 120 +++++++++++++++
arch/powerpc/kernel/vmlinux.lds.S | 14 ++
arch/powerpc/lib/checksum_wrappers.c | 4 +
arch/powerpc/lib/feature-fixups.c | 104 +++++++++++++
arch/powerpc/lib/string.S | 4 +-
arch/powerpc/lib/string_64.S | 6 +-
arch/powerpc/platforms/powernv/setup.c | 15 ++
arch/powerpc/platforms/pseries/setup.c | 8 +
18 files changed, 567 insertions(+), 97 deletions(-)
create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h
create mode 100644 arch/powerpc/include/asm/kup.h
--
2.25.1