This is a note to let you know that I've just added the patch titled
x86/process: Define cpu_tss_rw in same section as declaration
to the 4.14-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=su...

The filename of the patch is:
     x86-process-define-cpu_tss_rw-in-same-section-as-declaration.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree, please let stable@vger.kernel.org know about it.
From 2fd9c41aea47f4ad071accf94b94f94f2c4d31eb Mon Sep 17 00:00:00 2001
From: Nick Desaulniers <ndesaulniers@google.com>
Date: Wed, 3 Jan 2018 12:39:52 -0800
Subject: x86/process: Define cpu_tss_rw in same section as declaration

From: Nick Desaulniers <ndesaulniers@google.com>
commit 2fd9c41aea47f4ad071accf94b94f94f2c4d31eb upstream.
cpu_tss_rw is declared with DECLARE_PER_CPU_PAGE_ALIGNED but then defined with DEFINE_PER_CPU_SHARED_ALIGNED, leading to section mismatch warnings.

Use DEFINE_PER_CPU_PAGE_ALIGNED consistently. This is necessary because cpu_tss_rw is mapped to the cpu entry area and must be page aligned.
[ tglx: Massaged changelog a bit ]
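
[ Note: the snippet below is an illustrative sketch added for context, not
  part of the upstream patch. The macro names and the cpu_tss_rw identifier
  are the real kernel ones; the condensed header/source layout and the
  elided initializers are illustrative only. ]

    /* In the header (arch/x86/include/asm/processor.h in the real tree): */
    DECLARE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw);

    /*
     * In arch/x86/kernel/process.c the definition must use the matching
     * _PAGE_ALIGNED variant so the symbol is emitted into the same
     * per-CPU subsection (.data..percpu..page_aligned on SMP builds) as
     * the declaration. Defining it with DEFINE_PER_CPU_SHARED_ALIGNED
     * instead places it in .data..percpu..shared_aligned, which is what
     * produced the section mismatch warnings.
     */
    __visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = {
        .x86_tss = {
            /* ... field initializers as in process.c ... */
        },
    };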
Fixes: 1a935bc3d4ea ("x86/entry: Move SYSENTER_stack to the beginning of struct tss_struct")
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: thomas.lendacky@amd.com
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: tklauser@distanz.ch
Cc: minipli@googlemail.com
Cc: me@kylehuey.com
Cc: namit@vmware.com
Cc: luto@kernel.org
Cc: jpoimboe@redhat.com
Cc: tj@kernel.org
Cc: cl@linux.com
Cc: bp@suse.de
Cc: thgarnie@google.com
Cc: kirill.shutemov@linux.intel.com
Link: https://lkml.kernel.org/r/20180103203954.183360-1-ndesaulniers@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/kernel/process.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -47,7 +47,7 @@
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss_rw) = {
+__visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = {
 	.x86_tss = {
 		/*
 		 * .sp0 is only used when entering ring 0 from a lower
Patches currently in stable-queue which might be from ndesaulniers@google.com are
queue-4.14/x86-process-define-cpu_tss_rw-in-same-section-as-declaration.patch