This is a note to let you know that I've just added the patch titled
x86/mm, sched/core: Turn off IRQs in switch_mm()
to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=su...
The filename of the patch is:
     x86-mm-sched-core-turn-off-irqs-in-switch_mm.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree, please let <stable@vger.kernel.org> know about it.
From 078194f8e9fe3cf54c8fd8bded48a1db5bd8eb8a Mon Sep 17 00:00:00 2001
From: Andy Lutomirski <luto@kernel.org>
Date: Tue, 26 Apr 2016 09:39:09 -0700
Subject: x86/mm, sched/core: Turn off IRQs in switch_mm()

From: Andy Lutomirski <luto@kernel.org>
commit 078194f8e9fe3cf54c8fd8bded48a1db5bd8eb8a upstream.
Potential races between switch_mm() and TLB-flush or LDT-flush IPIs could be very messy. AFAICT the code is currently okay, whether by accident or by careful design, but enabling PCID will make it considerably more complicated and will no longer be obviously safe.
Fix it with a big hammer: run switch_mm() with IRQs off.
To avoid a performance hit in the scheduler, we take advantage of our knowledge that the scheduler already has IRQs disabled when it calls switch_mm().
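The shape of that fix is easy to see outside the kernel. Below is a minimal user-space sketch of the pattern, using toy stand-ins (fake_local_irq_save(), do_switch(), and friends are illustrative names, not kernel APIs): an inner function that requires interrupts to be off, plus an interrupt-safe wrapper that pays for the save/restore only when needed.

#include <stdio.h>
#include <stdbool.h>

/* Toy stand-in for the CPU's interrupt-enable state. */
static bool irqs_enabled = true;

static bool fake_local_irq_save(void)	/* returns the previous state */
{
	bool prev = irqs_enabled;
	irqs_enabled = false;
	return prev;
}

static void fake_local_irq_restore(bool prev)
{
	irqs_enabled = prev;
}

/* Inner body (plays the role of switch_mm_irqs_off()): callers must
 * guarantee that interrupts are already disabled. */
static void do_switch_irqs_off(int prev, int next)
{
	printf("switch %d -> %d, IRQs %s\n",
	       prev, next, irqs_enabled ? "ON (bug!)" : "off");
}

/* IRQ-safe wrapper (plays the role of switch_mm()): correct for any
 * caller, at the cost of a save/restore pair. */
static void do_switch(int prev, int next)
{
	bool flags = fake_local_irq_save();

	do_switch_irqs_off(prev, next);
	fake_local_irq_restore(flags);
}

int main(void)
{
	bool flags;

	/* Generic caller: the wrapper disables IRQs itself. */
	do_switch(1, 2);

	/* Scheduler-like caller: IRQs are already off, so it calls the
	 * inner function directly and skips the redundant save/restore. */
	flags = fake_local_irq_save();
	do_switch_irqs_off(2, 3);
	fake_local_irq_restore(flags);
	return 0;
}

The second caller in main() is the scheduler case from the paragraph above: because IRQs are already disabled there, calling the inner function directly avoids any extra cost from the big hammer.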
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/f19baf759693c9dcae64bbff76189db77cb13398.1461688545...
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/include/asm/mmu_context.h |    3 +++
 arch/x86/mm/tlb.c                  |   10 ++++++++++
 2 files changed, 13 insertions(+)

--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -107,6 +107,9 @@ static inline void enter_lazy_tlb(struct
 extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
                       struct task_struct *tsk);
 
+extern void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+                               struct task_struct *tsk);
+#define switch_mm_irqs_off switch_mm_irqs_off
 
 #define activate_mm(prev, next)			\
 do {						\
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -64,6 +64,16 @@ EXPORT_SYMBOL_GPL(leave_mm);
 void switch_mm(struct mm_struct *prev, struct mm_struct *next,
                struct task_struct *tsk)
 {
+	unsigned long flags;
+
+	local_irq_save(flags);
+	switch_mm_irqs_off(prev, next, tsk);
+	local_irq_restore(flags);
+}
+
+void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+                        struct task_struct *tsk)
+{
 	unsigned cpu = smp_processor_id();
 
 	if (likely(prev != next)) {
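A note on the "#define switch_mm_irqs_off switch_mm_irqs_off" line in the header hunk: defining the macro to its own name is a feature-detection idiom. Generic code can then test with #ifndef and fall back to plain switch_mm() on architectures that do not provide an IRQs-off variant; the companion patch in this queue (sched-core-add-switch_mm_irqs_off-and-use-it-in-the-scheduler.patch) adds such a fallback in include/linux/mmu_context.h. Here is a minimal sketch of the idiom, using toy functions rather than the real kernel ones:

#include <stdio.h>

/* Toy stand-ins -- illustration of the idiom, not the kernel functions. */
static void switch_mm(void)          { puts("generic, IRQ-safe path"); }
static void switch_mm_irqs_off(void) { puts("arch-provided IRQs-off path"); }

/* "Arch header" side: define the macro to its own name, meaning
 * "this architecture provides switch_mm_irqs_off()". */
#define switch_mm_irqs_off switch_mm_irqs_off

/* "Generic header" side: only map the name onto switch_mm() when the
 * architecture did not define the macro above. */
#ifndef switch_mm_irqs_off
# define switch_mm_irqs_off switch_mm
#endif

int main(void)
{
	switch_mm_irqs_off();	/* resolves to the arch version here */
	return 0;
}

Remove the arch-side #define and the same call resolves to switch_mm() instead, which is exactly how architectures without this patch keep working unmodified.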
Patches currently in stable-queue which might be from luto@kernel.org are
queue-4.4/x86-mm-sched-core-uninline-switch_mm.patch
queue-4.4/x86-mm-add-a-noinvpcid-boot-option-to-turn-off-invpcid.patch
queue-4.4/x86-irq-do-not-substract-irq_tlb_count-from-irq_call_count.patch
queue-4.4/x86-mm-if-invpcid-is-available-use-it-to-flush-global-mappings.patch
queue-4.4/x86-mm-add-invpcid-helpers.patch
queue-4.4/sched-core-add-switch_mm_irqs_off-and-use-it-in-the-scheduler.patch
queue-4.4/arm-hide-finish_arch_post_lock_switch-from-modules.patch
queue-4.4/x86-mm-sched-core-turn-off-irqs-in-switch_mm.patch
queue-4.4/mm-mmu_context-sched-core-fix-mmu_context.h-assumption.patch
queue-4.4/x86-mm-build-arch-x86-mm-tlb.c-even-on-smp.patch
queue-4.4/sched-core-idle_task_exit-shouldn-t-use-switch_mm_irqs_off.patch