The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From a38662084c8bdb829ff486468c7ea801c13fcc34 Mon Sep 17 00:00:00 2001
From: Martin Schwidefsky <schwidefsky(a)de.ibm.com>
Date: Tue, 8 Jan 2019 12:44:57 +0100
Subject: [PATCH] s390/mm: always force a load of the primary ASCE on context
switch
The ASCE of an mm_struct can be modified after a task has been created,
e.g. via crst_table_downgrade for a compat process. The active_mm logic
to avoid the switch_mm call if the next task is a kernel thread can
lead to a situation where switch_mm is called where 'prev == next' is
true but 'prev->context.asce == next->context.asce' is not.
This can lead to a situation where a CPU uses the outdated ASCE to run
a task. The result can be a crash, endless loops and really subtle
problems due to TLBs being created with an invalid ASCE.
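To make the failure mode concrete, here is a small userspace model of the
old behaviour (purely illustrative; hw_primary_asce, downgrade_asce and the
reduced mm_struct are made-up stand-ins, not the s390 kernel code):

/*
 * Model: hw_primary_asce stands in for this CPU's CR1, and
 * downgrade_asce() plays the role of crst_table_downgrade().
 */
#include <stdio.h>

struct mm_struct { unsigned long asce; };

static unsigned long hw_primary_asce;          /* models CR1 on this CPU */

/* pre-fix switch_mm(): does nothing when prev == next */
static void switch_mm_old(struct mm_struct *prev, struct mm_struct *next)
{
	if (prev == next)
		return;                         /* old early exit */
	hw_primary_asce = next->asce;           /* load of the primary ASCE */
}

static void downgrade_asce(struct mm_struct *mm)
{
	mm->asce = 0x2000;                      /* new page table root */
}

int main(void)
{
	struct mm_struct task_mm  = { .asce = 0x1000 };
	struct mm_struct other_mm = { .asce = 0x3000 };

	switch_mm_old(&other_mm, &task_mm);     /* task scheduled in, CR1 = 0x1000 */
	downgrade_asce(&task_mm);               /* compat process replaces its ASCE */

	/* lazy TLB: a kernel thread ran in between, then the same mm returns */
	switch_mm_old(&task_mm, &task_mm);      /* prev == next -> nothing reloaded */

	printf("CR1 model %#lx vs mm asce %#lx: %s\n",
	       hw_primary_asce, task_mm.asce,
	       hw_primary_asce == task_mm.asce ? "ok" : "stale ASCE in use");
	return 0;
}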
Cc: stable(a)kernel.org # v3.15+
Fixes: 53e857f30867 ("s390/mm,tlb: race of lazy TLB flush vs. recreation")
Reported-by: Heiko Carstens <heiko.carstens(a)de.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens(a)de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky(a)de.ibm.com>
diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
index ccbb53e22024..e4462202200d 100644
--- a/arch/s390/include/asm/mmu_context.h
+++ b/arch/s390/include/asm/mmu_context.h
@@ -90,8 +90,6 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
{
int cpu = smp_processor_id();
- if (prev == next)
- return;
S390_lowcore.user_asce = next->context.asce;
cpumask_set_cpu(cpu, &next->context.cpu_attach_mask);
/* Clear previous user-ASCE from CR1 and CR7 */
@@ -103,7 +101,8 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
__ctl_load(S390_lowcore.vdso_asce, 7, 7);
clear_cpu_flag(CIF_ASCE_SECONDARY);
}
- cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask);
+ if (prev != next)
+ cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask);
}
#define finish_arch_post_lock_switch finish_arch_post_lock_switch
The patch below does not apply to the 4.14-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From a38662084c8bdb829ff486468c7ea801c13fcc34 Mon Sep 17 00:00:00 2001
From: Martin Schwidefsky <schwidefsky(a)de.ibm.com>
Date: Tue, 8 Jan 2019 12:44:57 +0100
Subject: [PATCH] s390/mm: always force a load of the primary ASCE on context
switch
The ASCE of an mm_struct can be modified after a task has been created,
e.g. via crst_table_downgrade for a compat process. The active_mm logic
to avoid the switch_mm call if the next task is a kernel thread can
lead to a situation where switch_mm is called where 'prev == next' is
true but 'prev->context.asce == next->context.asce' is not.
This can lead to a situation where a CPU uses the outdated ASCE to run
a task. The result can be a crash, endless loops and really subtle
problems due to TLBs being created with an invalid ASCE.
Cc: stable(a)kernel.org # v3.15+
Fixes: 53e857f30867 ("s390/mm,tlb: race of lazy TLB flush vs. recreation")
Reported-by: Heiko Carstens <heiko.carstens(a)de.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens(a)de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky(a)de.ibm.com>
diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
index ccbb53e22024..e4462202200d 100644
--- a/arch/s390/include/asm/mmu_context.h
+++ b/arch/s390/include/asm/mmu_context.h
@@ -90,8 +90,6 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
{
int cpu = smp_processor_id();
- if (prev == next)
- return;
S390_lowcore.user_asce = next->context.asce;
cpumask_set_cpu(cpu, &next->context.cpu_attach_mask);
/* Clear previous user-ASCE from CR1 and CR7 */
@@ -103,7 +101,8 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
__ctl_load(S390_lowcore.vdso_asce, 7, 7);
clear_cpu_flag(CIF_ASCE_SECONDARY);
}
- cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask);
+ if (prev != next)
+ cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask);
}
#define finish_arch_post_lock_switch finish_arch_post_lock_switch
The patch below does not apply to the 4.20-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 4d447455e73b47c43dd35fcc38ed823d3182a474 Mon Sep 17 00:00:00 2001
From: Vineet Gupta <vgupta(a)synopsys.com>
Date: Mon, 10 Dec 2018 16:56:45 -0800
Subject: [PATCH] ARC: mm: do_page_fault fixes #1: relinquish mmap_sem if
signal arrives while handle_mm_fault
do_page_fault() forgot to relinquish mmap_sem if a signal came while
handle_mm_fault() was in progress - due to, say, a Ctrl+C or OOM kill.
This would later cause a deadlock by acquiring it twice.
This came to light when running the libc testsuite tst-tls3-malloc test but
is likely also the cause of previously seen LTP failures. Using lockdep
clearly showed what the issue was.
| # while true; do ./tst-tls3-malloc ; done
| Didn't expect signal from child: got `Segmentation fault'
| ^C
| ============================================
| WARNING: possible recursive locking detected
| 4.17.0+ #25 Not tainted
| --------------------------------------------
| tst-tls3-malloc/510 is trying to acquire lock:
| 606c7728 (&mm->mmap_sem){++++}, at: __might_fault+0x28/0x5c
|
|but task is already holding lock:
|606c7728 (&mm->mmap_sem){++++}, at: do_page_fault+0x9c/0x2a0
|
| other info that might help us debug this:
| Possible unsafe locking scenario:
|
| CPU0
| ----
| lock(&mm->mmap_sem);
| lock(&mm->mmap_sem);
|
| *** DEADLOCK ***
|
------------------------------------------------------------
What the change does is not obvious (note to myself)
Prior code was:
| do_page_fault
|
| down_read() <-- lock taken
| handle_mm_fault <-- signal pending as this runs
| if fatal_signal_pending
| if VM_FAULT_ERROR
| up_read
| if user_mode
| return <-- lock still held, this was the BUG
New code:
| do_page_fault
|
| down_read() <-- lock taken
| handle_mm_fault <-- signal pending as this runs
| if fatal_signal_pending
| if VM_FAULT_RETRY
| return <-- not same case as above, but still OK since
| core mm already relinq lock for FAULT_RETRY
| ...
|
| < Now falls through for bug case above >
|
| up_read() <-- lock relinquished
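For illustration, here is a tiny userspace model of the leaked-lock case
(a hedged sketch; mmap_sem_held and the VM_FAULT_* values are simplified
stand-ins, not the arch/arc code):

#include <stdbool.h>
#include <stdio.h>

#define VM_FAULT_ERROR  0x1
#define VM_FAULT_RETRY  0x2

static bool mmap_sem_held;                             /* stands in for mm->mmap_sem */

/* condensed pre-fix flow from the "prior code" diagram above */
static void do_page_fault_old(unsigned int fault, bool fatal_signal, bool user_mode)
{
	mmap_sem_held = true;                          /* down_read() */

	if (fatal_signal) {
		if ((fault & VM_FAULT_ERROR) && !(fault & VM_FAULT_RETRY))
			mmap_sem_held = false;         /* up_read() */
		if (user_mode)
			return;                        /* BUG: lock may still be held */
	}

	mmap_sem_held = false;                         /* normal up_read() */
}

int main(void)
{
	/* fatal signal during handle_mm_fault(), fault has neither ERROR
	 * nor RETRY set, and the fault came from user mode */
	do_page_fault_old(0, true, true);

	printf("mmap_sem still held on return: %s\n",
	       mmap_sem_held ? "yes -> next down_read() deadlocks" : "no");
	return 0;
}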
Cc: stable(a)vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta(a)synopsys.com>
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index a1d723197084..8df1638259f3 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -141,12 +141,17 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
*/
fault = handle_mm_fault(vma, address, flags);
- /* If Pagefault was interrupted by SIGKILL, exit page fault "early" */
if (fatal_signal_pending(current)) {
- if ((fault & VM_FAULT_ERROR) && !(fault & VM_FAULT_RETRY))
- up_read(&mm->mmap_sem);
- if (user_mode(regs))
+
+ /*
+ * if fault retry, mmap_sem already relinquished by core mm
+ * so OK to return to user mode (with signal handled first)
+ */
+ if (fault & VM_FAULT_RETRY) {
+ if (!user_mode(regs))
+ goto no_context;
return;
+ }
}
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
The patch below does not apply to the 4.20-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From f731a8e89f8c78985707c626680f3e24c7a60772 Mon Sep 17 00:00:00 2001
From: Vineet Gupta <vgupta(a)synopsys.com>
Date: Tue, 18 Dec 2018 10:39:58 -0800
Subject: [PATCH] ARC: show_regs: lockdep: re-enable preemption
The signal handling core calls show_regs() with preemption disabled, which
on ARC takes mmap_sem for mm/vma access, causing a lockdep splat.
| [ARCLinux]# ./segv-null-ptr
| potentially unexpected fatal signal 11.
| BUG: sleeping function called from invalid context at kernel/fork.c:1011
| in_atomic(): 1, irqs_disabled(): 0, pid: 70, name: segv-null-ptr
| no locks held by segv-null-ptr/70.
| CPU: 0 PID: 70 Comm: segv-null-ptr Not tainted 4.18.0+ #69
|
| Stack Trace:
| arc_unwind_core+0xcc/0x100
| ___might_sleep+0x17a/0x190
| mmput+0x16/0xb8
| show_regs+0x52/0x310
| get_signal+0x5ee/0x610
| do_signal+0x2c/0x218
| resume_user_mode_begin+0x90/0xd8
Work around this by re-enabling preemption temporarily.
Note that the preemption disabling in core code around show_regs()
was introduced by commit 3a9f84d354ce ("signals, debug: fix BUG: using
smp_processor_id() in preemptible code in print_fatal_signal()")
to silence a different lockdep splat seen on x86 back in 2009.
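A toy userspace model of what trips the warning and how the workaround
avoids it (an assumption-laden sketch; preempt_count and might_sleep here
are simplified stand-ins, not the kernel implementations):

#include <stdio.h>

static int preempt_count;

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

/* models the debug check that produced the splat above */
static void might_sleep(const char *who)
{
	if (preempt_count > 0)
		printf("BUG: sleeping function called from invalid context (%s)\n", who);
}

/* show_regs() on ARC may take mmap_sem / call mmput(), which can sleep */
static void show_regs_model(int with_workaround)
{
	if (with_workaround)
		preempt_enable();       /* temporarily re-enable, as in the patch */

	might_sleep("mmput/show_regs");

	if (with_workaround)
		preempt_disable();      /* restore the caller's preemption state */
}

int main(void)
{
	preempt_disable();              /* signal core disables preemption here */
	show_regs_model(0);             /* old behaviour: warning fires */
	show_regs_model(1);             /* with the workaround: silent */
	preempt_enable();
	return 0;
}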
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Vineet Gupta <vgupta(a)synopsys.com>
diff --git a/arch/arc/kernel/troubleshoot.c b/arch/arc/kernel/troubleshoot.c
index 5c6663321e87..215f515442e0 100644
--- a/arch/arc/kernel/troubleshoot.c
+++ b/arch/arc/kernel/troubleshoot.c
@@ -179,6 +179,12 @@ void show_regs(struct pt_regs *regs)
struct task_struct *tsk = current;
struct callee_regs *cregs;
+ /*
+ * generic code calls us with preemption disabled, but some calls
+ * here could sleep, so re-enable to avoid lockdep splat
+ */
+ preempt_enable();
+
print_task_path_n_nm(tsk);
show_regs_print_info(KERN_INFO);
@@ -221,6 +227,8 @@ void show_regs(struct pt_regs *regs)
cregs = (struct callee_regs *)current->thread.callee_reg;
if (cregs)
show_callee_regs(cregs);
+
+ preempt_disable();
}
void show_kernel_fault_diag(const char *str, struct pt_regs *regs,
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e6a72b7daeeb521753803550f0ed711152bb2555 Mon Sep 17 00:00:00 2001
From: Eugeniy Paltsev <Eugeniy.Paltsev(a)synopsys.com>
Date: Mon, 14 Jan 2019 18:16:48 +0300
Subject: [PATCH] ARCv2: lib: memset: fix doing prefetchw outside of buffer
ARCv2 optimized memset uses the PREFETCHW instruction for prefetching the
next cache line, but doesn't ensure that the line is not past the end of
the buffer. PREFETCHW changes the line ownership and marks it dirty,
which can cause issues in SMP configs when the next line was already owned
by another core. Fix the issue by avoiding the PREFETCHW.
Some more details:
The current code has 3 logical loops (ignoring the unaligned part):
(a) Big loop for doing aligned 64 bytes per iteration with PREALLOC
(b) Loop for 32 x 2 bytes with PREFETCHW
(c) any left over bytes
Loop (a) was already eliding the last 64 bytes, so PREALLOC was
safe. The fix is removing PREFETCHW from (b); the sketch below shows how
the old prefetch could run past the end of the buffer.
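Purely as an illustration (a userspace model with an assumed buffer address
and length, not the ARC assembly):

#include <stdio.h>

int main(void)
{
	unsigned long buf = 0x1000;                /* assumed buffer start */
	unsigned long len = 96;                    /* e.g. 3 x 32-byte chunks left */
	unsigned long end = buf + len;

	/* each iteration stores 32 bytes; the old loop (b) also issued
	 * "prefetchw [r3, 32]" for the next line */
	for (unsigned long p = buf; p + 32 <= end; p += 32) {
		unsigned long prefetch = p + 32;   /* line the old code would dirty */
		printf("store 32B @ %#lx, prefetchw %#lx%s\n", p, prefetch,
		       prefetch >= end ? "  <-- past the end of the buffer" : "");
	}
	return 0;
}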
Another potential issue (applicable to configs with 32 or 128 byte L1
cache line) is that PREALLOC assumes 64 byte cache line and may not do
the right thing, especially for 32B. While it would be easy to adapt,
there are no known configs with those line sizes, so for now, just
compile out PREALLOC in such cases.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev(a)synopsys.com>
Cc: stable(a)vger.kernel.org #4.4+
Signed-off-by: Vineet Gupta <vgupta(a)synopsys.com>
[vgupta: rewrote changelog, used asm .macro vs. "C" macro]
diff --git a/arch/arc/lib/memset-archs.S b/arch/arc/lib/memset-archs.S
index 62ad4bcb841a..f230bb7092fd 100644
--- a/arch/arc/lib/memset-archs.S
+++ b/arch/arc/lib/memset-archs.S
@@ -7,11 +7,39 @@
*/
#include <linux/linkage.h>
+#include <asm/cache.h>
-#undef PREALLOC_NOT_AVAIL
+/*
+ * The memset implementation below is optimized to use prefetchw and prealloc
+ * instruction in case of CPU with 64B L1 data cache line (L1_CACHE_SHIFT == 6)
+ * If you want to implement optimized memset for other possible L1 data cache
+ * line lengths (32B and 128B) you should rewrite code carefully checking
+ * we don't call any prefetchw/prealloc instruction for L1 cache lines which
+ * don't belongs to memset area.
+ */
+
+#if L1_CACHE_SHIFT == 6
+
+.macro PREALLOC_INSTR reg, off
+ prealloc [\reg, \off]
+.endm
+
+.macro PREFETCHW_INSTR reg, off
+ prefetchw [\reg, \off]
+.endm
+
+#else
+
+.macro PREALLOC_INSTR
+.endm
+
+.macro PREFETCHW_INSTR
+.endm
+
+#endif
ENTRY_CFI(memset)
- prefetchw [r0] ; Prefetch the write location
+ PREFETCHW_INSTR r0, 0 ; Prefetch the first write location
mov.f 0, r2
;;; if size is zero
jz.d [blink]
@@ -48,11 +76,8 @@ ENTRY_CFI(memset)
lpnz @.Lset64bytes
;; LOOP START
-#ifdef PREALLOC_NOT_AVAIL
- prefetchw [r3, 64] ;Prefetch the next write location
-#else
- prealloc [r3, 64]
-#endif
+ PREALLOC_INSTR r3, 64 ; alloc next line w/o fetching
+
#ifdef CONFIG_ARC_HAS_LL64
std.ab r4, [r3, 8]
std.ab r4, [r3, 8]
@@ -85,7 +110,6 @@ ENTRY_CFI(memset)
lsr.f lp_count, r2, 5 ;Last remaining max 124 bytes
lpnz .Lset32bytes
;; LOOP START
- prefetchw [r3, 32] ;Prefetch the next write location
#ifdef CONFIG_ARC_HAS_LL64
std.ab r4, [r3, 8]
std.ab r4, [r3, 8]
Hi Sasha,
On Wed, Jan 16, 2019 at 01:35:52PM +0000, Sasha Levin wrote:
> Hi,
>
> [This is an automated email]
>
> This commit has been processed because it contains a "Fixes:" tag,
> fixing commit: c6267a51639b usb: dwc3: gadget: align transfers to wMaxPacketSize.
>
> The bot has tested the following trees: v4.20.2, v4.19.15, v4.14.93.
>
> v4.20.2: Build failed! Errors:
> drivers/usb/dwc3/gadget.c:180:5: error: ‘struct dwc3_request’ has no member named ‘needs_extra_trb’
>
> v4.19.15: Build failed! Errors:
> drivers/usb/dwc3/gadget.c:180:5: error: ‘struct dwc3_request’ has no member named ‘needs_extra_trb’
>
> v4.14.93: Build failed! Errors:
> drivers/usb/dwc3/gadget.c:185:5: error: ‘struct dwc3_request’ has no member named ‘needs_extra_trb’
>
>
> How should we proceed with this patch?
I mentioned in my "cover" message of the patch that it needs a slight
rewrite for -stable since the 'needs_extra_trb' member was only introduced in
v5.0-rc1. Now that my patch is in Linus's tree, I can send it separately
for -stable; I'll fix up the commit text slightly to refer to the
variables used before v5.0.
Thanks,
Jack
--
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
From: Vijay Viswanath <vviswana(a)codeaurora.org>
commit 99d570da309813f67e9c741edeff55bafc6c1d5e upstream.
Enable CONFIG_MMC_SDHCI_IO_ACCESSORS so that SDHC controller specific
register read and write APIs, if registered, can be used.
Signed-off-by: Vijay Viswanath <vviswana(a)codeaurora.org>
Acked-by: Adrian Hunter <adrian.hunter(a)intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson(a)linaro.org>
Cc: Koen Vandeputte <koen.vandeputte(a)ncentric.com>
Cc: Loic Poulain <loic.poulain(a)linaro.org>
Signed-off-by: Georgi Djakov <georgi.djakov(a)linaro.org>
---
We got a report [1] that a specific platform (ipq806x) fails to build
after the following commit in the v4.14 stable kernel:
4abb6960f61c ("mmc: sdhci-msm: Disable CDR function on TX")
Let's pick this patch into v4.14.y to automagically select this Kconfig
dependency for platforms which have enabled the driver, but not the
accessors in their kernel config.
[1] https://www.spinics.net/lists/linux-mmc/msg52983.html
drivers/mmc/host/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index 8c15637178ff..9ed786935a30 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -429,6 +429,7 @@ config MMC_SDHCI_MSM
tristate "Qualcomm SDHCI Controller Support"
depends on ARCH_QCOM || (ARM && COMPILE_TEST)
depends on MMC_SDHCI_PLTFM
+ select MMC_SDHCI_IO_ACCESSORS
help
This selects the Secure Digital Host Controller Interface (SDHCI)
support present in Qualcomm SOCs. The controller supports
Please pick this commit for the 4.9 and 4.14 stable branches:
commit f6f2a4a2eb92bc73671204198bb2f8ab53ff59fb
Author: Paolo Abeni <pabeni(a)redhat.com>
Date: Fri Jul 6 12:30:20 2018 +0200
ipfrag: really prevent allocation on netns exit
It's not needed on earlier branches unless other commits are also applied.
Ben.
--
Ben Hutchings, Software Developer Codethink Ltd
https://www.codethink.co.uk/ Dale House, 35 Dale Street
Manchester, M1 2HF, United Kingdom