From: Tobias Schramm <t.schramm(a)manjaro.org>
[ Upstream commit ae1ba50f1e706dfd7ce402ac52c1f1f10becad68 ]
Previously the stm32h7 interrupt thread cleared all non-masked interrupt flags.
If an interrupt occurred while another interrupt was being handled, its flag
would be cleared without the interrupt being serviced, resulting in a lost
interrupt.
This patch fixes the issue by clearing only the interrupt flags that are
currently set.
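To illustrate the race, here is a minimal sketch with made-up register
offsets and MMIO accessors; it is not the actual driver code:

  #include <stdint.h>

  /* Hypothetical MMIO helpers and register offsets, for illustration only. */
  extern uint32_t mmio_read(uintptr_t reg);
  extern void mmio_write(uintptr_t reg, uint32_t val);

  #define SR_REG   0x08u   /* pending interrupt flags (assumed offset)   */
  #define IFCR_REG 0x18u   /* write-1-to-clear register (assumed offset) */

  void irq_thread(uint32_t enabled_mask)
  {
          uint32_t sr = mmio_read(SR_REG);  /* snapshot of flags at entry */

          /*
           * ... service the events recorded in sr; a new interrupt may set
           * another SR_REG bit while this runs ...
           */

          /*
           * Old behaviour: ack every unmasked flag, including one raised
           * after the snapshot, so that event is silently lost:
           *
           *   mmio_write(IFCR_REG, enabled_mask);
           *
           * Fixed behaviour: ack only the flags that were actually observed,
           * leaving any later event pending for the next thread invocation.
           */
          mmio_write(IFCR_REG, sr & enabled_mask);
  }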
Signed-off-by: Tobias Schramm <t.schramm(a)manjaro.org>
Link: https://lore.kernel.org/r/20200804195136.1485392-1-t.schramm@manjaro.org
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
drivers/spi/spi-stm32.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
index 44ac6eb3298d4..ef3be03574e80 100644
--- a/drivers/spi/spi-stm32.c
+++ b/drivers/spi/spi-stm32.c
@@ -964,7 +964,7 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
if (!spi->cur_usedma && (spi->rx_buf && (spi->rx_len > 0)))
stm32h7_spi_read_rxfifo(spi, false);
- writel_relaxed(mask, spi->base + STM32H7_SPI_IFCR);
+ writel_relaxed(sr & mask, spi->base + STM32H7_SPI_IFCR);
spin_unlock_irqrestore(&spi->lock, flags);
--
2.25.1
The patch titled
Subject: mm: slub: fix conversion of freelist_corrupted()
has been added to the -mm tree. Its filename is
mm-slub-fix-conversion-of-freelist_corrupted.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/mm-slub-fix-conversion-of-freelis…
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/mm-slub-fix-conversion-of-freelis…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included in linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Eugeniu Rosca <erosca(a)de.adit-jv.com>
Subject: mm: slub: fix conversion of freelist_corrupted()
Commit 52f23478081ae0 ("mm/slub.c: fix corrupted freechain in
deactivate_slab()") was reworked when it was picked up from LKML [1].
Specifically, relocating 'freelist = NULL' into 'freelist_corrupted()'
turned it into an assignment to the function's local copy of the pointer,
i.e. a no-op for the caller. Fix it by sticking to the behavior intended
in the original patch [1]: pass the freelist by reference so the function
can actually clear it. In addition, make freelist_corrupted() immune to
being passed NULL instead of &freelist.
The issue was spotted via static analysis and code review.
[1] https://lore.kernel.org/linux-mm/20200331031450.12182-1-dongli.zhang@oracle…
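The root cause is plain C parameter-passing semantics. A standalone sketch
(hypothetical helper names, unrelated to the slub code itself):

  #include <stdio.h>

  /* Assigning to a pointer passed by value only changes the local copy. */
  static void clear_by_value(void *p)       { p = NULL;   }  /* no-op for caller */
  static void clear_by_reference(void **pp) { *pp = NULL; }  /* caller sees NULL */

  int main(void)
  {
          int x = 42;
          void *freelist = &x;

          clear_by_value(freelist);
          printf("after by-value:     %p\n", freelist);  /* still &x */

          clear_by_reference(&freelist);
          printf("after by-reference: %p\n", freelist);  /* (nil) */
          return 0;
  }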
Link: https://lkml.kernel.org/r/20200824130643.10291-1-erosca@de.adit-jv.com
Fixes: 52f23478081ae0 ("mm/slub.c: fix corrupted freechain in deactivate_slab()")
Signed-off-by: Eugeniu Rosca <erosca(a)de.adit-jv.com>
Cc: Dongli Zhang <dongli.zhang(a)oracle.com>
Cc: Joe Jin <joe.jin(a)oracle.com>
Cc: Christoph Lameter <cl(a)linux.com>
Cc: Pekka Enberg <penberg(a)kernel.org>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/slub.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
--- a/mm/slub.c~mm-slub-fix-conversion-of-freelist_corrupted
+++ a/mm/slub.c
@@ -672,12 +672,12 @@ static void slab_fix(struct kmem_cache *
}
static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
- void *freelist, void *nextfree)
+ void **freelist, void *nextfree)
{
if ((s->flags & SLAB_CONSISTENCY_CHECKS) &&
- !check_valid_pointer(s, page, nextfree)) {
- object_err(s, page, freelist, "Freechain corrupt");
- freelist = NULL;
+ !check_valid_pointer(s, page, nextfree) && freelist) {
+ object_err(s, page, *freelist, "Freechain corrupt");
+ *freelist = NULL;
slab_fix(s, "Isolate corrupted freechain");
return true;
}
@@ -1494,7 +1494,7 @@ static inline void dec_slabs_node(struct
int objects) {}
static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
- void *freelist, void *nextfree)
+ void **freelist, void *nextfree)
{
return false;
}
@@ -2184,7 +2184,7 @@ static void deactivate_slab(struct kmem_
* 'freelist' is already corrupted. So isolate all objects
* starting at 'freelist'.
*/
- if (freelist_corrupted(s, page, freelist, nextfree))
+ if (freelist_corrupted(s, page, &freelist, nextfree))
break;
do {
_
Patches currently in -mm which might be from erosca(a)de.adit-jv.com are
mm-slub-fix-conversion-of-freelist_corrupted.patch
The patch titled
Subject: mm: slub: fix conversion of freelist_corrupted()
has been removed from the -mm tree. Its filename was
mm-slub-fix-conversion-of-freelist_corrupted.patch
This patch was dropped because an updated version will be merged
------------------------------------------------------
From: Eugeniu Rosca <erosca(a)de.adit-jv.com>
Subject: mm: slub: fix conversion of freelist_corrupted()
Commit 52f23478081ae0 ("mm/slub.c: fix corrupted freechain in
deactivate_slab()") was reworked when it was picked up from LKML [1].
Specifically, relocating 'freelist = NULL' into 'freelist_corrupted()'
turned it into a no-op assignment to the function's local copy of the
pointer. Fix it by sticking to the behavior intended in the original
patch [1], preferring the lowest-line-count solution: keep the function
signature unchanged and have the caller clear 'freelist' itself.
The issue was found through static analysis and code review, so I cannot
point to a specific runtime misbehavior that this fixes.
[1] https://lore.kernel.org/linux-mm/20200331031450.12182-1-dongli.zhang@oracle…
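A compact sketch of this lower-line-count variant (illustrative stand-in
code, not the actual mm/slub.c helpers):

  #include <stdbool.h>
  #include <stddef.h>

  /* Stand-in for the SLAB_CONSISTENCY_CHECKS validity test: the helper
   * keeps its by-value signature and only reports corruption. */
  static bool chain_corrupted(const void *freelist, const void *nextfree)
  {
          (void)freelist;
          return nextfree == NULL;
  }

  /* The caller clears its own local 'freelist' when corruption is
   * reported, mirroring the deactivate_slab() hunk below. */
  static void *deactivate_like(void *freelist, void *nextfree)
  {
          if (chain_corrupted(freelist, nextfree)) {
                  freelist = NULL;   /* isolate every object on the page */
                  return freelist;   /* i.e. break out of the unlink loop */
          }
          /* ... otherwise continue unlinking objects ... */
          return freelist;
  }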
Link: http://lkml.kernel.org/r/20200811124656.10308-1-erosca@de.adit-jv.com
Fixes: 52f23478081ae0 ("mm/slub.c: fix corrupted freechain in deactivate_slab()")
Signed-off-by: Eugeniu Rosca <erosca(a)de.adit-jv.com>
Cc: Dongli Zhang <dongli.zhang(a)oracle.com>
Cc: Joe Jin <joe.jin(a)oracle.com>
Cc: Christoph Lameter <cl(a)linux.com>
Cc: Pekka Enberg <penberg(a)kernel.org>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/slub.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--- a/mm/slub.c~mm-slub-fix-conversion-of-freelist_corrupted
+++ a/mm/slub.c
@@ -677,7 +677,6 @@ static bool freelist_corrupted(struct km
if ((s->flags & SLAB_CONSISTENCY_CHECKS) &&
!check_valid_pointer(s, page, nextfree)) {
object_err(s, page, freelist, "Freechain corrupt");
- freelist = NULL;
slab_fix(s, "Isolate corrupted freechain");
return true;
}
@@ -2184,8 +2183,10 @@ static void deactivate_slab(struct kmem_
* 'freelist' is already corrupted. So isolate all objects
* starting at 'freelist'.
*/
- if (freelist_corrupted(s, page, freelist, nextfree))
+ if (freelist_corrupted(s, page, freelist, nextfree)) {
+ freelist = NULL;
break;
+ }
do {
prior = page->freelist;
_
Patches currently in -mm which might be from erosca(a)de.adit-jv.com are
When /proc/<pid>/stat is read (e.g. via cat), do_task_stat() calls into
cputime_adjust(); the call stack looks like this:
[17179954.674326]BookE Watchdog detected hard LOCKUP on cpu 0
[17179954.674331]dCPU: 0 PID: 1262 Comm: TICK Tainted: P W O 4.4.176 #1
[17179954.674339]dtask: dc9d7040 task.stack: d3cb4000
[17179954.674344]NIP: c001b1a8 LR: c006a7ac CTR: 00000000
[17179954.674349]REGS: e6fe1f10 TRAP: 3202 Tainted: P W O (4.4.176)
[17179954.674355]MSR: 00021002 <CE,ME> CR: 28002224 XER: 00000000
[17179954.674364]
GPR00: 00000016 d3cb5cb0 dc9d7040 d3cb5cc0 00000000 0000025d ffe15b24 ffffffff
GPR08: de86aead 00000000 000003ff ffffffff 28002222 0084d1c0 00000000 ffffffff
GPR16: b5929ca0 b4bb7a48 c0863c08 0000048d 00000062 00000062 00000000 0000000f
GPR24: 00000000 d3cb5d08 d3cb5d60 d3cb5d64 00029002 d3e9c214 fffff30e d3e9c20c
[17179954.674410]NIP [c001b1a8] __div64_32+0x60/0xa0
[17179954.674422]LR [c006a7ac] cputime_adjust+0x124/0x138
[17179954.674434]Call Trace:
[17179961.832693]Call Trace:
[17179961.832695][d3cb5cb0] [c006a6dc] cputime_adjust+0x54/0x138 (unreliable)
[17179961.832705][d3cb5cf0] [c006a818] task_cputime_adjusted+0x58/0x80
[17179961.832713][d3cb5d20] [c01dab44] do_task_stat+0x298/0x870
[17179961.832720][d3cb5de0] [c01d4948] proc_single_show+0x60/0xa4
[17179961.832728][d3cb5e10] [c01963d8] seq_read+0x2d8/0x52c
[17179961.832736][d3cb5e80] [c01702fc] __vfs_read+0x40/0x114
[17179961.832744][d3cb5ef0] [c0170b1c] vfs_read+0x9c/0x10c
[17179961.832751][d3cb5f10] [c0171440] SyS_read+0x68/0xc4
[17179961.832759][d3cb5f40] [c0010a40] ret_from_syscall+0x0/0x3c
do_task_stat->task_cputime_adjusted->cputime_adjust->scale_stime->div_u64
->div_u64_rem->do_div->__div64_32
In some corner cases stime + utime can overflow to 0. Even in the v5.8.2
kernel, where cputime has changed from unsigned long to u64, after about
200 days the lower 32 bits of the sum will be 0x00000000. Because the
divisor passed down to __div64_32 is an unsigned long, which is 32 bits
wide on 32-bit powerpc, the bug still exists.
So this is also a bug in cputime_adjust, which does not check whether
stime + utime is 0 before doing:

	time = scale_stime((__force u64)stime, (__force u64)rtime,
			   (__force u64)(stime + utime));

Commit 3dc167ba5729 ("sched/cputime: Improve cputime_adjust()") in the
mainline kernel may have fixed this case, but it is still better to check
for a zero divisor in __div64_32 to cover other situations.
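To make the overflow concrete, here is a self-contained illustration with
made-up tick values; it is not kernel code:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          /* Hypothetical accumulated times whose sum has zero low bits. */
          uint64_t stime = 0x300000000ULL;
          uint64_t utime = 0x100000000ULL;

          /* A 32-bit do_div()/__div64_32 path only sees the low word. */
          uint32_t divisor = (uint32_t)(stime + utime);  /* 0x400000000 -> 0 */

          printf("divisor = %u\n", divisor);
          if (divisor == 0)
                  puts("scale_stime() would divide by zero in __div64_32");
          return 0;
  }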
Signed-off-by: Guohua Zhong <zhongguohua1(a)huawei.com>
Fixes: 14cf11af6cf6 ("powerpc: Merge enough to start building in arch/powerpc")
Fixes: 94b212c29f68 ("powerpc: Move ppc64 boot wrapper code over to arch/powerpc")
Cc: stable(a)vger.kernel.org # v2.6.15+
---
arch/powerpc/boot/div64.S | 4 ++++
arch/powerpc/lib/div64.S | 4 ++++
2 files changed, 8 insertions(+)
diff --git a/arch/powerpc/boot/div64.S b/arch/powerpc/boot/div64.S
index 4354928ed62e..39a25b9712d1 100644
--- a/arch/powerpc/boot/div64.S
+++ b/arch/powerpc/boot/div64.S
@@ -13,6 +13,9 @@
.globl __div64_32
__div64_32:
+ li r9,0
+ cmplw r4,r9 # check if divisor r4 is zero
+ beq 5f # jump to label 5 if r4(divisor) is zero
lwz r5,0(r3) # get the dividend into r5/r6
lwz r6,4(r3)
cmplw r5,r4
@@ -52,6 +55,7 @@ __div64_32:
4: stw r7,0(r3) # return the quotient in *r3
stw r8,4(r3)
mr r3,r6 # return the remainder in r3
+5: # return if divisor r4 is zero
blr
/*
diff --git a/arch/powerpc/lib/div64.S b/arch/powerpc/lib/div64.S
index 3d5426e7dcc4..1cc9bcabf678 100644
--- a/arch/powerpc/lib/div64.S
+++ b/arch/powerpc/lib/div64.S
@@ -13,6 +13,9 @@
#include <asm/processor.h>
_GLOBAL(__div64_32)
+ li r9,0
+ cmplw r4,r9 # check if divisor r4 is zero
+ beq 5f # jump to label 5 if r4(divisor) is zero
lwz r5,0(r3) # get the dividend into r5/r6
lwz r6,4(r3)
cmplw r5,r4
@@ -52,4 +55,5 @@ _GLOBAL(__div64_32)
4: stw r7,0(r3) # return the quotient in *r3
stw r8,4(r3)
mr r3,r6 # return the remainder in r3
+5: # return if divisor r4 is zero
blr
--
2.12.3
Adding stable(a)vger.kernel.org. Please see the original message regarding this
OOB memory access security fix. The patch is intended for the LTS branches
4.19.y, 4.14.y, 4.9.y and 4.4.y.
Thanks,
Will