From: Palmer Dabbelt <palmer(a)rivosinc.com>
The SBI spec defines SBI calls as following the standard calling
convention, but we don't actually inform GCC of that when making an
ecall. Unfortunately this does actually manifest for the more complex
SBI call wrappers, for example sbi_s, for example sbi_send_ipi_v02()
uses t1.
This patch just marks sbi_ecall() as noinline, which implicitly enforces
the standard calling convention.
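To make the problem concrete, the wrapper's body looks roughly like the
following (a condensed sketch of arch/riscv/kernel/sbi.c, not an exact
quote). The asm statement constrains only a0-a7 plus memory, so nothing
tells the compiler that the SBI implementation may clobber other
caller-saved registers such as t1:

struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
			unsigned long arg1, unsigned long arg2,
			unsigned long arg3, unsigned long arg4,
			unsigned long arg5)
{
	struct sbiret ret;

	/* Bind each argument to the register the SBI ABI assigns it. */
	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);
	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);
	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);
	register uintptr_t a3 asm ("a3") = (uintptr_t)(arg3);
	register uintptr_t a4 asm ("a4") = (uintptr_t)(arg4);
	register uintptr_t a5 asm ("a5") = (uintptr_t)(arg5);
	register uintptr_t a6 asm ("a6") = (uintptr_t)(fid);
	register uintptr_t a7 asm ("a7") = (uintptr_t)(ext);

	/* Only a0/a1 are declared clobbered; t0-t6 etc. are not. */
	asm volatile ("ecall"
		      : "+r" (a0), "+r" (a1)
		      : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
		      : "memory");

	ret.error = a0;
	ret.value = a1;
	return ret;
}

With the function kept out of line, the standard calling convention
already forces callers to assume the whole caller-saved set is dead
across the call, which is the guarantee the SBI spec requires.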
Fixes: b9dcd9e41587 ("RISC-V: Add basic support for SBI v0.2")
Cc: stable(a)vger.kernel.org
Reported-by: Atish Patra <atishp(a)rivosinc.com>
Signed-off-by: Palmer Dabbelt <palmer(a)rivosinc.com>
---
This is more of a stop-gap fix than anything else, but it's small enough
that it should be straightforward to backport to stable. In theory this
bug has existed forever, but none of this was specified in SBI-0.1, so
backporting to the introduction of 0.2 should be sufficient.
No extant versions of OpenSBI or BBL will manifest issues here, as they
save all registers, but the spec is quite explicit, so we're better off
getting back in line sooner rather than later.
There'll be some marginal performance impact here. I'll send a
follow-on to clean up the SBI call wrappers in a way that allows
inlining without violating the spec, but that'll be a bigger change and
thus isn't really suitable for stable.
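As a rough idea of the direction (a sketch of my own, not the actual
follow-on patch): the asm statement could name every caller-saved
register the SBI implementation is allowed to trash, at which point the
call boundary is no longer needed to enforce the convention and the
wrapper could be inlined again:

	/* Hypothetical: clobber the full caller-saved set explicitly. */
	asm volatile ("ecall"
		      : "+r" (a0), "+r" (a1)
		      : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
		      : "memory", "ra", "t0", "t1", "t2", "t3", "t4", "t5", "t6");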
---
arch/riscv/kernel/sbi.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/riscv/kernel/sbi.c b/arch/riscv/kernel/sbi.c
index f72527fcb347..7be586f5dc69 100644
--- a/arch/riscv/kernel/sbi.c
+++ b/arch/riscv/kernel/sbi.c
@@ -21,6 +21,11 @@ static int (*__sbi_rfence)(int fid, const struct cpumask *cpu_mask,
unsigned long start, unsigned long size,
unsigned long arg4, unsigned long arg5) __ro_after_init;
+/*
+ * This ecall stub can't be inlined because we're relying on the presence of a
+ * function call to enforce the calling convention.
+ */
+noinline
struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
unsigned long arg1, unsigned long arg2,
unsigned long arg3, unsigned long arg4,
--
2.34.1
The patch titled
Subject: mm/util.c: make kvfree() safe for calling while holding spinlocks
has been added to the -mm tree. Its filename is
mm-utilc-make-kvfree-safe-for-calling-while-holding-spinlocks.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/mm-utilc-make-kvfree-safe-for-cal…
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/mm-utilc-make-kvfree-safe-for-cal…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Manfred Spraul <manfred(a)colorfullife.com>
Subject: mm/util.c: make kvfree() safe for calling while holding spinlocks
One codepath in find_alloc_undo() calls kvfree() while holding a spinlock.
Since vfree() can sleep, this is a bug.
Previously, the code path used kfree(), which is safe to call while
holding a spinlock.
Minghao proposed fixing this by updating find_alloc_undo().
This is an alternate proposal: instead of changing find_alloc_undo(),
change kvfree() so that the same rules as for kfree() apply, since
having different rules for kfree() and kvfree() just asks for bugs.
Disadvantage: releasing vmalloc'ed memory will be delayed a bit.
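For context, this is roughly the pattern in find_alloc_undo() that the
change makes legal (simplified from ipc/sem.c; the surrounding logic is
omitted and the locals are illustrative):

	spin_lock(&ulp->lock);
	/* ... lost the race to another allocator of the same undo ... */
	kvfree(new);	/* vmalloc'ed memory now goes through
			 * vfree_atomic(), which queues the area for
			 * deferred freeing instead of sleeping here */
	spin_unlock(&ulp->lock);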
Link: https://lkml.kernel.org/r/20211222194828.15320-1-manfred@colorfullife.com
Link: https://lore.kernel.org/all/20211222081026.484058-1-chi.minghao@zte.com.cn/
Fixes: fc37a3b8b438 ("[PATCH] ipc sem: use kvmalloc for sem_undo allocation")
Signed-off-by: Manfred Spraul <manfred(a)colorfullife.com>
Reported-by: Zeal Robot <zealci(a)zte.com.cn>
Reported-by: Minghao Chi <chi.minghao(a)zte.com.cn>
Cc: Vasily Averin <vvs(a)virtuozzo.com>
Cc: CGEL ZTE <cgel.zte(a)gmail.com>
Cc: Shakeel Butt <shakeelb(a)google.com>
Cc: Randy Dunlap <rdunlap(a)infradead.org>
Cc: Davidlohr Bueso <dbueso(a)suse.de>
Cc: Bhaskar Chowdhury <unixbhaskar(a)gmail.com>
Cc: Arnd Bergmann <arnd(a)arndb.de>
Cc: Uladzislau Rezki <urezki(a)gmail.com>
Cc: Michal Hocko <mhocko(a)kernel.org>
Cc: <1vier1(a)web.de>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/util.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/mm/util.c~mm-utilc-make-kvfree-safe-for-calling-while-holding-spinlocks
+++ a/mm/util.c
@@ -603,12 +603,12 @@ EXPORT_SYMBOL(kvmalloc_node);
* It is slightly more efficient to use kfree() or vfree() if you are certain
* that you know which one to use.
*
- * Context: Either preemptible task context or not-NMI interrupt.
+ * Context: Any context except NMI interrupt.
*/
void kvfree(const void *addr)
{
if (is_vmalloc_addr(addr))
- vfree(addr);
+ vfree_atomic(addr);
else
kfree(addr);
}
_
Patches currently in -mm which might be from manfred(a)colorfullife.com are
mm-utilc-make-kvfree-safe-for-calling-while-holding-spinlocks.patch
The patch titled
Subject: mm/util.c: make kvfree() safe for calling while holding spinlocks
has been removed from the -mm tree. Its filename was
mm-utilc-make-kvfree-safe-for-calling-while-holding-spinlocks.patch
This patch was dropped because an alternative patch was merged
------------------------------------------------------