From: ChiYuan Huang <cy_huang(a)richtek.com>
To make sure the LED enters the off state after the file handle is closed,
proactively configure LED_MODE to NONE. This guarantees that whether the
previous state was torch or strobe mode, the final state will be off.
Cc: stable(a)vger.kernel.org
Fixes: 42bd6f59ae90 ("media: Add registration helpers for V4L2 flash sub-devices")
Reported-by: Roger Wang <roger-hy.wang(a)mediatek.com>
Signed-off-by: ChiYuan Huang <cy_huang(a)richtek.com>
---
Hi,
We encountered an issue: when the upper-layer camera process crashes and
the new process does not reinitialize the LED, the LED keeps its previous
state, whether that was torch or strobe mode.
The OS handles resource management, so when the process crashes or is
terminated, the 'close' API is called to release its resources. That is
why we add the explicit action to turn the LED off when the file handle
close is called.
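To illustrate the failure scenario, here is a minimal userspace sketch
(the subdev node name is hypothetical; the controls are the standard
V4L2 flash controls):

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	int main(void)
	{
		int fd = open("/dev/v4l-subdev0", O_RDWR);
		struct v4l2_control ctrl = {
			.id = V4L2_CID_FLASH_LED_MODE,
			.value = V4L2_FLASH_LED_MODE_TORCH,
		};

		ioctl(fd, VIDIOC_S_CTRL, &ctrl);
		/* If the process crashes or exits here without resetting
		 * the mode, the kernel still releases the file handle and
		 * the subdev close op runs; with this patch that path now
		 * forces the LED back to off. */
		return 0;
	}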
---
drivers/media/v4l2-core/v4l2-flash-led-class.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/media/v4l2-core/v4l2-flash-led-class.c b/drivers/media/v4l2-core/v4l2-flash-led-class.c
index 355595a0fefa..347b37f3ef69 100644
--- a/drivers/media/v4l2-core/v4l2-flash-led-class.c
+++ b/drivers/media/v4l2-core/v4l2-flash-led-class.c
@@ -623,6 +623,12 @@ static int v4l2_flash_close(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
return 0;
if (led_cdev) {
+ /* Make sure the LED enters the off state when the file handle is released */
+ ret = v4l2_ctrl_s_ctrl(v4l2_flash->ctrls[LED_MODE],
+ V4L2_FLASH_LED_MODE_NONE);
+ if (ret)
+ return ret;
+
mutex_lock(&led_cdev->led_access);
if (v4l2_flash->ctrls[STROBE_SOURCE])
--
2.34.1
The quilt patch titled
Subject: maple_tree: add dead node check in mas_dup_alloc()
has been removed from the -mm tree. Its filename was
maple_tree-add-dead-node-check-in-mas_dup_alloc.patch
This patch was dropped because an updated version will be issued
------------------------------------------------------
From: Boudewijn van der Heide <boudewijn(a)delta-utec.com>
Subject: maple_tree: add dead node check in mas_dup_alloc()
Date: Sat, 3 Jan 2026 17:57:58 +0100
__mt_dup() is exported and can be called without internal locking, relying
on the caller to provide appropriate synchronization. If a caller fails
to hold proper locks, the source tree may be modified concurrently,
potentially resulting in dead nodes during traversal.
The call stack is:
__mt_dup()
  -> mas_dup_build()
    -> mas_dup_alloc() [accesses node->slot[]]
mas_dup_alloc() may access node slots without first verifying that the
node is still alive. If a dead node is encountered, its memory layout may
have been switched to the RCU union member, making slot array access
undefined behavior as we would be reading from the rcu_head structure
instead.
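For context, a trimmed sketch of the node layout (see struct maple_node
in include/linux/maple_tree.h for the full definition; fields elided
here): the RCU members share storage with the parent/slot layout, so
reading slot[] from a dead node actually reads rcu_head bytes.

	struct maple_node {
		union {
			struct {
				struct maple_pnode *parent;
				void __rcu *slot[MAPLE_NODE_SLOTS];
			};
			struct {
				void *pad;
				struct rcu_head rcu;	/* overlays slot[] */
				/* ... */
			};
			/* ... typed range layouts ... */
		};
	};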
If __mt_dup() is invoked without the required external locking and the
source tree is concurrently modified, a node can transition to the dead
RCU layout while mas_dup_alloc() is still traversing it. In that case the
code may interpret the rcu_head contents as slot pointers.
Practically, this could lead to invalid pointer dereferences (kernel oops)
or corruption of the duplicated tree. Depending on how that duplicated
tree is later used (e.g. in mm/VMA paths), the effects could be
userspace-visible, such as fork() failures, process crashes, or broader
system instability.
My understanding is that current in-tree users hold the appropriate locks
and should not hit this, as triggering it requires violating the
__mt_dup() synchronization contract. The risk primarily comes from the
fact that __mt_dup() is exported (EXPORT_SYMBOL), making it reachable by
out-of-tree modules or future callers which may not follow the locking
rules.
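For reference, the main in-tree caller duplicates the VMA tree at fork()
time with both mm's write-locked; a condensed, hedged sketch of that
pattern (error handling trimmed, see dup_mmap() for the real code):

	if (mmap_write_lock_killable(oldmm))
		return -EINTR;
	mmap_write_lock_nested(mm, SINGLE_DEPTH_NESTING);
	/* The source tree cannot change mid-copy, so no dead nodes
	 * can appear during the traversal. */
	retval = __mt_dup(&oldmm->mm_mt, &mm->mm_mt, GFP_KERNEL);
	/* ... walk the new tree and fix up per-VMA state ... */
	mmap_write_unlock(mm);
	mmap_write_unlock(oldmm);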
Add an explicit dead node check to detect concurrent modification during
duplication. When a dead node is detected, return -EBUSY to indicate that
the tree is undergoing concurrent modification.
Link: https://lkml.kernel.org/r/20260103165758.74094-1-boudewijn@delta-utec.com
Fixes: fd32e4e9b764 ("maple_tree: introduce interfaces __mt_dup() and mtree_dup()")
Signed-off-by: Boudewijn van der Heide <boudewijn(a)delta-utec.com>
Cc: Alice Ryhl <aliceryhl(a)google.com>
Cc: Andrew Ballance <andrewjballance(a)gmail.com>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
lib/maple_tree.c | 5 +++++
1 file changed, 5 insertions(+)
--- a/lib/maple_tree.c~maple_tree-add-dead-node-check-in-mas_dup_alloc
+++ a/lib/maple_tree.c
@@ -6251,6 +6251,11 @@ static inline void mas_dup_alloc(struct
/* Allocate memory for child nodes. */
type = mte_node_type(mas->node);
new_slots = ma_slots(new_node, type);
+ if (unlikely(ma_dead_node(node))) {
+ mas_set_err(mas, -EBUSY);
+ return;
+ }
+
count = mas->node_request = mas_data_end(mas) + 1;
mas_alloc_nodes(mas, gfp);
if (unlikely(mas_is_err(mas)))
_
Patches currently in -mm which might be from boudewijn(a)delta-utec.com are
The patch below does not apply to the 6.6-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.6.y
git checkout FETCH_HEAD
git cherry-pick -x 45cb47c628dfbd1994c619f3eac271a780602826
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable(a)vger.kernel.org>' --in-reply-to '2026010541-varmint-tablet-f295@gregkh' --subject-prefix 'PATCH 6.6.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 45cb47c628dfbd1994c619f3eac271a780602826 Mon Sep 17 00:00:00 2001
From: Chenghao Duan <duanchenghao(a)kylinos.cn>
Date: Wed, 31 Dec 2025 15:19:20 +0800
Subject: [PATCH] LoongArch: Refactor register restoration in
ftrace_common_return
Refactor the register restoration sequence in the ftrace_common_return
function to clearly distinguish between the logic of normal returns and
direct call returns in function tracing scenarios. The logic is as
follows:
1. In the case of a normal return, the execution flow returns to the
traced function, and ftrace must ensure that the register data is
consistent with the state when the function was entered.
ra = parent return address; t0 = traced function return address.
2. In the case of a direct call return, the execution flow jumps to the
custom trampoline function, and ftrace must ensure that the register
data is consistent with the state when ftrace was entered.
ra = traced function return address; t0 = parent return address.
Cc: stable(a)vger.kernel.org
Fixes: 9cdc3b6a299c ("LoongArch: ftrace: Add direct call support")
Signed-off-by: Chenghao Duan <duanchenghao(a)kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai(a)loongson.cn>
diff --git a/arch/loongarch/kernel/mcount_dyn.S b/arch/loongarch/kernel/mcount_dyn.S
index d6b474ad1d5e..5729c20e5b8b 100644
--- a/arch/loongarch/kernel/mcount_dyn.S
+++ b/arch/loongarch/kernel/mcount_dyn.S
@@ -94,7 +94,6 @@ SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)
* at the callsite, so there is no need to restore the T series regs.
*/
ftrace_common_return:
- PTR_L ra, sp, PT_R1
PTR_L a0, sp, PT_R4
PTR_L a1, sp, PT_R5
PTR_L a2, sp, PT_R6
@@ -104,12 +103,17 @@ ftrace_common_return:
PTR_L a6, sp, PT_R10
PTR_L a7, sp, PT_R11
PTR_L fp, sp, PT_R22
- PTR_L t0, sp, PT_ERA
PTR_L t1, sp, PT_R13
- PTR_ADDI sp, sp, PT_SIZE
bnez t1, .Ldirect
+
+ PTR_L ra, sp, PT_R1
+ PTR_L t0, sp, PT_ERA
+ PTR_ADDI sp, sp, PT_SIZE
jr t0
.Ldirect:
+ PTR_L t0, sp, PT_R1
+ PTR_L ra, sp, PT_ERA
+ PTR_ADDI sp, sp, PT_SIZE
jr t1
SYM_CODE_END(ftrace_common)
@@ -161,6 +165,8 @@ SYM_CODE_END(return_to_handler)
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
SYM_CODE_START(ftrace_stub_direct_tramp)
UNWIND_HINT_UNDEFINED
- jr t0
+ move t1, ra
+ move ra, t0
+ jr t1
SYM_CODE_END(ftrace_stub_direct_tramp)
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
From: Chenghao Duan <duanchenghao(a)kylinos.cn>
commit 45cb47c628dfbd1994c619f3eac271a780602826 upstream.
Refactor the register restoration sequence in the ftrace_common_return
function to clearly distinguish between the logic of normal returns and
direct call returns in function tracing scenarios. The logic is as
follows:
1. In the case of a normal return, the execution flow returns to the
traced function, and ftrace must ensure that the register data is
consistent with the state when the function was entered.
ra = parent return address; t0 = traced function return address.
2. In the case of a direct call return, the execution flow jumps to the
custom trampoline function, and ftrace must ensure that the register
data is consistent with the state when ftrace was entered.
ra = traced function return address; t0 = parent return address.
Cc: stable(a)vger.kernel.org
Fixes: 9cdc3b6a299c ("LoongArch: ftrace: Add direct call support")
Signed-off-by: Chenghao Duan <duanchenghao(a)kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai(a)loongson.cn>
---
arch/loongarch/kernel/mcount_dyn.S | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/arch/loongarch/kernel/mcount_dyn.S b/arch/loongarch/kernel/mcount_dyn.S
index 482aa553aa2d..5fb3a6e5e5c6 100644
--- a/arch/loongarch/kernel/mcount_dyn.S
+++ b/arch/loongarch/kernel/mcount_dyn.S
@@ -93,7 +93,6 @@ SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)
* at the callsite, so there is no need to restore the T series regs.
*/
ftrace_common_return:
- PTR_L ra, sp, PT_R1
PTR_L a0, sp, PT_R4
PTR_L a1, sp, PT_R5
PTR_L a2, sp, PT_R6
@@ -103,12 +102,17 @@ ftrace_common_return:
PTR_L a6, sp, PT_R10
PTR_L a7, sp, PT_R11
PTR_L fp, sp, PT_R22
- PTR_L t0, sp, PT_ERA
PTR_L t1, sp, PT_R13
- PTR_ADDI sp, sp, PT_SIZE
bnez t1, .Ldirect
+
+ PTR_L ra, sp, PT_R1
+ PTR_L t0, sp, PT_ERA
+ PTR_ADDI sp, sp, PT_SIZE
jr t0
.Ldirect:
+ PTR_L t0, sp, PT_R1
+ PTR_L ra, sp, PT_ERA
+ PTR_ADDI sp, sp, PT_SIZE
jr t1
SYM_CODE_END(ftrace_common)
@@ -155,6 +159,8 @@ SYM_CODE_END(return_to_handler)
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
SYM_CODE_START(ftrace_stub_direct_tramp)
- jr t0
+ move t1, ra
+ move ra, t0
+ jr t1
SYM_CODE_END(ftrace_stub_direct_tramp)
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
--
2.47.3
The patch below does not apply to the 6.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.18.y
git checkout FETCH_HEAD
git cherry-pick -x 73721d8676771c6c7b06d4e636cc053fc76afefd
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable(a)vger.kernel.org>' --in-reply-to '2026010547-passenger-getting-5dde@gregkh' --subject-prefix 'PATCH 6.18.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 73721d8676771c6c7b06d4e636cc053fc76afefd Mon Sep 17 00:00:00 2001
From: Chenghao Duan <duanchenghao(a)kylinos.cn>
Date: Wed, 31 Dec 2025 15:19:21 +0800
Subject: [PATCH] LoongArch: BPF: Enhance the bpf_arch_text_poke() function
Enhance the bpf_arch_text_poke() function so that BPF program entry
points can be located accurately.
When modifying the entry point of a BPF program, skip the "move t0, ra"
instruction so that the jump address is computed and copied correctly.
Cc: stable(a)vger.kernel.org
Fixes: 677e6123e3d2 ("LoongArch: BPF: Disable trampoline for kernel module function trace")
Signed-off-by: Chenghao Duan <duanchenghao(a)kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai(a)loongson.cn>
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index 9f6e93343b6e..d1d5a65308b9 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -1309,15 +1309,30 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
{
int ret;
bool is_call;
+ unsigned long size = 0;
+ unsigned long offset = 0;
+ void *image = NULL;
+ char namebuf[KSYM_NAME_LEN];
u32 old_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};
u32 new_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};
/* Only poking bpf text is supported. Since kernel function entry
* is set up by ftrace, we rely on ftrace to poke kernel functions.
*/
- if (!is_bpf_text_address((unsigned long)ip))
+ if (!__bpf_address_lookup((unsigned long)ip, &size, &offset, namebuf))
return -ENOTSUPP;
+ image = ip - offset;
+
+ /* zero offset means we're poking bpf prog entry */
+ if (offset == 0) {
+ /* skip to the nop instruction in bpf prog entry:
+ * move t0, ra
+ * nop
+ */
+ ip = image + LOONGARCH_INSN_SIZE;
+ }
+
is_call = old_t == BPF_MOD_CALL;
ret = emit_jump_or_nops(old_addr, ip, old_insns, is_call);
if (ret)
The patch below does not apply to the 5.15-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-5.15.y
git checkout FETCH_HEAD
git cherry-pick -x 0a63a0e7570b9b2631dfb8d836dc572709dce39e
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable(a)vger.kernel.org>' --in-reply-to '2026010513-quickness-paging-fee4@gregkh' --subject-prefix 'PATCH 5.15.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 0a63a0e7570b9b2631dfb8d836dc572709dce39e Mon Sep 17 00:00:00 2001
From: SeongJae Park <sj(a)kernel.org>
Date: Sat, 1 Nov 2025 11:20:13 -0700
Subject: [PATCH] mm/damon/tests/vaddr-kunit: handle alloc failures on
damon_test_split_evenly_succ()
damon_test_split_evenly_succ() assumes that every dynamic memory
allocation in it will succeed. Success is indeed likely in real use cases
since those allocations are too small to fail, but theoretically they
could fail, and in that case inappropriate memory accesses can happen.
Fix it by cleaning up the pre-allocated memory and skipping the remaining
tests in the failure cases.
Link: https://lkml.kernel.org/r/20251101182021.74868-20-sj@kernel.org
Fixes: 17ccae8bb5c9 ("mm/damon: add kunit tests")
Signed-off-by: SeongJae Park <sj(a)kernel.org>
Cc: Brendan Higgins <brendan.higgins(a)linux.dev>
Cc: David Gow <davidgow(a)google.com>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: <stable(a)vger.kernel.org> [5.15+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
index 1b0f21c2e376..30dc5459f1d2 100644
--- a/mm/damon/tests/vaddr-kunit.h
+++ b/mm/damon/tests/vaddr-kunit.h
@@ -284,10 +284,17 @@ static void damon_test_split_evenly_succ(struct kunit *test,
unsigned long start, unsigned long end, unsigned int nr_pieces)
{
struct damon_target *t = damon_new_target();
- struct damon_region *r = damon_new_region(start, end);
+ struct damon_region *r;
unsigned long expected_width = (end - start) / nr_pieces;
unsigned long i = 0;
+ if (!t)
+ kunit_skip(test, "target alloc fail");
+ r = damon_new_region(start, end);
+ if (!r) {
+ damon_free_target(t);
+ kunit_skip(test, "region alloc fail");
+ }
damon_add_region(r, t);
KUNIT_EXPECT_EQ(test,
damon_va_evenly_split_region(t, r, nr_pieces), 0);
The patch below does not apply to the 5.15-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-5.15.y
git checkout FETCH_HEAD
git cherry-pick -x 2b22d0fcc6320ba29b2122434c1d2f0785fb0a25
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable(a)vger.kernel.org>' --in-reply-to '2026010548-handrail-gallantly-8186@gregkh' --subject-prefix 'PATCH 5.15.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 2b22d0fcc6320ba29b2122434c1d2f0785fb0a25 Mon Sep 17 00:00:00 2001
From: SeongJae Park <sj(a)kernel.org>
Date: Sat, 1 Nov 2025 11:20:11 -0700
Subject: [PATCH] mm/damon/tests/vaddr-kunit: handle alloc failures on
damon_do_test_apply_three_regions()
damon_do_test_apply_three_regions() assumes that every dynamic memory
allocation in it will succeed. Success is indeed likely in real use cases
since those allocations are too small to fail, but theoretically they
could fail, and in that case inappropriate memory accesses can happen.
Fix it by cleaning up the pre-allocated memory and skipping the remaining
tests in the failure cases.
Link: https://lkml.kernel.org/r/20251101182021.74868-18-sj@kernel.org
Fixes: 17ccae8bb5c9 ("mm/damon: add kunit tests")
Signed-off-by: SeongJae Park <sj(a)kernel.org>
Cc: Brendan Higgins <brendan.higgins(a)linux.dev>
Cc: David Gow <davidgow(a)google.com>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: <stable(a)vger.kernel.org> [5.15+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
index fce38dd53cf8..484223f19545 100644
--- a/mm/damon/tests/vaddr-kunit.h
+++ b/mm/damon/tests/vaddr-kunit.h
@@ -136,8 +136,14 @@ static void damon_do_test_apply_three_regions(struct kunit *test,
int i;
t = damon_new_target();
+ if (!t)
+ kunit_skip(test, "target alloc fail");
for (i = 0; i < nr_regions / 2; i++) {
r = damon_new_region(regions[i * 2], regions[i * 2 + 1]);
+ if (!r) {
+ damon_destroy_target(t, NULL);
+ kunit_skip(test, "region alloc fail");
+ }
damon_add_region(r, t);
}