From: Vladimir Riabchun <ferr.lambarginio@gmail.com>
[ Upstream commit 4099b98203d6b33d990586542fa5beee408032a3 ]
A soft lockup was observed when loading the amdgpu module. If a module has a lot of traceable functions, multiple calls to kallsyms_lookup can spend too much time in an RCU critical section with preemption disabled, causing a kernel panic.

This is the same issue that was fixed in commit d0b24b4e91fc ("ftrace: Prevent RCU stall on PREEMPT_VOLUNTARY kernels") and commit 42ea22e754ba ("ftrace: Add cond_resched() to ftrace_graph_set_hash()").
Fix it the same way by adding cond_resched() to ftrace_module_enable().
Link: https://lore.kernel.org/aMQD9_lxYmphT-up@vova-pc
Signed-off-by: Vladimir Riabchun <ferr.lambarginio@gmail.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM-generated explanations, may be completely bogus:
YES

- The change inserts `cond_resched()` inside the inner iteration over every
  ftrace record (`kernel/trace/ftrace.c:7538`). That loop holds the ftrace
  mutex and, for each record, invokes heavy helpers like `test_for_valid_rec()`,
  which in turn calls `kallsyms_lookup()` (`kernel/trace/ftrace.c:4289`). On
  huge modules (e.g. amdgpu) this can run for tens of milliseconds with
  preemption disabled, triggering the documented soft lockup/panic during
  module load.

- `ftrace_module_enable()` runs only in process context via
  `prepare_coming_module()` (`kernel/module/main.c:3279`), so adding a
  voluntary reschedule point is safe; the same pattern already exists in
  other long-running ftrace loops (see commits d0b24b4e91fc and 42ea22e754ba),
  so this brings consistency without changing control flow or semantics.

- No data structures or interfaces change, and the code still executes under
  the same locking (`ftrace_lock`, plus `text_mutex` when the arch overrides
  `ftrace_arch_code_modify_prepare()`), so the risk of regression is minimal:
  the new call simply yields the CPU if needed while keeping the locks held,
  preventing watchdog-induced crashes but otherwise behaving identically.
Given it fixes a real, user-visible soft lockup with a contained and well-understood tweak, this is an excellent candidate for stable backporting.
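For reference, the pattern being applied is the usual one for long-running
loops under a sleepable lock in process context. A minimal self-contained
sketch of that pattern follows; all names here (my_rec, records, my_lock,
process_one_record) are hypothetical stand-ins, not the actual ftrace code,
whose real change is in the diff below:

	#include <linux/list.h>
	#include <linux/mutex.h>
	#include <linux/sched.h>

	/* Hypothetical record type and list, standing in for ftrace's
	 * per-module dyn_ftrace records; only the loop shape matters. */
	struct my_rec {
		struct list_head list;
	};

	static LIST_HEAD(records);
	static DEFINE_MUTEX(my_lock);

	static void process_one_record(struct my_rec *rec)
	{
		/* Stand-in for expensive per-record work such as the
		 * test_for_valid_rec()/kallsyms_lookup() style lookups. */
	}

	static void enable_all_records(void)
	{
		struct my_rec *rec;

		mutex_lock(&my_lock);
		list_for_each_entry(rec, &records, list) {
			process_one_record(rec);

			/*
			 * Voluntary preemption point: on PREEMPT_VOLUNTARY
			 * or non-preemptible kernels this is the only chance
			 * to reschedule, keeping the soft-lockup watchdog
			 * fed. Sleeping is legal here because we run in
			 * process context and hold only a mutex.
			 */
			cond_resched();
		}
		mutex_unlock(&my_lock);
	}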
 kernel/trace/ftrace.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index a69067367c296..42bd2ba68a821 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -7535,6 +7535,8 @@ void ftrace_module_enable(struct module *mod)
 		if (!within_module(rec->ip, mod))
 			break;
 
+		cond_resched();
+
 		/* Weak functions should still be ignored */
 		if (!test_for_valid_rec(rec)) {
 			/* Clear all other flags. Should not be enabled anyway */