The patch below does not apply to the 5.5-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to stable@vger.kernel.org.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 54a16ff6f2e50775145b210bcd94d62c3c2af117 Mon Sep 17 00:00:00 2001
From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Date: Wed, 5 Feb 2020 09:20:32 -0500
Subject: [PATCH] ftrace: Protect ftrace_graph_hash with ftrace_sync
As the function_graph tracer can run when RCU is not "watching", it can not be protected by synchronize_rcu(); it requires running a task on each CPU before the old hash can be freed. schedule_on_each_cpu(ftrace_sync) needs to be used instead.
Link: https://lore.kernel.org/r/20200205131110.GT2935@paulmck-ThinkPad-P72
Cc: stable@vger.kernel.org
Fixes: b9b0c831bed26 ("ftrace: Convert graph filter to use hash tables")
Reported-by: "Paul E. McKenney" <paulmck@kernel.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 481ede3eac13..3f7ee102868a 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5867,8 +5867,15 @@ ftrace_graph_release(struct inode *inode, struct file *file)
 
 	mutex_unlock(&graph_lock);
 
-	/* Wait till all users are no longer using the old hash */
-	synchronize_rcu();
+	/*
+	 * We need to do a hard force of sched synchronization.
+	 * This is because we use preempt_disable() to do RCU, but
+	 * the function tracers can be called where RCU is not watching
+	 * (like before user_exit()). We can not rely on the RCU
+	 * infrastructure to do the synchronization, thus we must do it
+	 * ourselves.
+	 */
+	schedule_on_each_cpu(ftrace_sync);
 
 	free_ftrace_hash(old_hash);
 }
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 8c52f5de9384..3c75d29bd861 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -979,6 +979,7 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 	 * Have to open code "rcu_dereference_sched()" because the
 	 * function graph tracer can be called when RCU is not
 	 * "watching".
+	 * Protected with schedule_on_each_cpu(ftrace_sync)
 	 */
 	hash = rcu_dereference_protected(ftrace_graph_hash, !preemptible());

@@ -1031,6 +1032,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
 	 * Have to open code "rcu_dereference_sched()" because the
 	 * function graph tracer can be called when RCU is not
 	 * "watching".
+	 * Protected with schedule_on_each_cpu(ftrace_sync)
 	 */
 	notrace_hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
 						 !preemptible());
Hi Greg,
On Fri, 07 Feb 2020 11:16:16 +0100 gregkh@linuxfoundation.org wrote:
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index 8c52f5de9384..3c75d29bd861 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -979,6 +979,7 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
>  	 * Have to open code "rcu_dereference_sched()" because the
>  	 * function graph tracer can be called when RCU is not
>  	 * "watching".
> +	 * Protected with schedule_on_each_cpu(ftrace_sync)
>  	 */
>  	hash = rcu_dereference_protected(ftrace_graph_hash, !preemptible());
>
> @@ -1031,6 +1032,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
>  	 * Have to open code "rcu_dereference_sched()" because the
>  	 * function graph tracer can be called when RCU is not
>  	 * "watching".
> +	 * Protected with schedule_on_each_cpu(ftrace_sync)
>  	 */
>  	notrace_hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
> 						 !preemptible());
Ah, I updated that patch to insert these comments, which makes it dependent on 16052dd5bdfa ("ftrace: Add comment to why rcu_dereference_sched() is open coded"). That patch just adds comments and should have a very low risk of breaking anything. If you add it first, then this patch should apply cleanly. Would it be OK to add that comment patch? It should fix most of the conflicts.
-- Steve
On Fri, Feb 07, 2020 at 08:28:42AM -0500, Steven Rostedt wrote:
> Hi Greg,
>
> On Fri, 07 Feb 2020 11:16:16 +0100 gregkh@linuxfoundation.org wrote:
>
> > diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> > index 8c52f5de9384..3c75d29bd861 100644
> > --- a/kernel/trace/trace.h
> > +++ b/kernel/trace/trace.h
> > @@ -979,6 +979,7 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
> >  	 * Have to open code "rcu_dereference_sched()" because the
> >  	 * function graph tracer can be called when RCU is not
> >  	 * "watching".
> > +	 * Protected with schedule_on_each_cpu(ftrace_sync)
> >  	 */
> >  	hash = rcu_dereference_protected(ftrace_graph_hash, !preemptible());
> >
> > @@ -1031,6 +1032,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
> >  	 * Have to open code "rcu_dereference_sched()" because the
> >  	 * function graph tracer can be called when RCU is not
> >  	 * "watching".
> > +	 * Protected with schedule_on_each_cpu(ftrace_sync)
> >  	 */
> >  	notrace_hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
> > 						 !preemptible());
>
> Ah, I updated that patch to insert these comments, which makes it dependent on 16052dd5bdfa ("ftrace: Add comment to why rcu_dereference_sched() is open coded"). That patch just adds comments and should have a very low risk of breaking anything. If you add it first, then this patch should apply cleanly. Would it be OK to add that comment patch? It should fix most of the conflicts.
I've ended up taking these additional commits, and queued everything for 5.5-4.14:
16052dd5bdfa ("ftrace: Add comment to why rcu_dereference_sched() is open coded")
24a9729f8314 ("tracing: Annotate ftrace_graph_hash pointer with __rcu")
fd0e6852c407 ("tracing: Annotate ftrace_graph_notrace_hash pointer with __rcu")
On Fri, 7 Feb 2020 10:07:28 -0500 Sasha Levin <sashal@kernel.org> wrote:
> I've ended up taking these additional commits, and queued everything for 5.5-4.14:
>
> 16052dd5bdfa ("ftrace: Add comment to why rcu_dereference_sched() is open coded")
> 24a9729f8314 ("tracing: Annotate ftrace_graph_hash pointer with __rcu")
> fd0e6852c407 ("tracing: Annotate ftrace_graph_notrace_hash pointer with __rcu")
Thanks Sasha!
-- Steve
linux-stable-mirror@lists.linaro.org