6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Gray <bgray@linux.ibm.com>
[ Upstream commit cc879ab3ce39bc39f9b1d238b283f43a5f6f957d ]
thread_change_pc() uses CPU local data, so must be protected from swapping CPUs while it is reading the breakpoint struct.
The error is more noticeable after 1e60f3564bad ("powerpc/watchpoints: Track perf single step directly on the breakpoint"), which added an unconditional __this_cpu_read() call in thread_change_pc(). However, the existing __this_cpu_read() that runs if a breakpoint does need to be re-inserted has the same issue.
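[Reviewer note, not part of the patch: a minimal sketch of the pattern the fix applies. The per-CPU variable example_bp_slot and the function use_local_bp_slot() are hypothetical names for illustration only; the point is that a __this_cpu_read() is only stable against CPU migration while preemption is disabled.]

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/printk.h>

/* Hypothetical per-CPU slot, standing in for bp_per_reg[] in the real code. */
static DEFINE_PER_CPU(void *, example_bp_slot);

static void use_local_bp_slot(void)
{
	void *bp;

	preempt_disable();			/* pin the task to this CPU */
	bp = __this_cpu_read(example_bp_slot);	/* per-CPU read is now stable */
	if (bp)
		pr_info("slot holds a breakpoint\n");
	preempt_enable();			/* migration allowed again */
}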
Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230829063457.54157-2-bgray@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/powerpc/kernel/hw_breakpoint.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
index 8db1a15d7acbe..a72f86c130488 100644
--- a/arch/powerpc/kernel/hw_breakpoint.c
+++ b/arch/powerpc/kernel/hw_breakpoint.c
@@ -505,11 +505,13 @@ void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs)
 	struct arch_hw_breakpoint *info;
 	int i;
 
+	preempt_disable();
+
 	for (i = 0; i < nr_wp_slots(); i++) {
 		if (unlikely(tsk->thread.last_hit_ubp[i]))
 			goto reset;
 	}
-	return;
+	goto out;
 
 reset:
 	regs_set_return_msr(regs, regs->msr & ~MSR_SE);
@@ -518,6 +520,9 @@ void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs)
 		__set_breakpoint(i, info);
 		tsk->thread.last_hit_ubp[i] = NULL;
 	}
+
+out:
+	preempt_enable();
 }
 
 static bool is_larx_stcx_instr(int type)