From: Benjamin Berg <benjamin.berg@intel.com>
[ Upstream commit 887c5c12e80c8424bd471122d2e8b6b462e12874 ]
A sched_yield issued by userspace may not actually cause scheduling in time-travel mode, since no time has passed. In the case observed, the trigger appears to be a badly implemented userspace spinlock in ASAN. Unfortunately, with time-travel this causes an extreme slowdown or even a deadlock, depending on the kernel configuration (CONFIG_UML_MAX_USERSPACE_ITERATIONS).
Work around it by accounting time to the process whenever it executes a sched_yield syscall.
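For illustration only (this sketch is not part of the patch): a minimal example of the kind of yield-based userspace spinlock the commit message describes. Under TT_MODE_INFCPU or TT_MODE_EXTERNAL no virtual time passes while the loop spins, so sched_yield does not let the lock holder run and the loop can spin for a very long time.

  /* Hypothetical yield-based spinlock, similar in spirit to the ASAN
   * case described above.  Each failed acquire calls sched_yield(),
   * which only helps if yielding actually lets another task run. */
  #include <sched.h>
  #include <stdatomic.h>

  static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

  static void spin_lock_yield(void)
  {
          while (atomic_flag_test_and_set_explicit(&lock_flag, memory_order_acquire))
                  sched_yield();  /* a no-op if no (virtual) time passes */
  }

  static void spin_unlock_yield(void)
  {
          atomic_flag_clear_explicit(&lock_flag, memory_order_release);
  }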
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
Link: https://patch.msgid.link/20250314130815.226872-1-benjamin@sipsolutions.net
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/um/include/linux/time-internal.h |  2 ++
 arch/um/kernel/skas/syscall.c         | 11 +++++++++++
 2 files changed, 13 insertions(+)
diff --git a/arch/um/include/linux/time-internal.h b/arch/um/include/linux/time-internal.h
index b22226634ff60..138908b999d76 100644
--- a/arch/um/include/linux/time-internal.h
+++ b/arch/um/include/linux/time-internal.h
@@ -83,6 +83,8 @@ extern void time_travel_not_configured(void);
 #define time_travel_del_event(...) time_travel_not_configured()
 #endif /* CONFIG_UML_TIME_TRAVEL_SUPPORT */
 
+extern unsigned long tt_extra_sched_jiffies;
+
 /*
  * Without CONFIG_UML_TIME_TRAVEL_SUPPORT this is a linker error if used,
  * which is intentional since we really shouldn't link it in that case.
diff --git a/arch/um/kernel/skas/syscall.c b/arch/um/kernel/skas/syscall.c
index b09e85279d2b8..a5beaea2967ec 100644
--- a/arch/um/kernel/skas/syscall.c
+++ b/arch/um/kernel/skas/syscall.c
@@ -31,6 +31,17 @@ void handle_syscall(struct uml_pt_regs *r)
                 goto out;
 
         syscall = UPT_SYSCALL_NR(r);
+
+        /*
+         * If no time passes, then sched_yield may not actually yield, causing
+         * broken spinlock implementations in userspace (ASAN) to hang for long
+         * periods of time.
+         */
+        if ((time_travel_mode == TT_MODE_INFCPU ||
+             time_travel_mode == TT_MODE_EXTERNAL) &&
+            syscall == __NR_sched_yield)
+                tt_extra_sched_jiffies += 1;
+
         if (syscall >= 0 && syscall < __NR_syscalls) {
                 unsigned long ret = EXECUTE_SYSCALL(syscall, regs);
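As a hedged aside, not part of the patch: per the commit message, the hunk above accounts time to a task (via tt_extra_sched_jiffies) each time it issues sched_yield under TT_MODE_INFCPU or TT_MODE_EXTERNAL, so virtual time advances and other runnable tasks can be scheduled. A trivial userspace loop like the following, which could previously monopolize the CPU under time-travel, exercises that path; it is only an illustrative test program.

  /* Illustrative test (not from the patch): a tight sched_yield() loop of
   * the kind that previously made no forward progress for other tasks under
   * time-travel mode, because no virtual time passed per yield. */
  #include <sched.h>
  #include <stdio.h>

  int main(void)
  {
          for (long i = 0; i < 1000000; i++)
                  sched_yield();
          puts("yield loop finished");
          return 0;
  }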