On Sun, 3 Aug 2025 07:20:43 +0000, Yunseong Kim <ysk@kzalloc.com> wrote:

> The locks kcov->lock and kcov_remote_lock can be acquired from atomic
> contexts, such as instrumentation hooks invoked from interrupt handlers.
> On PREEMPT_RT-enabled kernels, spinlock_t is typically implemented

On PREEMPT_RT it is implemented as a sleeping lock. You don't need to
say "typically".

> as a sleeping lock (e.g., mapped to an rt_mutex). Acquiring such a lock
> in atomic context, where sleeping is not allowed, can lead to system
> hangs or crashes.
>
> To avoid this, convert both locks to raw_spinlock_t, which always
> provides non-sleeping spinlock semantics regardless of preemption model.
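
Side note for anyone reading along: the converted call sites keep the
same shape, only with the raw_ prefix. A minimal illustration of the
pattern (hypothetical lock and function names, not the actual kcov
code):

#include <linux/spinlock.h>

/* Hypothetical example lock, for illustration only. */
static DEFINE_RAW_SPINLOCK(example_lock);

static void example_atomic_path(void)
{
	unsigned long flags;

	/*
	 * raw_spin_lock_irqsave() busy-waits on every preemption model,
	 * including PREEMPT_RT, so it is safe in hardirq context or with
	 * interrupts disabled. A spinlock_t here would be backed by an
	 * rt_mutex on PREEMPT_RT and could schedule, which is invalid in
	 * such contexts.
	 */
	raw_spin_lock_irqsave(&example_lock, flags);
	/* ... touch state shared with interrupt-time hooks ... */
	raw_spin_unlock_irqrestore(&example_lock, flags);
}
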
> Signed-off-by: Yunseong Kim <ysk@kzalloc.com>
> ---
>  kernel/kcov.c | 58 +++++++++++++++++++++++++--------------------------
>  1 file changed, 29 insertions(+), 29 deletions(-)
>
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index 187ba1b80bda..7d9b53385d81 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -54,7 +54,7 @@ struct kcov {
>  	 */
>  	refcount_t		refcount;
>  	/* The lock protects mode, size, area and t. */
> -	spinlock_t		lock;
> +	raw_spinlock_t		lock;
>  	enum kcov_mode		mode;
>  	/* Size of arena (in long's). */
>  	unsigned int		size;
> @@ -84,7 +84,7 @@ struct kcov_remote {
>  	struct hlist_node	hnode;
>  };
>
> -static DEFINE_SPINLOCK(kcov_remote_lock);
> +static DEFINE_RAW_SPINLOCK(kcov_remote_lock);
>  static DEFINE_HASHTABLE(kcov_remote_map, 4);
>  static struct list_head kcov_remote_areas = LIST_HEAD_INIT(kcov_remote_areas);
>
> @@ -406,7 +406,7 @@ static void kcov_remote_reset(struct kcov *kcov)
>  	struct hlist_node	*tmp;
>  	unsigned long		flags;
>
> -	spin_lock_irqsave(&kcov_remote_lock, flags);
> +	raw_spin_lock_irqsave(&kcov_remote_lock, flags);

Not related to these patches, but have you thought about converting some of these locks over to the "guard()" infrastructure provided by cleanup.h?
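
For example, kcov_remote_reset() could become something like this rough,
not compile-tested sketch (the per-entry teardown and the rest of the
function body are elided):

#include <linux/cleanup.h>

static void kcov_remote_reset(struct kcov *kcov)
{
	int bkt;
	struct kcov_remote *remote;
	struct hlist_node *tmp;

	/*
	 * Takes kcov_remote_lock with interrupts disabled here, and
	 * releases it automatically on every exit from the scope, so the
	 * explicit flags variable and the unlock call disappear.
	 */
	guard(raw_spinlock_irqsave)(&kcov_remote_lock);

	hash_for_each_safe(kcov_remote_map, bkt, tmp, remote, hnode) {
		if (remote->kcov != kcov)
			continue;
		/* ... per-entry teardown unchanged ... */
	}
	/* ... rest of the function unchanged ... */
}

That also removes the chance of missing an unlock on an early return
path.
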
>  	hash_for_each_safe(kcov_remote_map, bkt, tmp, remote, hnode) {
>  		if (remote->kcov != kcov)
>  			continue;

Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

-- Steve