On Thu, May 07, 2026 at 03:55:26PM +0100, James Clark wrote:
> [...]
>
> Hi Leo,
>
> Testing on the Orion O6 board was all good, and so was stress testing
> concurrent sysfs mode and hotplug on Juno.
Thanks a lot for the testing!
> However, I was trying to stress test sysfs mode and rmmod on Juno and ran
> into an issue, although a similar issue is present without your patchset.
I don't think CPU PM introduces additional complexity for the above cases. The reason is that CPU PM notifiers _only_ apply to active sessions, and once a device is enabled, the module cannot be removed.
If the race conditions between enabling/disabling sessions and module load/unload are properly handled, CPU PM should be safe. If we have bugs in these race conditions, the high-frequency data accesses in CPU PM may expose them - but I don't expect CPU PM itself is the culprit.
> If you run an rmmod on all the coresight devices at the same time as an
> enable_source / disable loop you always get this:
> WARNING: possible circular locking dependency detected
> 7.0.0-rc1+ #713 Tainted: G N
> ------------------------------------------------------
> rmmod/1361 is trying to acquire lock:
> ffff0008042f69a8 (kn->active#144){++++}-{0:0}, at: __kernfs_remove+0x1b8/0x2c8
kn->active is not a lock but an active reference on a sysfs node; it uses lockdep annotations, though, so lockdep can still detect lock dependencies involving it.
> Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(coresight_mutex);
>                                lock(cpu_hotplug_lock);
>                                lock(coresight_mutex);
>   lock(kn->active#144);
>
>   *** DEADLOCK ***
The potential deadlock sequence could be:
  kernfs_fop_write_iter()
    `> kernfs_get_active_of()    => acquire kn->active
    `> coresight_enable_sysfs()  => acquire coresight_mutex

  coresight_unregister()         => acquire coresight_mutex
    `> device_unregister()
      `> __kernfs_remove()
        `> kernfs_drain()        => acquire kn->active
> I think the issue can be fixed by releasing the coresight_mutex before
> device_unregister():
>
> diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
> index 015363da12fa..620560880f12 100644
> --- a/drivers/hwtracing/coresight/coresight-core.c
> +++ b/drivers/hwtracing/coresight/coresight-core.c
> @@ -1639,8 +1639,8 @@ void coresight_unregister(struct coresight_device *csdev)
>  	coresight_remove_conns(csdev);
>  	coresight_clear_default_sink(csdev);
>  	coresight_release_platform_data(csdev->dev.parent, csdev->pdata);
> -	device_unregister(&csdev->dev);
>  	mutex_unlock(&coresight_mutex);
> +	device_unregister(&csdev->dev);
>  }
>  EXPORT_SYMBOL_GPL(coresight_unregister);
If so, we also need to move device_register() out of the mutex scope.
That said, I still think we should dig a bit into whether we can use a smaller locking granularity (combined with the bus management provided by the device model).
> I didn't think too hard about the implications, but it might be ok because
> once all the connections are removed the device can't be used, so releasing
> the coresight_mutex isn't an issue.
> But then testing that fix I ran into some kind of refleak where I couldn't
> unload modules anymore, even though I'd disabled everything. But that could
> be a different issue:
> rmmod: ERROR: Module coresight_funnel is in use
> rmmod: ERROR: Module coresight_replicator is in use
> rmmod: ERROR: Module coresight_etm4x is in use
> rmmod: ERROR: Module coresight_tmc is in use
> rmmod: ERROR: Module coresight_cti is in use
> rmmod: ERROR: Module coresight is in use by: coresight_tmc coresight_cti coresight_etm4x coresight_replicator coresight_funnel
I suspect this is because module references are not properly released, or because the entire CoreSight path is not properly disabled.
After the issue occurs, can the ETM sysfs knob still be accessed? I am curious whether this is caused by the sysfs knob disappearing (so there is no way to disable the path), or whether the knob still exists but the driver internally fails to disable the path.
> Anyway I don't think your patches make this worse, so we can probably ignore
> it, but it would be good to be able to stress test the new modifications
> around the same area.
As there was no regression in the tests, I agree that we should not defer this series.
We can fix the race with module load/unload as a separate task:
- sysfs mode + module load/unload
- perf mode + module load/unload
Then we can combine stress test with CPU idle/hotplug.
Thanks, Leo