On Tue, Jun 03, 2025 at 03:49:26PM +0100, James Clark wrote:
> [...]
>
> > @@ -577,6 +583,10 @@ int coresight_enable_path(struct coresight_path *path, enum cs_mode mode,
> >  		}
> >  	}
> >
> > +	source = coresight_get_source(path);
> > +	if (coresight_is_percpu_source(source))
> > +		source->path = path;
> > +
> >  out:
> >  	return ret;
> >  err_disable_helpers:
> > diff --git a/include/linux/coresight.h b/include/linux/coresight.h
> > index 64cee4040daa..cd9709a36b9c 100644
> > --- a/include/linux/coresight.h
> > +++ b/include/linux/coresight.h
> > @@ -265,6 +265,7 @@ struct coresight_trace_id_map {
> >   *		CS_MODE_SYSFS. Otherwise it must be accessed from inside the
> >   *		spinlock.
> >   * @orphan:	true if the component has connections that haven't been linked.
> > + * @path:	activated path pointer (only used for source device).
> >   * @sysfs_sink_activated: 'true' when a sink has been selected for use via sysfs
> >   *		by writing a 1 to the 'enable_sink' file. A sink can be
> >   *		activated but not yet enabled. Enabling for a _sink_ happens
> > @@ -291,6 +292,8 @@ struct coresight_device {
> >  	local_t mode;
> >  	int refcnt;
> >  	bool orphan;
> > +	/* activated path (for source only) */
> > +	struct coresight_path *path;
> >  	/* sink specific fields */
> >  	bool sysfs_sink_activated;
> >  	struct dev_ext_attribute *ea;
>
> This is probably cleaner if it's saved in the new global per-cpu you
> added in patch 2. It's even more specific than "for source only", it's
> actually only for per-cpu sources so it's not worth having it on every
> device.
We can maintain Coresight path pointers in a global per-CPU data structure. These path pointers are set and cleared during path enable and disable operations, respectively. Idle threads can then access these pointers for power management purposes.
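For illustration, such a scheme could look roughly like the sketch below (the helper names are made up here, not taken from the series):

/* Rough sketch only: one global slot per CPU for the activated path. */
static DEFINE_PER_CPU(struct coresight_path *, coresight_percpu_path);

/* Set from coresight_enable_path() when a per-CPU source is enabled. */
static void coresight_set_percpu_path(int cpu, struct coresight_path *path)
{
	per_cpu(coresight_percpu_path, cpu) = path;
}

/* Cleared when the path is disabled, before the path is torn down. */
static void coresight_clear_percpu_path(int cpu)
{
	per_cpu(coresight_percpu_path, cpu) = NULL;
}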
However, the challenge is how to safely access the path pointer from an idle thread, particularly because idle threads cannot acquire coresight_mutex, which is a sleepable lock. Consider a race scenario where SysFS knobs frequently enable and disable paths while the corresponding CPUs enter and exit idle states. If an idle thread needs to access the path pointers in such a case, we would have to use raw spinlocks to make the access safe. This is problematic because the CPU suspend/resume flow could end up waiting on SysFS operations.
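To make the problem concrete, the lock-based alternative would look roughly like the sketch below (again hypothetical names, not code from the series); note how the idle side contends on the same raw spinlock as the SysFS side:

/* Hypothetical sketch of the rejected lock-based scheme. */
static DEFINE_RAW_SPINLOCK(coresight_path_lock);
static DEFINE_PER_CPU(struct coresight_path *, coresight_percpu_path);

/*
 * Idle/PM side: it has to take the same raw spinlock that the SysFS
 * enable/disable path takes when updating the pointer, so CPU
 * suspend/resume can end up spinning behind SysFS operations.
 */
static struct coresight_path *coresight_read_path(void)
{
	struct coresight_path *path;

	raw_spin_lock(&coresight_path_lock);
	path = this_cpu_read(coresight_percpu_path);
	raw_spin_unlock(&coresight_path_lock);

	return path;
}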
This is why we ended up with a lockless approach. Two pieces of background are key:
- Path pointers are not guaranteed to be safe to access in CPU power management (PM) notifiers. In contrast, the coresight_device pointer of a source device (e.g., an ETM) is safe to access.
  A path's lifecycle begins and ends with SysFS enable and disable operations. On the other hand, an ETM's coresight_device pointer is initialized during the probe phase and cleared on module removal. Module loading and unloading trigger SMP calls for initialization and cleanup, and these synchronized SMP calls ensure that it is safe for CPU PM notifiers to access the source device structure.
- Another factor is that the device mode is used to determine whether it is safe to access the path pointer. When a source device is enabled, we must ensure that its state transitions from DISABLED to an enabled state on the target CPU. This guarantees that the mode, and any subsequent operations dependent on it, follow a strictly sequential order.
  CPU0                                      CPU1

  etm4_enable()
    `> etm4_enable_sysfs()
         `> smp_call_function_single()  ---->  etm4_enable_hw_smp_call()
                                                 `> coresight_take_mode(SYSFS)

                                               CPU idle:
                                               etm4_cpu_save()
                                                 `> coresight_get_mode()
                                                      ^^^ Read out the SYSFS mode
                                                          Safely access coresight_device::path
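In code, the idle/PM side can then do something like the sketch below (simplified; etm4_get_active_path() is a made-up helper, not the exact code in this patch):

/*
 * Simplified sketch: the device mode acts as the publish flag for
 * csdev->path, so the CPU PM / idle path only dereferences it after
 * observing an enabled mode.
 */
static struct coresight_path *etm4_get_active_path(struct etmv4_drvdata *drvdata)
{
	struct coresight_device *csdev = drvdata->csdev;

	/* Mode is still DISABLED: the path pointer must not be touched. */
	if (coresight_get_mode(csdev) == CS_MODE_DISABLED)
		return NULL;

	/*
	 * coresight_take_mode() ran on this CPU after the path pointer
	 * was set, so observing an enabled mode here means csdev->path
	 * is valid and safe to dereference from etm4_cpu_save().
	 */
	return csdev->path;
}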
Therefore, safe access to the Coresight path heavily relies on the mode (as part of the state machine). This is why the patch adds the path pointer to the same structure as the mode.
Thanks,
Leo