Introduction of the TPDM MCMB (Multi-lane Continuous Multi-Bit) subunit
MCMB (Multi-lane CMB) is a special form of the CMB dataset type. An
MCMB subunit has the same register set and usage as a CMB subunit.
Just like the CMB subunit, the MCMB subunit must be configured prior
to enablement. This series adds support for configuring the MCMB
subunit of the TPDM.
Once the patches in this series are applied, new tpdm nodes supporting
the MCMB subunit should be observed under
/sys/bus/coresight/devices/tpdm*. All sysfs files of a CMB-subunit TPDM
are also present on an MCMB-subunit TPDM. In addition, an MCMB-subunit
TPDM has new sysfs files to select and enable the lanes.
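As an illustration only, the lane selection that the new sysfs files
control amounts to a read-modify-write of a lane field in a CMB control
register image. The field position, width, and names below are invented
for this sketch and are not the series' actual register layout:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical lane-select field layout, for illustration only. */
#define MCMB_LANE_SELECT_SHIFT  8
#define MCMB_LANE_SELECT_MASK   (UINT32_C(0x7) << MCMB_LANE_SELECT_SHIFT)
#define MCMB_MAX_LANES          8

/* Write a lane index into a CMB_CR-style register image; reject
 * out-of-range lanes the way a sysfs store callback would. */
static int mcmb_set_lane(uint32_t *cmb_cr, unsigned int lane)
{
    if (lane >= MCMB_MAX_LANES)
        return -1; /* the real driver would return -EINVAL */

    *cmb_cr &= ~MCMB_LANE_SELECT_MASK;
    *cmb_cr |= (uint32_t)lane << MCMB_LANE_SELECT_SHIFT;
    return 0;
}
```

In the driver this logic would sit behind the new sysfs attributes; the
sketch only shows the bitfield handling, not the sysfs plumbing.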
Changes in V3:
1. Update the date in ABI file.
2. Remove the unrelated change.
3. Correct typo.
4. Move the CMB_CR related definitions together.
Changes in V2:
1. Use tpdm_data->cmb instead of (tpdm_has_cmb_dataset(tpdm_data) ||
tpdm_has_mcmb_dataset(tpdm_data)) for cmb dataset support.
2. Embed mcmb_dataset struct into cmb struct.
3. Update the date and version in sysfs-bus-coresight-devices-tpdm
Link: https://patchwork.kernel.org/project/linux-arm-msm/patch/20241105123940.396…
Mao Jinlong (1):
coresight-tpdm: Add MCMB dataset support
Tao Zhang (2):
coresight-tpdm: Add support to select lane
coresight-tpdm: Add support to enable the lane for MCMB TPDM
.../testing/sysfs-bus-coresight-devices-tpdm | 15 +++
drivers/hwtracing/coresight/coresight-tpda.c | 7 +-
drivers/hwtracing/coresight/coresight-tpdm.c | 120 +++++++++++++++++-
drivers/hwtracing/coresight/coresight-tpdm.h | 33 +++--
4 files changed, 155 insertions(+), 20 deletions(-)
--
2.17.1
On 24/12/2024 10:13 am, Yeoreum Yun wrote:
> Hi James.
>>> diff --git a/drivers/hwtracing/coresight/coresight-syscfg.c b/drivers/hwtracing/coresight/coresight-syscfg.c
>>> index a70c1454b410..dfa7dcbaf25d 100644
>>> --- a/drivers/hwtracing/coresight/coresight-syscfg.c
>>> +++ b/drivers/hwtracing/coresight/coresight-syscfg.c
>>> @@ -953,7 +953,8 @@ int cscfg_config_sysfs_activate(struct cscfg_config_desc *config_desc, bool acti
>>> cscfg_mgr->sysfs_active_config = cfg_hash;
>>> } else {
>>> /* disable if matching current value */
>>> - if (cscfg_mgr->sysfs_active_config == cfg_hash) {
>>> + if (cscfg_mgr->sysfs_active_config == cfg_hash &&
>>> + !atomic_read(&cscfg_mgr->sys_enable_cnt)) {
>>> _cscfg_deactivate_config(cfg_hash);
>>
>> So is sys_enable_cnt a global value? If a fix is needed doesn't it need to
>> be a per-config refcount?
>>
>> Say you have two active configs, sys_enable_cnt is now 2, how do you disable
>> one without it always skipping here when the other config is enabled?
>
> Sorry, I missed this one!
> When one configuration is enabled, cscfg_mgr->sysfs_active_config
> becomes non-zero, so there can never be two active configurations at
> once, and sys_enable_cnt can't reach 2.
>
Maybe "sys_enabled" is a better name then. Count implies that it can be
more than one. And the doc could be updated to say it's only ever 0 or 1.
But what about my other point about enabled always being a subset of
active? Can we not change "sys_active_cnt" to a more generic "refcount",
then both activation and enabling steps increment that same refcount,
because they are both technically users of the config. Then you can
solve the problem without adding another separate counter. I think
that's potentially easier to understand.
Although the easiest is just locking every function with the mutex (or a
spinlock if it also needs to be used from Perf). Obviously all these
atomics are harder to get right than that, and this isn't performance
sensitive in any way.
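The single-refcount suggestion above can be sketched as a userspace
model (illustrative only; cfg_refcount and the helpers are invented
names, not the driver's actual API):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* One refcount covers both "activate" and "enable": both are users of
 * the config, so both take a reference. */
static atomic_int cfg_refcount;

static void cfg_activate(void)   { atomic_fetch_add(&cfg_refcount, 1); }
static void cfg_enable(void)     { atomic_fetch_add(&cfg_refcount, 1); }
static void cfg_disable(void)    { atomic_fetch_sub(&cfg_refcount, 1); }
static void cfg_deactivate(void) { atomic_fetch_sub(&cfg_refcount, 1); }

/* Unloading (or deactivating from sysfs) is only safe when nothing
 * holds a reference -- no separate enable counter is needed. */
static bool cfg_can_unload(void)
{
    return atomic_load(&cfg_refcount) == 0;
}
```

The point of the model is that a concurrent enable keeps the count
above zero, so a sysfs deactivate or module unload racing with it is
refused without a second counter.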
On 18/12/2024 8:48 am, Yeoreum Yun wrote:
> While enabling an active config via cscfg_csdev_enable_active_config(),
> the active config can be deactivated via the configfs sysfs interface.
> This can cause a UAF in the scenario below:
>
> CPU0 CPU1
> (perf or sysfs enable) load module
> cscfg_load_config_sets()
> activate config. // sysfs
> (sys_active_cnt == 1)
> ...
> cscfg_csdev_enable_active_config()
> lock(csdev->cscfg_csdev_lock)
> // here load config activate by CPU1
> unlock(csdev->cscfg_csdev_lock)
>
> deactivate config // sysfs
> (sys_active_cnt == 0)
Assuming the left side does Perf, are there some steps missing? To get
to enable_active_config() you first need to pass through etm_setup_aux()
-> cscfg_activate_config(). That would also increment sys_active_cnt
which would leave it at 2 if there were two concurrent sessions. Then it
would end up as 1 here after deactivate, rather than 0.
It's not explicitly mentioned in the sequence, but I'm assuming the left
and right are the same config; I suppose it could be an issue with
different configs too.
> cscfg_unload_config_sets()
> unload module
On the left cscfg_activate_config() also bumps the module refcount, so
unload wouldn't cause a UAF here as far as I can see.
>
> // access to config_desc which freed
> // while unloading module.
> cfs_csdev_enable_config
>
> To address this, introduce sys_enable_cnt in cscfg_mgr to prevent
> deactivation while there is an enabled configuration.
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun(a)arm.com>
> ---
> .../hwtracing/coresight/coresight-etm4x-core.c | 3 +++
> drivers/hwtracing/coresight/coresight-syscfg.c | 18 ++++++++++++++++--
> drivers/hwtracing/coresight/coresight-syscfg.h | 2 ++
> 3 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
> index 86893115df17..6218ef40acbc 100644
> --- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
> +++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
> @@ -986,6 +986,9 @@ static void etm4_disable_sysfs(struct coresight_device *csdev)
> smp_call_function_single(drvdata->cpu, etm4_disable_hw, drvdata, 1);
>
> raw_spin_unlock(&drvdata->spinlock);
> +
> + cscfg_csdev_disable_active_config(csdev);
> +
> cpus_read_unlock();
>
> /*
> diff --git a/drivers/hwtracing/coresight/coresight-syscfg.c b/drivers/hwtracing/coresight/coresight-syscfg.c
> index a70c1454b410..dfa7dcbaf25d 100644
> --- a/drivers/hwtracing/coresight/coresight-syscfg.c
> +++ b/drivers/hwtracing/coresight/coresight-syscfg.c
> @@ -953,7 +953,8 @@ int cscfg_config_sysfs_activate(struct cscfg_config_desc *config_desc, bool acti
> cscfg_mgr->sysfs_active_config = cfg_hash;
> } else {
> /* disable if matching current value */
> - if (cscfg_mgr->sysfs_active_config == cfg_hash) {
> + if (cscfg_mgr->sysfs_active_config == cfg_hash &&
> + !atomic_read(&cscfg_mgr->sys_enable_cnt)) {
> _cscfg_deactivate_config(cfg_hash);
So is sys_enable_cnt a global value? If a fix is needed doesn't it need
to be a per-config refcount?
Say you have two active configs, sys_enable_cnt is now 2, how do you
disable one without it always skipping here when the other config is
enabled?
> cscfg_mgr->sysfs_active_config = 0;
> } else
> @@ -1055,6 +1056,12 @@ int cscfg_csdev_enable_active_config(struct coresight_device *csdev,
> if (!atomic_read(&cscfg_mgr->sys_active_cnt))
> return 0;
>
> + /*
> + * increment sys_enable_cnt first to prevent deactivate the config
> + * while enable active config.
> + */
> + atomic_inc(&cscfg_mgr->sys_enable_cnt);
> +
> /*
> * Look for matching configuration - set the active configuration
> * context if found.
> @@ -1098,6 +1105,10 @@ int cscfg_csdev_enable_active_config(struct coresight_device *csdev,
> raw_spin_unlock_irqrestore(&csdev->cscfg_csdev_lock, flags);
> }
> }
> +
> + if (!config_csdev_active || err)
> + atomic_dec(&cscfg_mgr->sys_enable_cnt);
> +
> return err;
> }
> EXPORT_SYMBOL_GPL(cscfg_csdev_enable_active_config);
> @@ -1129,8 +1140,10 @@ void cscfg_csdev_disable_active_config(struct coresight_device *csdev)
> if (config_csdev) {
> if (!config_csdev->enabled)
> config_csdev = NULL;
> - else
> + else {
> config_csdev->enabled = false;
> + atomic_dec(&cscfg_mgr->sys_enable_cnt);
> + }
> }
> csdev->active_cscfg_ctxt = NULL;
> raw_spin_unlock_irqrestore(&csdev->cscfg_csdev_lock, flags);
> @@ -1179,6 +1192,7 @@ static int cscfg_create_device(void)
> INIT_LIST_HEAD(&cscfg_mgr->config_desc_list);
> INIT_LIST_HEAD(&cscfg_mgr->load_order_list);
> atomic_set(&cscfg_mgr->sys_active_cnt, 0);
> + atomic_set(&cscfg_mgr->sys_enable_cnt, 0);
> cscfg_mgr->load_state = CSCFG_NONE;
>
> /* setup the device */
> diff --git a/drivers/hwtracing/coresight/coresight-syscfg.h b/drivers/hwtracing/coresight/coresight-syscfg.h
> index 66e2db890d82..2fc397919985 100644
> --- a/drivers/hwtracing/coresight/coresight-syscfg.h
> +++ b/drivers/hwtracing/coresight/coresight-syscfg.h
> @@ -38,6 +38,7 @@ enum cscfg_load_ops {
> * @config_desc_list: List of system configuration descriptors to load into registered devices.
> * @load_order_list: Ordered list of owners for dynamically loaded configurations.
> * @sys_active_cnt: Total number of active config descriptor references.
> + * @sys_enable_cnt: Total number of enable of active config descriptor references.
When these are next to each other it makes me wonder why active_cnt
isn't enough to prevent unloading? Enabled is always a subset of active,
so as long as you gate unloads or modifications on the existing active
count it seems fine?
> * @cfgfs_subsys: configfs subsystem used to manage configurations.
> * @sysfs_active_config:Active config hash used if CoreSight controlled from sysfs.
> * @sysfs_active_preset:Active preset index used if CoreSight controlled from sysfs.
> @@ -50,6 +51,7 @@ struct cscfg_manager {
> struct list_head config_desc_list;
> struct list_head load_order_list;
> atomic_t sys_active_cnt;
> + atomic_t sys_enable_cnt;
> struct configfs_subsystem cfgfs_subsys;
> u32 sysfs_active_config;
> int sysfs_active_preset;
On 21/12/2024 11:54 am, Marc Zyngier wrote:
> On Fri, 20 Dec 2024 17:32:17 +0000,
> James Clark <james.clark(a)linaro.org> wrote:
>>
>>
>>
>> On 20/12/2024 5:05 pm, Marc Zyngier wrote:
>>> On Wed, 27 Nov 2024 10:01:23 +0000,
>>> James Clark <james.clark(a)linaro.org> wrote:
>>>>
>>>> Currently in nVHE, KVM has to check if TRBE is enabled on every guest
>>>> switch even if it was never used. Because it's a debug feature and is
>>>> more likely to not be used than used, give KVM the TRBE buffer status to
>>>> allow a much simpler and faster do-nothing path in the hyp.
>>>>
>>>> This is always called with preemption disabled except for probe/hotplug
>>>> which gets wrapped with preempt_disable().
>>>>
>>>> Protected mode disables trace regardless of TRBE (because
>>>> guest_trfcr_el1 is always 0), which was not previously done. HAS_TRBE
>>>> becomes redundant, but HAS_TRF is now required for this.
>>>>
>>>> Signed-off-by: James Clark <james.clark(a)linaro.org>
>>>> ---
>>>> arch/arm64/include/asm/kvm_host.h | 10 +++-
>>>> arch/arm64/kvm/debug.c | 25 ++++++++--
>>>> arch/arm64/kvm/hyp/nvhe/debug-sr.c | 51 +++++++++++---------
>>>> drivers/hwtracing/coresight/coresight-trbe.c | 5 ++
>>>> 4 files changed, 65 insertions(+), 26 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>> index 7e3478386351..ba251caa593b 100644
>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>> @@ -611,7 +611,8 @@ struct cpu_sve_state {
>>>> */
>>>> struct kvm_host_data {
>>>> #define KVM_HOST_DATA_FLAG_HAS_SPE 0
>>>> -#define KVM_HOST_DATA_FLAG_HAS_TRBE 1
>>>> +#define KVM_HOST_DATA_FLAG_HAS_TRF 1
>>>> +#define KVM_HOST_DATA_FLAG_TRBE_ENABLED 2
>>>> unsigned long flags;
>>>> struct kvm_cpu_context host_ctxt;
>>>> @@ -657,6 +658,9 @@ struct kvm_host_data {
>>>> u64 mdcr_el2;
>>>> } host_debug_state;
>>>> + /* Guest trace filter value */
>>>> + u64 guest_trfcr_el1;
>>>
>>> Guest value? Or host state while running the guest? If the former,
>>> then this has nothing to do here. If the latter, this should be
>>> spelled out (trfcr_in_guest?), and the comment amended.
>>>
>>>> +
>>>> /* Number of programmable event counters (PMCR_EL0.N) for this CPU */
>>>> unsigned int nr_event_counters;
>>>> };
>>>> @@ -1381,6 +1385,8 @@ static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
>>>> void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
>>>> void kvm_clr_pmu_events(u64 clr);
>>>> bool kvm_set_pmuserenr(u64 val);
>>>> +void kvm_enable_trbe(void);
>>>> +void kvm_disable_trbe(void);
>>>> #else
>>>> static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
>>>> static inline void kvm_clr_pmu_events(u64 clr) {}
>>>> @@ -1388,6 +1394,8 @@ static inline bool kvm_set_pmuserenr(u64 val)
>>>> {
>>>> return false;
>>>> }
>>>> +static inline void kvm_enable_trbe(void) {}
>>>> +static inline void kvm_disable_trbe(void) {}
>>>> #endif
>>>> void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu);
>>>> diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
>>>> index dd9e139dfd13..0c340ae7b5d1 100644
>>>> --- a/arch/arm64/kvm/debug.c
>>>> +++ b/arch/arm64/kvm/debug.c
>>>> @@ -314,7 +314,26 @@ void kvm_init_host_debug_data(void)
>>>> !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P))
>>>> host_data_set_flag(HAS_SPE);
>>>> - if (cpuid_feature_extract_unsigned_field(dfr0,
>>>> ID_AA64DFR0_EL1_TraceBuffer_SHIFT) &&
>>>> - !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_EL1_P))
>>>> - host_data_set_flag(HAS_TRBE);
>>>> + if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceFilt_SHIFT))
>>>> + host_data_set_flag(HAS_TRF);
>>>> }
>>>> +
>>>> +void kvm_enable_trbe(void)
>>>> +{
>>>> + if (has_vhe() || is_protected_kvm_enabled() ||
>>>> + WARN_ON_ONCE(preemptible()))
>>>> + return;
>>>> +
>>>> + host_data_set_flag(TRBE_ENABLED);
>>>> +}
>>>> +EXPORT_SYMBOL_GPL(kvm_enable_trbe);
>>>> +
>>>> +void kvm_disable_trbe(void)
>>>> +{
>>>> + if (has_vhe() || is_protected_kvm_enabled() ||
>>>> + WARN_ON_ONCE(preemptible()))
>>>> + return;
>>>> +
>>>> + host_data_clear_flag(TRBE_ENABLED);
>>>> +}
>>>> +EXPORT_SYMBOL_GPL(kvm_disable_trbe);
>>>> diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>>>> index 858bb38e273f..9479bee41801 100644
>>>> --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>>>> +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>>>> @@ -51,32 +51,39 @@ static void __debug_restore_spe(u64 pmscr_el1)
>>>> write_sysreg_el1(pmscr_el1, SYS_PMSCR);
>>>> }
>>>> -static void __debug_save_trace(u64 *trfcr_el1)
>>>> +static void __trace_do_switch(u64 *saved_trfcr, u64 new_trfcr)
>>>> {
>>>> - *trfcr_el1 = 0;
>>>> + *saved_trfcr = read_sysreg_el1(SYS_TRFCR);
>>>> + write_sysreg_el1(new_trfcr, SYS_TRFCR);
>>>> - /* Check if the TRBE is enabled */
>>>> - if (!(read_sysreg_s(SYS_TRBLIMITR_EL1) & TRBLIMITR_EL1_E))
>>>> + /* No need to drain if going to an enabled state or from disabled state */
>>>> + if (new_trfcr || !*saved_trfcr)
>>>
>>> What if TRFCR_EL1.TS is set to something non-zero? I'd rather you
>>> check for the E*TRE bits instead of assuming things.
>>>
>>
>> Yeah it's probably better that way. TS is actually always set when any
>> tracing session starts and then never cleared, so doing it the simpler
>> way made it always flush even after tracing finished, which probably
>> wasn't great.
>
> Quite. Can you please *test* these things?
>
> [...]
>
Sorry to confuse things, I wasn't 100% accurate here; yes, it's tested
and working. It works because of the split set/clear_trfcr() API: the
Coresight driver specifically calls clear at the end of the session
rather than a set of 0. That signals that this function shouldn't be
called, so there's no excessive swapping.
Secondly, the buffer flushing case is triggered by TRBE_ENABLED, which
forces TRFCR to 0, so "if (new_trfcr)" is an OK way to gate the flush.
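Marc's point about testing the E*TRE bits rather than the whole
register value can be sketched as a userspace model (bit positions
taken from the TRFCR_EL1 layout; trace_needs_drain() is an invented
helper for illustration, not the hyp code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* TRFCR_EL1 field positions. */
#define TRFCR_EL1_E0TRE   (UINT64_C(1) << 0)  /* EL0 trace enable */
#define TRFCR_EL1_E1TRE   (UINT64_C(1) << 1)  /* EL1 trace enable */
#define TRFCR_EL1_TS_MASK (UINT64_C(3) << 5)  /* timestamp control; can stay
                                               * set after tracing stops */

static bool trfcr_trace_enabled(uint64_t trfcr)
{
    return trfcr & (TRFCR_EL1_E0TRE | TRFCR_EL1_E1TRE);
}

/* A drain is only needed when going from an enabled state to a
 * disabled one. Testing the whole register instead would report
 * "enabled" whenever TS is still set, forcing needless flushes. */
static bool trace_needs_drain(uint64_t saved_trfcr, uint64_t new_trfcr)
{
    return trfcr_trace_enabled(saved_trfcr) &&
           !trfcr_trace_enabled(new_trfcr);
}
```

The last case in the checks below is the one the simpler whole-register
test gets wrong: TS left set, trace disabled, no drain needed.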
>>>> @@ -253,8 +256,10 @@ static void trbe_drain_and_disable_local(struct trbe_cpudata *cpudata)
>>>> static void trbe_reset_local(struct trbe_cpudata *cpudata)
>>>> {
>>>> + preempt_disable();
>>>> trbe_drain_and_disable_local(cpudata);
>>>> write_sysreg_s(0, SYS_TRBLIMITR_EL1);
>>>> + preempt_enable();
>>>
>>> This looks terribly wrong. If you need to disable preemption here, why
>>> doesn't the critical section cover all register accesses? Surely you
>>> don't want to nuke another CPU's context?
>>>
>>> But looking at the calling sites, this makes even less sense. The two
>>> callers of this thing mess with *per-CPU* interrupts. Dealing with
>>> per-CPU interrupts in preemptible context is a big no-no (hint: they
>>> start with a call to smp_processor_id()).
>>>
>>> So what is this supposed to ensure?
>>>
>>> M.
>>>
>>
>> These ones are only intended to silence the
>> WARN_ON_ONCE(preemptible()) in kvm_enable_trbe() when this is called
>> from boot/hotplug (arm_trbe_enable_cpu()). Preemption isn't disabled,
>> but a guest can't run at that point either.
>>
>> The "real" calls to kvm_enable_trbe() _are_ called from an atomic
>> context. I think there was a previous review comment about when it was
>> safe to call the KVM parts of this change, which is why I added the
>> warning making sure it was always called with preemption disabled. But
>> actually I could remove the warning and these preempt_disables() and
>> replace them with a comment.
>
> You should keep the WARN_ON(), and either *never* end-up calling this
> stuff during a CPUHP event, or handle the fact that preemption isn't
> initialised yet. For example by checking whether the current CPU is
> online.
>
> But this sort of random spreading of preemption disabling is not an
> acceptable outcome.
>
> M.
>
I'll look into this again. This was my initial attempt, but I couldn't
find any easily accessible state that allowed it to be done this way.
Maybe I missed something, but the obvious checks like cpu_online() were
already true at this point.
Thanks
James
On 21/12/2024 12:34 pm, Marc Zyngier wrote:
> On Wed, 27 Nov 2024 10:01:24 +0000,
> James Clark <james.clark(a)linaro.org> wrote:
>>
>> For nVHE, switch the filter value in and out if the Coresight driver
>> asks for it. This will support filters for guests when sinks other than
>> TRBE are used.
>>
>> For VHE, just write the filter directly to TRFCR_EL1 where trace can be
>> used even with TRBE sinks.
>>
>> Signed-off-by: James Clark <james.clark(a)linaro.org>
>> ---
>> arch/arm64/include/asm/kvm_host.h | 5 +++++
>> arch/arm64/kvm/debug.c | 28 ++++++++++++++++++++++++++++
>> arch/arm64/kvm/hyp/nvhe/debug-sr.c | 1 +
>> 3 files changed, 34 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index ba251caa593b..cce07887551b 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -613,6 +613,7 @@ struct kvm_host_data {
>> #define KVM_HOST_DATA_FLAG_HAS_SPE 0
>> #define KVM_HOST_DATA_FLAG_HAS_TRF 1
>> #define KVM_HOST_DATA_FLAG_TRBE_ENABLED 2
>> +#define KVM_HOST_DATA_FLAG_GUEST_FILTER 3
>
> Guest filter what? This is meaningless.
>
KVM_HOST_DATA_FLAG_SWAP_TRFCR maybe?
>> unsigned long flags;
>>
>> struct kvm_cpu_context host_ctxt;
>> @@ -1387,6 +1388,8 @@ void kvm_clr_pmu_events(u64 clr);
>> bool kvm_set_pmuserenr(u64 val);
>> void kvm_enable_trbe(void);
>> void kvm_disable_trbe(void);
>> +void kvm_set_trfcr(u64 guest_trfcr);
>> +void kvm_clear_trfcr(void);
>> #else
>> static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
>> static inline void kvm_clr_pmu_events(u64 clr) {}
>> @@ -1396,6 +1399,8 @@ static inline bool kvm_set_pmuserenr(u64 val)
>> }
>> static inline void kvm_enable_trbe(void) {}
>> static inline void kvm_disable_trbe(void) {}
>> +static inline void kvm_set_trfcr(u64 guest_trfcr) {}
>> +static inline void kvm_clear_trfcr(void) {}
>> #endif
>>
>> void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu);
>> diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
>> index 0c340ae7b5d1..9266f2776991 100644
>> --- a/arch/arm64/kvm/debug.c
>> +++ b/arch/arm64/kvm/debug.c
>> @@ -337,3 +337,31 @@ void kvm_disable_trbe(void)
>> host_data_clear_flag(TRBE_ENABLED);
>> }
>> EXPORT_SYMBOL_GPL(kvm_disable_trbe);
>> +
>> +void kvm_set_trfcr(u64 guest_trfcr)
>
> Again. Is this the guest's view? or the host view while running the
> guest? I asked the question on the previous patch, and you didn't
> reply.
>
Ah, sorry, I missed that one:
> Guest value? Or host state while running the guest? If the former,
> then this has nothing to do here. If the latter, this should be
> spelled out (trfcr_in_guest?), and the comment amended.
Yes, the latter, guest TRFCR reads are undef anyway. I can rename this
and the host_data variable to be trfcr_in_guest.
>> +{
>> + if (is_protected_kvm_enabled() || WARN_ON_ONCE(preemptible()))
>> + return;
>> +
>> + if (has_vhe())
>> + write_sysreg_s(guest_trfcr, SYS_TRFCR_EL12);
>> + else {
>> + *host_data_ptr(guest_trfcr_el1) = guest_trfcr;
>> + host_data_set_flag(GUEST_FILTER);
>> + }
>
> Oh come on. This is basic coding style, see section 3 in
> Documentation/process/coding-style.rst.
>
Oops, I'd have thought checkpatch could catch something like that. Will fix.
>> +}
>> +EXPORT_SYMBOL_GPL(kvm_set_trfcr);
>> +
>> +void kvm_clear_trfcr(void)
>> +{
>> + if (is_protected_kvm_enabled() || WARN_ON_ONCE(preemptible()))
>> + return;
>> +
>> + if (has_vhe())
>> + write_sysreg_s(0, SYS_TRFCR_EL12);
>> + else {
>> + *host_data_ptr(guest_trfcr_el1) = 0;
>> + host_data_clear_flag(GUEST_FILTER);
>> + }
>> +}
>> +EXPORT_SYMBOL_GPL(kvm_clear_trfcr);
>
> Why do we have two helpers? Clearly, calling kvm_set_trfcr() with
> E{1,0}TRE=={0,0} should result in *disabling* things. Except it
> doesn't, and you should fix it. Once that is fixed, it becomes
> obvious that kvm_clear_trfcr() serves no purpose.
>
With only one kvm_set_trfcr() there's no way to distinguish swapping in
a 0 value from stopping the swapping altogether. I thought we wanted a
single flag that gated the register accesses so the hyp mostly does
nothing? With only kvm_set_trfcr() you'd first need to check FEAT_TRF,
then compare the real register with trfcr_in_guest, to know whether to
swap or not every time.
Actually I think some of the previous versions had something like this
but it was a bit more complicated.
Maybe set/clear_trfcr() aren't great names. Perhaps
kvm_set_trfcr_in_guest() and kvm_disable_trfcr_in_guest()? With the
second one hinting that it stops the swapping regardless of what the
values are.
I don't think calling kvm_set_trfcr() with E{1,0}TRE=={0,0} is actually
broken in this version; it means that the Coresight driver wants that
value to be installed for guests. So it should actually _enable_
swapping in the value of 0, not disable anything.
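The distinction being argued here (swapping in a value of 0 vs not
swapping at all) can be modeled with a value plus a separate flag
(illustrative userspace sketch; the names are invented, not the KVM
API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of the host_data state: a guest TRFCR value plus a flag that
 * says whether the hyp should swap it in at all. */
static uint64_t guest_trfcr;
static bool swap_trfcr;

/* "set": install a value for guests. Even 0 is a valid value to swap
 * in -- it disables trace while the guest runs. */
static void set_trfcr_in_guest(uint64_t val)
{
    guest_trfcr = val;
    swap_trfcr = true;
}

/* "clear": stop swapping altogether. A single setter can't express
 * this, because set_trfcr_in_guest(0) already means "swap in 0". */
static void clear_trfcr_in_guest(void)
{
    guest_trfcr = 0;
    swap_trfcr = false;
}

/* The hyp's fast do-nothing path just tests the flag. */
static bool trace_needs_switch(void)
{
    return swap_trfcr;
}
```

With the two entry points, the hyp only pays the register swap when a
session has explicitly installed a guest value, even a zero one.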
> To sum it up, KVM's API should reflect the architecture instead of
> making things up.
>
We had kvm_set_trfcr(u64 host_trfcr, u64 guest_trfcr) on the last
version, which also serves the same purpose I mentioned above because
you can check if they're the same or not and disable swapping. I don't
know if that counts as reflecting the architecture better. But Oliver
mentioned he preferred it more "intent" based which is why I added the
clear_trfcr().
>> diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>> index 9479bee41801..7edee7ace433 100644
>> --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>> +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>> @@ -67,6 +67,7 @@ static void __trace_do_switch(u64 *saved_trfcr, u64 new_trfcr)
>> static bool __trace_needs_switch(void)
>> {
>> return host_data_test_flag(TRBE_ENABLED) ||
>> + host_data_test_flag(GUEST_FILTER) ||
>> (is_protected_kvm_enabled() && host_data_test_flag(HAS_TRF));
>
> Wouldn't it make more sense to just force the "GUEST_FILTER" flag in
> the pKVM case, and drop the 3rd term altogether?
>
> M.
>
Yep we can set GUEST_FILTER once at startup and it gets dropped along
with HAS_TRF. That's a lot simpler.
Thanks
James
On 20/12/2024 5:05 pm, Marc Zyngier wrote:
> On Wed, 27 Nov 2024 10:01:23 +0000,
> James Clark <james.clark(a)linaro.org> wrote:
>>
>> Currently in nVHE, KVM has to check if TRBE is enabled on every guest
>> switch even if it was never used. Because it's a debug feature and is
>> more likely to not be used than used, give KVM the TRBE buffer status to
>> allow a much simpler and faster do-nothing path in the hyp.
>>
>> This is always called with preemption disabled except for probe/hotplug
>> which gets wrapped with preempt_disable().
>>
>> Protected mode disables trace regardless of TRBE (because
>> guest_trfcr_el1 is always 0), which was not previously done. HAS_TRBE
>> becomes redundant, but HAS_TRF is now required for this.
>>
>> Signed-off-by: James Clark <james.clark(a)linaro.org>
>> ---
>> arch/arm64/include/asm/kvm_host.h | 10 +++-
>> arch/arm64/kvm/debug.c | 25 ++++++++--
>> arch/arm64/kvm/hyp/nvhe/debug-sr.c | 51 +++++++++++---------
>> drivers/hwtracing/coresight/coresight-trbe.c | 5 ++
>> 4 files changed, 65 insertions(+), 26 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 7e3478386351..ba251caa593b 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -611,7 +611,8 @@ struct cpu_sve_state {
>> */
>> struct kvm_host_data {
>> #define KVM_HOST_DATA_FLAG_HAS_SPE 0
>> -#define KVM_HOST_DATA_FLAG_HAS_TRBE 1
>> +#define KVM_HOST_DATA_FLAG_HAS_TRF 1
>> +#define KVM_HOST_DATA_FLAG_TRBE_ENABLED 2
>> unsigned long flags;
>>
>> struct kvm_cpu_context host_ctxt;
>> @@ -657,6 +658,9 @@ struct kvm_host_data {
>> u64 mdcr_el2;
>> } host_debug_state;
>>
>> + /* Guest trace filter value */
>> + u64 guest_trfcr_el1;
>
> Guest value? Or host state while running the guest? If the former,
> then this has nothing to do here. If the latter, this should be
> spelled out (trfcr_in_guest?), and the comment amended.
>
>> +
>> /* Number of programmable event counters (PMCR_EL0.N) for this CPU */
>> unsigned int nr_event_counters;
>> };
>> @@ -1381,6 +1385,8 @@ static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
>> void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
>> void kvm_clr_pmu_events(u64 clr);
>> bool kvm_set_pmuserenr(u64 val);
>> +void kvm_enable_trbe(void);
>> +void kvm_disable_trbe(void);
>> #else
>> static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
>> static inline void kvm_clr_pmu_events(u64 clr) {}
>> @@ -1388,6 +1394,8 @@ static inline bool kvm_set_pmuserenr(u64 val)
>> {
>> return false;
>> }
>> +static inline void kvm_enable_trbe(void) {}
>> +static inline void kvm_disable_trbe(void) {}
>> #endif
>>
>> void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu);
>> diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
>> index dd9e139dfd13..0c340ae7b5d1 100644
>> --- a/arch/arm64/kvm/debug.c
>> +++ b/arch/arm64/kvm/debug.c
>> @@ -314,7 +314,26 @@ void kvm_init_host_debug_data(void)
>> !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P))
>> host_data_set_flag(HAS_SPE);
>>
>> - if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceBuffer_SHIFT) &&
>> - !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_EL1_P))
>> - host_data_set_flag(HAS_TRBE);
>> + if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceFilt_SHIFT))
>> + host_data_set_flag(HAS_TRF);
>> }
>> +
>> +void kvm_enable_trbe(void)
>> +{
>> + if (has_vhe() || is_protected_kvm_enabled() ||
>> + WARN_ON_ONCE(preemptible()))
>> + return;
>> +
>> + host_data_set_flag(TRBE_ENABLED);
>> +}
>> +EXPORT_SYMBOL_GPL(kvm_enable_trbe);
>> +
>> +void kvm_disable_trbe(void)
>> +{
>> + if (has_vhe() || is_protected_kvm_enabled() ||
>> + WARN_ON_ONCE(preemptible()))
>> + return;
>> +
>> + host_data_clear_flag(TRBE_ENABLED);
>> +}
>> +EXPORT_SYMBOL_GPL(kvm_disable_trbe);
>> diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>> index 858bb38e273f..9479bee41801 100644
>> --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>> +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>> @@ -51,32 +51,39 @@ static void __debug_restore_spe(u64 pmscr_el1)
>> write_sysreg_el1(pmscr_el1, SYS_PMSCR);
>> }
>>
>> -static void __debug_save_trace(u64 *trfcr_el1)
>> +static void __trace_do_switch(u64 *saved_trfcr, u64 new_trfcr)
>> {
>> - *trfcr_el1 = 0;
>> + *saved_trfcr = read_sysreg_el1(SYS_TRFCR);
>> + write_sysreg_el1(new_trfcr, SYS_TRFCR);
>>
>> - /* Check if the TRBE is enabled */
>> - if (!(read_sysreg_s(SYS_TRBLIMITR_EL1) & TRBLIMITR_EL1_E))
>> + /* No need to drain if going to an enabled state or from disabled state */
>> + if (new_trfcr || !*saved_trfcr)
>
> What if TRFCR_EL1.TS is set to something non-zero? I'd rather you
> check for the E*TRE bits instead of assuming things.
>
Yeah it's probably better that way. TS is actually always set when any
tracing session starts and then never cleared, so doing it the simpler
way made it always flush even after tracing finished, which probably
wasn't great.
>> return;
>> - /*
>> - * Prohibit trace generation while we are in guest.
>> - * Since access to TRFCR_EL1 is trapped, the guest can't
>> - * modify the filtering set by the host.
>> - */
>> - *trfcr_el1 = read_sysreg_el1(SYS_TRFCR);
>> - write_sysreg_el1(0, SYS_TRFCR);
>> +
>> isb();
>> - /* Drain the trace buffer to memory */
>> tsb_csync();
>> }
>>
>> -static void __debug_restore_trace(u64 trfcr_el1)
>> +static bool __trace_needs_switch(void)
>> {
>> - if (!trfcr_el1)
>> - return;
>> + return host_data_test_flag(TRBE_ENABLED) ||
>> + (is_protected_kvm_enabled() && host_data_test_flag(HAS_TRF));
>> +}
>>
>> - /* Restore trace filter controls */
>> - write_sysreg_el1(trfcr_el1, SYS_TRFCR);
>> +static void __trace_switch_to_guest(void)
>> +{
>> + /* Unsupported with TRBE so disable */
>> + if (host_data_test_flag(TRBE_ENABLED))
>> + *host_data_ptr(guest_trfcr_el1) = 0;
>> +
>> + __trace_do_switch(host_data_ptr(host_debug_state.trfcr_el1),
>> + *host_data_ptr(guest_trfcr_el1));
>> +}
>> +
>> +static void __trace_switch_to_host(void)
>> +{
>> + __trace_do_switch(host_data_ptr(guest_trfcr_el1),
>> + *host_data_ptr(host_debug_state.trfcr_el1));
>> }
>>
>> void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
>> @@ -84,9 +91,9 @@ void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
>> /* Disable and flush SPE data generation */
>> if (host_data_test_flag(HAS_SPE))
>> __debug_save_spe(host_data_ptr(host_debug_state.pmscr_el1));
>> - /* Disable and flush Self-Hosted Trace generation */
>> - if (host_data_test_flag(HAS_TRBE))
>> - __debug_save_trace(host_data_ptr(host_debug_state.trfcr_el1));
>> +
>> + if (__trace_needs_switch())
>> + __trace_switch_to_guest();
>> }
>>
>> void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
>> @@ -98,8 +105,8 @@ void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
>> {
>> if (host_data_test_flag(HAS_SPE))
>> __debug_restore_spe(*host_data_ptr(host_debug_state.pmscr_el1));
>> - if (host_data_test_flag(HAS_TRBE))
>> - __debug_restore_trace(*host_data_ptr(host_debug_state.trfcr_el1));
>> + if (__trace_needs_switch())
>> + __trace_switch_to_host();
>> }
>>
>> void __debug_switch_to_host(struct kvm_vcpu *vcpu)
>> diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c
>> index 96a32b213669..9c0f8c43e6fe 100644
>> --- a/drivers/hwtracing/coresight/coresight-trbe.c
>> +++ b/drivers/hwtracing/coresight/coresight-trbe.c
>> @@ -18,6 +18,7 @@
>> #include <asm/barrier.h>
>> #include <asm/cpufeature.h>
>> #include <linux/vmalloc.h>
>> +#include <linux/kvm_host.h>
>
> Ordering of include files.
>
>>
>> #include "coresight-self-hosted-trace.h"
>> #include "coresight-trbe.h"
>> @@ -221,6 +222,7 @@ static inline void set_trbe_enabled(struct trbe_cpudata *cpudata, u64 trblimitr)
>> */
>> trblimitr |= TRBLIMITR_EL1_E;
>> write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
>> + kvm_enable_trbe();
>>
>> /* Synchronize the TRBE enable event */
>> isb();
>> @@ -239,6 +241,7 @@ static inline void set_trbe_disabled(struct trbe_cpudata *cpudata)
>> */
>> trblimitr &= ~TRBLIMITR_EL1_E;
>> write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
>> + kvm_disable_trbe();
>>
>> if (trbe_needs_drain_after_disable(cpudata))
>> trbe_drain_buffer();
>> @@ -253,8 +256,10 @@ static void trbe_drain_and_disable_local(struct trbe_cpudata *cpudata)
>>
>> static void trbe_reset_local(struct trbe_cpudata *cpudata)
>> {
>> + preempt_disable();
>> trbe_drain_and_disable_local(cpudata);
>> write_sysreg_s(0, SYS_TRBLIMITR_EL1);
>> + preempt_enable();
>
> This looks terribly wrong. If you need to disable preemption here, why
> doesn't the critical section cover all register accesses? Surely you
> don't want to nuke another CPU's context?
>
> But looking at the calling sites, this makes even less sense. The two
> callers of this thing mess with *per-CPU* interrupts. Dealing with
> per-CPU interrupts in preemptible context is a big no-no (hint: they
> start with a call to smp_processor_id()).
>
> So what is this supposed to ensure?
>
> M.
>
These ones are only intended to silence the WARN_ON_ONCE(preemptible())
in kvm_enable_trbe() when this is called from boot/hotplug
(arm_trbe_enable_cpu()). Preemption isn't disabled, but a guest can't
run at that point either.
The "real" calls to kvm_enable_trbe() _are_ called from an atomic
context. I think there was a previous review comment about when it was
safe to call the KVM parts of this change, which is why I added the
warning making sure it was always called with preemption disabled. But
actually I could remove the warning and these preempt_disables() and
replace them with a comment.
Thanks
James
On 20/12/2024 11:39, Yeoreum Yun wrote:
> Hi Suzuki,
>> On 20/12/2024 10:38, Yeoreum Yun wrote:
>>> Hi Mike.
>>>
>>>> Notably missing is the same changes for the etm3x driver. The ETMv3.x
>>>> and PTM1.x are supported by this driver, and these trace source
>>>> variants are also supported in perf in the cs_etm.c code.
>>>
>>>> But I wonder whether etmv3 needs to change, because its spinlock is
>>>> only used via the sysfs enable/disable path.
>>>> So I think it doesn't need the lock type changed.
>>
>> ETM3 can be used in perf mode, similar to the ETM4x.
>>
>> So, you need to fix it as well.
>
> Yes. But etmv3's etmdata->spinlock isn't used in the perf path;
> it's only used in the sysfs interface path.
> That's why I think it could be skipped too.
Ok, which I think is a problem, since the sysfs mode could overwrite
the "config" while perf is preparing the config from the event parsing,
and we would need the lock there. So, for the time being, we can accept
this series, pending other review comments, and address this issue
separately.
Suzuki
>
> Thanks.
On 20/12/2024 10:38, Yeoreum Yun wrote:
> Hi Mike.
>
>> Notably missing is the same changes for the etm3x driver. The ETMv3.x
>> and PTM1.x are supported by this driver, and these trace source
>> variants are also supported in perf in the cs_etm.c code.
>
> But I wonder whether etmv3 needs to change, because its spinlock is
> only used via the sysfs enable/disable path.
> So I think it doesn't need the lock type changed.
ETM3 can be used in perf mode, similar to the ETM4x.
So, you need to fix it as well.
>
>> STM is also missing, though this is not directly enabled via perf -
>> but could perhaps run concurrently as it can be a target output for
>> ftrace.
>
> Actually, I couldn't find the path where the STM's lock could be
> grabbed under another raw_spin_lock (including csdev's).
> If you don't mind, would you let me know the code path please?
STM can't be used in perf mode, and as such you may skip it.
Suzuki
>
> Thanks
>> --
>> Mike Leach
>> Principal Engineer, ARM Ltd.
>> Manchester Design Centre. UK