On Thu, May 22, 2025 at 12:34 AM Jingwei Wang <wangjingwei@iscas.ac.cn> wrote:
The riscv_hwprobe vDSO data is populated by init_hwprobe_vdso_data(), an arch_initcall_sync. However, underlying data for some keys, like RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF, is determined asynchronously.
Specifically, the per_cpu(vector_misaligned_access, cpu) values are set by the vec_check_unaligned_access_speed_all_cpus kthread. This kthread is spawned by an earlier arch_initcall (check_unaligned_access_all_cpus) and may complete its benchmark *after* init_hwprobe_vdso_data() has already populated the vDSO with default/stale values.
Therefore, refresh the vDSO data for the affected keys (e.g., MISALIGNED_VECTOR_PERF) once the asynchronous probe finishes, ensuring the vDSO reflects the final boot-time values.
Tested by comparing vDSO and syscall results for the affected keys (e.g., MISALIGNED_VECTOR_PERF); both now report the final boot-time values.
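For anyone wanting to reproduce the check, a minimal userspace sketch along these lines should work. This is illustrative only, not the exact test program used: it assumes a libc that routes __riscv_hwprobe() through the vDSO (e.g. glibc >= 2.40 with <sys/hwprobe.h>) and a kernel exposing __NR_riscv_hwprobe.

/*
 * Illustrative sketch: compare the vDSO-served value against the raw
 * syscall for MISALIGNED_VECTOR_PERF. Assumes glibc >= 2.40 for
 * __riscv_hwprobe(); key constants come from the <asm/hwprobe.h> uapi
 * header pulled in by <sys/hwprobe.h>.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/hwprobe.h>

int main(void)
{
	struct riscv_hwprobe vdso_pair = {
		.key = RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF,
	};
	struct riscv_hwprobe sys_pair = vdso_pair;

	/* Fast path: glibc dispatches this through the vDSO when available. */
	if (__riscv_hwprobe(&vdso_pair, 1, 0, NULL, 0))
		return 1;
	/* Slow path: force the real syscall, bypassing the vDSO fast path. */
	if (syscall(__NR_riscv_hwprobe, &sys_pair, 1UL, 0UL, NULL, 0UL))
		return 1;

	printf("vdso=%lld syscall=%lld -> %s\n",
	       (long long)vdso_pair.value, (long long)sys_pair.value,
	       vdso_pair.value == sys_pair.value ? "match" : "MISMATCH");
	return 0;
}

Before this patch, a program like the above can see the vDSO report a default value while the syscall reports the measured one; with the patch, both match.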
Reported-by: Tsukasa OI <research_trasio@irq.a4lg.com>
Closes: https://lore.kernel.org/linux-riscv/760d637b-b13b-4518-b6bf-883d55d44e7f@irq...
Fixes: e7c9d66e313b ("RISC-V: Report vector unaligned access speed hwprobe")
Cc: stable@vger.kernel.org
Signed-off-by: Jingwei Wang <wangjingwei@iscas.ac.cn>
---
Changes in v2:
- Addressed Yixun's feedback regarding #ifdef CONFIG_MMU usage.
- Updated commit message to provide a high-level summary.
- Added Fixes tag for commit e7c9d66e313b.
v1: https://lore.kernel.org/linux-riscv/20250521052754.185231-1-wangjingwei@isca...
 arch/riscv/include/asm/hwprobe.h           |  6 ++++++
 arch/riscv/kernel/sys_hwprobe.c            | 16 ++++++++++++++++
 arch/riscv/kernel/unaligned_access_speed.c |  2 +-
 3 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/arch/riscv/include/asm/hwprobe.h b/arch/riscv/include/asm/hwprobe.h
index 1f690fea0e03de6a..58dc847d86c7f2b0 100644
--- a/arch/riscv/include/asm/hwprobe.h
+++ b/arch/riscv/include/asm/hwprobe.h
@@ -40,4 +40,10 @@ static inline bool riscv_hwprobe_pair_cmp(struct riscv_hwprobe *pair,
 	return pair->value == other_pair->value;
 }
 
+#ifdef CONFIG_MMU
+void riscv_hwprobe_vdso_sync(__s64 sync_key);
+#else
+static inline void riscv_hwprobe_vdso_sync(__s64 sync_key) { };
+#endif /* CONFIG_MMU */
+
 #endif
diff --git a/arch/riscv/kernel/sys_hwprobe.c b/arch/riscv/kernel/sys_hwprobe.c
index 249aec8594a92a80..2e3e612b7ac6fd57 100644
--- a/arch/riscv/kernel/sys_hwprobe.c
+++ b/arch/riscv/kernel/sys_hwprobe.c
@@ -17,6 +17,7 @@
 #include <asm/vector.h>
 #include <asm/vendor_extensions/thead_hwprobe.h>
 #include <vdso/vsyscall.h>
+#include <vdso/datapage.h>
 
 static void hwprobe_arch_id(struct riscv_hwprobe *pair,
			    const struct cpumask *cpus)
@@ -500,6 +501,21 @@ static int __init init_hwprobe_vdso_data(void)
 
 arch_initcall_sync(init_hwprobe_vdso_data);
 
+void riscv_hwprobe_vdso_sync(__s64 sync_key)
+{
+	struct vdso_arch_data *avd = vdso_k_arch_data;
+	struct riscv_hwprobe pair;
+
+	pair.key = sync_key;
+	hwprobe_one_pair(&pair, cpu_online_mask);
+	/*
+	 * Update vDSO data for the given key.
+	 * Currently for non-ID key updates (e.g. MISALIGNED_VECTOR_PERF),
+	 * so 'homogeneous_cpus' is not re-evaluated here.
+	 */
+	avd->all_cpu_hwprobe_values[sync_key] = pair.value;
+}
+
 #endif /* CONFIG_MMU */
 
 SYSCALL_DEFINE5(riscv_hwprobe, struct riscv_hwprobe __user *, pairs,
diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
index 585d2dcf2dab1ccb..81bc4997350acc87 100644
--- a/arch/riscv/kernel/unaligned_access_speed.c
+++ b/arch/riscv/kernel/unaligned_access_speed.c
@@ -375,7 +375,7 @@ static void check_vector_unaligned_access(struct work_struct *work __always_unus
 static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
 {
 	schedule_on_each_cpu(check_vector_unaligned_access);
-
+	riscv_hwprobe_vdso_sync(RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF);
 	return 0;
 }
 #else /* CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS */
-- 
2.49.0
Reviewed-by: Jesse Taube <jesse@rivosinc.com>

Thanks,
Jesse Taube