This is not an optimization in any sane interpretation of that word.
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> From: Dapeng Mi <dapeng1.mi@linux.intel.com>
>
> Currently pmu->global_ctrl is initialized in the common kvm_pmu_refresh()
> helper since both Intel and AMD CPUs set the enable bits for all GP
> counters in the PERF_GLOBAL_CTRL MSR. But it may not be the best place to
> initialize pmu->global_ctrl. Strictly speaking, pmu->global_ctrl is
> vendor specific
And? There's mounds of KVM code that show it's very, very easy to manage global_ctrl in common code.
> and there is a lot of global_ctrl related processing in the
> intel/amd_pmu_refresh() helpers, so it is better to handle it all in the
> same place. Thus, move the pmu->global_ctrl initialization into the
> intel/amd_pmu_refresh() helpers.
> Besides, intel_pmu_refresh() doesn't handle global_ctrl_rsvd and
> global_status_rsvd properly; fix that as well.
Really? You mention a bug fix in passing, and squash it into an opinionated refactoring that is advertised as "optimizations" without even stating what the bug is? C'mon.
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Signed-off-by: Mingwei Zhang <mizhang@google.com>
> ---
>  arch/x86/kvm/pmu.c           | 10 -------
>  arch/x86/kvm/svm/pmu.c       | 14 +++++++--
>  arch/x86/kvm/vmx/pmu_intel.c | 55 ++++++++++++++++++------------------
>  3 files changed, 39 insertions(+), 40 deletions(-)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 4e8cefcce7ab..2ac4c039de8b 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -843,16 +843,6 @@ void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
>  		return;
>
>  	kvm_pmu_call(refresh)(vcpu);
> -
> -	/*
> -	 * At RESET, both Intel and AMD CPUs set all enable bits for general
> -	 * purpose counters in IA32_PERF_GLOBAL_CTRL (so that software that
> -	 * was written for v1 PMUs don't unknowingly leave GP counters disabled
> -	 * in the global controls). Emulate that behavior when refreshing the
> -	 * PMU so that userspace doesn't need to manually set PERF_GLOBAL_CTRL.
> -	 */
> -	if (kvm_pmu_has_perf_global_ctrl(pmu) && pmu->nr_arch_gp_counters)
> -		pmu->global_ctrl = GENMASK_ULL(pmu->nr_arch_gp_counters - 1, 0);
Absolutely not, this code stays where it is.
>  }
>
>  void kvm_pmu_init(struct kvm_vcpu *vcpu)
>
> diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> index 153972e944eb..eba086ef5eca 100644
> --- a/arch/x86/kvm/svm/pmu.c
> +++ b/arch/x86/kvm/svm/pmu.c
> @@ -198,12 +198,20 @@ static void __amd_pmu_refresh(struct kvm_vcpu *vcpu)
>  	pmu->nr_arch_gp_counters = min_t(unsigned int, pmu->nr_arch_gp_counters,
>  					 kvm_pmu_cap.num_counters_gp);
>
> -	if (pmu->version > 1) {
> -		pmu->global_ctrl_rsvd = ~((1ull << pmu->nr_arch_gp_counters) - 1);
> +	if (kvm_pmu_cap.version > 1) {
It's not just global_ctrl. PEBS and the fixed counters also depend on v2+ (the SDM contradicts itself; KVM's ABI is that they're v2+).
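To illustrate the point, this is roughly the shape I'd expect the version
check to take; a from-memory sketch, not the actual intel_pmu_refresh()
code, and nr_fixed_from_cpuid, gp_counters_mask and fixed_counters_mask
are placeholders for the real CPUID/bitmask plumbing:

	if (pmu->version > 1) {
		/*
		 * Fixed counters, PERF_GLOBAL_{CTRL,STATUS} and PEBS are all
		 * v2+ features in KVM's ABI; none of them should exist for a
		 * v1 guest PMU.  (Placeholders below, not real field names.)
		 */
		pmu->nr_arch_fixed_counters = min_t(int, nr_fixed_from_cpuid,
						    kvm_pmu_cap.num_counters_fixed);
		pmu->global_ctrl_rsvd = ~(gp_counters_mask | fixed_counters_mask);
		pmu->global_status_rsvd = pmu->global_ctrl_rsvd;
		/* ... PEBS setup, likewise gated on v2+ ... */
	} else {
		pmu->nr_arch_fixed_counters = 0;
	}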
> +		/*
> +		 * At RESET, AMD CPUs set all enable bits for general purpose counters in
> +		 * IA32_PERF_GLOBAL_CTRL (so that software that was written for v1 PMUs
> +		 * don't unknowingly leave GP counters disabled in the global controls).
> +		 * Emulate that behavior when refreshing the PMU so that userspace doesn't
> +		 * need to manually set PERF_GLOBAL_CTRL.
> +		 */
> +		pmu->global_ctrl = BIT_ULL(pmu->nr_arch_gp_counters) - 1;
> +		pmu->global_ctrl_rsvd = ~pmu->global_ctrl;
>  		pmu->global_status_rsvd = pmu->global_ctrl_rsvd;
>  	}
>
> -	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
> +	pmu->counter_bitmask[KVM_PMC_GP] = BIT_ULL(48) - 1;
I like these cleanups, but they too belong in a separate patch.
>  	pmu->reserved_bits = 0xfffffff000280000ull;
>  	pmu->raw_event_mask = AMD64_RAW_EVENT_MASK;
>  	/* not applicable to AMD; but clean them to prevent any fall out */
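To be explicit about what I'm asking for: with the global_ctrl
initialization staying in kvm_pmu_refresh(), all the AMD side needs here
is the reserved-bit masks, i.e. roughly the pre-patch code (untested
sketch), with the BIT_ULL() conversions split out into their own patch
per the above:

	if (pmu->version > 1) {
		pmu->global_ctrl_rsvd = ~((1ull << pmu->nr_arch_gp_counters) - 1);
		pmu->global_status_rsvd = pmu->global_ctrl_rsvd;
	}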