Oliver Upton <oliver.upton@linux.dev> writes:

> On Mon, Jun 02, 2025 at 07:26:55PM +0000, Colton Lewis wrote:
>> With FGT in place, the remaining trapped registers need to be written
>> through to the underlying physical registers as well as the virtual
>> ones. Failing to do this means delaying when guest writes take effect.
>>
>> Signed-off-by: Colton Lewis <coltonlewis@google.com>
>> ---
>>  arch/arm64/kvm/sys_regs.c | 27 +++++++++++++++++++++++++--
>>  1 file changed, 25 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index d368eeb4f88e..afd06400429a 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -18,6 +18,7 @@
>>  #include <linux/printk.h>
>>  #include <linux/uaccess.h>
>>  #include <linux/irqchip/arm-gic-v3.h>
>> +#include <linux/perf/arm_pmu.h>
>>  #include <linux/perf/arm_pmuv3.h>
>>
>>  #include <asm/arm_pmuv3.h>
>> @@ -942,7 +943,11 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
>>  {
>>  	u64 pmcr, val;
>>
>> -	pmcr = kvm_vcpu_read_pmcr(vcpu);
>> +	if (kvm_vcpu_pmu_is_partitioned(vcpu))
>> +		pmcr = read_pmcr();
>
> Reading PMCR_EL0 from EL2 is not going to have the desired effect.
> PMCR_EL0.N only returns HPMN when read from the guest.

Okay. I'll change that.
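
Roughly what I have in mind, with the caveat that kvm_pmu_hpmn() below
is a made-up placeholder for wherever the partition boundary (HPMN)
ends up being tracked:

static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
{
	u64 nr_counters;

	/*
	 * With a partitioned PMU the guest only owns counters
	 * [0, HPMN), so bound the index by the HPMN value KVM
	 * programmed rather than by PMCR_EL0.N as seen from EL2.
	 */
	if (kvm_vcpu_pmu_is_partitioned(vcpu))
		nr_counters = kvm_pmu_hpmn(vcpu);
	else
		nr_counters = FIELD_GET(ARMV8_PMU_PMCR_N,
					kvm_vcpu_read_pmcr(vcpu));

	if (idx >= nr_counters && idx != ARMV8_PMU_CYCLE_IDX) {
		kvm_inject_undefined(vcpu);
		return false;
	}

	return true;
}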

>> +	else
>> +		pmcr = kvm_vcpu_read_pmcr(vcpu);
>>
>>  	val = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
>>  	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
>>  		kvm_inject_undefined(vcpu);
>> @@ -1037,6 +1042,22 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>>  	return true;
>>  }
>>
>> +static void writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>> +				   u64 reg, u64 idx)
>> +{
>> +	u64 evmask = kvm_pmu_evtyper_mask(vcpu->kvm);
>> +	u64 val = p->regval & evmask;
>> +
>> +	__vcpu_sys_reg(vcpu, reg) = val;
>> +
>> +	if (idx == ARMV8_PMU_CYCLE_IDX)
>> +		write_pmccfiltr(val);
>> +	else if (idx == ARMV8_PMU_INSTR_IDX)
>> +		write_pmicfiltr(val);
>> +	else
>> +		write_pmevtypern(idx, val);
>> +}
>
> How are you preventing the VM from configuring an event counter to
> count at EL2?

I had thought that's what kvm_pmu_evtyper_mask() did, since the value
masked with it is what kvm_pmu_set_counter_event_type() writes to the
vCPU register.

> I see that you're setting MDCR_EL2.HPMD (which assumes FEAT_PMUv3p1)
> but due to an architecture bug there's no control to prohibit the
> cycle counter until FEAT_PMUv3p5 (MDCR_EL2.HCCD).

I'll fix that.

> Since you're already trapping PMCCFILTR you could potentially
> configure the hardware value in such a way that it filters EL2.

Sure.
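
Concretely, I can strip the EL2 filter bit from whatever reaches the
hardware while leaving the guest-visible copy untouched. A rough sketch
on top of writethrough_pmevtyper() above (ignoring nested virt for the
moment):

	u64 evmask = kvm_pmu_evtyper_mask(vcpu->kvm);
	u64 val = p->regval & evmask;

	__vcpu_sys_reg(vcpu, reg) = val;

	/*
	 * Never count at EL2 on the guest's behalf: clear the
	 * include-EL2 (NSH) bit in the value that goes to hardware.
	 * Reads are trapped too, so the guest still sees the value
	 * it wrote.
	 */
	val &= ~(u64)ARMV8_PMU_INCLUDE_EL2;

	if (idx == ARMV8_PMU_CYCLE_IDX)
		write_pmccfiltr(val);
	else if (idx == ARMV8_PMU_INSTR_IDX)
		write_pmicfiltr(val);
	else
		write_pmevtypern(idx, val);

That would also cover the pre-FEAT_PMUv3p5 cycle counter case, at
least for values the guest writes; the reset value of the hardware
PMCCFILTR would need the same treatment.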

>>  static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>  			       const struct sys_reg_desc *r)
>>  {
>> @@ -1063,7 +1084,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>  	if (!pmu_counter_idx_valid(vcpu, idx))
>>  		return false;
>>
>> -	if (p->is_write) {
>> +	if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
>> +		writethrough_pmevtyper(vcpu, p, reg, idx);
>
> What about the vPMU event filter?

I'll check that too.
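
My first thought is to consult the filter installed through
KVM_ARM_VCPU_PMU_V3_FILTER before letting an event reach the hardware.
Something along these lines, where the helper name and the exact
eventsel extraction are hand-waved:

/*
 * Sketch: report whether userspace has filtered out this event so a
 * written-through PMEVTYPER can't program it in hardware.
 * kvm->arch.pmu_filter is the bitmap set up by the
 * KVM_ARM_VCPU_PMU_V3_FILTER attribute; no bitmap means everything
 * is allowed.
 */
static bool kvm_pmu_event_filtered(struct kvm *kvm, u64 eventsel)
{
	if (!kvm->arch.pmu_filter)
		return false;

	return !test_bit(eventsel, kvm->arch.pmu_filter);
}

writethrough_pmevtyper() could then skip the hardware write (or
program an event that can never count) when that returns true, so the
vCPU register still reflects the guest's write but the filtered event
never runs on the real PMU.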

>> +	} else if (p->is_write) {
>>  		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
>>  		kvm_vcpu_pmu_restore_guest(vcpu);
>>  	} else {
>> --
>> 2.49.0.1204.g71687c7c1d-goog

Thanks, Oliver