On Tue, 2022-07-12 at 20:32 +0800, Wang Guangju wrote:
When EOI virtualization is performed on VMX, kvm_apic_set_eoi_accelerated() is called upon EXIT_REASON_EOI_INDUCED but, unlike its non-accelerated apic_set_eoi() sibling, it leaves Hyper-V SINT vectors unhandled.
Send EOI to Hyper-V SINT vectors when handling accelerated EOI-induced VM-Exits. KVM's Hyper-V emulation needs to handle the SINT EOI regardless of whether the EOI is accelerated or not.
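For illustration, a minimal standalone sketch of the two pre-patch EOI paths (not kernel code: the types and helpers below are simplified stand-ins, not the real KVM structures), showing how the accelerated path drops the SynIC notification:

/*
 * Simplified stand-ins for struct kvm_lapic and the Hyper-V SynIC state,
 * used only to illustrate the missing SINT EOI on the accelerated path.
 */
#include <stdbool.h>
#include <stdio.h>

struct synic { bool vec_bitmap[256]; };	/* stand-in for SynIC vector bitmap */
struct lapic { struct synic *synic; };	/* stand-in for struct kvm_lapic */

static void hv_synic_send_eoi(struct lapic *apic, int vector)
{
	(void)apic;	/* unused in this sketch */
	printf("SynIC EOI delivered for vector 0x%x\n", vector);
}

static void ioapic_send_eoi(struct lapic *apic, int vector)
{
	(void)apic;	/* unused in this sketch */
	printf("I/O APIC EOI for vector 0x%x\n", vector);
}

/* Pre-patch non-accelerated path: notifies the SynIC for SINT vectors. */
static void set_eoi(struct lapic *apic, int vector)
{
	if (apic->synic && apic->synic->vec_bitmap[vector])
		hv_synic_send_eoi(apic, vector);
	ioapic_send_eoi(apic, vector);
}

/* Pre-patch accelerated path: the SynIC notification is missing entirely. */
static void set_eoi_accelerated(struct lapic *apic, int vector)
{
	ioapic_send_eoi(apic, vector);
}

int main(void)
{
	struct synic synic = { .vec_bitmap = { [0x40] = true } };	/* 0x40 registered as a SINT vector */
	struct lapic apic = { .synic = &synic };

	set_eoi(&apic, 0x40);			/* SynIC notified */
	set_eoi_accelerated(&apic, 0x40);	/* SynIC EOI silently dropped */
	return 0;
}

With this patch, both paths funnel through the same helper, so the SynIC check also runs on accelerated EOI-induced exits.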
How does this relate to the AutoEOI feature, and to the fact that with AVIC, KVM can't intercept EOI at all (*)?
Best regards,
	Maxim Levitsky
(*) AVIC does intercept EOI writes, but only for level-triggered interrupts.
Rename kvm_apic_set_eoi_accelerated() to kvm_apic_set_eoi() and let the non-accelerated helper call the "accelerated" version. That documents the delta between the non-accelerated path and the accelerated path. In addition, keep tracing the EOI in the non-accelerated path even when there is no valid vector to EOI, so that the function's existing semantics are preserved.
Fixes: 5c919412fe61 ("kvm/x86: Hyper-V synthetic interrupt controller")
Cc: stable@vger.kernel.org
Tested-by: Wang Guangju <wangguangju@baidu.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Co-developed-by: Li Rongqing <lirongqing@baidu.com>
Signed-off-by: Wang Guangju <wangguangju@baidu.com>
v1 -> v2: Updated the commit message and implemented a new inline function, apic_set_eoi_vector().

v2 -> v3: Updated the subject and commit message, dropped apic_set_eoi_vector() and renamed kvm_apic_set_eoi_accelerated() to kvm_apic_set_eoi().
 arch/x86/kvm/lapic.c   | 45 ++++++++++++++++++++++-----------------------
 arch/x86/kvm/lapic.h   |  2 +-
 arch/x86/kvm/vmx/vmx.c |  3 ++-
 3 files changed, 25 insertions(+), 25 deletions(-)
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index f03facc..b2e72ab 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1269,46 +1269,45 @@ static void kvm_ioapic_send_eoi(struct kvm_lapic *apic, int vector)
 	kvm_ioapic_update_eoi(apic->vcpu, vector, trigger_mode);
 }
 
+/*
+ * Send EOI for a valid vector. The caller, or hardware when this is invoked
+ * after an accelerated EOI VM-Exit, is responsible for updating the vISR and
+ * vPPR.
+ */
+void kvm_apic_set_eoi(struct kvm_lapic *apic, int vector)
+{
+	trace_kvm_eoi(apic, vector);
+
+	if (to_hv_vcpu(apic->vcpu) &&
+	    test_bit(vector, to_hv_synic(apic->vcpu)->vec_bitmap))
+		kvm_hv_synic_send_eoi(apic->vcpu, vector);
+
+	kvm_ioapic_send_eoi(apic, vector);
+	kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
+}
+EXPORT_SYMBOL_GPL(kvm_apic_set_eoi);
+
 static int apic_set_eoi(struct kvm_lapic *apic)
 {
 	int vector = apic_find_highest_isr(apic);
 
-	trace_kvm_eoi(apic, vector);
-
 	/*
 	 * Not every write EOI will has corresponding ISR,
 	 * one example is when Kernel check timer on setup_IO_APIC
 	 */
-	if (vector == -1)
+	if (vector == -1) {
+		trace_kvm_eoi(apic, vector);
 		return vector;
+	}
 
 	apic_clear_isr(vector, apic);
 	apic_update_ppr(apic);
 
-	if (to_hv_vcpu(apic->vcpu) &&
-	    test_bit(vector, to_hv_synic(apic->vcpu)->vec_bitmap))
-		kvm_hv_synic_send_eoi(apic->vcpu, vector);
+	kvm_apic_set_eoi(apic, vector);
 
-	kvm_ioapic_send_eoi(apic, vector);
-	kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
 	return vector;
 }
 
-/*
- * this interface assumes a trap-like exit, which has already finished
- * desired side effect including vISR and vPPR update.
- */
-void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector)
-{
-	struct kvm_lapic *apic = vcpu->arch.apic;
-
-	trace_kvm_eoi(apic, vector);
-
-	kvm_ioapic_send_eoi(apic, vector);
-	kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
-}
-EXPORT_SYMBOL_GPL(kvm_apic_set_eoi_accelerated);
-
 void kvm_apic_send_ipi(struct kvm_lapic *apic, u32 icr_low, u32 icr_high)
 {
 	struct kvm_lapic_irq irq;
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 762bf61..48260fa 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -126,7 +126,7 @@ u64 kvm_get_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu);
 void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data);
 void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset);
 
-void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector);
+void kvm_apic_set_eoi(struct kvm_lapic *apic, int vector);
 
 int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
 void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9258468..f8b9eb1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5519,9 +5519,10 @@ static int handle_apic_eoi_induced(struct kvm_vcpu *vcpu)
 {
 	unsigned long exit_qualification = vmx_get_exit_qual(vcpu);
 	int vector = exit_qualification & 0xff;
+	struct kvm_lapic *apic = vcpu->arch.apic;
 
 	/* EOI-induced VM exit is trap-like and thus no need to adjust IP */
-	kvm_apic_set_eoi_accelerated(vcpu, vector);
+	kvm_apic_set_eoi(apic, vector);
 
 	return 1;
 }