According to the APM, in the reference for the VMRUN instruction:
Upon #VMEXIT, the processor performs the following actions in order to return to the host execution context: ... clear EVENTINJ field in VMCB
KVM correctly cleared EVENTINJ (i.e. event_inj and event_inj_err) on nested #VMEXIT before commit 2d8a42be0e2b ("KVM: nSVM: synchronize VMCB controls updated by the processor on every vmexit"). That commit made sure the fields are synchronized between VMCB02 and KVM's cached VMCB12 on every L2->L0 #VMEXIT, such that they are serialized correctly on save/restore.
However, the commit also incorrectly copied the fields from KVM's cached VMCB12 to L1's VMCB12 on nested #VMEXIT. Go back to clearing the fields, but do so in __nested_svm_vmexit() instead of nested_svm_vmexit(), such that the clearing also applies to #VMEXITs caused by a failed VMRUN.
Fixes: 2d8a42be0e2b ("KVM: nSVM: synchronize VMCB controls updated by the processor on every vmexit")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
 arch/x86/kvm/svm/nested.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 632e941febaf..b4074e674c9d 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -937,7 +937,7 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
 	return 0;
 }
 
-static void __nested_svm_vmexit(struct vcpu_svm *svm)
+static void __nested_svm_vmexit(struct vcpu_svm *svm, struct vmcb *vmcb12)
 {
 	struct vmcb *vmcb01 = svm->vmcb01.ptr;
 	struct kvm_vcpu *vcpu = &svm->vcpu;
@@ -949,6 +949,10 @@ static void __nested_svm_vmexit(struct vcpu_svm *svm)
 	svm_set_gif(svm, false);
 	vmcb01->control.exit_int_info = 0;
 
+	/* event_inj is cleared on #VMEXIT */
+	vmcb12->control.event_inj = 0;
+	vmcb12->control.event_inj_err = 0;
+
 	nested_svm_uninit_mmu_context(vcpu);
 	if (nested_svm_load_cr3(vcpu, vmcb01->save.cr3, false, true)) {
 		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
@@ -973,7 +977,7 @@ static void nested_svm_failed_vmrun(struct vcpu_svm *svm, struct vmcb *vmcb12)
 	vmcb12->control.exit_code_hi = -1u;
 	vmcb12->control.exit_info_1 = 0;
 	vmcb12->control.exit_info_2 = 0;
-	__nested_svm_vmexit(svm);
+	__nested_svm_vmexit(svm, vmcb12);
 }
 
 int nested_svm_vmrun(struct kvm_vcpu *vcpu)
@@ -1156,8 +1160,6 @@ void nested_svm_vmexit(struct vcpu_svm *svm)
 	vmcb12->control.next_rip = vmcb02->control.next_rip;
 
 	vmcb12->control.int_ctl = svm->nested.ctl.int_ctl;
-	vmcb12->control.event_inj = svm->nested.ctl.event_inj;
-	vmcb12->control.event_inj_err = svm->nested.ctl.event_inj_err;
 
 	if (!kvm_pause_in_guest(vcpu->kvm)) {
 		vmcb01->control.pause_filter_count = vmcb02->control.pause_filter_count;
@@ -1259,8 +1261,6 @@ void nested_svm_vmexit(struct vcpu_svm *svm)
 				       vmcb12->control.exit_int_info_err,
 				       KVM_ISA_SVM);
 
-	kvm_vcpu_unmap(vcpu, &map);
-
 	nested_svm_transition_tlb_flush(vcpu);
 
 	/*
@@ -1282,7 +1282,9 @@ void nested_svm_vmexit(struct vcpu_svm *svm)
 	 * Potentially queues an exception, so it needs to be after
 	 * kvm_clear_exception_queue() is called above.
 	 */
-	__nested_svm_vmexit(svm);
+	__nested_svm_vmexit(svm, vmcb12);
+
+	kvm_vcpu_unmap(vcpu, &map);
 }
 
 static void nested_svm_triple_fault(struct kvm_vcpu *vcpu)