On 9/9/25 12:14 AM, Sean Christopherson wrote:
On Mon, Sep 08, 2025, Fei Li wrote:
On 9/5/25 10:59 PM, Fei Li wrote:
On 8/29/25 12:44 AM, Paolo Bonzini wrote:
On Thu, Aug 28, 2025 at 5:13 PM Fei Li <lifei.shirley@bytedance.com> wrote:
Actually this is a bug triggered by a monitor tool in our production environment. The monitor executes the 'info registers -a' hmp command at a fixed frequency, even during the VM startup process, which can leave some AP stuck in KVM_MP_STATE_UNINITIALIZED forever. But this race only occurs with extremely low probability, about 1~2 VM hangs per week.
Considering that other emulators, like cloud-hypervisor and firecracker, may also have similar potential race issues, I think KVM had better do some handling. But anyway, I will check the Qemu code to avoid such a race. Thanks for both of your comments. 🙂
If you can check whether other emulators invoke KVM_SET_VCPU_EVENTS in similar cases, that would of course help in understanding the situation better.
In QEMU, it is possible to delay KVM_GET_VCPU_EVENTS until after all vCPUs have halted.
Paolo
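To sketch what that could look like in QEMU terms, here is a rough illustration only, assuming QEMU's existing pause_all_vcpus()/resume_all_vcpus() and kvm_vcpu_ioctl() helpers; monitor_snapshot_vcpu_events() itself is a hypothetical function, not existing QEMU code, and real code would route the ioctl through run_on_cpu() so it runs on the vCPU's own thread:

    #include "qemu/osdep.h"
    #include "sysemu/cpus.h"
    #include "sysemu/kvm.h"
    #include "hw/core/cpu.h"
    #include <linux/kvm.h>

    /*
     * Hypothetical monitor-side helper: take the events snapshot only
     * after every vCPU has halted, so the GET (and the eventual
     * writeback of the snapshot) cannot interleave with an in-flight
     * INIT/SIPI.
     */
    static int monitor_snapshot_vcpu_events(void)
    {
        CPUState *cpu;
        int ret = 0;

        pause_all_vcpus();          /* every vCPU is out of KVM_RUN now */

        CPU_FOREACH(cpu) {
            struct kvm_vcpu_events events;

            /*
             * Real code would issue this via run_on_cpu() from the
             * vCPU's own thread; inlined here to keep the sketch short.
             */
            ret = kvm_vcpu_ioctl(cpu, KVM_GET_VCPU_EVENTS, &events);
            if (ret < 0) {
                break;
            }
            /* ... copy whatever the monitor command needs out of 'events' ... */
        }

        resume_all_vcpus();
        return ret;
    }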
Replacing the original message with a decently formatted version. Please try to format your emails for plain text, I assume something in your mail system inserted a pile of line wraps and made the entire thing all but unreadable.
Sure, sorry for the inconvenience.
Three contexts race here: [1] the monitor thread running `info registers -a` hmp every 2ms, [2] the AP (vcpu1) thread, and [3] the BSP (vcpu0) sending INIT/SIPI.

[1] Monitor thread, every 2ms:
    for each cpu: cpu_synchronize_state()
        if !qemu_thread_is_self():
            1. insert the work item into cpu->work_list, to be handled asynchronously
            2. then kick the AP (vcpu1) by sending a SIG_IPI/SIGUSR1 signal

[2] AP (vcpu1) thread:
    KVM:  KVM_RUN, then schedule() in the kvm_vcpu_block() loop
    KVM:  checks signal_pending(), breaks out of the loop and returns -EINTR
    Qemu: breaks out of the kvm_cpu_exec() loop, then runs
          1. qemu_wait_io_event()
             => process_queued_cpu_work()
             => cpu->work_list.func(), i.e. the do_kvm_cpu_synchronize_state() callback
                => kvm_arch_get_registers()
                => kvm_get_mp_state()
                   /* KVM: get_mpstate also calls kvm_apic_accept_events()
                    * to handle INIT and SIPI */
             => cpu->vcpu_dirty = true;   // end of qemu_wait_io_event()

[3] BSP (vcpu0):
    SeaBIOS: the BSP enters non-root mode and runs reset_vector() in SeaBIOS,
             sending INIT and then SIPI by writing APIC_ICR during smp_scan
    KVM:  BSP (vcpu0) exits, then
          => handle_apic_write()
          => kvm_lapic_reg_write()
          => kvm_apic_send_ipi() to all APs
          => for each AP: __apic_accept_irq(), e.g. for AP (vcpu1)
             => case APIC_DM_INIT:
                apic->pending_events = (1UL << KVM_APIC_INIT)    (does not kick the AP yet)
             => case APIC_DM_STARTUP:
                set_bit(KVM_APIC_SIPI, &apic->pending_events)    (does not kick the AP yet)

[2] AP (vcpu1) thread, continued:
    2. kvm_cpu_exec()
       => if (cpu->vcpu_dirty):
          => kvm_arch_put_registers()
          => kvm_put_vcpu_events()
    KVM:  kvm_vcpu_ioctl_x86_set_vcpu_events()
          => clear_bit(KVM_APIC_INIT, &vcpu->arch.apic->pending_events);
             i.e. pending_events changes from 11b to 10b
          // end of kvm_vcpu_ioctl_x86_set_vcpu_events()
Qemu is clearly "putting" stale data here.
    Qemu: => after put_registers, cpu->vcpu_dirty = false;
          => kvm_vcpu_ioctl(cpu, KVM_RUN, 0)
    KVM:  KVM_RUN => schedule() in kvm_vcpu_block() until Qemu's next
          SIG_IPI/SIGUSR1 signal
          /* But AP (vcpu1)'s mp_state will never change from
           * KVM_MP_STATE_UNINITIALIZED to KVM_MP_STATE_INIT_RECEIVED, and
           * then to KVM_MP_STATE_RUNNABLE, without INIT being handled inside
           * kvm_apic_accept_events(), given that the BSP will never send
           * INIT/SIPI again during smp_scan. So AP (vcpu1) will never enter
           * non-root mode. */

[3] SeaBIOS: waits for CountCPUs == expected_cpus_count and loops forever,
    i.e. the AP (vcpu1) stays at EIP=0000fff0, CS=f000 (base ffff0000),
    and the BSP (vcpu0) appears 100% utilized as it spins in the wait loop.
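Condensing the interleaving above into one place, as illustrative pseudo-C only, not actual QEMU/KVM code; 'ap_fd' is a hypothetical, already-open vCPU fd for the AP (vcpu1):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static void lost_init_race(int ap_fd)
    {
        struct kvm_vcpu_events snap;

        /* [2] 1. AP thread: snapshot taken while nothing is pending yet. */
        ioctl(ap_fd, KVM_GET_VCPU_EVENTS, &snap);  /* snap.smi.latched_init == 0 */

        /*
         * [3] BSP thread, concurrently: smp_scan writes APIC_ICR and KVM
         *     latches both events for the AP without kicking it:
         *         apic->pending_events = INIT|SIPI   (11b)
         */

        /* [2] 2. AP thread: writes the stale snapshot back. */
        ioctl(ap_fd, KVM_SET_VCPU_EVENTS, &snap);
        /*
         * KVM: clear_bit(KVM_APIC_INIT, &apic->pending_events), i.e. 11b -> 10b.
         * The INIT is lost, and the surviving SIPI alone is ignored while the
         * AP sits in KVM_MP_STATE_UNINITIALIZED, so the AP never runs.
         */
    }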
By the way, this doesn't seem to be a Qemu bug, since calling "info registers -a" is allowed regardless of the vcpu state (including when the VM is in the bootloader). Thus the INIT should not be latched in this case.
No, this is a Qemu bug. It is the VMM's responsibility to ensure it doesn't load stale data into a vCPU. There is simply no way for KVM to do the right thing, because KVM can't know if userspace _wants_ to clobber events versus when userspace is racing, as in this case.
E.g. the exact same race exists with NMIs.
kvm_vcpu_ioctl_x86_get_vcpu_events()
    vcpu->arch.nmi_queued   = 0
    vcpu->arch.nmi_pending  = 0
    kvm_vcpu_events.pending = 0

kvm_inject_nmi()
    vcpu->arch.nmi_queued   = 1
    vcpu->arch.nmi_pending  = 0
    kvm_vcpu_events.pending = 0

kvm_vcpu_ioctl_x86_set_vcpu_events()
    vcpu->arch.nmi_queued   = 0    // Moved to nmi_pending by process_nmi()
    vcpu->arch.nmi_pending  = 0    // Explicitly cleared after process_nmi() when
                                   // KVM_VCPUEVENT_VALID_NMI_PENDING
    kvm_vcpu_events.pending = 0    // Stale data
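For context on why this is fixable from userspace: per KVM's API documentation, KVM_SET_VCPU_EVENTS only consumes nmi.pending, sipi_vector, and the smi block when the corresponding KVM_VCPUEVENT_VALID_* bit is set in events.flags. A minimal userspace-side sketch of a put that cannot clobber those events (a hypothetical helper, not QEMU code):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int put_events_runtime(int vcpu_fd)
    {
        struct kvm_vcpu_events events;
        int ret;

        ret = ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events);
        if (ret < 0)
            return ret;

        /* ... update whichever runtime fields userspace owns ... */

        /*
         * Leave the pending-NMI count, the SIPI vector, and the SMM block
         * (which carries smi.latched_init, i.e. the latched INIT from the
         * race above) to the kernel: with these bits clear, KVM keeps its
         * own state for anything that arrived between the GET and this SET.
         */
        events.flags &= ~(KVM_VCPUEVENT_VALID_NMI_PENDING |
                          KVM_VCPUEVENT_VALID_SIPI_VECTOR |
                          KVM_VCPUEVENT_VALID_SMM);

        return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
    }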
But for NMI, Qemu avoids clobbering state thanks to a 15+ year old commit that specifically avoids clobbering NMI *and SIPI* when not putting "reset" state:
commit ea64305139357e89f58fc05ff5d48dc233d44d87
Author:     Jan Kiszka <jan.kiszka@siemens.com>
AuthorDate: Mon Mar 1 19:10:31 2010 +0100
Commit:     Marcelo Tosatti <mtosatti@redhat.com>
CommitDate: Thu Mar 4 00:29:30 2010 -0300

    KVM: x86: Restrict writeback of VCPU state

    Do not write nmi_pending, sipi_vector, and mpstate unless we at least go
    through a reset. And TSC as well as KVM wallclocks should only be written
    on full sync, otherwise we risk to drop some time on state
    read-modify-write.

    if (level >= KVM_PUT_RESET_STATE) {  <=========================
        events.flags |= KVM_VCPUEVENT_VALID_NMI_PENDING;
        if (env->mp_state == KVM_MP_STATE_SIPI_RECEIVED) {
            events.flags |= KVM_VCPUEVENT_VALID_SIPI_VECTOR;
        }
    }
Presumably "SMIs" need the same treatment, e.g.
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 6c749d4ee8..f5bc0f9327 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -5033,7 +5033,7 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
         events.sipi_vector = env->sipi_vector;
     }
 
-    if (has_msr_smbase) {
+    if (has_msr_smbase && level >= KVM_PUT_RESET_STATE) {
         events.flags |= KVM_VCPUEVENT_VALID_SMM;
         events.smi.smm = !!(env->hflags & HF_SMM_MASK);
         events.smi.smm_inside_nmi = !!(env->hflags2 & HF2_SMM_INSIDE_NMI_MASK);
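For reference, the guard works because the writeback that races (the one in kvm_cpu_exec() when cpu->vcpu_dirty is set) is a runtime-level put, which stays below KVM_PUT_RESET_STATE, so the gated fields are never marked valid on that path. QEMU's sync levels look like the following (values as declared in include/sysemu/kvm.h at the time of writing; comments paraphrased):

    /* QEMU's register-sync levels; the racing put from kvm_cpu_exec() uses
     * KVM_PUT_RUNTIME_STATE, so anything gated on
     * 'level >= KVM_PUT_RESET_STATE' is skipped on that path. */
    #define KVM_PUT_RUNTIME_STATE   1   /* state the vCPU itself touches at runtime */
    #define KVM_PUT_RESET_STATE     2   /* state modified on vCPU reset */
    #define KVM_PUT_FULL_STATE      3   /* full state set, e.g. at init or on vmload */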
I see, this is indeed a feasible solution. I will then send a patch to the Qemu community to seek more advice. Thanks for your suggestions.
Have a nice day, thanks a lot
Fei