From: Wanpeng Li <wanpengli@tencent.com>
commit e751732486eb3f159089a64d1901992b1357e7cc upstream.
The idea before commit 240c35a37 (which has just been reverted) was that
we have the following FPU states:

               userspace (QEMU)             guest
---------------------------------------------------------------------------
               processor                    vcpu->arch.guest_fpu
>>> KVM_RUN: kvm_load_guest_fpu
               vcpu->arch.user_fpu          processor
>>> preempt out
               vcpu->arch.user_fpu          current->thread.fpu
>>> preempt in
               vcpu->arch.user_fpu          processor
>>> back to userspace
>>> kvm_put_guest_fpu
               processor                    vcpu->arch.guest_fpu
---------------------------------------------------------------------------

With the new lazy model we want to get the state back to the processor
when we are scheduled in, from current->thread.fpu.
Reported-by: Thomas Lambertz <mail@thomaslambertz.de>
Reported-by: anthony <antdev66@gmail.com>
Tested-by: anthony <antdev66@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Thomas Lambertz <mail@thomaslambertz.de>
Cc: anthony <antdev66@gmail.com>
Cc: stable@vger.kernel.org
Fixes: 5f409e20b ("x86/fpu: Defer FPU state load until return to userspace")
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
[Add a comment in front of the warning. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/kvm/x86.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3264,6 +3264,10 @@ void kvm_arch_vcpu_load(struct kvm_vcpu
 
 	kvm_x86_ops->vcpu_load(vcpu, cpu);
 
+	fpregs_assert_state_consistent();
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		switch_fpu_return();
+
 	/* Apply any externally detected TSC adjustments (due to suspend) */
 	if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
 		adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
@@ -7955,9 +7959,8 @@ static int vcpu_enter_guest(struct kvm_v
 	wait_lapic_expire(vcpu);
 	guest_enter_irqoff();
 
-	fpregs_assert_state_consistent();
-	if (test_thread_flag(TIF_NEED_FPU_LOAD))
-		switch_fpu_return();
+	/* The preempt notifier should have taken care of the FPU already. */
+	WARN_ON_ONCE(test_thread_flag(TIF_NEED_FPU_LOAD));
 
 	if (unlikely(vcpu->arch.switch_db_regs)) {
 		set_debugreg(0, 7);