On Fri, Dec 26, 2025, Yao Yuan wrote:
On Wed, Dec 24, 2025 at 01:12:45AM +0800, Paolo Bonzini wrote:
Create a variant of fpregs_lock_and_load() that KVM can use in its vCPU entry code after preemption has been disabled. While basing it on the existing logic in vcpu_enter_guest(), ensure that fpregs_assert_state_consistent() always runs and sprinkle a few more assertions.
Cc: stable@vger.kernel.org
Fixes: 820a6ee944e7 ("kvm: x86: Add emulation for IA32_XFD", 2022-01-14)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/fpu/api.h |  1 +
 arch/x86/kernel/fpu/core.c     | 17 +++++++++++++++++
 arch/x86/kvm/x86.c             |  8 +-------
 3 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
index cd6f194a912b..0820b2621416 100644
--- a/arch/x86/include/asm/fpu/api.h
+++ b/arch/x86/include/asm/fpu/api.h
@@ -147,6 +147,7 @@ extern void *get_xsave_addr(struct xregs_state *xsave, int xfeature_nr);
 
 /* KVM specific functions */
 extern bool fpu_alloc_guest_fpstate(struct fpu_guest *gfpu);
 extern void fpu_free_guest_fpstate(struct fpu_guest *gfpu);
+extern void fpu_load_guest_fpstate(struct fpu_guest *gfpu);
 extern int fpu_swap_kvm_fpstate(struct fpu_guest *gfpu, bool enter_guest);
 extern int fpu_enable_guest_xfd_features(struct fpu_guest *guest_fpu, u64 xfeatures);

diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
index 3ab27fb86618..a480fa8c65d5 100644
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -878,6 +878,23 @@ void fpregs_lock_and_load(void)
 	fpregs_assert_state_consistent();
 }
 
+void fpu_load_guest_fpstate(struct fpu_guest *gfpu)
+{
+#ifdef CONFIG_X86_DEBUG_FPU
+	struct fpu *fpu = x86_task_fpu(current);
+
+	WARN_ON_ONCE(gfpu->fpstate != fpu->fpstate);
+#endif
+	lockdep_assert_preemption_disabled();
Hi Paolo,
Do we need to make sure that irqs are disabled with lockdep here as well?
Yes please, e.g. see commit 2620fe268e80 ("KVM: x86: Revert "KVM: X86: Fix fpu state crash in kvm guest"").
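
Concretely, something like the below is what I have in mind.  The body past
the lockdep assert isn't quoted above, so this is only a sketch pieced
together from the commit message and the existing vcpu_enter_guest() logic,
not the actual patch:

	void fpu_load_guest_fpstate(struct fpu_guest *gfpu)
	{
	#ifdef CONFIG_X86_DEBUG_FPU
		struct fpu *fpu = x86_task_fpu(current);

		WARN_ON_ONCE(gfpu->fpstate != fpu->fpstate);
	#endif
		/* Guest entry runs with both preemption and interrupts off. */
		lockdep_assert_preemption_disabled();
		lockdep_assert_irqs_disabled();

		/* Same as the open-coded block this replaces in vcpu_enter_guest(). */
		fpregs_assert_state_consistent();
		if (test_thread_flag(TIF_NEED_FPU_LOAD))
			switch_fpu_return();
	}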
irq_fpu_usable() returns true when:

    !in_nmi() && in_hardirq() && !softirq_count()

so it's possible that TIF_NEED_FPU_LOAD gets set again while interrupts are still enabled.
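
Right.  To spell out the window with a rough sketch (an illustration of why
the TIF_NEED_FPU_LOAD check has to run with interrupts already disabled, not
the exact code paths):

	fpregs_assert_state_consistent();
	if (test_thread_flag(TIF_NEED_FPU_LOAD))
		switch_fpu_return();		/* guest fpstate is now loaded */

	/*
	 * <IRQ>  irq_fpu_usable() is true here, so e.g. a driver can do:
	 *   kernel_fpu_begin()	- saves the live (guest) registers into the
	 *			  current fpstate and sets TIF_NEED_FPU_LOAD
	 *   ... clobbers the FPU ...
	 *   kernel_fpu_end()	- restores nothing; the restore is deferred
	 *			  via TIF_NEED_FPU_LOAD
	 * </IRQ>
	 */

	/* VM-enter now runs with whatever the interrupt left in the FPU. */

Doing the check (and asserting) with irqs disabled closes that window.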