This is a note to let you know that I've just added the patch titled
KVM: nVMX: Fix handling of lmsw instruction
to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=su...
The filename of the patch is:
     kvm-nvmx-fix-handling-of-lmsw-instruction.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree, please let <stable@vger.kernel.org> know about it.
From foo@baz Mon Apr 9 17:09:24 CEST 2018
From: "Jan H. Schönherr" jschoenh@amazon.de Date: Sat, 20 May 2017 13:22:56 +0200 Subject: KVM: nVMX: Fix handling of lmsw instruction
From: "Jan H. Schönherr" jschoenh@amazon.de
[ Upstream commit e1d39b17e044e8ae819827810d87d809ba5f58c0 ]
The decision whether or not to exit from L2 to L1 on an lmsw instruction is based on bogus values: instead of using the information encoded within the exit qualification, it uses the data also used for the mov-to-cr instruction, which boils down to using whatever is in %eax at that point.
Use the correct values instead.
Without this fix, an L1 may not get notified when a 32-bit Linux L2 switches its secondary CPUs to protected mode; the L1 is only notified on the next modification of CR0. This short time window poses a problem, when there is some other reason to exit to L1 in between. Then, L2 will be resumed in real mode and chaos ensues.
Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de>
Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/kvm/vmx.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7924,11 +7924,13 @@ static bool nested_vmx_exit_handled_cr(s
 {
 	unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
 	int cr = exit_qualification & 15;
-	int reg = (exit_qualification >> 8) & 15;
-	unsigned long val = kvm_register_readl(vcpu, reg);
+	int reg;
+	unsigned long val;
 
 	switch ((exit_qualification >> 4) & 3) {
 	case 0: /* mov to cr */
+		reg = (exit_qualification >> 8) & 15;
+		val = kvm_register_readl(vcpu, reg);
 		switch (cr) {
 		case 0:
 			if (vmcs12->cr0_guest_host_mask &
@@ -7983,6 +7985,7 @@ static bool nested_vmx_exit_handled_cr(s
 		 * lmsw can change bits 1..3 of cr0, and only set bit 0 of
 		 * cr0. Other attempted changes are ignored, with no exit.
 		 */
+		val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
 		if (vmcs12->cr0_guest_host_mask & 0xe &
 		    (val ^ vmcs12->cr0_read_shadow))
 			return true;
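For context, the fix above hinges on where the lmsw source operand lives in the CR-access VM-exit qualification: bits 3:0 give the control register, bits 5:4 the access type, bits 11:8 the GPR index for mov accesses, and bits 31:16 the lmsw source data (which is what LMSW_SOURCE_DATA_SHIFT points at). The standalone sketch below is not part of the patch; decode_cr_exit_qualification() is a hypothetical helper that only illustrates that layout, assuming the field positions documented in the Intel SDM.

#include <stdio.h>

#define LMSW_SOURCE_DATA_SHIFT 16

/* Hypothetical helper, not from the kernel: decode a CR-access exit qualification. */
static void decode_cr_exit_qualification(unsigned long exit_qualification)
{
	int cr = exit_qualification & 15;                 /* bits 3:0: CR number */
	int access_type = (exit_qualification >> 4) & 3;  /* bits 5:4: access type */

	switch (access_type) {
	case 0: /* mov to cr: the source GPR index sits in bits 11:8 */
		printf("mov to cr%d from GPR %lu\n",
		       cr, (exit_qualification >> 8) & 15);
		break;
	case 3: /* lmsw: the source operand sits in bits 31:16, not in a GPR field */
		printf("lmsw, source bits 0..3 = 0x%lx (only these can reach cr0)\n",
		       (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f);
		break;
	default:
		printf("cr%d access type %d (mov from cr / clts)\n", cr, access_type);
		break;
	}
}

int main(void)
{
	/* Example: an lmsw exit whose source operand sets PE, MP, EM and TS. */
	decode_cr_exit_qualification((0xfUL << LMSW_SOURCE_DATA_SHIFT) | (3UL << 4));
	return 0;
}

This also shows why the old code misbehaved: reading the mov-to-cr register field (bits 11:8) for an lmsw exit compares against whatever happens to be in that GPR, rather than against the low four bits of the lmsw operand.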
Patches currently in stable-queue which might be from jschoenh@amazon.de are
queue-4.9/kvm-nvmx-fix-handling-of-lmsw-instruction.patch