On Fri, Nov 01, 2024 at 06:26:09PM +0800, Xiongfeng Wang wrote:
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
commit e4d2102018542e3ae5e297bc6e229303abff8a0f upstream.
Robert Gill reported the #GP below, hit in 32-bit mode while the dosemu software was executing the vm86() system call:
  general protection fault: 0000 [#1] PREEMPT SMP
  CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
  Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
  EIP: restore_all_switch_stack+0xbe/0xcf
  EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
  ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
  DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
  CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
  Call Trace:
   show_regs+0x70/0x78
   die_addr+0x29/0x70
   exc_general_protection+0x13c/0x348
   exc_bounds+0x98/0x98
   handle_exception+0x14d/0x14d
   exc_bounds+0x98/0x98
   restore_all_switch_stack+0xbe/0xcf
   exc_bounds+0x98/0x98
   restore_all_switch_stack+0xbe/0xcf
This only happens in 32-bit mode when VERW-based mitigations like MDS/RFDS are enabled. This is because segment registers holding an arbitrary user value can result in a #GP when executing VERW. Intel SDM vol. 2C documents the following behavior for the VERW instruction:
#GP(0) - If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
The CLEAR_CPU_BUFFERS macro executes the VERW instruction before returning to user space. Use the %cs selector to reference the VERW operand; this ensures VERW will not #GP for an arbitrary user %ds.
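For readability, here is a rough sketch of how the macro ends up looking with the %cs override applied (pieced together from the hunk below; indentation and the surrounding context lines are approximate, not a verbatim copy of the header):

  .macro CLEAR_CPU_BUFFERS
  	ALTERNATIVE "jmp .Lskip_verw_\@", "", X86_FEATURE_CLEAR_CPU_BUF
  #ifdef CONFIG_X86_64
  	/* 64-bit mode does not enforce segment limits, so the RIP-relative form is safe */
  	verw	mds_verw_sel(%rip)
  #else
  	/* 32-bit mode: a %cs reference avoids a #GP on an arbitrary user %ds or a non-flat %ss */
  	verw	%cs:mds_verw_sel
  #endif
  .Lskip_verw_\@:
  .endm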
[ mingo: Fixed the SOB chain. ]
Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
Reported-by: Robert Gill <rtgill82@gmail.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: stable@vger.kernel.org # 5.10+
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.in...
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Suggested-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[xiongfeng: fix conflicts caused by the runtime patch jmp]
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
 arch/x86/include/asm/nospec-branch.h | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 87e1ff064025..7978d5fe1ce6 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -199,7 +199,16 @@
  */
 .macro CLEAR_CPU_BUFFERS
 	ALTERNATIVE "jmp .Lskip_verw_\@", "", X86_FEATURE_CLEAR_CPU_BUF
-	verw _ASM_RIP(mds_verw_sel)
+#ifdef CONFIG_X86_64
+	verw mds_verw_sel(%rip)
+#else
+	/*
+	 * In 32bit mode, the memory operand must be a %cs reference. The data
+	 * segments may not be usable (vm86 mode), and the stack segment may not
+	 * be flat (ESPFIX32).
+	 */
+	verw %cs:mds_verw_sel
+#endif
 .Lskip_verw_\@:
 .endm
I sent these backports some time back; it seems they were not picked up:
5.10: https://lore.kernel.org/stable/f9c84ff992511890556cd52c19c2875b440b98c6.1729...
5.15: https://lore.kernel.org/stable/d2fca828795e4980e0708a179bd60b2a89bc8089.1729...
6.1: https://lore.kernel.org/stable/7aad4bddc4cf131ee88657da20960c4a714aa756.1729...