From: Joerg Roedel <jroedel@suse.de>
When emulating guest instructions for MMIO or IOIO accesses, the #VC handler might get a page fault and will then not be able to complete the emulation. In this case, forward the page fault to the correct handler instead of killing the machine.
Fixes: 0786138c78e7 ("x86/sev-es: Add a Runtime #VC Exception Handler")
Cc: stable@vger.kernel.org # v5.10+
Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/sev.c | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index c49270c7669e..6530a844eb61 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -1265,6 +1265,10 @@ static __always_inline void vc_forward_exception(struct es_em_ctxt *ctxt)
 	case X86_TRAP_UD:
 		exc_invalid_op(ctxt->regs);
 		break;
+	case X86_TRAP_PF:
+		write_cr2(ctxt->fi.cr2);
+		exc_page_fault(ctxt->regs, error_code);
+		break;
 	case X86_TRAP_AC:
 		exc_alignment_check(ctxt->regs, error_code);
 		break;
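For context, the hunk above lands in vc_forward_exception(), which re-dispatches the fault information saved in the emulation context (ctxt->fi) to the regular exception handlers. A rough sketch of the function with the new case applied follows; it is reconstructed from the hunk context above, and the parts outside the hunk are assumptions rather than a verbatim copy of sev.c:

static __always_inline void vc_forward_exception(struct es_em_ctxt *ctxt)
{
	long error_code = ctxt->fi.error_code;
	int trapnr = ctxt->fi.vector;

	ctxt->regs->orig_ax = ctxt->fi.error_code;

	switch (trapnr) {
	case X86_TRAP_GP:
		exc_general_protection(ctxt->regs, error_code);
		break;
	case X86_TRAP_UD:
		exc_invalid_op(ctxt->regs);
		break;
	case X86_TRAP_PF:
		/*
		 * New case: exc_page_fault() reads the faulting address
		 * from CR2, so restore it from the saved fault info before
		 * forwarding.
		 */
		write_cr2(ctxt->fi.cr2);
		exc_page_fault(ctxt->regs, error_code);
		break;
	case X86_TRAP_AC:
		exc_alignment_check(ctxt->regs, error_code);
		break;
	default:
		pr_emerg("Unsupported exception in #VC instruction emulation - can't continue\n");
		BUG();
	}
}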
On Wed, May 12, 2021, Joerg Roedel wrote:
From: Joerg Roedel <jroedel@suse.de>
When emulating guest instructions for MMIO or IOIO accesses, the #VC handler might get a page fault and will then not be able to complete the emulation. In this case, forward the page fault to the correct handler instead of killing the machine.
Fixes: 0786138c78e7 ("x86/sev-es: Add a Runtime #VC Exception Handler")
Cc: stable@vger.kernel.org # v5.10+
Signed-off-by: Joerg Roedel <jroedel@suse.de>

 arch/x86/kernel/sev.c | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index c49270c7669e..6530a844eb61 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -1265,6 +1265,10 @@ static __always_inline void vc_forward_exception(struct es_em_ctxt *ctxt)
 	case X86_TRAP_UD:
 		exc_invalid_op(ctxt->regs);
 		break;
+	case X86_TRAP_PF:
+		write_cr2(ctxt->fi.cr2);
+		exc_page_fault(ctxt->regs, error_code);
+		break;
This got me looking at the flows that "inject" #PF, and I'm pretty sure there are bugs in __vc_decode_user_insn() + insn_get_effective_ip().
Problem #1: __vc_decode_user_insn() assumes a #PF if insn_fetch_from_user_inatomic() fails, but the majority of failure cases in insn_get_seg_base() are #GPs, not #PFs.
	res = insn_fetch_from_user_inatomic(ctxt->regs, buffer);
	if (!res) {
		ctxt->fi.vector     = X86_TRAP_PF;
		ctxt->fi.error_code = X86_PF_INSTR | X86_PF_USER;
		ctxt->fi.cr2        = ctxt->regs->ip;
		return ES_EXCEPTION;
	}
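One hypothetical way to plumb that through (a sketch only; the return-value convention for the fetch helper below is an assumption for illustration, not the code that was merged) is to let the fetch path report the two failure modes separately, so the caller can inject the right exception:

	res = insn_fetch_from_user_inatomic(ctxt->regs, buffer);
	if (res == -EINVAL) {
		/*
		 * Assumed convention: -EINVAL means the effective IP (or
		 * the CS segment base) could not be determined - inject a
		 * #GP(0) rather than a #PF.
		 */
		ctxt->fi.vector     = X86_TRAP_GP;
		ctxt->fi.error_code = 0;
		return ES_EXCEPTION;
	} else if (!res) {
		/* The instruction page really was not mapped - keep #PF. */
		ctxt->fi.vector     = X86_TRAP_PF;
		ctxt->fi.error_code = X86_PF_INSTR | X86_PF_USER;
		ctxt->fi.cr2        = ctxt->regs->ip;
		return ES_EXCEPTION;
	}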
Problem #2: Using '0' as the error value means a legitimate effective IP of '0' will be misinterpreted as a failure. Practically speaking, I highly doubt anyone will ever actually run code at address 0, but it's technically possible. The most robust approach would be to pass a pointer to @ip and return an actual error code. Using a non-canonical magic value might also work, but that could run afoul of future shenanigans like LAM.
	ip = insn_get_effective_ip(regs);
	if (!ip)
		return 0;
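A minimal sketch of the pointer-based variant (the reworked signature is a hypothetical illustration; it assumes insn_get_seg_base() returns -1L on failure, as it does today):

static int insn_get_effective_ip(struct pt_regs *regs, unsigned long *ip)
{
	unsigned long seg_base = 0;

	/*
	 * If not in user-space long mode, a custom code segment could be in
	 * use and the IP must be offset by the CS segment base.
	 */
	if (!user_64bit_mode(regs)) {
		seg_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
		if (seg_base == -1L)
			return -EINVAL;
	}

	*ip = seg_base + instruction_pointer(regs);

	return 0;
}

Callers would then test the return value instead of the IP itself, so an effective IP of 0 no longer looks like an error.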
Hi Sean,
On Wed, May 12, 2021 at 05:31:03PM +0000, Sean Christopherson wrote:
This got me looking at the flows that "inject" #PF, and I'm pretty sure there are bugs in __vc_decode_user_insn() + insn_get_effective_ip().
Problem #1: __vc_decode_user_insn() assumes a #PF if insn_fetch_from_user_inatomic() fails, but the majority of failure cases in insn_get_seg_base() are #GPs, not #PFs.
	res = insn_fetch_from_user_inatomic(ctxt->regs, buffer);
	if (!res) {
		ctxt->fi.vector     = X86_TRAP_PF;
		ctxt->fi.error_code = X86_PF_INSTR | X86_PF_USER;
		ctxt->fi.cr2        = ctxt->regs->ip;
		return ES_EXCEPTION;
	}
Problem #2: Using '0' as the error value means a legitimate effective IP of '0' will be misinterpreted as a failure. Practically speaking, I highly doubt anyone will ever actually run code at address 0, but it's technically possible. The most robust approach would be to pass a pointer to @ip and return an actual error code. Using a non-canonical magic value might also work, but that could run afoul of future shenanigans like LAM.
	ip = insn_get_effective_ip(regs);
	if (!ip)
		return 0;
Your observations are all correct. I have added changes on top of this patch-set to fix these problems.
Regards,
Joerg