On 2021/5/27 1:27, Sean Christopherson wrote:
> On Wed, May 26, 2021, Pu Wen wrote:
>> The first two bits of CPUID leaf 0x8000001F EAX indicate whether SEV or
>> SME is supported, respectively. It's better to check whether SEV or SME
>> is supported before checking the SEV MSR (0xc0010131) to see whether SEV
>> or SME is enabled.
>>
>> This also avoids the MSR read failure on the first-generation Hygon
>> Dhyana CPU, which does not support SEV or SME.
>>
>> Fixes: eab696d8e8b9 ("x86/sev: Do not require Hypervisor CPUID bit for SEV guests")
>> Cc: stable@vger.kernel.org # v5.10+
>> Signed-off-by: Pu Wen <puwen@hygon.cn>
>> ---
>>  arch/x86/mm/mem_encrypt_identity.c | 11 ++++++-----
>>  1 file changed, 6 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
>> index a9639f663d25..470b20208430 100644
>> --- a/arch/x86/mm/mem_encrypt_identity.c
>> +++ b/arch/x86/mm/mem_encrypt_identity.c
>> @@ -504,10 +504,6 @@ void __init sme_enable(struct boot_params *bp)
>>  #define AMD_SME_BIT	BIT(0)
>>  #define AMD_SEV_BIT	BIT(1)
>>
>> -	/* Check the SEV MSR whether SEV or SME is enabled */
>> -	sev_status   = __rdmsr(MSR_AMD64_SEV);
>> -	feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
>> -
>>  	/*
>>  	 * Check for the SME/SEV feature:
>>  	 *   CPUID Fn8000_001F[EAX]
>> @@ -519,11 +515,16 @@ void __init sme_enable(struct boot_params *bp)
>>  	eax = 0x8000001f;
>>  	ecx = 0;
>>  	native_cpuid(&eax, &ebx, &ecx, &edx);
>> -	if (!(eax & feature_mask))
>> +	/* Check whether SEV or SME is supported */
>> +	if (!(eax & (AMD_SEV_BIT | AMD_SME_BIT)))
> Hmm, checking CPUID at all before MSR_AMD64_SEV is flawed for SEV: the VMM
> doesn't need to pass through CPUID to attack the guest, it can lie directly.
>
> SEV-ES is protected by virtue of CPUID interception being reflected as #VC,
> which effectively tells the guest that it's (probably) an SEV-ES guest and
> also gives the guest the opportunity to sanity check the emulated CPUID
> values provided by the VMM.
>
> In other words, this patch is flawed, but commit eab696d8e8b9 was also
> flawed by conditioning the SEV path on CPUID.0x80000000.
Yes, so I think we'd better admit that the VMM is still trusted for SEV guests, as you mentioned below.
> Given that #VC can be handled cleanly, the kernel should be able to handle a
> #GP at this point. So I think the proper fix is to change __rdmsr() to
> native_read_msr_safe(), or an open coded variant if necessary, and drop the
> CPUID
Reading MSR_AMD64_SEV, which is not implemented on the Hygon Dhyana CPU, will cause the kernel to reboot, and native_read_msr_safe() does not help.
> checks for SEV.
> The other alternative is to admit that the VMM is still trusted for SEV guests
Agree with that.