6.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Seongman Lee <augustus92@kaist.ac.kr>
[ Upstream commit f7387eff4bad33d12719c66c43541c095556ae4e ]
The GHCB_MSR_VMPL_REQ_LEVEL macro lacked parentheses around the bitmask expression, causing the shift operation to bind too early: because '<<' has higher precedence than '&' in C, the GENMASK_ULL(7, 0) mask was shifted to bits 39:32 before being ANDed with the value, zeroing the result for any VMPL level. As a result, when requesting VMPL1 (e.g., GHCB_MSR_VMPL_REQ_LEVEL(1)), incorrect values such as 0x000000016 were generated instead of the intended 0x100000016 (the requested VMPL level is specified in GHCBData[39:32]).
Fix the precedence issue by grouping the masked value before applying the shift.
[ bp: Massage commit message. ]
Fixes: 34ff65901735 ("x86/sev: Use kernel provided SVSM Calling Areas")
Signed-off-by: Seongman Lee <augustus92@kaist.ac.kr>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250511092329.12680-1-cloudlee1719@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/include/asm/sev-common.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index dcbccdb280f9e..1385227125ab4 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -116,7 +116,7 @@ enum psc_op {
 #define GHCB_MSR_VMPL_REQ		0x016
 #define GHCB_MSR_VMPL_REQ_LEVEL(v)			\
 	/* GHCBData[39:32] */				\
-	(((u64)(v) & GENMASK_ULL(7, 0) << 32) |	\
+	((((u64)(v) & GENMASK_ULL(7, 0)) << 32) |	\
 	/* GHCBDdata[11:0] */				\
 	GHCB_MSR_VMPL_REQ)