From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
[ Upstream commit 9f85fdb9fc5a1bd308a10a0a7d7e34f2712ba58b ]
The purpose of the warning is to prevent an unexpected change to the return thunk mitigation. However, there are legitimate cases where the return thunk is intentionally set more than once. For example, ITS and SRSO both can set the return thunk after retbleed has set it. In both cases, retbleed is still mitigated.
Replace the warning with an info message about the active return thunk.
Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250611-eibrs-fix-v4-3-5ff86cac6c61@linux.intel.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM-generated explanations, may be completely bogus:
**YES**
This commit should be backported to stable kernel trees. Here's my extensive analysis:
## Key Reasons for Backporting
### 1. **Fixes a Real User-Visible Issue**

The code change shows that the original warning was triggering in legitimate scenarios:

```c
-	if (x86_return_thunk != __x86_return_thunk)
-		pr_warn("x86/bugs: return thunk changed\n");
```

This warning would appear in the kernel log whenever multiple CPU vulnerability mitigations interacted, even though such interaction is normal and expected. The false-positive warnings could:

- Cause unnecessary alarm for system administrators
- Clutter kernel logs with misleading information
- Potentially mask real issues by creating noise

The sketch below models the sequence that tripped the warning.
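To make the false positive concrete, here is a minimal userspace sketch of the old logic under the legitimate retbleed-then-ITS sequence. The harness is hypothetical: the real code lives in `arch/x86/kernel/cpu/bugs.c` and the real thunks are assembly stubs; only the warning condition and the symbol names mirror the kernel.

```c
#include <stdio.h>

/* Hypothetical stand-ins for the kernel's return-thunk entry points. */
static void __x86_return_thunk(void)    { }
static void retbleed_return_thunk(void) { }
static void its_return_thunk(void)      { }

static void (*x86_return_thunk)(void) = __x86_return_thunk;

/* Models the OLD set_return_thunk(): warns on any change after the first. */
static void set_return_thunk_old(void (*thunk)(void))
{
	if (x86_return_thunk != __x86_return_thunk)
		printf("x86/bugs: return thunk changed\n");

	x86_return_thunk = thunk;
}

int main(void)
{
	/* Legitimate sequence: retbleed selects a thunk, ITS later overrides it. */
	set_return_thunk_old(retbleed_return_thunk);	/* silent */
	set_return_thunk_old(its_return_thunk);		/* warns, despite being expected */
	return 0;
}
```

With the patched version, both calls would instead log which thunk is now active, so the overriding sequence stays visible without looking like an error.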
### 2. **Small, Contained Change**

The commit is minimal and focused:

- Only three functional lines change (the rest of the diffstat's churn is blank lines)
- Changes are confined to a single function (`set_return_thunk()`)
- Simply replaces a conditional warning with an unconditional info message:

```c
+	pr_info("active return thunk: %ps\n", thunk);
```
### 3. **Minimal Risk of Regression**

The change is safe because:

- It doesn't modify any security logic or mitigation behavior
- The function still sets the return thunk exactly as before
- The new info message provides better visibility into which mitigation is active
- The `%ps` format specifier safely prints the function's symbol name (see the fragment below)
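For context, a short kernel-context fragment (not a standalone program) contrasting `%ps` with the related printk pointer extensions; the printed symbols shown in the comments are illustrative, assuming SRSO selected the thunk:

```c
/* Assumes a function pointer `thunk`, as in set_return_thunk(). */
pr_info("active return thunk: %ps\n", thunk);	/* symbol name only, e.g. srso_return_thunk */
pr_info("active return thunk: %pS\n", thunk);	/* symbol plus offset/size, e.g. srso_return_thunk+0x0/0x10 */
pr_info("active return thunk: %p\n", thunk);	/* hashed address, would not identify the thunk */
```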
### 4. **Addresses Security Mitigation Interactions**

The commit message explicitly states that "ITS and SRSO both can set the return thunk after retbleed has set it" and that "retbleed is still mitigated." This is important for stable kernels because:

- Multiple CVE mitigations (Retbleed, SRSO, ITS) need to coexist
- The warning was incorrectly flagging legitimate mitigation sequences
- Stable kernels need these mitigations to work together without false alarms
### 5. **Part of Critical Security Infrastructure**

The x86/bugs subsystem handles CPU vulnerability mitigations for:

- CVE-2023-20569 (SRSO)
- CVE-2024-28956 (ITS)
- Retbleed vulnerabilities

These are critical security features that need to function correctly in stable kernels.
### 6. **Improves Operational Visibility**

The new info message (`pr_info("active return thunk: %ps\n", thunk);`) improves debugging and system monitoring by:

- Showing which specific return thunk is currently active
- Providing a better audit trail for security mitigation changes
- Using proper function symbol printing with `%ps` (an illustrative log line follows)
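As a hypothetical illustration (ignoring timestamps and any printk prefix, and assuming SRSO ended up selecting the thunk), the boot log would then carry a line like:

```
active return thunk: srso_return_thunk
```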
## Conclusion

This commit meets all the criteria for stable backporting:

- Fixes a real issue (false-positive warnings)
- Minimal risk (a three-line functional change with no mitigation logic modified)
- Improves system behavior without introducing new features
- Addresses interaction between critical security mitigations
- Makes the kernel more reliable and less confusing for operators
The change is particularly important for stable kernels that need to support multiple CPU vulnerability mitigations simultaneously without generating misleading warnings.
 arch/x86/kernel/cpu/bugs.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f4d3abb12317..ea7f94438254 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -115,10 +115,9 @@ void (*x86_return_thunk)(void) __ro_after_init = __x86_return_thunk;
 
 static void __init set_return_thunk(void *thunk)
 {
-	if (x86_return_thunk != __x86_return_thunk)
-		pr_warn("x86/bugs: return thunk changed\n");
-
 	x86_return_thunk = thunk;
+
+	pr_info("active return thunk: %ps\n", thunk);
 }
 
 /* Update SPEC_CTRL MSR and its cached copy unconditionally */