A warning is emitted in set_return_thunk() when the return thunk is overwritten, since this is likely a bug and will result in mitigations not functioning as expected and incorrect mitigation information being displayed in sysfs.
There is one special case: overwriting retbleed_return_thunk with srso_return_thunk is safe, since srso_return_thunk provides a superset of the functionality of retbleed_return_thunk and this combination is handled correctly in entry_untrain_ret(). Avoid emitting the warning in this scenario to make clear that it is not an issue.
This situation occurs on certain AMD processors (e.g. Zen2) which are affected by both retbleed and srso.
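For illustration only (not part of the patch), below is a minimal userspace sketch of that call sequence and of the warning condition described above. The thunk functions are local stand-in stubs, set_return_thunk() is simplified to take a plain function pointer rather than the kernel's void *, and other_return_thunk is a hypothetical name used only to show an unrelated overwrite.

#include <stdio.h>

/* Local stand-in stubs; in the kernel these are real thunk entry points. */
static void __x86_return_thunk(void) { }
static void retbleed_return_thunk(void) { }
static void srso_return_thunk(void) { }
static void other_return_thunk(void) { }	/* hypothetical, for contrast */

static void (*x86_return_thunk)(void) = __x86_return_thunk;

/* Models the check added by this patch (signature simplified, see above). */
static void set_return_thunk(void (*thunk)(void))
{
	if ((x86_return_thunk != __x86_return_thunk) &&
	    (thunk != srso_return_thunk ||
	     x86_return_thunk != retbleed_return_thunk))
		printf("x86/bugs: return thunk changed\n");

	x86_return_thunk = thunk;
}

int main(void)
{
	/* Zen2: the retbleed mitigation installs its thunk first ... */
	set_return_thunk(retbleed_return_thunk);

	/* ... then SRSO overwrites it. Superset case: no warning. */
	set_return_thunk(srso_return_thunk);

	/* Any other overwrite still warns. */
	set_return_thunk(other_return_thunk);

	return 0;
}

Only the last call prints the warning; the retbleed_return_thunk to srso_return_thunk overwrite stays silent, which is the intended behaviour.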
Fixes: f4818881c47fd ("x86/its: Enable Indirect Target Selection mitigation")
Cc: stable@vger.kernel.org # 5.15.x-
Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>
---
 arch/x86/kernel/cpu/bugs.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 8596ce85026c..b7797636140f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -69,7 +69,16 @@ void (*x86_return_thunk)(void) __ro_after_init = __x86_return_thunk;
 static void __init set_return_thunk(void *thunk)
 {
-	if (x86_return_thunk != __x86_return_thunk)
+	/*
+	 * There can only be one return thunk enabled at a time, so issue a
+	 * warning when overwriting it. retbleed_return_thunk is a special case
+	 * which is safe to be overwritten with srso_return_thunk since it
+	 * provides a superset of the functionality and is handled correctly in
+	 * entry_untrain_ret().
+	 */
+	if ((x86_return_thunk != __x86_return_thunk) &&
+	    (thunk != srso_return_thunk ||
+	     x86_return_thunk != retbleed_return_thunk))
 		pr_warn("x86/bugs: return thunk changed\n");
 
 	x86_return_thunk = thunk;
 }