From: Len Brown <len.brown@intel.com>
[ Upstream commit c97c057d357c4b39b153e9e430bbf8976e05bd4e ]
On enabling HWP, preserve the reserved bits in MSR_PM_ENABLE.
Also, skip writing the MSR_PM_ENABLE if HWP is already enabled.
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM-generated explanation; may be completely bogus:
YES
**What It Fixes**
- Preserves IA32_PM_ENABLE reserved bits instead of zeroing them (see the sketch after this list):
  - Old behavior wrote a literal `1` unconditionally, clearing all other bits: `put_msr(cpu, MSR_PM_ENABLE, 1)` (tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c:1169 in the pre-change context shown in the diff).
  - New behavior reads the MSR, ORs in the enable bit, and writes back: `get_msr(...) -> new_msr = old_msr | 1 -> put_msr(...)` (tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c:1176, 1182). This avoids clobbering reserved bits that firmware/hardware may set.
- Skips redundant writes when HWP is already enabled:
  - Adds an early return if `old_msr & 1` is set (tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c:1179). This prevents unnecessary MSR writes and reduces potential races.
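To make the pattern concrete, here is a minimal sketch in the tool's C style. `MSR_PM_ENABLE`, `get_msr()`, and `put_msr()` are the tool's own names, but the exact helper signatures and the `HWP_ENABLE_BIT` macro below are assumptions for illustration, not the tool's literal code.

```c
/* Hypothetical sketch of the read-modify-write pattern described above. */
#define MSR_PM_ENABLE	0x770		/* IA32_PM_ENABLE */
#define HWP_ENABLE_BIT	(1ULL << 0)	/* bit 0: HWP enable; bits 63:1 reserved */

/* assumed signatures, mirroring the tool's helpers */
extern int get_msr(int cpu, int offset, unsigned long long *msr);
extern int put_msr(int cpu, int offset, unsigned long long new_msr);

static int enable_hwp_rmw(int cpu)
{
	unsigned long long old_msr, new_msr;

	get_msr(cpu, MSR_PM_ENABLE, &old_msr);	/* read current value */

	if (old_msr & HWP_ENABLE_BIT)
		return 0;			/* already enabled: skip the write */

	new_msr = old_msr | HWP_ENABLE_BIT;	/* preserve reserved bits 63:1 */
	put_msr(cpu, MSR_PM_ENABLE, new_msr);

	return 0;
}
```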
**Why It Matters**
- IA32_PM_ENABLE (MSR 0x770) has bit 0 for HWP enable, with the other bits reserved. Writing a raw `1` previously cleared those bits, which risks undoing firmware-initialized or future-defined bits. Preserving them (`old_msr | 1`) is the correct, robust pattern.
- Reducing writes when already enabled avoids touching MSRs unnecessarily, which is generally safer and can avoid subtle interactions with concurrent management agents or firmware. (A sketch of how these userspace MSR accesses reach the hardware follows below.)
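For background, a hedged sketch of how such userspace MSR reads and writes typically reach the hardware: through the msr driver's per-CPU device nodes (`/dev/cpu/N/msr`), where the file offset selects the MSR address. The tool's actual `get_msr()`/`put_msr()` may differ in details such as error reporting; this version is an assumption about the general mechanism, and it requires root plus the `msr` kernel module.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Assumed implementation sketch: each MSR is a 64-bit value read or
 * written at file offset == MSR address on /dev/cpu/N/msr. */
static int get_msr(int cpu, int offset, unsigned long long *msr)
{
	char path[64];
	int fd, ret = -1;

	snprintf(path, sizeof(path), "/dev/cpu/%d/msr", cpu);
	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	if (pread(fd, msr, sizeof(*msr), offset) == sizeof(*msr))
		ret = 0;
	close(fd);
	return ret;
}

static int put_msr(int cpu, int offset, unsigned long long new_msr)
{
	char path[64];
	int fd, ret = -1;

	snprintf(path, sizeof(path), "/dev/cpu/%d/msr", cpu);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;
	if (pwrite(fd, &new_msr, sizeof(new_msr), offset) == sizeof(new_msr))
		ret = 0;
	close(fd);
	return ret;
}
```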
**Scope and Risk**
- The change is small and self-contained to `enable_hwp_on_cpu()` in the userspace tool, not a kernel subsystem:
  - A few lines changed; no architectural refactor, no new features.
  - No API/ABI changes; only the verbose logging format changes from decimal to hex (`%llX`) (tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c:1185). This is developer-facing, gated by `verbose`, and more than cosmetic (see the example after this list).
- Aligns with standard MSR handling practice: read-modify-write for registers with reserved bits.
- Regression risk is minimal. If the reserved bits were zero (as they should be on some parts), preserving them keeps them zero; if firmware set them, they won't be inadvertently cleared.
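A small standalone example of why the format change matters beyond style: the old code cast the 64-bit MSR value to `unsigned int` before printing with `%d`, silently dropping bits 63:32. The MSR value below is hypothetical.

```c
#include <stdio.h>

int main(void)
{
	/* hypothetical MSR value with a bit set above bit 31 */
	unsigned long long msr = 0x100000001ULL;

	/* old style: the cast truncates the upper 32 bits */
	printf("old: %d\n", (unsigned int) msr);	/* prints "old: 1" */

	/* new style: the full 64-bit value, in hex */
	printf("new: %llX\n", msr);			/* prints "new: 100000001" */

	return 0;
}
```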
**Backport Considerations**
- Independent of other recent refactoring in this tool. The function and helpers (`get_msr`, `put_msr`) exist across older branches.
- No dependency on kernel-internal changes; applies cleanly to the tool.
- Improves correctness and robustness without adding new behavior.
**Conclusion**
- This is a clear bug fix that prevents reserved-bit clobbering and avoids unnecessary writes. It is small, low risk, and confined to the userspace tool. It fits the stable rules and should be backported.
 .../x86_energy_perf_policy/x86_energy_perf_policy.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c b/tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c
index c883f211dbcc9..0bda8e3ae7f77 100644
--- a/tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c
+++ b/tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c
@@ -1166,13 +1166,18 @@ int update_hwp_request_pkg(int pkg)
 
 int enable_hwp_on_cpu(int cpu)
 {
-	unsigned long long msr;
+	unsigned long long old_msr, new_msr;
+
+	get_msr(cpu, MSR_PM_ENABLE, &old_msr);
+
+	if (old_msr & 1)
+		return 0;	/* already enabled */
 
-	get_msr(cpu, MSR_PM_ENABLE, &msr);
-	put_msr(cpu, MSR_PM_ENABLE, 1);
+	new_msr = old_msr | 1;
+	put_msr(cpu, MSR_PM_ENABLE, new_msr);
 
 	if (verbose)
-		printf("cpu%d: MSR_PM_ENABLE old: %d new: %d\n", cpu, (unsigned int) msr, 1);
+		printf("cpu%d: MSR_PM_ENABLE old: %llX new: %llX\n", cpu, old_msr, new_msr);
 
 	return 0;
 }