Commit 863771a28e27 ("powerpc/32s: Convert switch_mmu_context() to C") moved switch_mmu_context() to C. While a good idea in principle, it means that the function now uses the stack. The stack is not accessible from real mode though.
So to keep calling the function, let's turn on MSR_DR while we call it. That way, all pointer references to the stack are handled virtually.
In addition, make sure to save/restore r12 on the stack, as it may get clobbered by the C function.
Reported-by: Matt Evans <matt@ozlabs.org>
Fixes: 863771a28e27 ("powerpc/32s: Convert switch_mmu_context() to C")
Signed-off-by: Alexander Graf <graf@amazon.com>
Cc: stable@vger.kernel.org # v5.14+
---
v1 -> v2:
- Save and restore R12, so that we don't touch volatile registers while calling into C.
v2 -> v3:
- Save and restore R12 on the stack. SPRGs may be clobbered by page faults.

---
 arch/powerpc/kvm/book3s_32_sr.S | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_32_sr.S b/arch/powerpc/kvm/book3s_32_sr.S
index e3ab9df6cf19..6cfcd20d4668 100644
--- a/arch/powerpc/kvm/book3s_32_sr.S
+++ b/arch/powerpc/kvm/book3s_32_sr.S
@@ -122,11 +122,27 @@
 	/* 0x0 - 0xb */
 
-	/* 'current->mm' needs to be in r4 */
-	tophys(r4, r2)
-	lwz	r4, MM(r4)
-	tophys(r4, r4)
-	/* This only clobbers r0, r3, r4 and r5 */
+	/* switch_mmu_context() needs paging, let's enable it */
+	mfmsr	r9
+	ori	r11, r9, MSR_DR
+	mtmsr	r11
+	sync
+
+	/* switch_mmu_context() clobbers r12, rescue it */
+	SAVE_GPR(12, r1)
+
+	/* Calling switch_mmu_context(<inv>, current->mm, <inv>); */
+	lwz	r4, MM(r2)
 	bl	switch_mmu_context
 
+	/* restore r12 */
+	REST_GPR(12, r1)
+
+	/* Disable paging again */
+	mfmsr	r9
+	li	r6, MSR_DR
+	andc	r9, r9, r6
+	mtmsr	r9
+	sync
+
 .endm
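For readers less familiar with the assembly, the sequence above can be sketched in C. Everything below is an illustrative model, not kernel code: `msr` and `r12` are plain variables standing in for machine state, `switch_mmu_context()` is a stub, and MSR_DR is hard-coded to its 32-bit powerpc value (1 << 4).

```c
#include <assert.h>
#include <stdint.h>

#define MSR_DR 0x10u  /* data relocation bit, 1 << 4 on 32-bit powerpc */

/* Illustrative stand-ins for machine state; not real kernel symbols. */
static uint32_t msr;
static uint32_t r12 = 0xdead;

/* Stand-in for the C callee: per the ABI it may clobber volatile
 * registers such as r12. */
static void switch_mmu_context(void)
{
	r12 = 0;
}

static void call_with_paging(void)
{
	uint32_t saved_r12;

	msr |= MSR_DR;      /* mfmsr/ori/mtmsr + sync: enable translation */
	saved_r12 = r12;    /* SAVE_GPR(12, r1): on the stack, not an SPRG,
			     * so a page fault during the call cannot lose it */
	switch_mmu_context();
	r12 = saved_r12;    /* REST_GPR(12, r1) */
	msr &= ~MSR_DR;     /* mfmsr/andc/mtmsr + sync: back to real mode */
}
```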
On 10/05/2022 at 14:37, Alexander Graf wrote:
> Commit 863771a28e27 ("powerpc/32s: Convert switch_mmu_context() to C") moved switch_mmu_context() to C. While a good idea in principle, it means that the function now uses the stack. The stack is not accessible from real mode though.
> 
> So to keep calling the function, let's turn on MSR_DR while we call it. That way, all pointer references to the stack are handled virtually.
> 
> In addition, make sure to save/restore r12 on the stack, as it may get clobbered by the C function.
> 
> Reported-by: Matt Evans <matt@ozlabs.org>
> Fixes: 863771a28e27 ("powerpc/32s: Convert switch_mmu_context() to C")
> Signed-off-by: Alexander Graf <graf@amazon.com>
> Cc: stable@vger.kernel.org # v5.14+
> ---
> v1 -> v2:
> 
> - Save and restore R12, so that we don't touch volatile registers while calling into C.
> 
> v2 -> v3:
> 
> - Save and restore R12 on the stack. SPRGs may be clobbered by page faults.
> 
> ---
>  arch/powerpc/kvm/book3s_32_sr.S | 26 +++++++++++++++++++++-----
>  1 file changed, 21 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_32_sr.S b/arch/powerpc/kvm/book3s_32_sr.S
> index e3ab9df6cf19..6cfcd20d4668 100644
> --- a/arch/powerpc/kvm/book3s_32_sr.S
> +++ b/arch/powerpc/kvm/book3s_32_sr.S
> @@ -122,11 +122,27 @@
>  	/* 0x0 - 0xb */
>  
> -	/* 'current->mm' needs to be in r4 */
> -	tophys(r4, r2)
> -	lwz	r4, MM(r4)
> -	tophys(r4, r4)
> -	/* This only clobbers r0, r3, r4 and r5 */
> +	/* switch_mmu_context() needs paging, let's enable it */
> +	mfmsr	r9
> +	ori	r11, r9, MSR_DR
> +	mtmsr	r11
> +	sync
> +
> +	/* switch_mmu_context() clobbers r12, rescue it */
> +	SAVE_GPR(12, r1)
> +
> +	/* Calling switch_mmu_context(<inv>, current->mm, <inv>); */
> +	lwz	r4, MM(r2)
>  	bl	switch_mmu_context
>  
> +	/* restore r12 */
> +	REST_GPR(12, r1)
> +
> +	/* Disable paging again */
> +	mfmsr	r9
> +	li	r6, MSR_DR
> +	andc	r9, r9, r6
Instead of li/andc you can do:
rlwinm r9, r9, 0, ~MSR_DR
> +	mtmsr	r9
> +	sync
> +
>  .endm
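Both sequences compute msr & ~MSR_DR; rlwinm just folds the mask into one instruction and avoids using r6 as a scratch register. A quick host-side sketch of the equivalence (with MSR_DR hard-coded to its 32-bit powerpc value as an assumption, and the registers modeled as plain variables):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_DR 0x10u  /* 1 << 4 on 32-bit powerpc */

/* li r6, MSR_DR; andc r9, r9, r6:
 * load the mask, then AND-with-complement. */
static uint32_t clear_dr_andc(uint32_t r9)
{
	uint32_t r6 = MSR_DR;
	return r9 & ~r6;
}

/* rlwinm r9, r9, 0, ~MSR_DR:
 * rotate left by 0 and AND with the inverted mask, in a single
 * instruction and without a scratch register. */
static uint32_t clear_dr_rlwinm(uint32_t r9)
{
	return r9 & ~MSR_DR;
}
```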