From: Yicong Yang <yangyicong@hisilicon.com>
FEAT_LS64* instructions only support accesses to Device/Uncacheable memory; otherwise a data abort for unsupported Exclusive or atomic access (0x35) is generated per spec. It is implementation defined at which exception level the abort is taken, and it may be implemented such that the abort is routed to EL2 on a VHE VM. Per DDI0487K.a Section C3.2.12.2 Single-copy atomic 64-byte load/store:
  The check is performed against the resulting memory type after all
  enabled stages of translation. In this case the fault is reported at
  the final enabled stage of translation.
If it is implemented such that the DABT is generated at the final enabled stage of translation (stage-2 in this case), inject a DABT into the guest to handle it.
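For context, a minimal sketch of the guest-side access that can trigger this fault, written against the LS64 ACLE intrinsics; the function name and the scenario (the pointer resolving to Normal cacheable memory) are illustrative, not part of this patch:

  #include <arm_acle.h>	/* data512_t, __arm_st64b(); needs -march=armv8.7-a+ls64 */

  /*
   * ST64B is only supported for Device/Uncacheable targets. If 'addr'
   * resolves to Normal cacheable memory after all enabled stages of
   * translation, the store takes a DABT with DFSC 0x35, which may be
   * reported at stage-2 and end up in kvm_handle_guest_abort().
   */
  static void ls64_store(void *addr, data512_t val)
  {
  	__arm_st64b(addr, val);	/* single-copy atomic 64-byte store */
  }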
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
---
 arch/arm64/kvm/mmu.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..b7e6f0a27537 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1787,6 +1787,20 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 		return 1;
 	}
 
+	/*
+	 * If an instruction of FEAT_{LS64, LS64_V, LS64_ACCDATA} operates on
+	 * an unsupported memory region, a DABT for unsupported Exclusive or
+	 * atomic access is generated. It is implementation defined whether
+	 * the fault is reported as a stage-1 DABT or at the final enabled
+	 * stage of translation (stage-2 in this case, since we got here).
+	 * Inject a DABT to the guest to handle it if it is reported as a
+	 * stage-2 DABT.
+	 */
+	if (esr_fsc_is_excl_atomic_fault(esr)) {
+		kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
+		return 1;
+	}
+
 	trace_kvm_guest_fault(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu),
 			      kvm_vcpu_get_hfar(vcpu), fault_ipa);
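Not visible in this hunk: esr_fsc_is_excl_atomic_fault() is presumably introduced earlier in this series. A minimal sketch of what such a helper could look like, assuming the DFSC value 0x35 from the commit message and the existing ESR_ELx_FSC field mask in arch/arm64/include/asm/esr.h (the macro name here is an assumption, not confirmed by this patch alone):

  /* Assumed definition, sketched from the DFSC value in the commit message */
  #define ESR_ELx_FSC_EXCL_ATOMIC	(0x35)

  static inline bool esr_fsc_is_excl_atomic_fault(unsigned long esr)
  {
  	return (esr & ESR_ELx_FSC) == ESR_ELx_FSC_EXCL_ATOMIC;
  }

kvm_inject_dabt() and kvm_vcpu_get_hfar() are existing KVM/arm64 helpers, so the guest observes a regular stage-1 data abort at the faulting virtual address and can handle it itself.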