On Fri, Aug 06, 2021 at 12:31:04PM +0100, Will Deacon wrote:
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 75beffe2ee8a..e9c30859f80c 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -27,11 +27,32 @@ typedef struct {
>  } mm_context_t;
>  
>  /*
> - * This macro is only used by the TLBI and low-level switch_mm() code,
> - * neither of which can race with an ASID change. We therefore don't
> - * need to reload the counter using atomic64_read().
> + * We use atomic64_read() here because the ASID for an 'mm_struct' can
> + * be reallocated when scheduling one of its threads following a
> + * rollover event (see new_context() and flush_context()). In this case,
> + * a concurrent TLBI (e.g. via try_to_unmap_one() and ptep_clear_flush())
> + * may use a stale ASID. This is fine in principle as the new ASID is
> + * guaranteed to be clean in the TLB, but the TLBI routines have to take
> + * care to handle the following race:
> + *
> + *    CPU 0                    CPU 1                          CPU 2
> + *
> + *    // ptep_clear_flush(mm)
> + *    xchg_relaxed(pte, 0)
> + *    DSB ISHST
> + *    old = ASID(mm)
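To make the window above concrete, here is a minimal C sketch of the
two racing paths. The struct and helper names are hypothetical
stand-ins for mm_context_t and the page tables, not the kernel code:

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical stand-in for the mm state involved in the race. */
struct mm_sketch {
	_Atomic uint64_t context_id;	/* <generation | ASID:16>, like mm->context.id */
	_Atomic uint64_t pte;		/* a single page-table entry, simplified */
};

/* CPU 0: the ptep_clear_flush()-style path from the diagram. */
static uint64_t clear_pte_and_sample_asid(struct mm_sketch *mm)
{
	/* xchg_relaxed(pte, 0) */
	atomic_exchange_explicit(&mm->pte, 0, memory_order_relaxed);

	/* DSB ISHST: intended to order the pte write before what follows. */
	__asm__ volatile("dsb ishst" ::: "memory");

	/* old = ASID(mm): may observe either the old or the new ASID. */
	return atomic_load_explicit(&mm->context_id, memory_order_relaxed) & 0xffff;
}

/* CPU 1: rollover path publishing a freshly allocated ASID for this mm. */
static void publish_new_asid(struct mm_sketch *mm, uint64_t new_id)
{
	atomic_store_explicit(&mm->context_id, new_id, memory_order_relaxed);
}

Whether that DSB ISHST really does order the pte write before the
subsequent ASID read is exactly the question below.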
We'd need the specs clarified (both the ARM ARM and the cat formal
memory model) that the DSB ISHST is sufficient to order the pte write
before the subsequent ASID read. Otherwise the patch looks fine to me:
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
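FWIW, the ordering question has a store-buffering shape: pte write then
ASID read on one side, ASID write then pte read on the other. A
hypothetical user-space sketch of just that question (the names are
illustrative and this is not code from the patch):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic int pte, asid;
static int r0, r1;

static void *tlbi_side(void *unused)
{
	atomic_store_explicit(&pte, 1, memory_order_relaxed);	/* pte write */
	__asm__ volatile("dsb ishst" ::: "memory");		/* barrier in question */
	r0 = atomic_load_explicit(&asid, memory_order_relaxed);	/* ASID read */
	return NULL;
}

static void *rollover_side(void *unused)
{
	atomic_store_explicit(&asid, 1, memory_order_relaxed);	/* publish new ASID */
	__asm__ volatile("dsb ish" ::: "memory");		/* full barrier on this side */
	r1 = atomic_load_explicit(&pte, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, tlbi_side, NULL);
	pthread_create(&t1, NULL, rollover_side, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	/* Given the full barrier on the other side, r0 == 0 && r1 == 0 is
	 * forbidden iff DSB ISHST orders the earlier store before the
	 * later load. */
	printf("r0=%d r1=%d\n", r0, r1);
	return 0;
}

A single run will rarely hit the window, so in practice this would be
looped, or better, written as a herd7 litmus test and checked against
the AArch64 cat model, which is really what the clarification above is
asking for.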