On 4/10/24 10:18, Mathieu Desnoyers wrote:
> --- a/arch/x86/include/asm/barrier.h
> +++ b/arch/x86/include/asm/barrier.h
> @@ -79,6 +79,9 @@ do {					\
>  #define __smp_mb__before_atomic()	do { } while (0)
>  #define __smp_mb__after_atomic()	do { } while (0)
> 
> +/* Writing to CR3 provides a full memory barrier in switch_mm(). */
> +#define smp_mb__after_switch_mm()	do { } while (0)
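For anyone who hasn't followed the series: an arch no-op like this only makes sense if the generic side supplies a real barrier when the architecture does not override it. Here is a minimal, compilable sketch of that pairing; the #ifndef fallback to smp_mb() is my assumption about the shape of the generic side, not a quote from the patch, and the mfence stand-in is purely for illustration:

	#include <stdio.h>

	/* Stand-in for the kernel's smp_mb(); a real fence just for illustration. */
	#define smp_mb()	__asm__ __volatile__("mfence" ::: "memory")

	/*
	 * Arch override, as in the quoted hunk: on x86 the CR3 write performed
	 * by switch_mm() already acts as a full memory barrier, so the macro
	 * expands to nothing.
	 */
	#define smp_mb__after_switch_mm()	do { } while (0)

	/*
	 * Assumed generic fallback: only used when the architecture did not
	 * provide its own definition above.
	 */
	#ifndef smp_mb__after_switch_mm
	# define smp_mb__after_switch_mm()	smp_mb()
	#endif

	int main(void)
	{
		/* Expands to the empty do-while here; no fence is emitted. */
		smp_mb__after_switch_mm();
		printf("smp_mb__after_switch_mm() expanded to a no-op\n");
		return 0;
	}

The point of the override is exactly that the ordering has already been paid for by the CR3 write, so expanding to nothing costs nothing and loses nothing.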
I haven't gone through this in detail, but the CR3 write certainly is a full barrier and the x86 code _looks_ correct, so:
Acked-by: Dave Hansen <dave.hansen@linux.intel.com> # for x86