On Fri, Jul 26, 2019 at 01:18:06PM +0300, Jari Ruusu wrote:
Greg Kroah-Hartman wrote:
[ Upstream commit 69d927bba39517d0980462efc051875b7f4db185 ]
Recent probing at the Linux Kernel Memory Model uncovered a 'surprise'. Strongly ordered architectures where the atomic RmW primitive implies full memory ordering and smp_mb__{before,after}_atomic() are a simple barrier() (such as x86) fail for:
	*x = 1;
	atomic_inc(u);
	smp_mb__after_atomic();
	r0 = *y;
[snip]
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -54,7 +54,7 @@ static __always_inline void arch_atomic_add(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "addl %1,%0"
 		     : "+m" (v->counter)
-		     : "ir" (i));
+		     : "ir" (i) : "memory");
 }
 
 /**
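For context, here is a rough sketch of the failure mode the quoted commit is fixing. This is my own illustration, not the original litmus test; it assumes kernel headers (<linux/atomic.h>) and the names are hypothetical:

	/*
	 * Illustrative only: the LOCK-prefixed RmW is fully ordered at the
	 * CPU level, but without a "memory" clobber its asm() is not a
	 * compiler barrier -- the compiler only sees it touching u->counter,
	 * so it may swap the plain store "*x = 1" with atomic_inc(u).  After
	 * that swap, the store to *x and the load of *y are separated only
	 * by barrier(), and x86 allows store->load reordering, so the
	 * intended full ordering is lost.
	 */
	void writer(int *x, int *y, atomic_t *u, int *r0)
	{
		*x = 1;			/* plain store */
		atomic_inc(u);		/* LOCK addl; no "memory" clobber before the fix */
		smp_mb__after_atomic();	/* just barrier() on x86 */
		*r0 = *y;		/* plain load */
	}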
Shouldn't those clobber constraints actually be: "memory","cc"? That is because addl, subl (and other) machine instructions actually modify the flags register too.
gcc docs say: The "cc" clobber indicates that the assembler code modifies the flags register.
GCC x86 assumes any asm() will clobber "cc".
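A minimal user-space illustration of that point (my own example, not kernel code): the explicit "memory" clobber is what makes the asm a compiler barrier, while an explicit "cc" would be harmless but redundant, since GCC on x86 already treats the condition codes as clobbered by every asm().

	#include <stdio.h>

	static inline void locked_add(int i, int *counter)
	{
		/* "memory" makes this a compiler barrier; "cc" is implicit on x86 */
		asm volatile("lock addl %1,%0"
			     : "+m" (*counter)
			     : "ir" (i)
			     : "memory");
	}

	int main(void)
	{
		int v = 40;

		locked_add(2, &v);
		printf("%d\n", v);	/* prints 42 */
		return 0;
	}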