On May 5, 2020 10:44:22 AM PDT, Nick Desaulniers <ndesaulniers@google.com> wrote:
From: Sedat Dilek <sedat.dilek@gmail.com>
It turns out that if your config tickles __builtin_constant_p via differences in inlining decisions, this now produces invalid assembly:
$ cat foo.c
long a(long b, long c) {
	asm("orb\t%1, %0" : "+q"(c) : "r"(b));
	return c;
}
$ gcc foo.c
foo.c: Assembler messages:
foo.c:2: Error: `%rax' not allowed with `orb'
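For context, a tiny standalone demo (illustrative, not from the patch) of how inlining decisions change what __builtin_constant_p sees:

#include <stdio.h>

static int maybe_const(long nr)
{
	/* 1 only if the compiler can prove nr is a compile-time
	 * constant, which hinges on whether this call is inlined. */
	return __builtin_constant_p(nr);
}

int main(void)
{
	/* Typically prints 1 at -O2 (call inlined, nr folds to 3)
	 * and 0 at -O0 (no inlining, nr is an ordinary argument). */
	printf("%d\n", maybe_const(3));
	return 0;
}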
The "q" constraint only has meanting on -m32 otherwise is treated as "r".
This is easily reproducible via Clang+CONFIG_STAGING=y+CONFIG_VT6656=m, or Clang+allyesconfig.
Keep the masking operation to appease sparse (`make C=1`), and add back the cast in order to select the proper 8b register alias.
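For reference, CONST_MASK() expands to an int-typed expression, which is why the operand is too wide without the cast. A small userspace demo (illustrative only, not part of the patch):

#include <stdio.h>

typedef unsigned char u8;

/* As defined in arch/x86/include/asm/bitops.h */
#define CONST_MASK(nr) (1 << ((nr) & 7))

int main(void)
{
	/* The macro's result has type int, so without the (u8) cast
	 * the "q" operand is 32 bits wide and the byte-sized orb/andb
	 * rejects the register the compiler picks for it. */
	printf("sizeof(CONST_MASK(3))     = %zu\n",
	       sizeof(CONST_MASK(3)));
	printf("sizeof((u8)CONST_MASK(3)) = %zu\n",
	       sizeof((u8)CONST_MASK(3)));
	return 0;
}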
[Nick: reworded]
Cc: stable@vger.kernel.org
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/961
Link: https://lore.kernel.org/lkml/20200504193524.GA221287@google.com/
Fixes: 1651e700664b4 ("x86: Fix bitops.h warning with a moved cast")
Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
Reported-by: kernelci.org bot <bot@kernelci.org>
Suggested-by: Andy Shevchenko <andriy.shevchenko@intel.com>
Suggested-by: Ilie Halip <ilie.halip@gmail.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
 arch/x86/include/asm/bitops.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index b392571c1f1d..139122e5b25b 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "orb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" (CONST_MASK(nr) & 0xff)
+			: "iq" ((u8)(CONST_MASK(nr) & 0xff))
 			: "memory");
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
@@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "andb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" (CONST_MASK(nr) ^ 0xff)
+			: "iq" ((u8)(CONST_MASK(nr) ^ 0xff))
 			: "memory");
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
Drop & 0xff and change ^ 0xff to ~.
The redundancy is confusing.
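A quick standalone check (illustrative, not part of the thread) that the suggested forms are behavior-preserving for every bit number:

#include <assert.h>
#include <stdio.h>

typedef unsigned char u8;

/* As defined in arch/x86/include/asm/bitops.h */
#define CONST_MASK(nr) (1 << ((nr) & 7))

int main(void)
{
	for (int nr = 0; nr < 64; nr++) {
		/* The (u8) cast already truncates to the low byte, so
		 * & 0xff is redundant and ^ 0xff equals ~ modulo 256. */
		assert((u8)(CONST_MASK(nr) & 0xff) == (u8)CONST_MASK(nr));
		assert((u8)(CONST_MASK(nr) ^ 0xff) == (u8)~CONST_MASK(nr));
	}
	printf("suggested forms match for all nr\n");
	return 0;
}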