3.16.81-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Peter Zijlstra <peterz@infradead.org>
commit f6b4ecee0eb7bfa66ae8d5652105ed4da53209a3 upstream.
There are no users, kill it.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20140508135851.768177189@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[bwh: Backported to 3.16 because this function is broken
 after "x86/atomic: Fix smp_mb__{before,after}_atomic()"]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/include/asm/atomic.h | 15 ---------------
 1 file changed, 15 deletions(-)
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -218,21 +218,6 @@ static inline short int atomic_inc_short
 	return *v;
 }
 
-#ifdef CONFIG_X86_64
-/**
- * atomic_or_long - OR of two long integers
- * @v1: pointer to type unsigned long
- * @v2: pointer to type unsigned long
- *
- * Atomically ORs @v1 and @v2
- * Returns the result of the OR
- */
-static inline void atomic_or_long(unsigned long *v1, unsigned long v2)
-{
-	asm(LOCK_PREFIX "orq %1, %0" : "+m" (*v1) : "r" (v2));
-}
-#endif
-
 /* These are x86-specific, used by some header files */
 #define atomic_clear_mask(mask, addr)				\
 	asm volatile(LOCK_PREFIX "andl %0,%1"			\