locking/rtmutex: Use try_cmpxchg_relaxed() in mark_rt_mutex_waiters()
authorUros Bizjak <ubizjak@gmail.com>
Wed, 24 Jan 2024 10:49:53 +0000 (11:49 +0100)
committerIngo Molnar <mingo@kernel.org>
Fri, 1 Mar 2024 12:02:05 +0000 (13:02 +0100)
Use try_cmpxchg_relaxed() instead of the cmpxchg_relaxed(ptr, old, new) == old pattern.

The x86 CMPXCHG instruction returns success in the ZF flag, so this change
saves a compare after CMPXCHG (and the related MOV instruction in front of CMPXCHG).

Also, try_cmpxchg() implicitly assigns the old value of *ptr to "old" when
CMPXCHG fails, so there is no need to re-read the value in the loop.

Note that the value from *ptr should be read using READ_ONCE() to prevent
the compiler from merging, refetching or reordering the read.

No functional change intended.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/r/20240124104953.612063-1-ubizjak@gmail.com
kernel/locking/rtmutex.c

index 4a10e8c16fd2bd691f53871228e556a59f0f781a..88d08eeb8bc03df508146f30a8120d80654a1759 100644 (file)
@@ -237,12 +237,13 @@ static __always_inline bool rt_mutex_cmpxchg_release(struct rt_mutex_base *lock,
  */
 static __always_inline void mark_rt_mutex_waiters(struct rt_mutex_base *lock)
 {
-       unsigned long owner, *p = (unsigned long *) &lock->owner;
+       unsigned long *p = (unsigned long *) &lock->owner;
+       unsigned long owner, new;
 
+       owner = READ_ONCE(*p);
        do {
-               owner = *p;
-       } while (cmpxchg_relaxed(p, owner,
-                                owner | RT_MUTEX_HAS_WAITERS) != owner);
+               new = owner | RT_MUTEX_HAS_WAITERS;
+       } while (!try_cmpxchg_relaxed(p, &owner, new));
 
        /*
         * The cmpxchg loop above is relaxed to avoid back-to-back ACQUIRE