[Devel] [PATCH RHEL7 COMMIT] ms/x86/spinlock: Replace ACCESS_ONCE with READ_ONCE

Konstantin Khorenko khorenko at virtuozzo.com
Thu Jul 13 18:30:24 MSK 2017


The commit is pushed to "branch-rh7-3.10.0-514.26.1.vz7.33.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-514.26.1.vz7.33.5
------>
commit 78620e902812d0bb19c2868982876b6954097a0a
Author: Christian Borntraeger <borntraeger at de.ibm.com>
Date:   Thu Jul 13 19:30:17 2017 +0400

    ms/x86/spinlock: Replace ACCESS_ONCE with READ_ONCE
    
    Patchset description:
    Fix for "KASan: out of bounds access in do_raw_spin_unlock".
    
    A series of 3 cherry-picked patches fixing
    https://jira.sw.ru/browse/PSBM-68212:
    
    Christian Borntraeger (1):
      x86/spinlock: Replace ACCESS_ONCE with READ_ONCE
    
    Oleg Nesterov (1):
      x86/ticketlock: Fix spin_unlock_wait() livelock
    
    Raghavendra K T (1):
      x86/spinlocks/paravirt: Fix memory corruption on unlock
    
    ========================================================
    This patch description:
    
    ACCESS_ONCE does not work reliably on non-scalar types. For
    example, gcc 4.6 and 4.7 might remove the volatile qualifier
    from such accesses during the SRA (scalar replacement of
    aggregates) step
    (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).
    
    Change the spinlock code to replace ACCESS_ONCE with READ_ONCE.
    
    Signed-off-by: Christian Borntraeger <borntraeger at de.ibm.com>
    Acked-by: Paul E. McKenney <paulmck at linux.vnet.ibm.com>
    
    (cherry picked from commit 4f9d1382e6f80dcfa891b2c02d5a35c53be485f1)
    Signed-off-by: Denis Plotnikov <dplotnikov at virtuozzo.com>
    Acked-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
---
 arch/x86/include/asm/spinlock.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 9b4f7b0..4dc8606 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -92,7 +92,7 @@ static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
 		unsigned count = SPIN_THRESHOLD;
 
 		do {
-			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
+			if (READ_ONCE(lock->tickets.head) == inc.tail)
 				goto out;
 			cpu_relax();
 		} while (--count);
@@ -105,7 +105,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
 	arch_spinlock_t old, new;
 
-	old.tickets = ACCESS_ONCE(lock->tickets);
+	old.tickets = READ_ONCE(lock->tickets);
 	if (old.tickets.head != (old.tickets.tail & ~TICKET_SLOWPATH_FLAG))
 		return 0;
 
@@ -162,14 +162,14 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
+	struct __raw_tickets tmp = READ_ONCE(lock->tickets);
 
 	return tmp.tail != tmp.head;
 }
 
 static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
-	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
+	struct __raw_tickets tmp = READ_ONCE(lock->tickets);
 
 	return (__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC;
 }
