[Devel] [PATCH RHEL7 COMMIT] ms/locks: remove conditional lock release in middle of flock_lock_file

Konstantin Khorenko khorenko at virtuozzo.com
Fri Apr 8 00:13:02 PDT 2016


The commit is pushed to "branch-rh7-3.10.0-327.10.1.vz7.12.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-327.10.1.vz7.12.5
------>
commit 2bdccff182cb59766d4997665a4a35efa2dc0dd0
Author: Jeff Layton <jeff.layton at primarydata.com>
Date:   Fri Apr 8 11:13:02 2016 +0400

    ms/locks: remove conditional lock release in middle of flock_lock_file
    
    ms commit: 267f1128583074b575b90a58de4dcb12dd25af96
    ("locks: remove conditional lock release in middle of flock_lock_file")
    
    As Linus pointed out:
    
        Say we have an existing flock, and now do a new one that conflicts. I
        see what looks like three separate bugs.
    
         - We go through the first loop, find a lock of another type, and
        delete it in preparation for replacing it
    
         - we *drop* the lock context spinlock.
    
         - BUG #1? So now there is no lock at all, and somebody can come in
        and see that unlocked state. Is that really valid?
    
         - another thread comes in while the first thread dropped the lock
        context lock, and wants to add its own lock. It doesn't see the
        deleted or pending locks, so it just adds it
    
         - the first thread gets the context spinlock again, and adds the lock
        that replaced the original
    
         - BUG #2? So now there are *two* locks on the thing, and the next
        time you do an unlock (or when you close the file), it will only
        remove/replace the first one.
    
    ...remove the "drop the spinlock" code in the middle of this function as
    it has always been suspicious. This should eliminate the potential race
    that can leave two locks for the same struct file on the list.
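    
    The interleaving above can be sketched as a minimal user-space C
    simulation (this is illustrative only, not kernel code: the
    flock_entry struct and the add_lock/del_lock helpers are hypothetical
    simplifications of the per-file lock list, and the comments mark where
    the old code dropped i_lock and called cond_resched()):
    
    ```c
    /* Deterministic simulation of the race: thread B's insert runs in
     * the window where thread A has deleted its old lock and dropped
     * the spinlock, before A adds the replacement lock. */
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>
    
    #define MAX_LOCKS 8
    
    /* hypothetical simplified entry on a per-file lock list */
    struct flock_entry {
    	const char *owner;
    	int type;		/* 1 = shared, 2 = exclusive */
    };
    
    static struct flock_entry locks[MAX_LOCKS];
    static int nlocks;
    
    static void add_lock(const char *owner, int type)
    {
    	locks[nlocks].owner = owner;
    	locks[nlocks].type = type;
    	nlocks++;
    }
    
    static void del_lock(const char *owner)
    {
    	for (int i = 0; i < nlocks; i++) {
    		if (strcmp(locks[i].owner, owner) == 0) {
    			memmove(&locks[i], &locks[i + 1],
    				(nlocks - i - 1) * sizeof(locks[0]));
    			nlocks--;
    			return;
    		}
    	}
    }
    
    int main(void)
    {
    	add_lock("A", 1);	/* thread A holds a shared lock */
    
    	/* Thread A upgrades: the old flock_lock_file first deletes
    	 * A's existing lock in preparation for replacing it... */
    	del_lock("A");
    
    	/* ...then dropped i_lock and called cond_resched().  In that
    	 * window thread B scans the list, sees no conflicting lock,
    	 * and adds its own. */
    	add_lock("B", 2);
    
    	/* Thread A reacquires i_lock and adds its replacement. */
    	add_lock("A", 2);
    
    	/* BUG #2: two conflicting locks now coexist on the list. */
    	printf("locks on list: %d\n", nlocks);
    	return 0;
    }
    ```
    
    With the conditional unlock/cond_resched/relock sequence removed, the
    delete and the re-add happen under one spinlock hold, so thread B can
    never observe the intermediate empty state.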
    
    He also pointed out another thing as a bug -- namely that
    flock_lock_file removes the lock from the list unconditionally when
    doing a lock upgrade, without knowing whether it'll be able to set the
    new lock. Bruce pointed out that this is expected behavior and may help
    prevent certain deadlock situations.
    
    We may want to revisit that at some point, but it's probably best that
    we do so in the context of a different patchset.
    
    Reported-by: Linus Torvalds <torvalds at linux-foundation.org>
    Signed-off-by: Jeff Layton <jeff.layton at primarydata.com>
    
    in scope of https://jira.sw.ru/browse/PSBM-45393
    
    Signed-off-by: Cyrill Gorcunov <gorcunov at virtuozzo.com>
---
 fs/locks.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 70eec42..93c097b 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -894,16 +894,6 @@ static int flock_lock_file(struct file *filp, struct file_lock *request)
 		goto out;
 	}
 
-	/*
-	 * If a higher-priority process was blocked on the old file lock,
-	 * give it the opportunity to lock the file.
-	 */
-	if (found) {
-		spin_unlock(&inode->i_lock);
-		cond_resched();
-		spin_lock(&inode->i_lock);
-	}
-
 find_conflict:
 	for_each_lock(inode, before) {
 		struct file_lock *fl = *before;

