[Devel] [PATCH rh7] sched/core: Fix unserialized r-m-w scribbling stuff

Kirill Tkhai ktkhai at virtuozzo.com
Fri Jan 15 07:09:36 PST 2016


Port commit be958bdc96f18bc1356177bbb79d46ea0c037b96 from mainline.

Some of the sched bitfields (notably sched_reset_on_fork) can be set
on tasks other than current; this can cause the read-modify-write to
race with other updates.

Since all the sched bits are serialized by scheduler locks, pull them
into a separate word.

Reported-by: Tejun Heo <tj at kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz at infradead.org>
Cc: Dmitry Vyukov <dvyukov at google.com>
Cc: Linus Torvalds <torvalds at linux-foundation.org>
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: Sasha Levin <sasha.levin at oracle.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: akpm at linux-foundation.org
Cc: hannes at cmpxchg.org
Cc: mhocko at kernel.org

In fact, the four bits below are protected by different locks:

unsigned sched_reset_on_fork:1;        /* rq and pi */
unsigned sched_contributes_to_load:1;  /* pi */
unsigned sched_interruptible_sleep:1;  /* pi */
unsigned woken_while_running:1;        /* rq (this patch makes it so) */

But we may consider all of them protected by the rq lock, because
sched_contributes_to_load and sched_interruptible_sleep are modified
in try_to_wake_up() while the task is not on a runqueue, so nobody can
modify sched_reset_on_fork or woken_while_running at that moment.

Thus we avoid introducing one more bitfield word and save one word of
memory.

Signed-off-by: Kirill Tkhai <ktkhai at virtuozzo.com>
---
 include/linux/sched.h |   21 +++++++++++----------
 kernel/sched/core.c   |    5 ++---
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4bbd391..81dc3d7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1231,21 +1231,22 @@ struct task_struct {
 	/* Used for emulating ABI behavior of previous Linux versions */
 	unsigned int personality;
 
+	/* scheduler bits, serialized by rq lock: */
+	unsigned sched_reset_on_fork:1;
+	/* Two below are really protected by pi_lock, but they are modified in
+	 * the place where nobody else can modify other fields using rq->lock */
+	unsigned sched_contributes_to_load:1;
+	unsigned sched_interruptible_sleep:1;
+	unsigned woken_while_running:1;
+	unsigned :0; /* force alignment to the next boundary */
+
+	/* unserialized, strictly 'current' */
 	unsigned did_exec:1;
 	unsigned in_execve:1;	/* Tell the LSMs that the process is doing an
 				 * execve */
 	unsigned in_iowait:1;
 	unsigned did_ve_enter:1;
-
-	/* task may not gain privileges */
-	unsigned no_new_privs:1;
-
-	/* Revert to default priority/policy when forking */
-	unsigned sched_reset_on_fork:1;
-	unsigned sched_contributes_to_load:1;
-	unsigned sched_interruptible_sleep:1;
-
-	unsigned woken_while_running:1;
+	unsigned no_new_privs:1; /* task may not gain privileges */
 	unsigned may_throttle:1;
 
 	pid_t pid;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c8ac8bd..f61a4d1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1616,6 +1616,7 @@ static int ttwu_remote(struct task_struct *p, int wake_flags)
 	rq = __task_rq_lock(p);
 	if (p->on_rq) {
 		ttwu_do_wakeup(rq, p, wake_flags);
+		p->woken_while_running = 1;
 		ret = 1;
 	}
 	__task_rq_unlock(rq);
@@ -1739,10 +1740,8 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	success = 1; /* we're going to change ->state */
 	cpu = task_cpu(p);
 
-	if (p->on_rq && ttwu_remote(p, wake_flags)) {
-		p->woken_while_running = 1;
+	if (p->on_rq && ttwu_remote(p, wake_flags))
 		goto stat;
-	}
 
 #ifdef CONFIG_SMP
 	/*
