[Devel] [PATCH RHEL7 COMMIT] sched: Make calc_global_load() return true when it's needed to update ve statistics

Konstantin Khorenko khorenko at virtuozzo.com
Thu Jul 19 15:56:56 MSK 2018


The commit is pushed to "branch-rh7-3.10.0-862.6.3.vz7.62.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-862.6.3.vz7.62.4
------>
commit 89edc221c0d64b8131f47c43d83559cc0a7d3043
Author: Kirill Tkhai <ktkhai at virtuozzo.com>
Date:   Thu Jul 19 15:56:56 2018 +0300

    sched: Make calc_global_load() return true when it's needed to update ve statistics
    
    This will be used in the next patches to call calc_load_ve() outside of jiffies_lock.
    
    Signed-off-by: Kirill Tkhai <ktkhai at virtuozzo.com>
    
    =========================
    Patchset description:
    Make calc_load_ve() be executed out of jiffies_lock
    
    https://jira.sw.ru/browse/PSBM-84967
    
    Kirill Tkhai (3):
          sched: Make calc_global_load() return true when it's needed to update ve statistics
          sched: Export calc_load_ve()
          sched: Call calc_load_ve() out of jiffies_lock
---
 include/linux/sched.h       | 2 +-
 kernel/sched/core.c         | 5 +++--
 kernel/time/tick-internal.h | 2 +-
 kernel/time/timekeeping.c   | 4 ++--
 4 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 670eca9f4096..1f7c39dc436e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -185,7 +185,7 @@ extern unsigned long nr_active_cpu(void);
 extern atomic_t nr_dead;
 extern unsigned long nr_zombie;
 
-extern void calc_global_load(unsigned long ticks);
+extern bool calc_global_load(unsigned long ticks);
 extern void update_cpu_load_nohz(void);
 
 extern unsigned long get_parent_ip(unsigned long addr);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 04056f711092..aa5422339e3d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3168,12 +3168,12 @@ static inline void calc_global_nohz(void) { }
  * calc_load - update the avenrun load estimates 10 ticks after the
  * CPUs have updated calc_load_tasks.
  */
-void calc_global_load(unsigned long ticks)
+bool calc_global_load(unsigned long ticks)
 {
 	long active, delta;
 
 	if (time_before(jiffies, calc_load_update + 10))
-		return;
+		return false;
 
 	/*
 	 * Fold the 'old' idle-delta to include all NO_HZ cpus.
@@ -3197,6 +3197,7 @@ void calc_global_load(unsigned long ticks)
 	 * In case we idled for multiple LOAD_FREQ intervals, catch up in bulk.
 	 */
 	calc_global_nohz();
+	return true;
 }
 
 /*
diff --git a/kernel/time/tick-internal.h b/kernel/time/tick-internal.h
index 74ad669ea929..bfa57616ef0b 100644
--- a/kernel/time/tick-internal.h
+++ b/kernel/time/tick-internal.h
@@ -160,7 +160,7 @@ static inline int tick_device_is_functional(struct clock_event_device *dev)
 #endif
 
 int __clockevents_update_freq(struct clock_event_device *dev, u32 freq);
-extern void do_timer(unsigned long ticks);
+extern bool do_timer(unsigned long ticks);
 extern void update_wall_time(void);
 
 extern u64 get_next_timer_interrupt(unsigned long basej, u64 basem);
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index f623b396271b..774651652076 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -2005,10 +2005,10 @@ struct timespec get_monotonic_coarse(void)
 /*
  * Must hold jiffies_lock
  */
-void do_timer(unsigned long ticks)
+bool do_timer(unsigned long ticks)
 {
 	jiffies_64 += ticks;
-	calc_global_load(ticks);
+	return calc_global_load(ticks);
 }
 
 /**
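
For context, here is a minimal sketch of how the later patches of this series are expected to consume the new bool return value, i.e. calling calc_load_ve() only after jiffies_lock has been dropped. The wrapper name and exact call site below are illustrative assumptions (the real change lands in the tick code), and calc_load_ve() is assumed to be exported by patch 2 of the series:

	/*
	 * Illustrative sketch only, not part of this patch: one way a
	 * tick-side caller could use the value now returned by do_timer()
	 * to run calc_load_ve() after jiffies_lock has been released.
	 * The function name is made up for illustration; calc_load_ve()
	 * is assumed to be exported by a later patch of the series.
	 */
	static void tick_update_jiffies_sketch(unsigned long ticks)
	{
		bool update_ve_stats;

		write_seqlock(&jiffies_lock);
		update_ve_stats = do_timer(ticks);	/* propagates calc_global_load()'s result */
		write_sequnlock(&jiffies_lock);

		if (update_ve_stats)
			calc_load_ve();			/* VE load accounting, out of jiffies_lock */
	}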

