[Devel] [PATCH 2/2] sched: load_balance: Reset env when going to redo due to all pinned
Peter Zijlstra
peterz at infradead.org
Sun Sep 15 22:43:01 PDT 2013
On Sun, Sep 15, 2013 at 09:30:14PM +0400, Vladimir Davydov wrote:
> Firstly, reset env.dst_cpu/dst_rq to this_cpu/this_rq, because it could
> have changed in the 'some pinned' case. Otherwise, should_we_balance() can
> stop balancing prematurely.
>
> Secondly, reset env.flags, because it can have LBF_SOME_PINNED set.
>
> Thirdly, reset env.dst_grpmask cpus in env.cpus to allow handling 'some
> pinned' case when pulling tasks from a new busiest cpu.
Did you actually run into any problems because of this?
> Signed-off-by: Vladimir Davydov <vdavydov at parallels.com>
> ---
> kernel/sched/fair.c | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index cd59640..d840e51 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5289,8 +5289,16 @@ more_balance:
> if (unlikely(env.flags & LBF_ALL_PINNED)) {
> cpumask_clear_cpu(cpu_of(busiest), cpus);
> if (!cpumask_empty(cpus)) {
> - env.loop = 0;
> - env.loop_break = sched_nr_migrate_break;
> + env.dst_cpu = this_cpu;
> + env.dst_rq = this_rq;
> + env.flags = 0;
> + env.loop = 0;
> + env.loop_break = sched_nr_migrate_break;
> +
> + /* Reset cpus cleared in LBF_SOME_PINNED case */
> + if (env.dst_grpmask)
> + cpumask_or(cpus, cpus, env.dst_grpmask);
> +
> goto redo;
> }
> goto out_balanced;
So the problem I have with this is that it removes the bound on the
number of iterations we do. Currently we're limited by the bits in cpus,
but by resetting those we can go on and on and on...