[Devel] [PATCH 13/13] mm/vmscan.c: clear shrinker bit if there are no objects related to memcg

Kirill Tkhai ktkhai at virtuozzo.com
Tue Aug 28 14:33:14 MSK 2018


ms commit f90280d6b796

To avoid further unneeded calls of do_shrink_slab() for shrinkers
which no longer have any charged objects in a memcg, their bits have
to be cleared.
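
For context, shrink_slab_memcg() only walks the set bits of the
per-memcg shrinker map introduced earlier in this series, so a stale
bit costs one useless do_shrink_slab() call on every reclaim pass.
Roughly (a condensed sketch of the existing loop; the shrinker lookup
checks and locking are omitted):

  for_each_set_bit(i, map->map, shrinker_nr_max) {
          struct shrink_control sc = {
                  .gfp_mask = gfp_mask,
                  .nid = nid,
                  .memcg = memcg,
          };
          struct shrinker *shrinker = idr_find(&shrinker_idr, i);

          /* A stale bit means a pointless call here. */
          ret = do_shrink_slab(&sc, shrinker, priority);
          freed += ret;
  }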

This patch introduces a lockless mechanism to do that without racing
with a parallel list_lru add.  After do_shrink_slab() returns
SHRINK_EMPTY the first time, we clear the bit and call it once again.
Then we restore the bit if the new return value is different.
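
In code terms, the protocol looks as follows (a condensed sketch of
the shrink_slab_memcg() hunk below):

  ret = do_shrink_slab(&sc, shrinker, priority);
  if (ret == SHRINK_EMPTY) {
          /* Tentatively clear the bit for this memcg and shrinker. */
          clear_bit(i, map->map);
          /* Pairs with smp_mb__before_atomic() in memcg_set_shrinker_bit(). */
          smp_mb__after_atomic();
          ret = do_shrink_slab(&sc, shrinker, priority);
          if (ret == SHRINK_EMPTY)
                  ret = 0; /* Really empty: the bit stays clear. */
          else
                  memcg_set_shrinker_bit(memcg, nid, i); /* Restore the bit. */
  }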

Note that a single smp_mb__after_atomic() in shrink_slab_memcg()
covers two situations:

1)list_lru_add()     shrink_slab_memcg()
    list_add_tail()    for_each_set_bit() <--- read bit
                         do_shrink_slab() <--- missed list update (no barrier)
    <MB>                 <MB>
    set_bit()            do_shrink_slab() <--- seen list update

This situation, when the first do_shrink_slab() sees the set bit but
does not see the list update (i.e., the race with queueing of the
first element), is rare.  So we do not add an <MB> before the first
call of do_shrink_slab(), to avoid slowing down the generic case;
instead we rely on the second call, which is also needed for
situation (2) below.

2)list_lru_add()      shrink_slab_memcg()
    list_add_tail()     ...
    set_bit()           ...
  ...                   for_each_set_bit()
  do_shrink_slab()        do_shrink_slab()
    clear_bit()           ...
  ...                     ...
  list_lru_add()          ...
    list_add_tail()       clear_bit()
    <MB>                  <MB>
    set_bit()             do_shrink_slab()

The barriers guarantee that the second do_shrink_slab() in the
right-hand task sees the list update if the left-hand task really
cleared the bit.  This case is drawn in the code comment.
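
For reference, the writer side of this pairing lives in
memcg_set_shrinker_bit() (a condensed sketch of the mm/memcontrol.c
hunk below; the guard conditions around it are omitted):

  rcu_read_lock();
  map = rcu_dereference(memcg->info.nodeinfo[nid]->shrinker_map);
  /*
   * Pairs with smp_mb__after_atomic() in shrink_slab_memcg(): order
   * the list_lru update before setting the bit, so that a shrinker
   * which clears the bit and re-checks the list sees the newly
   * added object.
   */
  smp_mb__before_atomic();
  set_bit(shrinker_id, map->map);
  rcu_read_unlock();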

[Results/performance of the patchset]

With the whole patchset applied, the test below shows a significant
increase in performance:

  $echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
  $mkdir /sys/fs/cgroup/memory/ct
  $echo 4000M > /sys/fs/cgroup/memory/ct/memory.kmem.limit_in_bytes
  $for i in `seq 0 4000`; do mkdir /sys/fs/cgroup/memory/ct/$i;
      echo $$ > /sys/fs/cgroup/memory/ct/$i/cgroup.procs;
      mkdir -p s/$i; mount -t tmpfs $i s/$i;
      touch s/$i/file; done

Then, 5 sequential calls to drop caches:

  $time echo 3 > /proc/sys/vm/drop_caches

1) Before:
  0.00user 13.78system 0:13.78elapsed 99%CPU
  0.00user 5.59system 0:05.60elapsed 99%CPU
  0.00user 5.48system 0:05.48elapsed 99%CPU
  0.00user 8.35system 0:08.35elapsed 99%CPU
  0.00user 8.34system 0:08.35elapsed 99%CPU

2) After:
  0.00user 1.10system 0:01.10elapsed 99%CPU
  0.00user 0.00system 0:00.01elapsed 64%CPU
  0.00user 0.01system 0:00.01elapsed 82%CPU
  0.00user 0.00system 0:00.01elapsed 64%CPU
  0.00user 0.01system 0:00.01elapsed 82%CPU

The results show a performance increase of at least 548 times.

Shakeel Butt tested this patchset with a fork-bomb on his configuration:

	> I created 255 memcgs, 255 ext4 mounts and made each memcg create a
	> file containing few KiBs on corresponding mount. Then in a separate
	> memcg of 200 MiB limit ran a fork-bomb.
	>
	> I ran the "perf record -ag -- sleep 60" and below are the results:
	>
	> Without the patch series:
	> Samples: 4M of event 'cycles', Event count (approx.): 3279403076005
	> +  36.40%            fb.sh  [kernel.kallsyms]    [k] shrink_slab
	> +  18.97%            fb.sh  [kernel.kallsyms]    [k] list_lru_count_one
	> +   6.75%            fb.sh  [kernel.kallsyms]    [k] super_cache_count
	> +   0.49%            fb.sh  [kernel.kallsyms]    [k] down_read_trylock
	> +   0.44%            fb.sh  [kernel.kallsyms]    [k] mem_cgroup_iter
	> +   0.27%            fb.sh  [kernel.kallsyms]    [k] up_read
	> +   0.21%            fb.sh  [kernel.kallsyms]    [k] osq_lock
	> +   0.13%            fb.sh  [kernel.kallsyms]    [k] shmem_unused_huge_count
	> +   0.08%            fb.sh  [kernel.kallsyms]    [k] shrink_node_memcg
	> +   0.08%            fb.sh  [kernel.kallsyms]    [k] shrink_node
	>
	> With the patch series:
	> Samples: 4M of event 'cycles', Event count (approx.): 2756866824946
	> +  47.49%            fb.sh  [kernel.kallsyms]    [k] down_read_trylock
	> +  30.72%            fb.sh  [kernel.kallsyms]    [k] up_read
	> +   9.51%            fb.sh  [kernel.kallsyms]    [k] mem_cgroup_iter
	> +   1.69%            fb.sh  [kernel.kallsyms]    [k] shrink_node_memcg
	> +   1.35%            fb.sh  [kernel.kallsyms]    [k] mem_cgroup_protected
	> +   1.05%            fb.sh  [kernel.kallsyms]    [k] queued_spin_lock_slowpath
	> +   0.85%            fb.sh  [kernel.kallsyms]    [k] _raw_spin_lock
	> +   0.78%            fb.sh  [kernel.kallsyms]    [k] lruvec_lru_size
	> +   0.57%            fb.sh  [kernel.kallsyms]    [k] shrink_node
	> +   0.54%            fb.sh  [kernel.kallsyms]    [k] queue_work_on
	> +   0.46%            fb.sh  [kernel.kallsyms]    [k] shrink_slab_memcg

[ktkhai at virtuozzo.com: v9]
  Link: http://lkml.kernel.org/r/153112561772.4097.11011071937553113003.stgit@localhost.localdomain
Link: http://lkml.kernel.org/r/153063070859.1818.11870882950920963480.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai at virtuozzo.com>
Acked-by: Vladimir Davydov <vdavydov.dev at gmail.com>
Tested-by: Shakeel Butt <shakeelb at google.com>
Cc: Al Viro <viro at zeniv.linux.org.uk>
Cc: Andrey Ryabinin <aryabinin at virtuozzo.com>
Cc: Chris Wilson <chris at chris-wilson.co.uk>
Cc: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
Cc: Guenter Roeck <linux at roeck-us.net>
Cc: "Huang, Ying" <ying.huang at intel.com>
Cc: Johannes Weiner <hannes at cmpxchg.org>
Cc: Josef Bacik <jbacik at fb.com>
Cc: Li RongQing <lirongqing at baidu.com>
Cc: Matthew Wilcox <willy at infradead.org>
Cc: Matthias Kaehlcke <mka at chromium.org>
Cc: Mel Gorman <mgorman at techsingularity.net>
Cc: Michal Hocko <mhocko at kernel.org>
Cc: Minchan Kim <minchan at kernel.org>
Cc: Philippe Ombredanne <pombredanne at nexb.com>
Cc: Roman Gushchin <guro at fb.com>
Cc: Sahitya Tummala <stummala at codeaurora.org>
Cc: Stephen Rothwell <sfr at canb.auug.org.au>
Cc: Tetsuo Handa <penguin-kernel at I-love.SAKURA.ne.jp>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Waiman Long <longman at redhat.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>

https://jira.sw.ru/browse/PSBM-88027

Signed-off-by: Kirill Tkhai <ktkhai at virtuozzo.com>
---
 mm/memcontrol.c |    2 ++
 mm/vmscan.c     |   26 ++++++++++++++++++++++++--
 2 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index dec66d859df8..a1baff607b68 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -827,6 +827,8 @@ void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
 
 		rcu_read_lock();
 		map = rcu_dereference(memcg->info.nodeinfo[nid]->shrinker_map);
+		/* Pairs with smp mb in shrink_slab() */
+		smp_mb__before_atomic();
 		set_bit(shrinker_id, map->map);
 		rcu_read_unlock();
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index da28fc98f0a0..b7a463b5e508 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -464,8 +464,30 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 		}
 
 		ret = do_shrink_slab(&sc, shrinker, priority);
-		if (ret == SHRINK_EMPTY)
-			ret = 0;
+		if (ret == SHRINK_EMPTY) {
+			clear_bit(i, map->map);
+			/*
+			 * After the shrinker reported that it had no objects to
+			 * free, but before we cleared the corresponding bit in
+			 * the memcg shrinker map, a new object might have been
+			 * added. To make sure, we have the bit set in this
+			 * case, we invoke the shrinker one more time and reset
+			 * the bit if it reports that it is not empty anymore.
+			 * The memory barrier here pairs with the barrier in
+			 * memcg_set_shrinker_bit():
+			 *
+			 * list_lru_add()     shrink_slab_memcg()
+			 *   list_add_tail()    clear_bit()
+			 *   <MB>               <MB>
+			 *   set_bit()          do_shrink_slab()
+			 */
+			smp_mb__after_atomic();
+			ret = do_shrink_slab(&sc, shrinker, priority);
+			if (ret == SHRINK_EMPTY)
+				ret = 0;
+			else
+				memcg_set_shrinker_bit(memcg, nid, i);
+		}
 		freed += ret;
 
 		if (rwsem_is_contended(&shrinker_rwsem)) {


