[Devel] [PATCH RHEL7 COMMIT] ms/mm/list_lru.c: add memcg argument to list_lru_from_kmem()
Konstantin Khorenko
khorenko at virtuozzo.com
Wed Sep 5 12:37:11 MSK 2018
The commit is pushed to "branch-rh7-3.10.0-862.11.6.vz7.71.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-862.11.6.vz7.71.8
------>
commit 2fe139e3bac52d16692b38e3b3ce681b985d5a87
Author: Kirill Tkhai <ktkhai at virtuozzo.com>
Date: Wed Sep 5 12:37:11 2018 +0300
ms/mm/list_lru.c: add memcg argument to list_lru_from_kmem()
ms commit 44bd4a4759d5
This is just a refactoring to allow the next patches to have a memcg
pointer in list_lru_from_kmem().
Link: http://lkml.kernel.org/r/153063060664.1818.9541345386733498582.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai at virtuozzo.com>
Acked-by: Vladimir Davydov <vdavydov.dev at gmail.com>
Tested-by: Shakeel Butt <shakeelb at google.com>
Cc: Al Viro <viro at zeniv.linux.org.uk>
Cc: Andrey Ryabinin <aryabinin at virtuozzo.com>
Cc: Chris Wilson <chris at chris-wilson.co.uk>
Cc: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
Cc: Guenter Roeck <linux at roeck-us.net>
Cc: "Huang, Ying" <ying.huang at intel.com>
Cc: Johannes Weiner <hannes at cmpxchg.org>
Cc: Josef Bacik <jbacik at fb.com>
Cc: Li RongQing <lirongqing at baidu.com>
Cc: Matthew Wilcox <willy at infradead.org>
Cc: Matthias Kaehlcke <mka at chromium.org>
Cc: Mel Gorman <mgorman at techsingularity.net>
Cc: Michal Hocko <mhocko at kernel.org>
Cc: Minchan Kim <minchan at kernel.org>
Cc: Philippe Ombredanne <pombredanne at nexb.com>
Cc: Roman Gushchin <guro at fb.com>
Cc: Sahitya Tummala <stummala at codeaurora.org>
Cc: Stephen Rothwell <sfr at canb.auug.org.au>
Cc: Tetsuo Handa <penguin-kernel at I-love.SAKURA.ne.jp>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Waiman Long <longman at redhat.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Kirill Tkhai <ktkhai at virtuozzo.com>
=====================
Patchset description:
Port "Improve shrink_slab() scalability" patchset
https://jira.sw.ru/browse/PSBM-88027
This is a backport of the patchset improving the performance
of overcommitted containers with many memcgs and mounts.
The original series is in Linus' tree and landed in 4.19-rc1.
Kirill Tkhai (12):
mm: assign id to every memcg-aware shrinker
mm/memcontrol.c: move up for_each_mem_cgroup{, _tree} defines
mm, memcg: assign memcg-aware shrinkers bitmap to memcg
fs: propagate shrinker::id to list_lru
mm/list_lru.c: add memcg argument to list_lru_from_kmem()
mm/list_lru: pass dst_memcg argument to memcg_drain_list_lru_node()
mm/list_lru.c: pass lru argument to memcg_drain_list_lru_node()
mm/list_lru.c: set bit in memcg shrinker bitmap on first list_lru item appearance
mm/memcontrol.c: export mem_cgroup_is_root()
mm/vmscan.c: iterate only over charged shrinkers during memcg shrink_slab()
mm: add SHRINK_EMPTY shrinker methods return value
mm/vmscan.c: clear shrinker bit if there are no objects related to memcg
Vladimir Davydov (1):
mm/vmscan.c: generalize shrink_slab() calls in shrink_node()
---
mm/list_lru.c | 25 +++++++++++++++++--------
1 file changed, 17 insertions(+), 8 deletions(-)
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 514a862717c5..7cb1640b7151 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -65,21 +65,27 @@ list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
}
static inline struct list_lru_one *
-list_lru_from_kmem(struct list_lru_node *nlru, void *ptr)
+list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
+ struct mem_cgroup **memcg_ptr)
{
+ struct list_lru_one *l = &nlru->lru;
struct list_lru_memcg *memcg_lrus;
- struct mem_cgroup *memcg;
+ struct mem_cgroup *memcg = NULL;
memcg_lrus = rcu_dereference_check(nlru->memcg_lrus,
lockdep_is_held(&nlru->lock));
if (!memcg_lrus)
- return &nlru->lru;
+ goto out;
memcg = mem_cgroup_from_kmem(ptr);
if (!memcg)
- return &nlru->lru;
+ goto out;
- return list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
+ l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
+out:
+ if (memcg_ptr)
+ *memcg_ptr = memcg;
+ return l;
}
#else
static inline bool list_lru_memcg_aware(struct list_lru *lru)
@@ -94,8 +100,11 @@ list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
}
static inline struct list_lru_one *
-list_lru_from_kmem(struct list_lru_node *nlru, void *ptr)
+list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
+ struct mem_cgroup **memcg_ptr)
{
+ if (memcg_ptr)
+ *memcg_ptr = NULL;
return &nlru->lru;
}
#endif /* CONFIG_MEMCG_KMEM */
@@ -107,7 +116,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
struct list_lru_one *l;
spin_lock(&nlru->lock);
- l = list_lru_from_kmem(nlru, item);
+ l = list_lru_from_kmem(nlru, item, NULL);
if (list_empty(item)) {
list_add_tail(item, &l->list);
l->nr_items++;
@@ -126,7 +135,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
struct list_lru_one *l;
spin_lock(&nlru->lock);
- l = list_lru_from_kmem(nlru, item);
+ l = list_lru_from_kmem(nlru, item, NULL);
if (!list_empty(item)) {
list_del_init(item);
l->nr_items--;