[Devel] [PATCH RHEL7 COMMIT] ms/mm: memcontrol: add sanity checks for memcg->id.ref on get/put
Konstantin Khorenko
khorenko at virtuozzo.com
Wed Jul 5 18:37:02 MSK 2023
The commit is pushed to "branch-rh7-3.10.0-1160.90.1.vz7.200.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-1160.90.1.vz7.200.4
------>
commit 41a102b8e528a2e4d8d664338913abb56f3e113f
Author: Vladimir Davydov <vdavydov at virtuozzo.com>
Date: Wed Jul 5 14:39:50 2023 +0800
ms/mm: memcontrol: add sanity checks for memcg->id.ref on get/put
Link: http://lkml.kernel.org/r/1c5ddb1c171dbdfc3262252769d6138a29b35b70.1470219853.git.vdavydov@virtuozzo.com
Signed-off-by: Vladimir Davydov <vdavydov at virtuozzo.com>
Acked-by: Johannes Weiner <hannes at cmpxchg.org>
Acked-by: Michal Hocko <mhocko at suse.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
https://jira.vzint.dev/browse/PSBM-147036
(cherry picked from commit 58fa2a5512d9f224775fb01433f195e639953c5f)
Signed-off-by: Pavel Tikhomirov <ptikhomirov at virtuozzo.com>
=================
Patchset description:
memcg: release id when offlining cgroup
We see that a container user can deplete the memory cgroup ids on the system
(64k of them) and prevent further memory cgroup creation. In a crash dump
collected by our customer in such a situation we see that mem_cgroup_idr is
full of cgroups from one container, all with the same exact path (the cgroup
of the docker service). These cgroups are not released because they still
hold kmem charges, and the charge in question is for a tmpfs dentry allocated
from the cgroup. (On the vz7 kernel such a dentry seems to be released only
after unmounting the tmpfs or removing the corresponding file from it.)
So there is a valid way to hold a kmem cgroup for a long time. A similar
situation was reported in mainstream, with the page cache holding a kmem
cgroup for a long time, and the proposed way to deal with it is the same:
release the cgroup id early, so that new cgroups can be allocated immediately.
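To make the intended lifecycle concrete, here is a minimal userspace sketch
(all names are made up for the example; this is not the kernel code): after
this set, offlining releases the id independently of the css reference that
the kmem charge still holds, so the id slot is recycled even though the memcg
itself hangs around.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical model of a dying memcg whose css is still pinned by a
 * kmem charge (the tmpfs dentry from above). */
struct memcg_model {
	int  css_ref;   /* pinned by charges; may stay elevated for long */
	bool id_freed;  /* has the 16-bit id slot been returned? */
};

static void offline_memcg(struct memcg_model *m)
{
	m->id_freed = true;  /* with this set: put the id right at offline */
	m->css_ref--;        /* drop the online css reference */
}

int main(void)
{
	/* one ref from the online state + one from the tmpfs dentry charge */
	struct memcg_model m = { .css_ref = 2, .id_freed = false };

	offline_memcg(&m);
	assert(m.css_ref == 1);  /* the dentry still pins the memcg... */
	assert(m.id_freed);      /* ...but its id is already reusable */
	puts("id recycled while the memcg hangs around");
	return 0;
}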
Reproduce:
https://git.vzint.dev/users/ptikhomirov/repos/helpers/browse/memcg-related/test-mycg-tmpfs.sh
After this fix the number of memory cgroups in /proc/cgroups can show > 64k,
since we now allow memory cgroups to hang around after their ids have been
released.
Note: it may be a bad idea to let a container eat kernel memory with such
hanging cgroups, but I don't have a better idea yet.
https://jira.vzint.dev/browse/PSBM-147473
https://jira.vzint.dev/browse/PSBM-147036
---
mm/memcontrol.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5c0a7dc32908..3d4d2fb2bfc6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6681,6 +6681,7 @@ static void mem_cgroup_id_remove(struct mem_cgroup *memcg)
 
 static void mem_cgroup_id_get_many(struct mem_cgroup *memcg, unsigned int n)
 {
+	VM_BUG_ON(atomic_read(&memcg->id.ref) <= 0);
 	atomic_add(n, &memcg->id.ref);
 }
 
@@ -6704,6 +6705,7 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
 
 static void mem_cgroup_id_put_many(struct mem_cgroup *memcg, unsigned int n)
 {
+	VM_BUG_ON(atomic_read(&memcg->id.ref) < n);
 	if (atomic_sub_and_test(n, &memcg->id.ref)) {
 		mem_cgroup_id_remove(memcg);
 
@@ -6962,7 +6964,7 @@ mem_cgroup_css_online(struct cgroup *cont)
 	memcg = mem_cgroup_from_cont(cont);
 
 	/* Online state pins memcg ID, memcg ID pins CSS */
-	mem_cgroup_id_get(memcg);
+	atomic_set(&memcg->id.ref, 1);
 	css_get(&memcg->css);
 
 	if (!cont->parent) {
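For illustration, a minimal userspace model of the id.ref rules these checks
enforce (assert() stands in for VM_BUG_ON(); everything apart from the check
logic is made up for the sketch):

#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical stand-in for the kernel's struct, reduced to one counter. */
struct memcg_id_model { atomic_int id_ref; };

static void id_get_many(struct memcg_id_model *memcg, int n)
{
	/* New check: taking a reference is only legal while one is held. */
	assert(atomic_load(&memcg->id_ref) > 0);
	atomic_fetch_add(&memcg->id_ref, n);
}

static void id_put_many(struct memcg_id_model *memcg, int n)
{
	/* New check: never put more references than were taken. */
	assert(atomic_load(&memcg->id_ref) >= n);
	if (atomic_fetch_sub(&memcg->id_ref, n) == n)
		puts("last ref dropped: the id would be freed here");
}

int main(void)
{
	struct memcg_id_model memcg = { .id_ref = 0 };

	atomic_store(&memcg.id_ref, 1);  /* css_online: initial ref is set, not got */
	id_get_many(&memcg, 1);          /* e.g. a swapout record pins the id */
	id_put_many(&memcg, 1);          /* swapin drops it */
	id_put_many(&memcg, 1);          /* css_offline drops the last ref */
	return 0;
}

This also shows why mem_cgroup_css_online() switches from mem_cgroup_id_get()
to a plain atomic_set(): the reference counter starts at zero, so taking the
initial reference via mem_cgroup_id_get() would now trip the very check this
patch adds; onlining establishes the initial reference directly instead.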