[Devel] [PATCH RHEL7 COMMIT] mm/memcg: close potential race of kmem uncharge and reparent.
Konstantin Khorenko
khorenko at virtuozzo.com
Wed Aug 14 19:40:32 MSK 2019
The commit is pushed to "branch-rh7-3.10.0-957.27.2.vz7.107.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-957.27.2.vz7.107.3
------>
commit 4a279f9156ef9dbbc637924e5a46397db3e7a42b
Author: Andrey Ryabinin <aryabinin at virtuozzo.com>
Date: Wed Aug 14 19:40:29 2019 +0300
mm/memcg: close potential race of kmem uncharge and reparent.
Make sure that we uncharge ->kmem first, so that mem_cgroup_reparent_charges()
won't see a false-positive ->memory <= ->kmem condition.
Signed-off-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
khorenko@: note, this is not only about the false-positive warning we faced
in PSBM-97012; it is also about the theoretical possibility of stopping
reparenting too early due to the race.
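For illustration only (not part of the patch): a minimal user-space C sketch
of the race, with hypothetical counters standing in for memcg->memory and
memcg->kmem. With the old uncharge order (->memory dropped first), a
concurrent reparent loop can transiently observe memory - kmem <= 0 while
kmem pages are still charged, and stop too early:

    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic long memory = 2;  /* stand-in for memcg->memory (all pages) */
    static _Atomic long kmem   = 1;  /* stand-in for memcg->kmem (kmem pages)  */

    /* Uncharge path, pre-patch order: drop ->memory before ->kmem. */
    static void uncharge_old_order(void)
    {
            atomic_fetch_sub(&memory, 1);   /* memory: 2 -> 1 */
            /*
             * A reparent loop running at this point reads memory == 1
             * and kmem == 1, so memory - kmem == 0: it wrongly concludes
             * that no non-kmem pages are left and stops too early.
             */
            atomic_fetch_sub(&kmem, 1);     /* kmem: 1 -> 0 */
    }

    /* Reparent side: keep going while non-kmem pages remain charged. */
    static int pages_left(void)
    {
            return atomic_load(&memory) - atomic_load(&kmem) > 0;
    }

    int main(void)
    {
            /* Single-threaded here; the race needs a concurrent reader. */
            uncharge_old_order();
            printf("pages left: %d\n", pages_left());
            return 0;
    }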
---
mm/memcontrol.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 435fdcf8c141..464b6116c62e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3206,13 +3206,13 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask, bool kmem_charge
* successful charge implies full memory barrier.
*/
if (unlikely(memcg->is_offline)) {
- page_counter_uncharge(&memcg->memory, batch);
- if (do_swap_account)
- page_counter_uncharge(&memcg->memsw, batch);
if (kmem_charge) {
WARN_ON_ONCE(1);
page_counter_uncharge(&memcg->kmem, nr_pages);
}
+ page_counter_uncharge(&memcg->memory, batch);
+ if (do_swap_account)
+ page_counter_uncharge(&memcg->memsw, batch);
goto bypass;
}
@@ -4380,6 +4380,13 @@ static void mem_cgroup_reparent_charges(struct mem_cgroup *memcg)
* charge before adding to the LRU.
*/
mem = page_counter_read(&memcg->memory);
+
+ /*
+ * Make sure that we won't see ->memory uncharge before ->kmem uncharge,
+ * see uncharge_batch(), memcg_uncharge_kmem(). The pairing barrier is
+ * provided by page_counter_uncharge()->page_counter_cancel()->atomic_long_sub_return().
+ */
+ smp_rmb();
kmem = page_counter_read(&memcg->kmem);
} while ((mem - kmem > 0) && time_before(jiffies, timeout));
@@ -7582,12 +7589,12 @@ static void uncharge_batch(struct mem_cgroup *memcg, unsigned long pgpgout,
unsigned long flags;
if (!mem_cgroup_is_root(memcg)) {
+ if (nr_kmem)
+ page_counter_uncharge(&memcg->kmem, nr_kmem);
if (nr_mem + nr_kmem)
page_counter_uncharge(&memcg->memory, nr_mem + nr_kmem);
if (nr_memsw + nr_kmem)
page_counter_uncharge(&memcg->memsw, nr_memsw + nr_kmem);
- if (nr_kmem)
- page_counter_uncharge(&memcg->kmem, nr_kmem);
if (nr_file)
page_counter_uncharge(&memcg->cache, nr_file - nr_shmem);
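A condensed view of the ordering the hunks above establish (paraphrased from
the patch, not new code): the uncharge side now publishes the ->kmem drop
before the ->memory drop, the value-returning atomic_long_sub_return() inside
page_counter_uncharge() acting as the write-side barrier, while the reparent
side reads in the opposite order under smp_rmb():

    /* Writer: uncharge_batch() / memcg_uncharge_kmem(), post-patch order */
    page_counter_uncharge(&memcg->kmem, nr_kmem);            /* A */
    /* atomic_long_sub_return() inside implies a full barrier */
    page_counter_uncharge(&memcg->memory, nr_mem + nr_kmem); /* B */

    /* Reader: mem_cgroup_reparent_charges() */
    mem  = page_counter_read(&memcg->memory);  /* if B is seen ...   */
    smp_rmb();                                 /* order the two reads */
    kmem = page_counter_read(&memcg->kmem);    /* ... A is seen too  */

Whenever the reader observes the ->memory drop, it therefore also observes
the preceding ->kmem drop, so mem - kmem cannot go spuriously non-positive
while kmem pages are still charged.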