[Devel] [PATCH RHEL7 COMMIT] ms/mm: memcontrol: remove ordering between pc->mem_cgroup and PageCgroupUsed
Konstantin Khorenko
khorenko at virtuozzo.com
Mon Jan 16 08:27:13 PST 2017
The commit is pushed to "branch-rh7-3.10.0-514.vz7.27.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-514.vz7.27.10
------>
commit 749786bd77767a1295b9a7cf89d60b8275ce237d
Author: Johannes Weiner <hannes at cmpxchg.org>
Date: Mon Jan 16 20:27:12 2017 +0400
ms/mm: memcontrol: remove ordering between pc->mem_cgroup and PageCgroupUsed
There is a write barrier between setting pc->mem_cgroup and
PageCgroupUsed, which was added to allow LRU operations to look up the
memcg LRU list of a page without acquiring the page_cgroup lock.
But ever since commit 38c5d72f3ebe ("memcg: simplify LRU handling by new
rule"), pages are ensured to be off-LRU while charging, so nobody else
is changing LRU state while pc->mem_cgroup is being written, and there
are no read barriers anymore.
Remove the unnecessary write barrier.
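For context, a minimal sketch of the ordering the barrier used to provide
(illustrative only, not taken from the original commit; pc and memcg are the
locals from the hunks below, and the reader side is hypothetical):

	/* writer: __mem_cgroup_commit_charge() before this patch */
	pc->mem_cgroup = memcg;
	smp_wmb();			/* publish ->mem_cgroup ...       */
	SetPageCgroupUsed(pc);		/* ... before the Used bit        */

	/* hypothetical lockless reader the barrier was paired against */
	if (PageCgroupUsed(pc)) {
		smp_rmb();		/* would pair with the smp_wmb()  */
		memcg = pc->mem_cgroup;	/* sees the value published above */
	}

With the page kept off the LRU for the whole duration of the charge, no such
reader can run concurrently, so both sides of the pairing are unnecessary.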
Signed-off-by: Johannes Weiner <hannes at cmpxchg.org>
Acked-by: Michal Hocko <mhocko at suse.cz>
Cc: Hugh Dickins <hughd at google.com>
Cc: Tejun Heo <tj at kernel.org>
Cc: Vladimir Davydov <vdavydov at parallels.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
https://jira.sw.ru/browse/PSBM-51558
(cherry picked from commit 9a2385eef9f28fb5260c48c45fc8fe01f1da70a6)
Signed-off-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
---
mm/memcontrol.c | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cddfb93..2b04b1e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2977,14 +2977,6 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *memcg,
 	}
 
 	pc->mem_cgroup = memcg;
-	/*
-	 * We access a page_cgroup asynchronously without lock_page_cgroup().
-	 * Especially when a page_cgroup is taken from a page, pc->mem_cgroup
-	 * is accessed after testing USED bit. To make pc->mem_cgroup visible
-	 * before USED bit, we need memory barrier here.
-	 * See mem_cgroup_add_lru_list(), etc.
-	 */
-	smp_wmb();
 	SetPageCgroupUsed(pc);
 
 	if (lrucare) {
@@ -3520,7 +3512,6 @@ void mem_cgroup_split_huge_fixup(struct page *head)
 	for (i = 1; i < HPAGE_PMD_NR; i++) {
 		pc = head_pc + i;
 		pc->mem_cgroup = memcg;
-		smp_wmb();/* see __commit_charge() */
 		pc->flags = head_pc->flags & ~PCGF_NOCOPY_AT_SPLIT;
 	}
 	__this_cpu_sub(memcg->stat->count[MEM_CGROUP_STAT_RSS_HUGE],