[Devel] Re: [PATCH v2] memcg: reduce lock time at move charge (Was Re: [PATCH 04/10] memcg: disable local interrupts in lock_page_cgroup())
Daisuke Nishimura
nishimura at mxp.nes.nec.co.jp
Thu Oct 7 18:12:22 PDT 2010
On Thu, 7 Oct 2010 16:14:54 -0700
Andrew Morton <akpm at linux-foundation.org> wrote:
> On Thu, 7 Oct 2010 17:04:05 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu at jp.fujitsu.com> wrote:
>
> > Now, at task migration between cgroups, the memory cgroup scans the page
> > table and moves the charge accounting if the flags are properly set.
> >
> > The core code, mem_cgroup_move_charge_pte_range(), does
> >
> >	pte_offset_map_lock();
> >	for all ptes in a page table:
> >		1. look into page table, find_and_get a page
> >		2. remove it from LRU.
> >		3. move charge.
> >		4. putback to LRU. put_page()
> >	pte_offset_map_unlock();
> >
> > for the pte entries in one bottom-level (pte) page table.
> >
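As a rough C sketch of that shape (not the actual mm/memcontrol.c code;
mc_move_one_charge() is a hypothetical stand-in for the real accounting
work), everything runs under one pte lock:

#include <linux/mm.h>
#include <linux/swap.h>

/* Hypothetical stand-in for the real charge-moving step. */
static void mc_move_one_charge(struct page *page);

static int move_charge_pte_range_old(struct vm_area_struct *vma,
				     pmd_t *pmd, unsigned long addr,
				     unsigned long end)
{
	spinlock_t *ptl;
	pte_t *pte;

	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	for (; addr != end; pte++, addr += PAGE_SIZE) {
		struct page *page;

		if (!pte_present(*pte))
			continue;
		page = vm_normal_page(vma, addr, *pte);
		if (!page)
			continue;
		get_page(page);				/* 1. find_and_get */
		if (!isolate_lru_page(page)) {		/* 2. off the LRU */
			mc_move_one_charge(page);	/* 3. move charge */
			putback_lru_page(page);		/* 4. back to LRU */
		}
		put_page(page);
	}
	pte_offset_map_unlock(pte - 1, ptl);
	return 0;
}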
> > This pte_offset_map_lock section seems a bit long. This patch modifies the routine as follows:
> >
> >	for 32 pages: pte_offset_map_lock()
> >		find_and_get a page
> >		record it
> >	pte_offset_map_unlock()
> >	for all recorded pages
> >		isolate it from LRU
> >		move charge
> >		putback to LRU
> >	for all recorded pages
> >		put_page()
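A sketch of that batched variant along the same lines (again illustrative,
not the v2 patch itself; MC_BATCH and mc_move_one_charge() are made-up
names) could look like:

#include <linux/mm.h>
#include <linux/swap.h>

#define MC_BATCH 32

/* Hypothetical stand-in for the real charge-moving step. */
static void mc_move_one_charge(struct page *page);

static int move_charge_pte_range_new(struct vm_area_struct *vma,
				     pmd_t *pmd, unsigned long addr,
				     unsigned long end)
{
	struct page *pages[MC_BATCH];
	spinlock_t *ptl;
	pte_t *pte;
	int i, nr = 0;

	/* Pass 1: find_and_get up to 32 pages, pte lock held. */
	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	for (; addr != end && nr < MC_BATCH; pte++, addr += PAGE_SIZE) {
		struct page *page;

		if (!pte_present(*pte))
			continue;
		page = vm_normal_page(vma, addr, *pte);
		if (!page)
			continue;
		get_page(page);
		pages[nr++] = page;
	}
	pte_offset_map_unlock(pte - 1, ptl);

	/* Pass 2: isolate, move charge, put back - lock dropped. */
	for (i = 0; i < nr; i++) {
		if (!isolate_lru_page(pages[i])) {
			mc_move_one_charge(pages[i]);
			putback_lru_page(pages[i]);
		}
	}

	/* Pass 3: drop the references taken in pass 1. */
	for (i = 0; i < nr; i++)
		put_page(pages[i]);

	return nr;	/* caller keeps walking until the range is done */
}

This sketch records targets in an on-stack array to stay self-contained;
the patch under discussion keeps them in mc.target[], which is what the
size numbers below are about.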
>
> The patch makes the code larger, more complex and slower!
>
Before this patch:
   text	   data	    bss	    dec	    hex	filename
  27163	  11782	   4100	  43045	   a825	mm/memcontrol.o
After this patch:
   text	   data	    bss	    dec	    hex	filename
  27307	  12294	   4100	  43701	   aab5	mm/memcontrol.o
Hmm, allocating mc.target[] statically might be bad, but I'm now wondering
whether I could allocate mc itself dynamically (I'll try).
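A rough sketch of that idea (every name below is hypothetical; the real mc
is a static move_charge_struct in mm/memcontrol.c):

#include <linux/mm.h>
#include <linux/slab.h>

#define MC_BATCH 32

/* Hypothetical layout; field names are made up. */
struct mc_ctx {
	struct mm_struct *mm;
	struct page *target[MC_BATCH];
	int nr_target;
};

static struct mc_ctx *mc;	/* NULL while no move is in flight */

static int mc_begin_move(struct mm_struct *mm)
{
	/* Process context, no locks held yet, so GFP_KERNEL is fine. */
	mc = kzalloc(sizeof(*mc), GFP_KERNEL);
	if (!mc)
		return -ENOMEM;
	mc->mm = mm;
	return 0;
}

static void mc_end_move(void)
{
	kfree(mc);
	mc = NULL;
}

This would move the batch array out of static data and into a short-lived
heap allocation that exists only while a charge move is in progress.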
> I do think we're owed a more complete description of its benefits than
> "seems a bit long". Have problems been observed? Any measurements
> taken?
>
IIUC, this patch is needed by "[PATCH] memcg: lock-free clear page writeback"
later in the series, but I agree we should describe that benefit.
Thanks,
Daisuke Nishimura.