[Devel] Re: [PATCH v2] memcg: reduce lock time at move charge (Was Re: [PATCH 04/10] memcg: disable local interrupts in lock_page_cgroup()
Andrew Morton
akpm at linux-foundation.org
Thu Oct 7 21:55:56 PDT 2010
On Fri, 8 Oct 2010 13:37:12 +0900 KAMEZAWA Hiroyuki <kamezawa.hiroyu at jp.fujitsu.com> wrote:
> On Thu, 7 Oct 2010 16:14:54 -0700
> Andrew Morton <akpm at linux-foundation.org> wrote:
>
> > On Thu, 7 Oct 2010 17:04:05 +0900
> > KAMEZAWA Hiroyuki <kamezawa.hiroyu at jp.fujitsu.com> wrote:
> >
> > > Now, at task migration between cgroups, memory cgroup scans the page table and moves
> > > the charge if the flags are properly set.
> > >
> > > The core code, mem_cgroup_move_charge_pte_range() does
> > >
> > > 	pte_offset_map_lock();
> > > 	for all ptes in a page table:
> > > 		1. look into page table, find_and_get a page
> > > 		2. remove it from LRU.
> > > 		3. move charge.
> > > 		4. putback to LRU. put_page()
> > > 	pte_offset_map_unlock();
> > >
> > > for the pte entries in one (3rd-level) page table.
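
To make the quoted flow concrete, here is a rough sketch of the loop being described (not the actual mm/memcontrol.c code; the name move_charge_pte_range_sketch() is made up here, and swap entries, the precharge bookkeeping and all error handling are omitted):

/*
 * Illustrative sketch only -- not the actual mm/memcontrol.c code.
 * Swap entries, the precharge bookkeeping and error handling are left
 * out; the point is just that steps 1-4 above run with the pte lock
 * held across the whole page table.
 */
#include <linux/mm.h>
#include <linux/swap.h>

static void move_charge_pte_range_sketch(pmd_t *pmd, unsigned long addr,
                                         unsigned long end,
                                         struct vm_area_struct *vma)
{
        spinlock_t *ptl;
        pte_t *pte;

        pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
        for (; addr != end; addr += PAGE_SIZE, pte++) {
                struct page *page;

                if (!pte_present(*pte))
                        continue;
                page = vm_normal_page(vma, addr, *pte); /* 1. find a page...  */
                if (!page)
                        continue;
                get_page(page);                         /* ...and get it      */
                if (!isolate_lru_page(page)) {          /* 2. remove from LRU */
                        /* 3. move charge (mem_cgroup_move_account(), elided) */
                        putback_lru_page(page);         /* 4. putback to LRU  */
                }
                put_page(page);                         /* 4. put_page()      */
        }
        pte_offset_map_unlock(pte - 1, ptl);
}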
> > >
> > > This pte_offset_map_lock section seems to be held a bit long. This patch modifies the routine as follows:
> > >
> > > 	for 32 pages: pte_offset_map_lock()
> > > 	              find_and_get a page
> > > 	              record it
> > > 	              pte_offset_map_unlock()
> > > 	for all recorded pages
> > > 	              isolate it from LRU.
> > > 	              move charge
> > > 	              putback to LRU
> > > 	for all recorded pages
> > > 	              put_page()
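
Again only a sketch, to show the proposed batched shape (MC_MOVE_BATCH, the on-stack target[] array and the function name are made-up illustrations, and the charge bookkeeping is elided as before; the point is that the pte lock now covers only phase 1, while the LRU work and the final put_page() run with the lock dropped):

/*
 * Illustrative sketch only -- not the actual patch.
 */
#include <linux/mm.h>
#include <linux/swap.h>

#define MC_MOVE_BATCH   32

static void move_charge_pte_range_batched_sketch(pmd_t *pmd,
                                                 unsigned long addr,
                                                 unsigned long end,
                                                 struct vm_area_struct *vma)
{
        struct page *target[MC_MOVE_BATCH];
        spinlock_t *ptl;
        pte_t *pte;
        int nr, i;

        while (addr != end) {
                nr = 0;
                /* Phase 1: hold the pte lock only to find and pin pages. */
                pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
                for (; addr != end && nr < MC_MOVE_BATCH;
                     addr += PAGE_SIZE, pte++) {
                        struct page *page;

                        if (!pte_present(*pte))
                                continue;
                        page = vm_normal_page(vma, addr, *pte);
                        if (!page)
                                continue;
                        get_page(page);         /* find_and_get a page */
                        target[nr++] = page;    /* record it           */
                }
                pte_offset_map_unlock(pte - 1, ptl);

                /* Phase 2: isolate, move charge, putback -- pte lock dropped. */
                for (i = 0; i < nr; i++) {
                        if (!isolate_lru_page(target[i])) {
                                /* move charge (elided) */
                                putback_lru_page(target[i]);
                        }
                }

                /* Phase 3: drop the references taken in phase 1. */
                for (i = 0; i < nr; i++)
                        put_page(target[i]);
        }
}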
> >
> > The patch makes the code larger, more complex and slower!
> >
>
> Slower?

Sure. It walks the same data three times, potentially causing
thrashing in the L1 cache. It takes and releases locks at a higher
frequency. It increases the text size.