[Devel] Re: [PATCH 1/2] memcg: dirty pages accounting and limiting infrastructure

Minchan Kim minchan.kim at gmail.com
Thu Feb 25 22:35:11 PST 2010


On Fri, Feb 26, 2010 at 3:15 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu at jp.fujitsu.com> wrote:
> On Fri, 26 Feb 2010 14:53:39 +0900
> Minchan Kim <minchan.kim at gmail.com> wrote:
>
>> On Fri, Feb 26, 2010 at 2:01 PM, KAMEZAWA Hiroyuki
>> <kamezawa.hiroyu at jp.fujitsu.com> wrote:
>> > Hi,
>> >
>> > On Fri, 26 Feb 2010 13:50:04 +0900
>> > Minchan Kim <minchan.kim at gmail.com> wrote:
>> >
>> >> > Hm ? I don't read the whole thread but can_attach() is called under
>> >> > cgroup_mutex(). So, it doesn't need to use RCU.
>> >>
>> >> Vivek mentioned that memcg is protected by RCU, if I understand his intention right.
>> >> So I commented without enough knowledge of memcg.
>> >> After your comment, I dove into the code.
>> >>
>> >> Just out of curiosity.
>> >>
>> >> Really, memcg is protected by RCU?
>> > yes. All cgroup subsystems are protected by RCU.
>> >
>> >> I think most of RCU around memcg is for protecting task_struct and
>> >> cgroup_subsys_state.
>> >> The memcg is protected by cgroup_mutex as you mentioned.
>> >> Am I missing something?
>> >
>> > There are several levels of protections.
>> >
>> > A cgroup subsystem's ->destroy() callback is finally called like this:
>> >
>> >                 synchronize_rcu();
>> >
>> >                 mutex_lock(&cgroup_mutex);
>> >                 /*
>> >                  * Release the subsystem state objects.
>> >                  */
>> >                 for_each_subsys(cgrp->root, ss)
>> >                         ss->destroy(ss, cgrp);
>> >
>> >                 cgrp->root->number_of_cgroups--;
>> >                 mutex_unlock(&cgroup_mutex);
>> >
>> > Before here,
>> >        - there are no tasks under this cgroup (cgroup's refcnt is 0)
>> >          && cgroup is marked as REMOVED.
>> >
>> > Then, this access
>> >        rcu_read_lock();
>> >        mem = mem_cgroup_from_task(task);
>> >        if (css_tryget(&mem->css))   <=============== checks cgroup refcnt
>>
>> If that's the case, do we always need css_tryget() after mem_cgroup_from_task()
>> when we don't hold cgroup_mutex, to make sure the css is valid?
>>
> It depends, case by case.
>
>> But I found several cases that don't call css_tryget():
>>
>> 1. mm_match_cgroup
>> It's used by page_referenced_xxx(), so I think we don't hold
>> cgroup_mutex at that time.
>>
> Yes, but all checks are done under RCU, and this function never
> accesses the contents of the memory the pointer refers to.
> Please consider the whole context:
>
>        mem_cgroup_try_charge()
>                mem_cont = ... (refcnt +1)
>                ....
>                try_to_free_mem_cgroup_pages(mem_cont)
>                ....
>                shrink_xxx_list()
>                ....
>                        page_referenced_anon(page, mem_cont)
>                                mm_match_cgroup(mm, mem_cont)
>
> Then, this mem_cont (2nd argument to mm_match_cgroup) is always valid.
>        rcu_read_lock();
>        memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
>        rcu_read_unlock();
>        return memcg == mem_cont;
>
> Here,
>        1. mem_cont is never reused. (because refcnt+1)
>        2. we don't access memcg's contents.
>
> I think even rcu_read_lock()/unlock() is unnecessary.
>
>
>
>> 2. mem_cgroup_oom_called
>> I think we don't hold cgroup_mutex here, either.
>>
> In the OOM-killer context, the memcg which caused the OOM already has
> refcnt +1, so it's not necessary.
>
>
>> I guess some design would cover those problems.
> Maybe.
>
>> Could you tell me if you don't mind?
>> Sorry for bothering you.
>>
>
> In my point of view, the most serious problem is the heavy cost of
> css_tryget() when you run a heavily multi-threaded program.
> So I want to see some innovation there, but haven't found one yet.
>
> I admit this RCU+refcnt scheme tends to be hard to review. But taking
> a refcount is a costly operation, and for now it's worth handling in
> whatever way best fits each piece of logic, case by case.
>

Thanks for the always kind explanation, Kame.

> Thanks,
> -Kame
-- 
Kind regards,
Minchan Kim
_______________________________________________
Containers mailing list
Containers at lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/containers



