[Devel] [PATCH 7/7] packet: Pre-account maximum socket buffer into cg memory (v2)

Vladimir Davydov vdavydov at parallels.com
Thu Jun 4 04:54:10 PDT 2015


On Thu, Jun 04, 2015 at 02:31:38PM +0300, Pavel Emelyanov wrote:
> On 06/04/2015 01:53 PM, Vladimir Davydov wrote:
> > On Thu, Jun 04, 2015 at 12:47:47PM +0300, Pavel Emelyanov wrote:
> >> On 06/04/2015 12:02 PM, Vladimir Davydov wrote:
> >>> On Wed, Jun 03, 2015 at 04:35:35PM +0300, Pavel Emelyanov wrote:
> >>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> >>>> index 4d0a756..45bc78d 100644
> >>>> --- a/mm/memcontrol.c
> >>>> +++ b/mm/memcontrol.c
> >>>> @@ -534,6 +534,20 @@ void sock_update_memcg(struct sock *sk)
> >>>>  }
> >>>>  EXPORT_SYMBOL(sock_update_memcg);
> >>>>  
> >>>> +struct mem_cgroup *try_get_mem_cgroup_from_current(void)
> >>>> +{
> >>>> +	struct mem_cgroup *cg;
> >>>> +
> >>>> +	rcu_read_lock();
> >>>> +	cg = mem_cgroup_from_task(current);
> >>>> +	if (mem_cgroup_is_root(cg) || !css_tryget(&cg->css))
> >>>> +		cg = NULL;
> >>>> +	rcu_read_unlock();
> >>>> +
> >>>> +	return cg;
> >>>> +}
> >>>> +EXPORT_SYMBOL(try_get_mem_cgroup_from_current);
> >>>> +
> >>>
> >>> Why not use try_get_mem_cgroup_from_mm(current->mm) instead?
> >>
> >> Because it's stupid: I don't need the steps it takes when trying
> >> to pick the mm owner in a loop.
> > 
> > Our goal is to reduce churn.
> 
> Really?!

Apart from making the code work, yes. I mean, if there are two equivalent
ways of doing the same thing, we should prefer the one with less churn.
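
For reference, here is roughly what try_get_mem_cgroup_from_mm() does (a
sketch reconstructed from the mainline mm/memcontrol.c of that era, not a
verbatim quote from our tree):

struct mem_cgroup *try_get_mem_cgroup_from_mm(struct mm_struct *mm)
{
	struct mem_cgroup *memcg = NULL;

	if (!mm)
		return NULL;
	/*
	 * With no locks held, mm->owner may be migrating to another
	 * cgroup while we look, so css_tryget() can fail and the
	 * lookup is retried.
	 */
	rcu_read_lock();
	do {
		memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
		if (unlikely(!memcg))
			break;
	} while (!css_tryget(&memcg->css));
	rcu_read_unlock();
	return memcg;
}

The "steps" Pavel objects to are the mm->owner dereference and the retry
loop; for a helper that always acts on current, a single
mem_cgroup_from_task(current) plus one css_tryget() suffices.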

> 
> > In particular, this means reusing existing functions wherever possible.
> 
> Even "reducing churn" goal doesn't mean so.
> 
> > This particular function does what you need,
> > and there is no point in caring whether it is stupid or not; just use it :-)
> > The caller of this function is not in a hot path anyway.
> 
> Nice concept. Why not remove ve_struct->_venet_dev then? We could always safely
> just call __dev_get_by_name(ve->ve_ns->net_ns, "venet0").

Because that lookup would take more time, a concern that doesn't apply in
this particular case.
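
(To spell out the cost difference with venet0: a sketch, using the field
and call names from your example; the helper names are mine and I haven't
re-checked any of this against the tree:)

#include <linux/netdevice.h>

/* the cached field: one pointer load */
static inline struct net_device *venet_dev_cached(struct ve_struct *ve)
{
	return ve->_venet_dev;
}

/* hashes "venet0" and walks the per-netns name hash on every call;
 * __dev_get_by_name() must also be called under RTNL or dev_base_lock */
static inline struct net_device *venet_dev_lookup(struct ve_struct *ve)
{
	return __dev_get_by_name(ve->ve_ns->net_ns, "venet0");
}

Caching the pointer trades that per-call lookup cost for the burden of
keeping _venet_dev in sync when the device is renamed or removed.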


