[Devel] [PATCH RESEND -mm 01/12] memcg: flush cache creation works before memcg cache destruction
Michal Hocko
mhocko at suse.cz
Tue Mar 18 01:55:32 PDT 2014
On Tue 18-03-14 12:14:37, Vladimir Davydov wrote:
> On 03/17/2014 08:07 PM, Michal Hocko wrote:
> > On Thu 13-03-14 19:06:39, Vladimir Davydov wrote:
> >> When we get to memcg cache destruction, either from the root cache
> >> destruction path or when turning a memcg offline, there still might be
> >> memcg cache creation works pending that were scheduled before we
> >> initiated destruction. We need to flush them before starting to destroy
> >> memcg caches, otherwise we can get a leaked kmem cache or, even worse,
> >> a use-after-free.
> > How can we get a use-after-free? Even if there is a pending work item
> > to create a new cache, we keep the css reference for the memcg and
> > release it from the worker (memcg_create_cache_work_func). So although
> > this can race with memcg offlining, the memcg itself will still be alive.
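For context, the lifetime rule referred to here is the usual
get-before-queue / put-in-worker pattern. Roughly (a simplified sketch
only; the struct layout and names are approximate, not the exact
mainline code of that era):

#include <linux/memcontrol.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct memcg_create_work {
	struct work_struct work;
	struct mem_cgroup *memcg;	/* pinned by a css reference */
	struct kmem_cache *root_cache;
};

static void memcg_create_cache_work_func(struct work_struct *w)
{
	struct memcg_create_work *cw =
		container_of(w, struct memcg_create_work, work);

	/* the memcg cannot be freed here: we still hold its css ref */
	memcg_create_kmem_cache(cw->memcg, cw->root_cache);

	css_put(&cw->memcg->css);	/* drop the ref taken at queue time */
	kfree(cw);
}

static void memcg_schedule_cache_create(struct mem_cgroup *memcg,
					struct kmem_cache *root_cache)
{
	struct memcg_create_work *cw;

	cw = kmalloc(sizeof(*cw), GFP_NOWAIT);
	if (!cw)
		return;

	css_get(&memcg->css);		/* pin the memcg for the worker */
	cw->memcg = memcg;
	cw->root_cache = root_cache;
	INIT_WORK(&cw->work, memcg_create_cache_work_func);
	schedule_work(&cw->work);
}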
>
> There are actually two issues:
>
> 1) When we destroy a root cache using kmem_cache_destroy(), we should
> ensure all pending memcg creation works for this root cache have finished,
> otherwise a work item could be executed after the root cache is destroyed,
> resulting in a use-after-free.
Dunno, but this sounds backwards to me. If we are using a root cache for
a new child creation, then the child should make sure that the root
doesn't go away, no? Can't we take a reference to the root cache before
we schedule memcg_create_cache_work_func?
But I admit that the root cache concept is not entirely clear to me.
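To be concrete about the idea, extending the sketch above (just a rough
sketch; kmem_cache_get()/kmem_cache_put() are hypothetical helpers, say
bumping s->refcount under slab_mutex, not existing API):

static void memcg_schedule_cache_create(struct mem_cgroup *memcg,
					struct kmem_cache *root_cache)
{
	struct memcg_create_work *cw;

	cw = kmalloc(sizeof(*cw), GFP_NOWAIT);
	if (!cw)
		return;

	css_get(&memcg->css);
	kmem_cache_get(root_cache);	/* kmem_cache_destroy() has to wait for us */

	cw->memcg = memcg;
	cw->root_cache = root_cache;
	INIT_WORK(&cw->work, memcg_create_cache_work_func);
	schedule_work(&cw->work);
}

/*
 * ... and memcg_create_cache_work_func() would drop both references once
 * the child cache has been created:
 *
 *	kmem_cache_put(cw->root_cache);
 *	css_put(&cw->memcg->css);
 */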
> 2) Memcg offline. In this case a use-after-free is impossible in a memcg
> creation work handler because, as you mentioned, the work holds the css
> reference. However, we still have to synchronize against pending
> requests, otherwise a work handler can be executed after we have
> destroyed the caches corresponding to the memcg being offlined,
> resulting in a kmem_cache leak.
If that is the case then we should come up with proper synchronization,
because synchronizing via explicit workqueue flushing and canceling is
really bad.
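For reference, the flush-based scheme under discussion would look roughly
like this (a sketch only; it assumes the creation works are queued on a
dedicated workqueue, and the names below are illustrative rather than
taken from the actual patch):

static struct workqueue_struct *memcg_cache_create_wq;	/* illustrative */

/* memcg offline path */
static void memcg_destroy_kmem_caches(struct mem_cgroup *memcg)
{
	/*
	 * Wait for creation works queued before offlining started;
	 * otherwise one of them could register a new child cache after
	 * the walk below, and nobody would ever destroy it (leak).
	 */
	flush_workqueue(memcg_cache_create_wq);

	/* ... walk and destroy the memcg's child caches ... */
}

/* root cache destruction path */
static void kmem_cache_destroy_memcg_children(struct kmem_cache *root_cache)
{
	/*
	 * A pending creation work may still dereference root_cache; let
	 * it finish before the root cache is freed (use-after-free
	 * otherwise).
	 */
	flush_workqueue(memcg_cache_create_wq);

	/* ... destroy the child caches; the caller then frees the root ... */
}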
--
Michal Hocko
SUSE Labs