[CRIU] [PATCH 1/2] Re-create cgroups if necessary

Saied Kazemi saied at google.com
Thu Jun 26 11:46:44 PDT 2014


Pavel,

Just a quick follow up:

> 1. cgroups sub-hierarchy
> 2. configuration of cgroups
> 3. populating cgroups with tasks
> 4. creating cgroup mountpoints (if any)

Pertaining to the list above, one of our interns is working on #2 and will
probably have a patch in a month.  Please let me know if someone is already
working on this to avoid duplication of effort.

> With a CLI option we tell CRIU:
>
> 1. Expect the cgroup to already exist, just put the process back in it.
>    If cgroup doesn't exist, fail.
> 2. Expect the cgroup not to exist, create it and put the process in it.
>    If cgroup exists, fail.

I was wondering if you had a chance to think about this.  As explained
before, there are legitimate cases where the cgroups do already exist prior
to criu restore, so their absence is a definite error.  Also, there are
legitimate cases where the cgroups do not already exist prior to criu
restore, so CRIU should create them.  CRIU on its own cannot determine
which scenario it's dealing with.  We can make the default action always
be to create cgroups, but we still need a mechanism to tell it otherwise.
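For concreteness, the two proposed modes could be sketched as follows.  This
is a minimal Python sketch for illustration only: the function name and error
handling are hypothetical, and real CRIU is C code operating on the cgroup
filesystem directly.

```python
import os

def restore_into_cgroup(cg_path, pid, expect_existing):
    """Hypothetical sketch of the two proposed CLI modes.

    expect_existing=True  -> mode 1: cgroup must already exist, else fail.
    expect_existing=False -> mode 2: cgroup must not exist; create it, else fail.
    """
    exists = os.path.isdir(cg_path)
    if expect_existing and not exists:
        raise RuntimeError("cgroup %s does not exist" % cg_path)
    if not expect_existing and exists:
        raise RuntimeError("cgroup %s already exists" % cg_path)
    if not exists:
        os.makedirs(cg_path)
    # Attach the restored task by writing its pid into the cgroup's tasks file.
    with open(os.path.join(cg_path, "tasks"), "w") as f:
        f.write(str(pid))
```

A default of "always create" would then correspond to a third, permissive
mode that skips both existence checks.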

--Saied



On Wed, Jun 25, 2014 at 10:11 AM, Pavel Emelyanov <xemul at parallels.com>
wrote:

> On 06/25/2014 08:36 PM, Serge Hallyn wrote:
> > Quoting Pavel Emelyanov (xemul at parallels.com):
> >> On 06/25/2014 07:55 PM, Serge Hallyn wrote:
> >>> Quoting Pavel Emelyanov (xemul at parallels.com):
> >>>> On 06/25/2014 07:06 PM, Serge Hallyn wrote:
> >>>>
> >>>>> Another complication, btw, will be to do with relative cgroup paths.
> >>>>> I can probably abstract that behind the cgmanager abstraction when I
> >>>>> add that, but the idea is - if I checkpointed u1 on the host, in
> >>>>> /cpuset/lxc/u1, and now restart it inside a nested container, then it
> >>>>> should be restarted at /cpuset/lxc/somecontainer/lxc/u1, perhaps even
> >>>>> /cpuset/lxc/u1/lxc/u1.
> >>>>
> >>>> How about doing it in two steps?
> >>>>
> >>>> First, on dump all cgroup paths are dumped relative to root task groups.
> >>>
> >>> What do you mean by 'root task groups'?
> >>
> >> The paths to cgroups where the init task lives. In criu code this
> >> task is referenced by the root_task variable, so we call it root always :)
> >>
> >>> I would suggest:  For each task, dump the cgroup path relative to the
> >>> init task being dumped.
> >>
> >> +1, this is what I'm suggesting. But this would be tightly affected by
> >> the hierarchy dump. The thing is -- in task image (the core.proto one)
> >> we don't keep paths. We keep the cg_set identifier, which refers to the
> >> set of cgroups from cgroup image, which in turn contain paths.
> >>
> >>> For the cgroup hierarchy, dump the path up to the dumping task's init's
> >>> cgroups.
> >>
> >> Exactly.
> >>
> >>> Then at restore, simply restore relative to the restoring path's init's
> >>> cgroups.
> >>
> >> Em... Not clear what you mean here. Let's imagine criu lives in
> >> / cgroups
> >
> > If I'm pid 2048 and call criu to restore a task, criu looks at my init's
> > cgroups (pid 1's cgroups) and restores relative to that.
> >
> >> always. The container you dump lives in e.g. /cpuset/lxc/ct1/ one. On
> >> restore you want to move it into /cpuset/foo/lxc/ct1/ one.
> >
> > No, by default it would go to /cpuset/lxc/ct1, since my init task is in
> > cgroup /.
> >
> > If I'm now restarting it in a container which is in /cpuset/lxc/ct3,
> > then it gets moved to /cpuset/lxc/ct3/lxc/ct1.
>
> Ah, I see. So we always recreate the same hierarchy structure relative
> to wherever criu sits. That's fine.
>
> Thanks,
> Pavel
>
>
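The two-step scheme agreed on above (dump each task's cgroup path relative to
the root task's cgroups, then re-root it under the restoring criu's init
cgroups) amounts to simple path arithmetic.  A minimal sketch, with
hypothetical function names:

```python
import os

def dump_relative_path(task_cg, root_task_cg):
    # On dump: record each task's cgroup path relative to the root
    # task's cgroup in the same hierarchy.
    return os.path.relpath(task_cg, root_task_cg)

def restore_path(rel_path, restorer_init_cg):
    # On restore: re-root the recorded relative path under the cgroup of
    # the init task on the restoring side (wherever criu sits).
    return os.path.normpath(os.path.join(restorer_init_cg, rel_path))

# Serge's example: ct1 dumped on the host (init in the hierarchy root),
# restored inside a container whose init lives in /cpuset/lxc/ct3.
rel = dump_relative_path("/cpuset/lxc/ct1", "/cpuset")  # "lxc/ct1"
new = restore_path(rel, "/cpuset/lxc/ct3")              # "/cpuset/lxc/ct3/lxc/ct1"
```

Restoring on a host whose init is in the hierarchy root reproduces the
original path unchanged, which matches the default behavior described above.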