[CRIU] Process migration between two RPi2 with Criu

alex vk avankemp at gmail.com
Fri May 20 02:06:53 PDT 2016


Hi Andrew,

You're right, there was no "cpuset" on the RPi where I tried the restore,
whereas it's present on the other one.
If I swap the roles between the two Pis, the checkpoint/restore succeeds :)

I don't really understand how the configuration can differ, as I was using
the exact same image loaded on the SD card.
Do you have an idea why cpuset is disabled on one RPi, or how I could
enable it? (See the diagnostic sketch after the /proc/cgroups listings
below.)

Thanks a lot,
Alex.


Raspberry 1: (restore on this RPi)
pi@raspberrypi:~ $ cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpu 2 1 1
cpuacct 2 1 1
memory 0 1 0
devices 3 1 1
freezer 4 1 1
net_cls 5 1 1
blkio 6 1 1

Raspberry 2: (snapshot on this RPi)
#subsys_name hierarchy num_cgroups enabled
cpuset 2 1 1
cpu 3 1 1
cpuacct 3 1 1
memory 0 1 0
devices 4 1 1
freezer 5 1 1
net_cls 6 1 1
blkio 7 1 1
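
In case it helps, here is what I plan to check next on both Pis (a sketch;
the paths are the usual Raspbian ones):

# A controller that is compiled in but disabled at boot still shows up in
# /proc/cgroups with enabled=0 (like memory above); one missing from the
# list entirely was most likely not built into the kernel at all.
grep cpuset /proc/cgroups

# Check whether the two Pis really run the same kernel build.
uname -a

# Look for cgroup_disable= / cgroup_enable= entries on the kernel command line.
cat /proc/cmdline /boot/cmdline.txt

# If the kernel exposes its config (CONFIG_IKCONFIG_PROC), check CONFIG_CPUSETS.
zcat /proc/config.gz | grep CONFIG_CPUSETS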

2016-05-19 23:26 GMT+02:00 Andrew Vagin <avagin at virtuozzo.com>:

> Hi Alex,
>
> Can you show /proc/cgroups from both hosts?
>
> On Thu, May 19, 2016 at 06:37:10PM +0200, alex vk wrote:
> >
> > pi@raspberrypi:~/criu/criu $ ./migrate-ssh.sh
> > Warn  (autofs.c:74): Failed to find pipe_ino option (old kernel?)
> > Connection to 192.168.0.2 closed.
> > Snapshotted
> > cgroup.img                                    100% 1948     1.9KB/s   00:00
> > core-943.img                                  100%  602     0.6KB/s   00:00
> > fdinfo-2.img                                  100%   80     0.1KB/s   00:00
> > fs-943.img                                    100%   18     0.0KB/s   00:00
> > ids-943.img                                   100%   32     0.0KB/s   00:00
> > inetsk.img                                    100%  251     0.3KB/s   00:00
> > inventory.img                                 100%   38     0.0KB/s   00:00
> > mm-943.img                                    100% 1862     1.8KB/s   00:00
> > pagemap-943.img                               100%  242     0.2KB/s   00:00
> > pages-1.img                                   100%  252KB 252.0KB/s   00:00
> > pstree.img                                    100%   26     0.0KB/s   00:00
> > reg-files.img                                 100%  994     1.0KB/s   00:00
> > sigacts-943.img                               100%  752     0.7KB/s   00:00
> > stats-dump                                    100%   36     0.0KB/s   00:00
> > tty.img                                       100%   30     0.0KB/s   00:00
> > tty-info.img                                  100%  178     0.2KB/s   00:00
> > Image transferred
> > (00.082257) Pagemap is fully functional
> > (00.082543) Found task size of 7f000000
> > (00.082892) Reading image tree
> > (00.083089) Add mnt ns 5 pid 943
> > (00.083211) Migrating process tree (GID 943->805 SID 791->741)
> > (00.083259) Will restore in 0 namespaces
> > (00.083295) NS mask to use 0
> > (00.083331) Collecting 39/18 (flags 1)
> > (00.083428) Collected [dev/pts/0] ID 0x1
> > (00.083486) Collected [lib/arm-linux-gnueabihf/libresolv-2.19.so] ID 0x5
> > (00.083536) Collected [lib/arm-linux-gnueabihf/libnss_dns-2.19.so] ID 0x6
> > (00.083581) Collected [lib/arm-linux-gnueabihf/libnss_files-2.19.so] ID 0x7
> > (00.083626) Collected [lib/arm-linux-gnueabihf/libc-2.19.so] ID 0x8
> > (00.083672) Collected [lib/arm-linux-gnueabihf/libgcc_s.so.1] ID 0x9
> > (00.083715) Collected [lib/arm-linux-gnueabihf/libm-2.19.so] ID 0xa
> > (00.083759) Collected [usr/lib/arm-linux-gnueabihf/libstdc++.so.6.0.20] ID 0xb
> > (00.083804) Collected [lib/arm-linux-gnueabihf/libz.so.1.2.8] ID 0xc
> > (00.083847) Collected [usr/lib/arm-linux-gnueabihf/libenet.so.7.0.0] ID 0xd
> > (00.083892) Collected [usr/lib/arm-linux-gnueabihf/libarmmem.so] ID 0xe
> > (00.083935) Collected [lib/arm-linux-gnueabihf/ld-2.19.so] ID 0xf
> > (00.083978) Collected [lib/arm-linux-gnueabihf/libnss_mdns4_minimal.so.2] ID 0x10
> > (00.084022) Collected [usr/lib/cube2/cube2_server] ID 0x11
> > (00.084064) Collected [usr/lib/cube2] ID 0x12
> > (00.084104) Collected [.] ID 0x13
> > (00.084151)  `- ... done
> > (00.084182) Collecting 52/56 (flags 0)
> > (00.084259) No remap-fpath.img image
> > (00.084318)  `- ... done
> > (00.084384) Collecting 42/21 (flags 0)
> > (00.084549)  `- ... done
> > (00.084991) cg: Preparing cgroups yard (cgroups restore mode 0x4)
> > (00.089011) cg: Opening .criu.cgyard.lg48ci as cg yard
> > (00.089161) cg:  Making controller dir .criu.cgyard.lg48ci/blkio (blkio)
> > (00.089465) cg: Determined cgroup dir blkio/ already exist
> > (00.089495) cg: Skip restoring properties on cgroup dir blkio/
> > (00.089593) cg:  Making controller dir .criu.cgyard.lg48ci/net_cls (net_cls)
> > (00.089862) cg: Determined cgroup dir net_cls/ already exist
> > (00.089899) cg: Skip restoring properties on cgroup dir net_cls/
> > (00.089978) cg:  Making controller dir .criu.cgyard.lg48ci/freezer (freezer)
> > (00.090218) cg: Determined cgroup dir freezer/ already exist
> > (00.090257) cg: Skip restoring properties on cgroup dir freezer/
> > (00.090334) cg:  Making controller dir .criu.cgyard.lg48ci/devices (devices)
> > (00.090646) cg: Determined cgroup dir devices/ already exist
> > (00.090708) cg: Skip restoring properties on cgroup dir devices/
> > (00.090811) cg:  Making controller dir .criu.cgyard.lg48ci/cpu,cpuacct (cpu,cpuacct)
> > (00.091066) cg: Determined cgroup dir cpu,cpuacct/ already exist
> > (00.091123) cg: Skip restoring properties on cgroup dir cpu,cpuacct/
> > (00.091229) cg:  Making controller dir .criu.cgyard.lg48ci/cpuset (cpuset)
> > (00.091431) Error (cgroup.c:1562): cg:  Can't mount controller dir .criu.cgyard.lg48ci/cpuset: No such file or directory
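> >
> > For reference, the failing step can be reproduced by hand (a sketch,
> > run as root on the restore Pi; the mount point name is arbitrary):
> >
> > mkdir -p /tmp/cgyard-test/cpuset
> > # CRIU mounts one such dir per controller recorded in cgroup.img; the
> > # mount fails here because this kernel provides no cpuset controller.
> > mount -t cgroup -o cpuset cpuset /tmp/cgyard-test/cpuset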
> >
> > Hi everyone,
> >
> > I'm coming back to you because I still have a problem when restoring a
> > snapshotted process from a given RPi2 on another RPi2.
> >
> > The snapshot/restore works well on the same RPi2 thanks to the patch:
> > https://lists.openvz.org/pipermail/criu/2016-April/027743.html
> > (criu-dev version + patch and kernel 3.18)
> >
> > However, when I try to restore a snapshotted process from a given RPi2
> > on another RPi2 (with exactly the same image installed on the SD card),
> > I obtain the following error:
> >
> > (00.091431) Error (cgroup.c:1562): cg:  Can't mount controller dir
> > .criu.cgyard.lg48ci/cpuset: No such file or directory
> >
> > (the snapshot had succeeded on the first RPi2)
> >
> > The dir .criu.cgyard.lg48ci is indeed empty.
> > CONFIG_CGROUPS and CONFIG_MEMCG are enabled (CONFIG_MEMCG_SWAP and
> > CONFIG_MEMCG_KMEM are disabled), if that matters...
> >
> > Am I doing something wrong here?
> >
> > Thanks again for your help,
> >
> > Alex
> >
> > Here is the simple script I'm using to migrate (the full log of the
> > restore is above):
> >
> > --------------------------------
> > ssh -t pi@192.168.0.1 'pid=$(pidof cube2_server); sudo criu dump \
> >     -D /home/pi/checkpoint/ -t $pid --tcp-established --shell-job'
> > echo "Snapshotted"
> > scp pi@192.168.0.1:/home/pi/checkpoint/* /home/pi/checkpoint/
> > echo "Image transferred"
> > sudo criu restore -d -D /home/pi/checkpoint/ --shell-job --tcp-established
> > echo "Image restored"
> > --------------------------------
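> >
> > (Not part of the script yet: a pre-flight check I could add before the
> > restore step, so a missing controller is caught before criu runs. It
> > assumes the dump host is reachable as pi@192.168.0.1, as above.)
> >
> > # Collect the enabled controllers on both hosts ($4 is the "enabled"
> > # column of /proc/cgroups; the header line is filtered out by the test).
> > ssh pi@192.168.0.1 "awk '\$4 == 1 {print \$1}' /proc/cgroups" | sort > /tmp/src.ctl
> > awk '$4 == 1 {print $1}' /proc/cgroups | sort > /tmp/dst.ctl
> > # Controllers enabled on the dump host but absent here would make the
> > # cgroup yard mount fail, so bail out early.
> > missing=$(comm -23 /tmp/src.ctl /tmp/dst.ctl)
> > [ -n "$missing" ] && { echo "restore host is missing: $missing"; exit 1; }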

