<div dir="ltr"><div>Hi Andrew,</div><div><br></div><div>You're right, there is no "cpuset" on the RPi where I tried the restore, whereas it's present on the other one.</div><div>If I swap the roles between the two Pis, the checkpoint/restore succeeds :)</div><div><br></div><div>I don't really understand how the configuration can differ, since I loaded the exact same image onto both SD cards.</div><div>Do you have an idea why cpuset is disabled on one RPi, or how I could enable it?</div><div><br></div><div>Thanks a lot,</div><div>Alex.</div><div><br></div><div><br></div><div>Raspberry Pi 1 (restore on this RPi):</div><div>pi@raspberrypi:~ $ cat /proc/cgroups</div><div>#subsys_name<span style="white-space:pre-wrap">        </span>hierarchy<span style="white-space:pre-wrap">        </span>num_cgroups<span style="white-space:pre-wrap">        </span>enabled</div><div>cpu<span style="white-space:pre-wrap">        </span>2<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div>cpuacct<span style="white-space:pre-wrap">        </span>2<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div>memory<span style="white-space:pre-wrap">        </span>0<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>0</div><div>devices<span style="white-space:pre-wrap">        </span>3<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div>freezer<span style="white-space:pre-wrap">        </span>4<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div>net_cls<span style="white-space:pre-wrap">        </span>5<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div>blkio<span style="white-space:pre-wrap">        </span>6<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div><br></div><div>Raspberry Pi 2 (snapshot on this RPi):</div><div>pi@raspberrypi:~ $ cat /proc/cgroups</div><div>#subsys_name<span style="white-space:pre-wrap">        </span>hierarchy<span style="white-space:pre-wrap">        </span>num_cgroups<span style="white-space:pre-wrap">        </span>enabled</div><div>cpuset<span style="white-space:pre-wrap">        </span>2<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div>cpu<span style="white-space:pre-wrap">        </span>3<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div>cpuacct<span style="white-space:pre-wrap">        </span>3<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div>memory<span style="white-space:pre-wrap">        </span>0<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>0</div><div>devices<span style="white-space:pre-wrap">        </span>4<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div>freezer<span style="white-space:pre-wrap">        </span>5<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div>net_cls<span style="white-space:pre-wrap">        </span>6<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div><div>blkio<span style="white-space:pre-wrap">        </span>7<span style="white-space:pre-wrap">        </span>1<span style="white-space:pre-wrap">        </span>1</div></div><div class="gmail_extra"><br><div class="gmail_quote">2016-05-19 23:26 GMT+02:00 Andrew Vagin <span dir="ltr"><<a href="mailto:avagin@virtuozzo.com" target="_blank">avagin@virtuozzo.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Alex,<br>
<br>
Can you show /proc/cgroups from both hosts?<br>
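Once both captures are in hand, the comparison can be made mechanical. A bash sketch (the sample files below reproduce the two /proc/cgroups tables posted in this thread; process substitution requires bash):<br>
<br>

```shell
# Diff the cgroup controller sets from two /proc/cgroups captures.
# The sample files reproduce the captures posted in this thread.
cat > restore_host.txt <<'EOF'
#subsys_name	hierarchy	num_cgroups	enabled
cpu	2	1	1
cpuacct	2	1	1
memory	0	1	0
devices	3	1	1
freezer	4	1	1
net_cls	5	1	1
blkio	6	1	1
EOF

cat > dump_host.txt <<'EOF'
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	2	1	1
cpu	3	1	1
cpuacct	3	1	1
memory	0	1	0
devices	4	1	1
freezer	5	1	1
net_cls	6	1	1
blkio	7	1	1
EOF

# Controllers known to the dump-side kernel but not the restore-side one;
# any hit here is a controller the restore side cannot mount.
comm -23 <(awk 'NR > 1 { print $1 }' dump_host.txt | sort) \
         <(awk 'NR > 1 { print $1 }' restore_host.txt | sort)
# prints: cpuset
```

<br>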
<div><div class="h5"><br>
On Thu, May 19, 2016 at 06:37:10PM +0200, alex vk wrote:<br>
><br>
> pi@raspberrypi:~/criu/criu $ ./migrate-ssh.sh<br>
> Warn (autofs.c:74): Failed to find pipe_ino option (old kernel?)<br>
> Connection to 192.168.0.2 closed.<br>
> Snapshoted<br>
> cgroup.img 100% 1948 1.9KB/s 00:00<br>
> core-943.img 100% 602 0.6KB/s 00:00<br>
> fdinfo-2.img 100% 80 0.1KB/s 00:00<br>
> fs-943.img 100% 18 0.0KB/s 00:00<br>
> ids-943.img 100% 32 0.0KB/s 00:00<br>
> inetsk.img 100% 251 0.3KB/s 00:00<br>
> inventory.img 100% 38 0.0KB/s 00:00<br>
> mm-943.img 100% 1862 1.8KB/s 00:00<br>
> pagemap-943.img 100% 242 0.2KB/s 00:00<br>
> pages-1.img 100% 252KB 252.0KB/s 00:00<br>
> pstree.img 100% 26 0.0KB/s 00:00<br>
> reg-files.img 100% 994 1.0KB/s 00:00<br>
> sigacts-943.img 100% 752 0.7KB/s 00:00<br>
> stats-dump 100% 36 0.0KB/s 00:00<br>
> tty.img 100% 30 0.0KB/s 00:00<br>
> tty-info.img 100% 178 0.2KB/s 00:00<br>
> Image transfered<br>
> (00.082257) Pagemap is fully functional<br>
> (00.082543) Found task size of 7f000000<br>
> (00.082892) Reading image tree<br>
> (00.083089) Add mnt ns 5 pid 943<br>
> (00.083211) Migrating process tree (GID 943->805 SID 791->741)<br>
> (00.083259) Will restore in 0 namespaces<br>
> (00.083295) NS mask to use 0<br>
> (00.083331) Collecting 39/18 (flags 1)<br>
> (00.083428) Collected [dev/pts/0] ID 0x1<br>
> (00.083486) Collected [lib/arm-linux-gnueabihf/libresolv-2.19.so] ID 0x5<br>
> (00.083536) Collected [lib/arm-linux-gnueabihf/libnss_dns-2.19.so] ID 0x6<br>
> (00.083581) Collected [lib/arm-linux-gnueabihf/libnss_files-2.19.so] ID 0x7<br>
> (00.083626) Collected [lib/arm-linux-gnueabihf/libc-2.19.so] ID 0x8<br>
> (00.083672) Collected [lib/arm-linux-gnueabihf/libgcc_s.so.1] ID 0x9<br>
> (00.083715) Collected [lib/arm-linux-gnueabihf/libm-2.19.so] ID 0xa<br>
> (00.083759) Collected [usr/lib/arm-linux-gnueabihf/libstdc++.so.6.0.20] ID 0xb<br>
> (00.083804) Collected [lib/arm-linux-gnueabihf/libz.so.1.2.8] ID 0xc<br>
> (00.083847) Collected [usr/lib/arm-linux-gnueabihf/libenet.so.7.0.0] ID 0xd<br>
> (00.083892) Collected [usr/lib/arm-linux-gnueabihf/libarmmem.so] ID 0xe<br>
> (00.083935) Collected [lib/arm-linux-gnueabihf/ld-2.19.so] ID 0xf<br>
> (00.083978) Collected [lib/arm-linux-gnueabihf/libnss_mdns4_minimal.so.2] ID 0x10<br>
> (00.084022) Collected [usr/lib/cube2/cube2_server] ID 0x11<br>
> (00.084064) Collected [usr/lib/cube2] ID 0x12<br>
> (00.084104) Collected [.] ID 0x13<br>
> (00.084151) `- ... done<br>
> (00.084182) Collecting 52/56 (flags 0)<br>
> (00.084259) No remap-fpath.img image<br>
> (00.084318) `- ... done<br>
> (00.084384) Collecting 42/21 (flags 0)<br>
> (00.084549) `- ... done<br>
> (00.084991) cg: Preparing cgroups yard (cgroups restore mode 0x4)<br>
> (00.089011) cg: Opening .criu.cgyard.lg48ci as cg yard<br>
> (00.089161) cg: Making controller dir .criu.cgyard.lg48ci/blkio (blkio)<br>
> (00.089465) cg: Determined cgroup dir blkio/ already exist<br>
> (00.089495) cg: Skip restoring properties on cgroup dir blkio/<br>
> (00.089593) cg: Making controller dir .criu.cgyard.lg48ci/net_cls (net_cls)<br>
> (00.089862) cg: Determined cgroup dir net_cls/ already exist<br>
> (00.089899) cg: Skip restoring properties on cgroup dir net_cls/<br>
> (00.089978) cg: Making controller dir .criu.cgyard.lg48ci/freezer (freezer)<br>
> (00.090218) cg: Determined cgroup dir freezer/ already exist<br>
> (00.090257) cg: Skip restoring properties on cgroup dir freezer/<br>
> (00.090334) cg: Making controller dir .criu.cgyard.lg48ci/devices (devices)<br>
> (00.090646) cg: Determined cgroup dir devices/ already exist<br>
> (00.090708) cg: Skip restoring properties on cgroup dir devices/<br>
> (00.090811) cg: Making controller dir .criu.cgyard.lg48ci/cpu,cpuacct (cpu,cpuacct)<br>
> (00.091066) cg: Determined cgroup dir cpu,cpuacct/ already exist<br>
> (00.091123) cg: Skip restoring properties on cgroup dir cpu,cpuacct/<br>
> (00.091229) cg: Making controller dir .criu.cgyard.lg48ci/cpuset (cpuset)<br>
> (00.091431) Error (cgroup.c:1562): cg: Can't mount controller dir .criu.cgyard.lg48ci/cpuset: No such file or directory<br>
><br>
> Hi everyone,<br>
><br>
> I'm coming back to you because I still have a problem when restoring a snapshotted process from a given RPi2 on another RPi2.<br>
><br>
> The snapshot/restore works well on the same RPi2 thanks to this patch:<br>
> <a href="https://lists.openvz.org/pipermail/criu/2016-April/027743.html" rel="noreferrer" target="_blank">https://lists.openvz.org/pipermail/criu/2016-April/027743.html</a><br>
> (criu-dev version + patch, kernel 3.18)<br>
><br>
</div></div>> However, when I try to restore a snapshotted process from a given RPi2 on another RPi2 (with exactly the same image installed on the SD card), I obtain the following error:<br>
> (00.091431) Error (cgroup.c:1562): cg: Can't mount controller dir .criu.cgyard.lg48ci/cpuset: No such file or directory<br>
> (the snapshot has succeeded on the first RPi2)<br>
<span class="">><br>
> The dir .criu.cgyard.lg48ci is indeed empty.<br>
> CONFIG_CGROUPS and CONFIG_MEMCG are enabled (CONFIG_MEMCG_SWAP and CONFIG_MEMCG_KMEM are disabled), if that matters...<br>
><br>
> Am I doing something wrong here?<br>
><br>
> Thanks again for your help,<br>
><br>
> Alex<br>
><br>
> Here is the simple script I'm using to migrate, and the full log of the restore:<br>
><br>
> --------------------------------<br>
> ssh -t pi@192.168.0.1 'pid=$(pidof cube2_server); sudo criu dump -D /home/pi/checkpoint/ -t $pid --tcp-established --shell-job'<br>
> echo "Snapshotted"<br>
> scp pi@192.168.0.1:/home/pi/checkpoint/* /home/pi/checkpoint/<br>
> echo "Image transferred"<br>
> sudo criu restore -d -D /home/pi/checkpoint/ --shell-job --tcp-established<br>
> echo "Image restored"<br>
> --------------------------------<br>
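One hardening that could go at the top of such a migrate script (a hypothetical helper, not part of the original migrate-ssh.sh): refuse to dump if the restore side lacks a controller that is active on the dump side, so the failure surfaces before any checkpoint is taken.<br>
<br>

```shell
# Hypothetical pre-flight check for a migrate script: print the cgroup
# controllers that are active in one /proc/cgroups capture but absent or
# disabled in another. Arguments are file paths, e.g. a dump-host capture
# fetched over ssh and the local /proc/cgroups.
missing_controllers() {
    comm -23 <(awk 'NR > 1 && $4 == 1 { print $1 }' "$1" | sort) \
             <(awk 'NR > 1 && $4 == 1 { print $1 }' "$2" | sort)
}

# Intended use in the script (sketch; hosts as in this thread):
#   ssh pi@192.168.0.1 cat /proc/cgroups > dump_cgroups.txt
#   missing=$(missing_controllers dump_cgroups.txt /proc/cgroups)
#   [ -z "$missing" ] || { echo "restore host lacks: $missing" >&2; exit 1; }
```

<br>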
<br>
</span>> _______________________________________________<br>
> CRIU mailing list<br>
> <a href="mailto:CRIU@openvz.org">CRIU@openvz.org</a><br>
> <a href="https://lists.openvz.org/mailman/listinfo/criu" rel="noreferrer" target="_blank">https://lists.openvz.org/mailman/listinfo/criu</a><br>
<br>
</blockquote></div><br></div>