<div dir="ltr"><div><div>Thanks for pointing out this option. I tested it to checkpoint and restore my program. It seems that the dump succeeds, but the restore fails. The detailed logs are as follows:<br><br></div>veth100: the link in the host's namespace<br></div>veth101: the link in the child process's namespace<div><div><div><br># criu dump -t 3737 -vvvv --veth-pair veth101=veth100 -j<br><br>... <br>(00.043143) Dumping pstree (pid: 3737)<br>(00.043167) ----------------------------------------<br>(00.043192) Process: 1(3737)<br>(00.043456) ----------------------------------------<br>(00.043576) Dumping 1(3737)'s namespaces<br>(00.044113) Dump UTS namespace 10 via 3737<br>(00.044234) Dump NET namespace info 8 via 3737<br>(00.044680) Mount ns' sysfs in crtools-sys.ZXQPOh<br>(00.044771) Dump IPC namespace 9 via 3737<br>(00.045112) IPC shared memory segments: 0<br>(00.045134) IPC message queues: 0<br>(00.045150) IPC semaphore sets: 0<br>(00.063024) Dumping netns links<br>(00.063173) LD: Got link 1, type 772<br>(00.063523) LD: Got link 268, type 1<br>(00.063570) Found ll addr (de:../6) for veth101<br>(00.063838) Running ip addr save<br>(00.066257) Running ip route save<br>(00.068508) Running iptables-save for iptables-save<br>(00.090183) Namespaces dump complete<br>(00.090322) cg: All tasks in criu's cgroups. 
Nothing to dump.<br>(00.090373) Dumping external sockets<br>(00.090409) tty: Unpaired slave 2<br>(00.090445) Tree of 0 objects<br>(00.090507) 0x9.0x1<br>(00.090564) right:<br>(00.090627) 0x96eb4.0x2<br>(00.090685) | SubTree<br>(00.090758) | 0x96eb4.0x2 (self)<br>(00.090827) --s<br>(00.090900) --r<br>(00.090977) | SubTree<br>(00.091200) | 0x9.0x1 (self)<br>(00.091279) --s<br>(00.091947) Running post-dump scripts<br>(00.092077) Unfreezing tasks into 2<br>(00.092129) Unseizing 3737 into 2<br>(00.092334) Writing stats<br>(00.092545) Dumping finished successfully<br><br><br></div><div># criu restore -vvvv --veth-pair veth101=veth100 <br><br>(00.023527) 1: <--<br>(00.023650) 1: Mount procfs in crtools-proc.ppdrLS<br>(00.042713) 1: Preparing info about shared resources<br>(00.042748) 1: Collecting 34/18 (flags 0)<br>(00.042785) 1: Collected [dev/pts/2] ID 0x1<br>(00.042803) 1: Collected [root/namespace_example/ns] ID 0x3<br>(00.042814) 1: Collected [lib/x86_64-linux-gnu/<a href="http://libc-2.19.so">libc-2.19.so</a>] ID 0x4<br>(00.042822) 1: Collected [lib/x86_64-linux-gnu/<a href="http://ld-2.19.so">ld-2.19.so</a>] ID 0x5<br>(00.042830) 1: Collected [root/namespace_example] ID 0x6<br>(00.042838) 1: Collected [.] ID 0x7<br>(00.042850) 1: `- ... done<br>(00.042856) 1: Collecting 47/53 (flags 0)<br>(00.042867) 1: No remap-fpath.img image<br>(00.042874) 1: `- ... done<br>(00.042883) 1: Collecting 36/20 (flags 0)<br>(00.042893) 1: No ns-files.img image<br>(00.042905) 1: `- ... done<br>(00.042916) 1: Collecting 41/25 (flags 0)<br>(00.042955) 1: Collected pipe entry ID 0x2 PIPE ID 0x96ebc<br>(00.042976) 1: `- ... done<br>(00.042983) 1: Collecting 43/26 (flags 0)<br>(00.042995) 1: No fifo.img image<br>(00.043002) 1: `- ... done<br>(00.043009) 1: Collecting 38/22 (flags 1)<br>(00.043019) 1: No unixsk.img image<br>(00.043026) 1: `- ... done<br>(00.043033) 1: Collecting 39/23 (flags 0)<br>(00.043043) 1: No packetsk.img image<br>(00.043050) 1: `- ... 
done<br>(00.043056) 1: Collecting 40/24 (flags 0)<br>(00.043066) 1: No netlinksk.img image<br>(00.043073) 1: `- ... done<br>(00.043080) 1: Collecting 48/28 (flags 0)<br>(00.043089) 1: No eventfd.img image<br>(00.043096) 1: `- ... done<br>(00.043103) 1: Collecting 74/30 (flags 0)<br>(00.043113) 1: No eventpoll-tfd.img image<br>(00.043120) 1: `- ... done<br>(00.043126) 1: Collecting 49/29 (flags 0)<br>(00.043136) 1: No eventpoll.img image<br>(00.043143) 1: `- ... done<br>(00.043150) 1: Collecting 50/31 (flags 0)<br>(00.043161) 1: No signalfd.img image<br>(00.043176) 1: `- ... done<br>(00.043184) 1: Collecting 51/32 (flags 0)<br>(00.043195) 1: No inotify.img image<br>(00.043203) 1: `- ... done<br>(00.043209) 1: Collecting 72/33 (flags 0)<br>(00.043219) 1: No inotify-wd.img image<br>(00.043226) 1: `- ... done<br>(00.043233) 1: Collecting 52/34 (flags 0)<br>(00.043243) 1: No fanotify.img image<br>(00.043250) 1: `- ... done<br>(00.043257) 1: Collecting 73/35 (flags 0)<br>(00.043267) 1: No fanotify-mark.img image<br>(00.043275) 1: `- ... done<br>(00.043281) 1: Collecting 46/37 (flags 0)<br>(00.043318) 1: `- ... done<br>(00.043327) 1: Collecting 45/36 (flags 0)<br>(00.043349) 1: tty: Collected tty ID 0x1 (pts)<br>(00.043366) 1: `- ... done<br>(00.043373) 1: Collecting 53/42 (flags 0)<br>(00.043383) 1: No tunfile.img image<br>(00.043391) 1: `- ... done<br>(00.043397) 1: Collecting 35/19 (flags 0)<br>(00.043407) 1: No ext-files.img image<br>(00.043414) 1: `- ... done<br>(00.043423) 1: Collecting 55/45 (flags 0)<br>(00.043433) 1: No timerfd.img image<br>(00.043440) 1: `- ... done<br>(00.043446) 1: Collecting 56/38 (flags 0)<br>(00.043456) 1: No filelocks.img image<br>(00.043463) 1: `- ... 
done<br>(00.043485) 1: Collected pipe data for 0x96ebc (chain 28)<br>(00.043505) 1: No fifo-data.img image<br>(00.043516) 1: Trying to read socket queues image<br>(00.043528) 1: No sk-queues.img image<br>(00.043539) 1: Error (tty.c:333): tty: Found slave peer index 2 without correspond master peer<br>(00.043722) Error (cr-restore.c:1919): Restoring FAILED.<br><br></div><div>Does that error mean the slave index is 2? However, neither veth100 nor veth101 has index 2. Thanks.<br><br></div><div>- Hui<br></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jul 29, 2015 at 1:28 AM, Ross Boucher <span dir="ltr"><<a href="mailto:rboucher@gmail.com" target="_blank">rboucher@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">You can explicitly map the veths you want when restoring:<div><br></div><div><div> --veth-pair IN=OUT map inside veth device name to outside one</div><div> can optionally append @<bridge-name> to OUT for moving</div><div> the outside veth to the named bridge</div></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="h5">On Tue, Jul 28, 2015 at 9:55 PM, Hui Kang <span dir="ltr"><<a href="mailto:hkang.sunysb@gmail.com" target="_blank">hkang.sunysb@gmail.com</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5"><div dir="ltr"><div><div><div><div>Hi,<br></div>I need to restore a process which has one end of the veth, say veth100, in the process's namespace. The other end, say veth101, is in the host namespace.<br><br>However, when I checkpoint and restore the process, veth100 in the process namespace is connected to a different veth in the host namespace, which is veth0.<br></div>The problem is that veth100 in the restored process is not connected to veth101.<br><br></div>I set loglevel=4, but 
failed to find any useful information in restore.log. Does anyone know how to debug this issue? Thanks.<span><font color="#888888"><br><br></font></span></div><span><font color="#888888">- Hui<br> </font></span></div>
<br></div></div>_______________________________________________<br>
CRIU mailing list<br>
<a href="mailto:CRIU@openvz.org" target="_blank">CRIU@openvz.org</a><br>
<a href="https://lists.openvz.org/mailman/listinfo/criu" rel="noreferrer" target="_blank">https://lists.openvz.org/mailman/listinfo/criu</a><br>
<br></blockquote></div><br></div>
</blockquote></div><br></div></div></div></div></div></div>
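A note on the failing restore, offered as a guess rather than a confirmed diagnosis: the "slave peer index 2" in the tty.c error refers to the pseudoterminal slave (the restore log collects [dev/pts/2], and the dump log already warned "tty: Unpaired slave 2"), not to a veth ifindex. The dump command above was run with -j (--shell-job), but the restore command shown omits it; without --shell-job, CRIU expects the pts master to be part of the restored tree and fails when it only finds the slave. A sketch of the matching restore invocation:

```shell
# Assumption: the process was dumped as a shell job (-j), so restore must
# also be told it is a shell job. The pts slave (index 2 = /dev/pts/2 on
# the dump side) then gets paired with the restorer's controlling terminal
# instead of a missing in-tree master.
criu restore -vvvv --veth-pair veth101=veth100 -j
```

If that is the cause, the error should disappear regardless of the veth mapping, since the two issues are independent.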