[CRIU] Debugging the process to restore veth in a namespace
Hui Kang
hkang.sunysb at gmail.com
Fri Jul 31 08:16:10 PDT 2015
More logs from restoring the veth of the process. They show that the link
already exists (the RTNETLINK "File exists" answers below), which confuses me.
(00.006593) Wait until namespaces are created
(00.008310) Running setup-namespaces scripts
(00.009489) 1: Restoring namespaces 1 flags 0x6c000000
(00.010543) 1: Restoring link lo type 1
(00.010967) 1: Restoring link veth101 type 2
(00.010975) 1: Restoring netdev veth101 idx 281
(00.010980) 1: Restore ll addr (d6:../6) for device
(00.012124) 1: DEBUG Skip veth101/accept_local, val =0
(00.012293) 1: DEBUG Skip veth101/accept_redirects, val =1
(00.012331) 1: DEBUG Skip veth101/accept_source_route, val =1
(00.012368) 1: DEBUG Skip veth101/arp_accept, val =0
(00.012403) 1: DEBUG Skip veth101/arp_announce, val =0
(00.012443) 1: DEBUG Skip veth101/arp_filter, val =0
(00.012483) 1: DEBUG Skip veth101/arp_ignore, val =0
(00.012529) 1: DEBUG Skip veth101/arp_notify, val =0
(00.012565) 1: DEBUG Skip veth101/bootp_relay, val =0
(00.012606) 1: DEBUG Skip veth101/disable_policy, val =0
(00.012644) 1: DEBUG Skip veth101/disable_xfrm, val =0
(00.012689) 1: DEBUG Skip veth101/force_igmp_version, val =0
(00.012723) 1: DEBUG Skip veth101/forwarding, val =1
(00.012765) 1: DEBUG Skip veth101/igmpv2_unsolicited_report_interval, val =10000
(00.012802) 1: DEBUG Skip veth101/igmpv3_unsolicited_report_interval, val =1000
(00.012849) 1: DEBUG Skip veth101/log_martians, val =0
(00.012885) 1: DEBUG Skip veth101/medium_id, val =0
(00.012931) 1: DEBUG Skip veth101/promote_secondaries, val =0
(00.012970) 1: DEBUG Skip veth101/proxy_arp, val =0
(00.013012) 1: DEBUG Skip veth101/proxy_arp_pvlan, val =0
(00.013047) 1: DEBUG Skip veth101/route_localnet, val =0
(00.013086) 1: DEBUG Skip veth101/rp_filter, val =1
(00.013125) 1: DEBUG Skip veth101/secure_redirects, val =1
(00.013165) 1: DEBUG Skip veth101/send_redirects, val =1
(00.013203) 1: DEBUG Skip veth101/shared_media, val =1
(00.013237) 1: DEBUG Skip veth101/src_valid_mark, val =0
(00.013276) 1: DEBUG Skip veth101/tag, val =0
(00.013360) 1: Running ip addr restore
RTNETLINK answers: File exists
RTNETLINK answers: File exists
(00.016170) 1: Running ip route restore
RTNETLINK answers: File exists
(00.017770) 1: Running iptables-restore for iptables-restore
(00.022426) 1: kernel/hostname nr 12
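In case it helps, this is roughly how I inspect the restored namespace by
hand (just a sketch; <restore-pid> is a placeholder for the PID of the
restored task):

  # look at what already exists inside the restored task's network namespace
  nsenter -t <restore-pid> -n ip link show
  nsenter -t <restore-pid> -n ip addr show dev veth101
  nsenter -t <restore-pid> -n ip route show
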
On Fri, Jul 31, 2015 at 10:25 AM, Hui Kang <hkang.sunysb at gmail.com> wrote:
> Thanks for pointing out this option. I tested it by checkpointing and
> restoring my program. The dump seems to succeed, but the restore fails;
> the detailed logs are below.
>
> veth100: the link in the host's namespace
> veth101: the link in the child process's namespace
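>
> (For context, the pair was created roughly along these lines; the exact
> commands in my setup may differ, and <child-pid> is a placeholder:
>
>   ip link add veth100 type veth peer name veth101
>   ip link set veth101 netns <child-pid>
> )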
>
> # criu dump -t 3737 -vvvv --veth-pair veth101=veth100 -j
>
> ...
> (00.043143) Dumping pstree (pid: 3737)
> (00.043167) ----------------------------------------
> (00.043192) Process: 1(3737)
> (00.043456) ----------------------------------------
> (00.043576) Dumping 1(3737)'s namespaces
> (00.044113) Dump UTS namespace 10 via 3737
> (00.044234) Dump NET namespace info 8 via 3737
> (00.044680) Mount ns' sysfs in crtools-sys.ZXQPOh
> (00.044771) Dump IPC namespace 9 via 3737
> (00.045112) IPC shared memory segments: 0
> (00.045134) IPC message queues: 0
> (00.045150) IPC semaphore sets: 0
> (00.063024) Dumping netns links
> (00.063173) LD: Got link 1, type 772
> (00.063523) LD: Got link 268, type 1
> (00.063570) Found ll addr (de:../6) for veth101
> (00.063838) Running ip addr save
> (00.066257) Running ip route save
> (00.068508) Running iptables-save for iptables-save
> (00.090183) Namespaces dump complete
> (00.090322) cg: All tasks in criu's cgroups. Nothing to dump.
> (00.090373) Dumping external sockets
> (00.090409) tty: Unpaired slave 2
> (00.090445) Tree of 0 objects
> (00.090507) 0x9.0x1
> (00.090564) right:
> (00.090627) 0x96eb4.0x2
> (00.090685) | SubTree
> (00.090758) | 0x96eb4.0x2 (self)
> (00.090827) --s
> (00.090900) --r
> (00.090977) | SubTree
> (00.091200) | 0x9.0x1 (self)
> (00.091279) --s
> (00.091947) Running post-dump scripts
> (00.092077) Unfreezing tasks into 2
> (00.092129) Unseizing 3737 into 2
> (00.092334) Writing stats
> (00.092545) Dumping finished successfully
>
>
> # criu restore -vvvv --veth-pair veth101=veth100
>
> (00.023527) 1: <--
> (00.023650) 1: Mount procfs in crtools-proc.ppdrLS
> (00.042713) 1: Preparing info about shared resources
> (00.042748) 1: Collecting 34/18 (flags 0)
> (00.042785) 1: Collected [dev/pts/2] ID 0x1
> (00.042803) 1: Collected [root/namespace_example/ns] ID 0x3
> (00.042814) 1: Collected [lib/x86_64-linux-gnu/libc-2.19.so] ID 0x4
> (00.042822) 1: Collected [lib/x86_64-linux-gnu/ld-2.19.so] ID 0x5
> (00.042830) 1: Collected [root/namespace_example] ID 0x6
> (00.042838) 1: Collected [.] ID 0x7
> (00.042850) 1: `- ... done
> (00.042856) 1: Collecting 47/53 (flags 0)
> (00.042867) 1: No remap-fpath.img image
> (00.042874) 1: `- ... done
> (00.042883) 1: Collecting 36/20 (flags 0)
> (00.042893) 1: No ns-files.img image
> (00.042905) 1: `- ... done
> (00.042916) 1: Collecting 41/25 (flags 0)
> (00.042955) 1: Collected pipe entry ID 0x2 PIPE ID 0x96ebc
> (00.042976) 1: `- ... done
> (00.042983) 1: Collecting 43/26 (flags 0)
> (00.042995) 1: No fifo.img image
> (00.043002) 1: `- ... done
> (00.043009) 1: Collecting 38/22 (flags 1)
> (00.043019) 1: No unixsk.img image
> (00.043026) 1: `- ... done
> (00.043033) 1: Collecting 39/23 (flags 0)
> (00.043043) 1: No packetsk.img image
> (00.043050) 1: `- ... done
> (00.043056) 1: Collecting 40/24 (flags 0)
> (00.043066) 1: No netlinksk.img image
> (00.043073) 1: `- ... done
> (00.043080) 1: Collecting 48/28 (flags 0)
> (00.043089) 1: No eventfd.img image
> (00.043096) 1: `- ... done
> (00.043103) 1: Collecting 74/30 (flags 0)
> (00.043113) 1: No eventpoll-tfd.img image
> (00.043120) 1: `- ... done
> (00.043126) 1: Collecting 49/29 (flags 0)
> (00.043136) 1: No eventpoll.img image
> (00.043143) 1: `- ... done
> (00.043150) 1: Collecting 50/31 (flags 0)
> (00.043161) 1: No signalfd.img image
> (00.043176) 1: `- ... done
> (00.043184) 1: Collecting 51/32 (flags 0)
> (00.043195) 1: No inotify.img image
> (00.043203) 1: `- ... done
> (00.043209) 1: Collecting 72/33 (flags 0)
> (00.043219) 1: No inotify-wd.img image
> (00.043226) 1: `- ... done
> (00.043233) 1: Collecting 52/34 (flags 0)
> (00.043243) 1: No fanotify.img image
> (00.043250) 1: `- ... done
> (00.043257) 1: Collecting 73/35 (flags 0)
> (00.043267) 1: No fanotify-mark.img image
> (00.043275) 1: `- ... done
> (00.043281) 1: Collecting 46/37 (flags 0)
> (00.043318) 1: `- ... done
> (00.043327) 1: Collecting 45/36 (flags 0)
> (00.043349) 1: tty: Collected tty ID 0x1 (pts)
> (00.043366) 1: `- ... done
> (00.043373) 1: Collecting 53/42 (flags 0)
> (00.043383) 1: No tunfile.img image
> (00.043391) 1: `- ... done
> (00.043397) 1: Collecting 35/19 (flags 0)
> (00.043407) 1: No ext-files.img image
> (00.043414) 1: `- ... done
> (00.043423) 1: Collecting 55/45 (flags 0)
> (00.043433) 1: No timerfd.img image
> (00.043440) 1: `- ... done
> (00.043446) 1: Collecting 56/38 (flags 0)
> (00.043456) 1: No filelocks.img image
> (00.043463) 1: `- ... done
> (00.043485) 1: Collected pipe data for 0x96ebc (chain 28)
> (00.043505) 1: No fifo-data.img image
> (00.043516) 1: Trying to read socket queues image
> (00.043528) 1: No sk-queues.img image
> (00.043539) 1: Error (tty.c:333): tty: Found slave peer index 2
> without correspond master peer
> (00.043722) Error (cr-restore.c:1919): Restoring FAILED.
>
> Does that error mean the slave peer index is 2? However, neither veth100
> nor veth101 has index 2. Thanks.
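>
> (One guess on my side, not a confirmed fix: the dump above was taken with
> -j/--shell-job, so perhaps the restore needs the same flag, e.g.:
>
>   criu restore -vvvv --veth-pair veth101=veth100 -j
> )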
>
> - Hui
>
> On Wed, Jul 29, 2015 at 1:28 AM, Ross Boucher <rboucher at gmail.com> wrote:
>
>> You can explicitly map the veths you want when restoring:
>>
>>   --veth-pair IN=OUT    map inside veth device name to outside one;
>>                         can optionally append @<bridge-name> to OUT
>>                         for moving the outside veth to the named bridge
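>>
>> For example (the bridge name here is just a placeholder):
>>
>>   criu restore --veth-pair veth101=veth100
>>   criu restore --veth-pair veth101=veth100@br0   # also moves veth100 onto bridge br0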
>>
>>
>> On Tue, Jul 28, 2015 at 9:55 PM, Hui Kang <hkang.sunysb at gmail.com> wrote:
>>
>>> Hi,
>>> I need to restore a process which has one end of a veth pair, say
>>> veth100, in the process's namespace. The other end, say veth101, is in
>>> the host namespace.
>>>
>>> However, when I checkpoint and restore the process, the veth1 in the
>>> process namespace ends up connected to a different veth in the host
>>> namespace, veth0. The problem is that veth100 in the restored process is
>>> not connected to veth101.
>>>
>>> I set loglevel=4 but could not find any useful information in
>>> restore.log. Does anyone know how to debug this issue? Thanks.
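>>>
>>> For reference, this is roughly what I do to capture and search the log
>>> (a sketch; the log file name is arbitrary):
>>>
>>>   criu restore -vvvv -o restore.log
>>>   grep -i veth restore.log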
>>>
>>> - Hui
>>>
>>>
>>>
>>
>