[CRIU] Re-Checkpoint Applications
Abdullah Yousafzai
yousafzai.abdullah at gmail.com
Mon Feb 22 20:09:53 PST 2016
script (test.sh):
#!/bin/sh
while :; do
    sleep 1
    date
done
commands:
setsid ./test.sh < /dev/null &> test.log &
criu dump -t 2221 -vvv -o dump.log
criu restore -d -vvv -o restore.log
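In other words, the full cycle I am aiming for looks roughly like this (only a
sketch; the chmod, the pidof call and the -D image directories are additions
for illustration, not taken from the HOWTO):

#!/bin/bash
# Sketch of the repeated dump/restore cycle; criu is run as root.
chmod +x test.sh                                    # so ./test.sh can be executed
setsid ./test.sh < /dev/null &> test.log &
sleep 1
for i in 1 2 3; do
    pid=$(pidof -x test.sh)                         # PID of the (restored) script
    mkdir -p img$i
    criu dump -D img$i -t "$pid" -vvv -o dump.log   # task is killed after dump
    criu restore -D img$i -d -vvv -o restore.log    # restore it, detached
done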
But after the restore, when I run:
ps -C test.sh
I do not find the script. When I run:
ps -C criu
I do find a process, but I am unable to kill it with:
kill <criu_PID>
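For reference, that process can be inspected through /proc to see what it
really is (nothing CRIU-specific; <criu_PID> below is just the PID that
ps -C criu reports):

cat /proc/<criu_PID>/comm                        # short process name
tr '\0' ' ' < /proc/<criu_PID>/cmdline; echo     # full command line
grep -E '^(Name|State|PPid)' /proc/<criu_PID>/status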
Meanwhile test.log keeps being updated with new dates. But if I try to
checkpoint the criu process shown by ps, the dump fails and gives me the
following log:
(00.000022) Probing sock diag modules
(00.000094) Done probing
(00.000100) ========================================
(00.000107) Dumping processes (pid: 2672)
(00.000110) ========================================
(00.000143) Pagemap is fully functional
(00.000181) Found anon-shmem device at 5
(00.000188) Reset 27061's dirty tracking
(00.000211) ... done
(00.006545) Dirty tracking support is OFF
(00.006609) Error (proc_parse.c:839): Can't open 27061/loginuid on procfs:
No such file or directory
(00.006623) irmap: Searching irmap cache in work dir
(00.006629) No irmap-cache image
(00.006630) irmap: Searching irmap cache in parent
(00.006632) irmap: No irmap cache
(00.006699) Perparing image inventory (version 1)
(00.006727) Add pid ns 1 pid 27061
(00.006737) Add net ns 2 pid 27061
(00.006743) Add ipc ns 3 pid 27061
(00.006747) Add uts ns 4 pid 27061
(00.006751) Add mnt ns 5 pid 27061
(00.006754) Add user ns 6 pid 27061
(00.006755) cg: Dumping cgroups for 27061
(00.006769) cg: Set 1 is criu one
(00.006819) Seized task 2672, state 1
(00.006831) Collected (4 attempts, 0 in_progress)
(00.006840) Seized task 2640, state 0
(00.006881) Collected (4 attempts, 0 in_progress)
(00.006888) Collected (4 attempts, 0 in_progress)
(00.006916) Collected 2640 in 1 state
(00.006939) Collected (3 attempts, 0 in_progress)
(00.006941) Collected 2672 in 1 state
(00.006995) Lock network
(00.007016) lockinfo: 1:1 0 20461 00:0e:8656 128 128
(00.007032) lockinfo: 2:1 0 20461 00:0e:8653 1073741826 1073742335
(00.007035) lockinfo: 3:1 0 15462 00:0e:156213 128 128
(00.007038) lockinfo: 4:1 0 15462 00:0e:156210 1073741826 1073742335
(00.007040) lockinfo: 5:1 1 15462 00:0e:156192 1073741824 1073742335
(00.007042) lockinfo: 6:2 1 1620 00:0e:7867 0 EOF
(00.007045) lockinfo: 7:1 0 1325 00:0e:7733 128 128
(00.007047) lockinfo: 8:1 0 1325 00:0e:7730 1073741826 1073742335
(00.007100) type tmpfs source tmpfs mnt_id 13 s_dev 0xd / @ ./ flags
0x300001 options
(00.007108) type ext4 source /dev/loop1 mnt_id 16 s_dev 0x700001 / @
./system flags 0x300001 options data=ordered
(00.007112) type tmpfs source tmpfs mnt_id 17 s_dev 0xe / @ ./data flags
0x300000 options
(00.007116) type tmpfs source tmpfs mnt_id 18 s_dev 0xf / @ ./dev flags
0x300002 options mode=755
(00.007120) type devpts source devpts mnt_id 19 s_dev 0xb / @ ./dev/pts
flags 0x300000 options mode=600
(00.007128) type proc source proc mnt_id 20 s_dev 0x4 / @ ./proc flags
0x300000 options
(00.007131) type sysfs source sysfs mnt_id 21 s_dev 0xc / @ ./sys flags
0x300000 options
(00.007135) type debugfs source debugfs mnt_id 22 s_dev 0x6 / @
./sys/kernel/debug flags 0x300000 options
(00.007138) type cgroup source none mnt_id 23 s_dev 0x10 / @ ./acct flags
0x300000 options cpuacct
(00.007142) type tmpfs source none mnt_id 24 s_dev 0x11 / @ ./sys/fs/cgroup
flags 0x300000 options mode=750,gid=1000
(00.007146) type tmpfs source tmpfs mnt_id 25 s_dev 0x12 / @ ./mnt/asec
flags 0x300000 options mode=755,gid=1000
(00.007150) type tmpfs source tmpfs mnt_id 26 s_dev 0x13 / @ ./mnt/obb
flags 0x300000 options mode=755,gid=1000
(00.007153) type cgroup source none mnt_id 27 s_dev 0x14 / @ ./dev/cpuctl
flags 0x300000 options cpu
(00.007156) type pstore source pstore mnt_id 28 s_dev 0x15 / @
./sys/fs/pstore flags 0x300000 options
(00.007160) type tmpfs source none mnt_id 29 s_dev 0x16 / @ ./cache flags
0x100406 options
(00.007167) type fuse source /dev/fuse mnt_id 30 s_dev 0x17 / @
./mnt/shell/emulated flags 0x30000e options
user_id=1023,group_id=1023,default_permissions,allow_other
(00.007173) mnt: Building mountpoints tree
(00.007176) mnt: Resorting siblings on 13
(00.007177) mnt: Resorting siblings on 30
(00.007178) mnt: Resorting siblings on 29
(00.007178) mnt: Resorting siblings on 26
(00.007179) mnt: Resorting siblings on 25
(00.007180) mnt: Resorting siblings on 23
(00.007180) mnt: Resorting siblings on 21
(00.007181) mnt: Resorting siblings on 28
(00.007182) mnt: Resorting siblings on 24
(00.007182) mnt: Resorting siblings on 22
(00.007183) mnt: Resorting siblings on 20
(00.007184) mnt: Resorting siblings on 18
(00.007186) mnt: Resorting siblings on 27
(00.007187) mnt: Resorting siblings on 19
(00.007187) mnt: Resorting siblings on 17
(00.007188) mnt: Resorting siblings on 16
(00.007189) mnt: Done:
(00.007189) mnt: [./](13->0)
(00.007190) mnt: [./cache](29->13)
(00.007191) mnt: <--
(00.007192) mnt: [./system](16->13)
(00.007193) mnt: <--
(00.007194) mnt: [./data](17->13)
(00.007194) mnt: <--
(00.007195) mnt: [./dev](18->13)
(00.007196) mnt: [./dev/cpuctl](27->18)
(00.007197) mnt: <--
(00.007197) mnt: [./dev/pts](19->18)
(00.007198) mnt: <--
(00.007199) mnt: <--
(00.007199) mnt: [./proc](20->13)
(00.007200) mnt: <--
(00.007201) mnt: [./sys](21->13)
(00.007202) mnt: [./sys/fs/pstore](28->21)
(00.007203) mnt: <--
(00.007203) mnt: [./sys/kernel/debug](22->21)
(00.007204) mnt: <--
(00.007205) mnt: [./sys/fs/cgroup](24->21)
(00.007205) mnt: <--
(00.007206) mnt: <--
(00.007207) mnt: [./acct](23->13)
(00.007208) mnt: <--
(00.007208) mnt: [./mnt/asec](25->13)
(00.007209) mnt: <--
(00.007210) mnt: [./mnt/obb](26->13)
(00.007210) mnt: <--
(00.007211) mnt: [./mnt/shell/emulated](30->13)
(00.007212) mnt: <--
(00.007213) mnt: <--
(00.007214) Collecting netns 2/27061
(00.007456) ========================================
(00.007459) Dumping task (pid: 2672)
(00.007460) ========================================
(00.007460) Obtaining task stat ...
(00.007477)
(00.007478) Collecting mappings (pid: 2672)
(00.007479) ----------------------------------------
(00.007596) Collected, longest area occupies 405 pages
(00.007598) 0x400000-0x595000 (1620K) prot 0x5 flags 0x2 st 0x41 off 0 reg
fp shmid: 0
(00.007600) 0x794000-0x79c000 (32K) prot 0x3 flags 0x2 st 0x41 off 0x194000
reg fp shmid: 0
(00.007601) 0x79c000-0x7b2000 (88K) prot 0x3 flags 0x22 st 0x201 off 0 reg
ap shmid: 0
(00.007603) 0x85c000-0x87f000 (140K) prot 0x3 flags 0x22 st 0x221 off 0 reg
heap ap shmid: 0
(00.007604) 0x7fe73b638000-0x7fe73b63a000 (8K) prot 0x3 flags 0x21 st 0x101
off 0 reg as shmid: 0x92b13
(00.007605) 0x7fe73b63a000-0x7fe73b63c000 (8K) prot 0x3 flags 0x21 st 0x101
off 0 reg as shmid: 0x92b10
(00.007606) 0x7fe73b63c000-0x7fe73b64c000 (64K) prot 0x3 flags 0x22 st
0x201 off 0 reg ap shmid: 0
(00.007607) 0x7ffc0d131000-0x7ffc0d153000 (136K) prot 0x3 flags 0x122 st
0x201 off 0 reg ap shmid: 0
(00.007608) 0x7ffc0d16e000-0x7ffc0d170000 (8K) prot 0x1 flags 0x22 st
0x1201 off 0 reg vvar ap shmid: 0
(00.007609) 0x7ffc0d170000-0x7ffc0d172000 (8K) prot 0x5 flags 0x22 st 0x209
off 0 reg vdso ap shmid: 0
(00.007610) 0xffffffffff600000-0xffffffffff601000 (4K) prot 0x5 flags 0x22
st 0x204 off 0 vsys ap shmid: 0
(00.007612) ----------------------------------------
(00.007614)
(00.007615) Collecting fds (pid: 2672)
(00.007615) ----------------------------------------
(00.007634) Found 11 file descriptors
(00.007635) ----------------------------------------
(00.007687) Set up parasite blob using memfd
(00.007690) Putting parasite blob into 0x7fde89f7a000->0x7fe73b62b000
(00.007702) Dumping GP/FPU registers for 2672
(00.007706) Putting tsock into pid 2672
(00.007734) Wait for parasite being daemonized...
(00.007737) Parasite 2672 has been switched to daemon mode
(00.007753) sid=2688 pgid=2672 pid=2672
(00.007760)
(00.007760) Dumping opened files (pid: 2672)
(00.007760) ----------------------------------------
pie: Error (pie/util-fd.c:92): fcntl(3, F_GETOWN_EX) -> -9
pie: Error (pie/parasite.c:275): send_fds failed (-1)
pie: Error (pie/parasite.c:646): Close the control socket for writing
(00.007768) Error (parasite-syscall.c:829): Can't retrieve FDs from socket
(00.007770) Error (parasite-syscall.c:305): Message reply from daemon is
trimmed (12/0)
(00.007772) Error (cr-dump.c:1292): Dump files (pid: 2672) failed with -1
(00.007859) Unlock network
(00.007863) Unfreezing tasks into 1
(00.007873) Error (cr-dump.c:1578): Dumping FAILED.
Note that I am running this on Android.
Best Regards -- ALLAH Hafiz, may Almighty ALLAH bless you
Abdullah Yousafzai
https://about.me/yousafzaiabdullah
On Tue, Feb 23, 2016 at 11:27 AM, Marcos Lilljedahl <marcosnils at gmail.com>
wrote:
> Abdull,
>
> It's totally possible. Can you please provide the commands and script
> you're using to run your test?
>
> Marcos.
>
> On Mon, Feb 22, 2016 at 11:09 PM, Abdullah Yousafzai <
> yousafzai.abdullah at gmail.com> wrote:
>
>> Hello Guys,
>>
>> I have another question: is it possible to re-checkpoint an application,
>> i.e.
>>
>> (1) checkpoint app -> (2) restore it -> (3) then checkpoint again -> (4)
>> then restore again -> (5) then checkpoint again -> ...
>>
>> I am trying a pattern like this, but after I restore the application at
>> step 2 I cannot find the app in ps to checkpoint again; instead I find
>> criu running (which I am unable to kill/stop).
>>
>> Note that I am using the simplest example (the shell script) given in the
>> CRIU HOWTOs.
>>
>> Please comment on whether this is possible, and on any fixes.
>>
>> Thank you all guys.
>>
>>
>> Best Regards -- ALLAH Hafiz, may Almighty ALLAH bless you
>>
>>
>>
>>
>> Abdullah Yousafzai
>> https://about.me/yousafzaiabdullah
>>
>>
>