[Users] Latest news in Virtuozzo 7 development 20160428
Sergey Bronnikov
sergeyb at openvz.org
Wed May 18 04:13:26 PDT 2016
Hi Jake,
Thanks for the details. Cyrill also asked for the P.Haul service logs.
Unfortunately, the prlsrvctl report doesn't contain them; could you provide them as well?
Also, does a plain vzctl suspend/resume of this container work fine?
On 13:52 Sat 14 May , jjs - mainphrame wrote:
> Hi Sergey,
>
> prlsrvctl report id: 99722287
>
> Environment details (identical software stack on the 2 hosts):
> virtuozzo-release-7.0.0-39.vz7.x86_64
> CentOS Linux release 7.2.1511 (Core)
>
> Just to reiterate: as of the previous update, live migration began
> working properly; it then broke with the next day's update.
>
> I've attached logs from both OVZ hosts (hachi = source, annie = target)
>
> Preparing to submit bug report now -
>
> Jake
>
> On Fri, May 13, 2016 at 2:13 AM, Sergey Bronnikov <sergeyb at openvz.org> wrote:
> > Hi
> >
> > Thank you for the report.
> >
> > Could you submit a new bug to bugs.openvz.org
> > and provide more details about your environment:
> > - Vz7 version (rpm -qa virtuozzo-release)
> > - problem report id (prlsrvctl problem-report --send)
> >
> > Sergey
> >
> > On 19:42 Thu 12 May , jjs - mainphrame wrote:
> >> Aw shucks, I've spoken too soon. After today's updates, live migration
> >> is back to not working:
> >>
> >> [root@hachi ~]# time vzmigrate annie 888 --online
> >> Connection to destination node (annie) is successfully established
> >> Moving/copying CT 888 -> CT 888, [], [] ...
> >> locking 888
> >> Checking bindmounts
> >> Check cluster ID
> >> Checking keep dir for private area copy
> >> Check of requires kernel modules
> >> Checking technologies
> >> Checking templates for CT
> >> Checking IP addresses on destination node
> >> Checking RATE parameters in config
> >> Checking ploop format 2
> >> copy CT private /vz/private/888
> >> Live migration stage started
> >> Phaul service failed to live migrate CT
> >> Can't move/copy CT 888 -> CT 888, [], [] : Phaul service failed to
> >> live migrate CT
> >>
> >> real 2m51.840s
> >> user 0m0.385s
> >> sys 0m3.457s
> >> [root@hachi ~]#
> >>
> >> Jake
> >>
> >> On Wed, May 11, 2016 at 9:28 PM, jjs - mainphrame <jjs at mainphrame.com> wrote:
> >> > Thank you Sergey -
> >> >
> >> > As of today's updates, I find that live container migration is working
> >> > reliably and well here.
> >> >
> >> > Great work!
> >> >
> >> > Jake
> >> >
> >> > On Thu, Apr 28, 2016 at 8:29 AM, Sergey Bronnikov <sergeyb at openvz.org> wrote:
> >> >> Hi everyone,
> >> >>
> >> >> As usual, I'd like to share with you the latest Virtuozzo 7 development news.
> >> >>
> >> >> First and foremost, more features have become available in Virtuozzo 7:
> >> >>
> >> >> - Container migration via vzmigrate in Vz7 [1]
> >> >> - IP and MAC filtering for VMs [2]
> >> >> - VM data stats switched to native KVM ones [3]
> >> >> - AutoFS migration [4]
> >> >>
> >> >> All these features are available in the latest Virtuozzo 7 builds:
> >> >> https://download.openvz.org/virtuozzo/factory/x86_64/iso/
> >> >>
> >> >> We have also published new portions of the Virtuozzo source code:
> >> >>
> >> >> - EZ templates:
> >> >> * Ubuntu 15.10 x86_64
> >> >> * Fedora 23 x86_64
> >> >> * Debian 7.0 x86_64
> >> >> * VzLinux 6 x86_64
> >> >> * VzLinux 7 x86_64
> >> >> - DRS module for rmond [5]
> >> >>
> >> >> Events
> >> >> ======
> >> >>
> >> >> Earlier we participated in the Linux Storage, Filesystem and Memory Management
> >> >> Conference. See the notes from the conference [6] if you are interested.
> >> >>
> >> >> Upcoming talks and conferences:
> >> >>
> >> >> - Ross Boucher will talk about cloning running services with Docker and CRIU at
> >> >> DockerCon 2016 [7]
> >> >> - Sanidhya Kashyap will present an instant OS updater using CRIU at USENIX
> >> >> ATC'16 [8]
> >> >> - The Checkpoint-Restore micro-conference will be held 2-4 Nov in Santa Fe, NM, at
> >> >> Linux Plumbers 2016. The proposal for the conference was accepted. [9]
> >> >>
> >> >> Links
> >> >> =====
> >> >>
> >> >> [1] https://lists.openvz.org/pipermail/users/2016-March/006833.html
> >> >> [2] https://lists.openvz.org/pipermail/users/2016-April/006845.html
> >> >> [3] https://lists.openvz.org/pipermail/users/2016-April/006850.html
> >> >> [4] https://lists.openvz.org/pipermail/users/2016-April/006861.html
> >> >> [5] https://github.com/OpenVZ/rmond-drs
> >> >> [6] http://lwn.net/Articles/lsfmm2016/
> >> >> [7] https://blog.docker.com/2016/04/black-belt-talks-dockercon-2016/
> >> >> [8] https://www.usenix.org/conference/atc16/technical-sessions/presentation/kashyap
> >> >> [9] https://www.linuxplumbersconf.org/2016/checkpoint-restore-microconference-accepted-into-2016-linux-plumbers-conference/
> >> >>
> >> >> Sincerely,
> >> >> Sergey
> >> >> _______________________________________________
> >> >> Users mailing list
> >> >> Users at openvz.org
> >> >> https://lists.openvz.org/mailman/listinfo/users
> >
> > --
> > sergeyb@
> [root@hachi ~]# time vzmigrate annie 888 --online
> Connection to destination node (annie) is successfully established
> Moving/copying CT 888 -> CT 888, [], [] ...
> locking 888
> Checking bindmounts
> Check cluster ID
> Checking keep dir for private area copy
> Check of requires kernel modules
> Checking technologies
> Checking templates for CT
> Checking IP addresses on destination node
> Checking RATE parameters in config
> Checking ploop format 2
> copy CT private /vz/private/888
> Live migration stage started
> Phaul service failed to live migrate CT
> Can't move/copy CT 888 -> CT 888, [], [] : Phaul service failed to live migrate CT
>
> real 2m54.298s
> user 0m0.438s
> sys 0m3.456s
> [root@hachi ~]#
>
>
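The source-side p.haul log below shows the pre-copy loop of live migration: memory is pre-dumped repeatedly while the set of dirtied pages keeps shrinking, and the final dump happens once an iteration "grows". A minimal sketch of that stop condition (the `MAX_ITERS` cap and the function are illustrative assumptions, not phaul's actual code):

```python
# Hedged sketch of the pre-copy stop condition visible in the log:
# keep pre-dumping while the number of dirtied pages shrinks; when an
# iteration grows (or a cap is hit), fall through to the final dump.
MAX_ITERS = 8  # illustrative cap, not phaul's real constant

def should_continue(prev_pages, cur_pages, iteration):
    """Proceed to another pre-dump iteration only while progress is made."""
    if iteration >= MAX_ITERS:
        return False
    return cur_pages < prev_pages  # otherwise: "Iteration grows"

# Page counts taken from the log: 28797, 134, 121, 105, 123
print(should_continue(28797, 134, 1))  # True  -> "Proceed to next iteration"
print(should_continue(105, 123, 5))    # False -> "Iteration grows"
```

In this trace the loop converges quickly (28797 pages down to ~100 per iteration), so the pre-copy phase itself looks healthy; the failure comes later, at restore time.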
> 20:26:06.144: 52839:
> 20:26:06.144: 52839:
> 20:26:06.144: 52839:
> 20:26:06.144: 52839: Starting p.haul
> 20:26:06.144: 52839: Use existing connections, fdrpc=9 fdmem=11 fdfs=root.hdd/root.hds:13
> 20:26:06.335: 52839: Setting up local
> 20:26:06.335: 52839: Loading config file from /etc/vz/conf/
> 20:26:06.337: 52839: Initialize ploop hauler
> 20:26:06.337: 52839: `- /vz/private/888/root.hdd/root.hds
> 20:26:06.338: 52839: Passing (ctl:10, data:8) pair to CRIU
> 20:26:06.340: 52839: Setting up remote
> 20:26:06.371: 52839: Start migration in live mode
> 20:26:06.371: 52839: Checking CPU compatibility
> 20:26:06.371: 52839: `- Dumping CPU info
> 20:26:06.372: 52839: `- Sending CPU info
> 20:26:06.382: 52839: `- Checking CPU info
> 20:26:06.387: 52839: Checking criu version
> 20:26:06.405: 52839: Checking for Dirty Tracking
> 20:26:06.405: 52839: `- Explicitly enabled
> 20:26:06.405: 52839: Preliminary FS migration
> 20:28:40.114: 52839: Fs driver transfer 1503657984 bytes (~1434Mb)
> 20:28:40.114: 52839: * Iteration 0
> 20:28:40.124: 52839: Making directory /vz/dump/dmp-kFPTka-16.05.13-20.26/img/1
> 20:28:40.124: 52839: Issuing pre-dump command to service
> 20:28:50.770: 52839: Dumped 28797 pages, 0 skipped
> 20:28:50.770: 52839: Fs driver transfer 0 bytes
> 20:28:50.770: 52839: Checking iteration progress:
> 20:28:50.770: 52839: > Proceed to next iteration
> 20:28:50.770: 52839: * Iteration 1
> 20:28:50.780: 52839: Making directory /vz/dump/dmp-kFPTka-16.05.13-20.26/img/2
> 20:28:50.780: 52839: Issuing pre-dump command to service
> 20:28:51.278: 52839: Dumped 134 pages, 28663 skipped
> 20:28:51.278: 52839: Fs driver transfer 0 bytes
> 20:28:51.279: 52839: Checking iteration progress:
> 20:28:51.279: 52839: > Proceed to next iteration
> 20:28:51.279: 52839: * Iteration 2
> 20:28:51.288: 52839: Making directory /vz/dump/dmp-kFPTka-16.05.13-20.26/img/3
> 20:28:51.288: 52839: Issuing pre-dump command to service
> 20:28:51.778: 52839: Dumped 121 pages, 28676 skipped
> 20:28:51.778: 52839: Fs driver transfer 0 bytes
> 20:28:51.779: 52839: Checking iteration progress:
> 20:28:51.779: 52839: > Proceed to next iteration
> 20:28:51.779: 52839: * Iteration 3
> 20:28:51.788: 52839: Making directory /vz/dump/dmp-kFPTka-16.05.13-20.26/img/4
> 20:28:51.788: 52839: Issuing pre-dump command to service
> 20:28:52.241: 52839: Dumped 105 pages, 28692 skipped
> 20:28:52.241: 52839: Fs driver transfer 0 bytes
> 20:28:52.241: 52839: Checking iteration progress:
> 20:28:52.242: 52839: > Proceed to next iteration
> 20:28:52.242: 52839: * Iteration 4
> 20:28:52.251: 52839: Making directory /vz/dump/dmp-kFPTka-16.05.13-20.26/img/5
> 20:28:52.251: 52839: Issuing pre-dump command to service
> 20:28:52.748: 52839: Dumped 123 pages, 28674 skipped
> 20:28:52.748: 52839: Fs driver transfer 0 bytes
> 20:28:52.749: 52839: Checking iteration progress:
> 20:28:52.749: 52839: > Iteration grows
> 20:28:52.749: 52839: Final dump and restore
> 20:28:52.758: 52839: Making directory /vz/dump/dmp-kFPTka-16.05.13-20.26/img/6
> 20:28:52.758: 52839: Issuing dump command to service
> 20:28:52.760: 52839: Notify (pre-dump)
> 20:28:52.766: 52839: Notify (network-lock)
> 20:28:53.438: 52839: Final FS and images sync
> 20:28:54.345: 52839: Sending images to target
> 20:28:54.348: 52839: Pack
> 20:28:54.386: 52839: Add htype images
> 20:28:54.641: 52839: Asking target host to restore
> 20:28:57.588: 52839: Remote exception
> 20:28:57.588: 52839: Restore failed (17)
> Traceback (most recent call last):
>   File "/usr/libexec/phaul/p.haul", line 47, in <module>
>     worker.start_migration()
>   File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 159, in start_migration
>     self.__start_live_migration()
>   File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 229, in __start_live_migration
>     self.target_host.restore_from_images()
>   File "/usr/lib/python2.7/site-packages/phaul/xem_rpc_client.py", line 26, in __call__
>     raise Exception(resp[1])
> Exception: Restore failed (17)
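For context, the "Remote exception" above is simply the target node's failure echoed back over phaul's RPC channel, which is why the source only sees "Restore failed (17)" with no remote traceback. An illustrative sketch of that pattern (stand-in names and tags, not the actual phaul code):

```python
# Minimal sketch of how an RPC client like phaul/xem_rpc_client.py can
# surface a remote failure locally: the service side traps the exception,
# ships back only its message, and the client re-raises it.
RPC_CALL, RPC_EXC = 0, 1  # hypothetical response tags

def remote_restore():
    # Stand-in for the target-side restore; vzctl exited with status 17.
    raise RuntimeError("Restore failed (17)")

def serve(request):
    """Service side: trap the exception and return only its message."""
    try:
        return (RPC_CALL, request())
    except Exception as e:
        return (RPC_EXC, str(e))

def call(request):
    """Client side: re-raise remote errors locally."""
    resp = serve(request)
    if resp[0] == RPC_EXC:
        raise Exception(resp[1])
    return resp[1]

try:
    call(remote_restore)
except Exception as e:
    print(e)  # -> Restore failed (17)
```

The real cause therefore has to be read from the destination-side log, which follows.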
> 17:26:06.333: 68411:
> 17:26:06.333: 68411:
> 17:26:06.334: 68411:
> 17:26:06.334: 68411: Starting p.haul service
> 17:26:06.334: 68411: Use existing connections, fdrpc=8 fdmem=10 fdfs=root.hdd/root.hds:12
> 17:26:06.335: 68411: Connected
> 17:26:06.342: 68411: Setting up service side ('vz', '888')
> 17:26:06.347: 68411: Loading config file from /etc/vz/conf/
> 17:26:06.356: 68411: Initialize ploop receiver
> 17:26:06.356: 68411: `- /vz/private/888/root.hdd/root.hds
> 17:26:06.359: 68411: Passing (ctl:15, data:7) pair to CRIU
> 17:26:06.375: 68411: Started images server
> 17:26:06.379: 68411: Waiting for images to unpack
> 17:26:06.383: 68411: Checking cpuinfo
> 17:26:06.386: 68411: `- True
> 17:26:06.391: 68411: Checking criu version
> 17:26:06.404: 68411: `- 2.1.0.23 -> 2.1.0.23
> 17:26:06.404: 68411: `- True
> 17:28:40.116: 68411: Making directory /vz/dump/rst-ee6Sr6-16.05.13-17.26/img/1
> 17:28:40.116: 68411: Starting page server for iter 1
> 17:28:40.117: 68411: Sending criu rpc req
> 17:28:40.123: 68411: Page server started at 68999
> 17:28:50.772: 68411: Making directory /vz/dump/rst-ee6Sr6-16.05.13-17.26/img/2
> 17:28:50.773: 68411: Starting page server for iter 2
> 17:28:50.773: 68411: Sending criu rpc req
> 17:28:50.779: 68411: Page server started at 69001
> 17:28:51.280: 68411: Making directory /vz/dump/rst-ee6Sr6-16.05.13-17.26/img/3
> 17:28:51.281: 68411: Starting page server for iter 3
> 17:28:51.281: 68411: Sending criu rpc req
> 17:28:51.287: 68411: Page server started at 69003
> 17:28:51.780: 68411: Making directory /vz/dump/rst-ee6Sr6-16.05.13-17.26/img/4
> 17:28:51.781: 68411: Starting page server for iter 4
> 17:28:51.781: 68411: Sending criu rpc req
> 17:28:51.787: 68411: Page server started at 69005
> 17:28:52.243: 68411: Making directory /vz/dump/rst-ee6Sr6-16.05.13-17.26/img/5
> 17:28:52.244: 68411: Starting page server for iter 5
> 17:28:52.244: 68411: Sending criu rpc req
> 17:28:52.251: 68411: Page server started at 69008
> 17:28:52.750: 68411: Making directory /vz/dump/rst-ee6Sr6-16.05.13-17.26/img/6
> 17:28:52.751: 68411: Starting page server for iter 6
> 17:28:52.751: 68411: Sending criu rpc req
> 17:28:52.758: 68411: Page server started at 69010
> 17:28:54.348: 68411: Started images server
> 17:28:54.392: 68411: Waiting for images to unpack
> 17:28:54.642: 68411: Restoring from images
> 17:28:54.643: 68411: Starting vzctl restore
> 17:28:57.581: 68411: > Restoring the Container ...
> 17:28:57.582: 68411: > Mount image: /vz/private/888/root.hdd
> 17:28:57.582: 68411: > Opening delta /vz/private/888/root.hdd/root.hds
> 17:28:57.582: 68411: > Adding delta dev=/dev/ploop35223 img=/vz/private/888/root.hdd/root.hds (rw)
> 17:28:57.583: 68411: > Mounted /dev/ploop35223p1 at /vz/root/888 fstype=ext4 data=',balloon_ino=12'
> 17:28:57.583: 68411: > Container is mounted
> 17:28:57.583: 68411: > cat: /vz/dump/rst-ee6Sr6-16.05.13-17.26/img/6/vz_clock_bootbased.img: No such file or directory
> 17:28:57.584: 68411: > cat: /vz/dump/rst-ee6Sr6-16.05.13-17.26/img/6/vz_clock_monotonic.img: No such file or directory
> 17:28:57.584: 68411: > cat: /vz/dump/rst-ee6Sr6-16.05.13-17.26/img/6/vz_aio_max_nr.img: No such file or directory
> 17:28:57.585: 68411: > The restore log was saved in /vz/dump/rst-ee6Sr6-16.05.13-17.26/criu_restore.7.log
> 17:28:57.585: 68411: > Unmount image: /vz/private/888/root.hdd
> 17:28:57.585: 68411: > Unmounting file system at /vz/root/888
> 17:28:57.586: 68411: > Unmounting device /dev/ploop35223
> 17:28:57.586: 68411: > Container is unmounted
> 17:28:57.586: 68411: > Failed to restore the Container
> 17:28:57.636: 68411: Disconnected
> 17:28:57.638: 68411: Closing images
> 17:28:57.638: 68411: Keeping images
> 17:28:57.638: 68411: Images are kept in /vz/dump/rst-ee6Sr6-16.05.13-17.26
> 17:28:57.775: 68411: Stop by 15
> 17:28:57.775: 68411: RPC Service stops
> 17:28:57.776: 68411: Bye!
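The destination log pins the failure down: the restore step tried to read several Vz-specific images from the final dump directory and three were missing. A hypothetical checker for that condition (the file names come straight from the log; the helper itself is illustrative, not part of p.haul):

```python
# Report which of the Vz-specific per-dump images the restore step
# expected are absent from a dump directory. File names are the ones
# the "cat: ... No such file or directory" lines complain about.
import os

EXPECTED_VZ_IMAGES = [
    "vz_clock_bootbased.img",
    "vz_clock_monotonic.img",
    "vz_aio_max_nr.img",
]

def missing_vz_images(dump_dir):
    """Return the expected Vz images absent from dump_dir."""
    return [name for name in EXPECTED_VZ_IMAGES
            if not os.path.exists(os.path.join(dump_dir, name))]
```

Run against the kept images directory on the target (here /vz/dump/rst-ee6Sr6-16.05.13-17.26/img/6), all three names would be reported missing, which matches the `cat` errors right before "Failed to restore the Container".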
--
sergeyb@