[Users] vzctl compact fails to compact container, what can I do

Rene C. openvz at dokbua.com
Mon Mar 23 05:35:04 PDT 2015


Any news on this?
> We know about this problem and are going to fix it in the near future.

I'm just going through our servers and am quite surprised how much
disk space seems to be wasted.

Take for example this VE with a nominal disk size of 292G, 235G used:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/ploop50951p1     292G  235G   42G  85% /

but if I go to /vz/private/2202/root.hdd I find

4.0K    DiskDescriptor.xml
0       DiskDescriptor.xml.lck
300G    root.hdd
161G    root.hdd.{8c40287b-2e17-45d1-b58f-1119b3b58b53}
138G    root.hdd.{fb7ba001-cb78-4dd3-9ac8-cb0c8cbab4f6}

There don't seem to be any snapshots though, so I guess some of these
files are somehow orphans?

# vzctl snapshot-list 2202
PARENT_UUID                            C UUID                                   DATE                NAME
                                       * {a684e175-e9a0-4e22-aece-98a11081e2f0} 2014-08-10 18:00:01

Any way I can check what is safe to delete?

I already ran vzctl compact on the VE, but it didn't release any more
disk space.
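One way to check what is still in use is to compare the delta images that DiskDescriptor.xml references against the root.hdd* files actually present in the directory; anything unreferenced is a candidate orphan. A minimal sketch, assuming the descriptor lists each delta by name in a `<File>` element (as ploop writes it); treat the output as candidates only, verify with ploop tooling, and never delete image files while the container is running:

```shell
# Print root.hdd* files that DiskDescriptor.xml no longer references.
# Candidate orphans only -- double-check before deleting anything.
orphan_check() {
    dir=$1
    # Delta names the descriptor knows about (one per <File> element).
    referenced=$(grep -o '<File>[^<]*</File>' "$dir/DiskDescriptor.xml" \
                 | sed -e 's#<File>##' -e 's#</File>##' | sort -u)
    # Regular root.hdd* files actually on disk (root.hdd.mnt is a dir,
    # so -type f skips it, along with the .lck file).
    find "$dir" -maxdepth 1 -type f -name 'root.hdd*' | while read -r path; do
        f=$(basename "$path")
        echo "$referenced" | grep -qxF "$f" || echo "possible orphan: $f"
    done
}
```

Usage would be `orphan_check /vz/private/2202/root.hdd`.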



On Mon, Dec 23, 2013 at 12:22 PM, Rene C. <openvz at dokbua.com> wrote:

> Good answer, thanks much!!
>
>
> On Mon, Dec 23, 2013 at 4:47 PM, Andrew Vagin <avagin at parallels.com>
> wrote:
> > On Mon, Dec 23, 2013 at 04:28:14PM +0700, Rene C. wrote:
> >> Indeed, that seems to have been at least part of the problem, thanks much.
> >>
> >> Still, after having removed all snapshots and rerun vzctl compact, I
> >> still ended up with a pigz compressed backup of 40G for a container
> >> with 2G used disk space (shown by df -h within the container).  Any
> >> idea how that can happen?
> >
> > Yes, I have. The ploop cluster size is 1MB, which is much bigger than
> > the file system block size (4KB). When the file system allocates a
> > single 4KB block, a whole 1MB cluster is allocated in the ploop image.
> >
> > If the file system spreads blocks far apart from each other, it can
> > pin much more space in the ploop image.
> >
> > We know about this problem and are going to fix it in the near future.
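The arithmetic behind that amplification is worth spelling out: with a 1MB cluster and 4KB blocks, one allocated block can pin up to 256 times its size in the image. A rough worst-case estimate (assuming maximally scattered allocation, which real file systems only approach):

```shell
cluster=$((1024 * 1024))   # ploop cluster size: 1 MB
block=$((4 * 1024))        # ext4 block size: 4 KB

# Worst case: every allocated 4 KB block lands in its own cluster,
# so each block pins a full 1 MB in the ploop image.
factor=$((cluster / block))
echo "worst-case amplification: ${factor}x"          # 256x

# E.g. 2 GB of maximally scattered data could pin up to 512 GB of
# image space (in practice capped by the virtual disk size).
used_gb=2
echo "worst-case image size: $((used_gb * factor)) GB"
```

That cap comfortably covers the 40G image seen above for 2G of used space.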
> >
> >>
> >> On Sat, Dec 21, 2013 at 11:13 PM, Andrew Vagin <avagin at parallels.com>
> wrote:
> >> > On Sat, Dec 21, 2013 at 09:08:43PM +0700, Rene C. wrote:
> >> >> We have a container that needs moving to another hardware node, but a
> >> >> vzpbackup of it is 250G.
> >> >>
> >> >> Within the container only 1.9G is used:
> >> >>
> >> >> /dev/ploop33244p1  393G  1.9G  371G   1% /
> >> >>
> >> >> Tried running a vzctl compact but it just shows a few lines and stops
> >> >> without having done anything:
> >> >>
> >> >> # vzctl compact 1709
> >> >> Trying to find free extents bigger than 0 bytes
> >> >> Waiting
> >> >> Call FITRIM, for minlen=33554432
> >> >> Call FITRIM, for minlen=16777216
> >> >> Call FITRIM, for minlen=8388608
> >> >> Call FITRIM, for minlen=4194304
> >> >> Call FITRIM, for minlen=2097152
> >> >> Call FITRIM, for minlen=1048576
> >> >> 0 clusters have been relocated
> >> >>
> >> >> /vza1/private/1709/root.hdd has the following files:
> >> >>
> >> >> -rw-r--r-- 1 root root            1478 Dec 21 20:35 DiskDescriptor.xml
> >> >> -rw------- 1 root root               0 Nov 27 12:33 DiskDescriptor.xml.lck
> >> >> -rw------- 1 root root 257 447 428 096 Dec 20 07:42 root.hdd
> >> >> -rw------- 1 root root     376 438 784 Dec 20 09:27 root.hdd.{e66fccca-85cf-4bf2-888d-6669988f1da6}
> >> >> -rw------- 1 root root     517 996 544 Dec 21 17:49 root.hdd.{f881c603-48cd-4e21-af63-45e49ce715ba}
> >> >> drwx------ 2 root root            4096 Nov 27 12:34 root.hdd.mnt
> >> >>
> >> >> What can we do to compress this down to the actual 2G used?
> >> >
> >> > Looks like you took a snapshot, didn't you? In that case only the last
> >> > delta is compacted.
> >> >
> >> > You can try deleting the snapshots and compacting the disk again.
> >> >
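Concretely, that advice amounts to enumerating the snapshot UUIDs, deleting each, and then compacting. A sketch, assuming each `vzctl snapshot-list` line carries a `{uuid}`-style identifier; the live vzctl calls are left commented since they require a running OpenVZ node:

```shell
# Pull {uuid}-style identifiers out of `vzctl snapshot-list` output.
list_uuids() {
    grep -o '{[0-9a-f]\{8\}-[0-9a-f-]*}' | sort -u
}

# On a live node (commented -- requires vzctl and a real container):
# CT=1709
# vzctl snapshot-list $CT | list_uuids | while read -r id; do
#     vzctl snapshot-delete $CT --id "$id"   # merges the delta back down
# done
# vzctl compact $CT
```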
> >> >>
> >> >> 2.6.32-042stab081.5
> >> >> e2fsprogs-resize2fs-static-1.42.3-3.el6.1.ovz.x86_64
> >> >> vzctl-4.5.1-1.x86_64
> >> >> vzkernel-2.6.32-042stab081.5.x86_64
> >> >> vzctl-core-4.5.1-1.x86_64
> >> >> vzquota-3.1-1.x86_64
> >> >> vzstats-0.5.2-1.noarch
> >> >> ploop-lib-1.9-1.x86_64
> >> >> ploop-1.9-1.x86_64
> >> >> _______________________________________________
> >> >> Users mailing list
> >> >> Users at openvz.org
> >> >> https://lists.openvz.org/mailman/listinfo/users
>