[Users] Ploop filesystem provisioned incorrectly (wrong size)

Kir Kolyshkin kir at openvz.org
Tue Jul 22 15:25:52 PDT 2014


On 07/22/2014 02:44 PM, Kevin Holly wrote:
> Hi again,
>
> I just tried to reinstall this container...
> The log of that can be found here:
> https://ezcrypt.it/Hd9n#SiP14ajsc8sV0SLiGm1diEc1

The only strange thing I see is "Killing container" -- perhaps you are using
an incorrect sequence for vzctl set --userpasswd. Note that you do not need
to start a container in order to use it. But this is not related, and I
guess the SolusVM guys are to blame.
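For the record, setting the password on a stopped container should work
roughly like this (CT ID and credentials below are placeholders, and I do
not know what exact sequence SolusVM runs):

    # vzctl create 345 --ostemplate ubuntu-12.04-x86_64-minimal --layout ploop
    # vzctl set 345 --userpasswd root:secret
    # vzctl start 345

i.e. there is no need for a start/kill cycle just to set the password.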
 
I failed to reproduce your problem locally:

# vzctl create 345 --ostemplate ubuntu-12.04-x86_64-minimal --layout ploop --diskspace 2T
...
# vzctl start 345
...
# vzctl exec 345 df -h
Executing command: df -h
Filesystem         Size  Used Avail Use% Mounted on
/dev/ploop11994p1  2.0T  370M  1.9T   1% /
...
# ploop info /vz/private/345/root.hdd/DiskDescriptor.xml
   resource           Size           Used
  1k-blocks     2113785776         378220
     inodes      134217728          12599


Having said that, I know a way to get to the result you are seeing. If
we run

    vzctl set 345 --diskspace 256G --save

(or perhaps vzctl set --applyconfig with a config containing the DISKSPACE
parameter), it will result in something like what you see -- a partition of
2T with a filesystem of about 256G. The reason is that when shrinking the
filesystem, ploop uses a hidden balloon rather than a real resize -- it is
much more efficient this way.
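To illustrate, a rough sketch using my test CT 345 from above (the ploop
device name and exact numbers will differ on your node):

    # vzctl set 345 --diskspace 256G --save
    # vzctl exec 345 df -h /
    # gdisk -l /dev/ploop11994
    # ploop info /vz/private/345/root.hdd/DiskDescriptor.xml

Here df and ploop info should follow the new (ballooned) ~256G size, while
gdisk should still report the partition as 2.0 TiB -- the same mismatch you
are seeing on your node.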

So, can you please provide the sequence of commands that leads to the
problem (starting from vzctl create or earlier)? I understand you are using
SolusVM, but in order to demonstrate that there is an issue in OpenVZ we
should try to eliminate extra variables.
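Ideally the full sequence, something along these lines (all values here are
placeholders -- the interesting part is whether any --diskspace or
--applyconfig call happens between create and start):

    # vzctl create <CTID> --ostemplate <template> --layout ploop --diskspace 250G
    # vzctl set <CTID> --diskspace <size> --save
    # vzctl start <CTID>
    # vzctl exec <CTID> df -h /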

Final question, just curious -- are you paying for SolusVM? If yes, why
don't you use their support to help with your issues, instead of relying on
free help from the OpenVZ community and developers?

Regards,
  Kir.

>
> There is not much difference with the ploop filesystem now. Same size
> (2TB but only 219GiB/235GB available) and "ploop info" on the
> DiskDescriptor.xml also reports "1k-blocks      229233640" (wrong size).
>
> On 22/07/14 21:27, Kevin Holly wrote:
> > Hi,
>
> > I have a customer complaining about his filesystem being too small
> > (product FS size: 250GB, his FS size: 219GiB / 235GB).
>
> > After further debugging this issue I am now here:
>
>
> > Creation of the container's filesystem (I can provide more logs; by
> > default I have LOG_LEVEL=7 and VERBOSE=1):
>
> > 2014-06-16T19:20:02+0200 vzctl : CT 2671 : Creating image:
> > /vz/private/2671.tmp/root.hdd/root.hdd size=262144000K
> > 2014-06-16T19:20:02+0200 : Creating delta
> > /vz/private/2671.tmp/root.hdd/root.hdd bs=2048 size=524288000 sectors v2
> > 2014-06-16T19:20:02+0200 : Adding snapshot
> > {5fbaabe3-6958-40ff-92a7-860e329aab41}
> > 2014-06-16T19:20:02+0200 : Storing
> > /vz/private/2671.tmp/root.hdd/DiskDescriptor.xml
> > 2014-06-16T19:20:02+0200 : Opening delta
> > /vz/private/2671.tmp/root.hdd/root.hdd
> > 2014-06-16T19:20:02+0200 : Adding delta dev=/dev/ploop16059
> > img=/vz/private/2671.tmp/root.hdd/root.hdd (rw)
> > 2014-06-16T19:20:02+0200 : Running: parted -s /dev/ploop16059 mklabel
> > gpt mkpart primary 1048576b 268434407423b
> > 2014-06-16T19:20:02+0200 : Running: mkfs -t ext4 -j -b4096
> > -Elazy_itable_init,resize=4294967295 -Jsize=128 -i16384
> /dev/ploop16059p1
> > 2014-06-16T19:20:03+0200 : Running: /sbin/tune2fs -ouser_xattr,acl -c0
> > -i0 /dev/ploop16059p1
> > 2014-06-16T19:20:03+0200 : Creating balloon file
> > .balloon-c3a5ae3d-ce7f-43c4-a1ea-c61e2b4504e8
> > 2014-06-16T19:20:03+0200 : Mounting /dev/ploop16059p1 at
> > /vz/private/2671.tmp/root.hdd/root.hdd.mnt fstype=ext4 data=''
> > 2014-06-16T19:20:03+0200 : Unmounting device /dev/ploop16059
> > 2014-06-16T19:20:03+0200 : Opening delta
> > /vz/private/2671.tmp/root.hdd/root.hdd
> > 2014-06-16T19:20:03+0200 : Adding delta dev=/dev/ploop16059
> > img=/vz/private/2671.tmp/root.hdd/root.hdd (rw)
> > 2014-06-16T19:20:03+0200 : Mounting /dev/ploop16059p1 at /vz/root/2671
> > fstype=ext4 data='balloon_ino=12,'
>
>
> > solus-ed-ch01:~# vzctl exec 2671 df -B1 /
> > Executing command: df -B1 /
> > Filesystem           1B-blocks        Used    Available Use% Mounted on
> > /dev/ploop16059p1 234735247360 86872219648 134441467904  40% /
> > solus-ed-ch01:~# vzctl exec 2671 df -h /
> > Executing command: df -h /
> > Filesystem         Size  Used Avail Use% Mounted on
> > /dev/ploop16059p1  219G   81G  126G  40% /
>
>
> > solus-ed-ch01:~# gdisk -l /dev/ploop16059
> > GPT fdisk (gdisk) version 0.8.10
>
> > Partition table scan:
> >   MBR: protective
> >   BSD: not present
> >   APM: not present
> >   GPT: present
>
> > Found valid GPT with protective MBR; using GPT.
> > Disk /dev/ploop16059: 4194304000 sectors, 2.0 TiB
> > Logical sector size: 512 bytes
> > Disk identifier (GUID): FA4B229E-10F7-4583-8A4F-C7B99A34945D
> > Partition table holds up to 128 entries
> > First usable sector is 34, last usable sector is 4194303966
> > Partitions will be aligned on 2048-sector boundaries
> > Total free space is 4029 sectors (2.0 MiB)
>
> > Number  Start (sector)    End (sector)  Size       Code  Name
> >    1            2048      4194301951   2.0 TiB     0700  primary
>
>
>
> > So here you can see that the ploop device got provisioned with a 2 TB
> > ploop filesystem size, while the container only sees 234735247360 bytes
> > (235 GB / 219 GiB).
>
>
> > Can someone help me debug this issue further?
>
> > This host node had no crashes yet, all other containers are working
> > completely fine. It's running the following (patched) kernel version:
>
> > solus-ed-ch01:~# uname -a
> > Linux solus-ed-ch01 2.6.32-042stab090.2 #1 SMP Wed May 21 19:25:03 MSK
> > 2014 x86_64 x86_64 x86_64 GNU/Linux
> > solus-ed-ch01:~# kcare-uname -a
> > Linux solus-ed-ch01 2.6.32-042stab092.2 #1 SMP Wed May 21 19:25:03 MSK
> > 2014 x86_64 x86_64 x86_64 GNU/Linux
>
>
> > Thanks in advance for anyone trying to help!
>
>
> > Best regards
>
> > Kevin Holly - root at hallowe.lt - http://hallowe.lt/

