[Users] ZFS vs ploop
Gena Makhomed
gmm at csdoc.com
Sat Jul 25 10:42:11 PDT 2015
On 25.07.2015 4:46, Kir Kolyshkin wrote:
> This tool is to be used for inner ploop ext4. As a result, the data will
> be less sparse, there will be more empty blocks for ploop to discard.
>
> I encourage you to experiment with e4defrag2 and post your results here.
> Usage is something like this (assuming default ploop cluster size of 1M,
> and you have /dev/ploop12345p1 mounted on /vz/root/123):
>
> e4defrag2 -v -d 255 -m -s 8 -q 999 \
> -a $((64*1024)) \
> -c $((1024*1024 * 1)) \
> -t $((60*10)) \
> /dev/ploop12345p1 /vz/root/123
>
> Try to run vzctl compact before and after, check if defrag helps. You
> might want to drop -a option,
> or increase argument of -t option. Note I am not the utility author and
> can hardly help much.
# /usr/sbin/vzctl compact 156
0 clusters have been relocated
# /usr/sbin/vzctl compact 156
0 clusters have been relocated
(get old size)
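(One way to read the raw image size is to stat the ploop image file on
the hardware node; the path below assumes the standard OpenVZ layout for
CT 156 and is only an illustration:)

# stat -c %s /vz/private/156/root.hdd/root.hds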
----------------------------------------
# ./e4defrag2 -v -d 255 -m -s 8 -q 999 \
-a $((64*1024)) \
-c $((1024*1024 * 1)) \
-t $((60*10)) \
/dev/ploop30481p1 /vz/root/156
After several minutes of work it failed with this error message:
__get_inode_fiemap fmap stat ino:2645875 hole:0 frag:1 local_ex:0
local_sz:0 group:1
Can not open file CHANGES.md, errno:24
Pass3: Rescan subtree failed
After a second e4defrag2 run it failed again with the same error message.
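Errno 24 is EMFILE ("Too many open files"), so e4defrag2 apparently runs
out of file descriptors rather than hitting an ext4 problem. A possible
workaround (not tested here) is to raise the open-file limit in the same
shell before starting it, for example:

# ulimit -n 65536

and then rerun the same e4defrag2 command.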
----------------------------------------
# /usr/sbin/vzctl compact 156
1512 clusters have been relocated
# /usr/sbin/vzctl compact 156
0 clusters have been relocated
(get new size)
========================================================================
result:
old - after first compact, before defrag
new - after defrag, after second compact
old ploop image size: 194229834521
new ploop image size: 194958594841
After e4defrag2 and vzctl compact the ploop image increases in size.
delta: +728760320 bytes or +695MiB
run "df -B1" inside container:
old used data size: 163323834368
new used data size: 163324989440
After e4defrag2 and vzctl compact the filesystem data increases in size.
delta: +1155072 bytes or +1.10MiB
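(For completeness, the same figures can also be collected from the
hardware node; something like this should run df inside the container:)

# vzctl exec 156 df -B1 /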
Data inside the container increases in size because of running applications
(writing info into log files, etc.)
Data size continues to grow, now it is: 163325067264 (+1.17MiB)
Ploop image size also grows, now it is: 195092812569 (+823MiB)
Now the ploop+ext4 overhead is 16.2% or 29.59GiB on a 256GiB partition.
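For anyone who wants to check the arithmetic, the figures above follow
from the raw byte counts (numbers copied from this mail, bc truncates to
the given scale; the percentage is taken relative to the current ploop
image size):

# echo $(( 195092812569 - 163325067264 ))
31767745305
# echo "scale=4; 31767745305 / 1024 / 1024 / 1024" | bc
29.5860
# echo "scale=3; 31767745305 / 195092812569" | bc
.162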
> Also, you might try to play with ploop cluster block size. Default is
> 1M, maybe you'll
> have better luck with smaller block size (although it was never tested
> with blocks less
> than 1M). Block size (in sectors, default is 2048 i.e. 2048 * 512 = 1M)
> can be specified
> with ploop init -b option.
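For reference, if one did want to experiment with that, the -b option
takes the block size in sectors, so a 512K cluster would presumably be
requested at image creation time roughly like this (path and size are
placeholders, exact syntax should be checked against the ploop man page):

# ploop init -s 256G -b 1024 /vz/private/999/root.hdd/root.hds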
As I understand it, decreasing the ploop cluster block size might lead
to performance degradation and increased memory requirements.
And as it is written in the "ZFS Evil Tuning Guide":
...default values are set by the people who know the most about the
effects of the tuning on the software that they supply. If a better
value exists, it should be the default. While alternative values might
help a given workload, it could quite possibly degrade some other
aspects of performance. Occasionally, catastrophically so.
> What else is there in ploop? As far as I know, a partition table, and
> an ext4 journal with a fixed size of 128MB (its overhead is only
> major if you create pretty small ploop images).
Also: reserved space for 16777216 inodes, each 256 bytes in size.
As I understand it, 16777216 inodes need at least 4 GiB of disk space.
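(Both figures can be checked on a live ploop device with tune2fs; the
device name follows the earlier example and is only illustrative:)

# tune2fs -l /dev/ploop12345p1 | grep -E 'Inode count|Inode size'
# echo $(( 16777216 * 256 ))
4294967296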
For comparison, XFS is an old but still modern file system,
and it does not contain fixed-size inode tables on disk.
XFS also has many other advantages.
Filesystem metadata overhead:

Filesystem   Allocated kilobytes (out of 1G)   Overhead %
ext4         33288                             3.2%
xfs           5132                             0.5%

https://rwmj.wordpress.com/2009/11/08/filesystem-metadata-overhead/
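(A rough way to reproduce this kind of comparison, not necessarily the
exact method from the linked post, is to format two equally sized images
and look at what df reports for each freshly created filesystem; the
accounting details differ between filesystems, which is part of what the
post measures:)

# truncate -s 1G test-ext4.img; mkfs.ext4 -F -q test-ext4.img
# truncate -s 1G test-xfs.img; mkfs.xfs test-xfs.img
# mkdir -p /mnt/test
# mount -o loop test-ext4.img /mnt/test; df -k /mnt/test; umount /mnt/test
# mount -o loop test-xfs.img /mnt/test; df -k /mnt/test; umount /mnt/test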
=====================================================================
Even Red Hat migrated from ext4 to XFS as the default file system in RHEL7.
Do you see any reasons why ext4 should still be used with OpenVZ?
--
Best regards,
Gena