[Users] ZFS vs ploop

Gena Makhomed gmm at csdoc.com
Wed Jul 22 10:08:47 PDT 2015


On 22.07.2015 8:39, Kir Kolyshkin wrote:

>> 1) Currently, even suspend/resume does not work reliably:
>> https://bugzilla.openvz.org/show_bug.cgi?id=2470
>> - I can't suspend and resume containers without hitting bugs,
>> and as a result I also can't use it for live migration.
>
> Valid point, we need to figure it out. What I don't understand
> is how lots of users are enjoying live migration despite this bug.
> Me, personally, I never came across this.

Nevertheless, steps to reproduce the bug 100% of the time are provided in the bug report.

>> 2) I see in Google many bug reports about this feature
>> ("openvz live migration kernel panic"), so I prefer planned
>> downtime of containers at night instead of unexpected and
>> very painful kernel panics and complete reboots in the
>> middle of the working day (with data loss, data corruption
>> and other "amenities").
>
> Unlike the previous item, which is valid, this is pure FUD.

Compare two situations:

1) Live migration is not used at all.

2) Live migration is used and containers are migrated between hardware nodes.

In which situation is the probability of a kernel panic higher?

If you say "possibility are equals" this means
what OpenVZ live migration code has no errors at all.

Is that plausible? Especially considering the volume and complexity
of the OpenVZ live migration code, and the sheer scale of this task.

If you say "for (1) possibility is lower and for (2)
possibility is higher" - this is the same what I think.

I don't use live migration because I don't want kernel panics.

And yet you say "this is pure FUD"? Why?

>> 4) From a technical point of view, it is possible
>> to do live migration using ZFS, so "live migration"
>> is currently the only advantage of ploop over ZFS.
>
> I wouldn't say so. If you have some real world comparison
> of zfs vs ploop, feel free to share. Like density or performance
> measurements, done in a controlled environment.

Ok.

My experience with ploop:

DISKSPACE was limited to 256 GiB, and the real data used inside the
container was around 40-50% of that 256 GiB limit, but the ploop image
was a lot bigger: it used nearly 256 GiB of space on the hardware node.
Overhead: ~50-60%.

I found a workaround for this: run "/usr/sbin/vzctl compact $CT"
via cron every night, and now the ploop image has less overhead
(a sketch of such a cron job is shown below).
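
A minimal sketch of such a nightly job, assuming every running container
should be compacted and that vzlist is available (the schedule, file name
and CTID selection are only placeholders for my setup), as a cron.d file:

# /etc/cron.d/vz-compact: compact all ploop containers at 03:00
0 3 * * * root for CT in $(/usr/sbin/vzlist -H -o ctid); do /usr/sbin/vzctl compact $CT; done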

current state:

on hardware node:

# du -b /vz/private/155/root.hdd
205963399961    /vz/private/155/root.hdd

inside container:

# df -B1
Filesystem            1B-blocks         Used    Available Use% Mounted on
/dev/ploop38149p1  270426705920 163129053184  94928560128  64% /

====================================

used space, bytes: 163129053184

image size, bytes: 205963399961

"ext4 over ploop over ext4" solution disk space overhead is near 26%,
or is near 40 GiB, if see this disk space overhead in absolute numbers.
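
Just to show where the 26% comes from - plain shell arithmetic with the
two numbers above (integer division, so the results are rounded down):

# echo $(( (205963399961 - 163129053184) * 100 / 163129053184 ))
26
# echo $(( (205963399961 - 163129053184) / 1024 / 1024 / 1024 ))
39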

This is the main disadvantage of ploop.

And this disadvantage can't be avoided - it is there "by design".

=======================================================================

My experience with ZFS:

Real data used inside the container is near 62 GiB, while the
real space used on the hard disk is near 11 GiB.

"simfs over ZFS" solution disk space overhead: zero.

Even more, I saved about 51 GiB of disk space
after migrating from "simfs over ext4" to "simfs over ZFS".
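
A saving like this typically comes from ZFS on-disk compression. Assuming
the container lives in a dataset like vz/private/155 (the pool and dataset
names here are just an example of a possible layout, not something OpenVZ
mandates), the standard ZFS properties show the effect directly:

# zfs get used,logicalused,compressratio,compression vz/private/155

"logicalused" is the uncompressed size of the data, "used" is the actual
on-disk footprint, and "compressratio" is their ratio.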

It is obvious that ZFS allows hosting more containers on the same
hardware node, compared to the "ext4 over ploop over ext4" variant.

ZFS performance is also higher if the server has two HDDs for main
storage and two SSDs for L2ARC/ZIL. If ploop is used, it is not possible
to use SSDs to accelerate read/write access to data that lives on HDDs
inside a ploop image.
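
For illustration, attaching SSDs to an existing pool for read caching and
for the intent log is a one-liner per device (the pool name "vz" and the
device names are assumptions; mirroring the log devices is the usual
recommendation):

# zpool add vz cache /dev/sdc
# zpool add vz log mirror /dev/sdd /dev/sde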

You can easily obtain your own density and performance measurements,
done in a controlled environment, comparing "simfs over ZFS" and
"ext4 over ploop over ext4".

You can find additional information here:
https://github.com/pavel-odintsov/OpenVZ_ZFS

For example, the great article
"Why ploop is not an ideal disk layout for OpenVZ":
https://github.com/pavel-odintsov/OpenVZ_ZFS/blob/master/ploop_issues.md

-- 
Best regards,
  Gena

