[Users] SIMFS users
Bosson VZ
fabian at bosson.eu
Thu Aug 6 05:20:04 PDT 2015
Hello,
online resizing is limited by the capabilities of EXT4. Since EXT4 allows
only online upsizing, we have to do the downsizing offline. On the other
hand, our customers almost always request adding more space, which is doable
online with a single command. Another problem with downsizing (even offline) is
correctly aligning the FS size and the block-device size. This is easy
when upsizing, since EXT4 takes up all the space provided by the LVM; the other
way around is more error-prone. How would ploop overcome these shortcomings
of the guest EXT4?
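For context, a minimal sketch of both directions with LVM + ext4 (the VG/LV
names are hypothetical, and the commands need root on a host with LVM):

```shell
# Online upsize - ext4 can grow while mounted:
lvextend -L +50G /dev/vg0/ct101
resize2fs /dev/vg0/ct101          # grows the FS to fill the LV
# (lvextend -r does both steps in one command)

# Offline downsize - ext4 cannot shrink while mounted, and the FS must be
# shrunk *before* the LV, with the two sizes correctly aligned:
umount /dev/vg0/ct101
e2fsck -f /dev/vg0/ct101          # required before an offline shrink
resize2fs /dev/vg0/ct101 10G      # shrink the filesystem first...
lvreduce -L 10G /dev/vg0/ct101    # ...then the block device
# lvreduce -r can drive the filesystem shrink itself, which helps avoid
# getting the FS and block-device sizes out of sync.
```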
My question regarding the shared ploop storage was aimed at the fact that
you normally cannot have a single file system *mounted* on two different
cluster nodes/computers. You need some sort of cluster locking (protecting the
FS metadata), hence you need a cluster file system. My understanding of ploop
is that it provides a "block device in a file". It does not provide you with
a guest FS, does it? How can ploop protect the guest metadata from
concurrent accesses from different computers? Also, FUSE is probably not a
technology we would like to use for our containers, since a lot of them are
pretty IO-heavy and we try to get the most out of our block storage without
slowing it down too much with unnecessary levels of indirection.
True, you can use a simple bind mount. What I like about simfs (and that's
why all bind mounts in bossonvz - private->root, host->guest - are done with
simfs) is that it hides the mount source from the guest, so the guest
administrator does not get any info about the host devices/directories.
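To illustrate the difference (paths and device names are hypothetical): with
a plain bind mount, the backing device of the host directory tends to leak
into the guest's mount table, whereas simfs masks it:

```shell
# Plain bind mount into a container root:
mount --bind /storage/ct101-data /vz/root/101/mnt
# Inside the guest, /proc/mounts would then list the host's backing
# device (e.g. /dev/mapper/vg0-data) as the source of /mnt.

# With simfs, the same mount appears only as an opaque "simfs" entry,
# so the container admin learns nothing about host devices or paths.
```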
--
David Fabian
Cluster Design, s.r.o.
Dne St 5. srpna 2015 11:06:59, Maxim Patlasov napsal(a):
> Hi,
>
> On 08/04/2015 12:51 AM, Bosson VZ wrote:
> > Hello,
> >
> > yes, we have 200+ containers all running on top of simfs in clusters. Most
> > of the limitations of simfs mentioned above are not a problem to us as we
> > use clustered LVMs on top of DRBD storage. Every container has its own
> > file-system sitting on an LVM block device. This layout is universal for
> > our OpenVZ and KVM guests (where a raw block device is passed to the
> > guest). Different containers use different file-systems (ext4, xfs, nfs,
> > ...), some containers share part of their file-system hierarchy with
> > others in the cluster using ocfs2/gfs clustered file-systems. On
> > non-cluster hosts, we sometimes use host->guest bind-mounts as the way
> > to share some data. Thanks to DRBD, live migration is also possible
> > with minimal downtime.
>
> The solution you described looks very interesting. How do you reclaim
> unused disk space from container to host system? More specifically:
> suppose, at the moment of creation of a container you devoted 100GB LVM
> block device for that container; later on you decided that 10GB would be
> enough for it; what do you do to return those unused 90GB back to the
> host (or to other containers)? I guess the only way is to fully shutdown
> the container, shrink its filesystem, then shrink LVM block device, then
> start it up again.
>
> > The bossonvz libvirt driver at the moment only supports simfs (as that's
> > what we are using nowadays).
> >
> > Is it somehow possible to use ploop in a cluster and share the virtual
> > file system between hosts?
>
> You can keep ploop image files on a shared storage if its client
> filesystem is FUSE-based. For other filesystems more work is needed, but
> it should be doable.
>
> > How can host->guest bind-mount be achieved without simfs?
>
> The same way as for simfs: "mount --bind /shared_space
> /vz/root/101/mnt"? It shouldn't matter whether mount-point
> "/vz/root/101" came from simfs or from ploop. Am I missing something
> obvious?
>
> Thanks,
> Maxim