[Users] flashcache

Bosson VZ bossonvz at bosson.eu
Thu Jul 10 04:09:27 PDT 2014


Hello,

I see you have been developing your own OpenVZ admin layer. Would this be of any
use to you: http://bossonvz.bosson.eu?

It's interesting to hear that you use ZFS snapshotting and scp/ssh copying of container 
HDD data during migrations. How well does it scale for containers with very large partitions 
(hundreds of gigabytes, terabytes)?
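
For datasets that large it would at least help to estimate the stream size
up front; a minimal sketch, assuming a dataset layout like vz/private/101
and a zfs send new enough to support -n/-v dry runs:

  zfs snapshot vz/private/101@premigrate
  # -n sends nothing, -v prints the estimated stream size
  zfs send -nv vz/private/101@premigrate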

-- 
David Fabian
Cluster Design, s.r.o.

On Thu, 10 July 2014 at 12:52:18, Pavel Snajdr wrote:
> On 07/10/2014 12:50 PM, Pavel Snajdr wrote:
> > On 07/10/2014 12:32 PM, Pavel Odintsov wrote:
> >> Could you share your patches to vzmigrate and vzctl?
> > 
> > We don't have any; where vzctl/vzmigrate didn't satisfy our needs, we
> > went around these utilities and let vpsAdmin on the hwnode manage
> > things.
> > 
> > You can take a look here:
> > 
> > https://github.com/vpsfreecz/vpsadmind
> > 
> > I wouldn't recommend that anyone outside of our organization use vpsAdmin
> > yet, as the 2.0 transition to a self-describing RESTful API is still
> > underway. As soon as it's finished and well documented, I'll post a note
> > here as well.
> > 
> > The 2.0 version will be controlled primarily via a CLI tool, which
> > generates itself from the API description.
> > 
> > A running version of the API can be seen here:
> > 
> > https://api.vpsfree.cz/v1/
> > 
> > Github repos:
> > 
> > https://github.com/vpsfreecz/vpsadminapi (the API)
> > https://github.com/vpsfreecz/vpsadminctl (the CLI tool)
> > 
> > https://github.com/vpsfreecz/vpsadmind (daemon running on the hwnode)
> > https://github.com/vpsfreecz/vpsadmindctl (CLI tool to control the daemon)
> > 
> > https://github.com/vpsfreecz/vpsadmin
> > 
> > The last repo is vpsAdmin 1.x, which all the 2.0 components still require
> > to run. It's a pain to get this running yourself, but stay tuned; once we
> > get rid of 1.x and document 2.0 properly, it's going to be a great thing.
> > 
> > /snajpa
> 
> Though, if you don't mind managing things via a web interface, vpsAdmin
> 1.x can be installed through these scripts:
> 
> https://github.com/vpsfreecz/vpsadmininstall
> 
> /snajpa
> 
> >> On Thu, Jul 10, 2014 at 2:25 PM, Pavel Odintsov
> >> <pavel.odintsov at gmail.com> wrote:
> >>> Thank you for your answers! It's really useful information.
> >>> 
> >>> On Thu, Jul 10, 2014 at 2:08 PM, Pavel Snajdr <lists at snajpa.net> wrote:
> >>>> On 07/10/2014 11:35 AM, Pavel Odintsov wrote:
> >>>>>> Not true, IO limits are working as they should (if we're talking
> >>>>>> vzctl
> >>>>>> set --iolimit/--iopslimit). I've kicked the ZoL guys around to add IO
> >>>>>> accounting support, so it is there.
> >>>>> 
> >>>>> Could you share your tests with us? For standard setups like simfs,
> >>>>> these limits work badly in a large number of cases.
> >>>> 
> >>>> If you can give me concrete tests to run, sure, I'm curious to see if
> >>>> you're right - then we'd have something concrete to fix :)
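> >>>> 
> >>>> Something along these lines would be a concrete start (a minimal
> >>>> sketch; CTID 101 and the 3M limit are placeholder values):
> >>>> 
> >>>> # cap container 101 at 3 MB/s of disk writes
> >>>> vzctl set 101 --iolimit 3M --save
> >>>> # write 100 MB with a flush; dd should report roughly 3 MB/s
> >>>> vzctl exec 101 dd if=/dev/zero of=/root/iotest bs=1M count=100 conv=fdatasync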
> >>>> 
> >>>>>> How? ZFS doesn't have a limit on the number of files (2^48 isn't
> >>>>>> really a limit)
> >>>>> Is it OK when your customer creates a billion small files on a 10 GB
> >>>>> VPS and you try to archive it for backup? On a slow disk system it's
> >>>>> a real nightmare, because the huge number of disk operations kills
> >>>>> your I/O.
> >>>> 
> >>>> zfs snapshot <dataset>@<snapname>
> >>>> zfs send <dataset>@<snapname> > your-file
> >>>> # or pipe it straight to the backup host:
> >>>> zfs send <dataset>@<snapname> | ssh backuper zfs recv <backupdataset>
> >>>> 
> >>>> That's done at the block level. No need to run rsync anymore; it's a
> >>>> lot faster this way.
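> >>>> 
> >>>> Subsequent backups can be incremental, so only the blocks changed
> >>>> since the last snapshot travel over the wire (a minimal sketch;
> >>>> dataset and snapshot names are placeholders):
> >>>> 
> >>>> zfs snapshot tank/ct101@monday
> >>>> zfs send tank/ct101@monday | ssh backuper zfs recv backup/ct101
> >>>> # next day, send only the delta since @monday
> >>>> zfs snapshot tank/ct101@tuesday
> >>>> zfs send -i @monday tank/ct101@tuesday | ssh backuper zfs recv backup/ct101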
> >>>> 
> >>>>>> Why? ZFS send/receive is able to do a bit-by-bit identical copy of
> >>>>>> the FS. I thought the point of migration is that the CT doesn't
> >>>>>> notice any change; I don't see why the inode numbers should change.
> >>>>> 
> >>>>> Do you have a really working zero-downtime vzmigrate on ZFS?
> >>>> 
> >>>> Nope, vzmigrate isn't zero downtime. Since vzctl/vzmigrate don't
> >>>> support ZFS, we're implementing this our own way in vpsAdmin, which
> >>>> in its 2.0 re-implementation will go open source under the GPL.
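> >>>> 
> >>>> The rough pattern is a two-phase send: copy the bulk while the CT
> >>>> keeps running, then stop it only for the final small delta (a sketch
> >>>> of the idea, not the actual vpsAdmin code; names are placeholders):
> >>>> 
> >>>> zfs snapshot tank/ct101@mig1
> >>>> zfs send tank/ct101@mig1 | ssh target zfs recv tank/ct101
> >>>> vzctl stop 101
> >>>> zfs snapshot tank/ct101@mig2
> >>>> zfs send -i @mig1 tank/ct101@mig2 | ssh target zfs recv tank/ct101
> >>>> ssh target vzctl start 101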
> >>>> 
> >>>>>> How exactly? I haven't seen a problem with any userspace software,
> >>>>>> other than MySQL's default setting of AIO (it falls back to an
> >>>>>> older method), which ZFS doesn't support (*yet*; they have it in
> >>>>>> their plans).
> >>>>> 
> >>>>> I'm speaking about MySQL primarily. I have thousands of containers
> >>>>> and I can't tune MySQL to another mode for all customers; it's
> >>>>> impossible.
> >>>> 
> >>>> As I said, this is under development and will improve.
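> >>>> 
> >>>> Until then, the per-container workaround is the stock InnoDB option
> >>>> that disables native AIO (a minimal sketch of the my.cnf change
> >>>> inside a CT):
> >>>> 
> >>>> [mysqld]
> >>>> # fall back to InnoDB's simulated AIO, which works on ZFS
> >>>> innodb_use_native_aio = 0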
> >>>> 
> >>>>>> The L2ARC cache is really smart
> >>>>> 
> >>>>> Yep, fine, I knew that. But can you account for L2ARC cache usage
> >>>>> per customer? OpenVZ can do it via a flag:
> >>>>> sysctl -a | grep pagecache_isolation
> >>>>> ubc.pagecache_isolation = 0
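> >>>>> 
> >>>>> Turning it on is a single sysctl (standard sysctl usage; the flag is
> >>>>> the one shown above):
> >>>>> 
> >>>>> sysctl -w ubc.pagecache_isolation=1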
> >>>> 
> >>>> I can't account for caches per CT, but I haven't had any need to do
> >>>> so.
> >>>> 
> >>>> L2ARC != ARC. ARC lives in system RAM; L2ARC is intended to sit on an
> >>>> SSD and hold the content of ARC that is least significant when memory
> >>>> runs low - that data gets pushed from ARC to L2ARC.
> >>>> 
> >>>> ARC has two primary lists of cached data - most frequently used (MFU)
> >>>> and most recently used (MRU) - and the two lists are divided by a
> >>>> boundary marking which data can be pushed out in a low-memory
> >>>> situation.
> >>>> 
> >>>> Unlike with the Linux VFS cache, copying one big file doesn't push
> >>>> all of the other useful data out.
> >>>> 
> >>>> Thanks to this distinction between MRU and MFU, ARC achieves far
> >>>> better hit rates.
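> >>>> 
> >>>> On ZoL you can watch the MRU/MFU split directly (kstat field names
> >>>> from ZoL's arcstats):
> >>>> 
> >>>> grep -E '^(mru|mfu)_size' /proc/spl/kstat/zfs/arcstats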
> >>>> 
> >>>>> But one customer can eat almost all of the L2ARC cache and displace
> >>>>> other customers' data.
> >>>> 