[Users] OpenVZ & Max File Descriptors

Chip chiapas at aktivix.org
Mon Feb 24 13:50:00 PST 2014


On 24 February 2014 16:39, Scott Dowdle <dowdle at montanalinux.org> wrote:
>
> Greetings,

Good day Scott - thanks for your reply. See below.

> ----- Original Message -----
> > I've had a problem with a service on a CT failing with "too many open
> > files".
> >
> > The max number of file descriptors was 1024 - I've now increased the
> > max number of open files and max user processes to 65535 on the
> > hardware node per the info here:
> >
> > http://ithubinfo.blogspot.co.uk/2013/07/how-to-increase-ulimit-open-file-and.html
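
(For the archive: the change in that article boils down to something
like this - the wildcard entries and the 65535 values are just what I
used, adjust to taste. Note that on CentOS 6,
/etc/security/limits.d/90-nproc.conf overrides the nproc setting from
limits.conf, so it needs editing as well; the kernel-wide ceiling,
fs.file-max, is a separate sysctl.)

# /etc/security/limits.conf on the hardware node
*    soft    nofile    65535
*    hard    nofile    65535
*    soft    nproc     65535
*    hard    nproc     65535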
> >
> > I'm running CentOS 6.5, as follows:
> >
> > [root at machine ~]# uname -a
> > Linux example.com 2.6.32-042stab084.17 #1 SMP Fri Dec 27 17:10:20 MSK
> > 2013 x86_64 x86_64 x86_64 GNU/Linux
> >
> > The output of ulimit -a on the hardware node is as follows:
> >
> > [root at example ~]# ulimit -a
> > core file size (blocks, -c) 0
> > data seg size (kbytes, -d) unlimited
> > scheduling priority (-e) 0
> > file size (blocks, -f) unlimited
> > pending signals (-i) 126948
> > max locked memory (kbytes, -l) 64
> > max memory size (kbytes, -m) unlimited
> > open files (-n) 65535
> > pipe size (512 bytes, -p) 8
> > POSIX message queues (bytes, -q) 819200
> > real-time priority (-r) 0
> > stack size (kbytes, -s) 10240
> > cpu time (seconds, -t) unlimited
> > max user processes (-u) 65535
> > virtual memory (kbytes, -v) unlimited
> > file locks (-x) unlimited
> >
> > The output of ulimit -a on the CT is the same.
> >
> > Question: is this set-up 'correct' and unlikely to cause future
> > issues with access to files and resources, i.e. is there a danger
> > that I have over-committed the server?
>
> Since you are using the EL6-based OpenVZ kernel, you have the ability to use a vswap-style config for your container.

I do, yes - this is great:

http://openvz.org/VSwap
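
For anyone reading along later: on these kernels the vswap-style
config really is minimal - something like the following, where the
CTID (101) and the sizes are just examples:

vzctl set 101 --ram 4G --swap 8G --save

Everything not set explicitly (numfile, numproc, etc.) then defaults
to unlimited, as Scott notes below.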

> You talk about shell / account-specific limit controls but not really about what, if any, resource limits you have set on the
> container via its OpenVZ config file.  The vswap-style config generally only limits RAM and vswap and sets everything
> else to unlimited.

I have set the number of CPUs, limited RAM to about 50% of the HN's,
and set swap to double the RAM. Could these resource limits result in
"too many open files" errors?

> To answer your question, I'm guessing you should be fine... but can any single container exhaust all of your resources?

Hope not!

> Perhaps, but I don't think it is likely with normal usage patterns.  I guess you'll just have to see how well it works for you.

Great! Tomorrow evening will be busy, so I'll keep my fingers crossed
for then. We get the "too many open files" errors when the server gets
busy doing media streaming (which, I believe, is numproc-intensive), so
it is possible to predict when problems are likely to occur.
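
I'll keep an eye on actual usage while it is busy, roughly along these
lines (the daemon name below is just an example, substitute whatever
the streaming service runs as):

# system-wide: allocated fds, free fds, and the kernel ceiling
cat /proc/sys/fs/file-nr

# per-process fd counts for the streaming daemon
for p in $(pidof icecast); do
    echo "$p: $(ls /proc/$p/fd | wc -l)"
done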

>  There isn't one set of settings that is good for everything and everyone.

Yes, indeed. Just wanted to check that what I was doing was reasonably
sensible - it seems it is, so I hope I have finally got to the bottom
of the issue.

Thanks for getting back to me and all the best

Chip Scooter

> TYL,
> --
> Scott Dowdle
> 704 Church Street
> Belgrade, MT 59714
> (406)388-0827 [home]
> (406)994-3931 [work]




-- 
Admin: imc-uk-radio-owner at lists.indymedia.org
Co-admin: imc-uk-video-owner at lists.indymedia.org