[Users] OpenVZ & Max File Descriptors

Chip chiapas at aktivix.org
Mon Feb 24 06:40:59 PST 2014


Hi

I've had a problem with a service on a CT failing with "too many open
files".

The max number of file descriptors was 1024; I've now increased the max
number of open files and max user processes to 65535 on the hardware node,
per the info here:

http://ithubinfo.blogspot.co.uk/2013/07/how-to-increase-ulimit-open-file-and.html
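
In case it's useful, this is roughly what I changed (my reading of that
post; the exact entries below are the ones I used, so treat them as an
example rather than a quote from the article):

# added to /etc/security/limits.conf on the hardware node
*    soft    nofile    65535
*    hard    nofile    65535
*    soft    nproc     65535
*    hard    nproc     65535

The new limits only apply to sessions started after the change, so I
logged out and back in before re-checking them.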

I'm running CentOS 6.5 with the following kernel:

[root@machine ~]# uname -a
Linux example.com 2.6.32-042stab084.17 #1 SMP Fri Dec 27 17:10:20 MSK 2013
x86_64 x86_64 x86_64 GNU/Linux

The output of ulimit -a on the hardware node is as follows:

[root@example ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 126948
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65535
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The output of ulimit -a on the CT is the same.
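
To double-check that the service itself (and not just my login shell) sees
the new limit, I've been running the commands below inside the CT;
"myservice" is just a placeholder for the actual daemon:

# per-process limit of the running daemon
cat /proc/$(pidof myservice)/limits | grep "open files"

# OpenVZ beancounters - a non-zero failcnt on the numfile row would mean
# the CT itself is still hitting a file-descriptor limit
grep -E "failcnt|numfile" /proc/user_beancounters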

Question: is this set-up 'correct' and unlikely to cause future issues with
access to files and resources, i.e. is there a danger that I have
over-committed the server?
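
For what it's worth, the only sanity check I've done so far is to compare
that per-process limit against the kernel-wide maximum and the current
usage on the hardware node:

cat /proc/sys/fs/file-max   # system-wide maximum number of file handles
cat /proc/sys/fs/file-nr    # allocated / allocated-but-unused / maximum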

Many thanks and best regards

Chip Scooter