<div dir="ltr"><div>Hi<br><br>I've had a problem with a service on a CT failing with "too many open files".<br><br>The max number of file descriptors was 1024 - I've now increased the max number of open files and max user processes to 65535 on the hardware node per the info here:<br>
<br><a href="http://ithubinfo.blogspot.co.uk/2013/07/how-to-increase-ulimit-open-file-and.html">http://ithubinfo.blogspot.co.uk/2013/07/how-to-increase-ulimit-open-file-and.html</a><br><br></div>I'm running Centos v6.5 as follows<br>
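For reference, the change was essentially what that guide describes: adding nofile and nproc entries to /etc/security/limits.conf on the hardware node, roughly along these lines (these lines follow the guide rather than being copied verbatim from my config):

*    soft    nofile    65535
*    hard    nofile    65535
*    soft    nproc     65535
*    hard    nproc     65535

(As I understand it, limits.conf is applied at session start, so only processes started after the change see the new limits.)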
I'm running CentOS 6.5 as follows:

[root@machine ~]# uname -a
Linux example.com 2.6.32-042stab084.17 #1 SMP Fri Dec 27 17:10:20 MSK 2013 x86_64 x86_64 x86_64 GNU/Linux

The output of ulimit -a on the hardware node is as follows:
[root@example ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 126948
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65535
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

The output of ulimit -a on the CT is the same.

Question: is this set-up 'correct' and unlikely to cause future issues with access to files and resources, i.e. is there a danger that I have over-committed the server?
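To help judge whether the new ceiling is anywhere near being hit, something like the following can be compared against the 65535 limits (the PID below is just a placeholder for the failing service):

# system-wide: allocated file handles, unused, and the kernel maximum (fs.file-max)
cat /proc/sys/fs/file-nr

# limits actually in effect for the running service (1234 = placeholder PID)
cat /proc/1234/limits

# rough count of descriptors that process currently has open
ls /proc/1234/fd | wc -l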
Many thanks and best regards

Chip Scooter