Thank you, JR. As it turns out, I was *severely* starving my VMs. vzsplit worked like a charm...<br><br>--b<br><br><div class="gmail_quote">On Fri, Oct 14, 2011 at 5:35 PM, JR Richardson <span dir="ltr">&lt;<a href="mailto:jmr.richardson@gmail.com">jmr.richardson@gmail.com</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div><div></div><div class="h5">&gt; I know this has probably been discussed ad nauseam, but I haven&#39;t found<br>
&gt; what I&#39;m looking for yet, so I thought I would ask here.<br>
&gt;<br>
&gt; I have been running OpenVZ for a few years, but over the past couple of<br>
&gt; weeks I have noticed that several VMs were drifting out of spec, mainly<br>
&gt; dcachesize growing too large.<br>
&gt;<br>
&gt; These VMs started on a Debian OpenVZ box, and as my virtual infrastructure<br>
&gt; grew, I started using a pair of Proxmox VE machines (which are Debian Lenny<br>
&gt; based as well), which are clustered.<br>
&gt;<br>
&gt; I have 8 VMs that were created over time, some on 32-bit host machines, some<br>
&gt; on 64-bit. Thus, some have /proc/user_beancounters that look like:<br>
&gt;<br>
&gt;        1:  kmemsize                 13775736             15028224             48811846             51254098                63446<br>
&gt;            lockedpages                     0                  447               393216               393216                    0<br>
&gt;            privvmpages                 15152               105895               426752               439252                    0<br>
&gt;            shmpages                      648                 1304                21504                21504                    0<br>
&gt;            dummy                           0                    0                    0                    0                    0<br>
&gt;            numproc                        47                   72                  240                  240                    0<br>
&gt;            physpages                  166345               425143                    0           2147483647                    0<br>
&gt;            vmguarpages                     0                    0               426752           2147483647                    0<br>
&gt;            oomguarpages                 6374                97683               426752           2147483647                    0<br>
&gt;            numtcpsock                     44                   48                  360                  360                    0<br>
&gt;            numflock                        1                    7                  188                  206                    0<br>
&gt;            numpty                          0                    2                   16                   16                    0<br>
&gt;            numsiginfo                      1                   27                  256                  256                    0<br>
&gt;            tcpsndbuf                  525744              1026352              4212558              6014798                    0<br>
&gt;            tcprcvbuf                  524552              3052984              4212558              6014798                    0<br>
&gt;            othersockbuf                46240                65808              1126080              2097152                    0<br>
&gt;            dgramrcvbuf                     0               101600               262144               262144                    0<br>
&gt;            numothersock                   75                   82                  360                  360                    0<br>
&gt;            dcachesize                9997638             10000000              8000000             10000000                    0<br>
&gt;            numfile                       508                  695                 9312                 9312                    0<br>
&gt;            dummy                           0                    0                    0                    0                    0<br>
&gt;            dummy                           0                    0                    0                    0                    0<br>
&gt;            dummy                           0                    0                    0                    0                    0<br>
&gt;            numiptent                      20                   20                  128                  128                    0<br>
&gt;<br>
&gt; While others have effectively unlimited barrier and limit settings:<br>
&gt;<br>
&gt;        7:  kmemsize                 93292551            107253760  9223372036854775807  9223372036854775807                    0<br>
&gt;            lockedpages                     0                   16               393216               393216                    0<br>
&gt;            privvmpages                299033               413214               524288               536788                    0<br>
&gt;            shmpages                       68                  724  9223372036854775807  9223372036854775807                    0<br>
&gt;            dummy                           0                    0                    0                    0                    0<br>
&gt;            numproc                        86                  108                 1024                 1024                    0<br>
&gt;            physpages                  321589               496217                    0  9223372036854775807                    0<br>
&gt;            vmguarpages                     0                    0               524288  9223372036854775807                    0<br>
&gt;            oomguarpages               155305               180405               524288  9223372036854775807                    0<br>
&gt;            numtcpsock                     13                   17  9223372036854775807  9223372036854775807                    0<br>
&gt;            numflock                        3                    9  9223372036854775807  9223372036854775807                    0<br>
&gt;            numpty                          0                    2                  255                  255                    0<br>
&gt;            numsiginfo                      1                   15                 1024                 1024                    0<br>
&gt;            tcpsndbuf                  226720               329312  9223372036854775807  9223372036854775807                    0<br>
&gt;            tcprcvbuf                  277072              5662864  9223372036854775807  9223372036854775807                    0<br>
&gt;            othersockbuf                43928                66680  9223372036854775807  9223372036854775807                    0<br>
&gt;            dgramrcvbuf                     0                 5648  9223372036854775807  9223372036854775807                    0<br>
&gt;            numothersock                   63                   69  9223372036854775807  9223372036854775807                    0<br>
&gt;            dcachesize               88045648            101016538  9223372036854775807  9223372036854775807                    0<br>
&gt;            numfile                       360                  605  9223372036854775807  9223372036854775807                    0<br>
&gt;            dummy                           0                    0                    0                    0                    0<br>
&gt;            dummy                           0                    0                    0                    0                    0<br>
&gt;            dummy                           0                    0                    0                    0                    0<br>
&gt;            numiptent                      20                   20  9223372036854775807  9223372036854775807                    0<br>
&gt;<br>
&gt; I have three questions. First, I know that leaving everything unlimited is a<br>
&gt; quick path to running out of resources on the host machine. That said, I&#39;ve<br>
&gt; been having trouble recently with the VMs that have &quot;normal&quot; settings. It<br>
&gt; started with dcachesize going out of spec; within an hour of adjusting it, I<br>
&gt; started getting out-of-memory errors, which required me to raise kmemsize.<br>
&gt; That in turn caused problems on another &quot;normal&quot; VM, and so forth.<br>
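[Editor's note: a quick way to spot which counters are actually failing, not from the original thread, is to scan the failcnt column of the proc file. This assumes the stock one-row-per-resource layout (`resource held maxheld barrier limit failcnt`):]

```shell
# List beancounters whose fail counter (last column) is nonzero.
# $(NF-5) is the resource name whether or not the row carries the
# leading "uid:" field; the NF >= 6 guard skips the header lines.
awk 'NF >= 6 && $NF > 0 { print $(NF-5), $NF }' /proc/user_beancounters
```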
&gt;<br>
&gt; As I said, I know setting everything to unlimited is probably not<br>
&gt; recommended, so what is the recommended way to set the proper values for<br>
&gt; user_beancounters? Every time I change values in user_beancounters,<br>
&gt; something else comes unglued, except for the ones that have unlimited<br>
&gt; kmemsize and dcachesize.<br>
&gt;<br>
&gt; Is there a tool to set up the values based on the use of the particular VM?<br>
&gt; Is there any more information I need to provide?<br>
&gt;<br>
&gt; Thanks,<br>
&gt; --b<br>
</div></div>Try using vzsplit to segment your VEs equally; start there and<br>
increase or decrease resources per the demand of each VE.  Once you<br>
adjust your config files, use vzcfgvalidate to ensure your beans<br>
are adjusted properly.<br>
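[Editor's note: a minimal sketch of that workflow on an OpenVZ host; the container ID (101), container count (8), and config name (vps.split) are illustrative:]

```shell
# Divide the node's resources evenly among 8 containers; this writes
# a sample config to /etc/vz/conf/ve-vps.split.conf-sample
vzsplit -n 8 -f vps.split

# Apply the generated sample config to an existing container
vzctl set 101 --applyconfig vps.split --save

# Sanity-check barrier/limit relationships in the resulting config
vzcfgvalidate /etc/vz/conf/101.conf
```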
<br>
Good luck.<br>
<br>
JR<br>
<font color="#888888">--<br>
JR Richardson<br>
Engineering for the Masses<br>
<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@openvz.org">Users@openvz.org</a><br>
<a href="https://openvz.org/mailman/listinfo/users" target="_blank">https://openvz.org/mailman/listinfo/users</a><br>
</font></blockquote></div><br>