Thank you, JR. As it turns out, I was *severely* starving my VMs. vzsplit worked like a charm...<br><br>--b<br><br><div class="gmail_quote">On Fri, Oct 14, 2011 at 5:35 PM, JR Richardson <span dir="ltr"><<a href="mailto:jmr.richardson@gmail.com">jmr.richardson@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div><div></div><div class="h5">> I know this has probably been discussed ad nauseam, but I haven't found<br>
> what I'm looking for yet, so I thought I would ask here.<br>
><br>
> I have been running OpenVZ for a few years, but over the past couple of<br>
> weeks I have noticed that several VMs were drifting out of spec, mainly<br>
> dcachesize growing too large.<br>
><br>
> These VMs started on a Debian openvz box, and as my virtual infrastructure<br>
> grew, I started using a pair of proxmox-ve machines (which is Debian-lenny<br>
> based as well), which are clustered.<br>
><br>
> I have 8 VMs that were created over time, some on 32-bit host machines, some<br>
> on 64-bit. Thus, some have /proc/user_beancounters that look like:<br>
><br>
> 1: kmemsize 13775736 15028224 48811846 51254098 63446<br>
> lockedpages 0 447 393216 393216 0<br>
> privvmpages 15152 105895 426752 439252 0<br>
> shmpages 648 1304 21504 21504 0<br>
> dummy 0 0 0 0 0<br>
> numproc 47 72 240 240 0<br>
> physpages 166345 425143 0 2147483647 0<br>
> vmguarpages 0 0 426752 2147483647 0<br>
> oomguarpages 6374 97683 426752 2147483647 0<br>
> numtcpsock 44 48 360 360 0<br>
> numflock 1 7 188 206 0<br>
> numpty 0 2 16 16 0<br>
> numsiginfo 1 27 256 256 0<br>
> tcpsndbuf 525744 1026352 4212558 6014798 0<br>
> tcprcvbuf 524552 3052984 4212558 6014798 0<br>
> othersockbuf 46240 65808 1126080 2097152 0<br>
> dgramrcvbuf 0 101600 262144 262144 0<br>
> numothersock 75 82 360 360 0<br>
> dcachesize 9997638 10000000 8000000 10000000 0<br>
> numfile 508 695 9312 9312 0<br>
> dummy 0 0 0 0 0<br>
> dummy 0 0 0 0 0<br>
> dummy 0 0 0 0 0<br>
> numiptent 20 20 128 128 0<br>
><br>
> While others have effectively unlimited barrier and limit settings:<br>
><br>
> 7: kmemsize 93292551 107253760 9223372036854775807 9223372036854775807 0<br>
> lockedpages 0 16 393216 393216 0<br>
> privvmpages 299033 413214 524288 536788 0<br>
> shmpages 68 724 9223372036854775807 9223372036854775807 0<br>
> dummy 0 0 0 0 0<br>
> numproc 86 108 1024 1024 0<br>
> physpages 321589 496217 0 9223372036854775807 0<br>
> vmguarpages 0 0 524288 9223372036854775807 0<br>
> oomguarpages 155305 180405 524288 9223372036854775807 0<br>
> numtcpsock 13 17 9223372036854775807 9223372036854775807 0<br>
> numflock 3 9 9223372036854775807 9223372036854775807 0<br>
> numpty 0 2 255 255 0<br>
> numsiginfo 1 15 1024 1024 0<br>
> tcpsndbuf 226720 329312 9223372036854775807 9223372036854775807 0<br>
> tcprcvbuf 277072 5662864 9223372036854775807 9223372036854775807 0<br>
> othersockbuf 43928 66680 9223372036854775807 9223372036854775807 0<br>
> dgramrcvbuf 0 5648 9223372036854775807 9223372036854775807 0<br>
> numothersock 63 69 9223372036854775807 9223372036854775807 0<br>
> dcachesize 88045648 101016538 9223372036854775807 9223372036854775807 0<br>
> numfile 360 605 9223372036854775807 9223372036854775807 0<br>
> dummy 0 0 0 0 0<br>
> dummy 0 0 0 0 0<br>
> dummy 0 0 0 0 0<br>
> numiptent 20 20 9223372036854775807 9223372036854775807 0<br>
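> (An illustrative aside, not from the original thread: the "unlimited" sentinel values above are simply the platform's LONG_MAX, which is why containers created on 32-bit hosts show 2147483647 while those created on 64-bit hosts show 9223372036854775807.)<br>

```python
# "Unlimited" beancounter barriers/limits are just platform LONG_MAX.
print(2**31 - 1)  # 32-bit hosts: 2147483647
print(2**63 - 1)  # 64-bit hosts: 9223372036854775807
```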
><br>
> I have three questions. First, I know that leaving everything unlimited is a<br>
> quick path to running out of resources on the host machine. That said, I've<br>
> been having trouble recently with the VMs that have "normal" settings. It<br>
> started with dcachesize going out of spec; within an hour of adjusting it,<br>
> I started getting out-of-memory errors, which forced me to raise kmemsize.<br>
> That in turn caused problems on another "normal" VM, and so forth.<br>
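> (A quick diagnostic sketch, with sample data inlined for illustration; on a real host, run the awk against /proc/user_beancounters instead. The failcnt column, the last field of each beancounter line, shows which limits are actually being hit:)<br>

```shell
# Print beancounters whose failcnt (last field) is nonzero.
# On a real host: awk '...' /proc/user_beancounters
awk 'NF >= 6 && $NF + 0 > 0 { print $1, "failcnt=" $NF }' <<'EOF'
kmemsize 13775736 15028224 48811846 51254098 63446
dcachesize 9997638 10000000 8000000 10000000 0
numfile 508 695 9312 9312 0
EOF
```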
><br>
> As I said, I know setting everything to unlimited is probably not<br>
> advisable, so what is the recommended way to set proper values in<br>
> user_beancounters? Every time I change values in user_beancounters,<br>
> something else comes unglued, except on the VMs that have unlimited<br>
> kmemsize and dcachesize.<br>
><br>
> Is there a tool to set up the values based on the use of the particular VM?<br>
> Is there any more information I need to provide?<br>
><br>
> Thanks,<br>
> --b<br>
</div></div>Try using vzsplit to segment your VEs equally; start there and<br>
increase/decrease resources per the demand of each VE. Once you<br>
adjust your config files, use vzcfgvalidate to ensure your beans<br>
are adjusted properly.<br>
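<br>
For reference, vzsplit's even split is roughly the arithmetic below (a sketch; the 8 GiB host size and 8-container count are assumed for illustration, not taken from the thread):<br>

```shell
# vzsplit -n 8 -f vps.split      writes a sample config under /etc/vz/conf/;
# vzcfgvalidate /etc/vz/conf/101.conf   then checks a container's config.
# The even split vzsplit performs is roughly:
HOST_RAM_KB=$((8 * 1024 * 1024))  # assumed 8 GiB host
NUM_VE=8                          # assumed container count
PAGE_KB=4                         # page size in KiB
privvmpages=$(( HOST_RAM_KB / PAGE_KB / NUM_VE ))
echo "privvmpages barrier per VE: $privvmpages"
```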
<br>
Good luck.<br>
<br>
JR<br>
<font color="#888888">--<br>
JR Richardson<br>
Engineering for the Masses<br>
<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@openvz.org">Users@openvz.org</a><br>
<a href="https://openvz.org/mailman/listinfo/users" target="_blank">https://openvz.org/mailman/listinfo/users</a><br>
</font></blockquote></div><br>