[Users] VE affecting all server node.
Solar Designer
solar at openwall.com
Tue Feb 23 04:37:04 EST 2010
This is in addition to the excellent response by Gregor at HostGIS.
I second Gregor's advice, but I obviously won't repeat it here.
On Wed, Feb 17, 2010 at 12:59:56PM -0500, José David Bravo Álvarez wrote:
> We have one VE that has a MySQL Server inside. This VPS is running a
> MySQL query that puts a very high load on the whole server node and
> affects the other VEs. How can we limit this VPS to avoid the high
> load caused by the MySQL query?
Our experience with mysqld in OpenVZ containers suggests that quite
often the impact on other containers is associated with temporary files
created by mysqld. You may want to check where your mysqld creates
temporary files and whether that directory is on a tmpfs mount or on an
on-disk filesystem. It really ought to be on tmpfs. If it is on disk,
then every time a large temporary file is created and is kept for more
than a few seconds, data gets written out to disk. As you probably
know, data writes (as opposed to reads) are particularly unfriendly in
terms of affecting other accesses (including by other containers) on
RAID devices.
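A quick way to check is to ask the server for its tmpdir setting and then look at what filesystem backs that directory. A sketch (the mysql client line assumes a running server and default credentials; /tmp is just the common default):

```shell
# Ask mysqld where it puts temporary files (requires a running server):
#   mysql -e "SHOW VARIABLES LIKE 'tmpdir';"
# The same setting lives in my.cnf as tmpdir= under [mysqld].

# See which filesystem type backs that directory; we want to see "tmpfs":
df -T /tmp

# List all tmpfs mounts currently active:
grep tmpfs /proc/mounts || echo "no tmpfs mounts found"
```

If the second column of the df output says something like ext3 rather than tmpfs, the temporary files are hitting the disks.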
Additionally, even with tiny and short-lived temporary files, as well as
with reads of non-temporary files (such as MySQL tables), the directory
or file "atime" (access time) may need to be updated on disk after each
file access.(*) This involves extra disk seeks and writes (to both
drives at once in a RAID-1).
(*) Well, no more often than once per second per directory (for
temporary files) or once per second per file (for non-temporary files
being read) if the filesystem uses 1 second granularity for the "atime".
Yet this may have a significant impact on performance of a RAID array,
especially when the number of accessed files is large.
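You can observe this behavior directly by comparing a file's access time before and after a read; a minimal sketch, assuming GNU stat on Linux (whether the second timestamp changes depends on the mount's noatime/relatime/strictatime setting):

```shell
# Create a file, read it, and compare access times to see whether
# the filesystem updates "atime" on reads.
f=$(mktemp)
stat -c %X "$f"      # atime (seconds since epoch) right after creation
sleep 1
cat "$f" >/dev/null  # a read access
stat -c %X "$f"      # unchanged under noatime; may advance under relatime
rm -f "$f"
```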
So you need to ensure that mysqld's temporary files directory (maybe
just /tmp) is on tmpfs, and you may want to mount the filesystem on
which the container's directory tree resides with the "noatime" mount
option (this applies to the underlying filesystem, not to simfs).
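As a sketch, the corresponding mount setup on the hardware node might look like this (the device name, mount point, and tmpfs size are illustrative, not taken from the original post):

```
# /etc/fstab on the hardware node -- illustrative entries only:
/dev/sda3   /vz    ext3    defaults,noatime      1 2
tmpfs       /tmp   tmpfs   size=512m,mode=1777   0 0
```

An already-mounted filesystem can also be switched over without a reboot, e.g. "mount -o remount,noatime /vz".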
Additionally, you may want to adjust the size of the tmpfs mount
relative to your RAM size and the barriers for privvmpages and
shmpages. You need
to ensure that tmpfs size is strictly less than the privvmpages barrier,
because both files created on tmpfs and memory allocations by processes
inside the container are counted against that beancounter. You may
want to experiment with the tmpfs size such that MySQL queries that
would result in multi-gigabyte temporary files (yes, I've seen that
happen) actually fail before they impact the server at large by
evicting that much cached data from RAM.
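To make the relationship concrete: privvmpages is counted in 4 KiB pages, so the barrier must exceed the tmpfs size expressed in pages. A sketch with made-up numbers (the container ID 101 and the limits are examples, not recommendations):

```shell
# A 512 MiB tmpfs, fully used, consumes 512 * 1024 / 4 = 131072 pages
# of privvmpages; the barrier must be strictly larger than that.
echo $((512 * 1024 / 4))   # pages a full 512 MiB tmpfs would consume

# Illustrative vzctl invocations (barrier:limit syntax):
#   vzctl set 101 --privvmpages 262144:287744 --save
#   vzctl set 101 --shmpages 131072:131072 --save
```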
Finally, I'd like to point out that "load averages" are just one of the
measures of server load. They should be taken with a grain of salt.
High load average numbers are not necessarily indicative of unacceptably
high server load. For example, if you limit a container's allotted CPU
units and/or if you "nice" specific processes, this will not decrease
the reported load average numbers (in fact, it might even increase
them), yet the impact on other containers and other processes will be
reduced. Also, it is quite realistic for a busy server handling many
concurrent short-lived requests to have load averages in the 10.0 to
100.0 range, yet have idle resources available for use (CPU time and
disk seek "capacity", both at once). Of course, this is not always
the case (e.g., it is also realistic for a server to exhaust a resource
at a load average of 1.0), but sometimes it is. So it is wrong to rely
on load averages alone.
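For illustration, the CPU-side throttling mentioned above might look like this (the container ID 101 and the numbers are made-up examples):

```shell
# Reduce a container's CPU share relative to others, and/or cap it:
#   vzctl set 101 --cpuunits 500 --save
#   vzctl set 101 --cpulimit 50 --save    # cap at 50% of one CPU

# Or lower the priority of a specific process from inside the container:
#   renice +10 -p "$(pidof mysqld)"

# Runnable demonstration: start a background job at reduced priority.
nice -n 10 sleep 0 &
wait
```

Note that none of these lower the reported load averages; they only shift when the competing work gets scheduled.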
I hope this helps.
Alexander Peslyak <solar at openwall.com>
GPG key ID: 5B341F15 fp: B3FB 63F4 D7A3 BCCC 6F6E FC55 A2FC 027C 5B34 1F15
http://www.openwall.com - bringing security into open computing environments