[Devel] [PATCH p.haul] Increase the limit on open files for criu pre-dump and page-server
Andrei Vagin
avagin at openvz.org
Tue Nov 14 00:31:17 MSK 2017
criu restore has to be started with the standard limit, because the
kernel doesn't shrink an fdtable when the limit is reduced. fdtable-s
are charged to kmem, so if we run criu restore with a big limit, all
restored processes are forked with this limit and only then restore
their own limits, but their fdtable-s are allocated for the initial
limit, so they eat much more kernel memory than they have to.
https://jira.sw.ru/browse/PSBM-67194
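
For reference, below is a minimal, self-contained sketch of the
bump-and-restore pattern the patch applies around the swrk fork. It uses
/bin/true as a hypothetical stand-in for the criu child; raising the
hard limit up to fs/nr_open needs CAP_SYS_RESOURCE, which p.haul has
when it runs as root.

import resource
import subprocess

# fs/nr_open is the kernel-wide ceiling for RLIMIT_NOFILE.
with open("/proc/sys/fs/nr_open") as f:
    nr_open = int(f.read())

saved = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (nr_open, nr_open))
try:
    # the child inherits the raised limit across fork+exec
    child = subprocess.Popen(["/bin/true"])
finally:
    # put the old limit back so the parent keeps the standard one
    resource.setrlimit(resource.RLIMIT_NOFILE, saved)
child.wait()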
Cc: Cyrill Gorcunov <gorcunov at gmail.com>
Cc: Pavel Vokhmyanin <pvokhmyanin at virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin at openvz.org>
---
phaul/criu_api.py | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/phaul/criu_api.py b/phaul/criu_api.py
index 73c642a..4627d5f 100644
--- a/phaul/criu_api.py
+++ b/phaul/criu_api.py
@@ -9,6 +9,7 @@ import re
 import socket
 import subprocess
 import util
+import resource
 import pycriu
@@ -36,9 +37,16 @@ class criu_conn(object):
 		util.set_cloexec(css[1])
 		logging.info("Passing (ctl:%d, data:%d) pair to CRIU",
 				css[0].fileno(), mem_sk.fileno())
+
+		# criu uses a lot of pipes to pre-dump memory, so we need to
+		# increase the limit on open files.
+		fileno_max = int(open("/proc/sys/fs/nr_open").read())
+		fileno_old = resource.getrlimit(resource.RLIMIT_NOFILE)
+		resource.setrlimit(resource.RLIMIT_NOFILE, (fileno_max, fileno_max))
 		self._swrk = subprocess.Popen([criu_binary,
 				"swrk", "%d" % css[0].fileno()])
 		css[0].close()
+		resource.setrlimit(resource.RLIMIT_NOFILE, fileno_old)
 		self._cs = css[1]
 		self._last_req = -1
 		self._mem_fd = mem_sk.fileno()
--
2.13.6