[CRIU] [PATCHv3 2/3] test: Add rpc test for dump/restore with --remote

Rodrigo Bruno rbruno at gsd.inesc-id.pt
Thu May 10 12:05:37 MSK 2018


Hi Andrei and Radostin,

In our last iteration, I sent Andrei a patch to rework the image proxy/cache
code to be single-threaded.

Is it possible to merge that patch into CRIU dev?

I think this would make things easier, because we would then be working on
small code changes to fix the bugs behind the failing tests.

I would be able to help fix the implementation.

best,
rodrigo

2018-05-10 2:10 GMT+01:00 Andrei Vagin <avagin at virtuozzo.com>:

> Hi Radostin,
>
> Should we run this test in scripts/travis/travis-tests?
>
> Rodrigo has patches which rework the image proxy & cache into a
> single-threaded model. These patches are mostly ready to be merged, but
> still need some work.
>
> Would you be interested in finishing criu --remote, so that we could
> merge it into the master branch?
>
> For that, all tests have to pass, and the image proxy and cache have to
> be single-threaded processes.
>
> There is one inline comment.
>
> On Mon, Feb 12, 2018 at 10:54:05AM +0000, Radostin Stoyanov wrote:
> > Signed-off-by: Radostin Stoyanov <rstoyanov1 at gmail.com>
> > ---
> >  test/others/rpc/remote.py | 86 +++++++++++++++++++++++++++++++++++++++++++++++
> >  test/others/rpc/run.sh    | 26 ++++++++++++++
> >  2 files changed, 112 insertions(+)
> >  create mode 100644 test/others/rpc/remote.py
> >
> > diff --git a/test/others/rpc/remote.py b/test/others/rpc/remote.py
> > new file mode 100644
> > index 00000000..96ce3e21
> > --- /dev/null
> > +++ b/test/others/rpc/remote.py
> > @@ -0,0 +1,86 @@
> > +#!/usr/bin/env python2
> > +
> > +import socket, os, imp, sys, errno, signal
> > +import rpc_pb2 as rpc
> > +import argparse
> > +
> > +MAX_MSG_SIZE = 1024
> > +
> > +parser = argparse.ArgumentParser(description="Test --remote option using CRIU RPC")
> > +parser.add_argument('socket', type = str, help = "CRIU service socket")
> > +parser.add_argument('dir', type = str, help = "Directory where CRIU images should be placed")
> > +parser.add_argument('pid', type = int, help = "PID of process to be dumped")
> > +
> > +args = vars(parser.parse_args())
> > +
> > +# Connect to RPC socket
> > +s = socket.socket(socket.AF_UNIX, socket.SOCK_SEQPACKET)
> > +s.connect(args['socket'])
> > +
> > +# Open images-dir
> > +dir_fd = os.open(args['dir'], os.O_DIRECTORY)
> > +if dir_fd < 0:
> > +     print "Failed to open dir %s" % args['dir']
> > +     sys.exit(-1)
> > +
> > +# Prepare dump request
> > +req = rpc.criu_req()
> > +req.type = rpc.DUMP
> > +req.opts.remote = True
> > +req.opts.log_level = 4
> > +req.opts.pid = args['pid']
> > +req.opts.images_dir_fd = dir_fd
> > +
> > +# Send dump request
> > +s.send(req.SerializeToString())
> > +
> > +# Receive response
> > +resp = rpc.criu_resp()
> > +resp.ParseFromString(s.recv(MAX_MSG_SIZE))
> > +
> > +# Reconnect to RPC socket
> > +s.close()
> > +s = socket.socket(socket.AF_UNIX, socket.SOCK_SEQPACKET)
> > +s.connect(args['socket'])
> > +
> > +
> > +if resp.type != rpc.DUMP:
> > +     print 'Unexpected dump msg type'
> > +     sys.exit(-1)
> > +else:
> > +     if resp.success:
> > +             print 'Dump success'
> > +     else:
> > +             print 'Dump fail'
> > +             sys.exit(-1)
> > +
> > +req = rpc.criu_req()
> > +req.type = rpc.RESTORE
> > +req.opts.remote = True
> > +req.opts.log_level = 4
> > +req.opts.images_dir_fd = dir_fd
> > +
> > +# Send restore request
> > +s.send(req.SerializeToString())
> > +
> > +# Receive response
> > +resp = rpc.criu_resp()
> > +resp.ParseFromString(s.recv(MAX_MSG_SIZE))
> > +
> > +# Close RPC socket
> > +s.close()
> > +# Close fd of images dir
> > +os.close(dir_fd)
> > +
> > +if resp.type != rpc.RESTORE:
> > +     print 'Unexpected restore msg type'
> > +     sys.exit(-1)
> > +else:
> > +     if resp.success:
> > +             print 'Restore success'
> > +             print "PID of the restored program is %d\n" % (resp.restore.pid)
> > +             # Kill restored process
> > +             os.kill(resp.restore.pid, signal.SIGTERM)
> > +     else:
> > +             print 'Restore fail'
> > +             sys.exit(-1)
> > diff --git a/test/others/rpc/run.sh b/test/others/rpc/run.sh
> > index ed99addb..5364cc90 100755
> > --- a/test/others/rpc/run.sh
> > +++ b/test/others/rpc/run.sh
> > @@ -76,6 +76,31 @@ function test_errno {
> >       setsid ./errno.py build/criu_service.socket build/imgs_errno < /dev/null &>> build/output_errno
> >  }
> >
> > +function test_remote {
> > +     mkdir -p build/imgs_remote
> > +
> > +     title_print "Run image-cache"
> > +     ${CRIU} image-cache -v4 -o dump-loop.log -D build/imgs_remote --port 9996 &
> > +     CACHE_PID=${!}
> > +
> > +     title_print "Run image-proxy"
> > +     ${CRIU} image-proxy -v4 -o dump-loop.log -D build/imgs_remote --address localhost --port 9996 &
>
> How do you wait until image-cache has created its listening socket?
>
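One way to address that race would be to poll the cache's TCP port before
starting image-proxy. A minimal sketch, assuming "ss" is available on the
test machine (the wait_port helper below is only an illustration, not part
of the patch):

    # Poll until something is listening on the given TCP port,
    # giving up after roughly 5 seconds.
    function wait_port {
            local port=$1
            for i in $(seq 50); do
                    if ss -ltn | grep -q ":${port} "; then
                            return 0
                    fi
                    sleep 0.1
            done
            echo "Timed out waiting for port ${port}" >&2
            return 1
    }

    # Usage in test_remote, right after starting image-cache:
    # wait_port 9996 || return 1
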
> > +     PROXY_PID=${!}
> > +
> > +     title_print "Run loop.sh"
> > +     setsid ./loop.sh < /dev/null &> build/loop.log &
> > +     LOOP_PID=${!}
> > +     echo "Start loop with pid ${LOOP_PID}"
> > +
> > +     title_print "Run dump/restore with --remote test"
> > +     ./remote.py build/criu_service.socket build/imgs_remote ${LOOP_PID} < /dev/null &>> build/output_remote
> > +
> > +     # Clean up
> > +     kill -SIGTERM ${LOOP_PID}
> > +     kill -SIGTERM ${CACHE_PID}
> > +     kill -SIGTERM ${PROXY_PID}
> > +}
> > +
> >  trap 'echo "FAIL"; stop_server' EXIT
> >
> >  start_server
> > @@ -85,6 +110,7 @@ test_py
> >  test_restore_loop
> >  test_ps
> >  test_errno
> > +test_remote
> >
> >  stop_server
> >
> > --
> > 2.14.3
> >
> > _______________________________________________
> > CRIU mailing list
> > CRIU at openvz.org
> > https://lists.openvz.org/mailman/listinfo/criu
>