[Users] Remote storage and migration

Jan Tomasek jan at tomasek.cz
Mon Aug 10 12:38:29 EDT 2009


Hello Gastón,

> Hello, list. I've been using OpenVZ since last year. Now I'd like to
> configure the cluster I've been using to use remote storage, so that
> migrations do not require to copy the container's filesystem.
> 
> Reading the wiki and the forum, I've learned that NFS is not
> recommended, but I don't know why. iSCSI has been suggested as a way
> to go, although no success story supported the suggestion. Finally,
> somebody said that he was using GFS without problems, but one of the
> OpenVZ developers said (at that time) that GFS was not recommended due
> to stability issues.
> 
> I've been told that remote storage with OpenVZ is not a well-known
> area, but I have to ask anyway, so as to leverage previous
> experiences. Any information is welcome, especially the problems that
> were found with each approach, so that I know where to go and where
> not to.

I just checked the list archive intending to ask some questions about
this myself, and I see I can help you a bit.

I have two DELL PE R805 servers and one DELL MD3000i, which is an iSCSI
array. I plan to provide one LUN per VE. With help from this page:

http://wiki.openvz.org/Checkpointing_and_live_migration

I just successfully tested migration from one system (chlivek3) to a
second system (chlivek2). Both systems have exactly the same architecture.

On chlivek3 I did this:

> chlivek3:~# export VEID=233204
> chlivek3:~# vzctl chkpnt $VEID --suspend
> Setting up checkpoint...
>         suspend...
>         get context...
> Checkpointing completed succesfully
> chlivek3:~# vzctl chkpnt $VEID --dump --dumpfile /tmp/$VEID
> Setting up checkpoint...
>         join context..
>         dump...
> Checkpointing completed succesfully
> chlivek3:~# vzctl chkpnt $VEID --kill
> Killing...
> chlivek3:~# vzlist -a
>       CTID      NPROC STATUS  IP_ADDR         HOSTNAME                        
>     192002         11 running 192.168.1.2     dnetc                           
>     233204          - mounted 195.113.233.204 mig.cesnet.cz                   
> chlivek3:~# vzctl umount $VEID
> Container is unmounted
> chlivek3:~# vzlist -a
>       CTID      NPROC STATUS  IP_ADDR         HOSTNAME                        
>     192002         11 running 192.168.1.2     dnetc                           
>     233204          - stopped 195.113.233.204 mig.cesnet.cz      
> chlivek3:~# umount /vz/private/$VEID

On chlivek2 I did this:

> chlivek2:~# mount LABEL=233204 /vz/private/233204/
> chlivek2:~# scp root at chlivek3:/tmp/233204 /tmp/
> 233204                                        100% 1823KB   1.8MB/s   00:00    
> chlivek2:~# ls -l /tmp/233204 
> -rw------- 1 root root 1866624 srp 10 17:33 /tmp/233204
> chlivek2:~# vzctl restore 233204 --undump --dumpfile /tmp/233204 
> Restoring container ...
> Starting container ...
> Container is mounted
>         undump...
> Adding IP address(es): 195.113.233.204
> Setting CPU units: 1000
> Configure meminfo: 65536
>         get context...
> Container start in progress...
> Restoring completed succesfully
> chlivek2:~# vzctl restore 233204 --resume
> Resuming...
> chlivek2:~# vzlist -a
>       CTID      NPROC STATUS  IP_ADDR         HOSTNAME                        
>     192002         11 running 192.168.1.2     dnetc                           
>     233204          8 running 195.113.233.204 mig.cesnet.cz                   

The VE is up and working. After this I feel very powerful! ;)
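The manual sequence above could be wrapped in a small script. This is a
rough, untested sketch of my own: the `run`/`DRY_RUN` helper and the
`migrate_ve` function name are my additions, not part of vzctl, and the
hostnames are only examples. It assumes the dump file path and LUN label
conventions used above.

```shell
# run: execute a command, or just print it when DRY_RUN is set,
# so the whole migration sequence can be previewed safely.
run() {
    if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi
}

# migrate_ve VEID DESTHOST: checkpoint the VE, move the dump file,
# remount its LUN on the destination and undump it there.
migrate_ve() {
    veid=$1; dst=$2; dump=/tmp/$veid
    run vzctl chkpnt "$veid" --suspend
    run vzctl chkpnt "$veid" --dump --dumpfile "$dump"
    run vzctl chkpnt "$veid" --kill
    run vzctl umount "$veid"
    run umount "/vz/private/$veid"
    run scp "$dump" "root@$dst:$dump"
    run ssh "root@$dst" "mount LABEL=$veid /vz/private/$veid \
        && vzctl restore $veid --undump --dumpfile $dump \
        && vzctl restore $veid --resume"
}
```

Previewing with `DRY_RUN=1 migrate_ve 233204 chlivek2` prints the
commands without touching anything.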

I see some possible problems with this:

a) What to do if `vzctl restore 233204 --undump ...` fails? To be able
to umount the storage I must kill the VE first. I might be able to
undump the VE back on the source system, or I might not. If not, then
this is effectively the same as switching the machine off.
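One way to at least attempt recovery: check the exit status of the
undump and, on failure, hand the LUN back to the source host and try to
undump there. A rough sketch of my own (untested; the function name, the
`run`/`DRY_RUN` helper, and the hostnames are assumptions):

```shell
# run: execute a command, or just print it when DRY_RUN is set.
run() {
    if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi
}

# restore_with_fallback VEID SRCHOST: try to undump here; if that
# fails, release the LUN and try to undump back on the source host.
restore_with_fallback() {
    veid=$1; src=$2; dump=/tmp/$veid
    if run vzctl restore "$veid" --undump --dumpfile "$dump"; then
        run vzctl restore "$veid" --resume
    else
        # undump failed: unmount the LUN on this host and give the
        # source host a chance to bring the VE back up
        run umount "/vz/private/$veid"
        run ssh "root@$src" "mount LABEL=$veid /vz/private/$veid \
            && vzctl restore $veid --undump --dumpfile $dump \
            && vzctl restore $veid --resume"
    fi
}
```

If the undump fails on both hosts, the fallback degrades to a plain
cold start from the on-disk state, i.e. the "switched off" case.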

b) How to maintain the config files? I'm planning to use `vzctl set VE
--disabled yes --save`.

c) How to prevent both hosts from mounting the same LUN? I can check
`tune2fs -l <device> | grep needs_recovery` and stop the VE startup if
the flag is set, but this will require manual intervention after an
unclean host shutdown. Right now both hosts are in the same host group
and the LUN mapping permits the LUN to be accessed by both of them. I
could change the mapping so that only one host is permitted to access
the LUN, but that would mean changing the mapping every time a VE
migrates, which is another slowdown.
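The `needs_recovery` check from point c) could be factored out like
this. A small sketch of my own (the function name and the device path
are hypothetical; it just greps `tune2fs -l` output):

```shell
# fs_needs_recovery: reads `tune2fs -l <device>` output on stdin and
# succeeds if the ext3 journal still has the needs_recovery flag set,
# i.e. the filesystem was not cleanly unmounted (or is still mounted
# elsewhere).
fs_needs_recovery() {
    grep -q 'needs_recovery'
}

# usage (hypothetical device path for the VE's LUN):
#   if tune2fs -l /dev/sdb1 | fs_needs_recovery; then
#       echo "refusing to start VE: filesystem needs recovery" >&2
#   fi
```

Note this only catches the unclean case; it does not actually prevent a
double mount the way per-host LUN mapping would.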


Right now I'm not sure I will really deploy this feature. It is
definitely a cute toy to impress my boss with. ;) It has to be tuned a
bit to get closer to the one-click migration I've seen in VMware... But
I'm not sure I really need it. If it proves to be reliable, it might be
handy to be able to reboot a host without users noticing, but I ask my
users to set up their systems to come up correctly after a reboot, and I
rarely need to reboot a whole system.


I would like to read others' opinions on the migration problems I see.

Thanks
-- 
-----------------------
Jan Tomasek aka Semik
http://www.tomasek.cz/
