[Users] Starting With OpenVZ and Having Interface Issues
Matt
matt.mailinglists at gmail.com
Thu Feb 6 07:45:19 PST 2014
>> I just installed OpenVZ on CentOS 6.x 64 bit following this guide a
>> few days ago.
>>
>> http://openvz.org/Quick_Installation_CentOS_6
>>
>> I am mostly interested in creating containers with veth interfaces so
>> I can assign multiple public IPv4 and IPv6 addresses from inside the
>> container. I noticed that when I am logged into the CentOS physical
>> server over ssh with PuTTY, I can start a container that has a veth
>> interface with 'vzctl start 102' and it starts just fine. I can ping
>> its IPv4 and IPv6 addresses. However, when I issue 'vzctl stop 102' my
>> PuTTY ssh connection to the OpenVZ server is dropped and I must
>> reconnect. Reconnecting works fine. Again, I am logged into the
>> OpenVZ master, not the container.
>
>
> I can't help much here, since you are following the unofficial guide
> (sorry for that, I just marked it as such on the wiki), but it looks like it is
Is there a more official guide for CentOS 6.x that includes bridged interfaces?
I see this:
http://openvz.org/Quick_installation
but there is no mention of how to add bridging to it.
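For reference, my bridge setup follows the usual CentOS 6 pattern, roughly
like this (addresses below are placeholders, not my real config):

  # /etc/sysconfig/network-scripts/ifcfg-vmbr0
  DEVICE=vmbr0
  TYPE=Bridge
  BOOTPROTO=static
  IPADDR=192.0.2.10        # host's public IP (placeholder)
  NETMASK=255.255.255.0
  GATEWAY=192.0.2.1
  ONBOOT=yes
  DELAY=0
  NM_CONTROLLED=no

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  ONBOOT=yes
  BRIDGE=vmbr0             # primary NIC enslaved to the bridge
  NM_CONTROLLED=no

which, as you note below, puts the primary NIC right on the bridge.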
> not correct to add your primary NIC device right to the bridge you are
> using for containers. I don't remember the details at the moment, but
> it looks like in such a setup you get the effect you are complaining about.
> It's something about the bridge state and the state of the devices in it
> (i.e. their ups and downs).
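If the detail is the bridge recalculating its MAC address when a veth port
is added or removed (which would explain an established ssh session
dropping), I guess I could pin vmbr0's MAC to eth0's address so it stays
stable across container starts and stops. Something like this (untested,
MAC below is a placeholder for my eth0's address):

  # keep vmbr0's MAC fixed so veth ports coming and going don't change it
  ip link set dev vmbr0 address 00:25:90:aa:bb:cc

Is that the effect you mean?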
>
> Again, I am not quite sure how to configure this to achieve what you want
> (I'd just use venet and --ipadd/--ipdel to set up in-CT IPs), but I remember
I am putting a few DirectAdmin instances on here and life is much
easier if they can manage their own IP space.
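(For reference, the venet route you mention would just be

  vzctl set 102 --ipadd 203.0.113.5 --save
  vzctl set 102 --ipdel 203.0.113.5 --save

but those IPs are managed from the host, which is exactly what I am
trying to avoid here.)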
> having good experience with the virbr0 device which is pre-configured
> on RHEL/CentOS (for use by KVM guests), with all bells and whistles
I prefer the lower overhead and easy expansion of OpenVZ containers.
Or will virbr0 work with OpenVZ?
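I suppose I could point a container's veth at virbr0 the same way I do
with vmbr0, e.g. (assuming a vzctl new enough to take the bridge field
in --netif_add):

  vzctl set 102 --netif_add eth0,,,,virbr0 --save

but virbr0 is NATed to 192.168.122.0/24 by default, so the containers
would not get public IPs through it without reworking that network.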
> including
> a DHCP server. If this is not what you want, you have to dig deeper into how
> bridged networking and bridge devices work.
No need for DHCP.
Thanks.
>
> Hope that helps.
>
>>
>> Can anyone tell me what is going on here? After further testing I
>> determined it does not always happen. It seems most prone to act up
>> the first time I stop a container after rebooting the master
>> (physical) server.
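If it helps, after the next reboot I can watch the bridge MAC across a
container stop from the physical console instead of ssh, something like:

  ip link show vmbr0 | grep link/ether   # before the stop
  vzctl stop 102
  ip link show vmbr0 | grep link/ether   # after the stop

and see whether the address changes at the moment the session drops.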
>>
>> ######### just did reboot 30 minutes ago or so............
>>
>> [root at ex1 ~]# vzlist
>> CTID NPROC STATUS IP_ADDR HOSTNAME
>> 103 92 running - server.example456.com
>> [root at ex1 ~]# vzctl start 102
>> Starting container...
>> Container is mounted
>> Setting CPU units: 1000
>> Setting CPUs: 1
>> Configure veth devices: veth102.0
>> Adding interface veth102.0 to bridge vmbr0 on CT0 for CT102
>> Container start in progress...
>> [root at ex1 ~]# vzlist
>> CTID NPROC STATUS IP_ADDR HOSTNAME
>> 102 19 running - server102.example.com
>> 103 92 running - server.example456.com
>> [root at ex1 ~]# vzctl stop 102
>> Stopping container ...
>>
>> ############## Here is where I get kicked off PuTTY/ssh......