[Users] Virtuozzo 7 - bridged veth interfaces for container

Axton axton.grams at gmail.com
Sat Feb 13 13:23:46 PST 2016


vznetcfg seems to do what I need, but it appears to be missing from beta4
(all packages) when installed on an existing OS.  Here is the doc page
referencing this utility:

https://docs.openvz.org/virtuozzo_7_command_line_reference.webhelp/_vznetcfg.html

I have this script on the machine, but it is not the script referenced in
the documentation:

[root@cluster-02 tmp]# find / -name vznetcfg
/usr/libexec/libvzctl/scripts/vznetcfg
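
For what it's worth, the package queries I plan to use to confirm whether the
documented utility is simply not shipped in beta4 (assuming the usual rpm/yum
tooling on the host):

rpm -qf /usr/libexec/libvzctl/scripts/vznetcfg
yum provides '*/vznetcfg'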

Axton Grams


On Sat, Feb 13, 2016 at 2:53 PM, Axton <axton.grams at gmail.com> wrote:

> I found the container action scripts capability, but this is not
> sufficient for my needs:
>
> https://docs.openvz.org/virtuozzo_7_command_line_reference.webhelp/_container_action_scripts.html
>
> The start script runs in the context of the container; I need the
> equivalent trigger to run a script on the host.  The mount script is
> executed on the host, but the veth interface has not been created on the
> host when this script runs.
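>
> One workaround I am considering (untested; a rough sketch only, and it
> assumes the legacy <CTID>.mount action script convention still applies and
> that the veth/bridge names follow my example further down, veth30000.40 and
> vmbr40) is to have the mount script background a small loop that waits for
> the veth to appear on the host and then attaches it to the bridge:
>
> #!/bin/bash
> # Hypothetical <CTID>.mount action script, executed on the host at mount
> # time.  The veth does not exist yet at this point, so poll for it in the
> # background and attach it to the bridge once it shows up.
> (
>     for i in $(seq 1 30); do
>         if ip link show veth30000.40 >/dev/null 2>&1; then
>             brctl addif vmbr40 veth30000.40
>             ip link set veth30000.40 up
>             break
>         fi
>         sleep 1
>     done
> ) &
> exit 0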
>
> On Sat, Feb 13, 2016 at 10:49 AM, Axton <axton.grams at gmail.com> wrote:
>
>> Digging through my notes, I accomplished this using the script
>> /usr/sbin/vznetaddbr, which parses the container's CTID.conf file and adds
>> the CT veth interfaces to a bridge, and a config file:
>>
>> $ cat /etc/vz/vznet.conf
>> EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
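>>
>> For reference, a stripped-down sketch of what that script boiled down to
>> (my reconstruction from memory, not the original; it assumes the caller
>> exports VEID and VNAME in the environment and that NETIF in
>> /etc/vz/conf/$VEID.conf carries host_ifname= and bridge= fields):
>>
>> #!/bin/bash
>> # Attach the container's host-side veth to the bridge named in its config.
>> CONFIGFILE=/etc/vz/conf/$VEID.conf
>> . "$CONFIGFILE"
>>
>> for iface in $(printf %s "$NETIF" | tr ';' '\n'); do
>>     bridge=
>>     host_ifname=
>>     for kv in $(printf %s "$iface" | tr ',' '\n'); do
>>         case "$kv" in
>>             bridge=*|host_ifname=*) eval "${kv%%=*}=${kv#*=}" ;;
>>         esac
>>     done
>>     # Only handle the entry for this veth, and only if a bridge is named.
>>     [ "$host_ifname" = "$VNAME" ] || continue
>>     [ -n "$bridge" ] || continue
>>     ip link set dev "$host_ifname" up
>>     brctl addif "$bridge" "$host_ifname"
>>     break
>> done
>> exit 0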
>>
>> If no native capabilities exist to make the bridge configuration changes
>> in Virtuozzo 7, what I need is a way to call a script when the container is
>> started, after the interface has been created on the host.  Is there a way
>> to hook into the container startup in this way?
>>
>> At the end of the day, what I am looking to do is automatically attach a
>> container to one or more VLANs using veth interfaces.  I accomplished this
>> in the past using bridge-utils.  It is important that I can specify which
>> bridge each veth interface is added to.  Let me know if there is a better
>> way to do this.
>>
>> Thanks,
>> Axton Grams
>>
>> On Fri, Feb 12, 2016 at 9:22 PM, Axton <axton.grams at gmail.com> wrote:
>>
>>> My host's network uses teaming, bridge, and VLAN interfaces.
>>>
>>> There are 4 physical interfaces: enp0s20f0-enp0s20f3.
>>> The 4 physical interfaces are grouped using LACP via teaming.
>>>
>>> [root@cluster-02 ~]# teamdctl team0 config dump actual
>>> {
>>>     "device": "team0",
>>>     "link_watch": {
>>>         "name": "ethtool"
>>>     },
>>>     "ports": {
>>>         "enp0s20f0": {},
>>>         "enp0s20f1": {},
>>>         "enp0s20f2": {},
>>>         "enp0s20f3": {}
>>>     },
>>>     "runner": {
>>>         "active": true,
>>>         "fast_rate": true,
>>>         "name": "lacp",
>>>         "tx_balancer": {
>>>             "name": "basic"
>>>         },
>>>         "tx_hash": [
>>>             "eth",
>>>             "ipv4",
>>>             "ipv6"
>>>         ]
>>>     }
>>> }
>>>
>>> The switch is configured to send 802.1Q (VLAN) encapsulated packets for
>>> a series of VLANs.  For each VLAN, a VLAN interface is configured on the
>>> Linux host and a bridge is configured on top of the VLAN interface.  Take
>>> VLAN 80 as an example:
>>>
>>> [root@cluster-02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0.80
>>> DEVICE=team0.80
>>> VLAN=yes
>>> BOOTPROTO=none
>>> NM_CONTROLLED=no
>>> BRIDGE=vmbr80
>>> IPV6INIT=no
>>> MTU=9000
>>>
>>> [root@cluster-02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-vmbr80
>>> VLAN=yes
>>> TYPE=Bridge
>>> DEVICE=vmbr80
>>> PHYSDEV=team0
>>> VLAN_ID=80
>>> REORDER_HDR=0
>>> IPV4_FAILURE_FATAL=no
>>> IPV6INIT=no
>>> IPV6_AUTOCONF=no
>>> IPV6_DEFROUTE=no
>>> IPV6_FAILURE_FATAL=no
>>> NAME=vlan-vlan80
>>> ONBOOT=yes
>>> MTU=9000
>>> BOOTPROTO=none
>>> STP=off
>>> DELAY=0
>>> IPV4_DEFROUTE=no
>>>
>>>
>>> In a nutshell: physical ifaces (4) -> team0 -> team0.<vlanid> (10) ->
>>> vmbr<vlanid> (bridge).  The bridges are all in place and everything works
>>> as expected with the interface/network configuration on the host:
>>>
>>> [root@cluster-02 ~]# brctl show
>>> bridge name     bridge id               STP enabled     interfaces
>>> host-routed             8000.000000000000       no
>>> virbr1          8000.525400885137       no              virbr1-nic
>>> vmbr10          8000.0cc47a6b9554       no              team0.10
>>> vmbr20          8000.0cc47a6b9554       no              team0.20
>>> vmbr30          8000.0cc47a6b9554       no              team0.30
>>> vmbr40          8000.0cc47a6b9554       no              team0.40
>>> vmbr5           8000.0cc47a6b9554       no              team0.5
>>> vmbr70          8000.0cc47a6b9554       no              team0.70
>>> vmbr80          8000.0cc47a6b9554       no              team0.80
>>> vmbr90          8000.0cc47a6b9554       no              team0.90
>>> vmbr91          8000.0cc47a6b9554       no              team0.91
>>> vmbr92          8000.0cc47a6b9554       no              team0.92
>>>
>>> In OpenVZ on RHEL 6, I added an interface to a container using vzctl
>>> like this:
>>> vzctl set 30000 --netif_add eth4,,veth30000.40,,vmbr40 --save
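>>>
>>> (The general form there, as I read the vzctl man page, being
>>> --netif_add <ifname>[,<mac>,<host_ifname>,<host_mac>,<bridge>]; leaving
>>> the mac fields empty is what let RHEL 6 generate the addresses itself.)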
>>>
>>> I am trying to add a veth interface to a container and specify the
>>> bridge interface on Virtuozzo 7.
>>> - vzctl behaves differently; it expects MAC addresses.
>>> - prlctl does not have sufficient options; I can add an interface but
>>> cannot specify the bridge device.
>>> - prlsrvctl looks like it only supports physical interfaces (see the
>>> following commands):
>>> [root@cluster-02 ~]# ip link show vmbr80
>>> 21: vmbr80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT
>>>     link/ether 0c:c4:7a:6b:95:54 brd ff:ff:ff:ff:ff:ff
>>> [root@cluster-02 ~]# ip a show vmbr80
>>> 21: vmbr80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
>>>     link/ether 0c:c4:7a:6b:95:54 brd ff:ff:ff:ff:ff:ff
>>> [root@cluster-02 ~]# prlsrvctl net add vlan80 -i vmbr90 -t bridged --ip 10.1.0.0/22
>>> Failed to find network adapter vmbr90 on the server.
>>>
>>> - vznetcfg does not seem to exist in a form that matches the Virtuozzo 7
>>> documentation (I only see /usr/libexec/libvzctl/scripts/vznetcfg on my
>>> machine)
>>>
>>> I thought maybe I should add a virtual network and bridge it, but I
>>> can't seem to find a way to do this.
>>> I thought maybe I would use vzctl like I did on RHEL 6, but that
>>> requires me to provide MAC addresses (RHEL 6 would autogenerate the MACs).
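>>>
>>> If supplying the MACs myself turns out to be unavoidable, the fallback I
>>> have in mind is to generate locally administered addresses in the shell
>>> and feed them to the RHEL 6-style command (a sketch only; it assumes the
>>> field order from my RHEL 6 example still holds on Virtuozzo 7):
>>>
>>> ct_mac=$(printf '02:00:00:%02X:%02X:%02X' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)))
>>> host_mac=$(printf '02:00:00:%02X:%02X:%02X' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)))
>>> vzctl set 30000 --netif_add eth4,$ct_mac,veth30000.40,$host_mac,vmbr40 --save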
>>>
>>> At a bit of a loss.  Any help is appreciated.
>>>
>>> Thanks,
>>> Axton Grams
>>>
>>>
>>>
>>
>