[Users] Virtuozzo 7 - bridged veth interfaces for container
Axton
axton.grams at gmail.com
Sat Feb 13 21:36:13 PST 2016
One last update to this. Updating the network definitions in libvirt so that
the interfaces use forward mode='bridge' allows the standard network
configuration, libvirt, and Virtuozzo to all happily coexist. This XML is
better:
[root@cluster-02 ~]# virsh
virsh # net-edit vmbr5
<network>
  <name>vmbr5</name>
  <uuid>e73593b5-0ae2-402f-b707-81c33469b149</uuid>
  <forward mode='bridge'/>
  <bridge name='vmbr5'/>
</network>
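The same mode='bridge' definition can be generated for every bridge instead of hand-editing each one. A minimal sketch (the bridge list and output directory are illustrative, and the virsh net-define/net-start calls are left commented out since they need a live libvirt):

```shell
# Sketch: emit a libvirt network definition that forwards straight to an
# existing Linux bridge (forward mode='bridge'), one file per bridge.
# libvirt generates a uuid on net-define, so none is included here.
outdir=$(mktemp -d)
for br in vmbr5 vmbr10 vmbr20; do
  cat > "$outdir/$br.xml" <<EOF
<network>
  <name>$br</name>
  <forward mode='bridge'/>
  <bridge name='$br'/>
</network>
EOF
  # On a live host the definition would then be loaded (not run here):
  # virsh net-define "$outdir/$br.xml" && virsh net-start "$br"
done
cat "$outdir/vmbr5.xml"
```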
The prlsrvctl net list output also looks better, since the networks now show
type bridged instead of host-only:
[root@cluster-02 ~]# prlsrvctl net list
Network ID        Type        Bound To    Bridge
vmbr70            bridged                 vmbr70
vmbr40            bridged                 vmbr40
vmbr20            bridged                 vmbr20
Host-Only         host-only               virbr1
vmbr10            bridged                 vmbr10
vmbr91            bridged                 vmbr91
vmbr80            bridged                 vmbr80
vmbr90            bridged                 vmbr90
vmbr5             bridged                 vmbr5
vmbr92            bridged                 vmbr92
vmbr30            bridged                 vmbr30
This also cleaned up some errors from libvirt in the systemd journal. The
default bridges look good:
[root@cluster-02 ~]# brctl show
bridge name     bridge id           STP enabled   interfaces
host-routed     8000.000000000000   no
virbr0          8000.5254001f1ad1   yes           virbr0-nic
virbr1          8000.525400885137   no            virbr1-nic
vmbr10          8000.0cc47a6b9554   no            team0.10
vmbr20          8000.0cc47a6b9554   no            team0.20
vmbr30          8000.0cc47a6b9554   no            team0.30
vmbr40          8000.0cc47a6b9554   no            team0.40
vmbr5           8000.0cc47a6b9554   no            team0.5
vmbr70          8000.0cc47a6b9554   no            team0.70
vmbr80          8000.0cc47a6b9554   no            team0.80
vmbr90          8000.0cc47a6b9554   no            team0.90
vmbr91          8000.0cc47a6b9554   no            team0.91
vmbr92          8000.0cc47a6b9554   no            team0.92
And when a container connected to vlans 90, 91, and 92 is started, the
bridges look good:
[root@cluster-02 ~]# brctl show
bridge name     bridge id           STP enabled   interfaces
host-routed     8000.000000000000   no
virbr0          8000.5254001f1ad1   yes           virbr0-nic
virbr1          8000.525400885137   no            virbr1-nic
vmbr10          8000.0cc47a6b9554   no            team0.10
vmbr20          8000.0cc47a6b9554   no            team0.20
vmbr30          8000.0cc47a6b9554   no            team0.30
vmbr40          8000.0cc47a6b9554   no            team0.40
vmbr5           8000.0cc47a6b9554   no            team0.5
vmbr70          8000.0cc47a6b9554   no            team0.70
vmbr80          8000.0cc47a6b9554   no            team0.80
vmbr90          8000.0cc47a6b9554   no            team0.90
                                                  veth428bd8a6
vmbr91          8000.0cc47a6b9554   no            team0.91
                                                  veth423b42f9
vmbr92          8000.0cc47a6b9554   no            team0.92
                                                  veth42421a76
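A quick way to check programmatically that each veth landed on the expected bridge is to fold brctl's indented continuation lines back onto their bridge row. A sketch, run here against a sample of the output above:

```shell
# Sketch: map each interface in `brctl show` output to its bridge.
# brctl prints additional interfaces on indented continuation lines with
# no bridge column, so a one-field line belongs to the previous bridge.
brctl_out='bridge name bridge id STP enabled interfaces
vmbr90 8000.0cc47a6b9554 no team0.90
 veth428bd8a6
vmbr91 8000.0cc47a6b9554 no team0.91
 veth423b42f9'
pairs=$(printf '%s\n' "$brctl_out" | awk 'NR > 1 {
  if (NF == 1) print br, $1;         # continuation line: interface only
  else { br = $1; print br, $NF }    # bridge row: last field is the iface
}')
printf '%s\n' "$pairs"
```

On a live host the same bridge membership is also visible as `ls /sys/class/net/<bridge>/brif/`.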
Axton Grams
On Sat, Feb 13, 2016 at 7:02 PM, Axton <axton.grams at gmail.com> wrote:
> This works, but it seems like prlsrvctl net set is not working as
> expected. I was able to bypass the issue by updating the bridge
> information using virsh.
>
> In this example, I want to create a bridged virtual network, vlan20, to
> use in the containers.
>
> *** This is the bridge that is configured on the system that I would like
> to use for the virtual network:
>
> [root@cluster-02 ~]# ip -d l show vmbr20
> 11: vmbr20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
>     link/ether 0c:c4:7a:6b:95:54 brd ff:ff:ff:ff:ff:ff promiscuity 0
>     bridge addrgenmode eui64
>
> [root@cluster-02 ~]# ip -d a show vmbr20
> 11: vmbr20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>     link/ether 0c:c4:7a:6b:95:54 brd ff:ff:ff:ff:ff:ff promiscuity 0
>     bridge
>
> *** Create a new virtual network, vlan20
>
> [root@cluster-02 ~]# prlsrvctl net add vlan20
>
> *** List the virtual networks to make sure vlan20 was created; the default
> bridge interface for the new virtual network vlan20 is virbr5
>
> [root@cluster-02 ~]# prlsrvctl net list
> Network ID        Type        Bound To    Bridge
> Host-Only         host-only               virbr1
> vlan30            host-only               vmbr30
> vlan20            host-only               virbr5
> vlan40            host-only               vmbr40
>
> *** Attempt to change the bridge interface from virbr5 to vmbr20 for
> virtual network vlan20, but receive an error
>
> [root@cluster-02 ~]# prlsrvctl net set vlan20 -i vmbr20
> Failed to find network adapter vmbr20 on the server.
>
> *** I start virsh to change the bridge interface from virbr5 to vmbr20 for
> virtual network vlan20
>
> [root@cluster-02 ~]# virsh
> Welcome to virsh, the virtualization interactive terminal.
>
> Type:  'help' for help with commands
>        'quit' to quit
>
> *** I edit the XML for vlan20
>
> virsh # net-edit vlan20
> Network vlan20 XML configuration edited.
>
> *** XML before modification
> <network>
>   <name>vlan20</name>
>   <uuid>9199dcd6-f694-457c-9656-474b871fc42c</uuid>
>   <bridge name='virbr5' stp='off' delay='0'/>
>   <mac address='52:54:00:e6:cc:79'/>
> </network>
>
> *** XML after modification
> <network>
>   <name>vlan20</name>
>   <uuid>9199dcd6-f694-457c-9656-474b871fc42c</uuid>
>   <bridge name='vmbr20' stp='off' delay='0'/>
>   <mac address='52:54:00:e6:cc:79'/>
> </network>
>
> *** Exit virsh
> virsh # ^C
>
> *** Now the virtual networks show the desired bridge configuration in prlsrvctl
>
> [root@cluster-02 ~]# prlsrvctl net list
> Network ID        Type        Bound To    Bridge
> Host-Only         host-only               virbr1
> vlan30            host-only               vmbr30
> vlan20            host-only               vmbr20
> vlan40            host-only               vmbr40
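The interactive net-edit step can be scripted. A sketch, run here against a copy of the XML from this message; the virsh calls themselves are left as comments since they need a live host, and the net-dumpxml/net-define round trip is my assumption, not something tested in the original thread:

```shell
# Sketch: rewrite the bridge name in a libvirt network definition
# non-interactively instead of going through `virsh net-edit`.
xml="<network>
  <name>vlan20</name>
  <bridge name='virbr5' stp='off' delay='0'/>
</network>"
fixed=$(printf '%s\n' "$xml" | sed "s/name='virbr5'/name='vmbr20'/")
printf '%s\n' "$fixed"
# On a live host, roughly:
# virsh net-dumpxml vlan20 | sed "s/name='virbr5'/name='vmbr20'/" > /tmp/vlan20.xml
# virsh net-define /tmp/vlan20.xml
```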
>
>
> From here, everything seems to work the way I would like:
> - I can add networks to a container using the following commands:
> # prlctl set <ctname> --netif_add eth<id>
> # prlctl set <ctname> --ifname eth<id> --network vlan<id>
> - The interfaces are properly managed when containers are
> stopped/started/restarted
> - The interfaces are properly managed when the host is rebooted
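The two per-container commands above can be looped over several vlans. A sketch with a hypothetical container name, echoing the commands rather than executing them so the sequence can be reviewed first:

```shell
# Sketch: attach one veth per vlan to a container, using the two prlctl
# invocations from this message.  `ct` is a hypothetical container name;
# commands are echoed, not run.
ct=mycontainer
cmds=$(for id in 90 91 92; do
  echo "prlctl set $ct --netif_add eth$id"
  echo "prlctl set $ct --ifname eth$id --network vlan$id"
done)
printf '%s\n' "$cmds"
```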
>
> Let me know if this seems like a bug and I will submit a defect.
>
> Thanks,
> Axton Grams
>
> On Sat, Feb 13, 2016 at 3:23 PM, Axton <axton.grams at gmail.com> wrote:
>
>> vznetcfg seems to do what I need, but it appears to be missing from
>> beta4 (all packages) when installed on an existing OS. Here is the doc
>> page referencing this utility:
>>
>>
>> https://docs.openvz.org/virtuozzo_7_command_line_reference.webhelp/_vznetcfg.html
>>
>> I have this script on the machine, but it is not the script referenced
>> from the documentation:
>>
>> [root@cluster-02 tmp]# find / -name vznetcfg
>> /usr/libexec/libvzctl/scripts/vznetcfg
>>
>> Axton Grams
>>
>>
>> On Sat, Feb 13, 2016 at 2:53 PM, Axton <axton.grams at gmail.com> wrote:
>>
>>> I found the container action scripts capability, but this is not
>>> sufficient for my needs:
>>>
>>> https://docs.openvz.org/virtuozzo_7_command_line_reference.webhelp/_container_action_scripts.html
>>>
>>> The start script runs in the context of the container; I need an
>>> equivalent trigger that runs a script on the host. The mount script is
>>> executed on the host, but the veth interface has not yet been created on
>>> the host when that script runs.
>>>
>>> On Sat, Feb 13, 2016 at 10:49 AM, Axton <axton.grams at gmail.com> wrote:
>>>
>>>> Digging through my notes, I accomplished this using the script
>>>> /usr/sbin/vznetaddbr, which parses the ctid.conf file and adds the CT
>>>> veth interfaces to a bridge, together with a config file:
>>>>
>>>> $ cat /etc/vz/vznet.conf
>>>> EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
>>>>
>>>> If no native capabilities exist to make the bridge configuration
>>>> changes in Virtuozzo 7, what I need is a way to call a script when the
>>>> container is started, after the interface has been created on the host. Is
>>>> there a way to hook into the container startup in this way?
>>>>
>>>> At the end of the day, what I am looking to do is automatically attach a
>>>> container to one or more vlans using veth interfaces. I accomplished this
>>>> in the past using bridge-utils. It is important that I can specify which
>>>> bridge each veth interface is added to. Let me know if there is a better
>>>> way to do this.
>>>>
>>>> Thanks,
>>>> Axton Grams
>>>>
>>>> On Fri, Feb 12, 2016 at 9:22 PM, Axton <axton.grams at gmail.com> wrote:
>>>>
>>>>> My host's network uses teaming, bridge, and vlan interfaces.
>>>>>
>>>>> There are 4 physical interfaces: enp0s20f0-enp0s20f3
>>>>> The 4 physical interfaces are grouped using lacp via teaming.
>>>>>
>>>>> [root@cluster-02 ~]# teamdctl team0 config dump actual
>>>>> {
>>>>>     "device": "team0",
>>>>>     "link_watch": {
>>>>>         "name": "ethtool"
>>>>>     },
>>>>>     "ports": {
>>>>>         "enp0s20f0": {},
>>>>>         "enp0s20f1": {},
>>>>>         "enp0s20f2": {},
>>>>>         "enp0s20f3": {}
>>>>>     },
>>>>>     "runner": {
>>>>>         "active": true,
>>>>>         "fast_rate": true,
>>>>>         "name": "lacp",
>>>>>         "tx_balancer": {
>>>>>             "name": "basic"
>>>>>         },
>>>>>         "tx_hash": [
>>>>>             "eth",
>>>>>             "ipv4",
>>>>>             "ipv6"
>>>>>         ]
>>>>>     }
>>>>> }
>>>>>
>>>>> The switch is configured to send 802.1q (vlan) encapsulated packets
>>>>> for a series of vlans. For each vlan, a vlan interface is configured on
>>>>> the linux host and a bridge is configured on top of the vlan iface. Take
>>>>> vlan 80 as an example:
>>>>>
>>>>> [root@cluster-02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0.80
>>>>> DEVICE=team0.80
>>>>> VLAN=yes
>>>>> BOOTPROTO=none
>>>>> NM_CONTROLLED=no
>>>>> BRIDGE=vmbr80
>>>>> IPV6INIT=no
>>>>> MTU=9000
>>>>>
>>>>> [root@cluster-02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-vmbr80
>>>>> VLAN=yes
>>>>> TYPE=Bridge
>>>>> DEVICE=vmbr80
>>>>> PHYSDEV=team0
>>>>> VLAN_ID=80
>>>>> REORDER_HDR=0
>>>>> IPV4_FAILURE_FATAL=no
>>>>> IPV6INIT=no
>>>>> IPV6_AUTOCONF=no
>>>>> IPV6_DEFROUTE=no
>>>>> IPV6_FAILURE_FATAL=no
>>>>> NAME=vlan-vlan80
>>>>> ONBOOT=yes
>>>>> MTU=9000
>>>>> BOOTPROTO=none
>>>>> STP=off
>>>>> DELAY=0
>>>>> IPV4_DEFROUTE=no
>>>>>
>>>>>
>>>>> In a nutshell: physical ifaces (4) -> team0 -> team0.<vlanid> (10) ->
>>>>> vmbr<vlanid> (bridge). The bridges are all in place and everything works
>>>>> as expected with the interface/network configuration on the host:
>>>>>
>>>>> [root@cluster-02 ~]# brctl show
>>>>> bridge name     bridge id           STP enabled   interfaces
>>>>> host-routed     8000.000000000000   no
>>>>> virbr1          8000.525400885137   no            virbr1-nic
>>>>> vmbr10          8000.0cc47a6b9554   no            team0.10
>>>>> vmbr20          8000.0cc47a6b9554   no            team0.20
>>>>> vmbr30          8000.0cc47a6b9554   no            team0.30
>>>>> vmbr40          8000.0cc47a6b9554   no            team0.40
>>>>> vmbr5           8000.0cc47a6b9554   no            team0.5
>>>>> vmbr70          8000.0cc47a6b9554   no            team0.70
>>>>> vmbr80          8000.0cc47a6b9554   no            team0.80
>>>>> vmbr90          8000.0cc47a6b9554   no            team0.90
>>>>> vmbr91          8000.0cc47a6b9554   no            team0.91
>>>>> vmbr92          8000.0cc47a6b9554   no            team0.92
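The team0.&lt;vlanid&gt; / vmbr&lt;vlanid&gt; pairs follow one pattern per vlan, so the ifcfg files can be generated. A sketch that writes them to a temporary directory; the vlan list is illustrative and the keys mirror the vlan-80 example above (this is not the original provisioning method, just a way to reproduce the per-vlan pattern):

```shell
# Sketch: generate the ifcfg pair (vlan iface enslaved to a bridge) for
# each vlan id, mirroring the vlan-80 files shown above.
outdir=$(mktemp -d)   # stand-in for /etc/sysconfig/network-scripts
for id in 5 10 80; do
  cat > "$outdir/ifcfg-team0.$id" <<EOF
DEVICE=team0.$id
VLAN=yes
BOOTPROTO=none
NM_CONTROLLED=no
BRIDGE=vmbr$id
MTU=9000
EOF
  cat > "$outdir/ifcfg-vmbr$id" <<EOF
TYPE=Bridge
DEVICE=vmbr$id
ONBOOT=yes
MTU=9000
BOOTPROTO=none
STP=off
DELAY=0
EOF
done
ls "$outdir"
```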
>>>>>
>>>>> In openvz on RHEL 6, I added an interface to a container using vzctl
>>>>> like this:
>>>>> vzctl set 30000 --netif_add eth4,,veth30000.40,,vmbr40 --save
>>>>>
>>>>> I am trying to add a veth interface to a container and specify the
>>>>> bridge interface on Virtuozzo 7:
>>>>> - vzctl behaves differently; it expects MAC addresses.
>>>>> - prlctl does not have sufficient options; I can add an interface but
>>>>> not specify the bridge device.
>>>>> - prlsrvctl looks like it only supports physical interfaces (see the
>>>>> following commands):
>>>>> [root@cluster-02 ~]# ip link show vmbr80
>>>>> 21: vmbr80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT
>>>>>     link/ether 0c:c4:7a:6b:95:54 brd ff:ff:ff:ff:ff:ff
>>>>> [root@cluster-02 ~]# ip a show vmbr80
>>>>> 21: vmbr80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
>>>>>     link/ether 0c:c4:7a:6b:95:54 brd ff:ff:ff:ff:ff:ff
>>>>> [root@cluster-02 ~]# prlsrvctl net add vlan80 -i vmbr90 -t bridged --ip 10.1.0.0/22
>>>>> Failed to find network adapter vmbr90 on the server.
>>>>>
>>>>> - vznetcfg does not seem to exist in a state that matches the
>>>>> Virtuozzo 7 documentation (I only see
>>>>> /usr/libexec/libvzctl/scripts/vznetcfg on my machine)
>>>>>
>>>>> I thought maybe I should add a virtual network and bridge it, but I
>>>>> can't seem to find a way to do this.
>>>>> I thought maybe I would use vzctl like I did on RHEL 6, but that
>>>>> requires me to provide MAC addresses (RHEL 6 would auto-generate them).
>>>>>
>>>>> At a bit of a loss. Any help is appreciated.
>>>>>
>>>>> Thanks,
>>>>> Axton Grams
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>