<div dir="ltr">This works, but it seems like prlsrvctl net set is not working as expected. I was able to bypass the issue by updating the bridge information using virsh.<div><br></div><div>In this example, I want to create a bridged virtual network, vlan20, to use in the containers.</div><div><br></div><div>*** This is the bridge that is configured on the system that I would like to use for the virtual network:</div><div><br></div><div><div>[root@cluster-02 ~]# ip -d l show vmbr20</div><div>11: vmbr20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT</div><div> link/ether 0c:c4:7a:6b:95:54 brd ff:ff:ff:ff:ff:ff promiscuity 0</div><div> bridge addrgenmode eui64</div><div><br></div><div>[root@cluster-02 ~]# ip -d a show vmbr20</div><div>11: vmbr20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP</div><div> link/ether 0c:c4:7a:6b:95:54 brd ff:ff:ff:ff:ff:ff promiscuity 0</div><div> bridge</div><div><br></div><div>*** Create a new virtual network, vlan20</div><div><br></div><div>[root@cluster-02 ~]# prlsrvctl net add vlan20</div><div><br></div><div>*** A list of the virtual networks to make sure vlan20 was created; the default Bridge interface is virbr5 for the new virtual network, vlan20</div><div><br></div><div>[root@cluster-02 ~]# prlsrvctl net list</div><div>Network ID Type Bound To Bridge</div><div>Host-Only host-only virbr1</div><div>vlan30 host-only vmbr30</div><div>vlan20 host-only virbr5</div><div>vlan40 host-only vmbr40</div><div><br></div><div>*** Attempt to change the bridge interface from virbr5 to vmbr20 for virtual network network vlan20, but receive an error<br></div><div><br></div><div>[root@cluster-02 ~]# prlsrvctl net set vlan20 -i vmbr20</div><div>Failed to find network adapter vmbr20 on the server.</div><div><br></div><div>*** I start virsh to change the bridge interface from virbr5 to vmbr20 for virtual network network vlan20</div><div><br></div><div>[root@cluster-02 ~]# virsh</div><div>Welcome to 
virsh, the virtualization interactive terminal.</div><div><br></div><div>Type: 'help' for help with commands</div><div> 'quit' to quit</div><div><br></div><div>*** I edit the xml for vlan20</div><div><br></div><div>virsh # net-edit vlan20</div><div>Network vlan20 XML configuration edited.</div><div><br></div><div>*** xml before modification</div><div><network></div><div> <name>vlan20</name></div><div> <uuid>9199dcd6-f694-457c-9656-474b871fc42c</uuid></div><div> <bridge name='virbr5' stp='off' delay='0'/></div><div> <mac address='52:54:00:e6:cc:79'/></div><div></network></div><div><br></div><div>*** xml after modification</div><div><network></div><div> <name>vlan20</name></div><div> <uuid>9199dcd6-f694-457c-9656-474b871fc42c</uuid></div><div> <bridge name='vmbr20' stp='off' delay='0'/></div><div> <mac address='52:54:00:e6:cc:79'/></div><div></network></div><div><br></div><div>*** Exit virsh</div><div>virsh # ^C</div><div><br></div><div>*** Now the interfaces show configured as desired per prlsrvctl</div><div><br></div><div>[root@cluster-02 ~]# prlsrvctl net list</div><div>Network ID Type Bound To Bridge</div><div>Host-Only host-only virbr1</div><div>vlan30 host-only vmbr30</div><div>vlan20 host-only vmbr20</div><div>vlan40 host-only vmbr40</div></div><div><br></div><div><br></div><div>From here, everything seems to work the way I would like:</div><div>- I can add networks to a container using the following commands:</div><div><div> # prlctl set <ctname> --netif_add eth<id></div><div> # prlctl set <ctname> --ifname eth<id> --network vlan<id></div></div><div>- The interfaces are properly managed when containers are stopped/started/restarted</div><div>- The interfaces are properly managed when the host is rebooted</div><div><br></div><div>Let me know if this seems like a bug and I will submit a defect.</div><div><br></div><div>Thanks,</div><div>Axton Grams</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Feb 13, 2016 at 3:23 PM, Axton <span 
dir="ltr"><<a href="mailto:axton.grams@gmail.com" target="_blank">axton.grams@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">vznetcfg seems to do what I need, but it seems vznetcfg is missing from beta4 (all packages) when installed on an existing OS. Here is the doc page referencing this utility:<div><br></div><div><a href="https://docs.openvz.org/virtuozzo_7_command_line_reference.webhelp/_vznetcfg.html" target="_blank">https://docs.openvz.org/virtuozzo_7_command_line_reference.webhelp/_vznetcfg.html</a><br></div><div><br></div><div>I have this script on the machine, but it is not the script referenced from the documentation:</div><div><div><br></div><div>[root@cluster-02 tmp]# find / -name vznetcfg</div><div>/usr/libexec/libvzctl/scripts/vznetcfg</div></div><div><br></div><div>Axton Grams</div><div><br></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Feb 13, 2016 at 2:53 PM, Axton <span dir="ltr"><<a href="mailto:axton.grams@gmail.com" target="_blank">axton.grams@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I found the container action scripts capability, but this is not sufficient for my needs:<div><a href="https://docs.openvz.org/virtuozzo_7_command_line_reference.webhelp/_container_action_scripts.html" target="_blank">https://docs.openvz.org/virtuozzo_7_command_line_reference.webhelp/_container_action_scripts.html</a><br></div><div><br></div><div>The start script runs in the context of the container; I need the equivalent trigger to run a script on the host. 
The mount script is executed on the host, but the veth interface has not been created on the host when this script runs.</div></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Feb 13, 2016 at 10:49 AM, Axton <span dir="ltr"><<a href="mailto:axton.grams@gmail.com" target="_blank">axton.grams@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Digging through my notes, I accomplished this using a script, /usr/sbin/vznetaddbr, which parses the ctid.conf file and adds the CT veth interfaces to a bridge.<div><br></div><div><br></div><div>It is wired up with a config file:<div><div><br></div><div>$ cat /etc/vz/vznet.conf</div><div>EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"</div></div><div><br></div><div>If no native capabilities exist to make the bridge configuration changes in Virtuozzo 7, what I need is a way to call a script when the container is started, after the interface has been created on the host. Is there a way to hook into the container startup in this way?</div><div><br></div><div>At the end of the day, what I am looking to do is automatically attach a container to one or more vlans using veth interfaces. I accomplished this in the past using bridge-utils. It is important that I can specify which bridge each veth interface is added to. Let me know if there is a better way to do this. 
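For concreteness, the hook boiled down to something like this. This is a sketch from my notes, not the actual vznetaddbr script, and the exact NETIF= format in ctid.conf is an assumption from memory:

```shell
#!/bin/sh
# Sketch of a vznetaddbr-style hook: parse the container config's NETIF
# line and attach each host-side veth device to its configured bridge.
# Set DRYRUN=1 to print the brctl commands instead of running them.

attach_veths() {
    conf="$1"
    # Assumed format, e.g.:
    # NETIF="ifname=eth4,host_ifname=veth30000.40,bridge=vmbr40"
    grep '^NETIF=' "$conf" | tr -d '"' | cut -d= -f2- | tr ';' '\n' |
    while read -r iface; do
        host_if=$(printf '%s\n' "$iface" | tr ',' '\n' | sed -n 's/^host_ifname=//p')
        bridge=$(printf '%s\n' "$iface" | tr ',' '\n' | sed -n 's/^bridge=//p')
        [ -n "$host_if" ] && [ -n "$bridge" ] || continue
        if [ -n "$DRYRUN" ]; then
            echo "brctl addif $bridge $host_if"
        else
            brctl addif "$bridge" "$host_if"
        fi
    done
}

# Example against a throwaway config file:
conf=$(mktemp)
echo 'NETIF="ifname=eth4,host_ifname=veth30000.40,bridge=vmbr40"' > "$conf"
DRYRUN=1 attach_veths "$conf"   # prints: brctl addif vmbr40 veth30000.40
rm -f "$conf"
```

If Virtuozzo 7 grows an equivalent host-side hook that fires after the veth exists, something this small would cover my use case.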
</div><div><br></div><div>Thanks,</div><div>Axton Grams</div></div></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Feb 12, 2016 at 9:22 PM, Axton <span dir="ltr"><<a href="mailto:axton.grams@gmail.com" target="_blank">axton.grams@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">My host's network uses teaming, bridge, and vlan interfaces.<div><br></div><div>There are 4 physical interfaces: enp0s20f0-enp0s20f3</div><div>The 4 physical interfaces are grouped using lacp via teaming.</div><div><br></div><div><div><div>[root@cluster-02 ~]# teamdctl team0 config dump actual</div><div>{</div><div> "device": "team0",</div><div> "link_watch": {</div><div> "name": "ethtool"</div><div> },</div><div> "ports": {</div><div> "enp0s20f0": {},</div><div> "enp0s20f1": {},</div><div> "enp0s20f2": {},</div><div> "enp0s20f3": {}</div><div> },</div><div> "runner": {</div><div> "active": true,</div><div> "fast_rate": true,</div><div> "name": "lacp",</div><div> "tx_balancer": {</div><div> "name": "basic"</div><div> },</div><div> "tx_hash": [</div><div> "eth",</div><div> "ipv4",</div><div> "ipv6"</div><div> ]</div><div> }</div><div>}</div></div></div><div><br></div><div>The switch is configured to send 802.1q (vlan) encapsulated packets for a series of vlans. For each vlan, a vlan interface is configured on the linux host and a bridge is configured on top of the vlan iface. 
Take vlan 80 as an example:</div><div><br></div><div><div>[root@cluster-02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0.80</div><div>DEVICE=team0.80</div><div>VLAN=yes</div><div>BOOTPROTO=none</div><div>NM_CONTROLLED=no</div><div>BRIDGE=vmbr80</div><div>IPV6INIT=no</div><div>MTU=9000</div><div><br></div><div>[root@cluster-02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-vmbr80</div><div>VLAN=yes</div><div>TYPE=Bridge</div><div>DEVICE=vmbr80</div><div>PHYSDEV=team0</div><div>VLAN_ID=80</div><div>REORDER_HDR=0</div><div>IPV4_FAILURE_FATAL=no</div><div>IPV6INIT=no</div><div>IPV6_AUTOCONF=no</div><div>IPV6_DEFROUTE=no</div><div>IPV6_FAILURE_FATAL=no</div><div>NAME=vlan-vlan80</div><div>ONBOOT=yes</div><div>MTU=9000</div><div>BOOTPROTO=none</div><div>STP=off</div><div>DELAY=0</div><div>IPV4_DEFROUTE=no</div></div><div><br></div><div><br></div><div>In a nutshell: physical ifaces (4) -> team0 -> team0.<vlanid> (10) -> vmbr<vlanid> (bridge). The bridges are all in place and everything works as expected with the interface/network configuration on the host:</div><div><div><br></div><div>[root@cluster-02 ~]# brctl show</div><div>bridge name bridge id STP enabled interfaces</div><div>host-routed 8000.000000000000 no</div><div>virbr1 8000.525400885137 no virbr1-nic</div><div>vmbr10 8000.0cc47a6b9554 no team0.10</div><div>vmbr20 8000.0cc47a6b9554 no team0.20</div><div>vmbr30 8000.0cc47a6b9554 no team0.30</div><div>vmbr40 8000.0cc47a6b9554 no team0.40</div><div>vmbr5 8000.0cc47a6b9554 no team0.5</div><div>vmbr70 8000.0cc47a6b9554 no team0.70</div><div>vmbr80 8000.0cc47a6b9554 no team0.80</div><div>vmbr90 8000.0cc47a6b9554 no team0.90</div><div>vmbr91 8000.0cc47a6b9554 no team0.91</div><div>vmbr92 8000.0cc47a6b9554 no team0.92</div></div><div><br></div><div>In openvz on RHEL 6, I added an interface to a container using vzctl like this:<br></div><div>vzctl set 30000 --netif_add eth4,,veth30000.40,,vmbr40 --save<br></div><div><br></div><div>I am trying to add a veth interface 
to a container and specify the bridge interface on Virtuozzo 7.</div><div>- vzctl behaves differently; it expects MAC addresses.</div><div>- prlctl does not have sufficient options; I can add an interface but not specify the bridge device.<br></div><div>- prlsrvctl looks like it only supports physical interfaces (see following commands)</div><div><div>[root@cluster-02 ~]# ip link show vmbr80</div><div>21: vmbr80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT</div><div> link/ether 0c:c4:7a:6b:95:54 brd ff:ff:ff:ff:ff:ff</div><div>[root@cluster-02 ~]# ip a show vmbr80</div><div>21: vmbr80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP</div><div> link/ether 0c:c4:7a:6b:95:54 brd ff:ff:ff:ff:ff:ff</div><div>[root@cluster-02 ~]# prlsrvctl net add vlan80 -i vmbr90 -t bridged --ip <a href="http://10.1.0.0/22" target="_blank">10.1.0.0/22</a></div><div>Failed to find network adapter vmbr90 on the server.</div></div><div><br></div><div>- vznetcfg does not seem to exist in a state that matches the Virtuozzo 7 documentation (I only see /usr/libexec/libvzctl/scripts/vznetcfg on my machine)</div><div><br></div><div>I thought maybe I should add a virtual network and bridge it, but I can't seem to find a way to do this.</div><div>I thought maybe I would use vzctl like I did on RHEL 6, but that requires me to provide MAC addresses (RHEL 6 would auto-generate MACs).</div><div><br></div><div>At a bit of a loss. Any help is appreciated.</div><div><br></div><div>Thanks,</div><div>Axton Grams</div><div><br></div><div><br></div></div>
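One stopgap for the vzctl MAC requirement would be to generate locally-administered MACs myself and pass them in. A sketch; the --netif_add field order in the comment mirrors the RHEL 6 form above and is only an assumption for Virtuozzo 7:

```shell
# Generate a random locally-administered, unicast MAC address
# (first octet 0x02: locally-administered bit set, multicast bit clear).
gen_mac() {
    printf '02'
    od -An -N5 -tx1 /dev/urandom | tr -d ' \n' | sed 's/../:&/g'
    echo
}

# Hypothetical usage, mirroring the RHEL 6 invocation above
# (field order is an assumption; check vzctl(8) on Virtuozzo 7):
#   vzctl set 30000 --netif_add "eth4,$(gen_mac),veth30000.40,$(gen_mac),vmbr40" --save
gen_mac
```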
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>