[Users] Virtuozzo7 beta - jumbo frames on veth

Axton axton.grams at gmail.com
Sun Feb 28 09:27:46 PST 2016


I need to configure some veth interfaces with jumbo frames.  I can set
everything up properly on the host, where the interfaces all have MTU 9000:
2: enp0s20f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master
team0 state UP mode DEFAULT qlen 1000
    link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
3: enp0s20f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master
team0 state UP mode DEFAULT qlen 1000
    link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
4: enp0s20f2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master
team0 state UP mode DEFAULT qlen 1000
    link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
5: enp0s20f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master
team0 state UP mode DEFAULT qlen 1000
    link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
7: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
mode DEFAULT
    link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
46: team0.97@team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc
noqueue master vmbr97 state UP mode DEFAULT
    link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
47: vmbr97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state
UP mode DEFAULT
    link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff
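
For reference, an initscripts-style setup matching the layout above would
look roughly like this (a sketch trimmed to the MTU-relevant keys, not a
copy of the actual files):

/etc/sysconfig/network-scripts/ifcfg-team0.97:
DEVICE=team0.97
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
BRIDGE=vmbr97
MTU=9000

/etc/sysconfig/network-scripts/ifcfg-vmbr97:
DEVICE=vmbr97
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
MTU=9000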

When I start a container with a veth interface on vmbr97, the bridge
interface falls back to MTU 1500 (a Linux bridge adopts the smallest MTU of
its attached ports, and the newly created host-side veth comes up at the
default of 1500):

[root@cluster-02 network-scripts]# prlctl start ha21t02dh.tech.abc.org
Starting the CT...
The CT has been successfully started.
[root@cluster-02 network-scripts]# ip link show vmbr97
47: vmbr97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP mode DEFAULT
    link/ether 0c:c4:7a:6b:95:55 brd ff:ff:ff:ff:ff:ff

brctl shows that this container's veth is the only port on this bridge
besides the uplink:

[root@cluster-02 network-scripts]# brctl show vmbr97
bridge name     bridge id               STP enabled     interfaces
vmbr97          8000.0cc47a6b9555       no              team0.97
                                                        veth42f2f0a5
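
To see which port is dragging the bridge MTU down, it helps to list the
ports together with their MTUs; with a reasonably recent iproute2 the
master filter does it in one line (field positions assumed from ip -o
output):

ip -o link show master vmbr97 | awk '{print $2, $5}'

This should show team0.97 at 9000 and the freshly created veth at the
default 1500.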

The container is running CentOS 7.  I have set up MTU 9000 inside the
container:

CT-6598defa /# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
HOSTNAME=ha21t02dh-c.tech.abc.org
NM_CONTROLLED=no
TYPE=Ethernet
MTU=9000
IPADDR=10.1.28.9
PREFIX=22
DEFROUTE=no
IPV6INIT=no
IPV6_AUTOCONF=no
DOMAIN="..."
DNS1=10.0.20.250
DNS2=10.0.20.252
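
After restarting networking in the CT, the guest side can be double-checked
from the host; something along these lines (prlctl exec with the CT name
from above) should report eth1 at MTU 9000 inside the container:

prlctl exec ha21t02dh.tech.abc.org ip link show eth1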

I can work around the issue manually by setting the MTU to 9000 on the
host-side veth interface after the container is started, as follows:

[root@cluster-01 ~]# ip l show vmbr97
47: vmbr97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP mode DEFAULT
    link/ether 0c:c4:7a:6b:92:56 brd ff:ff:ff:ff:ff:ff
[root@cluster-01 ~]# ip link set dev veth42346d2f mtu 9000
[root@cluster-01 ~]# ip l show vmbr97
47: vmbr97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state
UP mode DEFAULT
    link/ether 0c:c4:7a:6b:92:56 brd ff:ff:ff:ff:ff:ff

Having to do this by hand every time a container is stopped and started
clearly does not scale.
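
As a stopgap, the workaround could be automated with a udev rule that
raises the MTU on every host-side veth as it appears (this assumes every
veth on the node should carry jumbo frames; the rule file name below is
arbitrary):

# /etc/udev/rules.d/90-veth-mtu.rules
ACTION=="add", SUBSYSTEM=="net", KERNEL=="veth*", RUN+="/usr/sbin/ip link set dev %k mtu 9000"

followed by udevadm control --reload-rules so it applies to containers
started afterwards.  That is still a workaround rather than a real fix,
though.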

My question is this: how do I configure the host and guest so that the MTU
settings for jumbo frames are applied correctly to the host-side veth
interface at container start time?

Axton Grams