[Users] fib6_clean_all: NULL exec env (table->owner_env)
Andrew Vagin
avagin at parallels.com
Sat Jan 26 00:21:25 EST 2013
Hello Axton,
The 2.6.32-5-openvz-amd64 kernel is out of date.
Currently only two branches, 2.6.18-028stab* and 2.6.32-042stab*, are
officially supported by the OpenVZ team.
You can follow the instructions here:
http://wiki.openvz.org/Installation_on_Debian and install the latest
stable kernel: http://openvz.org/Download/kernel/rhel6/042stab072.10
Then, if the bug still reproduces on the latest kernel, I recommend
you file it at bugzilla.openvz.org.
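If it helps, a rough sketch of the upgrade path on Debian (the RPM URL
below is my guess at the usual download layout, and the .deb name alien
generates may differ; take the real link from the page above):

# URL is a guess at the layout; use the real link from the download page
wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab072.10/vzkernel-2.6.32-042stab072.10.x86_64.rpm
alien -d -c -k vzkernel-2.6.32-042stab072.10.x86_64.rpm   # convert the RPM to a .deb, keeping version and scripts
dpkg -i vzkernel_2.6.32-042stab072.10_amd64.deb
update-grub   # make sure the new kernel appears in the boot menu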
Thanks
On Fri, Jan 25, 2013 at 05:20:51PM -0600, Axton wrote:
> While running containers with IPv6 addresses on Debian, I get the
> following log entries on CT0 in /var/log/messages:
> Jan 25 16:58:17 cluster-02 kernel: [845803.804023] fib6_clean_all: NULL exec env (table->owner_env)
> Jan 25 16:58:47 cluster-02 kernel: [845833.804024] fib6_clean_all: NULL exec env (table->owner_env)
> Jan 25 16:59:17 cluster-02 kernel: [845863.804024] fib6_clean_all: NULL exec env (table->owner_env)
> Jan 25 16:59:47 cluster-02 kernel: [845893.804024] fib6_clean_all: NULL exec env (table->owner_env)
>
> Here is my system information:
>
> root@cluster-02:~# uname -a
> Linux cluster-02 2.6.32-5-openvz-amd64 #1 SMP Sun Sep 23 11:32:18 UTC
> 2012 x86_64 GNU/Linux
>
> root@cluster-02:~# vzctl --version
> vzctl version 3.0.24
>
> If I stop the containers, the errors stop. They seem to occur at a
> rate of x messages every 30 seconds, where x grows with the number of
> running containers. The host has one interface connected to an
> untagged port on my switch. Four VLANs are accessible on this port:
> 10, 20, 30, and 40. Only vlan40 is used for access to CT0; the other
> three VLANs are for the containers.
>
> The interfaces are configured as follows:
>
> root@cluster-02:~# cat /etc/network/interfaces
> # This file describes the network interfaces available on your system
> # and how to activate them. For more information, see interfaces(5).
>
> # The loopback network interface
> auto lo
> iface lo inet loopback
>
> # The primary network interface
> iface eth1 inet manual
>
> auto vlan10
> iface vlan10 inet manual
> vlan_raw_device eth1
>
> auto vlan20
> iface vlan20 inet manual
> vlan_raw_device eth1
>
> auto vlan30
> iface vlan30 inet manual
> vlan_raw_device eth1
>
> auto vlan40
> iface vlan40 inet manual
> vlan_raw_device eth1
>
> auto br10
> iface br10 inet manual
> up ip link set $IFACE up
> bridge_ports vlan10
> bridge_stp off
> bridge_fd 0
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/autoconf
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_defrtr
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_pinfo
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_rtr_pref
>
> auto br20
> iface br20 inet manual
> up ip link set $IFACE up
> bridge_ports vlan20
> bridge_stp off
> bridge_fd 0
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/autoconf
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_defrtr
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_pinfo
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_rtr_pref
>
> auto br30
> iface br30 inet manual
> up ip link set $IFACE up
> bridge_ports vlan30
> bridge_stp off
> bridge_fd 0
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/autoconf
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_defrtr
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_pinfo
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_rtr_pref
>
> auto br40
> iface br40 inet static
> address 10.0.20.58
> netmask 255.255.252.0
> broadcast 10.0.23.255
> gateway 10.0.20.1
> bridge_ports vlan40
> bridge_stp off
> bridge_fd 0
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/autoconf
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_defrtr
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_pinfo
> up echo 0 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra_rtr_pref
> up ip -6 addr add 2001:470:e067:40::58/64 dev $IFACE
> up ip -6 route add fe80::1 dev $IFACE
> up ip -6 route add default via 2001:470:e067:40::10
> down ip -6 addr del 2001:470:e067:40::58/64 dev $IFACE
> down ip -6 route del fe80::1 dev $IFACE
> down ip -6 route del default via 2001:470:e067:40::10
>
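> For what it's worth, the same IPv6 flags can also be set through
> sysctl; the keys mirror the /proc/sys paths used in the stanzas above
> (br10 shown here as an example):
>
> # equivalent to the 'up echo 0 > /proc/sys/net/ipv6/conf/...' lines
> sysctl -w net.ipv6.conf.br10.autoconf=0
> sysctl -w net.ipv6.conf.br10.accept_ra=0
> sysctl -w net.ipv6.conf.br10.accept_ra_defrtr=0
> sysctl -w net.ipv6.conf.br10.accept_ra_pinfo=0
> sysctl -w net.ipv6.conf.br10.accept_ra_rtr_pref=0
>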
> Everything works fine up to this point. The issue comes when I start
> one of the containers. I use veth interfaces for the containers. I
> do not use venet.
>
> root@cluster-02:~# vzctl start 5000
> Starting container ...
> Container is mounted
> Setting CPU units: 1000
> Set hostname: fs01
> File resolv.conf was modified
> Configure veth devices: veth5000.1 veth5000.2 veth5000.3 veth5000.4
> Adding interface veth5000.1 to bridge br10 on CT0 for CT5000
> Adding interface veth5000.2 to bridge br20 on CT0 for CT5000
> Adding interface veth5000.3 to bridge br30 on CT0 for CT5000
> Adding interface veth5000.4 to bridge br40 on CT0 for CT5000
> Container start in progress...
>
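> (For completeness: the veth devices above were added to the container
> config with something along these lines; the exact commands are a
> reconstruction, with the MAC fields left empty so vzctl generates
> them.)
>
> # hypothetical reconstruction of the container's veth/bridge setup
> vzctl set 5000 --netif_add eth1,,veth5000.1,,br10 --save
> vzctl set 5000 --netif_add eth2,,veth5000.2,,br20 --save
> vzctl set 5000 --netif_add eth3,,veth5000.3,,br30 --save
> vzctl set 5000 --netif_add eth4,,veth5000.4,,br40 --save
>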
> Within container 5000, I have the following configuration for the interfaces:
>
> root@fs01:/# cat /etc/network/interfaces
> # Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
> # /usr/share/doc/ifupdown/examples for more information.
>
> auto lo eth1 eth2 eth3 eth4
>
> iface lo inet loopback
>
> iface eth1 inet static
> address 10.0.4.249
> netmask 255.255.252.0
> broadcast 10.0.7.255
> post-up ip route add table 100 to 10.0.4.0/22 dev eth1
> post-up ip route add table 100 to default via 10.0.4.1 dev eth1
> post-up ip rule add from 10.0.4.0/22 table 100 priority 100
> post-up ip route flush cache
> pre-down ip route del table 100 to 10.0.4.0/22 dev eth1
> pre-down ip rule del from 10.0.4.0/22 table 100 priority 100
> pre-down ip route flush cache
>
> iface eth1 inet6 static
> address 2001:470:e067:10::249
> netmask 64
> pre-up echo 0 > /proc/sys/net/ipv6/conf/eth1/autoconf
> pre-up echo 0 > /proc/sys/net/ipv6/conf/eth1/accept_ra
> post-up ip -6 route add table 100 to 2001:470:e067:10::/64 dev eth1
> post-up ip -6 route add table 100 to default via 2001:470:e067:10::10 dev eth1
> post-up ip -6 rule add from 2001:470:e067:10::/64 table 100 priority 100
> post-up ip -6 route flush cache
> pre-down ip -6 rule del from 2001:470:e067:10::/64 table 100 priority 100
> pre-down ip -6 route flush cache
>
> iface eth2 inet static
> address 10.0.0.249
> netmask 255.255.252.0
> broadcast 10.0.3.255
> post-up ip route add table 200 to 10.0.0.0/22 dev eth2
> post-up ip route add table 200 to default via 10.0.0.1 dev eth2
> post-up ip rule add from 10.0.0.0/22 table 200 priority 200
> post-up ip route flush cache
> pre-down ip rule del from 10.0.0.0/22 table 200 priority 200
> pre-down ip route flush cache
>
> iface eth3 inet static
> address 10.0.16.249
> netmask 255.255.252.0
> broadcast 10.0.19.255
> post-up ip route add table 300 to 10.0.16.0/22 dev eth3
> post-up ip route add table 300 to default via 10.0.16.1 dev eth3
> post-up ip rule add from 10.0.16.0/22 table 300 priority 300
> post-up ip route flush cache
> pre-down ip rule del from 10.0.16.0/22 table 300 priority 300
> pre-down ip route flush cache
>
> iface eth4 inet static
> address 10.0.20.249
> netmask 255.255.252.0
> broadcast 10.0.23.255
> gateway 10.0.20.1
> post-up ip route add table 400 to 10.0.20.0/22 dev eth4
> post-up ip route add table 400 to default via 10.0.20.1 dev eth4
> post-up ip rule add from 10.0.20.0/22 table 400 priority 400
> post-up ip route flush cache
> pre-down ip route del table 400 to 10.0.20.0/22 dev eth4
> pre-down ip route del table 400 to default via 10.0.20.1 dev eth4
> pre-down ip rule del from 10.0.20.0/22 table 400 priority 400
> pre-down ip route flush cache
>
>
> I use separate inet/inet6 configuration stanzas on CT0 because I
> cannot use more than one configuration stanza for a bridge interface.
> Within the container I have separate inet/inet6 stanzas because these
> are not bridge interfaces as far as the container is concerned. The
> rules in the container are there to keep traffic bound to a given
> interface (e.g., if a host on vlan20 talks to the container's IP on
> vlan40, the return packets go out the same interface they came in).
>
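> Once the container is up, the source-based split can be checked from
> inside it with, e.g.:
>
> ip rule show              # should list the from-<subnet> rules at prio 100-400
> ip route show table 400   # vlan40's routes plus its own default gateway
> ip -6 rule show           # the IPv6 counterpart (table 100)
>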
> As far as functionality goes, I do not have any issues; I simply have
> these /var/log/messages entries filling up the log file. I do not
> know what is causing them. More containers on a given host means
> more of the same log entries. I tried searching the web for clues,
> but I seem to have stumbled into something uncommon. Any help is
> appreciated; if you need more information, let me know. Notably, if
> the containers have no IPv6 addresses, I do not get these errors.
>
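> A crude way to confirm the correlation between running containers and
> the message rate:
>
> vzlist -H -o ctid | wc -l                    # running containers
> grep -c 'fib6_clean_all' /var/log/messages   # messages logged so far
>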
> As a side note (probably not related), I updated the vznetaddbr script
> used to add veth interfaces to the respective bridge on CT0 so that
> the bridge retains the appropriate MAC address, by adding the
> following lines:
>
> # Retain the existing MAC address when the veth interfaces are
> # added to the bridge
> #ifconfig $VZHOSTBR hw ether $(ifconfig eth0 | awk '{print $5; exit}')
> ifconfig $bridge hw ether 00:25:90:61:74:53
>
> This really just keeps the bridge from changing its MAC address. The
> MAC change had the side effect of dropping all TCP sessions (i.e.,
> when I stopped a container, I would lose my SSH session to CT0).
>
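> A less hard-coded variant of the same idea (untested sketch; it
> assumes eth1 is the uplink whose MAC should be copied to the bridge):
>
> # untested sketch: read the uplink's MAC instead of hardcoding it
> HWADDR=$(ip link show eth1 | awk '/link\/ether/ {print $2}')
> ifconfig $bridge hw ether $HWADDR
>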
> Thanks in advance,
> Axton Grams