[Users] How to configure a server to use multiple subnets/gateways

Rene C. openvz at dokbua.com
Sat Aug 24 14:50:45 EDT 2013


I know I'm probably being a PITA, but I'm still stuck with this.

The problem is very simple at this point - I'm following the guide
that was referred to, but at the second step I get an error, and I have
no idea how to proceed from here.

# ip rule add from xxx.13.31.0/24 table 6
# ip route add default dev eth0 via xxx.13.31.129 table 6
RTNETLINK answers: No such process

 Help. Please.


On Wed, Aug 21, 2013 at 12:39 AM, Rene C. <openvz at dokbua.com> wrote:
> Hi Kir,
>
> Thanks for taking the time to reply!!  I understand this is probably
> tedious routine, but it's a blocking issue for me. Yes the provider
> should probably have taken care of it, but I have the privilege of
> bringing Internet to a third world country and we sometimes have to
> live with what we get.
>
> I can ping the gateway fine from the hardware node:
>
> [root at server17 ~]# ping xxx.13.31.129
> PING xxx.13.31.129 (xxx.13.31.129) 56(84) bytes of data.
> 64 bytes from xxx.13.31.129: icmp_seq=1 ttl=255 time=0.337 ms
> 64 bytes from xxx.13.31.129: icmp_seq=2 ttl=255 time=3.46 ms
> 64 bytes from xxx.13.31.129: icmp_seq=3 ttl=255 time=2.68 ms
> ...
>
> So if I understand you correctly, the only thing to do next is to set up
> source routing. I tried that before and already explained that it
> stops with an error:
>
> # ip rule add from xxx.13.31.0/24 table 6
> # ip route add default dev eth0 via xxx.13.31.129 table 6
> RTNETLINK answers: No such process
>
> I've got my container running with one of the allocated IPs bound to it,
> but I can't ping either in or out of the container.
>
> // Rene
>
>
> On Tue, Aug 20, 2013 at 11:48 PM, Kir Kolyshkin <kir at openvz.org> wrote:
>> On 08/20/2013 09:13 AM, Rene C. wrote:
>>>
>>> No takers!?  Is it more complicated than I imagine?  I have tried to
>>> explain it as well as I can. Please let me know if there is anything
>>> unclear and I'll try to clarify.
>>
>>
>> As I explained earlier, you don't have to use bridging in this scenario.
>>
>> All you need to do is add the proper static route to your system so
>> that the other network is reachable from your host, that's it.
>>
>> First, make sure that the gateway they specified is reachable from your
>> host:
>> ping xxx.13.31.129
>>
>> I am assuming it is not, and you only have one network card (eth0). So, you
>> need
>> to tell your host that this network is actually there:
>>
>> ip route add xxx.13.31.128/27 dev eth0 scope link
>>
>> After that, the above ping should work.
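>>
>> (If you want that route to survive a reboot on CentOS 6, you can also put
>> the same line, minus the "ip route add" part, into
>> /etc/sysconfig/network-scripts/route-eth0.)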
>>
>> Next, you should probably set up source routing for these IPs, so that
>> containers in this range will use the gateway provided. Check
>> http://openvz.org/Source_based_routing for details.
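>>
>> A minimal sketch of what that comes down to here (the table number 6 is
>> arbitrary, the addresses are the ones from your mails):
>>
>> ip route add xxx.13.31.128/27 dev eth0 scope link table 6
>> ip route add default via xxx.13.31.129 dev eth0 table 6
>> ip rule add from xxx.13.31.128/27 table 6
>> ip route flush cache
>>
>> Note the link-scope route inside table 6: the "RTNETLINK answers: No such
>> process" you got usually means the kernel has no route to the gateway at
>> the moment the default route is added, so add the subnet route (here
>> and/or in the main table) before the default route.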
>>
>> Finally, you can set an IP for your container in the usual manner, using
>> venet:
>>
>> vzctl set NNN --ipadd xxx.13.31.130/27 --save
>>
>> and then check that everything works (ping from inside container etc.).
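>>
>> For example (NNN and the addresses are just placeholders):
>>
>> vzctl exec NNN ping -c 3 xxx.13.31.129
>> vzctl exec NNN ping -c 3 8.8.8.8
>> ping xxx.13.31.130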
>>
>> Kir.
>>
>> PS Frankly speaking, this is what your hoster should've explained to you.
>> If they do such extravagant setups, they should be able to help their
>> customers set those up.
>>
>>>
>>> // Rene
>>>
>>> On Sun, Aug 18, 2013 at 1:22 PM, Rene C. <openvz at dokbua.com> wrote:
>>>>
>>>> ... continued
>>>>
>>>>
>>>> So, forgoing the simple/obvious way of bridging the CT0 interface, I try
>>>> the longer route:
>>>>
>>>> [root at server17 ~]# ifconfig veth1706.0 0
>>>> [root at server17 ~]# echo 1 > /proc/sys/net/ipv4/conf/veth1706.0/forwarding
>>>> [root at server17 ~]# echo 1 > /proc/sys/net/ipv4/conf/veth1706.0/proxy_arp
>>>> [root at server17 ~]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
>>>> [root at server17 ~]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
>>>> [root at server17 ~]# vzctl enter 1706
>>>> entered into CT 1706
>>>> [root at vps1706 /]# ifconfig eth0 0
>>>> [root at vps1706 /]# ip addr add xxx.13.31.131 dev eth0
>>>> [root at vps1706 /]# route add default dev eth0
>>>> [root at vps1706 /]# logout
>>>> exited from CT 1706
>>>> [root at server17 ~]# ip route add xxx dev veth1706.0
>>>> RTNETLINK answers: File exists
>>>>
>>>>
>>>> To recap the problem:
>>>>
>>>> I have this hardware node with IP xxx.22.181.158
>>>>
>>>> The node runs CentOS 6, and so do all containers.
>>>>
>>>> I already have 4 containers with IP addresses on the same subnet
>>>> (xxx.22.181.*) running fine.
>>>>
>>>> Problem is, now my data center gave me 3 IP addresses in a new subnet
>>>> with a separate gateway:
>>>>
>>>> IP add  : xxx.13.31.130  -  132
>>>> subnet : 255.255.255.224
>>>> gateway : xxx.13.31.129
>>>>
>>>> How can I make this work? Please be specific. I don't mind reading and
>>>> learning, but the learning curve at this stage is too steep - I'm not
>>>> getting anywhere. Thanks.
>>>>
>>>>
>>>>
>>>>
>>>> On Sun, Aug 18, 2013 at 12:28 PM, Rene C. <openvz at dokbua.com> wrote:
>>>>>
>>>>> I'm sorry but networking is obviously not one of my strong areas and
>>>>> for all the good intentions, all the buzzwords confuse me more than
>>>>> they help me.
>>>>>
>>>>> I had a look at http://openvz.org/Virtual_Ethernet_device, and it
>>>>> gives detailed information about a number of scenarios, for example
>>>>> "Simple configuration with virtual Ethernet devices" and then proceeds
>>>>> with 50 steps to set it up. (Ok I exaggerate but you get my drift).  I
>>>>> think my requirement is very, very simple: as I explained before, my
>>>>> DC gave me a bunch of IP addresses on a new subnet that require a
>>>>> different gateway to work.
>>>>>
>>>>> I tried.
>>>>>
>>>>> Ok so I start at the "Simple configuration with virtual Ethernet
>>>>> device", with the vzctl start and set commands listed. Then it says
>>>>> "The following steps are needed when the CT is not bridged to a CT0
>>>>> network interface.". Ok, I guess I should make the "CT bridged to a
>>>>> CT0 network interface" then... but how?   There's a section
>>>>> "Independent Virtual Ethernet communication through the bridge". It
>>>>> starts with "create bridge device", starting with "brctl addbr vzbr0".
>>>>> Ok, I try that...
>>>>>
>>>>> # brctl addbr vzbr0
>>>>> -bash: brctl: command not found
>>>>>
>>>>> Now what?
>>>>>
>>>>> I just need to set this up - not a VPN tunnel or multiple
>>>>> 192.168 networks.  I'm sure someone in the know could tell me this is
>>>>> a matter of two lines instead of this information overload.
>>>>>
>>>>> Thanks!
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Aug 18, 2013 at 3:36 AM, Jean-Marc Pigeon <jmp at safe.ca> wrote:
>>>>>>
>>>>>> Bonjour Rene C.
>>>>>>
>>>>>> My understanding is that you want to route VPS IPs unrelated to the host IP.
>>>>>> Just to tell you that we have such a config,
>>>>>> using veth within the VPS and a bridge interface on the host.
>>>>>> Our config works dual-stack (IPv4 + IPv6).
>>>>>>
>>>>>> The VPS eth0 interface is a very straightforward one.
>>>>>> VPS ifcfg-eth0
>>>>>> DEVICE=eth0
>>>>>> BOOTPROTO=static
>>>>>> ONBOOT=yes
>>>>>> IPADDR=X.Y.Z.T
>>>>>> NETMASK=255.255.255.255
>>>>>> IPV6INIT=yes
>>>>>> IPV6ADDR=XX:YY.......ZZ:TT
>>>>>>
>>>>>> Keywords are veth, IPv4 routing, bridge.
>>>>>> http://openvz.org/Virtual_Ethernet_device
>>>>>> seems to me a good starting point.
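>>>>>>
>>>>>> As a rough sketch only (container ID 101 is a made-up example, br0 is
>>>>>> the bridge on the host), the host side amounts to something like:
>>>>>>
>>>>>> vzctl set 101 --netif_add eth0 --save   # creates veth101.0 on the host
>>>>>> brctl addif br0 veth101.0               # attach the container's veth to the bridge
>>>>>> ip link set veth101.0 up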
>>>>>>
>>>>>>
>>>>>> Quoting "Rene C." <openvz at dokbua.com>:
>>>>>>
>>>>>>> Thanks Jean-Marc, I don't think this is what I need though - I don't
>>>>>>> have any bridge interfaces anywhere, and frankly don't quite see how
>>>>>>> it fits into the server. There's only an ifcfg-eth0 file.
>>>>>>>
>>>>>>> I had a look at this page -
>>>>>>> http://wiki.openvz.org/Source_based_routing - am I on the right track?
>>>>>>>
>>>>>>> I tried some of the commands but it threw an error early on so I have
>>>>>>> a feeling I'm not.
>>>>>>>
>>>>>>> # ip rule add from xxx.13.31.0/24 table 6
>>>>>>> # ip route add default dev eth0 via xxx.13.31.129 table 6
>>>>>>> RTNETLINK answers: No such process
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sat, Aug 17, 2013 at 10:28 PM, Jean-Marc Pigeon <jmp at safe.ca>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Bonjour Rene C,
>>>>>>>>
>>>>>>>> My config:
>>>>>>>>
>>>>>>>> ifcfg-br0
>>>>>>>> #definition Bridge interface
>>>>>>>> DEVICE=br0
>>>>>>>> ONBOOT=yes
>>>>>>>> TYPE=Bridge
>>>>>>>> BOOTPROTO=static
>>>>>>>> IPADDR=HOST IP number
>>>>>>>> NETMASK=255.255.255.224  #(My HOST SUBNET MASK)
>>>>>>>> IPV6INIT=yes
>>>>>>>> IPV6ADDR=PP:XX:.....YY:ZZ
>>>>>>>>
>>>>>>>> ifcfg-br0:brgd
>>>>>>>> DEVICE=br0:brgd
>>>>>>>> ONBOOT=yes
>>>>>>>> TYPE=Bridge
>>>>>>>> BOOTPROTO=static
>>>>>>>> IPADDR=192.0.2.1
>>>>>>>> NETMASK=255.255.255.255
>>>>>>>> #to avoid checking for already set IP
>>>>>>>> ARPCHECK=no
>>>>>>>>
>>>>>>>> I am using Quagga (RIP) to transparently route (and move) VPS IPs among
>>>>>>>> hosts, so that a VPS can be "somewhere" within the hardware cloud (and
>>>>>>>> thus a VPS can be set up with an IP unrelated to the host).
>>>>>>>>
>>>>>>>> Hoping that helps.
>>>>>>>> Contact me privately if I can help.
>>>>>>>>
>>>>>>>> Quoting "Rene C." <openvz at dokbua.com>:
>>>>>>>>
>>>>>>>>> Kirill, do you know of a page where this procedure is documented?
>>>>>>>>> Thanks!
>>>>>>>>>
>>>>>>>>> On Sat, Aug 17, 2013 at 4:54 PM, Kirill Korotaev <dev at parallels.com>
>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Rene, if I got your problem correctly, you just need to create a
>>>>>>>>>> routing rule on the host, so that it knows where to route your IPs.
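>>>>>>>>>>
>>>>>>>>>> As a rough example (assuming eth0 is your uplink and using the subnet
>>>>>>>>>> from your mail), such a rule could be as simple as:
>>>>>>>>>>
>>>>>>>>>> ip route add xxx.13.31.128/27 dev eth0 scope link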
>>>>>>>>>>
>>>>>>>>>> Or use bridged networking with a veth interface instead.
>>>>>>>>>>
>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>
>>>>>>>>>> On 17.08.2013, at 13:33, "Rene C." <openvz at dokbua.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> I have this hardware node with IP xxx.22.181.158
>>>>>>>>>>>
>>>>>>>>>>> The node runs CentOS 6, and so do all containers.
>>>>>>>>>>>
>>>>>>>>>>> I already have 4 containers with IP addresses on the same subnet
>>>>>>>>>>> (xxx.22.181.*) running fine.
>>>>>>>>>>>
>>>>>>>>>>> Problem is, now my data center gave me 3 IP addresses in a new subnet
>>>>>>>>>>> with a separate gateway:
>>>>>>>>>>>
>>>>>>>>>>> IP add  : xxx.13.31.130  -  132
>>>>>>>>>>> subnet : 255.255.255.224
>>>>>>>>>>> gateway : xxx.13.31.129
>>>>>>>>>>>
>>>>>>>>>>> The only way I can make this work is by taking one of these IP
>>>>>>>>>>> addresses and binding it to the hardware node; then I can use the
>>>>>>>>>>> remaining IP addresses with containers - but this way I lose an IP
>>>>>>>>>>> address - the one bound to the hardware node, which no longer seems
>>>>>>>>>>> usable for containers.
>>>>>>>>>>>
>>>>>>>>>>> This is a problem both because there's a limit to how many IPs the DC
>>>>>>>>>>> will allocate to a server, and because the IP addresses are quite
>>>>>>>>>>> costly.
>>>>>>>>>>>
>>>>>>>>>>> Did I misunderstand something?
>>>>>>>>>>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at openvz.org
>> https://lists.openvz.org/mailman/listinfo/users

