6in4 - Troubles with multiple DHCP wans and secure updates

I have been dealing with a weird issue for some time now and thought I would try to get some traction here. Basically I have two DHCP-based WANs and one static WAN. If I avoid all dynamic updating I can get things to work, although a broken local address (the link/sit address) gets assigned to the tunnel interface. What I am hoping to do is assign an account/blocks to each WAN with dynamic updating still intact.

I believe the problems are twofold:

  1. My WAN seems to swing between any of the interfaces that have a default route (I would think this should return the lowest-metric default route, but :confused:), as returned via the `__network_wan` function. This ultimately changes the `ipaddr` used by 6in4.sh (I think; my bash skills are bad).

  2. uclient-fetch, as invoked by 6in4.sh, does not appear to do any source-address or interface pinning, meaning it is impossible to pin more than one connection with the current scripting (even now I am putting in static routes for the update addressing to force it out the interface I desire).
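For comparison, curl does have such an option; a minimal illustration of what I mean by pinning (the URL and interface/address are placeholders, and the commands are echoed rather than actually run):

```shell
# curl can bind outgoing traffic to an interface or a source address;
# uclient-fetch exposes no equivalent flag as far as I can tell.
# Echoed for illustration only; URL and names are placeholders.
echo curl --interface eth1 "http://example.com/nic/update"
echo curl --interface 192.0.2.50 "http://example.com/nic/update"
```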

I realize I could script my way to success outside of LuCI and the 6in4 package/UCI network architecture, but I feel this could be solved quite easily with some trivial line changes.

I am thinking of adding two optional parameters to the 6in4 interface type: the first being the interface name to use for `ipaddr` and uclient-fetch resolution, and the second perhaps being an index into `ipv4-address`, so that a static interface defined with multiple addresses can be used as well.
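To make that concrete, a hypothetical UCI sketch (the option names and the peer address are purely illustrative, not existing options):

```
config interface 'henet2'
	option proto '6in4'
	option peeraddr '203.0.113.1'   # placeholder tunnel peer
	# hypothetical option 1: which uplink supplies the local IPv4
	# endpoint and carries the update request
	option update_iface 'wan2'
	# hypothetical option 2: index into that interface's ipv4-address
	# list, for static uplinks with multiple addresses
	option addr_index '1'
```

On the implementation side, network.sh already provides helpers such as `network_get_ipaddr <var> <iface>` that could resolve the local address from an option like this.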

As an aside, I personally think that any tunnel-type connection should have these options. It would make things much more flexible and basically put feature parity on some of these things higher than the pfSenses of the world :slight_smile:

#1 is standard Linux kernel behaviour. Unless you get into BGP or similar, or configure specific routes manually, the kernel will round-robin between all the available WAN interfaces.

However, when there are several routes available to reach a remote host, the kernel will always choose the most specific one. Just as an idea, perhaps you could try configuring specific routes to your 6in4 peers.
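Sketching that suggestion (all addresses are RFC 5737 placeholders, and the command is echoed so it does not touch a live routing table; drop the `echo` on an actual router):

```shell
# Host route to the tunnel peer: packets to it always leave via the
# chosen uplink, because a /32 is more specific than any default route.
echo ip route add 203.0.113.1/32 via 192.0.2.1 dev eth1
```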


Yes, #1 is most-specific first, but then it is lowest-metric first; if you happen to have two routes with the same destination and metric, then I can see that being the case. The problem is that `__network_wan` does not appear to sort by metric to take the lowest one. When I curl with no source-address/interface pinning, the kernel handles the config I have flawlessly; it is when things go through the OpenWrt scripts that behaviour gets strange. I am going to start sharpening my bash and make some code suggestions.

Your second statement, with regard to specifying routes, is what I do currently; however, that limits dynamic updating for ALL WANs to just the one I am pushing the tunnel setup out of.

For IPv6, the interface metric is only evaluated after choosing based on the longest prefix match; in practice this means that unless both WAN uplinks come with basically the same prefix (e.g. the same ISP), the metric is never looked at.


The IPv6 side I do not have any issues with. It's specifically an issue with establishing multiple tunnels, each using its own IPv4 WAN/address. I have worked out the first part of this issue: basically, `__network_ifstatus`, as used by `__network_wan` in network.sh, does no ordering on its return, so however the JSON happens to be populated with addressing information at that particular moment produces a random response. My fix here is going to be returning results sorted by metric, lowest to highest, perhaps with a parameter to force the ordering one way or the other and to choose which attribute to return.
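The ordering fix can be sketched in plain shell; the "name metric" pairs below are made-up stand-ins for what `__network_ifstatus` would actually return via ubus:

```shell
# Pick the interface with the lowest route metric from "name metric"
# pairs on stdin. Sort numerically on field 2, keep the first entry.
lowest_metric_wan() {
	sort -k2 -n | head -n1 | cut -d' ' -f1
}

# Made-up candidate WANs; on a router this would come from ifstatus.
printf '%s\n' "wan1 20" "wan0 10" "wan2 30" | lowest_metric_wan
# → wan0
```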

proto_add_host_dependency, as used by the netifd 6in4 protocol handler, doesn't take routes from dynamic routing protocols into account anyway, in my experience.

Resurrection alert!!

Figured I would try to make this work; it ended up being a lot more trivial than I originally thought.
Pull request is here

Basically there is an additional parameter that can be used against the HE.net tunnel broker API.
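The parameter isn't named above, but HE's update endpoint is dyndns-style, and the dyndns protocol defines a `myip` parameter for supplying the local endpoint explicitly; assuming that is the parameter meant, the update call could look roughly like this (credentials, tunnel id, and address are placeholders, the URL shape is approximate, and the command is echoed rather than run):

```shell
# Hypothetical: tell the broker which local IPv4 endpoint to use via
# "myip" instead of letting it infer one from the connection's source.
USERNAME="user"        # placeholder
PASSWORD="secret"      # placeholder
TUNNELID="123456"      # placeholder
LOCAL4="192.0.2.50"    # address of the chosen WAN (placeholder)
echo uclient-fetch -q -O /dev/null \
	"https://ipv4.tunnelbroker.net/nic/update?username=$USERNAME&password=$PASSWORD&hostname=$TUNNELID&myip=$LOCAL4"
```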