WireGuard AllowedIPs multiple tunnels

Hi,

TL;DR: trying to route inbound port 80 on a VPS through a WireGuard tunnel to OpenWrt behind my ISP's public IP, and then on to a container on a VLAN.


I'm going round the bend here trying to figure this out, hopefully someone can put me out of my misery :slight_smile:

I have recently provisioned a VPS and have set up a WireGuard tunnel between the VPS and OpenWrt at home (behind a static ISP IP).
The VPS has 10.10.10.1 on its wg0 interface and is the listening side; OpenWrt has 10.10.10.10 on its IONOS WireGuard interface.

I also have a WireGuard tunnel to ProtonVPN with a static route using a table for one of my VLANs; this is working fine.

I also have a site-to-site WireGuard tunnel to another building which isn't currently working (the other end is also OpenWrt, using a USB 4G connection, and has worked in the past; I just haven't bothered to troubleshoot it as it's not that important).


What I'm trying to achieve is to forward inbound port 80 on the VPS through the tunnel to a container on one of the VLANs behind OpenWrt. The container is called mail and its internal IP is 192.168.22.25, in a zone called DMZ. The full route would therefore be

VPS_PUBLIC_IP@ens6 <> 10.10.10.1@wg0 <> 10.10.10.10@IONOS <> 192.168.22.25@eno1.22

I have a port forward on OpenWrt from the IONOS zone to port 80 on 192.168.22.25 in the DMZ zone.
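
For reference, that redirect in /etc/config/firewall is something like this (the rule name is made up, zone names as above):

config redirect
        option name 'vps-http-to-mail'
        option target 'DNAT'
        option src 'IONOS'
        option src_dport '80'
        option dest 'DMZ'
        option dest_ip '192.168.22.25'
        option dest_port '80'
        option proto 'tcp'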

nftables.conf looks as follows on the VPS

flush ruleset

define DEV_WORLD = ens6
define DEV_PRIVATE = wg0
define NET_PRIVATE = { 10.10.10.0/24 }

table ip global {                                                                                     
        chain inbound_world {
                # accept SSH connections to this VPS only from ISP
                ip saddr ISP_PUBLIC_IP tcp dport ssh accept
                # accept inbound WireGuard connections to this VPS
                udp dport 51820 accept
        }

        chain inbound_private {
                # accept rate-limited ping from LAN for testing
                icmp type echo-request limit rate 5/second accept
        }

        chain inbound {
                type filter hook input priority 0; policy drop;
                ct state vmap { established : accept, related : accept, invalid : drop }
                iifname vmap { lo : accept, $DEV_WORLD : jump inbound_world, $DEV_PRIVATE : jump inbound_private }
        }

        chain forward {
                type filter hook forward priority filter; policy drop;
                ct state vmap { established : accept, related : accept, invalid : drop }
                iifname $DEV_PRIVATE accept;
                # forward from this VPS to WireGuard tunnel
                iifname $DEV_WORLD oifname $DEV_PRIVATE accept
        }

        chain prerouting {
                type nat hook prerouting priority dstnat; policy accept;
                # DNAT connections arriving at this VPS on port 80 to port 80 on the WireGuard peer (OpenWrt)
                iifname $DEV_WORLD tcp dport 80 dnat to 10.10.10.10:80
        }

        chain postrouting {
                type nat hook postrouting priority srcnat; policy accept;
                # masquerade returning connections from the WireGuard peer (OpenWrt) to this VPS' public interface
                ip saddr $NET_PRIVATE oifname $DEV_WORLD masquerade
        }
}

Currently, I can curl http://10.10.10.10 from the VPS and nginx responds from 192.168.22.25.
What I can't get to work is curl http://VPS_PUBLIC_IP and have 192.168.22.25 respond.
I did get it to work at one point, and I think it was by adding 0.0.0.0/0 to AllowedIPs on the OpenWrt peer, but this changed the default route for everything, which is definitely not what I want (all home traffic going to the VPS, which didn't work anyway, for obvious reasons!).

The VPS has AllowedIPs = 10.10.10.10/32, 192.168.22.25/32
OpenWrt has AllowedIPs = 10.10.10.1/32

AllowedIPs hurts my head. I just saw it described as: inbound it's like an ACL, outbound it's a routing table.
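
To make that concrete, my two peer configs boil down to this (keys omitted):

# VPS side (wg0)
[Peer]
# OpenWrt
AllowedIPs = 10.10.10.10/32, 192.168.22.25/32
# inbound: only accept decrypted packets with these source addresses
# outbound: send packets for these destinations to this peer

# OpenWrt side (IONOS)
[Peer]
# VPS
AllowedIPs = 10.10.10.1/32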


I really don't want to use PBR for this. I have tried several times to get on with PBR and it doesn't gel for me. It's also another level of abstraction, whereas I'd like to understand how to do this myself. I have my reasons for forwarding port 80; primarily it's for testing so I can get an understanding and working configuration for both nftables and routes in OpenWrt before opening other services.

Help me understand, please :smiley:


BTW, OpenWrt is 24.10 and the VPS is Debian 12 if that's relevant.
Also, on the VPS:

sysctl net.ipv4.ip_forward
    net.ipv4.ip_forward = 1
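
For completeness, I persisted that across reboots with a drop-in (the filename is just what I picked):

# /etc/sysctl.d/99-ip-forward.conf
net.ipv4.ip_forward = 1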

You need to SNAT the connections that do not originate from the VPS itself to the IP address of its WireGuard interface.

chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        # masquerade returning connections from the WireGuard peer (OpenWrt) to this VPS' public interface
        ip saddr $NET_PRIVATE oifname $DEV_WORLD masquerade
        # masquerade connections to the WireGuard peer (OpenWrt) to this VPS' WireGuard interface
        ip daddr $NET_PRIVATE oifname $DEV_PRIVATE masquerade
}

Note that you will not be able to see the real IP address of the initiator in the log files.


Hi Pavel,

I'm delighted to say that this is working!
Many thanks as it turned out not to be anything to do with my OpenWrt configuration.
I was close :slight_smile:

If you wouldn't mind, I'm still having trouble understanding what that rule is actually doing. Could you explain?

I pared nftables.conf down a bit and now have

flush ruleset

define DEV_WORLD = ens6
define DEV_PRIVATE = wg0
define NET_PRIVATE = { 10.10.10.0/24 }

table ip global {
        chain inbound_world {
                # accept SSH connections to this VPS only from ISP
                ip saddr ISP_PUBLIC_IP tcp dport ssh accept
                # accept inbound WireGuard connections to this VPS
                udp dport 51820 accept
        }

        chain inbound_private {
                # accept rate-limited ping from LAN for testing
                icmp type echo-request limit rate 5/second accept
        }

        chain inbound {
                type filter hook input priority 0; policy drop;
                ct state vmap { established : accept, related : accept, invalid : drop }
                iifname vmap { lo : accept, $DEV_WORLD : jump inbound_world, $DEV_PRIVATE : jump inbound_private }
        }

        chain forward {
                type filter hook forward priority filter; policy drop;
                ct state vmap { established : accept, related : accept, invalid : drop }
                # forward from this VPS to WireGuard tunnel
                iifname $DEV_WORLD oifname $DEV_PRIVATE accept
        }

        chain prerouting {
                type nat hook prerouting priority dstnat; policy accept;
                # DNAT connections arriving at this VPS on port 80 to port 80 on the WireGuard peer (OpenWrt)
                iifname $DEV_WORLD tcp dport 80 dnat to 10.10.10.10:80
        }

        chain postrouting {
                type nat hook postrouting priority srcnat; policy accept;
                # masquerade connections to the WireGuard peer (OpenWrt) to this VPS' WireGuard interface
                ip daddr $NET_PRIVATE oifname $DEV_PRIVATE masquerade
        }
}

It appears I didn't need some of those other rules.


The highlights then are:

chain forward
    iifname $DEV_WORLD oifname $DEV_PRIVATE accept

chain prerouting
    iifname $DEV_WORLD tcp dport 80 dnat to 10.10.10.10:80

chain postrouting
    ip daddr $NET_PRIVATE oifname $DEV_PRIVATE masquerade

so chain forward says (if established or related) forward packets from the internet to the WireGuard tunnel

chain prerouting says packets from the internet for port 80 should have their IP changed to 10.10.10.10 port 80

chain postrouting says packets with a destination in the WireGuard network should be masqueraded through the WireGuard interface?

With regard to that postrouting rule, is that taking inbound packets from the internet and changing their IP to 10.10.10.x? (EDIT: this doesn't make sense, as that's what the prerouting NAT is doing isn't it)

If my ultimate goal is to run postfix/dovecot at home, using the VPS for its inbound IP (which has a better reputation than my home IP), the inability to see the originating address is going to be a problem, isn't it? I won't be able to check that the sender's IP matches SPF etc.
Can you suggest a solution for that, other than configuring postfix on the VPS?

Thanks again for your solution Pavel :smiley:

When a request for a site comes from the Internet, the VPS NATs the requestor's actual IP to the VPS tunnel IP, so the rest of the network only has to deal with one source IP instead of the whole Internet.

The downside is as @pavelgl said, the web server will not see the requestor's IP, so it cannot log who is using the site or geofence them.

Another approach could be to terminate the WireGuard tunnel on the web server machine, so requests from the Internet tunnel all the way to the destination without needing intermediate routing. Or run a reverse proxy on the VPS which makes requests to the server on internal IPs.
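
A minimal sketch of the reverse-proxy idea, assuming nginx on the VPS and the tunnel addresses from this thread (filename made up):

# /etc/nginx/conf.d/mail-web.conf on the VPS
server {
        listen 80;
        location / {
                # reach the backend over the tunnel
                proxy_pass http://10.10.10.10;
                # pass the real client IP so the backend can log/check it
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
}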

As a separate project, I already have nginx (in a container) as a proxy on my ISP public address (not the VPS public address) and use that to get the IP with X-Forwarded-For/X-Real-IP. This is how I serve webpages from home, via Cloudflare-origin. The nginx container is proxying to another container with a LAMP stack. The nginx container only has nginx and a firewall running on it.

I have the VPS because I want to use its IP for SMTP because my ISP IP is on a blacklist and my ISP also doesn't allow port 25. I may switch the Cloudflare-origin solution I have above to the VPS at some point and would then configure nginx on the VPS, but this won't help with port 587, 993 etc

My reason for not terminating the tunnel in the container is that I likely want to route several containers in the same way which would necessitate multiple WireGuard tunnels (perhaps this isn't an issue? Or perhaps I can configure the containers to only need 1 tunnel)

I'm using http as a learning/testing tool, as it's the easiest way I could think of to have a daemon running. My intention is to run postfix and dovecot with LMTP and SASL and route 25/587 via the VPS (possibly outbound via SMTP2go).
I'd really just like the VPS to be a packet proxy and keep it simple if possible, but I need the sender's IP for SPF and relay checking etc.

I'm referring to the series of posts here https://brokkr.net/2015/10/15/lets-do-postfix-again-but-slowly-and-properly-this-time-part-1-a-simple-local-mail-receiving-server/ as a guide.
Perhaps I can use a tunnel from VPS to postfix for SMTP and nginx on the VPS for proxy to dovecot.

Or maybe it would make sense/be easier to have another postfix instance on the VPS and use it as a relay, but either way, I want the emails stored at home. I'd also rather just route the packets than maintain another instance of postfix ideally.

The rule in the prerouting chain changes the destination address of packets destined for tcp dport 80 (on the local machine) to 10.10.10.10:80, while preserving the original source address.

The rule in the forward chain allows all traffic passing through the VPS that enters through $DEV_WORLD and leaves through $DEV_PRIVATE. You could make this rule more restrictive by only allowing port forwards.

chain forward
    iifname $DEV_WORLD oifname $DEV_PRIVATE ct status dnat counter accept

The rule in the postrouting chain changes the source address of every packet leaving $DEV_PRIVATE to 10.10.10.1 (the IP address of the VPS's WireGuard interface). This is needed because, according to your settings (OpenWrt has AllowedIPs = 10.10.10.1/32), the OpenWrt device will not accept any packets with a source address other than 10.10.10.1.
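
To make it concrete, for a request from (say) 203.0.113.50 (an example documentation address), the addresses change like this:

203.0.113.50 -> VPS_PUBLIC_IP:80     arrives on ens6
203.0.113.50 -> 10.10.10.10:80       after dnat in prerouting
10.10.10.1   -> 10.10.10.10:80       after masquerade in postrouting, leaves wg0

Conntrack remembers both translations and reverses them for the reply packets, but the web server only ever sees 10.10.10.1.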

One way or another, you have to use pbr. If you are not willing to install the package, you can create the necessary rules manually:

Disable the masquerading rule in the VPS postrouting chain.

On OpenWrt, change allowed_ips to 0.0.0.0/0, but disable route_allowed_ips.
Run these commands, substituting $wg_if with the correct interface name.

nft insert rule inet fw4 mangle_prerouting ip saddr 192.168.22.25 tcp sport 80 counter mark set 0x5
ip rule add fwmark 0x5 table 105 prio 5
ip route add default dev $wg_if table 105

If it works this way, we will help you translate it to uci and then you can adapt it for the mail server.
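
Roughly, the uci equivalent would be something like this (the zone, interface and rule names are assumptions, so adjust to your setup):

# /etc/config/firewall
config rule
        option name 'mark-mail-http'
        option family 'ipv4'
        option src 'DMZ'
        option proto 'tcp'
        option src_ip '192.168.22.25'
        option src_port '80'
        option target 'MARK'
        option set_mark '0x5'

# /etc/config/network
config rule
        option mark '0x5'
        option lookup '105'
        option priority '5'

config route
        option interface 'IONOS'
        option target '0.0.0.0/0'
        option table '105'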

Thanks for the clarification of prerouting. I should really think about what dnat stands for!

The reason source IPs from the internet aren't visible at 192.168.22.25 is the double NAT here, isn't it?

Sorry if I'm being dense. I interpret that as:

The rule in the postrouting chain changes the source address of every packet leaving the VPS through $DEV_PRIVATE, destined for the OpenWrt WireGuard peer, to be 10.10.10.1. This is needed because the OpenWrt WireGuard peer has AllowedIPs = 10.10.10.1/32 which means it will not accept any inbound packets unless they're addressed from 10.10.10.1?

Incidentally, the OpenWrt WireGuard peer currently has AllowedIPs = 0.0.0.0/0 with Route Allowed IPs disabled.

Are your commands for nft and ip essentially creating entries in Static IPv4 Routes and IPv4 Rules, so that anything from 192.168.22.25:80 uses table 105 and goes out through the OpenWrt WireGuard interface?

This, and disabling the masquerade in the VPS postrouting chain is removing the double NAT problem isn't it?

My reason for the pbr reluctance is that it's a black box for me at the moment and I'd like to understand what's actually going on. That, and the LuCI interface for pbr was muddying the waters further for me, I'm afraid :upside_down_face:

Thanks for sharing your knowledge with me Pavel

When request packets from Internet users are not NATed on their way to the web server, the server will return web pages to the requestor's source IP, which could be anywhere on the Internet. So the default route for the web server must be the WireGuard tunnel, while the default route for everything else connected to the web server's router / WireGuard terminus is the regular local ISP or a different VPN.

This requires source-conditional routing in that router, either with multiple routing tables set up manually, or with pbr, which is an assistant for setting up multiple tables. The end result is the same.

On point-to-point links this is entirely practical. When a WireGuard interface has multiple peers, a curated, non-overlapping set of allowed_ips is essential so the kernel module can properly encrypt and dispatch outgoing packets to the correct peer. That is not an issue with point-to-point links, where it is common to set allowed_ips to 0.0.0.0/0 and control routing externally. route_allowed_ips is a convenience of the OpenWrt configuration script, not a feature of WireGuard itself. By itself, WireGuard with one peer and allowed_ips set to /0 at both ends is a basic tunnel.
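
For example (subnets invented for illustration), two peers on one interface need disjoint allowed_ips, because that set is what the kernel uses to pick the peer for each outgoing packet:

[Peer]
# site A
PublicKey = ...
AllowedIPs = 10.10.10.10/32, 192.168.22.0/24

[Peer]
# site B
PublicKey = ...
AllowedIPs = 10.10.10.20/32, 192.168.40.0/24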

As with any interface, when the configuration places an IP address with a prefix shorter than /32 on it (a subnet rather than a single host), a route to that subnet is installed in the main routing table. IP addresses are not required at all on point-to-point tunnels that route between networks, but they are useful for testing.


When you want a route to your VPS, you can add the subnet of the VPS besides 0.0.0.0/0 and enable Route Allowed IPs. You do not want a default route, though; to stop the default route, disable/untick Use Default Gateway on the Advanced tab of the WireGuard interface.

This way you do have a route to your VPS but no default route via the VPN.
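
In /etc/config/network that looks roughly like this (interface and section names assumed, so adjust to yours):

config interface 'IONOS'
        option proto 'wireguard'
        # "Use Default Gateway" unticked
        option defaultroute '0'

config wireguard_IONOS
        list allowed_ips '0.0.0.0/0'
        # subnet of the VPS, so Route Allowed IPs installs a route to it
        list allowed_ips '10.10.10.0/24'
        option route_allowed_ips '1'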


I think I made it this way because DNS wasn't working for 192.168.22.25.
Currently, Use Default Gateway is enabled for the OpenWrt WireGuard interface, but the output of route shows default to be pppoe-wan (which is what I want)

So, if I have AllowedIPs = 10.10.10.1/32 and Route Allowed IPs, but not Use Default Gateway (presumably this should say Use AS Default Gateway?), I will have a route to the VPN but OpenWrt won't use it as the default route? Doesn't this mean DNS will stop working for 192.168.22.25 (see above)

So on a WireGuard peer acting as the listening "server", AllowedIPs needs to be specific so the "server" can differentiate between multiple WireGuard peers. However on the peer at the other end, which only has one WireGuard tunnel, AllowedIPs can be 0.0.0.0/0 as there can be no confusion about which tunnel to use?

So the observation "Inbound is like an ACL and Outbound is a routing table" makes sense.