Mwan3 port to nftables

I could not reproduce your issue, so at this point you'd need to do some testing and traces using commands similar to those in the detailed trace output I posted. It's hard to debug something when I have only fragments of configs, a vague description of what you're actually doing, and no access to your router.

I suggest waiting until I've tested and released the non-destructive mark version. There's no point testing a version that will soon be superseded.

After thorough checking, it seems the dnsmasq restart after a pbr restart always fails, so adding a script to restart dnsmasq whenever pbr restarts fixed the issue.

Everything seems to work just like it did with the iptables mwan3.

Still, it seems odd that restarting dnsmasq was never really a problem with the iptables mwan3.


I'd noticed that too. pbr is sending dnsmasq a SIGTERM. I had to restart dnsmasq more than once after restarting pbr; procd should be respawning it.

3.2 with the non-destructive mark per the above looks very promising


I can see why you used pbr alongside mwan3 while mwan3 was still iptables-based and not dnsmasq nft set aware. Now that mwan3 is dnsmasq nft set aware, though, I don't see anything in the use cases you shared that requires both packages at the same time, since mwan3 can do all your routing through your wireguard tunnels.

Welp, you probably have a point, but I just haven't had the time to reconfigure and retry. The last time, using the iptables version, IIRC creating wireguard routing rules under mwan3 didn't work; the other reason is that pbr is simply easier to use, without having to create ipsets under firewall and dns.

When pbr starts or restarts with domain-based policies configured (dnsmasq.nftset resolver in use), it calls /etc/init.d/dnsmasq restart to force dnsmasq to reload its nftset configuration (as does mwan3).

Due to a bug in pbr's hash comparison logic, this can trigger dnsmasq to restart two or three times in quick succession rather than once. This matters because procd will stop respawning dnsmasq after 5 unexpected exits within an hour. If pbr is started or restarted repeatedly, for example during WAN failover, dnsmasq can hit that limit and stop running entirely.
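The respawn limit comes from procd's default parameters. As a sketch of how a typical OpenWrt init script configures this (illustrative, not dnsmasq's actual init script):

```
start_service() {
    procd_open_instance
    # -k keeps dnsmasq in the foreground so procd can supervise it
    procd_set_param command /usr/sbin/dnsmasq -k
    # respawn <threshold> <timeout> <retry>: with the defaults, procd
    # gives up after 5 rapid crashes (exits less than 3600 s apart),
    # waiting 5 s between restart attempts
    procd_set_param respawn 3600 5 5
    procd_close_instance
}
```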

The bug is that pbr stores a hash of its dnsmasq config file in a shell variable to detect whether a restart is needed. When pbr handles an interface reload event (via the 70-pbr hotplug script), it spawns a fresh process, in which that variable is of course always empty. The hash comparison therefore never matches and always triggers a dnsmasq restart, even if the config hasn't changed.
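A minimal sketch of one way to fix this pattern, not pbr's actual code: all names here (HASH_FILE, needs_dnsmasq_restart) are hypothetical. Persisting the hash to a file instead of a shell variable lets a freshly spawned hotplug process see the value recorded by the previous one.

```shell
HASH_FILE="/tmp/pbr_dnsmasq.hash"
CONF_FILE="/tmp/pbr_test.conf"

needs_dnsmasq_restart() {
    new_hash="$(md5sum "$CONF_FILE" 2>/dev/null | cut -d' ' -f1)"
    old_hash="$(cat "$HASH_FILE" 2>/dev/null)"
    [ "$new_hash" = "$old_hash" ] && return 1   # unchanged: skip the restart
    printf '%s' "$new_hash" > "$HASH_FILE"      # remember for the next process
    return 0                                    # changed: restart dnsmasq
}

rm -f "$HASH_FILE"
echo 'test config' > "$CONF_FILE"
first="$(needs_dnsmasq_restart && echo restart || echo skip)"
second="$(needs_dnsmasq_restart && echo restart || echo skip)"
echo "first=$first second=$second"   # prints: first=restart second=skip
```

With a plain variable, every fresh process takes the "restart" branch; with the file, only a genuine config change does.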

Try to avoid restarting pbr repeatedly in quick succession. Your script is the best workaround until the bug is fixed.

mwan3 version 3.2 is up for testing

Implements mwan3's iptables-equivalent non-destructive masked connmark save/restore via a new vmap-dispatch architecture, and moves the base-chain priority back to its architecturally correct mangle + 1 placement. This change allows mwan3 to be injected into the mangle chains in an order-independent fashion.
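Condensed from the full chain listing posted later in this thread, the dispatch looks roughly like this (only two of the vmap entries shown). My reading of it: since an nft rule can't OR the bits of one register (ct mark) into another (meta mark) in a single expression, a vmap on the masked connmark jumps to a tiny per-value chain that ORs the matching constant into the packet mark, restoring the saved bits without touching any others.

```
chain mwan3_prerouting {
    type filter hook prerouting priority mangle + 1; policy accept;
    # restore saved connmark bits into packets not yet marked by mwan3
    meta mark & 0x00003f00 == 0x00000000 ct mark & 0x00003f00 vmap { 0x00000100 : jump mwan3_or_meta_0x100, 0x00000200 : jump mwan3_or_meta_0x200 }
}

chain mwan3_or_meta_0x100 {
    # OR the constant in, leaving all other mark bits intact
    meta mark set meta mark | 0x00000100
}
```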

Adds an opt-in IPv6 SNAT enhancement via the new per-interface snat6 UCI option, addressing the router-originated, mark-rerouted, wrong-source-address failure mode for IPv6.
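If I've read the release note right, enabling it should be a one-line addition per interface section; the option name is from the note above, but the boolean value '1' is my assumption:

```
# /etc/config/mwan3 (sketch)
config interface 'wan'
        option enabled '1'
        option snat6 '1'
```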

See the changelog for further details. The documentation has been updated substantially to cover the new connmark mechanism.


Small addition: the 70-pbr hotplug script is no longer used by pbr.
Interface reloading is handled with procd interface triggers nowadays :slight_smile:

I've tried the latest version for six hours, and everything seems to work great with pbr: routing is consistent and dnsmasq hasn't broken like before. I'll report if anything comes up again. Thanks for the fix.

And about this:

I've tried setting up one rule to route traffic through wireguard, and the outcome is the same as with the iptables version: traffic can't reach wireguard via the mwan3 rule.

config interface 'wgid'
        option proto 'wireguard'
        option private_key 'redacted='
        list addresses '10.5.0.2/16'
        list dns '1.1.1.1'
        option multipath 'off'
        option metric '90'      <------------assign the metric
root@ax6000:~# cat /etc/config/dhcp 
config ipset
        list name 'routecheckid'
        list domain 'browserleaks.com'
        option table_family 'inet'
root@ax6000:~# cat /etc/config/firewall
config ipset
        option name 'routecheckid'
        option family 'ipv4'
        list match 'dest_ip'
root@ax6000:~# cat /etc/config/mwan3
config interface 'wgid'
        option enabled '1'
        option initial_state 'online'
        option family 'ipv4'
        list track_ip '1.1.1.1'
        list track_ip '9.9.9.9'
        option track_method 'ping'
        option reliability '1'
        option count '1'
        option size '56'
        option max_ttl '60'
        option timeout '4'
        option interval '10'
        option failure_interval '5'
        option recovery_interval '5'
        option down '5'
        option up '5'

config member 'wgid_m1'
        option interface 'wgid'
        option metric '1'
        option weight '1'

config policy 'wgid_policy'
        list use_member 'wgid_m1'
        option last_resort 'unreachable'

config rule 'routecheck_wgid'
        option family 'ipv4'
        option proto 'all'
        option sticky '0'
        option ipset 'routecheckid'
        option use_policy 'wgid_policy'
Interface status:
 interface wan is online and tracking is active (online 00h:01m:41s, uptime 00h:42m:13s)
 interface wan2 is online and tracking is active (online 00h:01m:41s, uptime 00h:36m:53s)
 interface wgid is online and tracking is active (online 00h:01m:41s, uptime 00h:41m:45s)

Current policies:
balanced:
 wan2 (50%)
 wan (50%)
wansatuaja:
 wan
wanduaaja:
 wan2
wgid_policy:
 wgid

Directly connected ipv4 networks:
 [redacted]

Directly connected ipv6 networks:
fe80::

Active user rules:
 ip daddr @routecheckid meta mark & 0x00003f00 == 0x00000000 - wgid_policy
 [redacted rules]
 tcp dport 443 meta mark & 0x00003f00 == 0x00000000 S https
 ip daddr 0.0.0.0/0 meta mark & 0x00003f00 == 0x00000000 - balanced

The Test

root@ax6000:~# traceroute browserleaks.com
traceroute to browserleaks.com (104.236.69.55), 30 hops max, 46 byte packets
 1  traceroute: sendto: Network unreachable

root@ax6000:~# ping browserleaks.com
PING browserleaks.com (104.236.69.55): 56 data bytes
ping: sendto: Network unreachable

root@ax6000:~# curl -v browserleaks.com
^C

Just some cosmetics…

installation errors

root@w0wkinXXXNETXXprimary:~# (2/2) Installing mwan3 (3.2-r1)

  • type route hook output priority mangle + 1; policy accept;
    

-ash: syntax error: unexpected word
root@w0wkinXXXNETXXprimary:~# Installing file to etc/config/mwan3.apk-new
-ash: Installing: not found
root@w0wkinXXXNETXXprimary:~# Executing mwan3-3.2-r1.post-install
-ash: Executing: not found
root@w0wkinXXXNETXXprimary:~# * In file included from /dev/stdin:384:2-57:
-ash: luci-app-mwan3-26.093.05533~4fe9221.apk: not found
root@w0wkinXXXNETXXprimary:~# * /usr/share/nftables.d/table-post/10-mwan3.nft:
49:2-48: Error: Chain "mwan3_prerouting" already exists in table inet 'fw4' with
different declaration
-ash: luci-app-mwan3-26.093.05533~4fe9221.apk: not found
root@w0wkinXXXNETXXprimary:~# * type filter hook prerouting priority mangl
e + 1; policy accept;
-ash: luci-app-mwan3-26.093.05533~4fe9221.apk: not found
-ash: policy: not found

This status display issue you reported (and deleted) was a genuine bug. I've fixed it and substantially enhanced the LuCI Status menu app as well, which I will release shortly.

Lots of things going on here. Hard to understand all without commentary from you.

  1. Looks like apk's stdout is being fed back into the shell as input somehow? Some kind of terminal issue? Even the first line is a comment being echoed back into the shell. What came before?
  2. The nft error: maybe you had a running mwan3 instance (possibly already v3.2 from a previous install attempt?) that left mwan3_prerouting alive in fw4, and fw4 -q reload in the postinst failed silently. The chain exists, so nft rejects loading 10-mwan3.nft. Run fw4 reload manually and then mwan3 restart to recover.
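For reference, the recovery sequence from point 2, run on the router (this assumes the mwan3 CLI wrapper is on PATH, as it is with a normal package install):

```
fw4 reload        # rebuild the fw4 table, clearing the stale chain declaration
mwan3 restart     # then re-inject mwan3's chains on top of it
```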

Are you installing my pre-packaged apks or are you building your own?

Can you give me some more background here to make it easier to interpret this output?

EDIT: looks like I broke the Makefile for 3.2 on the GitHub repo by pushing one with a wrong postinst. I've fixed it and force-pushed, so you will need to re-clone the repo rather than pull.

I will push a new version 3.2.1 later today, so you might want to wait for that one.

mwan3 version 3.2.1 and luci-app-mwan3 version 3.2.1 up for testing

Fixes a dnsmasq SIGHUP race that could kill dnsmasq during fw4-reload recovery, corrects policy status reporting to show all configured members with traffic-share percentages, and shows the installed mwan3 package version in mwan3's internal output. The v3.2 postinst chain-cleanup migration was also corrected: the original v3.2 tag had the upgrade direction backwards for users coming from v3.1.4; the v3.2-1 tag has been updated.

The luci-app-mwan3 status pages have been substantially redesigned. The Overview tab now shows interface status cards in a flex layout alongside a full policies table with per-member traffic share percentages and a rules summary. The Status tab replaces the previous static cards with per-interface tracking health panels showing probe IP status, tracking mode and score. The Troubleshooting tab replaces the raw text dump with collapsible per-section panels, adds IPv6 diagnostic output alongside IPv4, and filters vmap-dispatch boilerplate from the nftables listing.


Deleted because my config was not consistent at the time, so I decided not to mess everything up, but I am glad that you sorted things out. Thanks.

Yep, building my own. I'll test the latest version later and confirm whether everything is good.

The 3.2 builds, including the latest one, seem to introduce some strange instability. Pages load slowly, and some sites appear broken. For example, YouTube may load normally, show only a gray page, or get stuck halfway through loading. I have IPv6 enabled; could this be related to the new features? My PCs are set to prefer IPv6. This time I used a pre-built apk from your repository.

Broken OpenWrt page. I have seen similar behavior when MTU/MSS was broken.

Anonymized mwan3 output, if it helps:

Software-Version

mwan3 - 2.12.0-r3

Output of "ip -4 a show"

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
inet 10.10.100.101/24 brd 10.10.100.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.10.100.155/24 scope global secondary proto keepalived eth0:vip
valid_lft forever preferred_lft forever
7: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
inet 10.10.88.1/24 brd 10.10.88.255 scope global eth5
valid_lft forever preferred_lft forever
12: bond0.87@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
inet 10.10.87.1/24 brd 10.10.87.255 scope global bond0.87
valid_lft forever preferred_lft forever
inet 10.10.90.136/32 scope global proto keepalived bond0.87:vip
valid_lft forever preferred_lft forever
inet 10.10.87.5/24 scope global secondary proto keepalived bond0.87:vip
valid_lft forever preferred_lft forever
13: br-lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
inet 10.10.77.1/24 brd 10.10.77.255 scope global br-lan
valid_lft forever preferred_lft forever
inet 10.10.90.1/24 brd 10.10.90.255 scope global br-lan
valid_lft forever preferred_lft forever
inet 10.10.77.5/24 scope global secondary proto keepalived br-lan:vip
valid_lft forever preferred_lft forever
inet 10.10.90.5/24 scope global secondary proto keepalived br-lan:vip
valid_lft forever preferred_lft forever
15: wireguard: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
inet 10.10.6.1/24 brd 10.10.6.255 scope global wireguard
valid_lft forever preferred_lft forever
17: pppoe-wan: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1492 qdisc cake state UNKNOWN group default qlen 3
inet 198.51.100.47 peer 203.0.113.248/32 scope global pppoe-wan
valid_lft forever preferred_lft forever

Output of "ip -4 route show"

default via 203.0.113.248 dev pppoe-wan proto static metric 10
default via 10.10.100.1 dev eth0 proto static metric 20
10.10.77.0/24 dev br-lan proto kernel scope link src 10.10.77.1
10.10.87.0/24 dev bond0.87 proto kernel scope link src 10.10.87.1
10.10.88.0/24 dev eth5 proto kernel scope link src 10.10.88.1
10.10.90.0/24 dev br-lan proto kernel scope link src 10.10.90.1
10.10.100.0/24 dev eth0 proto static scope link metric 20
10.10.6.0/24 dev wireguard proto kernel scope link src 10.10.6.1
203.0.113.248 dev pppoe-wan proto kernel scope link src 198.51.100.47

Output of "ip -4 rule show"

0: from all lookup local
1001: from all iif pppoe-wan lookup 1
1002: from all iif eth0 lookup 2
2001: from all fwmark 0x100/0x3f00 lookup 1
2002: from all fwmark 0x200/0x3f00 lookup 2
2061: from all fwmark 0x3d00/0x3f00 blackhole
2062: from all fwmark 0x3e00/0x3f00 unreachable
3001: from all fwmark 0x100/0x3f00 unreachable
3002: from all fwmark 0x200/0x3f00 unreachable
32766: from all lookup main
32767: from all lookup default

Output of "ip -4 route list table 1-250"

Routing table 1:
default via 203.0.113.248 dev pppoe-wan proto static metric 10
10.10.77.0/24 dev br-lan proto kernel scope link src 10.10.77.1
10.10.87.0/24 dev bond0.87 proto kernel scope link src 10.10.87.1
10.10.88.0/24 dev eth5 proto kernel scope link src 10.10.88.1
10.10.90.0/24 dev br-lan proto kernel scope link src 10.10.90.1
10.10.6.0/24 dev wireguard proto kernel scope link src 10.10.6.1
203.0.113.248 dev pppoe-wan proto kernel scope link src 198.51.100.47

Routing table 2:
default via 10.10.100.1 dev eth0 proto static metric 20
10.10.77.0/24 dev br-lan proto kernel scope link src 10.10.77.1
10.10.87.0/24 dev bond0.87 proto kernel scope link src 10.10.87.1
10.10.88.0/24 dev eth5 proto kernel scope link src 10.10.88.1
10.10.90.0/24 dev br-lan proto kernel scope link src 10.10.90.1
10.10.100.0/24 dev eth0 proto static scope link metric 20
10.10.6.0/24 dev wireguard proto kernel scope link src 10.10.6.1
203.0.113.248 dev pppoe-wan proto unspec scope link src 198.51.100.47

Output of "nft list table inet fw4" (mwan3 chains)

chain mwan3_ifaces_in {
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_iface_in_wan
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_iface_in_wanb
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_iface_in_wan_6
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_iface_in_wanb_6
}

chain mwan3_rules {
    ip daddr 0.0.0.0/0 meta mark & 0x00003f00 == 0x00000000 jump mwan3_policy_wan_wanb
    ip6 daddr ::/0 meta mark & 0x00003f00 == 0x00000000 jump mwan3_policy_wan_6_wanb_6
}

chain mwan3_connected {
    ip daddr @mwan3_connected_v4 meta mark set meta mark | 0x00003f00 return
    ip6 daddr @mwan3_connected_v6 meta mark set meta mark | 0x00003f00 return
}

chain mwan3_custom {
    ip daddr @mwan3_custom_v4 meta mark set meta mark | 0x00003f00 return
    ip6 daddr @mwan3_custom_v6 meta mark set meta mark | 0x00003f00 return
}

chain mwan3_dynamic {
    ip daddr @mwan3_dynamic_v4 meta mark set meta mark | 0x00003f00 return
    ip6 daddr @mwan3_dynamic_v6 meta mark set meta mark | 0x00003f00 return
}

chain upnp_forward {
}

chain upnp_prerouting {
}

chain upnp_postrouting {
}

chain mwan3_policy_wan_only {
    meta nfproto ipv4 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xffffc1ff | 0x00000100
    meta nfproto ipv6 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xffffc3ff | 0x00000300
    meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xfffffeff | 0x00003e00
}

chain mwan3_policy_wanb_only {
    meta nfproto ipv4 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xffffc2ff | 0x00000200
    meta nfproto ipv6 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xffffc4ff | 0x00000400
    meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xfffffeff | 0x00003e00
}

chain mwan3_policy_wan_wanb {
    meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xffffc1ff | 0x00000100
    meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xfffffeff | 0x00003e00
}

chain mwan3_policy_wanb_wan {
    meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xffffc2ff | 0x00000200
    meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xfffffeff | 0x00003e00
}

chain mwan3_policy_wan_6_wanb_6 {
    meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xffffc3ff | 0x00000300
    meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xfffffeff | 0x00003e00
}

chain mwan3_iface_in_wanb {
    iifname "eth0" ip saddr @mwan3_connected_v4 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "eth0" ip saddr @mwan3_custom_v4 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "eth0" ip saddr @mwan3_dynamic_v4 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "eth0" meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xffffc2ff | 0x00000200
}

chain mwan3_iface_in_wan {
    iifname "pppoe-wan" ip saddr @mwan3_connected_v4 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "pppoe-wan" ip saddr @mwan3_custom_v4 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "pppoe-wan" ip saddr @mwan3_dynamic_v4 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "pppoe-wan" meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xffffc1ff | 0x00000100
}

chain mwan3_iface_in_wan_6 {
    iifname "pppoe-wan" ip6 saddr @mwan3_connected_v6 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "pppoe-wan" ip6 saddr @mwan3_custom_v6 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "pppoe-wan" ip6 saddr @mwan3_dynamic_v6 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "pppoe-wan" meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xffffc3ff | 0x00000300
}

chain mwan3_iface_in_wanb_6 {
    iifname "eth0" ip6 saddr @mwan3_connected_v6 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "eth0" ip6 saddr @mwan3_custom_v6 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "eth0" ip6 saddr @mwan3_dynamic_v6 meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark | 0x00003f00
    iifname "eth0" meta mark & 0x00003f00 == 0x00000000 meta mark set meta mark & 0xffffc4ff | 0x00000400
}

chain mwan3_prerouting {
    type filter hook prerouting priority mangle + 1; policy accept;
    icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, nd-redirect } accept
    meta mark & 0x00003f00 == 0x00000000 ct mark & 0x00003f00 vmap { 0x00000100 : jump mwan3_or_meta_0x100, 0x00000200 : jump mwan3_or_meta_0x200, 0x00000300 : jump mwan3_or_meta_0x300, 0x00000400 : jump mwan3_or_meta_0x400, 0x00000500 : jump mwan3_or_meta_0x500, 0x00000600 : jump mwan3_or_meta_0x600, 0x00000700 : jump mwan3_or_meta_0x700, 0x00000800 : jump mwan3_or_meta_0x800, 0x00000900 : jump mwan3_or_meta_0x900, 0x00000a00 : jump mwan3_or_meta_0xa00, 0x00000b00 : jump mwan3_or_meta_0xb00, 0x00000c00 : jump mwan3_or_meta_0xc00, 0x00000d00 : jump mwan3_or_meta_0xd00, 0x00000e00 : jump mwan3_or_meta_0xe00, 0x00000f00 : jump mwan3_or_meta_0xf00, 0x00001000 : jump mwan3_or_meta_0x1000, 0x00001100 : jump mwan3_or_meta_0x1100, 0x00001200 : jump mwan3_or_meta_0x1200, 0x00001300 : jump mwan3_or_meta_0x1300, 0x00001400 : jump mwan3_or_meta_0x1400, 0x00001500 : jump mwan3_or_meta_0x1500, 0x00001600 : jump mwan3_or_meta_0x1600, 0x00001700 : jump mwan3_or_meta_0x1700, 0x00001800 : jump mwan3_or_meta_0x1800, 0x00001900 : jump mwan3_or_meta_0x1900, 0x00001a00 : jump mwan3_or_meta_0x1a00, 0x00001b00 : jump mwan3_or_meta_0x1b00, 0x00001c00 : jump mwan3_or_meta_0x1c00, 0x00001d00 : jump mwan3_or_meta_0x1d00, 0x00001e00 : jump mwan3_or_meta_0x1e00, 0x00001f00 : jump mwan3_or_meta_0x1f00, 0x00002000 : jump mwan3_or_meta_0x2000, 0x00002100 : jump mwan3_or_meta_0x2100, 0x00002200 : jump mwan3_or_meta_0x2200, 0x00002300 : jump mwan3_or_meta_0x2300, 0x00002400 : jump mwan3_or_meta_0x2400, 0x00002500 : jump mwan3_or_meta_0x2500, 0x00002600 : jump mwan3_or_meta_0x2600, 0x00002700 : jump mwan3_or_meta_0x2700, 0x00002800 : jump mwan3_or_meta_0x2800, 0x00002900 : jump mwan3_or_meta_0x2900, 0x00002a00 : jump mwan3_or_meta_0x2a00, 0x00002b00 : jump mwan3_or_meta_0x2b00, 0x00002c00 : jump mwan3_or_meta_0x2c00, 0x00002d00 : jump mwan3_or_meta_0x2d00, 0x00002e00 : jump mwan3_or_meta_0x2e00, 0x00002f00 : jump mwan3_or_meta_0x2f00, 0x00003000 : jump mwan3_or_meta_0x3000, 0x00003100 : jump 
mwan3_or_meta_0x3100, 0x00003200 : jump mwan3_or_meta_0x3200, 0x00003300 : jump mwan3_or_meta_0x3300, 0x00003400 : jump mwan3_or_meta_0x3400, 0x00003500 : jump mwan3_or_meta_0x3500, 0x00003600 : jump mwan3_or_meta_0x3600, 0x00003700 : jump mwan3_or_meta_0x3700, 0x00003800 : jump mwan3_or_meta_0x3800, 0x00003900 : jump mwan3_or_meta_0x3900, 0x00003a00 : jump mwan3_or_meta_0x3a00, 0x00003b00 : jump mwan3_or_meta_0x3b00, 0x00003c00 : jump mwan3_or_meta_0x3c00, 0x00003d00 : jump mwan3_or_meta_0x3d00, 0x00003e00 : jump mwan3_or_meta_0x3e00, 0x00003f00 : jump mwan3_or_meta_0x3f00 }
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_ifaces_in
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_custom
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_connected
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_dynamic
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_rules
    ct mark set ct mark & 0xffffc0ff
    meta mark & 0x00003f00 vmap { 0x00000100 : jump mwan3_or_ct_0x100, 0x00000200 : jump mwan3_or_ct_0x200, 0x00000300 : jump mwan3_or_ct_0x300, 0x00000400 : jump mwan3_or_ct_0x400, 0x00000500 : jump mwan3_or_ct_0x500, 0x00000600 : jump mwan3_or_ct_0x600, 0x00000700 : jump mwan3_or_ct_0x700, 0x00000800 : jump mwan3_or_ct_0x800, 0x00000900 : jump mwan3_or_ct_0x900, 0x00000a00 : jump mwan3_or_ct_0xa00, 0x00000b00 : jump mwan3_or_ct_0xb00, 0x00000c00 : jump mwan3_or_ct_0xc00, 0x00000d00 : jump mwan3_or_ct_0xd00, 0x00000e00 : jump mwan3_or_ct_0xe00, 0x00000f00 : jump mwan3_or_ct_0xf00, 0x00001000 : jump mwan3_or_ct_0x1000, 0x00001100 : jump mwan3_or_ct_0x1100, 0x00001200 : jump mwan3_or_ct_0x1200, 0x00001300 : jump mwan3_or_ct_0x1300, 0x00001400 : jump mwan3_or_ct_0x1400, 0x00001500 : jump mwan3_or_ct_0x1500, 0x00001600 : jump mwan3_or_ct_0x1600, 0x00001700 : jump mwan3_or_ct_0x1700, 0x00001800 : jump mwan3_or_ct_0x1800, 0x00001900 : jump mwan3_or_ct_0x1900, 0x00001a00 : jump mwan3_or_ct_0x1a00, 0x00001b00 : jump mwan3_or_ct_0x1b00, 0x00001c00 : jump mwan3_or_ct_0x1c00, 0x00001d00 : jump mwan3_or_ct_0x1d00, 0x00001e00 : jump mwan3_or_ct_0x1e00, 0x00001f00 : jump mwan3_or_ct_0x1f00, 0x00002000 : jump mwan3_or_ct_0x2000, 0x00002100 : jump mwan3_or_ct_0x2100, 0x00002200 : jump mwan3_or_ct_0x2200, 0x00002300 : jump mwan3_or_ct_0x2300, 0x00002400 : jump mwan3_or_ct_0x2400, 0x00002500 : jump mwan3_or_ct_0x2500, 0x00002600 : jump mwan3_or_ct_0x2600, 0x00002700 : jump mwan3_or_ct_0x2700, 0x00002800 : jump mwan3_or_ct_0x2800, 0x00002900 : jump mwan3_or_ct_0x2900, 0x00002a00 : jump mwan3_or_ct_0x2a00, 0x00002b00 : jump mwan3_or_ct_0x2b00, 0x00002c00 : jump mwan3_or_ct_0x2c00, 0x00002d00 : jump mwan3_or_ct_0x2d00, 0x00002e00 : jump mwan3_or_ct_0x2e00, 0x00002f00 : jump mwan3_or_ct_0x2f00, 0x00003000 : jump mwan3_or_ct_0x3000, 0x00003100 : jump mwan3_or_ct_0x3100, 0x00003200 : jump mwan3_or_ct_0x3200, 0x00003300 : jump mwan3_or_ct_0x3300, 0x00003400 : jump mwan3_or_ct_0x3400, 
0x00003500 : jump mwan3_or_ct_0x3500, 0x00003600 : jump mwan3_or_ct_0x3600, 0x00003700 : jump mwan3_or_ct_0x3700, 0x00003800 : jump mwan3_or_ct_0x3800, 0x00003900 : jump mwan3_or_ct_0x3900, 0x00003a00 : jump mwan3_or_ct_0x3a00, 0x00003b00 : jump mwan3_or_ct_0x3b00, 0x00003c00 : jump mwan3_or_ct_0x3c00, 0x00003d00 : jump mwan3_or_ct_0x3d00, 0x00003e00 : jump mwan3_or_ct_0x3e00, 0x00003f00 : jump mwan3_or_ct_0x3f00 }
    meta mark & 0x00003f00 != 0x00003f00 jump mwan3_custom
    meta mark & 0x00003f00 != 0x00003f00 jump mwan3_connected
    meta mark & 0x00003f00 != 0x00003f00 jump mwan3_dynamic
}

chain mwan3_output {
    type route hook output priority mangle + 1; policy accept;
    meta mark & 0x00003f00 == 0x00000000 ct mark & 0x00003f00 vmap { 0x00000100 : jump mwan3_or_meta_0x100, 0x00000200 : jump mwan3_or_meta_0x200, 0x00000300 : jump mwan3_or_meta_0x300, 0x00000400 : jump mwan3_or_meta_0x400, 0x00000500 : jump mwan3_or_meta_0x500, 0x00000600 : jump mwan3_or_meta_0x600, 0x00000700 : jump mwan3_or_meta_0x700, 0x00000800 : jump mwan3_or_meta_0x800, 0x00000900 : jump mwan3_or_meta_0x900, 0x00000a00 : jump mwan3_or_meta_0xa00, 0x00000b00 : jump mwan3_or_meta_0xb00, 0x00000c00 : jump mwan3_or_meta_0xc00, 0x00000d00 : jump mwan3_or_meta_0xd00, 0x00000e00 : jump mwan3_or_meta_0xe00, 0x00000f00 : jump mwan3_or_meta_0xf00, 0x00001000 : jump mwan3_or_meta_0x1000, 0x00001100 : jump mwan3_or_meta_0x1100, 0x00001200 : jump mwan3_or_meta_0x1200, 0x00001300 : jump mwan3_or_meta_0x1300, 0x00001400 : jump mwan3_or_meta_0x1400, 0x00001500 : jump mwan3_or_meta_0x1500, 0x00001600 : jump mwan3_or_meta_0x1600, 0x00001700 : jump mwan3_or_meta_0x1700, 0x00001800 : jump mwan3_or_meta_0x1800, 0x00001900 : jump mwan3_or_meta_0x1900, 0x00001a00 : jump mwan3_or_meta_0x1a00, 0x00001b00 : jump mwan3_or_meta_0x1b00, 0x00001c00 : jump mwan3_or_meta_0x1c00, 0x00001d00 : jump mwan3_or_meta_0x1d00, 0x00001e00 : jump mwan3_or_meta_0x1e00, 0x00001f00 : jump mwan3_or_meta_0x1f00, 0x00002000 : jump mwan3_or_meta_0x2000, 0x00002100 : jump mwan3_or_meta_0x2100, 0x00002200 : jump mwan3_or_meta_0x2200, 0x00002300 : jump mwan3_or_meta_0x2300, 0x00002400 : jump mwan3_or_meta_0x2400, 0x00002500 : jump mwan3_or_meta_0x2500, 0x00002600 : jump mwan3_or_meta_0x2600, 0x00002700 : jump mwan3_or_meta_0x2700, 0x00002800 : jump mwan3_or_meta_0x2800, 0x00002900 : jump mwan3_or_meta_0x2900, 0x00002a00 : jump mwan3_or_meta_0x2a00, 0x00002b00 : jump mwan3_or_meta_0x2b00, 0x00002c00 : jump mwan3_or_meta_0x2c00, 0x00002d00 : jump mwan3_or_meta_0x2d00, 0x00002e00 : jump mwan3_or_meta_0x2e00, 0x00002f00 : jump mwan3_or_meta_0x2f00, 0x00003000 : jump mwan3_or_meta_0x3000, 0x00003100 : jump 
mwan3_or_meta_0x3100, 0x00003200 : jump mwan3_or_meta_0x3200, 0x00003300 : jump mwan3_or_meta_0x3300, 0x00003400 : jump mwan3_or_meta_0x3400, 0x00003500 : jump mwan3_or_meta_0x3500, 0x00003600 : jump mwan3_or_meta_0x3600, 0x00003700 : jump mwan3_or_meta_0x3700, 0x00003800 : jump mwan3_or_meta_0x3800, 0x00003900 : jump mwan3_or_meta_0x3900, 0x00003a00 : jump mwan3_or_meta_0x3a00, 0x00003b00 : jump mwan3_or_meta_0x3b00, 0x00003c00 : jump mwan3_or_meta_0x3c00, 0x00003d00 : jump mwan3_or_meta_0x3d00, 0x00003e00 : jump mwan3_or_meta_0x3e00, 0x00003f00 : jump mwan3_or_meta_0x3f00 }
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_ifaces_in
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_custom
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_connected
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_dynamic
    meta mark & 0x00003f00 == 0x00000000 jump mwan3_rules
    ct mark set ct mark & 0xffffc0ff
    meta mark & 0x00003f00 vmap { 0x00000100 : jump mwan3_or_ct_0x100, 0x00000200 : jump mwan3_or_ct_0x200, 0x00000300 : jump mwan3_or_ct_0x300, 0x00000400 : jump mwan3_or_ct_0x400, 0x00000500 : jump mwan3_or_ct_0x500, 0x00000600 : jump mwan3_or_ct_0x600, 0x00000700 : jump mwan3_or_ct_0x700, 0x00000800 : jump mwan3_or_ct_0x800, 0x00000900 : jump mwan3_or_ct_0x900, 0x00000a00 : jump mwan3_or_ct_0xa00, 0x00000b00 : jump mwan3_or_ct_0xb00, 0x00000c00 : jump mwan3_or_ct_0xc00, 0x00000d00 : jump mwan3_or_ct_0xd00, 0x00000e00 : jump mwan3_or_ct_0xe00, 0x00000f00 : jump mwan3_or_ct_0xf00, 0x00001000 : jump mwan3_or_ct_0x1000, 0x00001100 : jump mwan3_or_ct_0x1100, 0x00001200 : jump mwan3_or_ct_0x1200, 0x00001300 : jump mwan3_or_ct_0x1300, 0x00001400 : jump mwan3_or_ct_0x1400, 0x00001500 : jump mwan3_or_ct_0x1500, 0x00001600 : jump mwan3_or_ct_0x1600, 0x00001700 : jump mwan3_or_ct_0x1700, 0x00001800 : jump mwan3_or_ct_0x1800, 0x00001900 : jump mwan3_or_ct_0x1900, 0x00001a00 : jump mwan3_or_ct_0x1a00, 0x00001b00 : jump mwan3_or_ct_0x1b00, 0x00001c00 : jump mwan3_or_ct_0x1c00, 0x00001d00 : jump mwan3_or_ct_0x1d00, 0x00001e00 : jump mwan3_or_ct_0x1e00, 0x00001f00 : jump mwan3_or_ct_0x1f00, 0x00002000 : jump mwan3_or_ct_0x2000, 0x00002100 : jump mwan3_or_ct_0x2100, 0x00002200 : jump mwan3_or_ct_0x2200, 0x00002300 : jump mwan3_or_ct_0x2300, 0x00002400 : jump mwan3_or_ct_0x2400, 0x00002500 : jump mwan3_or_ct_0x2500, 0x00002600 : jump mwan3_or_ct_0x2600, 0x00002700 : jump mwan3_or_ct_0x2700, 0x00002800 : jump mwan3_or_ct_0x2800, 0x00002900 : jump mwan3_or_ct_0x2900, 0x00002a00 : jump mwan3_or_ct_0x2a00, 0x00002b00 : jump mwan3_or_ct_0x2b00, 0x00002c00 : jump mwan3_or_ct_0x2c00, 0x00002d00 : jump mwan3_or_ct_0x2d00, 0x00002e00 : jump mwan3_or_ct_0x2e00, 0x00002f00 : jump mwan3_or_ct_0x2f00, 0x00003000 : jump mwan3_or_ct_0x3000, 0x00003100 : jump mwan3_or_ct_0x3100, 0x00003200 : jump mwan3_or_ct_0x3200, 0x00003300 : jump mwan3_or_ct_0x3300, 0x00003400 : jump mwan3_or_ct_0x3400, 
0x00003500 : jump mwan3_or_ct_0x3500, 0x00003600 : jump mwan3_or_ct_0x3600, 0x00003700 : jump mwan3_or_ct_0x3700, 0x00003800 : jump mwan3_or_ct_0x3800, 0x00003900 : jump mwan3_or_ct_0x3900, 0x00003a00 : jump mwan3_or_ct_0x3a00, 0x00003b00 : jump mwan3_or_ct_0x3b00, 0x00003c00 : jump mwan3_or_ct_0x3c00, 0x00003d00 : jump mwan3_or_ct_0x3d00, 0x00003e00 : jump mwan3_or_ct_0x3e00, 0x00003f00 : jump mwan3_or_ct_0x3f00 }
    meta mark & 0x00003f00 != 0x00003f00 jump mwan3_custom
    meta mark & 0x00003f00 != 0x00003f00 jump mwan3_connected
    meta mark & 0x00003f00 != 0x00003f00 jump mwan3_dynamic
}

chain mwan3_postrouting {
    type nat hook postrouting priority srcnat - 1; policy accept;
}

chain mwan3_or_meta_0x100 {
    meta mark set meta mark | 0x00000100
}

chain mwan3_or_ct_0x100 {
    ct mark set ct mark | 0x00000100
}

chain mwan3_or_meta_0x200 {
    meta mark set meta mark | 0x00000200
}

chain mwan3_or_ct_0x200 {
    ct mark set ct mark | 0x00000200
}

chain mwan3_or_meta_0x300 {
    meta mark set meta mark | 0x00000300
}

chain mwan3_or_ct_0x300 {
    ct mark set ct mark | 0x00000300
}

chain mwan3_or_meta_0x400 {
    meta mark set meta mark | 0x00000400
}

chain mwan3_or_ct_0x400 {
    ct mark set ct mark | 0x00000400
}

chain mwan3_or_meta_0x500 {
    meta mark set meta mark | 0x00000500
}

chain mwan3_or_ct_0x500 {
    ct mark set ct mark | 0x00000500
}

chain mwan3_or_meta_0x600 {
    meta mark set meta mark | 0x00000600
}

chain mwan3_or_ct_0x600 {
    ct mark set ct mark | 0x00000600
}

chain mwan3_or_meta_0x700 {
    meta mark set meta mark | 0x00000700
}

chain mwan3_or_ct_0x700 {
    ct mark set ct mark | 0x00000700
}

chain mwan3_or_meta_0x800 {
    meta mark set meta mark | 0x00000800
}

chain mwan3_or_ct_0x800 {
    ct mark set ct mark | 0x00000800
}

chain mwan3_or_meta_0x900 {
    meta mark set meta mark | 0x00000900
}

chain mwan3_or_ct_0x900 {
    ct mark set ct mark | 0x00000900
}

chain mwan3_or_meta_0xa00 {
    meta mark set meta mark | 0x00000a00
}

chain mwan3_or_ct_0xa00 {
    ct mark set ct mark | 0x00000a00
}

chain mwan3_or_meta_0xb00 {
    meta mark set meta mark | 0x00000b00
}

chain mwan3_or_ct_0xb00 {
    ct mark set ct mark | 0x00000b00
}

chain mwan3_or_meta_0xc00 {
    meta mark set meta mark | 0x00000c00
}

chain mwan3_or_ct_0xc00 {
    ct mark set ct mark | 0x00000c00
}

chain mwan3_or_meta_0xd00 {
    meta mark set meta mark | 0x00000d00
}

chain mwan3_or_ct_0xd00 {
    ct mark set ct mark | 0x00000d00
}

chain mwan3_or_meta_0xe00 {
    meta mark set meta mark | 0x00000e00
}

chain mwan3_or_ct_0xe00 {
    ct mark set ct mark | 0x00000e00
}

chain mwan3_or_meta_0xf00 {
    meta mark set meta mark | 0x00000f00
}

chain mwan3_or_ct_0xf00 {
    ct mark set ct mark | 0x00000f00
}

chain mwan3_or_meta_0x1000 {
    meta mark set meta mark | 0x00001000
}

chain mwan3_or_ct_0x1000 {
    ct mark set ct mark | 0x00001000
}

chain mwan3_or_meta_0x1100 {
    meta mark set meta mark | 0x00001100
}

chain mwan3_or_ct_0x1100 {
    ct mark set ct mark | 0x00001100
}

chain mwan3_or_meta_0x1200 {
    meta mark set meta mark | 0x00001200
}

chain mwan3_or_ct_0x1200 {
    ct mark set ct mark | 0x00001200
}

chain mwan3_or_meta_0x1300 {
    meta mark set meta mark | 0x00001300
}

chain mwan3_or_ct_0x1300 {
    ct mark set ct mark | 0x00001300
}

chain mwan3_or_meta_0x1400 {
    meta mark set meta mark | 0x00001400
}

chain mwan3_or_ct_0x1400 {
    ct mark set ct mark | 0x00001400
}

chain mwan3_or_meta_0x1500 {
    meta mark set meta mark | 0x00001500
}

chain mwan3_or_ct_0x1500 {
    ct mark set ct mark | 0x00001500
}

chain mwan3_or_meta_0x1600 {
    meta mark set meta mark | 0x00001600
}

chain mwan3_or_ct_0x1600 {
    ct mark set ct mark | 0x00001600
}

chain mwan3_or_meta_0x1700 {
    meta mark set meta mark | 0x00001700
}

chain mwan3_or_ct_0x1700 {
    ct mark set ct mark | 0x00001700
}

chain mwan3_or_meta_0x1800 {
    meta mark set meta mark | 0x00001800
}

chain mwan3_or_ct_0x1800 {
    ct mark set ct mark | 0x00001800
}

chain mwan3_or_meta_0x1900 {
    meta mark set meta mark | 0x00001900
}

chain mwan3_or_ct_0x1900 {
    ct mark set ct mark | 0x00001900
}

chain mwan3_or_meta_0x1a00 {
    meta mark set meta mark | 0x00001a00
}

chain mwan3_or_ct_0x1a00 {
    ct mark set ct mark | 0x00001a00
}

chain mwan3_or_meta_0x1b00 {
    meta mark set meta mark | 0x00001b00
}

chain mwan3_or_ct_0x1b00 {
    ct mark set ct mark | 0x00001b00
}

chain mwan3_or_meta_0x1c00 {
    meta mark set meta mark | 0x00001c00
}

chain mwan3_or_ct_0x1c00 {
    ct mark set ct mark | 0x00001c00
}

chain mwan3_or_meta_0x1d00 {
    meta mark set meta mark | 0x00001d00
}

chain mwan3_or_ct_0x1d00 {
    ct mark set ct mark | 0x00001d00
}

chain mwan3_or_meta_0x1e00 {
    meta mark set meta mark | 0x00001e00
}

chain mwan3_or_ct_0x1e00 {
    ct mark set ct mark | 0x00001e00
}

chain mwan3_or_meta_0x1f00 {
    meta mark set meta mark | 0x00001f00
}

chain mwan3_or_ct_0x1f00 {
    ct mark set ct mark | 0x00001f00
}

chain mwan3_or_meta_0x2000 {
    meta mark set meta mark | 0x00002000
}

chain mwan3_or_ct_0x2000 {
    ct mark set ct mark | 0x00002000
}

chain mwan3_or_meta_0x2100 {
    meta mark set meta mark | 0x00002100
}

chain mwan3_or_ct_0x2100 {
    ct mark set ct mark | 0x00002100
}

chain mwan3_or_meta_0x2200 {
    meta mark set meta mark | 0x00002200
}

chain mwan3_or_ct_0x2200 {
    ct mark set ct mark | 0x00002200
}

chain mwan3_or_meta_0x2300 {
    meta mark set meta mark | 0x00002300
}

chain mwan3_or_ct_0x2300 {
    ct mark set ct mark | 0x00002300
}

chain mwan3_or_meta_0x2400 {
    meta mark set meta mark | 0x00002400
}

chain mwan3_or_ct_0x2400 {
    ct mark set ct mark | 0x00002400
}

chain mwan3_or_meta_0x2500 {
    meta mark set meta mark | 0x00002500
}

chain mwan3_or_ct_0x2500 {
    ct mark set ct mark | 0x00002500
}

chain mwan3_or_meta_0x2600 {
    meta mark set meta mark | 0x00002600
}

chain mwan3_or_ct_0x2600 {
    ct mark set ct mark | 0x00002600
}

chain mwan3_or_meta_0x2700 {
    meta mark set meta mark | 0x00002700
}

chain mwan3_or_ct_0x2700 {
    ct mark set ct mark | 0x00002700
}

chain mwan3_or_meta_0x2800 {
    meta mark set meta mark | 0x00002800
}

chain mwan3_or_ct_0x2800 {
    ct mark set ct mark | 0x00002800
}

chain mwan3_or_meta_0x2900 {
    meta mark set meta mark | 0x00002900
}

chain mwan3_or_ct_0x2900 {
    ct mark set ct mark | 0x00002900
}

chain mwan3_or_meta_0x2a00 {
    meta mark set meta mark | 0x00002a00
}

chain mwan3_or_ct_0x2a00 {
    ct mark set ct mark | 0x00002a00
}

chain mwan3_or_meta_0x2b00 {
    meta mark set meta mark | 0x00002b00
}

chain mwan3_or_ct_0x2b00 {
    ct mark set ct mark | 0x00002b00
}

chain mwan3_or_meta_0x2c00 {
    meta mark set meta mark | 0x00002c00
}

chain mwan3_or_ct_0x2c00 {
    ct mark set ct mark | 0x00002c00
}

chain mwan3_or_meta_0x2d00 {
    meta mark set meta mark | 0x00002d00
}

chain mwan3_or_ct_0x2d00 {
    ct mark set ct mark | 0x00002d00
}

chain mwan3_or_meta_0x2e00 {
    meta mark set meta mark | 0x00002e00
}

chain mwan3_or_ct_0x2e00 {
    ct mark set ct mark | 0x00002e00
}

chain mwan3_or_meta_0x2f00 {
    meta mark set meta mark | 0x00002f00
}

chain mwan3_or_ct_0x2f00 {
    ct mark set ct mark | 0x00002f00
}

chain mwan3_or_meta_0x3000 {
    meta mark set meta mark | 0x00003000
}

chain mwan3_or_ct_0x3000 {
    ct mark set ct mark | 0x00003000
}

chain mwan3_or_meta_0x3100 {
    meta mark set meta mark | 0x00003100
}

chain mwan3_or_ct_0x3100 {
    ct mark set ct mark | 0x00003100
}

chain mwan3_or_meta_0x3200 {
    meta mark set meta mark | 0x00003200
}

chain mwan3_or_ct_0x3200 {
    ct mark set ct mark | 0x00003200
}

chain mwan3_or_meta_0x3300 {
    meta mark set meta mark | 0x00003300
}

chain mwan3_or_ct_0x3300 {
    ct mark set ct mark | 0x00003300
}

chain mwan3_or_meta_0x3400 {
    meta mark set meta mark | 0x00003400
}

chain mwan3_or_ct_0x3400 {
    ct mark set ct mark | 0x00003400
}

chain mwan3_or_meta_0x3500 {
    meta mark set meta mark | 0x00003500
}

chain mwan3_or_ct_0x3500 {
    ct mark set ct mark | 0x00003500
}

chain mwan3_or_meta_0x3600 {
    meta mark set meta mark | 0x00003600
}

chain mwan3_or_ct_0x3600 {
    ct mark set ct mark | 0x00003600
}

chain mwan3_or_meta_0x3700 {
    meta mark set meta mark | 0x00003700
}

chain mwan3_or_ct_0x3700 {
    ct mark set ct mark | 0x00003700
}

chain mwan3_or_meta_0x3800 {
    meta mark set meta mark | 0x00003800
}

chain mwan3_or_ct_0x3800 {
    ct mark set ct mark | 0x00003800
}

chain mwan3_or_meta_0x3900 {
    meta mark set meta mark | 0x00003900
}

chain mwan3_or_ct_0x3900 {
    ct mark set ct mark | 0x00003900
}

chain mwan3_or_meta_0x3a00 {
    meta mark set meta mark | 0x00003a00
}

chain mwan3_or_ct_0x3a00 {
    ct mark set ct mark | 0x00003a00
}

chain mwan3_or_meta_0x3b00 {
    meta mark set meta mark | 0x00003b00
}

chain mwan3_or_ct_0x3b00 {
    ct mark set ct mark | 0x00003b00
}

chain mwan3_or_meta_0x3c00 {
    meta mark set meta mark | 0x00003c00
}

chain mwan3_or_ct_0x3c00 {
    ct mark set ct mark | 0x00003c00
}

chain mwan3_or_meta_0x3f00 {
    meta mark set meta mark | 0x00003f00
}

chain mwan3_or_ct_0x3f00 {
    ct mark set ct mark | 0x00003f00
}

chain mwan3_or_meta_0x3d00 {
    meta mark set meta mark | 0x00003d00
}

chain mwan3_or_ct_0x3d00 {
    ct mark set ct mark | 0x00003d00
}

chain mwan3_or_meta_0x3e00 {
    meta mark set meta mark | 0x00003e00
}

chain mwan3_or_ct_0x3e00 {
    ct mark set ct mark | 0x00003e00
}

}

Good news: the visual bug seems to be fixed.

If you turn off mwan3 does the problem go away?

No, but it looks like restarting dnsmasq helps. If I stop mwan3, restart dnsmasq, and then start mwan3 again, everything works normally. Any idea what could be causing that?

I’ve tried various things, including a plain SIGHUP to dnsmasq.
From the logs, a plain SIGHUP to dnsmasq does not reproduce the issue.
The real problem seems to be repeated full dnsmasq restarts (SIGTERM + start) happening around fw4 reload / mwan3 rebuild events, while keepalived-ha is also reloading dnsmasq and adblock is active at the same time. There are also mwan3-hotplug errors during rule/chain deletion, so this looks more like a reload/restart timing/race problem than a simple dnsmasq HUP problem. keepalived-ha is my custom script.
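One way to tell a full restart apart from a plain SIGHUP is to watch dnsmasq's PID: a SIGHUP reload keeps the same PID, while a SIGTERM + start changes it. A minimal sketch (the sampling loop and the watch_pid name are mine, not from any package):

```shell
#!/bin/sh
# Sample a process's PID a few times, one second apart. A PID change
# between samples means a full restart happened; a HUP-only reload
# leaves the PID unchanged.
watch_pid() {
	name="$1"
	samples="${2:-3}"
	i=0
	while [ "$i" -lt "$samples" ]; do
		echo "$(date '+%T') $name pid: $(pidof "$name" || echo none)"
		sleep 1
		i=$((i + 1))
	done
}

watch_pid dnsmasq 3
```

Running this while restarting pbr (or mwan3) should show whether dnsmasq's PID churns once or several times.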

keepalived-ha

#!/bin/sh

PATH=/usr/sbin:/usr/bin:/sbin:/bin

TAG="ha-role-sync"

LOCK_DIR="/tmp/ha-role-sync.lock"
STATE_FILE="/tmp/ha-role-sync.state"
APPLIED_FILE="/tmp/ha-role-sync.applied"

# DHCP sections that must be ACTIVE only on MASTER

CLIENT_DHCP_SECTIONS="lan guest_net"

# Services that should run only on MASTER

MASTER_ONLY_SERVICE="odhcpd"

# WAN interfaces that should be brought up only on MASTER

WAN_MASTER_IFACES="wan"

# React ONLY to this keepalived instance

TARGET_TYPE="INSTANCE"
TARGET_NAME="VI_MAIN"

# How long to wait for dnsmasq after reload/start

DNSMASQ_WAIT_TRIES=5
DNSMASQ_WAIT_DELAY=1

log() {
	logger -t "$TAG" "$*"
}

mono_ms() {
	awk '{ printf "%d\n", $1 * 1000 }' /proc/uptime 2>/dev/null
}

should_handle_event() {
	[ "${TYPE:-}" = "$TARGET_TYPE" ] || return 1
	[ "${NAME:-}" = "$TARGET_NAME" ] || return 1

	case "${ACTION:-}" in
		NOTIFY_MASTER|NOTIFY_BACKUP|NOTIFY_FAULT|NOTIFY_STOP)
			return 0
			;;
		*)
			return 1
			;;
	esac
}

desired_from_action() {
	case "$1" in
		NOTIFY_MASTER)
			echo "MASTER"
			;;
		NOTIFY_BACKUP|NOTIFY_FAULT|NOTIFY_STOP)
			echo "BACKUP"
			;;
		*)
			return 1
			;;
	esac
}

write_latest_state() {
	local desired="$1"
	local token="$2"
	local action="$3"
	local type="$4"
	local name="$5"

	printf '%s|%s|%s|%s|%s\n' \
		"$desired" "$token" "$action" "$type" "$name" > "${STATE_FILE}.tmp.$$" || return 1
	mv "${STATE_FILE}.tmp.$$" "$STATE_FILE"
}

read_latest_state() {
	[ -s "$STATE_FILE" ] || return 1
	IFS='|' read -r LATEST_DESIRED LATEST_TOKEN LATEST_ACTION LATEST_TYPE LATEST_NAME < "$STATE_FILE"
	[ -n "$LATEST_DESIRED" ]
}

get_applied_state() {
	cat "$APPLIED_FILE" 2>/dev/null || true
}

set_uci_value() {
	local key="$1"
	local value="$2"
	local cur

	cur="$(uci -q get "$key" 2>/dev/null || true)"
	[ "$cur" = "$value" ] && return 1

	uci set "$key=$value"
	return 0
}

dnsmasq_running() {
	pidof dnsmasq >/dev/null 2>&1
}

wait_for_dnsmasq() {
	local i=0

	while [ "$i" -lt "$DNSMASQ_WAIT_TRIES" ]; do
		if dnsmasq_running; then
			return 0
		fi
		sleep "$DNSMASQ_WAIT_DELAY"
		i=$((i + 1))
	done

	return 1
}

ensure_dnsmasq_running() {
	if dnsmasq_running; then
		return 0
	fi

	log "dnsmasq is not running, starting it"
	/etc/init.d/dnsmasq start >/dev/null 2>&1 || true

	if wait_for_dnsmasq; then
		log "dnsmasq started successfully"
		return 0
	fi

	log "ERROR: dnsmasq did not start"
	return 1
}

reload_dnsmasq_safe() {
	log "reloading dnsmasq"
	/etc/init.d/dnsmasq reload >/dev/null 2>&1 || true

	if wait_for_dnsmasq; then
		log "dnsmasq is running after reload"
		return 0
	fi

	log "dnsmasq not running after reload, trying explicit start"
	/etc/init.d/dnsmasq start >/dev/null 2>&1 || true

	if wait_for_dnsmasq; then
		log "dnsmasq recovered after explicit start"
		return 0
	fi

	log "ERROR: dnsmasq is still not running after reload/start"
	return 1
}

set_dnsmasq_mode() {
	local mode="$1"
	local changed=0
	local sec

	case "$mode" in
		MASTER)
			for sec in $CLIENT_DHCP_SECTIONS; do
				set_uci_value "dhcp.$sec.ignore" "0" && changed=1
			done
			;;
		BACKUP)
			for sec in $CLIENT_DHCP_SECTIONS; do
				set_uci_value "dhcp.$sec.ignore" "1" && changed=1
			done
			;;
		*)
			log "unknown dnsmasq mode: $mode"
			return 1
			;;
	esac

	if [ "$changed" -eq 1 ]; then
		uci commit dhcp >/dev/null 2>&1 || true
		reload_dnsmasq_safe || return 1
	else
		ensure_dnsmasq_running || return 1
	fi

	return 0
}

start_master_only_service() {
	local svc="$1"
	log "starting master-only service: $svc"
	/etc/init.d/"$svc" start >/dev/null 2>&1 || true
}

stop_master_only_service() {
	local svc="$1"
	log "stopping backup-only service: $svc"
	/etc/init.d/"$svc" stop >/dev/null 2>&1 || true
}

bring_up_master_wan() {
	local ifc
	for ifc in $WAN_MASTER_IFACES; do
		log "ifup $ifc"
		ifup "$ifc" >/dev/null 2>&1 || true
	done
}

bring_down_master_wan() {
	local ifc
	for ifc in $WAN_MASTER_IFACES; do
		log "ifdown $ifc"
		ifdown "$ifc" >/dev/null 2>&1 || true
	done
}

apply_state() {
	local desired="$1"

	case "$desired" in
		MASTER)
			bring_up_master_wan
			set_dnsmasq_mode MASTER || return 1
			start_master_only_service "$MASTER_ONLY_SERVICE"
			ensure_dnsmasq_running || return 1
			;;
		BACKUP)
			stop_master_only_service "$MASTER_ONLY_SERVICE"
			set_dnsmasq_mode BACKUP || return 1
			bring_down_master_wan
			ensure_dnsmasq_running || return 1
			;;
		*)
			log "unknown desired state: $desired"
			return 1
			;;
	esac

	printf '%s\n' "$desired" > "$APPLIED_FILE"
	return 0
}

acquire_lock() {
	mkdir "$LOCK_DIR" 2>/dev/null
}

release_lock() {
	rmdir "$LOCK_DIR" 2>/dev/null || true
}

# Ignore everything except INSTANCE/VI_MAIN with explicit state callbacks

if ! should_handle_event; then
	exit 0
fi

DESIRED="$(desired_from_action "${ACTION:-}")" || exit 0
TOKEN="${DESIRED}:$$:$(mono_ms)"

write_latest_state "$DESIRED" "$TOKEN" "${ACTION:-}" "${TYPE:-}" "${NAME:-}" || exit 1

if ! acquire_lock; then
	log "handler busy, queued desired=$DESIRED token=$TOKEN"
	exit 0
fi

trap 'release_lock' EXIT INT TERM

while :; do
	read_latest_state || break

	APPLIED="$(get_applied_state)"
	if [ "$LATEST_DESIRED" = "$APPLIED" ]; then
		break
	fi

	log "role=$LATEST_ACTION desired=$LATEST_DESIRED type=$LATEST_TYPE name=$LATEST_NAME token=$LATEST_TOKEN"

	if ! apply_state "$LATEST_DESIRED"; then
		log "failed to apply state: $LATEST_DESIRED"
		break
	fi
done

read_latest_state || exit 0
FINAL_APPLIED="$(get_applied_state)"

if [ "$TOKEN" != "$LATEST_TOKEN" ] || [ "$DESIRED" != "$FINAL_APPLIED" ]; then
	log "stale event ignored: action=${ACTION:-} desired=$DESIRED token=$TOKEN latest=$LATEST_DESIRED"
fi

exit 0

Mentioned errors

Fri Apr 10 19:05:56 2026 user.err mwan3-hotplug[15758]: nft delete rule inet fw4 mwan3_ifaces_in handle 4220 4225 4230 4235:
Error: syntax error, unexpected number
delete rule inet fw4 mwan3_ifaces_in handle 4220
                                            ^^^^
Error: syntax error, unexpected number
delete rule inet fw4 mwan3_ifaces_in handle 4220
                                            ^^^^
Error: syntax error, unexpected number
delete rule inet fw4 mwan3_ifaces_in handle 4220
                                            ^^^^
Fri Apr 10 19:05:56 2026 user.err mwan3-hotplug[15758]: nft delete chain inet fw4 mwan3_iface_in_wan: Error: Could not process rule: Resource busy

nft delete chain inet fw4 mwan3_iface_in_wan:
Error: Could not process rule: Resource busy

And

rpcd: Timeout waiting for /etc/init.d/dnsmasq

You have a couple of issues raised in the two posts, so let's deal with them separately.

I think you've happened upon a real bug here, and the dnsmasq restart is just a (possibly coincidental) workaround.

The mwan3 catchall rule misclassifies inbound IPv6 traffic that has no conntrack mark when an IPv4 and an IPv6 mwan3 interface share the same physical device: the traffic gets the wrong routing mark and goes via the main routing table instead of the wan-specific routing table. Established connections bypass the bug.

The bug triggers when, for example, the conntrack entry of a QUIC connection expires (e.g. a YouTube tab left open but idle). The resulting inconsistent routing across the connection causes dropped packets and stalled or failed streams, much like the image you posted.

I don't fully understand why restarting dnsmasq appears to fix the problem; it might just be coincidental.

Stopping mwan3 does not help because the wrong mark is still saved in the conntrack table; simply stopping the service doesn't flush conntrack. A more direct test would be to issue a conntrack -F and see if that helps (it may fix the current state, but the problem will return because of the underlying bug).

I've fixed this bug and will push it in the next release, so hopefully it will clear up this symptom.

This is a bug causing a cascade of errors. The keepalived-ha script is generating successive fw4 reloads because it's triggering ifup/ifdown repeatedly.

This causes multiple hotplug events in quick succession, and one of the nft delete commands invoked via the mwan3 hotplug handler then receives a space-separated list of handles instead of a single handle. That construct is syntactically invalid in a single command: nft deletes one rule per handle, so the handler should instead iterate over the list in a while loop.

So the rule delete fails because of the invalid command, and the subsequent chain delete fails too because the chain still references the rule. With both deletes failing, each fw4 reload + rebuild cycle leaves behind another copy of the rule, and the duplicates accumulate.
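A sketch of what corrected handler logic could look like (the function name and structure are illustrative, not mwan3's actual code; NFT is overridable so the loop can be dry-run with echo):

```shell
#!/bin/sh
# nft's "delete rule ... handle" accepts exactly one handle per command,
# so a space-separated list of handles must be deleted one at a time.
# NFT defaults to the real nft binary but can be overridden for dry runs.
NFT="${NFT:-nft}"

delete_rule_handles() {
	chain="$1"
	shift
	for h in "$@"; do
		$NFT delete rule inet fw4 "$chain" handle "$h"
	done
}
```

For example, NFT=echo delete_rule_handles mwan3_ifaces_in 4220 4225 4230 4235 emits four separate delete commands instead of one invalid one.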

While the root cause is a genuine mwan3 bug, your script's calls to bring_up_master_wan() and bring_down_master_wan() generate repeated ifup/ifdown of the wan interfaces, which greatly exacerbates the problem by driving repeated fw4 reload cycles.

You don't need to tear down and bring up the wan interface just to switch HA roles, since mwan3 already tracks the wan interface state via hotplug. A keepalived-ha role change should not require explicitly bouncing the wan interface, so I recommend you don't call ifup/ifdown in the script for HA role transitions.

The wan interface state should be managed by netifd independently of HA role change. If the DHCP section enable/disable is all that's needed, just commit the UCI change and reload dnsmasq; no need to bounce the interface.

The sequence is currently

- MASTER: bring up the wan interface (ifup wan), enable DHCP sections for lan/guest_net in dnsmasq, start odhcpd, reload dnsmasq
- BACKUP: stop odhcpd, disable those DHCP sections, bring down the wan interface (ifdown wan), reload dnsmasq

It should be

- MASTER: enable DHCP sections for lan/guest_net in dnsmasq, start odhcpd, reload dnsmasq
- BACKUP: stop odhcpd, disable those DHCP sections, reload dnsmasq

Just remove the bring_up_master_wan() and bring_down_master_wan() calls from the apply_state function entirely, so the explicit wan bounces drop out of the HA role transition logic completely.
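With those two calls removed, apply_state from the script above reduces to something like this (a sketch; the helper functions are the ones already defined in keepalived-ha):

```shell
#!/bin/sh
# apply_state without the wan bounces: HA role transitions only touch
# DHCP sections, the master-only service, and dnsmasq. netifd manages
# the wan interface state independently of the HA role.
apply_state() {
	local desired="$1"

	case "$desired" in
		MASTER)
			set_dnsmasq_mode MASTER || return 1
			start_master_only_service "$MASTER_ONLY_SERVICE"
			ensure_dnsmasq_running || return 1
			;;
		BACKUP)
			stop_master_only_service "$MASTER_ONLY_SERVICE"
			set_dnsmasq_mode BACKUP || return 1
			ensure_dnsmasq_running || return 1
			;;
		*)
			log "unknown desired state: $desired"
			return 1
			;;
	esac

	printf '%s\n' "$desired" > "$APPLIED_FILE"
	return 0
}
```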

keepalived already requires the wan to be up in order to be running and to win the MASTER election in the first place, so by the time the MASTER notification fires, the wan is already up anyway. Even so, ifup on an already-up interface still triggers a hotplug event, which causes fw4 reload detection and a full mwan3 nft rebuild, so it's not harmless. That's what's driving the repeated rebuild cycles.

On a PPPoE interface, running ifup on an already-up interface is more than just redundant; it's actively harmful. netifd tears the interface fully down first, then brings it back up, effectively an ifdown followed by an ifup. On my machine, an ifup of an already-up PPPoE interface caused three separate fw4 reloads and mwan3 rebuilds, two dnsmasq restarts, and two PPPoE reconnects!

If the node is transitioning to BACKUP, calling ifdown wan explicitly is likewise either redundant or actively harmful. If the wan going down is what triggered the BACKUP transition, netifd will handle the interface on its own, so the explicit ifdown is redundant and generates an unnecessary hotplug event and fw4 rebuild cycle. If the BACKUP transition is instead due to the peer coming back online with higher priority, the wan on this node may still be up, in which case calling ifdown wan brings down an interface that should stay up.
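For completeness: if an explicit wan bounce were ever needed for some unrelated reason, it should at least be guarded so an already-up interface is never re-upped. A hypothetical sketch (iface_is_up is a placeholder the caller must supply, e.g. a wrapper around a ubus interface-status query; none of this is needed if the bounces are simply removed as recommended):

```shell
#!/bin/sh
# Hypothetical guard: run ifup only when the interface is actually down,
# avoiding the spurious hotplug event and fw4/mwan3 rebuild that an ifup
# of an already-up interface would trigger. iface_is_up is caller-supplied.
safe_ifup() {
	ifc="$1"
	if iface_is_up "$ifc"; then
		echo "skipping ifup $ifc: already up"
		return 0
	fi
	ifup "$ifc"
}
```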

I'm pleased someone with two IPv6 connections is testing this, because dual-homed IPv6 is a scenario I'm unable to test myself: I have only a single IPv6-capable interface.
