NAT leakage on TL-WR1043ND v4

The problem and a proposed fix (kind of) are described here:

http://www.smythies.com/~doug/network/iptables_notes/index.html

It has absolutely nothing to do with LEDE, hardware NAT or VLAN configuration.

I think that @andreas already checked that information and it did not help (see preceding posts).

How serious is this problem... should I be worried if I don't do anything, e.g. not add extra rules to the FORWARD chain?

And this is not only related to the 1043ND, right? It's related to all routers (it's a "problem" with Linux)?

These packets will hit the gateway of my ISP and probably not go any further, right?

Can an intruder use these packets to do any harm from outside, e.g. will the NAT be open in some way?

What else should I be worried about in regards to this leak?

Hi folks,
It seems I was able to find a cause for the NAT leakage. I am currently testing a new setup with a workaround, and will hopefully be able to provide details tomorrow.

@MrM40 So far, I have only tested 2 TL-WR1043ND v4. My new findings, though, suggest indeed that there's a bug in LEDE that potentially affects other devices.

That depends on the network setup. Theoretically, the packets can end up anywhere. Generally, the problem with NAT leakage, AFAIK (I am no security expert), is the disclosure of your private network infrastructure. Personally, I consider this a critical vulnerability. And I am surprised that TP-Link did not react appropriately to my reports.

The packets that I observed are generally packets that were supposed to be sent to the WAN anyway. What is disclosed is only the source IP of your private network. The firewall is not affected. However, since the packets have an incorrect source IP, any reply to them will not reach your router and will be lost. In the worst case, a reply might end up elsewhere; then data that was meant for you could be disclosed to third parties.

Moreover, especially since we do not know what causes the leakage and what triggers it, there is a potential performance issue when a considerable number of packets gets lost.
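
For monitoring, a simple way to spot leaked packets is to capture on the WAN side and filter for private source addresses. A rough sketch (eth0.2 as the WAN device and the RFC 1918 ranges are assumptions; adjust to your setup, and tcpdump needs to be installed on the router):

# show packets leaving via the WAN device that still carry a private (RFC 1918) source
# address, i.e. packets that escaped NAT; eth0.2 is an assumption, check your WAN device
tcpdump -ni eth0.2 'src net 10.0.0.0/8 or src net 172.16.0.0/12 or src net 192.168.0.0/16'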


My overnight monitoring showed no NAT leakage with the new setup, so here we go:

TL;DR: LEDE 17.01 is not dropping invalid packets. Add a custom rule like iptables -I forwarding_rule -m state --state INVALID -j DROP (no warranties included)

LuCI has an option "Drop invalid packets"; however, I couldn't find a corresponding rule in the output of iptables -L (see my bug report "Drop invalid packets doesn't do anything" #1068). I knew that this can cause NAT leakage, so I added iptables -I forwarding_rule -m state --state INVALID -j DROP to the custom rules at /cgi-bin/luci/admin/network/firewall/custom (i.e. /etc/firewall.user). With this line added, I monitored the traffic of the TL-WR1043ND in two different settings for about 40 hours without a single leaked packet.
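
For reference, this is roughly what the custom include then contains (a sketch of my setup, no warranties):

# /etc/firewall.user -- custom rules, re-applied on every firewall reload
iptables -I forwarding_rule -m state --state INVALID -j DROP

Running /etc/init.d/firewall restart afterwards applies the include without a reboot.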

Mind you, I have only ever looked at IPv4 traffic. I cannot speak for IPv6. In the bug report, jow- wrote that there is a possible interference with "IPv6 multicast traffic".

Not dropping invalid packets was a change committed to LEDE in August 2016 (see the bug report for details). This explains why the rule is missing in LEDE. It does not explain the NAT leakage that occurs with TP-Link's stock firmware, but I am guessing the reasons are similar. Unfortunately, I never heard back from TP-Link support.

What I am still very curious about is how the invalid packets can be triggered in the first place and why they are invalid. I tried a lot but failed. Yet, just running arbitrary clients always led to NAT leakage, generally within several hours.
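
One way to dig further (just a sketch) is to log the invalid packets before dropping them, so the kernel log shows what actually triggers them:

# insert the DROP first, then the LOG above it, so matching packets are logged, then dropped
iptables -I forwarding_rule -m conntrack --ctstate INVALID -j DROP
iptables -I forwarding_rule -m conntrack --ctstate INVALID -m limit --limit 10/min -j LOG --log-prefix "ct-invalid: "

The kernel log (dmesg) then shows source, destination and flags of the offending packets.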


The drop invalid rule should only apply to outgoing traffic imho, otherwise a lot of legitimate inbound traffic is dropped.
Can you try this rule instead and see if you still reproduce the leak?

iptables -t nat -A zone_wan_postrouting -m conntrack --ctstate INVALID -j DROP

Correction:
iptables -t filter -A zone_wan_dest_ACCEPT -m conntrack --ctstate INVALID -j DROP

Shows leakage (and fortunately quite immediate; GMT+1):

12:48:10.225137 IP 10.0.0.48.49187 > 216.55.137.169.80: Flags [F.], seq 184019810, ack 3923615243, win 16361, length 0
12:48:10.630523 IP 10.0.0.48.49187 > 216.55.137.169.80: Flags [F.], seq 0, ack 1, win 16361, length 0
12:48:11.441749 IP 10.0.0.48.49187 > 216.55.137.169.80: Flags [F.], seq 0, ack 1, win 16361, length 0
12:48:13.048601 IP 10.0.0.48.49187 > 216.55.137.169.80: Flags [F.], seq 0, ack 1, win 16361, length 0
12:48:16.262225 IP 10.0.0.48.49187 > 216.55.137.169.80: Flags [F.], seq 0, ack 1, win 16361, length 0
12:48:22.673872 IP 10.0.0.48.49187 > 216.55.137.169.80: Flags [F.], seq 0, ack 1, win 16361, length 0
12:48:35.481631 IP 10.0.0.48.49187 > 216.55.137.169.80: Flags [R.], seq 1, ack 1, win 0, length 0
12:48:48.742433 IP 10.0.0.48.49189 > 23.42.27.27.80: Flags [R.], seq 637184114, ack 2983779411, win 0, length 0

Was the rule reached? Can you check iptables -nvL zone_wan_dest_ACCEPT?

I checked that the rule was added, yes. But you wrote -A zone_wan_dest_ACCEPT. Shouldn't that be -I instead of -A, to insert before the default ACCEPT?

With -I and after a quick test iptables -nvL zone_wan_dest_ACCEPT shows

Chain zone_wan_dest_ACCEPT (2 references)
 pkts bytes target     prot opt in     out     source               destination         
   42  1680 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID
  783 50041 ACCEPT     all  --  *      eth0.2  0.0.0.0/0            0.0.0.0/0            /* !fw3 */

And no leakage :)
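
For completeness, the -I variant spelled out (the position argument 1 is what -I uses by default):

# -A would append the rule after the existing ACCEPT, so the DROP would never be reached;
# -I puts it at position 1, ahead of the ACCEPT, which is what the counters above show
iptables -t filter -I zone_wan_dest_ACCEPT 1 -m conntrack --ctstate INVALID -j DROP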


I also use this on public/private borders:

#!/bin/sh
# take care of no-internet routeable addresses

# pick one mode (the last assignment wins): blackhole silently discards,
# unreachable also sends an ICMP error back
mode=blackhole
mode=unreachable

ip route add $mode 0.0.0.0/8 metric 999
ip route add $mode 10.0.0.0/8 metric 999
ip route add $mode 100.64.0.0/10 metric 999
ip route add $mode 127.0.0.0/8 metric 999
ip route add $mode 169.254.0.0/16 metric 999
ip route add $mode 172.16.0.0/12 metric 999
ip route add $mode 192.0.0.0/24 metric 999
ip route add $mode 192.0.2.0/24 metric 999
ip route add $mode 192.168.0.0/16 metric 999
ip route add $mode 198.18.0.0/15 metric 999
ip route add $mode 198.51.100.0/24 metric 999
ip route add $mode 203.0.113.0/24 metric 999
ip route add $mode 224.0.0.0/4 metric 999
ip route add $mode 240.0.0.0/4 metric 999

Perfect. I'll see if we can add this to the firewall program then. The idea is to emit this rule only for zones with masquerading enabled to avoid affecting unrelated/unmasked traffic.


Where do I put this?
Do I still need this if I use bcp38?

That looks pretty much like a manual implementation of the bcp38 package (with some additional blocks added to the list).

About those additional addresses, is there any reason why they aren't included in bcp38?

Somehow I can't figure out why this address is added:
ip route add $mode 192.0.2.0/24 metric 999

even though this address is already there:
ip route add $mode 192.0.0.0/24 metric 999

You can put it in rc.local (should be persistent) or in '/etc/hotplug.d/iface/' ...

It's not quite bcp38 in that it doesn't care about source addresses, but it makes sure that traffic destined to unused private nets/bogons does not get to your upstream default router.

the list is from: http://www.team-cymru.org/bogon-bit-notation.html
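
If you go the hotplug route, it could look roughly like this (untested sketch; the file name and the logical interface name 'wan' are assumptions):

# /etc/hotplug.d/iface/40-bogon-routes (hypothetical name)
# re-install the bogon routes whenever the wan interface comes up;
# 'ip route replace' keeps this idempotent across repeated ifup events
[ "$ACTION" = ifup ] && [ "$INTERFACE" = wan ] || exit 0

mode=unreachable
for net in 0.0.0.0/8 10.0.0.0/8 100.64.0.0/10 127.0.0.0/8 169.254.0.0/16 \
           172.16.0.0/12 192.0.0.0/24 192.0.2.0/24 192.168.0.0/16 \
           198.18.0.0/15 198.51.100.0/24 203.0.113.0/24 224.0.0.0/4 240.0.0.0/4; do
    ip route replace $mode "$net" metric 999
done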


jow, does it mean you will update one of these packages:

  • "firewall", current version "2017-01-13-37cb4cb4-1"
  • "luci-app-firewall", current version "git-17.051.53299-a100738-1"

...with this rule:
iptables -t filter -I zone_wan_dest_ACCEPT -m conntrack --ctstate INVALID -j DROP

And what do you expect the timeframe to be? :)
When can one expect the firmware package to be updated too?
Sorry for the novice question :)


Was this fixed in LEDE 17.01.1? Is that LEDE version good to use on the v4?

AFAIK, no.

Just add the rule mentioned below to the /etc/firewall.user file...

iptables -t filter -I zone_wan_dest_ACCEPT -m conntrack --ctstate INVALID -j DROP

...and after that either reboot the router or issue the service firewall restart command. Now you're good to go.


Is this fixed in master somehow?