Hello.
I'm looking to set up fixed-value MSS clamping on my router.
Automatic path MTU discovery is broken because I am behind a VPN that fragments packets internally when they are larger than the real MTU.
While this works for getting large packets through, it tanks throughput, so I'm looking to set a proper MSS value to work around it.
I also can't reduce the interface's MTU, because the IPv6 standard enforces a 1280-byte minimum MTU.
I have found some documentation online on how to do this with nft; I'm just not sure about the correct way to apply it permanently in the context of OpenWrt's fw4.
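For reference, the kind of rule the documentation shows looks roughly like this (purely illustrative; the interface name and the value are placeholders, and where exactly to put it under fw4 is what I'm unsure about):

    # illustrative only: clamp the MSS of TCP SYN packets leaving a hypothetical VPN interface
    oifname "tun0" tcp flags syn tcp option maxseg size set 1200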
This is an end-to-end technology, so you would configure this on the source client/host. If your router has issues, disable MSS clamping and set the MTU values properly. This will ensure the correct MSS.
So just to be clear, this isn't possible. You may be able to alter the packet in transit (i.e. artificially rewriting the MSS), but this may cause other issues.
All VPNs would fragment a packet larger than the real MTU (provided the packet is actually that large), so it's not clear what your statement means.
I'm trying to understand the OP's theory: rather send two small TCP packets (both of which must be acknowledged) than have the router fragment the packet and deliver the fragments to the other VPN endpoint (one acknowledgement). That could tank download throughput.
I forget the actual value, but it's around 1238 bytes if I'm not mistaken.
Lower than the real MTU. 1200 sounds like a good start; I can find the optimal value after figuring out how to set the MSS.
In my current setup, relying on automatic path MTU discovery to set the MSS leads to a high value (>1450), which reduces throughput significantly.
I have tested various MSS values with iperf3 (the -M switch), and throughput gets significantly better when I set the MSS to 1200.
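For reference, the kind of test I ran looks roughly like this (the server address is just a placeholder for a host reached through the VPN):

    # compare throughput with a forced MSS of 1200 bytes (iperf3's -M/--set-mss option)
    iperf3 -c 192.0.2.1 -M 1200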
I want that value to apply to all TCP connections that go through the VPN interface.
As mentioned already, my actual MTU is lower than the minimum imposed by IPv6, so I can't do that without disabling IPv6.
Since the VPN handles larger packets internally, I don't see the need to do that when there is a way to ask the client to send smaller packets via MSS clamping.
If, for whatever reason, a large packet were to be sent anyway, the VPN would handle it, just at lower throughput.
Unfortunately, it's not possible with my setup.
I understand it comes off as unusual, but believe me, I have my reasons.
However, thanks to you, I learned about the /etc/nftables.d/ directory; it was exactly what I was looking for.
That's unfortunate, but it still doesn't answer the question. Additionally, you just provided a detail you never mentioned in your first post. To be clear, this information was needed to assist you.
We need to know the REAL physical connection medium you use to connect to your actual Internet Service Provider. This determines your real MTU.
Afterward we can discuss the nested VPNs, how they're configured, the MSS/MTU issues, etc. (and whether OpenWrt is involved).
1480 is usually the MTU on a PPPoE connection. This is running on OpenWrt, correct?
Can you describe more?
We now need to know how you nested the VPNs.
Then we can discuss their MTUs, subtract the nesting overhead, etc.
Ummmm, that depends on how you're nesting the VPNs - anyway, again, we needed to know your real maximum MTU to help understand that "topmost layer".
Everything else you're discussing is virtual and runs over that physical connection. If you're uncomfortable discussing how/why you have this setup, you may wish to see if the VPN providers offer support for their service(s).
I appreciate your help.
I know it feels like I'm being dodgy. I can explain the full situation over PM if you'd like, but I'm not comfortable discussing it on a public forum.
Anyways, I have added this to /etc/nftables.d/10-custom-filter-chains.nft and it seems to be doing what I want. I shall experiment with different values and see how it affects throughput.
Ideally I should set separate MSS values for IPv4 and IPv6 connections (a rough sketch of that follows the rules below), but this is a great start.
chain user_post_forward {
    type filter hook forward priority 1; policy accept;
    iifname { "tun0" } tcp flags syn tcp option maxseg size set 1200
    oifname { "tun0" } tcp flags syn tcp option maxseg size set 1200
}
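A rough sketch of what that per-family version could look like, replacing the chain above (the IPv4 value of 1220 is just a guess based on IPv4 headers being 20 bytes smaller than IPv6 ones; both values still need testing with iperf3):

    chain user_post_forward {
        type filter hook forward priority 1; policy accept;
        # IPv6 headers are 20 bytes larger than IPv4, so the IPv4 MSS can sit 20 bytes higher
        meta nfproto ipv6 iifname { "tun0" } tcp flags syn tcp option maxseg size set 1200
        meta nfproto ipv6 oifname { "tun0" } tcp flags syn tcp option maxseg size set 1200
        meta nfproto ipv4 iifname { "tun0" } tcp flags syn tcp option maxseg size set 1220
        meta nfproto ipv4 oifname { "tun0" } tcp flags syn tcp option maxseg size set 1220
    }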
I do have nested tunnels, but they aren't visible to OpenWrt.
It's like this (sorry, I can't picture it well): OpenWrt <===( server 1 <===> server 2 )==> server 3 <==> internet
Yes the latency is horrible.
I do control 2 of the middle servers.
I think you are suggesting that I add up the overhead of each tunnel encapsulation, subtract the sum from 1480 (my physical link MTU), and set the result on the interface.
I have done this before, but the resulting MTU was below 1280, which the kernel refused to set on the interface because it had an active IPv6 address.