Starlink on OpenWrt 25.x — IPv6, MSS clamp fix, hybla and fq_codel optimisation guide

I put this together after spending a few days diagnosing various issues with Starlink on OpenWrt 25.12.0. Posting here in case it saves someone else the same trouble. Tested on a GL-iNet Beryl AX (MT3000).

There are four main problems this covers:

  1. Starlink sends very short IPv6 prefix lifetimes (~300s valid, ~150s preferred). The OpenWrt default RA interval (up to 600s) is longer than these lifetimes, so LAN clients can see their preferred lifetime expire before the next RA arrives, causing them to stop using the address for new connections.
  2. fw4 egress MSS bug (openwrt/openwrt#12112) — on OpenWrt 23.05 and earlier, mtu_fix 1 only generated an ingress MSS clamp rule. Outbound TCP SYN packets left unclamped, causing large downloads to stall. Fixed in firewall4 commit 698a533; OpenWrt 24.10+ generates both rules automatically when mtu_fix 1 is set on the wan zone.
  3. Default TCP congestion control — cubic and reno grow their window once per RTT, so long-RTT flows ramp up more slowly in wall-clock time and are hit harder by loss. Hybla normalises window growth against a reference RTT to remove that penalty.
  4. Default conntrack table (often 16384 on embedded routers) can exhaust on busy or IoT-heavy networks, and the default timeouts are longer than they need to be.

1. IPv6 — DHCPv6-PD and LAN assignment

Check if wan6 already exists: uci show network.wan6

If not, or if proto is not dhcpv6:

uci set network.wan6=interface
uci set network.wan6.device='@wan'
uci set network.wan6.proto='dhcpv6'
uci set network.wan6.reqaddress='try'
uci set network.wan6.reqprefix='auto'
uci set network.lan.ip6assign='64'
uci commit network

Note on /64: Standard residential Starlink delegates a /56. A /64 cannot be sub-delegated, so LAN clients can't get their own prefix. If you only get a /64, NDP proxying is the fallback (LAN clients share the WAN /64, limited to ~250 hosts, no DHCPv6 on LAN).
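If you are stuck in that /64-only case, the usual fallback is odhcpd's relay mode. A hedged sketch, assuming the wan6/lan interface names used above (relay mode replaces the server-mode RA/DHCPv6 settings in section 2, so use one or the other):

```shell
# NDP-proxy fallback: LAN clients configure themselves from Starlink's /64
# via relayed RA/NDP instead of a delegated prefix of their own.
uci set dhcp.wan6=dhcp
uci set dhcp.wan6.interface='wan6'
uci set dhcp.wan6.master='1'
uci set dhcp.wan6.ra='relay'
uci set dhcp.wan6.dhcpv6='relay'
uci set dhcp.wan6.ndp='relay'
uci set dhcp.lan.ra='relay'
uci set dhcp.lan.dhcpv6='relay'
uci set dhcp.lan.ndp='relay'
uci commit dhcp
service odhcpd restart
```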


2. Fix Starlink's short IPv6 prefix lifetimes

This is the only genuinely Starlink-specific odhcpd change required.

What's happening: Starlink advertises ~150s preferred / ~300s valid lifetimes on the delegated prefix. odhcpd renews the DHCPv6-PD lease every ~75s (half the preferred lifetime), so the prefix itself stays valid. The problem is on the LAN side: odhcpd advertises the remaining lease time in each RA message. With the default OpenWrt max RA interval of 600s, the preferred lifetime advertised to clients can expire before the next RA arrives, causing clients to stop using the address for new connections.

The fix: reduce the max RA interval so clients receive refreshes within the renewal cycle.

uci set dhcp.lan.ra='server'
uci set dhcp.lan.dhcpv6='server'
uci set dhcp.lan.ra_maxinterval='60'
uci set dhcp.lan.ra_mininterval='20'
uci commit dhcp
service odhcpd restart

ra_maxinterval=60 ensures clients get a refreshed RA well within the ~75s DHCPv6-PD renewal cycle. ra_mininterval=20 follows the RFC 4861 recommendation of 1/3 of the max.
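The worst-case arithmetic behind the 60s figure, as a quick shell sketch (the numbers are Starlink's observed lifetimes; purely illustrative):

```shell
# Preferred lifetime P=150s; odhcpd renews the PD lease at P/2, so the
# preferred lifetime carried in an RA can be as low as 75s. ra_maxinterval
# must stay below that floor so clients are refreshed in time.
P=150
floor=$((P / 2))        # lowest preferred lifetime an RA can carry
ra_max=60
if [ "$ra_max" -lt "$floor" ]; then
  echo "ra_maxinterval=${ra_max}s is safe (floor ${floor}s)"
else
  echo "ra_maxinterval=${ra_max}s risks preferred-lifetime expiry"
fi
```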

What doesn't work: max_preferred_lifetime and max_valid_lifetime are maximums, not minimums. Since Starlink's lifetimes are already well below the odhcpd defaults (2700s/5400s), these settings have no effect on the delegated prefix. They only cap the ULA (fdxx:) prefix, which is not the problem. You may see these recommended in other posts — they don't do what the authors think they do.

Note: this fixes the address churn caused by short lifetimes not being refreshed in time. It does not prevent renumbering if Starlink assigns a genuinely new prefix (e.g. after a dish reboot). In that case LAN clients will get new addresses regardless of this config.


3. DNS

uci set network.wan.peerdns='0'
uci set network.wan.dns='1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4'
uci set network.wan6.peerdns='0'
uci set network.wan6.dns='2606:4700:4700::1111 2606:4700:4700::1001 2001:4860:4860::8888 2001:4860:4860::8844'
uci commit network

4. NTP — GPS-disciplined Stratum 1 from the dish

The Starlink dish at 192.168.100.1 has been serving GPS-disciplined NTP on port 123 since mid-2024. It is Stratum 1 — directly from GPS, not a pool relay. Accuracy in practice is around 85–123 µs. No packages or extra tooling needed.

uci add_list system.ntp.server='192.168.100.1'
uci commit system
service sysntpd restart

This adds the dish alongside your existing pool servers rather than replacing them. If the dish is unreachable for any reason (bypass mode with different topology, etc.) the pool servers act as fallback. Check the result:

uci get system.ntp.server

5. MSS clamping

On OpenWrt 24.10+ (including 25.x), mtu_fix is a zone-level option — it belongs on the wan zone, not the defaults section. On 24.10+ it already defaults to 1 on the wan zone, but set it explicitly to be safe:

WAN_ZONE=$(uci show firewall | grep -m1 "\.name='wan'" | cut -d. -f2)
uci set firewall.$WAN_ZONE.mtu_fix='1'
uci commit firewall
service firewall restart

fw4 will generate both an ingress clamp rule (mangle_forward) and an egress clamp rule (mangle_postrouting) automatically. Verify:

nft list chain inet fw4 mangle_postrouting | grep maxseg
nft list chain inet fw4 mangle_forward | grep maxseg

Both should show a rule with tcp option maxseg size set rt mtu. rt mtu uses the routing table MTU dynamically — no need to hardcode a value like 1452.
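For reference, the MSS arithmetic (illustration only, assuming a 1500-byte path MTU; the 1452 value seen in older guides corresponds to a 1492-byte PPPoE MTU, not Starlink):

```shell
# Clamp value = path MTU minus IP header minus 20-byte TCP header.
mtu=1500
echo "IPv4 MSS: $((mtu - 20 - 20))"   # 1460
echo "IPv6 MSS: $((mtu - 40 - 20))"   # 1440
```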

On OpenWrt 23.05, mtu_fix 1 only generates the ingress rule. The egress rule is missing due to fw4 bug openwrt/openwrt#12112 (fixed in firewall4 commit 698a533). On 23.05 you need to add the egress rule manually via a drop-in file — but do not use this approach on 25.x.

Drop-in files are broken on 25.12: fw4 renders its entire ruleset as a single inline nftables script. A drop-in containing a top-level table inet fw4 block causes a syntax conflict and service firewall restart fails with "unexpected table" errors. Use mtu_fix 1 on 24.10+.


6. Kernel optimisation — hybla, fq_codel, conntrack

Install hybla (OpenWrt 25.x uses apk; older versions use opkg):

# OpenWrt 25.x
apk add kmod-tcp-hybla

# OpenWrt 23.05 / 24.10
opkg update && opkg install kmod-tcp-hybla

Append to /etc/sysctl.conf:

cat >> /etc/sysctl.conf << 'EOF'

# TCP optimisation
net.core.default_qdisc = fq_codel
net.ipv4.tcp_congestion_control = hybla
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_mtu_probing = 2

# IPv6 — required for Starlink router mode
# accept_ra=2: Linux ignores RAs when forwarding=1; =2 overrides this so the
# router receives its upstream default route from Starlink via RA.
net.ipv6.conf.all.accept_ra = 2
net.ipv6.conf.default.accept_ra = 2
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1

# Conntrack — values based on official Starlink firmware sysctl.conf
# tcp_timeout_established=7440 (~2h) avoids dropping long-lived NAT sessions
net.netfilter.nf_conntrack_max = 65536
net.netfilter.nf_conntrack_tcp_timeout_established = 7440
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_udp_timeout = 60
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_generic_timeout = 600
EOF

Apply without rebooting:

sysctl -p /etc/sysctl.conf

Verify:

sysctl net.ipv4.tcp_congestion_control
# expect: hybla

sysctl net.core.default_qdisc
# expect: fq_codel
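To judge whether the larger conntrack table is actually needed, compare live usage against the limit. A sketch with fallback sample numbers so the arithmetic still shows when run off-router (the 1200/65536 fallbacks are made-up values; the sysctl keys are real):

```shell
# Current vs maximum tracked connections, with percentage used.
count=$(sysctl -n net.netfilter.nf_conntrack_count 2>/dev/null || echo 1200)
max=$(sysctl -n net.netfilter.nf_conntrack_max 2>/dev/null || echo 65536)
echo "conntrack: $count / $max ($((count * 100 / max))% used)"
```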

Hybla scope: net.ipv4.tcp_congestion_control = hybla on the router only affects TCP sessions terminating at the router — OpenVPN over TCP, a local proxy, SSH, etc. (WireGuard tunnels are UDP, so the setting does not apply to them.) It has no effect on flows from LAN clients passing through NAT (browsers, streaming apps). For a plain NAT router this setting is nearly a no-op. It is useful if you run OpenVPN or another TCP-based service on the router itself.
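A quick way to see this scope in practice: list the congestion control reported for sockets local to this host (a sketch; forwarded LAN flows never appear in this output, which is exactly the point):

```shell
# ss -tin prints per-socket TCP info including the CC algorithm in use.
cc_summary=$(ss -tin 2>/dev/null | grep -oE 'cubic|hybla|bbr|reno' | sort | uniq -c)
echo "local-socket congestion control in use:"
echo "${cc_summary:-none found}"
```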

Hybla and Starlink: Hybla was designed for GEO satellites with ~500ms RTT, where the RTT bias in loss-based algorithms (cubic, reno) is severe. Starlink is LEO with ~20-50ms RTT, much closer to regular broadband — so the RTT-normalisation benefit is much smaller than for GEO. That said, Starlink has higher packet loss than typical fibre or cable, so hybla may still be marginally better than cubic for router-terminated sessions. It is otherwise standard loss-based behaviour and fair to other flows — unlike BBRv1 which probes aggressively.

The script installs kmod-tcp-hybla and auto-selects it if available. If not present on a given kernel build, it falls back to CDG, then BBR, then cubic.

On bufferbloat: fq_codel is set as the default qdisc here rather than CAKE. To be clear, neither qdisc does much about bufferbloat on its own in this setup: the bottleneck is the satellite link rather than a local interface, so effective control needs a software shaper (SQM) on the WAN side, and keeping a shaper accurately tuned is impractical when Starlink throughput varies as much as it does. fq_codel is kept as the default because it is cheap and needs no configuration. If you do add an SQM layer, CAKE is the better pick: it adds per-host fairness and a secondary BLUE AQM on top of what fq_codel does, at a somewhat higher CPU cost.
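For anyone who does want to try an SQM layer, a hypothetical CAKE sketch (assumes the sqm-scripts package is installed, which creates a disabled queue section on install; the interface and bandwidth values are placeholders you must measure and re-tune yourself):

```shell
# Placeholder SQM/CAKE config, not a recommendation for specific values.
uci set sqm.@queue[0].interface='eth0'    # your WAN device
uci set sqm.@queue[0].qdisc='cake'
uci set sqm.@queue[0].script='piece_of_cake.qos'
uci set sqm.@queue[0].download='150000'   # kbit/s, roughly 85-90% of measured rate
uci set sqm.@queue[0].upload='15000'      # kbit/s
uci set sqm.@queue[0].enabled='1'
uci commit sqm
service sqm restart
```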


7. Restart services

service network restart && service odhcpd restart && service dnsmasq restart && service firewall restart && service sysntpd restart

8. Verify

# WAN IPv6 from Starlink
ip -6 addr show dev eth0

# LAN delegated prefix
ip -6 addr show dev br-lan

# IPv6 default route
ip -6 route show default

# Connectivity
ping6 -c 3 ipv6.google.com

# hybla active
sysctl net.ipv4.tcp_congestion_control

# MSS clamp rule present
nft list chain inet fw4 mangle_forward

On a LAN client, ip -6 addr show should show a 2xxx: address. The valid_lft and preferred_lft values will reflect Starlink's short lifetimes (~300s/~150s), but they'll be refreshed by RA messages before expiring. https://test-ipv6.com should score 10/10.

If br-lan shows no global prefix after 30–60 seconds, try service network restart and wait another 30 seconds. DHCPv6-PD can take a moment to complete after network restart.


Hope this is useful. Happy to answer questions — particularly around the fw4 MSS bug as that one cost me the most time to track down.

I've also put together a setup script that applies all of the above automatically on a fresh OpenWrt install:


Thanks, posts like this are always appreciated!

Two comments:

Regarding fq-codel and cake, I am not sure your analysis is precise... fq-codel is a combination of a stochastic flow queueing scheduler with a codel-type AQM; cake basically adds a few components to this set: an optional traffic shaper, a secondary BLUE-type AQM that operates in tandem with codel to eventually rein in unresponsive flows, and clever tricks to allow equitable sharing not only between flows, but also between IP addresses, and more.
Since the traffic shaper is optional if you configure cake with bandwidth unlimited it will behave similarly to fq-codel, albeit with a somewhat higher CPU cost, as it still does more things than fq-codel.
Both cake and fq-codel actually need a traffic shaper (actually all they need is relevant back pressure, both work fine with e.g. line rate ethernet with BQL) to effectively counter bufferbloat.

Regarding BBR, unless you actually have a lot of TCP traffic terminating at the router itself (say you run a fileserver, you terminate a TCP based VPN or run a proxy server), this will not really have much effect as most TCP flows are terminating on your end devices, you might want to consider changing the CC algorithm of all devices in your network.

You're right, post has been updated; will upload a script once tested to automate this and make life easy.

On fq_codel/CAKE — you're right, I overstated the case against CAKE. The real reason fq_codel makes more sense here is that Starlink's bottleneck is at the satellite link rather than the LAN-side ethernet port, so neither qdisc does much without a software shaper on the WAN side — and keeping that shaper accurately tuned is impractical when Starlink throughput varies as much as it does. CAKE without a bandwidth limit still adds per-IP fairness and BLUE AQM over fq_codel (at higher CPU cost), so if anyone does add an SQM layer, CAKE would be the better pick. I've updated the post to reflect this.

On BBR — fair point, I missed the obvious: net.ipv4.tcp_congestion_control only affects TCP sessions terminating at the router itself. For most home setups that's close to nothing, unless you're running WireGuard or a proxy on the router (which is common on GL-iNet hardware). I've updated the post and the sysctl comment to make that clear.

WireGuard, if I recall correctly, only works over UDP... but OpenVPN can use TCP.
There are good reasons not to run TCP inside a TCP tunnel, as both the inner and outer TCP stacks will independently initiate retransmits of lost segments, so WireGuard's choice has merits. On the other hand, some ISPs filter out UDP so TCP might be the only option, so OpenVPN's choice also has merits.

Have you tried to flag this with the developers? This might be on purpose, as fw3 did the same iirc; you had to enable mss clamping for br-lan. I do agree though that clamping should better be bidirectional.

Updated, missed that about WireGuard. Not yet, I need to look into the clamping a bit more; please feel free to ask a developer if you have the opportunity, it should really be bidirectional. My connection has been incredibly stable since implementing these changes, so it's not causing any issues either way.

It turns out it was a confirmed bug, not intentional — tracked in openwrt/openwrt#12112 and fixed in firewall4 commit 698a533 (3 November 2023). OpenWrt 24.10 and 25.12 both include the fix.

So the correct approach on 25.12 is just:

uci set firewall.@defaults[0].mtu_fix='1'
uci commit firewall
service firewall restart

fw4 then generates both an ingress rule in mangle_forward and an egress rule in mangle_postrouting. Verified on 25.12.0.

Worth flagging for anyone searching for the manual drop-in workaround: drop-in files with a top-level table inet fw4 { ... } block are broken on 25.12. fw4 now renders its entire ruleset as a single inline nftables script, so a table declaration in a drop-in causes a syntax conflict and service firewall restart fails. mtu_fix 1 is the right fix and it's already there — I just hadn't enabled it.

I've updated the original post to reflect all of this.

The current kernel BBR implementation is old and should not be used. BBR2 was abandoned too without even making it into the kernel... But they are making another one, now with ECN support.


Does Starlink actually change your prefix while connected, or only require frequent renewal? If the prefix changes, LAN devices will need to update their IPv6 address to the new one as the old prefix is no longer routed to your dish.

Also I've heard reports that the new low end subscriptions only have a /64 prefix like LTE does. I don't have Starlink myself.

If only they would... what they support is dctcp-style congestion signalling... not rfc3168 style... I understand why Google might want to do that, but this is not a general "supports ECN" kind of situation, in that it will not work well with cake's rfc3168-style marking...


Yeah, I managed to restrain myself without ranting about Google's next big thing idea :slight_smile:

@timur.davletshin — fair point on BBR v1. Worth noting it only affects TCP sessions terminating at the router anyway (OpenVPN etc) — not LAN client traffic through NAT. For a plain NAT router it's nearly a no-op. Left it in as it's harmless if you do run OpenVPN locally.

@mk24 — from direct experience (Starlink residential, Australia): the prefix stays stable day-to-day. Starlink reissues the same /56 repeatedly with short lifetimes rather than changing it, so the odhcpd fix holds up well. The prefix does change on a dish reboot or beam handoff, but that's occasional.

On /64 — couldn't find any confirmed reports of plan tiers being restricted to /64; standard residential always gives a /56. The more likely cause is a Router Solicitation keepalive failure: if the router stops sending RS packets, Starlink falls back to delegating a /64. On OpenWrt 25.x odhcp6c handles this natively. On 23.05/24.10 opkg install ndisc6 fixes it.
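If you do hit that /64 fallback on 23.05/24.10, a hedged sketch of the keepalive (the interface name and the 2-minute interval are assumptions, not tested values):

```shell
# Send a periodic Router Solicitation from cron. rdisc6 comes with the
# ndisc6 package; -1 exits after the first RA received, -q keeps it quiet.
echo '*/2 * * * * rdisc6 -1 -q eth0' >> /etc/crontabs/root
/etc/init.d/cron restart
```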


Well, partly true, WireGuard still does not work over TCP by itself (you can build a TCP tunnel through which to run WireGuard's UDP tunnel, but WireGuard does not support that natively).

Useful for OpenVPN over TCP, sure, for WireGuard, less so. But harmless all the same.

Client devices have already been using it via Google services, which have been served by some customized version of BBR for more than 5 years. It is employed by both TCP and QUIC connections.

For a plain NAT router, the two are completely independent. Clients already benefit from server-side BBR on Google connections regardless of what the router kernel is set to, and setting BBR on the router does nothing to change that. The router sysctl only matters for connections terminating at the router: OpenVPN, a local proxy, etc.

That was my point - 99% don't need it, because they are mere clients and use what they are served by content providers. BBRv1 (the one in OpenWrt's kernel) is dated and has known problems - just don't use it. Wanna play with congestion control and local proxy? Try tuning up CDG for your needs. It may provide better performance in case of WiFi.

You're right, switched congestion control to hybla, instructions updated. Managed to get the OpenWrt IPv6 config from the Starlink gen 2 router; the IPv6 settings now mirror their official router with some tweaks. Post updated.


Do you have a login to the wiki? If so this would be a good candidate for a wiki page.

No, don't have wiki access; yes, I think it would be handy up there.

Not trying to be mean here, but I get the feeling that in both this topic and the previous one that discussed it, people are just trying configuration changes at random without really understanding what is going on, then posting whole config sets when they find one that appears to fix some problem. There's some valuable info here, to be sure (I didn't know about the NTP thing, for example), but I think most of these config settings are probably unnecessary.

Apologies for the length of this post, but I believe it is important to understand the technical details here to prevent old information from just propagating without good reason. The TL;DR is that I expect the only really critical bit (at least for OpenWRT 24.10+) that is Starlink-specific is the min/max RA intervals. Feel free to disagree, of course.

Specific feedback based on what's present on your GitHub repo and what I remember of the deleted posts, as the 1st post doesn't seem to reflect the last few changes you made:

Clarification: the Prefix Delegation (PD) does not come from Router Solicitation (RS) / Router Advertisement (RA) messages, it comes from DHCPv6 request, renew, reply messages. It's the WAN IP address that uses RS and RA messages, as part of SLAAC address auto-assignment. That being said, if it loses the SLAAC address, it may stop renewing the DHCPv6 lease, so it's not unrelated.

There's not supposed to be a need to send periodic RS messages from the client to maintain the address, the upstream router is supposed to send unsolicited RA messages frequently enough to prevent the lifetimes from expiring. Again, though, this is about the WAN IP, not the PD.

It used to be the case that SpaceX had their IPv6 infrastructure misconfigured such that it wasn't sending out unsolicited RA messages nearly often enough, but I thought that got fixed years ago. It certainly seems to be OK on my own Starlink connection on a router running OpenWRT 24.10 without the need for periodic RS messages, but I obviously cannot speak to whether they have misconfigurations elsewhere. Anyway, this calls into question the need for ndisc6. It shouldn't harm anything, but unless a user actually observes a problem with too-infrequent RA messages or has unusually high packet loss, I would not personally recommend they keep workarounds in place for a problem that has long since been fixed.

That's... probably only partially true. Without some of the changes, the preferred lifetime can expire, but the valid lifetime shouldn't unless you have an outage. Can still be bad, though.

Not sure what this is meant to do. It will cause the router to announce itself as a default route even if there is no default route present, which doesn't seem particularly useful. Maybe it helps prevent address loss while the dish reboots, though, I don't know.

This reduces the time LAN clients will consider this router able to default route from the default value of 2700s to 600s. Doesn't seem useful, especially given that this time limit will get reset every RA interval, anyway.

These are the actual important lines to prevent problems with IP address changing due to lifetime expiration. Here's why:

The router gets IPv6 address info from 2 different sources. It gets its WAN IP via SLAAC, and the PD prefix via DHCPv6. It then advertises the PD prefix on the LAN interface using RA messages and DHCPv6 (if configured to do so).

As far as I can tell, SpaceX has things configured such that both the WAN IP and the PD always get a 300s valid lifetime and 150s preferred lifetime at the time of renewal.

For the WAN IP, this renewal happens every time it sees a RA message from the upstream router. As long as it sees one of those at least every 2.5 minutes, it won't expire even its preferred lifetime, let alone the valid lifetime.

For the PD, the renewal happens at half the preferred lifetime, so every 75s, unless there is packet loss, in which case it will take a bit longer as it will need to retry.

For the prefix that is advertised on the LAN, the valid and preferred lifetimes will be set to however much time is remaining on the lease at the time the RA message is sent out. Since it renews that lease every 75s, this means the valid lifetime advertised can be as low as 225s and the preferred lifetime can be as low as 75s, depending on the timing of when it sends the RA message vs when it last renewed the DHCPv6 lease.

Meanwhile, the OpenWRT default min RA interval is 200s and the default max is 600s. odhcpd will automatically lower the max interval to the remaining valid lifetime if it is lower (and it will be), but it does not do so for the preferred lifetime. So the default settings here can easily result in expiration of the preferred lifetime before the next RA comes along to reset things, which can cause clients to switch to using other IPv6 addresses for new connections if they are available.

This doesn't mean the default values in OpenWRT are somehow wrong, as those are the default values specified in RFC 4861. They just don't work well with Starlink's wacky network configuration.

So anyway, setting the max RA interval to 60s should be sufficient to prevent expiration of the preferred lifetime, even if the DHCPv6 renewal is delayed a little due to packet loss on the WAN side.

For the min RA interval, that RFC lists the default for that (MinRtrAdvInterval in the RFC doc) to be 1/3 of the max, which would be 20s. In practice, I don't think 20s vs 30s is going to make much of a difference, so I expect either is fine.

I'm pretty sure these don't do what you seem to think they do. As their names imply, they enforce a maximum value to the lifetimes, not a minimum. They won't do anything for the PD prefix since those lifetimes are already way below the default maximums (2700s and 5400s), but will change the ULA prefix (the one that starts with fdxx:) to start advertising these longer lifetimes instead of those defaults, which does not seem especially useful.

As far as I can tell, no configuration setting will result in advertising valid or preferred lifetimes that are longer than the remaining lease times, short of configuring the public IPv6 addresses as static IPs.

At least in 24.10 and 25.12, mtu_fix is not valid in the defaults section and results in the following error when firewall service is restarted:

Section @defaults[0] specifies unknown option 'mtu_fix'

Rather, mtu_fix needs to be set in the zone sections. However, 1 is already the default value for the wan zone, which I'm assuming is the one that would need it. So really nothing should need to be done for OpenWRT 24.10 or later, except maybe pulling out 23.05 bug workarounds.

Hybla was designed for satellite and high-latency links

(This one is from your GitHub repo)

I'm not especially familiar with this problem space, but if this statement is true, then hybla may not be appropriate for Starlink users. The 500-600ms latency numbers you see mentioned for satellite Internet is for geosynchronous satellites. Starlink uses Low Earth Orbit (LEO) satellites, which provide for a much lower latency, in line with typical broadband Internet. Packet loss is probably higher than typical fiber or cable broadband, though, so it wouldn't surprise me if some TCP tweaks would help, I'm just not sure one targeted at high latency would be the best.

While these do seem like they may be interesting, they don't seem especially specific to Starlink connections. Unless there is some interaction with the Starlink CGNAT, but I can't think of anything obvious that would make it so.

Also, some of these are setting values that are already the default values, as far as I can tell.
