TL-WR1043ND -> R7800 -> ?

I've had a TL-WR1043ND running OpenWrt for many years, and it has worked great. However, I recently upgraded to a 1000/45 cable connection, and the TL-WR1043ND can only NAT at about 150 Mbit/s, so it was time for an upgrade.

Unfortunately the R7800 has not lived up to that. With some tweaking I was able to reach 900 Mbit/s (without QoS), but it is very unreliable and kernel panics, sometimes as often as three times a day.

The choices now seem extremely limited. Options I'm considering include a NUC running OpenWrt (possibly keeping the TL-WR1043ND as a Wi-Fi access point), or perhaps the Turris Omnia/Mox (which might be a sidegrade from the R7800, though not crashing would be a plus).

Another option is simply to switch back to the TL-WR1043ND and hold on tight until Wi-Fi 6 devices start appearing in quantity.

The R7800 is a pretty solid open-source all-in-one. I run SQM on the upload side of my asymmetric cable connection and regularly get decent speeds, an A bufferbloat grade, and good roaming (802.11r).

The theoretical maximum for a one-gigabit line is about 940 Mbit/s of TCP goodput. I'm able to get within 10% of that on better bandwidth-measuring sites like
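That ~940 Mbit/s ceiling falls out of per-frame protocol overhead. A rough back-of-the-envelope check (assuming standard gigabit Ethernet with a 1500-byte MTU and TCP timestamps enabled — not figures from this thread):

```shell
# Approximate TCP goodput on gigabit Ethernet with a 1500-byte MTU.
# Per frame: 1448 bytes of payload (1460 MSS minus 12 bytes of TCP
# timestamp options) travel in 1538 bytes on the wire:
#   1448 payload + 20 TCP + 12 options + 20 IP + 14 Ethernet
#   + 4 FCS + 8 preamble + 12 inter-frame gap = 1538
awk 'BEGIN { printf "%.0f Mbit/s\n", 1000 * 1448 / 1538 }'
# → 941 Mbit/s
```

So roughly 941 Mbit/s of goodput, in line with the ~940 figure speed-test sites report.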

What is your goal with your new hardware? What problem are you trying to solve?

This is a good short thread on R7800 SQM and performance tweaks if you want to compare settings:


The performance without SQM is OK, though I do get bufferbloat on the downlink over Wi-Fi. SQM did seem to help with this, but maybe that's just because it throttles the bandwidth down so much.
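For comparison, a downlink-shaping SQM setup in /etc/config/sqm looks something like the sketch below. The interface name and rates are illustrative assumptions, not anyone's actual settings from this thread:

```shell
# /etc/config/sqm -- illustrative sketch only; interface and rates are assumptions
config queue
        option enabled '1'
        option interface 'eth0.2'      # WAN interface (assumption)
        option download '900000'       # ~90% of a 1000 Mbit/s downlink, in kbit/s
        option upload '42000'          # ~93% of a 45 Mbit/s uplink, in kbit/s
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
```

Setting the shaped rates below the line rate is what moves the queue out of the modem and into cake's control; it deliberately trades some throughput for latency, which matches the "throttling the bandwidth down" observation.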

The real dealbreaker is the crashes. From reading the forum I was under the impression that this is a common problem with this model, but maybe it isn't, and my particular unit could be bad. I could connect to the TTL serial port and investigate further.

Have you already tried a current master snapshot (usual caveats apply), or one of hnyman's master-based community builds?

There has been an issue with certain types of jumbo frames in stmmac (the Ethernet driver used on ipq806x and other targets), which is fixed in kernels 4.19 and 5.4 (i.e. in current master), so there is a good chance your issue is fixed there (or you really do have a hardware problem).

I was previously running master for the L2 cache fix but switched back to 19.07 once that fix landed there. I'll switch back to a master build and try it. I haven't figured out the exact trigger, so it might be a few days before I can confirm whether it's better there.

I've had over 8 days of uptime with hnyman's master build, so my problems are solved for now. I don't know whether that was due to the new kernel or something else in that custom build, but probably the former.


This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.