I noticed that after installing on the ERLite-3, only the IPv6 DHCP server is running. I had to replace that with an IPv4-capable DHCP server. Why is that?
When wired in, I only get 10% of the speed I got with the Ubiquiti firmware, which is the same speed I get from another OpenWrt device. Is this normal? Is there some hardware on the ERLite that is unsupported? Is there something I need to enable?
edit:
It would be useful if I could test speeds past 200 Mb/s on the router itself.
edit2:
Perhaps I spoke too soon. The slowness is due to the lack of proprietary offloading support. That support was brought over into DebWRT, but AFAIK that project is discontinued. Basically, if this router goes EOL, it'll either be 10x slower or insecure. Food for thought. Next time I buy something, I need to dig into the chipset to see how open it is, to figure out how long the device will work securely! What a PITA.
An IPv4 DHCP server should be active by default on all official OpenWrt images.
We'd need to see your config.
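For comparison, the stock lan pool in /etc/config/dhcp looks roughly like this (a sketch of the defaults; exact values vary by release):

```
config dhcp 'lan'
	option interface 'lan'
	option start '100'
	option limit '150'
	option leasetime '12h'
	option dhcpv4 'server'
	option dhcpv6 'server'
	option ra 'server'
```

If `uci show dhcp.lan` on your box looks substantially different from that, it would explain what you saw.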
But as you mention, yes, they do have some secret sauce. That said, if you don't bridge the physical interfaces, that may help considerably. Again, let's see the config.
As far as the vendor firmware goes, it does seem that Ubiquiti is working on a major update to the EdgeMax firmware. Check their EA (Early Access) software releases and you'll see. I have no idea how good it will be or how committed they are to continuing to support that line, but I was surprised to see an entirely new firmware in the works.
There are many facets of "performance".
As far as offloading is concerned, it applies to forwarded TCP+UDP traffic (i.e. NAT and routing), not to I/O terminating on the device or to bridging.
Test via forwarding:
You can install iperf3 or netperf on a wired machine in a different subnet from the test client and assess the speed at home.
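A minimal sketch, assuming the server sits at 192.168.2.10 behind the router and the client is in another subnet (addresses are placeholders):

```
# on the wired server behind the router:
iperf3 -s

# on the test client in the other subnet, so traffic is forwarded by the router:
iperf3 -c 192.168.2.10 -t 30        # 30-second run, client -> server
iperf3 -c 192.168.2.10 -t 30 -R     # reverse direction
```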
Whichever test you choose:
Install htop; press F2 → unhide kernel threads and enable detailed CPU stats → F10 to save.
Now check the router under test: the red and lilac bars should be roughly equal (±10%) on both cores.
If not, enable packet steering.
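From the CLI that is (same toggle as the checkbox in Network → Interfaces → Global network options):

```
uci set network.globals.packet_steering='1'
uci commit network
/etc/init.d/network restart
```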
....
Expecting Waveform test links from you and a screenshot.
Thanks. I didn't think I could get much more speed without offloading. Glad to hear that it might be possible.
How do I make an anonymised config file and post it here? Like this?
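Something like this, maybe (just my guess at which files matter, with MACs and public IPs blanked by hand)?

```
ubus call system board
cat /etc/config/network    # redact MAC addresses and public IPs before posting
cat /etc/config/dhcp
cat /etc/config/firewall
```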
You were asked for links to Waveform tests with offload on and off, plus a screenshot from htop while the tests are running. If you have problems with other packages rather than with overall forwarding performance, you'll have to open new topics.
OK! Got it. Sorry, got mixed up. About testing with and without offloading: I can only use offloading with the original Ubiquiti firmware, so I'll need to flash that back to test it. I don't quite understand that part of the processing, other than that it confirms bufferbloat with offloading?
So if I understand this correctly, because both CPU cores are not equally loaded, I should set up packet steering via:
```
# opkg install shortcut-fe

config flow
	option src 'wan'
	option dest 'lan'
	option proto 'tcp'
	option helper 'shortcut-fe-tcp'

config flow
	option src 'lan'
	option dest 'wan'
	option proto 'tcp'
	option helper 'shortcut-fe-tcp'
```
Looks like you need to upgrade to OpenWrt 23.05.3 and take measurements under a generic configuration.
fe-lite (aka turboacc) from ImmortalWrt et al. defeats any QoS by pushing packets through the stack ahead of any QoS queues.
19.07 is really old: it has been EOL and unsupported for several years and has many known security vulnerabilities. Please upgrade to the latest release (23.05). During the upgrade, do not keep settings, as they are not compatible.
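From the CLI that would be something like this (the filename is a placeholder; fetch the real octeon/generic sysupgrade image for the ERLite-3 first):

```
# -n = do not keep settings across the upgrade
sysupgrade -n /tmp/openwrt-23.05.3-octeon-sysupgrade.tar
```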
Redo the measurements without any additional QoS configuration, and compare wired vs. wireless. We cannot clean your air of radio waves, but we can maximize speed to the LAN and consequently reduce CPU load while on wifi.
Testing over wired Ethernet requires me to enable a new Ethernet port and give it internet access. When I add this eth0 port to br-lan (roughly the change sketched below), I lose DHCP for clients and connectivity on this in-use network, so after a lot of wrangling I'm just testing behind the wifi AP. Hopefully this testing is good enough?
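For reference, the bridge change I attempted was roughly this (my port names, so treat it as an example):

```
config device
	option name 'br-lan'
	option type 'bridge'
	list ports 'eth1'    # existing LAN port (example name)
	list ports 'eth0'    # the newly enabled port
```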
After upgrading to:

```
"version": "23.05.3",
"revision": "r23809-234f1a2efa",
"kernel": "5.15.150",
"target": "octeon/generic",
```
I got about 80 Mb/s, down from ~150 Mb/s. Thanks to your advice, I enabled packet steering and throughput is now up to nearly 200 Mb/s.
Bufferbloat is still grade C though:
Along the way, I installed the NextDNS CLI, and its setting wasn't passed on to DHCP clients initially. So I set 192.168.1.1 as the custom DNS in DHCP, and now, after a lot of restarts, clients are working again.
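For anyone following along, that amounts to (values from my setup, so adjust to taste):

```
uci add_list dhcp.lan.dhcp_option='6,192.168.1.1'   # DHCP option 6: DNS server pushed to clients
uci commit dhcp
/etc/init.d/dnsmasq restart
```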
Under load, CPU usage varies between the cores from ~30% to 50%.
Also check that the packet steering checkbox is enabled under Network → Interfaces → Global network options.
The load should be within ±10% between cores, and certainly not hitting 100% on one core.
The current source of latency is a few packets being dropped and retransmitted due to unavailable CPU time.
While on software offload you can try one of the SQM qdiscs, though you'll need to switch to a lighter one (noqueue < bfifo < pfifo_fast < fq_codel < pie < cake) if the CPU starts hitting 100% when an SQM qdisc is enabled.
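A starting point in /etc/config/sqm could look like this (interface name and rates are placeholders; simple.qos pairs with fq_codel, piece_of_cake.qos with cake):

```
config queue 'eth1'
	option enabled '1'
	option interface 'eth1'     # the wan-facing device
	option download '180000'    # kbit/s; 0 disables ingress shaping entirely
	option upload '18000'       # kbit/s
	option qdisc 'fq_codel'
	option script 'simple.qos'
```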
Thanks! Great patch!
Under load, CPU usage came down to ~60% with offloading enabled after the patches.
However, bufferbloat is still grade C.
I thought SQM was incompatible with offloading, though? I only have cake and fq_codel available to try in the interface at the moment. I'm not sure if I should be applying QoS to the bridge interface, or just to the LAN or WAN. I'll read up on SQM.
Just to confirm: checking the hardware offloading box won't do anything, since the hardware is unsupported? But it also won't lock me out of the router?
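If I read LuCI right, those two checkboxes map to these options in /etc/config/firewall:

```
config defaults
	option flow_offloading '1'      # software flow offloading
	option flow_offloading_hw '1'   # hardware variant; needs driver support
```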
You have more latency on upload, which means that with limited CPU you may set the download bandwidth to zero, disabling ingress shaping and halving the CPU hunger of the qdiscs.
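With the sketch config from earlier that would be:

```
uci set sqm.eth1.download='0'   # 'eth1' is the example section name from the sketch above
uci commit sqm
/etc/init.d/sqm restart
```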
I think it can't be called br-wan, because the dash isn't allowed in the interface name, so I could use brwan instead.
Why the need to apply SQM on the physical interface? What's the drawback of just applying it to the bridge that's already in place, even just for testing?