R7800 performance

Thanks for that. I sort of thought that as well. The thing that made me doubt is that the ISP's router itself, on 2.4 GHz, easily goes up to 90 Mbps in the same room. So it seems the Netgear router is performing worse, which I was not expecting.

OEM proprietary drivers sometimes have a speed advantage, and 2.4 GHz isn't the R7800's strongest band.

Ok, good to know, thanks.

I get almost 100 Mbps on 2.4 GHz with a smartphone using 40 MHz mode. Most of the smartphones I've used lately only support 20 MHz on 2.4 GHz. That's in the same room, and the 2.4 GHz band is of course relatively crowded.
With the same smartphone I get ~60-70 Mbps through three brick walls and over 12 meters from the router.
Just for comparison: only two other routers I've tried (Belkin RT3200 and an old TL-WR1043ND) were able to reach the same spot, and they provided 15-25 Mbps at most through the three walls. Still, this is good considering that more than 10 other routers couldn't even get a signal to that farther point; there was no connection at all.
I use the ath10k driver and firmware, and I think the R7800 is the most potent 2.4 GHz performer. This is with just one fifth of its full power.
As said many times on the forum, if you use Speedtest to obtain the result you may get very inaccurate values.

Hi @sppmaster, thanks for this insight. This is very interesting and indicates a problem in my configuration, I think.

I have tried several different online broadband speed tests, and they all indicate speeds below 60 Mbps.
I have tried varying my configuration: 40 MHz mode, WPA2 instead of WPA3, etc. I can never get it right.
I use the latest firmware (OpenWrt 22.03.0), which I assume uses the ath10k driver?

Do you have any pointer as to what I could be missing?

If you wish to try it, use @ACwifidude's NSS-enabled build. I recommend you first try his latest master branch. This one has the ath10k driver/firmware:

R7800-20220926-MasterNSS-ath10k-sysupgrade.bin

Then test the WLAN performance with an iperf3 server and client. Run the server on a wired computer connected to the R7800 switch. Search for details in the same thread; there are many posts about it.
The version you currently use probably has the ath10k-ct driver/firmware, which is the default for OpenWrt. I couldn't get good results with this driver, hence I use ath10k.
Another "wrong" thing I've found is that for ISP speeds of 100 Mbps and below the Speedtest results are really inaccurate. Even on 5 GHz the test gives only 50-60 Mbps most of the time.
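For reference, a minimal iperf3 run looks like the sketch below; the LAN address 192.168.1.100 is just a placeholder for whatever IP your wired server actually has:

```shell
# Server side: a wired computer plugged into the R7800 switch
iperf3 -s

# Client side: the wireless device under test
iperf3 -c 192.168.1.100 -t 30       # client sends (upload direction)
iperf3 -c 192.168.1.100 -t 30 -R    # reverse mode: server sends (download direction)
```

Running both directions separately gives you a cleaner picture than a single browser speed test.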

Hello guys,

on my side I've been using the 19.07.3 release for quite a long time; I was getting ~500 to 600 Mbps download over wireless and > 600 Mbps upload behind a Gbps link (900 Mbps to 1 Gbps using the Netgear firmware).

A few months ago I tried updating to 21.02.1, with which I was hardly reaching 300 to 400 Mbps in download. I've just tried the latest 22.03.0 and it's approximately the same story; sometimes I'm not even able to reach 300 Mbps.

Does anybody have an idea about differences between releases that could lead to that, and how to solve it?

You should use NSS-enabled OpenWrt builds in order to reach 1 Gbps. The original Netgear firmware has the NSS cores enabled.

Actually the idea is not to reach the same transfer rate as the Netgear firmware; the comparison was just there to show that my internet link and laptop are able to handle that transfer rate.

In the end I would be happy if I can simply maintain the level of performance I was getting with 19.07.3 release.

For NSS-enabled builds, I think I've already tried that in the past without much success. I'll take time to look at that again, thanks.

I get your point. Are you sure that you've enabled software flow offloading in the firewall settings?
In my opinion you'd be better off using NSS builds, because in addition to reaching the full 1 Gbps NAT speed you get it with no CPU usage at all, which may be better for the life of your device.
And NSS builds have already reached sufficient stability.
But it's your decision after all.

Yes, software flow offloading was enabled. Basically, when flashing to newer versions I was restoring my 19.07.3 setup, in which it was enabled, and I checked that it was still correctly enabled after restoring the setup.

You're perfectly right: if I can get the performance I want with NSS-enabled builds, that's better. The fact is just that I was not able to reach the desired level of performance when I tried them. I'll make a new attempt with the latest NSS builds; it's been a long time since I last tried them.

I forgot to mention that with 19.07.3 or 22.03.0, when launching bandwidth tests, the cores are not really overloaded; I see something like 45% / 5% load.

Just make sure that you read the recommendations in the NSS thread. To summarize: with an NSS build you don't enable software flow offloading, because you have hardware NSS acceleration on instead.
It is much better (not to say compulsory) to start with a clean configuration and not restore an older one.
CPU usage with NSS is 0% load.
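As a sketch, turning software flow offloading off from the CLI looks like this (assuming the standard OpenWrt firewall defaults section; verify the option name on your build):

```shell
# Turn software flow offloading off; on NSS builds the hardware
# acceleration handles NAT instead
uci set firewall.@defaults[0].flow_offloading='0'
uci commit firewall
/etc/init.d/firewall restart
```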
Cheers!

The performance difference might be related to the packet steering option (its default changed between versions; it's not on by default now). Another thing to test would be enabling the performance governor for both cores and locking their frequencies to the max, at least for testing (the scheduler and cpufreq support have been worked on, and the ramp-up time might take longer than the duration of your speed test).
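A quick way to test that, using the standard Linux cpufreq sysfs interface (the settings revert on reboot, so this is safe for an experiment):

```shell
# Switch both cores to the performance governor
for cpu in /sys/devices/system/cpu/cpu[01]; do
    echo performance > "$cpu/cpufreq/scaling_governor"
done

# Lock the minimum frequency to the maximum to rule out ramp-up time
for cpu in /sys/devices/system/cpu/cpu[01]; do
    cat "$cpu/cpufreq/cpuinfo_max_freq" > "$cpu/cpufreq/scaling_min_freq"
done
```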

So...

I redid a few tests with 22.03.0 and the NSS-enabled build (ath10k), setting up from scratch.

With 22.03.0 I've been able to reach 500-600 Mbps down and almost 600 Mbps up, without even enabling packet steering or the performance governor.

With the latest NSS build from 10/11, based on stable 22.03, I'm reaching 500-550 Mbps down, without any CPU usage.

I used the latest stable build from October 2022, and I get over 900 Mbps on a Gbit link. The only problem I had was VLAN performance, which only gave around 450-500 Mbps.

After fixing some settings and adding irqbalance I am only losing some 5 to 10 Mbps compared to plugging directly into my modem, which might as well be due to the testing setup and randomness in Speedtest results.

I am not at home right now, will post my settings later today.

Edit:
These are my performance settings @tetienne:

Installed IRQ balance:

opkg update && opkg install irqbalance
nano /etc/config/irqbalance   # set "option enabled '1'"
/etc/init.d/irqbalance start

startup:

# Put your custom commands here that should be executed once
# the system init finished. By default this file does nothing.

#Pin ETH0 and ETH1 to cpu0 and cpu1
echo 1 > /proc/irq/37/smp_affinity
echo 2 > /proc/irq/38/smp_affinity
#There might be additional settings that will squeeze out more, but this is good enough for me.

#Not sure what the defaults are, probably not needed in version 22, but can't hurt either
echo 35 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold
echo 10 > /sys/devices/system/cpu/cpufreq/ondemand/sampling_down_factor

exit 0
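One caveat about the startup script above: the IRQ numbers (37 and 38 here) can differ between kernel versions and builds, so it's worth confirming which lines the ethernet interfaces actually use before pinning:

```shell
# List the IRQ lines used by the ethernet interfaces;
# the first column is the IRQ number to use in /proc/irq/<n>/smp_affinity
grep -E 'eth0|eth1' /proc/interrupts

# smp_affinity takes a CPU bitmask: 1 = cpu0, 2 = cpu1, 3 = both cores
```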

No QoS or SQM, just using Software flow offloading in the firewall settings.

@MathijsNL

thanks for sharing. When you talk about 900 Mbps, is that wired or wireless transfer?

I would suggest either only pinning IRQs manually, or explicitly excluding your custom-pinned IRQs from being changed by irqbalance in its config.

That is wired. Wireless goes around 450 Mbps, but I haven't really tried to optimize that yet.

The docs only say to put the desired value in the smp_affinity file. It does look like you are right about adding the IRQs to the banirq list:

--banirq=
Add the specified irq list to the set of banned irqs. irqbalance will not affect the affinity of any irqs on the banned list, allowing them to be specified manually. This option is additive and can be specified multiple times.
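In UCI terms that could look like the following sketch; whether the banirq list is honored depends on the OpenWrt irqbalance package's init script, so treat this as an assumption to verify against your version:

```
# /etc/config/irqbalance (hypothetical example, using IRQs 37/38 from the pinning above)
config irqbalance 'irqbalance'
	option enabled '1'
	list banirq '37'
	list banirq '38'
```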

I am seeing similar performance reliably once I pin the ports to the cores.
Interestingly, with irq balancing enabled but without pinning, I had to run the speed test twice in quick succession: the first one started slow and sped up, and the second test saturated the link.

Idle Latency:     2.22 ms   (jitter: 0.05ms, low: 2.18ms, high: 2.27ms)

    Download:   915.48 Mbps (data used: 457.1 MB)
                  8.95 ms   (jitter: 0.90ms, low: 1.95ms, high: 11.50ms)
      Upload:   924.81 Mbps (data used: 884.5 MB)
                  7.09 ms   (jitter: 0.55ms, low: 2.16ms, high: 10.35ms)

Packet Loss: 0.0%

Overall, the story is like this:
Symmetric 1 Gig WAN
No SQM, QoS
Packet steering enabled
Stock 22.03.0: 350 Mbps up/down
22.03 + software offloading: 620 Mbps up/down
22.03 + software + hardware offloading: 620 Mbps up/down
22.03 + software offloading + irq balancing: starts at 600 Mbps, but shows 910 Mbps on second test.
22.03 + software offloading + irq port-core pinning + ondemand governor: 812/920 Mbps down/up
22.03 + software offloading + irq port-core pinning + performance governor: 915/930 Mbps down/up

I think the slight down/up asymmetry is real.

Pretty cool that I started the day at 350 Mbps, and with a few easy things (but a couple of hours of reading) got to 900+ Mbps.

Is there anything left to squeeze out?
