I remember having this issue back in 2023 (packet loss increased after flashing OpenWrt), and the instructions for flashing back to stock firmware failed me, so I stopped streaming over WiFi. I started again recently, so I wanted to deal with this, whether by reporting it or by getting assistance in reverting to stock.
It’s known in the Virtual Desktop community (check their Discord) that only a select few WiFi 5 routers are stable enough to stream VR at 200Mbps, and that includes the Archer C6 and A6. WiFi AC is stable up to ~200Mbps, AX up to ~400Mbps, and AXE/BE up to ~600Mbps. Above those thresholds, packets start dropping. However, on OpenWrt firmware, streaming remains stable (although not perfect) only at around 50Mbps on AC, rather than the usual 200Mbps. By the way, I can easily download data at 400Mbps, which proves I’m using a 5GHz 80MHz channel, but the top speed doesn’t matter here; it’s the stability that matters.
So when I stream VR at 200Mbps, packet loss causes drops to 50Mbps every few seconds. This doesn’t happen on stock firmware on the C6 v4, and I recall that C6 v3 was more stable on stock firmware. I don’t know any ways of easily testing it (without needing a Pico or Quest headset), as this requires testing for packet loss at a high bandwidth. Tests don’t show any packet loss at low bandwidth.
I also have a D-Link DIR-853 R1 running OpenWRT, which exhibits even more packet loss, but I don’t recall if it had this issue on stock firmware. Additionally, I have an Archer C6 v4 on stock firmware that’s pretty stable.
Linux accounts for dropped packets (the C6 v4 is a proprietary architecture with no indication of features/speeds, running VxWorks; it is a completely unrelated device apart from the sticker).
Please connect to your OpenWrt device using ssh and copy the output of the following commands and post it here using the "Preformatted text </> " button (red circle; this works best in the 'Markdown' composer view in the blue oval):
Remember to redact passwords, VPN keys, MAC addresses and any public IP addresses you may have:
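For example, MAC addresses can be masked mechanically before posting. A minimal sketch, shown here on a sample config line (on the router you would pipe the real output, e.g. `uci export wireless`, through the same sed; passwords and keys still need a manual check):

```shell
# Mask anything that looks like a MAC address (aa:bb:cc:dd:ee:ff).
# The sample line below stands in for real `uci export wireless` output.
sed -E 's/([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}/XX:XX:XX:XX:XX:XX/g' <<'EOF'
	option macaddr 'aa:bb:cc:dd:ee:ff'
EOF
```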
You have to show ethtool output. Install any of the available package options. The nf_conntrack output is also missing (I am looking for very high "invalid" and "error" numbers, in the range of whole percents).
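For reference, a sketch of how those conntrack counters can be read: the hex rows come from /proc/net/stat/nf_conntrack (one row per CPU), and the column order varies by kernel, so check the header line on your device. The sample rows below are invented; in this assumed layout, "found" is column 3 and "invalid" is column 5:

```shell
#!/bin/sh
# Sum "found" and "invalid" across CPUs; all counters are hex.
# An invented two-CPU sample stands in for /proc/net/stat/nf_conntrack.
f=0; i=0
while read -r _ _ found _ invalid _; do
  f=$((f + 0x$found)); i=$((i + 0x$invalid))
done <<'EOF'
000003e8 00000000 0001f4a0 00000000 00000032 00000000 00000000 00000000
000003e8 00000000 0001d2f0 00000000 00000014 00000000 00000000 00000000
EOF
# Report the invalid rate per 10,000 found packets (integer math).
echo "found=$f invalid=$i invalid_per_10k=$((i * 10000 / f))"
```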
Interrupts are not very well balanced.
Are you using some "performance script" or similar that sends all network processing to the 3rd core?
Try other packet steering options (LuCI/Network/Interfaces/Global options) and measure.
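For reference, the CLI equivalent of that LuCI toggle, assuming a recent OpenWrt where the option lives under network.globals:

```shell
# Toggle packet steering from the shell (0 = off, 1 = enabled);
# equivalent to the checkbox under Network -> Interfaces -> Global options.
uci set network.globals.packet_steering='1'
uci commit network
/etc/init.d/network restart
```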
That imbalance/overburdening of one core causes RX drops in the network stack.
If the problem still persists after traffic is balanced to within 0.5x-2x between the first and last two cores, follow up with this:
Increase the netdev budget as in the Red Hat guidance.
This is contrary to the normal MT7621 behavior, with IRQs on CPU0, or with irqbalance (which works sometimes) moving half of the traffic to CPU2 (i.e. between the "heads" of the SMT pairs).
Normal would be a +/-20% balance between CPU0 and CPU2.
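The netdev budget step can be sketched like this (the values are illustrative, not a recommendation for this router; kernel defaults are typically 300 packets / 2000 µs per softirq cycle):

```shell
# Raise the per-softirq NAPI polling budget (packet count and time cap).
sysctl -w net.core.netdev_budget=600
sysctl -w net.core.netdev_budget_usecs=4000
# To persist across reboots on OpenWrt, append the same keys to
# /etc/sysctl.conf:
#   net.core.netdev_budget=600
#   net.core.netdev_budget_usecs=4000
```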
Reading this feels like reading green numbers on a monitor in The Matrix.
If the problem still persists after traffic is balanced to within 0.5x-2x between the first and last two cores, follow up with this:
Increase the netdev budget as in the Red Hat guidance.
It says:
To determine whether tuning the net.core.netdev_budget parameter is needed, display the counters in the /proc/net/softnet_stat file:
Well, the decimal output suggests there are no dropped packets:
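For anyone following along, a sketch of how those hex counters decode to decimal: each row of /proc/net/softnet_stat is one CPU, and on current kernels column 2 is the dropped count and column 3 is time_squeeze. The two rows below are an invented sample standing in for the real file:

```shell
#!/bin/sh
# Decode the first three softnet_stat columns from hex to decimal,
# one output line per CPU.
cpu=0
while read -r processed dropped squeezed _; do
  printf 'cpu%d processed=%d dropped=%d time_squeeze=%d\n' \
    "$cpu" "0x$processed" "0x$dropped" "0x$squeezed"
  cpu=$((cpu + 1))
done <<'EOF'
0000272d 00000000 00000001 00000000 00000000 00000000 00000000 00000000
00001a2b 00000000 00000000 00000000 00000000 00000000 00000000 00000000
EOF
```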
Ok, so with RPS, the packets are equally distributed between CPU 0, 1 and 2. But the speed still drops to 50 Mbps when I physically move the receiving device quickly. It happens even with Hardware Flow Offloading. Where the stock Archer would drop from 150 Mbps to 100 Mbps, the OpenWRT Archer drops to 40-50 Mbps.
Contrary to what the Red Hat guide describes, the values in the column aren’t rising anymore, even when bandwidth drops, so there’s no reason to make the changes they’re suggesting.
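For completeness, the RPS distribution mentioned above is configured per receive queue as a CPU bitmask in sysfs; a sketch with an illustrative interface name (mask 7 = CPUs 0, 1 and 2):

```shell
# Steer receive processing for eth0's first queue to CPUs 0-2.
# Interface and queue names are illustrative; list yours with
# `ls /sys/class/net/*/queues/`.
echo 7 > /sys/class/net/eth0/queues/rx-0/rps_cpus
cat /sys/class/net/eth0/queues/rx-0/rps_cpus
```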
Oh, so Hardware Offloading enables beamforming? Nice to know. However, enabling/disabling it makes no difference in my case.
Anyway, looking at the debugging info, it seems that the issue isn’t dropped packets, but another kind of signal loss, mainly caused by movement (such as when ducking). Is there anything else I can try or should I wait for an update and hope for the best?
Hardware offloading doesn’t get involved in beamforming (it only helps with downloading from the Internet at 500+ Mbps). With VR streaming, I assume that all the traffic is generated within your local network, and HW offload doesn’t work there.
The Archer C6 v3 uses the MT7621 SoC (well supported) but uses the MT7613BE chip for Wi-Fi 5 duty. Personally, I am inclined to believe that the latter is the culprit. The open-source mt76 driver hasn’t given much attention to the mt7613/mt7663, and their support needs a lot of refinement (even basic features such as DFS are not yet supported).
I am not very familiar with the use of VR headsets, but consider trying Wireshark on your PC to identify what type of traffic is being used. Is it TCP or UDP? Are there multiple connections, or just one? What is the general size of the packets?
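As an alternative to Wireshark, a quick tcpdump capture on the PC (something like `tcpdump -nn -i <iface> host <headset-ip>`) answers those questions. Below, an invented excerpt of such a capture (addresses and ports are made up) is summarized with awk to count TCP vs UDP packets and the average UDP payload length:

```shell
#!/bin/sh
# Count UDP/TCP lines in a tcpdump text dump and average the UDP
# "length" field. The here-document is an invented sample capture.
awk '
  /UDP, length/ { udp++; for (i = 1; i <= NF; i++) if ($i == "length") bytes += $(i+1) }
  /Flags \[/    { tcp++ }
  END { printf "udp=%d tcp=%d avg_udp_len=%d\n", udp, tcp, (udp ? bytes/udp : 0) }
' <<'EOF'
12:00:00.000001 IP 192.168.1.10.9000 > 192.168.1.2.9000: UDP, length 1400
12:00:00.000002 IP 192.168.1.10.9000 > 192.168.1.2.9000: UDP, length 1400
12:00:00.000003 IP 192.168.1.2.50000 > 192.168.1.10.9001: Flags [P.], seq 1:101, ack 1, win 501, length 100
EOF
```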
I have several routers with the MT7613BE, including a lot of C6 v3 units, and with that data I might be able to reproduce the error on my own.
You are wrong here. There is no relation between beamforming and offload. Beamforming works at limited movement speeds and only has any effect 25-30m or more away from the AP, and you need at least a WiFi 5 client.
This is not a chatbot race; provide config files and describe the client to get any help.
Do not enable it for any tests, it adds its own class of incompatibility.
WiFi beamforming is designed to work at walking speeds.
As beamforming hadn’t been mentioned before, but you put it right after a sentence advising against using hardware flow offloading, I inferred that the offloading enables beamforming. But now you said there’s no relation between those two. Can you explain then why you mentioned beamforming in the first place? Is it because a router can miss the beam if the receiver is moving too quickly?
Also, sorry for the slow replies. I accepted the issue as normal a while ago. I’m used to having multiple issues at any given time, especially on Linux.
I’ll check if WiVRn (the VR streaming app) uses TCP or UDP, any other details, and if it can be changed.