AFAIK, 21.02.1 was the last version using the original round-robin scheduler for ATF. From 21.02.2 through 22.03.1 RC4, ATF used the virtual-time-based airtime scheduler, which many people complained about because of excessive latency. Starting from 22.03.1 RC5, @nbd changed it back to the round-robin scheduler with some other enhancements. I asked nbd to commit the fix to the 21.02.x branch as well so 21.02.4 will be good again, but so far no such commit has landed on the 21.02.x branch.
But based on your finding, it looks like the additional enhancements may have caused a problem when UPnP is enabled.
So far so good! Looks like it's working. I need to do some more testing, but it has to wait until a few hours later. I'll keep this new version with the commit running.
Note: Network under heavy use: YouTube on a mobile device, a Citrix session encapsulated in an SSH tunnel, a Teams video conference, a FaceTime call and a Roblox game at full steam. And UPnP active and working (I can see Parsec entries). No hiccups!
That could just mean that the station and AP are not acquiring airtime equally aggressively. The side with the higher throughput seems to have an edge in getting airtime... same as I saw in my old RRUL test over WiFi where the macbook was eating the AP's lunch in spite of both using all four ACs...
I saw this happening in another test where the connectivity was:
Macbook Pro --1 Gbps USB Ethernet dongle--> NanoHD --4x4 MIMO WiFi connection--> NanoHD --1 Gbps ethernet--> Router
I might be able to run a rrul_be in this scenario a bit later today.
Just for curiosity's sake and to check if the patch I published on the forum works, instead of patches 330-337 I applied (once again) patches 330-333 but with the mentioned patch.
Results:
- Speed at a greater distance is not very different from that with the latest patches.
- It does not block clients after some time.
- Pings do jump a bit more, but it is not that annoying.
- It does not throw errors that block or degrade the link.
Edit: For the record, the @nbd patches (330-333) alone blocked client access after a short time; with the additional patch they no longer do this.
In contrast, the latest ones, i.e. 330-337, work as well as I could hope for.
Hrm, how does this look when you bridge via ethernet cable over the WiFi link? The question is whether that imbalance is caused by the WiFi link, or whether macOS has some sort of prioritization going on for marked egress packets.
BTW @amteza does this just mean performance is back to the level it was before the airtime fairness change or does this mean that performance may now actually beat what was there before? And hats off to you and everyone here for the tenacity!
This is the tool we used to validate airtime fairness, but it requires an aircap.
If the two stations show equal airtime, then the MCS rate being used by the AP is probably lower than the client's. With an aircap in hand, we can see the negotiated MCS rates.
There are other possible issues in packing aggregates at the AP: two stations can achieve the same MCS rate but not the same airtime. The number of "won elections" for airtime can also be pulled from an aircap.
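As a rough illustration of how two stations at the same MCS rate can still consume very different airtime, here is a hedged back-of-the-envelope sketch. The frame sizes, PHY rate, and per-PPDU overhead below are assumptions for illustration, not figures from any aircap:

```python
# Back-of-the-envelope airtime estimate per station.
# Assumption (not measured): a fixed ~100 us per-PPDU cost for
# preamble, IFS, and block ack; payload bytes go out at the PHY rate.

PPDU_OVERHEAD_US = 100  # assumed fixed cost per aggregate

def airtime_us(bytes_per_ppdu, phy_rate_mbps, n_ppdus):
    """Total airtime in microseconds for n_ppdus aggregates."""
    per_ppdu = PPDU_OVERHEAD_US + (bytes_per_ppdu * 8) / phy_rate_mbps
    return per_ppdu * n_ppdus

# Same PHY rate (say 300 Mbit/s), same total payload (1 MB), but
# station A packs 64 KiB per aggregate while station B packs only 4 KiB.
total_bytes = 1_000_000
a = airtime_us(65536, 300, total_bytes / 65536)
b = airtime_us(4096, 300, total_bytes / 4096)
print(f"station A: {a/1000:.1f} ms, station B: {b/1000:.1f} ms")
```

With these assumed numbers the poorly packing station burns nearly twice the airtime for the same bytes at the same rate, which is exactly the kind of imbalance an aircap would expose.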
One "feature" of all this code is that latency with more than one active station tends to get better for a while, as we service the sparsest stations first. Due to hardware limitations we had to have "one TXOP in the hardware" and "one ready to go", which means (so long as only the BE queue is in use) a maximum of 11.4 ms of latency buried in the 802.11 stack that cannot be FQed.
With one machine running the rrul at full throttle and another just doing VoIP, we can end up with one small TXOP in the hardware (say, 250 µs) and one large one "ready to go" (5.7 ms) for the rrul station. With three stations, you might have two 250 µs TXOPs outstanding, so the rrul test experiences just that latency in that round. Most normal traffic does not drive WiFi as hard as these tests do, but the revolution (dare I say it) of the FQ ATF scheduler is getting the packets that most need low latency to their stations first.
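The 11.4 ms bound above is just simple arithmetic: two TXOPs in flight (one in the hardware, one staged "ready to go"), each up to the ~5.7 ms aggregate limit. A minimal sketch of that worst case, using only the TXOP sizes quoted above (not constants from any driver):

```python
# Worst-case latency buried in the 802.11 stack when the hardware
# holds one TXOP and one more is staged "ready to go" (BE queue only).
MAX_TXOP_MS = 5.7       # aggregate duration limit quoted above
TXOPS_IN_FLIGHT = 2     # one in hardware + one staged

def buried_latency_ms(txop_sizes_ms):
    """Latency a newly queued packet can see behind in-flight TXOPs."""
    return sum(txop_sizes_ms)

# Full-throttle rrul alone: two maximal TXOPs ahead of a new packet.
worst = buried_latency_ms([MAX_TXOP_MS] * TXOPS_IN_FLIGHT)
# Sparse VoIP sharing with rrul: one tiny TXOP plus one big one.
mixed = buried_latency_ms([0.25, 5.7])
print(f"worst case: {worst:.1f} ms, mixed case: {mixed:.2f} ms")
```

Servicing the sparse station's small TXOP first is what keeps the mixed case well under the full 11.4 ms worst case.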
One old technique that I would like us to try, to further reduce jitter and improve multiplexing in the future, is tightening the WMM parameters announced in the beacon, as well as on the AP itself, down to as low as 1.3 ms.
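As a sketch of what that tightening might look like in a raw hostapd.conf (the parameter names are real hostapd options, but the values here are illustrative, not tested recommendations; hostapd expresses advertised TXOP limits in units of 32 µs, so 41 × 32 µs ≈ 1.3 ms):

```
# Illustrative values only, not tested recommendations.
# Best-effort TXOP limit advertised to stations in the beacon,
# in units of 32 microseconds: 41 * 32 us = 1312 us ~= 1.3 ms
wmm_ac_be_txop_limit=41
# The AP's own best-effort queue burst limit, in milliseconds
tx_queue_data2_burst=1.3
```

Shrinking the TXOP this way trades some peak throughput (smaller aggregates) for more frequent airtime elections, which is where the jitter and multiplexing improvements would come from.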