Yup, that's the idea. With the caveat that there is not always a direct relation between bandwidth ratios and airtime ratios (that's the whole point of having the fairness scheduler in the first place). However, if your laptops are relatively comparable, and we thus tacitly assume they are running at the same PHY rate, then it does translate, and your results look promising. At least the patch doesn't obviously break anything, which is a good start.
Thanks a lot for testing! I'll submit the patch to upstream once the merge window is over, and we'll see if it can get some more testing that way round...
In your own testing, have you seen any scenarios with a substantial performance difference between the default DRR scheduler and virtual airtime? From what I remember, virtual airtime was supposed to limit the amount of throttling of slow stations.
No, the virtual airtime scheduler is mostly a code reorganisation that means we don't have to rely on round-robin scheduling to advance stations' deficits. This can be a benefit for making the scheduler interact better with other things such as AQL, but the virtual airtime scheduler in itself is not supposed to change the observed behaviour at the macro level. So from that perspective, "no change" is good.
All this looks really promising... but are you also testing the 10 ms target patch and the patch ripping out the adjustment to the codel target at 5+ stations?
No, these tests were carried out with stock AQL settings. It would probably be a good idea to run some tests with the other patches as well. The problem is that I don't have enough clients to run tests with the patch that changes the 5+ clients logic.
A last test, with ECN enabled on both sides and a packet capture, would also be really excellent. I'm weird: I use the ECN CE marks to show when the AQM is kicking in, and there is so much jitter in wifi in the first place...
OSX no longer has a reliable means of enabling ECN, but on Linux it's straightforward.
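For reference, on Linux this is a one-line sysctl (needs root; the default on most distros is 2, i.e. accept ECN only when the peer requests it):

```shell
# Request ECN on both outgoing and incoming TCP connections
# (0 = off, 1 = negotiate on all connections, 2 = accept only if the peer asks)
sysctl -w net.ipv4.tcp_ecn=1
```

Add it to /etc/sysctl.conf if you want it to survive a reboot.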
I've compiled two builds, and I'll run some tests tomorrow when I have time. Unfortunately, I won't be able to run a test with a packet capture since I don't have a third client to do the capture.
Capturing on either host is fine; in one direction you would see ECE, in the other CE. No biggie: by eye, the difference between 10 and 20 ought to stand out, if any.
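If it helps, here's a rough sketch of capture filters for eyeballing this (interface name is just a placeholder): CE lives in the low two bits of the IP TOS byte, and ECE is a bit in the TCP flags byte.

```shell
# IPv4 packets with the ECN field set to CE (binary 11)
tcpdump -i wlan0 -w ce.pcap 'ip[1] & 0x3 == 3'

# TCP segments with the ECE flag set (receiver echoing congestion back)
tcpdump -i wlan0 -w ece.pcap 'tcp[13] & 0x40 != 0'
```

Either capture on its own should be enough to see when the AQM starts marking.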
Ok, I think I actually managed to confuse capturing airtime statistics with running a packet capture =) I can probably make that work. Do you know if the netperf-eu.bufferbloat.net server has ECN enabled? That's the one I run my tests against.
I'm forced to use that server in order to run the tests since I lack a third machine to act as a server. It's not ideal, but I make sure that the network is as quiet as possible during the tests to not affect the results.
I should also be clear that ripping out the station limitation is important too, because having more than 5 stations on the network, even inactive ones, bumps the target up.
The test setup was the same as in the previous test. I ran one test with a codel target of 10 ms and another with 20 ms, and both builds had Toke's virtual airtime patch applied. Since the laptops had almost identical signal strengths, I gave the first an airtime weight of 768 while the other kept the default of 256. This was to make sure that airtime fairness was actually doing something during the tests. In addition, I made sure that ECN was enabled on both laptops and that Cake was disabled so it couldn't interfere with the ECN markings.
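For anyone wanting to reproduce the weight setup: on reasonably recent kernels the per-station airtime weight can be set with iw (the interface name and MAC address below are placeholders for your own setup):

```shell
# Bump one station's airtime weight from the default 256 to 768,
# so it should get roughly 3x the airtime of an unweighted station
iw dev wlan0 station set aa:bb:cc:dd:ee:ff airtime_weight 768
```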
Looking at the results there doesn't seem to be much of a difference between the two targets:
I only had two stations connected during the tests, but for good measure I ripped out the multistation logic in both builds and replaced it with the respective codel target as per: www.taht.net/~d/982-do-codel-right.patch
I run my tests in conditions that have much lower throughput than the other tests in this thread (very poor signal). Could that have something to do with it?