MT6000 custom build with LuCi and some optimization - kernel 6.6.x

Tests of r27009-37ccb16af6 (3.1.0.mtk) on the M3 are still way off from 2.9.3.mtk

Test1

Test2

Test3

I'll update with Intel results a bit later, as I'm getting ready for my trip.


Same here @pesa1234, have a nice one there, I'm sure I will :smiley:

Thanks for the early tests. I'm also up and running on 3.1.0 now and will do some Crusader tests a little later today.

As to your test results: have you been running with TWT enabled all along? Is it enabled now? The regularity of the latency spikes almost looks like a TWT-related thing.

Off on every test so far; it's not needed for my environment, as I understand it's mostly a battery-saving feature for IoT-type devices and phones.


Update1:

Well, for whatever reason it now shows much better results; perhaps that's because the first tests on the M3 were done just moments after installing the new image :face_with_raised_eyebrow:.

Test1

Test2

Test3

1 Like

Ok, that's a Mac issue.
So I still don't understand where...

This evening I'll compile for everyone. Thanks

4 Likes

I just wanted to say that I've been flawlessly running r27039 (r2.9.2) for a couple of weeks, currently at 9 days of uptime.

The only issue I've encountered is the SER errors, which aren't noticeable in use anyway.

Due to the holiday period, I'll keep this build for some time, until I'm ready to take down my network again :slight_smile:

Enjoy your vacation @pesa1234 and have fun testing without me (at least for some time!)

2 Likes

@stachdude Have you disabled awdl0 on your Macs, by any chance? @pesa1234's reply reminded me that awdl0 uses channel 149 exclusively and does a channel scan every couple of seconds.

For the good of the community, more info here for Mac users:

Scripts for disabling awdl0:

Personally, I opted for the auto-disable method: https://github.com/meterup/awdl_wifi_scripts?tab=readme-ov-file#run-it-automatically-after-a-restart
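If you just want to test the effect before committing to the auto-disable route, the manual method is a single `ifconfig` call. Below is my own hedged wrapper (not from the linked repo); it guards on the interface actually existing, since awdl0 is macOS-only. Note this does not persist: macOS tends to bring awdl0 back up after a reboot or when AirDrop/Handoff kick in, which is exactly why the auto-disable scripts exist.

```shell
# awdl_down: one-shot manual disable of awdl0 (hypothetical helper, macOS-only).
# Prints a status line either way so you can see what happened.
awdl_down() {
  if ifconfig awdl0 >/dev/null 2>&1; then
    # Interface exists: take it down (requires admin rights).
    sudo ifconfig awdl0 down && echo "awdl0 is down"
  else
    # No awdl0 here (non-macOS box, or interface name changed).
    echo "no awdl0 interface here (not macOS?)"
  fi
}

awdl_down
```

Re-run a Crusader or ping test afterwards; if awdl0 was the culprit, the periodic latency bursts should disappear until the interface comes back up.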


Update:
It seems this issue may finally (!!!!) be fixed in the latest versions of MacOS. Refer to this post, and look for the post I linked within it ( :slight_smile: ), for info about the SCAN request received event that awdl0 fired, which wreaked havoc on wireless latency for MacOS clients.

Not claiming this is a non-issue for all. You need to test for yourself to determine if the issue is gone for you. YMMV.

1 Like

FYI, r2.9.3 has been dubbed the "golden standard"/benchmark that @stachdude and I have been measuring against. You might want to grab it instead since it's extremely stable and performant. :slight_smile:

2 Likes

Thanks for pointing it out @_FailSafe. Yes, I have awdl disabled on my Mac, and I believe those first test results can be ignored. I've been running 3.1.0.mtk for 4h now, and from time to time I run a couple of tests, all of which seem stable; latency and speed are awesome. Too early to say, I guess, but it looks promising. I'll run the Intel tests and we'll compare.

2 Likes

Awesome! FWIW, I have also noticed that Crusader tests tend to show some rough results until things have stabilized for 10-15 minutes. In my case, I assumed it was churn from my STAs bouncing between active APs (remember, I have three MT6000s in dumb AP mode) whenever I upload a new image. When I flash a new image, I do a rolling update across my three APs so as to keep at least one of them live while others are rebooting.

It takes a few minutes after all three devices have rebooted for all my clients to settle back into their typical associations (mostly for my stationary devices).

@pesa1234 Things are also looking good for me on r3.1.0 at this point. I still haven't run any Crusader tests from my normal testing spot, but usability feels good and it seems reasonably "snappy" in terms of performance.

3 Likes

Intel results on 3.1.0.mtk

Test1

Test2

Test3

Test4

Test5

Test6


Looking at those I would summarize: Not bad, not bad at all.

2 Likes

upload: 2024.07.27_r27011_6.6.41_next-r3.1.0.mtk

  • revert mac80211 update and mt76 update respect vanilla openwrt
  • mac80211: cfg80211 fully move wiphy work to unbound workqueue
  • mac80211: cfg80211: use correct nla_get
  • mt76: fix-oops-on-non-dbdc-mt7986
  • mac80211: fix warning and track capability opmode NSS separately
  • update kernel to 6.6.41

I'll be back in 15 days.

Thanks to all, and a special thanks to @stachdude and @_FailSafe for their tests on the new mac80211 v6.9.9.

14 Likes

Thanks for all you do! Enjoy your time away!! :sunglasses:

3 Likes

I've been using r27039 (r2.9.2) of Pesa's build and uptime is currently 16 days; I'm really happy with it.

But where would I be able to get hold of the r2.9.3 of Pesa's build that was mentioned, since it is not on the GitHub site?

Pardon my ignorance, but although I get the general gist of this thread some of the implied details and assumptions are lost on me :slight_smile:

1 Like

I was mistaken in thinking Pesa had released r2.9.3, but as I just confirmed, r2.9.2 was the latest release made prior to yesterday's 3.1.0 release. I build Pesa's firmware from source, so I don't often see the actual releases on Github.

That said, r3.1.0 has been solid and very similar in performance to r2.9.2. I would go ahead with r3.1.0, and you can always roll back to r2.9.2 if you don't like r3.1.0 or otherwise find an issue with it.

2 Likes

Super, thank you for the reply, I'll take r3.1.0 for a spin.

NB. To add, I am using my 2x MT6000s as APs, and since using Pesa's spin on OpenWrt they have been rock solid in operation (it is only because I like to tinker and seek to understand more that I really upgrade). A very impressive device, and a very impressive build by Pesa as a result of yours and others' feedback :thumbsup:

3 Likes

Hello
Is this still relevant now?
I just see that the scripts haven't been updated in 2 years, and I don't see the described problems on my Mac M1.
P.S.: My WiFi is on completely different channels.

TL;DR:

If you have awdl0 enabled on your Mac and see events in your Mac WiFi log while the command below is running, then you are likely still a "victim" of the awdl0 SCAN (a disruptive, latency-inducing channel scan :face_vomiting:).

The Command:
tail -f /var/log/wifi.log | grep 'SCAN request received'


It's a good question. I can't say for sure about MacOS 14, but I'm running the MacOS 15 betas and I don't see the same behavior I once did when awdl0 was enabled. I hesitate to say it's fixed, because it's hard to know which MacOS versions everyone is using.

But if you have awdl0 enabled and don't see the behavior and/or log messages I described in detail here (back in 2020, no less :stuck_out_tongue: ), then it's probably safe to say you're running a MacOS version where the behavior has been corrected.

1 Like

I have MacOS 14.4.
I checked the log files and found no mention of this problem.
It looks like the problem has either been fixed or covered up.

Awesome! Glad to know you're not seeing that in the WiFi log on MacOS 14.4. :+1:t2:

If you start a constant ping to a well-connected host on your network, such as pinging from your MacBook (on wireless) to your router, and see anything like this (chunks of random high latency, then a return to "normal") recurring at a regular interval, you'll know the issue was covered up rather than fixed:

...
15:13:54.744131 64 bytes from 192.168.45.15: icmp_seq=18631 ttl=64 time=4.029 ms
15:13:54.845859 64 bytes from 192.168.45.15: icmp_seq=18632 ttl=64 time=3.179 ms
15:13:54.947478 64 bytes from 192.168.45.15: icmp_seq=18633 ttl=64 time=2.827 ms
15:13:55.052476 64 bytes from 192.168.45.15: icmp_seq=18634 ttl=64 time=5.650 ms
15:13:55.154552 64 bytes from 192.168.45.15: icmp_seq=18635 ttl=64 time=3.871 ms
15:13:55.258871 64 bytes from 192.168.45.15: icmp_seq=18636 ttl=64 time=3.038 ms
15:13:55.359884 64 bytes from 192.168.45.15: icmp_seq=18637 ttl=64 time=3.102 ms
15:13:55.461159 64 bytes from 192.168.45.15: icmp_seq=18638 ttl=64 time=4.039 ms
15:13:55.562248 64 bytes from 192.168.45.15: icmp_seq=18639 ttl=64 time=3.183 ms
15:13:55.667160 64 bytes from 192.168.45.15: icmp_seq=18640 ttl=64 time=4.194 ms
15:13:55.887877 64 bytes from 192.168.45.15: icmp_seq=18641 ttl=64 time=124.260 ms
15:13:55.887931 64 bytes from 192.168.45.15: icmp_seq=18642 ttl=64 time=19.149 ms
15:13:56.011081 64 bytes from 192.168.45.15: icmp_seq=18643 ttl=64 time=40.408 ms
15:13:56.210344 64 bytes from 192.168.45.15: icmp_seq=18644 ttl=64 time=139.222 ms
15:13:56.210391 64 bytes from 192.168.45.15: icmp_seq=18645 ttl=64 time=34.771 ms
15:13:56.350011 64 bytes from 192.168.45.15: icmp_seq=18646 ttl=64 time=71.301 ms
15:13:56.538577 64 bytes from 192.168.45.15: icmp_seq=18647 ttl=64 time=156.479 ms
15:13:56.538627 64 bytes from 192.168.45.15: icmp_seq=18648 ttl=64 time=56.158 ms
15:13:56.627752 64 bytes from 192.168.45.15: icmp_seq=18649 ttl=64 time=42.300 ms
15:13:56.734975 64 bytes from 192.168.45.15: icmp_seq=18650 ttl=64 time=44.869 ms
15:13:56.916944 64 bytes from 192.168.45.15: icmp_seq=18651 ttl=64 time=123.662 ms
15:13:56.917004 64 bytes from 192.168.45.15: icmp_seq=18652 ttl=64 time=22.945 ms
15:13:57.097255 64 bytes from 192.168.45.15: icmp_seq=18653 ttl=64 time=100.624 ms
15:13:57.098835 64 bytes from 192.168.45.15: icmp_seq=18654 ttl=64 time=1.497 ms
15:13:57.278506 64 bytes from 192.168.45.15: icmp_seq=18655 ttl=64 time=79.731 ms
15:13:57.303504 64 bytes from 192.168.45.15: icmp_seq=18656 ttl=64 time=2.969 ms
15:13:57.460359 64 bytes from 192.168.45.15: icmp_seq=18657 ttl=64 time=56.355 ms
15:13:57.614368 64 bytes from 192.168.45.15: icmp_seq=18658 ttl=64 time=106.021 ms
15:13:57.614422 64 bytes from 192.168.45.15: icmp_seq=18659 ttl=64 time=3.553 ms
15:13:57.795755 64 bytes from 192.168.45.15: icmp_seq=18660 ttl=64 time=80.417 ms
15:13:57.822536 64 bytes from 192.168.45.15: icmp_seq=18661 ttl=64 time=2.305 ms
15:13:57.974278 64 bytes from 192.168.45.15: icmp_seq=18662 ttl=64 time=51.962 ms
15:13:58.152339 64 bytes from 192.168.45.15: icmp_seq=18663 ttl=64 time=128.924 ms
15:13:58.152359 64 bytes from 192.168.45.15: icmp_seq=18664 ttl=64 time=23.837 ms
15:13:58.333419 64 bytes from 192.168.45.15: icmp_seq=18665 ttl=64 time=99.724 ms
15:13:58.335526 64 bytes from 192.168.45.15: icmp_seq=18666 ttl=64 time=1.669 ms
15:13:58.513027 64 bytes from 192.168.45.15: icmp_seq=18667 ttl=64 time=78.029 ms
15:13:58.667726 64 bytes from 192.168.45.15: icmp_seq=18668 ttl=64 time=130.366 ms
15:13:58.667775 64 bytes from 192.168.45.15: icmp_seq=18669 ttl=64 time=26.966 ms
15:13:58.849483 64 bytes from 192.168.45.15: icmp_seq=18670 ttl=64 time=104.263 ms
15:13:58.849538 64 bytes from 192.168.45.15: icmp_seq=18671 ttl=64 time=4.126 ms
15:13:59.025957 64 bytes from 192.168.45.15: icmp_seq=18672 ttl=64 time=80.084 ms
15:13:59.053476 64 bytes from 192.168.45.15: icmp_seq=18673 ttl=64 time=2.443 ms
15:13:59.205727 64 bytes from 192.168.45.15: icmp_seq=18674 ttl=64 time=50.688 ms
15:13:59.388851 64 bytes from 192.168.45.15: icmp_seq=18675 ttl=64 time=132.034 ms
15:13:59.388898 64 bytes from 192.168.45.15: icmp_seq=18676 ttl=64 time=31.738 ms
15:13:59.566273 64 bytes from 192.168.45.15: icmp_seq=18677 ttl=64 time=105.030 ms
15:13:59.567464 64 bytes from 192.168.45.15: icmp_seq=18678 ttl=64 time=2.476 ms
15:13:59.722718 64 bytes from 192.168.45.15: icmp_seq=18679 ttl=64 time=55.094 ms
15:13:59.904533 64 bytes from 192.168.45.15: icmp_seq=18680 ttl=64 time=136.794 ms
15:13:59.904562 64 bytes from 192.168.45.15: icmp_seq=18681 ttl=64 time=31.656 ms
15:13:59.975121 64 bytes from 192.168.45.15: icmp_seq=18682 ttl=64 time=1.640 ms
15:14:00.077185 64 bytes from 192.168.45.15: icmp_seq=18683 ttl=64 time=3.313 ms
15:14:00.182709 64 bytes from 192.168.45.15: icmp_seq=18684 ttl=64 time=3.706 ms
15:14:00.284626 64 bytes from 192.168.45.15: icmp_seq=18685 ttl=64 time=4.542 ms
15:14:00.388569 64 bytes from 192.168.45.15: icmp_seq=18686 ttl=64 time=3.346 ms
15:14:00.493400 64 bytes from 192.168.45.15: icmp_seq=18687 ttl=64 time=3.185 ms
15:14:00.596481 64 bytes from 192.168.45.15: icmp_seq=18688 ttl=64 time=3.349 ms
15:14:00.702054 64 bytes from 192.168.45.15: icmp_seq=18689 ttl=64 time=4.081 ms
...
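If you'd rather not eyeball a scrolling trace like that, a small filter makes the bursts obvious. This is my own hypothetical helper (nothing from the build or Apple's tooling); it reads standard ping output and flags replies above a latency threshold in milliseconds:

```shell
# flag_spikes: read `ping` output on stdin, print any reply whose latency
# exceeds a threshold (default 50 ms), plus a summary count at the end.
flag_spikes() {
  t="${1:-50}"
  awk -v t="$t" '
    /time=/ {
      split($0, a, "time=")   # a[2] is e.g. "124.260 ms"
      ms = a[2] + 0           # coerce to a numeric latency value
      if (ms > t) { n++; print "SPIKE:", $0 }
    }
    END { printf "%d replies over %s ms\n", n, t }
  '
}

# Example: ping -c 120 192.168.1.1 | flag_spikes 50
```

With a bounded run (`ping -c 120 <router-ip> | flag_spikes 50`), regular clusters of SPIKE lines every few seconds would match the awdl0 pattern shown above, while an empty or near-empty result suggests the scan is no longer firing.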

Anyway, I have digressed into off-topic territory here, so I won't belabor this awdl0 thing. Glad to know it is no longer an issue for you (and for me, too!) :slight_smile:

1 Like

I finally decided to switch from GL.iNet's "stable" firmware to this (using the 3.1.0 compiled image), and the latency improvement was very commendable! Even without QoS enabled, the latency on the bufferbloat test held almost perfectly steady on download, while uploads showed much lower latency compared to GL.iNet's latest stable firmware. Even a simple Ookla speedtest shows better latency. 2.4GHz also works properly now on older non-AX Android devices, which was my biggest problem before when I tried to switch from GL to vanilla OpenWrt. Thanks a lot Pesa!

6 Likes