No, up and down stay around 200–300 Mbps. Also, according to the site linked from OpenWrt's SQM page, bufferbloat is worse with SQM than without. Generally, I have a hard time actually saturating 1 Gbps, so I never felt the need for SQM.
If you tell me where the switch for that is in OpenWrt, I can try. But given the SQM results, I highly doubt it.
If SQM runs out of CPU at the speed you set, then perhaps it would give worse performance.
For many home users, saturating a gigabit connection would require multiple people using the network. Without traffic shaping, my family of four can do it pretty easily: just arrive home with a couple of phones that want to sync new videos of the kids to Google Photos while someone starts up the Netflix front page and another person loads a news site... doing that will completely bork a VoIP call, for example. Still, I imagine a single user would have a harder time. Also, some ISPs do a better job managing buffers than others; if you see very little buffering because your ISP does a good job, then there's no need for SQM on your router.
I'm going to offer a contrarian viewpoint here: save money with a 250–350 Mbps connection and use good SQM (for example, the IQrouter v3, or any reasonably performant OpenWrt-compatible router).
Unless you're unusual and the transfer speeds of your bulk up-/downloads are unacceptable, it's likely that a lower-speed plan from your ISP plus a modestly priced router that controls latency will make you just as happy, at a lower price for both the ISP service and the router.
Good point! IIRC, cake will try to maintain the configured shaper rates under CPU starvation but will accept higher latency under load, while the HTB+fq_codel scripts in SQM will honor the latency deadlines but will fail to meet the configured traffic rates when CPU-starved. So, depending on the selected script, rate and/or latency will suffer if a router's CPU is not up to the task it is confronted with. And that depends not only on SQM/the shaper but also on whatever else the router is tasked with. An IDS like Sentinel or Snort is typically quite CPU intensive...
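For reference, the choice between those two shaper families is made in SQM's config file. A minimal sketch of /etc/config/sqm, assuming the option names used by the sqm-scripts package (the rates here are just examples, not a recommendation):

```
config queue 'eth1'
        option enabled '1'
        option interface 'wan'       # the interface to shape
        option download '85000'      # ingress rate, kbit/s (example value)
        option upload '9500'         # egress rate, kbit/s (example value)

        # cake: holds the configured rates, lets latency rise when CPU-starved
        option qdisc 'cake'
        option script 'piece_of_cake.qos'

        # HTB+fq_codel alternative: holds latency, may miss the rates when CPU-starved
        # option qdisc 'fq_codel'
        # option script 'simple.qos'
```

Double-check the script names against what your installed sqm-scripts version actually ships.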
There never is a need to use SQM, but even with a low bufferbloat link, SQM's sharing guarantees can still be desirable, if only to restrict say the "fall-out" of a heavy torrent user in a network to that user's IP address/computer....
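As a sketch of that per-host "fall-out" containment: with cake, host isolation comes from its `dual-srchost`/`dual-dsthost` keywords, which SQM passes through its advanced option strings. Assuming the sqm-scripts option names below (verify against your version's defaults):

```
# /etc/config/sqm (excerpt), cake-based script assumed
        option qdisc_advanced '1'
        option qdisc_really_really_advanced '1'
        # ingress: fair share per internal destination host (nat = look behind NAT)
        option iqdisc_opts 'nat dual-dsthost ingress'
        # egress: fair share per internal source host
        option eqdisc_opts 'nat dual-srchost'
```

With this, one machine running a heavy torrent load still gets only its fair share, rather than starving every other host on the LAN.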
@moeller0 PS I was a bit confused by your statement above, "... There never is a need to use SQM,..." Given that a huge fraction of people do suffer from bufferbloat, their experience will benefit from using SQM. Perhaps you mean that, "If you are not experiencing high latency or lag, then there's no need for SQM..."
Oh, my point is simply that SQM is not mandatory, and everybody is free to either use it or not use it as they see fit.
I also happen to believe that most networks would be well served by operating SQM or something similar, but in my foreigner's understanding of English, that is not a strict requirement or need, or, in IETF-ese, a MUST.
My goal is to tell people the consequences of using or not using SQM and then let them make their own informed decision about what they want to do in their own network, especially in situations where a router is not up to operating close to the contracted/link rate and compromises need to be made.
That said, I myself followed your "contrarian"* proposal and operated my nominally 100/40 link at 49/36, since that was the most reliable shaping I got out of my old WNDR3700v2 with recent OpenWrt. A bit of testing convinced me that my family of 5 would be better served by SQM than by 55 Mbps more aggregate throughput. (I have since switched routers: the trusty WNDR now serves as an AP, while a Turris Omnia took over primary routing, and that device has no issues running SQM and an IDS (team Turris' Pakon) simultaneously at 95/36.)
*) IMHO that is actually a common-sense proposal and not much in the vein of the usual anti-this, anti-that contrarianism, as you actually have a rationale to back it up
This aligns with what I'm calling Rich's second rule of network troubleshooting: "If you're happy, then I'm happy." I don't feel an urge to optimize your network if you don't feel there's a problem. And I certainly won't point out a lot of problems that I see (or that might exist) if you are content with things the way they are.
@dlakelan Good post. I recently upgraded my Internet connection and now have that "Tim the Tool Man" issue too. Plus, I am dependent on OpenWrt's nice features like WireGuard, DDNS, VLANs, regular security updates, and its friendly expert support forum.
I like your decentralized approach of using a unit that has one feature and does it well. That will also bypass some of the proprietary-firmware roadblocks.
Is anybody using a WiFi 6 AP with multiple SSIDs connected to different VLANs? I have three VLANs. I already have an Atomic Pi and a TP-Link USB UE300.
Yeah, but the point is that's without SQM... My WRT32X also handles 1 Gbit without SQM, and with SQM cake it's perfectly reliable at 600+ Mbit/s while also doing 100 MB/s USB 3.0 NAS, Adblock, WiFi, Samba, etc. (I think the R7800 now has similar performance since NSS offloading works.) The point of the post is that people shouldn't expect an SQM-capable OpenWrt router to handle 1 Gbit for a long time. Router CPUs just aren't fast enough right now, and both the hardware and software support is a long way off.