I was reading a thread on this and learning quite a bit. While I had no argument with most of what I read, I took issue with this comment in particular: SQM on gigabit fiber
Unless that gigabit fiber uplink is asymmetric, I have a hard time believing even 8 people could saturate that link with streaming and mobile-device uploads. Regardless of that probability, though, I would highly recommend people take objective measurements before deciding what to do with their connection. I have Prometheus/Grafana monitoring my edge router's traffic and smokeping for latency; both are relatively easy to set up and maintain, and they provide the kind of historical data necessary to characterize usage and latency. Make sure to add your ISP's first-hop router to your smokeping config so you can see the latency of the uplink alone.
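For illustration, a minimal smokeping target stanza for that first hop might look like the following (the section name and IP are placeholders; find your actual first hop with traceroute or mtr):

```
# Goes in the *** Targets *** section of smokeping's config
# (e.g. /etc/smokeping/config.d/Targets on Debian-family distros)
+ ISP
menu = ISP
title = ISP uplink

++ firsthop
menu = First hop
title = ISP first-hop router
host = 203.0.113.1    # placeholder; replace with your first-hop IP
```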
While traffic at my house is fairly modest, the data suggests it never impacts latency, even when approaching the uplink's capacity. I've saturated much slower uplinks in the past, and have considered configuring some kind of traffic shaping because it's just so damn cool, but objectively I have no actual need for it, and I suspect most gigabit customers are in the same boat.
Oh sure, with the right kind of activity I can saturate 1 Gbit/s all by myself. I was addressing the typical usage mentioned in the original link. Point taken regarding the p2p traffic, though, and all the more reason to set appropriate bandwidth limits in the p2p client itself.
The point I was hoping to make, though, is to obtain objective measurements prior to spending time & money on a traffic shaping solution that may only wind up increasing latency.
Just for the purpose of demonstration, I set up a ping from a host on my LAN to the first-hop router at my ISP. I then used two hosts to run a local speedtest (one instance each), saturated my connection at about 966 Mbit/s, and observed the changes in latency.
Prior to the deluge, latency to that router varied from about 3-9 ms. During the deluge, both down and up, it increased to a range of about 9-15 ms.
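If you want to script that before/during comparison, here's a quick sketch that pulls the RTTs out of ping's output and summarizes them (the sample values below are made up to mirror my idle-state numbers):

```python
import re

def summarize_rtts(ping_output: str):
    """Extract RTT samples (in ms) from `ping` output; return (min, avg, max)."""
    rtts = [float(m) for m in re.findall(r"time=([\d.]+)", ping_output)]
    if not rtts:
        return None
    return min(rtts), sum(rtts) / len(rtts), max(rtts)

# Canned sample (fabricated values for illustration):
sample = """\
64 bytes from 203.0.113.1: icmp_seq=1 ttl=64 time=3.1 ms
64 bytes from 203.0.113.1: icmp_seq=2 ttl=64 time=8.9 ms
64 bytes from 203.0.113.1: icmp_seq=3 ttl=64 time=5.4 ms
"""
lo, avg, hi = summarize_rtts(sample)
print(f"min={lo:.1f} avg={avg:.1f} max={hi:.1f} ms")  # → min=3.1 avg=5.8 max=8.9 ms
```

Capture one log while idle and one during the speedtest, and the shift in the min/avg/max triple tells you what shaping would actually buy you.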
What I find cool is that even with my 10-70Mbit/s LTE connection I can still get a smoother connection for things like Microsoft Teams or Zoom than those with much higher bandwidth connections. When I am on calls with clients and my competitors from prestigious firms I am generally not the one experiencing connection issues.
Also isn't one of the benefits of CAKE that it provides flow fairness so it dishes out bandwidth nice and evenly between different flows?
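As I understand the tc-cake man page, yes: the default isolation mode is "triple-isolate", which balances fairness per host as well as per flow, and plain per-flow fairness is the `flows` keyword. Something like this, with a placeholder interface and rate:

```shell
# Shape egress on eth0 (hypothetical interface) with pure per-flow
# fairness; omit "flows" to keep the default triple-isolate mode,
# which also balances between hosts.
tc qdisc replace dev eth0 root cake bandwidth 940mbit flows
tc -s qdisc show dev eth0   # inspect per-tin stats, drops, and marks
```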
For me the difference is enormous, but I guess it is true that the difference on a 1 Gbit/s connection may not be so dramatic as compared to on lower bandwidth or LTE/Starlink connections.
Anyway, it's nice to set up a robust and well-oiled machine that can handle anything you throw at it.
Only if actual queues build up. If a 1000 Mbit/s link is shared by a 900 Mbit/s flow and a 99 Mbit/s flow, cake will do nothing; it only becomes active if the 99 Mbit/s flow starts to send at a higher rate. In other words, if the 1 Gbit/s link never gets saturated, cake will not actually change the traffic patterns, yet it still exacts a steep CPU cost. Now, if a LAN client saturates the 1 Gbit/s uplink and one or more WiFi clients access the internet at the same time, cake will be advantageous (as there will be queue build-up).
It boils down to: can you guarantee that actual saturation/overload never happens in situations where you could not live with the consequences? If it cannot happen, or you can accept the consequences (which might be mild), no traffic shaper is required. However, I would probably still run cake in unlimited mode and try to set up BQL for egress; that way cake runs at a limited CPU cost but might still help on egress overload (which, admittedly, is rarer than download overload for typical internet use-cases).
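Concretely, that setup might look something like this (the interface name is a placeholder, and BQL availability depends on the NIC driver):

```shell
# cake as the root qdisc with the shaper disabled: the AQM and flow
# fairness still kick in when the hardware queue backs up, but no CPU
# is spent on rate-limiting.
tc qdisc replace dev eth0 root cake unlimited

# BQL (when the driver supports it) keeps the NIC's own buffer short,
# pushing any egress queue up into cake where it can be managed.
# Optionally cap the per-queue byte limit:
echo 30000 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_max
```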
That is immensely cool and demonstrates the power of good traffic shaping. All the traffic shapers I've worked with in the past have required an upper limit so as to know when to start queuing. This has been great on uplinks with guaranteed bandwidth, but very difficult on those that are variable. Typically you would choose the lowest possible value for the upper limit, but if variability is high, that might be very low indeed. Do you have to set the max bandwidth in CAKE?
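From skimming the tc-cake man page, it looks like a fixed maximum isn't strictly required; there are a few modes (interface names here are placeholders):

```shell
tc qdisc replace dev eth0 root cake bandwidth 950mbit  # fixed shaper rate
tc qdisc replace dev eth0 root cake unlimited          # no shaper; AQM only
tc qdisc replace dev ifb0 root cake autorate-ingress   # estimate the rate from
                                                       # packet arrival timing
```

The autorate-ingress mode is aimed at the ingress/ifb side and is described as a best-effort estimate, so it may be the closest fit for those highly variable uplinks.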
Oh yeah, back in the highly asymmetric days (only 4 Mbit/s up), I relied heavily on my traffic shaper to keep that limited upload side queued nice and fair. With fully symmetric uplinks, it's almost impossible to saturate the upload side without trying really hard.