Could you post the output of "tc -s qdisc" for the cake(420000) and the cake(500000) case? I want to see if/how the quantum variable changes in relation to the set speed. Like HTB/TBF's burst buffers, these can help cake keep the real interface fed during CPU shortages (but at the cost of more induced latency, so this needs to be seen as a trade-off).
Have you manually installed the most recent upstream version of sqm-scripts before this test? If not, please try... And have a look at /usr/lib/sqm/defaults.sh: you will find variables that let you influence the buffer sizing via the duration required to empty that buffer at the configured bandwidth. It would be quite interesting to see whether increasing the buffer duration gives you back the apparently lost bandwidth.
BTW, it is not the leaf qdisc component of cake or fq_codel but either cake's built-in shaper component (for layer_cake/piece_of_cake) or HTB/TBF (in simple/simplest) that actually causes the high computational demands...
Is it possible that the CMTS can detect if a device can't reach full speed and then throttles down the download? Like some kind of threshold? If a threshold is exceeded, increase the usage of download channels?
This only works with HTB. Cake is much better at autoscaling things, but if the new buffer-sizing code in sqm-scripts solves the problem for simple.qos, then it might be possible to make this configurable in cake as well (though this will come at the cost of more latency, so it will never be the preferred solution).
Hmm.
I don't know...
Upload is also weird.
Without sqm:
[SUM] 8.00-9.00 sec 4.82 MBytes 40.4 Mbits/sec
With sqm and limit set to 41984:
[SUM] 7.00-8.00 sec 2.31 MBytes 19.4 Mbits/sec
With sqm and limit set to 100000:
[SUM] 7.00-7.61 sec 3.00 MBytes 41.6 Mbits/sec
I guess my point is: without knowing your /etc/config/sqm, the output of "tc -s qdisc", and a description of how you performed the above tests, all I could do is pure speculation (and Meltdown and Spectre probably reminded all of us that that might have side effects).
Hmm, I have the feeling my ISP is doing some kind of QoS/AQM?
When I disable sqm and saturate the upload, pings are around ~75 ms.
I would expect a much higher latency under a saturated link.
But still too high for my taste.
Do you know by chance what the max channel bandwidth for DOCSIS 3.0/QAM16 is (upstream)?
You mean performance-wise?
I don't know how much it affects the ARM Cortex-A9.
I think it is not affected by Meltdown, only Spectre v2.
But I don't think it will affect the shaping of 40 Mbit/s upload....
I wanted to give the updated sqm-scripts a go with HTB or HFSC.
But somehow only fq_codel and cake are showing up as available qdiscs?
//edit
Tried the old qos-scripts package from OpenWrt.
Shows the same behavior as the sqm package.
As soon as I enable QoS, the bandwidth is halved.
Seems like the problem was the CPU scaling patch I recently added.
But seems like...
the segment here is overloaded.
Yesterday late at night I was able to set sqm to 100% of the sync rate.
And had nice low pings while saturating the uplink.
Today pings already start to rise when upload speeds reach ~30 Mbit/s.
Yeah, fast and good internet in Germany.
Sorry we don't do that here...
I feel like going back in time.
Sorry, this one should not have slipped past my quality control.
Great, I get rewarded for being late, you solved the riddle yourself...
The joy of DOCSIS. Well, I take that back, as this is not really cable-specific: all shared-medium techniques suffer from this*; the issue is segment size (measured in number of users) versus segment aggregate bandwidth. Now, at least with the prospect of DOCSIS 3.1 around the corner, you can hope that the mandatory PIE AQM in the modem will at least give you a tolerable worst-case bufferbloat in the egress/uplink direction...
*) The question is not "is there a shared segment", but rather at what point in the network path the sharing starts. But DOCSIS traditionally has a less favorable split than GPON...
I think it will take some time before DOCSIS 3.1 becomes available here for a wide range of users.
I read they tested 3.1 in some cities but only had problems.
Also, all upload channels are running with QAM16 modulation; in their support forum someone wrote that only a small number of connections are running with QAM64 modulation.
Completely switching over to QAM64 would give more bandwidth? More headroom?
Actually I don't know what they are doing.
The installation down in the cellar looks like crap (the installation in the old house was also bad).
If I had the equipment I would fix it myself.
They use DS-Lite; if you were lucky you could get a native IPv4 connection.
Now they offer dual stack, but the IPv4 part is crippled to a 1460-byte MTU.
If you want only IPv4: sorry, we can't do that. But dual stack with IPv4 and IPv6 is no problem x)
They don't offer plain modems; all routers they give out have the Intel Puma 6 bug.
In the other countries they operate in, they offer at least a bridge/modem mode.
But sorry, in Germany we can't do that either.
And now Vodafone wants to buy that company.
When bad gets even worse x)
I assume so. QAM16 will only transmit 4 bits per symbol, while QAM64 will use 6 bits per symbol, so a QAM64 channel will have 1.5 times the bandwidth of a QAM16 one. I have very little recent experience with DOCSIS, but I believe QAM16 to be the worst-case uplink modulation; this is pretty terrible.
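The 1.5× figure follows directly from the bits-per-symbol math; a quick sanity check with plain awk (nothing DOCSIS-specific here, just log2 arithmetic):

```shell
# Bits per symbol for M-QAM is log2(M); at a fixed symbol rate the channel
# throughput scales with that number.
awk 'BEGIN {
    qam16 = log(16) / log(2)   # 4 bits/symbol
    qam64 = log(64) / log(2)   # 6 bits/symbol
    printf "QAM16: %g bits/symbol\nQAM64: %g bits/symbol\nratio: %g\n",
           qam16, qam64, qam64 / qam16
}'
```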
So I assume this is UM. Are they really using full dual stack, or rather ds-lite? But even in the ds-lite case (where each IPv4 packet incurs an additional 40-byte IPv6 header), I believe the idea was that the CPE-CMTS connection should use baby jumbo frames so that the MTU into the internet would still be 1500; just shows how naive my beliefs seem to be...
Well, personally I would never want IPv4 only, IPv6 is not only the future but the transition already started so it is also the present (plus IPv6 elegantly side-steps the nasty reachability-from-the-outside issues caused by CG-NAT).
Not even "fixed" firmware releases?
Not sure, I heard great things about the DOCSIS section of Vodafone in Germany, including that they seem to allow customers with non-rented modems to choose between dual-stack, ds-lite and IPv4, and they seem to have a decent information policy towards end customers, so not all in this coming change might be as bleak as you might think...
BTW, this seems to be not uncommon: traffic shaping and CPU frequency scaling do not seem to harmonize very well. It could be that the shaper is bursty enough for the CPU to scale down prematurely, or maybe the scaling governors are not looking at sirq load carefully enough....
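A hypothetical workaround sketch, assuming the standard Linux cpufreq sysfs interface is present on the router: pin all CPUs to the "performance" governor so the frequency cannot drop while the shaper is busy (and revert afterwards if power draw matters):

```shell
# Force every CPU onto the "performance" governor; a sirq-heavy shaping load
# then cannot be mis-read as idleness by ondemand/schedutil.
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$g"
done
```

If full throughput comes back with the governor pinned, that would confirm the scaling-versus-shaping interaction rather than a raw CPU shortage.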
And for the most part NAT64 works great. I think just go with IPv6-only; they'll shove the entirety of the IPv4 internet into a tiny corner of IPv6 and translate for you... I've tested Android, Linux and Windows machines on an IPv6-only LAN with tayga on the router and it works well for most things. I do suspect a few games and things will suffer. For those devices you can run CLAT on the router and give out a few static IPv4s to the few machines that need it.
Or just use their DS-Lite, but have your router only give out static IPv4 reservations to the few devices that can't handle IPv6-only, game consoles etc.
Yes. They offer full dual stack now, but you have to add an option to your plan.
Either the "Power-Upload" option (which obviously is useless because of the segment overloads everywhere) or the "Telefon Comfort" option.
I think the MTU is 1460 because the "main" gateway is IPv6 and the IPv4 part is handled by a different gateway. So they created a tunnel between the two?
That could be an IPv6 tunnel (as the IPv6 header takes 40 bytes), or potentially a tool that reports the TCP maximum segment size (MSS) instead of the MTU (the 20-byte IPv4 header and the 20-byte TCP header are deducted from the MTU to get the MSS; I am simplifying here a bit).
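Both readings land on the same number, which is exactly what makes it ambiguous; with a 1500-byte base MTU (shell arithmetic, just illustrating the two interpretations):

```shell
mtu=1500
# Interpretation 1: 1460 is the TCP MSS on an un-tunneled IPv4 path
echo "MSS:  $(( mtu - 20 - 20 ))"   # 1500 - IPv4 hdr - TCP hdr = 1460
# Interpretation 2: 1460 is the real MTU after a 40-byte IPv6 tunnel header
echo "MTU': $(( mtu - 40 ))"        # 1500 - IPv6 hdr = 1460
```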
I never had DS-Lite (luckily I had a native IPv4 connection in the past, and now dual stack).
From forum posts (unofficial ISP forum) I could infer that there are some problems with DS-Lite...
The IP changes a lot and no port forwards work (because the AFTR is doing the NAT?).
Now the question is...
Is it possible to configure an AFTR gateway to operate like a "normal" gateway?
So it assigns a somewhat static IPv4 address and opens up the ports?
Then the "main" gateway has an IPv6 tunnel connection to the AFTR gateway, which serves the IPv4 connection?
I replaced my ISP router with a plain modem (not easy to get a EuroDOCSIS modem in Germany);
the connection is much, much better.
Fewer errors, better latency, more download channels (32 vs 24).
I set cake to the advertised speeds (400/40 Mbit/s), which ends up at ~380/38 Mbit/s (TCP/IP overhead?).
Works well; nice low pings, ~20 ms while the connection is saturated.
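The ~5% gap is roughly what per-packet headers cost, and cake has a docsis overhead keyword so it can account for the framing the CMTS charges you for instead of guessing. A sketch (eth0 and the rate are placeholders for your setup; sqm-scripts normally issues the equivalent for you):

```shell
# Egress shaper sketch on the modem-facing interface. The "docsis" keyword
# makes cake account for 18 bytes of Ethernet framing per packet, the same
# accounting the CMTS applies.
tc qdisc replace dev eth0 root cake bandwidth 38mbit docsis
#
# Goodput sanity check for the ~380/400 ratio: a full-size TCP segment
# carries 1500 - 20 (IPv4) - 20 (TCP) - 12 (timestamps) = 1448 payload
# bytes in a 1518-byte framed packet, i.e. ~95% -> ~381 Mbit/s of a
# 400 Mbit/s shaper rate.
```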
Only thing that bugs me a bit...
cake by default puts all ARP traffic into the high-priority tin.
On a DOCSIS connection that can be a lot of traffic/packets ending up in the high-priority tin.
I measured gigabytes of ARP traffic over a month. Most of it is useless anyway.
Maybe I'll create an arptables rule that drops all that unneeded traffic, or remove the ARP-to-CS7 mapping in cake.
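If you go the arptables route, a minimal sketch (the interface name and address are placeholders for your setup; test carefully, since dropping ARP for your own IP or your gateway's replies will take the link down):

```shell
# Drop incoming ARP requests on the modem-facing interface that are not
# asking for our own address (203.0.113.10 is a placeholder); requests for
# our address and all replies still pass, so resolution keeps working.
arptables -A INPUT -i eth0 --opcode Request ! -d 203.0.113.10 -j DROP
```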