SQM setting question - Link Layer Adaptation

Oh, don't be sorry, you gave good advice, I just wanted to add specifics for the OP's precise situation, knowing about ATM_overhead_detector's limitations :wink:

Best Regards

Nice :relaxed:

Hello, is there a guide to using ATM_overhead_detector? My brain is having trouble understanding how to use it; a video would be nice.

I can help you if you are using a Linux distro, e.g. Ubuntu.

Thanks, guys. Hadn't dug up a Cake man page yet, appreciate that. And Moeller0, think I've read you before on this, thanks. Now, who gets to notify the docs people here on the LEDE and OpenWRT sites? :wink:

A few more questions. I've been mostly using luci, so I'm not so familiar with what it writes, nor with the process of manually inputting commands. I'm just starting to run tc -d qdisc and beginning to understand the outputs...

So, sounds like with luci, setting "Ethernet with overhead...." and 18 bytes would do it for Cable Modems? Or is it different or better to do your manual command?

Second question, how does one enter manual commands, and set them up so they are there at bootup, if not using standard luci interface stuff?

I have another issue with my UL behavior, but I think it really belongs in its own thread, so I'll save asking what various tc -d and -s outputs mean, and what might be better settings than what I have, for that...

Oops, I guess I assumed that you had manually configured cake in the first place. Doing it via the GUI is not hard either:

  1. select the "Link Layer Adaptation" tab (in luci-app-sqm)
  2. select ""Ethernet with overhead: select for e.g. VDSL2."" in the "Which link layer to account for:" dropdown box
  3. enter 18 in the "Per Packet Overhead (byte):" field (currently this will be applied symmetrically in ingress and egress direction)
  4. check the "Show Advanced Linklayer Options, (only needed if MTU > 1500). Advanced options will only be used as long as this box is checked." checkbox
  5. select either "default", "tc_stab", or "cake" in the "Which linklayer adaptation mechanism to use; for testing only" drop down box.

6.1) if you selected tc_stab, htb_private, or default here and did not select cake as qdisc (and neither piece_of_cake nor layer_cake as "Queue setup script"):
a) enter 64 in the "Minimal packet size, MPU (byte); needs to be > 0 for ethernet size tables:" field.

6.2) if you selected either cake here or default and selected cake as qdisc,
a) select the "Queue Discipline" tab
b) check both the "Show and Use Advanced Configuration. Advanced options will only be used as long as this box is checked." and the "Show and Use Dangerous Configuration. Dangerous options will only be used as long as this box is checked." checkboxes
c) add "mpu 64" to both the "Advanced option string to pass to the ingress queueing disciplines; no error checking, use very carefully." and "Advanced option string to pass to the egress queueing disciplines; no error checking, use very carefully." fields.

While you are at that stage you might want to put the following into "Advanced option string to pass to the ingress queueing disciplines; no error checking, use very carefully.":
"mpu 64 nat dual-dsthost"
and the following into "Advanced option string to pass to the egress queueing disciplines; no error checking, use very carefully.":
"mpu 64 nat dual-srchost"

The "mpu 64" option will make sure the shaper knows all L2 ethernet frames passed to the cable modem are at least 64 bytes (which they are according to ethernet standards)
The "nat" option will instruct cake to look into the kernels network address translation tables to get access to the real internal addresses of packets, this in turn will give the next option ist "bite".
The "dual-XXXhost" option instructs cake to first distribute available bandwidth fairly by IP addresses (and for each IP address it will also give you per-flow-fairness); for ingress the destination address contains your internal addresses, hen dual-dsthost on ingress; for egress it is the source address, hence dual-dsthost. The effect of these two stanzas is per-internal-host fairness, in an ideal world this should make it less annoying if individual hosts in your network are using many flows (like bittorrent). That is the latency increase by bit torrenting should be isolated to those machines that actually are sending/receiving torrent traffic, but other machines on your network should be fine. (Please note that if you try VoIP and heavy torrenting from the same internal computer you still will get bad latency...)

In case you want to connect to your router via a shell / the command line interface, https://wiki.openwrt.org/doc/howto/firstlogin has some instructions. I believe LEDE currently does not allow telnet at all, but will accept ssh without a password on first access. In any case I would recommend avoiding telnet even if it still worked, and always using ssh instead. If this does not get you started and able to execute anything you want on your router's CLI, please holler (or better, start a new topic asking that question explicitly; such questions are much more searchable when the responses are not "littered" over otherwise unrelated topics...)

Hope that helps...

ADDED 2022: mpu 64 is not magically correct for all link types; this really needs to describe the smallest data frame that is accounted against the gross shaper rate. Different technologies have different minimal sizes, and although many derive from ethernet's 64 byte minimum frame size, to get a true mpu we still need to add some technology-specific overhead:

mpu 64 (the DOCSIS shaper accounts for the same ethernet fields that count toward the minimal 64 byte L2 ethernet frame size)

mpu 84 (this adds the preamble, the start frame delimiter (SFD), and the inter-frame gap (IFG) on top of the fields relevant for DOCSIS)

mpu 68 (PTM uses ethernet frames without preamble, SFD, and IFG, but adds an additional 4 bytes of PTM overhead)

For ATM/AAL5 all bets are off: the encapsulation either includes the frame check sequence, and hence inherits the ethernet mpu, or it does not, and there is no easy way to discern the two cases. So, better safe than sorry:
mpu 96 (this equals the payload of two ATM cells and should be a conservative guess).
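In all of these cases the mpu is just a floor on the per-packet size the shaper accounts for. A minimal sketch of that accounting rule (plain arithmetic, nothing cake-specific):

```shell
# Size the shaper charges for one packet: IP size plus per-packet overhead,
# but never less than the configured minimum packet unit (mpu).
accounted_size() {  # args: ip_packet_bytes overhead_bytes mpu_bytes
  size=$(( $1 + $2 ))
  [ "$size" -lt "$3" ] && size=$3
  echo "$size"
}

accounted_size 40 18 64    # a bare TCP ACK: 40+18=58 < 64, charged as 64
accounted_size 1500 18 64  # a full-MTU packet: charged as 1518
```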


So you can add commands to /etc/rc.local; that script is executed once per boot. But for things that are ephemeral, like network interfaces (which can come and go, especially interfaces like pppoe-wan), you actually want to hook your commands into the hotplug mechanism so they are executed every time the interface comes up. In sqm-scripts' case we expose option fields in the GUI (with no error checking) into which you can put a lot of individual configuration options that are properly handled with hotplug.
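As a sketch of the hotplug approach: the file name 99-sqm-extra below is made up, and the tc command just reuses example values from this thread; ACTION, INTERFACE, and DEVICE are the variables OpenWrt sets for iface hotplug events:

```shell
# /etc/hotplug.d/iface/99-sqm-extra
# Runs on every interface event; re-apply the qdisc when wan comes (back) up.
if [ "$ACTION" = "ifup" ] && [ "$INTERFACE" = "wan" ]; then
    tc qdisc replace dev "$DEVICE" root cake bandwidth 30Mbit overhead 18 mpu 64
fi
```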

Thanks Sebastian... (it is Sebastian, yes?)

I had already been running steps 1, 2 and 3. A few more questions if I may.

General question: if you set up an SQM instance and don't have the SQM checkbox enabled, is there some part still working? This is how I go back and forth while testing, vs. uninstalling luci-app-sqm.

For step 4 and onward, do I want to play with the defaults for max size, number of entries, etc. that I see there? Or just leave them at the defaults?

I'm using cake as my qdisc, so I will set it to cake for step 5. I have some questions (rusty networking knowledge) on whether I have a larger-than-1500 MTU. Is the max length from a tc -s qdisc the same as the MTU, or not?

Will try the setting suggestions in 6.2 as well. It will be interesting to see how much CPU/IRQ overhead this adds, since my situation is also a case of how far you can push an Archer C7, with a 300/30 Mbit link. Something I'm also testing the limits of... but that's another thread... :wink:

You can have multiple concurrent configuration sections and only those where the "Enable this SQM instance." checkbox is checked will be active, so you can have disabled sections in your configuration without any side effects on the active sections.

In all likelihood not; currently, for fq_codel, on most links I would recommend setting tcMPU to 64, but in real life, unless you flood your link with tiny packets, you will not be able to see a difference.

For cake none of these values currently has any influence; in the future sqm-scripts will pass tcMPU to cake. In essence, the max_len you see correlates with what you should put into the MTU field. But the relationship is a bit complex (as in, it depends on a number of conditions), so unless your max_len/max_packet is larger than 2000 AND you are using fq_codel, leaving these alone is fine.

Good point; please note that cake will tend to keep the configured bandwidth when CPU-starved (while latency under load will increase a bit), while HTB+fq_codel will tend to keep the configured latency while bandwidth decreases. At 300/30 I would recommend testing both cake/piece_of_cake and fq_codel/simplest.qos, as I assume the C7 will struggle to keep up with 300/30 and you will have to make a judgement call/policy decision...

Best Regards

It does. One of my questions there is what happens when you do run out of CPU: how bad is it? Good to learn a bit about that. I had determined that I can run up to 140-145 Mbit configured bandwidth before I start bumping against 0% idle and 100% sirq during a DSLReports run. I see maybe 75-85% CPU on the ksoftirqd task. I don't know which of these numbers is the more important one for judging overall load.

It will run higher than that, just much more tightly pegged out. It seems to still improve latency, but not as much.

From my observations the idle number is the most relevant; once you reach 0, shaping is going to suffer.

Well, as I said, cake will try to shape up to the configured bandwidth, accepting more latency under load, while HTB+fq_codel will keep latency close to the theoretical limits while sacrificing bandwidth. How much each compromises the "other" measure really depends on how far you run them out of their comfort zone. I have no numbers to quantify this effect, though...

Hi mindwolf,

My theoretical calculation. Please chime in with opinions and your experiences using this calculation.
DOCSIS 3.0 uses up to 42,880 Kbps per channel for a total of 686,880 Kbps (608,000 goodput) Ds and 122,880 Kbps (108,000 goodput) Us.

This might or might not be true (no time to look it up), but I assume that this is not going to be the relevant limit for you, as DOCSIS systems employ a traffic shaper that keeps the user traffic within the contracted rates. (Now if you have a DOCSIS segment all to yourselves that might be different, but in that case I am certain you need to already know more about DOCSIS than I ever will, so in that case just ignore me).

10% of the max achievable Ds and 5% of the max achievable Us. MTU 1500 minus 20 bytes of TCP overhead and 20 bytes of IPv4 overhead. 44 bytes total = 6 bytes of overhead for DOCSIS 3.0, 14 bytes of layer 1 overhead, 14 bytes of layer 2 overhead,

This is a bit opaque to me, but the DOCSIS shaper is known to take a full ethernet frame, including the frame check sequence, into account, so that will be 1518 bytes at most. Now, what layer that actually lives in is somewhat tricky for me to discern, as DOCSIS will drag in its own additional overheads (like the MPEG headers) but will leave out typical ethernet overhead like the preamble and the inter-packet gap... But for end users, as far as I can see, the ISP's shaper setting and the assumption of 18 bytes of overhead on top of the IP packet should hold true.


Not sure that I agree with that:
686880 * 0.9 = 618192 (686880 minus 10% of 686880)
686880 * 0.1 = 68688 (10% of 686880)

68688 - 2070 = 66618
Package listed as 60 Ds; 4 Us
The above answers of 66618 & 5959 match very closely, within 1% of my max achievable speeds without congestion on the wire and without SQM enabled.

But more relevantly, I would start from the measured goodput (in Kbps?) and calculate backwards from that (assuming TCP/IPv4):
66618 * ((1500 + 14 + 4) / (1500 - 20 -20)) = 69264.4684932 Kbps
5959 * ((1500 + 14 + 4) / (1500 - 20 -20)) = 6195.72739726 Kbps
So, as is typical for cable ISPs, you seem to have more bandwidth allotted than you could expect from your plan. But if your ISP uses power-boost, that bandwidth might not actually be sustained and might throttle back after the ~10 seconds a speedtest typically runs*. Also, due to the interaction of the DOCSIS shaper's assumption of 18 bytes of per-packet overhead with the true DOCSIS per-packet overheads, that extra bandwidth might just be a stopgap measure to make sure you get your contracted bandwidth even when you use smaller packets, but I have no data to back this up...
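The back-calculation above can be spelled out as a tiny helper (integer shell arithmetic, so the results are truncated versions of the exact figures quoted):

```shell
# Gross shaper rate from measured TCP/IPv4 goodput over a full-MTU frame:
# frame   = 1500 byte IP packet + 14 ethernet header + 4 FCS,
# payload = 1500 - 20 (IPv4) - 20 (TCP).
gross_rate() {  # arg: measured goodput in Kbps
  echo $(( $1 * (1500 + 14 + 4) / (1500 - 20 - 20) ))
}

gross_rate 66618  # downstream -> 69264 (vs the exact 69264.47 above)
gross_rate 5959   # upstream   -> 6195  (vs the exact 6195.73 above)
```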

I offer all of this as simple opinion, as I have not fully understood what your exact argument is (I am not a native English speaker, so please forgive my denseness).

Best Regards

*) This is why I recommend using a speedtest that can run for more than 10 seconds; see [SQM/QOS] Recommended settings for the dslreports speedtest (bufferbloat testing)

I found some good information from this link http://www.bowe.id.au/michael/isp/DOCSIS/collected-references/docsisdsusspeedplaybook8-5-16-160807165905.pdf


Great find! Thanks for posting, even though this looks more like something for your DOCSIS ISP, as there is very little actionable stuff in this presentation for end users (and by very little I mean nothing); still, it is certainly a good reference for DOCSIS issues.

Thanks! I have contacted volpefirm.com, which seems to be a VERY knowledgeable source on DOCSIS-related material. Waiting on a reply to clear up any confusion, hopefully with a straightforward answer.


Sure, good idea to inquire with experts, but what exactly are you looking for? The 18 bytes of overhead for DOCSIS come straight from the DOCSIS standards documents:

Interestingly, the shaper used in DOCSIS systems to limit a user's maximal bandwidth completely ignores DOCSIS overhead and counts only ethernet frames, including their frame check sequence (FCS, 4 bytes). To cite the relevant section of the DOCSIS standard (http://www.cablelabs.com/specification/docsis-3-0-mac-and-upper-layer-protocols-interface-specification/):

"C. Maximum Sustained Traffic Rate 632 This parameter is the rate parameter R of a token-bucket-based rate limit for packets. R is expressed in bits per second, and MUST take into account all MAC frame data PDU of the Service Flow from the byte following the MAC header HCS to the end of the CRC, including every PDU in the case of a Concatenated MAC Frame. This parameter is applied after Payload Header Suppression; it does not include the bytes suppressed for PHS. The number of bytes forwarded (in bytes) is limited during any time interval T by Max(T), as described in the expression: Max(T) = T * (R / 8) + B, (1) where the parameter B (in bytes) is the Maximum Traffic Burst Configuration Setting (refer to Annex C. NOTE: This parameter does not limit the instantaneous rate of the Service Flow. The specific algorithm for enforcing this parameter is not mandated here. Any implementation which satisfies the above equation is conformant. In particular, the granularity of enforcement and the minimum implemented value of this parameter are vendor specific. The CMTS SHOULD support a granularity of at most 100 kbps. The CM SHOULD support a granularity of at most 100 kbps. NOTE: If this parameter is omitted or set to zero, then there is no explicitly-enforced traffic rate maximum. This field specifies only a bound, not a guarantee that this rate is available."

So, in essence, DOCSIS users only need to account for 18 bytes of ethernet overhead in both the ingress and egress directions under non-congested conditions.
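Those 18 bytes are simply the ethernet framing fields that sit around the IP packet in the frame the DOCSIS token bucket counts (a breakdown for illustration):

```shell
# Ethernet fields counted on top of the IP packet by the DOCSIS rate limiter:
dst_mac=6; src_mac=6; ethertype=2; fcs=4
echo $(( dst_mac + src_mac + ethertype + fcs ))  # -> 18 bytes of overhead
```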

This is not intended to curtail your research into the intricacies of the DOCSIS standards, just to state that, as far as we know from the sqm-scripts perspective, 18 bytes of overhead is the relevant number. BUT the trickier part is getting the exact number the DOCSIS shaper is set to (luckily most cable ISPs seem to provision more gross bandwidth than is visible in the contract, so using the contracted numbers should work as a starting point).


If my router doesn't have nat, can I use mpu 64 dual-dsthost and mpu 64 dual-srchost?

Not sure what you mean here?

Probably, but you can always test whether it does the right thing:

A) On two computers, run one speedtest each at the same time (use something like fast.com that can be configured for a relatively long run time, so starting multiple instances gets easier).
With or without the dual-xxxhost options you should see roughly the same rate for each speedtest.

B) On the first computer start one speedtest as above, and on the second computer start two speedtests (in two different browsers, probably).
Now you will either get identical results for all three speedtests, indicating that dual-xxxhost did not do its thing, or roughly X Mbps on the first computer and X/2 for each of the two speedtests on the second computer, which would show dual-xxxhost working as intended.

Quick question though: why post this in a thread about Link Layer Adaptation instead of starting a new, focused thread with an appropriate title?

To save time; also, what I am looking for is in this topic.

When I use mpu 64 nat dual-dsthost the bandwidth is reduced; I think the problem here is NAT.