General Discussion of SQM

I use flent from Ubuntu running in virtualbox in Windows 10.

In that scenario, I wonder whether there are other "outside" factors, such as the virtual drivers buffering or doing weird things, on top of what a native system already does. I have a cheap Lenovo laptop that doesn't have an Ethernet port, but I may buy a USB-to-RJ45 adapter. I worry about that causing weird issues as well, lol.

I'd like to test the default script against these changes for fairness and bandwidth sharing. At the moment my network is flying and my VPN is very stable compared to piece_of_cake. I don't think my TP-Link Archer C7 v4 can handle a lot.
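For reference, a sketch of the kind of flent run that could drive such a comparison (the netperf server hostname, duration, and plot type below are assumptions, not from this thread):

# RRUL test: saturates both directions while measuring latency; run once per configuration
flent rrul -p all_scaled -l 60 -H netperf-eu.bufferbloat.net -t "sqm-test" -o sqm-test.png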

Is it possible that the split-gso and ack-filter settings introduce some latency (up to ~10 ms)?
And maybe there is some incompatibility with DSA?
I think DSA doesn't support GSO (and other offload features), because the max packet size in the cake and fq_codel statistics is always ~1500 bytes.
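For what it's worth, a hedged sketch of how the two options in question could be toggled on a running cake instance for a back-to-back comparison (the interface name and the 36 Mbit bandwidth mirror the stats shown further down; adjust for your link):

# enable the two options under discussion
tc qdisc change dev wan root cake bandwidth 36Mbit split-gso ack-filter
# switch back to the settings shown in the stats below
tc qdisc change dev wan root cake bandwidth 36Mbit no-split-gso no-ack-filter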
//edit
nvm, it was only good for a couple of minutes.
Now it looks like this again:

tc -s -d qdisc show dev wan
qdisc cake 8016: root refcnt 2 bandwidth 36Mbit besteffort dual-srchost nat wash no-ack-filter no-split-gso rtt 100.0ms noatm overhead 0 mpu 64
 Sent 513004028 bytes 695030 pkt (dropped 218, overlimits 620934 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 181632b of 4Mb
 capacity estimate: 36Mbit
 min/max network layer size:           28 /    1500
 min/max overhead-adjusted size:       64 /    1500
 average network hdr offset:           14

                  Tin 0
  thresh         36Mbit
  target          5.0ms
  interval      100.0ms
  pk_delay       10.5ms
  av_delay        3.7ms
  sp_delay        265us
  backlog            0b
  pkts           695248
  bytes       513325360
  way_inds           37
  way_miss          796
  way_cols            0
  drops             218
  marks               0
  ack_drop            0
  sp_flows            2
  bk_flows            1
  un_flows            0
  max_len          1514
  quantum          1098

There is not much load on the line, ~5 Mbit/s.
Is a pk_delay of ~10 ms normal?

Not by design, no, but bugs certainly are possible...

What do pk_delay, av_delay, and sp_delay mean?

pk_delay = packet delay? What delay exactly? The delay when dequeuing the packet from the qdisc to the NIC? Can the NIC driver also increase this delay?
av_delay = average delay?
sp_delay = ?

I mentioned this delay some time ago.
I don't think this is a problem with DSA, because I also had it with swconfig.

@ldir has a great post over at https://forum.openwrt.org/t/sqm-reporting/59960/6?u=moeller0 that should cover your questions.

Thanks, that clears things up a bit.

For max_len, what is a "normal" value when using GRO/GSO?

In ldir's post it is around ~11000; in my cake output it is around ~65000.
But on eth0, which is running fq_codel, max_len is around ~4000?

And I asked this before: why does cake put ARP packets into the high-priority tin?
On my DOCSIS link there was high ARP traffic (but they have fixed it now).
cake puts the ARP packets into the high-priority tin, but they get dropped anyway afterwards.
Is it a good idea to hard-code this?

I believe 64k is the maximum (coming from IP and/or TCP), and the actual maximum length you see depends on your traffic. As far as I can tell, GRO will aggregate related sequential packets that arrive within a certain interval into a super-packet; if the traffic is well mixed, your super-packets will be smaller, and if the traffic is more bursty, with bursts from the same flow, you should see a larger max_len.
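If you want to check which offloads are actually active on an interface (and hence whether such super-packets can be built at all), something along these lines should work; the interface name is only an example:

# show the GRO/GSO/TSO offload flags for eth0 (the exact feature list varies by driver)
ethtool -k eth0 | grep -E 'generic-(receive|segmentation)-offload|tcp-segmentation-offload'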

You might need to ask this in the cake list as cake's core developers are not all active in this forum. But maybe @tohojo might share some insight here?

I believe the idea is that ARP packets are sufficiently important that expediting them seemed like a good idea (nodes waiting for ARP to proceed will be starved if ARP packets are blocked). But that probably wants reconsideration in light of ARP flooding (then again, cake will avoid starvation of the lower-priority tiers, so things are not so bad, and if your upstream floods you with ARPs, there is not much cake can do for ingress traffic).

Yup, exactly.

As you say, there's not much you can do against an ARP flood. And CAKE will automatically deprioritise tins that go over their bandwidth share. So I don't really think there's any reason to change the current behaviour?

Hello,

Can anyone confirm if AT&T Uverse (FTTN, copper to my building & apartment) uses DSL settings for Link Layer Adaptation? ATM / 44 bytes if I plan to use SQM / cake?

Ethernet with 44 bytes of overhead would be a good start. It almost certainly doesn't use ATM; only ancient ADSL is likely to use that, VDSL doesn't.
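As a rough sketch, assuming the sqm-scripts/luci-app-sqm package, the relevant part of /etc/config/sqm could look like this (interface name and shaper rates, in kbit/s, are placeholders; only the link-layer settings follow the advice above):

config queue 'wan'
        option interface 'eth0'
        option download '40000'
        option upload '4500'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option linklayer 'ethernet'
        option overhead '44'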

In all likelihood it is not ATM/AAL5, but if you really wonder, have a look at https://github.com/moeller0/ATM_overhead_detector

I was also thinking of looking at the AT&T modem's status page to see if it shows ADSL or VDSL connection type, but didn't get a chance to check. Thank you for the detector in case I need it to confirm.

According to that, ADSL2+ maxes out at 24.0 Mbps down / 3.3 Mbps up, so if you're above that it's certainly VDSL.

AT&T is reported to use PTM on at least some of its ADSL links, so even with ADSL, ATM encapsulation can be avoided....

I was briefly able to test the line this morning and it was about 45 down / 5 up, so it must be VDSL. Ethernet / 44 bytes to start, then.

Actually, the documentation says "For VDSL - Choose Ethernet, and set per packet overhead 34 (or 26 if you know you are not using PPPoE)"

If I started with 34 or 26, I can still use the same tuning methods to get to my sweet spot, right? Keep adjusting the target download/upload speeds until latency starts to suffer?

I'm on VDSL2; I use 44 and it works great at 70/20.

I haven't tried 44, but 34 works well with my 25/10 Mbps PTM VDSL.

Yes and no; the problem is that there are two unknowns, the veridical per-packet overhead and the gross rate of the link. The latter is limited by the sync, but since most ISPs seem to use a per-subscriber traffic shaper, the sync often is not the relevant gross rate.
The problem is now that underestimating one parameter can be compensated by overestimating the other. To test whether a selected combination of rate and overhead works well, one can perform bufferbloat tests at different packet sizes, e.g. by using MSS clamping on the router....
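As a hedged example of what such a test could look like on the router (the 540-byte MSS is just an illustrative small value, and the rule should be removed again afterwards):

# temporarily clamp the MSS of forwarded TCP connections so the bufferbloat test runs with small packets
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 540
# run the bufferbloat test, then delete the rule again
iptables -t mangle -D FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 540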

This ended up being VDSL2 (as shown by the AT&T modem's status page), and I'm getting great results with Ethernet / 26 bytes link-layer adaptation at 90% of my non-QoS speed. Thank you all :)