SQM broken in download direction, OpenWrt 24.10, tested on multiple targets

Hey All,

I am having trouble with SQM in the download direction:

I recently set up a new Linksys E8450 in a location with very low bandwidth: 1.5 Mbit/s down, 0.5 Mbit/s up. I discovered SQM does not keep the download throughput below what is configured, which causes latency to spike. Even if I configure SQM for a very low download rate, say 0.75 Mbit/s, the download speed still reaches 1.5 Mbit/s and latency spikes.

I am now testing this from my office with a high-speed network on the WAN side. I have also tested an OpenWrt One and an x86-based router; the results are the same.

I have tried several 24.10 builds including an older one from back in March 2025.

I have all offloading and WED disabled.

For testing, I have SQM configured like this (this is mostly the default config):

config queue 'eth1'
        option enabled '1'
        option interface 'wan'
        option download '10240'
        option upload '10240'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option qdisc_advanced '0'
        option ingress_ecn 'ECN'
        option egress_ecn 'ECN'
        option qdisc_really_really_advanced '0'
        option itarget 'auto'
        option etarget 'auto'
        option linklayer 'none'

When using bwm-ng, it is easy to see that the download speed is not being restricted by the IFB like it should be, causing the modem’s buffer to fill and latency to spike:

 bwm-ng v0.6.3 (delay 1.000s);
 input: /proc/net/dev; press 'ctrl-c' to end this
 \         iface                    Rx                   Tx               Total
 ==============================================================================
             wan:        1471.85 KB/s           58.85 KB/s         1530.69 KB/s
          br-lan:          47.05 KB/s         1250.20 KB/s         1297.25 KB/s
         ifb4wan:        1251.76 KB/s         1251.76 KB/s         2503.52 KB/s
 ------------------------------------------------------------------------------
           total:        2769.40 KB/s         2559.56 KB/s         5328.96 KB/s

Download speed should never go over 1280 KB/s (10240 kbit/s ÷ 8).

Note how the wan speed is higher than the ifb4wan speed; they should be nearly identical, right? The ifb4wan speed is correct, but the wan speed is too high. I’ve seen the wan speed run at 2x the ifb4wan speed.

I have spent hours trying various SQM and network options… I am at a loss, hopefully someone can give me a suggestion or pointer?

This is a low-speed connection, so I understand this configuration is probably not used much anymore; perhaps nobody else has noticed this issue?

Thanks!

This can’t be done on the router itself. It has no real control over what is being sent to it and can only try to get senders to reduce what is being sent by doing something like dropping packets. That works for some senders and not at all for others.

If you need total control you have to do it from the other end - someplace upstream. A firewall/router/VPS/VPN with a faster connection configured with QoS to control what then gets sent down your limited connection. Adds complexity and more points of failure but will get you the level of control desired.

Usually a fair bit of traffic is from senders that do respond to the attempts to slow down/limit traffic, so things work well. However, it only takes one or two senders that ignore those attempts to saturate the incoming connection.

@netprince What do you see happening at the OS level?

tc -s qdisc show dev wan
tc -s qdisc show dev ifb4wan
tc -s filter show dev wan parent ffff:

Did you change the bandwidth in SQM to match your actual values? Nothing good will happen if it is still set to 10 Mbit/s in both directions.

Thanks for the reply, that makes a lot of sense.

Here is what the tc commands show:

root@router.testnet:~$ tc -s qdisc show dev wan
qdisc cake 800d: root refcnt 2 bandwidth 460Kbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead 0
 Sent 24960500 bytes 32157 pkt (dropped 4089, overlimits 57357 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 163876b of 4Mb
 capacity estimate: 460Kbit
 min/max network layer size:           42 /    1514
 min/max overhead-adjusted size:       42 /    1514
 average network hdr offset:           14

                  Tin 0
  thresh        460Kbit
  target         39.5ms
  interval        134ms
  pk_delay       25.2ms
  av_delay       3.83ms
  sp_delay         28us
  backlog            0b
  pkts            36246
  bytes        30293018
  way_inds          283
  way_miss          856
  way_cols            0
  drops            4089
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            1
  un_flows            0
  max_len         11952
  quantum           300

qdisc ingress ffff: parent ffff:fff1 ----------------
 Sent 10297906 bytes 45965 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
root@router.testnet:~$ tc -s qdisc show dev ifb4wan
qdisc cake 800e: root refcnt 2 bandwidth 1101Kbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100ms raw overhead 0
 Sent 9353577 bytes 44878 pkt (dropped 1101, overlimits 18541 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 529536b of 4Mb
 capacity estimate: 1101Kbit
 min/max network layer size:           60 /    1514
 min/max overhead-adjusted size:       60 /    1514
 average network hdr offset:           14

                  Tin 0
  thresh       1101Kbit
  target         16.5ms
  interval        112ms
  pk_delay       3.61ms
  av_delay        388us
  sp_delay         10us
  backlog            0b
  pkts            45979
  bytes        10984505
  way_inds         2674
  way_miss         1006
  way_cols            0
  drops            1101
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len         10598
  quantum           300

root@router.testnet:~$ tc -s filter show dev wan parent ffff:
filter protocol all pref 10 u32 chain 0
filter protocol all pref 10 u32 chain 0 fh 800: ht divisor 1
filter protocol all pref 10 u32 chain 0 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:1 not_in_hw
  match 00000000/00000000 at 0
        action order 1: mirred (Egress Redirect to device ifb4wan) stolen
        index 1 ref 1 bind 1 installed 4041 sec firstused 4041 sec
        Action statistics:
        Sent 10301947 bytes 46006 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

Yes, I have only been using 10240 for testing; I am also testing 1101/460 because that is about what I need to use on-site.

Thanks

What link layer is that? ADSL? Could you post the output of ifstatus wan | grep device please?

Yes it is ADSL.

I don't currently have access to the on-site router (long story), but I have a matching router here:

root@router.testnet:~$ uname -a
Linux router 6.6.100 #0 SMP Wed Aug 27 13:44:53 2025 aarch64 GNU/Linux

root@router.testnet:~$ cat /tmp/sysinfo/*
linksys,e8450-ubi
Linksys E8450 (UBI)

root@router.testnet:~$ ifstatus wan | grep device
        "l3_device": "wan",
        "device": "wan",

Thanks


Just to manage expectations: at 460 Kbps a single full-MTU packet will take
(1538 * 8) / (460 * 1000) * 1000 = 26.74 milliseconds
and at 1101 Kbps:
(1538 * 8) / (1101 * 1000) * 1000 = 11.18 milliseconds

At those rates all you can do is manage the "pain"...
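The arithmetic above can be sketched as a quick back-of-envelope helper (nothing SQM-specific; the 1538 bytes is an assumed full Ethernet frame on the wire):

```python
def serialization_delay_ms(frame_bytes: int, rate_kbit: float) -> float:
    """Time to serialize one frame of frame_bytes at rate_kbit kbit/s, in ms."""
    return frame_bytes * 8 / (rate_kbit * 1000) * 1000

# A single 1538-byte frame at the two shaper rates discussed above:
print(round(serialization_delay_ms(1538, 460), 2))   # ~26.75 ms
print(round(serialization_delay_ms(1538, 1101), 2))  # ~11.18 ms
```

In other words, even one queued full-size packet adds tens of milliseconds at these rates, so some induced latency is unavoidable.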

Could you post the results of:
https://speed.cloudflare.com
https://www.waveform.com/tools/bufferbloat

from the problematic link, both with SQM disabled and with SQM enabled?

For the ingress SQM instance (ifb4wan), please add the ingress keyword so the shaper will also account for the packets it drops. Also, overhead 0 looks wrong, especially if we might be talking about an ADSL link...

Yes I agree, this speed feels VERY slow. Manage the pain as best as possible is exactly what I would like to do. :slight_smile:

Thanks I will add the ingress keyword. I will also look up the overhead option, do you have any suggestions there?

I am 2 hours away from the location right now, so I won't be able to try anything on-site for a few days, but I will be on-site again this weekend.

Great, that means you should add atm overhead 44 to the invocation of both shapers...
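A sketch of how that could map onto /etc/config/sqm, assuming the stock sqm-scripts package (the linklayer/overhead options are standard; passing the extra ingress keyword through iqdisc_opts requires enabling the advanced toggles — double-check the option names against your sqm-scripts version):

config queue 'eth1'
        option enabled '1'
        option interface 'wan'
        option download '1101'
        option upload '460'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option linklayer 'atm'
        option overhead '44'
        option qdisc_advanced '1'
        option qdisc_really_really_advanced '1'
        option iqdisc_opts 'ingress'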

Thanks, will do, and I will post results when I am on-site.


Hey all,

Looks like adding the ‘ingress’ option has corrected the issue I was seeing where the WAN speed was exceeding the configured download speed. It says this in the docs:

ingress mode modifies how CAKE's shaper accounts for dropped packets, in essence they still count to the bandwidth used calculation even though they're dropped - this makes sense, since they arrived with us but we decided that the particular flow was occupying too much bandwidth so we dropped a packet to signal the other end to slow down. The shaper on egress doesn't count dropped packets, instead it looks in the queues to find a more worthy packet to occupy the space. The bottom line, if you're trying to control ingress packet flow use ingress mode, else don't.

Thanks for the tip!
