New to SQM / looking for help

Mmmh, so that shows that on ingress/download, where you see the atrocious bufferbloat in the Waveform speedtest, cake also reports matching high internal sojourn times... that again makes me think about CPU issues....

BTW, your netdata plots are more helpful than the cake statistics, simply because the peak delay ages out too quickly (this comes from the download test being followed by the upload test: the reverse ACK traffic, and whatnot else, basically resets the download peak counter).
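Since the peak counters decay quickly, one workaround is to sample cake's statistics repeatedly while the speedtest is running. A rough sketch (the ifb4eth0 name is taken from later in this thread; adjust to your own interfaces):

```shell
# print cake's pk_delay fields once per second during a test; the peak
# counters age out quickly, so only samples taken under load are useful
pk_delays() {
  # reads `tc -s qdisc` output on stdin, keeps just the pk_delay values
  grep -o 'pk_delay[[:space:]]*[0-9.]*[a-z]*'
}
for i in 1 2 3; do
  tc -s qdisc show dev ifb4eth0 2>/dev/null | pk_delays
  sleep 1
done
```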

I think seeing the output of:

cat /proc/interrupts

might be interesting....

root@Willthetech_Home:~# cat /proc/interrupts
            CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
   0:         28          0          0          0          0          0          0          0   IO-APIC    2-edge      timer
   1:          0          8          0          0          0          0          0          0   IO-APIC    1-edge      i8042
   4:          0          0          0         15          0          0          0          0   IO-APIC    4-edge      ttyS0
   8:          0          0          1          0          0          0          0          0   IO-APIC    8-edge      rtc0
   9:          0          5          0          0          0          0          0          0   IO-APIC    9-fasteoi   acpi
  12:          9          0          0          0          0          0          0          0   IO-APIC   12-edge      i8042
 123:          0          0          0          0          0         45          0          0   PCI-MSI 32768-edge      i915
 124:          0        206          0          0          0          0       2022          0   PCI-MSI 376832-edge      ahci[0000:00:17.0]
 125:          0          0          0          0          0          0          0          0   PCI-MSI 327680-edge      xhci_hcd
 126:          1          0          0          0          0          0          0          0   PCI-MSI 2097152-edge      eth0
 127:          0        845          0          0    1112058          0          0          0   PCI-MSI 2097153-edge      eth0-TxRx-0
 128:          0          0        148          0          0          0          0     309415   PCI-MSI 2097154-edge      eth0-TxRx-1
 129:          0          0          0     324732          0          0          0          0   PCI-MSI 2097155-edge      eth0-TxRx-2
 130:          0     433138          0          0        141          0          0          0   PCI-MSI 2097156-edge      eth0-TxRx-3
 131:          0          0          0          0          0     593524          0          0   PCI-MSI 2097157-edge      eth0-TxRx-4
 132:          0    1235432          0          0          0          0         95          0   PCI-MSI 2097158-edge      eth0-TxRx-5
 133:          0          0          0          0          0          0     655743         59   PCI-MSI 2097159-edge      eth0-TxRx-6
 134:         73          0     296428          0          0          0          0          0   PCI-MSI 2097160-edge      eth0-TxRx-7
 135:          0          0          0          0          0          0          0          0   PCI-MSI 2099200-edge      eth1
 136:          0       3998          0          0          0          0          7          0   PCI-MSI 2099201-edge      eth1-TxRx-0
 137:          0          0       3998          0          0          0          0          7   PCI-MSI 2099202-edge      eth1-TxRx-1
 138:          7          0          0          0          0          0          0       3998   PCI-MSI 2099203-edge      eth1-TxRx-2
 139:       3998          7          0          0          0          0          0          0   PCI-MSI 2099204-edge      eth1-TxRx-3
 140:          0          0          7          0          0          0          0       3998   PCI-MSI 2099205-edge      eth1-TxRx-4
 141:          0          0          0          7          0       3998          0          0   PCI-MSI 2099206-edge      eth1-TxRx-5
 142:          0          0          0       3998          7          0          0          0   PCI-MSI 2099207-edge      eth1-TxRx-6
 143:          0          0          0          0          0          7       3998          0   PCI-MSI 2099208-edge      eth1-TxRx-7
 144:          0          0          0          0          0          0          0          0   PCI-MSI 2101248-edge      eth2
 145:          0          0          0          0          0       3998          0          7   PCI-MSI 2101249-edge      eth2-TxRx-0
 146:          7          0       3998          0          0          0          0          0   PCI-MSI 2101250-edge      eth2-TxRx-1
 147:          0          7          0          0          0          0       3998          0   PCI-MSI 2101251-edge      eth2-TxRx-2
 148:       3998          0          7          0          0          0          0          0   PCI-MSI 2101252-edge      eth2-TxRx-3
 149:          0          0          0          7       3998          0          0          0   PCI-MSI 2101253-edge      eth2-TxRx-4
 150:          0          0          0       3998          7          0          0          0   PCI-MSI 2101254-edge      eth2-TxRx-5
 151:          0          0          0          0          0          7          0       3998   PCI-MSI 2101255-edge      eth2-TxRx-6
 152:          0          0       3998          0          0          0          7          0   PCI-MSI 2101256-edge      eth2-TxRx-7
 153:          0          0          0          0          0          0          0          1   PCI-MSI 2103296-edge      eth3
 154:        194          0          0          0    1140012          0          0          0   PCI-MSI 2103297-edge      eth3-TxRx-0
 155:          0        202          0     389242          0          0          0          0   PCI-MSI 2103298-edge      eth3-TxRx-1
 156:          0          0         66          0          0          0          0     395008   PCI-MSI 2103299-edge      eth3-TxRx-2
 157:          0     362142          0         83          0          0          0          0   PCI-MSI 2103300-edge      eth3-TxRx-3
 158:          0          0          0          0         92     322859          0          0   PCI-MSI 2103301-edge      eth3-TxRx-4
 159:          0          0          0          0          0     314669          0          0   PCI-MSI 2103302-edge      eth3-TxRx-5
 160:          0          0          0          0          0          0     528970          0   PCI-MSI 2103303-edge      eth3-TxRx-6
 161:          0          0          0          0          0          0    1071147        111   PCI-MSI 2103304-edge      eth3-TxRx-7
 NMI:          0          0          0          0          0          0          0          0   Non-maskable interrupts
 LOC:    3128146    2513398    2687286    2649625    2502015    2488208    2792236    2592385   Local timer interrupts
 SPU:          0          0          0          0          0          0          0          0   Spurious interrupts
 PMI:          0          0          0          0          0          0          0          0   Performance monitoring interrupts
 IWI:          0          0          0          0          0          0          0          0   IRQ work interrupts
 RTR:          5          0          0          0          0          0          0          0   APIC ICR read retries
 RES:       3002       2670       1826       2934       2664       2977       3349       2110   Rescheduling interrupts
 CAL:    1229458    1100057     638168     655226     682061     832889     680377     578411   Function call interrupts
 TLB:        188        178        212        153        279        165        294        133   TLB shootdowns
 TRM:          0          0          0          0          0          0          0          0   Thermal event interrupts
 THR:          0          0          0          0          0          0          0          0   Threshold APIC interrupts
 DFR:          0          0          0          0          0          0          0          0   Deferred Error APIC interrupts
 MCE:          0          0          0          0          0          0          0          0   Machine check exceptions
 MCP:         26         27         27         27         27         27         27         27   Machine check polls
 ERR:          4
 MIS:          0
 PIN:          0          0          0          0          0          0          0          0   Posted-interrupt notification event
 NPI:          0          0          0          0          0          0          0          0   Nested posted-interrupt event
 PIW:          0          0          0          0          0          0          0          0   Posted-interrupt wakeup event
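Dumps like this are easier to eyeball when the per-queue counters are totalled per CPU. A small awk sketch (it assumes the 8 CPU columns and the eth0-TxRx queue naming seen above):

```shell
# sum the eth0 queue interrupt counts per CPU column; field 1 is the
# IRQ number, fields 2-9 are the eight CPUs in this dump
irq_per_cpu() {
  awk '/eth0-TxRx/ { for (i = 2; i <= 9; i++) s[i] += $i }
       END { for (i = 2; i <= 9; i++) printf "CPU%d %d\n", i - 2, s[i] }'
}
cat /proc/interrupts 2>/dev/null | irq_per_cpu
```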

@WilR I don’t know if this will help, but for reference I have a 400/20 Mbps cable connection. I actually get more like 480/24 Mbps. Regardless, here is the SQM config I use:

root@OpenWrt:~# cat /etc/config/sqm

config queue
        option interface 'eth0'
        option debug_logging '0'
        option verbosity '5'
        option qdisc 'cake'
        option linklayer 'none'
        option qdisc_advanced '1'
        option squash_dscp '0'
        option squash_ingress '1'
        option ingress_ecn 'ECN'
        option egress_ecn 'ECN'
        option qdisc_really_really_advanced '1'
        option iqdisc_opts 'docsis dual-dsthost nat ingress'
        option eqdisc_opts 'docsis dual-srchost nat ack-filter'
        option enabled '1'
        option upload '24500'
        option download '462500'
        option script 'ctinfo_4layercake.qos'


I wonder if you might be willing to drop in this config and give it a test. Obviously you’ll want to modify the upload, download, and interface values to match your setup. But I’m just curious what your experience will be with this known-good config.

Notice that my linklayer option is “none”; however, that’s because I specify “docsis” in the advanced config options.
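If I remember the tc-cake man page correctly, the docsis keyword is shorthand for overhead 18 mpu 64, so you can confirm it took effect by looking at the installed qdisc. A sketch:

```shell
# extract cake's overhead/mpu settings from the qdisc line; with the
# docsis keyword this should show "overhead 18 mpu 64"
show_overhead() {
  grep -Eo 'overhead [0-9]+( mpu [0-9]+)?'
}
tc qdisc show dev eth0 2>/dev/null | show_overhead
```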

You can create a backup of your current SQM config in case this doesn’t give positive results.

done

 OpenWrt 21.02.1, r16325-88151b8303
 -----------------------------------------------------
root@Willthetech_Home:~# cat /etc/config/sqm

config queue
        option interface 'eth0'
        option debug_logging '0'
        option verbosity '5'
        option qdisc 'cake'
        option linklayer 'none'
        option qdisc_advanced '1'
        option squash_dscp '0'
        option squash_ingress '1'
        option ingress_ecn 'ECN'
        option egress_ecn 'ECN'
        option qdisc_really_really_advanced '1'
        option iqdisc_opts 'docsis dual-dsthost nat ingress'
        option eqdisc_opts 'docsis dual-srchost nat ack-filter'
        option enabled '1'
        option upload '40000'
        option download '600000'
        option script 'ctinfo_4layercake.qos'

root@Willthetech_Home:~#


It seems like it is not limiting my speeds... also my upload got worse.

Would you mind dropping the output from your tc -s qdisc show dev eth0 ; echo " " ; tc -s qdisc show dev ifb4eth0 command here again?

Mmmh, this looks unsuspicious to me, thanks. Next stop: powertop, to see what the core frequencies do.

I understand that you like qosify over sqm (which is fine), but here both really are just tools to instantiate cake on ingress and/or egress; after they are done setting things up, what is left is identical. So I am not sure how switching to qosify is going to change anything substantial?

Could you post links to the actual test results, please, so we can see more information about the latency samples under the three different test regimes?

I also agree with @_FailSafe that getting the tc -s qdisc output from just after a speedtest will be helpful.

I might be especially thick today, but I do not see the connection between these two threads, sorry. Not trying to diss you (great that you are chiming in), I just wanted to understand the rationale behind your proposed test.

Yeah, @elan, throwing Qosify into the mix right now in the middle of troubleshooting doesn’t seem like it’s helping resolve the underlying issue(s) here.

root@Willthetech_Home:~# tc -s qdisc show dev eth0 ; echo " " ; tc -s qdisc show dev ifb4eth0
qdisc mq 0: root
 Sent 617615747 bytes 1659846 pkt (dropped 0, overlimits 0 requeues 162)
 backlog 0b 0p requeues 162
qdisc fq_codel 0: parent :8 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 129999713 bytes 217041 pkt (dropped 0, overlimits 0 requeues 14)
 backlog 0b 0p requeues 14
  maxpacket 1454 drop_overlimit 0 new_flow_count 5 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: parent :7 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 62446087 bytes 228302 pkt (dropped 0, overlimits 0 requeues 50)
 backlog 0b 0p requeues 50
  maxpacket 15600 drop_overlimit 0 new_flow_count 21 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: parent :6 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 75653711 bytes 138918 pkt (dropped 0, overlimits 0 requeues 4)
 backlog 0b 0p requeues 4
  maxpacket 10178 drop_overlimit 0 new_flow_count 2 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: parent :5 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 135682067 bytes 232298 pkt (dropped 0, overlimits 0 requeues 16)
 backlog 0b 0p requeues 16
  maxpacket 17054 drop_overlimit 0 new_flow_count 9 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 31470783 bytes 257076 pkt (dropped 0, overlimits 0 requeues 15)
 backlog 0b 0p requeues 15
  maxpacket 17054 drop_overlimit 0 new_flow_count 6 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 111550757 bytes 149262 pkt (dropped 0, overlimits 0 requeues 23)
 backlog 0b 0p requeues 23
  maxpacket 17054 drop_overlimit 0 new_flow_count 10 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 49880723 bytes 192538 pkt (dropped 0, overlimits 0 requeues 21)
 backlog 0b 0p requeues 21
  maxpacket 8724 drop_overlimit 0 new_flow_count 10 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 20931906 bytes 244411 pkt (dropped 0, overlimits 0 requeues 19)
 backlog 0b 0p requeues 19
  maxpacket 1274 drop_overlimit 0 new_flow_count 8 ecn_mark 0
  new_flows_len 0 old_flows_len 0

Cannot find device "ifb4eth0"
root@Willthetech_Home:~#

@moeller0 the links from waveform? Sorry, confused by your request :slightly_smiling_face:

So it looks here like your SQM service isn’t started

Odd, the sqm/qos stuff is gone from netdata

Yup, check your SQM service status
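A quick way to see whether SQM actually left cake instances behind (the interface names are the ones from this thread; a sketch, not the service's own tooling):

```shell
# check whether a cake qdisc is actually installed on each interface
has_cake() {
  # reads `tc qdisc show` output on stdin
  grep -q '^qdisc cake'
}
for dev in eth0 ifb4eth0; do
  if tc qdisc show dev "$dev" 2>/dev/null | has_cake; then
    echo "cake active on $dev"
  else
    echo "no cake on $dev"
  fi
done
```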

Ah, no cake at all, which effortlessly explains the speedtest, no?


Looks like it is running....

Yes, if you run a Waveform bufferbloat test like this, there is a line at the bottom labeled "Share Your Results" giving you a link to the detailed results page (rather than the screenshots you posted so far). Here is a screenshot from a test I just ran as an example:

This is different from actually having active cake instances on the respective interfaces:

Please run the following and post the output here:

SQM_DEBUG=1 SQM_VERBOSITY_MAX=11 /etc/init.d/sqm stop ; SQM_DEBUG=1 SQM_VERBOSITY_MAX=11 /etc/init.d/sqm start
root@Willthetech_Home:~# SQM_DEBUG=1 SQM_VERBOSITY_MAX=11 /etc/init.d/sqm stop ; SQM_DEBUG=1 SQM_VERBOSITY_MAX=11 /etc/init.d/sqm start
SQM: Acquired run lock
/usr/lib/sqm/run.sh: line 57: can't create : nonexistent directory
SQM: Stopping SQM on eth0
SQM: Acquired run lock
/usr/lib/sqm/run.sh: line 57: can't create : nonexistent directory
SQM:
SQM: Fri Dec 31 11:36:58 EST 2021: Starting.
SQM: Starting SQM script: layer_cake.qos on eth0, in: 600000 Kbps, out: 40000 Kbps
SQM: fn_exists: function candidate name: sqm_start
SQM: fn_exists: TYPE_OUTPUT: sqm_start: not found
SQM: fn_exists: return value: 1
SQM: Using generic sqm_start_default function.
SQM: fn_exists: function candidate name: sqm_prepare_script
SQM: fn_exists: TYPE_OUTPUT: sqm_prepare_script is a function
SQM: fn_exists: return value: 0
SQM: sqm_start_default: starting sqm_prepare_script
SQM: cmd_wrapper: COMMAND: /sbin/ip link add name SQM_IFB_51e94 type ifb
SQM: cmd_wrapper: ip: SUCCESS: /sbin/ip link add name SQM_IFB_51e94 type ifb
SQM: cmd_wrapper: COMMAND: /sbin/tc qdisc replace dev SQM_IFB_51e94 root cake
SQM: cmd_wrapper: tc: SUCCESS: /sbin/tc qdisc replace dev SQM_IFB_51e94 root cake
SQM: QDISC cake is useable.
SQM: cmd_wrapper: COMMAND: /sbin/ip link set dev SQM_IFB_51e94 down
SQM: cmd_wrapper: ip: SUCCESS: /sbin/ip link set dev SQM_IFB_51e94 down
SQM: cmd_wrapper: COMMAND: /sbin/ip link delete SQM_IFB_51e94 type ifb
SQM: cmd_wrapper: ip: SUCCESS: /sbin/ip link delete SQM_IFB_51e94 type ifb
SQM: cmd_wrapper: COMMAND: /sbin/ip link add name SQM_IFB_f41c6 type ifb
SQM: cmd_wrapper: ip: SUCCESS: /sbin/ip link add name SQM_IFB_f41c6 type ifb
SQM: cmd_wrapper: COMMAND: /sbin/tc qdisc replace dev SQM_IFB_f41c6 root cake
SQM: cmd_wrapper: tc: SUCCESS: /sbin/tc qdisc replace dev SQM_IFB_f41c6 root cake
SQM: QDISC cake is useable.
SQM: cmd_wrapper: COMMAND: /sbin/ip link set dev SQM_IFB_f41c6 down
SQM: cmd_wrapper: ip: SUCCESS: /sbin/ip link set dev SQM_IFB_f41c6 down
SQM: cmd_wrapper: COMMAND: /sbin/ip link delete SQM_IFB_f41c6 type ifb
SQM: cmd_wrapper: ip: SUCCESS: /sbin/ip link delete SQM_IFB_f41c6 type ifb
SQM: sqm_start_default: Starting layer_cake.qos
SQM: ifb associated with interface eth0:
SQM: Currently no ifb is associated with eth0, this is normal during starting of the sqm system.
SQM: cmd_wrapper: COMMAND: /sbin/ip link add name ifb4eth0 type ifb
SQM: cmd_wrapper: ip: SUCCESS: /sbin/ip link add name ifb4eth0 type ifb
SQM: fn_exists: function candidate name: egress
SQM: fn_exists: TYPE_OUTPUT: egress is a function
SQM: fn_exists: return value: 0
SQM: cmd_wrapper: tc: invocation silenced by request, FAILURE either expected or acceptable.
SQM: cmd_wrapper: COMMAND: /sbin/tc qdisc del dev eth0 root
SQM: cmd_wrapper: tc: FAILURE (2): /sbin/tc qdisc del dev eth0 root
SQM: cmd_wrapper: tc: LAST ERROR: Error: Cannot delete qdisc with handle of zero.
SQM: LLA: default link layer adjustment method for cake is cake
SQM: cmd_wrapper: COMMAND: /sbin/tc qdisc add dev eth0 root cake bandwidth 40000kbit diffserv3 docsis dual-srchost nat ack-filter
SQM: cmd_wrapper: tc: SUCCESS: /sbin/tc qdisc add dev eth0 root cake bandwidth 40000kbit diffserv3 docsis dual-srchost nat ack-filter
SQM: sqm_start_default: egress shaping activated
SQM: cmd_wrapper: COMMAND: /sbin/ip link add name SQM_IFB_0f282 type ifb
SQM: cmd_wrapper: ip: SUCCESS: /sbin/ip link add name SQM_IFB_0f282 type ifb
SQM: cmd_wrapper: COMMAND: /sbin/tc qdisc replace dev SQM_IFB_0f282 ingress
SQM: cmd_wrapper: tc: SUCCESS: /sbin/tc qdisc replace dev SQM_IFB_0f282 ingress
SQM: QDISC ingress is useable.
SQM: cmd_wrapper: COMMAND: /sbin/ip link set dev SQM_IFB_0f282 down
SQM: cmd_wrapper: ip: SUCCESS: /sbin/ip link set dev SQM_IFB_0f282 down
SQM: cmd_wrapper: COMMAND: /sbin/ip link delete SQM_IFB_0f282 type ifb
SQM: cmd_wrapper: ip: SUCCESS: /sbin/ip link delete SQM_IFB_0f282 type ifb
SQM: fn_exists: function candidate name: ingress
SQM: fn_exists: TYPE_OUTPUT: ingress is a function
SQM: fn_exists: return value: 0
SQM: cmd_wrapper: tc: invocation silenced by request, FAILURE either expected or acceptable.
SQM: cmd_wrapper: COMMAND: /sbin/tc qdisc del dev eth0 handle ffff: ingress
SQM: cmd_wrapper: tc: FAILURE (2): /sbin/tc qdisc del dev eth0 handle ffff: ingress
SQM: cmd_wrapper: tc: LAST ERROR: Error: Invalid handle.
SQM: cmd_wrapper: COMMAND: /sbin/tc qdisc add dev eth0 handle ffff: ingress
SQM: cmd_wrapper: tc: SUCCESS: /sbin/tc qdisc add dev eth0 handle ffff: ingress
SQM: cmd_wrapper: tc: invocation silenced by request, FAILURE either expected or acceptable.
SQM: cmd_wrapper: COMMAND: /sbin/tc qdisc del dev ifb4eth0 root
SQM: cmd_wrapper: tc: FAILURE (2): /sbin/tc qdisc del dev ifb4eth0 root
SQM: cmd_wrapper: tc: LAST ERROR: Error: Cannot delete qdisc with handle of zero.
SQM: LLA: default link layer adjustment method for cake is cake
SQM: cmd_wrapper: COMMAND: /sbin/tc qdisc add dev ifb4eth0 root cake bandwidth 600000kbit diffserv3 besteffort docsis dual-dsthost nat ingress
SQM: cmd_wrapper: tc: SUCCESS: /sbin/tc qdisc add dev ifb4eth0 root cake bandwidth 600000kbit diffserv3 besteffort docsis dual-dsthost nat ingress
SQM: cmd_wrapper: COMMAND: /sbin/ip link set dev ifb4eth0 up
SQM: cmd_wrapper: ip: SUCCESS: /sbin/ip link set dev ifb4eth0 up
SQM: cmd_wrapper: COMMAND: /sbin/tc filter add dev eth0 parent ffff: protocol all prio 10 u32 match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb4eth0
SQM: cmd_wrapper: tc: SUCCESS: /sbin/tc filter add dev eth0 parent ffff: protocol all prio 10 u32 match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb4eth0
SQM: sqm_start_default: ingress shaping activated
SQM: layer_cake.qos was started on eth0 successfully