Best practice for SQM with 2.5 Gbit/s WAN and 1 Gbit/s LAN?

Okay, let's take a step back, please.
Could you please post the following:

  1. /etc/init.d/sqm stop
  2. run a capacity test under:
    https://speed.cloudflare.com
    and post a screenshot of the full results page
  3. /etc/init.d/sqm start
  4. tc -s qdisc
  5. repeat the capacity test as in step 2
  6. tc -s qdisc, immediately after the test finishes, before taking the screenshot
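Put together, the requested sequence could look roughly like this on the router (a sketch; the capacity tests themselves run in a browser on a LAN client):

```shell
# Hedged sketch of the requested diagnostic sequence (run on the router via ssh)
/etc/init.d/sqm stop
# ... run https://speed.cloudflare.com and screenshot the full results page ...
/etc/init.d/sqm start
tc -s qdisc
# ... run the capacity test again ...
tc -s qdisc    # immediately after the test finishes, before the screenshot
```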

So one issue with waveform and cloudflare capacity tests is that they run inside a browser and hence also measure delays created inside the browser itself (caused e.g. by garbage collection), so these are good indicators but not definitive proof. Ideally you would run flent/netperf tests, but for these you would need remote servers that are close enough and able to saturate your link...
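For reference, a typical flent invocation (assuming flent is installed on a LAN client, netperf runs on both ends, and `netperf.example.org` is a placeholder for a nearby server you control) looks like:

```shell
# RRUL test: 60 seconds of bidirectional saturating load plus latency probes;
# the server name below is a placeholder, substitute your own netperf host.
flent rrul -p all_scaled -l 60 -H netperf.example.org -o rrul.png
```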
As a poor man's test you can log into your router and run a speedtest there, while running gping from one LAN-connected computer (e.g. gping -b 60 1.1.1.1 8.8.8.8) and just watch how/whether the latency changes during the different phases of the capacity test... (you can combine this with any method that creates a saturating load)... For generating load https://fast.com is not bad, as you can configure it to run for 30 seconds with, say, 16 or 20 parallel streams and test both down- and upload direction, which lasts long enough to easily compare with the gping graphs.
To check whether a load is still applied, you could log into your router and run iftop -i eth1 (opkg update; opkg install iftop, if not installed already) to see a near-real-time display of traffic per direction (if you do this, first make a dry run with just iftop to get a feel for what to look for).


Run both tests, Cloudflare and Waveform, before looking at how the stats changed. A few requeues are expected, but optimally no drops in the qdisc stats.

Since we are talking quite high a rate could you please also post the output of:

  1. cat /proc/interrupts
  2. cat /proc/softirqs
    The goal is to see whether too much processing bunches up on too few CPUs...
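A quick way to eyeball that distribution is to break the NET_RX row of /proc/softirqs into one line per CPU, for example:

```shell
# Print the per-CPU NET_RX softirq counts, one line per CPU, to spot
# whether receive processing bunches up on a single core.
awk '/NET_RX/ { for (i = 2; i <= NF; i++) printf "CPU%d: %s\n", i - 2, $i }' /proc/softirqs
```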

Good, this indicates that BQL is likely operational...

SACK - sometimes a NIC's offload engine sends an excessive number of those; note that the router's sysctl settings only configure connections to/from the router itself.

By golly, you really have it in for selective ACKs? Did you have any specific bad experience in that regard?
Anyway, to diagnose this hypothesis the OP would need to take packet captures during a capacity test and then look through them with wireshark and/or tcptrace.

BTW, what happens if you set the download shaper to 1000000 (1000 Mbps), does that rein in the latency better, and what about 500000 (500 Mbps)?
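With sqm-scripts, changing the download shaper could look like this (assuming the UCI section is named 'eth1'; check the actual name with `uci show sqm` first):

```shell
# Assumption: the sqm config section is named 'eth1' (verify with: uci show sqm)
uci set sqm.eth1.download='1000000'   # 1000 Mbps; use 500000 for the 500 Mbps test
uci commit sqm
/etc/init.d/sqm restart
```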


Yes, an enterprise-level NIC sending an ACK per frame, turning up as iptables 'invalid'...
That's where my worst-case estimate is from.

But in that case the solution is not to disable SACK but to disable the misbehaving offload in the specific NIC, no? But let's first figure out whether the OP suffers from that issue in the first place...


In my case yes, but the router has to survive accidental mishaps...

  1. /etc/init.d/sqm stop
root@homerouter:/etc/config# /etc/init.d/sqm stop
SQM: Stopping SQM on eth1
root@homerouter:/etc/config# 
  2. capacity test results

  3. /etc/init.d/sqm start

root@homerouter:/etc/config# /etc/init.d/sqm start
SQM: Starting SQM script: piece_of_cake.qos on eth1, in: 1732000 Kbps, out: 36318 Kbps
SQM: piece_of_cake.qos was started on eth1 successfully
  4. tc -s qdisc
root@homerouter:/etc/config# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
 Sent 37745587741 bytes 28826090 pkt (dropped 82839, overlimits 0 requeues 66090)
 backlog 0b 0p requeues 66090
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 11464071962 bytes 9214999 pkt (dropped 45, overlimits 0 requeues 21621)
 backlog 0b 0p requeues 21621
  maxpacket 18168 drop_overlimit 0 new_flow_count 18801 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 17194291686 bytes 12574890 pkt (dropped 82793, overlimits 0 requeues 21245)
 backlog 0b 0p requeues 21245
  maxpacket 3028 drop_overlimit 57408 new_flow_count 18207 ecn_mark 0 drop_overmemory 57408
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 3973797128 bytes 3176240 pkt (dropped 0, overlimits 0 requeues 11263)
 backlog 0b 0p requeues 11263
  maxpacket 1514 drop_overlimit 0 new_flow_count 10052 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 5113426965 bytes 3859961 pkt (dropped 1, overlimits 0 requeues 11961)
 backlog 0b 0p requeues 11961
  maxpacket 9084 drop_overlimit 0 new_flow_count 8727 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc cake 807f: dev eth1 root refcnt 5 bandwidth 36318Kbit besteffort dual-srchost nat nowash ack-filter split-gso rtt 100ms noatm overhead 42 mpu 84
 Sent 366719 bytes 691 pkt (dropped 0, overlimits 709 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 57856b of 4Mb
 capacity estimate: 36318Kbit
 min/max network layer size:           28 /    1480
 min/max overhead-adjusted size:       84 /    1522
 average network hdr offset:           13

                  Tin 0
  thresh      36318Kbit
  target            5ms
  interval        100ms
  pk_delay       1.03ms
  av_delay        232us
  sp_delay          9us
  backlog            0b
  pkts              691
  bytes          366719
  way_inds            0
  way_miss           31
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            2
  bk_flows            1
  un_flows            0
  max_len          2988
  quantum          1108

qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
 Sent 745501 bytes 2466 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth2 root
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth2 parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth2 parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth2 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth2 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev eth3 root
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth3 parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth3 parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth3 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth3 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8080: dev ifb4eth1 root refcnt 2 bandwidth 1732Mbit besteffort dual-dsthost nat wash ingress no-ack-filter split-gso rtt 100ms noatm overhead 42 mpu 84
 Sent 780125 bytes 2466 pkt (dropped 0, overlimits 308 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 11Kb of 15140Kb
 capacity estimate: 1732Mbit
 min/max network layer size:           46 /    1500
 min/max overhead-adjusted size:       88 /    1542
 average network hdr offset:           14

                  Tin 0
  thresh       1732Mbit
  target            5ms
  interval        100ms
  pk_delay         34us
  av_delay          7us
  sp_delay          2us
  backlog            0b
  pkts             2466
  bytes          780125
  way_inds            4
  way_miss           39
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            3
  bk_flows            1
  un_flows            0
  max_len          2104
  quantum          1514

  5. capacity test

  6. tc -s qdisc before taking the screenshot

root@homerouter:/etc/config# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
 Sent 38291911463 bytes 29229747 pkt (dropped 82875, overlimits 0 requeues 66935)
 backlog 0b 0p requeues 66935
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 11486573820 bytes 9240447 pkt (dropped 45, overlimits 0 requeues 21722)
 backlog 0b 0p requeues 21722
  maxpacket 18168 drop_overlimit 0 new_flow_count 18858 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 17715681761 bytes 12948541 pkt (dropped 82829, overlimits 0 requeues 21941)
 backlog 0b 0p requeues 21941
  maxpacket 3028 drop_overlimit 57408 new_flow_count 18890 ecn_mark 0 drop_overmemory 57408
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 3975061781 bytes 3178735 pkt (dropped 0, overlimits 0 requeues 11281)
 backlog 0b 0p requeues 11281
  maxpacket 1514 drop_overlimit 0 new_flow_count 10068 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 5114594101 bytes 3862024 pkt (dropped 1, overlimits 0 requeues 11991)
 backlog 0b 0p requeues 11991
  maxpacket 9084 drop_overlimit 0 new_flow_count 8747 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc cake 807f: dev eth1 root refcnt 5 bandwidth 36318Kbit besteffort dual-srchost nat nowash ack-filter split-gso rtt 100ms noatm overhead 42 mpu 84
 Sent 70872648 bytes 120830 pkt (dropped 39, overlimits 219924 requeues 3)
 backlog 0b 0p requeues 3
 memory used: 611840b of 4Mb
 capacity estimate: 36318Kbit
 min/max network layer size:           28 /    1500
 min/max overhead-adjusted size:       84 /    1542
 average network hdr offset:           14

                  Tin 0
  thresh      36318Kbit
  target            5ms
  interval        100ms
  pk_delay        996us
  av_delay         72us
  sp_delay          2us
  backlog            0b
  pkts           120869
  bytes        70879998
  way_inds            4
  way_miss          741
  way_cols            0
  drops               3
  marks               0
  ack_drop           36
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len         10819
  quantum          1108

qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
 Sent 543371512 bytes 462385 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth2 root
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth2 parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth2 parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth2 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth2 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev eth3 root
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth3 parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth3 parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth3 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth3 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8080: dev ifb4eth1 root refcnt 2 bandwidth 1732Mbit besteffort dual-dsthost nat wash ingress no-ack-filter split-gso rtt 100ms noatm overhead 42 mpu 84
 Sent 549894356 bytes 462384 pkt (dropped 1, overlimits 454532 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 2735616b of 15140Kb
 capacity estimate: 1732Mbit
 min/max network layer size:           46 /    1500
 min/max overhead-adjusted size:       88 /    1542
 average network hdr offset:           14

                  Tin 0
  thresh       1732Mbit
  target            5ms
  interval        100ms
  pk_delay         16us
  av_delay          7us
  sp_delay          2us
  backlog            0b
  pkts           462385
  bytes       549895850
  way_inds           51
  way_miss         1130
  way_cols            0
  drops               1
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            1
  un_flows            0
  max_len         34822
  quantum          1514
Current SQM config:
config queue 'eth1'
	option enabled '1'
	option interface 'eth1'
	option download '1732000'
	option upload '36318'
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
	option linklayer 'ethernet'
	option debug_logging '0'
	option verbosity '5'
	option overhead '42'
	option qdisc_advanced '1'
	option squash_dscp '1'
	option squash_ingress '1'
	option ingress_ecn 'ECN'
	option egress_ecn 'NOECN'
	option qdisc_really_really_advanced '1'
	option iqdisc_opts 'mpu 84 nat dual-dsthost ingress'
	option eqdisc_opts 'mpu 84 nat dual-srchost ack-filter'
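As a sanity check on what the capacity test can show at best: with the ingress shaper at 1732 Mbit/s gross and cake accounting 42 bytes of overhead per packet (1542 "wire" bytes for a 1500-byte IPv4 packet, matching the min/max overhead-adjusted size in the tc output above), and assuming 1448 payload bytes per packet (TCP with timestamps), the achievable TCP goodput works out to roughly:

```shell
# Estimate TCP goodput under the 1732 Mbit/s ingress shaper:
# 1448 payload bytes per 1542 shaped bytes (1500 + 42 overhead).
awk 'BEGIN { shaper = 1732; printf "expected TCP goodput ~ %.0f Mbit/s\n", shaper * 1448 / 1542 }'
```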
root@homerouter:/etc/config# cat /proc/interrupts
            CPU0       CPU1       CPU2       CPU3
   4:          0         13          0          0  IR-IO-APIC    4-edge      ttyS0
   8:          0          0          0          0  IR-IO-APIC    8-edge      rtc0
   9:          0          6          0          0  IR-IO-APIC    9-fasteoi   acpi
 120:          0          0          0          0  DMAR-MSI    0-edge      dmar0
 121:          0          0          0          0  DMAR-MSI    1-edge      dmar1
 127:        185          0          0          0  IR-PCI-MSI 32768-edge      i915
 128:          0          0         11          0  IR-PCI-MSI 524288-edge      nvme0q0
 129:          0          0          0          0  IR-PCI-MSI 376832-edge      ahci[0000:00:17.0]
 130:     169623          0          0          0  IR-PCI-MSI 524289-edge      nvme0q1
 131:          0     148585          0          0  IR-PCI-MSI 524290-edge      nvme0q2
 132:          0          0     138578          0  IR-PCI-MSI 524291-edge      nvme0q3
 133:          0          0          0     204611  IR-PCI-MSI 524292-edge      nvme0q4
 134:          0          0          0     439294  IR-PCI-MSI 327680-edge      xhci_hcd
 135:          0          0          1          0  IR-PCI-MSI 1048576-edge      eth0
 136:          0          0          0   40808557  IR-PCI-MSI 1048577-edge      eth0-TxRx-0
 137:   39418456          0          0          0  IR-PCI-MSI 1048578-edge      eth0-TxRx-1
 138:          0   55670089          0          0  IR-PCI-MSI 1048579-edge      eth0-TxRx-2
 139:          0          0   50319178          0  IR-PCI-MSI 1048580-edge      eth0-TxRx-3
 140:          0          1          0          0  IR-PCI-MSI 1572864-edge      eth1
 141:          0          0  118148028          0  IR-PCI-MSI 1572865-edge      eth1-TxRx-0
 142:          0          0          0   30750573  IR-PCI-MSI 1572866-edge      eth1-TxRx-1
 143:   41384123          0          0          0  IR-PCI-MSI 1572867-edge      eth1-TxRx-2
 144:          0   56915784          0          0  IR-PCI-MSI 1572868-edge      eth1-TxRx-3
 145:          0          0          0          0  IR-PCI-MSI 2097152-edge      eth2
 146:     227008          0          0          0  IR-PCI-MSI 2097153-edge      eth2-TxRx-0
 147:          0     227008          0          0  IR-PCI-MSI 2097154-edge      eth2-TxRx-1
 148:          0          0     227008          0  IR-PCI-MSI 2097155-edge      eth2-TxRx-2
 149:          0          0          0     227008  IR-PCI-MSI 2097156-edge      eth2-TxRx-3
 150:          0          0          0          0  IR-PCI-MSI 2621440-edge      eth3
 151:          0     227008          0          0  IR-PCI-MSI 2621441-edge      eth3-TxRx-0
 152:          0          0     227008          0  IR-PCI-MSI 2621442-edge      eth3-TxRx-1
 153:          0          0          0     227008  IR-PCI-MSI 2621443-edge      eth3-TxRx-2
 154:     227008          0          0          0  IR-PCI-MSI 2621444-edge      eth3-TxRx-3
 NMI:          0          0          0          0   Non-maskable interrupts
 LOC:   57695269   59665205   92311247   61237225   Local timer interrupts
 SPU:          0          0          0          0   Spurious interrupts
 PMI:          0          0          0          0   Performance monitoring interrupts
 IWI:         16          0          3          0   IRQ work interrupts
 RTR:          0          0          0          0   APIC ICR read retries
 RES:       2583       3675       4033       3883   Rescheduling interrupts
 CAL:     242821     229495     258574     254642   Function call interrupts
 TLB:      74589      74316      73738      74429   TLB shootdowns
 TRM:          0          0          0          0   Thermal event interrupts
 THR:          0          0          0          0   Threshold APIC interrupts
 DFR:          0          0          0          0   Deferred Error APIC interrupts
 MCE:          0          0          0          0   Machine check exceptions
 MCP:       1503       1504       1504       1504   Machine check polls
 ERR:          0
 MIS:          0
 PIN:          0          0          0          0   Posted-interrupt notification event
 NPI:          0          0          0          0   Nested posted-interrupt event
 PIW:          0          0          0          0   Posted-interrupt wakeup event
root@homerouter:/proc# cat /proc/softirqs
                    CPU0       CPU1       CPU2       CPU3
          HI:         28          0          6          2
       TIMER:    1969855    1983705    1878151    6001854
      NET_TX:   10581619   13496481   38624612   12749975
      NET_RX:   80892148  112449529  167371128   71504437
       BLOCK:     147248     130756     126167     142670
    IRQ_POLL:          0          0          0          0
     TASKLET:   10661731   12481083   60188383   14296013
       SCHED:   44082491   42482135   40459756   42420243
     HRTIMER:          1          3          0          0
         RCU:    3419655    3398358    3234279    3193973

Not much difference with 1000000 (1000 Mbps)

A little bit better with 500000 (500 Mbps)

irq 141 being so heavy means the RX hash is not balancing packets; on the internet side you likely talk only to your gateway, and traffic is spread across CPUs based on MAC addresses.
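To look at the imbalance and, if desired, experiment with pinning, something along these lines could be tried (the IRQ number 141 and the eth1 device name are taken from the /proc/interrupts dump above; the affinity mask is an illustrative example, not a recommendation):

```shell
# Watch how the eth1 queue interrupts accumulate per CPU:
grep eth1 /proc/interrupts
# Example only: pin IRQ 141 (eth1-TxRx-0) to CPU3 (hex bitmask 8 = 0b1000):
echo 8 > /proc/irq/141/smp_affinity
```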

Buffer B -> relatively cool progression.
Can you disable SQM for a while and try to balance the 50 Mbps uplink only, with manual tc commands?

In principle this should be visible in htop as one processor at max while running tests -
install htop (obviously), then in F2 (settings) disable 'Hide kernel threads' and enable 'Detailed CPU time', and try to capture a screen grab while the tests are running.

Thanks... that test failed due to the low achievable rate to/from Cloudflare's servers, so cake was not doing anything and hence we learn nothing about it...
Could you repeat using https://fast.com as the capacity test, please (only the tc -s qdisc -> capacity test -> tc -s qdisc sequence)?
Configure the test for 30 seconds and enough flows to saturate your link...

In this case it sounds to me like all your LAN devices are on the other end of a gigabit link between your router and your switch. If that's the case, then the bottleneck is that link and the extra bandwidth isn't usable. If you've got a 2.5 Gbps connection to the switch, then aggregated across several LAN devices you could get above 1 Gbps.

In any case I'd probably set my download shaper to about 900 Mbps as a start and see what happens. You'll need enough streams to saturate the connection.


The router seems to have multiple 2.5 GbE ports, so he might be able to connect multiple devices at the same time; however, for downloads BQL and fq_codel should already ameliorate the 2.5/1.7 to 1.0 Gbps transition, so I am puzzled why that does not seem to be the case here.

also SORRY-NOT-THIS package better speed.cloudflare.com

Only if you also establish your own netperf server to use... the default netperf instance is privately sponsored by a forum member and tends to run into '-ENOTRAFFICVOLUMELEFTFORTHEMONTH' issues...

quick question, how do I?

You disable SQM, then confirm that tc -s qdisc shows fq_codel on the WAN device.

0 - fq_codel with target 20ms
tc qdisc replace dev wan root fq_codel help
tc qdisc replace dev wan root fq_codel target 20ms
1 - pfifo_fast or sqm or fq_codel
all work parameterless
2 - tbf with a rate limit, plus one of the above underneath
... tbf help
See the example in the man page, i.e. tbf limits the bandwidth, then a second queue discipline equalizes the flows (try all of them, parameterless at least):
# tc qdisc add dev eth0 handle 10: root tbf rate 0.5mbit \
     burst 5kb latency 70ms peakrate 1mbit       \
     minburst 1540

   To attach an inner qdisc, for example sfq, issue:

   # tc qdisc add dev eth0 parent 10:1 handle 100: sfq

3 - cake, but with the ACK filter
cake help
...

edit: 'wan' is the device name, and 'root' means the qdisc is attached directly at the interface
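Combining options 0 and 2 for the suggested uplink-only experiment might look like this (untested sketch; 'wan' stands for the actual WAN device, eth1 in this thread, and the 50 Mbit rate is the value floated earlier):

```shell
# tbf caps the rate, fq_codel underneath equalizes the flows:
tc qdisc replace dev wan root handle 10: tbf rate 50mbit burst 50kb latency 50ms
tc qdisc add dev wan parent 10:1 handle 100: fq_codel target 20ms
# revert to the default qdisc afterwards:
tc qdisc del dev wan root
```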