no worries, I just don't want to "take over" your thread by making too many posts that may or may not be what you're after. I'm not bothered if you don't respond for long periods.
no, I have not crashed it yet, but give me time. Like most users I'm good at breaking programs from good coders. (I did think about setting the flows to 1 from 1024 - I'm pretty sure that would make a mess of things)
Below are some tc stats from more "casual" tests. I do plan on trying flent (something like the run sketched below), as I expect its output will be of more use to you, but I'm still playing around with iperf and mpstat ATM.
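The flent run I have in mind is something like the one below; the test choice and options are just my guess at what would be useful, and rrul needs netserver (from netperf) rather than "iperf -s" on the far end:

laptop $ flent rrul -H XXX.XXX.45.137 -l 60 -t "fq_codel_fast baseline" -o rrul-fcf.png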
I'd like to see whether I can hold the iperf throughput constant enough to still measure a change in cpu utilization between fq_codel and fq_codel_fast. Unfortunately, my stats are no better than my coding. I know enough to recognize this as a multivariate problem in which the responses (cpu usage and iperf throughput) are correlated, so making a statistically sound comparison (like a t-test) between them requires some effort. I don't suppose flent is set up for this? Anyway, I'm hoping that keeping iperf throughput sufficiently constant is enough to make a meaningful comparison of cpu usage.
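In case it helps, pulling the per-sample %idle out of two mpstat logs and computing a Welch t statistic can be done with awk alone. Below is a rough sketch, assuming %idle is the last column of mpstat's "all" rows; the log names cpu-A.log / cpu-B.log are hypothetical stand-ins for the fq_codel and fq_codel_fast runs:

r7500v2 # awk '
    FNR==1 { f++ }                      # f = 1 for the first log, 2 for the second
    $2=="all" && $1!="Average:" && $NF ~ /^[0-9]/ {
        n[f]++; s[f]+=$NF; ss[f]+=$NF*$NF    # count, sum, sum of squares of %idle
    }
    END {
        for (i=1; i<=2; i++) {
            m[i] = s[i]/n[i]                         # sample mean
            v[i] = (ss[i]-n[i]*m[i]*m[i])/(n[i]-1)   # sample variance
        }
        t = (m[1]-m[2]) / sqrt(v[1]/n[1] + v[2]/n[2])  # Welch t statistic
        printf "idle%%: %.2f vs %.2f, Welch t = %.2f\n", m[1], m[2], t
    }' cpu-A.log cpu-B.log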
BTW, while I can set ce_threshold to 2.5ms for fq_codel, I cannot do it for fq_codel_fast. I think this constraint is due to the "tc" command, which likely needs to be upgraded to be aware of fq_codel_fast (hence the "[Unknown qdisc, optlen=72]" in the stats below). If that is not too difficult, I can make the change myself - but it will take me time.
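For reference, the fq_codel version works with stock tc syntax like the following (the dev/parent values are just copied from my setup below); my assumption is that fq_codel_fast needs equivalent option parsing added to iproute2 before tc can do the same:

r7500v2 # tc qdisc replace dev ifb4eth0.2 parent 1:10 fq_codel ce_threshold 2.5ms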
EDIT: WRT ce_threshold, there is also this:
commit e7e3d08831ed3ebe5afb8b77f94a2e47fd4ccce2
Author: dave taht <dave.taht@gmail.com>
Date:   Wed Aug 29 00:26:10 2018 +0000

    Get rid of ce_threshold

    This was a failed experiment at google.
Details about the "casual" test are below:
r7500v2 # tc -s qdisc show dev ifb4eth0.2
qdisc htb 1: root refcnt 2 r2q 10 default 0x10 direct_packets_stat 0 direct_qlen 32
Sent 9553980594 bytes 7316928 pkt (dropped 59, overlimits 396547 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel_fast 110: parent 1:10 [Unknown qdisc, optlen=72]
Sent 9553980594 bytes 7316928 pkt (dropped 59, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
# tests between r7500v2 in GW/AP mode (firewall enabled, SQM enabled, one 5
# GHz radio on but nothing connected, etc) with its WAN port connected to a
# switch. Also connected to the switch is a laptop running "iperf -s". (I'm still
# playing with tbf on the laptop as described above, but I don't think I need it
# and will likely stop, unless I want to use "netem" to emulate packet loss or
# something - see the sketch below.)
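# (if I do go the netem route, a minimal sketch on the laptop would be something
# like the following - the interface name and the delay/loss figures are just
# placeholders:)
laptop # tc qdisc add dev eth0 root netem delay 20ms loss 1%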
# results above were generated from several iperf commands similar to:
r7500v2 # mpstat -P ALL 2 30 > cpu-$(date +"%Y%m%d-%H%M").log&
r7500v2 # iperf -c XXX.XXX.45.137 -d -t60 -i10
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to XXX.XXX.45.137, TCP port 5001
TCP window size: 43.8 KByte (default)
------------------------------------------------------------
[ 4] local XXX.XXX.45.101 port 52476 connected with XXX.XXX.45.137 port 5001
[ 5] local XXX.XXX.45.101 port 5001 connected with XXX.XXX.45.137 port 34160
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 566 MBytes 475 Mbits/sec
[ 5] 0.0-10.0 sec 516 MBytes 433 Mbits/sec
[ 4] 10.0-20.0 sec 576 MBytes 483 Mbits/sec
[ 5] 10.0-20.0 sec 514 MBytes 431 Mbits/sec
[ 4] 20.0-30.0 sec 580 MBytes 486 Mbits/sec
[ 5] 20.0-30.0 sec 513 MBytes 431 Mbits/sec
[ 4] 30.0-40.0 sec 579 MBytes 486 Mbits/sec
[ 5] 30.0-40.0 sec 533 MBytes 447 Mbits/sec
[ 4] 40.0-50.0 sec 583 MBytes 489 Mbits/sec
[ 5] 40.0-50.0 sec 504 MBytes 423 Mbits/sec
[ 4] 50.0-60.0 sec 582 MBytes 488 Mbits/sec
[ 4] 0.0-60.0 sec 3.38 GBytes 484 Mbits/sec
[ 5] 50.0-60.0 sec 478 MBytes 401 Mbits/sec
[ 5] 0.0-60.0 sec 2.99 GBytes 428 Mbits/sec
[SUM] 0.0-60.0 sec 3.49 GBytes 500 Mbits/sec
[3]- Done mpstat -P ALL 2 30 1>cpu-$(...).log
# at this iperf throughput with SQM, the r7500v2 cpu idle is ~20%, so it is
# getting close to its limits under these conditions.
r7500v2 # cat /etc/config/sqm
config queue 'eth1'
option qdisc_advanced '0'
option interface 'eth0.2'
option debug_logging '0'
option verbosity '5'
option linklayer 'ethernet'
option overhead '22'
option script 'simple.qos'
option download '500000'
option upload '500000'
option qdisc 'fq_codel_fast'
option enabled '1'
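If it helps anyone reading along, my understanding - a rough reconstruction, not what simple.qos literally executes - is that on the ingress side this config boils down to an htb root with a 500 Mbit class and an fq_codel_fast leaf, matching the handles in the tc stats at the top:

r7500v2 # tc qdisc add dev ifb4eth0.2 root handle 1: htb default 10
r7500v2 # tc class add dev ifb4eth0.2 parent 1: classid 1:10 htb rate 500mbit overhead 22 linklayer ethernet
r7500v2 # tc qdisc add dev ifb4eth0.2 parent 1:10 handle 110: fq_codel_fast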