OpenWrt Forum Archive

Topic: testing fq_codel?

The content of this topic has been archived on 19 Apr 2018. There are no obvious gaps in this topic, but there may still be some posts missing at the end.

How do I make sure fq_codel is working? The only reason I'm using OpenWrt is for this.
I saturated my downstream and ran a ping to 8.8.8.8; latency managed to stay below 88ms, which is good (although it jumped around)

Thanks for the link, looks like everything is configured properly.
Any tips on reducing latency even further? I noticed the interval is set at 100ms. Is that something I should tweak?
EDIT: oops, I just ran an upload and my ping spiked to around 200ms, hmm

qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 704127 bytes 7556 pkt (dropped 0, overlimits 0 requeues 2)
 backlog 0b 0p requeues 2
  maxpacket 256 drop_overlimit 0 new_flow_count 1 ecn_mark 0
  new_flows_len 0 old_flows_len 0

qdisc fq_codel 0: dev eth1 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 8959148 bytes 63359 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0

qdisc fq_codel 0: dev pppoe-wan root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 7453635 bytes 55577 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0

qdisc mq 0: dev wlan0 root
 Sent 54714583 bytes 58382 pkt (dropped 0, overlimits 0 requeues 16)
 backlog 0b 0p requeues 16

(Last edited by HansomPeerClown on 26 Apr 2015, 21:42)

No need to tweak anything other than your upload/download speeds; fq_codel is designed to not require tweaking in most circumstances. What service do you have and what speeds? What did you set your QoS upload/download to?

DSL Reports has a new test that looks for bufferbloat and is far more realistic than speedtest.net. Speedtest.net basically just makes the ISPs look good.

(Last edited by drawz on 27 Apr 2015, 02:59)

Hi HansomPeerClown,

HansomPeerClown wrote:

Thanks for the link, looks like everything is configured properly.
Any tips on reducing latency even further? I noticed the interval is set at 100ms. Is that something I should tweak?
EDIT: oops, I just ran an upload and my ping spiked to around 200ms, hmm

qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 704127 bytes 7556 pkt (dropped 0, overlimits 0 requeues 2)
 backlog 0b 0p requeues 2
  maxpacket 256 drop_overlimit 0 new_flow_count 1 ecn_mark 0
  new_flows_len 0 old_flows_len 0

qdisc fq_codel 0: dev eth1 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 8959148 bytes 63359 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0

qdisc fq_codel 0: dev pppoe-wan root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 7453635 bytes 55577 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0

qdisc mq 0: dev wlan0 root
 Sent 54714583 bytes 58382 pkt (dropped 0, overlimits 0 requeues 16)
 backlog 0b 0p requeues 16

        This looks incomplete: I cannot see the IFB device that should have been set up. Is this from running qos-scripts or from just activating fq_codel on your wan interface? Could you post the result of running "tc -d qdisc" on your router, please? Are you using either IPv6 or an oldish ADSL link? If either of those is true you might have better luck running sqm-scripts instead of qos-scripts. As @drawz notes, you should set the rates for the shaper lower than your link speed (typically 85% is a good starting point; then you iteratively increase the bandwidth until the induced latency gets larger than you like).
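A quick arithmetic sketch of that 85% starting point (the sync values below are placeholders; substitute the figures your modem actually reports):

```shell
# Placeholder sync speeds in kbit/s -- use your modem's reported values.
DOWN_SYNC=6000
UP_SYNC=1000

# Start the shaper at roughly 85% of sync, then tune iteratively.
DOWN_SHAPED=$(( DOWN_SYNC * 85 / 100 ))
UP_SHAPED=$(( UP_SYNC * 85 / 100 ))

echo "shape to ${DOWN_SHAPED} kbit/s down, ${UP_SHAPED} kbit/s up"
```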


Best Regards
        M.

Forgot that fq_codel is on by default in Barrier Breaker, but QoS or SQM still needs to be installed, configured, and enabled (in two places!) to make it actually work. This really shouldn't be necessary, but it is the current state of OpenWrt. I still think it should be part of a setup wizard on the initial login after flashing (bypassable of course).

moeller0 wrote:

Hi HansomPeerClown,

This looks incomplete: I cannot see the IFB device that should have been set up. Is this from running qos-scripts or from just activating fq_codel on your wan interface?

It's taken straight after flashing OpenWrt; I didn't have any QoS set up at the time of running that tc command.

Are you using either IPv6 or an oldish ADSL link?

IPv4 bridged ADSL2+ modem into my wireless router.

If either of those is true you might have better luck running sqm-scripts instead of qos-scripts. As @drawz notes, you should set the rates for the shaper lower than your link speed (typically 85% is a good starting point; then you iteratively increase the bandwidth until the induced latency gets larger than you like).


Best Regards
        M.

I only just set up sqm-scripts. At first it felt fine, but when multiple clients started generating traffic the bandwidth slowed to a crawl, yet latency was kept very low (no more than 10ms was added). I'm assuming these are correlated and that I need to configure sqm-scripts to give me more bandwidth at the cost of higher latency.
My sqm-scripts config is this:

config queue 'eth1'
        option enabled '1'
        option interface 'eth1'
        option download '5239'
        option upload '821'
        option qdisc 'fq_codel'
        option script 'simple.qos'
        option qdisc_advanced '0'
        option ingress_ecn 'ECN'
        option egress_ecn 'NOECN'
        option qdisc_really_really_advanced '0'
        option itarget 'auto'
        option etarget 'auto'
        option linklayer 'atm'
        option overhead '40'

Could you post the result of running "tc -d qdisc" on your router, please?

qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
qdisc htb 1: dev eth1 root refcnt 2 r2q 10 default 12 direct_packets_stat 3 ver 3.17 direct_qlen 1000
 linklayer atm overhead 40 mtu 2047 tsize 512
qdisc fq_codel 110: dev eth1 parent 1:11 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc fq_codel 120: dev eth1 parent 1:12 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc fq_codel 130: dev eth1 parent 1:13 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
qdisc fq_codel 0: dev pppoe-wan root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
qdisc mq 0: dev wlan0 root
qdisc htb 1: dev ifb4eth1 root refcnt 2 r2q 10 default 10 direct_packets_stat 0 ver 3.17 direct_qlen 32
 linklayer atm overhead 40 mtu 2047 tsize 512
qdisc fq_codel 110: dev ifb4eth1 parent 1:10 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn

One last question: when setting the bandwidth for SQM to use, do I set 85% of my sync speed (which can range from 6000-6500 Kbps) or of my highest observed speed?

btw, I appreciate the help big_smile

(Last edited by HansomPeerClown on 28 Apr 2015, 00:34)

Hi HansomPeerClown,

HansomPeerClown wrote:
moeller0 wrote:

Hi HansomPeerClown,

This looks incomplete: I cannot see the IFB device that should have been set up. Is this from running qos-scripts or from just activating fq_codel on your wan interface?

It's taken straight after flashing OpenWrt; I didn't have any QoS set up at the time of running that tc command.

        Ah, fq_codel alone will not do what you expect it to; together with byte queue limits (BQL) it can control the buffers in your router, but most likely the bloated buffers live in your modem and DSLAM, so we need traffic shaping to artificially move the bottleneck link into your router. Then fq_codel will be able to work as expected. Ideally we would get fq_codel or something similar into modems and DSLAMs, but that is not likely to happen soon, if at all.

HansomPeerClown wrote:

Are you using either IPv6 or an oldish ADSL link?

IPv4 bridged ADSL2+ modem into my wireless router.

        So in all likelihood you are using an ATM link; you might want to have a look at the link-layer adjustment tab in sqm-scripts then. Select ATM as the link layer and fill in the per-packet overhead on your link, typically 40, sometimes 44 bytes. (Often you can figure this out empirically, but that takes several hours of measurement, so just set this to 44 to get started.)
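For what it's worth, those settings could also be applied from the router's shell. This is an untested sketch using UCI, assuming the SQM queue section is named 'eth1' as in the config posted earlier in this thread:

```shell
# Untested sketch -- run on the router. Sets ATM link-layer compensation
# with 44 bytes of per-packet overhead on the 'eth1' queue section.
uci set sqm.eth1.linklayer='atm'
uci set sqm.eth1.overhead='44'
uci commit sqm
/etc/init.d/sqm restart
```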

HansomPeerClown wrote:

If either of those is true you might have better luck running sqm-scripts instead of qos-scripts. As @drawz notes, you should set the rates for the shaper lower than your link speed (typically 85% is a good starting point; then you iteratively increase the bandwidth until the induced latency gets larger than you like).


Best Regards
        M.

I only just set up sqm-scripts. At first it felt fine, but when multiple clients started generating traffic the bandwidth slowed to a crawl, yet latency was kept very low (no more than 10ms was added),

        But this is what sqm-scripts tries to achieve: keeping latency low for unrelated sparse flows even under link saturation. Individual flows will have to pay a bandwidth price, since after all you only have so much bandwidth available and need to divide it between all flows somehow. That said, the cumulative bandwidth of all flows should still almost max out the shaped bandwidth. Note that with link-layer adaptation the effective bandwidth will be lower, as this option takes the full on-the-wire size of your packets into account...
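To illustrate why the on-the-wire size matters on ATM: each 53-byte ATM cell carries only 48 bytes of payload, so a packet's transmit cost rounds up to whole cells. A sketch of that arithmetic (the packet and overhead sizes here are just example values):

```shell
# Example: a 1500-byte packet plus 40 bytes of per-packet overhead,
# carried in 53-byte ATM cells holding 48 payload bytes each.
PKT=1500
OVERHEAD=40
TOTAL=$(( PKT + OVERHEAD ))
CELLS=$(( (TOTAL + 47) / 48 ))   # round up to whole cells
WIRE=$(( CELLS * 53 ))
echo "${TOTAL} bytes become ${CELLS} cells = ${WIRE} bytes on the wire"
```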

HansomPeerClown wrote:

I'm assuming these are correlated and that I need to configure sqm-scripts to give me more bandwidth at the cost of higher latency.
My sqm-scripts config is this:

config queue 'eth1'
        option enabled '1'
        option interface 'eth1'
        option download '5239'
        option upload '821'
        option qdisc 'fq_codel'
        option script 'simple.qos'
        option qdisc_advanced '0'
        option ingress_ecn 'ECN'
        option egress_ecn 'NOECN'
        option qdisc_really_really_advanced '0'
        option itarget 'auto'
        option etarget 'auto'
        option linklayer 'atm'
        option overhead '40'

Could you post the result of running "tc -d qdisc" on your router, please?

qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
qdisc htb 1: dev eth1 root refcnt 2 r2q 10 default 12 direct_packets_stat 3 ver 3.17 direct_qlen 1000
 linklayer atm overhead 40 mtu 2047 tsize 512
qdisc fq_codel 110: dev eth1 parent 1:11 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc fq_codel 120: dev eth1 parent 1:12 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc fq_codel 130: dev eth1 parent 1:13 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
qdisc fq_codel 0: dev pppoe-wan root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
qdisc mq 0: dev wlan0 root
qdisc htb 1: dev ifb4eth1 root refcnt 2 r2q 10 default 10 direct_packets_stat 0 ver 3.17 direct_qlen 32
 linklayer atm overhead 40 mtu 2047 tsize 512
qdisc fq_codel 110: dev ifb4eth1 parent 1:10 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn

        This looks reasonable, I think. Except you seem to be using PPPoE; in that case you should disable SQM on eth1 and set it up on pppoe-wan instead, otherwise all data ends up in the same tier instead of the three you expect from simple.qos.
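An untested sketch of that change via UCI, assuming the queue section keeps its 'eth1' name from the posted config:

```shell
# Untested sketch -- run on the router. Repoints the existing SQM queue
# section at the PPPoE device instead of the underlying ethernet port.
uci set sqm.eth1.interface='pppoe-wan'
uci commit sqm
/etc/init.d/sqm restart
```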

HansomPeerClown wrote:

One last question: when setting the bandwidth for SQM to use, do I set 85% of my sync speed (which can range from 6000-6500 Kbps) or of my highest observed speed?

        Assuming your ISP does not use an additional throttle in the BRAS/Redback, I typically would start out at 85% (with the proper link-layer adjustments!), measure the link performance (see: http://www.bufferbloat.net/projects/cer … ufferbloat for testing; I recommend netperf-wrapper's RRUL test), and then slowly increase (or, if you are unlucky, decrease) the shaped bandwidth until the average latency (or the extreme values) gets larger than, say, 4*target, or in your case (15+5)*2 = 40ms.
        Rich Brown wrote a nice document about setting up sqm-scripts for CeroWrt, most of which should apply to OpenWrt as well; see: http://www.bufferbloat.net/projects/cer … roWrt_310.

HansomPeerClown wrote:

btw, I appreciate the help big_smile

     I hope the above helps.

Best Regards
        M.

Thanks for the reply. I just tested this whilst playing a game with other clients generating traffic, and I'm getting packet loss and ping spikes anywhere from 150ms to 300ms (40ms being the benchmark).
I don't know what I configured improperly, as it's worse than no QoS.

qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth1 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
qdisc htb 1: dev pppoe-wan root refcnt 2 r2q 10 default 12 direct_packets_stat 2 ver 3.17 direct_qlen 3
 linklayer atm overhead 40 mtu 2047 tsize 512
qdisc fq_codel 110: dev pppoe-wan parent 1:11 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc fq_codel 120: dev pppoe-wan parent 1:12 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc fq_codel 130: dev pppoe-wan parent 1:13 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc ingress ffff: dev pppoe-wan parent ffff:fff1 ----------------
qdisc mq 0: dev wlan0 root
qdisc htb 1: dev ifb4pppoe-wan root refcnt 2 r2q 10 default 10 direct_packets_stat 0 ver 3.17 direct_qlen 32
 linklayer atm overhead 40 mtu 2047 tsize 512
qdisc fq_codel 110: dev ifb4pppoe-wan parent 1:10 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
config queue 'eth1'
        option enabled '1'
        option interface 'pppoe-wan'
        option download '5239'
        option upload '821'
        option qdisc 'fq_codel'
        option script 'simple.qos'
        option qdisc_advanced '0'
        option ingress_ecn 'ECN'
        option egress_ecn 'NOECN'
        option qdisc_really_really_advanced '0'
        option itarget 'auto'
        option etarget 'auto'
        option linklayer 'atm'
        option overhead '40'

Hi HansomPeerClown,

HansomPeerClown wrote:

Thanks for the reply. I just tested this whilst playing a game with other clients generating traffic, and I'm getting packet loss and ping spikes anywhere from 150ms to 300ms (40ms being the benchmark).
I don't know what I configured improperly, as it's worse than no QoS.

qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth1 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
qdisc htb 1: dev pppoe-wan root refcnt 2 r2q 10 default 12 direct_packets_stat 2 ver 3.17 direct_qlen 3
 linklayer atm overhead 40 mtu 2047 tsize 512
qdisc fq_codel 110: dev pppoe-wan parent 1:11 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc fq_codel 120: dev pppoe-wan parent 1:12 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc fq_codel 130: dev pppoe-wan parent 1:13 limit 1001p flows 1024 quantum 300 target 15.0ms interval 110.0ms
qdisc ingress ffff: dev pppoe-wan parent ffff:fff1 ----------------
qdisc mq 0: dev wlan0 root
qdisc htb 1: dev ifb4pppoe-wan root refcnt 2 r2q 10 default 10 direct_packets_stat 0 ver 3.17 direct_qlen 32
 linklayer atm overhead 40 mtu 2047 tsize 512
qdisc fq_codel 110: dev ifb4pppoe-wan parent 1:10 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
config queue 'eth1'
        option enabled '1'
        option interface 'pppoe-wan'
        option download '5239'
        option upload '821'
        option qdisc 'fq_codel'
        option script 'simple.qos'
        option qdisc_advanced '0'
        option ingress_ecn 'ECN'
        option egress_ecn 'NOECN'
        option qdisc_really_really_advanced '0'
        option itarget 'auto'
        option etarget 'auto'
        option linklayer 'atm'
        option overhead '40'

        Packet loss is something you expect with SQM, as this is how we signal congestion on the uplink, so packet loss per se is not a bad thing; on the downlink, too, packets are dropped to signal congestion (if a flow uses ECN, which needs to be activated at both the client and the server, then packets are ECN-marked instead of dropped on ingress).
        fq_codel tries to let small sparse flows bypass longer-running flows; typically VoIP and gaming traffic tends to be sparse while downloads are not, so in general interactive traffic bypasses heavy bulk flows. But in the end your bandwidth is limited, and if enough flows count as sparse you will need to round-robin through them (well, almost: fq_codel uses something DRR-derived, but in essence that is still true) and your important flows will encounter delay.
        That said, you could try setting your shaped bandwidth lower and see how things behave then. Sometimes the link speed is not actually what is limiting your connection; in my case, for example, my ISP (DTAG in Germany) uses a throttle in the BRAS to artificially limit my bandwidth below the sync values reported by my modem, requiring me to set my shaper according to the artificial throttle to actually constrain latency under load. I would be very interested if you could run netperf-wrapper's RRUL test on your link. (I would like to see whether it shows the latency-under-load increase you see in your real-life testing; if so, it could be a good tool for tuning the shaper bandwidth.) Alternatively, or better as a first test, you could install Rich Brown's betterspeedtest.sh (from: http://www.bufferbloat.net/projects/cer … fferbloat) and netperf on your router and then test upload and download independently. The issue is that if even one of the shapers is configured too high, latency under load is going to suffer, so testing them independently can help. I used:
./betterspeedtest.sh -4 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4 ; ./netperfrunner.sh -4 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4

successfully in the past to assess uplink and downlink sequentially and then simultaneously, but note this requires netperf installed on the machine running the test.
        I would very much like to figure out why sqm-scripts does not seem to work well on your link, so I hope you persevere...

Best Regards
        M.

@all I will also mention that I have created a HOWTO that explains the installation and configuration of the SQM/fq_codel packages on OpenWrt BB & CC.

http://wiki.openwrt.org/doc/howto/sqm

I realize that this diverts attention from the work by @HansomPeerClown and @moeller0, but it might save others some skull sweat.

(Last edited by richbhanover on 28 Apr 2015, 22:51)

@moeller0
Hi moeller, unfortunately my router has 4MB flash, so I cannot fit netperf alongside sqm-scripts and luci-app-sqm. That said, I was previously configuring sqm-scripts manually, but installing luci-app-sqm and configuring the settings through it has fixed my issues (at least from what I've observed).

@richbhanover
Thanks for the writeup, don't have time to look right this moment but will take a peek later on

(Last edited by HansomPeerClown on 29 Apr 2015, 20:55)

Hi HansomPeerClown,

HansomPeerClown wrote:

@moeller0
Hi moeller, unfortunately my router has 4MB flash, so I cannot fit netperf alongside sqm-scripts and luci-app-sqm. That said, I was previously configuring sqm-scripts manually, but installing luci-app-sqm and configuring the settings through it has fixed my issues (at least from what I've observed).

        Both SQM-related packages should be tiny, but then again so is 4 MB wink, sorry to hear that. But if the issues are gone and the system now behaves under load, there is probably no need to install netperf (and for testing you can always install it on a computer in your LAN and run a test through the router; but if latency under load is now constrained, all is well, I hope). Please "holler" if things go wrong so we can help you.


@richbhanover
Thanks for the writeup, don't have time to look right this moment but will take a peek later on

        I would like to recommend that as well; Rich's write-up is probably way clearer than my rambling here wink

Best Regards
        M.

The discussion might have continued from here.