SQM makes bufferbloat significantly worse

I have a Linksys WRT1200AC running "LEDE Reboot 17.01.4 r3560-79f57e422d". It is connected to an ADSL modem which is in "bridge mode", so that the LEDE router connects to the WAN in PPPoE mode.

I have tested bufferbloat with IPv4 and IPv6 (I have native IPv6 connectivity) and get these results…

IPv4:

IPv6:

As can be seen, there is significant bufferbloat on upload, but not on download.

I have enabled SQM as detailed in this guide:

https://openwrt.org/docs/guide-user/network/traffic-shaping/sqm#smart_queue_management_sqm_-_minimizing_bufferbloat

I chose 1880 kbps for my download link (which is about 90% of the capacity measured by http://www.dslreports.com/speedtest). My measured upload varies wildly between about 55 and 300 kbps, so I tried setting the upload link to 80 kbps (and then, when that didn't work, 50 kbps). I used the recommended settings in the above guide for everything else, including the ADSL overhead and the cake queue discipline.

With SQM enabled I get the following results under IPv4…

As you can see, this fails to fix the bufferbloat on upload and in fact introduces new bufferbloat on download.

With SQM enabled I am unable to run the tests under IPv6, as the latency is so large that the bufferbloat test refuses to run.

All the tests are done on a wired connection (wireless completely disabled), and with no other clients using the connection.

Changing the queue discipline to the old default of fq_codel does not help, nor does changing the upload and download values.
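
For reference, switching back to fq_codel was just a change to the SQM UCI settings, roughly along these lines (section name as in my config below; the exact rate values varied between attempts):

uci set sqm.eth1.qdisc='fq_codel'
uci set sqm.eth1.script='simple.qos'   # the pre-cake default script, as far as I know
uci commit sqm
/etc/init.d/sqm restart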

I followed the troubleshooting guide at https://openwrt.org/docs/guide-user/network/traffic-shaping/sqm-details#troubleshooting_sqm to produce the console log below.

Any help at all would be greatly appreciated. I am desperate to have a connection which will allow simultaneous upload and download, which my current setup (with or without SQM) does not.

CONSOLE LOG…

root@gw:~# cat /etc/config/sqm

config queue 'eth1'
	option qdisc_advanced '0'
	option debug_logging '0'
	option verbosity '5'
	option linklayer 'atm'
	option overhead '44'
	option interface 'pppoe-wan'
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
	option download '1880'
	option enabled '1'
	option upload '80'

root@gw:~# ifstatus wan
{
	"up": true,
	"pending": false,
	"available": true,
	"autostart": true,
	"dynamic": false,
	"uptime": 110,
	"l3_device": "pppoe-wan",
	"proto": "pppoe",
	"device": "eth1",
	"updated": [
		"addresses",
		"routes"
	],
	"metric": 0,
	"dns_metric": 0,
	"delegation": true,
	"ipv4-address": [
		{
			"address": “REDACTED”,
			"mask": 32
		}
	],
	"ipv6-address": [
		{
			"address": "fe80::7dcb:8a:655b:25b5",
			"mask": 128
		}
	],
	"ipv6-prefix": [
		
	],
	"ipv6-prefix-assignment": [
		
	],
	"route": [
		{
			"target": "0.0.0.0",
			"mask": 0,
			"nexthop": “REDACTED”,
			"source": "0.0.0.0\/0"
		}
	],
	"dns-server": [
		"217.169.20.20",
		"217.169.20.21"
	],
	"dns-search": [
		
	],
	"inactive": {
		"ipv4-address": [
			
		],
		"ipv6-address": [
			
		],
		"route": [
			
		],
		"dns-server": [
			
		],
		"dns-search": [
			
		]
	},
	"data": {
		
	}
}
root@gw:~# SQM_DEBUG=1 SQM_VERBOSITY_MAX=8 /etc/init.d/sqm stop ; SQM_DEBUG=1 SQM_VERBOSITY_MAX=8 /etc/init.d/sqm start
SQM: Stopping SQM on pppoe-wan
SQM: ifb associated with interface pppoe-wan: 
SQM: Currently no ifb is associated with pppoe-wan, this is normal during starting of the sqm system.
SQM: /usr/lib/sqm/stop-sqm: ifb4pppoe-wan shaper deleted
SQM: /usr/lib/sqm/stop-sqm: ifb4pppoe-wan interface deleted
SQM: Starting SQM script: piece_of_cake.qos on pppoe-wan, in: 1880 Kbps, out: 80 Kbps
SQM: QDISC cake is useable.
SQM: Starting piece_of_cake.qos
SQM: ifb associated with interface pppoe-wan: 
SQM: Currently no ifb is associated with pppoe-wan, this is normal during starting of the sqm system.
SQM: egress
SQM: STAB: stab mtu 2047 tsize 512 mpu 0 overhead 44 linklayer atm
SQM: egress shaping activated
SQM: QDISC ingress is useable.
SQM: ingress
SQM: STAB: stab mtu 2047 tsize 512 mpu 0 overhead 44 linklayer atm
SQM: ingress shaping activated
SQM: piece_of_cake.qos was started on pppoe-wan successfully
root@gw:~# cat /var/run/sqm/*debug.log

Wed Apr 11 15:13:19 GMT 2018: Starting.
Starting SQM script: piece_of_cake.qos on pppoe-wan, in: 1880 Kbps, out: 80 Kbps
Failed to find act_ipt. Maybe it is a built in module ?
module is already loaded - sch_cake
module is already loaded - sch_ingress
module is already loaded - act_mirred
module is already loaded - cls_fw
module is already loaded - cls_flow
module is already loaded - cls_u32
module is already loaded - sch_htb
module is already loaded - sch_hfsc
/sbin/ip link add name TMP_IFB_4_SQM type ifb
/usr/sbin/tc qdisc replace dev TMP_IFB_4_SQM root cake
QDISC cake is useable.
/sbin/ip link set dev TMP_IFB_4_SQM down
/sbin/ip link delete TMP_IFB_4_SQM type ifb
Starting piece_of_cake.qos
/usr/sbin/tc -p filter show parent ffff: dev pppoe-wan
ifb associated with interface pppoe-wan: 
/usr/sbin/tc -p filter show parent ffff: dev pppoe-wan
Currently no ifb is associated with pppoe-wan, this is normal during starting of the sqm system.
/sbin/ip link add name ifb4pppoe-wan type ifb
egress
/usr/sbin/tc qdisc del dev pppoe-wan root
RTNETLINK answers: No such file or directory
STAB: stab mtu 2047 tsize 512 mpu 0 overhead 44 linklayer atm
/usr/sbin/tc qdisc add dev pppoe-wan root stab mtu 2047 tsize 512 mpu 0 overhead 44 linklayer atm cake bandwidth 80kbit besteffort
egress shaping activated
/sbin/ip link add name TMP_IFB_4_SQM type ifb
/usr/sbin/tc qdisc replace dev TMP_IFB_4_SQM ingress
QDISC ingress is useable.
/sbin/ip link set dev TMP_IFB_4_SQM down
/sbin/ip link delete TMP_IFB_4_SQM type ifb
ingress
/usr/sbin/tc qdisc del dev pppoe-wan handle ffff: ingress
RTNETLINK answers: Invalid argument
/usr/sbin/tc qdisc add dev pppoe-wan handle ffff: ingress
/usr/sbin/tc qdisc del dev ifb4pppoe-wan root
RTNETLINK answers: No such file or directory
STAB: stab mtu 2047 tsize 512 mpu 0 overhead 44 linklayer atm
/usr/sbin/tc qdisc add dev ifb4pppoe-wan root stab mtu 2047 tsize 512 mpu 0 overhead 44 linklayer atm cake bandwidth 1880kbit besteffort wash
/sbin/ip link set dev ifb4pppoe-wan up
/usr/sbin/tc filter add dev pppoe-wan parent ffff: protocol all prio 10 u32 match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb4pppoe-wan
ingress shaping activated
piece_of_cake.qos was started on pppoe-wan successfully
root@gw:~# logread | grep SQM
Wed Apr 11 15:09:55 2018 user.notice SQM: Starting SQM script: piece_of_cake.qos on pppoe-wan, in: 1880 Kbps, out: 80 Kbps
Wed Apr 11 15:09:55 2018 user.notice SQM: piece_of_cake.qos was started on pppoe-wan successfully
Wed Apr 11 15:10:38 2018 user.notice SQM: Stopping SQM on pppoe-wan
Wed Apr 11 15:10:38 2018 user.notice SQM: Starting SQM script: piece_of_cake.qos on pppoe-wan, in: 1880 Kbps, out: 80 Kbps
Wed Apr 11 15:10:38 2018 user.notice SQM: piece_of_cake.qos was started on pppoe-wan successfully
Wed Apr 11 15:13:19 2018 user.notice SQM: Stopping SQM on pppoe-wan
Wed Apr 11 15:13:19 2018 user.notice SQM: ifb associated with interface pppoe-wan: 
Wed Apr 11 15:13:19 2018 user.notice SQM: Currently no ifb is associated with pppoe-wan, this is normal during starting of the sqm system.
Wed Apr 11 15:13:19 2018 user.notice SQM: /usr/lib/sqm/stop-sqm: ifb4pppoe-wan shaper deleted
Wed Apr 11 15:13:19 2018 user.notice SQM: /usr/lib/sqm/stop-sqm: ifb4pppoe-wan interface deleted
Wed Apr 11 15:13:19 2018 user.notice SQM: Starting SQM script: piece_of_cake.qos on pppoe-wan, in: 1880 Kbps, out: 80 Kbps
Wed Apr 11 15:13:19 2018 user.notice SQM: QDISC cake is useable.
Wed Apr 11 15:13:19 2018 user.notice SQM: Starting piece_of_cake.qos
Wed Apr 11 15:13:19 2018 user.notice SQM: ifb associated with interface pppoe-wan: 
Wed Apr 11 15:13:19 2018 user.notice SQM: Currently no ifb is associated with pppoe-wan, this is normal during starting of the sqm system.
Wed Apr 11 15:13:19 2018 user.notice SQM: egress
Wed Apr 11 15:13:19 2018 user.notice SQM: STAB: stab mtu 2047 tsize 512 mpu 0 overhead 44 linklayer atm
Wed Apr 11 15:13:19 2018 user.notice SQM: egress shaping activated
Wed Apr 11 15:13:19 2018 user.notice SQM: QDISC ingress is useable.
Wed Apr 11 15:13:19 2018 user.notice SQM: ingress
Wed Apr 11 15:13:19 2018 user.notice SQM: STAB: stab mtu 2047 tsize 512 mpu 0 overhead 44 linklayer atm
Wed Apr 11 15:13:19 2018 user.notice SQM: ingress shaping activated
Wed Apr 11 15:13:19 2018 user.notice SQM: piece_of_cake.qos was started on pppoe-wan successfully
root@gw:~# tc -d qdisc
qdisc noqueue 0: dev lo root refcnt 2 
qdisc mq 0: dev eth0 root 
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc mq 0: dev eth1 root 
qdisc fq_codel 0: dev eth1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc fq_codel 0: dev eth1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
qdisc noqueue 0: dev br-lan root refcnt 2 
qdisc cake 800a: dev pppoe-wan root refcnt 2 bandwidth 80Kbit besteffort triple-isolate rtt 100.0ms raw total_overhead 22 hard_header_len 22 
 linklayer atm overhead 44 mtu 2047 tsize 512 
qdisc ingress ffff: dev pppoe-wan parent ffff:fff1 ---------------- 
qdisc cake 800b: dev ifb4pppoe-wan root refcnt 2 bandwidth 1880Kbit besteffort triple-isolate wash rtt 100.0ms raw total_overhead 14 hard_header_len 14 
 linklayer atm overhead 44 mtu 2047 tsize 512 
root@gw:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc mq 0: dev eth0 root 
 Sent 253845 bytes 685 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 253845 bytes 685 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev eth1 root 
 Sent 110618 bytes 965 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc fq_codel 0: dev eth1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 110618 bytes 965 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc cake 800a: dev pppoe-wan root refcnt 2 bandwidth 80Kbit besteffort triple-isolate rtt 100.0ms raw total_overhead 22 hard_header_len 22 
 Sent 5353 bytes 23 pkt (dropped 0, overlimits 3 requeues 0) 
 backlog 0b 0p requeues 0 
 memory used: 1984b of 4Mb
 capacity estimate: 80Kbit
                  Tin 0
  thresh         80Kbit
  target        227.1ms
  interval      454.2ms
  pk_delay       19.2ms
  av_delay        339us
  sp_delay        108us
  pkts               23
  bytes            5353
  way_inds            0
  way_miss           21
  way_cols            0
  drops               0
  marks               0
  sp_flows            6
  bk_flows            1
  un_flows            0
  max_len          1484

qdisc ingress ffff: dev pppoe-wan parent ffff:fff1 ---------------- 
 Sent 2576 bytes 23 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc cake 800b: dev ifb4pppoe-wan root refcnt 2 bandwidth 1880Kbit besteffort triple-isolate wash rtt 100.0ms raw total_overhead 14 hard_header_len 14 
 Sent 4505 bytes 23 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 memory used: 1984b of 4Mb
 capacity estimate: 1880Kbit
                  Tin 0
  thresh       1880Kbit
  target          9.7ms
  interval      104.7ms
  pk_delay          3us
  av_delay          0us
  sp_delay          0us
  pkts               23
  bytes            4505
  way_inds            0
  way_miss           21
  way_cols            0
  drops               0
  marks               0
  sp_flows           12
  bk_flows            1
  un_flows            0
  max_len           901



[At this point I ran tests at http://www.dslreports.com/speedtest from a browser]



root@gw:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc mq 0: dev eth0 root 
 Sent 5721965 bytes 7294 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 5721965 bytes 7294 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev eth1 root 
 Sent 1982718 bytes 6335 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc fq_codel 0: dev eth1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 1982718 bytes 6335 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc cake 800a: dev pppoe-wan root refcnt 2 bandwidth 80Kbit besteffort triple-isolate rtt 100.0ms raw total_overhead 22 hard_header_len 22 
 Sent 2227057 bytes 4806 pkt (dropped 1856, overlimits 11117 requeues 0) 
 backlog 12455b 25p requeues 0 
 memory used: 411456b of 4Mb
 capacity estimate: 80Kbit
                  Tin 0
  thresh         80Kbit
  target        227.1ms
  interval      454.2ms
  pk_delay        25.7s
  av_delay        12.9s
  sp_delay       64.9ms
  pkts             6687
  bytes         3166878
  way_inds           95
  way_miss          483
  way_cols            0
  drops            1856
  marks             870
  sp_flows           13
  bk_flows            6
  un_flows            0
  max_len          6466

qdisc ingress ffff: dev pppoe-wan parent ffff:fff1 ---------------- 
 Sent 5432952 bytes 6530 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc cake 800b: dev ifb4pppoe-wan root refcnt 2 bandwidth 1880Kbit besteffort triple-isolate wash rtt 100.0ms raw total_overhead 14 hard_header_len 14 
 Sent 6291789 bytes 6476 pkt (dropped 54, overlimits 5565 requeues 0) 
 backlog 0b 0p requeues 0 
 memory used: 42240b of 4Mb
 capacity estimate: 1880Kbit
                  Tin 0
  thresh       1880Kbit
  target          9.7ms
  interval      104.7ms
  pk_delay       30.9ms
  av_delay        3.3ms
  sp_delay         36us
  pkts             6530
  bytes         6381359
  way_inds           11
  way_miss          491
  way_cols            0
  drops              54
  marks             238
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          1696

A lower bound of 55 kbps is way too low to give SQM any leeway in controlling your bufferbloat. I don't think you'll be able to get any big improvements over where you currently are, unfortunately.


What is the rated speed of your plan? Even "very slow" grades of DSL should be 250k or more upload. Usually they charge more for faster download, but don't cut the upload proportionally on slower plans.

Likely something is wrong with your line. Whatever speed you are paying for, you should get it consistently. SQM can't do much about a connection that crashes and errors.

I'm not on a bandwidth capped plan. It's an ADSL2 connection with whatever speed the backhaul can manage. Problem is I'm a very long way from the exchange (hence only around 2Mbps download). But I'll raise a support ticket with my provider to see if they think anything can be done. I've got the Firebrick network graph from my provider's upstream equipment and it looks pretty bad (although I'm not really an expert at interpreting these graphs), so maybe it is a line problem.

There have been bufferbloat issues with the WRT AC series almost as far back as when the first router in the series was released.

If you can't find any previous threads on this forum by searching something like:

then try using Google's advanced search to search the now dead OpenWrt forum's WRT1900AC thread:

If you really do have, say, 50 kbps upload speed, then a single tiny packet of, say, 100 bytes takes 16 ms to serialize, and a 1500 byte MTU-sized packet takes 240 ms to serialize. If you send a ping, then a 1500 byte packet, then another ping, the second ping can't be sent until 240 ms after the 1500 byte packet.

If you want any hope of controlling bufferbloat to, say, a 10 ms increase under load, you need to be able to send a 1500 byte MTU packet in less than 10 ms, which works out to 1200 kbps. Unless you have upload and download speeds of around 1200 kbps, it is very difficult to get "reasonable" bufferbloat.
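
Just to make the arithmetic explicit, here it is as plain shell arithmetic (my example numbers, nothing router-specific):

# serialization delay in ms = bytes * 8 / rate_in_kbps
echo $(( 100 * 8 / 50 ))    # 100 byte packet at 50 kbps  -> 16 ms
echo $(( 1500 * 8 / 50 ))   # 1500 byte packet at 50 kbps -> 240 ms
# rate (in kbps) needed to serialize one 1500 byte packet in 10 ms
echo $(( 1500 * 8 / 10 ))   # -> 1200 kbps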


Might be worth a try: make sure mssfix (MSS clamping) is enabled and lower the MTU on the WAN interface in steps of 100 until something breaks or bufferbloat is at an acceptable level.
(IPv6 should(?) break below an MTU of 1280; IPv4 can work with a smaller MTU.)
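
On LEDE that would look something like the commands below; I'm assuming the default firewall layout where the WAN zone is the second zone and the PPPoE interface is called 'wan', so adjust to your setup:

# MSS clamping (usually already enabled by default on the wan zone)
uci set firewall.@zone[1].mtu_fix='1'
# lower the WAN MTU in steps of 100 and re-test bufferbloat each time
uci set network.wan.mtu='1400'
uci commit
/etc/init.d/firewall restart
ifup wan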

Yeah, I thought about recommending that, but if you want any hope of keeping bufferbloat below, say, 50 ms, you need an MTU of 50 kbps * 50 ms = 312 bytes (thanks to GNU units for doing the conversions).

That's a pretty damn small MTU, and it is just the theoretical best case; in reality you'll have a qdisc with several packets in it, and real-world bufferbloat will still be much bigger, say 150 ms or more.
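
Same back-of-the-envelope in shell, in case anyone wants to try other delay budgets:

# max packet size in bytes = rate_in_kbps * delay_budget_in_ms / 8
echo $(( 50 * 50 / 8 ))    # 50 kbps with a 50 ms budget  -> 312 bytes
echo $(( 50 * 150 / 8 ))   # 50 kbps with a 150 ms budget -> 937 bytes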


those good old 56k-modem days ... :smile:

I do remember being frustrated that I could type a lot faster than my 300 baud modem

I'd actually be willing to try the decreased-MTU approach, as it would still be an improvement on the thousands of milliseconds of latency I see whenever I try to do large uploads (which, given that I back up to the cloud, is something I want to do quite often). However, I need IPv6 as I have several IPv6-only servers I need access to. I'll see what my provider says, but if it really is just a distance-to-the-exchange problem, maybe I will have to bite the bullet and upgrade to fibre.

@JW0914 I wish I'd known there'd been bufferbloat issues with all the AC routers earlier, as the whole reason I purchased it was to have something to install LEDE+SQM onto to try and fix my problems! Should have done more research...

Thanks everyone for your help. Will post back when I find out what my provider has to say...

At least when I've had to use DSL, it has been good DSL with a steady speed. My configuration was to set SQM only on the upload side. Though I didn't test on any of the sites that give a letter grade, user experience was greatly improved. These were users who constantly ran Dropbox and other uplink-hogging services. I could watch the uplink graph in LuCI run a flat line at the SQM setting of 700 kbps (line speed was 750), and pings were still around 20 ms.
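
If you want to try the upload-only approach, it is just a matter of zeroing the download value; as far as I know sqm-scripts treats 0 as "don't shape this direction". Something like this, using the section name from your config above (the rates here are from my line, not yours):

uci set sqm.eth1.download='0'    # 0 = no ingress/download shaping
uci set sqm.eth1.upload='700'    # roughly 93% of my 750 kbps up-sync
uci commit sqm
/etc/init.d/sqm restart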

I assume you already have the modem as the only thing connected to the landline, and it is connected directly to the network interface with either twisted pair cable (cat 3 or cat5e) or a short length of phone cord.

Unplug the modem and plug in an analog phone. In most cases the company provides DC and a dial tone (which can only be used to call 911) even though you don't subscribe to analog phone. You want to just pick up the receiver and listen for any line noise. With the modem unplugged the DSL carriers should stop and a good line will be basically silent. If you hear crashing noises the line is faulty and the company needs to work on it.

Just do it; you won't really get anywhere trying to either manage bufferbloat or back things up to the cloud with 50 kbps of upload.


I concur with the others: at 80 kbps you are in a world of pain... Do you have access to the modem, and could you search for the ADSL synchronisation parameters? These might be helpful in figuring out whether your uplink could be made faster. My gut feeling is that it should be possible with your downlink at 2 Mbps. Which ISP are you a customer of?

Argh, SQM will only ever work with a fixed bandwidth. I really would like to see the modem's sync values and error counters...

50 kbps is about half the bandwidth needed for just a single VoIP phone call using PCMU; in other words, you can do just about as well by picking up your analog phone and shouting down the line. :wink:

Do you still know how to do dialup modem tones? :slight_smile:

Wheeeeeeeeeuuuuuuuubudingbudangbong squeeze schhhhhhh.


There are fixes for the bufferbloat issues on the WRT AC Series (for example, @davidc502's builds have hardware buffer patches applied)

How big of a problem does the WRT series have with SQM? I have a WRT1200AC and I get an A on the bufferbloat test with SQM. I'm using 17.01.4 r3560. My connection is 150/10.

@moeller0 I'm with Andrews & Arnold in the UK. They're a highly technical ISP, so I have direct access to the modem, as well as upstream graphs from continuous line monitoring, and I can even change the 'profile' used by the backhaul carrier for the line. Unfortunately I don't know enough about network hardware to make the most of this!

My ADSL modem (ZyXEL VMG1312-B10D) doesn't seem to offer much in the way of ADSL parameter tweaking or diagnostics. Looking at it this morning (when line quality has been a bit better) I get the following from its "DSL Statistics" page (I can try to 'stress' the line and then look again if you think that would help):

============================================================================
    ADSL Training Status:   Showtime
                    Mode:   ADSL2 Annex A
            Traffic Type:   ATM Mode
             Link Uptime:   0 day: 16 hours: 29 minutes
============================================================================
       ADSL Port Details       Upstream         Downstream
               Line Rate:      0.405 Mbps        2.772 Mbps
    Actual Net Data Rate:      0.372 Mbps        2.743 Mbps
          Trellis Coding:         ON                ON
              SNR Margin:       14.5 dB            8.3 dB
            Actual Delay:          4 ms             11 ms
          Transmit Power:       12.2 dBm          16.6 dBm
           Receive Power:        4.7 dBm           4.7 dBm
              Actual INP:        0.5 symbols       0.0 symbols
Attainable Net Data Rate:      0.440 Mbps        3.148 Mbps
============================================================================

            ADSL Counters

           Downstream        Upstream
Since Link time = 29 min 27 sec
FEC:		1178		11
CRC:		52		0
ES:			35		0
SES:		0		0
UAS:		0		0
LOS:		0		0
LOF:		0		0
LOM:		0		0
Retr:		0
HostInitRetr:	0
FailedRetr:	0
Latest 15 minutes time = 14 min 50 sec
FEC:		24		0
CRC:		0		0
ES:			0		0
SES:		0		0
UAS:		0		0
LOS:		0		0
LOF:		0		0
LOM:		0		0
Retr:		0
HostInitRetr:	0
FailedRetr:	0
Previous 15 minutes time = 15 min 0 sec
FEC:		8		0
CRC:		1		0
ES:   		1		0
SES:		0		0
UAS:		0		0
LOS:		0		0
LOF:		0		0
LOM:		0		0
Retr:		N/A
HostInitRetr:	N/A
FailedRetr:	N/A
Latest 1 day time = 16 hours 29 min 50 sec
FEC:		1178		11
CRC:		52		0
ES:			35		0
SES:		0		0
UAS:		22		22
LOS:		0		0
LOF:		0		0
LOM:		0		0
Retr:		0
HostInitRetr:	0
FailedRetr:	0
Previous 1 day time = 0 sec
FEC:		0		0
CRC:		0		0
ES:			0		0
SES:		0		0
UAS:		0		0
LOS:		0		0
LOF:		0		0
LOM:		0		0
Retr:		0
HostInitRetr:	0
FailedRetr:	0
Total time = 16 hours 29 min 50 sec
FEC:		1178		11
CRC:		52		0

The modem traffic status currently shows no errors or dropped packets.

All available DSL profiles and capabilities on the modem are currently enabled, including bitswap.

Upstream, my provider currently artificially caps the line to 95% of capacity as they find this improves VoIP performance (I have a hardware VoIP phone that actually works quite well most of the time), although I can turn it off myself if I wish. Upstream also applies TCPFix (I have the option to also apply MRUFix, LCPFix and FastTimeout) and limits the MTU to 1492 because my backhaul doesn't support baby-jumbo frames, so I can't use a full MTU of 1500 after PPPoE encapsulation. However, I notice my modem is still set to an MTU of 1500; could this be a problem, or will the upstream line settings overrule this?

The backhaul is with TalkTalk (TT) and is currently using the SI16_6_24M_1M profile, which means Annex: A, Adaption: Dynamic, Interleaving: 16, SNR: 6 dB, Max Downstream: 24 Mb/s, Max Upstream: 1 Mb/s.

Monitoring of my line shows U Attn to be rock steady at 36 dB, D Attn rock steady at 60 dB, and D Margin wandering all over the place between 4.5 and 8.5 dB. I have to say that at the times when the line has seemed to be behaving really well from my point of view, I have observed D Margin to be rock steady; however, when I mentioned this to my provider they didn't think it was important.

My line traffic graph from yesterday, when I posted, looked like this:

[netgraph image]

The state of the line between 10:00 and 14:00 shows when an upload was ongoing, without the LEDE router (using the ZyXEL modem as modem and router). The graph doesn't show "the situation on the ground", though: because of bufferbloat at my router's end, the actual perceived latency on my computers was in the thousands of milliseconds. At around 14:00 I switched the ZyXEL to bridge mode and introduced the Linksys running LEDE with SQM enabled to tackle the bufferbloat, but as can be seen on the graph things just got worse. The large gap between 16:30 and 17:30 is me switching the ZyXEL back to its modem+router configuration and taking the LEDE router out of the loop. As can be seen, afterwards there was still significant latency on the line.

My provider got back to me this morning, saying:

Thanks for your email. Yes can see from 14:00 - 17:00 yesterday a
difference in activity.

Firstly a few stats about your broadband circuit...

Estimated line length 4174m

Estimated standard ADSL download speed 1,644→3,983kb/s.

Inherent latency = PING REDACTED (REDACTED) 56(84) bytes of
data.
64 bytes from REDACTED: icmp_seq=1 ttl=59 time=39.3 ms
64 bytes from REDACTED: icmp_seq=2 ttl=59 time=40.1 ms

The Zyxel has negotiated..

DownstreamRate: 2.743000Mb/s
UpstreamRate:   0.372000Mb/s

Looking at the last 60 days, though your circuit is long, it does appear
quite stable as the signal to noise ratio has remained stable and the
packet error rate is very low.

Most cloud services have an ability to cap the up-sync rate to alleviate
the asymmetric properties of the signalling as 0.372Mb/s is easily
saturated causing the latency you are experiencing.

If you need greater capacity we would advise moving to FTTC

So they don't seem to have found a problem with the line (in the UK they do actually have a reputation for being quite good at fault location).

Any suggestions on parameter tweaks I can make to improve things would certainly be appreciated!