Help with SQM config and packet loss

Hey everyone,
Let me start by saying that I am a complete rookie when it comes to networking. It's black magic to me.

I will try to describe my problem in as much detail as possible and provide all the necessary information.

The problem is that despite enabling and configuring SQM, bufferbloat still occurs, most often on the download side.
This shows up most clearly in the Waveform tests, where some blue dots sit much further away from the rest, and the download ping and jitter results are higher than the upload and unloaded ones.

The second thing is packet loss.
These are not huge numbers, but the loss ranges from 1-5%, sometimes up to 10%.
According to the net graphs in games like Battlefield, Call of Duty, and several others, the lost packets are inbound.
There is no problem with outbound packets; that is always 0%.

These problems occur even when my PC is the only device using the Internet.
If I turned on Wi-Fi on the TV and mobile phone as well, the problem would probably get worse.

Now, a few words about my connection.
It is a 100/100 Mbps connection over copper cable.
The cable comes in from a box outside, enters my apartment, and is plugged directly into the router; from the router a CAT6 Ethernet cable runs to the PC, which has a 1000/1000 Realtek card.
My router model is: Netgear R6220
My ISP uses PPPoE and IPv4 only.

Moving on,
My connection test WITHOUT SQM.

waveform and dslreport:
https://imgur.com/SzZlZ1r

My connection test WITH SQM:

waveform and dslreport:
https://imgur.com/iK8jtXf

My SQM settings.
(For guidance I used the wiki, the FAQ, and other posts available here on the forum.)

https://imgur.com/Aw7SXPf

https://imgur.com/mdggVBx

https://imgur.com/s1VVTdx

https://imgur.com/pP7dfyA

https://imgur.com/3QzaYPs

I changed my LAN IP from 192.168.1.1 to 192.168.2.1, because I thought there might be an IP conflict between LAN and WAN,
and I switched the Ethernet cable from LAN 1 to LAN 2, just in case the LAN 1 port was worn out.
Neither had any effect.

Maybe it's the router's fault? I have heard that MediaTek chips are not the best.
I also have a TP-Link Archer C7 v2 as a backup, and a CAT7 cable with gold-plated plugs.
Or maybe the ISP is to blame?
I'm at a dead end.

I will be grateful for any help and suggestions.
Sorry for using imgur; as a new member, I'm limited to one embedded media item.

I hope I have described my problem understandably and provided the necessary information; if more is needed, I will of course add it.

Best regards, have a nice day.

I am also attaching my interface settings; they are mostly defaults.
Maybe they will be useful; if not, I can always remove them.

Lan:
https://imgur.com/BavUStS
https://imgur.com/BGWG5sH
https://imgur.com/hVVzUng
https://imgur.com/brBvkSJ

PPPoE:
https://imgur.com/CMKO2T8
https://imgur.com/JKrqLt4

Devices:
https://imgur.com/UQSl4BP

Global Network Options:
https://imgur.com/aeVs8bs

So the Waveform test is really an excellent test and a great service from Waveform to the community! That said, modern browsers are not a good environment for high-precision time measurement, as the internal workings of the browser will influence the test quite a lot. This might explain a few outliers in the Waveform test... I note that the reported maxima of these outliers and the mean download latencies are noticeably smaller with SQM than without SQM... So I am inclined to chalk the outliers up to the fact that browsers are not great environments for latency measurements...

Now, 5-10% packet loss is IMHO a huge and unacceptable number (heck, even 1% can be quite annoying already). But the devil is in the details, and SQM can only help with packet loss to some degree.

Please run a capacity test* here: https://speed.cloudflare.com

and post a screenshot of the full page after the test has finished.
*) These tests are commonly misnamed "speed tests", but speed is distance/time (and here is essentially around 2/3 of the speed of light in vacuum), while these tests report volume/time, which is either a volume flow rate or, in information-theory terms, a (net) channel capacity...
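
As an optional cross-check (a suggestion on top of the test, not part of it), latency under load can also be watched from a second terminal while the test saturates the link, which sidesteps the browser-timing caveat above:

```
# run in a second terminal while the capacity test is loading the link;
# 1.1.1.1 is just a well-known, stable anycast address, any reliable host works
ping -i 0.2 1.1.1.1
```

If the round-trip times climb sharply during the download or upload phase, that is bufferbloat showing up outside the browser environment.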

The Cloudflare test contains an unloaded packet-loss test that might trigger if you truly have permanent packet loss.
Also try the following page, which is quite useful for testing packet loss and its directionality: https://packetlosstest.com

Please use the following settings:
Packet Sizes: 142 and 158 Bytes (slider to the left)
Frequency: 20 Pings/Second
Duration: 180 Seconds (slider to the right)
Acceptable Delay: (cosmetic only; the results will show a horizontal line at that value)
Or Select a Preset Approximation: Custom (do not touch; changing the sliders will set this to Custom anyway)
Wait 2 seconds before recording results? (keep unchecked)
Using: XXX Server (pick the server location closest to your test system)

Then click the green "Start Test" button.
The test will run for 3 minutes; keep the browser window in focus and leave the computer alone while it runs.
After it finishes, please take a screenshot of the results and post it to the forum.

Maybe just use SSH to log into your router from a terminal, execute the following commands, and then copy and paste the output from the terminal window into the forum editor#:

  1. ifstatus wan | grep device
  2. cat /etc/config/sqm
  3. tc -s qdisc
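
For example, as a one-liner from the PC (a sketch; it assumes the router is still reachable at the LAN IP 192.168.2.1 mentioned above):

```
# run all three diagnostics in one SSH session so the output stays together
ssh root@192.168.2.1 "ifstatus wan | grep device; cat /etc/config/sqm; tc -s qdisc"
```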

#) Instructions for formatting of pasted text:
PER GUI:

Please use the "Preformatted text </>" button for logs, scripts, configs and general console output.
Please edit your post accordingly. Thank you! :slight_smile:

PER CLI:

Just make sure you "sandwich" your text between two rows of backtick characters ` (which themselves will be invisible in the preview), so it looks something like this in the editor:
```
Your Pasted Text as preformatted text with fixed width font
1
1111 (note with fixed-width fonts the numbers are right-aligned)
```
but looking like this in the rendered forum:

Your Pasted Text as preformatted text with fixed width font
   1
1111 (note with fixed-width fonts the numbers are right-aligned)

Hey @moeller0 , thanks a lot for your reply.

With this packet-loss percentage, my imagination may have gotten a little carried away.
Yesterday, while playing Battlefield, I saw a maximum of 4-5% and the game displayed a red packet-loss icon. Mostly it sat at 1-2% with an orange icon.

Test with Cloudflare:

Test with Packetloss:

  1. ifstatus wan | grep device:
"l3_device": "pppoe-wan",
         "device": "wan",
  2. cat /etc/config/sqm:
config queue 'eth1'
        option enabled '1'
        option interface 'wan'
        option download '80000'
        option upload '80000'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option linklayer 'ethernet'
        option debug_logging '0'
        option verbosity '0'
        option overhead '44'
        option qdisc_advanced '1'
        option squash_dscp '1'
        option squash_ingress '0'
        option ingress_ecn 'ECN'
        option egress_ecn 'NOECN'
        option qdisc_really_really_advanced '1'
        option linklayer_advanced '1'
        option linklayer_adaptation_mechanism 'cake'
        option iqdisc_opts 'mpu 68'
        option eqdisc_opts 'mpu 68'
        option tcMTU '2047'
        option tcTSIZE '128'
        option tcMPU '0'
  3. tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
 Sent 227938820828 bytes 154037002 pkt (dropped 8, overlimits 0 requeues 94281)
 backlog 0b 0p requeues 94281
qdisc fq_codel 0: dev eth0 parent :10 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :f limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :e limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :d limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :c limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :b limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :a limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :9 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 10737081158 bytes 7743104 pkt (dropped 0, overlimits 0 requeues 6576)
 backlog 0b 0p requeues 6576
  maxpacket 1510 drop_overlimit 0 new_flow_count 4175 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 217200638234 bytes 146285643 pkt (dropped 8, overlimits 0 requeues 87698)
 backlog 0b 0p requeues 87698
  maxpacket 15180 drop_overlimit 0 new_flow_count 13815198 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 866243 bytes 5009 pkt (dropped 0, overlimits 0 requeues 4)
 backlog 0b 0p requeues 4
  maxpacket 260 drop_overlimit 0 new_flow_count 316 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 235193 bytes 3246 pkt (dropped 0, overlimits 0 requeues 3)
 backlog 0b 0p requeues 3
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc cake 8011: dev wan root refcnt 17 bandwidth 80Mbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms noatm overhead 44 mpu 68
 Sent 114916576 bytes 199818 pkt (dropped 12, overlimits 167840 requeues 472)
 backlog 0b 0p requeues 472
 memory used: 488896b of 4Mb
 capacity estimate: 80Mbit
 min/max network layer size:           16 /    1500
 min/max overhead-adjusted size:       68 /    1544
 average network hdr offset:           14

                  Tin 0
  thresh         80Mbit
  target            5ms
  interval        100ms
  pk_delay         84us
  av_delay         19us
  sp_delay         10us
  backlog            0b
  pkts           199830
  bytes       114934172
  way_inds        73629
  way_miss         1876
  way_cols            0
  drops              12
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          1514
  quantum          1514

qdisc ingress ffff: dev wan parent ffff:fff1 ----------------
 Sent 561835771 bytes 563340 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan4 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan3 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan2 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev phy0-ap0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev phy1-ap0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev pppoe-wan root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 5688199753 bytes 75534762 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 34316 drop_overlimit 0 new_flow_count 5427939 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc cake 8012: dev ifb4wan root refcnt 2 bandwidth 80Mbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100ms noatm overhead 44 mpu 68
 Sent 568545246 bytes 562548 pkt (dropped 792, overlimits 767677 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 1666656b of 4Mb
 capacity estimate: 80Mbit
 min/max network layer size:           46 /    1500
 min/max overhead-adjusted size:       90 /    1544
 average network hdr offset:           14

                  Tin 0
  thresh         80Mbit
  target            5ms
  interval        100ms
  pk_delay         21us
  av_delay          9us
  sp_delay          3us
  backlog            0b
  pkts           563340
  bytes       569722531
  way_inds         2682
  way_miss         1926
  way_cols            0
  drops             792
  marks               0
  ack_drop            0
  sp_flows            2
  bk_flows            1
  un_flows            0
  max_len          1514
  quantum          1514

The statistics say that in the download direction cake dropped 792 packets, and in the upload direction a mere 12. Assuming that cake is unlikely to drop sparse game packets on an 80 Mbps link, I am not sure cake is where the packets get lost...
Maybe it is packet-capture time, to look at the frequency of incoming and outgoing game packets...
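
For example, a rough sketch (tcpdump needs to be installed first, and GAME_SERVER_IP is a placeholder for the address shown in the game's network graph or connection info):

```
# install tcpdump on the router (OpenWrt)
opkg update && opkg install tcpdump
# capture the game traffic on the WAN-facing PPPoE interface during a match;
# open /tmp/game.pcap afterwards (e.g. in Wireshark) and compare the
# inbound vs. outbound packet rates to see where packets go missing
tcpdump -i pppoe-wan -n -w /tmp/game.pcap host GAME_SERVER_IP
```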

Oh... 792 packets, that's a lot. I can try fq_codel instead of cake; according to the wiki, fq_codel uses less CPU than cake. But I have a feeling this is a problem on the ISP's side...
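
A minimal sketch of that switch (assuming the 'eth1' section name from the config above; fq_codel needs one of the simple*.qos scripts, since piece_of_cake.qos is cake-specific):

```
# switch SQM from cake/piece_of_cake.qos to fq_codel/simplest.qos
uci set sqm.eth1.qdisc='fq_codel'
uci set sqm.eth1.script='simplest.qos'
uci commit sqm
/etc/init.d/sqm restart
```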

I have already sent an e-mail to my ISP about this problem.
Fortunately, there are no problems with my ISP; I can communicate with them without any issues and they are helpful, so maybe they will do something on their side.

I've heard stories where ISPs were stubborn, unhelpful, and wanted a lot of money for their "services".

792 out of 563340 packets, or 792/563340 ≈ 0.0014, i.e. 0.14%; that is not really all that much.
In a single https://speed.cloudflare.com capacity test on a 100/40 link my router recorded 1698 ECN/CE marks... so I would not worry much about 792 packets...
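
The same ratio can be read straight off the qdisc counters; a small sketch (assuming the download shaper still sits on ifb4wan as in the output above):

```
# print cake's drop percentage on the download shaper from the tc counters
tc -s qdisc show dev ifb4wan | \
  awk '/Sent .* pkt \(dropped/ { gsub(/[(,]/, ""); printf "%.2f%% dropped\n", 100 * $7 / $4 }'
```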

That is a rare situation, so I wish you luck in getting that resolved.

Hey, it's me again.

So, I exchanged a few emails with my ISP and explained the problem, showing them screenshots with the test results. I received a reply the next day at noon; they told me that everything was fine on their side.
So I suggested that maybe something was wrong with the cable coming in from outside, or with the box the cable is connected to.

After some time, the ISP wrote back and suggested that I update the TP-Link to the latest firmware and test on that router.
Funny, because before OpenWrt that TP-Link had up-to-date stock firmware, and I never told my ISP that I have OpenWrt installed.

So I rolled back to the stock firmware and ran the packet-loss test with the same settings you suggested earlier, and it showed 0% packet loss.

I also quickly checked BF5; most of the time it showed 0-0.1%, and for a moment it blinked at 0.8% but quickly returned to 0-0.1%.

It's strange, because as I said previously, with stock firmware this TP-Link behaved similarly to the Netgear, maybe a little better.
Maybe the ISP actually did something on their side, ran some tests, did some tweaking, who knows.

Anyway, I will keep monitoring this, and if something happens I will write to the ISP.
Once again, big thanks to @moeller0 for his help.

The topic can be closed.

Well, what happens if you use the Netgear with OpenWrt now? Still loss or no loss?
That would be interesting, as it would tell us whether OpenWrt/SQM has issues that need handling or whether that was just a coincidence.

I tested the Netgear both with OpenWrt (without SQM) and with the stock firmware.

OpenWrt:

Stock:

The difference is minimal.


Where is the modem?

Oh...

You don't have access to the modem?

Thanks, so stock is not superior to OpenWrt, which leaves the question of OpenWrt with versus without SQM. Since you already show results without SQM, could I ask you to repeat the OpenWrt test with SQM (and, if too much time has passed since the without-SQM test, to repeat that one as well, please)?