State of TP-Link Archer C7v2|v5 in 2023

I love you


Sounds like you're doing pretty well on your own...

You could play with the more advanced settings on Cake SQM (read through the three layers of SQM docs on the main OpenWrt site) to squeeze out a bit more performance and fairness. One thing you would probably benefit from is ack-filtering on egress, reducing some of the traffic load on that 1 Mbit upload.

Thanks. After some sleuthing, I added option eqdisc_opts 'nat dual-srchost ack-filter', but I am not sure it made a big difference on dslreports.com. It was cool to learn why it is important on very asymmetric links.

One thing I have been wondering about is the link layer adaptation per-packet overhead, but tuning that would require a good bout of measuring ping times, so I'm leaving it at 44 for the time being.

Yep... I'd add nat dual-dsthost ingress on the ingress side, to have the same kind of flow fairness behavior in both directions. I don't know if the ingress keyword is going to help much or not, but it's suggested.
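
If it helps, here is roughly how that whole set of tweaks could look from the shell. This is only a sketch: the single-queue section index, the 'ethernet' link layer type, and the exact values are assumptions you would adapt to your own line.

# expose the advanced fields in the LuCI SQM page
uci set sqm.@queue[0].qdisc_advanced='1'
uci set sqm.@queue[0].qdisc_really_really_advanced='1'
# egress: per-host fairness plus ack filtering for the slow upload
uci set sqm.@queue[0].eqdisc_opts='nat dual-srchost ack-filter'
# ingress: matching per-host fairness, treated as ingress traffic
uci set sqm.@queue[0].iqdisc_opts='nat dual-dsthost ingress'
# link layer adaptation -- 'ethernet' is an assumption, 44 is the overhead mentioned above
uci set sqm.@queue[0].linklayer='ethernet'
uci set sqm.@queue[0].overhead='44'
uci commit sqm
/etc/init.d/sqm restart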

Tuning the shaping speeds, of course, to find the maximums before the bloat starts to rise is the main thing; finding the sweet spot for the link layer adaptation matters less, but is still worth doing at some point.

You could use tc -s qdisc on the command line to see a report of how Cake is configured and what it's doing (and to confirm you didn't make a typo in the option lines, which could cause it not to set up on an interface at all!), including how many acks are filtered.
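
For example (the eth0.2 WAN device name is only an assumption; substitute whatever your WAN and its companion ifb are called):

# egress (upload) shaper sits on the WAN device itself
tc -s qdisc show dev eth0.2
# ingress (download) shaper sits on the matching ifb device
tc -s qdisc show dev ifb4eth0.2
# or simply dump the statistics for every interface
tc -s qdisc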

Yup, that went in along with the egress options I mentioned.

I did check that while setting things up, and several hours later I can see the effect of ack-filter:

                   Bulk  Best Effort        Voice
  thresh       59368bit      950Kbit    237496bit
  target          307ms       19.2ms       76.7ms
  interval        614ms        114ms        172ms
  pk_delay          0us       13.6ms        9.4ms
  av_delay          0us       7.88ms       1.53ms
  sp_delay          0us       1.24ms         95us
  backlog            0b        1393b           0b
  pkts                0      6259880        52208
  bytes               0    677996170      7181345
[...]
  ack_drop            0      1155234            0
[...]

It does not look bad to me (~18% of packets!), but it's still a little arcane.
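
(That percentage is just ack_drop over pkts in the Best Effort tin:)

# 1155234 filtered acks out of 6259880 packets
awk 'BEGIN { printf "%.1f%%\n", 1155234 / 6259880 * 100 }'   # ~18.5%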


Wow... a much higher % rate than I've ever seen!! And, I guess you could say, the slower the rate, the more each packet matters (losing a packet hurts more when you have fewer per second). So that would be a LOT of functional bandwidth you've gained back on your uplink!


fast.com and dslreports.com/speedtest do not show a big increase in uplink bandwidth, but as you said, I may need to retune the threshold. I decided to try diffserv8 on both ingress and egress (the exact option strings are sketched below the table). Here are the egress stats:

                  Tin 0        Tin 1        Tin 2        Tin 3        Tin 4        Tin 5        Tin 6        Tin 7
  thresh        950Kbit    831248bit    727336bit    636416bit    556864bit    487256bit    426344bit    373048bit
  target         19.2ms       21.9ms         25ms       28.6ms       32.7ms       37.4ms       42.7ms       48.8ms
  interval        114ms        117ms        120ms        124ms        128ms        132ms        138ms        144ms
  pk_delay          0us        5.52s       30.9ms          0us       1.27ms          0us       82.7ms       7.33ms
  av_delay          0us        550ms       3.12ms          0us         38us          0us         20ms       1.52ms
  sp_delay          0us         46us        120us          0us         23us          0us         26us         18us
  backlog            0b           0b           0b           0b           0b           0b           0b           0b
  pkts                0         2020      4915228            0           56            0        22568        89207
  bytes               0       652192   1201575283            0         5040            0      5392798     12372145
  way_inds            0            0       150564            0            0            0            0          903
  way_miss            0            6       259935            0           56            0           14        18295
  way_cols            0            0            0            0            0            0            0            0
  drops               0          242       169933            0            0            0         1396            0
  marks               0            0         3605            0            0            0            0            0
  ack_drop            0          961      1056227            0            0            0        10165            0
  sp_flows            0            1            1            0            1            0            1            1
  bk_flows            0            0            1            0            0            0            0            0
  un_flows            0            0            0            0            0            0            0            0
  max_len             0        25738        68970            0           90            0        36336          590
  quantum           300          300          300          300          300          300          300          300

There are really a lot of ack_drops (~20% of pkts).
So far I'm not unhappy.
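
For reference, switching was just a matter of putting the diffserv8 keyword in front of the same option strings (a sketch following my earlier settings; the single-queue section index is an assumption):

# same per-host fairness and ack-filter settings as before, now with 8 tins
uci set sqm.@queue[0].eqdisc_opts='diffserv8 nat dual-srchost ack-filter'
uci set sqm.@queue[0].iqdisc_opts='diffserv8 nat dual-dsthost ingress'
uci commit sqm
/etc/init.d/sqm restart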

Hmm, never tried diffserv8. It seems it doesn't have class names, so it's hard to know what they are. I usually have it set to diffserv4 or the default diffserv3, and those have names for the tins. Looks like what would be the "best effort" tin is empty?

Overall, I'd guess that even if you don't see a big speed difference with the ack filtering on, more of the available pipe is composed of YOUR data rather than acks and the time they take, so responsiveness and the speed at which files get there are probably improved.

Yes indeed.

It's interesting to see 5 tins being used, since I am definitely not doing anything on the DSCP marking front. Also, I tried some video, online conferencing, etc., and couldn't quite see a direct correlation while looking at these numbers.

For the time being, I'll let good enough be good enough :slight_smile:

Hi,

Thanks so much for sharing your builds, Catfriend. I installed your 21.02.0 stable build on my V5 yesterday, and I'm going to give it a few weeks to assess stability. So far things seem really snappy and the WiFi reach is great.

A note that I am installing this build because stock 21.02.0 gave me video stuttering and dropouts on Google Meet calls, both over WiFi and hardwired.

I will let you know how it goes in a few weeks.


I just want to mention that my experience is the same: random dropouts on OpenWrt 21.02.0. So far this build has been OK; I will update again if the dropouts continue. I'm scheduling a daily reboot, just to be safe.
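
In case it's useful to anyone, the daily reboot is just one line in /etc/crontabs/root (the 04:30 time is arbitrary):

# reboot every night at 04:30
30 4 * * * /sbin/reboot

After editing, restart cron with /etc/init.d/cron restart so the change is picked up.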


This worked for me; I am running 6 Archer V2s and one Archer V4. The WiFi stability is great with the ath10k non-ct-drivers; I was having lots of drop outs with the ct-drivers and had resorted to using your scripts. Since switching to the non-ct-drivers, the scripts never run and I've stopped installing them.

Also, thanks for posting your imagebuild.sh script on GitHub. It got me started building my own images which makes upgrading a breeze with the non-ct-drivers.


Many thanks for sharing. Just to add some additional bits on disabling IPv6:

opkg remove ip6tables
opkg remove kmod-ip6tables
opkg remove kmod-nf-ipt6
opkg remove kmod-nf-reject6
opkg remove odhcp6c
opkg remove odhcpd-ipv6only

The poster confirms this has provided additional stability.

With Metta
:pray:


To be honest, while removing the IPv6 stuff might increase stability, 2.4 GHz still dies eventually.

I am testing @sammo's workaround of removing 'ldpc'. Seven days running; I hope it is the real, definitive cause.

My assumption is that the issue is related neither to IPv6 nor to the ath10k driver (ct or non-ct), but to the ath9k driver itself, which enters a waiting loop when a client station is far from the AP or the signal is weak and it doesn't acknowledge some handshake. A scan then brings the driver out of that waiting loop.

The removal of ldpc in ath9k might help, as the error correction coding is then different.

Note: I also own a wdr7500 v3 modified with 16 MB flash, a fake C7 v2.


Keep us posted... I haven't tried the ldpc disable yet, may try it later.

I'm working with someone who was involved with some of this code, and am trying to dig up some data for him. I have disabled my nightly cron of ifconfig wlan1 down and up... so that I can try troubleshooting the problem.
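
For context, the entry I commented out in /etc/crontabs/root was along these lines (the exact time and sleep are incidental):

# nightly radio bounce, now disabled while gathering data
#0 5 * * * ifconfig wlan1 down; sleep 10; ifconfig wlan1 up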

Annoyingly, it took nearly 2 weeks until I had an episode again! Then they came at much shorter intervals. My point being: you might have to wait a few weeks before you can draw a "statistically sound" conclusion about something actually having an effect. Hang in there and let us know.

It does feel like there's a special interaction of traffic types, or some special condition, that needs to occur to cause this. And troubleshooting is always difficult when it's an intermittent, seemingly random problem.


Hi Jon, glad to hear someone is dealing with the code itself.

Our concern is likely a race/deadlock (a missing atomic action?) in a multi-threaded situation.

In a multi-threaded real-time program, even a simple printk may change behavior.

Why does a simple iw dev wlan1 scan bring the driver out of that deadlock? I think this should be the direction of your investigation.


Just checking in to let you know that Catfriend's 21.02.0 has been rock solid for the past 3 weeks. I'm on an Archer C7 V5. WiFi is working excellently, and there are no problems running SQM.

Note that I had to move to Catfriend's firmware after installing the regular 21.02.0. After a day with the regular firmware I would get dropouts and stuttering on Zoom and Google Meet calls. Both WAN and WiFi were affected.


I'm following this thread before jumping to OpenWrt with my Archer C7 v2.
Is it stable on both bands, 2.4 GHz and 5 GHz?


On the C7 v2: 5 GHz yes; 2.4 GHz should be OK most of the time, but a weekly reboot won't hurt.
On the C7 v5: yes and yes.


@Catfriend1

Hi! I just registered on the forum to ask whether your build is affected by the known issue in 21.02. Thanks in advance.

Known issues

Some IPv6 packets are dropped when software flow offloading is used: FS#3373

As a workaround, do not activate software flow offloading; it is deactivated by default.
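
(For anyone checking their own box, this is one way to see whether that option is on; the section index assumes the stock firewall config:)

# prints 1 if software flow offloading is enabled; empty or 0 means it is off
uci -q get firewall.@defaults[0].flow_offloading
# to make sure it stays off
uci set firewall.@defaults[0].flow_offloading='0'
uci commit firewall
/etc/init.d/firewall restart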