That is exactly what I did. Entry restored to the config above.
Glad it’s working now.
If your problem is solved, please consider marking this topic as [Solved]. See How to mark a topic as [Solved] for a short how-to.
Thanks!
Just a general observation, unrelated to your problem: I would always advise against using capital letters or special characters (control characters in particular, and quotes) in interface names, and even in ESSIDs/PSKs. Yes, I know that for ESSIDs and PSKs this is explicitly allowed (they are evaluated as arbitrary binary data), but it still makes things harder for yourself, from quoting them correctly (misbalanced quotes in particular can be really fun to debug and can cause apparently unrelated issues) to being the first to find subtle bugs.
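To illustrate (a hypothetical /etc/config/wireless fragment; the section and device names are made up), an embedded single quote forces the double-quoted form:
config wifi-iface 'default_radio0'
	option device 'radio0'
	option mode 'ap'
	option ssid "R'lyeh"
versus the quiet life of
	option ssid 'rlyeh'
Lose track of one quote character and everything after it in the file parses wrong, usually with errors pointing somewhere else entirely.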
That is good advice which I wish past me had followed. That "R" bothers me every time I see it (with LuCI offering no obvious way to change it), and just copying and pasting "R'lyeh" in this thread has been a pain.
My next SSID will be "" on "
".
All the changes made under your guidance appear to be working and nothing I've noticed so far has stopped working as a result of the changes.
Alas, the original problem of losing WAN traffic was unaffected by these changes.
So let's keep an eye on this and gather some data. Specifically, when you have a dropout such that the WAN isn't working, immediately grab this from the router:
logread -e udhcpc
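If it helps, here is a minimal sketch of a capture script (assuming BusyBox ash and the default writable overlay, so /root persists across reboots; the file name is arbitrary), so nothing is forgotten in the moment:
root@router:~# cat > /root/wan-drop.sh << 'EOF'
#!/bin/sh
# Snapshot state right after noticing a WAN drop; appends to a persistent log.
{
	echo "=== $(date) ==="
	uptime
	ifstatus wan        # netifd's view of the wan interface, including routes
	logread -e udhcpc
} >> /root/wan-drop.log
EOF
root@router:~# chmod +x /root/wan-drop.sh
Running /root/wan-drop.sh at each incident then gives directly comparable snapshots.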
router, ap1 and ap2 upgraded from 24.10.1 to 24.10.2
Detailed process
ap1 and ap2 flashed with openwrt-24.10.2-ath79-generic-tplink_archer-a7-v5-squashfs-sysupgrade.bin, keeping settings and retaining the current configuration.
root@ap2:~# opkg update
root@ap2:~# opkg remove dnsmasq odhcp6c odhcpd-ipv6only
root@ap2:~# opkg install acme acme-acmesh acme-acmesh-dnsapi ca-certificates openssl-util rsync sed wget-ssl
root@ap2:~# rm /etc/config/acme-opkg
root@ap2:~# reboot
router flashed with openwrt-24.10.2-ipq806x-generic-tplink_c2600-squashfs-sysupgrade.bin, keeping settings and retaining the current configuration.
root@router:~# opkg update
root@router:~# opkg install 6in4 acme acme-acmesh acme-acmesh-dnsapi ca-certificates ddns-scripts ddns-scripts-services haproxy luci-app-acme luci-app-ddns luci-app-uhttpd openssl-util rsync sed socat terminfo wget-ssl
root@router:~# rm /etc/config/acme-opkg /etc/config/ddns-opkg /etc/haproxy.cfg-opkg
root@router:~# reboot
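Since these post-sysupgrade steps will presumably recur at the next release, they could be collected into a script (a sketch only; the package list is simply the one above):
root@router:~# cat > /root/post-upgrade.sh << 'EOF'
#!/bin/sh
# Reinstall the usual extra packages after a sysupgrade and remove the
# -opkg copies of config files that the reinstall leaves behind.
set -e
opkg update
opkg install 6in4 acme acme-acmesh acme-acmesh-dnsapi ca-certificates \
	ddns-scripts ddns-scripts-services haproxy luci-app-acme luci-app-ddns \
	luci-app-uhttpd openssl-util rsync sed socat terminfo wget-ssl
rm -f /etc/config/acme-opkg /etc/config/ddns-opkg /etc/haproxy.cfg-opkg
EOF
root@router:~# chmod +x /root/post-upgrade.sh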
Will do.
Returned from being elsewhere to find traffic had dropped.
root@router:~# date ; logread -e udhcpc
Wed Jun 25 17:27:02 EDT 2025
Wed Jun 25 16:03:58 2025 daemon.notice netifd: wan (2786): udhcpc: sending renew to server 173.79.139.1
Wed Jun 25 16:03:58 2025 daemon.notice netifd: wan (2786): udhcpc: lease of 173.79.139.33 obtained from 173.79.139.1, lease time 7200
Wed Jun 25 17:03:58 2025 daemon.notice netifd: wan (2786): udhcpc: sending renew to server 173.79.139.1
And another a few hours later, just before I posted this:
root@router:~# date ; logread -e udhcpc
Wed Jun 25 19:41:28 EDT 2025
Wed Jun 25 15:37:25 2025 daemon.notice netifd: wan (2790): udhcpc: started, v1.36.1
Wed Jun 25 15:37:27 2025 daemon.notice netifd: wan (2790): udhcpc: broadcasting discover
Wed Jun 25 15:37:27 2025 daemon.notice netifd: wan (2790): udhcpc: broadcasting select for 173.79.139.33, server 173.79.139.1
Wed Jun 25 15:37:27 2025 daemon.notice netifd: wan (2790): udhcpc: lease of 173.79.139.33 obtained from 173.79.139.1, lease time 7200
Those timestamps make no sense to me.
root@router:~# uptime
19:44:13 up 4 min, load average: 0.05, 0.31, 0.16
I did not reboot four minutes ago, but WAN access has returned - consistent with a reboot.
Now I think I understand that those log entries were for the then-current lease, probably replayed after the reboot.
Unexpected reboots are not common with OpenWrt from a software perspective (it is unusual for the system to crash like that). What can cause this is hardware -- an inadequate, marginal, or failing power supply, for example. Sometimes that's the external power brick; occasionally it's the internal power circuitry.
I'd recommend replacing your external power supply with one that is rated for the same voltage and the same or higher current capacity.
The udhcpc messages are normal and just informative, not a sign of a problem (neither error nor warning).
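(For context: udhcpc asks to renew at half the lease time by default, so a 7200-second lease yields a "sending renew" line every 3600 seconds. The hourly entries in your logs are exactly that schedule, not retries or failures.)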
Memory claims that this is unprecedented for any device on which I've run OpenWrt over a period of decades, and there is a probability greater than zero that this reboot was not spontaneous.
If it happens again, then I'll consider the behaviour real and plan accordingly.
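One way to make any future occurrence unambiguous (a sketch, assuming the default writable overlay so the file survives reboots) is to record every boot by adding one line to /etc/rc.local, above its final "exit 0":
echo "booted: $(date)" >> /root/boot-history.log
Note that the clock may not yet be NTP-synced that early in boot, so entries can carry stale timestamps until sysntpd catches up, which may be what happened with the 15:37 log lines above.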
Thanks for confirming.
Just to make sure we have covered all bases... we have not explored whether you have any additional packages installed... particularly any that are memory hungry and could cause OOM errors, or that might have other reasons to crash the system. If your system doesn't have any additional packages (or only ones known to be stable and not typically responsible for memory issues), I stand by my power theory.
The thoroughness is appreciated.
Hidden under ⯈ Detailed process in my reply detailing the upgrade to 24.10.2, I included the packages installed on router:
- 6in4
- acme acme-acmesh acme-acmesh-dnsapi ca-certificates
- ddns-scripts ddns-scripts-services
- haproxy
- luci-app-acme luci-app-ddns luci-app-uhttpd
- openssl-util rsync sed socat terminfo wget-ssl
There is no history of resource problems on this device and I've never seen OOM kills logged or found services mysteriously stopped.
As you might expect, I am following the System log with logread ; logread -f, which is unlikely to help in the case of a spontaneous reboot, but might in the case of a WAN drop.
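Since the RAM log buffer is lost at reboot, it may also be worth duplicating the log off-box in case of a crash. A sketch using OpenWrt's standard system options, with 192.168.1.10 standing in for whatever machine you have that can receive syslog:
root@router:~# uci set system.@system[0].log_ip='192.168.1.10'
root@router:~# uci set system.@system[0].log_port='514'
root@router:~# uci set system.@system[0].log_proto='udp'
root@router:~# uci commit system
root@router:~# /etc/init.d/log restart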
Immediately following another drop:
root@router:~# logread -e udhcpc
Wed Jun 25 15:37:26 2025 daemon.notice netifd: wan (2793): udhcpc: started, v1.36.1
Wed Jun 25 15:37:28 2025 daemon.notice netifd: wan (2793): udhcpc: broadcasting discover
Wed Jun 25 15:37:28 2025 daemon.notice netifd: wan (2793): udhcpc: broadcasting select for 173.79.139.33, server 173.79.139.1
Wed Jun 25 15:37:28 2025 daemon.notice netifd: wan (2793): udhcpc: lease of 173.79.139.33 obtained from 173.79.139.1, lease time 7200
Fri Jun 27 08:28:49 2025 daemon.notice netifd: wan (2793): udhcpc: sending renew to server 173.79.139.1
Fri Jun 27 08:28:50 2025 daemon.notice netifd: wan (2793): udhcpc: lease of 173.79.139.33 obtained from 173.79.139.1, lease time 7200
Fri Jun 27 09:28:50 2025 daemon.notice netifd: wan (2793): udhcpc: sending renew to server 173.79.139.1
Fri Jun 27 09:28:50 2025 daemon.notice netifd: wan (2793): udhcpc: lease of 173.79.139.33 obtained from 173.79.139.1, lease time 7200
Fri Jun 27 10:28:50 2025 daemon.notice netifd: wan (2793): udhcpc: sending renew to server 173.79.139.1
Fri Jun 27 10:28:50 2025 daemon.notice netifd: wan (2793): udhcpc: lease of 173.79.139.33 obtained from 173.79.139.1, lease time 7200
Fri Jun 27 11:28:50 2025 daemon.notice netifd: wan (2793): udhcpc: sending renew to server 173.79.139.1
Fri Jun 27 11:28:50 2025 daemon.notice netifd: wan (2793): udhcpc: lease of 173.79.139.33 obtained from 173.79.139.1, lease time 7200
Again, this all looks nominal, as does everything else in the System log.
Another drop, unusually soon after the last. I tried restarting the firewall service, with no apparent result, then the network service, which generated an error:
Failed to execute "/etc/init.d/network restart" action: Error: XHR request timed out
I restarted the network service again to see whether it generated the same error, which it did; WAN traffic resumed almost too soon to be a result of the second restart.
This is typically an error encountered only with LuCI when the server (i.e. the router) isn't responding or is encountering a glitch of some sort. This can include connectivity loss between the browser and the server, which would be triggered by a network restart.
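For what it's worth, issuing the restart from an SSH session sidesteps the XHR timeout entirely (the session may still stall briefly while interfaces bounce), and if only the WAN needs a kick, bouncing just that interface is less disruptive:
root@router:~# /etc/init.d/network restart
root@router:~# ifup wan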
Aside from the power adapter (which I think is very much worth swapping), the other thing you can do is to run a standard OpenWrt image with no additional packages. This will narrow the scope of testing down to purely the hardware and the default image.
That means these (the packages listed above) will be unavailable for the period of testing. Can you live without them for a few days (maybe less)?
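Before stripping the router down, a configuration backup makes the rollback painless. sysupgrade's built-in backup covers /etc/config plus anything listed in /etc/sysupgrade.conf; user@host below is a placeholder for wherever you keep backups (remember /tmp is RAM and won't survive a reboot):
root@router:~# sysupgrade -b /tmp/backup-router-$(date +%F).tar.gz
root@router:~# scp /tmp/backup-router-*.tar.gz user@host: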
Additional packages removed and router rebooted.
root@router:~# opkg remove 6in4 acme acme-acmesh acme-acmesh-dnsapi ca-certificates ddns-scripts ddns-scripts-services haproxy luci-app-acme luci-app-ddns luci-app-uhttpd openssl-util rsync sed socat terminfo wget-ssl
...
root@router:~# opkg remove 6in4 acme acme-acmesh acme-acmesh-dnsapi ca-certificates ddns-scripts ddns-scripts-services haproxy luci-app-acme luci-app-ddns luci-app-uhttpd openssl-util rsync sed socat terminfo wget-ssl
...
root@router:~# opkg remove --force-removal-of-dependent-packages terminfo
...
root@router:~# reboot
Another very short interval before the next WAN drop. I rebooted to give router another run without the additional packages.
root@router:~# uptime
15:03:29 up 42 min, load average: 1.93, 0.93, 0.36
root@router:~# reboot
I'll check to see if my large pile of power bricks includes one which can be swapped with the original supply.
Another WAN drop, again unusually soon after the last.
Since this seems to convincingly eliminate the additional packages as a cause or likely contributing factor, I've reinstalled them, powered off, unplugged, deciphered the barely legible power spec on the label, failed to find a replacement for this 12 V 4.2 A supply, replugged, and powered back on.
Wow. That thing is power hungry. I even checked the TP-Link official specs to verify that, and sure enough the device does require 4 A.
I would still consider that a power supply issue could be at the root here, but given that the device is consuming as much as 48W, it wouldn't surprise me if some of the internal components in the power circuitry are at fault. Specifically the capacitors tend to be temperature sensitive -- high temps will dramatically shorten the lifespan of electrolytic caps. That device is dumping a lot of heat. If you have the opportunity to open it up and inspect, it might be a good idea.
There is only one power cap that I can see in this photo, and it looks like an aluminum-can style, so unless it is leaking or severely bulging, a failure is often harder to spot visually than with other types of through-hole caps. But if you have the experience and tools, you could replace that cap proactively and see if it solves the issue (it appears to be through-hole).
No further WAN drops since that last one a couple of hours ago.
It's certainly a possibility, though it is only slightly warm and has always been very well ventilated - resting on a wire shelf away from walls in a large room.
This $69.99 device has been in service since February 2018, and this is the first real problem it has ever experienced. Nevertheless, the problem started immediately after the upgrade and reconfiguration, so a coincident subtle hardware failure seems unlikely. While I could test this by installing 23.05.5 and restoring the config, I would much rather diagnose the problem and save that option as a limited-time contingency while planning replacement device(s).
Your continued attention to this is much appreciated.