I'd add gstatic.com to the allowed domains and retest. Given the intermittent nature of the problem, maybe it has something to do with the upstream DNS? If not, are there any out-of-memory crashes in the system log?
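Quick checks in the CLI (assuming the usual adblock-fast config section name and the allowed_domain option; adjust if yours differ):
nslookup gstatic.com   # does the upstream DNS resolve it quickly and consistently?
logread | grep -i -E 'out of memory|oom'   # any OOM kills in the system log?
uci add_list adblock-fast.config.allowed_domain='gstatic.com'
uci commit adblock-fast
service adblock-fast restart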
Try clearing the browser cache. Having said that, I don't know what modifications ImmortalWrt has over vanilla OpenWrt, nor am I able to test adblock-fast on ImmortalWrt.
If you can run the following commands in CLI and see if any of them take unreasonably long to respond, that would be the first investigation step:
My guess would be that something has changed with the user the procd init scripts run as and/or the default jail for procd scripts in the recent snapshot, and the init script no longer has access to /tmp/ and /var/run/.
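If someone wants to check that theory, something like this (assuming the standard procd helpers are what the init script uses) should show whether a jail or a non-root user is being requested, and whether /tmp is still writable:
grep -n -E 'procd_add_jail|procd_set_param user' /etc/init.d/adblock-fast
ls -ld /tmp /var/run   # /var/run is normally a symlink into /tmp
touch /tmp/adblock-fast_write_test && rm /tmp/adblock-fast_write_test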
I don't have time to investigate this at least until the end of this week, if not the next. If someone can figure out what's happening and let me know, I might be able to implement the fix quickly though.
If you can run the following commands in CLI and see if any of them take unreasonably long to respond, that would be the first investigation step
I added this blocklist https://raw.githubusercontent.com/Cats-Team/AdRules/main/dns.txt and then getFileUrlFilesizes takes noticeably longer to respond. Does that mean a user-added rule is checked over the Internet every time, and when raw.githubusercontent.com is inaccessible for me, it times out?
No, that shouldn't affect the functionality, but I was under the impression that jsdelivr was not blocked in China, could you please clarify?
In fact, github/openwrt/jsdelivr are not literally blocked, just unregistered in China. I think some of their servers are blocked while most are still reachable. They are usually accessible, and if not, we can just wait a minute or two and retry, and the DNS may return another, unblocked server. Google/Facebook/Twitter are definitely blocked, so they are totally unreachable except via VPN.
Thank you for bringing it up and elaborating on the issue.
There's code which tries to determine the file size of all added block-lists by downloading the first 512 bytes or so, if I remember correctly. The sizes are then displayed in the WebUI so that users can decide which block-lists they can enable without running out of memory on the router.
Obviously, the assumption was that you're only adding block-lists to the config which are accessible to you. I'll try to play with that code to see what I can improve so it fails faster for the links which are inaccessible.
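For illustration only (this is not the actual package code), the fail-fast idea boils down to asking for the headers with tight timeouts and reading Content-Length where the server reports it:
# hypothetical probe: HEAD request with short timeouts so dead links give up quickly
curl -s -I --connect-timeout 2 --max-time 5 'https://cdn.jsdelivr.net/gh/hoshsadiq/adblock-nocoin-list/hosts.txt' | grep -i '^content-length'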
Thanks for the reply and the follow-up work.
I'd suggest modifying the process flow to let the web UI load first and only then determine and refresh the file size of each blocklist.
There's a code which tries to determine the file size of all added block-lists by downloading first 512 bytes or so if I remember correctly.
Does "all added block-lists" mean only the blocklists added by the user himself, or both package-provided and user-added blocklists? When I say the Cats-Team rule makes getFileUrlFilesizes noticeably slower to respond, I mean that even after that one and only rule has been added and its size determined, getFileUrlFilesizes still takes longer to respond every time it's called. So it seems user-added rules are always treated differently from the package-provided ones, which seems logically weird. I don't think it's a result of an accessibility problem, because there are other package-provided GitHub raw rules, such as the AdguardTeam ones.
No, if the size is known, it's not being refreshed on getFileUrlFilesizes. Realistically, unless you're adding lists that are unavailable to you, it all works very fast. You're welcome to contribute any code you consider an improvement to the original source repo.
Thanks for spotting and reporting this! Version 1.1.2-2 (in my repository) contains the fix for the file size not being stored in config. You'll need to update both the principal package and the luci app.
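If you're on opkg, something like this should pull both in (package names as they appear in my repo):
opkg update
opkg upgrade adblock-fast luci-app-adblock-fast
uci show adblock-fast | grep -i size   # the stored sizes should now show up here (option name may differ)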
I've had another theory: maybe /tmp has just gotten smaller (due to a bigger kernel or something). @odhiambo @phinn, can you try with only the smallest list and see if that works?
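To compare against a known-good snapshot, the tmpfs size and the overall memory picture should be enough:
df -h /tmp   # how big tmpfs actually is on this build
free         # overall memory picture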
Can you help me run that? It's not doing anything, I'm just getting errors like "packageName: is read only". I tried to run it as an executable shell script too and got nothing:
root@OpenWrt:/# . /etc/init.d/adblock-fast
"
touch "$R_TMP"
echo 11111 > "$R_TMP"
ls -la "$R_TMP"
. /etc/rc.common
load_validate_config load_environment
echo "$dl_command"
-ash: readonly: line 35: packageName: is read only
root@OpenWrt:/# R_TMP="$(mktemp -u -q -t ${packageName}_tmp.XXXXXXXX)"
root@OpenWrt:/# touch "$R_TMP"
root@OpenWrt:/# echo 11111 > "$R_TMP"
root@OpenWrt:/# ls -la "$R_TMP"
-rw-r--r-- 1 root root 6 Jun 16 17:35 /tmp/adblock-fast_tmp.XXeDeHbj
root@OpenWrt:/# . /etc/rc.common
-ash: .: line 123: : not found
root@OpenWrt:/# load_validate_config load_environment
-ash: uci_load_validate: not found
root@OpenWrt:/# echo "$dl_command"
root@OpenWrt:/#
You only have to do the paste in the CLI once; if you do it more than once you'll get that error.
I remember seeing some issues with mbedtls errors when trying to access https in the snapshots; maybe that's the issue. The solution I've seen was to rebuild curl with the OpenSSL dependency instead.
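If you're building your own image, the relevant build options should be something like this (from memory, double-check in menuconfig under Libraries -> libcurl -> SSL library):
# CONFIG_LIBCURL_MBEDTLS is not set
CONFIG_LIBCURL_OPENSSL=y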
If you run something like curl --insecure https://cdn.jsdelivr.net/gh/hoshsadiq/adblock-nocoin-list/hosts.txt does it show the output or does it produce an error?
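If it errors out, re-running it with -v should show whether it's the TLS handshake that fails:
curl -v 'https://cdn.jsdelivr.net/gh/hoshsadiq/adblock-nocoin-list/hosts.txt' 2>&1 | grep -i -E 'ssl|tls|error'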
Yes it is; this is something that must have happened in main in the last couple of weeks, since I had a good working snapshot recently. But yes, you're right, I might have to roll back to 23.05.3 if I can bear going back to kernel 5.15.
Thanks for helping, I posted some results to that bug report too.