Tailscale using 130% of memory

I've tried to update it
tailscale update

and of course rebooted a few times, but it's pegged at this 130%

I've also tried
tailscale down
and it replies: Tailscale was already stopped.
but the process stays (even if I kill/terminate it)

My internet and Samba were slow to load today, so I investigated and found this. However, I'm not sure how long it's been going on. After a few more reboots the process is still at 130%, but things are loading at normal speeds.

Please show

ubus call system board

from your device


root@skittles:~# ubus call system board
{
        "kernel": "6.6.86",
        "hostname": "skittles",
        "system": "ARMv8 Processor rev 4",
        "model": "GL.iNet GL-MT6000",
        "board_name": "glinet,gl-mt6000",
        "rootfs_type": "squashfs",
        "release": {
                "distribution": "OpenWrt",
                "version": "24.10.1",
                "revision": "r28597-0425664679",
                "target": "mediatek/filogic",
                "description": "OpenWrt 24.10.1 r28597-0425664679",
                "builddate": "1744562312"
        }
}

That is memory overcommitment at its finest. Check pa aww in SSH for the "res"ident memory usage; if that seems excessive, add zram-swap so there is compressed swap to fall back on before out-of-memory becomes imminent.

What package/software is "pa"?

root@skittles:~# pa
-ash: pa: not found

I looked in the Software section too, but there's nothing called just "pa".

ps axww
sorry, big fingers and a small mobile keyboard

BusyBox's built-in ps doesn't support those options.
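
If you only want the resident number without installing anything, something along these lines should work with the stock BusyBox tools (tailscaled is the process name shown in the logs above):

ps w | grep [t]ailscaled
grep -E 'VmSize|VmRSS' /proc/$(pidof tailscaled)/status

BusyBox ps only prints VSZ (virtual size), which is the inflated figure; VmRSS in /proc is what the process actually holds in RAM.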

I'd try htop.
In most cases, htop isn't installed, so you'd have to install it first.
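
On OpenWrt that's just the usual opkg route (assuming the device can reach the package feeds):

opkg update
opkg install htop
htop

In htop, sort by the RES column (F6) to see the actual resident memory rather than the virtual size.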


I guess ps or htop really just showcase the issue, though, rather than resolving it. It kinda sucks that Tailscale doesn't have a forum for Q&A; support is likely only for the paid version.

Found this in the logs:

Wed Jun 4 15:08:01 2025 daemon.err tailscaled[2449]: 2025/06/04 15:08:01 health(warnable=no-derp-connection): error: Tailscale could not connect to the 'Seattle' relay server. Your Internet connection might be down, or the server might be temporarily unavailable.

Wed Jun 4 15:08:51 2025 daemon.err tailscaled[2449]: 2025/06/04 15:08:51 health(warnable=no-derp-connection): ok

Such BS: Tailscale's site says to post on Stack Overflow and tag them, but my question was closed there.
Submitted a ticket to Tailscale; we'll see how that goes.
From the error, the problem seems straightforward, but I don't know where to configure the DERP server or what to replace it with (it seems like it's all handled internally by their client).
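
For what it's worth, the client does have a couple of built-in subcommands that at least show which DERP region it picked and whether it is reachable, without configuring anything:

tailscale netcheck
tailscale status

netcheck lists the DERP regions with their latencies (the nearest one is what the client prefers), and status shows whether each peer is talking directly or via a relay.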


I also see an unusually high Memory usage number for Tailscale, but I don't experience any issues... Could this be a bug?

It really uses only tens of MB (somewhere in the 20-60 MB range); the rest is just vm.overcommit_memory pre-reserving some backing for virtual mappings.

Add zram-swap; if it ends up swapping nothing, then nothing needs to be done to fix it.
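
If you want to try that on OpenWrt, the install itself is roughly (package names as found in the standard feeds):

opkg update
opkg install zram-swap
reboot

kmod-zram gets pulled in as a dependency, and after the reboot a compressed swap device should be active.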

Noted, but it says 323%, which looks like a wrong number being reported. I don't understand how that number can be above 100%.

Note: USG-3P has 512 MB RAM and 4 GB Flash.

I installed zram-swap, which automatically installed kmod-zram, and they did nothing to reduce the reported 323% memory usage shown on the LuCI Processes page.

I also installed htop, and it reported approx. 90 MB of memory usage, while the LuCI Processes page continued to report 323%.

You need to check on the first Status page whether swap is ever used; that would be the real sign of danger.
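
You can check that from the shell as well as from the status page:

free
cat /proc/swaps

A Swap total above zero in free confirms zram is active, and the Used column in /proc/swaps staying at 0 means nothing has ever actually been swapped out.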

Just tested again and "Swap free" sat at 100% and was never used.

Leave it as it is and check again in a week and again in a month. That is just the way Linux accounts for memory that is allocated but never accessed.

If you look at the syslog, do you see the error "Tailscale could not connect to the 'Seattle' relay server"? (Or whatever DERP server your client uses.)

Not seeing that error currently. However, I did have serious issues when IPv6 was enabled but the ISP did not support IPv6, so if your ISP does not support IPv6, disable it.
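
One common way to do that on OpenWrt, assuming the stock interface name wan6, is roughly:

uci set network.wan6.disabled='1'
uci commit network
/etc/init.d/network restart

The interface name is just the default; adjust it if your config uses something else.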

This is mainly caused by Go programs. I have experienced this for ages with dnsproxy (which is also Go), currently reporting 515% on a 256 MB device.

I don't use Go myself, but as far as I know, it overcommits address space on purpose as an optimization for its GC.
Not sure what the appropriate fix for this (if any) would be. Overcommitting address space is by all means fine and shouldn't cause any problems, other than programs that report memory usage by committed address space giving a wrong estimate.
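
If you want to see that accounting on the system side, the kernel exposes how much address space has been promised versus the limit:

grep -E 'CommitLimit|Committed_AS' /proc/meminfo
cat /proc/sys/vm/overcommit_memory

With the default heuristic overcommit (value 0), Committed_AS can sit well above CommitLimit, which is fine as long as the committed pages are never actually touched.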

EDIT: Some discussion by people who actually use the language
