I have multiple boxes with very slow uplinks (5 Mbit/s), and it's easy for WireGuard to overbuffer internally, adding 600+ ms of delay. Applying cake to the wg0 interface, with a bandwidth limit below the link rate and an rtt setting close to your VPN's RTT, is helping me a lot, particularly in keeping ssh functional. Example from my VPN today (which has about a 25 ms RTT):
tc qdisc add dev wg0 root cake bandwidth 4500kbit rtt 25ms ack-filter
                 Bulk  Best Effort        Voice
  thresh    281248bit     4500Kbit     1125Kbit
  target       60.6ms        3.8ms       15.1ms
  interval    121.2ms       27.5ms       38.9ms
  pk_delay        0us         34us         62us
  av_delay        0us         16us         22us
  sp_delay        0us         15us         16us
  backlog          0b           0b           0b
  pkts              0        92573         2606
  bytes             0     82345028       615730
  way_inds          0            0            0
  way_miss          0           36            2
  way_cols          0            0            0
  drops             0            1            0
  marks             0         3107            0
  ack_drop          0          192            0
  sp_flows          0            0            0
  bk_flows          0            1            1
  un_flows          0            0            0
  max_len           0         2840         2840
  quantum         300          300          300
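The table above is cake's per-tin accounting from `tc -s qdisc show`. A minimal sketch of inspecting and retuning the qdisc on the fly (assumes root and an iproute2 build with cake support; the 4000kbit figure is just an illustrative lower rate, not from the post):

```shell
# Inspect cake's per-tin statistics on the tunnel interface
# (this is where the table above comes from).
tc -s qdisc show dev wg0

# Retune in place, e.g. drop the rate further if the uplink degrades;
# "change" adjusts parameters without tearing the qdisc down.
tc qdisc change dev wg0 root cake bandwidth 4000kbit rtt 25ms ack-filter

# Remove the shaper entirely.
tc qdisc del dev wg0 root
```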
(Yes, there is also a cake instance running on the main uplink with more relaxed settings, and that one drops or marks packets too. WireGuard multiplexes everything onto a single UDP port, so to the outer shaper the tunnel looks like one flow, and we drop/mark there as well.)
root@sparrow:~# tc -s qdisc show dev eth0.2
qdisc cake 8009: root refcnt 2 bandwidth 10Mbit diffserv3 triple-isolate nat nowash ack-filter split-gso rtt 100.0ms noatm overhead 18 mpu 64
Sent 10893528959 bytes 21556377 pkt (dropped 68330, overlimits 13208535 requeues 0)
backlog 0b 0p requeues 0
memory used: 687456b of 4Mb
capacity estimate: 10Mbit
min/max network layer size: 28 / 1500
min/max overhead-adjusted size: 64 / 1518
average network hdr offset: 14
                 Bulk  Best Effort        Voice
  thresh      625Kbit       10Mbit     2500Kbit
  target       29.1ms        5.0ms        7.3ms
  interval    124.1ms      100.0ms      102.3ms
  pk_delay        0us        1.8ms        248us
  av_delay        0us        263us         26us
  sp_delay        0us         38us         13us
  backlog          0b           0b           0b
  pkts              0     21301229       323478
  bytes             0  10884807939     30810161
  way_inds          0       261738          998
  way_miss          0       377699          747
  way_cols          0            0            0
  drops             0        12961            0
  marks             0         2401            0
  ack_drop          0        55369            0
  sp_flows          0            3            1
  bk_flows          0            1            0
  un_flows          0            0            0
  max_len           0        23744         1374
  quantum         300          305          300
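The options printed in the eth0.2 qdisc line can be spelled out explicitly at setup time. A sketch of applying both shapers, assuming the interface names and rates shown above (overhead 18 / mpu 64 match an Ethernet-with-VLAN link layer; adjust for yours):

```shell
# Outer shaper on the physical uplink: relaxed 100 ms rtt, NAT-aware
# host/flow isolation, and link-layer overhead accounting, matching
# the options shown in the qdisc line above.
tc qdisc replace dev eth0.2 root cake bandwidth 10Mbit diffserv3 \
    triple-isolate nat nowash ack-filter split-gso rtt 100ms \
    noatm overhead 18 mpu 64

# Inner shaper on the WireGuard interface: tighter rtt, rate kept
# below what the tunnel can actually carry so queuing happens in cake
# rather than inside WireGuard.
tc qdisc replace dev wg0 root cake bandwidth 4500kbit rtt 25ms ack-filter
```

Using `replace` rather than `add` makes the script idempotent, so it can run safely at every boot or link flap.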