I have a NAS (A) that pulls incremental backups from another NAS (B) at another location, using rsync/rsnapshot over ssh.
NAS A <=> Router A <=> Internet <=> Router B <=> Router C <=> NAS B
Routers A and C run OpenWrt; router B is a Fritzbox on stock firmware. Both NASes run Debian.
Both router B and router C have port forwards in place so that NAS B is reachable from the outside.
Recently the backup disk in NAS A died. I replaced it and had to pull a full backup of around 90 GB, which took forever. After a few days I decided to 'prefetch' the data over a plain ssh pipe:
ssh user@NASB "tar -czC /path/to/data /path/to/data/*" | tar -xz
That was also slow, around 80 kB/s, and I couldn't pin down where the bottleneck was.
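My only hunch so far is window sizing: a single TCP stream (or an SSH channel, which has its own flow control) can't move faster than window/RTT. A quick back-of-envelope; the 50 ms RTT and 64 KiB window below are made-up illustrative numbers, not values I measured on my link:

```python
# Back-of-envelope: a single windowed stream (TCP, or an SSH channel) can
# move at most one window of data per round trip, so its throughput is
# bounded by window_size / RTT.  All numbers here are hypothetical.

def max_throughput(window_bytes: int, rtt_s: float) -> float:
    """Upper bound in bytes/s for one stream limited to one window per RTT."""
    return window_bytes / rtt_s

def implied_window(rate_bytes_per_s: float, rtt_s: float) -> float:
    """Effective window that would explain a given observed rate."""
    return rate_bytes_per_s * rtt_s

# A 64 KiB window over a 50 ms RTT link:
print(f"{max_throughput(64 * 1024, 0.050) / 1024:.0f} KiB/s")  # -> 1280 KiB/s

# Effective window that would explain my observed ~80 kB/s at 50 ms RTT:
print(f"{implied_window(80_000, 0.050):.0f} bytes")  # -> 4000 bytes
```

But rsync still runs over ssh inside the WireGuard tunnel, so I'm not sure this alone explains the difference, which is why I'm asking.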
Then I created a WireGuard tunnel from NAS A to router C and restarted the backup. This time I got around 5 MB/s, which is the most NAS B can deliver over ssh (its CPU sits at 100%).
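For reference, the tunnel is essentially a stock point-to-point WireGuard setup; all keys, addresses, hostnames, and subnets below are placeholders, not my real config:

```ini
# /etc/wireguard/wg0.conf on NAS A -- placeholder values throughout
[Interface]
PrivateKey = <NAS-A-private-key>
Address = 10.0.0.2/24

[Peer]
# Router C (OpenWrt) acting as the WireGuard responder
PublicKey = <router-C-public-key>
Endpoint = routerC.example.net:51820
# Tunnel subnet plus NAS B's LAN behind router C
AllowedIPs = 10.0.0.0/24, 192.168.1.0/24
PersistentKeepalive = 25
```

I brought it up with `wg-quick up wg0` and pointed rsnapshot at NAS B's LAN address through the tunnel.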
What causes this enormous difference in throughput?