Normal speed for OpenWrt NAS

Hello there,

I have a router running OpenWrt 18.06.1, a GL.iNet B1300, with a USB 3.0 hard drive attached. When I download a file from the USB hard drive, the speed is only about 3 MB/s. That is much lower than I expected.

So, what should the normal speed for this be?

P.S. I SSHed into the router and tested the R/W speed of the USB hard drive with the dd command. The result was about 38 MB/s for writing and 80 MB/s for reading.
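
For reference, a dd test like that can be run over SSH roughly as follows (the path /mnt/sda1 is just an example mount point for the USB drive; adjust to yours). The conv=fdatasync flag makes dd flush to disk, so the write figure reflects the drive rather than RAM caching:

```shell
# Write test: push 256 MB of zeros to the drive, flushing to disk at the
# end so the result is real disk speed, not just page-cache speed.
dd if=/dev/zero of=/mnt/sda1/ddtest.bin bs=1M count=256 conv=fdatasync

# Read test: drop the page cache first (otherwise Linux serves the file
# from RAM), then read the file back, discarding the data.
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/sda1/ddtest.bin of=/dev/null bs=1M

# Clean up the test file afterwards.
rm /mnt/sda1/ddtest.bin
```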

SSH, NFS, SMB, AFP…? Which one are you using for the file transfer? Can you check the speed difference between an SSH file transfer and SMB?

If you are on macOS you will get better speeds with AFP and NFS than with SMB. Even on Windows or Linux, SMB transfers tend to be slower than what a Mac gets with AFP.

It's an SMB issue, not the router.

Though I don't think the device is powerful enough to sustain the NAS function; only occasional sharing.

I am on macOS and tried SFTP and AFP; both give about 2~3 MB/s download speed.

That is too slow, AFAICT. BTW, you can't draw firm conclusions from this kind of test alone.

However, you can try to set up NFS and use it to test. In my case, a Seagate GoFLEX Home running Debian Linux with NFS averages about 70/30 MB/s (R/W). Below is the result of some plain (no encryption) R/W speed tests from a Debian Linux computer to my Seagate GoFLEX Home using the F3 utility from Digirati.

[mazilo@linux:/home/local/PEOPLE/mazilo 388%] ~ f3write /mnt/devel/junk/
Free space: 2.44 TB
Creating file 1.h2w ... OK!                            
Creating file 2.h2w ... 0.05% -- 30.10 MB/s -- 23:38:14^C
2.001u+3.763s=0:40.18e(14.3%) TDSavg=0k+0k+0k max=2244k 80+2467200io 0pf+0sw
[mazilo@linux:/home/local/PEOPLE/mazilo 389%] ~ f3read /mnt/devel/junk/
                  SECTORS      ok/corrupted/changed/overwritten
Validating file 1.h2w ... 2097152/        0/      0/      0
Validating file 2.h2w ...  369712/        0/      0/      0

  Data OK: 1.18 GB (2466864 sectors)
Data LOST: 0.00 Byte (0 sectors)
	       Corrupted: 0.00 Byte (0 sectors)
	Slightly changed: 0.00 Byte (0 sectors)
	     Overwritten: 0.00 Byte (0 sectors)
Average reading speed: 69.73 MB/s
2.715u+1.222s=0:17.36e(22.6%) TDSavg=0k+0k+0k max=2244k 2466928+0io 0pf+0sw
[mazilo@linux:/home/local/PEOPLE/mazilo 390%] ~

Also, install iperf3 on both your NAS and the computer you want to test from. Then you can run a network throughput test as follows:

  1. On your NAS, run iperf3 -s
  2. On your computer, run iperf3 -c <NAS-IP-address>

The above test will show you the network throughput you can achieve between your computer and your NAS. The following shows the output of an iperf3 test from my Debian Linux computer to my Seagate GoFLEX Home.

[root@linux:/home/local/PEOPLE/mazilo 391%] # iperf3 -c 192.168.1.88
Connecting to host 192.168.1.88, port 5201
[  4] local 192.168.1.74 port 43818 connected to 192.168.1.88 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  72.8 MBytes   610 Mbits/sec    0    365 KBytes       
[  4]   1.00-2.00   sec  92.3 MBytes   775 Mbits/sec    0    399 KBytes       
[  4]   2.00-3.00   sec  90.1 MBytes   756 Mbits/sec    0    423 KBytes       
[  4]   3.00-4.00   sec  90.2 MBytes   757 Mbits/sec    0    423 KBytes       
[  4]   4.00-5.00   sec  90.3 MBytes   758 Mbits/sec    0    423 KBytes       
[  4]   5.00-6.00   sec  89.8 MBytes   753 Mbits/sec    0    423 KBytes       
[  4]   6.00-7.00   sec  91.2 MBytes   766 Mbits/sec    0    423 KBytes       
[  4]   7.00-8.00   sec  91.4 MBytes   767 Mbits/sec    0    423 KBytes       
[  4]   8.00-9.00   sec  90.5 MBytes   759 Mbits/sec    0    423 KBytes       
[  4]   9.00-10.00  sec  89.7 MBytes   753 Mbits/sec    0    423 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   888 MBytes   745 Mbits/sec    0             sender
[  4]   0.00-10.00  sec   887 MBytes   744 Mbits/sec                  receiver

iperf Done.
0.032u+1.621s=0:10.18e(16.2%) TDSavg=0k+0k+0k max=2412k 0+0io 1pf+0sw
[root@linux:/home/local/PEOPLE/mazilo 392%] #

As you can see from the above, my read tests only achieve 10% of the network bandwidth. Ideally, I would love to at least double the R/W speeds (particularly the latter). Unfortunately, the gigabit network traffic would then become the bottleneck. I am hoping some experts here can help.

BTW, an hdparm -tT /dev/sda test (see below) on my Seagate GoFLEX Home indicates its HDD is capable of delivering higher throughput.

[root@debian:/root 2%] # hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   522 MB in  2.01 seconds = 260.16 MB/sec
 Timing buffered disk reads: 382 MB in  3.01 seconds = 126.72 MB/sec
0.302u+4.301s=0:13.57e(33.8%) TDSavg=0k+0k+0k max=3896k 798816+0io 5pf+0sw
[root@debian:/root 3%] #

I meant 60%.

Can you run top -d 5 to see if an I/O process is taking up CPU and/or memory?

Perhaps there's an I/O bottleneck.

For a 717 MHz single core CPU - running at 1000 Mbps and full USB 3.0 transfer speed...

Normal speed might be: Slow.

It should be a quad-core CPU. Some users (on other forums) say that 4~6 MB/s is the normal speed.

I stand corrected, it does say quad core.

What does top show?

This is the output of top with an NFS upload and download running:

Mem: 224336K used, 26608K free, 136K shrd, 5820K buff, 149212K cached
CPU:  13% usr   9% sys   0% nic  47% idle  14% io   0% irq  14% sirq
Load average: 2.38 2.66 1.91 2/106 2718
  PID  PPID USER     STAT   VSZ %VSZ %CPU COMMAND
 2007     1 root     S    14648   6%  19% transmission-daemon -g /mnt/transmission -f
 1794     2 root     SW       0   0%   2% [nfsd]
 1793     2 root     SW       0   0%   1% [nfsd]
  182     2 root     SW       0   0%   1% [usb-storage]
    7     2 root     RW       0   0%   0% [ksoftirqd/0]
   33     2 root     SW       0   0%   0% [kswapd0]
   88     2 root     IW       0   0%   0% [kworker/3:2]
 1799     1 root     S     2556   1%   0% /usr/sbin/rpc.mountd -p 32780 -F
 2648  2626 root     R     1080   0%   0% top -d 5
 1792     2 root     SW       0   0%   0% [nfsd]
 1494     1 root     S     1628   1%   0% /usr/sbin/hostapd -s -P /var/run/wifi-phy0.pid -B /va
    5     2 root     IW       0   0%   0% [kworker/u8:0]
   84     2 root     IW       0   0%   0% [kworker/1:1]
  231     2 root     SW       0   0%   0% [jbd2/sda2-8]
   14     2 root     SW       0   0%   0% [ksoftirqd/1]
   24     2 root     SW       0   0%   0% [ksoftirqd/3]
  754     1 root     S     1528   1%   0% /sbin/netifd
 2625  1007 root     S      892   0%   0% /usr/sbin/dropbear -F -P /var/run/dropbear.1.pid -p 2
    8     2 root     IW       0   0%   0% [rcu_sched]
   27     2 root     IW       0   0%   0% [kworker/0:1]
   19     2 root     SW       0   0%   0% [ksoftirqd/2]

Nothing seems wrong.

Try stopping Transmission.


From the output of your top utility, it only shows about 3 nfsd threads. Can you please post the output of nfsstat -m?

  • Yea it is... Transmission is taking up over 20% of your CPU and 6% of memory!
  • And if you're running idle, 14% io is quite high
  • Also, your sirq (software IRQs) is at 14%... again, if you're idle, this is high. This clearly indicates a bottleneck to me!
  • This only gets worse trying to move traffic, NAT, etc.

I agree with @fuller.

I tested with iperf3; the transfer speed from my laptop to the router is only 2.4 MB/s.

And the result of hdparm -tT /dev/sda is

/dev/sda:
 Timing cached reads:   692 MB in  2.00 seconds = 345.54 MB/sec
 Timing buffered disk reads: 272 MB in  3.03 seconds =  89.92 MB/sec

Sorry, but which package contains the nfsstat command? I can't find it.

@lleachii, @fuller,

I tried stopping Transmission. The iperf3 throughput rose from 2.4 to 2.7 MB/s...

I've heard that Transmission is performance-hungry, so is it normal for it to take over 20% of CPU when downloading at about 5 MB/s?

BTW, the Transmission web client gets very slow when Transmission is downloading, but this doesn't happen when it is only uploading.

And was it successful?


Yes. Especially with parallel torrents, it tends to make random writes to the HDD, causing it to respond very slowly.
Hence it would have perfectly explained your situation... :confused:

No. Stopping Transmission only gave a 0.3 MB/s increase in NAS speed.

You've excluded a lot of essential information, so let's try Troubleshooting 101:

  • Filesystem on USB HDD? Is FUSE involved?
  • How are client(s) connected? Wired / Wireless? What does iperf3 show between client and router (both ways)?
  • AFP "is" deprecated, use NFS or SMB if you're on OSX
  • When troubleshooting, you run one thing at a time... not 3+. Try to get NFS and/or Samba (version 4+ highly recommended) going and run only that while testing/troubleshooting
  • How are you testing "download speed" to your client(s)?
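
To answer the "both ways" question above without swapping machines, iperf3's -R flag reverses the test direction over the same connection. A minimal sketch, assuming the router's address is 192.168.1.1 (substitute your own) and that the iperf3 package is installed on the OpenWrt side:

```shell
# On the router (OpenWrt): install and start an iperf3 server
opkg update && opkg install iperf3
iperf3 -s

# On the client: measure client -> router throughput
iperf3 -c 192.168.1.1

# Same setup, reversed direction: router -> client throughput
iperf3 -c 192.168.1.1 -R
```

A large asymmetry between the two directions usually points at the radio or driver rather than the disk.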

Transmission usually pre-allocates files, which puts a high load on I/O, and during that time the UI will feel sluggish. This also happens on beefier devices, so it's normal as far as Transmission goes. It's a design "limitation" of Transmission; I guess the UI polls the I/O part and waits instead of dropping the request, which makes it feel unresponsive.

I'd say expected transfer rates should be around 35-50 MB/s using Samba 4 and USB 3.0 (ext3/4). NFS should be slightly faster, and filesystems such as btrfs are going to slow things down quite a bit.

Keep in mind that the IPQ series should still be considered "experimental" SoCs, as none are supported in mainline, so you might run into platform quirks. I'm not sure how much the support has improved in 4.19, but a few things like USB probably work quite a bit more efficiently.


iperf3 directly tests the link; you can never get filesystem transfers faster than this link speed. I assume this is WiFi. Test your filesystem issue over a wired connection, or if this is already wired, try different network cables!

If this is WiFi, try changing channels to 1, 6, or 11 on 2.4 GHz, and to a non-DFS channel that isn't in use near you on 5 GHz.

That's a pretty low throughput unless the test was done over a WiFi connection. There seems to be a bottleneck somewhere between your computer and your NAS. If you have the F3 utility installed on your NAS, give it a try to see what R/W throughput you can get from your NAS to its HDD. Below is the result of the R/W tests I did using the F3 utility on my NAS.

[debian@debian:/home/local/PEOPLE/debian 5%] ~ f3write /srv/devel/junk/
F3 write 7.1
Copyright (C) 2010 Digirati Internet LTDA.
This is free software; see the source for copying conditions.

Free space: 2.44 TB
Creating file 1.h2w ... OK!                            
Creating file 2.h2w ... 0.07% -- 50.00 MB/s -- 14:11:35^C
5.021u+23.346s=0:38.18e(74.2%) TDSavg=0k+0k+0k max=3604k 856+3633672io 3pf+0sw
[debian@debian:/home/local/PEOPLE/debian 6%] ~ f3read /srv/devel/junk/
F3 read 7.1
Copyright (C) 2010 Digirati Internet LTDA.
This is free software; see the source for copying conditions.

                  SECTORS      ok/corrupted/changed/overwritten
Validating file 1.h2w ...       0/  2097152/      0/      0
Validating file 2.h2w ...       0/  1536156/      0/      0

  Data OK: 0.00 Byte (0 sectors)
Data LOST: 1.73 GB (3633308 sectors)
	       Corrupted: 1.73 GB (3633308 sectors)
	Slightly changed: 0.00 Byte (0 sectors)
	     Overwritten: 0.00 Byte (0 sectors)
Average reading speed: 82.43 MB/s
4.182u+14.698s=0:21.58e(87.4%) TDSavg=0k+0k+0k max=1656k 3633480+0io 4pf+0sw
[debian@debian:/home/local/PEOPLE/debian 7%] ~

On my Seagate GoFLEX Home running Debian, dpkg -S /usr/sbin/nfsstat indicates the nfsstat utility is part of the nfs-common package.