That looks too slow, AFAICT, but you can't draw firm conclusions from this kind of test alone.
However, you can try to set up NFS and use it to test. In my case, with a Seagate GoFLEX Home serving a Debian Linux client over NFS, I get average speeds of about 70/30 MB/s (R/W). Below are the results of some plain (no encryption) R/W speed tests from my Debian Linux computer to the Seagate GoFLEX Home, using the F3 utility from Digirati.
[mazilo@linux:/home/local/PEOPLE/mazilo 388%] ~ f3write /mnt/devel/junk/
Free space: 2.44 TB
Creating file 1.h2w ... OK!
Creating file 2.h2w ... 0.05% -- 30.10 MB/s -- 23:38:14^C
2.001u+3.763s=0:40.18e(14.3%) TDSavg=0k+0k+0k max=2244k 80+2467200io 0pf+0sw
[mazilo@linux:/home/local/PEOPLE/mazilo 389%] ~ f3read /mnt/devel/junk/
SECTORS ok/corrupted/changed/overwritten
Validating file 1.h2w ... 2097152/ 0/ 0/ 0
Validating file 2.h2w ... 369712/ 0/ 0/ 0
Data OK: 1.18 GB (2466864 sectors)
Data LOST: 0.00 Byte (0 sectors)
Corrupted: 0.00 Byte (0 sectors)
Slightly changed: 0.00 Byte (0 sectors)
Overwritten: 0.00 Byte (0 sectors)
Average reading speed: 69.73 MB/s
2.715u+1.222s=0:17.36e(22.6%) TDSavg=0k+0k+0k max=2244k 2466928+0io 0pf+0sw
[mazilo@linux:/home/local/PEOPLE/mazilo 390%] ~
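If you don't have F3 handy, a plain dd run gives ballpark figures for the same thing. This is a rough sketch: TESTDIR defaults to /tmp here, so point it at the NFS mount (e.g. /mnt/devel/junk above) to test the share.

```shell
TESTDIR=${TESTDIR:-/tmp}   # set TESTDIR=/mnt/devel/junk to test the NFS mount
# Write 64 MB of zeros; conv=fdatasync makes dd wait for the data to be
# flushed, so the reported speed includes the actual write to the server.
WRITE=$(dd if=/dev/zero of="$TESTDIR/ddtest.bin" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1)
echo "write: $WRITE"
# Read it back (note: a re-read may be served from the client's page cache,
# which inflates the number unless you drop caches first).
READ=$(dd if="$TESTDIR/ddtest.bin" of=/dev/null bs=1M 2>&1 | tail -n 1)
echo "read:  $READ"
rm -f "$TESTDIR/ddtest.bin"
```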
Also, install iperf3 on both your NAS and the computer you want to test from. Then you can measure the raw network throughput as follows:
- On your NAS, run iperf3 -s
- On your computer, run iperf3 -c <NAS-IP-address>
This test shows the network throughput you can achieve between your computer and your NAS (add -R on the client to measure the reverse direction). The following is the output of an iperf3 test from my Debian Linux computer to my Seagate GoFLEX Home.
[root@linux:/home/local/PEOPLE/mazilo 391%] # iperf3 -c 192.168.1.88
Connecting to host 192.168.1.88, port 5201
[ 4] local 192.168.1.74 port 43818 connected to 192.168.1.88 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 72.8 MBytes 610 Mbits/sec 0 365 KBytes
[ 4] 1.00-2.00 sec 92.3 MBytes 775 Mbits/sec 0 399 KBytes
[ 4] 2.00-3.00 sec 90.1 MBytes 756 Mbits/sec 0 423 KBytes
[ 4] 3.00-4.00 sec 90.2 MBytes 757 Mbits/sec 0 423 KBytes
[ 4] 4.00-5.00 sec 90.3 MBytes 758 Mbits/sec 0 423 KBytes
[ 4] 5.00-6.00 sec 89.8 MBytes 753 Mbits/sec 0 423 KBytes
[ 4] 6.00-7.00 sec 91.2 MBytes 766 Mbits/sec 0 423 KBytes
[ 4] 7.00-8.00 sec 91.4 MBytes 767 Mbits/sec 0 423 KBytes
[ 4] 8.00-9.00 sec 90.5 MBytes 759 Mbits/sec 0 423 KBytes
[ 4] 9.00-10.00 sec 89.7 MBytes 753 Mbits/sec 0 423 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 888 MBytes 745 Mbits/sec 0 sender
[ 4] 0.00-10.00 sec 887 MBytes 744 Mbits/sec receiver
iperf Done.
0.032u+1.621s=0:10.18e(16.2%) TDSavg=0k+0k+0k max=2412k 0+0io 1pf+0sw
[root@linux:/home/local/PEOPLE/mazilo 392%] #
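Converting units makes the comparison with the f3 results easier. A rough sketch using awk for the arithmetic, with the figures taken from the runs above (745 Mbit/s from iperf3, 69.73 and 30.10 MB/s from f3):

```shell
# Convert iperf3's 745 Mbit/s to MB/s, then express the measured NFS
# speeds as a fraction of that network ceiling.
NET_MBPS=$(awk 'BEGIN { printf "%.1f", 745 / 8 }')
echo "network ceiling: ${NET_MBPS} MB/s"                          # 93.1 MB/s
awk 'BEGIN { printf "NFS read:  %.0f%% of ceiling\n", 69.73 / 93.1 * 100 }'  # 75%
awk 'BEGIN { printf "NFS write: %.0f%% of ceiling\n", 30.10 / 93.1 * 100 }'  # 32%
```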
As you can see from the above, my NFS speeds fall short of the network bandwidth: 745 Mbit/s is roughly 93 MB/s, so the ~70 MB/s read is about 75% of that ceiling and the ~30 MB/s write only about 32%. Ideally, I would love to at least double the R/W speeds (particularly the latter); unfortunately, for reads the gigabit network itself would then become the bottleneck. I am hoping some experts here can help.
BTW, an hdparm -tT /dev/sda test (see below) on my Seagate GoFLEX Home indicates that its HDD is capable of delivering higher read throughput than I am seeing over NFS.
[root@debian:/root 2%] # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 522 MB in 2.01 seconds = 260.16 MB/sec
Timing buffered disk reads: 382 MB in 3.01 seconds = 126.72 MB/sec
0.302u+4.301s=0:13.57e(33.8%) TDSavg=0k+0k+0k max=3896k 798816+0io 5pf+0sw
[root@debian:/root 3%] #
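Since the disk itself isn't the limit, one knob worth checking before blaming the network is the client-side NFS mount options; larger rsize/wsize values often help throughput. A sketch, where 192.168.1.88 is the NAS address from the tests above but the export path /export is hypothetical; substitute your NAS's actual export:

```shell
# Remount the share with explicit 1 MB read/write block sizes (NFSv3).
mount -t nfs -o rw,hard,vers=3,rsize=1048576,wsize=1048576 \
    192.168.1.88:/export /mnt/devel/junk
# Verify which options were actually negotiated with the server:
nfsstat -m
```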