OpenWrt Forum Archive

Topic: Guest networks and NFS

The content of this topic has been archived on 2 May 2018. There are no obvious gaps in this topic, but there may still be some posts missing at the end.

Hi, I'm setting up OpenWrt on my Archer C7. I've tested two different methods of file sharing, and found that NFS is up to 5 times faster than sshfs in some cases. However, I'm concerned about the lack of security in NFS. Occasionally others use my network, and I'd prefer not to let them access my shares. I'm not 100% sure how guest networks work and whether they could solve my problem.

  • My router currently broadcasts two networks, one from the 5 GHz radio and one from the 2.4 GHz radio. Is the purpose of guest networks to allow simultaneous broadcast? i.e. can I have two additional networks (with distinct SSIDs and passwords)?

  • Can I allow access to my NFS shares only from my main pair of networks, denying access from the two guest networks?

Thank you in advance.

Pseudorellia wrote:

I've tested two different methods of file sharing, and found that NFS is up to 5 times faster than sshfs in some cases.

I sure would appreciate it if you could kindly show the R/W throughput of your NFS storage.

However, I'm concerned about the lack of security in NFS. Occasionally others use my network, and I'd prefer not to let them access my shares.

In your /etc/exports file, you can specify which computers are allowed to access which NFS-exported partition. For instance, the following will allow only the IP address 10.0.0.100 to see and mount the exported NFS drive (/opt). Other computers may still be able to mount the export, but will only see an empty directory.

root@lede:~# cat /etc/exports 
/opt    10.0.0.100(rw,insecure,nohide,no_subtree_check,sync)
root@lede:~#

If your router supports multiple local subnets, you can specify which subnet's computers are allowed to access your NFS drive:

root@lede:~# cat /etc/exports 
/opt    10.0.0.0/255.0.0.0(rw,insecure,nohide,no_subtree_check,sync)
root@lede:~#
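After editing /etc/exports, something like the following should apply and verify the change. I believe exportfs and showmount come with the NFS server packages; the output shown is what I would expect from the example above:

root@lede:~# exportfs -ra
root@lede:~# showmount -e localhost
Export list for localhost:
/opt 10.0.0.0/255.0.0.0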

Thank you for the reply.

mazilo wrote:
Pseudorellia wrote:

I've tested two different methods of file sharing, and found that NFS is up to 5 times faster than sshfs in some cases.

I sure would appreciate it if you could kindly show the R/W throughput of your NFS storage.

Sure. It's not exactly comprehensive, but it gives the general idea. I created a 50 MB random file (with `dd if=/dev/urandom of=/tmp/randomness bs=1M count=50`), then mounted the router's drive locally with either sshfs or NFS, both with default options. I timed the transfer (in seconds) with `time rsync …`.
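In full, the procedure was something like this (the router address and mount point here are hypothetical placeholders, not my exact commands):

dd if=/dev/urandom of=/tmp/randomness bs=1M count=50
sshfs root@192.168.1.1:/mnt/usb /mnt/router
# or: mount -t nfs 192.168.1.1:/mnt/usb /mnt/router
time rsync /tmp/randomness /mnt/router/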

On my laptop connecting on 5 GHz 802.11n:

  • sshfs upload: 28.079, 27.133

  • NFS upload: 12.238

On my wife's laptop with 802.11ac:

  • sshfs download: 16.766

  • NFS download: 7.409

  • sshfs upload: 21.814, 20.835

  • NFS upload: 4.827

However, I'm concerned about the lack of security in NFS. Occasionally others use my network, and I'd prefer not to let them access my shares.

In your /etc/exports file, you can specify which computers are allowed to access which NFS-exported partition. For instance, the following will allow only the IP address 10.0.0.100 to see and mount the exported NFS drive (/opt). Other computers may still be able to mount the export, but will only see an empty directory.

root@lede:~# cat /etc/exports 
/opt    10.0.0.100(rw,insecure,nohide,no_subtree_check,sync)
root@lede:~#

Yes, I did see configurations for IP addresses, but I didn't feel like that was very secure. It seems that any computer on the network can simply run `showmount -e <server IP>`, and get a list of shares and IP addresses. It's then trivial to spoof the IP address and gain access.
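For example (addresses hypothetical):

$ showmount -e 10.0.0.1
Export list for 10.0.0.1:
/opt 10.0.0.100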

If your router supports multiple local subnets, you can specify which subnet's computers are allowed to access your NFS drive:

root@lede:~# cat /etc/exports 
/opt    10.0.0.0/255.0.0.0(rw,insecure,nohide,no_subtree_check,sync)
root@lede:~#

Hmmm… I'll have to investigate this further. Currently my OpenWrt Wi-Fi router is in DHCP/bridge mode, so I guess it's not the one creating subnets.

Inferring from your answer, though: are guest networks the wrong way to go about this? And can't they increase the number of SSID and password pairs produced by the router?

Pseudorellia wrote:

Thank you for the reply.

mazilo wrote:
Pseudorellia wrote:

I've tested two different methods of file sharing, and found that NFS is up to 5 times faster than sshfs in some cases.

I sure would appreciate it if you could kindly show the R/W throughput of your NFS storage.

Sure. It's not exactly comprehensive, but it gives the general idea. I created a 50 MB random file (with `dd if=/dev/urandom of=/tmp/randomness bs=1M count=50`), then mounted the router's drive locally with either sshfs or NFS, both with default options. I timed the transfer (in seconds) with `time rsync …`.

On my laptop connecting on 5 GHz 802.11n:

  • sshfs upload: 28.079, 27.133

  • NFS upload: 12.238

On my wife's laptop with 802.11ac:

  • sshfs download: 16.766

  • NFS download: 7.409

  • sshfs upload: 21.814, 20.835

  • NFS upload: 4.827

I gather your NFS storage is mounted through a USB port, right? TBH, I don't know how fast your NFS storage is in terms of R/W in MB/s. When I first read your post a few days ago, I did some searches on Google and found a fairly recent post showing how the OP performed the measurements. Reading further in that thread, I found the OP uses the dd utility (similar to what you did, except directly against the NFS storage) to show the R/W throughput in MB/s. Perhaps you can give that a try and post the results here.

However, I'm concerned about the lack of security in NFS. Occasionally others use my network, and I'd prefer not to let them access my shares.

In your /etc/exports file, you can specify which computers are allowed to access which NFS-exported partition. For instance, the following will allow only the IP address 10.0.0.100 to see and mount the exported NFS drive (/opt). Other computers may still be able to mount the export, but will only see an empty directory.

root@lede:~# cat /etc/exports 
/opt    10.0.0.100(rw,insecure,nohide,no_subtree_check,sync)
root@lede:~#

Yes, I did see configurations for IP addresses, but I didn't feel like that was very secure. It seems that any computer on the network can simply run `showmount -e <server IP>`, and get a list of shares and IP addresses. It's then trivial to spoof the IP address and gain access.

Yes, and this is also true for Samba and perhaps all NAS systems out there, whatever software they use. Otherwise, a client wouldn't be able to see whether there is any network storage (i.e. an NFS drive, etc.) to mount. But most importantly, will it show the directory listings and/or contents of the NFS storage?

If your router supports multiple local subnets, you can specify which subnet's computers are allowed to access your NFS drive:

root@lede:~# cat /etc/exports 
/opt    10.0.0.0/255.0.0.0(rw,insecure,nohide,no_subtree_check,sync)
root@lede:~#

Hmmm… I'll have to investigate this further. Currently my OpenWrt Wi-Fi router is in DHCP/bridge mode, so I guess it's not the one creating subnets.

Inferring from your answer, though: are guest networks the wrong way to go about this? And can't they increase the number of SSID and password pairs produced by the router?

Honestly, I would not know the answer, unfortunately. :(

mazilo wrote:

I gathered your NFS storage is mounted through a USB port, right?

Correct.

Perhaps, you can give that a try and post the result here.

To be honest, I wasn't so much interested in raw R/W values. The reason I was investigating wireless storage is to back up laptops over the network. Currently, I just connect a hard drive via USB directly to the laptop, which takes 15 minutes to complete. I've previously tried backing up remotely to our home server, but the throughput of my previous 802.11n router meant a backup took ~4 hours. Since upgrading to 802.11ac (and OpenWrt), I was first trying to optimise which protocol to use. I was then going to test how quick a backup would be. However, if you are curious, or you think it might be useful to others, I'm happy to follow the instructions in the link. Just let me know.

Yes, and this is also true for Samba and perhaps all NAS systems out there, whatever software they use. Otherwise, a client wouldn't be able to see whether there is any network storage (i.e. an NFS drive, etc.) to mount. But most importantly, will it show the directory listings and/or contents of the NFS storage?

I think the difference is that Samba is secured with a password by default. NFS can be secured with Kerberos, but that seems fairly difficult to set up.
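For reference, a Kerberos-secured export would look something like the following line in /etc/exports; the hard part is the surrounding Kerberos/rpc.gssd setup, which this line alone doesn't give you:

/opt    10.0.0.0/255.0.0.0(rw,sec=krb5p,no_subtree_check,sync)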

I now suspect it's not possible, but one theoretical way NFS might work would be to restrict access to particular IP addresses without broadcasting which addresses those are. ssh can be configured this way.
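For example, with OpenSSH something like this in sshd_config restricts logins to one address (user and address hypothetical), and nothing advertises the restriction to other clients:

# /etc/ssh/sshd_config
AllowUsers backupuser@10.0.0.100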

Honestly, I would not know the answer, unfortunately. :(

No worries. Thank you for your help anyway. Hopefully someone else might be able to pipe in. Otherwise, I'll give it a shot anyway and report back.

Pseudorellia wrote:

However, if you are curious, or you think it might be useful to others, I'm happy to follow the instructions in the link. Just let me know.

I sure would appreciate it if you could, especially since your external storage is on a USB2 port. Chances are your R/W throughput will be about 10/4 MB/s, respectively. Even on a USB2 port, you can probably increase the R/W throughput by 2x if you use USB3 storage and/or a USB3 card reader.

mazilo wrote:

I sure would appreciate it if you could, especially since your external storage is on a USB2 port. Chances are your R/W throughput will be about 10/4 MB/s, respectively. Even on a USB2 port, you can probably increase the R/W throughput by 2x if you use USB3 storage and/or a USB3 card reader.

Yes, it's USB2. I installed iperf on both boxes, but my understanding is that it shows raw throughput between IP addresses. There doesn't seem to be a way to specify a protocol (i.e. sshfs vs. NFS), nor to pick "local" (i.e. mounted) filesystems.
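For the record, what I ran was roughly this (address hypothetical):

# on the router
iperf -s
# on the laptop
iperf -c 192.168.1.1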

Pseudorellia wrote:
mazilo wrote:

I sure would appreciate it if you could, especially since your external storage is on a USB2 port. Chances are your R/W throughput will be about 10/4 MB/s, respectively. Even on a USB2 port, you can probably increase the R/W throughput by 2x if you use USB3 storage and/or a USB3 card reader.

Yes, it's USB2. I installed iperf on both boxes, but my understanding is that it shows raw throughput between IP addresses. There doesn't seem to be a way to specify a protocol (i.e. sshfs vs. NFS), nor to pick "local" (i.e. mounted) filesystems.

A few follow-up posts down in the thread I linked previously, the OP indicates how this was accomplished. It is something like the following. TBH, I don't know how accurate these tests will be, but they sure will give some ideas and/or comparisons.

To perform a write:

time dd if=/dev/zero of=/mnt/downloads/testfile bs=16k count=16384

To perform a read:

time dd if=/mnt/downloads/testfile of=/dev/null bs=16k

After the write, you will probably need to flush the cache before performing the read, e.g. by unmounting the NFS partition.
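Something like the following on the client should do it (the mount point matches the dd commands above; the server address is hypothetical; drop_caches is Linux-specific):

umount /mnt/downloads
mount -t nfs 10.0.0.1:/opt /mnt/downloads
# or, without remounting:
sync
echo 3 > /proc/sys/vm/drop_caches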

mazilo wrote:
Pseudorellia wrote:
mazilo wrote:

I sure would appreciate it if you could, especially since your external storage is on a USB2 port. Chances are your R/W throughput will be about 10/4 MB/s, respectively. Even on a USB2 port, you can probably increase the R/W throughput by 2x if you use USB3 storage and/or a USB3 card reader.

Yes, it's USB2. I installed iperf on both boxes, but my understanding is that it shows raw throughput between IP addresses. There doesn't seem to be a way to specify a protocol (i.e. sshfs vs. NFS), nor to pick "local" (i.e. mounted) filesystems.

A few follow-up posts down in the thread I linked previously, the OP indicates how this was accomplished. It is something like the following. TBH, I don't know how accurate these tests will be, but they sure will give some ideas and/or comparisons.

To perform a write:

time dd if=/dev/zero of=/mnt/downloads/testfile bs=16k count=16384

To perform a read:

time dd if=/mnt/downloads/testfile of=/dev/null bs=16k

After the write, you will probably need to flush the cache before performing the read, e.g. by unmounting the NFS partition.

Ah, sorry. I thought you were referring to the specific post you linked, not the whole thread. Having said that, I honestly think my way is better. If you just write (and read back) zeros, there is potentially some compression that confounds the analysis; random data mitigates that concern. I feel that it's a more "real-world" scenario. At worst, using /dev/urandom is as good as using /dev/zero. The second command was essentially what I did, except I also tested the local write speed, which also reflects reality better.
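The compression effect is easy to demonstrate (byte counts approximate):

dd if=/dev/zero bs=1M count=50 2>/dev/null | gzip -c | wc -c
# roughly 50 KB: zeros compress almost completely away
dd if=/dev/urandom bs=1M count=50 2>/dev/null | gzip -c | wc -c
# roughly 50 MB: random data barely compresses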

Also, FWIW, I've previously done tests on my Raspberry Pi comparing NFS and ssh. I found that I could speed up sshfs considerably by using a less CPU-intensive cipher, namely arcfour. It's not totally secure, but assuming my Wi-Fi connection is not compromised, I was happy with that. This is probably a major reason why OpenWrt's sshfs is so slow: it doesn't offer arcfour, and the router's processor is much weaker than my Pi's.
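On the Pi it was just a mount option, something like this (host and path hypothetical; note that arcfour has since been removed from modern OpenSSH):

sshfs -o Ciphers=arcfour pi@10.0.0.1:/srv /mnt/pi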

Pseudorellia wrote:

Ah, sorry. I thought you were referring to the specific post you linked, not the whole thread. Having said that, I honestly think my way is better. If you just write (and read back) zeros, there is potentially some compression that confounds the analysis; random data mitigates that concern. I feel that it's a more "real-world" scenario. At worst, using /dev/urandom is as good as using /dev/zero. The second command was essentially what I did, except I also tested the local write speed, which also reflects reality better.

I wasn't aware of that, but you brought up a very good point about using /dev/urandom.

Also, FWIW, I've previously done tests on my Raspberry Pi comparing NFS and ssh. I found that I could speed up sshfs considerably by using a less CPU-intensive cipher, namely arcfour. It's not totally secure, but assuming my Wi-Fi connection is not compromised, I was happy with that. This is probably a major reason why OpenWrt's sshfs is so slow: it doesn't offer arcfour, and the router's processor is much weaker than my Pi's.

TBH, I don't even know how to test NFS throughput with ssh. I may not be able to test it right away, but can you please provide instructions on how to test NFS throughput with ssh?

mazilo wrote:

Also, FWIW, I've previously done tests on my Raspberry Pi comparing NFS and ssh. I found that I could speed up sshfs considerably by using a less CPU-intensive cipher, namely arcfour. It's not totally secure, but assuming my Wi-Fi connection is not compromised, I was happy with that. This is probably a major reason why OpenWrt's sshfs is so slow: it doesn't offer arcfour, and the router's processor is much weaker than my Pi's.

TBH, I don't even know how to test NFS throughput with ssh. I may not be able to test it right away, but can you please provide instructions on how to test NFS throughput with ssh?

Oh, perhaps I was unclear. I meant that I compared three different things. In order of increasing speed:

  • sshfs with default encryption

  • sshfs with arcfour encryption

  • NFS

Pseudorellia wrote:
mazilo wrote:

Also, FWIW, I've previously done tests on my Raspberry Pi comparing NFS and ssh. I found that I could speed up sshfs considerably by using a less CPU-intensive cipher, namely arcfour. It's not totally secure, but assuming my Wi-Fi connection is not compromised, I was happy with that. This is probably a major reason why OpenWrt's sshfs is so slow: it doesn't offer arcfour, and the router's processor is much weaker than my Pi's.

TBH, I don't even know how to test NFS throughput with ssh. I may not be able to test it right away, but can you please provide instructions on how to test NFS throughput with ssh?

Oh, perhaps I was unclear. I meant that I compared three different things. In order of increasing speed:

  • sshfs with default encryption

  • sshfs with arcfour encryption

  • NFS

OK.

NFS should almost always be faster than SSHFS on OpenWrt devices, because SSHFS uses much, much more CPU. If you run top or htop on your OpenWrt device during a network transfer you will see what I mean. There are many things that influence write speeds. For example:

  • Protocol overhead

  • CPU power

  • Network throughput

  • Media write speed

Benchmarks on one device will be different on another device based on the above.

What I do is serve all my media files with anonymous, read-only NFS and anonymous FTP using vsftpd. Note that Kodi supports NFS on all platforms, including Windows and Android. This gives me adequate performance to support multiple 1080p movie streams with minimal CPU load. I don't care who accesses my media streams.
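In /etc/exports terms, an anonymous read-only media export looks roughly like this (path hypothetical):

/mnt/media    *(ro,all_squash,insecure,no_subtree_check)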

For files where I do care about access, I use sshfs. I typically don't transfer large files over sshfs.

(Last edited by vernonjvs on 20 May 2016, 05:08)

The discussion might have continued from here.