Not trying to derail this thread, but let me just say that I'm positively giddy to try and replace samba with cifsd now that it has officially landed in packages. Thank you for your work.
Yeah, I need to update cifsd again to fix a service restart bug and create a LuCI package. I also still need to update wsdd2 so it might also work with cifsd.
I noticed. At the moment, cifsd only starts once. Any subsequent restart (also stop & start) produces errors that seem to come from the kernel module, or from the handling of its rmmod/modprobe:
kern.err kernel: [ 1459.231963] kcifsd: create_socket:410: Failed to bind socket: -98
kern.err kernel: [ 1459.238083] kcifsd: tcp_destroy_socket:368: Failed to shutdown socket: -107
kern.err kernel: [ 1459.245030] Failed to init TCP subsystem: -98
Sometimes it would start after a longer stop, but it's not quite predictable.
However, in initial tests I am thoroughly impressed by cifsd's performance. Where I would run into a ~20 MB/s brick wall with Samba3, I now see ~35 MB/s with cifsd,
and I feel that that's still a bit low because the disk I'm testing it on is only managing ~45 MB/s raw write speed on a good day with clear weather. Edit: Nope, with a faster disk it doesn't improve. Still, a solid +75% in performance is very, very nice.
I now see ~35 MB/s with cifsd, and I feel that that's still a bit low because the disk I'm testing it on is only managing ~45 MB/s raw write speed on a good day with clear weather
nice - and when you add another squirrel to a nearby park?
Yes, that's the restart bug; upstream already has a patch for it, which I need to verify before I update the package. You can wait 10 s after a stop and it should start fine: 8 s is the userspace timeout before the kernel module cleanly closes all connections. The problem is that if you start within this timeframe, the "reconnect" does not work and leaves a zombie connection around, which prevents the start.
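Until the patched package lands, the workaround above can be sketched as a tiny wrapper. This is only an illustration, assuming the usual procd init script at /etc/init.d/cifsd; the STOP_CMD/START_CMD/DELAY variables are hypothetical knobs, not part of any real package:

```shell
#!/bin/sh
# Hypothetical wrapper: restart cifsd only after the kernel module's
# 8 s connection-teardown timeout has safely passed.
STOP_CMD="${STOP_CMD:-/etc/init.d/cifsd stop}"
START_CMD="${START_CMD:-/etc/init.d/cifsd start}"
DELAY="${DELAY:-10}"   # wait a bit longer than the 8 s timeout

safe_restart() {
    $STOP_CMD || return 1
    sleep "$DELAY"      # give the module time to close all connections
    $START_CMD
}
```

Calling `safe_restart` instead of a plain `restart` avoids the zombie-connection window described above.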
cifsd looks good - can you tell me please how I can try it? Uninstall samba and install cifsd? Install cifsd and "connect" to samba? Sorry for my questions, but I am not an IT professional - IT is only my hobby.
Yes, just install it and create /etc/config/cifsd manually; your old samba config should be compatible, just copy & rename it. You can also just wait: I will have the LuCI UI package ready next week, and then you can just install and use it like samba.
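In shell terms, the copy & rename step is roughly the following. The `copy_config` helper is purely illustrative, and the /etc/config path is assumed from the standard OpenWrt UCI layout; keep a backup in case section names need further tweaking:

```shell
#!/bin/sh
# Sketch: reuse the existing samba UCI config for cifsd.
# copy_config is a hypothetical helper; on the router you would run it
# against the real /etc/config directory.
copy_config() {
    dir="$1"
    cp "$dir/samba" "$dir/samba.bak"   # keep a backup of the original
    cp "$dir/samba" "$dir/cifsd"       # cifsd reads this file name
}
# On OpenWrt: copy_config /etc/config
```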
PS: The only difference from a user perspective is that the shares will not show up automatically in Win10/explorer/network. You have to manually navigate to the router hostname, i.e. \\router or whatever your hostname for OpenWrt is. That's because cifsd is not yet compatible with wsdd2.
In addition to @Andy2244's remarks: To "install cifsd" you would have to update your OpenWrt installation. cifsd is currently only in master/snapshots, and since cifsd is a kernel module (kmod-fs-cifsd) it is bound to the specific kernel it was built against.
(Personally, I would wait a few days longer and do all of that once the abovementioned bug is resolved. It is not a dealbreaking bug by any stretch, but depending on how experimental you feel, you might only want to update once.)
I am a bit of a DIY type (home IT); I try anything, but sometimes I get stuck (I don't know how to say this in English) on something easy. On the router I have stable OpenWrt, on the NAS a dev version. And if I don't know what to do next, I do a firstboot and reboot again...
On the NAS I have installed only mc, vim, nmap, samba36, smartctl + hdparm, and shadow I think (and packages for mounting: block-mount, kmod-fs-ext4).
yes, something changed - read is now 4.5 MB/s ...
You mean, your throughput is actually lower now with cifsd?
I haven't found time to try the bugfixed version yet; it's been in packages for about two days now, and I intend to try it on the weekend. But I find it strange that it would perform not better but so much worse than regular samba3x.
Unfortunately yes, but I think it is not only a samba problem - nfs/vsftp have the same problem with read speed. The disk is OK I think - SMART shows no errors, and read/write speeds in a "big" PC are OK.
You should test your raw speed in OpenWrt itself. On the shell, change to a directory on your disk (in your case that would be /mnt/sda, I assume) and then run the following commands:
time dd if=/dev/zero of=tempfile bs=1M count=1024
time dd if=tempfile of=/dev/null bs=1M count=1024
The former will test write speed, the latter will test read speed using the "tempfile" the write test wrote. Since that is a 1024 MB file, divide 1024 by the resulting seconds and you get your MB/s. This is not a be-all-end-all test, but it will show the limits of the disk when connected to your device.
Thank you takimata, here are the results:
root@OpenWrt:~# mkdir /mnt/sda/marek/tmp
root@OpenWrt:~# cd /mnt/sda/marek/tmp
root@OpenWrt:/mnt/sda/marek/tmp# time dd if=/dev/zero of=tempfile bs=1M count=1024
1024+0 records in
1024+0 records out
real    0m 17.44s
user    0m 0.02s
sys     0m 14.67s
root@OpenWrt:/mnt/sda/marek/tmp# time dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
real    0m 8.50s
user    0m 0.00s
sys     0m 7.23s
root@OpenWrt:/mnt/sda/marek/tmp#
That would equate to around 120 MB/s read and 59 MB/s write speed, so the disk itself is not the bottleneck.
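For reference, the arithmetic behind those numbers is just the 1024 MB file size divided by dd's elapsed time from the test above:

```shell
#!/bin/sh
# 1024 MB written in 17.44 s, read back in 8.50 s (timings from the test above).
awk 'BEGIN { printf "write: %.1f MB/s\n", 1024/17.44 }'
awk 'BEGIN { printf "read:  %.1f MB/s\n", 1024/8.50  }'
```

This prints roughly 58.7 MB/s write and 120.5 MB/s read, matching the ~59/~120 figures.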
The next step would be opening top and then transferring a big file, watching the overall CPU load and which processes are the highest.
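If watching top interactively over SSH is awkward, a one-shot batch snapshot works too. This assumes the -b (batch) and -n (iteration count) flags, which BusyBox top on OpenWrt normally accepts:

```shell
# Take a single non-interactive snapshot of the process list while a
# transfer is running; repeat it to see how smbd/cifsd CPU usage evolves.
top -b -n 1 | head -n 15
```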
Edit: With yesterday's snapshot build (r10586) and currently latest cifsd (kmod-fs-cifsd 4.19.57+2019-07-17-0c3049e8-1) I get the same ~42MB/s read and ~35MB/s write I got before, maxing out the CPU in the process. This is on a WD My Book Live which, hardware-wise, shouldn't be inferior to your Zyxel NAS.
Hi, I posted the top results in comments 9 and 10 I think. I am not at home, so I can't add a new one. I don't know where the problem is.
Wait, you are testing this remotely, over the internet? To check the performance, you should do it in the wired LAN itself, not even over wifi. Everything else will introduce its own overheads.
No, all tests are over cable and in the LAN, but today we have a birthday party for our children, so we aren't at home.
It seems to be CPU bound
5693 5190 admin S 28336 11% 80% /usr/sbin/smbd -F
Yes, it can be, but why? The original firmware has read/write speeds of 50/35 MB/s (over NFS).