That would equate to around 120MB/s read and 59MB/s write speed, so the connection is not the issue.
The next step would be to open top, transfer a big file, and watch the overall CPU load and which processes consume the most CPU.
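For example (test-file size and share mount point are just placeholders, not your actual setup):

```shell
# Create a 64 MiB test file on the client:
dd if=/dev/zero of=/tmp/testfile bs=1M count=64 2>/dev/null

# In one terminal on the router, watch the load while it transfers:
#   top -d 1
# In another terminal on the client (assuming the share is mounted
# at /mnt/nas), time the copy:
#   time cp /tmp/testfile /mnt/nas/
# If smbd (or the ksmbd kernel threads) pin a core, you're CPU-bound.
```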
Edit: With yesterday's snapshot build (r10586) and currently latest cifsd (kmod-fs-cifsd 4.19.57+2019-07-17-0c3049e8-1) I get the same ~42MB/s read and ~35MB/s write I got before, maxing out the CPU in the process. This is on a WD My Book Live which, hardware-wise, shouldn't be inferior to your Zyxel NAS.
Wait, you are testing this remotely, over the internet? To check performance you should test on the wired LAN itself, not even over Wi-Fi. Everything else will introduce its own overhead.
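Once you are on the wired LAN, it also helps to rule out the two other links in the chain: raw TCP throughput (with iperf3, if it is installed on the device) and the disk itself. A rough sketch, with paths purely for illustration:

```shell
# Raw TCP throughput, independent of Samba:
#   on the NAS/router:  iperf3 -s
#   on the LAN client:  iperf3 -c <router LAN IP>
# A gigabit link should show on the order of ~940 Mbit/s.

# Disk baseline, run locally on the NAS (write then read 64 MiB);
# dd prints its throughput summary on the last line:
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fsync 2>&1 | tail -n 1
dd if=/tmp/ddtest of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f /tmp/ddtest
```

If both of those are fast and the share is still slow, the bottleneck is Samba/CPU rather than the network or the disk.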
Oh, I remembered that I ran Samba on a router at one point years ago, and the following settings helped me back then: socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
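If you want to try it, that line goes in the [global] section of smb.conf. On OpenWrt the config is normally regenerated from /etc/samba/smb.conf.template, so edit the template rather than smb.conf itself (treat that path as an assumption for your particular build):

```ini
[global]
	socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
```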
BTW, can you compare the Samba options between the stock and OpenWrt firmware? There could be a clue in there.
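One way to do the comparison, assuming you can pull the generated smb.conf off each firmware (the option values below are invented for illustration; testparm -s, where available, will also expand defaults for you before diffing):

```shell
# Two dumped configs, contents made up for the example:
cat > stock.conf <<'EOF'
[global]
	use sendfile = yes
	socket options = TCP_NODELAY
EOF
cat > openwrt.conf <<'EOF'
[global]
	use sendfile = no
	socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
EOF

# diff exits non-zero when the files differ, hence the trailing "|| true":
diff -u stock.conf openwrt.conf || true
```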
So going by this, sendfile is broken in all newer Samba versions (4.1+), but sendfile is only used when aio is disabled, and aio is enabled by default. So platforms with working aio "should" be unaffected by the sendfile bug.
So if you suffer from slow speeds, maybe give forced synchronous transfers a try, while avoiding the sendfile bug via those settings:
aio read size = 0
aio write size = 0
use sendfile = no
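In smb.conf these belong in the [global] section. On OpenWrt you would add them to /etc/samba/smb.conf.template so they survive config regeneration (that path is an assumption for your build):

```ini
[global]
	aio read size = 0
	aio write size = 0
	use sendfile = no
```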
Hi, after a long time I found the solution. Unfortunately, the solution is Debian :-/ . Read/write speed is 25/25 MB/s with Samba; with scp it is 8 MB/s read/write on Debian, versus 2.3 MB/s on OpenWrt.