NanoPi R4S-RK3399 is a great new OpenWrt device

Thanks for the tip. For me, all three of these were enabled by the build system; I didn't have to manually select any. I can share my diffconfig if you're interested.

Can you elaborate on your soft reboot issues? What are they and how do you work around them? Also, which PR are you talking about as needing review?

PR #4860 - rockchip: fix mmc core set initial signal voltage on reboot

The RT3200 has occasionally required the power to be switched/unplugged when it fails to come back after a reboot. It hasn't happened to me recently, but I still like to be close by when upgrading. I've never bricked one, so it's more of an occasional annoyance.

It's a shame the R4S hasn't received the same developer interest, but after messing with it over the last several days I'm back to the mindset that it's best to keep core networking independent. It's cool knowing the R4S has the power to handle almost all of my essential stuff, but I'd still need an AP and a capable VM host/server, so I'd just be shifting roles around without actually consolidating anything.

Fortunately I only have a 200/10 connection which doesn't require that much power for cake, wg, etc...


The SD card controller does not restore the proper voltage on the SD card, so the bootloader can't find the card and gets stuck. You just need to apply that one patch to the kernel; it restores the proper voltage just before restart.

You can always apply the patch yourself and the problem will be gone. Aside from this issue, everything else is stable, even on snapshots.
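If you build your own image, applying it just means dropping the patch into the rockchip kernel patch directory before compiling. A rough sketch, assuming a master checkout on the 5.4 kernel; the patch file name here is made up, and GitHub's ".patch" URL for the PR is used as a convenient source:

cd openwrt
# Made-up file name; patches-5.4 must match the kernel version your tree uses.
wget -O target/linux/rockchip/patches-5.4/990-mmc-restore-signal-voltage.patch \
    https://github.com/openwrt/openwrt/pull/4860.patch
make target/linux/clean
make -j"$(nproc)"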


I'd be interested to see the diffconfig to get a plain 'master' build running.

Aren't default configs available in the Image Builder?

Here are the commands I used on mine:

#eth0 interrupts on core 0
echo 1 > /proc/irq/35/smp_affinity
#eth0 queue on core 1
echo 2 > /sys/class/net/eth0/queues/rx-0/rps_cpus
#eth1 interrupts on core 2
echo 4 > /proc/irq/87/smp_affinity
#eth1 queue on core 3
echo 8 > /sys/class/net/eth1/queues/rx-0/rps_cpus

Your IRQ numbers might be different; you can find them with "grep eth /proc/interrupts". The first column is the IRQ, so substitute yours into the commands above. The value you echo into each file is a CPU bitmask, written in hex: each bit represents one CPU, with the rightmost bit being CPU 0. For example (a helper sketch follows the examples):

00000001 > hex 1 > cpu 0
00000010 > hex 2 > cpu 1
00001111 > hex F > cpu 0,1,2,3
00110000 > hex 30 > cpu 4,5
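Putting that together, here's a hypothetical helper (the pin_nic name is made up) that looks up each NIC's IRQ from /proc/interrupts instead of hardcoding it, then applies the masks:

# Hypothetical helper: find the NIC's IRQ, pin it, then steer its RX queue.
# Usage: pin_nic <iface> <irq mask> <rps mask>
pin_nic() {
    irq=$(awk -F: -v dev="$1" '$0 ~ dev { gsub(/ /, "", $1); print $1; exit }' /proc/interrupts)
    echo "$2" > "/proc/irq/$irq/smp_affinity"
    echo "$3" > "/sys/class/net/$1/queues/rx-0/rps_cpus"
}

pin_nic eth0 1 2   # eth0: IRQ on CPU 0, queue on CPU 1
pin_nic eth1 4 8   # eth1: IRQ on CPU 2, queue on CPU 3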

A great deal of the excitement associated with people sending money to Friendly ELEC for this small wonder is the fact that it runs OpenWrt FOSS. The OpenWrt maintainers surely know their part in this.

Seeing as how you enablers just convinced me to order an R4S, I sure hope that particular idiot was not affiliated with Friendly ELEC. Only the most altruistic of maintainers would help someone who shoots flames back at them earn more money from the maintainers' free labor.

Thank you very much.

Hi all,
I have a new NanoPi R4S for sale in Europe. I ordered two a few months back and my network has been working well with a single device, so I'm selling the second one.
Drop me a mail if you're interested.
Thank you,
Peter

I swapped the connections and tested the unloaded lag: whether the WAN is connected to the UE300 or to the internal NIC, the lag is the same, 18-20 ms. My thought was I might get better results not going through USB, but that isn't the case.

Did some SQM testing with iperf3 using 3 different builds:

  • All tests done with cake / piece_of_cake set to 1Gb up/down (the iperf3 commands are sketched below)
  • Stock build: 1.8 / 1.4 GHz, r8169 driver
  • OC build: 2.2 / 1.8 GHz, r8169 driver
  • r8168 build: 2.2 / 1.8 GHz, r8168 driver
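For anyone wanting to reproduce these, the three test directions map onto iperf3 roughly like this (a sketch, not the exact commands used); 192.0.2.10 is a placeholder for the server on the far side of the router, and --bidir needs iperf3 3.7 or newer:

iperf3 -c 192.0.2.10 -t 30           # egress: client uploads through the router
iperf3 -c 192.0.2.10 -t 30 -R        # ingress: -R reverses, so the server sends
iperf3 -c 192.0.2.10 -t 30 --bidir   # bidirectional: both directions at once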

//////// Default IRQ and queue affinity //////////

Stock build:

  • egress: 935
  • ingress: 917
  • bidirectional: 898 / 750

OC + r8169 build:

  • egress: 940
  • ingress: 916
  • bidirectional: 890 / 800

OC + r8168 build:

  • egress: 935
  • ingress: 917
  • bidirectional: 890 / 790

//////// IRQs and queues on A53 cores only //////////

Stock build:

  • egress: 940
  • ingress: 819
  • bidirectional: 884 / 710

OC + r8169 build:

  • egress: 940
  • ingress: 920
  • bidirectional: 890 / 690

OC + r8168 build:

  • egress: 941
  • ingress: 920
  • bidirectional: 877 / 715

//////// IRQs and queues on A72 cores only //////////

Stock build:

  • egress: 936
  • ingress: 920
  • bidirectional: 902 / 776

OC + r8169 build:

  • egress: 934
  • ingress: 920
  • bidirectional: 910 / 835

OC + r8168 build:

  • egress: 932
  • ingress: 920
  • bidirectional: 910 / 858

//////// IRQs on A72 cores, queues on A53 cores //////////

Stock build:

  • egress: 936
  • ingress: 885
  • bidirectional: 882 / 655

OC + r8169 build:

  • egress: 934
  • ingress: 920
  • bidirectional: 884 / 680

OC + r8168 build:

  • egress: 932
  • ingress: 910
  • bidirectional: 874 / 680

//////// IRQs on A53 cores, queues on A72 cores //////////

Stock build:

  • egress: 940
  • ingress: 920
  • bidirectional: 875 / 899

OC + r8169 build:

  • egress: 940
  • ingress: 920
  • bidirectional: 887 / 888

OC + r8168 build:

  • egress: 939
  • ingress: 920
  • bidirectional: 843 / 902

//////// Takeaways //////////

  • Any of the cores seems able to handle gigabit in either direction on its own. However, when you put the queues on the A53 cores and run a bidirectional test, the A53 cores struggle to keep up and drop to around 880 / 700 Mb.
  • When you put the queues on the A72 cores, they can almost keep up with a full gigabit bidirectional load (almost 900 both ways).
  • Overall the best result I got was by putting the IRQs on the A53 cores and the queues on the A72 cores (commands sketched below).
  • The overclock and the r8168 driver didn't seem to matter a whole lot. Most of the results on all the builds were within the run-to-run variability.
  • I would say the overclock is not worth it unless you need it for Docker / other stuff.
  • The ingress speeds on all tests were somewhat unstable. Speeds were around 940 Mb for the most part but would drop to around 750 about every 5 seconds, bringing the average down to around 920 Mb.
  • I would call this device borderline for a symmetrical gigabit connection. I had to drop SQM to 850 Mb up/down to get stable bidirectional performance. Asymmetrical is more doable, and I managed to get stable speeds at 1000 / 700.
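To replicate that best layout, a sketch using the mask scheme from earlier in the thread: on the RK3399, CPUs 0-3 are the A53 cores and CPUs 4-5 are the A72 cores, and 35/87 are again placeholders for the IRQ numbers from your own /proc/interrupts.

# IRQs on the A53 cluster (mask f = CPUs 0-3); replace 35/87 with yours.
echo f > /proc/irq/35/smp_affinity
echo f > /proc/irq/87/smp_affinity
# RX queues on the A72 cluster (mask 30 = CPUs 4-5).
echo 30 > /sys/class/net/eth0/queues/rx-0/rps_cpus
echo 30 > /sys/class/net/eth1/queues/rx-0/rps_cpus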

I am curious whether FriendlyARM's own implementation of OpenWrt makes any difference. Just a shot in the dark, but we might see more performance there? Otherwise we should question them, I think, because this device is supposed to achieve gigabit SQM.

Nice tests!

Just realized the limiting factor was that the iperf3 server was running on the R2C. I thought that with SQM turned off the R2C could handle gigabit no problem, but looking at the CPU load during the bidirectional test, two of its cores are maxed out. So I'm re-testing with my workstation as the server and will update the results; it's looking better so far.

Edit: updated results and added comparison of stock build and OC build with r8168.


See the test results I just posted with my own builds. You should be able to hit gigabit with SQM on. If not, maybe the bottleneck is the server you're testing against? I initially had some lower than expected results and it turned out my iperf3 server on the other end was a bottleneck. Re-tested against my workstation and got the expected results.


Yeah, after installing the build you gave me (the OC one) I managed to get gigabit with SQM easily :)


I'd like to give your build a try, as I'm having IPv6 issues with a different build from GitHub.
I see a kmod-r8169 file in the root of your share along with another llvm file. Are these needed, or just the ext4 sysupgrade file?
Thanks.

You only need the ext4/squashfs sysupgrade file (plus the packages folder if you want to install something that requires kmod packages); I uploaded the whole builder output folder.
kmod-r8168 is an alternative LAN NIC driver; kmod-r8169 is already included in the build.
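In other words, something like this, with made-up file names standing in for the actual files in the share:

# Flash the sysupgrade image (placeholder file name).
sysupgrade -v /tmp/openwrt-rockchip-armv8-friendlyarm_nanopi-r4s-squashfs-sysupgrade.img.gz
# Later, to try the alternative LAN driver from the packages folder:
opkg install /tmp/kmod-r8168_*.ipk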


Question: did you see any ping spikes when you put the IRQs on the A53 cores and the queues on the A72 cores? Also, are two A53 cores enough for the IRQs, leaving the other two A53 cores for some Docker application + disk I/O?
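For context, I mean something like this hypothetical split, with the IRQs on two A53 cores and the container confined to the other two (the mask, IRQ number, container name, and image are all placeholders):

echo 3 > /proc/irq/35/smp_affinity   # IRQs on CPUs 0-1 (mask 3); 35 is a placeholder IRQ
docker run -d --cpuset-cpus="2,3" --name some-app alpine sleep infinity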

Amazing tests btw! Clarifies so much!