Mochabin is shipping again (or so I am told)

Are 2.5 Gb SFP ports at all useful for fiber? It seems that all fiber transceivers are either 1 or 10 Gb. Yes, 2.5 Gb SFP-to-copper adapters exist, but that capability should have been built onto the board in the first place to save $30 per port.

I went searching for a DAC that would take 10GbE fiber down to 2.5G (any medium). Still looking, and it probably won't exist due to power needs.

I have not looked for a 2.5GbE optical solution, but I have many connectivity options for 10GbE optical and copper, as well as many for 2.5GbE copper.

I've seen some. Optcore has them.

IMO... 'useful' is individual and situational.
When 1GbE was the fast option, MGIG (2.5G and up) was useful.

Some systems are shipping with 2.5GbE NICs (copper from what I have seen).

And, thanks for the link!

This is a shame. I thought I had found a candidate for a non-x86 fanless router that could do QoS and firewall on full Gigabit throughput. Don't need or want wireless - I'll use separate switches and APs.

I just want to buy a box. Not cobble together something using the (currently unobtainable) Raspberry Pi Compute Module 4.

I get the same feeling as with the 'GNUBee PC-1 & PC-2' and the long slog towards mainline kernel support.

I am confused, Limentinus. I got 1GbE on the EspressoBin, and it had OpenWrt and Snort at that time.
The Mochabin has more I/O and a better switch. True, I have not yet performed iperf-like tests on the Mochabin. And I know that NAPI can be horrid on some CPUs. But I would expect a certain gain over the EspressoBin (which I hope to test soon).

I hope you are not confusing the clock speed of the interface and the ability of the system to send packets through that interface.

Think of the outgoing network interface as a leaky bucket. The hole in the bottom of the bucket is big enough to allow 1,000,000,000 bits per second to go through it. The tap filling the bucket is the output of the cpu, and could well deliver less than 1,000,000,000 bits per second. So even though the interface is rated to pass 1,000,000,000 bits per second, the actual throughput can be much less, depending on the capabilities of the cpu.

The same is true of inbound data. In this case, the tap is from the network into the bucket and is big enough to deliver 1,000,000,000 bits per second into the inbound bucket. The hole in the bottom of the bucket is now how quickly the cpu can empty the bucket. If the cpu is too slow, the bucket fills up, and spills over the top: data is lost.

So to do full Gigabit throughput networking, the cpu needs to be able to process 1 Gbit/s of data inbound at the same time as 1 Gbit/s of data outbound. This is not easy. It gets worse. You usually have two network interfaces on a router attached to an Internet connection: one facing the ISP and one facing the local network. If both are to be fully used, the cpu needs to deal with 1 Gbit/s of data coming in from the LAN, then push it out at 1 Gbit/s towards the ISP. And the same in the opposite direction. That adds up to the cpu needing to cope with processing 4 Gbit/s of data to keep all the inbound and outbound interfaces full. If you add in firewall rules and quality of service, it is a very big load.
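
To make the leaky-bucket arithmetic above concrete, here is a rough back-of-the-envelope sketch in Python. The CPU processing rate is an invented illustrative number, not a measurement of any board mentioned in this thread:

```python
# Back-of-the-envelope model of the "leaky bucket" above.
# CPU_RATE is an assumed, illustrative figure - not a measured value.

LINK_RATE = 1_000_000_000      # bits/s each interface can carry
CPU_RATE  = 3_000_000_000      # bits/s the CPU can push through the stack (assumed)

# A full-duplex router on an Internet connection has 4 flows to keep full:
# LAN->CPU in, CPU->WAN out, WAN->CPU in, CPU->LAN out.
flows = 4
demand = flows * LINK_RATE     # bits/s the CPU must process for full line rate

if CPU_RATE >= demand:
    print("CPU can keep all interfaces full")
else:
    # The bucket overflows: effective per-direction throughput is capped
    # by the CPU's share of its total processing budget.
    per_flow = CPU_RATE / flows
    print(f"CPU-bound: ~{per_flow / 1e6:.0f} Mbit/s per direction")
```

With the assumed 3 Gbit/s CPU budget against a 4 Gbit/s demand, the sketch lands at roughly 750 Mbit/s per direction, which is why "Gigabit ports" do not automatically mean Gigabit routing.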

Of course, the reality is that I rarely upload at the same rate as I download. Most people (but not all) are mostly interested in how quickly data arrives from the ISP, so being able to cope with a single 1 Gbit/s inbound stream is adequate.

(Some people might not know what NAPI is: the Linux kernel's 'New API' driver model, which switches from per-packet interrupts to polling under load.)

What packetsize(s) were you testing with to get 1 Gbit/s throughput on the Espressobin? Can you link to any test results?

Edit to add:

Link to some performance testing of the Espressobin:

http://espressobin.net/forums/topic/performance-router/page/2/#post-386

"So this time the GBit Ethernet gets saturated for frames >=1024 bytes, and all this while staying below 55°C without a heat sink, which was really impressive."
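
The frame-size dependence in that result makes sense once you count packets per second rather than bits per second. The numbers below are standard Ethernet line-rate arithmetic, not measurements:

```python
# Packets per second needed to saturate 1 Gbit/s at different frame sizes.
# On the wire, each frame also carries 7 bytes of preamble, 1 byte SFD,
# and a 12-byte inter-frame gap: 20 bytes of overhead per frame.

LINK = 1_000_000_000  # bits/s
OVERHEAD = 20         # preamble + SFD + inter-frame gap, in bytes

for frame in (64, 512, 1024, 1518):
    pps = LINK / ((frame + OVERHEAD) * 8)
    print(f"{frame:5d}-byte frames: {pps:>10,.0f} packets/s")
```

The per-packet cost (interrupts, routing lookups, firewall rules) is roughly constant, so saturating the link with 64-byte frames needs about 1.49 million packets/s versus about 120 thousand packets/s at 1024 bytes, which is why a board can saturate GBit with big frames and fall well short with small ones.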

Perhaps I should look at the Espressobin.

I understand full duplex over two interfaces :slight_smile:
The tests I did on the Espressobin are long gone. I can ask for some and see if we can get them from Marvell. Since I also ran these for many other chips, I may not remember whether the info is correct for the A7040. I know it was correct for the CN7040.

Usually I test with 512B packets since one customer I supported only wanted this data. Occasionally I run single core since some customers asked for this as well (and also one of the reasons I know NAPI can often cause bottlenecks).

But, again; I will ask.


Thank you.

When you're looking for a router, it gets frustrating that proper performance information is so hard to get hold of.

You really need a test where you first confirm that this configuration saturates the interfaces.

[Upstream traffic generator]Tx--->>>link>>>---Rx[Downstream traffic sink]
[Upstream traffic sink     ]Rx---<<<link<<<---Tx[Downstream traffic generator]

Then put the device under test into the link and retest.

[Upstream traffic generator]Tx->-Rx[Device under test]Tx->-Rx[Downstream traffic sink]
[Upstream traffic sink     ]Rx-<-Tx[Device under test]Rx-<-Tx[Downstream traffic generator]

You then get the throughput statistics from the Traffic sinks, and hopefully monitor the cpu, memory and interrupt usage of the device under test, which gives you a good picture of what it is capable of. Adding firewall rule processing and QoS is optional, but interesting.
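
As a sketch of what you would then do with the sink counters, here is a toy comparison of the two runs. All the figures are invented placeholders, not measurements of any device:

```python
# Toy evaluation: device-under-test run compared against the baseline run.
# The Mbit/s figures are invented placeholders, not real measurements.

baseline_mbps = {"down": 941.0, "up": 941.0}   # link saturated, no DUT inline
dut_mbps      = {"down": 880.0, "up": 560.0}   # same test through the DUT

for direction in ("down", "up"):
    efficiency = dut_mbps[direction] / baseline_mbps[direction]
    print(f"{direction}: {dut_mbps[direction]:.0f} Mbit/s "
          f"({efficiency:.0%} of baseline)")
```

Reporting throughput as a fraction of the saturated baseline, rather than as a raw number, is what makes results from different test rigs comparable; publishing only the raw figure is part of why vendor numbers are so hard to interpret.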

The BPI-R3 also has an M.2 Key M slot. It's not USB2-only.

The Mochabin only has a 1.4GHz Arm64 cpu. There is not a chance it can bit-bash 10Gbps of routing. My BPI-R64 has a 1.35GHz Arm64 cpu and it can only process about 4-5Gbps in loopback, let alone route it. The BPI-R3 has a 2GHz Arm64, which is about beefy enough to comfortably route the 2.5Gbps its ports can carry.

Two SFPs @ 2.5Gbit is a good thing. The Mochabin's second SFP is only 1Gbit, which means it has no "wired" way to get data out at anywhere near the speed data can come in.

A router with a bigger mouth than it has the stomach for is not much use except for having impressive specs on paper. I actually respect the BPI designers for this.


Yes, that's nice. But it's completely irrelevant for 5G modems.

Please don't try to confuse this even more. The BPi3 has a mini-PCIe slot for modems, provided you don't get the PoE version, which has no slot at all. Those are your options.

The Key M slot is for SSDs. Putting a modem in there is futile because
a) it will not fit, and
b) the slot is not connected to any SIM slot

Sure. The point is that you can use an SFP+ to connect to any standard switch and get a 10Gbps link. And therefore have a slight chance of actually routing more than 1gig. It's not about whether 2.5Gbps is enough - I'm sure it is - but what hardware interface is commonly available.

I see that 2.5G optics actually exist, which was surprising. But I would still worry about the real-world support. I cannot imagine that you can put one into an arbitrary SFP+ switch out there. 2.5Gbps SFP is, and will probably always be, a weird interface. 10Gbps optics have been around for more than a decade. I've yet to see a switch supporting anything between 1 and 10Gbps (excluding BASE-T, which is a different subject).

And now I'm sure someone will come dragging that one switch which is the exception. Well, fine, if you want to build your entire network infrastructure around some weird hardware interface using one-of-a-kind components, then do that. I want standard hardware interfaces leaving me some choice.

My apologies indeed - you are 100% correct. I have (clearly) never used cellular data other than from my phone, nor have I used m.2 for anything other than storage.

I have also confirmed that the R3's mPCIe is indeed only USB 2.0 as you correctly surmised.

First of all, I wonder if anyone who has the budget for a 10Gb SFP+ switched backbone is really going to be messing around with either a BPI-R3 or a Mochabin?

Here I think you are confusing the issue. Most SFP+ switches will also take SFP modules. A lot of SFP+ ports don't advertise being able to do 2.5Gb, but will. And if some SFP+ switch doesn't support a 2.5Gb SFP optical module, there are still other ways to connect the R3 to it, like a media converter box or just using copper modules for that link. There are lists available of 10GbE SFP+ modules that will link at 2.5Gbps.

With an R3 hooked to a 10Gbps SFP+ switch you still get 10Gbps switching, but you then have both the bandwidth AND the horsepower to route its full 2.5Gbps link speed to the internet.

On the Mochabin, since you have only one 10Gbps port anyway, using it as an internet router connected to a 10Gbps switch isn't even a horsepower issue - the only way you can route external data at more than 1Gbps is if you are using cellular. Do any 5G cellular networks even work at (effective) speeds in excess of 1Gbps?

Honestly, though, I don't see that hamstringing the Mochabin by using its sole >1Gbps port to connect to a switch is likely going to be a common scenario. At its price point it is likely aimed at being attractive to FTT_.

The Mochabin is clearly superior for 5G cellular (EDIT: unless this is an option). Thank you for pointing that out. But for everything else, the R3 seems to be as good as or better.

Yes, 5G can get you past 1Gbps, but keep in mind that this particular PCIe modem will cost $300-400 due to huge licensing fees...