Hardware specs on a J4115 machine

For a Gigabit NIC?
The only gigabit NICs I've encountered that have heat issues are on an ancient Sun quad-gigabit card, and that one comes with a hefty heatsink. Pretty much everything else has a tiny heatsink or none at all and isn't hot to the touch.

It does, and don't forget driver support.
https://downloadmirror.intel.com/20927/eng/e1000.htm (there are different gbit drivers)
https://blog.kylemanna.com/hardware/intel-nic-igb-i211-vs-e1000e-i219/
...and for verification (as it's the same on FreeBSD):
https://pastebin.ubuntu.com/p/yBg3vh9CJZ/ :slight_smile:
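If you want to check which driver actually attaches to a given NIC, something like this works (device names below are just examples):

# FreeBSD: the emX/igbX prefix on the first line of each entry is the driver
pciconf -lv | grep -B4 -i 'class.*network'
# Linux: look for the "Kernel driver in use" line
lspci -k | grep -A3 -i ethernet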

Do you have an official verification of that part number?
YG4N3 seems to be a common part number, but I couldn't find any official documents from Dell.

I know there are multiple drivers (all in the default image, btw). I asked whether that actually matters in your experience, i.e. whether you have seen instances where this was a problem.

Even the guy in the blog post just lists a bunch of features and declares the winner "because it has more features", which is not an answer to my question.
Yes, the more modern hardware has more features, but does any of that have an appreciable effect in practice for a router/firewall? Afaik even the old Intel PRO/1000 PCI cards supported VLANs.

Also, afaik pfSense and OPNsense both disable hardware acceleration for ARP and TCP by default, due to possible incompatibilities or bugs in the drivers that caused grief in the past.
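For reference, my understanding is that those GUI toggles boil down to something like this on FreeBSD (igb0 is just a placeholder interface name):

# disable checksum offload, TCP segmentation offload and large receive offload
ifconfig igb0 -txcsum -rxcsum -tso -lro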

Does that even mean anything? Copying a sticker with a known-good part number (or several) is the least difficult part of making a fake card. Making a PCB and component layout that isn't immediately obvious as a fake, without costing more than the current old stock, is the difficult part.

I don't think any gigabit card that passes visual inspection can be a fake at this point in time, that ship has sailed years ago. The ChinaCards look obvious and generic.
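That said, if you ever do want to check a card beyond the sticker, the PCI IDs and the EEPROM contents are harder to fake than a label. On Linux, something like this works (eth1 is a placeholder):

# show vendor/device/subsystem IDs for Intel devices; compare the subsystem
# IDs against the ones Intel/Dell publish for the genuine card
lspci -vnn -d 8086:
# dump the start of the EEPROM; clones often ship a generic or zeroed image
ethtool -e eth1 | head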

As I need the newer (igb) drivers for functionality, yes, it does matter. However, I still have older cards that use the em (e1000) driver and they "run" fine. At least on the igb cards hardware acceleration works fine for me (tm), but given the various combinations out there I think they're playing it safe.

If you have an official verification it's at least something; I've seen numerous variants being listed as an i350 while the manufacturer reports it as an i340.

I have the impression that you are evading the question (which features actually matter for you and make you prefer/need the newer gigabit NICs). I mean, it's ok if you don't want to answer, I'm just curious.

I'd also be curious to know which features are of interest.

I have a board using the ixgbe driver, the 10GbE counterpart of igb. Most of the hardware offload functionality needs to be explicitly disabled when using OpenWrt because of SQM.
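On OpenWrt that part is just a couple of ethtool calls; eth0 and the exact offload list are examples, check ethtool -k eth0 for what your NIC actually exposes:

# turn off the common offloads before letting SQM shape the traffic
ethtool -K eth0 tso off gso off gro off lro off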

The only thing that is potentially of interest is hardware receive flow steering (also known as Intel Flow Director), which can marginally improve latency at the cost of processing all interrupts for a single flow on a single core. And I don't believe this feature is available with the igb driver, only ixgbe.
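For what it's worth, on Linux the knob for this is, as far as I know, ethtool's ntuple filter interface (interface name, port and queue number below are made up):

# enable Flow Director / ntuple filtering on the ixgbe NIC
ethtool -K eth0 ntuple on
# steer one TCP flow to RX queue 6, so all its interrupts land on one core
ethtool -N eth0 flow-type tcp4 dst-port 5201 action 6
# show the installed classification rules
ethtool -n eth0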

I want to use iflib and igb instead of em simply because it's better maintained and plays nice with ALTQ. I have no idea about drivers etc. on Linux.

Ah ok, you use *BSDs (pfSense/OPNsense), yeah, there you need to care about the drivers, as hardware support on *BSDs is much more spotty.

Hmph. I used to backport the Intel igb drivers into my previous custom build. They're actually buggier than the mainline kernel drivers on Linux.

That's just bs on your behalf; it's the same driver base as used in Linux, fyi.

You just said that you prefer newer Intel chipsets because they have a "better maintained" driver. There is no such problem on Linux afaik. SQM does not care.

But with that I actually meant the following:

The Realtek driver is less crash-happy on Linux.

Linux actually can use Broadcom network cards reliably, while on FreeBSD it simply did not work for years; maybe now it's fixed (hopefully), as said in this bug report (not mine): https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=206168

Linux can reliably operate most Aquantia chipsets (the consumer more-than-gigabit brand that isn't Realtek). Last year I loaded pfSense on my motherboard with an Aquantia 10Gbit ethernet and nope, not even detected. It's in ports as a "dev preview": https://www.freshports.org/net/aquantia-atlantic-kmod
Doesn't matter, as that board is not for a router (it has a Threadripper processor and runs Proxmox virtualization), I was just curious.

Linux has drivers for random oddball legacy cards like the Sun quad gigabit I mentioned, or the other 10Gbit one I linked above from Brocade.

I can use good USB ethernet adapters (ASIX chipsets) at actual gigabit speed, while on FreeBSD they don't go above half that for some reason.
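That's easy to verify with iperf3 on both ends (192.168.1.10 is a placeholder for the machine on the far side):

iperf3 -s                        # on the other machine
iperf3 -c 192.168.1.10 -t 60     # run long enough to catch thermal throttling
iperf3 -c 192.168.1.10 -t 60 -R  # and test the receive direction too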

Modem interfaces on BSD are a mess, you can only use the old serial PPP stuff, which is not fast enough for LTE. Wifi is worse.

FreeBSD is also picky with SAS controllers; god forbid you use non-LSI chipsets.

Heck, at this point even ffing VMware ESXi 7 has better driver support than the BSDs (the version where they dropped the Linux layer, so it's all native drivers, no cheating by importing all the Linux drivers).

Err, no they're not. The out-of-tree drivers are different:

https://sourceforge.net/projects/e1000/files/igb%20stable/

You've got quite a sharp tongue, haven't you?

I'll even give you a Makefile for the "bs" drivers, although for a much older version, since I haven't used them for a number of years:

#
# Copyright (C) 2010 OpenWrt.org
#
# This is free software, licensed under the GNU General Public License v2.
# See /LICENSE for more information.
#

include $(TOPDIR)/rules.mk

PKG_NAME:=kmod-igb-ooo
PKG_VERSION:=5.3.5.3
PKG_RELEASE:=1

PKG_SOURCE:=igb-$(PKG_VERSION).tar.gz
PKG_SOURCE_URL:=http://downloads.sourceforge.net/project/e1000/igb%20stable/5.3.5.3/
PKG_MD5SUM:=3e74ca3ac738413ced9adb00d3f69977

include $(INCLUDE_DIR)/kernel.mk
include $(INCLUDE_DIR)/package.mk

PKG_UNPACK:=zcat $(DL_DIR)/$(PKG_SOURCE) | $(TAR) -C $(PKG_BUILD_DIR) --strip-components=1 -xf -

MAKE_PATH:=src

MAKE_VARS:= \
	KSRC="$(LINUX_DIR)" 

MAKE_OPTS:= \
	ARCH="$(LINUX_KARCH)" \
	CROSS_COMPILE="$(KERNEL_CROSS)" 

# If the +kmod-ptp dependency does not go before the @PCI_SUPPORT dependency
# a missing dependencies error will result. 

define KernelPackage/igb-ooo
  SUBMENU:=Network Devices
  TITLE:=Intel(R) I354 Quad GbE Controller 
  DEPENDS:=+kmod-ptp @PCI_SUPPORT 
  KCONFIG:=CONFIG_IGB \
    CONFIG_IGB_HWMON=n \
    CONFIG_IGB_DCA=n
  FILES:=$(PKG_BUILD_DIR)/src/igb.ko
  AUTOLOAD:=$(call AutoLoad,35,igb)
endef

define KernelPackage/igb-ooo/description
 Out of tree kernel modules for Intel(R) igb
 
 Warning: do not select the core package kmod-igb as this is an out
 of tree version of the igb driver that shares the same module name
endef

$(eval $(call KernelPackage,igb-ooo))

Let me just be clear here: first you claim "facts", then alter it to "afaik". Newer hardware (in this case) does have better "support" and functionality; just because both can push line speed doesn't mean they're equally "good", which has also been noted above. That's about the same as claiming that a Fiat Uno has the same performance as a Tesla just because both can hit 100 km/h.

...and did you read your own link? https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=206168#c8
There are a lot of people using Broadcom NICs just fine, but as with everything else there are edge cases. That being said, Broadcom is perhaps not the most preferred vendor, and in general that also applies to Linux.

Yes, Marvell absorbed Aquantia, which most likely changed some priorities etc. Are you going to go after Marvell for abandoning Octeon on Linux too (which is more or less on life support at this point)? Maybe it works better on Linux, I have no idea. Both OPNsense/pfSense use an older base than FreeBSD upstream, just as OpenWrt doesn't track upstream Linux; that may be due to various circumstances, including development models, and may affect your overall experience.

While FreeBSD, or any BSD for that matter, does indeed carry fewer drivers than Linux, I don't think that's necessarily a bad thing. There are certainly different development philosophies and views on both sides in that regard. Drivers can be broken for a very long time on Linux too; one prime example is the MT7621 platform. I think we can safely say that no operating system/kernel is flawless.

I think your view of "good" might be a bit skewed, or at least the bar is a bit lower than average; "works" doesn't mean it's good. There's a reason why you don't see network appliances, or even cheap consumer routers, utilizing USB NICs instead of integrated or PCIe-based ones. Another example I can think of is the network driver support in ESXi: a lot of cheap NICs aren't supported, and there's a reason for it, even if you might not agree with their decisions. I can't comment on ASIX performance as I don't own, or know anyone who owns, such hardware. Worth keeping in mind that the build quality of USB adapters in general is quite poor; even higher-end controllers such as Aquantia have reports of various issues, even something as simple as overheating in normal environments.

I'm using NCM just fine with LTE; it's just as good or even ever so slightly faster than using OpenWrt on Allwinner ARM SoCs.
https://www.freebsd.org/cgi/man.cgi?query=if_cdce&apropos=0&sektion=4&manpath=FreeBSD+12.0-RELEASE&arch=default&format=html (12.0 is EOL by now, I just wanted to show that it's been in there for quite a while)
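On FreeBSD the NCM modem just shows up as a USB ethernet interface, so assuming it hands out addresses via DHCP (ue0 is a placeholder name), setup is only:

ifconfig ue0 up   # bring up the modem's ethernet interface
dhclient ue0      # get an address from the modem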

Wifi support is indeed sparse, depending on your application that might be an issue.

No idea about SAS controllers; I've only used LSI/Avago/Broadcom ones and they've worked flawlessly. LSI seems to be the most common vendor anyway, looking at various refurb outlets:
https://www.serversupply.com/CONTROLLERS/SAS-SATA/PCI-E/

Anyhow, I think we've gone enough off topic now :slight_smile:

I never claimed any facts about Intel drivers being bad on BSD, so I can use afaik for that.
The facts I'm stating about BSD's "driver support being spotty" concern the support for non-Intel stuff.

It's used in server cards from Dell and in HP thin clients, where Linux is also commonly used.
It's not commonly used in consumer hardware.
And no, my card was not consumer hardware but an old Dell server card.

The aqtion kmod is in "developer preview" stage in FreeBSD ports even now; of course they didn't have it in a stable firewall distro a year ago when I did the test.

I wasn't bashing BSDs, just explaining what I meant with the original statement. The fact that BSD supports less hardware is well-known and I don't understand why you are acting so defensive about it.

Yes, but having some broken drivers for some cheap embedded hardware is different from working reliably only with Intel chipsets on PC hardware.

And it has nothing to do with the USB NIC being good or bad. Good USB NICs cost as much as (or more than) a full PCIe card of the same category, and of course in a SoC it's easier to just integrate all controllers on the main interconnect bus; only toy consumer hardware like the Raspberry Pi solders a USB ethernet controller (and a USB hub) onto the board.

Yeah, but they have community packages for AQtion and for USB ethernet with ASIX chipsets, maintained by VMware engineers. https://www.virten.net/2020/04/how-to-add-the-usb-nic-fling-to-esxi-7-0-base-image/
They are literally "recommended" in the community over using the Realtek NICs.

Gigabit dongles above 15-20 euro are good (and usually have an ASIX chipset), and by that I mean they can handle months of 100% load.

Higher-than-gigabit dongles have the same overheating problems as higher-than-gigabit cards, unsurprisingly, and this is worsened if the OEM is also dumb.

Monoculture is bad.
We got extremely lucky that crossflashing old RAID controllers with HBA firmware was possible on LSI chipsets, or people on FreeNAS would be stuck paying the premium for HBA cards instead of using old cheap server hardware.
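From memory, the crossflash itself is just a few commands with LSI's sas2flash tool; 2118it.bin is the IT-mode image for a 9211-8i and only an example, and erasing the flash on the wrong card will brick it:

sas2flash -listall                         # find the adapter
sas2flash -o -e 6                          # advanced mode: erase the flash (6 is the erase level)
sas2flash -o -f 2118it.bin -b mptsas2.rom  # write IT firmware plus optional boot ROM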

Meanwhile Adaptec makes some sweet sweet controllers with 16 or 24 ports that can be set to HBA mode (i.e. no need to crossflash the cards), but you can't use them (reliably at least) in FreeNAS.
I've been using those in Linux, because I'm not a fan of expanders.

This is getting a bit off topic, but that isn't entirely correct. OpenBSD has an MBIM driver:
https://man.openbsd.org/umb.4

EDIT again: I thought this was a QMI driver based on the supported chipsets, but I see now that it is a vendor-specific serial driver: https://man.openbsd.org/umsm.4 So that doesn't count. But the MBIM driver is the real thing.
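For the curious, bringing up an MBIM modem with umb(4) looks like just a couple of lines, going by the man page (the APN is provider-specific, "internet" is a placeholder):

ifconfig umb0 apn internet   # set the APN
ifconfig umb0 up             # connect; put the same "apn"/"up" lines in /etc/hostname.umb0 to persist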

There is nothing preventing more drivers from being written for the BSDs, except maybe that no one really cares.


I'm digging because there seems to be some likelihood of getting fiber to my network - - - hopefully some time real soon. Given the rate of change in the speed of network connection needed to do things like even banking and accessing outside secure networks, my guess is that it won't be too long before a 200 Mbit connection will be almost inadequate, especially when anything like a VNC is used.
I've come to understand that there is no real way to future-proof anything to do with computers, but I am trying to reduce the need for hardware changes within just a couple of years.
(My present system is from 2012 and whilst it would be possible to buy faster hardware, I just haven't seen software that really takes advantage of multi-threading - - - at least not without changing this and that and the next thing, and even then it's just a tiny amount of software. So I'm thinking similarly in my networking plans - - - reasonable long-term stability.)

I don't see any of that "rate of change"; maybe you live in a country where population density is low, so you see internet providers doubling their bandwidth offers every few months. I know that in Romania, for example, you can get crazy cheap gigabit contracts, while in many other countries it's hard to get more than 100 Mbit (with effective speeds much lower than that).

But this does not mean that the rest of the world is progressing at the same speed, nor that services on the Internet actually need that bandwidth to operate. If anything, there is a push to lower bandwidth requirements now because of all the people that now work from home (and will keep working from home after the pandemic, since it's cheaper for the company), so more people overload the same lines and the available bandwidth per user decreases.

Afaik most web interfaces for banking or remote work don't need much more bandwidth now than ten years ago, and I don't see why they should increase.

Any decent remote desktop system (the "anything like a vnc" you mentioned) works fine at 50 Mbit or less, and there is no reason it should suddenly increase by a factor of 3. I use them for work all the time, and I don't have a great internet connection (between 10 and 50 Mbit depending on the time of day), nor do the clients I connect to most of the time.

I think the only reasons you should plan for more than 200 Mbit are if you want to watch very-high-res movies on streaming services, or if you want to play on remote gaming services like Stadia or GeForce Now or similar.
Or if you download A LOT of stuff, or want to host a home server.
I mean, you should have a good reason to do that; it's not a minimum requirement for anything critical.

The only thing that has increased dramatically is ads. Ads are a separate issue. Yes, they slow down load times (mostly a latency issue, not bandwidth: every time you open a website, that website's server calls other servers to download the ads it will show you, and this adds wait time; it's not like the old days when ads were just images on the same server). Yes, they eat a lot more bandwidth than the site itself, and yes, they are bigger now than they ever were. But since they are unnecessary, you can just use an adblock plugin in the browser (Firefox/Chrome) like uBlock Origin, or have this done by the router/firewall: OpenWrt, for example, has a package for it that uses the same blacklists as the browser plugins.
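The OpenWrt package I mean is adblock; installing it is just this (package names are from the official feeds):

opkg update
opkg install adblock luci-app-adblock   # luci-app-adblock adds the web UI
/etc/init.d/adblock restart             # (re)load the blocklists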

This is strange to me. All modern software uses (and runs better with) multithreading, especially web browsers. Maybe you are using the wrong tool to measure that, or maybe you just mean "it works well enough for me" instead, which is something I can agree with.

I mean, OK, they may not scale beyond 4 cores, but from 1 to 4 cores there is a significant difference even for basic office software and web browsers.

bobafetthotmail:
ajoeiam:

Given the rate of change in the speed of network connection needed to do things like even banking and accessing outside secure networks, my guess is that it won't be too long before a 200 Mbit connection will be almost inadequate, especially when anything like a VNC is used.

I don't see any of that "rate of change"; maybe you live in a country where population density is low, so you see internet providers doubling their bandwidth offers every few months. I know that in Romania, for example, you can get crazy cheap gigabit contracts, while in many other countries it's hard to get more than 100 Mbit (with effective speeds much lower than that).

But this does not mean that the rest of the world is progressing at the same speed, nor that services on the Internet actually need that bandwidth to operate. If anything, there is a push to lower bandwidth requirements now because of all the people that now work from home (and will keep working from home after the pandemic, since it's cheaper for the company), so more people overload the same lines and the available bandwidth per user decreases.

Afaik most web interfaces for banking or remote work don't need much more bandwidth now than ten years ago, and I don't see why they should increase.

In a different location, and barely ten years ago, I suffered with a connection where it could take as much as 6 minutes to complete ONE bank transaction.
That connection was rated at 2 Mbit down and 0.5 Mbit up, but in reality getting 10% of that was normal, and when a mosquito farted the connection would break.
Banks here - - - well, with 10 Mbit down and 2 Mbit up - - - the transactions are quite S L O W - - - today.
Three years ago - - - on the same connection - - - there was still some snap.
Banks are using a lot more eye candy - - - their websites are being optimised for 'stupid' phones, and these websites are being tuned so that a 25 Mbit down connection (and likely 5 Mbit up) gives an effective solution.
Our present bank is likely due another website revamp in less than 6 months (given past track record), and my guess is that the site will slow by at least 15 to 20%.
It is what it is - - - I can't change this!
Most of the programming world in North America lives where pipes of up to 1.5 Gbit down / 100 Mbit up are sorta normal - - - somehow they also think that those pipes are available everywhere (not true for almost ALL of rural North America!!!).

I think the only reasons you should plan for more than 200 Mbit are if you want to watch very-high-res movies on streaming services, or if you want to play on remote gaming services like Stadia or GeForce Now or similar.
Or if you download A LOT of stuff, or want to host a home server.
I mean, you should have a good reason to do that; it's not a minimum requirement for anything critical.

Having been on the web since it was a 300 baud modem, and looking at the huge push for 5G (they're stealing bandwidth from rural ISPs for this, in North America at least), I would suggest that your requirements are based on information from the past.

The only thing that has increased dramatically is ads. Ads are a separate issue. Yes, they slow down load times (mostly a latency issue, not bandwidth: every time you open a website, that website's server calls other servers to download the ads it will show you, and this adds wait time; it's not like the old days when ads were just images on the same server). Yes, they eat a lot more bandwidth than the site itself, and yes, they are bigger now than they ever were. But since they are unnecessary, you can just use an adblock plugin in the browser (Firefox/Chrome) like uBlock Origin, or have this done by the router/firewall: OpenWrt, for example, has a package for it that uses the same blacklists as the browser plugins.

(My present system is from 2012 and whilst it would be possible to buy faster hardware, I just haven't seen software that really takes advantage of multi-threading - - - at least not without changing this and that and the next thing, and even then it's just a tiny amount of software.

This is strange to me. All modern software uses (and runs better with) multithreading, especially web browsers. Maybe you are using the wrong tool to measure that, or maybe you just mean "it works well enough for me" instead, which is something I can agree with.

I mean, OK, they may not scale beyond 4 cores, but from 1 to 4 cores there is a significant difference even for basic office software and web browsers.

OK - - - but my OLD computer has 6 cores, and today it's not too expensive to have 12, and quite a few more are possible (I can't remember if it's 32 or 64 cores that's the top end right now).
It's sorta like the IBM 3083 at a facility I worked at in the early 80s, and them not using the parallel processor.
It still processed a lot of data - - - but with the parallel processor - - - well, it's like comparing a 1970s Yugo to a Bugatti Veyron (IIRC the new toy for very, very rich people).
Both will get you into town - - - but if you can open that Bugatti up - - - well, if I were on the autobahn I could likely get from Hamburg to Munich, even with a rest break or two, faster than riding the ICE - - - and that's saying something!

It would be quite nice if I could download a 20-page paper in a couple of seconds, then a few others just as quickly, and compare them, rather than needing a very measurable amount of time to do the same.

Faster is better, up to about 1 Gbps. But my front room has no ethernet, so I use a powerline adapter. That connection is limited to about 45 Mbps, but with good low latency. No one ever notices, even though the other desktop machines have full gigabit fiber access.

20 Mbps per active device is plenty for most usage.