Intel CPU vulnerability mitigations: significant for a dedicated openwrt router?

Of the various Intel CPU vulnerabilities which have been mitigated in the kernel, I'm curious about which mitigations are actually important to the attack surface presented by a dedicated router, and in particular a router booting OpenWRT natively, not containerized or virtualized.

Not all mitigations cause appreciable performance degradation, but of those that do, I'm wondering which (if any) are really needed on a fairly vanilla, natively-booting OpenWRT x86 router that isn't virtualized and isn't running many userland applications. Curious to what extent anyone's explored this topic.

Edit: Again, I'm familiar with the generalities, about not allowing non-admin users to log in or run arbitrary code, not running virtualized CPUs or devices, etc. I'm interested in the attack surface of a dedicated router, not virtualized, no users logging in running arbitrary code, reasonably locked down firewall-wise.

Are there any mitigations which should NOT be disabled on a dedicated, non-virtualized router that offers no user accounts, exposes a WAN interface but no services over it, and only standard services such as DHCP, uhttpd, ssh DNS etc. to the local network?
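For reference, the kernel publishes its own view of this, one sysfs file per CPU issue, so you can see what's actually active before deciding anything. A minimal sketch (the sysfs path is standard on any modern kernel):

```shell
#!/bin/sh
# Print the kernel's per-vulnerability status. Each file under this
# directory reports "Not affected", "Vulnerable", or "Mitigation: ...".
VULN_DIR=/sys/devices/system/cpu/vulnerabilities
if [ -d "$VULN_DIR" ]; then
    for f in "$VULN_DIR"/*; do
        printf '%-28s %s\n' "$(basename "$f"):" "$(cat "$f")"
    done
else
    echo "kernel too old to report vulnerability status"
fi
```

Anything reported as "Not affected" costs you nothing either way, which narrows the question to the entries that actually show a mitigation.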


In short? None.

Most of the issues are caused by sharing a host, which is why hosting companies shit bricks when it came out. The Linux kernel has mitigations in, and unless you are letting VMs or other things co-exist with your OpenWrt install... then you're pretty much safe.

(edit - there are people who disable the mitigations, as they do cause a performance hit. Anywhere between 2-10% IIRC. It basically comes down to "do you trust the code running on your machine.")

(edit2 - It will depend on the chip you are running. One of the early major hits was branch prediction: by restricting that, you seriously hampered pre-Spectre chips due to their execution pipeline. Requiring the predictor state to be flushed and reloaded incurred some major performance penalties. But you really need a research paper on CPU design and the effects of Spectre/Meltdown etc., plus a reasonable knowledge of how a modern CPU executes instructions.)


Well, it's those that I was asking about: if I were to do something as crude as adding "mitigations=off" to the command line, would the resulting exposure be of concern on a dedicated OpenWRT router?
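For the crude approach, here's a sketch of checking the current command line and adding the flag. The grub.cfg path is an assumption based on the stock OpenWrt x86 image layout; adjust for your bootloader, and back up first, since a broken command line can leave the box unbootable:

```shell
#!/bin/sh
# Show what the running kernel actually booted with.
cat /proc/cmdline

# On OpenWrt x86 the boot arguments typically live in /boot/grub/grub.cfg
# (assumed path); append "mitigations=off" to the kernel line(s).
GRUB_CFG=${GRUB_CFG:-/boot/grub/grub.cfg}
if [ -f "$GRUB_CFG" ]; then
    cp "$GRUB_CFG" "$GRUB_CFG.bak"
    sed -i '/linux /s/$/ mitigations=off/' "$GRUB_CFG"
fi
```

After a reboot, the sysfs vulnerability files should report "Vulnerable" for the issues whose mitigations were switched off, which confirms the flag took effect.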

damn!... that's hectic...

Depends on a couple of things. Firstly, whether the CPU microcode / BIOS is up to date. Then whether the motherboard manufacturer has got the most recent firmware from Intel.

It's slightly easier on Linux, as you can actually force-install newer microcode.

dmesg | grep microcode

The package names are as follows for popular Linux distros:

microcode_ctl and linux-firmware – CentOS/RHEL microcode update packages
intel-microcode – Debian/Ubuntu and clones, microcode updates for Intel CPUs
amd64-microcode – Debian/Ubuntu and clones, microcode firmware for AMD CPUs
linux-firmware – Arch Linux, microcode firmware for AMD CPUs (installed by default; no action needed on your part)
intel-ucode – Arch Linux, microcode firmware for Intel CPUs
microcode_ctl, linux-firmware and ucode-intel – SUSE/openSUSE microcode update packages

I have no idea if there is an OpenWrt package for that or if you would have to build it yourself.
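For what it's worth, a quick way to check from the shell is to see whether boot already applied a microcode update and whether the local package manager carries one. Whether your OpenWrt feed has a package under the Debian-style names is an assumption worth verifying:

```shell
#!/bin/sh
# Did the kernel load updated microcode at boot?
dmesg 2>/dev/null | grep -i microcode || echo "no microcode messages found"

# On OpenWrt, search the feeds; "intel-microcode" / "amd64-microcode"
# are the Debian-style names and may or may not exist in your feed.
if command -v opkg >/dev/null 2>&1; then
    opkg list | grep -i microcode || echo "no microcode package in feeds"
fi
```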

It's actually far harsher if you disable SMT/HT, which was one of the original mitigations. With newer microcode you don't HAVE to disable SMT/HT, but some still recommend it (mostly on VM hosts).

(edit - I was slightly wrong. The effects are actually harsher.)

It found that while the impacts vary tremendously, from virtually nothing to significant, on an application-by-application level, the collective whack is ~15-16 per cent on all Intel CPUs with Hyper-Threading left enabled. Disabling it increases the overall performance impact to 20 per cent (for the 7980XE), 24.8 per cent (8700K) and 20.5 per cent (6800K).
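If you want to inspect or change SMT state without a trip into the BIOS, the kernel has had a runtime knob since 4.19. A read-only sketch (the actual disabling line is left as a comment):

```shell
#!/bin/sh
# Inspect SMT/Hyper-Threading state via sysfs (kernel >= 4.19).
SMT=/sys/devices/system/cpu/smt
if [ -d "$SMT" ]; then
    echo "active:  $(cat "$SMT/active")"     # 1 = sibling threads online
    echo "control: $(cat "$SMT/control")"    # on / off / forceoff / notsupported
else
    echo "smt sysfs interface not available"
fi
# To disable at runtime (as root):
#   echo off > /sys/devices/system/cpu/smt/control
# Or permanently, via the kernel command line:  nosmt
```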

Personally, if you are reasonably sure of what you download etc., then the mitigations can be turned off. If you wish to be more careful and have security policies on your machines (e.g. no-exec on temp folders), then it's not too harsh a penalty to keep them on.
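The "no-exec on temp folders" policy mentioned above is just a mount option. As an /etc/fstab sketch for a generic Linux box (OpenWrt manages /tmp itself, so this is purely illustrative, and the size value is an arbitrary example):

```
# Mount /tmp as tmpfs with execution of binaries disabled
tmpfs  /tmp  tmpfs  defaults,noexec,nosuid,nodev,size=256M  0  0
```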

Even things like rowhammer (bit flipping in memory) require programs to be run on the CPU in question. Yes, it's an issue, but you have to look at what the machine does, how exposed it is, and what updates it gets. As long as you can control it and keep remote access off the machine, then you are fairly safe. That's not to say you are immune; there will always be one-dayers etc. But in terms of "locking your front door", as it were, you have done what you can.


I suppose a slightly better way of putting it is like this.

If you have an older car you have to run it through emissions tests (places like California impose far stricter limits). These add things like catalytic converters, unleaded fuel and engine management to your car, which means the raw power is throttled. The car still runs both pre- and post-emissions, but its performance is hindered.

If you can run that car with no restrictions on a desert island, with no one around, then you can drive it flat out making as much smoke as you like and no one will care. (This is your trusted code, single user model)

Running it in a crowded city will get you yelled at. (This is shared code, multiple users, potentially hostile code running on your machine.) Thus you should run mitigations etc.


Also, for those curious about the performance hits:

Michael does some nice coverage and benchmarking of this if you are curious and want to know more.

More info:

@Cheddoleum I think this is the breakdown you are looking for. There are other whitepapers around that give advice, but this one has detailed usage recommendations.


Especially on older CPUs (Sandy Bridge, Ivy Bridge and earlier), the impact of the mitigations is really significant and noticeable (still, there's no way around them on interactively used systems, like desktops, workstations or servers).

Keep in mind that malicious code might already be implemented in JavaScript, so as soon as you run a web browser on a machine, you need to care.

A router in a rather default configuration (without NAS, BitTorrent and other things running) is (hopefully) only running trusted code: it merely passes data packets along without interpreting them, and can't easily be fooled into running exploitable code patterns. While I still wouldn't disable the mitigations here either, the actual risks are rather contained.


I didn't know you could turn an Intel machine into an OpenWRT router. Interesting.


FYI - any machine running the Linux kernel can be turned into a router by enabling the IPv4 and/or IPv6 forwarding sysctls. :wink:
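The sysctls in question, as they'd appear in /etc/sysctl.conf on a generic Linux box (OpenWrt manages these itself through /etc/sysctl.d/ and netifd, so this is the plain-Linux form):

```
# Enable packet forwarding between interfaces
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```

Apply with `sysctl -p` (or a reboot); after that, the box will forward packets between its interfaces, and firewalling/NAT is up to you.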

Latest generations handle vulnerabilities really well:

Out of 96 different tests run in the default and then mitigations=off states on the Core i9 11900K, there was a 2% difference overall for the geometric mean.

Long story short: if you are running Intel Alder Lake, it's likely not worth messing around with "mitigations=off", as taking on the security risk for very little to no gain is not a worthwhile trade-off.