Security of OpenWrt vs OPNSense vs desktop operating systems

In terms of security, high configurability, support for legacy methods, and a wide variety of options are not good ideas.

But yeah OpenWRT is generally considered well maintained and secure.

Protocols can adapt/evolve over time by supporting new authentication/encryption algorithms.
Modern solutions are generally preferable, unless you are forced by external factors to use legacy ones.
Actually, OpenWrt was one of the first platforms to provide official WireGuard support.


The real issue there is that OpenWrt is just like other Linux distributions.

But the main benefit is a typical OpenWrt router's power consumption.
A typical desktop machine costs $500 and its power usage is around 100 W.
A typical router with OpenWrt costs $50 and its power usage is around 5 W.

So if you're using a desktop machine as a router:

  • first, it will cost much more up front
  • second, the power bill for running it 24/7 will be higher

Here is an easy calculation.
Desktop, 24/7 at 100 W: for one day that's 2.4 kWh, for one month (30 days) 72 kWh, for one year about 876 kWh.
Router at 5 W: for one day it's 120 Wh, for one month 3.6 kWh, for one year about 44 kWh.
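The arithmetic above can be sketched in a few lines. The electricity price below is an assumed example only - rates vary a lot by region:

```python
# Rough energy-use comparison between an always-on desktop and an
# OpenWrt router, matching the wattage figures quoted above.

def yearly_kwh(watts: float) -> float:
    """Energy drawn by a device running 24/7 for one year, in kWh."""
    return watts * 24 * 365 / 1000

desktop = yearly_kwh(100)   # ~876 kWh/year at 100 W
router = yearly_kwh(5)      # ~43.8 kWh/year at 5 W

# Assumed price per kWh in local currency; adjust for your region.
price_per_kwh = 0.30
print(f"desktop: {desktop:.0f} kWh/year, ~{desktop * price_per_kwh:.0f}/year")
print(f"router:  {router:.1f} kWh/year, ~{router * price_per_kwh:.0f}/year")
```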

That's why many people (including me) use OpenWrt: from a security point of view it's like all other Linux distributions.


There are important differences between desktop and embedded operating systems:

  • How many lines of code is OpenWRT?

The Linux kernel is some 28 million LOC. OpenSSH alone is hundreds of thousands of LOC. That's a huge attack surface.

  • How many security researchers work on OpenWRT? It’s much less audited and scrutinized.

  • Could low-power-consumption requirements mean trade-offs in security?

For example, an embedded device may not have enough entropy.

  • OpenWRT is dedicated to wireless, routing and security.

About LOC - it's impossible to calculate, since OpenWrt is built FOR a specific device, and different devices have different capabilities. A quick example is OpenVPN: by default it isn't included, but you can quickly add it with opkg. Next, the Linux kernel may be 28 MLOC, but that is with all architectures, all network drivers and so on. OpenWrt doesn't use all of it - for example, there are no POWER9 router devices, nor devices with NUMA.
Same with OpenSSH... it's actually good software, but it's too big to fit in embedded environments with a 300 MHz single-core CPU, 32 MB of RAM and 4 MB of flash. That's why OpenWrt uses Dropbear. And its security record is great!
https://www.cvedetails.com/product/33536/Dropbear-Ssh-Project-Dropbear-Ssh.html?vendor_id=15806
As you can see, its track record is good. Another benefit is that OpenWrt mostly runs not on x86 but on ARM and MIPS architectures.

And if there is a security issue, the OpenWrt devs publish an update ASAP. So most of the time you only need to download and apply the update.

The security researchers who work on OpenWrt are largely the same ones who work on Linux and its userland apps.
No, low power consumption doesn't trade off security. But my wallet benefits from it! And not only mine!

About entropy: technically OpenWrt doesn't have entropy issues of its own. It all depends on the hardware manufacturer (some chips have a hardware random number generator) and the Linux kernel.
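As a small illustration, on a Linux-based system (including OpenWrt) you can inspect the kernel's entropy estimate via procfs. The path below is the standard Linux location; the sketch simply returns None on systems where it doesn't exist:

```python
# Read the kernel's available-entropy estimate. Linux exposes it in
# /proc/sys/kernel/random/entropy_avail; on recent kernels (5.6+) the
# CRNG rework means this typically just reads 256, and the old
# entropy-starvation problems on embedded devices are largely gone.
from pathlib import Path
from typing import Optional

ENTROPY_PATH = Path("/proc/sys/kernel/random/entropy_avail")

def available_entropy() -> Optional[int]:
    """Return the kernel's entropy estimate in bits, or None if unknown."""
    try:
        return int(ENTROPY_PATH.read_text().strip())
    except (OSError, ValueError):
        return None  # not Linux, or procfs unavailable

print(available_entropy())
```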

While security for home users is generally good compared to consumer products, much of your concern will most likely boil down to knowledge, maintenance, functionality and, to some extent, hardware.

OpenWrt's Achilles heel is, in my opinion, maintenance: due to the nature of the target devices you normally can't update individual packages, updates can be disruptive (POLA - https://docs.freebsd.org/en_US.ISO8859-1/books/handbook/freebsd-glossary.html#pola-glossary) and in many cases require manual intervention, more so than on other distributions. This may encourage a "set it and forget it" approach, simply because upgrading takes time. You might also want to consider that functionality can be severely limited by the targeting of "low-end" devices / the one-size-fits-all approach, depending on the application.

If you look at dedicated distributions such as OPNsense/pfSense etc., maintenance usually requires very little effort, and because they usually target faster, more powerful devices you usually see a substantial difference in terms of functionality, logging and reporting.

Using a generic distro such as FreeBSD, Debian etc. is by far the most flexible solution, but it usually requires somewhat more knowledge to configure and maintain, and there's less "integration" of tools out of the box (no web UI, reporting etc.). They can be just as secure as anything else, depending on configuration. It also boils down to what you're comfortable with; I personally don't mind using a "full" distro compared to an appliance-like one, but I'd also say they don't exactly replace each other, as it all depends on the use case.

One more thing to consider is that more services also (in theory) mean more potential attack vectors. That said, many services won't face the Internet at all, and your network is most likely not an interesting target anyway, so I wouldn't put too much weight on that aspect for a home user.

As for hardware, you might want to pay attention to vulnerabilities such as Meltdown etc. In all honesty, I highly doubt it'll be a concern for 99.9% of home users, but you should still try to avoid "broken" hardware if possible.

I personally run FreeBSD as my "router/firewall OS" and OpenWrt for wireless APs, simply because that's what I'm comfortable with, it's relatively low-maintenance and it offers great flexibility. I do run a bunch of standalone OpenWrt routers/gateways, but they're becoming time-consuming and will most likely be replaced with an SBC such as the RockPro64 paired with a dual-port Intel NIC.


Many distros these days also target ARMv7+; the general exception is firewalls, because very few devices offered suitable and affordable hardware. Also, any distro will publish security updates ASAP, and "download and apply update" is usually not quite that effortless.

"The security researchers who work on OpenWrt are largely the same ones who work on Linux and its userland apps." I think what you're trying to say is that OpenWrt is a downstream consumer, just like pretty much any other distribution. That can have both pros and cons, though, as OpenWrt sometimes ends up with custom software solutions that aren't reviewed as much or as frequently.


Well - distributions are just compilations of a kernel, drivers and some userland software.

So technically the same happens on OpenWrt, more or less. Sometimes the software is very custom, but the reason for that is the limited resources OpenWrt runs on.


I agree with what you say. It seems to me that embedded devices (running OpenWRT or other OSs) should not face the internet, due to their limitations. The edge router is best served by a flexible, security-focused OS, e.g. OPNsense, or a full OS, e.g. OpenBSD or a Linux distribution, running on a thin-client mini PC. I am still undecided between OPNsense and a BSD distribution.

OpenWRT of course is a very good embedded OS, and the list of CVEs linked above speaks positively of its security. I would probably blame hardware, heterogeneity and limited resources.

Good question: when I connect to AWS or Google or Dropbox etc., what sort of operating system handles my request? In other words, what do these corporations run at the edge of their data centers?

Usually a customized variant of BSD or a Linux distro.
Amazon has its own flavour; Netflix, for instance, runs both FreeBSD (for serving data) and Linux, etc.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/amazon-linux-ami-basics.html
Many vendors don't disclose theirs, due to security concerns.


To summarize the thread's answer to the question of a secure firewall and router operating system, for others who might be interested:

OpenBSD >> FreeBSD >> OPNsense/pfSense >> OpenWRT.

The first two work only if your needs are basic. For advanced networking, customize option 2 or use option 3. Keep option 4 for WiFi and embedded devices inside the LAN.

If you want to minimize the chance of intrusion, your first task is to set up a proper multi-layer security concept that ensures that if one layer is breached, it will not impact the others.
E.g. separation of firewall and VPN gateway, containerized apps, dedicated user authentication for network and apps, ...
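A minimal sketch of that layering idea, with hypothetical zone names: model each network segment as a zone and allow only explicitly listed flows, so a breach in one zone doesn't automatically reach the next:

```python
# Toy model of a default-deny, multi-layer network policy.
# The zone names and allowed flows below are made up for illustration.

ALLOWED_FLOWS = {
    ("lan", "wan"),        # clients may reach the internet
    ("lan", "vpn_gw"),     # clients may reach the VPN gateway
    ("vpn_gw", "wan"),     # the VPN gateway tunnels out
    ("iot", "wan"),        # IoT devices may phone home...
    # ...but note: no ("wan", "lan") and no ("iot", "lan") -
    # a compromise in one zone stays contained in its own layer.
}

def is_allowed(src: str, dst: str) -> bool:
    """Default-deny: a flow is permitted only if explicitly listed."""
    return (src, dst) in ALLOWED_FLOWS

print(is_allowed("lan", "wan"))   # True
print(is_allowed("wan", "lan"))   # False - unsolicited inbound is dropped
print(is_allowed("iot", "lan"))   # False - layers stay separated
```

The design choice here mirrors what firewall zone configs express: you enumerate permitted flows instead of enumerating forbidden ones.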


True, defense in depth is good.

However, I note that each layer must still be properly secured. Otherwise, if the implementation in each layer doesn't adhere to good security practices, the overall security will still be weak, even if layered.

Sometimes developers working on one component assume a common goal is being handled by those building other components. This is dangerous.

Don't you wish a security auditor with a love for router firmware would suddenly appear?

I have always been happy with security updates from OpenWrt, and I think my expectation has always been "it's up to you", the same as administering a server - an expectation that sometimes gets lost behind the appliance abstraction.

When was the last time there was a big security issue in OpenWrt that wasn't quickly fixed?


Haha, ok. Remove the internet connection and lock it in a room with a big angry guard spider.
If anyone wants information, they'll have to fill in 10 papers and send them to 10 different departments. And then, maybe, if they've been nice, after three months of paperwork and handling time they can get a 2-minute audience with the server. Highly overwatched by the big spider (they have a lot of eyes!), of course!

How do we talk about this kind of security after SolarWinds?

And you only focus on a single line of defense (the router's firewall), not defense in depth.

Lines of code... secure or not secure? At which LOC count is the breaking point?
It is like the USA vs Russia defense budgets, where the measurement is "biggest budget = best defense".

And the weak point in cyber security is almost always the human operators because “ease of use is not important”.


My 2 cents.

In the past I also believed that OPNsense had better features than OpenWRT and was hoping to move to it some day. But as I studied it, I didn't find any feature on it that's not available on OpenWRT. And OpenWRT has better IPv6 multihoming support. IPFire doesn't even support any sort of multi-WAN.

OpenWRT uses some alternative or in-house software aimed at embedded systems, but it also supports x86 and AMD64, and most software can be installed with opkg. For example, we can install vim, bash, ip, htop, zabbix-agent. We can install sudo and add a user so we don't use root. These just don't come installed in the official images because many devices have limited storage and RAM, and installation may be a pain. The only package I miss is a Subversion client with HTTPS support. Ah, also a traffic-monitoring tool.

Seeing OpenWRT as aimed only at wireless is a myth. Being the best at it doesn't mean that's all it's aimed at.

I see the size of the code as a minor factor. What matters is how well the software is developed, maintained and tested; how good its architecture is; and whether its development follows a philosophy of focusing on quality and testing, or one of implementing new features ASAP and fixing things later. Whether its software architects and main engineers are highly skilled, or amateurs doing their best with what they know.

When an independent 3rd party reports a security issue, how quickly is it fixed? Is it properly fixed or needs subsequent patches?

Linux specifically, and FOSS in general, are developed by multiple independent groups and developers. That makes it harder to keep a concise architecture, but it also allows multiple independent parties to know them in detail, to follow their updates and to provide fixes. Remember that by "Linux" we mean a large number of independently developed pieces of software working together.

BSD is less "bloated" with code, but it is also less used and tested, and fewer engineers are prepared or willing to work on developing or testing it.

What I mean, then, is that defining which is the most secure is very complex in itself, and less relevant than the skill of the router's admin. It doesn't matter how secure the software is if you install a lot of services on it and use it for a lot of stuff that should be on a proper server, not on a router. Its security doesn't matter if you make bad configs, or DMZ your server, or leave it with default settings any hacker will know, or if you don't keep it updated and a hacker exploits a year-old version of some software. I worked at a multinational company where random people would use the app server as a desktop, just because they wanted to talk with the "IT manager" and that was the only "PC" on the desk, and he wouldn't take away its monitor and keyboard because he didn't feel secure making the app server remotely accessible.

In my case, reliability is as important as, or more important than, security, because no amount of security matters when the router has crashed or failed to upgrade. Being able to back up its drive, instead of only its text configs, is very important for that: upon some issue I can just restore a full backup and get it working again. Then, with Internet access restored, I can calmly analyze on a VM what happened - that is, if the OS can be installed on one.

With that, it's clear that for me a low-power PC is preferable to an embedded device as a router. Also, I've read about people with 1 Gbps+ Internet access saying that embedded routers get limited to around 800 Mbps when features like many firewall rules are in place, and that they're considering building a PC when their ISP increases their speed. And when we look at the price of top ARM routers that support OpenWRT, it isn't much lower than a mini PC.

Regarding generic Linux distros, you can do pretty much anything on them that you can on OpenWRT or IPFire. But the latter are built specifically to be used as routers. If you'd take the time to prepare a Debian or Arch to use as a router, you'd be better off building a new distro, publishing it and building a community to help you keep it updated and tested. Otherwise your particular distro will be less secure than one built by tens of skilled people and used by thousands.

Exactly. The same goes for any Linux or BSD distro or tool that is frequently maintained and updated.

If one needs to ask which is more secure, then there's no widespread consensus on the differences in this regard. Therefore it comes down to the admin's skill in setting up the router, and to the features each system offers and the admin needs.

O:)


So true. This has been my experience with VPN tunnels. At least when you make your own tunnels, so you run both the client and the server, this problem becomes clear.

I tried WireGuard when it was released, to see if it would run faster than OpenVPN.
It did, but I still use OpenVPN!

WireGuard's two selling points: first, that it has only about 5,000 LOC compared to the very many LOC in OpenVPN.
And second, that it is faster than OpenVPN.

WireGuard did connect without fault. It did light up the VPN symbol in iOS.
It was really fast - actually as fast as without a VPN on the 4G network, so I got suspicious.
So I ran a DNS leak test, and I was on the 4G network.
WireGuard has no secure way of managing keys either.

So OpenVPN has a lot of LOC, but it is rock solid. When it is on, it is on - no faults there and no leaks. It also has a secure way of managing the config files.

So the number of lines in the code isn't everything, if the lines are well written.
If I can't trust the software to tell me correct information, then a small amount of LOC doesn't matter.
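The leak test described above boils down to one comparison: the address the outside world sees for you should be your VPN exit address, not your carrier address. A minimal sketch of that check (the IPs are made-up documentation addresses; a real test would fetch the observed IP from a what-is-my-IP service while the tunnel is up):

```python
def traffic_leaks(observed_ip: str, vpn_exit_ip: str) -> bool:
    """True if the externally observed address is NOT the VPN exit,
    i.e. traffic is bypassing the tunnel, as in the story above."""
    return observed_ip != vpn_exit_ip

# Hypothetical addresses for illustration only.
print(traffic_leaks("198.51.100.7", "203.0.113.5"))  # True - still on the 4G network
print(traffic_leaks("203.0.113.5", "203.0.113.5"))   # False - tunnel is actually used
```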

Code quality can be discussed at length, but you're looking at two very different release models. As for quality, BSD is usually considered to have better code; however, that is also usually "blamed" for its lagging device support. There are a lot more bugfixes backported each week to a Linux LTS kernel compared to a BSD release, which usually gets only a handful during its (supported) lifetime, and that can to some extent be attributed to attention. That doesn't necessarily mean one is better by definition; the release models simply offer a different kind of "freedom".

What you should remember is that BSD is used a lot in commercial products, and if by "less tested" you mean a particular device running X or Y, you're correct. However, a lot of products ship with ancient versions etc., so I'd say such an argument might not hold much value in the end.

In fact, by tested I mean software engineers testing and reporting issues, preferably providing fixes too, before a release goes stable. Being used by many users is important too, but at the end of the day users do nothing for the quality of the software and the progress of its features if they don't contribute to development and don't even report the issues they find.

I just had to register here to write that:

pfSense is considered by far more stable than OPNsense.

However, both had the issue that unbound itself was crashing, and the latest pfSense stable has other serious issues, like NAT being broken in some circumstances (yay - as if that weren't a major issue/feature!).

Especially notice this comment:

EDIT: In the comments below Netgate say - there will be NO 2.5.2 release. It will be fixed in 2.6.0. They have no release date for 2.6.0. So fix might be years away from now( I think it took them 3 years for 2.5.x). So I would say options are: stay on 2.4.x or move to OPNsense.

OPNsense, due to its nature, encountered it (the issue with unbound) a few months earlier, but they didn't manage to fix it quickly (I'm not sure if they ever fixed it completely - I never used OPNsense).

pfSense, on the other hand, took years (I think it was really years, though my memory may be tricking me) to upgrade to 2.5.x, only to encounter... the same issue!

So much for "stability".

Because the unbound package in the FreeBSD 12.2 repos is broken/unstable - even at 1.13.0.

By broken I mean - crashing.

Either within minutes, or - if you disable the most useful features for local resolving - hours, sometimes even days. But far from what I'd call stable.

I never had that happen with any WRT, whatever the port - despite all the other issues with wireless on some routers.

I've been pretty happy with pfSense so far, but I guess I'll go back to OpenWRT, on arm64 now instead of amd64, seeing that OPNsense and pfSense both have more or less the same common issue:

A base system that's slow to tackle such serious issues (in my book).

On the other hand the Linux world is MUCH bigger and better maintained, even arm64 nowadays.

Do I really care if the base system is less hardened? That the kernel is more bloated?

I don't, as long as the system is stable for what it's made for.

I've never had a Linux machine break on me in recent years, no matter whether amd64, armv7 or arm64.

The worst of it all:

Old FreeBSD Packages need to be compiled from scratch (not sure about HardenedBSD).

There's literally NO archive anywhere, so it's not easy to just install an old pkg.

You have to compile it yourself. Always.

And that compilation can't be done on the respective firewalls themselves, no matter how powerful the hardware they run on, because these features are obviously optimized out for various reasons.

My advice is simple and sound:

Stick with OpenWRT.

If you need more, rather set up another small dedicated box or SBC, e.g. with Pi-hole or IPFire.

It will save you lots and lots of nightmares.

If you really want to give either *sense a try, choose a stable OS like Proxmox VE (which, ironically, runs on Debian) and run either of them in a VM.

There's just too much pain involved if things turn out badly (and they will, at some point).

I think the good old saying of BSD being more stable is slowly eating itself:

As the old engineers move on (is Sony still using BSD for their newer consoles?), there are few newer ones coming along to pick up their work (just guessing).

A system that isn't maintained properly can only go so far, no matter its great architecture.

And the more complicated the architecture, however sound, the harder it is to maintain as well.