Security of OpenWrt vs OPNSense vs desktop operating systems

Haha, ok. Remove the internet connection and lock it in a room with a big angry guard spider.
Does anyone want information? They will have to fill in 10 papers to send to 10 different departments. And then, maybe, if they have been nice, after three months of paperwork and handling time they can get a 2-minute audience with the server. Closely overwatched by the big spider (they have a lot of eyes!), of course!

How do we talk about this kind of security after SolarWinds?

And you only focus on a single line of defense (the firewall line of the router), not defense in depth.

Lines of code alone don't make something secure or insecure. At which number of LOC is the breaking point?
It is like comparing the USA vs Russia defense budgets, where the measurement is "biggest budget = best defense".

And the weak point in cyber security is almost always the human operators, because "ease of use is not important".


My 2 cents.

In the past I also believed that OPNsense had better features than OpenWRT and was hoping to move to it some day. But as I studied it, I didn't find any feature on it that's not available in OpenWRT. And OpenWRT has better IPv6 multi-homing support. IPFire doesn't even support any sort of multi-WAN.

OpenWRT uses some alternative or in-house software aimed at embedded systems, but it also supports x86 and AMD64, and most software can be installed with opkg. For example, we can install vim, bash, ip, htop, zabbix-agent. We can install sudo and add a user so we don't have to use root. They just don't come installed in the official images because many devices have limited storage and RAM, and installation may be a pain. The only package I miss is a Subversion client with HTTPS support. Ah, also a traffic monitoring tool.
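On a device with enough storage, pulling in those extra tools is a few opkg commands. A rough sketch; the exact package names (e.g. `ip-full`, `shadow-useradd`) vary between OpenWrt releases, so verify them with `opkg list` first:

```shell
# Refresh the package index (OpenWrt does not persist lists across reboots)
opkg update

# General-purpose tools that are absent from the default images
opkg install vim bash htop ip-full zabbix-agentd

# Install sudo plus the useradd applet, then create an unprivileged admin user
opkg install sudo shadow-useradd
useradd -m -s /bin/ash admin
passwd admin
echo 'admin ALL=(ALL) ALL' >> /etc/sudoers
```

After this, day-to-day administration can happen via `sudo` from the `admin` account instead of logging in as root.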

Seeing OpenWRT as aimed at wireless is a myth. Being the best at wireless doesn't mean that's all it's aimed at.

I see the size of the code as a minor factor. What matters is how well the software is developed, maintained and tested; how good its architecture is; and whether its development follows a philosophy of focusing on quality and testing, or one of implementing new features ASAP and fixing them later. Whether its software architects and main engineers are highly skilled, or amateurs doing their best with what they know.

When an independent 3rd party reports a security issue, how quickly is it fixed? Is it properly fixed, or does it need subsequent patches?

Linux specifically, and FOSS in general, are developed by multiple independent groups and developers. That makes it harder to keep a cohesive architecture, but it also allows multiple independent parties to know them in detail, follow their updates and provide fixes. Remember that by "Linux" we mean a large amount of independently developed software working together.

BSD is less "bloated" with code, but it is also less used and tested, and fewer engineers are prepared or willing to work on developing or testing it.

What I mean, then, is that defining which is the most secure is very complex in itself, and less relevant than the skill of the router's admin. It doesn't matter how secure the software is if you install a lot of services on it and use it for a lot of stuff that should be on a proper server, not on a router. Its security doesn't matter if you make bad configs, or DMZ your server, or leave it with default settings that any hacker will know inside out. Or if you don't keep it updated and a hacker exploits a 1-year-old version of some software. I worked at a multinational company where random people would use the app server as a desktop, just because they wanted to talk with the "IT manager" and that was the only "PC" on his desk, and he wouldn't take away its monitor and keyboard because he didn't feel safe making the app server remotely accessible.

In my case, reliability is as important as or more important than security, because no security matters when the router has crashed or failed to upgrade. Being able to back up its whole drive instead of only its text configs is very important there, because upon some issue I can just restore a full backup and get it working again. Then, with Internet access restored, I can calmly analyze what happened in a VM, that is, if the OS can be installed in one.
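For an x86 router booting from a small SSD, that full-drive backup can be done with plain `dd` from a live USB stick. A minimal sketch; the device name `/dev/sda` and the mount point `/mnt/usb` are assumptions, so check `lsblk` before running anything like this:

```shell
# Boot a live Linux from USB so the router's disk is not mounted,
# then image the whole drive (assumed to appear as /dev/sda)
dd if=/dev/sda of=/mnt/usb/router-backup.img bs=4M status=progress
sync

# Restoring after a failed upgrade or dead drive is the reverse direction
dd if=/mnt/usb/router-backup.img of=/dev/sda bs=4M status=progress

# The same image can be booted in a VM for offline analysis, e.g. with QEMU
qemu-system-x86_64 -m 1024 -drive file=/mnt/usb/router-backup.img,format=raw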

With that, it's clear that for me a low-power PC is preferable to an embedded device as a router. Also, I had read some people with 1 Gbps+ Internet access saying that embedded routers get limited to around 800 Mbps when things like many firewall rules are in place, and they are considering building a PC when their ISP increases their speed. And, when we look at the price of top Arm routers that support OpenWRT, it isn't much lower than that of a mini-PC.

Regarding generic Linux distros, you can do on them pretty much anything you can on OpenWRT or IPFire. But those distros are purpose-built for use as routers. If you'd take the time to prepare a Debian or Arch install for use as a router, you'd be better off building a new distro, publishing it, and building a community to help you keep it updated and tested. Otherwise your particular distro will be less secure than one built by tens of skilled people and used by thousands.

Exactly. Same for any Linux or BSD distro or tool that is frequently maintained and updated.

If one needs to ask which is more secure, then there's no widespread consensus on the differences in this regard. Therefore it comes down to the admin's skill in setting up the router, and to the features each one has and the admin needs.



So true. This has been my experience with VPN tunnels. At least when you make your own tunnels, so you run both the client and the server, this problem becomes clear.

I tried WireGuard when it was released, to see if it would run faster than OpenVPN.
It did, but I still use OpenVPN!

WireGuard's two selling points: first, "we have only 5000 LOC, compared to the enormous LOC count of OpenVPN".
And second, "we are faster than OpenVPN".

WireGuard did connect without fault. It did light up the VPN symbol in iOS.
It was really fast, actually as fast as no VPN at all on the 4G network, so I got suspicious.
So I ran a DNS leak test, and I was still on the 4G network.
WireGuard has no secure way of doing key management either.
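That kind of leak check can also be scripted on the client side: compare the public IP seen with the tunnel down and up. A rough sketch for a Linux client with wg-quick; the interface name `wg0` and the use of ifconfig.co as the "what's my IP" service are assumptions, any equivalent service works:

```shell
# Public IP as seen without the tunnel
before=$(curl -s https://ifconfig.co)

# Bring the WireGuard interface up (config assumed in /etc/wireguard/wg0.conf)
wg-quick up wg0

# Public IP as seen with the tunnel supposedly active; if it is unchanged,
# traffic is still leaving via the carrier network, not the VPN
after=$(curl -s https://ifconfig.co)

if [ "$before" = "$after" ]; then
  echo "WARNING: traffic does not appear to go through the tunnel"
else
  echo "OK: public IP changed from $before to $after"
fi
```

A full DNS leak test additionally checks which resolvers answer your queries, but even this simple IP comparison would have caught the "connected but not tunneling" situation described above.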

So OpenVPN has a lot of LOC, but it is rock solid. When it is on, it is on: no faults there and no leaks. It also has a secure way of managing the config files.

So the number of lines in the code isn't everything, as long as the lines are written well.
If I can't trust the software to tell me correct information, then a small amount of LOC doesn't matter.

Code quality can be discussed at length, but you're looking at two very different release models. As for quality, BSD is usually considered to have the better code; however, that is also usually "blamed" for its lagging device support. There are a lot more "bugfixes" being backported each week to a Linux LTS kernel compared to a BSD release, which usually gets only a handful during its (supported) lifetime, and that can to some extent be attributed to attention. That doesn't necessarily mean one is better by definition; rather, the release models offer different kinds of "freedom".

What you should remember is that BSD is used a lot in commercial products, and if by "less tested" you refer to a device running X or Y, you're correct. However, there are a lot of products that ship with ancient versions etc., so I'd say such an argument might not hold much value in the end.

In fact, by tested I mean software engineers testing and reporting issues, preferably providing fixes too, before a release goes stable. Being used by many users is important too, but at the end of the day users do nothing for the quality of the software or the progress of its features when they don't contribute to development and don't even report the issues they find.

I just had to register here to write that:

pfSense is considered by far more stable than OPNsense.

However, both have had the issue of unbound itself crashing, and the last pfSense stable has other serious issues, like NAT being broken in some circumstances (yay - not as if that weren't a major issue/feature!).

Especially notice this comment:

EDIT: In the comments below, Netgate says there will be NO 2.5.2 release. It will be fixed in 2.6.0. They have no release date for 2.6.0. So the fix might be years away from now (I think it took them 3 years for 2.5.x). So I would say the options are: stay on 2.4.x or move to OPNsense.

OPNsense, due to its nature, encountered it (the issue with unbound) a few months earlier, but they didn't manage to fix it quickly (not sure if they ever did completely? I never used OPNsense).

pfSense, on the other hand, took years (I think it was really years, unless my memory is tricking me) to upgrade to 2.5.x, only to encounter... the same issue!

So much for "stability".

That's because the unbound pkg in the FreeBSD 12.2 repos is broken/unstable, even with 1.13.0.

By broken I mean - crashing.

Either within minutes or, if you disable most useful features for local resolving, hours, sometimes even days. But far from what I'd call stable.

I never had that happen with any WRT, whatever the port - despite all the other issues with wireless on some routers.

I've been pretty happy on pfSense so far, but I guess I'll go back to OpenWRT, on arm64 now rather than amd64, seeing that OPNsense and pfSense both have more or less the same underlying issue:

A base system that's slow to tackle such serious issues (in my book).

On the other hand the Linux world is MUCH bigger and better maintained, even arm64 nowadays.

Do I really care if the base system is less hardened? That the kernel is more bloated?

I don't care, as long as the system is stable at what it's made for.

Never had a Linux machine break on me in recent years, no matter if amd64, armv7 or arm64.

The worst of it all:

Old FreeBSD packages need to be compiled from scratch (not sure about HardenedBSD).

There's literally NO archive anywhere, so it's not easy to just install an old pkg.

You have to compile it yourself. Always.

And that compilation can't be done on the respective firewalls themselves, no matter how powerful the hardware they run on, because, obviously, these features are stripped out for various reasons.
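Concretely, pinning an old version on stock FreeBSD means building from a ports tree checked out at an older point in time, on a separate build machine. A rough sketch; the cutoff date is an arbitrary example, and using a quarterly branch (e.g. `2021Q1`) instead of a raw date is often easier:

```shell
# Fetch the FreeBSD ports tree
git clone https://git.FreeBSD.org/ports.git /usr/ports
cd /usr/ports

# Rewind to the last commit before the desired version was replaced
# (date is just an example; a quarterly branch is an alternative)
git checkout $(git rev-list -n 1 --before="2020-06-01" main)

# Build and install the old port with its default options
cd dns/unbound
make install clean
```

The resulting package can then be copied to the firewall, which is exactly the extra workflow the post above is complaining about compared to a Linux distro with a package archive.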

My advice is simple and sound:

Stick with OpenWRT.

If you need more, rather set up another small dedicated box or SBC, e.g. with Pi-hole or IPFire.

It will save you lots and lots of nightmares.

If you really want to give either *Sense a try, choose a stable OS like Proxmox VE (which, ironically, runs on Debian) and run either of them in a VM.
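On Proxmox VE that VM can be created from the CLI with `qm`. A sketch only: the VM ID, storage names, bridge names and ISO filename are all assumptions for illustration, and the web UI achieves the same thing:

```shell
# Create a small firewall VM: two NICs (WAN on vmbr0, LAN on vmbr1),
# 16 GB disk, booting the OPNsense installer ISO (names are examples)
qm create 101 --name opnsense --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:16 \
  --cdrom local:iso/OPNsense-dvd-amd64.iso

qm start 101
```

If the experiment goes badly, a Proxmox snapshot or `qm destroy 101` undoes everything, which is exactly the safety net a bare-metal install lacks.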

There's just too much pain involved if things turn out badly (and they will, at some point).

I think the good old saying of BSD being more stable is slowly eating itself:

As the old engineers move on (is Sony still using BSD for their newer consoles?), there are few newer ones coming along to pick up their work (just guessing).

A system that isn't maintained properly can only go so far, no matter its great architecture.

The more complicated and sound the architecture, the harder it is to maintain as well.

As much as personal experience is important, I think you need to look at the bigger picture: there are millions of installs of either platform, and as always you'll hear a lot fewer success stories than issues/problems. You can apply this to pretty much any kind of software/distro in general; however, you also need to understand the differences and practical limitations.

Free software will "never" have the same support as commercial software, simply because money helps a lot when it comes to actual work. Developers/maintainers will of course be interested in bug reports etc., but since they're working for free (essentially), most aren't in a position to drop everything and work on said bug report. It may also take time to track it down properly. While software development is different from many other types of work/professions, the same principles apply: demanding something of what's "free" is just silly. It's not like you'd expect a mechanic or an electrician to work for free, or go to extreme lengths, just because they offered to help.

Regarding unbound, using FreeBSD's repo for other distros is unsupported and more or less carries "you're on your own" status. Just because you can install it doesn't mean it'll run properly. It's a shame that their base packages are broken, but again, I don't know if this is a very limited issue or not. It's free, and they can't test every single scenario possible. This also applies to any code tree: you will see system/application-breaking commits from time to time, and depending on manpower, time, money and policies (testing etc.), some projects are more strict than others when it comes to committing code. Looking at the official bug reporting portal, I'm going to guess it's related to simply not being compatible with third-party distros. It's also maintained by upstream (fwiw), although I would assume that many use the version included in base.

I don't know why you're expecting it to be possible to compile on a(ny) firewall distro; this is clearly a misunderstanding on your behalf.

As for stability in general, it depends on a lot of factors: hardware, drivers etc. You can see this very clearly here too. Some platforms/combinations simply have better support than others.