OpenWRT in container or containers in OpenWRT?

I wish to run some extra applications on my headless x86 router, like unifi-network-application, emqx, netbird. I've been trying to spawn them on OpenWRT with docker and lxc, but it's a bit more complicated than what I'm comfortable with; I'm afraid I'll lose track of everything and bump into problems when upgrading.

With LXC my main problem is that usually only docker images are available, and they don't run as such with LXC. It seems like too much trouble/learning to make stuff work.

With docker I'm frightened about it messing up my firewall rules - without me understanding what's going on. I have things like dns hijacking, mac allowlist for wan access (default deny), time restrictions, dmz, SQM. Even if I don't use the default docker bridge, but rather macvlan, docker adds a bunch of fw3/iptables stuff into the firewall. I don't like it. Maybe it's ok on a host dedicated to docker containers, but not on a router that exists for the sake of the firewall :stuck_out_tongue_winking_eye:
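
Apparently dockerd can be told to keep its hands off the firewall entirely, though then you're on the hook for wiring up container connectivity yourself; a sketch, assuming the usual daemon.json location:

# /etc/docker/daemon.json -- stop dockerd from injecting firewall rules
{
  "iptables": false
}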

Assuming things are more straightforward with another distro, would you say it's worth flipping things over and putting OpenWRT in a container? Is there some downside like significant overhead/jitter/latency?

I'd neither use my network equipment to host containers nor run my firewall as a container. Especially since every once in a while I'm a "yea, let's put that setting in there and see what it does" kind of guy when it comes to arbitrary containers, and that's just not something I want for my network infrastructure.

If you do have a somewhat potent x86 host, consider virtualization: set up one VM for your router and one VM as a container host.

This keeps OpenWRT as a distro as is, and leaves your choice of distribution for the container host VM totally up to you, either what you're familiar with or what you find proper documentation for.
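
A rough sketch of what booting the stock x86/64 combined image under qemu-kvm could look like (the image filename and bridge names here are placeholders, not tested on this exact setup):

qemu-system-x86_64 -enable-kvm -m 256 -nographic \
    -drive file=openwrt-x86-64-generic-ext4-combined.img,format=raw \
    -device virtio-net-pci,netdev=wan -netdev bridge,id=wan,br=br-wan \
    -device virtio-net-pci,netdev=lan -netdev bridge,id=lan,br=br-lan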

That said, I never even tried to do that, but I've heard OpenWRT does not run as a container. And from my knowledge of containers, I can see why it would at least need to run with full privileges, and I'd totally believe it does not work at all. Which leaves "host containers in OpenWRT" as the only possible option of the two.


Running OpenWrt in a container (as opposed to full system virtualization: qemu-kvm, virtualbox, parallels, hyper-v, vmware, etc.) is not a supported configuration and is broken, both in terms of functionality and hidden security issues.

As mentioned above, your border gateway is your first line of defence towards the internet; keeping its attack surface small and well-understood is paramount. Your router needs full control over the hardware, the network cards. It's not a good idea to get another middle-man (hypervisor) into the picture, whose security issues add up and which forces you to apply the same key security and routing policies on multiple layers.


For a small deployment like home use, move OpenWRT into LXC and run your other applications on the same host. It's fine.

OpenWRT is actually the only non-traditional Linux distribution that is supported by LXC, as shown here: https://images.linuxcontainers.org/. Barring any undisclosed info (not willing to, or not able to disclose), I don't see why OpenWRT is unsafe in LXC. The image build, for instance, only applies a couple of tweaks to make it boot:

trigger: post-files
  action: |
    #!/bin/sh

    # Disable process isolation to make dnsmasq work
    sed -i 's/procd_add_jail/: \0/g' /etc/init.d/dnsmasq
    # Disable conflicting sysntpd service to avoid crash loop
    rm -f /etc/rc.d/*sysntpd

There is nothing secret about it, it has just been discussed ad nauseam already. The gist is, OpenWrt must be able to do things that can't be done in a container (loading/unloading kernel modules, applying sysctl settings, hardware access/triggers, ...), but that can be done to the virtual hardware of full system virtualization. Yes, you will see the web interface without that, but core functionality (netifd, most of the firewall, and more) can't do what it's supposed to do that way, resulting in glaring security holes.

A container can run userspace applications; it cannot virtualize systems that require direct hardware and kernel access. That's the whole point of a container: preventing anything from breaking out of the container.

There's an alternative way mentioned by someone else in this forum: pass /dev/urandom into the container and modify the same file (/etc/init.d/dnsmasq) by adding /dev/urandom to the end:

procd_add_jail_mount /etc/passwd /etc/group /etc/TZ /etc/hosts /etc/ethers /dev/urandom

Since OpenWRT is running in unprivileged mode, it's probably OK to just remove the jail altogether.

Or disable NTP via the CLI or LuCI:

cat ../config/system

config timeserver 'ntp'
	option enabled '0'
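
The same thing via the uci CLI would look like this (assuming the stock sysntpd setup):

uci set system.ntp.enabled='0'
uci commit system
/etc/init.d/sysntpd stop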

Just saying from my experience:

  • loading/unloading kernel modules: Only /dev/net/tun, /dev/vhost-net and /dev/null need to be passed into the container. The first two are needed only when there's a VPN requirement (openconnect can leverage vhost-net; otherwise tun is enough). null is optional, just to get rid of one complaint during startup. See the sketch after this list.

  • applying sysctl settings: The container has its own network namespace. There could be a few settings that aren't available in a container. Not sure what those might be, but it sure didn't bother me or the proper functioning of OpenWRT.

  • hardware access/ triggers: No such need whatsoever.
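
A minimal sketch of what that device pass-through can look like in the container's LXC config on a cgroup2 host (the config path is hypothetical; 10:200, 10:238 and 1:3 are the usual major:minor numbers for these nodes):

# /srv/lxc/openwrt/config (path hypothetical)
lxc.cgroup2.devices.allow = c 10:200 rwm    # /dev/net/tun
lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0
lxc.cgroup2.devices.allow = c 10:238 rwm    # /dev/vhost-net
lxc.mount.entry = /dev/vhost-net dev/vhost-net none bind,create=file 0 0
lxc.cgroup2.devices.allow = c 1:3 rwm       # /dev/null
lxc.mount.entry = /dev/null dev/null none bind,create=file 0 0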

Anyhow, happy container user. We opted for the route of using x86 to run OpenWRT and just buying any wireless router with factory firmware as a dumb AP. Not going to bother with hardware choice/OpenWRT support and whatnot going forward.

root@openwrt:~# uptime
17:09:43 up 24 days, 8:39, load average: 0.00, 0.00, 0.00

I hate sounding like a broken record, but you realize that fw4 depends entirely on the nftables processing of the kernel? A container depends entirely on what the undefined host kernel (just for the sake of argument, imagine RHEL4 here) happens to provide; it can't request the host to load additional modules, it can't set their sysctls (important for odhcpd and others to work as expected). Now look at the installed kmod packages (which only tell a third of the story) and check what could possibly go wrong (prime example: wireguard, because there you will see failure, rather than subtle security issues).
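
To make that concrete, things like the following work on bare metal or in a VM but not from inside a container, because they act on the shared host kernel (the module and sysctl here are just examples):

modprobe wireguard                    # a container can't ask the host kernel for a module
sysctl -w net.core.rmem_max=8388608   # non-namespaced sysctl, read-only inside a container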

No, I won't expand this into more details, ...again. It's an explicitly unsupported mode of operation; trying it has a high chance of Bad Things(TM) happening. The ugly side of it just happens to be that it might appear to 'work' (by accident) at first glance.

/Debbie_Downer

Thank you all for the very enlightening considerations! Based on the somewhat opposite suggestions provided, it looks like a problem beyond me to make a well-informed decision about, and I would set myself up for possible future headaches by containerizing. I have a tendency to take paths that lead to dead ends, so I guess I'm better off leaving my router as a pure OpenWRT box and putting my other stuff on a small server.

From a container perspective, anything running as root can potentially escape it, and most things in OpenWRT do run as root.

Run it in unprivileged mode

Shooo, and set routes and fw rules after.

lxc "unprivileged mode" means: inside the container everything is root, outside the container on the host side everything is running as a unprivileged user.

again in my case:

  • ancient kernel: not applicable
  • load additional modules: there's no need
  • it can't set their sysctl's: an lxc container has its own network namespace. very few settings cannot be set, if any, and they probably don't matter much. at the minimum openwrt runs very well in my setup.
  • kmod: not applicable.
  • wireguard vpn: i haven't tested this. openconnect works.

openwrt is setup-and-forget for me except the occasional upgrade.
"any wireless router with factory firmware as dumb AP" still stands.
OpenWRT in lxc/qemu is the way to go given the wireless support situation.

If you can dedicate a device to OpenWRT, by all means.

What is the issue with running Openwrt in a virtual machine? I understand it adds more layers that can be exploited, but apart from that is there any issue?

No issue, go for it.

I've been using a Starfive Visionfive 2 board for a month or so that has Arch Linux running on it, and then 2 x Openwrt containers set up via systemd-nspawn.

Everything just working basically.

This is an example of how I start one:

systemd-nspawn --capability=CAP_SYS_TIME --private-users=no \
    --network-interface end0 --network-interface veth0 \
    --console=passive -bD /var/lib/machines/openwrt1/ &

So that container has the ability to run the ntp server, has direct access to one network port and a virtual network out to anything else on the device internally.
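
If you'd rather not keep a shell job around for that, the same options can apparently go into an .nspawn file so machinectl can manage the container; a sketch (untested):

# /etc/systemd/nspawn/openwrt1.nspawn
[Exec]
Boot=yes
Capability=CAP_SYS_TIME
PrivateUsers=no

[Network]
Interface=end0 veth0

Then: machinectl start openwrt1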

I've recently had a look at lxc with openwrt on a Dynalink DL-WRX36 as the host. It actually seems pretty easy so far; the luci gui set up to make the containers is cool. There's a drop-down list of containers to set up, but it still seems to work the same way as setting up any container, so it should be fine to make my own too. So far all I had to do was mount a usb device at /srv/lxc and create an lxcbr0 bridge to get one starting.

It looks like lxc needs the same command to get to the command line of an openwrt container, e.g.:

echo "console::askfirst:/usr/libexec/login.sh" >> /srv/lxc/[container-name-here]/rootfs/etc/inittab

Then boom, that easy to get to a login.
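
Alternatively, lxc-attach should drop you into a shell without touching inittab:

lxc-attach -n test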

root@openwrt:~# lxc-start -F test
lxc-start: test: ../src/lxc/utils.c: safe_mount: 1330 No such file or directory - Failed to mount "/usr/lib/lxc/rootfs/proc/tty" onto "/usr/lib/lxc/rootfs/proc/sys/net"
/etc/preinit: line 58: can't create /sys/class/leds/red:/trigger: Read-only file system

/etc/preinit: line 58: can't create /sys/class/leds/red:/trigger: Read-only file system
Please press Enter to activate this console.
login[129]: root login on 'console'


BusyBox v1.37.0 (2024-12-19 08:01:46 UTC) built-in shell (ash)

  _______                     ________        __
 |       |.-----.-----.-----.|  |  |  |.----.|  |_
 |   -   ||  _  |  -__|     ||  |  |  ||   _||   _|
 |_______||   __|_____|__|__||________||__|  |____|
          |__| W I R E L E S S   F R E E D O M
 -----------------------------------------------------
 OpenWrt SNAPSHOT, r28354-31e45f62ca
 -----------------------------------------------------
=== WARNING! =====================================
There is no root password defined on this device!
Use the "passwd" command to set up a new password
in order to prevent unauthorized SSH logins.
--------------------------------------------------

 OpenWrt recently switched to the "apk" package manager!

 OPKG Command           APK Equivalent      Description
 ------------------------------------------------------------------
 opkg install <pkg>     apk add <pkg>       Install a package
 opkg remove <pkg>      apk del <pkg>       Remove a package
 opkg upgrade           apk upgrade         Upgrade all packages
 opkg files <pkg>       apk info -L <pkg>   List package contents
 opkg list-installed    apk info            List installed packages
 opkg update            apk update          Update package lists
 opkg search <pkg>      apk search <pkg>    Search for packages
 ------------------------------------------------------------------

For more https://openwrt.org/docs/guide-user/additional-software/opkg-to-apk-cheatsheet

root@test:~# 

Also a quick tip from when originally setting up: if you're testing connectivity, you can just quickly drop the firewall.

nft flush ruleset
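
fw4 regenerates everything from /etc/config/firewall, so you can bring the rules back afterwards with:

/etc/init.d/firewall restart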

With the kernel stuff and support I'm not sure, but yes, if you run openwrt containers on a host, then obviously you may not be running an openwrt kernel, and openwrt does carry patches that add or alter functionality, e.g. take a look in target/linux/generic/hack or target/linux/generic/pending.
I think in general you may not see the software offloading option and the like, or some packet marking things won't be available.


ingress/egress firewall targets will certainly not be available, just like most of the sysctls needed.

systemd-nspawn is what I attempted first. Now that I look back, it probably works too. But running systemd-nspawn inside tmux/screen just to keep the session up, plus machinectl being picky about the container's system, eventually led me to lxc.

software offloading is probably no longer a thing because the host kernel handles that; ethtool running on the host comes to mind.
packet marking more likely will just work.
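
So if you do want the NIC offloads, toggle them on the host side, e.g. something like:

ethtool -K eth0 gro on gso on tso on    # run on the host; interface name is just an example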