OpenWRT in container or containers in OpenWRT?

I mean like there's alterations to the kernel like...
https://git.openwrt.org/?p=openwrt/openwrt.git;a=blob;f=target/linux/generic/hack-6.6/650-netfilter-add-xt_FLOWOFFLOAD-target.patch;h=58b37db2fb9ce16b31527e8d608c6e0e46da488b;hb=HEAD

https://git.openwrt.org/?p=openwrt/openwrt.git;a=blob;f=target/linux/generic/hack-6.6/645-netfilter-connmark-introduce-set-dscpmark.patch;h=bb802857d6b57bafdb57b2a6b2a2fff26733a399;hb=HEAD

So these things won't be available if running on a normal linux kernel.

The most considerate approach would be to just set up ujail restrictions and run your applications in compartments on the same main system. It takes 1-2 conf files to restrict a service.
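To illustrate the "1-2 conf files" point: on OpenWrt, a procd init script can declare a ujail for its service. This is a minimal sketch; `myservice` and its paths are hypothetical placeholders, only the `procd_add_jail*` helpers are real procd functions.

```shell
#!/bin/sh /etc/rc.common
# Hypothetical /etc/init.d/myservice showing procd's ujail support.

USE_PROCD=1
START=95

start_service() {
    procd_open_instance
    procd_set_param command /usr/bin/myservice

    # Run the process in a ujail with its own mount namespace;
    # "log" forwards its output, "procfs" mounts a private /proc
    procd_add_jail myservice log procfs

    # Whitelist only what the service actually needs (read-only bind)
    procd_add_jail_mount /etc/myservice.conf

    # Writable bind mount for runtime state
    procd_add_jail_mount_rw /var/run/myservice

    procd_close_instance
}
```

Everything outside the whitelisted mounts is invisible to the jailed process, which is the compartmentalization being described.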

I see what you mean now. Those patches are beneficial, no doubt. It's just that the marginal gain wouldn't be as great on x86, and in my opinion running a vanilla kernel without those patches is just fine, simply because an x86 CPU is so much more powerful.

In my case I used a Netgear WNDR4300 v1 for years, and at some point through the upgrades it became noticeable that the newer OpenWrt kernels were holding performance back (23.05 capped around 300 Mbps). I then converted it to a switch only (layer-2 operation still works perfectly at 950 Mbps) and ran a router-on-a-stick setup with a very old laptop with an L7500 CPU (660 single-thread/660 multi-thread score). When OpenWrt runs in QEMU and I run a bandwidth test over the WAN, the CPU load goes up to 1.0, while OpenWrt in LXC barely moves the load, still around 0.01. In both cases I get my full bandwidth back, up to 350 Mbps.

Given the crappy x86 CPU used here and the performance comparison, I'm convinced that separating the wireless part from OpenWrt is the way for me: let x86 LXC/QEMU OpenWrt run the firewall/routing, and let any ac/ax wireless router with OEM firmware handle the wireless part.


Yeah, the only drawback of old x86 stuff is that it can get demolished in power efficiency, if you care about that. Even hardware you'd think wouldn't be bad, like old laptops, can sit there idling at 30 W or so. Otherwise, yeah, I agree that containers seem to work well for this use.

By running another OpenWrt on OpenWrt, I can assign different ports to different networks with a WireGuard VPN without needing to install pbr. Maybe I'll set up an Arch Linux container with x2go, then I have a persistent GUI etc. It's nice.

Looks like I have the basic container done.

This ended up being my lxc config:

# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template: --dist openwrt --release snapshot --arch arm64 --server images.linuxcontainers.org
# Template script checksum (SHA-1): b27e730655b3208b5e2edcba69290c39970b4fd0
# For additional config options, please look at lxc.container.conf(5)

# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)


# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = aarch64

# Container specific configuration
lxc.rootfs.path = btrfs:/srv/lxc/test/rootfs
lxc.uts.name = test

# Network configuration
lxc.net.0.type = vlan
lxc.net.0.vlan.id = 500
lxc.net.0.link = lan4
lxc.net.0.name = mdm1
lxc.net.0.hwaddr = 00:16:3e:07:07:07
lxc.net.1.type = phys
lxc.net.1.link = lan2
lxc.net.2.type = phys
lxc.net.2.link = lan3
lxc.net.3.type = phys
lxc.net.3.link = vwan0

I wanted to call that mdm1 device lan4.500, but I think that naming messes with the way OpenWrt works, so since it's connected to the modem I called it mdm1. So there it goes, basically: ports lan2 and lan3 just plug in as normal LAN, while lan1 and the WAN port are VPN LAN ports. vwan0 is the veth interface that goes back to the host.
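For anyone following along, these are roughly the commands that go with a config like the one above; the container name `test` matches the config, but adjust the release/arch to your setup. Run on the OpenWrt host with the lxc packages installed.

```shell
# Create the container with the download template (matches the
# template header comments in the config above)
lxc-create -n test -t download -- \
    --dist openwrt --release snapshot --arch arm64 \
    --server images.linuxcontainers.org

# Start it (detaches by default), then attach a shell to poke around
lxc-start -n test
lxc-attach -n test -- /bin/ash
```

After creation, the network stanzas shown above are added by hand to the generated `config` file under the container's directory.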

I'm not sure I understood what you were trying to do, but `machinectl start <container>` starts the container in the background, and `machinectl enable <container>` makes the nspawn container autostart.

For me (OP), there's no issue really. I am reluctant to use VMs just out of principle, as they seem like an old-fashioned and heavy solution where containers seem much more elegant and lightweight. For context, my x86 is a PC Engines board with a 1.2GHz 4-core CPU and 4GB RAM. Containers (re)boot much faster (less downtime, especially when experimenting), use much less RAM (4GB can sustain far fewer VMs than containers), and of course don't need the additional abstraction layer a VM does. I also suspect a VM induces extra latency/jitter. They all matter, but maybe not enough to warrant my reluctance :slight_smile:

That's understandable. My server is an embedded AMD Zen 1 with 4 cores and 16G RAM, so running a few VMs together with a bunch of containers is no problem.

Plus, I have a router and will use OpenWrt as a WiFi access point only, so no routing, SQM, VPN... only a WiFi AP, so it should be light.

Actually, a question: since I only use OpenWrt as a WiFi access point and don't use the other features, does anyone know if running it as a container will still trigger the container-vs-VM issues mentioned above, or not?

I have a Banana Pi R4 with 4GB RAM and wanted to make use of that additional RAM and CPU power.

So I set up podman. It took me a while, but I have it working now. It has a separate bridge for the container network, and DNS entries are also updated automatically. Port forwards are not working, because there's no nftables support in netavark (the virtual network thingy for podman), but that doesn't matter to me, because it saves me the effort of running a reverse proxy or something to map names to ports.

Instead I can just open http://grafana/ and get to my grafana instance.
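A rough sketch of what a setup like this looks like; the network name `containers` and the volume path are illustrative, not the poster's exact commands.

```shell
# Create a netavark bridge network; netavark's companion aardvark-dns
# resolves container names on this network automatically
podman network create containers

# Run grafana on that network; no -p port forwards needed, since
# it's reached by name rather than by a host port
podman run -d --name grafana \
    --network containers \
    -v /mnt/nvme/grafana:/var/lib/grafana \
    docker.io/grafana/grafana
```

Other containers on the same network can then reach `http://grafana:3000/`; making the bare name resolvable from the LAN additionally requires wiring those entries into the router's DNS, which is the "updated automatically" part of the poster's setup.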

I use an NVMe drive to store all the container data, so it is outside the eMMC and also outside my firmware image. Podman itself is inside the image, so I'm already at 60MB for the sysupgrade. :sweat_smile:

I have not yet set up all the containers, but I assume the base is working.

Still debating (with Copilot) where to place the setup scripts: in shell files in init.d, in Dockerfiles, or on the NVMe as a separate download after flashing the firmware.

As others suggested, it's not the best idea to have all of that on your router. If I didn't have the R4 but a resource-limited router, I also wouldn't do this; I'd use a Raspberry Pi, one of those Rockchip SBCs with 16-32GB RAM, or a miniPC instead. Just to separate concerns and not have to flash the router every week "to try something out".

`machinectl login` doesn't work: per the man page, "Note that this is only supported for containers running systemd(1) as init system." I didn't like this part, but it doesn't seem like a deal breaker for me now.

They'd still consider this [container] approach much less safe than a VM, but it's totally fine from where I stand.

That's the point. Use a much more powerful x86 box to run OpenWrt, be it VM or container, and consolidate.

I'm getting old and my memory may not serve; maybe 12 W the last time I measured it? The electricity bill is also not a concern because, well, we're talking about $10 vs $30 a month. That may be 300%, but it's not like $1k vs $1.5k, which would only be 50% yet a $500 difference.

Another LXC tip when running OpenWrt on OpenWrt: I couldn't figure out why DNS wasn't working, and finally stumbled on a forum post somewhere recommending that dnsmasq be limited in which interfaces it listens on.

Network -> DHCP and DNS -> Devices & Ports -> Listen interfaces

Problem solved.
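The same fix can be applied from the host's shell instead of LuCI; `lan` here is an example, use whichever host interfaces dnsmasq should actually serve.

```shell
# Restrict the host's dnsmasq to the lan interface so it doesn't
# bind to the container's interfaces (this is what the LuCI
# "Listen interfaces" field writes into /etc/config/dhcp)
uci add_list dhcp.@dnsmasq[0].interface='lan'
uci commit dhcp
/etc/init.d/dnsmasq restart
```

With that in place, the dnsmasq inside the container is free to bind to its own interfaces without the host instance grabbing them first.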

EDIT: at least I thought it was solved. I'm not sure what the deal is, but it seems like starting the container via the LuCI GUI might be the problem; starting it with `lxc-start` at the command line instead doesn't seem to have the same issue, or at least hits it less frequently.


Two solutions to the dnsmasq issue in LXC

Does having OpenWRT as both host and container sidestep some of the issues mentioned here, namely the kernel module and security issues? You did the routing in the container, right?

No, not really (many things in OpenWrt expect to load and unload kernel modules, as well as apply sysctl settings).

It worked fine for me in the end. I was actually a little impressed with it overall, and with the capability of the Dynalink DL-WRX36 CPU, its USB performance, and the stability of its Ethernet driver while I was trying it on that device.

At the end of the day, this isn't my argument to make: if it were so bad/useless, why are multiple OpenWrt image versions offered in the LuCI GUI? They must be useful to someone. I have no idea what kernel modules would be getting unloaded and loaded again; that seems pretty niche to me. On the sysctl side, yes, there appear to be some IPv6 things set dynamically, at least from what I've seen (https://github.com/oofnikj/docker-openwrt/blob/master/patches/dhcpv6.script.patch), but the other basic system stuff is already set by the host, which is OpenWrt anyway. The container has access to the devices you pass in and, as far as I can tell, can set all the routing and firewall rules like a separate machine; I assume you can set up WireGuard and firewalls inside them too. But again, this isn't some hill for me to die on or a reason to get into some kind of turbo hacking contest; I'm not a turbo hacker or the CEO of 'run OpenWrt in a container'.

I've moved back to using systemd-nspawn on a different SoC now. The Dynalink DL-WRX36 is a pretty neat device, but with wireless and the like its idle draw climbed over 11 watts, and I needed more 2.5G ports, so I went back to a managed switch that idles at 8 watts and an SoC that idles at 2 watts. The DL-WRX36 also appears to be less useful if you set the wireless country region outside the US.


Post in thread 'How to download LXC version of OpenWRT and run it on Proxmox'

I'm trying to run my OpenWRT appliance in LXC on top of Proxmox.

I'm failing right now on inter-VLAN routing and am considering putting OpenWRT into a KVM instead of LXC, though I would prefer LXC over KVM.

I just want to mention that I run OpenWrt in Docker via full system emulation (QEMU inside Docker), and I'm using it as my main router. Because the network is completely isolated from the host network, Docker doesn't mess up the firewall.

Link: https://github.com/AlbrechtL/openwrt-docker

Just for transparency, I presented my approach here: OpenWrt Docker Image (experimental) - #6 by albrechtl


I'm running a Proxmox host with OpenWrt as a VM. I run Docker containers directly on the Proxmox host, not in a VM: a UniFi controller, nginx, Jellyfin, and a Minecraft server, all on an Intel N95 mini PC with dual Ethernet.

Before this, I ran OpenWrt on a NanoPi R4S and ran the containers inside OpenWrt. It worked, but it was a struggle: fighting with the BusyBox versions of commands, manually setting up users and permissions, etc. I would not recommend using OpenWrt as a Docker host, even though it's possible.
