I tried to run OpenWrt on Podman.

To be honest, it does run, but it seems to restart inexplicably on its own.
Is this expected behavior? Still, running OpenWrt in a container is more interesting to me than running it in a VM.
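Roughly what I'm doing, in case it helps (the image name, interface and subnet are just examples from my setup, and this is not a supported configuration):

```sh
# Give the container its own macvlan leg on the LAN (values illustrative)
podman network create -d macvlan --subnet 192.168.1.0/24 -o parent=eth0 lan

# Run the OpenWrt rootfs image with the capabilities its network stack wants
podman run -d --name openwrt --network lan \
  --cap-add NET_ADMIN --cap-add NET_RAW \
  docker.io/openwrt/rootfs /sbin/init
```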

Mind explaining why? Sure, there are other "long-running containers", but I don't see the point in running OpenWrt containers. Even if you had multiple instances that sync, for instance, the conntrack state, it is just pain. If you need HA, then set up two VMs properly for this use case. Or configure an OpenWrt container for a very specific use case, but as a general router? Nope.

Yes, it is expected, as containers use the host's kernel and kernel modules, which may or may not be the ones OpenWrt would install and use on bare metal. The only use case for running OpenWrt in a container I can think of is learning; it doesn't make sense even for home "production" usage.
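For example, before the container can do stateful NAT, the relevant netfilter modules already have to be loaded on the host. A quick sketch of what that looks like (the module names here are the usual ones; your kernel may differ):

```sh
# On the host, not in the container: load what OpenWrt's firewall expects
modprobe nf_conntrack
modprobe nft_nat

# Verify they are actually available in the shared kernel
lsmod | grep -E 'nf_conntrack|nft_nat'
```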

Containers use far fewer system resources than virtual machines, start up faster, can draw on nearly all of the host's resources rather than a fixed allocation, and perform better.

Those are almost all advantages, with the exception of security.

To be honest, I don't think even the VFIO network and storage passthrough used by QEMU/KVM would be faster than the macvlan networking and OverlayFS storage used by containers.

Running OpenWrt in a container is not a supported configuration. OpenWrt depends on its own kernel, on being able to load and unload kernel modules at will, and on setting sysctl values as needed. All of this works in a fully virtualized VM; it does not work in a container, where you have to use the host kernel and have no further access to it.

Maybe it's just me, but I don't see how this could be the case when comparing against bare metal. I can certainly agree that it is much more efficient than running multiple physical machines if the host system is not fully tasked: the machine's resources can be shared, so that unused resources from any given guest OS/container can be used by other guest OSes/containers serving other purposes.

But a containerized/virtual system requires a host OS with a supervisor/hypervisor on the host hardware, and this must boot up, which takes both time and resources. Then the container must be booted, and it, too, is not running on bare metal, so the supervisor/hypervisor must manage the container's access to the hardware itself, including sharing it with other containers/processes, which means some additional level of process overhead.

Can you say conclusively that an OS like OpenWrt actually boots (cold boot) faster in a container than on bare metal? What boot times have you measured? What about the boot of the host OS/supervisor/hypervisor? And what about other resources like storage and RAM (both of which are pretty minimal requirements for OpenWrt)?
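For what it's worth, here is one rough way to put numbers on that comparison (commands illustrative, and note that the container figure conveniently ignores the host's own boot time):

```sh
# Cold start of the container alone, host already booted
time podman start openwrt

# On bare metal or in a VM: seconds since the kernel started
cut -d' ' -f1 /proc/uptime
```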

Yes, you may need to load modules on the host. Otherwise, I think the main problem is that OpenWrt's Wi-Fi packages require a patched kernel. Apart from Wi-Fi, OpenWrt seems to mostly work when running as a system container (in, for example, LXD). Though I haven't been able to use jool, that might be a problem with jool/LXD and not with OpenWrt.
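For context, the kind of system-container setup I mean looks roughly like this (the image alias is hypothetical; the community image server has carried OpenWrt images, but names and availability change):

```sh
# Launch OpenWrt as an LXD system container and watch its log
lxc launch images:openwrt/23.05 owrt
lxc exec owrt -- logread -f
```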

It's also a recipe for more subtle, but serious, security issues. Just look at the sysctl configuration: a container has no access to that, but getting it right is absolutely essential (the netfilter rules, handling RAs in userspace, etc.). This all works in a real full-system VM; it cannot in a container.
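To make that concrete, these are the kinds of values OpenWrt applies at boot (examples only; in a typical container the non-namespaced ones simply cannot be set):

```sh
# RAs are handled in userspace by odhcp6c, so kernel RA processing is off
sysctl -w net.ipv6.conf.all.accept_ra=0

# Conntrack table sizing, tuned per device by OpenWrt
sysctl -w net.netfilter.nf_conntrack_max=16384
```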

Or hotplug events and interface tracking; none of that works either.

In the x86 context, it is rare to run OpenWrt on bare metal, so it is not meaningful to discuss it. Below, I will only discuss the difference between virtual machines and containerization. The original design intention of containerization is to reuse the already-running kernel, so there is naturally no need for kernel initialization, while a virtual machine requires it, which consumes additional resources and makes it start more slowly. Here are some differences between Podman and Docker:

Podman runs containers directly, without a daemon process. This means Podman can start a container without first starting a daemon, and it does not depend on Docker's registry or image format.

In other words, Podman does not require a service to be started first, and the user experience is closer to chroot. Podman has always been known for being lighter than Docker.
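A minimal illustration of that difference (the image name is just the standard Alpine example):

```sh
# Docker cannot start anything unless its daemon is up
systemctl status docker

# Podman forks the container process itself: no daemon, and it works rootless
podman run --rm docker.io/library/alpine echo 'no daemon needed'
```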

Logically, if I've already initialized a kernel, why would I want to initialize another one? What I'm doing is obviously starting a program, not booting another kernel inside a VM.

That is not even remotely true. Running OpenWrt bare metal on x86 hardware is extremely common. It is a favorite on NUCs and other similar small footprint/lower power x86 systems, especially where 1Gbps+ routing with SQM is needed.

This depends on the guest OS that you are running. Some guest OSes can leverage the host's kernel extensions; others may not.

Sounds like some commercial routers. I really don't see any advantage of these x86 routers over the more common ARM and MIPS routers.

I know, but this use case isn't typical, and it's still slower than containers; full virtualization is bound to be slower than containerization.

Performance well above 500 MBit/s (with or without SQM or VPN), above 1 GBit/s, above 2.5 GBit/s, …, plus reliability/stability.

ARM and MIPS routers have hardware NAT.

…and x86_64 doesn't need that; it has the performance to do without such tricks.

If you look at hardware flow-offloading (mt7621, mt7622, filogic 820/830/880), you will often see hard-to-diagnose issues (like PPPoE not working, or others), and NSS on ipq806x/ipq807x/ipq60xx/ipq50xx is just another world of pain and of trying to find a combination that appears to work. With x86_64, you don't need any of that; it's well tested beyond OpenWrt's narrower scope and just works, always, at full performance.
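For reference, the offloading being discussed is this firewall knob on those platforms (standard OpenWrt options; the hardware path only exists on supporting SoCs):

```sh
# Software flow offloading, plus the hardware path where the SoC supports it
uci set firewall.@defaults[0].flow_offloading='1'
uci set firewall.@defaults[0].flow_offloading_hw='1'
uci commit firewall && /etc/init.d/firewall restart
```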


So running OpenWrt (or another OS) on x86 bare metal is a compromise approach: it costs less, but it is not so friendly in terms of power consumption. Because there is no hardware NAT, the stronger general-purpose computing performance it needs draws more power; it is a pure trade-off.

It is well known that OpenWrt's usability is much worse than that of other common distributions; at the very least, I don't have much reason to use it on bare metal.

My parents often blame me: "Since you installed this metal box, our electricity bill has been much higher." I don't know how to respond to them.

My dedicated x86_64 OpenWrt(-only) router with four 1 GBit/s ports uses less than half the power of the dedicated 'plastic' ARMv8 AP connected to it.

You can get x86_64 systems capable of this job that get away with less than 5 watts idle, but around 10 watts, give or take, is more easily found (obviously a desire for 10GBASE-T or higher would drive up the power consumption for the network cards alone).

At the same time many of the contemporary ARMv8 routers are solidly in the 15+ watts bracket.

While I'm personally not a fan of using virtualization for OpenWrt at home (without enterprise-level maintenance, HA and hot-failover in place), for KISS and bootstrapping reasons, as well as for covering potential hardware failure, there is a major difference between virtualization and containerization here:

  • you can run OpenWrt in a VM (kvm, xen, virtualbox, hyper-v, vmware, parallels, etc.), as long as you know how to operate it.
  • you should not try to run OpenWrt in a container (lxc, lxd, docker, podman, virtuozzo/openvz, …) for all the reasons raised above

Just select your hardware wisely, x86_64 doesn't necessarily need more electricity.

ARMv8 routers provide far more performance than four 1 GBit/s ports, even when I can't use hardware NAT.

Besides, you're also ignoring that the AP's wireless chip draws extra power, and it seems you wouldn't need an x86 router at all if you used an all-ARM solution.

In addition, it is well known that a typical modern mobile-phone SoC consumes 3-5 W, which is enough to support Wi-Fi 6, UFS and NVMe.

Shouldn't containers with CAP_NET_ADMIN have access to most network-related sysctl settings? Or maybe you are thinking of some other sysctls.
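For instance, the container runtimes expose the namespaced network sysctls directly (a quick sketch; non-namespaced keys would still need to be changed on the host):

```sh
# Set a per-namespace sysctl for the container at start time
podman run --rm --sysctl net.ipv4.ip_forward=1 \
  docker.io/library/alpine sysctl net.ipv4.ip_forward
```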

ARMv8 as an architecture could.
ARMv8 as it actually exists, in devices you can buy as a mere mortal, is a different story.

My example above covers my existing (read: older) personal x86_64 OpenWrt hardware, chosen with low wattage and (very) cheap purchase prices in mind - it's faaar from the high end possible with x86_64.

If I had to select x86_64 hardware for OpenWrt now, I would probably end up with an alderlake-n n100 system and four 2.5GBASE-T ports (probably using around 8-10 watts idle), as it provides more power than needed, while being priced quite attractively (~130-230 EUR). And no, that is not a limit either, any old skylake i5 would cope with 2+ 10 GBit/s ports easily, 40 GBit/s, 100 GBit/s are all possible - you decide.

The RPi4 shows the potential of ARMv8, but it lacks on the I/O side and its prices are no longer attractive.
The Rockchip SOCs are interesting.
ipq807x is a far cry from 10 GBit/s without NSS offloading.
filogic 820/830/880 might get quite far, but it still expects to rely upon hardware flow-offloading.
The Apple M2 shows what ARMv8 performance 'could' look like, but pricing is beyond madness for a router (apart from questions like mainline Linux support, adding more ethernet cards, …).

If you dislike OpenWrt, why are you using it? If other router-centric distributions are more to your liking, it would seem to follow that you should probably use those, instead.