[Guide] Run OpenWrt as a container in Proxmox

Thanks aparcar and rkkoszewski/rkk2025 for the work you've done at https://github.com/mikma/lxd-openwrt!

This guide adds some more details to the instructions found at https://bugzilla.proxmox.com/show_bug.cgi?id=2044 by rkkoszewski/rkk2025.

NOTES

  • This guide is confirmed to support building 18.06.2 and 18.06.4 for the x86_64 architecture
  • You can create any additional network interfaces directly from the Proxmox Web UI, though an IP configured from the Proxmox GUI only persists until the container restarts, at least for now.
  • You can't use the PVE UI to connect to the OpenWRT console, but the container is still up and running
  • PVE is short for Proxmox Virtual Environment

PREPARE BUILD ENVIRONMENT
It's recommended you use Debian or Ubuntu on the build system. The following additional packages are required on Ubuntu 18.04:

sudo apt install -y build-essential subversion fakeroot gawk gpg

RETRIEVE BUILD SCRIPTS
To build the template manually, follow these steps.

Clone the lxd-openwrt repo:

git clone https://github.com/mikma/lxd-openwrt

To build a template that works with Proxmox, change directory into the top level of the cloned repo.
The build.sh script has the following defaults (2019-09-11):

arch_lxd=x86_64
ver=18.06.4*
dist=openwrt
type=lxd
super=fakeroot
# iptables-mod-checksum is required by the work-around inserted by files/etc/uci-defaults/70_fill-dhcp-checksum.
packages=iptables-mod-checksum

And supports the following options:

[-a|--arch x86_64|i686|aarch64]
[-v|--version <version>]
[-p|--packages <packages>]
[-f|--files]
[-t|--type lxd|plain]
[-s|--super fakeroot|sudo]
[--help]

Relying on defaults, we simply have to give the script a single parameter to build what is needed:

./build.sh -t plain

Here's another example which includes some additional packages:

./build.sh -t plain -p "luci-app-sqm sqm-scripts luci-app-ddns ddns-scripts ddns-scripts_no-ip_com iptables-mod-checksum"
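When the build finishes, the resulting template lands in the bin directory of the repo. A quick sanity check (the exact file name depends on the version and type you chose; the defaults above are assumed here):

```
ls -lh bin/
# list the tarball contents to confirm it looks like a rootfs
tar -tzf bin/openwrt-18.06.4-x86-64-plain.tar.gz | head
```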

UPLOAD OPENWRT TEMPLATE TO PVE
Use WinSCP (or similar) to download the template from the build environment to your own computer. The file is located in the "bin" folder of the repo you initially cloned, with a naming pattern such as "openwrt-18.06.4-x86-64-plain.tar.gz".

Upload the template to a PVE template directory using the PVE UI: click the "local (PVE)" storage on the PVE node, select the "Content" menu option and click the "Upload" button. Change Content to "Container Template", locate your file and finally upload it.

CREATE AN OPENWRT CONTAINER
SSH into the PVE host, and create a container for OpenWRT by executing:

pct create 201 local:vztmpl/openwrt-18.06.4-x86-64-plain.tar.gz --rootfs local-lvm:0.4 --ostype unmanaged --hostname openwrt1806 --arch amd64 --cores 4 --memory 256 --swap 0

Notes about "pct create" command:

  • "201" is the ID assigned to the container
  • "local" is the default name for the storage where container templates are stored (check your pve storage.cfg for more info)
  • "rootfs" is the size of the container filesystem in GB (0.4 here)
  • "local-lvm" is the storage where the container is to be stored
  • "hostname" is the name of the container
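Afterwards you can verify what was stored (a sketch; container ID 201 as above):

```
# show the container's stored configuration on the PVE host
pct config 201
```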

Recommended but optional configuration: remove any other lxc.include lines that might already be in the container config file (/etc/pve/lxc/201.conf), and add these lines:

lxc.include: /usr/share/lxc/config/openwrt.common.conf
lxc.include: /usr/share/lxc/config/common.conf
lxc.cap.drop: sys_admin
lxc.mount.entry: tmp tmp tmpfs rw,nodev,relatime,mode=1777 0 0

ADD A WAN-SIDE BRIDGE TO THE PVE HOST'S NETWORK CONFIGURATION
This guide is based on the assumption that you have a network card with two physical ports, where the LAN port is named "enp2s0f0" and the WAN port is named "enp2s0f3". If needed, change these names to fit your setup. You probably already have a bridge named "vmbr0" as part of the default PVE setup; it should look something like this and be physically connected to your internal LAN.

Name: vmbr0
IPv4/CIDR: 192.168.1.2/24
Gateway (IPv4): 192.168.1.1
Bridge ports: enp2s0f0

Create a new bridge named "vmbr1" and assign it the physical port connected to your WAN:

Name: vmbr1
Bridge ports: enp2s0f3
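If you prefer editing /etc/network/interfaces on the PVE host instead of using the UI, the equivalent stanza would look roughly like this (a sketch; ifupdown2-style syntax assumed, older setups write bridge_ports):

```
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp2s0f3
        bridge-stp off
        bridge-fd 0
```

With ifupdown2 installed the change can be applied with "ifreload -a", otherwise reboot the host.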

ADD LAN & WAN NETWORKS TO OPENWRT CONTAINER
Connect the OpenWRT container to your LAN bridge (vmbr0) by adding a virtual network adapter in the PVE UI. It could have these properties:

Name: eth0
MAC: (use auto generated)
Bridge: vmbr0
IPv4: "static"
IPv4/CIDR: 192.168.1.1/24
 
(I don't use IPv6 so I didn't add anything related to it).

Add another network adapter for the WAN side. This time connect it to the WAN-side bridge ("vmbr1"):

Name: eth1
MAC: (use auto generated)
IPv4: "DHCP"
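Both adapters can alternatively be added from the PVE host shell with "pct set" (a sketch; container ID 201 and the bridge names above assumed, and note the earlier caveat that an IP configured from PVE may not persist inside OpenWRT):

```
# LAN adapter on the internal bridge, static address
pct set 201 -net0 name=eth0,bridge=vmbr0,ip=192.168.1.1/24
# WAN adapter on the WAN-side bridge, DHCP
pct set 201 -net1 name=eth1,bridge=vmbr1,ip=dhcp
```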

SET CONTAINER START-UP OPTIONS + START IT!
Change the container option "Start at boot" to "Yes", and then start the container.
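The same can be done from the PVE host shell (container ID 201 assumed):

```
pct set 201 -onboot 1
pct start 201
```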

FINAL STEPS - A COUPLE OF CONFIGURATION FIXES IN OPENWRT
The OpenWRT root user password and the network configuration for the LAN side are missing and have to be set up manually.

Access the OpenWRT container's console through the PVE console by executing:

pct enter 201

Set a password for the root user (passwd will prompt you for the new password):

passwd

Open the file where OpenWRT keeps interface configuration:

vi /etc/config/network

And configure the "lan" interface to something like this

config interface 'lan'
        option type 'bridge'
        option ifname 'eth0'
        option proto 'static'
        option netmask '255.255.255.0'
        option ipaddr '192.168.1.1'
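The same result can be had with uci commands from the container shell instead of editing the file; a sketch equivalent to the stanza above:

```
uci set network.lan=interface
uci set network.lan.type='bridge'
uci set network.lan.ifname='eth0'
uci set network.lan.proto='static'
uci set network.lan.ipaddr='192.168.1.1'
uci set network.lan.netmask='255.255.255.0'
uci commit network
```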

Finally, reboot the container, and now you should be able to access the OpenWRT UI through 192.168.1.1 and hopefully everything works out well!


I tried to follow the guide, but when I ran it I got the message "EOF - dtach terminating".

Did you change the version string to latest openwrt release?

What do you mean by version string? I did not change anything in the script. Of course the resulting file name was different, as it is now based on 19.07.
I was reading on the Proxmox forum that somebody else was not able to start it either, with no idea as to the reason.

I meant you have to change to openwrt 19, but you seem to have got that.

Try running it with:

lxc-start -F -n 120 --logfile=lxc.log --logpriority=debug

I stopped using openwrt in LXC because of some kernel related issues that may have been fixed by now. I also added a list of modules required by SQM to the container config, which were dynamically loaded at the start of the container. Let me know if you want to go there and I can provide more details.


Thanks for the guide! It's working flawlessly.

Is there a recommended way to update OpenWRT inside the container? I have 19.07.0 up and running but 19.07.2 is already available. Can I just update with the software manager inside LuCI? Or do I have to back up the config, create a new container and restore the config inside it?

Glad to hear!!

I think you have to rebuild the container. Create a backup of your configuration using LuCI and it should be quick to restore it in the new container.

Mikma has a script for upgrading but I haven't tried it

Thanks for the excellent writeup.

Inspired, I have created 2 sets of scripts for this purpose.

If someone already has a buildroot with a custom toolchain or packages not covered by the official repo, they may be interested in my stripped-down version of mikma/lxd-openwrt:

And a wrapper script for creating, upgrading and swapping OpenWRT lxc instances on Proxmox VE:

I hope they can be of help.


@DazzyWalkman I tried your DazzyWalkman/lxd-openwrt-simplified to set up an openwrt based OpenVPN gateway using VPN Policy-Based Routing + Web UI -- Discussion. It's a bit simpler a setup than what I used before, but so far all looks good, e.g. no kernel errors in the log as seen the last time I tried it.

Still on my todo list before I consider using openwrt in a container as the internet-facing router in the household again: I need to try your script for loading SQM modules in the container, and see how LXC handles passthrough of physical NICs (that may have been what caused my issues last time).

Anyhow, it was really simple to build a container using your script. I tried the latest stable release openwrt-19.07.3-x86-64-generic-rootfs.tar.gz.

Are you using this as your daily driver?

I am glad they help. Yes, I have been using them regularly for a couple of months, though I mainly focus on the latest snapshot builds. 19.07 should work without the kmod thing.

Concerning features requiring modprobe, all goes well, as PVE 6.2 and the latest x64 openwrt snapshot share the same kernel 5.4 release. But I had some seemingly harmless kernel panics related to ppp on the 5.3 PVE kernel, with an openwrt tarball built against kernel 5.4.
Shortly after, I moved on to PVE 6.2 with the 5.4 kernel and the panics were gone, so I never investigated further whether the panics were caused by host/container kernel ABI misalignment.

You may have to keep an eye on this part.

And I have no experience with physical NIC passthrough on LXC, as I use veth.

That's why I leave the NIC config and hookscript untouched in my create-new-CT script, as use cases differ.

I've only used the stable releases; for what reasons do you use the latest snapshot builds?

For some strange, totally irrational reason I assumed I really needed SQM (and thus a powerful CPU to drive it), but things are working out really well at the moment - using an Asus AC68U with OpenWrt as router (no wifi, no SQM). So for now I'll just leave it AS IS. But it's just a matter of time before I get restless and need to break something :slight_smile:

I think it depends on how you deploy OpenWRT instances, in what roles, and how much time you would like to devote to this sort of activity. So it's pretty personal.

I have a preference for the rolling-update nature of the master branch (snapshot builds) over the backport-and-bugfix-only style of the latest stable branch. I am kind of an enthusiastic OpenWRT user and tester, have a usable buildroot, like the latest and greatest things with tons of fixes and enhancements, and can tolerate some unexpected downtime as I use OpenWRT in my homelab environment.

Basically I am doing a little more than a typical end-user would do.

This is the deciding factor.

It's OK to use the latest stable release: if nothing unexpected happens, just deploy and forget about it, except when there is a new point release. Even more so when you deploy it as a self-contained KVM guest instance.

Running OpenWRT as an LXC instance is another story. We are running OpenWRT code on a non-OpenWRT host kernel, venturing into an area with limited support and a small user base. Thanks to the devs, OpenWRT is shaped to run fine as an LXC guest with minor-to-no modifications. Nevertheless, for combinations of a particular non-OpenWRT host distro and a particular OpenWRT release as guest, related tests are likely not performed. In this sense, stable releases are no longer considered stable, and users of LXC instances are pretty much on their own. If you only use userspace programs and have few interactions with kernel modules, any build would be fine. However, if you are using SQM or the like, you may have to take kernel compatibility into account. Please do note that OpenWRT 19.07 uses kernel 4.14 with an out-of-tree version of CAKE. In this case, I won't bother to evaluate the impact on stability as an LXC instance; I simply avoid builds with older kernels.

I would like to keep the difference between the kernel major version OpenWRT is built against and the one the host is using as small as possible, even if their patches differ.
The current OpenWRT x86_64 master branch uses kernel 5.4, and so does Proxmox VE 6.2. That makes it a natural choice for me.
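A quick way to check the host half of that alignment (the guest half comes from the kernel version listed for the OpenWRT release you build):

```shell
# print the host kernel's major.minor version, e.g. "5.4",
# for comparison with the kernel the OpenWrt build targets
host_kver=$(uname -r | cut -d. -f1,2)
echo "Host kernel: ${host_kver}"
```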

When doing builds and tests myself, reading through commit messages is quite important for me.

First, I can see what has changed and whether it's worth making a certain snapshot build and deploying it.

Second, if something goes wrong, I can quickly git-checkout, identify the problems, make workarounds or revert certain commits, and file reports if I have to.

From my own experience, most of the gotchas are non-fatal and can be easily fixed.

Working with the master branch is easier for me, as many issues get reported and solved upstream as time goes by, so I don't have to work on something already fixed there. On the other hand, a certain bugfix may or may not make it to the stable branch, and users doing the backport themselves completely defeats the purpose of using a stable release.

Finally, LXC instances are cheap and switching is easy. This is the very reason I wrote the Oplxc4pve script: for easier deployment and instance switching. It's not hard to pick a "stable" snapshot build as one of the waypoints of my ongoing journey.

Interesting! Thanks for sharing! Never reflected on the dependency between lxc guest and host kernel when using SQM but that is something I need to pay more attention to in the future.

Hello,

Since yesterday I have been struggling to start up the container with OpenWrt. I built a new one with version 19.07.4 and it doesn't want to start, with or without network interfaces assigned.

The strange thing is that I switched off my other running OpenWrt container (19.07.3) and that one does not start up either, with the same errors:

root@teddybear:~# lxc-start -F -n 100 --logfile=/var/log/lxc100.log --logpriority=debug > error.log
lxc-start: 100: conf.c: lxc_setup_ttydir_console: 1691 Read-only file system - Failed to set mode "0110" to "/dev/pts/9"
lxc-start: 100: conf.c: lxc_setup: 3314 Failed to setup console
lxc-start: 100: start.c: do_start: 1224 Failed to setup container "100"
lxc-start: 100: sync.c: __sync_wait: 41 An error occurred in another process (expected sequence number 5)
lxc-start: 100: start.c: __lxc_start: 1950 Failed to spawn container "100"
lxc-start: 100: conf.c: run_buffer: 323 Script exited with status 1
lxc-start: 100: start.c: lxc_end: 964 Failed to run lxc.hook.post-stop for container "100"
lxc-start: 100: tools/lxc_start.c: main: 308 The container failed to start
lxc-start: 100: tools/lxc_start.c: main: 314 Additional information can be obtained by setting the --logfile and --logpriority options
root@teddybear:~#

This was working before, albeit with the same start-up issue: when I added new network interfaces and shut down the container, it wouldn't start again. I had to remove the new interfaces and add them 'live' while the LXC container was running, then ifup from within OpenWrt.

I am running these in Proxmox 6.2-11, latest.

It may be a Proxmox VE issue; there is no definite answer to this problem though:

You can try downloading a snapshot or stable tar.gz from the official OpenWRT repo to give it a try.
My Oplxc4pve script issues multiple CT starts/stops during an upgrade/switch action, and I run the script a couple of times a day in a test session. I have never experienced a start/stop issue, at least not with an unprivileged OpenWRT LXC instance without any NIC.

Ah yes yes, it might be related to Proxmox. I restarted the host as a last resort; I also had a pending kernel update, so that was due anyway.

Now the issue is gone: OpenWrt 19.07.4 starts and halts properly, and there is no issue with the network adapter configs any more, even with physical passthrough.

It might as well have been a PVE/LXC service issue, or the update fixed it.