Running OpenWrt in a Docker container

@oofnik, Do you have a recommended procedure for upgrades? Is it just:

  1. do a backup through LuCI
  2. regenerate and start the image with the new version of OpenWrt
  3. restore the backup through LuCI

Also, had you considered using the armvirt 32/64 rootfs archives instead of extracting the rootfs from the RPi image? It would seem that you could streamline the workflow using that image instead.

I've been meaning to write an upgrade guide...

Basically:

  • backup through LuCI
  • pull or build latest image
  • make clean to delete the container and Docker networks
  • edit openwrt.conf to point to latest image if needed
  • make run, log in through LuCI, and import your previous configuration.

If you have added a bunch of packages to the container that are not in the base image, it's a good idea to run opkg list-installed and save the output somewhere so you have a reference for re-installing the packages you need, as sketched below.
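For reference, the whole flow might look something like this (a rough sketch; the backup/restore steps happen through LuCI as described above, and paths are examples):

# Inside the running container, before upgrading: save the package list
opkg list-installed > /root/installed-packages.txt

# On the host: tear down the old container and networks, point the
# config at the new image, and bring it back up
make clean
vi openwrt.conf    # update the image tag to the new release if needed
make run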

Re: armvirt images, it wouldn't save much time since we're basically doing the same thing (starting from scratch and adding a rootfs tarball). See https://gitlab.com/openwrt/docker/-/blob/master/Dockerfile.rootfs

Well, I'm using a non-RPi ARM SBC, so I wasn't sure what the differences in the RPi image were. The Dockerfiles seem to modify the images quite differently (I assume due to the starting packages). I wasn't thinking that using armvirt would save time, more that it would mean not having to maintain two different workflows (no need for build_rpi.sh or Dockerfile.rpi). I added some configuration options to openwrt.conf to allow setting the platform and built the image for my Odroid-C4 using armvirt/64. I've only done light testing, but it all seems to work well so far.
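For illustration, the kind of options I mean would look something like this in openwrt.conf (the variable names here are hypothetical, not necessarily what I actually used):

# Hypothetical openwrt.conf additions for selecting the target platform
ARCH=aarch64                # armvirt/64 for the Odroid-C4
OPENWRT_VERSION=19.07.5     # release to pull the rootfs from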

Ah, I see. Well I'm glad you got it working. Please do report any issues you encounter.

Now that I think about it, it would be relatively painless to switch to using a single Dockerfile for all architectures. I think the fact that I'm using two different ones is an artifact of some previous bugs I've since worked out.

I've only personally tested on a Raspberry Pi Zero and a Raspberry Pi 4, but I believe the only functional difference (besides the kernel, which is irrelevant here) is the ABI for which the packages are compiled (armhf, aarch64, and so on).

Did you attempt to run the rpi4 image as-is? I would bet that it works on the Odroid-C4.

Took your advice and reworked the build system to use a single Dockerfile for all builds.
I'm now building both release and snapshot images for four architectures: x86_64, armhf (armvirt-32), aarch64 (armvirt-64), and a special build for the bcm2708 target (Raspberry Pi Zero). Thanks for the suggestion.

The recent release of 19.07.5 provided an opportunity to finally write an upgrade guide, so I did.

Upgrades rely on sysupgrade to make a backup config archive, but there's some Docker-specific stuff to watch out for. I made all the mistakes writing this guide so you don't have to.
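The backup itself is standard OpenWrt procedure; copying it out of the container is the Docker-specific part (the container name openwrt here is an assumption):

# Inside the container: archive all config files
sysupgrade -b /tmp/backup.tar.gz

# On the host: copy the archive out before destroying the container
docker cp openwrt:/tmp/backup.tar.gz ./backup.tar.gz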

@oofnik were you able to also set it up as the default route for a docker network?

Hi @fvlaicu, there was some discussion about this recently in a GitHub issue: https://github.com/oofnikj/docker-openwrt/issues/18

To summarize: no, I was not able to change the default route of the network. However my host OS (Ubuntu 16.04 in this case) is configured to send ICMP redirects which result in the packets being correctly routed. I believe this is the default configuration in many Linux distros.

Alternative 1: run a DHCP client inside the container in addition to assigning NET_ADMIN privileges, so that it may request an address assignment and default route from OpenWrt.
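A minimal sketch of that first alternative (image and network names are assumptions; udhcpc is the BusyBox DHCP client that ships with Alpine):

# Attach a throwaway container to the OpenWrt LAN network and let it
# request an address and default route over DHCP (needs NET_ADMIN)
docker run -d --cap-add NET_ADMIN --network openwrt-lan \
  alpine udhcpc -f -i eth0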

Alternative 2: modify the network namespace directly from the host with a series of ip netns exec ... commands, bypassing the Docker network configuration entirely.
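The second alternative might look like this from the host (the container name and gateway address are placeholders):

# Expose the container's network namespace to iproute2
pid=$(docker inspect -f '{{.State.Pid}}' mycontainer)
sudo mkdir -p /var/run/netns
sudo ln -sf /proc/$pid/ns/net /var/run/netns/mycontainer

# Replace the default route inside the namespace, bypassing Docker
sudo ip netns exec mycontainer ip route replace default via 172.18.0.1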

Hi @oofnik, I had previously found the same :). I thought there might still be a way to achieve this.

What I found easier to do is the following:

  1. I created a bridge interface without any NICs attached to it.
  2. Created an OpenWrt container using LXD, with a profile that has two NICs for it:
    • the first NIC is of type macvlan, attached to the external interface I wanted to use as an egress
    • the second NIC is bridged
  3. Created a macvlan Docker network with the bridge as the parent interface and specified the gateway.

Docker will assign IPs automatically to containers in this network. All that's left to do is to configure OpenWrt and give the bridge-attached NIC the gateway IP you specified in the macvlan Docker network.

I was inspired by this: http://www.makikiweb.com/Pi/lxc_openwrt.html

Thank you @fvlaicu, that is very interesting.
I came across @cvmiller's page before I started working on docker-openwrt. Since I am more familiar with the Docker toolset, I opted to go that route instead of learning LXC/LXD; however, there is a lot of feature overlap between the two.

So you're saying that you are using both LXD and Docker together to manage the network config? I find that a bit confusing - could you provide a little more detail on how this works?

I also want to point out that with docker-openwrt, any container attached to the openwrt-lan network with net.ipv6.conf.all.disable_ipv6=0 set will automatically get an EUI-64 IPv6 address derived from the prefix advertised by OpenWrt, if one exists.
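For example (the image name is arbitrary; --sysctl is a standard Docker flag):

# Run a container on the OpenWrt LAN with IPv6 enabled; it should pick
# up a SLAAC (EUI-64) address from OpenWrt's router advertisements
docker run --rm --network openwrt-lan \
  --sysctl net.ipv6.conf.all.disable_ipv6=0 \
  alpine sh -c 'sleep 5; ip -6 addr show dev eth0'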

Sorry for the confusion. No, I am only using LXD/OpenWrt for network management/control.

I had another project where I wanted to run Docker inside an LXD container, but that is a separate matter.

I think @oofnik's project running OpenWrt in a Docker container is an excellent idea, and it solves many network problems I have with Docker (e.g. without it, the best you can hope for with IPv6 is NAT66, which is just bad).

If one is more familiar with the Docker toolset, then @oofnik's approach is the way to go. If you are more familiar with LXD, then my approach works. Both create OpenWrt virtual routers with two interfaces, and both let OpenWrt do the routing/firewalling/IPv6 prefix delegation rather than relying on LXD's or Docker's internal IPv6 networking kludge.

Let me try to explain the network topology I have in my home lab :).
I try to have all the publicly accessible services live in a distinct VLAN. Before deciding to containerize the workloads, I had achieved this through virtualization.
Now, Docker will always SNAT to the default route, but I wanted all my services to egress through the distinct VLAN I mentioned earlier.
I was able to do this segregation by directly modifying the routes in each container's network namespace, but I found that tedious and error-prone.
Now let me explain how I made this work :)
I created a Linux network bridge, without any physical interfaces attached to it - let's call it dockerlxcbr.
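Creating an empty bridge like that is a one-liner with iproute2 (a sketch; making it persistent across reboots depends on your distro's network configuration):

sudo ip link add dockerlxcbr type bridge
sudo ip link set dockerlxcbr up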
The LXC container will have two network interfaces present in its configuration:

  1. a macvlan network interface, bound to the physical interface - this will be the "default route" of the network (the egress IP of the other containers). This interface will live in the VLAN that hosts my external services and will be directly routable on my network.
  2. a bridged network interface attached to the dockerlxcbr bridge we created previously. Let's say we configure this interface with the IP 172.18.0.1/16 (the first IP in the 172.18.0.0/16 network, a subnet similar to what Docker creates).
    We'll then configure the container as described in @cvmiller's doc.
  3. we'll create a custom Docker network called web and set the parent interface to dockerlxcbr:

docker network create -d macvlan -o parent=dockerlxcbr --subnet 172.18.0.0/16 --gateway 172.18.0.1 --ip-range 172.18.128.0/17 web

  4. Any container created by Docker and attached to the new web network will have its default route set to the OpenWrt container.
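For reference, the LXD side of this could be wired up roughly like so (the profile name and the physical parent interface enp3s0 are assumptions):

# Hypothetical LXD profile with the two NICs described above
lxc profile create openwrt-vr
lxc profile device add openwrt-vr eth0 nic nictype=macvlan parent=enp3s0
lxc profile device add openwrt-vr eth1 nic nictype=bridged parent=dockerlxcbr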

With all this being said, the current OpenWrt implementation isn't really cut out for a Docker container.
However, it is right at home with LXC/LXD, since LXC containers by design can have state :)

Just set up a Pi 4 with a second TP-Link UE300 gigabit adapter.

Even with a gigabit connection, processor usage maxed out at 1.5% during a 60 Mbps swarm download. It's barely idling.

Having an RPi4 running OpenWrt while also hosting Docker containers such as Pi-hole, DokuWiki, or even a Mistborn-like setup with easily added options would seriously rock.

FYI, wireless is handled separately through access points.

Hey @oofnik, I love your implementation. I've been tinkering with it for the past day or so, after I had enough of OpenWrt not supporting a few features I want, even when running Docker containers on OpenWrt.
Anyways, I'm having a bit of an issue: I'm trying to use PPPoE, but I keep getting:


Sat Jan 23 15:00:23 2021 daemon.info pppd[25723]: Plugin rp-pppoe.so loaded.
Sat Jan 23 15:00:23 2021 daemon.info pppd[25723]: RP-PPPoE plugin version 3.8p compiled against pppd 2.4.8
Sat Jan 23 15:00:23 2021 daemon.err pppd[25723]: Couldn't open the /dev/ppp device: Operation not permitted
Sat Jan 23 15:00:23 2021 daemon.err pppd[25723]: Sorry - this system lacks PPP kernel support

When trying to load any PPPoE module inside the container, it cannot be found:


no module folders for kernel version 5.10.5-v8+ found

Outside:


root@raspberrypi:~# lsmod |grep ppp
pppoe                  28672  0
pppox                  16384  1 pppoe
ppp_generic            45056  2 pppox,pppoe
slhc                   20480  1 ppp_generic

/lib/modules in the container has 5.4.91, but my Pi is in fact running 5.10.5-v8+, so that may be related to these issues.

Would it be better to use Ubuntu or just a 5.4 kernel?

Thanks!
Mark

@fvlaicu thank you for sharing your config. That is a really clever setup.

I can't agree or disagree since I don't have experience with LXC/LXD, but I will say that Docker containers can certainly be stateful as well - the state is simply another filesystem layer on top of the container image. My last OpenWrt container lasted six months until I upgraded to 19.07.5 - adding/removing/upgrading packages, tweaking config, etc.

Very cool project, thanks for sharing. I am not familiar with it.

I am sure it is possible to combine containerized OpenWrt + Mistborn.

@markthehipster I ran into a similar issue when trying to run a GRE tunnel in containerized OpenWrt.

As a full OS distribution (kernel + root filesystem), OpenWrt makes several assumptions that are broken when run in a containerized environment - specifically, that the modules present in the root filesystem are compatible with the running kernel.

I ended up having to patch the GRE protocol definition to get it working (see https://github.com/oofnikj/docker-openwrt/blob/master/patches/gre.sh.patch). Looking at the ppp proto definition, you may need to do something similar. Commenting out all the insmod lines and running them manually on the host might be all you need.

For GRE, the host kernel will automatically load the relevant GRE modules when a GRE interface is created. I'm not sure this is true for PPPoE so they may need to be loaded manually on the host with modprobe ....
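Based on the lsmod output above, preloading the stack on the host would look something like this (the modules-load.d path assumes a systemd host):

# Load the PPPoE module stack on the host; modprobe resolves the
# dependencies (pppox, ppp_generic, slhc) automatically
sudo modprobe pppoe

# Optionally make it persistent across reboots
echo pppoe | sudo tee /etc/modules-load.d/pppoe.conf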

My OpenWrt containers don't have permission to load kernel modules anyway, which means it shouldn't matter whether the modules in the OpenWrt file system are incompatible with the running kernel.

Thank you, I'll see what I can do, thanks for pointing me in the right direction.

Yep, that too.
I suppose if one really wanted to, the whole of /lib/modules could be bind-mounted into the container and the container run in privileged mode, but that would open a very large can of security worms that I'd much rather keep closed...
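For completeness, that discouraged setup would amount to something like this (not recommended, for the reasons above; the image name is a placeholder):

# Privileged container with the host's modules visible - avoid in practice
docker run -d --privileged \
  -v /lib/modules:/lib/modules:ro \
  my-openwrt-image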