Running OpenWrt in a Docker container

Ah, I see. Well, I'm glad you got it working. Please do report any issues you encounter.

Now that I think about it, it would be relatively painless to switch to using a single Dockerfile for all architectures. I think the fact that I'm using two different ones is an artifact of some previous bugs I've since worked out.

I've only personally tested on a Raspberry Pi Zero and a Raspberry Pi 4, but I believe the only functional difference (besides the kernel, which is irrelevant here) is the ABI for which the packages are compiled (armhf, aarch64, and so on).

Did you attempt to run the rpi4 image as-is? I would bet that it works on the Odroid-C4.

Took your advice and reworked the build system to use a single Dockerfile for all builds.
I'm now building both release and snapshot images for four architectures: x86_64, armhf (armvirt-32), aarch64 (armvirt-64), and a special build for the bcm2708 target (Raspberry Pi Zero). Thanks for the suggestion.

The recent release of 19.07.5 provided an opportunity to finally write an upgrade guide, so I did.

Upgrades rely on sysupgrade to make a backup config archive, but there's some Docker-specific stuff to watch out for. I made all the mistakes writing this guide so you don't have to.
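
Roughly, the backup step looks something like this (the container name here is just a placeholder - use whatever yours is called):

docker exec openwrt_1 sysupgrade -b /tmp/backup.tar.gz
docker cp openwrt_1:/tmp/backup.tar.gz .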

@oofnik were you also able to set it up as the default route for a Docker network?

Hi @fvlaicu, there was some discussion about this recently in a GitHub issue: https://github.com/oofnikj/docker-openwrt/issues/18

To summarize: no, I was not able to change the default route of the network. However my host OS (Ubuntu 16.04 in this case) is configured to send ICMP redirects which result in the packets being correctly routed. I believe this is the default configuration in many Linux distros.
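
If you want to check whether your host is set up to send redirects, this sysctl should report 1:

sysctl net.ipv4.conf.all.send_redirects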

Alternative 1: run a DHCP client inside the container in addition to assigning NET_ADMIN privileges, so that it may request an address assignment and default route from OpenWrt.

Alternative 2: modify the network namespace directly from the host with a series of ip netns exec ... commands, bypassing the Docker network configuration entirely.
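
A rough sketch of alternative 2 (the container name and gateway address are only illustrative - substitute your own):

# expose the target container's network namespace to the ip tool
pid=$(docker inspect -f '{{.State.Pid}}' my_container)
sudo mkdir -p /var/run/netns
sudo ln -sf /proc/$pid/ns/net /var/run/netns/my_container
# point its default route at the OpenWrt container's LAN address
sudo ip netns exec my_container ip route replace default via 172.31.1.1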

Hi @oofnik, I had previously found the same :). I thought there might still be a way to achieve this.

What I found easier to do is the following:

  1. I created a bridge interface without any NICs attached to it.
  2. Created an OpenWrt container using LXD, with a profile that has 2 NICs:
    • 1st NIC is of type macvlan, attached to the external interface I wanted to use as egress
    • 2nd NIC is bridged
  3. Created a macvlan Docker network with the bridge as the parent interface and specified the gateway.

Docker will assign IPs automatically to containers in this network.
All that's left to do is to configure OpenWrt and give the gateway IP you specified in the macvlan Docker network as the IP address of the bridge-attached NIC.
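
Inside OpenWrt, that just means setting a static address on the bridge-attached interface - something like this (the interface name 'lan' and the 172.18.0.1/16 address are only examples):

uci set network.lan.ipaddr='172.18.0.1'
uci set network.lan.netmask='255.255.0.0'
uci commit network
/etc/init.d/network restart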

I took inspiration from this: http://www.makikiweb.com/Pi/lxc_openwrt.html

Thank you @fvlaicu, that is very interesting.
I came across @cvmiller's page before I started working on docker-openwrt. Since I am more familiar with the Docker toolset, I opted to go that route instead of learning LXC/LXD; however, there is a lot of feature overlap between the two.

So you're saying that you are using both LXD and Docker together to manage the network config? I find that a bit confusing - could you provide a little more detail on how this works?

I also want to point out that with docker-openwrt, any container attached to the openwrt-lan network in which net.ipv6.conf.all.disable_ipv6=0 is set will automatically get an EUI-64 IPv6 address derived from the prefix advertised by OpenWrt, if one exists.
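
For example, something like this (using a throwaway Alpine container; the sleep just gives SLAAC a moment to assign the address):

docker run --rm --network openwrt-lan \
  --sysctl net.ipv6.conf.all.disable_ipv6=0 \
  alpine sh -c 'sleep 5; ip -6 addr show dev eth0'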

Sorry for the confusion. No, I am only using LXD/OpenWrt for network management/control.

I had another project where I wanted to run Docker inside an LXD container, but that is a separate topic.

I think @oofnik's project using OpenWrt in a Docker container is an excellent idea, and it solves many network problems I have with Docker (e.g. without it, the best you can hope for in IPv6 is NAT66, which is just bad).

If one is more familiar with the Docker toolset, then I think you should take @oofnik's approach. If you are more familiar with LXD, then my approach works. Both create OpenWrt virtual routers (VRs) which have 2 interfaces, and both allow OpenWrt to do the routing/firewall/IPv6 prefix delegation, rather than relying on LXD's or Docker's internal networking IPv6 kludge.

Let me try to explain the network topology I have in my home lab :).
I try to have all the publicly accessible services live in a distinct VLAN. Before deciding to containerize the workloads, I had achieved this through virtualization.
Now, Docker will always SNAT to the default route. However, I wanted all my services to egress through the distinct VLAN I was talking about earlier.
I was able to do this segregation by directly modifying the routes in each container's network namespace, but I found that tedious and error-prone.
Now let me explain how I made this work :slight_smile:
I created a Linux network bridge without any physical interfaces attached to it - let's call this dockerlxcbr.
The LXC container will have 2 network interfaces present in its configuration:

  1. a macvlan network interface, bound to the physical interface - this will be the "default route" of the network (the egress IP of the other containers). This interface will live in my VLAN that has the external services and will be directly routable in my network.
  2. a bridged network interface attached to the dockerlxcbr interface we created previously. Let's say we'll configure this interface to have the IP 172.18.0.1/16 (the first IP in the 172.18.0.0/16 network, a subnet similar to what Docker creates).
    We'll then configure the container as described in @cvmiller's doc.
  3. we'll create a custom Docker network called web and set the parent interface to dockerlxcbr:
docker network create -d macvlan -o parent=dockerlxcbr --subnet 172.18.0.0/16  --gateway 172.18.0.1 --ip-range 172.18.128.0/17 web

  4. Any containers created by Docker that are attached to the new web network will have their default route set to the OpenWrt container (a quick check is shown below).
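
A quick check, given the addressing above:

docker run --rm --network web alpine ip route
# should print something like:
#   default via 172.18.0.1 dev eth0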

With all this being said, the current OpenWrt implementation isn't really cut out for a Docker container.
However, it is right at home with LXC/LXD, since LXC containers by design can have state :slight_smile:

Just set up a Pi 4, with a second TP-Link UE300 gigabit adapter.

Even with a gigabit connection, the processor usage maxed out at 1.5% during a 60 Mbps swarm download. It's barely above idle.

Having an RPi4 running OpenWrt also hosting Docker containers such as Pi-hole, DokuWiki, or even a Mistborn-like setup with easily added options would seriously rock.

FYI, wireless is handled separately through access points.

Hey @oofnik, I love your implementation. I've been tinkering around with it for the past day or so, after I'd had enough of OpenWrt not supporting a few features I want, even when running Docker containers on OpenWrt.
Anyways, I'm having a bit of an issue here: I'm trying to use PPPoE, but I keep getting:

Sat Jan 23 15:00:23 2021 daemon.info pppd[25723]: Plugin rp-pppoe.so loaded.
Sat Jan 23 15:00:23 2021 daemon.info pppd[25723]: RP-PPPoE plugin version 3.8p compiled against pppd 2.4.8
Sat Jan 23 15:00:23 2021 daemon.err pppd[25723]: Couldn't open the /dev/ppp device: Operation not permitted
Sat Jan 23 15:00:23 2021 daemon.err pppd[25723]: Sorry - this system lacks PPP kernel support

When trying to load any PPPoE module inside the container, it can't find anything, since:

no module folders for kernel version 5.10.5-v8+ found

Outside the container:

root@raspberrypi:~# lsmod |grep ppp
pppoe                  28672  0
pppox                  16384  1 pppoe
ppp_generic            45056  2 pppox,pppoe
slhc                   20480  1 ppp_generic

/lib/modules in the container has 5.4.91, but my Pi is in fact running 5.10.5-v8+, so that may be related to these issues.

Would it be better to use Ubuntu or just a 5.4 kernel?

Thanks!
Mark

@fvlaicu thank you for sharing your config. That is a really clever setup.

I can't agree or disagree since I don't have experience with LXC/LXD, but I will say that Docker containers can certainly be stateful as well - the state is simply another filesystem layer on top of the container image. My last OpenWrt container lasted six months until I upgraded to 19.07.5 - adding/removing/upgrading packages, tweaking config, etc.

Very cool project, thanks for sharing - I was not familiar with it.

I am sure it is possible to combine containerized OpenWrt + Mistborn.

@markthehipster I ran into a similar issue when trying to run a GRE tunnel in containerized OpenWrt.

As a full OS distribution (kernel + root filesystem), OpenWrt makes several assumptions that are broken when run in a containerized environment - specifically, the assumption that the modules present in the root filesystem are compatible with the running kernel.

I ended up having to patch the GRE protocol definition to get it working (see https://github.com/oofnikj/docker-openwrt/blob/master/patches/gre.sh.patch). Looking at the ppp proto definition, you may need to do something similar. Commenting out all the insmod lines and running them manually on the host might be all you need.

For GRE, the host kernel will automatically load the relevant GRE modules when a GRE interface is created. I'm not sure this is true for PPPoE, so the modules may need to be loaded manually on the host with modprobe ....
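
Based on your lsmod output, loading them on the host would look something like:

# run as root on the host, not inside the container
modprobe pppoe    # pulls in pppox and ppp_generic as dependencies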

My OpenWrt containers don't have permission to load kernel modules anyway, which means it shouldn't matter whether the modules in the OpenWrt filesystem are incompatible with the running kernel.

I'll see what I can do - thanks for pointing me in the right direction.

Yep, that too.
I suppose if one really wanted to, the whole of /lib/modules could be bind-mounted into the container and the container run in privileged mode, but that would open a very large can of security worms that I'd much rather keep closed...
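
For the record, that would look roughly like this (the image name is just a placeholder) - again, not something I recommend:

docker run -it --rm --privileged \
  -v /lib/modules:/lib/modules:ro \
  my-openwrt-image /sbin/init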

I'm having trouble trying to replace my OpenWrt router with this Docker container - the WAN configuration, to be precise.

In the hardware router, I have the following WAN network config:

config interface 'wan'
	option proto 'pppoe'
	option ipv6 '1'
	option username 'username'
	option password 'password'
	option _orig_ifname 'ptm0'
	option _orig_bridge 'false'
	option ifname 'eth0.100'

However, it's not quite working. I think the issue is how I'm setting up the container networking. I have 2 dedicated NICs: a WAN NIC (directly connected to the fibre ONT) and a LAN NIC that is plugged into a WiFi switch/AP.
My openwrt.conf file looks as follows:

WAN_DRIVER=macvlan
WAN_PARENT=wannic
#...
LAN_DRIVER=bridge
LAN_PARENT=lannic

Or is what I'm trying to do simply not possible?

Hi @gmiranda, I haven't tested this with PPPoE at all as my ISP doesn't use it.

The issue is probably the same as the one described above by @markthehipster. My guess is that you will have to manually load the PPPoE kernel modules on your host (not inside the OpenWrt container).

Additionally you may have to comment out the section in the PPP protocol definition which tries to load the kernel module inside the container: https://github.com/openwrt/openwrt/blob/openwrt-19.07/package/network/services/ppp/files/ppp.sh#L314-L320. Protocol definitions are installed to /lib/netifd/proto/ on a running filesystem.

Hi @oofnik, thanks for all your hard work.
I am having trouble on my Raspberry Pi CM4 trying to run "make build" - for some reason it doesn't parse the square brackets in build.sh, which is really strange as it seems like perfectly good bash to me.

  • I tried running chmod +x openwrt.config && source openwrt.config && ./build.sh on its own and it gives the same fault
  • I tried finding the appropriate commands and stuffing them with the correct variables, but it still doesn't work.
  • Docker works fine with other apps. I installed Domoticz and it ran without issue.

Kernel (uname -a): Linux raspberrypi 5.10.17-v8+ #1403 SMP PREEMPT Mon Feb 22 11:37:54 GMT 2021 aarch64 GNU/Linux

It's running the latest Raspbian, but it's a bit of a pet with a few hacks here and there - nothing related to bash, though. It is running on a 128 GB PCIe NVMe SSD (which is amazing, btw).

Can you shed any light on why the build script might be failing like this?

See below:

root@raspberrypi:/home/pi/docker-openwrt# make build
./build.sh
./build.sh: 29: ./build.sh: [[: not found
./build.sh: 34: ./build.sh: [[: not found
./build.sh: 37: ./build.sh: [[: not found
./build.sh: 40: ./build.sh: [[: not found
Unsupported architecture!
./build.sh: 71: ./build.sh: [[: not found
make: *** [Makefile:7: build] Error 1
root@raspberrypi:/home/pi/docker-openwrt#

I tried the following:

source openwrt.conf
tmpdir=$(mktemp -u -p .)
rootfs_url="https://downloads.openwrt.org/releases/${OPENWRT_SOURCE_VER}/targets/armvirt/64/openwrt-${OPENWRT_SOURCE_VER}-armvirt-64-default-rootfs.tar.gz"
version="https://downloads.openwrt.org/releases/${OPENWRT_SOURCE_VER}/targets/armvirt/64/version.buildinfo"
wget "${rootfs_url}" -O rootfs.tar.gz
wget "${version}" -O version.buildinfo
docker build --build-arg ts="$(date)"  --build-arg version="$(cat version.buildinfo)" -t $IMAGE:$TAG .
Sending build context to Docker daemon  2.533MB
Step 1/13 : FROM scratch
 --->
Step 2/13 : ADD rootfs.tar.gz /
 ---> 2c3ec144e456
Step 3/13 : RUN mkdir -p /var/lock
 ---> Running in 038e826503a1
The command '/bin/sh -c mkdir -p /var/lock' returned a non-zero code: 159

Bash version:

root@raspberrypi:/home/pi/docker-openwrt# bash --version
GNU bash, version 5.0.3(1)-release (arm-unknown-linux-gnueabihf)
Copyright (C) 2019 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Any ideas please?

Thanks,
Leon