Running OpenWrt inside Docker - /sbin/init stuck

Hi,

I created a Docker image from scratch using the trunk x86 rootfs build:

FROM scratch

ADD openwrt-x86-generic-rootfs.tar.gz /

RUN mkdir -p /var/lock \
  && mkdir -p /var/run \
  && /etc/init.d/dropbear enable

EXPOSE 22

USER root

CMD ["/sbin/init"]

However, the /sbin/init process gets stuck at

ip -4 address flush dev eth0

with 100% CPU usage indefinitely.

Does anyone have experience running OpenWrt in Docker? Once I get it to boot, I will experiment with creating macvlan interfaces on the host and exposing them to OpenWrt inside Docker through namespace manipulation (rough sketch below). The Docker-managed macvlan bridge / trunk mode doesn't seem to fit the use case of a router.
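
Something along these lines is what I have in mind; the interface name enp3s0 and container name openwrt are placeholders and I haven't tested it yet:

ip link add macvlan-wan link enp3s0 type macvlan mode bridge   # macvlan created on the host
pid=$(docker inspect -f '{{.State.Pid}}' openwrt)              # container's init PID on the host
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/openwrt                # expose its netns to "ip netns"
ip link set macvlan-wan netns openwrt                          # hand the macvlan over to the container
ip netns exec openwrt ip link set macvlan-wan up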

My goal is first to get OpenWrt running with at least two macvlan interfaces (one LAN, one "WAN") off a single physical interface on the host; from there it should be able to function as a transparent SQM bridge. Later on I can experiment further with it acting as a full router, either with two macvlan interfaces off one physical interface or with two physical interfaces passed through.

Thanks a lot!

I terminated the stuck command from the host, and the image proceeded to boot just fine. Yay!
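
For reference, this is roughly how I killed it from the host (container name and PID are placeholders):

docker top openwrtc1    # find the host PID of the stuck "ip -4 address flush dev eth0"
sudo kill <pid>         # kill it; /sbin/init then continues booting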

It's not worth it. The fact that containers share the same kernel as the host OS makes running OpenWrt inside a container a no-go for anything that isn't strictly a userspace application, unless you happen to have all the "non-standard" kernel modules loaded in the host OS (iptables should work).
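
For example, anything SQM-related depends on modules being available in the host kernel; the module names below are just the usual SQM ones, adjust for your setup:

lsmod | grep -E 'sch_cake|sch_fq_codel|ifb'   # check what the host already has loaded
sudo modprobe sch_fq_codel                    # load on the host if the module exists
sudo modprobe ifb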

OpenWrt is also restricted from touching the kernel in any way (as expected of a container sharing the host kernel). Things like adding an additional interface for downlink shaping, bridging interfaces, or in general doing anything to network interfaces are not permitted.

I saw someone on Reddit claiming he was able to run OpenWrt inside LXD, but I fail to see how it's different from Docker, unless LXD passes full control of the hardware to the container.

I will probably just go with KVM, or, if I really feel like it, play with LXD a little in the future.

I discovered a Docker parameter for creating a new container:
--privileged

This lets OpenWrt inside the container manipulate and interact with the kernel as it sees fit.
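
For example (image and container names are just my local placeholders):

sudo docker run -d --name openwrt-priv --privileged openwrt-x86-trunk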

Now it actually looks like, with some work porting OpenWrt-specific kernel modules to the regular desktop Linux distro acting as the host OS, OpenWrt could really run inside a container without much restriction.


A privileged container shouldn't be needed to make OpenWrt usable.

Here are scripts that create an OpenWrt image for LXD which can be used in an unprivileged container: https://github.com/mikma/lxd-openwrt
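
The rough workflow after building is to import the resulting image into LXD and launch it, something like this (the tarball name and alias are placeholders, not necessarily what the scripts output):

lxc image import openwrt-x86-64.tar.gz --alias openwrt
lxc launch openwrt openwrt1
lxc exec openwrt1 -- /bin/ash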

Awesome!

Do you mind sharing more about how you pass network interfaces from the host to the containerized OpenWrt? Is OpenWrt able to, for example, bridge the interfaces it is given / bring interfaces down and up / change basic network config like IP address, gateway, etc.?

I was really hoping to use a macvlan + gateway configuration to force traffic to go in and out of a single Ethernet port (using a laptop as the container host!). I really hope this works out :slight_smile:

Yes to all of the above.

I think macvlan should work, but I have mainly used a regular bridge and Open vSwitch, which gives you more flexibility since you can use a VLAN trunk carrying multiple tagged VLANs.

My Open vSwitch bridge is called ovsbr and I use the following device config in LXD:

eth0:
  name: eth0
  nictype: bridged
  parent: ovsbr
  type: nic
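
For reference, the same device can be added from the command line (container name openwrt1 is a placeholder):

lxc config device add openwrt1 eth0 nic nictype=bridged parent=ovsbr name=eth0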

Awesome - I figured out I just needed to add the NET_ADMIN capability when creating the container.

example:

sudo docker run -d --name openwrtc1 --hostname openwrtc1 --net macvlan1 --ip=192.168.4.191 --cap-add NET_ADMIN openwrt-x86-trunk

The stuck process

ip -4 address flush dev eth0

is no longer an issue, and a vanilla OpenWrt x86 image boots just fine.

I looked at some benchmarks, and it seems KVM somehow has an edge over LXD and Docker in most of them. That was quite unexpected, but I suppose one of the key advantages of containers is much higher deployment density.

I am still debating whether I should go with a container or KVM. If I go with a container I would have to recompile the host OS kernel to include modules like sch_cake, which is harder to maintain over time. Which Linux distro do you use?


Hi, for testing I removed the file containing the flush, and /sbin/init seems to run without the need for manually added network interfaces. I think it's kind of an odd solution, but it seems to work. Anyone got a better idea?

$ cat Dockerfile 
FROM aparcar/openwrt-rootfs:latest

MAINTAINER Paul Spooren <mail@aparcar.org>

ADD var.tar.gz /

RUN mkdir -p /var/run \
 && mkdir -p /var/lock \
  && /etc/init.d/network disable \
  && rm /lib/preinit/10_indicate_preinit

EXPOSE 80 443 22

USER root

CMD ["/sbin/init"]