I wanted to try uxc, OpenWrt's own minimal container system. It is OCI compliant, so I should be able to use extracted Docker images with it; but let's put that aside for the moment and just get something small running, for example another instance of OpenWrt in a container shell.
So I downloaded the x86_64 rootfs image for this purpose. Due to issues faced later, I replaced that image with my own build, since cttyhack isn't included by default in BusyBox.
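For reference, grabbing a stock rootfs tarball looks roughly like this (the URL is just an example of where the x86/64 snapshot tarballs live; adjust to your release, or substitute your own build):
# cd {DL_PATH}
# wget https://downloads.openwrt.org/snapshots/targets/x86/64/openwrt-x86-64-generic-rootfs.tar.gz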
Okay, so here's how to do it, at least in theory:
# mkdir -p /root/cntr/rootfs
# mkdir -p /root/cntr/overlay
# cd /root/cntr
# crun spec
# cd /root/cntr/rootfs
# tar xvfz {DL_PATH}/openwrt-x86-64-generic-rootfs.tar.gz
Now our container is all set... except for the uxc part. Let's continue with that.
The OCI spec generated by crun spec (config.json) contains "rootfs" as the root filesystem path; you can change this to whatever you like. You can also use other runtimes instead of crun to generate it, for example runc.
It also contains the command to run, which by default is sh.
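For orientation, the relevant bits of the generated config.json look roughly like this (trimmed; exact contents depend on the runtime version):
	"process": {
		"args": [
			"sh"
		],
		...
	},
	"root": {
		"path": "rootfs",
		"readonly": true
	},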
# cd /root/cntr
# uxc create cntr --bundle /root/cntr --write-overlay-path /root/cntr/overlay
Now the container cntr has been created. With uxc list
we get the following:
[ ] cntr created runtime pid: 4566 container pid: 4569
And our syslog shows:
daemon.info ubusd[4570]: loading /usr/share/acl.d/dnsmasq_acl.json
daemon.info ubusd[4570]: loading /usr/share/acl.d/luci-base.json
daemon.info ubusd[4570]: loading /usr/share/acl.d/ntpd.json
daemon.info ubusd[4570]: loading /usr/share/acl.d/wpad_acl.json
daemon.info procd: out: jail: using guest console /dev/pts/2
daemon.info netifd[4571]: jail: exec-ing /sbin/netifd
user.notice : Added device handler type: bonding
user.notice : Added device handler type: 8021ad
user.notice : Added device handler type: 8021q
user.notice : Added device handler type: macvlan
user.notice : Added device handler type: veth
user.notice : Added device handler type: bridge
user.notice : Added device handler type: Network device
user.notice : Added device handler type: tunnel
daemon.err netifd[4571]: netifd_ubus_init(1372): connected as 013760e5
daemon.err netifd[4571]: config_init_wireless(648): No wireless configuration found
daemon.notice netifd: Interface 'loopback' is enabled
daemon.notice netifd: Interface 'loopback' is setting up now
daemon.notice netifd: Interface 'loopback' is now up
daemon.notice netifd: Network device 'lo' link is up
daemon.notice netifd: Interface 'loopback' has link connectivity
So far so good, except for those netifd errors. Wireless obviously isn't working since I don't have wireless on my host, and the other error is probably related to the not-yet-configured network (a veth pair, as explained in this somewhat outdated wiki: https://gitlab.com/prpl-foundation/prplos/prplos/-/wikis/uxc ).
Okay, then we start our container with uxc start cntr
Still no errors, but the system log has:
daemon.info procd: out: jail: prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_RAISE, 5, 0, 0) failed: No such file or directory
daemon.notice netifd: Interface 'loopback' is now down
daemon.notice netifd: Interface 'loopback' is disabled
daemon.notice netifd: Network device 'lo' link is down
daemon.notice netifd: Interface 'loopback' has link connectivity loss
daemon.info netifd[4571]: jail: jail (4572) exited with exit: 0
Okay, serious problems with capabilities. man prctl says that ambient capabilities can only be raised if certain conditions on the other capability sets are met first (I can't remember the exact rule; check the man page if you're interested in the details). So I started to dissect this: by removing caps from config.json's ambient section I eventually got rid of these errors, and I actually added new capabilities to all the other sets (everything but ambient).
Now, delete your container cntr with uxc delete cntr
and make the capabilities section look like this:
	"capabilities": {
		"bounding": [
			"CAP_KILL",
			"CAP_NET_RAW",
			"CAP_AUDIT_WRITE",
			"CAP_NET_BIND_SERVICE"
		],
		"effective": [
			"CAP_KILL",
			"CAP_NET_RAW",
			"CAP_AUDIT_WRITE",
			"CAP_NET_BIND_SERVICE"
		],
		"inheritable": [
			"CAP_KILL",
			"CAP_NET_RAW",
			"CAP_AUDIT_WRITE",
			"CAP_NET_BIND_SERVICE"
		],
		"permitted": [
			"CAP_KILL",
			"CAP_NET_RAW",
			"CAP_AUDIT_WRITE",
			"CAP_NET_BIND_SERVICE"
		],
		"ambient": []
	},
For now, also make the rootfs writable:
	"root": {
		"path": "rootfs",
		"readonly": false
	},
I also added a /tmp tmpfs to the mounts array:
	{
		"destination": "/tmp",
		"type": "tmpfs",
		"source": "tmpfs",
		"options": [
			"nosuid",
			"noexec",
			"nodev",
			"rw"
		]
	}
Also, you could install catatonit; or if you built your own image, there is also a statically built version of tini in tini's build directory (catatonit is already built statically). Or get both and see which works for you. If you decide to use one, copy catatonit and/or tini to the root of the rootfs.
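Getting them into the container is just a copy into the rootfs (the source path here is an example; use wherever you downloaded or built the binary):
# cp {DL_PATH}/catatonit /root/cntr/rootfs/catatonit
# chmod +x /root/cntr/rootfs/catatonit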
If you want to use them, change config.json again like this:
	"args": [
		"/catatonit", "--", "dropbear",
		"-R", "-F"
	],
The advantage of these is that they pass signals on to child processes. But to get that to work, we need to add an environment variable. Note that this part is for BusyBox (and ash/sh) only; other systems have their own ways of doing it. config.json again:
	"env": [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
		"TERM=xterm",
		"ENV=/etc/profile"
	],
Adding "ENV=/etc/profile"
causes the shell to run the profile, which I pretty much cleared of everything that isn't needed in containers, so it looks like this:
export PS1='\u@\h:\w\$ '
export EDITOR=/usr/bin/nano
trap "exit 0" HUP INT QUIT TERM KILL
All this helps uxc's kill command to work; it might still need tweaking. Also, when attached, Ctrl-C kills the terminal.
Those caps, among other things, now allow the use of ping, which everyone wants as it's the easiest way to test connectivity.
Re-create and start your container, results:
# uxc list
[ ] cntr running runtime pid: 22705 container pid: 22760
# uxc state cntr
{
"ociVersion": "1.0.2",
"id": "cntr",
"status": "running",
"pid": 22760,
"bundle": "/root/cntr"
}
And here is the network setup to enable networking for the container; on the host this goes into /etc/config/network:
config device 'veth0'
	option type 'veth'
	option name 'vhost0'
	option peer_name 'virt0'

config interface 'virt0'
	option proto 'static'
	option device 'virt0'
	option ipaddr '10.0.201.2'
	option netmask '255.255.0.0'
	option gateway '10.0.0.2'
	option jail 'cntr'
	option jail_ifname 'host0'
My LAN netmask is 10.0.0.0/255.255.0.0, and 10.0.0.2 is not the gateway per se, it's the host's IP address; 10.0.0.1 is the gateway. But this is how it works. 10.0.200.x and above is outside the address range my dnsmasq hands out, but still reachable within the subnet.
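To apply the change, reload the network configuration (or just reboot); the standard way on OpenWrt is:
# /etc/init.d/network reload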
Now the network works, both to LAN and to WAN, but name service doesn't.
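A quick check from inside the container's shell shows the same (addresses are from my setup above; 8.8.8.8 is just an example of a reachable public IP):
# ping -c 3 10.0.0.1      # LAN gateway: works
# ping -c 3 8.8.8.8       # WAN by IP: works
# ping -c 3 openwrt.org   # name resolution: fails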
With uxc, /etc/resolv.conf is linked to /dev/resolv.conf.d/resolv.conf.auto, and overwriting /etc/resolv.conf doesn't help. On your host (not in the container), check /tmp: you'll find /tmp/resolv.conf.d there, and /tmp/resolv.conf-cntr.d. In resolv.conf.d you'll find your current resolv.conf, but in resolv.conf-cntr.d the file is empty.
With OCI/uxc it is possible to create hooks. I tried to create a createRuntime hook that copies the file from resolv.conf.d to resolv.conf-cntr.d, but it did not work out; the container did not start properly, it got stuck in every possible way, and it could not be killed or deleted. With uxc list
you get the pids of the runtime and the container; by killing both pids you can delete the container. Instead I made this startup script, which also starts the container, to work around the issue; and it does work.
# cat /root/cntr/create.sh
#!/bin/sh
uxc create cntr --bundle /root/cntr --write-overlay-path /root/cntr/overlay
cp /tmp/resolv.conf.d/resolv.conf.auto /tmp/resolv.conf-cntr.d/
sleep 1
uxc start cntr
Without the sleep it will result in errors. One second is a long time, but I didn't go looking for the perfect value to use with usleep.
Now, on to things that do not work.
Do you want to replace podman/docker with uxc? Well, there's a huge gap. I assume you want to be able to attach to the container's shell from time to time; in podman it would be done like this:
podman exec -it cntr /bin/sh
But first, our service. All attempts to run dropbear failed. The correct command-line arguments for dropbear would be "-R" and "-F", maybe also "-P" and "/tmp/dropbear.pid", where the last two set the location of the pid file, since we don't have /tmp/run initially.
You can add dropbear to config.json's process -> args, even with tini or catatonit; I was not able to get it to work.
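For reference, this is the sort of args block I experimented with (one of several variants, building on the catatonit example above; none of them stayed running for me):
	"args": [
		"/catatonit", "--", "dropbear",
		"-R", "-F", "-P", "/tmp/dropbear.pid"
	],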
I tried to add dropbear to the profile, with and without the -F argument. The -F argument keeps it in the foreground, so if I could get it to work I would also have a shell. All attempts to do this failed. So I attached to the shell and tried to execute it there. No errors, nothing; just a clean exit of dropbear. This is where I came to the conclusion that uxc limits the container so that processes cannot have child processes. Yet even as a single process, every attempt I made to run dropbear failed.
At https://github.com/jjlin/docker-image-extract you can find a great script to pull and extract images from Docker Hub. I searched for the tiniest Alpine-based dropbear image and found one. By using it as the root filesystem, keeping the profile we created earlier, I got somewhat better results. I was even able to run dropbear and connect to it, but it still ended in failure. This is probably due to that single-process (no forking allowed) limitation: sh isn't allowed to exec as a child process. After logging in, the connection just ends. Getting a shell via dropbear with the previously mentioned methods also failed.
So, how is this done in podman? Oh, there is no shell. Instead /bin/sh, or whatever command, is executed separately in the same namespaces. You could probably find the namespaces yourself (check the running pid with uxc list
and then check /proc/PID_HERE/ns) and execute sh there, maybe with nsenter? I didn't test this since my testing ended with dropbear not working. A web server probably would have worked, as it rarely forks. This is probably why containers normally run a single process; in the case of a web server, you have other containers for the proxy, PHP, and whatever else, like MySQL. But at least with podman, with little effort, you CAN have child processes and even run all of the above in a single container. Though pods are there for that.
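An untested sketch of that nsenter idea (nsenter comes from util-linux and may need to be installed on the host; the pid is the container pid reported by uxc list):
# uxc list
[ ] cntr running runtime pid: 22705 container pid: 22760
# nsenter -t 22760 -m -u -i -n -p /bin/sh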
Also, one really big downside is that when using uxc, you will have to get familiar with the reset command, as something happens to your terminal with uxc pretty often. Even when viewing logs in another terminal. Especially when attaching.
Also, other container systems are easy: they build the OCI spec automatically from the arguments given when creating the container. Hooks probably actually work there too. As the system has its flaws, it's really time consuming to try to get it working while editing the spec manually.
Killing a container doesn't let you restart it. But it's doable with ubus:
ubus call container.state '{ "spawn": true, "name": "cntr" }'
Hmm... a wrapper script would probably be of great assistance to speed up management while testing; see the sketch below.
If you have problems stopping containers, you might want to force it with signal 9:
uxc kill cntr 9
Though if you end up with that previously mentioned hook issue, this won't work; use the methods I mentioned in that part of the guide.
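That wrapper script could look something like this; an untested sketch that just glues together the commands used above (including create.sh from earlier):
#!/bin/sh
# /root/cntr/ctl.sh - throwaway helper while testing
case "$1" in
	start)   /root/cntr/create.sh ;;
	stop)    uxc kill cntr 9; uxc delete cntr ;;
	restart) uxc kill cntr 9; uxc delete cntr; /root/cntr/create.sh ;;
	respawn) ubus call container.state '{ "spawn": true, "name": "cntr" }' ;;
	state)   uxc state cntr ;;
	*)       echo "usage: $0 start|stop|restart|respawn|state" ;;
esac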
So maybe it's not quite there yet to compete with other container systems. But it's very promising. Up-to-date and thorough documentation is also needed; this post gives some pointers for a jump start. There might be a capability that allows running child processes; someone else can probably find that out.
But still, very promising. Podman, for example, is great. Indeed great, by which I mean big, huge, and most users don't need even a fraction of it. It is possible to write a script that retrieves the CPU usage of a process; I have a mini container management page for podman that works in LuCI, and all it does is start/stop/restart containers, tell which containers are available and their state, CPU usage and memory consumption. Because that is all I need, plus exec -it cntr /bin/sh, uxc isn't that far from it. But then there's the problem that I wasn't able to keep dropbear running with the OpenWrt rootfs, even though I really tried; so, work in progress.
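For the CPU usage part, the idea is nothing fancier than sampling utime+stime from /proc/PID/stat twice and dividing by the interval; a rough sketch (not the actual script from my podman setup, and field counting breaks if the process name contains spaces):
#!/bin/sh
# cpu.sh PID [INTERVAL] - rough per-process CPU% over an interval
PID="$1"
INTERVAL="${2:-2}"
HZ=100   # kernel clock ticks per second, usually 100

ticks() {
	# utime + stime are fields 14 and 15 of /proc/PID/stat
	awk '{ print $14 + $15 }' "/proc/$PID/stat"
}

T1=$(ticks)
sleep "$INTERVAL"
T2=$(ticks)
echo "cpu: $(( (T2 - T1) * 100 / (HZ * INTERVAL) ))% over ${INTERVAL}s"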
Oh, I also want logs. Also, when attached to the shell, your syslog gets filled by procd with every keypress.
If uxc gets to the level I described using with podman, I would very probably use uxc instead of podman.