Problem with local ASU server on Raspberry Pi 4

I have installed and set up a local ASU server using podman on a Raspberry Pi 4 running Alpine Linux 3.23, but I am struggling to get the upgrade to work.

How do I complete this task?

I followed these installation instructions:
(I did not explicitly modify firewall settings.)

and this from @boilerplate4u:

(From thread: LuCI Attended Sysupgrade support thread - #715 by teleport )

The command owut check gives an apparently sensible answer (whether or not updates are available), as does owut upgrade when there is nothing to update.
But owut upgrade fails when an update is available, with this output:

root@bthh5:~# owut upgrade -V 25.12-SNAPSHOT
owut - OpenWrt Upgrade Tool 2026.01.13~2526d84b-r1 (/usr/bin/owut)
ASU-Server     http://IPV4_addr:8080
Upstream       https://downloads.openwrt.org

~
~

Request:
  Version 25.12-SNAPSHOT r32827-00dcdd7451 (kernel 6.12.74)
--
Status:   Error: Internal Server Error
Progress:   0s total =   0s in queue +   0s in build

Build failed in   0s total =   0s in queue +   0s to build:
ERROR: Build failed with status 500 (--version-to 25.12-SNAPSHOT --device lantiq/xrx200:bt_homehub-v5a:squashfs)
The above errors are often due to the upgrade server lagging behind the
build server, first suggestion is to wait a while and try again.

podman checks:

raspi:~$ grep -n "network_mode" /opt/asu/asu/build.py
201:        network_mode="bridge"

raspi:~$ curl -sL http://127.0.0.1:8000/api/v1/overview | python3 -m json.tool | head -20
{
    "latest": [
        "25.12.2",
        "24.10.6"
    ],
    "branches": {
        "SNAPSHOT": {
            "path": "snapshots",
            "enabled": true,
            "snapshot": true,
            "path_packages": "DEPRECATED",
            "package_changes": [
                {
                    "source": "firewall",
                    "target": "firewall4",
                    "revision": 18611
                },
                {
                    "source": "kmod-nft-nat6",
                    "revision": 20282,

raspi:~$ curl -s http://<IPV4_addr>:8080/ | grep "Sysupgrade Server"
        <title>OpenWrt Sysupgrade Server</title>
            <h1>OpenWrt Sysupgrade Server (0.0.0)</h1>
                    <h2>About the Sysupgrade Server</h2>
raspi:~$ podman ps
CONTAINER ID  IMAGE                                      COMMAND               CREATED            STATUS         PORTS                                                       NAMES
ad722ecb0763  docker.io/redis/redis-stack-server:latest  /entrypoint.sh        About an hour ago  Up 40 minutes  127.0.0.1:6379->6379/tcp                                    asu_redis_1
d554211f2d41  docker.io/openwrt/asu:latest               uv run uvicorn --...  About an hour ago  Up 40 minutes  127.0.0.1:8000->8000/tcp                                    asu_server_1
39a2da451f24  docker.io/openwrt/asu:latest               uv run rqworker -...  About an hour ago  Up 40 minutes                                                              asu_worker_1
5d25ebb14553  docker.io/library/caddy:latest             caddy run --confi...  About an hour ago  Up 40 minutes  0.0.0.0:8080->8080/tcp, 80/tcp, 443/tcp, 2019/tcp, 443/udp  asu_caddy_1

I imagine you need something like qemu-user-static to run x86_64 containers on ARM. I think all the SDK-related images (including the ImageBuilder) are x86_64-only, unless something has changed recently.
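If emulation is the issue, it should show up as a missing binfmt handler on the Pi. A quick sanity check (the handler name varies by distro and qemu packaging, so the grep pattern is an assumption):

```shell
# Check whether binfmt_misc has a handler registered for x86-64 binaries,
# which is needed to run x86_64 ImageBuilder containers on an ARM host.
# Handler names vary (e.g. qemu-x86_64); the pattern is a guess.
ls /proc/sys/fs/binfmt_misc/ 2>/dev/null | grep -i 'x86_64' \
  || echo "no x86_64 binfmt handler registered"
```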

@alistair: The 500 error comes from the worker, not the ASU setup itself, which looks correct. Check the actual build failure reason:

podman logs asu_worker_1

That will show exactly why the build failed.

Most likely cause: you are requesting 25.12-SNAPSHOT at a specific revision (r32827-00dcdd7451), and the corresponding ImageBuilder container may not exist on ghcr.io for that exact commit. Try a stable release instead:

owut upgrade -V 25.12.2

If that works, the issue is with the SNAPSHOT target, not your ASU setup. It could also be an upstream problem with the lantiq/xrx200/bt_homehub-v5a profile specifically; it is worth checking whether that target builds successfully on the official server at sysupgrade.openwrt.org.
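One quick way to rule out a profile problem is to confirm the device profile appears in the target's profiles.json on downloads.openwrt.org, the same file the server fetches in the logs further down. A sketch, with the release, target, and profile id taken from the error message above (requires network access):

```shell
# Prints True if the bt_homehub-v5a profile exists upstream for 25.12.2.
curl -s https://downloads.openwrt.org/releases/25.12.2/targets/lantiq/xrx200/profiles.json \
  | python3 -c 'import json,sys; print("bt_homehub-v5a" in json.load(sys.stdin)["profiles"])'
```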

There should be a bunch of information from stderr in the owut output that you appear to have elided. That's what will tell us at least part of the story. Also, on the server itself, you should be able to see what's happening with

asu-server$   podman logs --names asu_worker_1
asu_worker_1 10:03:38 default: asu.build.build(BuildRequest(distro='openwrt', version='SNAPSHOT', version_code='r33744-eba...) (95f6f72a8da5946ee0e3f89c599bdad766236b8b667c806df13f47ee1d3bae17)
asu_worker_1 10:03:38 Pulling ghcr.io/openwrt/imagebuilder:x86-64-master...
asu_worker_1 10:03:40 Pulling ghcr.io/openwrt/imagebuilder:x86-64-master... done
asu_worker_1 10:03:40 Running setup.sh for ImageBuilder
...

That log is not picking up any of my actions:

~ $ podman logs asu_worker_1
12:37:07 Worker d5b1a0faba00461e8fc6639f82200a47: started with PID 5, version 2.7.0
12:37:07 Worker d5b1a0faba00461e8fc6639f82200a47: subscribing to channel rq:pubsub:d5b1a0faba00461e8fc6639f82200a47
12:37:07 *** Listening on default...
12:37:07 Worker d5b1a0faba00461e8fc6639f82200a47: cleaning registries for queue: default
12:50:37 Worker d5b1a0faba00461e8fc6639f82200a47: cleaning registries for queue: default
13:04:08 Worker d5b1a0faba00461e8fc6639f82200a47: cleaning registries for queue: default
13:17:38 Worker d5b1a0faba00461e8fc6639f82200a47: cleaning registries for queue: default
...
repeated roughly every 13.5 minutes.
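Since that log shows only rq housekeeping, it looks like no job ever reached the queue. You can confirm that by asking redis directly for the queue depth. A sketch; it assumes redis-cli is available inside the redis-stack-server image and that the queue uses rq's default key naming:

```shell
# Count queued jobs in rq's default queue; 0 suggests the
# POST /api/v1/build request failed before anything was enqueued.
podman exec asu_redis_1 redis-cli LLEN rq:queue:default
```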
...

I reverted a BPI-R4 to 25.12.0 (using the Image Builder) and tried to upgrade that.

Again, the command owut check --add kmod-phy-aquantia went fine, but owut upgrade --add kmod-phy-aquantia failed, with nothing added to the above log.

(The module kmod-phy-aquantia has been added to the base config of this unit after 25.12.0.)

Try checking the server logs to see whether the requests are getting through: the same logs command, but with asu_server_1 instead of the worker...

Here are the outputs of podman logs asu_server_1

After command owut check

INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/.versions.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/.versions.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/21.02.7/.targets.json "HTTP/1.1 404 Not Found"
INFO:     10.89.0.6:32786 - "GET /json/v1/overview.json HTTP/1.1" 200 OK
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/feeds.conf "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/base/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/packages/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/luci/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/routing/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/telephony/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/video/index.json "HTTP/1.1 200 OK"
INFO:     10.89.0.6:32786 - "GET /json/v1/releases/25.12.2/packages/mips_24kc-index.json HTTP/1.1" 200 OK
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/targets/lantiq/xrx200/packages/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/targets/lantiq/xrx200/profiles.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/targets/lantiq/xrx200/kmods/6.12.74-1-85faff7027ce66bb21945fd8402545f6/index.json "HTTP/1.1 200 OK"
INFO:     10.89.0.6:32786 - "GET /json/v1/releases/25.12.2/targets/lantiq/xrx200/index.json HTTP/1.1" 200 OK

and after owut upgrade (with one file to update)

INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/.versions.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/.versions.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/21.02.7/.targets.json "HTTP/1.1 404 Not Found"
INFO:     10.89.0.6:42914 - "GET /json/v1/overview.json HTTP/1.1" 200 OK
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/feeds.conf "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/base/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/packages/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/luci/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/routing/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/telephony/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/packages/mips_24kc/video/index.json "HTTP/1.1 200 OK"
INFO:     10.89.0.6:42914 - "GET /json/v1/releases/25.12.2/packages/mips_24kc-index.json HTTP/1.1" 200 OK
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/targets/lantiq/xrx200/packages/index.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/targets/lantiq/xrx200/profiles.json "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET https://downloads.openwrt.org/releases/25.12.2/targets/lantiq/xrx200/kmods/6.12.74-1-85faff7027ce66bb21945fd8402545f6/index.json "HTTP/1.1 200 OK"
INFO:     10.89.0.6:42914 - "GET /json/v1/releases/25.12.2/targets/lantiq/xrx200/index.json HTTP/1.1" 200 OK
INFO:     10.89.0.6:42914 - "POST /api/v1/build HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/app/.venv/lib/python3.14/site-packages/redis/connection.py", line 1003, in connect_check_health
    sock = self._connect()
  File "/app/.venv/lib/python3.14/site-packages/redis/connection.py", line 1515, in _connect
    raise err
  File "/app/.venv/lib/python3.14/site-packages/redis/connection.py", line 1499, in _connect
    sock.connect(socket_address)
    ~~~~~~~~~~~~^^^^^^^^^^^^^^^^
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/.venv/lib/python3.14/site-packages/uvicorn/protocols/http/httptools_impl.py", line 420, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        self.scope, self.receive, self.send
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/app/.venv/lib/python3.14/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.14/site-packages/fastapi/applications.py", line 1163, in __call__
    await super().__call__(scope, receive, send)
  File "/app/.venv/lib/python3.14/site-packages/starlette/applications.py", line 90, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/app/.venv/lib/python3.14/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/app/.venv/lib/python3.14/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/app/.venv/lib/python3.14/site-packages/starlette/middleware/exceptions.py", line 63, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/app/.venv/lib/python3.14/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/app/.venv/lib/python3.14/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/app/.venv/lib/python3.14/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/app/.venv/lib/python3.14/site-packages/starlette/routing.py", line 660, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/app/.venv/lib/python3.14/site-packages/starlette/routing.py", line 680, in app
    await route.handle(scope, receive, send)
  File "/app/.venv/lib/python3.14/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/app/.venv/lib/python3.14/site-packages/fastapi/routing.py", line 134, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/app/.venv/lib/python3.14/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/app/.venv/lib/python3.14/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/app/.venv/lib/python3.14/site-packages/fastapi/routing.py", line 120, in app
    response = await f(request)
               ^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.14/site-packages/fastapi/routing.py", line 674, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<3 lines>...
    )
    ^
  File "/app/.venv/lib/python3.14/site-packages/fastapi/routing.py", line 330, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.14/site-packages/starlette/concurrency.py", line 32, in run_in_threadpool
    return await anyio.to_thread.run_sync(func)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.14/site-packages/anyio/to_thread.py", line 63, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        func, args, abandon_on_cancel=abandon_on_cancel, limiter=limiter
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/app/.venv/lib/python3.14/site-packages/anyio/_backends/_asyncio.py", line 2518, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/app/.venv/lib/python3.14/site-packages/anyio/_backends/_asyncio.py", line 1002, in run
    result = context.run(func, *args)
  File "/app/asu/routers/api.py", line 218, in api_v1_build_post
    job: Job = get_queue().fetch_job(request_hash)
               ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.14/site-packages/rq/queue.py", line 336, in fetch_job
    job = self.job_class.fetch(job_id, connection=self.connection, serializer=self.serializer)
  File "/app/.venv/lib/python3.14/site-packages/rq/job.py", line 654, in fetch
    job.refresh()
    ~~~~~~~~~~~^^
  File "/app/.venv/lib/python3.14/site-packages/rq/job.py", line 1011, in refresh
    data = self.connection.hgetall(self.key)
  File "/app/.venv/lib/python3.14/site-packages/redis/commands/core.py", line 5546, in hgetall
    return self.execute_command("HGETALL", name, keys=[name])
           ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.14/site-packages/redis/client.py", line 716, in execute_command
    return self._execute_command(*args, **options)
           ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.14/site-packages/redis/client.py", line 722, in _execute_command
    conn = self.connection or pool.get_connection()
                              ~~~~~~~~~~~~~~~~~~~^^
  File "/app/.venv/lib/python3.14/site-packages/redis/utils.py", line 236, in wrapper
    return func(*args, **kwargs)
  File "/app/.venv/lib/python3.14/site-packages/redis/connection.py", line 3041, in get_connection
    connection.connect()
    ~~~~~~~~~~~~~~~~~~^^
  File "/app/.venv/lib/python3.14/site-packages/redis/connection.py", line 976, in connect
    self.retry.call_with_retry(
    ~~~~~~~~~~~~~~~~~~~~~~~~~~^
        lambda: self.connect_check_health(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<2 lines>...
        lambda error: self.disconnect(error),
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/app/.venv/lib/python3.14/site-packages/redis/retry.py", line 132, in call_with_retry
    raise error
  File "/app/.venv/lib/python3.14/site-packages/redis/retry.py", line 120, in call_with_retry
    return do()
  File "/app/.venv/lib/python3.14/site-packages/redis/connection.py", line 977, in <lambda>
    lambda: self.connect_check_health(
            ~~~~~~~~~~~~~~~~~~~~~~~~~^
        check_health=True, retry_socket_connect=False
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ),
    ^
  File "/app/.venv/lib/python3.14/site-packages/redis/connection.py", line 1025, in connect_check_health
    raise e
redis.exceptions.ConnectionError: Error 111 connecting to localhost:6379. Connection refused.

I had a similar problem in my Proxmox Alpine LXC container.

asu_server_1 fails to connect to asu_redis_1.

The problem went away after applying some changes to my podman-compose.yml:

/opt/asu/podman-compose.yml
version: "3"

services:
  redis:
    image: redis/redis-stack-server
    ports:
      - "127.0.0.1:6379:6379"

  server:
    image: "docker.io/openwrt/asu:latest"
    build:
      context: .
      dockerfile: Containerfile
    restart: unless-stopped
    command: uv run uvicorn --host 0.0.0.0 asu.main:app
    env_file: .env
    environment:
      REDIS_URL: "redis://redis:6379/0"
    volumes:
      - $PUBLIC_PATH/store:$PUBLIC_PATH/store:ro
      - $PUBLIC_PATH/logs:$PUBLIC_PATH/logs:ro
    ports:
      - "127.0.0.1:8000:8000"
    depends_on:
      - redis

  worker:
    image: "docker.io/openwrt/asu:latest"
    build:
      context: .
      dockerfile: Containerfile
    restart: unless-stopped
    command: uv run rqworker --logging_level INFO --url redis://redis default
    env_file: .env
    environment:
      REDIS_URL: "redis://redis:6379/0"
    volumes:
      - $PUBLIC_PATH:$PUBLIC_PATH:rw
      - $CONTAINER_SOCKET_PATH:$CONTAINER_SOCKET_PATH:rw
    depends_on:
      - redis

  caddy:
    image: caddy:latest
    ports:
      - "0.0.0.0:8080:8080"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    depends_on:
      - server

I also created the file /opt/asu/.env:

/opt/asu/.env
PUBLIC_PATH=/opt/asu/public
CONTAINER_SOCKET_PATH=/run/podman/podman.sock
ALLOW_DEFAULTS=1
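The worker talks to podman through the socket named in CONTAINER_SOCKET_PATH, so for a rootful setup the podman API service must actually be serving it. A quick sanity check on the host (path taken from the .env above; rootful layout assumed):

```shell
# Verify the podman API socket the worker mounts actually exists.
test -S /run/podman/podman.sock \
  && echo "podman socket present" \
  || echo "podman socket missing - is the podman API service running?"
```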

The only redis-related setup I needed was to use the correct uid in the socket path:

$ id
uid=1000(efahlgren) ...

$ grep CONT .env
CONTAINER_SOCKET_PATH=/run/user/1000/podman/podman.sock

Your podman seems to run rootless, which explains why podman.sock is on a uid-specific path.

The setup instructions from @Boilerplate4U run podman rootful (on the root account, uid=0).

The problem I had was reaching the redis TCP port on 127.0.0.1. Configuring REDIS_URL to connect directly to the redis container allowed the connect() call to succeed.
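That matches the traceback above: inside the server container, "localhost" is the container's own loopback, where nothing listens on 6379, while the compose service name "redis" resolves over the compose network to the asu_redis_1 container. You can see the difference from inside the running server (a sketch; it assumes getent is available in the asu image):

```shell
# Show what each hostname resolves to inside the server container:
# "localhost" is the container's loopback, "redis" the redis container.
podman exec asu_server_1 sh -c 'getent hosts localhost redis'
```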

Correct, same as the production server at sysupgrade.openwrt.org. Podman was chosen specifically over docker due to its pain-free rootless setup, giving improved security via better isolation and sandboxing of the server, worker and build containers.