Any attempt to run an LLM on OpenWrt?

I'm new to OpenWrt and new to development. Has there been any attempt to run LLM models on it?
I've read that Python isn't very efficient on OpenWrt, but regardless, can we run an LLM such as Llama 2?

Please post the output of

ubus call system board

I'm asking about the general status, not only my device.

Anyway, I'll post mine here (some info masked):

{
        "kernel": "5.15.150",
        "hostname": "...",
        "system": "ARMv8 Processor rev 4",
        "model": "...",
        "board_name": "...",
        "rootfs_type": "squashfs",
        "release": {
                "distribution": "ImmortalWrt",
                "version": "23.05.2",
                "revision": "r27625-416c8c5c91",
                "target": "ipq807x/generic",
                "description": "ImmortalWrt 23.05.2 r27625-416c8c5c91"
        }
}

Since you're running ImmortalWrt, please ask them. That is not from the official OpenWrt project.


Again, I'm not asking about my device. I'm asking whether there's a general effort to run LLMs on OpenWrt. It's sad to see such an exclusionary answer.

OpenWrt is "just" a Linux-based OS; you can run whatever you like on it. Python isn't any more or less efficient on OpenWrt than on other OSes.

To me it doesn't make any sense to try to run LLMs on the same machine or VM that is running OpenWrt.

Thanks, Arie.
I'm not sure we could even install PyTorch and other large packages. Maybe it's better to compile and run C++ code instead, such as https://learn.arm.com/learning-paths/servers-and-cloud-computing/llama-cpu/llama-chatbot/ (see the sketch below).
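Something like this might work for cross-compiling llama.cpp with the OpenWrt/ImmortalWrt SDK. This is only a rough sketch: the SDK path, toolchain directory name, and the llama-cli binary name are assumptions, so adjust them for your actual SDK and target.

    # On a Linux build machine, not on the router itself
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    # Hypothetical SDK toolchain path; ipq807x is aarch64 (Cortex-A53)
    TOOLCHAIN=~/sdk/staging_dir/toolchain-aarch64_cortex-a53_gcc-12.3.0_musl
    cmake -B build \
        -DCMAKE_SYSTEM_NAME=Linux \
        -DCMAKE_SYSTEM_PROCESSOR=aarch64 \
        -DCMAKE_C_COMPILER=$TOOLCHAIN/bin/aarch64-openwrt-linux-musl-gcc \
        -DCMAKE_CXX_COMPILER=$TOOLCHAIN/bin/aarch64-openwrt-linux-musl-g++
    cmake --build build --config Release -j
    # Then copy the resulting llama-cli binary and a small quantized
    # .gguf model to the router and run it there

Even then, a router with 512 MB to 1 GB of RAM will only fit the smallest quantized models, and generation will be very slow.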

Of course, maybe it makes more sense to just call a public API...

It makes way more sense to call APIs, considering the limited CPU and RAM of typical OpenWrt devices.
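For example, something along these lines from the router itself. This assumes curl with TLS support is installed (opkg install curl ca-bundle) and an OpenAI-compatible endpoint; the endpoint and model name are just examples, swap in whatever provider you use.

    curl https://api.openai.com/v1/chat/completions \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -d '{
              "model": "gpt-4o-mini",
              "messages": [{"role": "user", "content": "Hello from my router"}]
            }'

The router only needs network connectivity and a few megabytes for curl, instead of gigabytes of RAM for model weights.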
