We need to talk about why OpenWrt doesn't support automatic updates or saving configurations across updates, about not-invented-here syndrome (NIHS), and about the developer clique

Everyone knows that we can't deploy OpenWrt to a user who can't maintain it themselves, because OpenWrt can't update itself. This restricts the use of OpenWrt to a small clique of users measured in the thousands, rather than ANYONE, who are measured in billions.

Everyone knows that OpenWrt doesn't save packages across upgrades, so anyone who isn't already capable of running a local IT department can't keep a complex system up to date and working across upgrades.

Everyone knows that OpenWrt development stagnated and some developers split off to start LEDE.

For the good of the project today and in the future, I want to encourage all the "hardcore nerds" to think of the bigger picture and the millions of people out there who need good networking hardware powered by good networking software like OpenWrt. When I see a member of the community write code to make the system work better for the everyday user (like https://github.com/openwrt/openwrt/pull/1310 and there are others), only to be shut down by project leaders because "it's complicated", I see the fate of too many open source projects: death by self-strangulation, the programmers not seeing and understanding the users they're programming for. It's not about the feature or the code, it's about the effort YOU put in to make the SYSTEM work for MORE people.

I get that you probably work on OpenWrt for your own satisfaction. But I encourage you to look long-term at a time when there is no OpenWrt because you didn't include the convenient features that make it possible for 10x or 100x or 100,000x more human beings to adopt OpenWrt into their homes and workplaces.


It seemed to me that the suggestion was to introduce this as a separate package, possibly wrapping sysupgrade, which in my opinion is a sensible approach for as-yet-unproven code.

The code in the PR you linked added a lot of complexity and hard-coded assumptions (built-in certificates, scraping of HTML directory listings to discover versions, assumptions about future version and feed formats, etc.).

This is, to some extent, necessary in order to have something working now, but it is not the kind of code that should end up in the low-level system upgrade program, given that we do not even know how downloads, version formats, packages etc. will look ten years down the road. For sure the built-in PGP certs will have expired by then.

Being part of sysupgrade, and thus of the 'base-files' package, will also prevent users of this functionality from upgrading the package on its own to conform to the latest release requirements.

As a separate, installable, arch-agnostic package, possibly even in the base repository, the linked PR would be a suitable interim solution for some use cases, but in the long run a lot of things have to be done to enable proper automatic updates, such as:

  • Long term release planning
  • Well known procedures and update notification mechanisms
  • Proper management of trust anchors and involved certificates
  • Release quality management

I think the psychology of the project's management is what I'm talking about rather than any given merge request. We can move mountains to migrate to a new kernel, but features that only "normal" people would use remain bridges too far for years. This is about time and energy dedicated to things that aren't for us. You don't need the things I'm talking about. The indefinite postponement of "consumer"-facing features is, I think, a toxic trend that naturally creeps into open source projects, and we want to avoid it by focusing on what the next million users of the project will need.


Applause, applause.
Having written my first (assembler) code almost half a century ago, I feel qualified to remind everyone here of an "ancient" coding principle: "egoless programming". It seems to be forgotten in the modern universe of open source.


I'd never upgraded or installed packages on a router until I encountered OpenWrt (or used a full Linux distro as a router). For other devices, I usually get updated firmware from the OEM's website and flash it.

Or...the OEM has a built-in feature to do this for me...most self-updating routers no longer interact with the user anyway, nor offer a method in the GUI to upload firmware. These features sound like a good idea; I wonder why the developer (who seems to be the OP) didn't finish the code?

I also wonder how this works if a new package is needed to establish connectivity after said upgrade?

I've read your posts multiple times, but I am really not sure what you're getting at. Basically you complain that developers chose to prioritize things which are not important or useful to you, which is fine. But I fail to understand why I should invest more of my spare time to attract more people that ... generate more work.

I mean, sure, fame and praise are some kind of compensation for the work being done, and I cannot speak for the others here, but it's not really my personal goal to support hundreds of thousands of people or installations.


These upgrade helpers are very hard to get right, without creating a huge burden for the future. Library names (sonames) change, packages go away or are being replaced (e.g. radvd --> odhcpd, uclibc --> musl, even more so for leaf packages), configurations aren't necessarily compatible between (major) version bumps. What helps for the simple cases, looking over the short range (e.g. maintenance releases of a single major version) will break spectacularly over longer terms (and they will require continued, very active maintenance to keep working at all).


One could argue that supporting hundreds of devices, upstreaming the code for it, providing infrastructure, help and sources for everyone to use the work is "egoless" enough.

But yes, maybe the principle of FOSS development fundamentally contradicts the egoless programming principle, at least when developers program for their personal satisfaction and not for the greater good of the community.


You are not completely correct.

How often do you see normal users update their routers? Even my friends who are programmers never touch their router as long as it works. Not to mention, many router companies ship only 3-5 firmware updates within the lifetime of the router hardware and then EOL it. So why do you think people buy routers at all if they can't really update them? That should be your first question.

BTW you can always build an image with all the needed packages. See https://github.com/aparcar/attendedsysupgrade-server

OpenWrt images come with default packages offering similar functionality to the original router firmware. So what is the problem?

While the pull request you linked is quite interesting, I think you do not really see the possible problems. You won't be the one dealing with bricked routers and bug reports when it fails in unexpected situations.

I also think that the pull request was over-engineered and maybe not the best way to implement this. I think opkg should be enhanced to give the user the option to re-install missing packages after an upgrade.

This should not be the task of sysupgrade. It is just very cumbersome that way, as opkg does not natively keep track of packages explicitly installed by the user, so sysupgrade has to be hacked to provide this facility. Seems wrong to me....


This was exactly one of the comments in the pull request. I think opkg should keep a list of packages explicitly installed by the user. Then the user could be shown a list of missing packages when he/she logs in to the router after an upgrade, and be allowed to auto-install them. Implementing this in the pull request by hacking sysupgrade was not the best way, IMHO.

Which some un-/under-thanked contributor added some time ago and will be available in v19:

commit 5cb1dce542
Author: Luiz Angelo Daros de Luca <redacted>
Date:   Fri Aug 17 20:49:53 2018 -0300

    base-files: add sysupgrade -k to save list of pkgs
    When '-k' is used, sysupgrade inserts into backup a new file
    /etc/backup/installed_packages.txt which contains pkgname and origin (rom,
    overlay, unknown) without touching rootfs.
    It's mainly used to reinstall all extra packages:
     # opkg update
     # grep "\toverlay" /etc/backup/installed_packages.txt | cut -f1 | xargs -r opkg install
     # rm /etc/backup/installed_packages.txt
    Signed-off-by: Luiz Angelo Daros de Luca <redacted>

Personally, even after having PRs of mine effectively rejected, and having spent well over a month responding to requests to improve the robustness of the code base, I'm glad there's a review process. Having used OpenWrt since the White Russian days, it is quite welcome that there almost never is breakage on master. (Note that during the recent repo outage, master was fully buildable and functional on all the targets that I was working with.)


OK...but let's say that package is Wireguard, IPENCAP, IPSec or OpenVPN...or a wireless driver...perhaps even Unbound.

Now let's give a common use case - the user needs WWAN...How does the router download the packages post-upgrade if there's no Internet?

You may suggest a custom firmware...but then we're talking about an advanced user; and no need for this code.


@jeff That is a modification/hack of sysupgrade to create a list of user-installed packages during upgrade.

It does not distinguish whether an installed package was a dependency or a package explicitly installed by the user; it just checks the packages in the overlay. As you can imagine, dependencies may change, which can cause you to install wrong or unnecessary packages. So if you build package re-installation after sysupgrade on top of it, you will be building on a flawed component. It feels like a step backwards: once you start building on this, you will need even more code to try to work out the dependencies and filter them out, and the script still can't know whether I installed a dependency because I needed it directly or not. It gets more cumbersome by the minute. This is why the pull request linked by @ibex-are-goats is so complicated/elaborate; it needs to do a lot of work to figure out what to install or not, and I am pretty sure it will make mistakes in some cases.

What I meant was that the better way would be for opkg to keep track of explicit user package installation requests. Besides, it is the package manager, so it should keep track of what it has been up to. This is a job which should be handled by opkg, not by sysupgrade.
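As a rough illustration of the idea (this is not existing opkg behavior), a small wrapper could record every explicitly requested package name in a bare-names list; the list path and the `record_install` function are invented for this sketch:

```shell
# Hypothetical sketch: record packages the user explicitly asked for.
# The LIST path and record_install() are made up; opkg itself does not
# currently offer such tracking.
LIST="${LIST:-/etc/opkg-user-installed.txt}"

record_install() {
    for pkg in "$@"; do
        # append each requested name exactly once
        grep -qxF "$pkg" "$LIST" 2>/dev/null || echo "$pkg" >> "$LIST"
    done
    # opkg install "$@"   # then delegate to the real package manager
}
```

After an upgrade, such a list could be replayed with something like `opkg update && xargs -r opkg install < /etc/opkg-user-installed.txt`, mirroring the commands quoted in the commit message below.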


That's why the system shows a list to the user, but the user is not forced to install the missing packages. He/she can go back and install them later. So the user first installs the minimal packages required for an internet connection, then returns to the missing-packages page and installs the rest. The point is that you do not have to do all the work yourself. It is a compromise.

BTW, apparently this does create a custom firmware based on the packages installed on your device. So you don't need to be an advanced user :slight_smile:

Also, in the case you describe, there would be no way to upgrade without losing internet unless a custom firmware is used, since the root file system is compressed read-only and many devices don't have enough space for juggling packages. I am not sure it would be possible to come up with a better solution.

On a “limitless” system, I’d agree. I checked a day or two ago and the metadata store on one of my servers with limited packages installed was ~15 MB. Even compressed, that’s probably a lot more flash space than even 16 MB devices have. Then you need the RAM to parse and resolve the dependencies.

Hard to tell people that they now need a $100-class router to be able to upgrade. Not to mention those still clinging to 4/32 devices who expect all the functionality of current OpenWrt!

Yes, some improvements could be made, and I know there are ongoing discussions on that. I don’t expect that something like apt or management of multiple library versions (at over 500 kB, in some cases) is reasonable even on a 16/128 box.


I think you're missing my point:

  • It would be impossible to do so after.
  • Packages cannot be installed until after the sysupgrade, so then users have to be mindful and proactive to download the packages (and their prerequisites) prior to flashing? That seems advanced too.


Interesting...this tool can overcome limitations and build a full firmware right before flashing, wow!

I digress...

opkg just needs to build a list of names (it does not even need version numbers). When the user does an `opkg install foobar`, it adds foobar to the list. This would require much less space than the sysupgrade hack. That hack records all package names and then adds qualifiers like rom, overlay etc. next to each name - so much unnecessary data. I ran it on my system and it created a file of 2646 bytes. I have only 4 packages installed via opkg, and the size would be 40 bytes with the method I am suggesting.

Also, as you put it adequately, people who have 4/32 devices won't be able to install thousands of packages. So the list will be very short for them :slight_smile:

I don't understand why you compare this with a normal Linux distribution on your server. It is not really a fair comparison....

Yes, I understand you. But the user must have overcome this problem at least once, when he installed OpenWrt the first time. So by deduction we can conclude that this is not too advanced for this specific user. Unless the router was originally set up by somebody else, in which case maybe this user should not touch it anyway. :slight_smile:

I accept this solution is not perfect. But it is something to make life a little easier, and it requires little effort to implement. At least it would be a solution for people who do not lose their internet connection with the default firmware.

Yes, it is quite interesting. But I imagine it may be very resource-intensive if everybody uses it. Also it may be difficult to verify package authenticity. What if the NSA hacks the system and sends you a modified OpenWrt image? :smiley:

I believe you just very successfully argued that metadata on installs needs to be recorded. What you call a "hack" (no, it's not my code) is about the best you can do without additional metadata. If you have better ideas that are robust in all situations, you could propose them.

It's actually a pretty fair comparison, when you consider what it required in terms of metadata and processing power to be able to successfully and reliably upgrade/reinstall packages. You need all the ABI information, or you're effectively upgrading everything if all you do is look at upgrading:

  • Everything that depends on Package X
  • Everything that Package X depends on
  • and repeat for all that you just identified
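The snowballing that the list above describes can be sketched with a toy transitive closure over a made-up dependency table (the package pairs and the `closure` function are invented for illustration; real dependency data would come from opkg metadata, which is exactly the ABI information discussed above):

```shell
# Invented "pkg dep" pairs, one per line, purely for demonstration.
DEPS='luci luci-base
luci-base libubus
libubus libubox'

closure() {
    # print $1 and everything it transitively depends on
    echo "$1"
    echo "$DEPS" | awk -v p="$1" '$1 == p { print $2 }' | while read -r d; do
        closure "$d"
    done
}

closure luci | sort -u   # the "upgrade set" quickly grows past one package
```

Even in this three-edge toy, upgrading one package pulls in its whole dependency chain; run both directions (dependents and dependencies) repeatedly and you are effectively upgrading everything.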

By the way, don't forget that a partial upgrade can be just as debilitating as an improper one, with ABI incompatibilities, or applications that simply don't run because their dependencies aren't present. Sophisticated package managers download everything before they start removing and reinstalling packages to try to mitigate this kind of damage.

Edit: Think about upgrading a 16/32 machine -- You've got 12 MB of upgrade sitting on the tmpfs (RAM), so you can't even extract it without crashing the device. You have to extract | select | write in a single pipeline just to be able to do that. Large numbers of packages have the same issues.
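A self-contained demonstration of that extract | select | write shape (all file names here are invented; sysupgrade's real internals differ): the selected archive member streams straight to its destination without the whole archive ever being unpacked into tmpfs.

```shell
# Build a small demo archive, then extract a single member to stdout and
# write it to a target in one pass. With GNU or BusyBox tar, -O streams
# the member instead of unpacking it to disk first.
workdir=$(mktemp -d)
cd "$workdir"
printf 'firmware-bytes' > sysupgrade.bin
tar -czf upgrade.tar.gz sysupgrade.bin

# extract | select | write as one pipeline:
tar -xzOf upgrade.tar.gz sysupgrade.bin > flash-target.img
```

On a real device the final redirect would be something like a `dd` onto the flash partition, which is why the whole thing has to work as a single streaming pass on RAM-constrained hardware.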

Also remember that you may need package-upgrade scripts, when the new package needs different config.

Then go ahead and write the code as a package and submit it for review.


Hardly unnecessary -- Bob installs a custom image. Everything he needs except UltraMediaShare is in ROM. Now he wants to upgrade to a "stock" image since his "ROM chef" has long since disappeared. Ok, a "release" image has LuCI, a "snapshot" image doesn't. We know it doesn't have uPnPEvilness, KnockKnock, SuperGameShare, Wireguard, mwan, ... pre-installed. He's hosed by the assumption that the replacement ROM has the same pre-installed packages as what he's flashing from.

  • Using what?
  • Whose resources?
  • Who is "everybody"?
  • Are you saying there's a server somewhere - compiling custom firmware for everybody?

I honestly thought you meant a small embedded device could/would magically compile a full firmware for itself.

The NSA doesn't have to hack your equipment - your up-to-date OpenWrt is likely the most secure device in the chain (if up-to-date, of course). I'm not even sure what this means in the reality of this proposed software...

  • How would this differ from them inserting altered code before it's compiled?
  • Why do you trust this custom firmware build server any differently than the normal buildbots?
  • If the NSA p0wns what they need to exploit, you can't trust that you (or the custom build server) actually downloaded clean code anyway.

And if their setup or network changed since then?

Assumptions are bad when reduced to code. If the opposite proves true, you have at worst destructive software - and at minimum, a bug.