Limitations of package system (opkg)

The package system (opkg) is rather confusing, at least for newcomers familiar with package systems of other modern platforms, which generally follow a few common patterns, including the following two:

  1. Conflicts between packages are uncommon, because components shared by multiple packages are provided by dependencies, and the dependencies, not the conflicting resources, are what the various packages share.

    That is, rather than package A conflicting with A', both would depend on some further package B.

  2. Automatic site upgrades (obtaining upgraded copies of all out-of-date packages) are supported.

    Some sources suggest workarounds, one of which seemed clear and robust enough that I ventured to try it, but the results were replete with cryptic error messages.

What discussions have occurred recently on how well the package system meets current demands, and how its limitations might be addressed in the future?

These topics have been covered numerous times, especially the intricacies of bulk and in-place upgrades. opkg wasn't chosen because it's 'great' or because anyone loves it, but because it's small, small enough to fit into devices with 4-8 MB flash and >=32 MB RAM.

Does any network equipment anywhere in the world have a package update function instead of a total firmware upgrade?

I understand the topics may have been addressed already, and I had imagined they probably had been discussed at length. However, being new to the community, and interested in learning, I am hopeful to be greeted with a bit of patience.

Would either of the issues I raised, cleaner dependencies or global upgrades, be closely tied to the size of the distribution? Implementing either would seem to carry an exceedingly modest footprint, with the former seeming more closely related to the way the packages are defined than to the system that manages them.

I have no idea. Being new to the subject, would I be assuming correctly that you are asking a rhetorical question, to suggest that support for the features I mentioned would be unnecessary for the targeted applications of the system?

There are two major factors preventing runtime package management on OpenWrt from being as complete as on other common platforms such as traditional Linux distributions:

a) a lack of quality (or maybe thoroughness, since I do not mean to discredit the voluntary work of the package and base system maintainers) in packaging. This means properly maintaining conflicts, replaces, provides and dependency annotations, carefully choosing compile-time settings, tracking ABIs, and updating libraries and subsystems in a coordinated manner. All of that is not being done properly, since the common perception of OpenWrt seems to be that of a source-based distribution where features are resolved at compile time and set in stone after that.

Runtime package management is an afterthought at best, and the only actually supported and somewhat tested operation is installing additional things on top of the running system. Removals, upgrades, replaces etc. are supported by opkg itself (albeit often buggy) but not properly catered for on the repository level (by ensuring coherency, coordinated upgrades, ABI tracking, prerm/postrm/preinst/postinst scripts etc.).

b) the lack of a completely writable system, which means we can't upgrade or replace certain components such as the kernel (stored differently on each target, outside of pkg control, often not even writable from the booted system) or the libc (exempted from packaging due to insufficient ABI tracking). Furthermore, the majority of targets only have a fraction of the overall flash space available as writable overlay space, which is quickly exhausted.

A typical supported device does not have sufficient writable storage space to store a copy of each installed package (which would be the worst-case requirement for supporting complete system upgrades, kernel and libc update issues aside).

Some targets like x86 or capable ARM boards with plenty of flash space could theoretically support better, more complete package management, but since OpenWrt caters mostly to the lowest common denominator of all supported platforms, it has to follow the capability limits of the low-end targets. Even if there weren't the platform limitations, the current contributor base wouldn't (imho) be up to the task of properly maintaining binary repositories at a quality comparable to major Linux distributions for the several thousand packages, as that would require a lot more scrutiny when dealing with packaging metadata. The build system wouldn't be up to the task either in its current form and would require considerable development effort.

Regarding your question about conflicts: conflicts between OpenWrt packages are mostly between multiple providers of the same functionality, the vast majority of them being build variants - things like ip-full vs. ip-tiny or wpad-openssl vs. wpad-mbedtls. The differences among these variants cannot be moved to common packages in a meaningful manner (e.g. having ip-full and ip-tiny share a common hypothetical libip would make no sense, since libip would need to contain all functionality required by ip-full, rendering ip-tiny pointless).
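For illustration, variant conflicts of this kind are typically expressed in the Debian-style control metadata that opkg consumes, roughly like the following sketch (the field values here are simplified assumptions for illustration, not the actual wpad control files):

```
Package: wpad-openssl
Provides: wpad
Conflicts: wpad-basic, wpad-mbedtls
Depends: libopenssl

Package: wpad-mbedtls
Provides: wpad
Conflicts: wpad-basic, wpad-openssl
Depends: libmbedtls
```

Both variants declare `Provides: wpad`, so other packages can depend on the abstract name, but the mutual `Conflicts:` lines are what force the administrator to remove one variant before installing the other.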

In my opinion, for targets that do have the capabilities (storage, mainly) to host "fat" distributions with proper package management, it might make sense to utilize things like Debian chroots to install additional software, or to resort to container management to deploy additional userspace software. After all, OpenWrt is primarily meant to be a (source-based) networking-oriented distribution that happens to have kernel support for a lot of exotic hardware targets. Some people want the hardware support but not the network-oriented/limited userland, preferring a generic fully featured distribution instead, but that isn't something OpenWrt can or wants to support in its current form.


I agree it would be quite nice if every component, including the kernel, were upgradeable. I think, as you have indicated, that unlike on desktop Linux distributions, it would be infeasible to upgrade the kernel through the same mechanism that handles extension packages. A different mechanism would be required, which might be vendor-specific, at least in some cases. Kernel upgrades would be more ambitious than simply achieving a more intuitive packaging ecosystem.

Regarding the various issues you raise, many quite real, certain solutions come to mind, of varying feasibility.

The simplest means toward greater automation and clarity, as it seems to me, would be separating the semantic packages from the physical ones, with the former being selected by the administrator, and the latter being selected by the system, based on the semantic selections. In such a scheme, the semantic versions of ip-full and ip-tiny would have no conflict, even if the binaries in the physical packages would be in conflict. The system would simply resolve the most comprehensive selection of physical packages for the total set of semantic packages selected.

Thus, from an administrative view, selecting both packages simply has the effect of the full version being installed and the tiny one not being installed, or removed if necessary.

If dependencies were fixed to operate smoothly, then full system upgrades would seem an easy step from a development standpoint.

Regarding insufficient writable space for new package versions, a solution seems feasible, though the required change is a bit drastic: placing the packages included in the original image on a separate writable partition, keeping the read-only partition for the core system.

Just to expand on this aspect. Aside from the SBC-like targets (RPi, sunxi, rockchip) and x86/x86_64, even modern high-end ARMv8 routers (released within the last 12 months) rarely come with more than 128-256 MB NAND flash, of which something between 20-64 MB tends to be usable (the rest 'lost' to dual-boot, reserved partitions or just stupid OEM partitioning), with even more constraints on the maximum kernel size (both partitioning- and bootloader-related, and not easily changeable). So the lowest common denominator (8/64) isn't as far away from the 'best' all-in-one wifi routers sold in the mid- to high three-figure range as you might think. Aside from dropping all target support apart from x86_64 and the SBC platforms, there is little alternative to keeping the base 'small' at (almost) all costs.

Doing things 'properly' and roughly on par with traditional general-purpose Linux distributions would require main storage sizes of at least ~1 GB (twice that with dual-boot in mind; and at least 384 MB RAM) and bootloaders flexible enough to deal with dynamic partitions (no fixed offsets, no fixed partition sizes, being able to locate the kernel on a filesystem and to co-install at least two kernels in parallel, on the same filesystem - and, most of all, 100% reliable recovery mechanisms akin to inserting a removable bootmedium and being able to reinstall from scratch under all imaginable circumstances).

Once you are there, you wouldn't need OpenWrt anymore - as arch, debian, gentoo, fedora, mageia, mandriva, OpenSuSE, Ubuntu and friends already exist and would cater for exactly this scenario. At that point you're effectively limited to x86/x86_64 or SBC-like multi-platform ARMv7/ARMv8 (ARM server base with UEFI/ACPI) anyway. All that would be missing is a pretty web interface to configure the underlying networking dæmon of choice (in theory a much easier task than maintaining a full distribution like OpenWrt; in practice still an enormous task to integrate properly and keep upgrade-safe).

You do have to keep in mind what hardware is your target - and cope with its constraints.


Quite a broad range of concerns has been raised in this discussion, more than can reasonably be resolved quickly.

My general feeling is that some improvements are possible with respect to the lower-level concerns of partitioning and kernel updates, but the explanations so far do clarify the serious obstacles, and the topic is not one where I have significant experience.

Where I feel I may have a compelling argument for the possibility of improvement is the area of my original comments: selecting application components, libraries and executable programs, based on a package selection.

A package-management system targets the following two core questions:

  1. For any possible package selection (combination of each possible package selected or not), is there a selection of installed components (e.g. files) that provide on the system the sum total of the functionality promised by all selected packages?

    The answer may be affirmative even if the solution exceeds the storage capacity of the device. A package manager is not expected to provide functionality regardless of such physical limitations.

  2. For any possible package selection different from the current one, does a process exist for reliably transforming the current state of installed components to the target one, without adverse effects from any previous state?

    Again, physical limitations, including storage space and reachability of external resources, are not considered in the answer.

While it may be a simplifying assumption in many cases that each named resource, such as a file path, corresponds to only one selected package, such a constraint is not necessary or even helpful in the general case. Multiple versions of the same named resource may be selected depending on the package combination.

Taking the earlier example, two versions of libip may be available through the package system. One file version may be selected whenever the package ip-full is selected, regardless of whether ip-tiny is also selected, and the other may be selected when only ip-tiny but not ip-full is selected. Thus, the administrator may select any combination of both packages (i.e. either package may be selected or not, independently) without conflict, the system resolving correctly the minimal requirement to support the functionality promised by all packages in the selected combination.
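To make this concrete, here is a minimal sketch of such a resolver; the package and file names (ip-full, ip-tiny, libip) and the rule format are illustrative assumptions for this discussion, not how opkg actually works:

```python
# Hypothetical sketch of resolving a semantic package selection to
# concrete file versions. Package names (ip-full, ip-tiny) and the
# libip path are illustrative assumptions, not real opkg behaviour.

# Each rule maps a condition on the selection to the file version chosen.
# Rules are ordered by priority; later rules only apply if no earlier
# rule already decided that file path.
RULES = [
    # (file path, required packages, excluded packages, chosen version)
    ("/usr/lib/libip.so", {"ip-full"}, set(),       "libip-full"),
    ("/usr/lib/libip.so", {"ip-tiny"}, {"ip-full"}, "libip-tiny"),
]

def resolve(selection):
    """Pick one version per file path for a set of selected packages."""
    chosen = {}
    for path, required, excluded, version in RULES:
        if path in chosen:
            continue  # a higher-priority rule already matched this path
        if required <= selection and not (excluded & selection):
            chosen[path] = version
    return chosen

print(resolve({"ip-full", "ip-tiny"}))  # the full variant wins, no conflict
print(resolve({"ip-tiny"}))             # the tiny variant suffices alone
```

Selecting both packages is simply not an error here: the rules determine the single file version that satisfies the whole selection.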

Returning again to one of my original observations, it would seem possible for the system to obtain a newer version of all resources previously obtained through additions to the package selection, without the administrator naming the specific packages. That is, all selected packages with available updates may be updated by a single, global upgrade command. Despite any concerns about core packages included in the original firmware image being frozen in a read-only state, packages installed subsequently, and not captured in that image, may be handled as described.

Not really any kind of answer to the issues stated, but IMHO the balance is about right.

I mainly use the package system to select what I want in a firmware that I will build and flash, and only rarely use opkg on the device to add extra packages. I've been surprised by what I could find when I did want to hack something together on a running device to diagnose a problem. netsed was there IIRC - which I'll class as "niche".

So for me packages are largely an additive thing, rather than wanting robust add N, remove Y, add M, while retaining coherence. Any change that would reduce the range of software that is available, even if temporarily (eg a year or two) while everything was rechecked for conformance, would be a backwards step IMHO.

When flashing a device last week I saw what I think is a reasonably / very new option in LuCI to keep a list of installed packages. For me, a reflash to the latest release of whatever I'm running (stable or snapshot) which would honour that list to reinstall the packages while retaining /etc/config files would be a reasonable approach - the assumption being that it would bring in all the latest security, stability, etc. updates to those packages.

It would also be nice to have a way to easily turn the installed package list into an Image Builder profile, as that would help optimise resource usage and rework for anyone maintaining multiple devices, etc.
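The mechanical part of that could be sketched as follows; the `make image PROFILE=... PACKAGES=...` invocation is the usual Image Builder interface, while the parsing of `opkg list-installed` style output ("name - version" lines) is a simplifying assumption:

```python
# Sketch: turn `opkg list-installed` output into an Image Builder
# invocation. The sample input mimics opkg's "name - version" lines;
# the profile name is a placeholder, not a real device profile.

def to_packages(list_installed_output):
    """Extract a space-separated package name list from opkg-style output."""
    return " ".join(line.split(" - ")[0]
                    for line in list_installed_output.splitlines() if line)

# Invented sample standing in for real `opkg list-installed` output:
sample = "base-files - 1557\nluci - git-23.051\nwpad-basic-mbedtls - 2023-09-08"
pkgs = to_packages(sample)
print(f'make image PROFILE=<your-profile> PACKAGES="{pkgs}"')
```

On a real device one would capture `opkg list-installed` first and run the resulting `make image` command inside an unpacked Image Builder for the matching target.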

Something similar is provided by the "attended-sysupgrade" packages ("luci-attended-sysupgrade" for the LuCI GUI and "auc" for command line/SSH).

It will send a list of installed packages to a server that uses Image Builder to assemble the image and provide it to the device for sysupgrade. It is an application developed by one of the core developers, aparcar, and the server with the Image Builder is hosted by the OpenWrt project (it is used by default by the attended-sysupgrade packages from the OpenWrt 21.02 release onwards).
You can also host your own server.

IMHO that is "the way" for painless upgrades.


Afaik, opkg does not know beforehand which files a specific package will install, and afaik that's uncommon functionality even for PC/server Linux distros, where the package manager application alone is larger than a whole OpenWrt firmware image.

It will download the package, then see which files would be installed, and if there is a conflict it will stop.

The process exists, the issue is the implementation.

The bigger issue in this department is disk space. I've seen enough failures where it just fills the rw partition and then fails without completing the install.

Actually computing the available disk space is not straightforward, since most devices use jffs2 or UBIFS (raw-flash filesystems used by embedded devices like routers), where compression is involved, and on most devices you really don't have much margin of error.

Afaik, size checks are 100% reliable only inside the OpenWrt build system (and Image Builder, which is a subset of it), where the root partition image is built and then compared to the known size of the root partition (defined on a per-device basis in most cases). If it's too big, an error is raised and the process aborts before building a firmware image.

The above means that if you select too many packages and the root partition image is too big to fit on the device, the build system will NOT create a firmware image and will throw an error. Since the check is done AFTER the packages are "installed" and compressed to create the raw partition image, the size measurement is exact.
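A sketch of the check described above, with invented sizes and names (the real build system works on per-device partition definitions):

```python
# Sketch of the build-time size check described above: measure the
# finished (compressed) rootfs image, compare it against the device's
# known partition size, and abort if it does not fit. All numbers here
# are invented for illustration.

def check_rootfs_fits(image_size_bytes, partition_size_bytes):
    """Raise if the built rootfs image exceeds the target partition."""
    if image_size_bytes > partition_size_bytes:
        raise RuntimeError(
            f"rootfs image too big: {image_size_bytes} > {partition_size_bytes}")
    return True

print(check_rootfs_fits(5_900_000, 6_291_456))  # fits in a ~6 MB partition
# check_rootfs_fits(7_000_000, 6_291_456) would raise RuntimeError
```

The key point is that the check runs on the finished compressed image, so it is exact, unlike trying to predict compressed sizes before installation on a live jffs2/UBIFS overlay.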

That can be useful, but I think it would require defining which package is a superset of which other package(s), since the package manager itself can't really know this just by looking at files. Also there are alternative packages that are just side-grades, like the OpenSSL libraries; they are all theoretically supposed to provide the same functionality.

opkg does not know which packages were installed by the user. It only knows "installed packages". So while it can run an all-packages upgrade procedure, it will upgrade all packages.

If you want more advanced functionality, like upgrading only user-installed packages, you need to install and run some of the scripts mentioned in the sysupgrade guide, like opkg extras, which after being installed will allow you to upgrade only user-installed packages with
opkg upgr oi
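One way such scripts can distinguish user-installed packages, sketched here with invented sample data, is to diff the running system's package list against the list baked into the firmware image (the original squashfs is available read-only under /rom on OpenWrt):

```python
# Sketch: identify user-installed packages by comparing the running
# system's installed-package list against the list from the original
# firmware image. The package lists below are invented sample data;
# on a device they would come from /usr/lib/opkg/status and
# /rom/usr/lib/opkg/status respectively.

def user_installed(installed, rom_installed):
    """Packages present now but absent from the original firmware image."""
    return sorted(set(installed) - set(rom_installed))

rom = ["base-files", "busybox", "dnsmasq"]
now = ["base-files", "busybox", "dnsmasq", "tcpdump", "netsed"]
print(user_installed(now, rom))  # ['netsed', 'tcpdump']
```

The same set difference is what an "upgrade only user-installed packages" command would iterate over.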

If I am understanding your comments correctly, I believe package managers generally do indeed offer such support, and from my observations so does opkg. Otherwise, it would be unable to detect conflicts between packages offering the same files.

By "a process exists" I did not mean the abstract concept of a process, but rather one that is available for the administrator to invoke, because it has been implemented.

However, conceiving of such a process even abstractly is hindered by the constraint that multiple packages currently offer the same file, without any means to resolve which file version should be selected in case both packages are selected.

Yes, but regardless, it is still helpful for the package manager to solve the basic problems.

Yes. The information would be given in the package metadata.

Yes. It would be helpful for packages to be marked concerning this distinction. If no better way is available, a file might simply be included listing all packages whose installation was original to the firmware image.

Please do yourself and all of us a favour and look at how a common OpenWrt-supported device with only 8 MB flash actually works, how the system is laid out (bootloader, kernel, rootfs, overlay) and what constraints are passed down from the hardware (and OEM bootloader/partitioning), as well as the tangential issues of repo size, index processing (32-64 MB RAM!) and dependency trees. You'll have a better background understanding of why opkg is the way it is once you know the environment it's supposed to work in, and it then becomes easier to discuss the topics at hand. As mentioned before, opkg wasn't chosen because of its abilities and features, but because of its size-to-functionality ratio - and that matters a lot if your total free space is somewhere around 1.5 MB or less.


I could try to start by understanding opkg, but I have yet to find any documentation. Its repository seems now to be hosted by the Yocto Project, but, not being among that project's core components, it is not covered by the project's documentation.

If you're offering to make updates that won't destabilise the general compactness, then I'd vote that this would be a good one to start with. I think the crowdsourced request would be that it tags the packages actually wanted by the user, so that they can be distinguished from dependencies that were installed at the same time. A flag also saying whether they are present in the firmware would be useful, although that might be confusing if it didn't account for them being added by the user via Image Builder.

What about two or more variants of opkg? The repository contains several packages in a small and a full-blown variant (e.g. dnsmasq, ip, iw, nmap).

Not sure as to goal, but there is a kick at Alpine apk to be found in a staging tree.

For me that would only be feasible if there was one set of metadata, where a new variant made "deeper" use of it while the other worked as it does today. Asking developers etc. to maintain two sets of metadata for a nominally single platform would be a dilution of effort and probably lead to breakage for both variants.

I've aimed to read this thread to understand what the real problem is. I understand that opkg could probably be better but am not sure how often that would benefit how many people in their typical use of OpenWrt. For me the typical use cases are:

  1. Install a prepackaged image and use it. Done. Core OS upgrades via reflash generally "just work".
  2. Install a prepackaged image and add a few "well known" packages to provide VPN, QoS, etc. Upgrades require keeping track of the "top level" packages to reinstall afterwards.
  3. Once the desired packages are known, use Image Builder to build a bespoke image. Upgrades are well handled by keeping the build config and applying it to a new IB download.
  4. Try to get something a bit specialist running with less-visited packages: add, remove, try different ones, reflash to fix the tangled mess, repeat until satisfied, finalise the required set, make a careful note so you can do an upgrade relatively painlessly (aka scenario 2 or 3).

Am I reading correctly that this is mainly about the "dynamic part" of situation 4, or have I missed other situations and / or impacts on 1-3?

Yes, asking developers to maintain parallel package repositories is not helpful, but improved support for package conflicts and dependencies does not in principle depend on a large increase in opkg's code footprint or on maintaining parallel package systems. Replacing opkg with a different system that does not target resource-limited environments might well exceed the constraints, but I am trying to emphasize a small set of improvements that would enhance usability considerably without exceeding the identified constraints, including those on the code size of the package manager itself. I can imagine a scenario in which an advanced feature set of the package manager is layered over the basic one and itself provided as a package, but that is longer-term than I think is best to consider outright.

My experience with the current system leads me to a less optimistic view.

For me, the problem emerged as early as step (2), trying to install a new package for additional functionality, because it conflicted with an existing package. I had to remove the package causing the conflict, which required that I not only perform additional manual steps, but also temporarily reduce the functionality of my system, and furthermore investigate the needed steps and the reason for the conflict. At the time I had no particular knowledge of either opkg or the packages themselves. Presently, opkg requires quite a bit of particular knowledge and understanding from the administrator, even though the required logic could in principle be automated. It also requires extra manual steps that contribute to inconvenience.

There is no reason in principle why a package manager should require administrators to remove packages with more limited functionality before installing packages with greater functionality. The strongest difference between packages targeting embedded environments and those targeting the desktop is that in the latter case, storage capacity is sufficient for a full version of every library to be installed whenever any version is needed, whereas in the former case, various packages might provide different builds of the same library. As argued, however, such a constraint is not ultimately a reason against more automated and intelligent operation of the package manager.