So, this is perhaps a crazy idea, but I would like to build two toolchains: one using GCC 13 and one using GCC 14. By default everything would be built with GCC 14, but if some package fails to build I'll then (automatically, if possible) try building it with the GCC 13 toolchain.
Then, if that doesn't work either, I'll start manually trying some "compile hacks" to get it to build. For example, this "compile hack" often (but not always) works:
make package/.../${pkgName}/{clean,prepare}
grep -rl 'Werror' ./build_dir/.../${pkgName}* | sort -u | while read -r fPath; do sed -i 's/Werror/Wno-error/g' "${fPath}"; done
make package/.../${pkgName}/compile
I'm currently rebuilding my custom Dynalink DL-WRX36 build and trying to figure out why WAN isn't working, and ended up building the firmware in 2 separate build roots: one using GCC 14 and one using GCC 13.
In both cases there were a handful of packages that didn't want to build. BUT, for the most part, they weren't the same handful... meaning that (almost) all packages compiled correctly (without needing any manual "compile hacks") with at least one of the two GCC versions.
On the whole, trying to compile normally with another GCC version seems both more robust and much easier to automate than manually fixing things (e.g., by ignoring errors)... seems like a win-win to me.
At any rate, is doing this even possible? If so, how? Are there any issues I haven't foreseen?
I'm not sure why you would want to go this way. Sounds way more complicated than compiler fixes that some software might need because a newer GCC tightens the screws. Granted, those aren't obvious to the layman's eye (like mine), but to a programmer with C knowledge, they are, and they're often small.
I mean, sure, fixing the code itself is always preferable, but I'll be honest: my C isn't that great, and you can end up waiting quite a while for someone else to fix the code.
Most of the time when something doesn't compile, it's because the build is set to treat all GCC warnings as errors (i.e., a -Werror flag is hardcoded somewhere), including non-fatal warnings that are basically just saying:
hey, this part of the code might do something undesirable, though it might be just fine...you (the dev) should probably take a closer look
The vast majority of the time an OpenWrt package doesn't compile correctly, it is due to this sort of thing, often because newer GCC versions check for more possible issues and emit more warnings.
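To see what that flag does in isolation, here is a toy example (not from any OpenWrt package; the file name and path are made up):

```shell
# A warning that is harmless by default becomes fatal once -Werror is on.
cat > /tmp/warn_demo.c <<'EOF'
int main(void) {
    int unused;   /* triggers -Wunused-variable under -Wall */
    return 0;
}
EOF

gcc -Wall -c /tmp/warn_demo.c -o /tmp/warn_demo.o   # warns, but succeeds

# Same code, same warning -- but now the compile hard-fails:
gcc -Wall -Werror -c /tmp/warn_demo.c -o /tmp/warn_demo.o \
    || echo "build failed under -Werror"
```

That second failure is exactly what a hardcoded -Werror in some package's build system does to you when a new GCC release adds a warning the upstream devs never saw.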
So, if you are actively trying to compile firmware and a package you are building keeps erroring out and you don't know how to fix it yourself, your options are basically:
Dig through the source to figure out where a -Werror is being hard-coded and remove it. This usually works (if the code wouldn't compile at all, it probably wouldn't be available as an OpenWrt package)... but it is tedious and often time-consuming.
Compile it with a different GCC version under which it builds without erroring out. If this works it takes far less of your time and could probably be automated completely.
Compile that package with make -i (which makes make just continue when an error is thrown; this will produce a broken package but will allow the rest of the build to continue) and fix it later.
Change your configuration to not include the package at all.
Of these, option #2 seems like it is by far the best one to try first.
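The make -i behaviour from option #3 is easy to see with a toy Makefile (not OpenWrt-specific; the /tmp path and package name below are placeholders):

```shell
# -i / --ignore-errors: make logs the failing step and carries on, which is
# how a broken package gets skipped while the rest of the build proceeds.
printf 'all:\n\tfalse\n\techo "still building the rest"\n' > /tmp/demo.mk

make -i -f /tmp/demo.mk
# In the buildroot this would be something like:
#   make -i -j$(nproc) package/feeds/packages/somePackage/compile
```

The failing `false` step is reported as an error "(ignored)" and the remaining steps still run.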
UPDATE: I have this mostly worked out, though it's implemented manually in a somewhat tedious way... it would be nice if it were more automated.
Basically, my process is more-or-less:
run make menuconfig and set it to use GCC 14
run make -j$(nproc) package/compile and then note any packages that fail to build
run make menuconfig again and set it to use GCC 13
make just the packages that failed to build. e.g., run something like make -j$(nproc) package/feeds/packages/failedPackage1 package/feeds/packages/failedPackage2 ...
run make menuconfig and set it back to use GCC 14
repeat the above steps until all the packages get compiled
run the final make to generate the firmware image with GCC 14 selected
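A rough sketch of automating those steps (everything named here is an assumption: config-gcc14 and config-gcc13 are presumed saved copies of .config that differ only in the toolchain's GCC selection, and the package list is a placeholder):

```shell
#!/bin/sh
# Sketch: try each package with GCC 14 first; on failure, retry with GCC 13.

# Run each candidate command in turn; succeed on the first one that works.
first_that_works() {
    for cmd in "$@"; do
        if eval "$cmd"; then return 0; fi
    done
    return 1
}

# Swap in a saved config and compile a single package.
build_with() {    # build_with <saved-config> <package-path>
    cp "$1" .config && make defconfig >/dev/null &&
        make "package/$2/compile" -j"$(nproc)"
}

# Only run the driver loop inside an actual buildroot with the saved configs.
if [ -f config-gcc14 ] && [ -f config-gcc13 ]; then
    for pkg in feeds/packages/failedPackage1 feeds/packages/failedPackage2; do
        first_that_works \
            "build_with config-gcc14 $pkg" \
            "build_with config-gcc13 $pkg" ||
            echo "needs a manual compile hack: $pkg" >&2
    done
fi
```

This still leaves the final image build on GCC 14, same as the manual process; the loop only handles the per-package fallback.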
The only scenarios where this won't work are:
if the package fails to build for both GCC 13 and GCC 14
if the package won't build with GCC 14 and it is one of the handful of packages that stubbornly insists on rebuilding itself every time make is called (even if it previously built successfully and nothing has changed).
If anyone knows a good way to prevent a package that was already successfully compiled from rebuilding itself on every make invocation, please do suggest it.
To give a real example of how this works, on the build I am working on right now:
all but 6-8 packages compiled successfully using GCC 14
of those 6-8 that failed to build with GCC 14, all built successfully with GCC 13
of those 6-8 that failed to build with GCC 14 but built successfully with GCC 13, 1 stubbornly insisted on rebuilding itself every time make was called and needed another solution to build (the package was wsdd2; the "other solution" was to add -Wno-int-conversion to the target CFLAGS after everything else had already been compiled, which made it compile with GCC 14).
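For reference, one way to apply that kind of per-package flag (the exact mechanism and the feed path below are assumptions; the post doesn't spell them out) is to append a TARGET_CFLAGS override to the package Makefile and then rebuild only that package:

```shell
# Append an extra compiler flag to a package Makefile (hypothetical helper).
append_cflag() {    # append_cflag <Makefile> <flag>
    printf '\nTARGET_CFLAGS += %s\n' "$2" >> "$1"
}

# In a real buildroot (the feed path is an assumption -- locate the package
# Makefile in your own tree first):
if [ -f feeds/packages/net/wsdd2/Makefile ]; then
    append_cflag feeds/packages/net/wsdd2/Makefile -Wno-int-conversion
    make package/feeds/packages/wsdd2/{clean,compile}
fi
```

Appending rather than replacing matters here: overriding TARGET_CFLAGS wholesale from the make command line would clobber the arch-specific flags the buildroot sets.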
From a distribution point of view, you don't want to mix toolchains, that just costs size and introduces further breakage potential. It's better to have packages failing hard (so they scream to be fixed), than mixing stuff.
Fair enough. Though from a "I've spent the day configuring and compiling this, and now a handful of packages are preventing me from getting a firmware image, and I have way too much other stuff on my to-do list and really need this done" point of view, waiting for a fix that might not come anytime soon isn't exactly an appealing option.
Don't get me wrong, having 2 toolchains and having to switch between them isn't ideal. But neither is blanket-disabling the warnings the compiler is throwing for everything you compile, which is the other realistic option if you can't (or don't want to) wait for a proper fix to come along.
So honest question: between
compiling a set of packages with a single compiler with warnings/errors disabled (which may include modifying the package source code if it overrides compile flags)
compiling a set of packages with 2 compilers running as intended, where the older compiler version is only used if the newer compiler fails
which is the better option? I mean both options are bad, but if you had to choose one?