Building LEDE/OpenWrt with a limited build-time space

Hey guys, I'm trying to build the latest OpenWrt master and it doesn't complete successfully because of "No space left on device" during the "world" compilation.
Any tips or advice on how I could reduce the space needed for the build?
I only have a partition of about ~21G that I can use.

Hmmm, my whole build tree including several iterations of output is only 14 G.

lede_source$ du -h -d 1
737M    ./dl
1.9M    ./scripts
182M    ./.git
48K     ./config
12K     ./.github
9.9G    ./build_dir
2.9M    ./tools
45M     ./tmp
1.1M    ./toolchain
6.2M    ./env
1.4G    ./staging_dir
19M     ./package
89M     ./feeds
372K    ./include
32K     ./.idea
1.3G    ./out
49M     ./target
14G     .

(My out is most people's bin)

Are you only building the packages you need, or are you trying to build a bigger set of them?

Edit: Is that your entire system, including swap? If so, you need a bigger VM or another drive -- just the swap alone, if your RAM is as constrained as your disk, will kill you.

Just FYI, I'm using this: https://github.com/MOZGIII/archer-c7-v2-builder
This is an example of a build: https://semaphoreci.com/mozgiii/archer-c7-v2-builder/branches/su-exec/builds/4

I'm trying to actually build everything, so that I'll have packages ready later if I need them. And I want a CI for the whole process.

"Methinks your eyes are bigger than your stomach"

Get some decent hardware (at least a USB 3.0 drive), or "rent" something in the cloud, then think about Docker and CI

Thanks for the advice, but that's not actually what I'm looking for. I'm capable of building things locally.
What I need is to somehow reduce the actual footprint of the build.
I'm thinking of it as a challenge. There are some constraints: RAM is limited to 4G, disk space to 24G, and there are 4 CPU cores and 180 minutes for the job. Don't you feel the challenge? :slight_smile:

Actually, it turns out there's just 5G of free space when the work starts, so that's why it's not working properly. I expect only about 1-2G to be left after the Docker bootstrap process... But now I kind of see what the problem is.

Is there a way to list all the packages in the "world" build target?

In .config
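For a machine-readable list, something like this should work (just a sketch, assuming the usual CONFIG_PACKAGE_<name>=y|m convention in .config):

grep -E '^CONFIG_PACKAGE_.*=(y|m)$' .config | sed -E 's/^CONFIG_PACKAGE_([^=]*)=.*/\1/'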

No, that's not a challenge. I lived through the days when an OS compile took 8-12 hours on a fast machine. There was no alternative then. Do yourself a favor and get a droplet on Digital Ocean for $10/mo (you need more disk than the $5/mo droplet). Do yourself a bigger favor and buy a 4 TB USB drive for under $100.

Add up the sizes at http://downloads.lede-project.org/snapshots/packages/mips_24kc/packages/ and tell me that it's even physically possible in 21 GB.
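If you want an actual number rather than eyeballing the listing, something like this should total the download sizes (assuming the feed's Packages.gz index carries a Size: line per entry, as opkg feeds normally do):

curl -s http://downloads.lede-project.org/snapshots/packages/mips_24kc/packages/Packages.gz | gzip -dc | awk '/^Size:/ { sum += $2 } END { printf "%.1f GiB\n", sum / 1024 / 1024 / 1024 }'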

Ok, to begin with, I have a bunch of actual hardware servers around locally that would do the job well enough. The problem is that this setup cannot be replicated easily.
I want a setup such that anyone can just fork the repo and get not only the source, but also the ability to actually build the images and packages (including the required compute resources).
I don't see anything wrong with that. Again, you don't have to encourage me to buy a USB drive. Why would I need one if I have access to a flash array? :slight_smile:

The next step would be to publish the resulting files to GitHub releases and make opkg able to load the packages from there.
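On the opkg side, I'd expect a custom feed entry pointing at the release assets to be enough, roughly like this (the feed name and tag are placeholders, the Packages.gz index would have to be uploaded next to the .ipk files, and signature checking would need either a matching key on the router or to be disabled):

# /etc/opkg/customfeeds.conf on the router
src/gz my_ci_builds https://github.com/MOZGIII/archer-c7-v2-builder/releases/download/some-tag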

Speaking of the good old days of "fast" machines - those days are long gone, and the limits applied by CI today are artificial. Applied to this task, they remind me of the limitations usually seen in programming contests. That's where the feeling of a challenge comes from for me. :wink:

Thanks for the tip about .config. Though, I think there should be a way to get a machine-readable list of the make targets that doesn't require manually parsing the .config format (i.e. I don't know how to work with .config other than parsing it by hand). My idea is to split the list into smaller parts, build and upload the intermediate results, and then clean up after each package build is done.
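Roughly what I have in mind (a sketch only; the chunk size is arbitrary, and the CONFIG_PACKAGE_ name doesn't always match the source package directory, so some names would need mapping):

# dump the enabled package list from .config, then split it into chunks
grep -E '^CONFIG_PACKAGE_.*=(y|m)$' .config | sed -E 's/^CONFIG_PACKAGE_([^=]*)=.*/\1/' > all-packages.txt
split -l 50 all-packages.txt chunk-
# build one chunk, upload and clean, then continue with the next chunk in a later CI step
for pkg in $(cat chunk-aa); do
  make "package/$pkg/compile" || exit 1
done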

Another option is to remove something from the VM that builds the code in order to free up some space. I'm sure the VMs are recreated from a snapshot for every build, so it won't hurt anyone.

Again, "cheats" like buying paid CI package with bigger limits or negotiating the limits increase for just my repo won't be real solutions for me.

Btw, my bad - the space constraints I specified initially were not accurate: even though the actual device in the VM is 24G, ~17G of it is used initially, and the system won't allow using more than 22.

So you're trying to build in 5 GB something that is likely in the 10-15 GB size without extra packages, even before you consider that the output alone will likely exceed 5 GB, not to mention swap?

As I still think you've got "a solution in search of a problem", http://www.faqs.org/faqs/compression-faq/part1/section-8.html might interest you as well. At least there you've got the carrot of a $5,000 prize for circumventing basic information theory. Between that and the knowledge that you can rent an 80 GB, 2-core cloud instance for 3 pennies an hour, it might lead you to a more fruitful goal.

Should you decide to follow your quixotic path, a good understanding of GNU make, as well as how the "standard" Linux config system works, would lead you to answers about how .config is generated.

Nvm, I solved it already.

I know the idea of a workaround is somewhat hard to grasp if one has set himself into a "mathematical" mindset. But, again, my goal is to solve a practical problem, not to tackle impossible problems.

At this point, I can observe that this thread suffers from a typical problem of internet forums, where you come asking about A, but instead of a solution people try to explain why you shouldn't actually do A and should do B instead. I think I gave a rationale for why I want what I want. Not sure if you're trolling me (it's a good one, if that's the case).

Not sure why you mention a deep understanding of make and the kernel build tooling. The thing is, I'm looking for advice on a tool I might not be aware of. I'll take that to mean you don't know a good command or one-liner for this case either.

To sum up, I'm not sure who said it first, but everything is possible in software.
Also, I found the OpenWrt build system somewhat limiting, though it's pretty cool when used as designed.

PS: next up in my quixotic path is building Qt like that, which is problematic because it takes ~40G of space for a full build; it may blow your mind, but it's possible to build it with just a 24G block device on a VM and a bit of magic :slight_smile:

How did you end up solving it?

I think I just need to do some cleanup: I have plenty of disk space, but usage just keeps growing the further I get along the build process (fixing this and that as I run into each error).

Take a look here:


https://semaphoreci.com/mozgiii/wrt1900ac-v1-builder/branches/master/builds/2

I ended up cheating and just freeing up more space than is available initially by removing a bunch of stuff from the VM.
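By "removing a bunch of stuff" I mean deleting preinstalled toolchains and caches the build doesn't need before it starts; the paths below are only illustrative of the kind of thing, not an exact list for that platform:

# purely illustrative - check free space, drop big preinstalled stuff, check again
df -h
sudo rm -rf /usr/local/lib/android "$HOME/.rbenv" "$HOME/.kerl"
sudo apt-get clean
df -h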

The proper solution for my case would be to split the build process into independent parts - for example "(kernel + 1/3 of all packages) + (the remaining 2/3 of all packages)" as a two-step build, or "(kernel) + (package 1) + ... + (package n)" as one build step per package - and to clean up non-external build artifacts between the steps, like .o files for executables after they're linked and other stuff like that. The idea is that building takes a lot more space than the resulting artifacts. I also considered pausing the build after each step, saving the cleaned-up state somewhere in the cloud, and then resuming it in another build. This is to bypass some limitations that are only relevant for my use case.
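The per-package variant would look roughly like this (a sketch only; I'm assuming the package/<name>/compile and package/<name>/clean targets behave as expected for every name, and the output directory differs per setup - mine is out/):

# build tools, toolchain and the kernel once, then one package at a time, cleaning as we go
make tools/install toolchain/install target/linux/compile
for pkg in $(cat all-packages.txt); do
  make "package/$pkg/compile" || exit 1
  # upload the produced .ipk from the output dir here, then drop the intermediates
  make "package/$pkg/clean"
done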

So that's my idea; however, I didn't get into it since I solved my issue another way.