Resizing [overlay] partition on squashfs

I have some questions regarding resizing the overlay. I've seen multiple threads (some manual, others automated) and even looked at the wiki...

My setup runs from an SD card, so no bricking danger here AFAICT.

  1. Resizing a partition does not resize the filesystem on top of it, correct? So this would basically consist of two steps?
  2. When resizing, do you keep your data, or does it all get wiped so you have to restore from backup?
  3. How does this work with owut or asu?
  4. If you do an upgrade, I presume it overwrites the first 120 MB with the newer firmware but skips the MBR?
  5. If not, I assume there's just an MBR size mismatch with the overlay after a resize? E.g. the MBR reports 104 MB while the overlay is still the resized size?

Thanks :blush:

EDIT:

  1. Am I required to keep tools like resize2fs and fdisk installed or integrated into my build?

I'll sort of dance around your questions and just make some philosophical statements...

If you resize the rootfs using on-device tools, then you wipe down all that work every time you upgrade as sysupgrade just blindly rewrites the partition table to whatever is in the .img file.

Also, if you sysupgrade with an image that has a different partition size than what's present on-disk, and try (foreshadowing) to keep your config, well, the config is lost. (I'm researching exactly why, what platforms and so on, but it's going slowly; I want to know if it's possible to rectify this situation.)

That's why you'll see some people using extroot, but most of my devices are x86 and I'm lazy, so...

My solution is to build images that contain a rootfs partition as big as I need (say, 512MB), then just keep using that size forever. (I tried to make this "easy" in owut, details here.)
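For reference, a minimal sketch of what such a build invocation might look like. This is an assumption-laden illustration: the profile name and package list are placeholders, and ROOTFS_PARTSIZE is the rootfs partition size in MB.

```shell
# Hypothetical image-builder invocation with a 512 MB rootfs partition.
# PROFILE=generic and PACKAGES='luci' are placeholders, not from the thread.
build_cmd() {
    echo "make image PROFILE=generic ROOTFS_PARTSIZE=$1 PACKAGES='luci'"
}
build_cmd 512
```

The point is that the size is baked in at build time, so every subsequent sysupgrade image carries the same partition table and nothing gets clobbered.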


Any particular reason why it's capped at 1 GB?

I'm using SD cards because they are so easy to swap in the nanopi-r6. I'm worried about flash wear if the same blocks get written over and over again. But since 8 GB industrial cards are cheap, I'm thinking it's not going to be a problem.

Thank you, I will be looking into the resize option. 104 MB was just too little.

EDIT:

As a test, I did a few imagebuilder runs with rootfs partitions ranging from the default (for x86) of 104 MB up to 20000 MB to see how long they would take.

ROOTFS_PARTSIZE   real      user     img size
104               26s       18s      12M
512               48s       25s      13M
1024              74s       33s      13M
10000             11m47s    4m36s    32M
20000             28m15s    13m9s    32M

Those last two rows should make fairly clear why increasing the upper limit is infeasible...

EDIT 2:
A couple of things come to mind:

  • Are you building in a temporary filesystem?
  • Is this a problem for ext4, squashfs, or both?
  • Do you clear the slack in the ext4 image? I think that's why it's as slow as it is.

The compression phase, right? For ext4?

# loop-mount the ext4 image, fill the free space with zeros, then delete the filler
mount -o loop image /image
dd if=/dev/zero bs=10M of=/image/.clear-slack   # stops when the filesystem is full
rm /image/.clear-slack
sync
umount /image

Image builder doesn't make a distinction, when a build is requested, as to what you ultimately want: it creates all the images for a given profile. That's why a Firmware Selector build presents you with all of, for example, the factory, rootfs, squashfs and ext4 images after the request completes.

owut, auc and the LuCI app generally ignore everything but the one image that matches your current installation. So, bottom line for the table above: even if you only care about squashfs, image builder creates ext4 anyhow.

(The underlying tooling was originally designed to "build everything possible" for distribution on downloads, then later adapted for use in image builder, hence the lack of granularity in the request mechanism.)

I haven't dug into the details of how the image builders create the final image for each of the targets (it's different for every one), but the common steps are

  1. Mount the pre-built rootfs
  2. Run the host version of opkg/apk to install the user's requested packages (opkg install $PACKAGES)
  3. Resize it as appropriate

Then device specific:

  1. Do whatever is needed to create all the fit, ubiblock, mtdblock, ext4, jffs2, squashfs or whatever partitions in the various image files (kernel.bin, uboot, preloader, initramfs, factory, sysupgrade, combined...)
  2. gzip (or not) on those images

You can usually find the recipe for those final steps in the image/Makefile for a given target; here's the first one alphabetically (pop up to the linux directory from there to see all targets):

FYI: I know very, very little about the build process. I picked a gen_image.sh file from a random x86 build.

So after reading this:

  1. The rootfs image is copied into the final image.
  2. The bs=512 block size used by dd is way too small for partitions over 1 GB.

No wonder performance goes down the drain.
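To put a number on it: dd issues roughly one read/write pair per block, so the block count scales inversely with bs. For the 20000 MB partition from the timing table above:

```shell
# Compare how many blocks dd must copy for a 20000 MB partition
# at bs=512 versus bs=1M (partition size taken from the table above).
part_mb=20000
blocks_512=$(( part_mb * 1024 * 1024 / 512 ))   # ~41 million blocks
blocks_1m=$part_mb                              # 20000 blocks
echo "$blocks_512 vs $blocks_1m"
```

Three orders of magnitude fewer copy operations with bs=1M, which lines up with the slowdown seen in the table.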

A simple solution could be to increase the dd block size for the rootfs copy to bs=1M, e.g.:

MB=$((1024*1024))
ROOTFSOFFSET=$(($3 / MB))   # assumes the byte offset in $3 is 1 MiB-aligned

dd if="$ROOTFSIMAGE" of="$OUTPUT" bs=$MB conv=notrunc seek="$ROOTFSOFFSET"

But this would need some testing I'm sure.
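One thing that testing would need to cover: the offset in $3 is a byte count, so seeking in units of bs=1M only works when that offset is 1 MiB-aligned. A hedged sketch of the fallback check (the sector number below is made up for illustration):

```shell
# Fall back to bs=512 when the rootfs byte offset is not 1 MiB-aligned.
MB=$((1024*1024))
offset_bytes=$((262144 * 512))   # hypothetical offset: sector 262144 = 128 MiB
if [ $(( offset_bytes % MB )) -eq 0 ]; then
    echo "dd bs=1M seek=$(( offset_bytes / MB ))"
else
    echo "dd bs=512 seek=$(( offset_bytes / 512 ))"
fi
```

On a stock x86 layout the partitions are likely aligned anyway, but a guard like this keeps the script from silently writing the rootfs to the wrong place on an odd layout.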