Do OWT admins run filesystem checks periodically?

I was thinking of running an e2fsck on one of my OWT routers.

A web search on how to run a check at bootup suggested adding the scan to the /etc/init.d/boot file.

When looking at this file (much of it attached below), it seems much of the filesystem is created in RAM on the fly. So there should be minimal opportunity for filesystem disruption, for example due to power cuts.

Is the non-boot partition stored in the boot partition and uncompressed/extracted into the “root filesystem”?

I was going to ask WHERE in /etc/init.d/boot the /usr/sbin/e2fsck call should go.

The info I found suggested putting it AFTER the “grep -q” lines…

boot() {
        [ -f /proc/mounts ] || /sbin/mount_root
        [ -f /proc/jffs2_bbc ] && echo "S" > /proc/jffs2_bbc

        mkdir -p /var/lock
        chmod 1777 /var/lock
        mkdir -p /var/log
        mkdir -p /var/run
        ln -s /var/run /run
        ln -s /var/lock /run/lock
        mkdir -p /var/state
        mkdir -p /var/tmp
        mkdir -p /tmp/.uci
        chmod 0700 /tmp/.uci
        touch /var/log/wtmp
        touch /var/log/lastlog
        mkdir -p /tmp/resolv.conf.d
        touch /tmp/resolv.conf.d/resolv.conf.auto
        ln -sf /tmp/resolv.conf.d/resolv.conf.auto /tmp/resolv.conf
        grep -q debugfs /proc/filesystems && /bin/mount -o nosuid,nodev,noexec,noatime -t debugfs debugfs /sys/kernel/debug
        grep -q bpf /proc/filesystems && /bin/mount -o nosuid,nodev,noexec,noatime,mode=0700 -t bpf bpffs /sys/fs/bpf
        grep -q pstore /proc/filesystems && /bin/mount -o nosuid,nodev,noexec,noatime -t pstore pstore /sys/fs/pstore
        [ "$FAILSAFE" = "true" ] && touch /tmp/.failsafe

        touch /tmp/.config_pending
        /sbin/kmodloader

        [ ! -f /etc/config/wireless ] && {
                # compat for bcm47xx and mvebu
                sleep 1
        }

block-mount does that. Don't reinvent the wheel.
(As a matter of fact there is no fsck for an ext4 root on x86 or whateverberry SD cards - just use squashfs, and the overlay is kept in shape.)
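For reference, block-mount's filesystem check is configured in /etc/config/fstab rather than in an init script. A minimal sketch, assuming the stock schema that "block detect" generates (check_fs is the relevant option; verify option names against the documentation for your release):

        config global
                option auto_mount '1'
                option delay_root '5'
                # run the matching fsck (e.g. e2fsck for ext4) before mounting
                option check_fs '1'

With check_fs enabled, block-mount runs the filesystem's own checker before mounting, so nothing needs to be hand-edited into /etc/init.d/boot.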


Squash vs Ext4 is one of my questions wrt OWT.

When I was starting with OWT, I didn’t know what SquashFS was, so I chose Ext4.

Are there advantages to Squash? I’ll search for a bit of info anyway.

If it’s worth it, when upgrading to 25.10, I’ll try to convert to squash.

(info on changing FS when upgrading)?

squashfs sidesteps the fsck problem: the root is read-only, similar to an initramfs in a desktop Linux distribution. An ext4 root needs its own fsck, and given enough power failures it can become unrecoverable.
You cannot convert in place between squashfs and ext4; you have to restore at least your "data" to a separate non-rootfs/non-overlay partition.

Yeah, I assumed I would have to:

  1. Save Backup
  2. Write squashfs image.
  3. Restore Backup.
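The steps above can be sketched with sysupgrade's own flags (-b to back up, -n to flash without keeping settings, -r to restore). The image and backup file names below are made up for illustration; substitute your own:

        # 1. save the current config and copy it off the router
        sysupgrade -b /tmp/backup.tar.gz
        scp /tmp/backup.tar.gz user@pc:backups/

        # 2. flash the squashfs image, discarding settings (-n)
        sysupgrade -n /tmp/openwrt-squashfs-sysupgrade.bin

        # 3. after reboot, copy the backup back and restore it
        scp user@pc:backups/backup.tar.gz /tmp/
        sysupgrade -r /tmp/backup.tar.gz
        reboot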

I assume I cannot convert to squashfs via Attended Sysupgrade?

You have to check what is actually backed up. Everything but fstab can be restored, if it was actually backed up. See sysupgrade -l for the actual list.

There is actually a LOT backed up… Seems like a lot, but I have no idea what makes up OWT.

I have NOTHING in my /etc/fstab, so it doesn’t matter…

When 25.12 gets stable enough for me, I’ll ask how to migrate to squashfs. The sysupgrade process only downloads an updated image with the same filesystem, doesn’t it? Or can I specify that I want to download a squashfs image?

If I back up the config, restoring it should get OWT right back up and running.

/etc/config/fstab

Being able to reset to factory defaults (failsafe mode), so you can diagnose a faulty configuration.


Are you saying that block-mount triggers e2fsck? I can't see anywhere in the files provided by that package where that is true.

OpenWrt does not write to disk frequently in most scenarios, so a filesystem check is not usually necessary.

More specifically, the only time data is written to disk (in a typical configuration) is when you are changing configurations. Based on that, there are very few instances in which any files are open (in write mode) or actively being written to the media, thus making it rather unlikely that normal power interruptions (be it intentional or accidental/incidental) or other "normal" events will cause data corruption.

The calculus changes when the storage is being used for more 'general purpose' functions such as a NAS, persistent log storage to non-volatile space, etc. and filesystem writes are more frequent. For what it's worth, this is not the recommended mode of operation for most embedded systems as it will degrade the flash storage device (which has a relatively limited number of write cycles compared to magnetic media or purpose-built flash/SSD devices).

Obviously, aside from the types of damage described above (i.e. power interruption during writes, flash wear, etc.), there can be other types of corruption/damage that can happen due to things like 'bit rot', cosmic radiation, ESD and other power transients and the like, but these are quite rare, especially in comparison to the earlier examples, and thus don't usually warrant frequent integrity checks.

If you're running mission-critical networks or if you are using your device with frequent writes, fs integrity checks can be valuable to avoid surprises, but for most normal home/small business applications, it's unlikely to make much of a difference to run frequent checks vs on-demand checks if you suspect a problem (which is likely to be extremely infrequent if you're using your device in the typical scenarios).
