I have encountered the error again on my hardware.
It starts with:
Sun May 15 14:37:12 2022 kern.err kernel: [533320.106127] blk_update_request: I/O error, dev mtdblock10, sector 2594 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Sun May 15 14:37:15 2022 kern.err kernel: [533323.226313] blk_update_request: I/O error, dev mtdblock10, sector 3488 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Sun May 15 14:37:16 2022 kern.err kernel: [533324.269099] blk_update_request: I/O error, dev mtdblock10, sector 2596 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Sun May 15 14:37:19 2022 kern.err kernel: [533327.385955] blk_update_request: I/O error, dev mtdblock10, sector 3490 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Sun May 15 14:37:20 2022 kern.err kernel: [533328.426176] blk_update_request: I/O error, dev mtdblock10, sector 2598 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Sun May 15 14:37:23 2022 kern.err kernel: [533331.546073] blk_update_request: I/O error, dev mtdblock10, sector 3492 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Sun May 15 14:37:25 2022 kern.err kernel: [533333.627892] blk_update_request: I/O error, dev mtdblock10, sector 2600 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Sun May 15 14:37:27 2022 kern.err kernel: [533335.707119] blk_update_request: I/O error, dev mtdblock10, sector 3494 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Sun May 15 14:37:28 2022 kern.err kernel: [533336.745936] blk_update_request: I/O error, dev mtdblock10, sector 2602 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Sun May 15 14:37:30 2022 kern.err kernel: [533338.825761] blk_update_request: I/O error, dev mtdblock10, sector 3496 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Sun May 15 14:37:31 2022 kern.err kernel: [533339.865779] blk_update_request: I/O error, dev mtdblock10, sector 2604 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Sun May 15 14:37:33 2022 kern.err kernel: [533340.905699] blk_update_request: I/O error, dev mtdblock10, sector 3498 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
which then progresses to:
Sun May 15 14:37:33 2022 kern.err kernel: [533340.910849] SQUASHFS error: squashfs_read_data failed to read block 0x130f86
Sun May 15 14:37:33 2022 kern.err kernel: [533340.935308] SQUASHFS error: squashfs_read_data failed to read block 0x1b407e
Sun May 15 14:37:33 2022 kern.err kernel: [533340.935350] SQUASHFS error: Unable to read fragment cache entry [1b407e]
Sun May 15 14:37:33 2022 kern.err kernel: [533340.941430] SQUASHFS error: Unable to read page, block 1b407e, size c9c4
Sun May 15 14:37:33 2022 kern.err kernel: [533340.948634] SQUASHFS error: Unable to read fragment cache entry [1b407e]
Sun May 15 14:37:33 2022 kern.err kernel: [533340.955139] SQUASHFS error: Unable to read page, block 1b407e, size c9c4
Sun May 15 14:37:33 2022 kern.err kernel: [533340.975622] SQUASHFS error: xz decompression failed, data probably corrupt
Sun May 15 14:37:33 2022 kern.err kernel: [533340.975666] SQUASHFS error: squashfs_read_data failed to read block 0x130f86
This is exactly in line with the errors as originally described.
My underlying storage drivers are spi-qup and spi-nor; there is no UBIFS involved (as far as I can tell). Only the mtd layer sits between the SPI controllers and squashfs.
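For anyone trying to reproduce this, the driver stack can be confirmed from userspace. A sketch follows; the partition numbering (mtd10 / mtdblock10) is from my board and will differ elsewhere:

```shell
# List MTD partitions; the index N here maps to /dev/mtdblockN.
cat /proc/mtd 2>/dev/null || echo "no MTD support on this kernel"

# Confirm which SPI controller / flash drivers actually bound at boot.
dmesg 2>/dev/null | grep -iE 'spi-qup|spi-nor|mtd' || true
```

The `2>/dev/null || …` guards are only there so the snippet degrades gracefully on non-MTD machines.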
The underlying issue is very clearly the storage controller failing a read, the result of which squashfs then persistently caches. A reboot sorts the issue out, and after the reboot the underlying storage presents as "just fine" as well, even when forcing a read from all blocks.
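Forcing a read of every block (as mentioned above) can be sketched as follows. /dev/mtdblock10 is specific to my board, and note this only exercises the raw read path, not squashfs's decompression path:

```shell
# Hypothetical device node -- substitute your own squashfs partition.
DEV=/dev/mtdblock10

if [ -e "$DEV" ]; then
    # Read every block sequentially; a failing sector surfaces
    # as an I/O error from dd (and in the kernel log).
    dd if="$DEV" of=/dev/null bs=64k
    # Two consecutive checksums that disagree would point at a
    # transient controller-side failure rather than bad media.
    md5sum "$DEV"
    md5sum "$DEV"
else
    echo "$DEV not present on this machine"
fi
```

The plain `dd` invocation is deliberate: busybox dd, common on embedded targets like this one, does not support GNU options such as `status=progress`.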