Installing OpenWrt fails on Cudy WR1300

I got an image called "openwrt-ramips-mt7621-cudy_wr1300-19.07-flash.bin" from Cudy support via email. I am not able to attach it here because the file format is not allowed. With this image installed, I was not able to flash the images from https://openwrt.org/toh/hwdata/cudy/cudy_wr1300, but the latest release candidate 22.03.0-rc6 (https://firmware-selector.openwrt.org/?version=22.03.0-rc6&target=ramips%2Fmt7621&id=cudy_wr1300) is working now.

Thanks for the help. It is a very nice community here :slight_smile:

Thanks for the update.

Thanks for this information. Last year I bought some Cudy WR1300s with the W25Q128 flash chip, and now I have one with the XM25QH128C. I wrote to Cudy yesterday, because several people have issues with the version from their website, but they only replied with a link to the website. Could you send me your firmware file, and can you confirm that your device has the XM25QH128C? The chip is shown in dmesg:

dmesg | grep spi

This firmware works with the XMC XM25QH128C, but I can't upgrade to 22.03.0-rc6. The UART console shows the following error messages:

[    1.708692] mt7530 mdio-bus:1f: Link is Up - 1Gbps/Full - flow control rx/tx
[    1.718701] VFS: Mounted root (squashfs filesystem) readonly on device 31:5.
[    1.729892] Freeing unused kernel memory: 1260K
[    1.734455] This architecture does not have kernel memory protection.
[    1.740885] Run /sbin/init as init process
[    1.781275] SQUASHFS error: xz decompression failed, data probably corrupt
[    1.788301] SQUASHFS error: Failed to read block 0x1971ba: -5
[    1.794040] SQUASHFS error: Unable to read fragment cache entry [1971ba]
[    1.800709] SQUASHFS error: Unable to read page, block 1971ba, size e1ac
[    1.807415] SQUASHFS error: Unable to read fragment cache entry [1971ba]
[    1.814106] SQUASHFS error: Unable to read page, block 1971ba, size e1ac
[    1.820788] SQUASHFS error: Unable to read fragment cache entry [1971ba]
[    1.827470] SQUASHFS error: Unable to read page, block 1971ba, size e1ac
[    1.834170] SQUASHFS error: Unable to read fragment cache entry [1971ba]
[    1.840837] SQUASHFS error: Unable to read page, block 1971ba, size e1ac
[    1.847546] SQUASHFS error: Unable to read fragment cache entry [1971ba]
[    1.854230] SQUASHFS error: Unable to read page, block 1971ba, size e1ac
[    1.861018] Starting init: /sbin/init exists but couldn't execute it (error -5)
[    1.868309] Run /etc/init as init process
[    1.876699] Run /bin/init as init process
[    1.883295] Run /bin/sh as init process
[    1.932954] SQUASHFS error: xz decompression failed, data probably corrupt
[    1.939905] SQUASHFS error: Failed to read block 0x6e: -5
[    1.945298] SQUASHFS error: Unable to read data cache entry [6e]
[    1.951276] SQUASHFS error: Unable to read page, block 6e, size 269ac
[    1.957772] SQUASHFS error: Unable to read data cache entry [6e]
[    1.963768] SQUASHFS error: Unable to read page, block 6e, size 269ac
[    1.970218] SQUASHFS error: Unable to read data cache entry [6e]
[    1.976210] SQUASHFS error: Unable to read page, block 6e, size 269ac
[    2.030053] SQUASHFS error: xz decompression failed, data probably corrupt
[    2.036983] SQUASHFS error: Failed to read block 0x6e: -5
[    2.089870] SQUASHFS error: xz decompression failed, data probably corrupt
[    2.096795] SQUASHFS error: Failed to read block 0x6e: -5
[    2.102311] Starting init: /bin/sh exists but couldn't execute it (error -5)
[    2.109386] Kernel panic - not syncing: No working init found.  Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst for guidance.
[    2.123519] Rebooting in 1 seconds..

So I tested the daily snapshot and got another error:

[    1.579146] /dev/root: Can't open blockdev
[    1.583297] VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
[    1.590794] Please append a correct "root=" boot option; here are the available partitions:
[    1.599113] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[    1.607366] Rebooting in 1 seconds..

I was able to recover via TFTP using the working "openwrt-ramips-mt7621-cudy_wr1300-19.07-flash.bin" image.

Complete logfile and images:
https://drive.google.com/drive/folders/1xfVDv-Qm7g0Qz5d7X8PitZ3pSax_35S4?usp=sharing

The SQUASHFS errors might be caused by a heat problem that is not related to the XM25QH128C issues.
I got them on my WR1300 (with W25Q128) today after it had been running at ~27 °C room temperature for a while, with the metal shield getting quite hot. They disappeared when I unplugged it and let it cool down for a while, and they reappeared once it had gotten hot again.

@obazda20 Can you tell how hot your room and device were when you got these SQUASHFS errors, and was it different when you tested the snapshot?

I'm working in my basement. It's around 20 °C, but I will test it again with the rc6 and the daily snapshot.

The errors only seem to happen to me when the device gets booted while it is already hot, but not when it gets hot after it has been booted cold.

I had it powered off overnight and it booted fine this morning. I had it running for a few hours and read all files from the squashfs every once in a while, dropping the caches in between so that I really read from the chip. There was not a single SQUASHFS error during this time, but when I just rebooted, the errors became so bad that the device is unusable. Letting it cool down for a few minutes brought back a clean boot without errors.

[update]

Meanwhile I managed to randomly turn the errors on and off when the device is hot, regardless of the temperature at boot time.
I execute

sync; echo 3 > /proc/sys/vm/drop_caches

to clear the caches and force the kernel to read from the flash chip again, and then I run

find /rom -type f | xargs cat > /dev/null

to read all files in the squashfs. This last command randomly does or does not trigger the SQUASHFS errors in the kernel.

[/update]
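For repeated testing, the two commands above can be combined into one small helper (a sketch assuming a POSIX /bin/sh such as OpenWrt's BusyBox ash; /rom is the read-only squashfs mount on OpenWrt):

```shell
#!/bin/sh
# Drop the page cache so the next reads come from the flash chip, then
# re-read every file of the (read-only) squashfs mount.
reread_squashfs() {
    dir="${1:-/rom}"                     # squashfs mount point, /rom on OpenWrt
    sync
    # Needs root; silence the error so the sketch also runs unprivileged.
    echo 3 > /proc/sys/vm/drop_caches 2>/dev/null
    find "$dir" -type f -exec cat {} + > /dev/null
}
```

After calling `reread_squashfs`, check `dmesg` for new SQUASHFS errors.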

BTW, I had desoldered the RF shield, which also acts as a heat sink for the SoC and WiFi chips, because I needed access to the flash chip. I am not sure if running the device in that state for too long caused some damage that now shows up when the device just gets warm within its specs.

Image: 22.03.0-rc6 (r19590-042d558536)

So everything was cold. I reflashed with the rc6 image 22.03.0-rc6 (r19590-042d558536)
and got the following error on the first boot after flashing:

[    1.665979] Please append a correct "root=" boot option; here are the available partitions:
[    1.674326] 1f00             192 mtdblock0 
[    1.674332]  (driver?)
[    1.680837] 1f01              64 mtdblock1 
[    1.680841]  (driver?)
[    1.687370] 1f02              64 mtdblock2 
[    1.687375]  (driver?)
[    1.693900] 1f03           15872 mtdblock3 
[    1.693905]  (driver?)
[    1.700409] 1f04              64 mtdblock4 
[    1.700413]  (driver?)
[    1.706941] 1f05              64 mtdblock5 
[    1.706946]  (driver?)
[    1.713447] 1f06              64 mtdblock6 
[    1.713451]  (driver?)
[    1.719978] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[    1.728231] Rebooting in 1 seconds..

So then I switched the power off and on again, and got JFFS2 erase-block scan errors:

[    1.743479] mt7530 mdio-bus:1f: Link is Up - 1Gbps/Full - flow control rx/tx
[    1.744831] jffs2: jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00000004: 0x0568 instead
[    1.759994] jffs2: jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00000008: 0x9bef instead
[    1.769476] jffs2: jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00000010: 0x000d instead
[    1.778938] jffs2: jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00000014: 0x0004 instead
[    1.788394] jffs2: jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00000018: 0x0240 instead
[    1.797843] jffs2: jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x0000001c: 0x0004 instead
[...]
[   21.864955] jffs2: Further such events for this erase block will not be printed
[   21.886710] jffs2: Empty flash at 0x00cdd944 ends at 0x00cdd9f8
[   21.894442] jffs2: Cowardly refusing to erase blocks on filesystem with no valid JFFS2 nodes
[   21.902866] jffs2: empty_blocks 37, bad_blocks 0, c->nr_blocks 206
[   21.909202] VFS: Cannot open root device "(null)" or unknown-block(31,5): error -5
[   21.916769] Please append a correct "root=" boot option; here are the available partitions:
[   21.925110] 1f00             192 mtdblock0 
[   21.925116]  (driver?)
[   21.931622] 1f01              64 mtdblock1 
[   21.931626]  (driver?)
[   21.938152] 1f02              64 mtdblock2 
[   21.938157]  (driver?)
[   21.944659] 1f03           15872 mtdblock3 
[   21.944663]  (driver?)
[   21.951181] 1f04            2633 mtdblock4 
[   21.951186]  (driver?)
[   21.957702] 1f05           13238 mtdblock5 
[   21.957707]  (driver?)
[   21.964208] 1f06              64 mtdblock6 
[   21.964212]  (driver?)
[   21.970732] 1f07              64 mtdblock7 
[   21.970737]  (driver?)
[   21.977253] 1f08              64 mtdblock8 
[   21.977257]  (driver?)
[   21.983759] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(31,5)
[   21.992101] Rebooting in 1 seconds..

After this, it loops with this error:

[    1.644740] pci 0000:00:01.0:   bridge window [mem 0x60300000-0x603fffff pref]
[    1.654365] /dev/root: Can't open blockdev
[    1.658465] VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
[    1.665979] Please append a correct "root=" boot option; here are the available partitions:
[    1.674326] 1f00             192 mtdblock0 
[    1.674332]  (driver?)
[    1.680837] 1f01              64 mtdblock1 
[    1.680841]  (driver?)
[    1.687370] 1f02              64 mtdblock2 
[    1.687375]  (driver?)
[    1.693900] 1f03           15872 mtdblock3 
[    1.693905]  (driver?)
[    1.700409] 1f04              64 mtdblock4 
[    1.700413]  (driver?)
[    1.706941] 1f05              64 mtdblock5 
[    1.706946]  (driver?)
[    1.713447] 1f06              64 mtdblock6 
[    1.713451]  (driver?)
[    1.719978] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[    1.728231] Rebooting in 1 seconds..

Image: SNAPSHOT (r20351-60738feded)

[    1.581110] /dev/root: Can't open blockdev
[    1.585273] VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
[    1.592763] Please append a correct "root=" boot option; here are the available partitions:
[    1.601080] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[    1.609333] Rebooting in 1 seconds..

After turning it off and on again:

[    1.600909] /dev/root: Can't open blockdev
[    1.605012] VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
[    1.612520] Please append a correct "root=" boot option; here are the available partitions:
[    1.620859] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[    1.629112] Rebooting in 1 seconds..

You can find all logfiles here:
https://drive.google.com/drive/folders/1I_rl6iafFJ1n3gVZFsyLMdPaJoB7vWlD

I am not able to leave this initramfs state. I always get this message after login. It also occurs after flashing a sysupgrade image.

Does the upgrade process (with a sysupgrade image) report any errors or other information? How are you running sysupgrade -- via LuCI (web) or SSH? What options/arguments are you selecting/providing?

I had a look at the device tree in Cudy's OpenWrt image and found that they run the SPI flash at a much lower speed than upstream OpenWrt (10 MHz vs. 80 MHz). According to the data sheets, both flash chips should be capable of working at 80 MHz and beyond, but maybe they got a bad batch of XM25QH128C, or the layout of the SPI lines on the board doesn't work well at higher speeds -- but still better with the W25Q128 than with the XM25QH128C.

Anyway, if you want to give it a try, change spi-max-frequency in target/linux/ramips/dts/mt7621_cudy_wr1300.dts from 80000000 to 10000000, rebuild and flash.
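For orientation, the property sits in the SPI flash node of that file; a rough sketch of the edit (surrounding properties abbreviated, check the actual file):

```dts
flash@0 {
	compatible = "jedec,spi-nor";
	reg = <0>;
	/* upstream uses <80000000>; 10 MHz works with the XM25QH128C */
	spi-max-frequency = <10000000>;
};
```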

My device with the replaced flash chip shows the SQUASHFS failures at 80 MHz when heated up, but so far they have not happened at 10 MHz.

Update: At an SPI clock of 40 MHz I cannot reproduce the SQUASHFS corruption either.


For building a new image I need a Linux machine, correct?
According to the dts file, the SPI frequency is already 10 MHz:

For the moment I went back to 19.07 again to test the adblock and VPN features on OpenWrt.

Yes, you need Linux to build OpenWrt yourself, or you can use the 10 MHz image I uploaded here -- if you trust me not to turn your Cudy into a bot.

But where did you get that dts file from that uses 10 MHz and m25p,chunked-io = <32>?

[Update: Ah, it is the dts file that Cudy provided themselves in their "OpenWrt for developers" package. And that one has been using 10 MHz since Cudy first released it mid 2020, so Cudy themselves apparently never ran it faster than that, even with the old chip.]

I checked the dts in OpenWrt, and it uses 80 MHz in the master branch as well as in 22.03 and 21.02. This means that all images for this model currently found on openwrt.org run the flash at 80 MHz. I also checked the history of the file, and this spot has never been changed since it was first added to OpenWrt.

You can also check the current SPI speed in the running system by running
hexdump -C /sys/firmware/devicetree/base/palmbus@1e000000/spi@b00/flash@0/spi-max-frequency
For 80 MHz the output starts with: 00000000 04 c4 b4 00
For 10 MHz the output starts with: 00000000 00 98 96 80
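Those byte patterns are simply the frequency in Hz stored as a big-endian 32-bit integer. A small decoding helper (hypothetical, plain POSIX shell) makes that visible:

```shell
#!/bin/sh
# Decode a devicetree u32 (big-endian) from the four bytes hexdump -C prints.
dt_u32() {
    printf '%d\n' $(( (0x$1 << 24) | (0x$2 << 16) | (0x$3 << 8) | 0x$4 ))
}

dt_u32 04 c4 b4 00   # prints 80000000 -> 80 MHz
dt_u32 00 98 96 80   # prints 10000000 -> 10 MHz
```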

The SHA256 of my image is
0e529b3f7910942b3c1cf98ecb2b38b5d1c75a0370a341fb750b91a7dec3bf9f
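Before flashing, the download can be checked against that hash; a small helper sketch (POSIX shell, `image.bin` is a placeholder filename):

```shell
#!/bin/sh
# Succeeds only if the file's SHA256 matches the expected hex digest.
verify_sha256() {
    actual="$(sha256sum "$1" | cut -d' ' -f1)"
    [ "$actual" = "$2" ]
}

# Example (placeholder filename, hash from the post above):
# verify_sha256 image.bin \
#     0e529b3f7910942b3c1cf98ecb2b38b5d1c75a0370a341fb750b91a7dec3bf9f \
#     && echo "hash OK"
```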

Thanks a lot! Flashing worked with the image you uploaded.

The issue I am now running into is that I am not able to install the driver for the wireless chip.

It says the kernel is incompatible, but I guess the image has the latest kernel version inside?

I am glad to hear that the image works for you, but why do you think you have to install the WiFi driver yourself? It is already part of the image.

I don't see any wireless section in LuCI.

That's why I tried to install it manually.

Ah, indeed, my test image was based on the configuration for another MT7621 device that has no WiFi, and therefore I had excluded the WiFi drivers. Sorry for the confusion. I didn't mean this image to be used productively anyway; its purpose was to test whether a lower SPI speed solves the issues with the new flash chip.
I will now prepare a patch and hope it will be accepted soon, but more testers for the 10 MHz change (self-compiled or using my image) would be welcome.


Ah, okay, understood. Thanks!

For those who test this with a self-compiled image: if it works for you at 10 MHz with the new flash chip, please also try 55, 44, and 22 MHz and report the highest frequency that works on your device.

Ideally, let it run for a while so that it warms up, then reboot, and after another while check the kernel log for SQUASHFS errors like those shown above or other signs of file system corruption.
