Rpi4 < $(community_build)

you can expand /tmp ... but the backup file is then copied to /boot, which has only 200MB free... so it won't solve the issue...

i.e.

mount -o remount,size=3000M /dev/shm /tmp/

large data that should be kept during upgrade is best handled with an fstab mount to an additional usb3 drive (and left out of sysupgrade.conf)

########################### /etc/config/fstab

config mount
	option target '/opt'
	option uuid 'aa8fca3f-7077-4f41-a289-ca04fc22470d'
	option enabled '1'
	option enabled_fsck '1'

above '/opt' is an example for the runtime mount location of the large data files...
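In case it helps, the same fstab section can be created from the CLI instead of editing the file by hand; a sketch (the uuid above is just an example, yours will differ, and `block info` needs the block-mount package that this build ships):

```shell
# identify the partition and its UUID (run on the router)
block info

# create the mount section via uci
uci add fstab mount
uci set fstab.@mount[-1].target='/opt'
uci set fstab.@mount[-1].uuid='aa8fca3f-7077-4f41-a289-ca04fc22470d'
uci set fstab.@mount[-1].enabled='1'
uci set fstab.@mount[-1].enabled_fsck='1'
uci commit fstab
/etc/init.d/fstab boot   # trigger the mount now (or just reboot)
```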

looking at your backup list... i'm guessing one/some of these directories contain more than around 300MB

/root/docker/compose/servicehub/
/etc/HomeAssistant/
/etc/nut/
/etc/plex/
/etc/docker/
/etc/vhusbd/
/etc/x3mRouting/

so the idea is to;

  • get them under '/opt' or ( the /etc/config/fstab external usb3 drive mountpoint ) AND/OR
  • bindmount / symlink them to the external drive at startup etc. etc. OR
  • manually backup and restore them
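The symlink variant of the second option can be sketched like this, using throwaway paths under /tmp so it runs anywhere; on the router the source would be e.g. /etc/plex and the destination a directory on the mounted external drive:

```shell
#!/bin/sh
# demo with scratch paths; substitute /etc/plex and /opt/plex on the router
rm -rf /tmp/demo-etc /tmp/demo-opt
SRC=/tmp/demo-etc/plex        # stand-in for /etc/plex
DST=/tmp/demo-opt/plex        # stand-in for /opt/plex on the external drive

mkdir -p "$SRC" "$(dirname "$DST")"
echo 'config data' > "$SRC/Preferences.xml"

# move the real data onto the (mounted) external drive...
mv "$SRC" "$DST"
# ...and leave a symlink behind so services keep their expected path
ln -s "$DST" "$SRC"

cat "$SRC/Preferences.xml"    # still readable via the old path
```

A bind mount (`mount --bind "$DST" "$SRC"`) achieves the same effect without a symlink, but has to be re-established at every boot, e.g. from /etc/rc.local.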

I see... looks like I got to an edge case again :sweat_smile:

As I understand it, by pointing /opt to an external drive I should be able to handle Docker bloating the backup, with the only downside being that the external drive has to stay plugged in at all times.

As such, a "big" mSD card is of limited use at the moment, and most multitasking operations that require storage space should be left to external flash storage for the time being, is that correct?

If so, would a 32GB stick of ext4 over USB3 suffice for the mount, or should I consider something larger like an NVMe enclosure?

it is volatile due to the built in backup constraints...

it can be used for 'scratch'... ( temporary chroots / lxc ) or runtime lxc/docker... but you

  • must handle backup and restore manually

sounds good to me... for stuff like docker tho' it may fail after a few months... so ssd/nvme is definitely a more robust option (but will likely need a usb hub)


Okay, this one seems like the best alternative so far.

So then I should just move the docker-compose mounted volumes from /etc over to /opt, and as long as /etc and the rest of the stock partitions fit under the 200MB limit, everything should be peachy.

So the final solution for in-place upgrades is to move everything over to /opt and mount it on an external drive. Well, I learn something new every day.

Thanks for the help! :smiley:


Yup... perfect!


OpenWrt traditionally targets small amounts of flash memory ( 8M to 128M )... where only tiny backup files are/can-be stored...

In this build... i've made several 'enhancements' to support

  • larger numbers of packages
  • expansion of the rootfs

but the underlying backup method is still traditional ( although with a stock image /boot is maybe 50M not 300M )

I tried to make it very clear when I implemented ROOTFSEXPAND that this would be an underlying limitation... ( and since day 1 I've told anyone using large datasets to use an external drive / mount for those )... but it is good for you to test the limits and see how the errors manifest first hand... (and edge cases with verbose debugging are something I learn a lot from... so thanks)

( i.e. I still need to make this advice clearer... and perhaps in the future I will make some modifications to better support large data migration )

these constraints (and solutions) will exist on official (all) images...

Thank YOU wulfy, honestly you're the most patient developer I've dealt with so far, and as you say this experience has definitely taught me a lot; having you along for the ride has certainly kept my sanity from dwindling any lower :sweat_smile:

Then again, to tell you the truth, the search function only helped me so much in finding out that you had already answered this sort of question; I only found the quote above by checking the replies to the OP. So maybe something like a README.md on the download site, or a warning to check the replies on the OP, would help discoverability if you're still able to edit that one (I honestly thought the replies were either praise from fellow users or posts reserved for thread managing).

In any case, I sure am glad that I can put to rest this latest adventure, and a donation is surely coming your way after payday :blush:.

Thanks for everything, I guess I have my work cut out for me tomorrow. :sweat_smile:


once it's set up it works beautifully... (and upgrades are fast, as the large data does not need to be moved)

[root@dca632 /usbstick 53°] mount | grep usbstick
/dev/sda1 on /usbstick type ext4 (rw,relatime)

[root@dca632 /usbstick 52°] uci show fstab | grep -A3 usbstick
fstab.@mount[2].target='/usbstick'
fstab.@mount[2].uuid='aa8fca3f-7077-4f41-a289-ca04fc22470d'
fstab.@mount[2].enabled='1'
fstab.@mount[2].enabled_fsck='1'

[root@dca632 /usbstick 52°] df -h | grep usbstick
Filesystem                Size      Used Available Use% Mounted on
/dev/sda1               112.3G     29.8G     76.8G  28% /usbstick

It looks beautiful indeed! Now I just need to harvest the convention bag for a stick 'till I can get my hands on another NVMe drive. Not too shabby for being my first homelab! :grinning_face_with_smiling_eyes:

/usr/lib/lua/luci/dispatcher.lua:427: /etc/config/luci seems to be corrupt, unable to find section 'main'

should I reboot?


hmmm... that's the second time you've hit an odd /etc/config/luci error, but it seems unique to you...

so will be really hard for me to troubleshoot... I think last time we were thinking about failing sdcard or something... (what model is the sdcard?)

if nobody else has the issue... it's likely that, or related to some additional package / manual change? but I also suspect the error is a little misleading... like that config file is not the real issue or something... is there any relationship between high io (lots of file copy operations) and the error?
you only need the last rpcd restart command; see below

ucivalidate.sh 2>/dev/null | grep luci
cat /etc/config/luci
###################################
curl -sSL https://raw.githubusercontent.com/wulfy23/rpi4/master/utilities/config-luci > /etc/config/luci
rm -rf /tmp/luci*
/etc/init.d/uhttpd restart; /etc/init.d/rpcd restart

40 results on the forum...

'rebooting fixes but comes back'

'make sure rpcd is running'

ok this seems closest...
'And I seem to be able to reproduce the crash, even after your suggested move. Whenever I browse to the connection stats page and then somewhere else, I get an error'


so... some sort of rpcd crash related to visiting the 'connections' page?

think you should join in on the thread above ( or this github issue ) as it's looking like a non-build related resource problem (conntrack js overheads/quirks spamming rpcd? lib-json-c struct overflow? beyond my paygrade!)...

lol@slim-wrt workaround! just remove connections!


maybe see if this helps... (needs restart and browser cache clear);


sed -i 's|pollInterval=3|pollInterval=10|g' /www/luci-static/resources/view/status/connections.js

It is fine now after reboot.
Yes, I was trying to check status > realtime graphs > connections

The memory card is a Kingston Select Plus 32GB.


fyi...;

Failed to create the configuration backup

they will get;

############################################
Failed to create the configuration backup
############################################
it's probably too big (max 200MB)
remove large directories from your /etc/sysupgrade.conf
and backup / restore them manually
############################################

thanks again for the feedback...
(note: 200MB is the tar.gz limit, which probably corresponds to something closer to 550MB of raw data depending on the filetype)
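One way to estimate whether the backup will blow that limit before attempting an upgrade is to measure the gzip-compressed tar size of the directories in question; a sketch using a scratch directory so it runs anywhere (on the router you would point the tar command at the paths listed in /etc/sysupgrade.conf):

```shell
#!/bin/sh
# scratch data so the example is self-contained; use your real paths on the router
DIR=/tmp/backup-demo
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/data.bin" bs=1024 count=512 2>/dev/null   # 512KB of zeros

LIMIT=$((200 * 1024 * 1024))   # the ~200MB tar.gz ceiling mentioned above
SIZE=$(tar -cz -C /tmp backup-demo | wc -c)

echo "compressed size: $SIZE bytes"
[ "$SIZE" -le "$LIMIT" ] && echo "should fit" || echo "too big - back it up manually"
```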

Every time we talk you impress me more and more, honestly it's perfect!

Thanks for all the help so far!

And while on the topic of large data, I caught a glimpse that you were able to get ffmpeg to use hardware acceleration to a degree, and was wondering if you had kept the libraries around so as to use them with Jellyfin per this guide, since I'm able to see the onboard encoder just fine.

Also, not sure if this is something that "requires" fixing, but it seems the original youtube-dl is being superseded by this fork (yt-dlp), since the maintainers of the original project don't seem to be as active as in previous years; while I don't use the feature personally, it might be worth taking a look at switching over to the newer binary and giving it a test drive.

Also seem to be having some issues with both Wireguard and Transmission not opening their ports even after explicitly opening them in the firewall, but I'm pretty sure that has more to do with the packages rather than the build.

The upgrade worked correctly from yesterday's convo over to today's current build, and aside from some connection timeouts I'm looking into, everything's peachy so far.

Will keep you posted if I find anything else, and thanks again for all your guidance!


cheers... will keep an eye on that...

ffmpeg is 'ripped' from alpine linux... ( files extracted manually; it gets downloaded on firstboot due to size... )... there are one or two more handled the same way ( rclone, pastebinit etc. etc.)

i'll have to read up a little on this to get my bearings... but for packages not in openwrt I typically see if I can pull them out of alpine (or sometimes debian) as above...

findings-or-opinion

had a quick look over the jellyfin docs, and in all honesty, for this type of stuff (complex rpi4 video) you'll be better off (indeed you'll need) running the full-blown distro (kernel) as the host os...

will save you a bunch of time and hassle... so if I were you, i'd look into purchasing an additional rpi4 (or just using x64) for this type of thing...

also has the huge benefit of not needing to update/zap the whole thing when you update the router...

a fun recent attempt...

[ /usbstick 49°]# kodi

[ /usbstick 47°]# ps w | grep kodi                                                                     
19191 root      0:00 {kodi} /bin/sh /usr/bin/kodi                                                                  
19199 root      0:00 /usr/lib/kodi/kodi-x11                                                                        
19205 root      0:00 grep kodi

ERROR: Unable to create GUI. Exiting

thanks for the report... first i've heard relating to this build but i've seen quite a few master related threads around the forum...

if it's urgent/persists i can probably try to implement a workaround (or maybe use r17530 or try r17637)... but as you say probably more to do with packages/netifd...


Well that's an interesting tidbit, all this time I've been downloading rclone manually! Will keep an eye on that for the next build then.

Long story short it seems that the OpenMAX libraries are not shipped within the Docker container, and HW acceleration only works on the linuxserver.io image as long as both the library path and the device are shared to the container, but otherwise it seems like a fairly standard implementation. The only part that gives me pause is that the container's README mentions the other video devices that the RPi exposes while on Raspbian (video10, video11, video12, etc.), so there's a chance the way VideoCore exposes the hardware would require creating the rest of the device instances (then again, this is only speculation). On the other hand, it seems the Reddit post was used as the basis for the official snippet on their documentation according to this comment, so the paper trail checks out in regards to the requirements.
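For reference, the approach described in that README boils down to passing the video device nodes and the VideoCore library path into the container; roughly something like the following, where the device nodes and the /opt/vc/lib host path are assumptions taken from the Raspbian-oriented docs and not verified on this build:

```shell
# device nodes and library paths below are illustrative; they depend on what
# the VideoCore stack actually exposes on this build
docker run -d --name jellyfin \
  --device /dev/vchiq \
  --device /dev/video10 --device /dev/video11 --device /dev/video12 \
  -v /opt/vc/lib:/opt/vc/lib \
  -v /opt/jellyfin/config:/config \
  lscr.io/linuxserver/jellyfin:latest
```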

It's probably something related to the master branch; I've had this issue since r17443 at least with Wireguard (even though the port is open and the service is running, not even nmap can make tcpdump catch a packet), but if there's anything I can provide to help diagnose the issue, I'd be glad to help.

Fortunately nothing's urgent atm, but if you have something in mind I'm game.


for now... at least on 'current' (r1763x+) you can use rclone-aarch64 to install it... but it sort of needs an initscript too... for now that's up to the user...

or just

cd /
wget https://github.com/wulfy23/rpi4/raw/master/utilities/rclone.tar.gz
tar -xvzf rclone.tar.gz
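For the initscript mentioned above, a minimal procd wrapper might look like the following sketch; it writes to /tmp/rclone.init purely for illustration, whereas on the router it would live at /etc/init.d/rclone (the rclone subcommand and flags are placeholders to adapt to your setup):

```shell
#!/bin/sh
# write a minimal procd init script; the /tmp path is just for the demo
cat > /tmp/rclone.init <<'EOF'
#!/bin/sh /etc/rc.common
# minimal procd service for rclone (command/flags are placeholders)
USE_PROCD=1
START=99

start_service() {
    procd_open_instance
    procd_set_param command /usr/bin/rclone rcd --rc-no-auth
    procd_set_param respawn
    procd_close_instance
}
EOF
chmod +x /tmp/rclone.init
```

After copying it to /etc/init.d/rclone on the router, `/etc/init.d/rclone enable && /etc/init.d/rclone start` would register and launch it.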

this is more for your own purposes... way too much output to post... but you can try...

cp /sbin/hotplug-call /sbin/hotplug-call.orig
cp /sbin/hotplug-call-debug /sbin/hotplug-call

to see some verbose hotplug related info... ( cat /tmp/hotplug...)...

but i'll poke around and digest the other posts and your feedback for a bit, because I think hotplug is a bit late in the chain ( for a root cause, but it can be useful to catch exceptions for a workaround )


Fair enough :sweat_smile:, I'm still getting used to uci, so I think I'll stick to the second option.

Okay, I'm interested and scared at the same time, but it's certainly worth a try!

So perhaps the issue is more of a layer 2 thing, I presume; I'll keep you posted if anything weird comes up with hotplug. For the moment I'll just say that my rtl8152-based adapter's error rates dropped considerably after removing the kernel module (kmod-usb-net-rtl8152); it seems that on the "current" branch the non-USB rtl8150 driver is currently more reliable. That said, after today's upgrade I haven't been able to reproduce the same error messages I was getting about restarting the interface using xHCI, so maybe that was more of a coincidence or a fluke.

Again, thanks for all the help!


if you get a chance and you are still on the same build... can you PM me the output from

ubus -S call network.interface dump

when the interfaces / network has not come up correctly?


i've put an experimental mtr(4) in the luci diagnostics page... but XHR is limited to 30 secs, so you will likely get a timeout error if you try to test this for now... at least I get it on the first run...

second run


Wireguard status page now working on new build. Well done.
