Save luci-statistics/collectd across reboot

That's a good idea, thanks.

Sorry for the double post. Would creating a share in Windows with "everyone" permission qualify as "world readable"?

I meant install samba on the router.

Then create a share on the router itself for the directory you want to be world readable.
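
For illustration, with the samba36-server and luci-app-samba packages installed, that share could be declared in /etc/config/samba along these lines (just a sketch; the share name is arbitrary, and guest_ok is what makes it effectively world-readable):

config sambashare
	option name 'rrd'
	option path '/tmp/rrd'
	option read_only 'no'
	option guest_ok 'yes'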

OK, what I read about Samba was to install it on OpenWrt and connect to a network share from my Windows machine, which could house the rrd files. Maybe that's not possible?

See also this thread, which is, in essence, about the same thing (saving stats across reboots, across updates, same difference).

I wrote a backup/restore init.d script that may help (as an alternative to external storage).

I have a lightweight approach, where I use scripts that save/restore the stats to flash. My goal is not to guarantee a full 100% of data, but to merely be able to see developments in the long run (monthly/yearly trends), e.g. memory consumption, CPU temp, etc. (As I run both master and 19.07 in my R7800 intermittently, there are differences in the data in any case.)

Saving collectd data every 30 seconds to flash, whether the router's own or a USB stick, does not seem viable to me, so I keep the accumulating data quite normally in the ramdisk /tmp/rrd.

As I store the stats on the router's flash, the data is neither reset-proof (firstboot clears overlay) nor sysupgrade-proof.

Cron runs the save script once per day to back up the data against reboots.

I manually run the save/restore scripts in connection with a sysupgrade. I have configured the stats backup file to be carried along in sysupgrade.

And every few weeks I copy the backup file to external flash.

root@router1:~# cat /etc/storeStats.sh
#!/bin/sh
/etc/init.d/collectd stop
logger -t "LuCI statistics" Create backup archive
mkdir -p /etc/backup.stats
cd /tmp/rrd/$(uname -n)
tar c -zvf /etc/backup.stats/stats.tar.gz *
cp /etc/backup.stats/stats.tar.gz /etc/backup.stats/stats-$(date +%Y%m%dT%H%M).tar.gz
/etc/init.d/collectd start

root@router1:~# cat /etc/restoreStats.sh
#!/bin/sh
/etc/init.d/collectd stop
logger -t "LuCI statistics" collectd stopped, stats being restored
mkdir -p /tmp/rrd/$(uname -n)
cd /tmp/rrd/$(uname -n)
tar x -zvf /etc/backup.stats/stats.tar.gz
/etc/init.d/collectd start


root@router1:~# cat /etc/crontabs/root
0 2 * * * /etc/storeStats.sh

root@router1:~# cat /etc/sysupgrade.conf
## This file contains files and directories that should
## be preserved during an upgrade.
# /etc/example.conf
# /etc/openvpn/
/etc/backup.stats/
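
Putting it together, a sysupgrade then goes roughly like this (the firmware image name below is just a placeholder):

/etc/storeStats.sh                          # archive /tmp/rrd into /etc/backup.stats
sysupgrade -v /tmp/openwrt-sysupgrade.bin   # the backup file travels along per sysupgrade.conf
# ... and once the router is back up:
/etc/restoreStats.sh                        # unpack the archive back into /tmp/rrd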

This sounds like a really good idea, I am trying it out in a VM now.
The only thing I don't really understand is what this does:

root@router1:~# cat /etc/crontabs/root
0 2 * * * /etc/storeStats.sh

Does this tell cron to run a scheduled task once per day? How would I set that up?
Thank you

Yes.
Normal Linux cron stuff.
Like I said:

Just edit the root crontab file (a normal text file; this can be done from LuCI), and make sure that the cron service runs.
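
For reference, a crontab line has five time fields followed by the command, so "0 2 * * *" means "at 02:00, every day of every month":

# minute hour day-of-month month day-of-week   command
# 0      2    *            *     *             -> 02:00 every day
0 2 * * * /etc/storeStats.sh

# and make sure the cron service actually runs:
/etc/init.d/cron enable
/etc/init.d/cron start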

Read the wiki...

Thank you, I think I have a good idea of how I want it now.
A question: would there be a way of running cron jobs, like storeStats.sh, when a reboot command is issued?

Not with a cron job; that is usually done using a script in /etc/init.d ... my script is designed to do that.

FTR, it's not a competition, there are many ways to skin a cat.

Excellent, my friend. I am quite new to the OpenWrt and Linux world, so I hope it's OK if I run the script by you with my modifications, to make sure it's correct.

  1. Copy-paste that script into a file on OpenWrt called rrdbackup, located at /etc/init.d/rrdbackup
  2. chmod +x /etc/init.d/rrdbackup to make it executable
  3. /etc/init.d/rrdbackup enable (to enable it as a service?)
  4. /etc/init.d/rrdbackup start (not sure why this step; if you want to explain, I would appreciate it. In my head this starts the script, which isn't supposed to happen until reboot?)
    • (optional) add the backup target directory to /etc/sysupgrade.conf for the backup to be preserved across sysupgrades
  5. The script:
#!/bin/sh /etc/rc.common

# OpenWrt /etc/init.d/ script to backup and restore the rrd (collectd) database, to preserve data across reboots
#
#
# howto:
# - upload this file as /etc/init.d/rrdbackup
# - (optional) adjust BACKUP_DIR below to point to a different target directory for the backup (e.g., a USB drive)
# - # chmod +x /etc/init.d/rrdbackup
# - # /etc/init.d/rrdbackup enable
# - # /etc/init.d/rrdbackup start
# - (optional) for periodic backups insert into crontab:
#      0 */6 * * * /etc/init.d/rrdbackup backup
#   adjust interval to own taste (example above backs up every 6 hours)
# - (optional) add the backup target directory to /etc/sysupgrade.conf for the backup to be preserved across sysupgrades

# queue this init script to start (i.e., restore) right before collectd starts (80)
# and stop (i.e., backup) right after collectd stops (10):
START=79
STOP=11

# add commands "backup" to manually backup database (e.g. from cronjob)
# and "restore" to restore database (undocumented, should not be needed in regular use)
EXTRA_COMMANDS="backup restore"
EXTRA_HELP="	backup  backup current rrd database"

# directories and files
# RRD_DIR needs to be relative to root, i.e. no slash in front (to mitigate tar "leading '/'" warning)
RRD_DIR=tmp/rrd
BACKUP_DIR=/etc/luci_statistics
BACKUP_FILE=stats.tar.gz

backup() {
	local TMP_FILE=$(mktemp -u)
	tar -czf "$TMP_FILE" -C / "$RRD_DIR"
	mkdir -p "$BACKUP_DIR"
	mv "$TMP_FILE" "$BACKUP_DIR/$BACKUP_FILE"
}

restore() {
	# plain POSIX test, since the script runs under /bin/sh (busybox ash)
	[ -f "$BACKUP_DIR/$BACKUP_FILE" ] && tar -xzf "$BACKUP_DIR/$BACKUP_FILE" -C /
}

start() {
	restore
}

stop() {
	backup
}

The portion of the script I don't quite understand is:

# add commands "backup" to manually backup database (e.g. from cronjob)
# and "restore" to restore database (undocumented, should not be needed in regular use)
EXTRA_COMMANDS="backup restore"
EXTRA_HELP="	backup  backup current rrd database"

Thank you

EDIT: I tried it in a VM first and it worked, so I did it on the live router. WOW, it's great how easy it was!


start() { restore; }

Start will restore your previous backup at boot time or when run manually.

EXTRA_COMMANDS registers the additional "backup" and "restore" commands, so you can call /etc/init.d/rrdbackup backup from a cronjob to "manually" trigger a backup.

EXTRA_HELP is just that, an additional line added to the script to be displayed if you run it without any parameters. (One probably will never need "restore" in regular operation, it's there, but "undocumented.")
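
Concretely, running the script without arguments prints something along these lines (the exact wording of the built-in commands varies between OpenWrt releases), with the EXTRA_HELP text appended at the end:

root@router1:~# /etc/init.d/rrdbackup
Syntax: /etc/init.d/rrdbackup [command]

Available commands:
	start	Start the service
	stop	Stop the service
	restart	Restart the service
	reload	Reload configuration files (or restart if service does not implement reload)
	enable	Enable service autostart
	disable	Disable service autostart
	backup  backup current rrd database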

This is not unique to this script. After you enable an init.d script, it will be registered to be run at startup/shutdown, but it will not have started yet. So you usually start it, once, manually.
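
You can see what "enable" actually did: it only creates the boot/shutdown symlinks derived from START=79 and STOP=11, it does not execute anything:

root@router1:~# ls /etc/rc.d/ | grep rrdbackup
K11rrdbackup
S79rrdbackup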

P.S.: Even if it's not "large" by any means, and pretty straightforward, the script is still a lot larger than it actually needs to be. I tend to be a little bit more verbose in documentation, I do not compress commands down to the bare minimum, and I have some things configurable through variables even if they don't need to be (e.g., there is no real reason to have /tmp/rrd in a variable.) This is for the benefit of other people trying to understand it, and also for my own sanity when I need to revisit it after some time and try to understand my own code.


Thank you @takimata @mbo2o @hnyman @krazeh

You guys wouldn't happen to have a rough estimate of the amount of space this will take up?

rrd databases have fixed sizes, as they operate in round-robin fashion.

Typically LuCI stats only take a few hundred kilobytes, but that depends on the number of plugins you enable.

Check with du /tmp/rrd
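
For example:

du -sh /tmp/rrd        # total size of the live rrd tree
du -sh /tmp/rrd/*/*    # rough per-plugin breakdown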

OK. So that means if a backup is not made within X time, the database will overwrite itself?

Not quite. It will (by default) keep five data series per database:

  • for the last hour, one value every collectd interval ... those are the raw values, also known as PDP (primary data point)
  • for the last day, one value every 600 seconds ... from here on down these are CDPs (consolidated data points), i.e. averages of the "last hour" values: it averages 10 values of "the last hour" into one value
  • for the last week, one value every 4200 seconds (roughly one per hour), again averages from the "last day" values, and so on ... you get the point
  • for one month, one value every 18600 seconds (roughly one per 5 hours)
  • for one year, one value every 219600 seconds (roughly one per 2.5 days)

This means that the longer the timeframe, the lower the detail on the values. But you will only lose data after one year. (And I believe this can even be set to be a higher timeframe.)
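
If you want to see those archives for yourself and have a rrdtool binary installed (luci-app-statistics does not require it, so it may need to be added separately), something like this shows the step sizes and row counts of one database (the memory-free.rrd path assumes the memory plugin is enabled):

rrdtool info /tmp/rrd/$(uname -n)/memory/memory-free.rrd | grep -e step -e rows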

The backup? It very much depends on what you will actually be logging in collectd: the more values you log, the more space. It's not a huge amount, though, and it's gzipped on top. 20, 30, 40 kB, something like that.
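
Once the init script has made its first backup, you can simply check:

ls -lh /etc/luci_statistics/stats.tar.gz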


It keeps separate data series for the different time periods. The shortest interval gets constantly overwritten, but its data gets summarised into the next longer data series.

Hour, day, week, month, year are the data periods. (144 items in each)
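
The step sizes quoted earlier fall out of that directly, assuming a 31-day month and a 366-day year:

# day:            86400 s / 144 =    600 s
# week:          604800 s / 144 =   4200 s
# month (31 d): 2678400 s / 144 =  18600 s
# year (366 d): 31622400 s / 144 = 219600 s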