
yes... for a gaming pc add the mac address to;

RPI4QOS_GAMING_MACS="aa:bb:cc:dd:ee:ff"

it is not a command... it is an INI file value...

also... it is not 'top priority'... giving any one PC 'top priority' implies you have total control of all traffic on that device ( and your highest tier free of all other traffic to assign )... not a wise move for most networks... surely you value your VoIP / DNS traffic higher than gaming chat or windows updates on your gaming pc?

there is no such capability within the script, as things can go downhill very quickly... fwiw... if you really want to do that... the closest equivalent would probably be to remove the PC from the gaming macs and then do the following...

make your ipsets persistent;

RPI4QOS_IPSETPERSIST=1

add the PC's IPv4 (or IPv6) address to the corresponding latsens ipset and restart sqm;

ipset add latsens <PC-IPv4ADDRESS>
/etc/init.d/sqm restart
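
( to double-check the entry took... a read-only check, assuming the set is named latsens as above; )

ipset list latsens | grep <PC-IPv4ADDRESS>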

and what's the correct syntax to add multiple MAC addresses?

RPI4QOS_GAMING_MACS="aa:bb:cc:dd:ee:ff, aa:bb:cc:dd:ee:ff"

is that correct?

no comma... just space separated...
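
e.g. ( the second MAC below is just a placeholder );

RPI4QOS_GAMING_MACS="aa:bb:cc:dd:ee:ff 11:22:33:44:55:66"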

can you do me a favour also... and PM me the result of this command after a day... or after around an hour with several games played or similar...

(iptables-save -c; ip6tables-save -c) | grep -iE '(gaming|game|set|burst|bytes|hashlimit)' | grep -v '0:0'

ok sure, I will PM you tomorrow


Having a small issue with the collectd daemon: it seems to error out and respawn, but fails to kill the previous sqm_collectd script.

I've not modified any of the packaged collectd scripts.

I noticed because the SQM data stopped being exported to influxdb via the network plugin.

FW revision: 2.7.77-12-r15880

Tue Mar  2 18:33:17 2021 daemon.err collectd[8528]: utils_dns: handle_dns: rfc1035NameUnpack failed with status 3.
Tue Mar  2 18:33:17 2021 daemon.err collectd[8528]: dns plugin: pcap_loop exited with status -1.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: configfile: stat (/etc/collectd/conf.d) failed: No such file or directory
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "irq" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "entropy" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "conntrack" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "ping" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "memory" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "cpu" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "load" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "thermal" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "processes" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "interface" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "dns" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "network" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "exec" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "disk" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: utils_taskstats: CTRL_CMD_GETFAMILY("TASKSTATS"): No such file or directory
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: utils_taskstats: get_family_id() = No such file or directory
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: processes plugin: Creating taskstats handle failed.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: Initialization complete, entering read-loop.
Tue Mar  2 18:33:37 2021 user.notice nobody: sqm_collectd.sh more than 2
Tue Mar  2 18:33:52 2021 user.notice nobody: sqm_collectd.sh more than 2
Tue Mar  2 18:34:07 2021 user.notice nobody: sqm_collectd.sh more than 2
Tue Mar  2 18:34:22 2021 user.notice nobody: sqm_collectd.sh more than 2

Haven't had time today to dig into it, but thought I would flag it as this has happened twice over the last couple of days!

(screenshot attached: 2021-03-02 at 21.07.32)


the collectd sqm collector script has been observed re-spawning itself many times... has been going on for probably around a month+... and I've been hoping for a general fix / more info to surface...

two weeks ago... once again via 'ps w', I saw there were around 10 of them running... so I implemented a respawn limit of two... this had the effect of breaking sqm logging once that limit was reached...

to remove the two-script limit;
cat /usr/libexec-off/collectd/sqm_collectd.sh-OFFICIAL2021 > /usr/libexec/collectd/sqm_collectd.sh

i'm not sure which is the lesser of two evils... heaps of background processes or sqm logging stopping... perhaps a cron workaround restarting luci_statistics every 24 hours might be better... ( which should help in the interim regardless of whether my fix is implemented ) or something a teensy bit fancier that finds all the zombies and ends those...
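
( untested sketch of that cron workaround... time of day is arbitrary... add the line below to /etc/crontabs/root, then run '/etc/init.d/cron restart' to pick it up; )

0 4 * * * /etc/init.d/luci_statistics restart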

the message below I don't think is related... ( but it could be? ) pretty sure this message has been occurring for around 3 months... ( and i'm not sure what it relates to )

i'll escalate this... and;

as there are not many reports of this generally... medium chance it is something build-related...


Ah that explains why I’ve not seen this crop up in previous builds, so not a regression.

I think forcing the error state here was a good call, at least it’s visible now and we can catch the preceding logs.

For now I'll disable the dns plugin, which is erroring out and causing the collectd restart.
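
(For anyone following along, something like this via uci should do it, assuming the standard luci_statistics option names:)

uci set luci_statistics.collectd_dns.enable='0'
uci commit luci_statistics
/etc/init.d/luci_statistics restart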

If I find any other state changes that trigger the respawn I'll report back.

As always, thank you


can you post the error for the dns plugin? plus the output of 'uci show luci_statistics | grep dns'

cheers :+1:

dns-plugin inclusion was marginal... kinda useful when the dnsmasq bugfixes were going on... but enabling it for all builds was probably beyond what i'd personally enable / what i'd guess the majority of users would enable...

i'll disable the 'beta' autosetup for that I think ( leave it up to the user to manually enable )... unless anyone else has input...


Ah I don’t have persistent logs at the moment and rebooted to fix the last sqm script respawn. (I’ve not gotten around to piping them out to influxdb)

I’ve just re-enabled the dns plug-in and will catch the logs for you next time I see the sqm data drop off :+1:


this may work better ( it ends previously running duplicates )...

I had issues the first time as sqm_collectd.sh runs as nobody... ( hence the 2 spawn limit )

curl -sSL "https://raw.githubusercontent.com/wulfy23/rpi4/master/utilities/sqm_collectd.sh" > /usr/libexec/collectd/sqm_collectd.sh
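
then restart collectd so the updated script takes over ( it should end the lingering duplicates on start );

/etc/init.d/collectd restart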

Nice! I'll pull this down once I've caught collectd_dns misbehaving and give it a shot :+1:


wireguard users are advised that for the 101(r16076) build, I needed to remove the 'wireguard' package as it no longer exists... ( the kmod and related packages are all still included... I think 'wireguard' was just a previous wrapper package? )

I remember reading something somewhere about it being 'in kernel' in the future, but thought that may have been from 5.10 onwards... which we are not on yet...

could be a wrapper rejig and wireguard is perfectly fine... could be an interim state...


Just have to ask, and here is probably the right place to do so. In the upcoming OpenWrt 21.02 release, is there official RPi support? And if so, does it make building and maintaining easier?

yes

stable ~ reliable... you won't get package-incompatible-with-kernel errors or have to adjust customisations against a moving target...

easier is subjective...


Finally happened again today,

Here's the output of logread -e collectd > collectd.log

Looks like there have been multiple respawns of collectd (I count 13). It seems the last respawn before the 'more than 2' logs didn't exit normally, and is missing the following:

daemon.err collectd[2751]: Exiting normally.
daemon.err collectd[2751]: collectd: Stopping 2 read threads.
daemon.err collectd[2751]: collectd: Stopping 5 write threads.

These lines preceded the prior collectd restarts, but not the one that appears to have multi-spawned sqm_collectd.sh.

Here's the output of uci show luci_statistics | grep dns

luci_statistics.collectd_dns=statistics
luci_statistics.collectd_dns.enable='1'
luci_statistics.collectd_dns.Interfaces='eth0'
luci_statistics.collectd_dns.IgnoreSources='127.0.0.1'
luci_statistics.collectd_processes.Processes='uhttpd dnsmasq dropbear'

It does seem that the errors I saw thrown previously by the dns plugin were bad timing and possibly unrelated as they're not thrown here.

Let me know if there's anything else I can dig into!

Cheers

edit: grammar...


Just pulled this down (with multiple sqm_collectd.sh's running) and restarted collectd:

sh /etc/init.d/collectd stop
sh /etc/init.d/collectd start

Can confirm this killed the other instances :+1: logging has resumed


the freifunk feed has been removed from master, I believe... this probably won't be present in newer (factory) distfeeds.conf files...

as these are carried across within sysupgrade data... users of existing builds are advised to disable the feed in '/etc/opkg/distfeeds.conf', preferably prior to any upgrades (or if you restore a backup), as it will likely bork autorestore functionality;

#src/gz openwrt_freifunk https://downloads.openwrt.org/snapshots/packages/aarch64_cortex-a72/freifunk
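
( if you'd rather not edit by hand... this one-liner comments the feed out, assuming the feed line starts exactly as shipped; )

sed -i 's|^src/gz openwrt_freifunk|#src/gz openwrt_freifunk|' /etc/opkg/distfeeds.conf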

Hi.
I've been pushing my RPi4 a little too hard with multiple jobs throughout the day and I would like to overclock it.
However, when I try to do it, it appears the frequency does not change.
Isn't it enough to set the following lines in /boot/config.txt:
over_voltage=3
arm_freq=1600

Thanks in advance


due to the various risks and complexities involved with overclocking...

on this build... the preferred method to 'overclock' (and 'underclock') is by tweaking the governor ( at runtime, within the OS )...

/root/wrt.ini

POWERPROFILE="quickest"

try the above... ( check there is not a value already assigned i.e. a line that is uncommented... I may have set 'quick' as the default recently... cannot remember )

( in which case change that line, or duplicate it and comment out the original )

after changing this setting, to apply it without rebooting run;

/bin/rpi-perftweaks.sh
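
( to confirm the governor / clock actually changed... these are standard kernel sysfs reads, nothing build-specific; )

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq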

fyi, as you're sending your collectd stuff to a server... technically you could/should do without the 'persistentlucistatistics' feature...

alas... I haven't checked/implemented a clean way to do this currently... i'd assume adding SERVICESDISABLED="persistentlucistatistics" to wrt.ini, commenting out the existing line in /etc/crontabs/root (the first line below), and then running the two commands that follow;

# 0 */6 * * * /etc/init.d/persistentlucistatistics psave
/etc/init.d/persistentlucistatistics disable
/etc/init.d/cron restart

would be the logical way to do this... but probably better to not bother for a few builds (or at all if the workaround is ok)... I may get the chance to test/tidy over the next few builds...

basically... what these do is

  • copy /tmp/rrd to persistent storage in case of power loss ( @neil1 made me aware of this condition )... prior to that feature, there was no constant 'build-related' restarting of collectd...
  • some other stuff that tries to transparently retain luci_statistics and nlbwmon data over reboots and upgrades (messy as carp but I'm a little scared to touch it...)

hoping the 'end-existing' workaround largely resolves the fundamental issue for now ...

you can disable that cron script... although the firstboot logic may put it back each upgrade...
