Rpi4 < $(community_build)

Thanks for sending through your experience...

  1. Something seems not quite right with how you are using the image...

Did you start with our community 'fac'/factory image?... if not please try to explain the chronology of

  • what you flashed ( original version, what was upgraded over it etc. etc. )
  1. kmod-tun is in the build... I use fdisk, bash and nano daily, which are also in the build... across all builds

  2. Which Broadcom NIC driver exactly ( both chipset and kmod package name if possible )? There are lots of kmods in the build, and the ones which are not are installable via opkg.

  3. All other packages you try to install should generally work... there are some build-specific quirks... but I think we should first work out what's generally going wrong with how you are using the build ( you also need to mention the build version when reporting issues )

  4. any build that says PARTUUID, if flashed from factory, will boot from usb (or mmc)... do not change cmdline to /dev... leave it as PARTUUID... if not flashed from factory ( upgraded ) you need to put in root=PARTUUID=nnnnnnnnn-0n and ensure you have no mmc in, or that the mmc has no OS on it, if your RPi EEPROM has the default boot order...
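
a rough way to sanity-check this ( not from the build docs... the device name /dev/sda and the value shown are illustrative, and it assumes the stock MBR/DOS partition table, where the PARTUUID is just the disk identifier plus the partition number ):

cat /boot/cmdline.txt                                # root= should read root=PARTUUID=xxxxxxxx-02 ( rootfs partition ), not root=/dev/...
fdisk -l /dev/sda 2>/dev/null | grep -i identifier   # e.g. Disk identifier: 0xxxxxxxxx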

Thank you for your guinea-pig willingness... I can always use more help testing and polishing the build...

peace

Hi wulfy23, love your new build with the addition of the rpi4.qos.

How do I change the DSCP classification of gaming from CS4 to CS7 using rpi4.qos? I tried looking for the setting in /usr/lib/sqm/ctinfo_4layercake_rpi4.qos but I don't know what to change without messing it up.

Thank you!

thanks killah...

I don't have user hooks for CS modifications yet, although they may be on the cards over the next month or so...

the easiest / only current hook is the GAMING_MACS ini option ( search the thread for usage )...

basically... 'gaming' is defined mostly by ipset matches... ( there are no protocol static rules within the logic ) although there are some general static rules that try to expedite bursty stuff... so without defining your mac you are at the mercy of those rules... ( while I hope the ipsets give lots of positive matching I don't game... so I have no idea how well they are hitting stuff ) and then there are cloudflare and cdn ipsets which probably match both gaming and other stuff...
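
if you want to see which ipsets the script actually created and is matching on, something like this should do ( the set names are simply whatever 'ipset -L -n' reports on your box ):

ipset -L -n | grep -i gam    # list the gaming-related set names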

to be honest... I would probably need to see how it's working in realtime in order to better tweak the script for gaming... but I will definitely keep an eye on where I'm making edits in the future and try to consolidate things to a point where there are some user hooks / ini values to set the actual CS values ( I've already done most of the variables... but they all interact so changes in one place often cascade into others and before you know it things are out of whack badly )...

edit: if that's what you are looking at doing... you may try adding the game ports to the CONTROL PORTS ( tcp / udp ) variables... it would achieve a similar result... ( assuming they are not 443 lol )
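
( purely illustrative... I don't know the exact variable names or separator off-hand, so treat the names below as hypothetical and check wrt.ini for the real CONTROL PORTS tcp / udp entries before copying anything: )

RPI4QOS_CONTROL_PORTS_UDP="3074 3478-3480"    # hypothetical name... example game ports
RPI4QOS_CONTROL_PORTS_TCP="3074"              # hypothetical name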

Thanks for the prompt reply.

Instead of placing my MAC in GAMING_MACS, I tried changing CS5 to CS7 in /usr/lib/sqm/ctinfo_4layercake_rpi4.qos and used RPI4QOS_GAMING_IPS_4 in wrt.ini. Will this accomplish the same thing?

#CS5>CS_GAMING #srconlyfornow #@+gamedevice-limitcatches@dependsonCS5issuesinthischain@00only-on-enter||CSother
sLBL="${sLBL:-gamingdevice}" #justincaseisntdefined
if ipset -L -n 2>/dev/null | grep -q "^${sLBL}4$"; then
    ipt4dscp -p udp -m set --match-set "gamingdevice4" src \
        -m dscp --dscp-class CS7 -m hashlimit --hashlimit-mode srcip,srcport,dstip,dstport \
        --hashlimit-name gameDG41 \
        --hashlimit-above 450/second --hashlimit-burst 50 \
        --hashlimit-rate-match --hashlimit-rate-interval 1 -j DSCP \
        --set-dscp-class CS2 -m comment --comment "GAMEDOWNGRADE4-CS5toCS2"
fi

Also, when I place my MAC in GAMING_MACS, what do they get classified as / which ipset do they go to?

not sure if I finished coding the GAMING_IPS ... if you put the MAC in MACS the IPs automatically get added to gamingdevice4 and gamingdevice6 which in the script would look like ${sLBL}4 ${sLBL}6 ...

iptables-save -c | grep gaming | grep set
ip6tables-save -c | grep gaming | grep set

will show you most of the relevant rules... ( and whether or not they are being hit... there is also qosdebug.sh, which you can use to check for hits if your network is quiet, and LEARNCONNECTIONS you can use to learn the ports )
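
and a quick way to confirm the MAC-derived addresses actually landed in the sets ( set names as above... your PC's addresses should show up in the members list ):

ipset list gamingdevice4    # IPv4 members
ipset list gamingdevice6    # IPv6 members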

How can I give my gaming PC the top priority?
Can I use this command and add the MAC address of the gaming PC?

yes... for a gaming PC add the MAC address to;

RPI4QOS_GAMING_MACS="aa:bb:cc:dd:ee:ff"

it is not a command... it is an INI file value...

also... it is not 'top priority'... giving any one PC 'top priority' implies you have total control of all traffic on that device ( and your highest tier free of all other traffic to assign )... not a wise move for most networks... surely you value your VoIP / DNS traffic higher than gaming chat or Windows updates on your gaming PC?

there is no such capability within the script, as things can go downhill very quickly... fwiw... if you really want to do that... the closest equivalent would probably be to remove it from GAMING_MACS...

make your ipsets persistent;

RPI4QOS_IPSETPERSIST=1

add the PC ipv4 address to the latsens ipv4(6) ipset and restart sqm;

ipset add latsens <PC-IPv4ADDRESS>
/etc/init.d/sqm restart
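
( to double-check the exact set name... it may be latsens4 / latsens6 on some builds... and to confirm the entry actually took, the address below being a placeholder: )

ipset -L -n | grep -i latsens            # list the latency-sensitive set name(s)
ipset list latsens | grep 192.168.1.50   # verify the PC's address is now a member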

and what's the correct syntax to add multiple MAC addresses?

RPI4QOS_GAMING_MACS="aa:bb:cc:dd:ee:ff, aa:bb:cc:dd:ee:ff"

is it correct?

no comma... just space separated...
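
so, for example ( the MACs below are obviously placeholders ):

RPI4QOS_GAMING_MACS="aa:bb:cc:dd:ee:ff 11:22:33:44:55:66"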

can you do me a favour also... and PM me the result of this command after a day, or after around 1 hour with several games used, or similar...

(iptables-save -c; ip6tables-save -c) | grep -iE '(gaming|game|set|burst|bytes|hashlimit)' | grep -v '0:0'

ok sure, I will PM you tomorrow

Having a small issue with the collectd daemon seeming to error, respawn, but fail to kill the previous sqm_collectd script.

I've not modified any of the packaged collectd scripts.

I noticed because the SQM data was failing to log out to influxdb using the network plugin

FW revision: 2.7.77-12-r15880

Tue Mar  2 18:33:17 2021 daemon.err collectd[8528]: utils_dns: handle_dns: rfc1035NameUnpack failed with status 3.
Tue Mar  2 18:33:17 2021 daemon.err collectd[8528]: dns plugin: pcap_loop exited with status -1.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: configfile: stat (/etc/collectd/conf.d) failed: No such file or directory
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "irq" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "entropy" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "conntrack" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "ping" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "memory" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "cpu" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "load" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "thermal" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "processes" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "interface" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "dns" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "network" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "exec" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: plugin_load: plugin "disk" successfully loaded.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: utils_taskstats: CTRL_CMD_GETFAMILY("TASKSTATS"): No such file or directory
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: utils_taskstats: get_family_id() = No such file or directory
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: processes plugin: Creating taskstats handle failed.
Tue Mar  2 18:33:36 2021 daemon.err collectd[9785]: Initialization complete, entering read-loop.
Tue Mar  2 18:33:37 2021 user.notice nobody: sqm_collectd.sh more than 2
Tue Mar  2 18:33:52 2021 user.notice nobody: sqm_collectd.sh more than 2
Tue Mar  2 18:34:07 2021 user.notice nobody: sqm_collectd.sh more than 2
Tue Mar  2 18:34:22 2021 user.notice nobody: sqm_collectd.sh more than 2

Haven't had time today to dig into it, thought I would flag as this happened twice over the last couple of days!

the collectd sqm collector script has been observed re-spawning itself many times... has been going on for probably around a month+... and I've been hoping for a general fix / more info to surface...

two weeks ago... once again via 'ps w', I saw there were around 10 of them running... so I implemented a respawn limit of two... this had the effect of breaking sqm logging once that limit was reached...

to remove the two-script limit;
cat /usr/libexec-off/collectd/sqm_collectd.sh-OFFICIAL2021 > /usr/libexec/collectd/sqm_collectd.sh

I'm not sure which is the lesser of two evils... heaps of background processes or sqm logging stopping... perhaps a cron workaround restarting luci_statistics every 24 hours might be better... ( which should help in the interim regardless of whether my fix is implemented or not ) or something a teensy bit fancier that finds all the zombies and ends those...
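
for anyone wanting that cron workaround now, a minimal sketch ( add it via 'crontab -e', which on OpenWrt edits /etc/crontabs/root... the 04:00 time is arbitrary ):

# nightly restart of the statistics / collectd stack
0 4 * * * /etc/init.d/luci_statistics restart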

the message below I don't think is related... ( but it could be? ) pretty sure this message has been occurring for around 3 months... ( and I'm not sure what it relates to )

I'll escalate this... and;

as there are not many reports... generally... medium chance it is something build-related...

Ah that explains why I’ve not seen this crop up in previous builds, so not a regression.

I think forcing the error state here was a good call, at least it’s visible now and we can catch the preceding logs.

For now I’ll disable the dns plugin which is erroring out and causing the collectd restart.

If I find any other state changes that trigger the respawn I'll report back.

As always, thank you

can you post the error for the dns plugin? + 'uci show luci_statistics | grep dns'

cheers :+1:

dns-plugin inclusion was marginal... kinda useful when the dnsmasq bugfixes were going on... but enabling it for all builds was probably beyond what I'd personally enable / what I'd guess the majority of users would enable...

I'll disable the 'beta' autosetup for that I think ( leave it up to the user to manually enable )... unless anyone else has input...
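
for reference, manually toggling it should just be the usual luci_statistics uci dance ( the section name shown is what 'uci show luci_statistics' typically reports for the dns plugin... confirm it on your build first ):

uci set luci_statistics.collectd_dns.enable='1'    # or '0' to disable
uci commit luci_statistics
/etc/init.d/luci_statistics restart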

Ah I don’t have persistent logs at the moment and rebooted to fix the last sqm script respawn. (I’ve not gotten around to piping them out to influxdb)

I’ve just re-enabled the dns plug-in and will catch the logs for you next time I see the sqm data drop off :+1:

this may work better ( it ends previously running duplicates )...

I had issues the first time as sqm_collectd.sh runs as nobody... ( hence the 2 spawn limit )

curl -sSL "https://raw.githubusercontent.com/wulfy23/rpi4/master/utilities/sqm_collectd.sh" > /usr/libexec/collectd/sqm_collectd.sh
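
after pulling it down, a quick sanity check that you actually got a script and not an error page doesn't hurt ( the restart is optional... it just makes collectd pick the new script up from a clean start ):

head -n 3 /usr/libexec/collectd/sqm_collectd.sh
/etc/init.d/luci_statistics restart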

Nice! I'll pull this down once I've caught collectd_dns misbehaving and give it a shot :+1:

WireGuard users are advised that for the 101(r16076) build, I needed to remove 'wireguard' as the package did not exist... ( the kmod and related packages are all included... I think the above was just a previous wrapper package? )

I remember reading something somewhere about it being 'in kernel' in the future, but thought that may have been from 5.10 onwards... which we are not on yet...

could be a wrapper rejig and wireguard is perfectly fine... could be an interim state...
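
to confirm what is actually present on a given build, something like:

opkg list-installed | grep -i wireguard    # should list kmod-wireguard / wireguard-tools etc.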

Just have to ask, and here is probably the right place to do so. In the upcoming OpenWrt 21.02 release, is there official RPi support? And if so, does it make building and maintaining easier?

yes

stable~reliable... you won't get package-incompatible-with-kernel errors or have to adjust customisations based on a moving target...

easier is subjective...
