Rpi4 < $(community_build)

@anon50098793: the master builds are lacking the Qosify package, but it is available on snapshot. Is there any way I can install this Qosify package onto the master firmware?


I did run pingplotter to check downtime and yes it was more than 5 minutes.


master is snapshot... there is no qosify in 'release'/21.02.x, but it should be in all master/snapshot builds... (since around r18000)

I mean the stable build (21.02). I installed the latest stable build from 29.12.2021 and tried to search for qosify in software; I couldn't find it. Am I searching for nonexistent content?


correct... change UPGRADESFLAVOUR="stable" to "master" and run master builds for qosify...

don't forget to add it to your ENABLEDSERVICES="qosify ..." list so it gets enabled next time you upgrade
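As a sketch, both wrt.ini changes can be made from the shell. The variable names are the ones quoted above, and on the router the file is /root/wrt.ini (the path mentioned later in this thread); the demo below works on a scratch copy so it is safe to try anywhere:

```shell
# Demo on a scratch copy; on the router the real file is /root/wrt.ini.
ini=/tmp/wrt.ini
printf 'UPGRADESFLAVOUR="stable"\nENABLEDSERVICES="sqm banip"\n' > "$ini"

# switch the build flavour so upgrades pull master builds (qosify lives there)
sed -i 's/^UPGRADESFLAVOUR=.*/UPGRADESFLAVOUR="master"/' "$ini"

# prepend qosify to the enabled-services list so it is re-enabled after upgrades
sed -i 's/^ENABLEDSERVICES="/ENABLEDSERVICES="qosify /' "$ini"

cat "$ini"
# -> UPGRADESFLAVOUR="master"
# -> ENABLEDSERVICES="qosify sqm banip"
```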


so... I just re-added / fixed the scavenging logic for some of these metafiles

nice one neil1, you just cut around 2+ mins from most people's upgrade times!

(also, I will take a look at the https log you provided over the next day or few, thanks!)


Switched to stable and noticed during the day that the cpu governor setting isn't sticking. I had it pinned at max speed in rc.local; the echo command is still there, but no joy?

to force the cpu to fastest speeds set

POWERPROFILE="quickest"

or you can also

echo 'PERFTWEAKS_SCRIPT="/bin/true"' >> /root/wrt.ini

to disable it entirely if you want to set your own values
(i think you mentioned irqbalance and rc.local; /bin/true would probably suit you best)
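If you take the PERFTWEAKS_SCRIPT="/bin/true" route and set your own values, a minimal rc.local sketch looks like the following. The sysfs paths are the stock Linux cpufreq layout (on the router the base is /sys/devices/system/cpu/cpufreq, needing root); the demo builds a scratch tree so the loop itself can be shown safely off the router:

```shell
# On the router: base=/sys/devices/system/cpu/cpufreq (needs root).
# Here we build a scratch tree so the loop can be demonstrated safely.
base=${CPUFREQ_BASE:-/tmp/cpufreq-demo}
mkdir -p "$base/policy0"
echo ondemand > "$base/policy0/scaling_governor"
echo 1500000  > "$base/policy0/cpuinfo_max_freq"
echo 600000   > "$base/policy0/scaling_min_freq"

# The rc.local lines: pin the governor to performance and floor the
# minimum frequency at the maximum - effectively "pinned at max speed".
for pol in "$base"/policy*; do
    echo performance > "$pol/scaling_governor"
    cat "$pol/cpuinfo_max_freq" > "$pol/scaling_min_freq"
done
```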


No worries. Managed to roll back the version and restore from a backup. Interestingly, when upgrading to latest with kernel 5.10.90 it won't respect the /boot/config.txt overclock. It's not an issue per se, more an observation; no other changes were made, and rolling back respects the overclock. Maybe a kernel change or something, not sure. Again, it doesn't affect operation, just a quirk.


ok, that's more targeted... if others using overclocking report the same... we may be able to get the information that is required to do some basic tests and possibly report upstream

i've never felt a real need to overclock (via config.txt), and SubZero's findings on 1Gb support this. although if I were running lxc/docker/heavy services I might... still don't see much difference to pinning scaling_max_freq tho... maybe someone will educate me on how it's better/differs... perhaps it takes a little load off the scaling~scheduler or something...

Pinning the frequency seems to help if using SQM. Can't put my finger on it exactly, but I feel there's a little lag when loading up the link on a 1Gbit connection and expecting the little beast to apply SQM to it. Truthfully, I've also never been able to hit max download below 1800MHz, even using the tweaks and no SQM. I get about 850Mbit, and overclocked I get up to 920Mbit (calling that max). Dunno, maybe it also depends on how many VLANs one is running and the firewall rules.

I don’t have any solid evidence, and I’m not a networking professional, I’ve just been testing snippets of things here and there. Maybe I need a complete network refresh, I was always of the opinion I had to keep my IoT devices on a separate VLAN and block those devices calling home, but maybe it’s just all a waste of time.


yeah... that sounds related/relevant...

if you share your redacted /etc/config/network and /etc/config/sqm someone may spot something obvious...

tweaks are not tested/optimised for VLANs (if that is even possible)... but it wouldn't be the first place i'd be looking... 850 seems a little off... but not too bad...

but it was good of you to test without sqm... if you can't get 923 (with SubZero's basic tips)... and overclocking raises your throughput... something does seem a little off...

Here’s my /etc/config/network with a few private details wiped:

config interface 'loopback'
        option device 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

config globals 'globals'
        option ula_prefix ''
        option packet_steering '1'

config interface 'lan'
        option proto 'static'
        option ipaddr '192.168.1.1'
        option netmask '255.255.255.0'
        option ip6assign '60'
        option device 'eth0'

config interface 'wan'
        option proto 'static'
        option device 'eth1'
        option ipaddr ''
        option netmask '255.255.255.252'
        option gateway ''
        list dns '192.168.1.1'
        list ip6addr ''
        option ip6gw ''
        option ip6prefix ''
        option hostname 'router'

config interface 'IoT'
        option proto 'static'
        option device 'eth0.20'
        option ipaddr '192.168.2.1'
        option netmask '255.255.255.0'
        list dns '192.168.2.1'

config interface 'WireGuard'
        option proto 'wireguard'
        option private_key ''
        option listen_port '51820'
        list addresses '192.168.3.1/24'

config wireguard_WireGuard
        option description 'Phone'
        option public_key ''
        option route_allowed_ips '1'
        list allowed_ips '192.168.3.3/32'

config wireguard_WireGuard
        option description 'iPad'
        list allowed_ips '192.168.3.4/32'
        option route_allowed_ips '1'
        option public_key ''

Even with SQM off, and packet steering off, no bueno on more than about 850-870mbit. No issues hitting 920mbit slightly over clocked without SQM, and can hit 900mbit using FQCodel at 2000mhz.


(don't see too much in the above, other than wg could be sapping some cpu juice, which sort of seems normal or to be expected)


please also run (and paste output)


/bin/rpi-perftweaks.sh_userfileB; rpi-support.sh | grep -A5 interface-report

Attached

Settings are stored in /etc/perftweaks.txt
############### status:  13:59:47 up  7:01,  load average: 0.00, 0.00, 0.00
############### ini: ENABLEDSERVICES="banip sqm unbound" PERFTWEAKS="default" PERFTWEAKS_SCRIPT="/bin/rpi-perftweaks.sh_SubZero" PERFTWEAKS_Gbs=1 POWERPROFILE="quick" 
IRQBALANCE:[run:0][iniMGMT:0][srv:0]
SQM_STAT: [Ssrv:1] [Srun:1]
QOSIFY_STAT: [Qsrv:0] [Qrun:0]
ETH0_IRQs: 40 41 
 40:    1688509    5477737          0          0     GICv2 189 Level     eth0
 41:     832543          0          0    1640022     GICv2 190 Level     eth0
IRQ_AFF:  1=f 10=f 11=f 12=f 13=f 18=f 2=f 21=f 22=f 25=f 3=f 32=f 33=f 4=f 40=2 41=8 47=4 48=f 5=f 6=f 7=f 8=f 9=f default_smp_affinity=f
up_threshold: df: min_freq:1500000 max_freq:1500000
### time_in_state:  1500000 2528409
############################################ steering uci
network.globals.packet_steering='1'
############################################ steering sys
eth0 DEVICE1:fd580000.ethernet DEVICE2:fd580000.ethernet IRQCPU: IRQCPUMASK: subsys: platform
/sys/class/net/eth0/queues/tx-0/xps_cpus f
/sys/class/net/eth0/queues/tx-1/xps_cpus f
/sys/class/net/eth0/queues/tx-2/xps_cpus f
/sys/class/net/eth0/queues/tx-3/xps_cpus f
/sys/class/net/eth0/queues/tx-4/xps_cpus f
/sys/class/net/eth0/queues/rx-0/rps_cpus f
eth1 DEVICE1:2-2:1.0 DEVICE2:2-2:1.0 IRQCPU: IRQCPUMASK: subsys: usb
/sys/class/net/eth1/queues/rx-0/rps_cpus f
wlan0 DEVICE1:mmc1:0001:1 DEVICE2:mmc1:0001:1 IRQCPU: IRQCPUMASK: subsys: sdio
/sys/class/net/wlan0/queues/rx-0/rps_cpus f
################################# interface-report
############### lo [ok]  yes 
############### eth0 [ok] bcmgenet yes 1000Mb/s
############### eth1 [ok] r8152 yes 1000Mb/s
############### eth0.20 [ok] 802.1Q yes 1000Mb/s
############### WireGuard [ok]   


yeah... could be a little problematic... irqbalance does not seem to be running now... but each of your eth0 interrupts is pinned to a single core...

40=2 41=8

so that's;
-cpu1(2) for interrupt1
-cpu3(4) for interrupt2

i mean... may not be a huge issue... but in general... I think my script would have left you in a better place...

overall... i'm thinking it's just some of those minor things + wireguard load...

or maybe not :robot:
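For anyone decoding those numbers: smp_affinity values are hex CPU bitmasks (bit n set means cpu n may service the interrupt), so 2 is cpu1, 8 is cpu3, and f is all four cores. A small helper sketch (hypothetical, not part of the build's scripts):

```shell
# Decode an smp_affinity-style hex mask (e.g. from /proc/irq/<n>/smp_affinity)
# into a list of CPUs: bit n set means cpu n may service the interrupt.
mask_to_cpus() {
    mask=$((0x$1)); cpu=0; out=""
    while [ "$mask" -ne 0 ]; do
        if [ $((mask & 1)) -eq 1 ]; then out="$out cpu$cpu"; fi
        mask=$((mask >> 1))
        cpu=$((cpu + 1))
    done
    echo "${out# }"
}

mask_to_cpus 2   # -> cpu1   (the "40=2" eth0 interrupt above)
mask_to_cpus 8   # -> cpu3   (the "41=8" eth0 interrupt)
mask_to_cpus f   # -> cpu0 cpu1 cpu2 cpu3
```

Conversely, writing a mask back (e.g. `echo f > /proc/irq/40/smp_affinity`, as root) is how a pin like 40=2 gets there in the first place, whether by hand or by irqbalance.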


I’d switched off irqbalance for that data; those would be interrupts that irqbalance had pinned to those cores earlier. Anyway, I’ll go with your scripts and see how we go. Interestingly, I just rebooted for the 4th time today and the overclock worked this reboot... fun and games.


Updated to 5.0.11-13.

My SQM performance is solid with the following:

  • POWERPROFILE="quickest"
  • irqbalance on
  • packet steering on

sqm config:

config queue 'eth1'
	option qdisc 'cake'
	option ingress_ecn 'ECN'
	option egress_ecn 'ECN'
	option debug_logging '0'
	option verbosity '5'
	option qdisc_advanced '1'
	option squash_dscp '1'
	option squash_ingress '1'
	option linklayer 'ethernet'
	option overhead '44'
	option qdisc_really_really_advanced '1'
	option script 'layer_cake.qos'
	option interface 'eth1'
	option enabled '1'
	option iqdisc_opts 'diffserv4 nat dual-dsthost ingress mpu 84 '
	option eqdisc_opts 'diffserv4  nat dual-srchost mpu 84'
	option download '950000'
	option upload '950000'


dude... that result is friggin outstanding!

(boy would I love to see a synchronous test on that line... but I don't know how :frowning: )

Thanks :slight_smile: The only other detail I can add is that my rpi has a heat sink on the cpu, which might provide some benefit. Temps stay around 60°C at load.
