Qosify: new package for DSCP marking + cake

Just make sure you have no rules that force other DSCPs on those packets...

Say you use a rule to steer 8x8 traffic into the video class; then make sure to configure video like this:

config alias video
	option ingress +CS5
	option egress +CS5

so the CS3 marks set on 8x8 packets will not be changed... (it is the + that tells qosify to apply the remarking only if the packet's current marking is CS0).
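For illustration, a hedged sketch of how the two pieces might fit together (the port numbers here are invented, and the exact rule-file syntax may differ between qosify versions, so check the qosify README before copying this):

```
# /etc/qosify/00-custom.conf -- steer (hypothetical) 8x8 media ports into "video"
udp:3478 video

# /etc/config/qosify -- the matching class, using "+" so existing DSCP marks survive
config alias video
	option ingress +CS5
	option egress +CS5
```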

CAVEAT: All I know about qosify comes from reading this thread, so my advice might be wronger than wrong...


Hi, I use this script: https://github.com/ldir-EDB0/sqm-scripts/tree/sqmqosnfa

You will have to transfer it from your PC to the router to install it; I can't explain it any better than that.

@PerryComo1987, for this to work properly you need to install netify, netify-fwa, and manually configure the script. Please, before jumping on this ship, be ready for some reading and tinkering. I didn't send the patch to fix netify for the DSA architecture yet though, only did it for netify-fwa. So it requires a little bit of additional tinkering.


You have to choose the script that suits you best. I know some people loved dlakelan's iptables-based script; it worked great for me at first, but then it didn't. A simpler dlakelan script that I found effective was the following:

#!/bin/sh

## "atm" for old-school DSL or change to "DOCSIS" for cable modem, or "other" for everything else

LINKTYPE="ethernet"

WAN=eth0.2 # change this to your WAN device name
UPRATE=17000 #change this to your kbps upload speed
LAN=eth0.1
DOWNRATE=58000 #change this to about 80% of your download speed (in kbps)

## how many kbps of UDP upload and download do you need for your games
## across all gaming machines? 

GAMEUP=$((UPRATE*15/100+400))
GAMEDOWN=$((DOWNRATE*15/100+400))

## set this to "pfifo" or if you want to differentiate between game
## packets into 3 different classes you can use either "drr" or "qfq"
## be aware not all machines will have drr or qfq available

gameqdisc="pfifo"

GAMINGIP="192.168.2.167" ## change this



cat <<EOF

This script prioritizes the UDP packets from / to a set of gaming
machines into a real-time HFSC queue with guaranteed total bandwidth 

Based on your settings:

Game upload guarantee = $GAMEUP kbps
Game download guarantee = $GAMEDOWN kbps

Download direction only works if you install this on a *wired* router
and there is a separate AP wired into your network, because otherwise
there are multiple parallel queues for traffic to leave your router
heading to the LAN.

Based on your link total bandwidth, the **minimum** amount of jitter
you should expect in your network is about:

UP = $(((1500*8)*3/UPRATE)) ms

DOWN = $(((1500*8)*3/DOWNRATE)) ms

In order to get lower minimum jitter you must upgrade the speed of
your link, no queuing system can help.

Please note for your display rate that:

at 30Hz, one on screen frame lasts:   33.3 ms
at 60Hz, one on screen frame lasts:   16.6 ms
at 144Hz, one on screen frame lasts:   6.9 ms

This means the typical gamer is sensitive to as little as on the order
of 5ms of jitter. To get 5ms minimum jitter you should have bandwidth
in each direction of at least:

$((1500*8*3/5)) kbps

The queue system can ONLY control bandwidth and jitter in the link
between your router and the VERY FIRST device in the ISP
network. Typically you will have 5 to 10 devices between your router
and your gaming server, any of those can have variable delay and ruin
your gaming, and there is NOTHING that your router can do about it.

EOF




setqdisc () {
DEV=$1
RATE=$2
OH=26
MTU=1500
highrate=$((RATE*90/100))
lowrate=$((RATE*10/100))
gamerate=$3
useqdisc=$4


tc qdisc del dev "$DEV" root

case $LINKTYPE in
    "atm")
	tc qdisc replace dev "$DEV" handle 1: root stab mtu 2047 tsize 512 mpu 68 overhead ${OH} linklayer atm hfsc default 3
	;;
    "DOCSIS")
	tc qdisc replace dev $DEV stab overhead 25 linklayer ethernet handle 1: root hfsc default 3
	;;
    *)
	tc qdisc replace dev $DEV stab overhead 40 linklayer ethernet handle 1: root hfsc default 3
	;;
esac
     



#limit the link overall:
tc class add dev "$DEV" parent 1: classid 1:1 hfsc ls m2 "${RATE}kbit" ul m2 "${RATE}kbit"

# high prio class
tc class add dev "$DEV" parent 1:1 classid 1:2 hfsc rt m1 "${highrate}kbit" d 80ms m2 "${gamerate}kbit"

# other prio class
tc class add dev "$DEV" parent 1:1 classid 1:3 hfsc ls m1 "${lowrate}kbit" d 80ms m2 "${highrate}kbit"


## set this to "drr" or "qfq" to differentiate between different game
## packets, or use "pfifo" to treat all game packets equally

REDMIN=$((gamerate*30/8)) #30 ms of data
REDMAX=$((gamerate*200/8)) #200ms of data

case $useqdisc in
    "drr")
	tc qdisc add dev "$DEV" parent 1:2 handle 2:0 drr
	tc class add dev "$DEV" parent 2:0 classid 2:1 drr quantum 8000
	tc qdisc add dev "$DEV" parent 2:1 handle 10: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:2 drr quantum 4000
	tc qdisc add dev "$DEV" parent 2:2 handle 20: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:3 drr quantum 1000
	tc qdisc add dev "$DEV" parent 2:3 handle 30: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	## with this send high priority game packets to 10:, medium to 20:, normal to 30:
	## games will not starve but be given relative importance based on the quantum parameter
    ;;

    "qfq")
	tc qdisc add dev "$DEV" parent 1:2 handle 2:0 qfq
	tc class add dev "$DEV" parent 2:0 classid 2:1 qfq weight 8000
	tc qdisc add dev "$DEV" parent 2:1 handle 10: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:2 qfq weight 4000
	tc qdisc add dev "$DEV" parent 2:2 handle 20: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:3 qfq weight 1000
	tc qdisc add dev "$DEV" parent 2:3 handle 30: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	## with this send high priority game packets to 10:, medium to 20:, normal to 30:
	## games will not starve but be given relative importance based on the weight parameter

    ;;

    *)
	PFIFOLEN=$((1 + 40*RATE/(MTU*8))) # at least 1 packet, plus 40ms worth of additional packets
	tc qdisc add dev "$DEV" parent 1:2 handle 10: pfifo limit $PFIFOLEN
	## send game packets to 10:, they're all treated the same
	
    ;;
esac

if [ $((MTU * 8 * 10 / RATE > 50)) -eq 1 ]; then ## if one MTU packet takes more than 5ms
    echo "adding PIE qdisc for non-game traffic due to slow link"
    tc qdisc add dev "$DEV" parent 1:3 handle 3: pie limit  $((RATE * 200 / (MTU * 8))) target 80ms ecn tupdate 40ms bytemode
else ## we can have queues with multiple packets without major delays, fair queuing is more meaningful
    echo "adding fq_codel qdisc for non-game traffic due to fast link"
    tc qdisc add dev "$DEV" parent 1:3 handle 3: fq_codel limit $((RATE * 200 / (MTU * 8))) quantum $((MTU * 2))
fi

}


setqdisc $WAN $UPRATE $GAMEUP $gameqdisc

## comment this out if you do not want to shape the download direction via the LAN output
setqdisc $LAN $DOWNRATE $GAMEDOWN $gameqdisc

## we want to classify packets, so use these rules

cat <<EOF

We are going to add classification rules via iptables to the
POSTROUTING chain. You should actually read and ensure that these
rules make sense in your firewall before running this script. 

Continue? (type y or n and then RETURN/ENTER)
EOF

read -r cont

if [ "$cont" = "y" ]; then

    iptables -t mangle -F POSTROUTING
    iptables -t mangle -A POSTROUTING -j CLASSIFY --set-class 1:3 # default everything to 1:3,  the "non-game" qdisc
    if [ "$gameqdisc" = "pfifo" ]; then
	iptables -t mangle -A POSTROUTING -p udp -s ${GAMINGIP} -j CLASSIFY --set-class 1:2
	iptables -t mangle -A POSTROUTING -p udp -d ${GAMINGIP} -j CLASSIFY --set-class 1:2
    else
	echo "YOU MUST PLACE CLASSIFIERS FOR YOUR GAME TRAFFIC HERE"
	echo "SEND TRAFFIC TO 2:1 (high) or 2:2 (medium) or 2:3 (normal)"
    fi
else
    cat <<EOF
Check the rules and come back when you're ready.
EOF
fi

echo "DONE!"

tc -s qdisc

His scripts contain quite a number of insights, including the script where, on very slow links, he reduces the MTU/MSS so that the transmission time of a single maximally-sized packet does not hog the link for too long... certainly worth looking at.
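To make the script's arithmetic concrete, here is a standalone sketch (plain POSIX sh, reusing the example rates from the script above) that reproduces the bandwidth-guarantee, jitter, and queue-sizing calculations without touching tc:

```shell
#!/bin/sh
# Example rates from the script above (kbps)
UPRATE=17000
DOWNRATE=58000
MTU=1500

# Game bandwidth guarantees: 15% of the link plus a fixed 400 kbps floor
GAMEUP=$((UPRATE*15/100+400))        # 17000*15/100+400 = 2950 kbps
GAMEDOWN=$((DOWNRATE*15/100+400))    # 58000*15/100+400 = 9100 kbps

# Minimum jitter estimate: serialization time of ~3 full-MTU packets
JITTER_UP=$(((MTU*8)*3/UPRATE))      # 36000/17000 = 2 ms (integer math)

# Bandwidth needed for <= 5 ms of serialization-induced jitter
MIN_RATE_5MS=$((MTU*8*3/5))          # 7200 kbps

# pfifo depth: 1 packet plus 40 ms worth of packets at the link rate
PFIFOLEN=$((1 + 40*UPRATE/(MTU*8)))  # 1 + 680000/12000 = 57 packets

# RED thresholds for the per-class game qdiscs: 30 ms and 200 ms of data, in bytes
REDMIN=$((GAMEUP*30/8))              # 11062 bytes
REDMAX=$((GAMEUP*200/8))             # 73750 bytes

echo "GAMEUP=$GAMEUP GAMEDOWN=$GAMEDOWN JITTER_UP=${JITTER_UP}ms PFIFOLEN=$PFIFOLEN REDMIN=$REDMIN REDMAX=$REDMAX"
```

Running the numbers this way before touching tc makes it easy to spot, for example, that on slow links the integer jitter estimate rounds down to 0 ms, even though the real serialization delay is not zero.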

BTW... I wanted to share my noobness with you guys... wasted 2 hours on this damn bufferbloat not realizing Wireshark was causing it..... arghhhhhhh

if [ "$cont" = "y" ]; then

    nft flush chain inet fw4 mangle_postrouting
    nft add rule inet fw4 mangle_postrouting meta priority set 1:3 counter # default everything to 1:3, the "non-game" qdisc
    if [ "$gameqdisc" = "pfifo" ]; then
	nft add rule inet fw4 mangle_postrouting ip protocol udp ip saddr ${GAMINGIP} counter meta priority set 1:2
	nft add rule inet fw4 mangle_postrouting ip protocol udp ip daddr ${GAMINGIP} counter meta priority set 1:2

I used that script and just changed these lines to work with nftables; it seems to work great.
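For anyone wanting the whole converted block in one place, here is a sketch that just prints the nft commands so they can be reviewed before being piped to sh (it assumes fw4's default `inet fw4` table and `mangle_postrouting` chain, as in the posts above):

```shell
#!/bin/sh
# Sketch: emit the nftables equivalents of the script's iptables CLASSIFY rules.
# Assumes the fw4 table "inet fw4" and its "mangle_postrouting" chain exist.
GAMINGIP="192.168.2.167"
gameqdisc="pfifo"

emit_rules() {
    echo "nft flush chain inet fw4 mangle_postrouting"
    # default everything to 1:3, the "non-game" qdisc
    echo "nft add rule inet fw4 mangle_postrouting meta priority set 1:3 counter"
    if [ "$gameqdisc" = "pfifo" ]; then
        # send UDP from/to the gaming machine to the high-priority class 1:2
        echo "nft add rule inet fw4 mangle_postrouting ip protocol udp ip saddr $GAMINGIP counter meta priority set 1:2"
        echo "nft add rule inet fw4 mangle_postrouting ip protocol udp ip daddr $GAMINGIP counter meta priority set 1:2"
    fi
}

emit_rules   # inspect the output, then run: emit_rules | sh
```

In nftables, `meta priority set` writes the tc classid, which is what iptables' `-j CLASSIFY --set-class` did.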


I'm glad it works for you :wink:

You can make it like this, I think:


#!/bin/sh

## "atm" for old-school DSL or change to "DOCSIS" for cable modem, or "other" for everything else

LINKTYPE="ethernet"

WAN=wab # change this to your WAN device name
UPRATE=17000 #change this to your kbps upload speed
LAN=br-lan
DOWNRATE=58000 #change this to about 80% of your download speed (in kbps)

## how many kbps of UDP upload and download do you need for your games
## across all gaming machines? 

GAMEUP=$((UPRATE*15/100+400))
GAMEDOWN=$((DOWNRATE*15/100+400))

## set this to "pfifo" or if you want to differentiate between game
## packets into 3 different classes you can use either "drr" or "qfq"
## be aware not all machines will have drr or qfq available

gameqdisc="pfifo"

GAMINGIP="192.168.2.160" ## change this



cat <<EOF

This script prioritizes the UDP packets from / to a set of gaming
machines into a real-time HFSC queue with guaranteed total bandwidth 

Based on your settings:

Game upload guarantee = $GAMEUP kbps
Game download guarantee = $GAMEDOWN kbps

Download direction only works if you install this on a *wired* router
and there is a separate AP wired into your network, because otherwise
there are multiple parallel queues for traffic to leave your router
heading to the LAN.

Based on your link total bandwidth, the **minimum** amount of jitter
you should expect in your network is about:

UP = $(((1500*8)*3/UPRATE)) ms

DOWN = $(((1500*8)*3/DOWNRATE)) ms

In order to get lower minimum jitter you must upgrade the speed of
your link, no queuing system can help.

Please note for your display rate that:

at 30Hz, one on screen frame lasts:   33.3 ms
at 60Hz, one on screen frame lasts:   16.6 ms
at 144Hz, one on screen frame lasts:   6.9 ms

This means the typical gamer is sensitive to as little as on the order
of 5ms of jitter. To get 5ms minimum jitter you should have bandwidth
in each direction of at least:

$((1500*8*3/5)) kbps

The queue system can ONLY control bandwidth and jitter in the link
between your router and the VERY FIRST device in the ISP
network. Typically you will have 5 to 10 devices between your router
and your gaming server, any of those can have variable delay and ruin
your gaming, and there is NOTHING that your router can do about it.

EOF




setqdisc () {
DEV=$1
RATE=$2
OH=26
MTU=1500
highrate=$((RATE*90/100))
lowrate=$((RATE*10/100))
gamerate=$3
useqdisc=$4


tc qdisc del dev "$DEV" root

case $LINKTYPE in
    "atm")
	tc qdisc replace dev "$DEV" handle 1: root stab mtu 2047 tsize 512 mpu 68 overhead ${OH} linklayer atm hfsc default 3
	;;
    "DOCSIS")
	tc qdisc replace dev $DEV stab overhead 25 linklayer ethernet handle 1: root hfsc default 3
	;;
    *)
	tc qdisc replace dev $DEV stab overhead 40 linklayer ethernet handle 1: root hfsc default 3
	;;
esac
     



#limit the link overall:
tc class add dev "$DEV" parent 1: classid 1:1 hfsc ls m2 "${RATE}kbit" ul m2 "${RATE}kbit"

# high prio class
tc class add dev "$DEV" parent 1:1 classid 1:2 hfsc rt m1 "${highrate}kbit" d 80ms m2 "${gamerate}kbit"

# other prio class
tc class add dev "$DEV" parent 1:1 classid 1:3 hfsc ls m1 "${lowrate}kbit" d 80ms m2 "${highrate}kbit"


## set this to "drr" or "qfq" to differentiate between different game
## packets, or use "pfifo" to treat all game packets equally

REDMIN=$((gamerate*30/8)) #30 ms of data
REDMAX=$((gamerate*200/8)) #200ms of data

case $useqdisc in
    "drr")
	tc qdisc add dev "$DEV" parent 1:2 handle 2:0 drr
	tc class add dev "$DEV" parent 2:0 classid 2:1 drr quantum 8000
	tc qdisc add dev "$DEV" parent 2:1 handle 10: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:2 drr quantum 4000
	tc qdisc add dev "$DEV" parent 2:2 handle 20: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:3 drr quantum 1000
	tc qdisc add dev "$DEV" parent 2:3 handle 30: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	## with this send high priority game packets to 10:, medium to 20:, normal to 30:
	## games will not starve but be given relative importance based on the quantum parameter
    ;;

    "qfq")
	tc qdisc add dev "$DEV" parent 1:2 handle 2:0 qfq
	tc class add dev "$DEV" parent 2:0 classid 2:1 qfq weight 8000
	tc qdisc add dev "$DEV" parent 2:1 handle 10: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:2 qfq weight 4000
	tc qdisc add dev "$DEV" parent 2:2 handle 20: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:3 qfq weight 1000
	tc qdisc add dev "$DEV" parent 2:3 handle 30: red limit 150000 min $REDMIN max $REDMAX probability 1.0
	## with this send high priority game packets to 10:, medium to 20:, normal to 30:
	## games will not starve but be given relative importance based on the weight parameter

    ;;

    *)
	PFIFOLEN=$((1 + 40*RATE/(MTU*8))) # at least 1 packet, plus 40ms worth of additional packets
	tc qdisc add dev "$DEV" parent 1:2 handle 10: pfifo limit $PFIFOLEN
	## send game packets to 10:, they're all treated the same
	
    ;;
esac

if [ $((MTU * 8 * 10 / RATE > 50)) -eq 1 ]; then ## if one MTU packet takes more than 5ms
    echo "adding PIE qdisc for non-game traffic due to slow link"
    tc qdisc add dev "$DEV" parent 1:3 handle 3: pie limit  $((RATE * 200 / (MTU * 8))) target 80ms ecn tupdate 40ms bytemode
else ## we can have queues with multiple packets without major delays, fair queuing is more meaningful
    echo "adding fq_codel qdisc for non-game traffic due to fast link"
    tc qdisc add dev "$DEV" parent 1:3 handle 3: fq_codel limit $((RATE * 200 / (MTU * 8))) quantum $((MTU * 2))
fi

}


setqdisc $WAN $UPRATE $GAMEUP $gameqdisc

## comment this out if you do not want to shape the download direction via the LAN output
setqdisc $LAN $DOWNRATE $GAMEDOWN $gameqdisc

## we want to classify packets, so use these rules

cat <<EOF

We are going to add classification rules via nftables to the
mangle_postrouting chain. You should actually read and ensure that these
rules make sense in your firewall before running this script. 

Continue? (type y or n and then RETURN/ENTER)
EOF

read -r cont

if [ "$cont" = "y" ]; then

    nft flush chain inet fw4 mangle_postrouting
    nft add rule inet fw4 mangle_postrouting meta priority set 1:3 counter # default everything to 1:3,  the "non-game" qdisc
    if [ "$gameqdisc" = "pfifo" ]; then
	nft add rule inet fw4 mangle_postrouting ip protocol udp ip saddr ${GAMINGIP} counter meta priority set 1:2
	nft add rule inet fw4 mangle_postrouting ip protocol udp ip daddr ${GAMINGIP} counter meta priority set 1:2
    else
	echo "YOU MUST PLACE CLASSIFIERS FOR YOUR GAME TRAFFIC HERE"
	echo "SEND TRAFFIC TO 2:1 (high) or 2:2 (medium) or 2:3 (normal)"
    fi
else
    cat <<EOF
Check the rules and come back when you're ready.
EOF
fi

echo "DONE!"

tc -s qdisc

Please @nbd can you fix this problem?

Hello, I am curious to know whether the performance of my PC Engines apu4d4 is in line with what others are experiencing. My setup is currently the following:


Hardware: PC Engines apu4d4 - BIOS v4.17.0.2
Openwrt version: Linux version 5.10.138 (pktdrop@archlinux) (x86_64-openwrt-linux-musl-gcc (OpenWrt GCC 11.3.0 r20439-a96382c1bb) 11.3.0, GNU ld (GNU Binutils) 2.39)
Qosify version: qosify - 2022-04-08-ef82defa-1
Internet connection: FTTH (GPON) 1Gbps/1Gbps (IPOE using VLANID)


  • /etc/config/qosify

config interface wan
        option name wan
        option disabled 0
        option bandwidth_up 900mbit
        option bandwidth_down 900mbit
        option overhead_type manual
        option overhead 44
        # defaults:
        option ingress 1
        option egress 1
        option mode besteffort
        option nat 1
        option host_isolate 0
        option autorate_ingress 0
        option ingress_options "rtt 150ms mpu 64"
        option egress_options "rtt 150ms mpu 64"
        option options "triple-isolate nat"

Both packet steering and irqbalance are enabled. Software flow offloading is disabled. I have set the cpu_governor to "performance" on all cores. Running several speedtests, I cannot get over ~420 Mbps in download, but I do not see the CPU that heavily loaded. Without qosify enabled, I can easily get the full 1Gbps/1Gbps with not much bufferbloat, to be honest.

Speedtest

This is what I see from top during the speedtest:

https://imgur.com/a/a11BKMF - (since I am a new user, I cannot post more than one image.)

As you can see, qosify does its job perfectly in upload, but for the download the CPU cannot keep up. Is this really the maximum for this device? The CPU is an AMD GX-412TC SOC: 4 cores with a base clock of 1 GHz.

Let me know if there are other logs to look at, or please share your experience with this device.

Thanks!

Have you installed and enabled irqbalance?

Yes, it is running.

That (mode besteffort) basically disables qosify's prioritization (cake will only have one priority tier, so any work qosify does to change DSCP labels will essentially be wasted).
Could you post the output of tc -s qdisc? (There might be a qosify status command or similar that returns the same information just for the cake instances; maybe some of the qosify users could help with the exact command, please?)
Which speedtest are you using?

Just to calibrate expectations: with your settings, and assuming IPv4/TCP, the measurable goodput should be smaller than or equal to:
900 * ((1500-20-20)/(1500+44)) = 851.04 Mbps
so both the 888 and 899 look unbelievably high.
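That bound is just the shaper rate times the payload fraction of each on-the-wire packet; a quick sanity check with awk (since plain sh only does integer math):

```shell
#!/bin/sh
# Upper bound on IPv4/TCP goodput through a 900 Mbit shaper with 44 bytes
# of per-packet overhead: payload bytes / on-the-wire bytes per 1500-byte MTU.
SHAPER=900   # shaper rate, Mbps
MTU=1500
IP_HDR=20    # IPv4 header
TCP_HDR=20   # TCP header (no options)
OVERHEAD=44  # per-packet overhead configured in qosify

goodput=$(awk -v r="$SHAPER" -v m="$MTU" -v o="$OVERHEAD" -v h=$((IP_HDR+TCP_HDR)) \
    'BEGIN { printf "%.2f", r * (m - h) / (m + o) }')
echo "$goodput Mbps"   # prints "851.04 Mbps"
```

Any measured goodput above this number means the shaper is not actually in the path (or the overhead accounting differs from what was configured).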

You need to install htop and configure it to show data for all CPUs. I would bet that during the download test the CPU running the download cake instance is maxed out (as seen by its idle% being close to 0). The aggregate load percentages in busybox's top are great for getting a quick overview of a router's general load, but on multi-core routers they hide too much relevant information. Have a look at Measured goodput in speed tests with SQM is considerably lower than without in the SQM details wiki page, which has some pointers on getting multi-core load values.
Also please post cat /proc/interrupts from before and after a speedtest, to see how/if the load spreads over the available CPUs. (My gut feeling is that ~500 Mbps is probably the ceiling for your CPU; the bigger question I have is whether upload shaping is working at all, as it should have similar limits and the speedtest numbers are way too high.)

Thanks. I should get logs in a few hours when I am back home :slight_smile:

P.S.
About besteffort, yes, you are right. Left there as part of several tests. diffserv4/8 did not change much.

Sure, for your problem the diffserv mode is mostly irrelevant. Thinking this over, besteffort is probably the best for your test (or disabling qosify for the tests and just using sqm-scripts for testing, but in the end it really does not matter much which tool you use to instantiate cake).

About diffserv8/4/3, I always recommend being very sparse and judicious with prioritization. If a packet gets treated better than average (aka gets prioritized), some other packet(s) need to be treated worse than average, with the boundary case that if one tries to up-prioritize all packets, one essentially prioritizes none. But the additional cost in cake of diffservX over besteffort should be minuscule at worst.


It has been well established by others previously that the APU2 is already marginal at keeping up with plain routing at 1 GBit/s wirespeed (on Linux; on the BSDs it's even slower), without much/any headroom left for PPPoE, SQM, or VPN use. The APU4D4 doesn't improve the performance (same dated AMD GX-412TC Jaguar cores at 4x1 GHz), just the number of interfaces (4 vs 2), the type of onboard slots, and the RAM (4 GB vs 2 GB). Your expectations exceed its physical performance limits (keep in mind that networking tasks, including SQM, are inherently single-threaded, so one of your cores will be maxed out).


Installed htop and enabled Detailed CPU time (System/IO-Wait/Hard-IRQ/Soft-IRQ/Steal/Guest). I have taken the logs and recorded the speedtests. IRQ load in htop is shown in purple.

I used this qosify conf file:

config interface wan
        option name wan
        option disabled 0
        option bandwidth_up 900mbit
        option bandwidth_down 900mbit
        option overhead_type manual
        option overhead 44
        # defaults:
        option ingress 1
        option egress 1
        option mode diffserv4
        option nat 1
        option host_isolate 0
        option autorate_ingress 0
        option ingress_options "rtt 150ms mpu 64"
        option egress_options "rtt 150ms mpu 64"
        option options "triple-isolate nat"

It looks like I am a bit unlucky at this hour of the day, as sometimes my ISP SO-NET does some shaping on their side (I am getting about 650~880 Mbps). Still, it is not that bad, and I should still be able to do more than 420 Mbps. As you can see, all 4 cores are being utilized and they are not really 100% maxed out. It is as if something is not letting the CPU do more work, as there is some headroom. Not much, but there is.

I did three tests:

  • waveform bufferbloat and inonius speedtests WITHOUT qosify
  • waveform bufferbloat WITH qosify
  • inonius WITH qosify

Even though Inonius does not provide unloaded/loaded latency comparison, it has some good Japanese servers close to where I live.

I have uploaded the logs (eth1 is my WAN) and screenrecordings (mp4) here: https://a.tmp.ninja/WCYlwLfE.zip
Some screenshots as a preview:

Thanks. I agree but I have not found any recent tests with CAKE and I was curious about some comments.

Compiled a new image without qosify but using sqm-scripts and ran some quick tests this morning.

Getting different results from different speedtests. At least now I can see some cores hitting 100%. With qosify, this was not showing up, as it is running inside the eBPF VM, I guess. There should be some stats in the kernel somewhere... will poke around.

I have also attached my current sqm-scripts configuration (pretty standard).



Doing some torrent load testing, I think this shows the real performance with multiple flows:

I should expect at least ~500 Mbps most of the time. Not sure about speedtests, but yeah, I think this would be it.

500 Mbps equals:

(500 * 1000^2) / (8 * 1024^2) = 59.60 MiB/s

So you are in the right ballpark. However, you should probably set the shaper rate to 450 to 500 Mbps, because at the moment the shaper still burns CPU cycles without controlling latency all that well, no?
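For anyone wanting to reproduce the conversion above (Mbps is decimal, MiB/s is binary, hence the mixed 1000 and 1024 factors), a one-liner sketch:

```shell
#!/bin/sh
# Convert a decimal megabit-per-second rate to binary mebibytes per second,
# as in the 500 Mbps -> 59.60 MiB/s calculation above.
RATE_MBPS=500
mibs=$(awk -v r="$RATE_MBPS" 'BEGIN { printf "%.2f", r * 1000^2 / (8 * 1024^2) }')
echo "$mibs MiB/s"   # prints "59.60 MiB/s"
```

This matters when comparing a torrent client's MiB/s readout against a shaper configured in Mbit: the ~5% unit gap alone can look like missing throughput.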

Maybe instead of taking and posting screenshots, simply copy and paste the output of:

cat /etc/config/sqm

Yes, two CPUs are maxed out. Once a CPU is loaded to, say, >= 85%, you will likely experience issues with cake. Cake needs a fair amount of CPU cycles, but more importantly it needs CPU access with pretty tight deadlines, which on a loaded CPU gets tricky (assuming the load is not only cake itself).
