CAKE w/ Adaptive Bandwidth

Does this graph capture what you have in mind:

So:

- the blue line represents the shaper rate increase factor;

- the red lines represent the existing thresholds (explained below); and

- the green line represents a new threshold that has to be inferred from owd_delta_thr_ms and avg_owd_delta_thr_ms.

Assuming the existing variable names would still make sense in this method:

- owd_delta_thr_ms (the delta required on a single reflector for the detection of a delay); and

- avg_owd_delta_thr_ms (the average of the deltas taken across the reflectors required for the maximum shaper rate reduction),

any thoughts on what this new threshold might be called to fit in with the existing variable names?

Yes, except that for these values (1.04 and 0.75) the zero transition will happen closer to the left than to the right. But the idea would be to calculate the threshold value represented by the green line from the 5 values shown...

Mmmh, owd_delta_thr_ms made some sense, but already avg_owd_delta_thr_ms needs explanation... and are these not directional, so ul/dl_owd_delta_thr_ms already?

maybe rename these to:
xl_owd_delta_thr_ms: xl_owd_delta_hold_thr_ms
xl_avg_owd_delta_thr_ms: xl_owd_delta_max_dec_thr_ms
and add to this xl_owd_delta_max_inc_thr_ms, but do not expose this in the config file; calculate it on the fly... (otherwise we have too many determinants for our line)
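For the download direction that might look something like this in the config file (just a mock-up of the naming, values unchanged, ul_ analogous):

dl_owd_delta_hold_thr_ms=30.0     # was dl_owd_delta_thr_ms
dl_owd_delta_max_dec_thr_ms=60.0  # was dl_avg_owd_delta_thr_ms
# dl_owd_delta_max_inc_thr_ms would not appear here; it would be derived on
# the fly from the two thresholds above and the configured adjustment factors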

Thanks for the detailed reply. My current topology is like this:

NanoPi R2S -> Archer C6 v3 (dumb AP, wired) -> D-Link DAP-1610 repeater (dumb AP, wireless WDS), all on OpenWrt.

Considering the Archer C6 v3 has an MT7621 and the DAP-1610 has an mt76x8, they would have AQL, right?

Please run the following commands on all OpenWrt devices:

ubus call system board | grep 'model\|description'
iw list | grep 'Wiphy\|TXQS\|AIRTIME_FAIRNESS\|AQL'
iwinfo | grep 'Hardware:\|PHY name'

and post the output here; that should show us the capabilities. For more information see:

On Archer C6 v3

root@AP1:~# ubus call system board | grep 'model\|description'
        "model": "TP-Link Archer C6 v3",
                "description": "OpenWrt 23.05.3 r23809-234f1a2efa"
root@AP1:~# iw list | grep 'Wiphy\|TXQS\|AIRTIME_FAIRNESS\|AQL'
Wiphy phy1
                * [ TXQS ]: FQ-CoDel-enabled intermediate TXQs
                * [ AIRTIME_FAIRNESS ]: airtime fairness scheduling
                * [ AQL ]: Airtime Queue Limits (AQL)
Wiphy phy0
                * [ TXQS ]: FQ-CoDel-enabled intermediate TXQs
                * [ AIRTIME_FAIRNESS ]: airtime fairness scheduling
                * [ AQL ]: Airtime Queue Limits (AQL)
root@AP1:~# iwinfo | grep 'Hardware:\|PHY name'
          Hardware: 14C3:7663 14C3:7663 [MediaTek MT7613BE]
          Supports VAPs: yes  PHY name: phy1
          Hardware: 14C3:7663 14C3:7663 [MediaTek MT7613BE]
          Supports VAPs: yes  PHY name: phy1
          Hardware: 14C3:7603 14C3:7603 [MediaTek MT7603E]
          Supports VAPs: yes  PHY name: phy0

On DAP-1610 Repeater

root@AP2:~# ubus call system board | grep 'model\|description'
        "model": "D-Link DAP-1610 A1",
                "description": "OpenWrt 23.05.2 r23630-842932a63d"
iw list | grep 'Wiphy\|TXQS\|AIRTIME_FAIRNESS\|AQL'
root@AP2:~# iwinfo | grep 'Hardware:\|PHY name'
          Hardware: 14C3:7662 14C3:7662 [MediaTek MT76x2E]
          Supports VAPs: yes  PHY name: phy1
          Hardware: 14C3:7662 14C3:7662 [MediaTek MT76x2E]
          Supports VAPs: yes  PHY name: phy1
          Hardware: embedded [MediaTek MT7628]
          Supports VAPs: yes  PHY name: phy0

Sqm settings from NanoPi

root@NanoPi:~# cat /etc/config/sqm

config queue 'eth1'
        option enabled '1'
        option interface 'eth0'
        option download '140000'
        option upload '155000'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option linklayer 'ethernet'
        option debug_logging '0'
        option verbosity '5'
        option qdisc_advanced '1'
        option squash_dscp '1'
        option squash_ingress '1'
        option ingress_ecn 'ECN'
        option egress_ecn 'NOECN'
        option qdisc_really_really_advanced '1'
        option iqdisc_opts 'nat dual-dsthost ingress'
        option eqdisc_opts 'nat dual-srchost'
        option overhead '44'
        option linklayer_advanced '1'
        option tcMTU '2047'
        option tcTSIZE '128'
        option tcMPU '84'
        option linklayer_adaptation_mechanism 'default'

This looks fine, but this

looks odd... not sure what is going on there. Maybe this is caused by WDS?

The repeater is on a forked build created by a member of this community here, as the 1610 is not officially supported by OpenWrt.

Edit: well no, scratch that; I made an error copy-pasting the command output, sorry about that.

Here's the correct output:

iw list | grep 'Wiphy\|TXQS\|AIRTIME_FAIRNESS\|AQL'
Wiphy phy1
                * [ TXQS ]: FQ-CoDel-enabled intermediate TXQs
                * [ AIRTIME_FAIRNESS ]: airtime fairness scheduling
                * [ AQL ]: Airtime Queue Limits (AQL)
Wiphy phy0
                * [ TXQS ]: FQ-CoDel-enabled intermediate TXQs
                * [ AIRTIME_FAIRNESS ]: airtime fairness scheduling
                * [ AQL ]: Airtime Queue Limits (AQL)

Excellent, so this looks like your APs are as good as can be expected right now.
At least in the upload direction latency should be somewhat contained... but there is an ongoing discussion on whether the defaults for TXQS|AIRTIME_FAIRNESS|AQL are as good as possible, especially for users focusing on lower latency.
See:

for trying different AQL values, and (less conveniently)

for actually trying to change WiFi's fq_codel parameters....

But again, this will only help in the AP -> station direction and do nothing for the reverse traffic...

I wonder if you could elaborate. I have the feeling there may be more than one way to do this, and I'm not sure of the optimal one.

For example, I wondered about the lower threshold just being the middle threshold minus the difference between the middle and upper thresholds. But that would seem to give negative lower thresholds for some values.

Well, this is a relatively simple geometric problem:
avg_owd_delta_thr_ms and 0.75 define the first point on the line
owd_delta_thr_ms and 1.0 define the second point on the line
-> the line is now fully defined
Now any other point on the line has mutually dependent xxx_thr_ms and N.NN values, so if we set N.NN to 1.04, there is only one xxx_thr_ms that is actually valid. So the user-friendly thing IMHO would be not to force the user to precalculate that and put it into the config file, but just to do this internally, no?
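
To make this concrete, here is a minimal sketch of deriving that third threshold internally, using the x1000 integer scaling the script uses elsewhere (thresholds in microseconds; the name owd_delta_max_inc_thr_us is just illustrative):

owd_delta_thr_us=30000                        # the point (30 ms, 1.0) on the line
avg_owd_delta_thr_us=60000                    # the point (60 ms, 0.75) on the line
shaper_rate_max_adjust_down_bufferbloat=750   # 0.75 scaled by 1000
shaper_rate_adjust_up_load_high=1040          # 1.04 scaled by 1000

# extend the line through the two known points up to where the factor reaches 1.04
((
	inc_numerator=(shaper_rate_adjust_up_load_high-1000)*(avg_owd_delta_thr_us-owd_delta_thr_us),
	inc_denominator=(shaper_rate_max_adjust_down_bufferbloat-1000),
	owd_delta_max_inc_thr_us=owd_delta_thr_us+inc_numerator/inc_denominator
))

echo "${owd_delta_max_inc_thr_us}" # 25200, i.e. 25.2 ms for the defaults above

So with the default 30/60 ms thresholds and the 1.04/0.75 factors, the inferred threshold lands at about 25.2 ms, much closer to the middle threshold than the upper one is, matching the "closer to the left" observation above.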

Well, if the line has a +/- 45 degree angle then owd_delta_thr_ms is going to sit in the middle between the other two threshold values, but for any other angle not so much?
As I mentioned above, if you look at this as a geometry problem things become obvious pretty quickly, no?

EDIT: another solution is not to look at this as a single line, but as two lines meeting at the point (owd_delta_thr_ms, 1.0); then each 'segment' can have its own independent slope...
IMHO that becomes pretty complicated quickly and I am not sure that complexity is merited... the one continuous straight line model can be implemented with the numbers we already have in the config file... so maybe start with that?

Many thanks @moeller0. So changing the AQL values will work in tandem with sqm, right?

I don't know how to incorporate a diff patch, so I'll stay away from modifying fq_codel parameters.

Not really. AQL is related to the built-in fq_codel/airtime fairness code in the OpenWrt/Linux network stack. This is related to sqm, but not the same thing. The reason IMHO is that to do a good job for WiFi the AQM/scheduler really needs to live close to the radio and needs to know which station a packet is addressed to. Changing the AQL values should affect how much data is queued up in the driver/firmware blob; this will help but will not be enough. But then, IIUC, every driver offering AQL will also offer TXQS and AIRTIME_FAIRNESS.
But as I probably mentioned multiple times, this is only effective for traffic in the AP to station direction (typically that is the internet download direction). In the other direction, at least for typical WiFi 4/5/6(?), the stations are responsible for their own transmit opportunity acquisition and queueing, and if a station insists on too large queues it will suffer from higher bufferbloat in the upload direction... but the correct place to fix that is in the stations themselves...

Thanks again for the in-depth reply @moeller0

So just to re-cap, at the moment we have:

# owd delta threshold in ms is the extent of OWD increase to classify as a delay
# these are automatically adjusted based on maximum on the wire packet size
# (adjustment significant at sub 12Mbit/s rates, else negligible)
dl_owd_delta_thr_ms=30.0 # (milliseconds)
ul_owd_delta_thr_ms=30.0 # (milliseconds)

# average owd delta threshold in ms at which maximum adjust_down_bufferbloat is applied
dl_avg_owd_delta_thr_ms=60.0 # (milliseconds)
ul_avg_owd_delta_thr_ms=60.0 # (milliseconds)

# rate adjustment parameters
# shaper rate is adjusted by a maximum of shaper_rate_max_adjust_down_bufferbloat on detection of bufferbloat
# and this is scaled by the average delta owd / average owd delta threshold
# otherwise shaper rate is adjusted up on load high, and down on load idle or low
shaper_rate_min_adjust_down_bufferbloat=0.99    # how rapidly to reduce shaper rate upon detection of bufferbloat (min reduction)
shaper_rate_max_adjust_down_bufferbloat=0.75	# how rapidly to reduce shaper rate upon detection of bufferbloat (max reduction)
shaper_rate_adjust_up_load_high=1.04		# how rapidly to increase shaper rate upon high load detected
shaper_rate_adjust_down_load_low=0.99		# how rapidly to return down to base shaper rate upon idle or low load detected
shaper_rate_adjust_up_load_low=1.01		# how rapidly to return up to base shaper rate upon idle or low load detected

And we calculate compensated versions of the thresholds:

compensated_owd_delta_thr_us[dl]=dl_owd_delta_thr_us + dl_compensation_us,
compensated_owd_delta_thr_us[ul]=ul_owd_delta_thr_us + ul_compensation_us,

compensated_avg_owd_delta_thr_us[dl]=dl_avg_owd_delta_thr_us + dl_compensation_us,
compensated_avg_owd_delta_thr_us[ul]=ul_avg_owd_delta_thr_us + ul_compensation_us,

For each reflector, we determine whether the OWD delta exceeds the compensated_owd_delta_thr_us, and then we classify bufferbloat as follows:

bufferbloat_detected[dl] = sum_dl_delays >= bufferbloat_detection_thr ? 1 : 0,
bufferbloat_detected[ul] = sum_ul_delays >= bufferbloat_detection_thr ? 1 : 0,

And then, in terms of adjusting the shaper rate, we use:

# bufferbloat detected, so decrease the rate providing not inside bufferbloat refractory period
*bb*)
	if (( t_start_us > (t_last_bufferbloat_us[${direction}]+bufferbloat_refractory_period_us) ))
	then
		if (( compensated_avg_owd_delta_thr_us[${direction}] <= compensated_owd_delta_thr_us[${direction}] ))
		then
			shaper_rate_adjust_down_bufferbloat_factor=1000
		elif (( (avg_owd_delta_us[${direction}]-compensated_owd_delta_thr_us[${direction}]) > 0 ))
		then
			((
				shaper_rate_adjust_down_bufferbloat_factor=1000*(avg_owd_delta_us[${direction}]-compensated_owd_delta_thr_us[${direction}])/(compensated_avg_owd_delta_thr_us[${direction}]-compensated_owd_delta_thr_us[${direction}]),
				shaper_rate_adjust_down_bufferbloat_factor > 1000 && (shaper_rate_adjust_down_bufferbloat_factor=1000)
			))
		else
			shaper_rate_adjust_down_bufferbloat_factor=0
		fi
		((
			shaper_rate_adjust_down_bufferbloat=1000*shaper_rate_min_adjust_down_bufferbloat-shaper_rate_adjust_down_bufferbloat_factor*(shaper_rate_min_adjust_down_bufferbloat-shaper_rate_max_adjust_down_bufferbloat),
			shaper_rate_kbps[${direction}]=shaper_rate_kbps[${direction}]*shaper_rate_adjust_down_bufferbloat/1000000,
			t_last_bufferbloat_us[${direction}]=t_start_us,
			t_last_decay_us[${direction}]=t_start_us
		))
	fi
	;;

# high load, so increase rate providing not inside bufferbloat refractory period
*high*)
	if (( achieved_rate_updated[${direction}] && t_start_us > (t_last_bufferbloat_us[${direction}]+bufferbloat_refractory_period_us) ))
	then
		((
			shaper_rate_kbps[${direction}]=(shaper_rate_kbps[${direction}]*shaper_rate_adjust_up_load_high)/1000,
			achieved_rate_updated[${direction}]=0,
			t_last_decay_us[${direction}]=t_start_us
		))
	fi
	;;

That is, if bufferbloat is detected, we ensure we're not in a refractory period and then apply the shaper rate adjustment depending on where the average OWD delta sits between compensated_owd_delta_thr_us and compensated_avg_owd_delta_thr_us.

Or, on high load (without bufferbloat), we check that we've not already updated since the last load update, ensure we're not inside a bufferbloat refractory period, and simply increase by our shaper_rate_adjust_up_load_high (1.04) factor.
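
As a sanity check on the *bb*) arithmetic above, here is a tiny standalone worked example (compensation omitted, an assumed average OWD delta of 45 ms, and a made-up 100 Mbit/s shaper rate), using the same x1000 scaling:

avg_owd_delta_us=45000
compensated_owd_delta_thr_us=30000
compensated_avg_owd_delta_thr_us=60000
shaper_rate_min_adjust_down_bufferbloat=990   # 0.99 scaled by 1000
shaper_rate_max_adjust_down_bufferbloat=750   # 0.75 scaled by 1000
shaper_rate_kbps=100000

((
	shaper_rate_adjust_down_bufferbloat_factor=1000*(avg_owd_delta_us-compensated_owd_delta_thr_us)/(compensated_avg_owd_delta_thr_us-compensated_owd_delta_thr_us),
	shaper_rate_adjust_down_bufferbloat=1000*shaper_rate_min_adjust_down_bufferbloat-shaper_rate_adjust_down_bufferbloat_factor*(shaper_rate_min_adjust_down_bufferbloat-shaper_rate_max_adjust_down_bufferbloat),
	shaper_rate_kbps=shaper_rate_kbps*shaper_rate_adjust_down_bufferbloat/1000000
))

echo "${shaper_rate_adjust_down_bufferbloat_factor} ${shaper_rate_adjust_down_bufferbloat} ${shaper_rate_kbps}" # 500 870000 87000

So sitting halfway between the two thresholds gives a factor of 500, which lands the adjustment roughly halfway between 0.99 and 0.75 (a 0.87x reduction).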

At the moment I'm thinking of the following. Set the new thresholds as:

# rate adjustment parameters
# shaper rate is adjusted by a maximum of shaper_rate_max_adjust_down_bufferbloat on detection of bufferbloat
# and this is scaled by the average delta owd / average owd delta threshold
# otherwise shaper rate is adjusted up on load high, and down on load idle or low
shaper_rate_min_adjust_down_bufferbloat=0.99    # how rapidly to reduce shaper rate upon detection of bufferbloat (min reduction)
shaper_rate_max_adjust_down_bufferbloat=0.75	# how rapidly to reduce shaper rate upon detection of bufferbloat (max reduction)
shaper_rate_max_adjust_up_load_high=1.04		# how rapidly to increase shaper rate upon high load detected (max increase)
shaper_rate_min_adjust_up_load_high=1.0		# how rapidly to increase shaper rate upon high load detected (min increase)
shaper_rate_adjust_down_load_low=0.99		# how rapidly to return down to base shaper rate upon idle or low load detected
shaper_rate_adjust_up_load_low=1.01		# how rapidly to return up to base shaper rate upon idle or low load detected

# owd delta threshold in ms is the extent of OWD increase to classify as a delay
# these are automatically adjusted based on maximum on the wire packet size
# (adjustment significant at sub 12Mbit/s rates, else negligible)
dl_owd_delta_thr_ms=30.0 # (milliseconds)
ul_owd_delta_thr_ms=30.0 # (milliseconds)

# average owd delta threshold in ms at which maximum adjust_up_load_high is applied
dl_min_avg_owd_delta_thr_ms=20.0 # (milliseconds)
ul_min_avg_owd_delta_thr_ms=20.0 # (milliseconds)

# average owd delta threshold in ms at which maximum adjust_down_bufferbloat is applied
dl_max_avg_owd_delta_thr_ms=60.0 # (milliseconds)
ul_max_avg_owd_delta_thr_ms=60.0 # (milliseconds)

And then apply the shaper rate adjustments accordingly. How does this all seem to you?
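
As a rough sketch of what I mean by "apply the shaper rate adjustments accordingly" (hypothetical helper name, direction prefixes dropped, thresholds in microseconds, factors scaled by 1000, compensation and refractory checks omitted):

min_avg_owd_delta_thr_us=20000   # maximum increase (1.04) applies at or below this delta
owd_delta_thr_us=30000           # increase side reaches 1.0 here; decrease side starts at 0.99
max_avg_owd_delta_thr_us=60000   # maximum decrease (0.75) applies at or above this delta

shaper_rate_max_adjust_up_load_high=1040
shaper_rate_min_adjust_up_load_high=1000
shaper_rate_min_adjust_down_bufferbloat=990
shaper_rate_max_adjust_down_bufferbloat=750

get_shaper_rate_adjust() # $1 = avg_owd_delta_us, $2 = bufferbloat_detected (0|1)
{
	local avg_owd_delta_us=${1} bufferbloat_detected=${2} factor adjust

	if (( bufferbloat_detected ))
	then
		# decrease segment: owd_delta_thr -> max_avg_owd_delta_thr
		(( factor=1000*(avg_owd_delta_us-owd_delta_thr_us)/(max_avg_owd_delta_thr_us-owd_delta_thr_us) ))
		(( factor < 0 )) && factor=0
		(( factor > 1000 )) && factor=1000
		(( adjust=shaper_rate_min_adjust_down_bufferbloat-factor*(shaper_rate_min_adjust_down_bufferbloat-shaper_rate_max_adjust_down_bufferbloat)/1000 ))
	else
		# increase segment: min_avg_owd_delta_thr -> owd_delta_thr
		(( factor=1000*(avg_owd_delta_us-min_avg_owd_delta_thr_us)/(owd_delta_thr_us-min_avg_owd_delta_thr_us) ))
		(( factor < 0 )) && factor=0
		(( factor > 1000 )) && factor=1000
		(( adjust=shaper_rate_max_adjust_up_load_high-factor*(shaper_rate_max_adjust_up_load_high-shaper_rate_min_adjust_up_load_high)/1000 ))
	fi
	echo "${adjust}"
}

get_shaper_rate_adjust 10000 0 # 1040 (1.04x: delta below the 20 ms threshold)
get_shaper_rate_adjust 25000 0 # 1020 (1.02x: halfway between 20 ms and 30 ms)
get_shaper_rate_adjust 45000 1 # 870  (0.87x: halfway between 30 ms and 60 ms)

With both min parameters kept, the increase and decrease sides form two segments with their own slopes, meeting (almost) at the middle threshold, and the 45 ms case reproduces the 0.87x reduction from the current code.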

Should:

dl_min_avg_owd_delta_thr_ms=20.0 # (milliseconds)
ul_min_avg_owd_delta_thr_ms=20.0 # (milliseconds)

also be compensated as in:

compensated_avg_owd_delta_thr_us[dl]=dl_min_avg_owd_delta_thr_us + dl_compensation_us,
compensated_avg_owd_delta_thr_us[ul]=ul_min_avg_owd_delta_thr_us + ul_compensation_us,

I would guess that if we commit to a linear transition from shaper_rate_max_adjust_up_load_high to shaper_rate_max_adjust_down_bufferbloat we might as well get rid of shaper_rate_min_adjust_down_bufferbloat and shaper_rate_min_adjust_up_load_high, as these should meet at the zero transition? And even if we go the segmented route, with a different slope for the increase segment than for the decrease segment (I hope this is clear), maybe having only the two max rates but three threshold settings might do? We should add a note that
1 - shaper_rate_max_adjust_down_bufferbloat
SHOULD be larger than
shaper_rate_max_adjust_up_load_high - 1
for stability reasons.

Hmm, that's a possibility, true, but we've always erred on the side of keeping configuration options in, and I'm inclined to just leave them in for now, at least at first for trying this all out.

I can add a check for this and output a warning if it fails.
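
For reference, the check I have in mind would be along these lines (a sketch only, with a hypothetical function name, assuming the factors have already been converted to their x1000 integer representation):

validate_shaper_rate_adjust_settings()
{
	# warn if the maximum reduction step is not larger than the maximum increase step
	if (( (1000-shaper_rate_max_adjust_down_bufferbloat) <= (shaper_rate_max_adjust_up_load_high-1000) ))
	then
		printf "WARNING: (1 - shaper_rate_max_adjust_down_bufferbloat) should exceed (shaper_rate_max_adjust_up_load_high - 1) for stability.\n" >&2
	fi
}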

But I still have outstanding questions:

  • should the new lower threshold get compensated as we already do for the middle and upper thresholds?

  • does it still make sense to use the sum of delays to classify bufferbloat? In a way it would seem neater in our new regime to abandon the sum of delays exceeding a value and instead just check whether the average OWD delta exceeds the middle threshold (since we already use this for the shaper rate adjustment once inside bufferbloat). Without this, we could have weird situations where the sum of delays exceeds the value (so bufferbloat is detected), but the average is actually slightly below the middle threshold. Or we could have the average slightly above with the sum of delays not exceeding the value.

  • also, does it still make sense to apply the refractory periods as we do now: on high load, update only once per load update and not inside a bufferbloat refractory period; and on bufferbloat, update subject to the bufferbloat refractory period?

If you adjust the two values that are in the config file, the third will need to be recalculated... as long as the delta between the two explicitly set thresholds does not change, I would guess (I mean it: I did not even try to calculate this) that simply increasing all three by the same amount would be OK.

Personally I would do the detection for bloat individually for each reflector and then use a voting rule (the clear sign of access link congestion is that all traffic is delayed), and my reading of the config you posted was that sum_dl_delays was the sum of the booleans describing, for each reflector, whether the threshold was exceeded. But I think you explored a few alternatives and I am not sure my assumption here describes what the code currently does.
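
i.e. roughly this per-cycle vote (a sketch of my reading only, with illustrative names and example per-reflector deltas):

declare -a dl_owd_delta_us=(35000 41000 8000 39000) # example per-reflector OWD deltas
compensated_owd_delta_thr_us=30000
bufferbloat_detection_thr=3

sum_dl_delays=0
for owd_delta_us in "${dl_owd_delta_us[@]}"
do
	(( owd_delta_us > compensated_owd_delta_thr_us )) && (( sum_dl_delays++ ))
done
(( bufferbloat_detected = sum_dl_delays >= bufferbloat_detection_thr ? 1 : 0 ))

echo "${sum_dl_delays} ${bufferbloat_detected}" # 3 1: three of four reflectors delayed, so bufferbloat is declared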

And yes I would use the 'middle' threshold as the classifier whether to declare bufferbloat...

Yes, these IMHO serve to actually give the traffic some breathing room to adjust to the new thresholds... the point is that if the offered load exceeds the capacity we likely have some data in flight that will result in more bufferbloat detections until the deluge of packets has cleared (which only really happens after the sender has been signalled to slow down and the excess in-flight data has been dealt with). So I see no reason to abandon the refractory period for rate reduction (but I am open to data convincing me otherwise*). For rate increases I am less invested; I believe you introduced that to reduce the cost of the tc calls, and I guess that rationale also is not affected much by the changes we discussed here.

*) This is after all why doing this in bash is worth the effort, things can be prototyped rapidly... if we converge on a stable controller, IMHO it becomes much harder to keep using bash instead of changing to a real compiled language for efficiency...

Thanks so much as ever. I think I have all I need to give this a shot now.

I am hopeful that this might recover some bandwidth by reducing the bandwidth lost to the corrections associated with excessive overshoot. That is, I'm hopeful that with this new approach the overshoots are less extensive and/or less frequent. Smaller oscillations and more plain sailing.

Like surfing, we want to sort of straddle a peak rather than fall in and recover all the time, right?

What's your guess as to whether this offers an improvement or a worsening?

My best guess is that this will not change much, as we are really only changing behaviour in a small part of the latency regime, but conceptually it seems like the right thing not to exceed our latency threshold by too much.

OK, initial data is in from 2x speedtests - does this tell us much?

Data in the form of:

  • SUMMARY lines for the old and new routines; and
  • an Excel spreadsheet

available here: