eth0 vs pppoe-wan for SQM cake? Is cake-autorate useful for VDSL2? Link layer adaptation overhead 34 vs 44? 83% of bandwidth with cake - expected?

Which interface is the better one to run cake on? Is cake-autorate useful at all for VDSL? Also, why does an overhead of 44 in link layer adaptation give me slightly better results (more consistent +1, +2 or +3 ms on the bufferbloat test) compared to the 34 suggested for VDSL2 PPPoE? I have my SQM bandwidth set to slightly under 90%, yet my actual download bandwidth with SQM active is ~83%. Is this within the expected range, or is it too low?

That depends entirely on how stable your WAN speeds are in practice.

If your VDSL connection is reasonably good (which is hopefully the case these days with vectoring and outdoor DSLAMs, but may be very different on long and slow ADSL links), the achievable rates tend to be rather stable (+/- 1-2 MBit/s is pretty much within the safety margins). If that is the case and remains so throughout the day (i.e. your ISP isn't overbooked), there is no reason to use autorate.

If your WAN connection is very volatile, with a very wide span of expectable rates, autorate might be sensible.

--
Just to clarify: the question is not whether your line is good enough to sync at full speed - only whether the practically achievable throughput is more or less 'always' the same, or at least within the safety margin accounted for in your configuration, without having to sacrifice 'too much' to the variance. The absolute throughput values do not matter for this question (other than being the base for your configuration).


It's overbooked at night - it loses a few megabits - but during the day it's quite stable, within 1 to 2 Mbit of difference. Do you know if auto-rate works on top of my configured values or not? Right now I get an A+ all the time, with bloat between 1 and 3 ms consistently. Would auto-rate automatically try to keep that, or does it need manual tuning at first?

If you use auto-rate, the shaper rates are controlled by auto-rate alone - but it can be configured with 'sane' defaults.

For cake, I believe this does not matter, as it will look through the PPPoE header; for fq_codel, IIRC, pppoe-wan is better. Personally I use pppoe-wan, and I make sure I remember that when specifying the per-packet overhead.

Cake-autorate is useful if you experience large fluctuations in available capacity, so that a single static shaper limit does not work at all times, or sacrifices too much throughput some of the time.

Hard to say... per-packet overhead and gross shaper rate are not independent of each other; if, e.g., you have set the shaper rate too high, you can partially make up for that by increasing the per-packet overhead setting. So we do not know what happened:
a) your gross shaper rate is spot on, but your true per-packet overhead is > 34, or
b) your gross shaper rate is set too high, and with the 44-byte overhead setting you are effectively reducing the shaper rate

Without additional data that is hard to disambiguate - hence the recommendation, if in doubt, to slightly overestimate the per-packet overhead and underestimate the gross shaper rate.
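The coupling between the two settings is easy to see numerically. A quick sanity check (plain awk arithmetic, not an SQM command): for full-size packets over PPPoE, raising the accounted overhead from 34 to 44 bytes is roughly equivalent to lowering the shaper rate by about 0.65%.

```shell
# For a 1500-byte MTU over PPPoE, the IP packet handed to cake is 1492 bytes.
# Compare the effective rate factor with 34 vs 44 bytes of accounted overhead:
awk 'BEGIN {
    base = 1500 - 8                          # IP packet size over PPPoE
    printf "rate factor: %.4f\n", (base + 34) / (base + 44)
}'
# prints: rate factor: 0.9935
```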

90% of what, exactly - the contracted rate as promised by the ISP, or 90% of a typical (net-throughput) speed test result?

You would need to tell me a few more details... but it is true that the SQM shaper settings are gross rates, and you will measure noticeably less with typical online capacity tests.

If you post the output of:

  1. tc -s qdisc
  2. cat /etc/config/sqm
  3. ifstatus wan | grep -e device

I might be able to tell you more.

You would need manual tuning; the defaults (which are not expected to be optimal for everybody) are tailored to keep an acceptable trade-off between throughput and latency (in the 30-60 ms latency-under-load-increase range) on a very volatile long-range LTE link. So for your link you would need to change a few values, but nothing earth-shatteringly difficult or time-consuming.
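For orientation, a cake-autorate configuration sketch. The variable names below are from memory and may differ between versions, so treat them as placeholders and check the config.*.sh template shipped with the version you actually install; the rate values are illustrative, derived from the numbers discussed in this thread.

```shell
# Hypothetical excerpt of a cake-autorate config (names/values are placeholders)
dl_if=ifb4pppoe-wan                    # download side: the ifb cake runs on
ul_if=pppoe-wan                        # upload side
download_base_shaper_rate_kbps=52296   # your current static SQM setting
download_min_shaper_rate_kbps=42000    # floor autorate may drop to at night
download_max_shaper_rate_kbps=56000    # ceiling for good conditions
upload_base_shaper_rate_kbps=17408
upload_min_shaper_rate_kbps=14000
upload_max_shaper_rate_kbps=18000
```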

Regarding the rate: I set the limits to within 90% of the speed measured with SQM disabled on the bufferbloat test, as instructed by the OpenWrt site. What I'm getting is 83% of that measured speed. For example, right now it's: 62 Mbit/s measured download without SQM, a 55 Mbit/s cake limit, and 49 Mbit/s with cake enabled. Is losing ~20% of the bandwidth to get an A+ bufferbloat rating of 1 to 3 ms expected/reasonable?

So if you set the shaper to 55 Mbps and the overhead to 44 bytes, you can expect at best the following throughput for MTU-1500-sized packets over PPPoE/IP/TCP:
IPv4
55 * ((1500-8-20-20) / (1500-8+44)) = 51.99 Mbps
IPv6
55 * ((1500-8-40-20) / (1500-8+44)) = 51.28 Mbps
with TCP timestamps (default on some OS)
IPv4 + TCP timestamps
55 * ((1500-8-20-20-12) / (1500-8+44)) = 51.56 Mbps
IPv6 + TCP timestamps
55 * ((1500-8-40-20-12) / (1500-8+44)) = 50.85 Mbps
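The arithmetic above generalises; here is a small illustrative helper (plain shell + awk, not part of SQM) that reproduces those numbers:

```shell
#!/bin/sh
# Estimate best-case TCP goodput from a gross shaper rate over PPPoE.
# usage: goodput <shaper_mbps> <overhead_bytes> <ip_hdr: 20|40> [tcp_opts_bytes]
goodput() {
    awk -v rate="$1" -v oh="$2" -v ip="$3" -v opts="${4:-0}" 'BEGIN {
        mtu = 1500; pppoe = 8                    # MTU and PPPoE header size
        payload = mtu - pppoe - ip - 20 - opts   # TCP payload per packet
        wire = mtu - pppoe + oh                  # wire size cake accounts for
        printf "%.2f\n", rate * payload / wire
    }'
}

goodput 55 44 20      # IPv4              -> 51.99
goodput 55 44 40      # IPv6              -> 51.28
goodput 55 44 20 12   # IPv4 + timestamps -> 51.56
goodput 55 44 40 12   # IPv6 + timestamps -> 50.85
```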

So getting ~49 Mbps is not great, but well within the expected range.
Still, to give more detailed recommendations I would like to see the data I requested above, please.

tc -s qdisc:

root@OpenWrt:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev dsa root
 Sent 1941733372 bytes 1614254 pkt (dropped 1, overlimits 0 requeues 858)
 backlog 0b 0p requeues 858
qdisc fq_codel 0: dev dsa parent :10 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :f limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :e limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :d limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :c limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :b limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :a limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :9 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :8 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :7 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 1941731660 bytes 1614238 pkt (dropped 1, overlimits 0 requeues 858)
 backlog 0b 0p requeues 858
  maxpacket 24288 drop_overlimit 0 new_flow_count 7129 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :6 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :5 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :4 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 1712 bytes 16 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :3 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :2 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev dsa parent :1 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev eth0 root
 Sent 690845907 bytes 1499056 pkt (dropped 1, overlimits 0 requeues 1046)
 backlog 0b 0p requeues 1046
qdisc fq_codel 0: dev eth0 parent :10 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :f limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :e limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :d limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :c limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :b limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :a limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :9 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 690845907 bytes 1499056 pkt (dropped 1, overlimits 0 requeues 1046)
 backlog 0b 0p requeues 1046
  maxpacket 1494 drop_overlimit 0 new_flow_count 3814 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev eth1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth2 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth3 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth4 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8005: dev pppoe-wan root refcnt 2 bandwidth 17408Kbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms noatm overhead 44
 Sent 657771801 bytes 1491922 pkt (dropped 4731, overlimits 947188 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 228480b of 4Mb
 capacity estimate: 17408Kbit
 min/max network layer size:           40 /    1472
 min/max overhead-adjusted size:       84 /    1516
 average network hdr offset:            0

                  Tin 0
  thresh      17408Kbit
  target            5ms
  interval        100ms
  pk_delay        1.1ms
  av_delay        184us
  sp_delay          8us
  backlog            0b
  pkts          1496653
  bytes       664575588
  way_inds        36028
  way_miss         4021
  way_cols            0
  drops            4731
  marks               0
  ack_drop            0
  sp_flows            2
  bk_flows            1
  un_flows            0
  max_len         17280
  quantum           531

qdisc ingress ffff: dev pppoe-wan parent ffff:fff1 ----------------
 Sent 2022194294 bytes 1661495 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8006: dev ifb4pppoe-wan root refcnt 2 bandwidth 52296Kbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100ms noatm overhead 44
 Sent 1886880096 bytes 1569505 pkt (dropped 91990, overlimits 2202250 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 2389824b of 4Mb
 capacity estimate: 52296Kbit
 min/max network layer size:           40 /    1491
 min/max overhead-adjusted size:       84 /    1535
 average network hdr offset:            0

                  Tin 0
  thresh      52296Kbit
  target            5ms
  interval        100ms
  pk_delay       2.69ms
  av_delay        468us
  sp_delay          7us
  backlog            0b
  pkts          1661495
  bytes      2022194294
  way_inds        15269
  way_miss         2901
  way_cols            0
  drops           91990
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          1491
  quantum          1514

cat /etc/config/sqm:

config queue 'eth1'
        option enabled '1'
        option interface 'pppoe-wan'
        option download '52296'
        option upload '17408'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option linklayer 'ethernet'
        option debug_logging '0'
        option verbosity '5'
        option overhead '44'

ifstatus wan | grep -e device:

"l3_device": "pppoe-wan",
        "device": "eth0",


This will result in approximately:

DOWNLOAD IPv4
52.296 * ((1500-8-20-20) / (1500-8+44)) = 49.44 Mbps
DOWNLOAD IPv6
52.296 * ((1500-8-40-20) / (1500-8+44)) = 48.76 Mbps

UPLOAD IPv4
17.408 * ((1500-8-20-20) / (1500-8+44)) = 16.46 Mbps
UPLOAD IPv6
17.408 * ((1500-8-40-20) / (1500-8+44)) = 16.23 Mbps

You might want to consider adding the following to /etc/config/sqm:

	option qdisc_advanced '1'
	option qdisc_really_really_advanced '1'
	option eqdisc_opts 'dual-srchost nat wash overhead 44 mpu 64 memlimit 32mb'
	option iqdisc_opts 'dual-dsthost nat ingress noatm overhead 44 mpu 64 memlimit 32mb'

This will give you strict per-internal-IP isolation, so a computer running BitTorrent should not interfere much with other computers...
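If you prefer to make these changes from the command line, one possible way is via uci; this sketch assumes your SQM queue is the first/only section (@queue[0]) - adjust the index if you have several.

```shell
# Apply the advanced SQM options shown above, then restart SQM.
uci set sqm.@queue[0].qdisc_advanced='1'
uci set sqm.@queue[0].qdisc_really_really_advanced='1'
uci set sqm.@queue[0].eqdisc_opts='dual-srchost nat wash overhead 44 mpu 64 memlimit 32mb'
uci set sqm.@queue[0].iqdisc_opts='dual-dsthost nat ingress noatm overhead 44 mpu 64 memlimit 32mb'
uci commit sqm
/etc/init.d/sqm restart
```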

I'm sorry for replying late. Is there any scenario where those advanced options you suggested could affect the network negatively, since they're not on by default? Also, should I still use them even after changing from piece_of_cake to layer_cake? And could you help me with the layer_cake settings: if I put DSCP marking 46 on my game (Valorant), does that mean it'll bypass the SQM completely? Wikipedia says 46 is for telephony and doesn't get AQM - check the table at Differentiated services - Wikipedia.

Well, the main reasons why this is not enabled by default are:
a) one needs to know whether the interfaces are directed towards or away from the internet/WAN, which is hard to do generically (but easy to confirm individually)
b) the default triple-isolate tries to give similar isolation without knowing the direction of the interfaces, and while not perfect it is often good enough.

The consequence is that triple-isolate is the default, and people needing stricter isolation are instructed to manually change their configuration.

I would say so; layer_cake will in addition give you priority tiers, but you still likely want internal-IP fair sharing within each priority tier.

There is a bit of confusion around DSCPs, but the concept is actually quite simple.
DSCPs are just bit patterns in a 6-bit field present in both the IPv4 and IPv6 headers; they can be set/manipulated both by endpoints/senders and by any intermediary hop. These patterns can also be interpreted as a 6-bit integer, resulting in e.g. your value of 46.
By themselves these numbers do nothing... but they are quite convenient if you want to sort/differentiate packets into up to 64 different categories, as you can tag each packet with a category number/pattern.
The 'magic' (if there is any) comes from how you treat packets differently based on their DSCP value; the IETF calls the ready-made recipes for that per-hop behaviours, or PHBs. There are RFCs for different PHBs, and each typically comes with one or more proposed DSCP values to use for that PHB. But these are just proposals - each network still needs to implement the whole machinery to treat packets differently, and can use any DSCP value to denote any behaviour.
One small exception is WiFi's WMM (mandatory for anything above 802.11b, IIRC), which by default uses the upper 3 bits of the DSCP field to steer packets into its own 8 user priority (UP) tiers, and these in turn into its 4 access classes (AC). (Side note: OpenWrt uses a slightly more elaborate set of DSCP-to-AC mappings, and you can actually change these.)
The consequence of all this, in the context of cake's diffserv modes as used by layer_cake, is: you should look at which DSCPs cake steers into which priority tier, and then mark packets according to the tier you want them to end up in.
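As a concrete (hypothetical) example of such marking with OpenWrt's fw4, here is an nftables rule that tags a UDP port range with EF; the port range is a placeholder, not Valorant's actual ports - look those up before using anything like this.

```shell
# Sketch only: mark forwarded UDP traffic in a placeholder port range with
# DSCP EF (46), which cake's diffserv3/diffserv4 steer into the top tier.
nft add rule inet fw4 mangle_forward udp dport 7000-8000 ip dscp set ef
```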

Quick reminder: prioritisation works best when used with restraint - moving all packets into the priority tier has the same effect as not prioritising at all. So use priority marking sparingly, and always try to confirm that any marking results in noticeable and intended behaviour. And I advise against recreating elaborate priority hierarchies from recommendations found on the web, at least without confirming, ideally, each individual rule.

There are a few OpenWrt projects that help with assigning DSCP marks in clever ways, like qosify, cake-qos-simple, or DSCP classify (just search in the forum to find more information).

Tl;dr: for DSCP 46, don't bother with Wikipedia. 46, also called EF (expedited forwarding), is sorted into cake's highest priority tier for both the default diffserv3 and diffserv4; for diffserv8 it lands in the second-highest.