a good time to enable Cake; hit https://bufferbloat.libreqos.com/ and enjoy the improved latency
@Rorschach FRITZ!Box 7520 OpenWrt user here: yes, you should do that. It makes the experience noticeably better, especially when the connection is saturated with e.g. a download.
The 7520 can run Cake up to approximately 100 MBit/s downstream. More than that and the CPU saturates, but you do not have that kind of link anyway, so it does not matter to you.
I am on a 1und1 Deutsche-Telekom-resale VDSL connection with full IPv4 and IPv6 connectivity.
I use two second-hand 7520s that I swap every time there is a new OpenWrt release. That way, if I botch a new version, I can plug in the previous box and stay online.
Also, in case one of them croaks, I have a backup. That has not happened yet; they have been reliable and incredibly stable for me.
I'll post my configuration that works very well for me here so you have something to compare.
Heads up, I do not use the built-in wifi, so I do not have experience with that particular aspect.
OpenWrt 24.10, and 25.12 since 25.12.0-rc5, have been running flawlessly; I am currently on 25.12.1.
SQM (traffic shaping)
You will need to install the sqm-scripts package and enable the sqm service.
I am located very close to a VDSL endpoint and my contract is for 100 MBit/s downstream and 40 MBit/s upstream, and the connection delivers this with room to spare.
I have set the SQM maximum speeds to 96% of the contractual speeds. If the connection were slower, I would use 96% of whatever the stable sync speeds are.
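As a quick sanity check, the 96% rule can be reproduced with shell arithmetic (the 100000/40000 kbit/s inputs are the contractual speeds from above):

```shell
# SQM shaper rates at 96% of the contractual sync speeds (kbit/s)
down_sync=100000   # 100 MBit/s downstream
up_sync=40000      # 40 MBit/s upstream
echo "download=$((down_sync * 96 / 100)) upload=$((up_sync * 96 / 100))"
```

These are exactly the download/upload values that appear in the SQM configuration that follows.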
SQM parameters
root@router-openwrt:~# uci show sqm
sqm.eth1=queue
sqm.eth1.interface='pppoe-wan'
sqm.eth1.download='96000'
sqm.eth1.upload='38400'
sqm.eth1.linklayer='ethernet'
sqm.eth1.overhead='34'
sqm.eth1.qdisc_advanced='0'
sqm.eth1.verbosity='2'
sqm.eth1.debug_logging='0'
sqm.eth1.enabled='1'
sqm.eth1.qdisc='cake'
sqm.eth1.script='piece_of_cake.qos'
Interrupts distribution over CPU cores
Since OpenWrt by default seems to make CPU0 do everything, I installed the irqbalance package, which distributes interrupts across the other three CPU cores as well.
You will need to install the irqbalance package and enable the irqbalance service.
Please note: the most resource-intensive interrupts are the ones coming from the VDSL modem, and those are hard-locked to CPU0. Also, I am fairly sure that the Cake scheduler is single-threaded and needs all VDSL traffic to be handled on a single CPU core, CPU0 in this case.
Therefore I have denylisted these VDSL modem interrupts (64, 65, 66, 67) for irqbalance.
root@router-openwrt:~# uci show irqbalance
irqbalance.irqbalance=irqbalance
irqbalance.irqbalance.enabled='1'
irqbalance.irqbalance.banirq='64' '65' '66' '67'
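For reference, the same denylist expressed as an /etc/config/irqbalance file (a sketch, equivalent to the uci show output above):

```
config irqbalance 'irqbalance'
	option enabled '1'
	list banirq '64'
	list banirq '65'
	list banirq '66'
	list banirq '67'
```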
After running for some time, the interrupt counters might look like this:
Interrupts table
root@router-openwrt:~# cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
26: 823961467 48598780 72025351 93891767 GIC-0 20 Level arch_timer
29: 5 0 0 0 GIC-0 270 Level bam_dma
30: 0 0 0 0 GIC-0 239 Level bam_dma
31: 90965 472780 513687 352977 GIC-0 133 Level bam_dma
32: 3 0 0 0 GIC-0 139 Level msm_serial0
35: 0 0 0 0 PCI-MSI 0 Edge aerdrv
36: 6923 58090711 47636991 49746006 GIC-0 97 Edge c080000.ethernet:txq0
40: 6639 63069191 59351865 56551782 GIC-0 101 Edge c080000.ethernet:txq4
44: 7512 70130252 43704819 49316556 GIC-0 105 Edge c080000.ethernet:txq8
48: 13249 61580582 57039441 59515791 GIC-0 109 Edge c080000.ethernet:txq12
52: 25536 30653205 32185252 46002037 GIC-0 272 Edge c080000.ethernet:rxq0
54: 8280 65426286 23646934 47017507 GIC-0 274 Edge c080000.ethernet:rxq2
56: 10248 54830757 48748492 51195668 GIC-0 276 Edge c080000.ethernet:rxq4
58: 6338 8782407 7569692 10642531 GIC-0 278 Edge c080000.ethernet:rxq6
60: 0 0 0 0 msmgpio 42 Edge keys
61: 0 0 0 0 msmgpio 41 Edge keys
62: 0 0 0 0 msmgpio 43 Edge keys
63: 1 0 0 0 GIC-0 164 Level xhci-hcd:usb1
64: 1 0 0 0 PCI-MSI 524288 Edge PTM SL
65: 14378585 0 0 0 PCI-MSI 524289 Edge mei_cpe
66: 492743093 0 0 0 PCI-MSI 524290 Edge aca-txo
67: 496984062 0 0 0 PCI-MSI 524291 Edge aca-rxo
68: 33 0 0 0 GIC-0 200 Level ath10k_ahb
69: 33 0 0 0 GIC-0 201 Level ath10k_ahb
IPI0: 0 0 0 0 CPU wakeup interrupts
IPI1: 0 0 0 0 Timer broadcast interrupts
IPI2: 1078952 308473 336832 259956 Rescheduling interrupts
IPI3: 16802500 294937977 204834775 322062994 Function call interrupts
IPI4: 0 0 0 0 CPU stop interrupts
IPI5: 0 0 0 0 IRQ work interrupts
IPI6: 0 0 0 0 completion interrupts
Err: 0
VDSL and WAN parameters, firmware
VDSL parameters
root@router-openwrt:~# uci show network.lan
network.lan=interface
network.lan.device='br-lan'
network.lan.proto='static'
network.lan.ipaddr='(...)'
network.lan.ip6assign='60'
network.lan.netmask='(...)'
root@router-openwrt:~# uci show network.wan
network.wan=interface
network.wan.device='dsl0.7'
network.wan.proto='pppoe'
network.wan.username='(...)'
network.wan.password='(...)'
network.wan.ipv6='1'
network.wan.keepalive='8 5'
root@router-openwrt:~# uci show network.atm
network.atm=atm-bridge
network.atm.vpi='1'
network.atm.vci='32'
network.atm.encaps='llc'
network.atm.payload='bridged'
network.atm.nameprefix='dsl'
The VDSL modem firmware was extracted from a Draytek firmware package.
I use a /lib/firmware/vdsl.bin symlink pointing to the file; that is the default path, so no firmware path needs to be configured in UCI.
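The symlink step, sketched in a scratch directory with an empty stand-in file (on the router, the real blob lives in /lib/firmware):

```shell
# Create the default-path symlink the DSL driver looks for.
demo=$(mktemp -d) && cd "$demo"
touch xcpe_8.D.1.C.1.7_8.D.0.E.1.2.bin        # stand-in for the extracted firmware blob
ln -sf xcpe_8.D.1.C.1.7_8.D.0.E.1.2.bin vdsl.bin
readlink vdsl.bin                              # -> xcpe_8.D.1.C.1.7_8.D.0.E.1.2.bin
```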
root@router-openwrt:/lib/firmware# ls -l | grep xcpe_
lrwxrwxrwx 1 root root 32 Jul 4 2023 vdsl.bin -> xcpe_8.D.1.C.1.7_8.D.0.E.1.2.bin
-rw-r--r-- 1 root root 1010220 Jul 2 2023 xcpe_8.D.1.C.1.7_8.D.0.E.1.2.bin
(...)
root@router-openwrt:/lib/firmware# sha256sum xcpe_8.D.1.C.1.7_8.D.0.E.1.2.bin
873cb9997411eb47c26060df7033d4dc8aa90c073166fd55b40d909ba49140dc xcpe_8.D.1.C.1.7_8.D.0.E.1.2.bin
Snapshot of VDSL state
root@router-openwrt:~# ubus call dsl metrics
{
"api_version": "4.23.1",
"firmware_version": "8.13.1.12.1.7",
"chipset": "Lantiq-VRX500",
"driver_version": "1.11.1",
"state": "Showtime with TC-Layer sync",
"state_num": 7,
"up": true,
"uptime": 506027,
"atu_c": {
(...)
"vendor": "Broadcom 193.144",
(...)
},
"power_state": "L0 - Synchronized",
"power_state_num": 0,
"xtse": [
(...)
],
"annex": "B",
"standard": "G.993.2",
"profile": "17a",
"mode": "G.993.2 (VDSL2, Profile 17a, with down- and upstream vectoring)",
"upstream": {
"vector": true,
"trellis": true,
"bitswap": true,
"retx": true,
"virtual_noise": false,
"ra_mode": "At initialization",
"ra_mode_num": 1,
"interleave_delay": 0,
"inp": 44.000000,
"data_rate": 46720000,
"latn": 1.700000,
"satn": 1.400000,
"snr": 12.200000,
"actatp": -4.000000,
"attndr": 54410000,
"mineftr": 46351000
},
"downstream": {
"vector": true,
"trellis": true,
"bitswap": true,
"retx": true,
"virtual_noise": true,
"ra_mode": "At initialization",
"ra_mode_num": 1,
"interleave_delay": 160,
"inp": 72.000000,
"data_rate": 116790000,
"latn": 4.500000,
"satn": 4.500000,
"snr": 11.900000,
"actatp": 8.400000,
"attndr": 138815016,
"mineftr": 116546000
},
(...)
}
Good luck and have fun.
Hi @Eomanis, I bought two used 7520s for the same reason and set them up similarly. Your settings are very interesting; some of them are new to me and I still need to learn about them. Thanks, this will help me a lot.
So did I. But I left out the ATM stuff as PTM is used with VDSL here.
Still playing around with MTU/MRU to get 1500 out in the end. Got up to 1496 from 1492.
If someone can suggest good defaults to make use of RFC 4638: what is the minimal configuration to get 1512 on the wire, and 1500 from PPP?
That is 1500, plus 8 for the PPPoE overhead, plus 4 for a VLAN tag.
Support was added in 2024 with https://github.com/openwrt/openwrt/pull/16856
And there was a fix relevant to bridging in 2025 with https://github.com/openwrt/openwrt/pull/21045
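The MTU arithmetic from above as a shell sketch:

```shell
# RFC 4638 baby jumbo frames:
#   PPP payload 1500
# + 8 bytes PPPoE/PPP header -> MTU needed on dsl0
# + 4 bytes 802.1Q VLAN tag  -> MTU needed on the wire
ppp_mtu=1500
echo "dsl0=$((ppp_mtu + 8)) wire=$((ppp_mtu + 8 + 4))"
```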
Me too. It seems to be a common strategy with the 7530, because it is cheap, accessible, and OpenWrt supports its DSL modem.
Well, the biggest DSL operator in Germany decided against RFC 4638 baby jumbo frames, so there is very little first-hand experience from this side of the channel, but @bill888 might know?
config device
option name 'dsl0'
option macaddr 'EDITED'
option mtu '1508'
config device
option name 'dsl0.35'
option type '8021q'
option ifname 'dsl0'
option vid '35'
config device
option name 'br-wan'
option type 'bridge'
option ipv6 '0'
list ports 'dsl0.35'
list ports 'lan1.9'
config device
option type '8021q'
option ifname 'lan1'
option vid '9'
option name 'lan1.9'
config interface 'brwan'
option proto 'none'
option device 'br-wan'
option mtu '1508'
With this configuration, the interface MTU values are:
lan1.9 has MTU 1508
eth0 has MTU 1512
dsl0.35 has MTU 1508
That was the modem side; on the router side, my wan.9 has MTU 1508.
PPPoE should pick this up and negotiate an MTU of 1500.
To check that everything is correct, the commands below should return "alive":
fping -M -b 1452 ipv6.google.com
fping -M -b 1472 ipv4.google.com
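The payload sizes in those fping commands are the 1500-byte MTU minus the IP and ICMP headers; as shell arithmetic:

```shell
mtu=1500
# IPv4: 20-byte IPv4 header + 8-byte ICMP header
# IPv6: 40-byte IPv6 header + 8-byte ICMPv6 header
echo "ipv4=$((mtu - 20 - 8)) ipv6=$((mtu - 40 - 8))"
```

If either command reports anything other than alive, the path is not carrying full 1500-byte packets unfragmented.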
No changes have occurred over the past few days, so I think the limit has been reached. Therefore, I've run the recommended tests. Perhaps someone could take a look and assess whether a 100 Mbps plan would be possible or if there's anything I can improve.
Mmh, the attainable net data rate is currently (optimistically) estimated at 78.675/37.732 Mbps; that gives you a decent upper bound for the sync achievable when switching to a nominal 100/40 Mbps plan. (These estimates are almost always too optimistic, but you should be able to achieve something in the 70-75/30-35 Mbps range when switching, assuming your ISP actually offers that. Whether the increase in throughput justifies the increase in cost is something only you can decide, especially since with a higher sync you will have less SNR margin and hence less resilience against noise, so expect retransmissions to increase, which also results in higher jitter.)
BTW, 63/23 is about the upper limit of the allowed sync speed for DSL 50/20, so you have indeed maxed out.
The bufferbloat test indicates that you still face a noticeable amount of bufferbloat. Have you started using sqm-scripts? If yes, maybe post the output of cat /etc/config/sqm and tc -s qdisc.
BTW, your ISP is nominally 1&1 (if I recall correctly), but the libreqos test reports AS2230/Telekom; that means 1&1/Versatel has no fibers of its own to your BNG location, so you get Telekom internet access and 1&1 just sends you the invoice... which in turn means you are at the receiving end of Deutsche Telekom's peculiar peering policy (visit https://netzbremse.de for details). That said, not every Telekom customer is actually affected by this.
Unfortunately, I haven't had time to look into SQM in more detail yet, as @Eomanis suggested. But I think it would be a good idea. Maybe I can even negotiate a DSL100 upgrade at no extra cost. 50 and 100 are currently the same price. It can't hurt, we'll see.
Maybe do so... SQM should make your 50/20 link considerably more usable, as larger data transfers to/from the internet will not degrade the perceived quality of concurrent interactive use as much...
As I indicated above, you will trade in some SNR margin for a higher sync, and that will make your link more susceptible to noise interference... but if there are no noise sources, that will remain mostly theoretical.
It seems to look a little better.
root@OpenWrt:~# uci show sqm
sqm.eth1=queue
sqm.eth1.enabled='1'
sqm.eth1.interface='pppoe-wan'
sqm.eth1.download='61000'
sqm.eth1.upload='22000'
sqm.eth1.qdisc='cake'
sqm.eth1.script='piece_of_cake.qos'
sqm.eth1.linklayer='ethernet'
sqm.eth1.use_mq='0'
sqm.eth1.debug_logging='0'
sqm.eth1.verbosity='5'
sqm.eth1.overhead='34'
61000 seems a bit high; you see clear latency increases during all phases with download traffic. Try 55000 for a test and see whether that results in better latency under load. (You then still need to decide whether you prefer somewhat higher peak throughput or lower latency under load; your network, your rules...)
That's another way to do it, and perhaps the compromise lies somewhere in between. Thank you for the suggestions.
Is overhead 34 ok?
Yes, I will post a few things to optionally add to your /etc/config/sqm later...
Here are my recommendations for /etc/config/sqm:
config queue 'eth1'
option ingress_ecn 'ECN'
option egress_ecn 'ECN'
option itarget 'auto'
option etarget 'auto'
option verbosity '5'
option qdisc 'cake'
option script 'piece_of_cake.qos'
option qdisc_advanced '1'
option squash_dscp '0'
option squash_ingress '0'
option qdisc_really_really_advanced '1'
option use_mq '0'
option eqdisc_opts 'nat dual-srchost memlimit 16mb'
option linklayer 'ethernet'
option linklayer_advanced '1'
option tcMTU '2047'
option tcTSIZE '128'
option linklayer_adaptation_mechanism 'default'
option debug_logging '1'
option iqdisc_opts 'nat dual-dsthost ingress memlimit 16mb'
option interface 'pppoe-wan'
option tcMPU '88'
option enabled '1'
option overhead '34'
option download '55000'
option upload '22000'
The important additions are:
option eqdisc_opts 'nat dual-srchost memlimit 16mb'
memlimit 16mb: increase the (worst-case) memory for cake to 16 MiB (OpenWrt's default of 4 MiB is aimed at really low-RAM routers; the FB7520 has 128 MiB and hence can allow a bit more, making for better throughput on longer-RTT transfers)
nat: look at the conntrack table to get access to the internal IP addresses
dual-srchost: first distribute egress/upload capacity equitably between all internal IP addresses (machines), then within each IP address equitably per flow.
option iqdisc_opts 'nat dual-dsthost ingress memlimit 16mb'
Similar to the egress/upload option:
dual-dsthost: distribute ingress/download traffic likewise, first by internal IP address, then by flow; together with egress' dual-srchost, this distributes capacity fairly between active machines... that way a BitTorrent client will get 100% of the capacity if no other machine is active, but only its fair share if other machines are in use. That often works very well for home networks, without requiring overly detailed configuration.
ingress: this will instruct the shaper to try to control the rate of incoming packets which for ingress results in a more robust shaper that adapts its "aggressiveness" better to the actual traffic behavior.
option tcMPU '88'
This will properly account for the minimal packet size, making the shaper more robust in those (rare) cases when there are lots of minimum-sized packets in flight.
Also, you might be able to push the egress rate up to almost 23000 (assuming that is worth it for your use case); just make sure to test this properly (23000 might be too close, but 22500 should work, and maybe even 22900). VDSL2 uses PTM encoding (every 65th octet is added by PTM for its own purposes), so out of the 23339 kbps sync only 23339 * 64/65 = 22979.9 kbps can be used for actual traffic.
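The PTM 64/65 calculation as a one-liner:

```shell
# PTM 64/65b encoding: only 64 of every 65 octets carry payload
sync_kbps=23339
awk -v s="$sync_kbps" 'BEGIN { printf "usable=%.1f kbps\n", s * 64 / 65 }'
```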