Adding support for VRX518 (and maybe VRX320)

So, I guess it's still buggy (modem support for this Fritzbox and generally any modem that can do
supervectoring)?

Maybe it is rather your misunderstanding of the OpenWrt development process as a community project. Community members develop things and work out further details, eventually creating a merge request which other community members may test, review and refine, until it may eventually get merged into the master codebase. For the feature you are looking for, that process is simply still ongoing.

Yes, I'm new to OpenWrt (coming from pfSense) and trying to learn my way around this piece of software. Excuse me if I'm asking "wrong questions".


Is there any way to support you (as someone who is a non-developer)?

@janh I rebased and pushed my changes, updates and whatnot:

The support for the 7520 was redone according to the PR comments, which means it's now independent of the DSL support you added (so it's no longer intertwined with it).

There are probably a few cleanups I skipped, but please feel free to use that as a base whenever you get back to working on VRX518 support!


I'd like to offer a small reward for the person who manages to get it working in a performant way.
Is there anyone who would throw in some bucks as well?

@DavidDohmen
@numero53

anyone else?

I would like to add something.

Fine. Me too

Yeah, I've offered it several times before and the offer still stands.
So far there weren't any takers though.

Alright, I will chip in 40 euros.
How much would you guys like to add?

The higher the reward, the higher the incentive...

I guess the question is: what performance does it reach today? E.g. if you configure a VRX518 as a simple bridged modem under OpenWrt, will you be able to bidirectionally saturate a nominal 250/40 Mbit/s supervectoring VDSL link (profile 35b)? And what about the maximum sync rates of 292.032/46.720 Mbit/s on Deutsche Telekom TALs?
In other words, is the performance deficit in handling high-rate VDSL, or in the rest of the router functionality?

P.S.: Due to being in the Nahbereich (close to the exchange), the highest rate I am offered is 100/40, so even if I had a VRX518 modem, I could not test that myself...

As far as I remember, performance when used just as a modem was not a problem when I tested it (the modem synced at the maximum rate you mentioned, minus a few kilobits/s as usual).

However, from my previous experience with a Fritzbox 4040 with an external modem, I'd say that even without the built-in-modem, this hardware platform isn't ideal as a router at such high rates. With PPPoE connection and cake SQM, the device was really at its limits when using the wired connection. With wireless connection, it maxed out at roughly 150 Mbit/s for WAN traffic. Results without SQM or at lower rates are probably better, though.

In any case, I think getting the code cleaned up should be the first step. Maybe I'll have a look at that again, based on @dhewg's changes.

After that, the most obvious way to improve performance would be to get rid of all that copying in the modem driver. But I'm out of ideas on how to do that. Maybe someone who has experience with other Lantiq network drivers is interested in working on this? At least the general architecture seems similar to other Lantiq stuff.


Great, and bridging that over to an Ethernet port also works at the expected throughput?

I think there is a reason why AVM (and many other router makers) tend to use accelerators instead of sufficiently beefy CPUs; and since that helps save electricity, I do not want to blame them. But it makes performance parity tricky when running OpenWrt on the same hardware without using the accelerators.

+1; especially after the stellar work you did on the xrx200 generation's dsl drivers and getting them into mainline.

I’ve been using OpenWrt on my FritzBox 7530 for a while now and after a little bit of performance tweaking, I’m getting quite decent performance on my Deutsche Telekom line (current DataRate 260.074 / 42.460). Here are the key points to my config:

  • CPU frequency scaling governor set to performance on all four cores:

	echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
	echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
	echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
	echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor

  • SQM with fq_codel and simple.qos
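Note that sysfs writes like the governor setting do not survive a reboot. A minimal sketch of how one might re-apply them at boot from /etc/rc.local (assuming a four-core SoC like the ipq40xx; paths untested here):

```shell
# Sketch for /etc/rc.local: re-apply the performance governor on every
# boot, since /sys settings are lost on reboot. Assumes cores cpu0-cpu3.
for gov in /sys/devices/system/cpu/cpu[0-3]/cpufreq/scaling_governor; do
    echo performance > "$gov"
done
```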

With these settings dslreports is reporting a Triple-A rating (sometimes even an A+) and a download speed of 221 Mbit/s. Average ping latency to nearby servers is 4.5 ms.

The only thing still bothering me is that I have to disable and re-enable SQM manually after every reboot to make it work. I haven't found a solution for this yet, but it certainly has nothing to do with the kernel drivers.

Sounds like a hotplug issue?
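For context: OpenWrt runs every script in /etc/hotplug.d/iface/ whenever a logical interface changes state, with variables like $ACTION, $INTERFACE and $DEVICE set in the environment, and the sqm-scripts package installs such a hook to (re)start the shaper when its configured interface comes up. A purely illustrative (hypothetical) hook, just to show the shape of the mechanism:

```shell
#!/bin/sh
# Hypothetical /etc/hotplug.d/iface/99-example (NOT the real sqm-scripts
# hook): log a message whenever the logical wan interface comes up.
[ "$ACTION" = "ifup" ] && [ "$INTERFACE" = "wan" ] && \
    logger -t example "wan is up on device $DEVICE"
```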

How long have you been using this overclocking patch? Is it safe?

Could you post the output of:

  1. cat /etc/config/sqm
  2. ifstatus wan | grep -e device
  3. tc -s qdisc
  4. cat /etc/os-release

please, so we might find out why hotplug seems inoperable. Thanks!

I have been using the overclocking patch for about 10 weeks. On my router it does not seem to cause any issues and I haven't noticed any overheating; however, as always with overclocking, this might not hold for every individual chip.

Here is what I have (after turning it off and on and playing around with some values):

  1. cat /etc/config/sqm
config queue 'eth1'
	option debug_logging '0'
	option interface 'dsl0'
	option linklayer 'ethernet'
	option overhead '34'
	option enabled '1'
	option verbosity '1'
	option upload '42000'
	option qdisc 'fq_codel'
	option script 'simple.qos'
	option linklayer_advanced '1'
	option tcMTU '1535'
	option tcTSIZE '96'
	option linklayer_adaptation_mechanism 'default'
	option tcMPU '72'
	option download '247000'
  2. ifstatus wan | grep -e device
	"l3_device": "pppoe-wan",
	"device": "dsl0.7",
  3. tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
 Sent 415747700804 bytes 322072073 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 62674802259 bytes 52993395 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 7570 drop_overlimit 0 new_flow_count 2071 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 56581020782 bytes 47534936 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 7170 drop_overlimit 0 new_flow_count 1953 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 236813339971 bytes 171370799 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 8604 drop_overlimit 0 new_flow_count 5756 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 59678537792 bytes 50172943 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 7570 drop_overlimit 0 new_flow_count 1211 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc htb 1: dev dsl0 root refcnt 2 r2q 10 default 0x12 direct_packets_stat 1 direct_qlen 1000
 Sent 22982497872 bytes 73979863 pkt (dropped 4024, overlimits 5583534 requeues 163348)
 backlog 0b 0p requeues 163348
qdisc fq_codel 120: dev dsl0 parent 1:12 limit 1001p flows 1024 quantum 300 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 22982497776 bytes 73979862 pkt (dropped 4024, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1608 drop_overlimit 64 new_flow_count 998849 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc fq_codel 130: dev dsl0 parent 1:13 limit 1001p flows 1024 quantum 300 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 110: dev dsl0 parent 1:11 limit 1001p flows 1024 quantum 300 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc ingress ffff: dev dsl0 parent ffff:fff1 ----------------
 Sent 234744954651 bytes 179557139 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev dsl0.7 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev pppoe-wan root refcnt 2 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 30583169016 bytes 125975245 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 12026 drop_overlimit 0 new_flow_count 82579 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc htb 1: dev ifb4dsl0 root refcnt 2 r2q 10 default 0x10 direct_packets_stat 0 direct_qlen 32
 Sent 251803716592 bytes 179545792 pkt (dropped 11347, overlimits 3328217 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 110: dev ifb4dsl0 parent 1:10 limit 1001p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 251803716592 bytes 179545792 pkt (dropped 11347, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1608 drop_overlimit 2816 new_flow_count 9022078 ecn_mark 0
  new_flows_len 1 old_flows_len 1
  4. cat /etc/os-release
NAME="OpenWrt"
VERSION="SNAPSHOT"
ID="openwrt"
ID_LIKE="lede openwrt"
PRETTY_NAME="OpenWrt SNAPSHOT"
VERSION_ID="snapshot"
HOME_URL="https://openwrt.org/"
BUG_URL="https://bugs.openwrt.org/"
SUPPORT_URL="https://forum.openwrt.org/"
BUILD_ID="r19300+70-8df161791a"
OPENWRT_BOARD="ipq40xx/generic"
OPENWRT_ARCH="arm_cortex-a7_neon-vfpv4"
OPENWRT_TAINTS="no-all"
OPENWRT_DEVICE_MANUFACTURER="OpenWrt"
OPENWRT_DEVICE_MANUFACTURER_URL="https://openwrt.org/"
OPENWRT_DEVICE_PRODUCT="Generic"
OPENWRT_DEVICE_REVISION="v0"
OPENWRT_RELEASE="OpenWrt SNAPSHOT r19300+70-8df161791a"

Might the queue name (eth1) cause the issue? There is no eth1 in my router.

No, that is just cosmetic...

This looks odd, but with linklayer 'ethernet' it will not matter

This (option interface 'dsl0'), however, is likely your problem... try pppoe-wan instead, which should work with hotplug.
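If that is the cause, a sketch of the fix via uci (assuming the SQM section really is named 'eth1', as in the config pasted above):

```shell
# Point SQM at the logical wan interface instead of the dsl0 device,
# then restart it. Section name 'eth1' taken from /etc/config/sqm above.
uci set sqm.eth1.interface='pppoe-wan'
uci commit sqm
/etc/init.d/sqm restart
```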