Netgear R7800 exploration (IPQ8065, QCA9984)

What is 80p80?

160 MHz of bandwidth across two non-contiguous 80 MHz channels, e.g. one 80 MHz segment on channels 36-48 and a second on channels 149-161.
80P80 means 80+80

80 Padding 80

So this will permit 160 MHz on more channel combinations?

Does anyone know how to completely remove the wifi blinking LED code from r7800 build?

ath10k-4.19/mac.c +/blink
include/mac80211/led.h

Do I just delete those two files?

Does this work with DFS? Otherwise it will not be legal to use 80+80 channels in the EU, as you have to use spectrum above channel 48 (DFS only) to create the second 80 MHz channel.

whooooa! hold your horses cowboy :wink:

i don't know much but.......

remove this near the bottom of mac.c (build_dir/target-arm_cortex-a15+neon-vfpv4_musl_eabi/linux-ipq806x/ath10k-ct-XYZ-ish):

#ifdef CPTCFG_MAC80211_LEDS
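        /* this registers the throughput-based LED trigger that makes
         * the wifi LED blink with traffic; deleting the whole #ifdef
         * block removes the blinking */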
        ar->led_default_trigger = ieee80211_create_tpt_led_trigger(ar->hw,
                IEEE80211_TPT_LEDTRIG_FL_RADIO, ath10k_tpt_blink,
                ARRAY_SIZE(ath10k_tpt_blink));
#endif

or find where CPTCFG_MAC80211_LEDS gets set and force it off

just a guess... tho...

or set .blink_time to 0 for every entry in mac.c:

static const struct ieee80211_tpt_blink ath10k_tpt_blink[] = {
	{ .throughput = 0 * 1024, .blink_time = 0 },
	{ .throughput = 1 * 1024, .blink_time = 0 },
	{ .throughput = 2 * 1024, .blink_time = 0 },
	{ .throughput = 5 * 1024, .blink_time = 0 },
	{ .throughput = 10 * 1024, .blink_time = 0 },
	{ .throughput = 25 * 1024, .blink_time = 0 },
	{ .throughput = 54 * 1024, .blink_time = 0 },
	{ .throughput = 120 * 1024, .blink_time = 0 },
	{ .throughput = 265 * 1024, .blink_time = 0 },
	{ .throughput = 586 * 1024, .blink_time = 0 },
};

or just disable LED support from the config menu o.o


I have collapsed @dissent1's PR into a single patch file and have a couple of questions.

  1. Multiple CPU frequencies can map to the same L2 frequency, so it makes sense to cache it and avoid calling into clk_set_rate(), which takes locks. The same applies to the L2 voltage. What is the right way to cache those variables in this driver: do I need to use a per-CPU approach? (See the caching sketch below the list.)

  2. There was a suggestion above in the thread to set the minimum frequency to 800 MHz: is that done in the dts entries like this? I could then remove the last two rows (the 600 MHz and 384 MHz ones) from each entry in qcom-ipq8065.dtsi.

                qcom,speed0-pvs0-bin-v0 =
                        < 1725000000 1262500 >,
                        < 1400000000 1175000 >,
                        < 1000000000 1100000 >,
                        < 800000000 1050000 >,
                        < 600000000 1000000 >,
                        < 384000000 975000 >;
  3. The original PR was not merged, with the explanation below. Can anyone provide some hints about what is expected? I read it as: I just need to copy the current cpufreq-dt.h/.c into cpufreq-dt-ipq806x.h/.c and make that part of the patch. Is that correct?

I noticed that you're hacking a lot of ipq806x specific code into the generic cpufreq-dt driver. I think it would be a lot less messy if you just fork that driver and create an ipq806x specific one instead.
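Back to question 1, here's a rough sketch of the caching idea (the helper name and layout are made up for illustration, not taken from the patch, and the rate-before-voltage ordering is glossed over). Since the L2 clock and regulator are shared by both cores, a single cached copy relying on the caller's existing serialization seems more natural to me than per_cpu copies, but that's exactly what I'm unsure about:

#include <linux/clk.h>
#include <linux/regulator/consumer.h>

/* last values actually programmed into the hardware */
static unsigned long cached_l2_rate;
static int cached_l2_uv;

/* hypothetical helper: skip clk_set_rate()/regulator_set_voltage(),
 * and the locks they take, when nothing would change */
static int ipq806x_set_l2(struct clk *l2_clk, struct regulator *l2_reg,
                          unsigned long rate, int uv)
{
        int ret;

        if (rate == cached_l2_rate && uv == cached_l2_uv)
                return 0;

        ret = clk_set_rate(l2_clk, rate);
        if (ret)
                return ret;

        ret = regulator_set_voltage(l2_reg, uv, uv);
        if (ret)
                return ret;

        cached_l2_rate = rate;
        cached_l2_uv = uv;
        return 0;
}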


About point 3:
You need to create a separate driver and set a dedicated compatible string in the dts, roughly like the skeleton below.
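A bare-bones shape of such a driver might look like this (just a sketch: the compatible string and all names here are made up, and probe would need the real opp/clock/regulator handling moved over from the cpufreq-dt hacks):

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int ipq806x_cpufreq_probe(struct platform_device *pdev)
{
        /* ipq806x-specific setup lives here instead of being patched
         * into the generic cpufreq-dt driver: parse the opp tables,
         * grab clocks/regulators, then register a cpufreq driver */
        return 0;
}

static const struct of_device_id ipq806x_cpufreq_match[] = {
        { .compatible = "qcom,ipq806x-cpufreq" },       /* made-up name */
        { }
};
MODULE_DEVICE_TABLE(of, ipq806x_cpufreq_match);

static struct platform_driver ipq806x_cpufreq_driver = {
        .probe = ipq806x_cpufreq_probe,
        .driver = {
                .name = "ipq806x-cpufreq",
                .of_match_table = ipq806x_cpufreq_match,
        },
};
module_platform_driver(ipq806x_cpufreq_driver);

MODULE_DESCRIPTION("ipq806x cpufreq driver sketch");
MODULE_LICENSE("GPL v2");

The matching dts node would then use that dedicated compatible so only this driver binds on ipq806x.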

Can you point me to a sample driver that is done this way?

Yes, the standard allows it to work with DFS.
I haven't tried it yet, as I'm having trouble getting even simple DFS running at the moment.

160 MHz in the EU is effectively impossible... you need 3/4 of the band free.

The only way to have 160 MHz is to live in the desert or mod the regulatory database to drop the limits (which would violate EU law).

Then perhaps this post isn't relevant to you.
Here in the land down under, this lets us do 160 MHz without encroaching on any DFS spectrum.

Any trampling of neighbours' networks will typically be resolved with a kangaroo duel in the street.


I have noticed that both L2 patches (original & modified) noticeably increase CPU utilization, so I decided to test the NAT-ed throughput and saw a 35% throughput drop.

Test: performance governor, iperf3, single stream, WAN configured with a static IP, router between two computers (LAN & WAN ports), otherwise a normal setup (firewall rules, etc.), LAN on CPU0 & WAN on CPU1.

Software offload enabled:
19.07: upload 700 Mbit/s / download 800 Mbit/s
L2 patch: upload 450 Mbit/s / download 475 Mbit/s

Software offload disabled:
19.07: upload 622 Mbit/s / download 730 Mbit/s
L2 patch: upload 403 Mbit/s / download 400 Mbit/s

Results from multiple runs would fluctuate a bit, but overall would stay consistent with the values above.

Any ideas/hints/suggestions will be appreciated as this is a totally unfamiliar area to me.


I have no answers, nor am I informed about this; however, I hope a question (not necessarily directed at you) and some observations might help spark some discussion from others.

why not just bump the voltage and L2 rate to the max values (1150000 µV and 1200000000 Hz, respectively) and test? (i.e. if you could profile it, perhaps the performance degradation you observe is due to the values changing...)

My layman's knowledge of HPC is that there are several types of limitation one can hit. In this case, I'm thinking of CPU-bound problems versus memory-bound problems. I suspect the L2 cache performance enhancements would help memory-bound problems, while the benchmark you ran might be more of a test of CPU-bound problems. Regardless, your benchmark is still very relevant, as it likely represents the most common use case for a router.

That said, maybe a different test is necessary to see a real change. I looked for a benchmark related to the L2 cache and came up with LINPACK, but that might be too much work to port over. stress is available on OpenWrt, but I'm not sure which test to use or how to get some kind of benchmark out of it.
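For a rough stand-in, a pointer-chasing loop might be enough to separate the two regimes: chase a randomly shuffled cyclic list whose working set either fits in or overflows the L2, and compare the time per hop. This is just my own sketch (not an established benchmark), and the suggested sizes assume the IPQ806x L2 is around 1 MiB:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define HOPS (50 * 1000 * 1000L)

int main(int argc, char **argv)
{
        /* working-set size in KiB: try one value well under the L2
         * (e.g. 256) and one well over it (e.g. 4096) */
        size_t kib = argc > 1 ? strtoul(argv[1], NULL, 0) : 4096;
        size_t n = kib * 1024 / sizeof(size_t);
        size_t *next = malloc(n * sizeof(size_t));
        size_t i, p;

        if (!next)
                return 1;

        /* Sattolo's algorithm: a random single-cycle permutation, so
         * every hop is a dependent load the prefetcher can't predict */
        for (i = 0; i < n; i++)
                next[i] = i;
        srand(1);
        for (i = n - 1; i > 0; i--) {
                size_t j = rand() % i;
                size_t t = next[i];
                next[i] = next[j];
                next[j] = t;
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (p = 0, i = 0; i < HOPS; i++)
                p = next[p];
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        /* print p so the chase can't be optimized away */
        printf("%zu KiB: %.1f ns/hop (p=%zu)\n", kib, ns / HOPS, p);
        free(next);
        return 0;
}

Cross-compiled with the OpenWrt toolchain and run on the router, a jump in ns/hop between the small and large sizes should show whether the L2 changes actually move the memory-bound number.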

HTH

EDIT: it would be nice to know if you saw a temperature change and/or a change in power consumption as well... i.e. is a mechanism that reduces the cpu freq kicking in when it gets too hot?

I used the performance governor and the CPU was running at the constant/max freq, so no values should be changing.

I ran the patched build for a few days with no temperature changes, but my router runs at <45 °C anyway. I didn't care about power consumption, so I did not measure it.


Hard for me to tell whether the L2 voltage and rate are constant just because the cpu freq is constant... but that would make sense.

Wow, my ambient ranges between ~21 and ~26 °C and I see temperature variations going from 44 °C (coolest zone, coolest ambient) to 54 °C (hottest zone, hottest ambient) running virtually no load with the ondemand governor. You must have won the silicon lottery.

Not at all, I simply placed the router at a strategic location with ~15 °C ambient temperature, and the router always runs with the performance governor. I just checked and the average is ~40 °C.
