OpenWrt Forum Archive

Topic: NAT Performance between Attitude Adjustment and Barrier Breaker

The content of this topic has been archived on 7 Apr 2018. There are no obvious gaps in this topic, but there may still be some posts missing at the end.

Just want to bring up the NAT performance difference on the WR1043ND router between AA and BB.

Attitude Adjustment
http://s24.postimg.org/gwo2f2aqb/WR1043_ND_AA.png

Barrier Breaker
http://s18.postimg.org/s6ev57tbr/WR1043_ND_Trunk.png

Both firmware images were taken unmodified from the OpenWrt download repository.

(Last edited by alphasparc on 20 Aug 2013, 09:45)

What do you get if you try the following Unix commands?

Sender:
dd if=/dev/zero bs=1024K count=5120|nc <dest ip> 5001

Receiver:
nc -l -p 5001 > /dev/null

(Last edited by phuque99 on 20 Aug 2013, 06:58)

I am using iperf here; it can't be wrong.
You can test it out yourself if you want.
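For reference, the test is roughly of this shape (a sketch only; the exact options and hosts are assumptions, with traffic crossing the router's NAT from a LAN host to a WAN host):

# on the machine on the WAN side of the router:
iperf -s
# on the machine on the LAN side, pushing traffic through the NAT:
iperf -c <wan host ip> -t 30 -i 5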

iperf, with all the bells and whistles, is frequently slower than plain raw netcat.

That does not matter; iperf simulates real-world NAT throughput better than netcat.
The goal is not to obtain big numbers, it is to improve the NAT throughput through awareness, testing and optimization.

The screenshots posted look broken from here.

Same issue over here on a TP-Link WDR4300. AA gets me >= 200 Mbps while Barrier Breaker caps out at around 170 Mbps.

Disregard what I said earlier: in my setup SFQ is still active.

(Last edited by alphasparc on 2 Nov 2013, 06:17)

paradoxmonkey wrote:

The screenshots posted look broken from here.

Yes, it is broken (self-reminder: DO NOT USE POSTIMAGE from now on).
I will just describe what the images say:
3.3.8 (Attitude Adjustment): NAT > 200 Mbps
3.10 (Barrier Breaker): NAT < 200 Mbps, around 170 Mbps

Hi, has anyone investigated this issue?
It is really a bummer to suffer a performance drop when upgrading to BB...

I wouldn't trust iperf; it gives me far less than I am able to achieve with normal downloads, but the AA vs BB issue seems to be real. I'd try compiling without IPv6 support and disabling the IPv6 rules in the NAT setup, but I don't have a spare gigabit router to test the difference. On the WR841ND I have not spotted any difference, because it is only a 100 Mbit router, so the CPU is far more powerful than the NIC.

Well, I will use this thread to record my blind shots at isolating the issue.
I just tried porting AA's RTL8366rb switch driver over to BB.
In the process I needed to patch out the __devinit and __devexit annotations.
After it compiled successfully I tested it:
still the same, so the culprit is not the switch driver.
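For anyone repeating this, the mechanical part of the port looked roughly like the following; the file path is an assumption from memory, and kernel 3.10 simply no longer has the __devinit/__devexit annotations:

# strip the 3.3-era annotations from the driver source
# (the *data variants and __devexit_p() must be handled before the bare macros)
sed -i 's/__devinitdata//g; s/__devinit//g; s/__devexit_p(\([^)]*\))/\1/g; s/__devexitdata//g; s/__devexit//g' \
    target/linux/generic/files/drivers/net/phy/rtl8366rb.c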

I did try to compile oprofile into the kernel but the package was broken so a bug report was filed.

(Last edited by alphasparc on 3 Feb 2014, 14:56)

Removed the following patches from the 3.10 generic patch folder:
656-skb_reduce_truesize-helper.patch
657-qdisc_reduce_truesize.patch
660-fq_codel_defaults.patch
661-fq_codel_keep_dropped_stats.patch
662-use_fq_codel_by_default.patch
663-remove_pfifo_fast.patch
Still no joy.
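For the record, the removal and rebuild amounted to roughly this (paths assume a Barrier Breaker / trunk buildroot checkout and bash brace expansion on the build host):

cd openwrt-trunk
rm target/linux/generic/patches-3.10/65{6,7}-*.patch
rm target/linux/generic/patches-3.10/66{0,1,2,3}-*.patch
# force the kernel to be re-patched and rebuilt
make target/linux/clean
make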

I'll try this: build from a really old BB svn revision (at some point it should be the same as AA!), then build from something like 2000 svn revisions newer, and so on...

Actually you can use git bisect for that...
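A rough sketch of that, assuming the git mirror of trunk on git.openwrt.org (the commit ids are placeholders, and each step means building an image, flashing it and re-running the throughput test):

git clone git://git.openwrt.org/openwrt.git && cd openwrt
git bisect start
git bisect bad HEAD                  # current BB trunk, known slow
git bisect good <last-AA-era-commit> # last revision known to be fast
# build, flash, run the NAT test, then mark the result:
git bisect good     # or: git bisect bad
# repeat until git reports the first bad commit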

great work!

Could it be the kernel change in r34414?

If I get that right, you are implying it lies between r34408 and r34415.
It would be great if you could narrow it down further, as there are several possible culprits.

I guess we are talking at least about the ar71xx architecture, so r34414 is an obvious candidate. But it is the Linux kernel change from 3.3.x to 3.6.x, so it is rather impossible to pinpoint a reason there.

The other possibility is r34415, but it concerns Realtek drivers, which should not affect the WR1043ND.

The only other change is the ATM patch, r34410.
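One way to list exactly what went in over that range (assuming the svn trunk URL in use at the time):

svn log -v -r 34408:34415 svn://svn.openwrt.org/openwrt/trunk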

EDIT:
Changelog:
https://dev.openwrt.org/changeset?new=3 … 08%40trunk

(Last edited by hnyman on 20 Feb 2014, 18:38)

Seems that kernel 3.3.8 has better ethernet throughput. :-/

Any help? I am stuck.

I did additional tests on trunk and got this:
The blue line is the default NAT performance, while I got the green line after I ran "/etc/init.d/odhcpd stop".
What does odhcpd do that limits the throughput?

(Last edited by alphasparc on 30 Mar 2014, 15:24)

alphasparc wrote:

I did additional tests on trunk and got this:
The blue line is the default NAT performance, while I got the green line after I ran "/etc/init.d/odhcpd stop".
What does odhcpd do that limits the throughput?

Sounds strange. AFAIK odhcpd is an IPv6 relay/RA/DHCPv6 server, but it plays no role in actual traffic routing. But it adds more IPv6 info and possibly thus complicates routing inside the kernel/iproute2/whatever...

It only adds more IPv6 support. I am wondering if your traffic now gets passed through an IPv6 tunnel, or something like that, which could cause more protocol overhead. Do you have IPv6 in use?
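A quick way to check what IPv6 state odhcpd has actually installed on the router (assuming the ip utility, or a busybox build with IPv6 support):

ip -6 addr show
ip -6 route show
ip -6 rule show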

Could you please verify that result. I would like to see separately:
a) odhcpd stopped normally via the service stop. That will probably remove the IPv6 prefix info from the routing tables etc.
b) the odhcpd process killed forcefully via the kill command. Then the routing rules etc. should still be there and continue affecting performance. (Rough commands for both cases are sketched below.)

My guess is that you will see a speed difference between a) and b). The odhcpd process itself should have no effect, but the routing rules etc. may have. Having more IPv6 support may add just enough extra routing work that it becomes visible at high speeds.

(The IPv6 routing information has a lifespan of 10-30 minutes, I think, so please test the speed right after killing the process.)
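In concrete commands, the two cases could look roughly like this (run on the router, with the same iperf test as before; the exact sequence is an assumption):

# a) normal service stop, which should also tear down odhcpd's routes/rules
/etc/init.d/odhcpd stop
# ...run the throughput test...

# b) kill the process only, leaving its routing state behind
/etc/init.d/odhcpd start
kill -9 $(pidof odhcpd)
# ...run the throughput test again, right away...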

It is as you guessed.
Killing odhcpd does not improve performance, but stopping odhcpd does.
So does this mean that the current performance of Barrier Breaker is here to stay?

I guess so, because OpenWrt surely wants to provide these services given the growing amount of IPv6 use.

But if you get the developers a bit interested, they might check if there is anything to be done to lighten the routing tasks.
The author of odhcpd is CyrusFF (in forum) / cyrus (bug tracker & svn)

https://github.com/sbyx/odhcpd
https://dev.openwrt.org/log/trunk/packa … ces/odhcpd

Interesting results, in any case.

The discussion might have continued from here.