OpenWrt Forum Archive

Topic: Build for WNDR3700/WNDR3800

The content of this topic has been archived between 9 Jul 2013 and 6 May 2018. Unfortunately some posts – most likely complete pages – are missing.

hnyman, thank you for building these great firmwares.
I was wondering if your last BB build is the official BB build?
Will you still be building BB updates?

Romulus wrote:

I was wondering if your last BB build is the official BB build?
Will you still be building BB updates?

It is almost the official build. I think it is one odhcp6c fix ahead of the official build.
I will be building BB every now and then, maybe once every 1-2 weeks, depending a bit on the changes made. So far there have not been "interesting changes" yet, just some fixes to odhcp6c and fixes to not-included packages:
http://git.openwrt.org/?p=14.07/openwrt.git;a=shortlog
https://github.com/openwrt/packages/commits/for-14.07

(Last edited by hnyman on 10 Oct 2014, 11:49)

hnyman - I'm intrigued by the new QoS add-on that you are including in the trunk build. I've only ever gone with the stable release, so I'm using BB right now.

I looked at the OpenWRT wiki page "Choosing an OpenWRT Version", and it mentions that trunk builds don't include the Luci GUI interface.

Do you compile your trunk builds with Luci? Basically I'm wondering if I can easily test CC with QoS or not. Also, do you think you will do a BB build with the SQM QoS in it? Or can it simply be downloaded as a package and installed like other OpenWRT apps after flashing the firmware?

Thank you for your assistance and time in building us updated OpenWRT firmware!

(Last edited by Beaker1024 on 10 Oct 2014, 14:22)

Beaker1024 wrote:

Do you compile your trunk builds with Luci? Basically I'm wondering if I can easily test CC with QoS or not. Also, do you think you will do a BB build with the SQM QoS in it?

Luci is included, as indicated in message #1 of this thread.
I will probably do a BB build with SQM QoS built in at some point. Not quite sure yet when.

EDIT 12 Oct 14:
I added SQM to the BB build.
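
If you would rather add it on top of an existing flash instead, a minimal sketch, assuming the sqm-scripts and luci-app-sqm packages are available in a configured opkg feed for your build:

opkg update
opkg install sqm-scripts luci-app-sqm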

(Last edited by hnyman on 12 Oct 2014, 18:42)

Hello hnyman,
I would like to buy another router like my WNDR3700v2, but unfortunately it now seems impossible to find a v2 on the market.
Only the v3 is available, but the v3 is a completely different board with a different processor.
The WNDR3800 is also impossible to find on the market.
What about the WNDR4300? Does your build work on it?
Currently on the market there is the WNDR4300-100PES. As far as I can understand there is only one version of this model, and it seems to be similar to the WNDR3700 v1/v2.
What about other routers?
Awaiting your comments.

WNDR4300 and WNDR3700v4 are pretty much the same hardware, but somewhat different from the WNDR3700v1/v2/WNDR3800. Different enough that you can't use the same firmware files (different SoC, NAND flash, etc.).

Look for the wndr3800 on eBay; a lot of them are the wndr3800ch variant. For the first flash you need to use a 3800ch image, but after that you can use regular 3800 images for sysupgrades.
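
A hedged example of such a later upgrade from the shell (the image name follows the usual ar71xx naming convention; double-check the exact file name for your build):

# copy the image to /tmp first (e.g. with scp or wget), then:
sysupgrade -v /tmp/openwrt-ar71xx-generic-wndr3800-squashfs-sysupgrade.bin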

@drawz
thanks, I saw that there is a specific directory, /snapshots/trunk/ar71xx.nand/, where the firmware for the WNDR4300 is stored

@robnitro
I found only very high prices.
In that case I would prefer to migrate to a TP-LINK Archer C7. In my opinion it is one of the best ar71xx routers for the price at the moment, and it seems that both 2.4 and 5 GHz are now fully supported.
Then I'll try to compile the hnyman build, making the right changes to the .config file.
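
For reference, a minimal sketch of that buildroot workflow, assuming the OpenWrt source tree and feeds are already checked out:

./scripts/feeds update -a
./scripts/feeds install -a
make menuconfig   # pick the target profile and packages; this writes .config
make              # build the images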

How can we set SQM to throttle uplink only?  I don't see an option for 'half duplex' like qos-scripts had.

I don't have enough cpu to do both and uplink is where latency can spike badly if saturated.


@Sergio
Nice choice and not a bad price.  74kc cpu at 720 mhz is a bit faster than the 24kc in wndr3800/3700 overclocked to 800 mhz.
Hopefully yours comes with more than 8MB flash.  I don't understand why they are so cheap with flash!

I think that if you set the downstream limit to a value greater than your actual downstream it will basically ignore it.

Hi Rob,

robnitro wrote:

How can we set SQM to throttle uplink only?  I don't see an option for 'half duplex' like qos-scripts had.

I don't have enough cpu to do both and uplink is where latency can spike badly if saturated.


@Sergio
Nice choice and not a bad price.  74kc cpu at 720 mhz is a bit faster than the 24kc in wndr3800/3700 overclocked to 800 mhz.
Hopefully yours comes with more than 8MB flash.  I don't understand why they are so cheap with flash!

If you set the bandwidth for either ingress or egress to 0, no shaping will happen in that direction. Have a look at logread, or call "/etc/init.d/sqm stop ; /etc/init.d/sqm start" and/or "tc -d qdisc" and "tc class show dev wan ; tc class show dev ifb4wan" (not sure about the device names), to get some information about the effective shaper setup. Unfortunately this is not well documented in luci-app-sqm... (Zero was chosen as the "off" value because it is not a viable shaping rate: if you truly shaped one direction to zero, no useful TCP connections would be possible.)
Note, the SQM developers tend to be more responsive to questions/issues raised on https://github.com/dtaht/ceropackages-3.10/issues or posted to https://lists.bufferbloat.net/listinfo/cerowrt-devel.
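
For illustration, the matching /etc/config/sqm stanza would look roughly like this (a sketch; option names assumed from the sqm-scripts packaging of that era, so verify against your build):

config queue 'wan'
        option interface 'pppoe-wan'   # your WAN interface name
        option enabled '1'
        option download '0'            # 0 = no ingress shaping
        option upload '78000'          # egress shaping rate in kbit/s
        option qdisc 'fq_codel'
        option script 'simple.qos'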

Best Regards
        M.

Hi The_Decryptor,

The_Decryptor wrote:

I think that if you set the downstream limit to a value greater than your actual downstream it will basically ignore it.

This will not help much with Rob's issue, as the shaper is then still running and consuming CPU (I think in soft-irqs), so setting it to zero should be better.

Best Regards
        M.

Thanks... it didn't help much,  just FYI:
800 mhz OC WNDR3800ch
Fiber that tests stably to 80/90 (can dl more with video on demand through cablebox).
I used 2 speed tests simultaneously, starting the second one's download when the first one started its upload.
Used task manager and hwinfo to see the network speeds (listed in MB/s)

SQM simplest or simple - same speeds maxed out.  Max DL set to 74Mbit  UL to 78Mbit

Both directions qos, UL and DL same time:   50/50 or so, varied.

DL qos set to 0- turning off the qdisc for downlink:    63 down/  60 up on avg.

Similar results for openwrt qos-scripts, despite them using HFSC instead of HTB.

I returned to using my Verizon Fios router with double NAT, using that router for the speed limiting.

I know I tested the stock Buffalo firmware on an ag300h (same cpu as the wndr) at the stock cpu speed of 680... it only had UL limiting, and allowed speedtests to run 80 down / 78 up (up being limited to 78).
Shame that even Buffalo's official dd-wrt couldn't match that!

(Last edited by robnitro on 15 Oct 2014, 01:37)

Hi Robnitro,

robnitro wrote:

Thanks... it didn't help much, just FYI:
800 mhz OC WNDR3800ch
Fiber that tests stably to 80/90 ... Shame that even Buffalo's official dd-wrt couldn't match that!

     Yeah, the wndr3700/3800 seem to be capable of shaping around 50 to 60 Mbps total (sum of downlink and uplink shaping rates). And HTB and HFSC are equally costly, so shaping a 100/100 connection is pushing well beyond the hardware's capabilities (with the current software stack). If you ssh into the router while running your tests and run "top", look at the %idle and %sirq: I would assume idle to be close to 0% and sirq close to 100% while testing, which basically shows that your router is running against a hard computation limit.
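
If your busybox top does not break those states out, a crude sampler over /proc/stat works too. A sketch that prints per-second deltas of the cumulative idle and softirq jiffies (fields 5 and 8 of the "cpu" line; the first line of output shows totals since boot):

prev_idle=0; prev_sirq=0
while sleep 1; do
  # /proc/stat fields: cpu user nice system idle iowait irq softirq ...
  set -- $(awk '/^cpu /{print $5, $8}' /proc/stat)
  echo "idle: $(($1 - prev_idle))  sirq: $(($2 - prev_sirq))"
  prev_idle=$1; prev_sirq=$2
done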
      I would recommend using netperf-wrapper's RRUL test to quantify the results, as it is easily repeatable (you do need a Linux or Mac OS X machine to run netperf-wrapper, though).
Here are three command lines to test shaping and latency under load against three more or less public netperf servers on the net; just pick the one closest to you (the one with the shortest ping RTT):
date ; ping -c 10 netperf-west.bufferbloat.net ; ./netperf-wrapper --ipv4 -l 300 -H netperf-west.bufferbloat.net rrul -p all_scaled --disable-log -t noAQM_noLLA_16M2M_hms-beagle_2_netperf-west
date ; ping -c 10 netperf-east.bufferbloat.net ; ./netperf-wrapper --ipv4 -l 300 -H netperf-east.bufferbloat.net rrul -p all_scaled --disable-log -t noAQM_noLLA_16M2M_hms-beagle_2_netperf-east
date ; ping -c 10 netperf-eu.bufferbloat.net ; ./netperf-wrapper --ipv4 -l 300 -H netperf-eu.bufferbloat.net rrul -p all_scaled --disable-log -t noAQM_noLLA_16M2M_hms-beagle_2_netperf-eu

With that out of the way, you could try just setting the shaping speed much higher and see how that affects the achievable shaping rates. Or, if you are willing to test, I could send you a modified SQM script that might improve things a bit (or might not; I have a 16M/2.5M connection, so I cannot saturate my link ;) ).

Best Regards
       M.

I'm planning to upgrade from the AA builds to the stable/BB one soon.

Is it worth resetting my config before upgrading? I first set up OpenWRT about a year ago and I heard the config might miss some stuff if I just upgrade it as is.

ceri wrote:

I'm planning to upgrade from the AA builds to the stable/BB one soon.

Is it worth resetting my config before upgrading? I first set up OpenWRT about a year ago and I heard the config might miss some stuff if I just upgrade it as is.

You should rebuild at least the network config, especially regarding the IPv6 defaults. Otherwise, I think that e.g. the firewall config has not changed much, though the wifi config options have changed slightly.

In general, configuring everything from scratch is a good thing to do every now and then.
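
A hedged example of flashing without carrying the old config over (sysupgrade's -n switch discards the current configuration; the image name here is illustrative):

sysupgrade -n /tmp/openwrt-ar71xx-generic-wndr3700v2-squashfs-sysupgrade.bin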

(Last edited by hnyman on 16 Oct 2014, 15:00)

moeller0 wrote:

the wndr3700/3800 seem to be capable of shaping around 50 to 60 Mbps total (sum of downlink and uplink shaping rates).  ... the hardware's capabilities (with the current software stack). If you ssh into the router while running your tests and run "top", look at the %idle and %sirq: I would assume idle to be close to 0% and sirq close to 100% while testing, which basically shows that your router is running against a hard computation limit.

I just tested with a WNDR3700v2 and my 100/~15 Mb/s connection:
the router was able to do 8 MB/s in while doing 1 MB/s out. Load was generated with Ubuntu torrents with about 200 simultaneous connections. Surfing was still possible ;-)
Like you said, sirq was at 100%.

Hi Hnyman,

hnyman wrote:
moeller0 wrote:

the wndr3700/3800 seem to be capable of shaping around 50 to 60 Mbps total (sum of downlink and uplink shaping rates).  ... the hardware's capabilities (with the current software stack). If you ssh into the router while running your tests and run "top", look at the %idle and %sirq: I would assume idle to be close to 0% and sirq close to 100% while testing, which basically shows that your router is running against a hard computation limit.

I just tested with a WNDR3700v2 and my 100/~15 Mb/s connection:
the router was able to do 8 MB/s in while doing 1 MB/s out. Load was generated with Ubuntu torrents with about 200 simultaneous connections. Surfing was still possible ;-)
Like you said, sirq was at 100%.

     Well, that equals a combined 72 Mbps (9 MB/s × 8), pretty much in the ballpark of what the wndr3700 seems capable of handling... If I may ask, what limits did you put into SQM?

Best Regards
       M.

moeller0 wrote:

     Well, that equals a combined 72 Mbps (9 MB/s × 8), pretty much in the ballpark of what the wndr3700 seems capable of handling... If I may ask, what limits did you put into SQM?

I think that I had 105000/12000 as the settings during that stress test, when I was just testing the max throughput. Normally I use 85000/10000 as the everyday settings.

hnyman wrote:
ceri wrote:

I'm planning to upgrade from the AA builds to the stable/BB one soon.

Is it worth resetting my config before upgrading? I first set up OpenWRT about a year ago and I heard the config might miss some stuff if I just upgrade it as is.

You should rebuild at least the network config, especially regarding the IPv6 defaults. Otherwise, I think that e.g. the firewall config has not changed much, though the wifi config options have changed slightly.

In general, configuring everything from scratch is a good thing to do every now and then.

I see, thanks for the clarification

Yep, when sqm/qos-scripts is on, the sirq in htop is maxed out. If I use the actiontec mi424wr for qos, the max I see is 45% sirq. It has a kirkwood cpu, I believe 1.4 ghz?

From a second openwrt router, which is acting as a client (wifi access point) to eliminate cpu use by netperf.
No QOS at all:   {ping to first hop on desktop had 150 ms spikes}
#/etc# ./netperfrunner.sh -t 45
2014-10-17 11:30:30 Testing netperf.bufferbloat.net (ipv4) with 4 streams down and up while pinging gstatic.com. Takes about 45 seconds.
Download:  64.26 Mbps
   Upload:  88.27 Mbps
  Latency: (in msec, 46 pings, 0.00% packet loss)
      Min: 30.296
    10pct: 77.105
   Median: 120.852
      Avg: 117.349
    90pct: 146.180
      Max: 158.932

SQM- 0 down (disabled)   78000 up  {ping to first hop on desktop had 150 ms spikes}
#/etc# ./netperfrunner.sh -t 45
2014-10-17 11:33:05 Testing netperf.bufferbloat.net (ipv4) with 4 streams down and up while pinging gstatic.com. Takes about 45 seconds.
Download:  80.62 Mbps
   Upload:  48.94 Mbps
  Latency: (in msec, 46 pings, 0.00% packet loss)
      Min: 29.874
    10pct: 58.908
   Median: 70.479
      Avg: 77.985
    90pct: 106.114
      Max: 111.192

SQM 75000 down  78000 up   {desktop ping first hop around 10-40 ms}
#/etc# ./netperfrunner.sh -t 45
2014-10-17 11:35:06 Testing netperf.bufferbloat.net (ipv4) with 4 streams down and up while pinging gstatic.com. Takes about 45 seconds.
Download:  59.09 Mbps
   Upload:  61.34 Mbps
  Latency: (in msec, 46 pings, 0.00% packet loss)
      Min: 29.937
    10pct: 30.267
   Median: 31.142
      Avg: 31.309
    90pct: 32.100
      Max: 35.308

Qos-scripts  half duplex,  78000 up   {desktop had around 40-70 ms pings first hop}
#/etc# ./netperfrunner.sh -t 45
2014-10-17 11:37:25 Testing netperf.bufferbloat.net (ipv4) with 4 streams down and up while pinging gstatic.com. Takes about 45 seconds.
Download:  82.57 Mbps
   Upload:  16.89 Mbps
  Latency: (in msec, 46 pings, 0.00% packet loss)
      Min: 41.955
    10pct: 54.745
   Median: 73.377
      Avg: 78.529
    90pct: 104.058
      Max: 114.809

qos-scripts full duplex, 75000 dn  78000 up  {10-30 pings from desktop to first hop}
#/etc# ./netperfrunner.sh -t 45
2014-10-17 11:39:04 Testing netperf.bufferbloat.net (ipv4) with 4 streams down and up while pinging gstatic.com. Takes about 45 seconds.
Download:  55.47 Mbps
   Upload:  59.41 Mbps
  Latency: (in msec, 46 pings, 0.00% packet loss)
      Min: 29.934
    10pct: 30.671
   Median: 31.713
      Avg: 32.119
    90pct: 33.524
      Max: 37.133

NOW WITH ACTIONTEC AS QOS MANAGER set to 75 dn  78 up (openwrt no qos/sqm):  {desktop 10-30 ms, random 100 ms spikes}
#/etc# ./netperfrunner.sh -t 45
2014-10-17 11:43:02 Testing netperf.bufferbloat.net (ipv4) with 4 streams down and up while pinging gstatic.com. Takes about 45 seconds.
Download:  71.41 Mbps
   Upload:  74.5 Mbps
  Latency: (in msec, 46 pings, 0.00% packet loss)
      Min: 30.646
    10pct: 33.643
   Median: 35.780
      Avg: 36.779
    90pct: 40.423
      Max: 45.587

Hi Robnitro,

robnitro wrote:

Yep, when sqm/qos-scripts is on, the sirq in htop is maxed out. If I use the actiontec mi424wr for qos, the max I see is 45% sirq. It has a kirkwood cpu, I believe 1.4 ghz?
...


      Thanks a lot, this nicely shows:
a) there is a lot of value in shaping both egress and ingress if possible: for both SQM and QOS, bi-directional shaping halved the ping RTTs (a rough estimate looking at the median and mean)

b) both SQM and QOS are more or less CPU-bound to similar combined rates (actually I wonder how you achieved 110 to 120 Mbps; typically I think the wndrs run out of steam around 60 to 70 Mbps, so maybe this is your overclocking?)

c) both QOS and SQM seem to keep latency under load under tighter control than the actiontec (judging from the 90pct and max), but the proprietary? actiontec QOS looks quite usable.

d) it would be interesting to repeat the same experiments with IPv6 ;)
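
For example, assuming netperf-wrapper accepts an --ipv6 switch analogous to the --ipv4 switch used above, the EU run would become something like:

date ; ping6 -c 10 netperf-eu.bufferbloat.net ; ./netperf-wrapper --ipv6 -l 300 -H netperf-eu.bufferbloat.net rrul -p all_scaled --disable-log -t noAQM_noLLA_16M2M_hms-beagle_2_netperf-eu_v6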


Many thanks
       M.

If I read robnitro's results correctly, they match my own observations pretty well: when the connection bandwidth is wide enough (I have 100/15), both qos-scripts and sqm manage nice latency, and their outcomes do not differ much. I guess that any possible difference would show up under more constrained conditions.

EDIT:
just adding that for a wndr3700v2 running sqm, netperf-wrapper gives me about 86 Mb/s down + 11 Mb/s up without overclocking, and the test was run from Ubuntu inside VirtualBox on a PC.

(Last edited by hnyman on 17 Oct 2014, 17:35)

Hi Hnyman,

hnyman wrote:

If I read robnitro's results correctly, they match my own observations pretty well: when the connection bandwidth is wide enough (I have 100/15), both qos-scripts and sqm manage nice latency, and their outcomes do not differ much. I guess that any possible difference would show up under more constrained conditions.

EDIT:
just adding that for a wndr3700v2 running sqm, netperf-wrapper gives me about 86 Mb/s down + 11 Mb/s up without overclocking, and the test was run from Ubuntu inside VirtualBox on a PC.

     So one thing that is interesting in judging SQM and QOS is how bad the latency-under-load increase is without any shaper. For example, while I was using SQM on a secondary router, the latency-under-load increase was bounded at ~300 ms, as that is what my ISP's modem-router enforced. Taking that router out of the link by running a PPPoE client on the OpenWrt router immediately resulted in worse latency spikes (1000 ms and larger). In your case I assume the link itself is not too bad to start with; likewise, in Robnitro's case the worst case was ~160 ms without QOS.
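
A simple way to get that baseline yourself: saturate the link with a big transfer and record RTTs at the same time. A sketch, with 192.0.2.1 as an illustrative placeholder for your first hop:

# while a saturating download/upload runs, collect two minutes of RTT samples
ping -c 120 192.0.2.1 | tee /tmp/ping_under_load.txt
# the final min/avg/max summary line is the number to compare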

Best Regards
        M.

Previous tests were with yesterday's BB 14.07 build.
I flashed today's CC trunk build and tested with netperf-wrapper again with no shaping, with SQM, and with QoS:
Nothing: 50 ms latency, about 98/11 Mb/s speed (ellipsis plot center).
SQM 100/11 Mb/s: 18 ms and 82/9 Mb/s speed.
QoS 100/11 Mb/s: 18 ms and 95/9 Mb/s speed.

So, both SQM and QoS decrease latency nicely, but there is some speed hit. Still a good tradeoff.

I have sent the data to your email.
