OpenWrt Forum Archive

Topic: poor wired speed with wdr-4300 and 15.05.

The content of this topic has been archived on 18 Apr 2018. There are no obvious gaps in this topic, but there may still be some posts missing at the end.

Greetings,

my main server is connected to a TL-WDR4300 router with 15.05 installed on it, via 3 meters of cable.
from what I can see, the router's eth ports support 1Gbps (the vendor says the router supports that speed), see:

root@OpenWrt:~# ethtool eth0
Settings for eth0:
        Supported ports: [ ]
        Supported link modes:   1000baseT/Full 
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  1000baseT/Full 
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        Current message level: 0x000000ff (255)
                               drv probe link timer ifdown ifup rx_err tx_err
        Link detected: yes
root@OpenWrt:~# ethtool eth0.1
Settings for eth0.1:
        Supported ports: [ ]
        Supported link modes:   1000baseT/Full 
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  1000baseT/Full 
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        Link detected: yes
root@OpenWrt:~# ethtool eth0.2
Settings for eth0.2:
        Supported ports: [ ]
        Supported link modes:   1000baseT/Full 
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  1000baseT/Full 
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        Link detected: yes

the server's eth port supports 1Gbps too, see:

dagg@NCC-5001-D ~ $ ethtool eth0
Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full 
                                100baseT/Half 100baseT/Full 
                                1000baseT/Half 1000baseT/Full 
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full 
                                100baseT/Half 100baseT/Full 
                                1000baseT/Half 1000baseT/Full 
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes
        Link partner advertised link modes:  10baseT/Half 10baseT/Full 
                                             100baseT/Half 100baseT/Full 
                                             1000baseT/Full 
        Link partner advertised pause frame use: Symmetric Receive-only
        Link partner advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
Cannot get wake-on-lan settings: Operation not permitted
        Current message level: 0x00000033 (51)
                               drv probe ifdown ifup
        Link detected: yes

and I'm using a CAT5e cable which supports up to 1Gbps.

so based on the above, I'm supposed to get 1Gbps, or at least 512Mbps, but a simple dd piped into a netcat session from the server to the router gives me 20MB/s, which is 160Mbps.

any idea why that is?

Thanks,

dagg

iliiike wrote:

Hardware NAT (1Gbps) is supported ONLY by stock firmware; you will get around 200Mbps with OpenWrt,
or ~300Mbps with some custom patched (overclocked) variations:
https://forum.openwrt.org/viewtopic.php … 28#p278228

seriously?

why such a difference?

I'm not a developer, but I assume it has to be the proprietary drivers (blobs) for the "hardware" part ... they are neither open source nor reverse-engineered, so you're left with raw CPU power.

I'm not sure it's a NAT-related problem.
is your server outside the NAT?
anyway, inside it or outside, it's not the right way to test throughput by running something on the router host itself.
to measure real values, use two different machines connected to the router, and test from machine to machine.
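for example (hostA and hostB are just placeholder names for two machines plugged into the router, both with dd and a netcat that accepts -l -p like the one on the router), roughly the same test as yours, but terminated on a second machine instead of on the router:

hostA$ nc -l -p 22222 > /dev/null
hostB$ dd if=/dev/zero bs=1024K count=1024 | nc hostA 22222

that way the router only forwards the traffic instead of sinking it on its own CPU, so the number reflects what it can actually switch/route.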

iliiike wrote:

I'm not a developer, but I assume it has to be the proprietary drivers (blobs) for the "hardware" part ... they are neither open source nor reverse-engineered, so you're left with raw CPU power.

I see, thanks for the answer.

stas2z wrote:

I'm not sure it's a NAT-related problem.
is your server outside the NAT?
anyway, inside it or outside, it's not the right way to test throughput by running something on the router host itself.
to measure real values, use two different machines connected to the router, and test from machine to machine.

not sure what you mean by "server outside NAT".
my test is as follows:
1. ssh to the router.
2. execute "nc -l -p 22222 > /dev/null" on the router.
3. execute "dd if=/dev/zero bs=1024K count=1024 | nc router 22222" on the server.

so this is not a soft-NAT/hardware-NAT issue, just a slow router host; that's normal for a 560MHz MIPS 74K.
but software NAT throughput will be almost the same as in your test.
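you can roughly confirm it's CPU-bound by watching the router while the transfer runs, e.g. from a second ssh session (busybox top should be there on 15.05):

root@OpenWrt:~# top -d 1

if the CPU line shows sys/sirq time pinned near 100% for the whole transfer, the limit is the SoC, not the link or the cable.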

(Last edited by stas2z on 11 Dec 2015, 09:05)

stas2z wrote:

so this is not a soft-NAT/hardware-NAT issue, just a slow router host; that's normal for a 560MHz MIPS 74K.
but software NAT throughput will be almost the same as in your test.

so as iliiike said, it is a driver issue.

thanks.

The discussion might have continued from here.