[Solved] Ethernet speed on My Book Live

Hello:

I was getting very slow LAN rsync speeds and found that the cause was the on-board ethernet adapter not being set properly:

root@OpenWrt:~# ethtool eth0
Settings for eth0:
	Supported ports: [ TP MII ]
	Supported link modes:   10baseT/Full 
	                        100baseT/Full 
	                        1000baseT/Full 
	Supported pause frame use: Symmetric Receive-only
	Supports auto-negotiation: Yes
	Supported FEC modes: Not reported
	Advertised link modes:  10baseT/Full 
	                        100baseT/Full 
	                        1000baseT/Full 
	Advertised pause frame use: Symmetric Receive-only
	Advertised auto-negotiation: Yes
	Advertised FEC modes: Not reported
	Speed: 100Mb/s    <-------------------------- # should be 1000Mb/s
	Duplex: Full
	Port: MII
	PHYAD: 1
	Transceiver: internal
	Auto-negotiation: on
	Link detected: yes
root@OpenWrt:~# 

So I tried to change the speed setting with ethtool, which is what I would do on a Linux box:

root@OpenWrt:~# ethtool -s eth0 speed 1000
Cannot advertise speed 1000
root@OpenWrt:~# 

For some reason, ethtool refuses to switch to a speed the adapter is capable of.
Attempting to turn auto-negotiation off results in a total system lockup; the only way out is a hard shutdown.
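
For reference, ethtool can also restrict the advertised link modes with a bitmask instead of forcing the speed outright (per the ethtool man page, 0x020 is 1000baseT/Full); I have not verified whether that behaves any better on this adapter:

root@OpenWrt:~# ethtool -s eth0 autoneg on advertise 0x020    # advertise only 1000baseT/Full, keep auto-negotiation on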

Is there a setting within OpenWrt for this?

Thanks in advance,

PCL

(https://serverfault.com/questions/766945/how-to-change-the-auto-negotiation-to-yes-in-linux-servers)

In that example it looks like Advertised auto-negotiation: No and Auto-negotiation: off.

Try those settings, since you want to do the opposite of what that poster wants.

I get 1 Gbps with Yes and on...

root@OpenWrt:~# ethtool eth0
Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        Current message level: 0x000000ff (255)
                               drv probe link timer ifdown ifup rx_err tx_err
        Link detected: yes

Could be your ethernet cable.

Hello:

Yes, it should work.

But if my system locks up when attempting to do the same thing via the command line, wouldn't the same thing happen using the solution in the example?

In any case, I can't find /etc/sysconfig/network-scripts/ifcfg-eth0 in my installation.

Yes, I think that could well be it.

Hold on a minute ...
[changing cables, checking dmesg on NAS ... ]
Yes ! 8^D
That was it.

I hooked up a CAT5 cable (not 5e) and it worked:

~# dmesg
--- snip ---
eth0: link is up, 1000 FDX, pause enabled

Thank you very much for your input.
Have a good week-end.

Best,

PCL

Hello:

Indeed it did.

Went out to get a set of new CAT5e cables and got everything plugged in.
Here's the result of running iperf between my box and the NAS:

root@OpenWrt:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.3 port 5001 connected with 192.168.1.2 port 35408
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   771 MBytes   645 Mbits/sec
root@OpenWrt:~# 
groucho@devuan:~$ iperf -c 192.168.1.3
------------------------------------------------------------
Client connecting to 192.168.1.3, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.2 port 35408 connected with 192.168.1.3 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   771 MBytes   646 Mbits/sec
pcl@devuan:~$ 

Good enough, I guess.
But I was expecting something closer to the efficiency I get with my netbook, which has a 10/100 ethernet adapter and manages 94 Mbits/s.

ie: if a 10/100 connection gives me 94% of its nominal speed, I was expecting at least 85% on the box-to-NAS connection: 850 Mbits/s instead of 645.
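
Something I could still try is iperf's parallel-stream option, to see whether a single TCP stream is the limiting factor (-P 4 runs four client threads in parallel):

groucho@devuan:~$ iperf -c 192.168.1.3 -P 4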

Any idea why this should be so?

Thanks in advance.

Best,

PCL.

Have you looked at the CPU utilization while running iperf? I wouldn't be surprised if the system is CPU limited.
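
For instance, ssh into the NAS and watch it while the test runs (assuming the stock BusyBox top; -d sets the refresh interval in seconds):

root@OpenWrt:~# top -d 1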

Hello:

No.
I'll see if I can catch a glimpse of it in the Processes page while I run it from the cmd line.

In any case, running iperf as server (-s) or as client (-c) on the NAS gave me the same result.

Thanks for your input.

Best,

PCL

Hello:

I'm at this moment running the same rsync job that took over 18 hours the first time I ran it, ie: before switching all the patch cables for new CAT5e ones and, as a result, getting the iperf speeds I reported in my previous post:

groucho@devuan:~$ iperf -c 192.168.1.3
------------------------------------------------------------
Client connecting to 192.168.1.3, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.2 port 35408 connected with 192.168.1.3 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   771 MBytes   646 Mbits/sec
pcl@devuan:~$ 

But something seems amiss ...

My conky panel is reporting a speed of 7.46 MiB/s:

Incoming: 114 KiB/s
Outgoing: 7.46 MiB/s

The NAS's CPU/Memory load at this moment are these:

PID   Owner  CPU   Mem    Process
 213  root    9%    0%    kswapd0
1904  root   67%    0%    Dropbear
1905  root    0%    7%    rsync  --server
1906  root   22%   13%    rsync  --server

I've checked with ethtool again and both links (box and NAS) are negotiated at 1000Mb/s Full Duplex.

So ...
Why the 7.46 MiB/s?
Shouldn't it be 7 or 8 times higher?

Maybe the NAS CPU is maxed out?
ie: kswap 9% + Dropbear 67% + rsync 22% = 98%

My rsync stanza is this:

time rsync -a /media/bkups root@192.168.1.3:/mnt/sda3

I added time so I can leave it running and know exactly how long it took to finish.
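
For watching the actual transfer rate while it runs, something like this should also work (assuming rsync >= 3.1 on both ends, which --info=progress2 needs):

time rsync -a --info=progress2 /media/bkups root@192.168.1.3:/mnt/sda3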

I'm at a loss here.

Any suggestions?

Thanks in advance.

Best,

PCL

Apparently rsync is being tunneled through ssh, which means encryption is added, and that is expensive. To stream data fast you can use a netcat tunnel: https://linuxhint.com/use-netcat-to-transfer-files/. But rsync won't support that.
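
A rough sketch of the idea from that link, streaming a tar archive over nc instead of using rsync (assuming an nc with listen support on the NAS; port 9000 is arbitrary, and you lose rsync's delta transfer and checking this way):

# on the NAS (receiver):
nc -l -p 9000 | tar -xf - -C /mnt/sda3
# on the box (sender):
tar -cf - /media/bkups | nc 192.168.1.3 9000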

Hello:

Must be; Dropbear is OpenWrt's de-facto SSH client/server application.

But I don't really need encryption as it is all local traffic.
ie: going nowhere else but from a back-up drive inside my box to the NAS under my desk.

But I do need rsync.

Given the CPU/Mem this board has, a 67% load is expensive.
ie: APM82181 @ 800 MHz + RAM 256 MB

Can encryption in Dropbear be turned off?

Thank you very much for your input.

Best,

PCL

No. Theoretically it could be, but the 'none' cipher is not supported. You can, however, run an rsync daemon, which is available as the rsyncd package. A bit less convenient than an ssh tunnel (which doesn't need any configuration on the server side). More here: https://linuxconfig.org/how-to-setup-the-rsync-daemon-on-linux
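
A minimal sketch of what the server side could look like (the module name "sda3" is just a placeholder):

# /etc/rsyncd.conf on the NAS
[sda3]
    path = /mnt/sda3
    read only = no
    uid = root
    gid = root

Start it with the rsyncd init script (or rsync --daemon) and point the client at rsync://192.168.1.3/sda3.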

Hello:

I see ...

I came across this yesterday when I was looking around for ways to get more throughput with the WD-MBL:

Can a pocket router go fast? (GL-AR750)
[https://www.glidk.com/2021/12/22/benchmarking-crappy-router]

It's a post by a chap using a small travel router with what seems to be a much lower spec than the WD-MBL:

Brand/Name      WD-MBL       GL-AR750
CPU:            PowerPC 44x  Qualcomm Atheros
Model:          APM82181     QCA9531
CPU cores:      1            1
CPU MHz:        800          650       <----- #
Flash MB:       512          16
RAM MB:         256          128       <----- #
BogoMIPS:       1600         432.53    <----- #

The WD-MBL, save for the HDD/SD card difference, seems to be much heftier.
Shouldn't I be able to get at least his 10.0 Mb/s?

To my chagrin, I have not been able to make heads or tails of what he did.
The link you've provided me with will surely have clearer instructions. ;^ )

Thank you very much for your input.
I'll try to get this set up asap and report on how I do.

Best,

PCL

Hello:

Didn't do so good.
I managed to set it up, server running on the NAS.

A small directory /media/stuff on one of my drives.

~$ du -sh /media/stuff
2.0G	/media/stuff
~$

The command line:

:~$ time rsync --progress -r /media/stuff root@192.168.1.3:/mnt/sda3/testdir

The CPU/Mem load according to the OpenWrt UI:

2517 root /usr/sbin/dropbear -F -P /var/run/dropbear.1.pid -b /etc/config/dbrbanner -p 192.168.1.3:22 -K 300 -T 3 64% 0%
2519 root rsync --server -re.iLsfxC --log-format=X . /mnt/sda3/testdir 36% 3%

So ...
dropbear 64% + rsync 36% = 100% CPU load

It would be OK if it were not for the time:

:~$
--- snip ---
real 4m59.943s
user 0m24.129s
sys  0m11.690s
:~$

If my math is right ...
It took 5 minutes to move 2.0 GB -> 400 MB/min -> 6.67 MB/s

I wonder what is going on.

This is supposed to be running with no encryption, ie: much less CPU load.
Yet the CPU load is more or less the same as before.

Any suggestions?

Thanks in advance.

Best,

PCL

You might check if write caching is enabled on the drive.
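
For example (assuming hdparm is installed on the NAS):

hdparm -W /dev/sda     # query the write-caching flag
hdparm -W1 /dev/sda    # enable write caching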

Hello:

Let me see:

:~$ sudo hdparm -I /dev/sda
/dev/sda:
ATA device, with non-removable media
	Model Number:       WDC WD10EURX-63FH1Y0                    
	Transport:          Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Commands/features:
	Enabled	Supported:
	   *	SMART feature set
	    	Security Mode feature set
	   *	Power Management feature set
	   *	Write cache
	   *	Look-ahead
--- snip ---
:~$ 

Yes, it is enabled.

Thanks for your input.

Best,

PCL

As a test, you could kill the dropbear process before running rsync.

Hello:

No.
It could not be HUP'd, TERM'd or KILL'd before starting rsync.

Doing it afterwards, once it was running at 67% CPU, killed the sync.

rsync error: unexplained error (code 255) at log.c(245) [sender=3.1.3]

There's something the chap with the travel router does that gets him a great throughput with that puny thingie.

What could I be doing wrong?

Thanks for your input.

Best,

PCL

You're still going through dropbear/ssh, not to the rsyncd directly, which explains the low speed and high load caused by dropbear.

Contacting an rsync daemon directly happens when the source or destination path contains a double colon (::) separator after a host specification, OR when an rsync:// URL is specified.

See the syntax:

Access via rsync daemon:
  Pull: [...]
        rsync [OPTION...] rsync://[USER@]HOST[:PORT]/SRC... [DEST]
  Push: [...]
        rsync [OPTION...] SRC... rsync://[USER@]HOST[:PORT]/DEST

(You will need to set up the "sources"/"destinations" in /etc/rsyncd.conf, it will only work with defined destinations, not with random directories. See the manpage for that.)
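
Concretely, assuming a module named for example [sda3] pointing at /mnt/sda3 in /etc/rsyncd.conf on the NAS, the push would look something like this (double-colon or rsync:// form instead of the single-colon ssh form):

time rsync -a /media/bkups 192.168.1.3::sda3/
# or, equivalently:
time rsync -a /media/bkups rsync://192.168.1.3/sda3/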

I got around 20 MB/s between two MBLs using rsync ... with 18.06, back when I last consciously benchmarked it. Still not blazingly fast, especially compared to the speeds I got with rsync on the original firmware; there's probably room for optimization. But it can certainly do better than what you're getting right now.

Hello:

I see.
Got the syntax wrong. :^/
I'll check the man page again and see if /etc/rsyncd.conf is correctly set.
I have not found straightforward examples on the web.

Still, that's about 3x what I'm getting now.
I just recalled a post by you from back in 10/18.
Came across it when I was having issues installing OpenWRT on the MBL.

Your post is from when OpenWrt was using kernel 4.19; we're at 5.41 now.
Seems the apparent speed limit was never taken care of.

Maybe the OpenWrt firmware is also responsible for the low speed I get with iperf, ie: ~645 Mbits/sec between my box and the WD-MBL NAS.

Or maybe it is the driver?

groucho@OpenWrt:~$ ethtool -i eth0
driver: ibm_emac             <----  ###                                   
version: 3.54                <----  ### 
firmware-version: 
expansion-rom-version: 
bus-info: PPC 4xx EMAC-0 /plb/opb/etherne
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
groucho@OpenWrt:~$ 

What version is the ibm_emac driver at now?

I'll check the rsync syntax and the .conf file and see what I can get.

Thank you very much for your input.

Best.

PCL
