Actually, the root problem is reading the CPU utilisation percentage without adjusting for the current CPU frequency.
So it looks like the CPU utilisation stats will be wrong for frequency-scaling devices (the stats do not reflect the true utilisation of the full CPU capacity). It might be worth mentioning this in the help text / advice, as I guess this will hit many users and the underlying reason for the high CPU stats is not obvious.
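(A made-up example of what I mean: a core scaled down to 800 MHz out of a 1.7 GHz maximum that reports 50% busy is really only using about 0.5 × 800/1700 ≈ 24% of its full compute capacity, so the reported percentage looks far more alarming than the true headroom warrants.)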
Others have stumbled over this utilisation reporting inconsistency, and there is, for example, this kind of discussion:
This CPU utilisation stat is a side-track for high-powered devices, but really useful for CPU-constrained older devices, so the stat itself is good to have. I do not think you need to spend time trying to fix the calculation, but there should be some advice about the phenomenon.
Yes, the utilization by definition is a time-based "duty cycle", and so independent of whether a Commodore64 or Cray-1 is doing the work. And I'd be wary of fudging it to create a new synthetic parameter (as I've seen some suggest in the past) and creating inconsistency with top and htop.
True, I agree the stat could provide good discriminating power with these older devices. Let me look into the feasibility of adding the stat without over-complicating things. I'm thinking of a simple average freq. per core over the test duration.
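To sketch the idea (this is not the final implementation, just a rough outline assuming the usual cpufreq sysfs files are present; the exact paths can vary by platform):
# Sample each core's current frequency once per second for DURATION seconds,
# then print the per-core average in MHz. scaling_cur_freq reports kHz.
DURATION=10
TMP=/tmp/freq_samples.$$
i=0
while [ $i -lt $DURATION ]; do
  for f in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq; do
    [ -r "$f" ] || continue
    cpu=$(echo "$f" | awk -F/ '{print $(NF-2)}')   # e.g. cpu0
    echo "$cpu $(cat "$f")" >> "$TMP"
  done
  sleep 1
  i=$((i + 1))
done
awk '{ sum[$1] += $2; n[$1]++ }
     END { for (c in sum) printf "%s: %d MHz\n", c, sum[c] / n[c] / 1000 }' "$TMP"
rm -f "$TMP"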
That's much appreciated. Many hands make light work, so thanks for your time testing and troubleshooting.
I agree there's no script issue I can see here, and it seems like a name resolution problem. Do you see problems with DDNS or NTP for example, which are also on-router services?
Could you try debugging DNS and netperf using the commands below on your router? I've included the output from my system for comparison if you could post yours as well.
First, a few nslookup queries:
# nslookup netperf.bufferbloat.net
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: netperf.bufferbloat.net
netperf.bufferbloat.net canonical name = netperf.richb-hanover.com
Name: netperf.richb-hanover.com
netperf.richb-hanover.com canonical name = atl.richb-hanover.com
Name: atl.richb-hanover.com
Address 1: 23.226.232.80
netperf.bufferbloat.net canonical name = netperf.richb-hanover.com
netperf.richb-hanover.com canonical name = atl.richb-hanover.com
# netperf -4 -H netperf.bufferbloat.net -t TCP_STREAM -l 10 -d
resolve_host called with host 'netperf.bufferbloat.net' port '(null)' family AF_INET
getaddrinfo returned the following for host 'netperf.bufferbloat.net' port '(null)' family AF_INET
cannonical name: 'atl.richb-hanover.com'
flags: 0 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP addrlen 16
sa_family: AF_INET sadata: 0 0 23 226 232 80 0 0 0 0 0 0 0 0 0 0
scan_omni_args called with the following argument vector
netperf -4 -H netperf.bufferbloat.net -t TCP_STREAM -l 10 -d
sizeof(omni_request_struct)=200/648
sizeof(omni_response_struct)=204/648
sizeof(omni_results_struct)=284/648
Program name: netperf
Local send alignment: 8
Local recv alignment: 8
Remote send alignment: 8
Remote recv alignment: 8
Local socket priority: -1
Remote socket priority: -1
Local socket TOS: cs0
Remote socket TOS: cs0
Report local CPU 0
Report remote CPU 0
Verbosity: 1
Debug: 1
Port: 12865
Test name: TCP_STREAM
Test bytes: 0 Test time: 10 Test trans: 0
Host name: netperf.bufferbloat.net
installing catcher for all signals
Could not install signal catcher for sig 32, errno 22
Could not install signal catcher for sig 33, errno 22
Could not install signal catcher for sig 34, errno 22
Could not install signal catcher for sig 128, errno 22
remotehost is netperf.bufferbloat.net and port 12865
resolve_host called with host 'netperf.bufferbloat.net' port '12865' family AF_INET
getaddrinfo returned the following for host 'netperf.bufferbloat.net' port '12865' family AF_INET
cannonical name: 'atl.richb-hanover.com'
flags: 0 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP addrlen 16
sa_family: AF_INET sadata: 50 65 23 226 232 80 0 0 0 0 0 0 0 0 0 0
resolve_host called with host '0.0.0.0' port '0' family AF_INET
getaddrinfo returned the following for host '0.0.0.0' port '0' family AF_INET
cannonical name: '0.0.0.0'
flags: 0 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP addrlen 16
sa_family: AF_INET sadata: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
establish_control called with host 'netperf.bufferbloat.net' port '12865' remfam AF_INET
local '0.0.0.0' port '0' locfam AF_INET
bound control socket to 0.0.0.0 and 0
successful connection to remote netserver at netperf.bufferbloat.net and 12865
complete_addrinfo using hostname netperf.bufferbloat.net port 0 family AF_INET type SOCK_STREAM prot IPPROTO_TCP flags 0x0
getaddrinfo returned the following for host 'netperf.bufferbloat.net' port '0' family AF_INET
cannonical name: 'atl.richb-hanover.com'
flags: 0 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP addrlen 16
sa_family: AF_INET sadata: 0 0 23 226 232 80 0 0 0 0 0 0 0 0 0 0
local_data_address not set, using local_host_name of '0.0.0.0'
complete_addrinfo using hostname 0.0.0.0 port 0 family AF_UNSPEC type SOCK_STREAM prot IPPROTO_TCP flags 0x1
getaddrinfo returned the following for host '0.0.0.0' port '0' family AF_UNSPEC
cannonical name: '0.0.0.0'
flags: 0 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP addrlen 16
sa_family: AF_INET sadata: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0) port 0 AF_INET to atl.richb-hanover.com (23.) port 0 AF_INET : demo
create_data_socket: socket 4 obtained...
netperf: get_sock_buffer: send socket size determined to be 16384
netperf: get_sock_buffer: receive socket size determined to be 87380
send_omni_inner: 2 entry send_ring obtained...
recv_response: received a 656 byte response
remote listen done.
remote port is 34436
About to start a timer for 10 seconds.
netperf: get_sock_buffer: receive socket size determined to be 341760
netperf: get_sock_buffer: send socket size determined to be 376320
disconnect_data_socket sock 4 init 1 do_close 1 protocol 6
Adjusting elapsed_time by 0 seconds
recv_response: received a 656 byte response
remote results obtained
calculate_confidence: itr 1; time 10.296365; res 4.979503
lcpu -1.000000; rcpu -1.000000
lsdm -1.000000; rsdm -1.000000
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec
174760  16384  16384    10.30       4.98
shutdown_control: shutdown of control connection requested.
Following up from my earlier post, I looked into the possibility of getting CPU frequency stats in addition to the normal processor utilization. Mostly for fun and curiosity...
In general, it's unnecessary since the % busy by itself is sufficient to determine whether a system is CPU-bound, independent of frequency scaling. That's because if one observes e.g. 98% average usage over 60 seconds, the maximum frequency upscaling (which kicks in at 95% even for the conservative default) will already have been reached. I also think it turns speedtest.sh into more of a processor-monitoring tool than its intended purpose of a latency-under-load tool. But I'll have to mull it over more...
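For anyone who wants to check this on their own hardware, the active governor and its up-scaling threshold can usually be read from sysfs (exact paths depend on the kernel and the governor in use, so treat these as illustrative commands rather than something from the script):
# Which cpufreq governor is each core using?
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# If ondemand or conservative is active, this is the % busy at which the
# governor scales the frequency up (the inactive one will just error out):
cat /sys/devices/system/cpu/cpufreq/ondemand/up_threshold 2>/dev/null
cat /sys/devices/system/cpu/cpufreq/conservative/up_threshold 2>/dev/null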
But for the fun and interesting stuff, here are a couple of examples, run on an 8-core Ubuntu system.
First, a short concurrent test across the local LAN:
$ sh speedtest.sh -H 192.168.1.1 -p 192.168.1.1 -c -t 10
2018-11-09 01:52:40 Starting speedtest for 10 seconds per transfer session.
Measure speed to 192.168.1.1 (IPv4) while pinging 192.168.1.1.
Download and upload sessions are concurrent, each with 5 simultaneous streams.
...........
Download: 32.66 Mbps
Upload: 33.09 Mbps
Latency: (in msec, 11 pings, 0.00% packet loss)
Min: 1.770
10pct: 1.770
Median: 303.000
Avg: 249.706
90pct: 365.000
Max: 436.000
Processor: (as avg +/- stddev % busy @ avg freq, 9 samples)
cpu0: 8% +/- 4% @ 1529 MHz
cpu1: 9% +/- 5% @ 1502 MHz
cpu2: 11% +/- 3% @ 1449 MHz
cpu3: 7% +/- 4% @ 1396 MHz
cpu4: 4% +/- 2% @ 1862 MHz
cpu5: 0% +/- 0% @ 1197 MHz
cpu6: 1% +/- 1% @ 1303 MHz
cpu7: 0% +/- 1% @ 1223 MHz
Overhead: 2% total CPU used by netperf
Next, a fun loopback test on the same server:
$ sh speedtest.sh -H 127.0.0.1 -p 127.0.0.1 -c -t 10
2018-11-09 01:25:12 Starting speedtest for 10 seconds per transfer session.
Measure speed to 127.0.0.1 (IPv4) while pinging 127.0.0.1.
Download and upload sessions are concurrent, each with 5 simultaneous streams.
..........
Download: 16248.00 Mbps
Upload: 16722.80 Mbps
Latency: (in msec, 11 pings, 0.00% packet loss)
Min: 0.038
10pct: 0.038
Median: 0.061
Avg: 0.065
90pct: 0.082
Max: 0.100
Processor: (as avg +/- stddev % busy @ avg freq, 8 samples)
cpu0: 100% +/- 0% @ 2794 MHz
cpu1: 100% +/- 0% @ 2794 MHz
cpu2: 100% +/- 0% @ 2794 MHz
cpu3: 100% +/- 0% @ 2794 MHz
cpu4: 100% +/- 0% @ 2794 MHz
cpu5: 100% +/- 0% @ 2794 MHz
cpu6: 100% +/- 0% @ 2794 MHz
cpu7: 100% +/- 0% @ 2794 MHz
Overhead: 53% total CPU used by netperf
A quick update: I've updated the OP with a new link for a speedtest version that also measures the average CPU frequency, as in the examples above.
This also includes some README and commentary updates, and will likely be the version for PR to openwrt-packages as "1.0".
@davidc502 Did you have any success understanding the DNS problem you encountered with netperf? Were the test suggestions above any help? That one's still bothering me...
I ran further tests with the R7800, with CPU scaling disabled (full CPU frequency). I ran the test (and netperf) on another device on the network, and watched the router's CPU utilisation with htop during the test. I made an interesting observation:
The high CPU usage is caused by the Cake qdisc.
Without SQM: 99/8 Mbit/s with 44 ms latency, no CPU utilisation
with simple fq_codel: 90/7 Mbit/s with 15 ms latency, low CPU utilisation
with cake layer_cake: 88/8 Mbit/s with 16 ms latency, high CPU utilisation
with cake piece_of_cake: 89/7 Mbit/s with 16 ms latency, high CPU utilisation
I see this very rarely too, during the course of much testing, and it boils down to a problem with netperf. The script kicks off netperf instances configured for a 60-second test, and then waits for them to complete.
If netperf fails to start, that can yield the zeros seen by @davidc502. Similarly, netperf can time out after a while, or simply take a long time to start up and complete, which can lead to the long runs seen by @hnyman.
It's hard to reproduce (perhaps network problems, or issues at the servers?), so I don't know the precise root cause, but I'll take a look at improving the logging in the script. If either of you can remember or compare notes on the circumstances around the long runs or zero results, please let me know.
I run the netperf-east server. I know that if the netperf server on my end goes down, the speedtest will return zeros. (I've had outages in the last month, and I think that netperf-west may also have: that may explain these reports of zero speeds.)
Perhaps the speedtest script could detect the "0.00 Mbps" state and warn that the server may be off-line.
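Something along these lines might be enough; the variable names here are placeholders for illustration, not taken from the actual script:
# Hypothetical post-test check: rate holds the measured speed in Mbps and
# TESTHOST the netperf server that was used.
rate="0.00"
TESTHOST=netperf.bufferbloat.net
case "$rate" in
  0|0.0|0.00)
    echo "Warning: measured 0.00 Mbps against $TESTHOST." >&2
    echo "The netperf server may be off-line; try -H netperf-west.bufferbloat.net or netperf-eu.bufferbloat.net." >&2
    ;;
esac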
I used betterspeedtest.sh to test connectivity between netperf-east (netperf.bufferbloat.net) and the two other publicly available netperf servers: netperf-west and netperf-eu. (There's no way I can test this from home: the best service I can get is DSL at 7 Mbps/768 kbps. Praise be to fq_codel/cake!)
richb@netperf:~/src/OpenWrtScripts$ sh betterspeedtest.sh -t 30 -H netperf-west.bufferbloat.net
2018-11-11 08:46:26 Testing against netperf-west.bufferbloat.net (ipv4) with 5 simultaneous sessions while pinging gstatic.com (30 seconds in each direction)
...............................
Download: 372.22 Mbps
Latency: (in msec, 31 pings, 0.00% packet loss)
Min: 0.974
10pct: 1.000
Median: 1.070
Avg: 1.113
90pct: 1.260
Max: 1.520
...............................
Upload: 752.77 Mbps
Latency: (in msec, 31 pings, 0.00% packet loss)
Min: 1.010
10pct: 1.070
Median: 1.480
Avg: 3.565
90pct: 7.430
Max: 15.400
richb@netperf:~/src/OpenWrtScripts$ sh betterspeedtest.sh -t 30 -H netperf-eu.bufferbloat.net
2018-11-11 08:47:37 Testing against netperf-eu.bufferbloat.net (ipv4) with 5 simultaneous sessions while pinging gstatic.com (30 seconds in each direction)
................................
Download: 603.21 Mbps
Latency: (in msec, 32 pings, 0.00% packet loss)
Min: 1.000
10pct: 1.020
Median: 1.130
Avg: 1.179
90pct: 1.330
Max: 1.560
..................................
Upload: 402.07 Mbps
Latency: (in msec, 34 pings, 0.00% packet loss)
Min: 0.988
10pct: 0.993
Median: 1.030
Avg: 1.132
90pct: 1.170
Max: 3.110
richb@netperf:
Here are some more results. For these I stopped forwarding to dnscrypt-proxy and forwarded straight to 1.1.1.1 and 1.0.0.1. Even so, netperf refuses to work.
root@lede:/usr/bin# speedtest.sh
2018-11-11 20:46:19 Starting speedtest for 60 seconds per transfer session.
Measure speed to netperf.bufferbloat.net (IPv4) while pinging gstatic.com.
Download and upload sessions are sequential, each with 5 simultaneous streams.
.
Download: 0.00 Mbps
Latency: (in msec, 1 pings, 0.00% packet loss)
Min: 35.123
10pct: 0.000
Median: 0.000
Avg: 35.123
90pct: 0.000
Max: 35.123
Processor: (in % busy, avg +/- stddev, -1 samples)
Overhead: (in % total CPU used)
netperf: 0
.
Upload: 0.00 Mbps
Latency: (in msec, 1 pings, 0.00% packet loss)
Min: 16.721
10pct: 0.000
Median: 0.000
Avg: 16.721
90pct: 0.000
Max: 16.721
Processor: (in % busy, avg +/- stddev, -1 samples)
Overhead: (in % total CPU used)
netperf: 0
root@lede:/usr/bin# speedtest.sh
2018-11-11 20:46:22 Starting speedtest for 60 seconds per transfer session.
Measure speed to netperf.bufferbloat.net (IPv4) while pinging gstatic.com.
Download and upload sessions are sequential, each with 5 simultaneous streams.
.
Download: 0.00 Mbps
Latency: (in msec, 1 pings, 0.00% packet loss)
Min: 19.562
10pct: 0.000
Median: 0.000
Avg: 19.562
90pct: 0.000
Max: 19.562
Processor: (in % busy, avg +/- stddev, -1 samples)
Overhead: (in % total CPU used)
netperf: 0
....................................................................................................................................
Upload: 0.00 Mbps
Latency: (in msec, 133 pings, 0.00% packet loss)
Min: 12.133
10pct: 12.649
Median: 13.677
Avg: 15.950
90pct: 14.354
Max: 66.922
Processor: (in % busy, avg +/- stddev, 130 samples)
cpu0: 8 +/- 9
cpu1: 7 +/- 8
Overhead: (in % total CPU used)
netperf: 0
nslookup
root@lede:/usr/bin# nslookup netperf.bufferbloat.net
Server: 216.165.129.158
Address: 216.165.129.158#53
Name: netperf.bufferbloat.net
netperf.bufferbloat.net canonical name = netperf.richb-hanover.com
Name: netperf.richb-hanover.com
netperf.richb-hanover.com canonical name = atl.richb-hanover.com
Name: atl.richb-hanover.com
Address 1: 23.226.232.80
netperf.bufferbloat.net canonical name = netperf.richb-hanover.com
netperf.richb-hanover.com canonical name = atl.richb-hanover.com
root@lede:/usr/bin# netperf -4 -H netperf.bufferbloat.net -t TCP_STREAM -l 10 -d
resolve_host called with host 'netperf.bufferbloat.net' port '(null)' family AF_INET
getaddrinfo returned the following for host 'netperf.bufferbloat.net' port '(null)' family AF_INET
cannonical name: 'atl.richb-hanover.com'
flags: 0 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP addrlen 16
sa_family: AF_INET sadata: 0 0 23 226 232 80 0 0 0 0 0 0 0 0 0 0
scan_omni_args called with the following argument vector
netperf -4 -H netperf.bufferbloat.net -t TCP_STREAM -l 10 -d
sizeof(omni_request_struct)=200/648
sizeof(omni_response_struct)=204/648
sizeof(omni_results_struct)=284/648
Program name: netperf
Local send alignment: 8
Local recv alignment: 8
Remote send alignment: 8
Remote recv alignment: 8
Local socket priority: -1
Remote socket priority: -1
Local socket TOS: cs0
Remote socket TOS: cs0
Report local CPU 0
Report remote CPU 0
Verbosity: 1
Debug: 1
Port: 12865
Test name: TCP_STREAM
Test bytes: 0 Test time: 10 Test trans: 0
Host name: netperf.bufferbloat.net
installing catcher for all signals
Could not install signal catcher for sig 32, errno 22
Could not install signal catcher for sig 33, errno 22
Could not install signal catcher for sig 34, errno 22
Could not install signal catcher for sig 65, errno 22
remotehost is netperf.bufferbloat.net and port 12865
resolve_host called with host 'netperf.bufferbloat.net' port '12865' family AF_INET
getaddrinfo returned the following for host 'netperf.bufferbloat.net' port '12865' family AF_INET
cannonical name: 'atl.richb-hanover.com'
flags: 0 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP addrlen 16
sa_family: AF_INET sadata: 50 65 23 226 232 80 0 0 0 0 0 0 0 0 0 0
resolve_host called with host '0.0.0.0' port '0' family AF_INET
getaddrinfo returned the following for host '0.0.0.0' port '0' family AF_INET
cannonical name: '0.0.0.0'
flags: 0 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP addrlen 16
sa_family: AF_INET sadata: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
establish_control called with host 'netperf.bufferbloat.net' port '12865' remfam AF_INET
local '0.0.0.0' port '0' locfam AF_INET
bound control socket to 0.0.0.0 and 0
successful connection to remote netserver at netperf.bufferbloat.net and 12865
complete_addrinfo using hostname netperf.bufferbloat.net port 0 family AF_INET type SOCK_STREAM prot IPPROTO_TCP flags 0x0
getaddrinfo returned the following for host 'netperf.bufferbloat.net' port '0' family AF_INET
cannonical name: 'atl.richb-hanover.com'
flags: 0 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP addrlen 16
sa_family: AF_INET sadata: 0 0 23 226 232 80 0 0 0 0 0 0 0 0 0 0
local_data_address not set, using local_host_name of '0.0.0.0'
complete_addrinfo using hostname 0.0.0.0 port 0 family AF_UNSPEC type SOCK_STREAM prot IPPROTO_TCP flags 0x1
complete_addrinfo: could not resolve '0.0.0.0' port '0' af 0
getaddrinfo returned -11 System error
My thanks to everyone for the continued and thoughtful feedback. After catching up following a brief absence...
That's really great follow-up! I was going to suggest running speedtest.sh off the router to take its on-router load out of the equation. I'm afraid the CAKE CPU usage doesn't come as much of a surprise, as @dtaht has been pointing out for a while. For additional context, could you provide ballpark figures for the "low CPU" and "high CPU" cases?
That's a good suggestion, and it's been bothering me since I realized there's little error checking around netperf. I'll look into making an update that does two things:
Check the netperf return status and warn/abort if an error is detected.
Check the elapsed time-to-complete for all netperf processes and warn/abort if it's too far out of bounds.
These two items should catch the "0.0" speeds as well as the "run extra long" situations.
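As a rough sketch of what I have in mind (placeholder names and thresholds, not the final code):
# Launch the netperf streams, record when they started, then check both the
# exit status of each one and how long the whole batch actually took.
TESTHOST=netperf.bufferbloat.net   # placeholder
TESTDUR=60                         # requested test length in seconds
STREAMS=5
start=$(date +%s)
pids=""
i=0
while [ $i -lt $STREAMS ]; do
  netperf -H "$TESTHOST" -t TCP_STREAM -l "$TESTDUR" > /dev/null 2>&1 &
  pids="$pids $!"
  i=$((i + 1))
done
failed=0
for pid in $pids; do
  wait "$pid" || failed=1          # non-zero exit: netperf had trouble
done
elapsed=$(( $(date +%s) - start ))
[ "$failed" -ne 0 ] && echo "Warning: at least one netperf stream failed." >&2
# Allow some slack, then flag runs that went far past the requested duration.
[ "$elapsed" -gt $((TESTDUR + 30)) ] && \
  echo "Warning: test took ${elapsed}s vs the requested ${TESTDUR}s." >&2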
@davidc502 Thanks for sticking with the troubleshooting. Your DNS service does seem to work, with both nslookup and netperf itself resolving netperf.bufferbloat.net (note the correct IP 23.226.232.80 in the sadata lines of your log above).
The problem you're seeing seems related to the local host name lookup instead. In my working netperf example above, the final complete_addrinfo call for host '0.0.0.0' with family AF_UNSPEC resolves cleanly, whereas the corresponding step at the end of your posted debug log fails with "complete_addrinfo: could not resolve '0.0.0.0' port '0' af 0" and "getaddrinfo returned -11 System error".
Does any of this ring a bell with respect to your DNS/resolution setup? The -11 return code appears to be EAI_SYSTEM ("System error") from netdb.h, which means the real cause is hiding in errno. @hnyman Something you've seen before perhaps?
One further suggestion to help narrow things down. If you could try what @hnyman did, install netperf on a Linux box in your LAN, and then run the same "debug" netperf command from there:
netperf -4 -H netperf.bufferbloat.net -t TCP_STREAM -l 10 -d
If this does work, then it might point to some discrepancy between your on-router vs. LAN DNS resolution.
@egross Thanks for your results! Those are very interesting. Nice to see consistent speeds in the gigabit range, and also the potential variation among netperf servers (as @richb-hanover-priv also highlighted).
I notice your aggregate throughput is around 800 or 900 Mbps, whether testing sequentially or concurrently. It's not obvious whether that's limited by your network link or by CPU exhaustion. Your CPU usage is high, but you do have some "headroom" available once netperf is taken out of the picture (i.e. normal operation). If you could run speedtest.sh from a Linux server on your LAN with netperf installed, that would provide some good additional data.
One other odd thing I noticed is that you are tweaking the CPU frequency scaling on your 2-core router, but speedtest.sh doesn't find any CPU frequency information to display from /proc/cpuinfo. Are you certain scaling is working for you? Could you post the output of your /proc/cpuinfo file for my benefit?
Thanks again everyone for the feedback and testing!
Thanks for the clarification. I honestly didn't think the CAKE overhead was that high from the little A/B testing I did in the past, but note that was done by "eyeballing" top and without the benefit of this script. I'm hoping @dtaht could share his observations...
@hnyman @egross Could you please confirm that you see the average CPU frequency on your multi-core, frequency-scaling boxes, since my last update to the script added this capability?
I've successfully tested on multi-core Ubuntu and single-core (no freq. scaling) OpenWrt, but don't have an OpenWrt router that does frequency scaling to check on. Thanks again!
Thanks @m4r for the extra data points. The reason for the missing frequency information wasn't clear until the "raspberry pi3+" hint and a look online at sample /proc/cpuinfo files. Apparently the contents of /proc/cpuinfo aren't standardized and vary between Linux platforms, which is a common point of complaint.
In particular, Linux/arm doesn't include the CPU frequency in /proc/cpuinfo, while Linux/x86_64 does. I've therefore updated speedtest to read the CPU frequency from the sysfs cpufreq interface instead, which should be more robust across platforms.
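Roughly, the difference looks like this (a simplified illustration, not the exact code now in speedtest.sh):
# On x86_64 this finds a "cpu MHz" line, but on ARM routers it comes up empty:
grep -i mhz /proc/cpuinfo
# The cpufreq sysfs files work on both; values are reported in kHz:
for f in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq; do
  [ -r "$f" ] || continue
  cpu=$(echo "$f" | awk -F/ '{print $(NF-2)}')   # e.g. cpu0
  echo "$cpu: $(( $(cat "$f") / 1000 )) MHz"
done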
I do see the frequency in the output now. You will notice that I also started irqbalance first, and that seems to lower the CPU usage slightly (in the upload test, one CPU would otherwise get maxed out):
root@OpenWrt:~# irqbalance
root@OpenWrt:~# speedtest.sh -H netperf-west.bufferbloat.net -p 1.1.1.1
2018-12-18 11:45:21 Starting speedtest for 60 seconds per transfer session.
Measure speed to netperf-west.bufferbloat.net (IPv4) while pinging 1.1.1.1.
Download and upload sessions are sequential, each with 5 simultaneous streams.
............................................................
Download: 910.64 Mbps
Latency: [in msec, 61 pings, 0.00% packet loss]
Min: 4.924
10pct: 5.323
Median: 6.689
Avg: 6.769
90pct: 7.466
Max: 18.762
CPU Load: [in % busy (avg +/- std dev), 57 samples]
cpu0: 77.7% +/- 9.6% @ 1725 MHz
cpu1: 59.7% +/- 10.0% @ 1725 MHz
Overhead: [in % used of total CPU available]
netperf: 49.2%
...........................................................
Upload: 815.01 Mbps
Latency: [in msec, 60 pings, 0.00% packet loss]
Min: 5.199
10pct: 10.478
Median: 21.578
Avg: 27.282
90pct: 51.101
Max: 75.928
CPU Load: [in % busy (avg +/- std dev), 52 samples]
cpu0: 91.3% +/- 0.0% @ 1725 MHz
cpu1: 95.0% +/- 0.0% @ 1725 MHz
Overhead: [in % used of total CPU available]
netperf: 39.2%
Concurrent test:
root@OpenWrt:~# speedtest.sh -H netperf-west.bufferbloat.net -p 1.1.1.1 --concurrent
2018-12-18 11:49:30 Starting speedtest for 60 seconds per transfer session.
Measure speed to netperf-west.bufferbloat.net (IPv4) while pinging 1.1.1.1.
Download and upload sessions are concurrent, each with 5 simultaneous streams.
............................................................
Download: 136.61 Mbps
Upload: 672.10 Mbps
Latency: [in msec, 60 pings, 0.00% packet loss]
Min: 5.317
10pct: 12.724
Median: 22.947
Avg: 24.738
90pct: 39.668
Max: 54.965
CPU Load: [in % busy (avg +/- std dev), 55 samples]
cpu0: 89.0% +/- 0.0% @ 1725 MHz
cpu1: 97.1% +/- 1.9% @ 1725 MHz
Overhead: [in % used of total CPU available]
netperf: 42.8%