How do the results of relate to a browser-based and ISP-advertised speed?


Please redirect me to where I should properly be asking this question if this isn't the place to discuss it, but how do the results of relate to a browser-based and ISP-advertised speed?

When I run an Internet Speedtest from, say, Ookla, I often see myself getting the speed that I am paying for that was advertised by my ISP (about 11Mbps according to their marketing material).

I'm happy with this speed (because anything faster would be wasted on me); however, I notice that when I run the numbers that return are significantly lower, in the range of 2–5 Mbps.

What is going on here? I do get the message "WARNING: netperf returned errors. Results may be inaccurate!", and my preliminary research into this leads me to believe it has something to do with the upstream servers the test uses becoming unavailable due to abuse (???).

But is there anything else I'm missing? Am I supposed to apply a conversion to the number that returns, to equate it to what a browser-based test would give back?

Both attempt to report the IP/TCP payload rate between sender and receiver, which is also often called goodput. Both will also (by default) use multiple concurrent TCP streams to essentially saturate the link; the exact number of flows is implementation-dependent.
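To make the goodput-vs-advertised-rate distinction concrete, here is a rough back-of-the-envelope sketch (assuming plain IPv4 + TCP without options, i.e. 40 bytes of headers per 1500-byte packet, and ignoring link-layer overhead such as PPPoE or DOCSIS framing, which costs a bit more):

```shell
# Expected TCP goodput on an advertised 11 Mbit/s link, assuming
# full-size 1500-byte IP packets carrying 20 bytes IPv4 + 20 bytes
# TCP header each (link-layer overhead ignored).
awk 'BEGIN { printf "%.2f Mbit/s goodput\n", 11 * (1500 - 40) / 1500 }'
# -> 10.71 Mbit/s goodput
```

So even a perfect test should report somewhat below the advertised 11 Mbps, but per-packet overhead alone gets nowhere near explaining a drop to 2–5 Mbps.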

I would guess this runs into the problem that the infrastructure used by is paid for by a volunteer (this forum's excellent @richb-hanover-priv) out of his own pocket, and bandwidth/traffic volume adds up quickly for a speedtest endpoint. One option to test this would be to repeat the test after the new month (and presumably billing period) has started, hoping that there might be less throttling.

BUT really, using is not a long-term stable strategy... IMHO the latency measuring and reporting parts of that test should be combined with a different load-generating component that employs beefier/better-connected remote measurement servers, like e.g. Ookla's. However, that does not work all that well from a router. Now, it can be argued that terminating a speedtest on a router is not the best test of a router's routing performance (it is rather a test of using the router as a server), and hence running the speedtest from a device in the internal/home network might be a better test, in which case using the speedtest-cli application for load generation might be an option again...
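As a sketch of that last option (assuming sivel's community speedtest-cli tool is installed on a LAN machine, e.g. via pip install speedtest-cli; it is not an official Ookla client, but it uses Ookla's server network):

```shell
# Run a speedtest against Ookla's servers from a LAN host instead of
# the router; fall back to a notice if the tool is not installed.
if command -v speedtest-cli >/dev/null 2>&1; then
    speedtest-cli --simple    # prints Ping, Download and Upload lines
else
    echo "speedtest-cli not installed"
fi
```

Run this while watching latency from a second terminal and you get a crude approximation of the load-plus-latency test, just with beefier remote endpoints.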

Hope that helps...


Thanks @moeller0

Not going to lie, I'll have to research some of the terms you used, such as payload rate, before I can understand what you are saying. I'll look them up later on.

I do understand what you said about the economics of the infrastructure however.

My router is a Raspberry Pi with 8GB of LPDDR4-3200 SDRAM and a Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz. I could well be wrong about this, but that should be enough to mitigate the problem of system load influencing the test, right? I mean, the CPU's all-time high is 2%, and I've never seen more than 1% of the memory being used either.

Ah, sorry; in "packet networking" each packet needs to carry both administrative overhead (think of the sender and receiver addresses on a parcel you send by mail) as well as the actual data a user wants to transfer. The "payload" is exactly that data, and the payload rate is the achievable rate for packets of a given size. Why does size matter? Because the administrative overhead typically is a fixed number of bytes independent of the actual payload size in a given packet, so smaller packets have a worse payload-to-overhead ratio than bigger packets (for bulk data transfers, TCP tries to always use the biggest possible/allowed packets to maximize the payload rate).
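The size dependence is easy to see numerically (again assuming a fixed 40 bytes of IPv4 + TCP header per packet; the two packet sizes are just illustrative):

```shell
# Payload efficiency (payload / gross IP packet size) for two packet
# sizes, assuming a fixed 40-byte IPv4 + TCP header per packet.
overhead=40
for mtu in 1500 100; do
    payload=$((mtu - overhead))
    pct=$((100 * payload / mtu))    # integer percent is close enough here
    echo "MTU $mtu: $payload payload bytes, ${pct}% efficiency"
done
```

A full-size packet carries about 97% payload, while a 100-byte packet carries only 60%, which is why bulk TCP transfers prefer the largest allowed packet size.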

Yes and no; really, sourcing and sinking data is a different task than "just" routing, and you might run into subtle cache-usage issues here. But for your link speed and the beefiness of your router, I fully agree you have the CPU cycles to spare.

Side note: measuring CPU load is trickier than it seems, as not everything is reported as load, and e.g. top's single-line usage summary is hard to interpret on multicore routers.
First, the best way to estimate CPU load is to look at 100 - idle% (if this gets above ~90% per CPU you can expect some CPU latency issues to crop up).
The multicore issue is that busybox top reports 100% as all CPUs maximally loaded, so when you calculate e.g. 25% load (using the method above) this can mean one CPU running at 100% and the others at 0%, or any combination of loads that averages out to 25%.
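The "100 - idle%" method above can be scripted directly from /proc/stat (a sketch for a Linux/BusyBox system; the counter fields per CPU line are user, nice, system, idle, iowait, irq, softirq, steal):

```shell
#!/bin/sh
# Per-CPU load as 100 - idle%, sampled over one second from /proc/stat.
# Idle + iowait ticks count as "not busy"; everything else counts as load.
snap() { grep '^cpu[0-9]' /proc/stat; }
a=$(snap); sleep 1; b=$(snap)
printf '%s\n%s\n' "$a" "$b" | awk '
{
    total = 0
    for (i = 2; i <= 9; i++) total += $i
    idle = $5 + $6                      # idle + iowait ticks
    if ($1 in t0) {                     # second sample: compute the delta
        dt = total - t0[$1]; di = idle - i0[$1]
        if (dt > 0) printf "%s: %.1f%% load\n", $1, 100 * (dt - di) / dt
    } else {                            # first sample: remember counters
        t0[$1] = total; i0[$1] = idle
    }
}'
```

If any single CPU approaches 100% here while the busybox top average still looks modest, you have found the single-core bottleneck described above.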

Typically I use htop, as that allows seeing the load per CPU (opkg update; opkg install htop), and configure it:
1. press F2 for Setup
2. in the leftmost list, navigate down to Display options
3. there, check the "Detailed CPU time (System/IO-Wait/Hard-IRQ/Soft-IRQ/Steal/Guest)" checkbox
4. then go back and navigate to Right column
5. select/add CPU 1 and CPU 2 and toggle through both to get the textual representation
6. press F10 to save the changes
The result should look somewhat like:

  0[||||||                                                                            5.9% N/A] Tasks: 56, 32 thr, 117 kthr; 1 running
  1[||||                                                                              3.9% N/A] Load average: 0.12 0.12 0.09 
Mem[||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||340M/1005M] Uptime: 5 days, 02:24:28
Swp[                                                                                     0K/0K] PSI some CPU:      nan%   nan%   nan% 
                                                                                                PSI full memory:   nan%   nan%   nan% 
                                                                                                  0:  4.9% sy:  1.0% ni:  0.0% hi:  0.0% si:  0.0% st:  0.0% gu:  0.0% wa:  0.0% 
                                                                                                  1:  1.0% sy:  2.9% ni:  0.0% hi:  0.0% si:  0.0% st:  0.0% gu:  0.0% wa:  0.0% 

albeit in color
