Recommended settings for the Netflix speedtest (fast.com)

Dear All,

Netflix offers a great speedtest (https://fast.com), served from its own content delivery nodes and aimed at allowing end users to assess how well Netflix will/does work on their internet access link. IMHO this is really great, because here the goal of the party responsible for operating the measurement infrastructure and our goal as end users to veridically measure our link's capacity are well aligned. This makes it an excellent tool for generating load for bufferbloat assessments. (The test also reports latency numbers, but not yet at the temporal resolution required/desired for buffer debloating; Netflix is aware of the issue and is trying to come up with a solution that serves both the common goal of keeping the test's report simple and easy to understand for normal users and our goal of getting more out of the latency-under-load probes that are measured as well.)

Now, by default https://fast.com will simply measure the download direction for a relatively short amount of time, but following the steps below will turn it into a longer-running bi-directional load generator:

  1. Browse to https://fast.com and run a test.

  2. Click the "Show more info" button.

  3. Click the "Settings" link and set the following values:

  4. Parallel connections: Set the "Min" and "Max" number of streams to your desired number, e.g. "Min: 16, Max: 16" for a fast link of, say, 100 Mbps (and "Min: 1, Max: 1" for a slow ADSL link); this might require some trial and error depending on the link.

  5. Test duration (seconds): Set the "Min" and "Max" number of seconds to a duration long enough to actually saturate your link, and ideally longer than typical speedtests so that the test cannot easily be "gamed" by an ISP transiently giving users more priority. IMHO "Min: 30, Max: 30" is a decent value.

  6. Check the "Measure loaded latency during upload" checkbox: currently the reporting is not ideal for de-bloating, but this is a number we really want.

  7. Check the "Always show all metrics" checkbox: the rationale should be obvious :wink:

  8. Optionally check the "Save config for this device" checkbox, unless you want to start with the default test settings every time you navigate to fast.com.

  9. Click the "Save" button, and your first test with the new settings will start.

Here is a slightly redacted example of what you will see after such a test:

Note: The client address reveals whether the test was run over IPv6 (as in the example) or over IPv4 (the GeoIP location, however, is approximate at best; the test was not run from Lueneburg, not even close ;)).
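As an aside, if you would rather script repeat runs from a PC than click through the browser dialog, there is (as far as I remember) a third-party fast-cli npm package that drives fast.com from the command line. Treat the snippet below as a hedged sketch: the package name and the --upload flag are quoted from memory, and it is not an official Netflix tool.

# Third-party tool, needs Node.js/npm on a PC (not on the router);
# double-check the package before relying on it.
npm install --global fast-cli
fast --upload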

So for debloating this is a great bi-directional load generator with a relatively potent infrastructure backing it up. I have no experience whether this is suitable for really fast links in the 1 Gbps class (Netflix should mostly be interested in whether a given link can sustain one to a few UHD streams, so it probably will not try too hard to saturate the link once that goal has been reached, especially since the same infrastructure used for the test also serves actual Netflix customers, so I would also expect lower priority for tests during prime time).
As @slh noted, and as I already speculated, the download test currently seems to hit a ceiling at around the 100 Mbps that Netflix's Premium plan requires for the allowed 4 concurrent UHD streams at ~25 Mbps each. That should still allow reasonable testing for many of the slower links, but people with faster links might need to wait for a faster.com :wink:

All in all a great service by Netflix, which will hopefully gain better latency-under-load reporting in the future so it can serve as a standalone bufferbloat test.

Crude hack alert: As a quick and dirty method to use this for de-bloating, just run a latency probe in parallel. For example, install mtr on the router (opkg update; opkg install mtr) and, in an ssh session opened in parallel to the browser, run a test targeted at a decent ICMP reflector like Google's DNS servers:

mtr -ezbw -i 0.2 -c 70 8.8.8.8
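For reference, here is my own annotation of what the individual options do (descriptions paraphrased from the mtr man page, so double-check against your version), plus an optional variant that also saves the report to a file:

# What the options above do:
#   -e        show ICMP extension (MPLS) information, if present
#   -z        look up and display the AS number of each hop
#   -b        show both hostnames and IP addresses
#   -w        wide report mode (do not truncate long hostnames)
#   -i 0.2    send a probe every 0.2 seconds
#   -c 70     stop after 70 probe cycles (70 x 0.2 s, roughly 14 s of probing)
#
# Optional variant that keeps a copy of the report for later comparison
# (the file name /tmp/mtr_fastcom.txt is just my own example):
mtr -ezbw -i 0.2 -c 70 8.8.8.8 | tee /tmp/mtr_fastcom.txt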

Then after fast.com reports you will see something like:

root@XXXXXX:~# mtr -ezbw -i 0.2 -c 70 8.8.8.8
Start: 2020-12-20T12:54:18+0100
HOST: XXXXXX                                                                      Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. AS6805   loopback1.0002.acln.01.ham.de.net.telefonica.de (62.52.200.148)      0.0%    70   11.7  16.7  10.8  71.4  10.8
  2. AS6805   bundle-ether28.0006.dbrx.01.ham.de.net.telefonica.de (62.53.12.18)   0.0%    70   13.4  15.9  11.1  59.2   9.4
  3. AS6805   ae1-0.0001.prrx.01.ham.de.net.telefonica.de (62.53.25.59)            0.0%    70   13.1  15.7  10.7  51.7   9.0
  4. AS15169  74.125.48.102 (74.125.48.102)                                        0.0%    70   12.2  15.0  10.8  63.9   8.6
  5. AS15169  216.239.63.17 (216.239.63.17)                                        0.0%    70   14.4  16.8  12.8  77.0   9.1
  6. AS15169  209.85.245.203 (209.85.245.203)                                      0.0%    70   13.7  16.1  11.6  75.8   9.5
  7. AS15169  dns.google (8.8.8.8)                                                 0.0%    70   12.3  15.6  11.0  74.5   9.9
root@XXXXXX:~# 

Then compare the Best and Wrst columns of the last hop. In this example the last hop accumulated around 60 ms of extra delay at some point during the test, which is not great, but also not terrible.
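If you prefer a single number over eyeballing the table, a minimal sketch like the following (my own construction, assuming the wide report layout shown above, where StDev is the last column and hence Best and Wrst are the third- and second-to-last fields) extracts them from the final hop and prints the difference:

# Hypothetical one-liner, not part of mtr: take the last line of the report
# (the final hop), pull Best and Wrst, and print the latency increase.
mtr -ezbw -i 0.2 -c 70 8.8.8.8 | tail -n 1 | \
  awk '{best=$(NF-2); wrst=$(NF-1); printf "best %.1f ms, worst %.1f ms, increase %.1f ms\n", best, wrst, wrst-best}'

Run it while the fast.com test is loading the link, just like the manual version.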

Looking at the timing statistics from SQM/cake:

root@XXXXXX:~# tc -s qdisc
[...]
qdisc cake 81ff: dev pppoe-wan root refcnt 2 bandwidth 31Mbit diffserv3 dual-srchost nat nowash no-ack-filter split-gso rtt 100.0ms noatm overhead 50 mpu 88 
 Sent 1880359214 bytes 8201470 pkt (dropped 14264, overlimits 7101827 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 1368192b of 4Mb
 capacity estimate: 31Mbit
 min/max network layer size:           28 /    1492
 min/max overhead-adjusted size:       88 /    1542
 average network hdr offset:            0

                   Bulk  Best Effort        Voice
  thresh       1937Kbit       31Mbit     7750Kbit
  target          9.4ms        5.0ms        5.0ms
  interval      104.4ms      100.0ms      100.0ms
  pk_delay          1us         18us        219us
  av_delay          0us          3us         30us
  sp_delay          0us          2us          3us
  backlog            0b           0b           0b
  pkts                2      8211943         3789
  bytes             120   1900943423       677302
  way_inds            0       652292           76
  way_miss            1        98498          669
  way_cols            0            0            0
  drops               0        14264            0
  marks               0           43            0
  ack_drop            0            0            0
  sp_flows            1            5            0
  bk_flows            0            1            0
  un_flows            0            0            0
  max_len            60        30660          576
  quantum           300          946          300

[...]
qdisc cake 8200: dev ifb4pppoe-wan root refcnt 2 bandwidth 95Mbit diffserv3 dual-dsthost nat nowash ingress no-ack-filter split-gso rtt 100.0ms noatm overhead 50 mpu 88 
 Sent 23319202722 bytes 17249006 pkt (dropped 26139, overlimits 26952163 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 781696b of 4750000b
 capacity estimate: 95Mbit
 min/max network layer size:           28 /    1492
 min/max overhead-adjusted size:       88 /    1542
 average network hdr offset:            0

                   Bulk  Best Effort        Voice
  thresh       5937Kbit       95Mbit    23750Kbit
  target          5.0ms        5.0ms        5.0ms
  interval      100.0ms      100.0ms      100.0ms
  pk_delay        700us        4.3ms        117us
  av_delay        163us        3.7ms         25us
  sp_delay         11us         74us          4us
  backlog            0b           0b           0b
  pkts            23665     17250252         1228
  bytes        30248775  23327569124        84431
  way_inds            0      1107591            0
  way_miss          164        99572          167
  way_cols            0            2            0
  drops               4        26135            0
  marks               0          818            0
  ack_drop            0            0            0
  sp_flows            1            5            0
  bk_flows            0            1            0
  un_flows            0            0            0
  max_len          1470         1492          793
  quantum           300         1514          724

[...]
root@XXXXXX:~# 

The reported peak delay (pk_delay) is 4.3 ms, so the 60+ ms latency-under-load increase happened either in my WiFi network or upstream of my internet access link... in other words, my SQM configuration seems to be okayish.
My bet is on WiFi (as I see spikes of similar magnitude when running the mtr test without fast.com in the background); my Apple laptop has periodic latency spikes over WiFi when it insists on scanning all channels, and for today's quick and dirty test I was not working at my normal workplace, where I use a wired ethernet connection, but from the couch.
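For repeat checks, instead of scrolling through the full tc -s qdisc dump, one can limit the output to the two SQM instances (the interface names below are the ones from my example above; adapt them to your setup):

# Egress shaper on the WAN interface and the ingress ifb it is paired with
# (interface names taken from the example above; yours may differ)
tc -s qdisc show dev pppoe-wan
tc -s qdisc show dev ifb4pppoe-wan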

I haven't seen fast.com scale much beyond 100 MBit/s in the download direction so far (coincidentally the upstream direction does not appear to be throttled), despite being on a 400/200 FTTH link (which fast.com benchmarks as 100/210 MBit/s, while real-world testing and other speedtests usually arrive at 420/220 MBit/s; fast.com did better before the summer).


Mmmh, being "blessed" with only a 100/40 (effectively 100/31) link, I do not see that ceiling, but I have a hunch that from Netflix's perspective the reason for the test might not be veridical capacity testing, but merely checking whether a link is fast enough for Netflix's highest plan with a little reserve; 100 Mbps would allow the 4 concurrent UHD (@25 Mbps) streams that their Premium tier seems to allow. I guess it is a blessing and a curse that Netflix uses the same infrastructure here that it uses to serve customers.

I will add a bigger disclaimer to the main test, thanks!

Not sure how it's measuring, but it's not very exact, I'm on 1/1 gbit :wink:

Unless it's combined speed it's showing.

Well, as indicated above, this test might not be ideal for very fast links...

Plus, you did not actually follow the recommendations; otherwise we would see a bit more data about the test.