SQM for video streaming and Steam In-Home Streaming over wifi

The way I see it, I've got three primary concerns, and I do them all at the same time:

  • Streaming video: Amazon/Netflix/Hulu (wifi < Internet)
  • Playing multiplayer games: StarCraft 2, Marvel Heroes (wifi <> Internet)
  • In-home game streaming from one PC to another (wifi <> wifi, no Internet; requires 50-60 Mbit/s of bandwidth and low latency)

After installing SQM, the only problem I still seem to have is with in-home game streaming (and it's better than before), which makes me think I've missed something. (Sometimes the in-home streaming hangs, though that could be more of a problem with the software.)

Part of my problem is just figuring out how to monitor what's going on when I make changes.

My router is a TP-Link WDR4300 (MIPS), and everything runs over 802.11a/n Wi-Fi. My WAN appears to be eth0.2 and is connected to my cable modem. I set eth0.2 to Cake/piece_of_cake and wlan1 to Cake/layer_cake.

 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc cake state UP qlen 1000
    link/ether e8:de:27:6d:ac:b5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::eade:27ff:fe6d:acb5/64 scope link
       valid_lft forever preferred_lft forever
7: br-lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether e8:de:27:6d:ac:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global br-lan
       valid_lft forever preferred_lft forever
    inet6 2605:6000:1025:bd::1/64 scope global dynamic
       valid_lft 554519sec preferred_lft 554519sec
    inet6 fd6e:7c96:4eeb::1/60 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::eade:27ff:fe6d:acb5/64 scope link
       valid_lft forever preferred_lft forever
8: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-lan state UP qlen 1000
    link/ether e8:de:27:6d:ac:b5 brd ff:ff:ff:ff:ff:ff
9: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc cake state UP qlen 1000
    link/ether e8:de:27:6d:ac:b5 brd ff:ff:ff:ff:ff:ff
    inet 72.182.29.72/19 brd 72.182.31.255 scope global eth0.2
       valid_lft forever preferred_lft forever
    inet6 2605:6000:ffc0:94:4e:4103:aa95:f740/128 scope global dynamic
       valid_lft 554519sec preferred_lft 554519sec
    inet6 fe80::eade:27ff:fe6d:acb5/64 scope link
       valid_lft forever preferred_lft forever
10: wlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc cake master br-lan state UP qlen 1000
    link/ether e8:de:27:6d:ac:b7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::eade:27ff:fe6d:acb7/64 scope link
       valid_lft forever preferred_lft forever
11: ifb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc hfsc state UNKNOWN qlen 32
    link/ether a6:00:cf:8e:b1:5c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a400:cfff:fe8e:b15c/64 scope link
       valid_lft forever preferred_lft forever
537: ifb4eth0.2: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc cake state UNKNOWN qlen 32
    link/ether 9e:78:40:1e:36:65 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9c78:40ff:fe1e:3665/64 scope link
       valid_lft forever preferred_lft forever
540: ifb4wlan1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc cake state UNKNOWN qlen 32
    link/ether aa:88:dc:91:84:5b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a888:dcff:fe91:845b/64 scope link
       valid_lft forever preferred_lft forever
419: ifb4eth0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc cake state UNKNOWN qlen 32
    link/ether 52:65:b7:55:54:47 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5065:b7ff:fe55:5447/64 scope link
       valid_lft forever preferred_lft forever


root@OpenWrt:~# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc cake 80a1: dev eth0 root refcnt 2 bandwidth 5Mbit diffserv3 triple-isolate rtt 100.0ms raw
qdisc ingress ffff: dev eth0 parent ffff:fff1 ----------------
qdisc noqueue 0: dev br-lan root refcnt 2
qdisc noqueue 0: dev eth0.1 root refcnt 2
qdisc cake 80fb: dev eth0.2 root refcnt 2 bandwidth 5500Kbit besteffort triple-isolate rtt 100.0ms raw
qdisc ingress ffff: dev eth0.2 parent ffff:fff1 ----------------
qdisc cake 80fe: dev wlan1 root refcnt 5 bandwidth 295Mbit diffserv3 triple-isolate rtt 100.0ms raw
qdisc ingress ffff: dev wlan1 parent ffff:fff1 ----------------
qdisc hfsc 1: dev ifb0 root refcnt 2 default 30
qdisc fq_codel 100: dev ifb0 parent 1:10 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 300: dev ifb0 parent 1:30 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 400: dev ifb0 parent 1:40 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc cake 80a2: dev ifb4eth0 root refcnt 2 bandwidth 45Mbit besteffort triple-isolate wash rtt 100.0ms raw
qdisc cake 80fc: dev ifb4eth0.2 root refcnt 2 bandwidth 45Mbit besteffort triple-isolate wash rtt 100.0ms raw
qdisc cake 80ff: dev ifb4wlan1 root refcnt 2 bandwidth 295Mbit besteffort triple-isolate wash rtt 100.0ms raw

Well, the wlan shaper seems superfluous, so I would disable it; it is the only thing that will affect your wifi <> wifi streaming. Then again, wifi is a bit icky, and if all the gaming machines are connected via the same radio, things can get rough.

But I see a few more points that could be improved:

  1. You have an hfsc shaper on ifb0 ("qdisc hfsc 1: dev ifb0 root refcnt 2 default 30"). That might be a leftover from testing qos-scripts; you should really get rid of it.

  2. You have cake on both eth0 and eth0.2, which seems wrong. Could you please post the output of:
    a) cat /etc/config/sqm
    b) tc -s qdisc

For your use case, strict per-internal-host fairness might be better suited than triple-isolate; please have a look at https://lede-project.org/docs/user-guide/sqm, especially the last section, for instructions on how to configure this.

Best Regards

Well, the wlan shaper seems superfluous, so I would disable it; it is the only thing that will affect your wifi <> wifi streaming. Then again, wifi is a bit icky, and if all the gaming machines are connected via the same radio, things can get rough.

Yeah, the reason I've been experimenting with shapers on this is that:

  1. The mouse pointer isn't updating right in games like StarCraft; some shaping seems to improve this.
  2. The whole thing seems to drop off at some points depending on the type of shaping I do, which is weird.

I could put some things back on the b/g/n radio, but most things (obviously) seemed to do better with the higher bandwidth.

That might be a leftover from testing qos-scripts. You should really get rid of it.
Hmm... I didn't notice any of my other port-based rules, but is there a quick way to flush them?

I'll post the other updates when I get home tonight, as I actually need access to the router.

You may want to look at airtime fairness for your wifi (if you are using an ath9k- or ath10k-based router).

Unfortunately, traffic shaping doesn't work very well with wifi until you have a good wifi setup to start with.

Check how many other things your router sees on wifi (iwlist scan). If you see a lot, try putting a smaller antenna on your router, putting it down on the floor, or arranging stuff to shield it in the direction of some of the other stations.

Reducing the interference from the other stations can make things work a lot better (and reducing your output power will make you interfere less with the other stations, which can help your throughput by not having them retransmit as frequently).

David Lang

you may want to look at airtime fairness for your wifi

The TP-Link WDR4300 is ath9k-based. What is "airtime fairness", and how do I configure it?

lot, try putting a smaller antenna on your router, putting it down on the floor,
or arranging stuff to shield it in the direction of some of the other stations

I'm uncertain how I'd accomplish a smaller antenna (it has 3), but it does sit behind my couch next to the outer wall. I do believe I'm likely getting interference, though; it's kind of hard not to in an apartment.

reducing your output power will make you interfere less with the other stations, which can help your throughput by not having them retransmit as frequently

I did try reducing power output (and finding a channel no one else was using), but it just seemed to make other signals stand out more. Not saying more tuning wouldn't be a good idea, though. It is hard to know when something is better in some of these cases, since I only really see problems in this one gaming use case, and they're not constant: they show up over the course of a 20-minute game, but not every 10 seconds or whatever. I'm not sure if there's a better way to analyze or test.

SQM_LIB_DIR=/usr/lib/sqm
SQM_STATE_DIR=/var/run/sqm
SQM_QDISC_STATE_DIR=${SQM_STATE_DIR}/available_qdiscs
SQM_CHECK_QDISCS="fq_codel codel pie sfq cake"
SQM_SYSLOG=1

Since I was playing with it after I posted initially, this reflects the differences:

qdisc noqueue 0: dev lo root refcnt 2
qdisc cake 80a1: dev eth0 root refcnt 2 bandwidth 5Mbit diffserv3 triple-isolate rtt 100.0ms raw
qdisc ingress ffff: dev eth0 parent ffff:fff1 ----------------
qdisc noqueue 0: dev br-lan root refcnt 2
qdisc noqueue 0: dev eth0.1 root refcnt 2
qdisc cake 811d: dev eth0.2 root refcnt 2 bandwidth 5500Kbit besteffort triple-isolate rtt 100.0ms raw
qdisc ingress ffff: dev eth0.2 parent ffff:fff1 ----------------
qdisc htb 1: dev wlan1 root refcnt 5 r2q 10 default 10 direct_packets_stat 0 direct_qlen 1000
qdisc fq_codel 110: dev wlan1 parent 1:10 limit 1001p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc ingress ffff: dev wlan1 parent ffff:fff1 ----------------
qdisc hfsc 1: dev ifb0 root refcnt 2 default 30
qdisc fq_codel 100: dev ifb0 parent 1:10 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 300: dev ifb0 parent 1:30 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 400: dev ifb0 parent 1:40 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc cake 80a2: dev ifb4eth0 root refcnt 2 bandwidth 45Mbit besteffort triple-isolate wash rtt 100.0ms raw
qdisc cake 811e: dev ifb4eth0.2 root refcnt 2 bandwidth 45Mbit besteffort triple-isolate wash rtt 100.0ms raw
qdisc htb 1: dev ifb4wlan1 root refcnt 2 r2q 10 default 10 direct_packets_stat 0 direct_qlen 32
qdisc fq_codel 110: dev ifb4wlan1 parent 1:10 limit 1001p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
root@OpenWrt:~# iw wlan1 scan | grep "primary channel:"
                 * primary channel: 40
                 * primary channel: 40
                 * primary channel: 44
                 * primary channel: 136
                 * primary channel: 149
                 * primary channel: 149
                 * primary channel: 149
                 * primary channel: 149
                 * primary channel: 153
                 * primary channel: 157
                 * primary channel: 161
                 * primary channel: 136
                 * primary channel: 104
                 * primary channel: 104
                 * primary channel: 136
                 * primary channel: 149
root@OpenWrt:~#
root@OpenWrt:~#
root@OpenWrt:~# iw wlan1 scan | grep "primary channel:" | sort
command failed: Resource busy (-16)

I have no idea what's going on with "resource busy".
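
A workaround sketch (assuming the wlan1 name from above): scanning from a radio that is busy serving clients can fail, so save one successful scan to a file ("iw wlan1 scan > /tmp/scan" on the router) and post-process that instead of re-running iw for every pipeline. The same sort can then count APs per channel; here a few lines from the scan above stand in for the saved file:

```shell
# Count neighbouring APs per channel from a saved scan.
scan='primary channel: 40
primary channel: 40
primary channel: 149
primary channel: 149
primary channel: 149
primary channel: 136'
printf '%s\n' "$scan" | sort | uniq -c
# a line like "3 primary channel: 149" means three APs share channel 149
```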

Mode: Master | SSID: Bifrost
BSSID: E8:DE:27:6D:AC:B7 | Encryption: WPA2 PSK (CCMP)
Channel: 116 (5.580 GHz) | Tx-Power: 21 dBm
Signal: -43 dBm | Noise: -91 dBm
Bitrate: 133.9 Mbit/s | Country: US

I'm trying to clear out all the rules, but some of them don't want to go? Probably just doing something wrong there.

root@OpenWrt:~# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc ingress ffff: dev eth0 parent ffff:fff1 ----------------
qdisc noqueue 0: dev br-lan root refcnt 2
qdisc noqueue 0: dev eth0.1 root refcnt 2
qdisc noqueue 0: dev eth0.2 root refcnt 2
qdisc ingress ffff: dev eth0.2 parent ffff:fff1 ----------------
qdisc fq_codel 0: dev ifb0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev ifb4eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc noqueue 0: dev wlan1 root refcnt 2
qdisc ingress ffff: dev wlan1 parent ffff:fff1 ----------------
qdisc fq_codel 0: dev ifb4eth0.2 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev ifb4wlan1 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
root@OpenWrt:~# tc qdisc del dev wlan1 root
RTNETLINK answers: No such file or directory
root@OpenWrt:~# tc qdisc del dev br-lan root
RTNETLINK answers: No such file or directory
root@OpenWrt:~# tc qdisc del dev eth0.2
RTNETLINK answers: Invalid argument
root@OpenWrt:~# tc qdisc del dev eth0.2 root
RTNETLINK answers: No such file or directory
root@OpenWrt:~# tc qdisc del dev ifb0 root
RTNETLINK answers: No such file or directory
root@OpenWrt:~# tc qdisc del dev ifb4eth0 root
RTNETLINK answers: No such file or directory
root@OpenWrt:~# tc qdisc del dev ifb4eth0.2 root
RTNETLINK answers: No such file or directory

I was able to get it down to this by shutting off SQM on the interfaces, but I haven't been able to delete what look like these extra rules:

root@OpenWrt:~# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc ingress ffff: dev eth0 parent ffff:fff1 ----------------
qdisc noqueue 0: dev br-lan root refcnt 2
qdisc noqueue 0: dev eth0.1 root refcnt 2
qdisc noqueue 0: dev eth0.2 root refcnt 2
qdisc fq_codel 0: dev ifb0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev ifb4eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc noqueue 0: dev wlan1 root refcnt 2
qdisc noqueue 0: dev wlan0 root refcnt 2

Does this mean airtime fairness is enabled?

root@OpenWrt:~# cat  /sys/kernel/debug/ieee80211/phy0/ath9k/airtime_flags
7
root@OpenWrt:~# cat  /sys/kernel/debug/ieee80211/phy1/ath9k/airtime_flags
7

What are the possible settings for airtime_flags? I'm wondering if fairness is part of the problem, e.g. my cell phone getting some stupid amount of transmission time over the wifi.
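
For what it's worth, airtime_flags reads like a bitmask: 7 is binary 111, i.e. three separate airtime features enabled at once. Which feature each bit controls depends on the ath9k patch revision (the driver source is the authoritative reference), but decoding which bits are set is simple:

```shell
# Decode the airtime_flags bitmask (value taken from the debugfs read above).
flags=7
for bit in 0 1 2 3; do
    if [ $(( (flags >> bit) & 1 )) -eq 1 ]; then
        echo "bit $bit enabled"
    fi
done
```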

Unfortunately this is not what I was looking for:

root@router:~# cat /etc/config/sqm

config queue
	option debug_logging '0'
	option verbosity '5'
	option upload '9545'
	option linklayer 'ethernet'
	option overhead '34'
	option linklayer_advanced '1'
	option tcMTU '2047'
	option tcTSIZE '128'
	option tcMPU '64'
	option qdisc_advanced '1'
	option ingress_ecn 'ECN'
	option egress_ecn 'NOECN'
	option qdisc_really_really_advanced '1'
	option squash_dscp '0'
	option squash_ingress '0'
	option download '46246'
	option qdisc 'cake'
	option script 'layer_cake.qos'
	option iqdisc_opts 'nat dual-dsthost mpu 64'
	option eqdisc_opts 'nat dual-srchost mpu 64'
	option interface 'pppoe-wan'
	option enabled '1'
	option linklayer_adaptation_mechanism 'default'

I was mostly concerned about the residual shapers (htb/hfsc/cake); the fq_codels are fine, as I believe LEDE defaults to fq_codel on all interfaces (which is a sane and decent choice). The easiest way to get rid of those shapers that are just leftovers from non-running sqm or qos instances is a reboot of the router...
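
To check which interfaces still carry an explicit shaper (after the reboot, or before deciding what to delete), filtering for the shaper types works. The sketch below runs against a saved copy of the output; on the router you would pipe `tc qdisc show` straight into the grep:

```shell
# List only the residual shapers (htb/hfsc/cake) in `tc qdisc show` output;
# the default noqueue/fq_codel entries are filtered out.
tc_output='qdisc noqueue 0: dev lo root refcnt 2
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024
qdisc hfsc 1: dev ifb0 root refcnt 2 default 30
qdisc htb 1: dev wlan1 root refcnt 5 r2q 10 default 10'
printf '%s\n' "$tc_output" | grep -E 'qdisc (htb|hfsc|cake) '
# anything listed can then be removed with e.g. `tc qdisc del dev ifb0 root`
```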

As I said, the fq_codel instances are just fine and dandy...
Sorry, no real clue about the wifi settings, but I would always take @dlang's input very seriously, as I know that he has first-hand experience in making conference-scale wifi work well.

 cat /etc/config/sqm

config queue
        option debug_logging '0'
        option verbosity '5'
        option linklayer 'none'
        option enabled '1'
        option interface 'eth0.2'
        option qdisc 'cake'
        option qdisc_advanced '0'
        option download '55000'
        option upload '5000'
        option script 'layer_cake.qos'

config queue
        option debug_logging '0'
        option verbosity '5'
        option qdisc 'fq_codel'
        option script 'simple.qos'
        option linklayer 'none'
        option enabled '1'
        option interface 'wlan1'
        option download '295000'
        option upload '295000'
        option qdisc_advanced '0'

But I would always take @dlang's input very seriously, as I know that he has first-hand experience in making conference-scale wifi work well.

Right, and I'm not. I set my transmit power to 0 dBm (1 mW), and tried using my b/g/n antenna for the TV, but apparently some of the streams didn't like not having that extra bandwidth on wireless.

Though, one observation tonight: the network periodically gets "slow" even when other things aren't happening. Something is obviously happening, but I'm not sure what. Most of the time now I have a 1-5 ms ping, but occasionally it spikes to over 100 ms, and then it goes back down after about 1-2 seconds.

The 2.4 GHz channel is horrible for gaming. If you can't wire your place with Cat 6A cable because you're leasing, and you don't want the hassle of a long exposed cable, may I suggest a powerline adapter? If you have everything set up right, they are pretty good. I used to run one when gaming on my PS4 and had a much more stable connection than over Wi-Fi. It would also probably take care of some of that fairness without having to use SQM or traffic shapers, since the powerline adapter on average can only utilize around 40% of the overall speed allowed by your ISP (maybe newer models have improved that). If you want improved latency you should still run SQM. I was running a 200/20 connection and would get around 60-80 Mbps on the downstream and 20 Mbps on the upstream over a powerline adapter.

If you decide to go this route, remember that you want all devices with motorized fans and AC-to-DC power converters (phone chargers, laptop chargers, etc.) behind a surge protector if they're on that circuit, to get less noise on the line.

@mj5030 This is interesting. I had always thought of powerline adapters as a useful add-on when there was no other way to extend your network, but I suspected they would be a less-than-perfect solution, especially if they induce latency.

So, I'm curious to hear your experience:

  • What brand/model are you using?
  • Standard US 120V outlets? Distance between outlets (if you can say)?
  • What data rates do they achieve (compared to your direct wire/wi-fi connection)?
  • Do they induce latency (more than the direct wire/wi-fi)? The easiest way to find out is to use the procedure at Tests for Bufferbloat. I'd be curious to know the numeric results from either the DSLReports site or the simple ping-while-downloading test.

Thanks!

The 2.4 GHz channel is horrible for gaming.

I'm on 5 GHz, and really have no problems with the games themselves. I'm trying to figure out, though, if there's a way to optimize Steam's In-Home Streaming (streaming the I/O from one computer to another); it's obvious I've managed to optimize it to some degree. That said, I have thought about throwing a line over the back of the couch, like I did just to update OpenWrt -> LEDE.

TP-Link AV500 Nano Kit (TL-PA4010KIT)

  • Don't let it fool you: the connection between them can support up to 500 Mbps, but it only has a 10/100 Ethernet adapter, so overall device-to-router throughput can only be 100 Mbps (newer and more expensive models have 10/100/1000 adapters)

Standard US 120V outlets
Outlets are currently on different circuits judging from the breaker box; estimated maybe 50-100 feet worth of wiring between them, passing through the breaker box

Direct wire: 200+ Mbps down / 20+ Mbps up
2.4 GHz wireless: 40-50 Mbps down / 20+ Mbps up (really crowded where I live)
5 GHz wireless: router doesn't support it
Powerline adapter: 35-40% of ISP-allowed bandwidth down / 20+ Mbps up
(I had 100 Mbps before and would get around 35-40 Mbps; once I upgraded to 200 Mbps I got around 65-80 Mbps. I'm not sure why powerline adapters act this way, but I read somewhere that's what to expect; not sure if newer models support better throughput.)

I'm currently not running SQM because my router's processor can't handle above 140 Mbps with it enabled. But I did test it, and the downstream and upstream were about the same on both Wi-Fi and the powerline adapter, though with Wi-Fi I noticed spikes. Speedtest.net ping was about the same: 9 ms direct connect and Wi-Fi, 11 ms powerline adapter. Not too significant a difference.

This is very helpful info. And Powerline adapters seem to be more powerful than I expected.

The one remaining question I have is whether they induce latency, whether you're using SQM or not... (Most speed tests' reported ping time is bogus: those tests only measure latency when the line's quiet, giving the best possible number. They ignore the latency when you're actually using the line (uploading or downloading), which is when latency/lag really matters.)

A simple test procedure is:

  • Start a continuous ping to Google. Notice the response times (in msec)
  • Start a speedtest - any of the major vendors would be fine
  • During the speed test, notice how much change there is in the response times of the ping test

You'd have to do this twice: first for an ethernet/wifi connection directly through the router, and second for a computer that's connected through the powerline adapter. I'd be interested to see whether the powerline adapter adds additional latency to the ping numbers. Thanks again!
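
The "notice how much change there is" step can be automated a little: save the ping output to a file (e.g. `ping -i 0.2 www.google.com | tee ping.log`) and summarize it afterwards. A minimal sketch; the four readings below are just example figures, two idle and two under load:

```shell
# Boil a saved ping log down to average and worst-case latency.
ping_log='64 bytes from x: icmp_seq=1 ttl=56 time=13.3 ms
64 bytes from x: icmp_seq=2 ttl=56 time=13.5 ms
64 bytes from x: icmp_seq=3 ttl=56 time=50.8 ms
64 bytes from x: icmp_seq=4 ttl=56 time=100 ms'
printf '%s\n' "$ping_log" | awk -F'time=' '
    { v = $2 + 0; if (v > max) max = v; sum += v; n++ }
    END { printf "samples=%d avg=%.1f max=%.1f ms\n", n, sum/n, max }'
```

Run it once on the idle portion of the log and once on the loaded portion; the difference between the two averages is the induced latency.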

you may want to look at airtime fairness for your wifi

The TP-Link WDR4300 is ath9k-based. What is "airtime fairness", and how do I configure it?

Airtime Fairness is a way for the access point to give each device a fair share
of time instead of a fair share of data transfer.

The problem with giving devices a fair share of data transfer is that one device may be operating at 1 Mb/s and another may be operating at 56 Mb/s. If you give them the same bandwidth, your network's total data transfer is very slow (because the slow device gets 56x as much time as the fast device). [1]

With airtime fairness, each device gets the same amount of time to transmit (or closer to it, anyway), which means that your faster device will transmit a lot more data, while your slower device will get much smaller transmission slots. Besides the amount of data transferred, this means that the fast device never has to wait as long for the slow device as it would if the slow device were getting a much larger timeslot.
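
The 1 Mb/s vs 56 Mb/s example can be put in numbers: per-data fairness gives each station equal bytes, so the aggregate rate is the harmonic mean of the two PHY rates, while airtime fairness gives each station equal time, so the aggregate is the arithmetic mean:

```shell
# Aggregate throughput of two stations under the two fairness policies.
awk 'BEGIN {
    slow = 1; fast = 56                   # station PHY rates, Mbit/s
    data_fair = 2 / (1/slow + 1/fast)     # equal bytes per station
    time_fair = (slow + fast) / 2         # equal airtime per station
    printf "per-data fairness: %.2f Mbit/s aggregate\n", data_fair
    printf "airtime fairness:  %.2f Mbit/s aggregate\n", time_fair
}'
# -> per-data fairness: 1.96 Mbit/s aggregate
# -> airtime fairness:  28.50 Mbit/s aggregate
```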

This is going through a lot of development right now; the make-wifi-fast mailing list is where a lot of the discussion is happening, and the patches are being fed into LEDE. Some of them are in the recent releases, some are too new to be in anything but the nightlies, and some are just patches on the list.

David Lang

[1] It's actually even worse than this: while the slow device is taking so much time for such a large chunk of data, it's far more likely that something will interfere with the transmission and the entire large block will need to be retransmitted.

lot, try putting a smaller antenna on your router, putting it down on the floor,
or arranging stuff to shield it in the direction of some of the other stations

I'm uncertain how I'd accomplish a smaller antenna (it has 3), but it does sit behind my couch next to the outer wall. I do believe I'm likely getting interference, though; it's kind of hard not to in an apartment.

Disconnect one or two of them. Get a cheap directional antenna and point it away from the outside wall, or try putting a cookie sheet or aluminum foil on the outer wall (ideally position the device so that the antenna is ~3 cm from the metal).

Above all, try not to use the 2.4 GHz band; there are only three channels that are effectively available (1, 6, 11), and someone using something in the middle is going to clobber multiple of these. 5 GHz has many more channels, so it's easier to not interfere with other users.

It is possible to cover an area effectively on 2.4 GHz (I do it at the Scale conference), but it's pretty much impossible without the cooperation of the other access points near you.

David Lang

So here I would add:
option qdisc_advanced '1'
option qdisc_really_really_advanced '1'
option iqdisc_opts 'nat dual-dsthost mpu 64'
option eqdisc_opts 'nat dual-srchost mpu 64'
option linklayer 'ethernet'
option overhead '18'
option linklayer_adaptation_mechanism 'cake'
This better accounts for your cable link and implements per-internal-host IP fairness for the WAN link.
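
For a rough feel of why the overhead setting matters: the fraction of the shaped rate actually available for payload shrinks as packets get smaller. A sketch, assuming 18 bytes of per-packet overhead just for illustration:

```shell
# Payload share of the shaped rate for a few packet sizes, with 18 bytes
# of per-packet link-layer overhead assumed.
awk 'BEGIN {
    split("1500 100 64", sizes)
    for (i = 1; i <= 3; i++) {
        s = sizes[i]
        printf "%4d-byte packets: %.1f%% of the shaped rate is payload\n", s, 100 * s / (s + 18)
    }
}'
# -> 1500-byte packets: 98.8%, 100-byte packets: 84.7%, 64-byte packets: 78.0%
```

Small-packet traffic (gaming, VoIP) loses the most to unaccounted overhead, which is why telling the shaper about it (and about the mpu) helps exactly those flows.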

Is this really periodic? I observed something similar from my MacBook in the past, and I now believe those were the times the MacBook scanned all wifi frequencies for APs.

Wireless - 40.95 Mbps Down / 21.71 Mbps Up

ping -i 2 www.google.com
# ping www.google.com at a 2-second interval
PING www.google.com (172.217.2.228) 56(84) bytes of data.
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=1 ttl=56 time=13.3 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=2 ttl=56 time=17.3 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=3 ttl=56 time=13.8 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=4 ttl=56 time=13.8 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=5 ttl=56 time=13.5 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=6 ttl=56 time=13.4 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=7 ttl=56 time=13.7 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=8 ttl=56 time=13.9 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=9 ttl=56 time=14.0 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=10 ttl=56 time=13.6 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=11 ttl=56 time=13.0 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=12 ttl=56 time=13.4 ms
#Downstream test started
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=13 ttl=56 time=50.8 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=14 ttl=56 time=34.4 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=15 ttl=56 time=100 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=16 ttl=56 time=33.2 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=17 ttl=56 time=31.4 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=18 ttl=56 time=68.2 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=19 ttl=56 time=43.1 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=20 ttl=56 time=50.3 ms
#Downstream test ended
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=21 ttl=56 time=13.3 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=22 ttl=56 time=17.3 ms
#Upstream test started
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=23 ttl=56 time=36.9 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=24 ttl=56 time=43.3 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=25 ttl=56 time=39.0 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=26 ttl=56 time=52.0 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=27 ttl=56 time=44.1 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=28 ttl=56 time=32.5 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=29 ttl=56 time=47.2 ms
#upstream test ended
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=30 ttl=56 time=13.1 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=31 ttl=56 time=167 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=32 ttl=56 time=13.9 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=33 ttl=56 time=12.9 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=34 ttl=56 time=14.0 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=35 ttl=56 time=15.0 ms
64 bytes from dfw28s01-in-f4.1e100.net (172.217.2.228): icmp_seq=36 ttl=56 time=12.9 ms
^C
--- www.google.com ping statistics ---
36 packets transmitted, 36 received, 0% packet loss, time 70078ms
rtt min/avg/max/mdev = 12.937/32.059/167.200/30.069 ms

Direct Connection - 209.03 Mbps Down / 21.79 Mbps Up

ping -i 2 www.google.com
PING www.google.com (172.217.6.164) 56(84) bytes of data.
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=1 ttl=56 time=12.4 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=2 ttl=56 time=12.9 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=3 ttl=56 time=12.0 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=4 ttl=56 time=11.9 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=5 ttl=56 time=11.8 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=6 ttl=56 time=11.5 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=7 ttl=56 time=13.1 ms
#Downstream test started
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=8 ttl=56 time=30.9 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=9 ttl=56 time=39.9 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=10 ttl=56 time=117 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=11 ttl=56 time=196 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=12 ttl=56 time=257 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=13 ttl=56 time=320 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=14 ttl=56 time=370 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=15 ttl=56 time=58.7 ms
#Downstream test ended
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=16 ttl=56 time=11.9 ms
#Upstream test started
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=17 ttl=56 time=47.4 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=18 ttl=56 time=45.6 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=19 ttl=56 time=46.4 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=20 ttl=56 time=45.5 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=21 ttl=56 time=43.8 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=22 ttl=56 time=44.7 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=23 ttl=56 time=42.4 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=24 ttl=56 time=40.7 ms
#Upstream test ended
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=25 ttl=56 time=12.7 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=26 ttl=56 time=13.0 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=27 ttl=56 time=12.0 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=28 ttl=56 time=12.3 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=29 ttl=56 time=11.7 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=30 ttl=56 time=12.1 ms
64 bytes from dfw25s17-in-f164.1e100.net (172.217.6.164): icmp_seq=31 ttl=56 time=12.9 ms
^C
--- www.google.com ping statistics ---
31 packets transmitted, 31 received, 0% packet loss, time 60059ms
rtt min/avg/max/mdev = 11.555/62.352/370.323/91.905 ms

Powerline Adapter - 57.44 Mbps Down / 21.72 Mbps Up

ping -i 2 www.google.com
PING www.google.com (172.217.9.132) 56(84) bytes of data.
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=1 ttl=56 time=17.4 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=2 ttl=56 time=21.2 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=3 ttl=56 time=16.6 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=4 ttl=56 time=17.5 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=5 ttl=56 time=16.8 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=6 ttl=56 time=18.0 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=7 ttl=56 time=16.7 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=8 ttl=56 time=19.9 ms
#Downstream test started
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=9 ttl=56 time=40.5 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=10 ttl=56 time=20.7 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=11 ttl=56 time=20.5 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=12 ttl=56 time=24.8 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=13 ttl=56 time=31.9 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=14 ttl=56 time=37.6 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=15 ttl=56 time=29.9 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=16 ttl=56 time=26.6 ms
#Downstream test ended
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=17 ttl=56 time=17.3 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=18 ttl=56 time=17.4 ms
#Upstream test started
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=19 ttl=56 time=49.0 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=20 ttl=56 time=49.4 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=21 ttl=56 time=47.8 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=22 ttl=56 time=48.6 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=23 ttl=56 time=49.5 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=24 ttl=56 time=49.7 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=25 ttl=56 time=49.2 ms
#Upstream test ended
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=26 ttl=56 time=15.4 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=27 ttl=56 time=16.6 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=28 ttl=56 time=17.3 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=29 ttl=56 time=19.1 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=30 ttl=56 time=18.3 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=31 ttl=56 time=17.6 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=32 ttl=56 time=17.3 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=33 ttl=56 time=18.0 ms
64 bytes from dfw25s26-in-f4.1e100.net (172.217.9.132): icmp_seq=34 ttl=56 time=16.9 ms
^C
--- www.google.com ping statistics ---
34 packets transmitted, 34 received, 0% packet loss, time 66075ms
rtt min/avg/max/mdev = 15.441/26.843/49.746/12.718 ms
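A quick note on reading those summary lines: the `mdev` figure is the useful one for bufferbloat, since it captures how much the RTT swings during the run (91.9 ms in the first test vs 12.7 ms here). As a sketch of what ping is reporting, here is how iputils computes those four numbers, assuming `mdev` is the population standard deviation of the samples (my reading of the iputils source):

```python
import math

def ping_stats(rtts):
    """Compute min/avg/max/mdev the way iputils ping reports them.
    mdev here is the population standard deviation of the RTT
    samples (an assumption based on the iputils implementation)."""
    n = len(rtts)
    avg = sum(rtts) / n
    # sqrt(E[x^2] - E[x]^2): spread of the RTTs around the mean
    mdev = math.sqrt(sum(r * r for r in rtts) / n - avg * avg)
    return min(rtts), avg, max(rtts), mdev

# The upstream-test RTTs from the run above cluster near 48-50 ms,
# so mdev stays small even though the average is elevated.
print(ping_stats([49.0, 49.4, 47.8, 48.6, 49.5, 49.7, 49.2]))
```

So a loaded link with a *steady* 50 ms RTT shows a small mdev, while the first (bloated) run's spikes to 370 ms are what drive its mdev to 91.9 ms.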

Please note that on stock firmware my 10/100 devices can get around 96 Mbps, but on LEDE they only get around 60 Mbps max; I have posted about this issue. Even though both the router and the laptop have 10/100/1000 NICs, the Powerline adapters being only 10/100 cut the overall downstream throughput somewhat.

The high ping on the direct connection was most likely the router being under higher load at the time due to the increased downstream speed. I will have to try the SFE build sometime and see whether it reduces the latency at those speeds.

Is this really periodic?

I think so, but I'm not sure whether the period is consistent; I've yet to figure out how to measure it. It's really only noticeable in the constant, unbufferable stream of in-home streaming.
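One rough way to check whether the hangs recur on a fixed period, assuming you capture a long `ping` run to a file while streaming: parse the RTTs, flag the outliers, and look at the gaps between them. The threshold and sample lines below are made up for illustration; with `ping -i 0.2` each sequence-number gap is 0.2 s.

```python
import re

def spike_intervals(ping_lines, threshold_ms=100.0):
    """Return the gaps (in icmp_seq counts) between latency spikes
    in ping output. A tight cluster of identical gap values would
    suggest the spikes have a consistent period."""
    spikes = []
    for line in ping_lines:
        m = re.search(r"icmp_seq=(\d+) ttl=\d+ time=([\d.]+) ms", line)
        if m and float(m.group(2)) > threshold_ms:
            spikes.append(int(m.group(1)))
    return [b - a for a, b in zip(spikes, spikes[1:])]

# Hypothetical excerpt: spikes at seq 3, 8, 13 -> evenly spaced.
sample = [
    "64 bytes from x: icmp_seq=3 ttl=56 time=250 ms",
    "64 bytes from x: icmp_seq=4 ttl=56 time=12.1 ms",
    "64 bytes from x: icmp_seq=8 ttl=56 time=310 ms",
    "64 bytes from x: icmp_seq=13 ttl=56 time=280 ms",
]
print(spike_intervals(sample))  # → [5, 5]
```

If the intervals come out roughly equal (here, every 5 probes), that points at a periodic cause such as a background scan rather than random congestion.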

from my MacBook in the past, and I now believe those were the times the MacBook scanned all Wi-Fi frequencies for APs

Hmm, I wonder if there's a way to shut that off or reduce it: only scan for saved APs you already know about, or don't scan while connected.