I am running gwlim's Sept 2017 fast path build on a TL-WR1043NDv3 with SQM on the LAN interface (eth1). It has been running amazingly well. However, I have noticed a problem with what I would guess is the queueing, and I would like advice on how to address it.
I was playing a YouTube video on my PlayStation 4, which is connected via LAN (eth1), and the audio would skip whenever I checked my PlayStation Messages (specifically messages with images) on my Android device, which is connected via wireless. My wireless devices are not controlled by SQM's qdiscs. On an overall 210~215Mbps/21-22Mbps connection, my SQM settings are 195,000kbps down and 19,500kbps up respectively. That should leave enough headroom to load these messages without problems, and YouTube comes nowhere near using that much bandwidth. So my assumption was a conflict in the queueing system. My /etc/config/sqm is as follows:
config queue 'eth1'
	option interface 'eth1'
	option egress_ecn 'ECN'
	option itarget 'auto'
	option etarget 'auto'
	option debug_logging '0'
	option verbosity '5'
	option qdisc 'cake'
	option qdisc_advanced '1'
	option qdisc_really_really_advanced '1'
	option ingress_ecn 'NOECN'
	option linklayer 'ethernet'
	option iqdisc_opts 'nat dual-srchost mpu 64'
	option eqdisc_opts 'nat dual-dsthost mpu 64'
	option overhead '18'
	option squash_dscp '1'
	option squash_ingress '1'
	option script 'piece_of_cake.qos'
	option download '19500'
	option upload '195000'
	option enabled '1'
Please bear in mind that this is on an inward (LAN-facing) interface, so the download/upload values are intentionally swapped.
My wireless devices are on a specific IP block reserved in DHCP (as are all my other devices that are not guests). Are there any iptables rules I can add to ensure my wireless devices take lower priority than my wired devices so this does not occur? Or any other workaround?
It seems like you have no queueing for packets flowing from WAN to wireless. So if a wireless device starts a big upload or download, you can easily saturate your ISP's queue or similar. I don't think there is anything you can do unless you force all packets to go through a queue. You can do that by bridging a veth pair into your LAN bridge, putting a queue on the injector end, and then policy-routing all LAN-bound traffic through the veth...
Tricky, but it works. I'm not sure whether the overhead essentially erases your fastpath advantages, though.
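For the record, the veth approach described above might be sketched roughly like this. The interface names (br-lan, veth0/veth1, eth0 as WAN), the LAN subnet, the routing table number, and the shaping rate are all assumptions based on this thread, not a tested recipe; adapt them to your setup:

```shell
# Create a veth pair; veth1 gets bridged, veth0 is the "injector" end.
ip link add veth0 type veth peer name veth1
ip link set veth1 master br-lan   # attach one end to the LAN bridge (name assumed)
ip link set veth0 up
ip link set veth1 up

# Put the shaper on the injector end so everything routed through it is queued.
tc qdisc add dev veth0 root cake bandwidth 195mbit

# Policy-route LAN-bound traffic through veth0 instead of letting
# the bridge/fastpath deliver it directly (table 100 is an arbitrary choice).
ip route add 192.168.1.0/24 dev veth0 table 100
ip rule add iif eth0 lookup 100   # eth0 assumed to be the WAN-facing interface
```

The key point is that the qdisc sits on the veth end every LAN-bound packet must traverse, so wireless and wired clients end up sharing one queue.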
Thank you for your help @dlakelan. I will try that out and see how it works.
Does putting SQM on the WAN interface instead of the LAN not solve your problem?
SQM on WAN is not compatible with the fastpath acceleration.
That's kind of sad for people without a powerful device. Do you think that will change in the future, or is it unlikely?
The whole concept of SQM is to have full control over the packets (all packets), sending or delaying them exactly at the intended time. For this it is essential that every packet passes through the full networking stack normally.
The whole concept of fastpath or flow-offloading, however, is to bypass as much of the kernel as possible, ideally even offloading to the hardware without the kernel ever seeing the packet. This only works for established connections, where fastpath/flow-offloading simply categorizes each packet into the same 'flow' and applies the same rules as for all previous packets, without forcing it all the way through the kernel's firewall rule sets.
The two strategies are fundamentally at odds and will 'never' work together. The only thing you can hope for is for fastpath/flow-offload to notice that it can't cooperate with SQM and to silently disable itself (that is, to notice that this traffic, which in the case of SQM would be all traffic, is too complex to be offloaded).
@slh, thanks for the details...
Now it makes sense to me that these two techniques don't go well together!
Hmm, I thought that the new upstream kernel software "fastpath" was compatible with SQM. Did I get this wrong?
That would be software flow-offload, and yes, it's compatible (hardware flow-offload is not), in the sense that it doesn't consider SQM traffic offloadable and lets the kernel deal with it as usual.
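For anyone wanting to try the software flow-offload mentioned above: on recent OpenWrt releases it is controlled from /etc/config/firewall. A minimal sketch (assuming a release whose firewall supports these options; check your version's documentation):

config defaults
	option flow_offloading '1'
	option flow_offloading_hw '0'

With flow_offloading_hw left at '0', only the software path is used, which per the post above coexists with SQM by leaving SQM-shaped traffic to the normal kernel path.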