QoS help please

All I want is for downloads to be given the lowest priority relative to everything else, and to keep pings as low as possible while the connection is fully saturated. I know how to use cake to keep the pings low, but the problem I'm having is that traffic isn't being prioritized the way I want it to be.

For example, if I am watching a video or stream, or someone else in the house is, and downloads are running at the same time, I want the download to slow down accordingly so we don't get any buffering. Can this be done in the router, without having to go to every device in the house and limit its download speed?

You might look into FireQOS.

Related forum threads...

https://forum.openwrt.org/search?expanded=true&q=fireqos

The short answer is yes. The long answer depends on how much configuration complexity you are willing to tolerate.

Yes, see FireQOS as mentioned above. What you want are connbytes-based firewall rules that tag your large downloads so they can then be classified below the default priority.
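
For illustration, a minimal sketch of such a rule with iptables (the 10 MB threshold, the mangle/FORWARD placement and the CS1 mark are my assumptions, not something prescribed in this thread); with cake's default diffserv3 mode, CS1-marked packets land in the bulk tin:

```
# Once a connection has transferred more than ~10 MB in total,
# re-mark its packets as CS1 (background/bulk).
iptables -t mangle -A FORWARD -m connbytes --connbytes 10000000: \
    --connbytes-dir both --connbytes-mode bytes \
    -j DSCP --set-dscp-class CS1
```

That is enough for the upload direction; for the download direction see the note about veth further down the thread.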

I doubt that this requirement is going to be easy to fulfill. Modern quality-adaptive streaming applications typically do not use a nice constant stream of packets (with approximately equal inter-packet gaps) but rather fetch a burst of packets every few seconds (or maybe fractions of a second, but certainly bursty). If the bursts are not delivered fast enough to keep the playback buffer filled above a threshold, they will either request lower-quality data (as that requires fewer packets per unit of time) or go into a "buffering" stall, waiting for enough data in the playback buffer to restart the stream (the latter is more prominent for non-adaptive applications). Now your background traffic will see periods with almost no competing flows and ramp up its send window, which will then cause problems once the stream server sends the next burst. And for downloads, no amount of prioritization on your side will fix the issue that, in those epochs when stream bursts and downloads coincide, the buffers upstream of your router will fill and you will see increased latency under load.
I guess the only solution for that is to allow the downloading applications only a fixed bandwidth, so that they always leave room for the streamers. It would also help if your streaming service switched to "paced" sending (basically stretching out the packets so that the desired average bitrate is achieved over shorter periods, as opposed to the bursty delivery that is common today).
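
To illustrate the "fixed bandwidth for the downloaders" idea (this is a sketch, not something from this thread): if the heavy downloads come from a known host, you can cap just that host by shaping on the LAN-facing interface. Interface name, addresses and rates below are made up, adjust them to your network:

```
# HTB on the LAN bridge: routed (internet) traffic heading to LAN hosts passes
# this qdisc, so this caps downloads without touching LAN-to-LAN traffic.
tc qdisc add dev br-lan root handle 1: htb default 20
tc class add dev br-lan parent 1:  classid 1:1  htb rate 95mbit ceil 95mbit
tc class add dev br-lan parent 1:1 classid 1:10 htb rate 20mbit ceil 20mbit   # the "download box"
tc class add dev br-lan parent 1:1 classid 1:20 htb rate 75mbit ceil 95mbit   # everybody else
tc qdisc add dev br-lan parent 1:10 fq_codel
tc qdisc add dev br-lan parent 1:20 fq_codel
tc filter add dev br-lan protocol ip parent 1: prio 1 u32 \
    match ip dst 192.168.1.50/32 flowid 1:10
```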

BTW, have you tried cake's "nat dual-srchost" keywords for egress and "nat dual-dsthost" for ingress? These at least try to offer per-internal-IP fairness inside your network (but that does not fix the issue of bursty high-bandwidth senders). If you have not, and are willing to do so, please report back your impressions. (Also note that this only has a chance of doing the right thing if video watching and downloading happen from different IP addresses/hosts, but it should also keep the "pain" of doing both from the same IP address isolated to the addresses doing so; meaning a pure video-watching host should not suffer too much from what others are doing, but I digress.)
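
For reference, a sketch of where those keywords would go in a manual tc setup (if you use OpenWrt's sqm-scripts, the same keywords can instead go into the advanced egress/ingress option strings). Interface names and bandwidths are placeholders:

```
# Egress: per-internal-source fairness ("nat" lets cake look up the pre-NAT addresses).
tc qdisc replace dev eth0.2 root cake bandwidth 18mbit nat dual-srchost

# Ingress: redirect WAN ingress to an ifb and run cake there with
# per-internal-destination fairness.
ip link add name ifb4eth0 type ifb
ip link set ifb4eth0 up
tc qdisc replace dev eth0.2 handle ffff: ingress
tc filter add dev eth0.2 parent ffff: protocol all matchall \
    action mirred egress redirect dev ifb4eth0
tc qdisc replace dev ifb4eth0 root cake bandwidth 180mbit nat dual-dsthost ingress
```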

In short, I concur that you can probably get something configured that improves on the status quo, but I doubt it will be perfect.

Connbytes to put long downloads into a bulk class, plus a ceiling on that bulk class of, say, 90% of the interface rate, works fine in my testing.
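
For concreteness, one way such a structure can be expressed with plain tc (FireQOS builds a similar HTB tree from its config file); this is only a sketch with placeholder rates and interface name, not the exact setup from this post:

```
WAN=eth0.2      # assumption: VLAN-tagged WAN device
tc qdisc add dev $WAN root handle 1: htb default 10
tc class add dev $WAN parent 1:  classid 1:1  htb rate 100mbit ceil 100mbit
tc class add dev $WAN parent 1:1 classid 1:10 htb rate 90mbit  ceil 100mbit   # normal traffic
tc class add dev $WAN parent 1:1 classid 1:20 htb rate 10mbit  ceil 90mbit    # bulk, ceiling at ~90%
tc qdisc add dev $WAN parent 1:10 fq_codel
tc qdisc add dev $WAN parent 1:20 fq_codel
# Steer the connbytes-tagged (CS1) flows into the bulk class:
tc filter add dev $WAN protocol ip parent 1: prio 1 u32 \
    match ip tos 0x20 0xfc flowid 1:20
```

This covers the upload side; the download side is what the veth note below is about.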

Note: this requires a veth-based solution on the inbound side so that iptables runs before queueing.
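
Very roughly, the veth trick looks something like the following; interface names, addresses and rates are assumptions, and the routing part in particular depends heavily on your setup (you may need to give the veth an address or a source hint for ARP to work):

```
# Create a veth pair and make one end a port of the LAN bridge.
ip link add dev veth-lan type veth peer name veth-lan-br
ip link set dev veth-lan up
ip link set dev veth-lan-br up
ip link set dev veth-lan-br master br-lan

# Steer forwarded traffic destined for the LAN through the veth, so it has
# already traversed the FORWARD chain (where the connbytes/DSCP rules live)
# by the time it is queued.
ip route add 192.168.1.0/24 dev veth-lan table 100
ip rule add iif eth0.2 lookup 100      # eth0.2 = WAN, assumption

# Attach the download shaper to the veth instead of an ifb
# (cake shown; the HTB tree above would work the same way).
tc qdisc replace dev veth-lan root cake bandwidth 180mbit nat dual-dsthost ingress
```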