PS: My son is not home until the weekend, so I really cannot test gaming performance, but since the rest of the family is still quiet, I take that as a good sign.
This switch has more options to mark DSCP packets without needing iptables on the firewall, but I can't set that up (it's for advanced users). Still, with this my game feels fluid.
(tins 6 + 7 on upload are somewhat cramped... I'm cheating with some web bursts in there or something; I think you guys might be accidentally stuffing too much gaming traffic in there (i.e. EF) maybe...)
Hmm, so what this will do on a normal link is massively throttle all flows that ever queue more than one packet, so at least run a speedtest (from a different computer) in parallel to your gaming session to assess the utilization cost inflicted by making cake drop/mark that aggressively. It is quite possible that you are happy with the trade-off you are making, but please make sure you know what you are getting yourself into. Also note that a smaller interval will make cake more RTT-biased, that is, flows with RTTs considerably larger than cake's hidden interval (so 0.1 ms for datacentre) will see less throughput than flows with RTTs closer to 0.1 ms. Again, this consequence might be acceptable to you, but to make an educated decision you might want to know the details.
Well, the target defines the acceptable persistent queueing delay, so as long as the currently experienced sojourn time stays below that threshold cake will neither drop nor mark a packet. Cake tries to automatically scale the target such that it allows at least 1.5 times the MTU's worth of transmission time, so I am doubtful that, when requesting datacentre, you actually got 0.05/0.1 ms configured. If you could post the output of tc -s qdisc here, we might be able to figure out which parameters cake actually used...
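For reference, something along these lines would show it (a sketch; eth0 and the 14500kbit rate are placeholders for your own interface and shaper rate):
  tc qdisc replace dev eth0 root cake bandwidth 14500kbit datacentre
  tc -s qdisc show dev eth0
The per-tin table in the output contains the target and interval rows cake actually chose.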
Like I mentioned earlier, this thing is the most complicated thing I have ever seen in many years... It's like black magic, only accessible to a few wizards...
As far as I can tell, I'm only using two:
$IPT -t mangle -A PREROUTING -p tcp -m conntrack --ctorigsrc 192.168.1.150 -m multiport ! --ports 80,443,8080 -j DSCP --set-dscp-class CS6 -m comment --comment "PS4" # for the PS4 (or an Xbox, etc.); change the IP to match your own addressing.
$IPT -t mangle -A PREROUTING -p udp -m conntrack --ctorigsrc 192.168.1.150 -m multiport ! --ports 80,443,8080 -j DSCP --set-dscp-class CS6 -m comment --comment "PS4"
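To verify that these rules are actually matching traffic, the per-rule counters can be checked (a sketch; the table/chain match the rules above):
  iptables -t mangle -L PREROUTING -v -n
The pkts/bytes columns of the "PS4" rows should increase while the console is playing.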
I don't know, but since my son is not home until the weekend and, of course, the PS4 is off, maybe that explains why we don't see packets going elsewhere.
Exactly, the drop would be exorbitant, at least for my download speed: 15 Mbps (capped at 14.5 Mbps) shows 13 Mbps with datacentre and 13.8 Mbps without it.
So what you see in there is that cake adjusted the target up to 1.25-3.19 ms (which is roughly 1.5 times the duration required to transmit a full-MTU-sized packet), a far cry from the requested 0.05 ms; the interval, however, was only upscaled to the default minimum of two times the target. That still means that cake is much more prone to drop/mark packets due to the low interval setting, but at least the sojourn time is sane.
So a speedtest with the effective ~2/4ms target/interval combination only gives you a reduction of 100 - 100*13/13.8 = 5.8%? Not that bad. I guess the question then is how this affects throughput to/from servers further away.
BTW with your settings you can expect at best:
14.5 * ((1500-8-20-20)/(1500+14)) = 13.91 Mbps throughput, so 13.8 is already pretty much spot on*
*) or it would be, if speedtests did not generally tend to overestimate the speed a bit.
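If anyone wants to redo that arithmetic for a different rate or MTU, a one-liner (a sketch; the 8/20/20 are the PPPoE/IPv4/TCP header bytes and 14 the per-packet Ethernet overhead assumed above):
  awk 'BEGIN { rate = 14.5; mtu = 1500; printf "%.2f Mbps\n", rate * (mtu - 8 - 20 - 20) / (mtu + 14) }'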
Because you have two WANs, this is not going to work exactly correctly: the LAN-side queue will be getting packets from both WANs and so will control only the aggregate speed, which is not really enough to stop bufferbloat on both WANs. It will work reasonably well much of the time, but if any stream is saturating one of the WANs it will not be throttled by a queue that only limits the total speed.
To solve this, you could add queues on the download side of each WAN, perhaps using simplest.qos, just to control the speed on each WAN separately.
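One way to do that (a rough sketch, not a drop-in config; wan1, the IFB name and the 14mbit rate are placeholders, and the same block would be repeated for wan2):
  tc qdisc add dev wan1 handle ffff: ingress
  ip link add ifb-wan1 type ifb
  ip link set ifb-wan1 up
  tc filter add dev wan1 parent ffff: protocol all matchall action mirred egress redirect dev ifb-wan1
  tc qdisc add dev ifb-wan1 root cake bandwidth 14mbit besteffort
This way each WAN's download is shaped by its own cake instance instead of only the aggregate LAN-side queue.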
Thanks for chiming in about this particular scenario and all the knowledge you've been sharing in this and other threads.
I will try your suggestion once I get home tonight and post back with the results.
Absolutely, which benefits us gamers at the cost of speed. I was thinking: is there a way we can customize the shaper to send game packets as soon as they arrive, especially when the network is congested?
Just putting it here for anyone wondering about the meaning of the tc -s qdisc table values: it is discussed in this post: SQM Reporting
In terms of cake output, cake can be viewed as an overall shaper with traffic classified into groups called Tins. A cake instance can have 1, 3, 4 or 8 tins. A 4 tin configured cake looks like:
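(illustrative output for a diffserv4 instance shaped to 20Mbit; the handle, interface and all counters are made up, and the ack_drop and quantum rows are trimmed)
qdisc cake 8001: dev eth0 root refcnt 2 bandwidth 20Mbit diffserv4 triple-isolate rtt 100ms noatm overhead 38
 Sent 30027492 bytes 45233 pkt (dropped 14, overlimits 33433 requeues 0)
                   Bulk  Best Effort        Video        Voice
  thresh       1250Kbit       20Mbit       10Mbit        5Mbit
  target            5ms          5ms          5ms          5ms
  interval        100ms        100ms        100ms        100ms
  pk_delay          0us        212us         70us         25us
  av_delay          0us         80us         21us          5us
  sp_delay          0us         13us          5us          5us
  backlog            0b           0b           0b           0b
  pkts                0        40023         3117         2093
  bytes               0     27818365      1927514       281613
  way_inds            0          121            0            0
  way_miss            0          318           16           25
  way_cols            0            0            0            0
  drops               0           14            0            0
  marks               0            2            0            0
  sp_flows            0            2            1            1
  bk_flows            0            1            0            0
  un_flows            0            0            0            0
  max_len             0         1514          590          254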
For the 3 & 4 tin instances of cake, the tins are given descriptive names; in the other instances they're just called 'Tin n'. Tins on the left have lower priority than tins on the right. Tin selection is (usually) determined by the packet's DSCP value. In terms of the stats values reported, here's what they mean AFAIUI.
thresh: defines how much bandwidth is consumed in this tin before it switches to a lower priority. In the above example, voice is guaranteed 5Mbit of bandwidth. If video needed up to 10Mbit and voice was consuming more than 5Mbit, video would start stealing bandwidth from voice until the bandwidth minimums were reached. They're soft limits, in that anyone can have all of the bandwidth as long as no one with higher priority needs it.
target: the 'ideal' target delay that we'll tolerate for an individual flow, i.e. how old stuff can be in the queue before we'll consider taking action, like shooting packets to tell people to slow down.
interval: I need to check this!
pk_delay: peak packet delay, the oldest packet we had in the queue, i.e. how long the oldest packet hung around before we got to dequeue it.
av_delay: the average delay of the packets in the queue that we dequeued.
sp_delay: the delay in the queue for sparse packets. Oh boy: packet flows that send on a continuous basis are regarded as 'bulk' flows (ftp transfer). Packets that flow occasionally are regarded as 'sparse' (think interactive ssh). So this tells you how delayed the 'interactive' packets have been.
NB: All the above delay stats are EWMA averages so they can lag slightly or if there's no packet in a tin it can appear to stall.
backlog: number of bytes in the queue, waiting to be sent when the shaper says there's space/time to be able to send.
pkts(c): number of packets that have flowed through this tin
bytes(c): number of bytes that have flowed through this tin
way_inds(c), way_miss(c), way_cols(c): each packet flow is ideally put into an individual queue; these are almost like cache stats and show how successful we were in achieving that. Mostly uninteresting.
drops(c): number of packets we dropped as part of our queue control mechanism
marks(c): number of packets we ECN marked (on ECN capable flows) in preference to dropping them.
ack_drop(c): if ACK filtering is enabled, the number of unnecessary ACK-only packets that were dropped.
sp_flows: the number of sparse packet flows in this tin
bk_flows: the number of bulk packet flows in this tin
un_flows: the number of unresponsive packet flows in this tin. If a flow doesn't respond to codel style 'slow down' signalling in a normal manner then it is considered unresponsive and is handled by the 'blue' aqm instead.
max_len: the largest packet we've seen in the queue.
quantum: granularity in bytes of how much we can de-queue in our queues and release to the shaper.
Most of the figures are an instantaneous snapshot or 'gauge' of the current state; I've indicated with a (c) where the values accumulate and thus would need two samples over time to produce a 'rate'. See the sketch below.
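For example, to turn the cumulative byte counter into an average rate over a window (a sketch; eth0 is a placeholder):
  b1=$(tc -s qdisc show dev eth0 | awk '/Sent/ {print $2; exit}')
  sleep 10
  b2=$(tc -s qdisc show dev eth0 | awk '/Sent/ {print $2; exit}')
  echo "$(( (b2 - b1) * 8 / 10 )) bit/s average over the last 10s"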
I can help here, from CoDel's RFC which mostly applies to cake as well:
4.2. Setting INTERVAL

The INTERVAL value is chosen to give endpoints time to react to a drop without being so long that response times suffer. CoDel's estimator, TARGET, and control loop all use INTERVAL. Understanding their derivation shows that CoDel is the most sensitive to the value of INTERVAL for single long-lived TCPs with a decreased sensitivity for traffic mixes. This is fortunate, as RTTs vary across connections and are not known a priori. The best policy seems to be to use an INTERVAL value slightly larger than the RTT seen by most of the connections using a link, a value that can be determined as the largest RTT seen if the value is not an outlier (use of a 95-99th percentile value should work). In practice, this value is not known or measured (however, see Appendix A for an application where INTERVAL is measured).

An INTERVAL setting of 100 ms works well across a range of RTTs from 10 ms to 1 second (excellent performance is achieved in the range from 10 ms to 300 ms). For devices intended for the normal terrestrial Internet, INTERVAL SHOULD have a value of 100 ms. This will only cause overdropping where a long-lived TCP has an RTT longer than 100 ms and there is little or no mixing with other connections through the link.
Think about interval like this: codel/cake will only engage in scheduling a drop/mark if the minimum delay in a bucket/flow has been > target for at least a full interval period; it will then drop/mark immediately and also schedule the next drop time in the future, based on its inverse-square-root control law calculations using the same interval value as the starting point. In short, the smaller the interval, the more drop/mark-happy CoDel/cake become, and the more utilization/throughput will take a hit (latency for non-queueing flows, however, should be better).
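To get a feel for how much drop/mark-happier a small interval makes it, here is the drop spacing the interval/sqrt(count) control law produces for the default 100ms versus a 1ms interval (an illustration of the formula only, not cake's actual code):
  awk 'BEGIN { for (c = 1; c <= 5; c++) printf "drop %d: next drop after %6.2f ms (100ms interval) / %5.3f ms (1ms interval)\n", c, 100/sqrt(c), 1/sqrt(c) }'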
The last part is important: these are not simple-to-interpret values but exponentially weighted moving averages; especially in the case of pk_delay, this is not simply the maximum delay a packet experienced. Arguably it would be helpful to also get the true maximum sojourn time since the last statistics query, but cake does not offer that currently.
BTW, the link you supplied results in a:
" Oops! That page doesn’t exist or is private."
A question: if I put the same value in SQM for both download and upload (for example 10000 kbps) and run a test with DSLReports, why is the upload always half of the download? Does anyone know the answer?