[any gamer here] I experience a lot of lag and low outgoing PPS in games?

OK, so I have an 80 Mbit/s download connection, so I set this to 3000? Is that right?

Honestly, with SQM on the wan side and a shaper rate well below the interface speed, there is zero need to reduce this buffer in size.


https://www.linuxjournal.com/content/queueing-linux-network-stack

Byte Queue Limits (BQL)

Byte Queue Limits (BQL) is a new feature in recent Linux kernels (> 3.3.0) that attempts to solve the problem of driver queue sizing automatically. This is accomplished by adding a layer that enables and disables queueing to the driver queue based on calculating the minimum queue size required to avoid starvation under the current system conditions. Recall from earlier that the smaller the amount of queued data, the lower the maximum latency experienced by queued packets.

It is key to understand that the actual size of the driver queue is not changed by BQL. Rather, BQL calculates a limit of how much data (in bytes) can be queued at the current time. Any bytes over this limit must be held or dropped by the layers above the driver queue.

A real-world example may help provide a sense of how much BQL affects the amount of data that can be queued. On one of the author's servers, the driver queue size defaults to 256 descriptors. Since the Ethernet MTU is 1,500 bytes, this means up to 256 * 1,500 = 384,000 bytes can be queued to the driver queue (TSO, GSO and so forth are disabled, or this would be much higher). However, the limit value calculated by BQL is 3,012 bytes. As you can see, BQL greatly constrains the amount of data that can be queued.

BQL reduces network latency by limiting the amount of data in the driver queue to the minimum required to avoid starvation. It also has the important side effect of moving the point where most packets are queued from the driver queue, which is a simple FIFO, to the queueing discipline (QDisc) layer, which is capable of implementing much more complicated queueing strategies.
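To make the excerpt's numbers concrete, here is a small Python sketch of the same arithmetic, plus how one could peek at the live BQL limit through sysfs (the interface name eth0 and queue tx-0 are placeholders for whatever your system has):

```python
descriptors = 256   # default driver queue size in the author's example
mtu = 1500          # Ethernet MTU in bytes
print("driver queue capacity:", descriptors * mtu, "bytes")   # 384000 bytes
print("BQL limit in the example:", 3012, "bytes")

# The live BQL limit is exposed per TX queue in sysfs; 'eth0' and 'tx-0'
# are placeholders for your interface and queue.
path = "/sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit"
try:
    with open(path) as f:
        print("current BQL limit:", f.read().strip(), "bytes")
except OSError:
    print("no BQL sysfs entry found (different interface name or no BQL support)")
```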

If the qdisc operates well below the interface speed, then the driver byte queue never has more than 1 packet in it. For example, suppose you operate cake at 100Mbps but have gigabit ethernet. Cake sends a packet to the ethernet hardware, which sends it at gigabit speed... a 1500-byte packet takes 12 microseconds.

Cake then waits 1500*8/100e6 = 120 microseconds before it hands the next packet to the ethernet. So 90% of the time the ethernet has zero packets, and 10% of the time it has 1 packet.
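The same arithmetic as a quick Python sketch (same assumed numbers as above: 1500-byte packets, gigabit wire rate, cake shaped to 100 Mbps):

```python
packet_bits = 1500 * 8   # one full-size Ethernet frame
wire_rate = 1e9          # gigabit ethernet
shaper_rate = 100e6      # cake shaped to 100 Mbps

tx_time = packet_bits / wire_rate    # time the NIC needs to serialize one packet
gap = packet_bits / shaper_rate      # interval at which cake releases packets

print(f"NIC transmit time:     {tx_time * 1e6:.0f} us")          # 12 us
print(f"cake inter-packet gap: {gap * 1e6:.0f} us")               # 120 us
print(f"driver queue occupied {tx_time / gap:.0%} of the time")  # 10%
```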

And this is great: if you are connected to your upstream at full ethernet rates, just put fq_codel on top of a BQL'd ethernet driver and you should see low latency under load at very low CPU usage! But once you need to traffic shape the egress/ingress rate of the BQL'd interface, BQL is not going to help anymore, as the shaper (if properly configured) will never allow undue queueing in the ethernet driver. Moreover, as a compromise to work around the combination of anemic router CPUs and fast internet access links, SQM occasionally wants to push a somewhat largish batch of packets to the ethernet driver (SQM can be configured for how much extra latency one is willing to accept for better utilization). Now if BQL has restricted the driver to accept, say, 3 KB but SQM wants to "dump" 5 milliseconds' worth at 150 Mbps (or 93.75 KB), that is not going to work well...
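A rough sketch of that mismatch in Python (the 3 KB BQL limit is just the example figure from the Linux Journal excerpt above):

```python
shaper_rate = 150e6      # bits per second
batch_interval = 5e-3    # seconds' worth of traffic SQM may hand over at once
bql_limit = 3012         # bytes, the example BQL limit from above

batch_bytes = shaper_rate * batch_interval / 8
print(f"SQM batch: {batch_bytes:.0f} bytes (~{batch_bytes / 1500:.1f} full-size packets)")
print(f"BQL limit: {bql_limit} bytes (~{bql_limit / 1500:.1f} full-size packets)")
# 93750 bytes clearly does not fit under a ~3 KB limit.
```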

And on ingress things get even trickier, since there is no BQL on ingress at all (in an ideal world the egress side of the upstream node connected to the local ingress would be properly BQL'd, but in reality...)

This is a good point. Rather than going packet by packet, if the qdisc dumps batches of packets it wants the ethernet driver to be able to buffer them, and under a tight BQL limit it won't.

For example, in the scenario above the buffer always had 1 or 0 packets in it. But if instead cake dumps 10 packets at a time, the ethernet layer will start with 10 packets and drop to 1 or 0 packets by the time cake hands it another 10, which is very efficient pipelining. But with BQL and a limit that is too small, the batch won't fit and packets have to be held back or dropped.
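Rough numbers for the batched case, under the same assumptions as before (1500-byte packets, gigabit NIC, cake at 100 Mbps, batches of 10):

```python
batch = 10               # packets cake hands over at once
packet_bits = 1500 * 8
wire_rate = 1e9          # NIC drains the batch at gigabit speed
shaper_rate = 100e6      # cake releases the next batch at the shaped rate

drain_time = batch * packet_bits / wire_rate    # 120 us to empty the driver queue
next_batch = batch * packet_bits / shaper_rate  # 1200 us until the next batch arrives

print(f"driver queue drains in {drain_time * 1e6:.0f} us")
print(f"next batch arrives after {next_batch * 1e6:.0f} us")
print(f"driver queue sits empty {1 - drain_time / next_batch:.0%} of the time")
# The pipelining only works if BQL lets all 10 packets in at once.
```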

Then why is fq_codel the default qdisc on all interfaces? Why is BQL enabled by default? A qdisc doesn't perform any real work unless it's constrained by a limiter, and BQL apparently serves no purpose unless it's applied directly at the root of the edge link.

BQL is designed to handle datacenter traffic and the like: 10GigE connections where running a shaper is insane.

Because it is way superior to the old default, pfifo_fast.

Because for an interface running at line rate BQL is exactly the right thing to do.
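If you want to verify this on your own box, both settings are visible from userspace; a minimal sketch in Python (eth0/tx-0 are placeholders, and the output obviously depends on your kernel and driver):

```python
import os

# System-wide default qdisc (net.core.default_qdisc); often fq_codel these days.
with open("/proc/sys/net/core/default_qdisc") as f:
    print("default qdisc:", f.read().strip())

# Per-queue BQL state lives under byte_queue_limits; a missing directory
# usually means the driver does not implement BQL. 'eth0' is a placeholder.
bql_dir = "/sys/class/net/eth0/queues/tx-0/byte_queue_limits"
print("BQL state present:", os.path.isdir(bql_dir))
```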

BQL will give back-pressure to fq_codel so fq_codel works and latency stays low, but that only works if the BQL'd ethernet link is the true bottleneck. BQL can only signal about the queue it knows about. If, say, a Gbps ethernet adapter feeds a 10 Mbps VDSL2 modem, the problematic queueing happens in the device with the fast-to-slow transition, so BQL's back-pressure is not telling fq_codel about the relevant queue. That is the situation where BQL + fq_codel alone cannot fix bufferbloat, but a traffic shaper plus fq_codel can; then we need to make sure that BQL does not get in the way (which only matters if we have to batch packets, but we always have to do that to a degree, only most often our batch size is a single packet :wink: )
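To put a number on why the modem's queue is the one that matters, here is a small sketch; the 100-packet modem buffer is purely an assumed, illustrative figure:

```python
modem_rate = 10e6        # 10 Mbps VDSL2 uplink
buffer_packets = 100     # assumed modem buffer depth, purely illustrative
packet_bytes = 1500

queue_delay = buffer_packets * packet_bytes * 8 / modem_rate
print(f"a full modem buffer adds {queue_delay * 1000:.0f} ms of latency")  # ~120 ms
# BQL on the gigabit side can neither see nor limit this queue; only a shaper
# set below the modem rate keeps it empty, so fq_codel's own queue is the one that counts.
```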

It works quite well, if the slowest link is an ethernet link.

Well, it can be done, and actually is done in practice IIRC, but ideally at such speeds we would prefer an ethernet driver with configurable egress rates (in which case the costly shaping would be left to the hardware, and BQL + fq_codel would again be sufficient). In a properly controlled environment, where we can select the devices at both ends of a link, just having properly configured ethernet rates, BQL and fq_codel would do wonders.

So the plan is clear, convince ISPs to deploy exactly that on their end of a link :wink:

It's not clear to me that ISPs setting ethernet rates necessarily fixes end users' experience. Suppose you're an ISP with 100 customers in some local loop... You give 10Mbps to each. So now your egress needs anywhere from, say, 10Mbps to 1000Mbps depending on how much demand there is from customers.

It still seems they need some kind of qdisc to limit each of those 100 users to their 10Mbps contracted rate...

Note that I personally think artificial rate limits for customers are a sub-optimal solution. It would make more sense to use fair queuing and let whoever has demand use the full link, but there are real-world reasons why that doesn't work (including, probably, lots of customer complaints along the lines of "I was getting 100Mbps on a speed test yesterday, why do I only get 24 now?").

Well, in my hypothetical the ISP would instantiate BQL on the per-client interface, and that per-client interface would also be the rate-limiting node... which works for DSL, but is not the right mental model for a true shared medium like DOCSIS/cable...

Yes, this model works well within groups that have some social cohesion, like a family or a business, but tends not to work too well in anonymous groups without enforcement of proper behavior.
Plus ISPs market differential access speeds, and as long as they do they need to deliver something along that axis :wink:

I don't know, I don't think it's abuse that's the real issue. I think it's a combination of misunderstanding and customer support costs.

Suppose in some neighborhood we had the equivalent of a QFQ with 1000 different weighted queues. Instead of selling "10Mbps" or "50Mbps" we sell "priority 1" or "priority 5".

Now, if you're the only customer on the link you get 100% of the link. But if there's you at priority 5 and one priority 1 customer, you get 5/(5+1) = 83% of the link. If there's you, 7 other priority 5 customers, and 30 priority 1 customers, you get 5/(5*8+30) = 7.1% of the link.

Now, imagine this is a gigabit link. You'd see speed tests anywhere from 830Mbps to 70Mbps depending on other people's usage. But you'd always be getting your "fair share" of the link.
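The shares above come straight out of the weight arithmetic; a tiny Python sketch of the same calculation (link size and weights as in the example):

```python
def share(my_weight, other_weights, link_mbps=1000):
    """Weighted fair share of the link for one customer."""
    frac = my_weight / (my_weight + sum(other_weights))
    return frac, frac * link_mbps

print(share(5, []))                  # alone: 100% -> 1000 Mbps
print(share(5, [1]))                 # one priority-1 neighbour: ~83% -> ~830 Mbps
print(share(5, [5] * 7 + [1] * 30))  # 7 more at 5, 30 at 1: ~7.1% -> ~71 Mbps
```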

I believe this is the main problem with link-sharing: as soon as you implement it, people think they're getting screwed, call your customer service, and start saying "you were selling me 830Mbps yesterday, why don't I get that today?" Your customer service has to explain it to them, and they still feel cheated.

Instead, people prefer to actually GET cheated: just cap their speed at 10Mbps and basically always get no more than that... Of course in reality there are still plenty of times they don't even get that much because links get congested elsewhere, but they never see more than 10, so they never feel they're "getting screwed because they only get 10 but were getting 100 yesterday".

humans... can't live with them. can't LART them all.

Well, yes and no. I am a strong believer in sharing.
But I can also see how selling internet access by access speed is conceptually simple, at least in Europe, where the regulators established that goodput as measured in typical speedtests can and should be compared against the contractual rates an ISP needs to declare. (In Germany, ISPs need to specify a maximum rate (which has little relevance), a typical rate that should be available 90% of the time, and a minimum that needs to be there 100% of the time.)
Every end customer is able to run such speedtests to confirm whether his ISP delivers on its advertised promises... (Well, if the ISP does not deliver, the only remedy is getting out of the contract immediately and without additional costs, which is fine for people with real options...)

Agreed that it can make sense to have a guaranteed minimum. In practice, what often happens here in the US is that you agree to never go above a maximum and then accept whatever crap the ISP actually gives you below that, or complain on the phone to a guy in India who is paid to ignore you politely, and then pay to break your contract if you wind up not liking it :wink:

The US is the land of government supported monopolies who pretend to be all about "free market"

If I ever ran a Wireless ISP (which is actually a thing I occasionally think about doing but probably will never do), it would work a lot differently from a typical ISP and focus on low latency, high reliability, and friendliness toward gamers and people who want to run their own routers. We'd also be IPv6-only with 464XLAT.

All I need is someone who really wants to run a small business and handle billing and sales and marketing :wink:

The CIR (committed information rate) is never explained on an ISP website nor in a contract. The advertised rates are usually inferred to be the CIR, but here in the USA they don't even meet the CIR. The EIR (excess information rate) is not posted either; that is usually what the customer sees on a speedtest for short amounts of time. Tricky business practices.

It's all about "speeds up to 200Mbps" meaning "we've got a shaper that makes sure you never get more than 200Mbps"

Kinda like in marketing "up to or greater than 200Mbps"

which mathematically means "as long as the bandwidth is a real number we've met our contract"

Hello, how would the DSCP CS7 be used?

I'm a little confused about the question.

Follow this official link (CS7 explanation) and read sections 3.1 & 3.2.
A quick summary is that CS7 wasn't created or intended for marking user applications such as VoIP, gaming, etc. It's reserved for lower-level routing protocols that work behind the scenes and need to be expedited. I think it's more applicable in a datacenter or something similar.
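For reference, a quick sketch of how the class-selector code points map onto the DSCP field (standard values; CS7 works out to DSCP 56):

```python
# Class Selector code points: CSn occupies the top three DSCP bits, i.e. CSn = n << 3.
for n in range(8):
    dscp = n << 3          # CS0..CS7 -> 0, 8, 16, ..., 56
    tos = dscp << 2        # the same value as seen in the 8-bit IP TOS byte
    print(f"CS{n}: DSCP {dscp:2d} (0b{dscp:06b}), TOS byte 0x{tos:02X}")
# CS7 is DSCP 56 (0b111000, TOS 0xE0), reserved for network-control traffic
# such as routing protocols rather than user applications.
```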
