Doubt about ack-filter

They recommended I use nat dual-srchost ack-filter and nat dual-dsthost ingress because my link is asymmetric (80 Mbit down / 8 Mbit up) and my ISP router is a DOCSIS HFC modem (ARRIS TG2482A).

I read in another forum that they use docsis besteffort ingress nat and docsis ack-filter nat for very poor asymmetric networks.
So what is the difference?
I tried both while gaming and I don't notice any change;
I still have bufferbloat.



Are you asking for suggestions on how to help with bufferbloat on an OpenWrt device?

...or about settings on another device?

It's not really clear what you're seeking here.

Here it says that the options docsis besteffort ingress nat and docsis ack-filter nat should be used for very poor asymmetric networks, but the SQM guide suggests the options nat dual-dsthost ingress and nat dual-srchost.
My doubt is: what is the difference?


nat dual-srchost ack-filter (on egress) together with nat dual-dsthost ingress (on ingress) results in per-host isolation, and dynamically distributes the available bandwidth fairly between the currently active internal IP addresses.

nat besteffort ack-filter with nat besteffort ingress: with only the nat keyword you balance only between the internal hosts, not between external connections.
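On OpenWrt with sqm-scripts, these option strings go into the egress/ingress "advanced option" fields. A minimal sketch of the per-host-fair variant, assuming sqm-scripts with cake is installed and the first queue section is the one in use (section index and the idea of using uci here are assumptions, not from this thread):

```shell
# Sketch: set cake's advanced option strings via UCI
# (queue section index 0 is an assumption; adapt to your config).
uci set sqm.@queue[0].qdisc='cake'
uci set sqm.@queue[0].script='piece_of_cake.qos'
uci set sqm.@queue[0].eqdisc_opts='nat dual-srchost ack-filter'   # egress: per-source-host fairness
uci set sqm.@queue[0].iqdisc_opts='nat dual-dsthost ingress'      # ingress: per-destination-host fairness
uci commit sqm
/etc/init.d/sqm restart
```

The besteffort variant from the other forum would go into the same two fields, just with different keywords.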

Definitions of the other keywords (from the cake man page):

docsis: Equivalent to overhead 18 mpu 64 noatm.

nat: Instructs Cake to perform a NAT lookup before applying flow-isolation rules, to determine the true addresses and port numbers of the packet, to improve fairness between hosts "inside" the NAT. This has no practical effect in "flowblind" or "flows" modes, or if NAT is performed on a different host.

besteffort: Disables priority queuing by placing all traffic in one tin.

dual-srchost: Flows are defined by the 5-tuple, and fairness is applied first over source addresses, then over individual flows. Good for use on egress traffic from a LAN to the internet, where it'll prevent any one LAN host from monopolising the uplink, regardless of the number of flows they use.

dual-dsthost: Flows are defined by the 5-tuple, and fairness is applied first over destination addresses, then over individual flows. Good for use on ingress traffic to a LAN from the internet, where it'll prevent any one LAN host from monopolising the downlink, regardless of the number of flows they use.


Good description overall! But in this case cake simply tries to treat each flow equally, so it is not really balancing by internal host IPs. The host with the most greedy flows will get the highest share of the capacity.
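To make that difference concrete, here is a toy calculation (plain Python arithmetic, not cake's actual scheduler; the hosts, flow counts, and capacity are made up for illustration):

```python
# Toy illustration: host A opens 9 greedy flows, host B opens 1,
# and the link capacity is 100 units.
capacity = 100.0
flows = {"A": 9, "B": 1}

# Pure per-flow fairness (e.g. besteffort without dual-xxxhost):
# every flow gets an equal share, so the host with more flows wins.
total_flows = sum(flows.values())
per_flow_fair = {h: capacity * n / total_flows for h, n in flows.items()}

# Per-host fairness (dual-srchost on egress / dual-dsthost on ingress):
# capacity is first split equally between active hosts, then each
# host's share is split among its own flows.
per_host_fair = {h: capacity / len(flows) for h in flows}

print(per_flow_fair)  # {'A': 90.0, 'B': 10.0}
print(per_host_fair)  # {'A': 50.0, 'B': 50.0}
```

With per-flow fairness the 9-flow host grabs 90% of the link; with per-host fairness each host gets half, regardless of how many flows it opens.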

maybe nat besteffort triple-isolate ack-filter with nat besteffort triple-isolate ingress would have been better :thinking:

thank you for all the details/clarification you always provide on SQM in the forum :wink:

I guess it depends; I originally recommended the dual-XXXhost options, since @edwpat originally reported issues when sharing the link. Mostly, triple-isolate aims to perform similarly to dual-XXXhost, except that without knowing the direction towards the internet it cannot do that strictly, so it performs a somewhat softer version. For normal use cases with multiple active flows the differences should be small, but in extreme situations, like single-flow speedtests or VPNs, triple-isolate's isolation characteristics are harder to predict.

If the pk_delay and av_delay counters in the output of tc -s qdisc show high numbers (and do so repeatedly after stopping and re-starting SQM), then cake is to blame (most likely the CPU in the router is overloaded). Otherwise, if cake thinks delays are fine but measurements of end-to-end delay show otherwise, then cake is not in control of the bottleneck. The bottleneck could be upstream, like an ISP's partially overloaded transit link, an overloaded uplink of the aggregation network, or an overly crowded DOCSIS segment; or downstream, like very low effective throughput on WiFi with a WiFi stack that does not enable airtime fairness and/or an fq-AQM.
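One way to pull those counters out of the stats text programmatically; the sample below is a hypothetical excerpt of cake's `tc -s qdisc` output (real output has more fields, and cake may print delays in us rather than ms), so treat it as a sketch:

```python
import re

# Hypothetical excerpt of `tc -s qdisc show dev eth0` output for cake;
# the field names pk_delay/av_delay are as printed by cake, the values
# here are invented for illustration.
sample = """
qdisc cake 8010: root refcnt 2 bandwidth 8Mbit besteffort triple-isolate nat
  pk_delay 85.3ms av_delay 42.1ms sp_delay 1.2ms
"""

def delays_ms(text):
    """Extract pk_delay/av_delay values (in ms) from cake stats text."""
    result = {}
    for key in ("pk_delay", "av_delay"):
        m = re.search(rf"{key}\s+([\d.]+)ms", text)
        if m:
            result[key] = float(m.group(1))
    return result

d = delays_ms(sample)
print(d)  # {'pk_delay': 85.3, 'av_delay': 42.1}
# Persistently high values across SQM restarts point at cake (often an
# overloaded router CPU); low values despite bad end-to-end latency point
# at a bottleneck elsewhere on the path.
```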

I mean that the two cores that my router has are not enough.

If that is a question: even with multi-core routers you can run out of CPU. Qdiscs like cake can only use one CPU (at a time) and hence do not really profit much from multiple cores (the extra cores still allow moving other work off the qdisc CPU, but that is not the same as having two or more CPUs processing packets for the qdisc simultaneously).
Not sure whether that helps with your issue though.

still in bridge mode?

Bridge mode only affects how your cable modem operates on its LAN side, on the coax side not much changes, all the DOCSIS low level stuff stays the same, and if your segment should be overloaded you will encounter unexpected delays.

Well, the problem is that in my country all internet operators use HFC/coaxial, so there would be no change. What could be the solution to the overload and lags you mentioned?

Hard to tell. I think your tc -s qdisc results indicated that it does not seem to be your own link that causes the delays, which means something upstream of it. And that is all under the responsibility of your ISP; I am not sure what to recommend. Sometimes on an overloaded segment one simply gets a reliable share of upload that is smaller than expected. In that case, trying a significantly smaller upload shaper setting might help, but there is no guarantee.
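If you want to experiment with that on OpenWrt, lowering the shaper could look like this (a sketch; the queue section index and the 6000 kbit/s trial value are assumptions, not a verified fix):

```shell
# Try a deliberately lower upload shaper than the contracted 8 Mbit
# (values are in kbit/s; queue section index 0 is an assumption).
uci set sqm.@queue[0].upload='6000'
uci commit sqm
/etc/init.d/sqm restart
# If latency under load improves at this setting, the usable upload
# share is likely smaller than the contracted rate.
```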
There are some methods and projects out there that use Linux and USB cable receivers to measure the loading of a segment; that might allow you to measure whether your cable segment is overloaded. But I have no first-hand experience with any of that, and the only forum post about it I could quickly find is in German, so probably not too helpful.

This is a very annoying problem, since my ISP's customer support doesn't know anything; the only thing they know to do when their service fails is reboot their router, and nothing else, which doesn't help at all.

What do you mean, I'm a fool handling words

Sorry, I do not know any Spanish, but putting this through Google Translate, it looks like something got garbled during translation.

It should be in English as I always use the translator.

I mean your ISP will probably not be a Tier-1 ISP itself with direct interconnections to all other Tier-1 ISPs, but will probably buy transit (i.e. access to all of the internet the ISP is not itself directly connected with) from somebody. But that transit will be rate-limited, so if your ISP's users in the aggregate use up all of that rate, then the share per user can easily fall below the contracted rate per user; in that case SQM will simply be set to too high a rate. A test for that would be to run something like
mtr -ezb4 while stressing your network (with the help of your brothers?).
Then copy and paste the output of a running mtr session (once with load and once without load). If we are lucky, we might see signs of congested links in there. But even if we were successful, there really is not much you can do.
(In theory you could try to find a local virtual server hoster with better transit/peering links and rent a server there, then re-route all your traffic via that VPS/VPN, but that is going to cost money, and will only help if the root cause is your ISP's overloaded uplink.)
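A concrete invocation of that test might look like the following (the target host here is only a placeholder example; pick any well-connected server):

```shell
# Run while the link is saturated, then again while idle, and compare.
# -e show MPLS info, -z AS-number lookup, -b show hostnames and IPs,
# -4 force IPv4; --report with -c 100 averages over 100 probe cycles.
mtr -ezb4 --report -c 100 example.com
```

Rising loss or latency that starts at one hop and persists to the destination, but only under load, would be a sign of a congested link at or before that hop.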

At some point I did manage to overload my ISP's Internet service, and the speed dropped to 13 down and 8 up, but the contracted service is 80/8. Something strange, since before it was 60 down and 6 up.