Well, for the egress direction I think we already established that method.
Did you test whether your gaming already improves when you prioritize in egress direction?
Probably, but as I have said before, due to using an old router I cannot test (and have not tested) qosify myself, so I refrain from giving detailed recommendations on how to configure it, since I lack the necessary first-hand experience, sorry.
yea it feels like my shots connect better, but when someone is coming around a corner i can't react in time, and i think ingress prioritization would help there
also, what should i take for
option overhead_type none
is
option options "docsis"
enough?
OK, I guess the problem is that it is hard to quantify improvements, which makes a stringent A/B test difficult. If you can clearly define a port range for the ingress traffic, we might be able to come up with a tc filter invocation that could actually work for testing... (In the end I think that qosify should be properly tested and any port bugs reported upstream and fixed, but not being able/willing to actually test qosify myself, I will not be able to help with that.)
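Purely as an untested sketch of what such a test invocation could look like (the port 27015, the EF value, and the device names are assumptions based on your posted setup; adjust to taste):

```
# Untested sketch: rewrite the DSCP of incoming UDP packets from an assumed
# remote game port (27015, illustrative) to EF (TOS byte 0xb8) on eth0's
# ingress, before qosify's redirect hands them to cake on ifb-eth0.
# pref 1 is meant to make this filter run before qosify's own filter.
tc filter add dev eth0 ingress pref 1 protocol ip u32 \
    match ip protocol 17 0xff \
    match ip sport 27015 0xffff \
    action pedit ex munge ip dsfield set 0xb8 retain 0xfc pipe \
    action csum ip
```

If the marking works, those packets should then show up in the Voice tin of the ifb-eth0 cake instance in tc -s qdisc.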
this is my config now:
and i think it's working
but where can i set the per-packet overhead for my connection?
i changed option options "docsis" to option options "overhead 22", but i'm not sure this is right because there is still option overhead_type none
This looks like it works, but please repeat the capture on the router to see whether you see EF marks for the egress packets as well.
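In case it helps with the capture, a filter along these lines (untested sketch; eth0 assumed as the wan device) shows only packets whose DSCP field is EF:

```
# DSCP 46 (EF) sits in the upper six bits of the IPv4 TOS byte: 46 << 2 = 0xb8
tcpdump -i eth0 -n -v 'ip and ip[1] & 0xfc == 0xb8'
```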
Again, I am not an expert on qosify's configuration. Could you please post the output of tc -s qdisc
so we can see what is actually configured?
Could I ask you for a favor please?
Instead of going through the hassle of creating a screenshot and pasting that here, could you simply select all the relevant text in the terminal window and copy and paste it into the forum editor as "Preformatted text"? To paste as preformatted text, click on the </>
icon in the toolbar and replace the "type or paste here" placeholder with what you copied from the terminal window.
Or if you want to do it in the forum editor purely with entering text:
Just make sure you "sandwich" your text between two rows of backtick characters ` (which themselves will be invisible in the preview), looking something like this in the editor:
```
Your Pasted Text as preformatted text with fixed width font
1
1111 (note with fixed-width fonts the numbers are right-aligned)
```
but looking like this in the rendered forum:
Your Pasted Text as preformatted text with fixed width font
1
1111 (note with fixed-width fonts the numbers are right-aligned)
root@OpenWrt:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 8013: dev eth0 root refcnt 9 bandwidth 42Mbit diffserv4 dual-srchost nat nowash no-ack-filter split-gso rtt 100ms noatm overhead 18
Sent 33480212 bytes 263100 pkt (dropped 1, overlimits 30183 requeues 1)
backlog 0b 0p requeues 1
memory used: 45472b of 4Mb
capacity estimate: 42Mbit
min/max network layer size: 28 / 1500
min/max overhead-adjusted size: 46 / 1518
average network hdr offset: 14
Bulk Best Effort Video Voice
thresh 2625Kbit 42Mbit 21Mbit 10500Kbit
target 6.92ms 5ms 5ms 5ms
interval 102ms 100ms 100ms 100ms
pk_delay 0us 14us 0us 21us
av_delay 0us 3us 0us 10us
sp_delay 0us 2us 0us 6us
backlog 0b 0b 0b 0b
pkts 0 178649 0 84452
bytes 0 18524071 0 14957627
way_inds 0 814 0 0
way_miss 0 1759 0 43
way_cols 0 0 0 0
drops 0 1 0 0
marks 0 0 0 0
ack_drop 0 0 0 0
sp_flows 0 1 0 1
bk_flows 0 1 0 0
un_flows 0 0 0 0
max_len 0 25262 0 1342
quantum 300 1281 640 320
qdisc clsact ffff: dev eth0 parent ffff:fff1
Sent 9219449783 bytes 17876577 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth1 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 10372130910 bytes 7785575 pkt (dropped 3291, overlimits 0 requeues 113173)
backlog 0b 0p requeues 113173
maxpacket 1514 drop_overlimit 2752 new_flow_count 78394 ecn_mark 0 drop_overmemory 2752
new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 0: dev ifb-dns root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 93622 bytes 569 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc cake 8014: dev ifb-eth0 root refcnt 2 bandwidth 850Mbit diffserv4 dual-dsthost nat nowash ingress no-ack-filter split-gso rtt 100ms noatm overhead 18
Sent 1375328856 bytes 1417003 pkt (dropped 77, overlimits 40519 requeues 0)
backlog 0b 0p requeues 0
memory used: 5253462b of 15140Kb
capacity estimate: 850Mbit
min/max network layer size: 46 / 1500
min/max overhead-adjusted size: 64 / 1518
average network hdr offset: 14
Bulk Best Effort Video Voice
thresh 53125Kbit 850Mbit 425Mbit 212500Kbit
target 5ms 5ms 5ms 5ms
interval 100ms 100ms 100ms 100ms
pk_delay 0us 13.8ms 0us 611us
av_delay 0us 13.8ms 0us 34us
sp_delay 0us 1.46ms 0us 1us
backlog 0b 0b 0b 0b
pkts 0 956886 0 460194
bytes 0 1265476040 0 109967702
way_inds 0 832 0 0
way_miss 0 1360 0 28
way_cols 0 0 0 0
drops 0 77 0 0
marks 0 0 0 0
ack_drop 0 0 0 0
sp_flows 0 1 0 0
bk_flows 0 1 0 1
un_flows 0 0 0 0
max_len 0 68130 0 1242
quantum 1514 1514 1514 1514
qdisc clsact ffff: dev ifb-eth0 parent ffff:fff1
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Thank you, much easier to read for me. It looks like things are working out for you: both the ingress and egress cake instances show traffic only in Best Effort and Voice, as expected from your configuration.
If you want to configure the overhead manually, just add overhead 18 mpu 64
to both:
option ingress_options ""
option egress_options ""
While you are at it, you might want to try adding ack-filter
to egress_options:
...
config defaults
list defaults /etc/qosify/*.conf
config class voice
option ingress EF
option egress EF
config interface wan
option name wan
option disabled 0
option bandwidth_up 42mbit
option bandwidth_down 850mbit
option overhead_type none
# defaults:
option ingress 1
option egress 1
option mode diffserv4
option nat 1
option host_isolate 1
option autorate_ingress 0
option ingress_options "overhead 18 mpu 64"
option egress_options "overhead 18 mpu 64 ack-filter"
option options ""
what does the ack-filter do?
The ACK filter aggregates queued-up ACKs. Assume that for a given TCP flow we have three ACK packets in the egress queue. When this queue is serviced again, we could either send these three ACKs back to back, or (since ACKs are cumulative) drop the first two and send only the last one, which carries the highest sequence number. The receiver will perform more or less the same whether three consecutive ACKs arrive back to back or only the last one arrives, but we have removed 66% of the traffic volume of that ACK burst. AND if, at the time the queue is serviced, not all three ACKs had been queued yet but say only the first two, the improvement is not limited to the ACK volume: without the filter the signal of the newest ACK would be delayed behind the older ones, potentially stalling the sender, while with an ACK filter the most recently received segment is acknowledged sooner.
In short, on bursty MAC layers like DOCSIS, ACK filtering can help.
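The principle can be illustrated with a toy example (just a sketch of the idea, not cake's actual implementation; the ack numbers are made up):

```shell
#!/bin/sh
# Three cumulative ACKs queued for one flow; forwarding only the newest
# one acknowledges exactly the same data as sending all three.
QUEUED_ACKS="1448 2896 4344"        # ack numbers, oldest first
KEEP=${QUEUED_ACKS##* }             # highest/newest ack number
DROPPED=$(( $(echo "$QUEUED_ACKS" | wc -w) - 1 ))
echo "forward ack=$KEEP, drop $DROPPED superseded ACKs"
```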
i tried ack filtering but it clearly felt better without it, so this is my final config:
config defaults
list defaults /etc/qosify/*.conf
config class voice
option ingress EF
option egress EF
config interface wan
option name wan
option disabled 0
option bandwidth_up 42mbit
option bandwidth_down 850mbit
option overhead_type none
# defaults:
option ingress 1
option egress 1
option mode diffserv4
option nat 1
option host_isolate 1
option autorate_ingress 0
option ingress_options "overhead 22 mpu 64"
option egress_options "overhead 22 mpu 64"
option options ""
# Games
udp:27000-27031 voice
udp:27036 voice
udp:3074 voice
udp:3478 voice
udp:4379 voice
udp:4380 voice
ACK filtering should have zero impact on pure UDP flows, though in theory it might overload your CPU. But I wonder how you assessed "clearly felt better", because that can be a fluke...
That said, ACK filtering is completely orthogonal to your gaming issue: as long as your game traffic is the only traffic in the Voice priority class, your game should not care much about what happens to TCP flows...
Just to illustrate the issue though: with classic TCP Reno there will be one small ACK packet for every two full-MSS segments received. Assuming MTU 1500 and IPv4/TCP, a pure ACK packet is roughly 1/20 the size of a full-MTU/full-MSS packet, and one ACK every two full segments gives us:
ACK/Data = 1/40
So for a saturating download on a 1 Gbps link you can expect 1000/40 = 25 Mbps
of reverse ACK upload traffic; on a 1000/50 link that is already 50% of your upload capacity...
More modern TCPs than Reno, especially with aggregation techniques like GRO/GSO, will emit noticeably fewer ACKs, ameliorating the problem.
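The arithmetic above can be double-checked with a quick back-of-the-envelope script (the 1/20 frame ratio and the 1000 Mbps rate are the assumptions from the text):

```shell
#!/bin/sh
# Reno-style ACK clocking: one ACK per two full-MSS segments,
# an ACK frame roughly 1/20 the size of a full frame.
ACK_FRAME_FRACTION=20   # full frame size / ACK frame size (rough)
SEGS_PER_ACK=2
DOWN_MBPS=1000          # saturating download rate
RATIO=$(( ACK_FRAME_FRACTION * SEGS_PER_ACK ))   # data:ACK byte ratio
ACK_MBPS=$(( DOWN_MBPS / RATIO ))
echo "~${ACK_MBPS} Mbps of ACK upload for a ${DOWN_MBPS} Mbps download"
```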
my shots didn't register right (same server), but i tried it with overhead 18 mpu 64 ack-filter
i will try it again later with overhead 22 mpu 64 ack-filter
Again, this is unlikely to be involved in your issue. Overhead 18 vs. 22 is unlikely to matter unless you saturate at least one direction of your link with really small packets.
So unless you heavily loaded your internet access link in parallel to playing your game, this is unlikely to affect your game at all.
I understand that testing responsiveness in games is quite tricky, and I do not envy you...
I think this will answer your question: The ingress & egress setup calls
egress() {
    SILENT=1 $TC qdisc del dev $IFACE root
    $TC qdisc add dev $IFACE root cake bandwidth ${UPLINK}kbit \
        $( get_cake_lla_string ) ${EGRESS_CAKE_OPTS} ${EQDISC_OPTS}

    # Put act_ctinfo on the egress interface to set the DSCP from the stored
    # connmark. This seems counterintuitive, but it ensures that once the mark
    # is set, all subsequent egress packets get the same stored DSCP. This
    # avoids the need for iptables to run/mark every packet.
    $TC filter add dev $IFACE matchall \
        action ctinfo dscp ${DSCP} ${DSCPS}
}

ingress() {
    SILENT=1 $TC qdisc del dev $IFACE handle ffff: ingress
    $TC qdisc add dev $IFACE handle ffff: ingress

    SILENT=1 $TC qdisc del dev $DEV root
    [ "$ZERO_DSCP_INGRESS" -eq "1" ] && INGRESS_CAKE_OPTS="$INGRESS_CAKE_OPTS wash"
    $TC qdisc add dev $DEV root cake bandwidth ${DOWNLINK}kbit \
        $( get_cake_lla_string ) ${INGRESS_CAKE_OPTS} ${IQDISC_OPTS}
    $IP link set dev $DEV up

    # restore the DSCP from the conntrack mark into the packet and
    # redirect all packets arriving on $IFACE to $DEV
    $TC filter add dev $IFACE parent ffff: matchall \
        action ctinfo dscp ${DSCP} ${DSCPS} \
        action mirred egress redirect dev $DEV
}
I guess not, since there would be no scheduler to honor the priorities. One more issue: NAT can/will remap the internal port numbers, so it might be safer to attach your rules to the remote port ranges instead. (I might be misreading your screenshot though; maybe better to post the relevant sections from your /etc/config/firewall
as text?)
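For what it's worth, a rule keyed to the remote port could look something like this in /etc/config/firewall (untested sketch; the port and the rule name are illustrative, using the firewall's DSCP target):

```
config rule
	option name 'game-egress-EF'
	option src 'lan'
	option dest 'wan'
	option proto 'udp'
	option dest_port '27015'
	option target 'DSCP'
	option set_dscp 'EF'
```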
Silly questions:
A) If the IFB goes away for whatever reason (say the underlying pppoe interface ceases to exist), will the action also be gone for good? Or, put differently, is anything else required besides a $TC qdisc del dev $DEV root
to get rid of the action?
B) What requirements for additional packages exist for using this? (I would like to add this great capability to a variant of layer_cake.qos in the main repository, but want to make it safe even on systems lacking the requirements.)
Please note: my idea is to keep the actual egress marking rules out of the scripts and delegate users to the firewall's DSCP classification action to set egress DSCPs to their heart's content, so all we would need are two additional tc lines, guarded by a check whether the required functionality exists on the system...
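One way such a guard could be sketched (untested; the probe device name and the ctinfo masks are illustrative): try to install the ctinfo action on a scratch IFB and check the exit status.

```shell
#!/bin/sh
# Untested sketch: returns 0 if tc/kernel appear to support act_ctinfo.
ctinfo_supported() {
    ip link add ctinfo-probe type ifb 2>/dev/null || return 1
    tc qdisc add dev ctinfo-probe handle ffff: ingress 2>/dev/null
    if tc filter add dev ctinfo-probe parent ffff: matchall \
        action ctinfo dscp 0x0000003f 0x00000040 2>/dev/null; then
        rc=0
    else
        rc=1
    fi
    ip link del ctinfo-probe 2>/dev/null
    return $rc
}
```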