see post
Your rules above specify udp:3074 as gaming, i.e. CS4.
@moeller0 possible saturation?
root@OpenWrt:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 1: dev eth0 root refcnt 6 bandwidth 23Mbit diffserv4 dual-srchost nat wash no-ack-filter split-gso rtt 100ms noatm overhead 18 mpu 64
Sent 1457977151 bytes 8355695 pkt (dropped 239, overlimits 3028990 requeues 20127)
backlog 0b 0p requeues 20127
memory used: 964792b of 4Mb
capacity estimate: 23Mbit
min/max network layer size: 28 / 1500
min/max overhead-adjusted size: 64 / 1518
average network hdr offset: 14
Bulk Best Effort Video Voice
thresh 1437Kbit 23Mbit 11500Kbit 5750Kbit
target 12.6ms 5ms 5ms 5ms
interval 108ms 100ms 100ms 100ms
pk_delay 115us 1.37ms 0us 82us
av_delay 8us 91us 0us 5us
sp_delay 3us 4us 0us 2us
backlog 0b 0b 0b 0b
pkts 5469038 2883252 0 3644
bytes 827825621 630307469 0 168592
way_inds 74983 96638 0 0
way_miss 5140 132132 0 2
way_cols 0 0 0 0
drops 75 164 0 0
marks 0 0 0 0
ack_drop 0 0 0 0
sp_flows 2 3 0 1
bk_flows 0 0 0 0
un_flows 0 0 0 0
max_len 31988 14870 0 71
quantum 300 701 350 300
qdisc ingress ffff: dev eth0 parent ffff:fff1 ----------------
Sent 31113598438 bytes 27129773 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth1 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 37074099806 bytes 32555617 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1514 drop_overlimit 0 new_flow_count 14 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth2 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 9142810711 bytes 7760013 pkt (dropped 1, overlimits 0 requeues 68)
backlog 0b 0p requeues 68
maxpacket 1514 drop_overlimit 0 new_flow_count 72 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 1: dev ifb-eth0 root refcnt 2 bandwidth 90Mbit diffserv4 dual-dsthost nat nowash ingress no-ack-filter split-gso rtt 100ms noatm overhead 18 mpu 64
Sent 31473839892 bytes 27067892 pkt (dropped 61881, overlimits 33321918 requeues 0)
backlog 0b 0p requeues 0
memory used: 4503749b of 4500000b
capacity estimate: 90Mbit
min/max network layer size: 46 / 1500
min/max overhead-adjusted size: 64 / 1518
average network hdr offset: 14
Bulk Best Effort Video Voice
thresh 5625Kbit 90Mbit 45Mbit 22500Kbit
target 5ms 5ms 5ms 5ms
interval 100ms 100ms 100ms 100ms
pk_delay 1.46ms 174us 0us 85us
av_delay 591us 62us 0us 8us
sp_delay 21us 7us 0us 3us
backlog 0b 0b 0b 0b
pkts 23030292 4016617 0 82864
bytes 29065484518 2496505286 0 4971840
way_inds 154471 69729 0 0
way_miss 5165 131915 0 1
way_cols 0 0 0 0
drops 61166 715 0 0
marks 0 0 0 0
ack_drop 0 0 0 0
sp_flows 3 1 0 1
bk_flows 0 1 0 0
un_flows 0 0 0 0
max_len 68130 24224 0 60
quantum 300 1514 1373 686
Impossible to tell from those stats... the only parameters that tell us something about "congestion/saturation" are:
- xx_delay: but these age out quickly, so they are only diagnostic during and shortly after congestion events
- backlog: diagnostic during congestion
- drops/marks: but here it matters at what rate these events happen, and that rate is impossible to see in a single snapshot of the stats.
So all the stats tell me is:
a) that during the time these were taken there was no congestion
b) you put stuff into VOICE but not VIDEO; whether that is OK depends on your policy and is not for me to judge.
While not the correct measure, looking at the number of packets in the different tins, your prioritization approach seems sane: you only have a small fraction in tins with higher priority than Best Effort. (The caveat is that these packet/byte counters only tell us that prioritization looks okay on average; there might still have been epochs with too-high traffic rates in VOICE, but you would probably notice that from the link's behavior.)
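As a rough sanity check, that split can be computed from the per-tin egress packet counters in the `tc -s qdisc` output above (a quick sketch using the numbers posted in this thread):

```python
# Per-tin packet counters from the egress cake instance (dev eth0) above.
pkts = {"Bulk": 5469038, "Best Effort": 2883252, "Video": 0, "Voice": 3644}

total = sum(pkts.values())
for tin, n in pkts.items():
    print(f"{tin:12s} {100 * n / total:7.3f}%")

# Share of packets in tins with higher priority than Best Effort:
high_prio_share = (pkts["Video"] + pkts["Voice"]) / total
assert high_prio_share < 0.01  # well under 1% here
```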
For my taste there is quite a lot of stuff in Bulk, but again that is a policy question and might be exactly what you are after.
I found this information about using packet length to prioritize non-bulk unmarked traffic like games, VoIP, etc., added it to my config, and wanted to share it with you.
Clarification
If your connection is bad or unstable even with CAKE (because your ISP is bad), it will still be bad even if you use DSCP marking.
DSCP marking doesn't FIX bufferbloat; CAKE fixes that problem. DSCP marking only lets you use CAKE's categories to prioritize one kind of traffic over another and guarantee a certain amount of bandwidth for it.
To confirm that DSCP marking doesn't FIX bufferbloat, set the "bandwidth_up" and "bandwidth_down" options to 0 (like this: 0mbit) so that CAKE does not limit the bandwidth (while still using Qosify's DSCP marking), and then do this test:
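For that test, the only change in /etc/config/qosify would be the two bandwidth options (a sketch; everything else stays as in your normal config):

```
config interface wan
	option name wan
	option bandwidth_up 0mbit
	option bandwidth_down 0mbit
```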
Qosify configuration
-
I use the options "dscp_default_tcp" and "dscp_default_udp" to wash all DSCP marks coming from my ISP on ingress traffic and to make that class the default for unmarked traffic.
-
unmarked_traffic class:
- Unmarked traffic with an average packet length greater than 1256 bytes is classified as CS1, like torrents and other unknown services that most likely don't care and are generally not time-sensitive.
- Unmarked traffic with an average packet length less than 1256 bytes is prioritized to CS4, like gaming, VoIP, etc.
-
Unmarked traffic with more than 250 packets per second is deprioritized to CS1 for 10 seconds; if the traffic does not drop below that rate, it remains in CS1 until the pps decreases.
(Recommendation: on your BitTorrent client (qBittorrent) use only the TCP protocol, because with the μTP (UDP) protocol the packet size varies between 590 and 1444 bytes.)
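The unmarked_traffic rules above boil down to a simple decision. As an illustration only (hypothetical Python, not qosify's actual implementation, which lives in an eBPF program):

```python
# Illustration of the unmarked_traffic rules above (not qosify's actual code).
# avg_pkt_len is the flow's average packet size in bytes, pps its packet rate.
def classify_unmarked(avg_pkt_len: float, pps: float) -> str:
    if pps > 250:            # bulk_trigger_pps: deprioritize heavy flows
        return "CS1"
    if avg_pkt_len > 1256:   # prio_max_avg_pkt_len: large packets stay bulk
        return "CS1"
    return "CS4"             # small, low-rate packets: gaming, VoIP, ...

assert classify_unmarked(100, 60) == "CS4"    # game-like traffic
assert classify_unmarked(1444, 40) == "CS1"   # torrent-sized packets
assert classify_unmarked(100, 500) == "CS1"   # high-pps flow
```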
-
browsing class:
- Traffic on ports 80 and 443 with an average packet length greater than 575 bytes is classified as CS0, like browsing, game lobbies, live streaming over port 443 (Facebook Live), etc.
- Traffic on ports 80 and 443 with an average packet length less than 575 bytes is prioritized to AF41, like light browsing (text/live chat/code?) and VoIP (these are the fallback ports for VoIP).
- Traffic on ports 80 and 443 with more than 1000 packets per second is deprioritized to CS1 for 10 seconds; if the traffic does not drop below that rate, it remains in CS1 until the pps decreases.
-
bulk class:
- Hostnames or domains used for downloads to CS1 like Microsoft, MEGA, Dropbox, Google, Steam and Epic Games.
- BitTorrent and Usenet ports to CS1.
-
besteffort class:
- ICMP (Ping) to CS0.
-
Hostnames or domains used to watch video streaming to CS0 like YouTube, Facebook, Twitch, TikTok, Netflix, Amazon Prime Video, Disney Plus and HBO.
(These hostnames ensure that the streaming services don't end in CS1).
-
network_services class:
- SSH, NTP and DNS ports to CS2.
-
broadcast_video class:
- Live Streaming ports to CS3 like YouTube Live, Twitch, Vimeo and LinkedIn Live.
-
gaming class:
- Known game ports and game consoles ports to CS4 like Xbox, PlayStation, Call of Duty, FIFA, Minecraft and Supercell Games.
-
multimedia_conferencing class:
- Known video conferencing ports and hostnames to AF4x like Zoom, Microsoft Teams, Skype, GoToMeeting, Webex Meeting, Jitsi Meet, Google Meet, FaceTime and TeamViewer.
-
telephony class:
- Known VoIP and VoWiFi ports to EF.
-
I only use those DSCP values to prioritize the traffic in these CAKE categories:
- Bulk: CS1
- Best Effort: CS0
- Video: CS2, CS3 and AF4x
-
Voice: CS4 and EF
(The traffic in the last category in CAKE always has higher priority than the others)
-
I add the "wash" parameter in "egress_options" to wash the outgoing custom DSCP marking, for these reasons:
- "wash" only clears DSCP marks after the traffic has been tinned.
- Don't wash incoming (ingress) DSCP marks, because that would also wash the custom DSCP marking from Qosify, and Qosify already washes the ISP's marks with the options "dscp_default_tcp" and "dscp_default_udp".
- Wash outgoing (egress) DSCP marks towards the ISP, because they may be mis-marked from the ISP's perspective.
# Recommendation: Don't use "wash" on ingress, so that Wi-Fi Multimedia (WMM) QoS can make use of the custom DSCP marking, and use "wash" only on egress.
-
Information about keywords to write in "overhead_type" option:
/etc/config/qosify
config defaults
list defaults /etc/qosify/*.conf
option dscp_icmp +besteffort
option dscp_default_tcp unmarked_traffic
option dscp_default_udp unmarked_traffic
config class unmarked_traffic
option ingress CS1
option egress CS1
option prio_max_avg_pkt_len 1256
option dscp_prio CS4
option bulk_trigger_pps 250
option bulk_trigger_timeout 10
option dscp_bulk CS1
config class browsing
option ingress CS0
option egress CS0
option prio_max_avg_pkt_len 575
option dscp_prio AF41
option bulk_trigger_pps 1000
option bulk_trigger_timeout 10
option dscp_bulk CS1
config class bulk
option ingress CS1
option egress CS1
config class besteffort
option ingress CS0
option egress CS0
config class network_services
option ingress CS2
option egress CS2
config class broadcast_video
option ingress CS3
option egress CS3
config class gaming
option ingress CS4
option egress CS4
config class multimedia_conferencing
option ingress AF42
option egress AF42
option prio_max_avg_pkt_len 575
option dscp_prio AF41
config class telephony
option ingress EF
option egress EF
config interface wan
option name wan
option disabled 0
option bandwidth_up 50mbit
option bandwidth_down 320mbit
option overhead_type docsis
# defaults:
option ingress 1
option egress 1
option mode diffserv4
option nat 1
option host_isolate 1
option autorate_ingress 0
option ingress_options ""
option egress_options "wash"
option options "ether-vlan"
config device wandev
option disabled 1
option name wan
option bandwidth 100mbit
/etc/qosify/00-defaults.conf
# SSH
tcp:22 network_services
# NTP
udp:123 network_services
# DNS
tcp:53 network_services
tcp:5353 network_services
udp:53 network_services
udp:5353 network_services
# DNS over TLS (DoT)
tcp:853 multimedia_conferencing
udp:853 multimedia_conferencing
# HTTP/HTTPS/QUIC
tcp:80 browsing
tcp:443 browsing
udp:80 browsing
udp:443 browsing
# Microsoft (Download)
dns:*1drv* bulk
dns:*backblaze* bulk
dns:*backblazeb2* bulk
dns:*ms-acdc.office* bulk
dns:*onedrive* bulk
dns:*sharepoint* bulk
dns:*update.microsoft* bulk
dns:*windowsupdate* bulk
# MEGA (Download)
dns:*mega* bulk
# Dropbox (Download)
dns:*dropboxusercontent* bulk
# Google (Download)
dns:*drive.google* bulk
dns:*googleusercontent* bulk
# Steam (Download)
dns:*steamcontent* bulk
# Epic Games (Download)
dns:*download.epicgames* bulk
dns:*download2.epicgames* bulk
dns:*download3.epicgames* bulk
dns:*download4.epicgames* bulk
dns:*epicgames-download1* bulk
# YouTube
dns:*googlevideo* besteffort
# Facebook
dns:*fbcdn* besteffort
# Twitch
dns:*ttvnw* besteffort
# TikTok
dns:*tiktok* besteffort
# Netflix
dns:*nflxvideo* besteffort
# Amazon Prime Video
dns:*aiv-cdn* besteffort
dns:*aiv-delivery* besteffort
dns:*pv-cdn* besteffort
# Disney Plus
dns:*disney* besteffort
dns:*dssott* besteffort
# HBO
dns:*hbo* besteffort
dns:*hbomaxcdn* besteffort
# BitTorrent
tcp:6881-7000 bulk
tcp:51413 bulk
udp:6771 bulk
udp:6881-7000 bulk
udp:51413 bulk
# Usenet
tcp:119 bulk
tcp:563 bulk
# Live Streaming to YouTube Live, Twitch, Vimeo and LinkedIn Live
tcp:1935-1936 broadcast_video
tcp:2396 broadcast_video
tcp:2935 broadcast_video
# Xbox
tcp:3074 gaming
udp:88 gaming
#udp:500 gaming # UDP port already used in "VoWiFi" rules
udp:3074 gaming
udp:3544 gaming
#udp:4500 gaming # UDP port already used in "VoWiFi" rules
# PlayStation
tcp:3478-3480 gaming
#udp:3478-3479 gaming # UDP ports already used in "Zoom" rules
# Call of Duty
#tcp:3074 gaming # TCP port already used in "Xbox" rules
tcp:3075-3076 gaming
#udp:3074 gaming # UDP port already used in "Xbox" rules
udp:3075-3079 gaming
udp:3658 gaming
# FIFA
tcp:3659 gaming
udp:3659 gaming
# Minecraft
tcp:25565 gaming
udp:19132-19133 gaming
udp:25565 gaming
# Supercell Games
tcp:9339 gaming
udp:9339 gaming
# Zoom, Microsoft Teams, Skype and FaceTime (they use these same ports)
udp:3478-3497 multimedia_conferencing
# Zoom
dns:*zoom* multimedia_conferencing
tcp:8801-8802 multimedia_conferencing
udp:8801-8810 multimedia_conferencing
# Skype
dns:*skype* multimedia_conferencing
# FaceTime
udp:16384-16387 multimedia_conferencing
udp:16393-16402 multimedia_conferencing
# GoToMeeting
udp:1853 multimedia_conferencing
udp:8200 multimedia_conferencing
# Webex Meeting
tcp:5004 multimedia_conferencing
udp:9000 multimedia_conferencing
# Jitsi Meet
tcp:5349 multimedia_conferencing
udp:10000 multimedia_conferencing
# Google Meet
udp:19302-19309 multimedia_conferencing
# TeamViewer
tcp:5938 multimedia_conferencing
udp:5938 multimedia_conferencing
# Voice over Internet Protocol (VoIP)
tcp:5060-5061 telephony
udp:5060-5061 telephony
# Voice over WiFi or WiFi Calling (VoWiFi)
udp:500 telephony
udp:4500 telephony
Not a big fan of port-based rules; I think "behavioral" rules will generally work better, so this seems reasonably sane... (though I guess the simplest rule would be something like: prioritize packet sizes from ~80-1200 bytes and leave the rest alone).
I will test in 30 minutes after my work and keep you informed of the results of my gaming. Thanks for your work.
Do you know how to do port mirroring on firewall4? I have a thread on the subject here for the Belkin RT3200; I tried last night but in vain. Thank you. It's to check whether the traffic from my console is marked correctly with Qosify and DSCP.
Re: I'm in, I'm testing CoD now. Edit 18:15.
Hello everyone. After 1h30 of play the results are very good; in terms of feel I have great hitreg for the moment, even with 5 devices connected simultaneously in my house, including the neighbour who asked me for my Wi-Fi. I can't copy my tc -s qdisc on the Mac at the moment, so I'm attaching a capture if you want to interpret the results ;)
Well, all we can see is that there is traffic in all 4 tins, but drops only in Bulk and Best Effort (let's ignore the 2 drops in the wan's egress Video tin), indicating that the two high-priority tins do not seem to carry too much traffic. Peak delay (pk_delay) in egress Best Effort sits at 9.42ms, implying that your egress link was/is seeing quite some traffic around the time you sampled the statistics...
Personally I would not specify ptm, but simply make sure the shaper rate is at least 2% smaller than the sync rate.
Finally, the overhead looks odd: overhead 26, but at the same time the accounted max packet size is 1550 for a 1500-byte payload, which seemed unexpected to me. But this is probably the result of ptm:
(1500+26) * 65/64 = 1549.84375
-> which rounds up to 1550...
Ho hum, I still think PTM's 64/65 coding is better dealt with statically, by simply making sure the shaper rate is <= (64/65) * syncrate, but that is my personal preference.
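For reference, both the 65/64 per-packet expansion and the static 64/65 shaper ceiling are simple arithmetic (my own sketch):

```python
import math

# PTM's 64b/65b line coding puts 65 octets on the wire per 64 octets of payload.
# Cross-check of the max-packet calculation above:
assert math.ceil((1500 + 26) * 65 / 64) == 1550  # 1549.84375 -> 1550

# The static alternative: cap the shaper at 64/65 of the sync rate.
def ptm_shaper_ceiling(sync_mbit: float) -> float:
    return sync_mbit * 64 / 65

print(round(ptm_shaper_ceiling(100.0), 2))  # 100 Mbit sync -> 98.46 Mbit payload
```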
OK, thanks for the clarification. My actual config looks like this:
config interface wan
option name wan
option disabled 0
option bandwidth_up 16mbit
option bandwidth_down 56mbit
option overhead_type bridged-ptm
# defaults:
option ingress 1
option egress 1
option mode diffserv4
option nat 1
option host_isolate 1
option autorate_ingress 0
option ingress_options ""
option egress_options "wash"
option options "ether-vlan"
I am behind a modem/router plus the OpenWrt router.
Maybe I should not configure "ether-vlan" in "option options"?
The result without ether-vlan is ptm overhead 22,
as in my new screenshot.
Yeah, I dislike these compound keywords and instead would add the following to egress_options and ingress_options:
mpu 68 overhead 26
And for bandwidth_up and bandwidth_down (I am not sure "bandwidth" is the best term here) I would simply plug in the rates achievable in a decent, reliable speedtest... (which automatically deals with ptm's 64/65 coding).
But that really will not change anything substantial, except fitting my personal taste better (the mpu stanza might change things, but you would need substantial ACK traffic under saturation before that is likely to become an issue).
config interface wan
option name wan
option disabled 0
option bandwidth_up 16mbit
option bandwidth_down 56mbit
option overhead_type none
# defaults:
option ingress 1
option egress 1
option mode diffserv4
option nat 1
option host_isolate 1
option autorate_ingress 0
option ingress_options "mpu 68 overhead 26"
option egress_options "mpu 68 overhead 26 wash"
option options ""
??
Since I have no device with qosify installed I have no idea whether that is a valid configuration, but if you post the tc -s qdisc
output after feeding that config to qosify we should be able to see how cake ends up being configured.
Yes, it seems good now, thanks. I will try it.
It's hard to get an idea of the game, because unfortunately the game I'm playing is full of cheaters: some have modified controllers, others use so-called wallhacks, aimbots, etc. But I know one thing: the way my character moves, whether it's fluid or not. This evening I had good hitregs but two or three character-movement problems. The servers were updated today; I will test tomorrow to see if they have stabilized, because I think the problem comes from them and not from my home network, as my line is good, having checked everything ;)
I tried it, but I didn't notice any improvement.
I find it works best for me to prioritize just my game and leave everything else in the best-effort class.
Hello. If you prefer it that way, sure, but the current settings are really very good on my side ;)
I think you misunderstand me. I am not saying that it does not work, just that on my link it does not seem to work.
Hello everyone, @moeller0: can you give me the qosify options to use for ADSL, and for fiber optic, please? Thanks in advance.
I can only guess:
atm mpu 96 overhead 48
On ADSL quite a number of different encapsulations can be used (including ptm), so this is really guesswork. Alternatively, if the link uses ATM/AAL5 you can follow https://github.com/moeller0/ATM_overhead_detector and try to heuristically deduce the actual per-packet overhead of the link; 48, however, is the largest per-packet overhead I have personally encountered. mpu 96, the payload of 2 ATM cells, is also mainly a guess.
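The ATM guesses can be sanity-checked with the cell math (a sketch of the general idea, not tied to any particular encapsulation): ATM carries everything in 53-byte cells with 48 bytes of payload, so each packet plus its per-packet overhead is padded up to a whole number of cells.

```python
import math

ATM_CELL = 53      # bytes on the wire per ATM cell
ATM_PAYLOAD = 48   # payload bytes carried per cell

def atm_wire_bytes(packet: int, overhead: int) -> int:
    """On-wire size of one packet on an ATM/AAL5 link: packet plus
    per-packet overhead, padded up to a whole number of 53-byte cells."""
    cells = math.ceil((packet + overhead) / ATM_PAYLOAD)
    return cells * ATM_CELL

# With 48 bytes of overhead even a tiny packet spans two cells,
# which is where the mpu 96 guess (two cells of payload) comes from.
print(atm_wire_bytes(64, 48))    # small packet
print(atm_wire_bytes(1500, 48))  # full-size packet
```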
That is not specific enough, as it depends on the exact link; e.g. most active optical Ethernet will require something like mpu 84 overhead 38, but GPON/XGSPON will be different...
Why? Isn't that the usual standard used by ISPs?