You don't need to add veth0 and veth1.
For your rules, would you prefer them in the FORWARD chain or in the POSTROUTING chain? Please explain your logic.
When do you speculate this will become available, and is there a way to test out this approach now?
Where did you find this config?
I do not use this script at all, so I cannot answer your question, sorry.
It seems strange to me that it doesn't work for you. I am using the script on an RPi4B and it works fine; I haven't noticed the internet going down.
This sounds like a very interesting solution.
Where can I learn more about this approach? Is there a thread about it or a package or repo?
This looks excellent. We just need dnsmasq 2.87 so that we can tag traffic by domain via nftset/ipset.
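For reference, once dnsmasq ≥ 2.87 is available, domain-based tagging would look roughly like this in /etc/dnsmasq.conf (the domain, table, and set names below are hypothetical placeholders, not tested config):

```
# dnsmasq >= 2.87: add resolved addresses for these domains to an nftables set
nftset=/xboxlive.com/4#inet#fw4#xcloud
# pre-2.87 equivalent using an ipset
ipset=/xboxlive.com/xcloud
```

The referenced set has to exist already (dnsmasq only populates it); on OpenWrt that would typically mean a set in fw4's inet table.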
I am trying dscpclassify. My question is: how can I add my games?
config global 'global'
	option class_bulk 'le'
	option class_high_throughput 'af13'
	option client_hints '1'
	option threaded_client_min_bytes '10000'
	option threaded_service_min_bytes '1000000'
	option wmm '1'

config set
	option name 'xcloud'
	option family 'ipv4'
	option interval '1'
	# Western Europe
	list element '13.104.0.0/14'

config rule
	option name 'DNS'
	list proto 'tcp'
	list proto 'udp'
	list dest_port '53'
	list dest_port '853'
	list dest_port '5353'
	option class 'cs5'

config rule
	option name 'BOOTP/DHCP'
	option proto 'udp'
	list dest_port '67'
	list dest_port '68'
	option class 'cs5'

config rule
	option name 'NTP'
	option proto 'udp'
	option dest_port '123'
	option class 'cs5'

config rule
	option name 'SSH'
	option proto 'tcp'
	option dest_port '22'
	option class 'cs2'

config rule
	option name 'Xbox Cloud Gaming'
	option proto 'udp'
	option dest_ip '@xcloud'
	option dest_port '1000-1150'
	option class 'af41'
	option family 'ipv4'

config rule
	option name 'Teams voice'
	option proto 'udp'
	option src_port '50000-50019'
	option dest_port '3478-3481'
	option class 'ef'

config rule
	option name 'Teams video'
	option proto 'udp'
	option src_port '50020-50039'
	option dest_port '3478-3481'
	option class 'af41'

config rule
	option name 'Teams sharing'
	option proto 'udp'
	option src_port '50040-50059'
	option dest_port '3478-3481'
	option class 'af21'

config rule
	option name 'ICMP'
	option proto 'icmp'
	option class 'cs5'
	option enabled '0'
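Adding your own game follows the same pattern as the 'xcloud' entries above: define a `config set` holding the game's server ranges, then a `config rule` that matches it. A sketch (the CIDR and port below are placeholders; substitute the ranges your game actually uses, e.g. as observed in a packet capture):

```
config set
	option name 'mygame'
	option family 'ipv4'
	# placeholder: replace with your game's server ranges
	list element '203.0.113.0/24'

config rule
	option name 'My game'
	option proto 'udp'
	option dest_ip '@mygame'
	option dest_port '3074'
	option class 'cs4'
	option family 'ipv4'
```

Then restart the service (`service dscpclassify restart`) so the new set and rule are loaded.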
Credit should really go to @yelreve who created the dscpclassify repo:
Either way, it’s all good!
Looks cool. One thing, though: that includes a mapping between a special user config and nftables? Why not just have the user edit nftables directly? Something could go wrong in the mapping, and the nftables syntax seems easy enough.
Can we disable this script in the download direction? I tried it, but stopping the script only works for upload, as mentioned above. Thank you.
I have a problem with PPPoE that keeps disconnecting, so I switched from regular bridge mode to this method ("poor man's bridge mode").
So my question is: do these DSCP markings still work with this method? My gateway (non-OpenWrt) has its own QoS, which is disabled, but I'm wondering whether this messes with the markings?
I was trying to capture my gaming packets.
Here is a picture from Wireshark using sshdump.
Why does the same packet have different markings?
Look at the destination address, the CS0 marked packet goes to 192.168.2.3, the CS4 marked packet goes to 192.168.1.144, so these are clearly different flows. You will need to look into your marking rules to decide whether that makes sense or not...
So sqm-scripts itself leverages OpenWrt's hotplug system to automatically restart if an interface reconnects, not sure how/if that works with veth interfaces...
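For comparison, a hotplug hook of the kind sqm-scripts relies on is just a shell fragment under /etc/hotplug.d/iface/; a hypothetical sketch (not sqm-scripts' actual hook) that restarts SQM when a veth device comes up might look like:

```shell
# /etc/hotplug.d/iface/99-sqm-veth (hypothetical example, untested)
# Restart SQM whenever a veth device reports an ifup event.
[ "$ACTION" = "ifup" ] || exit 0
case "$DEVICE" in
	veth*) /etc/init.d/sqm restart ;;
esac
```

Whether hotplug fires ifup events for veth pairs at all is exactly the open question here.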
192.168.1.244 is my PC's LAN address that I'm using for gaming, and 192.168.2.3 is my WAN (eth0.2) address.
Are my markings working right according to Wireshark, considering I want my game packets to be marked CS4?
Well, assuming it is the same packet, its DSCP was changed from CS0 to CS4, and roughly where you would expect it, after the network address translation. So far so good.
Question: how did you set up this capture? I tend to capture from individual interfaces and hence do not see the same packet twice.
However, that does not strictly prove that the marking happened before your priority scheduler got hold of the packet. For this you should also look at the output of tc -s qdisc
and confirm that the byte and packet counters of the priority tier carrying CS4 increase when you see such packets in your captures.
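One low-tech way to do that check with diffserv8: pull the Tin 6 packet counter out of the cake statistics before and after a gaming session and confirm it grew. A small sketch (the interface name eth0.2 is taken from the stats below; adjust it to your cake-shaped device):

```shell
# Print the Tin 6 packet counter from cake's per-tin table:
# the "pkts" row has one column per tin, so Tin 6 is field 8
# (field 1 is the word "pkts", fields 2-9 are Tins 0-7).
tin6_pkts() {
    awk '/^[[:space:]]*pkts/ { print $8 }'
}

# On the router:
# tc -s qdisc show dev eth0.2 | tin6_pkts
```

Run it once, play for a minute, run it again; if only the game is marked CS4, the difference should roughly match the game's packet rate.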
There are two things to consider here:
a) do the re-marking rules work as documented
b) do your marking rules adequately describe your intent
There is not much I can say about either question (I have no first-hand experience with the script of this thread).
I'm using the remote SSH capture feature with username and password, which allows Wireshark to sniff traffic directly from the router.
root@OpenWrt:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 357989272 bytes 543773 pkt (dropped 0, overlimits 0 requeues 5)
backlog 0b 0p requeues 5
maxpacket 1399 drop_overlimit 0 new_flow_count 30 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev wlan0 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 800c: dev eth0.2 root refcnt 2 bandwidth 800Kbit diffserv8 dual-srchost nat wash ack-filter split-gso rtt 47ms atm overhead 40
Sent 1105468 bytes 7618 pkt (dropped 255, overlimits 5500 requeues 0)
backlog 0b 0p requeues 0
memory used: 89440b of 4Mb
capacity estimate: 800Kbit
min/max network layer size: 28 / 1492
min/max overhead-adjusted size: 106 / 1696
average network hdr offset: 14
Tin 0 Tin 1 Tin 2 Tin 3 Tin 4 Tin 5 Tin 6 Tin 7
thresh 800Kbit 700Kbit 612496bit 535928bit 468936bit 410312bit 359016bit 314136bit
target 22.8ms 26ms 29.7ms 34ms 38.8ms 44.4ms 50.7ms 58ms
interval 67.4ms 70.7ms 74.4ms 78.6ms 83.5ms 89ms 101ms 116ms
pk_delay 0us 97.6ms 5.42ms 20.2ms 40us 0us 1.18ms 2.03ms
av_delay 0us 3.5ms 262us 3.65ms 1us 0us 86us 39us
sp_delay 0us 1.22ms 31us 80us 1us 0us 28us 21us
backlog 0b 0b 0b 0b 0b 0b 0b 0b
pkts 0 46 4622 566 10 0 2549 80
bytes 0 59658 321666 192623 900 0 549271 5892
way_inds 0 0 0 0 0 0 21 0
way_miss 0 11 225 58 10 0 104 4
way_cols 0 0 0 0 0 0 0 0
drops 0 2 0 0 0 0 1 0
marks 0 0 0 0 0 0 0 0
ack_drop 0 0 252 0 0 0 0 0
sp_flows 0 0 1 1 0 0 0 1
bk_flows 0 0 0 0 0 0 1 0
un_flows 0 0 0 0 0 0 0 0
max_len 0 5864 1506 1270 90 0 1342 393
quantum 300 300 300 300 300 300 300 300
qdisc cake 800b: dev br-lan root refcnt 2 bandwidth 12Mbit diffserv8 dual-dsthost nonat nowash ingress no-ack-filter split-gso rtt 47ms atm overhead 40
Sent 74528683 bytes 56463 pkt (dropped 414, overlimits 81583 requeues 0)
backlog 7530b 5p requeues 0
memory used: 183276b of 4Mb
capacity estimate: 12Mbit
min/max network layer size: 28 / 1492
min/max overhead-adjusted size: 106 / 1696
average network hdr offset: 14
Tin 0 Tin 1 Tin 2 Tin 3 Tin 4 Tin 5 Tin 6 Tin 7
thresh 12Mbit 10500Kbit 9187Kbit 8039Kbit 7034Kbit 6154Kbit 5385Kbit 4712Kbit
target 2.35ms 2.35ms 2.35ms 2.35ms 2.58ms 2.95ms 3.37ms 3.85ms
interval 47ms 47ms 47ms 47ms 47.2ms 47.6ms 48ms 48.5ms
pk_delay 0us 0us 6.39ms 0us 809us 0us 2.51ms 92us
av_delay 0us 0us 5.24ms 0us 52us 0us 797us 5us
sp_delay 0us 0us 4.35ms 0us 52us 0us 92us 5us
backlog 0b 0b 7530b 0b 0b 0b 0b 0b
pkts 0 0 48489 0 40 0 8317 36
bytes 0 0 68834493 0 5724 0 6311000 8043
way_inds 0 0 0 0 0 0 21 0
way_miss 0 0 212 0 2 0 98 2
way_cols 0 0 0 0 0 0 0 0
drops 0 0 414 0 0 0 0 0
marks 0 0 0 0 0 0 0 0
ack_drop 0 0 0 0 0 0 0 0
sp_flows 0 0 0 0 1 0 1 1
bk_flows 0 0 1 0 0 0 0 0
un_flows 0 0 0 0 0 0 0 0
max_len 0 0 7530 0 518 0 1322 353
quantum 366 320 300 300 300 300 300 300
qdisc noqueue 0: dev eth0.1 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Not sure about the exact numbers, but according to cake's code and comments:
static int cake_config_diffserv8(struct Qdisc *sch)
{
	/* Pruned list of traffic classes for typical applications:
	 *
	 *	Network Control          (CS6, CS7)
	 *	Minimum Latency          (EF, VA, CS5, CS4)
	 *	Interactive Shell        (CS2, TOS1)
	 *	Low Latency Transactions (AF2x, TOS4)
	 *	Video Streaming          (AF4x, AF3x, CS3)
	 *	Bog Standard             (CS0 etc.)
	 *	High Throughput          (AF1x, TOS2)
	 *	Background Traffic       (CS1)
	 *
	 * Total 8 traffic classes.
	 */
CS4 maps into Tin 6, and your cake statistics show some packets in Tin 6, so things might work as expected. Only you can figure out whether the number of packets meets your expectations.
This BTW also illustrates why I personally would always try with the most minimal prioritization rules (e.g. only for the game in question) because then confirmation might be done by just checking if packets arrive in the targeted Tin instead of having to look closely at the counts. But maybe you are lucky and no other application but your game gets its packets marked CS4.
When I ran tc -s qdisc I was playing the game, so the traffic mapped into Tin 6 for sure belongs to the game, and it doesn't require much bandwidth anyway, so I think it's working as intended.