For a network that 10 people use, I think 5 seconds would not be a good fit?

According to my notes, the obviously correct but unhelpful number is 42...

Just kidding. The point is that you really need to understand that prioritization (under full load) is a zero-sum game: for every packet you transmit preferentially faster, some other packet(s) need to be delayed. Bulk detection really just allows you to automatically declare largish data transfers as less important and de-prioritize them, making some more 'room' for prioritizing other packets, but at the potential cost of lower throughput for the bulk flows.

Which thresholds to use to declare a flow as bulk, or to shift it back from bulk to a different class, really depends on what you want to achieve here, and it is unlikely that my thresholds would work for your traffic mix (mind you, I do not use qosify yet and do not have any rules); see the sketch of the relevant knobs below.
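To make the knobs concrete, here is a minimal sketch of a qosify class section. I have not run qosify myself, so treat the option names (taken from qosify's shipped example config) and especially the values as assumptions to verify against the README:

config class voice
	option ingress CS6
	option egress CS6
	# demote a flow once it sustains more than ~100 packets/s...
	option bulk_trigger_pps 100
	# ...and only let it back into this class 5 s after it calms down
	option bulk_trigger_timeout 5
	# DSCP that demoted (bulk) flows get remarked to
	option dscp_bulk CS0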

smashed my git push the other day to ~150 kbps with no other traffic on the network (30 Mbit/s pipe)... (but that seems to be more an artifact of the diffserv4 LE bandwidth allocation)

definitely need to tune the params somewhat... then again... I might not have flushed my old custom rules properly either...

That would be a bug worth reporting on LKML or netdev; without any competing traffic the bulk tier should still get the full capacity, albeit with largish target and interval values IIRC. So pure LE traffic without competition should still saturate a link.
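If you want to test that hypothesis, here is a sketch of a quick check (assuming you have an iperf3 server to push against; LE is DSCP 1, which makes a TOS byte of 1 << 2 = 0x04):

# upload-saturation test with LE-marked packets and no competing traffic;
# replace the server name with your own iperf3 endpoint
iperf3 -c my.iperf3.server -t 30 --tos 0x04

If that alone cannot saturate the link, the LE/bulk tin is not borrowing idle capacity the way it should.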

That said, personally I typically up-prioritize a few things like ssh/mosh and let cake's sparse-flow boosting deal with the rest, which for my traffic patterns typically works satisfactorily.

What does tc -s qdisc report, and is your git mainly uploading or downloading?
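(If you only want the shaped interface, something like this narrows the output down; the device name is a placeholder for your actual WAN or ifb device:)

tc -s qdisc show dev wan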


trashed it straight away and went back to sqm-scripts (next time I will gather more info); it was a git push, so upload...

at the time I assumed it was http/s, but git is all ssh these days, so I'm assuming I might need to remove the + here to circumvent bulk detection...?

[root@dca632 /usbstick 55°]# grep -A1 SSH /etc/qosify/00-defaults.conf 
# SSH
tcp:22		+video

or I really did have some stale rules somewhere...
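For reference, the experiment I have in mind would just be this one-character change; whether dropping the '+' actually bypasses bulk detection is exactly what I am unsure about:

# SSH
tcp:22		video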

but the lowest tin on diffserv4 was really tiny... less than 1 Mbit I think... from memory (could be wrong)

ok, just fired it up (for tin width/thresh):

 capacity estimate: 36Mbit
                   Bulk  Best Effort        Video        Voice
  thresh       2250Kbit       36Mbit       18Mbit        9Mbit

was not really having a good crack at it, so take the report with a grain of salt I suppose... but as the discussion above was about bulk stuff, I could not resist throwing the experience out there :wink:


I run similar rules with sqm-scripts and have never had issues... so my assumption was it had to do with qosify itself... and I can't rule out the git client playing some role here, I suppose... I think it's a bit more 'considered' than typical network clients...


This is a bit misleading, as it only shows the guaranteed capacity of the bulk tier; without competing traffic in the other tiers, bulk-classified traffic is still supposed to use the full capacity. (For the other tiers, the threshold shows up to which rate flows stay in that tier; if there is too much traffic, some packets/flows get demoted to lower-priority tiers.)
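FWIW, the numbers reported above match cake's fixed diffserv4 fractions of the shaped rate (IIRC rate/16 for Bulk, the full rate for Best Effort, rate/2 for Video, rate/4 for Voice):

Bulk:        36 Mbit / 16 = 2.25 Mbit (the reported 2250 Kbit)
Best Effort: 36 Mbit / 1  = 36 Mbit
Video:       36 Mbit / 2  = 18 Mbit
Voice:       36 Mbit / 4  = 9 Mbit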


cheers for that... I was not sure about it, so next time I mess around I can be a bit more confident about where to look...

I did not pay special attention, but there is no doubt it was heavily clamped to a steady crawl... (hence the references/description above)... fairly certain it's not cake, so it's my end or qosify I guess...

ok... looks like I might have been talking rubbish, as I just re-ran the test and git push showed around 2.5 MiB/s

tc -s qdisc
ingress status:
qdisc cake 1: root refcnt 2 bandwidth 36Mbit diffserv4 dual-dsthost nat nowash ingress no-ack-filter split-gso rtt 100ms raw overhead 0 
 Sent 2446666 bytes 35097 pkt (dropped 0, overlimits 13533 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 16426b of 4Mb
 capacity estimate: 36Mbit
 min/max network layer size:           60 /    1514
 min/max overhead-adjusted size:       60 /    1514
 average network hdr offset:           14

                   Bulk  Best Effort        Video        Voice
  thresh       2250Kbit       36Mbit       18Mbit        9Mbit
  target         8.07ms          5ms          5ms          5ms
  interval        103ms        100ms        100ms        100ms
  pk_delay          0us        685us         94us         16us
  av_delay          0us         32us          9us          1us
  sp_delay          0us          4us          6us          1us
  backlog            0b           0b           0b           0b
  pkts                0          502        34562           33
  bytes               0       134247      2309421         2998
  way_inds            0            0            0            0
  way_miss            0          105            2           17
  way_cols            0            0            0            0
  drops               0            0            0            0
  marks               0            0            0            0
  ack_drop            0            0            0            0
  sp_flows            0            1            0            1
  bk_flows            0            1            0            0
  un_flows            0            0            0            0
  max_len             0         3028         1584          131
  quantum           300         1098          549          300

must have been some other external factors involved (e.g. competing traffic, something time-of-day related, or sqm accidentally left running at the same time > nope, it was likely server-side clamping). Apologies everyone for the noise.
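(For scale: 2.5 MiB/s is roughly 21 Mbit/s, well above the 2.25 Mbit bulk guarantee on my 36 Mbit shaper, so this fits bulk borrowing idle capacity as described above:

2.5 MiB/s = 2.5 * 1048576 * 8 bit/s ~= 21 Mbit/s)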

Of course, as I said, I only prioritize my game's ports and nothing more. When the 100 pps threshold is exceeded, a flow has to leave the priority queue for 5 seconds to normalize before it can return to it. From what I understood, that 5-second timeout seems short to me, since I suppose more flows that count as bulk may come out of that queue, and several users share my network; hence my doubt.

sorry, my language is not English, so the translator did what it could
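(In case that 5 seconds maps to the bulk_trigger_timeout from the sketch further up, then lengthening it for a network with several users would, hypothetically, just be:

	option bulk_trigger_timeout 15)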

No need to apologize as far as I am concerned; no code is perfect and bugs do happen. It is IMHO also just clear that the reported minimum capacity looks sane; it is still possible that actual bulk traffic runs into problems in cake...

So again, I never tested qosify and hence am probably not the best person to ask.

That said, I would simply disable the bulk de-prioritization rules during testing (sketched below); remember: leave EVERYTHING in best effort except for the game.

If that works, I would slowly start to reintroduce all rules I consider reasonable/helpful, but one by one, to make sure none of these rules degrade/negatively affect gaming. But keep in mind that gaming depends on many factors, so I would probably give each individual change a week or so to develop a judgement on whether it hurts gaming.
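Concretely, for the test phase I would expect that removing (or commenting out) the trigger options disables the demotion; again a sketch based on qosify's example config, not something I have verified:

config class voice
	option ingress CS6
	option egress CS6
	# bulk demotion disabled for the test by leaving the triggers out:
	# option bulk_trigger_pps 100
	# option bulk_trigger_timeout 5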

I suppose the creator of qosify, @nbd, can help with these doubts that I have.

Possible, but my hunch is that your questions are less functional ('how does qosify work') and more policy related ('how should I configure qosify to achieve goal X'). I wonder whether anybody will be able to take the burden of selecting the 'optimal' policy for your network off your shoulders.


Do I also deactivate the de-prioritization rules for the game, or leave them active?

Your choice, but the simplest test is literally to keep EVERYTHING in best effort BUT your game's packets (see the sketch below). If that actually helps, then it makes sense to spend more time refining your qosify configuration. BUT if that does NOT solve your issues with the game, you need to explore different solutions.
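A sketch of what such a minimal rules file could look like; the file name, port, and class here are all placeholders, so substitute your game's actual ports:

# /etc/qosify/99-game-test.conf (hypothetical)
# everything else stays in best effort; only the game gets boosted
udp:3074	voice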

okay. I already put everything in the best effort class, including my game.

You are free to do that, but to test whether prioritization can help, I would put the game, and ONLY the game's packets, into a higher-priority bin. As it appears we are communicating past each other here, and I am clearly not helping, I will try to reduce my footprint in this thread.


Well, I don't know if it's weird, but when I put my game (wz) in the best effort class, the latency stayed low and I was able to play smoothly.

Last post from me, and purely speculative.

This might show that:
a) the typical disturbances in your home network that hinder your gaming were inactive.
b) the problems upstream of your home in your ISP's network were absent during the test.
c) both of the above.

As always, it is worth testing each change for at least a week to get a feel for what actually helps and what does not. Ideally one would use some hard measurements and compare hard numbers, but these are, as far as I understand, somewhat hard to get, since gaming services do not offer great measurement infrastructure to test against (a crude stand-in is sketched below).
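For example, one could log RTT to a stable reference host during each gaming session and compare the logs across configuration changes; the host here is just an example:

# one probe per second for an hour, saved with a date stamp
ping -c 3600 1.1.1.1 > /tmp/rtt-$(date +%F).log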

I also thought the same, but I went out to observe and everyone was using the network. As you say, I would have to try it like that for a week, I guess.


I did the test by putting the DNS ports in the video class, and the latency went up.