Network traffic prioritization

When I have Discord open while playing via NGN (Nvidia GeForce Now),
I frequently get a spotty-connection warning in NGN, which goes away when I close Discord. This is annoying because it means I can't use Discord to chat or voice call while I'm playing games.

Is there a way to prioritize the Nvidia GeForce Now traffic over the Discord traffic?
My router is a TP-Link Archer C60 v2.

I apologize if this makes little sense, I'm not very experienced.

EDIT: Added router model.

Not sure how much traffic/how many flows Discord uses, but I would not be surprised if installing and enabling SQM helps. See:

and

for instructions. This is not guaranteed to fix your problem, but often does.
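A minimal sketch of the usual install steps (package names as in the official OpenWrt repositories; your build may differ):

  # install the SQM scripts plus the LuCI front-end
  opkg update
  opkg install sqm-scripts luci-app-sqm
  # enable and start the service
  /etc/init.d/sqm enable
  /etc/init.d/sqm start

SQM then shows up under Network → SQM QoS in LuCI.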

Out of curiosity, how fast is your internet access (in the download and upload directions), and what technology (DSL/Cable-DOCSIS/Fiber/?) are you using?

I have 200 Mbit/s download and 60 Mbit/s upload.
I'm not sure how to answer the technology question.
I live in an apartment building. It's supposed to be fiber, but it enters the apartment through an Ethernet cable, and the router uses the PPPoE protocol.

Ah okay, your router will not be capable of traffic shaping anywhere close to your contracted rates. I would guess something like 60-80 Mbps combined over both directions (which you can split as you like) is the limit for SQM on that hardware.

OK, so not DSL or DOCSIS then.

Dang, PPPoE has a noticeable CPU cost, which does not help on your router.

If you want to test SQM, maybe start by setting the shaper to, say, 30/30 (or even 25/25) Mbps, just to start with values your hardware should actually be able to deliver (and that might still allow you to test whether SQM helps). At the same time, 30 Mbps might be too little for game streaming and Discord, so also try, say, 50/10 to account for the expected higher download traffic.
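For reference, a sketch of setting such test values from the command line (assuming SQM is installed and the first/default queue section is the one in use; sqm-scripts expects rates in kbit/s):

  # 30/30 Mbps test values, in kbit/s
  uci set sqm.@queue[0].download='30000'
  uci set sqm.@queue[0].upload='30000'
  uci commit sqm
  /etc/init.d/sqm restart

The same commands with 50000/10000 give the 50/10 variant.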

Assuming that actually helps (which is not guaranteed) you can either opt to:
a) just accept the steep throughput cost all the time and leave SQM enabled
b) only enable SQM when you need its better latency-under-load behaviour (see the toggle commands after this list)
c) buy a beefier router that can actually traffic shape at 200+60 Mbps (which is harder than one typically assumes)
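For option b), toggling SQM on demand is just a service stop/start (a simple sketch; note that a reboot will start SQM again as long as the service stays enabled):

  /etc/init.d/sqm stop    # full throughput, no shaping
  /etc/init.d/sqm start   # better latency under load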

I installed SQM and tried 45/15. It seems to be working a bit better,
but I still get better quality in my cloud gaming when I close Discord.

Interesting. Could you post the output of the following commands from the router's command line (use SSH to log in)?

  1. cat /etc/config/sqm
  2. tc -s qdisc
  3. ifstatus wan | grep -e device
config queue 'eth1'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option linklayer 'none'
        option interface 'pppoe-wan'
        option download '45000'
        option debug_logging '0'
        option verbosity '5'
        option enabled '1'
        option upload '15000'
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 25616803926 bytes 26346169 pkt (dropped 0, overlimits 0 requeues 10)
 backlog 0b 0p requeues 10
  maxpacket 1514 drop_overlimit 0 new_flow_count 13 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 1734001535 bytes 12462951 pkt (dropped 0, overlimits 0 requeues 431)
 backlog 0b 0p requeues 431
  maxpacket 1502 drop_overlimit 0 new_flow_count 486 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth0.1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8009: dev pppoe-wan root refcnt 2 bandwidth 15Mbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead 0
 Sent 1312732685 bytes 12293882 pkt (dropped 5336, overlimits 411621 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 3851712b of 4Mb
 capacity estimate: 15Mbit
 min/max network layer size:           30 /    1480
 min/max overhead-adjusted size:       30 /    1480
 average network hdr offset:            0

                  Tin 0
  thresh         15Mbit
  target            5ms
  interval        100ms
  pk_delay        823us
  av_delay         74us
  sp_delay         21us
  backlog            0b
  pkts         12299218
  bytes      1320439410
  way_inds       274399
  way_miss        89125
  way_cols            0
  drops            5336
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len         31080
  quantum           457

qdisc ingress ffff: dev pppoe-wan parent ffff:fff1 ----------------
 Sent 48937104761 bytes 43145154 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 800a: dev ifb4pppoe-wan root refcnt 2 bandwidth 45Mbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100ms raw overhead 0
 Sent 48885344632 bytes 43109230 pkt (dropped 35924, overlimits 59125076 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 1344Kb of 4Mb
 capacity estimate: 45Mbit
 min/max network layer size:           28 /    1480
 min/max overhead-adjusted size:       28 /    1480
 average network hdr offset:            0

                  Tin 0
  thresh         45Mbit
  target            5ms
  interval        100ms
  pk_delay        493us
  av_delay        107us
  sp_delay         13us
  backlog            0b
  pkts         43145154
  bytes     48937104761
  way_inds       361752
  way_miss       113972
  way_cols            0
  drops           35924
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          1480
  quantum          1373
 "l3_device": "pppoe-wan",
        "device": "eth1",

Mmmh, given the massive difference between the shaper speed and the true link speed, I would recommend using the following /etc/config/sqm:

config queue 'eth1'
        option ingress_ecn 'ECN'
        option egress_ecn 'ECN'
        option itarget 'auto'
        option etarget 'auto'
        option verbosity '5'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option qdisc_advanced '1'
        option squash_dscp '0'
        option squash_ingress '1'
        option qdisc_really_really_advanced '1'
        option eqdisc_opts 'nat dual-srchost'
        option linklayer 'ethernet'
        option linklayer_advanced '1'
        option tcMTU '2047'
        option tcTSIZE '128'
        option linklayer_adaptation_mechanism 'default'
        option debug_logging '1'
        option iqdisc_opts 'nat dual-dsthost ingress'
        option interface 'pppoe-wan'
        option tcMPU '84'
        option enabled '1'
        option overhead '42'
        option download '45000'
        option upload '15000'
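
After pasting that into /etc/config/sqm, reload SQM so the new options take effect:

  /etc/init.d/sqm restart
  tc -s qdisc | grep cake   # confirm both cake instances picked up the new settings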

The overhead/tcMPU settings are relevant if lots of small packets (like TCP-ACKs) are being sent and if the shaper rate is close to the true bottleneck/link speed; so in your current case this is mostly cosmetic but prepares your sqm configuration for operating closer to the true limit.
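To make the accounting concrete: cake charges each packet max(packet size + overhead, MPU) bytes against the shaper rate. A small illustrative sketch with the overhead '42' and tcMPU '84' values from the config above (the function name is made up for illustration, it is not an sqm option):

  # illustrative only: how cake accounts per-packet size
  accounted_size() {
      size=$(( $1 + 42 ))            # add the configured per-packet overhead
      [ "$size" -lt 84 ] && size=84  # enforce the minimum packet unit (MPU)
      echo "$size"
  }
  accounted_size 52    # bare TCP ACK: 52+42 = 94 bytes accounted
  accounted_size 30    # tiny packet: 30+42 = 72 -> rounded up to 84 (MPU)
  accounted_size 1460  # full-size segment: 1460+42 = 1502 bytes accounted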

The nat dual-xxxhost settings configure cake for per-internal-IP isolation, which in theory should do the right thing; except that in your case, with Discord and GeForce Now running on the same computer, it will not change much...

Regarding the fact that the issue is still there, it would be good to check whether the router's CPU is maxed out while playing and using Discord. The quick and dirty way of doing that on your single-core router (on a multi-core router you would need htop and to adjust its configuration) is:

  1. log into the router via SSH
  2. `top -d 1`
  3. start playing/discording
  4. Calculate 100 - %idle as an indicator of the CPU load. As an example, without load:
Mem: 75760K used, 44728K free, 33160K shrd, 0K buff, 47456K cached
CPU:   0% usr   2% sys   0% nic  95% idle   0% io   0% irq   0% sirq
Load average: 0.00 0.00 0.00 2/88 18544

so the CPU load would be 100 - 95 = 5%; the device is not doing anything.

with a speedtest:

Mem: 76296K used, 44192K free, 33160K shrd, 0K buff, 47660K cached
CPU:   1% usr   6% sys   4% nic  33% idle   0% io   0% irq  54% sirq
Load average: 0.18 0.06 0.01 2/87 18595

so the CPU load would be 100 - 33 = 67%, which is a lot; given that this example is from a dual-core router, 67% actually means one core was maxed out ('50%' of the total while the second core was at '17%' of the total).

  5. Use CTRL-C to end top (or press q to end htop).
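If watching top interactively while gaming is awkward, a one-shot batch variant works on most BusyBox builds (flag support can vary by build):

  # one snapshot of the CPU summary line; CPU load = 100 - idle
  top -bn1 | grep 'CPU:'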

If the load gets into the 90-100% range during your gaming when the issue occurs, this would indicate the router already being overloaded (leaving the options of reducing the shaper rates even further or getting a beefier router). The first option, however, is likely to run into the issue that GeForce Now requires 15 Mbps for 720p, but already >= 25 Mbps for 1080p (and more for higher spatial and temporal resolutions).

Thanks, I tried your settings.
With Discord in a voice call + GeForce Now + OBS streaming on my computer,
and a live stream running on the smart TV, the lowest idle % I got was 37%.

Seems to be working great.

That indicates that you might have some slack for higher shaper rates; maybe try 45/20?

Where cake really shines is in isolating different machines from each other, so it should isolate the smart TV stream from the voice call + GeForce Now + OBS on your computer. However, voice call + GeForce Now + OBS will all compete directly on your computer. There cake will still offer per-flow fairness, which works fine as long as your applications use similar numbers of flows for their main data.

Could you elaborate on this, please?

Nice, I'll try that and see how it goes. Thanks again

cake by default uses stochastic per-flow queueing, so unless you have high numbers of concurrent flows, each flow will find its own dedicated queue, and these queues are serviced fairly, so that all queues (that have queued packets) get an equal share of the capacity. If one application, say the OBS stream, uses a single flow, but another application, say a bittorrent client seeding to 100 peers, uses 100 flows, each flow will get its equitable share of the capacity, but on the application level bittorrent will get 100/101 (~99%) of the link's capacity... if, however, the bittorrent client also used only a single flow, each application would get 1/2 (50%).
SQM with cake and the proposed configuration will, however, first share the capacity equally between the active IP addresses, and within each IP address by flow. So in our example, if we move the bittorrent client to a different computer with a different IP, in spite of its 100 flows it will only get as much capacity as the single OBS stream on the other machine.
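To put numbers on those shares (illustrative arithmetic only; the awk call just does the division):

  # shares under pure per-flow fairness, all 101 flows on one host
  awk 'BEGIN { printf "torrent: %.1f%%  OBS: %.1f%%\n", 100*100/101, 100*1/101 }'
  # prints: torrent: 99.0%  OBS: 1.0%
  # with dual-srchost/dual-dsthost and the torrent on a second host,
  # cake splits per host first: each machine gets 50%, regardless of flow count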

Does this make sense?

Yes, very well explained. Thanks!
