Packet Steering question

I think that's more suited to a server environment vs a router in this case. I've tested 1 queue pair vs 2 with the default hash (equal 1 or weight 0 1), and the default of 2 pairs with equal 2 (spreading the hash more evenly without a hash key) performs better. 1 pair seems to cause more CPU usage concentrated on only 2 cores instead of spread across them. I guess the jury is still out on that. My tests have been with nperf.com and Waveform.
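For reference, this is roughly how I'd express the two layouts being compared with ethtool; eth0 is a placeholder for the actual NIC, and the exact commands are a sketch of my setup, not gospel:

```
# Setup 1: a single queue pair
ethtool -L eth0 combined 1
ethtool -X eth0 equal 1        # or steer the hash with: ethtool -X eth0 weight 0 1

# Setup 2: two queue pairs, hash indirection spread evenly, no custom hash key
ethtool -L eth0 combined 2
ethtool -X eth0 equal 2
```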

new driver error from that link :frowning_face:
Cannot get RX network flow hashing options: Not supported
This means it's not spreading across the cores effectively.
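For anyone wanting to reproduce this: the error above is what ethtool prints when the driver doesn't implement the RX flow-hash query. Assuming eth0 as a placeholder interface:

```
# Ask how the driver hashes UDP-over-IPv4 flows; prints
# "Cannot get RX network flow hashing options: Not supported"
# when the driver lacks support:
ethtool -n eth0 rx-flow-hash udp4

# On drivers that do support it, UDP hashing can be widened to
# include ports as well as addresses (s=src IP, d=dst IP, f=src port, n=dst port):
ethtool -N eth0 rx-flow-hash udp4 sdfn
```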

There is a bug tracker on the driver repository. It would be useful to file a ticket there about the issue with the UDP RX flow hash function.

The SQM script simplest_tbf.qos gives great performance and the lowest CPU usage, roughly 15% per core, for 200/20 DOCSIS.
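For comparison, a minimal /etc/config/sqm stanza selecting that script might look like the following; the interface name and shaping rates are placeholders for a 200/20 DOCSIS link, not my exact values:

```
config queue 'wan'
	option interface 'eth1'
	option enabled '1'
	option download '170000'
	option upload '18000'
	option script 'simplest_tbf.qos'
```

Rates are in kbit/s and set somewhat below the provisioned line speed so the shaper, not the modem, owns the queue.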

Cool, thanks for your contributions in this thread.

Can you please confirm whether you are using the stock OpenWRT igb driver or the csrutil ones?

The stock one in 22.03.3 seems to support RSS already, so I'm leery of using another driver.

One more question: are you using igb RSS 2,2 in /etc/modules.d/35-igb, or still using your script that uses ethtool to set up the NICs?

I'm currently using the stock v22.03.3 combined squashfs image, as ANY EFI build has weird errors ranging from stalled reboots to loading issues.

RSS=x,x,x,x has no effect on the stock driver; I'm still using my ethtool script.
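For context, I believe the RSS= parameter comes from Intel's out-of-tree igb driver; the in-kernel igb that stock OpenWrt ships doesn't expose it, which would explain why it has no effect. A sketch of the two approaches (eth0/eth1 are placeholders):

```
# Out-of-tree Intel igb only (hypothetical /etc/modules.d/35-igb line):
#   igb RSS=2,2

# Roughly equivalent effect on the in-kernel driver, done at runtime instead:
ethtool -L eth0 combined 2
ethtool -L eth1 combined 2
```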

RSS is enabled, but only select drivers such as Mellanox's or Nvidia's actually take advantage of the hash that assigns flows to CPUs, so RSS should be set to 1 queue per CPU and combined with RPS and RFS.

By default Linux supposedly uses the sysctl /proc/sys/net/core/netdev_rss_key, but in practice the drivers don't take advantage of this either, so RSS is kind of useless if you can't guide the system on how to direct the queues.
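A sketch of steering via RPS/RFS instead, assuming a dual-core box and eth0 as a placeholder interface (the sizes are illustrative, not tuned values):

```
# Spread receive processing for queue 0 across both CPUs (bitmask 0x3 = CPU0+CPU1)
echo 3 > /sys/class/net/eth0/queues/rx-0/rps_cpus

# Enable RFS: size the global flow table, then give the queue its share
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 32768 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
```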
