R7800 and PS4 performance

Looking to get the best performance now that I've figured out my speed issues. I have the latest OpenWrt firmware. SQM is enabled with fq_codel and simple.qos, and software flow offloading is on. DNS is 1.1.1.1. The PS4 is wired behind the router.

I have a moderate (Type 2) NAT. It was Open NAT before installing OpenWrt. Also looking to get better latency.

What kind of performance are you looking for here? Wireless? WAN to LAN (Ethernet)? USB?

Have you looked around at the numerous existing R7800 topics relating to performance?

^^ Wired-behind-router performance, looking for the best latency and Open NAT.

What performance issue are you facing currently? Also, you didn’t answer my question about whether you’ve looked at the many R7800 threads already here. There’s a lot of good info in existing threads, and many contain things you can tweak for extra performance.

lol I said it twice man.. I need Open NAT and better latency for the PS4

I’ve looked, yes.
Haven’t tweaked CPU or other options beyond what I listed.

So what do you want then? You have options you haven’t tried—why not try them?

You have given virtually nothing to go off of here. You haven’t explained your configuration with any real details. Connection speed? Connection type? What latency are you seeing now? What performance issues are you facing now, specifically? Details matter.

Help us help you.


Wanted to fix latency and NAT type first.
I have cable, 600 down / 40 up.
Latency spikes from 75 ms to 150 ms.
Performance issues: lag due to latency.

For the record, fixing latency first isn’t a mutually exclusive thing from CPU and other performance tweaks. They typically are very much hand-in-hand, especially with SQM in play. SQM when tuned a bit provides excellent improvements in overall latency, but it comes at the expense of CPU processing to shape your flows. Don’t discount CPU at this point thinking you can just deal with it later.

Also, if you really have read through many of the other R7800 performance threads, you should have seen mention of CPU governor settings. Don’t assume those are a “later thing” either. If you want packets to move from WAN to LAN and back faster, your CPU has to be fast enough to get the packets processed. Again, SQM needs CPU power, and how quickly your CPU ramps to full speed also plays a part in that. So go back and look at governor settings and things like “up_threshold”, and don’t put them off until after your latency is fixed.
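For reference, the governor tweak that shows up in many of those threads looks roughly like this (a sketch only; the sysfs paths and the specific values of 50 and 100 are assumptions based on the ondemand governor on the R7800's ipq806x SoC, so verify the paths exist on your device first):

```shell
# Make the ondemand governor ramp the CPU up sooner and hold
# full speed longer while SQM is shaping traffic.
# (Paths can differ between kernels; check with `ls` first.)
echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo ondemand > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
# Scale up once load passes 50% instead of the default 95%.
echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold
# Stay at the higher frequency longer before sampling back down.
echo 100 > /sys/devices/system/cpu/cpufreq/ondemand/sampling_down_factor
```

To make this persist across reboots, these lines usually go into /etc/rc.local.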

If you can get some CPU tweaks in place and working well, you might even be able to try cake/piece_of_cake for an even better experience. But I’m not sure whether your R7800 can handle cake at 600 Mbps of ingress.
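If you do try cake later, the switch is small (a sketch; piece_of_cake.qos ships with the sqm-scripts package, and `@queue[0]` assumes your first/only SQM queue section):

```shell
# Swap SQM from fq_codel/simple.qos to cake/piece_of_cake.
uci set sqm.@queue[0].qdisc='cake'
uci set sqm.@queue[0].script='piece_of_cake.qos'
uci commit sqm
/etc/init.d/sqm restart
```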

Questions for you...

What version of OpenWrt are you running?

Did you look into irqbalance, or some other manual IRQ balancing?

Could you post your SQM config here?

Are the latency spikes random or do you see them only when running a speed test?

Have you looked at your CPU load (both cores) when you notice the latency spike(s)? If so, what kind of load are you seeing?

What is the actual measured speed you are seeing during a wired speed test?

FWIW, I don’t own a PS4. But from some initial looking I just did, type 2 NAT appears to be Sony’s recommendation. Type 1 (open) would indicate no firewall in front of the PS4 which would be a very insecure way to operate. (https://ps4dns.com/how-to-change-nat-type-ps4/#Ps4_NAT_Types)


My apologies for not being more specific
Let me get back to you on those later today

I'm open to doing the CPU tweaks now..

What version of OpenWrt are you running?
OpenWrt 19.07.3 r11063-85e04e9f46 / LuCI openwrt-19.07 branch git-20.136.49537-fb2f363

Did you look into irqbalance, or some other manual IRQ balancing?
no, not sure what that is.
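Quick background: irqbalance spreads hardware interrupt handling (including the Ethernet NIC's interrupts) across both CPU cores instead of letting everything land on cpu0. Setting it up on 19.07 is roughly this (a sketch; the uci step assumes the stock /etc/config/irqbalance file the package installs, where it ships disabled):

```shell
# Install irqbalance and enable it (it ships disabled by default).
opkg update
opkg install irqbalance
uci set irqbalance.irqbalance.enabled='1'
uci commit irqbalance
/etc/init.d/irqbalance enable
/etc/init.d/irqbalance start
# Sanity check: interrupt counts should now grow in both CPU columns.
cat /proc/interrupts
```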

Could you post your SQM config here?
root@OpenWrt:~# cat /etc/config/sqm
config queue
    option debug_logging '0'
    option verbosity '5'
    option qdisc 'fq_codel'
    option script 'simple.qos'
    option linklayer 'none'
    option interface 'eth0'
    option download '0'
    option qdisc_advanced '0'
    option enabled '1'
    option upload '27500'

Are the latency spikes random or do you see them only when running a speed test?
random

Have you looked at your CPU load (both cores) when you notice the latency spike(s)? If so, what kind of load are you seeing?
no, not sure how to do that.
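For the CPU load question: it's just a couple of commands over the same SSH session, nothing R7800-specific. Run them while you're seeing a latency spike (or during a speed test):

```shell
# Live view of CPU usage per process (press q to quit):
top
# One-shot: the 1/5/15-minute load averages...
cat /proc/loadavg
# ...and cumulative per-core counters (cpu0/cpu1 lines):
grep '^cpu' /proc/stat
```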

What is the actual measured speed you are seeing during a wired speed test?
I have a 1 Gb connection with 40 or 50 up.
Xfinity came out two days ago and tested 762 down behind the modem, and 704 down behind the router on Ethernet port 1.
My iPhone gets 500 Mbps on 5 GHz.
My laptop is pretty old and only gets 10 download, and 300 wired.

The PS4 works great when no one is using WiFi.
Then when someone uses Netflix I get horrible lag and latency :confused:

Do you have speedtest-netperf installed? If not, could you install and run it, then post your result here?

installed.. where do I find and run it?

You'll need to SSH into your router and run:
# speedtest-netperf.sh

Did that.. it came back with errors, and I'm not sure how to use the commands from the readme file.

root@OpenWrt:~# speedtest-netperf.sh
2020-07-23 01:32:18 Starting speedtest for 60 seconds per transfer session.
Measure speed to netperf.bufferbloat.net (IPv4) while pinging gstatic.com.
Download and upload sessions are sequential, each with 5 simultaneous streams.
.
WARNING: netperf returned errors. Results may be inaccurate!

 Download:   0.00 Mbps
  Latency: [in msec, 0 pings, 0.00% packet loss]
 CPU Load: [in % busy (avg +/- std dev), 0 samples]
 Overhead: [in % used of total CPU available]
  netperf:   0.0
.
WARNING: netperf returned errors. Results may be inaccurate!

   Upload:   0.00 Mbps
  Latency: [in msec, 1 pings, 0.00% packet loss]
      Min:  47.721
    10pct:   0.000
   Median:   0.000
      Avg:  47.721
    90pct:   0.000
      Max:  47.721
 CPU Load: [in % busy (avg +/- std dev), 0 samples]
 Overhead: [in % used of total CPU available]
  netperf:   0.0

Try specifying this server instead (assuming you are in the US...):
# speedtest-netperf.sh -H flent-newark.bufferbloat.net

When you post console output, it's best to put it in preformatted text blocks. The way you do that is look for the </> button on the toolbar.

root@OpenWrt:~# speedtest-netperf.sh -H flent-newark.bufferbloat.net
2020-07-23 01:39:19 Starting speedtest for 60 seconds per transfer session.
Measure speed to flent-newark.bufferbloat.net (IPv4) while pinging gstatic.com.
Download and upload sessions are sequential, each with 5 simultaneous streams.
............................................................
 Download: 690.94 Mbps
  Latency: [in msec, 61 pings, 0.00% packet loss]
      Min:  36.121
    10pct:  40.485
   Median:  62.897
      Avg:  67.117
    90pct: 101.102
      Max: 139.142
 CPU Load: [in % busy (avg +/- std dev) @ avg frequency, 57 samples]
     cpu0:  77.0 +/-  3.8  @ 1540 MHz
     cpu1:  52.3 +/-  3.4  @ 1404 MHz
 Overhead: [in % used of total CPU available]
  netperf:  54.4
.............................................................
   Upload:  26.02 Mbps
  Latency: [in msec, 61 pings, 0.00% packet loss]
      Min:  35.530
    10pct:  37.849
   Median:  39.595
      Avg:  39.744
    90pct:  41.683
      Max:  42.773
 CPU Load: [in % busy (avg +/- std dev) @ avg frequency, 57 samples]
     cpu0:  13.5 +/-  3.9  @ 1152 MHz
     cpu1:   7.9 +/-  2.7  @  741 MHz
 Overhead: [in % used of total CPU available]
  netperf:   4.0

Try updating your SQM config to this and run that test again:

config queue eth0
    option debug_logging '0'
    option verbosity '5'
    option qdisc 'fq_codel'
    option script 'simple.qos'
    option linklayer 'none'
    option interface 'eth0'
    option download '500000'
    option qdisc_advanced '0'
    option enabled '1'
    option upload '27500'
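If editing /etc/config/sqm by hand is awkward, note that the only value that actually changes above is `download`, and you can set it from the shell (a sketch using the standard uci tool; `@queue[0]` assumes SQM's first/only queue section):

```shell
# Set the ingress shaping rate; SQM takes values in kbit/s,
# so 500000 = ~500 Mbps.
uci set sqm.@queue[0].download='500000'
uci commit sqm
/etc/init.d/sqm restart
```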

Not sure where to change debug_logging, verbosity, or qdisc_advanced,
but I changed download to 500000.

root@OpenWrt:~# speedtest-netperf.sh -H flent-newark.bufferbloat.net
2020-07-23 01:47:37 Starting speedtest for 60 seconds per transfer session.
Measure speed to flent-newark.bufferbloat.net (IPv4) while pinging gstatic.com.
Download and upload sessions are sequential, each with 5 simultaneous streams.
............................................................
 Download: 441.29 Mbps
  Latency: [in msec, 61 pings, 0.00% packet loss]
      Min:  35.425
    10pct:  38.354
   Median:  44.360
      Avg:  44.067
    90pct:  48.410
      Max:  50.533
 CPU Load: [in % busy (avg +/- std dev) @ avg frequency, 57 samples]
     cpu0:  63.8 +/-  0.0  @ 1550 MHz
     cpu1:  77.9 +/- 10.8  @ 1571 MHz
 Overhead: [in % used of total CPU available]
  netperf:  43.2
.............................................................
   Upload:  25.85 Mbps
  Latency: [in msec, 61 pings, 0.00% packet loss]
      Min:  36.987
    10pct:  37.580
   Median:  39.935
      Avg:  49.035
    90pct:  44.474
      Max: 175.598
 CPU Load: [in % busy (avg +/- std dev) @ avg frequency, 58 samples]
     cpu0:  12.2 +/-  3.7  @ 1108 MHz
     cpu1:   7.6 +/-  2.4  @  725 MHz
 Overhead: [in % used of total CPU available]
  netperf:   3.8

Okay, so a couple of things to note here. Your first test's download (~691 Mbps) shows you diving deep into your cable provider's over-provisioning margin (usually 10-20% over the stated rate), which leads to bufferbloat at your cable modem. You can see it in the average latency sitting 31 ms above your minimum, and even more clearly in the 90th-percentile at 101 ms.

For that reason, I had you set your ingress bandwidth to ~500 Mbps, which is below your subscribed rate, but I wanted to make sure we shifted the "bottleneck" away from your cable modem (which you cannot control) to your router (which you can control). Tweaking that 500 Mbps number upward is a bit of an art, but for now I'd recommend leaving it there. As you can see in the second test, your download latency was much improved, with an average only +9 ms over min and a 90pct only +13 ms over min.

Now let's look at the upload. I know you said you have 40 Mbps up, but your actual results (about 26 Mbps) are telling a different story right now. This is often the case during busy times of day, when oversubscription on the cable segment between you and your provider causes this type of behavior.

For the sake of another test, try setting your upload to 20000 and send the test results back...
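In uci terms that's one value (a sketch; SQM rates are in kbit/s, so 20000 = 20 Mbps, and `@queue[0]` again assumes the first/only queue section):

```shell
uci set sqm.@queue[0].upload='20000'
uci commit sqm
/etc/init.d/sqm restart
```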